In computer graphics, a sprite is a two-dimensional bitmap that is integrated into a larger scene, most often in a 2D video game. Originally, the term sprite referred to fixed-sized objects composited together, by hardware, with a background. Use of the term has since become more general.
Systems with hardware sprites include arcade video games of the 1970s and 1980s; game consoles such as the Atari VCS (1977), ColecoVision (1982), Famicom (1983), and Genesis/Mega Drive (1988); and home computers such as the TI-99/4 (1979), Atari 8-bit computers (1979), Commodore 64 (1982), MSX (1983), Amiga (1985), and X68000 (1987). Hardware varies in the number of sprites supported, the size and colors of each sprite, and special effects such as scaling or reporting pixel-precise overlap.
Hardware composition of sprites occurs as each scan line is prepared for the video output device, such as a cathode-ray tube, without involvement of the main CPU and without the need for a full-screen frame buffer. Sprites can be positioned or altered by setting attributes used during the hardware composition process. The number of sprites which can be displayed per scan line is often lower than the total number of sprites a system supports. For example, the Texas Instruments TMS9918 chip supports 32 sprites, but only four can appear on the same scan line.
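The per-scan-line limit described above can be illustrated with a small Python sketch. This is purely illustrative: the data layout and names are assumptions for the example, not the TMS9918's actual register format.

```python
SPRITES_PER_LINE = 4  # TMS9918-style hardware limit

def compose_scan_line(y, background_line, sprites):
    """Composite sprites onto one scan line of background pixels.

    Each sprite is a dict with 'x', 'y', and 'rows' (a list of rows,
    where each row is a list of pixel values; 0 means transparent).
    """
    line = list(background_line)
    drawn = 0
    for s in sprites:  # hardware scans sprite attributes in a fixed order
        row = y - s['y']
        if not (0 <= row < len(s['rows'])):
            continue  # sprite does not intersect this scan line
        if drawn == SPRITES_PER_LINE:
            break  # later sprites drop out on an overloaded line
        drawn += 1
        for dx, pixel in enumerate(s['rows'][row]):
            x = s['x'] + dx
            if pixel != 0 and 0 <= x < len(line):
                line[x] = pixel  # opaque sprite pixel wins over background
    return line
```

On the real chip, lower-numbered sprites take priority when a line is overloaded, so which sprites disappear is predictable.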
The CPUs in modern computers, video game consoles, and mobile devices are fast enough that bitmaps can be drawn into a frame buffer without special hardware assistance. Beyond that, GPUs can render vast numbers of scaled, rotated, anti-aliased, partially translucent, very high resolution images in parallel with the CPU.
== Etymology ==
According to Karl Guttag, one of two engineers for the 1979 Texas Instruments TMS9918 video display processor, this use of the word sprite came from David Ackley, a manager at TI. It was also used by Danny Hillis at Texas Instruments in the late 1970s. The term was derived from the fact that sprites "float" on top of the background image without overwriting it, much like a ghost or mythological sprite.
Some hardware manufacturers used different terms, especially before sprite became common:
Player/Missile Graphics was a term used by Atari, Inc. for hardware sprites in the Atari 8-bit computers (1979) and Atari 5200 console (1982). The term reflects the use for both characters ("players") and smaller associated objects ("missiles") that share the same color. The earlier Atari Video Computer System and some Atari arcade games used player, missile, and ball.
Stamp was used in some arcade hardware in the early 1980s, including Ms. Pac-Man.
Movable Object Block, or MOB, was used in MOS Technology's graphics chip literature. Commodore, the main user of MOS chips and the owner of MOS for most of the chip maker's lifetime, instead used the term sprite for the Commodore 64.
OBJs (short for objects) is used in the developer manuals for the NES, Super NES, and Game Boy. The region of video RAM used to store sprite attributes and coordinates is called OAM (Object Attribute Memory). This also applies to the Game Boy Advance and Nintendo DS.
== History ==
=== Arcade video games ===
The use of sprites originated with arcade video games. Nolan Bushnell came up with the original concept when he developed the first arcade video game, Computer Space (1971). Technical limitations made it difficult to adapt the early mainframe game Spacewar! (1962), which performed an entire screen refresh for every little movement, so he came up with a solution to the problem: controlling each individual game element with a dedicated transistor. The rockets were essentially hardwired bitmaps that moved around the screen independently of the background, an important innovation for producing screen images more efficiently and providing the basis for sprite graphics.
The earliest video games to represent player characters as human player sprites were arcade sports video games, beginning with Taito's TV Basketball, released in April 1974 and licensed to Midway Manufacturing for release in North America. Its designer, Tomohiro Nishikado, wanted to move beyond simple Pong-style rectangles to character graphics, rearranging the rectangle shapes into objects that looked like basketball players and hoops. Ramtek released another sports video game in October 1974, Baseball, which similarly displayed human-like characters.
The Namco Galaxian arcade system board, for the 1979 arcade game Galaxian, displays animated, multi-colored sprites over a scrolling background. It became the basis for Nintendo's Radar Scope and Donkey Kong arcade hardware and home consoles such as the Nintendo Entertainment System. According to Steve Golson from General Computer Corporation, the term "stamp" was used instead of "sprite" at the time.
=== Home systems ===
Signetics devised the first chips capable of generating sprite graphics (referred to as objects by Signetics) for home systems. The Signetics 2636 video processors were first used in the 1978 1292 Advanced Programmable Video System and later in the 1979 Elektor TV Games Computer.
The Atari VCS, released in 1977, has a hardware sprite implementation where five graphical objects can be moved independently of the game playfield. The term sprite was not in use at the time. The VCS's sprites are called movable objects in the programming manual, further identified as two players, two missiles, and one ball. These each consist of a single row of pixels that are displayed on a scan line. To produce a two-dimensional shape, the sprite's single-row bitmap is altered by software from one scan line to the next.
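The single-row scheme can be sketched in Python. This is an illustration of the idea only, not actual VCS code: the bitmap, sizes, and function names are invented for the example.

```python
# Each entry is the 8-bit pattern software loads before the
# corresponding scan line is drawn (like rewriting GRP0 per line).
PLAYER_BITMAP = [
    0b00111100,
    0b01111110,
    0b11111111,
    0b01111110,
    0b00111100,
]

def draw_player(player_x, screen_width=16):
    """Build a 2D shape from a sprite that is only one row tall."""
    frame = []
    for pattern in PLAYER_BITMAP:      # one pattern update per scan line
        line = [0] * screen_width
        for bit in range(8):           # hardware shifts out 8 bits
            if pattern & (0x80 >> bit):
                line[player_x + bit] = 1
        frame.append(line)
    return frame
```

The key point is that the hardware only ever holds one row; the two-dimensional shape exists because software races the display, changing the pattern between lines.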
The 1979 Atari 400 and 800 home computers have similar, but more elaborate, circuitry capable of moving eight single-color objects per scan line: four 8-bit wide players and four 2-bit wide missiles. Each is the full height of the display—a long, thin strip. DMA from a table in memory automatically sets the graphics pattern registers for each scan line. Hardware registers control the horizontal position of each player and missile. Vertical motion is achieved by moving the bitmap data within a player or missile's strip. The feature was called player/missile graphics by Atari.
Texas Instruments developed the TMS9918 chip with sprite support for its 1979 TI-99/4 home computer. An updated version is used in the 1981 TI-99/4A.
=== In 2.5D and 3D games ===
Sprites remained popular with the rise of 2.5D games (those which recreate a 3D game space from a 2D map) in the late 1980s and early 1990s. A technique called billboarding allows 2.5D games to keep onscreen sprites rotated toward the player view at all times. Some 2.5D games, such as 1993's Doom, allow the same entity to be represented by different sprites depending on its rotation relative to the viewer, furthering the illusion of 3D.
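The rotation-dependent sprite choice can be sketched as a small angle-to-frame mapping. The frame numbering and function name here are assumptions for illustration, not Doom's actual naming scheme.

```python
def rotation_frame(object_angle, viewer_angle, frames=8):
    """Pick which of `frames` prerendered views of an object to draw,
    given the object's facing and the viewer's direction (degrees)."""
    relative = (viewer_angle - object_angle) % 360.0
    sector = 360.0 / frames                 # 45 degrees for 8 frames
    # round to the nearest sector so each frame covers a wedge of angles
    return int((relative + sector / 2) // sector) % frames
```

A billboarded sprite always uses the same image; a rotation-aware one swaps images as the relative angle crosses each 45-degree wedge, which is what sells the illusion of a 3D object.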
Fully 3D games usually present world objects as 3D models, but sprites are supported in some 3D game engines, such as GoldSrc and Unreal, and may be billboarded or locked to fixed orientations. Sprites remain useful for small details, particle effects, and other applications where the lack of a third dimension is not a major detriment.
== Systems with hardware sprites ==
These are base hardware specs and do not include additional programming techniques, such as using raster interrupts to repurpose sprites mid-frame.
== See also ==
2.5D
== References ==
Graphics hardware is computer hardware that generates computer graphics and allows them to be shown on a display, usually using a graphics card (video card) in combination with a device driver to create the images on the screen.
== Types ==
=== Graphics cards ===
The most important piece of graphics hardware is the graphics card, which is the piece of equipment that renders out all images and sends them to a display. There are two types of graphics cards: integrated and dedicated.
An integrated graphics card, such as those Intel builds into its own chips, is bound to the motherboard and shares RAM (random-access memory) with the CPU, reducing the total amount of RAM available. This is undesirable for running programs and applications that use a large amount of video memory.
A dedicated graphics card has its own RAM and processor for generating its images and does not take memory away from the rest of the system. Dedicated graphics cards also offer higher performance than integrated graphics. It is possible for a computer to have both dedicated and integrated graphics, although on many systems the integrated graphics stop being used once a dedicated card is installed and resume only when it is removed.
==== Parts of a graphics card ====
The GPU, or graphics processing unit, is the unit that allows the graphics card to function. It performs a large amount of the work given to the card. The majority of video playback on a computer is controlled by the GPU. Once again, a GPU can be either integrated or dedicated.
Video Memory is built-in RAM on the graphics card, which provides it with its own memory, allowing it to run smoothly without taking resources intended for general use by the rest of the computer. The term "Video" here is an informal designation and is not intended in a narrow sense. In particular, it does not imply exclusively video data. The data in this form of memory comprises all manner of graphical data including those for still images, icons, fonts, and generally anything that is displayed on the screen. In Integrated graphics cards, which lack this built-in memory, the main memory available for general computation is used instead, which means less memory for other functions of the system.
=== Display drivers ===
A display driver is a piece of software that allows graphics hardware to communicate with the operating system. Drivers in general allow a computer to make use of its components; without them, the machine would not function correctly. A graphics device communicates in its own specialized language, while the operating system issues general commands, so a driver is required to translate between the two, converting general commands into device-specific ones and vice versa. Every card needs its own driver, although a single driver package often covers several cards; for example, a GeForce GTX 1060 driver will not work with a Radeon card.
== Installation ==
Dedicated graphics cards are not bound to the motherboard, and therefore most are removable, replaceable, or upgradable. They are installed in an expansion slot and connected to the motherboard.
On the other hand, an integrated graphics card cannot be changed without buying a new motherboard with a better chip, as they are bound to the motherboard.
Also, if an integrated graphics card gets damaged or ceases to function, a new motherboard must be purchased to replace it, as it is bound to the motherboard and cannot be removed or replaced. On the other hand, if there is a problem with a dedicated graphics card, it can be replaced by installing another.
Drivers for the hardware are installed through software downloaded or provided by the manufacturer. Each brand of graphics hardware has its own drivers that are required for the hardware to run appropriately.
== Brands ==
The major competing brands in graphics hardware are Nvidia and AMD. Nvidia is known largely for its GeForce brand, whereas AMD is known for its Radeon brand. These two brands account for nearly 100 percent of the graphics hardware market, with Nvidia generating $4 billion in revenue and AMD $6.5 billion (across all sales, not specifically graphics cards).
Radeon was originally a brand of ATI, until AMD bought ATI for $5.4 billion in 2006. ATI cards are no longer produced, and Radeon is now part of AMD.
More recently, Intel has released its Iris graphics, adding a third competitor to the market.
== Costs ==
The price of graphics hardware varies with its power and speed. Most high-end gaming hardware consists of dedicated graphics cards costing from $200 up to the price of a new computer. Integrated chips are much cheaper than dedicated cards, but their performance is correspondingly lower.
Computer graphics hardware also generates a large amount of heat, especially high-end gaming hardware, and requires additional cooling systems to prevent overheating. This may further raise the cost, although some dedicated graphics cards come with built-in fans.
== See also ==
Graphics hardware and FOSS
== References ==
== External links ==
Media related to Graphics hardware at Wikimedia Commons
Clock rate or clock speed in computing typically refers to the frequency at which the clock generator of a processor can generate pulses used to synchronize the operations of its components. It is used as an indicator of the processor's speed. Clock rate is measured in the SI unit of frequency hertz (Hz).
The clock rate of the first generation of computers was measured in hertz or kilohertz (kHz), and the first personal computers from the 1970s through the 1980s had clock rates measured in megahertz (MHz). In the 21st century the speed of modern CPUs is commonly advertised in gigahertz (GHz). This metric is most useful when comparing processors within the same family, holding constant other features that may affect performance.
== Determining factors ==
=== Binning ===
Manufacturers of modern processors typically sort chips by the clock rates at which they pass testing and charge higher prices for processors that operate at higher clock rates, a practice called binning. For a given CPU, the clock rates are determined at the end of the manufacturing process through testing of each processor. Chip manufacturers publish a "maximum clock rate" specification, and they test chips before selling them to make sure they meet that specification, even when executing the most complicated instructions with the data patterns that take the longest to settle (testing at the temperature and voltage that gives the lowest performance). Processors successfully tested for compliance with a given set of standards may be labeled with a higher clock rate, e.g., 3.50 GHz, while those that fail the standards of the higher clock rate yet pass the standards of a lower clock rate may be labeled with the lower clock rate, e.g., 3.3 GHz, and sold at a lower price.
=== Engineering ===
The clock rate of a CPU is normally determined by the frequency of an oscillator crystal. Typically a crystal oscillator produces a fixed sine wave—the frequency reference signal. Electronic circuitry translates that into a square wave at the same frequency for digital electronics applications (or, when using a CPU multiplier, some fixed multiple of the crystal reference frequency). The clock distribution network inside the CPU carries that clock signal to all the parts that need it. An A/D Converter has a "clock" pin driven by a similar system to set the sampling rate. With any particular CPU, replacing the crystal with another crystal that oscillates at half the frequency ("underclocking") will generally make the CPU run at half the performance and reduce waste heat produced by the CPU. Conversely, some people try to increase performance of a CPU by replacing the oscillator crystal with a higher frequency crystal ("overclocking"). However, the amount of overclocking is limited by the time for the CPU to settle after each pulse, and by the extra heat created.
After each clock pulse, the signal lines inside the CPU need time to settle to their new state. That is, every signal line must finish transitioning from 0 to 1, or from 1 to 0. If the next clock pulse comes before that, the results will be incorrect. In the process of transitioning, some energy is wasted as heat (mostly inside the driving transistors). When executing complicated instructions that cause many transitions, the higher the clock rate the more heat produced. Transistors may be damaged by excessive heat.
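The settling constraint above fixes an upper bound on the clock rate: the period must be at least the longest (critical-path) settling delay. A minimal sketch of that arithmetic, with an assumed safety margin parameter:

```python
def max_clock_hz(critical_path_seconds, safety_margin=0.1):
    """Highest clock rate at which every signal line settles in time.

    The clock period must exceed the critical-path delay; the margin
    (an assumption here) models guard band for temperature and voltage.
    """
    return 1.0 / (critical_path_seconds * (1.0 + safety_margin))
```

With a 1 ns critical path and no margin this gives 1 GHz; overclocking works only as long as the shortened period still exceeds the actual settling time at the operating temperature.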
There is also a lower limit on the clock rate unless a fully static core is used, because dynamic logic stores state as charge that leaks away if the interval between clock pulses is too long.
== Historical milestones and current records ==
The first fully mechanical digital computer, the Z1, operated at 1 Hz (cycle per second) clock frequency and the first electromechanical general purpose computer, the Z3, operated at a frequency of about 5–10 Hz. The first electronic general purpose computer, the ENIAC, used a 100 kHz clock in its cycling unit. As each instruction took 20 cycles, it had an instruction rate of 5 kHz.
The first commercial PC, the Altair 8800 (by MITS), used an Intel 8080 CPU with a clock rate of 2 MHz (2 million cycles per second). The original IBM PC (c. 1981) had a clock rate of 4.77 MHz (4,772,727 cycles per second). In 1992, both Hewlett-Packard and Digital Equipment Corporation (DEC) exceeded 100 MHz with RISC techniques in the PA-7100 and AXP 21064 DEC Alpha respectively. In 1995, Intel's P5 Pentium chip ran at 100 MHz (100 million cycles per second). On March 6, 2000, AMD demonstrated passing the 1 GHz milestone a few days ahead of Intel shipping 1 GHz in systems. In 2002, an Intel Pentium 4 model was introduced as the first CPU with a clock rate of 3 GHz (three billion cycles per second corresponding to ~ 0.33 nanoseconds per cycle). Since then, the clock rate of production processors has increased more slowly, with performance improvements coming from other design changes.
Set in 2011, the Guinness World Record for the highest CPU clock rate is 8.42938 GHz with an overclocked AMD FX-8150 Bulldozer-based chip in an LHe/LN2 cryobath (compared to 5 GHz on air cooling). This is surpassed by the CPU-Z overclocking record for the highest CPU clock rate at 8.79433 GHz with an AMD FX-8350 Piledriver-based chip bathed in LN2, achieved in November 2012. It is also surpassed by the slightly slower AMD FX-8370 overclocked to 8.72 GHz, which topped the HWBOT frequency rankings. These records were broken in 2025 when an Intel Core i9-13900KF was overclocked to 9.12 GHz.
The highest boost clock rate on a production processor is the 6.2 GHz of the Intel Core i9-14900KS, released in Q1 2024.
== Research ==
Engineers continue to find new ways to design CPUs that settle a little more quickly or use slightly less energy per transition, pushing back those limits, producing new CPUs that can run at slightly higher clock rates. The ultimate limits to energy per transition are explored in reversible computing.
The first fully reversible CPU, the Pendulum, was implemented using standard CMOS transistors in the late 1990s at the Massachusetts Institute of Technology.
Engineers also continue to find new ways to design CPUs so that they complete more instructions per clock cycle, thus achieving a lower CPI (cycles or clock cycles per instruction) count, although they may run at the same or a lower clock rate as older CPUs. This is achieved through architectural techniques such as instruction pipelining and out-of-order execution which attempts to exploit instruction level parallelism in the code.
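The relationship described above is the classic CPU performance equation: execution time depends on instruction count, CPI, and clock rate together, not on clock rate alone. A small sketch (the numbers in the test are illustrative):

```python
def execution_time(instructions, cpi, clock_hz):
    """Seconds to run a program: instructions x cycles-per-instruction
    divided by cycles-per-second."""
    return instructions * cpi / clock_hz
```

For example, a 2 GHz CPU averaging 1.0 cycles per instruction finishes a billion-instruction program sooner than a 3 GHz CPU averaging 2.0 cycles per instruction, which is why clock rate alone is a poor cross-family comparison.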
== Comparing ==
The clock rate of a CPU is most useful for providing comparisons between CPUs in the same family. The clock rate is only one of several factors that can influence performance when comparing processors in different families. For example, an IBM PC with an Intel 80486 CPU running at 50 MHz will be about twice as fast (internally only) as one with the same CPU and memory running at 25 MHz, while the same will not be true for a MIPS R4000 running at the same clock rate, as the two are different processors that implement different architectures and microarchitectures. Further, a "cumulative clock rate" measure is sometimes derived by multiplying the number of cores by the clock rate (e.g. a dual-core 2.8 GHz processor marketed as a cumulative 5.6 GHz). There are many other factors to consider when comparing the performance of CPUs, like the width of the CPU's data bus, the latency of the memory, and the cache architecture.
The clock rate alone is generally considered an inaccurate measure of performance when comparing different CPU families; software benchmarks are more useful. Clock rates can be misleading since the amount of work different CPUs can do in one cycle varies. For example, superscalar processors can execute more than one instruction per cycle on average, yet it is not uncommon for them to do "less" in a given clock cycle. In addition, subscalar CPUs and the use of parallelism can also affect the performance of the computer regardless of clock rate.
== See also ==
== References ==
In computer graphics, fixed-function is a term primarily used to describe 3D graphics APIs and GPUs designed prior to the advent of programmable shaders. The term is also used to describe APIs and graphics pipelines that do not allow users to change their underlying processing techniques, hence the word 'fixed'. Fixed-function can also refer to graphics processing techniques that employ non-programmable dedicated hardware, such as the use of ROPs to rasterize an image.
== History ==
Although the exact origin of the term 'fixed-function' is unclear, the first known graphics hardware that is considered to be fixed-function is the IBM 8514/A graphics add-in-board from 1987. When compared to other graphics hardware of its time, particularly hardware that made use of the RISC-based TMS34010, the 8514/A has similar processing speeds while also launching at a significantly less expensive price point. However, those benefits came at a cost of programming flexibility, as the 8514/A was designed more like an ASIC than its competition that were similar to general-purpose CPUs.
Following the 8514/A, the most powerful dedicated graphics hardware of the 1990s had pipelines that were not programmable, only configurable to a limited degree. This meant that host CPUs had no direct influence on how their GPUs processed vertex and rasterization operations, beyond issuing indirect commands and transferring data bidirectionally between CPU-side RAM and GPU-side VRAM. As more hardware with this fixed-function design was released, 3D graphics API developers of the 90s mirrored the nature of available hardware in their own software design by creating logical graphics pipelines that were only configurable, not programmable. Graphics APIs of this time, notably OpenGL and early versions of DirectX (Direct3D), are themselves retroactively referred to as fixed-function, as they ultimately share many design characteristics with the fixed-function hardware they targeted.
Historically fixed-function APIs consisted of a set of function entry points that would approximately or directly map to dedicated logic for their named purpose in GPUs designed to support them. As shader based GPUs and APIs evolved, fixed-function APIs were implemented by graphics driver engineers using the more general purpose shading architecture. This approach served as a segue that would continue providing the fixed-function API abstraction most developers were experienced with while allowing further development and enhancements of the newer shader-based architectures.
OpenGL, OpenGL ES and DirectX are all 3D graphics APIs that went through the transition from the fixed-function programming model to the shader-based programming model. Below is a table of when the transition from fixed-function to shaders was made:
Even after the popularization of programmable shaders and graphics pipelines, certain GPU features would remain non-programmable to optimize for speed over flexibility. For example, the NVIDIA GeForce 6 series GPUs delegated early culling, rasterization, MSAA, depth queries, texture mapping and more to fixed-function implementations. The CPU does not direct the GPU how to specifically process these operations; these features can only be configured to a limited degree.
== Fixed function vs shaders ==
Fixed function APIs tend to be a simpler programming abstraction with a series of well-defined and specifically named graphics pipeline stages. Shader-based APIs treat graphics data (vertices and pixels / texels) generically and allow a great deal of flexibility in how this data is modulated. More sophisticated rendering techniques are possible using a shader-based API.
== References ==
The GeForce 256 is the original release in Nvidia's "GeForce" product line. Announced on August 31, 1999 and released on October 11, 1999, the GeForce 256 improves on its predecessor (RIVA TNT2) by increasing the number of fixed pixel pipelines, offloading host geometry calculations to a hardware transform and lighting (T&L) engine, and adding hardware motion compensation for MPEG-2 video. It offered a notable leap in 3D PC gaming performance and was the first fully Direct3D 7-compliant 3D accelerator.
== Architecture ==
GeForce 256 was marketed as "the world's first 'GPU', or Graphics Processing Unit", a term Nvidia defined at the time as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second".
The "256" in its name stems from the "256-bit QuadPipe Rendering Engine", a term describing the four 64-bit pixel pipelines of the NV10 chip. In single-textured games NV10 could put out 4 pixels per cycle, while a two-textured scenario would limit this to 2 multitextured pixels per cycle, as the chip still had only one TMU per pipeline, just as TNT2. In terms of rendering features, GeForce 256 also added support for cube environment mapping and dot-product (Dot3) bump mapping.
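The pipeline arithmetic above can be made concrete. The sketch below assumes the GeForce 256's commonly cited 120 MHz core clock; the function name and parameters are illustrative.

```python
def fill_rate_mpixels(core_mhz, pipelines, textures_per_pixel, tmus_per_pipe=1):
    """Theoretical fill rate in megapixels per second.

    With one TMU per pipeline, each extra texture layer costs an
    extra cycle per pixel (ceiling division models the loop-back).
    """
    cycles_per_pixel = max(1, -(-textures_per_pixel // tmus_per_pipe))
    return core_mhz * pipelines / cycles_per_pixel
```

At 120 MHz with four pipelines this gives 480 megapixels per second single-textured, halving to 240 when every pixel needs two textures, matching the 4-versus-2 pixels-per-cycle behavior described above.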
The integration of the transform and lighting hardware into the GPU itself set the GeForce 256 apart from older 3D accelerators that relied on the CPU to perform these calculations (also known as software transform and lighting). This reduction of 3D graphics solution complexity brought the cost of such hardware to a new low and made it accessible to cheap consumer graphics cards instead of being limited to the previous expensive professionally oriented niche designed for computer-aided design (CAD). NV10's T&L engine also allowed Nvidia to enter the CAD market with dedicated cards for the first time, with a product called Quadro. The Quadro line uses the same silicon chips as the GeForce cards, but has different driver support and certifications tailored to the unique requirements of CAD applications.
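A minimal sketch of what a fixed-function T&L stage computes per vertex: a matrix transform plus a diffuse dot-product lighting term. Real hardware does far more (clipping, multiple lights, fog), so this is only the core idea, with invented function names.

```python
def transform(vertex, matrix):
    """Multiply a 4-component vertex by a row-major 4x4 matrix."""
    return [sum(matrix[row][k] * vertex[k] for k in range(4))
            for row in range(4)]

def diffuse(normal, light_dir):
    """Lambertian term: clamped dot product of unit vectors."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```

Offloading exactly this per-vertex work (a handful of multiply-adds for every vertex, every frame) is what freed the CPU on T&L-equipped hardware.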
The chip was manufactured by TSMC using its 220 nm CMOS process.
There was only one GeForce 256 release, as succeeding GeForce products would have varying chip speeds. However there were two memory configurations, with the SDR version released in October 1999 and the DDR version released in mid-December 1999 – each with a different type of SDRAM memory. The SDR version uses SDR SDRAM memory from Samsung Electronics, while the later DDR version uses DDR SDRAM memory from Hyundai Electronics (now SK Hynix).
== Product comparisons ==
Compared to previous high-end 3D game accelerators, such as the 3dfx Voodoo3 3500 and Nvidia RIVA TNT2 Ultra, GeForce provided a 50% or greater improvement in frame rate in some games (those specifically written to take advantage of hardware T&L) when coupled with a very-low-budget CPU. The later release and widespread adoption of GeForce 2 MX/4 MX cards with the same feature set meant unusually long support for the GeForce 256, until approximately 2006, in games such as Star Wars: Empire at War or Half-Life 2, the latter of which featured a Direct3D 7-compatible path, using a subset of Direct3D 9 to target the fixed-function pipeline of these GPUs.
Without broad application support at the time, critics pointed out that the T&L technology had little real-world value. Initially, it was only somewhat beneficial in certain situations in a few OpenGL-based 3D first-person shooters, most notably Quake III Arena. Benchmarks using low-budget CPUs like the Celeron 300A would give favourable results for the GeForce 256, but benchmarks done with some CPUs such as the Pentium II 300 would give better results with some older graphics cards like the 3dfx Voodoo 2. 3dfx and other competing graphics-card companies pointed out that a fast CPU could more than make up for the lack of a T&L unit. Software support for hardware T&L was not commonplace until several years after the release of the first GeForce. Early drivers were buggy and slow, while 3dfx cards enjoyed efficient, high-speed, mature Glide API and/or MiniGL support for the majority of games. Only after the GeForce 256 was replaced by the GeForce 2, and ATI's T&L-equipped Radeon was also on the market, did hardware T&L become a widely utilized feature in games.
The GeForce 256 was also quite expensive for the time and didn't offer tangible advantages over competitors' products outside of 3D acceleration. For example, its GUI and video playback acceleration were not significantly better than that offered by competition or even older Nvidia products. Additionally, some GeForce cards were plagued with poor analog signal circuitry, which caused display output to be blurry.
As CPUs became faster, the GeForce 256 demonstrated that the disadvantage of hardware T&L is that, if a CPU is fast enough, it can perform T&L functions faster than the GPU, thus making the GPU a hindrance to rendering performance. This changed the way the graphics market functioned, encouraging shorter graphics-card lifetimes and placing less emphasis on the CPU for gaming.
=== Motion compensation ===
The GeForce 256 introduced motion compensation as a functional unit of the NV10 chip; this first-generation unit would be succeeded by Nvidia's HDVP (High-Definition Video Processor) in the GeForce 2 GTS.
== Specifications ==
All models are made via TSMC 220 nm fabrication process
== Discontinued support ==
NVIDIA has ceased driver support for the GeForce 256 series.
=== Final drivers ===
Windows 9x & Windows Me: 71.84, released on March 11, 2005.
Product Support List Windows 95/98/Me – 71.84.
Windows 2000 & 32-bit Windows XP: 71.89, released on April 14, 2005.
Product Support List Windows XP/2000 – 71.84.
The drivers for Windows 2000/XP may be installed on later versions of Windows such as Windows Vista and 7; however, they do not support desktop compositing or the Aero effects of these operating systems.
== Competitors ==
ATI Rage 128 and Rage Fury MAXX
3dfx Voodoo3
Matrox G400
S3 Savage 2000
== See also ==
Graphics card
Graphics processing unit
List of Nvidia graphics processing units
== References ==
== External links ==
NVIDIA: GeForce 256 – The World's First GPU from web archive
ForceWare 71.84 drivers, Final Windows 9x/ME driver release
ForceWare 71.89 drivers, Final Windows XP driver release
techPowerUp! GPU Database
In the field of 3D computer graphics, the unified shader model (known in Direct3D 10 as "Shader Model 4.0") refers to a form of shader hardware in a graphical processing unit (GPU) where all of the shader stages in the rendering pipeline (geometry, vertex, pixel, etc.) have the same capabilities. They can all read textures and buffers, and they use instruction sets that are almost identical.
== History ==
Earlier GPUs generally included two types of shader hardware, with the vertex shaders having considerably more instructions than the simpler pixel shaders. This lowered the cost of implementation of the GPU as a whole, and allowed more shaders in total on a single unit. This came at the cost of making the system less flexible, sometimes leaving one set of shaders idle if the workload used one type more than the other. As improvements in fabrication continued, this distinction became less useful. ATI Technologies introduced a unified architecture on the hardware it developed for the Xbox 360. Nvidia quickly followed with its Tesla design. AMD introduced a unified shader architecture in card form two years later with the TeraScale line. The concept has been universal since then.
Early shader abstractions (such as Shader Model 1.x) used very different instruction sets for vertex and pixel shaders, with vertex shaders having a much more flexible instruction set. Later shader models (such as Shader Model 2.x and 3.0) reduced the differences, approaching a unified shader model. Even in the unified model, the instruction set may not be completely the same between different shader types; different shader stages may have a few distinctions. For example, fragment/pixel shaders can compute implicit texture coordinate gradients, while geometry shaders can emit rendering primitives.
== Unified shader architecture ==
Unified shader architecture (or unified shading architecture) is a hardware design in which all shader processing units of a piece of graphics hardware are capable of handling any type of shading task. Most often, unified shading architecture hardware is composed of an array of computing units and some form of dynamic scheduling/load balancing system that ensures all of the computational units are kept working as often as possible.
Unified shader architecture allows more flexible use of the graphics rendering hardware. For example, in a situation with a heavy geometry workload the system could allocate most computing units to run vertex and geometry shaders. In cases with less vertex workload and heavy pixel load, more computing units could be allocated to run pixel shaders.
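The allocation idea described above can be illustrated with a short sketch (a simplified model, not actual GPU scheduler code; real hardware schedules work at warp/wavefront granularity, and the unit counts and workload figures here are invented for the example):

```python
# Simplified illustration of unified-shader load balancing: a single pool of
# identical units is split in proportion to the pending vertex and pixel work.

def allocate_units(total_units, vertex_work, pixel_work):
    """Split a pool of identical shader units in proportion to pending work."""
    total_work = vertex_work + pixel_work
    if total_work == 0:
        return 0, 0
    vertex_units = round(total_units * vertex_work / total_work)
    return vertex_units, total_units - vertex_units

# Geometry-heavy frame: most units run vertex/geometry shaders.
print(allocate_units(128, vertex_work=900, pixel_work=100))  # (115, 13)
# Fill-rate-heavy frame: most units run pixel shaders.
print(allocate_units(128, vertex_work=50, pixel_work=950))   # (6, 122)
```

With a fixed split (as in pre-unified hardware), one of the two pools would sit partly idle in each of these frames; the dynamic split keeps all 128 units busy.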
While unified shader architecture hardware and unified shader model programming interfaces are not a requirement for each other, a unified architecture is most sensible when designing hardware intended to support an API offering a unified shader model.
OpenGL 3.3 (which offers a unified shader model) can still be implemented on hardware that does not have a unified shader architecture. Similarly, hardware that supports non-unified shader model APIs can be based on a unified shader architecture, as is the case with the Xenos graphics chip in the Xbox 360, for example.
The unified shader architecture was introduced with the Nvidia GeForce 8 series, ATI Radeon HD 2000 series, S3 Chrome 400, Intel GMA X3000 series, Xbox 360's GPU, Qualcomm Adreno 200 series, Mali Midgard, PowerVR SGX GPUs and is used in all subsequent series.
For example, the unified shader is referred to as a "CUDA core" or "shader core" on Nvidia GPUs, and as an "ALU core" on Intel GPUs.
== Architectures ==
Nvidia
Tesla
Fermi
Kepler
Maxwell
Pascal
Volta
Turing
Ampere
Ada Lovelace
Blackwell
Intel
Intel Arc
ATI/AMD
TeraScale
Graphics Core Next
RDNA
CDNA
== References ==
The GeForce 8 series is the eighth generation of Nvidia's GeForce line of graphics processing units. The third major GPU architecture developed by Nvidia, Tesla represents the company's first unified shader architecture.
== Overview ==
All GeForce 8 Series products are based on Tesla.
As with many GPU families, a larger model number does not guarantee superior performance over previous-generation cards with lower numbers. For example, the entry-level GeForce 8300 and 8400 cards cannot be compared directly with the earlier GeForce 7200 and 7300 cards, nor can the high-end GeForce 8800 GTX with the earlier GeForce 7800 GTX, because relative performance does not follow the model numbering across generations.
=== Max resolution ===
Dual dual-link DVI support:
Able to drive two flat-panel displays up to 2560×1600 resolution. Available on select GeForce 8800 and 8600 GPUs.
One dual-link DVI support:
Able to drive one flat-panel display up to 2560×1600 resolution. Available on select GeForce 8500 GPUs and GeForce 8400 GS cards based on the G98.
One single-link DVI support:
Able to drive one flat-panel display up to 1920×1200 resolution. Available on select GeForce 8400 GPUs. GeForce 8400 GS cards based on the G86 only support single-link DVI.
=== Display capabilities ===
The GeForce 8 series supports 10-bit per channel display output, up from 8-bit on previous Nvidia cards. This potentially allows higher fidelity color representation and separation on capable displays. The GeForce 8 series, like its recent predecessors, also supports Scalable Link Interface (SLI) for multiple installed cards to act as one via an SLI Bridge, so long as they are of similar architecture.
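As a quick check of what the move from 8-bit to 10-bit per channel means, the number of representable colors rises from about 16.7 million to about 1.07 billion:

```python
# Colors representable with n bits per channel across three (R, G, B) channels.
colors_8bit = (2 ** 8) ** 3    # 16,777,216
colors_10bit = (2 ** 10) ** 3  # 1,073,741,824
print(colors_8bit, colors_10bit)
```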
NVIDIA's PureVideo HD video rendering technology is an improved version of the original PureVideo introduced with GeForce 6. It now includes GPU-based hardware acceleration for decoding HD movie formats, post-processing of HD video for enhanced images, and optional High-bandwidth Digital Content Protection (HDCP) support at the card level.
== GeForce 8300 and 8400 series ==
In the summer of 2007 Nvidia released the entry-level GeForce 8300 GS and 8400 GS graphics cards, based on the G86 core. The GeForce 8300 was only available in the OEM market, and was also available in integrated motherboard GPU form as the GeForce 8300 mGPU. The GeForce 8300 series was only available in PCI Express, with the GeForce 8400 series using either PCI Express or PCI. The first version of the 8400 GS is sometimes called "GeForce 8400 GS Rev. 1".
Being entry-level cards, they are usually less powerful than mid-range and high-end cards. Because of their reduced graphics performance, these cards are not suitable for intense 3D applications such as fast, high-resolution video games; however, they could still play most games at lower resolutions and settings, making them (in particular the 8400 series) popular among casual gamers and HTPC (Media Center) builders whose motherboards lacked a PCI Express or AGP slot.
The GeForce 8300 and 8400 series were originally designed to replace the low-cost GeForce 7200 series and entry-level GeForce 7300 series; however, they were unable to do so because of the inferior gaming performance noted above.
At the end of 2007 Nvidia released a new GeForce 8400 GS based on the G98 (D8M) chip. It is quite different from the G86 used for the "first" 8400 GS, as the G98 features VC-1 and MPEG2 video decoding completely in hardware, lower power consumption, reduced 3D-performance and a smaller fabrication process. The G98 also features dual-link DVI support and PCI Express 2.0. G86 and G98 cards were both sold as "8400 GS", the difference showing only in the technical specifications. This card is sometimes referred to as "GeForce 8400 GS Rev. 2".
During mid-2010 Nvidia released another revision of the GeForce 8400 GS based on the GT218 chip. It has a larger amount of RAM, a significantly reduced 3D-performance, and is capable of DirectX 10.1, OpenGL 3.3 and Shader 4.1. This card is also known as "GeForce 8400 GS Rev. 3".
== GeForce 8500 and 8600 series ==
On April 17, 2007, Nvidia released the GeForce 8500 GT for the entry-level market, and the GeForce 8600 GT and 8600 GTS for the mid-range market. The GeForce 8600 GS was also available. They are based on the G84 core. This series came in PCI Express configurations, with some cards in PCI.
As mid-range cards, the 8600 series provided more power than entry-level cards such as the 8400 and 8500 series, but was not as powerful as high-end cards such as the 8800 series. It provided adequate performance in most games at moderate resolutions and settings, but could struggle with some higher-resolution video games.
Nvidia introduced 2nd-generation PureVideo with this series. As the first major update to PureVideo since the GeForce 6's launch, 2nd-gen PureVideo offered much improved hardware-decoding for H.264.
== GeForce 8800 series ==
The 8800 series, codenamed G80, was launched on November 8, 2006, with the release of the GeForce 8800 GTX and GTS for the high-end market. A 320 MB GTS was released on February 12 and the Ultra was released on May 2, 2007. The cards are larger than their predecessors, with the 8800 GTX measuring 10.6 in (~26.9 cm) in length and the 8800 GTS measuring 9 in (~23 cm). Both cards have two dual-link DVI connectors and an HDTV/S-Video out connector. The 8800 GTX requires 2 PCIe power inputs to keep within the PCIe standard, while the GTS requires just one.
=== 8800 GS ===
The 8800 GS is a trimmed-down 8800 GT with 96 stream processors and either 384 or 768 MB of RAM on a 192-bit bus. In May 2008, it was rebranded as the 9600 GSO in an attempt to spur sales.
The early 2008 iMac models featured an 8800 GS GPU that is actually a modified version of the 8800M GTS (a laptop-specific GPU normally found in high-end laptops) with a slightly higher clock speed, rebranded as an 8800 GS. These updated models were announced by Apple on April 28, 2008. The GPU uses 512 MB of GDDR3 video memory clocked at 800 MHz, 64 unified stream processors, a 500 MHz core speed, a 256-bit memory bus width, and a 1250 MHz shader clock; these specifications closely match those of the 8800M GTS, on which it is based.
=== 8800 GTX / 8800 Ultra ===
The 8800 GTX is equipped with 768 MB GDDR3 RAM. The 8800 series replaced the GeForce 7900 series as Nvidia's top-performing consumer GPU. GeForce 8800 GTX and GTS use identical GPU cores, but the GTS model disables parts of the GPU and reduces RAM size and bus width to lower production cost.
At the time, the G80 was the largest commercial GPU ever constructed. It consists of 681 million transistors covering a 480 mm2 die surface area built on a 90 nm process. (The G80's total transistor count is in fact about 686 million, but because of process limitations and yield feasibility on the 90 nm node, Nvidia split the design into two chips: the main shader core at 681 million transistors and an NV I/O core of roughly 5 million transistors.)
A minor manufacturing defect related to a resistor of improper value caused a recall of the 8800 GTX models just two days before the product launch, though the launch itself was unaffected.
The GeForce 8800 GTX was by far the fastest GPU when first released, and 13 months after its initial debut it still remained one of the fastest. The GTX has 128 stream processors clocked at 1.35 GHz, a core clock of 575 MHz, and 768 MB of 384-bit GDDR3 memory at 1.8 GHz, giving it a memory bandwidth of 86.4 GB/s. The card performs faster than a single Radeon HD 2900 XT, and faster than 2 Radeon X1950 XTXs in Crossfire or 2 GeForce 7900 GTXs in SLI. The 8800 GTX also supports HDCP, but one major flaw is its older NVIDIA PureVideo processor that uses more CPU resources. Originally retailing for around US$600, prices came down to under US$400 before it was discontinued. The 8800 GTX was also very power hungry for its time, demanding up to 185 watts of power and requiring two 6-pin PCI-E power connectors to operate. The 8800 GTX also has 2 SLI connector ports, allowing it to support NVIDIA 3-way SLI for users who run demanding games at extreme resolutions such as 2560x1600.
The 8800 Ultra, retailing at a higher price, is identical to the GTX architecturally, but features higher-clocked shaders, core and memory. Nvidia told the media in May 2007 that the 8800 Ultra was a new stepping that produced less heat and could therefore be clocked higher. Originally retailing from $800 to $1000, most users thought the card a poor value, offering only 10% more performance than the GTX but costing hundreds of dollars more. Prices dropped to as low as $200 before it was discontinued on January 23, 2008. The core clock of the Ultra runs at 612 MHz, the shaders at 1.5 GHz, and the memory at 2.16 GHz, giving the Ultra a theoretical memory bandwidth of 103.7 GB/s. It has 2 SLI connector ports, allowing it to support Nvidia 3-way SLI. An updated dual-slot cooler was also implemented, allowing for quieter and cooler operation at higher clock speeds.
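The bandwidth figures quoted for the GTX and Ultra follow directly from bus width and effective memory clock (peak bandwidth = bytes per transfer × transfers per second):

```python
def memory_bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    """Peak memory bandwidth in GB/s: bus width in bytes times effective data rate."""
    return (bus_width_bits / 8) * effective_clock_ghz

print(memory_bandwidth_gbs(384, 1.8))   # 8800 GTX: 86.4 GB/s
print(memory_bandwidth_gbs(384, 2.16))  # 8800 Ultra: ~103.7 GB/s
```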
=== 8800 GT ===
The 8800 GT, codenamed G92, was released on October 29, 2007. It was the first card in the series to transition to the 65 nm process, and it supports PCI Express 2.0. It has a single-slot cooler, as opposed to the dual-slot cooler on the 8800 GTS and GTX, and uses less power than the GTS and GTX thanks to its 65 nm process. While its core processing power is comparable to that of the GTX, the 256-bit memory interface and 512 MB of GDDR3 memory often hinder its performance at very high resolutions and graphics settings. The 8800 GT, unlike other 8800 cards, is equipped with the PureVideo HD VP2 engine for GPU-assisted decoding of the H.264 and VC-1 codecs.
The release of this card presents an odd dynamic to the graphics processing industry. With an initial projected street price at around $300, this card outperforms ATI's flagship HD2900XT in most situations, and even NVIDIA's own 8800 GTS 640 MB (previously priced at an MSRP of $400). The card, while only marginally slower in synthetic and gaming benchmarks than the 8800 GTX, also takes much of the value away from Nvidia's own high-end card.
Performance benchmarks at stock speeds place it above the 8800 GTS (640 MB and 320 MB versions) and slightly below the 8800 GTX. A 256 MB version of the 8800 GT with lower stock memory speeds (1.4 GHz as opposed to 1.8 GHz) but with the same core is also available. Performance benchmarks have shown that the 256 MB version of the 8800 GT has a considerable performance disadvantage when compared to its 512 MB counterpart, especially in newer games such as Crysis. Some manufacturers also make models with 1 GB of memory; with large resolutions and big textures, one can perceive a significant performance difference in the benchmarks. These models are more likely to take up two slots of the computer due to their use of dual-slot coolers instead of the single-slot cooler on other models.
The performance (at the time) and popularity of this card is demonstrated by the fact that even as late as 2014, the 8800 GT was often listed as the minimum requirement for modern games developed for much more powerful hardware.
=== 8800 GTS ===
The first releases of the 8800 GTS line, in November 2006, came in 640 MB and 320 MB configurations of GDDR3 RAM and utilized Nvidia's G80 GPU. While the 8800 GTX has 128 stream processors and a 384-bit memory bus, these versions of 8800 GTS feature 96 stream processors and a 320-bit bus. With respect to features, however, they are identical because they use the same GPU.
Around the same release date as the 8800 GT, Nvidia released a new 640 MB version of the 8800 GTS. While still based on the 90 nm G80 core, this version has 7 of the 8 clusters of 16 stream processors enabled (as opposed to 6 out of 8 on the older GTSs), giving it a total of 112 stream processors instead of 96. Most other aspects of the card remain unchanged. However, because the only two add-in partners producing this card (BFG and EVGA) decided to overclock it, this version of the 8800 GTS actually ran slightly faster than a stock GTX in most scenarios, especially at higher resolutions, due to the increased clock speeds.
Nvidia released a new 8800 GTS 512 MB based on the 65 nm G92 GPU on December 10, 2007. This 8800 GTS has 128 stream processors, compared to the 96 processors of the original GTS models. It is equipped with 512 MB GDDR3 on a 256-bit bus. Combined with a 650 MHz core clock and architectural enhancements, this gives the card raw GPU performance exceeding that of 8800 GTX, but it is constrained by the narrower 256-bit memory bus. Its performance can match the 8800 GTX in some situations, and it outperforms the older GTS cards in all situations.
=== Compatibility issues with PCI Express 1.0a on GeForce 8800 GT/8800 GTS 512 MB cards ===
Shortly after their release, an incompatibility issue with older PCI Express 1.0a motherboards surfaced. When using the PCI Express 2.0 compliant 8800 GT or 8800 GTS 512 in some motherboards with PCI Express 1.0a slots, the card would not produce any display image, but the computer would often boot (with the fan on the video card spinning at a constant 100%). The incompatibility has been confirmed on motherboards with VIA PT880Pro/Ultra, Intel 925 and Intel 5000P PCI Express 1.0a chipsets.
Some graphics cards had a workaround: re-flashing the graphics card's BIOS with an older Gen 1 BIOS. However, this effectively turned it into a PCI Express 1.0 card, unable to use PCI Express 2.0 functions. This could be considered a non-issue, since the card could not saturate even a regular PCI Express 1.0 slot and so suffered no noticeable reduction in performance. On the other hand, flashing the video card's BIOS usually voided the warranties of most (if not all) video card manufacturers, making it a less-than-optimal way of getting the card to work properly. The proper workaround is to flash the motherboard's BIOS to the latest version, which, depending on the motherboard manufacturer, may contain a fix.
In relation to this, the high numbers of cards reported as dead on arrival (as much as 13–15%) were believed to be inaccurate. When it was revealed that the G92-based 8800 GT and 8800 GTS 512 MB would be designed with PCI Express 2.0 connections, Nvidia claimed that all cards would have full backwards compatibility, but failed to mention that this was only true for PCI Express 1.1 motherboards. The BIOS-flash workaround came not from Nvidia or any of its partners, but from ASRock, a motherboard producer, which mentioned the fix in one of its motherboard FAQs. ASUSTeK, which sells the 8800 GT under its own branding, posted a newer version of its 8800 GT BIOS on its website, but did not mention that it fixed this issue. EVGA also posted a new BIOS to fix this issue.
== Technical summary ==
Direct3D 10 and OpenGL 3.3 support
1 Unified shaders: texture mapping units: render output units
2 Full G80 contains 32 texture address units and 64 texture filtering units unlike G92 which contains 64 texture address units and 64 texture filtering units
3 To calculate the processing power, see Performance.
=== Features ===
Compute Capability 1.1: has support for Atomic functions, which are used to write thread-safe programs.
Compute Capability 1.2: for details see CUDA
== GeForce 8M series ==
On May 10, 2007, Nvidia announced the availability of their GeForce 8 notebook GPUs through select OEMs. The lineup consists of the 8200M, 8400M, 8600M, 8700M and 8800M series chips.
It was announced by Nvidia that some of their graphics chips had a higher than expected rate of failure due to overheating when used in particular notebook configurations. Some major laptop manufacturers made adjustments to fan settings and issued firmware updates to help delay the occurrence of any potential GPU failure. In late July 2008, Dell released a set of BIOS updates that made the laptop fans spin more frequently. As of mid-August 2008, Nvidia had not published any further details publicly, though it was heavily rumored that most, if not all, of the 8400 and 8600 cards had this issue.
=== GeForce 8200M series ===
The GeForce 8200M is an entry-level series of GeForce 8M GPUs. It can be found in some entry-level to mid-range laptops as an alternative to integrated graphics. The GeForce 8200M G is the only GPU in this series.
Its GPU core was based on that of the GeForce 9200M/9300M GS GPUs. This series was not designed for gaming, but rather for viewing high-definition video content. It could still play older games well, but might struggle with then-current games even at low settings.
Some HP Pavilion, Compaq Presario, and Asus laptops have GeForce 8200M G GPUs.
=== GeForce 8400M series ===
The GeForce 8400M is the entry-level series for the GeForce 8M chipset. Normally found on mid-range laptops as an alternative solution to integrated graphics, the 8400M was designed for watching high-definition video content rather than gaming.
Versions include the 8400M G, 8400M GS, and 8400M GT. As noted above, the series was intended for non-gaming tasks such as high-definition video playback rather than gaming; however, the GDDR3-equipped 8400M GT could handle most games of its time at medium settings and was suitable for occasional gaming. The rest of the 8400M series handled older games quite well but could only run then-current games at low settings.
Some ASUS and Acer laptops featured 8400M G GPUs. Some Acer Aspire models, some HP Pavilion dv2000, dv6000, dv9000 models, some Dell Vostro 1500 and 1700 models, the Dell XPS M1330, and some Sony VAIO models featured 8400M GS GPUs. Various Acer Aspire and Sony VAIO laptop models featured 8400M GT GPUs.
=== GeForce 8600M series ===
The GeForce 8600M was offered in mid-range laptops as a mid-range performance solution for enthusiasts who want to watch high-definition content such as Blu-ray Disc and HD DVD movies and play then-current and some future games with decent settings.
Versions include the 8600M GS and 8600M GT (with the GT being the more powerful one). They provided decent gaming performance (due to the implementation of GDDR3 memory in the higher-end 8600M models) for then-current games.
It is available on the Dell XPS M1530 portable, some Dell Inspiron 1720 models, HP Pavilion dv9000 models, Asus G1S, Sony VAIO VGN-FZ21Z, select Lenovo IdeaPad models, some models of the Acer Aspire 5920, Acer Aspire 9920G and BenQ Joybook S41, the Mid 2007 to Late 2008 MacBook Pro, and some models of Fujitsu Siemens.
The common failure of this chip in, amongst others, MacBook Pros purchased between May 2007 and September 2008 was the subject of a class-action suit against Nvidia, which resulted in Apple providing an extended four-year warranty related to the issue after confirming that it was caused by the Nvidia chips themselves. This warranty replacement service was expected to cost Nvidia around $150 to $200 million, and the company's market capitalisation fell by over $3 billion after it was sued by its own shareholders for attempting to cover the issue up.
=== GeForce 8700M series ===
The GeForce 8700M was developed for the mid-range market. The 8700M GT is the only GPU in this series.
This chipset is available on high-end laptops such as the Dell XPS M1730, Sager NP5793, and Toshiba Satellite X205.
While this card is considered by most in the field to be a decent mid-range card, it is hard to classify the 8700M GT as a high-end card due to its 128-bit memory bus; it is essentially an overclocked GDDR3 8600M GT mid-range card. However, it shows strong performance in a dual-card SLI configuration, and provides decent gaming performance in a single-card configuration.
=== GeForce 8800M series ===
The GeForce 8800M was developed to succeed the 8700M in the high-end market, and can be found in high-end gaming notebook computers.
Versions include the 8800M GTS and 8800M GTX. These were released as the first truly high-end mobile GeForce 8 Series GPUs, each with a 256-bit memory bus and a standard 512 megabytes of GDDR3 memory, and provide high-end gaming performance equivalent to many desktop GPUs. In SLI, these can produce 3DMark06 results in the high thousands.
Laptop models which include the 8800M GPUs are: Sager NP5793, Sager NP9262, Alienware m15x and m17x, HP HDX9000, and Dell XPS M1730. Clevo also manufactures similar laptop models for CyberPower, Rock, and Sager (among others) - all with the 8800M GTX, while including the 8800M GTS in the Gateway P-6831 FX and P-6860 FX models.
The 8800M GTS was used in modified form as the GeForce 8800 GS in the early 2008 iMac models.
=== Technical summary ===
The series has been succeeded by the GeForce 9 series (which in turn was succeeded by the GeForce 200 series). The GeForce 8400M GS was the only exception, having been rebadged into neither the GeForce 9 nor the GeForce 200 series.
== Problems ==
Some chips of the GeForce 8 series (specifically those from the G84 [for example, G84-600-A2] and G86 series) suffer from an overheating problem. Nvidia states that the issue should not affect many chips, whereas others assert that all of the chips in these series are potentially affected. Nvidia CEO Jen-Hsun Huang and CFO Marvin Burkett were involved in a lawsuit filed on September 9, 2008, alleging their knowledge of the flaw and their intent to hide it.
== End-of-life driver support ==
Nvidia ceased Windows driver support for the GeForce 8 series on April 1, 2016.
Windows XP 32-bit & Media Center Edition: version 340.52 released on July 29, 2014; Download
Windows XP 64-bit: version 340.52 released on July 29, 2014; Download
Windows Vista, 7, 8, 8.1 32-bit: version 342.01 (WHQL) released on December 14, 2016; Download
Windows Vista, 7, 8, 8.1 64-bit: version 342.01 (WHQL) released on December 14, 2016; Download
Windows 10, 32-bit: version 342.01 (WHQL) released on December 14, 2016; Download
Windows 10, 64-bit: version 342.01 (WHQL) released on December 14, 2016; Download
== See also ==
Comparison of Nvidia graphics processing units
GeForce 7 series
GeForce 9 series
GeForce 100 series
GeForce 200 series
GeForce 300 series
Nvidia Quadro - Nvidia workstation graphics system
Nvidia Tesla - Nvidia's first dedicated general-purpose GPU line
== References ==
== External links ==
NVIDIA's GeForce 8 series page
Nvidia GeForce 8800 Series
Nvidia GeForce 8600 Series
Nvidia GeForce 8500 Series
Nvidia GeForce 8400 Series
Nvidia GeForce 8800M Series
Nvidia GeForce 8600M Series
Nvidia GeForce 8400M Series
Nvidia Nsight
Nvidia GeForce Drivers for the GeForce 8x00 series (v. 340.52)
NVIDIA GeForce 8800 GPU Architecture Overview - a somewhat longer and more detailed document about the new 8800 features
OpenGL Extension Specifications for the G8x
A graphics processing unit (GPU) is a specialized electronic circuit designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles. GPUs were later found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. The ability of GPUs to rapidly perform vast numbers of calculations has led to their adoption in diverse fields including artificial intelligence (AI) where they excel at handling data-intensive and computationally demanding tasks. Other non-graphical uses include the training of neural networks and cryptocurrency mining.
== History ==
=== 1970s ===
Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor.
A specialized barrel shifter circuit helped the CPU animate the framebuffer graphics for various 1970s arcade video games from Midway and Taito, such as Gun Fight (1975), Sea Wolf (1976), and Space Invaders (1978). The Namco Galaxian arcade system in 1979 used specialized graphics hardware that supported RGB color, multi-colored sprites, and tilemap backgrounds. The Galaxian hardware was widely used during the golden age of arcade video games, by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, Sega, and Taito.
The Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. Atari 8-bit computers (1979) had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer). 6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU.
=== 1980s ===
The NEC μPD7220 was the first implementation of a personal computer graphics display processor as a single large-scale integration (LSI) integrated circuit chip. This enabled the design of low-cost, high-performance video graphics cards such as those from Number Nine Visual Technology. It became the best-known GPU until the mid-1980s. It was the first fully integrated VLSI (very large-scale integration) metal–oxide–semiconductor (NMOS) graphics display processor for PCs, supported up to 1024×1024 resolution, and laid the foundations for the PC graphics market. It was used in a number of graphics cards and was licensed for clones such as the Intel 82720, the first of Intel's graphics processing units. The Williams Electronics arcade games Robotron 2084, Joust, Sinistar, and Bubbles, all released in 1982, contain custom blitter chips for operating on 16-color bitmaps.
In 1984, Hitachi released the ARTC HD63484, the first major CMOS graphics processor for personal computers. The ARTC could display up to 4K resolution when in monochrome mode. It was used in a number of graphics cards and terminals during the late 1980s. In 1985, the Amiga was released with a custom graphics chip including a blitter for bitmap manipulation, line drawing, and area fill. It also included a coprocessor with its own simple instruction set, that was capable of manipulating graphics hardware registers in sync with the video beam (e.g. for per-scanline palette switches, sprite multiplexing, and hardware windowing), or driving the blitter. In 1986, Texas Instruments released the TMS34010, the first fully programmable graphics processor. It could run general-purpose code but also had a graphics-oriented instruction set. During 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture ("TIGA") Windows accelerator cards.
In 1987, the IBM 8514 graphics system was released. It was one of the first video cards for IBM PC compatibles that implemented fixed-function 2D primitives in electronic hardware. Sharp's X68000, released in 1987, used a custom graphics chipset with a 65,536 color palette and hardware support for sprites, scrolling, and multiple playfields. It served as a development machine for Capcom's CP System arcade board. Fujitsu's FM Towns computer, released in 1989, had support for a 16,777,216 color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System.
IBM introduced its proprietary Video Graphics Array (VGA) display standard in 1987, with a maximum resolution of 640×480 pixels. In November 1988, NEC Home Electronics announced its creation of the Video Electronics Standards Association (VESA) to develop and promote a Super VGA (SVGA) computer display standard as a successor to VGA. Super VGA enabled graphics display resolutions up to 800×600 pixels, a 56% increase.
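The 56% figure refers to the increase in total pixel count from VGA's 640×480 to SVGA's 800×600:

```python
vga_pixels = 640 * 480    # 307,200 pixels
svga_pixels = 800 * 600   # 480,000 pixels
increase = (svga_pixels - vga_pixels) / vga_pixels
print(f"{increase:.2%}")  # 56.25% more pixels
```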
=== 1990s ===
In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an indication of the performance increase it promised. The 86C911 spawned a variety of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. Fixed-function Windows accelerators surpassed expensive general-purpose graphics coprocessors in Windows performance, and such coprocessors faded from the PC market.
Throughout the 1990s, 2D GUI acceleration evolved. As manufacturing capabilities improved, so did the level of integration of graphics chips. Additional application programming interfaces (APIs) arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x, and their later DirectDraw interface for hardware acceleration of 2D games in Windows 95 and later.
In the early and mid-1990s, real-time 3D graphics became increasingly common in arcade, computer, and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the Sega Model 1, Namco System 22, and Sega Model 2, and the fifth-generation video game consoles such as the Saturn, PlayStation, and Nintendo 64. Arcade systems such as the Sega Model 2 and the SGI Onyx-based Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (transform, clipping, and lighting) years before it appeared in consumer graphics cards. Another early example is the Super FX chip, a RISC-based on-cartridge graphics chip used in some SNES games, notably Doom and Star Fox. Some systems used DSPs to accelerate transformations. Fujitsu, which worked on the Sega Model 2 arcade system, began working on integrating T&L into a single LSI solution for use in home computers in 1995; the result, the Fujitsu Pinolite, the first 3D geometry processor for personal computers, was released in 1997. The first hardware T&L GPU on home video game consoles was the Nintendo 64's Reality Coprocessor, released in 1996. In 1997, Mitsubishi released the 3Dpro/2MP, a GPU capable of transformation and lighting, for workstations and Windows NT desktops; ATI used it for its FireGL 4000 graphics card, released in 1997.
The term "GPU" was coined by Sony in reference to the 32-bit Sony GPU (designed by Toshiba) in the PlayStation video game console, released in 1994.
In the PC world, notable failed attempts at low-cost 3D graphics chips included the S3 ViRGE, ATI Rage, and Matrox Mystique. These chips were essentially previous-generation 2D accelerators with 3D features bolted on. Many were pin-compatible with the earlier-generation chips for ease of implementation and minimal cost. Initially, 3D graphics were possible only with discrete boards dedicated to accelerating 3D functions (and lacking 2D graphical user interface (GUI) acceleration entirely) such as the PowerVR and the 3dfx Voodoo. However, as manufacturing technology continued to progress, video, 2D GUI acceleration, and 3D functionality were all integrated into one chip. Rendition's Verite chipsets were among the first to do this well. In 1997, Rendition collaborated with Hercules and Fujitsu on the "Thriller Conspiracy" project, which combined a Fujitsu FXG-1 Pinolite geometry processor with a Vérité V2200 core to create a graphics card with a full T&L engine years before Nvidia's GeForce 256; this card, designed to reduce the load placed upon the system's CPU, never made it to market. Nvidia's RIVA 128 was one of the first consumer GPUs to integrate a 3D processing unit and a 2D processing unit on a single chip.
OpenGL was introduced in the early 1990s by Silicon Graphics (SGI) as a professional graphics API, with proprietary hardware support for 3D rasterization. In 1994, Microsoft acquired Softimage, the dominant CGI movie production tool used for early CGI hits such as Jurassic Park, Terminator 2, and Titanic. That deal brought a strategic relationship with SGI and a commercial license of its OpenGL libraries, enabling Microsoft to port the API to Windows NT, though not to the upcoming Windows 95. Although it was little known at the time, SGI had contracted with Microsoft to transition from Unix to the forthcoming Windows NT OS; the deal, signed in 1995, was not announced publicly until 1998. In the intervening period, Microsoft worked closely with SGI to port OpenGL to Windows NT. In that era, OpenGL had no standard driver model that would let competing hardware accelerators compete on support for higher-level 3D texturing and lighting functionality. In 1994, Microsoft announced DirectX 1.0 and support for gaming in the forthcoming Windows 95 consumer OS. In 1995, Microsoft announced the acquisition of the UK-based RenderMorphics Ltd and the Direct3D driver model for the acceleration of consumer 3D graphics. The Direct3D driver model shipped with DirectX 2.0 in 1996; it included standards and specifications for 3D chip makers to compete on support for 3D texturing, lighting, and Z-buffering. ATI, later acquired by AMD, began development of the first Direct3D GPUs. Nvidia quickly pivoted from a failed deal with Sega in 1996 to aggressively embracing Direct3D. In this era, Microsoft merged its internal Direct3D and OpenGL teams and worked closely with SGI to unify driver standards for both industrial and consumer 3D graphics hardware accelerators. Microsoft ran annual events for 3D chip makers, called "Meltdowns", to test that their 3D hardware and drivers worked with both Direct3D and OpenGL.
It was during this period of strong Microsoft influence over 3D standards that 3D accelerator cards moved beyond being simple rasterizers to become more powerful general-purpose processors, as support for hardware-accelerated texture mapping, lighting, Z-buffering, and compute created the modern GPU. During this period, the same Microsoft team responsible for Direct3D and OpenGL driver standardization introduced its own 3D chip design, called Talisman. Details of this era are documented extensively in the books "Game of X" v.1 and v.2 by Rusel DeMaria, "Renegades of the Empire" by Mike Drummond, "Opening the Xbox" by Dean Takahashi, and "Masters of Doom" by David Kushner. The Nvidia GeForce 256 (also known as NV10) was the first consumer-level card with hardware-accelerated T&L. While the OpenGL API provided software support for texture mapping and lighting, the first 3D hardware acceleration for these features arrived with the first Direct3D-accelerated consumer GPUs.
=== 2000s ===
Nvidia released the GeForce 256, marketed as the world's first GPU, integrating transform and lighting engines for advanced 3D graphics rendering. Nvidia was also first to produce a chip capable of programmable shading: the GeForce 3. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. Used in the Xbox console, this chip competed with the one in the PlayStation 2, which used a custom vector unit for hardware-accelerated vertex processing (commonly referred to as VU0/VU1). The earliest shader execution engines used in the Xbox were not general-purpose and could not execute arbitrary pixel code. Vertices and pixels were processed by different units, each with its own resources, and pixel shaders had tighter constraints because they execute at much higher frequency than vertex shaders. Pixel shading engines were more akin to highly customizable function blocks and did not truly "run" a program. Many of these disparities between vertex and pixel shading were not addressed until the Unified Shader Model.
In October 2002, with the introduction of the ATI Radeon 9700 (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating point math, and were quickly becoming as flexible as CPUs, yet orders of magnitude faster for image-array operations. Pixel shading is often used for bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded.
With the introduction of the Nvidia GeForce 8 series and new generic stream processing units, GPUs became more generalized computing devices. Parallel GPUs are making computational inroads against the CPU, and a subfield of research, dubbed GPU computing or GPGPU for general purpose computing on GPU, has found applications in fields as diverse as machine learning, oil exploration, scientific image processing, linear algebra, statistics, 3D reconstruction, and stock options pricing. GPGPU was the precursor to what is now called a compute shader (e.g. CUDA, OpenCL, DirectCompute) and actually abused the hardware to a degree by treating the data passed to algorithms as texture maps and executing algorithms by drawing a triangle or quad with an appropriate pixel shader. This entails some overheads since units like the scan converter are involved where they are not needed (nor are triangle manipulations even a concern—except to invoke the pixel shader).
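The quad-drawing trick described above can be illustrated with a small CPU-side emulation (purely illustrative Python, not real GPU code; the shader function and blur kernel here are hypothetical):

```python
import numpy as np

def gpgpu_via_pixel_shader(texture, shader):
    """Emulate the early GPGPU pattern: bind the input array as a 2D
    "texture", draw a full-screen quad, and let the "pixel shader" run
    once per output pixel; the render target holds the result."""
    height, width = texture.shape
    render_target = np.empty_like(texture)
    # The rasterizer would emit one fragment per pixel of the quad;
    # here that is simply a loop over every output coordinate.
    for y in range(height):
        for x in range(width):
            render_target[y, x] = shader(texture, x, y)
    return render_target

def blur_shader(tex, x, y):
    """A toy "shader": 3-tap horizontal box blur via texture lookups."""
    w = tex.shape[1]
    return (tex[y, (x - 1) % w] + tex[y, x] + tex[y, (x + 1) % w]) / 3.0

data = np.arange(12, dtype=np.float64).reshape(3, 4)
result = gpgpu_via_pixel_shader(data, blur_shader)
```

Every fragment still passes through fixed-function stages such as scan conversion and attribute interpolation that contribute nothing to the computation itself, which is exactly the overhead the paragraph describes.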
Nvidia's CUDA platform, first introduced in 2007, was the earliest widely adopted programming model for GPU computing. OpenCL is an open standard defined by the Khronos Group that allows for the development of code for both GPUs and CPUs with an emphasis on portability. OpenCL solutions are supported by Intel, AMD, Nvidia, and ARM, and according to a report in 2011 by Evans Data, OpenCL had become the second most popular HPC tool.
=== 2010s ===
In 2010, Nvidia partnered with Audi to power their cars' dashboards, using the Tegra GPU to provide increased functionality to cars' navigation and entertainment systems. Advances in GPU technology in cars helped advance self-driving technology. AMD's Radeon HD 6000 series cards were released in 2010, and in 2011 AMD released its 6000M series discrete GPUs for mobile devices. Nvidia's Kepler line of graphics cards was released in 2012 and was used in Nvidia's 600 and 700 series cards. A feature of this GPU microarchitecture was GPU Boost, a technology that raises or lowers a video card's clock speed according to its power draw. The Kepler microarchitecture was manufactured on TSMC's 28 nm process.
The PS4 and Xbox One were released in 2013; both use GPUs based on AMD's Radeon HD 7850 and 7790. Nvidia's Kepler line of GPUs was followed by the Maxwell line, manufactured on the same process. Nvidia's 28 nm chips were manufactured by TSMC in Taiwan. Compared to the earlier 40 nm technology, this manufacturing process allowed a 20 percent boost in performance while drawing less power. Virtual reality headsets have high system requirements; at the time of their release, manufacturers recommended the GTX 970 and the R9 290X or better. Cards based on the Pascal microarchitecture were released in 2016; the GeForce 10 series of cards belongs to this generation. They are made using a 16 nm manufacturing process which improves upon the previous 28 nm process. Nvidia released one non-consumer card under the new Volta architecture, the Titan V. Changes from the Titan XP, Pascal's high-end card, include an increase in the number of CUDA cores, the addition of tensor cores, and HBM2. Tensor cores are designed for deep learning, while high-bandwidth memory is on-die, stacked, lower-clocked memory that offers an extremely wide memory bus. To emphasize that the Titan V is not a gaming card, Nvidia removed the "GeForce GTX" branding it gives to consumer gaming cards.
In 2018, Nvidia launched the RTX 20 series GPUs, which added ray-tracing cores to GPUs, improving their performance on lighting effects. AMD's Polaris 11 and Polaris 10 GPUs are fabricated on a 14 nm process. Their release resulted in a substantial increase in the performance per watt of AMD video cards. AMD also released the Vega GPU series for the high-end market as a competitor to Nvidia's high-end Pascal cards, also featuring HBM2 like the Titan V.
In 2019, AMD released the successor to its Graphics Core Next (GCN) microarchitecture/instruction set. Dubbed RDNA, it debuted in the Radeon RX 5000 series of video cards. The company announced that the successor to the RDNA microarchitecture would be incremental (a "refresh"). AMD unveiled the Radeon RX 6000 series, its RDNA 2 graphics cards with support for hardware-accelerated ray tracing. The product series, launched in late 2020, consisted of the RX 6800, RX 6800 XT, and RX 6900 XT. The RX 6700 XT, based on Navi 22, was launched in early 2021.
The PlayStation 5 and Xbox Series X and Series S were released in 2020; they both use GPUs based on the RDNA 2 microarchitecture with incremental improvements and different GPU configurations in each system's implementation.
Intel first entered the GPU market in the late 1990s, but produced lackluster 3D accelerators compared to the competition at the time. Rather than attempting to compete with the high-end manufacturers Nvidia and ATI/AMD, they began integrating Intel Graphics Technology GPUs into motherboard chipsets, beginning with the Intel 810 for the Pentium III, and later into CPUs. They began with the Intel Atom 'Pineview' laptop processor in 2009, continuing in 2010 with desktop processors in the first generation of the Intel Core line and with contemporary Pentiums and Celerons. This resulted in a large nominal market share, as the majority of computers with an Intel CPU also featured this embedded graphics processor. These generally lagged behind discrete processors in performance. Intel re-entered the discrete GPU market in 2022 with its Arc series, which competed with the then-current GeForce 30 series and Radeon 6000 series cards at competitive prices.
=== 2020s ===
In the 2020s, GPUs have been increasingly used for calculations involving embarrassingly parallel problems, such as training neural networks on the enormous datasets needed for large language models. Specialized processing cores on some modern workstation GPUs are dedicated to deep learning, since they offer significant FLOPS increases by performing 4×4 matrix multiply-accumulate operations, yielding hardware performance up to 128 TFLOPS in some applications. These tensor cores have since appeared in consumer cards as well.
== GPU companies ==
Many companies have produced GPUs under a number of brand names. In 2009, Intel, Nvidia, and AMD/ATI were the market share leaders, with 49.4%, 27.8%, and 20.6% market share respectively. In addition, Matrox produces GPUs. Chinese companies such as Jingjia Micro have also produced GPUs for the domestic market although in terms of worldwide sales, they still lag behind market leaders.
Modern smartphones use mostly Adreno GPUs from Qualcomm, PowerVR GPUs from Imagination Technologies, and Mali GPUs from ARM.
== Computational functions ==
Modern GPUs have traditionally used most of their transistors to do calculations related to 3D computer graphics. In addition to the 3D hardware, today's GPUs include basic 2D acceleration and framebuffer capabilities (usually with a VGA compatibility mode). Newer cards such as AMD/ATI HD5000–HD7000 lack dedicated 2D acceleration; it is emulated by 3D hardware. GPUs were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons. Later, dedicated hardware was added to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations that are supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces.
Several factors of GPU construction affect the performance of the card for real-time rendering, such as the size of the connector pathways in the semiconductor device fabrication, the clock signal frequency, and the number and size of various on-chip memory caches. Performance is also affected by the number of streaming multiprocessors (SMs) for Nvidia GPUs, compute units (CUs) for AMD GPUs, or Xe cores for Intel discrete GPUs; these denote the on-silicon processor core units within the GPU chip that perform the core calculations, typically working in parallel with the other SMs/CUs on the GPU. GPU performance is typically measured in floating point operations per second (FLOPS); GPUs in the 2010s and 2020s typically deliver performance measured in teraflops (TFLOPS). This is an estimated performance measure, as other factors can affect the actual display rate.
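As a back-of-the-envelope illustration, peak FLOPS follows from core count, clock, and operations issued per cycle (the card below is hypothetical; the factor of 2 assumes one fused multiply-add, counted as two FLOPs, per core per cycle):

```python
def theoretical_tflops(cores, clock_ghz, flops_per_core_per_cycle=2.0):
    """Peak TFLOPS = cores x clock (GHz) x FLOPs per core per cycle / 1000."""
    return cores * clock_ghz * flops_per_core_per_cycle / 1000.0

# Hypothetical GPU: 2560 shader cores at 1.8 GHz, one FMA per cycle each.
peak = theoretical_tflops(2560, 1.8)  # ~9.2 TFLOPS
```

Real-world throughput falls short of this figure whenever memory bandwidth, occupancy, or fixed-function stages become the bottleneck, which is why TFLOPS is only an estimate of display-rate performance.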
=== GPU accelerated video decoding and encoding ===
Most GPUs made since 1995 support the YUV color space and hardware overlays, important for digital video playback, and many GPUs made since 2000 also support MPEG primitives such as motion compensation and iDCT. This hardware-accelerated video decoding, in which portions of the video decoding process and video post-processing are offloaded to the GPU hardware, is commonly referred to as "GPU accelerated video decoding", "GPU assisted video decoding", "GPU hardware accelerated video decoding", or "GPU hardware assisted video decoding".
Recent graphics cards decode high-definition video on the card, offloading the central processing unit. The most common APIs for GPU accelerated video decoding are DxVA for Microsoft Windows operating systems and VDPAU, VAAPI, XvMC, and XvBA for Linux-based and UNIX-like operating systems. All except XvMC are capable of decoding videos encoded with MPEG-1, MPEG-2, MPEG-4 ASP (MPEG-4 Part 2), MPEG-4 AVC (H.264 / DivX 6), VC-1, WMV3/WMV9, Xvid / OpenDivX (DivX 4), and DivX 5 codecs, while XvMC is only capable of decoding MPEG-1 and MPEG-2.
There are several dedicated hardware video decoding and encoding solutions.
==== Video decoding processes that can be accelerated ====
Video decoding processes that can be accelerated by modern GPU hardware are:
Motion compensation (mocomp)
Inverse discrete cosine transform (iDCT)
Inverse telecine 3:2 and 2:2 pull-down correction
Inverse modified discrete cosine transform (iMDCT)
In-loop deblocking filter
Intra-frame prediction
Inverse quantization (IQ)
Variable-length decoding (VLD), more commonly known as slice-level acceleration
Spatial-temporal deinterlacing and automatic interlace/progressive source detection
Bitstream processing (Context-adaptive variable-length coding/Context-adaptive binary arithmetic coding) and perfect pixel positioning
These operations also have applications in video editing, encoding, and transcoding.
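Of the stages listed above, motion compensation is the simplest to illustrate. The sketch below is illustrative only (it ignores sub-pixel interpolation and edge clamping, which real codecs require): one block is reconstructed from a reference frame, a motion vector, and a decoded residual.

```python
import numpy as np

def motion_compensate(reference, block_pos, motion_vector, residual):
    """Reconstruct one block: fetch the predictor from the reference
    frame at the block position shifted by the motion vector, then add
    the decoded residual."""
    (by, bx), (dy, dx) = block_pos, motion_vector
    h, w = residual.shape
    predictor = reference[by + dy : by + dy + h, bx + dx : bx + dx + w]
    return predictor + residual

reference = np.arange(64, dtype=np.int32).reshape(8, 8)  # previous frame
residual = np.ones((2, 2), dtype=np.int32)               # decoded difference
block = motion_compensate(reference, (2, 2), (1, -1), residual)
```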
=== 2D graphics APIs ===
An earlier GPU may support one or more 2D graphics APIs for 2D acceleration, such as GDI and DirectDraw.
=== 3D graphics APIs ===
A GPU can support one or more 3D graphics APIs, such as DirectX, Metal, OpenGL, OpenGL ES, and Vulkan.
== GPU forms ==
=== Terminology ===
In the 1970s, the term "GPU" originally stood for graphics processor unit and described a programmable processing unit working independently from the CPU that was responsible for graphics manipulation and output. In 1994, Sony used the term (now standing for graphics processing unit) in reference to the PlayStation console's Toshiba-designed Sony GPU. The term was popularized by Nvidia in 1999, which marketed the GeForce 256 as "the world's first GPU". It was presented as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines". Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002. Released in 2023, the AMD Alveo MA35D features dual VPUs, manufactured on a 5 nm process.
In personal computers, there are two main forms of GPUs. Each has many synonyms:
Dedicated graphics also called discrete graphics.
Integrated graphics also called shared graphics solutions, integrated graphics processors (IGP), or unified memory architecture (UMA).
==== Usage-specific GPU ====
Most GPUs are designed for a specific use: real-time 3D graphics or other mass calculations:
Gaming
GeForce GTX, RTX
Nvidia Titan
Radeon HD, R5, R7, R9, RX, Vega and Navi series
Radeon VII
Intel Arc
Cloud Gaming
Nvidia GRID
Radeon Sky
Workstation
Nvidia Quadro
Nvidia RTX
AMD FirePro
AMD Radeon Pro
Intel Arc Pro
Cloud Workstation
Nvidia Tesla
AMD FireStream
Artificial Intelligence training and Cloud
Nvidia Tesla
AMD Radeon Instinct
Automated/Driverless car
Nvidia Drive PX
=== Dedicated graphics processing unit ===
Dedicated graphics processing units use RAM that is dedicated to the GPU rather than relying on the computer's main system memory. This RAM is usually specially selected for the expected serial workload of the graphics card (see GDDR). Sometimes systems with dedicated discrete GPUs were called "DIS" systems, as opposed to "UMA" systems (see next section).
Dedicated GPUs are not necessarily removable, nor do they necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that graphics cards have RAM dedicated to the card's use, not to the fact that most dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts.
Graphics cards with dedicated GPUs typically interface with the motherboard by means of an expansion slot such as PCI Express (PCIe) or Accelerated Graphics Port (AGP). They can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use Peripheral Component Interconnect (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is not available.
Technologies such as Scan-Line Interleave by 3dfx, SLI and NVLink by Nvidia and CrossFire by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics. These technologies, however, are increasingly uncommon; most games do not fully use multiple GPUs, as most users cannot afford them. Multiple GPUs are still used on supercomputers (like in Summit), on workstations to accelerate video (processing multiple videos at once) and 3D rendering, for VFX, GPGPU workloads and for simulations, and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers, Tesla GPUs, and Intel's Ponte Vecchio GPUs.
=== Integrated graphics processing unit ===
Integrated graphics processing units (IGPU), integrated graphics, shared graphics solutions, integrated graphics processors (IGP), or unified memory architectures (UMA) use a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto a motherboard as part of its northbridge chipset, or on the same die (integrated circuit) with the CPU (like AMD APU or Intel HD Graphics). On certain motherboards, AMD's IGPs can use dedicated sideport memory: a separate fixed block of high-performance memory that is dedicated for use by the GPU. As of early 2007, computers with integrated graphics accounted for about 90% of all PC shipments. They are less costly to implement than dedicated graphics processing, but tend to be less capable. Historically, integrated processing was considered unfit for 3D games or graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004. However, modern integrated graphics processors such as AMD Accelerated Processing Unit and Intel Graphics Technology (HD, UHD, Iris, Iris Pro, Iris Plus, and Xe-LP) can handle 2D graphics or low-stress 3D graphics.
Since GPU computations are memory-intensive, integrated processing may compete with the CPU for relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs use system memory with bandwidth up to a current maximum of 128 GB/s, whereas a discrete graphics card may have a bandwidth of more than 1000 GB/s between its VRAM and GPU core. This memory bus bandwidth can limit the performance of the GPU, though multi-channel memory can mitigate this deficiency. Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it.
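The bandwidth gap described above is simple arithmetic; the configurations below are hypothetical examples, not measurements of any particular product:

```python
def memory_bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits, channels=1):
    """Peak bandwidth = transfers/s x bytes per transfer x channels."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

# Hypothetical IGP sharing dual-channel DDR4-3200 (two 64-bit channels):
igp_bw = memory_bandwidth_gb_s(3200, 64, channels=2)   # 51.2 GB/s
# Hypothetical discrete card: 14 GT/s GDDR6 on a 256-bit bus:
discrete_bw = memory_bandwidth_gb_s(14000, 256)        # 448.0 GB/s
```

The roughly order-of-magnitude difference between the two results is why shared system RAM can bottleneck an IGP, and why adding memory channels narrows (but does not close) the gap.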
On systems with "Unified Memory Architecture" (UMA), including modern AMD processors with integrated graphics, modern Intel processors with integrated graphics, Apple processors, the PS5 and Xbox Series (among others), the CPU cores and the GPU block share the same pool of RAM and memory address space. This allows the system to dynamically allocate memory between the CPU cores and the GPU block based on memory needs (without needing a large static split of the RAM) and thanks to zero copy transfers, removes the need for either copying data over a bus between physically separate RAM pools or copying between separate address spaces on a single physical pool of RAM, allowing more efficient transfer of data.
=== Hybrid graphics processing ===
Hybrid GPUs compete with integrated graphics in the low-end desktop and notebook markets. The most common implementations of this are ATI's HyperMemory and Nvidia's TurboCache.
Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They share memory with the system and have a small dedicated memory cache, to make up for the high latency of the system RAM. Technologies within PCI Express make this possible. While these solutions are sometimes advertised as having as much as 768 MB of RAM, this refers to how much can be shared with the system memory.
=== Stream processing and general purpose GPUs (GPGPU) ===
It is common to use a general purpose graphics processing unit (GPGPU) as a modified form of stream processor (or a vector processor), running compute kernels. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "Dedicated graphics processing unit" above) GPU designers, AMD and Nvidia, are pursuing this approach with an array of applications. Both Nvidia and AMD teamed with Stanford University to create a GPU-based client for the Folding@home distributed computing project for protein folding calculations. In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications.
GPGPUs can be used for many types of embarrassingly parallel tasks including ray tracing. They are generally suited to high-throughput computations that exhibit data-parallelism to exploit the wide vector width SIMD architecture of the GPU.
GPU-based high performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration.
GPUs support API extensions to the C programming language such as OpenCL and OpenMP. Furthermore, each GPU vendor introduced its own API which only works with their cards: AMD APP SDK from AMD, and CUDA from Nvidia. These allow functions called compute kernels to run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate. CUDA was the first API to allow CPU-based applications to directly access the resources of a GPU for more general purpose computing without the limitations of using a graphics API.
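The compute-kernel model can be sketched in plain Python (a CPU stand-in: the hypothetical `launch` helper below iterates sequentially over what CUDA or OpenCL would schedule in parallel across the stream processors):

```python
import numpy as np

def saxpy_kernel(i, a, x, y, out):
    """Body of a hypothetical compute kernel: each "thread" i handles
    one element, mirroring how CUDA/OpenCL map kernel instances onto
    the GPU's stream processors."""
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    """Stand-in for a kernel launch: on a real GPU the n instances run
    in parallel; here they are simply iterated on the CPU."""
    for i in range(n):
        kernel(i, *args)

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])
out = np.empty_like(x)
launch(saxpy_kernel, len(x), 2.0, x, y, out)   # out = 2*x + y
```

The kernel operates on large buffers element-by-element, which is exactly the pattern that lets the CPU hand the bulk of the work to the GPU while handling control flow itself.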
Since 2005 there has been interest in using the performance offered by GPUs for evolutionary computation in general, and for accelerating the fitness evaluation in genetic programming in particular. Most approaches compile linear or tree programs on the host PC and transfer the executable to the GPU to be run. Typically a performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU's SIMD architecture. However, substantial acceleration can also be obtained by not compiling the programs but instead transferring them to the GPU to be interpreted there. Acceleration can then be obtained by interpreting multiple programs simultaneously, running multiple example problems simultaneously, or a combination of both. A modern GPU can simultaneously interpret hundreds of thousands of very small programs.
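The single-program, many-cases approach can be illustrated with a toy linear-program interpreter (the instruction set and program below are hypothetical): one evolved program is evaluated on a whole vector of fitness cases at once, as the GPU's SIMD hardware would.

```python
import numpy as np

def interpret(program, cases):
    """Evaluate one evolved linear program on a vector of example
    inputs simultaneously; each instruction updates an accumulator
    across all fitness cases at once (SIMD-style)."""
    acc = cases.astype(float)
    for op, k in program:
        if op == "add":
            acc = acc + k
        elif op == "mul":
            acc = acc * k
    return acc

program = [("mul", 2.0), ("add", 1.0)]        # computes 2*x + 1
cases = np.array([0.0, 1.0, 2.0, 3.0])        # example problems
fitness_errors = np.abs(interpret(program, cases) - (2 * cases + 1))
```

Interpreting many such programs at once, each over many cases, is the two-level parallelism the paragraph describes.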
=== External GPU (eGPU) ===
An external GPU is a graphics processor located outside of the housing of the computer, similar to a large external hard drive. External graphics processors are sometimes used with laptop computers. Laptops might have a substantial amount of RAM and a sufficiently powerful central processing unit (CPU), but often lack a powerful graphics processor, and instead have a less powerful but more energy-efficient on-board graphics chip. On-board graphics chips are often not powerful enough for playing video games, or for other graphically intensive tasks, such as editing video or 3D animation/rendering.
Therefore, it is desirable to attach a GPU to some external bus of a notebook. PCI Express is the only bus used for this purpose. The port may be, for example, an ExpressCard or mPCIe port (PCIe ×1, up to 5 or 2.5 Gbit/s respectively), a Thunderbolt 1, 2, or 3 port (PCIe ×4, up to 10, 20, or 40 Gbit/s respectively), a USB4 port with Thunderbolt compatibility, or an OCuLink port. Those ports are only available on certain notebook systems. eGPU enclosures include their own power supply (PSU), because powerful GPUs can consume hundreds of watts.
== Energy efficiency ==
== Sales ==
In 2013, 438.3 million GPUs were shipped globally and the forecast for 2014 was 414.2 million. However, by the third quarter of 2022, shipments of PC GPUs totaled around 75.5 million units, down 19% year-over-year.
== See also ==
=== Hardware ===
List of AMD graphics processing units
List of Nvidia graphics processing units
List of Intel graphics processing units
List of discrete and integrated graphics processing units
Intel GMA
Larrabee
Nvidia PureVideo – the bit-stream technology from Nvidia used in their graphics chips to accelerate video decoding in hardware with DXVA.
SoC
UVD (Unified Video Decoder) – the video decoding bit-stream technology from ATI to support hardware (GPU) decode with DXVA
=== APIs ===
=== Applications ===
GPU cluster
Mathematica – includes built-in support for CUDA and OpenCL GPU execution
Molecular modeling on GPU
Deeplearning4j – open-source, distributed deep learning for Java
== References ==
== Sources ==
Peddie, Jon (1 January 2023). The History of the GPU – New Developments. Springer Nature. ISBN 978-3-03-114047-1. OCLC 1356877844.
== External links ==
Sega is a video game developer, publisher, and hardware development company headquartered in Tokyo, Japan, with multiple offices around the world. The company's involvement in the arcade game industry began as a Japan-based distributor of coin-operated machines, including pinball games and jukeboxes. Sega imported second-hand machines that required frequent maintenance, which necessitated the construction of replacement guns, flippers, and other parts for the machines. According to former Sega director Akira Nagai, this is what led the company into developing its own games.
Sega released Pong-Tron, its first video-based game, in 1973. The company prospered from the arcade game boom of the late 1970s, with revenues climbing to over US$100 million by 1979. Nagai has stated that Hang-On and Out Run helped to pull the arcade game market out of the 1983 downturn and created new genres of video games.
In terms of arcades, Sega is the world's most prolific arcade game producer, having developed more than 500 games, 70 franchises, and 20 arcade system boards since 1981. It has been recognized by Guinness World Records for this achievement. The following list comprises the various arcade system boards developed and used by Sega in their arcade games.
== Arcade system boards ==
== Additional arcade hardware ==
Sega has developed and released additional arcade games that use technology other than their dedicated arcade system boards. The first arcade game manufactured by Sega was Periscope, an electromechanical game. This was followed by Missile in 1969. Subsequent video-based games such as Pong-Tron (1973), Fonz (1976), and Monaco GP (1979) used discrete logic boards without a CPU microprocessor. Frogger (1981) used a system powered by two Z80 CPU microprocessors. Some titles, such as Zaxxon (1982), were developed outside Sega, a practice that was not uncommon at the time.
== See also ==
Sega R360
List of Sega pinball machines
List of Sega video game consoles
== References ==
Transform, clipping, and lighting (T&L or TCL) is a term used in computer graphics.
== Overview ==
Transformation is the task of producing a two-dimensional view of a three-dimensional scene. Clipping means only drawing the parts of the scene that will be present in the picture after rendering is completed. Lighting is the task of altering the colour of the various surfaces of the scene on the basis of lighting information.
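A deliberately minimal Python sketch of the transformation and lighting stages (clipping is omitted; the projection matrix, light direction, and colour below are toy assumptions, not any real pipeline's values):

```python
import numpy as np

def transform(vertex, mvp):
    """Transformation: multiply by a 4x4 model-view-projection matrix,
    then divide by w to obtain normalized device coordinates."""
    v = mvp @ np.append(vertex, 1.0)
    return v[:3] / v[3]

def lambert(normal, light_dir, colour):
    """Lighting: scale the surface colour by the cosine of the angle
    between the surface normal and the light direction (clamped at 0)."""
    return colour * max(float(np.dot(normal, light_dir)), 0.0)

# Toy perspective matrix: copies z into w, so projection divides by z.
mvp = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
ndc = transform(np.array([2.0, 4.0, 4.0]), mvp)   # -> (0.5, 1.0, 1.0)
lit = lambert(np.array([0.0, 0.0, 1.0]),          # surface facing the light
              np.array([0.0, 0.0, 1.0]),
              np.array([0.8, 0.2, 0.2]))          # fully lit
```

A hardware T&L unit performs exactly these matrix-vector products and dot products for every vertex, which is why moving them off the CPU was such a large win.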
== Hardware ==
Hardware T&L had been used by arcade game system boards since 1993, and by home video game consoles since the Sega Genesis's Sega Virtua Processor (SVP), the Sega Saturn's SCU-DSP, and the Sony PlayStation's GTE in 1994, followed by the Nintendo 64's RSP in 1996. These were not traditional hardware T&L, but rather software T&L running on a coprocessor instead of the main CPU, and could also be used for rudimentary programmable pixel and vertex shading. More traditional hardware T&L appeared on consoles with the GameCube and Xbox in 2001 (the PS2 still used a vector coprocessor for T&L). Personal computers implemented T&L in software until 1999, as it was believed faster CPUs would be able to keep pace with demands for ever more realistic rendering. However, 3D computer games of the time were producing increasingly complex scenes and detailed lighting effects much faster than CPU processing power was increasing.
Nvidia's GeForce 256, released in late 1999, introduced hardware support for T&L to the consumer PC graphics card market. Its vertex processing was faster not only because of the T&L hardware, but also because of a cache that avoided processing the same vertex twice in certain situations. While DirectX 7.0 (specifically Direct3D 7) was the first release of that API to support hardware T&L, OpenGL had supported it much longer, though it had typically been the purview of older, professionally oriented 3D accelerators designed for computer-aided design (CAD) rather than games.
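The benefit of such a vertex cache can be sketched in software. The FIFO policy and the cache size below are illustrative assumptions, not details of the GeForce 256 design:

```python
from collections import deque

def count_transforms(indices, cache_size=16):
    """Process an indexed triangle list, skipping vertices still held in a
    FIFO post-transform cache. Returns (transforms_done, cache_hits)."""
    cache = deque(maxlen=cache_size)  # oldest entries fall out automatically
    transforms = hits = 0
    for idx in indices:
        if idx in cache:
            hits += 1            # reuse the already-transformed result
        else:
            transforms += 1      # run the vertex through T&L
            cache.append(idx)
    return transforms, hits

# Two triangles sharing an edge: vertices 1 and 2 appear twice each.
quad = [0, 1, 2, 1, 3, 2]
```

With the shared-edge quad above, only four of the six indices trigger a transform; the two repeated vertices are served from the cache.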
ArtX's integrated graphics chipset also featured T&L hardware; it was released in November 1999 as part of the ALi Aladdin 7 chipset for the Socket 7 platform.
S3 Graphics launched the Savage 2000 accelerator in late 1999, shortly after GeForce 256, but S3 never developed working Direct3D 7.0 drivers that would have enabled hardware T&L support.
== Usefulness ==
Hardware T&L did not have broad application support in games at the time (mainly because Direct3D games transformed their geometry on the CPU and were not allowed to use indexed geometries), so critics contended that it had little real-world value. Initially, it was only somewhat beneficial in a few OpenGL-based 3D first-person shooters of the time, most notably Quake III Arena. 3dfx and other competing graphics card companies contended that a fast CPU would make up for the lack of a T&L unit.
ATI's initial response to GeForce 256 was the dual-chip Rage Fury MAXX. By using two Rage 128 chips, each rendering an alternate frame, the card was able to somewhat approach the performance of SDR memory GeForce 256 cards, but the GeForce 256 DDR still retained the top speed. ATI was developing their own GPU at the time known as the Radeon which also implemented hardware T&L.
3dfx's Voodoo5 5500 lacked a T&L unit but was able to match the performance of the GeForce 256. However, the Voodoo5 was late to market, and by its release it could not match the succeeding GeForce 2 GTS.
STMicroelectronics' PowerVR Kyro II, released in 2001, was able to rival the costlier ATI Radeon DDR and Nvidia GeForce 2 GTS in benchmarks of the time, despite not having hardware transform and lighting. As more and more games were optimised for hardware T&L, the Kyro II lost its performance advantage, and it is not supported by most later games.
Futuremark's 3DMark 2000 heavily utilized hardware T&L, which resulted in the Voodoo 5 and Kyro II both scoring poorly in the benchmark tests, behind budget T&L video cards such as the GeForce 2 MX and Radeon SDR.
== Industry standardization ==
By 2000, only ATI, with its comparable Radeon 7xxx series, remained in direct competition with Nvidia's GeForce 256 and GeForce 2. By the end of 2001, all discrete graphics chips had hardware T&L.
Support for hardware T&L assured the GeForce and Radeon of a strong future, unlike their Direct3D 6 predecessors, which relied upon software T&L. While hardware T&L does not add new rendering features, the extra performance allowed for much more complex scenes, and an increasing number of games recommended it for optimal performance. GPUs that support T&L in hardware are usually considered part of the DirectX 7.0 generation.
After hardware T&L had become standard in GPUs, the next step in computer 3D graphics was DirectX 8.0, with fully programmable vertex and pixel shaders. Nonetheless, many early games using DirectX 8.0 shaders, such as Half-Life 2, made that feature optional so that DirectX 7.0 hardware T&L GPUs could still run them. The GeForce 256, for instance, remained supported until approximately 2006, in titles such as Star Wars: Empire at War.
== References ==
AMD Accelerated Processing Unit (APU), formerly known as Fusion, is a series of 64-bit microprocessors from Advanced Micro Devices (AMD), combining a general-purpose AMD64 central processing unit (CPU) and 3D integrated graphics processing unit (IGPU) on a single die.
AMD announced the first-generation APUs, Llano for high-performance and Brazos for low-power devices, in January 2011 and launched the first units on June 14. The second-generation Trinity for high-performance and Brazos 2.0 for low-power devices were announced in June 2012. The third-generation Kaveri for high-performance devices launched in January 2014, while Kabini and Temash for low-power devices were announced in the summer of 2013. Since the launch of the Zen microarchitecture, Ryzen and Athlon APUs have been released to the global market as Raven Ridge on the DDR4 platform, following Bristol Ridge a year prior.
AMD has also supplied semi-custom APUs for consoles starting with the release of Sony PlayStation 4 and Microsoft Xbox One eighth generation video game consoles.
== History ==
The AMD Fusion project started in 2006 with the aim of developing a system on a chip that combined a CPU with a GPU on a single die. This effort was moved forward by AMD's acquisition of graphics chipset manufacturer ATI in 2006. The project reportedly required three internal iterations of the Fusion concept to create a product deemed worthy of release. Reasons contributing to the delay of the project include the technical difficulties of combining a CPU and GPU on the same die at a 45 nm process, and conflicting views on what the role of the CPU and GPU should be within the project.
The first generation desktop and laptop APU, codenamed Llano, was announced on 4 January 2011 at the 2011 Consumer Electronics Show in Las Vegas and released shortly thereafter. It featured K10 CPU cores and a Radeon HD 6000 series GPU on the same die on the FM1 socket. An APU for low-power devices was announced as the Brazos platform, based on the Bobcat microarchitecture and a Radeon HD 6000 series GPU on the same die.
At a conference in January 2012, corporate fellow Phil Rogers announced that AMD would re-brand the Fusion platform as the Heterogeneous System Architecture (HSA), stating that "it's only fitting that the name of this evolving architecture and platform be representative of the entire, technical community that is leading the way in this very important area of technology and programming development." However, it was later revealed that AMD had been the subject of a trademark infringement lawsuit by the Swiss company Arctic, which used the name "Fusion" for a line of power supply products.
The second-generation desktop and laptop APU, codenamed Trinity, was announced at AMD's 2010 Financial Analyst Day and released in October 2012. It featured Piledriver CPU cores and Radeon HD 7000 series GPU cores on the FM2 socket. AMD released a new APU based on the Piledriver microarchitecture on 12 March 2013 for laptops/mobile and on 4 June 2013 for desktops under the codename Richland. The second-generation APU for low-power devices, Brazos 2.0, used exactly the same APU chip but ran at higher clock speeds, rebranded the GPU as Radeon HD 7000 series, and used a new I/O controller chip.
Semi-custom chips were introduced in the Microsoft Xbox One and Sony PlayStation 4 video game consoles, and subsequently in the Microsoft Xbox Series X|S and Sony PlayStation 5 consoles.
A third generation of the technology was released on 14 January 2014, featuring greater integration between CPU and GPU. The desktop and laptop variant is codenamed Kaveri, based on the Steamroller architecture, while the low-power variants, codenamed Kabini and Temash, are based on the Jaguar architecture.
Since the introduction of Zen-based processors, AMD has branded its APUs as Ryzen with Radeon Graphics and Athlon with Radeon Graphics. Desktop units carry a G suffix in their model numbers (e.g. Ryzen 5 3400G and Athlon 3000G) to distinguish them from processors without integrated graphics, and also to set them apart from the former Bulldozer-era A-series APUs. Mobile counterparts include Radeon Graphics regardless of suffix.
In November 2017, HP released the Envy x360, featuring the Ryzen 5 2500U APU, the first 4th generation APU, based on the Zen CPU architecture and the Vega graphics architecture.
== Features ==
=== Heterogeneous System Architecture ===
AMD is a founding member of the Heterogeneous System Architecture (HSA) Foundation and is consequently actively working on developing HSA in cooperation with other members. The following hardware and software implementations are available in AMD's APU-branded products:
=== Feature overview ===
The following table shows features of AMD's processors with 3D graphics, including APUs (see also: List of AMD processors with 3D graphics).
== APU or Radeon Graphics branded platforms ==
AMD APUs have CPU modules, cache, and a discrete-class graphics processor, all on the same die and sharing the same bus. This architecture allows compute frameworks such as OpenCL to offload work to the integrated graphics processor. The goal is a "fully integrated" APU which, according to AMD, will eventually feature "heterogeneous cores" capable of processing both CPU and GPU work automatically, depending on the workload.
=== TeraScale-based GPU ===
==== K10 architecture (2011): Llano ====
"Stars" AMD K10-cores
Integrated Evergreen/VLIW5-based GPU (branded Radeon HD 6000 series)
Northbridge
PCIe
DDR3 memory controller to arbitrate between coherent and non-coherent memory requests. The physical memory is partitioned between the GPU (up to 512 MB) and the CPU (the remainder).
Unified Video Decoder
AMD Eyefinity multi-monitor-support
The first-generation APU, released in June 2011, was used in both desktops and laptops. It was based on the K10 architecture and built on a 32 nm process, featuring two to four CPU cores with a thermal design power (TDP) of 65–100 W, and integrated graphics based on the Radeon HD 6000 series with support for DirectX 11, OpenGL 4.2 and OpenCL 1.2. In performance comparisons against the similarly priced Intel Core i3-2105, the Llano APU was criticised for its poor CPU performance and praised for its better GPU performance. AMD was later criticised for abandoning Socket FM1 after one generation.
==== Bobcat architecture (2011): Ontario, Zacate, Desna, Hondo ====
Bobcat-based CPU
Evergreen/VLIW5-based GPU (branded Radeon HD 6000 series and Radeon HD 7000 series)
Northbridge
PCIe support.
DDR3 SDRAM memory controller to arbitrate between coherent and non-coherent memory requests. The physical memory is partitioned between the GPU (up to 512 MB) and the CPU (the remainder).
Unified Video Decoder (UVD)
The AMD Brazos platform was introduced on 4 January 2011, targeting the subnotebook, netbook and low-power small form factor markets. It features the 9-watt AMD C-Series APU (codename: Ontario) for netbooks and low-power devices as well as the 18-watt AMD E-Series APU (codename: Zacate) for mainstream and value notebooks, all-in-ones and small form factor desktops. Both APUs feature one or two Bobcat x86 cores and a Radeon Evergreen-series GPU with full DirectX 11, DirectCompute and OpenCL support, including UVD3 video acceleration for HD video up to 1080p.
AMD expanded the Brazos platform on 5 June 2011 with the announcement of the 5.9-watt AMD Z-Series APU (codename: Desna) designed for the Tablet market. The Desna APU is based on the 9-watt Ontario APU. Energy savings were achieved by lowering the CPU, GPU and northbridge voltages, reducing the idle clocks of the CPU and GPU as well as introducing a hardware thermal control mode. A bidirectional turbo core mode was also introduced.
AMD announced the Brazos-T platform on 9 October 2012. It comprised the 4.5-watt AMD Z-Series APU (codenamed Hondo) and the A55T Fusion Controller Hub (FCH), designed for the tablet computer market. The Hondo APU is a redesign of the Desna APU. AMD lowered energy use by optimizing the APU and FCH for tablet computers.
The Deccan platform, including the Krishna and Wichita APUs, was cancelled in 2011. AMD had originally planned to release them in the second half of 2012.
==== Piledriver architecture (2012): Trinity and Richland ====
Piledriver-based CPU
Northern Islands/VLIW4-based GPU (branded Radeon HD 7000 and 8000 series)
Unified Northbridge – includes AMD Turbo Core 3.0, which enables automatic bidirectional power management between CPU modules and GPU. Power to the CPU and GPU is controlled automatically by changing the clock rate depending on the load. For example, for a non-overclocked A10-5800K APU the CPU frequency can change from 1.4 GHz to 4.2 GHz, and the GPU frequency can change from 304 MHz to 800 MHz. In addition, CC6 mode is capable of powering down individual CPU cores, while PC6 mode is able to lower the power on the entire rail.
AMD HD Media Accelerator – includes AMD Perfect Picture HD, AMD Quick Stream technology, and AMD Steady Video technology.
Display controllers: AMD Eyefinity-support for multi-monitor set-ups, HDMI, DisplayPort 1.2, DVI
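Using the A10-5800K clock ranges quoted above, the bidirectional boost idea can be sketched as a toy governor. The linear policy here is an illustrative simplification; AMD's actual Turbo Core algorithm is proprietary and also weighs temperature and total TDP headroom:

```python
def pick_clocks(cpu_load, gpu_load):
    """Toy bidirectional boost governor: scale each clock linearly with its
    load, within the ranges quoted for a stock A10-5800K."""
    CPU_MIN, CPU_MAX = 1.4, 4.2    # GHz, from the quoted A10-5800K range
    GPU_MIN, GPU_MAX = 304, 800    # MHz, from the quoted A10-5800K range
    cpu_load = max(0.0, min(1.0, cpu_load))   # clamp loads to [0, 1]
    gpu_load = max(0.0, min(1.0, gpu_load))
    cpu_ghz = CPU_MIN + (CPU_MAX - CPU_MIN) * cpu_load
    gpu_mhz = GPU_MIN + (GPU_MAX - GPU_MIN) * gpu_load
    return round(cpu_ghz, 2), round(gpu_mhz)
```

A fully CPU-bound workload pushes the CPU to its 4.2 GHz ceiling while the GPU idles at 304 MHz, and vice versa for a GPU-bound workload.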
Trinity
The first iteration of the second-generation platform, released in October 2012, brought improvements to CPU and GPU performance for both desktops and laptops. The platform features two to four Piledriver CPU cores built on a 32 nm process with a TDP between 65 W and 100 W, and a GPU based on the Radeon HD 7000 series with support for DirectX 11, OpenGL 4.2, and OpenCL 1.2. The Trinity APU was praised for its improvements in CPU performance compared to the Llano APU.
Richland
"Enhanced Piledriver" CPU cores
Temperature Smart Turbo Core technology: an advancement of the existing Turbo Core technology, which allows internal software to adjust the CPU and GPU clock speeds to maximise performance within the constraints of the thermal design power of the APU.
New low-power consumption CPUs with only 45 W TDP
This second iteration of the generation was released on 12 March 2013 for mobile parts and on 5 June 2013 for desktop parts.
=== Graphics Core Next-based GPU ===
==== Jaguar architecture (2013): Kabini and Temash ====
Jaguar-based CPU
Graphics Core Next 2nd Gen-based GPU
Socket AM1 and Socket FT3 support
Target segment desktop and mobile
In January 2013, the Jaguar-based Kabini and Temash APUs were unveiled as the successors of the Bobcat-based Ontario, Zacate and Hondo APUs. The Kabini APU is aimed at the low-power, subnotebook, netbook, ultra-thin and small form factor markets, while the Temash APU is aimed at the tablet, ultra-low-power and small form factor markets. The two to four Jaguar cores of the Kabini and Temash APUs feature numerous architectural improvements in power requirements and performance, such as support for newer x86 instructions, higher instructions per clock (IPC), a CC6 power-state mode and clock gating. Kabini and Temash are AMD's first SoCs, and also the first-ever quad-core x86-based SoCs. The integrated Fusion Controller Hubs (FCH) for Kabini and Temash are codenamed "Yangtze" and "Salton", respectively. The Yangtze FCH features support for two USB 3.0 ports, two SATA 6 Gbit/s ports, as well as the xHCI 1.0 and SD/SDIO 3.0 protocols for SD-card support.
Both chips feature DirectX 11.1-compliant GCN-based graphics as well as numerous HSA improvements.
They were fabricated at a 28 nm process in an FT3 ball grid array package by Taiwan Semiconductor Manufacturing Company (TSMC), and were released on 23 May 2013.
The PlayStation 4 and Xbox One were revealed to both be powered by 8-core semi-custom Jaguar-derived APUs.
==== Steamroller architecture (2014): Kaveri ====
Steamroller-based CPU with 2–4 cores
Graphics Core Next 2nd Gen-based GPU with 192–512 shader processors
15–95 W thermal design power
Fastest mobile processor of this series: AMD FX-7600P (35 W)
Fastest desktop processor of this series: AMD A10-7850K (95 W)
Socket FM2+ and Socket FP3
Target segment desktop and mobile
Heterogeneous System Architecture-enabled zero-copying through pointer passing
The third generation of the platform, codenamed Kaveri, was partly released on 14 January 2014. Kaveri contains up to four Steamroller CPU cores clocked at 3.9 GHz with a turbo mode of 4.1 GHz, up to a 512-core Graphics Core Next GPU, two decode units per module instead of one (which allows each core to decode four instructions per cycle instead of two), AMD TrueAudio, the Mantle API, and an on-chip ARM Cortex-A5 MPCore, and it was released with a new socket, FM2+. Ian Cutress and Rahul Garg of AnandTech asserted that Kaveri represented the unified system-on-a-chip realization of AMD's acquisition of ATI. The performance of the 45 W A8-7600 Kaveri APU was found to be similar to that of the 100 W Richland part, leading to the claim that AMD had made significant improvements in on-die graphics performance per watt; however, CPU performance was found to lag behind similarly specified Intel processors, a lag that was unlikely to be resolved within the Bulldozer family of APUs. The A8-7600 part was delayed from a Q1 launch to an H1 launch because the Steamroller architecture components allegedly did not scale well at higher clock speeds.
AMD announced the release of the Kaveri APU for the mobile market on 4 June 2014 at Computex 2014, shortly after the accidental announcement on the AMD website on 26 May 2014. The announcement included components targeted at the standard voltage, low-voltage, and ultra-low voltage segments of the market. In early-access performance testing of a Kaveri prototype laptop, AnandTech found that the 35 W FX-7600P was competitive with the similarly priced 17 W Intel i7-4500U in synthetic CPU-focused benchmarks, and was significantly better than previous integrated GPU systems on GPU-focused benchmarks. Tom's Hardware reported the performance of the Kaveri FX-7600P against the 35 W Intel i7-4702MQ, finding that the i7-4702MQ was significantly better than the FX-7600P in synthetic CPU-focused benchmarks, whereas the FX-7600P was significantly better than the i7-4702MQ's Intel HD 4600 iGPU in the four games that could be tested in the time available to the team.
==== Puma architecture (2014): Beema and Mullins ====
Puma-based CPU
Graphics Core Next 2nd Gen-based GPU with 128 shader processors
Socket FT3
Target segment ultra-mobile
==== Puma+ architecture (2015): Carrizo-L ====
Puma+-based CPU with 2–4 cores
Graphics Core Next 2nd Gen-based GPU with 128 shader processors
12–25 W configurable TDP
Socket FP4 support; pin-compatible with Carrizo
Target segment mobile and ultra-mobile
==== Excavator architecture (2015): Carrizo ====
Excavator-based CPU with 4 cores
Graphics Core Next 3rd Gen-based GPU
Memory controller supports DDR3 SDRAM at 2133 MHz and DDR4 SDRAM at 1866 MHz
15–35 W configurable TDP (with the 15 W cTDP unit having reduced performance)
Integrated southbridge
Socket FP4
Target segment mobile
Announced by AMD on YouTube (19 November 2014)
==== Steamroller architecture (Q2–Q3 2015): Godavari ====
Update of the desktop Kaveri series with higher clock frequencies or smaller power envelope
Steamroller-based CPU with 4 cores
Graphics Core Next 2nd Gen-based GPU
Memory controller supports DDR3 SDRAM at 2133 MHz
65/95 W TDP with support for configurable TDP
Socket FM2+
Target segment desktop
Listed since Q2 2015
==== Excavator architecture (2016): Bristol Ridge and Stoney Ridge ====
Excavator-based CPU with 2–4 cores
1 MB L2 cache per module
Graphics Core Next 3rd Gen-based GPU
Memory controller supports DDR4 SDRAM
15/35/45/65 W TDP with support for configurable TDP
28 nm
Socket AM4 for desktop
Target segment desktop, mobile and ultra-mobile
==== Zen architecture (2017): Raven Ridge ====
Zen-based CPU cores with simultaneous multithreading (SMT)
512 KB L2 cache per core
4 MB L3 cache
Precision Boost 2
Graphics Core Next 5th Gen "Vega"-based GPU
Memory controller supports DDR4 SDRAM
Video Core Next as successor of UVD+VCE
14 nm at GlobalFoundries
Socket FP5 for mobile and AM4 for desktop
Target segment desktop and mobile
Listed since Q4 2017
==== Zen+ architecture (2018): Picasso ====
Zen+-based CPU microarchitecture
Refresh of Raven Ridge on 12 nm with improved latency and efficiency/clock frequency. Features similar to Raven Ridge
Launched April 2018
==== Zen 2 architecture (2019): Renoir ====
Zen 2-based CPU microarchitecture
Graphics Core Next 5th Gen "Vega"-based GPU
VCN 2.1
Memory controller supports DDR4 and LPDDR4X SDRAM up to 4266 MHz
15 and 45 W TDP for mobile and 35 and 65 W TDP for desktop
7 nm at TSMC
Socket FP6 for mobile and socket AM4 for desktop
Release July 2019
==== Zen 3 architecture (2020): Cezanne ====
Zen 3-based CPU microarchitecture
Graphics Core Next 5th Gen "Vega"-based GPU
Memory controller supports DDR4 and LPDDR4X SDRAM up to 4266 MHz
Up to 45 W TDP for mobile; 35 W to 65 W TDP for desktop
7 nm at TSMC
Socket AM4 for desktop
Socket FP6 for mobile
Released for mobile in early 2021; desktop counterparts were released in November 2020.
=== RDNA-based GPU ===
==== Zen 3+ architecture (2022): Rembrandt ====
Zen 3+ based CPU microarchitecture
RDNA 2-based GPU
Memory controller supports DDR5-4800 and LPDDR5-6400
Up to 45 W TDP for mobile
Node: TSMC N6
Socket FP7 for mobile
Released for mobiles early 2022
== See also ==
List of AMD processors with 3D graphics
Ryzen
AMD mobile platform
List of AMD mobile microprocessors
Radeon
Intel Graphics Technology
List of Nvidia graphics processing units
== References ==
== External links ==
HSA Heterogeneous System Architecture Overview on YouTube by Vinod Tipparaju at SC13 in November 2013
HSA and the software ecosystem
HSA
Intel Graphics Technology (GT) is the collective name for a series of integrated graphics processors (IGPs) produced by Intel that are manufactured on the same package or die as the central processing unit (CPU). It was first introduced in 2010 as Intel HD Graphics and renamed in 2017 as Intel UHD Graphics.
Intel Iris Graphics and Intel Iris Pro Graphics are the IGP series introduced in 2013 with some models of Haswell processors as the high-performance versions of HD Graphics. Iris Pro Graphics was the first in the series to incorporate embedded DRAM. Since the release of Kaby Lake in 2016, Intel has referred to the technology as Intel Iris Plus Graphics.
In the fourth quarter of 2013, Intel integrated graphics represented, in units, 65% of all PC graphics processor shipments. However, this percentage does not represent actual adoption, as a number of these shipped units end up in systems with discrete graphics cards.
== History ==
Before the introduction of Intel HD Graphics, Intel integrated graphics were built into the motherboard's northbridge as part of Intel's Hub Architecture. They were known as Intel Extreme Graphics and Intel GMA. As part of the Platform Controller Hub (PCH) design, the northbridge was eliminated and graphics processing was moved to the same die as the central processing unit (CPU).
The previous Intel integrated graphics solution, Intel GMA, had a reputation for lacking performance and features, and was therefore not considered a good choice for demanding graphics applications such as 3D gaming. The performance increases brought by Intel HD Graphics made the products competitive with the integrated graphics adapters of its rivals, Nvidia and ATI/AMD. With its low power consumption, Intel HD Graphics was capable enough that PC manufacturers often stopped offering discrete graphics options in both low-end and high-end laptop lines, where reduced dimensions and low power consumption are important.
== Generations ==
Intel HD and Iris Graphics are divided into generations, and within each generation into 'tiers' of increasing performance denoted by a 'GTx' label. Each generation corresponds to the implementation of a Gen graphics microarchitecture, with a corresponding GEN instruction set architecture since Gen4.
=== Gen5 architecture ===
==== Westmere ====
In January 2010, Clarkdale and Arrandale processors with Ironlake graphics were released, and branded as Celeron, Pentium, or Core with HD Graphics. There was only one specification: 12 execution units, up to 43.2 GFLOPS at 900 MHz. It can decode an H.264 1080p video at up to 40 fps.
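The quoted peak figure is consistent with 4 single-precision FLOPs per execution unit per clock (for example, a 2-wide multiply-add); that per-EU rate is an assumption used only in this back-of-the-envelope check:

```python
# Back-of-the-envelope check of the quoted Ironlake peak throughput.
eus = 12                      # execution units (from the text)
clock_ghz = 0.9               # 900 MHz (from the text)
flops_per_eu_per_clock = 4    # assumption: e.g. a 2-wide multiply-add per EU
gflops = eus * flops_per_eu_per_clock * clock_ghz  # ~43.2, matching the quoted figure
```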
Its direct predecessor, the GMA X4500, featured 10 EUs at 800 MHz, but it lacked some capabilities.
=== Gen6 architecture ===
==== Sandy Bridge ====
In January 2011, the Sandy Bridge processors were released, introducing the "second generation" HD Graphics:
Sandy Bridge Celeron and Pentium have Intel HD, while Core i3 and above have either HD 2000 or HD 3000. HD Graphics 2000 and 3000 include hardware video encoding and HD postprocessing effects.
=== Gen7 architecture ===
==== Ivy Bridge ====
On 24 April 2012, Ivy Bridge was released, introducing the "third generation" of Intel's HD graphics:
Ivy Bridge Celeron and Pentium have Intel HD, while Core i3 and above have either HD 2500 or HD 4000. HD Graphics 2500 and 4000 include hardware video encoding and HD postprocessing effects.
For some low-power mobile CPUs there is limited video decoding support, while none of the desktop CPUs have this limitation. HD P4000 is featured on the Ivy Bridge E3 Xeon processors with the 12X5 v2 descriptor, and supports unbuffered ECC RAM.
=== Gen7.5 architecture ===
==== Haswell ====
In June 2013, Haswell CPUs were announced, with four tiers of integrated GPUs:
The 128 MB of eDRAM in the Iris Pro GT3e is in the same package as the CPU, but on a separate die manufactured in a different process. Intel refers to this as a Level 4 cache, available to both CPU and GPU, and names it Crystalwell. The Linux drm/i915 driver has been aware of and able to use this eDRAM since kernel version 3.12.
=== Gen8 architecture ===
==== Broadwell ====
In November 2013, it was announced that Broadwell-K desktop processors (aimed at enthusiasts) would also carry Iris Pro Graphics.
The following models of integrated GPU are announced for Broadwell processors:
==== Braswell ====
=== Gen9 architecture ===
==== Skylake ====
The Skylake line of processors, launched in August 2015, retires VGA support, while supporting multi-monitor setups of up to three monitors connected via HDMI 1.4, DisplayPort 1.2 or Embedded DisplayPort (eDP) 1.3 interfaces.
The following models of integrated GPU are available or announced for the Skylake processors:
==== Apollo Lake ====
The Apollo Lake line of processors was launched in August 2016.
=== Gen9.5 architecture ===
==== Kaby Lake ====
The Kaby Lake line of processors was introduced in August 2016. New features: speed increases, support for 4K UHD "premium" (DRM encoded) streaming services, media engine with full hardware acceleration of 8- and 10-bit HEVC and VP9 decode.
==== Kaby Lake Refresh / Amber Lake / Coffee Lake / Coffee Lake Refresh / Whiskey Lake / Comet Lake ====
The Kaby Lake Refresh line of processors was introduced in October 2017. New features: HDCP 2.2 support
==== Gemini Lake/Gemini Lake Refresh ====
New features: HDMI 2.0 support, VP9 10-bit Profile2 hardware decoder
=== Gen11 architecture ===
==== Ice Lake ====
New features: 10 nm Gen 11 GPU microarchitecture, two HEVC 10-bit encode pipelines, three 4K display pipelines (or 2× 5K60, 1× 4K120), variable rate shading (VRS), and integer scaling.
While the microarchitecture continues to support double-precision floating point as previous versions did, the mobile configurations omit the feature, so on those parts it is supported only through emulation.
=== Xe-LP architecture (Gen12) ===
These are based on the Intel Xe-LP microarchitecture, the low-power variant of the Intel Xe GPU architecture, also known as Gen 12. New features include Sampler Feedback, Dual Queue Support, DirectX 12 View Instancing Tier 2, and AV1 8-bit and 10-bit fixed-function hardware decoding. Support for FP64 was removed.
=== Arc Alchemist Tile GPU (Gen12.7) ===
Intel Meteor Lake and Arrow Lake use Intel Arc Alchemist Tile GPU microarchitecture.
New features: DirectX 12 Ultimate Feature Level 12_2 support, 8K 10-bit AV1 hardware encoder, HDMI 2.1 48 Gbps native support
==== Meteor Lake ====
=== Arc Battlemage Tile GPU ===
Intel Lunar Lake will use Intel Arc Battlemage Tile GPU microarchitecture.
== Features ==
=== Intel Insider ===
Beginning with Sandy Bridge, the graphics processors include a form of digital copy protection and digital rights management (DRM) called Intel Insider, which allows decryption of protected media within the processor. Previously there was a similar technology called Protected Audio Video Path (PAVP).
=== HDCP ===
Intel Graphics Technology supports the HDCP technology, but the actual HDCP support depends on the computer's motherboard.
=== Intel Quick Sync Video ===
Intel Quick Sync Video is Intel's hardware video encoding and decoding technology, which is integrated into some of the Intel CPUs. The name "Quick Sync" refers to the use case of quickly transcoding ("syncing") a video from, for example, a DVD or Blu-ray Disc to a format appropriate to, for example, a smartphone. Quick Sync was introduced with the Gen 6 in Sandy Bridge microprocessors on 9 January 2011.
=== Graphics Virtualization Technology ===
Graphics Virtualization Technology (GVT) was announced 1 January 2014 and introduced at the same time as Intel Iris Pro. Intel integrated GPUs support the following sharing methods:
Direct passthrough (GVT-d): the GPU is available for a single virtual machine without sharing with other machines
Paravirtualized API forwarding (GVT-s): the GPU is shared by multiple virtual machines using a virtual graphics driver; few supported graphics APIs (OpenGL, DirectX), no support for GPGPU
Full GPU virtualization (GVT-g): the GPU is shared by multiple virtual machines (and by the host machine) on a time-sharing basis using a native graphics driver; similar to AMD's MxGPU and Nvidia's vGPU, which are available only on professional line cards (Radeon Pro and Nvidia Quadro)
Full GPU virtualization in hardware (SR-IOV): the GPU can be partitioned and shared by multiple virtual machines and the host with built-in hardware support, unlike GVT-g, which does this in software (in the driver).
Gen9 (i.e. the graphics powering 6th- through 9th-generation Intel processors) is the last generation to support the software-based vGPU solution GVT-g (Intel Graphics Virtualization Technology -g).
SR-IOV (Single Root I/O Virtualization) is supported only on platforms with 11th-generation Intel Core "G" processors (products formerly known as Tiger Lake) or newer. This leaves Rocket Lake (11th-generation Intel desktop processors) without support for either GVT-g or SR-IOV, meaning Rocket Lake has no full GPU virtualization support. Starting with 12th-generation Intel Core processors, both desktop and laptop Intel CPUs have SR-IOV support.
=== Multiple monitors ===
==== Ivy Bridge ====
HD 2500 and HD 4000 GPUs in Ivy Bridge CPUs are advertised as supporting three active monitors, but this only works if two of the monitors are configured identically, which covers many but not all three-monitor configurations. The reason for this is that the chipsets only include two phase-locked loops (PLLs) for generating the pixel clocks timing the data being transferred to the displays.
Therefore, three simultaneously active monitors can only be achieved when at least two of them share the same pixel clock, such as:
Using two or three DisplayPort connections, as they require only a single pixel clock for all connections. Passive adapters from DisplayPort to some other connector do not count as a DisplayPort connection, as they rely on the chipset being able to emit a non-DisplayPort signal through the DisplayPort connector. Active adapters that contain additional logic to convert the DisplayPort signal to some other format count as a DisplayPort connection.
Using two non-DisplayPort connections of the same connection type (for example, two HDMI connections) and the same clock frequency (like when connected to two identical monitors at the same resolution), so that a single unique pixel clock can be shared between both connections.
Another possible three-monitor solution uses the Embedded DisplayPort on a mobile CPU (which does not use a chipset PLL at all) along with any two chipset outputs.
==== Haswell ====
ASRock Z87- and H87-based motherboards support three displays simultaneously. Asus H87-based motherboards are also advertised to support three independent monitors at once.
== Capabilities (GPU hardware) ==
OpenCL 2.1 and 2.2 are possible on OpenCL 2.0 hardware (Broadwell and later) with future software updates.
Support in Mesa is provided by two Gallium3D-style drivers, with the Iris driver supporting Broadwell hardware and later, while the Crocus driver supports Haswell and earlier. The classic Mesa i965 driver was removed in Mesa 22.0, although it would continue to see further maintenance as part of the Amber branch.
The new OpenCL driver in Mesa is RustiCL. This driver, written in the Rust language, is OpenCL 3.0 conformant for Intel Xe Graphics as of Mesa 22.3. Intel Broadwell and later will also be conformant to OpenCL 3.0 with many 2.x features; for Intel Ivy Bridge and Haswell, the target is OpenCL 1.2. The current development state is available on mesamatrix.
The NEO compute runtime driver supports OpenCL 3.0 (with 1.2, 2.0 and 2.1 features included) for Broadwell and later, and Level Zero API 1.3 for Skylake and later.
All GVT virtualization methods are supported since the Broadwell processor family with KVM and Xen.
== Capabilities (GPU video acceleration) ==
Intel developed a dedicated SIP core which implements multiple video decompression and compression algorithms branded Intel Quick Sync Video. Some are implemented completely, some only partially.
=== Hardware-accelerated algorithms ===
=== Intel Pentium and Celeron family ===
=== Intel Atom family ===
== Documentation ==
Intel releases programming manuals for most of Intel HD Graphics devices via its Open Source Technology Center. This allows various open source enthusiasts and hackers to contribute to driver development, and port drivers to various operating systems, without the need for reverse engineering.
== See also ==
Graphics card
AMD APU
Free and open-source graphics device driver
List of Intel graphics processing units
List of Nvidia graphics processing units
List of AMD graphics processing units
== Notes ==
== References ==
== External links ==
Intel Graphics Performance Analyzers 2024.1
Intel's Embedded DRAM
Intel Open Source Technology Center: Linux graphics documentation (includes the GPU manuals) | Wikipedia/Intel_Graphics_Technology |
In 3D computer graphics, ray tracing is a technique for modeling light transport for use in a wide variety of rendering algorithms for generating digital images.
On a spectrum of computational cost and visual fidelity, ray tracing-based rendering techniques, such as ray casting, recursive ray tracing, distribution ray tracing, photon mapping and path tracing, are generally slower and higher fidelity than scanline rendering methods. Thus, ray tracing was first deployed in applications where taking a relatively long time to render could be tolerated, such as still CGI images, and film and television visual effects (VFX), but was less suited to real-time applications such as video games, where speed is critical in rendering each frame.
Since 2018, however, hardware acceleration for real-time ray tracing has become standard on new commercial graphics cards, and graphics APIs have followed suit, allowing developers to use hybrid ray tracing and rasterization-based rendering in games and other real-time applications with a lesser hit to frame render times.
Ray tracing is capable of simulating a variety of optical effects, such as reflection, refraction, soft shadows, scattering, depth of field, motion blur, caustics, ambient occlusion and dispersion phenomena (such as chromatic aberration). It can also be used to trace the path of sound waves in a similar fashion to light waves, making it a viable option for more immersive sound design in video games by rendering realistic reverberation and echoes. In fact, any physical wave or particle phenomenon with approximately linear motion can be simulated with ray tracing.
Ray tracing-based rendering techniques that involve sampling light over a domain generate image noise artifacts that can be addressed by tracing a very large number of rays or using denoising techniques.
== History ==
The idea of ray tracing comes from as early as the 16th century, when it was described by Albrecht Dürer, who is credited with its invention.
Dürer described multiple techniques for projecting 3-D scenes onto an image plane. Some of these project chosen geometry onto the image plane, as is done with rasterization today. Others determine what geometry is visible along a given ray, as is done with ray tracing.
Using a computer for ray tracing to generate shaded pictures was first accomplished by Arthur Appel in 1968. Appel used ray tracing for primary visibility (determining the closest surface to the camera at each image point) by tracing a ray through each point to be shaded into the scene to identify the visible surface. The closest surface intersected by the ray was the visible one. This non-recursive ray tracing-based rendering algorithm is today called "ray casting". His algorithm then traced secondary rays to the light source from each point being shaded to determine whether the point was in shadow or not.
Later, in 1971, Goldstein and Nagel of MAGI (Mathematical Applications Group, Inc.) published "3-D Visual Simulation", wherein ray tracing was used to make shaded pictures of solids. At the ray-surface intersection point found, they computed the surface normal and, knowing the position of the light source, computed the brightness of the pixel on the screen. Their publication describes a short (30 second) film “made using the University of Maryland’s display hardware outfitted with a 16mm camera. The film showed the helicopter and a simple ground level gun emplacement. The helicopter was programmed to undergo a series of maneuvers including turns, take-offs, and landings, etc., until it eventually is shot down and crashed.” A CDC 6600 computer was used. MAGI produced an animation video called MAGI/SynthaVision Sampler in 1974.
Another early instance of ray casting came in 1976, when Scott Roth created a flip book animation in Bob Sproull's computer graphics course at Caltech. The scanned pages are shown as a video in the accompanying image. Roth's computer program noted an edge point at a pixel location if the ray intersected a bounded plane different from that of its neighbors. Of course, a ray could intersect multiple planes in space, but only the surface point closest to the camera was noted as visible. The platform was a DEC PDP-10, a Tektronix storage-tube display, and a printer which would create an image of the display on rolling thermal paper. Roth extended the framework, introduced the term ray casting in the context of computer graphics and solid modeling, and in 1982 published his work while at GM Research Labs.
Turner Whitted was the first to show recursive ray tracing for mirror reflection and for refraction through translucent objects, with an angle determined by the solid's index of refraction, and to use ray tracing for anti-aliasing. Whitted also showed ray traced shadows. He produced a recursive ray-traced film called The Compleat Angler in 1979 while an engineer at Bell Labs. Whitted's deeply recursive ray tracing algorithm reframed rendering from being primarily a matter of surface visibility determination to being a matter of light transport. His paper inspired a series of subsequent work by others that included distribution ray tracing and finally unbiased path tracing, which provides the rendering equation framework that has allowed computer generated imagery to be faithful to reality.
For decades, global illumination in major films using computer-generated imagery was approximated with additional lights. Ray tracing-based rendering eventually changed that by enabling physically-based light transport. Early feature films rendered entirely using path tracing include Monster House (2006), Cloudy with a Chance of Meatballs (2009), and Monsters University (2013).
== Algorithm overview ==
Optical ray tracing describes a method for producing visual images constructed in 3-D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
Scenes in ray tracing are described mathematically by a programmer or by a visual artist (normally using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or "backward" to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded.
Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
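As a rough illustration, the per-ray control flow just described can be sketched in Python. This is a minimal sketch with a hypothetical scene API; `closest_hit`, `material.shade` and `background_color` are assumed names, not from any particular renderer.

```python
MAX_DEPTH = 5         # assumed maximum number of reflections
MAX_DISTANCE = 1.0e6  # assumed distance after which a ray is abandoned

def trace_ray(scene, origin, direction, depth=0):
    # Stop after a maximum number of reflections.
    if depth > MAX_DEPTH:
        return scene.background_color
    # Find the nearest object blocking the path of the ray.
    hit = scene.closest_hit(origin, direction)
    # Stop if the ray travels too far without intersecting anything.
    if hit is None or hit.distance > MAX_DISTANCE:
        return scene.background_color
    # Estimate incoming light, examine the material, and combine both;
    # the material may recursively re-cast secondary rays into the scene.
    return hit.material.shade(scene, hit, depth)
```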
=== Calculate rays for rectangular viewport ===
On input we have (in the calculations we use vector normalization and the cross product):
$E \in \mathbb{R}^3$ - eye position
$T \in \mathbb{R}^3$ - target position
$\theta \in [0,\pi]$ - field of view; for humans, we can assume $\approx \pi/2\ \text{rad} = 90^\circ$
$m, k \in \mathbb{N}$ - numbers of square pixels on the viewport in the vertical and horizontal direction
$i, j \in \mathbb{N},\ 1 \leq i \leq k \land 1 \leq j \leq m$ - indices of the current pixel
$\vec{v} \in \mathbb{R}^3$ - vertical vector indicating which way is up and down, usually $\vec{v} = [0,1,0]$; its roll component determines the viewport rotation around point $C$ (where the axis of rotation is the segment $ET$)
The idea is to find the position of each viewport pixel center $P_{ij}$, which allows us to find the line going from the eye $E$ through that pixel and finally get the ray described by the point $E$ and the vector $\vec{R}_{ij} = P_{ij} - E$ (or its normalisation $\vec{r}_{ij}$). First we need to find the coordinates of the bottom-left viewport pixel $P_{1m}$; we then find each next pixel by making a shift along the directions parallel to the viewport (the vectors $\vec{b}_n$, $\vec{v}_n$) multiplied by the size of a pixel. Below we introduce formulas which include the distance $d$ between the eye and the viewport. However, this value cancels during ray normalisation to $\vec{r}_{ij}$ (so you might as well set $d = 1$ and remove it from the calculations).
Pre-calculations: let us find and normalise the vector $\vec{t}$ and the vectors $\vec{b}, \vec{v}$ which are parallel to the viewport (all depicted in the accompanying figure):

$\vec{t} = T - E, \qquad \vec{b} = \vec{t} \times \vec{v}$

$\vec{t}_n = \frac{\vec{t}}{\|\vec{t}\|}, \qquad \vec{b}_n = \frac{\vec{b}}{\|\vec{b}\|}, \qquad \vec{v}_n = \vec{t}_n \times \vec{b}_n$
Note that the viewport center $C = E + \vec{t}_n d$. Next we calculate the viewport half-sizes $g_x = h_x/2$ and $g_y = h_y/2$, including the inverse aspect ratio $\frac{m-1}{k-1}$:

$g_x = \frac{h_x}{2} = d \tan\frac{\theta}{2}, \qquad g_y = \frac{h_y}{2} = g_x \frac{m-1}{k-1}$
Then we calculate the next-pixel shifting vectors $\vec{q}_x, \vec{q}_y$ along the directions parallel to the viewport ($\vec{b}_n, \vec{v}_n$), and the bottom-left pixel center $\vec{p}_{1m}$:

$\vec{q}_x = \frac{2g_x}{k-1}\vec{b}_n, \qquad \vec{q}_y = \frac{2g_y}{m-1}\vec{v}_n, \qquad \vec{p}_{1m} = \vec{t}_n d - g_x \vec{b}_n - g_y \vec{v}_n$
Calculations: note that $P_{ij} = E + \vec{p}_{ij}$ and the ray $\vec{R}_{ij} = P_{ij} - E = \vec{p}_{ij}$, so

$\vec{p}_{ij} = \vec{p}_{1m} + \vec{q}_x (i-1) + \vec{q}_y (j-1)$

$\vec{r}_{ij} = \frac{\vec{R}_{ij}}{\|\vec{R}_{ij}\|} = \frac{\vec{p}_{ij}}{\|\vec{p}_{ij}\|}$
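The formulas above can be collected into a short sketch. The helper names are hypothetical and plain 3-tuples stand in for vectors; this is an illustration of the derivation, not a production implementation.

```python
import math

def viewport_rays(E, T, v_up, theta, k, m, d=1.0):
    """Build the normalised ray direction r_ij for every viewport pixel."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
    def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def norm(a):
        n = math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
        return scale(a, 1.0/n)

    # Pre-calculations: basis vectors parallel to the viewport.
    t_n = norm(sub(T, E))          # t = T - E, normalised
    b_n = norm(cross(t_n, v_up))   # b = t x v, normalised
    v_n = cross(t_n, b_n)          # v_n = t_n x b_n

    # Viewport half-sizes, including the inverse aspect ratio (m-1)/(k-1).
    g_x = d * math.tan(theta / 2)
    g_y = g_x * (m - 1) / (k - 1)

    # Next-pixel shifting vectors and the bottom-left pixel center p_1m.
    q_x = scale(b_n, 2 * g_x / (k - 1))
    q_y = scale(v_n, 2 * g_y / (m - 1))
    p_1m = sub(sub(scale(t_n, d), scale(b_n, g_x)), scale(v_n, g_y))

    rays = []
    for j in range(1, m + 1):
        for i in range(1, k + 1):
            p_ij = add(p_1m, add(scale(q_x, i - 1), scale(q_y, j - 1)))
            rays.append((E, norm(p_ij)))  # ray: origin E, direction r_ij
    return rays
```

For a 3x3 viewport looking down the z-axis with a 90° field of view, the center pixel's ray direction comes out as the view direction itself, as expected.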
== Detailed description of ray tracing computer algorithm and its genesis ==
=== What happens in nature (simplified) ===
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength color in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.
=== Ray casting algorithm ===
The idea behind ray casting, the predecessor to recursive ray tracing, is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3-D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.
=== Volume ray casting algorithm ===
In the method of volume ray casting, each ray is traced so that color and/or density can be sampled along the ray and then be combined into a final pixel color.
This is often used when objects cannot be easily represented by explicit surfaces (such as triangles), for example when rendering clouds or 3D medical scans.
=== SDF ray marching algorithm ===
In SDF ray marching, or sphere tracing, each ray is traced in multiple steps to approximate an intersection point between the ray and a surface defined by a signed distance function (SDF). The SDF is evaluated at each iteration in order to be able to take as large a step as possible without missing any part of the surface. A threshold is used to stop further iteration when a point is reached that is close enough to the surface. This method is often used for 3-D fractal rendering.
=== Recursive ray tracing algorithm ===
Earlier algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Recursive ray tracing continues the process. When a ray hits a surface, additional rays may be cast because of reflection, refraction, and shadows:
A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection.
A refraction ray traveling through transparent material works similarly, with the addition that a refractive ray could be entering or exiting a material. Turner Whitted extended the mathematical logic for rays passing through a transparent solid to include the effects of refraction.
A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it.
These recursive rays add more realism to ray traced images.
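As an illustration of the shadow-ray test described above, here is a minimal sketch; the `scene.closest_hit` API is a hypothetical name, and the small epsilon offset is a common practical detail, not part of the algorithm's definition:

```python
def is_in_shadow(scene, point, light_pos, eps=1e-4):
    """Trace a shadow ray from a shaded point toward one light."""
    # Direction and distance from the shaded point to the light.
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    dist = sum(x * x for x in to_light) ** 0.5
    direction = tuple(x / dist for x in to_light)
    # Offset the origin slightly to avoid re-hitting the same surface.
    origin = tuple(p + eps * d for p, d in zip(point, direction))
    hit = scene.closest_hit(origin, direction)
    # In shadow iff an opaque object lies between the point and the light.
    return hit is not None and hit.distance < dist
```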
=== Advantages over other rendering methods ===
Ray tracing-based rendering's popularity stems from its basis in a realistic simulation of light transport, as compared to other rendering methods, such as rasterization, which focuses more on the realistic simulation of geometry. Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to a basic level of parallelization, but the divergence of ray paths makes high utilization under parallelism quite difficult to achieve in practice.
=== Disadvantages ===
A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. number of pixels on-screen). Until the late 2010s, ray tracing in real time was usually considered impossible on consumer hardware for nontrivial tasks. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed.
Whitted-style recursive ray tracing handles interreflection and optical effects such as refraction, but is not generally photorealistic. Improved realism occurs when the rendering equation is fully evaluated, as the equation conceptually includes every physical effect of light flow. However, this is infeasible given the computing resources required, and the limitations on geometric and material modeling fidelity. Path tracing is an algorithm for evaluating the rendering equation and thus gives a higher-fidelity simulation of real-world lighting.
=== Reversed direction of traversal of scene by the rays ===
The process of shooting rays from the eye to the light source to render an image is sometimes called backwards ray tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term backwards ray tracing to mean shooting rays from the lights and gathering the results. Therefore, it is clearer to distinguish eye-based versus light-based ray tracing.
While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.
Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points. The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.
An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.
To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions.
First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.
=== Example ===
As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.
In vector notation, the equation of a sphere with center $\mathbf{c}$ and radius $r$ is

$\|\mathbf{x} - \mathbf{c}\|^2 = r^2.$
Any point on a ray starting from point $\mathbf{s}$ with direction $\mathbf{d}$ (here $\mathbf{d}$ is a unit vector) can be written as

$\mathbf{x} = \mathbf{s} + t\mathbf{d},$
where $t$ is the distance between $\mathbf{x}$ and $\mathbf{s}$. In our problem, we know $\mathbf{c}$, $r$, $\mathbf{s}$ (e.g. the position of a light source) and $\mathbf{d}$, and we need to find $t$. Therefore, we substitute for $\mathbf{x}$:

$\|\mathbf{s} + t\mathbf{d} - \mathbf{c}\|^2 = r^2.$
Let $\mathbf{v} \mathrel{\stackrel{\mathrm{def}}{=}} \mathbf{s} - \mathbf{c}$ for simplicity; then

$\|\mathbf{v} + t\mathbf{d}\|^2 = r^2$

$\mathbf{v}^2 + t^2\mathbf{d}^2 + 2t\,\mathbf{v} \cdot \mathbf{d} = r^2$

$(\mathbf{d}^2)t^2 + (2\mathbf{v} \cdot \mathbf{d})t + (\mathbf{v}^2 - r^2) = 0.$
Knowing that $\mathbf{d}$ is a unit vector allows us this minor simplification:

$t^2 + (2\mathbf{v} \cdot \mathbf{d})t + (\mathbf{v}^2 - r^2) = 0.$
This quadratic equation has solutions

$t = \frac{-(2\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(2\mathbf{v} \cdot \mathbf{d})^2 - 4(\mathbf{v}^2 - r^2)}}{2} = -(\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(\mathbf{v} \cdot \mathbf{d})^2 - (\mathbf{v}^2 - r^2)}.$
The two values of $t$ found by solving this equation are the ones for which $\mathbf{s} + t\mathbf{d}$ are the points where the ray intersects the sphere.
Any negative value does not lie on the ray, but rather on the opposite half-line (i.e. the one starting from $\mathbf{s}$ with the opposite direction).
If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.
Let us suppose now that there is at least one positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find the direction in which the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.
The normal to the sphere is simply

$\mathbf{n} = \frac{\mathbf{y} - \mathbf{c}}{\|\mathbf{y} - \mathbf{c}\|},$
where $\mathbf{y} = \mathbf{s} + t\mathbf{d}$ is the intersection point found before. The reflection direction can be found by a reflection of $\mathbf{d}$ with respect to $\mathbf{n}$, that is

$\mathbf{r} = \mathbf{d} - 2(\mathbf{n} \cdot \mathbf{d})\mathbf{n}.$

Thus the reflected ray has equation

$\mathbf{x} = \mathbf{y} + u\mathbf{r}.$
Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and that of the sphere are combined by the reflection.
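The intersection and reflection formulas above translate almost directly into code. The following is a minimal sketch of just that math (not a full renderer); the function name and tuple-based vectors are illustrative choices:

```python
import math

def ray_sphere_reflect(s, d, c, r):
    """Nearest ray-sphere intersection and mirror reflection.
    s: ray origin, d: unit direction, c: sphere center, r: radius.
    Returns (y, refl) for the nearest hit, or None on a miss."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    v = sub(s, c)                        # v := s - c
    b = dot(v, d)                        # v . d
    disc = b * b - (dot(v, v) - r * r)   # discriminant of the quadratic in t
    if disc < 0:
        return None                      # ray misses the sphere
    t = -b - math.sqrt(disc)             # smaller root: nearest intersection
    if t < 0:
        return None                      # intersection behind the origin
    y = tuple(si + t * di for si, di in zip(s, d))  # hit point y = s + t d
    n = sub(y, c)
    n = tuple(x / math.sqrt(dot(n, n)) for x in n)  # unit normal (y - c)/||y - c||
    nd = dot(n, d)
    refl = tuple(di - 2 * nd * ni for di, ni in zip(d, n))  # r = d - 2(n.d)n
    return y, refl
```

For a ray from the origin along +z and a unit sphere centered at (0, 0, 5), the hit point is (0, 0, 4) and the reflected direction points straight back along -z.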
== Adaptive depth control ==
Adaptive depth control means that the renderer stops generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. There must always be a set maximum depth or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.
Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 × 0.5 = 0.25, the third: 0.25 × 0.5 = 0.125, the fourth: 0.125 × 0.5 = 0.0625, the fifth: 0.0625 × 0.5 = 0.03125, etc. In addition we might implement a distance attenuation factor such as 1/D², which would also decrease the intensity contribution.
For a transmitted ray we could do something similar but in that case the distance traveled through the object would cause even faster intensity decrease. As an example of this, Hall & Greenberg found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.
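A minimal sketch of this bookkeeping: the threshold, default maximum depth, and function name are assumed parameters for illustration, not values from the cited work.

```python
def should_spawn(weight, kr, threshold=0.01, depth=0, max_depth=15):
    # `weight` is the running product of reflection coefficients so far;
    # the maximum contribution of the next reflected ray is weight * kr.
    contribution = weight * kr
    if depth >= max_depth or contribution < threshold:
        return None          # prune: the ray tree stops here
    return contribution      # carry this weight into the reflected ray
```

With Kr = 0.5 the carried weights fall as 0.5, 0.25, 0.125, ..., exactly as in the example above, and recursion stops once the product drops below the threshold.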
== Bounding volumes ==
Enclosing groups of objects in sets of bounding volume hierarchies (BVH) decreases the amount of computation required for ray tracing. A cast ray is first tested for an intersection with the bounding volume, and then, if there is an intersection, the volume is recursively subdivided until the ray hits the object. The best type of bounding volume is determined by the shape of the underlying object or objects. For example, if the objects are long and thin, a sphere will enclose mainly empty space compared to a box. Boxes are also easier to use for generating hierarchical bounding volumes.
Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computation time from a linear dependence on the number of objects to something between linear and logarithmic. This is because, in the ideal case, each intersection test would halve the possibilities, resulting in a binary-tree-type structure. Spatial subdivision methods, discussed below, try to achieve this. Furthermore, this acceleration structure makes the ray-tracing computation output-sensitive: the complexity of the ray intersection calculations depends on the number of objects that actually intersect the rays and not (only) on the number of objects in the scene.
Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
Subtrees should contain objects that are near each other and the further down the tree the closer should be the objects.
The volume of each node should be minimal.
The sum of the volumes of all bounding volumes should be minimal.
Greater attention should be placed on the nodes near the root since pruning a branch near the root will remove more potential objects than one farther down the tree.
The time spent constructing the hierarchy should be much less than the time saved by using it.
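As an illustration, the bounding-volume pre-test might be sketched as follows. A bounding sphere is used as the volume, and the `obj.intersect` method (returning a hit distance or None) is a hypothetical API, not from any particular system:

```python
def hits_bounding_sphere(origin, direction, center, radius):
    # Cheap ray/sphere pre-test; `direction` must be a unit vector.
    v = tuple(o - c for o, c in zip(origin, center))
    b = sum(vi * di for vi, di in zip(v, direction))
    disc = b * b - (sum(vi * vi for vi in v) - radius * radius)
    # Hit iff the discriminant is non-negative and some t >= 0 intersects.
    return disc >= 0 and (-b + disc ** 0.5) >= 0

def intersect_group(origin, direction, volume, objects):
    # Only run the expensive per-object tests if the ray enters the volume.
    # Each (hypothetical) obj.intersect returns a hit distance or None.
    if not hits_bounding_sphere(origin, direction, *volume):
        return None
    hits = [t for t in (obj.intersect(origin, direction) for obj in objects)
            if t is not None]
    return min(hits, default=None)
```

Note how a miss against the bounding sphere lets the whole group of contained objects be skipped in a single test, which is the source of the sub-linear behaviour discussed above.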
== Interactive ray tracing ==
The first implementation of an interactive ray tracer was the LINKS-1 Computer Graphics System built in 1982 at Osaka University's School of Engineering, by professors Ohmura Kouichi, Shirakawa Isao and Kawata Toru with 50 students. It was a massively parallel processing computer system with 514 microprocessors (257 Zilog Z8001s and 257 iAPX 86s), used for 3-D computer graphics with high-speed ray tracing. According to the Information Processing Society of Japan: "The core of 3-D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, light source, and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images." It was used to create an early 3-D planetarium-like video of the heavens made completely with computer graphics. The video was presented at the Fujitsu pavilion at the 1985 International Exposition in Tsukuba. It was the second system to do so after the Evans & Sutherland Digistar in 1982. The LINKS-1 was claimed by the designers to be the world's most powerful computer in 1984.
The next interactive ray tracer, and the first known to have been labeled "real-time", was credited at the 2005 SIGGRAPH computer graphics conference as being the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray tracer was an early implementation of a parallel network distributed ray tracing system that achieved several frames per second in rendering performance. This performance was attained by means of the highly optimized yet platform-independent LIBRT ray tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared-memory parallel machines over a commodity network. BRL-CAD's ray tracer, including the REMRT/RT tools, continues to be available and developed today as open source software.
Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3-D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3-D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.
In 1999 a team from the University of Utah, led by Steven Parker, demonstrated interactive ray tracing live at the 1999 Symposium on Interactive 3D Graphics. They rendered a 35 million sphere model at 512 by 512 pixel resolution, running at approximately 15 frames per second on 60 CPUs.
The Open RT project included a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterization based approach for interactive 3-D graphics. Ray tracing hardware, such as the experimental Ray Processing Unit developed by Sven Woop at the Saarland University, was designed to accelerate some of the computationally intensive operations of ray tracing.
The idea that video games could ray trace their graphics in real time received media attention in the late 2000s. During that time, a researcher named Daniel Pohl, under the guidance of graphics professor Philipp Slusallek and in cooperation with Erlangen University and Saarland University in Germany, equipped Quake III and Quake IV with an engine he programmed himself, which Saarland University then demonstrated at CeBIT 2007. Intel, a patron of Saarland, became impressed enough that it hired Pohl and embarked on a research program dedicated to ray traced graphics, which it saw as justifying increasing the number of its processors' cores. On June 12, 2008, Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14–29 frames per second on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.
At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion. OptiX-based renderers are used in Autodesk Arnold, Adobe After Effects, Bunkspeed Shot, Autodesk Maya, 3ds Max, and many other products.
In 2014, a demo of the PlayStation 4 video game The Tomorrow Children, developed by Q-Games and Japan Studio, demonstrated new lighting techniques developed by Q-Games, notably cascaded voxel cone ray tracing, which simulates lighting in real-time and uses more realistic reflections rather than screen space reflections.
Nvidia introduced their GeForce RTX and Quadro RTX GPUs in September 2018, based on the Turing architecture that allows for hardware-accelerated ray tracing. The Nvidia hardware uses a separate functional block, publicly called an "RT core". This unit is somewhat comparable to a texture unit in size, latency, and interface to the processor core. The unit performs BVH traversal, compressed BVH node decompression, ray-AABB intersection testing, and ray-triangle intersection testing. The GeForce RTX, in the form of models 2080 and 2080 Ti, became the first consumer-oriented brand of graphics card that can perform ray tracing in real time, and, in November 2018, Electronic Arts' Battlefield V became the first game to take advantage of its ray tracing capabilities, which it achieves via Microsoft's new API, DirectX Raytracing. AMD, which already offered interactive ray tracing on top of OpenCL through its Radeon ProRender, unveiled in October 2020 the Radeon RX 6000 series, its second-generation Navi GPUs with support for hardware-accelerated ray tracing, at an online event. Games that render their graphics by such means have appeared since, which has been credited to improvements in hardware and to efforts to make more APIs and game engines compatible with the technology. Current home gaming consoles implement dedicated ray tracing hardware components in their GPUs for real-time ray tracing effects, which began with the ninth-generation consoles PlayStation 5, Xbox Series X and Series S.
On November 4, 2021, Imagination Technologies announced their IMG CXT GPU with hardware-accelerated ray tracing. On January 18, 2022, Samsung announced their Exynos 2200 AP SoC with hardware-accelerated ray tracing. On June 28, 2022, Arm announced their Immortalis-G715 with hardware-accelerated ray tracing. On November 16, 2022, Qualcomm announced their Snapdragon 8 Gen 2 with hardware-accelerated ray tracing.
On September 12, 2023, Apple introduced hardware-accelerated ray tracing in its chip designs, beginning with the A17 Pro chip for iPhone 15 Pro models. Later the same year, Apple released the M3 family of processors with hardware-enabled ray tracing support. Currently, this technology is accessible across iPhones, iPads, and Mac computers via the Metal API. Apple reports up to a 4x performance increase over previous software-based ray tracing on the phone, and up to 2.5x faster ray tracing on the M3 compared with the M1. The hardware implementation includes acceleration structure traversal and dedicated ray-box intersections, and the API supports RayQuery (inline ray tracing) as well as RayPipeline features.
== Computational complexity ==
Various complexity results have been proven for certain formulations of the ray tracing problem. In particular, the decision version of the ray tracing problem is defined as follows: given a light ray's initial position and direction and some fixed point, does the ray eventually reach that point? For this formulation, the referenced paper proves the following results:
Ray tracing in 3-D optical systems with a finite set of reflective or refractive objects represented by a system of rational quadratic inequalities is undecidable.
Ray tracing in 3-D optical systems with a finite set of refractive objects represented by a system of rational linear inequalities is undecidable.
Ray tracing in 3-D optical systems with a finite set of rectangular reflective or refractive objects is undecidable.
Ray tracing in 3-D optical systems with a finite set of reflective or partially reflective objects represented by a system of linear inequalities, some of which can be irrational, is undecidable.
Ray tracing in 3-D optical systems with a finite set of reflective or partially reflective objects represented by a system of rational linear inequalities is PSPACE-hard.
For any dimension equal to or greater than 2, ray tracing with a finite set of parallel and perpendicular reflective surfaces represented by rational linear inequalities is in PSPACE.
== Software architecture ==
=== Middleware ===
GPUOpen
Nvidia GameWorks
=== API ===
Metal (API)
Vulkan
DirectX
== See also ==
== References ==
== External links ==
Interactive Ray Tracing: The replacement of rasterization?
The Compleat Angler (1978)
Writing a Simple Ray Tracer (scratchapixel) Archived February 28, 2015, at the Wayback Machine
Ray tracing a torus
Ray Tracing in One Weekend Book Series
Ray Tracing with Voxels - Part 1
Texas Instruments Graphics Architecture (TIGA) is a graphics interface standard created by Texas Instruments that defined the software interface to graphics processors. Using this standard, any software written for TIGA should work correctly on a TIGA-compliant graphics interface card. Texas Instruments' TMS34010 and TMS34020 Graphics System Processors (GSP) were the original TIGA-compliant graphics processors.
The TIGA standard is independent of resolution and color depth, which provides a certain degree of future-proofing. This standard was designed for high-end graphics. However, TIGA was not widely adopted. Instead, VESA and Super VGA became the de facto standard for PC graphics devices after the VGA.
== Clone hardware ==
The primary manufacturers of mainstream TIGA cards for the PC clone market included Number Nine Visual Technology and Hercules. Number Nine Visual Technology graphics cards using Texas Instruments' TIGA co-processors were made from about 1986 to 1992, including the Pepper and GX series. Hercules manufactured cards such as the Graphics Station and Chrome lines which were marketed primarily toward users of Microsoft Windows.
Desktop Computing's AGA 1024 card was capable of emulating the TIGA standard, as well as the IBM 8514.
In the early 1990s, Texas Instruments France (which had marketing control for TIGA architecture and GSP chipsets in Europe) experimented with manufacturing and selling its own range of consumer oriented video cards based on TIGA and aimed at speeding up the user experience of Windows. These products were named TIGA Diamond (34020 based) and TIGA Star (34010 based), and provided a platform for selling TI DRAM and video palette chips as well as the GSP chips themselves.
== Impact ==
Despite the superiority of the technology in comparison to typical Super VGA cards of the era, the relatively high cost and emerging local bus graphics standards meant that IT distributors and PC manufacturers could not see a niche for these products at consumer level.
The (limited) success of the graphics cards paved the way for products based upon various derivatives and clones of IBM's 8514 architecture. Part of the effort to make graphics accelerators useful required TI to convince Microsoft that the internal interfaces to its Windows Operating System had to be adaptable instead of hard-coded. Indeed, all versions of Windows prior to Windows 3.0 were "hard-coded" to specific graphics hardware.
== See also ==
TMS34010
Number Nine Visual Technology TIGA cards
VESA
Super VGA
IBM 8514
== References ==
== External links ==
TMS340 Interface User's Guide spvu015c
TMS340 FAMILY GRAPHICS LIBRARY USER'S GUIDE spvu027
A network on a chip or network-on-chip (NoC, pronounced "en-oh-see" or "knock") is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores schematizing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules.
NoC technology applies the theory and methods of computer networking to on-chip communication and brings notable improvements over conventional bus and crossbar communication architectures. Networks-on-chip come in many network topologies, many of which are still experimental as of 2018.
In the 2000s, researchers started to propose on-chip interconnection in the form of packet-switching networks in order to address the scalability issues of bus-based design. Earlier research had proposed designs that route data packets instead of routing dedicated wires. The concept of "network on chip" was then proposed in 2002. NoCs improve the scalability of systems-on-chip and the power efficiency of complex SoCs compared to other communication subsystem designs. They are an emerging technology, with projections for large growth in the near future as multicore computer architectures become more common.
== Structure ==
NoCs can span synchronous and asynchronous clock domains, known as clock domain crossing, or use unclocked asynchronous logic. NoCs support globally asynchronous, locally synchronous electronics architectures, allowing each processor core or functional unit on the System-on-Chip to have its own clock domain.
== Architectures ==
NoC architectures typically model sparse small-world networks (SWNs) and scale-free networks (SFNs) to limit the number, length, area and power consumption of interconnection wires and point-to-point connections.
== Topology ==
The topology determines the physical layout and connections between nodes and channels. A message traverses the network in hops, and the channel length of each hop depends on the topology. The topology therefore significantly influences both latency and power consumption. Furthermore, since the topology determines the number of alternative paths between nodes, it affects the network traffic distribution, and hence the network bandwidth and performance achieved.
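As a concrete illustration, in a 2-D mesh topology with dimension-ordered (XY) routing the hop count between two routers is simply their Manhattan distance. The sketch below assumes this particular topology and routing policy for illustration; it is not taken from any specific NoC:

```python
def xy_hops(src, dst):
    """Hops between routers (col, row) in a 2-D mesh NoC under
    dimension-ordered XY routing: the Manhattan distance."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def mean_hops(cols, rows):
    """Average hop count over all ordered source/destination pairs,
    a rough proxy for the latency and traffic load of the topology."""
    nodes = [(c, r) for c in range(cols) for r in range(rows)]
    pairs = [(s, d) for s in nodes for d in nodes if s != d]
    return sum(xy_hops(s, d) for s, d in pairs) / len(pairs)
```

Comparing `mean_hops` for candidate topologies is one simple way to see how topology choice drives average latency before any detailed simulation.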
== Benefits ==
Traditionally, ICs have been designed with dedicated point-to-point connections, with one wire dedicated to each signal. This results in a dense network topology. For large designs, in particular, this has several limitations from a physical design viewpoint. It requires power quadratic in the number of interconnections. The wires occupy much of the area of the chip, and in nanometer CMOS technology, interconnects dominate both performance and dynamic power dissipation, as signal propagation in wires across the chip requires multiple clock cycles. It also causes more parasitic capacitance, resistance and inductance to accumulate in the circuit. (See Rent's rule for a discussion of wiring requirements for point-to-point connections).
Sparsity and locality of interconnections in the communications subsystem yield several improvements over traditional bus-based and crossbar-based systems.
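The wiring saving can be made concrete by counting links. Fully dedicated point-to-point wiring needs one link per module pair, i.e. n(n-1)/2 links, while a mesh NoC only wires neighbours. A back-of-the-envelope sketch (the 2-D mesh is chosen here purely for illustration):

```python
def p2p_links(n):
    """Dedicated point-to-point wiring: one link for every pair of
    modules, so the link count grows quadratically with n."""
    return n * (n - 1) // 2

def mesh_links(rows, cols):
    """2-D mesh NoC: links only between horizontal and vertical
    neighbours, so the link count grows roughly linearly."""
    return rows * (cols - 1) + cols * (rows - 1)
```

For 16 modules, full point-to-point wiring needs 120 links, whereas a 4x4 mesh needs only 24, and the gap widens rapidly as the module count grows.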
== Parallelism and scalability ==
The wires in the links of the network-on-chip are shared by many signals. A high level of parallelism is achieved, because all data links in the NoC can operate simultaneously on different data packets. Therefore, as the complexity of integrated systems keeps growing, a NoC provides enhanced performance (such as throughput) and scalability in comparison with previous communication architectures (e.g., dedicated point-to-point signal wires, shared buses, or segmented buses with bridges). Algorithms must be designed to expose enough parallelism to exploit this potential of the NoC.
== Current research ==
Some researchers think that NoCs need to support quality of service (QoS), that is, meet specified requirements in terms of throughput, end-to-end delay, fairness, and deadlines. Real-time computation, including audio and video playback, is one reason for providing QoS support. However, current system implementations like VxWorks, RTLinux or QNX are able to achieve sub-millisecond real-time computing without special hardware.
This may indicate that for many real-time applications the service quality of existing on-chip interconnect infrastructure is sufficient, and that dedicated hardware logic would be necessary only to achieve microsecond precision, a degree that is rarely needed in practice for end users (sound or video jitter needs latency guarantees only on the order of tenths of milliseconds). Another motivation for NoC-level quality of service (QoS) is to support multiple concurrent users sharing resources of a single chip multiprocessor in a public cloud computing infrastructure. In such instances, hardware QoS logic enables the service provider to make contractual guarantees on the level of service that a user receives, a feature that may be deemed desirable by some corporate or government clients.
Many challenging research problems remain to be solved at all levels, from the physical link level through the network level, and all the way up to the system architecture and application software. The first dedicated research symposium on networks on chip was held at Princeton University, in May 2007. The second IEEE International Symposium on Networks-on-Chip was held in April 2008 at Newcastle University.
Research has been conducted on integrated optical waveguides and devices comprising an optical network on a chip (ONoC).
One possible way to increase the performance of a NoC is to use wireless communication channels between chiplets, an approach named wireless network-on-chip (WiNoC).
== Side benefits ==
In a multi-core system connected by a NoC, coherency messages and cache miss requests must pass through switches. Accordingly, switches can be augmented with simple tracking and forwarding elements to detect which cache blocks will be requested in the future by which cores. The forwarding elements then multicast any requested block to all the cores that may request the block in the future. This mechanism reduces the cache miss rate.
== Benchmarks ==
NoC development and studies require comparing different proposals and options. NoC traffic patterns are under development to help such evaluations. Existing NoC benchmarks include NoCBench and MCSL NoC Traffic Patterns.
== Interconnect processing unit ==
An interconnect processing unit (IPU) is an on-chip communication network with hardware and software components which jointly implement key functions of different system-on-chip programming models through a set of communication and synchronization primitives and provide low-level platform services to enable advanced features in modern heterogeneous applications on a single die.
== See also ==
Arteris
Electronic design automation (EDA)
Integrated circuit design
CUDA
Globally asynchronous, locally synchronous
Network architecture
== Notes ==
== References ==
Adapted from Avinoam Kolodny's column in the ACM SIGDA e-newsletter by Igor Markov. The original text can be found at http://www.sigda.org/newsletter/2006/060415.txt
== Further reading ==
Kundu, Santanu; Chattopadhyay, Santanu (2014). Network-on-chip: the Next Generation of System-on-Chip Integration (1st ed.). Boca Raton, FL: CRC Press. ISBN 978-1-4665-6527-2. OCLC 895661009.
Sheng Ma; Libo Huang; Mingche Lai; Wei Shi; Zhiying Wang (2014). Networks-on-Chip: From Implementations to Programming Paradigms (1st ed.). Amsterdam, NL: Morgan Kaufmann. ISBN 978-0-12-801178-2. OCLC 894609116.
Giorgios Dimitrakopoulos; Anastasios Psarras; Ioannis Seitanidis (2014-08-27). Microarchitecture of Network-on-Chip Routers: A Designer's Perspective (1st ed.). New York, NY. ISBN 978-1-4614-4301-8. OCLC 890132032.{{cite book}}: CS1 maint: location missing publisher (link)
Natalie Enright Jerger; Tushar Krishna; Li-Shiuan Peh (2017-06-19). On-chip Networks (2nd ed.). San Rafael, California. ISBN 978-1-62705-996-1. OCLC 991871622.{{cite book}}: CS1 maint: location missing publisher (link)
Marzieh Lenjani; Mahmoud Reza Hashemi (2014). "Tree-based scheme for reducing shared cache miss rate leveraging regional, statistical and temporal similarities". IET Computers & Digital Techniques. 8: 30–48. doi:10.1049/iet-cdt.2011.0066. Archived from the original on December 9, 2018.
== External links ==
DATE 2006 workshop on NoC
NoCS 2007 - The 1st ACM/IEEE International Symposium on Networks-on-Chip
NoCS 2008 - The 2nd IEEE International Symposium on Networks-on-Chip
Jean-Jacques Lecler, Gilles Baillieu, Design Automation for Embedded Systems (Springer), "Application driven network-on-chip architecture exploration & refinement for a complex SoC", June 2011, Volume 15, Issue 2, pp 133–158, doi:10.1007/s10617-011-9075-5 [Online] http://www.arteris.com/hs-fs/hub/48858/file-14363521-pdf/docs/springer-appdrivennocarchitecture8.5x11.pdf
The GeForce 3 series (NV20) is the third generation of Nvidia's GeForce line of graphics processing units (GPUs). Introduced in February 2001, it advanced the GeForce architecture by adding programmable pixel and vertex shaders and multisample anti-aliasing, and by improving the overall efficiency of the rendering process.
The GeForce 3 was unveiled during the 2001 Macworld Conference & Expo/Tokyo 2001 in Makuhari Messe and powered real-time demos of Pixar's Junior Lamp and id Software's Doom 3. Apple would later announce launch rights for its new line of computers.
The GeForce 3 family comprises three consumer models: the GeForce 3, the GeForce 3 Ti200, and the GeForce 3 Ti500. A separate professional version, with a feature set tailored for computer-aided design, was sold as the Quadro DCC. A derivative of the GeForce 3, known as the NV2A, is used in the Microsoft Xbox game console.
== Architecture ==
The GeForce 3 was introduced three months after Nvidia acquired the assets of 3dfx. It was marketed as the nFinite FX Engine, and was the first Microsoft Direct3D 8.0-compliant 3D card. Its programmable shader architecture enabled applications to execute custom visual effects programs in Microsoft shader language 1.1. It is believed that the fixed-function T&L hardware from the GeForce 2 was still included on the chip for use with Direct3D 7.0 applications, as the single vertex shader was not fast enough to emulate it yet. With respect to pure pixel and texel throughput, the GeForce 3 has four pixel pipelines, each of which can sample two textures per clock. This is the same configuration as the GeForce 2, excluding the slower GeForce 2 MX line.
To take better advantage of available memory performance, the GeForce 3 has a memory subsystem dubbed Lightspeed Memory Architecture (LMA). This is composed of several mechanisms that reduce overdraw, conserve memory bandwidth by compressing the z-buffer (depth buffer) and better manage interaction with the DRAM.
Other architectural changes include EMBM support (first introduced by Matrox in 1999) and improvements to anti-aliasing functionality. Previous GeForce chips could perform only super-sampled anti-aliasing (SSAA), a demanding process that renders the image at a large size internally and then scales it down to the final output resolution. The GeForce 3 adds multi-sampling anti-aliasing (MSAA) and Quincunx anti-aliasing methods, both of which perform significantly faster than super-sampled anti-aliasing at the expense of some image quality. With multi-sampling, the render output units super-sample only the Z-buffers and stencil buffers, and use that information to determine whether a pixel covers more than one polygonal object. This saves the pixel/fragment shader from having to render multiple fragments for pixels where the same object covers all of the sub-pixels in a pixel. This method fails with texture maps that have varying transparency (e.g. a texture map that represents a chain-link fence). Quincunx anti-aliasing is a blur filter that shifts the rendered image a half-pixel up and a half-pixel left in order to create sub-pixels, which are then averaged together in a diagonal cross pattern, smoothing jagged edges but also blurring some overall image detail. Finally, the GeForce 3's texture sampling units were upgraded to support 8-tap anisotropic filtering, compared to the previous limit of 2-tap with the GeForce 2. With 8-tap anisotropic filtering enabled, distant textures can be noticeably sharper.
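The quincunx averaging step can be sketched as a 5-tap weighted filter over a pixel and its four diagonal neighbours. The 1/2 and 1/8 weights below are the commonly cited ones and are used here as an assumption for illustration, not taken from Nvidia documentation:

```python
def quincunx_filter(img, x, y):
    """5-tap quincunx reconstruction: the centre sample plus the four
    diagonal neighbours, averaged in a cross pattern (assumed weights:
    1/2 centre, 1/8 per corner). The weights sum to 1, so flat regions
    pass through unchanged while edges are smoothed (and blurred)."""
    centre = img[y][x]
    corners = (img[y - 1][x - 1] + img[y - 1][x + 1] +
               img[y + 1][x - 1] + img[y + 1][x + 1])
    return 0.5 * centre + 0.125 * corners
```

The blur described in the text falls out of the corner taps: any isolated bright pixel loses half its intensity to its neighbours, which softens jagged edges and fine texture detail alike.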
A derivative of the GeForce 3, known as the NV2A, is used in the Microsoft Xbox game console. It is clocked the same as the original GeForce 3 but features an additional vertex shader.
== Performance ==
The GeForce 3 GPU (NV20) has the same theoretical pixel and texel throughput per clock as the GeForce 2 (NV15). The GeForce 2 Ultra is clocked 25% faster than the original GeForce 3 and 43% faster than the Ti200; this means that in select instances, like Direct3D 7 T&L benchmarks, the GeForce 2 Ultra and sometimes even GTS can outperform the GeForce 3 and Ti200, because the newer GPUs use the same fixed-function T&L unit, but are clocked lower. The GeForce 2 Ultra also has considerable raw memory bandwidth available to it, only matched by the GeForce 3 Ti500. However, when comparing anti-aliasing performance the GeForce 3 is clearly superior because of its MSAA support and memory bandwidth/fillrate management efficiency.
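The throughput comparison follows directly from clock speed and pipeline layout. A small sketch, using the 4x2 pipeline configuration and the clock speeds quoted in this article:

```python
def texel_rate_mt(clock_mhz, pipelines=4, textures_per_pipe=2):
    """Theoretical texel fill rate in megatexels/s: core clock times
    pixel pipelines times texture samples per pipeline per clock."""
    return clock_mhz * pipelines * textures_per_pipe
```

A GeForce 2 Ultra at 250 MHz yields 2000 megatexels/s against 1600 for the original GeForce 3 at 200 MHz, matching the 25% figure above, and 250/175 gives the quoted 43% advantage over the Ti200.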
When comparing the shading capabilities to the Radeon 8500, reviewers noted superior precision with the ATi card.
== Product positioning ==
Nvidia refreshed the lineup in October 2001 with the release of the GeForce 3 Ti200 and Ti500. This coincided with ATI's releases of the Radeon 7500 and Radeon 8500. The Ti500 has higher core and memory clocks (240 MHz core/250 MHz RAM) than the original GeForce 3 (200 MHz/230 MHz), and generally matches the Radeon 8500 in performance. The Ti200 is clocked lower (175 MHz/200 MHz) making it the lowest-priced GeForce 3 release, but it still surpasses the Radeon 7500 in speed and feature set although lacking dual-monitor implementation.
The original GeForce3 and Ti500 were only released in 64 MiB configurations, while the Ti200 was also released in 128 MiB versions.
The GeForce 4 Ti (NV25), introduced in April 2002, was a revision of the GeForce 3 architecture. The GeForce 4 Ti was very similar to the GeForce 3; the main differences were higher core and memory speeds, a revised memory controller, improved vertex and pixel shaders, hardware anti-aliasing and DVD playback. Proper dual-monitor support was also brought over from the GeForce 2 MX. With the GeForce 4 Ti 4600 as the new flagship product, this was the beginning of the end for the GeForce 3 Ti500, which was already difficult to produce due to poor yields, and it was later completely replaced by the much cheaper but similarly performing GeForce 4 Ti 4200. Also announced at the same time was the GeForce 4 MX (NV17), which despite the name was closer in architecture and feature set to the GeForce 2 (NV11 and NV15). The GeForce 3 Ti200 was kept in production for a short while as it occupied a niche between the (delayed) GeForce 4 Ti4200 and the GeForce 4 MX460, with performance equivalent to the DirectX 7.0-compliant MX460 while offering full DirectX 8.0 support, although lacking dual-monitor support. However, ATI released the Radeon 8500LE (a lower-clocked version of the 8500), which outperformed both the Ti200 and MX460. ATI's move in turn compelled Nvidia to roll out the Ti4200 earlier than planned, also at a similar price to the MX460, and soon afterwards to discontinue the Ti200 by summer 2002 due to naming confusion with the GeForce 4 MX and Ti lines. The GeForce 3 Ti200 still outperformed the Radeon 9000 (RV250) introduced around the time of the Ti200's discontinuation; unlike the 8500LE, which was just a slower-clocked 8500, the 9000 was a major redesign to reduce production cost and power usage, and its performance was equivalent to the GeForce 4 MX440.
== Specifications ==
All models are made via TSMC 150 nm fabrication process
All models support Direct3D 8.0 and OpenGL 1.3
All models support 3D Textures, Lightspeed Memory Architecture (LMA), nFiniteFX Engine, Shadow Buffers
== Discontinued support ==
Nvidia has ceased driver support for GeForce 3 series.
=== Final drivers ===
Windows 9x & Windows Me: 81.98 released on December 21, 2005
Product Support List Windows 95/98/Me – 81.98.
Driver version 81.98 for Windows 9x/Me was the last driver version ever released by Nvidia for these systems; no new official releases were later made for these systems.
Windows 2000, 32-bit Windows XP & Media Center Edition: 93.71 released on November 2, 2006
Also available: 93.81 (beta) released on November 28, 2006
Linux 32-bit: 96.43.23 released on September 14, 2012
The drivers for Windows 2000/XP can also be installed on later versions of Windows such as Windows Vista and 7; however, they do not support desktop compositing or the Aero effects of these operating systems.
Note: Despite claims in the documentation that 94.24 (released on May 17, 2006) supports the GeForce 3 series, it does not (94.24 actually supports only the GeForce 6 and GeForce 7 series).
(Products supported list also on this page)
Windows 95/98/Me Driver Archive
Windows XP/2000 Driver Archive
Unix Driver Archive
== See also ==
Graphics card
Graphics processing unit
Kelvin (microarchitecture)
== References ==
== External links ==
Nvidia: GeForce3 - The Infinite Effects GPU
ForceWare 81.98 drivers, Final Windows 9x/ME driver release
ForceWare 93.71 drivers, Final Windows XP driver release
Anandtech: Nvidia GeForce3
Anandtech: Nvidia's Fall Product Line: GeForce3 Titanium
techPowerUp! GPU Database
GeForce is a brand of graphics processing units (GPUs) designed by Nvidia and marketed for the performance market. As of the GeForce 50 series, there have been nineteen iterations of the design. In August 2017, Nvidia stated that "there are over 200 million GeForce gamers".
The first GeForce products were discrete GPUs designed for add-on graphics boards, intended for the high-margin PC gaming market, and later diversification of the product line covered all tiers of the PC graphics market, ranging from cost-sensitive GPUs integrated on motherboards to mainstream add-in retail boards. Most recently, GeForce technology has been introduced into Nvidia's line of embedded application processors, designed for electronic handhelds and mobile handsets.
With respect to discrete GPUs, found in add-in graphics-boards, Nvidia's GeForce and AMD's Radeon GPUs are the only remaining competitors in the high-end market. GeForce GPUs are very dominant in the general-purpose graphics processor unit (GPGPU) market thanks to their proprietary Compute Unified Device Architecture (CUDA). GPGPU is expected to expand GPU functionality beyond the traditional rasterization of 3D graphics, to turn it into a high-performance computing device able to execute arbitrary programming code in the same way a CPU does, but with different strengths (highly parallel execution of straightforward calculations) and weaknesses (worse performance for complex branching code).
== Name origin ==
The "GeForce" name originated from a contest held by Nvidia in early 1999 called "Name That Chip". The company called out to the public to name the successor to the RIVA TNT2 line of graphics boards. There were over 12,000 entries received and seven winners received a RIVA TNT2 Ultra graphics card as a reward. Brian Burke, senior PR manager at Nvidia, told Maximum PC in 2002 that "GeForce" originally stood for "Geometry Force" since GeForce 256 was the first GPU for personal computers to calculate the transform-and-lighting geometry, offloading that function from the CPU.
== Graphics processor generations ==
=== GeForce 256 ===
=== GeForce 2 series ===
Launched in March 2000, the first GeForce2 (NV15) was another high-performance graphics chip. Nvidia moved to a twin texture processor per pipeline (4x2) design, doubling texture fillrate per clock compared to GeForce 256. Later, Nvidia released the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 but at a fraction of the cost. The MX was a compelling value in the low/mid-range market segments and was popular with OEM PC manufacturers and users alike. The GeForce 2 Ultra was the high-end model in this series.
=== GeForce 3 series ===
Launched in February 2001, the GeForce3 (NV20) introduced programmable vertex and pixel shaders to the GeForce family and to consumer-level graphics accelerators. It had good overall performance and shader support, making it popular with enthusiasts although it never hit the midrange price point. The NV2A developed for the Microsoft Xbox game console is a derivative of the GeForce 3.
=== GeForce 4 series ===
Launched in February 2002, the then-high-end GeForce4 Ti (NV25) was mostly a refinement to the GeForce3. The biggest advancements included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction to increase clock speeds. Another member of the GeForce 4 family, the budget GeForce4 MX was based on the GeForce2, with the addition of some features from the GeForce4 Ti. It targeted the value segment of the market and lacked pixel shaders. Most of these models used the AGP 4× interface, but a few began the transition to AGP 8×.
=== GeForce FX series ===
Launched in 2003, the GeForce FX (NV30) was a huge change in architecture compared to its predecessors. The GPU was designed not only to support the new Shader Model 2 specification but also to perform well on older titles. However, initial models like the GeForce FX 5800 Ultra suffered from weak floating point shader performance and excessive heat which required infamously noisy two-slot cooling solutions. Products in this series carry the 5000 model number, as it is the fifth generation of the GeForce, though Nvidia marketed the cards as GeForce FX instead of GeForce 5 to show off "the dawn of cinematic rendering".
=== GeForce 6 series ===
Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 support to the GeForce family, while correcting the weak floating point shader performance of its predecessor. It also implemented high-dynamic-range imaging and introduced SLI (Scalable Link Interface) and PureVideo capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding and fully accelerated video post-processing).
=== GeForce 7 series ===
The seventh generation GeForce (G70/NV47) was launched in June 2005 and was the last Nvidia video card series that could support the AGP bus. The design was a refined version of GeForce 6, with the major improvements being a widened pipeline and an increase in clock speed. The GeForce 7 also offers new transparency supersampling and transparency multisampling anti-aliasing modes (TSAA and TMAA). These new anti-aliasing modes were later enabled for the GeForce 6 series as well. The GeForce 7950GT featured the highest performance GPU with an AGP interface in the Nvidia line. This era began the transition to the PCI-Express interface.
A 128-bit, eight render output unit (ROP) variant of the 7800 GTX, called the RSX Reality Synthesizer, is used as the main GPU in the Sony PlayStation 3.
=== GeForce 8 series ===
Released on November 8, 2006, the eighth-generation GeForce (originally called G80) was the first GPU to fully support Direct3D 10. Manufactured on a 90 nm process and built around the new Tesla microarchitecture, it implemented the unified shader model. Initially only the 8800GTX model was launched; the GTS variant arrived months into the product line's life, and it took nearly six months for mid-range and OEM/mainstream cards to be integrated into the 8 series. A die shrink to 65 nm and a revision of the G80 design, codenamed G92, came to the 8 series with the 8800GS, 8800GT and 8800GTS-512, first released on October 29, 2007, almost a full year after the initial G80 release.
=== GeForce 9 series and 100 series ===
The first product was released on February 21, 2008. Released less than four months after the initial G92 product, all 9-series designs are simply revisions to existing late 8-series products. The 9800GX2 uses two G92 GPUs, as used in later 8800 cards, in a dual-PCB configuration while still requiring only a single PCI-Express 16x slot. The 9800GX2 utilizes two separate 256-bit memory buses, one for each GPU with its respective 512 MB of memory, for a total of 1 GB of memory on the card (although the SLI configuration of the chips necessitates mirroring the frame buffer between the two chips, effectively halving the memory performance relative to a single 256-bit/512 MB configuration). The later 9800GTX features a single G92 GPU, a 256-bit data bus, and 512 MB of GDDR3 memory.
Prior to the release, little concrete information was known, except that officials claimed the next-generation products had close to 1 TFLOPS of processing power with the GPU cores still manufactured on the 65 nm process, and there were reports of Nvidia downplaying the significance of Direct3D 10.1. In March 2009, several sources reported that Nvidia had quietly launched a new series of GeForce products, the GeForce 100 series, consisting of rebadged 9 series parts. GeForce 100 series products were not available for individual purchase.
=== GeForce 200 series and 300 series ===
Based on the GT200 graphics processor, codenamed Tesla and consisting of 1.4 billion transistors, the 200 series was launched on June 16, 2008. This generation took the card-naming scheme in a new direction, replacing the series number (such as 8800 for 8-series cards) with a GTX or GTS suffix (which used to go at the end of card names, denoting their 'rank' among similar models) and adding model numbers such as 260 and 280 after that. The series features the new GT200 core on a 65 nm die. The first products were the GeForce GTX 260 and the more expensive GeForce GTX 280. The GeForce 310, released on November 27, 2009, is a rebrand of the GeForce 210. The 300 series cards are rebranded DirectX 10.1-compatible GPUs from the 200 series and were not available for individual purchase.
=== GeForce 400 series and 500 series ===
On April 7, 2010, Nvidia released the GeForce GTX 470 and GTX 480, the first cards based on the new Fermi architecture, codenamed GF100; they were the first Nvidia GPUs to utilize 1 GB or more of GDDR5 memory. The GTX 470 and GTX 480 were heavily criticized due to high power use, high temperatures, and very loud noise that were not balanced by the performance offered, even though the GTX 480 was the fastest DirectX 11 card as of its introduction.
In November 2010, Nvidia released a new flagship GPU based on an enhanced GF100 architecture (GF110) called the GTX 580. It featured higher performance and lower power consumption, heat output, and noise than the preceding GTX 480, and received much better reviews. Nvidia later also released the GTX 590, which packs two GF110 GPUs on a single card.
=== GeForce 600 series, 700 series and 800M series ===
In September 2010, Nvidia announced that the successor to the Fermi microarchitecture would be the Kepler microarchitecture, manufactured on the TSMC 28 nm fabrication process. Earlier, Nvidia had been contracted to supply its top-end GK110 cores for use in Oak Ridge National Laboratory's "Titan" supercomputer, leading to a shortage of GK110 cores. After AMD launched its own annual refresh in early 2012, the Radeon HD 7000 series, Nvidia began the release of the GeForce 600 series in March 2012. The GK104 core, originally intended for the mid-range segment of the lineup, became the flagship GTX 680. It introduced significant improvements in performance, heat, and power efficiency compared to the Fermi architecture and closely matched AMD's flagship Radeon HD 7970. It was quickly followed by the dual-GK104 GTX 690 and the GTX 670, which featured only a slightly cut-down GK104 core and came very close to the GTX 680 in performance.
With the GK110-based GTX Titan, Nvidia also released GPU Boost 2.0, which allowed the GPU clock speed to increase until a user-set temperature limit was reached, without exceeding a user-specified maximum fan speed. The final GeForce 600 series release was the GTX 650 Ti BOOST, based on the GK106 core, in response to AMD's Radeon HD 7790. At the end of May 2013, Nvidia announced the 700 series, still based on the Kepler architecture but featuring a GK110-based card at the top of the lineup. The GTX 780 was a slightly cut-down Titan that achieved nearly the same performance for two-thirds of the price. It featured the same advanced reference cooler design, but lacked the unlocked double-precision cores and was equipped with 3 GB of memory.
At the same time, Nvidia announced ShadowPlay, a screen-capture solution that used an integrated H.264 encoder built into the Kepler architecture which Nvidia had not revealed previously. It could record gameplay without a capture card, with negligible performance impact compared to software recording solutions, and was available even on the previous-generation GeForce 600 series cards. The software beta for ShadowPlay, however, experienced multiple delays and was not released until the end of October 2013. A week after the release of the GTX 780, Nvidia announced the GTX 770, a rebrand of the GTX 680. It was followed shortly after by the GTX 760, which was also based on the GK104 core and similar to the GTX 660 Ti. No more 700 series cards were set for release in 2013, although Nvidia did announce G-Sync, another previously unmentioned feature of the Kepler architecture, which allowed the GPU to dynamically control the refresh rate of G-Sync-compatible monitors (released in 2014) to combat tearing and judder. In October, however, AMD released the R9 290X, which came in at $100 less than the GTX 780. In response, Nvidia cut the price of the GTX 780 by $150 and released the GTX 780 Ti, which featured a full 2880-core GK110 even more powerful than the GTX Titan, along with enhancements to the power-delivery system that improved overclocking, and managed to pull ahead of AMD's new release.
The GeForce 800M series consists of rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture.
=== GeForce 900 series ===
In March 2013, Nvidia announced that the successor to Kepler would be the Maxwell microarchitecture. The first Maxwell products, based on GM10x chips, emphasized the architecture's new power-efficiency improvements in OEM and low-TDP products such as the desktop GTX 750/750 Ti and the mobile GTX 850M/860M. Later in 2014, Nvidia pushed the TDP higher with the GM20x chips for power users, skipping the 800 series for desktops entirely with the 900 series of GPUs.
This was the last GeForce series to support analog video output through DVI-I, although external adapters exist that can convert a digital DisplayPort, HDMI, or DVI-D signal to analog.
=== GeForce 10 series ===
In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture. The first cards in the series, the GTX 1080 and GTX 1070, were announced on May 6, 2016, and released several weeks later on May 27 and June 10, respectively. Architectural improvements include the following:
In Pascal, an SM (streaming multiprocessor) consists of 128 CUDA cores. Kepler packed 192, Fermi 32 and Tesla only 8 CUDA cores into an SM; the GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA Cores, an instruction buffer, a warp scheduler, 2 texture mapping units and 2 dispatch units.
GDDR5X – New memory standard supporting 10 Gbit/s data rates and an updated memory controller. Only the Nvidia Titan X (and Titan Xp), GTX 1080, GTX 1080 Ti, and GTX 1060 (6 GB Version) support GDDR5X. The GTX 1070 Ti, GTX 1070, GTX 1060 (3 GB version), GTX 1050 Ti, and GTX 1050 use GDDR5.
Unified memory – A memory architecture, where the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called "Page Migration Engine".
NVLink – A high-bandwidth bus between the CPU and GPU, and between multiple GPUs. Allows much higher transfer speeds than those achievable by using PCI Express; estimated to provide between 80 and 200 GB/s.
16-bit (FP16) floating-point operations can be executed at twice the rate of 32-bit ("single precision") floating-point operations, while 64-bit ("double precision") floating-point operations execute at half the 32-bit rate (compared to Maxwell's 1/32 rate).
A more advanced process node, TSMC 16 nm, instead of the older TSMC 28 nm.
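The precision rate ratios listed above can be made concrete with a small calculation. This is an illustrative sketch only; the base FP32 figure below is a made-up placeholder, not an official specification of any Pascal product.

```python
# Hypothetical base single-precision rate, in TFLOPS (illustrative only).
FP32_TFLOPS = 10.0

# Pascal (GP100) rate ratios described in the text:
rates = {
    "FP16": FP32_TFLOPS * 2,   # half precision at twice the FP32 rate
    "FP32": FP32_TFLOPS,
    "FP64": FP32_TFLOPS / 2,   # double precision at half the FP32 rate
}

# For comparison, Maxwell executed FP64 at 1/32 the FP32 rate.
maxwell_fp64 = FP32_TFLOPS / 32

print(rates["FP16"], rates["FP64"], maxwell_fp64)
```

With a 10 TFLOPS FP32 baseline this yields 20 TFLOPS of FP16, 5 TFLOPS of FP64, and only 0.3125 TFLOPS of FP64 under the Maxwell ratio, illustrating how much Pascal narrowed the double-precision gap.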
=== GeForce 20 series and 16 series ===
In August 2018, Nvidia announced the GeForce successor to Pascal, revealing the new microarchitecture's name, "Turing", at the SIGGRAPH 2018 conference. The new GPU microarchitecture is aimed at accelerating real-time ray tracing and AI inferencing. It features a new ray-tracing unit (RT Core) that dedicates hardware to ray tracing, supporting the DXR extension in Microsoft DirectX 12. Nvidia claims the new architecture is up to six times faster than the older Pascal architecture. Tensor cores, a design introduced with Volta, provide AI deep-learning acceleration, enabling DLSS (Deep Learning Super Sampling), a new form of anti-aliasing that uses AI to provide crisper imagery with less impact on performance. Turing also changes the integer execution unit, which can now execute in parallel with the floating-point datapath, and introduces a new unified cache architecture that doubles bandwidth compared with previous generations.
The new GPUs were revealed as the Quadro RTX 8000, Quadro RTX 6000, and Quadro RTX 5000. The high-end Quadro RTX 8000 features 4,608 CUDA cores and 576 Tensor cores with 48 GB of VRAM. Later, during the Gamescom press conference, Nvidia CEO Jensen Huang unveiled the GeForce RTX series, with the RTX 2080 Ti, 2080, and 2070 based on the Turing architecture. The first Turing cards were slated to ship to consumers on September 20, 2018. Nvidia announced the RTX 2060 on January 6, 2019, at CES 2019.
On July 2, 2019, Nvidia announced the GeForce RTX Super line of cards, a 20 series refresh which comprises higher-spec versions of the RTX 2060, 2070 and 2080. The RTX 2070 and 2080 were discontinued.
In February 2019, Nvidia announced the GeForce 16 series. It is based on the same Turing architecture used in the GeForce 20 series, but with the Tensor (AI) and RT (ray tracing) cores disabled, providing more affordable graphics cards for gamers while still attaining higher performance than comparable cards of previous GeForce generations.
Like the RTX Super refresh, Nvidia on October 29, 2019, announced the GTX 1650 Super and 1660 Super cards, which replaced their non-Super counterparts.
On June 28, 2022, Nvidia quietly released their GTX 1630 card, which was meant for low-end gamers.
=== GeForce 30 series ===
At a GeForce Special Event on September 1, 2020, Nvidia announced that the successor to the GeForce 20 series would be the 30 series, built on the Ampere microarchitecture. The event set September 17 as the release date for the RTX 3080, September 24 for the RTX 3090, and October 29 for the RTX 3070. The final GPU launch of the series was the RTX 3090 Ti, the highest-end Nvidia GPU on the Ampere microarchitecture. It features a fully unlocked GA102 die built on the Samsung 8 nm node (chosen due to supply shortages at TSMC), with 10,752 CUDA cores, 336 Tensor cores and texture mapping units, 112 ROPs, 84 RT cores, and 24 gigabytes of GDDR6X memory on a 384-bit bus. Compared to the RTX 2080 Ti, the 3090 Ti has 6,400 more CUDA cores. Owing to the global chip shortage, the 30 series was controversial: scalping and high demand sent prices skyrocketing for both the 30 series and AMD's RX 6000 series.
=== GeForce 40 series ===
On September 20, 2022, Nvidia announced its GeForce 40 series graphics cards, built on the Ada Lovelace architecture with part numbers AD102, AD103, AD104, AD106, and AD107. The cards launched as the RTX 4090 on October 12, 2022; the RTX 4080 on November 16, 2022; the RTX 4070 Ti on January 3, 2023; the RTX 4070 on April 13, 2023; the RTX 4060 Ti on May 24, 2023; and the RTX 4060 on June 29, 2023. The parts are manufactured on the TSMC N4 process node, custom-designed for Nvidia. At the time, the RTX 4090 was the fastest chip for the mainstream market released by a major company, with around 16,384 CUDA cores, boost clocks of 2.2/2.5 GHz, 24 GB of GDDR6X on a 384-bit memory bus, 128 third-generation RT cores, 512 fourth-generation Tensor cores, DLSS 3.0, and a TDP of 450 W. From October to December 2024, the RTX 4090, 4080, and 4070 and related variants were officially discontinued, ending a two-year production run, in order to free up production capacity for the coming RTX 50 series.
Notably, a China-only edition of the RTX 4090 was released, named the RTX 4090D (Dragon). The RTX 4090D features a cut-down AD102 die with 14,592 CUDA cores, down from the 16,384 cores of the original 4090. This was primarily a consequence of the United States Department of Commerce restricting export of the Nvidia RTX 4090 to certain countries beginning in 2023, targeted mainly at China in an attempt to halt its AI development.
The 40 series saw Nvidia re-release the 'Super' variant of its graphics cards, not seen since the 20 series, and was the first generation in Nvidia's lineup to combine the 'Super' and 'Ti' brandings. This began with the release of the RTX 4070 Super on January 17, 2024, followed by the RTX 4070 Ti Super on January 24, 2024, and the RTX 4080 Super on January 31, 2024.
=== GeForce 50 series (Current) ===
The GeForce 50 series, based on the Blackwell microarchitecture, was announced at CES 2025, with availability starting in January. Nvidia CEO Jensen Huang presented prices for the RTX 5070, RTX 5070 Ti, RTX 5080, and RTX 5090.
== Variants ==
=== Mobile GPUs ===
Since the GeForce 2 series, Nvidia has produced a number of graphics chipsets for notebook computers under the GeForce Go branding. Most of the features present in the desktop counterparts are present in the mobile ones. These GPUs are generally optimized for lower power consumption and less heat output in order to be used in notebook PCs and small desktops.
Beginning with the GeForce 8 series, the GeForce Go brand was discontinued and the mobile GPUs were integrated with the main line of GeForce GPUs, their names suffixed with an M. This ended in 2016 with the launch of the laptop GeForce 10 series: Nvidia dropped the M suffix, unifying the branding between its desktop and laptop GPU offerings, as notebook Pascal GPUs are almost as powerful as their desktop counterparts (something Nvidia had tested with the "desktop-class" notebook GTX 980 GPU in 2015).
The GeForce MX brand, previously used by Nvidia for their entry-level desktop GPUs, was revived in 2017 with the release of the GeForce MX150 for notebooks. The MX150 is based on the same Pascal GP108 GPU as used on the desktop GT 1030, and was quietly released in June 2017.
=== Small form factor GPUs ===
Similar to the mobile GPUs, Nvidia also released a few GPUs in "small form factor" format, for use in all-in-one desktops. These GPUs are suffixed with an S, similar to the M used for mobile products.
=== Integrated desktop motherboard GPUs ===
Beginning with the nForce 4, Nvidia started including onboard graphics solutions in their motherboard chipsets. These were called mGPUs (motherboard GPUs). Nvidia discontinued the nForce range, including these mGPUs, in 2009.
After the nForce range was discontinued, Nvidia released their Ion line in 2009, which consisted of an Intel Atom CPU partnered with a low-end GeForce 9 series GPU, fixed on the motherboard. Nvidia released an upgraded Ion 2 in 2010, this time containing a low-end GeForce 300 series GPU.
== Nomenclature ==
From the GeForce 4 series until the GeForce 9 series, the naming scheme below is used.
Since the release of the GeForce 100 series of GPUs, Nvidia changed their product naming scheme to the one below.
Earlier cards such as the GeForce4 follow a similar pattern.
== Graphics device drivers ==
=== Official proprietary ===
Nvidia develops and publishes GeForce drivers for Windows 10 x86/x86-64 and later, Linux x86/x86-64/ARMv7-A, OS X 10.5 and later, Solaris x86/x86-64, and FreeBSD x86/x86-64. A current version can be downloaded from Nvidia, and most Linux distributions include it in their own repositories. GeForce driver 340.24, released on 8 July 2014, supports the EGL interface, enabling Wayland support in conjunction with this driver. This may differ for the Nvidia Quadro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers. On the same day the Vulkan graphics API was publicly released, Nvidia released drivers that fully supported it. Since 2014, Nvidia has released drivers with optimizations for specific video games concurrent with their release, having released 150 drivers supporting 400 games by April 2022.
Basic support for the DRM mode-setting interface in the form of a new kernel module named nvidia-modeset.ko has been available since version 358.09 beta.
The support of Nvidia's display controller on the supported GPUs is centralized in nvidia-modeset.ko. Traditional display interactions (X11 modesets, OpenGL SwapBuffers, VDPAU presentation, SLI, stereo, framelock, G-Sync, etc.) initiate from the various user-mode driver components and flow to nvidia-modeset.ko.
In May 2022, Nvidia announced that it would release a partially open-source driver for the (GSP-enabled) Turing architecture and newer, to make it easier to package as part of Linux distributions. At launch, Nvidia considered the driver alpha quality for consumer GPUs and production ready for datacenter GPUs. Currently the userspace components of the driver (including OpenGL, Vulkan, and CUDA) remain proprietary. In addition, the open-source components of the driver are only a wrapper (CPU-RM) for the GPU System Processor (GSP) firmware, a RISC-V binary blob that is now required for running the open-source driver. The GPU System Processor is a RISC-V coprocessor, codenamed "Falcon", used to offload GPU initialization and management tasks. The driver itself is split between a host CPU portion (CPU-RM) and the GSP portion (GSP-RM). The proprietary Windows 11 and Linux drivers also support enabling GSP, which can even make gaming faster. CUDA has supported GSP since version 11.6. The upcoming Linux kernel 6.7 will support GSP in Nouveau.
=== Third-party free and open-source ===
Community-created, free and open-source drivers exist as an alternative to the drivers released by Nvidia. Open-source drivers are developed primarily for Linux, though there may be ports to other operating systems. The most prominent alternative driver is the reverse-engineered free and open-source nouveau graphics device driver. Nvidia has publicly stated that it will not provide any support for such additional device drivers, although it has contributed code to the Nouveau driver.
Free and open-source drivers support a large portion (but not all) of the features available in GeForce-branded cards. For example, as of January 2014 the nouveau driver lacked support for GPU and memory clock frequency adjustments and the associated dynamic power management. Nvidia's proprietary drivers also consistently perform better than nouveau in various benchmarks. However, as of August 2014 and version 3.16 of the Linux kernel mainline, contributions by Nvidia allowed partial support for GPU and memory clock frequency adjustments to be implemented.
=== Licensing and privacy issues ===
The license has common terms against reverse engineering and copying, and it disclaims warranties and liability.
Starting in 2016 the GeForce license says Nvidia "SOFTWARE may access, collect non-personally identifiable information about, update, and configure Customer's system in order to properly optimize such system for use with the SOFTWARE." The privacy notice goes on to say, "We are not able to respond to "Do Not Track" signals set by a browser at this time. We also permit third party online advertising networks and social media companies to collect information... We may combine personal information that we collect about you with the browsing and tracking information collected by these [cookies and beacons] technologies."
The software configures the user's system to optimize its use, and the license says, "NVIDIA will have no responsibility for any damage or loss to such system (including loss of data or access) arising from or relating to (a) any changes to the configuration, application settings, environment variables, registry, drivers, BIOS, or other attributes of the system (or any part of such system) initiated through the SOFTWARE".
=== GeForce Experience ===
GeForce Experience is a software suite developed by Nvidia that served as a companion application for PCs equipped with Nvidia graphics cards. Initially released in 2013, it was designed to enhance the gaming experience by providing performance optimization tools, driver management, and various capture and streaming features.
One of its core functions was the ability to optimize game settings automatically based on the user's hardware configuration, helping to strike a balance between visual quality and performance. It also allowed users to manage driver updates seamlessly, particularly through the distribution of "Game Ready Drivers," which were released in sync with major game launches to ensure optimal performance from day one.
GeForce Experience included Nvidia ShadowPlay, a popular feature that enabled gameplay recording and live streaming with minimal performance impact. It also featured Nvidia Ansel, a tool for capturing high-resolution, 360-degree, and HDR in-game screenshots, as well as Nvidia Freestyle, which allowed gamers to apply real-time visual filters. Laptop users benefited from features like Battery Boost, which helped conserve battery life while gaming by intelligently adjusting system performance.
By August 2017, the software had been installed on over 90 million PCs, making it one of the most widely used applications among gamers. Despite its broad adoption, GeForce Experience faced ongoing criticism for its resource usage, mandatory login requirement, and occasional user experience issues. One major controversy stemmed from a critical security vulnerability discovered before a patch released on March 26, 2019. The vulnerability exposed users to remote code execution, denial of service, and privilege escalation attacks. Additionally, the software was known to force a system restart after installing new drivers, initiating a 60-second countdown that offered no option to cancel or postpone.
On November 12, 2024, Nvidia officially retired GeForce Experience and launched its successor, the Nvidia App, with version 1.0. The new application was designed to modernize the user interface and streamline the experience, offering faster performance, better integration of features, and a more intuitive layout. It consolidated key tools like game optimization, driver updates, and hardware monitoring into a single platform, while also enhancing support for content creators through deeper integration with Nvidia Studio technologies.
The transition consolidated Nvidia's consumer software into a single application serving both gamers and creators.
=== Nvidia App ===
The Nvidia App is a program that is intended to replace both GeForce Experience and the Nvidia Control Panel which can be downloaded from Nvidia's website. In August 2024, it was in a beta version. On November 12, 2024, version 1.0 was released, marking its stable release.
New features include an overhauled user interface, a new in-game overlay, support for ShadowPlay with 120 fps, as well as RTX HDR and RTX Dynamic Vibrance, which are AI-based in-game filters that enable HDR and increase color saturation in any DirectX 9 (and newer) or Vulkan game, respectively.
The Nvidia App also features Auto Tuning, which adjusts the GPU's clock rate based on regular hardware scans to ensure optimal performance. According to Nvidia, this feature will not cause any damage to the GPU and retain its warranty. However, it might cause instability issues. The feature is similar to the GeForce Experience's "Enable automatic tuning" option, which was released in 2021, with the difference being that this was a one-off overclocking feature that did not adjust the GPU's clock speed on a regular basis.
In January 2025, Nvidia added Smooth Motion to the Nvidia App, a feature similar to Frame Generation that generates an extra frame between two natively rendered frames. Because the feature is driver-based, it also works in games that do not support DLSS's Frame Generation option. At release the feature was only available on GeForce 50 series GPUs, though Nvidia stated that support for GeForce 40 series GPUs would be added in the future.
== References ==
== External links ==
GeForce product page on Nvidia's website
GeForce powered games on Nvidia's website
TechPowerUp GPU Specs Database
A network processor is an integrated circuit which has a feature set specifically targeted at the networking application domain.
Network processors are typically software programmable devices and would have generic characteristics similar to general purpose central processing units that are commonly used in many different types of equipment and products.
== History of development ==
In modern telecommunications networks, information (voice, video, data) is transferred as packet data (termed packet switching) which is in contrast to older telecommunications networks that carried information as analog signals such as in the public switched telephone network (PSTN) or analog TV/Radio networks. The processing of these packets has resulted in the creation of integrated circuits (IC) that are optimised to deal with this form of packet data. Network processors have specific features or architectures that are provided to enhance and optimise packet processing within these networks.
Network processors have evolved into ICs with specific functions. This evolution has resulted in more complex and more flexible ICs being created. The newer circuits are programmable and thus allow a single hardware IC design to undertake a number of different functions, where the appropriate software is installed.
Network processors are used in the manufacture of many different types of network equipment such as:
Routers, software routers and switches (Inter-network processors)
Firewalls
Session border controllers
Intrusion detection devices
Intrusion prevention devices
Network monitoring systems
Network security (secure cryptoprocessors)
=== Reconfigurable Match-Tables ===
Reconfigurable Match-Tables were introduced in 2013 to allow switches to operate at high speeds while remaining flexible with respect to the network protocols running on them and the processing done to packets. The P4 language is used to program the chips. The company Barefoot Networks was based around these processors and was purchased by Intel in 2019.
An RMT pipeline consists of three main stages: the programmable parser, the Match-Action tables, and the programmable deparser. The parser reads the packet in chunks and processes them to determine which protocols the packet uses (Ethernet, VLAN, IPv4, and so on), extracting certain fields into the Packet Header Vector (PHV). Some fields in the PHV may be reserved for special uses, such as flags for present headers or the total packet length. Both the protocols recognized and the fields extracted are typically programmable. The Match-Action tables are a series of units that read an input PHV and match certain fields in it using a crossbar and CAM memory; the result of a match is a wide instruction that operates on one or more fields of the PHV, together with data to support that instruction. The output PHV is then sent to the next Match-Action stage or to the deparser. The deparser takes the PHV along with the original packet and its metadata (to fill in bits that were not extracted into the PHV) and outputs the modified packet as chunks. Like the parser, it is typically programmable and may reuse some of the parser's configuration.
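The parser → match-action → deparser flow above can be sketched in software. This is a hedged toy model, not a real P4 program: the field names, table contents, and actions are all invented for illustration.

```python
def parse(packet):
    """Toy parser: extract header fields from the packet into a PHV."""
    return {"dst": packet["dst"], "ttl": packet["ttl"]}

def match_action(phv, table):
    """Toy Match-Action stage: match the 'dst' field; on a hit, the
    'instruction' rewrites fields of the PHV."""
    entry = table.get(phv["dst"])
    if entry is not None:
        phv["egress_port"] = entry  # action data: forward to this port
        phv["ttl"] -= 1             # action: decrement TTL
    return phv

def deparse(phv, original):
    """Toy deparser: merge modified PHV fields back into the packet,
    using the original packet to fill in bits not carried in the PHV."""
    packet = dict(original)
    packet.update({k: v for k, v in phv.items()
                   if k in packet or k == "egress_port"})
    return packet

# Illustrative forwarding table: destination address -> egress port.
forwarding_table = {"10.0.0.1": 3, "10.0.0.2": 7}

pkt = {"dst": "10.0.0.1", "ttl": 64, "payload": b"data"}
out = deparse(match_action(parse(pkt), forwarding_table), pkt)
print(out)  # TTL decremented, egress port attached, payload untouched
```

A hardware RMT pipeline differs in that the match is done in parallel against CAM/TCAM memory and the stages are physical units, but the data flow through the PHV is the same.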
FlexNIC attempts to apply this model to Network Interface Controllers allowing servers to send and receive packets at high speeds while maintaining protocol flexibility and without increasing the CPU overhead.
== Generic functions ==
In the generic role as a packet processor, a number of optimised features or functions are typically present in a network processor, which include:
Pattern matching – the ability to find specific patterns of bits or bytes within packets in a packet stream.
Key lookup – the ability to quickly undertake a database lookup using a key (typically an address in a packet) to find a result, typically routing information.
Computation
Data bitfield manipulation – the ability to change certain data fields contained in the packet as it is being processed.
Queue management – as packets are received, processed and scheduled to be sent onwards, they are stored in queues.
Control processing – the micro operations of processing a packet are controlled at a macro level which involves communication and orchestration with other nodes in a system.
Quick allocation and re-circulation of packet buffers.
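The key-lookup function above is typically a longest-prefix match against a routing table. A minimal sketch in software (real network processors use specialized TCAM or trie hardware for this; the interface names and routes below are made up):

```python
# Illustrative longest-prefix-match "key lookup": given a destination
# address, find the most specific matching route.
import ipaddress

routes = {
    "10.0.0.0/8": "eth0",
    "10.1.0.0/16": "eth1",
    "0.0.0.0/0": "eth2",   # default route
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    # among all matching prefixes, the longest one wins
    best = max(
        (net for net in map(ipaddress.ip_network, routes) if addr in net),
        key=lambda net: net.prefixlen,
    )
    return routes[str(best)]

print(lookup("10.1.2.3"))   # eth1 (the /16 beats the /8)
print(lookup("192.0.2.1"))  # eth2 (only the default route matches)
```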
== Architectural paradigms ==
In order to deal with high data-rates, several architectural paradigms are commonly used:
Pipeline of processors – each stage of the pipeline consists of a processor performing one of the functions listed above.
Parallel processing with multiple processors, often including multithreading.
Specialized microcoded engines to more efficiently accomplish the tasks at hand.
With the advent of multicore architectures, network processors can be used for higher layer (L4-L7) processing.
Additionally, traffic management, which is a critical element in L2-L3 network processing and used to be executed by a variety of co-processors, has become an integral part of the network processor architecture, and a substantial part of its silicon area ("real estate") is devoted to the integrated traffic manager. Modern network processors are also equipped with low-latency, high-throughput on-chip interconnection networks optimized for the exchange of small messages among cores (a few data words). Such networks can be used as an efficient facility for inter-core communication alongside the standard use of shared memory.
== Applications ==
Using the generic functions of the network processor, a software program implements an application that the network processor executes, resulting in the piece of physical equipment performing a task or providing a service. Some of the application types typically implemented as software running on network processors are:
Packet or frame discrimination and forwarding, that is, the basic operation of a router or switch.
Quality of service (QoS) enforcement – identifying different types or classes of packets and providing preferential treatment for some types or classes of packet at the expense of other types or classes of packet.
Access Control functions – determining whether a specific packet or stream of packets should be allowed to traverse the piece of network equipment.
Encryption of data streams – built in hardware-based encryption engines allow individual data flows to be encrypted by the processor.
TCP offload processing
== See also ==
Content processor
Multi-core processor
Knowledge-based processor
Active networking
Computer engineering
Internet
List of defunct network processor companies
Network Processing Forum
Queueing theory
Network on a chip
Network interface controller
== References ==
Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.
Computers have been capable of generating 2D images such as simple lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics.
Different techniques for rendering now exist, such as ray-tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience.
== Principles of real-time 3D computer graphics ==
The goal of computer graphics is to generate computer-generated images, or frames, using certain desired metrics. One such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera to the world for detailed rendering—this expensive operation can take hours or days to render a single frame.
Real-time graphics systems must render each image in less than 1/30th of a second. Ray tracing is far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle gets positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for displaying on a display screen. The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and then shadow mapping can alter that triangle's colors based on line-of-sight to light sources.
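The per-pixel inside test and depth comparison can be sketched in a few lines of software. This is a deliberately tiny illustration of the z-buffer principle (edge functions, interpolated depth, closest-fragment-wins), not an optimized rasterizer; the 8×8 framebuffer and the two triangles are arbitrary.

```python
# Minimal z-buffer triangle rasterizer: for each pixel inside the
# triangle, keep the fragment only if it is closer than what the
# depth buffer already stores.
W, H = 8, 8
depth = [[float("inf")] * W for _ in range(H)]
color = [[0] * W for _ in range(H)]

def edge(ax, ay, bx, by, px, py):
    # signed area test: which side of edge (a, b) is point p on?
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def raster_triangle(v0, v1, v2, col):
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
    area = edge(x0, y0, x1, y1, x2, y2)
    for y in range(H):
        for x in range(W):
            w0 = edge(x1, y1, x2, y2, x, y)
            w1 = edge(x2, y2, x0, y0, x, y)
            w2 = edge(x0, y0, x1, y1, x, y)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:           # pixel is inside
                z = (w0 * z0 + w1 * z1 + w2 * z2) / area  # interpolate depth
                if z < depth[y][x]:                       # z-test: closer wins
                    depth[y][x] = z
                    color[y][x] = col

raster_triangle((0, 0, 1.0), (7, 0, 1.0), (0, 7, 1.0), 1)  # far triangle
raster_triangle((0, 0, 0.5), (4, 0, 0.5), (0, 4, 0.5), 2)  # near triangle
print(color[1][1])  # 2: where both overlap, the nearer triangle wins
```

GPU rasterizer hardware performs the same edge-function and depth tests for many pixels in parallel rather than in nested loops.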
=== Video game graphics ===
Real-time graphics optimizes image quality subject to time and hardware constraints. GPUs and other advances increased the image quality that real-time graphics can produce. GPUs are capable of handling millions of triangles per frame, and modern DirectX/OpenGL class hardware is capable of generating complex effects, such as shadow volumes, motion blurring, and triangle generation, in real-time. The advancement of real-time graphics is evidenced in the progressive improvements between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games. Cutscenes are typically rendered in real-time—and may be interactive. Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate.
=== Advantages ===
Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. When real-time graphics are used in films, the director has complete control of what has to be drawn on each frame, which can sometimes involve lengthy decision-making. Teams of people are typically involved in the making of these decisions.
In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display's response time is far slower than the input device's—this is justified by the immense difference between the (fast) response time of human motion and the (slow) perceptual speed of the human visual system. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the current Wii remote) typically take much longer to achieve than comparable advancements in display devices.
Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen—especially where to draw objects in the scene. These techniques help realistically imitate real world behavior (the temporal dimension, not the spatial dimensions), adding to the computer graphics' degree of realism.
Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed. Some parameter adjustments in fractal generating software may be made while viewing changes to the image in real time.
== Rendering pipeline ==
The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics. Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects (an object that has width, length, and depth), light sources, lighting models, textures and more.
=== Architecture ===
The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization.
=== Application stage ===
The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. This stage may perform processing such as collision detection, speed-up techniques, animation and force feedback, in addition to handling user input.
Collision detection is an example of an operation that would be performed in the application stage. Collision detection uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller.
The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline.
=== Geometry stage ===
The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs. Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages.
==== Model and view transformation ====
Before the final model is shown on the output device, the model is transformed into multiple spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the four specific ways of manipulating the shape or position of a point, line, or shape.
==== Lighting ====
In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis with the y-axis pointing upwards and the x-axis pointing to the right.
==== Projection ====
Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic projection (also called parallel) and perspective projection. The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection utilizes the concept that if the distance between the observer and model increases, the model appears smaller than before. Essentially, perspective projection mimics human sight.
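The difference between the two projection types can be shown with a two-line sketch. Under perspective projection, a point's screen position is divided by its depth, so equal offsets appear smaller when farther away; orthographic projection simply drops the depth. The focal length here is an illustrative choice, not a standard value.

```python
# Sketch of the two projection types, projecting a 3D point to 2D.

def perspective(x, y, z, f=1.0):
    # project onto the plane z = f: similar triangles give x' = f*x/z
    return (f * x / z, f * y / z)

def orthographic(x, y, z):
    # drop the depth coordinate; parallel lines remain parallel
    return (x, y)

near = perspective(1.0, 1.0, 2.0)   # point at depth 2
far = perspective(1.0, 1.0, 10.0)   # same offsets, five times farther
print(near)  # (0.5, 0.5)
print(far)   # (0.1, 0.1) -- appears smaller, mimicking human sight
print(orthographic(1.0, 1.0, 10.0))  # (1.0, 1.0) -- size unchanged
```

In practice both projections are expressed as 4×4 matrices so they compose with the model and view transformations, with the division by z performed as the perspective divide after the matrix multiply.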
==== Clipping ====
Clipping is the process of removing primitives that are outside of the view box in order to ease the work of the rasterizer stage. Primitives that partially overlap the view box are cut at its boundary, producing new triangles that are passed to the next stage.
==== Screen mapping ====
The purpose of screen mapping is to find the screen coordinates of the primitives that survived the clipping stage.
==== Rasterizer stage ====
The rasterizer stage applies color and turns the graphic elements into pixels or picture elements.
== See also ==
== References ==
== Bibliography ==
Möller, Tomas; Haines, Eric (1999). Real-Time Rendering (1st ed.). Natick, MA: A K Peters, Ltd.
Salvator, Dave (21 June 2001). "3D Pipeline". Extremetech.com. Extreme Tech. Archived from the original on 17 May 2008. Retrieved 2 Feb 2007.
Malhotra, Priya (July 2002). Issues involved in Real-Time Rendering of Virtual Environments (Master's). Blacksburg, VA: Virginia Tech. pp. 20–31. hdl:10919/35382. Retrieved 31 January 2007.
Haines, Eric (1 February 2007). "Real-Time Rendering Resources". Retrieved 12 Feb 2007.
== External links ==
RTR Portal – a trimmed-down "best of" set of links to resources
In computer architecture, shared graphics memory refers to a design where the graphics chip does not have its own dedicated memory, and instead shares the main system RAM with the CPU and other components.
This design is used with many integrated graphics solutions to reduce the cost and complexity of the motherboard design, as no additional memory chips are required on the board. There is usually some mechanism (via the BIOS or a jumper setting) to select the amount of system memory to use for graphics, which means that the graphics system can be tailored to only use as much RAM as is actually required, leaving the rest free for applications. A side effect of this is that when some RAM is allocated for graphics, it becomes effectively unavailable for anything else, so an example computer with 512 MiB RAM set up with 64 MiB graphics RAM will appear to the operating system and user to only have 448 MiB RAM installed.
The disadvantage of this design is lower performance because system RAM usually runs slower than dedicated graphics RAM, and there is more contention as the memory bus has to be shared with the rest of the system. It may also cause performance issues with the rest of the system if it is not designed with the fact in mind that some RAM will be 'taken away' by graphics.
A similar approach, with similar results, was used in some SGI computers, most notably the O2/O2+. The memory in these machines is a single fast pool (2.1 GB per second in 1996) shared between the system and graphics. Sharing is performed on demand, including pointer-redirection communication between the main system and the graphics subsystem. This is called Unified Memory Architecture (UMA).
== History ==
Most early personal computers used a shared memory design with graphics hardware sharing memory with the CPU. Such designs saved money as a single bank of DRAM could be used for both display and program. Examples of this include the Apple II computer, the Commodore 64, the Radio Shack Color Computer, the Atari ST, and the Apple Macintosh.
A notable exception was the IBM PC. Graphics display was facilitated by the use of an expansion card with its own memory plugged into an ISA slot.
The first IBM PC to use a shared memory architecture was the IBM PCjr, released in 1984. Video memory was shared with the first 128 KiB of RAM. The exact size of the video memory could be reconfigured by software to meet the needs of the current program.
An early hybrid system was the Commodore Amiga which could run as a shared memory system, but would load executable code preferentially into non-shared "fast RAM" if it was available.
== See also ==
IBM PCjr
Video memory
Shared memory, in general, other than graphics
== References ==
== External links ==
PC Magazine Definition for SMA
IBM PCjr information
A graphics library or graphics API is a program library designed to aid in rendering computer graphics to a monitor. This typically involves providing optimized versions of functions that handle common rendering tasks. This can be done purely in software and running on the CPU, common in embedded systems, or being hardware accelerated by a GPU, more common in PCs. By employing these functions, a program can assemble an image to be output to a monitor. This relieves the programmer of the task of creating and optimizing these functions, and allows them to focus on building the graphics program. Graphics libraries are mainly used in video games and simulations.
The use of graphics libraries in connection with video production systems, such as Pixar RenderMan, is not covered here.
Some APIs use Graphics Library (GL) in their name, notably OpenGL and WebGL.
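What a purely software graphics library provides can be sketched in miniature: a framebuffer in main memory plus an optimized drawing primitive. The primitive below is Bresenham's line algorithm, a classic example of the kind of routine such libraries supply so the programmer does not have to; the 8×4 framebuffer is an arbitrary toy size.

```python
# Minimal sketch of a software graphics library: a framebuffer plus
# one optimized primitive (Bresenham's line algorithm, which draws
# lines using only integer arithmetic).
W, H = 8, 4
fb = [[0] * W for _ in range(H)]   # the framebuffer: one value per pixel

def draw_line(x0, y0, x1, y1, color=1):
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        fb[y0][x0] = color          # plot the current pixel
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err                # step in x and/or y, whichever
        if e2 >= dy:                # keeps the error term smallest
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

draw_line(0, 0, 7, 3)
print(fb[0][0], fb[3][7])  # 1 1 -- both endpoints are set
```

A hardware-accelerated library exposes the same kind of interface but hands the work to the GPU instead of looping on the CPU.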
== Examples ==
Allegro
ANGLE
Cairo (graphics)
DFPSR https://dawoodoz.com/dfpsr.html — GUI toolkit and software renderer
DirectX — a library created by Microsoft to run under Windows operating systems and Xbox
Display PostScript
emWin — an Embedded Graphics Library
FLTK — GUI Toolkit and Graphics Library
GTK — a GUI toolkit
Mesa 3D — a library that implements OpenGL and Vulkan
Mobile 3D Graphics API
Qt — cross-platform application framework
Quartz (graphics layer)
SFML
SIGIL — Sound, Input, and Graphics Integration Library
Simple DirectMedia Layer (SDL)
Skia Graphics Library
X Window System
== See also ==
List of 3D graphics libraries
List of open source code libraries
Anti-Grain Geometry
Software development kit (SDK)
OpenGL ES
Graphical Widget toolkit graphical control elements drawn on bitmap displays
== References ==
Molecular modeling on GPU is the technique of using a graphics processing unit (GPU) for molecular simulations.
In 2007, Nvidia introduced video cards that could be used not only to display graphics but also for scientific calculations. These cards include many arithmetic units (as of 2016, up to 3,584 in the Tesla P100) working in parallel. Before this, the computational power of video cards was used purely to accelerate graphics calculations. The new features of these cards made it possible to develop parallel programs in a high-level application programming interface (API) named CUDA. This technology substantially simplified programming by enabling programs to be written in C/C++. More recently, OpenCL has allowed cross-platform GPU acceleration.
Quantum chemistry calculations and molecular mechanics simulations (molecular modeling in terms of classical mechanics) are among the beneficial applications of this technology. Such cards can accelerate the calculations by tens of times, so a PC with such a card has power similar to that of a cluster of workstations based on conventional processors.
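The core of a molecular mechanics simulation is evaluating interactions over all particle pairs, a data-parallel workload that maps naturally onto a GPU's many arithmetic units. As a hedged illustration, the sketch below evaluates the Lennard-Jones pair potential with NumPy array operations; the same all-pairs structure is what a GPU kernel distributes over its cores, one pair (or particle) per thread.

```python
# The shape of a molecular mechanics workload: all-pairs Lennard-Jones
# potential energy, written as data-parallel array operations.
import numpy as np

def lj_energy(pos, epsilon=1.0, sigma=1.0):
    # pairwise displacement and distance matrices for N particles in 3D
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(pos), k=1)        # count each pair once
    sr6 = (sigma / r[iu]) ** 6
    # V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6), summed over pairs
    return float(np.sum(4 * epsilon * (sr6 ** 2 - sr6)))

# two particles separated by the minimum-energy distance r = 2^(1/6)*sigma,
# where the potential evaluates to exactly -epsilon
pos = np.array([[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]])
print(round(lj_energy(pos), 6))  # -1.0
```

Production MD codes avoid the O(N²) all-pairs matrix with neighbor lists and cutoffs, but the per-pair arithmetic is the part the GPU accelerates.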
== GPU accelerated molecular modelling software ==
=== Programs ===
Abalone – Molecular Dynamics (Benchmark)
ACEMD on GPUs since 2009 Benchmark
AMBER on GPUs version
Ascalaph on GPUs version – Ascalaph Liquid GPU
AutoDock – Molecular docking
BigDFT Ab initio program based on wavelet
BrianQC Quantum chemistry (HF and DFT) and molecular mechanics
Blaze ligand-based virtual screening
CHARMM – Molecular dynamics
CP2K Ab initio molecular dynamics
Desmond (software) on GPUs, workstations, and clusters
Firefly (formerly PC GAMESS)
FastROCS
GOMC – GPU Optimized Monte Carlo simulation engine
GPIUTMD – Graphical processors for Many-Particle Dynamics
GPU4PySCF – GPU accelerated plugin package for PySCF
GPUMD - A light weight general-purpose molecular dynamics code
GROMACS on GPUs
HALMD – Highly Accelerated Large-scale MD package
HOOMD-blue – Highly Optimized Object-oriented Many-particle Dynamics—Blue Edition
LAMMPS on GPUs version – lammps for accelerators
LIO – DFT-based GPU-optimized code
Octopus has support for OpenCL.
oxDNA – DNA and RNA coarse-grained simulations on GPUs
PWmat – Plane-Wave Density Functional Theory simulations
RUMD - Roskilde University Molecular Dynamics
TeraChem – Quantum chemistry and ab initio Molecular Dynamics
TINKER on GPUs.
VMD & NAMD on GPUs versions
YASARA runs MD simulations on all GPUs using OpenCL.
=== API ===
BrianQC – has an open C level API for quantum chemistry simulations on GPUs, provides GPU-accelerated version of Q-Chem and PSI
OpenMM – an API for accelerating molecular dynamics on GPUs, v1.0 provides GPU-accelerated version of GROMACS
mdcore – an open-source platform-independent library for molecular dynamics simulations on modern shared-memory parallel architectures.
=== Distributed computing projects ===
GPUGRID distributed supercomputing infrastructure
Folding@home distributed computing project
Exscalate4Cov large-scale virtual screening experiment
== See also ==
== References ==
== External links ==
More links for classical and quantum chemistry on GPUs
S3 Graphics, Ltd. was an American computer graphics company. The company sold the Trio, ViRGE, Savage, and Chrome series of graphics processors. Struggling against competition from 3dfx Interactive, ATI and Nvidia, it merged with hardware manufacturer Diamond Multimedia in 1999. The resulting company renamed itself to SONICblue Incorporated, and, two years later, the graphics portion was spun off into a new joint effort with VIA Technologies. The new company focused on the mobile graphics market. VIA Technologies' stake in S3 Graphics was purchased by HTC in 2011.
== History ==
S3 was founded and incorporated in January 1989 by Dado Banatao and Ronald Yara. It was named S3 as it was Banatao's third startup company.
The company's first products were among the earliest graphical user interface (GUI) accelerators. These chips were popular with video card manufacturers, and their followup designs, including the Trio64, made strong inroads with OEMs. S3 took over the high end 2D market just prior to the popularity of 3D accelerators.
S3's first 3D accelerator chips, the ViRGE series, controlled half of the market early on but could not compete against the high end 3D accelerators from ATI, Nvidia, and 3Dfx. In some cases, the chips performed worse than software-based solutions without an accelerator. As S3 lost market share, their offerings competed in the mid-range market. Their next design, the Savage 3D, was released early and suffered from driver issues, but it introduced S3TC, which became an industry standard. S3 bought Number Nine's assets in 1999, then merged with Diamond Multimedia. The resulting company renamed itself SONICblue, refocused on consumer electronics, and sold its graphics business to VIA Technologies. Savage-derived chips were integrated into numerous VIA motherboard chipsets. Subsequent discrete derivations carried the brand names DeltaChrome and GammaChrome.
In July 2011, HTC Corporation announced they were buying VIA Technologies' stake in S3 Graphics, thus becoming the majority owner of S3 Graphics. In November, the United States International Trade Commission ruled against S3 in a patent dispute with Apple.
== Graphics controllers ==
S3 911, 911A (June 10, 1991) - S3's first Windows accelerators (16/256-color, high-color acceleration)
S3 924 - 24-bit true-color acceleration
S3 801, 805, 805i - mainstream DRAM VLB Windows accelerators (16/256-color, high-color acceleration)
S3 928 - 24/32-bit true-color acceleration, DRAM or VRAM
S3 805p, 928p - S3's first PCI support
S3 Vision864, Vision964 (1994) - 2nd generation Windows accelerators (64-bit wide framebuffer). Support MPEG-1 video acceleration.
S3 Vision868, Vision968 - S3's first motion video accelerator (zoom and YUV→RGB conversion)
S3 Trio 32, 64, 64V+, 64V2 (1995) - S3's first integrated (RAMDAC+VGA) accelerator. The 64-bit versions were S3's most successful product range.
ViRGE (no suffix), VX, DX, GX, GX2, Trio3D, Trio3D/2X - S3's first Windows 3D-accelerators. Notoriously poor 3D. Sold well to OEMs mainly because of low price and excellent 2D-performance.
Savage 3D (1998), 4 (1999), 2000 (2000) - S3's first recognizably modern 3D hardware implementation. Poor yields meant actual clock speeds were 30% below expectations, and buggy drivers caused further problems. S3 Texture Compression went on to become an industry standard, and the Savage3D's DVD acceleration was market leading at introduction. Savage2000 was announced as the first chip with integrated Transformation and Lighting (S3TL) co-processor.
Aurora64V+, S3 ViRGE/MX, SuperSavage, SavageXP - Mobile chipsets
ProSavage, Twister, UniChrome, Chrome 9 - Integrated implementations of Savage chipset for VIA motherboards
GammaChrome, DeltaChrome, Chrome 20 series, Chrome 440 series, Chrome 500 series - Discrete cards post acquisition by VIA.
S3 GenDAC, SDAC - VGA RAMDAC with high/true-color bypass (SDAC had integrated PLLs, dot-clocks, and hardware Windows cursor)
=== Media chipsets ===
Sonic/AD sound chipset - A programmable, sigma-delta audio DAC, featuring an integrated PLL, stereo 16-bit analogue output
SonicVibes - PCI Audio Accelerator
Scenic/MX2 - MPEG Decoder
== References ==
== External links ==
Official website at the Wayback Machine (archived 2017-01-04)
S3.com products cached from 1997
VIA Graphics
Firingsquad: S3: From Virge to Savage 2000
Xbitlabs: The Return of S3: DeltaChrome Graphics Card Review
Techreport: A look at S3's DeltaChrome
The Inquirer: S3's DirectX 10 Roadmap
General-purpose computing on graphics processing units (GPGPU, or less often GPGP) is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing.
Essentially, a GPGPU pipeline is a kind of parallel processing between one or more GPUs and CPUs that analyzes data as if it were in image or other graphic form. While GPUs operate at lower frequencies, they typically have many times the number of cores. Thus, GPUs can process far more pictures and graphical data per second than a traditional CPU. Migrating data into graphical form and then using the GPU to scan and analyze it can create a large speedup.
GPGPU pipelines were developed at the beginning of the 21st century for graphics processing (e.g. for better shaders). These pipelines were found to fit scientific computing needs well, and have since been developed in this direction.
The best-known GPGPUs are Nvidia Tesla that are used for Nvidia DGX, alongside AMD Instinct and Intel Gaudi.
== History ==
In principle, any arbitrary Boolean function, including addition, multiplication, and other mathematical functions, can be built up from a functionally complete set of logic operators. In 1987, Conway's Game of Life became one of the first examples of general-purpose computing using an early stream processor called a blitter to invoke a special sequence of logical operations on bit vectors.
General-purpose computing on GPUs became more practical and popular after about 2001, with the advent of both programmable shaders and floating point support on graphics processors. Notably, problems involving matrices and/or vectors – especially two-, three-, or four-dimensional vectors – were easy to translate to a GPU, which acts with native speed and support on those types. A significant milestone for GPGPU was the year 2003 when two research groups independently discovered GPU-based approaches for the solution of general linear algebra problems on GPUs that ran faster than on CPUs. These early efforts to use GPUs as general-purpose processors required reformulating computational problems in terms of graphics primitives, as supported by the two major APIs for graphics processors, OpenGL and DirectX. This cumbersome translation was obviated by the advent of general-purpose programming languages and APIs such as Sh/RapidMind, Brook and Accelerator.
These were followed by Nvidia's CUDA, which allowed programmers to ignore the underlying graphical concepts in favor of more common high-performance computing concepts. Newer, hardware-vendor-independent offerings include Microsoft's DirectCompute and Apple/Khronos Group's OpenCL. This means that modern GPGPU pipelines can leverage the speed of a GPU without requiring full and explicit conversion of the data to a graphical form.
Mark Harris, the founder of GPGPU.org, claims he coined the term GPGPU.
== Implementations ==
Any language that allows the code running on the CPU to poll a GPU shader for return values, can create a GPGPU framework. Programming standards for parallel computing include OpenCL (vendor-independent), OpenACC, OpenMP and OpenHMPP.
As of 2016, OpenCL is the dominant open general-purpose GPU computing language, and is an open standard defined by the Khronos Group. OpenCL provides a cross-platform GPGPU platform that additionally supports data parallel compute on CPUs. OpenCL is actively supported on Intel, AMD, Nvidia, and ARM platforms. The Khronos Group has also standardised and implemented SYCL, a higher-level programming model for OpenCL as a single-source domain specific embedded language based on pure C++11.
The dominant proprietary framework is Nvidia CUDA. Nvidia launched CUDA in 2006, a software development kit (SDK) and application programming interface (API) that allows using the programming language C to code algorithms for execution on GeForce 8 series and later GPUs.
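The CUDA programming model has the programmer write a scalar "kernel" that computes one output element; the hardware launches one thread per index so the elements are computed in parallel. As a hedged sketch, the model can be mimicked in plain Python with the launch simulated by an ordinary loop (on a GPU, these iterations run concurrently across cores):

```python
# Sketch of the GPU kernel model: per-element work, launched over n indices.

def saxpy_kernel(i, a, x, y, out):
    # one thread's work: a single scaled multiply-add, out = a*x + y
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    for i in range(n):          # on a GPU these run in parallel, one per thread
        kernel(i, *args)

n = 4
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * n
launch(saxpy_kernel, n, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

In CUDA C the same kernel would compute its index from built-in thread and block identifiers instead of receiving it as an argument.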
ROCm, launched in 2016, is AMD's open-source response to CUDA. It is, as of 2022, on par with CUDA with regards to features, and still lacking in consumer support.
OpenVIDIA was developed at University of Toronto between 2003–2005, in collaboration with Nvidia.
Altimesh Hybridizer created by Altimesh compiles Common Intermediate Language to CUDA binaries. It supports generics and virtual functions. Debugging and profiling is integrated with Visual Studio and Nsight. It is available as a Visual Studio extension on Visual Studio Marketplace.
Microsoft introduced the DirectCompute GPU computing API, released with the DirectX 11 API.
Alea GPU, created by QuantAlea, introduces native GPU computing capabilities for the Microsoft .NET languages F# and C#. Alea GPU also provides a simplified GPU programming model based on GPU parallel-for and parallel aggregate using delegates and automatic memory management.
MATLAB supports GPGPU acceleration using the Parallel Computing Toolbox and MATLAB Distributed Computing Server, and third-party packages like Jacket.
GPGPU processing is also used to simulate Newtonian physics by physics engines, and commercial implementations include Havok Physics, FX and PhysX, both of which are typically used for computer and video games.
C++ Accelerated Massive Parallelism (C++ AMP) is a library that accelerates execution of C++ code by exploiting the data-parallel hardware on GPUs.
=== Mobile computers ===
Due to a trend of increasing power of mobile GPUs, general-purpose programming became available also on the mobile devices running major mobile operating systems.
Google Android 4.2 enabled running RenderScript code on the mobile device GPU. Renderscript has since been deprecated in favour of first OpenGL compute shaders and later Vulkan Compute. OpenCL is available on many Android devices, but is not officially supported by Android. Apple introduced the proprietary Metal API for iOS applications, able to execute arbitrary code through Apple's GPU compute shaders.
== Hardware support ==
Computer video cards are produced by various vendors, such as Nvidia, AMD. Cards from such vendors differ on implementing data-format support, such as integer and floating-point formats (32-bit and 64-bit). Microsoft introduced a Shader Model standard, to help rank the various features of graphic cards into a simple Shader Model version number (1.0, 2.0, 3.0, etc.).
=== Integer numbers ===
Pre-DirectX 9 video cards only supported paletted or integer color types. Sometimes an additional alpha value is included, for transparency. Common formats are:
8 bits per pixel – Sometimes palette mode, where each value is an index in a table with the real color value specified in one of the other formats. Sometimes three bits for red, three bits for green, and two bits for blue.
16 bits per pixel – Usually the bits are allocated as five bits for red, six bits for green, and five bits for blue.
24 bits per pixel – There are eight bits for each of red, green, and blue.
32 bits per pixel – There are eight bits for each of red, green, blue, and alpha.
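The 16-bits-per-pixel layout above can be made concrete with a small packing routine (a Python sketch; the bit-replication trick used when unpacking is one common convention for spreading 5- and 6-bit channels back over the 0–255 range, not the only one):

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B channels into a 16-bit 5-6-5 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(v):
    """Expand a 16-bit 5-6-5 value back to approximate 8-bit channels."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    # Replicate the high bits into the low bits so full intensity maps to 255.
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

print(hex(pack_rgb565(255, 255, 255)))  # 0xffff — white uses every bit
print(unpack_rgb565(0xF800))            # (255, 0, 0) — pure red
```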
=== Floating-point numbers ===
For early fixed-function or limited-programmability graphics (i.e., up to and including DirectX 8.1-compliant GPUs), integer formats were sufficient because they are also the representation used in displays. This representation does have certain limitations: given sufficient graphics processing power, graphics programmers would prefer better formats, such as floating-point data formats, to obtain effects such as high-dynamic-range imaging. Many GPGPU applications require floating-point accuracy, which arrived with video cards conforming to the DirectX 9 specification.
DirectX 9 Shader Model 2.x suggested the support of two precision types: full and partial precision. Full precision support could either be FP32 or FP24 (floating point 32- or 24-bit per component) or greater, while partial precision was FP16. ATI's Radeon R300 series of GPUs supported FP24 precision only in the programmable fragment pipeline (although FP32 was supported in the vertex processors) while Nvidia's NV30 series supported both FP16 and FP32; other vendors such as S3 Graphics and XGI supported a mixture of formats up to FP24.
The implementations of floating point on Nvidia GPUs are mostly IEEE compliant; however, this is not true across all vendors. This has implications for correctness which are considered important to some scientific applications. While 64-bit floating point values (double precision float) are commonly available on CPUs, these are not universally supported on GPUs. Some GPU architectures sacrifice IEEE compliance, while others lack double precision entirely. Efforts have been made to emulate double-precision floating point values on GPUs; however, the speed tradeoff can negate any benefit of offloading the computation onto the GPU in the first place.
=== Vectorization ===
Most operations on the GPU operate in a vectorized fashion: one operation can be performed on up to four values at once. For example, if one color ⟨R1, G1, B1⟩ is to be modulated by another color ⟨R2, G2, B2⟩, the GPU can produce the resulting color ⟨R1*R2, G1*G2, B1*B2⟩ in one operation. This functionality is useful in graphics because almost every basic data type is a vector (either 2-, 3-, or 4-dimensional). Examples include vertices, colors, normal vectors, and texture coordinates. Many other applications can put this to good use, and because of their higher performance, vector instructions, termed single instruction, multiple data (SIMD), have long been available on CPUs.
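The color-modulation example can be written out directly; on the GPU the three multiplications below would issue as a single vector instruction, whereas this Python sketch performs them one by one:

```python
def modulate(c1, c2):
    """Component-wise color modulation — one vector op on a GPU.
    Colors are (R, G, B) tuples with channels in [0.0, 1.0]."""
    return tuple(a * b for a, b in zip(c1, c2))

print(modulate((1.0, 0.5, 0.25), (0.5, 0.5, 1.0)))  # (0.5, 0.25, 0.25)
```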
== GPU vs. CPU ==
Originally, data was simply passed one-way from a central processing unit (CPU) to a graphics processing unit (GPU), then to a display device. As time progressed, however, it became valuable for GPUs to store at first simple, then complex structures of data to be passed back to the CPU that analyzed an image, or a set of scientific-data represented as a 2D or 3D format that a video card can understand. Because the GPU has access to every draw operation, it can analyze data in these forms quickly, whereas a CPU must poll every pixel or data element much more slowly, as the speed of access between a CPU and its larger pool of random-access memory (or in an even worse case, a hard drive) is slower than GPUs and video cards, which typically contain smaller amounts of more expensive memory that is much faster to access. Transferring the portion of the data set to be actively analyzed to that GPU memory in the form of textures or other easily readable GPU forms results in speed increase. The distinguishing feature of a GPGPU design is the ability to transfer information bidirectionally back from the GPU to the CPU; generally the data throughput in both directions is ideally high, resulting in a multiplier effect on the speed of a specific high-use algorithm.
GPGPU pipelines may improve efficiency on especially large data sets and/or data containing 2D or 3D imagery. It is used in complex graphics pipelines as well as scientific computing; more so in fields with large data sets like genome mapping, or where two- or three-dimensional analysis is useful – especially at present biomolecule analysis, protein study, and other complex organic chemistry. An example of such applications is NVIDIA software suite for genome analysis.
Such pipelines can also vastly improve efficiency in image processing and computer vision, among other fields; as well as parallel processing generally. Some very heavily optimized pipelines have yielded speed increases of several hundred times the original CPU-based pipeline on one high-use task.
A simple example would be a GPU program that collects data about average lighting values as it renders some view from either a camera or a computer graphics program back to the main program on the CPU, so that the CPU can then make adjustments to the overall screen view. A more advanced example might use edge detection to return both numerical information and a processed image representing outlines to a computer vision program controlling, say, a mobile robot. Because the GPU has fast and local hardware access to every pixel or other picture element in an image, it can analyze and average it (for the first example) or apply a Sobel edge filter or other convolution filter (for the second) with much greater speed than a CPU, which typically must access slower random-access memory copies of the graphic in question.
GPGPU is fundamentally a software concept, not a hardware concept; it is a type of algorithm, not a piece of equipment. Specialized equipment designs may, however, even further enhance the efficiency of GPGPU pipelines, which traditionally perform relatively few algorithms on very large amounts of data. Massively parallelized, gigantic-data-level tasks thus may be parallelized even further via specialized setups such as rack computing (many similar, highly tailored machines built into a rack), which adds a third layer – many computing units each using many CPUs to correspond to many GPUs. Some Bitcoin "miners" used such setups for high-quantity processing.
=== Caches ===
Historically, CPUs have used hardware-managed caches, but the earlier GPUs only provided software-managed local memories. However, as GPUs are being increasingly used for general-purpose applications, state-of-the-art GPUs are being designed with hardware-managed multi-level caches which have helped the GPUs to move towards mainstream computing. For example, GeForce 200 series GT200 architecture GPUs did not feature an L2 cache, the Fermi GPU has 768 KiB last-level cache, the Kepler GPU has 1.5 MiB last-level cache, the Maxwell GPU has 2 MiB last-level cache, and the Pascal GPU has 4 MiB last-level cache.
=== Register file ===
GPUs have very large register files, which allow them to reduce context-switching latency. Register file size is also increasing over different GPU generations, e.g., the total register file size on Maxwell (GM200), Pascal and Volta GPUs are 6 MiB, 14 MiB and 20 MiB, respectively. By comparison, the size of a register file on CPUs is small, typically tens or hundreds of kilobytes.
=== Energy efficiency ===
The high performance of GPUs comes at the cost of high power consumption: under full load, a GPU can draw as much power as the rest of the PC system combined. The maximum power consumption of the Pascal series GPU (Tesla P100) was specified as 250 W.
== Classical GPGPU ==
Before CUDA was released in 2007, GPGPU was "classical" and involved repurposing graphics primitives. A standard sequence was:
Load arrays into textures
Draw a quadrangle
Apply pixel shaders and textures to quadrangle
Read out pixel values in the quadrangle as array
More examples are available in part 4 of GPU Gems 2.
=== Linear algebra ===
The use of GPUs for numerical linear algebra began at least as early as 2001. GPUs have been used for Gauss–Seidel solvers, conjugate gradient methods, and other iterative schemes.
== Stream processing ==
GPUs are designed specifically for graphics and thus are very restrictive in operations and programming. Due to their design, GPUs are only effective for problems that can be solved using stream processing and the hardware can only be used in certain ways.
The following discussion, referring to vertices, fragments and textures, concerns mainly the legacy model of GPGPU programming, where graphics APIs (OpenGL or DirectX) were used to perform general-purpose computation. With the introduction of the CUDA (Nvidia, 2007) and OpenCL (vendor-independent, 2008) general-purpose computing APIs, new GPGPU code no longer needs to map the computation to graphics primitives. The stream processing nature of GPUs remains valid regardless of the API used.
GPUs can only process independent vertices and fragments, but can process many of them in parallel. This is especially effective when the programmer wants to process many vertices or fragments in the same way. In this sense, GPUs are stream processors – processors that can operate in parallel by running one kernel on many records in a stream at once.
A stream is simply a set of records that require similar computation. Streams provide data parallelism. Kernels are the functions that are applied to each element in the stream. In the GPUs, vertices and fragments are the elements in streams and vertex and fragment shaders are the kernels to be run on them. For each element we can only read from the input, perform operations on it, and write to the output. It is permissible to have multiple inputs and multiple outputs, but never a piece of memory that is both readable and writable.
Arithmetic intensity is defined as the number of operations performed per word of memory transferred. It is important for GPGPU applications to have high arithmetic intensity else the memory access latency will limit computational speedup.
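As a worked example (using single-precision SAXPY, y[i] = a·x[i] + y[i], as an illustrative kernel — any streaming operation with the same memory traffic would give the same figure):

```python
# Arithmetic intensity of single-precision SAXPY, y[i] = a * x[i] + y[i]:
# 2 floating-point ops per element (one multiply, one add) versus
# 3 words of memory traffic per element (read x, read y, write y).
flops_per_element = 2
words_per_element = 3
intensity = flops_per_element / words_per_element
print(intensity)  # ~0.67 ops per word — low, so SAXPY is memory-bound
```

An intensity this low means memory access latency, not arithmetic throughput, limits the achievable speedup.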
Ideal GPGPU applications have large data sets, high parallelism, and minimal dependency between data elements.
=== GPU programming concepts ===
==== Computational resources ====
There are a variety of computational resources available on the GPU:
Programmable processors – vertex, primitive, fragment and, mainly, compute pipelines allow the programmer to run kernels on streams of data
Rasterizer – creates fragments and interpolates per-vertex constants such as texture coordinates and color
Texture unit – read-only memory interface
Framebuffer – write-only memory interface
In fact, a program can substitute a write-only texture for output instead of the framebuffer. This is done either through Render to Texture (RTT), Render-To-Backbuffer-Copy-To-Texture (RTBCTT), or the more recent stream-out.
==== Textures as streams ====
The most common form for a stream to take in GPGPU is a 2D grid because this fits naturally with the rendering model built into GPUs. Many computations naturally map into grids: matrix algebra, image processing, physically based simulation, and so on.
Since textures are used as memory, texture lookups are then used as memory reads. Certain operations can be done automatically by the GPU because of this.
==== Kernels ====
Compute kernels can be thought of as the body of a loop. For example, a programmer operating on a grid on the CPU might have code that looks like this:
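A sketch of such a CPU loop in Python (the kernel here is hypothetical: it simply adds two grids element-wise; on a GPU, only the `kernel` function would be written by the programmer, and the loop would be implicit in the geometry processing):

```python
def kernel(a, b):
    # The per-element computation; on a GPU this would be the shader body.
    return a + b

def run_on_cpu(grid_a, grid_b):
    """Apply the kernel to every cell by looping explicitly — the
    part a GPU replaces with parallel invocation."""
    out = [[0] * len(row) for row in grid_a]
    for i in range(len(grid_a)):
        for j in range(len(grid_a[i])):
            out[i][j] = kernel(grid_a[i][j], grid_b[i][j])
    return out

print(run_on_cpu([[1, 2], [3, 4]], [[10, 20], [30, 40]]))  # [[11, 22], [33, 44]]
```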
On the GPU, the programmer only specifies the body of the loop as the kernel and what data to loop over by invoking geometry processing.
==== Flow control ====
In sequential code it is possible to control the flow of the program using if-then-else statements and various forms of loops. Such flow control structures have only recently been added to GPUs. Conditional writes could be performed using a properly crafted series of arithmetic/bit operations, but looping and conditional branching were not possible.
Recent GPUs allow branching, but usually with a performance penalty. Branching should generally be avoided in inner loops, whether in CPU or GPU code, and various methods, such as static branch resolution, pre-computation, predication, loop splitting, and Z-cull can be used to achieve branching when hardware support does not exist.
=== GPU methods ===
==== Map ====
The map operation simply applies the given function (the kernel) to every element in the stream. A simple example is multiplying each value in the stream by a constant (increasing the brightness of an image). The map operation is simple to implement on the GPU. The programmer generates a fragment for each pixel on screen and applies a fragment program to each one. The result stream of the same size is stored in the output buffer.
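The brightness example can be sketched as follows (sequential Python standing in for the per-fragment program; on the GPU every element would be processed in parallel):

```python
def brighten(pixels, factor):
    """Map: apply the same kernel (multiply by a constant, clamp to
    the 8-bit range) independently to every element of the stream."""
    return [min(255, int(p * factor)) for p in pixels]

print(brighten([10, 100, 200], 1.5))  # [15, 150, 255]
```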
==== Reduce ====
Some computations require calculating a smaller stream (possibly a stream of only one element) from a larger stream. This is called a reduction of the stream. Generally, a reduction can be performed in multiple steps. The results from the prior step are used as the input for the current step and the range over which the operation is applied is reduced until only one stream element remains.
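A sum reduction illustrates the multi-step structure (a sequential Python sketch of the tree-shaped reduction; on a GPU each pass would run its pairwise additions in parallel):

```python
def reduce_sum(stream):
    """Tree reduction: each pass halves the stream until one element
    remains, using the prior pass's output as input."""
    while len(stream) > 1:
        if len(stream) % 2:                      # pad odd-length streams
            stream = stream + [0]
        stream = [stream[i] + stream[i + 1] for i in range(0, len(stream), 2)]
    return stream[0]

print(reduce_sum([1, 2, 3, 4, 5]))  # 15, after passes [3, 7, 5] -> [10, 5] -> [15]
```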
==== Stream filtering ====
Stream filtering is essentially a non-uniform reduction. Filtering involves removing items from the stream based on some criteria.
==== Scan ====
The scan operation, also termed parallel prefix sum, takes in a vector (stream) of data elements and an (arbitrary) associative binary function '+' with an identity element 'i'. If the input is [a0, a1, a2, a3, ...], an exclusive scan produces the output [i, a0, a0 + a1, a0 + a1 + a2, ...], while an inclusive scan produces the output [a0, a0 + a1, a0 + a1 + a2, a0 + a1 + a2 + a3, ...] and does not require an identity to exist. While at first glance the operation may seem inherently serial, efficient parallel scan algorithms are possible and have been implemented on graphics processing units. The scan operation has uses in e.g., quicksort and sparse matrix-vector multiplication.
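The two variants can be sketched sequentially (Python; this sequential loop defines the result — efficient GPU implementations instead use parallel algorithms such as Blelloch's work-efficient scan):

```python
def exclusive_scan(xs, op=lambda a, b: a + b, identity=0):
    """Output starts with the identity; element i holds the combination
    of all inputs strictly before position i."""
    out, acc = [], identity
    for x in xs:
        out.append(acc)
        acc = op(acc, x)
    return out

def inclusive_scan(xs, op=lambda a, b: a + b):
    """Element i holds the combination of inputs up to and including i;
    no identity element is required."""
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out

print(exclusive_scan([3, 1, 4, 1]))  # [0, 3, 4, 8]
print(inclusive_scan([3, 1, 4, 1]))  # [3, 4, 8, 9]
```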
==== Scatter ====
The scatter operation is most naturally defined on the vertex processor. The vertex processor is able to adjust the position of the vertex, which allows the programmer to control where information is deposited on the grid. Other extensions are also possible, such as controlling how large an area the vertex affects.
The fragment processor cannot perform a direct scatter operation because the location of each fragment on the grid is fixed at the time of the fragment's creation and cannot be altered by the programmer. However, a logical scatter operation may sometimes be recast or implemented with another gather step. A scatter implementation would first emit both an output value and an output address. An immediately following gather operation uses address comparisons to see whether the output value maps to the current output slot.
In dedicated compute kernels, scatter can be performed by indexed writes.
==== Gather ====
Gather is the reverse of scatter. After scatter reorders elements according to a map, gather can restore the order of the elements according to the map scatter used. In dedicated compute kernels, gather may be performed by indexed reads. In other shaders, it is performed with texture-lookups.
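The indexed-write/indexed-read duality can be sketched directly (a Python sketch of the compute-kernel formulation; the address map here is an arbitrary permutation chosen for illustration):

```python
def scatter(values, addresses, size):
    """Indexed write: out[addresses[i]] = values[i]."""
    out = [None] * size
    for v, a in zip(values, addresses):
        out[a] = v
    return out

def gather(source, addresses):
    """Indexed read: out[i] = source[addresses[i]]."""
    return [source[a] for a in addresses]

perm = [2, 0, 1]
scattered = scatter(['a', 'b', 'c'], perm, 3)  # ['b', 'c', 'a']
print(gather(scattered, perm))                 # ['a', 'b', 'c'] — order restored
```

Gathering with the same address map that scatter used undoes the reordering, as the text above describes.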
==== Sort ====
The sort operation transforms an unordered set of elements into an ordered set of elements. The most common implementation on GPUs is using radix sort for integer and floating point data and coarse-grained merge sort and fine-grained sorting networks for general comparable data.
==== Search ====
The search operation allows the programmer to find a given element within the stream, or possibly find neighbors of a specified element. Mostly the search method used is binary search on sorted elements.
==== Data structures ====
A variety of data structures can be represented on the GPU:
Dense arrays
Sparse matrices (sparse array) – static or dynamic
Adaptive structures (union type)
== Applications ==
The following are some of the areas where GPUs have been used for general purpose computing:
Automatic parallelization
Physical based simulation and physics engines (usually based on Newtonian physics models)
Conway's Game of Life, cloth simulation, fluid incompressible flow by solution of Euler equations (fluid dynamics) or Navier–Stokes equations
Statistical physics
Ising model
Lattice gauge theory
Segmentation – 2D and 3D
Level set methods
CT reconstruction
Fast Fourier transform
GPU learning – machine learning and data mining computations, e.g., with software BIDMach
k-nearest neighbor algorithm
Fuzzy logic
Tone mapping
Audio signal processing
Audio and sound effects processing, using the GPU for digital signal processing (DSP)
Analog signal processing
Speech processing
Digital image processing
Video processing
Hardware accelerated video decoding and post-processing
Motion compensation (mo comp)
Inverse discrete cosine transform (iDCT)
Variable-length decoding (VLD), Huffman coding
Inverse quantization (IQ, not to be confused with Intelligence Quotient)
In-loop deblocking
Bitstream processing (CAVLC/CABAC) using special purpose hardware for this task because this is a serial task not suitable for regular GPGPU computation
Deinterlacing
Spatial-temporal deinterlacing
Noise reduction
Edge enhancement
Color correction
Hardware accelerated video encoding and pre-processing
Global illumination – ray tracing, photon mapping, radiosity among others, subsurface scattering
Geometric computing – constructive solid geometry, distance fields, collision detection, transparency computation, shadow generation
Scientific computing
Monte Carlo simulation of light propagation
Weather forecasting
Climate research
Molecular modeling on GPU
Quantum mechanical physics
Astrophysics
Number theory
Primality testing and integer factorization
Bioinformatics
Medical imaging
Clinical decision support system (CDSS)
Computer vision
Digital signal processing / signal processing
Control engineering
Operations research
Implementations of the GPU Tabu Search algorithm for the Resource-Constrained Project Scheduling problem and of a GPU algorithm for the Nurse scheduling problem are freely available on GitHub
Neural networks
Database operations
Computational Fluid Dynamics especially using Lattice Boltzmann methods
Cryptography and cryptanalysis
Performance modeling: computationally intensive tasks on GPU
Implementations of: MD6, Advanced Encryption Standard (AES), Data Encryption Standard (DES), RSA, elliptic curve cryptography (ECC)
Password cracking
Cryptocurrency transactions processing ("mining") (Bitcoin mining)
Electronic design automation
Antivirus software
Intrusion detection
Increased computing power for distributed computing projects such as SETI@home and Einstein@Home
=== Bioinformatics ===
GPGPU usage in Bioinformatics:
==== Molecular dynamics ====
† Expected speedups are highly dependent on system configuration. GPU performance compared against multi-core x86 CPU socket. GPU performance benchmarked on GPU supported features and may be a kernel to kernel performance comparison. For details on configuration used, view application website. Speedups as per Nvidia in-house testing or ISV's documentation.
‡ Q=Quadro GPU, T=Tesla GPU. Nvidia recommended GPUs for this application. Check with developer or ISV to obtain certification information.
== See also ==
AI accelerator
Audio processing unit
Close to Metal
Deep learning processor (DLP)
Fastra II
Larrabee (microarchitecture)
Physics engine
Advanced Simulation Library
Physics processing unit (PPU)
== References ==
== Further reading ==
Owens, J.D.; Houston, M.; Luebke, D.; Green, S.; Stone, J.E.; Phillips, J.C. (May 2008). "GPU Computing". Proceedings of the IEEE. 96 (5): 879–899. doi:10.1109/JPROC.2008.917757. ISSN 0018-9219. S2CID 17091128.
Brodtkorb, André R.; Hagen, Trond R.; Sætra, Martin L. (1 January 2013). "Graphics processing unit (GPU) programming strategies and trends in GPU computing". Journal of Parallel and Distributed Computing. Metaheuristics on GPUs. 73 (1): 4–13. doi:10.1016/j.jpdc.2012.04.003. hdl:10852/40283. ISSN 0743-7315.
The Graphics Device Interface (GDI) is a legacy component of Microsoft Windows responsible for representing graphical objects and transmitting them to output devices such as monitors and printers. It was superseded by the DirectDraw API and later the Direct2D API. Windows apps use the Windows API to interact with GDI for tasks such as drawing lines and curves, rendering fonts, and handling palettes. The Windows USER subsystem uses GDI to render such UI elements as window frames and menus. Other systems have components that are similar to GDI; for example, Mac OS had QuickDraw, and Linux and Unix have the X Window System core protocol.
GDI's most significant advantages over more direct methods of accessing the hardware are perhaps its scaling capabilities and its abstract representation of target devices. Using GDI, it is possible to draw on multiple devices, such as a screen and a printer, and expect proper reproduction in each case. This capability is at the center of most "What You See Is What You Get" applications for Microsoft Windows.
Simple games that do not require fast graphics rendering may use GDI. However, GDI is relatively hard to use for advanced animation, lacks a mechanism for synchronizing with individual video frames in the video card, and lacks hardware rasterization for 3D. Modern games usually use DirectX, Vulkan, or OpenGL instead.
== Technical details ==
In GDI, a device context (DC) defines the attributes of text and images for the output device, e.g., screen or printer. GDI maintains the actual context. Generating the output requires a handle to the device context (HDC); after generating the output, the handle can be released.
GDI uses Bresenham's line drawing algorithm to draw aliased lines.
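For reference, the integer-only error-accumulation form of Bresenham's algorithm can be sketched as follows (Python pseudocode for the all-octant variant; GDI's internal implementation is native code and differs in detail):

```python
def bresenham(x0, y0, x1, y1):
    """Rasterize a line from (x0, y0) to (x1, y1) using only
    integer additions and comparisons."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                      # running error term
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:                   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:                   # step in y
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 4, 2))  # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

Because every pixel is either on or off, the result is an aliased (stair-stepped) line, as noted above.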
== Version history ==
=== Early versions ===
GDI was present in the initial release of Windows. MS-DOS programs had manipulated the graphics hardware using software interrupts (sometimes via the Video BIOS) and by manipulating video memory directly. Code written in this way assumes that it is the only user of the video memory, which is not tenable in a multitasking environment such as Windows. BYTE magazine, in December 1983, discussed Microsoft's plans for a system to output graphics to both printers and monitors with the same code in the forthcoming first release of Windows.
On Windows 3.1x and Windows 9x, GDI can use bit blit features for 2D acceleration if a suitable graphics card driver is installed.
=== Windows XP ===
With the introduction of Windows XP, GDI+ complemented GDI. GDI+ is written in C++. It adds anti-aliased 2D graphics, floating-point coordinates, gradient shading, more complex path management, intrinsic support for modern graphics-file formats like JPEG and PNG, and support for composition of affine transformations in the 2D view pipeline. GDI+ uses RGBA values to represent color. Use of these features is apparent in Windows XP components such as Microsoft Paint, Windows Picture and Fax Viewer, Photo Printing Wizard, and the My Pictures Slideshow screensaver. Their presence in the basic graphics layer greatly simplifies implementations of vector-graphics systems such as Adobe Flash or SVG. In addition, the .NET Framework provides a managed interface for GDI+ via the System.Drawing namespace.
While GDI+ is included with Windows XP and later, the GDI+ dynamic library can also be shipped with an application and used on older versions of Windows.
Because of the additional text processing and resolution independence capabilities in GDI+, the CPU undertakes text rendering. The result is an order of magnitude slower than the hardware-accelerated GDI. Chris Jackson published some tests indicating that a piece of text rendering code he had written could render 99,000 glyphs per second in GDI, but the same code using GDI+ rendered 16,600 glyphs per second.
GDI+ is similar (in purpose and structure) to Apple's QuickDraw GX subsystem, and the open-source libart and Cairo libraries.
=== Windows Vista ===
In Windows Vista, all Windows applications including GDI and GDI+ applications run in the new compositing engine, Desktop Window Manager (DWM), which is hardware-accelerated. As such, the GDI itself is no longer hardware-accelerated. Because of the nature of the composition operations, window moves can be faster or more responsive because underlying content does not need to be re-rendered by the application.
=== Windows 7 ===
Windows 7 includes GDI hardware acceleration for blitting operations in the Windows Display Driver Model v1.1. This improves GDI performance and allows DWM to use local video memory for compositing, thereby reducing system memory footprint and increasing the performance of graphics operations. Most primitive GDI operations are still not hardware-accelerated, unlike Direct2D. GDI+ continues to rely on software rendering in Windows 7.
== GDI printers ==
A GDI printer or Winprinter (analogous to a Winmodem) is a printer designed to accept output from a host computer running Windows. The host computer does all print processing: GDI renders a page as a bitmap, which the printer driver receives, processes, and sends to the associated printer. The combination of GDI and the driver is bidirectional; they receive information from the printer such as whether it is ready to print or is out of paper.
Printers that do not rely on GDI require hardware, firmware, and memory for page rendering while a GDI printer uses the host computer for this. However, a printer with its own control language can accept input from any device with a suitable driver, while a GDI printer requires a PC running Windows. GDI printers can be made available to computers on a network if they are connected as shared printers on a computer which is on and running Windows. Some "generic" GDI drivers such as pnm2ppa have been written; they aim to make GDI printers compatible with non-Windows operating systems such as FreeBSD, but they cannot support all printers.
In order to allow simpler creation of drivers for Winprinters, the Microsoft Universal Printer Driver was created. This allows printer vendors to write Generic Printer Description (GPD) "minidrivers", which describe the printer's capabilities and command set in plaintext, rather than having to do kernel mode driver development.
Microsoft has moved away from this printing model with Open XML Paper Specification.
== Limitations ==
Each window consumes GDI objects. As the complexity of the window increases, with additional features such as buttons and images, its GDI object usage also increases. When too many objects are in use, Windows is unable to draw any more GDI objects, leading to misbehaving software and frozen and unresponsive program operation. Many applications are also incorrectly coded and fail to release GDI objects after use, which further adds to the problem. The total available GDI objects varies from one version of Windows to the next: Windows 9x had a limit of 1,200 total objects; Windows 2000 has a limit of 16,384 objects; and Windows XP and later have a configurable limit (via the registry) that defaults to 10,000 objects per process (but a theoretical maximum of 65,536 for the entire session). Windows 8 and later increase the GDI object limit to 65,536 per user login session.
Earlier versions of Windows such as Windows 3.1 and Windows 98 included a Resource Meter program to allow the user to monitor how much of the total system GDI resources were in use. This resource meter consumed GDI objects itself. Later versions such as Windows 2000 and Windows XP can report GDI object usage for each program in the Task Manager, but they cannot tell the user the total GDI capacity available.
Overflowing GDI capacity can affect Windows itself, preventing new windows from opening, menus from displaying, and alert boxes from appearing. The situation can be difficult to clear and can potentially require a forced reset of the system, since it prevents core system programs from functioning. In Windows 8 and 8.1, a forced log-off occurs as a result of GDI capacity overflow, instead of a reboot.
== Successor ==
Direct2D is the successor of GDI and GDI+. Its sibling, DirectWrite, replaces Uniscribe. They were shipped with Windows 7 and Windows Server 2008 R2, and were available for Windows Vista and Windows Server 2008 (with the Platform Update installed). Later, Microsoft developed Win2D, a free and open-source GDI-like class library. Win2D's target audience is developers who use C++, C#, and Visual Basic .NET to develop Universal Windows Platform apps.
== See also ==
WinG
Microsoft Windows library files
== Notes and references ==
== External links ==
Microsoft's GDI+ page
Bob Powell's GDI+ FAQ list
MSDN article on GDI overview
Microsoft Security Bulletin MS04-028
F-Secure: Critical vulnerability in MS Windows may escalate the virus threat Archived 2009-02-04 at the Wayback Machine
IGDI+ - Delphi Open Source GDI+ library.
The RGB color model is an additive color model in which the red, green, and blue primary colors of light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue.
The main purpose of the RGB color model is for the sensing, representation, and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography and colored lighting. Before the electronic age, the RGB color model already had a solid theory behind it, based in human perception of colors.
RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements (such as phosphors or dyes) and their response to the individual red, green, and blue levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB value does not define the same color across devices without some kind of color management.
Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma, OLED, quantum dots, etc.), computer and mobile phone displays, video projectors, multicolor LED displays and large screens such as the Jumbotron. Color printers, on the other hand, are not RGB devices, but subtractive color devices typically using the CMYK color model.
== Additive colors ==
To form a color with RGB, three light beams (one red, one green, and one blue) must be superimposed (for example by emission from a black screen or by reflection from a white screen). Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture.
The RGB color model is additive in the sense that if light beams of differing color (frequency) are superposed in space their light spectra adds up, wavelength for wavelength, to make up a resulting, total spectrum. This is in contrast to the subtractive color model, particularly the CMY Color Model, which applies to paints, inks, dyes and other substances whose color depends on reflecting certain components (frequencies) of the light under which we see them.
In the additive model, if the resulting spectrum, e.g. of three superposed colors, is flat, white is perceived by the human eye upon direct incidence on the retina. This is in stark contrast to the subtractive model, where the perceived resulting spectrum is what reflecting surfaces, such as dyed surfaces, emit. A dye filters out all colors but its own; two blended dyes filter out all colors but the common color component between them, e.g. green as the common component between yellow and cyan, red as the common component between magenta and yellow, and blue-violet as the common component between magenta and cyan. There is no common color component among magenta, cyan and yellow, thus rendering a spectrum of zero intensity: black.
Zero intensity for each component gives the darkest color (no light, perceived as black), and full intensity of each gives a white; the quality of this white depends on the nature of the primary light sources, but if they are properly balanced, the result is a neutral white matching the system's white point. When the intensities of all the components are the same, the result is a shade of gray, darker or lighter depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference between the strongest and weakest of the intensities of the primary colors employed.
When one of the components has the strongest intensity, the color is a hue near this primary color (red-ish, green-ish, or blue-ish), and when two components have the same strongest intensity, then the color is a hue of a secondary color (a shade of cyan, magenta, or yellow). A secondary color is formed by the sum of two primary colors of equal intensity: cyan is green+blue, magenta is blue+red, and yellow is red+green. Every secondary color is the complement of one primary color: cyan complements red, magenta complements green, and yellow complements blue. When all the primary colors are mixed in equal intensities, the result is white.
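The additive mixing rules above can be sketched in a few lines of Python (a minimal illustration, not part of the original article; channel values are simply clamped at full intensity):

```python
def add_rgb(c1, c2):
    """Additively mix two RGB colors (0-255 per channel), clamping at full intensity."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_rgb(RED, GREEN))                 # yellow:  (255, 255, 0)
print(add_rgb(GREEN, BLUE))                # cyan:    (0, 255, 255)
print(add_rgb(BLUE, RED))                  # magenta: (255, 0, 255)
print(add_rgb(add_rgb(RED, GREEN), BLUE))  # white:   (255, 255, 255)
```

Each secondary color comes out as the sum of two primaries, and all three primaries together give white, as described above.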
The RGB color model itself does not define what is meant by red, green, and blue colorimetrically, and so the results of mixing them are not specified as absolute, but relative to the primary colors. When the exact chromaticities of the red, green, and blue primaries are defined, the color model then becomes an absolute color space, such as sRGB or Adobe RGB.
== Physical principles for the choice of red, green, and blue ==
The choice of primary colors is related to the physiology of the human eye; good primaries are stimuli that maximize the difference between the responses of the cone cells of the human retina to light of different wavelengths, and that thereby make a large color triangle.
The normal three kinds of light-sensitive photoreceptor cells in the human eye (cone cells) respond most to yellow (long wavelength or L), green (medium or M), and violet (short or S) light (peak wavelengths near 570 nm, 540 nm and 440 nm, respectively). The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of different colors, while being most sensitive (overall) to yellowish-green light and to differences between hues in the green-to-orange region.
As an example, suppose that light in the orange range of wavelengths (approximately 577 nm to 597 nm) enters the eye and strikes the retina. Light of these wavelengths would activate both the medium and long wavelength cones of the retina, but not equally—the long-wavelength cells will respond more. The difference in the response can be detected by the brain, and this difference is the basis of our perception of orange. Thus, the orange appearance of an object results from light from the object entering our eye and stimulating the different cones simultaneously but to different degrees.
Use of the three primary colors is not sufficient to reproduce all colors; only colors within the color triangle defined by the chromaticities of the primaries can be reproduced by additive mixing of non-negative amounts of those colors of light.
== History of RGB color model theory and usage ==
The RGB color model is based on the Young–Helmholtz theory of trichromatic color vision, developed by Thomas Young and Hermann von Helmholtz in the early to mid-nineteenth century, and on James Clerk Maxwell's color triangle that elaborated that theory (c. 1860).
=== Photography ===
The first experiments with RGB in early color photography were made in 1861 by Maxwell himself, and involved the process of combining three color-filtered separate takes. To reproduce the color photograph, three matching projections over a screen in a dark room were necessary.
The additive RGB model and variants such as orange–green–violet were also used in the Autochrome Lumière color plates and other screen-plate technologies such as the Joly color screen and the Paget process in the early twentieth century. Color photography by taking three separate plates was used by other pioneers, such as the Russian Sergey Prokudin-Gorsky in the period 1909 through 1915. Such methods lasted until about 1960 using the expensive and extremely complex tri-color carbro Autotype process.
When employed, the reproduction of prints from three-plate photos was done by dyes or pigments using the complementary CMY model, by simply using the negative plates of the filtered takes: reverse red gives the cyan plate, and so on.
=== Television ===
Before the development of practical electronic TV, there were patents on mechanically scanned color systems as early as 1889 in Russia. The color TV pioneer John Logie Baird demonstrated the world's first RGB color transmission in 1928, and also the world's first color broadcast in 1938, in London. In his experiments, scanning and display were done mechanically by spinning colorized wheels.
The Columbia Broadcasting System (CBS) began an experimental RGB field-sequential color system in 1940. Images were scanned electrically, but the system still used a moving part: the transparent RGB color wheel rotating at above 1,200 rpm in synchronism with the vertical scan. The camera and the cathode-ray tube (CRT) were both monochromatic. Color was provided by color wheels in the camera and the receiver. More recently, color wheels have been used in field-sequential projection TV receivers based on the Texas Instruments monochrome DLP imager.
The modern RGB shadow mask technology for color CRT displays was patented by Werner Flechsig in Germany in 1938.
=== Personal computers ===
Personal computers of the late 1970s and early 1980s, such as the Apple II and VIC-20, use composite video. The Commodore 64 and the Atari 8-bit computers use S-Video derivatives. IBM introduced a 16-color scheme (4 bits—1 bit each for red, green, blue, and intensity) with the Color Graphics Adapter (CGA) for its IBM PC in 1981, later improved with the Enhanced Graphics Adapter (EGA) in 1984. The first manufacturer of a truecolor graphics card for PCs (the TARGA) was Truevision in 1987, but it was not until the arrival of the Video Graphics Array (VGA) in 1987 that RGB became popular, mainly due to the analog signals in the connection between the adapter and the monitor, which allowed a very wide range of RGB colors. True color took a few more years to arrive: the original VGA cards were palette-driven just like EGA, although with more freedom than EGA. Because the VGA connectors were analog, however, later variants of VGA (made by various manufacturers under the informal name Super VGA) were eventually able to add true color. By 1992, magazines heavily advertised true-color Super VGA hardware.
== RGB devices ==
=== RGB and displays ===
One common application of the RGB color model is the display of colors on a cathode-ray tube (CRT), liquid-crystal display (LCD), plasma display, or organic light-emitting diode (OLED) display such as a television, a computer monitor, or a large-scale screen. Each pixel on the screen is built by driving three small and very close but still separated RGB light sources. At common viewing distances, the separate sources are indistinguishable, so the eye interprets them as a single solid color. All the pixels together, arranged over the rectangular screen surface, form the color image.
During digital image processing each pixel can be represented in the computer memory or interface hardware (for example, a graphics card) as binary values for the red, green, and blue color components. When properly managed, these values are converted into intensities or voltages via gamma correction to correct the inherent nonlinearity of some devices, such that the intended intensities are reproduced on the display.
The Quattron released by Sharp uses RGB color and adds yellow as a sub-pixel, supposedly allowing an increase in the number of available colors.
==== Video electronics ====
RGB is also the term referring to a type of component video signal used in the video electronics industry. It consists of three signals—red, green, and blue—carried on three separate cables/pins. RGB signal formats are often based on modified versions of the RS-170 and RS-343 standards for monochrome video. This type of video signal is widely used in Europe since it is the best quality signal that can be carried on the standard SCART connector. This signal is known as RGBS (4 BNC/RCA terminated cables exist as well), but it is directly compatible with RGBHV used for computer monitors (usually carried on 15-pin cables terminated with 15-pin D-sub or 5 BNC connectors), which carries separate horizontal and vertical sync signals.
Outside Europe, RGB is not very popular as a video signal format; S-Video takes that spot in most non-European regions. However, almost all computer monitors around the world use RGB.
==== Video framebuffer ====
A framebuffer is a digital device for computers which stores data in so-called video memory (comprising an array of Video RAM or similar chips). This data goes either to three digital-to-analog converters (DACs), one per primary color (for analog monitors), or directly to digital monitors. Driven by software, the CPU (or other specialized chips) writes the appropriate bytes into the video memory to define the image. Modern systems encode pixel color values by devoting 8 bits to each of the R, G, and B components. RGB information can be either carried directly by the pixel bits themselves or provided by a separate color look-up table (CLUT) if indexed color graphic modes are used.
A CLUT is a specialized RAM that stores R, G, and B values that define specific colors. Each color has its own address (index)—consider it as a descriptive reference number that provides that specific color when the image needs it. The content of the CLUT is much like a palette of colors. Image data that uses indexed color specifies addresses within the CLUT to provide the required R, G, and B values for each specific pixel, one pixel at a time. Of course, before displaying, the CLUT has to be loaded with R, G, and B values that define the palette of colors required for each image to be rendered. Some video applications store such palettes in PAL files (Age of Empires game, for example, uses over half-a-dozen) and can combine CLUTs on screen.
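The indexed-color lookup described above can be sketched in Python (a minimal illustration with a made-up four-entry palette, not an actual hardware CLUT):

```python
# A tiny CLUT: each index maps to a full 24-bit RGB triple.
clut = [
    (0, 0, 0),        # index 0: black
    (255, 255, 255),  # index 1: white
    (255, 0, 0),      # index 2: red
    (0, 128, 255),    # index 3: a light blue
]

# Indexed image data stores one small index per pixel, not three bytes.
indexed_pixels = [0, 2, 2, 1, 3, 0]

# Expanding indices to full RGB triples happens per pixel at display time.
rgb_pixels = [clut[i] for i in indexed_pixels]
print(rgb_pixels[1])  # (255, 0, 0)
```

The image data stays small because only the index per pixel is stored; the full R, G, and B values live once, in the palette.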
===== RGB24 and RGB32 =====
This indirect scheme restricts the number of available colors in an image to the number of CLUT entries—typically 256 (one 8-bit index per pixel)—although each entry in an RGB24 CLUT holds 8 bits for each of the R, G, and B primaries, so an entry can be any of 16,777,216 possible colors. The advantage is that an indexed-color image file can be significantly smaller than it would be with 8 bits per pixel for each primary.
Modern storage, however, is far less costly, greatly reducing the need to minimize image file size. By using an appropriate combination of red, green, and blue intensities, many colors can be displayed. Current typical display adapters use up to 24 bits of information for each pixel: 8 bits per component, multiplied by three components (see the Numeric representations section below). With this system, 16,777,216 (256³, or 2²⁴) discrete combinations of R, G, and B values are allowed, providing millions of different (though not necessarily distinguishable) hue, saturation, and lightness shades. Increased shading has been implemented in various ways, with some formats, such as .png and .tga files among others, using a fourth grayscale channel as a masking layer; this is often called RGB32.
For images with a modest range of brightnesses from the darkest to the lightest, 8 bits per primary color provides good-quality images, but extreme images require more bits per primary color as well as the advanced display technology. For more information see High Dynamic Range (HDR) imaging.
==== Nonlinearity ====
In classic CRT devices, the brightness of a given point over the fluorescent screen due to the impact of accelerated electrons is not proportional to the voltages applied to the electron gun control grids, but to an expansive function of that voltage. The amount of this deviation is known as its gamma value (γ), the argument for a power law function, which closely describes this behavior. A linear response is given by a gamma value of 1.0, but actual CRT nonlinearities have a gamma value of around 2.0 to 2.5.
Similarly, the intensity of the output on TV and computer display devices is not directly proportional to the R, G, and B applied electric signals (or file data values which drive them through digital-to-analog converters). On a typical standard 2.2-gamma CRT display, an input intensity RGB value of (0.5, 0.5, 0.5) only outputs about 22% of full brightness (1.0, 1.0, 1.0), instead of 50%. To obtain the correct response, a gamma correction is used in encoding the image data, and possibly further corrections as part of the color calibration process of the device. Gamma affects black-and-white TV as well as color. In standard color TV, broadcast signals are gamma corrected.
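The ~22% figure quoted above follows directly from the power law. A minimal Python sketch (function names are illustrative, not from any standard library):

```python
def crt_output(v, gamma=2.2):
    """Displayed intensity for a linear input value v (0.0-1.0) on a gamma-2.2 display."""
    return v ** gamma

def gamma_encode(v, gamma=2.2):
    """Pre-correct a linear intensity so the display reproduces it faithfully."""
    return v ** (1.0 / gamma)

# A mid-level input of 0.5 yields only about 22% of full brightness:
print(round(crt_output(0.5), 3))                 # 0.218
# Gamma-encoding the value first restores the intended 50%:
print(round(crt_output(gamma_encode(0.5)), 3))   # 0.5
```

This is why image data is gamma corrected at encoding time: the encoding nonlinearity cancels the display's.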
=== RGB and cameras ===
In color television and video cameras manufactured before the 1990s, the incoming light was separated by prisms and filters into the three RGB primary colors feeding each color into a separate video camera tube (or pickup tube). These tubes are a type of cathode-ray tube, not to be confused with that of CRT displays.
With the arrival of commercially viable charge-coupled device (CCD) technology in the 1980s, first, the pickup tubes were replaced with this kind of sensor. Later, higher scale integration electronics was applied (mainly by Sony), simplifying and even removing the intermediate optics, thereby reducing the size of home video cameras and eventually leading to the development of full camcorders. Current webcams and mobile phones with cameras are the most miniaturized commercial forms of such technology.
Photographic digital cameras that use a CMOS or CCD image sensor often operate with some variation of the RGB model. In a Bayer filter arrangement, green is given twice as many detectors as red and blue (ratio 1:2:1) in order to achieve higher luminance resolution than chrominance resolution. The sensor has a grid of red, green, and blue detectors arranged so that the first row is RGRGRGRG, the next is GBGBGBGB, and that sequence is repeated in subsequent rows. For every channel, missing pixels are obtained by interpolation in the demosaicing process to build up the complete image. Other processes are also applied in order to map the camera RGB measurements into a standard color space such as sRGB.
=== RGB and scanners ===
In computing, an image scanner is a device that optically scans images (printed text, handwriting, or an object) and converts them to a digital image which is transferred to a computer. Flat, drum and film scanners exist, among other formats, and most of them support RGB color. They can be considered the successors of early telephotography input devices, which were able to send consecutive scan lines as analog amplitude modulation signals through standard telephone lines to appropriate receivers; such systems were in use in the press from the 1920s to the mid-1990s. Color telephotographs were sent as three separate RGB-filtered images consecutively.
Currently available scanners typically use CCD or contact image sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. Early color film scanners used a halogen lamp and a three-color filter wheel, so three exposures were needed to scan a single color image. Due to heating problems, the worst of them being the potential destruction of the scanned film, this technology was later replaced by non-heating light sources such as color LEDs.
== Numeric representations ==
A color in the RGB color model is described by indicating how much of each of the red, green, and blue is included. The color is expressed as an RGB triplet (r,g,b), each component of which can vary from zero to a defined maximum value. If all the components are at zero the result is black; if all are at maximum, the result is the brightest representable white.
These ranges may be quantified in several different ways:
From 0 to 1, with any fractional value in between. This representation is used in theoretical analyses, and in systems that use floating point representations.
Each color component value can also be written as a percentage, from 0% to 100%.
In computers, the component values are often stored as unsigned integer numbers in the range 0 to 255, the range that a single 8-bit byte can offer. These are often represented as either decimal or hexadecimal numbers.
High-end digital image equipment is often able to deal with larger integer ranges for each primary color, such as 0–1023 (10 bits) or 0–65535 (16 bits) or even larger, by extending the 24 bits (three 8-bit values) to 32-bit, 48-bit, or 64-bit units (more or less independent of the particular computer's word size).
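Conversions between these notations are simple scalings. A short Python sketch (helper names are illustrative):

```python
def to_byte(fraction):
    """Map a 0.0-1.0 component to the 0-255 integer range."""
    return round(fraction * 255)

def to_hex(r, g, b):
    """Format an 8-bit-per-channel triplet as a hexadecimal color string."""
    return f"#{r:02X}{g:02X}{b:02X}"

print(to_byte(1.0))        # 255
print(to_byte(0.5))        # 128
print(to_hex(255, 0, 0))   # #FF0000
```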
For example, brightest saturated red is written in the different RGB notations as:
(1.0, 0.0, 0.0) in the 0-to-1 range
(100%, 0%, 0%) as percentages
(255, 0, 0) in the 0-to-255 integer range
#FF0000 in hexadecimal
In many environments, the component values within the ranges are not managed as linear (that is, the numbers are nonlinearly related to the intensities that they represent), as in digital cameras and TV broadcasting and receiving due to gamma correction, for example. Linear and nonlinear transformations are often dealt with via digital image processing. Representations with only 8 bits per component are considered sufficient if gamma correction is used.
Following is the mathematical relationship between RGB space to HSI space (hue, saturation, and intensity: HSI color space):
{\displaystyle {\begin{aligned}I&={\frac {R+G+B}{3}}\\S&=1\,-\,{\frac {3}{(R+G+B)}}\,\min(R,G,B)\\H&=\cos ^{-1}\left({\frac {(R-G)+(R-B)}{2{\sqrt {(R-G)^{2}+(R-B)(G-B)}}}}\right)\qquad {\text{assuming }}G>B\end{aligned}}}
If B > G, then H = 360° − H.
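The RGB-to-HSI relationship above translates directly into code. A minimal Python sketch (components assumed normalized to 0.0–1.0; the degenerate gray case, where the formula's denominator vanishes, is handled by convention as hue 0):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert an RGB triple (0.0-1.0 per component) to (hue in degrees, saturation, intensity)."""
    i = (r + g + b) / 3
    s = 0.0 if i == 0 else 1 - (3 / (r + g + b)) * min(r, g, b)
    denom = 2 * math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if denom == 0 else math.degrees(math.acos(((r - g) + (r - b)) / denom))
    if b > g:          # the formula above assumes G > B; otherwise reflect the hue
        h = 360 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0 degrees, full saturation, intensity 1/3
```

Pure blue, for instance, comes out at hue 240° via the B > G branch.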
=== Color depth ===
The RGB color model is one of the most common ways to encode color in computing, and several different digital representations are in use. The main characteristic of all of them is the quantization of the possible values per component (technically a sample) by using only integer numbers within some range, usually from 0 to some power of two minus one (2n − 1) to fit them into some bit groupings. Encodings of 1, 2, 4, 5, 8, and 16 bits per color are commonly found; the total number of bits used for an RGB color is typically called the color depth.
== Geometric representation ==
Since colors are usually defined by three components, not only in the RGB model but also in other color models such as CIELAB and Y′UV, among others, a three-dimensional volume can be described by treating the component values as ordinary Cartesian coordinates in a Euclidean space. For the RGB model, this is represented by a cube using non-negative values within a 0–1 range, assigning black to the origin at the vertex (0, 0, 0), and with increasing intensity values running along the three axes up to white at the vertex (1, 1, 1), diagonally opposite black.
An RGB triplet (r,g,b) represents the three-dimensional coordinate of the point of the given color within the cube or its faces or along its edges. This approach allows computations of the color similarity of two given RGB colors by simply calculating the distance between them: the shorter the distance, the higher the similarity. Out-of-gamut computations can also be performed this way.
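A minimal Python sketch of this similarity computation, treating each color as a point in the 0–255 RGB cube:

```python
import math

def rgb_distance(c1, c2):
    """Euclidean distance between two colors treated as points in the RGB cube."""
    return math.dist(c1, c2)

black, white = (0, 0, 0), (255, 255, 255)
red, orange = (255, 0, 0), (255, 128, 0)

print(round(rgb_distance(black, white), 1))  # ~441.7, the length of the cube's main diagonal
print(rgb_distance(red, orange))             # 128.0 -- a much shorter distance, so more similar
```

Note that distance in the plain RGB cube is only a rough proxy for perceived similarity; perceptually uniform spaces such as CIELAB give better results for that purpose.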
== Colors in web-page design ==
Initially, the limited color depth of most video hardware led to a limited palette of 216 RGB colors, defined by the Netscape Color Cube. The web-safe color palette consists of the 216 (6³) combinations of red, green, and blue in which each color can take one of six values (in hexadecimal): #00, #33, #66, #99, #CC, or #FF (based on the 0-to-255 range for each value discussed above). These hexadecimal values correspond to 0, 51, 102, 153, 204, and 255 in decimal, i.e. 0%, 20%, 40%, 60%, 80%, and 100% intensity. This seems fine for splitting up 216 colors into a cube of dimension 6. However, lacking gamma correction, the perceived intensity on a standard 2.5-gamma CRT or LCD is only 0%, 2%, 10%, 28%, 57%, and 100%. See the actual web-safe color palette for visual confirmation that the majority of the colors produced are very dark.
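The full web-safe palette is easy to enumerate; a short Python sketch:

```python
# The six permitted per-channel values: 0x00 to 0xFF in steps of 0x33 (51 decimal).
levels = [0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF]

# All 6^3 = 216 combinations of the six levels across the three channels.
web_safe = [(r, g, b) for r in levels for g in levels for b in levels]

print(len(web_safe))   # 216
print(web_safe[0])     # (0, 0, 0) -- black
print(web_safe[-1])    # (255, 255, 255) -- white
```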
With the predominance of 24-bit displays, the use of the full 16.7 million colors of the HTML RGB color code no longer poses problems for most viewers. The sRGB color space (a device-independent color space) for HTML was formally adopted as an Internet standard in HTML 3.2, though it had been in use for some time before that. All images and colors are interpreted as being sRGB (unless another color space is specified) and all modern displays can display this color space (with color management being built in into browsers or operating systems).
The syntax in CSS is:
rgb(#,#,#)
where # equals the proportion of red, green, and blue respectively. This syntax can be used after such selectors as "background-color:" or (for text) "color:".
Wide gamut color is possible in modern CSS, being supported by all major browsers since 2023.
For example, a color on the DCI-P3 color space can be indicated as:
color(display-p3 # # #)
where # equals the proportion of red, green, and blue in 0.0 to 1.0 respectively.
== Color management ==
Proper reproduction of colors, especially in professional environments, requires color management of all the devices involved in the production process, many of them using RGB. Color management results in several transparent conversions between device-independent (sRGB, XYZ, L*a*b*) and device-dependent color spaces (RGB and others, as CMYK for color printing) during a typical production cycle, in order to ensure color consistency throughout the process. Along with the creative processing, such interventions on digital images can damage the color accuracy and image detail, especially where the gamut is reduced. Professional digital devices and software tools allow for 48 bpp (bits per pixel) images to be manipulated (16 bits per channel), to minimize any such damage.
ICC profile compliant applications, such as Adobe Photoshop, use either the Lab color space or the CIE 1931 color space as a Profile Connection Space when translating between color spaces.
== RGB model and luminance–chrominance formats relationship ==
All luminance–chrominance formats used in the different TV and video standards, such as YIQ for NTSC, YUV for PAL, YDbDr for SECAM, and YPbPr for component video, use color difference signals, by which RGB color images can be encoded for broadcasting/recording and later decoded into RGB again for display. These intermediate formats were needed for compatibility with pre-existing black-and-white TV formats. Also, those color difference signals need lower data bandwidth compared to full RGB signals.
Similarly, current high-efficiency digital color image data compression schemes such as JPEG and MPEG store RGB color internally in YCbCr format, a digital luminance–chrominance format based on YPbPr. The use of YCbCr also allows computers to perform lossy subsampling of the chrominance channels (typically to 4:2:2 or 4:1:1 ratios), which reduces the resultant file size.
== See also ==
== References ==
== External links ==
RGB mixer
Demonstrative color conversion applet | Wikipedia/RGB_color_model |
Sega is a video game developer, publisher, and hardware development company headquartered in Tokyo, Japan, with multiple offices around the world. The company's involvement in the arcade game industry began as a Japan-based distributor of coin-operated machines, including pinball games and jukeboxes. Sega imported second-hand machines that required frequent maintenance, which necessitated the construction of replacement guns, flippers, and other parts for the machines. According to former Sega director Akira Nagai, this is what led the company into developing its own games.
Sega released Pong-Tron, its first video-based game, in 1973. The company prospered from the arcade game boom of the late 1970s, with revenues climbing to over US$100 million by 1979. Nagai has stated that Hang-On and Out Run helped to pull the arcade game market out of the 1983 downturn and created new genres of video games.
In terms of arcades, Sega is the world's most prolific arcade game producer, having developed more than 500 games, 70 franchises, and 20 arcade system boards since 1981. It has been recognized by Guinness World Records for this achievement. The following list comprises the various arcade system boards developed and used by Sega in their arcade games.
== Arcade system boards ==
== Additional arcade hardware ==
Sega has developed and released additional arcade games that use technology other than their dedicated arcade system boards. The first arcade game manufactured by Sega was Periscope, an electromechanical game. This was followed by Missile in 1969. Subsequent video-based games such as Pong-Tron (1973), Fonz (1976), and Monaco GP (1979) used discrete logic boards without a CPU microprocessor. Frogger (1981) used a system powered by two Z80 microprocessors. Some titles, such as Zaxxon (1982), were developed outside of Sega, a practice that was not uncommon at the time.
== See also ==
Sega R360
List of Sega pinball machines
List of Sega video game consoles
== References == | Wikipedia/Sega_Model_2 |
The computer graphics pipeline, also known as the rendering pipeline, or graphics pipeline, is a framework within computer graphics that outlines the necessary procedures for transforming a three-dimensional (3D) scene into a two-dimensional (2D) representation on a screen. Once a 3D model is generated, the graphics pipeline converts the model into a visually perceivable format on the computer display. Due to the dependence on specific software, hardware configurations, and desired display attributes, a universally applicable graphics pipeline does not exist. Nevertheless, graphics application programming interfaces (APIs), such as Direct3D, OpenGL and Vulkan were developed to standardize common procedures and oversee the graphics pipeline of a given hardware accelerator. These APIs provide an abstraction layer over the underlying hardware, relieving programmers from the need to write code explicitly targeting various graphics hardware accelerators like AMD, Intel, Nvidia, and others.
The model of the graphics pipeline is usually used in real-time rendering. Often, most of the pipeline steps are implemented in hardware, which allows for special optimizations. The term "pipeline" is used in a similar sense for the pipeline in processors: the individual steps of the pipeline run in parallel as long as any given step has what it needs.
== Concept ==
The 3D pipeline usually refers to the most common form of computer 3D rendering, 3D polygon rendering, distinct from raytracing and raycasting. In raycasting, a ray originates at the point where the camera resides, and if that ray hits a surface, the color and lighting of the point on the surface where the ray hit is calculated. In 3D polygon rendering the reverse happens – the area that is in view of the camera is calculated, and then rays are created from every part of every surface in view of the camera and traced back to the camera.
== Structure ==
A graphics pipeline can be divided into three main parts: Application, Geometry, and Rasterization.
=== Application ===
The application step is executed by the software on the main processor (CPU). During the application step, changes are made to the scene as required, for example, by user interaction using input devices or during an animation. The new scene with all its primitives, usually triangles, lines, and points, is then passed on to the next step in the pipeline.
Examples of tasks that are typically done in the application step are collision detection, animation, morphing, and acceleration techniques using spatial subdivision schemes such as Quadtrees or Octrees. These are also used to reduce the amount of main memory required at a given time. The "world" of a modern computer game is far larger than what could fit into memory at once.
=== Geometry ===
The geometry step (with Geometry pipeline), which is responsible for the majority of the operations with polygons and their vertices (with Vertex pipeline), can be divided into the following five tasks. It depends on the particular implementation of how these tasks are organized as actual parallel pipeline steps.
==== Definitions ====
A vertex (plural: vertices) is a point in the world. Many points are used to join the surfaces. In special cases, point clouds are drawn directly, but this is still the exception.
A triangle is the most common geometric primitive of computer graphics. It is defined by its three vertices and a normal vector – the normal vector serves to indicate the front face of the triangle and is a vector that is perpendicular to the surface. The triangle may be provided with a color or with a texture (image "glued" on top of it). Triangles are preferred over rectangles because any three points in 3D space always create a flat triangle (i.e. a triangle in a single plane). On the other hand, four points in a 3D space may not necessarily create a flat rectangle.
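The normal vector of a triangle follows from the cross product of two of its edges. A minimal Python sketch (pure stdlib, vertices as 3-tuples; degenerate triangles with zero area are not handled):

```python
def triangle_normal(a, b, c):
    """Unit normal of a triangle, from the cross product of two of its edge vectors."""
    u = [b[i] - a[i] for i in range(3)]   # edge a -> b
    v = [c[i] - a[i] for i in range(3)]   # edge a -> c
    n = [u[1] * v[2] - u[2] * v[1],       # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

# A triangle lying in the XY plane has its normal along +Z.
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]
```

The winding order of the vertices determines which side counts as the front face: swapping two vertices flips the normal.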
==== The World Coordinate System ====
The world coordinate system is the coordinate system in which the virtual world is created. This should meet a few conditions for the following mathematics to be easily applicable:
It must be a rectangular Cartesian coordinate system in which all axes are equally scaled.
The definition of the coordinate system is left to the developer. Whether, therefore, the unit vector of the system corresponds in reality to one meter or an Ångström depends on the application.
Whether a right-handed or a left-handed coordinate system is to be used may be determined by the graphic library to be used.
Example: If we are to develop a flight simulator, we can choose the world coordinate system so that the origin is in the middle of the Earth and the unit is set to one meter. In addition, to make the reference to reality easier, we define that the X axis should intersect the equator on the zero meridian, and the Z axis passes through the poles. In a Right-handed system, the Y-axis runs through the 90°-East meridian (somewhere in the Indian Ocean). Now we have a coordinate system that describes every point on Earth in three-dimensional Cartesian coordinates. In this coordinate system, we are now modeling the principles of our world, mountains, valleys, and oceans.
Note: Aside from computer geometry, geographic coordinates are used for the Earth, i.e., latitude and longitude, as well as altitudes above sea level. The approximate conversion – if one does not consider the fact that the Earth is not an exact sphere – is simple:
{\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}(R+{hasl})*\cos({lat})*\cos({long})\\(R+{hasl})*\cos({lat})*\sin({long})\\(R+{hasl})*\sin({lat})\end{pmatrix}}}
with R = radius of the Earth (6,378,137 m), lat = latitude, long = longitude, hasl = height above sea level.
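The conversion above is a direct spherical-to-Cartesian mapping. A minimal Python sketch (function name illustrative; angles taken in degrees, as in geographic coordinates):

```python
import math

R = 6378137.0  # radius of the Earth in meters, as in the formula above

def geo_to_cartesian(lat_deg, long_deg, hasl):
    """Spherical-Earth conversion of latitude/longitude/height to world coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(long_deg)
    r = R + hasl
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    return x, y, z

# The intersection of the equator and the zero meridian lies on the X axis:
print([round(c, 1) for c in geo_to_cartesian(0.0, 0.0, 0.0)])  # [6378137.0, 0.0, 0.0]
```

The north pole (latitude 90°) correspondingly lands on the Z axis, matching the axis conventions of the flight-simulator example.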
All of the following examples apply in a right-handed system. For a left-handed system, the signs may need to be interchanged.
The objects contained within the scene (houses, trees, cars) are often designed in their own object coordinate system (also called model coordinate system or local coordinate system) for reasons of simpler modeling. To assign these objects to coordinates in the world coordinate system or global coordinate system of the entire scene, the object coordinates are transformed using translation, rotation, or scaling. This is done by multiplying the corresponding transformation matrices. In addition, several differently transformed copies can be formed from one object, for example a forest from a tree; this technique is called instancing.
To place a model of an aircraft in the world, we first determine four matrices. Since we work in three-dimensional space, we need four-dimensional homogeneous matrices for our calculations.
First, we need three rotation matrices, namely one for each of the three aircraft axes (vertical axis, transverse axis, longitudinal axis).
Around the X axis (usually defined as a longitudinal axis in the object coordinate system)
{\displaystyle R_{x}={\begin{pmatrix}1&0&0&0\\0&\cos(\alpha )&\sin(\alpha )&0\\0&-\sin(\alpha )&\cos(\alpha )&0\\0&0&0&1\end{pmatrix}}}
Around the Y axis (usually defined as the transverse axis in the object coordinate system)
{\displaystyle R_{y}={\begin{pmatrix}\cos(\alpha )&0&-\sin(\alpha )&0\\0&1&0&0\\\sin(\alpha )&0&\cos(\alpha )&0\\0&0&0&1\end{pmatrix}}}
Around the Z axis (usually defined as a vertical axis in the object coordinate system)
{\displaystyle R_{z}={\begin{pmatrix}\cos(\alpha )&\sin(\alpha )&0&0\\-\sin(\alpha )&\cos(\alpha )&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}}
We also use a translation matrix that moves the aircraft to the desired point in our world:
{\displaystyle T_{x,y,z}={\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\x&y&z&1\end{pmatrix}}}.
Remark: The above matrices are transposed with respect to the ones in the article rotation matrix. See further down for an explanation of why.
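As a sketch, these four matrices can be written directly in Python (row-vector convention, as used throughout this text; the function names are mine):

```python
import math

# The four matrices above, in the row-vector convention used in this text:
# a point is a 1x4 row vector and is multiplied from the left, v_out = v_in * M.
def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(x, y, z):
    # Translation lives in the last row in this convention.
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]
```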
Now we could calculate the position of the vertices of the aircraft in world coordinates by multiplying each point successively with these four matrices. Since the multiplication of a matrix with a vector is quite expensive (time-consuming), one usually takes another path and first multiplies the four matrices together. The multiplication of two matrices is even more expensive but must be executed only once for the whole object. The multiplications
{\displaystyle ((((v*R_{x})*R_{y})*R_{z})*T)}
and
{\displaystyle (v*(((R_{x}*R_{y})*R_{z})*T))}
are equivalent. Thereafter, the resulting matrix could be applied to the vertices. In practice, however, the multiplication with the vertices is still not applied, but the camera matrices (see below) are determined first.
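The equivalence can be checked with a small sketch (pure-Python helpers; the names are mine):

```python
import math

# Multiply two 4x4 matrices (row-major).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Transform a row vector v = (x, y, z, 1) by matrix M: v_out = v * M.
def vec_mat(v, M):
    return [sum(v[k] * M[k][j] for k in range(4)) for j in range(4)]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(x, y, z):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]

v = [1.0, 0.0, 0.0, 1.0]
Rz, T = rot_z(math.pi / 2), translate(10.0, 0.0, 0.0)

# (v * Rz) * T  gives the same point as  v * (Rz * T):
one_by_one = vec_mat(vec_mat(v, Rz), T)
combined = vec_mat(v, mat_mul(Rz, T))
assert all(abs(a - b) < 1e-12 for a, b in zip(one_by_one, combined))
```

The combined matrix is computed once per object, so the per-vertex cost drops to a single vector-matrix product.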
For our example from above, however, the translation has to be determined somewhat differently, since the common meaning of up – apart from at the North Pole – does not coincide with our definition of the positive Z axis and therefore the model must also be rotated around the center of the Earth:
{\displaystyle T_{Kugel}=T_{x,y,z}(0,0,R+{hasl})*R_{y}(\pi /2-{lat})*R_{z}({long})}
The first step pushes the origin of the model to the correct height above the Earth's surface, then it is rotated by latitude and longitude.
The order in which the matrices are applied is important because matrix multiplication is not commutative. This also applies to the three rotations, as an example shows: the point (1, 0, 0) lies on the X axis; if one rotates it first by 90° around the X axis and then around the Y axis, it ends up on the Z axis (the rotation around the X axis does not affect a point that lies on that axis). If, on the other hand, one rotates around the Y axis first and then around the X axis, the resulting point is located on the Y axis. The sequence itself is arbitrary as long as it is always the same. The sequence x, then y, then z (roll, pitch, heading) is often the most intuitive, because the rotation causes the compass direction to coincide with the direction of the "nose".
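This can be checked numerically (row-vector convention, with the rotation matrices as defined above):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1]]

def vec_mat(v, M):
    return [sum(v[k] * M[k][j] for k in range(4)) for j in range(4)]

p = [1.0, 0.0, 0.0, 1.0]   # a point on the X axis
a = math.pi / 2            # 90 degrees

x_then_y = vec_mat(vec_mat(p, rot_x(a)), rot_y(a))
y_then_x = vec_mat(vec_mat(p, rot_y(a)), rot_x(a))
# x_then_y lands on the Z axis; y_then_x lands on the Y axis.
```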
There are also two conventions to define these matrices, depending on whether you want to work with column vectors or row vectors. Different graphics libraries have different preferences here. OpenGL prefers column vectors, DirectX row vectors. The decision determines from which side the point vectors are to be multiplied by the transformation matrices.
For column vectors, the multiplication is performed from the right, i.e.
{\displaystyle v_{out}=M*v_{in}}
, where vout and vin are 4x1 column vectors. The concatenation of the matrices also is done from the right to left, i.e., for example
{\displaystyle M=T_{x}*R_{x}}
, when first rotating and then shifting.
In the case of row vectors, this works exactly the other way around. The multiplication now takes place from the left as
{\displaystyle v_{out}=v_{in}*M}
with 1x4-row vectors and the concatenation is
{\displaystyle M=R_{x}*T_{x}}
when we also first rotate and then move. The matrices shown above are valid for the second case, while those for column vectors are transposed. The rule
{\displaystyle (v*M)^{T}=M^{T}*v^{T}}
applies, which for multiplication with vectors means that you can switch the multiplication order by transposing the matrix.
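A quick numeric check of this rule (pure-Python helpers; the sample matrix and vector are arbitrary):

```python
# Verify (v * M)^T == M^T * v^T for a sample 1x4 row vector and 4x4 matrix.
def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

v = [[1.0, 2.0, 3.0, 1.0]]                                   # 1x4 row vector
M = [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [4, 5, 6, 1]]

left = transpose(mat_mul(v, M))              # (v * M)^T  -> 4x1 column
right = mat_mul(transpose(M), transpose(v))  # M^T * v^T  -> 4x1 column
assert left == right
```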
In matrix chaining, each transformation defines a new coordinate system, which allows flexible extensions. For instance, an aircraft's propeller, modeled separately, can be attached to the aircraft's nose by a translation that only shifts from the propeller's coordinate system to the aircraft's. To render the aircraft, its transformation matrix is computed first and applied to its points; the propeller's points are then transformed by the product of the propeller's model matrix and the aircraft's matrix. The matrix calculated in this way is also called the world matrix, and it must be determined for each object in the scene before rendering. The application can introduce changes here, for example changing the position of the aircraft according to its speed after each frame.
==== Camera Transformation ====
In addition to the objects, the scene also defines a virtual camera or viewer that indicates the position and direction of view relative to which the scene is rendered. The scene is transformed so that the camera is at the origin looking along the Z-axis. The resulting coordinate system is called the camera coordinate system and the transformation is called camera transformation or View Transformation.
The view matrix is usually determined from the camera position, the target point (where the camera looks), and an "up vector" ("up" from the viewer's viewpoint). First, three auxiliary vectors are required:
Zaxis = normal(cameraPosition – cameraTarget)
Xaxis = normal(cross(cameraUpVector, Zaxis))
Yaxis = cross(Zaxis, Xaxis)
With normal(v) = normalization of the vector v;
cross(v1, v2) = cross product of v1 and v2.
Finally, the matrix:
{\displaystyle {\begin{pmatrix}{xaxis}.x&{yaxis}.x&{zaxis}.x&0\\{xaxis}.y&{yaxis}.y&{zaxis}.y&0\\{xaxis}.z&{yaxis}.z&{zaxis}.z&0\\-{dot}({xaxis},{cameraPosition})&-{dot}({yaxis},{cameraPosition})&-{dot}({zaxis},{cameraPosition})&1\end{pmatrix}}}
with dot(v1, v2) = dot product of v1 and v2.
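A sketch of this construction in Python, following the pseudocode above (row-vector convention; `look_at` is my name for the helper):

```python
import math

def normal(v):
    l = math.sqrt(sum(c * c for c in v))
    return [c / l for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def look_at(eye, target, up):
    # Auxiliary axes exactly as in the pseudocode above.
    zaxis = normal([e - t for e, t in zip(eye, target)])
    xaxis = normal(cross(up, zaxis))
    yaxis = cross(zaxis, xaxis)
    return [[xaxis[0], yaxis[0], zaxis[0], 0],
            [xaxis[1], yaxis[1], zaxis[1], 0],
            [xaxis[2], yaxis[2], zaxis[2], 0],
            [-dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1]]

# A camera at (0, 0, 5) looking at the origin sees the origin
# five units in front of it (at z = -5 in camera space).
V = look_at([0.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```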
==== Projection ====
The 3D projection step transforms the view volume into a cube with the corner coordinates (−1, −1, 0) and (1, 1, 1); occasionally other target volumes are used. This step is called projection, even though it transforms a volume into another volume, since the resulting Z coordinates are not stored in the image but are only used for Z-buffering in the later rasterization step. For a perspective image, a central projection is used. To limit the number of displayed objects, two additional clipping planes are used; the visual volume is therefore a truncated pyramid (frustum). Parallel or orthogonal projection is used, for example, for technical representations, because it has the advantage that all parallels in object space are also parallel in image space, and surfaces and volumes have the same size regardless of the distance from the viewer. Maps, for example, use an orthogonal projection (a so-called orthophoto), but oblique images of a landscape cannot be used this way: although they can technically be rendered, they appear so distorted that they are of no use. The formula for calculating a perspective mapping matrix is:
{\displaystyle {\begin{pmatrix}w&0&0&0\\0&h&0&0\\0&0&{far}/({near-far})&-1\\0&0&({near}*{far})/({near}-{far})&0\end{pmatrix}}}
with h = cot(fieldOfView / 2.0) (aperture angle of the camera); w = h / aspectRatio (aspect ratio of the target image); near = smallest distance to be visible; far = longest distance to be visible.
The reasons why the smallest and the greatest distance have to be given here are, on the one hand, that this distance is divided to reach the scaling of the scene (more distant objects are smaller in a perspective image than near objects), and on the other hand to scale the Z values to the range 0..1, for filling the Z-buffer. This buffer often has only a resolution of 16 bits, which is why the near and far values should be chosen carefully. A too-large difference between the near and the far value leads to so-called Z-fighting because of the low resolution of the Z-buffer. It can also be seen from the formula that the near value cannot be 0 because this point is the focus point of the projection. There is no picture at this point.
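As a sketch, the matrix and the subsequent perspective divide can be written as follows (assuming the row-vector convention and a camera looking along −Z; the function names are mine):

```python
import math

# Perspective matrix as given in the formula above (row-vector convention).
def perspective(field_of_view, aspect_ratio, near, far):
    h = 1.0 / math.tan(field_of_view / 2.0)   # cot(fieldOfView / 2)
    w = h / aspect_ratio
    return [[w, 0, 0, 0],
            [0, h, 0, 0],
            [0, 0, far / (near - far), -1],
            [0, 0, near * far / (near - far), 0]]

def project(v, M):
    out = [sum(v[k] * M[k][j] for k in range(4)) for j in range(4)]
    return [c / out[3] for c in out[:3]]      # perspective divide by w

P = perspective(math.pi / 2, 4 / 3, 1.0, 100.0)
# Points on the near plane (z = -near) map to depth 0,
# points on the far plane (z = -far) map to depth 1.
```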
For the sake of completeness, the formula for parallel projection (orthogonal projection):
{\displaystyle {\begin{pmatrix}2.0/w&0&0&0\\0&2.0/h&0&0\\0&0&1.0/({near-far})&-1\\0&0&{near}/({near}-{far})&0\end{pmatrix}}}
with w = width of the target cube (dimension in units of the world coordinate system); h = w / aspectRatio (aspect ratio of the target image); near = smallest distance to be visible; far = longest distance to be visible.
For reasons of efficiency, the camera and projection matrix are usually combined into a transformation matrix so that the camera coordinate system is omitted. The resulting matrix is usually the same for a single image, while the world matrix looks different for each object. In practice, therefore, view and projection are pre-calculated so that only the world matrix has to be adapted during the display. However, more complex transformations such as vertex blending are possible. Freely programmable geometry shaders that modify the geometry can also be executed.
In the actual rendering step, the world matrix * camera matrix * projection matrix is calculated and then finally applied to every single point. Thus, the points of all objects are transferred directly to the screen coordinate system (at least almost, the value range of the axes is still −1..1 for the visible range, see section "Window-Viewport-Transformation").
==== Lighting ====
Often a scene contains light sources placed at different positions to make the lighting of the objects appear more realistic. In this case, a gain factor for the texture is calculated for each vertex based on the light sources and the material properties associated with the corresponding triangle. In the later rasterization step, the vertex values of a triangle are interpolated over its surface. A general lighting (ambient light) is applied to all surfaces. It is the diffuse and thus direction-independent brightness of the scene. The sun is a directed light source, which can be assumed to be infinitely far away. The illumination effected by the sun on a surface is determined by forming the scalar product of the directional vector from the sun and the normal vector of the surface. If the value is negative, the surface is facing the sun.
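A minimal sketch of this diffuse term (assuming, as above, that the sun's direction vector points from the sun toward the scene, so a lit surface yields a negative dot product; the names are mine):

```python
# Per-vertex gain factor: ambient light plus the clamped dot product of the
# sun's direction and the surface normal. sun_dir points FROM the sun.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse_gain(sun_dir, normal, ambient=0.2):
    return ambient + max(0.0, -dot(sun_dir, normal))

# A surface whose normal points straight at the sun receives full gain;
# a surface facing away receives only the ambient term.
```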
==== Clipping ====
Only the primitives that are within the visual volume need to be rasterized (drawn). This visual volume is defined as the inside of a frustum, a shape in the form of a pyramid with a cut-off top. Primitives that are completely outside the visual volume are discarded; this is called frustum culling. Further culling methods, such as back-face culling, which reduce the number of primitives to be considered, can theoretically be executed in any step of the graphics pipeline. Primitives that are only partially inside the cube must be clipped against the cube. The advantage of the preceding projection step is that the clipping always takes place against the same cube. Only the (possibly clipped) primitives that are within the visual volume are forwarded to the final step.
==== Window-Viewport transformation ====
To output the image to any target area (viewport) of the screen, another transformation, the Window-Viewport transformation, must be applied. This is a shift, followed by scaling. The resulting coordinates are the device coordinates of the output device. The viewport contains 6 values: the height and width of the window in pixels, the upper left corner of the window in window coordinates (usually 0, 0), and the minimum and maximum values for Z (usually 0 and 1).
Formally:
{\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}{vp}.X+(1.0+v.X)*{vp}.{width}/2.0\\{vp}.Y+(1.0-v.Y)*{vp}.{height}/2.0\\{vp}.{minz}+v.Z*({vp}.{maxz}-{vp}.{minz})\end{pmatrix}}}
with vp = viewport; v = point after projection.
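A minimal sketch, representing the viewport as a plain dict whose field names follow the formula above (this representation is an assumption, not a specific API):

```python
# Map a point v = (x, y, z) in normalized device coordinates (-1..1, -1..1,
# 0..1) to device coordinates of the given viewport.
def window_viewport(v, vp):
    x = vp["X"] + (1.0 + v[0]) * vp["width"] / 2.0
    y = vp["Y"] + (1.0 - v[1]) * vp["height"] / 2.0
    z = vp["minz"] + v[2] * (vp["maxz"] - vp["minz"])
    return (x, y, z)

vp = {"X": 0, "Y": 0, "width": 800, "height": 600, "minz": 0.0, "maxz": 1.0}
# The NDC origin maps to the center of an 800x600 viewport.
```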
On modern hardware, most of the geometry computation steps are performed in the vertex shader. This is, in principle, freely programmable, but generally performs at least the transformation of the points and the illumination calculation. For the DirectX programming interface, the use of a custom vertex shader is necessary from version 10, while older versions still have a standard shader.
=== Rasterization ===
Rasterization is the final step before the fragment shader stage; all primitives pass through it. In the rasterization step, discrete fragments are created from continuous primitives.
In this stage of the graphics pipeline, the grid points are also called fragments, for the sake of greater distinctiveness. Each fragment corresponds to one pixel in the frame buffer, which in turn corresponds to one pixel of the screen. These can be colored (and possibly illuminated). Furthermore, in the case of overlapping polygons it is necessary to determine the visible fragment, i.e., the one closest to the observer. A Z-buffer is usually used for this so-called hidden surface determination. The color of a fragment depends on the illumination, texture, and other material properties of the visible primitive and is often interpolated using the triangle vertex properties. Where available, a fragment shader (also called a pixel shader) is run in the rasterization step for each fragment of the object. If a fragment is visible, it can now be mixed with color values already in the image if transparency or multi-sampling is used. In this step, one or more fragments become a pixel.
To prevent the user from seeing the gradual rasterization of the primitives, double buffering takes place. The rasterization is carried out in a special memory area. Once the image has been completely rasterized, it is copied to the visible area of the image memory.
=== Inverse ===
All matrices used are nonsingular and thus invertible. Since the multiplication of two nonsingular matrices creates another nonsingular matrix, the entire transformation matrix is also invertible. The inverse is required to recalculate world coordinates from screen coordinates – for example, to determine from the mouse pointer position the clicked object. However, since the screen and the mouse have only two dimensions, the third is unknown. Therefore, a ray is projected at the cursor position into the world and then the intersection of this ray with the polygons in the world is determined.
== Shader ==
Classic graphics cards are still relatively close to the graphics pipeline. With increasing demands on the GPU, restrictions were gradually removed to create more flexibility. Modern graphics cards use a freely programmable, shader-controlled pipeline, which allows direct access to individual processing steps. To relieve the main processor, additional processing steps have been moved to the pipeline and the GPU.
The most important shader units are vertex shaders, geometry shaders, and pixel shaders.
The Unified Shader has been introduced to take full advantage of all units. This gives a single large pool of shader units. As required, the pool is divided into different groups of shaders. A strict separation between the shader types is therefore no longer useful.
It is also possible to use a so-called compute shader to perform arbitrary calculations on the GPU, independent of displaying graphics. The advantage is that they run massively in parallel, but there are limitations. These universal calculations are also called general-purpose computing on graphics processing units, or GPGPU for short.
Mesh shaders are a recent addition, aiming to overcome the bottlenecks of the geometry pipeline's fixed layout.
== See also ==
Pipeline (computing)
Instruction pipelining
Hardware acceleration
== Sources ==
Akenine-Möller, Tomas; Haines, Eric; Hoffman, Naty (2019) [2008]. Real-Time Rendering. CRC Press. ISBN 9781315362007.
Bender, Michael; Brill, Manfred (2006). Computergrafik: ein anwendungsorientiertes Lehrbuch. München: Hanser. ISBN 3-446-40434-1.
Fischer, Martin (2011-07-04). Pixel-Fabrik. Wie Grafikchips Spielewelten auf den Schirm zaubern. c't Magazin für Computer Technik. Heise Zeitschriften Verlag. p. 180. ISSN 0724-8679.
== References ==
== External links == | Wikipedia/Graphics_pipeline |
In computer graphics, a video card's pixel fillrate refers to the number of pixels that can be rendered on the screen and written to video memory in one second. Pixel fillrates are given in megapixels per second or in gigapixels per second (in the case of newer cards), and are obtained by multiplying the number of render output units (ROPs) by the clock frequency of the graphics processing unit (GPU) of a video card.
A similar concept, texture fillrate, refers to the number of texture map elements (texels) the GPU can map to pixels in one second. Texture fillrate is obtained by multiplying the number of texture mapping units (TMUs) by the clock frequency of the GPU. Texture fillrates are given in mega or gigatexels per second.
However, there is no full agreement on how to calculate and report fillrates. Another possible method is to multiply the number of pixel pipelines by the GPU's clock frequency.
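A sketch of the two calculations described above (the unit counts and clock values below are illustrative, not those of any specific card):

```python
# Theoretical fillrates: units multiplied by the GPU clock.
# Mpixels/s = units * MHz; dividing by 1000 gives Gpixels/s.
def pixel_fillrate_gpixels(rops, clock_mhz):
    return rops * clock_mhz / 1000.0

def texture_fillrate_gtexels(tmus, clock_mhz):
    return tmus * clock_mhz / 1000.0

# e.g. a hypothetical GPU with 32 ROPs at 1500 MHz: 48.0 Gpixels/s
```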
The results of these multiplications correspond to a theoretical number; the actual fillrate depends on many other factors. In the past, the fillrate was used as an indicator of performance by video card manufacturers such as ATI and Nvidia; however, its importance as a measurement of performance has declined as the bottleneck in graphics applications has shifted. For example, today the number and speed of unified shader processing units attract more attention. Although fillrate is no longer a substantial bottleneck in most games, it can still limit certain parts of a game; for example, applying a Gaussian blur can be bottlenecked by fillrate.
Scene complexity can be increased by overdraw, which happens when an object is drawn to the frame buffer and another object (such as a wall) is then drawn on top of it, covering it up. The time spent drawing the first object is thus wasted because it is not visible. When a sequence of scenes is extremely complex (many pixels have to be drawn for each scene), the frame rate for the sequence may drop. When designing graphics-intensive applications, one can determine whether the application is fillrate-limited (or shader-limited) by seeing whether the frame rate increases dramatically when the application runs at a lower resolution or in a smaller window. Although this is not a foolproof method, modern video game engines can dynamically reduce the level of detail and thereby relieve fillrate-limited applications. The best way to find fillrate bottlenecks is to use GPU vendor software such as NVIDIA Nsight Graphics, AMD Radeon GPU Profiler, and the Intel Graphics Performance Analyzers.
== See also ==
Graphics processing unit
Pixel shader
== References == | Wikipedia/Fillrate |
3dfx Interactive, Inc. was an American computer hardware company headquartered in San Jose, California, founded in 1994, that specialized in the manufacturing of 3D graphics processing units, and later, video cards. It was a pioneer in the field from the mid 1990s to 2000.
The company's original product was the Voodoo Graphics, an add-in card that implemented hardware acceleration of 3D graphics. The hardware accelerated only 3D rendering, relying on the PC's current video card for 2D support. Despite this limitation, the Voodoo Graphics product and its follow-up, Voodoo2, were popular. It became standard for 3D games to offer support for the company's Glide API.
Renewed interest in 3D gaming led to the success of the company's products and by the second half of the 1990s products combining a 2D output with 3D performance were appearing. This was accelerated by the introduction of Microsoft's Direct3D, which provided a single high-performance API that could be implemented on these cards, seriously eroding the value of Glide. While 3dfx continued to offer high-performance options, the value proposition was no longer compelling.
In the late 1990s, 3dfx was involved in an infringement lawsuit, which, combined with declining sales in its final years, led to its demise. Most of the company's assets were acquired by Nvidia Corporation on December 15, 2000, mostly for intellectual property rights and for its engineers, around one hundred of whom joined Nvidia. The acquisition was accounted for as a purchase by Nvidia and was completed by the first quarter of its fiscal year 2002. 3dfx ceased supporting its products on February 15, 2001, and filed for bankruptcy on October 15, 2002.
== Company history ==
=== Early products ===
==== First chips ====
The company was founded on August 24, 1994, as 3D/fx, Inc., by Ross Smith, Gary Tarolli, and Scott Sellers, all former employees of Silicon Graphics Inc. They were soon joined by Gordie Campbell of TechFarm. 3dfx released its first product, the Voodoo Graphics 3D chip, to manufacturing on November 6, 1995. The chip is a VGA 3D accelerator that features rendering methods such as point-sampled texture mapping, Z- and double buffering, Gouraud shading, subpixel correction, alpha compositing, and anti-aliasing. Alongside the chip came 3dfx's Glide API, designed to take full advantage of the Voodoo Graphics' features. The company stated that Glide was created because no existing API at the time could fully utilize the chip's capabilities: DirectX 3.0 was deemed to be lacking, and OpenGL was regarded as suitable only for CAD/CAM workstations. The first graphics card to use the chip was Orchid Technology's Righteous 3D, released on October 7, 1996. The company manufactured only the chips and some reference boards, and initially did not sell any product to consumers; rather, it acted as an OEM supplier for graphics card companies, which designed, manufactured, marketed, and sold their own graphics cards including the Voodoo chipset.
3dfx gained initial fame in the arcade market. The first arcade machine that 3dfx Voodoo Graphics hardware was used in was a 1996 baseball game featuring a bat controller with motion sensing technology called ICE Home Run Derby. Later that year it was featured in more popular titles, such as Atari's San Francisco Rush and Wayne Gretzky's 3D Hockey. 3dfx also developed MiniGL after id Software's John Carmack released a 1997 version of Quake that used the OpenGL API. The MiniGL translated OpenGL commands into Glide, and gave 3dfx the advantage as the sole consumer chip company to deliver a functional graphics library driver until 1998.
==== Entry to the consumer market ====
Towards the end of 1995, the cost of DRAM dropped significantly and 3dfx was able to enter the consumer PC hardware market with aggressive pricing compared to the few previous 3D graphics solutions for computers. Prior to affordable 3D hardware, games such as Doom and Quake had compelled video game players to move from their 80386s to 80486s, and then to the Pentium.
By the end of 1997, the Voodoo Graphics was by far the most widely adopted 3D accelerator among both consumers and software developers. The Voodoo's primary competition was from PowerVR and Rendition. PowerVR produced a similar 3D-only add-on card with capable 3D support, although it was not comparable to Voodoo Graphics in either image quality or performance. 3dfx saw intense competition in the market from cards that offered the combination of 2D and 3D acceleration. While these cards, such as the Nvidia NV1, Matrox Mystique, S3 ViRGE, Vérité V1000, and ATI 3D Rage, offered inferior 3D acceleration in terms of image quality, performance, or both, their lower cost and simplicity often appealed to OEM system builders.
==== Dreamcast ====
In 1997, 3dfx was working with entertainment company Sega to develop a new video game console hardware platform. Sega solicited two competing designs: a unit code-named "Katana", developed in Japan using NEC and Imagination Technologies (then VideoLogic) technology, and "Blackbelt", a system designed in the United States using 3dfx technology.
However, on July 22, 1997, 3dfx announced that Sega was terminating the development contract. Sega chose to use NEC's PowerVR chipset for its game console, though it still planned to purchase the rights to 3dfx's technology in order to prevent competitors from acquiring it.
3dfx said Sega has still not given a reason as to why it terminated the contract or why it chose NEC's accelerator chipset over 3dfx's. According to Dale Ford, senior analyst at Dataquest, a market research firm based in San Jose, California, a number of factors could have influenced Sega's decision to move to NEC, including NEC's proven track record of supplying chipsets for the Nintendo 64 and the demonstrated ability to be able to handle a major influx of capacity if the company decided to ramp up production on a moment's notice.
"This is a highly competitive market with price wars happening all the time and it would appear that after evaluating a number of choices—and the ramifications each choice brings—Sega went with a decision that it thought was best for the company's longevity," said Mr. Ford.
"Sega has to make a significant move to stay competitive and they need to make it soon. Now whether this move is to roll out another home console platform or move strictly to the PC gaming space is unknown."
Sega quickly quashed 3dfx's "Blackbelt" and used the NEC-based "Katana" as the model for the product that would be marketed and sold as the Dreamcast. 3dfx sued Sega for breach of contract, accusing Sega of starting the deal in bad faith in order to take 3dfx technology. The case was settled out of court.
=== New chips, competition, and decline ===
==== Development of Rampage ====
In early 1998, 3dfx embarked on a new development project. The Rampage development project was new technology for use in a new graphics card that would take approximately two years to develop, and would supposedly be several years ahead of the competition once it debuted. The company hired hardware and software teams in Austin, Texas, to develop 2D and 3D Windows device drivers for Rampage in the summer of 1998. The hardware team in Austin initially focused on Rampage, but then worked on transform and lighting (T&L) engines and on MPEG decoder technology.
==== Acquisition of STB ====
3dfx announced in January 1999 that their Banshee cards had sold about one million units. While Nvidia had yet to launch a product in the add-in board market that sold as well as 3dfx's Voodoo line, the company was gaining steady ground in the OEM market. The Nvidia RIVA TNT was a similar, highly integrated product that had two major advantages in greater 3D speed and 32-bit 3D color support. 3dfx, by contrast, had very limited OEM sales, as the Banshee was adopted only in small numbers by OEMs.
3dfx executed a major strategy change just prior to the launch of Voodoo3 by purchasing STB Systems for US $141 million on December 14, 1998. STB Systems was one of the larger graphics card manufacturers at the time; the intent was for 3dfx to start manufacturing, marketing, and selling its own graphics cards, rather than functioning only as an OEM supplier. Purchase of STB was intended to give 3dfx access to that company's considerable OEM resources and sales channels, but the intended benefits of the acquisition never materialized. The two corporations were vastly different entities, with different cultures and structures, and they never integrated smoothly.
STB, prior to the 3dfx acquisition, had also approached Nvidia as a potential acquisition partner. At the time, STB was Nvidia's largest customer and was only minimally engaged with 3dfx. 3dfx management mistakenly believed that acquiring STB would ensure OEM design wins with their products and that product limitations would be overcome with STB's knowledge in supporting the OEM sales/design-win cycles. Nvidia decided not to acquire STB and to continue to support many brands of graphics board manufacturers. After STB was acquired by 3dfx, Nvidia focused on being a virtual graphics card manufacturer for the OEMs and strengthened its position in selling finished reference designs ready for market to the OEMs. STB's manufacturing facility in Juarez, Mexico, was not able to compete from either a cost or quality point of view when compared to the burgeoning original design manufacturers (ODMs) and contract electronic manufacturers (CEMs) that were delivering solutions in Asia for Nvidia. Before the STB merger was finalized, some of 3dfx's OEMs warned the company that any product from Juarez would not be deemed fit to ship with their systems; however, 3dfx management believed these problems could be addressed over time. Those customers generally became Nvidia customers and no longer chose to ship 3dfx products.
The acquisition of STB was one of the main contributors to 3dfx's downfall; the Voodoo 3 became the first 3dfx card to be developed in-house rather than by third-party manufacturers, which were a significant source of revenue for the company. These third-party manufacturers turned into competitors and began sourcing graphics chips from Nvidia. This also further alienated 3dfx's remaining OEM customers, as they had a single source for 3dfx products and could not choose an OEM to provide cost flexibility. With the purchase of STB, 3dfx created two cards targeting the low-end market, the Velocity 100, which has 8 MB of SDRAM, and the Velocity 200, which has 16 MB of SGRAM. The cards both used a chipset based on the Voodoo3 2000, and it was claimed that they were "underclocked". However, it was revealed by testing that the Velocity 100 chipset has the same clock speed as a typical Voodoo3 2000—at 143 MHz—and that, while one of its two TMUs is disabled in OpenGL and Glide applications for memory management, it can be re-enabled to increase those applications' performance, and AnandTech found no side effects of enabling the component.
As 3dfx focused more on the retail graphics card space, further inroads into the OEM space were limited. A significant requirement of the OEM business was the ability to consistently produce new products on the six-month product refresh cycle the computer manufacturers required; 3dfx did not have the methodology nor the mindset to focus on this business model. In the end, 3dfx opted to be a retail distribution company manufacturing their own branded products.
==== Delays ====
The company's final product was code-named Napalm. Originally, this was just a Voodoo3 modified to support newer technologies and higher clock speeds, with performance estimated to be around the level of the RIVA TNT2. However, Napalm was delayed, and in the meantime Nvidia brought out their landmark GeForce 256 chip, which shifted even more of the computational work from the CPU to the graphics chip. Napalm would have been unable to compete with the GeForce, so it was redesigned to support multiple chip configurations, like the Voodoo2 had. The end-product was named VSA-100, with VSA standing for Voodoo Scalable Architecture. 3dfx was finally able to have a product that could defeat the GeForce.
However, by the time the VSA-100 based cards made it to the market, the GeForce 2 and ATI Radeon cards had arrived and were offering higher performance for the same price. The only real advantage the Voodoo 5 5500 had over the GeForce 2 GTS or Radeon was its superior spatial anti-aliasing implementation, and the fact that, relative to its peers, it did not suffer such a large performance hit when anti-aliasing was enabled. 3dfx was fully aware of the Voodoo 5's speed deficiency, so they touted it as quality over speed, which was a reversal of the Voodoo 3 marketing which emphasized raw performance over features. 5500 sales were respectable but volumes were not at a level to keep 3dfx afloat.
==== GigaPixel and insolvency ====
On March 28, 2000, 3dfx bought GigaPixel for US$186 million, in order to help launch its Rampage product to market quicker. GigaPixel had previously almost won the contract to build Microsoft's Xbox console, but lost out to Nvidia.
However, in late 2000, not long after the launch of the Voodoo 4, several of 3dfx's creditors decided to initiate bankruptcy proceedings. 3dfx, as a whole, would have had virtually no chance of successfully contesting these proceedings, and instead opted to sell its assets to Nvidia, effectively ceasing to exist as a company. The resolution and legality of those arrangements (with respect to the purchase, 3dfx's creditors and its bankruptcy proceedings) took over a decade to resolve in the federal courts. Specifically, the trustee of 3dfx's bankruptcy estate challenged 3dfx's sale of its assets to Nvidia as an allegedly fraudulent conveyance. On November 6, 2014, in an unpublished memorandum order, the U.S. Court of Appeals for the Ninth Circuit affirmed the "district court's judgment affirming the bankruptcy court's determination that [Nvidia] did not pay less than fair market value for assets purchased from 3dfx shortly before 3dfx filed for bankruptcy".
A majority of the engineering and design team that had worked on Rampage/Sage remained through the transition and were retained in-house to work on what became the GeForce FX series. Others accepted employment with ATI, bringing their knowledge to the creation of the X series of video cards and the development of Crossfire, ATI's own version of SLI.
The prototype Spectre 1000 cards were delivered to software developers mere days before the company declared insolvency. The software team developed both device drivers and a binary-compatible soft emulation of the Rampage function set; thus, there were working Windows NT device drivers within a few days of the Rampage system first powering on in the second week of December 2000. At the time of Nvidia's acquisition, 3dfx had already been developing the successors to Spectre: "Fear", based on a next-generation Rampage called Fusion together with Sage2, and "Mojo", which would combine both into a single die, implement tiled rendering, and showcase some advanced technologies from the GigaPixel acquisition. The unreleased Spectre 1000 card, based on Rampage, would eventually be leaked and tested. Performance indicated that it would have struggled to compete with Nvidia's already-released GeForce 256, though the proposed Spectre 2000 and Spectre 3000 cards, which featured a combination of Rampage and Sage units, would have led the market until late 2002 and Nvidia's GeForce 4 series.
After Nvidia acquired 3dfx's intellectual property, they announced that they would not provide technical support for 3dfx products. As of 2019, drivers and support are still offered by community websites. However, while functional, the drivers do not carry a manufacturer's backing and are considered beta software. For a limited time, Nvidia offered a program under which 3dfx owners could trade in their cards for Nvidia cards of similar performance. On December 15, 2000, 3dfx apologized to the customers with a final press release. In 2003, the source code for 3dfx drivers leaked, resulting in fan-made, updated drivers and further support.
Although 1997 was marked by analysts as a turning point for 3dfx due to the marketing led by the new CEO Greg Ballard, there was criticism of Ballard's understanding of R&D in the graphics industry. Single-card 2D/3D solutions were taking over the market, and although Ballard saw the need and attempted to direct the company there with the Voodoo Banshee and the Voodoo3, both of these cost the company millions in sales and lost market share while diverting vital resources from the Rampage project. Then 3dfx released word in early 1999 that the still-competitive Voodoo2 would support only OpenGL and Glide under Microsoft's Windows 2000 operating system, and not Direct3D. Many games were transitioning to Direct3D at this point, and the announcement caused many PC gamers – the core demographic of 3dfx's market – to switch to Nvidia or ATI offerings for their new machines.
Ballard resigned shortly after, in January 2000.
== Product development history ==
=== Voodoo Graphics PCI ===
A typical Voodoo Graphics PCI expansion card consisted of a DAC, a frame buffer processor and a texture mapping unit, along with 4 MB of EDO DRAM. The RAM and graphics processors operated at 50 MHz. It provided only 3D acceleration and as such the computer also needed a traditional video controller for conventional 2D software. A pass-through VGA cable daisy-chained the video controller to the Voodoo, which was itself connected to the monitor. The method used to engage the Voodoo's output circuitry varied between cards, with some using mechanical relays while others utilized purely solid-state components. The mechanical relays emitted an audible "clicking" sound when they engaged and disengaged.
=== Voodoo Rush ===
In August 1997, 3dfx released the Voodoo Rush chipset, combining a Voodoo chip with a 2D chip on the same circuit board, eliminating the need for a separate VGA card. Most cards were built with an Alliance Semiconductor AT25/AT3D 2D component, some were built with a Macronix chip, and there were initial plans to partner with Trident, but no such boards were ever marketed.
The Rush had the same specifications as Voodoo Graphics, but did not perform as well because the Rush chipset had to share memory bandwidth with the CRTC of the 2D chip. Furthermore, the Rush chipset was not directly present on the PCI bus but had to be programmed through linked registers of the 2D chip. Like the Voodoo Graphics, there was no interrupt mechanism, so the driver had to poll the Rush in order to determine whether a command had completed or not; the indirection through the 2D component added significant overhead here and tended to back up traffic on the PCI interface. The typical performance hit was around 10% compared to Voodoo Graphics, and even worse in windowed mode. Later, Rush boards were released by Hercules featuring 8 MiB VRAM and a 10% higher clock speed, in an attempt to close this performance gap.
Some manufacturers bundled a PC version of Atari Games' racing game San Francisco Rush, the arcade version of which utilised a slightly upgraded Voodoo Graphics chipset with an extra texture mapping unit and additional texture memory.
The Voodoo Rush was 3dfx's first commercial failure. Sales were very poor, and the cards were discontinued within a year.
=== Voodoo2 ===
The 3dfx Voodoo2, the successor to the Voodoo Graphics chipset released in March 1998, was architecturally similar, but the basic board configuration added a second texturing unit, allowing two textures to be drawn in a single pass.
The Voodoo2 required three chips and a separate VGA graphics card, whereas new competing 3D products, such as the ATI Rage Pro, Nvidia RIVA 128, and Rendition Verite 2200, were single-chip products. Despite some shortcomings, such as the card's dithered 16-bit 3D color rendering and 800x600 resolution limitations, no other manufacturers' products could match the smooth framerates that the Voodoo2 produced. It was a landmark (and expensive) achievement in PC 3D-graphics. Its excellent performance, and the mindshare gained from the original Voodoo Graphics, resulted in its success. Many users even preferred Voodoo2's dedicated purpose, because they were free to use the quality 2D card of their choice as a result. Some 2D/3D combined solutions at the time offered quite sub-par 2D quality and speed.
The Voodoo2 introduced Scan-Line Interleave (SLI), in which two Voodoo2 boards were connected together, each drawing half the scan lines of the screen. SLI increased the maximum resolution supported to 1024×768. Because of the high cost and inconvenience of using three separate graphics cards (two Voodoo2 SLI plus the general purpose 2D graphics adapter), the Voodoo2 SLI scheme had minimal effect on total market share and was not a financial success. SLI capability was not offered in subsequent 3dfx board designs, although the technology would be later used to link the VSA-100 chips on the Voodoo 5.
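The scan-line division described above can be illustrated with a short sketch (the function names are illustrative; actual SLI was implemented in the Voodoo2 hardware via a ribbon cable between the boards, not in software):

```python
# Sketch of Voodoo2-style Scan-Line Interleave (SLI): two boards each
# render every other scan line, and the final frame interleaves them.
# Illustrative only; real SLI composited the signals in hardware.

def split_scanlines(height):
    """Board 0 renders even scan lines, board 1 renders odd ones."""
    board0 = [y for y in range(height) if y % 2 == 0]
    board1 = [y for y in range(height) if y % 2 == 1]
    return board0, board1

def compose_frame(board0_lines, board1_lines):
    """Interleave the two boards' rendered lines into one frame."""
    frame = []
    for even_line, odd_line in zip(board0_lines, board1_lines):
        frame.extend([even_line, odd_line])
    return frame

# At SLI's 1024x768 maximum, each board renders only 384 of 768 lines.
b0, b1 = split_scanlines(768)
assert len(b0) == len(b1) == 384
```

Halving each board's scan-line workload is what allowed the paired setup to reach the higher 1024×768 resolution.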
The arrival of the Nvidia RIVA TNT, with its integrated 2D/3D chipset, offered a minor challenge to the Voodoo2's supremacy months later.
=== Voodoo Banshee ===
Near the end of 1998, 3dfx released the Voodoo Banshee, which featured a lower price achieved through higher component integration, and a more complete feature-set including 2D acceleration, to target the mainstream consumer market. A single-chip solution, the Banshee was a combination of a 2D video card and partial (only one texture mapping unit) Voodoo2 3D hardware. Due to the missing second TMU, in 3D scenes which used multiple textures per polygon, the Voodoo2 was significantly faster. However, in scenes dominated by single-textured polygons, the Banshee could match or exceed the Voodoo2 due to its higher clock speed and resulting greater pixel fillrate.
Banshee's 2D acceleration was the first such hardware from 3dfx and it was very capable. It rivaled the fastest 2D cores from Matrox, Nvidia, and ATI. It consisted of a 128-bit 2D GUI engine and a 128-bit VESA VBE 3.0 VGA core. The graphics chip capably accelerated DirectDraw and supported all of the Windows Graphics Device Interface (GDI) in hardware, with all 256 raster operations and tertiary functions, and hardware polygon acceleration. The 2D core achieved near-theoretical maximum performance with a null driver test in Windows NT.
Voodoo Banshee supports MPEG2 video acceleration.
=== Voodoo3 ===
The Voodoo 3 was hyped as the graphics card that would make 3dfx the undisputed leader, but the actual product fell below expectations. Though it was still the fastest card, edging out the RIVA TNT2 by a small margin, the Voodoo3 lacked 32-bit color and large-texture support. Though at the time few games supported large textures or 32-bit color, and those that did were generally too demanding to run at playable framerates, the features "32-bit color support" and "2048×2048 textures" were much more impressive on paper than 16-bit color and 256×256 texture support. The Voodoo3 sold relatively well, but was disappointing compared to the first two models, and 3dfx lost market leadership to Nvidia.
As 3dfx attempted to counter the TNT2 threat, it was surprised by Nvidia's GeForce 256. The GeForce was a single-chip processor with integrated transform, lighting, triangle setup/clipping (hardware T&L), and rendering engines, giving it a significant performance advantage over the Voodoo3. The 3dfx Voodoo3 2000 PCI was the highest-performance 2D/3D card available for the Apple Macintosh at the time of its release, though support from 3dfx was labeled as 'beta' and required a firmware reflash. As game developers switched to DirectX and OpenGL, which respectively had become the industry standard and were becoming increasingly popular, 3dfx released its Glide API under the General Public License on December 6, 1999.
=== Voodoo 4 & 5 ===
The Voodoo 5 5000, which had 32 MB of VRAM compared to the 5500's 64 MB, was never launched.
The only other member of the line, the Voodoo 4 4500, was as much of a disaster as the Voodoo Rush: its performance fell well short of its value-oriented peers, and it launched late. The Voodoo 4 was beaten in almost all areas by the GeForce 2 MX, a low-cost board sold mostly as an OEM part for computer manufacturers, and by the Radeon VE.
One unusual trait of the Voodoo 4 and 5 was that the Macintosh versions of these cards had both VGA and DVI output jacks, whereas the PC versions had only the VGA connector. Also, the Mac versions of the Voodoo 4 and 5 had a weakness in that they did not support hardware-based MPEG2 decode acceleration, which hindered the playback of DVDs on a Mac equipped with a Voodoo graphics card.
The Voodoo 5 6000 never made it to market, due to a severe bug resulting in data corruption on the AGP bus on certain boards, and was limited to AGP 2x. It was thus incompatible with the new Pentium 4 motherboards. Only a few more than one thousand units of the graphics card were ever produced. Later tests proved that the Voodoo 5 6000 outperformed not only the GeForce 2 GTS and ATI Radeon 7200, but also the faster GeForce 2 Ultra and Radeon 7500. In some cases it was shown to compete well with the GeForce 3, trading performance places with the card on various tests. However, the prohibitively high production cost of the card, particularly the 4 chip setup, external power supply and 128 MB of VRAM (which would have made it the first consumer card with that amount of memory), would have likely hampered its competitiveness.
== Products ==
1 VGA: Whether the card included a built-in VGA subsystem and ran as a standalone graphics card
2 Texture mapping units : render output units – 3dfx Velocity cards have only one TMU enabled in OpenGL and Glide games, but both TMUs work in DirectX games.
== References ==
== Further reading ==
Hodge, Shayne (July 29, 2013). "3dfx Oral History Panel – Gordon Campbell, Scott Sellers, Ross Q. Smith, and Gary M. Tarolli" (PDF) (Interview). Computer History Museum. Retrieved December 5, 2021.
Gontarczyk, Piotr (March 31, 2016). "Historia 3dfx – firmy, która swoim Voodoo zrewolucjonizowała gry na platformie PC i... upadła". PC Lab. Archived from the original on July 2, 2020.
== External links ==
Official website at the Wayback Machine (archived October 19, 2000)
Greg Ballard discusses some of the reasons for 3dfx's decline, Stanford University, November 2006
Interview with AVOC
4K resolution refers to a horizontal display resolution of approximately 4,000 pixels. Digital television and digital cinematography commonly use several different 4K resolutions. In television and consumer media, 3840 × 2160 (4K UHD) with a 16:9 aspect ratio is the dominant standard, whereas the movie projection industry uses 4096 × 2160 (DCI 4K).
The 4K television market share increased as prices fell dramatically throughout 2013 and 2014.
== 4K standards and terminology ==
The term "4K" is generic and refers to any resolution with a horizontal pixel count of approximately 4,000. Several different 4K resolutions have been standardized by various organizations.
The terms "4K" and "Ultra HD" are used more widely in marketing than "2160p" (cf. "1080p"). While typically referring to motion pictures, some digital camera vendors have used the term "4K photo" for still photographs, making it appear like an especially high resolution even though 3840×2160 pixels equal approximately 8.3 megapixels, which is not considered to be especially high for still photographs.
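The megapixel figure above is plain arithmetic, easily verified:

```python
# 3840 x 2160 expressed in megapixels (one megapixel = 1,000,000 pixels).
width, height = 3840, 2160
pixels = width * height           # 8,294,400 pixels
megapixels = pixels / 1_000_000   # ~8.3 MP, modest for still photography
assert pixels == 8_294_400
assert round(megapixels, 1) == 8.3
```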
=== DCI Digital Cinema System Specification ===
In 2005, Digital Cinema Initiatives (DCI), a prominent standards organization in the cinema industry, published the Digital Cinema System Specification. This specification establishes standardized 2K and 4K container formats for digital cinema production, with resolutions of 2048 × 1080 and 4096 × 2160 respectively. The resolution of the video content inside follows the SMPTE 428-1 standard, which establishes the following resolutions for a 4K distribution:
4096 × 2160 (full frame, 256∶135 or ≈1.90∶1 aspect ratio)
3996 × 2160 (flat crop, 1.85∶1 aspect ratio)
4096 × 1716 (CinemaScope crop, ≈2.39∶1 aspect ratio)
2K distributions can have a frame rate of either 24 or 48 FPS, while 4K distributions must have a frame rate of 24 FPS.
Some articles claim that the terms "2K" and "4K" were coined by DCI and refer exclusively to the 2K and 4K formats defined in the DCI standard. However, usage of these terms in the cinema industry predates the publication of the DCI standard, and they are generally understood as casual terms for any resolution approximately 2000 or 4000 pixels in width, rather than names for specific resolutions.
=== SMPTE UHDTV standard ===
In 2007, the Society of Motion Picture and Television Engineers published SMPTE ST 2036-1, which defines parameters for two UHDTV systems called UHDTV1 and UHDTV2. The standard defines the following characteristics for these systems:
A resolution of 3840 × 2160 (UHDTV1) or 7680 × 4320 (UHDTV2)
Square (1∶1) pixels, for an overall image aspect ratio of 16∶9
A framerate of 23.976, 24, 25, 29.97, 30, 50, 59.94, 60, 100, 119.88, or 120 Hz with progressive scan
RGB, Y′CBCR 4:4:4, 4:2:2, or 4:2:0 pixel encoding
10 bpc (30 bit/px) or 12 bpc (36 bit/px) color depth
Colorimetry characteristics as defined in the standard, including color primaries, quantization parameters, and the electro-optical transfer function. These are the same characteristics later standardized in ITU-R BT.2020. UHDTV1 systems are permitted to use BT.709 color primaries up to 60 Hz.
=== ITU-R UHDTV standard ===
In 2012, the International Telecommunication Union, Radiocommunication Sector published Recommendation ITU-R BT.2020, also known as the Ultra High Definition Television (UHDTV) standard. It adopts the same image parameters defined in SMPTE ST 2036–1.
Although the UHDTV standard does not define any official names for the formats it defines, ITU typically uses the terms "4K", "4K UHD", or "4K UHDTV" to refer to the 3840 × 2160 system in public announcements and press releases ("8K" for the 7680 × 4320 system). In some of ITU's other standards documents, the terms "UHDTV1" and "UHDTV2" are used as shorthand.
=== CEA Ultra HD ===
In October 2012, the Consumer Electronics Association (CEA) announced their definition of the term Ultra High-Definition (or Ultra HD) for use with marketing consumer display devices. CEA defines an Ultra HD product as a TV, monitor, or projector with the following characteristics:
A resolution of 3840 × 2160 or larger
An aspect ratio of 1.77∶1 (16∶9) or wider
Support for color depth of 8 bpc (24 bit/px) or higher
At least one HDMI input capable of supporting 3840 × 2160 at 24, 30, and 60 Hz progressive scan (though not necessarily with RGB / Y′CBCR 4:4:4 color), and HDCP 2.2
Capable of processing images according to the color space defined in ITU-R BT.709
Capable of upscaling HD content (i.e. 720p / 1080p)
The CEA definition does allow manufacturers to use other terms, such as 4K, alongside the Ultra HD logo. Since the resolution in CEA's definition is only a minimum requirement, displays with higher resolutions such as 4096 × 2160 or 5120 × 2880 also qualify as "Ultra HD" displays, provided they meet the other requirements.
=== 2160p resolution ===
Some 4K resolutions, like 3840 × 2160, are often casually referred to as 2160p. This name follows from the previous naming convention used by HDTV and SDTV formats, which refer to a format by the number of pixels/lines along the vertical axis (such as "1080p" for 1920 × 1080 progressive scan, or "480i" for the 480-line interlaced SDTV formats) rather than the horizontal pixel count (≈4000 or "4K" for 3840 × 2160).
The term "2160p" could be applied to any format with a height of 2160 pixels, but it is most commonly used in reference to the 4K UHDTV resolution of 3840 × 2160 due to its association with the well-known 720p and 1080p HDTV formats. Although 3840 × 2160 is both a 4K resolution and a 2160p resolution, these terms cannot always be used interchangeably since not all 4K resolutions are 2160 pixels tall, and not all 2160p resolutions are ≈4000 pixels wide. However, some companies have begun using the term "4K" to describe devices with support for a 2160p resolution, even if it is not close to 4000 pixels wide. For example, many "4K" dash cams only support a resolution of 2880 × 2160 (4∶3); although this is a 2160p resolution, it is not a 4K resolution. Conversely, Samsung released a 5120 × 2160 (64∶27) TV, but marketed it as a "4K" TV despite its 5K-class resolution.
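The distinction drawn above, height-based "2160p" versus width-based "4K", can be expressed as a small classifier. The width tolerance here is an informal assumption, since neither term has a single strict definition:

```python
# Informal sketch: a resolution is "2160p" if it is 2160 pixels tall,
# and "4K-class" if it is roughly 4,000 pixels wide. The +/-10% width
# tolerance is an assumption for illustration, not part of any standard.
def is_2160p(width, height):
    return height == 2160

def is_4k(width, height):
    return abs(width - 4000) / 4000 <= 0.10  # within ~10% of 4,000 wide

# 3840x2160 (4K UHD) is both; the 2880x2160 dash-cam format is 2160p
# but not 4K; Samsung's 5120x2160 panel is 2160p but 5K-class.
assert is_2160p(3840, 2160) and is_4k(3840, 2160)
assert is_2160p(2880, 2160) and not is_4k(2880, 2160)
assert is_2160p(5120, 2160) and not is_4k(5120, 2160)
```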
==== M+ or RGBW TV controversy ====
In 2015, LG Display announced the implementation of a new technology called M+, which adds a white subpixel alongside the regular RGB dots in their IPS panel technology. The media and internet users later dubbed these "RGBW" TVs because of the white subpixel.
Most of the new M+ technology was employed on 4K TV sets, which led to a controversy after tests showed that replacing the traditional RGB structure with an added white subpixel reduced the effective resolution by around 25%. Intertek analyzed the technical aspects of LG M+ TVs and concluded that "the addressable resolution display is 2,880 X 2,160 for each red, green, blue"; in other words, the LG TVs were technically 2.8K, as it became known in the controversy. Although LG Display developed this technology for use in notebook displays, outdoor displays, and smartphones, it is most popular in the TV market due to the marketed 4K UHD resolution, despite being incapable of achieving true 4K UHD resolution as defined by the CTA (3840×2160 active pixels with 8 bits per color). This negatively impacts the rendering of text, making it slightly fuzzier, which is especially noticeable when a TV is used as a PC monitor.
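The roughly 25% figure follows directly from Intertek's reported addressable resolution:

```python
# Addressable pixels per primary on the RGBW panel vs true 4K UHD.
true_4k = 3840 * 2160   # 8,294,400 pixels
rgbw = 2880 * 2160      # 6,220,800 addressable pixels (Intertek figure)
reduction = 1 - rgbw / true_4k
assert round(reduction * 100) == 25   # ~25% fewer addressable pixels
```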
=== CinemaWide 4K ===
In 2019, Sony was granted the CinemaWide trademark by the European Union Intellectual Property Office (EUIPO), in which the trademark covers 'Class 9' electronic devices, including smartphones. According to Sony and SID, the standard defines a CinemaWide 4K product with the following characteristics:
A resolution of 3840 × 1644 or larger
An aspect ratio of 21∶9
Capable of playing back 4K resolution video (2160p) in an aspect ratio of 21∶9
Capable of upscaling non-4K content (i.e. 720p / 1080p)
Sony Xperia smartphones are the most widely known products equipped with a CinemaWide 4K display, such as the Xperia 1, Xperia 1 II, Xperia 1 III, Xperia 1 IV and Xperia 1 V.
== Adoption ==
Video sharing website YouTube and the television industry have adopted 3840 × 2160 as their 4K standard. As of 2014, 4K content from major broadcasters remained limited. By late 2014, 4K content was becoming more widely available online, including on Apple TV, YouTube, Netflix, Hulu, and Amazon Prime Video.
By 2013, some UHDTV models were available to general consumers in the range of US$600. As of 2015, prices on smaller computer and television panels had dropped below US$400.
=== ATSC ===
On March 26, 2013, the Advanced Television Systems Committee announced new proposals of a new standard called ATSC 3.0 which would implement UHD broadcasts at resolutions of up to 3840 × 2160 or 7680 × 4320. The standard would also include framerates of up to 120 Hz, HEVC encoding, wide color gamut, as well as high dynamic range.
=== DVB ===
In 2014, the Digital Video Broadcasting Project released a new set of standards intended to guide the implementation of high resolution content in broadcast television. Dubbed DVB-UHDTV, it establishes two standards, known as UHD-1 (for 4K content) and UHD-2 (for 8K content). These standards use resolutions of 3840 × 2160 and 7680 × 4320 respectively, with framerates of up to 60 Hz, color depth up to 10 bpc (30 bit/px), and HEVC encoding for transmission. DVB is currently focusing on the implementation of the UHD-1 standard.
DVB finalized UHD-1 Phase 2 in 2016, with the introduction of service by broadcasters expected in 2017. UHD-1 Phase 2 adds features such as high dynamic range (using HLG and PQ at 10 or 12 bits), wide color gamut (BT. 2020/2100 colorimetry), and high frame rate (up to 120 Hz).
=== Video streaming ===
As of February 2025, both YouTube and Vimeo support high-resolution video uploads, with maximum resolutions of 4096 × 2304 pixels (approximately 9.4 megapixels) and 4096 × 2160 pixels (approximately 8.8 megapixels), respectively. The growing availability of 4K content across streaming platforms like Netflix, Amazon Prime Video, and YouTube has made it more accessible to consumers. Vimeo's 4K content is currently limited to mostly nature documentaries and tech coverage.
High Efficiency Video Coding (HEVC or H.265) facilitates streaming of 4K content at bitrates between 20 to 30 Mbit/s, offering efficient compression without significant quality loss.
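To put those bitrates in perspective, the data cost of an hour of streaming works out as follows (plain arithmetic; actual stream sizes vary with content and encoder settings):

```python
# Data used by an hour of 4K HEVC streaming at a given bitrate.
def gigabytes_per_hour(mbit_per_s):
    bits = mbit_per_s * 1_000_000 * 3600   # bits transferred in one hour
    return bits / 8 / 1_000_000_000        # -> decimal gigabytes

# The 20-30 Mbit/s range cited above comes to roughly 9-13.5 GB/hour.
assert gigabytes_per_hour(20) == 9.0
assert gigabytes_per_hour(30) == 13.5
```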
In January 2014, Naughty America launched the first adult video service streaming in 4K.
In February 2025, Super Bowl LIX was broadcast in 4K resolution with Dolby Vision HDR and Dolby Atmos sound for the first time. Fox aired the game, and it was also available for free streaming in 4K on Tubi, marking a significant milestone in sports broadcasting.
=== Mobile phone cameras ===
The first mobile phones to be able to record at 2160p (3840 × 2160) were released in late 2013, including the Samsung Galaxy Note 3, which is able to record 2160p at 30 frames per second.
In 2014, the OnePlus One was released with the option to record DCI 4K (4096 × 2160) at 24 frames per second, as were the LG G3 and Samsung Galaxy Note 4, which added optical image stabilization.
In 2015, Apple released the iPhone 6s with a 12-megapixel camera capable of recording 4K at 25 or 30 frames per second.
In 2017 and 2018, mobile phone chipsets reached sufficient processing power that vendors began releasing phones able to record 2160p footage at 60 frames per second for a smoother and more realistic appearance.
=== Personal computers ===
The iMac with Retina Display (2014) is one of the earliest computers to use a 4K widescreen display.
== History ==
In 1984, Hitachi released the ARTC HD63484 graphics processor, which was capable of displaying up to 4K resolution when in monochrome mode. The resolution was targeted at the bit-mapped desktop publishing market. The first commercially available 4K camera for cinematographic purposes was the Dalsa Origin, released in 2003. 4K technology was developed by several research groups in universities around the world, such as University of California, San Diego, CALIT2, Keio University, Naval Postgraduate School and others that realized several demonstrations in venues such as IGrid in 2004 and CineGrid. YouTube began supporting 4K for video uploads in 2010 as a result of leading manufacturers producing 4K cameras. Users could view 4K video by selecting "Original" from the quality settings until December 2013, when the 2160p option appeared in the quality menu. In November 2013, YouTube began to use the VP9 video compression standard, saying that it was more suitable for 4K than High Efficiency Video Coding (HEVC). Google, which owns YouTube, developed VP9.
Theaters began projecting movies at 4K resolution in 2011, though Sony had offered 4K projectors as early as 2004. The first 4K home theater projector was released by Sony in 2012. Despite this, there are not many films finished at 4K resolution as of 2023. Even for movies and TV shows shot on 6K or 8K cameras, almost all finished films are edited in HD resolution and enlarged to fit a 4K format.
Sony is one of the leading studios promoting UHDTV content, as of 2013 offering a little over 70 movie and television titles via digital download to a specialized player that stores and decodes the video. The large files (≈40 GB), distributed through consumer broadband connections, raise concerns about data caps.
In 2014, Netflix began streaming House of Cards, Breaking Bad, and "some nature documentaries" at 4K to compatible televisions with an HEVC decoder. Most 4K televisions sold in 2013 did not natively support HEVC, with most major manufacturers announcing support in 2014. Amazon Studios began shooting their full-length original series and new pilots with 4K resolution in 2014. They are now currently available through Amazon Video.
In March 2016 the first players and discs for Ultra HD Blu-ray—a physical optical disc format supporting 4K resolution and high-dynamic-range video (HDR) at 60 frames per second—were released.
On August 2, 2016, Microsoft released the Xbox One S, which supports 4K streaming and has an Ultra HD Blu-ray disc drive, but does not support 4K gaming. On November 10, 2016, Sony released the PlayStation 4 Pro, which supports 4K streaming and gaming, though many games use checkerboard rendering or are upscaled 4K. On November 7, 2017, Microsoft released the Xbox One X, which supports 4K streaming and gaming, though not all games are rendered at native 4K.
=== Home video projection ===
Though the price of home cinema viewing devices began to drop rapidly from 2013, the digital video projector market saw limited expansion as very few manufacturers had fully 4K-capable lineups. Native 4K projectors remained priced in the five-figure range well into 2015, only falling below US$10,000 later that year. Sony was the sole major manufacturer offering a comprehensive 4K projection solution as of 2015. Critics argue that, at typical direct-view panel sizes and viewing distances, the extra pixels of 4K are unnecessary for normal human vision. In contrast, home cinema projectors use larger screens without necessarily increasing the viewing distance to match the scale. One technique to provide a more affordable 4K experience in home cinema projectors is "e-shift." Developed by some manufacturers, e-shift extrapolates additional pixels from 1080p sources to either upscale to 4K or display 4K from native 4K sources at a much lower price point than native 4K projectors. This technology reached its fourth generation in 2016. JVC applied this technology to create an 8K flight simulation system for Boeing, meeting the visual acuity limits of 20/25.
The first pixel-shifted 4K UHD projectors adopted by the market came from Optoma, BenQ, Dell, and others, using a 2718×1528 pixel structure. The amount of data these projectors process is true 4K, but they overlap the pixels, which is the essence of pixel shifting; each of those pixels is far larger, with 50% more area than a true 4K pixel. These projectors project a pixel, shift it up and to the right by half a pixel diameter, and project it again with modified data, overlaying the second pixel on the first. This would result in adjacent red and green pixels effectively forming yellow, with a red fringe on one side and a green fringe on the other, except that the fringe takes on another color as the next line of pixels overlaps as well. 4K UHD or 1080p pixel shifting cannot reveal the fine detail of a true 4K projector such as those Sony ships to the business, education, and home markets. As of mid-2017, JVC offered one true 4K projector at $35,000 and another at $120,000.
While projecting UHD material, it might seem that the pixel structures would have one quarter the area of 1080p pixels, but this is not achieved with pixel shifting; that much resolution is only delivered by a true 4K projector. This is why "true" 4K projectors cost so much more than 4K UHD projectors with more or less similar feature sets: they produce smaller pixels and finer resolution, with no loss of detail or color from overlapping pixels. A few companies, such as Kaleidescape, offer media servers that enable 4K UHD Blu-ray movies with a wide dynamic range to be played in a home theater.
== Broadcasting ==
In November 2014, American satellite provider DirecTV (owned by AT&T) became the first pay-TV provider to offer access to 4K content, although limited to selected video-on-demand films. In August 2015, British sports network BT Sport launched a 4K feed, with its first broadcast being the 2015 FA Community Shield football match. Two production units were used, producing the traditional broadcast in high-definition, and a separate 4K broadcast. As the network did not want to mix 4K footage with upconverted HD footage, this telecast did not feature traditional studio segments at pre-game or half-time, but those hosted from the stadium by the match commentators using a 4K camera. BT envisioned that if viewers wanted to watch studio analysis, they would switch to the HD broadcast and then back for the game. Footage was compressed using H.264 encoders and transmitted to BT Tower, where it was then transmitted back to BT Sport studios and decompressed for distribution, via 4K-compatible BT TV set-top boxes on an eligible BT Infinity internet plan with at least a 25 Mbit/s connection.
In late 2015 and January 2016, three of Canada's television providers – Quebec-based Vidéotron, Ontario-based Rogers Cable, and Bell Fibe TV – announced that they would begin to offer 4K-compatible set-top boxes that can stream 4K content to subscribers over gigabit internet service. On October 5, 2015, alongside the announcement of its 4K set-top box and gigabit internet, Canadian media conglomerate Rogers Communications announced that it planned to produce 101 sports telecasts in 4K in 2016 via its Sportsnet division, including all Toronto Blue Jays home games and "marquee" National Hockey League games beginning in January 2016. Bell Media announced via its TSN division a slate of 4K telecasts to begin on January 20, 2016, including selected Toronto Raptors games and regional NHL games.
On January 14, 2016, in cooperation with BT Sport, Sportsnet broadcast the first ever NBA game produced in 4K – a Toronto Raptors/Orlando Magic game at O2 Arena in London, England. On January 20, also during a Raptors game, TSN presented the first live 4K telecast produced in North America. Three days later, Sportsnet presented the first NHL game in 4K.
Dome Productions, a joint venture of Bell Media and Rogers Media (the respective owners of TSN and Sportsnet), constructed a "side-by-side" 4K mobile production unit shared by Sportsnet and TSN's first 4K telecasts; it was designed to operate alongside a separate HD truck and utilize cameras capable of output in both formats. For the opening game of the 2016 Toronto Blue Jays season, Dome constructed "Trillium" – a production truck integrating both 4K and 1080i high-definition units. Bell Media's CTV also broadcast the 2016 Juno Awards in 4K as the first awards show presented in the format.
In February 2016, Spanish-language Univision trialed 4K by producing a closed-circuit TV broadcast of a football friendly between the national teams of Mexico and Senegal from Miami in the format. The broadcast was streamed privately to several special viewing locations. Univision aimed to develop a 4K streaming app to publicly televise the final of Copa América Centenario in 4K. In March 2016, DirecTV and CBS Sports announced that they would produce the "Amen Corner" supplemental coverage from the Masters golf tournament in 4K.
In late 2016, Telus TV announced that it would begin to offer 4K-compatible set-top boxes.
After having trialed the technology in limited matches at the 2013 FIFA Confederations Cup, and the 2014 FIFA World Cup (via private tests and public viewings in the host city of Rio de Janeiro), the 2018 FIFA World Cup was the first FIFA World Cup in which all matches were produced in 4K. Host Broadcasting Services stated that at least 75% of the broadcast cut on each match would come from 4K cameras (covering the majority of main angles), with instant replays and some camera angles being upconverted from 1080p sources. These broadcasts were made available from selected rightsholders, such as the BBC in the UK, and selected television providers in the United States.
Technical limitations in distributing 4K broadcasts (including the increased cost of 4K-compatible production equipment) have led to some broadcasters deciding against the format in favour of emphasizing 1080p/HDR broadcasts instead. After having broadcast UEFA Euro and the Champions League final in the format, UEFA discontinued 4K coverage for both in 2024, as broadcasters elected to put resources behind HDR and other on-air features instead. Some U.S. broadcasters, such as CBS Sports, Fox Sports, and USA Network have broadcast events promoted as having "4K" feeds, but are actually 1080p/HDR broadcasts upconverted to 4K. For the 2024 Summer Olympics, USA Network's "4K" coverage was sourced from host broadcaster Olympic Broadcasting Services (OBS) in 4K, but downconverted to 1080p when received by NBC Sports' studios, and then upconverted to 4K for distribution.
== Resolutions ==
=== 3840 × 2160 ===
The resolution of 3840 × 2160 is the dominant 4K resolution in the consumer media and display industries. This is the resolution of the UHDTV1 format defined in SMPTE ST 2036–1, as well as the 4K UHDTV format defined by ITU-R in Rec. 2020, and is also the minimum resolution for CEA's definition of Ultra HD displays and projectors. The resolution of 3840 × 2160 was also chosen by the DVB project for their 4K broadcasting standard, UHD-1.
This resolution has an aspect ratio of 16∶9, with 8,294,400 total pixels. It is exactly double the horizontal and vertical resolution of 1080p (1920 × 1080) for a total of 4 times as many pixels, and triple the horizontal and vertical resolution of 720p (1280 × 720) for a total of 9 times as many pixels. It is sometimes referred to as "2160p", based on the naming patterns established by the previous 720p and 1080p HDTV standards.
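The pixel counts and ratios quoted above can be verified with a few lines of arithmetic; a minimal Python check:

```python
# Pixel-count arithmetic behind the "4x Full HD" claim for 3840 x 2160.
def pixels(w, h):
    return w * h

uhd = pixels(3840, 2160)  # 8,294,400
fhd = pixels(1920, 1080)  # 2,073,600
hd  = pixels(1280, 720)   #   921,600

assert uhd == 8_294_400
assert uhd == 4 * fhd  # double width and height -> 4x the pixels
assert uhd == 9 * hd   # triple width and height -> 9x the pixels
```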
In 2013, televisions capable of displaying UHD resolutions were seen by consumer electronics companies as the next trigger for an upgrade cycle after a lack of consumer interest in 3D television.
=== 4096 × 2160 ===
This resolution is used mainly in digital cinema production, and has a total of 8,847,360 pixels with an aspect ratio of 256∶135 (≈19∶10). It was standardized as the resolution of the 4K container format defined by Digital Cinema Initiatives in the Digital Cinema System specification, and is the native resolution of all DCI-compliant 4K digital projectors and monitors. The DCI specification allows several different resolutions for the content inside the container, depending on the desired aspect ratio. The allowed resolutions are defined in SMPTE 428-1 (§3.2.1, p. 6):
4096 × 2160 (full frame, 256∶135 or ≈1.90∶1 aspect ratio)
3996 × 2160 (flat crop, 1.85∶1 aspect ratio)
4096 × 1716 (CinemaScope crop, ≈2.39∶1 aspect ratio)
The DCI 4K standard has twice the horizontal and vertical resolution of DCI 2K (2048 × 1080), with four times as many pixels overall.
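The container and crop figures above can be checked directly; a short Python sketch (the dictionary and its keys are illustrative):

```python
from fractions import Fraction

# SMPTE 428-1 image structures inside the DCI 4K container.
containers = {
    "full frame": (4096, 2160),  # 256:135, ~1.90:1
    "flat":       (3996, 2160),  # 1.85:1
    "scope":      (4096, 1716),  # ~2.39:1
}

full = 4096 * 2160
assert full == 8_847_360
assert Fraction(4096, 2160) == Fraction(256, 135)

# DCI 4K doubles DCI 2K (2048 x 1080) in each dimension: 4x the pixels.
assert full == 4 * (2048 * 1080)

for name, (w, h) in containers.items():
    print(f"{name}: {w}x{h} ~ {w / h:.2f}:1")
```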
Digital movies made in 4K may be produced, scanned, or stored in a number of other resolutions depending on what storage aspect ratio is used. In the digital cinema production chain, a resolution of 4096 × 3112 is often used for acquiring "open gate" or anamorphic input material, a resolution based on the historical resolution of scanned Super 35 mm film.
=== Other 4K resolutions ===
Various other non-standardized 4K resolutions have been used in displays, including:
4096 × 2560 (1.60:1 or 16:10); this resolution was used in the Canon DP-V3010, a 30-inch (76 cm) 4K reference monitor designed for reviewing cinema footage in post-production, released in 2013.
4096 × 2304 (1.77:1 or 16:9); this resolution was used in the 21.5-inch (55 cm) LG UltraFine 22MD4KA 4K monitor, jointly announced by LG and Apple in 2016 and used in the 21.5" 4K Retina iMac computer.
3840 × 2400 (1.60:1 or 16:10); this resolution was used in the 22.2-inch (56 cm) IBM T220 and T221 monitors, released in 2001 and 2002 respectively. This resolution is also referred to as "WQUXGA", and is four times the resolution of WUXGA (1920 × 1200). More recently, this resolution has returned in the Dell XPS Laptop series, under the name "UHD+".
3840 × 1920 (2:1 or 16:8); this resolution is largely used for 360° videos, which mostly use a 2:1 aspect ratio in order to represent a 360° field of view on the horizontal axis and 180° on the vertical.
3840 × 1600 (2.40:1 or 12:5); a number of computer monitors with this resolution have been produced, the first being the 37.5-inch (95 cm) LG 38UC99-W released in 2016. This resolution is equivalent to WQXGA (2560 × 1600) extended in width by 50%, or 3840 × 2160 reduced in height by ≈26%. LG refers to this resolution as "WQHD+" (Wide Quad HD+), while Acer uses the term "UW-QHD+" (Ultra-wide Quad HD+) and some media outlets have used the term "UW4K" (Ultra-wide 4K).
3840 × 1080 (3.55:1 or 32:9); this resolution was first used in the Samsung C49HG70, a 49-inch (120 cm) curved gaming monitor released in 2017. This resolution is equivalent to dual 1080p displays (1920 × 1080) side-by-side, but with no border interrupting the image. It is also exactly one half of a 4K UHD (3840 × 2160) display. Samsung refers to this resolution as "DFHD" (Dual Full HD).
== Recording ==
=== Detail benefit ===
The main advantage of recording video at the 4K standard is that fine spatial detail is resolved well. Individual still frames extracted from 3840×2160-pixel video footage can serve as 8.3-megapixel still photographs, compared with 2.1 megapixels for 1080p and 0.9 megapixels for 720p footage. If the final video resolution is reduced to 2K from a 4K recording, more detail is apparent than would have been achieved from a native 2K recording. Increased fineness and contrast are then possible with output to DVD and Blu-ray. Some cinematographers record at 4K with the Super 35 film format to offset any resolution loss that may occur during video processing.
=== Chroma subsampling ===
Many consumer electronics devices, such as mobile phones, store video footage in Y′CbCr format with 4:2:0 chroma subsampling, which records color information at only one quarter of the resolution of the brightness information. For 3840 × 2160 video, this means that the color information is only stored at 1920 × 1080.
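The effect of 4:2:0 subsampling on plane sizes can be sketched as follows (the function name is illustrative):

```python
# 4:2:0 chroma subsampling: one Cb and one Cr sample per 2x2 luma block.
def plane_sizes_420(width, height):
    luma = width * height
    chroma = (width // 2) * (height // 2)  # per chroma plane (Cb or Cr)
    return luma, chroma

luma, chroma = plane_sizes_420(3840, 2160)
assert (3840 // 2, 2160 // 2) == (1920, 1080)  # colour stored at 1920 x 1080
assert chroma == luma // 4                     # quarter the samples per plane
# Total samples: Y + Cb + Cr = 1.5x the luma count, versus 3x for 4:4:4.
assert luma + 2 * chroma == luma * 3 // 2
```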
=== Bit rates ===
Consumer cameras and mobile phones record 2160p footage at much higher bit rates (usually 50 to 100 Mbit/s) than 1080p (usually 10 to 30 Mbit/s). This higher bit rate reduces the visibility of compression artifacts, even if viewed on monitors with a lower resolution than 2160p.
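As a rough illustration, the bit rates quoted above can be converted to bits per pixel; the figures below are example values taken from the ranges in the text, not measurements:

```python
# Rough bits-per-pixel comparison at 30 frames per second (illustrative
# figures; real encoders vary widely with codec and content).
def bits_per_pixel(bitrate_mbps, width, height, fps=30):
    return bitrate_mbps * 1_000_000 / (width * height * fps)

bpp_2160p = bits_per_pixel(100, 3840, 2160)  # high end of consumer 2160p
bpp_1080p = bits_per_pixel(10, 1920, 1080)   # low end of consumer 1080p

# 2160p has 4x the pixels, but at these rates it also gets more bits per
# pixel, which is why compression artifacts are less visible.
assert bpp_2160p > bpp_1080p
```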
== See also ==
1080p Full HD – digital video format with a resolution of 1920 × 1080, with vertical resolution of 1080 lines
1440p (WQHD) – vertical resolution of 1440 lines
List of 4K video recording devices
2K resolution – digital video formats with a horizontal resolution of around 2,000 pixels
5K resolution – digital video formats with a horizontal resolution of around 5,000 pixels, aimed at non-television computer monitor usage
8K resolution – digital video formats with a horizontal resolution of around 8,000 pixels
10K resolution – digital video formats with a horizontal resolution of around 10,000 pixels
16K resolution – experimental VR format
32K resolution
Aspect ratio (image) – proportional relationship between an image's width and height
Digital cinema
Display resolution standards
High Efficiency Video Coding (HEVC) – video standard that supports 4K & 8K UHDTV and resolutions up to 8192 × 4320
Rec. 2020 – ITU-R recommendation for UHDTV, defining formats with resolutions of 4K (3840 × 2160) and 8K (7680 × 4320)
Ultrawide formats
== References ==
== External links ==
=== Articles ===
"3D TV is Dead, Long Live 4K", Forbes, Jan 10, 2013
Gurule, Donn, 4k and 8k Production Workflows Become More Mainstream, Light beam, archived from the original on 2013-02-16, retrieved 2013-01-29
What is the meaning of UHDTV and its difference to HDTV?, UHDMI, archived from the original on 2013-02-05, retrieved 2014-09-10
"Ultra high resolution television (UHDV) prototype", CD Freaks, archived from the original on 2008-11-18, retrieved 2013-01-29
"Just Like High-Definition TV, but With Higher Definition", The New York Times, Jun 3, 2004
"Japan demonstrates next-gen TV Broadcast", Electronic Engineering Times, archived from the original on 2013-05-01, retrieved 2013-01-29.
"Researchers craft HDTV's successor", PC World, archived from the original on 2008-06-04, retrieved 2013-01-29
Sugawara, Masayuki (2008), Super Hi-Vision—research on a future ultra-HDTV system (PDF) (technical review), CH: EBU, archived from the original (PDF) on 2009-03-26, retrieved 2013-01-29
Ball, Christopher Lee (Oct 2008), "Farewell to the Kingdom of Shadows: A filmmaker's first impression of Super Hi-Vision television", Musings, archived from the original on 2013-03-23, retrieved 2013-01-29
"Visual comparison of the different 4K resolutions", 4k TV, archived from the original on 2014-08-10, retrieved 2014-08-08
"Why Ultra HD 4K TVs are still stupid", CNet, 2015 follow-up article: "Why 4K TVs aren't stupid (anymore)", CNet
=== Official sites of NHK ===
Super Hi-Vision, JP: NHK, archived from the original on 2010-10-06, retrieved 2013-01-29.
Science & Technical Research Laboratories, JP: NHK.
Super Hi-Vision research (annual report), JP: NHK STRL, 2009, archived from the original on 2012-10-18, retrieved 2013-01-29.
=== Video ===
"4K resolution video test sequences for Research", Ultra video, FI: TUT. | Wikipedia/4K_resolution |
A fitness function is a particular type of objective or cost function that is used to summarize, as a single figure of merit, how close a given candidate solution is to achieving the set aims. It is an important component of evolutionary algorithms (EA), such as genetic programming, evolution strategies or genetic algorithms. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. For this purpose, many candidate solutions are generated, which are evaluated using a fitness function in order to guide the evolutionary development towards the desired goal. Similar quality functions are also used in other metaheuristics, such as ant colony optimization or particle swarm optimization.
In the field of EAs, each candidate solution, also called an individual, is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing or simulation, the idea is to delete the n worst individuals and to breed n new ones from the best solutions. Each individual must therefore be assigned a quality number indicating how close it has come to the overall specification, and this is generated by applying the fitness function to the test or simulation results obtained from that candidate solution.
Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases. Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome. In the following, it is assumed that the fitness is determined based on an evaluation that remains unchanged during an optimization run.
A fitness function does not necessarily have to be able to calculate an absolute value, as it is sometimes sufficient to compare candidates in order to select the better one. A relative indication of fitness (candidate a is better than b) is sufficient in some cases, such as tournament selection or Pareto optimization.
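Tournament selection indeed needs only a relative comparison, not an absolute fitness value; a minimal sketch (the function names and the toy "closeness to a target" comparison are illustrative, not a standard API):

```python
import random

# Tournament selection driven purely by a relative predicate better(a, b).
def tournament_select(population, better, k=2, rng=random):
    contestants = rng.sample(population, k)
    winner = contestants[0]
    for c in contestants[1:]:
        if better(c, winner):
            winner = c
    return winner

# Example: candidates compared only by which is closer to a target value.
target = 42
better = lambda a, b: abs(a - target) < abs(b - target)
pop = [1, 10, 40, 44, 90]
winner = tournament_select(pop, better, k=len(pop))
assert winner in (40, 44)  # with k = population size, a closest value wins
```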
== Requirements of evaluation and fitness function ==
The quality of the evaluation and calculation of a fitness function is fundamental to the success of an EA optimisation. It implements Darwin's principle of "survival of the fittest". Without fitness-based selection mechanisms for mate selection and offspring acceptance, EA search would be blind and hardly distinguishable from the Monte Carlo method. When setting up a fitness function, one must always be aware that it is about more than just describing the desired target state. Rather, the evolutionary search on the way to the optimum should also be supported as much as possible (see also section on auxiliary objectives), if and insofar as this is not already done by the fitness function alone. If the fitness function is designed badly, the algorithm will either converge on an inappropriate solution, or will have difficulty converging at all.
Defining the fitness function is not straightforward in many cases and is often performed iteratively if the fittest solutions produced by an EA are not what is desired. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents, which are normally humans.
== Computational efficiency ==
The fitness function should not only closely align with the designer's goal, but also be computationally efficient. Execution speed is crucial, as a typical evolutionary algorithm must be iterated many times in order to produce a usable result for a non-trivial problem.
Fitness approximation may be appropriate, especially in the following cases:
Fitness computation time of a single solution is extremely high
Precise model for fitness computation is missing
The fitness function is uncertain or noisy.
Alternatively or also in addition to the fitness approximation, the fitness calculations can also be distributed to a parallel computer in order to reduce the execution times. Depending on the population model of the EA used, both the EA itself and the fitness calculations of all offspring of one generation can be executed in parallel.
== Multi-objective optimization ==
Practical applications usually aim at optimizing multiple and at least partially conflicting objectives. Two fundamentally different approaches are often used for this purpose, Pareto optimization and optimization based on fitness calculated using the weighted sum.
=== Weighted sum and penalty functions ===
When optimizing with the weighted sum, the single values of the O objectives are first normalized so that they can be compared. This can be done with the help of costs or by specifying target values and determining the current value as the degree of fulfillment. Costs or degrees of fulfillment can then be compared with each other and, if required, can also be mapped to a uniform fitness scale. Without loss of generality, fitness is assumed to represent a value to be maximized. Each objective o_i is assigned a weight w_i in the form of a percentage value so that the overall raw fitness f_raw can be calculated as a weighted sum:

f_{raw} = \sum_{i=1}^{O} o_i \cdot w_i \quad \text{with} \quad \sum_{i=1}^{O} w_i = 1
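The weighted sum is straightforward to implement; a minimal sketch for already-normalized objective values (function name illustrative):

```python
# Weighted-sum raw fitness for normalized objective values o_i
# (each in [0, 1], higher is better); the weights must sum to 1.
def raw_fitness(objectives, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(o * w for o, w in zip(objectives, weights))

f_raw = raw_fitness([0.8, 0.5, 1.0], [0.5, 0.3, 0.2])
assert abs(f_raw - 0.75) < 1e-9  # 0.4 + 0.15 + 0.2
```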
A violation of R restrictions r_j can be included in the fitness determined in this way in the form of penalty functions. For this purpose, a function pf_j(r_j) can be defined for each restriction which returns a value between 0 and 1 depending on the degree of violation, with the result being 1 if there is no violation. The previously determined raw fitness is multiplied by the penalty function(s) and the result is the final fitness f_final:

f_{final} = f_{raw} \cdot \prod_{j=1}^{R} pf_j(r_j) = \sum_{i=1}^{O} (o_i \cdot w_i) \cdot \prod_{j=1}^{R} pf_j(r_j)
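The multiplicative penalty scheme described above can be sketched in a few lines (function name illustrative):

```python
import math

# Final fitness: raw weighted sum scaled by penalty factors pf_j in [0, 1],
# where 1 means the j-th restriction is not violated at all.
def final_fitness(objectives, weights, penalties):
    f_raw = sum(o * w for o, w in zip(objectives, weights))
    return f_raw * math.prod(penalties)

# No violations: all penalties are 1, so f_final equals f_raw.
assert abs(final_fitness([0.8, 0.5], [0.5, 0.5], [1.0, 1.0]) - 0.65) < 1e-9
# A 50% penalty on one restriction halves the fitness.
assert abs(final_fitness([0.8, 0.5], [0.5, 0.5], [0.5, 1.0]) - 0.325) < 1e-9
```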
This approach is simple and has the advantage of being able to combine any number of objectives and restrictions. The disadvantage is that different objectives can compensate each other and that the weights have to be defined before the optimization. This means that the compromise lines must be defined before optimization, which is why optimization with the weighted sum is also referred to as the a priori method. In addition, certain solutions may not be obtained, see the section on the comparison of both types of optimization.
=== Pareto optimization ===
A solution is called Pareto-optimal if the improvement of one objective is only possible with a deterioration of at least one other objective. The set of all Pareto-optimal solutions, also called the Pareto set, represents the set of all optimal compromises between the objectives. The figure below on the right shows an example of the Pareto set of two objectives f_1 and f_2 to be maximized. The elements of the set form the Pareto front (green line). From this set, a human decision maker must subsequently select the desired compromise solution. Constraints are included in Pareto optimization in that solutions without constraint violations are per se better than those with violations. If two solutions to be compared each have constraint violations, the respective extent of the violations decides.
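The Pareto-optimality definition above translates directly into a dominance test; a minimal sketch for maximization (names illustrative, and quadratic in the number of points):

```python
# a dominates b if it is at least as good in every objective
# and strictly better in at least one (maximization).
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

points = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)]
front = pareto_front(points)
assert set(front) == {(1, 5), (2, 4), (3, 3), (4, 1)}  # (2, 2) is dominated
```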
It was recognized early on that EAs with their simultaneously considered solution set are well suited to finding solutions in one run that cover the Pareto front sufficiently well. They are therefore well suited as a-posteriori methods for multi-objective optimization, in which the final decision is made by a human decision maker after optimization and determination of the Pareto front. Besides the SPEA2, the NSGA-II and NSGA-III have established themselves as standard methods.
The advantage of Pareto optimization is that, in contrast to the weighted sum, it provides all alternatives that are equivalent in terms of the objectives as an overall solution. The disadvantage is that a visualization of the alternatives becomes problematic or even impossible from four objectives onward. Furthermore, the effort increases exponentially with the number of objectives. If there are more than three or four objectives, some have to be combined using the weighted sum or other aggregation methods.
=== Comparison of both types of assessment ===
With the help of the weighted sum, the total Pareto front can be obtained by a suitable choice of weights, provided that it is convex. This is illustrated by the adjacent picture on the left. The point P on the green Pareto front is reached by the weights w_1 and w_2, provided that the EA converges to the optimum. The direction with the largest fitness gain in the solution set Z is shown by the drawn arrows.
In the case of a non-convex front, however, non-convex front sections are not reachable by the weighted sum. In the adjacent image on the right, this is the section between points A and B. This can be remedied to a limited extent by using an extension of the weighted sum, the cascaded weighted sum.
Comparing both assessment approaches, the use of Pareto optimization is certainly advantageous when little is known about the possible solutions of a task and when the number of optimization objectives can be narrowed down to three, at most four. However, in the case of repeated optimization of variations of one and the same task, the desired lines of compromise are usually known and the effort to determine the entire Pareto front is no longer justified. This is also true when no human decision is desired or possible after optimization, such as in automated decision processes.
== Auxiliary objectives ==
In addition to the primary objectives resulting from the task itself, it may be necessary to include auxiliary objectives in the assessment to support the achievement of one or more primary objectives. An example of a scheduling task is used for illustration purposes. The optimization goals include not only a general fast processing of all orders but also compliance with a latest completion time. The latter is especially necessary for the scheduling of rush orders. The second goal is not achieved by the exemplary initial schedule, as shown in the adjacent figure. A following mutation does not change this, but schedules the work step d earlier, which is a necessary intermediate step for an earlier start of the last work step e of the order. As long as only the latest completion time is evaluated, however, the fitness of the mutated schedule remains unchanged, even though it represents a relevant step towards the objective of a timely completion of the order. This can be remedied, for example, by an additional evaluation of the delay of work steps. The new objective is an auxiliary one, since it was introduced in addition to the actual optimization objectives to support their achievement. A more detailed description of this approach and another example can be found in the literature.
== See also ==
Evolutionary computation
Inferential programming
Test functions for optimization
Loss function
== External links ==
A Nice Introduction to Adaptive Fuzzy Fitness Granulation (AFFG) (PDF), A promising approach to accelerate the convergence rate of EAs.
The cyber shack of Adaptive Fuzzy Fitness Granulation (AFFG) That is designed to accelerate the convergence rate of EAs.
Fitness functions in evolutionary robotics: A survey and analysis (AFFG) (PDF), A review of fitness functions used in evolutionary robotics.
Ford, Neal; Richards, Mark; Sadalage, Pramod; Dehghani, Zhamak (2021). Software Architecture: The Hard Parts. O'Reilly Media, Inc. ISBN 9781492086895.
== References == | Wikipedia/Fitness_(genetic_algorithm) |
In the field of 3D computer graphics, the unified shader model (known in Direct3D 10 as "Shader Model 4.0") refers to a form of shader hardware in a graphical processing unit (GPU) where all of the shader stages in the rendering pipeline (geometry, vertex, pixel, etc.) have the same capabilities. They can all read textures and buffers, and they use instruction sets that are almost identical.
== History ==
Earlier GPUs generally included two types of shader hardware, with the vertex shaders having considerably more instructions than the simpler pixel shaders. This lowered the cost of implementation of the GPU as a whole, and allowed more shaders in total on a single unit. This was at the cost of making the system less flexible, and sometimes leaving one set of shaders idle if the workload used one more than the other. As improvements in fabrication continued, this distinction became less useful. ATI Technologies introduced a unified architecture on the hardware they developed for the Xbox 360. Nvidia quickly followed with their Tesla design. AMD introduced a unified shader in card form two years later in the TeraScale line. The concept has been universal since then.
Early shader abstractions (such as Shader Model 1.x) used very different instruction sets for vertex and pixel shaders, with vertex shaders having a much more flexible instruction set. Later shader models (such as Shader Model 2.x and 3.0) reduced the differences, approaching a unified shader model. Even in the unified model, the instruction set may not be completely the same between different shader types; different shader stages may have a few distinctions. Fragment/pixel shaders can compute implicit texture coordinate gradients, while geometry shaders can emit rendering primitives.
== Unified shader architecture ==
Unified shader architecture (or unified shading architecture) is a hardware design by which all shader processing units of a piece of graphics hardware are capable of handling any type of shading task. Most often, unified shading architecture hardware is composed of an array of computing units and some form of dynamic scheduling/load-balancing system that ensures that all of the computational units are kept working as often as possible.
Unified shader architecture allows more flexible use of the graphics rendering hardware. For example, in a situation with a heavy geometry workload the system could allocate most computing units to run vertex and geometry shaders. In cases with less vertex workload and heavy pixel load, more computing units could be allocated to run pixel shaders.
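As a purely illustrative sketch (no real GPU scheduler works at this level of abstraction, and all names here are invented), the proportional allocation described above could look like:

```python
# Toy load balancer: split a pool of unified shader units between vertex
# and pixel work in proportion to the pending workload, keeping at least
# one unit per non-empty queue.
def allocate_units(total_units, vertex_work, pixel_work):
    work = vertex_work + pixel_work
    if work == 0:
        return 0, 0
    v = round(total_units * vertex_work / work)
    lo = 1 if vertex_work else 0
    hi = total_units - (1 if pixel_work else 0)
    v = min(max(v, lo), hi)
    return v, total_units - v

# Geometry-heavy frame: most units go to vertex shading.
v, p = allocate_units(128, vertex_work=900, pixel_work=100)
assert v > p and v + p == 128
# Pixel-heavy frame: the allocation flips.
v, p = allocate_units(128, vertex_work=100, pixel_work=900)
assert p > v and v + p == 128
```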
While unified shader architecture hardware and unified shader model programming interfaces are not a requirement for each other, a unified architecture is most sensible when designing hardware intended to support an API offering a unified shader model.
OpenGL 3.3 (which offers a unified shader model) can still be implemented on hardware that does not have a unified shader architecture. Similarly, hardware that supported non-unified shader model APIs could be based on a unified shader architecture, as is the case with the Xenos graphics chip in the Xbox 360, for example.
The unified shader architecture was introduced with the Nvidia GeForce 8 series, ATI Radeon HD 2000 series, S3 Chrome 400, Intel GMA X3000 series, Xbox 360's GPU, Qualcomm Adreno 200 series, Mali Midgard, PowerVR SGX GPUs and is used in all subsequent series.
For example, the unified shader is referred to as a "CUDA core" or "shader core" on Nvidia GPUs, and as an "ALU core" on Intel GPUs.
== Architectures ==
Nvidia
Tesla
Fermi
Kepler
Maxwell
Pascal
Volta
Turing
Ampere
Ada Lovelace
Blackwell
Intel
Intel Arc
ATI/AMD
TeraScale
Graphics Core Next
RDNA
CDNA
== References == | Wikipedia/Unified_Shader_Model |
A memory controller, also known as memory chip controller (MCC) or a memory controller unit (MCU), is a digital circuit that manages the flow of data going to and from a computer's main memory. When a memory controller is integrated into another chip, such as an integral part of a microprocessor, it is usually called an integrated memory controller (IMC).
Memory controllers contain the logic necessary to read and write to dynamic random-access memory (DRAM), and to provide the critical memory refresh and other functions. Reading and writing to DRAM is performed by selecting the row and column data addresses of the DRAM as the inputs to the multiplexer circuit, where the demultiplexer on the DRAM uses the converted inputs to select the correct memory location and return the data, which is then passed back through a multiplexer to consolidate the data in order to reduce the required bus width for the operation. Memory controllers' bus widths range from 8-bit in earlier systems, to 512-bit in more complicated systems, where they are typically implemented as four 64-bit simultaneous memory controllers operating in parallel, though some operate with two 64-bit memory controllers being used to access a 128-bit memory device.
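As an illustration of the row/column multiplexing described above, the following sketch splits a linear address into DRAM row and column fields and joins them back (the field widths are assumptions for the example, not those of any specific DRAM part):

```python
# Illustrative split of a linear address into DRAM row and column
# addresses, as a memory controller multiplexes them onto shared pins.
ROW_BITS, COL_BITS = 14, 10  # assumed widths: 16384 rows x 1024 columns

def split_address(addr):
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    return row, col

def join_address(row, col):
    return (row << COL_BITS) | col

addr = 0x00ABCDE
row, col = split_address(addr)
assert join_address(row, col) == addr  # round-trips within 24 bits
```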
Some memory controllers, such as the one integrated into PowerQUICC II processors, include error detection and correction hardware. Many modern processors also integrate a memory management unit (MMU), which in many operating systems implements virtual addressing. On early x86-32 processors, the MMU was integrated in the CPU, but the memory controller was usually part of the northbridge.
== History ==
Older Intel and PowerPC-based computers have memory controller chips that are separate from the main processor. Often these are integrated into the northbridge of the computer, also sometimes called a memory controller hub.
Most modern desktop or workstation microprocessors use an integrated memory controller (IMC), including microprocessors from Intel, AMD, and those built around the ARM architecture. Prior to K8 (circa 2003), AMD microprocessors had a memory controller implemented on their motherboard's northbridge. In K8 and later, AMD employed an integrated memory controller. Likewise, until Nehalem (circa 2008), Intel microprocessors used memory controllers implemented on the motherboard's northbridge. Nehalem and later switched to an integrated memory controller. Other examples of microprocessor architectures that use integrated memory controllers include NVIDIA's Fermi, IBM's POWER5, and Sun Microsystems's UltraSPARC T1.
While an integrated memory controller has the potential to increase the system's performance, such as by reducing memory latency, it locks the microprocessor to a specific type (or types) of memory, forcing a redesign in order to support newer memory technologies. When DDR2 SDRAM was introduced, AMD released new Athlon 64 CPUs. These new models, with a DDR2 controller, use a different physical socket (known as Socket AM2), so that they will only fit in motherboards designed for the new type of RAM. When the memory controller is not on-die, the same CPU may be installed on a new motherboard, with an updated northbridge to use newer memory.
Some microprocessors in the 1990s, such as the DEC Alpha 21066 and HP PA-7300LC, had integrated memory controllers; however, rather than for performance gains, this was implemented to reduce the cost of systems by eliminating the need for an external memory controller.
Some CPUs are designed to have their memory controllers as dedicated external components that are not part of the chipset. An example is IBM POWER8, which uses external Centaur chips that are mounted onto DIMM modules and act as memory buffers, L4 cache chips, and as the actual memory controllers. The first version of the Centaur chip used DDR3 memory but an updated version was later released which can use DDR4.
== Security ==
A few experimental memory controllers contain a second level of address translation, in addition to the first level of address translation performed by the CPU's memory management unit to improve cache and bus performance.
Memory controllers integrated into certain Intel Core processors provide memory scrambling as a feature that turns user data written to the main memory into pseudo-random patterns. Memory scrambling has the potential to prevent forensic and reverse-engineering analysis based on DRAM data remanence by effectively rendering various types of cold boot attacks ineffective. In current practice, this has not been achieved; memory scrambling has only been designed to address DRAM-related electrical problems. The memory scrambling standards of the late 2010s do not address security issues and are neither cryptographically secure nor open to public revision or analysis.
ASUS and Intel have their separate memory scrambling standards. ASUS motherboards have allowed the user to choose which memory scrambling standard to use (ASUS or Intel) or whether to turn the feature off entirely.
== Variants ==
=== Double data rate memory ===
Double data rate (DDR) memory controllers are used to drive DDR SDRAM, where data is transferred on both rising and falling edges of the system's memory clock. DDR memory controllers are significantly more complicated when compared to single data rate controllers, but they allow for twice the data to be transferred without increasing the memory's clock rate or bus width.
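The "twice the data at the same clock rate" claim is easy to make concrete. A back-of-the-envelope sketch (function name ours):

```python
def ddr_bandwidth_bytes_per_s(mem_clock_hz: float, bus_width_bits: int) -> float:
    """Peak transfer rate of a DDR interface: data moves on both the
    rising and falling clock edges, so there are 2 transfers per clock."""
    return mem_clock_hz * 2 * bus_width_bits / 8

# e.g. DDR400: 200 MHz memory clock, 64-bit bus -> 3.2 GB/s peak
print(ddr_bandwidth_bytes_per_s(200e6, 64))  # 3200000000.0
```

A single-data-rate controller at the same clock and bus width would reach only half this figure.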
=== Multichannel memory ===
Multichannel memory controllers are memory controllers where the DRAM devices are separated onto multiple buses to allow the memory controller(s) to access them in parallel. This increases the theoretical amount of bandwidth of the bus by a factor of the number of channels. While a channel for every DRAM would be the ideal solution, adding more channels increases complexity and cost.
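One common way controllers exploit multiple channels is to interleave consecutive cache lines across them, so a streaming access pattern keeps every channel busy in parallel. A hypothetical sketch (the constants and mapping are ours, for illustration only; real controllers use more elaborate hash functions):

```python
N_CHANNELS = 2   # hypothetical dual-channel configuration
LINE_BYTES = 64  # one cache line per interleave unit

def channel_of(addr: int) -> int:
    """Map a physical address to a channel: consecutive 64-byte lines
    alternate between channels."""
    return (addr // LINE_BYTES) % N_CHANNELS

# Lines at 0x00, 0x40, 0x80, 0xC0 land on channels 0, 1, 0, 1.
```

With this layout, a sequential sweep of memory draws on both channels at once, approaching the theoretical bandwidth multiplier noted above.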
=== Fully buffered memory ===
Fully buffered memory systems place a memory buffer device on every memory module (called an FB-DIMM when fully buffered RAM is used), which unlike traditional memory controller devices, use a serial data link to the memory controller instead of the parallel link used in previous RAM designs. This decreases the number of wires necessary to place the memory devices on a motherboard (allowing for a smaller number of layers to be used, meaning more memory devices can be placed on a single board), at the expense of increasing latency (the time necessary to access a memory location). This increase is due to the time required to convert the parallel information read from the DRAM cell to the serial format used by the FB-DIMM controller, and back to a parallel form in the memory controller on the motherboard.
=== Flash memory controller ===
Many flash memory devices, such as USB flash drives and solid-state drives, include a flash memory controller. Flash memory is inherently slower to access than RAM and often becomes unusable after a few million write cycles, which generally makes it unsuitable for RAM applications.
== See also ==
Address generation unit
Memory scrubbing
Storage controller
== References ==
== External links ==
Infineon/Kingston (a memory vendor) Dual Channel DDR Memory Whitepaper at the Wayback Machine (archived 2011-09-29) – explains dual channel memory controllers, and how to use them
Introduction to Memory Controller
Intel guide on Single- and Multichannel Memory Modes
What is a Memory Controller and How Does it Work
What is Memory Controller?
Memory Controllers:History and How it Work [sic]
Flash Memory: Types and Development History | Wikipedia/Memory_controller |
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. Any cut determines a cut-set, the set of edges that have one endpoint in each subset of the partition. These edges are said to cross the cut. In a connected graph, each cut-set determines a unique cut, and in some cases cuts are identified with their cut-sets rather than with their vertex partitions.
In a flow network, an s–t cut is a cut that requires the source and the sink to be in different subsets, and its cut-set only consists of edges going from the source's side to the sink's side. The capacity of an s–t cut is defined as the sum of the capacity of each edge in the cut-set.
== Definition ==
A cut C = (S, T) is a partition of V of a graph G = (V, E) into two subsets S and T.
The cut-set of a cut C = (S, T) is the set {(u, v) ∈ E | u ∈ S, v ∈ T} of edges that have one endpoint in S and the other endpoint in T.
If s and t are specified vertices of the graph G, then an s–t cut is a cut in which s belongs to the set S and t belongs to the set T.
In an unweighted undirected graph, the size or weight of a cut is the number of edges crossing the cut. In a weighted graph, the value or weight is defined by the sum of the weights of the edges crossing the cut.
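The definitions above can be sketched directly for the undirected weighted case (the representation, a dict from frozenset edges to weights, and the names are ours):

```python
def cut_set(edges, S):
    """Edges with exactly one endpoint in S (they 'cross' the cut)."""
    return {e for e in edges if len(set(e) & S) == 1}

def cut_weight(edges, S):
    """Sum of the weights of the edges crossing the cut (S, V - S)."""
    return sum(edges[e] for e in cut_set(edges, S))

# Triangle a-b-c with weights 3, 1, 2:
edges = {frozenset({'a', 'b'}): 3,
         frozenset({'b', 'c'}): 1,
         frozenset({'a', 'c'}): 2}
S = {'a'}
# The cut ({a}, {b, c}) is crossed by ab and ac, total weight 5.
print(cut_weight(edges, S))  # 5
```

In the unweighted case the same code works with all weights set to 1, so `cut_weight` reduces to counting crossing edges.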
A bond is a cut-set that does not have any other cut-set as a proper subset.
== Minimum cut ==
A cut is minimum if the size or weight of the cut is not larger than the size of any other cut. The illustration on the right shows a minimum cut: the size of this cut is 2, and there is no cut of size 1 because the graph is bridgeless.
The max-flow min-cut theorem proves that the maximum network flow and the sum of the cut-edge weights of any minimum cut that separates the source and the sink are equal. There are polynomial-time methods to solve the min-cut problem, notably the Edmonds–Karp algorithm.
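A minimal Edmonds–Karp sketch (the nested-dict capacity representation is our choice, and no optimizations are attempted) computes the minimum s–t cut capacity as a maximum flow, per the theorem above:

```python
from collections import defaultdict, deque

def edmonds_karp(capacity, s, t):
    """Value of the maximum s-t flow, which by the max-flow min-cut
    theorem equals the capacity of a minimum s-t cut.
    capacity: dict of dicts, capacity[u][v] = capacity of edge u->v."""
    residual = defaultdict(lambda: defaultdict(int))
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Trace the path back and find its bottleneck capacity.
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2, 'b': 1}, 'b': {'t': 3}}
print(edmonds_karp(cap, 's', 't'))  # 5
```

Here the min cut separates {s} from the rest, with capacity 3 + 2 = 5, matching the returned flow.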
== Maximum cut ==
A cut is maximum if the size of the cut is not smaller than the size of any other cut. The illustration on the right shows a maximum cut: the size of the cut is equal to 5, and there is no cut of size 6, or |E| (the number of edges), because the graph is not bipartite (there is an odd cycle).
In general, finding a maximum cut is computationally hard.
The max-cut problem is one of Karp's 21 NP-complete problems.
The max-cut problem is also APX-hard, meaning that there is no polynomial-time approximation scheme for it unless P = NP.
However, it can be approximated to within a constant approximation ratio using semidefinite programming.
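Because the problem is NP-hard, exact answers in general require exponential work. A brute-force sketch (for tiny graphs only; names are ours) makes the definition concrete and shows why an odd cycle caps the maximum cut below |E|:

```python
from itertools import combinations

def max_cut_brute_force(vertices, edges):
    """Exact max cut by enumerating all 2^(n-1) bipartitions.
    Exponential time, reflecting the hardness noted above."""
    vertices = list(vertices)
    fixed = vertices[0]  # fix one vertex on one side to avoid double-counting
    rest = vertices[1:]
    best, best_S = 0, {fixed}
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            S = {fixed, *extra}
            size = sum(1 for u, v in edges if (u in S) != (v in S))
            if size > best:
                best, best_S = size, S
    return best, best_S

# 5-cycle: an odd cycle, hence not bipartite, so no cut reaches all 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_cut_brute_force(range(5), edges)[0])  # 4
```

The semidefinite-programming approach mentioned above (Goemans–Williamson) trades this exhaustive search for a polynomial-time randomized rounding with a guaranteed approximation ratio.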
Note that min-cut and max-cut are not dual problems in the linear programming sense, even though one gets from one problem to other by changing min to max in the objective function. The max-flow problem is the dual of the min-cut problem.
== Sparsest cut ==
The sparsest cut problem is to bipartition the vertices so as to minimize the ratio of the number of edges across the cut divided by the number of vertices in the smaller half of the partition. This objective function favors solutions that are both sparse (few edges crossing the cut) and balanced (close to a bisection). The problem is known to be NP-hard, and the best known approximation algorithm is an O(√(log n)) approximation due to Arora, Rao & Vazirani (2009).
== Cut space ==
The family of all cut sets of an undirected graph is known as the cut space of the graph. It forms a vector space over the two-element finite field of arithmetic modulo two, with the symmetric difference of two cut sets as the vector addition operation, and is the orthogonal complement of the cycle space. If the edges of the graph are given positive weights, the minimum weight basis of the cut space can be described by a tree on the same vertex set as the graph, called the Gomory–Hu tree. Each edge of this tree is associated with a bond in the original graph, and the minimum cut between two nodes s and t is the minimum weight bond among the ones associated with the path from s to t in the tree.
== See also ==
Connectivity (graph theory)
Graph cuts in computer vision
Split (graph theory)
Vertex separator
Bridge (graph theory)
Cutwidth
== References == | Wikipedia/Cut_(graph_theory) |
Intuitively, an algorithmically random sequence (or random sequence) is a sequence of binary digits that appears random to any algorithm running on a (prefix-free or not) universal Turing machine. The notion can be applied analogously to sequences on any finite alphabet (e.g. decimal digits). Random sequences are key objects of study in algorithmic information theory.
In measure-theoretic probability theory, introduced by Andrey Kolmogorov in 1933, there is no such thing as a random sequence. For example, consider flipping a fair coin infinitely many times. Any particular sequence, be it 0000… or 011010…, has equal probability of exactly zero. There is no way to state that one sequence is "more random" than another sequence, using the language of measure-theoretic probability. However, it is intuitively obvious that 011010… looks more random than 0000…. Algorithmic randomness theory formalizes this intuition.
As different types of algorithms are sometimes considered, ranging from algorithms with specific bounds on their running time to algorithms which may ask questions of an oracle machine, there are different notions of randomness. The most common of these is known as Martin-Löf randomness (K-randomness or 1-randomness), but stronger and weaker forms of randomness also exist. When the term "algorithmically random" is used to refer to a particular single (finite or infinite) sequence without clarification, it is usually taken to mean "incompressible" or, in the case the sequence is infinite and prefix algorithmically random (i.e., K-incompressible), "Martin-Löf–Chaitin random".
Since its inception, Martin-Löf randomness has been shown to admit many equivalent characterizations—in terms of compression, randomness tests, and gambling—that bear little outward resemblance to the original definition, but each of which satisfies our intuitive notion of properties that random sequences ought to have: random sequences should be incompressible, they should pass statistical tests for randomness, and it should be difficult to make money betting on them. The existence of these multiple definitions of Martin-Löf randomness, and the stability of these definitions under different models of computation, give evidence that Martin-Löf randomness is natural and not an accident of Martin-Löf's particular model.
It is important to disambiguate between algorithmic randomness and stochastic randomness. Unlike algorithmic randomness, which is defined for computable (and thus deterministic) processes, stochastic randomness is usually said to be a property of a sequence that is a priori known to be generated by (or is the outcome of) an independent identically distributed equiprobable stochastic process.
Because infinite sequences of binary digits can be identified with real numbers in the unit interval, random binary sequences are often called (algorithmically) random real numbers. Additionally, infinite binary sequences correspond to characteristic functions of sets of natural numbers; therefore those sequences might be seen as sets of natural numbers.
The class of all Martin-Löf random (binary) sequences is denoted by RAND or MLR.
== History ==
=== Richard von Mises ===
Richard von Mises formalized the notion of a test for randomness in order to define a random sequence as one that passed all tests for randomness. He defined a "collective" (kollektiv) to be an infinite binary string x_{1:∞} such that:
There exists a limit lim_n (1/n) ∑_{i=1}^{n} x_i = p ∈ (0, 1).
For any "admissible" rule that picks out an infinite subsequence (x_{m_i})_i from the string, we still have lim_n (1/n) ∑_{i=1}^{n} x_{m_i} = p.
He called this principle the "impossibility of a gambling system".
To pick out a subsequence, first pick a binary function φ such that, given any binary string x_{1:k}, it outputs either 0 or 1. If it outputs 1, then we add x_{k+1} to the subsequence; else we continue. In this definition, some admissible rules might abstain forever on some sequences, and thus fail to pick out an infinite subsequence. We only consider those that do pick an infinite subsequence.
Stated in another way, each infinite binary string is a coin-flip game, and an admissible rule is a way for a gambler to decide when to place bets. A collective is a coin-flip game where there is no way for one gambler to do better than another over the long run. That is, there is no gambling system that works for the game.
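A sketch of such a selection rule (names ours): φ reads the prefix seen so far and decides whether the next bit joins the subsequence.

```python
def select(seq, phi):
    """Subsequence picked out by admissible rule phi: phi reads the prefix
    x_{1:k} and returns True to include the next bit x_{k+1}."""
    out = []
    for k in range(len(seq) - 1):
        if phi(seq[:k + 1]):
            out.append(seq[k + 1])
    return out

# Example rule (ours): "select the bit that follows every 1". On a
# collective, the selected subsequence must still have limiting frequency p.
seq = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
picked = select(seq, lambda prefix: prefix[-1] == 1)
print(picked)  # [1, 0, 0, 1, 0]
```

Von Mises's condition says that no such rule, however cleverly it reads the prefix, can bias the limiting frequency of the selected bits away from p.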
The definition generalizes from binary alphabet to countable alphabet:
The frequency of each letter converges to a limit greater than zero.
For any "admissible" rule that picks out an infinite subsequence (x_{m_i})_i from the string, the frequency of each letter in the subsequence still converges to the same limit.
Usually the admissible rules are defined to be rules computable by a Turing machine, and we require p = 1/2. With this, we have the Mises–Wald–Church random sequences. This is not a restriction, since given a sequence with p = 1/2, we can construct random sequences with any other computable p ∈ (0, 1). (Here, "Church" refers to Alonzo Church, whose 1940 paper proposed using Turing-computable rules.)
However, this definition was found not to be strong enough.
Intuitively, the long-time average of a random sequence should oscillate on both sides of p, like how a random walk should cross the origin infinitely many times. However, Jean Ville showed that, even with countably many rules, there exists a binary sequence that tends towards a fraction p of ones but, for every finite prefix, has a fraction of ones less than p.
=== Per Martin-Löf ===
The Ville construction suggests that the Mises–Wald–Church sense of randomness is not good enough, because some random sequences do not satisfy some laws of randomness. For example, the Ville construction does not satisfy one of the laws of the iterated logarithm:
limsup_{n→∞} [−∑_{k=1}^{n} (x_k − 1/2)] / √(2n log log n) ≠ 1
Naively, one can fix this by requiring a sequence to satisfy all possible laws of randomness, where a "law of randomness" is a property that is satisfied by all sequences with probability 1. However, for each infinite sequence y_{1:∞} ∈ 2^ℕ, we have a law of randomness stating that x_{1:∞} ≠ y_{1:∞}, leading to the conclusion that there are no random sequences.
(Per Martin-Löf, 1966) defined "Martin-Löf randomness" by only allowing laws of randomness that are Turing-computable. In other words, a sequence is random iff it passes all Turing-computable tests of randomness.
The thesis that the definition of Martin-Löf randomness "correctly" captures the intuitive notion of randomness has been called the Martin-Löf–Chaitin Thesis; it is somewhat similar to the Church–Turing thesis.
Church–Turing thesis: the mathematical concept of "computable by Turing machines" captures the intuitive notion of a function being "computable". Like Turing-computability, which has many equivalent definitions, Martin-Löf randomness also has many equivalent definitions. See the next section.
== Three equivalent definitions ==
Martin-Löf's original definition of a random sequence was in terms of constructive null covers; he defined a sequence to be random if it is not contained in any such cover. Gregory Chaitin, Leonid Levin and Claus-Peter Schnorr proved a characterization in terms of algorithmic complexity: a sequence is random if there is a uniform bound on the compressibility of its initial segments. Schnorr gave a third equivalent definition in terms of martingales. Li and Vitanyi's book An Introduction to Kolmogorov Complexity and Its Applications is the standard introduction to these ideas.
Algorithmic complexity (Chaitin 1969, Schnorr 1973, Levin 1973): Algorithmic complexity (also known as (prefix-free) Kolmogorov complexity or program-size complexity) can be thought of as a lower bound on the algorithmic compressibility of a finite sequence (of characters or binary digits). It assigns to each such sequence w a natural number K(w) that, intuitively, measures the minimum length of a computer program (written in some fixed programming language) that takes no input and will output w when run. The complexity is required to be prefix-free: The program (a sequence of 0 and 1) is followed by an infinite string of 0s, and the length of the program (assuming it halts) includes the number of zeroes to the right of the program that the universal Turing machine reads. The additional requirement is needed because we can choose a length such that the length codes information about the substring. Given a natural number c and a sequence w, we say that w is c-incompressible if
K(w) ≥ |w| − c.
An infinite sequence S is Martin-Löf random if and only if there is a constant c such that all of S's finite prefixes are c-incompressible. More succinctly,
K(w) ≥ |w| − O(1).
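K itself is uncomputable, but any real compressor gives a computable upper bound on it. The following illustrative sketch (using zlib purely as a stand-in, which is far weaker than K) shows the contrast incompressibility is meant to capture:

```python
import random
import zlib

def compressed_len(w: bytes) -> int:
    """Length of w under zlib's best effort: an *upper bound* proxy
    for the (uncomputable) Kolmogorov complexity of w."""
    return len(zlib.compress(w, 9))

patterned = b"01" * 500  # 1000 bytes, highly regular
rng = random.Random(0)   # fixed seed, so this is reproducible
noisy = bytes(rng.getrandbits(8) for _ in range(1000))  # 1000 pseudo-random bytes

print(compressed_len(patterned))  # far below 1000
print(compressed_len(noisy))      # close to (or slightly above) 1000
```

Note the hedge cuts both ways: a short zlib output certifies compressibility, but a long one proves nothing, since the noisy string here is in fact generated by a short program (the seeded generator) and so has low Kolmogorov complexity despite resisting zlib.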
Constructive null covers (Martin-Löf 1966): This is Martin-Löf's original definition. For a finite binary string w we let Cw denote the cylinder generated by w. This is the set of all infinite sequences beginning with w, which is a basic open set in Cantor space. The product measure μ(Cw) of the cylinder generated by w is defined to be 2−|w|. Every open subset of Cantor space is the union of a countable sequence of disjoint basic open sets, and the measure of an open set is the sum of the measures of any such sequence. An effective open set is an open set that is the union of the sequence of basic open sets determined by a recursively enumerable sequence of binary strings. A constructive null cover or effective measure 0 set is a recursively enumerable sequence
U_i of effective open sets such that U_{i+1} ⊆ U_i and μ(U_i) ≤ 2^{−i} for each natural number i. Every effective null cover determines a G_δ set of measure 0, namely the intersection of the sets U_i.
A sequence is defined to be Martin-Löf random if it is not contained in any G_δ set determined by a constructive null cover.
Constructive martingales (Schnorr 1971): A martingale is a function d : {0,1}* → [0, ∞) such that, for all finite strings w, d(w) = (d(w⌢0) + d(w⌢1))/2, where a⌢b is the concatenation of the strings a and b. This is called the "fairness condition": if a martingale is viewed as a betting strategy, then the above condition requires that the bettor plays against fair odds. A martingale d is said to succeed on a sequence S if limsup_{n→∞} d(S↾n) = ∞, where S↾n is the first n bits of S. A martingale d is constructive (also known as weakly computable, lower semi-computable) if there exists a computable function d̂ : {0,1}* × ℕ → ℚ such that, for all finite binary strings w:
d̂(w, t) ≤ d̂(w, t+1) < d(w) for all positive integers t, and
lim_{t→∞} d̂(w, t) = d(w).
A sequence is Martin-Löf random if and only if no constructive martingale succeeds on it.
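As a toy illustration (our own construction, not drawn from the literature): a strategy that always wagers a fixed fraction f of its capital on the next bit being 1, and the rest on 0, is a martingale, and the fairness condition can be checked directly.

```python
def make_martingale(f: float = 0.25):
    """Toy betting strategy: stake fraction f of current capital on the
    next bit being 1, the remainder on 0; payouts are at fair odds."""
    def d(w: str) -> float:
        capital = 1.0
        for bit in w:
            on_one = f * capital          # staked on "next bit is 1"
            on_zero = capital - on_one    # staked on "next bit is 0"
            capital = 2 * on_one if bit == "1" else 2 * on_zero
        return capital
    return d

d = make_martingale()
# Fairness condition d(w) = (d(w0) + d(w1)) / 2 holds for every prefix:
for w in ["", "0", "1", "01", "10", "110"]:
    assert abs(d(w) - (d(w + "0") + d(w + "1")) / 2) < 1e-12
```

This particular strategy succeeds on the all-ones sequence (its capital grows by a factor of 2f... wait, 2·0.25 shrinks; it grows on all-zeros by a factor of 1.5 per bit), which is exactly why such trivially regular sequences are not random: a computable martingale beats them.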
== Interpretations of the definitions ==
The Kolmogorov complexity characterization conveys the intuition that a random sequence is incompressible: no prefix can be produced by a program much shorter than the prefix.
The null cover characterization conveys the intuition that a random real number should not have any property that is "uncommon". Each measure 0 set can be thought of as an uncommon property. It is not possible for a sequence to lie in no measure 0 sets, because each one-point set has measure 0. Martin-Löf's idea was to limit the definition to measure 0 sets that are effectively describable; the definition of an effective null cover determines a countable collection of effectively describable measure 0 sets and defines a sequence to be random if it does not lie in any of these particular measure 0 sets. Since the union of a countable collection of measure 0 sets has measure 0, this definition immediately leads to the theorem that there is a measure 1 set of random sequences. Note that if we identify the Cantor space of binary sequences with the interval [0,1] of real numbers, the measure on Cantor space agrees with Lebesgue measure.
An effective measure 0 set can be interpreted as a Turing machine that is able to tell, given an infinite binary string, whether the string looks random at levels of statistical significance. The set is the intersection of shrinking sets
U_1 ⊃ U_2 ⊃ U_3 ⊃ ⋯, and since each set U_n is specified by an enumerable sequence of prefixes, given any infinite binary string, if it is in U_n, then the Turing machine can decide in finite time that the string does fall inside U_n. Therefore, it can "reject the hypothesis that the string is random at significance level 2^{−n}". If the Turing machine can reject the hypothesis at all significance levels, then the string is not random. A random string is one that, for each Turing-computable test of randomness, manages to remain forever un-rejected at some significance level.
The martingale characterization conveys the intuition that no effective procedure should be able to make money betting against a random sequence. A martingale d is a betting strategy. d reads a finite string w and bets money on the next bit. It bets some fraction of its money that the next bit will be 0, and then the remainder of its money that the next bit will be 1. d doubles the money it placed on the bit that actually occurred, and it loses the rest. d(w) is the amount of money it has after seeing the string w. Since the bet placed after seeing the string w can be calculated from the values d(w), d(w0), and d(w1), calculating the amount of money it has is equivalent to calculating the bet. The martingale characterization says that no betting strategy implementable by any computer (even in the weak sense of constructive strategies, which are not necessarily computable) can make money betting on a random sequence.
== Properties and examples of Martin-Löf random sequences ==
=== Universality ===
There is a universal constructive martingale d. This martingale is universal in the sense that, given any constructive martingale d′, if d′ succeeds on a sequence, then d succeeds on that sequence as well. Thus, d succeeds on every sequence in RANDc (but, since d is constructive, it succeeds on no sequence in RAND). (Schnorr 1971)
There is a constructive null cover of RANDc. This means that all effective tests for randomness (that is, constructive null covers) are, in a sense, subsumed by this universal test for randomness, since any sequence that passes this single test for randomness will pass all tests for randomness. (Martin-Löf 1966) Intuitively, this universal test for randomness says: "If the sequence has increasingly long prefixes that can be increasingly well-compressed on this universal Turing machine, then it is not random." See the next section.
Construction sketch: Enumerate the effective null covers as ((U_{m,n})_n)_m. The enumeration is also effective (enumerated by a modified universal Turing machine). Now we have a universal effective null cover by diagonalization: (∪_n U_{n,n+k+1})_k.
=== Passing randomness tests ===
If a sequence fails an algorithmic randomness test, then it is algorithmically compressible. Conversely, if it is algorithmically compressible, then it fails an algorithmic randomness test.
Construction sketch: Suppose the sequence fails a randomness test; then it can be compressed by lexicographically enumerating all sequences that fail the test, then coding the location of the sequence in that list. This is called "enumerative source encoding".
Conversely, if the sequence is compressible, then by the pigeonhole principle, only a vanishingly small fraction of sequences are like that, so we can define a new test for randomness by "has a compression by this universal Turing machine". Incidentally, this is the universal test for randomness.
For example, consider a binary sequence sampled IID from the Bernoulli distribution. After taking a large number N of samples, we should have about M ≈ pN ones. We can code for this sequence as "Generate all binary sequences with length N and M ones. Of those, the i-th sequence in lexicographic order."
By Stirling approximation, log₂ C(N, pN) ≈ N·H(p), where H is the binary entropy function. Thus, the number of bits in this description is:
2(1+ε) log₂ N + (1+ε) N·H(p) + O(1)
The first term is for prefix-coding the numbers N and M. The second term is for prefix-coding the number i. (Use Elias omega coding.) The third term is for prefix-coding the rest of the description.
When N is large, this description has just ∼H(p)·N bits, and so it is compressible, with compression ratio ∼H(p). In particular, the compression ratio is exactly one (incompressible) only when p = 1/2. (Example 14.2.8)
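The Stirling estimate above is easy to check numerically. A quick sketch using the standard library (with `bit_length` standing in for log₂ of the huge binomial coefficient, off by less than one bit):

```python
from math import comb, log2

def H(p: float) -> float:
    """Binary entropy function, in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

N, p = 10_000, 0.1
M = int(p * N)
exact_bits = comb(N, M).bit_length()  # ~ log2 C(N, M): bits to index one sequence
approx = N * H(p)
print(round(exact_bits / N, 3), round(approx / N, 3))  # both ≈ 0.47 bits/symbol
```

So a typical p = 0.1 sequence compresses to roughly 47% of its length, while at p = 1/2 the entropy H(p) reaches 1 and the bound gives no compression at all.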
=== Impossibility of a gambling system ===
Consider a casino offering fair odds at a roulette table. The roulette table generates a sequence of random numbers. If this sequence is algorithmically random, then there is no lower semi-computable strategy to win, which in turn implies that there is no computable strategy to win. That is, for any gambling algorithm, the long-term log-payoff is zero (neither positive nor negative). Conversely, if this sequence is not algorithmically random, then there is a lower semi-computable strategy to win.
=== Examples ===
Chaitin's halting probability Ω is an example of a random sequence.
No random sequence is computable.
Every random sequence is normal, satisfies the law of large numbers, and satisfies all Turing-computable properties satisfied by an IID stream of uniformly random numbers. (Theorem 14.5.2 )
=== Relation to the arithmetic hierarchy ===
RANDc (the complement of RAND) is a measure 0 subset of the set of all infinite sequences. This is implied by the fact that each constructive null cover covers a measure 0 set, there are only countably many constructive null covers, and a countable union of measure 0 sets has measure 0. This implies that RAND is a measure 1 subset of the set of all infinite sequences.
The class RAND is a Σ⁰₂ subset of Cantor space, where Σ⁰₂ refers to the second level of the arithmetical hierarchy. This is because a sequence S is in RAND if and only if there is some open set in the universal effective null cover that does not contain S; this property can be seen to be definable by a Σ⁰₂ formula.
There is a random sequence which is Δ⁰₂, that is, computable relative to an oracle for the Halting problem. (Schnorr 1971) Chaitin's Ω is an example of such a sequence.
No random sequence is decidable, computably enumerable, or co-computably-enumerable. Since these correspond to the Δ⁰₁, Σ⁰₁, and Π⁰₁ levels of the arithmetical hierarchy, this means that Δ⁰₂ is the lowest level in the arithmetical hierarchy where random sequences can be found.
Every sequence is Turing reducible to some random sequence. (Kučera 1985/1989, Gács 1986). Thus there are random sequences of arbitrarily high Turing degree.
== Relative randomness ==
As each of the equivalent definitions of a Martin-Löf random sequence is based on what is computable by some Turing machine, one can naturally ask what is computable by a Turing oracle machine. For a fixed oracle A, a sequence B which is not only random but in fact, satisfies the equivalent definitions for computability relative to A (e.g., no martingale which is constructive relative to the oracle A succeeds on B) is said to be random relative to A. Two sequences, while themselves random, may contain very similar information, and therefore neither will be random relative to the other. Any time there is a Turing reduction from one sequence to another, the second sequence cannot be random relative to the first, just as computable sequences are themselves nonrandom; in particular, this means that Chaitin's Ω is not random relative to the halting problem.
An important result relating to relative randomness is van Lambalgen's theorem, which states that if C is the sequence composed from A and B by interleaving the first bit of A, the first bit of B, the second bit of A, the second bit of B, and so on, then C is algorithmically random if and only if A is algorithmically random, and B is algorithmically random relative to A. A closely related consequence is that if A and B are both random themselves, then A is random relative to B if and only if B is random relative to A.
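The interleaving operation in van Lambalgen's theorem can be sketched for finite prefixes (the theorem itself concerns infinite sequences; the function names here are illustrative, not standard):

```python
def interleave(a, b):
    """Join two bit sequences A and B into C = a0 b0 a1 b1 ...,
    as in the statement of van Lambalgen's theorem (finite prefix only)."""
    out = []
    for x, y in zip(a, b):
        out.extend((x, y))
    return out

def deinterleave(c):
    """Recover the A- and B-parts from an interleaved prefix:
    even positions came from A, odd positions from B."""
    return c[0::2], c[1::2]
```

For example, `interleave([1, 0, 1], [0, 0, 1])` gives `[1, 0, 0, 0, 1, 1]`, and `deinterleave` inverts it.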
== Stronger than Martin-Löf randomness ==
Relative randomness gives us the first notion which is stronger than Martin-Löf randomness: randomness relative to some fixed oracle A. For any oracle, this is at least as strong, and for most oracles, it is strictly stronger, since there will be Martin-Löf random sequences which are not random relative to the oracle A. Important oracles often considered are the halting problem, ∅′, and the nth jump oracle, ∅⁽ⁿ⁾, as these oracles are able to answer specific questions which naturally arise. A sequence which is random relative to the oracle ∅⁽ⁿ⁻¹⁾ is called n-random; a sequence is 1-random, therefore, if and only if it is Martin-Löf random. A sequence which is n-random for every n is called arithmetically random. The n-random sequences sometimes arise when considering more complicated properties. For example, there are only countably many Δ⁰₂ sets, so one might think that these should be non-random. However, the halting probability Ω is Δ⁰₂ and 1-random; it is only after 2-randomness is reached that it is impossible for a random set to be Δ⁰₂.
== Weaker than Martin-Löf randomness ==
Additionally, there are several notions of randomness which are weaker than Martin-Löf randomness, including weak 1-randomness, Schnorr randomness, computable randomness, and partial computable randomness. Yongge Wang showed that Schnorr randomness is different from computable randomness. Additionally, Kolmogorov–Loveland randomness is known to be no stronger than Martin-Löf randomness, but it is not known whether it is actually weaker.
At the opposite end of the randomness spectrum there is the notion of a K-trivial set. These sets are anti-random in that every initial segment is logarithmically compressible (i.e., K(w) ≤ K(|w|) + b for each initial segment w), but they are not computable.
== See also ==
Random sequence
Gregory Chaitin
Stochastics
Monte Carlo method
K-trivial set
Universality probability
Statistical randomness
== References ==
== Further reading ==
Eagle, Antony (2021), "Chance versus Randomness", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Spring 2021 ed.), Metaphysics Research Lab, Stanford University, retrieved 2024-01-28
Downey, Rod; Hirschfeldt, Denis R.; Nies, André; Terwijn, Sebastiaan A. (2006). "Calibrating Randomness". The Bulletin of Symbolic Logic. 12 (3/4): 411–491. CiteSeerX 10.1.1.135.4162. doi:10.2178/bsl/1154698741. Archived from the original on 2016-02-02.
Gács, Péter (1986). "Every sequence is reducible to a random one" (PDF). Information and Control. 70 (2/3): 186–192. doi:10.1016/s0019-9958(86)80004-3.
Kučera, A. (1985). "Measure, Π01-classes and complete extensions of PA". Recursion Theory Week. Lecture Notes in Mathematics. Vol. 1141. Springer-Verlag. pp. 245–259. doi:10.1007/BFb0076224. ISBN 978-3-540-39596-6.
Kučera, A. (1989). "On the use of diagonally nonrecursive functions". Studies in Logic and the Foundations of Mathematics. Vol. 129. North-Holland. pp. 219–239.
Levin, L. (1973). "On the notion of a random sequence". Soviet Mathematics - Doklady. 14: 1413–1416.
Li, M.; Vitanyi, P. M. B. (1997). An Introduction to Kolmogorov Complexity and its Applications (Second ed.). Berlin: Springer-Verlag.
Martin-Löf, P. (1966). "The definition of random sequences". Information and Control. 9 (6): 602–619. doi:10.1016/s0019-9958(66)80018-9.
Nies, André (2009). Computability and randomness. Oxford Logic Guides. Vol. 51. Oxford: Oxford University Press. ISBN 978-0-19-923076-1. Zbl 1169.03034.
Schnorr, C. P. (1971). "A unified approach to the definition of a random sequence". Mathematical Systems Theory. 5 (3): 246–258. doi:10.1007/BF01694181. S2CID 8931514.
Schnorr, Claus P. (1973). "Process complexity and effective random tests". Journal of Computer and System Sciences. 7 (4): 376–388. doi:10.1016/s0022-0000(73)80030-3.
Chaitin, Gregory J. (1969). "On the Length of Programs for Computing Finite Binary Sequences: Statistical Considerations". Journal of the ACM. 16 (1): 145–159. doi:10.1145/321495.321506. S2CID 8209877.
Ville, J. (1939). Etude critique de la notion de collectif. Paris: Gauthier-Villars.
An embedded system is a specialized computer system—a combination of a computer processor, computer memory, and input/output peripheral devices—that has a dedicated function within a larger mechanical or electronic system. It is embedded as part of a complete device often including electrical or electronic hardware and mechanical parts.
Because an embedded system typically controls physical operations of the machine that it is embedded within, it often has real-time computing constraints. Embedded systems control many devices in common use. In 2009, it was estimated that ninety-eight percent of all microprocessors manufactured were used in embedded systems.
Modern embedded systems are often based on microcontrollers (i.e. microprocessors with integrated memory and peripheral interfaces), but ordinary microprocessors (using external chips for memory and peripheral interface circuits) are also common, especially in more complex systems. In either case, the processor(s) used may be types ranging from general purpose to those specialized in a certain class of computations, or even custom designed for the application at hand. A common standard class of dedicated processors is the digital signal processor (DSP).
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.
Embedded systems range in size from portable personal devices such as digital watches and MP3 players to bigger machines like home appliances, industrial assembly lines, robots, transport vehicles, traffic light controllers, and medical imaging systems. Often they constitute subsystems of other machines like avionics in aircraft and astrionics in spacecraft. Large installations like factories, pipelines, and electrical grids rely on multiple embedded systems networked together. Embedded systems that are generalized through software customization, such as programmable logic controllers, frequently make up the functional units of such installations.
Embedded systems range from those low in complexity, with a single microcontroller chip, to very high with multiple units, peripherals and networks, which may reside in equipment racks or across large geographical areas connected via long-distance communications lines.
== History ==
=== Background ===
The origins of the microprocessor and the microcontroller can be traced back to the MOS integrated circuit, which is an integrated circuit chip fabricated from MOSFETs (metal–oxide–semiconductor field-effect transistors) and was developed in the early 1960s. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor system could be contained on several MOS LSI chips.
The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, released in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima.
=== Development ===
One of the first recognizably modern embedded systems was the Apollo Guidance Computer, developed ca. 1965 by Charles Stark Draper at the MIT Instrumentation Laboratory. At the project's inception, the Apollo guidance computer was considered the riskiest item in the Apollo project as it employed the then newly developed monolithic integrated circuits to reduce the computer's size and weight.
An early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that represented the first high-volume use of integrated circuits.
Since these early applications in the 1960s, embedded systems have come down in price and there has been a dramatic rise in processing power and functionality. An early microprocessor, the Intel 4004 (released in 1971), was designed for calculators and other small systems but still required external memory and support chips. By the early 1980s, memory, input and output system components had been integrated into the same chip as the processor forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly. As the cost of microprocessors and microcontrollers fell, the prevalence of embedded systems increased.
A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components. With microcontrollers, it became feasible to replace, even in consumer products, expensive knob-based analog components such as potentiometers and variable capacitors with up/down buttons or knobs read out by a microprocessor. Although in this context an embedded system is usually more complex than a traditional solution, most of the complexity is contained within the microcontroller itself. Very few additional components may be needed and most of the design effort is in the software. Software prototype and test can be quicker compared with the design and construction of a new circuit not using an embedded processor.
== Applications ==
Embedded systems are commonly found in consumer, industrial, automotive, home appliances, medical, telecommunication, commercial, aerospace and military applications.
Telecommunications systems employ numerous embedded systems from telephone switches for the network to cell phones at the end user. Computer networking uses dedicated routers and network bridges to route data.
Consumer electronics include MP3 players, television sets, mobile phones, video game consoles, digital cameras, GPS receivers, and printers. Household appliances, such as microwave ovens, washing machines and dishwashers, include embedded systems to provide flexibility, efficiency and features. Advanced heating, ventilation, and air conditioning (HVAC) systems use networked thermostats to more accurately and efficiently control temperature that can change by time of day and season. Home automation uses wired and wireless networking that can be used to control lights, climate, security, audio/visual, surveillance, etc., all of which use embedded devices for sensing and controlling.
Transportation systems from flight to automobiles increasingly use embedded systems. New airplanes contain advanced avionics such as inertial guidance systems and GPS receivers that also have considerable safety requirements. Spacecraft rely on astrionics systems for trajectory correction. Various electric motors — brushless DC motors, induction motors and DC motors — use electronic motor controllers. Automobiles, electric vehicles, and hybrid vehicles increasingly use embedded systems to maximize efficiency and reduce pollution. Other automotive safety systems using embedded systems include anti-lock braking system (ABS), electronic stability control (ESC/ESP), traction control (TCS) and automatic four-wheel drive.
Medical equipment uses embedded systems for monitoring and for various medical imaging technologies (positron emission tomography (PET), single-photon emission computed tomography (SPECT), computed tomography (CT), and magnetic resonance imaging (MRI)) used for non-invasive internal inspections. Embedded systems within medical equipment are often powered by industrial computers.
Embedded systems are used for safety-critical systems in aerospace and defense industries. Unless connected to wired or wireless networks via on-chip 3G cellular or other methods for IoT monitoring and control purposes, these systems can be isolated from hacking and thus be more secure. For fire safety, the systems can be designed to have a greater ability to handle higher temperatures and continue to operate. In dealing with security, the embedded systems can be self-sufficient and be able to deal with cut electrical and communication systems.
Miniature wireless devices called motes are networked wireless sensors. Wireless sensor networking makes use of miniaturization made possible by advanced integrated circuit (IC) design to couple full wireless subsystems to sophisticated sensors, enabling people and companies to measure a myriad of things in the physical world and act on this information through monitoring and control systems. These motes are completely self-contained and will typically run off a battery source for years before the batteries need to be changed or charged.
== Characteristics ==
Embedded systems are designed to perform a specific task, in contrast with general-purpose computers designed for multiple tasks. Some have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always standalone devices. Many embedded systems are a small part within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is to play music. Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.
The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or flash memory chips. They run with limited computer hardware resources: little memory, small or non-existent keyboard or screen.
=== User interfaces ===
Embedded systems range from no user interface at all, in systems dedicated to one task, to complex graphical user interfaces that resemble modern computer desktop operating systems. Simple embedded devices use buttons, light-emitting diodes (LED), graphic or character liquid-crystal displays (LCD) with a simple menu system. More sophisticated devices that use a graphical screen with touch sensing or screen-edge soft keys provide flexibility while minimizing space used: the meaning of the buttons can change with the screen, and selection involves the natural behavior of pointing at what is desired.
Some systems provide user interface remotely with the help of a serial (e.g. RS-232) or network (e.g. Ethernet) connection. This approach extends the capabilities of the embedded system, avoids the cost of a display, simplifies the board support package (BSP) and allows designers to build a rich user interface on the PC. A good example of this is the combination of an embedded HTTP server running on an embedded device (such as an IP camera or a network router). The user interface is displayed in a web browser on a PC connected to the device.
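A minimal sketch of this remote-UI approach, assuming Python's standard-library HTTP server stands in for the embedded HTTP server (the device state and page layout are hypothetical):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_status(temperature_c):
    """Build the HTML page the device serves; a browser on the PC is the
    only user interface, so the device itself needs no display."""
    return ("<html><body><h1>Device status</h1>"
            f"<p>Temperature: {temperature_c:.1f} C</p></body></html>")

class DeviceUI(BaseHTTPRequestHandler):
    def do_GET(self):
        data = render_status(23.5).encode("utf-8")  # 23.5 is a stand-in reading
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To run on the device (blocks until interrupted):
#   HTTPServer(("", 8080), DeviceUI).serve_forever()
```

Any browser pointed at the device's address then renders the interface, keeping the display cost and UI complexity on the PC side.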
=== Processors in embedded systems ===
Examples of properties of typical embedded computers, when compared with general-purpose counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the expense of limited processing resources.
Numerous microcontrollers have been developed for embedded systems use. General-purpose microprocessors are also used in embedded systems, but generally require more support circuitry than microcontrollers.
==== Ready-made computer boards ====
PC/104 and PC/104+ are examples of standards for ready-made computer boards intended for small, low-volume embedded and ruggedized systems. These are mostly x86-based and often physically small compared to a standard PC, although still quite large compared to most simple (8/16-bit) embedded systems. They may use DOS, FreeBSD, Linux, NetBSD, OpenHarmony or an embedded real-time operating system (RTOS) such as MicroC/OS-II, QNX or VxWorks.
In certain applications, where small size or power efficiency are not primary concerns, the components used may be compatible with those used in general-purpose x86 personal computers. Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly integrated, physically smaller or have other attributes making them attractive to embedded engineers. The advantage of this approach is that low-cost commodity components may be used along with the same software development tools used for general software development. Systems built in this way are still regarded as embedded since they are integrated into larger devices and fulfill a single role. Examples of devices that may adopt this approach are automated teller machines (ATM) and arcade machines, which contain code specific to the application.
However, most ready-made embedded systems boards are not PC-centered and do not use the ISA or PCI busses. When a system-on-a-chip processor is involved, there may be little benefit to having a standardized bus connecting discrete components, and the environment for both hardware and software tools may be very different.
One common design style uses a small system module, perhaps the size of a business card, holding high density BGA chips such as an ARM-based system-on-a-chip processor and peripherals, external flash memory for storage, and DRAM for runtime memory. The module vendor will usually provide boot software and make sure there is a selection of operating systems, usually including Linux and some real-time choices. These modules can be manufactured in high volume, by organizations familiar with their specialized testing issues, and combined with much lower volume custom mainboards with application-specific external peripherals. Prominent examples of this approach include Arduino and Raspberry Pi.
==== ASIC and FPGA SoC solutions ====
A system on a chip (SoC) contains a complete system, consisting of multiple processors, multipliers, caches, even different types of memory, and commonly various peripherals like interfaces for wired or wireless communication, on a single chip. Often graphics processing units (GPU) and DSPs are included in such chips. SoCs can be implemented as an application-specific integrated circuit (ASIC) or using a field-programmable gate array (FPGA), which typically can be reconfigured.
ASIC implementations are common for very-high-volume embedded systems like mobile phones and smartphones. ASIC or FPGA implementations may be used for not-so-high-volume embedded systems with special needs in kind of signal processing performance, interfaces and reliability, like in avionics.
=== Peripherals ===
Embedded systems talk with the outside world via peripherals, such as:
Serial communication interfaces (SCI): RS-232, RS-422, RS-485, etc.
Synchronous Serial Interface: I2C, SPI, SSC and ESSI (Enhanced Synchronous Serial Interface)
Universal Serial Bus (USB)
Media cards (SD cards, CompactFlash, etc.)
Network interface controller: Ethernet, WiFi, etc.
Fieldbuses: CAN bus, LIN-Bus, PROFIBUS, etc.
Timers: Phase-locked loops, programmable interval timers
General Purpose Input/Output (GPIO)
Analog-to-digital and digital-to-analog converters
Debugging: JTAG, In-system programming, background debug mode interface port, BITP, and DB9 ports.
=== Tools ===
As with other software, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they may also use more specific tools:
In circuit debuggers or emulators (see next section).
Utilities to add a checksum or CRC to a program, so the embedded system can check if the program is valid.
For systems using digital signal processing, developers may use a computational notebook to simulate the mathematics.
System-level modeling and simulation tools help designers to construct simulation models of a system with hardware components such as processors, memories, DMA, interfaces, buses and software behavior flow as a state diagram or flow diagram using configurable library blocks. Simulation is conducted to select the right components by performing power vs. performance trade-offs, reliability analysis and bottleneck analysis. Typical reports that help a designer to make architecture decisions include application latency, device throughput, device utilization, power consumption of the full system as well as device-level power consumption.
A model-based development tool creates and simulates graphical data flow and UML state chart diagrams of components like digital filters, motor controllers, communication protocol decoding and multi-rate tasks.
Custom compilers and linkers may be used to optimize specialized hardware.
An embedded system may have its own special language or design tool, or add enhancements to an existing language such as Forth or Basic.
Another alternative is to add an RTOS or embedded operating system
Modeling and code generating tools often based on state machines
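The checksum/CRC idea from the list above can be sketched as follows, assuming a CRC-32 appended to the program image (the function names are illustrative; real bootloaders differ in CRC polynomial and placement):

```python
import struct
import zlib

def append_crc(image: bytes) -> bytes:
    """Append a little-endian CRC-32 of the image so a bootloader
    can verify the program is intact before running it."""
    return image + struct.pack("<I", zlib.crc32(image) & 0xFFFFFFFF)

def image_is_valid(image_with_crc: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the stored one."""
    if len(image_with_crc) < 4:
        return False
    payload, stored = image_with_crc[:-4], image_with_crc[-4:]
    return struct.pack("<I", zlib.crc32(payload) & 0xFFFFFFFF) == stored
```

A corrupted image (even a single flipped bit) then fails the check, and the boot code can refuse to jump to it.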
Software tools can come from several sources:
Software companies that specialize in the embedded market
Ported from the GNU software development tools
Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative to a common PC processor
Embedded software often requires a variety of development tools, including programming languages such as C++, Rust, or Python, and frameworks like Qt for graphical interfaces. These tools enable developers to create efficient, scalable, and feature-rich applications tailored to the specific requirements of embedded systems. The choice of tools is driven by factors such as real-time performance, integration with hardware, or energy efficiency.
As the complexity of embedded systems grows, higher-level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, NetBSD, FreeBSD, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.
== Debugging ==
Embedded debugging may be performed at different levels, depending on the facilities available. Considerations include: does it slow down the main application, how close is the debugged system or application to the actual system or application, how expressive are the triggers that can be set for debugging (e.g., inspecting the memory when a particular program counter value is reached), and what can be inspected in the debugging process (such as, only memory, or memory and registers, etc.).
From simplest to most sophisticated, debugging techniques and systems can be roughly grouped into the following areas:
Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic)
Software-only debuggers have the benefit that they do not need any hardware modification but have to carefully control what they record in order to conserve time and storage space.
External debugging using logging or serial port output to trace operation, using either a monitor in flash or a debug server like the Remedy Debugger, which even works for heterogeneous multicore systems.
An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or Nexus interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.
An in-circuit emulator (ICE) replaces the microprocessor with a simulated equivalent, providing full control over all aspects of the microprocessor.
A complete emulator provides a simulation of all aspects of the hardware, allowing all of it to be controlled and modified, and allowing debugging on a normal PC. The downsides are expense and slow operation, in some cases up to 100 times slower than the final system.
For SoC designs, the typical approach is to verify and debug the design on an FPGA prototype board. Tools such as Certus are used to insert probes in the FPGA implementation that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs in an implementation with capabilities similar to a logic analyzer.
Unless restricted to external debugging, the programmer can typically load and run software through the tools, view the code running in the processor, and start or stop its operation. The view of the code may be as high-level programming language, assembly code or mixture of both.
=== Tracing ===
Real-time operating systems often support tracing of operating system events. A graphical view is presented by a host PC tool, based on a recording of the system behavior. The trace recording can be performed in software, by the RTOS, or by special tracing hardware. RTOS tracing allows developers to understand timing and performance issues of the software system and gives a good understanding of the high-level system behaviors. Trace recording in embedded systems can be achieved using hardware or software solutions. Software-based trace recording does not require specialized debugging hardware and can be used to record traces in deployed devices, but it can have an impact on CPU and RAM usage. One example of a software-based tracing method used in RTOS environments is the use of empty macros which are invoked by the operating system at strategic places in the code, and can be implemented to serve as hooks.
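The hook-based software tracing described above can be sketched as follows (in a C RTOS these hooks would be macros compiled to nothing when tracing is disabled; the event names here are hypothetical):

```python
import time

trace_buffer = []  # a ring buffer in a real system; an unbounded list here

def trace_hook(event, task):
    """Invoked by the (hypothetical) RTOS at strategic places in its code,
    e.g. at every context switch; records a timestamped event."""
    trace_buffer.append((time.monotonic(), event, task))

# The kernel would call the hooks at context switches, for example:
trace_hook("switch_out", "idle")
trace_hook("switch_in", "sensor_task")
trace_hook("switch_out", "sensor_task")
```

A host PC tool can later read the buffer out of the device and display the timeline of task activity.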
=== Reliability ===
Embedded systems often reside in machines that are expected to run continuously for years without error, and in some cases recover by themselves if an error occurs. Therefore, the software is usually developed and tested more carefully than that for personal computers, and unreliable mechanical moving parts such as disk drives, switches or buttons are avoided.
Specific reliability issues may include:
The system cannot safely be shut down for repair, or it is too inaccessible to repair. Examples include space systems, undersea cables, navigational beacons, bore-hole systems, and automobiles.
The system must be kept running for safety reasons. Reduced functionality in the event of failure may be intolerable. Often backups are selected by an operator. Examples include aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals.
The system will lose large amounts of money when shut down: Telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service.
A variety of techniques are used, sometimes in combination, to recover from errors—both software bugs such as memory leaks, and also soft errors in the hardware:
A watchdog timer that resets and restarts the system unless the software periodically notifies the watchdog subsystem
Designing with a trusted computing base (TCB) architecture ensures a highly secure and reliable system environment
A hypervisor designed for embedded systems is able to provide secure encapsulation for any subsystem component so that a compromised software component cannot interfere with other subsystems, or privileged-level system software. This encapsulation keeps faults from propagating from one subsystem to another, thereby improving reliability. This may also allow a subsystem to be automatically shut down and restarted on fault detection.
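The watchdog-timer technique from the list above can be sketched in software, assuming a polled watchdog (real watchdogs are independent hardware that fires regardless of CPU state; the class and method names are illustrative):

```python
import threading
import time

class Watchdog:
    """Software sketch of a watchdog timer: if the application fails to
    call kick() within `timeout` seconds, the reset callback is invoked."""

    def __init__(self, timeout, reset):
        self.timeout = timeout
        self.reset = reset
        self.deadline = time.monotonic() + timeout
        self.lock = threading.Lock()

    def kick(self):
        """Called periodically by healthy application code."""
        with self.lock:
            self.deadline = time.monotonic() + self.timeout

    def poll(self):
        """In hardware this check runs independently of the main CPU."""
        with self.lock:
            expired = time.monotonic() >= self.deadline
        if expired:
            self.reset()
```

If the main loop hangs and stops kicking, the deadline passes and the reset path runs, restarting the system into a known-good state.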
Immunity-aware programming can help engineers produce more reliable embedded systems code. Guidelines and coding rules such as MISRA C/C++ aim to help developers produce reliable, portable firmware in a number of different ways: typically by advising or mandating against coding practices which may lead to run-time errors (memory leaks, invalid pointer uses), use of run-time checks and exception handling (range/sanity checks, divide-by-zero and buffer index validity checks, default cases in logic checks), loop bounding, production of human-readable, well commented and well structured code, and avoiding language ambiguities which may lead to compiler-induced inconsistencies or side-effects (expression evaluation ordering, recursion, certain types of macro). These rules can often be used in conjunction with code static checkers or bounded model checking for functional verification purposes, and also assist in determination of code timing properties.
=== High vs. low volume ===
For high-volume systems such as mobile phones, minimizing cost is usually the primary design consideration. Engineers typically select hardware that is just good enough to implement the necessary functions.
For low-volume or prototype embedded systems, general-purpose computers may be adapted by limiting the programs or by replacing the operating system with an RTOS.
== Embedded software architectures ==
In 1978, the National Electrical Manufacturers Association released ICS 3-1978, a standard for programmable microcontrollers, covering almost any computer-based controller, such as single-board computers and numerical and event-based controllers.
There are several different types of software architecture in common use.
=== Simple control loop ===
In this design, the software simply has a loop which monitors the input devices. The loop calls subroutines, each of which manages a part of the hardware or software. Hence it is called a simple control loop or programmed input-output.
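A minimal sketch of such a control loop (the device names and handlers are hypothetical; a bounded iteration count is added only so the sketch terminates, where real firmware would loop forever):

```python
def control_loop(read_inputs, handlers, iterations=None):
    """Programmed I/O sketch: poll the inputs and dispatch each reading to
    the subroutine managing that device. iterations=None loops forever,
    as a real firmware main loop would."""
    n = 0
    while iterations is None or n < iterations:
        for device, value in read_inputs().items():
            handlers[device](value)  # each handler manages one device
        n += 1
```

Each pass of the loop samples every input and calls the matching subroutine; there is no scheduler, so each handler must return quickly.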
=== Interrupt-controlled system ===
Some embedded systems are predominantly controlled by interrupts. This means that tasks performed by the system are triggered by different kinds of events; an interrupt could be generated, for example, by a timer at a predefined interval, or by a serial port controller receiving data.
This architecture is used if event handlers need low latency, and the event handlers are short and simple. These systems run a simple task in a main loop also, but this task is not very sensitive to unexpected delays. Sometimes the interrupt handler will add longer tasks to a queue structure. Later, after the interrupt handler has finished, these tasks are executed by the main loop. This method brings the system close to a multitasking kernel with discrete processes.
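The deferred-work pattern above can be sketched as follows (the interrupt source and task names are hypothetical; on real hardware the handler would be an actual ISR, not an ordinary function call):

```python
from collections import deque

work_queue = deque()  # tasks deferred by interrupt handlers

def uart_rx_isr(byte):
    """Interrupt handler: do the minimum (keeping latency low) and
    defer the longer processing to the main loop via the queue."""
    work_queue.append(("process_byte", byte))

def main_loop_once():
    """One pass of the main loop: drain the queue; this part is not
    latency-sensitive, so longer work is acceptable here."""
    results = []
    while work_queue:
        task, arg = work_queue.popleft()
        if task == "process_byte":
            results.append(arg)
    return results
```

The handler stays short and predictable, while the main loop absorbs the variable-length work, which is what brings the design close to a multitasking kernel with discrete processes.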
=== Cooperative multitasking ===
Cooperative multitasking is very similar to the simple control loop scheme, except that the loop is hidden in an API. The programmer defines a series of tasks, and each task gets its own environment to run in. When a task is idle, it calls an idle routine which passes control to another task.
The advantages and disadvantages are similar to that of the control loop, except that adding new software is easier, by simply writing a new task, or adding to the queue.
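Cooperative multitasking can be sketched with Python generators, where `yield` plays the role of the idle routine that hands control back to the scheduler (the task names are illustrative):

```python
def blink_task():
    """Toggles a (hypothetical) LED, yielding control after each toggle."""
    state = False
    while True:
        state = not state
        yield f"led={'on' if state else 'off'}"

def sensor_task():
    """Takes a (hypothetical) sensor sample, then yields control."""
    reading = 0
    while True:
        reading += 1
        yield f"sample#{reading}"

def scheduler(tasks, steps):
    """Round-robin cooperative scheduler: each task runs until it yields
    (the equivalent of calling the idle routine), then the next runs."""
    log = []
    for _ in range(steps):
        for t in tasks:
            log.append(next(t))
    return log
```

Because tasks only switch at their own `yield` points, no locking is needed, but a task that never yields starves all the others.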
=== Preemptive multitasking or multi-threading ===
In this type of system, a low-level piece of code switches between tasks or threads based on a timer invoking an interrupt. This is the level at which the system is generally considered to have an operating system kernel. Depending on how much functionality is required, it introduces more or less of the complexities of managing multiple tasks running conceptually in parallel.
As any code can potentially damage the data of another task (except in systems using a memory management unit) programs must be carefully designed and tested, and access to shared data must be controlled by some synchronization strategy such as message queues, semaphores or a non-blocking synchronization scheme.
Because of these complexities, it is common for organizations to use an off-the-shelf RTOS, allowing the application programmers to concentrate on device functionality rather than operating system services. The choice to include an RTOS brings in its own issues, however, as the selection must be made prior to starting the application development process. This timing forces developers to choose the embedded operating system for their device based on current requirements and so restricts future options to a large extent.
The level of complexity in embedded systems is continuously growing as devices are required to manage peripherals and tasks such as serial, USB, TCP/IP, Bluetooth, Wireless LAN, trunk radio, multiple channels, data and voice, enhanced graphics, multiple states, multiple threads, numerous wait states and so on. These trends are leading to the uptake of embedded middleware in addition to an RTOS.
=== Microkernels and exokernels ===
A microkernel allocates memory and switches the CPU to different threads of execution. User-mode processes implement major functions such as file systems, network interfaces, etc.
Exokernels communicate efficiently by normal subroutine calls. The hardware and all the software in the system are available to and extensible by application programmers.
=== Monolithic kernels ===
A monolithic kernel is a relatively large kernel with sophisticated capabilities adapted to suit an embedded environment. This gives programmers an environment similar to a desktop operating system like Linux or Microsoft Windows, and is therefore very productive for development. On the downside, it requires considerably more hardware resources, is often more expensive, and, because of the complexity of these kernels, can be less predictable and reliable.
Common examples of embedded monolithic kernels are embedded Linux, VxWorks, and Windows CE.
Despite the increased cost in hardware, this type of embedded system is increasing in popularity, especially on the more powerful embedded devices such as wireless routers and GPS navigation systems.
=== Additional software components ===
In addition to the core operating system, many embedded systems have additional upper-layer software components. These components include networking protocol stacks like CAN, TCP/IP, FTP, HTTP, and HTTPS, and storage capabilities like FAT and flash memory management systems. If the embedded device has audio and video capabilities, then the appropriate drivers and codecs will be present in the system. In the case of the monolithic kernels, many of these software layers may be included in the kernel. In the RTOS category, the availability of additional software components depends upon the commercial offering.
=== Domain-specific architectures ===
In the automotive sector, AUTOSAR is a standard architecture for embedded software.
== See also ==
== Notes ==
== References ==
== Further reading ==
John Catsoulis (May 2005). Designing Embedded Hardware, 2nd Edition. O'Reilly. ISBN 0-596-00755-8.
James M. Conrad; Alexander G. Dean (September 2011). Embedded Systems, An Introduction Using the Renesas RX62N Microcontroller. Micrium. ISBN 978-1935-7729-96.
Klaus Elk (August 2016). Embedded Software Development for the Internet Of Things, The Basics, The Technologies and Best Practices. CreateSpace Independent Publishing Platform. ISBN 978-1534602533.
== External links ==
Embedded Systems course with mbed YouTube, ongoing from 2015
Trends in Cyber Security and Embedded Systems Dan Geer, November 2013
Modern Embedded Systems Programming Video Course YouTube, ongoing from 2013
Embedded Systems Week (ESWEEK) yearly event with conferences, workshops and tutorials covering all aspects of embedded systems and software
Workshop on Embedded and Cyber-Physical Systems Education at the Wayback Machine (archived 2018-02-11), workshop covering educational aspects of embedded systems
Developing Embedded Systems - A Tools Introduction
Chemical reaction network theory is an area of applied mathematics that attempts to model the behaviour of real-world chemical systems. Since its foundation in the 1960s, it has attracted a growing research community, mainly due to its applications in biochemistry and theoretical chemistry. It has also attracted interest from pure mathematicians due to the interesting problems that arise from the mathematical structures involved.
== History ==
Dynamical properties of reaction networks were studied in chemistry and physics after the invention of the law of mass action. The essential steps in this study were introduction of detailed balance for the complex chemical reactions by Rudolf Wegscheider (1901), development of the quantitative theory of chemical chain reactions by Nikolay Semyonov (1934), development of kinetics of catalytic reactions by Cyril Norman Hinshelwood, and many other results.
Three eras of chemical dynamics can be revealed in the flux of research and publications. These eras may be associated with leaders: the first is the van 't Hoff era, the second may be called the Semenov–Hinshelwood era and the third is definitely the Aris era.
The "eras" may be distinguished based on the main focuses of the scientific leaders:
van’t Hoff was searching for the general law of chemical reaction related to specific chemical properties. The term "chemical dynamics" belongs to van’t Hoff.
The Semenov–Hinshelwood focus was the explanation of critical phenomena observed in many chemical systems, in particular in flames. The concept of chain reactions elaborated by these researchers influenced many sciences, especially nuclear physics and engineering.
Aris’ activity was concentrated on the detailed systematization of mathematical ideas and approaches.
The mathematical discipline "chemical reaction network theory" was originated by Rutherford Aris, a famous expert in chemical engineering, with the support of Clifford Truesdell, the founder and editor-in-chief of the journal Archive for Rational Mechanics and Analysis. The paper of R. Aris in this journal was communicated to the journal by C. Truesdell. It opened a series of papers by other authors, which were then communicated by R. Aris himself. The well-known papers of this series are the works of Frederick J. Krambeck, Roy Jackson, Friedrich Josef Maria Horn, Martin Feinberg and others, published in the 1970s. In his second "prolegomena" paper, R. Aris mentioned the work of N.Z. Shapiro and L.S. Shapley (1965), where an important part of his scientific program was realized.
Since then, the chemical reaction network theory has been further developed by a large number of researchers internationally.
== Overview ==
A chemical reaction network (often abbreviated to CRN) comprises a set of reactants, a set of products (often intersecting the set of reactants), and a set of reactions. For example, the pair of combustion reactions
form a reaction network:

2 H2 + O2 → 2 H2O
C + O2 → CO2

The reactions are represented by the arrows. The reactants appear to the left of the arrows; in this example they are H2 (hydrogen), O2 (oxygen) and C (carbon). The products appear to the right of the arrows; here they are H2O (water) and CO2 (carbon dioxide). In this example, since the reactions are irreversible and neither of the products is used up in the reactions, the set of reactants and the set of products are disjoint.
Mathematical modelling of chemical reaction networks usually focuses on what happens to the concentrations of the various chemicals involved as time passes. Following the example above, let a represent the concentration of H2 in the surrounding air, b represent the concentration of O2, c represent the concentration of H2O, and so on. Since these concentrations will not in general remain constant, they are written as functions of time, e.g. a(t), b(t), etc.
These variables can then be combined into a vector

x(t) = (a(t), b(t), c(t), …)^T
and their evolution with time can be written

ẋ ≡ dx/dt = (da/dt, db/dt, dc/dt, …)^T.

This is an example of a continuous autonomous dynamical system, commonly written in the form ẋ = f(x). The number of molecules of each reactant used up each time a reaction occurs is constant, as is the number of molecules produced of each product. These numbers are referred to as the stoichiometry of the reaction, and the difference between the two (i.e. the overall number of molecules used up or produced) is the net stoichiometry. This means that the equation representing the chemical reaction network can be rewritten as
ẋ = Γ V(x).

Here, each column of the constant matrix Γ represents the net stoichiometry of a reaction, and so Γ is called the stoichiometry matrix. V(x) is a vector-valued function where each output value represents a reaction rate, referred to as the kinetics.
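As a concrete sketch of these definitions (a hedged illustration, not from the article), consider the single reaction 2 H2 + O2 → 2 H2O with an assumed mass-action rate law V(x) = k·a²·b and net stoichiometry column Γ = (−2, −1, +2)^T; a simple Euler integration of ẋ = Γ V(x):

```python
# x = (a, b, c) = concentrations of H2, O2, H2O; one reaction 2H2 + O2 -> 2H2O.
GAMMA = [-2.0, -1.0, 2.0]   # net stoichiometry column of the single reaction

def rate(x, k=1.0):
    """Assumed mass-action kinetics: V(x) = k * [H2]^2 * [O2]."""
    a, b, _ = x
    return k * a * a * b

def euler_step(x, dt):
    # One explicit Euler step of x' = Gamma * V(x).
    v = rate(x)
    return [xi + dt * g * v for xi, g in zip(x, GAMMA)]

def simulate(x0, dt=0.01, steps=500):
    x = list(x0)
    for _ in range(steps):
        x = euler_step(x, dt)
    return x

a, b, c = simulate([1.0, 1.0, 0.0])
# Atom balances (2a + 2c for H, 2b + c for O) are conserved exactly,
# because they lie in the left null space of Gamma.
print(round(2 * a + 2 * c, 6), round(2 * b + c, 6))
```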
== Common assumptions ==
For physical reasons, it is usually assumed that reactant concentrations cannot be negative and that each reaction only takes place if all its reactants are present, i.e. all have non-zero concentration. For mathematical reasons, it is usually assumed that V(x) is continuously differentiable.
It is also commonly assumed that no reaction features the same chemical as both a reactant and a product (i.e. no catalysis or autocatalysis), and that increasing the concentration of a reactant increases the rate of any reactions that use it up. This second assumption is compatible with all physically reasonable kinetics, including mass action, Michaelis–Menten and Hill kinetics. Sometimes further assumptions are made about reaction rates, e.g. that all reactions obey mass action kinetics.
Other assumptions include mass balance, constant temperature, constant pressure, spatially uniform concentration of reactants, and so on.
== Types of results ==
As chemical reaction network theory is a diverse and well-established area of research, there is a significant variety of results. Some key areas are outlined below.
=== Number of steady states ===
These results relate to whether a chemical reaction network can produce significantly different behaviour depending on the initial concentrations of its constituent reactants. This has applications in e.g. modelling biological switches—a high concentration of a key chemical at steady state could represent a biological process being "switched on" whereas a low concentration would represent being "switched off".
For example, the catalytic trigger is the simplest catalytic reaction without autocatalysis that allows multiplicity of steady states (1976):

A2 + 2Z ⇌ 2AZ
B + Z ⇌ BZ
AZ + BZ → AB + 2Z

This is the classical adsorption mechanism of catalytic oxidation.
Here, A2, B and AB are gases (for example, O2, CO and CO2), Z is the "adsorption place" on the surface of the solid catalyst (for example, Pt), and AZ and BZ are the intermediates on the surface (adatoms, adsorbed molecules or radicals).
This system may have two stable steady states of the surface for the same concentrations of the gaseous components.
=== Stability of steady states ===
Stability determines whether a given steady state solution is likely to be observed in reality. Since real systems (unlike deterministic models) tend to be subject to random background noise, an unstable steady state solution is unlikely to be observed in practice. Instead of them, stable oscillations or other types of attractors may appear.
=== Persistence ===
Persistence has its roots in population dynamics. A non-persistent species in population dynamics can go extinct for some (or all) initial conditions. Similar questions are of interest to chemists and biochemists, i.e. if a given reactant was present to start with, can it ever be completely used up?
=== Existence of stable periodic solutions ===
Results regarding stable periodic solutions attempt to rule out "unusual" behaviour. If a given chemical reaction network admits a stable periodic solution, then some initial conditions will converge to an infinite cycle of oscillating reactant concentrations. For some parameter values it may even exhibit quasiperiodic or chaotic behaviour. While stable periodic solutions are unusual in real-world chemical reaction networks, well-known examples exist, such as the Belousov–Zhabotinsky reactions. The simplest catalytic oscillator (nonlinear self-oscillations without autocatalysis)
can be produced from the catalytic trigger by adding a "buffer" step.
where (BZ) is an intermediate that does not participate in the main reaction.
=== Network structure and dynamical properties ===
One of the main problems of chemical reaction network theory is the connection between network structure and properties of dynamics. This connection is important even for linear systems, for example, the simple cycle with equal interaction weights has the slowest decay of the oscillations among all linear systems with the same number of states.
For nonlinear systems, many connections between structure and dynamics have been discovered. First of all, these are results about stability. For some classes of networks, explicit construction of Lyapunov functions is possible without a priori assumptions about special relations between rate constants. Two results of this type are well known: the deficiency zero theorem and the theorem about systems without interactions between different components.
The deficiency zero theorem gives sufficient conditions for the existence of a Lyapunov function in the classical free-energy form

G(c) = Σ_i c_i (ln(c_i / c_i*) − 1),

where c_i is the concentration of the i-th component. The theorem about systems without interactions between different components states that if a network consists of reactions of the form

n_k A_i → Σ_j β_kj A_j

(for k ≤ r, where r is the number of reactions, A_i is the symbol of the i-th component, n_k ≥ 1, and the β_kj are non-negative integers) and allows the stoichiometric conservation law

M(c) = Σ_i m_i c_i = const (where all m_i > 0),

then the weighted L1 distance

Σ_i m_i |c_i^1(t) − c_i^2(t)|

between two solutions c^1(t) and c^2(t) with the same M(c) monotonically decreases in time.
=== Model reduction ===
Modelling of large reaction networks faces various difficulties: the models include too many unknown parameters, and high dimensionality makes the modelling computationally expensive. Model reduction methods were developed together with the first theories of complex chemical reactions. Three simple basic ideas were invented:
The quasi-equilibrium (or pseudo-equilibrium, or partial equilibrium) approximation (a fraction of reactions approach their equilibrium fast enough and, after that, remain almost equilibrated).
The quasi steady state approximation, or QSS (some of the species, very often intermediates or radicals, exist in relatively small amounts; they quickly reach their QSS concentrations and then follow, as dependent quantities, the dynamics of the other species, remaining close to the QSS). The QSS is defined as the steady state under the condition that the concentrations of the other species do not change.
The limiting step, or bottleneck, is a relatively small part of the reaction network (in the simplest cases a single reaction) whose rate is a good approximation to the reaction rate of the whole network.
The quasi-equilibrium approximation and the quasi steady state methods were developed further into the methods of slow invariant manifolds and computational singular perturbation. The methods of limiting steps gave rise to many methods of the analysis of the reaction graph.
== References ==
== External links ==
Specialist wiki on the mathematics of reaction networks
A cryptographically secure pseudorandom number generator (CSPRNG) or cryptographic pseudorandom number generator (CPRNG) is a pseudorandom number generator (PRNG) with properties that make it suitable for use in cryptography. It is also referred to as a cryptographic random number generator (CRNG).
== Background ==
Most cryptographic applications require random numbers, for example:
key generation
initialization vectors
nonces
salts in certain signature schemes, including ECDSA and RSASSA-PSS
token generation
The "quality" of the randomness required for these applications varies. For example, creating a nonce in some protocols needs only uniqueness. On the other hand, the generation of a master key requires a higher quality, such as more entropy. And in the case of one-time pads, the information-theoretic guarantee of perfect secrecy only holds if the key material comes from a true random source with high entropy, and thus just any kind of pseudorandom number generator is insufficient.
Ideally, the generation of random numbers in CSPRNGs uses entropy obtained from a high-quality source, generally the operating system's randomness API. However, unexpected correlations have been found in several such ostensibly independent processes. From an information-theoretic point of view, the amount of randomness, the entropy that can be generated, is equal to the entropy provided by the system. But sometimes, in practical situations, numbers are needed with more randomness than the available entropy can provide. Also, the processes to extract randomness from a running system are slow in actual practice. In such instances, a CSPRNG can sometimes be used. A CSPRNG can "stretch" the available entropy over more bits.
== Requirements ==
The requirements of an ordinary PRNG are also satisfied by a cryptographically secure PRNG, but the reverse is not true. CSPRNG requirements fall into two groups:
They pass statistical randomness tests:
Every CSPRNG should satisfy the next-bit test. That is, given the first k bits of a random sequence, there is no polynomial-time algorithm that can predict the (k+1)th bit with probability of success non-negligibly better than 50%. Andrew Yao proved in 1982 that a generator passing the next-bit test will pass all other polynomial-time statistical tests for randomness.
They hold up well under serious attack, even when part of their initial or running state becomes available to an attacker:
Every CSPRNG should withstand "state compromise extension attacks". In the event that part or all of its state has been revealed (or guessed correctly), it should be impossible to reconstruct the stream of random numbers prior to the revelation. Additionally, if there is an entropy input while running, it should be infeasible to use knowledge of the input's state to predict future conditions of the CSPRNG state.
For instance, if the PRNG under consideration produces output by computing bits of pi in sequence, starting from some unknown point in the binary expansion, it may well satisfy the next-bit test and thus be statistically random, as pi is conjectured to be a normal number. However, this algorithm is not cryptographically secure; an attacker who determines which bit of pi is currently in use (i.e. the state of the algorithm) will be able to calculate all preceding bits as well.
Most PRNGs are not suitable for use as CSPRNGs and will fail on both counts. First, while most PRNGs' outputs appear random to assorted statistical tests, they do not resist determined reverse engineering; specialized statistical tests tuned to such a PRNG can show that its output is not truly random. Second, for most PRNGs, when their state has been revealed, all past random numbers can be retrodicted, allowing an attacker to read all past messages, as well as future ones.
CSPRNGs are designed explicitly to resist this type of cryptanalysis.
== Definitions ==
In the asymptotic setting, a family of deterministic polynomial-time computable functions G_k : {0,1}^k → {0,1}^p(k), for some polynomial p, is a pseudorandom number generator (PRNG, or PRG in some references) if it stretches the length of its input (p(k) > k for any k), and if its output is computationally indistinguishable from true randomness, i.e. for any probabilistic polynomial-time algorithm A, which outputs 1 or 0 as a distinguisher,

| Pr_{x←{0,1}^k}[A(G(x)) = 1] − Pr_{r←{0,1}^p(k)}[A(r) = 1] | < μ(k)

for some negligible function μ. (The notation x ← X means that x is chosen uniformly at random from the set X.)
There is an equivalent characterization: for any function family G_k : {0,1}^k → {0,1}^p(k), G is a PRNG if and only if the next output bit of G cannot be predicted by a polynomial-time algorithm.
A forward-secure PRNG with block length t(k) is a PRNG G_k : {0,1}^k → {0,1}^k × {0,1}^t(k), where the input string s_i with length k is the current state at period i, and the output (s_{i+1}, y_i) consists of the next state s_{i+1} and the pseudorandom output block y_i of period i, that withstands state compromise extensions in the following sense: if the initial state s_1 is chosen uniformly at random from {0,1}^k, then for any i, the sequence (y_1, y_2, …, y_i, s_{i+1}) must be computationally indistinguishable from (r_1, r_2, …, r_i, s_{i+1}), in which the r_i are chosen uniformly at random from {0,1}^t(k).
Any PRNG G : {0,1}^k → {0,1}^p(k) can be turned into a forward-secure PRNG with block length p(k) − k by splitting its output into the next state and the actual output. This is done by setting G(s) = G_0(s) ‖ G_1(s), in which |G_0(s)| = |s| = k and |G_1(s)| = p(k) − k; then G is a forward-secure PRNG with G_0 as the next state and G_1 as the pseudorandom output block of the current period.
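This split can be sketched with SHA-256 as a stand-in for a length-doubling generator G (illustrative only: SHA-256 is merely conjectured to behave like a PRG here, and a real seed must of course be secret and random):

```python
import hashlib

K = 32  # state length k in bytes; output block length p(k) - k = 32 bytes

def G(s):
    """Length-doubling generator built from SHA-256 with domain separation.
    A stand-in for an actual proven PRG."""
    return hashlib.sha256(s + b"\x00").digest() + hashlib.sha256(s + b"\x01").digest()

def forward_secure_step(state):
    """Split G(s) = G0(s) || G1(s): G0 is the next state, G1 the output block."""
    out = G(state)
    return out[:K], out[K:]

state = b"\x00" * K   # demo seed only; never use a fixed seed in practice
blocks = []
for _ in range(3):
    state, y = forward_secure_step(state)
    blocks.append(y)
# An attacker who compromises `state` now cannot recompute blocks[0..2],
# since that would require inverting G0.
print(len(blocks), all(len(y) == K for y in blocks))  # 3 True
```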
== Entropy extraction ==
Santha and Vazirani proved that several bit streams with weak randomness can be combined to produce a higher-quality, quasi-random bit stream.
Even earlier, John von Neumann proved that a simple algorithm can remove a considerable amount of the bias in any bit stream, which should be applied to each bit stream before using any variation of the Santha–Vazirani design.
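The von Neumann debiasing step mentioned above can be sketched as follows (a minimal illustration; it assumes the input bits are independent with a fixed bias):

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: read bits in non-overlapping pairs;
    emit 1 for (1,0), 0 for (0,1), and discard (0,0)/(1,1).
    For independent bits with Pr[1] = p, both kept outcomes occur with
    probability p(1-p), so the output is unbiased (at reduced yield)."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# A heavily biased stream still yields unbiased output bits:
print(von_neumann_extract([1, 1, 1, 0, 0, 1, 0, 0, 1, 0]))  # [1, 0, 1]
```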
== Designs ==
CSPRNG designs are divided into two classes:
Designs based on cryptographic primitives such as ciphers and cryptographic hashes
Designs based on mathematical problems thought to be hard
=== Designs based on cryptographic primitives ===
A secure block cipher can be converted into a CSPRNG by running it in counter mode using, for example, a special construct that NIST in SP 800-90A calls CTR DRBG. CTR_DRBG typically uses the Advanced Encryption Standard (AES).
AES-CTR_DRBG is often used as a random number generator in systems that use AES encryption.
The NIST CTR_DRBG scheme erases the key after the requested randomness is output by running additional cycles. This is wasteful from a performance perspective, but does not immediately cause issues with forward secrecy. However, realizing the performance implications, the NIST recommends an "extended AES-CTR-DRBG interface" for its Post-Quantum Cryptography Project submissions. This interface allows multiple sets of randomness to be generated without intervening erasure, only erasing when the user explicitly signals the end of requests. As a result, the key could remain in memory for an extended time if the "extended interface" is misused. Newer "fast-key-erasure" RNGs erase the key with randomness as soon as randomness is requested.
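The fast-key-erasure idea can be sketched as follows. This is a toy illustration of the data flow only: real designs use AES-CTR or ChaCha20 to expand the key, whereas this sketch substitutes SHA-256 (which is not in the standard, just a stdlib stand-in):

```python
import hashlib

class FastKeyErasureRNG:
    """Sketch of fast key erasure: each request expands the current key into
    (new key || output) and overwrites the old key *before* returning output,
    so a later state compromise cannot reveal past output."""
    def __init__(self, seed):
        self._key = hashlib.sha256(seed).digest()

    def random_bytes(self):
        new_key = hashlib.sha256(self._key + b"\x00").digest()
        output = hashlib.sha256(self._key + b"\x01").digest()
        self._key = new_key   # old key erased (overwritten) immediately
        return output

rng = FastKeyErasureRNG(b"\x00" * 32)   # demo seed; must be secret in practice
a, b = rng.random_bytes(), rng.random_bytes()
print(len(a), a != b)  # 32 True
```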
A stream cipher can be converted into a CSPRNG. This has been done with RC4, ISAAC, and ChaCha20, to name a few.
A cryptographically secure hash might also be a base of a good CSPRNG, using, for example, a construct that NIST calls Hash DRBG.
An HMAC primitive can be used as a base of a CSPRNG, for example, as part of the construct that NIST calls HMAC DRBG.
=== Number-theoretic designs ===
The Blum Blum Shub algorithm has a security proof based on the difficulty of the quadratic residuosity problem. Since the only known way to solve that problem is to factor the modulus, it is generally regarded that the difficulty of integer factorization provides a conditional security proof for the Blum Blum Shub algorithm. However the algorithm is very inefficient and therefore impractical unless extreme security is needed.
The Blum–Micali algorithm has a security proof based on the difficulty of the discrete logarithm problem but is also very inefficient.
Daniel Brown of Certicom wrote a 2006 security proof for Dual EC DRBG, based on the assumed hardness of the Decisional Diffie–Hellman assumption, the x-logarithm problem, and the truncated point problem. The 2006 proof explicitly assumes a lower outlen (amount of bits provided per iteration) than in the Dual_EC_DRBG standard, and that the P and Q in the Dual_EC_DRBG standard (which were revealed in 2013 to be probably backdoored by NSA) are replaced with non-backdoored values.
=== Practical schemes ===
"Practical" CSPRNG schemes not only include a CSPRNG algorithm, but also a way to initialize ("seed") it while keeping the seed secret. A number of such schemes have been defined, including:
Implementations of /dev/random in Unix-like systems.
Yarrow, which attempts to evaluate the entropic quality of its seeding inputs, and uses SHA-1 and 3DES internally. Yarrow was used in macOS and other Apple operating systems up until about December 2019, after which Apple switched to Fortuna.
Fortuna, the successor to Yarrow, which does not attempt to evaluate the entropic quality of its inputs; it uses SHA-256 and "any good block cipher". Fortuna is used in FreeBSD. Apple changed to Fortuna for most or all Apple OSs beginning around Dec. 2019.
The Linux kernel CSPRNG, which uses ChaCha20 to generate data, and BLAKE2s to ingest entropy.
arc4random, a CSPRNG in Unix-like systems that seeds from /dev/random. It was originally based on RC4, but all main implementations now use ChaCha20.
CryptGenRandom, part of Microsoft's CryptoAPI, offered on Windows. Different versions of Windows use different implementations.
ANSI X9.17 standard (Financial Institution Key Management (wholesale)), which has been adopted as a FIPS standard as well. It takes as input a TDEA (keying option 2) key bundle k and (the initial value of) a 64-bit random seed s. Each time a random number is required, it executes the following steps:
Obviously, the technique is easily generalized to any block cipher; AES has been suggested. If the key k is leaked, the entire X9.17 stream can be predicted; this weakness is cited as a reason for creating Yarrow.
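The X9.17 generator's three steps (T = E_k(D) for a date/time value D; output x = E_k(s ⊕ T); new seed s' = E_k(x ⊕ T)) can be sketched as follows. Note the block cipher here is a keyed-hash stand-in purely to show the data flow, since the standard's TDEA is not in the Python standard library:

```python
import hashlib

def E(key, block):
    """Stand-in for the X9.17 block cipher E_k (TDEA in the standard),
    truncated to the 64-bit block size; illustrative only."""
    return hashlib.sha256(key + block).digest()[:8]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def x917_step(key, seed, timestamp):
    """One X9.17 step: T = E_k(D); x = E_k(s XOR T); s' = E_k(x XOR T)."""
    T = E(key, timestamp)
    x = E(key, xor(seed, T))
    new_seed = E(key, xor(x, T))
    return x, new_seed

seed = b"\x00" * 8   # demo values; the real key and seed must be secret
out1, seed = x917_step(b"demo key", seed, b"\x00" * 7 + b"\x01")
out2, seed = x917_step(b"demo key", seed, b"\x00" * 7 + b"\x02")
print(len(out1), out1 != out2)  # 8 True
```

The sketch also makes the cited weakness visible: everything except the timestamp is derived from k and s, so leaking k lets an attacker who can guess timestamps reproduce the stream.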
All these above-mentioned schemes, save for X9.17, also mix the state of a CSPRNG with an additional source of entropy. They are therefore not "pure" pseudorandom number generators, in the sense that the output is not completely determined by their initial state. This addition aims to prevent attacks even if the initial state is compromised.
== Standards ==
Several CSPRNGs have been standardized. For example:
FIPS 186-4
NIST SP 800-90A
NIST SP 800-90A Rev.1
ANSI X9.17-1985 Appendix C
ANSI X9.31-1998 Appendix A.2.4
ANSI X9.62-1998 Annex A.4, obsoleted by ANSI X9.62-2005, Annex D (HMAC_DRBG)
A good reference is maintained by NIST.
There are also standards for statistical testing of new CSPRNG designs:
A Statistical Test Suite for Random and Pseudorandom Number Generators, NIST Special Publication 800-22.
== Security flaws ==
=== NSA kleptographic backdoor in the Dual_EC_DRBG PRNG ===
The Guardian and The New York Times reported in 2013 that the National Security Agency (NSA) inserted a backdoor into a pseudorandom number generator (PRNG) of NIST SP 800-90A, which allows the NSA to readily decrypt material that was encrypted with the aid of Dual EC DRBG. Both papers reported that, as independent security experts long suspected, the NSA had been introducing weaknesses into CSPRNG standard 800-90; this being confirmed for the first time by one of the top-secret documents leaked to The Guardian by Edward Snowden. The NSA worked covertly to get its own version of the NIST draft security standard approved for worldwide use in 2006. The leaked document states that "eventually, NSA became the sole editor". In spite of the known potential for a kleptographic backdoor and other known significant deficiencies with Dual_EC_DRBG, several companies such as RSA Security continued using Dual_EC_DRBG until the backdoor was confirmed in 2013. RSA Security received a $10 million payment from the NSA to do so.
=== DUHK attack ===
On October 23, 2017, Shaanan Cohney, Matthew Green, and Nadia Heninger, cryptographers at the University of Pennsylvania and Johns Hopkins University, released details of the DUHK (Don't Use Hard-coded Keys) attack on WPA2 where hardware vendors use a hardcoded seed key for the ANSI X9.31 RNG algorithm, stating "an attacker can brute-force encrypted data to discover the rest of the encryption parameters and deduce the master encryption key used to encrypt web sessions or virtual private network (VPN) connections."
=== Japanese PURPLE cipher machine ===
During World War II, Japan used a cipher machine for diplomatic communications; the United States was able to crack it and read its messages, mostly because the "key values" used were insufficiently random.
== References ==
== External links ==
RFC 4086, Randomness Requirements for Security
Java "entropy pool" for cryptographically secure unpredictable random numbers. Archived 2008-12-02 at the Wayback Machine
Java standard class providing a cryptographically strong pseudo-random number generator (PRNG).
Cryptographically Secure Random number on Windows without using CryptoAPI
Conjectured Security of the ANSI-NIST Elliptic Curve RNG, Daniel R. L. Brown, IACR ePrint 2006/117.
A Security Analysis of the NIST SP 800-90 Elliptic Curve Random Number Generator, Daniel R. L. Brown and Kristian Gjosteen, IACR ePrint 2007/048. To appear in CRYPTO 2007.
Cryptanalysis of the Dual Elliptic Curve Pseudorandom Generator, Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/190.
Efficient Pseudorandom Generators Based on the DDH Assumption, Reza Rezaeian Farashahi and Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/321.
Analysis of the Linux Random Number Generator, Zvi Gutterman and Benny Pinkas and Tzachy Reinman.
NIST Statistical Test Suite documentation and software download.
Pocklington's algorithm is a technique for solving a congruence of the form {\displaystyle x^{2}\equiv a{\pmod {p}},} where x and a are integers and a is a quadratic residue.
The algorithm is one of the first efficient methods to solve such a congruence. It was described by H.C. Pocklington in 1917.
== The algorithm ==
(Note: all {\displaystyle \equiv } are taken to mean {\displaystyle {\pmod {p}}}, unless indicated otherwise.)
Inputs:
p, an odd prime
a, an integer which is a quadratic residue {\displaystyle {\pmod {p}}}.
Outputs:
x, an integer satisfying {\displaystyle x^{2}\equiv a}. Note that if x is a solution, −x is a solution as well, and since p is odd, {\displaystyle x\neq -x}. So there is always a second solution when one is found.
=== Solution method ===
Pocklington separates 3 different cases for p:
The first case: if {\displaystyle p=4m+3}, with {\displaystyle m\in \mathbb {N} }, the solution is {\displaystyle x\equiv \pm a^{m+1}}.
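The first case is a one-line computation: since p = 4m + 3, the exponent m + 1 equals (p + 1)/4. A minimal Python sketch (the function name is illustrative, not from the source):

```python
def sqrt_case1(a: int, p: int) -> int:
    """One square root of a modulo p when p = 4m + 3.

    Assumes p is an odd prime with p % 4 == 3 and that a is a
    quadratic residue mod p; the other root is p - x.
    """
    assert p % 4 == 3
    m = (p - 3) // 4
    return pow(a, m + 1, p)  # a^(m+1) = a^((p+1)/4) mod p
```

For instance, sqrt_case1(18, 23) returns 8, matching Example 1 below.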
The second case: if {\displaystyle p=8m+5}, with {\displaystyle m\in \mathbb {N} } and {\displaystyle a^{2m+1}\equiv 1}, the solution is {\displaystyle x\equiv \pm a^{m+1}}.
If instead {\displaystyle a^{2m+1}\equiv -1}, then 2 is a (quadratic) non-residue, so {\displaystyle 4^{2m+1}\equiv -1}. This means that {\displaystyle (4a)^{2m+1}\equiv 1}, so {\displaystyle y\equiv \pm (4a)^{m+1}} is a solution of {\displaystyle y^{2}\equiv 4a}. Hence {\displaystyle x\equiv \pm y/2} or, if y is odd, {\displaystyle x\equiv \pm (p+y)/2}.
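The second case can be sketched the same way; the division by 2 becomes multiplication by the modular inverse of 2. A hedged Python sketch (illustrative name; pow(2, -1, p) needs Python 3.8+):

```python
def sqrt_case2(a: int, p: int) -> int:
    """One square root of a modulo p when p = 8m + 5.

    Assumes p is prime with p % 8 == 5 and a is a quadratic residue.
    """
    assert p % 8 == 5
    m = (p - 5) // 8
    if pow(a, 2 * m + 1, p) == 1:
        return pow(a, m + 1, p)
    # Otherwise a^(2m+1) = -1, so y = (4a)^(m+1) satisfies y^2 = 4a,
    # and x = y/2 is taken modulo p via the inverse of 2.
    y = pow(4 * a, m + 1, p)
    return (y * pow(2, -1, p)) % p
```

For the congruence of Example 2 below, sqrt_case2(10, 13) returns 7.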
The third case: if {\displaystyle p=8m+1}, put {\displaystyle D\equiv -a}, so the equation to solve becomes {\displaystyle x^{2}+D\equiv 0}. Now find by trial and error {\displaystyle t_{1}} and {\displaystyle u_{1}} so that {\displaystyle N=t_{1}^{2}-Du_{1}^{2}} is a quadratic non-residue. Furthermore, let
{\displaystyle t_{n}={\frac {(t_{1}+u_{1}{\sqrt {D}})^{n}+(t_{1}-u_{1}{\sqrt {D}})^{n}}{2}},\qquad u_{n}={\frac {(t_{1}+u_{1}{\sqrt {D}})^{n}-(t_{1}-u_{1}{\sqrt {D}})^{n}}{2{\sqrt {D}}}}.}
The following equalities now hold:
{\displaystyle t_{m+n}=t_{m}t_{n}+Du_{m}u_{n},\quad u_{m+n}=t_{m}u_{n}+t_{n}u_{m}\quad {\mbox{and}}\quad t_{n}^{2}-Du_{n}^{2}=N^{n}.}
Supposing that p is of the form {\displaystyle 4m+1} (which is true if p is of the form {\displaystyle 8m+1}), D is a quadratic residue and
{\displaystyle t_{p}\equiv t_{1}^{p}\equiv t_{1},\quad u_{p}\equiv u_{1}^{p}D^{(p-1)/2}\equiv u_{1}.}
Now the equations
{\displaystyle t_{1}\equiv t_{p-1}t_{1}+Du_{p-1}u_{1}\quad {\mbox{and}}\quad u_{1}\equiv t_{p-1}u_{1}+t_{1}u_{p-1}}
give a solution
{\displaystyle t_{p-1}\equiv 1,\quad u_{p-1}\equiv 0.}
Let {\displaystyle p-1=2r}. Then {\displaystyle 0\equiv u_{p-1}\equiv 2t_{r}u_{r}}. This means that either {\displaystyle t_{r}} or {\displaystyle u_{r}} is divisible by p. If it is {\displaystyle u_{r}}, put {\displaystyle r=2s} and proceed similarly with {\displaystyle 0\equiv 2t_{s}u_{s}}. Not every {\displaystyle u_{i}} is divisible by p, for {\displaystyle u_{1}} is not. The case {\displaystyle u_{m}\equiv 0} with m odd is impossible, because {\displaystyle t_{m}^{2}-Du_{m}^{2}\equiv N^{m}} holds and this would mean that {\displaystyle t_{m}^{2}} is congruent to a quadratic non-residue, which is a contradiction. So this loop stops when {\displaystyle t_{l}\equiv 0} for a particular l. This gives {\displaystyle -Du_{l}^{2}\equiv N^{l}}, and because {\displaystyle -D} is a quadratic residue, l must be even. Put {\displaystyle l=2k}. Then {\displaystyle 0\equiv t_{l}\equiv t_{k}^{2}+Du_{k}^{2}}. So the solution of {\displaystyle x^{2}+D\equiv 0} is obtained by solving the linear congruence {\displaystyle u_{k}x\equiv \pm t_{k}}.
== Examples ==
The following are 4 examples, corresponding to the 3 different cases into which Pocklington divided the forms of p. All {\displaystyle \equiv } are taken with the modulus of the example.
=== Example 0 ===
{\displaystyle x^{2}\equiv 43{\pmod {47}}.}
This is the first case; according to the algorithm, {\displaystyle x\equiv 43^{(47+1)/4}=43^{12}\equiv 2}, but then {\displaystyle x^{2}=2^{2}=4}, not 43, so we should not apply the algorithm at all. The reason the algorithm is not applicable is that a = 43 is a quadratic non-residue for p = 47.
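The residuosity check that Example 0 relies on is Euler's criterion: for an odd prime p and gcd(a, p) = 1, a is a quadratic residue exactly when a^((p-1)/2) ≡ 1 (mod p). A quick sketch:

```python
def is_quadratic_residue(a: int, p: int) -> bool:
    """Euler's criterion; assumes p is an odd prime and p does not divide a."""
    return pow(a, (p - 1) // 2, p) == 1

# 43 is a non-residue mod 47, so the algorithm does not apply in Example 0,
# while 18 is a residue mod 23 (Example 1).
```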
=== Example 1 ===
Solve the congruence
{\displaystyle x^{2}\equiv 18{\pmod {23}}.}
The modulus is 23. This is {\displaystyle 23=4\cdot 5+3}, so {\displaystyle m=5}. The solution should be {\displaystyle x\equiv \pm 18^{6}\equiv \pm 8{\pmod {23}}}, which is indeed true: {\displaystyle (\pm 8)^{2}\equiv 64\equiv 18{\pmod {23}}}.
=== Example 2 ===
Solve the congruence
{\displaystyle x^{2}\equiv 10{\pmod {13}}.}
The modulus is 13. This is {\displaystyle 13=8\cdot 1+5}, so {\displaystyle m=1}. Verifying, {\displaystyle 10^{2m+1}\equiv 10^{3}\equiv -1{\pmod {13}}}. So the solution is {\displaystyle x\equiv \pm y/2\equiv \pm (4a)^{2}/2\equiv \pm 800\equiv \pm 7{\pmod {13}}}. This is indeed true: {\displaystyle (\pm 7)^{2}\equiv 49\equiv 10{\pmod {13}}}.
=== Example 3 ===
Solve the congruence
{\displaystyle x^{2}\equiv 13{\pmod {17}}}.
For this, write {\displaystyle x^{2}-13\equiv 0}. First find {\displaystyle t_{1}} and {\displaystyle u_{1}} such that {\displaystyle N=t_{1}^{2}+13u_{1}^{2}} is a quadratic non-residue. Take for example {\displaystyle t_{1}=3,u_{1}=1}; then {\displaystyle N=22\equiv 5} is a non-residue. Now find {\displaystyle t_{8}}, {\displaystyle u_{8}} by computing (with {\displaystyle D=-13})
{\displaystyle t_{2}=t_{1}t_{1}-13u_{1}u_{1}=9-13=-4\equiv 13{\pmod {17}},}
{\displaystyle u_{2}=t_{1}u_{1}+t_{1}u_{1}=3+3\equiv 6{\pmod {17}}.}
And similarly {\displaystyle t_{4}=-299\equiv 7{\pmod {17}},u_{4}=156\equiv 3{\pmod {17}}}, such that
{\displaystyle t_{8}=-68\equiv 0{\pmod {17}},u_{8}=42\equiv 8{\pmod {17}}.}
Since {\displaystyle t_{8}\equiv 0}, the equation {\displaystyle 0\equiv t_{4}^{2}-13u_{4}^{2}\equiv 7^{2}-13\cdot 3^{2}{\pmod {17}}} holds, which leads to solving the equation {\displaystyle 3x\equiv \pm 7{\pmod {17}}}. This has solution {\displaystyle x\equiv \pm 8{\pmod {17}}}. Indeed, {\displaystyle (\pm 8)^{2}=64\equiv 13{\pmod {17}}}.
== References ==
Leonard Eugene Dickson, History of the Theory of Numbers, vol. 1, p. 222, Chelsea Publishing, 1952.
In computer science and graph theory, Karger's algorithm is a randomized algorithm to compute a minimum cut of a connected graph. It was invented by David Karger and first published in 1993.
The idea of the algorithm is based on the concept of contraction of an edge {\displaystyle (u,v)} in an undirected graph {\displaystyle G=(V,E)}. Informally speaking, the contraction of an edge merges the nodes u and v into one, reducing the total number of nodes of the graph by one. All other edges connecting either u or v are "reattached" to the merged node, effectively producing a multigraph. Karger's basic algorithm iteratively contracts randomly chosen edges until only two nodes remain; those nodes represent a cut in the original graph. By iterating this basic algorithm a sufficient number of times, a minimum cut can be found with high probability.
== The global minimum cut problem ==
A cut {\displaystyle (S,T)} in an undirected graph {\displaystyle G=(V,E)} is a partition of the vertices V into two non-empty, disjoint sets with {\displaystyle S\cup T=V}. The cutset of a cut consists of the edges {\displaystyle \{\,uv\in E\colon u\in S,v\in T\,\}} between the two parts. The size (or weight) of a cut in an unweighted graph is the cardinality of the cutset, i.e., the number of edges between the two parts,
{\displaystyle w(S,T)=|\{\,uv\in E\colon u\in S,v\in T\,\}|\,.}
There are {\displaystyle 2^{|V|}} ways of choosing for each vertex whether it belongs to S or to T, but two of these choices make S or T empty and do not give rise to cuts. Among the remaining choices, swapping the roles of S and T does not change the cut, so each cut is counted twice; therefore, there are {\displaystyle 2^{|V|-1}-1} distinct cuts.
The minimum cut problem is to find a cut of smallest size among these cuts.
For weighted graphs with positive edge weights {\displaystyle w\colon E\rightarrow \mathbf {R} ^{+}}, the weight of the cut is the sum of the weights of the edges between vertices in each part,
{\displaystyle w(S,T)=\sum _{uv\in E\colon u\in S,v\in T}w(uv)\,,}
which agrees with the unweighted definition for {\displaystyle w=1}.
A cut is sometimes called a “global cut” to distinguish it from an “s-t cut” for a given pair of vertices, which has the additional requirement that {\displaystyle s\in S} and {\displaystyle t\in T}. Every global cut is an s-t cut for some {\displaystyle s,t\in V}. Thus, the minimum cut problem can be solved in polynomial time by iterating over all choices of {\displaystyle s,t\in V} and solving the resulting minimum s-t cut problem using the max-flow min-cut theorem and a polynomial-time algorithm for maximum flow, such as the push-relabel algorithm, though this approach is not optimal. Better deterministic algorithms for the global minimum cut problem include the Stoer–Wagner algorithm, which has a running time of {\displaystyle O(mn+n^{2}\log n)}.
== Contraction algorithm ==
The fundamental operation of Karger’s algorithm is a form of edge contraction. The result of contracting the edge {\displaystyle e=\{u,v\}} is a new node uv. Every edge {\displaystyle \{w,u\}} or {\displaystyle \{w,v\}} for {\displaystyle w\notin \{u,v\}} to the endpoints of the contracted edge is replaced by an edge {\displaystyle \{w,uv\}} to the new node. Finally, the contracted nodes u and v with all their incident edges are removed. In particular, the resulting graph contains no self-loops. The result of contracting edge e is denoted {\displaystyle G/e}.
The contraction algorithm repeatedly contracts random edges in the graph, until only two nodes remain, at which point there is only a single cut.
The key idea of the algorithm is that it is far more likely for non-min-cut edges than min-cut edges to be randomly selected and lost to contraction, since min-cut edges are usually vastly outnumbered by non-min-cut edges. Hence, it is plausible that the min-cut edges will survive all the edge contractions, and the algorithm will correctly identify the min-cut.
procedure contract(G = (V, E)):
    while |V| > 2
        choose e ∈ E uniformly at random
        G ← G/e
    return the only cut in G
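A minimal Python sketch of the contraction procedure, using a union-find structure over the original vertices to represent merged super-nodes (names are illustrative; drawing a pooled edge and discarding self-loops is equivalent to choosing uniformly among the surviving multigraph edges):

```python
import random

def contract(edges, n_vertices):
    """Contract random edges until two super-nodes remain; return the cut size.

    edges: list of (u, v) pairs on vertices 0 .. n_vertices - 1 (connected graph).
    """
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pool = list(edges)
    remaining = n_vertices
    while remaining > 2:
        u, v = pool.pop(random.randrange(len(pool)))
        ru, rv = find(u), find(v)
        if ru != rv:               # ignore self-loops of the multigraph
            parent[ru] = rv
            remaining -= 1
    # The only cut left: edges whose endpoints lie in different super-nodes.
    return sum(1 for u, v in edges if find(u) != find(v))

def karger_min_cut(edges, n_vertices, repetitions=200):
    """Repeat the contraction and keep the smallest cut found."""
    return min(contract(edges, n_vertices) for _ in range(repetitions))
```

On two triangles joined by a single bridge edge, repeated runs find the minimum cut of 1 with high probability.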
When the graph is represented using adjacency lists or an adjacency matrix, a single edge contraction operation can be implemented with a linear number of updates to the data structure, for a total running time of {\displaystyle O(|V|^{2})}. Alternatively, the procedure can be viewed as an execution of Kruskal’s algorithm for constructing the minimum spanning tree in a graph where the edges have weights {\displaystyle w(e_{i})=\pi (i)} according to a random permutation {\displaystyle \pi }. Removing the heaviest edge of this tree results in two components that describe a cut. In this way, the contraction procedure can be implemented like Kruskal’s algorithm in time {\displaystyle O(|E|\log |E|)}.
The best known implementations use {\displaystyle O(|E|)} time and space, or {\displaystyle O(|E|\log |E|)} time and {\displaystyle O(|V|)} space, respectively.
=== Success probability of the contraction algorithm ===
In a graph {\displaystyle G=(V,E)} with {\displaystyle n=|V|} vertices, the contraction algorithm returns a minimum cut with polynomially small probability {\displaystyle {\binom {n}{2}}^{-1}}. Recall that every graph has {\displaystyle 2^{n-1}-1} cuts (by the discussion in the previous section), among which at most {\displaystyle {\tbinom {n}{2}}} can be minimum cuts. Therefore, the success probability for this algorithm is much better than the probability of picking a cut at random, which is at most {\displaystyle {\frac {\tbinom {n}{2}}{2^{n-1}-1}}}.
For instance, the cycle graph on n vertices has exactly {\displaystyle {\binom {n}{2}}} minimum cuts, given by every choice of 2 edges. The contraction procedure finds each of these with equal probability.
To further establish the lower bound on the success probability, let C denote the edges of a specific minimum cut of size k. The contraction algorithm returns C if none of the random edges deleted by the algorithm belongs to the cutset C. In particular, the first edge contraction avoids C, which happens with probability {\displaystyle 1-k/|E|}. The minimum degree of G is at least k (otherwise a minimum-degree vertex would induce a smaller cut, in which one of the two partitions contains only the minimum-degree vertex), so {\displaystyle |E|\geqslant nk/2}. Thus, the probability that the contraction algorithm picks an edge from C is
{\displaystyle {\frac {k}{|E|}}\leqslant {\frac {k}{nk/2}}={\frac {2}{n}}.}
The probability {\displaystyle p_{n}} that the contraction algorithm on an n-vertex graph avoids C satisfies the recurrence {\displaystyle p_{n}\geqslant \left(1-{\frac {2}{n}}\right)p_{n-1}}, with {\displaystyle p_{2}=1}, which can be expanded as
{\displaystyle p_{n}\geqslant \prod _{i=0}^{n-3}{\Bigl (}1-{\frac {2}{n-i}}{\Bigr )}=\prod _{i=0}^{n-3}{\frac {n-i-2}{n-i}}={\frac {n-2}{n}}\cdot {\frac {n-3}{n-1}}\cdot {\frac {n-4}{n-2}}\cdots {\frac {3}{5}}\cdot {\frac {2}{4}}\cdot {\frac {1}{3}}={\binom {n}{2}}^{-1}\,.}
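The telescoping product can be verified exactly with rational arithmetic; a small checking sketch (not part of the original analysis):

```python
from fractions import Fraction
from math import comb

def contraction_success_lower_bound(n: int) -> Fraction:
    """Exact value of prod_{i=0}^{n-3} (1 - 2/(n - i)) for n >= 3."""
    prod = Fraction(1)
    for i in range(n - 2):  # i = 0, 1, ..., n - 3
        prod *= 1 - Fraction(2, n - i)
    return prod

# The product telescopes to 1 / C(n, 2) for every n >= 3.
for n in range(3, 30):
    assert contraction_success_lower_bound(n) == Fraction(1, comb(n, 2))
```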
=== Repeating the contraction algorithm ===
By repeating the contraction algorithm {\displaystyle T={\binom {n}{2}}\ln n} times with independent random choices and returning the smallest cut, the probability of not finding a minimum cut is
{\displaystyle \left[1-{\binom {n}{2}}^{-1}\right]^{T}\leq {\frac {1}{e^{\ln n}}}={\frac {1}{n}}\,.}
The total running time for T repetitions for a graph with n vertices and m edges is {\displaystyle O(Tm)=O(n^{2}m\log n)}.
== Karger–Stein algorithm ==
An extension of Karger’s algorithm due to David Karger and Clifford Stein achieves an order of magnitude improvement.
The basic idea is to perform the contraction procedure until the graph reaches t vertices.
procedure contract(G = (V, E), t):
    while |V| > t
        choose e ∈ E uniformly at random
        G ← G/e
    return G
The probability {\displaystyle p_{n,t}} that this contraction procedure avoids a specific cut C in an n-vertex graph is
{\displaystyle p_{n,t}\geq \prod _{i=0}^{n-t-1}{\Bigl (}1-{\frac {2}{n-i}}{\Bigr )}={\binom {t}{2}}{\Bigg /}{\binom {n}{2}}\,.}
This expression is approximately {\displaystyle t^{2}/n^{2}} and becomes less than {\displaystyle 1/2} around {\displaystyle t=n/{\sqrt {2}}}. In particular, the probability that an edge from C is contracted grows towards the end. This motivates the idea of switching to a slower algorithm after a certain number of contraction steps.
procedure fastmincut(G = (V, E)):
    if |V| ≤ 6:
        return contract(G, 2)
    else:
        t ← ⌈1 + |V|/√2⌉
        G1 ← contract(G, t)
        G2 ← contract(G, t)
        return min{fastmincut(G1), fastmincut(G2)}
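A compact Python sketch of the recursion (illustrative names; contract_to reduces a multigraph, given as an edge list, down to t super-nodes, and fastmincut returns the smaller of the two recursive cut sizes):

```python
import math
import random

def contract_to(edges, t):
    """Contract random edges of the multigraph until t super-nodes remain."""
    nodes = {u for e in edges for u in e}
    parent = {u: u for u in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pool = list(edges)
    remaining = len(nodes)
    while remaining > t and pool:
        u, v = pool.pop(random.randrange(len(pool)))
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            remaining -= 1
    # Surviving multigraph: edges between distinct super-nodes.
    return [(find(u), find(v)) for u, v in edges if find(u) != find(v)]

def fastmincut(edges):
    """Karger–Stein: recurse on two independent contractions to t vertices."""
    n = len({u for e in edges for u in e})
    if n <= 6:
        return len(contract_to(edges, 2))
    t = math.ceil(1 + n / math.sqrt(2))
    return min(fastmincut(contract_to(edges, t)),
               fastmincut(contract_to(edges, t)))
```

With a handful of independent repetitions, the recursion finds small cuts reliably, e.g. the single bridge between two cliques.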
=== Analysis ===
The contraction parameter t is chosen so that each call to contract has probability at least 1/2 of success (that is, of avoiding the contraction of an edge from a specific cutset C). This allows the successful part of the recursion tree to be modeled as a random binary tree generated by a critical Galton–Watson process, and to be analyzed accordingly.
The probability {\displaystyle P(n)} that this random tree of successful calls contains a long-enough path to reach the base of the recursion and find C is given by the recurrence relation
{\displaystyle P(n)=1-\left(1-{\frac {1}{2}}P\left({\Bigl \lceil }1+{\frac {n}{\sqrt {2}}}{\Bigr \rceil }\right)\right)^{2}}
with solution {\displaystyle P(n)=\Omega \left({\frac {1}{\log n}}\right)}. The running time of fastmincut satisfies
{\displaystyle T(n)=2T\left({\Bigl \lceil }1+{\frac {n}{\sqrt {2}}}{\Bigr \rceil }\right)+O(n^{2})}
with solution {\displaystyle T(n)=O(n^{2}\log n)}. To achieve error probability {\displaystyle O(1/n)}, the algorithm can be repeated {\displaystyle O(\log n/P(n))} times, for an overall running time of
{\displaystyle T(n)\cdot {\frac {\log n}{P(n)}}=O(n^{2}\log ^{3}n)\,.}
This is an order of magnitude improvement over Karger’s original algorithm.
=== Improvement bound ===
To determine a min-cut, one has to touch every edge in the graph at least once, which takes {\displaystyle \Theta (n^{2})} time in a dense graph. The Karger–Stein min-cut algorithm runs in {\displaystyle O(n^{2}\ln ^{O(1)}n)} time, which is very close to this lower bound.
== References ==
In mathematics and computer science, the method of conditional probabilities is a systematic method for converting non-constructive probabilistic existence proofs into efficient deterministic algorithms that explicitly construct the desired object.
Often, the probabilistic method is used to prove the existence of mathematical objects with some desired combinatorial properties. The proofs in that method work by showing that a random object, chosen from some probability distribution, has the desired properties with positive probability. Consequently, they are nonconstructive — they don't explicitly describe an efficient method for computing the desired objects.
The method of conditional probabilities converts such a proof, in a "very precise sense", into an efficient deterministic algorithm, one that is guaranteed to compute an object with the desired properties. That is, the method derandomizes the proof. The basic idea is to replace each random choice in a random experiment by a deterministic choice, so as to keep the conditional probability of failure, given the choices so far, below 1.
The method is particularly relevant in the context of randomized rounding (which uses the probabilistic method to design approximation algorithms).
When applying the method of conditional probabilities, the technical term pessimistic estimator refers to a quantity used in place of the true conditional probability (or conditional expectation) underlying the proof.
== Overview ==
Raghavan gives this description:
We first show the existence of a provably good approximate solution using the probabilistic method... [We then] show that the probabilistic existence proof can be converted, in a very precise sense, into a deterministic approximation algorithm.
Raghavan is discussing the method in the context of randomized rounding, but it works with the probabilistic method in general.
To apply the method to a probabilistic proof, the randomly chosen object in the proof must be choosable by a random experiment that consists of a sequence of "small" random choices.
Here is a trivial example to illustrate the principle.
Lemma: It is possible to flip three coins so that the number of tails is at least 2.
Probabilistic proof. If the three coins are flipped randomly, the expected number of tails is 1.5. Thus, there must be some outcome (way of flipping the coins) so that the number of tails is at least 1.5. Since the number of tails is an integer, in such an outcome there are at least 2 tails. QED
In this example the random experiment consists of flipping three fair coins. The experiment is illustrated by the rooted tree in the adjacent diagram. There are eight outcomes, each corresponding to a leaf in the tree. A trial of the random experiment corresponds to taking a random walk from the root (the top node in the tree, where no coins have been flipped) to a leaf. The successful outcomes are those in which at least two coins came up tails. The interior nodes in the tree correspond to partially determined outcomes, where only 0, 1, or 2 of the coins have been flipped so far.
To apply the method of conditional probabilities, one focuses on the conditional probability of failure, given the choices so far as the experiment proceeds step by step.
In the diagram, each node is labeled with this conditional probability. (For example, if only the first coin has been flipped, and it comes up tails, that corresponds to the second child of the root. Conditioned on that partial state, the probability of failure is 0.25.)
The method of conditional probabilities replaces the random root-to-leaf walk in the random experiment by a deterministic root-to-leaf walk, where each step is chosen to inductively maintain the following invariant:
the conditional probability of failure, given the current state, is less than 1.
In this way, it is guaranteed to arrive at a leaf with label 0, that is, a successful outcome.
The invariant holds initially (at the root), because the original proof showed that the (unconditioned) probability of failure is less than 1. The conditional probability at any interior node is the average of the conditional probabilities of its children. The latter property is important because it implies that any interior node whose conditional probability is less than 1 has at least one child whose conditional probability is less than 1. Thus, from any interior node, one can always choose some child to walk to so as to maintain the invariant. Since the invariant holds at the end, when the walk arrives at a leaf and all choices have been determined, the outcome reached in this way must be a successful one.
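The three-coin walk can be carried out mechanically: at each step, compute the conditional failure probability of both children and step to one that keeps it below 1. A small illustrative sketch (not from the source), with tails encoded as 1:

```python
from itertools import product

def failure_probability(prefix):
    """P(fewer than 2 tails | the coins flipped so far), tails = 1."""
    remaining = 3 - len(prefix)
    outcomes = [prefix + rest for rest in product((0, 1), repeat=remaining)]
    return sum(1 for o in outcomes if sum(o) < 2) / len(outcomes)

walk = ()
for _ in range(3):
    # Step to a child whose conditional failure probability is smallest;
    # at least one child is below 1 whenever the current node is.
    walk = min((walk + (b,) for b in (0, 1)), key=failure_probability)

assert sum(walk) >= 2  # the deterministic walk ends at a successful leaf
```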
== Efficiency ==
In a typical application of the method, the goal is to be able to implement the resulting deterministic process by a reasonably efficient algorithm (the word "efficient" usually means an algorithm that runs in polynomial time), even though typically the number of possible outcomes is huge (exponentially large). For example, consider the task with coin flipping, but extended to n flips for large n.
In the ideal case, given a partial state (a node in the tree), the conditional probability of failure (the label on the node) can be efficiently and exactly computed. (The example above is like this.) If this is so, then the algorithm can select the next node to go to by computing the conditional probabilities at each of the children of the current node, then moving to any child whose conditional probability is less than 1. As discussed above, there is guaranteed to be such a node.
Unfortunately, in most applications, the conditional probability of failure is not easy to compute efficiently. There are two standard and related techniques for dealing with this:
=== Using a conditional expectation ===
Many probabilistic proofs work as follows: they implicitly define a random variable Q, show that (i) the expectation of Q is at most (or at least) some threshold value, and (ii) in any outcome where Q is at most (at least) this threshold, the outcome is a success. Then (i) implies that there exists an outcome where Q is at most (at least) the threshold, and this and (ii) imply that there is a successful outcome. (In the example above, Q is the number of tails, which should be at least the threshold 1.5. In many applications, Q is the number of "bad" events (not necessarily disjoint) that occur in a given outcome, where each bad event corresponds to one way the experiment can fail, and the expected number of bad events that occur is less than 1.)
In this case, to keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of Q below (or above) the threshold. To do this, instead of computing the conditional probability of failure, the algorithm computes the conditional expectation of Q and proceeds accordingly: at each interior node, there is some child whose conditional expectation is at most (at least) the node's conditional expectation; the algorithm moves from the current node to such a child, thus keeping the conditional expectation below (above) the threshold.
=== Using a pessimistic estimator ===
In some cases, as a proxy for the exact conditional expectation of the quantity Q, one uses an appropriately tight bound called a pessimistic estimator. The pessimistic estimator is a function of the current state. It should be an upper (or lower) bound for the conditional expectation of Q given the current state, and it should be non-increasing (or non-decreasing) in expectation with each random step of the experiment. Typically, a good pessimistic estimator can be computed by precisely deconstructing the logic of the original proof.
== Example using conditional expectations ==
This example demonstrates the method of conditional probabilities using a conditional expectation.
=== Max-Cut Lemma ===
Given any undirected graph G = (V, E), the Max cut problem is to color each vertex of the graph with one of two colors (say black or white) so as to maximize the number of edges whose endpoints have different colors. (Say such an edge is cut.)
Max-Cut Lemma: In any graph G = (V, E), at least |E|/2 edges can be cut.
Probabilistic proof. Color each vertex black or white by flipping a fair coin. By calculation, for any edge e in E, the probability that it is cut is 1/2. Thus, by linearity of expectation, the expected number of edges cut is |E|/2. Thus, there exists a coloring that cuts at least |E|/2 edges. QED
=== The method of conditional probabilities with conditional expectations ===
To apply the method of conditional probabilities, first model the random experiment as a sequence of small random steps. In this case it is natural to consider each step to be the choice of color for a particular vertex (so there are |V| steps).
Next, replace the random choice at each step by a deterministic choice, so as to keep the conditional probability of failure, given the vertices colored so far, below 1. (Here failure means that finally fewer than |E|/2 edges are cut.)
In this case, the conditional probability of failure is not easy to calculate. Indeed, the original proof did not calculate the probability of failure directly; instead, the proof worked by showing that the expected number of cut edges was at least |E|/2.
Let random variable Q be the number of edges cut. To keep the conditional probability of failure below 1, it suffices to keep the conditional expectation of Q at or above the threshold |E|/2. This is because, as long as the conditional expectation of Q is at least |E|/2, there must be some still-reachable outcome where Q is at least |E|/2, so the conditional probability of reaching such an outcome is positive. To keep the conditional expectation of Q at |E|/2 or above, the algorithm will, at each step, color the vertex under consideration so as to maximize the resulting conditional expectation of Q. This suffices, because there must be some child whose conditional expectation is at least the current state's conditional expectation (and thus at least |E|/2).
Given that some of the vertices are colored already, what is this conditional expectation? Following the logic of the original proof, the conditional expectation of the number of cut edges is
the number of edges whose endpoints are colored differently so far
+ (1/2)*(the number of edges with at least one endpoint not yet colored).
=== Algorithm ===
The algorithm colors each vertex to maximize the resulting value of the above conditional expectation. This is guaranteed to keep the conditional expectation at |E|/2 or above, and so is guaranteed to keep the conditional probability of failure below 1, which in turn guarantees a successful outcome. By calculation, the algorithm simplifies to the following:
1. For each vertex u in V (in any order):
2. Consider the already-colored neighboring vertices of u.
3. Among these vertices, if more are black than white, then color u white.
4. Otherwise, color u black.
Because of its derivation, this deterministic algorithm is guaranteed to cut at least half the edges of the given graph. This makes it a 0.5-approximation algorithm for Max-cut.
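The derandomized coloring above can be sketched in a few lines of Python. This is our own illustration (the edge-list graph representation and function names are not from the source), but it follows the algorithm exactly: each vertex is colored opposite to the majority color among its already-colored neighbors.

```python
def half_cut_coloring(vertices, edges):
    """Greedily 2-color vertices so that at least half the edges are cut.

    Each choice keeps the conditional expectation of the number of cut
    edges at or above |E|/2, as in the derivation above."""
    color = {}
    for u in vertices:
        # Colors of the already-colored neighbors of u.
        nbrs = [color[v] for a, b in edges for v in (a, b)
                if v != u and u in (a, b) and v in color]
        # If more neighbors are black than white, color u white; else black.
        color[u] = 'white' if nbrs.count('black') > nbrs.count('white') else 'black'
    return color

def cut_size(color, edges):
    """Number of edges whose endpoints received different colors."""
    return sum(1 for a, b in edges if color[a] != color[b])

# Example: on a triangle, the coloring cuts 2 of the 3 edges.
V, E = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
assert cut_size(half_cut_coloring(V, E), E) * 2 >= len(E)
```

The guarantee holds for any vertex order, since every step preserves the invariant on the conditional expectation.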
== Example using pessimistic estimators ==
The next example demonstrates the use of pessimistic estimators.
=== Turán's theorem ===
One way of stating Turán's theorem is the following:
Any graph G = (V, E) contains an independent set of size at least |V|/(D+1), where D = 2|E|/|V| is the average degree of the graph.
=== Probabilistic proof of Turán's theorem ===
Consider the following random process for constructing an independent set S:
1. Initialize S to be the empty set.
2. For each vertex u in V in random order:
3. If no neighbors of u are in S, add u to S
4. Return S.
Clearly the process computes an independent set. Any vertex u that is considered before all of its neighbors will be added to S. Thus, letting d(u) denote the degree of u, the probability that u is added to S is at least 1/(d(u)+1). By linearity of expectation, the expected size of S is at least
{\displaystyle \sum _{u\in V}{\frac {1}{d(u)+1}}~\geq ~{\frac {|V|}{D+1}}.}
(The inequality above follows because 1/(x+1) is convex in x, so the left-hand side is minimized, subject to the sum of the degrees being fixed at 2|E|, when each d(u) = D = 2|E|/|V|.) QED
=== The method of conditional probabilities using pessimistic estimators ===
In this case, the random process has |V| steps. Each step considers some not-yet considered vertex u and adds u to S if none of its neighbors have yet been added. Let random variable Q be the number of vertices added to S. The proof shows that E[Q] ≥ |V|/(D+1).
We will replace each random step by a deterministic step that keeps the conditional expectation of Q at or above |V|/(D+1). This will ensure a successful outcome, that is, one in which the independent set S has size at least |V|/(D+1), realizing the bound in Turán's theorem.
Given that the first t steps have been taken, let S(t) denote the vertices added so far. Let R(t) denote those vertices that have not yet been considered, and that have no neighbors in S(t). Given the first t steps, following the reasoning in the original proof, any given vertex w in R(t) has conditional probability at least 1/(d(w)+1) of being added to S, so the conditional expectation of Q is at least
{\displaystyle |S^{(t)}|~+~\sum _{w\in R^{(t)}}{\frac {1}{d(w)+1}}.}
Let Q(t) denote the above quantity, which is called a pessimistic estimator for the conditional expectation.
The proof showed that the pessimistic estimator is initially at least |V|/(D+1). (That is, Q(0) ≥ |V|/(D+1).) The algorithm will make each choice to keep the pessimistic estimator from decreasing, that is, so that Q(t+1) ≥ Q(t) for each t. Since the pessimistic estimator is a lower bound on the conditional expectation, this will ensure that the conditional expectation stays above |V|/(D+1), which in turn will ensure that the conditional probability of failure stays below 1.
Let u be the vertex considered by the algorithm in the next ((t+1)-st) step.
If u already has a neighbor in S, then u is not added to S and (by inspection of Q(t)), the pessimistic estimator is unchanged. If u does not have a neighbor in S, then u is added to S.
By calculation, if u is chosen randomly from the remaining vertices, the expected increase in the pessimistic estimator is non-negative. [The calculation. Conditioned on choosing a vertex in R(t), the probability that a given term 1/(d(w)+1) is dropped from the sum in the pessimistic estimator is at most (d(w)+1)/|R(t)|, so the expected decrease in each term in the sum is at most 1/|R(t)|. There are |R(t)| terms in the sum. Thus, the expected decrease in the sum is at most 1. Meanwhile, the size of S increases by 1.]
Thus, there must exist some choice of u that keeps the pessimistic estimator from decreasing.
=== Algorithm maximizing the pessimistic estimator ===
The algorithm below chooses each vertex u to maximize the resulting pessimistic estimator. By the previous considerations, this keeps the pessimistic estimator from decreasing and guarantees a successful outcome.
Below, N(t)(u) denotes the neighbors of u in R(t) (that is, neighbors of u that are neither in S nor have a neighbor in S).
1. Initialize S to be the empty set.
2. While there exists a not-yet-considered vertex u with no neighbor in S:
3. Add such a vertex u to S where u minimizes
{\displaystyle \sum _{w\in N^{(t)}(u)\cup \{u\}}{\frac {1}{d(w)+1}}}.
4. Return S.
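The algorithm above can be sketched in Python. The dictionary-of-sets graph representation and helper names below are our own choices, not from the source; the selection rule is exactly the one stated, choosing u to minimize the sum of 1/(d(w)+1) over N(t)(u) ∪ {u}.

```python
def turan_independent_set(adj):
    """Greedy derandomization of Turán's bound.

    adj maps each vertex to its set of neighbors. Returns an independent
    set of size at least |V|/(D+1), where D = 2|E|/|V| is the average degree."""
    d = {u: len(adj[u]) for u in adj}   # degrees in the original graph
    S = set()
    # R: vertices not yet considered that have no neighbor in S.
    R = set(adj)
    while R:
        # Choose u minimizing sum of 1/(d(w)+1) over N(t)(u) ∪ {u},
        # which keeps the pessimistic estimator from decreasing.
        u = min(R, key=lambda v: sum(1.0 / (d[w] + 1)
                                     for w in (adj[v] & R) | {v}))
        S.add(u)
        R -= {u} | adj[u]   # u's neighbors now have a neighbor in S
    return S

# Example: on a 5-cycle (D = 2), the set has size at least 5/3, so at least 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert len(turan_independent_set(cycle)) >= 2
```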
=== Algorithms that don't maximize the pessimistic estimator ===
For the method of conditional probabilities to work, it suffices if the algorithm keeps the pessimistic estimator from decreasing (or increasing, as appropriate). The algorithm does not necessarily have to maximize (or minimize) the pessimistic estimator. This gives some flexibility in deriving the algorithm. The next two algorithms illustrate this.
1. Initialize S to be the empty set.
2. While there exists a vertex u in the graph with no neighbor in S:
3. Add such a vertex u to S, where u minimizes d(u) (the initial degree of u).
4. Return S.
1. Initialize S to be the empty set.
2. While the remaining graph is not empty:
3. Add a vertex u to S, where u has minimum degree in the remaining graph.
4. Delete u and all of its neighbors from the graph.
5. Return S.
Each algorithm is analyzed with the same pessimistic estimator as before. With each step of either algorithm, the net increase in the pessimistic estimator is
{\displaystyle 1-\sum _{w\in N^{(t)}(u)\cup \{u\}}{\frac {1}{d(w)+1}},}
where N(t)(u) denotes the neighbors of u in the remaining graph (that is, in R(t)).
For the first algorithm, the net increase is non-negative because, by the choice of u,
{\displaystyle \sum _{w\in N^{(t)}(u)\cup \{u\}}{\frac {1}{d(w)+1}}\leq (d(u)+1){\frac {1}{d(u)+1}}=1},
where d(u) is the degree of u in the original graph.
For the second algorithm, the net increase is non-negative because, by the choice of u,
{\displaystyle \sum _{w\in N^{(t)}(u)\cup \{u\}}{\frac {1}{d(w)+1}}\leq (d'(u)+1){\frac {1}{d'(u)+1}}=1},
where d′(u) is the degree of u in the remaining graph.
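The second algorithm, together with a runtime check that the pessimistic estimator never decreases, can be sketched as follows. This is our own illustration (graph representation and names are assumptions, not from the source); the estimator uses degrees in the original graph, as in the analysis above.

```python
def min_degree_independent_set(adj):
    """Second algorithm above: repeatedly take a minimum-degree vertex of
    the remaining graph, then delete it and all of its neighbors."""
    d = {u: len(adj[u]) for u in adj}       # degrees in the original graph
    S, remaining = set(), set(adj)

    def estimator():
        # Pessimistic estimator: |S| + sum over remaining of 1/(d(w)+1).
        return len(S) + sum(1.0 / (d[w] + 1) for w in remaining)

    q = estimator()
    while remaining:
        # u has minimum degree in the remaining graph.
        u = min(remaining, key=lambda v: len(adj[v] & remaining))
        S.add(u)
        remaining -= {u} | adj[u]
        assert estimator() >= q - 1e-9      # the estimator never decreases
        q = estimator()
    return S
```

Each deleted vertex w satisfies d(w) ≥ d′(u), so at most d′(u)+1 terms, each at most 1/(d′(u)+1), leave the sum while |S| grows by 1; hence the assertion always holds.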
== See also ==
Probabilistic method
Derandomization
Randomized rounding
== References ==
== Further reading ==
The method of conditional probabilities is explained in several textbooks:
Alon, Noga; Spencer, Joel (2008). The probabilistic method. Wiley-Interscience Series in Discrete Mathematics and Optimization (Third ed.). Hoboken, NJ: John Wiley and Sons. pp. 250 et seq. ISBN 978-0-470-17020-5. MR 2437651. (cited pages in 2nd edition, ISBN 9780471653981)
Motwani, Rajeev; Raghavan, Prabhakar (25 August 1995). Randomized algorithms. Cambridge University Press. pp. 120–. ISBN 978-0-521-47465-8.
Vazirani, Vijay (5 December 2002), Approximation algorithms, Springer Verlag, pp. 130–, ISBN 978-3-540-65367-7
== External links ==
The probabilistic method — method of conditional probabilities, blog entry by Neal E. Young, accessed 19/04/2012. | Wikipedia/Method_of_conditional_probabilities |
In computational complexity theory, Yao's principle (also called Yao's minimax principle or Yao's lemma) relates the performance of randomized algorithms to deterministic (non-random) algorithms. It states that, for certain classes of algorithms, and certain measures of the performance of the algorithms, the following two quantities are equal:
The optimal performance that can be obtained by a deterministic algorithm on a random input (its average-case complexity), for a probability distribution on inputs chosen to be as hard as possible and for an algorithm chosen to work as well as possible against that distribution
The optimal performance that can be obtained by a random algorithm on a deterministic input (its expected complexity), for an algorithm chosen to have the best performance on its worst case inputs, and the worst case input to the algorithm
Yao's principle is often used to prove limitations on the performance of randomized algorithms, by finding a probability distribution on inputs that is difficult for deterministic algorithms, and inferring that randomized algorithms have the same limitation on their worst case performance.
This principle is named after Andrew Yao, who first proposed it in a 1977 paper. It is closely related to the minimax theorem in the theory of zero-sum games, and to the duality theory of linear programs.
== Formulation ==
Consider an arbitrary real valued cost measure {\displaystyle c(A,x)} of an algorithm {\displaystyle A} on an input {\displaystyle x}, such as its running time, for which we want to study the expected value over randomized algorithms and random inputs. Consider, also, a finite set {\displaystyle {\mathcal {A}}} of deterministic algorithms (made finite, for instance, by limiting the algorithms to a specific input size), and a finite set {\displaystyle {\mathcal {X}}} of inputs to these algorithms. Let {\displaystyle {\mathcal {R}}} denote the class of randomized algorithms obtained from probability distributions over the deterministic behaviors in {\displaystyle {\mathcal {A}}}, and let {\displaystyle {\mathcal {D}}} denote the class of probability distributions on inputs in {\displaystyle {\mathcal {X}}}. Then, Yao's principle states that:
{\displaystyle \max _{D\in {\mathcal {D}}}\min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]=\min _{R\in {\mathcal {R}}}\max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].}
Here, {\displaystyle \mathbb {E} } is notation for the expected value, and {\displaystyle x\sim D} means that {\displaystyle x} is a random variable distributed according to {\displaystyle D}. Finiteness of {\displaystyle {\mathcal {A}}} and {\displaystyle {\mathcal {X}}} allows {\displaystyle {\mathcal {D}}} and {\displaystyle {\mathcal {R}}} to be interpreted as simplices of probability vectors, whose compactness implies that the minima and maxima in these formulas exist.
Another version of Yao's principle weakens it from an equality to an inequality, but at the same time generalizes it by relaxing the requirement that the algorithms and inputs come from a finite set. The direction of the inequality allows it to be used when a specific input distribution has been shown to be hard for deterministic algorithms, converting it into a lower bound on the cost of all randomized algorithms. In this version, for every input distribution {\displaystyle D\in {\mathcal {D}}}, and for every randomized algorithm {\displaystyle R} in {\displaystyle {\mathcal {R}}},
{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].}
That is, the best possible deterministic performance against distribution {\displaystyle D} is a lower bound for the performance of each randomized algorithm {\displaystyle R} against its worst-case input. This version of Yao's principle can be proven through the chain of inequalities
{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \mathbb {E} _{x\sim D}[c(R,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)],}
each of which can be shown using only linearity of expectation and the principle that {\displaystyle \min \leq \mathbb {E} \leq \max } for all distributions. By avoiding maximization and minimization over {\displaystyle {\mathcal {D}}} and {\displaystyle {\mathcal {R}}}, this version of Yao's principle can apply in some cases where {\displaystyle {\mathcal {X}}} or {\displaystyle {\mathcal {A}}} are not finite. Although this direction of inequality is the direction needed for proving lower bounds on randomized algorithms, the equality version of Yao's principle, when it is available, can also be useful in these proofs. The equality of the principle implies that there is no loss of generality in using the principle to prove lower bounds: whatever the actual best randomized algorithm might be, there is some input distribution through which a matching lower bound on its complexity can be proven.
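The inequality version can be checked numerically on a toy instance. The cost matrix and the two distributions below are arbitrary illustrative choices of our own, not from the source:

```python
# cost[a][x]: cost of deterministic algorithm a on input x (hypothetical values)
cost = [[3, 1, 4],
        [1, 5, 9],
        [2, 6, 5]]
D = [0.2, 0.5, 0.3]   # a distribution on the three inputs
R = [0.6, 0.1, 0.3]   # a randomized algorithm: distribution over the algorithms

# min over A of E_{x~D}[c(A, x)]: best deterministic performance against D
best_det = min(sum(p * c for p, c in zip(D, row)) for row in cost)
# max over x of E[c(R, x)]: worst-case expected cost of the randomized algorithm
worst_rand = max(sum(r * cost[a][x] for a, r in enumerate(R)) for x in range(3))

assert best_det <= worst_rand   # the inequality form of Yao's principle
```

The same check passes for every choice of D and R, since the inequality is an identity of the min/expectation/max chain above, not a property of this particular matrix.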
== Applications and examples ==
=== Time complexity ===
When the cost
c
{\displaystyle c}
denotes the running time of an algorithm, Yao's principle states that the best possible running time of a deterministic algorithm, on a hard input distribution, gives a lower bound for the expected time of any Las Vegas algorithm on its worst-case input. Here, a Las Vegas algorithm is a randomized algorithm whose runtime may vary, but for which the result is always correct. For example, this form of Yao's principle has been used to prove the optimality of certain Monte Carlo tree search algorithms for the exact evaluation of game trees.
=== Comparisons ===
The time complexity of comparison-based sorting and selection algorithms is often studied using the number of comparisons between pairs of data elements as a proxy for the total time. When these problems are considered over a fixed set of elements, their inputs can be expressed as permutations and a deterministic algorithm can be expressed as a decision tree. In this way both the inputs and the algorithms form finite sets as Yao's principle requires. A symmetrization argument identifies the hardest input distributions: they are the random permutations, the distributions on {\displaystyle n} distinct elements for which all permutations are equally likely. This is because, if any other distribution were hardest, averaging it with all permutations of the same hard distribution would be equally hard, and would produce the distribution for a random permutation. Yao's principle extends lower bounds for the average case number of comparisons made by deterministic algorithms, for random permutations, to the worst case analysis of randomized comparison algorithms.
An example given by Yao is the analysis of algorithms for finding the {\displaystyle k}th largest of a given set of {\displaystyle n} values, the selection problem. Subsequent to Yao's work, Walter Cunto and Ian Munro showed that, for random permutations, any deterministic algorithm must perform at least {\displaystyle n+\min(k,n-k)-O(1)} expected comparisons. By Yao's principle, the same number of comparisons must be made by randomized algorithms on their worst-case input. The Floyd–Rivest algorithm comes within {\displaystyle O({\sqrt {n\log n}})} comparisons of this bound.
=== Evasiveness of graph properties ===
Another of the original applications by Yao of his principle was to the evasiveness of graph properties, the number of tests of the adjacency of pairs of vertices needed to determine whether a graph has a given property, when the only access to the graph is through such tests. Richard M. Karp conjectured that every randomized algorithm for every nontrivial monotone graph property (a property that remains true for every subgraph of a graph with the property) requires a quadratic number of tests, but only weaker bounds have been proven.
As Yao stated, for graph properties that are true of the empty graph but false for some other graph on {\displaystyle n} vertices with only a bounded number {\displaystyle s} of edges, a randomized algorithm must probe a quadratic number of pairs of vertices. For instance, for the property of being a planar graph, {\displaystyle s=9} because the 9-edge utility graph is non-planar. More precisely, Yao states that for these properties, at least {\displaystyle \left({\tfrac {1}{2}}-p\right){\tfrac {1}{s}}{\tbinom {n}{2}}} tests are needed for a randomized algorithm to have probability at most {\displaystyle p} of making a mistake. Yao also used this method to show that quadratically many queries are needed for the properties of containing a given tree or clique as a subgraph, of containing a perfect matching, and of containing a Hamiltonian cycle, for small enough constant error probabilities.
=== Black-box optimization ===
In black-box optimization, the problem is to determine the minimum or maximum value of a function, from a given class of functions, accessible only through calls to the function on arguments from some finite domain. In this case, the cost to be optimized is the number of calls. Yao's principle has been described as "the only method available for proving lower bounds for all randomized search heuristics for selected classes of problems". Results that can be proven in this way include the following:
For Boolean functions on {\displaystyle n}-bit binary strings that test whether the input equals some fixed but unknown string, the optimal expected number of function calls needed to find the unknown string is {\displaystyle 2^{n-1}+{\tfrac {1}{2}}}. This can be achieved by an algorithm that tests strings in a random order, and proved optimal by using Yao's principle on an input distribution that chooses a uniformly random function from this class.
A unimodal function {\displaystyle f} from {\displaystyle n}-bit binary strings to real numbers is defined by the following property: For each input string {\displaystyle x}, either {\displaystyle f(x)} is the unique maximum value of {\displaystyle f}, or {\displaystyle x} can be changed in a single bit to a string {\displaystyle y} with a larger value. Thus, a local search that changes one bit at a time when this produces a larger value will always eventually find the maximum value. Such a search may take exponentially many steps, but nothing significantly better is possible. For any randomized algorithm that performs {\displaystyle 2^{o(n)}} queries, some function in this class will cause the algorithm to have an exponentially small probability of finding the maximum.
=== Communication complexity ===
In communication complexity, an algorithm describes a communication protocol between two or more parties, and its cost may be the number of bits or messages transmitted between the parties. In this case, Yao's principle describes an equality between the average-case complexity of deterministic communication protocols, on an input distribution that is the worst case for the problem, and the expected communication complexity of randomized protocols on their worst-case inputs.
An example described by Avi Wigderson (based on a paper by Manu Viola) is the communication complexity for two parties, each holding {\displaystyle n}-bit input values, to determine which value is larger. For deterministic communication protocols, nothing better than {\displaystyle n} bits of communication is possible, easily achieved by one party sending their whole input to the other. However, parties with a shared source of randomness and a fixed error probability can exchange 1-bit hash functions of prefixes of the input to perform a noisy binary search for the first position where their inputs differ, achieving {\displaystyle O(\log n)} bits of communication. This is within a constant factor of optimal, as can be shown via Yao's principle with an input distribution that chooses the position of the first difference uniformly at random, and then chooses random strings for the shared prefix up to that position and the rest of the inputs after that position.
=== Online algorithms ===
Yao's principle has also been applied to the competitive ratio of online algorithms. An online algorithm must respond to a sequence of requests, without knowledge of future requests, incurring some cost or profit per request depending on its choices. The competitive ratio is the ratio of its cost or profit to the value that could be achieved by an offline algorithm with access to knowledge of all future requests, for a worst-case request sequence that causes this ratio to be as far from one as possible. Here, one must be careful to formulate the ratio with the algorithm's performance in the numerator and the optimal performance of an offline algorithm in the denominator, so that the cost measure can be formulated as an expected value rather than as the reciprocal of an expected value.
An example given by Borodin & El-Yaniv (2005) concerns page replacement algorithms, which respond to requests for pages of computer memory by using a cache of {\displaystyle k} pages, for a given parameter {\displaystyle k}. If a request matches a cached page, it costs nothing; otherwise one of the cached pages must be replaced by the requested page, at a cost of one page fault. A difficult distribution of request sequences for this model can be generated by choosing each request uniformly at random from a pool of {\displaystyle k+1} pages. Any deterministic online algorithm has {\displaystyle {\tfrac {n}{k+1}}} expected page faults, over {\displaystyle n} requests. Instead, an offline algorithm can divide the request sequence into phases within which only {\displaystyle k} pages are used, incurring only one fault at the start of a phase to replace the one page that is unused within the phase. As an instance of the coupon collector's problem, the expected number of requests per phase is {\displaystyle (k+1)H_{k}}, where {\displaystyle H_{k}=1+{\tfrac {1}{2}}+\cdots +{\tfrac {1}{k}}} is the {\displaystyle k}th harmonic number. By renewal theory, the offline algorithm incurs {\displaystyle {\tfrac {n}{(k+1)H_{k}}}+o(n)} page faults with high probability, so the competitive ratio of any deterministic algorithm against this input distribution is at least {\displaystyle H_{k}}. By Yao's principle, {\displaystyle H_{k}} also lower bounds the competitive ratio of any randomized page replacement algorithm against a request sequence chosen by an oblivious adversary to be a worst case for the algorithm but without knowledge of the algorithm's random choices.
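The {\displaystyle {\tfrac {n}{k+1}}} expected fault count on the hard distribution can be verified exactly for tiny parameters by enumerating every request sequence. The sketch below is our own; it starts with a pre-filled cache and uses LRU, but once the cache holds {\displaystyle k} of the {\displaystyle k+1} pages exactly one page is missing at each step, so the count is the same for any demand-paging policy.

```python
from itertools import product

k, n = 2, 4
pages = range(k + 1)                     # pool of k+1 pages

def lru_faults(seq, cache):
    """Count page faults of LRU on seq, starting from the given full cache."""
    cache, faults = list(cache), 0
    for p in seq:
        if p in cache:
            cache.remove(p)              # hit: refresh recency
        else:
            faults += 1
            cache.pop(0)                 # miss: evict least recently used
        cache.append(p)
    return faults

# Average over all (k+1)^n equally likely request sequences.
total = sum(lru_faults(seq, [0, 1]) for seq in product(pages, repeat=n))
assert total / (k + 1) ** n == n / (k + 1)   # expected faults = n/(k+1)
```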
For online problems in a general class related to the ski rental problem, Seiden has proposed a cookbook method for deriving optimally hard input distributions, based on certain parameters of the problem.
== Relation to game theory and linear programming ==
Yao's principle may be interpreted in game theoretic terms, via a two-player zero-sum game in which one player, Alice, selects a deterministic algorithm, the other player, Bob, selects an input, and the payoff is the cost of the selected algorithm on the selected input. Any randomized algorithm {\displaystyle R} may be interpreted as a randomized choice among deterministic algorithms, and thus as a mixed strategy for Alice. Similarly, a non-random algorithm may be thought of as a pure strategy for Alice. In any two-player zero-sum game, if one player chooses a mixed strategy, then the other player has an optimal pure strategy against it. By the minimax theorem of John von Neumann, there exists a game value {\displaystyle c}, and mixed strategies for each player, such that the players can guarantee expected value {\displaystyle c} or better by playing those strategies, and such that the optimal pure strategy against either mixed strategy produces expected value exactly {\displaystyle c}. Thus, the minimax mixed strategy for Alice, set against the best opposing pure strategy for Bob, produces the same expected game value {\displaystyle c} as the minimax mixed strategy for Bob, set against the best opposing pure strategy for Alice. This equality of expected game values, for the game described above, is Yao's principle in its form as an equality. Yao's 1977 paper, originally formulating Yao's principle, proved it in this way.
The optimal mixed strategy for Alice (a randomized algorithm) and the optimal mixed strategy for Bob (a hard input distribution) may each be computed using a linear program that has one player's probabilities as its variables, with a constraint on the game value for each choice of the other player. The two linear programs obtained in this way for each player are dual linear programs, whose equality is an instance of linear programming duality. However, although linear programs may be solved in polynomial time, the numbers of variables and constraints in these linear programs (numbers of possible algorithms and inputs) are typically too large to list explicitly. Therefore, formulating and solving these programs to find these optimal strategies is often impractical.
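For games small enough to write down, the optimal mixed strategies can be found without a general LP solver. As a toy illustration of our own (not from the source), a 2×2 zero-sum game with no saddle point has a closed-form fully mixed equilibrium:

```python
from fractions import Fraction

def solve_2x2(a, b, c, d):
    """Value and row player's mixed strategy for the zero-sum game
    [[a, b], [c, d]], assuming it has no saddle point (fully mixed)."""
    denom = Fraction(a + d - b - c)
    p = Fraction(d - c) / denom          # probability of playing row 1
    v = Fraction(a * d - b * c) / denom  # game value
    return v, p

# Matching pennies: the row player wins 1 on a match, loses 1 otherwise.
v, p = solve_2x2(1, -1, -1, 1)
assert v == 0 and p == Fraction(1, 2)
```

In the Yao's-principle reading, the row mixture is a randomized algorithm over two deterministic algorithms, the column mixture a hard input distribution, and {\displaystyle v} the common optimal expected cost.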
== Extensions ==
For Monte Carlo algorithms, algorithms that use a fixed amount of computational resources but that may produce an erroneous result, a form of Yao's principle applies to the probability of an error, the error rate of an algorithm. Choosing the hardest possible input distribution, and the algorithm that achieves the lowest error rate against that distribution, gives the same error rate as choosing an optimal algorithm and its worst case input distribution. However, the hard input distributions found in this way are not robust to changes in the parameters used when applying this principle. If an input distribution requires high complexity to achieve a certain error rate, it may nevertheless have unexpectedly low complexity for a different error rate. Ben-David and Blais show that, for Boolean functions under many natural measures of computational complexity, there exists an input distribution that is simultaneously hard for all error rates.
Variants of Yao's principle have also been considered for quantum computing. In place of randomized algorithms, one may consider quantum algorithms that have a good probability of computing the correct value for every input (probability at least {\displaystyle {\tfrac {2}{3}}}); this condition together with polynomial time defines the complexity class BQP. It does not make sense to ask for deterministic quantum algorithms, but instead one may consider algorithms that, for a given input distribution, have probability 1 of computing a correct answer, either in a weak sense that the inputs for which this is true have probability {\displaystyle \geq {\tfrac {2}{3}}}, or in a strong sense in which, in addition, the algorithm must have probability 0 or 1 of generating any particular answer on the remaining inputs. For any Boolean function, the minimum complexity of a quantum algorithm that is correct with probability {\displaystyle \geq {\tfrac {2}{3}}} against its worst-case input is less than or equal to the minimum complexity that can be attained, for a hard input distribution, by the best weak or strong quantum algorithm against that distribution. The weak form of this inequality is within a constant factor of being an equality, but the strong form is not.
== References == | Wikipedia/Randomized_algorithms_as_zero-sum_games |
In analysis of algorithms, probabilistic analysis of algorithms is an approach to estimate the computational complexity of an algorithm or a computational problem. It starts from an assumption about a probabilistic distribution of the set of all possible inputs. This assumption is then used to design an efficient algorithm or to derive the complexity of a known algorithm.
This approach is not the same as that of probabilistic algorithms, but the two may be combined.
For non-probabilistic, more specifically deterministic, algorithms, the most common types of complexity estimates are the average-case complexity and the almost-always complexity. To obtain the average-case complexity, given an input distribution, the expected time of an algorithm is evaluated, whereas for the almost-always complexity estimate, one shows that the algorithm satisfies a given complexity bound almost surely.
In probabilistic analysis of probabilistic (randomized) algorithms, the distributions or average of all possible choices in randomized steps is also taken into account, in addition to the input distributions.
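As a toy example of an average-case estimate (our own illustration, not from the source): the expected number of times a running maximum is updated while scanning a uniformly random permutation of {\displaystyle n} distinct items is the harmonic number {\displaystyle H_{n}}, which can be checked by enumeration for small {\displaystyle n}:

```python
from fractions import Fraction
from itertools import permutations

def maxima_updates(perm):
    """Number of left-to-right maxima (times the running maximum changes)."""
    best, count = None, 0
    for x in perm:
        if best is None or x > best:
            best, count = x, count + 1
    return count

n = 4
perms = list(permutations(range(n)))
average = Fraction(sum(maxima_updates(p) for p in perms), len(perms))
assert average == sum(Fraction(1, i) for i in range(1, n + 1))   # H_4 = 25/12
```

The closed form follows by linearity of expectation: the i-th item scanned is a new maximum with probability 1/i.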
== See also ==
Amortized analysis
Average-case complexity
Best, worst and average case
Random self-reducibility
Principle of deferred decision
== References ==
Frieze, Alan M.; Reed, Bruce (1998), "Probabilistic analysis of algorithms", in Habib, Michel; McDiarmid, Colin; Ramirez-Alfonsin, Jorge; Reed, Bruce (eds.), Probabilistic Methods for Algorithmic Discrete Mathematics, Algorithms and Combinatorics, vol. 16, Springer, pp. 36–92, doi:10.1007/978-3-662-12788-9_2, ISBN 9783662127889
Hofri, Micha (1987), Probabilistic Analysis of Algorithms: On Computing Methodologies for Computer Algorithms Performance Evaluation, Springer, doi:10.1007/978-1-4612-4800-2, ISBN 9781461248002
Frieze, A. M. (1990), "Probabilistic analysis of graph algorithms", in Tinhofer, G.; Mayr, E.; Noltemeier, H.; Syslo, M. M. (eds.), Computational Graph Theory, Computing Supplementa, vol. 7, Springer, pp. 209–233, doi:10.1007/978-3-7091-9076-0_11, ISBN 9783709190760 | Wikipedia/Probabilistic_analysis_of_algorithms |
Competitive analysis is a method invented for analyzing online algorithms, in which the performance of an online algorithm (which must satisfy an unpredictable sequence of requests, completing each request without being able to see the future) is compared to the performance of an optimal offline algorithm that can view the sequence of requests in advance. An algorithm is competitive if its competitive ratio—the ratio between its performance and the offline algorithm's performance—is bounded. Unlike traditional worst-case analysis, where the performance of an algorithm is measured only for "hard" inputs, competitive analysis requires that an algorithm perform well both on hard and easy inputs, where "hard" and "easy" are defined by the performance of the optimal offline algorithm.
For many algorithms, performance is dependent not only on the size of the inputs, but also on their values. For example, sorting an array of elements varies in difficulty depending on the initial order. Such data-dependent algorithms are analysed for average-case and worst-case data. Competitive analysis is a way of doing worst case analysis for on-line and randomized algorithms, which are typically data dependent.
In competitive analysis, one imagines an "adversary" which deliberately chooses difficult data, to maximize the ratio of the cost of the algorithm being studied and some optimal algorithm. When considering a randomized algorithm, one must further distinguish between an oblivious adversary, which has no knowledge of the random choices made by the algorithm pitted against it, and an adaptive adversary which has full knowledge of the algorithm's internal state at any point during its execution. (For a deterministic algorithm, there is no difference; either adversary can simply compute what state that algorithm must have at any time in the future, and choose difficult data accordingly.)
For example, the quicksort algorithm chooses one element, called the "pivot", that is, on average, not too far from the center value of the data being sorted. Quicksort then separates the data into two piles, one of which contains all elements with value less than the value of the pivot, and the other containing the rest of the elements. If quicksort chooses the pivot in some deterministic fashion (for instance, always choosing the first element in the list), then it is easy for an adversary to arrange the data beforehand so that quicksort will perform in worst-case time. If, however, quicksort chooses some random element to be the pivot, then an adversary without knowledge of what random numbers are coming up cannot arrange the data to guarantee worst-case execution time for quicksort.
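The contrast between the deterministic and randomized pivot choices can be sketched in a few lines of Python (an illustrative sketch, not taken from the cited sources):

```python
import random

def quicksort(xs):
    """Quicksort with a uniformly random pivot. Since the pivot is random,
    no oblivious adversary can fix an input in advance that forces the
    worst-case quadratic behaviour."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

# A reverse-sorted input is worst case for "always pick the first element",
# but is handled in expected O(n log n) time by the randomized version:
print(quicksort(list(range(10, 0, -1))))
```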
The classic on-line problem first analysed with competitive analysis (Sleator & Tarjan 1985) is the list update problem: Given a list of items and a sequence of requests for the various items, minimize the cost of accessing the list where the elements closer to the front of the list cost less to access. (Typically, the cost of accessing an item is equal to its position in the list.) After an access, the list may be rearranged. Most rearrangements have a cost. The Move-To-Front algorithm simply moves the requested item to the front after the access, at no cost. The Transpose algorithm swaps the accessed item with the item immediately before it, also at no cost. Classical methods of analysis showed that Transpose is optimal in certain contexts. In practice, Move-To-Front performed much better. Competitive analysis was used to show that an adversary can make Transpose perform arbitrarily badly compared to an optimal algorithm, whereas Move-To-Front can never be made to incur more than twice the cost of an optimal algorithm.
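A small simulation makes the comparison concrete. The sketch below (illustrative code, using the cost model described above in which rearrangement after an access is free) runs Move-To-Front and Transpose on an adversarial sequence that keeps requesting the last two items:

```python
def access_cost(items, requests, rearrange):
    """Total cost of serving requests, where accessing the item at
    (0-based) index i costs i + 1, and rearranging afterwards is free."""
    lst, total = list(items), 0
    for r in requests:
        i = lst.index(r)
        total += i + 1
        rearrange(lst, i)
    return total

def move_to_front(lst, i):
    lst.insert(0, lst.pop(i))       # move accessed item to the front

def transpose(lst, i):
    if i > 0:                       # swap accessed item with its predecessor
        lst[i - 1], lst[i] = lst[i], lst[i - 1]

items = [0, 1, 2, 3, 4]
bad = [4, 3] * 20                   # the last two items just swap back and forth
print(access_cost(items, bad, transpose))      # every access costs 5: total 200
print(access_cost(items, bad, move_to_front))  # after two accesses, cost 2 each: total 86
```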
In the case of online requests from a server, competitive algorithms are used to overcome uncertainties about the future. That is, the algorithm does not "know" the future, while the imaginary adversary (the "competitor") "knows". Similarly, competitive algorithms were developed for distributed systems, where the algorithm has to react to a request arriving at one location, without "knowing" what has just happened in a remote location. This setting was presented in (Awerbuch, Kutten & Peleg 1992).
== See also ==
Adversary (online algorithm)
Amortized analysis
K-server problem
List update problem
Online algorithm
== References ==
Sleator, D.; Tarjan, R. (1985), "Amortized efficiency of list update and paging rules", Communications of the ACM, 28 (2): 202–208, doi:10.1145/2786.2793.
Aspnes, James (1998), "Competitive analysis of distributed algorithms", in Fiat, A.; Woeginger, G. J. (eds.), Online Algorithms: The State of the Art, Lecture Notes in Computer Science, vol. 1442, pp. 118–146, doi:10.1007/BFb0029567, ISBN 978-3-540-64917-5.
Borodin, A.; El-Yaniv, R. (1998), Online Computation and Competitive Analysis, Cambridge University Press, ISBN 0-521-56392-5.
Awerbuch, B.; Kutten, S.; Peleg, D. (1992), "Competitive Distributed Job Scheduling", ACM STOC, Victoria, BC, Canada. | Wikipedia/Competitive_analysis_(online_algorithm) |
An Atlantic City algorithm is a bounded-error probabilistic polynomial-time algorithm that answers correctly at least 75% of the time (or, in some versions, with some other probability greater than 50%). The term "Atlantic City" was first introduced in 1982 by J. Finn in an unpublished manuscript entitled Comparison of probabilistic tests for primality.
Two other common classes of probabilistic algorithms are Monte Carlo algorithms and Las Vegas algorithms. Monte Carlo algorithms are always fast, but only probably correct. On the other hand, Las Vegas algorithms are always correct, but only probably fast. Atlantic City algorithms, which are bounded-error probabilistic polynomial-time algorithms, are probably correct and probably fast.
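The distinction can be illustrated with the textbook toy problem of finding the index of a 1 in an array that is half ones (a sketch; the function names are ours):

```python
import random

def find_one_las_vegas(arr):
    """Las Vegas: the answer is verified before being returned, so it is
    always correct; only the number of probes is random."""
    while True:
        i = random.randrange(len(arr))
        if arr[i] == 1:
            return i

def find_one_monte_carlo(arr, k=20):
    """Monte Carlo: at most k probes, so it is always fast, but with
    probability 2**-k (for a half-ones array) the returned index is wrong."""
    i = 0
    for _ in range(k):
        i = random.randrange(len(arr))
        if arr[i] == 1:
            return i
    return i                        # unverified guess after k probes

arr = [0, 1] * 50
print(arr[find_one_las_vegas(arr)])   # always prints 1
```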
== See also ==
Monte Carlo Algorithm
Las Vegas Algorithm
== References == | Wikipedia/Atlantic_City_algorithm |
In mathematics, the probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error.
This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding), and information theory.
== Introduction ==
If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Thus, by contraposition, if the probability that a random object chosen from the collection has that property is nonzero, then some object in the collection must possess the property.
Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties.
Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value.
Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such element would imply every element in the sample space is less than the expected value, a contradiction.
Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma.
== Two examples due to Erdős ==
Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamiltonian cycles), many of the most well known proofs using this method are due to Erdős. The first example below describes one such result from 1947 that gives a proof of a lower bound for the Ramsey number R(r, r).
=== First example ===
Suppose we have a complete graph on n vertices. We wish to show (for small enough values of n) that it is possible to color the edges of the graph in two colors (say red and blue) so that there is no complete subgraph on r vertices which is monochromatic (every edge colored the same color).
To do so, we color the graph randomly. Color each edge independently with probability 1/2 of being red and 1/2 of being blue. We calculate the expected number of monochromatic subgraphs on r vertices as follows:
For any set {\displaystyle S_{r}} of {\displaystyle r} vertices from our graph, define the variable {\displaystyle X(S_{r})} to be 1 if every edge amongst the {\displaystyle r} vertices is the same color, and 0 otherwise. Note that the number of monochromatic {\displaystyle r}-subgraphs is the sum of {\displaystyle X(S_{r})} over all possible subsets {\displaystyle S_{r}}. For any individual set {\displaystyle S_{r}^{i}}, the expected value of {\displaystyle X(S_{r}^{i})} is simply the probability that all of the {\displaystyle C(r,2)} edges in {\displaystyle S_{r}^{i}} are the same color:
{\displaystyle E[X(S_{r}^{i})]=2\cdot 2^{-{r \choose 2}}}
(the factor of 2 comes because there are two possible colors).
This holds true for any of the {\displaystyle C(n,r)} possible subsets we could have chosen, i.e. {\displaystyle i} ranges from 1 to {\displaystyle C(n,r)}. So we have that the sum of {\displaystyle E[X(S_{r}^{i})]} over all {\displaystyle S_{r}^{i}} is
{\displaystyle \sum _{i=1}^{C(n,r)}E[X(S_{r}^{i})]={n \choose r}2^{1-{r \choose 2}}.}
The sum of expectations is the expectation of the sum (regardless of whether the variables are independent), so the expectation of the sum (the expected number of all monochromatic {\displaystyle r}-subgraphs) is
{\displaystyle E[X(S_{r})]={n \choose r}2^{1-{r \choose 2}}.}
Consider what happens if this value is less than 1. Since the expected number of monochromatic r-subgraphs is strictly less than 1, there exists a coloring satisfying the condition that the number of monochromatic r-subgraphs is strictly less than 1. The number of monochromatic r-subgraphs in this random coloring is a non-negative integer, hence it must be 0 (0 is the only non-negative integer less than 1). It follows that if
{\displaystyle E[X(S_{r})]={n \choose r}2^{1-{r \choose 2}}<1}
(which holds, for example, for n = 5 and r = 4), there must exist a coloring in which there are no monochromatic r-subgraphs.
By definition of the Ramsey number, this implies that R(r, r) must be bigger than n. In particular, R(r, r) must grow at least exponentially with r.
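For the small case quoted above (n = 5, r = 4), the bound is easy to check numerically, and, because the expectation is below 1, a random search finds a witness coloring of K5 almost immediately (an illustrative sketch):

```python
import math
import random
from itertools import combinations

def expected_mono(n, r):
    """Expected number of monochromatic K_r in a random 2-coloring of K_n."""
    return math.comb(n, r) * 2 ** (1 - math.comb(r, 2))

print(expected_mono(5, 4))  # 5 * 2**-5 = 0.15625 < 1

def mono_count(n, r, color):
    """Number of monochromatic r-subsets under the edge-coloring `color`."""
    count = 0
    for S in combinations(range(n), r):
        if len({color[e] for e in combinations(S, 2)}) == 1:
            count += 1
    return count

random.seed(0)
edges = list(combinations(range(5), 2))
while True:  # few trials expected, since E[X] < 1
    color = {e: random.randrange(2) for e in edges}
    if mono_count(5, 4, color) == 0:
        break
print("found a 2-coloring of K_5 with no monochromatic K_4")
```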
A weakness of this argument is that it is entirely nonconstructive. Even though it proves (for example) that almost every coloring of the complete graph on {\displaystyle (1.1)^{r}} vertices contains no monochromatic r-subgraph, it gives no explicit example of such a coloring. The problem of finding such a coloring has been open for more than 50 years.
=== Second example ===
A 1959 paper of Erdős (see reference cited below) addressed the following problem in graph theory: given positive integers g and k, does there exist a graph G containing only cycles of length at least g, such that the chromatic number of G is at least k?
It can be shown that such a graph exists for any g and k, and the proof is reasonably simple. Let n be very large and consider a random graph G on n vertices, where every edge in G exists with probability {\displaystyle p=n^{1/g-1}}. We show that with positive probability, G satisfies the following two properties:
Property 1. G contains at most n/2 cycles of length less than g.
Proof. Let X be the number of cycles of length less than g. The number of cycles of length i in the complete graph on n vertices is
{\displaystyle {\frac {n!}{2\cdot i\cdot (n-i)!}}\leq {\frac {n^{i}}{2}}}
and each of them is present in G with probability pi. Hence by Markov's inequality we have
{\displaystyle \Pr \left(X>{\tfrac {n}{2}}\right)\leq {\frac {2}{n}}E[X]\leq {\frac {1}{n}}\sum _{i=3}^{g-1}p^{i}n^{i}={\frac {1}{n}}\sum _{i=3}^{g-1}n^{\frac {i}{g}}\leq {\frac {g}{n}}n^{\frac {g-1}{g}}=gn^{-{\frac {1}{g}}}=o(1).}
Thus for sufficiently large n, property 1 holds with a probability of more than 1/2.
Property 2. G contains no independent set of size {\displaystyle \lceil {\tfrac {n}{2k}}\rceil }.
Proof. Let Y be the size of the largest independent set in G. Clearly, we have
{\displaystyle \Pr(Y\geq y)\leq {n \choose y}(1-p)^{\frac {y(y-1)}{2}}\leq n^{y}e^{-{\frac {py(y-1)}{2}}}=e^{-{\frac {y}{2}}\cdot (py-2\ln n-p)}=o(1),}
when {\displaystyle y=\left\lceil {\frac {n}{2k}}\right\rceil \!.}
Thus, for sufficiently large n, property 2 holds with a probability of more than 1/2.
For sufficiently large n, the probability that a graph from the distribution has both properties is positive, as the events for these properties cannot be disjoint (if they were, their probabilities would sum up to more than 1).
Here comes the trick: since G has these two properties, we can remove at most n/2 vertices from G to obtain a new graph G′ on {\displaystyle n'\geq n/2} vertices that contains only cycles of length at least g. This new graph has no independent set of size {\displaystyle \left\lceil {\frac {n'}{k}}\right\rceil }, so any partition of G′ into independent sets (that is, any proper coloring) requires at least k parts; hence G′ has chromatic number at least k.
This result gives a hint as to why the computation of the chromatic number of a graph is so difficult: even when there are no local reasons (such as small cycles) for a graph to require many colors the chromatic number can still be arbitrarily large.
== See also ==
Interactive proof system
Las Vegas algorithm
Incompressibility method
Method of conditional probabilities
Probabilistic proofs of non-probabilistic theorems
Random graph
== Additional resources ==
Probabilistic Methods in Combinatorics, MIT OpenCourseWare
== References ==
Alon, Noga; Spencer, Joel H. (2000). The Probabilistic Method (2nd ed.). New York: Wiley-Interscience. ISBN 0-471-37046-0.
Erdős, P. (1959). "Graph theory and probability". Can. J. Math. 11: 34–38. doi:10.4153/CJM-1959-003-9. MR 0102081. S2CID 122784453.
Erdős, P. (1961). "Graph theory and probability, II". Can. J. Math. 13: 346–352. CiteSeerX 10.1.1.210.6669. doi:10.4153/CJM-1961-029-9. MR 0120168. S2CID 15134755.
J. Matoušek, J. Vondrak. The Probabilistic Method. Lecture notes.
Alon, N and Krivelevich, M (2006). Extremal and Probabilistic Combinatorics
Elishakoff I., Probabilistic Methods in the Theory of Structures: Random Strength of Materials, Random Vibration, and Buckling, World Scientific, Singapore, ISBN 978-981-3149-84-7, 2017
Elishakoff I., Lin Y.K. and Zhu L.P., Probabilistic and Convex Modeling of Acoustically Excited Structures, Elsevier Science Publishers, Amsterdam, 1994, VIII + 296 pp.; ISBN 0-444-81624-0
== Footnotes == | Wikipedia/Probabilistic_method |
In mathematics, discrepancy theory describes the deviation of a situation from the state one would like it to be in. It is also called the theory of irregularities of distribution. This refers to the theme of classical discrepancy theory, namely distributing points in some space such that they are evenly distributed with respect to some (mostly geometrically defined) subsets. The discrepancy (irregularity) measures how far a given distribution deviates from an ideal one.
Discrepancy theory can be described as the study of inevitable irregularities of distributions, in measure-theoretic and combinatorial settings. Just as Ramsey theory elucidates the impossibility of total disorder, discrepancy theory studies the deviations from total uniformity.
A significant event in the history of discrepancy theory was the 1916 paper of Weyl on the uniform distribution of sequences in the unit interval.
== Theorems ==
Discrepancy theory is based on the following classic theorems:
Geometric discrepancy theory
The theorem of van Aardenne-Ehrenfest
Arithmetic progressions (Roth, Sárközy, Beck, Matoušek & Spencer)
Beck–Fiala theorem
Six Standard Deviations Suffice (Spencer)
== Major open problems ==
The unsolved problems relating to discrepancy theory include:
Axis-parallel rectangles in dimensions three and higher (folklore)
Komlós conjecture
Heilbronn triangle problem on the minimum area of a triangle determined by three points from an n-point set
== Applications ==
Applications for discrepancy theory include:
Numerical integration: Monte Carlo methods in high dimensions
Computational geometry: Divide-and-conquer algorithm
Image processing: Halftoning
Random trial formulation: Randomized controlled trial
== See also ==
Discrepancy of hypergraphs
Geometric discrepancy theory
== References ==
== Further reading ==
Beck, József; Chen, William W. L. (1987). Irregularities of Distribution. New York: Cambridge University Press. ISBN 0-521-30792-9.
Chazelle, Bernard (2000). The Discrepancy Method: Randomness and Complexity. New York: Cambridge University Press. ISBN 0-521-77093-9.
Matousek, Jiri (1999). Geometric Discrepancy: An Illustrated Guide. Algorithms and combinatorics. Vol. 18. Berlin: Springer. ISBN 3-540-65528-X. | Wikipedia/Discrepancy_theory |
The approximate counting algorithm allows the counting of a large number of events using a small amount of memory. Invented in 1977 by Robert Morris of Bell Labs, it uses probabilistic techniques to increment the counter. It was fully analyzed in the early 1980s by Philippe Flajolet of INRIA Rocquencourt, who coined the name approximate counting, and strongly contributed to its recognition among the research community. When focused on high quality of approximation and low probability of failure, Nelson and Yu showed that a very slight modification to the Morris Counter is asymptotically optimal amongst all algorithms for the problem. The algorithm is considered one of the precursors of streaming algorithms, and the more general problem of determining the frequency moments of a data stream has been central to the field.
== Theory of operation ==
Using Morris' algorithm, the counter represents an "order of magnitude estimate" of the actual count. The approximation is mathematically unbiased.
To increment the counter, a pseudo-random event is used, such that the incrementing is a probabilistic event. To save space, only the exponent is kept. For example, in base 2, the counter can estimate the count to be 1, 2, 4, 8, 16, 32, and all of the powers of two. The memory requirement is simply to hold the exponent.
As an example, to increment from 4 to 8, a pseudo-random number would be generated such that the probability the counter is increased is 0.25. Otherwise, the counter remains at 4.
The table below illustrates some of the potential values of the counter:
If the counter holds the value of 101, which equates to an exponent of 5 (the decimal equivalent of 101), then the estimated count is
{\displaystyle 2^{5}}, or 32. There is a fairly low probability that the actual count of increment events was 5 ({\displaystyle {\frac {1}{1024}}=1\times {\frac {1}{2}}\times {\frac {1}{4}}\times {\frac {1}{8}}\times {\frac {1}{16}}}
). The actual count of increment events is likely to be "around 32", but it could be arbitrarily high (with decreasing probabilities for actual counts above 39).
=== Selecting counter values ===
While using powers of 2 as counter values is memory efficient, arbitrary values tend to create a dynamic error range, and the smaller values will have a greater error ratio than bigger values. Other methods of selecting counter values consider parameters such as memory availability, desired error ratio, or counting range to provide an optimal set of values.
However, when several counters share the same values, values are optimized according to the counter with the largest counting range, and produce sub-optimal accuracy for smaller counters. Mitigation is achieved by maintaining Independent Counter Estimation buckets, which restrict the effect of a larger counter on the other counters in the bucket.
== Algorithm ==
The algorithm can be implemented by hand. When incrementing the counter, flip a coin a number of times corresponding to the counter's current value. If it comes up heads each time, then increment the counter. Otherwise, do not increment it.
This can be easily achieved on a computer. Let {\displaystyle c} be the current value of the counter. Generate {\displaystyle c} pseudo-random bits and take the logical AND of all of those bits, adding the result to the counter. Since the result is zero whenever any of the pseudo-random bits is zero, the counter is incremented with probability {\displaystyle 2^{-c}}. This procedure is executed each time a request is made to increment the counter.
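A minimal sketch of the whole scheme in Python (illustrative helper names; the read-out 2**c − 1 is the standard unbiased estimator for the base-2 counter, an assumption of this sketch rather than something stated above):

```python
import random

def increment(c):
    """Increment the stored exponent c with probability 2**-c by AND-ing
    c pseudo-random bits, as described above."""
    if all(random.getrandbits(1) for _ in range(c)):
        return c + 1
    return c

def estimate(c):
    """Unbiased estimate of the number of increment events seen so far."""
    return 2 ** c - 1

random.seed(1)
c = 0
for _ in range(1000):   # 1000 actual events, but only the exponent is stored
    c = increment(c)
print(c, estimate(c))   # a small exponent, and an order-of-magnitude estimate of 1000
```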
== Applications ==
The algorithm is useful in examining large data streams for patterns. This is particularly useful in applications of data compression, sight and sound recognition, and other artificial intelligence applications.
== See also ==
HyperLogLog
== References ==
== Sources ==
Morris, R. Counting large numbers of events in small registers. Communications of the ACM 21, 10 (1978), 840–842
Flajolet, P. Approximate Counting: A Detailed Analysis. BIT 25, (1985), 113–134
Fuchs, M., Lee, C-K., Prodinger, H., Approximate Counting via the Poisson-Laplace-Mellin Method
In mathematics, motivic L-functions are a generalization of Hasse–Weil L-functions to general motives over global fields. The local L-factor at a finite place v is similarly given by the characteristic polynomial of a Frobenius element at v acting on the v-inertial invariants of the v-adic realization of the motive. For infinite places, Jean-Pierre Serre gave a recipe in (Serre 1970) for the so-called Gamma factors in terms of the Hodge realization of the motive. It is conjectured that, like other L-functions, each motivic L-function can be analytically continued to a meromorphic function on the entire complex plane and satisfies a functional equation relating the L-function L(s, M) of a motive M to L(1 − s, M∨), where M∨ is the dual of the motive M.
== Examples ==
Basic examples include Artin L-functions and Hasse–Weil L-functions. It is also known (Scholl 1990), for example, that a motive can be attached to a newform (i.e. a primitive cusp form), hence their L-functions are motivic.
== Conjectures ==
Several conjectures exist concerning motivic L-functions. It is believed that motivic L-functions should all arise as automorphic L-functions, and hence should be part of the Selberg class. There are also conjectures concerning the values of these L-functions at integers generalizing those known for the Riemann zeta function, such as Deligne's conjecture on special values of L-functions, the Beilinson conjecture, and the Bloch–Kato conjecture (on special values of L-functions).
== Notes ==
== References ==
Deligne, Pierre (1979), "Valeurs de fonctions L et périodes d'intégrales" (PDF), in Borel, Armand; Casselman, William (eds.), Automorphic Forms, Representations, and L-Functions, Proceedings of the Symposium in Pure Mathematics (in French), vol. 33, Providence, RI: AMS, pp. 313–346, ISBN 0-8218-1437-0, MR 0546622, Zbl 0449.10022
Langlands, Robert P. (1980), "L-functions and automorphic representations", Proceedings of the International Congress of Mathematicians (Helsinki, 1978) (PDF), vol. 1, Helsinki: Academia Scientiarum Fennica, pp. 165–175, MR 0562605, archived from the original (PDF) on 2016-03-03, retrieved 2011-05-11 alternate URL
Scholl, Anthony (1990), "Motives for modular forms", Inventiones Mathematicae, 100 (2): 419–430, Bibcode:1990InMat.100..419S, doi:10.1007/BF01231194, MR 1047142, S2CID 17109327
Serre, Jean-Pierre (1970), "Facteurs locaux des fonctions zêta des variétés algébriques (définitions et conjectures)", Séminaire Delange-Pisot-Poitou, 11 (2 (1969–1970) exp. 19): 1–15 | Wikipedia/Motivic_L-function |
The Journal of Number Theory (JNT) is a monthly peer-reviewed scientific journal covering all aspects of number theory. The journal was established in 1969 by R.P. Bambah, P. Roquette, A. Ross, A. Woods, and H. Zassenhaus (Ohio State University). It is currently published monthly by Elsevier and the editor-in-chief is Dorian Goldfeld (Columbia University). According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7.
== David Goss prize ==
The David Goss Prize in Number Theory, founded by the Journal of Number Theory, is awarded every two years to mathematicians under the age of 35 for outstanding contributions to number theory. The prize is dedicated to the memory of David Goss, the former editor-in-chief of the Journal of Number Theory. The current award is 10,000 USD.
The winners are selected and chosen by the scientific organizing committee of the JNT Biennial Conference and announced during the JNT Biennial Conference.
=== List of winners ===
== References ==
== External links ==
Official website
JNT 2019 Biennial: https://www.math.columbia.edu/~goldfeld/JNTBiennial2019.html
JNT 2022 Biennial: https://www.math.columbia.edu/~goldfeld/JNTBiennial2022.html
JNT 2024 Biennial: https://www.math.columbia.edu/~goldfeld/JNTBiennial2024.html | Wikipedia/Journal_of_Number_Theory |
In mathematics, the prime zeta function is an analogue of the Riemann zeta function, studied by Glaisher (1891). It is defined as the following infinite series, which converges for
{\displaystyle \Re (s)>1}:
{\displaystyle P(s)=\sum _{p\,\in \mathrm {\,primes} }{\frac {1}{p^{s}}}={\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+{\frac {1}{5^{s}}}+{\frac {1}{7^{s}}}+{\frac {1}{11^{s}}}+\cdots .}
== Properties ==
The Euler product for the Riemann zeta function ζ(s) implies that
{\displaystyle \log \zeta (s)=\sum _{n>0}{\frac {P(ns)}{n}}}
which by Möbius inversion gives
{\displaystyle P(s)=\sum _{n>0}\mu (n){\frac {\log \zeta (ns)}{n}}}
When s goes to 1, we have
{\displaystyle P(s)\sim \log \zeta (s)\sim \log \left({\frac {1}{s-1}}\right)}.
This is used in the definition of Dirichlet density.
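As a quick numerical sanity check (an illustrative sketch; the reference value P(2) ≈ 0.4522474200 is the known value of the prime zeta function at 2, supplied here by hand), the series can be summed directly over a sieve of primes:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def prime_zeta(s, limit=10**6):
    """Truncated P(s); for s = 2 the tail beyond `limit` is O(1/(limit log limit))."""
    return sum(p ** -s for p in primes_up_to(limit))

print(prime_zeta(2))  # close to the known value 0.4522474200...
```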
This gives the continuation of P(s) to
{\displaystyle \Re (s)>0}
, with an infinite number of logarithmic singularities at points s where ns is a pole (only ns = 1 when n is a squarefree number greater than or equal to 1), or zero of the Riemann zeta function ζ(.). The line
{\displaystyle \Re (s)=0}
is a natural boundary as the singularities cluster near all points of this line.
If one defines a sequence
{\displaystyle a_{n}=\prod _{p^{k}\mid n}{\frac {1}{k}}=\prod _{p^{k}\mid \mid n}{\frac {1}{k!}}}
then
{\displaystyle P(s)=\log \sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}.}
(Exponentiation shows that this is equivalent to Lemma 2.7 by Li.)
The prime zeta function is related to Artin's constant by
{\displaystyle \ln C_{\mathrm {Artin} }=-\sum _{n=2}^{\infty }{\frac {(L_{n}-1)P(n)}{n}}}
where Ln is the nth Lucas number.
Specific values are:
== Analysis ==
=== Integral ===
The integral over the prime zeta function is usually anchored at infinity, because the pole at {\displaystyle s=1} prohibits defining a nice lower bound at some finite integer without entering a discussion on branch cuts in the complex plane:
{\displaystyle \int _{s}^{\infty }P(t)\,dt=\sum _{p}{\frac {1}{p^{s}\log p}}}
The noteworthy values are again those where the sums converge slowly:
=== Derivative ===
The first derivative is
{\displaystyle P'(s)\equiv {\frac {d}{ds}}P(s)=-\sum _{p}{\frac {\log p}{p^{s}}}}
The interesting values are again those where the sums converge slowly:
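The derivative formula can be sanity-checked against a central finite difference of the truncated series (illustrative code; both sides use the same finite prime list, so they agree to roughly the square of the step size):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

primes = primes_up_to(10**5)

def P(s):
    return sum(p ** -s for p in primes)            # truncated prime zeta

def P_deriv(s):
    return -sum(math.log(p) / p ** s for p in primes)

h = 1e-5
finite_diff = (P(2 + h) - P(2 - h)) / (2 * h)      # central difference at s = 2
print(finite_diff, P_deriv(2))                     # both near -0.493
```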
== Generalizations ==
=== Almost-prime zeta functions ===
As the Riemann zeta function is a sum of inverse powers over the integers and the prime zeta function a sum of inverse powers of the prime numbers, the {\displaystyle k}-primes (the integers which are a product of {\displaystyle k} not necessarily distinct primes) define a sort of intermediate sums:
{\displaystyle P_{k}(s)\equiv \sum _{n:\Omega (n)=k}{\frac {1}{n^{s}}}}
where {\displaystyle \Omega } is the total number of prime factors.
Each integer in the denominator of the Riemann zeta function {\displaystyle \zeta } may be classified by its value of the index {\displaystyle k}, which decomposes the Riemann zeta function into an infinite sum of the {\displaystyle P_{k}}:
{\displaystyle \zeta (s)=1+\sum _{k=1,2,\ldots }P_{k}(s)}
Since we know that the Dirichlet series (in some formal parameter u) satisfies
{\displaystyle P_{\Omega }(u,s):=\sum _{n\geq 1}{\frac {u^{\Omega (n)}}{n^{s}}}=\prod _{p\in \mathbb {P} }\left(1-up^{-s}\right)^{-1},}
we can use formulas for the symmetric polynomial variants with a generating function of the right-hand-side type. Namely, we have the coefficient-wise identity that
{\displaystyle P_{k}(s)=[u^{k}]P_{\Omega }(u,s)=h(x_{1},x_{2},x_{3},\ldots )}
when the sequences correspond to {\displaystyle x_{j}:=j^{-s}\chi _{\mathbb {P} }(j)}, where {\displaystyle \chi _{\mathbb {P} }} denotes the characteristic function of the primes. Using Newton's identities, we have a general formula for these sums given by
{\displaystyle P_{n}(s)=\sum _{{k_{1}+2k_{2}+\cdots +nk_{n}=n} \atop {k_{1},\ldots ,k_{n}\geq 0}}\left[\prod _{i=1}^{n}{\frac {P(is)^{k_{i}}}{k_{i}!\cdot i^{k_{i}}}}\right]=-[z^{n}]\log \left(1-\sum _{j\geq 1}{\frac {P(js)z^{j}}{j}}\right).}
Special cases include the following explicit expansions:
{\displaystyle {\begin{aligned}P_{1}(s)&=P(s)\\P_{2}(s)&={\frac {1}{2}}\left(P(s)^{2}+P(2s)\right)\\P_{3}(s)&={\frac {1}{6}}\left(P(s)^{3}+3P(s)P(2s)+2P(3s)\right)\\P_{4}(s)&={\frac {1}{24}}\left(P(s)^{4}+6P(s)^{2}P(2s)+3P(2s)^{2}+8P(s)P(3s)+6P(4s)\right).\end{aligned}}}
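The k = 2 identity above, P₂(s) = (P(s)² + P(2s))/2, can be checked numerically at s = 2 by summing directly over the semiprimes (a sketch; all sums are truncated at N, so the two sides agree only up to a small truncation error):

```python
def smallest_prime_factors(n):
    """spf[m] = smallest prime factor of m, for 0 <= m <= n."""
    spf = list(range(n + 1))
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:                       # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    return spf

N = 10**5
spf = smallest_prime_factors(N)
primes = [p for p in range(2, N + 1) if spf[p] == p]

def big_omega(m):
    """Omega(m): number of prime factors counted with multiplicity."""
    k = 0
    while m > 1:
        m //= spf[m]
        k += 1
    return k

def P(s):
    return sum(p ** -float(s) for p in primes)   # truncated prime zeta

P2_direct = sum(m ** -2.0 for m in range(2, N + 1) if big_omega(m) == 2)
P2_formula = (P(2) ** 2 + P(4)) / 2
print(P2_direct, P2_formula)                     # agree up to truncation error
```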
=== Prime modulo zeta functions ===
Constructing the sum not over all primes but only over primes which are in the same modulo class introduces further types of infinite series that are a reduction of the Dirichlet L-function.
== See also ==
Divergence of the sum of the reciprocals of the primes
== References ==
Merrifield, C. W. (1881). "The Sums of the Series of Reciprocals of the Prime Numbers and of Their Powers". Proceedings of the Royal Society. 33 (216–219): 4–10. doi:10.1098/rspl.1881.0063. JSTOR 113877.
Fröberg, Carl-Erik (1968). "On the prime zeta function". Nordisk Tidskr. Informationsbehandling (BIT). 8 (3): 187–202. doi:10.1007/BF01933420. MR 0236123. S2CID 121500209.
Glaisher, J. W. L. (1891). "On the Sums of Inverse Powers of the Prime Numbers". Quart. J. Math. 25: 347–362.
Mathar, Richard J. (2008). "Twenty digits of some integrals of the prime zeta function". arXiv:0811.4739 [math.NT].
Li, Ji (2008). "Prime graphs and exponential composition of species". Journal of Combinatorial Theory. Series A. 115 (8): 1374–1401. arXiv:0705.0038. doi:10.1016/j.jcta.2008.02.008. MR 2455584. S2CID 6234826.
Mathar, Richard J. (2010). "Table of Dirichlet L-series and prime zeta modulo functions for small moduli". arXiv:1008.2547 [math.NT].
== External links ==
Weisstein, Eric W. "Prime Zeta Function". MathWorld. | Wikipedia/Prime_zeta_function |
In mathematics, the Riemann xi function is a variant of the Riemann zeta function, and is defined so as to have a particularly simple functional equation. The function is named in honour of Bernhard Riemann.
== Definition ==
Riemann's original lower-case "xi"-function,
{\displaystyle \xi } was renamed with an upper-case {\displaystyle ~\Xi ~} (Greek letter "Xi") by Edmund Landau. Landau's lower-case {\displaystyle ~\xi ~} ("xi") is defined as
{\displaystyle \xi (s)={\frac {1}{2}}s(s-1)\pi ^{-s/2}\Gamma \left({\frac {s}{2}}\right)\zeta (s)}
for {\displaystyle s\in \mathbb {C} }. Here {\displaystyle \zeta (s)} denotes the Riemann zeta function and {\displaystyle \Gamma (s)} is the gamma function.
The functional equation (or reflection formula) for Landau's {\displaystyle ~\xi ~} is
{\displaystyle \xi (1-s)=\xi (s)~.}
Riemann's original function, rebaptised upper-case Ξ by Landau, satisfies
{\displaystyle \Xi (z)=\xi \left({\tfrac {1}{2}}+zi\right),}
and obeys the functional equation
{\displaystyle \Xi (-z)=\Xi (z).}
Both functions are entire and purely real for real arguments.
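As a quick numerical sanity check on these definitions, the sketch below computes Landau's ξ directly from the formula. All helper names are ours, and a truncated Dirichlet series stands in for ζ, so it is only valid for real s > 1:

```python
import math

def zeta(s, terms=100000):
    # Truncated Dirichlet series for the Riemann zeta function; real s > 1 only.
    return sum(n ** -s for n in range(1, terms + 1))

def xi(s):
    # Landau's lower-case xi: (1/2) s (s-1) pi^(-s/2) Gamma(s/2) zeta(s).
    return 0.5 * s * (s - 1) * math.pi ** (-s / 2) * math.gamma(s / 2) * zeta(s)

print(xi(2))  # ≈ pi/6
print(xi(4))  # ≈ pi^2/15
```

The two printed values agree with the closed forms given in the Values section below to roughly the accuracy of the truncated ζ sum.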
== Values ==
The general form for positive even integers is
{\displaystyle \xi (2n)=(-1)^{n+1}{\frac {n!}{(2n)!}}B_{2n}2^{2n-1}\pi ^{n}(2n-1),}
where B_{2n} denotes the 2n-th Bernoulli number. For example:
{\displaystyle \xi (2)={\frac {\pi }{6}}}
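The closed form can be checked with exact rational Bernoulli numbers. This is a sketch with our own helper names, using the standard recurrence for the Bernoulli numbers (in the convention with B_1 = −1/2, which does not affect B_{2n}):

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli(m):
    # Bernoulli numbers via the recurrence sum_{j=0}^{k-1} C(k+1, j) B_j = -(k+1) B_k.
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(-Fraction(1, k + 1) * sum(comb(k + 1, j) * B[j] for j in range(k)))
    return B[m]

def xi_even(n):
    # xi(2n) = (-1)^(n+1) * n!/(2n)! * B_{2n} * 2^(2n-1) * pi^n * (2n-1)
    return ((-1) ** (n + 1) * factorial(n) / factorial(2 * n)
            * float(bernoulli(2 * n)) * 2 ** (2 * n - 1) * pi ** n * (2 * n - 1))

print(xi_even(1))  # pi/6 ≈ 0.523599
print(xi_even(2))  # pi^2/15 ≈ 0.657974
```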
== Series representations ==
The ξ function has the series expansion
{\displaystyle {\frac {d}{dz}}\ln \xi \left({\frac {-z}{1-z}}\right)=\sum _{n=0}^{\infty }\lambda _{n+1}z^{n},}
where
{\displaystyle \lambda _{n}={\frac {1}{(n-1)!}}\left.{\frac {d^{n}}{ds^{n}}}\left[s^{n-1}\log \xi (s)\right]\right|_{s=1}=\sum _{\rho }\left[1-\left(1-{\frac {1}{\rho }}\right)^{n}\right],}
where the sum extends over ρ, the non-trivial zeros of the zeta function, taken in order of |ℑ(ρ)|.
This expansion plays a particularly important role in Li's criterion, which states that the Riemann hypothesis is equivalent to having λn > 0 for all positive n.
== Hadamard product ==
A simple infinite product expansion is
ξ
(
s
)
=
1
2
∏
ρ
(
1
−
s
ρ
)
,
{\displaystyle \xi (s)={\frac {1}{2}}\prod _{\rho }\left(1-{\frac {s}{\rho }}\right),\!}
where ρ ranges over the roots of ξ.
To ensure convergence in the expansion, the product should be taken over "matching pairs" of zeroes, i.e., the factors for a pair of zeroes of the form ρ and 1−ρ should be grouped together.
== References ==
Weisstein, Eric W. "Xi-Function". MathWorld.
Keiper, J.B. (1992). "Power series expansions of Riemann's xi function". Mathematics of Computation. 58 (198): 765–773. Bibcode:1992MaCom..58..765K. doi:10.1090/S0025-5718-1992-1122072-5.
This article incorporates material from Riemann Ξ function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
In mathematics, the Z function is a function used for studying the Riemann zeta function along the critical line where the argument is one-half. It is also called the Riemann–Siegel Z function, the Riemann–Siegel zeta function, the Hardy function, the Hardy Z function and the Hardy zeta function. It can be defined in terms of the Riemann–Siegel theta function and the Riemann zeta function by
{\displaystyle Z(t)=e^{i\theta (t)}\zeta \left({\frac {1}{2}}+it\right).}
It follows from the functional equation of the Riemann zeta function that the Z function is real for real values of t. It is an even function, and real analytic for real values. It follows from the fact that the Riemann–Siegel theta function and the Riemann zeta function are both holomorphic in the critical strip, where the imaginary part of t is between −1/2 and 1/2, that the Z function is holomorphic in the critical strip also. Moreover, the real zeros of Z(t) are precisely the zeros of the zeta function along the critical line, and complex zeros in the Z function critical strip correspond to zeros off the critical line of the Riemann zeta function in its critical strip.
== The Riemann–Siegel formula ==
Calculation of the value of Z(t) for real t, and hence of the zeta function along the critical line, is greatly expedited by the Riemann–Siegel formula. This formula tells us
{\displaystyle Z(t)=2\sum _{n^{2}<t/2\pi }n^{-1/2}\cos(\theta (t)-t\log n)+R(t),}
where the error term R(t) has a complex asymptotic expression in terms of the function
{\displaystyle \Psi (z)={\frac {\cos 2\pi (z^{2}-z-1/16)}{\cos 2\pi z}}}
and its derivatives. If
{\displaystyle u=\left({\frac {t}{2\pi }}\right)^{1/4},\qquad N=\lfloor u^{2}\rfloor \qquad {\text{and}}\qquad p=u^{2}-N,}
then
{\displaystyle R(t)\sim (-1)^{N-1}\left(\Psi (p)u^{-1}-{\frac {1}{96\pi ^{2}}}\Psi ^{(3)}(p)u^{-3}+\cdots \right),}
where the ellipsis indicates we may continue on to higher and increasingly complex terms.
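The main sum plus only the leading correction term is already accurate enough to locate the low zeros of ζ on the critical line. The sketch below is our own (all helper names hypothetical); it approximates θ(t) by its standard asymptotic expansion, so it assumes t is not too small:

```python
import math

def theta(t):
    # Asymptotic expansion of the Riemann–Siegel theta function (valid for moderate t).
    return ((t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8
            + 1 / (48 * t) + 7 / (5760 * t ** 3))

def Psi(z):
    # The auxiliary function from the error-term expansion above.
    return math.cos(2 * math.pi * (z * z - z - 1 / 16)) / math.cos(2 * math.pi * z)

def Z(t):
    # Riemann–Siegel main sum plus the first correction term.
    u = (t / (2 * math.pi)) ** 0.25
    N = int(u * u)          # floor(u^2)
    p = u * u - N
    main = 2 * sum(n ** -0.5 * math.cos(theta(t) - t * math.log(n))
                   for n in range(1, N + 1))
    return main + (-1) ** (N - 1) * Psi(p) / u

# Z changes sign at each zero of zeta on the critical line, e.g. near t ≈ 14.1347:
print(Z(13.0) * Z(15.0))  # negative: a real zero lies between
```

With just one correction term, |Z| at the first zero t ≈ 14.134725 already comes out very close to zero.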
Other efficient series for Z(t) are known, in particular several using the incomplete gamma function. If
{\displaystyle Q(a,z)={\frac {\Gamma (a,z)}{\Gamma (a)}}={\frac {1}{\Gamma (a)}}\int _{z}^{\infty }u^{a-1}e^{-u}\,du}
then an especially nice example, with s = 1/2 + it, is
{\displaystyle Z(t)=2\Re \left(e^{i\theta (t)}\left(\sum _{n=1}^{\infty }Q\left({\frac {s}{2}},\pi in^{2}\right)-{\frac {\pi ^{s/2}e^{\pi is/4}}{s\Gamma \left({\frac {s}{2}}\right)}}\right)\right)}
== Behavior of the Z function ==
From the critical line theorem, it follows that the density of the real zeros of the Z function is
{\displaystyle {\frac {c}{2\pi }}\log {\frac {t}{2\pi }}}
for some constant c > 2/5. Hence, the number of zeros in an interval of a given size slowly increases. If the Riemann hypothesis is true, all of the zeros in the critical strip are real zeros, and the constant c is one. It is also postulated that all of these zeros are simple zeros.
=== An Omega theorem ===
Because of the zeros of the Z function, it exhibits oscillatory behavior. It also slowly grows both on average and in peak value. For instance, we have, even without the Riemann hypothesis, the Omega theorem that
{\displaystyle Z(t)=\Omega \left(\exp \left({\frac {3}{4}}{\sqrt {\frac {\log t}{\log \log t}}}\right)\right),}
where the notation means that Z(t) divided by the function within the Ω does not tend to zero with increasing t.
=== Average growth ===
The average growth of the Z function has also been much studied. We can find the root mean square (abbreviated RMS) average from
{\displaystyle {\frac {1}{T}}\int _{0}^{T}Z(t)^{2}\,dt\sim \log T}
or
{\displaystyle {\frac {1}{T}}\int _{T}^{2T}Z(t)^{2}\,dt\sim \log T,}
which tell us that the RMS size of Z(t) grows as {\displaystyle {\sqrt {\log t}}}.
This estimate can be improved to
{\displaystyle {\frac {1}{T}}\int _{0}^{T}Z(t)^{2}\,dt=\log T+(2\gamma -2\log(2\pi )-1)+O(T^{-15/22})}
If we increase the exponent, we get an average value which depends more on the peak values of Z. For fourth powers, we have
{\displaystyle {\frac {1}{T}}\int _{0}^{T}Z(t)^{4}\,dt\sim {\frac {1}{2\pi ^{2}}}(\log T)^{4},}
from which we may conclude that the fourth root of the mean fourth power grows as
{\displaystyle {\frac {1}{2^{1/4}{\sqrt {\pi }}}}\log t.}
=== The Lindelöf hypothesis ===
Higher even powers have been much studied, but less is known about the corresponding average value. It is conjectured, and follows from the Riemann hypothesis, that
{\displaystyle {\frac {1}{T}}\int _{0}^{T}Z(t)^{2k}\,dt=o(T^{\varepsilon })}
for every positive ε. Here the little "o" notation means that the left-hand side divided by the right-hand side converges to zero; in other words, little o is the negation of Ω. This conjecture is called the Lindelöf hypothesis, and is weaker than the Riemann hypothesis. It is normally stated in an important equivalent form, which is
{\displaystyle Z(t)=o(t^{\varepsilon });}
in either form it tells us the rate of growth of the peak values cannot be too high. The best known bound on this rate of growth is not strong, telling us that any
{\displaystyle \epsilon >{\frac {89}{570}}\approx 0.156}
is suitable. It would be astonishing to find that the Z function grew anywhere close to as fast as this. Littlewood proved that on the Riemann hypothesis,
{\displaystyle Z(t)=o\left(\exp \left({\frac {10\log t}{\log \log t}}\right)\right),}
and this seems far more likely.
== References ==
Edwards, H.M. (1974). Riemann's zeta function. Pure and Applied Mathematics. Vol. 58. New York-London: Academic Press. ISBN 0-12-232750-0. Zbl 0315.10035.
Ivić, Aleksandar (2013). The theory of Hardy's Z-function. Cambridge Tracts in Mathematics. Vol. 196. Cambridge: Cambridge University Press. ISBN 978-1-107-02883-8. Zbl 1269.11075.
Paris, R. B.; Kaminski, D. (2001). Asymptotics and Mellin-Barnes Integrals. Encyclopedia of Mathematics and Its Applications. Vol. 85. Cambridge: Cambridge University Press. ISBN 0-521-79001-8. Zbl 0983.41019.
Ramachandra, K. (February 1996). Lectures on the mean-value and Omega-theorems for the Riemann Zeta-function. Lectures on Mathematics and Physics. Mathematics. Tata Institute of Fundamental Research. Vol. 85. Berlin: Springer-Verlag. ISBN 3-540-58437-4. Zbl 0845.11003.
Titchmarsh, E. C. (1986) [1951]. Heath-Brown, D.R. (ed.). The Theory of the Riemann Zeta-Function (second revised ed.). Oxford University Press.
== External links ==
Weisstein, Eric W. "Riemann–Siegel Functions". MathWorld.
Wolfram Research – Riemann-Siegel function Z (includes function plotting and evaluation)
The Möbius function μ(n) is a multiplicative function in number theory introduced by the German mathematician August Ferdinand Möbius (also transliterated Moebius) in 1832. It is ubiquitous in elementary and analytic number theory and most often appears as part of its namesake the Möbius inversion formula. Following work of Gian-Carlo Rota in the 1960s, generalizations of the Möbius function were introduced into combinatorics, and are similarly denoted μ(x).
== Definition ==
The Möbius function is defined by
{\displaystyle \mu (n)={\begin{cases}1&{\text{if }}n=1\\(-1)^{k}&{\text{if }}n{\text{ is the product of }}k{\text{ distinct primes}}\\0&{\text{if }}n{\text{ is divisible by a square}}>1\end{cases}}}
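A direct implementation of this case analysis by trial division makes the definition concrete (a sketch; the function name is ours):

```python
def mobius(n):
    # mu(n) computed by trial-division factorisation of n.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:    # p^2 divides the original n: mu(n) = 0
                return 0
            result = -result  # one more distinct prime factor
        else:
            p += 1
    if n > 1:                 # a single leftover prime factor
        result = -result
    return result

print([mobius(n) for n in range(1, 11)])  # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```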
The Möbius function can alternatively be represented as
{\displaystyle \mu (n)=\delta _{\omega (n)\Omega (n)}\lambda (n),}
where δ_ij is the Kronecker delta, λ(n) is the Liouville function, ω(n) is the number of distinct prime divisors of n, and Ω(n) is the number of prime factors of n, counted with multiplicity.
In another characterization, due to Carl Friedrich Gauss, the sum of the primitive roots of a prime p is congruent to μ(p − 1) modulo p.
== Values ==
The values of μ(n) for the first 50 positive numbers are listed as sequence A008683 in the OEIS; larger values can be checked in Wolfram Alpha or in the b-file of the OEIS entry.
== Applications ==
=== Mathematical series ===
The Dirichlet series that generates the Möbius function is the (multiplicative) inverse of the Riemann zeta function; if s is a complex number with real part larger than 1 we have
{\displaystyle \sum _{n=1}^{\infty }{\frac {\mu (n)}{n^{s}}}={\frac {1}{\zeta (s)}}.}
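At s = 2 this series can be checked numerically against 1/ζ(2) = 6/π² ≈ 0.6079. This is a sketch with our own helper names; μ is computed by trial division:

```python
import math

def mobius(n):
    # mu(n) by trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

# Partial sum of the Dirichlet series at s = 2.
partial = sum(mobius(n) / n ** 2 for n in range(1, 20001))
print(partial, 6 / math.pi ** 2)  # both ≈ 0.6079
```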
This may be seen from its Euler product
{\displaystyle {\frac {1}{\zeta (s)}}=\prod _{p{\text{ prime}}}{\left(1-{\frac {1}{p^{s}}}\right)}=\left(1-{\frac {1}{2^{s}}}\right)\left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{5^{s}}}\right)\cdots }
Also:
{\displaystyle \sum \limits _{n=1}^{\infty }{\frac {|\mu (n)|}{n^{s}}}={\frac {\zeta (s)}{\zeta (2s)}};}
{\displaystyle \sum _{n=1}^{\infty }{\frac {\mu (n)}{n}}=0;}
{\displaystyle \sum \limits _{n=1}^{\infty }{\frac {\mu (n)\ln n}{n}}=-1;}
{\displaystyle \sum \limits _{n=1}^{\infty }{\frac {\mu (n)\ln ^{2}n}{n}}=-2\gamma ,}
where γ is Euler's constant.
The Lambert series for the Möbius function is
{\displaystyle \sum _{n=1}^{\infty }{\frac {\mu (n)q^{n}}{1-q^{n}}}=q,}
which converges for |q| < 1. For prime α ≥ 2, we also have
{\displaystyle \sum _{n=1}^{\infty }{\frac {\mu (\alpha n)q^{n}}{q^{n}-1}}=\sum _{n\geq 0}q^{\alpha ^{n}},\qquad |q|<1.}
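The first Lambert series converges geometrically, so a short truncation already reproduces q to machine precision. A quick check at q = 1/2 (helper name ours):

```python
def mobius(n):
    # mu(n) by trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

q = 0.5
# Truncated Lambert series: terms decay like q^n, so 60 terms suffice.
total = sum(mobius(n) * q ** n / (1 - q ** n) for n in range(1, 60))
print(total)  # ≈ 0.5
```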
=== Algebraic number theory ===
Gauss proved that for a prime number p the sum of its primitive roots is congruent to μ(p − 1) (mod p).
If F_q denotes the finite field of order q (where q is necessarily a prime power), then the number N of monic irreducible polynomials of degree n over F_q is given by
{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{\frac {n}{d}}.}
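For instance, over F_2 this formula counts 2 monic irreducible linear polynomials, 1 quadratic, 2 cubics and 3 quartics. A sketch (function names ours):

```python
def mobius(n):
    # mu(n) by trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def num_irreducible(q, n):
    # N(q, n) = (1/n) * sum over divisors d of n of mu(d) q^(n/d).
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n  # the sum is always divisible by n

print([num_irreducible(2, n) for n in range(1, 5)])  # [2, 1, 2, 3]
```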
The Möbius function is used in the Möbius inversion formula.
=== Physics ===
The Möbius function also arises in the primon gas or free Riemann gas model of supersymmetry. In this theory, the fundamental particles or "primons" have energies log(p). Under second quantization, multiparticle excitations are considered; these are given by log(n) for any natural number n. This follows from the fact that the factorization of the natural numbers into primes is unique.
In the free Riemann gas, any natural number can occur, if the primons are taken as bosons. If they are taken as fermions, then the Pauli exclusion principle excludes squares. The operator (−1)^F that distinguishes fermions and bosons is then none other than the Möbius function μ(n).
The free Riemann gas has a number of other interesting connections to number theory, including the fact that the partition function is the Riemann zeta function. This idea underlies Alain Connes's attempted proof of the Riemann hypothesis.
== Properties ==
The Möbius function is multiplicative (i.e., μ(ab) = μ(a)μ(b) whenever a and b are coprime).
Proof: Given two coprime numbers m ≥ n, we induct on mn. If mn = 1, then μ(mn) = 1 = μ(m)μ(n). Otherwise, m > n ≥ 1, so
{\displaystyle {\begin{aligned}0&=\sum _{d|mn}\mu (d)\\&=\mu (mn)+\sum _{d|mn;d<mn}\mu (d)\\&{\stackrel {\text{induction}}{=}}\mu (mn)-\mu (m)\mu (n)+\sum _{d|m;d'|n}\mu (d)\mu (d')\\&=\mu (mn)-\mu (m)\mu (n)+\sum _{d|m}\mu (d)\sum _{d'|n}\mu (d')\\&=\mu (mn)-\mu (m)\mu (n)+0\end{aligned}}}
The sum of the Möbius function over all positive divisors of n (including n itself and 1) is zero except when n = 1:
{\displaystyle \sum _{d\mid n}\mu (d)={\begin{cases}1&{\text{if }}n=1,\\0&{\text{if }}n>1.\end{cases}}}
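A brute-force check of this divisor-sum identity (helper names ours):

```python
def mobius(n):
    # mu(n) by trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def divisor_mu_sum(n):
    # sum of mu(d) over all positive divisors d of n
    return sum(mobius(d) for d in range(1, n + 1) if n % d == 0)

print([divisor_mu_sum(n) for n in range(1, 9)])  # [1, 0, 0, 0, 0, 0, 0, 0]
```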
The equality above leads to the important Möbius inversion formula and is the main reason why μ is of relevance in the theory of multiplicative and arithmetic functions.
Other applications of μ(n) in combinatorics are connected with the use of the Pólya enumeration theorem in combinatorial groups and combinatorial enumerations.
There is a formula for calculating the Möbius function without directly knowing the factorization of its argument:
{\displaystyle \mu (n)=\sum _{\stackrel {1\leq k\leq n}{\gcd(k,\,n)=1}}e^{2\pi i{\frac {k}{n}}},}
i.e. μ(n) is the sum of the primitive n-th roots of unity. (However, the computational complexity of this definition is at least the same as that of the Euler product definition.)
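This root-of-unity formula can be evaluated directly with complex arithmetic and compared against the well-known values of μ (a sketch; the function name is ours):

```python
import cmath
from math import gcd

def mobius_from_roots(n):
    # Sum of the primitive n-th roots of unity, rounded back to an integer.
    total = sum(cmath.exp(2j * cmath.pi * k / n)
                for k in range(1, n + 1) if gcd(k, n) == 1)
    return round(total.real)  # imaginary part cancels to rounding error

print([mobius_from_roots(n) for n in range(1, 11)])
# [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```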
Other identities satisfied by the Möbius function include
{\displaystyle \sum _{k\leq n}\left\lfloor {\frac {n}{k}}\right\rfloor \mu (k)=1}
and
{\displaystyle \sum _{jk\leq n}\sin \left({\frac {\pi jk}{2}}\right)\mu (k)=1.}
The first of these is a classical result while the second was published in 2020. Similar identities hold for the Mertens function.
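The classical floor identity is easy to verify by brute force (helper names ours):

```python
def mobius(n):
    # mu(n) by trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def floor_identity(n):
    # sum over k <= n of floor(n/k) * mu(k); equals 1 for every n >= 1
    return sum((n // k) * mobius(k) for k in range(1, n + 1))

print([floor_identity(n) for n in range(1, 8)])  # [1, 1, 1, 1, 1, 1, 1]
```

The identity follows by swapping the order of summation: floor(n/k) counts the multiples of k up to n, so the sum collapses to the divisor-sum identity applied to each m ≤ n.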
=== Proof of the formula for the sum of μ over divisors ===
The formula
{\displaystyle \sum _{d\mid n}\mu (d)={\begin{cases}1&{\text{if }}n=1,\\0&{\text{if }}n>1\end{cases}}}
can be written using Dirichlet convolution as:
{\displaystyle 1*\mu =\varepsilon }
where ε is the identity under the convolution.
One way of proving this formula is by noting that the Dirichlet convolution of two multiplicative functions is again multiplicative. Thus it suffices to prove the formula for powers of primes. Indeed, for any prime p and for any k > 0,
{\displaystyle 1*\mu (p^{k})=\sum _{d\mid p^{k}}\mu (d)=\mu (1)+\mu (p)+\sum _{1<m\leq k}\mu (p^{m})=1-1+0=0=\varepsilon (p^{k}),}
while for n = 1,
{\displaystyle 1*\mu (1)=\sum _{d\mid 1}\mu (d)=\mu (1)=1=\varepsilon (1).}
==== Other proofs ====
Another way of proving this formula is by using the identity
{\displaystyle \mu (n)=\sum _{\stackrel {1\leq k\leq n}{\gcd(k,\,n)=1}}e^{2\pi i{\frac {k}{n}}}.}
The formula above is then a consequence of the fact that the n-th roots of unity sum to 0, since each n-th root of unity is a primitive d-th root of unity for exactly one divisor d of n.
However it is also possible to prove this identity from first principles. First note that it is trivially true when n = 1. Suppose then that n > 1. Then there is a bijection between the factors d of n for which μ(d) ≠ 0 and the subsets of the set of all prime factors of n. The asserted result follows from the fact that every non-empty finite set has an equal number of odd- and even-cardinality subsets.
This last fact can be shown easily by induction on the cardinality |S| of a non-empty finite set S. First, if |S| = 1, there is exactly one odd-cardinality subset of S, namely S itself, and exactly one even-cardinality subset, namely ∅. Next, if |S| > 1, then divide the subsets of S into two subclasses depending on whether they contain or not some fixed element x in S. There is an obvious bijection between these two subclasses, pairing those subsets that have the same complement relative to the subset {x}. Also, one of these two subclasses consists of all the subsets of the set S ∖ {x}, and therefore, by the induction hypothesis, has an equal number of odd- and even-cardinality subsets. These subsets in turn correspond bijectively to the even- and odd-cardinality {x}-containing subsets of S. The inductive step follows directly from these two bijections.
A related result is that the alternating sum of binomial coefficients vanishes, ∑_k (−1)^k C(n, k) = 0 for n ≥ 1, which expresses the same balance between odd- and even-cardinality subsets.
=== Average order ===
The mean value (in the sense of average orders) of the Möbius function is zero. This statement is, in fact, equivalent to the prime number theorem.
=== μ(n) sections ===
μ(n) = 0 if and only if n is divisible by the square of a prime. The first numbers with this property are
4, 8, 9, 12, 16, 18, 20, 24, 25, 27, 28, 32, 36, 40, 44, 45, 48, 49, 50, 52, 54, 56, 60, 63, ... (sequence A013929 in the OEIS).
If n is prime, then μ(n) = −1, but the converse is not true. The first non-prime n for which μ(n) = −1 is 30 = 2 × 3 × 5. The first such numbers with three distinct prime factors (sphenic numbers) are
30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154, 165, 170, 174, 182, 186, 190, 195, 222, ... (sequence A007304 in the OEIS).
and the first such numbers with 5 distinct prime factors are
2310, 2730, 3570, 3990, 4290, 4830, 5610, 6006, 6090, 6270, 6510, 6630, 7410, 7590, 7770, 7854, 8610, 8778, 8970, 9030, 9282, 9570, 9690, ... (sequence A046387 in the OEIS).
== Mertens function ==
In number theory another arithmetic function closely related to the Möbius function is the Mertens function, defined by
{\displaystyle M(n)=\sum _{k=1}^{n}\mu (k)}
for every natural number n. This function is closely linked with the positions of zeroes of the Riemann zeta function. See the article on the Mertens conjecture for more information about the connection between M(n) and the Riemann hypothesis.
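A direct sketch computing M(n) from the definition (helper names ours):

```python
def mobius(n):
    # mu(n) by trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def mertens(n):
    # M(n) = sum of mu(k) for k = 1..n
    return sum(mobius(k) for k in range(1, n + 1))

print([mertens(n) for n in range(1, 11)])
# [1, 0, -1, -1, -2, -1, -2, -2, -2, -1]
```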
From the formula
{\displaystyle \mu (n)=\sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}e^{2\pi i{\frac {k}{n}}},}
it follows that the Mertens function is given by
{\displaystyle M(n)=-1+\sum _{a\in {\mathcal {F}}_{n}}e^{2\pi ia},}
where F_n is the Farey sequence of order n.
This formula is used in the proof of the Franel–Landau theorem.
== Generalizations ==
=== Incidence algebras ===
In combinatorics, every locally finite partially ordered set (poset) is assigned an incidence algebra. One distinguished member of this algebra is that poset's "Möbius function". The classical Möbius function treated in this article is essentially equal to the Möbius function of the set of all positive integers partially ordered by divisibility. See the article on incidence algebras for the precise definition and several examples of these general Möbius functions.
=== Popovici's function ===
Constantin Popovici defined a generalised Möbius function
{\displaystyle \mu _{k}=\mu *\cdots *\mu }
to be the k-fold Dirichlet convolution of the Möbius function with itself. It is thus again a multiplicative function with
{\displaystyle \mu _{k}\left(p^{a}\right)=(-1)^{a}{\binom {k}{a}},}
where the binomial coefficient is taken to be zero if a > k. The definition may be extended to complex k by reading the binomial as a polynomial in k.
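Popovici's μ_k can be computed by repeated Dirichlet convolution and checked against the closed form at prime powers. A sketch (all names ours):

```python
from math import comb

def mobius(n):
    # mu(n) by trial division.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        else:
            p += 1
    return -result if n > 1 else result

def dirichlet(f, g):
    # Dirichlet convolution of sequences indexed from 1 (index 0 unused).
    N = len(f) - 1
    h = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

N = 64
mu = [0] + [mobius(n) for n in range(1, N + 1)]
mu2 = dirichlet(mu, mu)        # Popovici's mu_2 = mu * mu
print(mu2[2], mu2[4], mu2[8])  # -2, 1, 0 — matches (-1)^a C(2, a) at p^a = 2^a
```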
== Implementations ==
Mathematica
Maxima
geeksforgeeks C++, Python3, Java, C#, PHP, JavaScript
Rosetta Code
Sage
== See also ==
== Notes ==
=== Citations ===
== Sources ==
== External links ==
Weisstein, Eric W. "Möbius function". MathWorld.
In mathematics, the main conjecture of Iwasawa theory is a deep relationship between p-adic L-functions and ideal class groups of cyclotomic fields, proved by Kenkichi Iwasawa for primes satisfying the Kummer–Vandiver conjecture and proved for all primes by
Mazur and Wiles (1984). The Herbrand–Ribet theorem and the Gras conjecture are both easy consequences of the main conjecture.
There are several generalizations of the main conjecture, to totally real fields, CM fields, elliptic curves, and so on.
== Motivation ==
Iwasawa (1969a) was partly motivated by an analogy with Weil's description of the zeta function of an algebraic curve over a finite field in terms of eigenvalues of the Frobenius endomorphism on its Jacobian variety. In this analogy,
The action of the Frobenius corresponds to the action of the group Γ.
The Jacobian of a curve corresponds to a module X over Γ defined in terms of ideal class groups.
The zeta function of a curve over a finite field corresponds to a p-adic L-function.
Weil's theorem relating the eigenvalues of Frobenius to the zeros of the zeta function of the curve corresponds to Iwasawa's main conjecture relating the action of the Iwasawa algebra on X to zeros of the p-adic zeta function.
== History ==
The main conjecture of Iwasawa theory was formulated as an assertion that two methods of defining p-adic L-functions (by module theory, by interpolation) should coincide, as far as that was well-defined. This was proved by Mazur & Wiles (1984) for Q, and for all totally real number fields by Wiles (1990). These proofs were modeled upon Ken Ribet's proof of the converse to Herbrand's theorem (the Herbrand–Ribet theorem).
Karl Rubin found a more elementary proof of the Mazur–Wiles theorem by using Thaine's method and Kolyvagin's Euler systems, described in Lang (1990) and Washington (1997), and later proved other generalizations of the main conjecture for imaginary quadratic fields.
In 2014, Christopher Skinner and Eric Urban proved several cases of the main conjectures for a large class of modular forms. As a consequence, for a modular elliptic curve over the rational numbers, they prove that the vanishing of the Hasse–Weil L-function L(E, s) of E at s = 1 implies that the p-adic Selmer group of E is infinite. Combined with theorems of Gross-Zagier and Kolyvagin, this gave a conditional proof (on the Tate–Shafarevich conjecture) of the conjecture that E has infinitely many rational points if and only if L(E, 1) = 0, a (weak) form of the Birch–Swinnerton-Dyer conjecture. These results were used by Manjul Bhargava, Skinner, and Wei Zhang to prove that a positive proportion of elliptic curves satisfy the Birch–Swinnerton-Dyer conjecture.
== Statement ==
p is a prime number.
Fn is the field Q(ζ) where ζ is a root of unity of order pn+1.
Γ is the largest subgroup of the absolute Galois group of F∞ isomorphic to the p-adic integers.
γ is a topological generator of Γ.
Ln is the p-Hilbert class field of Fn.
Hn is the Galois group Gal(Ln/Fn), isomorphic to the subgroup of elements of the ideal class group of Fn whose order is a power of p.
H∞ is the inverse limit of the Galois groups Hn.
V is the vector space H∞⊗ZpQp.
ω is the Teichmüller character.
Vi is the ωi eigenspace of V.
hp(ωi,T) is the characteristic polynomial of γ acting on the vector space Vi.
Lp is the p-adic L-function with Lp(ωi, 1−k) = −Bk(ωi−k)/k, where B is a generalized Bernoulli number.
u is the unique p-adic number satisfying γ(ζ) = ζu for all p-power roots of unity ζ.
Gp is the power series with Gp(ωi,us–1) = Lp(ωi,s).
The main conjecture of Iwasawa theory proved by Mazur and Wiles states that if i is an odd integer not congruent to 1 mod p−1, then the ideals of
{\displaystyle \mathbf {Z} _{p}[[T]]}
generated by hp(ωi,T) and Gp(ω1−i,T) are equal.
== Notes ==
== Sources == | Wikipedia/Main_conjecture_of_Iwasawa_theory |
Comptes rendus de l'Académie des Sciences (French pronunciation: [kɔ̃t ʁɑ̃dy də lakademi de sjɑ̃s], Proceedings of the Academy of Sciences), or simply Comptes rendus, is a French scientific journal published since 1835. It is the proceedings of the French Academy of Sciences. It is currently split into seven sections, published on behalf of the Academy until 2020 by Elsevier: Mathématique, Mécanique, Physique, Géoscience, Palévol, Chimie, and Biologies. As of 2020, the Comptes Rendus journals are published by the Academy with a diamond open access model.
== Naming history ==
The journal has had several name changes and splits over the years.
=== 1835–1965 ===
Comptes rendus was initially established in 1835 as Comptes rendus hebdomadaires des séances de l'Académie des Sciences. It began as an alternative publication pathway for more prompt publication than the Mémoires de l'Académie des Sciences, which had been published since 1666. The Mémoires, which continued to be published alongside the Comptes rendus throughout the nineteenth century, had a publication cycle which resulted in memoirs being published years after they had been presented to the Academy. Some academicians continued to publish in the Mémoires because of the strict page limits in the Comptes rendus.
=== 1966–1980 ===
After 1965 this title was split into five sections:
Série A (Sciences mathématiques) – mathematics
Série B (Sciences physiques) – physics and geosciences
Série C (Sciences chimiques) – chemistry
Série D (Sciences naturelles) – life sciences
Vie académique – academy notices and miscellanea (between 1968 and 1970, and again between 1979 and 1983)
Series A and B were published together in one volume except in 1974.
=== 1981–1993 ===
The areas were rearranged as follows:
Série I - (Sciences Mathématiques) - mathematics
Série II (Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre) - physics, chemistry, astronomy and geosciences
Série III - (Sciences de la vie) - life sciences
Vie académique – academy notices and miscellanea (the last 3 volumes of the second edition, between 1981 and 1983)
Vie des sciences – A renamed Vie académique (from 1984 to 1996)
=== 1994–2001 ===
These publications remained the same:
Série I (Sciences mathématiques) – mathematics
Série III (Sciences de la Vie) – life sciences
Vie des sciences – A renamed Vie académique (until 1996)
The areas published in Série II were slowly split into other publications in ways that caused some confusion.
In 1994, Série II, which covered physics, chemistry, astronomy and geosciences, was replaced by Série IIA and Série IIB. Série IIA was exclusive to geosciences, and Série IIB covered chemistry and astronomy and the now-distinct mechanics and physics.
In 1998, Série IIB covered mechanics, physics and astronomy; chemistry got its separate publication, Série IIC.
In 2000, Série IIB became dedicated exclusively to mechanics in May. Astronomy got redefined as astrophysics, which along with physics was covered by the new Série IV. Série IV began publishing in March; however, Séries IIB published two more issues on physics and astrophysics in April and May before starting the new run.
=== 2002 onwards ===
The present naming and subject assignment was established in 2002:
Comptes Rendus Biologies – life sciences except paleontology and evolutionary biology. Continues in part Série IIC (biochemistry) and III.
Comptes Rendus Chimie – chemistry. Continues in part Série IIC.
Comptes Rendus Géoscience – geosciences. Continues in part Série IIA.
Comptes Rendus Mathématique – mathematics. Continues Série I.
Comptes Rendus Mécanique – mechanics. Continues Série IIB.
Comptes Rendus Palévol – paleontology and evolutionary biology. Continues in part Série IIA and III.
Comptes Rendus Physique – topical issues in physics (mainly optics, astrophysics and particle physics). Continues Série IV.
== Online open archives ==
The Comptes rendus de l'Académie des Sciences publications are available through the National Library of France as part of its free online library and archive of other historical documents and works of art, Gallica. The publications available online are:
Comptes rendus hebdomadaires des séances de l'Académie des sciences (1835–1965)
Séries A et B, Sciences Mathématiques et Sciences Physiques (1966–1973)
Série A, Sciences Mathématiques, (1974)
Série B, Sciences Physiques, (1974)
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980)
Besides the material for this timeframe, this collection also has a separate set of scans of all the material of Série I - Mathématique from 1981 to 1990
Série C, Sciences Chimiques
Série D, Sciences Naturelles
Vie Académique (1968–1970)
Vie Académique (1979–1983)
Série I - Mathématique
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980) has a different set of scans for all of this material.
Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre
The link to Série I - Mathématique (1984–1996) includes a different set of scans for the first 3 issues of 1981 of this series.
Série III - Sciences de la vie
Série I - Mathématique
Séries A et B, Sciences Mathématiques et Sciences Physiques (1975–1980) has a different set of scans for this series' material until 1990.
This collection contains a different set of scans of the 1981 material of Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre (1981–1983).
Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre (1984–1994)
The first year (1994) of material of Série IIb - Mécanique, physique, chimie, astronomie (1995–1996) is misfiled in this collection.
Série IIa - Sciences de la terre et des planètes (1994–1996)
Série IIb - Mécanique, physique, chimie, astronomie (1995–1996)
The first year of material (1994) is misfiled together with Série II - Mécanique-physique, Chimie, Sciences de l'univers, Sciences de la Terre (1994–1996).
Série III - Sciences de la vie
Vie des sciences
All publications from 1997 to 2019 were published commercially by Elsevier. From 2020 on, the Comptes Rendus Palévol have been published by the Muséum National d'Histoire Naturelle (Paris) for the Académie des Sciences. All other series of the Comptes Rendus of the Académie des Sciences have been published (from 2020 on) by Mersenne under a Diamond Open Access model.
== References ==
== External links ==
"Comptes Rendus official website". French Academy of Sciences. Retrieved 23 May 2024.
Comptes Rendus de l'Académie des sciences numérisés sur le site de la Bibliothèque nationale de France
Scholarly Societies project: French Academy of Sciences page; provides information on naming and publication history up to 1980, as well as on previous journals of the Academy. Retrieved 10 December 2006.
Bibliothèque nationale de France: Catalog record and full-text scans of Comptes rendus. Retrieved 22 June 2009.
ScienceDirect list of titles (from 1997 onwards): https://www.sciencedirect.com/browse/journals-and-books?searchPhrase=comptes | Wikipedia/Comptes_rendus_hebdomadaires_des_séances_de_l'Académie_des_Sciences |
In mathematics, the study of special values of L-functions is a subfield of number theory devoted to generalising formulae such as the Leibniz formula for π, namely
{\displaystyle 1\,-\,{\frac {1}{3}}\,+\,{\frac {1}{5}}\,-\,{\frac {1}{7}}\,+\,{\frac {1}{9}}\,-\,\cdots \;=\;{\frac {\pi }{4}},\!}
by the recognition that the expression on the left-hand side is also L(1), where L(s) is the Dirichlet L-function for the field of Gaussian rational numbers. This formula is a special case of the analytic class number formula, and in those terms reads that the Gaussian field has class number 1. The factor 1/4 on the right-hand side of the formula corresponds to the fact that this field contains four roots of unity.
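As a quick numerical illustration of the Leibniz formula (an editorial sketch, not drawn from the article's sources), the partial sums can be checked to approach π/4; the error after N terms is roughly 1/(4N):

```python
import math

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - 1/7 + ...
N = 1_000_000
s = sum((-1) ** k / (2 * k + 1) for k in range(N))

# After a million terms the sum agrees with pi/4 to about 6 decimal places.
assert abs(s - math.pi / 4) < 1e-6
```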
== Conjectures ==
There are two families of conjectures, formulated for general classes of L-functions (the very general setting being for L-functions associated to Chow motives over number fields), the division into two reflecting the questions of:
how to replace π in the Leibniz formula by some other "transcendental" number (regardless of whether it is currently possible for transcendental number theory to provide a proof of the transcendence); and
how to generalise the rational factor in the formula (class number divided by number of roots of unity) by some algebraic construction of a rational number that will represent the ratio of the L-function value to the "transcendental" factor.
Subsidiary explanations are given for the integer values of n for which formulae of this sort involving L(n) can be expected to hold.
The conjectures for (a) are called Beilinson's conjectures, for Alexander Beilinson. The idea is to abstract from the regulator of a number field to some "higher regulator" (the Beilinson regulator), a determinant constructed on a real vector space that comes from algebraic K-theory.
The conjectures for (b) are called the Bloch–Kato conjectures for special values (for Spencer Bloch and Kazuya Kato; this circle of ideas is distinct from the Bloch–Kato conjecture of K-theory, extending the Milnor conjecture, a proof of which was announced in 2009). They are also called the Tamagawa number conjecture, a name arising via the Birch–Swinnerton-Dyer conjecture and its formulation as an elliptic curve analogue of the Tamagawa number problem for linear algebraic groups. In a further extension, the equivariant Tamagawa number conjecture (ETNC) has been formulated, to consolidate the connection of these ideas with Iwasawa theory, and its so-called Main Conjecture.
=== Current status ===
All of these conjectures are known to be true only in special cases.
== See also ==
Brumer–Stark conjecture
== Notes ==
== References ==
Kings, Guido (2003), "The Bloch–Kato conjecture on special values of L-functions. A survey of known results", Journal de théorie des nombres de Bordeaux, 15 (1): 179–198, doi:10.5802/jtnb.396, ISSN 1246-7405, MR 2019010
"Beilinson conjectures", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"K-functor in algebraic geometry", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Mathar, Richard J. (2010), "Table of Dirichlet L-Series and Prime Zeta Modulo Functions for small moduli", arXiv:1008.2547 [math.NT]
== External links ==
L-Funktionen und die Vermutungen von Deligne und Beilinson (L-functions and the conjectures of Deligne and Beilinson)
In mathematics, the Ramanujan conjecture, due to Srinivasa Ramanujan (1916, p. 176), states that Ramanujan's tau function given by the Fourier coefficients τ(n) of the cusp form Δ(z) of weight 12
{\displaystyle \Delta (z)=\sum _{n>0}\tau (n)q^{n}=q\prod _{n>0}\left(1-q^{n}\right)^{24}=q-24q^{2}+252q^{3}-1472q^{4}+4830q^{5}-\cdots ,}
where {\displaystyle q=e^{2\pi iz}}, satisfies

{\displaystyle |\tau (p)|\leq 2p^{11/2},}
when p is a prime number. The generalized Ramanujan conjecture or Ramanujan–Petersson conjecture, introduced by Petersson (1930), is a generalization to other modular forms or automorphic forms.
== Ramanujan L-function ==
The Riemann zeta function and the Dirichlet L-functions satisfy the Euler product

{\displaystyle L(s)=\prod _{p}\left(1+{\frac {a(p)}{p^{s}}}+{\frac {a(p^{2})}{p^{2s}}}+\cdots \right)\qquad (1)}

and, due to their completely multiplicative property,

{\displaystyle L(s)=\prod _{p}\left(1-{\frac {a(p)}{p^{s}}}\right)^{-1}.\qquad (2)}

Are there L-functions other than the Riemann zeta function and the Dirichlet L-functions satisfying the above relations? Indeed, the L-functions of automorphic forms satisfy the Euler product (1) but they do not satisfy (2) because they do not have the completely multiplicative property. However, Ramanujan discovered that the L-function of the modular discriminant satisfies the modified relation

{\displaystyle L(s,\Delta )=\prod _{p}\left(1-{\frac {\tau (p)}{p^{s}}}+{\frac {1}{p^{2s-11}}}\right)^{-1},\qquad (3)}
where τ(p) is Ramanujan's tau function. The term

{\displaystyle {\frac {1}{p^{2s-11}}}}

is thought of as the difference from the completely multiplicative property. The above L-function is called Ramanujan's L-function.
== Ramanujan conjecture ==
Ramanujan conjectured the following:
τ is multiplicative,
τ is not completely multiplicative, but for prime p and j in N we have: τ(p^{j+1}) = τ(p)τ(p^j) − p^{11}τ(p^{j−1}), and
|τ(p)| ≤ 2p^{11/2}.
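All three conjectured properties can be observed numerically from the q-expansion of Δ. The following sketch (an editorial illustration, not from the article) computes τ(n) for small n by truncated polynomial multiplication and checks multiplicativity, the recursion at p = 2, and the Ramanujan bound:

```python
# Coefficients tau(n) of Delta(z) = q * prod_{n>0} (1 - q^n)^24,
# computed with exact integer arithmetic, truncated at q^N.
N = 30
tau = [0] * (N + 1)
tau[1] = 1  # start from the leading factor q
for n in range(1, N + 1):
    for _ in range(24):  # multiply by (1 - q^n), 24 times
        for k in range(N, n - 1, -1):
            tau[k] -= tau[k - n]

assert tau[1:7] == [1, -24, 252, -1472, 4830, -6048]
assert tau[6] == tau[2] * tau[3]               # multiplicativity
assert tau[4] == tau[2] ** 2 - 2 ** 11         # tau(p^{j+1}) recursion at p = 2
assert all(abs(tau[p]) <= 2 * p ** 5.5 for p in (2, 3, 5, 7))  # |tau(p)| <= 2 p^{11/2}
```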
Ramanujan observed that the quadratic equation in u = p^{−s} appearing in the denominator of the RHS of (3),

{\displaystyle 1-\tau (p)u+p^{11}u^{2}}

always had imaginary roots in the many examples he computed. The relationship between the roots and the coefficients of a quadratic equation leads to the third relation, called Ramanujan's conjecture. Moreover, for the Ramanujan tau function, let the roots of the above quadratic equation be α and β; then
{\displaystyle \operatorname {Re} (\alpha )=\operatorname {Re} (\beta )=p^{11/2},}
which looks like the Riemann Hypothesis. It implies an estimate that is only slightly weaker for all the τ(n), namely for any ε > 0:
{\displaystyle O\left(n^{11/2+\varepsilon }\right).}
In 1917, L. Mordell proved the first two relations using techniques from complex analysis, specifically using what are now known as Hecke operators. The third statement followed from the proof of the Weil conjectures by Deligne (1974). The formulations required to show that it was a consequence were delicate, and not at all obvious. It was the work of Michio Kuga with contributions also by Mikio Sato, Goro Shimura, and Yasutaka Ihara, followed by Deligne (1971). The existence of the connection inspired some of the deep work in the late 1960s when the consequences of the étale cohomology theory were being worked out.
== Ramanujan–Petersson conjecture for modular forms ==
In 1937, Erich Hecke used Hecke operators to generalize the method of Mordell's proof of the first two conjectures to the automorphic L-function of the discrete subgroups Γ of SL(2, Z). For any modular form
{\displaystyle f(z)=\sum _{n=0}^{\infty }a_{n}q^{n}\qquad q=e^{2\pi iz},}
one can form the Dirichlet series
{\displaystyle \varphi (s)=\sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}.}
For a modular form f(z) of weight k ≥ 2 for Γ, φ(s) converges absolutely in Re(s) > k, because a_n = O(n^{k−1+ε}). Since f is a modular form of weight k, (s − k)φ(s) turns out to be entire, and R(s) = (2π)^{−s}Γ(s)φ(s) satisfies the functional equation:
{\displaystyle R(k-s)=(-1)^{k/2}R(s);}
this was proved by Wilton in 1929. This correspondence between f and φ is one to one (a_0 = (−1)^{k/2} Res_{s=k} R(s)). Let g(x) = f(ix) − a_0 for x > 0; then g(x) is related to R(s) via the Mellin transform
{\displaystyle R(s)=\int _{0}^{\infty }g(x)x^{s-1}\,dx\Leftrightarrow g(x)={\frac {1}{2\pi i}}\int _{\operatorname {Re} (s)=\sigma _{0}}R(s)x^{-s}\,ds.}
This correspondence relates the Dirichlet series that satisfy the above functional equation with the automorphic form of a discrete subgroup of SL(2, Z).
In the case k ≥ 3 Hans Petersson introduced a metric on the space of modular forms, called the Petersson metric (also see Weil–Petersson metric). This conjecture was named after him. Under the Petersson metric it is shown that we can define the orthogonality on the space of modular forms as the space of cusp forms and its orthogonal space and they have finite dimensions. Furthermore, we can concretely calculate the dimension of the space of holomorphic modular forms, using the Riemann–Roch theorem (see the dimensions of modular forms).
Deligne (1971) used the Eichler–Shimura isomorphism to reduce the Ramanujan conjecture to the Weil conjectures that he later proved. The more general Ramanujan–Petersson conjecture for holomorphic cusp forms in the theory of elliptic modular forms for congruence subgroups has a similar formulation, with exponent (k − 1)/2 where k is the weight of the form. These results also follow from the Weil conjectures, except for the case k = 1, where it is a result of Deligne & Serre (1974).
The Ramanujan–Petersson conjecture for Maass forms is still open (as of 2025) because Deligne's method, which works well in the holomorphic case, does not work in the real analytic case. A proof has been claimed by André Unterberger using techniques from automorphic distribution theory.
== Ramanujan–Petersson conjecture for automorphic forms ==
Satake (1966) reformulated the Ramanujan–Petersson conjecture in terms of automorphic representations for GL(2) as saying that the local components of automorphic representations lie in the principal series, and suggested this condition as a generalization of the Ramanujan–Petersson conjecture to automorphic forms on other groups. Another way of saying this is that the local components of cusp forms should be tempered. However, several authors found counter-examples for anisotropic groups where the component at infinity was not tempered. Kurokawa (1978) and Howe & Piatetski-Shapiro (1979) showed that the conjecture was also false even for some quasi-split and split groups, by constructing automorphic forms for the unitary group U(2, 1) and the symplectic group Sp(4) that are non-tempered almost everywhere, related to the representation θ10.
After the counterexamples were found, Howe & Piatetski-Shapiro (1979) suggested that a reformulation of the conjecture should still hold. The current formulation of the generalized Ramanujan conjecture is for a globally generic cuspidal automorphic representation of a connected reductive group, where the generic assumption means that the representation admits a Whittaker model. It states that each local component of such a representation should be tempered. It is an observation due to Langlands that establishing functoriality of symmetric powers of automorphic representations of GL(n) will give a proof of the Ramanujan–Petersson conjecture.
== Bounds towards Ramanujan over number fields ==
Obtaining the best possible bounds towards the generalized Ramanujan conjecture in the case of number fields has caught the attention of many mathematicians. Each improvement is considered a milestone in the world of modern number theory. In order to understand the Ramanujan bounds for GL(n), consider a unitary cuspidal automorphic representation:
{\displaystyle \pi =\bigotimes \pi _{v}.}
The Bernstein–Zelevinsky classification tells us that each p-adic πv can be obtained via unitary parabolic induction from a representation
{\displaystyle \tau _{1,v}\otimes \cdots \otimes \tau _{d,v}.}
Here each {\displaystyle \tau _{i,v}} is a representation of GL(n_i), over the place v, of the form

{\displaystyle \tau _{i_{0},v}\otimes \left|\det \right|_{v}^{\sigma _{i,v}}}

with {\displaystyle \tau _{i_{0},v}} tempered. Given n ≥ 2, a Ramanujan bound is a number δ ≥ 0 such that
{\displaystyle \max _{i}\left|\sigma _{i,v}\right|\leq \delta .}
Langlands classification can be used for the archimedean places. The generalized Ramanujan conjecture is equivalent to the bound δ = 0.
Jacquet, Piatetskii-Shapiro & Shalika (1983) obtained a first bound of δ ≤ 1/2 for the general linear group GL(n), known as the trivial bound. An important breakthrough was made by Luo, Rudnick & Sarnak (1999), who currently hold the best general bound of δ = 1/2 − (n^2 + 1)^{−1} for arbitrary n and any number field. In the case of GL(2), Kim and Sarnak established the breakthrough bound of δ = 7/64 when the number field is the field of rational numbers, obtained as a consequence of the functoriality result of Kim (2002) on the symmetric fourth power via the Langlands–Shahidi method. Generalizing the Kim–Sarnak bounds to an arbitrary number field is possible by the results of Blomer & Brumley (2011).
For reductive groups other than GL(n), the generalized Ramanujan conjecture would follow from the principle of Langlands functoriality. Important examples are the classical groups, where the best possible bounds were obtained by Cogdell et al. (2004) as a consequence of their Langlands functorial lift.
== The Ramanujan–Petersson conjecture over global function fields ==
Drinfeld's proof of the global Langlands correspondence for GL(2) over a global function field leads towards a proof of the Ramanujan–Petersson conjecture. Lafforgue (2002) successfully extended Drinfeld's shtuka technique to the case of GL(n) in positive characteristic. Via a different technique that extends the Langlands–Shahidi method to include global function fields, Lomelí (2009) proves the Ramanujan conjecture for the classical groups.
== Applications ==
An application of the Ramanujan conjecture is the explicit construction of Ramanujan graphs by Lubotzky, Phillips and Sarnak. Indeed, the name "Ramanujan graph" was derived from this connection. Another application is that the Ramanujan–Petersson conjecture for the general linear group GL(n) implies Selberg's conjecture about eigenvalues of the Laplacian for some discrete groups.
== References ==
Blomer, Valentin; Brumley, Farrell (July 2011). "On the Ramanujan conjecture over number fields". Annals of Mathematics. 174 (1): 581–605. arXiv:1003.0559. doi:10.4007/annals.2011.174.1.18. ISSN 0003-486X. MR 2811610. S2CID 54686173.
Cogdell, J. W.; Kim, H. H.; Piatetski-Shapiro, I. I.; Shahidi, F. (June 2004). "Functoriality for the classical groups". Publications Mathématiques de l'IHÉS. 99 (1): 163–233. CiteSeerX 10.1.1.495.6662. doi:10.1007/s10240-004-0020-z. ISSN 0073-8301. S2CID 7731057.
Deligne, Pierre (1971). "Formes modulaires et représentations l-adiques". Séminaire Bourbaki. 1968/69: Exposés 347 - 363. Lecture notes in mathematics. Vol. 179. Berlin, New York: Springer-Verlag. doi:10.1007/BFb0058801. ISBN 978-3-540-05356-9.
Deligne, Pierre (December 1974). "La conjecture de Weil. I". Publications Mathématiques de l'IHÉS (in French). 43 (1): 273–307. doi:10.1007/BF02684373. ISSN 0073-8301. MR 0340258. S2CID 123139343.
Deligne, Pierre; Serre, Jean-Pierre (1974). "Formes modulaires de poids". Annales Scientifiques de l'École Normale Supérieure. Série 4. 7 (4): 507–530. doi:10.24033/asens.1277. ISSN 0012-9593. MR 0379379.
Howe, Roger; Piatetski-Shapiro, I. I.; et al. (American Mathematical Society) (1979). "A counterexample to the "generalized Ramanujan conjecture" for (quasi-) split groups". In Borel, Armand; Casselman, Bill (eds.). Automorphic forms, representations, and L-functions. Proceedings of symposia in pure mathematics. Providence, R.I: American Mathematical Society. pp. 315–322. ISBN 978-0-8218-1435-2. MR 0546605.
Jacquet, H.; Piatetskii-Shapiro, I. I.; Shalika, J. A. (April 1983). "Rankin-Selberg Convolutions". American Journal of Mathematics. 105 (2): 367. doi:10.2307/2374264. JSTOR 2374264. S2CID 124304599.
Kim, Henry (2002). "Functoriality for the exterior square of 𝐺𝐿₄ and the symmetric fourth of 𝐺𝐿₂" (PDF). Journal of the American Mathematical Society. 16 (1): 139–183. doi:10.1090/S0894-0347-02-00410-1. ISSN 0894-0347.
Kurokawa, Nobushige (June 1978). "Examples of eigenvalues of Hecke operators on Siegel cusp forms of degree two". Inventiones Mathematicae. 49 (2): 149–165. Bibcode:1978InMat..49..149K. doi:10.1007/BF01403084. ISSN 0020-9910. MR 0511188. S2CID 120041528.
Langlands, R. P. (1970). "Problems in the theory of automorphic forms". In Taam, Choy T. (ed.). Lectures in modern analysis and applications. 3. Lecture notes in mathematics. Vol. 170. Berlin: Springer. pp. 18–61. doi:10.1007/BFb0079065. ISBN 978-3-540-05284-5. MR 0302614.
Lomelí, L. A. (2009). "Functoriality for the Classical Groups over Function Fields". International Mathematics Research Notices: 4271–4335. doi:10.1093/imrn/rnp089. ISSN 1073-7928. MR 2552304.
Luo, Wenzhi; Rudnick, Zeév; Sarnak, Peter (1999). Doran, Robert; Dou, Ze-Li; Gilbert, George (eds.). "On the generalized Ramanujan conjecture for 𝐺𝐿(𝑛)". Proc. Sympos. Pure Math. Proceedings of Symposia in Pure Mathematics. 66 (2). Providence, Rhode Island: American Mathematical Society: 301–310. doi:10.1090/pspum/066.2/1703764. ISBN 978-0-8218-1051-4.
Petersson, Hans (December 1930). "Theorie der automorphen Formen beliebiger reeller Dimension und ihre Darstellung durch eine neue Art Poincaréscher Reihen". Mathematische Annalen (in German). 103 (1): 369–436. doi:10.1007/BF01455702. ISSN 0025-5831. S2CID 122378161.
Borel, Armand; Casselman, W. (1979). "Multiplicity one theorems". In Borel, Armand; Casselman., W. (eds.). Automorphic forms, representations, and L-functions. Proceedings of Symposia in Pure Mathematics. Providence (Rhode Island): American Mathematical Society. pp. 209–212. ISBN 978-0-8218-1474-1. MR 0546599.
Ramanujan, Srinivasa (1916). "On certain arithmetical functions" (PDF). Transactions of the Cambridge Philosophical Society. XXII (9): 159–184. Reprinted in Ramanujan Aiyangar, Srinivasa (2000). "Paper 18". In Hardy, Godfrey H. (ed.). Collected papers of Srinivasa Ramanujan (Reprint ed.). Providence, RI: AMS Chelsea Publ. pp. 136–162. ISBN 978-0-8218-2076-6. MR 2280843.
Sarnak, Peter (2005). "Notes on the generalized Ramanujan conjectures" (PDF). In Clay Mathematics Institute; Arthur, James; Ellwood, D.; Kottwitz, Robert E. (eds.). Harmonic analysis, the trace formula, and Shimura varieties: proceedings of the Clay Mathematics Institute, 2003 Summer School, the Fields Institute, Toronto, Canada, June 2-27, 2003. Clay mathematics proceedings. Vol. 4. Providence, RI: American Mathematical Society. pp. 659–685. ISBN 978-0-8218-3844-0. MR 2192019. OCLC 62282742.
Satake, Ichirô (1966). "Spherical functions and Ramanujan conjecture". In Borel, Armand; Mostow, George D. (eds.). Algebraic Groups and Discontinuous Subgroups (Boulder, Colo., 1965). Proc. Sympos. Pure Math. Vol. IX. Providence, R.I. pp. 258–264. ISBN 978-0-8218-3213-4. MR 0211955.{{cite book}}: CS1 maint: location missing publisher (link) | Wikipedia/Ramanujan–Petersson_conjecture |
In mathematics, the arithmetic zeta function is a zeta function associated with a scheme of finite type over integers. The arithmetic zeta function generalizes the Riemann zeta function and Dedekind zeta function to higher dimensions. The arithmetic zeta function is one of the most-fundamental objects of number theory.
== Definition ==
The arithmetic zeta function ζX (s) is defined by an Euler product analogous to the Riemann zeta function:
{\displaystyle {\zeta _{X}(s)}=\prod _{x}{\frac {1}{1-N(x)^{-s}}},}
where the product is taken over all closed points x of the scheme X. Equivalently, the product is over all points whose residue field is finite. The cardinality of this field is denoted N(x).
== Examples and properties ==
=== Varieties over a finite field ===
If X is the spectrum of a finite field with q elements, then
{\displaystyle \zeta _{X}(s)={\frac {1}{1-q^{-s}}}.}
For a variety X over a finite field, it is known by Grothendieck's trace formula that
{\displaystyle \zeta _{X}(s)=Z(X,q^{-s})}
where {\displaystyle Z(X,t)} is a rational function (i.e., a quotient of polynomials).
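As a concrete illustration of this rationality (an editorial sketch, using the affine line as an example since its point counts are elementary): for X the affine line over F_p, #X(F_{p^m}) = p^m, and the power series Z(X, t) = exp(Σ_{m≥1} N_m t^m / m) collapses to the rational function 1/(1 − pt). The snippet verifies the series identity through degree 8 with exact arithmetic:

```python
from fractions import Fraction

# log Z = sum_{m>=1} N_m t^m / m with N_m = p^m for the affine line over F_p.
# Exponentiate the truncated series via the recurrence k*z_k = sum_j j*L_j*z_{k-j}
# (which follows from z' = L' z for z = exp(L)).
p, D = 5, 8
L = [Fraction(0)] + [Fraction(p ** m, m) for m in range(1, D + 1)]
z = [Fraction(1)] + [Fraction(0)] * D
for k in range(1, D + 1):
    z[k] = sum(Fraction(j, k) * L[j] * z[k - j] for j in range(1, k + 1))

# 1/(1 - p t) = sum_k p^k t^k, so the coefficients must be exactly p^k.
assert z == [Fraction(p ** k) for k in range(D + 1)]
```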
Given two varieties X and Y over a finite field, the zeta function of {\displaystyle X\times Y} is given by

{\displaystyle Z(X,t)\star Z(Y,t)=Z(X\times Y,t),}

where {\displaystyle \star } denotes the multiplication in the ring {\displaystyle W(\mathbf {Z} )} of Witt vectors of the integers.
=== Ring of integers ===
If X is the spectrum of the ring of integers, then ζX (s) is the Riemann zeta function. More generally, if X is the spectrum of the ring of integers of an algebraic number field, then ζX (s) is the Dedekind zeta function.
=== Zeta functions of disjoint unions ===
The zeta functions of affine and projective spaces over a scheme X are given by

{\displaystyle {\begin{aligned}\zeta _{\mathbf {A} ^{n}(X)}(s)&=\zeta _{X}(s-n)\\\zeta _{\mathbf {P} ^{n}(X)}(s)&=\prod _{i=0}^{n}\zeta _{X}(s-i)\end{aligned}}}
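The projective-space formula can be checked numerically in the simplest case (an editorial sketch, not from the article): for X = Spec F_q it predicts ζ_{P^1(X)}(s) = ζ_X(s) ζ_X(s − 1), which must agree with the closed form Z(P^1, t) = 1/((1 − t)(1 − qt)) coming from the point counts #P^1(F_{q^m}) = q^m + 1:

```python
# For X = Spec F_q, zeta_X(s) = 1/(1 - q^{-s}).  Compare the product formula
# against the point-counting closed form at an arbitrary complex test point.
q = 3
s = 2.0 + 1.0j
t = q ** (-s)
lhs = 1 / ((1 - t) * (1 - q * t))                       # from #P^1(F_{q^m}) = q^m + 1
rhs = (1 / (1 - q ** (-s))) * (1 / (1 - q ** (1 - s)))  # zeta_X(s) * zeta_X(s - 1)
assert abs(lhs - rhs) < 1e-12
```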
The latter equation can be deduced from the former using that, for any X that is the disjoint union of a closed and open subscheme U and V, respectively,
{\displaystyle \zeta _{X}(s)=\zeta _{U}(s)\zeta _{V}(s).}
Even more generally, a similar formula holds for infinite disjoint unions. In particular, this shows that the zeta function of X is the product of the ones of the reduction of X modulo the primes p:
{\displaystyle \zeta _{X}(s)=\prod _{p}\zeta _{X_{p}}(s).}
Such an expression ranging over each prime number is sometimes called an Euler product, and each factor is called an Euler factor. In many cases of interest, the generic fiber XQ is smooth. Then only finitely many Xp are singular (bad reduction). For almost all primes, namely when X has good reduction, the Euler factor is known to agree with the corresponding factor of the Hasse–Weil zeta function of XQ. Therefore, these two functions are closely related.
== Main conjectures ==
There are a number of conjectures concerning the behavior of the zeta function of a regular irreducible equidimensional scheme X (of finite type over the integers). Many (but not all) of these conjectures generalize the one-dimensional case of well known theorems about the Euler-Riemann-Dedekind zeta function.
The scheme need not be flat over Z; in that case it is a scheme of finite type over some Fp. This is referred to as the characteristic p case below. In the latter case, many of these conjectures (with the most notable exception of the Birch and Swinnerton-Dyer conjecture, i.e. the study of special values) are known. Very little is known for schemes that are flat over Z and of dimension two and higher.
=== Meromorphic continuation and functional equation ===
Hasse and Weil conjectured that ζX (s) has a meromorphic continuation to the complex plane and satisfies a functional equation with respect to s → n − s where n is the absolute dimension of X.
This is proven for n = 1 and some very special cases when n > 1 for flat schemes over Z and for all n in positive characteristic. It is a consequence of the Weil conjectures (more precisely, the Riemann hypothesis part thereof) that the zeta function has a meromorphic continuation up to
{\displaystyle \mathrm {Re} (s)>n-{\tfrac {1}{2}}}.
=== The generalized Riemann hypothesis ===
According to the generalized Riemann hypothesis, the zeros of ζX (s) inside the critical strip 0 ≤ Re(s) ≤ n are conjectured to lie on the vertical lines Re(s) = 1/2, 3/2, ..., and the poles of ζX (s) inside the critical strip are conjectured to lie on the vertical lines Re(s) = 0, 1, 2, ....
This was proved (by Emil Artin, Helmut Hasse, André Weil, Alexander Grothendieck, and Pierre Deligne) in positive characteristic for all n. It is not proved for any scheme that is flat over Z. The Riemann hypothesis is a partial case of this conjecture.
=== Pole orders ===
Subject to the analytic continuation, the order of the zero or pole and the residue of ζX (s) at integer points inside the critical strip is conjectured to be expressible by important arithmetic invariants of X. An argument due to Serre based on the above elementary properties and Noether normalization shows that the zeta function of X has a pole at s = n whose order equals the number of irreducible components of X with maximal dimension. Secondly, Tate conjectured
{\displaystyle \mathrm {ord} _{s=n-1}\zeta _{X}(s)=rk{\mathcal {O}}_{X}^{\times }(X)-rk\mathrm {Pic} (X)}
i.e., the pole order is expressible in terms of the rank of the group of invertible regular functions and the rank of the Picard group. The Birch and Swinnerton-Dyer conjecture is a partial case of this conjecture. In fact, this conjecture of Tate's is equivalent to a generalization of Birch and Swinnerton-Dyer.
More generally, Soulé conjectured
{\displaystyle \mathrm {ord} _{s=n-m}\zeta _{X}(s)=-\sum _{i}(-1)^{i}rkK_{i}(X)^{(m)}}
The right hand side denotes the Adams eigenspaces of algebraic K-theory of X. These ranks are finite under the Bass conjecture.
These conjectures are known when n = 1, that is, the case of number rings and curves over finite fields. As for n > 1, partial cases of the Birch and Swinnerton-Dyer conjecture have been proven, but even in positive characteristic the conjecture remains open.
== Methods and theories ==
The arithmetic zeta function of a regular connected equidimensional arithmetic scheme of Kronecker dimension n can be factorized into the product of appropriately defined L-factors and an auxiliary factor. Hence, results on L-functions imply corresponding results for the arithmetic zeta functions. However, there are still very few proven results about the L-factors of arithmetic schemes in characteristic zero and dimensions 2 and higher. Ivan Fesenko initiated a theory which studies the arithmetic zeta functions directly, without working with their L-factors. It is a higher-dimensional generalisation of Tate's thesis, i.e. it uses higher adele groups, higher zeta integral and objects which come from higher class field theory. In this theory, the meromorphic continuation and functional equation of proper regular models of elliptic curves over global fields is related to mean-periodicity property of a boundary function. In his joint work with M. Suzuki and G. Ricotta a new correspondence in number theory is proposed, between the arithmetic zeta functions and mean-periodic functions in the space of smooth functions on the real line of not more than exponential growth. This correspondence is related to the Langlands correspondence. Two other applications of Fesenko's theory are to the poles of the zeta function of proper models of elliptic curves over global fields and to the special value at the central point.
== References ==
Sources
François Bruhat (1963). Lectures on some aspects of p-adic analysis. Tata Institute of Fundamental Research.
Serre, Jean-Pierre (1969–1970). "Facteurs locaux des fonctions zeta des varietés algébriques (définitions et conjectures)". Séminaire Delange-Pisot-Poitou. 19. | Wikipedia/Arithmetic_zeta_function |
In mathematics, the Odlyzko–Schönhage algorithm is a fast algorithm for evaluating the Riemann zeta function at many points, introduced by (Odlyzko & Schönhage 1988). The main point is the use of the fast Fourier transform to speed up the evaluation of a finite Dirichlet series of length N at O(N) equally spaced values from O(N^2) to O(N^{1+ε}) steps (at the cost of storing O(N^{1+ε}) intermediate values). The Riemann–Siegel formula used for calculating the Riemann zeta function with imaginary part T uses a finite Dirichlet series with about N = T^{1/2} terms, so when finding about N values of the Riemann zeta function it is sped up by a factor of about T^{1/2}. This reduces the time to find the zeros of the zeta function with imaginary part at most T from about T^{3/2+ε} steps to about T^{1+ε} steps.
The algorithm can be used not just for the Riemann zeta function, but also for many other functions given by Dirichlet series.
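To make the cost comparison concrete, here is a naive O(M·N) evaluation of a truncated Dirichlet series on an equally spaced grid of heights; this is the double loop that the Odlyzko–Schönhage algorithm replaces with FFT-based steps. The function name and parameters are illustrative only, not from the paper:

```python
import cmath
import math

# Naive evaluation of F(t) = sum_{n<=N} n^{-1/2} * exp(-i * t * log n)
# at M equally spaced heights t_j = t0 + j*delta (cost: M*N operations).
def dirichlet_grid(N, t0, delta, M):
    logs = [math.log(n) for n in range(1, N + 1)]
    coef = [n ** -0.5 for n in range(1, N + 1)]
    return [sum(c * cmath.exp(-1j * (t0 + j * delta) * lg)
                for c, lg in zip(coef, logs))
            for j in range(M)]

vals = dirichlet_grid(N=50, t0=1000.0, delta=0.1, M=8)
assert len(vals) == 8
```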
The algorithm was used by Gourdon (2004) to verify the Riemann hypothesis for the first 10^13 zeros of the zeta function.
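To make the speed-up concrete, here is a minimal sketch of the direct method the algorithm replaces: a Riemann–Siegel-type main sum of length N evaluated independently at each of M points, costing O(N·M) operations (the function name is ours, and this is an illustration of the cost model, not of the FFT-based algorithm itself):

```python
import math

def rs_main_sum(t_values, N):
    """Directly evaluate sum_{n=1}^{N} n^(-1/2) * cos(t * log n)
    at each t in t_values.  Cost is O(N * M) for M points; the
    Odlyzko-Schonhage algorithm reorganises this computation around
    the fast Fourier transform to reduce the cost to about N^(1+eps)
    for O(N) equally spaced points."""
    logs = [math.log(n) for n in range(1, N + 1)]
    coeffs = [n ** -0.5 for n in range(1, N + 1)]
    return [sum(c * math.cos(t * lg) for c, lg in zip(coeffs, logs))
            for t in t_values]
```

At t = 0 the sum collapses to the partial sum of n^{-1/2}, which gives a quick sanity check of the direct evaluator.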
== References ==
Gourdon, X., Numerical evaluation of the Riemann Zeta-function
Gourdon (2004), The 10^13 first zeros of the Riemann Zeta function, and zeros computation at very large height
Odlyzko, A. (1992), The 10^20-th zero of the Riemann zeta function and 175 million of its neighbors. This unpublished book describes the implementation of the algorithm and discusses the results in detail.
Odlyzko, A. M.; Schönhage, A. (1988), "Fast algorithms for multiple evaluations of the Riemann zeta function", Trans. Amer. Math. Soc., 309 (2): 797–809, doi:10.2307/2000939, JSTOR 2000939, MR 0961614
In mathematics, the Hasse–Weil zeta function attached to an algebraic variety V defined over an algebraic number field K is a meromorphic function on the complex plane defined in terms of the number of points on the variety after reducing modulo each prime number p. It is a global L-function defined as an Euler product of local zeta functions.
Hasse–Weil L-functions form one of the two major classes of global L-functions, alongside the L-functions associated to automorphic representations. Conjecturally, these two types of global L-functions are actually two descriptions of the same type of global L-function; this would be a vast generalisation of the Taniyama-Weil conjecture, itself an important result in number theory.
For an elliptic curve over a number field K, the Hasse–Weil zeta function is conjecturally related to the group of rational points of the elliptic curve over K by the Birch and Swinnerton-Dyer conjecture.
== Definition ==
The description of the Hasse–Weil zeta function up to finitely many factors of its Euler product is relatively simple. This follows the initial suggestions of Helmut Hasse and André Weil, motivated by the Riemann zeta function, which results from the case when V is a single point.
Taking the case of K the rational number field Q, and V a non-singular projective variety, we can for almost all prime numbers p consider the reduction of V modulo p, an algebraic variety V_p over the finite field F_p with p elements, just by reducing equations for V. Scheme-theoretically, this reduction is just the pullback of the Néron model of V along the canonical map Spec F_p → Spec Z. Again for almost all p it will be non-singular. We define a Dirichlet series of the complex variable s,
{\displaystyle Z_{V\!,\mathbb {Q} }(s)=\prod _{p}Z_{V\!,\,p}(p^{-s}),}
which is the infinite product of the local zeta functions
{\displaystyle Z_{V\!,\,p}(p^{-s})=\exp \left(\sum _{k=1}^{\infty }{\frac {N_{k}}{k}}(p^{-s})^{k}\right)}
where N_k is the number of points of V defined over the finite field extension F_{p^k} of F_p.
This Z_{V,Q}(s) is well-defined only up to multiplication by rational functions in p^{−s} for finitely many primes p.
Since the indeterminacy is relatively harmless, and has meromorphic continuation everywhere, there is a sense in which the properties of Z(s) do not essentially depend on it. In particular, while the exact form of the functional equation for Z(s), reflecting in a vertical line in the complex plane, will definitely depend on the 'missing' factors, the existence of some such functional equation does not.
A more refined definition became possible with the development of étale cohomology; this neatly explains what to do about the missing, 'bad reduction' factors. According to general principles visible in ramification theory, 'bad' primes carry good information (theory of the conductor). This manifests itself in the étale theory in the Néron–Ogg–Shafarevich criterion for good reduction; namely that there is good reduction, in a definite sense, at all primes p for which the Galois representation ρ on the étale cohomology groups of V is unramified. For those, the definition of local zeta function can be recovered in terms of the characteristic polynomial of
{\displaystyle \rho (\operatorname {Frob} (p)),}
Frob(p) being a Frobenius element for p. What happens at the ramified p is that ρ is non-trivial on the inertia group I(p) for p. At those primes the definition must be 'corrected', taking the largest quotient of the representation ρ on which the inertia group acts by the trivial representation. With this refinement, the definition of Z(s) can be upgraded successfully from 'almost all' p to all p participating in the Euler product. The consequences for the functional equation were worked out by Serre and Deligne in the later 1960s; the functional equation itself has not been proved in general.
== Hasse–Weil conjecture ==
The Hasse–Weil conjecture states that the Hasse–Weil zeta function should extend to a meromorphic function for all complex s, and should satisfy a functional equation similar to that of the Riemann zeta function. For elliptic curves over the rational numbers, the Hasse–Weil conjecture follows from the modularity theorem: each elliptic curve E over Q is modular.
== Birch and Swinnerton-Dyer conjecture ==
The Birch and Swinnerton-Dyer conjecture states that the rank of the abelian group E(K) of points of an elliptic curve E is the order of the zero of the Hasse–Weil L-function L(E, s) at s = 1, and that the first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K. The conjecture is one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof.
== Elliptic curves over Q ==
An elliptic curve is a specific type of variety. Let E be an elliptic curve over Q of conductor N. Then, E has good reduction at all primes p not dividing N, it has multiplicative reduction at the primes p that exactly divide N (i.e. such that p divides N, but p^2 does not; this is written p || N), and it has additive reduction elsewhere (i.e. at the primes where p^2 divides N). The Hasse–Weil zeta function of E then takes the form
{\displaystyle Z_{V\!,\mathbb {Q} }(s)={\frac {\zeta (s)\zeta (s-1)}{L(E,s)}}.}
Here, ζ(s) is the usual Riemann zeta function and L(E, s) is called the L-function of E/Q, which takes the form
{\displaystyle L(E,s)=\prod _{p}L_{p}(E,s)^{-1}}
where, for a given prime p,
{\displaystyle L_{p}(E,s)={\begin{cases}(1-a_{p}p^{-s}+p^{1-2s}),&{\text{if }}p\nmid N\\(1-a_{p}p^{-s}),&{\text{if }}p\mid N{\text{ and }}p^{2}\nmid N\\1,&{\text{if }}p^{2}\mid N\end{cases}}}
where, in the case of good reduction, a_p is p + 1 − (number of points of E mod p), and, in the case of multiplicative reduction, a_p is ±1 depending on whether E has split (plus sign) or non-split (minus sign) multiplicative reduction at p. The multiplicative reduction of E at the prime p is said to be split if −c_6 is a square in the finite field with p elements.
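In the good-reduction case, a_p can be computed by brute-force point counting over F_p. Below is a minimal sketch for the illustrative curve y² = x³ + x + 1 (discriminant −16(4·1³ + 27·1²) = −496 = −16·31, so it has good reduction away from 2 and 31; the function name is ours):

```python
def a_p(a, b, p):
    """a_p = p + 1 - #E(F_p) for E: y^2 = x^3 + a*x + b over F_p,
    p an odd prime of good reduction, counting points directly
    with the Euler criterion rhs^((p-1)/2) mod p."""
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            count += 1                        # one point with y = 0
        elif pow(rhs, (p - 1) // 2, p) == 1:
            count += 2                        # two square roots y and -y
    return p + 1 - count
```

For this curve one finds a_5 = −3, and every good prime satisfies the Hasse bound |a_p| ≤ 2√p.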
There is a useful relation not using the conductor:
If p doesn't divide Δ (where Δ is the discriminant of the elliptic curve), then E has good reduction at p.
If p divides Δ but not c_4, then E has multiplicative bad reduction at p.
If p divides both Δ and c_4, then E has additive bad reduction at p.
== See also ==
Arithmetic zeta function
== References ==
== Bibliography ==
J.-P. Serre, Facteurs locaux des fonctions zêta des variétés algébriques (définitions et conjectures), 1969/1970, Sém. Delange–Pisot–Poitou, exposé 19
In mathematics, the Hurwitz zeta function is one of the many zeta functions. It is formally defined for complex variables s with Re(s) > 1 and a ≠ 0, −1, −2, … by
{\displaystyle \zeta (s,a)=\sum _{n=0}^{\infty }{\frac {1}{(n+a)^{s}}}.}
This series is absolutely convergent for the given values of s and a and can be extended to a meromorphic function defined for all s ≠ 1. The Riemann zeta function is ζ(s,1). The Hurwitz zeta function is named after Adolf Hurwitz, who introduced it in 1882.
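For real s > 1 the defining series can be summed directly, though its tail decays only like N^{1−s}; adding the first Euler–Maclaurin correction terms to a truncated sum gives far better accuracy. A minimal numerical sketch (the function name and the cutoff M are illustrative):

```python
def hurwitz_zeta(s, a, M=1000):
    """Approximate zeta(s, a) for real s > 1 and a > 0: truncate
    the defining sum after M terms, then add Euler-Maclaurin tail
    corrections: integral term, half-term, and first derivative term."""
    partial = sum((n + a) ** -s for n in range(M))
    x = M + a
    tail = x ** (1 - s) / (s - 1) + 0.5 * x ** -s + s * x ** (-s - 1) / 12
    return partial + tail
```

For instance hurwitz_zeta(2, 1) ≈ π²/6 (the Riemann zeta value ζ(2)) and hurwitz_zeta(2, 0.5) ≈ π²/2, consistent with ζ(s, 1/2) = (2^s − 1)ζ(s).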
== Integral representation ==
The Hurwitz zeta function has an integral representation
{\displaystyle \zeta (s,a)={\frac {1}{\Gamma (s)}}\int _{0}^{\infty }{\frac {x^{s-1}e^{-ax}}{1-e^{-x}}}dx}
for Re(s) > 1 and Re(a) > 0.
(This integral can be viewed as a Mellin transform.) The formula can be obtained, roughly, by writing
{\displaystyle \zeta (s,a)\Gamma (s)=\sum _{n=0}^{\infty }{\frac {1}{(n+a)^{s}}}\int _{0}^{\infty }x^{s}e^{-x}{\frac {dx}{x}}=\sum _{n=0}^{\infty }\int _{0}^{\infty }y^{s}e^{-(n+a)y}{\frac {dy}{y}}}
and then interchanging the sum and integral.
The integral representation above can be converted to a contour integral representation
{\displaystyle \zeta (s,a)=-\Gamma (1-s){\frac {1}{2\pi i}}\int _{C}{\frac {(-z)^{s-1}e^{-az}}{1-e^{-z}}}dz}
where C is a Hankel contour counterclockwise around the positive real axis, and the principal branch is used for the complex exponentiation (−z)^{s−1}. Unlike the previous integral, this integral is valid for all s, and indeed is an entire function of s.
The contour integral representation provides an analytic continuation of ζ(s, a) to all s ≠ 1. At s = 1, it has a simple pole with residue 1.
== Hurwitz's formula ==
The Hurwitz zeta function satisfies an identity which generalizes the functional equation of the Riemann zeta function:
{\displaystyle \zeta (1-s,a)={\frac {\Gamma (s)}{(2\pi )^{s}}}\left(e^{-\pi is/2}\sum _{n=1}^{\infty }{\frac {e^{2\pi ina}}{n^{s}}}+e^{\pi is/2}\sum _{n=1}^{\infty }{\frac {e^{-2\pi ina}}{n^{s}}}\right),}
valid for Re(s) > 1 and 0 < a ≤ 1. The Riemann zeta functional equation is the special case a = 1:
{\displaystyle \zeta (1-s)={\frac {2\Gamma (s)}{(2\pi )^{s}}}\cos \left({\frac {\pi s}{2}}\right)\zeta (s)}
Hurwitz's formula can also be expressed as
{\displaystyle \zeta (s,a)={\frac {2\Gamma (1-s)}{(2\pi )^{1-s}}}\left(\sin \left({\frac {\pi s}{2}}\right)\sum _{n=1}^{\infty }{\frac {\cos(2\pi na)}{n^{1-s}}}+\cos \left({\frac {\pi s}{2}}\right)\sum _{n=1}^{\infty }{\frac {\sin(2\pi na)}{n^{1-s}}}\right)}
(for Re(s) < 0 and 0 < a ≤ 1).
Hurwitz's formula has a variety of different proofs. One proof uses the contour integration representation along with the residue theorem. A second proof uses a theta function identity, or equivalently Poisson summation. These proofs are analogous to the two proofs of the functional equation for the Riemann zeta function in Riemann's 1859 paper. Another proof of the Hurwitz formula uses Euler–Maclaurin summation to express the Hurwitz zeta function as an integral
{\displaystyle \zeta (s,a)=s\int _{-a}^{\infty }{\frac {\lfloor x\rfloor -x+{\frac {1}{2}}}{(x+a)^{s+1}}}dx}
(−1 < Re(s) < 0 and 0 < a ≤ 1) and then expanding the numerator as a Fourier series.
=== Functional equation for rational a ===
When a is a rational number, Hurwitz's formula leads to the following functional equation: For integers 1 ≤ m ≤ n,
{\displaystyle \zeta \left(1-s,{\frac {m}{n}}\right)={\frac {2\Gamma (s)}{(2\pi n)^{s}}}\sum _{k=1}^{n}\left[\cos \left({\frac {\pi s}{2}}-{\frac {2\pi km}{n}}\right)\;\zeta \left(s,{\frac {k}{n}}\right)\right]}
holds for all values of s.
This functional equation can be written as another equivalent form:
{\displaystyle \zeta \left(1-s,{\frac {m}{n}}\right)={\frac {\Gamma (s)}{(2\pi n)^{s}}}\sum _{k=1}^{n}\left[e^{\frac {\pi is}{2}}e^{-{\frac {2\pi ikm}{n}}}\zeta \left(s,{\frac {k}{n}}\right)+e^{-{\frac {\pi is}{2}}}e^{\frac {2\pi ikm}{n}}\zeta \left(s,{\frac {k}{n}}\right)\right]}
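The functional equation can be spot-checked numerically. Taking s = 2, m = 1, n = 2, the left side is ζ(−1, 1/2) = −B_2(1/2)/2 = 1/24, while the right side involves only ζ(2, 1/2) and ζ(2, 1). A sketch (the helper hz, which approximates ζ(s, a) by a truncated sum with tail corrections, and its cutoff are illustrative):

```python
import math

def hz(s, a, M=5000):
    """Crude zeta(s, a) for real s > 1: truncated defining sum
    plus integral and midpoint tail corrections."""
    x = M + a
    return (sum((j + a) ** -s for j in range(M))
            + x ** (1 - s) / (s - 1) + 0.5 * x ** -s)

s, m, n = 2, 1, 2
# Left side: zeta(1 - s, m/n) = zeta(-1, 1/2) = -B_2(1/2)/2 = 1/24.
lhs = 1.0 / 24
# Right side of the functional equation for rational a.
rhs = (2 * math.gamma(s) / (2 * math.pi * n) ** s
       * sum(math.cos(math.pi * s / 2 - 2 * math.pi * k * m / n)
             * hz(s, k / n) for k in range(1, n + 1)))
```

Here the sum reduces to (1/(8π²))(ζ(2, 1/2) − ζ(2, 1)) = (1/(8π²))(π²/2 − π²/6) = 1/24, matching the left side.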
== Some finite sums ==
Closely related to the functional equation are the following finite sums, some of which may be evaluated in closed form:
{\displaystyle \sum _{r=1}^{m-1}\zeta \left(s,{\frac {r}{m}}\right)\cos {\dfrac {2\pi rk}{m}}={\frac {m\Gamma (1-s)}{(2\pi m)^{1-s}}}\sin {\frac {\pi s}{2}}\cdot \left\{\zeta \left(1-s,{\frac {k}{m}}\right)+\zeta \left(1-s,1-{\frac {k}{m}}\right)\right\}-\zeta (s)}
{\displaystyle \sum _{r=1}^{m-1}\zeta \left(s,{\frac {r}{m}}\right)\sin {\dfrac {2\pi rk}{m}}={\frac {m\Gamma (1-s)}{(2\pi m)^{1-s}}}\cos {\frac {\pi s}{2}}\cdot \left\{\zeta \left(1-s,{\frac {k}{m}}\right)-\zeta \left(1-s,1-{\frac {k}{m}}\right)\right\}}
{\displaystyle \sum _{r=1}^{m-1}\zeta ^{2}\left(s,{\frac {r}{m}}\right)={\big (}m^{2s-1}-1{\big )}\zeta ^{2}(s)+{\frac {2m\Gamma ^{2}(1-s)}{(2\pi m)^{2-2s}}}\sum _{l=1}^{m-1}\left\{\zeta \left(1-s,{\frac {l}{m}}\right)-\cos \pi s\cdot \zeta \left(1-s,1-{\frac {l}{m}}\right)\right\}\zeta \left(1-s,{\frac {l}{m}}\right)}
where m is a positive integer greater than 2 and s is complex; see e.g. Appendix B in.
== Series representation ==
A convergent Newton series representation defined for (real) a > 0 and any complex s ≠ 1 was given by Helmut Hasse in 1930:
{\displaystyle \zeta (s,a)={\frac {1}{s-1}}\sum _{n=0}^{\infty }{\frac {1}{n+1}}\sum _{k=0}^{n}(-1)^{k}{n \choose k}(a+k)^{1-s}.}
This series converges uniformly on compact subsets of the s-plane to an entire function. The inner sum may be understood to be the nth forward difference of
a^{1−s}; that is,
{\displaystyle \Delta ^{n}a^{1-s}=\sum _{k=0}^{n}(-1)^{n-k}{n \choose k}(a+k)^{1-s}}
where Δ is the forward difference operator. Thus, one may write:
{\displaystyle {\begin{aligned}\zeta (s,a)&={\frac {1}{s-1}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n+1}}\Delta ^{n}a^{1-s}\\&={\frac {1}{s-1}}{\log(1+\Delta ) \over \Delta }a^{1-s}\end{aligned}}}
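When s is an integer other than 1 and a is rational, every term of Hasse's series is rational, so it can be evaluated in exact arithmetic; for s a non-positive integer the forward differences of the polynomial a^{1−s} vanish from some n onward, and the series terminates. A sketch (the function name and truncation parameter are illustrative):

```python
from fractions import Fraction
from math import comb

def hasse_zeta(s, a, n_max=100):
    """Hasse's globally convergent series for zeta(s, a), evaluated
    in exact rational arithmetic (s an integer != 1, a rational > 0).
    Truncates the outer sum at n_max; exact when s <= 0 and n_max
    exceeds the degree of a^(1-s)."""
    total = Fraction(0)
    for n in range(n_max + 1):
        # inner sum: (-1)^n times the nth forward difference of a^(1-s)
        inner = sum(Fraction((-1) ** k * comb(n, k)) * Fraction(a + k) ** (1 - s)
                    for k in range(n + 1))
        total += inner / (n + 1)
    return total / (s - 1)
```

The terminating cases reproduce ζ(−1, 1) = −1/12 and ζ(0, a) = 1/2 − a exactly, while for s = 2 the truncated series approaches π²/6 at the slow rate of the outer sum.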
== Taylor series ==
The partial derivative of the zeta in the second argument is a shift:
{\displaystyle {\frac {\partial }{\partial a}}\zeta (s,a)=-s\zeta (s+1,a).}
Thus, the Taylor series can be written as:
{\displaystyle \zeta (s,x+y)=\sum _{k=0}^{\infty }{\frac {y^{k}}{k!}}{\frac {\partial ^{k}}{\partial x^{k}}}\zeta (s,x)=\sum _{k=0}^{\infty }{s+k-1 \choose s-1}(-y)^{k}\zeta (s+k,x).}
Alternatively,
{\displaystyle \zeta (s,q)={\frac {1}{q^{s}}}+\sum _{n=0}^{\infty }(-q)^{n}{s+n-1 \choose n}\zeta (s+n),}
with |q| < 1.
Closely related is the Stark–Keiper formula:
{\displaystyle \zeta (s,N)=\sum _{k=0}^{\infty }\left[N+{\frac {s-1}{k+1}}\right]{s+k-1 \choose s-1}(-1)^{k}\zeta (s+k,N)}
which holds for integer N and arbitrary s. See also Faulhaber's formula for a similar relation on finite sums of powers of integers.
== Laurent series ==
The Laurent series expansion can be used to define generalized Stieltjes constants that occur in the series
{\displaystyle \zeta (s,a)={\frac {1}{s-1}}+\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\gamma _{n}(a)(s-1)^{n}.}
In particular, the constant term is given by
{\displaystyle \lim _{s\to 1}\left[\zeta (s,a)-{\frac {1}{s-1}}\right]=\gamma _{0}(a)={\frac {-\Gamma '(a)}{\Gamma (a)}}=-\psi (a)}
where Γ is the gamma function and ψ = Γ′/Γ is the digamma function. As a special case, γ_0(1) = −ψ(1) = γ_0 = γ.
== Discrete Fourier transform ==
The discrete Fourier transform of the Hurwitz zeta function with respect to the order s is the Legendre chi function.
== Particular values ==
=== Negative integers ===
The values of ζ(s, a) at s = 0, −1, −2, ... are related to the Bernoulli polynomials:
{\displaystyle \zeta (-n,a)=-{\frac {B_{n+1}(a)}{n+1}}.}
For example, the n = 0 case gives
{\displaystyle \zeta (0,a)={\frac {1}{2}}-a.}
=== s-derivative ===
The partial derivative with respect to s at s = 0 is related to the gamma function:
{\displaystyle \left.{\frac {\partial }{\partial s}}\zeta (s,a)\right|_{s=0}=\log \Gamma (a)-{\frac {1}{2}}\log(2\pi )}
In particular, ζ′(0) = −(1/2) log(2π). The formula is due to Lerch.
== Relation to Jacobi theta function ==
If ϑ(z, τ) is the Jacobi theta function, then
{\displaystyle \int _{0}^{\infty }\left[\vartheta (z,it)-1\right]t^{s/2}{\frac {dt}{t}}=\pi ^{-(1-s)/2}\Gamma \left({\frac {1-s}{2}}\right)\left[\zeta (1-s,z)+\zeta (1-s,1-z)\right]}
holds for Re(s) > 0 and z complex, but not an integer. For z = n an integer, this simplifies to
{\displaystyle \int _{0}^{\infty }\left[\vartheta (n,it)-1\right]t^{s/2}{\frac {dt}{t}}=2\ \pi ^{-(1-s)/2}\ \Gamma \left({\frac {1-s}{2}}\right)\zeta (1-s)=2\ \pi ^{-s/2}\ \Gamma \left({\frac {s}{2}}\right)\zeta (s).}
where ζ here is the Riemann zeta function. Note that this latter form is the functional equation for the Riemann zeta function, as originally given by Riemann. The distinction based on z being an integer or not accounts for the fact that the Jacobi theta function converges to the periodic delta function, or Dirac comb in z, as t → 0.
== Relation to Dirichlet L-functions ==
At rational arguments the Hurwitz zeta function may be expressed as a linear combination of Dirichlet L-functions and vice versa: The Hurwitz zeta function coincides with Riemann's zeta function ζ(s) when a = 1, when a = 1/2 it is equal to (2^s − 1)ζ(s), and if a = n/k with k > 2, (n, k) > 1 and 0 < n < k, then
{\displaystyle \zeta (s,n/k)={\frac {k^{s}}{\varphi (k)}}\sum _{\chi }{\overline {\chi }}(n)L(s,\chi ),}
the sum running over all Dirichlet characters mod k. In the opposite direction we have the linear combination
{\displaystyle L(s,\chi )={\frac {1}{k^{s}}}\sum _{n=1}^{k}\chi (n)\;\zeta \left(s,{\frac {n}{k}}\right).}
There is also the multiplication theorem
{\displaystyle k^{s}\zeta (s)=\sum _{n=1}^{k}\zeta \left(s,{\frac {n}{k}}\right),}
of which a useful generalization is the distribution relation
{\displaystyle \sum _{p=0}^{q-1}\zeta (s,a+p/q)=q^{s}\,\zeta (s,qa).}
(This last form is valid whenever q is a natural number and 1 − qa is not.)
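The multiplication theorem can be checked numerically with matched truncations, since the first k·M terms of k^s ζ(s) regroup exactly into the first M terms of each ζ(s, n/k) via j = km + n. A quick sketch for k = 3, s = 2 (variable names are ours):

```python
# Multiplication theorem: k^s * zeta(s) = sum_{n=1}^{k} zeta(s, n/k).
# With matched truncations the two sides agree exactly, because the
# index change j = k*m + n is a bijection from {0..M-1} x {1..k}
# onto {1..k*M}.
k, s, M = 3, 2, 1000
lhs = k ** s * sum(j ** -s for j in range(1, k * M + 1))
rhs = sum((m + n / k) ** -s for n in range(1, k + 1) for m in range(M))
```

Both sides equal 9 times the 3000th partial sum of ζ(2), up to floating-point rounding.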
== Zeros ==
If a=1 the Hurwitz zeta function reduces to the Riemann zeta function itself; if a=1/2 it reduces to the Riemann zeta function multiplied by a simple function of the complex argument s (vide supra), leading in each case to the difficult study of the zeros of Riemann's zeta function. In particular, there will be no zeros with real part greater than or equal to 1. However, if 0<a<1 and a≠1/2, then there are zeros of Hurwitz's zeta function in the strip 1<Re(s)<1+ε for any positive real number ε. This was proved by Davenport and Heilbronn for rational or transcendental irrational a, and by Cassels for algebraic irrational a.
== Rational values ==
The Hurwitz zeta function occurs in a number of striking identities at rational values. In particular, values in terms of the Euler polynomials E_n(x):
{\displaystyle E_{2n-1}\left({\frac {p}{q}}\right)=(-1)^{n}{\frac {4(2n-1)!}{(2\pi q)^{2n}}}\sum _{k=1}^{q}\zeta \left(2n,{\frac {2k-1}{2q}}\right)\cos {\frac {(2k-1)\pi p}{q}}}
and
{\displaystyle E_{2n}\left({\frac {p}{q}}\right)=(-1)^{n}{\frac {4(2n)!}{(2\pi q)^{2n+1}}}\sum _{k=1}^{q}\zeta \left(2n+1,{\frac {2k-1}{2q}}\right)\sin {\frac {(2k-1)\pi p}{q}}}
One also has
{\displaystyle \zeta \left(s,{\frac {2p-1}{2q}}\right)=2(2q)^{s-1}\sum _{k=1}^{q}\left[C_{s}\left({\frac {k}{q}}\right)\cos \left({\frac {(2p-1)\pi k}{q}}\right)+S_{s}\left({\frac {k}{q}}\right)\sin \left({\frac {(2p-1)\pi k}{q}}\right)\right]}
which holds for 1 ≤ p ≤ q. Here, the C_ν(x) and S_ν(x) are defined by means of the Legendre chi function χ_ν as
{\displaystyle C_{\nu }(x)=\operatorname {Re} \,\chi _{\nu }(e^{ix})}
and
{\displaystyle S_{\nu }(x)=\operatorname {Im} \,\chi _{\nu }(e^{ix}).}
For integer values of ν, these may be expressed in terms of the Euler polynomials. These relations may be derived by employing the functional equation together with Hurwitz's formula, given above.
== Applications ==
Hurwitz's zeta function occurs in a variety of disciplines. Most commonly, it occurs in number theory, where its theory is the deepest and most developed. However, it also occurs in the study of fractals and dynamical systems. In applied statistics, it occurs in Zipf's law and the Zipf–Mandelbrot law. In particle physics, it occurs in a formula by Julian Schwinger, giving an exact result for the pair production rate of a Dirac electron in a uniform electric field.
== Special cases and generalizations ==
The Hurwitz zeta function with a positive integer m is related to the polygamma function:
{\displaystyle \psi ^{(m)}(z)=(-1)^{m+1}m!\,\zeta (m+1,z).}
The Barnes zeta function generalizes the Hurwitz zeta function.
The Lerch transcendent generalizes the Hurwitz zeta:
{\displaystyle \Phi (z,s,q)=\sum _{k=0}^{\infty }{\frac {z^{k}}{(k+q)^{s}}}}
and thus
{\displaystyle \zeta (s,a)=\Phi (1,s,a).}
The Hurwitz zeta function can also be written as a hypergeometric function:
{\displaystyle \zeta (s,a)=a^{-s}\cdot {}_{s+1}F_{s}(1,a_{1},a_{2},\ldots a_{s};a_{1}+1,a_{2}+1,\ldots a_{s}+1;1)}
where
{\displaystyle a_{1}=a_{2}=\ldots =a_{s}=a{\text{ and }}a\notin \mathbb {N} {\text{ and }}s\in \mathbb {N} ^{+}.}
It can likewise be written as a Meijer G-function:
{\displaystyle \zeta (s,a)=G\,_{s+1,\,s+1}^{\,1,\,s+1}\left(-1\;\left|\;{\begin{matrix}0,1-a,\ldots ,1-a\\0,-a,\ldots ,-a\end{matrix}}\right)\right.\qquad \qquad s\in \mathbb {N} ^{+}.}
== Notes ==
== References ==
Apostol, T. M. (2010), "Hurwitz zeta function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
See chapter 12 of Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001
Milton Abramowitz and Irene A. Stegun, Handbook of Mathematical Functions, (1964) Dover Publications, New York. ISBN 0-486-61272-4. (See Paragraph 6.4.10 for relationship to polygamma function.)
Davenport, Harold (1967). Multiplicative number theory. Lectures in advanced mathematics. Vol. 1. Chicago: Markham. Zbl 0159.06303.
Miller, Jeff; Adamchik, Victor S. (1998). "Derivatives of the Hurwitz Zeta Function for Rational Arguments". Journal of Computational and Applied Mathematics. 100 (2): 201–206. doi:10.1016/S0377-0427(98)00193-9.
Whittaker, E. T.; Watson, G. N. (1927). A Course Of Modern Analysis (4th ed.). Cambridge, UK: Cambridge University Press.
== External links ==
Jonathan Sondow and Eric W. Weisstein. "Hurwitz Zeta Function". MathWorld.
In mathematics, a p-adic zeta function, or more generally a p-adic L-function, is a function analogous to the Riemann zeta function, or more general L-functions, but whose domain and target are p-adic (where p is a prime number). For example, the domain could be the p-adic integers Zp, a profinite p-group, or a p-adic family of Galois representations, and the image could be the p-adic numbers Qp or its algebraic closure.
The source of a p-adic L-function tends to be one of two types. The first source—from which Tomio Kubota and Heinrich-Wolfgang Leopoldt gave the first construction of a p-adic L-function (Kubota & Leopoldt 1964)—is via the p-adic interpolation of special values of L-functions. For example, Kubota–Leopoldt used Kummer's congruences for Bernoulli numbers to construct a p-adic L-function, the p-adic Riemann zeta function ζp(s), whose values at negative odd integers are those of the Riemann zeta function at negative odd integers (up to an explicit correction factor). p-adic L-functions arising in this fashion are typically referred to as analytic p-adic L-functions. The other major source of p-adic L-functions—first discovered by Kenkichi Iwasawa—is from the arithmetic of cyclotomic fields, or more generally, certain Galois modules over towers of cyclotomic fields or even more general towers. A p-adic L-function arising in this way is typically called an arithmetic p-adic L-function as it encodes arithmetic data of the Galois module involved. The main conjecture of Iwasawa theory (now a theorem due to Barry Mazur and Andrew Wiles) is the statement that the Kubota–Leopoldt p-adic L-function and an arithmetic analogue constructed by Iwasawa theory are essentially the same. In more general situations where both analytic and arithmetic p-adic L-functions are constructed (or expected), the statement that they agree is called the main conjecture of Iwasawa theory for that situation. Such conjectures represent formal statements concerning the philosophy that special values of L-functions contain arithmetic information.
== Dirichlet L-functions ==
The Dirichlet L-function is given by the analytic continuation of
{\displaystyle L(s,\chi )=\sum _{n}{\frac {\chi (n)}{n^{s}}}=\prod _{p{\text{ prime}}}{\frac {1}{1-\chi (p)p^{-s}}}}
The Dirichlet L-function at negative integers is given by
{\displaystyle L(1-n,\chi )=-{\frac {B_{n,\chi }}{n}}}
where Bn,χ is a generalized Bernoulli number defined by
{\displaystyle \sum _{n=0}^{\infty }B_{n,\chi }{\frac {t^{n}}{n!}}=\sum _{a=1}^{f}{\frac {\chi (a)te^{at}}{e^{ft}-1}}}
for χ a Dirichlet character with conductor f.
== Definition using interpolation ==
The Kubota–Leopoldt p-adic L-function Lp(s, χ) interpolates the Dirichlet L-function with the Euler factor at p removed.
More precisely, Lp(s, χ) is the unique continuous function of the p-adic number s such that
{\displaystyle L_{p}(1-n,\chi )=(1-\chi (p)p^{n-1})L(1-n,\chi )}
for positive integers n divisible by p − 1. The right hand side is just the usual Dirichlet L-function, except that the Euler factor at p is removed, otherwise it would not be p-adically continuous. The continuity of the right hand side is closely related to the Kummer congruences.
When n is not divisible by p − 1 this does not usually hold; instead
{\displaystyle L_{p}(1-n,\chi )=(1-\chi \omega ^{-n}(p)p^{n-1})L(1-n,\chi \omega ^{-n})}
for positive integers n.
Here χ is twisted by a power of the Teichmüller character ω.
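The p-adic continuity underlying the interpolation can be illustrated in exact arithmetic with a classical Kummer congruence: for p = 5 and exponents n = 2, m = 6 (congruent mod p − 1 = 4), the Euler-factor-corrected values (1 − p^{n−1})ζ(1 − n) and (1 − p^{m−1})ζ(1 − m) agree modulo 5. A sketch with the classical special values ζ(−1) = −1/12 and ζ(−5) = −1/252 hardcoded (names are ours):

```python
from fractions import Fraction

p = 5
# zeta(1 - n) = -B_n / n for the trivial character:
zeta_at_1_minus_n = {2: Fraction(-1, 12),   # zeta(-1) = -B_2/2
                     6: Fraction(-1, 252)}  # zeta(-5) = -B_6/6

def corrected(n):
    """(1 - p^(n-1)) * zeta(1 - n): the value with the Euler
    factor at p removed, which is what L_p interpolates."""
    return (1 - Fraction(p) ** (n - 1)) * zeta_at_1_minus_n[n]

# Kummer congruence: corrected(2) and corrected(6) agree mod 5.
diff = corrected(2) - corrected(6)
```

Here corrected(2) = 1/3 and corrected(6) = 781/63, and their difference −760/63 has numerator divisible by 5 and denominator prime to 5, so the two values are congruent mod 5 as p-adic continuity requires.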
== Viewed as a p-adic measure ==
p-adic L-functions can also be thought of as p-adic measures (or p-adic distributions) on p-profinite Galois groups. The translation between this point of view and the original point of view of Kubota–Leopoldt (as Qp-valued functions on Zp) is via the Mazur–Mellin transform (and class field theory).
== Totally real fields ==
Deligne & Ribet (1980), building upon previous work of Serre (1973), constructed analytic p-adic L-functions for totally real fields. Independently, Barsky (1978) and Cassou-Noguès (1979) did the same, but their approaches followed Takuro Shintani's approach to the study of the L-values.
== References ==
Barsky, Daniel (1978), "Fonctions zeta p-adiques d'une classe de rayon des corps de nombres totalement réels", in Amice, Y.; Barsky, D.; Robba, P. (eds.), Groupe d'Etude d'Analyse Ultramétrique (5e année: 1977/78), vol. 16, Paris: Secrétariat Math., ISBN 978-2-85926-266-2, MR 0525346
Cassou-Noguès, Pierrette (1979), "Valeurs aux entiers négatifs des fonctions zêta et fonctions zêta p-adiques", Inventiones Mathematicae, 51 (1): 29–59, Bibcode:1979InMat..51...29C, doi:10.1007/BF01389911, ISSN 0020-9910, MR 0524276
Coates, John (1989), "On p-adic L-functions", Astérisque (177): 33–59, ISSN 0303-1179, MR 1040567
Colmez, Pierre (2004), Fontaine's rings and p-adic L-functions (PDF)
Deligne, Pierre; Ribet, Kenneth A. (1980), "Values of abelian L-functions at negative integers over totally real fields", Inventiones Mathematicae, 59 (3): 227–286, Bibcode:1980InMat..59..227D, doi:10.1007/BF01453237, ISSN 0020-9910, MR 0579702
Iwasawa, Kenkichi (1969), "On p-adic L-functions", Annals of Mathematics, Second Series, 89 (1), Annals of Mathematics: 198–205, doi:10.2307/1970817, ISSN 0003-486X, JSTOR 1970817, MR 0269627
Iwasawa, Kenkichi (1972), Lectures on p-adic L-functions, Princeton University Press, ISBN 978-0-691-08112-0, MR 0360526
Katz, Nicholas M. (1975), "p-adic L-functions via moduli of elliptic curves", Algebraic geometry, Proc. Sympos. Pure Math., vol. 29, Providence, R.I.: American Mathematical Society, pp. 479–506, MR 0432649
Koblitz, Neal (1984), p-adic Numbers, p-adic Analysis, and Zeta-Functions, Graduate Texts in Mathematics, vol. 58, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96017-3, MR 0754003
Kubota, Tomio; Leopoldt, Heinrich-Wolfgang (1964), "Eine p-adische Theorie der Zetawerte. I. Einführung der p-adischen Dirichletschen L-Funktionen", Journal für die reine und angewandte Mathematik, 214/215: 328–339, doi:10.1515/crll.1964.214-215.328, ISSN 0075-4102, MR 0163900
Serre, Jean-Pierre (1973), "Formes modulaires et fonctions zêta p-adiques", in Kuyk, Willem; Serre, Jean-Pierre (eds.), Modular functions of one variable, III (Proc. Internat. Summer School, Univ. Antwerp, 1972), Lecture Notes in Math, vol. 350, Berlin, New York: Springer-Verlag, pp. 191–268, doi:10.1007/978-3-540-37802-0_4, ISBN 978-3-540-06483-1, MR 0404145 | Wikipedia/P-adic_L-function |
In mathematics, Riemann's differential equation, named after Bernhard Riemann, is a generalization of the hypergeometric differential equation, allowing the regular singular points to occur anywhere on the Riemann sphere, rather than merely at 0, 1, and ∞. The equation is also known as the Papperitz equation.
The hypergeometric differential equation is a second-order linear differential equation which has three regular singular points, 0, 1 and ∞. That equation admits two linearly independent solutions; near a singularity zs, the solutions take the form x^s f(x), where x = z − zs is a local variable, and f is locally holomorphic with f(0) ≠ 0. The real number s is called the exponent of the solution at zs. Let α, β and γ be the exponents of one solution at 0, 1 and ∞ respectively; and let α′, β′ and γ′ be those of the other. Then
{\displaystyle \alpha +\alpha '+\beta +\beta '+\gamma +\gamma '=1.}
By applying suitable changes of variable, it is possible to transform the hypergeometric equation: Applying Möbius transformations will adjust the positions of the regular singular points, while other transformations (see below) can change the exponents at the regular singular points, subject to the exponents adding up to 1.
== Definition ==
The differential equation is given by
{\displaystyle {\frac {d^{2}w}{dz^{2}}}+\left[{\frac {1-\alpha -\alpha '}{z-a}}+{\frac {1-\beta -\beta '}{z-b}}+{\frac {1-\gamma -\gamma '}{z-c}}\right]{\frac {dw}{dz}}}
{\displaystyle +\left[{\frac {\alpha \alpha '(a-b)(a-c)}{z-a}}+{\frac {\beta \beta '(b-c)(b-a)}{z-b}}+{\frac {\gamma \gamma '(c-a)(c-b)}{z-c}}\right]{\frac {w}{(z-a)(z-b)(z-c)}}=0.}
The regular singular points are a, b, and c. The exponents of the solutions at these regular singular points are, respectively, α; α′, β; β′, and γ; γ′. As before, the exponents are subject to the condition
{\displaystyle \alpha +\alpha '+\beta +\beta '+\gamma +\gamma '=1.}
== Solutions and relationship with the hypergeometric function ==
The solutions are denoted by the Riemann P-symbol (also known as the Papperitz symbol)
{\displaystyle w(z)=P\left\{{\begin{matrix}a&b&c&\;\\\alpha &\beta &\gamma &z\\\alpha '&\beta '&\gamma '&\;\end{matrix}}\right\}}
The standard hypergeometric function may be expressed as
{\displaystyle \;_{2}F_{1}(a,b;c;z)=P\left\{{\begin{matrix}0&\infty &1&\;\\0&a&0&z\\1-c&b&c-a-b&\;\end{matrix}}\right\}}
The P-functions obey a number of identities; one of them allows a general P-function to be expressed in terms of the hypergeometric function. It is
{\displaystyle P\left\{{\begin{matrix}a&b&c&\;\\\alpha &\beta &\gamma &z\\\alpha '&\beta '&\gamma '&\;\end{matrix}}\right\}=\left({\frac {z-a}{z-b}}\right)^{\alpha }\left({\frac {z-c}{z-b}}\right)^{\gamma }P\left\{{\begin{matrix}0&\infty &1&\;\\0&\alpha +\beta +\gamma &0&\;{\frac {(z-a)(c-b)}{(z-b)(c-a)}}\\\alpha '-\alpha &\alpha +\beta '+\gamma &\gamma '-\gamma &\;\end{matrix}}\right\}}
In other words, one may write the solutions in terms of the hypergeometric function as
{\displaystyle w(z)=\left({\frac {z-a}{z-b}}\right)^{\alpha }\left({\frac {z-c}{z-b}}\right)^{\gamma }\;_{2}F_{1}\left(\alpha +\beta +\gamma ,\alpha +\beta '+\gamma ;1+\alpha -\alpha ';{\frac {(z-a)(c-b)}{(z-b)(c-a)}}\right)}
The full complement of Kummer's 24 solutions may be obtained in this way; see the article hypergeometric differential equation for a treatment of Kummer's solutions.
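The hypergeometric expression for w(z) can be checked numerically against the differential equation. The sketch below is a plain-Python illustration with arbitrarily chosen singular points a = 2, b = 3, c = 5 and exponents summing to 1; `hyp2f1` is a naive power-series evaluation (valid only for |z| < 1), and the derivatives are taken by central differences, so the residual is only expected to vanish to roughly single precision:

```python
# Hypothetical concrete data: singular points and exponents summing to 1.
a, b, c = 2.0, 3.0, 5.0
al, alp = 0.10, 0.20   # alpha, alpha'
be, bep = 0.15, 0.25   # beta, beta'
ga, gap = 0.12, 0.18   # gamma, gamma'
assert abs(al + alp + be + bep + ga + gap - 1.0) < 1e-12

def hyp2f1(A, B, C, z, terms=200):
    """Gauss hypergeometric series, valid for |z| < 1."""
    total, term = 1.0, 1.0
    for k in range(terms):
        term *= (A + k) * (B + k) / ((C + k) * (k + 1)) * z
        total += term
    return total

def w(z):
    """Solution of Riemann's equation built from 2F1 as in the text."""
    u = (z - a) * (c - b) / ((z - b) * (c - a))
    return ((z - a) / (z - b)) ** al * ((z - c) / (z - b)) ** ga * \
        hyp2f1(al + be + ga, al + bep + ga, 1 + al - alp, u)

def residual(z, h=1e-4):
    """Left-hand side of the differential equation at z, with first and
    second derivatives approximated by central differences."""
    w0, wp = w(z), (w(z + h) - w(z - h)) / (2 * h)
    wpp = (w(z + h) - 2 * w0 + w(z - h)) / h ** 2
    P = (1 - al - alp) / (z - a) + (1 - be - bep) / (z - b) + (1 - ga - gap) / (z - c)
    Q = (al * alp * (a - b) * (a - c) / (z - a)
         + be * bep * (b - c) * (b - a) / (z - b)
         + ga * gap * (c - a) * (c - b) / (z - c)) / ((z - a) * (z - b) * (z - c))
    return wpp + P * wp + Q * w0

print(abs(residual(1.0)), abs(residual(1.5)))  # both tiny
```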
== Fractional linear transformations ==
The P-function possesses a simple symmetry under the action of fractional linear transformations known as Möbius transformations (that are the conformal remappings of the Riemann sphere), or equivalently, under the action of the group GL(2, C). Given arbitrary complex numbers A, B, C, D such that AD − BC ≠ 0, define the quantities
{\displaystyle u={\frac {Az+B}{Cz+D}}\quad {\text{ and }}\quad \eta ={\frac {Aa+B}{Ca+D}}}
and
{\displaystyle \zeta ={\frac {Ab+B}{Cb+D}}\quad {\text{ and }}\quad \theta ={\frac {Ac+B}{Cc+D}}}
then one has the simple relation
{\displaystyle P\left\{{\begin{matrix}a&b&c&\;\\\alpha &\beta &\gamma &z\\\alpha '&\beta '&\gamma '&\;\end{matrix}}\right\}=P\left\{{\begin{matrix}\eta &\zeta &\theta &\;\\\alpha &\beta &\gamma &u\\\alpha '&\beta '&\gamma '&\;\end{matrix}}\right\}}
expressing the symmetry.
== Exponents ==
While the Möbius transformation above moves the singular points without changing the exponents,
the following transformation does not move the singular points but changes the exponents:
{\displaystyle \left({\frac {z-a}{z-b}}\right)^{k}\left({\frac {z-c}{z-b}}\right)^{l}P\left\{{\begin{matrix}a&b&c&\;\\\alpha &\beta &\gamma &z\\\alpha '&\beta '&\gamma '&\;\end{matrix}}\right\}=P\left\{{\begin{matrix}a&b&c&\;\\\alpha +k&\beta -k-l&\gamma +l&z\\\alpha '+k&\beta '-k-l&\gamma '+l&\;\end{matrix}}\right\}}
== See also ==
Method of Frobenius
Monodromy
== Notes ==
== References ==
Milton Abramowitz and Irene A. Stegun, eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover: New York, 1972)
Chapter 15 Hypergeometric Functions
Section 15.6 Riemann's Differential Equation | Wikipedia/Riemann's_differential_equation |
In mathematics and theoretical physics, zeta function regularization is a type of regularization or summability method that assigns finite values to divergent sums or products, and in particular can be used to define determinants and traces of some self-adjoint operators. The technique is now commonly applied to problems in physics, but has its origins in attempts to give precise meanings to ill-conditioned sums appearing in number theory.
== Definition ==
There are several different summation methods called zeta function regularization for defining the sum of a possibly divergent series a1 + a2 + ....
One method is to define its zeta regularized sum to be ζA(−1) if this is defined, where the zeta function is defined for large Re(s) by
{\displaystyle \zeta _{A}(s)={\frac {1}{a_{1}^{s}}}+{\frac {1}{a_{2}^{s}}}+\cdots }
if this sum converges, and by analytic continuation elsewhere.
In the case when an = n, the zeta function is the ordinary Riemann zeta function. This method was used by Ramanujan to "sum" the series 1 + 2 + 3 + 4 + ⋯ to ζ(−1) = −1/12.
Hawking (1977) showed that in flat space, in which the eigenvalues of Laplacians are known, the zeta function corresponding to the partition function can be computed explicitly. Consider a scalar field φ contained in a large box of volume V in flat spacetime at the temperature T = β−1. The partition function is defined by a path integral over all fields φ on the Euclidean space obtained by putting τ = it which are zero on the walls of the box and which are periodic in τ with period β. In this situation from the partition function he computes energy, entropy and pressure of the radiation of the field φ. In case of flat spaces the eigenvalues appearing in the physical quantities are generally known, while in case of curved space they are not known: in this case asymptotic methods are needed.
Another method defines the possibly divergent infinite product a1a2.... to be exp(−ζ′A(0)). Ray & Singer (1971) used this to define the determinant of a positive self-adjoint operator A (the Laplacian of a Riemannian manifold in their application) with eigenvalues a1, a2, ...., and in this case the zeta function is formally the trace of A−s. Minakshisundaram & Pleijel (1949) showed that if A is the Laplacian of a compact Riemannian manifold then the Minakshisundaram–Pleijel zeta function converges and has an analytic continuation as a meromorphic function to all complex numbers, and Seeley (1967) extended this to elliptic pseudo-differential operators A on compact Riemannian manifolds. So for such operators one can define the determinant using zeta function regularization. See "analytic torsion."
Hawking (1977) suggested using this idea to evaluate path integrals in curved spacetimes. He studied zeta function regularization in order to calculate the partition functions for thermal graviton and matter's quanta in curved background such as on the horizon of black holes and on de Sitter background using the relation by the inverse Mellin transformation to the trace of the kernel of heat equations.
== Example ==
The first example in which zeta function regularization is available appears in the Casimir effect, which is in a flat space with the bulk contributions of the quantum field in three space dimensions. In this case we must calculate the value of the Riemann zeta function at –3, where the defining series diverges explicitly. However, the zeta function can be analytically continued to s = –3, where it has no pole, thus giving a finite value to the expression. A detailed example of this regularization at work is given in the article on the Casimir effect, where the resulting sum is very explicitly the Riemann zeta-function (and where the seemingly legerdemain analytic continuation removes an additive infinity, leaving a physically significant finite number).
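The mechanism can be illustrated with a simple cutoff version of the divergent sum of n³ that arises in such calculations. This is a sketch, not the full Casimir computation: the exact closed form Σ n³xⁿ = x(1 + 4x + x²)/(1 − x)⁴ (from differentiating a geometric series) behaves as 6/ε⁴ + 1/120 + O(ε²) for x = e^(−ε), and the finite part reproduces ζ(−3) = 1/120:

```python
import math

def cutoff_sum_n3(eps):
    """Closed form of sum_{n>=1} n^3 * exp(-eps*n), a regulated version
    of the divergent sum 1^3 + 2^3 + 3^3 + ..."""
    x = math.exp(-eps)
    return x * (1 + 4 * x + x * x) / (1 - x) ** 4

eps = 0.05
# Subtracting the divergent 6/eps^4 pole leaves the finite part, which
# tends to zeta(-3) = 1/120 as eps -> 0.
finite_part = cutoff_sum_n3(eps) - 6 / eps ** 4
print(finite_part)  # close to 1/120 = 0.00833...
```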
An example of zeta-function regularization is the calculation of the vacuum expectation value of the energy of a particle field in quantum field theory. More generally, the zeta-function approach can be used to regularize the whole energy–momentum tensor both in flat and in curved spacetime. [1] [2] [3]
The unregulated value of the energy is given by a summation over the zero-point energy of all of the excitation modes of the vacuum:
{\displaystyle \langle 0|T_{00}|0\rangle =\sum _{n}{\frac {\hbar |\omega _{n}|}{2}}}
Here, T00 is the zeroth component of the energy–momentum tensor and the sum (which may be an integral) is understood to extend over all (positive and negative) energy modes ωn; the absolute value reminds us that the energy is taken to be positive. This sum, as written, is usually infinite (ωn is typically linear in n). The sum may be regularized by writing it as
{\displaystyle \langle 0|T_{00}(s)|0\rangle =\sum _{n}{\frac {\hbar |\omega _{n}|}{2}}|\omega _{n}|^{-s}}
where s is some parameter, taken to be a complex number. For large, real s greater than 4 (for three-dimensional space), the sum is manifestly finite, and thus may often be evaluated theoretically.
The zeta-regularization is useful as it can often be used in a way such that the various symmetries of the physical system are preserved. Zeta-function regularization is used in conformal field theory, renormalization and in fixing the critical spacetime dimension of string theory.
== Relation to other regularizations ==
Zeta function regularization is equivalent to dimensional regularization; see [4]. However, the main advantage of the zeta regularization is that it can be used whenever dimensional regularization fails, for example if there are matrices or tensors inside the calculations, such as εi,j,k.
== Relation to Dirichlet series ==
Zeta-function regularization gives an analytic structure to any sums over an arithmetic function f(n). Such sums are known as Dirichlet series. The regularized form
{\displaystyle {\tilde {f}}(s)=\sum _{n=1}^{\infty }f(n)n^{-s}}
converts divergences of the sum into simple poles on the complex s-plane. In numerical calculations, the zeta-function regularization is inappropriate, as it is extremely slow to converge. For numerical purposes, a more rapidly converging sum is the exponential regularization, given by
{\displaystyle F(t)=\sum _{n=1}^{\infty }f(n)e^{-tn}.}
This is sometimes called the Z-transform of f, where z = exp(−t). The analytic structures of the exponential and zeta-regularizations are related. By expanding the exponential sum as a Laurent series
{\displaystyle F(t)={\frac {a_{N}}{t^{N}}}+{\frac {a_{N-1}}{t^{N-1}}}+\cdots }
one finds that the zeta-series has the structure
{\displaystyle {\tilde {f}}(s)={\frac {a_{N}}{s-N}}+\cdots .}
The structures of the exponential and zeta-regulators are related by means of the Mellin transform. One may be converted to the other by making use of the integral representation of the Gamma function:
{\displaystyle \Gamma (s)=\int _{0}^{\infty }t^{s-1}e^{-t}\,dt}
which leads to the identity
{\displaystyle \Gamma (s){\tilde {f}}(s)=\int _{0}^{\infty }t^{s-1}F(t)\,dt}
relating the exponential and zeta-regulators, and converting poles in the s-plane to divergent terms in the Laurent series.
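This Mellin-transform identity can be spot-checked numerically in the simplest case f(n) = 1, where F(t) = Σ e^(−tn) = 1/(e^t − 1) and, at s = 2, the right-hand side should equal Γ(2)ζ(2) = π²/6. The sketch below uses a hand-rolled composite Simpson rule (stdlib only) and truncates the integral at t = 40, where the tail is negligible:

```python
import math

def integrand(t):
    # t^{s-1} * F(t) with s = 2 and F(t) = 1/(e^t - 1); the integrand's
    # limit at t = 0 is 1, and expm1 keeps it accurate for small t.
    return t / math.expm1(t) if t > 0 else 1.0

def simpson(f, lo, hi, n=4000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# Tail beyond t = 40 is of order 41*e^{-40}, negligible here.
integral = simpson(integrand, 0.0, 40.0)
print(integral, math.pi ** 2 / 6)  # both about 1.6449
```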
== Heat kernel regularization ==
The sum
{\displaystyle f(s)=\sum _{n}a_{n}e^{-s|\omega _{n}|}}
is sometimes called a heat kernel or a heat-kernel regularized sum; this name stems from the idea that the ωn can sometimes be understood as eigenvalues of the heat kernel. In mathematics, such a sum is known as a generalized Dirichlet series; its use for averaging is known as an Abelian mean. It is closely related to the Laplace–Stieltjes transform, in that
{\displaystyle f(s)=\int _{0}^{\infty }e^{-st}\,d\alpha (t)}
where α(t) is a step function, with steps of an at t = |ωn|. A number of theorems for the convergence of such a series exist. For example, by the Hardy–Littlewood Tauberian theorem, if [5]
{\displaystyle L=\limsup _{n\to \infty }{\frac {\log \vert \sum _{k=1}^{n}a_{k}\vert }{|\omega _{n}|}}}
then the series for f(s) converges in the half-plane Re(s) > L and is uniformly convergent on every compact subset of that half-plane. In almost all applications to physics, one has L = 0.
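As a concrete heat-kernel sketch, take a_n = n and |ω_n| = n, i.e. the divergent series 1 + 2 + 3 + ⋯. Here f(s) has the exact closed form e^(−s)/(1 − e^(−s))², whose Laurent expansion is 1/s² − 1/12 + O(s²); stripping the pole leaves −1/12, the zeta-regularized value ζ(−1) quoted earlier in this article:

```python
import math

def heat_kernel_sum(s):
    """Closed form of sum_{n>=1} n*exp(-s*n), the heat-kernel-regularized
    version of 1 + 2 + 3 + ... (a_n = n, |omega_n| = n)."""
    x = math.exp(-s)
    return x / (1 - x) ** 2

s = 0.01
# Removing the 1/s^2 pole leaves -1/12 + O(s^2), matching zeta(-1).
finite_part = heat_kernel_sum(s) - 1 / s ** 2
print(finite_part)  # close to -1/12 = -0.08333...
```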
== History ==
Much of the early work establishing the convergence and equivalence of series regularized with the heat kernel and zeta function regularization methods was done by G. H. Hardy and J. E. Littlewood in 1916[6] and is based on the application of the Cahen–Mellin integral. The effort was made in order to obtain values for various ill-defined, conditionally convergent sums appearing in number theory.
In terms of application as the regulator in physical problems, before Hawking (1977), J. Stuart Dowker and Raymond Critchley in 1976 proposed a zeta-function regularization method for quantum physical problems.[7] Emilio Elizalde and others have also proposed a method based on the zeta regularization for the integrals ∫_a^∞ x^{m−s} dx, where x^{−s} is a regulator and the divergent integral depends on the numbers ζ(s − m) in the limit s → 0; see renormalization. Unlike other regularizations such as dimensional regularization and analytic regularization, zeta regularization has no counterterms and gives only finite results.
== See also ==
Generating function – Formal power series; coefficients encode information about a sequence indexed by natural numbers
Perron's formula – Formula to calculate the sum of an arithmetic function in analytic number theory
Renormalization – Method in physics used to deal with infinities
1 + 1 + 1 + 1 + ⋯ – Divergent series
1 + 2 + 3 + 4 + ⋯ – Divergent series
Analytic torsion – Topological invariant of manifolds that can distinguish homotopy-equivalent manifolds
Ramanujan summation – Mathematical techniques for summing divergent infinite series
Minakshisundaram–Pleijel zeta function
Zeta function (operator)
== References ==
^ Tom M. Apostol, "Modular Functions and Dirichlet Series in Number Theory", "Springer-Verlag New York. (See Chapter 8.)"
^ A. Bytsenko, G. Cognola, E. Elizalde, V. Moretti and S. Zerbini, "Analytic Aspects of Quantum Fields", World Scientific Publishing, 2003, ISBN 981-238-364-6
^ G.H. Hardy and J.E. Littlewood, "Contributions to the Theory of the Riemann Zeta-Function and the Theory of the Distribution of Primes", Acta Mathematica, 41(1916) pp. 119–196. (See, for example, theorem 2.12)
Hawking, S. W. (1977), "Zeta function regularization of path integrals in curved spacetime", Communications in Mathematical Physics, 55 (2): 133–148, Bibcode:1977CMaPh..55..133H, doi:10.1007/BF01626516, ISSN 0010-3616, MR 0524257, S2CID 121650064
^ V. Moretti, "Direct z-function approach and renormalization of one-loop stress tensor in curved spacetimes, Phys. Rev.D 56, 7797 (1997).
Minakshisundaram, S.; Pleijel, Å. (1949), "Some properties of the eigenfunctions of the Laplace-operator on Riemannian manifolds", Canadian Journal of Mathematics, 1 (3): 242–256, doi:10.4153/CJM-1949-021-5, ISSN 0008-414X, MR 0031145
Ray, D. B.; Singer, I. M. (1971), "R-torsion and the Laplacian on Riemannian manifolds", Advances in Mathematics, 7 (2): 145–210, doi:10.1016/0001-8708(71)90045-4, MR 0295381
"Zeta-function method for regularization", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Seeley, R. T. (1967), "Complex powers of an elliptic operator", in Calderón, Alberto P. (ed.), Singular Integrals (Proc. Sympos. Pure Math., Chicago, Ill., 1966), Proceedings of Symposia in Pure Mathematics, vol. 10, Providence, R.I.: Amer. Math. Soc., pp. 288–307, ISBN 978-0-8218-1410-9, MR 0237943
^ Dowker, J. S.; Critchley, R. (1976), "Effective Lagrangian and energy–momentum tensor in de Sitter space", Physical Review D, 13 (12): 3224–3232, Bibcode:1976PhRvD..13.3224D, doi:10.1103/PhysRevD.13.3224
^ D. Fermi, L. Pizzocchero, "Local zeta regularization and the scalar Casimir effect. A general approach based on integral kernels", World Scientific Publishing, ISBN 978-981-3224-99-5 (hardcover), ISBN 978-981-3225-01-5 (ebook). doi:10.1142/10570 (2017). | Wikipedia/Zeta_function_regularization |
Riemann (pronounced REE mahn) is a lunar impact crater that is located near the northeastern limb of the Moon, and can just be observed edge-on when libration effects bring it into sight. It lies to the east-northeast of the large walled plain Gauss. To the southeast, beyond sight on the far side, is the crater Vestine.
This is a heavily battered and eroded formation that is only a remnant of its former self. The outer rim has been worn away in many places, and now forms an irregular series of ridges in a rough circle. Beals overlies the south-southwestern rim, and several smaller craters lie along the western and southeastern stretches of the rim. The most intact portion of the outer wall is along the eastern edge.
The interior floor is a mixture of level terrain and rough ground where impacts have stirred up the surface. It is generally less rough in the eastern half, especially near the center. A small, bowl-shaped crater lies on the floor in the southeastern part of the interior, and the faint remnants of several other lesser craters can be observed on the surface.
== Satellite craters ==
By convention these features are identified on lunar maps by placing the letter on the side of the crater midpoint that is closest to Riemann.
The following craters have been renamed by the IAU.
Riemann A — See Beals (crater).
== References == | Wikipedia/Riemann_(crater) |
In mathematics, the Clausen function, introduced by Thomas Clausen (1832), is a transcendental, special function of a single variable. It can variously be expressed in the form of a definite integral, a trigonometric series, and various other forms. It is intimately connected with the polylogarithm, inverse tangent integral, polygamma function, Riemann zeta function, Dirichlet eta function, and Dirichlet beta function.
The Clausen function of order 2 – often referred to as the Clausen function, despite being but one of a class of many – is given by the integral:
{\displaystyle \operatorname {Cl} _{2}(\varphi )=-\int _{0}^{\varphi }\log \left|2\sin {\frac {x}{2}}\right|\,dx}
In the range 0 < φ < 2π, the sine function inside the absolute value sign remains strictly positive, so the absolute value signs may be omitted. The Clausen function also has the Fourier series representation:
{\displaystyle \operatorname {Cl} _{2}(\varphi )=\sum _{k=1}^{\infty }{\frac {\sin k\varphi }{k^{2}}}=\sin \varphi +{\frac {\sin 2\varphi }{2^{2}}}+{\frac {\sin 3\varphi }{3^{2}}}+{\frac {\sin 4\varphi }{4^{2}}}+\cdots }
The Clausen functions, as a class of functions, feature extensively in many areas of modern mathematical research, particularly in relation to the evaluation of many classes of logarithmic and polylogarithmic integrals, both definite and indefinite. They also have numerous applications with regard to the summation of hypergeometric series, summations involving the inverse of the central binomial coefficient, sums of the polygamma function, and Dirichlet L-series.
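The Fourier series above can be summed directly as a numerical illustration (a minimal sketch, not an efficient evaluation scheme; the helper name `cl2` is ad hoc). It reproduces the maximum value Cl₂(π/3) = 1.01494160… quoted in the next section, as well as the odd symmetry of the function:

```python
import math

def cl2(phi, terms=200000):
    """Clausen function of order 2 by direct summation of its sine
    series; cancellation in sin(k*phi) makes the tail O(1/N^2)."""
    return sum(math.sin(k * phi) / k ** 2 for k in range(1, terms + 1))

print(cl2(math.pi / 3))    # about 1.0149416, the maximum value
print(cl2(-math.pi / 3))   # about -1.0149416, by odd symmetry
```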
== Basic properties ==
The Clausen function (of order 2) has simple zeros at all (integer) multiples of π, since if k ∈ ℤ is an integer, then sin kπ = 0:
{\displaystyle \operatorname {Cl} _{2}(m\pi )=0,\quad m=0,\,\pm 1,\,\pm 2,\,\pm 3,\,\cdots }
It has maxima at
{\displaystyle \theta ={\frac {\pi }{3}}+2m\pi \quad [m\in \mathbb {Z} ]}
{\displaystyle \operatorname {Cl} _{2}\left({\frac {\pi }{3}}+2m\pi \right)=1.01494160\ldots }
and minima at
{\displaystyle \theta =-{\frac {\pi }{3}}+2m\pi \quad [m\in \mathbb {Z} ]}
{\displaystyle \operatorname {Cl} _{2}\left(-{\frac {\pi }{3}}+2m\pi \right)=-1.01494160\ldots }
The following properties are immediate consequences of the series definition:
{\displaystyle \operatorname {Cl} _{2}(\theta +2m\pi )=\operatorname {Cl} _{2}(\theta )}
{\displaystyle \operatorname {Cl} _{2}(-\theta )=-\operatorname {Cl} _{2}(\theta )}
See Lu & Perez (1992).
== General definition ==
More generally, one defines the two generalized Clausen functions:
{\displaystyle \operatorname {S} _{z}(\theta )=\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{z}}}}
{\displaystyle \operatorname {C} _{z}(\theta )=\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{z}}}}
which are valid for complex z with Re z > 1. The definition may be extended to all of the complex plane through analytic continuation.
When z is replaced with a non-negative integer, the standard Clausen functions are defined by the following Fourier series:
{\displaystyle \operatorname {Cl} _{2m+2}(\theta )=\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{2m+2}}}}
{\displaystyle \operatorname {Cl} _{2m+1}(\theta )=\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{2m+1}}}}
{\displaystyle \operatorname {Sl} _{2m+2}(\theta )=\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{2m+2}}}}
{\displaystyle \operatorname {Sl} _{2m+1}(\theta )=\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{2m+1}}}}
N.B. The SL-type Clausen functions have the alternative notation Glm(θ) and are sometimes referred to as the Glaisher–Clausen functions (after James Whitbread Lee Glaisher, hence the GL-notation).
== Relation to the Bernoulli polynomials ==
The SL-type Clausen functions are polynomials in θ, and are closely related to the Bernoulli polynomials. This connection is apparent from the Fourier series representations of the Bernoulli polynomials:
{\displaystyle B_{2n-1}(x)={\frac {2(-1)^{n}(2n-1)!}{(2\pi )^{2n-1}}}\,\sum _{k=1}^{\infty }{\frac {\sin 2\pi kx}{k^{2n-1}}}.}
{\displaystyle B_{2n}(x)={\frac {2(-1)^{n-1}(2n)!}{(2\pi )^{2n}}}\,\sum _{k=1}^{\infty }{\frac {\cos 2\pi kx}{k^{2n}}}.}
Setting x = θ/2π in the above, and then rearranging the terms, gives the following closed form (polynomial) expressions:
{\displaystyle \operatorname {Sl} _{2m}(\theta )={\frac {(-1)^{m-1}(2\pi )^{2m}}{2(2m)!}}B_{2m}\left({\frac {\theta }{2\pi }}\right),}
{\displaystyle \operatorname {Sl} _{2m-1}(\theta )={\frac {(-1)^{m}(2\pi )^{2m-1}}{2(2m-1)!}}B_{2m-1}\left({\frac {\theta }{2\pi }}\right),}
where the Bernoulli polynomials Bn(x) are defined in terms of the Bernoulli numbers Bn ≡ Bn(0) by the relation:
{\displaystyle B_{n}(x)=\sum _{j=0}^{n}{\binom {n}{j}}B_{j}x^{n-j}.}
Explicit evaluations derived from the above include:
{\displaystyle \operatorname {Sl} _{1}(\theta )={\frac {\pi }{2}}-{\frac {\theta }{2}},}
{\displaystyle \operatorname {Sl} _{2}(\theta )={\frac {\pi ^{2}}{6}}-{\frac {\pi \theta }{2}}+{\frac {\theta ^{2}}{4}},}
{\displaystyle \operatorname {Sl} _{3}(\theta )={\frac {\pi ^{2}\theta }{6}}-{\frac {\pi \theta ^{2}}{4}}+{\frac {\theta ^{3}}{12}},}
{\displaystyle \operatorname {Sl} _{4}(\theta )={\frac {\pi ^{4}}{90}}-{\frac {\pi ^{2}\theta ^{2}}{12}}+{\frac {\pi \theta ^{3}}{12}}-{\frac {\theta ^{4}}{48}}.}
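These closed forms can be cross-checked against the defining cosine series. The sketch below (stdlib Python; `sl2_series` and `sl2_poly` are ad-hoc names) compares the Sl₂ polynomial with a direct summation of Σ cos kθ/k² at an arbitrary interior point θ = 1:

```python
import math

def sl2_series(theta, terms=200000):
    """Sl_2 via its defining cosine series."""
    return sum(math.cos(k * theta) / k ** 2 for k in range(1, terms + 1))

def sl2_poly(theta):
    """Closed form from the Bernoulli-polynomial reduction,
    valid for 0 <= theta <= 2*pi."""
    return math.pi ** 2 / 6 - math.pi * theta / 2 + theta ** 2 / 4

theta = 1.0
print(sl2_series(theta), sl2_poly(theta))  # both about 0.3241
```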
== Duplication formula ==
For 0 < θ < π, the duplication formula can be proven directly from the integral definition (see also Lu & Perez (1992) for the result – although no proof is given):
{\displaystyle \operatorname {Cl} _{2}(2\theta )=2\operatorname {Cl} _{2}(\theta )-2\operatorname {Cl} _{2}(\pi -\theta )}
Denoting Catalan's constant by K = Cl2(π/2), immediate consequences of the duplication formula include the relations:
{\displaystyle \operatorname {Cl} _{2}\left({\frac {\pi }{4}}\right)-\operatorname {Cl} _{2}\left({\frac {3\pi }{4}}\right)={\frac {K}{2}}}
{\displaystyle 2\operatorname {Cl} _{2}\left({\frac {\pi }{3}}\right)=3\operatorname {Cl} _{2}\left({\frac {2\pi }{3}}\right)}
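Both the duplication formula and the Catalan-constant relation can be verified numerically by summing the defining series. In this sketch `cl2` is an ad-hoc series evaluator, θ = 0.7 is an arbitrary test point, and the value of Catalan's constant is hard-coded:

```python
import math

def cl2(phi, terms=200000):
    """Clausen function Cl_2 via direct summation of its sine series."""
    return sum(math.sin(k * phi) / k ** 2 for k in range(1, terms + 1))

theta = 0.7
lhs = cl2(2 * theta)
rhs = 2 * cl2(theta) - 2 * cl2(math.pi - theta)
print(lhs, rhs)  # the duplication formula: the two sides agree

K = 0.915965594177219  # Catalan's constant
print(cl2(math.pi / 4) - cl2(3 * math.pi / 4), K / 2)  # also agree
```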
For higher order Clausen functions, duplication formulae can be obtained from the one given above; simply replace θ with the dummy variable x, and integrate over the interval [0, θ].
Applying the same process repeatedly yields:
{\displaystyle \operatorname {Cl} _{3}(2\theta )=4\operatorname {Cl} _{3}(\theta )+4\operatorname {Cl} _{3}(\pi -\theta )}
{\displaystyle \operatorname {Cl} _{4}(2\theta )=8\operatorname {Cl} _{4}(\theta )-8\operatorname {Cl} _{4}(\pi -\theta )}
{\displaystyle \operatorname {Cl} _{5}(2\theta )=16\operatorname {Cl} _{5}(\theta )+16\operatorname {Cl} _{5}(\pi -\theta )}
{\displaystyle \operatorname {Cl} _{6}(2\theta )=32\operatorname {Cl} _{6}(\theta )-32\operatorname {Cl} _{6}(\pi -\theta )}
And more generally, upon induction on m, m ≥ 1:
{\displaystyle \operatorname {Cl} _{m+1}(2\theta )=2^{m}\left[\operatorname {Cl} _{m+1}(\theta )+(-1)^{m}\operatorname {Cl} _{m+1}(\pi -\theta )\right]}
Use of the generalized duplication formula allows for an extension of the result for the Clausen function of order 2, involving Catalan's constant. For m ∈ ℤ, m ≥ 1:
{\displaystyle \operatorname {Cl} _{2m}\left({\frac {\pi }{2}}\right)=2^{2m-1}\left[\operatorname {Cl} _{2m}\left({\frac {\pi }{4}}\right)-\operatorname {Cl} _{2m}\left({\frac {3\pi }{4}}\right)\right]=\beta (2m)}
where β(x) is the Dirichlet beta function.
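For m = 1 this relation reads Cl₂(π/2) = β(2), which is Catalan's constant: since sin(kπ/2) cycles through 1, 0, −1, 0, the sine series collapses to the alternating series defining β(2). A one-line numerical check (the truncation length is an arbitrary choice):

```python
# Cl_2(pi/2) = sum_k sin(k*pi/2)/k^2 collapses to the alternating series
# for the Dirichlet beta function: beta(2) = 1 - 1/3^2 + 1/5^2 - ...
beta2 = sum((-1) ** j / (2 * j + 1) ** 2 for j in range(200000))
print(beta2)  # Catalan's constant, 0.9159655941...
```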
=== Proof of the duplication formula ===
From the integral definition,
{\displaystyle \operatorname {Cl} _{2}(2\theta )=-\int _{0}^{2\theta }\log \left|2\sin {\frac {x}{2}}\right|\,dx}
Apply the duplication formula for the sine function,
{\displaystyle \sin x=2\sin {\frac {x}{2}}\cos {\frac {x}{2}}}
to obtain
{\displaystyle {\begin{aligned}&-\int _{0}^{2\theta }\log \left|\left(2\sin {\frac {x}{4}}\right)\left(2\cos {\frac {x}{4}}\right)\right|\,dx\\={}&-\int _{0}^{2\theta }\log \left|2\sin {\frac {x}{4}}\right|\,dx-\int _{0}^{2\theta }\log \left|2\cos {\frac {x}{4}}\right|\,dx\end{aligned}}}
Apply the substitution
{\displaystyle x=2y,dx=2\,dy}
on both integrals:
{\displaystyle {\begin{aligned}&-2\int _{0}^{\theta }\log \left|2\sin {\frac {x}{2}}\right|\,dx-2\int _{0}^{\theta }\log \left|2\cos {\frac {x}{2}}\right|\,dx\\={}&2\,\operatorname {Cl} _{2}(\theta )-2\int _{0}^{\theta }\log \left|2\cos {\frac {x}{2}}\right|\,dx\end{aligned}}}
On that last integral, set
{\displaystyle y=\pi -x,\,x=\pi -y,\,dx=-dy}
, and use the trigonometric identity
{\displaystyle \cos(x-y)=\cos x\cos y-\sin x\sin y}
to show that:
{\displaystyle {\begin{aligned}&\cos \left({\frac {\pi -y}{2}}\right)=\sin {\frac {y}{2}}\\\Longrightarrow \qquad &\operatorname {Cl} _{2}(2\theta )=2\,\operatorname {Cl} _{2}(\theta )-2\int _{0}^{\theta }\log \left|2\cos {\frac {x}{2}}\right|\,dx\\={}&2\,\operatorname {Cl} _{2}(\theta )+2\int _{\pi }^{\pi -\theta }\log \left|2\sin {\frac {y}{2}}\right|\,dy\\={}&2\,\operatorname {Cl} _{2}(\theta )-2\,\operatorname {Cl} _{2}(\pi -\theta )+2\,\operatorname {Cl} _{2}(\pi )\end{aligned}}}
Noting that {\displaystyle \operatorname {Cl} _{2}(\pi )=0\,}, we conclude
{\displaystyle \operatorname {Cl} _{2}(2\theta )=2\,\operatorname {Cl} _{2}(\theta )-2\,\operatorname {Cl} _{2}(\pi -\theta )\,.\,\Box }
== Derivatives of general-order Clausen functions ==
Direct differentiation of the Fourier series expansions for the Clausen functions gives:
{\displaystyle {\frac {d}{d\theta }}\operatorname {Cl} _{2m+2}(\theta )={\frac {d}{d\theta }}\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{2m+2}}}=\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{2m+1}}}=\operatorname {Cl} _{2m+1}(\theta )}
{\displaystyle {\frac {d}{d\theta }}\operatorname {Cl} _{2m+1}(\theta )={\frac {d}{d\theta }}\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{2m+1}}}=-\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{2m}}}=-\operatorname {Cl} _{2m}(\theta )}
{\displaystyle {\frac {d}{d\theta }}\operatorname {Sl} _{2m+2}(\theta )={\frac {d}{d\theta }}\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{2m+2}}}=-\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{2m+1}}}=-\operatorname {Sl} _{2m+1}(\theta )}
{\displaystyle {\frac {d}{d\theta }}\operatorname {Sl} _{2m+1}(\theta )={\frac {d}{d\theta }}\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{2m+1}}}=\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{2m}}}=\operatorname {Sl} _{2m}(\theta )}
By the first fundamental theorem of calculus, we also have:
{\displaystyle {\frac {d}{d\theta }}\operatorname {Cl} _{2}(\theta )={\frac {d}{d\theta }}\left[-\int _{0}^{\theta }\log \left|2\sin {\frac {x}{2}}\right|\,dx\,\right]=-\log \left|2\sin {\frac {\theta }{2}}\right|=\operatorname {Cl} _{1}(\theta )}
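These derivative relations are easy to check numerically: differentiating the series Σ sin(kθ)/k² term by term gives Σ cos(kθ)/k, which should agree with the closed form Cl₁(θ) = −log|2 sin(θ/2)|. The sketch below truncates the series at an arbitrary length.

```python
import math

theta = 1.1
N = 200_000
# Term-by-term derivative of Cl_2's Fourier series: sum_{k<=N} cos(k*theta)/k
series = sum(math.cos(k * theta) / k for k in range(1, N + 1))
closed_form = -math.log(abs(2 * math.sin(theta / 2)))  # Cl_1(theta)
print(series, closed_form)
```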
== Relation to the inverse tangent integral ==
The inverse tangent integral is defined on the interval
{\displaystyle 0<z<1}
by
{\displaystyle \operatorname {Ti} _{2}(z)=\int _{0}^{z}{\frac {\tan ^{-1}x}{x}}\,dx=\sum _{k=0}^{\infty }(-1)^{k}{\frac {z^{2k+1}}{(2k+1)^{2}}}}
It has the following closed form in terms of the Clausen function:
{\displaystyle \operatorname {Ti} _{2}(\tan \theta )=\theta \log(\tan \theta )+{\frac {1}{2}}\operatorname {Cl} _{2}(2\theta )+{\frac {1}{2}}\operatorname {Cl} _{2}(\pi -2\theta )}
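This closed form can be spot-checked by comparing the power series for Ti₂ against the right-hand side built from truncated Clausen series; the test angle is an arbitrary choice with tan θ < 1 so the power series converges.

```python
import math

def cl2(theta, terms=200_000):
    # truncated Fourier series Cl_2(theta) = sum_{k>=1} sin(k*theta)/k^2
    return sum(math.sin(k * theta) / k ** 2 for k in range(1, terms + 1))

def ti2(z, terms=400):
    # inverse tangent integral via its power series (valid for |z| < 1)
    return sum((-1) ** k * z ** (2 * k + 1) / (2 * k + 1) ** 2 for k in range(terms))

theta = 0.5  # keep tan(theta) < 1
lhs = ti2(math.tan(theta))
rhs = (theta * math.log(math.tan(theta))
       + 0.5 * cl2(2 * theta)
       + 0.5 * cl2(math.pi - 2 * theta))
print(lhs, rhs)
```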
=== Proof of the inverse tangent integral relation ===
From the integral definition of the inverse tangent integral, we have
{\displaystyle \operatorname {Ti} _{2}(\tan \theta )=\int _{0}^{\tan \theta }{\frac {\tan ^{-1}x}{x}}\,dx}
Integration by parts gives
{\displaystyle \int _{0}^{\tan \theta }{\frac {\tan ^{-1}x}{x}}\,dx=\tan ^{-1}x\log x\,{\Bigg |}_{0}^{\tan \theta }-\int _{0}^{\tan \theta }{\frac {\log x}{1+x^{2}}}\,dx=}
{\displaystyle \theta \log \tan \theta -\int _{0}^{\tan \theta }{\frac {\log x}{1+x^{2}}}\,dx}
Apply the substitution
{\displaystyle x=\tan y,\,y=\tan ^{-1}x,\,dy={\frac {dx}{1+x^{2}}}\,}
to obtain
{\displaystyle \theta \log \tan \theta -\int _{0}^{\theta }\log(\tan y)\,dy}
For that last integral, apply the transformation
{\displaystyle y=x/2,\,dy=dx/2\,}
to get
{\displaystyle {\begin{aligned}&\theta \log \tan \theta -{\frac {1}{2}}\int _{0}^{2\theta }\log \left(\tan {\frac {x}{2}}\right)\,dx\\[6pt]={}&\theta \log \tan \theta -{\frac {1}{2}}\int _{0}^{2\theta }\log \left({\frac {\sin(x/2)}{\cos(x/2)}}\right)\,dx\\[6pt]={}&\theta \log \tan \theta -{\frac {1}{2}}\int _{0}^{2\theta }\log \left({\frac {2\sin(x/2)}{2\cos(x/2)}}\right)\,dx\\[6pt]={}&\theta \log \tan \theta -{\frac {1}{2}}\int _{0}^{2\theta }\log \left(2\sin {\frac {x}{2}}\right)\,dx+{\frac {1}{2}}\int _{0}^{2\theta }\log \left(2\cos {\frac {x}{2}}\right)\,dx\\[6pt]={}&\theta \log \tan \theta +{\frac {1}{2}}\operatorname {Cl} _{2}(2\theta )+{\frac {1}{2}}\int _{0}^{2\theta }\log \left(2\cos {\frac {x}{2}}\right)\,dx.\end{aligned}}}
Finally, as in the proof of the duplication formula, the substitution
{\displaystyle x=(\pi -y)\,}
reduces that last integral to
{\displaystyle \int _{0}^{2\theta }\log \left(2\cos {\frac {x}{2}}\right)\,dx=\operatorname {Cl} _{2}(\pi -2\theta )-\operatorname {Cl} _{2}(\pi )=\operatorname {Cl} _{2}(\pi -2\theta )}
Thus
{\displaystyle \operatorname {Ti} _{2}(\tan \theta )=\theta \log \tan \theta +{\frac {1}{2}}\operatorname {Cl} _{2}(2\theta )+{\frac {1}{2}}\operatorname {Cl} _{2}(\pi -2\theta )\,.\,\Box }
== Relation to the Barnes G-function ==
For real
{\displaystyle 0<z<1}
, the Clausen function of second order can be expressed in terms of the Barnes G-function and (Euler) Gamma function:
{\displaystyle \operatorname {Cl} _{2}(2\pi z)=2\pi \log \left({\frac {G(1-z)}{G(1+z)}}\right)+2\pi z\log \left({\frac {\pi }{\sin \pi z}}\right)}
Or equivalently
{\displaystyle \operatorname {Cl} _{2}(2\pi z)=2\pi \log \left({\frac {G(1-z)}{G(z)}}\right)-2\pi \log \Gamma (z)+2\pi z\log \left({\frac {\pi }{\sin \pi z}}\right)}
See Adamchik (2003).
== Relation to the polylogarithm ==
The Clausen functions represent the real and imaginary parts of the polylogarithm, on the unit circle:
{\displaystyle \operatorname {Cl} _{2m}(\theta )=\Im (\operatorname {Li} _{2m}(e^{i\theta })),\quad m\in \mathbb {Z} \geq 1}
{\displaystyle \operatorname {Cl} _{2m+1}(\theta )=\Re (\operatorname {Li} _{2m+1}(e^{i\theta })),\quad m\in \mathbb {Z} \geq 0}
This is easily seen by appealing to the series definition of the polylogarithm.
{\displaystyle \operatorname {Li} _{n}(z)=\sum _{k=1}^{\infty }{\frac {z^{k}}{k^{n}}}\quad \Longrightarrow \operatorname {Li} _{n}\left(e^{i\theta }\right)=\sum _{k=1}^{\infty }{\frac {\left(e^{i\theta }\right)^{k}}{k^{n}}}=\sum _{k=1}^{\infty }{\frac {e^{ik\theta }}{k^{n}}}}
By Euler's formula,
{\displaystyle e^{i\theta }=\cos \theta +i\sin \theta }
and by de Moivre's formula
{\displaystyle (\cos \theta +i\sin \theta )^{k}=\cos k\theta +i\sin k\theta \quad \Rightarrow \operatorname {Li} _{n}\left(e^{i\theta }\right)=\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{n}}}+i\,\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{n}}}}
Hence
{\displaystyle \operatorname {Li} _{2m}\left(e^{i\theta }\right)=\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{2m}}}+i\,\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{2m}}}=\operatorname {Sl} _{2m}(\theta )+i\operatorname {Cl} _{2m}(\theta )}
{\displaystyle \operatorname {Li} _{2m+1}\left(e^{i\theta }\right)=\sum _{k=1}^{\infty }{\frac {\cos k\theta }{k^{2m+1}}}+i\,\sum _{k=1}^{\infty }{\frac {\sin k\theta }{k^{2m+1}}}=\operatorname {Cl} _{2m+1}(\theta )+i\operatorname {Sl} _{2m+1}(\theta )}
== Relation to the polygamma function ==
The Clausen functions are intimately connected to the polygamma function. Indeed, it is possible to express Clausen functions as linear combinations of sine functions and polygamma functions. One such relation is shown here, and proven below:
{\displaystyle \operatorname {Cl} _{2m}\left({\frac {q\pi }{p}}\right)={\frac {1}{(2p)^{2m}(2m-1)!}}\,\sum _{j=1}^{p}\sin \left({\tfrac {qj\pi }{p}}\right)\,\left[\psi _{2m-1}\left({\tfrac {j}{2p}}\right)+(-1)^{q}\psi _{2m-1}\left({\tfrac {j+p}{2p}}\right)\right].}
An immediate corollary is this equivalent formula in terms of the Hurwitz zeta function:
{\displaystyle \operatorname {Cl} _{2m}\left({\frac {q\pi }{p}}\right)={\frac {1}{(2p)^{2m}}}\,\sum _{j=1}^{p}\sin \left({\tfrac {qj\pi }{p}}\right)\,\left[\zeta \left(2m,{\tfrac {j}{2p}}\right)+(-1)^{q}\zeta \left(2m,{\tfrac {j+p}{2p}}\right)\right].}
== Relation to the generalized logsine integral ==
The generalized logsine integral is defined by:
{\displaystyle {\mathcal {L}}s_{n}^{m}(\theta )=-\int _{0}^{\theta }x^{m}\log ^{n-m-1}\left|2\sin {\frac {x}{2}}\right|\,dx}
In this generalized notation, the Clausen function can be expressed in the form:
{\displaystyle \operatorname {Cl} _{2}(\theta )={\mathcal {L}}s_{2}^{0}(\theta )}
== Kummer's relation ==
Ernst Kummer and Rogers give the relation
{\displaystyle \operatorname {Li} _{2}(e^{i\theta })=\zeta (2)-\theta (2\pi -\theta )/4+i\operatorname {Cl} _{2}(\theta )}
valid for {\displaystyle 0\leq \theta \leq 2\pi }.
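Kummer's relation can be verified numerically by comparing the real part of the defining series for Li₂(e^{iθ}) against the closed form ζ(2) − θ(2π − θ)/4; the angle and truncation length below are arbitrary.

```python
import math

theta = 1.3
N = 100_000
# Real part of Li_2(e^{i*theta}) = sum_{k>=1} cos(k*theta)/k^2
re_li2 = sum(math.cos(k * theta) / k ** 2 for k in range(1, N + 1))
# Kummer's closed form: zeta(2) - theta*(2*pi - theta)/4
kummer = math.pi ** 2 / 6 - theta * (2 * math.pi - theta) / 4
print(re_li2, kummer)
```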
== Relation to the Lobachevsky function ==
The Lobachevsky function Λ or Л is essentially the same function with a change of variable:
{\displaystyle \Lambda (\theta )=-\int _{0}^{\theta }\log |2\sin(t)|\,dt=\operatorname {Cl} _{2}(2\theta )/2}
though the name "Lobachevsky function" is not quite historically accurate, as Lobachevsky's formulas for hyperbolic volume used the slightly different function
{\displaystyle \int _{0}^{\theta }\log |\sec(t)|\,dt=\Lambda (\theta +\pi /2)+\theta \log 2.}
== Relation to Dirichlet L-functions ==
For rational values of
θ
/
π
{\displaystyle \theta /\pi }
(that is, for
θ
/
π
=
p
/
q
{\displaystyle \theta /\pi =p/q}
for some integers p and q), the function
sin
(
n
θ
)
{\displaystyle \sin(n\theta )}
can be understood to represent a periodic orbit of an element in the cyclic group, and thus
Cl
s
(
θ
)
{\displaystyle \operatorname {Cl} _{s}(\theta )}
can be expressed as a simple sum involving the Hurwitz zeta function. This allows relations between certain Dirichlet L-functions to be easily computed.
== Series acceleration ==
A series acceleration for the Clausen function is given by
{\displaystyle {\frac {\operatorname {Cl} _{2}(\theta )}{\theta }}=1-\log |\theta |+\sum _{n=1}^{\infty }{\frac {\zeta (2n)}{n(2n+1)}}\left({\frac {\theta }{2\pi }}\right)^{2n}}
which holds for {\displaystyle |\theta |<2\pi }. Here, {\displaystyle \zeta (s)} is the Riemann zeta function. A more rapidly convergent form is given by
{\displaystyle {\frac {\operatorname {Cl} _{2}(\theta )}{\theta }}=3-\log \left[|\theta |\left(1-{\frac {\theta ^{2}}{4\pi ^{2}}}\right)\right]-{\frac {2\pi }{\theta }}\log \left({\frac {2\pi +\theta }{2\pi -\theta }}\right)+\sum _{n=1}^{\infty }{\frac {\zeta (2n)-1}{n(2n+1)}}\left({\frac {\theta }{2\pi }}\right)^{2n}.}
Convergence is aided by the fact that {\displaystyle \zeta (n)-1} approaches zero rapidly for large values of n. Both forms are obtainable through the types of resummation techniques used to obtain rational zeta series (Borwein et al. 2000).
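The first acceleration formula can be sketched numerically as follows; the crude zeta evaluator (partial sum plus Euler–Maclaurin tail terms) and all truncation lengths are arbitrary choices for illustration.

```python
import math

def zeta(s, K=10_000):
    # rough zeta(s) for s > 1: partial sum plus leading Euler-Maclaurin tail terms
    return sum(k ** -s for k in range(1, K + 1)) + K ** (1 - s) / (s - 1) + 0.5 * K ** -s

def cl2(theta, terms=200_000):
    # reference value from the defining Fourier series
    return sum(math.sin(k * theta) / k ** 2 for k in range(1, terms + 1))

theta = 2.0
acc = 1 - math.log(abs(theta)) + sum(
    zeta(2 * n) / (n * (2 * n + 1)) * (theta / (2 * math.pi)) ** (2 * n)
    for n in range(1, 60))  # the (theta/2pi)^(2n) factor decays geometrically
print(theta * acc, cl2(theta))
```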
== Special values ==
Recall the Barnes G-function, Catalan's constant K, and the Gieseking constant V. Some special values include
{\displaystyle \operatorname {Cl} _{2}\left({\frac {\pi }{2}}\right)=K}
{\displaystyle \operatorname {Cl} _{2}\left({\frac {\pi }{3}}\right)=V}
{\displaystyle \operatorname {Cl} _{2}\left({\frac {\pi }{3}}\right)=3\pi \log \left({\frac {G\left({\frac {2}{3}}\right)}{G\left({\frac {1}{3}}\right)}}\right)-3\pi \log \Gamma \left({\frac {1}{3}}\right)+\pi \log \left({\frac {2\pi }{\sqrt {3}}}\right)}
{\displaystyle \operatorname {Cl} _{2}\left({\frac {2\pi }{3}}\right)=2\pi \log \left({\frac {G\left({\frac {2}{3}}\right)}{G\left({\frac {1}{3}}\right)}}\right)-2\pi \log \Gamma \left({\frac {1}{3}}\right)+{\frac {2\pi }{3}}\log \left({\frac {2\pi }{\sqrt {3}}}\right)}
{\displaystyle \operatorname {Cl} _{2}\left({\frac {\pi }{4}}\right)=2\pi \log \left({\frac {G\left({\frac {7}{8}}\right)}{G\left({\frac {1}{8}}\right)}}\right)-2\pi \log \Gamma \left({\frac {1}{8}}\right)+{\frac {\pi }{4}}\log \left({\frac {2\pi }{\sqrt {2-{\sqrt {2}}}}}\right)}
{\displaystyle \operatorname {Cl} _{2}\left({\frac {3\pi }{4}}\right)=2\pi \log \left({\frac {G\left({\frac {5}{8}}\right)}{G\left({\frac {3}{8}}\right)}}\right)-2\pi \log \Gamma \left({\frac {3}{8}}\right)+{\frac {3\pi }{4}}\log \left({\frac {2\pi }{\sqrt {2+{\sqrt {2}}}}}\right)}
{\displaystyle \operatorname {Cl} _{2}\left({\frac {\pi }{6}}\right)=2\pi \log \left({\frac {G\left({\frac {11}{12}}\right)}{G\left({\frac {1}{12}}\right)}}\right)-2\pi \log \Gamma \left({\frac {1}{12}}\right)+{\frac {\pi }{6}}\log \left({\frac {2\pi {\sqrt {2}}}{{\sqrt {3}}-1}}\right)}
{\displaystyle \operatorname {Cl} _{2}\left({\frac {5\pi }{6}}\right)=2\pi \log \left({\frac {G\left({\frac {7}{12}}\right)}{G\left({\frac {5}{12}}\right)}}\right)-2\pi \log \Gamma \left({\frac {5}{12}}\right)+{\frac {5\pi }{6}}\log \left({\frac {2\pi {\sqrt {2}}}{{\sqrt {3}}+1}}\right)}
In general, from the Barnes G-function reflection formula,
{\displaystyle \operatorname {Cl} _{2}(2\pi z)=2\pi \log \left({\frac {G(1-z)}{G(z)}}\right)-2\pi \log \Gamma (z)+2\pi z\log \left({\frac {\pi }{\sin \pi z}}\right)}
Equivalently, using Euler's reflection formula for the gamma function,
{\displaystyle \operatorname {Cl} _{2}(2\pi z)=2\pi \log \left({\frac {G(1-z)}{G(z)}}\right)-2\pi \log \Gamma (z)+2\pi z\log {\big (}\Gamma (z)\Gamma (1-z){\big )}}
== Generalized special values ==
Some special values for higher order Clausen functions include
{\displaystyle \operatorname {Cl} _{2m}(0)=\operatorname {Cl} _{2m}(\pi )=\operatorname {Cl} _{2m}(2\pi )=0}
{\displaystyle \operatorname {Cl} _{2m}\left({\frac {\pi }{2}}\right)=\beta (2m)}
{\displaystyle \operatorname {Cl} _{2m+1}(0)=\operatorname {Cl} _{2m+1}(2\pi )=\zeta (2m+1)}
{\displaystyle \operatorname {Cl} _{2m+1}(\pi )=-\eta (2m+1)=-\left({\frac {2^{2m}-1}{2^{2m}}}\right)\zeta (2m+1)}
{\displaystyle \operatorname {Cl} _{2m+1}\left({\frac {\pi }{2}}\right)=-{\frac {1}{2^{2m+1}}}\eta (2m+1)=-\left({\frac {2^{2m}-1}{2^{4m+1}}}\right)\zeta (2m+1)}
where {\displaystyle \beta (x)} is the Dirichlet beta function, {\displaystyle \eta (x)} is the Dirichlet eta function (also called the alternating zeta function), and {\displaystyle \zeta (x)} is the Riemann zeta function.
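The value Cl₃(π) = −η(3) = −(3/4)ζ(3) (the m = 1 case of the fourth identity) can be confirmed directly from the defining series; truncation lengths below are arbitrary.

```python
import math

K = 100_000
# Cl_3(pi) = sum_{k>=1} cos(k*pi)/k^3 = sum_{k>=1} (-1)^k / k^3 = -eta(3)
cl3_pi = sum((-1) ** k / k ** 3 for k in range(1, K + 1))
zeta3 = sum(1 / k ** 3 for k in range(1, K + 1))
expected = -((2 ** 2 - 1) / 2 ** 2) * zeta3  # -(3/4) * zeta(3), the m = 1 case
print(cl3_pi, expected)
```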
== Integrals of the direct function ==
The following integrals are easily proven from the series representations of the Clausen function:
{\displaystyle \int _{0}^{\theta }\operatorname {Cl} _{2m}(x)\,dx=\zeta (2m+1)-\operatorname {Cl} _{2m+1}(\theta )}
{\displaystyle \int _{0}^{\theta }\operatorname {Cl} _{2m+1}(x)\,dx=\operatorname {Cl} _{2m+2}(\theta )}
{\displaystyle \int _{0}^{\theta }\operatorname {Sl} _{2m}(x)\,dx=\operatorname {Sl} _{2m+1}(\theta )}
{\displaystyle \int _{0}^{\theta }\operatorname {Sl} _{2m+1}(x)\,dx=\zeta (2m+2)-\operatorname {Cl} _{2m+2}(\theta )}
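The first of these (with m = 1) can be sketched numerically by Simpson quadrature of a truncated Cl₂ series; the modest truncation length keeps the integrand cheap to sample and well resolved by the quadrature grid.

```python
import math

TERMS = 500  # series truncation for the integrand

def cl2(x):
    return sum(math.sin(k * x) / k ** 2 for k in range(1, TERMS + 1))

def cl3(x):
    return sum(math.cos(k * x) / k ** 3 for k in range(1, TERMS + 1))

theta = 2.0
n = 4000  # Simpson subintervals (even)
h = theta / n
s = cl2(0.0) + cl2(theta) + sum((4 if i % 2 else 2) * cl2(i * h) for i in range(1, n))
integral = s * h / 3
zeta3 = sum(1 / k ** 3 for k in range(1, 200_001))
rhs = zeta3 - cl3(theta)  # claimed value of the integral
print(integral, rhs)
```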
Fourier-analytic methods can be used to find the first moments of the square of the function {\displaystyle \operatorname {Cl} _{2}(x)} on the interval {\displaystyle [0,\pi ]}:
{\displaystyle \int _{0}^{\pi }\operatorname {Cl} _{2}^{2}(x)\,dx=\zeta (4),}
{\displaystyle \int _{0}^{\pi }x\operatorname {Cl} _{2}^{2}(x)\,dx={\frac {221}{90720}}\pi ^{6}-4\zeta ({\overline {5}},1)-2\zeta ({\overline {4}},2),}
{\displaystyle \int _{0}^{\pi }x^{2}\operatorname {Cl} _{2}^{2}(x)\,dx=-{\frac {2}{3}}\pi \left[12\zeta ({\overline {5}},1)+6\zeta ({\overline {4}},2)-{\frac {23}{10080}}\pi ^{6}\right].}
Here {\displaystyle \zeta } denotes the multiple zeta function.
== Integral evaluations involving the direct function ==
A large number of trigonometric and logarithmo-trigonometric integrals can be evaluated in terms of the Clausen function, and various common mathematical constants like {\displaystyle \,K\,} (Catalan's constant), {\displaystyle \,\log 2\,}, and the special cases of the zeta function, {\displaystyle \,\zeta (2)\,} and {\displaystyle \,\zeta (3)\,}.
The examples listed below follow directly from the integral representation of the Clausen function, and the proofs require little more than basic trigonometry, integration by parts, and occasional term-by-term integration of the Fourier series definitions of the Clausen functions.
{\displaystyle \int _{0}^{\theta }\log(\sin x)\,dx=-{\tfrac {1}{2}}\operatorname {Cl} _{2}(2\theta )-\theta \log 2}
{\displaystyle \int _{0}^{\theta }\log(\cos x)\,dx={\tfrac {1}{2}}\operatorname {Cl} _{2}(\pi -2\theta )-\theta \log 2}
{\displaystyle \int _{0}^{\theta }\log(\tan x)\,dx=-{\tfrac {1}{2}}\operatorname {Cl} _{2}(2\theta )-{\tfrac {1}{2}}\operatorname {Cl} _{2}(\pi -2\theta )}
{\displaystyle \int _{0}^{\theta }\log(1+\cos x)\,dx=2\operatorname {Cl} _{2}(\pi -\theta )-\theta \log 2}
{\displaystyle \int _{0}^{\theta }\log(1-\cos x)\,dx=-2\operatorname {Cl} _{2}(\theta )-\theta \log 2}
{\displaystyle \int _{0}^{\theta }\log(1+\sin x)\,dx=2K-2\operatorname {Cl} _{2}\left({\frac {\pi }{2}}+\theta \right)-\theta \log 2}
{\displaystyle \int _{0}^{\theta }\log(1-\sin x)\,dx=-2K+2\operatorname {Cl} _{2}\left({\frac {\pi }{2}}-\theta \right)-\theta \log 2}
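The first of these can be spot-checked by comparing a numerical quadrature of log(sin x) over a singularity-free interval with the difference of the claimed antiderivative −Cl₂(2θ)/2 − θ log 2; the interval and truncation lengths below are arbitrary.

```python
import math

def cl2(theta, terms=100_000):
    return sum(math.sin(k * theta) / k ** 2 for k in range(1, terms + 1))

def F(theta):
    # claimed antiderivative of log(sin x): -Cl_2(2*theta)/2 - theta*log(2)
    return -0.5 * cl2(2 * theta) - theta * math.log(2)

a, b, n = 0.5, 1.0, 2000  # Simpson's rule away from the x = 0 singularity
h = (b - a) / n
s = math.log(math.sin(a)) + math.log(math.sin(b))
s += sum((4 if i % 2 else 2) * math.log(math.sin(a + i * h)) for i in range(1, n))
quad = s * h / 3
print(quad, F(b) - F(a))
```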
== References ==
Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 27.8". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 1005. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
Clausen, Thomas (1832). "Über die Function sin φ + (1/22) sin 2φ + (1/32) sin 3φ + etc". Journal für die reine und angewandte Mathematik. 8: 298–300. ISSN 0075-4102.
Wood, Van E. (1968). "Efficient calculation of Clausen's integral". Math. Comp. 22 (104): 883–884. doi:10.1090/S0025-5718-1968-0239733-9. MR 0239733.
Leonard Lewin, (Ed.). Structural Properties of Polylogarithms (1991) American Mathematical Society, Providence, RI. ISBN 0-8218-4532-2
Lu, Hung Jung; Perez, Christopher A. (1992). "Massless one-loop scalar three-point integral and associated Clausen, Glaisher, and L-functions" (PDF).
Kölbig, Kurt Siegfried (1995). "Chebyshev coefficients for the Clausen function Cl2(x)". J. Comput. Appl. Math. 64 (3): 295–297. doi:10.1016/0377-0427(95)00150-6. MR 1365432.
Borwein, Jonathan M.; Bradley, David M.; Crandall, Richard E. (2000). "Computational Strategies for the Riemann Zeta Function". J. Comput. Appl. Math. 121 (1–2): 247–296. Bibcode:2000JCoAM.121..247B. doi:10.1016/s0377-0427(00)00336-8. MR 1780051.
Adamchik, Viktor. S. (2003). "Contributions to the Theory of the Barnes Function". arXiv:math/0308086v1.
Kalmykov, Mikahil Yu.; Sheplyakov, A. (2005). "LSJK – a C++ library for arbitrary-precision numeric evaluation of the generalized log-sine integral". Comput. Phys. Commun. 172 (1): 45–59. arXiv:hep-ph/0411100. Bibcode:2005CoPhC.172...45K. doi:10.1016/j.cpc.2005.04.013.
Borwein, Jonathan M.; Straub, Armin (2013). "Relations for Nielsen Polylogarithms". J. Approx. Theory. 193: 74–88. doi:10.1016/j.jat.2013.07.003.
Mathar, R. J. (2013). "A C99 implementation of the Clausen sums". arXiv:1309.7504 [math.NA].
In mathematics, the multiple zeta functions are generalizations of the Riemann zeta function, defined by
{\displaystyle \zeta (s_{1},\ldots ,s_{k})=\sum _{n_{1}>n_{2}>\cdots >n_{k}>0}\ {\frac {1}{n_{1}^{s_{1}}\cdots n_{k}^{s_{k}}}}=\sum _{n_{1}>n_{2}>\cdots >n_{k}>0}\ \prod _{i=1}^{k}{\frac {1}{n_{i}^{s_{i}}}},\!}
and converge when Re(s1) + ... + Re(si) > i for all i. Like the Riemann zeta function, the multiple zeta functions can be analytically continued to be meromorphic functions (see, for example, Zhao (1999)). When s1, ..., sk are all positive integers (with s1 > 1) these sums are often called multiple zeta values (MZVs) or Euler sums. These values can also be regarded as special values of the multiple polylogarithms.
The k in the above definition is called the "depth" of an MZV, and n = s1 + ... + sk is known as its "weight".
The standard shorthand for writing multiple zeta functions is to place repeating strings of the argument within braces and use a superscript to indicate the number of repetitions. For example,
{\displaystyle \zeta (2,1,2,1,3)=\zeta (\{2,1\}^{2},3).}
== Definition ==
Multiple zeta functions arise as special cases of the multiple polylogarithms
{\displaystyle \mathrm {Li} _{s_{1},\ldots ,s_{d}}(\mu _{1},\ldots ,\mu _{d})=\sum \limits _{k_{1}>\cdots >k_{d}>0}{\frac {\mu _{1}^{k_{1}}\cdots \mu _{d}^{k_{d}}}{k_{1}^{s_{1}}\cdots k_{d}^{s_{d}}}}}
which are generalizations of the polylogarithm functions. When all of the {\displaystyle \mu _{i}} are nth roots of unity and the {\displaystyle s_{i}} are all nonnegative integers, the values of the multiple polylogarithm are called colored multiple zeta values of level {\displaystyle n}. In particular, when {\displaystyle n=2}, they are called Euler sums or alternating multiple zeta values, and when {\displaystyle n=1} they are simply called multiple zeta values. Multiple zeta values are often written
{\displaystyle \zeta (s_{1},\ldots ,s_{d})=\sum \limits _{k_{1}>\cdots >k_{d}>0}{\frac {1}{k_{1}^{s_{1}}\cdots k_{d}^{s_{d}}}}}
and Euler sums are written
{\displaystyle \zeta (s_{1},\ldots ,s_{d};\varepsilon _{1},\ldots ,\varepsilon _{d})=\sum \limits _{k_{1}>\cdots >k_{d}>0}{\frac {\varepsilon _{1}^{k_{1}}\cdots \varepsilon _{d}^{k_{d}}}{k_{1}^{s_{1}}\cdots k_{d}^{s_{d}}}}}
where {\displaystyle \varepsilon _{i}=\pm 1}. Sometimes, authors will write a bar over an {\displaystyle s_{i}} corresponding to an {\displaystyle \varepsilon _{i}} equal to {\displaystyle -1}, so for example {\displaystyle \zeta ({\overline {a}},b)=\zeta (a,b;-1,1)}.
== Integral structure and identities ==
It was noticed by Kontsevich that it is possible to express colored multiple zeta values (and thus their special cases) as certain multivariable integrals. This result is often stated with the use of a convention for iterated integrals, wherein
{\displaystyle \int _{0}^{x}f_{1}(t)dt\cdots f_{d}(t)dt=\int _{0}^{x}f_{1}(t_{1})\left(\int _{0}^{t_{1}}f_{2}(t_{2})\left(\int _{0}^{t_{2}}\cdots \left(\int _{0}^{t_{d}}f_{d}(t_{d})dt_{d}\right)\right)dt_{2}\right)dt_{1}}
Using this convention, the result can be stated as follows:
{\displaystyle \mathrm {Li} _{s_{1},\ldots ,s_{d}}(\mu _{1},\ldots ,\mu _{d})=\int _{0}^{1}\left({\frac {dt}{t}}\right)^{s_{1}-1}{\frac {dt}{a_{1}-t}}\cdots \left({\frac {dt}{t}}\right)^{s_{d}-1}{\frac {dt}{a_{d}-t}}}
where {\displaystyle a_{j}=\prod \limits _{i=1}^{j}\mu _{i}^{-1}} for {\displaystyle j=1,2,\ldots ,d}.
This result is extremely useful due to a well-known result regarding products of iterated integrals, namely that
{\displaystyle \left(\int _{0}^{x}f_{1}(t)dt\cdots f_{n}(t)dt\right)\!\left(\int _{0}^{x}f_{n+1}(t)dt\cdots f_{m}(t)dt\right)=\sum \limits _{\sigma \in {\mathfrak {Sh}}_{n,m}}\int _{0}^{x}f_{\sigma (1)}(t)\cdots f_{\sigma (m)}(t)}
where {\displaystyle {\mathfrak {Sh}}_{n,m}=\{\sigma \in S_{m}\mid \sigma (1)<\cdots <\sigma (n),\sigma (n+1)<\cdots <\sigma (m)\}} and {\displaystyle S_{m}} is the symmetric group on {\displaystyle m} symbols.
To utilize this in the context of multiple zeta values, define {\displaystyle X=\{a,b\}}, {\displaystyle X^{*}} to be the free monoid generated by {\displaystyle X}, and {\displaystyle {\mathfrak {A}}} to be the free {\displaystyle \mathbb {Q} }-vector space generated by {\displaystyle X^{*}}. {\displaystyle {\mathfrak {A}}} can be equipped with the shuffle product, turning it into an algebra. Then, the multiple zeta function can be viewed as an evaluation map, where we identify {\displaystyle a={\frac {dt}{t}}}, {\displaystyle b={\frac {dt}{1-t}}}, and define
{\displaystyle \zeta (\mathbf {w} )=\int _{0}^{1}\mathbf {w} }
for any {\displaystyle \mathbf {w} \in X^{*}}, which, by the aforementioned integral identity, makes
{\displaystyle \zeta (a^{s_{1}-1}b\cdots a^{s_{d}-1}b)=\zeta (s_{1},\ldots ,s_{d}).}
Then, the integral identity on products gives
{\displaystyle \zeta (w)\zeta (v)=\zeta (w{\text{ ⧢ }}v).}
== Two parameters case ==
In the particular case of only two parameters we have (with s > 1 and n, m integers):
{\displaystyle \zeta (s,t)=\sum _{n>m\geq 1}\ {\frac {1}{n^{s}m^{t}}}=\sum _{n=2}^{\infty }{\frac {1}{n^{s}}}\sum _{m=1}^{n-1}{\frac {1}{m^{t}}}=\sum _{n=1}^{\infty }{\frac {1}{(n+1)^{s}}}\sum _{m=1}^{n}{\frac {1}{m^{t}}}}
{\displaystyle \zeta (s,t)=\sum _{n=1}^{\infty }{\frac {H_{n,t}}{(n+1)^{s}}}}
where {\displaystyle H_{n,t}} are the generalized harmonic numbers.
Multiple zeta functions are known to satisfy what is known as MZV duality, the simplest case of which is the famous identity of Euler:
{\displaystyle \sum _{n=1}^{\infty }{\frac {H_{n}}{(n+1)^{2}}}=\zeta (2,1)=\zeta (3)=\sum _{n=1}^{\infty }{\frac {1}{n^{3}}},\!}
where Hn are the harmonic numbers.
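As a numerical sanity check (a truncated-sum sketch; the cutoff N is an arbitrary choice, and the double sum converges only slowly), Euler's identity can be verified directly:

```python
# Truncated sums checking Euler's identity zeta(2,1) = zeta(3).
# The cutoff N is arbitrary; the double sum's error is roughly log(N)/N.

def zeta_double(s, t, N=20000):
    """Partial sum of zeta(s,t) = sum_{n>m>=1} 1/(n^s m^t)."""
    total, H = 0.0, 0.0
    for n in range(2, N + 1):
        H += 1.0 / (n - 1) ** t      # H = sum_{m=1}^{n-1} 1/m^t
        total += H / n ** s
    return total

def zeta_single(s, N=20000):
    return sum(1.0 / n ** s for n in range(1, N + 1))

print(zeta_double(2, 1), zeta_single(3))  # both approach zeta(3) = 1.2020569...
```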
Special values of double zeta functions, with s > 0 and even, t > 1 and odd, and s + t = 2N + 1 (taking if necessary ζ(0) = 0):
{\displaystyle \zeta (s,t)=\zeta (s)\zeta (t)+{\tfrac {1}{2}}{\Big [}{\tbinom {s+t}{s}}-1{\Big ]}\zeta (s+t)-\sum _{r=1}^{N-1}{\Big [}{\tbinom {2r}{s-1}}+{\tbinom {2r}{t-1}}{\Big ]}\zeta (2r+1)\zeta (s+t-1-2r)}
Note that if {\displaystyle s+t=2p+2} we have {\displaystyle p/3} irreducibles, i.e. these MZVs cannot be written as a function of {\displaystyle \zeta (a)} only.
== Three parameters case ==
In the particular case of only three parameters we have (with a > 1 and n, j, i integers):
{\displaystyle \zeta (a,b,c)=\sum _{n>j>i\geq 1}\ {\frac {1}{n^{a}j^{b}i^{c}}}=\sum _{n=1}^{\infty }{\frac {1}{(n+2)^{a}}}\sum _{j=1}^{n}{\frac {1}{(j+1)^{b}}}\sum _{i=1}^{j}{\frac {1}{(i)^{c}}}=\sum _{n=1}^{\infty }{\frac {1}{(n+2)^{a}}}\sum _{j=1}^{n}{\frac {H_{j,c}}{(j+1)^{b}}}}
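The nested-sum form translates directly into code. As a hedged sketch (arbitrary cutoff, slow convergence when trailing exponents equal 1), one can check the known evaluation ζ(2,1,1) = ζ(4), a case of MZV duality:

```python
import math

# Truncated nested sum for zeta(a,b,c) = sum_{n>j>i>=1} 1/(n^a j^b i^c);
# the cutoff N is arbitrary and convergence for trailing 1's is only logarithmic.

def zeta_triple(a, b, c, N=3000):
    total, inner, Hc = 0.0, 0.0, 0.0
    for n in range(3, N + 1):
        j = n - 1
        Hc += 1.0 / (j - 1) ** c     # Hc = sum_{i=1}^{j-1} 1/i^c
        inner += Hc / j ** b         # inner = sum_{j=2}^{n-1} Hc(j)/j^b
        total += inner / n ** a
    return total

# Known evaluation (a case of duality): zeta(2,1,1) = zeta(4) = pi^4/90
print(zeta_triple(2, 1, 1), math.pi ** 4 / 90)
```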
== Euler reflection formula ==
The above MZVs satisfy the Euler reflection formula:
{\displaystyle \zeta (a,b)+\zeta (b,a)=\zeta (a)\zeta (b)-\zeta (a+b)}
for {\displaystyle a,b>1}.
Using the shuffle relations, it is easy to prove that:
{\displaystyle \zeta (a,b,c)+\zeta (a,c,b)+\zeta (b,a,c)+\zeta (b,c,a)+\zeta (c,a,b)+\zeta (c,b,a)=\zeta (a)\zeta (b)\zeta (c)+2\zeta (a+b+c)-\zeta (a)\zeta (b+c)-\zeta (b)\zeta (a+c)-\zeta (c)\zeta (a+b)}
for {\displaystyle a,b,c>1}.
This identity can be seen as a generalization of the reflection formula.
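A quick numerical check of the reflection formula at (a, b) = (2, 3), using truncated sums with arbitrary cutoffs:

```python
# Checking zeta(a,b) + zeta(b,a) = zeta(a)zeta(b) - zeta(a+b) at (a,b) = (2,3).
# Cutoffs are arbitrary; for a, b > 1 the truncation error is roughly 1/N.

def zeta2(s, t, N=4000):
    total, H = 0.0, 0.0
    for n in range(2, N + 1):
        H += 1.0 / (n - 1) ** t      # H = sum_{m<n} 1/m^t
        total += H / n ** s
    return total

def zeta1(s, N=100000):
    return sum(1.0 / n ** s for n in range(1, N + 1))

a, b = 2, 3
lhs = zeta2(a, b) + zeta2(b, a)
rhs = zeta1(a) * zeta1(b) - zeta1(a + b)
print(lhs, rhs)  # should agree to a few decimal places
```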
== Symmetric sums in terms of the zeta function ==
Let
{\displaystyle S(i_{1},i_{2},\cdots ,i_{k})=\sum _{n_{1}\geq n_{2}\geq \cdots \geq n_{k}\geq 1}{\frac {1}{n_{1}^{i_{1}}n_{2}^{i_{2}}\cdots n_{k}^{i_{k}}}}}
, and for a partition {\displaystyle \Pi =\{P_{1},P_{2},\dots ,P_{l}\}} of the set {\displaystyle \{1,2,\dots ,k\}}, let {\displaystyle c(\Pi )=(\left|P_{1}\right|-1)!(\left|P_{2}\right|-1)!\cdots (\left|P_{l}\right|-1)!}. Also, given such a {\displaystyle \Pi } and a k-tuple {\displaystyle i=\{i_{1},...,i_{k}\}} of exponents, define {\displaystyle \zeta (i,\Pi )=\prod _{s=1}^{l}\zeta {\Big (}\sum _{j\in P_{s}}i_{j}{\Big )}}.
The relations between the {\displaystyle \zeta } and {\displaystyle S} are:
{\displaystyle S(i_{1},i_{2})=\zeta (i_{1},i_{2})+\zeta (i_{1}+i_{2})}
and
{\displaystyle S(i_{1},i_{2},i_{3})=\zeta (i_{1},i_{2},i_{3})+\zeta (i_{1}+i_{2},i_{3})+\zeta (i_{1},i_{2}+i_{3})+\zeta (i_{1}+i_{2}+i_{3}).}
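The first relation just splits the sum over n₁ ≥ n₂ into its strict part and its diagonal; a small numerical check (arbitrary cutoffs):

```python
# S(i1,i2) sums over n1 >= n2 >= 1, zeta(i1,i2) over n1 > n2 >= 1; the
# diagonal n1 = n2 contributes zeta(i1+i2). Cutoffs below are arbitrary.

def S2(s, t, N=4000):
    total, H = 0.0, 0.0
    for n in range(1, N + 1):
        H += 1.0 / n ** t            # H = sum_{m=1}^{n} 1/m^t
        total += H / n ** s
    return total

def zeta2(s, t, N=4000):
    total, H = 0.0, 0.0
    for n in range(2, N + 1):
        H += 1.0 / (n - 1) ** t      # strict inequality m < n
        total += H / n ** s
    return total

def zeta1(s, N=4000):
    return sum(1.0 / n ** s for n in range(1, N + 1))

print(S2(3, 2), zeta2(3, 2) + zeta1(5))  # the two sides should nearly agree
```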
=== Theorem 1 (Hoffman) ===
For any real {\displaystyle i_{1},\cdots ,i_{k}>1},
{\displaystyle \sum _{\sigma \in \Sigma _{k}}S(i_{\sigma (1)},\dots ,i_{\sigma (k)})=\sum _{{\text{partitions }}\Pi {\text{ of }}\{1,\dots ,k\}}c(\Pi )\zeta (i,\Pi )}.
Proof. Assume the {\displaystyle i_{j}} are all distinct. (There is no loss of generality, since we can take limits.) The left-hand side can be written as
{\displaystyle \sum _{\sigma }\sum _{n_{1}\geq n_{2}\geq \cdots \geq n_{k}\geq 1}{\frac {1}{n_{\sigma (1)}^{i_{1}}n_{\sigma (2)}^{i_{2}}\cdots n_{\sigma (k)}^{i_{k}}}}.}
Now think of the symmetric group {\displaystyle \Sigma _{k}} as acting on k-tuples {\displaystyle n=(n_{1},\cdots ,n_{k})} of positive integers. A given k-tuple {\displaystyle n=(n_{1},\cdots ,n_{k})} has an isotropy group {\displaystyle \Sigma _{k}(n)} and an associated partition {\displaystyle \Lambda } of {\displaystyle \{1,2,\cdots ,k\}}: {\displaystyle \Lambda } is the set of equivalence classes of the relation given by {\displaystyle i\sim j} iff {\displaystyle n_{i}=n_{j}}, and {\displaystyle \Sigma _{k}(n)=\{\sigma \in \Sigma _{k}:\sigma (i)\sim i\;\forall i\}}. Now the term
{\displaystyle {\frac {1}{n_{\sigma (1)}^{i_{1}}n_{\sigma (2)}^{i_{2}}\cdots n_{\sigma (k)}^{i_{k}}}}}
occurs on the left-hand side of the theorem exactly {\displaystyle \left|\Sigma _{k}(n)\right|} times. It occurs on the right-hand side in those terms corresponding to partitions {\displaystyle \Pi } that are refinements of {\displaystyle \Lambda }: letting {\displaystyle \succeq } denote refinement, it occurs {\displaystyle \sum _{\Pi \succeq \Lambda }c(\Pi )} times. Thus, the conclusion will follow if
{\displaystyle \left|\Sigma _{k}(n)\right|=\sum _{\Pi \succeq \Lambda }c(\Pi )}
for any k-tuple {\displaystyle n=(n_{1},\cdots ,n_{k})} and associated partition {\displaystyle \Lambda }.
To see this, note that {\displaystyle c(\Pi )} counts the permutations having cycle type specified by {\displaystyle \Pi }: since any element of {\displaystyle \Sigma _{k}(n)} has a unique cycle type specified by a partition that refines {\displaystyle \Lambda }, the result follows.
For {\displaystyle k=3}, the theorem says
{\displaystyle \sum _{\sigma \in \Sigma _{3}}S(i_{\sigma (1)},i_{\sigma (2)},i_{\sigma (3)})=\zeta (i_{1})\zeta (i_{2})\zeta (i_{3})+\zeta (i_{1}+i_{2})\zeta (i_{3})+\zeta (i_{1})\zeta (i_{2}+i_{3})+\zeta (i_{1}+i_{3})\zeta (i_{2})+2\zeta (i_{1}+i_{2}+i_{3})}
for {\displaystyle i_{1},i_{2},i_{3}>1}. This is the main result of.
Having
{\displaystyle \zeta (i_{1},i_{2},\cdots ,i_{k})=\sum _{n_{1}>n_{2}>\cdots >n_{k}\geq 1}{\frac {1}{n_{1}^{i_{1}}n_{2}^{i_{2}}\cdots n_{k}^{i_{k}}}},}
to state the analog of Theorem 1 for the {\displaystyle \zeta }'s we require one bit of notation. For a partition {\displaystyle \Pi =\{P_{1},\cdots ,P_{l}\}} of {\displaystyle \{1,2,\cdots ,k\}}, let {\displaystyle {\tilde {c}}(\Pi )=(-1)^{k-l}c(\Pi )}.
=== Theorem 2 (Hoffman) ===
For any real {\displaystyle i_{1},\cdots ,i_{k}>1},
{\displaystyle \sum _{\sigma \in \Sigma _{k}}\zeta (i_{\sigma (1)},\dots ,i_{\sigma (k)})=\sum _{{\text{partitions }}\Pi {\text{ of }}\{1,\dots ,k\}}{\tilde {c}}(\Pi )\zeta (i,\Pi )}.
Proof. We follow the same line of argument as in the preceding proof. The left-hand side is now
{\displaystyle \sum _{\sigma }\sum _{n_{1}>n_{2}>\cdots >n_{k}\geq 1}{\frac {1}{n_{\sigma (1)}^{i_{1}}n_{\sigma (2)}^{i_{2}}\cdots n_{\sigma (k)}^{i_{k}}}},}
and a term
{\displaystyle {\frac {1}{n_{1}^{i_{1}}n_{2}^{i_{2}}\cdots n_{k}^{i_{k}}}}}
occurs on the left-hand side once if all the {\displaystyle n_{i}} are distinct, and not at all otherwise. Thus, it suffices to show
{\displaystyle \sum _{\Pi \succeq \Lambda }{\tilde {c}}(\Pi )={\begin{cases}1,&{\text{if }}\left|\Lambda \right|=k\\0,&{\text{otherwise}}.\end{cases}}}
(1)
To prove this, note first that the sign of {\displaystyle {\tilde {c}}(\Pi )} is positive if the permutations of cycle type {\displaystyle \Pi } are even, and negative if they are odd: thus, the left-hand side of (1) is the signed sum of the number of even and odd permutations in the isotropy group {\displaystyle \Sigma _{k}(n)}. But such an isotropy group has equal numbers of even and odd permutations unless it is trivial, i.e. unless the associated partition {\displaystyle \Lambda } is {\displaystyle \{\{1\},\{2\},\cdots ,\{k\}\}}.
== The sum and duality conjectures ==
We first state the sum conjecture, which is due to C. Moen.
Sum conjecture (Hoffman). For positive integers k and n,
{\displaystyle \sum _{i_{1}+\cdots +i_{k}=n,i_{1}>1}\zeta (i_{1},\cdots ,i_{k})=\zeta (n),}
where the sum is extended over k-tuples {\displaystyle i_{1},\cdots ,i_{k}} of positive integers with {\displaystyle i_{1}>1}.
Three remarks concerning this conjecture are in order. First, it implies
{\displaystyle \sum _{i_{1}+\cdots +i_{k}=n,i_{1}>1}S(i_{1},\cdots ,i_{k})={n-1 \choose k-1}\zeta (n).}
Second, in the case {\displaystyle k=2} it says that
{\displaystyle \zeta (n-1,1)+\zeta (n-2,2)+\cdots +\zeta (2,n-2)=\zeta (n),}
or, using the relation between the {\displaystyle \zeta }'s and {\displaystyle S}'s and Theorem 1,
{\displaystyle 2S(n-1,1)=(n+1)\zeta (n)-\sum _{k=2}^{n-2}\zeta (k)\zeta (n-k).}
This was proved by Euler and has been rediscovered several times, in particular by Williams. Finally, C. Moen has proved the same conjecture for k=3 by lengthy but elementary arguments.
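Euler's evaluation can be checked at n = 4, where it reads 2S(3,1) = 5ζ(4) − ζ(2)²; a truncated-sum sketch (cutoff arbitrary):

```python
import math

# Euler's evaluation at n = 4 reads 2*S(3,1) = 5*zeta(4) - zeta(2)^2.
# S(3,1) is truncated at an arbitrary cutoff N.

def S2(s, t, N=20000):
    total, H = 0.0, 0.0
    for n in range(1, N + 1):
        H += 1.0 / n ** t            # n1 >= n2, so the inner index runs to n
        total += H / n ** s
    return total

zeta_2, zeta_4 = math.pi ** 2 / 6, math.pi ** 4 / 90
print(2 * S2(3, 1), 5 * zeta_4 - zeta_2 ** 2)  # both equal pi^4/36 = 2.70581...
```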
For the duality conjecture, we first define an involution {\displaystyle \tau } on the set {\displaystyle \Im } of finite sequences of positive integers whose first element is greater than 1. Let {\displaystyle \mathrm {T} } be the set of strictly increasing finite sequences of positive integers, and let {\displaystyle \Sigma :\Im \rightarrow \mathrm {T} } be the function that sends a sequence in {\displaystyle \Im } to its sequence of partial sums. If {\displaystyle \mathrm {T} _{n}} is the set of sequences in {\displaystyle \mathrm {T} } whose last element is at most {\displaystyle n}, we have two commuting involutions {\displaystyle R_{n}} and {\displaystyle C_{n}} on {\displaystyle \mathrm {T} _{n}} defined by
{\displaystyle R_{n}(a_{1},a_{2},\dots ,a_{l})=(n+1-a_{l},n+1-a_{l-1},\dots ,n+1-a_{1})}
and {\displaystyle C_{n}(a_{1},\dots ,a_{l})} = complement of {\displaystyle \{a_{1},\dots ,a_{l}\}} in {\displaystyle \{1,2,\dots ,n\}} arranged in increasing order. Then our definition of {\displaystyle \tau } is
{\displaystyle \tau (I)=\Sigma ^{-1}R_{n}C_{n}\Sigma (I)=\Sigma ^{-1}C_{n}R_{n}\Sigma (I)}
for {\displaystyle I=(i_{1},i_{2},\dots ,i_{k})\in \Im } with {\displaystyle i_{1}+\cdots +i_{k}=n}.
For example,
{\displaystyle \tau (3,4,1)=\Sigma ^{-1}C_{8}R_{8}(3,7,8)=\Sigma ^{-1}(3,4,5,7,8)=(3,1,1,2,1).}
We shall say the sequences {\displaystyle (i_{1},\dots ,i_{k})} and {\displaystyle \tau (i_{1},\dots ,i_{k})} are dual to each other, and refer to a sequence fixed by {\displaystyle \tau } as self-dual.
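The involution τ can be implemented directly from the definition (a minimal sketch; the function assumes its argument lies in ℑ, i.e. the first entry exceeds 1):

```python
from itertools import accumulate

# Hoffman's duality involution tau = Sigma^{-1} C_n R_n Sigma on sequences
# of positive integers summing to n whose first entry exceeds 1.

def tau(seq):
    n = sum(seq)
    sigma = list(accumulate(seq))                 # Sigma: partial sums, in T_n
    r = [n + 1 - a for a in reversed(sigma)]      # R_n
    c = sorted(set(range(1, n + 1)) - set(r))     # C_n: complement in {1..n}
    return tuple(x - y for x, y in zip(c, [0] + c[:-1]))  # Sigma^{-1}

print(tau((3, 4, 1)))   # (3, 1, 1, 2, 1), matching the worked example
print(tau((3,)))        # (2, 1) -- the duality behind Euler's zeta(3) = zeta(2,1)
```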
Duality conjecture (Hoffman). If {\displaystyle (h_{1},\dots ,h_{n-k})} is dual to {\displaystyle (i_{1},\dots ,i_{k})}, then {\displaystyle \zeta (h_{1},\dots ,h_{n-k})=\zeta (i_{1},\dots ,i_{k})}.
This sum conjecture is also known as Sum Theorem, and it may be expressed as follows: the Riemann zeta value of an integer n ≥ 2 is equal to the sum of all the valid (i.e. with s1 > 1) MZVs of the partitions of length k and weight n, with 1 ≤ k ≤ n − 1. In formula:
{\displaystyle \sum _{\stackrel {s_{1}+\cdots +s_{k}=n}{s_{1}>1}}\zeta (s_{1},\ldots ,s_{k})=\zeta (n).}
For example, with length k = 2 and weight n = 7:
{\displaystyle \zeta (6,1)+\zeta (5,2)+\zeta (4,3)+\zeta (3,4)+\zeta (2,5)=\zeta (7).}
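A numerical check of the weight-7, length-2 instance above, using truncated sums with an arbitrary cutoff:

```python
# Checking zeta(6,1)+zeta(5,2)+zeta(4,3)+zeta(3,4)+zeta(2,5) = zeta(7)
# with truncated sums; the cutoff is arbitrary and zeta(2,5) dominates the error.

def zeta2(s, t, N=20000):
    total, H = 0.0, 0.0
    for n in range(2, N + 1):
        H += 1.0 / (n - 1) ** t      # H = sum_{m<n} 1/m^t
        total += H / n ** s
    return total

def zeta1(s, N=20000):
    return sum(1.0 / n ** s for n in range(1, N + 1))

lhs = sum(zeta2(s, 7 - s) for s in range(2, 7))
print(lhs, zeta1(7))  # both near zeta(7) = 1.0083492...
```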
== Euler sum with all possible alternations of sign ==
The Euler sum with alternations of sign appears in studies of the non-alternating Euler sum.
=== Notation ===
{\displaystyle \sum _{n=1}^{\infty }{\frac {H_{n}^{(b)}(-1)^{(n+1)}}{(n+1)^{a}}}=\zeta ({\bar {a}},b)}
where
{\displaystyle H_{n}^{(b)}=+1+{\frac {1}{2^{b}}}+{\frac {1}{3^{b}}}+\cdots }
are the generalized harmonic numbers.
{\displaystyle \sum _{n=1}^{\infty }{\frac {{\bar {H}}_{n}^{(b)}}{(n+1)^{a}}}=\zeta (a,{\bar {b}})}
with
{\displaystyle {\bar {H}}_{n}^{(b)}=-1+{\frac {1}{2^{b}}}-{\frac {1}{3^{b}}}+\cdots }
{\displaystyle \sum _{n=1}^{\infty }{\frac {{\bar {H}}_{n}^{(b)}(-1)^{(n+1)}}{(n+1)^{a}}}=\zeta ({\bar {a}},{\bar {b}})}
{\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n}}{(n+2)^{a}}}\sum _{n=1}^{\infty }{\frac {{\bar {H}}_{n}^{(c)}(-1)^{(n+1)}}{(n+1)^{b}}}=\zeta ({\bar {a}},{\bar {b}},{\bar {c}})}
with
{\displaystyle {\bar {H}}_{n}^{(c)}=-1+{\frac {1}{2^{c}}}-{\frac {1}{3^{c}}}+\cdots }
{\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n}}{(n+2)^{a}}}\sum _{n=1}^{\infty }{\frac {H_{n}^{(c)}}{(n+1)^{b}}}=\zeta ({\bar {a}},b,c)}
with
{\displaystyle H_{n}^{(c)}=+1+{\frac {1}{2^{c}}}+{\frac {1}{3^{c}}}+\cdots }
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{(n+2)^{a}}}\sum _{n=1}^{\infty }{\frac {H_{n}^{(c)}(-1)^{(n+1)}}{(n+1)^{b}}}=\zeta (a,{\bar {b}},c)}
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{(n+2)^{a}}}\sum _{n=1}^{\infty }{\frac {{\bar {H}}_{n}^{(c)}}{(n+1)^{b}}}=\zeta (a,b,{\bar {c}})}
As a variant of the Dirichlet eta function we define
{\displaystyle \phi (s)={\frac {1-2^{(s-1)}}{2^{(s-1)}}}\zeta (s)}
with {\displaystyle s>1}, and
{\displaystyle \phi (1)=-\ln 2.}
=== Reflection formula ===
The reflection formula
{\displaystyle \zeta (a,b)+\zeta (b,a)=\zeta (a)\zeta (b)-\zeta (a+b)}
can be generalized as follows:
{\displaystyle \zeta (a,{\bar {b}})+\zeta ({\bar {b}},a)=\zeta (a)\phi (b)-\phi (a+b)}
{\displaystyle \zeta ({\bar {a}},b)+\zeta (b,{\bar {a}})=\zeta (b)\phi (a)-\phi (a+b)}
{\displaystyle \zeta ({\bar {a}},{\bar {b}})+\zeta ({\bar {b}},{\bar {a}})=\phi (a)\phi (b)-\zeta (a+b)}
If {\displaystyle a=b} we have
{\displaystyle \zeta ({\bar {a}},{\bar {a}})={\tfrac {1}{2}}{\Big [}\phi ^{2}(a)-\zeta (2a){\Big ]}.}
=== Other relations ===
Using the series definition it is easy to prove:
{\displaystyle \zeta (a,b)+\zeta (a,{\bar {b}})+\zeta ({\bar {a}},b)+\zeta ({\bar {a}},{\bar {b}})={\frac {\zeta (a,b)}{2^{(a+b-2)}}}}
with {\displaystyle a>1};
{\displaystyle \zeta (a,b,c)+\zeta (a,b,{\bar {c}})+\zeta (a,{\bar {b}},c)+\zeta ({\bar {a}},b,c)+\zeta (a,{\bar {b}},{\bar {c}})+\zeta ({\bar {a}},b,{\bar {c}})+\zeta ({\bar {a}},{\bar {b}},c)+\zeta ({\bar {a}},{\bar {b}},{\bar {c}})={\frac {\zeta (a,b,c)}{2^{(a+b+c-3)}}}}
with {\displaystyle a>1}.
A further useful relation is:
{\displaystyle \zeta (a,b)+\zeta ({\bar {a}},{\bar {b}})=\sum _{s>0}(a+b-s-1)!{\Big [}{\frac {Z_{a}(a+b-s,s)}{(a-s)!(b-1)!}}+{\frac {Z_{b}(a+b-s,s)}{(b-s)!(a-1)!}}{\Big ]}}
where
{\displaystyle Z_{a}(s,t)=\zeta (s,t)+\zeta ({\bar {s}},t)-{\frac {{\Big [}\zeta (s,t)+\zeta (s+t){\Big ]}}{2^{(s-1)}}}}
and
{\displaystyle Z_{b}(s,t)={\frac {\zeta (s,t)}{2^{(s-1)}}}}
Note that {\displaystyle s} must run over all values {\displaystyle >0} for which the arguments of the factorials are {\displaystyle \geqslant 0}.
== Other results ==
For all positive integers {\displaystyle a,b,\dots ,k}:
{\displaystyle \sum _{n=2}^{\infty }\zeta (n,k)=\zeta (k+1)}
or more generally:
{\displaystyle \sum _{n=2}^{\infty }\zeta (n,a,b,\dots ,k)=\zeta (a+1,b,\dots ,k)}
{\displaystyle \sum _{n=2}^{\infty }\zeta (n,{\bar {k}})=-\phi (k+1)}
{\displaystyle \sum _{n=2}^{\infty }\zeta (n,{\bar {a}},b)=\zeta ({\overline {a+1}},b)}
{\displaystyle \sum _{n=2}^{\infty }\zeta (n,a,{\bar {b}})=\zeta (a+1,{\bar {b}})}
{\displaystyle \sum _{n=2}^{\infty }\zeta (n,{\bar {a}},{\bar {b}})=\zeta ({\overline {a+1}},{\bar {b}})}
{\displaystyle \lim _{k\to \infty }\zeta (n,k)=\zeta (n)-1}
{\displaystyle 1-\zeta (2)+\zeta (3)-\zeta (4)+\cdots ={\frac {1}{2}}}
{\displaystyle \zeta (a,a)={\tfrac {1}{2}}{\Big [}(\zeta (a))^{2}-\zeta (2a){\Big ]}}
{\displaystyle \zeta (a,a,a)={\tfrac {1}{6}}(\zeta (a))^{3}+{\tfrac {1}{3}}\zeta (3a)-{\tfrac {1}{2}}\zeta (a)\zeta (2a)}
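The formula for ζ(a,a) can be checked at a = 2, where the right-hand side is π⁴/120 exactly; a truncated-sum sketch (arbitrary cutoff):

```python
import math

# Checking zeta(a,a) = (zeta(a)^2 - zeta(2a))/2 at a = 2, where the
# right-hand side equals pi^4/120. Cutoff N is arbitrary.

def zeta2(s, t, N=20000):
    total, H = 0.0, 0.0
    for n in range(2, N + 1):
        H += 1.0 / (n - 1) ** t      # H = sum_{m<n} 1/m^t
        total += H / n ** s
    return total

z2, z4 = math.pi ** 2 / 6, math.pi ** 4 / 90
print(zeta2(2, 2), (z2 ** 2 - z4) / 2)  # both near pi^4/120 = 0.811742...
```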
== Mordell–Tornheim zeta values ==
The Mordell–Tornheim zeta function, introduced by Matsumoto (2003) who was motivated by the papers Mordell (1958) and Tornheim (1950), is defined by
{\displaystyle \zeta _{MT,r}(s_{1},\dots ,s_{r};s_{r+1})=\sum _{m_{1},\dots ,m_{r}>0}{\frac {1}{m_{1}^{s_{1}}\cdots m_{r}^{s_{r}}(m_{1}+\dots +m_{r})^{s_{r+1}}}}}
It is a special case of the Shintani zeta function.
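As an illustrative sketch (arbitrary cutoffs), setting s₂ = 0 in the r = 2 case collapses the Mordell–Tornheim sum to a double zeta value, since m₁ + m₂ then runs over all integers greater than m₁, giving ζ_MT,2(s₁,0;s₃) = ζ(s₃,s₁):

```python
# Truncated Mordell-Tornheim sum zeta_MT(s1,s2;s3) = sum 1/(m^s1 n^s2 (m+n)^s3),
# checked against the reduction zeta_MT(s1,0;s3) = zeta(s3,s1). Cutoffs arbitrary.

def mt(s1, s2, s3, N=600):
    return sum(1.0 / (m ** s1 * n ** s2 * (m + n) ** s3)
               for m in range(1, N + 1) for n in range(1, N + 1))

def zeta2(s, t, N=5000):
    total, H = 0.0, 0.0
    for n in range(2, N + 1):
        H += 1.0 / (n - 1) ** t      # H = sum_{m<n} 1/m^t
        total += H / n ** s
    return total

print(mt(2, 0, 3), zeta2(3, 2))  # the two truncations should agree closely
```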
== References ==
Tornheim, Leonard (1950). "Harmonic double series". American Journal of Mathematics. 72 (2): 303–314. doi:10.2307/2372034. ISSN 0002-9327. JSTOR 2372034. MR 0034860.
Mordell, Louis J. (1958). "On the evaluation of some multiple series". Journal of the London Mathematical Society. Second Series. 33 (3): 368–371. doi:10.1112/jlms/s1-33.3.368. ISSN 0024-6107. MR 0100181.
Apostol, Tom M.; Vu, Thiennu H. (1984), "Dirichlet series related to the Riemann zeta function", Journal of Number Theory, 19 (1): 85–102, doi:10.1016/0022-314X(84)90094-5, ISSN 0022-314X, MR 0751166
Crandall, Richard E.; Buhler, Joe P. (1994). "On the evaluation of Euler Sums". Experimental Mathematics. 3 (4): 275. doi:10.1080/10586458.1994.10504297. MR 1341720.
Borwein, Jonathan M.; Girgensohn, Roland (1996). "Evaluation of Triple Euler Sums". Electron. J. Comb. 3 (1): #R23. doi:10.37236/1247. hdl:1959.13/940394. MR 1401442.
Flajolet, Philippe; Salvy, Bruno (1998). "Euler Sums and contour integral representations". Exp. Math. 7: 15–35. CiteSeerX 10.1.1.37.652. doi:10.1080/10586458.1998.10504356.
Zhao, Jianqiang (1999). "Analytic continuation of multiple zeta functions". Proceedings of the American Mathematical Society. 128 (5): 1275–1283. doi:10.1090/S0002-9939-99-05398-8. MR 1670846.
Matsumoto, Kohji (2003), "On Mordell–Tornheim and other multiple zeta-functions", Proceedings of the Session in Analytic Number Theory and Diophantine Equations, Bonner Math. Schriften, vol. 360, Bonn: Univ. Bonn, MR 2075634
Espinosa, Olivier; Moll, Victor Hugo (2008). "The evaluation of Tornheim double sums". arXiv:math/0505647.
Espinosa, Olivier; Moll, Victor Hugo (2010). "The evaluation of Tornheim double sums II". Ramanujan J. 22: 55–99. arXiv:0811.0557. doi:10.1007/s11139-009-9181-1. MR 2610609. S2CID 17055581.
Borwein, J.M.; Chan, O-Y. (2010). "Duality in tails of multiple zeta values". Int. J. Number Theory. 6 (3): 501–514. CiteSeerX 10.1.1.157.9158. doi:10.1142/S1793042110003058. MR 2652893.
Basu, Ankur (2011). "On the evaluation of Tornheim sums and allied double sums". Ramanujan J. 26 (2): 193–207. doi:10.1007/s11139-011-9302-5. MR 2853480. S2CID 120229489.
== External links ==
Borwein, Jonathan; Zudilin, Wadim. "Lecture notes on the Multiple Zeta Function".
Hoffman, Michael (2012). "Multiple zeta values".
Zhao, Jianqiang (2016). Multiple Zeta Functions, Multiple Polylogarithms and Their Special Values. Series on Number Theory and its Applications. Vol. 12. World Scientific Publishing. doi:10.1142/9634. ISBN 978-981-4689-39-7.
Burgos Gil, José Ignacio; Fresán, Javier. "Multiple zeta values: from numbers to motives" (PDF). | Wikipedia/Multiple_zeta_functions |
In number theory, a Hecke character is a generalisation of a Dirichlet character, introduced by Erich Hecke to construct a class of
L-functions larger than Dirichlet L-functions, and a natural setting for the Dedekind zeta-functions and certain others which have functional equations analogous to that of the Riemann zeta-function.
== Definition ==
A Hecke character is a character of the idele class group of a number field or global function field. It corresponds uniquely to a character of the idele group which is trivial on principal ideles, via composition with the projection map.
This definition depends on the definition of a character, which varies slightly between authors: it may be defined as a homomorphism to the non-zero complex numbers (also called a "quasicharacter"), or as a homomorphism to the unit circle in {\displaystyle \mathbb {C} } ("unitary"). Any quasicharacter (of the idele class group) can be written uniquely as a unitary character times a real power of the norm, so there is no big difference between the two definitions.
The conductor of a Hecke character {\displaystyle \chi } is the largest ideal {\displaystyle {\mathfrak {m}}} such that {\displaystyle \chi } is a Hecke character mod {\displaystyle {\mathfrak {m}}}. Here we say that {\displaystyle \chi } is a Hecke character mod {\displaystyle {\mathfrak {m}}} if {\displaystyle \chi } (considered as a character on the idele group) is trivial on the group of finite ideles whose every {\displaystyle \nu }-adic component lies in {\displaystyle 1+{\mathfrak {m}}O_{\nu }}.
== Größencharakter ==
A Größencharakter (often written Grössencharakter, Grossencharacter, etc.), the origin of the Hecke character and going back to Hecke, is defined in terms of a character on the group of fractional ideals. For a number field {\displaystyle K}, let {\displaystyle {\mathfrak {m}}={\mathfrak {m}}_{f}{\mathfrak {m}}_{\infty }} be a {\displaystyle K}-modulus, with {\displaystyle {\mathfrak {m}}_{f}}, the "finite part", being an integral ideal of {\displaystyle K} and {\displaystyle {\mathfrak {m}}_{\infty }}, the "infinite part", being a (formal) product of real places of {\displaystyle K}. Let {\displaystyle I_{\mathfrak {m}}} denote the group of fractional ideals of {\displaystyle K} relatively prime to {\displaystyle {\mathfrak {m}}_{f}}, and let {\displaystyle P_{\mathfrak {m}}} denote the subgroup of principal fractional ideals {\displaystyle (a)} where {\displaystyle a} is near {\displaystyle 1} at each place of {\displaystyle {\mathfrak {m}}} in accordance with the multiplicities of its factors. That is, for each finite place {\displaystyle \nu } in {\displaystyle {\mathfrak {m}}_{f}}, the order {\displaystyle \mathrm {ord} _{\nu }(a-1)} is at least as large as the exponent for {\displaystyle \nu } in {\displaystyle {\mathfrak {m}}_{f}}, and {\displaystyle a} is positive under each real embedding in {\displaystyle {\mathfrak {m}}_{\infty }}. A Größencharakter with modulus {\displaystyle {\mathfrak {m}}} is a group homomorphism from {\displaystyle I_{\mathfrak {m}}} into the nonzero complex numbers such that on ideals {\displaystyle (a)} in {\displaystyle P_{\mathfrak {m}}} its value is equal to the value at {\displaystyle a} of a continuous homomorphism to the nonzero complex numbers from the product of the multiplicative groups of all Archimedean completions of {\displaystyle K}, where each local component of the homomorphism has the same real part (in the exponent). (Here we embed {\displaystyle a} into the product of Archimedean completions of {\displaystyle K} using embeddings corresponding to the various Archimedean places on {\displaystyle K}.) Thus a Größencharakter may be defined on the ray class group modulo {\displaystyle {\mathfrak {m}}}, which is the quotient {\displaystyle I_{\mathfrak {m}}/P_{\mathfrak {m}}}.
Strictly speaking, Hecke made the stipulation about behavior on principal ideals for those admitting a totally positive generator. So, in terms of the definition given above, he really only worked with moduli where all real places appeared.
The role of the infinite part m∞ is now subsumed under the notion of an infinity-type.
== Relationship between Größencharakter and Hecke character ==
A Hecke character and a Größencharakter are essentially the same notion with a one-to-one correspondence. The ideal definition is much more complicated than the idelic one, and Hecke's motivation for his definition was to construct L-functions (sometimes referred to as Hecke L-functions) that extend the notion of a Dirichlet L-function from the rationals to other number fields. For a Größencharakter χ, its L-function is defined to be the Dirichlet series
{\displaystyle \sum _{(I,m)=1}\chi (I)N(I)^{-s}=L(s,\chi )}
carried out over integral ideals relatively prime to the modulus {\displaystyle {\mathfrak {m}}} of the Größencharakter.
Here {\displaystyle N(I)} denotes the ideal norm. The common real part condition governing the behavior of Größencharakter on the subgroups {\displaystyle P_{\mathfrak {m}}} implies these Dirichlet series are absolutely convergent in some right half-plane. Hecke proved these L-functions have a meromorphic continuation to the whole complex plane, being analytic except for a simple pole at {\displaystyle s=1}
when the character is trivial. For primitive Größencharakter (defined relative to a modulus in a similar manner to primitive Dirichlet characters), Hecke showed these L-functions satisfy a functional equation relating the values of the L-function of a character and the L-function of its complex conjugate character.
Consider a character {\displaystyle \psi } of the idele class group, taken to be a map into the unit circle which is 1 on principal ideles and on an exceptional finite set {\displaystyle S} containing all infinite places. Then {\displaystyle \psi } generates a character {\displaystyle \chi } of the ideal group {\displaystyle I^{S}}, which is the free abelian group on the prime ideals not in {\displaystyle S}. Take a uniformising element {\displaystyle \pi } for each prime {\displaystyle {\mathfrak {p}}} not in {\displaystyle S} and define a map {\displaystyle \Pi } from {\displaystyle I^{S}} to idele classes by mapping each {\displaystyle {\mathfrak {p}}} to the class of the idele which is {\displaystyle \pi } in the {\displaystyle {\mathfrak {p}}} coordinate and {\displaystyle 1} everywhere else. Let {\displaystyle \chi } be the composite of {\displaystyle \Pi } and {\displaystyle \psi }. Then {\displaystyle \chi } is well-defined as a character on the ideal group.
In the opposite direction, given an admissible character {\displaystyle \chi } of {\displaystyle I^{S}} there corresponds a unique idele class character {\displaystyle \psi }. Here admissible refers to the existence of a modulus {\displaystyle {\mathfrak {m}}} based on the set {\displaystyle S} such that the character {\displaystyle \chi } evaluates to {\displaystyle 1} on the ideals which are 1 mod {\displaystyle {\mathfrak {m}}}.
The characters are 'big' in the sense that the infinity-type when present non-trivially means these characters are not of finite order. The finite-order Hecke characters are all, in a sense, accounted for by class field theory: their L-functions are Artin L-functions, as Artin reciprocity shows. But even a field as simple as the Gaussian field has Hecke characters that go beyond finite order in a serious way (see the example below). Later developments in complex multiplication theory indicated that the proper place of the 'big' characters was to provide the Hasse–Weil L-functions for an important class of algebraic varieties (or even motives).
== Special cases ==
A Dirichlet character is a Hecke character of finite order. It is determined by values on the set of totally positive principal ideals which are 1 with respect to some modulus m.
A Hilbert character is a Dirichlet character of conductor 1. The number of Hilbert characters is the order of the class group of the field. Class field theory identifies the Hilbert characters with the characters of the Galois group of the Hilbert class field.
== Examples ==
For the field of rational numbers, the idele class group is isomorphic to the product of the positive reals {\displaystyle \mathbb {R} ^{+}} with all the unit groups of the p-adic integers. So a quasicharacter can be written as a product of a power of the norm with a Dirichlet character.
A Hecke character χ of the Gaussian integers of conductor 1 is of the form
χ((a)) = |a|^s (a/|a|)^{4n}
for s imaginary and n an integer, where a is a generator of the ideal (a). The only units are powers of i, so the factor of 4 in the exponent ensures that the character is well defined on ideals.
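The role of the units can be checked numerically. A minimal Python sketch (representing a Gaussian integer as a Python complex number) verifies that the value is unchanged when the generator a is replaced by a unit multiple, since i⁴ = 1:

```python
def hecke_value(a, s, n):
    """Value of the conductor-1 Hecke character chi((a)) = |a|^s * (a/|a|)^(4n)
    on the ideal generated by the Gaussian integer a (a Python complex)."""
    r = abs(a)
    return r**s * (a / r)**(4 * n)

# The ideal (a) has generators a, i*a, -a, -i*a; since any unit u satisfies
# u**4 == 1, the factor of 4 in the exponent makes the value independent
# of the chosen generator.
a = 3 + 2j
for unit in (1, 1j, -1, -1j):
    v1 = hecke_value(a, 0.5j, 3)          # s imaginary, n an integer
    v2 = hecke_value(unit * a, 0.5j, 3)   # same ideal, different generator
    assert abs(v1 - v2) < 1e-12
```

Dropping the factor of 4 (e.g. using exponent n alone) would make the value depend on the choice of generator.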
== Tate's thesis ==
Hecke's original proof of the functional equation for L(s,χ) used an explicit theta-function. John Tate's 1950 Princeton doctoral dissertation, written under the supervision of Emil Artin, applied Pontryagin duality systematically to remove the need for any special functions. A similar theory was independently developed by Kenkichi Iwasawa, and was the subject of his 1950 ICM talk. A later reformulation in a Bourbaki seminar by Weil (1966) showed that parts of Tate's proof could be expressed by distribution theory: the space of distributions (for Schwartz–Bruhat test functions) on the adele group of K transforming under the action of the ideles by a given χ has dimension 1.
== Algebraic Hecke characters ==
An algebraic Hecke character is a Hecke character taking algebraic values: they were introduced by Weil in 1947 under the name type A0. Such characters occur in class field theory and the theory of complex multiplication.
Indeed let E be an elliptic curve defined over a number field F with complex multiplication by the imaginary quadratic field K, and suppose that K is contained in F. Then there is an algebraic Hecke character χ for F, with exceptional set S the set of primes of bad reduction of E together with the infinite places. This character has the property that for a prime ideal p of good reduction, the value χ(p) is a root of the characteristic polynomial of the Frobenius endomorphism. As a consequence, the Hasse–Weil zeta function for E is a product of two Dirichlet series, for χ and its complex conjugate.
== Notes ==
== References ==
Cassels, J.W.S.; Fröhlich, Albrecht, eds. (1967). Algebraic Number Theory. Academic Press. Zbl 0153.07403.
Heilbronn, H. (1967). "VIII. Zeta-functions and L-functions". In Cassels, J.W.S.; Fröhlich, Albrecht (eds.). Algebraic Number Theory. Academic Press. pp. 204–230.
Husemöller, Dale H. (1987). Elliptic curves. Graduate Texts in Mathematics. Vol. 111. With an appendix by Ruth Lawrence. Springer-Verlag. ISBN 0-387-96371-5. Zbl 0605.14032.
Husemöller, Dale (2002). Elliptic curves. Graduate Texts in Mathematics. Vol. 111 (second ed.). Springer-Verlag. doi:10.1007/b97292. ISBN 0-387-95490-2. Zbl 1040.11043.
W. Narkiewicz (1990). Elementary and analytic theory of algebraic numbers (2nd ed.). Springer-Verlag/Polish Scientific Publishers PWN. pp. 334–343. ISBN 3-540-51250-0. Zbl 0717.11045.
Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
J. Tate, Fourier analysis in number fields and Hecke's zeta functions (Tate's 1950 thesis), reprinted in Algebraic Number Theory edd J. W. S. Cassels, A. Fröhlich (1967) pp. 305–347. Zbl 1179.11041
Tate, J.T. (1967). "VII. Global class field theory". In Cassels, J.W.S.; Fröhlich, Albrecht (eds.). Algebraic Number Theory. Academic Press. pp. 162–203. Zbl 1179.11041.
Weil, André (1966), Fonction zêta et distributions (PDF), Séminaire Bourbaki, no. 312
In mathematics, the hypergeometric function of a matrix argument is a generalization of the classical hypergeometric series. It is a function defined by an infinite summation which can be used to evaluate certain multivariate integrals.
Hypergeometric functions of a matrix argument have applications in random matrix theory. For example, the distributions of the extreme eigenvalues of random matrices are often expressed in terms of the hypergeometric function of a matrix argument.
== Definition ==
Let p ≥ 0 and q ≥ 0 be integers, and let X be an m × m complex symmetric matrix. Then the hypergeometric function of a matrix argument X and parameter α > 0 is defined as
{\displaystyle _{p}F_{q}^{(\alpha )}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};X)=\sum _{k=0}^{\infty }\sum _{\kappa \vdash k}{\frac {1}{k!}}\cdot {\frac {(a_{1})_{\kappa }^{(\alpha )}\cdots (a_{p})_{\kappa }^{(\alpha )}}{(b_{1})_{\kappa }^{(\alpha )}\cdots (b_{q})_{\kappa }^{(\alpha )}}}\cdot C_{\kappa }^{(\alpha )}(X),}
where κ ⊢ k means κ is a partition of k, (a_i)_κ^{(α)} is the generalized Pochhammer symbol, and C_κ^{(α)}(X) is the "C" normalization of the Jack function.
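For a 1 × 1 matrix the double sum collapses: the only partition of k that contributes is κ = (k), the Jack function C_{(k)}^{(α)}(x) is x^k, and the generalized Pochhammer symbol reduces to the ordinary rising factorial, so the series becomes the classical pFq. A minimal Python sketch of this scalar reduction (truncating the series at a fixed number of terms):

```python
import math

def pFq_scalar(a_params, b_params, x, terms=60):
    """Truncated series for pFq of a 1x1 matrix argument X = (x).
    For m = 1 the series reduces to the classical hypergeometric series,
    with (a)_k the ordinary rising factorial."""
    total = 0.0
    for k in range(terms):
        num = 1.0
        for a in a_params:
            num *= math.prod(a + j for j in range(k))  # rising factorial (a)_k
        den = 1.0
        for b in b_params:
            den *= math.prod(b + j for j in range(k))
        total += num / den * x**k / math.factorial(k)
    return total

# Classical identities: 0F0(x) = exp(x) and 1F0(a; x) = (1 - x)**(-a), |x| < 1
assert abs(pFq_scalar([], [], 0.3) - math.exp(0.3)) < 1e-12
assert abs(pFq_scalar([2.5], [], 0.3) - (1 - 0.3)**(-2.5)) < 1e-12
```

The helper `pFq_scalar` is only this m = 1 special case; evaluating the genuine matrix-argument function requires Jack polynomials (see Koev and Edelman, 2006, in the references).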
== Two matrix arguments ==
If X and Y are two m × m complex symmetric matrices, then the hypergeometric function of two matrix arguments is defined as:
{\displaystyle _{p}F_{q}^{(\alpha )}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};X,Y)=\sum _{k=0}^{\infty }\sum _{\kappa \vdash k}{\frac {1}{k!}}\cdot {\frac {(a_{1})_{\kappa }^{(\alpha )}\cdots (a_{p})_{\kappa }^{(\alpha )}}{(b_{1})_{\kappa }^{(\alpha )}\cdots (b_{q})_{\kappa }^{(\alpha )}}}\cdot {\frac {C_{\kappa }^{(\alpha )}(X)C_{\kappa }^{(\alpha )}(Y)}{C_{\kappa }^{(\alpha )}(I)}},}
where I is the identity matrix of size m.
== Not a typical function of a matrix argument ==
Unlike other functions of matrix argument, such as the matrix exponential, which are matrix-valued, the hypergeometric function of (one or two) matrix arguments is scalar-valued.
== The parameter α ==
In many publications the parameter α is omitted, and different publications implicitly assume different values of α. For example, in the theory of real random matrices (see, e.g., Muirhead, 1984), α = 2, whereas in other settings (e.g., in the complex case; see Gross and Richards, 1989), α = 1. To make matters worse, in random matrix theory researchers tend to prefer a parameter called β instead of the α used in combinatorics. The relation to remember is
α = 2/β.
Care should be exercised as to whether a particular text uses the parameter α or β, and which value of that parameter is intended. Typically, in settings involving real random matrices, α = 2 and thus β = 1; in settings involving complex random matrices, α = 1 and β = 2.
== References ==
K. I. Gross and D. St. P. Richards, "Total positivity, spherical series, and hypergeometric functions of matrix argument", J. Approx. Theory, 59, no. 2, 224–246, 1989.
J. Kaneko, "Selberg Integrals and hypergeometric functions associated with Jack polynomials", SIAM Journal on Mathematical Analysis, 24, no. 4, 1086-1110, 1993.
Plamen Koev and Alan Edelman, "The efficient evaluation of the hypergeometric function of a matrix argument", Mathematics of Computation, 75, no. 254, 833-846, 2006.
Robb Muirhead, Aspects of Multivariate Statistical Theory, John Wiley & Sons, Inc., New York, 1984.
== External links ==
Software for computing the hypergeometric function of a matrix argument by Plamen Koev.
In mathematics, the universality of zeta functions is the remarkable ability of the Riemann zeta function and other similar functions (such as the Dirichlet L-functions) to approximate arbitrary non-vanishing holomorphic functions arbitrarily well.
The universality of the Riemann zeta function was first proven by Sergei Mikhailovitch Voronin in 1975 and is sometimes known as Voronin's universality theorem.
== Formal statement ==
A mathematically precise statement of universality for the Riemann zeta function ζ(s) follows.
Let U be a compact subset of the strip
{ s ∈ ℂ : 1/2 < Re(s) < 1 }
such that the complement of U is connected. Let f : U → ℂ be a continuous function on U which is holomorphic on the interior of U and does not have any zeros in U. Then for any ε > 0 there exists a t ≥ 0 such that

|ζ(s + it) − f(s)| < ε  for all s ∈ U.  (1)
Even more: the lower density of the set of values t satisfying the above inequality is positive. Precisely,
{\displaystyle \ 0~<~\liminf _{T\to \infty }~{\frac {1}{\ T\ }}\ \lambda \!\left(\left\{\ t\in [0,T]\;:\;\max _{s\in U}{\Bigl |}\ \zeta (s+it)-f(s)\ {\Bigr |}<\varepsilon \ \right\}\right)\ ,}
where λ is the Lebesgue measure on the real numbers and lim inf is the limit inferior.
== Discussion ==
The condition that the complement of U be connected essentially means that U does not contain any holes.
The intuitive meaning of the first statement is as follows: it is possible to move U by some vertical displacement it so that the function f on U is approximated by the zeta function on the displaced copy of U, to an accuracy of ε.
The function f is not allowed to have any zeros on U. This is an important restriction; if we start with a holomorphic function with an isolated zero, then any "nearby" holomorphic function will also have a zero. According to the Riemann hypothesis, the Riemann zeta function does not have any zeros in the considered strip, and so it couldn't possibly approximate such a function. The function f(s) = 0 which is identically zero on U can be approximated by ζ: we can first pick the "nearby" function g(s) = ε/2 (which is holomorphic and does not have zeros) and find a vertical displacement such that ζ approximates g to accuracy ε/2, and therefore f to accuracy ε.
The accompanying figure shows the zeta function on a representative part of the relevant strip. The color of the point s encodes the value ζ(s) as follows: the hue represents the argument of ζ(s), with red denoting positive real values, and then counterclockwise through yellow, green, cyan, blue and purple. Strong colors denote values close to 0 (black = 0), weak colors denote values far away from 0 (white = ∞). The picture shows three zeros of the zeta function, at about 1/2 + 103.7i, 1/2 + 105.5i and 1/2 + 107.2i. Voronin's theorem essentially states that this strip contains all possible "analytic" color patterns that do not use black or white.
The rough meaning of the statement on the lower density is as follows: if a function f and an ε > 0 are given, then there is a positive probability that a randomly picked vertical displacement it will yield an approximation of f to accuracy ε.
The interior of U may be empty, in which case there is no requirement of f being holomorphic. For example, if we take U to be a line segment, then a continuous function f : U → C is a curve in the complex plane, and we see that the zeta function encodes every possible curve (i.e., any figure that can be drawn without lifting the pencil) to arbitrary precision on the considered strip.
The theorem as stated applies only to regions U that are contained in the strip. However, if we allow translations and scalings, we can also find encoded in the zeta functions approximate versions of all non-vanishing holomorphic functions defined on other regions. In particular, since the zeta function itself is holomorphic, versions of itself are encoded within it at different scales, the hallmark of a fractal.
The surprising nature of the theorem may be summarized in this way: the Riemann zeta function contains "all possible behaviors" within it, and is thus "chaotic" in a sense, yet it is a perfectly smooth analytic function with a straightforward definition.
=== Proof sketch ===
A sketch of the proof presented in (Voronin and Karatsuba, 1992) follows.
We consider only the case where U is a disk centered at 3/4:
{\displaystyle U=\{s\in \mathbb {C} :|s-3/4|<r\}\quad {\mbox{with}}\quad 0<r<1/4}
and we will argue that every non-zero holomorphic function defined on U can be approximated by the ζ-function on a vertical translation of this set.
Passing to the logarithm, it is enough to show that for every holomorphic function g : U → C and every ε > 0 there exists a real number t such that
{\displaystyle \left|\ln \zeta (s+it)-g(s)\right|<\varepsilon \quad {\text{for all}}\quad s\in U.}
We will first approximate g(s) with the logarithm of certain finite products reminiscent of the Euler product for the ζ-function:
{\displaystyle \zeta (s)=\prod _{p\in \mathbb {P} }\left(1-{\frac {1}{p^{s}}}\right)^{-1},}
where ℙ denotes the set of all primes.
If θ = (θ_p)_{p∈ℙ} is a sequence of real numbers, one for each prime p, and M is a finite set of primes, we set
{\displaystyle \zeta _{M}(s,\theta )=\prod _{p\in M}\left(1-{\frac {e^{-2\pi i\theta _{p}}}{p^{s}}}\right)^{-1}.}
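For θ = 0 and Re(s) > 1, letting M grow to include all primes recovers the Euler product for ζ(s) itself. A small Python sketch (names are illustrative) checking the truncated product against the known value ζ(2) = π²/6:

```python
import cmath
import math

def primes_up_to(n):
    """Primes <= n by the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def zeta_M(s, primes, thetas=None):
    """Finite Euler product zeta_M(s, theta) over the primes in M.
    With theta = 0 this truncates the Euler product for zeta(s)."""
    value = 1.0 + 0.0j
    for idx, p in enumerate(primes):
        theta = thetas[idx] if thetas else 0.0
        value *= 1.0 / (1.0 - cmath.exp(-2j * math.pi * theta) / p**s)
    return value

# For Re(s) > 1 the product converges to zeta(s); at s = 2, zeta(2) = pi**2/6.
approx = zeta_M(2, primes_up_to(10000))
assert abs(approx - math.pi**2 / 6) < 1e-4
```

Inside the critical strip 1/2 < Re(s) < 1 the Euler product no longer converges, which is why the proof works with these finite products and rearrangement arguments instead.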
We consider the specific sequence
{\displaystyle {\hat {\theta }}=\left({\frac {1}{4}},{\frac {2}{4}},{\frac {3}{4}},{\frac {4}{4}},{\frac {5}{4}},\ldots \right)}
and claim that g(s) can be approximated by a function of the form ln(ζ_M(s, θ̂)) for a suitable set M of primes. The proof of this claim uses the Bergman space H of holomorphic functions defined on U (mistakenly called a Hardy space in Voronin and Karatsuba, 1992), which is a Hilbert space. We set
{\displaystyle u_{k}(s)=\ln \left(1-{\frac {e^{-\pi ik/2}}{p_{k}^{s}}}\right)}
where pk denotes the k-th prime number. It can then be shown that the series
{\displaystyle \sum _{k=1}^{\infty }u_{k}}
is conditionally convergent in H, i.e. for every element v of H there exists a rearrangement of the series
which converges in H to v. This argument uses a theorem that generalizes the Riemann series theorem to a Hilbert space setting. Because of a relationship between the norm in H and the maximum absolute value of a function, we can then approximate our given function g(s) with an initial segment of this rearranged series, as required.
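The rearrangement phenomenon invoked here can be illustrated in the classical scalar setting of the Riemann series theorem: a conditionally convergent series can be rearranged to converge to a different value. A Python sketch with the alternating harmonic series, taking one positive term followed by two negative terms, a pattern which converges to ln(2)/2 rather than ln 2:

```python
import math

def rearranged_alternating_harmonic(blocks):
    """Rearrange 1 - 1/2 + 1/3 - 1/4 + ... taking one positive term, then
    two negative terms, per block. By the classical formula for such
    rearrangements (p positives, q negatives per block the sum is
    ln 2 + (1/2) ln(p/q)), this converges to ln(2)/2."""
    total, pos, neg = 0.0, 1, 2
    for _ in range(blocks):
        total += 1.0 / pos                      # one odd-denominator term
        pos += 2
        total -= 1.0 / neg + 1.0 / (neg + 2)    # two even-denominator terms
        neg += 4
    return total

approx = rearranged_alternating_harmonic(200_000)
assert abs(approx - math.log(2) / 2) < 1e-5
```

The Hilbert-space version used in the proof plays the same game with the vectors u_k, steering the rearranged partial sums toward any prescribed target v.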
By a version of the Kronecker theorem, applied to the real numbers
{\displaystyle {\frac {\ln 2}{2\pi }},{\frac {\ln 3}{2\pi }},{\frac {\ln 5}{2\pi }},\ldots ,{\frac {\ln p_{N}}{2\pi }}}
(which are linearly independent over the rationals),
we can find real values of t so that
ln(ζ_M(s, θ̂)) is approximated by ln(ζ_M(s + it, 0)). Further, for some of these values t, ln(ζ_M(s + it, 0)) approximates ln(ζ(s + it)), finishing the proof.
The theorem is stated without proof in § 11.11 of (Titchmarsh and Heath-Brown, 1986),
the second edition of a 1951 monograph by Titchmarsh; and a weaker result is given in Thm. 11.9. Although Voronin's theorem is not proved there, two corollaries are derived from it:
Let 1/2 < σ < 1 be fixed. Then the curve
{\displaystyle \gamma (t)=(\zeta (\sigma +it),\zeta '(\sigma +it),\dots ,\zeta ^{(n-1)}(\sigma +it))}
is dense in ℂⁿ.
Let Φ be any continuous function, and let h₁, h₂, …, h_n be real constants. Then ζ(s) cannot satisfy the differential-difference equation
{\displaystyle \Phi \{\zeta (s+h_{1}),\zeta '(s+h_{1}),\dots ,\zeta ^{(n_{1})}(s+h_{1}),\zeta (s+h_{2}),\zeta '(s+h_{2}),\dots ,\zeta ^{(n_{2})}(s+h_{2}),\dots \}=0}
unless Φ vanishes identically.
== Effective universality ==
Some recent work has focused on effective universality.
Under the conditions stated at the beginning of this article, there exist values of t that satisfy inequality (1).
An effective universality theorem places an upper bound on the smallest such t.
For example, in 2003, Garunkštis proved that if f(s) is analytic in |s| ≤ 0.05 with max_{|s| ≤ 0.05} |f(s)| ≤ 1, then for any ε with 0 < ε < 1/2, there exists a number t with 0 ≤ t ≤ exp(exp(10/ε¹³)) such that
{\displaystyle \max _{\left|s\right|\leq .0001}\left|\log \zeta (s+{\frac {3}{4}}+it)-f(s)\right|<\epsilon .}
For example, if ε = 1/10, then the bound for t is t ≤ exp(exp(10/ε¹³)) = exp(exp(10¹⁴)).
Bounds can also be obtained on the measure of these t values, in terms of ε:
{\displaystyle \liminf _{T\to \infty }{\frac {1}{T}}\,\lambda \!\left(\left\{t\in [0,T]:\max _{\left|s\right|\leq .0001}\left|\log \zeta (s+{\frac {3}{4}}+it)-f(s)\right|<\epsilon \right\}\right)\geq {\frac {1}{\exp({\epsilon ^{-13}})}}.}
For example, if ε = 1/10, then the right-hand side is 1/exp(10¹³).
== Universality of other zeta functions ==
Work has been done showing that universality extends to Selberg zeta functions.
The Dirichlet L-functions show not only universality, but a certain kind of joint universality that allows any set of functions to be approximated by the same value(s) of t in different L-functions, where each function to be approximated is paired with a different L-function.
A similar universality property has been shown for the Lerch zeta function L(λ, α, s), at least when the parameter α is a transcendental number.
Sections of the Lerch zeta function have also been shown to have a form of joint universality.
== References ==
== Further reading ==
Karatsuba, Anatoly A.; Voronin, S. M. (2011). The Riemann Zeta-Function. de Gruyter Expositions In Mathematics. Berlin: de Gruyter. ISBN 978-3110131703.
Laurinčikas, Antanas (1996). Limit Theorems for the Riemann Zeta-Function. Mathematics and Its Applications. Vol. 352. Berlin: Springer. doi:10.1007/978-94-017-2091-5. ISBN 978-90-481-4647-5.
Steuding, Jörn (2007). Value-Distribution of L-Functions. Lecture Notes in Mathematics. Vol. 1877. Berlin: Springer. p. 19. arXiv:1711.06671. doi:10.1007/978-3-540-44822-8. ISBN 978-3-540-26526-9.
Titchmarsh, Edward Charles; Heath-Brown, David Rodney ("Roger") (1986). The Theory of the Riemann Zeta-function (2nd ed.). Oxford: Oxford U. P. ISBN 0-19-853369-1.
== External links ==
Voronin's Universality Theorem, by Matthew R. Watkins
X-Ray of the Zeta Function: visually oriented investigation of where zeta is real or purely imaginary. Gives some indication of how complicated it is in the critical strip.
In mathematics, the local zeta function Z(V, s) (sometimes called the congruent zeta function or the Hasse–Weil zeta function) is defined as
{\displaystyle Z(V,s)=\exp \left(\sum _{k=1}^{\infty }{\frac {N_{k}}{k}}(q^{-s})^{k}\right)}
where V is a non-singular n-dimensional projective algebraic variety over the field Fq with q elements and Nk is the number of points of V defined over the finite field extension Fqk of Fq.
Making the variable transformation t = q^{−s} gives
{\displaystyle {\mathit {Z}}(V,t)=\exp \left(\sum _{k=1}^{\infty }N_{k}{\frac {t^{k}}{k}}\right)}
as a formal power series in the variable t.
Equivalently, the local zeta function is sometimes defined as follows:
{\displaystyle (1)\ \ {\mathit {Z}}(V,0)=1\,}
{\displaystyle (2)\ \ {\frac {d}{dt}}\log {\mathit {Z}}(V,t)=\sum _{k=1}^{\infty }N_{k}t^{k-1}\ .}
In other words, the local zeta function Z(V, t) with coefficients in the finite field Fq is defined as a function whose logarithmic derivative generates the number Nk of solutions of the equation defining V in the degree k extension Fqk.
== Formulation ==
Given a finite field F, there is, up to isomorphism, only one field F_k with [F_k : F] = k, for k = 1, 2, …. When F is the unique field with q elements, F_k is the unique field with q^k elements. Given a set of polynomial equations (or an algebraic variety V) defined over F, we can count the number N_k of solutions in F_k and create the generating function
{\displaystyle G(t)=N_{1}t+N_{2}t^{2}/2+N_{3}t^{3}/3+\cdots \,}
The correct definition for Z(t) is to set log Z equal to G, so
{\displaystyle Z=\exp(G(t))\,}
and Z(0) = 1, since G(0) = 0; Z(t) is a priori a formal power series.
The logarithmic derivative Z′(t)/Z(t) equals the generating function
{\displaystyle G'(t)=N_{1}+N_{2}t^{1}+N_{3}t^{2}+\cdots \,}
== Examples ==
For example, assume all the Nk are 1; this happens for example if we start with an equation like X = 0, so that geometrically we are taking V to be a point. Then
{\displaystyle G(t)=-\log(1-t)}
is the expansion of a logarithm (for |t| < 1). In this case we have
{\displaystyle Z(t)={\frac {1}{(1-t)}}\ .}
To take something more interesting, let V be the projective line over F. If F has q elements, then this has q + 1 points, including the one point at infinity. Therefore, we have
{\displaystyle N_{k}=q^{k}+1}
and
{\displaystyle G(t)=-\log(1-t)-\log(1-qt)}
for |t| small enough, and therefore
{\displaystyle Z(t)={\frac {1}{(1-t)(1-qt)}}\ .}
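The passage from point counts to the rational function can be checked with formal power series over the rationals. A Python sketch (illustrative, using the recurrence n·z_n = Σ_{k=1}^{n} N_k z_{n−k} obtained from Z′ = G′Z) confirming that N_k = q^k + 1 yields Z(t) = 1/((1−t)(1−qt)):

```python
from fractions import Fraction

def zeta_series(N, terms):
    """Coefficients of Z(t) = exp(sum_k N[k] t^k / k) as a formal power
    series, via Z' = G' * Z, i.e. n * z_n = sum_{k=1}^{n} N_k * z_{n-k}."""
    z = [Fraction(1)]  # Z(0) = 1
    for n in range(1, terms):
        z.append(sum(Fraction(N[k]) * z[n - k] for k in range(1, n + 1))
                 / n)
    return z

q = 5
N = {k: q**k + 1 for k in range(1, 11)}  # point counts of P^1 over F_{q^k}
coeffs = zeta_series(N, 10)

# 1/((1-t)(1-qt)) = sum_n (q**(n+1) - 1)/(q - 1) * t**n
for n, c in enumerate(coeffs):
    assert c == Fraction(q**(n + 1) - 1, q - 1)
```

Working with `Fraction` keeps the check exact; the same recurrence applied to point counts of a curve would reveal the numerator polynomial P(t) discussed below.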
The first study of these functions was in the 1923 dissertation of Emil Artin. He obtained results for the case of a hyperelliptic curve, and conjectured the further main points of the theory as applied to curves. The theory was then developed by F. K. Schmidt and Helmut Hasse. The earliest known nontrivial cases of local zeta functions were implicit in Carl Friedrich Gauss's Disquisitiones Arithmeticae, article 358. There, certain particular examples of elliptic curves over finite fields having complex multiplication have their points counted by means of cyclotomy.
== Motivations ==
The relationship between the definitions of G and Z can be explained in a number of ways. (See for example the infinite product formula for Z below.) In practice it makes Z a rational function of t, something that is interesting even in the case of V an elliptic curve over a finite field.
The local zeta functions Z are multiplied to get global zeta functions ζ,
{\displaystyle \zeta =\prod Z}
These generally involve different finite fields (for example the whole family of fields Z/pZ as p runs over all prime numbers).
In these fields, the variable t is substituted by p−s, where s is the complex variable traditionally used in Dirichlet series. (For details see Hasse–Weil zeta function.)
The global products of Z in the two cases used as examples in the previous section therefore come out as ζ(s) and ζ(s)ζ(s−1), after letting q = p.
== Riemann hypothesis for curves over finite fields ==
For projective curves C over F that are non-singular, it can be shown that
{\displaystyle Z(t)={\frac {P(t)}{(1-t)(1-qt)}}\ ,}
with P(t) a polynomial of degree 2g, where g is the genus of C. Rewriting
{\displaystyle P(t)=\prod _{i=1}^{2g}(1-\omega _{i}t)\ ,}
the Riemann hypothesis for curves over finite fields states
{\displaystyle |\omega _{i}|=q^{1/2}\ .}
For example, for the elliptic curve case there are two roots, and it is easy to show the absolute values of the roots are q1/2. Hasse's theorem is that they have the same absolute value; and this has immediate consequences for the number of points.
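Hasse's bound is easy to observe by brute-force point counting over small prime fields. A Python sketch (the curve y² = x³ + x + 1 is an arbitrary choice, non-singular modulo the primes tested):

```python
import math

def count_points(a, b, p):
    """Count points on y^2 = x^3 + a*x + b over F_p (p an odd prime,
    curve assumed non-singular), including the point at infinity."""
    square_counts = {}
    for y in range(p):
        r = y * y % p
        square_counts[r] = square_counts.get(r, 0) + 1
    total = 1  # the point at infinity
    for x in range(p):
        total += square_counts.get((x**3 + a * x + b) % p, 0)
    return total

# Hasse's theorem: |N - (p + 1)| <= 2*sqrt(p), equivalently the two
# roots omega_i of P(t) both have absolute value p**0.5.
for p in (7, 13, 101):
    N = count_points(1, 1, p)  # y^2 = x^3 + x + 1
    assert abs(N - (p + 1)) <= 2 * math.sqrt(p)
```

The quantity N − (p + 1) is minus the trace of Frobenius, i.e. −(ω₁ + ω₂) in the notation above.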
André Weil proved this for the general case, around 1940 (Comptes Rendus note, April 1940): he spent much time in the years after that writing up the algebraic geometry involved. This led him to the general Weil conjectures. Alexander Grothendieck developed scheme theory for the purpose of resolving these.
A generation later Pierre Deligne completed the proof.
(See étale cohomology for the basic formulae of the general theory.)
== General formulas for the zeta function ==
It is a consequence of the Lefschetz trace formula for the Frobenius morphism that
{\displaystyle Z(X,t)=\prod _{i=0}^{2\dim X}\det {\big (}1-t{\mbox{Frob}}_{q}|H_{c}^{i}({\overline {X}},{\mathbb {Q} }_{\ell }){\big )}^{(-1)^{i+1}}.}
Here X is a separated scheme of finite type over the finite field F with q elements, and Frob_q is the geometric Frobenius acting on the ℓ-adic étale cohomology with compact supports of X̄, the lift of X to the algebraic closure of the field F. This shows that the zeta function is a rational function of t.
An infinite product formula for Z(X, t) is
{\displaystyle Z(X,t)=\prod \ (1-t^{\deg(x)})^{-1}.}
Here, the product ranges over all closed points x of X and deg(x) is the degree of x.
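The identity behind this product is Σ_{d|k} d·a_d = N_k, where a_d is the number of closed points of degree d; taking the logarithmic derivative of the product recovers the generating function of the N_k. A Python sketch inverting this identity for the projective line over F₃ (the specific counts asserted at the end are standard facts about irreducible polynomials over F₃):

```python
def closed_point_counts(N, max_d):
    """Recover the number a_d of closed points of degree d from the point
    counts N_k over F_{q^k}, using the identity sum_{d | k} d*a_d = N_k."""
    a = {}
    for d in range(1, max_d + 1):
        lower = sum(e * a[e] for e in range(1, d) if d % e == 0)
        assert (N[d] - lower) % d == 0  # integrality check on the counts
        a[d] = (N[d] - lower) // d
    return a

q = 3
N = {k: q**k + 1 for k in range(1, 9)}  # the projective line over F_3
a = closed_point_counts(N, 8)

# Degree-1 closed points are the q + 1 rational points; for d >= 2 the
# closed points of P^1 correspond to monic irreducible polynomials of
# degree d over F_q: 3 quadratics and 8 cubics over F_3.
assert a[1] == 4 and a[2] == 3 and a[3] == 8
```

The fact that the a_d come out as nonnegative integers for every d is itself a nontrivial consistency condition on a sequence of candidate point counts.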
The local zeta function Z(X, t) is viewed as a function of the complex variable s via the change of variables t = q^{−s}.
In the case where X is the variety V discussed above, the closed points are the equivalence classes x = [P] of points P on V̄, where two points are equivalent if they are conjugate over F. The degree of x is the degree of the field extension of F generated by the coordinates of P. The logarithmic derivative of the infinite product Z(X, t) is easily seen to be the generating function discussed above, namely
{\displaystyle N_{1}+N_{2}t^{1}+N_{3}t^{2}+\cdots \,}
== See also ==
List of zeta functions
Weil conjectures
Elliptic curve
== References ==
In mathematics, the study of special values of L-functions is a subfield of number theory devoted to generalising formulae such as the Leibniz formula for π, namely
{\displaystyle 1\,-\,{\frac {1}{3}}\,+\,{\frac {1}{5}}\,-\,{\frac {1}{7}}\,+\,{\frac {1}{9}}\,-\,\cdots \;=\;{\frac {\pi }{4}},\!}
by the recognition that the expression on the left-hand side is also L(1), where L(s) is the Dirichlet L-function for the field of Gaussian rational numbers. This formula is a special case of the analytic class number formula, and in those terms reads that the Gaussian field has class number 1. The factor 1/4 on the right-hand side of the formula corresponds to the fact that this field contains four roots of unity.
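The Leibniz series converges slowly but verifiably. A quick Python check of the partial sums, using the alternating-series error bound:

```python
import math

def leibniz_partial(n_terms):
    """Partial sums of L(1) for the Dirichlet character mod 4:
    1 - 1/3 + 1/5 - 1/7 + ... -> pi/4."""
    return sum((-1)**k / (2 * k + 1) for k in range(n_terms))

# For an alternating series with decreasing terms, the error is smaller
# than the first omitted term, here 1/(2*n + 1).
n = 10_000
assert abs(leibniz_partial(n) - math.pi / 4) < 1 / (2 * n + 1)
```

The same kind of numeric check applies to any special-value formula of this shape, though the deeper content of the conjectures below is the algebraic structure of the value, not its decimal expansion.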
== Conjectures ==
There are two families of conjectures, formulated for general classes of L-functions (the very general setting being for L-functions associated to Chow motives over number fields), the division into two reflecting the questions of:
how to replace π in the Leibniz formula by some other "transcendental" number (regardless of whether it is currently possible for transcendental number theory to provide a proof of the transcendence); and
how to generalise the rational factor in the formula (class number divided by number of roots of unity) by some algebraic construction of a rational number that will represent the ratio of the L-function value to the "transcendental" factor.
Subsidiary explanations are given for the integer values of n for which formulae of this sort involving L(n) can be expected to hold.
The conjectures for (a) are called Beilinson's conjectures, for Alexander Beilinson. The idea is to abstract from the regulator of a number field to some "higher regulator" (the Beilinson regulator), a determinant constructed on a real vector space that comes from algebraic K-theory.
The conjectures for (b) are called the Bloch–Kato conjectures for special values (for Spencer Bloch and Kazuya Kato; this circle of ideas is distinct from the Bloch–Kato conjecture of K-theory, extending the Milnor conjecture, a proof of which was announced in 2009). They are also called the Tamagawa number conjecture, a name arising via the Birch–Swinnerton-Dyer conjecture and its formulation as an elliptic curve analogue of the Tamagawa number problem for linear algebraic groups. In a further extension, the equivariant Tamagawa number conjecture (ETNC) has been formulated, to consolidate the connection of these ideas with Iwasawa theory, and its so-called Main Conjecture.
=== Current status ===
All of these conjectures are known to be true only in special cases.
== See also ==
Brumer–Stark conjecture
== Notes ==
== References ==
Kings, Guido (2003), "The Bloch–Kato conjecture on special values of L-functions. A survey of known results", Journal de théorie des nombres de Bordeaux, 15 (1): 179–198, doi:10.5802/jtnb.396, ISSN 1246-7405, MR 2019010
"Beilinson conjectures", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"K-functor in algebraic geometry", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Mathar, Richard J. (2010), "Table of Dirichlet L-Series and Prime Zeta Modulo Functions for small moduli", arXiv:1008.2547 [math.NT]
== External links ==
L-Funktionen und die Vermutungen von Deligne und Beilinson (L-functions and the conjectures of Deligne and Beilinson)
In mathematics, an Artin L-function is a type of Dirichlet series associated to a linear representation ρ of a Galois group G. These functions were introduced in 1923 by Emil Artin, in connection with his research into class field theory. Their fundamental properties, in particular the Artin conjecture described below, have turned out to be resistant to easy proof. One of the aims of proposed non-abelian class field theory is to incorporate the complex-analytic nature of Artin L-functions into a larger framework, such as is provided by automorphic forms and the Langlands program. So far, only a small part of such a theory has been put on a firm basis.
== Definition ==
Given a representation ρ of G on a finite-dimensional complex vector space V, where G is the Galois group of the finite extension L/K of number fields, the Artin L-function L(ρ, s) is defined by an Euler product. For each prime ideal 𝔭 in the ring of integers of K, there is an Euler factor, which is easiest to define in the case where 𝔭 is unramified in L (true for almost all 𝔭). In that case, the Frobenius element Frob(𝔭) is defined as a conjugacy class in G. Therefore, the characteristic polynomial of ρ(Frob(𝔭)) is well-defined. The Euler factor for 𝔭 is a slight modification of the characteristic polynomial, equally well-defined,
{\displaystyle \operatorname {charpoly} (\rho (\mathbf {Frob} ({\mathfrak {p}})))^{-1}=\operatorname {det} \left[I-t\rho (\mathbf {Frob} ({\mathfrak {p}}))\right]^{-1},}
as a rational function in t, evaluated at t = N(𝔭)^(−s), with s a complex variable in the usual Riemann zeta function notation. (Here N is the field norm of an ideal.)
When 𝔭 is ramified, and I is the inertia group, which is a subgroup of G, a similar construction is applied, but to the subspace of V fixed (pointwise) by I.
The Artin L-function L(ρ, s) is then the infinite product over all prime ideals 𝔭 of these factors. As Artin reciprocity shows, when G is an abelian group these L-functions have a second description (as Dirichlet L-functions when K is the rational number field, and as Hecke L-functions in general). Novelty comes in with non-abelian G and their representations.
One application is to give factorisations of Dedekind zeta-functions, for example in the case of a number field that is Galois over the rational numbers. In accordance with the decomposition of the regular representation into irreducible representations, such a zeta-function splits into a product of Artin L-functions, for each irreducible representation of G. For example, the simplest case is when G is the symmetric group on three letters. Since G has an irreducible representation of degree 2, an Artin L-function for such a representation occurs, squared, in the factorisation of the Dedekind zeta-function for such a number field, in a product with the Riemann zeta-function (for the trivial representation) and an L-function of Dirichlet's type for the signature representation.
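The factorisation can be checked numerically in the simplest abelian case. The sketch below (an illustration added here, not from the source; the helper names are my own) compares the Dedekind zeta function of ℚ(i), computed by summing N(𝔞)^(−s) over nonzero Gaussian integers up to units, with the product ζ(s)·L(s, χ₄), where χ₄ is the nontrivial Dirichlet character mod 4:

```python
import math

def dedekind_zeta_gaussian(s, M=120):
    # zeta_{Q(i)}(s) = (1/4) * sum over nonzero Gaussian integers a+bi
    # of (a^2+b^2)^(-s); the 1/4 accounts for the four units {1, -1, i, -i}.
    total = 0.0
    for a in range(-M, M + 1):
        for b in range(-M, M + 1):
            if a or b:
                total += (a * a + b * b) ** (-s)
    return total / 4

def riemann_zeta(s, N=100_000):
    return sum(n ** (-s) for n in range(1, N + 1))

def dirichlet_L_mod4(s, N=100_000):
    # L(s, chi_4) = 1 - 3^-s + 5^-s - 7^-s + ...
    return sum((-1) ** k * (2 * k + 1) ** (-s) for k in range(N))

s = 3
lhs = dedekind_zeta_gaussian(s)
rhs = riemann_zeta(s) * dirichlet_L_mod4(s)
print(abs(lhs - rhs))  # small (truncation error only)
```

The two sides agree up to the truncation of the sums, reflecting ζ_{ℚ(i)}(s) = ζ(s)·L(s, χ₄).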
More precisely for L/K a Galois extension of degree n, the factorization
{\displaystyle \zeta _{L}(s)=L(s,\rho _{\text{regular}})=\prod _{\rho {\text{ Irr rep }}{\text{Gal}}(L/K)}L(\rho ,s)^{\deg(\rho )}}
follows from
{\displaystyle L(\rho ,s)=\prod _{{\mathfrak {p}}\in K}{\frac {1}{\det \left[I-N({\mathfrak {p}})^{-s}\rho (\mathbf {Frob} _{\mathfrak {p}}){|V_{{\mathfrak {p}},\rho }}\right]}}}
{\displaystyle -\log \det \left[I-N({\mathfrak {p}})^{-s}\rho \left(\mathbf {Frob} _{\mathfrak {p}}\right)\right]=\sum _{m=1}^{\infty }{\frac {{\text{tr}}(\rho (\mathbf {Frob} _{\mathfrak {p}})^{m})}{m}}N({\mathfrak {p}})^{-sm}}
{\displaystyle \sum _{\rho {\text{ Irr}}}\deg(\rho ){\text{tr}}(\rho (\sigma ))={\begin{cases}n&\sigma =1\\0&\sigma \neq 1\end{cases}}}
{\displaystyle -\sum _{\rho {\text{ Irr}}}\deg(\rho )\log \det \left[I-N({\mathfrak {p}})^{-s}\rho \left(\mathbf {Frob} _{\mathfrak {p}}\right)\right]=n\sum _{m=1}^{\infty }{\frac {N({\mathfrak {p}})^{-sfm}}{fm}}=-\log \left[\left(1-N({\mathfrak {p}})^{-sf}\right)^{\frac {n}{f}}\right]}
where deg(ρ) is the multiplicity of the irreducible representation in the regular representation, f is the order of Frob(𝔭) and n is replaced by n/e at the ramified primes.
Since characters are an orthonormal basis of the class functions, after showing some analytic properties of the L(ρ, s) we obtain the Chebotarev density theorem as a generalization of Dirichlet's theorem on arithmetic progressions.
== Functional equation ==
Artin L-functions satisfy a functional equation. The function L(ρ, s) is related in its values to L(ρ*, 1 − s), where ρ* denotes the complex conjugate representation. More precisely L is replaced by Λ(ρ, s), which is L multiplied by certain gamma factors, and then there is an equation of meromorphic functions
{\displaystyle \Lambda (\rho ,s)=W(\rho )\Lambda (\rho ^{*},1-s)},
with a certain complex number W(ρ) of absolute value 1. It is the Artin root number. It has been studied deeply with respect to two types of properties. Firstly Robert Langlands and Pierre Deligne established a factorisation into Langlands–Deligne local constants; this is significant in relation to conjectural relationships to automorphic representations. Also the case of ρ and ρ* being equivalent representations is exactly the one in which the functional equation has the same L-function on each side. It is, algebraically speaking, the case when ρ is a real representation or quaternionic representation. The Artin root number is, then, either +1 or −1. The question of which sign occurs is linked to Galois module theory.
== The Artin conjecture ==
The Artin conjecture on Artin L-functions (also known as Artin's holomorphy conjecture) states that the Artin L-function L(ρ, s) of a non-trivial irreducible representation ρ is analytic in the whole complex plane.
This is known for one-dimensional representations, the L-functions being then associated to Hecke characters — and in particular for Dirichlet L-functions. More generally Artin showed that the Artin conjecture is true for all representations induced from 1-dimensional representations. If the Galois group is supersolvable or more generally monomial, then all representations are of this form so the Artin conjecture holds.
André Weil proved the Artin conjecture in the case of function fields.
Two-dimensional representations are classified by the nature of the image subgroup: it may be cyclic, dihedral, tetrahedral, octahedral, or icosahedral. The Artin conjecture for the cyclic or dihedral case follows easily from Erich Hecke's work. Langlands used the base change lifting to prove the tetrahedral case, and Jerrold Tunnell extended his work to cover the octahedral case; Andrew Wiles used these cases in his proof of the Modularity conjecture. Richard Taylor and others have made some progress on the (non-solvable) icosahedral case; this is an active area of research. The Artin conjecture for odd, irreducible, two-dimensional representations follows from the proof of Serre's modularity conjecture, regardless of projective image subgroup.
Brauer's theorem on induced characters implies that all Artin L-functions are products of positive and negative integral powers of Hecke L-functions, and are therefore meromorphic in the whole complex plane.
Langlands (1970) pointed out that the Artin conjecture follows from strong enough results from the Langlands philosophy, relating to the L-functions associated to automorphic representations for GL(n) for all n ≥ 1. More precisely, the Langlands conjectures associate an automorphic representation of the adelic group GLn(AQ) to every n-dimensional irreducible representation of the Galois group, which is a cuspidal representation if the Galois representation is irreducible, such that the Artin L-function of the Galois representation is the same as the automorphic L-function of the automorphic representation. The Artin conjecture then follows immediately from the known fact that the L-functions of cuspidal automorphic representations are holomorphic. This was one of the major motivations for Langlands' work.
== The Dedekind conjecture ==
A weaker conjecture (sometimes known as the Dedekind conjecture) states that if M/K is an extension of number fields, then the quotient s ↦ ζ_M(s)/ζ_K(s) of their Dedekind zeta functions is entire.
The Aramata–Brauer theorem states that the conjecture holds if M/K is Galois.
More generally, let N be the Galois closure of M over K, and G the Galois group of N/K. The quotient s ↦ ζ_M(s)/ζ_K(s) is equal to the Artin L-function associated to the natural representation of G on the complex vector space spanned by the K-embeddings of M into N. Thus the Artin conjecture implies the Dedekind conjecture.
The conjecture was proven when G is a solvable group, independently by Koji Uchida and R. W. van der Waall in 1975.
== See also ==
Equivariant L-function
== Notes ==
== References ==
== Bibliography == | Wikipedia/Artin_conjecture_(L-functions) |
In mathematics, the polygamma function of order m is a meromorphic function on the complex numbers ℂ defined as the (m + 1)th derivative of the logarithm of the gamma function:
{\displaystyle \psi ^{(m)}(z):={\frac {\mathrm {d} ^{m}}{\mathrm {d} z^{m}}}\psi (z)={\frac {\mathrm {d} ^{m+1}}{\mathrm {d} z^{m+1}}}\ln \Gamma (z).}
Thus
{\displaystyle \psi ^{(0)}(z)=\psi (z)={\frac {\Gamma '(z)}{\Gamma (z)}}}
holds, where ψ(z) is the digamma function and Γ(z) is the gamma function. The polygamma functions are holomorphic on ℂ ∖ ℤ≤0. At all the nonpositive integers these polygamma functions have a pole of order m + 1. The function ψ^(1)(z) is sometimes called the trigamma function.
== Integral representation ==
When m > 0 and Re z > 0, the polygamma function equals
{\displaystyle {\begin{aligned}\psi ^{(m)}(z)&=(-1)^{m+1}\int _{0}^{\infty }{\frac {t^{m}e^{-zt}}{1-e^{-t}}}\,\mathrm {d} t\\&=-\int _{0}^{1}{\frac {t^{z-1}}{1-t}}(\ln t)^{m}\,\mathrm {d} t\\&=(-1)^{m+1}m!\zeta (m+1,z)\end{aligned}}}
where ζ(s, q) is the Hurwitz zeta function.
This expresses the polygamma function as the Laplace transform of (−1)^(m+1) t^m/(1 − e^(−t)). It follows from Bernstein's theorem on monotone functions that, for m > 0 and x real and non-negative, (−1)^(m+1) ψ^(m)(x) is a completely monotone function.
Setting m = 0 in the above formula does not give an integral representation of the digamma function. The digamma function has an integral representation, due to Gauss, which is similar to the m = 0 case above but which has an extra term e^(−t)/t.
== Recurrence relation ==
It satisfies the recurrence relation
{\displaystyle \psi ^{(m)}(z+1)=\psi ^{(m)}(z)+{\frac {(-1)^{m}\,m!}{z^{m+1}}}}
which – considered for positive integer argument – leads to a presentation of the sum of reciprocals of the powers of the natural numbers:
{\displaystyle {\frac {\psi ^{(m)}(n)}{(-1)^{m+1}\,m!}}=\zeta (1+m)-\sum _{k=1}^{n-1}{\frac {1}{k^{m+1}}}=\sum _{k=n}^{\infty }{\frac {1}{k^{m+1}}}\qquad m\geq 1}
and
{\displaystyle \psi ^{(0)}(n)=-\gamma +\sum _{k=1}^{n-1}{\frac {1}{k}}}
for all n ∈ ℕ, where γ is the Euler–Mascheroni constant. Like the log-gamma function, the polygamma functions can be generalized from the domain ℕ uniquely to positive real numbers only due to their recurrence relation and one given function-value, say ψ^(m)(1), except in the case m = 0 where the additional condition of strict monotonicity on ℝ⁺ is still needed. This is a trivial consequence of the Bohr–Mollerup theorem for the gamma function where strictly logarithmic convexity on ℝ⁺ is demanded additionally. The case m = 0 must be treated differently because ψ^(0) is not normalizable at infinity (the sum of the reciprocals doesn't converge).
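As a numerical sanity check (an illustration added here, not part of the article), the recurrence can be verified against the series representation ψ^(m)(z) = (−1)^(m+1) m! Σ_{k≥0} 1/(z+k)^(m+1) given later in the article:

```python
import math

def polygamma(m, z, terms=100_000):
    # psi^(m)(z) for m >= 1, via the (truncated) series
    # (-1)^(m+1) * m! * sum_{k>=0} 1/(z+k)^(m+1)
    s = sum(1.0 / (z + k) ** (m + 1) for k in range(terms))
    return (-1) ** (m + 1) * math.factorial(m) * s

m, z = 2, 1.5
lhs = polygamma(m, z + 1)
rhs = polygamma(m, z) + (-1) ** m * math.factorial(m) / z ** (m + 1)
print(abs(lhs - rhs))  # agrees to near machine precision
```

The truncated sums on both sides differ only by one tail term, so the agreement is essentially exact.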
== Reflection relation ==
{\displaystyle (-1)^{m}\psi ^{(m)}(1-z)-\psi ^{(m)}(z)=\pi {\frac {\mathrm {d} ^{m}}{\mathrm {d} z^{m}}}\cot {\pi z}=\pi ^{m+1}{\frac {P_{m}(\cos {\pi z})}{\sin ^{m+1}(\pi z)}}}
where Pm is alternately an odd or even polynomial of degree |m − 1| with integer coefficients and leading coefficient (−1)^m ⌈2^(m−1)⌉. They obey the recursion equation
{\displaystyle {\begin{aligned}P_{0}(x)&=x\\P_{m+1}(x)&=-\left((m+1)xP_{m}(x)+\left(1-x^{2}\right)P'_{m}(x)\right).\end{aligned}}}
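For m = 1 the reflection relation reads ψ^(1)(z) + ψ^(1)(1 − z) = π²/sin²(πz), which is easy to spot-check numerically (a sketch added here; the series-based trigamma with an integral tail estimate is my own choice, not the article's method):

```python
import math

def trigamma(z, terms=100_000):
    # psi^(1)(z) = sum_{k>=0} 1/(z+k)^2, truncated, plus an integral
    # estimate 1/(z + terms - 1/2) for the dropped tail.
    s = sum(1.0 / (z + k) ** 2 for k in range(terms))
    return s + 1.0 / (z + terms - 0.5)

z = 0.25
lhs = trigamma(z) + trigamma(1 - z)
rhs = math.pi ** 2 / math.sin(math.pi * z) ** 2
print(abs(lhs - rhs))  # small
```

At z = 1/4 both sides equal 2π² ≈ 19.74.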
== Multiplication theorem ==
The multiplication theorem gives
{\displaystyle k^{m+1}\psi ^{(m)}(kz)=\sum _{n=0}^{k-1}\psi ^{(m)}\left(z+{\frac {n}{k}}\right)\qquad m\geq 1}
and
{\displaystyle k\psi ^{(0)}(kz)=k\ln {k}+\sum _{n=0}^{k-1}\psi ^{(0)}\left(z+{\frac {n}{k}}\right)}
for the digamma function.
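A quick numerical check of the m ≥ 1 case (an added illustration; the series-based trigamma with tail correction is an assumption of the sketch, not the article's method):

```python
import math

def trigamma(z, terms=100_000):
    # psi^(1)(z) via the truncated series, with an integral tail estimate
    s = sum(1.0 / (z + k) ** 2 for k in range(terms))
    return s + 1.0 / (z + terms - 0.5)

k, m, z = 3, 1, 0.7
lhs = k ** (m + 1) * trigamma(k * z)           # k^(m+1) psi^(m)(kz)
rhs = sum(trigamma(z + n / k) for n in range(k))
print(abs(lhs - rhs))  # small
```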
== Series representation ==
The polygamma function has the series representation
{\displaystyle \psi ^{(m)}(z)=(-1)^{m+1}\,m!\sum _{k=0}^{\infty }{\frac {1}{(z+k)^{m+1}}}}
which holds for integer values of m > 0 and any complex z not equal to a nonpositive integer. This representation can be written more compactly in terms of the Hurwitz zeta function as
{\displaystyle \psi ^{(m)}(z)=(-1)^{m+1}\,m!\,\zeta (m+1,z).}
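Numerically, the relation to the Hurwitz zeta function is easy to spot-check at z = 1, where ζ(m + 1, 1) is the ordinary Riemann zeta value (a sketch added here; the integral tail estimate is my own device):

```python
import math

def polygamma(m, z, terms=200_000):
    # psi^(m)(z) = (-1)^(m+1) m! * zeta(m+1, z), Hurwitz zeta by truncated sum
    hurwitz = sum(1.0 / (z + k) ** (m + 1) for k in range(terms))
    hurwitz += (z + terms - 0.5) ** (-m) / m  # integral estimate of the tail
    return (-1) ** (m + 1) * math.factorial(m) * hurwitz

print(abs(polygamma(1, 1.0) - math.pi ** 2 / 6))   # psi'(1)   = zeta(2)
print(abs(polygamma(3, 1.0) - math.pi ** 4 / 15))  # psi'''(1) = 6 zeta(4)
```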
This relation can for example be used to compute the special values
{\displaystyle \psi ^{(2n-1)}\left({\frac {1}{4}}\right)={\frac {4^{2n-1}}{2n}}\left(\pi ^{2n}(2^{2n}-1)|B_{2n}|+2(2n)!\beta (2n)\right);}
{\displaystyle \psi ^{(2n-1)}\left({\frac {3}{4}}\right)={\frac {4^{2n-1}}{2n}}\left(\pi ^{2n}(2^{2n}-1)|B_{2n}|-2(2n)!\beta (2n)\right);}
{\displaystyle \psi ^{(2n)}\left({\frac {1}{4}}\right)=-2^{2n-1}\left(\pi ^{2n+1}|E_{2n}|+2(2n)!(2^{2n+1}-1)\zeta (2n+1)\right);}
{\displaystyle \psi ^{(2n)}\left({\frac {3}{4}}\right)=2^{2n-1}\left(\pi ^{2n+1}|E_{2n}|-2(2n)!(2^{2n+1}-1)\zeta (2n+1)\right).}
Alternately, the Hurwitz zeta can be understood to generalize the polygamma to arbitrary, non-integer order.
One more series may be permitted for the polygamma functions. As given by Schlömilch,
{\displaystyle {\frac {1}{\Gamma (z)}}=ze^{\gamma z}\prod _{n=1}^{\infty }\left(1+{\frac {z}{n}}\right)e^{-{\frac {z}{n}}}.}
This is a result of the Weierstrass factorization theorem. Thus, the gamma function may now be defined as:
{\displaystyle \Gamma (z)={\frac {e^{-\gamma z}}{z}}\prod _{n=1}^{\infty }\left(1+{\frac {z}{n}}\right)^{-1}e^{\frac {z}{n}}.}
Now, the natural logarithm of the gamma function is easily representable:
{\displaystyle \ln \Gamma (z)=-\gamma z-\ln(z)+\sum _{k=1}^{\infty }\left({\frac {z}{k}}-\ln \left(1+{\frac {z}{k}}\right)\right).}
Finally, we arrive at a summation representation for the polygamma function:
{\displaystyle \psi ^{(n)}(z)={\frac {\mathrm {d} ^{n+1}}{\mathrm {d} z^{n+1}}}\ln \Gamma (z)=-\gamma \delta _{n0}-{\frac {(-1)^{n}n!}{z^{n+1}}}+\sum _{k=1}^{\infty }\left({\frac {1}{k}}\delta _{n0}-{\frac {(-1)^{n}n!}{(k+z)^{n+1}}}\right)}
where δ_n0 is the Kronecker delta.
Also the Lerch transcendent
{\displaystyle \Phi (-1,m+1,z)=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(z+k)^{m+1}}}}
can be expressed in terms of the polygamma function:
{\displaystyle \Phi (-1,m+1,z)={\frac {1}{(-2)^{m+1}m!}}\left(\psi ^{(m)}\left({\frac {z}{2}}\right)-\psi ^{(m)}\left({\frac {z+1}{2}}\right)\right)}
== Taylor series ==
The Taylor series of ψ^(m)(z + 1) at z = 0 is
{\displaystyle \psi ^{(m)}(z+1)=\sum _{k=0}^{\infty }(-1)^{m+k+1}{\frac {(m+k)!}{k!}}\zeta (m+k+1)z^{k}\qquad m\geq 1}
and
{\displaystyle \psi ^{(0)}(z+1)=-\gamma +\sum _{k=1}^{\infty }(-1)^{k+1}\zeta (k+1)z^{k}}
which converges for |z| < 1. Here, ζ is the Riemann zeta function. This series is easily derived from the corresponding Taylor series for the Hurwitz zeta function. This series may be used to derive a number of rational zeta series.
== Asymptotic expansion ==
These non-converging series can be used to obtain quickly an approximation of a prescribed numeric precision for large arguments:
{\displaystyle \psi ^{(m)}(z)\sim (-1)^{m+1}\sum _{k=0}^{\infty }{\frac {(k+m-1)!}{k!}}{\frac {B_{k}}{z^{k+m}}}\qquad m\geq 1}
and
{\displaystyle \psi ^{(0)}(z)\sim \ln(z)-\sum _{k=1}^{\infty }{\frac {B_{k}}{kz^{k}}}}
where we have chosen B1 = 1/2, i.e. the Bernoulli numbers of the second kind.
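For instance, truncating the digamma expansion after the B₈ term already matches an independent estimate of (ln Γ)′ to many digits at z = 10 (a sketch added here; the central-difference reference is my own choice):

```python
import math

# Bernoulli numbers of the second kind, B_1 = +1/2 (hardcoded through B_8)
B = {1: 0.5, 2: 1 / 6, 3: 0.0, 4: -1 / 30, 5: 0.0, 6: 1 / 42, 7: 0.0, 8: -1 / 30}

def digamma_asymptotic(z, order=8):
    # psi^(0)(z) ~ ln z - sum_{k=1}^order B_k / (k z^k)
    return math.log(z) - sum(B[k] / (k * z ** k) for k in range(1, order + 1))

def digamma_reference(z, h=1e-5):
    # independent check: central difference of log-gamma
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

z = 10.0
print(abs(digamma_asymptotic(z) - digamma_reference(z)))  # very small
```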
== Inequalities ==
The hyperbolic cotangent satisfies the inequality
{\displaystyle {\frac {t}{2}}\operatorname {coth} {\frac {t}{2}}\geq 1,}
and this implies that the function
{\displaystyle {\frac {t^{m}}{1-e^{-t}}}-\left(t^{m-1}+{\frac {t^{m}}{2}}\right)}
is non-negative for all m ≥ 1 and t ≥ 0. It follows that the Laplace transform of this function is completely monotone. By the integral representation above, we conclude that
{\displaystyle (-1)^{m+1}\psi ^{(m)}(x)-\left({\frac {(m-1)!}{x^{m}}}+{\frac {m!}{2x^{m+1}}}\right)}
is completely monotone. The convexity inequality e^t ≥ 1 + t implies that
{\displaystyle \left(t^{m-1}+t^{m}\right)-{\frac {t^{m}}{1-e^{-t}}}}
is non-negative for all m ≥ 1 and t ≥ 0, so a similar Laplace transformation argument yields the complete monotonicity of
{\displaystyle \left({\frac {(m-1)!}{x^{m}}}+{\frac {m!}{x^{m+1}}}\right)-(-1)^{m+1}\psi ^{(m)}(x).}
Therefore, for all m ≥ 1 and x > 0,
{\displaystyle {\frac {(m-1)!}{x^{m}}}+{\frac {m!}{2x^{m+1}}}\leq (-1)^{m+1}\psi ^{(m)}(x)\leq {\frac {(m-1)!}{x^{m}}}+{\frac {m!}{x^{m+1}}}.}
Since both bounds are strictly positive for x > 0, we have:
ln Γ(x) is strictly convex.
For m = 0, the digamma function, ψ(x) = ψ^(0)(x), is strictly monotonic increasing and strictly concave.
For m odd, the polygamma functions ψ^(1), ψ^(3), ψ^(5), …, are strictly positive, strictly monotonic decreasing and strictly convex.
For m even, the polygamma functions ψ^(2), ψ^(4), ψ^(6), …, are strictly negative, strictly monotonic increasing and strictly concave.
This can be seen in the first plot above.
=== Trigamma bounds and asymptote ===
For the case of the trigamma function (m = 1) the final inequality formula above, for x > 0, can be rewritten as:
{\displaystyle {\frac {x+{\frac {1}{2}}}{x^{2}}}\leq \psi ^{(1)}(x)\leq {\frac {x+1}{x^{2}}}}
so that for x ≫ 1:
{\displaystyle \psi ^{(1)}(x)\approx {\frac {1}{x}}}.
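The two-sided bound is easy to confirm numerically (an added sketch using the truncated series for ψ^(1) with an integral tail estimate, my own device):

```python
import math

def trigamma(x, terms=100_000):
    # psi^(1)(x) = sum_{k>=0} 1/(x+k)^2, truncated, plus a tail estimate
    s = sum(1.0 / (x + k) ** 2 for k in range(terms))
    return s + 1.0 / (x + terms - 0.5)

for x in (0.5, 1.0, 2.0, 10.0):
    lower = (x + 0.5) / x ** 2
    upper = (x + 1.0) / x ** 2
    assert lower <= trigamma(x) <= upper
print("bounds hold")
```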
== See also ==
Factorial
Gamma function
Digamma function
Trigamma function
Generalized polygamma function
== References ==
Abramowitz, Milton; Stegun, Irene A. (1964). "Section 6.4". Handbook of Mathematical Functions. New York: Dover Publications. ISBN 978-0-486-61272-0.
In mathematics, in the area of analytic number theory, the Dirichlet eta function is defined by the following Dirichlet series, which converges for any complex number having real part > 0:
{\displaystyle \eta (s)=\sum _{n=1}^{\infty }{(-1)^{n-1} \over n^{s}}={\frac {1}{1^{s}}}-{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}-{\frac {1}{4^{s}}}+\cdots .}
This Dirichlet series is the alternating sum corresponding to the Dirichlet series expansion of the Riemann zeta function, ζ(s) — and for this reason the Dirichlet eta function is also known as the alternating zeta function, also denoted ζ*(s). The following relation holds:
{\displaystyle \eta (s)=\left(1-2^{1-s}\right)\zeta (s)}
Both the Dirichlet eta function and the Riemann zeta function are special cases of polylogarithms.
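The relation is easy to confirm numerically, e.g. at s = 4 where ζ(4) = π⁴/90 (an illustration added here, not from the source):

```python
import math

s = 4
# direct alternating Dirichlet series, truncated
eta_direct = sum((-1) ** (n - 1) / n ** s for n in range(1, 200_001))
# (1 - 2^(1-s)) * zeta(4), using zeta(4) = pi^4 / 90
eta_via_zeta = (1 - 2 ** (1 - s)) * math.pi ** 4 / 90
print(abs(eta_direct - eta_via_zeta))  # tiny
```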
While the Dirichlet series expansion for the eta function is convergent only for complex s with real part > 0, it is Abel summable for any complex number. This serves to define the eta function as an entire function. (The above relation and the facts that the eta function is entire and η(1) ≠ 0 together show the zeta function is meromorphic with a simple pole at s = 1, and possibly additional poles at the other zeros of the factor 1 − 2^(1−s), although in fact these hypothetical additional poles do not exist.)
Equivalently, we may begin by defining
{\displaystyle \eta (s)={\frac {1}{\Gamma (s)}}\int _{0}^{\infty }{\frac {x^{s-1}}{e^{x}+1}}{dx}}
which is also defined in the region of positive real part (Γ(s) represents the gamma function). This gives the eta function as a Mellin transform.
Hardy gave a simple proof of the functional equation for the eta function, which is
{\displaystyle \eta (-s)=2{\frac {1-2^{-s-1}}{1-2^{-s}}}\pi ^{-s-1}s\sin \left({\pi s \over 2}\right)\Gamma (s)\eta (s+1).}
From this, one immediately has the functional equation of the zeta function also, as well as another means to extend the definition of eta to the entire complex plane.
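Evaluating the right-hand side at s = 1 with the known values Γ(1) = 1 and η(2) = π²/12 recovers η(−1) = 1/4 (a spot-check added here, not from the source):

```python
import math

s = 1.0
eta_s_plus_1 = math.pi ** 2 / 12  # eta(2)
eta_minus_s = (2 * (1 - 2 ** (-s - 1)) / (1 - 2 ** (-s))
               * math.pi ** (-s - 1) * s * math.sin(math.pi * s / 2)
               * math.gamma(s) * eta_s_plus_1)
print(eta_minus_s)  # 0.25 up to rounding, i.e. eta(-1) = 1/4
```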
== Zeros ==
The zeros of the eta function include all the zeros of the zeta function: the negative even integers (real equidistant simple zeros); the zeros along the critical line, none of which are known to be multiple and over 40% of which have been proven to be simple; and the hypothetical zeros in the critical strip but not on the critical line, which if they do exist must occur at the vertices of rectangles symmetrical around the x-axis and the critical line and whose multiplicity is unknown. In addition, the factor 1 − 2^(1−s) adds an infinite number of complex simple zeros, located at equidistant points on the line ℜ(s) = 1, at s_n = 1 + 2nπi/ln(2) where n is any nonzero integer.
The zeros of the eta function are located symmetrically with respect to the real axis and under the Riemann hypothesis would be on two parallel lines ℜ(s) = 1/2 and ℜ(s) = 1, and on the perpendicular half line formed by the negative real axis.
== Landau's problem with ζ(s) = η(s)/(1 − 2^(1−s)) and solutions ==
In the equation η(s) = (1 − 21−s) ζ(s), "the pole of ζ(s) at s = 1 is cancelled by the zero of the other factor" (Titchmarsh, 1986, p. 17), and as a result η(1) is neither infinite nor zero (see § Particular values). However, in the equation
{\displaystyle \zeta (s)={\frac {\eta (s)}{1-2^{1-s}}},}
η must be zero at all the points s_n = 1 + 2nπi/ln 2, n ≠ 0, n ∈ ℤ, where the denominator is zero, if the Riemann zeta function is analytic and finite there. The problem of proving this without defining the zeta function first was signaled and left open by E. Landau in his 1909 treatise on number theory: "Whether the eta series is different from zero or not at the points s_n ≠ 1, i.e., whether these are poles of zeta or not, is not readily apparent here."
A first solution for Landau's problem was published almost 40 years later by D. V. Widder in his book The Laplace Transform. It uses the next prime 3 instead of 2 to define a Dirichlet series similar to the eta function, which we will call the λ function, defined for ℜ(s) > 0 and with some zeros also on ℜ(s) = 1, but not equal to those of eta.
An elementary, direct and ζ-independent proof of the vanishing of the eta function at s_n ≠ 1 was published by J. Sondow in 2003. It expresses the value of the eta function as the limit of special Riemann sums associated to an integral known to be zero, using a relation between the partial sums of the Dirichlet series defining the eta and zeta functions for ℜ(s) > 1.
Assuming η(s_n) = 0, for each point s_n ≠ 1 where 2^(s_n) = 2, we can now define ζ(s_n) by continuity as follows,
{\displaystyle \zeta (s_{n})=\lim _{s\to s_{n}}{\frac {\eta (s)}{1-{\frac {2}{2^{s}}}}}=\lim _{s\to s_{n}}{\frac {\eta (s)-\eta (s_{n})}{{\frac {2}{2^{s_{n}}}}-{\frac {2}{2^{s}}}}}=\lim _{s\to s_{n}}{\frac {\eta (s)-\eta (s_{n})}{s-s_{n}}}\,{\frac {s-s_{n}}{{\frac {2}{2^{s_{n}}}}-{\frac {2}{2^{s}}}}}={\frac {\eta '(s_{n})}{\log(2)}}.}
The apparent singularity of zeta at s_n ≠ 1 is now removed, and the zeta function is proven to be analytic everywhere in ℜ(s) > 0, except at s = 1 where
{\displaystyle \lim _{s\to 1}(s-1)\zeta (s)=\lim _{s\to 1}{\frac {\eta (s)}{\frac {1-2^{1-s}}{s-1}}}={\frac {\eta (1)}{\log 2}}=1.}
== Integral representations ==
A number of integral formulas involving the eta function can be listed. The first one follows from a change of variable of the integral representation of the Gamma function (Abel, 1823), giving a Mellin transform which can be expressed in different ways as a double integral (Sondow, 2005). This is valid for ℜ(s) > 0.
{\displaystyle {\begin{aligned}\Gamma (s)\eta (s)&=\int _{0}^{\infty }{\frac {x^{s-1}}{e^{x}+1}}\,dx=\int _{0}^{\infty }\int _{0}^{x}{\frac {x^{s-2}}{e^{x}+1}}\,dy\,dx\\[8pt]&=\int _{0}^{\infty }\int _{0}^{\infty }{\frac {(t+r)^{s-2}}{e^{t+r}+1}}dr\,dt=\int _{0}^{1}\int _{0}^{1}{\frac {\left(-\log(xy)\right)^{s-2}}{1+xy}}\,dx\,dy.\end{aligned}}}
The Cauchy–Schlömilch transformation (Amdeberhan, Moll et al., 2010) can be used to prove this other representation, valid for ℜ(s) > −1. Integration by parts of the first integral above in this section yields another derivation.
{\displaystyle 2^{1-s}\,\Gamma (s+1)\,\eta (s)=2\int _{0}^{\infty }{\frac {x^{2s+1}}{\cosh ^{2}(x^{2})}}\,dx=\int _{0}^{\infty }{\frac {t^{s}}{\cosh ^{2}(t)}}\,dt.}
The next formula, due to Lindelöf (1905), is valid over the whole complex plane, when the principal value is taken for the logarithm implicit in the exponential.
{\displaystyle \eta (s)=\int _{-\infty }^{\infty }{\frac {(1/2+it)^{-s}}{e^{\pi t}+e^{-\pi t}}}\,dt.}
This corresponds to a Jensen (1895) formula for the entire function (s − 1)ζ(s), valid over the whole complex plane and also proven by Lindelöf.
{\displaystyle (s-1)\zeta (s)=2\pi \,\int _{-\infty }^{\infty }{\frac {(1/2+it)^{1-s}}{(e^{\pi t}+e^{-\pi t})^{2}}}\,dt.}
"This formula, remarkable by its simplicity, can be proven easily with the help of Cauchy's theorem, so important for the summation of series", wrote Jensen (1895). Similarly by converting the integration paths to contour integrals one can obtain other formulas for the eta function, such as this generalisation (Milgram, 2013) valid for 0 < c < 1 and all s:
{\displaystyle \eta (s)={\frac {1}{2}}\int _{-\infty }^{\infty }{\frac {(c+it)^{-s}}{\sin {(\pi (c+it))}}}\,dt.}
The zeros on the negative real axis are factored out cleanly by making c → 0⁺ (Milgram, 2013) to obtain a formula valid for ℜ(s) < 0:
{\displaystyle \eta (s)=-\sin \left({\frac {s\pi }{2}}\right)\int _{0}^{\infty }{\frac {t^{-s}}{\sinh {(\pi t)}}}\,dt.}
== Numerical algorithms ==
Most of the series acceleration techniques developed for alternating series can be profitably applied to the evaluation of the eta function. One particularly simple, yet reasonable method is to apply Euler's transformation of alternating series, to obtain
{\displaystyle \eta (s)=\sum _{n=0}^{\infty }{\frac {1}{2^{n+1}}}\sum _{k=0}^{n}(-1)^{k}{n \choose k}{\frac {1}{(k+1)^{s}}}.}
Note that the second, inside summation is a forward difference.
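As an illustration (a minimal Python sketch, not part of the article), the double sum above can be evaluated directly; for s = 1 it should reproduce η(1) = ln 2:

```python
import math

def eta_euler(s: float, terms: int = 30) -> float:
    """Dirichlet eta via Euler's transformation of the alternating
    series: eta(s) = sum_n 2^-(n+1) * sum_k (-1)^k C(n,k) (k+1)^-s."""
    total = 0.0
    for n in range(terms):
        # inner sum: the n-th forward difference of (k+1)^-s at k = 0
        inner = sum((-1) ** k * math.comb(n, k) / (k + 1) ** s
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total
```

Each additional outer term roughly halves the remaining error, so thirty terms already give about nine correct digits at s = 1.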
=== Borwein's method ===
Peter Borwein used approximations involving Chebyshev polynomials to produce a method for efficient evaluation of the eta function. If
{\displaystyle d_{k}=n\sum _{\ell =0}^{k}{\frac {(n+\ell -1)!4^{\ell }}{(n-\ell )!(2\ell )!}}}
then
{\displaystyle \eta (s)=-{\frac {1}{d_{n}}}\sum _{k=0}^{n-1}{\frac {(-1)^{k}(d_{k}-d_{n})}{(k+1)^{s}}}+\gamma _{n}(s),}
where for
{\displaystyle \Re (s)\geq {\frac {1}{2}}}
the error term γn is bounded by
{\displaystyle |\gamma _{n}(s)|\leq {\frac {3}{(3+{\sqrt {8}})^{n}}}(1+2|\Im (s)|)\exp \left({\frac {\pi }{2}}|\Im (s)|\right).}
The factor of
{\displaystyle 3+{\sqrt {8}}\approx 5.8}
in the error bound indicates that the Borwein series converges quite rapidly as n increases.
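The scheme is short enough to sketch directly (an illustrative helper, not code from Borwein's paper), using exact rational arithmetic for the d_k coefficients:

```python
import math
from fractions import Fraction

def eta_borwein(s: float, n: int = 20) -> float:
    """Borwein's Chebyshev-based scheme for eta(s), Re(s) >= 1/2,
    using the d_k coefficients and the finite sum given above."""
    d = []
    acc = Fraction(0)
    for l in range(n + 1):
        acc += Fraction(math.factorial(n + l - 1) * 4 ** l,
                        math.factorial(n - l) * math.factorial(2 * l))
        d.append(n * acc)  # d_l, kept exact
    total = sum((-1) ** k * float(d[k] - d[n]) / (k + 1) ** s
                for k in range(n))
    return -total / float(d[n])
```

With n = 20 the error bound above is roughly 3/5.8^20, about 10^-15 on the real axis.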
== Particular values ==
η(0) = 1⁄2, the Abel sum of Grandi's series 1 − 1 + 1 − 1 + ⋯.
η(−1) = 1⁄4, the Abel sum of 1 − 2 + 3 − 4 + ⋯.
For positive integer k,
{\displaystyle \eta (1-k)={\frac {2^{k}-1}{k}}B_{k}^{+{}},}
where B+k is the k-th Bernoulli number.
Also:
{\displaystyle \eta (1)=\ln 2}
, this is the alternating harmonic series
{\displaystyle \eta (2)={\pi ^{2} \over 12}}
OEIS: A072691
{\displaystyle \eta (4)={{7\pi ^{4}} \over 720}\approx 0.94703283}
{\displaystyle \eta (6)={{31\pi ^{6}} \over 30240}\approx 0.98555109}
{\displaystyle \eta (8)={{127\pi ^{8}} \over 1209600}\approx 0.99623300}
{\displaystyle \eta (10)={{73\pi ^{10}} \over 6842880}\approx 0.99903951}
{\displaystyle \eta (12)={{1414477\pi ^{12}} \over {1307674368000}}\approx 0.99975769}
The general form for even positive integers is:
{\displaystyle \eta (2n)=(-1)^{n+1}{{B_{2n}\pi ^{2n}\left(2^{2n-1}-1\right)} \over {(2n)!}}.}
Taking the limit
{\displaystyle n\to \infty }
, one obtains
{\displaystyle \eta (\infty )=1}.
== Derivatives ==
The derivative with respect to the parameter s is for
{\displaystyle s\neq 1}
{\displaystyle \eta '(s)=\sum _{n=1}^{\infty }{\frac {(-1)^{n}\ln n}{n^{s}}}=2^{1-s}\ln(2)\,\zeta (s)+(1-2^{1-s})\,\zeta '(s).}
{\displaystyle \eta '(1)=\ln(2)\,\gamma -\ln(2)^{2}\,2^{-1}}
== References ==
Jensen, J. L. W. V. (1895). "Remarques relatives aux réponses de MM. Franel et Kluyver". L'Intermédiaire des Mathématiciens. II: 346.
Lindelöf, Ernst (1905). Le calcul des résidus et ses applications à la théorie des fonctions. Gauthier-Villars. p. 103.
Widder, David Vernon (1946). The Laplace Transform. Princeton University Press. p. 230.
Landau, Edmund, Handbuch der Lehre von der Verteilung der Primzahlen, Erster Band, Berlin, 1909, p. 160. (Second edition by Chelsea, New York, 1953, p. 160, 933)
Titchmarsh, E. C. (1986). The Theory of the Riemann Zeta Function, Second revised (Heath-Brown) edition. Oxford University Press.
Conrey, J. B. (1989). "More than two fifths of the zeros of the Riemann zeta function are on the critical line". Journal für die Reine und Angewandte Mathematik. 1989 (399): 1–26. doi:10.1515/crll.1989.399.1. S2CID 115910600.
Knopp, Konrad (1990) [1922]. Theory and Application of Infinite Series. Dover. ISBN 0-486-66165-2.
Borwein, P. (2000). "An Efficient Algorithm for the Riemann Zeta Function". Constructive, Experimental, and Nonlinear Analysis. CMS Conference Proc. 27: 29–34.
Sondow, Jonathan (2005). "Double integrals for Euler's constant and ln 4/π and an analog of Hadjicostas's formula". Amer. Math. Monthly. 112: 61–65, formula 18. arXiv:math.CO/0211148.
Sondow, Jonathan (2003). "Zeros of the Alternating Zeta Function on the Line R(s)=1". Amer. Math. Monthly. 110: 435–437. arXiv:math/0209393.
Gourdon, Xavier; Sebah, Pascal (2003). "Numerical evaluation of the Riemann Zeta-function" (PDF).
Amdeberhan, T.; Glasser, M. L.; Jones, M. C; Moll, V. H.; Posey, R.; Varela, D. (2010). "The Cauchy–Schlomilch Transformation". arXiv:1004.2445 [math.CA]. p. 12.
Milgram, Michael S. (2012). "Integral and Series Representations of Riemann's Zeta Function, Dirichlet's Eta Function and a Medley of Related Results". Journal of Mathematics. 2013: 1–17. arXiv:1208.3429. doi:10.1155/2013/181724.
In mathematics, the study of special values of L-functions is a subfield of number theory devoted to generalising formulae such as the Leibniz formula for π, namely
{\displaystyle 1\,-\,{\frac {1}{3}}\,+\,{\frac {1}{5}}\,-\,{\frac {1}{7}}\,+\,{\frac {1}{9}}\,-\,\cdots \;=\;{\frac {\pi }{4}},\!}
by the recognition that the expression on the left-hand side is also
{\displaystyle L(1)}
where
{\displaystyle L(s)}
is the Dirichlet L-function for the field of Gaussian rational numbers. This formula is a special case of the analytic class number formula, and in those terms reads that the Gaussian field has class number 1. The factor
{\displaystyle {\tfrac {1}{4}}}
on the right hand side of the formula corresponds to the fact that this field contains four roots of unity.
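For illustration (a sketch, not part of the article), the Leibniz series can be summed numerically; the midpoint of two consecutive partial sums converges quickly to π/4:

```python
import math

N = 100_000
partial = sum((-1) ** k / (2 * k + 1) for k in range(N))
partial_next = partial + (-1) ** N / (2 * N + 1)
approx = (partial + partial_next) / 2   # averaging kills most of the error
assert abs(approx - math.pi / 4) < 1e-9
```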
== Conjectures ==
There are two families of conjectures, formulated for general classes of L-functions (the very general setting being for L-functions associated to Chow motives over number fields), the division into two reflecting the questions of:
how to replace
{\displaystyle \pi }
in the Leibniz formula by some other "transcendental" number (regardless of whether it is currently possible for transcendental number theory to provide a proof of the transcendence); and
how to generalise the rational factor in the formula (class number divided by number of roots of unity) by some algebraic construction of a rational number that will represent the ratio of the L-function value to the "transcendental" factor.
Subsidiary explanations are given for the integer values of
{\displaystyle n}
for which a formula of this sort involving
{\displaystyle L(n)}
can be expected to hold.
The conjectures for (a) are called Beilinson's conjectures, for Alexander Beilinson. The idea is to abstract from the regulator of a number field to some "higher regulator" (the Beilinson regulator), a determinant constructed on a real vector space that comes from algebraic K-theory.
The conjectures for (b) are called the Bloch–Kato conjectures for special values (for Spencer Bloch and Kazuya Kato; this circle of ideas is distinct from the Bloch–Kato conjecture of K-theory, extending the Milnor conjecture, a proof of which was announced in 2009). They are also called the Tamagawa number conjecture, a name arising via the Birch–Swinnerton-Dyer conjecture and its formulation as an elliptic curve analogue of the Tamagawa number problem for linear algebraic groups. In a further extension, the equivariant Tamagawa number conjecture (ETNC) has been formulated, to consolidate the connection of these ideas with Iwasawa theory, and its so-called Main Conjecture.
=== Current status ===
All of these conjectures are known to be true only in special cases.
== See also ==
Brumer–Stark conjecture
== Notes ==
== References ==
Kings, Guido (2003), "The Bloch–Kato conjecture on special values of L-functions. A survey of known results", Journal de théorie des nombres de Bordeaux, 15 (1): 179–198, doi:10.5802/jtnb.396, ISSN 1246-7405, MR 2019010
"Beilinson conjectures", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"K-functor in algebraic geometry", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Mathar, Richard J. (2010), "Table of Dirichlet L-Series and Prime Zeta Modulo Functions for small moduli", arXiv:1008.2547 [math.NT]
== External links ==
L-funktionen und die Vermutungen von Deligne und Beilinson (L-functions and the conjectures of Deligne and Beilinson)
In arithmetic and algebra, the cube of a number n is its third power, that is, the result of multiplying three instances of n together.
The cube of a number n is denoted n3, using a superscript 3, for example 23 = 8. The cube operation can also be defined for any other mathematical expression, for example (x + 1)3.
The cube is also the number multiplied by its square:
n3 = n × n2 = n × n × n.
The cube function is the function x ↦ x3 (often denoted y = x3) that maps a number to its cube. It is an odd function, as
(−n)3 = −(n3).
The volume of a geometric cube is the cube of its side length, giving rise to the name. The inverse operation that consists of finding a number whose cube is n is called extracting the cube root of n. It determines the side of the cube of a given volume. It is also n raised to the one-third power.
The graph of the cube function is known as the cubic parabola. Because the cube function is an odd function, this curve has a center of symmetry at the origin, but no axis of symmetry.
== In integers ==
A cube number, or a perfect cube, or sometimes just a cube, is a number which is the cube of an integer.
The non-negative perfect cubes up to 603 are (sequence A000578 in the OEIS):
Geometrically speaking, a positive integer m is a perfect cube if and only if one can arrange m solid unit cubes into a larger, solid cube. For example, 27 small cubes can be arranged into one larger one with the appearance of a Rubik's Cube, since 3 × 3 × 3 = 27.
The difference between the cubes of consecutive integers can be expressed as follows:
n3 − (n − 1)3 = 3(n − 1)n + 1.
or
(n + 1)3 − n3 = 3(n + 1)n + 1.
There is no minimum perfect cube, since the cube of a negative integer is negative. For example, (−4) × (−4) × (−4) = −64.
=== Base ten ===
Unlike perfect squares, perfect cubes do not have a small number of possibilities for the last two digits. Except for cubes divisible by 5, where only 25, 75 and 00 can be the last two digits, any pair of digits with the last digit odd can occur as the last digits of a perfect cube. With even cubes, there is considerable restriction, for only 00, o2, e4, o6 and e8 can be the last two digits of a perfect cube (where o stands for any odd digit and e for any even digit). Some cube numbers are also square numbers; for example, 64 is a square number (8 × 8) and a cube number (4 × 4 × 4). This happens if and only if the number is a perfect sixth power (in this case 26).
The last digits of each 3rd power are:
It is, however, easy to show that most numbers are not perfect cubes because all perfect cubes must have digital root 1, 8 or 9; that is, their values modulo 9 may only be 0, 1, and 8. Moreover, the digital root of any number's cube can be determined by the remainder the number gives when divided by 3:
If the number x is divisible by 3, its cube has digital root 9; that is,
{\displaystyle {\text{if}}\quad x\equiv 0{\pmod {3}}\quad {\text{then}}\quad x^{3}\equiv 0{\pmod {9}}{\text{ (actually}}\quad 0{\pmod {27}}{\text{)}};}
If it has a remainder of 1 when divided by 3, its cube has digital root 1; that is,
{\displaystyle {\text{if}}\quad x\equiv 1{\pmod {3}}\quad {\text{then}}\quad x^{3}\equiv 1{\pmod {9}};}
If it has a remainder of 2 when divided by 3, its cube has digital root 8; that is,
{\displaystyle {\text{if}}\quad x\equiv 2{\pmod {3}}\quad {\text{then}}\quad x^{3}\equiv 8{\pmod {9}}.}
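These three congruences can be checked exhaustively over a range of integers (a quick sketch, not from the article):

```python
# Each residue of n mod 3 forces the residue of n^3 mod 9, so a
# cube's digital root is always 1, 8, or 9 (i.e. 0 mod 9).
residues = {(n % 3, n ** 3 % 9) for n in range(-100, 101)}
assert residues == {(0, 0), (1, 1), (2, 8)}
```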
=== Sums of two cubes ===
=== Sums of three cubes ===
It is conjectured that every integer (positive or negative) not congruent to ±4 modulo 9 can be written as a sum of three (positive or negative) cubes in infinitely many ways. For example,
{\displaystyle 6=2^{3}+(-1)^{3}+(-1)^{3}}
. Integers congruent to ±4 modulo 9 are excluded because they cannot be written as the sum of three cubes.
The smallest such integer for which such a sum is not known is 114. In September 2019, the previous smallest such integer with no known 3-cube sum, 42, was found to satisfy this equation:
{\displaystyle 42=(-80538738812075974)^{3}+80435758145817515^{3}+12602123297335631^{3}.}
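Python's exact big-integer arithmetic makes this decomposition easy to verify:

```python
# Booker and Sutherland's 2019 decomposition of 42 as a sum of three cubes.
x, y, z = -80538738812075974, 80435758145817515, 12602123297335631
assert x ** 3 + y ** 3 + z ** 3 == 42
```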
One solution to
{\displaystyle x^{3}+y^{3}+z^{3}=n}
is given in the table below for n ≤ 78, and n not congruent to 4 or 5 modulo 9. The selected solution is the one that is primitive (gcd(x, y, z) = 1), is not of the form
{\displaystyle c^{3}+(-c)^{3}+n^{3}=n^{3}}
or
{\displaystyle (n+6nc^{3})^{3}+(n-6nc^{3})^{3}+(-6nc^{2})^{3}=2n^{3}}
(since they are infinite families of solutions), satisfies 0 ≤ |x| ≤ |y| ≤ |z|, and has minimal values for |z| and |y| (tested in this order).
Only primitive solutions are selected since the non-primitive ones can be trivially deduced from solutions for a smaller value of n. For example, for n = 24, the solution
{\displaystyle 2^{3}+2^{3}+2^{3}=24}
results from the solution
{\displaystyle 1^{3}+1^{3}+1^{3}=3}
by multiplying everything by
{\displaystyle 8=2^{3}.}
Therefore, another solution is selected. Similarly, for n = 48, the solution (x, y, z) = (−2, −2, 4) is excluded, and the solution (x, y, z) = (−23, −26, 31) is selected instead.
=== Fermat's Last Theorem for cubes ===
The equation x3 + y3 = z3 has no non-trivial (i.e. xyz ≠ 0) solutions in integers. In fact, it has none in Eisenstein integers.
Both of these statements are also true for the equation x3 + y3 = 3z3.
=== Sum of first n cubes ===
The sum of the first n cubes is the nth triangle number squared:
{\displaystyle 1^{3}+2^{3}+\dots +n^{3}=(1+2+\dots +n)^{2}=\left({\frac {n(n+1)}{2}}\right)^{2}.}
Proofs.
Charles Wheatstone (1854) gives a particularly simple derivation, by expanding each cube in the sum into a set of consecutive odd numbers. He begins by giving the identity
{\displaystyle n^{3}=\underbrace {\left(n^{2}-n+1\right)+\left(n^{2}-n+1+2\right)+\left(n^{2}-n+1+4\right)+\cdots +\left(n^{2}+n-1\right)} _{n{\text{ consecutive odd numbers}}}.}
That identity is related to triangular numbers
{\displaystyle T_{n}}
in the following way:
{\displaystyle n^{3}=\sum _{k=T_{n-1}+1}^{T_{n}}(2k-1),}
and thus the summands forming
{\displaystyle n^{3}}
start off just after those forming all previous values
{\displaystyle 1^{3}}
up to
{\displaystyle (n-1)^{3}}
.
Applying this property, along with another well-known identity:
{\displaystyle n^{2}=\sum _{k=1}^{n}(2k-1),}
we obtain the following derivation:
{\displaystyle {\begin{aligned}\sum _{k=1}^{n}k^{3}&=1+8+27+64+\cdots +n^{3}\\&=\underbrace {1} _{1^{3}}+\underbrace {3+5} _{2^{3}}+\underbrace {7+9+11} _{3^{3}}+\underbrace {13+15+17+19} _{4^{3}}+\cdots +\underbrace {\left(n^{2}-n+1\right)+\cdots +\left(n^{2}+n-1\right)} _{n^{3}}\\&=\underbrace {\underbrace {\underbrace {\underbrace {1} _{1^{2}}+3} _{2^{2}}+5} _{3^{2}}+\cdots +\left(n^{2}+n-1\right)} _{\left({\frac {n^{2}+n}{2}}\right)^{2}}\\&=(1+2+\cdots +n)^{2}\\&={\bigg (}\sum _{k=1}^{n}k{\bigg )}^{2}.\end{aligned}}}
In the more recent mathematical literature, Stein (1971) uses the rectangle-counting interpretation of these numbers to form a geometric proof of the identity (see also Benjamin, Quinn & Wurtz 2006); he observes that it may also be proved easily (but uninformatively) by induction, and states that Toeplitz (1963) provides "an interesting old Arabic proof". Kanim (2004) provides a purely visual proof, Benjamin & Orrison (2002) provide two additional proofs, and Nelsen (1993) gives seven geometric proofs.
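Beyond the visual and inductive proofs cited above, the identity is trivial to confirm numerically (a quick sketch):

```python
# Nicomachus's identity: the sum of the first n cubes equals the
# square of the n-th triangular number n(n+1)/2.
for n in range(1, 101):
    assert sum(k ** 3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
```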
For example, the sum of the first 5 cubes is the square of the 5th triangular number,
{\displaystyle 1^{3}+2^{3}+3^{3}+4^{3}+5^{3}=15^{2}}
A similar result can be given for the sum of the first y odd cubes,
{\displaystyle 1^{3}+3^{3}+\dots +(2y-1)^{3}=(xy)^{2}}
but x, y must satisfy the negative Pell equation x2 − 2y2 = −1. For example, for y = 5 and y = 29:
{\displaystyle 1^{3}+3^{3}+\dots +9^{3}=(7\cdot 5)^{2}}
{\displaystyle 1^{3}+3^{3}+\dots +57^{3}=(41\cdot 29)^{2}}
and so on. Also, every even perfect number, except the lowest, is the sum of the first 2^((p−1)/2) odd cubes (p = 3, 5, 7, ...):
{\displaystyle 28=2^{2}(2^{3}-1)=1^{3}+3^{3}}
{\displaystyle 496=2^{4}(2^{5}-1)=1^{3}+3^{3}+5^{3}+7^{3}}
{\displaystyle 8128=2^{6}(2^{7}-1)=1^{3}+3^{3}+5^{3}+7^{3}+9^{3}+11^{3}+13^{3}+15^{3}}
=== Sum of cubes of numbers in arithmetic progression ===
There are examples of cubes of numbers in arithmetic progression whose sum is a cube:
{\displaystyle 3^{3}+4^{3}+5^{3}=6^{3}}
{\displaystyle 11^{3}+12^{3}+13^{3}+14^{3}=20^{3}}
{\displaystyle 31^{3}+33^{3}+35^{3}+37^{3}+39^{3}+41^{3}=66^{3}}
with the first one sometimes identified as the mysterious Plato's number. The formula F for finding the sum of n
cubes of numbers in arithmetic progression with common difference d and initial cube a3,
{\displaystyle F(d,a,n)=a^{3}+(a+d)^{3}+(a+2d)^{3}+\cdots +(a+dn-d)^{3}}
is given by
{\displaystyle F(d,a,n)=(n/4)(2a-d+dn)(2a^{2}-2ad+2adn-d^{2}n+d^{2}n^{2})}
A parametric solution to
{\displaystyle F(d,a,n)=y^{3}}
is known for the special case of d = 1, or consecutive cubes, as found by Pagliani in 1829.
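The closed form can be spot-checked against the three examples above (an illustrative sketch; the function name F follows the article's notation):

```python
def F(d: int, a: int, n: int) -> float:
    # Closed form for a^3 + (a+d)^3 + ... + (a+(n-1)d)^3, as given above.
    return (n / 4) * (2 * a - d + d * n) * (
        2 * a ** 2 - 2 * a * d + 2 * a * d * n
        - d ** 2 * n + d ** 2 * n ** 2)

assert F(1, 3, 3) == 6 ** 3      # 3^3 + 4^3 + 5^3
assert F(1, 11, 4) == 20 ** 3    # 11^3 + ... + 14^3
assert F(2, 31, 6) == 66 ** 3    # 31^3 + 33^3 + ... + 41^3
```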
=== Cubes as sums of successive odd integers ===
In the sequence of odd integers 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, ..., the first one is a cube (1 = 13); the sum of the next two is the next cube (3 + 5 = 23); the sum of the next three is the next cube (7 + 9 + 11 = 33); and so forth.
=== Waring's problem for cubes ===
Every positive integer can be written as the sum of nine (or fewer) positive cubes. This upper limit of nine cubes cannot be reduced because, for example, 23 cannot be written as the sum of fewer than nine positive cubes:
23 = 23 + 23 + 13 + 13 + 13 + 13 + 13 + 13 + 13.
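A short dynamic program confirms that 23 cannot be done with fewer than nine positive cubes (a sketch, not from the article; 239 is the only other integer requiring nine):

```python
def min_cubes(n: int) -> int:
    """Fewest positive cubes summing to n, by dynamic programming."""
    best = [0] + [n] * n          # best[i] <= i always (use all 1^3 terms)
    for i in range(1, n + 1):
        k = 1
        while k ** 3 <= i:
            best[i] = min(best[i], best[i - k ** 3] + 1)
            k += 1
    return best[n]

assert min_cubes(23) == 9
assert min_cubes(239) == 9
```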
== In rational numbers ==
Every positive rational number is the sum of three positive rational cubes, and there are rationals that are not the sum of two rational cubes.
== In real numbers, other fields, and rings ==
In real numbers, the cube function preserves the order: larger numbers have larger cubes. In other words, cubes (strictly) monotonically increase. Also, its codomain is the entire real line: the function x ↦ x3 : R → R is a surjection (takes all possible values). Only three numbers are equal to their own cubes: −1, 0, and 1. If −1 < x < 0 or 1 < x, then x3 > x. If x < −1 or 0 < x < 1, then x3 < x. All aforementioned properties pertain also to any higher odd power (x5, x7, ...) of real numbers. Equalities and inequalities are also true in any ordered ring.
Volumes of similar Euclidean solids are related as cubes of their linear sizes.
In complex numbers, the cube of a purely imaginary number is also purely imaginary. For example, i3 = −i.
The derivative of x3 equals 3x2.
Cubes occasionally have the surjective property in other fields, such as in Fp for primes p with p ≠ 1 (mod 3), but not necessarily: see the counterexample with rationals above. Also, in F7 only the three elements 0 and ±1 are perfect cubes, out of seven total. −1, 0, and 1 are perfect cubes anywhere and are the only elements of a field equal to their own cubes: x3 − x = x(x − 1)(x + 1).
== History ==
Determination of the cubes of large numbers was very common in many ancient civilizations. Mesopotamian mathematicians created cuneiform tablets with tables for calculating cubes and cube roots by the Old Babylonian period (20th to 16th centuries BC). Cubic equations were known to the ancient Greek mathematician Diophantus. Hero of Alexandria devised a method for calculating cube roots in the 1st century CE. Methods for solving cubic equations and extracting cube roots appear in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BCE and commented on by Liu Hui in the 3rd century CE.
== See also ==
== Notes ==
== References ==
=== Sources ===
Leonhard Euler proved the Euler product formula for the Riemann zeta function in his thesis Variae observationes circa series infinitas (Various Observations about Infinite Series), published by St Petersburg Academy in 1737.
== The Euler product formula ==
The Euler product formula for the Riemann zeta function reads
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}}
where the left hand side equals the Riemann zeta function:
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=1+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+{\frac {1}{4^{s}}}+{\frac {1}{5^{s}}}+\ldots }
and the product on the right hand side extends over all prime numbers p:
{\displaystyle \prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}={\frac {1}{1-2^{-s}}}\cdot {\frac {1}{1-3^{-s}}}\cdot {\frac {1}{1-5^{-s}}}\cdot {\frac {1}{1-7^{-s}}}\cdots {\frac {1}{1-p^{-s}}}\cdots }
== Proof of the Euler product formula ==
This sketch of a proof makes use of simple algebra only. This was the method by which Euler originally discovered the formula. There is a certain sieving property that we can use to our advantage:
{\displaystyle \zeta (s)=1+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+{\frac {1}{4^{s}}}+{\frac {1}{5^{s}}}+\ldots }
{\displaystyle {\frac {1}{2^{s}}}\zeta (s)={\frac {1}{2^{s}}}+{\frac {1}{4^{s}}}+{\frac {1}{6^{s}}}+{\frac {1}{8^{s}}}+{\frac {1}{10^{s}}}+\ldots }
Subtracting the second equation from the first we remove all elements that have a factor of 2:
{\displaystyle \left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1+{\frac {1}{3^{s}}}+{\frac {1}{5^{s}}}+{\frac {1}{7^{s}}}+{\frac {1}{9^{s}}}+{\frac {1}{11^{s}}}+{\frac {1}{13^{s}}}+\ldots }
Repeating for the next term:
{\displaystyle {\frac {1}{3^{s}}}\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)={\frac {1}{3^{s}}}+{\frac {1}{9^{s}}}+{\frac {1}{15^{s}}}+{\frac {1}{21^{s}}}+{\frac {1}{27^{s}}}+{\frac {1}{33^{s}}}+\ldots }
Subtracting again we get:
{\displaystyle \left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1+{\frac {1}{5^{s}}}+{\frac {1}{7^{s}}}+{\frac {1}{11^{s}}}+{\frac {1}{13^{s}}}+{\frac {1}{17^{s}}}+\ldots }
where all elements having a factor of 3 or 2 (or both) are removed.
It can be seen that the right side is being sieved. Repeating infinitely for
{\displaystyle {\frac {1}{p^{s}}}}
where
{\displaystyle p}
is prime, we get:
…
(
1
−
1
11
s
)
(
1
−
1
7
s
)
(
1
−
1
5
s
)
(
1
−
1
3
s
)
(
1
−
1
2
s
)
ζ
(
s
)
=
1
{\displaystyle \ldots \left(1-{\frac {1}{11^{s}}}\right)\left(1-{\frac {1}{7^{s}}}\right)\left(1-{\frac {1}{5^{s}}}\right)\left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1}
Dividing both sides by everything but the ζ(s) we obtain:
{\displaystyle \zeta (s)={\frac {1}{\left(1-{\frac {1}{2^{s}}}\right)\left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{5^{s}}}\right)\left(1-{\frac {1}{7^{s}}}\right)\left(1-{\frac {1}{11^{s}}}\right)\ldots }}}
This can be written more concisely as an infinite product over all primes p:
{\displaystyle \zeta (s)=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}}
To make this proof rigorous, we need only observe that when
{\displaystyle \Re (s)>1}
, the sieved right-hand side approaches 1, which follows immediately from the convergence of the Dirichlet series for
{\displaystyle \zeta (s)}
.
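Numerically, the truncated product over primes approaches ζ(s); a quick sketch at s = 2, where ζ(2) = π²/6:

```python
import math

def primes_up_to(q: int) -> list:
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (q + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(q ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, q + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

s = 2.0
product = 1.0
for p in primes_up_to(10_000):
    product *= 1 / (1 - p ** (-s))
# tail error is roughly sum_{n > 10^4} 1/n^2 ~ 10^-4
assert abs(product - math.pi ** 2 / 6) < 1e-3
```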
== The case s = 1 ==
An interesting result can be found for ζ(1), the harmonic series:
…
(
1
−
1
11
)
(
1
−
1
7
)
(
1
−
1
5
)
(
1
−
1
3
)
(
1
−
1
2
)
ζ
(
1
)
=
1
{\displaystyle \ldots \left(1-{\frac {1}{11}}\right)\left(1-{\frac {1}{7}}\right)\left(1-{\frac {1}{5}}\right)\left(1-{\frac {1}{3}}\right)\left(1-{\frac {1}{2}}\right)\zeta (1)=1}
which can also be written as,
{\displaystyle \ldots \left({\frac {10}{11}}\right)\left({\frac {6}{7}}\right)\left({\frac {4}{5}}\right)\left({\frac {2}{3}}\right)\left({\frac {1}{2}}\right)\zeta (1)=1}
which is,
{\displaystyle \left({\frac {\ldots \cdot 10\cdot 6\cdot 4\cdot 2\cdot 1}{\ldots \cdot 11\cdot 7\cdot 5\cdot 3\cdot 2}}\right)\zeta (1)=1}
as,
{\displaystyle \zeta (1)=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\ldots }
thus,
{\displaystyle 1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\ldots ={\frac {2\cdot 3\cdot 5\cdot 7\cdot 11\cdot \ldots }{1\cdot 2\cdot 4\cdot 6\cdot 10\cdot \ldots }}}
While the series ratio test is inconclusive for the left-hand side, it may be shown divergent by bounding logarithms. Similarly, for the right-hand side, an infinite product of reals greater than one does not guarantee divergence, e.g.,
{\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n}=e}.
Instead, the partial products (whose numerators are primorials) may be bounded, using ln(1+x)≤x, as
{\displaystyle \prod _{k=1}^{n}{\frac {p_{k}}{p_{k}-1}}=e^{-\sum _{k=1}^{n}\ln \left(1-{\frac {1}{p_{k}}}\right)}\geq e^{\sum _{k=1}^{n}{\frac {1}{p_{k}}}},}
so that divergence is clear given the double-logarithmic divergence of the inverse prime series.
(Note that Euler's original proof for inverse prime series used just the converse direction to prove the divergence of the inverse prime series based on that of the Euler product and the harmonic series.)
== Another proof ==
Each factor (for a given prime p) in the product above can be expanded to a geometric series consisting of the reciprocal of p raised to multiples of s, as follows
{\displaystyle {\frac {1}{1-p^{-s}}}=1+{\frac {1}{p^{s}}}+{\frac {1}{p^{2s}}}+{\frac {1}{p^{3s}}}+\ldots +{\frac {1}{p^{ks}}}+\ldots }
When
{\displaystyle \Re (s)>1}
, this series converges absolutely. Hence we may take a finite number of factors, multiply them together, and rearrange terms. Taking all the primes p up to some prime number limit q, we have
{\displaystyle \left|\zeta (s)-\prod _{p\leq q}\left({\frac {1}{1-p^{-s}}}\right)\right|<\sum _{n=q+1}^{\infty }{\frac {1}{n^{\sigma }}}}
where σ is the real part of s. By the fundamental theorem of arithmetic, the partial product when expanded out gives a sum consisting of those terms n−s where n is a product of primes less than or equal to q. The inequality results from the fact that therefore only integers larger than q can fail to appear in this expanded out partial product. Since the difference between the partial product and ζ(s) goes to zero when σ > 1, we have convergence in this region.
== See also ==
Euler product
Riemann zeta function
== References ==
John Derbyshire, Prime Obsession: Bernhard Riemann and The Greatest Unsolved Problem in Mathematics, Joseph Henry Press, 2003, ISBN 978-0-309-08549-6
== Notes ==
In mathematics, the Riemann zeta function is a function in complex analysis, which is also important in number theory. It is often denoted
{\displaystyle \zeta (s)}
and is named after the mathematician Bernhard Riemann. When the argument
{\displaystyle s}
is a real number greater than one, the zeta function satisfies the equation
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}\,.}
It can therefore provide the sum of various convergent infinite series, such as
{\textstyle \zeta (2)={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\ldots \,.}
Explicit or numerically efficient formulae exist for
{\displaystyle \zeta (s)}
at integer arguments, all of which have real values, including this example. This article lists these formulae, together with tables of values. It also includes derivatives and some series composed of the zeta function at integer arguments.
The same equation in
{\displaystyle s}
above also holds when
{\displaystyle s}
is a complex number whose real part is greater than one, ensuring that the infinite sum still converges. The zeta function can then be extended to the whole of the complex plane by analytic continuation, except for a simple pole at
{\displaystyle s=1}
. The complex derivative exists in this more general region, making the zeta function a meromorphic function. The above equation no longer applies for these extended values of
{\displaystyle s}
, for which the corresponding summation would diverge. For example, the full zeta function exists at
{\displaystyle s=-1}
(and is therefore finite there), but the corresponding series would be
{\textstyle 1+2+3+\ldots \,,}
whose partial sums would grow indefinitely large.
The zeta function values listed below include function values at the negative even numbers (s = −2, −4, etc.), for which ζ(s) = 0 and which make up the so-called trivial zeros. The Riemann zeta function article includes a colour plot illustrating how the function varies over a continuous rectangular region of the complex plane. The successful characterisation of its non-trivial zeros in the wider plane is important in number theory, because of the Riemann hypothesis.
== The Riemann zeta function at 0 and 1 ==
At zero, one has
{\displaystyle \zeta (0)={B_{1}^{-}}=-{B_{1}^{+}}=-{\tfrac {1}{2}}\!}
At 1 there is a pole, so ζ(1) is not finite but the left and right limits are:
{\displaystyle \lim _{\varepsilon \to 0^{\pm }}\zeta (1+\varepsilon )=\pm \infty }
Since it is a pole of first order, it has a complex residue
{\displaystyle \lim _{\varepsilon \to 0}\varepsilon \zeta (1+\varepsilon )=1\,.}
== Positive integers ==
=== Even positive integers ===
For the even positive integers {\displaystyle n}, one has the relationship to the Bernoulli numbers {\displaystyle B_{n}}:
{\displaystyle \zeta (n)=(-1)^{{\tfrac {n}{2}}+1}{\frac {(2\pi )^{n}B_{n}}{2(n!)}}\,.}
The computation of {\displaystyle \zeta (2)} is known as the Basel problem. The value of {\displaystyle \zeta (4)} is related to the Stefan–Boltzmann law and the Wien approximation in physics. The first few values are given by:
{\displaystyle {\begin{aligned}\zeta (2)&=1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\cdots ={\frac {\pi ^{2}}{6}}\\[4pt]\zeta (4)&=1+{\frac {1}{2^{4}}}+{\frac {1}{3^{4}}}+\cdots ={\frac {\pi ^{4}}{90}}\\[4pt]\zeta (6)&=1+{\frac {1}{2^{6}}}+{\frac {1}{3^{6}}}+\cdots ={\frac {\pi ^{6}}{945}}\\[4pt]\zeta (8)&=1+{\frac {1}{2^{8}}}+{\frac {1}{3^{8}}}+\cdots ={\frac {\pi ^{8}}{9450}}\\[4pt]\zeta (10)&=1+{\frac {1}{2^{10}}}+{\frac {1}{3^{10}}}+\cdots ={\frac {\pi ^{10}}{93555}}\\[4pt]\zeta (12)&=1+{\frac {1}{2^{12}}}+{\frac {1}{3^{12}}}+\cdots ={\frac {691\pi ^{12}}{638512875}}\\[4pt]\zeta (14)&=1+{\frac {1}{2^{14}}}+{\frac {1}{3^{14}}}+\cdots ={\frac {2\pi ^{14}}{18243225}}\\[4pt]\zeta (16)&=1+{\frac {1}{2^{16}}}+{\frac {1}{3^{16}}}+\cdots ={\frac {3617\pi ^{16}}{325641566250}}\,.\end{aligned}}}
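As an illustrative sketch (not part of the article), the closed forms above can be generated in Python from the Bernoulli-number relation: the standard recurrence produces exact rational Bernoulli numbers, and the formula for ζ(n) at even n is then applied directly. Function names here are illustrative.

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli(m):
    """Bernoulli numbers B_0..B_m (convention B_1 = -1/2), via the
    recurrence sum_{j=0}^{n} C(n+1, j) B_j = 0."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

def zeta_even(n):
    """zeta(n) = (-1)^(n/2+1) (2 pi)^n B_n / (2 n!) for even n >= 2."""
    Bn = bernoulli(n)[n]
    return (-1) ** (n // 2 + 1) * (2 * pi) ** n * float(Bn) / (2 * factorial(n))
```

For example, `zeta_even(2)` reproduces π²/6 and `zeta_even(12)` reproduces 691π¹²/638512875 to floating-point accuracy.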
Taking the limit {\displaystyle n\rightarrow \infty }, one obtains {\displaystyle \zeta (\infty )=1}.
The relationship between zeta at the positive even integers and powers of pi may be written as
{\displaystyle a_{n}\zeta (2n)=\pi ^{2n}b_{n}}
where {\displaystyle a_{n}} and {\displaystyle b_{n}} are coprime positive integers for all {\displaystyle n}. These are given by the integer sequences OEIS: A002432 and OEIS: A046988, respectively, in OEIS. Some of these values are reproduced below:
If we let {\displaystyle \eta _{n}=b_{n}/a_{n}} be the coefficient of {\displaystyle \pi ^{2n}} as above,
{\displaystyle \zeta (2n)=\sum _{\ell =1}^{\infty }{\frac {1}{\ell ^{2n}}}=\eta _{n}\pi ^{2n}}
then we find recursively,
{\displaystyle {\begin{aligned}\eta _{1}&=1/6\\\eta _{n}&=\sum _{\ell =1}^{n-1}(-1)^{\ell -1}{\frac {\eta _{n-\ell }}{(2\ell +1)!}}+(-1)^{n+1}{\frac {n}{(2n+1)!}}\end{aligned}}}
This recurrence relation may be derived from that for the Bernoulli numbers.
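A short Python sketch (illustrative, not from the article) of this recurrence, using exact rational arithmetic so that η_n = ζ(2n)/π^(2n) comes out as the exact fractions 1/6, 1/90, 1/945, …:

```python
from fractions import Fraction
from math import factorial

def eta(n):
    """eta_n = zeta(2n)/pi^(2n) as an exact rational, from
    eta_1 = 1/6 and
    eta_n = sum_{l=1}^{n-1} (-1)^(l-1) eta_{n-l}/(2l+1)! + (-1)^(n+1) n/(2n+1)!."""
    e = {1: Fraction(1, 6)}
    for m in range(2, n + 1):
        s = sum(Fraction((-1) ** (l - 1), factorial(2 * l + 1)) * e[m - l]
                for l in range(1, m))
        e[m] = s + Fraction((-1) ** (m + 1) * m, factorial(2 * m + 1))
    return e[n]
```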
Also, there is another recurrence:
{\displaystyle \zeta (2n)={\frac {1}{n+{\frac {1}{2}}}}\sum _{k=1}^{n-1}\zeta (2k)\zeta (2n-2k)\quad {\text{ for }}\quad n>1}
which can be proved using the fact that
{\displaystyle {\frac {d}{dx}}\cot(x)=-1-\cot ^{2}(x)}
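Numerically, this quadratic recurrence reproduces the tabulated values starting from ζ(2) alone; a minimal sketch (function name illustrative):

```python
import math

def zeta_even_rec(n):
    """zeta(2n) from zeta(2) = pi^2/6 alone, via
    zeta(2n) = 1/(n + 1/2) * sum_{k=1}^{n-1} zeta(2k) zeta(2n-2k) for n > 1."""
    z = {1: math.pi ** 2 / 6}
    for m in range(2, n + 1):
        z[m] = sum(z[k] * z[m - k] for k in range(1, m)) / (m + 0.5)
    return z[n]
```

For instance, `zeta_even_rec(2)` gives (2/5)·(π²/6)² = π⁴/90, matching the table above.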
The values of the zeta function at non-negative even integers have the generating function:
{\displaystyle \sum _{n=0}^{\infty }\zeta (2n)x^{2n}=-{\frac {\pi x}{2}}\cot(\pi x)=-{\frac {1}{2}}+{\frac {\pi ^{2}}{6}}x^{2}+{\frac {\pi ^{4}}{90}}x^{4}+{\frac {\pi ^{6}}{945}}x^{6}+\cdots }
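As an illustrative numerical check (not from the article), partial sums of the left-hand side can be compared with the closed form at a sample point inside the radius of convergence |x| < 1; the helper approximates each ζ(2n) by a direct sum with a simple integral tail estimate.

```python
import math

def zeta_approx(s, N=10000):
    # Direct sum plus an integral estimate of the remainder from N onward.
    return sum(k ** -s for k in range(1, N)) + N ** (1 - s) / (s - 1)

def gen_fun(x, terms=40):
    # Partial sum of sum_{n>=0} zeta(2n) x^(2n), using zeta(0) = -1/2.
    return -0.5 + sum(zeta_approx(2 * n) * x ** (2 * n) for n in range(1, terms))

x = 0.3
closed_form = -(math.pi * x / 2) / math.tan(math.pi * x)  # -(pi x / 2) cot(pi x)
```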
Since {\displaystyle \lim _{n\rightarrow \infty }\zeta (2n)=1}, the formula also shows that for {\displaystyle n\in \mathbb {N} ,n\rightarrow \infty },
{\displaystyle \left|B_{2n}\right|\sim {\frac {(2n)!\,2}{\;~(2\pi )^{2n}\,}}}
=== Odd positive integers ===
The sum of the harmonic series is infinite.
{\displaystyle \zeta (1)=1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots =\infty \!}
The value ζ(3) is also known as Apéry's constant and has a role in the electron's gyromagnetic ratio.
The value ζ(3) also appears in Planck's law.
These and additional values are:
It is known that ζ(3) is irrational (Apéry's theorem) and that infinitely many of the numbers ζ(2n + 1), n ∈ {\displaystyle \mathbb {N} }, are irrational. There are also results on the irrationality of values of the Riemann zeta function at the elements of certain subsets of the positive odd integers; for example, at least one of ζ(5), ζ(7), ζ(9), or ζ(11) is irrational.
The values of the zeta function at the positive odd integers appear in physics, specifically in correlation functions of the antiferromagnetic XXX spin chain.
Most of the identities following below are provided by Simon Plouffe. They are notable in that they converge quite rapidly, giving almost three digits of precision per iteration, and are thus useful for high-precision calculations.
Plouffe stated the following identities without proof. Proofs were later given by other authors.
==== ζ(5) ====
{\displaystyle {\begin{aligned}\zeta (5)&={\frac {1}{294}}\pi ^{5}-{\frac {72}{35}}\sum _{n=1}^{\infty }{\frac {1}{n^{5}(e^{2\pi n}-1)}}-{\frac {2}{35}}\sum _{n=1}^{\infty }{\frac {1}{n^{5}(e^{2\pi n}+1)}}\\\zeta (5)&=12\sum _{n=1}^{\infty }{\frac {1}{n^{5}\sinh(\pi n)}}-{\frac {39}{20}}\sum _{n=1}^{\infty }{\frac {1}{n^{5}(e^{2\pi n}-1)}}+{\frac {1}{20}}\sum _{n=1}^{\infty }{\frac {1}{n^{5}(e^{2\pi n}+1)}}\end{aligned}}}
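The rapid convergence can be seen directly: since e^(2πn) grows so quickly, about a dozen terms of the first identity already exhaust double precision. An illustrative Python sketch (function name is ours, not from the source):

```python
import math

def plouffe_zeta5(terms=12):
    """First of Plouffe's two series for zeta(5)."""
    s_minus = sum(1 / (n ** 5 * (math.exp(2 * math.pi * n) - 1))
                  for n in range(1, terms))
    s_plus = sum(1 / (n ** 5 * (math.exp(2 * math.pi * n) + 1))
                 for n in range(1, terms))
    return math.pi ** 5 / 294 - (72 / 35) * s_minus - (2 / 35) * s_plus
```

The result agrees with the defining series ζ(5) = Σ 1/n⁵ (Apéry-style direct summation) to well beyond nine decimal places.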
==== ζ(7) ====
{\displaystyle \zeta (7)={\frac {19}{56700}}\pi ^{7}-2\sum _{n=1}^{\infty }{\frac {1}{n^{7}(e^{2\pi n}-1)}}\!}
Note that the sum is in the form of a Lambert series.
==== ζ(2n + 1) ====
By defining the quantities
{\displaystyle S_{\pm }(s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}(e^{2\pi n}\pm 1)}}}
a series of relationships can be given in the form
{\displaystyle 0=a_{n}\zeta (n)-b_{n}\pi ^{n}+c_{n}S_{-}(n)+d_{n}S_{+}(n)}
where an, bn, cn and dn are positive integers. Plouffe gives a table of values:
These integer constants may be expressed as sums over Bernoulli numbers, as given in (Vepstas, 2006) below.
A fast algorithm for the calculation of Riemann's zeta function for any integer argument is given by E. A. Karatsuba.
== Negative integers ==
In general, for negative integers (and also zero), one has
{\displaystyle \zeta (-n)=(-1)^{n}{\frac {B_{n+1}}{n+1}}}
The so-called "trivial zeros" occur at the negative even integers:
{\displaystyle \zeta (-2n)=0}
(Ramanujan summation)
The first few values for negative odd integers are
{\displaystyle {\begin{aligned}\zeta (-1)&=-{\frac {1}{12}}\\[4pt]\zeta (-3)&={\frac {1}{120}}\\[4pt]\zeta (-5)&=-{\frac {1}{252}}\\[4pt]\zeta (-7)&={\frac {1}{240}}\\[4pt]\zeta (-9)&=-{\frac {1}{132}}\\[4pt]\zeta (-11)&={\frac {691}{32760}}\\[4pt]\zeta (-13)&=-{\frac {1}{12}}\end{aligned}}}
However, just like the Bernoulli numbers, these do not stay small for increasingly negative odd values. For details on the first value, see 1 + 2 + 3 + 4 + · · ·.
Thus ζ(m) can be used as a definition of all the Bernoulli numbers, including those of index 0 and 1.
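An illustrative Python sketch of the relation ζ(−n) = (−1)^n B_{n+1}/(n+1), producing the exact rationals listed above from the standard Bernoulli recurrence (helper names are ours):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """B_0..B_m with B_1 = -1/2, via sum_{j=0}^{n} C(n+1, j) B_j = 0."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

def zeta_neg(n):
    """zeta(-n) = (-1)^n B_{n+1} / (n+1) for integer n >= 0 (exact)."""
    return Fraction((-1) ** n) * bernoulli_numbers(n + 1)[n + 1] / (n + 1)
```

Note that `zeta_neg(2)`, `zeta_neg(4)`, … vanish, recovering the trivial zeros, while `zeta_neg(0)` gives −1/2.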
== Derivatives ==
The derivative of the zeta function at the negative even integers is given by
{\displaystyle \zeta ^{\prime }(-2n)=(-1)^{n}{\frac {(2n)!}{2(2\pi )^{2n}}}\zeta (2n+1)\,.}
The first few values are
{\displaystyle {\begin{aligned}\zeta ^{\prime }(-2)&=-{\frac {\zeta (3)}{4\pi ^{2}}}\\[4pt]\zeta ^{\prime }(-4)&={\frac {3}{4\pi ^{4}}}\zeta (5)\\[4pt]\zeta ^{\prime }(-6)&=-{\frac {45}{8\pi ^{6}}}\zeta (7)\\[4pt]\zeta ^{\prime }(-8)&={\frac {315}{4\pi ^{8}}}\zeta (9)\,.\end{aligned}}}
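These rational coefficients follow directly from the general formula, since (2π)^(2n) = 4^n π^(2n); an illustrative check in Python (function name is ours):

```python
from fractions import Fraction
from math import factorial

def deriv_coeff(n):
    """c_n in zeta'(-2n) = c_n * zeta(2n+1) / pi^(2n), i.e.
    c_n = (-1)^n (2n)! / (2 * 4^n)."""
    return Fraction((-1) ** n * factorial(2 * n), 2 * 4 ** n)
```

For n = 1, 2, 3, 4 this yields −1/4, 3/4, −45/8, 315/4, matching the values displayed above.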
One also has
{\displaystyle {\begin{aligned}\zeta ^{\prime }(0)&=-{\frac {1}{2}}\ln(2\pi )\\[4pt]\zeta ^{\prime }(-1)&={\frac {1}{12}}-\ln A\\[4pt]\zeta ^{\prime }(2)&={\frac {1}{6}}\pi ^{2}(\gamma +\ln 2-12\ln A+\ln \pi )\end{aligned}}}
where A is the Glaisher–Kinkelin constant. The first of these identities implies that the regularized product of the reciprocals of the positive integers is {\displaystyle 1/{\sqrt {2\pi }}}, thus the amusing "equation" {\displaystyle \infty !={\sqrt {2\pi }}}.
From the logarithmic derivative of the functional equation,
{\displaystyle 2{\frac {\zeta '(1/2)}{\zeta (1/2)}}=\log(2\pi )+{\frac {\pi \cos(\pi /4)}{2\sin(\pi /4)}}-{\frac {\Gamma '(1/2)}{\Gamma (1/2)}}=\log(2\pi )+{\frac {\pi }{2}}+2\log 2+\gamma \,.}
== Series involving ζ(n) ==
The following sums can be derived from the generating function:
{\displaystyle \sum _{k=2}^{\infty }\zeta (k)x^{k-1}=-\psi _{0}(1-x)-\gamma }
where ψ0 is the digamma function.
{\displaystyle {\begin{aligned}\sum _{k=2}^{\infty }(\zeta (k)-1)&=1\\[4pt]\sum _{k=1}^{\infty }(\zeta (2k)-1)&={\frac {3}{4}}\\[4pt]\sum _{k=1}^{\infty }(\zeta (2k+1)-1)&={\frac {1}{4}}\\[4pt]\sum _{k=2}^{\infty }(-1)^{k}(\zeta (k)-1)&={\frac {1}{2}}\end{aligned}}}
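These identities are easy to check numerically; an illustrative sketch, approximating each ζ(k) − 1 by a direct sum with an integral tail estimate and truncating the outer sums where the terms (which decay like 2^(−k)) become negligible:

```python
def zeta_minus_one(k, N=2000):
    # zeta(k) - 1 = sum_{n>=2} n^-k; the tail from N is estimated by an integral.
    return sum(n ** -k for n in range(2, N)) + N ** (1 - k) / (k - 1)

all_k = sum(zeta_minus_one(k) for k in range(2, 60))       # close to 1
even_k = sum(zeta_minus_one(2 * k) for k in range(1, 30))  # close to 3/4
alt_k = sum((-1) ** k * zeta_minus_one(k) for k in range(2, 60))  # close to 1/2
```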
Series related to the Euler–Mascheroni constant (denoted by γ) are
{\displaystyle {\begin{aligned}\sum _{k=2}^{\infty }(-1)^{k}{\frac {\zeta (k)}{k}}&=\gamma \\[4pt]\sum _{k=2}^{\infty }{\frac {\zeta (k)-1}{k}}&=1-\gamma \\[4pt]\sum _{k=2}^{\infty }(-1)^{k}{\frac {\zeta (k)-1}{k}}&=\ln 2+\gamma -1\end{aligned}}}
and using the principal value
{\displaystyle \zeta (k)=\lim _{\varepsilon \to 0}{\frac {\zeta (k+\varepsilon )+\zeta (k-\varepsilon )}{2}}}
which of course affects only the value at 1, these formulae can be stated as
{\displaystyle {\begin{aligned}\sum _{k=1}^{\infty }(-1)^{k}{\frac {\zeta (k)}{k}}&=0\\[4pt]\sum _{k=1}^{\infty }{\frac {\zeta (k)-1}{k}}&=0\\[4pt]\sum _{k=1}^{\infty }(-1)^{k}{\frac {\zeta (k)-1}{k}}&=\ln 2\end{aligned}}}
and show that they depend on the principal value of ζ(1) = γ .
== Nontrivial zeros ==
Zeros of the Riemann zeta function other than the negative even integers are called "nontrivial zeros". The Riemann hypothesis states that the real part of every nontrivial zero must be 1/2; in other words, every nontrivial zero would be of the form z = 1/2 + yi, where y is a real number, and all zeros computed so far are indeed of this form. The following table contains the decimal expansion of Im(z) for the first few nontrivial zeros:
Andrew Odlyzko computed the first 2 million nontrivial zeros accurate to within 4×10⁻⁹, and the first 100 zeros accurate to within 1000 decimal places. See his website for the tables and bibliographies.
A table of about 103 billion zeros with high precision (±2⁻¹⁰² ≈ ±2·10⁻³¹) is available for interactive access and download (although in a very inconvenient compressed format) via LMFDB.
== Ratios ==
Although evaluating particular values of the zeta function is difficult, often certain ratios can be found by inserting particular values of the gamma function into the functional equation
{\displaystyle \zeta (s)=2^{s}\pi ^{s-1}\sin \left({\frac {\pi s}{2}}\right)\Gamma (1-s)\zeta (1-s)}
We have simple relations for half-integer arguments
{\displaystyle {\begin{aligned}{\frac {\zeta (3/2)}{\zeta (-1/2)}}&=-4\pi \\{\frac {\zeta (5/2)}{\zeta (-3/2)}}&=-{\frac {16\pi ^{2}}{3}}\\{\frac {\zeta (7/2)}{\zeta (-5/2)}}&={\frac {64\pi ^{3}}{15}}\\{\frac {\zeta (9/2)}{\zeta (-7/2)}}&={\frac {256\pi ^{4}}{105}}\end{aligned}}}
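These ratios follow by setting s = 3/2, 5/2, … in the functional equation, since ζ(s)/ζ(1−s) = 2^s π^(s−1) sin(πs/2) Γ(1−s); an illustrative check in Python using the standard library gamma function:

```python
import math

def zeta_ratio(s):
    """zeta(s)/zeta(1-s) from the functional equation:
    2^s * pi^(s-1) * sin(pi*s/2) * Gamma(1-s)."""
    return 2 ** s * math.pi ** (s - 1) * math.sin(math.pi * s / 2) * math.gamma(1 - s)
```

Evaluating at s = 3/2 reproduces −4π, and the remaining tabulated ratios follow in the same way.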
Other examples follow for more complicated evaluations and relations of the gamma function. For example, a consequence of the relation
{\displaystyle \Gamma \left({\tfrac {3}{4}}\right)=\left({\tfrac {\pi }{2}}\right)^{\tfrac {1}{4}}{\operatorname {AGM} \left({\sqrt {2}},1\right)}^{\tfrac {1}{2}}}
is the zeta ratio relation
{\displaystyle {\frac {\zeta (3/4)}{\zeta (1/4)}}=2{\sqrt {\frac {\pi }{(2-{\sqrt {2}})\operatorname {AGM} \left({\sqrt {2}},1\right)}}}}
where AGM is the arithmetic–geometric mean. In a similar vein, it is possible to form radical relations, such as from
{\displaystyle {\frac {\Gamma \left({\frac {1}{5}}\right)^{2}}{\Gamma \left({\frac {1}{10}}\right)\Gamma \left({\frac {3}{10}}\right)}}={\frac {\sqrt {1+{\sqrt {5}}}}{2^{\tfrac {7}{10}}{\sqrt[{4}]{5}}}}}
the analogous zeta relation is
{\displaystyle {\frac {\zeta (1/5)^{2}\zeta (7/10)\zeta (9/10)}{\zeta (1/10)\zeta (3/10)\zeta (4/5)^{2}}}={\frac {(5-{\sqrt {5}})\left({\sqrt {10}}+{\sqrt {5+{\sqrt {5}}}}\right)}{10\cdot 2^{\tfrac {3}{10}}}}}
== References ==
== Further reading ==
Ciaurri, Óscar; Navas, Luis M.; Ruiz, Francisco J.; Varona, Juan L. (May 2015). "A Simple Computation of ζ(2k)". The American Mathematical Monthly. 122 (5): 444–451. arXiv:1209.5030. doi:10.4169/amer.math.monthly.122.5.444. JSTOR 10.4169/amer.math.monthly.122.5.444. S2CID 207521195.
Simon Plouffe, "Identities inspired from Ramanujan Notebooks Archived 2009-01-30 at the Wayback Machine", (1998).
Simon Plouffe, "Identities inspired by Ramanujan Notebooks part 2 PDF Archived 2011-09-26 at the Wayback Machine" (2006).
Vepstas, Linas (2006). "On Plouffe's Ramanujan identities" (PDF). The Ramanujan Journal. 27 (3): 387–408. arXiv:math.NT/0609775. doi:10.1007/s11139-011-9335-9. S2CID 8789411.
Zudilin, Wadim (2001). "One of the Numbers ζ(5), ζ(7), ζ(9), ζ(11) Is Irrational". Russian Mathematical Surveys. 56 (4): 774–776. Bibcode:2001RuMaS..56..774Z. doi:10.1070/RM2001v056n04ABEH000427. MR 1861452. S2CID 250734661. PDF PDF Russian PS Russian
Nontrivial zeros reference by Andrew Odlyzko:
Bibliography
Tables | Wikipedia/Particular_values_of_the_Riemann_zeta_function |
The term figurate number is used by different writers for members of different sets of numbers, generalizing from triangular numbers to different shapes (polygonal numbers) and different dimensions (polyhedral numbers). The ancient Greek mathematicians already considered triangular numbers, polygonal numbers, tetrahedral numbers, and pyramidal numbers, and subsequent mathematicians have included other classes of these numbers including numbers defined from other types of polyhedra and from their analogs in other dimensions.
== Terminology ==
Some kinds of figurate number were discussed in the 16th and 17th centuries under the name "figural number".
In historical works about Greek mathematics the preferred term used to be figured number.
In a use going back to Jacob Bernoulli's Ars Conjectandi, the term figurate number is used for triangular numbers made up of successive integers, tetrahedral numbers made up of successive triangular numbers, etc. These turn out to be the binomial coefficients. In this usage the square numbers (4, 9, 16, 25, ...) would not be considered figurate numbers when viewed as arranged in a square.
A number of other sources use the term figurate number as synonymous for the polygonal numbers, either just the usual kind or both those and the centered polygonal numbers.
== History ==
The mathematical study of figurate numbers is said to have originated with Pythagoras, possibly based on Babylonian or Egyptian precursors. Generating whichever class of figurate numbers the Pythagoreans studied using gnomons is also attributed to Pythagoras. Unfortunately, there is no trustworthy source for these claims, because all surviving writings about the Pythagoreans are from centuries later. Speusippus is the earliest source to expose the view that ten, as the fourth triangular number, was in fact the tetractys, supposed to be of great importance for Pythagoreanism. Figurate numbers were a concern of the Pythagorean worldview. It was well understood that some numbers could have many figurations, e.g. 36 is both a square and a triangle and also various rectangles.
The modern study of figurate numbers goes back to Pierre de Fermat, specifically the Fermat polygonal number theorem. Later, it became a significant topic for Euler, who gave an explicit formula for all triangular numbers that are also perfect squares, among many other discoveries relating to figurate numbers.
Figurate numbers have played a significant role in modern recreational mathematics. In research mathematics, figurate numbers are studied by way of the Ehrhart polynomials, polynomials that count the number of integer points in a polygon or polyhedron when it is expanded by a given factor.
== Triangular numbers and their analogs in higher dimensions ==
The triangular numbers for n = 1, 2, 3, ... are the result of the juxtaposition of the linear numbers (linear gnomons) for n = 1, 2, 3, ...:
These are the binomial coefficients {\displaystyle \textstyle {\binom {n+1}{2}}}. This is the case r = 2 of the fact that the rth diagonal of Pascal's triangle for r ≥ 0 consists of the figurate numbers for the r-dimensional analogs of triangles (r-dimensional simplices).
The simplicial polytopic numbers for r = 1, 2, 3, 4, ... are:
{\displaystyle P_{1}(n)={\frac {n}{1}}={\binom {n+0}{1}}={\binom {n}{1}}}
(linear numbers),
{\displaystyle P_{2}(n)={\frac {n(n+1)}{2}}={\binom {n+1}{2}}}
(triangular numbers),
{\displaystyle P_{3}(n)={\frac {n(n+1)(n+2)}{6}}={\binom {n+2}{3}}}
(tetrahedral numbers),
{\displaystyle P_{4}(n)={\frac {n(n+1)(n+2)(n+3)}{24}}={\binom {n+3}{4}}}
(pentachoric numbers, pentatopic numbers, 4-simplex numbers),
{\displaystyle \qquad \vdots }
{\displaystyle P_{r}(n)={\frac {n(n+1)(n+2)\cdots (n+r-1)}{r!}}={\binom {n+(r-1)}{r}}}
(r-topic numbers, r-simplex numbers).
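The general formula reduces to a single binomial coefficient, so the whole family can be sketched in one line of Python (function name is ours):

```python
from math import comb

def simplex_number(r, n):
    """P_r(n) = n(n+1)...(n+r-1)/r! = C(n+r-1, r), the nth r-simplex number."""
    return comb(n + r - 1, r)
```

For r = 2 this yields the triangular numbers 1, 3, 6, 10, 15, …, and for r = 3 the tetrahedral numbers 1, 4, 10, 20, 35, ….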
The terms square number and cubic number derive from their geometric representation as a square or cube. The difference of two positive triangular numbers is a trapezoidal number.
== Gnomon ==
The gnomon is the piece added to a figurate number to transform it to the next larger one.
For example, the gnomon of the square number is the odd number, of the general form 2n + 1, n = 0, 1, 2, 3, .... The square of size 8 composed of gnomons looks like this:
{\displaystyle {\begin{matrix}1&2&3&4&5&6&7&8\\2&2&3&4&5&6&7&8\\3&3&3&4&5&6&7&8\\4&4&4&4&5&6&7&8\\5&5&5&5&5&6&7&8\\6&6&6&6&6&6&7&8\\7&7&7&7&7&7&7&8\\8&8&8&8&8&8&8&8\end{matrix}}}
To transform from the n-square (the square of size n) to the (n + 1)-square, one adjoins 2n + 1 elements: one to the end of each row (n elements), one to the end of each column (n elements), and a single one to the corner. For example, when transforming the 7-square to the 8-square, we add 15 elements; these adjunctions are the 8s in the above figure.
This gnomonic technique also provides a mathematical proof that the sum of the first n odd numbers is n2; the figure illustrates 1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 = 64 = 82.
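The identity is easy to verify programmatically; an illustrative sketch that builds n² by summing the successive gnomons 2k + 1:

```python
def square_from_gnomons(n):
    """Sum the first n odd numbers (the gnomons 2k + 1 for k = 0..n-1)."""
    return sum(2 * k + 1 for k in range(n))
```

For n = 8 this gives 1 + 3 + 5 + 7 + 9 + 11 + 13 + 15 = 64 = 8², as in the figure.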
There is a similar gnomon with centered hexagonal numbers adding up to make cubes of each integer number.
== Notes ==
== Further reading ==
Deza, Elena; Deza, Michel Marie (2012), Figurate Numbers, World Scientific, ISBN 978-981-4355-48-3 | Wikipedia/Figurate_number |
The Geography (Ancient Greek: Γεωγραφικὴ Ὑφήγησις, Geōgraphikḕ Hyphḗgēsis, lit. "Geographical Guidance"), also known by its Latin names as the Geographia and the Cosmographia, is a gazetteer, an atlas, and a treatise on cartography, compiling the geographical knowledge of the 2nd-century Roman Empire. Originally written by Claudius Ptolemy in Greek at Alexandria around 150 AD, the work was a revision of a now-lost atlas by Marinus of Tyre using additional Roman and Persian gazetteers and new principles. Its translation – Kitab Surat al-Ard – into Arabic by al-Khwārizmī in the 9th century was highly influential on the geographical knowledge and cartographic traditions of the Islamic world. Alongside the works of Islamic scholars – and the commentary containing revised and more accurate data by Alfraganus – Ptolemy's work was subsequently highly influential on Medieval and Renaissance Europe.
== Manuscripts ==
Versions of Ptolemy's work in antiquity were probably proper atlases with attached maps, although some scholars believe that the references to maps in the text were later additions.
No Greek manuscript of the Geography survives from earlier than the 13th century. However fragmentary papyri of later somewhat derivative works such as the Table of Noteworthy Cities have been found with the earliest, Rylands Library GP 522, dating to the early 3rd century. A letter written by the Byzantine monk Maximus Planudes records that he searched for one for Chora Monastery in the summer of 1295; one of the earliest surviving texts may have been one of those he then assembled. In Europe, maps were sometimes redrawn using the coordinates provided by the text, as Planudes was forced to do. Later scribes and publishers could then copy these new maps, as Athanasius did for the emperor Andronicus II Palaeologus. The three earliest surviving texts with maps are those from Constantinople (Istanbul) based on Planudes's work.
The first Latin translation of these texts was made in 1406 or 1407 by Jacobus Angelus in Florence, Italy, under the name Geographia Claudii Ptolemaei. It is not thought that his edition had maps, although Manuel Chrysoloras had given Palla Strozzi a Greek copy of Planudes's maps in Florence in 1397.
=== Stemma ===
Berggren & Jones (2000) place these manuscripts into a stemma whereby U, K, F and N are connected with the activities of Maximos Planudes (c. 1255–1305). From a sister manuscript to UKFN descend R, V, W and C; however, the maps were either copied defectively or not at all. "Of the greatest importance for the text of the Geography", they state, is manuscript X (Vat.Gr.191), "because it is the only copy that is uninfluenced by the Byzantine revision", i.e. the 13th–14th century corrections of Planudes, possibly associated with recreating the maps.
Regarding the maps, they conclude that it is unlikely that maps from which the above stemma descends survived, even if maps existed in antiquity: "The transmission of Ptolemy's text certainly passed through a stage when the manuscripts were too small to contain the maps. Planudes and his assistants therefore probably had no pictorial models, and the success of their enterprise is proof that Ptolemy succeeded in his attempt to encode the map in words and numbers. The copies of the maps in later manuscripts and printed editions of the Geography were reproduced from Planudes' reconstructions." Mittenhuber (2010) further divides the stemma into two recensions of the original c. AD 150 lost work: Ξ and Ω (c. 3rd/4th cent., lost). Recension Ω contains most of the extant manuscripts and is subdivided into a further two groups: Δ and Π. Group Δ contains parchment manuscripts from the end of the thirteenth century, which are the earliest extant manuscripts of the Geography; these are U, K and F. Recension Ξ is represented by one codex only, X. Mittenhuber agrees with Berggren & Jones, stating that "The so-called Codex X is of particular significance, because it contains many local names and coordinates that differ from the other manuscripts ... which cannot be explained by mere errors in the tradition."
Although no manuscripts survive from earlier than the late 13th century, there are references to the existence of ancient codices in late antiquity. One such example is in an epistle by Cassiodorus (c. 560 AD): "Tum, si vos notitiae nobilis cura inflammaverit, habetis Ptolemaei codicem, qui sic omnia loca evidenter expressit, ut eum cunctarum regionum paene incolam fuisse iudicetis. Eoque fit, ut uno loco positi, sicut monachos decet, animo percurratis, quod aliquorum peregrinatio plurimo labore collegit." (Institutiones 1, 25). The existence of ancient recensions that differ fundamentally from the surviving manuscript tradition can be seen in the epitomes of Markianos by Stephanus: "Καὶ ἄλλοι οὕτως διὰ του π Πρετανίδες νῆσοι, ὡς Μαρκιανὸς καὶ Πτολεμαῖος." The tradition preserved within the stemma of surviving (13th–14th century) manuscripts by Stückelberger & Grasshoff preserves only "Β" and not "Π" recensions of "Βρεττανική".
== Contents ==
The Geography consists of three sections, divided among 8 books. Book I is a treatise on cartography and chorography, describing the methods used to assemble and arrange Ptolemy's data. From Book II through the beginning of Book VII, a gazetteer provides longitude and latitude values for the world known to the ancient Romans (the "ecumene"). The rest of Book VII provides details on three projections to be used for the construction of a map of the world, varying in complexity and fidelity. Book VIII constitutes an atlas of regional maps. The maps include a recapitulation of some of the values given earlier in the work, which were intended to be used as captions to clarify the map's contents and maintain their accuracy during copying. Book 8 formed the basis for the Table of Noteworthy Cities.
=== Cartographical treatise ===
Maps based on scientific principles had been made in Europe since the time of Eratosthenes in the 3rd century BC. Ptolemy improved the treatment of map projections. He provided instructions on how to create his maps in the first section of the work.
=== Gazetteer ===
The gazetteer section of Ptolemy's work provided latitude and longitude coordinates for all the places and geographical features in the work. Latitude was expressed in degrees of arc from the equator, the same system that is used now, though Ptolemy used fractions of a degree rather than minutes of arc. His Prime Meridian, of 0 longitude, ran through the Fortunate Isles, the westernmost land recorded, at around the position of El Hierro in the Canary Islands. The maps spanned 180 degrees of longitude from the Fortunate Isles in the Atlantic to China.
Ptolemy was aware that Europe knew only about a quarter of the globe.
=== Atlas ===
Ptolemy's work included a single large and less detailed world map and then separate and more detailed regional maps. The first Greek manuscripts compiled after Maximus Planudes's rediscovery of the text had as many as 64 regional maps. The standard set in Western Europe came to be 26: 10 European maps, 4 African maps, and 12 Asian maps. As early as the 1420s, these canonical maps were complemented by extra-Ptolemaic regional maps depicting, e.g., Scandinavia.
== Content ==
The Geography is spread over 8 books; the main body of the work (books 2–7) is a list of some 8000 toponyms comprising the Oikumene of the second century AD. Book 1 is written in prose and is Ptolemy's explanation of the project, his method and his sources (mainly Marinos of Tyre). Book 8 offers descriptions for each of the maps created in books 2–7 and forms the basis of the Table of Noteworthy Cities. The critical edition was published by Stückelberger, Mittenhuber and Klöti (2006).
=== Book 1 ===
Book 1 is a theoretical treatise by Ptolemy outlining the subject matter, previous work and instructing the reader how to draw a world map using his projection systems. The sections are, to use Ptolemy's original titles:
On the difference between world cartography and regional cartography
On the prerequisites for world cartography
How the number of stades in the earth's circumference can be obtained from the number of stades in an arbitrary rectilinear interval, and vice versa, even if [the interval] is not on a single meridian
That it is necessary to give priority to the [astronomical] phenomena over [data] from records of travel
That it is necessary to follow the most recent researches because of changes in the world over time
On Marinos' guide to world cartography
Revision of Marinos' latitudinal dimension of the known world on the basis of the [astronomical] phenomena
The same revision [of the latitudinal dimension], on the basis of land journeys
The same revision [of the latitudinal dimension], on the basis of sea journeys
That one should not put the Aithiopians south of the parallel situated opposite to that through Meroe
On the computations that Marinos improperly made for the longitudinal dimension of the oikoumene
The revision of the longitudinal dimension of the known world on the basis of journeys by land
The same revision [of the longitudinal dimension] on the basis of journeys by sea
On the crossing from the Golden Peninsula to Kattigara
On the inconsistencies in details of Marinos' exposition
That certain matters escaped [Marinos'] notice in the boundaries of the provinces
On the inconsistencies between [Marinos] and the reports of our time
On the inconvenience of Marinos' compilations for drawing a map of the oikoumene
On the convenience of our catalogue for making a map
On the disproportional nature of Marinos' geographical map
On the things that should be preserved in a planar map
On how one should make a map of the oikoumene on a globe
List of the meridians and parallels to be included in the map
Method of making a map of the oikoumene in the plane in proper proportionality with its configuration on the globe (In this section Ptolemy explains two methods for projecting his map)
=== Book 2 ===
Western Atlantic fringes, Gaul, Central Europe and the Iberian Peninsula.
=== Book 3 ===
Italy, Greece and the major Mediterranean Islands.
=== Book 4 ===
North Africa from Morocco to Egypt and Ethiopia.
=== Book 5 ===
Covering Anatolia, Asia Minor, the Middle East and Near East as well as Cyprus.
=== Book 6 ===
In book 6, Ptolemy covers the Near East, Caucasus and Central Asia.
=== Book 7 ===
India, China, and Sri Lanka.
=== Book 8 ===
Descriptions of the maps created by the previous sections with details of day length at the solstice, etc. The gazetteer of toponyms is thought to have formed the basis for the Table of Noteworthy Cities.
== History ==
=== Antiquity ===
The original treatise by Marinus of Tyre that formed the basis of Ptolemy's Geography has been completely lost. A world map based on Ptolemy was displayed in Augustodunum (Autun, France) in late Roman times. Pappus, writing at Alexandria in the 4th century, produced a commentary on Ptolemy's Geography and used it as the basis of his (now lost) Chorography of the Ecumene. Later imperial writers and mathematicians, however, seem to have restricted themselves to commenting on Ptolemy's text, rather than improving upon it; surviving records actually show decreasing fidelity to real position. Nevertheless, Byzantine scholars continued these geographical traditions throughout the Medieval period.
Whereas previous Greco-Roman geographers such as Strabo and Pliny the Elder demonstrated a reluctance to rely on the contemporary accounts of sailors and merchants who plied distant areas of the Indian Ocean, Marinus and Ptolemy betray a much greater receptiveness to incorporating information received from them. For instance, Grant Parker argues that it would be highly implausible for them to have constructed the Bay of Bengal as precisely as they did without the accounts of sailors. When it comes to the account of the Golden Chersonese (i.e. Malay Peninsula) and the Magnus Sinus (i.e. Gulf of Thailand and South China Sea), Marinus and Ptolemy relied on the testimony of a Greek sailor named Alexandros, who claimed to have visited a far eastern site called "Cattigara" (most likely Oc Eo, Vietnam, the site of unearthed Antonine-era Roman goods and not far from the region of Jiaozhi in northern Vietnam where ancient Chinese sources claim several Roman embassies first landed in the 2nd and 3rd centuries).
=== Medieval Islam ===
Muslim cartographers were using copies of Ptolemy's Almagest and Geography by the 9th century. At that time, in the court of the caliph al-Maʾmūn, al-Khwārazmī compiled his Book of the Depiction of the Earth (Kitab Surat al-Ard), which mimicked the Geography in providing the coordinates for 545 cities and regional maps of the Nile, the Island of the Jewel, the Sea of Darkness, and the Sea of Azov. A copy made in 1037 contains the earliest extant maps from Islamic lands. The text clearly states that al-Khwārazmī was working from an earlier map, although this could not have been an exact copy of Ptolemy's work: his Prime Meridian was 10° east of Ptolemy's, he adds some places, and his latitudes differ. C.A. Nallino suggests that the work was not based on Ptolemy but on a derivative world map, presumably in Syriac or Arabic. The coloured map of al-Maʾmūn constructed by a team including al-Khwārazmī was described by the Persian encyclopædist al-Masʿūdī around 956 as superior to the maps of Marinus and Ptolemy, probably indicating that it was built along similar mathematical principles. It included 4530 cities and over 200 mountains.
Despite beginning to compile numerous gazetteers of places and coordinates indebted to Ptolemy, Muslim scholars made almost no direct use of Ptolemy's principles in the maps which have survived. Instead, they followed al-Khwārazmī's modifications and the orthogonal projection advocated by Suhrāb's early 10th-century treatise on the Marvels of the Seven Climes to the End of Habitation. Surviving maps from the medieval period were not done according to mathematical principles. The world map from the 11th-century Book of Curiosities is the earliest surviving map of the Muslim or Christian worlds to include a geographic coordinate system but the copyist seems to have not understood its purpose, starting it from the left using twice the intended scale and then (apparently realizing his mistake) giving up halfway through. Its presence does strongly suggest the existence of earlier, now-lost maps which had been mathematically derived in the manner of Ptolemy, al-Khwārazmi, or Suhrāb. There are surviving reports of such maps.
Ptolemy's Geography was translated from Arabic into Latin at the court of King Roger II of Sicily in the 12th century AD. However, no copy of that translation has survived.
=== Renaissance ===
The Greek text of the Geography reached Florence from Constantinople in about 1400 and was translated into Latin by Jacobus Angelus of Scarperia around 1406. The reception of the Geography in Latin Europe was diverse. In the first half of the 15th century, Florentine humanists used it mainly as a philological resource to understand the geography of ancient texts; Venetian cartographers attempted to reconcile Ptolemaic maps with portolan charts and medieval mappaemundi, and French and German scholars with an interest in astrology focused on Ptolemy's cosmographical concepts. Over the second half of the century, the prestige of the Geography grew to become the necessary framework of any reflection on geographical space.
The first printed edition with maps, published in 1477 in Bologna, was also the first printed book with engraved illustrations. Many editions followed (more often using woodcut in the early days), some following traditional versions of the maps, and others updating them. An edition published at Ulm in 1482 was the first one printed north of the Alps. It became a commercial success and was reprinted in 1486. Also in 1482, Francesco Berlinghieri printed the first edition in vernacular Italian. The edition published in Strasbourg in 1513 was a major step in the modernization of the Geography. It preserved the corpus of Ptolemy's text and maps as faithfully as possible to the original while it provided a separate set of 20 more accurate and up-to-date modern maps. A much improved Latin translation of the Greek original was produced by Willibald Pirckheimer for the 1525 Strasbourg edition, and the first printed edition directly in Greek was authored by Erasmus of Rotterdam in Basel in 1533.
Ptolemy had mapped the whole world from the Fortunatae Insulae (Cape Verde or Canary Islands) eastward to the eastern shore of the Magnus Sinus. This known portion of the world was comprised within 180 degrees. In his extreme east Ptolemy placed Serica (the Land of Silk), the Sinarum Situs (the Port of the Sinae), and the emporium of Cattigara. On the 1489 map of the world by Henricus Martellus, which was based on Ptolemy's work, Asia terminated in its southeastern point in a cape, the Cape of Cattigara. Cattigara was understood by Ptolemy to be a port on the Sinus Magnus, or Great Gulf, the actual Gulf of Thailand, at eight and a half degrees north of the Equator, on the coast of Cambodia, which is where he located it in his Canon of Famous Cities. It was the easternmost port reached by shipping trading from the Graeco-Roman world to the lands of the Far East.
In Ptolemy's later and better-known Geography, a scribal error was made and Cattigara was located at eight and a half degrees South of the Equator. On Ptolemaic maps, such as that of Martellus, Catigara was located on the easternmost shore of the Mare Indicum, 180 degrees East of the Cape St Vincent at, due to the scribal error, eight and a half degrees South of the Equator.
Catigara is also shown at this location on Martin Waldseemüller's 1507 world map, which avowedly followed the tradition of Ptolemy. Ptolemy's information was thereby misinterpreted so that the coast of China, which should have been represented as part of the coast of eastern Asia, was falsely made to represent an eastern shore of the Indian Ocean. As a result, Ptolemy implied more land east of the 180th meridian and an ocean beyond. Marco Polo’s account of his travels in eastern Asia described lands and seaports on an eastern ocean apparently unknown to Ptolemy. Marco Polo’s narrative authorized the extensive additions to the Ptolemaic map shown on the 1492 globe of Martin Behaim. The fact that Ptolemy did not represent an eastern coast of Asia made it admissible for Behaim to extend that continent far to the east. Behaim’s globe placed Marco Polo’s Mangi and Cathay east of Ptolemy’s 180th meridian, and the Great Khan’s capital, Cambaluc (Beijing), on the 41st parallel of latitude at approximately 233 degrees East. Behaim allowed 60 degrees beyond Ptolemy’s 180 degrees for the mainland of Asia and 30 degrees more to the east coast of Cipangu (Japan). Cipangu and the mainland of Asia were thus placed only 90 and 120 degrees, respectively, west of the Canary Islands.
The Codex Seragliensis was used as the base of a new edition of the work in 2006. This new edition was used to "decode" Ptolemy's coordinates of Books 2 and 3 by an interdisciplinary team of TU Berlin, presented in publications in 2010 and 2012.
==== Influence on Christopher Columbus ====
Christopher Columbus modified this geography further by using 53⅔ Italian nautical miles as the length of a degree instead of the longer degree of Ptolemy, and by adopting Marinus of Tyre’s longitude of 225 degrees for the east coast of the Magnus Sinus. This resulted in a considerable eastward advancement of the longitudes given by Martin Behaim and other contemporaries of Columbus. By some process Columbus reasoned that the longitudes of eastern Asia and Cipangu respectively were about 270 and 300 degrees east, or 90 and 60 degrees west of the Canary Islands. He said that he had sailed 1100 leagues from the Canaries when he found Cuba in 1492. This was approximately where he thought the coast of eastern Asia would be found. On this basis of calculation he identified Hispaniola with Cipangu, which he had expected to find on the outward voyage at a distance of about 700 leagues from the Canaries. His later voyages resulted in further exploration of Cuba and in the discovery of South and Central America. At first South America, the Mundus Novus (New World), was considered to be a great island of continental proportions; but as a result of his fourth voyage, it was apparently considered to be identical with the great Upper India peninsula (India Superior) represented by Behaim – the Cape of Cattigara. This seems to be the best interpretation of the sketch map made by Alessandro Zorzi on the advice of Bartholomew Columbus (Christopher's brother) around 1506, which bears an inscription saying that according to the ancient geographer Marinus of Tyre and Christopher Columbus the distance from Cape St Vincent on the coast of Portugal to Cattigara on the peninsula of India Superior was 225 degrees, while according to Ptolemy the same distance was 180 degrees.
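The degree reckonings of Behaim and Columbus in the paragraphs above reduce to simple complement arithmetic on a 360° circle, with longitude counted eastward from the Canary Islands. A minimal sketch, using only the figures stated in the text:

```python
# Longitudes reckoned eastward from the Canary Islands (Ptolemy's
# Fortunate Isles): a position x degrees east lies (360 - x) degrees west.
def west_of_canaries(longitude_east: float) -> float:
    return 360 - longitude_east

# Behaim: mainland Asia extended 60 degrees beyond Ptolemy's 180,
# i.e. to 240 degrees East; Cipangu 30 degrees further, at 270 East.
assert west_of_canaries(240) == 120   # mainland Asia: 120 degrees west
assert west_of_canaries(270) == 90    # Cipangu: 90 degrees west

# Columbus: eastern Asia at about 270 degrees East, Cipangu at 300 East.
assert west_of_canaries(270) == 90    # eastern Asia: 90 degrees west
assert west_of_canaries(300) == 60    # Cipangu: 60 degrees west
```

These complements are why Columbus could expect to reach Cipangu after a far shorter westward voyage than Behaim's globe alone would suggest.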
=== Early modern Ottoman Empire ===
Prior to the 16th century, knowledge of geography in the Ottoman Empire was limited in scope, with almost no access to the works of earlier Islamic scholars that superseded Ptolemy. His Geography would again be translated and updated with commentary into Arabic under Mehmed II, who commissioned works from Byzantine scholar George Amiroutzes in 1465 and the Florentine humanist Francesco Berlinghieri in 1481.
== Longitudes error and Earth size ==
There are two related errors:
Considering a sample of 80 cities amongst the 6345 listed by Ptolemy, those that are both identifiable and for which we can expect a better distance measurement since they were well known, there is a systematic overestimation of the longitude by a factor of 1.428 with high confidence (coefficient of determination r² = 0.9935). This error produces evident deformations in Ptolemy's world map, most apparent, for example, in the profile of Italy, which is markedly stretched horizontally.
Ptolemy accepted that the known Ecumene spanned 180° of longitude, but instead of accepting Eratosthenes's estimate for the circumference of the Earth of 252,000 stadia, he shrinks it to 180,000 stadia, with a factor of 1.4 between the two figures.
This suggests Ptolemy rescaled his longitude data to fit with a figure of 180,000 stadia for the circumference of the Earth, which he described as a "general consensus". Ptolemy rescaled experimentally obtained data in many of his works on geography, astrology, music, and optics.
== See also ==
Almagest, Ptolemy's astronomical work
Description of Greece
Geographia Generalis
Diodorus Siculus
Geography and cartography in medieval Islam
Strabo
List of most expensive books and manuscripts
Kitab Surat al-Ard
Table of Noteworthy Cities
== Notes ==
== Citations ==
== References ==
Ptolemy. Translated by Jacobus Angelus (c. 1406), Geographia. (in Latin)
Berggren, J. Lennart & Alexander Jones (2000), Ptolemy's Geography: An Annotated Translation of the Theoretical Chapters, Princeton: Princeton University Press, ISBN 978-0-691-09259-1.
Clemens, Raymond (2008), "Medieval Maps in a Renaissance Context: Gregorio Dati", in Talbert, Richard J.A.; Unger, Richard Watson (eds.), Cartography in Antiquity and the Middle Ages: Fresh Perspectives, New Methods, Leiden: Koninklijke Brill NV, pp. 237–256
Dilke, Oswald Ashton Wentworth (1987a), "14 · Itineraries and Geographical Maps in the Early and Late Roman Empires" (PDF), History of Cartography, vol. I, Chicago: University of Chicago Press, pp. 234–257.
Dilke, Oswald Ashton Wentworth (1987b), "15 · Cartography in the Byzantine Empire" (PDF), History of Cartography, vol. I, Chicago: University of Chicago Press, pp. 258–275.
Diller, Aubrey (1940), "The Oldest Manuscripts of Ptolemaic Maps", Transactions of the American Philological Association, pp. 62–67.
Edson, Evelyn & al. (2004), Medieval Views of the Cosmos, Oxford: Bodleian Library, ISBN 978-1-85124-184-2.
al-Masʿūdī (1894), "Kitāb al-Tanbīh wa-al-ishrāf", Bibliotheca Geographorum Arabicorum, vol. 8, Leiden: Brill.
Mawer, Granville Allen (2013). "The Riddle of Cattigara". In Nichols, Robert and Martin Woods (ed.). Mapping Our World: Terra Incognita to Australia. National Library of Australia. pp. 38–39. ISBN 9780642278098.
Milanesi, Marica (1996), "A Forgotten Ptolemy: Harley Codex 3686 in the British Library", Imago Mundi, 48: 43–64, doi:10.1080/03085699608592832.
Nallino, C.A. (1939), "Al-Ḥuwārismī e il suo rifacimento della Geografia di Tolomeo", Raccolta di scritti editi e inediti, vol. V, Rome: Istituto per l'Oriente, pp. 458–532. (in Italian)
Parker, Grant (2008). The Making of Roman India. Cambridge University Press. ISBN 978-0-521-85834-2.
Peerlings, Robert; Laurentius, Frans; van den Bovenkamp, Jaap (2017), "The watermarks in the Rome editions of Ptolemy's Cosmography and more", Quaerendo, 47 (3–4), Leiden: Koninklijke Brill NV: 307–327, doi:10.1163/15700690-12341392.
Peerlings, Robert; Laurentius, Frans; van den Bovenkamp, Jaap (2018), "New findings and discoveries in the 1507/8 Rome edition of Ptolemy's Cosmography", Quaerendo, 48 (2), Leiden: Koninklijke Brill NV: 139–162, doi:10.1163/15700690-12341408, S2CID 165379448.
Rapoport, Yossef; et al. (2008), "The Book of Curiosities and a Unique Map of the World", Cartography in Antiquity and the Middle Ages: Fresh Perspectives, New Methods, Leiden: Koninklijke Brill NV, pp. 121–138.
Stückelberger, Alfred & al., eds. (2006), Ptolemaios Handbuch der Geographie (Griechisch-Deutsch) [Ptolemy's Manual on Geography (Greek/German)], Schwabe, ISBN 978-3-7965-2148-5. (in German and Greek)
Suárez, Thomas (1999), Early Mapping of Southeast Asia, Periplus Editions, ISBN 978-962-593-470-9.
Wright, John Kirtland (1923), "Notes on the Knowledge of Latitudes and Longitudes in the Middle Ages", Isis, V (1): 75–98, doi:10.1086/358121, JSTOR 223599, S2CID 143159033.
Young, Gary Keith (2001). Rome's Eastern Trade: International Commerce and Imperial Policy, 31 BC-AD 305. Routledge. ISBN 978-0-415-24219-6.
Yule, Henry (1915). Henri Cordier (ed.). Cathay and the Way Thither: Being a Collection of Medieval Notices of China, Vol I: Preliminary Essay on the Intercourse Between China and the Western Nations Previous to the Discovery of the Cape Route. Vol. 1. Hakluyt Society.
== Further reading ==
Blažek, Václav. "Etymological Analysis of Toponyms from Ptolemy's Description of Central Europe". In: Studia Celto-Slavica 3 (2010): 21–45. DOI: https://doi.org/10.54586/GTQF3679.
Blažek, Václav. "The North-Eastern Border of the Celtic World". In: Studia Celto-Slavica 8 (2018): 7–21. DOI: https://doi.org/10.54586/ZMEE3109.
Cosgrove, Dennis. 2003. Apollo's Eye: A Cartographic Genealogy of the Earth in the Western Imagination. Johns Hopkins University Press. Baltimore and London.
Gautier Dalché, Patrick. 2009. La Géographie de Ptolémée en Occident (IVe-XVIe siècle). Terrarum Orbis. Turnhout: Brepols.
Shalev, Zur, and Charles Burnett, eds. 2011. Ptolemy's Geography in the Renaissance. London; Turin. Warburg Institute; Nino Aragno. (In Appendix: Latin text of Jacopo Angeli's introduction to his translation of the Geography, with English translation by C. Burnett.)
Stevenson, Edward Luther. Trans. and ed. 1932. Claudius Ptolemy: The Geography. New York Public Library. Reprint: Dover, 1991. This is the only complete English translation of Ptolemy's most famous work. Unfortunately, it is marred by numerous mistakes (see Diller) and the place names are given in Latinised forms, rather than in the original Greek.
Diller, Aubrey (February 1935). "Review of Stevenson's translation". Isis. 22 (2): 533–539. doi:10.1086/346925. Retrieved 2007-07-15.
== External links ==
=== Primary sources ===
Greek
Klaudios Ptolemaios: Handbuch der Geographie, hrsg. von Alfred Stückelberger und Gerd Grasshoff (Basel: Schwabe, 2006)
(in Greek) Claudii Ptolemaei Geographia, ed. Karl Friedrich August Nobbe, Sumptibus et typis Caroli Tauchnitii, 1843, tom. I (books 1–4, missing p. 126); 1845, tom. II (books 5–8); 1845, tom. III (indices).
Rylands Library GP 522, the earliest papyrus fragment of the Table of Noteworthy Cities.
Vatican Urb.gr.82, the earliest surviving Byzantine manuscript maps of the Geography.
Latin
(in Latin) La Cosmographie de Claude Ptolemée, Latin manuscript copied around 1411
(in Latin) Geography, digitized codex made in Italy between 1460 and 1477, translated to Latin by Jacobus Angelus, at Somni. Also known as the Codex Valentinus, it is the oldest of the manuscript codices with maps of Ptolemy with the Donis projections.
(in Latin) "Cosmographia" / Claudius Ptolemaeus. Translated into Latin by Jacobus Angelus, and edited by Nicolaus Germanus. – Ulm : Lienhart Holle. – 1482. (In the National Library of Finland.)
(in Latin) Geographia Universalis, Basileae apud Henricum Petrum mense Martio anno M. D. XL. [of Basel, printed by Henricus Petrus in the month of March in the year 1540].
(in Latin) Geographia Cl. Ptolemaei Alexandrini, Venetiis : apud Vincentium Valgrisium, Venezia, 1562.
Portuguese
Pedro Nunes, Tratado da Sphera com a Theorica do Sol e da Lua e ho Primeiro Liuro da Geographia de Claudio Ptolomeo Alexãndrino, Oficina de Germão Galharde, Lisboa, 1537 (Republished in: Pedro Nunes. Obras, vol. I, Ed. Academia das Ciências de Lisboa, pp. 1-159).
Italian
(in Italian) Geografia cioè descrittione vniuersale della terra partita in due volumi..., In Venetia : appresso Gio. Battista et Giorgio Galignani fratelli, 1598.
(in Italian) Geografia di Claudio Tolomeo alessandrino, In Venetia : appresso gli heredi di Melchior Sessa, 1599.
English
Ptolemy's Geography at LacusCurtius (English translation)
Extracts of Ptolemy on the country of the Seres (China) (English translation)
1st critical edition of Geography Book 8, by Aubrey Diller
Geography Books 2.10-6.11 in English, with most Greece-related places geolocated, by John Brady Kiesling at ToposText
AncientMiddleEast.com, geo-located .KMZ plots of nearly all points in Non-European regions, Books 4-7, for use in GoogleEarth.
German
"Heaven and Earth : Ptolemy, the astronomer and geographer" (PDF). Archived (PDF) from the original on 6 June 2024. Retrieved 24 April 2025. 2006 Exhibition by Alfred Stückelberger, Florian Mittenhuber and Thomas Klöti
=== Secondary material ===
Ptolemy the Geographer
Ptolemy's Geography of Asia – Selected problems of Ptolemy's Geography of Asia (in German)
History of Cartography, including a discussion of the Geographia
Claudius Ptolemy’s East Africa Georeferenced and Visualized
Ptolemaios-Forschungsstelle (Ptolemy Research Institute, University of Bern) | Wikipedia/Geographia_(Ptolemy) |
Infographics (a clipped compound of "information" and "graphics") are graphic visual representations of information, data, or knowledge intended to present information quickly and clearly. They can improve cognition by using graphics to enhance the human visual system's ability to see patterns and trends. Similar pursuits are information visualization, data visualization, statistical graphics, information design, or information architecture. Infographics have evolved in recent years to be for mass communication, and thus are designed with fewer assumptions about the readers' knowledge base than other types of visualizations. Isotypes are an early example of infographics conveying information quickly and easily to the masses.
== Overview ==
Infographics have been around for many years, and recently the growing number of easy-to-use, free tools has made the creation of infographics available to a large segment of the population. Social media sites such as Facebook and Twitter have also allowed individual infographics to be spread among many people around the world. Infographics are widely used in an age of short attention spans.
In newspapers, infographics are commonly used to show the weather, as well as maps, site plans, and graphs for summaries of data. Some books are almost entirely made up of information graphics, such as David Macaulay's The Way Things Work. The Snapshots in USA Today are also an example of simple infographics used to convey news and current events.
Modern maps, especially route maps for transit systems, use infographic techniques to integrate a variety of information, such as the conceptual layout of the transit network, transfer points, and local landmarks. Public transportation maps, such as those for the Washington Metro and the London Underground, are well-known infographics. Public places such as transit terminals usually have some sort of integrated "signage system" with standardized icons and stylized maps.
In his 1983 "landmark book" The Visual Display of Quantitative Information, Edward Tufte defines "graphical displays" in the following passage:
Graphical displays should
show the data
induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production, or something else
avoid distorting what the data has to say
present many numbers in a small space
make large data sets coherent
encourage the eye to compare different pieces of data
reveal the data at several levels of detail, from a broad overview to the fine structure
serve a reasonably clear purpose: description, exploration, tabulation, or decoration
be closely integrated with the statistical and verbal descriptions of a data set.
Graphics reveal data. Indeed graphics can be more precise and revealing than conventional statistical computations.
== History ==
=== Early history ===
In 1626, Christoph Scheiner published the Rosa Ursina sive Sol, a book that revealed his research about the rotation of the sun. Infographics appeared in the form of illustrations demonstrating the Sun's rotation patterns.
In 1786, William Playfair, an engineer and political economist, published the first data graphs in his book The Commercial and Political Atlas. To represent the economy of 18th century England, Playfair used statistical graphs, bar charts, line graphs, area charts, and histograms. In his work, Statistical Breviary, he is credited with introducing the first pie chart.
Around 1820, modern geography was established by Carl Ritter. His maps included shared frames, agreed map legends, scales, repeatability, and fidelity. Such a map can be considered a "supersign" which combines sign systems—as defined by Charles Sanders Peirce—consisting of symbols, icons, and indexes as representations. Other examples can be seen in the works of geographers Ritter and Alexander von Humboldt.
In 1857, English nurse Florence Nightingale used information graphics to persuade Queen Victoria to improve conditions in military hospitals. The principal one she used was the Coxcomb chart, a combination of stacked bar and pie charts, depicting the number and causes of deaths during each month of the Crimean War.
1861 saw the release of an influential information graphic on the subject of Napoleon's disastrous march on Moscow. The graphic's creator, Charles Joseph Minard, captured four different changing variables that contributed to Napoleon's downfall in a single two-dimensional image: the army's direction as they traveled, the location the troops passed through, the size of the army as troops died from hunger and wounds, and the freezing temperatures they experienced.
James Joseph Sylvester introduced the term "graph" in 1878 in the scientific magazine Nature and published a set of diagrams showing the relationship between chemical bonds and mathematical properties. These were also some of the first mathematical graphs.
=== 20th century ===
In 1900, the African-American historian, sociologist, writer, and Black rights activist, W.E.B. Du Bois presented data visualizations at the Exposition Universelle (1900) in Paris, France. In addition to curating 500 photographs of the lives of Black Americans, Du Bois and his Atlanta University team of students and scholars created 60 handmade data visualizations to document the ways Black Americans were being denied access to education, housing, employment, and household wealth.
The Cologne Progressives developed an aesthetic approach to art that focused on communicating information. Gerd Arntz, Peter Alma, and Augustin Tschinkel, all participants in this movement, were recruited by Otto Neurath for the Gesellschafts- und Wirtschaftsmuseum, where they developed the Vienna Method from 1926 to 1934. Here simple images were used to represent data in a structured way. Following the victory of Austrofascism in the Austrian Civil War, the team moved to the Netherlands, where they continued their work, rebranding it Isotype (International System of Typographic Picture Education). The method was also applied by IZOSTAT (ИЗОСТАТ) in the Soviet Union.
In 1942 Isidore Isou published the Lettrist manifesto, a document covering art, culture, poetry, film, and political theory. The included works, also called metagraphics and hypergraphics, are a synthesis of writing and visual art.
In 1958 Stephen Toulmin proposed a graphical argument model, called the Toulmin Model of Argumentation. The diagram contained six interrelated components used for analyzing arguments and is considered Toulmin's most influential work, particularly in the fields of rhetoric, communication, and computer science; the model also shaped argumentation theory and its applications.
In 1972 and 1973, respectively, the Pioneer 10 and Pioneer 11 spacecraft included on their vessels the Pioneer Plaques, a pair of gold-anodized aluminum plaques, each featuring a pictorial message. The pictorial messages included nude male and female figures as well as symbols that were intended to provide information about the origin of the spacecraft. The images were designed by Carl Sagan and Frank Drake and were unique in that their graphical meanings were to be understandable to extraterrestrial beings, who would have no conception of human language.
A pioneer in data visualization, Edward Tufte, wrote a series of books – Visual Explanations, The Visual Display of Quantitative Information, and Envisioning Information – on the subject of information graphics. Referred to by The New York Times as the "da Vinci of Data", Tufte began to give day-long lectures and workshops on the subject of infographics starting in 1993. As of 2012, Tufte still gives these lectures. To Tufte, good data visualizations represent every data point accurately and enable a viewer to see trends and patterns in the data. Tufte's contribution to the field of data visualization and infographics is considered immense, and his design principles can be seen in many websites, magazines, and newspapers today.
The infographics created by Peter Sullivan for The Sunday Times in the 1970s, 1980s, and 1990s were some of the key factors in encouraging newspapers to use more infographics. Sullivan is also one of the few authors who have written about information graphics in newspapers. Likewise, the staff artists at USA Today, the United States newspaper that debuted in 1982, established the goal of using graphics to make information easier to comprehend. However, the paper has received criticism for oversimplifying news stories and for creating infographics that some find emphasizes entertainment over content and data. Tufte coined the term chartjunk to refer to graphics that are visually appealing to the point of losing the information contained within them.
With vector graphics and raster graphics becoming ubiquitous in computing in the 21st century, data visualizations have been applied to commonly used computer systems, including desktop publishing and Geographic Information Systems (GIS).
Closely related to the field of information graphics is information design, which is the creation of infographics.
=== 21st century ===
By the year 2000, Adobe Flash-based animations on the Internet had made use of many key practices in creating infographics in order to create a variety of products and games.
Likewise, television began to incorporate infographics into the viewers' experiences in the early 2000s. One example of infographics usage in television and in pop culture is the 2002 music video by the Norwegian musicians of Röyksopp, for their song "Remind Me." The video was composed entirely of animated infographics. Similarly, in 2004, a television commercial for the French nuclear technology company Areva used animated infographics as an advertising tactic. Both of these videos and the attention they received have conveyed to other fields the potential value of using information graphics to describe complex information efficiently.
With the rise of alternatives to Adobe Flash, such as HTML5 and CSS3, infographics are now created in a variety of media with a number of software tools.
The field of journalism has also incorporated and applied information graphics to news stories. For stories that intend to include text, images, and graphics, the system called the maestro concept allows entire newsrooms to collaborate and organize a story to successfully incorporate all components. Across many newsrooms, this teamwork-integrated system is applied to improve time management. The maestro system is designed to improve the presentation of stories for busy readers of media. Many news-based websites have also used interactive information graphics in which the user can extract information on a subject as they explore the graphic.
Many businesses use infographics as a medium for communicating with and attracting potential customers. Information graphics are a form of content marketing and have become a tool for internet marketers and companies to create content that others will link to, thus possibly boosting a company's reputation and online presence.
Religious denominations have also started using infographics. For example, The Church of Jesus Christ of Latter-day Saints has made numerous infographics to help people learn about their faith, missionaries, temples, lay ministry, and family history efforts.
Infographics are finding a home in the classroom as well. Courses that teach students to create their own infographics using a variety of tools may encourage engagement in the classroom and may lead to a better understanding of the concepts they are mapping onto the graphics.
With the popularity of social media, infographics have become popular, often as static images or simple web interfaces, covering any number of topics. Such infographics are often shared between users of social networks such as Facebook, Twitter, Pinterest, Google+ and Reddit. The hashtag #infographic was tweeted 56,765 times in March 2012 and at its peak 3,365 times in a span of 24 hours.
== Analysis ==
The three parts of all infographics are the visual, the content, and the knowledge. The visual consists of colors and graphics. There are two different types of graphics – theme and reference. Theme graphics are included in all infographics and represent the underlying visual representation of the data. Reference graphics are generally icons that can be used to point to certain data, although they are not always found in infographics. Statistics and facts usually serve as the content for infographics and can be obtained from any number of sources, including census data and news reports. One of the most important aspects of infographics is that they contain some sort of insight into the data that they are presenting – this is the knowledge.
Infographics are effective because of their visual element. Humans receive input from all five of their senses (sight, touch, hearing, smell, taste), but they receive significantly more information from vision than from any of the other four. Fifty percent of the human brain is dedicated to visual functions, and images are processed faster than text. The brain processes pictures all at once, but processes text in a linear fashion, meaning it takes much longer to obtain information from text. Entire business processes or industry sectors can be made relevant to a new audience through a guidance design technique that leads the eye. The page may link to a complete report, but the infographic primes the reader, making the subject matter more accessible. Online trends, such as the increasingly short attention span of Internet users, have also contributed to the increasing popularity and effectiveness of infographics.
When designing the visual aspect of an infographic, a number of considerations must be made to optimize the effectiveness of the visualization. The six components of visual encoding are spatial, marks, connection, enclosure, retinal properties, and temporal encoding. Each of these can be utilized in its own way to represent relationships between different types of data. However, studies have shown that spatial position is the most effective way to represent numerical data and leads to the fastest and easiest understanding by viewers. Therefore, the designers often spatially represent the most important relationship being depicted in an infographic.
There are also three basic provisions of communication that need to be assessed when designing an infographic – appeal, comprehension, and retention. "Appeal" is the idea that communication needs to engage its audience. Comprehension implies that the viewer should be able to easily understand the information that is presented to them. And finally, "retention" means that the viewer should remember the data presented by the infographic. The order of importance of these provisions depends on the purpose of the infographic. If the infographic is meant to convey information in an unbiased way, such as in the domains of academia or science, comprehension should be considered first, then retention, and finally, appeal. However, if the infographic is being used for commercial purposes, then appeal becomes most important, followed by retention and comprehension. When infographics are being used for editorial purposes, such as in a newspaper, the appeal is again most important but is followed first by comprehension and then retention.
However, appeal and retention can in practice be combined with the aid of a comprehensible layout design. Recently, in an attempt to study the effect of an infographic's layout on viewers' comprehension, a new neural-network-based cognitive load estimation method was applied to different types of common layouts for infographic design. When the variety of factors listed above is taken into consideration when designing infographics, they can be a highly efficient and effective way to convey large amounts of information in a visual manner.
== Data visualization ==
Data visualizations are often used in infographics and may make up the entire infographic. There are many types of visualizations that can be used to represent the same set of data. Therefore, it is crucial to identify the appropriate visualization for the data set and infographic by taking into consideration graphical features such as position, size, shape, and color. There are primarily five types of visualization categories – time-series data, statistical distributions, maps, hierarchies, and networking.
=== Time-series ===
Time-series data is one of the most common forms of data visualization. It documents sets of values over time. Examples of graphics in this category include index charts, stacked graphs, small multiples, and horizon graphs. An index chart is an interactive line chart that shows percentage changes for a collection of time-series data relative to a selected index point; index charts are ideal when raw values are less important than relative changes. For example, stock investors could use them because they are less concerned with the specific price than with its rate of growth. Stacked graphs are area charts stacked on top of each other that depict aggregate patterns. They allow viewers to see both overall patterns and individual patterns. However, they do not support negative numbers and make it difficult to accurately interpret trends. An alternative to stacked graphs is small multiples: instead of stacking each area chart, each series is shown individually, so the overall trend of each sector is more easily interpreted. Horizon graphs are a space-efficient method to increase the data density of a time-series while preserving resolution.
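To make the index-chart rescaling concrete, here is a minimal sketch in Python (the function name and sample price series are illustrative, not taken from any particular charting library):

```python
def index_chart(series, index_pos=0):
    """Rescale each time series to the percentage change relative to the
    value at the selected index point (the essence of an index chart)."""
    return {
        name: [100.0 * (v / values[index_pos] - 1.0) for v in values]
        for name, values in series.items()
    }

# Two hypothetical stock price series over four periods.
prices = {
    "AAA": [50.0, 55.0, 60.0, 45.0],
    "BBB": [200.0, 210.0, 220.0, 230.0],
}
print(index_chart(prices))
```

Both series now start at 0% and can be compared directly, even though their raw prices differ by a factor of four.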
=== Statistical ===
Statistical distributions reveal trends based on how numbers are distributed. Common examples include histograms and box-and-whisker plots, which convey statistical features such as mean, median, and outliers. In addition to these common infographics, alternatives include stem-and-leaf plots, Q–Q plots, scatter plot matrices (SPLOM) and parallel coordinates. For assessing a collection of numbers and focusing on frequency distribution, stem-and-leaf plots can be helpful. The numbers are binned based on the first significant digit, and within each stack binned again based on the second significant digit. On the other hand, Q–Q plots compare two probability distributions by graphing quantiles against each other. This allows the viewer to see if the plot values are similar and if the two are linearly related. SPLOM is a technique that represents the relationships among multiple variables. It uses multiple scatter plots to represent a pairwise relation among variables. Another statistical distribution approach to visualize multivariate data is parallel coordinates. Rather than graphing every pair of variables in two dimensions, the data is repeatedly plotted on a parallel axis, and corresponding points are then connected with a line. The advantage of parallel coordinates is that they are relatively compact, allowing many variables to be shown simultaneously.
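The stem-and-leaf binning described above can be sketched in a few lines of Python (the function name and sample data are illustrative):

```python
from collections import defaultdict

def stem_and_leaf(numbers):
    """Bin two-digit numbers by their first digit (the stem) and collect
    the second digit (the leaf) within each bin, in sorted order."""
    bins = defaultdict(list)
    for n in sorted(numbers):
        bins[n // 10].append(n % 10)
    return dict(bins)

data = [12, 15, 21, 24, 24, 33, 38, 39]
for stem, leaves in stem_and_leaf(data).items():
    print(stem, "|", " ".join(str(leaf) for leaf in leaves))
```

The printed rows double as a crude histogram: the length of each row shows the frequency within that bin while the leaves preserve the individual values.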
=== Maps ===
Maps are a natural way to represent geographical data. Time and space can be depicted through the use of flow maps. Line strokes are used with various widths and colors to help encode information. Choropleth maps, which encode data through color and geographical region, are also commonly used. Graduated symbol maps are another method to represent geographical data. They are an alternative to choropleth maps and use symbols, such as pie charts for each area, over a map. This type of map allows more dimensions to be represented using various shapes, sizes, and colors. Cartograms, on the other hand, completely distort the shape of a region and directly encode a data variable. Instead of using a geographic map, regions are redrawn proportionally to the data. For example, each region can be represented by a circle whose size or color is directly proportional to other information, such as population size.
=== Hierarchies ===
Many data sets, such as spatial entities of countries or common structures for governments, can be organized into natural hierarchies. Node-link diagrams, adjacency diagrams, and enclosure diagrams are all types of infographics that effectively communicate hierarchical data. Node-link diagrams are a popular method due to their tidy and space-efficient results. A node-link diagram is similar to a tree, where each node branches off into multiple sub-sections. An alternative is the adjacency diagram, a space-filling variant of the node-link diagram. Instead of drawing a link between hierarchies, nodes are drawn as solid areas with sub-sections inside each section. This method allows size to be more easily represented than in node-link diagrams. Enclosure diagrams are also a space-filling visualization method. However, they use containment rather than adjacency to represent the hierarchy. Similar to the adjacency diagram, the size of the node is easily represented in this model.
=== Networks ===
Network visualization explores relationships, such as friendships and cliques. Three common types are force-directed layouts, arc diagrams, and matrix views. Force-directed layouts are a common and intuitive approach to network layout. In this system, nodes behave like charged particles, which repel each other, while links pull related nodes together. Arc diagrams are one-dimensional layouts of nodes with circular arcs linking each node. When used properly, with a good ordering of nodes, cliques and bridges are easily identified in this layout. Alternatively, mathematicians and computer scientists more often use matrix views, in which each cell at position (x, y) in the matrix corresponds to a link between two nodes. By using color and saturation instead of text, values associated with the links can be perceived rapidly. While this method makes it hard to follow paths through the network, there are no line crossings, which in a large and highly connected network can quickly become too cluttered.
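A matrix view rests on the network's adjacency matrix; the following Python sketch (names and sample network are illustrative) shows how each link becomes a cell value that a renderer could then map to color or saturation:

```python
def adjacency_matrix(nodes, edges):
    """Build the matrix behind a matrix view: cell (i, j) holds the
    weight of the link between nodes i and j, or 0 when no link exists."""
    index = {n: i for i, n in enumerate(nodes)}
    size = len(nodes)
    matrix = [[0] * size for _ in range(size)]
    for a, b, weight in edges:
        matrix[index[a]][index[b]] = weight
        matrix[index[b]][index[a]] = weight  # undirected network
    return matrix

people = ["Ann", "Bob", "Cy"]
friendships = [("Ann", "Bob", 3), ("Bob", "Cy", 1)]
print(adjacency_matrix(people, friendships))
```

Because every node pair has a fixed cell, no two links can ever cross, which is exactly the property the matrix view trades against easy path-following.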
While all of these visualizations can be effectively used on their own, many modern infographics combine multiple types into one graphic, along with other features, such as illustrations and text. Some modern infographics do not even contain data visualization and are instead simply colorful and succinct ways to present knowledge. Fifty-three percent of the 30 most-viewed infographics on the infographic sharing site visual.ly did not contain actual data.
=== Comparison infographics ===
Comparison infographics are a type of visual representation that focuses on comparing and contrasting different elements, such as products, services, options, or features. These infographics are designed to help viewers make informed decisions by presenting information in a clear and concise manner. Comparison infographics can be highly effective in simplifying complex data and highlighting key differences between multiple items.
== Tools ==
Infographics can be created by hand using simple everyday tools such as graph paper, pencils, markers, and rulers. However, today they are more often created using computer software, which is often both faster and easier. They can be created with general illustration software.
Diagrams can be manually created and drawn using software, which can be downloaded for the desktop or used online. Templates can be used to get users started on their diagrams. Additionally, the software allows users to collaborate on diagrams in real time over the Internet.
There are also numerous tools to create very specific types of visualizations, such as creating a visualization based on embedded data in the photos on a user's smartphone. Users can create an infographic of their resume or a "picture of their digital life."
== See also ==
== References ==
== Further reading ==
Heiner Benking (1981–1988). Requisite Inquiry and Time-Line: Computer Graphics – Infographics. http://benking.de/infographics/ See there: Computer Graphics in the Environmental Sector – Possibilities and Limitations of Data-Visualisation, chapter 3 ("Technical possibilities and human potentials and capacities"): "a picture is more than 10,000 words" and "10,000 miles equal 10,000 books".
Sullivan, Peter. (1987) Newspaper Graphics. IFRA, Darmstadt.
Jacques Bertin (1983). Semiology of Graphics. Madison, WI: University of Wisconsin Press. Translation by William Berg of Semiologie Graphique. Paris: Mouton/Gauthier-Villars, 1967.
William S. Cleveland (1985). The Elements of Graphing Data. Summit, NJ: Hobart Press. ISBN 978-1-58465-512-1
Heiner Benking (1993), Visual Access Strategies for Multi-Dimensional Objects and Issues / "Our View of Life is too Flat", WFSF, Turku, FAW Report TR-93019
William S. Cleveland (1993). Visualizing Data. Summit, NJ: Hobart Press. ISBN 978-0-9634884-0-4
Sullivan, Peter. (1993) Information Graphics in Colour. IFRA, Darmstadt.
John Emerson (2008). Visualizing Information for Advocacy: An Introduction to Information Design. New York: OSI.
Paul Lewi (2006). "Speaking of Graphics".
Hankins, Thomas L. (1999). "Blood, Dirt, and Nomograms: A Particular History of Graphs". Isis. 90 (1): 50–80. doi:10.1086/384241. JSTOR 237474. S2CID 144376938.
Robert L. Harris (1999). Information Graphics: A Comprehensive Illustrated Reference. Oxford University Press.
Eric K. Meyer (1997). Designing Infographics. Hayden Books.
Edward R. Tufte (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
Edward R. Tufte (1990). Envisioning Information. Cheshire, CT: Graphics Press.
Edward R. Tufte (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Cheshire, CT: Graphics Press.
Edward R. Tufte (2006). Beautiful Evidence. Cheshire, CT: Graphics Press.
John Wilder Tukey (1977). Exploratory Data Analysis. Addison-Wesley.
Veszelszki, Ágnes (2014). Information visualization: Infographics from a linguistic point of view. In: Benedek, András − Nyíri, Kristóf (eds.): The Power of the Image Series Visual Learning, vol. 4. Frankfurt: Peter Lang, pp. 99−109.
Sandra Rendgen, Julius Wiedemann (2012). Information Graphics. Taschen Publishing. ISBN 978-3-8365-2879-5
Jason Lankow, Josh Ritchie, Ross Crooks (2012). Infographics: The Power of Visual Storytelling. Wiley. ISBN 978-1-118-31404-3
Murray Dick (2020). The Infographic: A History of Data Graphics in News and Communications. The MIT Press. ISBN 9780262043823
== External links ==
Milestones in the History of Thematic Cartography, Statistical Graphics and Data Visualization
Visual Display of Quantitative Information
In computing, data transformation is the process of converting data from one format or structure into another format or structure. It is a fundamental aspect of most data integration and data management tasks such as data wrangling, data warehousing, data integration and application integration.
Data transformation can be simple or complex based on the required changes to the data between the source (initial) data and the target (final) data. Data transformation is typically performed via a mixture of manual and automated steps. Tools and technologies used for data transformation can vary widely based on the format, structure, complexity, and volume of the data being transformed.
A master data recast is another form of data transformation where the entire database of data values is transformed or recast without extracting the data from the database. All data in a well-designed database is directly or indirectly related to a limited set of master database tables by a network of foreign key constraints. Each foreign key constraint is dependent upon a unique database index from the parent database table. Therefore, when the proper master database table is recast with a different unique index, the directly and indirectly related data are also recast or restated. The directly and indirectly related data may also still be viewed in the original form since the original unique index still exists with the master data. Also, the database recast must be done in such a way as not to impact the application architecture software.
When the data mapping is indirect via a mediating data model, the process is also called data mediation.
== Data transformation process ==
Data transformation can be divided into the following steps, each applicable as needed based on the complexity of the transformation required.
Data discovery
Data mapping
Code generation
Code execution
Data review
These steps are often the focus of developers or technical data analysts who may use multiple specialized tools to perform their tasks.
The steps can be described as follows:
Data discovery is the first step in the data transformation process. Typically the data is profiled using profiling tools or sometimes using manually written profiling scripts to better understand the structure and characteristics of the data and decide how it needs to be transformed.
Data mapping is the process of defining how individual fields are mapped, modified, joined, filtered, aggregated etc. to produce the final desired output. Developers or technical data analysts traditionally perform data mapping since they work in the specific technologies to define the transformation rules (e.g. visual ETL tools, transformation languages).
Code generation is the process of generating executable code (e.g. SQL, Python, R, or other executable instructions) that will transform the data based on the desired and defined data mapping rules. Typically, the data transformation technologies generate this code based on the definitions or metadata defined by the developers.
Code execution is the step whereby the generated code is executed against the data to create the desired output. The executed code may be tightly integrated into the transformation tool, or it may require separate steps by the developer to manually execute the generated code.
Data review is the final step in the process, which focuses on ensuring the output data meets the transformation requirements. It is typically the business user or final end-user of the data who performs this step. Any anomalies or errors found in the data are communicated back to the developer or data analyst as new requirements to be implemented in the transformation process.
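The five steps above can be sketched end-to-end on a toy dataset. In the following Python fragment the field names and rules are illustrative, and "code generation" is reduced to building a closure from the mapping; it is a minimal illustration of the process, not a real ETL tool:

```python
# A toy walk-through of the five steps, assuming a small list of records.
raw = [{"name": "ann", "amount": "10"}, {"name": "BOB", "amount": "32"}]

# 1. Data discovery: profile the fields and their observed types.
profile = {field: {type(row[field]).__name__ for row in raw} for field in raw[0]}

# 2. Data mapping: declare how each individual field is transformed.
mapping = {"name": str.title, "amount": int}

# 3. Code generation: here the generated "code" is a closure built from the mapping.
def make_transform(rules):
    return lambda row: {field: rule(row[field]) for field, rule in rules.items()}

transform = make_transform(mapping)

# 4. Code execution: run the generated transform against the data.
output = [transform(row) for row in raw]

# 5. Data review: check that the output meets the requirements.
assert all(isinstance(row["amount"], int) for row in output)
print(output)  # [{'name': 'Ann', 'amount': 10}, {'name': 'Bob', 'amount': 32}]
```

In a real tool the mapping would be declared through a visual interface or a transformation language, and the generated code might be SQL or Spark jobs rather than a Python closure, but the division of labor between the steps is the same.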
== Types of data transformation ==
=== Batch data transformation ===
Traditionally, data transformation has been a bulk or batch process, whereby developers write code or implement transformation rules in a data integration tool, and then execute that code or those rules on large volumes of data. This process can follow the linear set of steps as described in the data transformation process above.
Batch data transformation is the cornerstone of virtually all data integration technologies such as data warehousing, data migration and application integration.
When data must be transformed and delivered with low latency, the term "microbatch" is often used. This refers to small batches of data (e.g. a small number of rows or a small set of data objects) that can be processed very quickly and delivered to the target system when needed.
=== Benefits of batch data transformation ===
Traditional data transformation processes have served companies well for decades. The various tools and technologies (data profiling, data visualization, data cleansing, data integration etc.) have matured and most (if not all) enterprises transform enormous volumes of data that feed internal and external applications, data warehouses and other data stores.
=== Limitations of traditional data transformation ===
This traditional process also has limitations that hamper its overall efficiency and effectiveness.
The people who need to use the data (e.g. business users) do not play a direct role in the data transformation process. Typically, users hand over the data transformation task to developers who have the necessary coding or technical skills to define the transformations and execute them on the data.
This process leaves the bulk of the work of defining the required transformations to the developer, who often does not have the same domain knowledge as the business user. The developer interprets the business user's requirements and implements the related code/logic. This has the potential of introducing errors into the process (through misinterpreted requirements), and also increases the time to arrive at a solution.
This problem has given rise to the need for agility and self-service in data integration (i.e. empowering the user of the data and enabling them to transform the data themselves interactively).
There are companies that provide self-service data transformation tools. They are aiming to efficiently analyze, map and transform large volumes of data without the technical knowledge and process complexity that currently exists. While these companies use traditional batch transformation, their tools enable more interactivity for users through visual platforms and easily repeated scripts.
Still, there might be some compatibility issues (e.g. new data sources like IoT may not work correctly with older tools) and compliance limitations due to the difference in data governance, preparation and audit practices.
=== Interactive data transformation ===
Interactive data transformation (IDT) is an emerging capability that allows business analysts and business users to interact directly with large datasets through a visual interface, understand the characteristics of the data (via automated data profiling or visualization), and change or correct the data through simple interactions such as clicking or selecting certain elements of the data.
Although interactive data transformation follows the same data integration process steps as batch data integration, the key difference is that the steps are not necessarily followed in a linear fashion and typically don't require significant technical skills for completion.
There are a number of companies that provide interactive data transformation tools, including Trifacta, Alteryx and Paxata. They are aiming to efficiently analyze, map and transform large volumes of data while at the same time abstracting away some of the technical complexity and processes which take place under the hood.
Interactive data transformation solutions provide an integrated visual interface that combines the previously disparate steps of data analysis, data mapping and code generation/execution and data inspection. That is, if changes are made at one step (like for example renaming), the software automatically updates the preceding or following steps accordingly. Interfaces for interactive data transformation incorporate visualizations to show the user patterns and anomalies in the data so they can identify erroneous or outlying values.
Once they've finished transforming the data, the system can generate executable code/logic, which can be executed or applied to subsequent similar data sets.
By removing the developer from the process, interactive data transformation systems shorten the time needed to prepare and transform the data, eliminate costly errors in the interpretation of user requirements and empower business users and analysts to control their data and interact with it as needed.
== Transformational languages ==
Numerous languages are available for performing data transformation, varying in their accessibility (cost) and general usefulness. Many transformation languages require a grammar to be provided; in many cases, the grammar is structured using something closely resembling Backus–Naur form (BNF). Examples of such languages include:
AWK - one of the oldest and most popular textual data transformation languages;
Perl - a high-level language with both procedural and object-oriented syntax capable of powerful operations on binary or text data.
Template languages - specialized to transform data into documents (see also template processor);
TXL - prototyping language-based descriptions, used for source code or data transformation.
XSLT - the standard XML data transformation language (complemented by XQuery in many applications);
Additionally, companies such as Trifacta and Paxata have developed domain-specific transformational languages (DSL) for servicing and transforming datasets. The development of domain-specific languages has been linked to increased productivity and accessibility for non-technical users. Trifacta's "Wrangle" is an example of such a domain-specific language.
Another advantage of the recent domain-specific transformational languages trend is that a domain-specific transformational language can abstract the underlying execution of the logic defined in the domain-specific transformational language. They can also utilize that same logic in various processing engines, such as Spark, MapReduce, and Dataflow. In other words, with a domain-specific transformational language, the transformation language is not tied to the underlying engine.
Although transformational languages are typically best suited for transformation, something as simple as regular expressions can be used to achieve useful transformation. A text editor like vim, emacs or TextPad supports the use of regular expressions with backreferences, allowing all instances of a particular pattern to be replaced with another pattern that reuses parts of the original. For example:
foo ("some string", 42, gCommon);
bar (someObj, anotherObj);
foo ("another string", 24, gCommon);
bar (myObj, myOtherObj);
could both be transformed into a more compact form like:
foobar("some string", 42, someObj, anotherObj);
foobar("another string", 24, myObj, myOtherObj);
In other words, all instances of a function invocation of foo with three arguments, followed by a function invocation with two arguments would be replaced with a single function invocation using some or all of the original set of arguments.
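This editor transformation can be reproduced with an ordinary regular-expression substitution; the following Python sketch applies it to the example calls above, merging each three-argument foo call with the two-argument bar call that follows it and dropping the shared gCommon argument:

```python
import re

src = """foo ("some string", 42, gCommon);
bar (someObj, anotherObj);
foo ("another string", 24, gCommon);
bar (myObj, myOtherObj);"""

# Capture the first two foo arguments and both bar arguments, then emit a
# single foobar call built from the four backreferences.
pattern = r'foo \((.+), (.+), gCommon\);\nbar \((.+), (.+)\);'
result = re.sub(pattern, r'foobar(\1, \2, \3, \4);', src)
print(result)
```

The same pattern and replacement, entered into an editor's search-and-replace dialog, produce the compact form shown above in one pass over the file.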
== See also ==
Data cleansing
Data mapping
Data integration
Data preparation
Data wrangling
Extract, transform, load
Information integration
== References ==
== External links ==
File Formats, Transformation, and Migration, a related Wikiversity article
Computer-generated imagery (CGI) is a specific technology or application of computer graphics for creating or improving images in art, printed media, simulators, videos and video games. These images are either static (i.e. still images) or dynamic (i.e. moving images). CGI refers both to 2D computer graphics and (more frequently) to 3D computer graphics with the purpose of designing characters, virtual worlds, or scenes and special effects (in films, television programs, commercials, etc.). The application of CGI for creating/improving animations is called computer animation, or CGI animation.
== History ==
The first feature film to use CGI as well as the composition of live-action film with CGI was Vertigo, which used abstract computer graphics by John Whitney in the opening credits of the film. The first feature film to make use of CGI with live action in the storyline of the film was the 1973 film Westworld. The first feature film to present a fully CGI character was the 1985 film Young Sherlock Holmes, showcasing a fully animated stained glass knight character. Other early films that incorporated CGI include Demon Seed (1977), Star Wars (1977), Tron (1982), Star Trek II: The Wrath of Khan (1982), Golgo 13: The Professional (1983), The Last Starfighter (1984), The Abyss (1989), Terminator 2: Judgment Day (1991), and Jurassic Park (1993). The first music video to use CGI was Will Powers' "Adventures in Success" (1983). In 1995, Pixar's Toy Story became the first fully CGI feature film, marking a historic milestone for both animation and film-making.
Prior to CGI being prevalent in film, virtual reality, personal computing and gaming, one of the early practical applications of CGI was for aviation and military training, namely the flight simulator. Visual systems developed in flight simulators were also an important precursor to today's three-dimensional computer graphics and CGI systems, because the object of flight simulation was to reproduce on the ground the behavior of an aircraft in flight. Much of this reproduction had to do with believable visual synthesis that mimicked reality. The Link Digital Image Generator (DIG) by the Singer Company (Singer-Link) was considered one of the world's first-generation CGI systems. It was a real-time, 3D-capable, day/dusk/night system that was used by NASA shuttles, for F-111s, the Black Hawk and the B-52. Link's Digital Image Generator had an architecture designed to provide a visual system that realistically corresponded with the view of the pilot. The basic architecture of the DIG and subsequent improvements contained a scene manager followed by a geometric processor, a video processor, and finally the display, with the end goal of a visual system that processed realistic texture, shading, and translucency capabilities, free of aliasing.
Combined with the need to pair virtual synthesis with military level training requirements, CGI technologies applied in flight simulation were often years ahead of what would have been available in commercial computing or even in high budget film. Early CGI systems could depict only objects consisting of planar polygons. Advances in algorithms and electronics in flight simulator visual systems and CGI in the 1970s and 1980s influenced many technologies still used in modern CGI adding the ability to superimpose texture over the surfaces as well as transition imagery from one level of detail to the next one in a smooth manner.
The evolution of CGI led to the emergence of virtual cinematography in the 1990s, where the vision of the simulated camera is not constrained by the laws of physics. Availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers.
== Static images and landscapes ==
Not only do animated images form part of computer-generated imagery; natural looking landscapes (such as fractal landscapes) are also generated via computer algorithms. A simple way to generate fractal surfaces is to use an extension of the triangular mesh method, relying on the construction of some special case of a de Rham curve, e.g., midpoint displacement. For instance, the algorithm may start with a large triangle, then recursively zoom in by dividing it into four smaller Sierpinski triangles, then interpolate the height of each point from its nearest neighbors. The creation of a Brownian surface may be achieved not only by adding noise as new nodes are created but by adding additional noise at multiple levels of the mesh. Thus a topographical map with varying levels of height can be created using relatively straightforward fractal algorithms. Some typical, easy-to-program fractals used in CGI are the plasma fractal and the more dramatic fault fractal.
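A one-dimensional version of midpoint displacement is easy to sketch. The following Python fragment (function name and parameters are illustrative) inserts a noisy midpoint into every segment and shrinks the noise at each level, as described above:

```python
import random

def midpoint_displacement(heights, depth, spread=1.0, roughness=0.5, rng=None):
    """One-dimensional midpoint displacement: at each level, insert the
    midpoint of every segment, offset by noise; the noise amplitude
    shrinks by `roughness` per level, giving a fractal height profile."""
    rng = rng or random.Random(42)
    for _ in range(depth):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            refined += [a, mid]
        refined.append(heights[-1])
        heights = refined
        spread *= roughness
    return heights

profile = midpoint_displacement([0.0, 0.0], depth=6)
print(len(profile))  # 65 points: the 2 endpoints plus 63 inserted midpoints
```

The two-dimensional triangle-subdivision version works the same way, displacing the midpoint of each edge instead of each line segment; adding noise at every level of the mesh rather than only at new nodes yields the Brownian surface mentioned above.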
Many specific techniques have been researched and developed to produce highly focused computer-generated effects — e.g., the use of specific models to represent the chemical weathering of stones to model erosion and produce an "aged appearance" for a given stone-based surface.
== Architectural scenes ==
Modern architects use services from computer graphic firms to create 3-dimensional models for both customers and builders. These computer generated models can be more accurate than traditional drawings. Architectural animation (which provides animated movies of buildings, rather than interactive images) can also be used to see the possible relationship a building will have in relation to the environment and its surrounding buildings. The processing of architectural spaces without the use of paper and pencil tools is now a widely accepted practice with a number of computer-assisted architectural design systems.
Architectural modeling tools allow an architect to visualize a space and perform "walk-throughs" in an interactive manner, thus providing "interactive environments" both at the urban and building levels. Specific applications in architecture not only include the specification of building structures (such as walls and windows) and walk-throughs but the effects of light and how sunlight will affect a specific design at different times of the day.
Architectural modeling tools have now become increasingly internet-based. However, the quality of internet-based systems still lags behind sophisticated in-house modeling systems.
In some applications, computer-generated images are used to "reverse engineer" historical buildings. For instance, a computer-generated reconstruction of the monastery at Georgenthal in Germany was derived from the ruins of the monastery, yet provides the viewer with a "look and feel" of what the building would have looked like in its day.
== Anatomical models ==
Computer-generated models used in skeletal animation are not always anatomically correct. However, organizations such as the Scientific Computing and Imaging Institute have developed anatomically correct computer-based models. Computer-generated anatomical models can be used both for instructional and operational purposes. To date, a large body of artist-produced medical images continues to be used by medical students, such as images by Frank H. Netter (e.g., cardiac images). However, a number of online anatomical models are becoming available.
A single patient X-ray is not a computer-generated image, even if digitized. However, in applications which involve CT scans a three-dimensional model is automatically produced from many single-slice X-rays, producing a "computer-generated image". Applications involving magnetic resonance imaging also bring together a number of "snapshots" (in this case via magnetic pulses) to produce a composite, internal image.
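The reconstruction step can be sketched very simply: each CT slice is a 2D grid of density values, and stacking the slices yields a 3D voxel volume from which structures can be segmented by thresholding. The code below is a toy illustration (pure Python, with made-up density values loosely inspired by Hounsfield units), not a real reconstruction pipeline.

```python
def stack_slices(slices):
    """Stack 2D CT slices (row-major lists of density values) into a 3D
    volume indexed as volume[z][y][x]."""
    return [[row[:] for row in s] for s in slices]

def segment(volume, threshold):
    """Crude segmentation: collect voxel coordinates whose density exceeds
    the threshold (e.g. dense tissue such as bone)."""
    return [(x, y, z)
            for z, s in enumerate(volume)
            for y, row in enumerate(s)
            for x, v in enumerate(row)
            if v > threshold]

# two toy 2x2 slices with one "dense" voxel in the second slice
slices = [[[10, 10], [10, 10]],
          [[10, 400], [10, 10]]]
vol = stack_slices(slices)
print(segment(vol, 300))  # [(1, 0, 1)]
```

Real systems then run a surface-extraction algorithm (such as marching cubes) over the segmented voxels to produce a renderable mesh.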
In modern medical applications, patient-specific models are constructed in 'computer assisted surgery'. For instance, in total knee replacement, the construction of a detailed patient-specific model can be used to carefully plan the surgery. These three-dimensional models are usually extracted from multiple CT scans of the appropriate parts of the patient's own anatomy. Such models can also be used for planning aortic valve implantations, one of the common procedures for treating heart disease. Given that the shape, diameter, and position of the coronary openings can vary greatly from patient to patient, the extraction (from CT scans) of a model that closely resembles a patient's valve anatomy can be highly beneficial in planning the procedure.
== Cloth and skin images ==
Models of cloth generally fall into three groups:
The geometric-mechanical structure at yarn crossing
The mechanics of continuous elastic sheets
The geometric macroscopic features of cloth.
To date, making the clothing of a digital character automatically fold in a natural way remains a challenge for many animators.
In addition to their use in film, advertising and other modes of public display, computer generated images of clothing are now routinely used by top fashion design firms.
The challenge in rendering human skin images involves three levels of realism:
Photo realism in resembling real skin at the static level
Physical realism in resembling its movements
Function realism in resembling its response to actions.
The finest visible features, such as fine wrinkles and skin pores, are about 100 μm (0.1 mm) in size. Skin can be modeled as a 7-dimensional bidirectional texture function (BTF) or as a collection of bidirectional scattering distribution functions (BSDFs) over the target's surfaces.
When animating a texture like hair or fur for a computer-generated model, individual base hairs are first created and later duplicated to build up volume. The initial hairs often vary in length and color, each covering a different section of the model. This technique was notably used in Pixar's Monsters, Inc. (2001) for the character Sulley, for whom approximately 1,000 initial hairs were generated and later duplicated 2,800 times. The quantity of duplications can range from thousands to millions, depending on the level of detail sought.
== Interactive simulation and visualization ==
Interactive visualization is the rendering of data that may vary dynamically, allowing a user to view the data from multiple perspectives. The application areas may vary significantly, ranging from the visualization of flow patterns in fluid dynamics to specific computer-aided design applications. The data rendered may correspond to specific visual scenes that change as the user interacts with the system — e.g. simulators, such as flight simulators, make extensive use of CGI techniques for representing the world.
At the abstract level, an interactive visualization process involves a "data pipeline" in which the raw data is managed and filtered to a form that makes it suitable for rendering. This is often called the "visualization data". The visualization data is then mapped to a "visualization representation" that can be fed to a rendering system. This is usually called a "renderable representation". This representation is then rendered as a displayable image. As the user interacts with the system (e.g. by using joystick controls to change their position within the virtual world) the raw data is fed through the pipeline to create a new rendered image, often making real-time computational efficiency a key consideration in such applications.
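The pipeline described above can be sketched as three composable stages. This is a schematic illustration (all function names and the toy data are assumptions, not from any visualization library): raw data is filtered into visualization data, mapped to a renderable representation, and then rendered; each user interaction re-runs the pipeline.

```python
def visualization_pipeline(raw_data, filter_fn, map_fn, render_fn):
    """Data pipeline: raw data -> visualization data -> renderable
    representation -> displayable image, re-run on each interaction."""
    vis_data = filter_fn(raw_data)     # manage/filter the raw data
    renderable = map_fn(vis_data)      # map to a renderable representation
    return render_fn(renderable)       # render to a displayable "image"

# toy example: keep in-range samples, map to bar heights, "render" as text
raw = [0.2, 5.0, 0.8, -1.0, 0.5]
image = visualization_pipeline(
    raw,
    filter_fn=lambda d: [v for v in d if 0.0 <= v <= 1.0],
    map_fn=lambda d: [round(v * 10) for v in d],
    render_fn=lambda bars: ["#" * h for h in bars],
)
print(image)  # ['##', '########', '#####']
```

In a real-time system the same structure holds, but each stage is optimized so the whole pipeline can run within a frame budget.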
== Computer animation ==
While computer-generated images of landscapes may be static, computer animation only applies to dynamic images that resemble a movie. However, in general, the term computer animation refers to dynamic images that do not allow user interaction, and the term virtual world is used for the interactive animated environments.
Computer animation is essentially a digital successor to the art of stop motion animation of 3D models and frame-by-frame animation of 2D illustrations. Computer generated animations are more controllable than other more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and because it allows the creation of images that would not be feasible using any other technology. It can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props.
To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image which is similar to the previous image, but advanced slightly in the time domain (usually at a rate of 24 or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.
== Text-to-image models ==
== Virtual worlds ==
A virtual world is an agent-based, simulated environment allowing users to interact with artificially animated characters (e.g. software agents) or with other physical users through the use of avatars. Virtual worlds are intended for their users to inhabit and interact in, and the term today has become largely synonymous with interactive 3D virtual environments, where users take the form of avatars visible to others graphically. These avatars are usually depicted as textual, two-dimensional, or three-dimensional graphical representations, although other forms are possible (auditory and touch sensations, for example). Some, but not all, virtual worlds allow for multiple users.
== In courtrooms ==
Computer-generated imagery has been used in courtrooms, primarily since the early 2000s, although some experts have argued that it is prejudicial. Such exhibits are used to help judges or the jury better visualize the sequence of events, evidence, or hypotheses. However, a 1997 study showed that people are poor intuitive physicists and easily influenced by computer-generated images. Thus it is important that jurors and other legal decision-makers be made aware that such exhibits are merely a representation of one potential sequence of events.
== Broadcast and live events ==
Weather visualizations were the first application of CGI in television. One of the first companies to offer computer systems for generating weather graphics was ColorGraphics Weather Systems in 1979 with the "LiveLine", based around an Apple II computer, with later models from ColorGraphics using Cromemco computers fitted with their Dazzler video graphics card.
It has now become common in weather casting to display full motion video of images captured in real-time from multiple cameras and other imaging devices. Coupled with 3D graphics symbols and mapped to a common virtual geospatial model, these animated visualizations constitute the first true application of CGI to TV.
CGI has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay content through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow "first down" line seen in television broadcasts of American football games showing the line the offensive team must cross to receive a first down. CGI is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area. Sections of rugby fields and cricket pitches also display sponsored images. Swimming telecasts often add a line across the lanes to indicate the position of the current record holder as a race proceeds to allow viewers to compare the current race to the best performance. Other examples include hockey puck tracking and annotations of racing car performance and snooker ball trajectories. Sometimes CGI on TV with correct alignment to the real world has been referred to as augmented reality.
== Motion capture ==
Computer-generated imagery is often used in conjunction with motion capture to better cover the faults that come with CGI and animation. Computer-generated imagery is limited in its practical application by how realistic it can look. Unrealistic, or badly managed computer-generated imagery can result in the uncanny valley effect. This effect refers to the human ability to recognize things that look eerily like humans, but are slightly off. Such ability is a fault with normal computer-generated imagery which, due to the complex anatomy of the human body, can often fail to replicate it perfectly. Artists can use motion capture to get footage of a human performing an action and then replicate it perfectly with computer-generated imagery so that it looks normal.
In many instances, motion capture is needed to accurately mimic an actor's full body movements while slightly changing their appearance with de-aging. De-aging is a visual effect used to alter the appearance of an actor, often through facial scanning technologies, motion capture, and photo references. It is commonly used for flashback scenes and cameos to have an actor appear younger. Marvel's X-Men: The Last Stand was the first film to publicly incorporate de-aging, which was used on actors Patrick Stewart and Ian McKellen for flashback scenes featuring their characters at a younger age. The visual effects were done by the company Lola VFX, which used photos taken of the actors at a younger age as references to smooth out the wrinkles on their faces with CGI. Over time, de-aging technologies have advanced, with films such as Here (2024) portraying actors at younger ages through the use of digital AI techniques, scanning millions of facial features and incorporating a number of them onto actors' faces to alter their appearance.
The lack of anatomically correct digital models contributes to the necessity of motion capture as it is used with computer-generated imagery. Because computer-generated imagery reflects only the outside, or skin, of the object being rendered, it fails to capture the infinitesimally small interactions between interlocking muscle groups used in fine motor skills like speaking. The constant motion of the face as it makes sounds with shaped lips and tongue movement, along with the facial expressions that go along with speaking are difficult to replicate by hand. Motion capture can catch the underlying movement of facial muscles and better replicate the visual that goes along with the audio.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
A Critical History of Computer Graphics and Animation – a course page at Ohio State University that includes all the course materials and extensive supplementary materials (videos, articles, links).
CG101: A Computer Graphics Industry Reference ISBN 073570046X Unique and personal histories of early computer graphics production, plus a comprehensive foundation of the industry for all reading levels.
F/X Gods, by Anne Thompson, Wired, February 2005.
"History Gets A Computer Graphics Make-Over" Tayfun King, Click, BBC World News (2004-11-19)
NIH Visible Human Gallery
3D computer graphics, sometimes called CGI, 3D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later (possibly as an animation) or displayed in real time.
3D computer graphics, contrary to what the name suggests, are most often displayed on two-dimensional displays. Unlike 3D film and similar techniques, the result is two-dimensional, without visual depth. Increasingly, however, 3D graphics are displayed on true 3D displays, as in virtual reality systems.
3D graphics stand in contrast to 2D computer graphics which typically use completely different methods and formats for creation and rendering.
3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and similarly, 3D may use some 2D rendering techniques.
The objects in 3D computer graphics are often referred to as 3D models. Unlike the rendered image, a model's data is contained within a graphical data file. A 3D model is a mathematical representation of any three-dimensional object; a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or it can be used in non-graphical computer simulations and calculations. With 3D printing, models are rendered into an actual 3D physical representation of themselves, with some limitations as to how accurately the physical model can match the virtual model.
== History ==
William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing. An early example of interactive 3-D computer graphics was explored in 1963 by the Sketchpad program at Massachusetts Institute of Technology's Lincoln Laboratory. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1971 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke.
3-D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3-D computer graphics effects, written by Kazumasa Mitazawa and released in June 1978 for the Apple II.
Virtual reality 3D is a variant of 3D computer graphics. Although the first headset appeared in the late 1950s, VR did not become popular until the 2000s. In 2012 the Oculus was released, and since then the 3D VR headset market has expanded.
== Overview ==
3D computer graphics production workflow falls into three basic phases:
3D modeling – the process of forming a computer model of an object's shape
Layout and CGI animation – the placement and movement of objects (models, lights etc.) within a scene
3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate (rasterize the scene into) an image
=== Modeling ===
The modeling describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects (Polygonal Modeling, Patch Modeling and NURBS Modeling are some popular tools used in 3D modeling). Models can also be produced procedurally or via physical simulation.
Basically, a 3D model is formed from points called vertices that define the shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A polygon of n points is an n-gon. The overall integrity of the model and its suitability to use in animation depend on the structure of the polygons.
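The vertex-and-polygon structure just described can be expressed as a minimal data structure. This sketch (illustrative Python; the class and method names are assumptions, not any particular package's API) stores shared vertices and index-based n-gons, and fan-triangulates each polygon, which is valid for convex polygons:

```python
class Mesh:
    """Minimal polygon-mesh sketch: shared vertices plus index-based polygons."""
    def __init__(self):
        self.vertices = []   # list of (x, y, z) tuples
        self.polygons = []   # each polygon is a tuple of vertex indices (an n-gon)

    def add_vertex(self, x, y, z):
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1

    def add_polygon(self, *indices):
        assert len(indices) >= 3, "a polygon needs at least three vertices"
        self.polygons.append(indices)

    def triangulate(self):
        """Fan-triangulate every n-gon into triangles (correct for convex n-gons)."""
        tris = []
        for poly in self.polygons:
            for i in range(1, len(poly) - 1):
                tris.append((poly[0], poly[i], poly[i + 1]))
        return tris

# a unit quad (4-gon) splits into two triangles
m = Mesh()
ids = [m.add_vertex(x, y, 0.0) for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]]
m.add_polygon(*ids)
print(m.triangulate())  # [(0, 1, 2), (0, 2, 3)]
```

Sharing vertices between polygons, as here, is what keeps the mesh watertight and makes deformation during animation consistent across adjacent faces.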
=== Layout and animation ===
Before rendering into an image, objects must be laid out in a 3D scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object (i.e., how it moves and deforms over time); popular methods include keyframing, inverse kinematics, and motion capture. These techniques are often used in combination. As with animation, physical simulation also specifies motion.
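Keyframing, the most basic of the methods above, can be sketched in a few lines: the animator poses an attribute at a few key times, and the system computes every in-between frame by interpolation. This is an illustrative linear-interpolation sketch (production systems typically use spline interpolation; the function names here are assumptions):

```python
def interpolate(keyframes, t):
    """Linearly interpolate an animated value between surrounding keyframes.

    `keyframes` is a sorted list of (time, value) pairs; the in-between
    frames the animator never posed are computed automatically."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)   # normalized position between the keys
            return v0 + u * (v1 - v0)

# an object keyframed at x=0 on frame 0 and x=10 on frame 20
keys = [(0, 0.0), (20, 10.0)]
print(interpolate(keys, 5))   # 2.5
```

Inverse kinematics and motion capture produce the same kind of time-varying channels; they differ only in how the key values are obtained.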
Stop Motion has multiple categories within such as Claymation, Cutout, Silhouette, Lego, Puppets, and Pixelation.
Claymation is the use of models made of clay used for an animation. Some examples are Clay Fighter and Clay Jam.
Lego animation is one of the more common types of stop motion. Lego stop motion is the use of the figures themselves moving around. Some examples of this are Lego Island and Lego Harry Potter.
=== Materials and textures ===
Materials and textures are properties that the render engine uses to render the model. One can give the model materials to tell the render engine how to treat light when it hits the surface. Textures are used to give the material color using a color or albedo map, or give the surface features using a bump map or normal map. It can be also used to deform the model itself using a displacement map.
=== Rendering ===
Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3-D computer graphics software or a 3-D graphics API.
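The scattering operation has a classic closed form for perfectly diffuse surfaces: Lambert's cosine law, where reflected light scales with the cosine of the angle between the surface normal and the light direction. The sketch below is a minimal illustration of that one scattering model (not a full renderer; names and the single-channel "albedo" are simplifying assumptions):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir, albedo, light_intensity=1.0):
    """Scattering at a diffuse (Lambertian) surface: reflected radiance is
    proportional to the cosine between the surface normal and the direction
    toward the light, clamped at grazing angles."""
    n = normalize(normal)
    l = normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * light_intensity * cos_theta

# light arriving straight down onto an upward-facing surface
print(lambert((0, 1, 0), (0, 1, 0), albedo=0.8))  # 0.8
# grazing light contributes nothing
print(lambert((0, 1, 0), (1, 0, 0), albedo=0.8))  # 0.0
```

A full renderer combines such scattering terms with the transport problem, summing contributions over all light paths reaching the surface.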
Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3-D modeling and CAD software may perform 3-D rendering as well (e.g., Autodesk 3ds Max or Blender), exclusive 3-D rendering software also exists (e.g., OTOY's Octane Rendering Engine, Maxon's Redshift).
Examples of 3-D rendering
== Software ==
3-D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D rendering or produces 3-D models for analytical, scientific and industrial purposes.
=== File formats ===
There are many varieties of files supporting 3-D graphics, for example, Wavefront .obj files, .fbx and .x DirectX files. Each file type generally tends to have its own unique data structure.
Each file format can generally be accessed through its respective application, such as DirectX files and Quake files through their associated engines. Alternatively, files can be accessed through third-party standalone programs, or via manual decompilation.
=== Modeling ===
3-D modeling software is a class of 3-D computer graphics software used to produce 3-D models. Individual programs of this class are called modeling applications or modelers.
3-D modeling starts by describing three display primitives: points, lines, and triangles and other polygonal patches.
3-D modelers allow users to create and alter models via their 3-D mesh. Users can add, subtract, stretch and otherwise change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously. Models can be rotated and the view can be zoomed in and out.
3-D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged-in, so they can read and write data in the native formats of other applications.
Most 3-D modelers contain a number of related features, such as ray tracers and other rendering alternatives and texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able to generate full-motion video of a series of rendered scenes (i.e. animation).
=== Computer-aided design (CAD) ===
Computer aided design software may employ the same fundamental 3-D modeling techniques that 3-D modeling software use but their goal differs. They are used in computer-aided engineering, computer-aided manufacturing, Finite element analysis, product lifecycle management, 3D printing and computer-aided architectural design.
=== Complementary tools ===
After producing a video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final Cut Pro at the mid-level, or Autodesk Combustion, Digital Fusion, Shake at the high-end. Match moving software is commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves.
Use of real-time computer graphics engines to create a cinematic production is called machinima.
== Other types of 3D appearance ==
=== Photorealistic 2D graphics ===
Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wire-frame modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photo-realistic effects without the use of filters.
=== 2.5D ===
Some video games use 2.5D graphics, involving restricted projections of three-dimensional environments, such as isometric graphics or virtual cameras with fixed angles, either as a way to improve performance of the game engine or for stylistic and gameplay concerns. By contrast, games using 3D computer graphics without such restrictions are said to use true 3D.
=== Other forms of animation ===
Cutout is the use of flat materials such as paper. Everything is cut out of paper including the environment, characters, and even some props. An example of this is Paper Mario. Silhouette is similar to cutouts except they are one solid color, black. Limbo is an example of this. Puppets are dolls and different puppets used in the game. An example of this would be Yoshi's Wooly World. Pixelation is when the entire game appears pixelated, this includes the characters and the environment around them. One example of this is seen in Shovel Knight.
== See also ==
Graphics processing unit (GPU)
List of 3D computer graphics software
3D data acquisition and object reconstruction
3D projection on 2D planes
Geometry processing
Isometric graphics in video games and pixel art
List of stereoscopic video games
Medical animation
Render farm
== References ==
== External links ==
A Critical History of Computer Graphics and Animation (Wayback Machine copy)
How Stuff Works - 3D Graphics
History of Computer Graphics series of articles (Wayback Machine copy)
How 3D Works - Explains 3D modeling for an illuminated manuscript
In computer graphics, level of detail (LOD) refers to the complexity of a 3D model representation. LOD can be decreased as the model moves away from the viewer or according to other metrics such as object importance, viewpoint-relative speed or position.
LOD techniques increase the efficiency of rendering by decreasing the workload on graphics pipeline stages, usually vertex transformations.
The reduced visual quality of the model is often unnoticed because of the small effect on object appearance when distant or moving fast.
Although most of the time LOD is applied to geometry detail only, the basic concept can be generalized. Recently, LOD techniques also included shader management to keep control of pixel complexity.
A form of level of detail management has been applied to texture maps for years, under the name of mipmapping, also providing higher rendering quality.
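A mipmap is simply a pyramid of progressively half-resolution copies of a texture, so the renderer can sample a level whose texel density matches the on-screen size. The chain can be sketched as follows (illustrative Python using plain nested lists and a box filter; real GPUs build and sample these levels in hardware):

```python
def downsample(image):
    """Halve a square image by averaging each 2x2 block (a box filter)."""
    size = len(image) // 2
    return [[(image[2*y][2*x] + image[2*y][2*x+1] +
              image[2*y+1][2*x] + image[2*y+1][2*x+1]) / 4
             for x in range(size)] for y in range(size)]

def mipmap_chain(image):
    """Build the full mipmap pyramid down to a single texel."""
    levels = [image]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels

# a 4x4 checkerboard texture averages out to grey
tex = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
chain = mipmap_chain(tex)
print(len(chain))   # 3 levels: 4x4, 2x2, 1x1
print(chain[-1])    # [[0.5]]
```

Because each level is pre-averaged, sampling a distant texture from the right level avoids the aliasing that point-sampling the full-resolution image would cause, which is why mipmapping improves rendering quality as well as performance.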
It is commonplace to say that "an object has been LOD-ed" either when it is simplified automatically by an underlying LOD-ing algorithm or when a 3D modeler manually creates LOD models.
== Historical reference ==
The origin[1] of all the LOD algorithms for 3D computer graphics can be traced back to an article by James H. Clark in the October 1976 issue of Communications of the ACM.
At the time, computers were monolithic and rare, and graphics were being driven by researchers. The hardware itself was completely different, both architecturally and performance-wise. As such, many differences could be observed with regard to today's algorithms but also many common points.
The original algorithm presented a much more generic approach than what will be discussed below. After introducing some available algorithms for geometry management, it states that the most fruitful gains came from "...structuring the environments being rendered", allowing faster transformations and clipping operations to be exploited.
The same environment structuring is now proposed as a way to control varying detail thus avoiding unnecessary computations, yet delivering adequate visual quality:
For example, a dodecahedron looks like a sphere from a sufficiently large distance and thus can be used to model it so long as it is viewed from that or a greater distance. However, if it must ever be viewed more closely, it will look like a dodecahedron. One solution to this is simply to define it with the most detail that will ever be necessary. However, then it might have far more detail than is needed to represent it at large distances, and in a complex environment with many such objects, there would be too many polygons (or other geometric primitives) for the visible surface algorithms to efficiently handle.
The proposed algorithm envisions a tree data structure which encodes in its arcs both transformations and transitions to more detailed objects. In this way, each node encodes an object and according to a fast heuristic, the tree is descended to the leaves which provide each object with more detail. When a leaf is reached, other methods could be used when higher detail is needed, such as Catmull's recursive subdivision[2].
The significant point, however, is that in a complex environment, the amount of information presented about the various objects in the environment varies according to the fraction of the field of view occupied by those objects.
The paper then introduces clipping (not to be confused with culling although often similar), various considerations on the graphical working set and its impact on performance, interactions between the proposed algorithm and others to improve rendering speed.
== Well known approaches ==
Although the algorithm introduced above covers a whole range of level of detail management techniques, real world applications usually employ specialized methods tailored to the information being rendered. Depending on the requirements of the situation, two main methods are used:
The first method, Discrete Levels of Detail (DLOD), involves creating multiple, discrete versions of the original geometry with decreased levels of geometric detail. At runtime, the full-detail models are substituted for the models with reduced detail as necessary. Due to the discrete nature of the levels, there may be visual popping when one model is exchanged for another. This may be mitigated by alpha blending or morphing between states during the transition.
The second method, Continuous Levels of Detail (CLOD), uses a structure which contains a continuously variable spectrum of geometric detail. The structure can then be probed to smoothly choose the appropriate level of detail required for the situation. A significant advantage of this technique is the ability to locally vary the detail; for instance, the side of a large object nearer to the view may be presented in high detail, while simultaneously reducing the detail on its distant side.
In both cases, LODs are chosen based on some heuristic which is used to judge how much detail is being lost by the reduction in detail, such as by evaluation of the LOD's geometric error relative to the full-detail model. Objects are then displayed with the minimum amount of detail required to satisfy the heuristic, which is designed to minimize geometric detail as much as possible to maximize performance while maintaining an acceptable level of visual quality.
=== Details on discrete LOD ===
The basic concept of discrete LOD (DLOD) is to provide various models to represent the same object. Obtaining those models requires an external algorithm which is often non-trivial and subject of many polygon reduction techniques. Successive LOD-ing algorithms will simply assume those models are available.
DLOD algorithms are often used in performance-intensive applications with small data sets which can easily fit in memory. Although out-of-core algorithms could be used, the information granularity is not well suited to this kind of application. This kind of algorithm is usually easier to get working, providing both faster performance and lower CPU usage because of the few operations involved.
DLOD methods are often used for "stand-alone" moving objects, possibly including complex animation methods. A different approach is used for geomipmapping,[3] a popular terrain rendering algorithm, because terrain meshes are both graphically and topologically different from "object" meshes. Instead of computing an error and simplifying the mesh accordingly, geomipmapping applies a fixed reduction method, evaluates the error it introduces, and computes a distance at which that error is acceptable. Although straightforward, the algorithm provides decent performance.
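The "distance at which the error is acceptable" step can be sketched from the perspective projection: a fixed world-space geometric error projects to fewer and fewer pixels as distance grows, so a level may be switched in once its error falls below a pixel threshold. This is an illustrative sketch of that common heuristic (the exact constant and threshold vary between implementations; the values below are assumptions):

```python
import math

def switch_distance(geometric_error, screen_height_px, fov_y_rad,
                    max_pixel_error=2.0):
    """Distance at which a fixed simplification's world-space error projects
    to at most `max_pixel_error` pixels on screen (a geomipmapping-style
    heuristic; the exact constant varies by implementation)."""
    # perspective scale: pixels per world unit shrink linearly with distance
    screen_constant = screen_height_px / (2.0 * math.tan(fov_y_rad / 2.0))
    return geometric_error * screen_constant / max_pixel_error

# a mip level introducing 0.5 world units of error, 1080p, 60-degree FOV
d = switch_distance(0.5, 1080, math.radians(60))
print(round(d, 1))  # ~233.8: beyond this distance the coarser level suffices
```

Precomputing one such distance per mip level is what lets the runtime loop reduce to a cheap distance comparison per terrain tile.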
=== A discrete LOD example ===
As a simple example, consider a sphere. A discrete LOD approach would cache a certain number of models to be used at different distances. Because the model can trivially be procedurally generated by its mathematical formulation, using a different number of sample points distributed on the surface is sufficient to generate the various models required. This pass is not a LOD-ing algorithm.
To simulate a realistic transform bound scenario, an ad-hoc written application can be used. The use of simple algorithms and minimum fragment operations ensures that CPU bounding does not occur. Each frame, the program will compute each sphere's distance and choose a model from a pool according to this information. To easily show the concept, the distance at which each model is used is hard coded in the source. A more involved method would compute adequate models according to the usage distance chosen.
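The per-frame selection loop described above reduces to a lookup in a sorted pool of models. The following sketch mirrors that structure (illustrative Python; the model names and hard-coded switch distances are assumptions standing in for the pre-generated sphere tessellations):

```python
def choose_lod(distance, lod_pool):
    """Pick the first model whose switch distance covers the object.

    `lod_pool` is a list of (max_distance, model) pairs sorted from nearest
    (highest detail) to farthest; the distances are hard-coded, as in the
    example described above."""
    for max_distance, model in lod_pool:
        if distance <= max_distance:
            return model
    return lod_pool[-1][1]   # beyond the last threshold: coarsest model

# sphere models pre-generated at decreasing sample counts
pool = [(10.0, "sphere_1024_tris"),
        (50.0, "sphere_256_tris"),
        (200.0, "sphere_64_tris")]
print(choose_lod(5.0, pool))    # sphere_1024_tris
print(choose_lod(120.0, pool))  # sphere_64_tris
print(choose_lod(999.0, pool))  # sphere_64_tris (fallback)
```

Running this per sphere per frame is cheap, which is why the DLOD approach keeps CPU usage low compared with continuously re-simplifying geometry.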
OpenGL is used for rendering due to its high efficiency in managing small batches, storing each model in a display list thus avoiding communication overheads. Additional vertex load is given by applying two directional light sources ideally located infinitely far away.
The following table compares the performance of LOD aware rendering and a full detail (brute force) method.
=== Hierarchical LOD ===
Because hardware is geared towards large amounts of detail, rendering low-polygon objects individually may yield sub-optimal performance. HLOD avoids the problem by grouping different objects together.[4] This allows for higher efficiency as well as taking advantage of proximity considerations.
== Practical applications ==
=== Video games ===
LOD is especially useful in 3D video games. Video game developers want to provide players with large worlds but are always constrained by hardware, frame rate and the real-time nature of video game graphics. With the advent of 3D games in the 1990s, many video games simply did not render distant structures or objects. Only nearby objects would be rendered and more distant parts would gradually fade, essentially implementing distance fog. Video games using LOD rendering avoid this fog effect and can render larger areas. Some notable early examples of LOD rendering in 3D video games include The Killing Cloud, Spyro the Dragon, Crash Bandicoot: Warped, Unreal Tournament and the Serious Sam engine. Most modern 3D games use a combination of LOD rendering techniques, using different models for large structures and distance culling for environment details like grass and trees. The effect is sometimes still noticeable, for example when the player character flies over the virtual terrain or uses a sniper scope for long-distance viewing. Grass and foliage in particular may seem to pop up as the player gets closer, an effect also known as foliage culling. LOD can also be used to render fractal terrain in real time. Unreal Engine 5's Nanite system essentially implements level of detail within meshes instead of just for objects as a whole.
=== In GIS and 3D city modelling ===
LOD is found in GIS and 3D city models as a similar concept. It indicates how thoroughly real-world features have been mapped and how much the model adheres to its real-world counterpart. Besides the geometric complexity, other metrics such as spatio-semantic coherence, resolution of the texture and attributes can be considered in the LOD of a model.
The standard CityGML contains one of the most prominent LOD categorizations.
The analogy of "LOD-ing" in GIS is referred to as generalization.
=== Rendering and modeling software ===
MeshLab, an open source mesh processing tool that is able to accurately simplify 3D polygonal meshes.
Polygon Cruncher, a commercial software package from Mootools that reduces the number of polygons of objects without changing their appearance.
Simplygon, a commercial mesh processing package for remeshing general input meshes into real-time renderable meshes.
== See also ==
Anisotropic filtering
Distance fog
Draw distance
Mipmap
Popping (computer graphics)
Progressive meshes
Sparse voxel octree
Spatial resolution
== References ==
^ Communications of the ACM, October 1976 Volume 19 Number 10. Pages 547–554. Hierarchical Geometric Models for Visible Surface Algorithms by James H. Clark, University of California at Santa Cruz. Digitalized scan is freely available at https://web.archive.org/web/20060910212907/http://accad.osu.edu/%7Ewaynec/history/PDFs/clark-vis-surface.pdf.
^ Catmull E., A Subdivision Algorithm for Computer Display of Curved Surfaces. Tech. Rep. UTEC-CSc-74-133, University of Utah, Salt Lake City, Utah, Dec. 1
^ Ribelles, López, and Belmonte, "An Improved Discrete Level of Detail Model Through an Incremental Representation", 2010, Available at http://www3.uji.es/~ribelles/papers/2010-TPCG/tpcg10.pdf
^ de Boer, W.H., Fast Terrain Rendering using Geometrical Mipmapping, in flipCode featured articles, October 2000. Available at flipcode - Fast Terrain Rendering Using Geometrical MipMapping.
^ Carl Erikson's paper at http://www.cs.unc.edu/Research/ProjectSummaries/hlods.pdf provides a quick, yet effective overview of HLOD mechanisms. A more involved description follows in his thesis, at https://wwwx.cs.unc.edu/~geom/papers/documents/dissertations/erikson00.pdf.
Graphical perception is the human capacity for visually interpreting information on graphs and charts. Both quantitative and qualitative information can be said to be encoded into the image, and the human capacity to interpret it is sometimes called decoding. The importance of human graphical perception (what we discern easily versus what our brains have more difficulty decoding) is fundamental to good statistical graphics design, where clarity, transparency, accuracy and precision in data display are essential if a graph is to clarify and correctly convey the underlying science.
Graphical perception is achieved in dimensions or steps of discernment by:
detection : recognition of geometry which encodes physical values
assembly : grouping of detected symbol elements; discerning overall patterns in data
estimation : assessment of relative magnitudes of two physical values.
Cleveland and McGill's experiments to elucidate which graphical elements humans detect most accurately are a fundamental component of good statistical graphics design principles. In practical terms, graphs that display values as positions along a common scale are decoded most accurately and are therefore most effective. A graph type that utilizes this element is the dot plot. Conversely, angles are perceived with less accuracy; an example is the pie chart. Humans do not naturally order color hues, and only a limited number of hues can be discriminated in one graphic.
Graph designs that exploit pre-attentive visual processing in their assembly are why a picture can be worth a thousand words: they use the brain's ability to perceive patterns. Not all graphs are designed with pre-attentive processing in mind. For example, in the attached figure, the design relies on table look-up, which requires the brain to work harder and take longer to decode than a graph that utilizes our ability to discern patterns.
Graphic design that readily answers the scientific questions of interest will include appropriate estimation. Details for choosing the appropriate graph type for continuous and categorical data and for grouping have been described. Graphics principles for accuracy, clarity and transparency have been detailed and key elements summarized.
== See also ==
Spatial visualization ability
== References ==
== External links ==
A brief description and picture of Cleveland and McGill's nine graphical elements
"How William Cleveland Turned Data Visualization Into a Science" (2016) from Priceonomics.com
John Rauser's 2016 presentation, "How Humans See Data" at Velocity Amsterdam. Describes how good visualizations optimize for the human visual system
Michael Friendly's Gallery of Data Visualization: The Best and Worst of Statistical Graphics
Frame rate, most commonly expressed in frame/s, frames per second or FPS, is typically the frequency (rate) at which consecutive images (frames) are captured or displayed. This definition applies to film and video cameras, computer animation, and motion capture systems. In these contexts, frame rate may be used interchangeably with frame frequency and refresh rate, which are expressed in hertz. Additionally, in the context of computer graphics performance, FPS is the rate at which a system, particularly a GPU, is able to generate frames, and refresh rate is the frequency at which a display shows completed frames. In electronic camera specifications frame rate refers to the maximum possible rate frames could be captured, but in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number than the frame rate.
== Human vision ==
The temporal sensitivity and resolution of human vision varies depending on the type and characteristics of visual stimulus, and it differs between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion. Modulated light (such as a computer display) is perceived as stable by the majority of participants in studies when the rate is higher than 50 Hz. This perception of modulated light as steady is known as the flicker fusion threshold. However, when the modulated light is non-uniform and contains an image, the flicker fusion threshold can be much higher, in the hundreds of hertz. With regard to image recognition, people have been found to recognize a specific image in an unbroken series of different images, each of which lasts as little as 13 milliseconds. Persistence of vision sometimes accounts for very short single-millisecond visual stimulus having a perceived duration of between 100 ms and 400 ms. Multiple stimuli that are very short are sometimes perceived as a single stimulus, such as a 10 ms green flash of light immediately followed by a 10 ms red flash of light perceived as a single yellow flash of light.
== Film and video ==
=== Silent film ===
Early silent films had stated frame rates anywhere from 16 to 24 frames per second (FPS), but since the cameras were hand-cranked, the rate often changed during the scene to fit the mood. Projectionists could also change the frame rate in the theater by adjusting a rheostat controlling the voltage powering the film-carrying mechanism in the projector. Film companies often intended for theaters to show their silent films at a higher frame rate than that at which they were filmed. These frame rates were enough to give a sense of motion, but the motion was perceived as jerky. To minimize the perceived flicker, projectors employed dual- and triple-blade shutters, so each frame was displayed two or three times, increasing the flicker rate to 48 or 72 hertz and reducing eye strain. Thomas Edison said that 46 frames per second was the minimum needed for the eye to perceive motion: "Anything less will strain the eye." In the mid to late 1920s, the frame rate for silent film increased to 20–26 FPS.
=== Sound film ===
When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive than the eye to changes in frequency. Many theaters had shown silent films at 22 to 26 FPS, which is why the industry chose 24 FPS for sound film as a compromise. From 1927 to 1930, as various studios updated equipment, the rate of 24 FPS became standard for 35 mm sound film. At 24 FPS, the film travels through the projector at a rate of 456 millimetres (18.0 in) per second. This allowed simple two-blade shutters to give a projected series of images at 48 per second, satisfying Edison's recommendation. Many modern 35 mm film projectors use three-blade shutters to give 72 images per second—each frame is flashed on screen three times.
=== Animation ===
In drawn animation, moving characters are often shot "on twos", that is to say, one drawing is shown for every two frames of film (which usually runs at 24 frames per second), meaning there are only 12 drawings per second. Even though the image update rate is low, the fluidity is satisfactory for most subjects. However, when a character is required to perform a quick movement, it is usually necessary to revert to animating "on ones", as "twos" are too slow to convey the motion adequately. A blend of the two techniques keeps the eye fooled without unnecessary production cost.
Animation for most "Saturday morning cartoons" was produced as cheaply as possible and was most often shot on "threes" or even "fours", i.e. three or four frames per drawing. This translates to only 8 or 6 drawings per second respectively. Anime is also usually drawn on threes or twos.
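The effective number of distinct drawings per second for shooting on ones, twos, threes or fours is a simple division; a small sketch of the arithmetic above:

```python
def drawings_per_second(film_fps=24, frames_per_drawing=1):
    """Distinct drawings shown each second when each drawing is held
    for several film frames: "on twos" holds for 2 frames, "on threes"
    for 3, and so on."""
    return film_fps / frames_per_drawing
```

At 24 FPS film, shooting on twos yields 12 drawings per second, on threes 8, and on fours 6, matching the figures quoted above.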
=== Modern video standards ===
Due to the mains frequency of electric grids, analog television broadcast was developed with frame rates of 50 Hz (most of the world) or 60 Hz (Canada, US, Mexico, Philippines, Japan, South Korea). The frequency of the electricity grid was extremely stable and therefore it was logical to use for synchronization.
The introduction of color television technology made it necessary to lower that 60 FPS frequency by 0.1% to avoid "dot crawl", a display artifact that appeared on highly color-saturated surfaces on legacy black-and-white displays. Lowering the frame rate by 0.1% was found to minimize the undesirable effect.
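The 0.1% reduction amounts to dividing the nominal rate by 1.001, which yields the familiar NTSC color rates; a quick check of the arithmetic:

```python
def ntsc_rate(nominal_fps):
    """NTSC color rate: the nominal rate lowered by 0.1%, i.e. divided
    by 1.001."""
    return nominal_fps / 1.001

field_rate = ntsc_rate(60)  # about 59.94 fields per second
frame_rate = ntsc_rate(30)  # about 29.97 frames per second
```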
As of 2021, video transmission standards in North America, Japan, and South Korea are still based on 60/1.001 ≈ 59.94 images per second. Two sizes of images are typically used: 1920×1080 ("1080i/p") and 1280×720 ("720p"). Confusingly, interlaced formats are customarily stated at half their image rate (29.97/25 FPS) and double their image height, but these statements are purely custom; in each format, 60 images per second are produced. A resolution of 1080i produces 59.94 or 50 1920×540 images, each squashed to half-height in the photographic process and stretched back to fill the screen on playback in a television set. The 720p format produces 59.94/50 or 29.97/25 1280×720 images, not squeezed, so that no expansion or squeezing of the image is necessary. This confusion was industry-wide in the early days of digital video software, with much software written incorrectly on the assumption that only 29.97 complete images arrived each second. While each pixel location was indeed polled and sent only 29.97 times per second, the pixel location immediately below it was polled 1/60 of a second later, as part of a completely separate image for the next 1/60-second frame.
At its native 24 FPS rate, film could not be displayed on 60 Hz video without a pulldown process, which often leads to "judder": to convert 24 frames per second into 60 frames per second, every odd frame is repeated, playing twice, while every even frame is tripled. This creates uneven motion, appearing stroboscopic. Other conversions involve similarly uneven frame doubling. Newer video standards support 120, 240, or 300 frames per second, so frames can be evenly sampled for standard frame rates such as 24, 48 and 60 FPS film or 25, 30, 50 or 60 FPS video. Of course these higher frame rates may also be displayed at their native rates.
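The 2:3 cadence described above (odd frames shown twice, even frames three times) can be sketched as a simple sequence expansion; 24 source frames become exactly 60 displayed frames:

```python
def pulldown_2_3(frames):
    """Expand 24 FPS source frames to 60 FPS by alternately holding
    each frame for 2 then 3 display frames (the pulldown cadence
    described above). The uneven hold times are what cause judder."""
    out = []
    for i, frame in enumerate(frames):
        out.extend([frame] * (2 if i % 2 == 0 else 3))
    return out
```

For example, the two source frames A, B expand to A, A, B, B, B, and one second of film (24 frames) expands to 60 display frames.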
=== Electronic camera specifications ===
In electronic camera specifications, frame rate refers to the maximum possible rate at which frames can be captured (e.g. if the exposure time were set to near zero); in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number than the frame rate.
== Computer games ==
In computer video games, frame rate plays an important part in the experience as, unlike film, games are rendered in real-time. 60 frames per second has for a long time been considered the minimum frame rate for smoothly animated game play. Video games designed for PAL markets, before the sixth generation of video game consoles, had lower frame rates by design due to the 50 Hz output. This noticeably made fast-paced games, such as racing or fighting games, run slower; less frequently developers accounted for the frame rate difference and altered the game code to achieve (nearly) identical pacing across both regions, with varying degrees of success. Computer monitors marketed to competitive PC gamers can hit 360 Hz, 500 Hz, or more. High frame rates make action scenes look less blurry, such as sprinting through the wilderness in an open world game, spinning rapidly to face an opponent in a first-person shooter, or keeping track of details during an intense fight in a multiplayer online battle arena. Input latency is also reduced. Some people may have difficulty perceiving the differences between high frame rates, though.
Frame time is related to frame rate, but it measures the time between frames. A game could maintain an average of 60 frames per second but appear choppy because of a poor frame time. Game reviews sometimes average the worst 1% of frame rates, reported as the 99th percentile, to measure how choppy the game appears. A small difference between the average frame rate and the 99th percentile would generally indicate a smooth experience. To mitigate the choppiness of poorly optimized games, players can set frame rate caps closer to their 99th-percentile frame rate.
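The "worst 1%" metric described above can be sketched as follows; this is a minimal illustration of the idea, not the exact method any particular review outlet uses:

```python
def one_percent_low(frame_times_ms):
    """Average of the worst 1% of per-frame times (in milliseconds),
    the choppiness metric described above. A value far above the mean
    frame time indicates stutter even when the average rate is high."""
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, len(worst) // 100)  # at least one sample
    return sum(worst[:n]) / n
```

For example, a run of 99 frames at 16.7 ms with a single 50 ms hitch averages close to 60 FPS, yet its 1% low of 50 ms reveals the stutter.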
When a game's frame rate is different than the display's refresh rate, screen tearing can occur. Vsync mitigates this, but it caps the frame rate to the display's refresh rate, increases input lag, and introduces judder. Variable refresh rate displays automatically set their refresh rate equal to the game's frame rate, as long as it is within the display's supported range.
== Frame rate up-conversion ==
Frame rate up-conversion (FRC) is the process of increasing the temporal resolution of a video sequence by synthesizing one or more intermediate frames between two consecutive frames. A low frame rate causes aliasing, yields abrupt motion artifacts, and degrades video quality. Consequently, temporal resolution is an important factor affecting video quality. FRC algorithms are widely used in applications including visual quality enhancement, video compression and slow-motion video generation.
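As a baseline for what FRC methods must improve on, the crudest way to synthesize an intermediate frame is a per-pixel linear blend of its two neighbors. Real methods compensate for motion, as the sections below describe; plain blending ghosts moving objects. A minimal sketch on frames represented as 2D lists of intensities:

```python
def blend_midframe(frame_a, frame_b, t=0.5):
    """Naive intermediate-frame synthesis: per-pixel linear blend at
    time t between two frames. No motion compensation, so moving
    objects appear doubled (ghosting); shown only as a baseline."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```

Flow-based and pixel-hallucination-based methods replace this direct blend with motion-aware warping or learned frame generation.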
=== Methods ===
Most FRC methods can be categorized into optical flow or kernel-based and pixel hallucination-based methods.
==== Flow-based FRC ====
Flow-based methods linearly combine predicted optical flows between the two input frames to approximate the flows from the target intermediate frame to the input frames. Some methods also employ flow reversal (projection) for more accurate image warping. Moreover, there are algorithms that give different weights to overlapped flow vectors, depending on the object depth of the scene, via a flow projection layer.
==== Pixel hallucination-based FRC ====
Pixel hallucination-based methods apply deformable convolution in the center-frame generator, replacing optical flows with offset vectors. There are algorithms that also interpolate middle frames with the help of deformable convolution in the feature domain. However, since these methods directly hallucinate pixels, unlike the flow-based FRC methods, the predicted frames tend to be blurry when fast-moving objects are present.
== See also ==
Delta timing
Federal Standard 1037C
Film-out
Flicker fusion threshold
Glossary of video terms
High frame rate
List of motion picture film formats
Micro stuttering
MIL-STD-188
Movie projector
Time-lapse photography
Video compression
== References ==
== External links ==
"Temporal Rate Conversion"—a very detailed guide about the visual interference of TV, video & PC (Wayback Machine copy)
Compare frames per second: which looks better?—a web tool to visually compare differences in frame rate and motion blur.
Visualization (or visualisation), also known as graphics visualization, is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.
Visualization today has ever-expanding applications in science, education, engineering (e.g., product visualization), interactive multimedia, medicine, etc. Typical of a visualization application is the field of computer graphics. The invention of computer graphics (and 3D computer graphics) may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization.
== Overview ==
The use of visualization to present information is not a new phenomenon. It has been used in maps, scientific drawings, and data plots for over a thousand years. Examples from cartography include Ptolemy's Geographia (2nd century AD), a map of China (1137 AD), and Minard's map (1861) of Napoleon's invasion of Russia a century and a half ago. Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization. Edward Tufte has written three critically acclaimed books that explain many of these principles.
Computer graphics has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the publication of Visualization in Scientific Computing, a special issue of Computer Graphics. Since then, there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH, devoted to the general topic, and special areas in the field, for example volume visualization.
Most people are familiar with the digital animations produced to present meteorological data during weather reports on television, though few can distinguish between those models of reality and the satellite photos that are also shown on such programs. TV also offers scientific visualizations when it shows computer drawn and animated reconstructions of road or airplane accidents. Some of the most popular examples of scientific visualizations are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on other planets. Dynamic forms of visualization, such as educational animation or timelines, have the potential to enhance learning about systems that change over time.
Apart from the distinction between interactive visualizations and animation, the most useful categorization is probably between abstract and model-based scientific visualizations. The abstract visualizations show completely conceptual constructs in 2D or 3D. These generated shapes are completely arbitrary. The model-based visualizations either place overlays of data on real or digitally constructed images of reality or make a digital construction of a real object directly from the scientific data.
Scientific visualization is usually done with specialized software, though there are a few exceptions, noted below. Some of these specialized programs have been released as open source software, very often having their origins in universities, within an academic environment where sharing software tools and giving access to the source code is common. There are also many proprietary software packages of scientific visualization tools.
Models and frameworks for building visualizations include the data flow models popularized by systems such as AVS, IRIS Explorer, and VTK toolkit, and data state models in spreadsheet systems such as the Spreadsheet for Visualization and Spreadsheet for Images.
== Applications ==
=== Scientific visualization ===
As a subject in computer science, scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning.
Scientific visualization is the transformation, selection, or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration, analysis, and understanding of the data. Scientific visualization focuses and emphasizes the representation of higher order data using primarily graphics and animation techniques. It is a very important part of visualization and maybe the first one, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization, and chemical visualization. There are several different techniques to visualize scientific data, with isosurface reconstruction and direct volume rendering being the more common.
=== Data and information visualization ===
Data visualization is a related subcategory of visualization dealing with statistical graphics and geospatial data (as in thematic cartography) that is abstracted in schematic form.
Information visualization concentrates on the use of computer-supported tools to explore large amount of abstract data. The term "information visualization" was originally coined by the User Interface Research Group at Xerox PARC and included Jock Mackinlay. Practical application of information visualization in computer programs involves selecting, transforming, and representing abstract data in a form that facilitates human interaction for exploration and understanding. Important aspects of information visualization are dynamics of visual representation and the interactivity. Strong techniques enable the user to modify the visualization in real-time, thus affording unparalleled perception of patterns and structural relations in the abstract data in question.
=== Educational visualization ===
Educational visualization is using a simulation to create an image of something so it can be taught about. This is very useful when teaching about a topic that is difficult to otherwise see, for example, atomic structure, because atoms are far too small to be studied easily without expensive and difficult to use scientific equipment.
=== Knowledge visualization ===
The use of visual representations to transfer knowledge between at least two persons aims to improve the transfer of knowledge by using computer and non-computer-based visualization methods complementarily. Thus properly designed visualization is an important part of not only data analysis but knowledge transfer process, too. Knowledge transfer may be significantly improved using hybrid designs as it enhances information density but may decrease clarity as well. For example, visualization of a 3D scalar field may be implemented using iso-surfaces for field distribution and textures for the gradient of the field. Examples of such visual formats are sketches, diagrams, images, objects, interactive visualizations, information visualization applications, and imaginary visualizations as in stories. While information visualization concentrates on the use of computer-supported tools to derive new insights, knowledge visualization focuses on transferring insights and creating new knowledge in groups. Beyond the mere transfer of facts, knowledge visualization aims to further transfer insights, experiences, attitudes, values, expectations, perspectives, opinions, and estimates in different fields by using various complementary visualizations.
See also: picture dictionary, visual dictionary
=== Product visualization ===
Product visualization involves visualization software technology for the viewing and manipulation of 3D models, technical drawing and other related documentation of manufactured components and large assemblies of products. It is a key part of product lifecycle management. Product visualization software typically provides high levels of photorealism so that a product can be viewed before it is actually manufactured. This supports functions ranging from design and styling to sales and marketing. Technical visualization is an important aspect of product development. Originally technical drawings were made by hand, but with the rise of advanced computer graphics the drawing board has been replaced by computer-aided design (CAD). CAD-drawings and models have several advantages over hand-made drawings such as the possibility of 3-D modeling, rapid prototyping, and simulation. 3D product visualization promises more interactive experiences for online shoppers, but also challenges retailers to overcome hurdles in the production of 3D content, as large-scale 3D content production can be extremely costly and time-consuming.
=== Visual communication ===
Visual communication is the communication of ideas through the visual display of information. Primarily associated with two dimensional images, it includes: alphanumerics, art, signs, and electronic resources. Recent research in the field has focused on web design and graphically oriented usability.
=== Visual analytics ===
Visual analytics focuses on human interaction with visualization systems as part of a larger process of data analysis. Visual analytics has been defined as "the science of analytical reasoning supported by the interactive visual interface".
Its focus is on human information discourse (interaction) within massive, dynamically changing information spaces. Visual analytics research concentrates on support for perceptual and cognitive operations that enable users to detect the expected and discover the unexpected in complex information spaces.
Technologies resulting from visual analytics find their application in almost all fields, but are being driven by critical needs (and funding) in biology and national security.
== Interactivity ==
Interactive visualization or interactive visualisation is a branch of graphic visualization in computer science that involves studying how humans interact with computers to create graphic illustrations of information and how this process can be made more efficient.
For a visualization to be considered interactive it must satisfy two criteria:
Human input: control of some aspect of the visual representation of information, or of the information being represented, must be available to a human, and
Response time: changes made by the human must be incorporated into the visualization in a timely manner. In general, interactive visualization is considered a soft real-time task.
One particular type of interactive visualization is virtual reality (VR), where the visual representation of information is presented using an immersive display device such as a stereo projector (see stereoscopy). VR is also characterized by the use of a spatial metaphor, where some aspect of the information is represented in three dimensions so that humans can explore the information as if it were present (where instead it was remote), sized appropriately (where instead it was on a much smaller or larger scale than humans can sense directly), or had shape (where instead it might be completely abstract).
Another type of interactive visualization is collaborative visualization, in which multiple people interact with the same computer visualization to communicate their ideas to each other or to explore information cooperatively. Frequently, collaborative visualization is used when people are physically separated. Using several networked computers, the same visualization can be presented to each person simultaneously. The people then make annotations to the visualization as well as communicate via audio (i.e., telephone), video (i.e., a video-conference), or text (i.e., IRC) messages.
=== Human control of visualization ===
The Programmer's Hierarchical Interactive Graphics System (PHIGS) was one of the first programmatic efforts at interactive visualization and provided an enumeration of the types of input humans provide. People can:
Pick some part of an existing visual representation;
Locate a point of interest (which may not have an existing representation);
Stroke a path;
Choose an option from a list of options;
Valuate by inputting a number; and
Write by inputting text.
All of these actions require a physical device. Input devices range from the common – keyboards, mice, graphics tablets, trackballs, and touchpads – to the esoteric – wired gloves, boom arms, and even omnidirectional treadmills.
These input actions can be used to control both the unique information being represented or the way that the information is presented. When the information being presented is altered, the visualization is usually part of a feedback loop. For example, consider an aircraft avionics system where the pilot inputs roll, pitch, and yaw and the visualization system provides a rendering of the aircraft's new attitude. Another example would be a scientist who changes a simulation while it is running in response to a visualization of its current progress. This is called computational steering.
More frequently, the representation of the information is changed rather than the information itself.
=== Rapid response to human input ===
Experiments have shown that a delay of more than 20 ms between when input is provided and a visual representation is updated is noticeable by most people. Thus it is desirable for an interactive visualization to provide a rendering based on human input within this time frame. However, when large amounts of data must be processed to create a visualization, this becomes hard or even impossible with current technology. Thus the term "interactive visualization" is usually applied to systems that provide feedback to users within several seconds of input. The term interactive framerate is often used to measure how interactive a visualization is. Framerates measure the frequency with which an image (a frame) can be generated by a visualization system. A framerate of 50 frames per second (frame/s) is considered good while 0.1 frame/s would be considered poor. The use of framerates to characterize interactivity is slightly misleading, however, since framerate is a measure of bandwidth while humans are more sensitive to latency. Specifically, it is possible to achieve a good framerate of 50 frame/s, but if the images generated refer to changes to the visualization that a person made more than 1 second ago, it will not feel interactive to a person.
The rapid response time required for interactive visualization is a difficult constraint to meet, and several approaches have been explored to provide people with rapid visual feedback based on their input. Some include:
Parallel rendering – where more than one computer or video card is used simultaneously to render an image. Multiple frames can be rendered at the same time by different computers and the results transferred over the network for display on a single monitor. This requires each computer to hold a copy of all the information to be rendered and increases bandwidth, but also increases latency. Also, each computer can render a different region of a single frame and send the results over a network for display. This again requires each computer to hold all of the data and can lead to a load imbalance when one computer is responsible for rendering a region of the screen with more information than other computers. Finally, each computer can render an entire frame containing a subset of the information. The resulting images plus the associated depth buffer can then be sent across the network and merged with the images from other computers. The result is a single frame containing all the information to be rendered, even though no single computer's memory held all of the information. This is called parallel depth compositing and is used when large amounts of information must be rendered interactively.
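To make the idea of parallel depth compositing concrete, here is a minimal sketch (not from the article; the colors and depth values are illustrative): two renderers each produce a full frame for a subset of the scene, together with a per-pixel depth buffer, and compositing keeps the closest pixel at each position.

```python
# Illustrative sketch of parallel depth compositing.  Each renderer
# produced a full frame for a *subset* of the scene plus a depth buffer;
# merging keeps, at every pixel, the color with the smallest depth.

def depth_composite(frame_a, depth_a, frame_b, depth_b):
    """Merge two (color, depth) frames pixel by pixel."""
    out_frame, out_depth = [], []
    for ca, da, cb, db in zip(frame_a, depth_a, frame_b, depth_b):
        if da <= db:          # pixel from renderer A is closer
            out_frame.append(ca)
            out_depth.append(da)
        else:                 # pixel from renderer B is closer
            out_frame.append(cb)
            out_depth.append(db)
    return out_frame, out_depth

# Two 4-pixel frames rendered from disjoint subsets of the scene
# (depth 9.9 stands in for empty background):
frame, depth = depth_composite(
    ["red", "red", "bg", "bg"],  [2.0, 1.0, 9.9, 9.9],
    ["bg", "blue", "blue", "bg"], [9.9, 0.5, 3.0, 9.9])
# frame == ["red", "blue", "blue", "bg"]
```

The final frame contains all the rendered geometry even though neither renderer held the whole scene, which is exactly what makes the technique attractive for very large data sets.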
Progressive rendering – where a framerate is guaranteed by rendering some subset of the information to be presented and providing incremental (progressive) improvements to the rendering once the visualization is no longer changing.
Level-of-detail (LOD) rendering – where simplified representations of information are rendered to achieve a desired framerate while a person is providing input and then the full representation is used to generate a still image once the person is through manipulating the visualization. One common variant of LOD rendering is subsampling. When the information being represented is stored in a topologically rectangular array (as is common with digital photos, MRI scans, and finite difference simulations), a lower resolution version can easily be generated by skipping n points for each 1 point rendered. Subsampling can also be used to accelerate rendering techniques such as volume visualization that require more than twice the computations for an image twice the size. By rendering a smaller image and then scaling the image to fill the requested screen space, much less time is required to render the same data.
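The subsampling variant above can be sketched in a few lines (illustrative only; a real renderer would subsample the stored data, not a Python list): keeping one sample out of every n in each dimension of a rectangular array yields a cheap lower-resolution version.

```python
def subsample(grid, n):
    """Keep every n-th sample in each dimension of a 2D array."""
    return [row[::n] for row in grid[::n]]

# A hypothetical 8x8 "image" whose value encodes its (row, column):
image = [[r * 10 + c for c in range(8)] for r in range(8)]

low_res = subsample(image, 4)   # 8x8 -> 2x2
# low_res == [[0, 4], [40, 44]]
```

The subsampled grid is rendered while the user interacts, and the full-resolution grid only once interaction stops.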
Frameless rendering – where the visualization is no longer presented as a time series of images, but as a single image where different regions are updated over time.
== See also ==
Graphical perception
Spatial visualization ability
Visual language
== References ==
== Further reading ==
Battiti, Roberto; Mauro Brunato (2011). Reactive Business Intelligence. From Data to Models to Insight. Trento, Italy: Reactive Search Srl. ISBN 978-88-905795-0-9.
Bederson, Benjamin B., and Ben Shneiderman. The Craft of Information Visualization: Readings and Reflections, Morgan Kaufmann, 2003, ISBN 1-55860-915-6.
Cleveland, William S. (1993). Visualizing Data.
Cleveland, William S. (1994). The Elements of Graphing Data.
Charles D. Hansen, Chris Johnson. The Visualization Handbook, Academic Press (June 2004).
Kravetz, Stephen A. and David Womble. ed. Introduction to Bioinformatics. Totowa, N.J. Humana Press, 2003.
Mackinlay, Jock D. (1999). Readings in information visualization: using vision to think. Card, S. K., Ben Shneiderman (eds.). Morgan Kaufmann Publishers Inc. pp. 686. ISBN 1-55860-533-9.
Will Schroeder, Ken Martin, Bill Lorensen. The Visualization Toolkit, 2004.
Spence, Robert Information Visualization: Design for Interaction (2nd Edition), Prentice Hall, 2007, ISBN 0-13-206550-9.
Edward R. Tufte (1992). The Visual Display of Quantitative Information
Edward R. Tufte (1990). Envisioning Information.
Edward R. Tufte (1997). Visual Explanations: Images and Quantities, Evidence and Narrative.
Matthew Ward, Georges Grinstein, Daniel Keim. Interactive Data Visualization: Foundations, Techniques, and Applications. (May 2010).
Wilkinson, Leland. The Grammar of Graphics, Springer ISBN 0-387-24544-8
== External links ==
National Institute of Standards and Technology
Scientific Visualization Tutorials, Georgia Tech
Scientific Visualization Studio (NASA)
Visual-literacy.org, (e.g. Periodic Table of Visualization Methods)
Conferences
Many conferences are held at which academic papers on interactive visualization are presented and published.
Amer. Soc. of Information Science and Technology (ASIS&T SIGVIS) Special Interest Group in Visualization Information and Sound
ACM SIGCHI
ACM SIGGRAPH
ACM VRST
Eurographics
IEEE Visualization
ACM Transactions on Graphics
IEEE Transactions on Visualization and Computer Graphics
ACM SIGGRAPH is the international Association for Computing Machinery's Special Interest Group on Computer Graphics and Interactive Techniques based in New York. It was founded in 1969 by Andy van Dam (its direct predecessor, ACM SICGRAPH was founded two years earlier in 1967).
ACM SIGGRAPH convenes the annual SIGGRAPH conference, attended by tens of thousands of computer professionals. The organization also sponsors other conferences around the world, and regular events are held by its professional and student chapters in several countries.
== Committees ==
=== Professional and Student Chapters Committee ===
The Professional and Student Chapters Committee (PSCC) is the leadership group that oversees the activities of ACM SIGGRAPH Chapters around the world. Details about Local Chapters can be found below.
=== International Resources Committee ===
The International Resources Committee (IRC) facilitates worldwide collaboration in the ACM SIGGRAPH community throughout the year, provides an English review service to help submitters whose first language is not English, and encourages participation in all SIGGRAPH conference venues, activities, and events.
== Awards ==
ACM SIGGRAPH presents six awards to recognize achievement in computer graphics. The awards are presented at the annual SIGGRAPH conference.
=== Steven A. Coons Award ===
The Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics is considered the highest award in computer graphics, and is presented each odd-numbered year to individuals who have made a lifetime contribution to computer graphics. It is named for Steven Anson Coons, an early pioneer in interactive computer graphics.
Recipients:
=== Computer Graphics Achievement Award ===
The Computer Graphics Achievement award is given each year to recognize individuals for an outstanding achievement in computer graphics and interactive techniques that provided a significant advance in the state of the art of computer graphics and is still significant and apparent.
Recipients:
=== Significant New Researcher Award ===
The Significant New Researcher Award is given annually to a researcher with a recent significant contribution to computer graphics.
Recipients:
=== Distinguished Artist Award ===
The Distinguished Artist Award is presented annually to an artist who has created a significant body of digital art work that has advanced the aesthetic content of the medium.
Recipients:
== Professional and Student Chapters ==
Within their local areas, Chapters continue the work of ACM SIGGRAPH on a year-round basis via their meetings and other activities. Each ACM SIGGRAPH Professional and Student Chapter consists of individuals involved in education, research & development, the arts, industry and entertainment. ACM SIGGRAPH Chapter members are interested in the advancement of computer graphics and interactive techniques, its related technologies and applications. For the annual conference, some of the Chapters produce a "Fast Forward" overview of activities.
Listed below are some examples of Chapter activities:
MetroCAF is the annual NYC Metropolitan Area College Computer Animation Festival, organized by the New York City chapter of ACM SIGGRAPH.
Bogota ACM SIGGRAPH has become one of the largest animation and VFX events in Latin America, drawing more than 6,000 registered attendees at its 2015 edition.
ACM SIGGRAPH Helsinki runs an evening-long graphics conference called SyysGraph, held every autumn. The seminar aims to present the latest developments in 3D graphics, demos, animations and interactive technologies. The presentations are held in English.
Silicon Valley ACM SIGGRAPH held "Star Wars: The Force Awakens" Visual Effects Panel with Industrial Light & Magic.
== See also ==
Association for Computing Machinery
ACM Transactions on Graphics
Computer Graphics, its defunct quarterly periodical publication.
SIGGRAPH Conferences
== References ==
== External links ==
Official website
In mathematics, the Bogomolov conjecture is a conjecture, named after Fedor Bogomolov, in arithmetic geometry about algebraic curves that generalizes the Manin–Mumford conjecture in arithmetic geometry. The conjecture was proven by Emmanuel Ullmo and Shou-Wu Zhang in 1998 using Arakelov theory. A further generalization to general abelian varieties was also proved by Zhang in 1998.
== Statement ==
Let C be an algebraic curve of genus g at least two defined over a number field K, let {\displaystyle {\overline {K}}} denote the algebraic closure of K, fix an embedding of C into its Jacobian variety J, and let {\displaystyle {\hat {h}}} denote the Néron-Tate height on J associated to an ample symmetric divisor. Then there exists an {\displaystyle \epsilon >0} such that the set
{\displaystyle \{P\in C({\overline {K}}):{\hat {h}}(P)<\epsilon \}}
is finite.
Since {\displaystyle {\hat {h}}(P)=0} if and only if P is a torsion point, the Bogomolov conjecture generalises the Manin-Mumford conjecture.
== Proof ==
The original Bogomolov conjecture was proved by Emmanuel Ullmo and Shou-Wu Zhang using Arakelov theory in 1998.
== Generalization ==
In 1998, Zhang proved the following generalization:
Let A be an abelian variety defined over K, and let {\displaystyle {\hat {h}}} be the Néron-Tate height on A associated to an ample symmetric divisor. A subvariety {\displaystyle X\subset A} is called a torsion subvariety if it is the translate of an abelian subvariety of A by a torsion point. If X is not a torsion subvariety, then there is an {\displaystyle \epsilon >0} such that the set
{\displaystyle \{P\in X({\overline {K}}):{\hat {h}}(P)<\epsilon \}}
is not Zariski dense in X.
== References ==
=== Other sources ===
Chambert-Loir, Antoine (2013). "Diophantine geometry and analytic spaces". In Amini, Omid; Baker, Matthew; Faber, Xander (eds.). Tropical and non-Archimedean geometry. Bellairs workshop in number theory, tropical and non-Archimedean geometry, Bellairs Research Institute, Holetown, Barbados, USA, May 6–13, 2011. Contemporary Mathematics. Vol. 605. Providence, RI: American Mathematical Society. pp. 161–179. ISBN 978-1-4704-1021-6. Zbl 1281.14002.
== Further reading ==
The Manin-Mumford conjecture: a brief survey, by Pavlos Tzermias
In computer science, analysis of parallel algorithms is the process of finding the computational complexity of algorithms executed in parallel – the amount of time, storage, or other resources needed to execute them. In many respects, analysis of parallel algorithms is similar to the analysis of sequential algorithms, but is generally more involved because one must reason about the behavior of multiple cooperating threads of execution. One of the primary goals of parallel analysis is to understand how a parallel algorithm's use of resources (speed, space, etc.) changes as the number of processors is changed.
== Background ==
A so-called work-time (WT) framework (sometimes called work-depth, or work-span) was originally introduced by Shiloach and Vishkin for conceptualizing and describing parallel algorithms.
In the WT framework, a parallel algorithm is first described in terms of parallel rounds. For each round, the operations to be performed are characterized, but several issues can be suppressed. For example, the number of operations at each round need not be clear, processors need not be mentioned, and any information that may help with the assignment of processors to jobs need not be accounted for. Second, the suppressed information is provided. The inclusion of the suppressed information is guided by the proof of a scheduling theorem due to Brent, which is explained later in this article. The WT framework is useful: while it can greatly simplify the initial description of a parallel algorithm, inserting the details suppressed by that initial description is often not very difficult. For example, the WT framework was adopted as the basic presentation framework in parallel algorithms textbooks (for the parallel random-access machine PRAM model) as well as in class notes. The overview below explains how the WT framework can be used for analyzing more general parallel algorithms, even when their description is not available within the WT framework.
== Definitions ==
Suppose computations are executed on a machine that has p processors. Let Tp denote the time that expires between the start of the computation and its end. Analysis of the computation's running time focuses on the following notions:
The work of a computation executed by p processors is the total number of primitive operations that the processors perform. Ignoring communication overhead from synchronizing the processors, this is equal to the time used to run the computation on a single processor, denoted T1.
The depth or span is the length of the longest series of operations that have to be performed sequentially due to data dependencies (the critical path). The depth may also be called the critical path length of the computation. Minimizing the depth/span is important in designing parallel algorithms, because the depth/span determines the shortest possible execution time. Alternatively, the span can be defined as the time T∞ spent computing using an idealized machine with an infinite number of processors.
The cost of the computation is the quantity pTp. This expresses the total time spent, by all processors, in both computing and waiting.
Several useful results follow from the definitions of work, span and cost:
Work law. The cost is always at least the work: pTp ≥ T1. This follows from the fact that p processors can perform at most p operations in parallel.
Span law. A finite number p of processors cannot outperform an infinite number, so that Tp ≥ T∞.
Using these definitions and laws, the following measures of performance can be given:
Speedup is the gain in speed made by parallel execution compared to sequential execution: Sp = T1 / Tp. When the speedup is Ω(p) for p processors (using big O notation), the speedup is linear, which is optimal in simple models of computation because the work law implies that T1 / Tp ≤ p (super-linear speedup can occur in practice due to memory hierarchy effects). The situation T1 / Tp = p is called perfect linear speedup. An algorithm that exhibits linear speedup is said to be scalable. Analytical expressions for the speedup of many important parallel algorithms are presented in the literature.
Efficiency is the speedup per processor, Sp / p.
Parallelism is the ratio T1 / T∞. It represents the maximum possible speedup on any number of processors. By the span law, the parallelism bounds the speedup: if p > T1 / T∞, then:
{\displaystyle {\frac {T_{1}}{T_{p}}}\leq {\frac {T_{1}}{T_{\infty }}}<p.}
The slackness is T1 / (pT∞). A slackness less than one implies (by the span law) that perfect linear speedup is impossible on p processors.
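These measures can be computed mechanically from the quantities defined above. The sketch below (the timing values are hypothetical, chosen only for illustration) derives all four from measured T1, Tp and T∞:

```python
def performance_measures(t1, tp, tinf, p):
    """Speedup, efficiency, parallelism and slackness from T1, Tp, T_inf."""
    return {
        "speedup": t1 / tp,            # S_p = T1 / Tp
        "efficiency": t1 / (p * tp),   # S_p / p
        "parallelism": t1 / tinf,      # maximum possible speedup
        "slackness": t1 / (p * tinf),  # < 1 rules out perfect speedup
    }

# Hypothetical computation: work T1 = 1000 operations, span T_inf = 10,
# measured Tp = 60 on p = 20 processors.
m = performance_measures(1000, 60, 10, 20)
# speedup ~16.7, efficiency ~0.83, parallelism 100, slackness 5
```

Note that the example is consistent with the laws above: the work law (p·Tp = 1200 ≥ 1000 = T1) and the span law (Tp = 60 ≥ 10 = T∞) both hold.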
== Execution on a limited number of processors ==
Analysis of parallel algorithms is usually carried out under the assumption that an unbounded number of processors is available. This is unrealistic, but not a problem, since any computation that can run in parallel on N processors can be executed on p < N processors by letting each processor execute multiple units of work. A result called Brent's law states that one can perform such a "simulation" in time Tp, bounded by
{\displaystyle T_{p}\leq T_{N}+{\frac {T_{1}-T_{N}}{p}},}
or, less precisely,
{\displaystyle T_{p}=O\left(T_{N}+{\frac {T_{1}}{p}}\right).}
An alternative statement of the law bounds Tp above and below by
{\displaystyle {\frac {T_{1}}{p}}\leq T_{p}\leq {\frac {T_{1}}{p}}+T_{\infty }},
showing that the span (depth) T∞ and the work T1 together provide reasonable bounds on the computation time.
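Brent's bound can be checked empirically by simulating a greedy scheduler. The sketch below is illustrative (unit-time tasks and a small hypothetical diamond-shaped dependency graph): it runs at most p ready tasks per time step and verifies that the measured Tp falls within the bounds above.

```python
# Greedily schedule a task DAG on p processors and check the measured
# Tp against Brent's bounds.  Every task takes one unit of time;
# deps maps each task to the tasks it depends on.

def greedy_schedule(deps, p):
    done, time = set(), 0
    while len(done) < len(deps):
        ready = [t for t in deps if t not in done
                 and all(d in done for d in deps[t])]
        done.update(ready[:p])   # run at most p ready tasks this step
        time += 1
    return time

# A diamond DAG: a -> (b, c) -> d.  Work T1 = 4 tasks, span T_inf = 3.
deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
t1, tinf, p = 4, 3, 2
tp = greedy_schedule(deps, p)            # tp == 3
assert t1 / p <= tp <= t1 / p + tinf     # Brent's bounds: 2 <= 3 <= 5
```

Here the simulated time of 3 steps sits between the lower bound T1/p = 2 and the upper bound T1/p + T∞ = 5, as the law predicts.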
== References ==
In computing, a system resource, or simply resource, is any physical or virtual component of limited availability that is accessible to a computer. All connected devices and internal system components are resources. Virtual system resources include files (concretely file handles), network connections (concretely network sockets), and memory areas.
Managing resources is referred to as resource management, and includes both preventing resource leaks (not releasing a resource when a process has finished using it) and dealing with resource contention (when multiple processes wish to access a limited resource). Computing resources are used in cloud computing to provide services through networks.
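As a minimal sketch of these two concerns (illustrative only, not tied to any particular OS API), the following ties release of a resource to scope so it cannot leak even when an error occurs, and refuses a second concurrent acquisition, modeling contention:

```python
# Scope-based resource management: the context manager guarantees
# release even if the code using the resource raises an exception.

class Resource:
    def __init__(self):
        self.held = False

    def __enter__(self):
        if self.held:
            raise RuntimeError("resource contention: already in use")
        self.held = True          # acquire
        return self

    def __exit__(self, exc_type, exc, tb):
        self.held = False         # always release, error or not
        return False              # propagate any exception

r = Resource()
try:
    with r:
        raise ValueError("failure while holding the resource")
except ValueError:
    pass
assert not r.held                 # released despite the exception
```

Languages without scope-based constructs typically rely on `finally` blocks, destructors, or garbage-collector finalizers to achieve the same guarantee.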
== Major resource types ==
Interrupt request (IRQ) lines
Direct memory access (DMA) channels
Port-mapped I/O
Memory-mapped I/O
Locks
External devices
External memory or objects, such as memory managed in native code, from Java; or objects in the Document Object Model (DOM), from JavaScript
=== General resources ===
CPU, both time on a single CPU and use of multiple CPUs – see multitasking
Random-access memory and virtual memory – see memory management
Hard disk drives, including space generally, contiguous free space (such as for swap space), and use of multiple physical devices ("spindles"), since using multiple devices allows parallelism
Cache space, including CPU cache and MMU cache (translation lookaside buffer)
Network throughput
Electrical power
Input/output operations
Randomness
== Categories ==
Some resources, notably memory and storage space, have a notion of "location", and one can distinguish contiguous allocations from non-contiguous allocations. For example, allocating 1 GB of memory in a single block, versus allocating it in 1,024 blocks each of size 1 MB. The latter is known as fragmentation, and often severely impacts performance, so contiguous free space is a subcategory of the general resource of storage space.
One can also distinguish compressible resources from incompressible resources. Compressible resources, generally throughput ones such as CPU and network bandwidth, can be throttled benignly: the user will be slowed proportionally to the throttling, but will otherwise proceed normally. Other resources, generally storage ones such as memory, cannot be throttled without either causing failure (if a process cannot allocate enough memory, it typically cannot run) or severe performance degradation, such as due to thrashing (if a working set does not fit into memory and requires frequent paging, progress will slow significantly). The distinction is not always sharp; as mentioned, a paging system can allow main memory (primary storage) to be compressed (by paging to hard drive (secondary storage)), and some systems allow discardable memory for caches, which is compressible without disastrous performance impact. Electrical power is to some degree compressible: without power (or without sufficient voltage) an electrical device cannot run, and will stop or crash, but some devices, notably mobile phones, can allow degraded operation at reduced power consumption, or can allow the device to be suspended but not terminated, with much lower power consumption.
== See also ==
Computational resource
Linear scheduling method
Sequence step algorithm
System monitor
== References ==
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation:
{\displaystyle \log ^{*}n:={\begin{cases}0&{\mbox{if }}n\leq 1;\\1+\log ^{*}(\log n)&{\mbox{if }}n>1\end{cases}}}
In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well defined for any base greater than {\displaystyle e^{1/e}\approx 1.444667}, not only for base 2 and base e. The "super-logarithm" function {\displaystyle \mathrm {slog} _{b}(n)} is "essentially equivalent" to the base b iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration.
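The recurrence translates directly into code. The following is a straightforward sketch of the binary iterated logarithm lg*:

```python
from math import log2

def log_star(n):
    """Binary iterated logarithm lg*: the number of times log2 must be
    applied before the result is <= 1 (0 when n <= 1)."""
    count = 0
    while n > 1:
        n = log2(n)
        count += 1
    return count

# lg* grows extremely slowly:
#   log_star(2) == 1, log_star(16) == 3, log_star(65536) == 4,
# and even for n = 2**65536 the value is only 5.
```

On exact powers of two the floating-point iteration is exact, which is why the tower 2, 4, 16, 65536, 2^65536 yields successive values 1 through 5.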
== Analysis of algorithms ==
The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:
Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized O(n log* n) time.
Fürer's algorithm for integer multiplication: O(n log n 2^(O(lg* n))).
Finding an approximate maximum (element at least as large as the median): lg* n − 1 ± 3 parallel operations.
Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: O(log* n) synchronous communication rounds.
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself, or repeats of it. This is because tetration grows much faster than the iterated exponential:
{\displaystyle {^{y}b}=\underbrace {b^{b^{\cdot ^{\cdot ^{b}}}}} _{y}\gg \underbrace {b^{b^{\cdot ^{\cdot ^{b^{y}}}}}} _{n}}
so the inverse grows much slower:
{\displaystyle \log _{b}^{*}x\ll \log _{b}^{n}x}.
For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.
Higher bases give smaller iterated logarithms.
== Other applications ==
The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is {\displaystyle O(\log ^{*}n)}.
In computational complexity theory, Santhanam shows that the computational resources DTIME — computation time for a deterministic Turing machine — and NTIME — computation time for a non-deterministic Turing machine — are distinct up to {\displaystyle n{\sqrt {\log ^{*}n}}.}
== See also ==
Inverse Ackermann function, an even more slowly growing function also used in computational complexity theory
== References ==
A hybrid algorithm is an algorithm that combines two or more other algorithms that solve the same problem, either choosing one based on some characteristic of the data, or switching between them over the course of the algorithm. This is generally done to combine desired features of each, so that the overall algorithm is better than the individual components.
"Hybrid algorithm" does not refer to simply combining multiple algorithms to solve a different problem – many algorithms can be considered as combinations of simpler pieces – but only to combining algorithms that solve the same problem, but differ in other characteristics, notably performance.
== Examples ==
In computer science, hybrid algorithms are very common in optimized real-world implementations of recursive algorithms, particularly implementations of
divide-and-conquer or decrease-and-conquer algorithms, where the size of the data decreases as one moves deeper in the recursion. In this case, one algorithm is used for the overall approach (on large data), but deep in the recursion, it switches to a different algorithm, which is more efficient on small data. A common example is in sorting algorithms, where the insertion sort, which is inefficient on large data but very efficient on small data (say, five to ten elements), is used as the final step, after primarily applying another algorithm, such as merge sort or quicksort. Merge sort and quicksort are asymptotically optimal on large data, but the overhead becomes significant if applying them to small data, hence the use of a different algorithm at the end of the recursion. A highly optimized hybrid sorting algorithm is Timsort, which combines merge sort and insertion sort with additional logic (including binary search) in the merging logic.
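A minimal sketch of such a hybrid (not Timsort itself; the cutoff of 8 is an arbitrary illustrative choice, where real libraries tune it empirically): merge sort that hands sufficiently small runs to insertion sort.

```python
CUTOFF = 8  # illustrative threshold below which insertion sort takes over

def insertion_sort(a):
    """In-place insertion sort; very fast on small lists."""
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_merge_sort(a):
    """Merge sort on large inputs, insertion sort deep in the recursion."""
    if len(a) <= CUTOFF:
        return insertion_sort(a)
    mid = len(a) // 2
    left = hybrid_merge_sort(a[:mid])
    right = hybrid_merge_sort(a[mid:])
    # standard merge of the two sorted halves
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

The overall structure is merge sort's, so the asymptotic bound is unchanged; the hybrid only trims the constant-factor overhead of recursing all the way down to single elements.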
A general procedure for a simple hybrid recursive algorithm is short-circuiting the base case, also known as arm's-length recursion. In this case whether the next step will result in the base case is checked before the function call, avoiding an unnecessary function call. For example, in a tree, rather than recursing to a child node and then checking if it is null, checking null before recursing. This is useful for efficiency when the algorithm usually encounters the base case many times, as in many tree algorithms, but is otherwise considered poor style, particularly in academia, due to the added complexity.
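A sketch of arm's-length recursion on a binary tree (the nested-tuple representation is an illustrative choice): each child is checked before the recursive call, so the function is never invoked on an empty subtree.

```python
# A binary tree stored as nested tuples (value, left, right), with None
# for a missing child.  The child is checked *before* recursing, so no
# call is ever made on None -- the base-case check is short-circuited.

def tree_sum(node):
    value, left, right = node
    total = value
    if left is not None:          # short-circuit the base case...
        total += tree_sum(left)   # ...so tree_sum(None) never happens
    if right is not None:
        total += tree_sum(right)
    return total

tree = (1, (2, None, None), (3, (4, None, None), None))
assert tree_sum(tree) == 10
```

The alternative style would recurse unconditionally and return 0 when `node is None`; the arm's-length version saves one function call per missing child, at the cost of duplicating the check at every call site.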
Another example of hybrid algorithms for performance reasons are introsort and introselect, which combine one algorithm for fast average performance, falling back on another algorithm to ensure (asymptotically) optimal worst-case performance. Introsort begins with a quicksort, but switches to a heap sort if quicksort is not progressing well; analogously introselect begins with quickselect, but switches to median of medians if quickselect is not progressing well.
Centralized distributed algorithms can often be considered as hybrid algorithms, consisting of an individual algorithm (run on each distributed processor), and a combining algorithm (run on a centralized distributor) – these correspond respectively to running the entire algorithm on one processor, or running the entire computation on the distributor, combining trivial results (a one-element data set from each processor). A basic example of these algorithms are distribution sorts, particularly used for external sorting, which divide the data into separate subsets, sort the subsets, and then combine the subsets into totally sorted data; examples include bucket sort and flashsort.
However, in general distributed algorithms need not be hybrid algorithms, as individual algorithms or combining or communication algorithms may be solving different problems. For example, in models such as MapReduce, the Map and Reduce step solve different problems, and are combined to solve a different, third problem.
== See also ==
Hybrid algorithm (constraint satisfaction)
Hybrid genetic algorithm
Hybrid input output (HIO) algorithm for phase retrieval
In the analysis of algorithms, the master theorem for divide-and-conquer recurrences provides an asymptotic analysis for many recurrence relations that occur in the analysis of divide-and-conquer algorithms. The approach was first presented by Jon Bentley, Dorothea Blostein (née Haken), and James B. Saxe in 1980, where it was described as a "unifying method" for solving such recurrences. The name "master theorem" was popularized by the widely used algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein.
Not all recurrence relations can be solved by this theorem; its generalizations include the Akra–Bazzi method.
== Introduction ==
Consider a problem that can be solved using a recursive algorithm such as the following:
procedure p(input x of size n):
if n < some constant k:
Solve x directly without recursion
else:
Create a subproblems of x, each having size n/b
Call procedure p recursively on each subproblem
Combine the results from the subproblems
The above algorithm divides the problem into a number (a) of subproblems recursively, each subproblem being of size n/b. The factor by which the size of subproblems is reduced (b) need not, in general, be the same as the number of subproblems (a). Its solution tree has a node for each recursive call, with the children of that node being the other calls made from that call. The leaves of the tree are the base cases of the recursion, the subproblems (of size less than k) that do not recurse. The above example would have a child nodes at each non-leaf node. Each node does an amount of work that corresponds to the size of the subproblem n passed to that instance of the recursive call and given by
f(n). The total amount of work done by the entire algorithm is the sum of the work performed by all the nodes in the tree.
The runtime of an algorithm such as the p above on an input of size n, usually denoted T(n), can be expressed by the recurrence relation
{\displaystyle T(n)=a\;T\left({\frac {n}{b}}\right)+f(n),}
where f(n) is the time to create the subproblems and combine their results in the above procedure. This equation can be successively substituted into itself and expanded to obtain an expression for the total amount of work done. The master theorem allows many recurrence relations of this form to be converted to Θ-notation directly, without doing an expansion of the recursive relation.
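The substitution process can be checked numerically. The sketch below evaluates the recurrence directly for a = b = 2 and f(n) = n (merge sort's recurrence, taking T(1) = 0 as the base case) and compares it with the exact closed form n·log2 n on powers of two, which is the Θ(n log n) bound the master theorem predicts:

```python
from math import log2

def T(n, a=2, b=2, f=lambda m: m):
    """Evaluate T(n) = a*T(n/b) + f(n) by direct expansion, with T(1) = 0."""
    if n <= 1:
        return 0
    return a * T(n // b, a, b, f) + f(n)

# For merge sort (a = b = 2, f(n) = n) the master theorem gives
# Theta(n log n); on exact powers of two the expansion equals n*log2(n).
for n in [2, 8, 64, 1024]:
    assert T(n) == n * log2(n)
```

Changing a, b, or f lets the same harness probe the other regimes of the theorem empirically, though of course it demonstrates rather than proves the asymptotic bound.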
== Generic form ==
The master theorem always yields asymptotically tight bounds for recurrences from divide-and-conquer algorithms that partition an input into smaller subproblems of equal size, solve the subproblems recursively, and then combine the subproblem solutions to give a solution to the original problem. The time for such an algorithm can be expressed by adding the work performed at the top level of the recursion (to divide the problem into subproblems and then combine the subproblem solutions) to the time taken by the recursive calls of the algorithm. If T(n) denotes the total time for the algorithm on an input of size n, and f(n) denotes the amount of time taken at the top level of the recurrence, then the time can be expressed by a recurrence relation of the form:
T(n) = a T(n/b) + f(n)
Here n is the size of an input problem, a is the number of subproblems in the recursion, and b is the factor by which the subproblem size is reduced in each recursive call (b > 1). Crucially, a and b must not depend on n. The theorem below also assumes that, as a base case for the recurrence, T(n) = Θ(1) when n is less than some bound κ > 0, the smallest input size that will lead to a recursive call.
Recurrences of this form often satisfy one of the three following regimes, based on how the work to split/recombine the problem, f(n), relates to the critical exponent c_crit = log_b a. (The table below uses standard big O notation.) Throughout, (log n)^k is used for clarity, though in textbooks this is usually rendered log^k n.
c_crit = log_b a = log(#subproblems) / log(relative subproblem size)
A useful extension of Case 2 handles all values of k:
=== Examples ===
==== Case 1 example ====
T(n) = 8T(n/2) + 1000n^2
As one can see from the formula above: a = 8, b = 2, f(n) = 1000n^2, so f(n) = O(n^c), where c = 2.
Next, we see if we satisfy the case 1 condition: log_b a = log_2 8 = 3 > c.
It follows from the first case of the master theorem that
T(n) = Θ(n^(log_b a)) = Θ(n^3).
(This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 1001n^3 − 1000n^2, assuming T(1) = 1.)
==== Case 2 example ====
T(n) = 2T(n/2) + 10n
As we can see in the formula above, the variables get the following values: a = 2, b = 2, c = 1, f(n) = 10n.
f(n) = Θ(n^c (log n)^k), where c = 1, k = 0.
Next, we see if we satisfy the case 2 condition: log_b a = log_2 2 = 1, and therefore c and log_b a are equal.
So it follows from the second case of the master theorem:
T(n) = Θ(n^(log_b a) (log n)^(k+1)) = Θ(n^1 (log n)^1) = Θ(n log n)
Thus the given recurrence relation T(n) was in Θ(n log n).
(This result is confirmed by the exact solution of the recurrence relation, which is T(n) = n + 10n log_2 n, assuming T(1) = 1.)
==== Case 3 example ====
T(n) = 2T(n/2) + n^2
As we can see in the formula above, the variables get the following values: a = 2, b = 2, f(n) = n^2.
f(n) = Ω(n^c), where c = 2.
Next, we see if we satisfy the case 3 condition: log_b a = log_2 2 = 1, and therefore, yes, c > log_b a.
The regularity condition also holds: 2(n^2/4) ≤ kn^2, choosing k = 1/2.
So it follows from the third case of the master theorem:
T(n) = Θ(f(n)) = Θ(n^2).
Thus the given recurrence relation T(n) was in Θ(n^2), which matches the f(n) of the original formula.
(This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 2n^2 − n, assuming T(1) = 1.)
== Inadmissible equations ==
The following equations cannot be solved using the master theorem:
T(n) = 2^n T(n/2) + n^n
a is not a constant; the number of subproblems should be fixed
T(n) = 2T(n/2) + n/(log n)
non-polynomial difference between f(n) and n^(log_b a) (see below; the extended version applies)
T(n) = 64T(n/8) − n^2 log n
f(n), which is the combination time, is not positive
T(n) = T(n/2) + n(2 − cos n)
case 3, but the regularity condition is violated
In the second inadmissible example above, the difference between f(n) and n^(log_b a) can be expressed with the ratio
f(n) / n^(log_b a) = (n/log n) / n^(log_2 2) = n / (n log n) = 1/(log n).
It is clear that 1/(log n) < n^ε for any constant ε > 0. Therefore, the difference is not polynomial and the basic form of the master theorem does not apply. The extended form (case 2b) does apply, giving the solution T(n) = Θ(n log log n).
== Application to common algorithms ==
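The article's table of applications is not reproduced in this extract. As an illustrative sketch (the helper name is hypothetical), the case that applies to a few well-known divide-and-conquer recurrences can be computed from a, b, and the exponent c in f(n) = Θ(n^c):

```python
import math

def case_of(a, b, c):
    """Which master-theorem case applies to T(n) = aT(n/b) + Θ(n^c)."""
    crit = math.log2(a) / math.log2(b)    # c_crit = log_b a
    if c < crit:
        return 1
    return 2 if c == crit else 3

# binary search: T(n) = T(n/2) + Θ(1)    -> case 2 -> Θ(log n)
# merge sort:    T(n) = 2T(n/2) + Θ(n)   -> case 2 -> Θ(n log n)
# Karatsuba:     T(n) = 3T(n/2) + Θ(n)   -> case 1 -> Θ(n^(log_2 3))
# Strassen:      T(n) = 7T(n/2) + Θ(n^2) -> case 1 -> Θ(n^(log_2 7))
print(case_of(1, 2, 0), case_of(2, 2, 1), case_of(3, 2, 1), case_of(7, 2, 2))  # → 2 2 1 1
```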
== See also ==
Akra–Bazzi method
Asymptotic complexity
== Notes ==
== References ==
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw–Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.
Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundation, Analysis, and Internet Examples. Wiley, 2002. ISBN 0-471-38365-1. The master theorem (including the version of Case 2 included here, which is stronger than the one from CLRS) is on pp. 268–270. | Wikipedia/Master_theorem_(analysis_of_algorithms) |
Clamshell design is a form factor commonly used in the design of electronic devices and other manufactured objects. It is inspired by the morphology of the clam. The form factor has been applied to handheld game consoles, mobile phones (where it is often called a "flip phone"), and especially laptop computers. Clamshell devices are usually made of two sections connected by a hinge, each section containing either a flat panel display or an alphanumeric keyboard/keypad, which can fold into contact together like a bivalve shell.
Generally speaking, the interface components such as keys and display are kept inside the closed clamshell, protecting them from damage and unintentional use while also making the device shorter or narrower so it is easier to carry around. In many cases, opening the clamshell offers more surface area than when the device is closed, allowing interface components to be larger and easier to use than on devices which do not flip open. A disadvantage of the clamshell design is the connecting hinge, which is prone to fatigue or failure.
The clamshell design is most popularly recognized in the context of mobile cellular phones. The term "flip phone" is used more frequently than "clamshell" in colloquial speech, especially when referring to a phone where the hinge is on the short edge – if the hinge is on a long edge, more akin to a laptop (e.g., Nokia Communicators), the device is more likely to be called just a "clamshell" rather than a flip phone. In the 1990s and early 2000s, what is now called "flip" phones were more commonly known as "folder" or "folding" phones, whereas "flip phone" referred to a now obsolete form factor most notably seen on the Motorola MicroTAC. Motorola itself held the "Flip Phone" trademark until 2005.
== Early examples in tech ==
A "flip phone"-like communication device appears in chapter 3 of Armageddon 2419 A.D., a science fiction novella by Philip Francis Nowlan, which was first published in the August 1928 issue of the pulp magazine Amazing Stories: "Alan took a compact packet about six inches square from a holster attached to her belt and handed it to Wilma. So far as I could see, it had no special receiver for the ear. Wilma merely threw back a lid, as though she was opening a book, and began to talk. The voice that came back from the machine was as audible as her own." Also from science fiction, Star Trek: The Original Series featured a regular plot device called the "Communicator" which influenced the development of early clamshell mobile phones such as the Motorola MicroTAC and StarTAC. The acronym "TAC" is an abbreviation of "Total Area Coverage" and was first used for the Motorola DynaTAC.
Early examples of the form factor's use in electronics include the 1963 Brionvega TS 502 radio, the Grillo telephone, which first appeared in Italy in the mid-1960s, and the Soundbook portable radio cassette player, introduced in 1974 (the designer Richard Sapper was responsible for all three, and was subsequently involved in the design of IBM computers). The form factor was first used for a portable computer in 1982 by the laptop manufacturer GRiD (which held the patents on the idea at the time) for its Compass model. In 1985, the Ampere WS-1 laptop used a modern clamshell design.
== In mobile phones ==
The clamshell design has been applied to many mobile phones. The popular contemporary term "flip phone" typically describes a clamshell cell phone whose upper part folds out to reveal a display and keypad. Historically the term meant something different: "flip" earlier referred to a type of candybar phone with a cover that folds downwards to reveal a keypad – the modern term for these is 'flip-down'. Such a design originated on GTE landline phones; GTE held the trademark 'Flip Phone' until Motorola acquired it, a few years after releasing the MicroTAC, the first cell phone in this form factor. Phones in this style were relatively common in the mid and late 1990s, another well-known example being the Ericsson T28. On the other hand, symmetrical clamshell designs that unfold upwards, such as the Motorola StarTAC, were in earlier years variously referred to as "folder" type or "folding" phones, before the term "flip phone" entered common usage during the 2000s to refer to these.
Clamshell phones also include other variations that are not typically referred to as "flip phones". The Palm Treo 180, Motorola A760, and Motorola MING are examples of clamshells where the exterior is a display cover and the display is on the lower part – this was common on some PDAs and PDA-style smartphones in the 2000s. Motorola also developed such a phone with a touchscreen in 2008, the Krave ZN4, but touch displays in this style were uncommon. The Nokia Communicator series is an example of clamshells that look like a miniature notebook computer. Also, some clamshells were made in an unconventional wider style, such as the fashion-oriented Siemens Xelibri 6 and the Alcatel OT-808 with a full QWERTY keyboard. In addition, some experimental clamshells look like the typical 'flip phone' but have additional mechanisms and forms (like rotating displays), for example the Nokia N90, Samsung P400 and LG G7070.
=== Origins ===
The earliest notable clamshell phone was Motorola's StarTAC, introduced in 1996. An iconic product, it was extremely compact and light for its time, and it flipped upwards to reveal a standard display and keypad on the lower part and a speakerphone on the upper. The StarTAC series was the first example of a fashionable mobile phone, in contrast to the usual products of its time. For the rest of the decade, this 'flip up' clamshell style in mobile phones was mainly a product of Motorola, and in 1999 the company released its next-generation 'Vader' clamshell, the Motorola V series, which again achieved popularity and was even smaller than the StarTAC. It was the first flip phone (in the West) with the now-conventional design of a display in the upper part of the device and a keypad on the lower.
At the same time as Motorola's V series, the form factor took off in Asia: the StarTAC was extremely popular in South Korea, which led to Samsung Electronics developing its first 'flip phone' clamshell, the SCH-800, released domestically in October 1998 to tremendous success. In 1999, Samsung released the silver-colored A100 'Mini Folder' phone in South Korea to even greater success. Meanwhile in Japan, NEC released the silver N501i flip phone in March 1999 for use with the new i-mode mobile Internet service. Both Samsung and NEC popularised the flip phone in their respective countries. Cited benefits of the form factor compared to regular candybar phones were the ability to have a larger display in a smaller device, as well as not having the risk of buttons being accidentally pressed, and the subjective view that they were fashionable.
=== Popularity ===
After the millennium, clamshell-style cell phones experienced a boom globally, with Samsung being especially influential in this development; its line of silver flip phones was positively received as stylish. The Samsung SGH-A100 made its debut in Europe in 2000, and in the same year the company released locally the A200 'Dual Folder', the first clamshell phone with an external display – allowing users to see who is calling without opening the phone – a feature that would become industry standard. Marketed as "Blue-i" (referring to the blue backlit circular external display), the model was released in Asian markets in 2001, while other regions got the A300 model. Motorola released the Motorola V60 in 2001 with a metallic body and an external display, a handset that became very popular in the US and was noted for looking luxurious.
The new style of flip phones was initially seen as purely a fashion product, but by 2002 and 2003 such phones were dominating sales in Asia while being trendy elsewhere. The Samsung SGH-T100, released in 2002, was the first flip phone with a color screen and it enjoyed global popularity. Motorola presented its first with a color display in the T720. Samsung also released the first 3G (UMTS) flip phone in Europe, the SGH-Z100. Eventually many other manufacturers, mainly Asian (such as LG, Sharp and Sanyo) as well as the likes of Siemens, were offering clamshell products globally. Nokia, the largest mobile phone manufacturer, had initially been reluctant to develop clamshell phones; it shipped its first such product in early 2004, the Nokia 7200, with unusual textile covers. Nokia's products were more distinct with their rectangular corners, and the company experimented with different styles as seen on the 7200, 2650 and 6170, although in 2005 it released its first "Asian"-influenced flip phone with rounded corners and a silver color, the Nokia 6101. Sony Ericsson also shipped its first flip phone, the Z600, in 2004, with the company mainly targeting women in its later flip phones (the 'Z' series).
The Motorola V3 (RAZR) is the most iconic product of the peak of the flip phone era, combining the clamshell form factor with a sleek and thin, silver-colored body (although it was also released in pink and other colors) and a flat etched keypad. The lack of a protruding external antenna also helped its style (although it was not the first product with this distinction). The RAZR series was the best-selling phone in the United States for three years in a row (2005, 2006 and 2007). It was highly influential during the mid-2000s, and a large number of rival manufacturers imitated its physical characteristics in their own new products. In Japan, domestic flip phones during this time were uniquely advanced and have been associated with the Galápagos syndrome. Flip phones were still ubiquitous there as of 2009, after they had already dropped in popularity in the West.
==== Flip smartphones ====
The Samsung SCH-i600, introduced in 2002, was one of the first smartphones in a flip phone form factor; it ran Microsoft's Windows Smartphone 2002 software. Other early smartphones in this form factor were the Kyocera 7135 from 2003, which ran Palm OS, and the Motorola MPx200. Panasonic introduced the first flip phone running Symbian, the X700, in 2004.
=== Decline ===
Interest in flip phones declined during the late 2000s. This was first due to the growing popularity of sliders, including 'slide-out QWERTYs' that offered a full keyboard in combination with a wider, landscape-oriented display; the typically narrow body of the flip phone was a design constraint for incorporating a QWERTY keyboard. It was also affected by the growing popularity of touchscreen-operated phones, which started to be considered more intuitive to use than traditional keypads. The flip phone – designed around a display on the upper part and a keypad on the lower part – was impractical for use with a touch display (which could also incorporate virtual QWERTY keyboards), so it fell out of favor when touchscreens became standard at the end of the decade.
=== Contemporary era ===
While flip phones had become largely obsolete in the 2010s, they noticeably remained relevant to a degree in Japan, where various such products continued to be developed and offered by carriers such as KDDI au and NTT Docomo. Outside these territories, flip phones continued to be marketed as a niche by some manufacturers (such as Alcatel, Doro, Kyocera, LG and Samsung), typically as cheap, basic feature phones that remain popular among specialized audiences who prefer their simplicity or durability over smartphones.
In 2019, a new style of clamshell phones began to emerge using flexible OLED displays, most often referred to as "foldable smartphones". These also overlap with the slate form factor. Motorola unveiled the new Motorola Razr in November 2019, the first major product in this style, which uses a foldable display and a clamshell design reminiscent of its namesake line of feature phones. Samsung released the Galaxy Z Flip, and the series has since expanded. Not all "foldable smartphones", however, are clamshells: the company's Galaxy Z Fold, for example, uses a book-like vertical fold instead of a clamshell design.
== Automotive ==
In automotive design, a clamshell bonnet or clamshell hood is a design where the engine cover also incorporates all or part of one of the wings (fenders). It is sometimes found in a car with a separate chassis such as a Triumph Herald or in cars based on a spaceframe where the bodywork is lightweight and carries no significant loading, such as the Ford GT40 and Ferrari Enzo, where the whole rear end can be lifted to access the engine compartment and suspension system. It is also sometimes seen in unibody cars, albeit much more rarely – such as the BMW Minis and Alfa Romeo GTV.
It is also an informal name for General Motors full-size station wagons, manufactured from 1971 to 1976, that featured a complex, two-piece "disappearing" tailgate, officially known as the "Glide Away" tailgate.
== Other uses ==
In addition to mobile phones, examples of the clamshell design include laptop computers and subnotebooks; handheld game consoles such as the Game Boy Advance SP, Nintendo DS, and Nintendo 3DS (though these are less frequently described as "flip" or "clamshell"); and objects such as pocket watches, egg cartons, certain types of luggage, waffle irons, sandwich toasters, krumkake irons, and the George Foreman Grill.
Bookbinders build archival clamshell boxes called Solander cases, in which valuable books or loose papers can be protected from light and dust.
The clamshell form factor is also commonly used in product packaging.
== See also ==
Dual-touchscreen
Foldable smartphone
Form factor (mobile phones)
History of mobile phones
Laptop
Living hinge
== References ==
== External links == | Wikipedia/Clamshell_design |
In number theory, the Dirichlet hyperbola method is a technique to evaluate the sum
F(n) = ∑_{k=1}^{n} f(k),
where f is a multiplicative function. The first step is to find a pair of multiplicative functions g and h such that, using Dirichlet convolution, we have f = g ∗ h; the sum then becomes
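A Dirichlet convolution can be written out naïvely for small arguments; the sketch below (names are illustrative) checks the identity σ0 = 1 ∗ 1 that is used in the example later in this article:

```python
def dirichlet_convolve(g, h, k):
    """(g * h)(k): sum of g(x)*h(y) over ordered pairs with x*y = k."""
    return sum(g(x) * h(k // x) for x in range(1, k + 1) if k % x == 0)

one = lambda m: 1                 # the constant function 1(n) = 1
# sigma_0 = 1 * 1 counts the divisors of k:
print([dirichlet_convolve(one, one, k) for k in range(1, 7)])  # → [1, 2, 2, 3, 2, 4]
```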
F(n) = ∑_{k=1}^{n} ∑_{xy=k} g(x)h(y),
where the inner sum runs over all ordered pairs (x,y) of positive integers such that xy = k. In the Cartesian plane, these pairs lie on a hyperbola, and when the double sum is fully expanded, there is a bijection between the terms of the sum and the lattice points in the first quadrant on the hyperbolas of the form xy = k, where k runs over the integers 1 ≤ k ≤ n: for each such point (x,y), the sum contains a term g(x)h(y), and vice versa.
Let a be a real number, not necessarily an integer, such that 1 < a < n, and let b = n/a. Then the lattice points can be split into three overlapping regions: one region is bounded by 1 ≤ x ≤ a and 1 ≤ y ≤ n/x, another region is bounded by 1 ≤ y ≤ b and 1 ≤ x ≤ n/y, and the third is bounded by 1 ≤ x ≤ a and 1 ≤ y ≤ b. In the diagram, the first region is the union of the blue and red regions, the second region is the union of the red and green, and the third region is the red. Note that this third region is the intersection of the first two regions. By the principle of inclusion and exclusion, the full sum is therefore the sum over the first region, plus the sum over the second region, minus the sum over the third region. This yields the formula
F(n) = ∑_{x=1}^{a} ∑_{y=1}^{n/x} g(x)h(y) + ∑_{y=1}^{b} ∑_{x=1}^{n/y} g(x)h(y) − ∑_{x=1}^{a} ∑_{y=1}^{b} g(x)h(y).   (1)
== Examples ==
Let σ0(n) be the divisor-counting function, and let D(n) be its summatory function:
D(n) = ∑_{k=1}^{n} σ_0(k).
Computing D(n) naïvely requires factoring every integer in the interval [1, n]; an improvement can be made by using a modified Sieve of Eratosthenes, but this still requires Õ(n) time. Since σ0 admits the Dirichlet convolution σ0 = 1 ∗ 1, taking a = b = √n in (1) yields the formula
D(n) = ∑_{x=1}^{√n} ∑_{y=1}^{n/x} 1·1 + ∑_{y=1}^{√n} ∑_{x=1}^{n/y} 1·1 − ∑_{x=1}^{√n} ∑_{y=1}^{√n} 1·1,
which simplifies to
D(n) = 2·∑_{x=1}^{√n} ⌊n/x⌋ − ⌊√n⌋²,
which can be evaluated in O(√n) operations.
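The O(√n) formula translates directly into code; a sketch (names are illustrative), cross-checked against a brute-force divisor count:

```python
import math

def divisor_summatory(n):
    """D(n) = sum of sigma_0(k) for k <= n, via the hyperbola method."""
    root = math.isqrt(n)                                  # floor of sqrt(n)
    return 2 * sum(n // x for x in range(1, root + 1)) - root * root

def naive(n):                                             # O(n^2) cross-check
    return sum(1 for k in range(1, n + 1) for x in range(1, k + 1) if k % x == 0)

assert all(divisor_summatory(n) == naive(n) for n in range(1, 60))
print(divisor_summatory(100))  # → 482
```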
The method also has theoretical applications: for example, Peter Gustav Lejeune Dirichlet introduced the technique in 1849 to obtain the estimate
D(n) = n log n + (2γ − 1)n + O(√n),
where γ is the Euler–Mascheroni constant.
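Dirichlet's estimate can be sanity-checked numerically; a rough sketch (the value of γ and the test point are illustrative choices, not from the article):

```python
import math

gamma = 0.5772156649015329           # Euler–Mascheroni constant
n = 10**6
root = math.isqrt(n)
exact = 2 * sum(n // x for x in range(1, root + 1)) - root * root  # D(n)
estimate = n * math.log(n) + (2 * gamma - 1) * n
# The error term is O(sqrt(n)); at this n it is well within sqrt(n):
print(abs(exact - estimate) < math.sqrt(n))  # → True
```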
== References ==
== External links ==
Discussion of the Dirichlet hyperbola method for computational purposes | Wikipedia/Dirichlet_hyperbola_method |