[SOURCE: https://en.wikipedia.org/wiki/Global_illumination] | [TOKENS: 970]
Global illumination

Global illumination (GI), or indirect illumination, refers to a group of algorithms used in 3D computer graphics meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source (direct illumination), but also subsequent "bounces" in which light rays are reflected by other surfaces in the scene (indirect illumination). The term "global illumination" was first used by Turner Whitted in his paper "An improved illumination model for shaded display" to differentiate between illumination calculations at a local scale (using geometric information directly, as in Phong shading), a microscopic scale (extending local geometry with microfacet detail), and a global scale, which includes not only the geometry itself but also the visibility of every other object in the scene. Theoretically, reflections, refractions, transparency, and shadows are all examples of global illumination, because when simulating them, one object affects the rendering of another (as opposed to an object being affected only by a direct source of light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination, especially in real-time settings.

Algorithms

Global illumination is a key aspect of the realism of a 3D scene. Naive 3D lighting takes into account only direct light, meaning any light that radiates from a light source and bounces directly into the virtual camera. Shadows appear completely dark, because the light does not interact with any other surface before it reaches the camera. As this is not what occurs in real life, we perceive the resulting image as incomplete. Applying full global illumination supplies the missing effects that make an image feel more natural. However, global illumination is computationally more expensive and consequently much slower to generate. Most algorithms, especially those focusing on real-time solutions, model diffuse inter-reflection exclusively, which is a very important part of global illumination; however, some also model indirect specular reflections, refraction, and indirect shadowing, which allows for a closer approximation of reality and produces more appealing images. The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to heat transfer simulations performed using finite-element methods in engineering design. Radiosity, ray tracing, beam tracing, cone tracing, path tracing, Metropolis light transport and photon mapping are all examples of algorithms used for global illumination in offline settings, some of which may be used together to trade between accuracy and speed, depending on the implementation.

Real-time applications

Achieving accurate computation of global illumination in real time remains difficult. At one end, the diffuse inter-reflection component of global illumination is sometimes approximated by an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in 3D software. Though this method is one of the cheapest ways to simulate indirect lighting, when used alone it does not provide an adequately realistic effect. Ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect more bland.
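To make the ambient approximation concrete, here is a minimal Python sketch (my own illustration; the function names, the single Lambertian surface, and the ambient value are all assumptions, not from the article) contrasting naive direct-only shading with a constant ambient term standing in for all indirect bounces:

```python
# Minimal sketch: direct Lambertian lighting plus an optional "ambient" term
# approximating indirect illumination. Purely illustrative names and values.
def shade(normal, light_dir, albedo, ambient=0.0):
    # Direct term: Lambert's cosine law, clamped at zero for back-facing light.
    n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    # The ambient constant stands in for all indirect bounces; with
    # ambient=0 (naive direct lighting), shadowed points go completely black.
    return tuple(a * (n_dot_l + ambient) for a in albedo)

surface_normal = (0.0, 0.0, 1.0)   # facing away from the light below
light = (0.0, 0.0, -1.0)
print(shade(surface_normal, light, (0.8, 0.2, 0.2)))               # pure black
print(shade(surface_normal, light, (0.8, 0.2, 0.2), ambient=0.1))  # dim red
```

With the ambient term the shadowed point keeps a faint tint of its own color, which is exactly the cheap "flattened" effect described above.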
Beyond ambient lighting, techniques which trace the path of light accurately have historically been either too slow for consumer hardware or limited to static and precomputed environments. This proves problematic, as most applications allow for input from a user that can affect their surroundings, and the precalculation steps may impose constraints on the artists. Consequently, research has been dedicated to finding a balance between adequate performance, accurate visual results, and interactivity. Starting with Nvidia's RTX 20 series, consumer graphics hardware has been extended to allow ray tracing computations to be performed in real time through hardware acceleration. This has allowed for further improvements, as applications can now harness this acceleration to provide not only precise lighting results, but also the ability to affect that lighting dynamically. Some content that has taken advantage of this capability includes Cyberpunk 2077, Indiana Jones and the Great Circle, and Alan Wake 2, among others.

Procedure

Algorithms which attempt to simulate global illumination are numerical approximations of the rendering equation. Well-known algorithms for computing global illumination include path tracing, photon mapping and radiosity.

List of methods

Image-based lighting (IBL) has also been used to describe image proxies, or image-based reflections, which represent surfaces as flat image planes to improve the appearance of reflections. They have been used in games such as Remember Me and Thief (2014), as well as the Unreal Engine 3 Samaritan demo. The most common approach is screen-space ray marching. Additional techniques include screen space directional occlusion, "deep" buffers, and horizon-based visibility bitmasks.
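For reference, the rendering equation that these algorithms approximate can be written in its standard hemispherical form (Kajiya's well-known formulation; the symbols are the conventional ones, not taken from this excerpt):

```latex
% Outgoing radiance at point x in direction w_o equals emitted radiance plus
% all reflected incoming radiance, weighted by the BRDF f_r and the cosine term.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Direct illumination evaluates only the light-source contribution to the integral; global illumination methods estimate the full recursive integral, since the incoming radiance at one point is the outgoing radiance of another.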
========================================
[SOURCE: https://en.wikipedia.org/wiki/Drawcalls] | [TOKENS: 1629]
Real-time computer graphics

Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.

Computers have been capable of generating 2D images such as simple lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics. Different techniques for rendering now exist, such as ray tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience.

Principles of real-time 3D computer graphics

The goal of computer graphics is to generate computer-generated images, or frames, using certain desired metrics. One such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera to the world for detailed rendering; this expensive operation can take hours or days to render a single frame. Real-time graphics systems must render each image in less than 1/30th of a second. Ray tracing is far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle gets positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for displaying on a display screen. The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and then shadow mapping can alter that triangle's colors based on line-of-sight to light sources (a minimal code sketch of this rasterization scheme appears below).

Real-time graphics optimizes image quality subject to time and hardware constraints. GPUs and other advances increased the image quality that real-time graphics can produce. GPUs are capable of handling millions of triangles per frame, and modern DirectX/OpenGL class hardware is capable of generating complex effects, such as shadow volumes, motion blurring, and triangle generation, in real time. The advancement of real-time graphics is evidenced in the progressive improvements between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games. Cutscenes are now typically rendered in real time, and may be interactive. Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate.
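The following Python sketch illustrates z-buffer triangle rasterization as described above. It is a minimal, unoptimized stand-in (resolution, data layout, and names are my own assumptions): each triangle is rasterized independently over its bounding box, and a per-pixel depth buffer resolves visibility.

```python
# Minimal z-buffer triangle rasterizer sketch (illustrative, not production code).
W, H = 320, 240
depth = [[float("inf")] * W for _ in range(H)]   # z-buffer, initialized to "far"
frame = [[(0, 0, 0)] * W for _ in range(H)]      # color buffer

def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of triangle (a, b, p); its sign says which side p is on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def draw_triangle(v0, v1, v2, color):
    # Vertices are (x, y, z) in screen space; z is used only for depth testing.
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return  # degenerate triangle
    # Screen-clamped bounding box of the triangle.
    xmin = max(int(min(x0, x1, x2)), 0); xmax = min(int(max(x0, x1, x2)), W - 1)
    ymin = max(int(min(y0, y1, y2)), 0); ymax = min(int(max(y0, y1, y2)), H - 1)
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            # Barycentric weights; the pixel is inside iff none is negative.
            w0 = edge(x1, y1, x2, y2, x, y) / area
            w1 = edge(x2, y2, x0, y0, x, y) / area
            w2 = edge(x0, y0, x1, y1, x, y) / area
            if min(w0, w1, w2) < 0:
                continue
            z = w0 * z0 + w1 * z1 + w2 * z2   # interpolated depth
            if z < depth[y][x]:               # depth test: keep the nearest fragment
                depth[y][x] = z
                frame[y][x] = color

draw_triangle((50, 20, 1.0), (200, 60, 2.0), (120, 180, 1.5), (255, 0, 0))
```

A real rasterizer would also perform the perspective-correct interpolation and fragment shading steps mentioned in the text; here the "several steps" collapse to a single flat color.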
Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. In films, by contrast, the director has complete control of what has to be drawn on each frame, which can sometimes involve lengthy decision-making. Teams of people are typically involved in the making of these decisions.

In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display's response time is far slower than the input device; this is acceptable because of the immense difference between the (fast) response time of a human being's motion and the (slow) perceptual speed of the human visual system. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the current[when?] Wii remote) typically take much longer to achieve than comparable advancements in display devices.

Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen, especially where to draw objects in the scene. They help realistically imitate real-world behavior (the temporal dimension, not the spatial dimensions), adding to the computer graphics' degree of realism. Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed. Some parameter adjustments in fractal generating software may be made while viewing changes to the image in real time.

Rendering pipeline

The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics. Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures and more. The architecture of the real-time rendering pipeline can be divided into three conceptual stages: application, geometry and rasterization.

The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. It may perform processing such as collision detection, speed-up techniques, animation and force feedback, in addition to handling user input. Collision detection is an example of an operation that would be performed in the application stage: it uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller. The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline.

The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs.
Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages. Before the final model is shown on the output device, the model is transformed into multiple spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the specific operations that manipulate the shape or position of a point, line or shape, conventionally counted as four: translation, rotation, scaling, and shearing. In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis, with the y-axis pointing upwards and the x-axis pointing to the right.

Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic projection (also called parallel projection) and perspective projection. The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection utilizes the principle that as the distance between the observer and the model increases, the model appears smaller. Essentially, perspective projection mimics human sight.

Clipping is the process of removing primitives that are outside of the view box in order to facilitate the rasterizer stage. Once those primitives are removed, the primitives that remain are re-cut into new triangles that reach the next stage. The purpose of screen mapping is to determine the screen coordinates of the primitives that survive the clipping stage. The rasterizer stage then applies color and turns the graphic elements into pixels (picture elements).
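The following Python sketch illustrates the view-space convention and perspective projection described above. It is a hedged illustration under standard assumptions (an OpenGL-style projection matrix and a right-handed camera looking down the negative z-axis); the function names and parameter values are my own:

```python
# Sketch of the geometry stage's perspective projection (illustrative only).
import math

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix, written as
    # row-major nested lists for readability.
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def transform(m, v):
    # Multiply a 4x4 matrix by a homogeneous point (x, y, z, w).
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def project(view_space_point, proj):
    # View space -> clip space -> perspective divide -> normalized device
    # coordinates in [-1, 1]; screen mapping would then scale these to pixels.
    x, y, z, w = transform(proj, (*view_space_point, 1.0))
    return (x / w, y / w, z / w)

proj = perspective(60.0, 16 / 9, 0.1, 100.0)
# A point 5 units in front of the camera (negative z is "forward").
print(project((0.0, 1.0, -5.0), proj))
```

Because the divide by w grows with distance, farther points land closer to the screen's center, which is exactly the "objects appear smaller with distance" behavior that distinguishes perspective from orthographic projection.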
========================================
[SOURCE: https://en.wikipedia.org/wiki/Level_of_detail_(computer_graphics)] | [TOKENS: 1858]
Level of detail (computer graphics)

In computer graphics, level of detail (LOD) refers to the complexity of a 3D model representation. LOD can be decreased as the model moves away from the viewer, or according to other metrics such as object importance, viewpoint-relative speed or position. LOD techniques increase the efficiency of rendering by decreasing the workload on graphics pipeline stages, usually vertex transformations. The reduced visual quality of the model is often unnoticed because of the small effect on object appearance when the object is distant or moving fast. Although most of the time LOD is applied to geometry detail only, the basic concept can be generalized. Recently, LOD techniques have also included shader management to keep control of pixel complexity. A form of level of detail management has been applied to texture maps for years under the name of mipmapping, which also provides higher rendering quality. It is commonplace to say that "an object has been LOD-ed" both when the object is simplified by an underlying LOD-ing algorithm and when a 3D modeler has manually created LOD models.[citation needed]

Historical reference

The origin of all the LOD algorithms for 3D computer graphics can be traced back to an article by James H. Clark in the October 1976 issue of Communications of the ACM. At the time, computers were monolithic and rare, and graphics were being driven by researchers. The hardware itself was completely different, both architecturally and performance-wise. As such, many differences can be observed with regard to today's algorithms, but also many common points. The original algorithm presented a much more generic approach to what will be discussed here. After introducing some available algorithms for geometry management, it states that the most fruitful gains came from "...structuring the environments being rendered", making it possible to exploit faster transformations and clipping operations. The same environment structuring is now proposed as a way to control varying detail, thus avoiding unnecessary computations yet delivering adequate visual quality:

For example, a dodecahedron looks like a sphere from a sufficiently large distance and thus can be used to model it so long as it is viewed from that or a greater distance. However, if it must ever be viewed more closely, it will look like a dodecahedron. One solution to this is simply to define it with the most detail that will ever be necessary. However, then it might have far more detail than is needed to represent it at large distances, and in a complex environment with many such objects, there would be too many polygons (or other geometric primitives) for the visible surface algorithms to efficiently handle.

The proposed algorithm envisions a tree data structure which encodes in its arcs both transformations and transitions to more detailed objects. In this way, each node encodes an object, and according to a fast heuristic, the tree is descended to the leaves, which provide each object with more detail. When a leaf is reached, other methods can be used when higher detail is needed, such as Catmull's recursive subdivision. The significant point, however, is that in a complex environment, the amount of information presented about the various objects in the environment varies according to the fraction of the field of view occupied by those objects.
The paper then introduces clipping (not to be confused with culling, although often similar), various considerations on the graphical working set and its impact on performance, and interactions between the proposed algorithm and others used to improve rendering speed.

Well known approaches

Although the algorithm introduced above covers a whole range of level of detail management techniques, real-world applications usually employ specialized methods tailored to the information being rendered. Depending on the requirements of the situation, two main methods are used.

The first method, discrete level of detail (DLOD), involves creating multiple, discrete versions of the original geometry with decreased levels of geometric detail. At runtime, the full-detail models are substituted with the reduced-detail models as necessary. Due to the discrete nature of the levels, there may be visual popping when one model is exchanged for another. This may be mitigated by alpha blending or morphing between states during the transition.

The second method, continuous level of detail (CLOD), uses a structure which contains a continuously variable spectrum of geometric detail. The structure can then be probed to smoothly choose the appropriate level of detail required for the situation. A significant advantage of this technique is the ability to vary the detail locally; for instance, the side of a large object nearer to the viewer may be presented in high detail while simultaneously reducing the detail on its distant side.

In both cases, LODs are chosen based on some heuristic which judges how much detail is being lost by the reduction, such as by evaluating the LOD's geometric error relative to the full-detail model. Objects are then displayed with the minimum amount of detail required to satisfy the heuristic, which is designed to minimize geometric detail as much as possible to maximize performance while maintaining an acceptable level of visual quality.

The basic concept of discrete LOD (DLOD) is to provide various models to represent the same object. Obtaining those models requires an external algorithm which is often non-trivial and the subject of many polygon reduction techniques. Successive LOD-ing algorithms will simply assume those models are available. DLOD algorithms are often used in performance-intensive applications with small data sets which can easily fit in memory. Although out-of-core algorithms could be used, the information granularity is not well suited to this kind of application. This kind of algorithm is usually easier to get working, providing both faster performance and lower CPU usage because of the few operations involved. DLOD methods are often used for "stand-alone" moving objects, possibly including complex animation methods. A different approach is used for geomipmapping, a popular terrain rendering algorithm, because it applies to terrain meshes which are both graphically and topologically different from "object" meshes. Instead of computing an error and simplifying the mesh according to it, geomipmapping takes a fixed reduction method, evaluates the error introduced, and computes a distance at which the error is acceptable. Although straightforward, the algorithm provides decent performance.

As a simple example, consider a sphere. A discrete LOD approach would cache a certain number of models to be used at different distances.
Because the model can trivially be procedurally generated from its mathematical formulation, using a different number of sample points distributed on the surface is sufficient to generate the various models required. This pass is not a LOD-ing algorithm. To simulate a realistic transform-bound scenario, an ad hoc application can be used. The use of simple algorithms and minimal fragment operations ensures that CPU bounding does not occur. Each frame, the program computes each sphere's distance and chooses a model from a pool according to this information (a sketch of this per-frame selection appears below). To show the concept easily, the distance at which each model is used is hard-coded in the source; a more involved method would compute adequate models according to the usage distance chosen. OpenGL is used for rendering due to its high efficiency in managing small batches, storing each model in a display list and thus avoiding communication overheads. Additional vertex load is applied via two directional light sources ideally located infinitely far away. Because hardware is geared towards large amounts of detail, rendering low-polygon objects may score sub-optimal performance. HLOD (hierarchical LOD) avoids the problem by grouping different objects together. This allows for higher efficiency as well as taking advantage of proximity considerations.

Practical applications

LOD is especially useful in 3D video games. Video game developers want to provide players with large worlds but are always constrained by hardware, frame rate and the real-time nature of video game graphics. With the advent of 3D games in the 1990s, a lot of video games simply did not render distant structures or objects. Only nearby objects would be rendered and more distant parts would gradually fade, essentially implementing distance fog. Video games using LOD rendering avoid this fog effect and can render larger areas. Some notable early examples of LOD rendering in 3D video games include The Killing Cloud, Spyro the Dragon, Crash Bandicoot: Warped, Unreal Tournament and Serious Sam: The First Encounter. Most modern 3D games use a combination of LOD rendering techniques, using different models for large structures and distance culling for environment details like grass and trees. The effect is sometimes still noticeable, for example when the player character flies over the virtual terrain or uses a sniper scope for long-distance viewing. Grass and foliage in particular will seem to pop up when approached, a phenomenon also known as foliage culling. LOD can also be used to render fractal terrain in real time. Unreal Engine 5's Nanite system essentially implements level of detail within meshes instead of just objects as a whole.

LOD is found in GIS and 3D city models as a similar concept. It indicates how thoroughly real-world features have been mapped and how much the model adheres to its real-world counterpart. Besides geometric complexity, other metrics such as spatio-semantic coherence and the resolution of textures and attributes can be considered in the LOD of a model. The standard CityGML contains one of the most prominent LOD categorizations. The analogy of "LOD-ing" in GIS is referred to as generalization.
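The sketch below illustrates the discrete LOD selection described in the sphere example: a pool of precomputed models indexed by hard-coded distance thresholds, re-evaluated each frame. Model names, thresholds, and positions are all illustrative assumptions, not values from the article:

```python
# Minimal discrete LOD (DLOD) selection sketch, illustrative names and numbers.
import math

# Model pool, highest detail first; in a real renderer these would be meshes.
LOD_MODELS = ["sphere_2048_tris", "sphere_512_tris",
              "sphere_128_tris", "sphere_32_tris"]
# Hard-coded distances at which each successive model becomes acceptable;
# beyond the last threshold the coarsest model is used.
LOD_DISTANCES = [10.0, 25.0, 60.0]

def select_lod(camera_pos, object_pos):
    d = math.dist(camera_pos, object_pos)
    for i, limit in enumerate(LOD_DISTANCES):
        if d < limit:
            return LOD_MODELS[i]
    return LOD_MODELS[-1]

# Each frame: compute every sphere's distance and pick a model from the pool.
for sphere_pos in [(0, 0, 5), (0, 0, 30), (0, 0, 100)]:
    print(sphere_pos, "->", select_lod((0, 0, 0), sphere_pos))
```

A CLOD scheme would instead adjust detail continuously from a single multiresolution structure rather than snapping between entries of a fixed pool.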
========================================
[SOURCE: https://en.wikipedia.org/wiki/Point_cloud_scanning] | [TOKENS: 837]
Point cloud

A point cloud is a discrete set of data points in space. The points may represent a 3D shape or object. Each point position has its set of Cartesian coordinates (X, Y, Z). Points may carry data other than position, such as RGB colors, normals, timestamps and others. Point clouds are generally produced by 3D scanners or by photogrammetry software, which measure many points on the external surfaces of objects around them. As the output of 3D scanning processes, point clouds are used for many purposes, including creating 3D computer-aided design (CAD) or geographic information systems (GIS) models for manufactured parts, metrology and quality inspection, and a multitude of visualizing, animating, rendering, and mass customization applications.

Alignment and registration

When scanning a scene in the real world using lidar, the captured point clouds contain snippets of the scene, which require alignment to generate a full map of the scanned environment. Point clouds are often aligned with 3D models or with other point clouds, a process termed point set registration. The iterative closest point (ICP) algorithm can be used to align two point clouds that have an overlap between them and are separated by a rigid transform. Point clouds with elastic transforms can also be aligned by using a non-rigid variant of ICP (NICP). With advancements in machine learning in recent years, point cloud registration may also be done using end-to-end neural networks. For industrial metrology or inspection using industrial computed tomography, the point cloud of a manufactured part can be aligned to an existing model and compared to check for differences. Geometric dimensions and tolerances can also be extracted directly from the point cloud.

Conversion to 3D surfaces

While point clouds can be directly rendered and inspected, they are often converted to polygon mesh or triangle mesh models, non-uniform rational B-spline (NURBS) surface models, or CAD models through a process commonly referred to as surface reconstruction. There are many techniques for converting a point cloud to a 3D surface. Some approaches, like Delaunay triangulation, alpha shapes, and ball pivoting, build a network of triangles over the existing vertices of the point cloud, while other approaches convert the point cloud into a volumetric distance field and reconstruct the implicit surface so defined through a marching cubes algorithm. In geographic information systems, point clouds are one of the sources used to make digital elevation models of the terrain. They are also used to generate 3D models of urban environments. Drones are often used to collect a series of RGB images which can later be processed on a computer vision algorithm platform such as AgiSoft Photoscan, Pix4D, DroneDeploy or Hammer Missions to create RGB point clouds, from which distances and volumetric estimations can be made.[citation needed] Point clouds can also be used to represent volumetric data, as is sometimes done in medical imaging. Using point clouds, multi-sampling and data compression can be achieved.

MPEG Point Cloud Compression

MPEG began standardizing point cloud compression (PCC) with a Call for Proposals (CfP) in 2017. Three categories of point clouds were identified: category 1 for static point clouds, category 2 for dynamic point clouds, and category 3 for lidar sequences (dynamically acquired point clouds).
Two technologies were finally defined: G-PCC (Geometry-based PCC, ISO/IEC 23090 part 9) for categories 1 and 3, and V-PCC (Video-based PCC, ISO/IEC 23090 part 5) for category 2. The first test models were developed in October 2017, one for G-PCC (TMC13) and another for V-PCC (TMC2). Since then, the two test models have evolved through technical contributions and collaboration, and the first version of the PCC standard specifications was expected to be finalized in 2020 as part of the ISO/IEC 23090 series on the coded representation of immersive media content.
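To make the ICP registration described above concrete, here is a minimal Python/NumPy sketch of point-to-point ICP with an SVD-based (Kabsch) rigid fit. It is an illustration under simplifying assumptions: brute-force nearest neighbors, a fixed iteration count, and no outlier rejection or convergence test; all names are my own:

```python
# Minimal point-to-point ICP sketch (illustrative, not production code).
import numpy as np

def nearest_neighbors(src, dst):
    # Brute-force closest point in dst for every point in src.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return dst[d2.argmin(axis=1)]

def best_rigid_transform(src, dst):
    # Least-squares rotation R and translation t mapping src onto dst (Kabsch).
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        matched = nearest_neighbors(cur, dst)   # correspondence step
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t                      # alignment step
    return cur

# Tiny demo: recover a small known translation between overlapping clouds.
rng = np.random.default_rng(0)
dst = rng.normal(size=(200, 3))
src = dst + np.array([0.05, -0.03, 0.08])
print(np.abs(icp(src, dst) - dst).max())   # near-zero residual for a small offset
```

Real registration pipelines replace the brute-force matching with spatial acceleration structures (e.g., k-d trees) and add robust weighting, but the alternation of correspondence and rigid-fit steps is the core of ICP.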
========================================
[SOURCE: https://en.wikipedia.org/wiki/UV_coordinates] | [TOKENS: 825]
UV mapping

UV mapping in 3D graphics is a process for texture mapping a 3D model by projecting the model's surface coordinates onto a 2D image. The letters "U" and "V" denote the axes of the 2D texture because "X", "Y", and "Z" are already used to denote the axes of the 3D object in model space, while "W" (in addition to XYZ) is used in calculating quaternion rotations, a common operation in computer graphics.

Process

UV texturing permits polygons that make up a 3D object to be painted with color (and other surface attributes) from an ordinary image. The image is called a UV texture map. The UV mapping process involves assigning pixels in the image to surface mappings on the polygon, usually done by "programmatically" copying a triangular piece of the image map and pasting it onto a triangle on the object. UV texturing is an alternative to projection mapping (e.g., using any pair of the model's X, Y, Z coordinates, or any transformation of the position); it maps only into a texture space rather than into the geometric space of the object. The rendering computation uses the UV texture coordinates to determine how to paint the three-dimensional surface.

Application techniques

In the example image, a sphere is given a checkered texture in two ways. On the left, without UV mapping, the sphere is carved out of three-dimensional checkers tiling Euclidean space. With UV mapping, the checkers tile the two-dimensional UV space, and points on the sphere map to this space according to their latitude and longitude.

When a model is created as a polygon mesh using a 3D modeller, UV coordinates (also known as texture coordinates) can be generated for each vertex in the mesh. One way is for the 3D modeller to unfold the triangle mesh at the seams, automatically laying out the triangles on a flat page. If the mesh is a UV sphere, for example, the modeller might transform it into an equirectangular projection. Once the model is unwrapped, the artist can paint a texture on each triangle individually, using the unwrapped mesh as a template. When the scene is rendered, each triangle will map to the appropriate texture from the "decal sheet". A UV map can be generated automatically by the software application, made manually by the artist, or some combination of both. Often a UV map will be generated, and then the artist will adjust and optimize it to minimize seams and overlaps. If the model is symmetric, the artist might overlap opposite triangles to allow painting both sides simultaneously.

UV coordinates are optionally applied per face. This means a shared spatial vertex position can have different UV coordinates for each of its triangles, so adjacent triangles can be cut apart and positioned on different areas of the texture map. The UV mapping process at its simplest requires three steps: unwrapping the mesh, creating the texture, and applying the texture to the respective faces of the polygon mesh. UV mapping may use repeating textures, or an injective "unique" mapping as a prerequisite for baking.

Finding UV on a sphere

For any point $P$ on the sphere, calculate $\hat{d}$, the unit vector from $P$ to the sphere's origin. Assuming that the sphere's poles are aligned with the Y axis, UV coordinates in the range $[0, 1]$ can then be calculated as follows:

$$u = 0.5 + \frac{\operatorname{arctan2}(d_z, d_x)}{2\pi}, \qquad v = 0.5 + \frac{\arcsin(d_y)}{\pi}.$$
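A direct Python implementation of the sphere UV formula above, under the same assumptions (poles aligned with the Y axis, the point given relative to the sphere's center; the function name is my own):

```python
# Sphere UV mapping: implements u = 0.5 + atan2(dz, dx)/(2*pi),
# v = 0.5 + asin(dy)/pi, where d is the unit vector from P to the origin.
import math

def sphere_uv(px, py, pz):
    length = math.sqrt(px * px + py * py + pz * pz)
    # Unit vector from P toward the sphere's origin (hence the negation).
    dx, dy, dz = -px / length, -py / length, -pz / length
    u = 0.5 + math.atan2(dz, dx) / (2 * math.pi)
    v = 0.5 + math.asin(dy) / math.pi
    return u, v

print(sphere_uv(0.0, 0.0, -1.0))  # a point on the equator -> (0.75, 0.5)
```

Since atan2 spans $[-\pi, \pi]$ and arcsin spans $[-\pi/2, \pi/2]$, both coordinates land in $[0, 1]$ as stated.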
========================================
[SOURCE: https://en.wikipedia.org/wiki/File:Perspective_correct_texture_mapping.svg] | [TOKENS: 226]
File:Perspective correct texture mapping.svg

Summary: The SVG was recreated using a program to generate the shapes instead of the hand-made version. The Python program used to generate the SVG is available here: https://gist.github.com/bbbradsmith/484e48841a55a7ff1c2b00fb190e5f29 A Python code example that generates an equivalent image, demonstrating how the rendering techniques might be implemented, is available here: https://gist.github.com/bbbradsmith/a65212dbbb917b5bc449995f6333de66

Licensing: CC0 1.0 (Creative Commons Zero, Public Domain Dedication), http://creativecommons.org/publicdomain/zero/1.0/deed.en
========================================
[SOURCE: https://en.wikipedia.org/wiki/Doom_engine] | [TOKENS: 2360]
Doom engine

id Tech 1, also known as the Doom engine, is the game engine used in the id Software video games Doom and Doom II: Hell on Earth. It is also used in Heretic, Hexen: Beyond Heretic, Strife: Quest for the Sigil, Hacx: Twitch 'n Kill, Freedoom, and other games produced by licensees. It was created by John Carmack, with auxiliary functions written by Mike Abrash, John Romero, Dave Taylor, and Paul Radek. Originally developed on NeXT computers, it was ported to MS-DOS and compatible operating systems for Doom's initial release and was later ported to several game consoles and operating systems.

The source code to the Linux version of Doom was released to the public under a license that granted rights to non-commercial use on December 23, 1997, followed by the Linux version of Doom II about a week later, on December 29, 1997. The source code was later re-released under the GNU General Public License v2.0 or later on October 3, 1999. The dozens of unofficial Doom source ports that have been created since then allow Doom to run on previously unsupported operating systems and sometimes radically expand the engine's functionality with new features.

Although the engine renders a 3D space, that space is projected from a two-dimensional floor plan. The line of sight is always parallel to the floor, walls must be perpendicular to the floors, and it is not possible to create multi-level structures or sloped areas (floors and ceilings with different angles). Despite these limitations, the engine represented a technological leap from id's previous Wolfenstein 3D engine. The Doom engine was later renamed[citation needed] "id Tech 1" in order to categorize it in a list of id Software's long line of game engines.

Game world

The Doom engine separates rendering from the rest of the game. The graphics engine runs as fast as possible, but the game world runs at 35 frames per second regardless of the hardware, so multiple players can play against each other using computers of varying performance.

Level structure

[Figure: a simple setup demonstrating how Doom represents levels internally.]

Viewed from the top down, all Doom levels are actually two-dimensional, demonstrating one of the key limitations of the Doom engine: room-over-room is not possible. This limitation, however, has a silver lining: a "map mode" can easily be displayed, which represents the walls and the player's position. The base unit is the vertex, which represents a single 2D point. Vertices (or "vertexes" as they are referred to internally) are then joined to form lines, known as "linedefs". Each linedef can have either one or two sides, which are known as "sidedefs". Sidedefs are then grouped together to form polygons; these are called "sectors". Sectors represent particular areas of the level. Each sector contains a number of properties: a floor height, ceiling height, light level, a floor texture and a ceiling texture. To have a different light level in a particular area, for example, a new sector must be created for that area with a different light level. One-sided linedefs therefore represent solid walls, while two-sided linedefs represent bridge lines between sectors. Sidedefs are used to store wall textures; these are completely separate from the floor and ceiling textures. Each sidedef can have three textures, called the middle, upper and lower textures. In one-sided linedefs, only the middle texture is used for the texture on the wall.
In two-sided linedefs, the situation is more complex. The lower and upper textures are used to fill the gaps where adjacent sectors have different floor and ceiling heights: lower textures are used for steps, for example. The sidedefs can have a middle texture as well, although most do not; this is used to make textures hang in mid-air. For example, when a transparent bar texture is seen forming a cage, this is an example of a middle texture on a two-sided linedef.

Binary space partitioning

Doom makes use of a system known as binary space partitioning (BSP). A tool is used to generate the BSP data for a level beforehand. This process can take quite some time for a large level. It is because of this that it is not possible to move the walls in Doom; while doors and lifts move up and down, none of them ever move sideways. The level is divided up into a binary tree: each location in the tree is a "node" which represents a particular area of the level (with the root node representing the entire level). At each branch of the tree there is a dividing line which divides the area of the node into two subnodes. At the same time, the dividing line divides linedefs into line segments called "segs". At the leaves of the tree are convex polygons, where further division of the level is not needed. These convex polygons are referred to as subsectors (or "SSECTORS"), and are bound to a particular sector. Each subsector has a list of segs associated with it.

The BSP system sorts the subsectors into the right order for rendering. The algorithm is fairly simple: the tree is descended from the root, at each node visiting first the child on the camera's side of the dividing line, so that nearer subsectors are drawn before farther ones; the process is complete when the whole column of pixels is filled (i.e., there are no more gaps left). This ordering ensures that no time is spent drawing objects that are not visible, and as a result maps can become very large without any speed penalty (a code sketch of this traversal appears at the end of this article).

Rendering

All of the walls in Doom are drawn vertically; it is because of this that it is not possible to properly look up and down. It is possible to perform a form of look up/down via "y-shearing", and many modern Doom source ports do this, as well as later games that use the engine, such as Heretic. Essentially this works by moving the horizon line up and down within the screen, in effect providing a "window" onto a taller viewable area. By moving the window up and down, it is possible to give the illusion of looking up and down. However, this will distort the view the further up and down the player looks.

The Doom engine renders the walls as it traverses the BSP tree, drawing subsectors by order of distance from the camera so that the closest segs are drawn first. As the segs are drawn, they are stored in a linked list. This is used to clip other segs rendered later on, reducing overdraw. It is also used later to clip the edges of sprites. Once the engine reaches a solid (one-sided) wall at a particular x coordinate, no more lines need to be drawn at that area. For clipping, the engine stores a "map" of areas of the screen where solid walls have been reached. This allows far-away parts of the level which are invisible to the player to be clipped completely. The Doom graphic format stores the wall textures as sets of vertical columns; this is useful to the renderer, which essentially renders the walls by drawing many vertical columns of texture. The system for drawing floors and ceilings ("flats") is less elegant[according to whom?] than that used for the walls. Flats are drawn with a flood-fill-like algorithm.
Because of this, if a bad BSP builder is used, it is sometimes possible to get "holes" where the floor or ceiling bleeds down to the edges of the screen, a visual error commonly referred to as a "slime trail". This is also the reason why, if the player travels outside of the level using the noclip cheat, the floors and ceilings will appear to stretch out from the level over the empty space.

The floor and ceiling are drawn as "visplanes". These represent horizontal runs of texture, from a floor or ceiling at a particular height, light level and texture (if two adjacent sectors have exactly the same floor, they can be merged into one visplane). Each x position in the visplane has a particular vertical line of texture which is to be drawn. Because of this limit of drawing one vertical line at each x position, it is sometimes necessary to split visplanes into multiple visplanes. For example, consider viewing a floor with two concentric squares. The inner square will vertically divide the surrounding floor: in the horizontal range where the inner square is drawn, two visplanes are needed for the surrounding floor. Doom contained a static limit on the number of visplanes; if it was exceeded, a "visplane overflow" would occur, causing the game to exit to DOS with one of two errors: "No more visplanes!" or "visplane overflow (128 or higher)". The easiest way to invoke the visplane limit is a large checkerboard floor pattern, which creates a large number of visplanes.

As the segs are rendered, visplanes are also added, extending from the edges of the segs towards the vertical edges of the screen. These extend until they reach existing visplanes. Because of the way this works, the system depends on the fact that segs are rendered in order by the overall engine: it is necessary to draw nearer visplanes first, so that they can "cut off" those farther away. If unstopped, the floor or ceiling will "bleed out" to the edges of the screen, as previously described. Eventually, the visplanes form a "map" of particular areas of the screen in which to draw particular textures. While visplanes are constructed essentially from vertical "strips", the actual low-level rendering is performed in the form of horizontal "spans" of texture. After all the visplanes have been constructed, they are converted into spans which are then rendered to the screen. This appears to be a trade-off: it is easier to construct visplanes as vertical strips, but because of the nature of how floor and ceiling textures appear, it is easier to draw them as horizontal strips.

Each sector within the level has a linked list of things stored in that sector. As each sector is drawn, the sprites are placed into a list of sprites to be drawn; sprites not within the field of view are ignored. The edges of sprites are clipped by checking the list of segs previously drawn. Sprites in Doom are stored in the same column-based format as the walls, which again is useful for the renderer: the same functions which are used to draw walls are used to draw sprites as well. While subsectors are guaranteed to be in order, the sprites within them are not. Doom stores a list of sprites to be drawn ("vissprites") and sorts the list before rendering. Far-away sprites are drawn before close ones. This causes some overdraw, but usually this is negligible. There is a final issue of middle textures on two-sided lines, used in transparent bars, for example.
These are mixed in and drawn with the sprites at the end of the rendering process, rather than with the other walls.

Games using the Doom engine

The Doom engine achieved most of its fame as a result of powering the classic first-person shooter Doom, and it was used in several other games. The "Big Four" Doom engine games are usually considered to be Doom, Heretic, Hexen: Beyond Heretic, and Strife: Quest for the Sigil. In the 1990s a handful of developers acquired licenses to distribute total conversions of Doom, and following the 1997 source code release a number of standalone titles have been produced with the engine, including freeware, fangames and commercial titles.
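The sketch below illustrates the front-to-back BSP traversal referenced in the binary space partitioning section above. It is a schematic stand-in, not Doom's actual code: the node layout, the side-test convention, and all names are my own assumptions.

```python
# Illustrative front-to-back BSP traversal in the spirit of the Doom renderer:
# at each node, the child on the camera's side of the dividing line is visited
# first, so nearer subsectors are drawn before farther ones.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    # Dividing line through (x, y) with direction (dx, dy); leaves carry segs.
    x: float
    y: float
    dx: float
    dy: float
    front: "Optional[Node]" = None           # child on the line's front side
    back: "Optional[Node]" = None            # child on the line's back side
    subsector: Optional[List[str]] = None    # leaf: convex list of segs to draw

def on_front_side(node, px, py):
    # Sign of the 2D cross product tells which side of the line the point is on
    # (the sign convention here is arbitrary, chosen for this sketch).
    return (px - node.x) * node.dy - (py - node.y) * node.dx <= 0

def render_node(node, cam_x, cam_y, draw):
    if node is None:
        return
    if node.subsector is not None:           # leaf: draw its segs
        for seg in node.subsector:
            draw(seg)
        return
    # Recurse into the camera's side first, then the far side. A full renderer
    # would also stop early once every screen column is filled.
    if on_front_side(node, cam_x, cam_y):
        near, far = node.front, node.back
    else:
        near, far = node.back, node.front
    render_node(near, cam_x, cam_y, draw)
    render_node(far, cam_x, cam_y, draw)

leaf_a = Node(0, 0, 0, 0, subsector=["seg A"])
leaf_b = Node(0, 0, 0, 0, subsector=["seg B"])
root = Node(x=0, y=0, dx=0, dy=1, front=leaf_a, back=leaf_b)
render_node(root, -1.0, 0.0, print)   # camera on the front side: A before B
```

Because the tree is built offline by the node builder, the runtime cost per frame is just this ordered descent, which is why Doom's walls cannot move sideways.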
========================================
[SOURCE: https://en.wikipedia.org/wiki/Texture_mapping#cite_note-13] | [TOKENS: 4408]
Texture mapping

Texture mapping is a term used in computer graphics to describe how 2D images are projected onto 3D models. The most common variant is the UV unwrap, which can be described as an inverse paper cutout, where the surfaces of a 3D model are cut apart so that they can be unfolded into a 2D coordinate space (UV space).

Semantic

Texture mapping can refer variously to (1) the task of unwrapping a 3D model (converting the surface of a 3D model into a 2D texture map), (2) applying a 2D texture map onto the surface of a 3D model, and (3) the 3D software algorithm that performs both tasks. A texture map refers to a 2D image ("texture") that adds visual detail to a 3D model. The image can be stored as a raster graphic. A texture that stores a specific property, such as bumpiness, reflectivity, or transparency, is also referred to by that property, for example as a color map or roughness map. The coordinate space that converts from a 3D model's 3D space into a 2D space for sampling from the texture map is variously called UV space, UV coordinates, or texture space.

Algorithm

A simplified illustration of how such an algorithm could work to render an image is sketched in code below, after the description of texture maps.

History

The original technique was pioneered by Edwin Catmull in 1974 as part of his doctoral thesis. Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique (controlled by a materials system) have made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.

Texture maps

A texture map is an image applied ("mapped") to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. Texture maps may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles. They may have one to three dimensions, although two dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources (which may be located in device memory) as buffers or surfaces, and may allow "render to texture" for additional effects such as post-processing or environment mapping.

Texture maps usually contain RGB color data (stored as direct color, compressed formats, or indexed color), and sometimes an additional channel for alpha blending (RGBA), especially for billboards and decal overlay textures. It is possible to use the alpha channel (which may be convenient to store in formats parsed by hardware) for other uses such as specularity. Multiple texture maps (or channels) may be combined for control over specularity, normals, displacement, or subsurface scattering, e.g. for skin rendering. Multiple texture images may be combined in texture atlases or array textures to reduce state changes for modern hardware (they may be considered a modern evolution of tile map graphics). Modern hardware often supports cube map textures with multiple faces for environment mapping.
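As promised in the Algorithm section above, here is a minimal Python sketch of the core texture mapping step: shading a pixel of a screen-space triangle by sampling a texture at barycentrically interpolated UV coordinates. This is my own illustration (nearest-neighbour filtering, wrapped coordinates, invented names), not the article's pseudocode:

```python
# Minimal texture sampling sketch: interpolate per-vertex UVs with barycentric
# weights, then fetch the nearest texel (coordinates wrap, i.e. repeat).
def sample_nearest(texture, u, v):
    h, w = len(texture), len(texture[0])
    x = int(u * w) % w        # wrap: the texture repeats outside [0, 1)
    y = int(v * h) % h
    return texture[y][x]

def shade_pixel(bary, uvs, texture):
    # bary: barycentric weights (w0, w1, w2) of the pixel inside the triangle.
    # uvs: per-vertex texture coordinates [(u0, v0), (u1, v1), (u2, v2)].
    u = sum(w * uv[0] for w, uv in zip(bary, uvs))
    v = sum(w * uv[1] for w, uv in zip(bary, uvs))
    return sample_nearest(texture, u, v)

# An 8x8 checkerboard texture of RGB tuples.
checker = [[(255, 255, 255) if (x + y) % 2 else (0, 0, 0) for x in range(8)]
           for y in range(8)]
print(shade_pixel((0.2, 0.3, 0.5), [(0, 0), (1, 0), (0, 1)], checker))
```

The filtering and wrapping choices made in `sample_nearest` are exactly the knobs discussed below: nearest-neighbour versus bilinear/trilinear filtering, and clamping versus wrapping of out-of-range coordinates.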
Texture maps may be acquired by scanning or digital photography, designed in image manipulation software such as GIMP or Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush. The process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as UV coordinates). This may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package through UV unwrapping tools. It is also possible to associate a procedural transformation from 3D space to texture space with the material; this might be accomplished via planar projection or, alternatively, cylindrical or spherical mapping. More complex mappings may consider the distance along a surface to minimize distortion. These coordinates are interpolated across the faces of polygons to sample the texture map during rendering. Textures may be repeated or mirrored to extend a finite rectangular bitmap over a larger area, or they may have a one-to-one unique "injective" mapping from every piece of a surface (which is important for render mapping and light mapping, also known as baking).

Texture mapping maps the model surface (or screen space during rasterization) into texture space; in this space, the texture map is visible in its undistorted form. UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates. Some rendering techniques, such as subsurface scattering, may be performed approximately by texture-space operations.

Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Microtextures or detail textures are used to add higher-frequency details, and dirt maps add weathering and variation; this can greatly reduce the apparent periodicity of repeating textures. Modern graphics may use more than ten layers, combined using shaders, for greater fidelity. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface (such as tree bark or rough concrete) that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in video games as graphics hardware has become powerful enough to accommodate it in real time.

The way that samples (e.g., when viewed as pixels on the screen) are calculated from the texels (texture pixels) is governed by texture filtering. The cheapest method is nearest-neighbour interpolation, but bilinear interpolation and trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped. Anisotropic filtering better eliminates directional artefacts when viewing textures from oblique viewing angles.

Texture streaming is a means of using data streams for textures, where each texture is available in two or more different resolutions, so as to determine which texture should be loaded into memory and used based on draw distance from the viewer and how much memory is available for textures.
Texture streaming allows a rendering engine to use low resolution textures for objects far away from the viewer's camera, and resolve those into more detailed textures, read from a data source, as the point of view nears the objects. As an optimization, it is possible to render detail from a complex, high-resolution model or expensive process (such as global illumination) into a surface texture (possibly on a low-resolution model). This technique is called baking (or render mapping) and is most commonly used for light maps, but may also be used to generate normal maps and displacement maps. Some computer games (e.g. Messiah) have used this technique. The original Quake software engine used on-the-fly baking to combine light maps and colour maps in a process called surface caching. Baking can be used as a form of level of detail generation, where a complex scene with many different elements and materials may be approximated by a single element with a single texture, which is then algorithmically reduced for lower rendering cost and fewer drawcalls. It is also used to take high-detail models from 3D sculpting software and point cloud scanning and approximate them with meshes more suitable for realtime rendering. Rasterisation algorithms Various techniques have evolved in software and hardware implementations. Each offers different trade-offs in precision, versatility, and performance. Affine texture mapping linearly interpolates texture coordinates across a surface, making it the fastest form of texture mapping. Some software and hardware (such as the original PlayStation) project vertices in 3D space onto the screen during rendering and linearly interpolate the texture coordinates in screen space between them. This may be done by incrementing fixed-point UV coordinates or by an incremental error algorithm akin to Bresenham's line algorithm. In contrast to perpendicular polygons, this leads to noticeable distortion with perspective transformations (as shown in the figure: the checker box texture appears bent), especially as primitives near the camera. This distortion can be reduced by subdividing polygons into smaller polygons. Using quad primitives for rectangular objects can look less incorrect than if those rectangles were split into triangles. However, since interpolating four points adds complexity to the rasterization, most early implementations preferred triangles only. Some hardware, such as the forward texture mapping used by the Nvidia NV1, offered efficient quad primitives. With perspective correction, triangles become equivalent to quad primitives and this advantage disappears. For rectangular objects that are at right angles to the viewer (like floors and walls), the perspective only needs to be corrected in one direction across the screen rather than both. The correct perspective mapping can be calculated at the left and right edges of the floor. Affine linear interpolation across that horizontal span will look correct because every pixel along that line is the same distance from the viewer. Perspective correct texturing accounts for the vertices' positions in 3D space rather than simply interpolating coordinates in 2D screen space. While achieving the correct visual effect, perspective correct texturing is more expensive to calculate. 
To perform perspective correction of the texture coordinates u and v, with z being the depth component from the viewer's point of view, it is possible to take advantage of the fact that the values 1/z, u/z, and v/z are linear in screen space across the surface being textured. In contrast, the original z, u, and v, before the division, are not linear across the surface in screen space. It is therefore possible to linearly interpolate these reciprocals across the surface and compute corrected values at each pixel, producing a perspective correct texture mapping.

To do this, the reciprocals at each vertex of the geometry (three points for a triangle) are calculated first: vertex n carries the values u_n/z_n, v_n/z_n, and 1/z_n. Linear interpolation of these values between the vertices (e.g., using barycentric coordinates) then yields, at a given point, the interpolated values u_i, v_i, and 1/z_i, where u_i and v_i are still divided by z. Because the division by z altered their coordinate system, u_i and v_i cannot be used directly as texture coordinates. To correct back to u,v space, the corrected depth is recovered by taking the reciprocal once again, z_correct = 1/(1/z_i), and it is then used to correct the coordinates: u_correct = u_i · z_correct and v_correct = v_i · z_correct. This correction makes the difference between texture coordinates from pixel to pixel smaller in parts of the polygon that are closer to the viewer (stretching the texture wider) and larger in parts that are farther away (compressing the texture).

Affine texture mapping directly interpolates a texture coordinate u_α between two endpoints u_0 and u_1:

    u_α = (1 − α) u_0 + α u_1,    where 0 ≤ α ≤ 1

Perspective correct mapping interpolates after dividing by depth z, then uses the interpolated reciprocal to recover the correct coordinate:

    u_α = [(1 − α) u_0/z_0 + α u_1/z_1] / [(1 − α)/z_0 + α/z_1]

3D graphics hardware typically supports perspective correct texturing. Various techniques have evolved for rendering texture-mapped geometry into images with different quality and precision trade-offs, which can be applied to both software and hardware. Classic software texture mappers generally performed only simple texture mapping with at most one lighting effect (typically applied through a lookup table), and perspective correctness was about 16 times more expensive than affine mapping.
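The interpolation just described fits in a few lines of C. A minimal sketch of a perspective-correct horizontal span, under the same hypothetical put_pixel and sample_texture helpers as the affine sketch above, with precomputed per-pixel steps for u/z, v/z, and 1/z:

    extern void put_pixel(int x, int color);   /* hypothetical frame-buffer write */
    extern int  sample_texture(int u, int v);  /* hypothetical texel fetch */

    /* Perspective-correct span: u/z, v/z and 1/z are linear in screen space,
       so they are stepped per pixel; one divide per pixel recovers u and v. */
    static void perspective_span(int x0, int x1,
                                 float uoz, float voz, float ooz,     /* u/z, v/z, 1/z at x0 */
                                 float duoz, float dvoz, float dooz)  /* per-pixel steps */
    {
        for (int x = x0; x < x1; x++) {
            float z = 1.0f / ooz;   /* z_correct = 1 / (1/z) */
            put_pixel(x, sample_texture((int)(uoz * z), (int)(voz * z)));
            uoz += duoz; voz += dvoz; ooz += dooz;
        }
    }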
The Doom engine restricted the world to vertical walls and horizontal floors and ceilings, with a camera that could only rotate about the vertical axis. This meant the walls had a constant depth coordinate along a vertical line, and the floors and ceilings had a constant depth along a horizontal line. After performing one perspective correction calculation for the depth, the rest of the line could use fast affine mapping. Some later renderers of this era simulated a small amount of camera pitch with shearing, which allowed the appearance of greater freedom while using the same rendering technique. Some engines were able to render texture mapped heightmaps (e.g. Nova Logic's Voxel Space, and the engine for Outcast) via Bresenham-like incremental algorithms, producing the appearance of a texture mapped landscape without the use of traditional geometric primitives.

Every triangle can be further subdivided into groups of about 16 pixels, both to keep the arithmetic mill busy at all times and to produce arithmetic results faster. For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering, and affine mapping is used on them. This technique works because the distortion of affine mapping becomes much less noticeable on smaller polygons. The Sony PlayStation made extensive use of it because it only supported affine mapping in hardware and had a relatively high triangle throughput compared to its peers.

Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation), thus lessening the overhead further. Another reason is that 2D affine interpolation does not fit into the small register file of the x86 CPU; the 68000 and RISC processors are much better suited to that approach.

A different approach was taken for Quake, which calculated perspective-correct coordinates only once every 16 pixels of a scanline and linearly interpolated between them, effectively running at the speed of linear interpolation because the perspective-correct calculation ran in parallel on the co-processor; a sketch of this hybrid appears below. As the polygons are rendered independently, it may be possible to switch between spans and columns or diagonal directions, depending on the orientation of the polygon normal, to achieve a more constant z, but the effort seems not to be worth it. One other technique is to approximate the perspective with a faster calculation, such as a polynomial. Another uses the 1/z value of the last two drawn pixels to linearly extrapolate the next value; the division is then done starting from those values, so that only a small remainder has to be divided, but the amount of bookkeeping makes this method too slow on most systems. A third technique, used by the Build Engine (most notably in Duke Nukem 3D), builds on the constant-distance trick used by the Doom engine by finding and rendering along the line of constant distance for arbitrary polygons.

Texture mapping hardware was originally developed for simulation (e.g. as implemented in the Evans and Sutherland ESIG and Singer-Link Digital Image Generator (DIG) systems), for professional graphics workstations (such as those from Silicon Graphics), and for broadcast digital video effects machines such as the Ampex ADO.
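As referenced above, a sketch of the Quake-style hybrid, with the same hypothetical helpers as the earlier span sketches. The real engine overlapped the per-run divide with FPU execution, which a plain C loop cannot express; this shows only the arithmetic structure.

    extern void put_pixel(int x, int color);   /* hypothetical frame-buffer write */
    extern int  sample_texture(int u, int v);  /* hypothetical texel fetch */

    /* Hybrid span: one exact perspective divide per 16-pixel run, with
       cheap affine interpolation in between. */
    static void span_every16(int x0, int x1,
                             float uoz, float voz, float ooz,     /* u/z, v/z, 1/z at x0 */
                             float duoz, float dvoz, float dooz)  /* per-pixel steps */
    {
        float u = uoz / ooz, v = voz / ooz;    /* exact coordinates at x0 */
        for (int x = x0; x < x1; x += 16) {
            int run = (x1 - x < 16) ? (x1 - x) : 16;
            float uoz2 = uoz + duoz * run;     /* exact values at the run end */
            float voz2 = voz + dvoz * run;
            float ooz2 = ooz + dooz * run;
            float u2 = uoz2 / ooz2, v2 = voz2 / ooz2;
            float du = (u2 - u) / run, dv = (v2 - v) / run;
            for (int i = 0; i < run; i++) {    /* affine inner loop */
                put_pixel(x + i, sample_texture((int)u, (int)v));
                u += du; v += dv;
            }
            uoz = uoz2; voz = voz2; ooz = ooz2; u = u2; v = v2;
        }
    }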
Texture mapping hardware later appeared in arcade cabinets, consumer video game consoles, and PC video cards in the mid-1990s. In flight simulation, texture mapping provided important motion and altitude cues necessary for pilot training that were not available on untextured surfaces. It also allowed prefiltered texture patterns stored in memory to be fetched by the video processor at real-time rates. Modern graphics processing units (GPUs) provide specialised fixed-function units called texture samplers, or texture mapping units, to perform texture mapping, usually with trilinear filtering or better multi-tap anisotropic filtering, and hardware for decoding specific formats such as DXTn. As of 2016, texture mapping hardware is ubiquitous, as most SoCs contain a suitable GPU.

Some hardware implementations combine texture mapping with hidden-surface determination in tile-based deferred rendering or scanline rendering; such systems fetch only the visible texels, at the expense of using greater workspace for transformed vertices. Most systems have settled on the z-buffering approach, which can still reduce the texture mapping workload with front-to-back sorting.

On earlier graphics hardware, there were two competing paradigms for how to deliver a texture to the screen: inverse texture mapping and forward texture mapping. Of these methods, inverse texture mapping has become standard in modern hardware. With this method, a pixel on the screen is mapped to a point on the texture. Each vertex of a rendering primitive is projected to a point on the screen, and each of these points is mapped to a u,v texel coordinate on the texture. A rasterizer interpolates between these points to fill in each pixel covered by the primitive. The primary advantage of this method is that each pixel covered by a primitive is traversed exactly once: once a primitive's vertices are transformed, the amount of remaining work scales directly with how many pixels it covers on the screen. The main disadvantage is that the memory access pattern in texture space will not be linear if the texture is at an angle to the screen. This disadvantage is often addressed by texture caching techniques, such as the swizzled texture memory arrangement. The linear interpolation can be used directly for simple and efficient affine texture mapping, but can also be adapted for perspective correctness.

Forward texture mapping instead maps each texel of the texture to a pixel on the screen. After transforming a rectangular primitive to a place on the screen, a forward texture mapping renderer iterates through each texel on the texture, splatting each one onto a pixel of the frame buffer. This was used by some hardware, such as the 3DO, the Sega Saturn and the NV1. The primary advantage is that the texture is accessed in a simple linear order, allowing very efficient caching of the texture data. However, this benefit is also its disadvantage: as a primitive gets smaller on screen, it still has to iterate over every texel in the texture, causing many pixels to be overdrawn redundantly. This method is also well suited to rendering quad primitives rather than reducing them to triangles, which provided an advantage when perspective-correct texturing was not available in hardware, because the affine distortion of a quad looks less incorrect than the same quad split into two triangles (see the § Affine texture mapping section above). A sketch of the forward approach follows.
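As referenced above, a minimal sketch of forward texture mapping, assuming a hypothetical put_pixel_xy frame-buffer write and, for brevity, a screen-space parallelogram rather than the distorted quads that real hardware handled:

    #include <stdint.h>

    extern void put_pixel_xy(int x, int y, int color);  /* hypothetical */

    /* Forward mapping of a w x h texture onto a screen-space parallelogram:
       texels are read in perfectly linear order and splatted one by one.
       When the primitive is small on screen, many splats land on the same
       pixel, which is the overdraw cost described above. */
    static void forward_map(const uint8_t *texels, int w, int h,
                            float ox, float oy,   /* screen position of texel (0,0) */
                            float ux, float uy,   /* screen step per texel in u */
                            float vx, float vy)   /* screen step per texel in v */
    {
        for (int v = 0; v < h; v++)
            for (int u = 0; u < w; u++)
                put_pixel_xy((int)(ox + u * ux + v * vx),
                             (int)(oy + u * uy + v * vy),
                             texels[v * w + u]);
    }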
The NV1 hardware also allowed a quadratic interpolation mode to provide an even better approximation of perspective correctness. UV mapping became an important technique for 3D modelling, and assisted in clipping the texture correctly when the primitive went past the edge of the screen, but existing hardware did not provide effective implementations of it. These shortcomings could have been addressed with further development, but GPU design has since mostly shifted toward the inverse mapping technique.

Applications Beyond 3D rendering, the availability of texture mapping hardware has inspired its use for accelerating other tasks. It is possible to use texture mapping hardware to accelerate both the reconstruction of voxel data sets from tomographic scans and the visualization of the results. Many user interfaces use texture mapping to accelerate animated transitions of screen elements, e.g. Exposé in Mac OS X.

See also References Software External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Fixed_point_arithmetic] | [TOKENS: 5680]
Contents Fixed-point arithmetic In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents (1/100 of a dollar). More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g., a fractional amount of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted with the more complicated and computationally demanding floating-point representation.

In the fixed-point representation, the fraction is often expressed in the same number base as the integer part, but using negative powers of the base b. The most common variants are decimal (base 10) and binary (base 2); the latter is also commonly known as binary scaling. Thus, if n fraction digits are stored, the value will always be an integer multiple of b^−n. Fixed-point representation can also be used to omit the low-order digits of integer values, for instance, when representing large dollar values as multiples of $1000 ($1K). When decimal fixed-point numbers are displayed for human reading, the fraction digits are usually separated from those of the integer part by a radix character (usually "." in English, but "," or some other symbol in many other languages). Internally, however, there is no separation, and the distinction between the two groups of digits is defined only by the programs that handle such numbers.

Fixed-point representation was the norm in mechanical calculators. Since most modern processors have a fast floating-point unit (FPU), fixed-point representations in processor-based implementations are now used only in special situations, such as in low-cost embedded microprocessors and microcontrollers; in applications that demand high speed, low power consumption, or small chip area, like image, video, and digital signal processing; or when their use is more natural for the problem. Examples of the latter are accounting of dollar amounts, when fractions of cents must be rounded to whole cents in strictly prescribed ways, and the evaluation of functions by table lookup, or any application where rational numbers need to be represented without rounding errors (which fixed point can do but floating point cannot). Fixed-point representation is still the norm for field-programmable gate array (FPGA) implementations, as floating-point support in an FPGA requires significantly more resources than fixed-point support.

Representation A fixed-point representation of a fractional number is essentially an integer that is to be implicitly multiplied by a fixed scaling factor. For example, the value 1.23 can be stored in a variable as the integer value 123 with an implicit scaling factor of 1/100. This representation allows standard integer arithmetic logic units to perform rational number calculations. Negative values are usually represented in binary fixed-point formats as a signed integer in two's complement representation with an implicit scaling factor as above. The sign of the value will always be indicated by the most significant bit (1 = negative, 0 = non-negative), even if the number of fraction bits is greater than or equal to the total number of bits.
For example, the 8-bit signed binary integer (11110101)₂ = −11, taken with −3, +5, and +12 implied fraction bits, would represent the values −11/2^−3 = −88, −11/2^5 = −0.34375, and −11/2^12 = −0.002685546875, respectively. Alternatively, negative values can be represented by an integer in the sign-magnitude format, in which case the sign is never included in the number of implied fraction bits. This variant is more commonly used in decimal fixed-point arithmetic. Thus the signed 5-digit decimal integer (−00025)₁₀, taken with −3, +5, and +12 implied decimal fraction digits, would represent the values −25/10^−3 = −25000, −25/10^5 = −0.00025, and −25/10^12 = −0.000000000025, respectively.

A program will usually assume that all fixed-point values that will be stored into a given variable, or that will be produced by a given instruction, have the same scaling factor. This parameter can usually be chosen by the programmer, depending on the precision needed and the range of values to be stored. The scaling factor of a variable or formula may not appear explicitly in the program; coding best practice then requires that it be provided in the documentation, at least as a comment in the source code.

For greater efficiency, scaling factors are often chosen to be powers (positive or negative) of the base b used to represent the integers internally. However, often the best scaling factor is dictated by the application. Thus, one often uses scaling factors that are powers of 10 (e.g., 1/100 for dollar values), for human convenience, even when the integers are represented internally in binary. Decimal scaling factors also mesh well with the metric (SI) system, since the choice of the fixed-point scaling factor is often equivalent to the choice of a unit of measure (like centimeters or microns instead of meters). However, other scaling factors may be used occasionally, e.g., a fractional amount of hours may be represented as an integer number of seconds, that is, as a fixed-point number with a scale factor of 1/3600.

Even with the most careful rounding, fixed-point values represented with a scaling factor S may have an error of up to ±0.5 in the stored integer, that is, ±0.5 S in the value. Therefore, smaller scaling factors generally produce more accurate results. On the other hand, a smaller scaling factor means a smaller range of values that can be stored in a given program variable. The maximum fixed-point value that can be stored into a variable is the largest integer value that can be stored into it, multiplied by the scaling factor, and similarly for the minimum value. For example, the table below gives the implied scaling factor S, the minimum and maximum representable values Vmin and Vmax, and the accuracy δ = S/2 of values that could be represented in 16-bit signed binary fixed-point format, depending on the number f of implied fraction bits.

Fixed-point formats with scaling factors of the form 2^n − 1 (namely 1, 3, 7, 15, 31, etc.) have been said to be appropriate for image processing and other digital signal processing tasks, as they are supposed to provide more consistent conversions between fixed- and floating-point values than the usual 2^n scaling. The Julia programming language implements both versions. Any binary fraction a/2^m, such as 1/16 or 17/32, can be exactly represented in fixed point with a power-of-two scaling factor 1/2^n for any n ≥ m. However, most decimal fractions, like 0.1 or 0.123, are infinite repeating fractions in base 2 and hence cannot be represented that way.
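The arithmetic of implied fraction bits above can be reproduced with a few lines of C; a minimal sketch, where fixed_value is an illustrative helper rather than any standard function:

    #include <math.h>
    #include <stdio.h>

    /* Value of a stored integer under f implied binary fraction bits:
       value = raw * 2^(-f); f may be negative, as in the text. */
    static double fixed_value(int raw, int f) {
        return raw * pow(2.0, -f);
    }

    int main(void) {
        int raw = -11;                            /* (11110101)2 as a signed byte */
        printf("%.12g\n", fixed_value(raw, -3));  /* -88 */
        printf("%.12g\n", fixed_value(raw, 5));   /* -0.34375 */
        printf("%.12g\n", fixed_value(raw, 12));  /* -0.002685546875 */
        return 0;
    }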
Similarly, any decimal fraction a/10^m, such as 1/100 or 37/1000, can be exactly represented in fixed point with a power-of-ten scaling factor 1/10^n for any n ≥ m. This decimal format can also represent any binary fraction a/2^m, such as 1/8 (0.125) or 17/32 (0.53125). More generally, a rational number a/b, with a and b relatively prime and b positive, can be exactly represented in binary fixed point only if b is a power of 2, and in decimal fixed point only if b has no prime factors other than 2 and/or 5.

Fixed-point computations can be faster and/or use less hardware than floating-point ones. If the range of the values to be represented is known in advance and is sufficiently limited, fixed point can make better use of the available bits. For example, if 32 bits are available to represent a number between 0 and 1, a fixed-point representation can have error less than 1.2 × 10^−10, whereas the standard floating-point representation may have error up to 596 × 10^−10, because 9 of the bits are wasted on the sign and exponent of the dynamic scaling factor. Specifically, comparing 32-bit fixed-point to floating-point audio, a recording requiring less than 40 dB of headroom has a higher signal-to-noise ratio using 32-bit fixed.

Programs using fixed-point computations are usually more portable than those using floating point, since they do not depend on the availability of an FPU. This advantage was particularly strong before the IEEE floating-point standard was widely adopted, when floating-point computations with the same data would yield different results depending on the manufacturer, and often on the computer model. Many embedded processors lack an FPU, because integer arithmetic units require substantially fewer logic gates and consume much smaller chip area than an FPU, and software emulation of floating point on low-speed devices would be too slow for most applications. CPU chips for earlier personal computers and game consoles, like the Intel 386 and 486SX, also lacked an FPU.

The absolute resolution (difference between successive values) of any fixed-point format is constant over the whole range, namely the scaling factor S. In contrast, the relative resolution of a floating-point format is approximately constant over its whole range, varying within a factor of the base b, whereas its absolute resolution varies by many orders of magnitude, like the values themselves. In many cases, the rounding and truncation errors of fixed-point computations are easier to analyze than those of the equivalent floating-point computations. Applying linearization techniques to truncation, such as dithering and/or noise shaping, is more straightforward within fixed-point arithmetic. On the other hand, the use of fixed point requires greater care by the programmer: avoidance of overflow requires much tighter estimates for the ranges of variables and all intermediate values in the computation, and often also extra code to adjust their scaling factors. Fixed-point programming normally requires the use of integer types of different widths. Fixed-point applications can make use of block floating point, which is a fixed-point environment in which each array (block) of fixed-point data is scaled with a common exponent in a single word.

Applications A common use of decimal fixed point is for storing monetary values, for which the complicated rounding rules of floating-point numbers are often a liability.
For example, the open-source money management application GnuCash, written in C, switched from floating point to fixed point as of version 1.6 for this reason. Binary fixed point (binary scaling) was widely used from the late 1960s to the 1980s for real-time computing that was mathematically intensive, such as flight simulation and nuclear power plant control algorithms, and it is still used in many DSP applications and custom-made microprocessors. Computations involving angles would use binary angular measurement. Binary fixed point is used in the STM32G4 series CORDIC co-processors and in the discrete cosine transform algorithms used to compress JPEG images.

Electronic instruments such as electricity meters and digital clocks often use polynomials to compensate for introduced errors, e.g. from temperature or power supply voltage. The coefficients are produced by polynomial regression. Binary fixed-point polynomials can utilize more bits of precision than floating point, and do so in fast code using inexpensive CPUs. Accuracy, crucial for instruments, compares well to equivalent-bit floating-point calculations, provided the fixed-point polynomials are evaluated using Horner's method (e.g. y = ((ax + b)x + c)x + d) to reduce the number of times that rounding occurs, and the fixed-point multiplications use rounding addends.

Operations To add or subtract two values with the same implicit scaling factor, it is sufficient to add or subtract the underlying integers; the result will have their common implicit scaling factor and can thus be stored in the same program variables as the operands. These operations yield the exact mathematical result as long as no overflow occurs, that is, as long as the resulting integer can be stored in the receiving program variable. If overflow happens, it behaves as with ordinary integers of the same signedness; in the unsigned and two's-complement cases, the result simply wraps around modulo the word size. If the operands have different scaling factors, they must be converted to a common scaling factor before the operation.

To multiply two fixed-point numbers, it suffices to multiply the two underlying integers and take the scaling factor of the result to be the product of their scaling factors. The result will be exact, with no rounding, provided that it does not overflow the receiving variable. (Specifically, with integer multiplication, the product can be up to twice the width of the two factors.) For example, multiplying the numbers 123 scaled by 1/1000 (0.123) and 25 scaled by 1/10 (2.5) yields the integer 123 × 25 = 3075 scaled by (1/1000) × (1/10) = 1/10000, that is 3075/10000 = 0.3075. As another example, multiplying the first number by 155 implicitly scaled by 1/32 (155/32 = 4.84375) yields the integer 123 × 155 = 19065 with implicit scaling factor (1/1000) × (1/32) = 1/32000, that is 19065/32000 = 0.59578125.

In binary, it is common to use a scaling factor that is a power of two. After the multiplication, the scaling factor can be divided away by shifting right; shifting is simple and fast in most computers. When right-shifting or a typical integer-division instruction (such as C integer division or x86 idiv) is used, the result is equivalent to a flooring division (floor(x/y)). A method with rounding can be used to reduce the error introduced; three variations are possible, differing in their choice of tie-breaking rule. These rounding methods are usable in any scaling done through integer division.
For example, they are also applicable to the rescaling discussed below. The division of fixed-point numbers can be understood as the division of two fractions with potentially different denominators (scaling factors). With p/q and r/s (where p, q, r, and s are all integers), the naive approach is to divide the underlying integers and let the quotient carry a new scaling factor s/q, since (p/q) / (r/s) = (p ÷ r) · (s/q). For example, division of 3456 scaled by 1/100 (34.56) by 1234 scaled by 1/1000 (1.234) yields the integer 3456 ÷ 1234 = 3 (rounded) with scale factor (1/100)/(1/1000) = 10, that is, 30. As another example, the division of the first number by 155 implicitly scaled by 1/32 (155/32 = 4.84375) yields the integer 3456 ÷ 155 = 22 (rounded) with implicit scaling factor (1/100)/(1/32) = 32/100 = 8/25, that is 22 × 32/100 = 7.04.

With very similar s and q, the above algorithm results in an overly coarse scaling factor. This can be improved by first converting the dividend to a smaller scaling factor: if we reduce the dividend's scaling factor by a factor of n, we instead calculate (n·p) ÷ r, whose result carries the finer scaling factor s/(q·n). For example, if a = 1.23 is represented as 123 with scaling 1/100, and b = 6.25 is represented as 6250 with scaling 1/1000, then simple division of the integers yields 123 ÷ 6250 = 0 (rounded) with scaling factor (1/100)/(1/1000) = 10. If a is first converted to 1,230,000 with scaling factor 1/1000000, the result will be 1,230,000 ÷ 6250 = 197 (rounded) with scale factor 1/1000 (0.197). The exact value 1.23/6.25 is 0.1968. A different way to think about the scaling is to consider division as the inverse operation of multiplication: if multiplication leads to a finer scaling factor, it is reasonable that the dividend needs a finer scaling factor as well to recover the original value.

In fixed-point computing, it is often necessary to convert a value to a different scaling factor, for example when combining operands that carry different scales, or when storing a result into a variable with its own scale. To convert a number from a fixed-point type with scaling factor R to another type with scaling factor S, the underlying integer must be multiplied by the ratio R/S. Thus, for example, to convert the value 1.23 = 123/100 from scaling factor R = 1/100 to one with scaling factor S = 1/1000, the integer 123 must be multiplied by (1/100)/(1/1000) = 10, yielding the representation 1230/1000. If the scaling factor is a power of the base used internally to represent the integer, changing the scaling factor requires only dropping low-order digits of the integer, or appending zero digits. However, this operation must preserve the sign of the number; in two's complement representation, that means extending the sign bit, as in arithmetic shift operations. If S does not divide R (in particular, if the new scaling factor S is greater than the original R), the new integer may have to be rounded.

In particular, if r and s are fixed-point variables with implicit scaling factors R and S, the operation r ← r × s requires multiplying the respective integers and explicitly dividing the result by S. The result may have to be rounded, and overflow may occur. For example, if the common scaling factor is 1/100, multiplying 1.23 by 0.25 entails multiplying 123 by 25 to yield 3075 with an intermediate scaling factor of 1/10000. In order to return to the original scaling factor 1/100, the integer 3075 must then be multiplied by 1/100, that is, divided by 100, to yield either 31 (0.31) or 30 (0.30), depending on the rounding policy used. Similarly, the operation r ← r/s will require dividing the integers and explicitly multiplying the quotient by S. Rounding and/or overflow may occur here too.
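The operations r ← r × s and r ← r/s above take only a few lines in C for a binary format. A minimal sketch for an assumed Q16.16 format (scaling factor 1/2^16, chosen for illustration): the multiply widens to 64 bits and rescales with a rounding addend, and the divide pre-scales the dividend, as described above. The rounding shown assumes an arithmetic right shift and positive divisors.

    #include <stdint.h>

    typedef int32_t q16;   /* value = raw / 2^16 */

    /* r <- r * s: the raw product has 32 fraction bits; add half of the
       divisor (2^15), then shift 16 bits away to rescale with rounding. */
    static q16 q16_mul(q16 a, q16 b) {
        int64_t p = (int64_t)a * (int64_t)b;   /* widen so no bits are lost */
        return (q16)((p + (1 << 15)) >> 16);   /* assumes arithmetic shift */
    }

    /* r <- r / s: widen the dividend to 32 fraction bits first, so the
       quotient comes out with 16 fraction bits (the rescaling trick above).
       Rounding via b/2 is correct for nonnegative a and positive b. */
    static q16 q16_div(q16 a, q16 b) {
        int64_t n = (int64_t)a << 16;
        return (q16)((n + b / 2) / b);
    }

A production version would also handle negative operands in the rounding addends and guard against overflow or division by zero.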
To convert a number from floating point to fixed point, one may multiply it by the scaling factor S and then round the result to the nearest integer. Care must be taken to ensure that the result fits in the destination variable or register. Depending on the scaling factor and storage size, and on the range of input numbers, the conversion may not entail any rounding. To convert a fixed-point number to floating point, one may convert the integer to floating point and then divide it by the scaling factor S. This conversion may entail rounding if the integer's absolute value is greater than 2^24 (for binary single-precision IEEE floating point) or 2^53 (for double precision). Overflow or underflow may occur if |S| is very large or very small, respectively.

Hardware support Typical processors do not have specific support for fixed-point arithmetic. However, most computers with binary arithmetic have fast bit-shift instructions that can multiply or divide an integer by any power of 2, in particular an arithmetic shift instruction. These instructions can be used to quickly change scaling factors that are powers of 2 while preserving the sign of the number. Early computers like the IBM 1620 and the Burroughs B3500 used a binary-coded decimal (BCD) representation for integers, namely base 10 where each decimal digit was independently encoded with 4 bits. Some processors, such as microcontrollers, may still use it. In such machines, conversion of decimal scaling factors can be performed by bit shifts and/or by memory address manipulation.

Some DSP architectures offer native support for specific fixed-point formats, for example signed n-bit numbers with n−1 fraction bits (whose values may range between −1 and almost +1). The support may include a multiply instruction that includes renormalization, that is, the scaling conversion of the product from 2n−2 to n−1 fraction bits. If the CPU does not provide that feature, the programmer must save the product in a large enough register or temporary variable, and code the renormalization explicitly.

Overflow happens when the result of an arithmetic operation is too large to be stored in the designated destination area. In addition and subtraction, the result may require one bit more than the operands. In multiplication of two unsigned integers with m and n bits, the result may have m + n bits. In case of overflow, the high-order bits are usually lost, as the unscaled integer gets reduced modulo 2^n, where n is the size of the storage area. The sign bit, in particular, is lost, which may radically change the sign and the magnitude of the value. Some processors can set a hardware overflow flag and/or generate an exception on the occurrence of an overflow. Some processors may instead provide saturation arithmetic: if the result of an addition or subtraction were to overflow, they store instead the value with the largest magnitude that fits in the receiving area and has the correct sign. However, these features are not very useful in practice; it is generally easier and safer to select scaling factors and word sizes so as to exclude the possibility of overflow, or to check the operands for excessive values before executing the operation.

Computer language support Explicit support for fixed-point numbers is provided by a few programming languages, notably PL/I, COBOL, Ada, JOVIAL, and Coral 66. They provide fixed-point data types with a binary or decimal scaling factor.
The compiler automatically generates code to do the appropriate scaling conversions when doing operations on these data types, when reading or writing variables, or when converting the values to other data types such as floating point. Most of those languages were designed between 1955 and 1990. More modern languages usually do not offer any fixed-point data types or support for scaling-factor conversion; that is also the case for several older languages that are still very popular, like FORTRAN, C and C++. The wide availability of fast floating-point processors, with strictly standardized behavior, has greatly reduced the demand for binary fixed-point support. Similarly, the support for decimal floating point in some programming languages, like C# and Python, has removed most of the need for decimal fixed-point support. In the few situations that call for fixed-point operations, they can be implemented by the programmer, with explicit scaling conversion, in any programming language.

On the other hand, all relational databases and the SQL notation support fixed-point decimal arithmetic and storage of numbers. PostgreSQL has a special numeric type for exact storage of numbers with up to 1000 digits. Moreover, in 2008 the International Organization for Standardization (ISO) published a draft technical report to extend the C programming language with fixed-point data types, for the benefit of programs running on embedded DSP processors. Two main kinds of data types are proposed: _Fract (a fractional part with a minimum 7-bit precision) and _Accum (a _Fract with at least 4 bits of integer part). The GNU Compiler Collection (GCC) supports this draft.

Detailed examples Suppose two fixed-point, 3-decimal-place numbers are to be multiplied (with 3 decimal places, trailing zeros are written out). To re-characterize this as an integer multiplication, we must first multiply by 1000 (= 10^3), moving all the decimal places into integer places, and then multiply by 1/1000 (= 10^−3) to put them back. This works equivalently in a different base, notably base 2 for computing, since a bit shift is the same as a multiplication or division by a power of 2. Three decimal digits are equivalent to about 10 binary digits, so we should round 0.05 to 10 bits after the binary point; the closest approximation is then 0.0000110011. The multiplication thus becomes an integer multiplication followed by a binary rescaling, and the result rounds to 11.023 with three digits after the decimal point.

Consider the task of computing the product of 1.2 and 5.6 with binary fixed point using 16 fraction bits. To represent the two numbers, one multiplies them by 2^16, obtaining 78 643.2 and 367 001.6, and rounds these values to the nearest integers, obtaining 78 643 and 367 002. These numbers fit comfortably into a 32-bit word in two's-complement signed format. Multiplying these integers together gives the 35-bit integer 28 862 138 286 with 32 fraction bits, without any rounding. Note that storing this value directly into a 32-bit integer variable would result in overflow and loss of the most significant bits; in practice, it would probably be stored in a signed 64-bit integer variable or register. If the result is to be stored in the same format as the data, with 16 fraction bits, that integer should be divided by 2^16, which gives approximately 440 401.28, and then rounded to the nearest integer.
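This example can be checked mechanically. A small C sketch (the 65536 factor is the 2^16 scaling from the text, and lround performs the round-to-nearest float-to-fixed conversion described earlier); the final rounding step, adding 2^15 before the 16-bit shift, is the one described in the next paragraph:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t a = (int32_t)lround(1.2 * 65536.0);  /* 78 643  */
        int32_t b = (int32_t)lround(5.6 * 65536.0);  /* 367 002 */
        int64_t p = (int64_t)a * (int64_t)b;         /* 28 862 138 286, 32 fraction bits */
        int32_t r = (int32_t)((p + (1LL << 15)) >> 16);  /* back to 16 fraction bits */
        printf("%d  %.16f\n", r, r / 65536.0);       /* 440401  6.7199859619140625 */
        return 0;
    }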
This effect can be achieved by adding 2^15 and then shifting the result right by 16 bits. The result is 440 401, which represents the value 6.7199859619140625. Taking into account the precision of the format, that value is better expressed as 6.719986 ± 0.000008 (not counting the error that comes from the operand approximations). The correct result would be 1.2 × 5.6 = 6.72.

For a more complicated example, suppose that the two numbers 1.2 and 5.6 are represented in 32-bit fixed-point format with 30 and 20 fraction bits, respectively. Scaling by 2^30 and 2^20 gives 1 288 490 188.8 and 5 872 025.6, which round to 1 288 490 189 and 5 872 026, respectively. Both numbers still fit in a 32-bit signed integer variable, and represent the fractions 1 288 490 189/2^30 and 5 872 026/2^20. Their product is (exactly) the 53-bit integer 7 566 047 890 552 914, which has 30 + 20 = 50 implied fraction bits and therefore represents the fraction 7 566 047 890 552 914/2^50. If we choose to represent this value in signed 16-bit fixed format with 8 fraction bits, we must divide the integer product by 2^(50−8) = 2^42 and round the result, which can be achieved by adding 2^41 and shifting right by 42 bits. The result is 1720, representing the value 1720/2^8 = 6.71875, or rather the interval between 3439/2^9 and 3441/2^9 (approximately 6.719 ± 0.002).

Notations Various notations have been used to concisely specify the parameters of a fixed-point format. In the following list, f represents the number of fractional bits, m the number of magnitude or integer bits, s the number of sign bits (0/1 or some other alternative representation), and b the total number of bits.

Software application examples See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Voxel_Space] | [TOKENS: 243]
Contents Voxel Space Voxel Space was a voxel raster graphics rendering engine invented by NovaLogic developer and vice-president of technology, Kyle Freeman. The company was issued a patent for the technology in early 2000. History The original Voxel Space engine was patented in 1996, and first released in software in the 1992 release Comanche: Maximum Overkill. The engine was then revamped into Voxel Space 2 (which supports the use of polygons as well as voxels, and was used in Comanche 3 and Armored Fist 2), and later Voxel Space 32, used in Armored Fist 3 and Delta Force 2. Based on Kyle Freeman's experience with voxels in medical-imaging technologies used in CT scan and MRI scanners, similar technology was used in games such as Outcast. With the advance of computing power in modern computers, browser-based versions of similar technology, based on the Voxel Space terrain rendering used in Comanche, now exist. The version of the engine used in the Comanche series traced a ray through the volumetric terrain data for every pixel. Versions List of games The technology was used in a number of commercial game titles: References
========================================
[SOURCE: https://en.wikipedia.org/wiki/CPU_register] | [TOKENS: 852]
Contents Processor register A processor register is a quickly accessible location available to a computer's processor. Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions and may be read-only or write-only. In computer architecture, registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address, as on the DEC PDP-10 and ICT 1900. Almost all computers, whether load/store architecture or not, load items of data from a larger memory into registers, where they are used for arithmetic operations, bitwise operations, and other operations, and are manipulated or tested by machine instructions. Manipulated items are then often stored back to main memory, either by the same instruction or by a subsequent one. Modern processors use either static or dynamic random-access memory (RAM) as main memory, with the latter usually accessed via one or more cache levels. Processor registers are normally at the top of the memory hierarchy and provide the fastest way to access data. The term normally refers only to the group of registers that are directly encoded as part of an instruction, as defined by the instruction set. However, modern high-performance CPUs often have duplicates of these "architectural registers" in order to improve performance via register renaming, allowing parallel and speculative execution. Modern x86 design acquired these techniques around 1995 with the releases of the Pentium Pro, Cyrix 6x86, Nx586, and AMD K5.

When a computer program accesses the same data repeatedly, this is called locality of reference. Holding frequently used values in registers can be critical to a program's performance. Register allocation is performed either by a compiler in the code generation phase, or manually by an assembly language programmer.

Size Registers are normally measured by the number of bits they can hold, for example, an 8-bit register, 32-bit register, 64-bit register, 128-bit register, or more. In some instruction sets, the registers can operate in various modes, breaking down their storage into smaller parts (a 32-bit register into four 8-bit ones, for instance) to which multiple data (a vector, or one-dimensional array of data) can be loaded and operated upon at the same time. Typically this is implemented by adding extra registers that map their memory onto a larger register. Processors that can execute single instructions on multiple data are called vector processors.

Types A processor often contains several kinds of registers, which can be classified according to the types of values they can store or the instructions that operate on them. Hardware registers are similar, but occur outside CPUs. In some architectures (such as SPARC and MIPS), the first or last register in the integer register file is a pseudo-register in that it is hardwired to always return zero when read (mostly to simplify indexing modes), and it cannot be overwritten. In Alpha, this is also done for the floating-point register file. As a result, register files are commonly quoted as having one more register than are actually usable; for example, 32 registers are quoted when only 31 of them fit the above definition of a register.

Examples The following table shows the number of registers in several mainstream CPU architectures.
Although the listed architectures are all different, almost all follow the basic arrangement known as the von Neumann architecture, first proposed by the Hungarian-American mathematician John von Neumann. It is also noteworthy that the number of registers on GPUs is much higher than on CPUs.

Usage The number of registers available on a processor, and the operations that can be performed using those registers, have a significant impact on the efficiency of code generated by optimizing compilers. The Strahler number of an expression tree gives the minimum number of registers required to evaluate that expression tree. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Barycentric_coordinates] | [TOKENS: 12959]
Contents Barycentric coordinate system In geometry, a barycentric coordinate system is a coordinate system in which the location of a point is specified by reference to a simplex (a triangle for points in a plane, a tetrahedron for points in three-dimensional space, etc.). The barycentric coordinates of a point can be interpreted as masses placed at the vertices of the simplex, such that the point is the center of mass (or barycenter) of these masses. These masses can be zero or negative; they are all positive if and only if the point is strictly inside the simplex. Every point has barycentric coordinates, and their sum is never zero. Two tuples of barycentric coordinates specify the same point if and only if they are proportional; that is to say, if one tuple can be obtained by multiplying the elements of the other tuple by the same non-zero number. Therefore, barycentric coordinates are either considered to be defined up to multiplication by a nonzero constant, or normalized so as to sum to unity.

Barycentric coordinates were introduced by August Möbius in 1827. They are special homogeneous coordinates. Barycentric coordinates are strongly related to Cartesian coordinates and, more generally, to affine coordinates (see Affine space § Relationship between barycentric and affine coordinates). Barycentric coordinates are particularly useful in triangle geometry for studying properties that do not depend on the angles of the triangle, such as Ceva's theorem, Routh's theorem, and Menelaus's theorem. In computer-aided design, they are useful for defining some kinds of Bézier surfaces.

Definition Let A_0, …, A_n be n + 1 points in a Euclidean space, a flat, or an affine space A of dimension n that are affinely independent; this means that there is no affine subspace of dimension n − 1 that contains all the points, or, equivalently, that the points define a simplex. Given any point P in A, there are scalars a_0, …, a_n that are not all zero, such that

    (a_0 + ⋯ + a_n) vec(OP) = a_0 vec(OA_0) + ⋯ + a_n vec(OA_n)

for any point O. (As usual, the notation vec(AB) represents the translation vector, or free vector, that maps the point A to the point B.) The elements of an (n + 1)-tuple (a_0 : … : a_n) that satisfies this equation are called barycentric coordinates of P with respect to A_0, …, A_n. The use of colons in the notation of the tuple means that barycentric coordinates are a sort of homogeneous coordinates; that is, the point is not changed if all coordinates are multiplied by the same nonzero constant. Moreover, the barycentric coordinates are also not changed if the auxiliary point O, the origin, is changed.

The barycentric coordinates of a point are unique up to scaling. That is, two tuples (a_0 : … : a_n) and (b_0 : … : b_n) are barycentric coordinates of the same point if and only if there is a nonzero scalar λ such that b_i = λ a_i for every i.
In some contexts, it is useful to constrain the barycentric coordinates of a point so that they are unique. This is usually achieved by imposing the condition Σ a_i = 1, or equivalently by dividing every a_i by the sum of all of them. These specific barycentric coordinates are called normalized or absolute barycentric coordinates. Sometimes they are also called affine coordinates, although this term commonly refers to a slightly different concept. Sometimes it is the normalized barycentric coordinates that are called barycentric coordinates; in this case, the coordinates defined above are called homogeneous barycentric coordinates.

With the above notation, the homogeneous barycentric coordinates of A_i are all zero, except the one of index i. When working over the real numbers (the above definition is also used for affine spaces over an arbitrary field), the points whose normalized barycentric coordinates are all nonnegative form the convex hull of {A_0, …, A_n}, which is the simplex that has these points as its vertices.

With the above notation, a tuple (a_0, …, a_n) such that a_0 + ⋯ + a_n = 0 does not define any point, but the vector a_0 vec(OA_0) + ⋯ + a_n vec(OA_n) is independent of the origin O. As the direction of this vector is not changed if all a_i are multiplied by the same scalar, the homogeneous tuple (a_0 : … : a_n) defines a direction of lines, that is, a point at infinity. See below for more details.

Relationship with Cartesian or affine coordinates Barycentric coordinates are strongly related to Cartesian coordinates and, more generally, affine coordinates. For a space of dimension n, these coordinate systems are defined relative to a point O, the origin, whose coordinates are zero, and n points A_1, …, A_n, whose coordinates are zero except that of index i, which equals one. A point has coordinates (x_1, …, x_n) in such a coordinate system if and only if its normalized barycentric coordinates are (1 − x_1 − ⋯ − x_n, x_1, …, x_n) relative to the points O, A_1, …, A_n.

The main advantage of barycentric coordinate systems is that they are symmetric with respect to the n + 1 defining points. They are therefore often useful for studying properties that are symmetric with respect to n + 1 points. On the other hand, distances and angles are difficult to express in general barycentric coordinate systems, and when they are involved, it is generally simpler to use a Cartesian coordinate system.

Relationship with projective coordinates Homogeneous barycentric coordinates are also strongly related to some projective coordinates. However, this relationship is more subtle than in the case of affine coordinates and, to be clearly understood, requires a coordinate-free definition of the projective completion of an affine space, and a definition of a projective frame. The projective completion of an affine space of dimension n is a projective space of the same dimension that contains the affine space as the complement of a hyperplane.
The projective completion is unique up to an isomorphism. The hyperplane is called the hyperplane at infinity, and its points are the points at infinity of the affine space. Given a projective space of dimension n, a projective frame is an ordered set of n + 2 points that are not contained in the same hyperplane. A projective frame defines a projective coordinate system such that the coordinates of the (n + 2)th point of the frame are all equal, and, otherwise, all coordinates of the ith point are zero, except the ith one.

When constructing the projective completion from an affine coordinate system, one commonly defines it with respect to a projective frame consisting of the intersections with the hyperplane at infinity of the coordinate axes, the origin of the affine space, and the point that has all its affine coordinates equal to one. This implies that the points at infinity have their last coordinate equal to zero, and that the projective coordinates of a point of the affine space are obtained by completing its affine coordinates by one as the (n + 1)th coordinate.

When one has n + 1 points in an affine space that define a barycentric coordinate system, this is another projective frame of the projective completion that is convenient to choose. This frame consists of these points and their centroid, that is, the point that has all its barycentric coordinates equal. In this case, the homogeneous barycentric coordinates of a point in the affine space are the same as the projective coordinates of this point. A point is at infinity if and only if the sum of its coordinates is zero. This point is in the direction of the vector defined at the end of § Definition.

Barycentric coordinates on triangles In the context of a triangle, barycentric coordinates are also known as area coordinates or areal coordinates, because the coordinates of P with respect to triangle ABC are equivalent to the (signed) ratios of the areas of PBC, PCA and PAB to the area of the reference triangle ABC. Areal and trilinear coordinates are used for similar purposes in geometry. Barycentric or areal coordinates are extremely useful in engineering applications involving triangular subdomains. They often make analytic integrals easier to evaluate, and Gaussian quadrature tables are often presented in terms of area coordinates.

Consider a triangle ABC with vertices A = (a_1, a_2), B = (b_1, b_2), C = (c_1, c_2) in the x,y-plane ℝ². One may regard points in ℝ² as vectors, so it makes sense to add or subtract them and multiply them by scalars. Each triangle ABC has a signed area, or sarea, which is plus or minus its area:

    sarea(ABC) = ± area(ABC)

The sign is plus if the path from A to B to C and then back to A goes around the triangle in a counterclockwise direction; the sign is minus if the path goes around in a clockwise direction.
Let P be a point in the plane, and let (λ₁, λ₂, λ₃) be its normalized barycentric coordinates with respect to the triangle ABC, so that

P = λ₁A + λ₂B + λ₃C and λ₁ + λ₂ + λ₃ = 1.

Normalized barycentric coordinates (λ₁, λ₂, λ₃) are also called areal coordinates because they represent ratios of signed areas of triangles:

λ₁ = sarea(PBC) / sarea(ABC),  λ₂ = sarea(PCA) / sarea(ABC),  λ₃ = sarea(PAB) / sarea(ABC).

One may prove these ratio formulas based on the facts that a triangle is half of a parallelogram, and that the area of a parallelogram is easy to compute using a determinant. Specifically, let D = B + C − A. Then ABDC is a parallelogram, because its pairs of opposite sides, represented by the pairs of displacement vectors D − C = B − A and D − B = C − A, are parallel and congruent. Triangle ABC is half of the parallelogram ABDC, so twice its signed area is equal to the signed area of the parallelogram, which is given by the 2 × 2 determinant det(B − A, C − A) whose columns are the displacement vectors B − A and C − A:

2 sarea(ABC) = det(B − A, C − A).

Expanding the determinant, using its alternating and multilinear properties, one obtains

det(B − A, C − A) = det(B, C) − det(B, A) − det(A, C) + det(A, A) = det(A, B) + det(B, C) + det(C, A),

so

2 sarea(ABC) = det(A, B) + det(B, C) + det(C, A).

Similarly,

2 sarea(PBC) = det(P, B) + det(B, C) + det(C, P).

To obtain the ratio of these signed areas, express P in the second formula in terms of its barycentric coordinates:

2 sarea(PBC) = det(λ₁A + λ₂B + λ₃C, B) + det(B, C) + det(C, λ₁A + λ₂B + λ₃C)
             = λ₁ det(A, B) + (1 − λ₂ − λ₃) det(B, C) + λ₁ det(C, A).

The barycentric coordinates are normalized, so 1 = λ₁ + λ₂ + λ₃, hence λ₁ = 1 − λ₂ − λ₃. Plug that into the previous line to obtain

2 sarea(PBC) = λ₁ [det(A, B) + det(B, C) + det(C, A)] = λ₁ · 2 sarea(ABC).

Therefore λ₁ = sarea(PBC) / sarea(ABC). Similar calculations prove the other two formulas,

λ₂ = sarea(PCA) / sarea(ABC) and λ₃ = sarea(PAB) / sarea(ABC).

Trilinear coordinates (γ₁, γ₂, γ₃) of P are signed distances from P to the lines BC, AC, and AB, respectively. The sign of γ₁ is positive if P and A lie on the same side of BC, negative otherwise; the signs of γ₂ and γ₃ are assigned similarly. Let a = |BC|, b = |CA|, c = |AB|. Then

a γ₁ = ±2 sarea(PBC),  b γ₂ = ±2 sarea(PCA),  c γ₃ = ±2 sarea(PAB),

where, as above, sarea stands for signed area. All three signs are plus if triangle ABC is positively oriented, minus otherwise. The relations between trilinear and barycentric coordinates are obtained by substituting these formulas into the above formulas that express barycentric coordinates as ratios of areas.

Switching back and forth between barycentric coordinates and other coordinate systems makes some problems much easier to solve. Given a point r in a triangle's plane, one can obtain the barycentric coordinates λ₁, λ₂ and λ₃ from the Cartesian coordinates (x, y), or vice versa.
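Before turning to those conversions, here is a quick numerical check of the area-ratio formulas (a sketch; sarea is as in the previous snippet, and the triangle and point are arbitrary choices of ours):

    def sarea(A, B, C):
        # Signed area, as above: half of det(B - A, C - A).
        return 0.5 * ((B[0] - A[0]) * (C[1] - A[1]) - (C[0] - A[0]) * (B[1] - A[1]))

    A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
    P = (1.0, 1.0)

    S = sarea(A, B, C)
    # Cyclic permutations leave sarea unchanged, so sarea(A, P, C) = sarea(P, C, A), etc.
    lam = (sarea(P, B, C) / S, sarea(A, P, C) / S, sarea(A, B, P) / S)

    print(lam)                              # ~ (0.4167, 0.25, 0.3333)
    assert abs(sum(lam) - 1.0) < 1e-12      # areal coordinates are normalized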
We can write the Cartesian coordinates of the point r in terms of the Cartesian components of the triangle vertices r₁, r₂, r₃, where rᵢ = (xᵢ, yᵢ), and in terms of the barycentric coordinates of r as

x = λ₁x₁ + λ₂x₂ + λ₃x₃
y = λ₁y₁ + λ₂y₂ + λ₃y₃.

That is, the Cartesian coordinates of any point are a weighted average of the Cartesian coordinates of the triangle's vertices, with the weights being the point's barycentric coordinates summing to unity.

To find the reverse transformation, from Cartesian coordinates to barycentric coordinates, we first substitute λ₃ = 1 − λ₁ − λ₂ into the above to obtain

x = λ₁x₁ + λ₂x₂ + (1 − λ₁ − λ₂)x₃
y = λ₁y₁ + λ₂y₂ + (1 − λ₁ − λ₂)y₃.

Rearranging, this is

λ₁(x₁ − x₃) + λ₂(x₂ − x₃) + x₃ − x = 0
λ₁(y₁ − y₃) + λ₂(y₂ − y₃) + y₃ − y = 0.

This linear transformation may be written more succinctly as

T · λ = r − r₃,

where λ is the vector of the first two barycentric coordinates, r is the vector of Cartesian coordinates, and T is the matrix

T = ( x₁ − x₃   x₂ − x₃ )
    ( y₁ − y₃   y₂ − y₃ ).

Now the matrix T is invertible, since r₁ − r₃ and r₂ − r₃ are linearly independent (if this were not the case, then r₁, r₂, and r₃ would be collinear and would not form a triangle). Thus, we can rearrange the above equation to get

(λ₁, λ₂)ᵀ = T⁻¹ (r − r₃).

Finding the barycentric coordinates has thus been reduced to finding the 2×2 inverse matrix of T, an easy problem.
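The 2×2 procedure above takes only a few lines with NumPy. The following is a sketch under the notation of this section (the helper name cartesian_to_barycentric is ours); the explicit formulas below expand the same inverse:

    import numpy as np

    def cartesian_to_barycentric(r, r1, r2, r3):
        # Solve T (lam1, lam2)^T = r - r3, with T the matrix of vertex
        # differences; lam3 then follows from the normalization condition.
        T = np.column_stack((r1 - r3, r2 - r3))
        lam12 = np.linalg.solve(T, r - r3)
        return np.append(lam12, 1.0 - lam12.sum())

    r1, r2, r3 = np.array([0., 0.]), np.array([4., 0.]), np.array([0., 3.])
    lam = cartesian_to_barycentric(np.array([1., 1.]), r1, r2, r3)

    print(lam)   # ~ [0.4167 0.25 0.3333], matching the area ratios above
    # Round trip: the weighted average of the vertices recovers the point.
    assert np.allclose(lam[0]*r1 + lam[1]*r2 + lam[2]*r3, [1., 1.])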
Explicitly, the formulae for the barycentric coordinates of a point r in terms of its Cartesian coordinates (x, y) and in terms of the Cartesian coordinates of the triangle's vertices are

λ₁ = [(y₂ − y₃)(x − x₃) + (x₃ − x₂)(y − y₃)] / det(T)
   = [(y₂ − y₃)(x − x₃) + (x₃ − x₂)(y − y₃)] / [(y₂ − y₃)(x₁ − x₃) + (x₃ − x₂)(y₁ − y₃)]
   = [(r − r₃) × (r₂ − r₃)] / [(r₁ − r₃) × (r₂ − r₃)]

λ₂ = [(y₃ − y₁)(x − x₃) + (x₁ − x₃)(y − y₃)] / det(T)
   = [(y₃ − y₁)(x − x₃) + (x₁ − x₃)(y − y₃)] / [(y₂ − y₃)(x₁ − x₃) + (x₃ − x₂)(y₁ − y₃)]
   = [(r − r₃) × (r₃ − r₁)] / [(r₁ − r₃) × (r₂ − r₃)]

λ₃ = 1 − λ₁ − λ₂
   = 1 − [(r − r₃) × (r₂ − r₁)] / [(r₁ − r₃) × (r₂ − r₃)]
   = [(r − r₁) × (r₁ − r₂)] / [(r₁ − r₃) × (r₂ − r₃)],

where the cross product of two plane vectors denotes the scalar z-component of their three-dimensional cross product. To see why the last line holds, note the identity (r₁ − r₃) × (r₂ − r₃) = (r₃ − r₁) × (r₁ − r₂).

Another way to solve the conversion from Cartesian to barycentric coordinates is to write the relation in the matrix form R λ = r, with R = (r₁ | r₂ | r₃) the 2×3 matrix whose columns are the vertex coordinates and λ = (λ₁, λ₂, λ₃)ᵀ; componentwise,

x₁λ₁ + x₂λ₂ + x₃λ₃ = x
y₁λ₁ + y₂λ₂ + y₃λ₃ = y.

To get the unique normalized solution we need to add the condition λ₁ + λ₂ + λ₃ = 1.
The barycentric coordinates are thus the solution of the linear system

λ₁ + λ₂ + λ₃ = 1
x₁λ₁ + x₂λ₂ + x₃λ₃ = x
y₁λ₁ + y₂λ₂ + y₃λ₃ = y,

which is

λ₁ = [(x₂y₃ − x₃y₂) + (y₂ − y₃)x + (x₃ − x₂)y] / 2A
λ₂ = [(x₃y₁ − x₁y₃) + (y₃ − y₁)x + (x₁ − x₃)y] / 2A
λ₃ = [(x₁y₂ − x₂y₁) + (y₁ − y₂)x + (x₂ − x₁)y] / 2A,

where

2A = det(1|R) = x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)

is twice the signed area of the triangle. The area interpretation of the barycentric coordinates can be recovered by applying Cramer's rule to this linear system.

A point with trilinear coordinates x : y : z has barycentric coordinates ax : by : cz, where a, b, c are the side lengths of the triangle. Conversely, a point with barycentrics λ₁ : λ₂ : λ₃ has trilinears λ₁/a : λ₂/b : λ₃/c. The three sides a, b, c respectively have equations λ₁ = 0, λ₂ = 0, λ₃ = 0. The equation of a triangle's Euler line is

| λ₁     λ₂     λ₃    |
| 1      1      1     |  = 0.
| tan α  tan β  tan γ |

Using the previously given conversion between barycentric and trilinear coordinates, the various other equations given in Trilinear coordinates § Formulas can be rewritten in terms of barycentric coordinates.

The displacement vector of two normalized points P = (p₁, p₂, p₃) and Q = (q₁, q₂, q₃) is PQ = (q₁ − p₁, q₂ − p₂, q₃ − p₃). The distance d between P and Q, or the length of the displacement vector PQ = (x, y, z), is

d² = |PQ|² = −a²yz − b²zx − c²xy
   = ½ [x²(b² + c² − a²) + y²(c² + a² − b²) + z²(a² + b² − c²)],

where a, b, c are the side lengths of the triangle. The equivalence of the last two expressions follows from x + y + z = 0, which holds because

x + y + z = (q₁ − p₁) + (q₂ − p₂) + (q₃ − p₃) = (q₁ + q₂ + q₃) − (p₁ + p₂ + p₃) = 1 − 1 = 0.
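The distance formula can be verified numerically. The sketch below uses the 3-4-5 triangle from the earlier snippets, with barycentric coordinates taken from the values computed above (the variable names are ours):

    import numpy as np

    a, b, c = 5.0, 3.0, 4.0   # side lengths |BC|, |CA|, |AB| of the triangle below
    A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([0., 3.])

    lam_P = np.array([5/12, 1/4, 1/3])   # normalized barycentrics of P = (1, 1)
    lam_Q = np.array([1/3, 1/3, 1/3])    # normalized barycentrics of the centroid Q

    x, y, z = lam_Q - lam_P              # displacement vector; its entries sum to 0
    d2 = -a*a*y*z - b*b*z*x - c*c*x*y    # squared distance from the formula above

    P = lam_P[0]*A + lam_P[1]*B + lam_P[2]*C
    Q = (A + B + C) / 3.0
    assert np.isclose(d2, np.sum((Q - P)**2))   # agrees with the Euclidean distance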
The barycentric coordinates of a point can also be calculated from its distances dᵢ to the three triangle vertices, by solving the equation

( −c²   c²        b² − a² )        ( d_A² − d_B² )
( −b²   c² − a²   b²      )  λ  =  ( d_A² − d_C² ).
(  1    1         1       )        ( 1           )

Although barycentric coordinates are most commonly used to handle points inside a triangle, they can also be used to describe a point outside the triangle. If the point is not inside the triangle, we can still use the formulas above to compute the barycentric coordinates; however, since the point is outside, at least one of the coordinates will violate the original assumption that λ₁, λ₂, λ₃ ≥ 0. In fact, given any point in Cartesian coordinates, we can use this fact to determine where the point lies with respect to a triangle. If a point lies in the interior of the triangle, all of its barycentric coordinates lie in the open interval (0, 1). If a point lies on an edge of the triangle but not at a vertex, one of the area coordinates λ₁, λ₂, λ₃ (the one associated with the opposite vertex) is zero, while the other two lie in the open interval (0, 1). If the point lies on a vertex, the coordinate associated with that vertex equals 1 and the others equal zero. Finally, if the point lies outside the triangle, at least one coordinate is negative. Summarizing, r lies on an edge or corner of the triangle if 0 ≤ λᵢ ≤ 1 for all i in {1, 2, 3} and λᵢ = 0 for some i. In particular, if a point lies on the far side of the line through an edge, the barycentric coordinate associated with the vertex opposite that edge is negative.

If f(r₁), f(r₂), f(r₃) are known quantities but the values of f inside the triangle defined by r₁, r₂, r₃ are unknown, those values can be approximated using linear interpolation. Barycentric coordinates provide a convenient way to compute this interpolation. If r is a point inside the triangle with barycentric coordinates λ₁, λ₂, λ₃, then

f(r) ≈ λ₁ f(r₁) + λ₂ f(r₂) + λ₃ f(r₃).

In general, given any unstructured grid or polygon mesh, this kind of technique can be used to approximate the value of f at all points, as long as the function's value is known at all vertices of the mesh. In this case, we have many triangles, each corresponding to a different part of the space. To interpolate a function f at a point r, first a triangle must be found that contains r; the sketch below combines the containment test with the interpolation step.
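A sketch of both ideas at once, point location by coordinate signs and linear interpolation from vertex values (helper names and the tolerance are ours; because the interpolant is affine, a linear function of position is reproduced exactly, which is why the test returns f(1, 1) = 2):

    import numpy as np

    def locate(lam, eps=1e-12):
        # Classify a point from its normalized barycentric coordinates.
        if any(l < -eps for l in lam):
            return "outside"
        if any(abs(l) <= eps for l in lam):
            return "on boundary"    # an edge (one zero) or a vertex (two zeros)
        return "inside"

    def interpolate(lam, f1, f2, f3):
        # Linear interpolation: f(r) ~ lam1*f(r1) + lam2*f(r2) + lam3*f(r3).
        return lam[0]*f1 + lam[1]*f2 + lam[2]*f3

    lam = np.array([5/12, 1/4, 1/3])        # the point (1, 1) from the earlier sketches
    print(locate(lam))                      # -> inside
    print(interpolate(lam, 0.0, 4.0, 3.0))  # f(x, y) = x + y at the vertices -> 2.0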
To do so, r is transformed into the barycentric coordinates of each triangle. If some triangle is found such that the coordinates satisfy 0 ≤ λᵢ ≤ 1 for all i in {1, 2, 3}, then the point lies in that triangle or on its edge (as explained in the previous section). Then the value of f(r) can be interpolated as described above. These methods have many applications, such as the finite element method (FEM).

The integral of a function over the domain of the triangle can be annoying to compute in a Cartesian coordinate system: one generally has to split the triangle up into two halves, and great messiness follows. Instead, it is often easier to make a change of variables to any two barycentric coordinates, e.g. λ₁, λ₂. Under this change of variables,

∫_T f(r) dr = 2A ∫₀¹ ∫₀^(1−λ₂) f(λ₁r₁ + λ₂r₂ + (1 − λ₁ − λ₂)r₃) dλ₁ dλ₂,

where A is the area of the triangle. This result follows from the fact that a rectangle in barycentric coordinates corresponds to a quadrilateral in Cartesian coordinates, and the ratio of the areas of the corresponding shapes in the corresponding coordinate systems is given by 2A. Similarly, for integration over a tetrahedron, instead of breaking up the integral into two or three separate pieces, one could switch to 3D tetrahedral coordinates under the change of variables

∫_T f(r) dr = 6V ∫₀¹ ∫₀^(1−λ₃) ∫₀^(1−λ₂−λ₃) f(λ₁r₁ + λ₂r₂ + λ₃r₃ + (1 − λ₁ − λ₂ − λ₃)r₄) dλ₁ dλ₂ dλ₃,

where V is the volume of the tetrahedron. This approach can be generalized to higher dimensions, to integrate over any n-dimensional simplex.

In the homogeneous barycentric coordinate system defined with respect to a triangle ABC, the following statements about special points of ABC hold. The three vertices A, B, and C have coordinates

A = 1 : 0 : 0
B = 0 : 1 : 0
C = 0 : 0 : 1,

and in normalized coordinates the centroid is 1/3 : 1/3 : 1/3. If a, b, c are the edge lengths BC, CA, AB respectively, α, β, γ are the angle measures ∠CAB, ∠ABC, and ∠BCA respectively, and s is the semiperimeter of ABC, then the following statements about special points of ABC hold in addition.
The circumcenter has coordinates

sin 2α : sin 2β : sin 2γ
= 1 − cot β cot γ : 1 − cot γ cot α : 1 − cot α cot β
= a²(−a² + b² + c²) : b²(a² − b² + c²) : c²(a² + b² − c²).

The orthocenter has coordinates

tan α : tan β : tan γ
= a cos β cos γ : b cos γ cos α : c cos α cos β
= (a² + b² − c²)(a² − b² + c²) : (−a² + b² + c²)(a² + b² − c²) : (a² − b² + c²)(−a² + b² + c²).

The incenter has coordinates a : b : c = sin α : sin β : sin γ. The excenters have coordinates

J_A = −a : b : c
J_B = a : −b : c
J_C = a : b : −c.

The nine-point center has coordinates

a cos(β − γ) : b cos(γ − α) : c cos(α − β)
= 1 + cot β cot γ : 1 + cot γ cot α : 1 + cot α cot β
= a²(b² + c²) − (b² − c²)² : b²(c² + a²) − (c² − a²)² : c²(a² + b²) − (a² − b²)².

The Gergonne point has coordinates (s − b)(s − c) : (s − c)(s − a) : (s − a)(s − b), the Nagel point s − a : s − b : s − c, and the symmedian point a² : b² : c².

Barycentric coordinates on tetrahedra

Barycentric coordinates may be easily extended to three dimensions. The 3D simplex is a tetrahedron, a polyhedron having four triangular faces and four vertices. Once again, the four barycentric coordinates are defined so that the first vertex r₁ maps to barycentric coordinates λ = (1, 0, 0, 0), r₂ → (0, 1, 0, 0), etc.
This is again a linear transformation, and we may extend the procedure used for triangles above to find the barycentric coordinates of a point r with respect to a tetrahedron:

(λ₁, λ₂, λ₃)ᵀ = T⁻¹ (r − r₄),

where T is now the 3×3 matrix

T = ( x₁ − x₄   x₂ − x₄   x₃ − x₄ )
    ( y₁ − y₄   y₂ − y₄   y₃ − y₄ )
    ( z₁ − z₄   z₂ − z₄   z₃ − z₄ )

and λ₄ = 1 − λ₁ − λ₂ − λ₃, with the corresponding Cartesian coordinates

x = λ₁x₁ + λ₂x₂ + λ₃x₃ + (1 − λ₁ − λ₂ − λ₃)x₄
y = λ₁y₁ + λ₂y₂ + λ₃y₃ + (1 − λ₁ − λ₂ − λ₃)y₄
z = λ₁z₁ + λ₂z₂ + λ₃z₃ + (1 − λ₁ − λ₂ − λ₃)z₄.

Once again, the problem of finding the barycentric coordinates has been reduced to inverting a 3×3 matrix. 3D barycentric coordinates may be used to decide whether a point lies inside a tetrahedral volume, and to interpolate a function within a tetrahedral mesh, in a manner analogous to the 2D procedure. Tetrahedral meshes are often used in finite element analysis because the use of barycentric coordinates can greatly simplify 3D interpolation.

Generalized barycentric coordinates

Barycentric coordinates (λ₁, λ₂, ..., λ_k) of a point p ∈ ℝⁿ that are defined with respect to a finite set of k points x₁, x₂, ..., x_k ∈ ℝⁿ instead of a simplex are called generalized barycentric coordinates. For these, the equation

(λ₁ + λ₂ + ⋯ + λ_k) p = λ₁x₁ + λ₂x₂ + ⋯ + λ_k x_k

is still required to hold. Usually one uses normalized coordinates, λ₁ + λ₂ + ⋯ + λ_k = 1. As for the case of a simplex, the points with nonnegative normalized generalized coordinates (0 ≤ λᵢ ≤ 1) form the convex hull of x₁, ..., x_k. If there are more points than in a full simplex (k > n + 1), the generalized barycentric coordinates of a point are not unique, as the defining linear system (here for n = 2)

λ₁ + λ₂ + λ₃ + ⋯ = 1
x₁λ₁ + x₂λ₂ + x₃λ₃ + ⋯ = x
y₁λ₁ + y₂λ₂ + y₃λ₃ + ⋯ = y

is underdetermined. The simplest example is a quadrilateral in the plane. Various kinds of additional restrictions can be used to define unique barycentric coordinates; one such choice is shown in the sketch below.
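One concrete "additional restriction" is to take the minimum-norm solution of the underdetermined system, which a least-squares solver provides directly. A NumPy sketch with a unit square (k = 4 points in the plane, so k > n + 1; the example and this particular restriction are our choices, not a canonical one):

    import numpy as np

    pts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])   # a unit square
    p = np.array([0.5, 0.5])                                   # its center

    M = np.vstack((np.ones(len(pts)), pts.T))   # rows: the ones condition, x's, y's
    rhs = np.concatenate(([1.0], p))
    lam, *_ = np.linalg.lstsq(M, rhs, rcond=None)   # minimum-norm solution

    print(lam)                         # -> [0.25 0.25 0.25 0.25], by symmetry
    assert np.allclose(lam @ pts, p)   # the weighted average recovers p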
More abstractly, generalized barycentric coordinates express a convex polytope with n vertices, regardless of dimension, as the image of the standard (n − 1)-simplex, which has n vertices: the map Δ^(n−1) ↠ P is onto. The map is one-to-one if and only if the polytope is a simplex, in which case the map is an isomorphism; this corresponds to a point not having unique generalized barycentric coordinates except when P is a simplex.

Dual to generalized barycentric coordinates are slack variables, which measure by how much margin a point satisfies the linear constraints, and which give an embedding P ↪ (ℝ≥0)^f into the f-orthant, where f is the number of faces (dual to the vertices). This map is one-to-one (slack variables are uniquely determined) but not onto (not all combinations can be realized).

This use of the standard (n − 1)-simplex and f-orthant as standard objects that map to a polytope, or that a polytope maps into, should be contrasted with the use of the standard vector space Kⁿ as the standard object for vector spaces, and of the standard affine hyperplane {(x₀, …, xₙ) | ∑ xᵢ = 1} ⊂ K^(n+1) as the standard object for affine spaces. In those cases choosing a linear basis or affine basis provides an isomorphism, allowing all vector spaces and affine spaces to be thought of in terms of these standard spaces, rather than merely an onto or one-to-one map (not every polytope is a simplex). Further, the n-orthant is the standard object that maps to cones.

Generalized barycentric coordinates have applications in computer graphics and more specifically in geometric modelling. Often, a three-dimensional model can be approximated by a polyhedron such that the generalized barycentric coordinates with respect to that polyhedron have a geometric meaning. In this way, the processing of the model can be simplified by using these meaningful coordinates. Barycentric coordinates are also used in geophysics. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Polygon_normal] | [TOKENS: 4266]
Contents Normal (geometry)

In geometry, a normal is an object (e.g. a line, ray, or vector) that is perpendicular to a given object. For example, the normal line to a plane curve at a given point is the infinite straight line perpendicular to the tangent line to the curve at the point. A normal vector is a vector perpendicular to a given object at a particular point. A normal vector of length one is called a unit normal vector or normal direction. A curvature vector is a normal vector whose length is the curvature of the object. Multiplying a normal vector by −1 results in the opposite vector, which may be used for indicating sides (e.g., interior or exterior) or orientation (e.g., clockwise vs. counterclockwise, right handed vs. left handed).

In three-dimensional space, a surface normal, or simply normal, to a surface at point P is a vector perpendicular to the tangent plane of the surface at P. The vector field of normal directions to a surface is known as the Gauss map. The word "normal" is also used as an adjective: a line normal to a plane, the normal component of a force, etc. The concept of normality generalizes to orthogonality (right angles). The concept has been generalized to differentiable manifolds of arbitrary dimension embedded in a Euclidean space. The normal vector space or normal space of a manifold at point P is the set of vectors which are orthogonal to the tangent space at P. Normal vectors are of special interest in the case of smooth curves and smooth surfaces.

The normal is often used in 3D computer graphics (notice the singular, as only one normal will be defined) to determine a surface's orientation toward a light source for flat shading, or the orientation of each of the surface's corners (vertices) to mimic a curved surface with Phong shading. The foot of a normal at a point of interest Q (analogous to the foot of a perpendicular) can be defined at the point P on the surface where the normal vector contains Q. The normal distance of a point Q to a curve or to a surface is the Euclidean distance between Q and its foot P.

Normal to space curves

The normal direction to a space curve is N = R dT/ds, where R = κ⁻¹ is the radius of curvature (reciprocal curvature) and T = dr/ds is the tangent vector, expressed in terms of the curve position r and arc length s.

Normal to planes and polygons

For a convex polygon (such as a triangle), a surface normal can be calculated as the vector cross product of two (non-parallel) edges of the polygon. For a plane given by the general form plane equation ax + by + cz + d = 0, the vector n = (a, b, c) is a normal. For a plane whose equation is given in parametric form r(s, t) = r₀ + s p + t q, where r₀ is a point on the plane and p, q are non-parallel vectors pointing along the plane, a normal to the plane is a vector normal to both p and q, which can be found as the cross product n = p × q.
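A minimal sketch of the edge cross-product rule for a triangle (the helper name and example are ours):

    import numpy as np

    def triangle_normal(p0, p1, p2):
        # Unit normal via the cross product of two edges; the direction
        # follows the right-hand rule from the winding order p0 -> p1 -> p2.
        n = np.cross(p1 - p0, p2 - p0)
        return n / np.linalg.norm(n)

    p0 = np.array([0., 0., 0.])
    p1 = np.array([1., 0., 0.])
    p2 = np.array([0., 1., 0.])
    print(triangle_normal(p0, p1, p2))   # -> [0. 0. 1.] for this CCW winding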
Normal to general surfaces in 3D space

If a (possibly non-flat) surface S in 3D space ℝ³ is parameterized by a system of curvilinear coordinates r(s, t) = (x(s, t), y(s, t), z(s, t)), with s and t real variables, then a normal to S is by definition a normal to a tangent plane, given by the cross product of the partial derivatives

n = ∂r/∂s × ∂r/∂t.

If a surface S is given implicitly as the set of points (x, y, z) satisfying F(x, y, z) = 0, then a normal at a point (x, y, z) on the surface is given by the gradient

n = ∇F(x, y, z),

since the gradient at any point is perpendicular to the level set of S.

For a surface S in ℝ³ given as the graph of a function z = f(x, y), an upward-pointing normal can be found either from the parametrization r(x, y) = (x, y, f(x, y)), giving

n = ∂r/∂x × ∂r/∂y = (1, 0, ∂f/∂x) × (0, 1, ∂f/∂y) = (−∂f/∂x, −∂f/∂y, 1),

or more simply from its implicit form F(x, y, z) = z − f(x, y) = 0, giving

n = ∇F(x, y, z) = (−∂f/∂x, −∂f/∂y, 1).

Since a surface does not have a tangent plane at a singular point, it has no well-defined normal at that point: for example, the vertex of a cone. In general, it is possible to define a normal almost everywhere for a surface that is Lipschitz continuous.

The normal to a (hyper)surface is usually scaled to have unit length, but it does not have a unique direction, since its opposite is also a unit normal. For a surface which is the topological boundary of a set in three dimensions, one can distinguish between two normal orientations, the inward-pointing normal and the outward-pointing normal. For an oriented surface, the normal is usually determined by the right-hand rule or its analog in higher dimensions. If the normal is constructed as the cross product of tangent vectors (as described in the text above), it is a pseudovector.
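The graph-surface formula n = (−∂f/∂x, −∂f/∂y, 1) is easy to check numerically. A sketch using central finite differences (the step size and test function are our own choices):

    import numpy as np

    def graph_normal(f, x, y, h=1e-6):
        # Upward-pointing (unnormalized) normal of z = f(x, y) at (x, y).
        fx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # ~ df/dx
        fy = (f(x, y + h) - f(x, y - h)) / (2 * h)   # ~ df/dy
        return np.array([-fx, -fy, 1.0])

    paraboloid = lambda x, y: x**2 + y**2
    print(graph_normal(paraboloid, 1.0, 2.0))   # ~ [-2. -4.  1.]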
When applying a transform to a surface, it is often useful to derive normals for the resulting surface from the original normals. Specifically, given a 3×3 transformation matrix M, we can determine the matrix W that transforms a vector n perpendicular to the tangent plane t into a vector n′ perpendicular to the transformed tangent plane M t, by the following logic. Write n′ as W n; we must find W. Then

W n is perpendicular to M t
  if and only if 0 = (W n) · (M t)
  if and only if 0 = (W n)ᵀ (M t)
  if and only if 0 = (nᵀ Wᵀ)(M t)
  if and only if 0 = nᵀ (Wᵀ M) t.

Choosing W such that Wᵀ M = I, or W = (M⁻¹)ᵀ, will satisfy the above equation, giving a W n perpendicular to M t, or an n′ perpendicular to t′, as required. Therefore, one should use the inverse transpose of the linear transformation when transforming surface normals. The inverse transpose is equal to the original matrix if the matrix is orthonormal, that is, purely rotational with no scaling or shearing.
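The inverse-transpose rule in code; the shear matrix is an arbitrary example of ours, chosen because a shear breaks perpendicularity when applied to normals directly:

    import numpy as np

    M = np.array([[1.0, 0.5, 0.0],    # a shear in x driven by y
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    W = np.linalg.inv(M).T            # the inverse transpose

    t = np.array([1.0, 0.0, 0.0])     # a tangent vector
    n = np.array([0.0, 1.0, 0.0])     # a normal to it: n . t == 0

    # W preserves perpendicularity to the transformed tangent; M does not.
    print(np.dot(W @ n, M @ t))       # -> 0.0
    print(np.dot(M @ n, M @ t))       # -> 0.5 (wrong if M were used for normals)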
Hypersurfaces in n-dimensional space

For an (n − 1)-dimensional hyperplane in n-dimensional space ℝⁿ given by its parametric representation

r(t₁, …, t_(n−1)) = p₀ + t₁v₁ + ⋯ + t_(n−1)v_(n−1),

where p₀ is a point on the hyperplane and vᵢ for i = 1, …, n − 1 are linearly independent vectors pointing along the hyperplane, a normal to the hyperplane is any vector n in the null space of the matrix V = [v₁ ⋯ v_(n−1)], meaning V n = 0. That is, any vector orthogonal to all in-plane vectors is by definition a surface normal. Alternatively, if the hyperplane is defined as the solution set of a single linear equation a₁x₁ + ⋯ + aₙxₙ = c, then the vector n = (a₁, …, aₙ) is a normal.

The definition of a normal to a surface in three-dimensional space can be extended to (n − 1)-dimensional hypersurfaces in ℝⁿ. A hypersurface may be locally defined implicitly as the set of points (x₁, x₂, …, xₙ) satisfying an equation F(x₁, x₂, …, xₙ) = 0, where F is a given scalar function. If F is continuously differentiable, then the hypersurface is a differentiable manifold in the neighbourhood of the points where the gradient is not zero. At these points a normal vector is given by the gradient

n = ∇F(x₁, x₂, …, xₙ) = (∂F/∂x₁, ∂F/∂x₂, …, ∂F/∂xₙ).

The normal line is the one-dimensional subspace with basis {n}.

A vector that is normal to the space spanned by the linearly independent vectors v₁, ..., v_(r−1) and falls within the r-dimensional space spanned by the linearly independent vectors v₁, ..., v_r is given by the r-th column of the matrix Λ = V(VᵀV)⁻¹, where the matrix V = (v₁, ..., v_r) is the juxtaposition of the r column vectors. (Proof: Λ is V times a matrix, so each column of Λ is a linear combination of the columns of V. Furthermore, VᵀΛ = I, so each column of V other than the last is perpendicular to the last column of Λ.) This formula works even when r is less than the dimension n of the Euclidean space. The formula simplifies to Λ = (Vᵀ)⁻¹ when r = n.
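The parenthetical claim about Λ = V(VᵀV)⁻¹ can be checked directly; the two vectors below are arbitrary examples of ours (r = 2 in ℝ³):

    import numpy as np

    # v1, ..., vr as columns; we want the r-th column of Lambda: a vector
    # perpendicular to v1, ..., v(r-1) that lies in span(v1, ..., vr).
    V = np.column_stack(([1.0, 0.0, 0.0],     # v1
                         [1.0, 1.0, 0.0]))    # v2
    Lam = V @ np.linalg.inv(V.T @ V)          # Lambda = V (V^T V)^-1

    normal = Lam[:, -1]
    print(normal)                  # -> [0. 1. 0.]
    print(V[:, 0] @ normal)        # -> 0.0: perpendicular to v1, as claimed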
Varieties defined by implicit equations in n-dimensional space

A differential variety defined by implicit equations in the n-dimensional space ℝⁿ is the set of the common zeros of a finite set of differentiable functions in n variables f₁(x₁, …, xₙ), …, f_k(x₁, …, xₙ). The Jacobian matrix of the variety is the k × n matrix whose i-th row is the gradient of fᵢ. By the implicit function theorem, the variety is a manifold in the neighborhood of a point where the Jacobian matrix has rank k. At such a point P, the normal vector space is the vector space generated by the values at P of the gradient vectors of the fᵢ. In other words, a variety is defined as the intersection of k hypersurfaces, and the normal vector space at a point is the vector space generated by the normal vectors of the hypersurfaces at the point. The normal (affine) space at a point P of the variety is the affine subspace passing through P and generated by the normal vector space at P. These definitions may be extended verbatim to the points where the variety is not a manifold.

For example, let V be the variety defined in the 3-dimensional space by the equations x y = 0, z = 0. This variety is the union of the x-axis and the y-axis. At a point (a, 0, 0), where a ≠ 0, the rows of the Jacobian matrix are (0, 0, 1) and (0, a, 0); thus the normal affine space is the plane of equation x = a. Similarly, if b ≠ 0, the normal plane at (0, b, 0) is the plane of equation y = b. At the point (0, 0, 0) the rows of the Jacobian matrix are (0, 0, 1) and (0, 0, 0); thus the normal vector space and the normal affine space have dimension 1, and the normal affine space is the z-axis.

Uses

Normal in geometric optics

The normal ray is the outward-pointing ray perpendicular to the surface of an optical medium at a given point. In reflection of light, the angle of incidence and the angle of reflection are respectively the angle between the normal and the incident ray (on the plane of incidence) and the angle between the normal and the reflected ray. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Duke_Nukem_3D] | [TOKENS: 5042]
Contents Duke Nukem 3D Duke Nukem 3D is a 1996 first-person shooter game developed by 3D Realms and published by FormGen for MS-DOS. It is a sequel to the platform games Duke Nukem and Duke Nukem II, published by 3D Realms. Duke Nukem 3D features the adventures of the titular Duke Nukem, voiced by Jon St. John, who fights against an alien invasion on Earth. Along with Wolfenstein 3D, Doom and Quake, Duke Nukem 3D is considered to be responsible for popularizing first-person shooters, and was released to major critical acclaim. Reviewers praised the interactivity of the environments, gameplay, level design, and unique risqué humor, a mix of pop-culture satire and lampooning of over-the-top Hollywood action heroes. However, it also incited controversy due to its violence, erotic elements, and portrayal of women. Since its release, Duke Nukem 3D has been cited as one of the greatest video games ever made. The shareware version was originally released on January 29, 1996, while the full version of the game was released on April 19, 1996.[citation needed] The Plutonium PAK, an expansion pack which updated the game to version 1.4 and added a fourth eleven-level episode, was released on October 21, 1996. The Atomic Edition, a standalone version of the game that included the content from the Plutonium PAK and updated the game to version 1.5, was released on December 11, 1996. A fifth episode was released on October 11, 2016, with 20th Anniversary World Tour published by Gearbox Software. A sequel, Duke Nukem Forever, was released in 2011. Gameplay As a first-person shooter whose gameplay is similar to Doom, the gameplay of Duke Nukem 3D involves moving through levels presented from the protagonist's point of view, shooting enemies on the way. The environments in Duke Nukem 3D are highly destructible and interactive; most props can be destroyed by the player. Levels were designed in a fairly non-linear manner such that players can advantageously use air ducts, back doors, and sewers to avoid enemies or find hidden caches. These locations are also filled with objects the player can interact with. Some confer gameplay benefits to the player; light switches make it easier to see, while water fountains and broken fire hydrants provide some health points. Others are simply there as a diversion. Tipping strippers provokes a quote from Duke, and a provocative reveal from the dancer. Duke's arsenal consists of the "Mighty Foot" (a basic kick attack), a pistol, a shotgun, a triple-barrelled chain gun, a rocket-propelled grenade launcher, pipe bombs, freezethrower and shrink rays, laser land mines, and the rapid-fire "Devastator" rocket launcher. The Atomic Edition version of the game also has an "Expander", the opposite of the shrink-ray weapon. Lastly, the 20th Anniversary World Tour version of the game also has an "Incinerator", the opposite of the freezethrower (with fiery projectiles instead of ice). Various items can be picked up during gameplay. The portable medkit allows players to heal Duke at will. Steroids speed up Duke's movement, as well as instantly reversing the effects of the shrink-ray weapon and increasing the strength of Duke's Mighty Foot for a short period. Night vision goggles allow players to see enemies in the dark. The "HoloDuke" device projects a hologram of Duke, which can be used to distract enemies. Protective boots allow Duke to cross dangerously hot or toxic terrain. 
In sections where progress requires more aquatic legwork, an aqua-lung allows Duke to take longer trips underwater. Duke's jet pack allows the player to move vertically and gain access to otherwise inaccessible areas. The game features a wide variety of enemies, some of which are aliens and others mutated humans. The LAPD have been turned into "Pig Cops", a play on the derogatory term "pig" for police officers, with LARD emblazoned on their uniforms. As is usual for a first-person shooter, Duke Nukem encounters a large number of lesser foes, as well as bosses, usually at the end of episodes. Like Duke, these enemies have access to a wide range of weapons and equipment, and some weaker enemies have jet packs. Plot Duke Nukem 3D is set on Earth "sometime in the early 21st century". The levels of Duke Nukem 3D take players outdoors and indoors through rendered street scenes, military bases, deserts, a flooded city, space stations, Moon bases, and a Japanese restaurant. The game contains several humorous references to pop culture. Some of Duke's lines are drawn from movies such as Aliens, Dirty Harry, Evil Dead II, Full Metal Jacket, Jaws, Pulp Fiction, and They Live; the captured women saying "Kill me" is a reference to Aliens. Players will encounter corpses of famous characters such as Luke Skywalker, Indiana Jones, Snake Plissken, the protagonist of Doom, and a smashed T-800. In the first episode, players navigate a tunnel in the wall of a prison cell hidden behind a poster, just like in The Shawshank Redemption. During the second episode, players can see a Monolith (from 2001: A Space Odyssey) on the Moon and use it as a teleport to complete the level. There is little narrative in the game, only a brief text prelude located under "Help" in the Main Menu, and a few cutscenes after the completion of an episode. The game picks up right after the events of Duke Nukem II, with Duke returning to Earth in his space cruiser. As Duke descends on Los Angeles in hopes of taking a vacation, his ship is shot down by unknown hostiles. While sending a distress signal, Duke learns that aliens are attacking Los Angeles and have mutated the LAPD. With his vacation plans now ruined, Duke hits the "eject" button, and vows to do whatever it takes to stop the alien invasion. In "Episode One: L.A. Meltdown", Duke fights his way through a dystopian Los Angeles. At a strip club, he is captured by pig-cops, but escapes the alien-controlled penitentiary and tracks down the alien cruiser responsible for the invasion in the San Andreas Fault. Duke confronts and kills an Alien Battlelord in the final level. Duke discovers that the aliens were capturing women, and detonates the ship. Levels in this episode include a movie theater, a red-light district, a prison, and a nuclear-waste disposal facility. In "Episode Two: Lunar Apocalypse", Duke journeys to space, where he finds many of the captured women held in various incubators throughout space stations that had been conquered by the aliens. Duke reaches the alien mother ship on the Moon and kills an alien Overlord. As Duke inspects the ship's computer, it is revealed that the plot to capture women was merely a ruse to distract him. The aliens have already begun their attack on Earth. In "Episode Three: Shrapnel City", Duke battles the massive alien presence through Los Angeles once again, and kills the leader of the alien menace: the Cycloid Emperor.
The game ends as Duke promises that after some "R&R", he will be "...ready for more action!", as an anonymous woman calls him back to bed. Levels in this episode include a sushi bar, a movie set, a subway, and a hotel. The story continues in the Atomic Edition. In "Episode Four: The Birth", it is revealed that the aliens used a captured woman to give birth to the Alien Queen, a creature which can quickly spawn deadly alien protector drones. Duke is dispatched back to Los Angeles to fight hordes of aliens, including the protector drones. Eventually, Duke finds the lair of the Alien Queen, and kills her, thus thwarting the alien plot. Levels in this episode include a fast-food restaurant ("Duke Burger"), a supermarket, a Disneyland parody called "Babe Land", a police station, the oil tanker Exxon Valdez, and Area 51. With the release of 20th Anniversary World Tour, the story progresses further. In "Episode Five: Alien World Order", Duke finds out that the aliens initiated a world-scale invasion, so he sets out to repel their attack on various countries. Duke proceeds to clear out aliens from Amsterdam, Moscow, London, San Francisco, Paris, the Giza pyramid complex, and Rome, with the final showdown with the returning alien threat taking place in Los Angeles, taking the game full circle. There, he defeats the Cycloid Incinerator, the current alien leader, stopping their threat for good. Development Duke Nukem 3D was developed on a budget of roughly $300,000. The development team consisted of 8 people for most of the development cycle, increasing to 12 or 13 people near the end. At one point, the game was being programmed to allow the player to switch between first-person view, third-person view, and fixed camera angles. Scott Miller of 3D Realms recalled that "with Duke 3D, unlike every shooter that came before, we wanted to have sort of real life locations like a cinema theatre, you know, strip club, bookstores..." The game's development started in 1994. LameDuke is an early prototype of Duke Nukem 3D, which was released by 3D Realms as a "bonus" one year after the release of the official version. It has been released as is, with no support. LameDuke features four episodes: Mr. Caliber, Mission Cockroach, Suck Hole, and Hard Landing. Certain weapons were altered from the original versions and/or removed. The original official website was created by Jeffrey D. Erb and Mark Farish of Intersphere Communications Ltd. Release Duke Nukem 3D was ported to many consoles of the time. All of the ports featured some sort of new content. Lee Jackson's theme song "Grabbag" has elicited many covers and remixes over the years by both fans and professional musicians, including an officially sanctioned studio version by thrash metal band Megadeth. Another version of the song was recorded by Chris Kline in August 2005. 3D Realms featured it on the front page of their website and contracted with Kline to use it to promote their Xbox Live release of Duke Nukem 3D. Sales Duke Nukem 3D was a commercial hit, selling about 3.5 million copies. In the United States alone, it was the 12th best-selling computer game in the period from 1993 to 1999, with 950,000 units sold. NPD Techworld, a firm that tracked sales in the United States, reported 1.25 million units sold of Duke Nukem 3D by December 2002. Source ports Following the release of the Doom source code in 1997, players wanted a similar source code release from 3D Realms. 
The last major game to make use of the Duke Nukem 3D source code was TNT Team's World War II GI in 1999. Its programmer, Matthew Saettler, obtained permission from 3D Realms to expand the gameplay enhancements done on WWII GI to Duke Nukem 3D. EDuke was a semi-official branch of Duke Nukem 3D that was released as a patch as Duke Nukem 3D v2.0 for Atomic Edition users on July 28, 2000. It included a demo mod made by several beta testers. It focused primarily on enhancing the CON scripting language in ways which allowed those modifying the game to do much more with the system than originally possible. Though a further version was planned, it never made it out of beta. It was eventually cancelled due to programmer time constraints. About a month after the release of the Duke Nukem 3D source code, Blood project manager Matt Saettler released the source code for both EDuke v2.0 and EDuke v2.1, the test version of which would have eventually become the next EDuke release, under the GPL.[citation needed] The source code to the Duke Nukem 3D v1.5 executable, which uses the Build engine, was released as free software under the GPL-2.0-or-later license on April 1, 2003. The game content remains under a proprietary license. The game was quickly ported by enthusiasts to modern operating systems. The first Duke Nukem 3D port was from icculus.org. It is a cross-platform project that allows the game to be played on AmigaOS, AmigaOS 4, AROS, BeOS, FreeBSD, Linux, Mac OS X, MorphOS, Solaris, and Windows rather than MS-DOS. The icculus.org codebase would later be used as the base for several other ports, including Duke3d_32. Another popular early project is Jonathon Fowler's JFDuke3D, which, in December 2003, received backing from the original author of Build, programmer Ken Silverman. Fowler, in cooperation with Silverman, released a new version of JFDuke3D using Polymost, an OpenGL-enhanced renderer for Build which allows hardware acceleration and 3D model support along with 32-bit color high resolution textures. Another project based on JFDuke3D called xDuke, unrelated to the xDuke project based on Duke3d_w32, runs on the Xbox. Silverman has since helped Fowler with a large portion of other engine work, including updating the network code, and helping to maintain various other aspects of the engine.[citation needed] Development was semi-active between 2005 and 2020; since then, new versions are regularly published. While a few short-lived MS-DOS-based EDuke projects emerged, it was not until the release of EDuke32, an extended version of Duke3D incorporating variants of both Fowler's Microsoft Windows JFDuke3D code, and Saettler's EDuke code, by one of 3D Realms' forum moderators in late 2004, that EDuke's scripting extensions received community focus. Among the various enhancements, support for advanced shader model 3.0 based graphics was added to EDuke32 during late 2008-early 2009. In June 2008, thanks to significant porting contributions from the DOSBox team, EDuke32 became the only Duke Nukem 3D source port to compile and run natively on 64-bit Linux systems without the use of a 32-bit compatibility environment. On April 1, 2009, an OpenGL Shader Model 3.0 renderer was revealed to have been developed for EDuke32, named Polymer to distinguish from Ken Silverman's Polymost.[citation needed] It allows for much more modern effects such as dynamic lighting and normal mapping. Although Polymer is fully functional, it is technically incomplete and unoptimized, and is still in development. 
As of the fifth installment of the High Resolution Pack, released in 2011, the Polymer renderer is mandatory. In 2011, another significant development of EDuke32 was the introduction of true room over room (TROR), where sectors can be placed over other sectors, and can be seen at the same time. In practice, this allows for true three-dimensional level design that was previously impossible, although the base engine is still 2D. On December 18, 2012, the Chocolate Duke Nukem 3D source port was released. Inspired by Chocolate Doom, the primary goal was to refactor the code so developers could easily read and learn from it, as well as make it portable. In February 2013, a source code review article was published that described the internal working of the code. Reception All versions of the game have earned a positive aggregate score on GameRankings and Metacritic. The original release on MS-DOS holds an aggregate score of 89% on GameRankings and a score of 89/100 on Metacritic. The version released on Nintendo 64 holds an aggregate score of 74% on GameRankings and a score of 73/100 on Metacritic. The version released on Xbox 360 holds an aggregate score of 81% on GameRankings while it holds a score of 80/100 on Metacritic. The iOS version holds an aggregate score of 64% on GameRankings. Daniel Jevons of Maximum gave it five out of five stars, calling it "absolutely perfect in every respect." He particularly cited the game's speed and fluidity even on low-end PCs, imaginative weapons, varied and identifiable environments, true 3D level designs, and strong multiplayer mode. A Next Generation critic summarized: "Duke Nukem 3D has everything Doom doesn't, but it also doesn't leave out the stuff that made Doom a classic." He praised the imaginative weapons, long and complex single-player campaign, competitive multiplayer, built-in level editor, and parental lock. Reviewers paid a lot of attention to the sexual content within the game. Reception of this element varied: Tim Soete of GameSpot felt that it was "morally questionable", while the Game Revolution reviewer noted that it was "done in a tongue-in-cheek manner," and he was "not personally offended". GamingOnLinux reviewer Hamish Paul Wilson commented in a later retrospective how the game's "dark dystopian atmosphere filled with pornography and consumerist decadence" in his view helped to ground "the game's more outlandish and obscene moments in context", concluding that "in a world as perverse as this, someone like Duke becoming its hero seems almost inevitable." Next Generation reviewed the Macintosh version of the game and stated that "Though it took a year, the Mac port of Duke Nukem 3D is an impressive feat, both for the game's own features, and the quality of the port." The Saturn version also received generally positive reviews, with critics particularly praising the use of real-world settings for the levels and Duke's numerous one-liners. Reviewers were also generally impressed with how accurately it replicates the PC version. AllGame editor Colin Williamson highly praised the Sega Saturn port, referring to it as "one of the best versions" and that it was "probably one of the best console ports ever released." GamePro summarized that "All the gore, vulgarity, go-go dancers, and ultra-intense 3D combat action that made Duke Nukem [3D] excel on the PC are firmly intact in the Saturn version, making it one of the premier corridor shooters on the system." However, some complained at the limitations of this version's multiplayer. 
Dan Hsu of Electronic Gaming Monthly said it was unfortunate that it supports only two players instead of four, while Sega Saturn Magazine editor Rich Leadbetter complained that multiplayer was only supported through the Sega NetLink and not the Saturn link cable; since the NetLink was not being released in Europe, this effectively made the Saturn version single-player only for Europeans. The Nintendo 64 version was likewise positively received, with critics almost overwhelmingly praising the new weapons and polygonal explosions, though some said that the use of sprites for most enemies and objects made the game look outdated. While commenting that the deathmatch gameplay is less impressive than that of GoldenEye 007, critics also overwhelmingly applauded the port's multiplayer features. Next Generation stated that "The sound effects and music are solid, the levels are still interactive as heck, and it's never quite felt so good blasting enemies with a shotgun or blowing them to chunks with pipe bombs." GamePro opined that the censoring of sexual content from the port stripped the game of all uniqueness, but the vast majority of critics held that the censorship, though unfortunate, was not extensive enough to eliminate or even reduce Duke's distinctive personality. Peer Schneider of IGN called it "a better and much more intense shooter than Hexen and Doom 64, and currently the best N64 game with a two-player co-op mode. If you don't already own the PC or Saturn version of Duke, do yourself a favor and get it." Crispin Boyer of Electronic Gaming Monthly, while complaining that the large weapons obscure too much of the player's view in four-player mode, assessed that "You're not gonna find a better console version of Duke." The PlayStation console port met with more mixed reviews. GamePro and Tim Soete of GameSpot both found this conversion technically inferior, particularly criticizing its frame rate. Both also complained that the control configuration only provides three presets, with no option for custom configuration. Soete also found the game had become dated by the time this version was released, though he still recommended it for those who do not own a PC. IGN's Jay Boor gave it a more enthusiastic recommendation, saying it "plays exactly like its PC predecessor" and praising the PlayStation-exclusive levels and link cable support. Duke Nukem 3D was a finalist for CNET Gamecenter's 1996 "Best Action Game" award, which ultimately went to Quake. In 1996, Next Generation ranked it as the 35th top game of all time, calling it "for many, the game Quake should have been." In 1996, Computer Gaming World named Duke Nukem 3D #37 overall among the best games of all time and #13 among the "best ways to die in computer gaming". It won a 1996 Spotlight Award for Best Action Game. In 1998, PC Gamer declared it the 29th-best computer game ever released, and the editors called it "a gaming icon" and "an absolute blast". PC Gamer magazine's readers voted it #13 in its all-time top games poll. The editors of PC Gamer ranked it as the 12th top game of all time in 2001, citing the game's humor and pop-culture references, and as the 15th best game of all time in 2005. GamePro included it among the most important video games of all time. In 2009, IGN's Cam Shea ranked it ninth on his list of the top 10 Xbox Live Arcade games, stating that it was as fun as it was in its initial release, and praising the ability to rewind to any point before the player died. 
Duke Nukem 3D was attacked by some critics, who alleged that it promoted pornography and murder. In response to the criticism encountered, censored versions of the game were released in certain countries in order to avoid it being banned altogether. A similar censored version was carried at Wal-Mart retail stores in the United States. In Australia, the game was originally refused classification on release. 3D Realms repackaged the game with the parental lock feature permanently enabled, although a patch available on the 3D Realms website allowed the user to revert the game to its uncensored U.S. version. The OFLC then attempted to have the game pulled from the shelves, but it was discovered that the distributor had already notified them of the patch, and so the rating could not be surrendered; six months later, the game was reclassified and released uncensored with an MA15+ rating. In Germany, the BPjM placed the game on its "List B" ("List of Media Harmful to Young People") of video games, thus prohibiting its advertisement in public. However, it was not fully confiscated, meaning that an adult could still request to see the game and buy it. In 1999, Duke Nukem 3D was banned in Brazil, along with Doom and several other first-person shooters, after a rampage in and around a movie theater was supposedly inspired by the first level in the game. Despite such concerns from critics, legislators, and publishers, Scott Miller later recounted that 3D Realms saw very little negative feedback to the game's controversial elements from actual gamers or their parents. He pointed out that Duke Nukem 3D was appropriately rated "M" and had no real nudity, and speculated that this was enough to make it inoffensive to the general public. Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/68000] | [TOKENS: 7459]
Contents Motorola 68000 The Motorola 68000 (sometimes shortened to Motorola 68k or m68k and usually pronounced "sixty-eight-thousand") is a 16/32-bit complex instruction set computer (CISC) microprocessor, introduced in 1979 by Motorola Semiconductor Products Sector. The design implements a 32-bit instruction set, with 32-bit registers and a 16-bit internal data bus. The address bus is 24 bits and does not use memory segmentation, which made it easier to program for. Internally, it uses a 16-bit data arithmetic logic unit (ALU) and two 16-bit arithmetic units used mostly for addresses, and has a 16-bit external data bus. For this reason, Motorola termed it a 16/32-bit processor. As one of the first widely available processors with a 32-bit instruction set, large unsegmented address space, and relatively high speed for the era, the 68k was a popular design through the 1980s. It was widely used in a new generation of personal computers with graphical user interfaces, including the Macintosh 128K, Amiga, Atari ST, and X68000, as well as video game consoles such as the Sega Genesis/Mega Drive console. Several arcade game systems also used the 68000. Later processors in the Motorola 68000 series, beginning with the Motorola 68020, use full 32-bit ALUs and have full 32-bit address and data buses, speeding up 32-bit operations and allowing 32-bit addressing, rather than the 24-bit addressing of the 68000 and 68010 or the 31-bit addressing of the Motorola 68012. The original 68k is generally software forward-compatible with the rest of the line despite being limited to a 16-bit wide external bus. Although the processors are no longer in production, functionally identical clones and reproductions of the 68000 are still actively made. Development Motorola's first widely produced microprocessor was the 6800, introduced in early 1974 and available in quantity late that year. The company set itself the goal of selling 25,000 units by September 1976, a goal it did meet. Although a capable design, it was eclipsed by more powerful designs, such as the Zilog Z80, and less expensive designs, such as the MOS Technology 6502. By late 1976 sales were stagnant and the division was saved by an engine control project for General Motors and other work. By the time the 6800 was introduced, a small number of 16-bit designs had come to market. These were generally modeled on minicomputer platforms like the Data General Nova or PDP-11. Based on the semiconductor manufacturing processes of the era, these were often multi-chip solutions like the National Semiconductor IMP-16, or single-chip designs like the PACE that had issues with speed. With the sales prospects for the 6800 dimming, but still cash-flush from the engine control sales, in late 1976 Colin Crook, Operations Manager, began considering how to successfully win future sales. They were aware that Intel was working on a 16-bit extension of their 8080 series, which would emerge as the Intel 8086, and had heard rumors of a 16-bit Zilog Z80, which became the Z8000. These would use new design techniques that would eliminate the problems seen in earlier 16-bit systems. Motorola knew that if they launched a product similar to the 8086, within 10% of its capabilities, Intel would outperform them in the market. In order to compete, they set themselves the goal of being two times as powerful at the same cost, or one-half the cost with the same performance. Crook decided that they would attack the high end of the market with the most powerful processor available. 
Another 16-bit design would not do; theirs would have to be bigger, and that meant having some 32-bit features. Crook had decided on this approach by the end of 1976. Crook formed the Motorola Advanced Computer System on Silicon (MACSS) project to build the design and hired Tom Gunter to be its principal architect. Gunter began forming his team in January 1977. The performance goal was set at 1 million instructions per second (MIPS). They wanted the design to not only win back microcomputer vendors like Apple Computer and Tandy, but also minicomputer companies like NCR and AT&T. The team decided to abandon an attempt at backward compatibility with the 6800, as they felt the 8-bit designs were too limited to be the basis for new designs. The new system was influenced by the PDP-11, the most popular minicomputer design of the era. At the time, a key concept in minis was the orthogonal instruction set, in which every operation was allowed to work on any sort of data, and the PDP-11 was considered the canonical example of this concept. This resulted in many different instructions, which were minor variations that changed where the data came from. To feed the correct data into the internal units, MACSS made extensive use of microcode, essentially small programs in read-only memory that gathered up the required data, performed the operations, and then wrote out the results. This was common in mainframes and minis, but MACSS would be among the first to use this technique in a microprocessor. There was a large amount of support hardware for the 6800 that would remain useful, things like UARTs and similar interfacing systems. For this reason, the new design retained a bus protocol compatibility mode for existing 6800 peripheral devices. A chip with 32 data and 32 addressing pins would require 64 pins, plus more for power and other features. At the time, 64-pin dual in-line packages (DIPs) were "large, heavy-cost" systems and "just terrible", making that the largest they could consider. To make it fit, Crook selected a hybrid design, with a 32-bit instruction set architecture (ISA) but 16-bit components implementing it, like the arithmetic logic unit (ALU). The external interface was reduced to 16 data pins and 24 for addresses, allowing it all to fit in a 64-pin package. This became known as the "Texas Cockroach".[a] By the mid-1970s, Motorola's MOS design techniques had become less advanced than their competition, and their fabrication lines at times struggled with low yields. By the late 1970s, the company had entered a technology exchange program with Hitachi, dramatically improving their production capabilities. As part of this, a new fab named MOS-8 was built using the latest 5-inch wafer sizes and Intel's HMOS process with a 3.5 μm feature size. This was an investment aimed at catching the competition: even upstart semiconductor companies such as Zilog and MOS Technology had introduced CPUs fabricated on depletion-mode NMOS logic before Motorola did. In fact, Motorola may have substantially lagged contemporaries in phasing out enhancement mode and metal gate, with Gunter recollecting that the 68000 itself had to succeed despite initially adopting a metal-gate design. Though the point about playing catch-up is clear, this could not have been an entirely accurate summary, because Motorola's 1976 datasheets, predating the inception of the MACSS project, list the majority of its 6800 family as silicon-gate parts. 
Indeed, Gunter's own 1979 article introducing the 68000 highlighted it as a silicon-gate depletion-mode HMOS design. Whatever the degree of Motorola's process and manufacturing deficits in the early days, the team was undeterred and would not compromise in its pursuit of a microprocessor with industry-leading performance. Formally introduced in September 1979, initial samples were released in February 1980, with production chips available over the counter in November. Initial speed grades were 4, 6, and 8 MHz. 10 MHz chips became available during 1981, and 12.5 MHz chips by June 1982. The 16.67 MHz "12F" version of the MC68000, the fastest version of the original HMOS chip, was not produced until the late 1980s. Along with initial chip sampling, Motorola offered a development board, the Motorola MC68000 Design Module (or 68KDM or just KDM). The board featured a 68000 microprocessor with 32 KB of DRAM. It had two 16-bit peripheral interface adapter ports on the ejector edge, and a male edge connector for the older Motorola 6800 EXORciser bus at the opposite end, allowing it to be used with standard 6800 emulators. One asynchronous communication port (ACIA) was configured to connect with a standard RS-232C data terminal, and the other was configured to emulate an RS-232C terminal, enabling communication with a host computer. A transparent mode allowed a program created on a host computer to be directly downloaded into the 68000 memory. Contents of the 68000's memory could also be uploaded into the host computer for further processing. The KDM board came with socketed ROM chips that contained Motorola's new MacsBug debugger. A switch on the KDM board activated the debugger. MacsBug was later used in Apple's early Macintosh computers. By the start of 1981, the 68k was winning orders in the high end, and Gunter began to approach Apple to win their business. At that time, the 68k sold for about $125 in quantity. In meetings with Steve Jobs, Jobs talked about using the 68k in the Apple Lisa, but stated "the real future is in this product that I'm personally doing. If you want this business, you got to commit that you'll sell it for $15." Motorola countered by offering to sell it at $55 at first, then step down to $35, and so on. Jobs agreed, and the Macintosh moved from the 6809 to the 68k. The average price eventually reached $14.76. Variants In 1982, the 68000 received a minor update to its instruction set architecture (ISA) to support virtual memory and to conform to the Popek and Goldberg virtualization requirements. The updated chip is called the 68010. It also adds a new "loop mode" which speeds up small loops, and increases overall performance by about 10% at the same clock speeds. A further extended version, which exposes 31 bits of the address bus, was also produced in small quantities as the 68012. To support lower-cost systems and control applications with smaller memory sizes, Motorola introduced the 8-bit compatible 68008, also in 1982. This is a 68000 with an 8-bit data bus and a smaller (20-bit) address bus. After 1982, Motorola devoted more attention to the 68020 and 88000 projects. Several other companies were second-source manufacturers of the HMOS 68000. These included Hitachi (HD68000), who shrank the feature size to 2.7 μm for their 12.5 MHz version, Mostek (MK68000), Rockwell (R68000), Signetics (SCN68000), Thomson/SGS-Thomson (originally EF68000 and later TS68000), and Toshiba (TMP68000). Toshiba was also a second-source maker of the CMOS 68HC000 (TMP68HC000). 
Encrypted variants of the 68000, such as the Hitachi FD1089 and FD1094, store decryption keys for opcodes and opcode data in battery-backed memory. These were used in certain Sega arcade systems (including System 16 games) to prevent piracy and illegal bootleg games. The 68HC000, the first CMOS version of the 68000, was designed by Hitachi and jointly introduced in 1985. Motorola offered it as the MC68HC000, while Hitachi offered it as the HD68HC000. The 68HC000 offers speeds of 8–20 MHz. Aside from using CMOS circuitry, it behaved identically to the HMOS 68000, but the change to CMOS greatly reduced its power consumption. The original HMOS 68000 consumed around 1.35 watts at an ambient temperature of 25 °C, regardless of clock speed, while the 68HC000 consumed only 0.13 watts at 8 MHz and 0.38 watts at 20 MHz. (Unlike CMOS circuits, HMOS still draws power when idle, so power consumption varies little with clock rate.) Apple selected the 68HC000 for use in the Macintosh Portable and PowerBook 100. Motorola replaced the 68008 with the 68HC001 in 1990. It resembles the 68HC000 in most respects, but its data bus can operate in either 16-bit or 8-bit mode, depending on the value of an input pin at reset. Thus, like the 68008, it can be used in systems with cheaper 8-bit memories. The later evolution of the 68000 focused on more modern embedded control applications and on-chip peripherals. The 68EC000 chip and SCM68000 core removed the M6800 peripheral bus and excluded the MOVE from SR instruction from user-mode programs, making the 68EC000 and 68SEC000 the only 68000 CPUs not 100% object code compatible with previous 68000 CPUs when run in user mode. When run in supervisor mode, there is no difference. The latter change was made so that the 68EC000 and SCM68000 could meet the Popek and Goldberg virtualization requirements; the same change had earlier been introduced on the 68010. In 1996, Motorola updated the standalone core with fully static circuitry, drawing only 2 μW in low-power mode; this became known as the 68SEC000. Motorola ceased production of the HMOS 68000 as well as the 68008, 68010, 68330, and 68340 on June 1, 1996, but its spin-off company Freescale Semiconductor (merged with NXP) was still producing the 68HC000, 68HC001, 68EC000 and 68SEC000, as well as the 68302 and 68306 microcontrollers and later versions of the DragonBall family. The 68000's architectural descendants, the 680x0, CPU32, and ColdFire families, were also still in production. More recently, with the Sendai fab closure in 2010, all 68HC000, 68020, 68030 and 68882 parts were discontinued, leaving only the 68SEC000 in production until it too was discontinued. The only 68000-based processors then remaining in production were the 68302 and other variants in the 683xx family, such as the 68331 and 68332 (which are derived from the 68000); the 68302 stopped production in 2025, leaving the 68331 and 68332 as the remaining members of the 683xx family in production. In 2024, Rochester Electronics was given a license by NXP to continue producing the 68HC000 alongside other members of the 68000 family; both the physical design and test programs were transferred from NXP to Rochester in order to continue providing an authorized source to the market. 
The 68HC000 processors provided by Rochester Electronics use a clone of the J82M mask set manufactured by Tohoku Semiconductor Corporation (TSC) in Japan at the TSC6 wafer fab, which was the last mask set used by Motorola for the 68HC000, replacing the earlier E72N and G73K mask sets manufactured in the United States. Since being succeeded by "true" 32-bit microprocessors, the 68000 has been used as the core of many microcontrollers. In 1989, Motorola introduced the 68302 communications processor. This processor was later supplied by Freescale and then NXP after Motorola spun off its semiconductor business in 2004. Applications IBM considered the 68000 for the IBM PC but chose the Intel 8088; however, IBM Instruments briefly sold the 68000-based IBM System 9000 laboratory computer systems. The 68k instruction set is particularly well suited to implement Unix, and the 68000 and its successors became the dominant CPUs for Unix-based workstations, including Sun workstations and Apollo/Domain workstations. In 1981, Motorola introduced the Motorola 68000 Educational Computer Board, a single-board computer for educational and training purposes which, in addition to the 68000 itself, contained memory, I/O devices, a programmable timer, and a wire-wrap area for custom circuitry. The board remained in use in US colleges as a tool for learning assembly programming until the early 1990s. At its introduction, the 68000 was first used in high-priced systems, including multiuser microcomputers like the WICAT 150, early Alpha Microsystems computers, Sage II / IV, Tandy 6000 / TRS-80 Model 16, and Fortune 32:16; single-user workstations such as Hewlett-Packard's HP 9000 Series 200 systems, the first Apollo/Domain systems, Sun Microsystems' Sun-1, and the Corvus Concept; and graphics terminals like Digital Equipment Corporation's VAXstation 100 and Silicon Graphics' IRIS 1000 and 1200. Unix systems rapidly moved to the more capable later generations of the 68k line, which remained popular in that market throughout the 1980s. By the mid-1980s, falling production cost made the 68000 viable for use in personal computers, starting with the Apple Lisa and Macintosh, and followed by the Amiga, Atari ST, and X68000. The Sinclair QL microcomputer, along with its derivatives, such as the ICL One Per Desk business terminal, was the most commercially important utilisation of the 68008. Helix Systems (in Missouri, United States) designed an extension to the SWTPC SS-50 bus, the SS-64, and produced systems built around the 68008 processor. 68000 and 68008 second processors were released for the BBC Micro in 1984 and 1985 respectively, and, according to Steve Furber, contributed to Acorn's development of the ARM. While the adoption of RISC and x86 displaced the 68000 series as a desktop/workstation CPU, the processor found substantial use in embedded applications. By the early 1990s, quantities of 68000 CPUs could be purchased for less than 30 USD per part.[citation needed] The 68000 also saw great success as an embedded controller. As early as 1981, laser printers such as the Imagen Imprint-10 were controlled by external boards equipped with the 68000. The first HP LaserJet, introduced in 1984, came with a built-in 8 MHz 68000. Other printer manufacturers adopted the 68000, including Apple with its introduction of the LaserWriter in 1985, the first PostScript laser printer. The 68000 continued to be widely used in printers throughout the rest of the 1980s, persisting well into the 1990s in low-end printers. 
The 68000 was successful in the field of industrial control systems. Among the systems that benefited from having a 68000 or derivative as their microprocessor were families of programmable logic controllers (PLCs) manufactured by Allen-Bradley, Texas Instruments and subsequently, following the acquisition of that division of TI, by Siemens. Users of such systems do not accept product obsolescence at the same rate as domestic users, and it is entirely likely that, despite having been installed over 20 years ago, many 68000-based controllers will continue in reliable service well into the 21st century. In a number of digital oscilloscopes from the 1980s, the 68000 was used as a waveform display processor; some models, including the LeCroy 9400/9400A, also use the 68000 as a waveform math processor (including addition, subtraction, multiplication, and division of two waveforms/references/waveform memories), and some digital oscilloscopes using the 68000 (including the 9400/9400A) can also perform fast Fourier transform functions on a waveform. The 683xx microcontrollers, based on the 68000 architecture, are used in networking and telecom equipment, television set-top boxes, laboratory and medical instruments, and even handheld calculators. The MC68302 and its derivatives have been used in many telecom products from Cisco, 3Com, Ascend, Marconi, Cyclades and others. Past models of the Palm PDAs and the Handspring Visor used the DragonBall, a derivative of the 68000. AlphaSmart used the DragonBall family in later versions of its portable word processors. Texas Instruments used the 68000 in its high-end graphing calculators, the TI-89 and TI-92 series and Voyage 200. A modified version of the 68000 formed the basis of the IBM XT/370 hardware emulator of the System 370 processor. Video game manufacturers used the 68000 as the backbone of many arcade games and home game consoles: Atari's Food Fight, from 1983, was one of the first 68000-based arcade games. Others included Sega's System 16, Capcom's CP System and CP System II, and SNK's Neo Geo. By the late 1980s, the 68000 was inexpensive enough to power home game consoles, such as Sega's Genesis console, and also the Sega CD attachment for it (a Sega CD system has three CPUs, two of them 68000s). The 68000 is also used as the main CPU of Sega's Pico, a young children's educational game console. The multi-processor Atari Jaguar console from 1993 used the 68000 as a support chip; however, due to familiarity, some developers used it as the primary processor. Sega's Saturn console from 1994 used the 68000 as a sound co-processor. In October 1995, the 68000 made it into Sega's Genesis Nomad, a handheld game console, as its CPU. Certain arcade games (such as Steel Gunner and others based on Namco System 2) use a dual 68000 CPU configuration, and systems with a triple 68000 CPU configuration also exist (such as Galaxy Force and others based on the Sega Y Board), along with a quad 68000 CPU configuration, which has been used by Jaleco (the 68000 used for sound runs at a lower clock rate compared to the other 68000 CPUs) for games such as Big Run and Cisco Heat; another, fifth 68000 (at a different clock rate than the other 68000 CPUs) was used in the Jaleco arcade game Wild Pilot for input/output (I/O) processing. Architecture The 68000 has a 24-bit external address bus and two byte-select signals that "replaced" A0. These 24 lines can therefore address 16 MB of physical memory with byte resolution; a short sketch of this decoding follows. 
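A minimal C sketch of the resulting address decoding (the helper names and flag constants here are invented for illustration; the "dirty pointer" flags anticipate the pre-7.0 Mac OS example discussed below):

#include <stdint.h>
#include <stdio.h>

/* Only the low 24 address bits reach the 68000's pins. */
#define ADDR_MASK_68000 0x00FFFFFFu

/* Hypothetical flags packed into the otherwise-unused high byte of a
   32-bit pointer, in the style of pre-7.0 Mac OS master pointers. */
#define FLAG_LOCKED    0x80000000u
#define FLAG_PURGEABLE 0x40000000u

static uint32_t effective_address_68000(uint32_t ptr)
{
    return ptr & ADDR_MASK_68000;   /* high byte silently ignored */
}

static uint32_t effective_address_68020(uint32_t ptr)
{
    return ptr;                     /* all 32 bits are decoded */
}

int main(void)
{
    uint32_t p = 0x00123456u | FLAG_LOCKED;   /* a "dirty" pointer */
    /* The same value addresses different memory on later CPUs: */
    printf("68000 sees %06X\n",  effective_address_68000(p)); /* 123456   */
    printf("68020 sees %08X\n",  effective_address_68020(p)); /* 80123456 */
    return 0;
}

The masking is why such software ran correctly on a 68000 yet failed on fully 32-bit successors, as the discussion below explains.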
Address storage and computation use 32 bits internally; however, the 8 high-order address bits are ignored due to the physical lack of device pins. This allows it to run software written for a logically flat 32-bit address space, while accessing only a 24-bit physical address space. Motorola's intent with the internal 32-bit address space was forward compatibility, making it feasible to write 68000 software that would take full advantage of later 32-bit implementations of the 68000 instruction set. However, this did not prevent programmers from writing forward incompatible software. "24-bit" software that discarded the upper address byte, or used it for purposes other than addressing, could fail on 32-bit 68000 implementations. For example, early (pre-7.0) versions of Apple's Mac OS used the high byte of memory-block master pointers to hold flags such as locked and purgeable. Later versions of the OS moved the flags to a nearby location, and Apple began shipping computers which had "32-bit clean" ROMs beginning with the release of the 1989 Mac IIci. The 68000 family stores multi-byte integers in memory in big-endian order. The CPU has eight 32-bit general-purpose data registers (D0–D7) and eight address registers (A0–A7). The last address register is the stack pointer, and assemblers accept the label SP as equivalent to A7. In many ways, this was a good number of registers for the time. It was small enough to allow the 68000 to respond quickly to interrupts (even in the worst case where all 8 data registers D0–D7 and 7 address registers A0–A6 needed to be saved, 15 registers in total), and yet large enough to make most calculations fast, because they could be done entirely within the processor without keeping any partial results in memory. (Note that an exception routine in supervisor mode can also save the user stack pointer A7, which would total 8 address registers. However, the dual stack pointer (A7 and supervisor-mode A7') design of the 68000 makes this normally unnecessary, except when a task switch is performed in a multitasking system.) Having the two types of registers allows the condition codes to remain unchanged when manipulating address registers. Also, by splitting the 16 registers into two types, a register can be specified in an instruction with only three bits. The 68000 has a 16-bit status register. The upper 8 bits form the system byte, and modification of it is privileged. The lower 8 bits form the user byte, also known as the condition code register (CCR), and modification of it is not privileged. The 68000 comparison, arithmetic, and logic operations modify condition codes to record their results for use by later conditional jumps. The condition code bits are "carry" (C), "overflow" (V), "zero" (Z), "negative" (N) and "extend" (X). The "extend" (X) flag deserves special mention, because it is separate from the carry flag. This permits the extra bit from arithmetic, logic, and shift operations to be used for multiprecision arithmetic separately from the carry flag consumed by ordinary conditional tests. The designers attempted to make the assembly language orthogonal. That is, instructions are divided into operations and address modes, and almost all address modes are available for almost all instructions. There are 56 instructions and a minimum instruction size of 16 bits. Many instructions and addressing modes are longer, to include more address or mode bits. The CPU, and later the whole family, implements two levels of privilege. User mode gives access to everything except privileged instructions such as interrupt level controls. 
Supervisor privilege gives access to everything. An interrupt is always handled in supervisor mode. The supervisor bit is stored in the status register, and is visible to user programs. An advantage of this system is that the supervisor level has a separate stack pointer. This permits a multitasking system to use very small stacks for tasks, because the designers do not have to allocate the memory required to hold the stack frames of a maximum stack-up of interrupts. The CPU recognizes seven interrupt levels. Levels 1 through 7 are strictly prioritized. That is, a higher-numbered interrupt can always interrupt a lower-numbered interrupt. In the status register, a privileged instruction allows setting the current minimum interrupt level, blocking lower or equal priority interrupts. For example, if the interrupt level in the status register is set to 3, higher levels from 4 to 7 can cause an exception. Level 7 is a level-triggered non-maskable interrupt (NMI). Level 1 can be interrupted by any higher level. Level 0 means no interrupt. The level is stored in the status register, and is visible to user-level programs. Hardware interrupts are signalled to the CPU using three inputs that encode the highest pending interrupt priority. A separate encoder is usually required to encode the interrupts, though for systems that do not require more than three hardware interrupts it is possible to connect the interrupt signals directly to the encoded inputs at the cost of more software complexity. The interrupt controller can be as simple as a 74LS148 priority encoder, or may be part of a very large-scale integration (VLSI) peripheral chip such as the MC68901 Multi-Function Peripheral (used in the Atari ST range of computers and X68000), which also provides a UART, timer, and parallel I/O. The "exception table" (interrupt vector table) of interrupt vector addresses is fixed at addresses 0 through 1023, permitting 256 32-bit vectors. The first vector (RESET) consists of two entries: the starting stack address and the starting code address. Vectors 3 through 15 are used to report various errors: bus error, address error, illegal instruction, zero division, CHK and CHK2 vector, privilege violation (to block privilege escalation), and some reserved vectors that became line 1010 emulator, line 1111 emulator, and hardware breakpoint. Vector 24 starts the real interrupts: spurious interrupt (no hardware acknowledgement), and level 1 through level 7 autovectors, then the 16 TRAP vectors, then some more reserved vectors, then the user defined vectors (this numbering is summarized in the sketch below). Since the starting code address vector must always be valid on reset, systems commonly included some nonvolatile memory (e.g. ROM) starting at address zero to contain the vectors and bootstrap code. However, for a general purpose system it is desirable for the operating system to be able to change the vectors at runtime. This was often accomplished by either pointing the vectors in ROM to a jump table in RAM, or through use of bank switching to allow the ROM to be replaced by RAM at runtime. The 68000 does not meet the Popek and Goldberg virtualization requirements for full processor virtualization because it has a single unprivileged instruction, "MOVE from SR", which allows user-mode software read-only access to a small amount of privileged state. The 68EC000 and 68SEC000, which are later derivatives of the 68000, do meet the requirements, as the "MOVE from SR" instruction is privileged on them. The same change was introduced on the 68010 and later CPUs. 
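The vector numbering and interrupt-mask rule described above are compact enough to capture in a few lines of C (the constant and function names are invented for this sketch; the numbers come from the text):

#include <stdint.h>

/* 256 vectors x 4 bytes each = table at addresses 0..1023. */
enum {
    VEC_RESET_SP   = 0,   /* initial (supervisor) stack pointer  */
    VEC_RESET_PC   = 1,   /* initial program counter             */
    VEC_SPURIOUS   = 24,  /* start of the "real" interrupts      */
    VEC_AUTOVEC_L1 = 25,  /* level-1 autovector; level 7 is 31   */
    VEC_TRAP_0     = 32,  /* TRAP #0..#15 occupy vectors 32..47  */
};

/* Byte address of a vector's entry in the table at address 0. */
static inline uint32_t vector_address(unsigned vec)
{
    return vec * 4u;
}

/* Autovector number for a hardware interrupt level 1..7. */
static inline unsigned autovector(unsigned level)
{
    return VEC_SPURIOUS + level;
}

/* Is a pending interrupt taken, given the 3-bit mask in the
   status register? Level 7 is non-maskable. */
static inline int interrupt_taken(unsigned level, unsigned sr_mask)
{
    return level == 7u || level > sr_mask;
}

With the mask set to 3, interrupt_taken reports true only for levels 4 through 7, matching the example in the text.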
The 68000 is also unable to easily support virtual memory, which requires the ability to trap and recover from a failed memory access. The 68000 does provide a bus error exception which can be used to trap, but it does not save enough processor state to resume the faulted instruction once the operating system has handled the exception. Several companies did succeed in making 68000-based Unix workstations with virtual memory that worked by using two 68000 chips running in parallel on different phased clocks. When the "leading" 68000 encountered a bad memory access, extra hardware would interrupt the "main" 68000 to prevent it from also encountering the bad memory access. This interrupt routine would handle the virtual memory functions and restart the "leading" 68000 in the correct state to continue properly synchronized operation when the "main" 68000 returned from the interrupt. These problems were fixed in the next major revision of the 68k architecture with the release of the MC68010. The Bus Error and Address Error exceptions push a large amount of internal state onto the supervisor stack in order to facilitate recovery, and the "MOVE from SR" instruction was made privileged. A new unprivileged "MOVE from CCR" instruction is provided for use in its place by user mode software; an operating system can trap and emulate user mode "MOVE from SR" instructions if desired. Instruction set details The standard addressing modes are: data register direct; address register direct; address register indirect (plain, with post-increment, with pre-decrement, or with a 16-bit signed displacement); address register indirect with index register and an 8-bit signed displacement; program counter relative with displacement or with index; absolute short and long; and immediate. Plus: access to the status register, and, in later models, other special registers. Most instructions have variants that operate on 8-bit bytes, 16-bit words, and 32-bit longs; assembler languages use dot-letter suffixes ".b", ".w", and ".l" after the instruction mnemonic to indicate the variant. Like many CPUs of its era, the cycle timing of some instructions varied depending on the source operand(s). For example, the unsigned multiply instruction takes (38+2n) clock cycles to complete, where 'n' is equal to the number of bits set in the operand. To create a function that took a fixed cycle count required the addition of extra code after the multiply instruction. This would typically consume extra cycles for each bit that wasn't set in the original multiplication operand. Most instructions are dyadic; that is, the operation has a source and a destination, and the destination is changed. Notable instructions include MOVE (with its many addressing modes), MOVEM (move multiple registers), LEA and PEA (load and push effective address), the DBcc decrement-and-branch loop instructions, Scc (set a byte conditionally), and TAS (test-and-set, an atomic read-modify-write operation usable for multiprocessor synchronization). 68EC000 The 68EC000 is a low-cost version of the 68000 with a slightly different pinout, designed for embedded controller applications. The 68EC000 can have either an 8-bit or 16-bit data bus, switchable at reset. The processors also have some minor changes to conform to the Popek and Goldberg virtualization requirements, including making the MOVE from SR instruction privileged. The processors are available in a variety of speeds, including 8 and 16 MHz configurations, producing 2,100 and 4,376 Dhrystones respectively. These processors have no floating-point unit, and it is difficult to implement an FPU coprocessor (MC68881/2) with one because the EC series lacks necessary coprocessor instructions. The 68EC000 was used as a controller in many audio applications, including Ensoniq musical instruments and sound cards, where it was part of the MIDI synthesizer. On Ensoniq sound boards, the controller provided several advantages compared to competitors without a CPU on board. The processor allowed the board to be configured to perform various audio tasks, such as MPU-401 MIDI synthesis or MT-32 emulation, without the use of a terminate-and-stay-resident program. 
This improved software compatibility, lowered CPU usage, and eliminated host system memory usage. The Motorola 68EC000 core was later used in the m68k-based DragonBall processors from Motorola/Freescale. It also was used as a sound controller in the Sega Saturn game console and as a controller for the HP JetDirect Ethernet controller boards for the mid-1990s HP LaserJet printers. Example code The 68000 assembly code below is for a subroutine named strtolower, which copies a null-terminated string of 8-bit characters to a destination string, converting all alphabetic characters to lower case. The subroutine establishes a call frame using register A6 as the frame pointer. This kind of calling convention supports reentrant and recursive code and is typically used by languages like C and C++. The subroutine then retrieves the parameters passed to it (src and dst) from the stack. It then loops, reading an ASCII character (one byte) from the src string, checking whether it is a capital alphabetic character, and if so, converting it into a lower-case character, otherwise leaving it as it is, then writing the character into the dst string. Finally, it checks whether the character was a null character; if not, it repeats the loop, otherwise it restores the previous stack frame (and A6 register) and returns. Note that the string pointers (registers A0 and A1) are auto-incremented in each iteration of the loop; a C rendering of the same logic is sketched after this section. In contrast, the code below is for a stand-alone function that is kernel-independent even on the most restrictive version of AMS for the TI-89 series of calculators, with no values looked up in tables, files, or libraries when executing, no system calls, no exception processing, and only a minimal set of registers used, none of which need to be saved. It is valid for historical Julian dates from 1 March 1 AD, or for Gregorian ones. In less than two dozen operations it calculates a day number compatible with ISO 8601 when called with three inputs stored at their corresponding LOCATIONS: Notes See also References Further reading External links
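As referenced above, a rough C rendering of the strtolower logic (a sketch of the routine as described, not the original 68000 assembly; the post-incremented pointers mirror the (A0)+/(A1)+ addressing of the assembly version):

/* C equivalent of the described strtolower subroutine: copies the
   null-terminated string at src to dst, converting capital ASCII
   letters to lower case; the terminating null is copied as well. */
void strtolower(char *dst, const char *src)
{
    char c;
    do {
        c = *src++;                  /* read one ASCII character    */
        if (c >= 'A' && c <= 'Z')    /* capital alphabetic letter?  */
            c += 'a' - 'A';          /* convert to lower case       */
        *dst++ = c;                  /* write to destination string */
    } while (c != '\0');             /* stop once the null is copied */
}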
========================================
[SOURCE: https://en.wikipedia.org/wiki/Arcade_cabinet] | [TOKENS: 3331]
Contents Arcade cabinet An arcade cabinet, also known as an arcade machine or a coin-op cabinet or coin-op machine, is the housing within which an arcade game's electronic hardware resides. Most cabinets designed since the mid-1980s conform to the Japanese Amusement Machine Manufacturers Association (JAMMA) wiring standard. Some include additional connectors for features not included in the standard. Parts of an arcade cabinet Because arcade cabinets vary according to the games they were built for or contain, they may not possess all of the parts listed below: The sides of the arcade cabinet are usually decorated with brightly colored stickers or paint, representing the gameplay of its particular game. Types of cabinets There are many types of arcade cabinets, some being custom-made for a particular game; however, the most common are the upright, the cocktail or table, and the sit-down. Upright cabinets are the most common in North America, with their design heavily influenced by Computer Space and Pong. While the futuristic look of Computer Space's outer fiberglass cabinet did not carry forward, both games did establish separating parts of the arcade machine for the cathode-ray tube (CRT) display, the game controllers, and the computer logic areas. Atari had also placed the controls at a height suitable for most adult players to use, but close enough to the console's base to also allow children to play. Further, the cabinets were more compact than traditional electro-mechanical games and did not use flashing lights or other means to attract players. The side panels of Atari's Pong had a simple wood veneer finish, making it easier to market to non-arcade venues, such as hotels, country clubs, and cocktail bars. In the face of growing competition, Atari started to include cabinet art and attraction panels around 1973–1974, which soon became a standard practice. Arcade cabinets today are usually made of wood and metal, about six feet or two meters tall, with the control panel set perpendicular to the monitor at slightly above waist level. The monitor is housed inside the cabinet, at approximately eye level. The marquee is above it, and often overhangs it. In Computer Space, Pong and other early arcade games, the CRT was mounted 90 degrees from the ground, facing directly outward. Arcade game manufacturers began incorporating design principles from older electro-mechanical games by using CRTs mounted at a 45-degree angle, facing upward and away from the player but towards a one-way mirror that reflected the display to the player. Additional transparent overlays could be added between the mirror and the player's view to include additional images and colorize the black-and-white CRT output, as is the case in Boot Hill. Other games, like Warrior, used a one-sided mirror and included an illuminated background behind the mirror, so that the on-screen characters would appear to the players as if they were on that background. With the advent of color CRT displays, the need for the mirror was eliminated. The CRT was subsequently positioned at an angle permitting a typical adult player to look directly at the screen. Controls are most commonly a joystick for as many players as the game allows, plus action buttons and "player" buttons which serve the same purpose as the start button on console gamepads. Trackballs are sometimes used instead of joysticks, especially in games from the early 1980s. 
Spinners (knobs for turning, also called "paddle controls") are used to control game elements that move strictly horizontally or vertically, such as the paddles in Arkanoid and Pong. Games such as Robotron: 2084, Smash TV and Battlezone use double joysticks instead of action buttons. Some versions of the original Street Fighter had pressure-sensitive rubber pads instead of buttons. If an upright is housing a driving game, it may have a steering wheel and throttle pedal instead of a joystick and buttons. If the upright is housing a shooting game, it may have light guns attached to the front of the machine via durable cables. Some arcade machines had the monitor placed at the bottom of the cabinet with a mirror mounted at around 45 degrees above the screen facing the player. This was done to save space, as a large CRT monitor would otherwise poke out the back of the cabinet.[citation needed] To correct for the mirrored image, some games had an option to flip the video output using a DIP switch setting. Other genres of games such as Guitar Freaks feature controllers resembling musical instruments. Upright cabinet shape designs vary from the simplest symmetric perpendicular boxes, as with Star Trek, to complicated asymmetric forms. Games are typically for one or two players; however, games such as Gauntlet feature as many as four sets of controls. Cocktail cabinets are shaped like low, rectangular tables, with the controls usually set at either of the broad ends, or, though not as common, at the narrow ends, and the monitor inside the table, the screen facing upward. Two-player games housed in cocktails were usually alternating, each player taking turns. The monitor reverses its orientation (under game software control) for each player, so the game display is properly oriented for each player. This requires special programming of the cocktail versions of the game (usually set by DIP switches). The monitor's orientation is usually in player two's favor only in two-player games when it is player two's turn, and in player one's favor all other times. Simultaneous, four-player games that are built as a cocktail include Warlords, and others. In Japan, many games manufactured by Taito from the 1970s to the early 1980s have the cocktail versions prefixed by "T.T" in their titles (e.g. T.T Space Invaders). Cocktail cabinet versions were usually released alongside the upright version of the same game. They were relatively common in the 1980s, especially during the golden age of arcade video games; however, they have since lost popularity. Their main advantage over upright cabinets was their smaller size, making them seem less obtrusive, although requiring more floor space (more so by having players seated at each end). The top of the table was covered with a piece of tempered glass, making it convenient to set drinks on (hence the name); they were often seen in bars and pubs. Owing to the resemblance of their smooth plastic shells to hard candy, the plastic sit-down cabinets common in Japan are often known as "candy cabinets" by both arcade enthusiasts and people in the industry. They are also generally easier to clean and move than upright cabinets, but usually just as heavy, as most have 29" screens, as opposed to 20"–25". They are positioned so that the player can sit down on a chair or stool and play for extended periods. SNK sold many Neo-Geo MVS cabinets in this configuration, though most arcade games made in Japan that only use a joystick and buttons will come in a sit-down cabinet variety. 
In Japanese arcades, this type of cabinet is generally more prevalent than the upright kind, and they are usually lined up in uniform-looking rows. A variant of this, often referred to as "versus-style" cabinets, is designed to look like two cabinets facing each other, with two monitors and separate controls allowing two players to fight each other without having to share the same monitor and control area. Some newer cabinets can emulate these "versus-style" cabinets through networking. Deluxe cabinets (also known as DX cabinets in Japan) are most commonly used for games involving gambling, long stints of gaming (such as fighting games), or vehicles (such as flight simulators and racing games). These cabinets typically have equipment resembling the controls of a vehicle (though some of them are merely large cabinets with features such as a large screen or chairs). Driving games may have a bucket seat, foot pedals, a stick shift, and even an ignition, while flight simulators may have a flight yoke or joystick, and motorcycle games may have handlebars and a seat shaped like a full-size bike. Often, these cabinets are arranged side-by-side, to allow players to compete together. Sega is one of the biggest manufacturers of these kinds of cabinets, while Namco released Ridge Racer Full Scale, in which the player sits in a full-size Mazda MX-5 road car. A cockpit or environmental cabinet is a type of deluxe cabinet where the player sits inside the cabinet itself. It also typically has an enclosure. Examples of this can be seen on the Killer List of Videogames, including shooter games such as Star Fire, Missile Command, SubRoc-3D, Star Wars, Astron Belt, Sinistar and Discs of Tron, as well as racing games such as Monaco GP, Turbo and Pole Position. A number of cockpit or environmental cabinets incorporate hydraulic motion simulation, as covered in the section below. A motion simulator cabinet is a type of deluxe cabinet that is very elaborate, including hydraulics which move the player according to the action on screen. In Japan, they are known as "taikan" games, with "taikan" meaning "body sensation" in Japanese. Sega is particularly known for these kinds of cabinets, having manufactured various types of sit-down and cockpit motion cabinets since the 1980s. Namco was another major manufacturer of motion simulator cabinets. Motorbike racing games since Sega's Hang-On have had the player sit on and move a motorbike replica to control the in-game actions (like a motion controller). Driving games since Sega's Out Run have had hydraulic motion simulator sit-down cabinets, while hydraulic motion simulator cockpit cabinets have been used for space combat games such as Sega's Space Tactics (1981) and Galaxy Force, rail shooters such as Space Harrier and Thunder Blade, and combat flight simulators such as After Burner and G-LOC: Air Battle. One of the most sophisticated motion simulator cabinets is Sega's R360, which simulates the full 360-degree rotation of an aircraft. Mini or cabaret cabinets are similar forms of arcade cabinet but are intended for different markets. Modern mini cabinets are sold directly to consumers and are not intended for commercial operation. They are styled just like a standard upright cabinet, often with full art and marquees, but are scaled down to more easily fit in a home environment or be used by children. The older form of mini or cabaret cabinets were marketed for commercial use and are no longer made. 
They were often thinner as well as shorter, lacked side art, and had smaller marquees and monitors. This reduced their cost, reduced their weight, made them better suited to locations with less space, and also made them less conspicuous in darker environments. In place of side art they were often clad in faux wood grain vinyl. Countertop or bartop cabinets are usually only large enough to house their monitors and control panels. They are often used for trivia and gambling-type games and are usually found installed on bars or tables in pubs and restaurants. These cabinets often have touchscreen controls instead of traditional push-button controls. They are also fairly popular with home use, as they can be placed upon a table or countertop. Usually found in Japan, multi-screen machines have multiple screens interconnected to one system, sometimes with one big screen in the middle. These also often feature the dispensation of different types of cards, either a smartcard in order to save stats and progress or trading cards used in the game. Conversion kit An arcade conversion kit, also known as a software kit, is special equipment that can be installed into an arcade machine that changes the current game it plays into another one. For example, a conversion kit can be used to reconfigure an arcade machine designed to play one game so that it would play its sequel or update instead, such as from Street Fighter II: Champion Edition to Street Fighter II Turbo. Restoration Since arcade games are becoming increasingly popular as collectibles, an entire niche industry has sprung up focused on arcade cabinet restoration. There are many websites (both commercial and hobbyist) and newsgroups devoted to arcade cabinet restoration. They are full of tips and advice on restoring games to mint condition. Game cabinets were often used to host a variety of games over their lifetime. After a cabinet's initial game was removed and replaced with another, the cabinet's side art was frequently painted over (usually black) so that the cabinet would not misrepresent the game contained within. The side art was also painted over to hide damaged or faded artwork. Of course, hobbyists prefer cabinets with original artwork in the best possible condition. Since machines with good quality art are hard to find, one of the first tasks is stripping any old artwork or paint from the cabinet. This is done with conventional chemical paint strippers or by sanding (preferences vary). Normally, artwork that has been painted over cannot be preserved and is removed along with any covering paint. New paint can be applied in any manner preferred (roller, brush, spray). Paint used is often just conventional paint with a finish matching the cabinet's original paint. Many games had artwork that was silkscreened directly on the cabinets. Others used large decals for the side art. Some manufacturers produce replication artwork for popular classic games, each varying in quality. This side art can be applied over the new paint after it has dried. These appliques can be very large and must be carefully applied to avoid bubbles or wrinkles from developing. Spraying the surface with a slightly soapy water solution allows the artwork to be quickly repositioned if wrinkles or bubbles develop, as in window tinting applications. Acquiring these pieces is harder than installing them. Many hobbyists trade these items via newsgroups or sites such as eBay (the same is true for side art). 
As with side art, some replication art shops also produce replication artwork for these pieces that is indistinguishable from the original. Some even surpass the originals in quality. Once these pieces are acquired, they usually snap right into place. If the controls are worn and need replacing, replacements can be easily obtained for popular games. Rarer game controls are harder to come by, but some shops stock replacement controls for classic arcade games. Some shops manufacture controls that are more robust than the originals and fit a variety of machines. Installing them takes some experimentation for novices, but they are usually not too difficult to place. While both use the same basic type of tube, raster monitors are easier to service than vector monitors, as the support circuitry is very similar to that used in CRT televisions and computer monitors, and is typically easy to adjust for color and brightness. On the other hand, vector monitors can be challenging or very costly to service, and some can no longer be repaired due to certain parts having been discontinued years ago. Even finding a drop-in replacement for a vector monitor is a challenge today, as few were produced after their heyday in the early 1980s.[citation needed] CRT replacement is possible, but the process of transferring the deflection yoke and other parts from one tube neck to the other also means a long process of positioning and adjusting the parts on the CRT for proper performance, a job that may prove too challenging for the typical amateur arcade collector.[citation needed] On the other hand, it may be possible to retrofit other monitor technologies to emulate vector graphics. Some electronic components are stressed by the hot, cramped conditions inside a cabinet. Electrolytic capacitors dry out over time, and if a classic arcade cabinet is still using its original components, it may be near the end of its service life. A common step in refurbishing vintage electronics (of all types) is "recapping": replacing certain capacitors (and other parts) to restore, or ensure the continued safe operation of, the monitor and power supplies. Because of the capacity and voltage ratings of these parts, this work can be dangerous if not done properly, and should only be attempted by experienced hobbyists or professionals. If a monitor is broken, it may be easier to just source a drop-in replacement through coin-op machine distributors or parts suppliers. If a cabinet needs rewiring, some wiring kits are available over the Internet. An experienced hobbyist can usually solve most wiring problems through trial and error. Many cabinets are converted to be used to host a game other than the original. In these cases, if both games conform to the JAMMA standard, the process is simple. Other conversions can be more difficult, but some manufacturers such as Nintendo have produced kits to ease the conversion process (Nintendo manufactured kits to convert a cabinet from Classic wiring to VS. wiring). See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Texture_mapping#cite_note-16] | [TOKENS: 4408]
Texture mapping Texture mapping is a term used in computer graphics to describe how 2D images are projected onto 3D models. The most common variant is the UV unwrap, which can be described as an inverse paper cutout, where the surfaces of a 3D model are cut apart so that they can be unfolded into a 2D coordinate space (UV space).
Semantic Texture mapping can refer to (1) the task of unwrapping a 3D model (converting the surface of a 3D model into a 2D texture map), (2) applying a 2D texture map onto the surface of a 3D model, and (3) the 3D software algorithm that performs both tasks. A texture map refers to a 2D image ("texture") that adds visual detail to a 3D model. The image can be stored as a raster graphic. A texture that stores a specific property, such as bumpiness, reflectivity, or transparency, is named for that property, for example a bump map, specular map, or roughness map. The coordinate space that converts from a 3D model's 3D space into a 2D space for sampling from the texture map is variously called UV space, UV coordinates, or texture space.
Algorithm A simplified rendering algorithm works as follows: for each screen pixel covered by a primitive, texture coordinates are interpolated from those assigned to the primitive's vertices, and the texture map is sampled at the resulting coordinates to obtain the pixel's surface color (a minimal code sketch follows at the end of this overview).
History The original technique was pioneered by Edwin Catmull in 1974 as part of his doctoral thesis. Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, and occlusion mapping, along with many other variations on the technique (controlled by a materials system), have made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.
Texture maps A texture map is an image applied ("mapped") to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. Texture maps may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles. They may have one to three dimensions, although two dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources (which may be located in device memory) as buffers or surfaces, and may allow 'render to texture' for additional effects such as post-processing or environment mapping. Texture maps usually contain RGB color data (stored as direct color, compressed formats, or indexed color), and sometimes an additional channel for alpha blending (RGBA), especially for billboards and decal overlay textures. It is possible to use the alpha channel (which may be convenient to store in formats parsed by hardware) for other purposes such as specularity. Multiple texture maps (or channels) may be combined for control over specularity, normals, displacement, or subsurface scattering, e.g. for skin rendering. Multiple texture images may be combined in texture atlases or array textures to reduce state changes for modern hardware; they may be considered a modern evolution of tile map graphics. Modern hardware often supports cube map textures with multiple faces for environment mapping.
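As a concrete illustration of the simplified algorithm above, the following minimal Python sketch samples a raster texture with nearest-neighbour lookup. The texture representation (a list of rows of RGB tuples) and the function name are illustrative assumptions, not part of any particular API.

def sample_nearest(texture, u, v):
    # 'texture' is assumed to be a list of rows of (r, g, b) tuples.
    # u and v are texture coordinates in [0, 1].
    height = len(texture)
    width = len(texture[0])
    # Scale UV space to texel space and snap to the nearest texel.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# During rasterization, each pixel covered by a primitive receives
# interpolated (u, v) coordinates and is coloured with the texel:
#     colour = sample_nearest(texture, u, v)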
Texture maps may be acquired by scanning or digital photography, designed in image manipulation software such as GIMP or Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate). This may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package through UV unwrapping tools. It is also possible to associate a procedural transformation from 3D space to texture space with the material; this might be accomplished via planar projection or, alternatively, cylindrical or spherical mapping. More complex mappings may consider the distance along a surface to minimize distortion. These coordinates are interpolated across the faces of polygons to sample the texture map during rendering. Textures may be repeated or mirrored to extend a finite rectangular bitmap over a larger area, or they may have a one-to-one unique "injective" mapping from every piece of a surface (which is important for render mapping and light mapping, also known as baking).
Texture mapping maps the model surface (or screen space during rasterization) into texture space; in this space, the texture map is visible in its undistorted form. UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates. Some rendering techniques, such as subsurface scattering, may be performed approximately by texture-space operations.
Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Microtextures or detail textures are used to add higher-frequency details, and dirt maps add weathering and variation; this can greatly reduce the apparent periodicity of repeating textures. Modern graphics may use more than 10 layers, combined using shaders, for greater fidelity. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface (such as tree bark or rough concrete) that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in video games as graphics hardware has become powerful enough to accommodate it in real time.
The way that samples (e.g. pixels on the screen) are calculated from the texels (texture pixels) is governed by texture filtering. The cheapest method is nearest-neighbour interpolation, but bilinear interpolation and trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate falling outside the texture, it is either clamped or wrapped. Anisotropic filtering better eliminates directional artefacts when viewing textures from oblique angles. A sketch of bilinear filtering with both addressing modes follows below.
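The following Python sketch shows bilinear filtering together with the clamp and wrap addressing behaviour described above. The half-texel offset convention and the helper names are illustrative assumptions rather than any particular API's behaviour.

def address(i, size, mode):
    # Map an out-of-range texel index back into the texture:
    # wrap repeats the texture, clamp extends its edge texels.
    if mode == "wrap":
        return i % size
    return max(0, min(i, size - 1))

def sample_bilinear(texture, u, v, mode="clamp"):
    h, w = len(texture), len(texture[0])
    # Texel-space position, with texel centres at integer + 0.5.
    x = u * w - 0.5
    y = v * h - 0.5
    x0, y0 = int(x // 1), int(y // 1)
    fx, fy = x - x0, y - y0
    # Fetch the four neighbouring texels.
    t00 = texture[address(y0, h, mode)][address(x0, w, mode)]
    t10 = texture[address(y0, h, mode)][address(x0 + 1, w, mode)]
    t01 = texture[address(y0 + 1, h, mode)][address(x0, w, mode)]
    t11 = texture[address(y0 + 1, h, mode)][address(x0 + 1, w, mode)]
    # Blend horizontally, then vertically, per colour channel.
    top = [a * (1 - fx) + b * fx for a, b in zip(t00, t10)]
    bot = [a * (1 - fx) + b * fx for a, b in zip(t01, t11)]
    return tuple(a * (1 - fy) + b * fy for a, b in zip(top, bot))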
Texture streaming is a means of using data streams for textures, in which each texture is available in two or more resolutions, so that the engine can determine which version to load into memory and use based on the draw distance from the viewer and the amount of memory available for textures. Texture streaming allows a rendering engine to use low-resolution textures for objects far away from the viewer's camera, and resolve those into more detailed textures, read from a data source, as the point of view nears the objects.
As an optimization, it is possible to render detail from a complex, high-resolution model or expensive process (such as global illumination) into a surface texture (possibly on a low-resolution model). This technique is called baking (or render mapping) and is most commonly used for light maps, but may also be used to generate normal maps and displacement maps. Some computer games (e.g. Messiah) have used this technique. The original Quake software engine used on-the-fly baking to combine light maps and colour maps in a process called surface caching. Baking can be used as a form of level-of-detail generation, where a complex scene with many different elements and materials may be approximated by a single element with a single texture, which is then algorithmically reduced for lower rendering cost and fewer draw calls. It is also used to take high-detail models from 3D sculpting software and point cloud scanning and approximate them with meshes more suitable for realtime rendering.
Rasterisation algorithms Various techniques have evolved in software and hardware implementations. Each offers different trade-offs in precision, versatility, and performance.
Affine texture mapping linearly interpolates texture coordinates across a surface, making it the fastest form of texture mapping. Some software and hardware (such as the original PlayStation) project vertices in 3D space onto the screen during rendering and linearly interpolate the texture coordinates in screen space between them. This may be done by incrementing fixed-point UV coordinates or by an incremental error algorithm akin to Bresenham's line algorithm. For polygons that are not perpendicular to the viewer, this leads to noticeable distortion with perspective transformations (as shown in the figure: the checker box texture appears bent), especially for primitives near the camera. This distortion can be reduced by subdividing polygons into smaller ones. Using quad primitives for rectangular objects can look less incorrect than if those rectangles were split into triangles, but since interpolating four points adds complexity to the rasterization, most early implementations preferred triangles only. Some hardware, such as the forward texture mapping used by the Nvidia NV1, offered efficient quad primitives. With perspective correction, triangles become equivalent to quad primitives and this advantage disappears.
For rectangular objects that are at right angles to the viewer (like floors and walls), the perspective only needs to be corrected in one direction across the screen rather than both. The correct perspective mapping can be calculated at the left and right edges of the floor, and affine linear interpolation across that horizontal span will look correct, because every pixel along that line is at the same distance from the viewer (a sketch of this shortcut follows below).
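This shortcut can be expressed directly in code. In the hedged Python sketch below (the names and calling convention are illustrative), u/z, v/z, and 1/z are supplied at the two span edges; since 1/z is constant across a horizontal floor span, one divide per edge recovers exact coordinates and the interior can be stepped affinely without error.

def draw_floor_span(fb, y, x0, x1, uoz0, voz0, uoz1, voz1, ooz, sample):
    # uoz*, voz* are u/z and v/z at the left and right span edges;
    # ooz is the single 1/z shared by every pixel of the span.
    u0, v0 = uoz0 / ooz, voz0 / ooz   # one perspective divide per edge
    u1, v1 = uoz1 / ooz, voz1 / ooz
    n = max(x1 - x0, 1)
    for i in range(x1 - x0):
        t = i / n
        # Affine stepping is exact here because depth is constant.
        u = u0 + (u1 - u0) * t
        v = v0 + (v1 - v0) * t
        fb[y][x0 + i] = sample(u, v)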
Perspective correct texturing accounts for the vertices' positions in 3D space rather than simply interpolating coordinates in 2D screen space. While achieving the correct visual effect, it is more expensive to calculate. To perform perspective correction of the texture coordinates $u$ and $v$, with $z$ being the depth component from the viewer's point of view, one can take advantage of the fact that the values $\frac{1}{z}$, $\frac{u}{z}$, and $\frac{v}{z}$ are linear in screen space across the surface being textured, whereas the original $z$, $u$, and $v$, before the division, are not. It is therefore possible to linearly interpolate these reciprocals across the surface and compute corrected values at each pixel, producing a perspective correct texture mapping. To do this, the reciprocals are first calculated at each vertex of the geometry (three points for a triangle): vertex $n$ carries $\frac{u_n}{z_n}$, $\frac{v_n}{z_n}$, and $\frac{1}{z_n}$. These reciprocals are then linearly interpolated between the vertices (e.g., using barycentric coordinates), yielding interpolated values $u_i$, $v_i$, and $\frac{1}{z_i}$ across the surface. However, because the division by $z$ altered the coordinate system, $u_i$ and $v_i$ cannot be used directly as texture coordinates. To correct back to $u, v$ space, the depth is recovered by taking the reciprocal once again, $z_{correct} = \frac{1}{1/z_i}$, and the coordinates are then corrected as $u_{correct} = u_i \cdot z_{correct}$ and $v_{correct} = v_i \cdot z_{correct}$. This correction makes the pixel-to-pixel difference between texture coordinates smaller in parts of the polygon that are closer to the viewer (stretching the texture wider) and larger in parts that are farther away (compressing the texture).
Affine texture mapping directly interpolates a texture coordinate $u_\alpha$ between two endpoints $u_0$ and $u_1$:
$u_\alpha = (1 - \alpha) u_0 + \alpha u_1$, where $0 \leq \alpha \leq 1$.
Perspective correct mapping instead interpolates after dividing by the depth $z$, then uses the interpolated reciprocal of $z$ to recover the correct coordinate:
$u_\alpha = \dfrac{(1 - \alpha)\frac{u_0}{z_0} + \alpha \frac{u_1}{z_1}}{(1 - \alpha)\frac{1}{z_0} + \alpha \frac{1}{z_1}}$
3D graphics hardware typically supports perspective correct texturing. Various techniques have evolved for rendering texture mapped geometry into images with different quality and precision trade-offs, which can be applied to both software and hardware. Classic software texture mappers generally performed only simple texture mapping with at most one lighting effect (typically applied through a lookup table), and perspective correctness was roughly 16 times as expensive as affine mapping.
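The two formulas above translate directly into code. A minimal Python sketch follows, with a worked value showing how affine interpolation drifts from the correct result when the endpoint depths differ.

def affine(u0, u1, a):
    # Linear interpolation in screen space; fast but distorts in depth.
    return (1 - a) * u0 + a * u1

def perspective_correct(u0, z0, u1, z1, a):
    # Interpolate u/z and 1/z, which are linear in screen space,
    # then divide to recover the true texture coordinate.
    num = (1 - a) * (u0 / z0) + a * (u1 / z1)
    den = (1 - a) * (1 / z0) + a * (1 / z1)
    return num / den

# Halfway across the span (a = 0.5) with z0 = 1 and z1 = 3, affine
# mapping returns u = 0.5, while the correct value is weighted toward
# the nearer endpoint, whose half of the surface covers more pixels:
print(affine(0.0, 1.0, 0.5))                          # 0.5
print(perspective_correct(0.0, 1.0, 1.0, 3.0, 0.5))   # 0.25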
The Doom engine restricted the world to vertical walls and horizontal floors and ceilings, with a camera that could only rotate about the vertical axis. This meant the walls had a constant depth coordinate along a vertical line and the floors and ceilings had a constant depth along a horizontal line. After performing one perspective correction calculation for the depth, the rest of the line could use fast affine mapping. Some later renderers of this era simulated a small amount of camera pitch with shearing, which allowed the appearance of greater freedom while using the same rendering technique. Some engines were able to render texture mapped heightmaps (e.g. Nova Logic's Voxel Space, and the engine for Outcast) via Bresenham-like incremental algorithms, producing the appearance of a texture mapped landscape without the use of traditional geometric primitives.
Every triangle can be further subdivided into groups of about 16 pixels in order to achieve two goals: keeping the arithmetic mill busy at all times and producing faster arithmetic results.[vague] For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering and affine mapping is used on them. This technique works because the distortion of affine mapping becomes much less noticeable on smaller polygons. The Sony PlayStation made extensive use of this approach because it supported only affine mapping in hardware and had a relatively high triangle throughput compared to its peers.
Software renderers generally prefer screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation), lessening the overhead further. Another reason is that 2D affine texture mapping does not fit into the low number of CPU registers of the x86 CPU; the 68000 and RISC processors are much better suited to it. A different approach was taken for Quake, which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the floating-point co-processor (a sketch of this span loop follows below). As the polygons are rendered independently, it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant $z$, but the effort seems not to be worth it.[original research?]
Another technique is to approximate the perspective with a faster calculation, such as a polynomial. A second technique uses the $\frac{1}{z}$ value of the last two drawn pixels to linearly extrapolate the next value; the division is then done starting from those values, so that only a small remainder has to be divided. However, the amount of bookkeeping required makes this method too slow on most systems.[citation needed] A third technique, used by the Build Engine (most notably in Duke Nukem 3D), builds on the constant-distance trick used by the Doom engine by finding and rendering along the line of constant distance for arbitrary polygons.
Texture mapping hardware was originally developed for simulation (e.g. as implemented in the Evans and Sutherland ESIG and the Singer-Link DIG (Digital Image Generator)), professional graphics workstations (such as those from Silicon Graphics), and broadcast digital video effects machines such as the Ampex ADO.
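The span loop below is a hedged Python sketch of the Quake-style approach; the structure is illustrative, and the real engine overlapped the divide with integer work on the FPU, which plain Python cannot express. It performs the perspective divide once per 16 pixels and interpolates affinely in between.

STEP = 16

def draw_span(fb, y, x0, x1, uoz, voz, ooz, duoz, dvoz, dooz, sample):
    # uoz, voz, ooz hold u/z, v/z, 1/z at the left end of the span;
    # duoz, dvoz, dooz are their per-pixel increments (all linear in
    # screen space, as explained above).
    u0, v0 = uoz / ooz, voz / ooz          # correct coordinates at start
    x = x0
    while x < x1:
        n = min(STEP, x1 - x)
        # Step the linear quantities n pixels ahead and divide once.
        uoz += duoz * n
        voz += dvoz * n
        ooz += dooz * n
        u1, v1 = uoz / ooz, voz / ooz      # correct coordinates at segment end
        for i in range(n):                 # cheap affine fill in between
            t = (i + 1) / n
            u = u0 + (u1 - u0) * t
            v = v0 + (v1 - v0) * t
            fb[y][x + i] = sample(u, v)
        u0, v0 = u1, v1
        x += n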
Texture mapping hardware later appeared in arcade cabinets, consumer video game consoles, and PC video cards in the mid-1990s. In flight simulation, texture mapping provided important motion and altitude cues necessary for pilot training that were not available on untextured surfaces. Texture mapping hardware also allowed prefiltered texture patterns stored in memory to be accessed by the video processor in real time. Modern graphics processing units (GPUs) provide specialised fixed-function units called texture samplers, or texture mapping units, to perform texture mapping, usually with trilinear filtering or better multi-tap anisotropic filtering, and hardware for decoding specific formats such as DXTn. As of 2016, texture mapping hardware is ubiquitous, as most SoCs contain a suitable GPU.
Some hardware implementations combine texture mapping with hidden-surface determination in tile-based deferred rendering or scanline rendering; such systems fetch only the visible texels, at the expense of using greater workspace for transformed vertices. Most systems have settled on the Z-buffering approach, which can still reduce the texture mapping workload with front-to-back sorting.
On earlier graphics hardware, there were two competing paradigms of how to deliver a texture to the screen: inverse texture mapping and forward texture mapping. Of these, inverse texture mapping has become the standard in modern hardware. With this method, a pixel on the screen is mapped to a point on the texture. Each vertex of a rendering primitive is projected to a point on the screen, and each of these points is mapped to a (u, v) texel coordinate on the texture. A rasterizer interpolates between these points to fill in each pixel covered by the primitive. The primary advantage of this method is that each pixel covered by a primitive is traversed exactly once: once a primitive's vertices are transformed, the amount of remaining work scales directly with how many pixels it covers on the screen. The main disadvantage is that the memory access pattern in texture space will not be linear if the texture is at an angle to the screen. This disadvantage is often addressed by texture caching techniques, such as the swizzled texture memory arrangement. The linear interpolation can be used directly for simple and efficient affine texture mapping, but can also be adapted for perspective correctness.
Forward texture mapping instead maps each texel of the texture to a pixel on the screen. After transforming a rectangular primitive to a place on the screen, a forward texture mapping renderer iterates through each texel on the texture, splatting each one onto a pixel of the frame buffer. This was used by some hardware, such as the 3DO, the Sega Saturn, and the NV1. The primary advantage is that the texture is accessed in a simple linear order, allowing very efficient caching of the texture data. However, this benefit is also its disadvantage: as a primitive gets smaller on screen, it still has to iterate over every texel in the texture, causing many pixels to be overdrawn redundantly. This method is also well suited to rendering quad primitives rather than reducing them to triangles, which provided an advantage when perspective correct texturing was not available in hardware, because the affine distortion of a quad looks less incorrect than the same quad split into two triangles (see the § Affine texture mapping section above).
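A hedged Python sketch of the forward approach follows; the transform callback and single-point splatting are simplifications (real hardware splatted filtered footprints rather than single pixels). It iterates over the texture in linear storage order and writes each texel to its screen position.

def forward_map(texture, fb, transform):
    # Visit texels in storage order: texture reads are perfectly linear
    # and cache friendly, regardless of the primitive's screen shape.
    for t, row in enumerate(texture):
        for s, texel in enumerate(row):
            x, y = transform(s, t)   # texel (s, t) -> screen (x, y)
            if 0 <= y < len(fb) and 0 <= x < len(fb[0]):
                fb[y][x] = texel
    # Trade-off: a primitive smaller than the texture overdraws the same
    # pixels many times, and one larger than the texture leaves holes
    # unless texels are splatted as areas rather than single points.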
The NV1 hardware also allowed a quadratic interpolation mode to provide an even better approximation of perspective correctness. UV mapping became an important technique for 3D modelling and assisted in clipping the texture correctly when the primitive went past the edge of the screen, but existing hardware did not provide effective implementations of it. These shortcomings could have been addressed with further development, but GPU design has since shifted mostly toward the inverse mapping technique.
Applications Beyond 3D rendering, the availability of texture mapping hardware has inspired its use for accelerating other tasks. For example, it is possible to use texture mapping hardware to accelerate both the reconstruction of voxel data sets from tomographic scans and the visualization of the results. Many user interfaces also use texture mapping to accelerate animated transitions of screen elements, e.g. Exposé in Mac OS X.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Scanline_rendering] | [TOKENS: 1212]
Scanline rendering Scanline rendering (also scan line rendering and scan-line rendering) is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear; then each row or scan line of the image is computed using the intersection of the scanline with the polygons at the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line advances down the picture.
The main advantage of this method is that sorting vertices along the normal of the scanning plane reduces the number of comparisons between edges. Another advantage is that it is not necessary to translate the coordinates of all vertices from main memory into working memory: only vertices defining edges that intersect the current scan line need to be in active memory, and each vertex is read in only once. Main memory is often very slow compared to the link between the central processing unit and cache memory, so avoiding re-accessing vertices in main memory can provide a substantial speedup. This kind of algorithm can be easily integrated with many other graphics techniques, such as the Phong reflection model or the Z-buffer algorithm.
Algorithm The usual method starts with the edges of projected polygons inserted into buckets, one per scanline; the rasterizer maintains an active edge table (AET). Entries maintain sort links, X coordinates, gradients, and references to the polygons they bound. To rasterize the next scanline, edges that are no longer relevant are removed; new edges from the current scanline's Y-bucket are added, inserted sorted by X coordinate. The active edge table entries then have their X and other parameters incremented; the entries are maintained in an X-sorted list, swapping entries when two edges cross. After updating the edges, the active edge table is traversed in X order to emit only the visible spans, maintaining a Z-sorted active span table and inserting and deleting surfaces as edges are crossed.[citation needed] A minimal sketch of the bucket and active-edge-table mechanics follows below.
Variants A hybrid between this and Z-buffering does away with the active edge table sorting and instead rasterizes one scanline at a time into a Z-buffer, maintaining active polygon spans from one scanline to the next. In another variant, an ID buffer is rasterized in an intermediate step, allowing deferred shading of the resulting visible pixels.
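The sketch below illustrates the bucket and active-edge-table mechanics for a single polygon in Python; the multi-surface Z-sorted span table is omitted, the data representation is an illustrative assumption, and edge cases at shared vertices are glossed over.

def scanline_fill(polygon, set_pixel, height):
    # polygon: list of (x, y) vertices with integer y in [0, height).
    # Bucket each non-horizontal edge at its top scanline.
    buckets = [[] for _ in range(height)]
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        if y0 == y1:
            continue                      # horizontal edges contribute no spans
        if y0 > y1:                       # orient each edge top-to-bottom
            x0, y0, x1, y1 = x1, y1, x0, y0
        gradient = (x1 - x0) / (y1 - y0)
        buckets[y0].append([x0, y1, gradient])   # [current X, bottom Y, dX/dY]
    active = []                           # the active edge table (AET)
    for y in range(height):
        active += buckets[y]              # take up edges starting here
        active = [e for e in active if e[1] > y]   # retire finished edges
        active.sort(key=lambda e: e[0])   # keep the AET X-sorted
        # Pair up crossings to emit spans (even-odd rule).
        for left, right in zip(active[0::2], active[1::2]):
            for x in range(round(left[0]), round(right[0])):
                set_pixel(x, y)
        for e in active:
            e[0] += e[2]                  # increment X by the gradient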
History The first publication of the scanline rendering technique was probably by Wylie, Romney, Evans, and Erdahl in 1967. Other early developments of the scanline rendering method were by Bouknight in 1969, and by Newell, Newell, and Sancha in 1972. Much of the early work on these methods was done in Ivan Sutherland's graphics group at the University of Utah, and at the Evans & Sutherland company in Salt Lake City.
Use in realtime rendering The early Evans & Sutherland ESIG line of image generators (IGs) employed the technique in hardware 'on the fly', generating images one raster line at a time without a framebuffer and saving the need for then-costly memory. Later variants used a hybrid approach. The Nintendo DS is the latest hardware to render 3D scenes in this manner, with the option of caching the rasterized images into VRAM. The sprite hardware prevalent in 1980s games machines can be considered a simple 2D form of scanline rendering. The technique was also used in the first Quake engine for software rendering of environments (moving objects were Z-buffered over the top). Static scenery used BSP-derived sorting for priority. It proved better than Z-buffer/painter's-type algorithms at handling scenes of high depth complexity with costly pixel operations (i.e. perspective-correct texture mapping without hardware assist). This use preceded the widespread adoption of Z-buffer-based GPUs now common in PCs. Sony experimented with software scanline renderers on a second Cell processor during the development of the PlayStation 3, before settling on a conventional CPU/GPU arrangement.
Similar techniques A similar principle is employed in tiled rendering (most famously the PowerVR 3D chip); that is, primitives are sorted into screen space, then rendered in fast on-chip memory, one tile at a time. The Dreamcast provided a mode for rasterizing one row of tiles at a time for direct raster scanout, saving the need for a complete framebuffer, somewhat in the spirit of hardware scanline rendering. Some software rasterizers use 'span buffering' (or 'coverage buffering'), in which a list of sorted, clipped spans is stored in scanline buckets; primitives are successively added to this data structure before only the visible pixels are rasterized in a final stage (a sketch of this clipping step follows at the end of this article).
Comparison with Z-buffer algorithm The main advantage of scanline rendering over Z-buffering is that visible pixels are processed the minimum number of times, which is exactly once when no transparency effects are used, a benefit in the case of high resolution or expensive shading computations. In modern Z-buffer systems, similar benefits can be gained through rough front-to-back sorting (approaching the 'reverse painter's algorithm'), early Z-reject (in conjunction with hierarchical Z), and less common deferred rendering techniques possible on programmable GPUs. Scanline techniques working on the raster have the drawback that overload is not handled gracefully. The technique is not considered to scale well as the number of primitives increases, because of the size of the intermediate data structures required during rendering, which can exceed the size of a Z-buffer for a complex scene. Consequently, in contemporary interactive graphics applications, the Z-buffer has become ubiquitous: it allows larger volumes of primitives to be traversed linearly, in parallel, in a manner friendly to modern hardware. Transformed coordinates, attribute gradients, etc., need never leave the graphics chip; only the visible pixels and depth values are stored.
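The following is a hedged Python sketch of the span buffering idea mentioned under Similar techniques above, assuming front-to-back submission so that spans already accepted into a scanline bucket occlude later arrivals; depth comparisons are omitted for brevity.

def insert_span(spans, start, end, surface):
    # spans: list of (start, end, surface) already accepted for this
    # scanline. Clip the new span against every existing one; only the
    # surviving fragments become newly visible pixels.
    fragments = [(start, end)]
    for s, e, _ in spans:
        clipped = []
        for fs, fe in fragments:
            if fe <= s or fs >= e:        # no overlap with this blocker
                clipped.append((fs, fe))
                continue
            if fs < s:                    # piece poking out on the left
                clipped.append((fs, s))
            if fe > e:                    # piece poking out on the right
                clipped.append((e, fe))
        fragments = clipped
    for fs, fe in fragments:
        spans.append((fs, fe, surface))
    spans.sort()

# After all primitives are inserted, each scanline holds disjoint spans
# and every visible pixel is shaded exactly once in the final pass.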
========================================
[SOURCE: https://en.wikipedia.org/wiki/Digital_video_effect] | [TOKENS: 258]
Digital video effect Digital video effects (DVEs) are visual effects that provide comprehensive live video image manipulation, in the same way that optical printer effects do in film. DVEs differ from standard video switcher effects (often referred to as analog effects), such as wipes or dissolves, in that they deal primarily with resizing, distortion, or movement of the image. Modern video switchers often contain internal DVE functionality, and modern DVE devices are incorporated in high-end broadcast video switchers. Early examples of DVE devices found in the broadcast post-production industry include the Ampex Digital Optics (ADO), Quantel DPE-5000, Vital Squeezoom, NEC E-Flex, and the Abekas A5x series of DVEs. By 1988, Grass Valley Group had caught up with the competition with their Kaleidoscope, which integrated ADO-type effects with their widely used line of broadcast switching gear. DVEs are used by the broadcast television industry in live television production environments such as television studios and outside broadcasts, and are commonly used in video post-production.
========================================
[SOURCE: https://en.wikipedia.org/wiki/DXTn] | [TOKENS: 2426]
S3 Texture Compression S3 Texture Compression (S3TC) (sometimes also called DXTn, DXTC, or BCn) is a group of related lossy texture compression algorithms originally developed by Iourcha et al. of S3 Graphics, Ltd. for use in their Savage 3D computer graphics accelerator. The method of compression is strikingly similar to the previously published Color Cell Compression, which is in turn an adaptation of Block Truncation Coding, published in the late 1970s. Unlike some image compression algorithms (e.g. JPEG), S3TC's fixed-rate data compression coupled with a single memory access (cf. Color Cell Compression and some VQ-based schemes) made it well suited to compressing textures in hardware-accelerated 3D computer graphics. Its subsequent inclusion in Microsoft's DirectX 6.0 and OpenGL 1.3 (via the GL_EXT_texture_compression_s3tc extension) led to widespread adoption of the technology among hardware and software makers. While S3 Graphics is no longer a competitor in the graphics accelerator market, license fees were levied and collected for the use of S3TC technology until October 2017, for example in game consoles and graphics cards. The wide use of S3TC led to a de facto requirement for OpenGL drivers to support it, but the patent-encumbered status of S3TC presented a major obstacle to open source implementations, although implementation approaches that tried to avoid the patented parts existed.
Patent Some of the multiple USPTO patents on S3 Texture Compression (e.g. US 5956431 A) expired on October 2, 2017. At least one continuation patent, US 6,775,417, however, had a 165-day extension; this continuation patent expired on March 16, 2018.
Codecs There are five variations of the S3TC algorithm (named DXT1 through DXT5, referring to the FourCC code assigned by Microsoft to each format), each designed for specific types of image data. All convert a 4×4 block of pixels to a 64-bit or 128-bit quantity, resulting in compression ratios of 6:1 with 24-bit RGB input data or 4:1 with 32-bit RGBA input data (a short arithmetic check follows below). S3TC is a lossy compression algorithm, resulting in image quality degradation, an effect which is minimized by the ability to increase texture resolutions while maintaining the same memory requirements. Hand-drawn cartoon-like images do not compress well, nor does normal map data, both of which usually generate artifacts. ATI's 3Dc compression algorithm is a modification of DXT5 designed to overcome S3TC's shortcomings with regard to normal maps. id Software worked around the normal-map compression issues in Doom 3 by moving the red component into the alpha channel before compression and moving it back during rendering in the pixel shader. Like many modern image compression algorithms, S3TC only specifies the method used to decompress images, allowing implementers to design the compression algorithm to suit their specific needs, although the patent still covered compression algorithms. The Nvidia GeForce 256 through GeForce 4 cards also used 16-bit interpolation to render DXT1 textures, which resulted in banding when unpacking textures with color gradients; this created an unfavorable impression of texture compression that was not related to the fundamentals of the codec itself.
DXT1 DXT1 (also known as Block Compression 1 or BC1) is the smallest variation of S3TC, storing 16 input pixels in 64 bits of output, consisting of two 16-bit RGB 5:6:5 color values $c_0$ and $c_1$, and a 4×4 two-bit lookup table.
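As a short arithmetic check (plain Python, purely illustrative), the DXT1 block layout just described accounts for exactly 64 bits, which yields the compression ratios quoted above.

# Two 16-bit colour endpoints plus sixteen 2-bit lookup entries:
block_bits = 2 * 16 + 16 * 2
print(block_bits)              # 64
print(16 * 24 / block_bits)    # 6.0 -> 6:1 versus raw 24-bit RGB
print(16 * 32 / 128)           # 4.0 -> 4:1 for the 128-bit formats versus RGBA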
If $c_0 > c_1$ (comparing these colors by interpreting them as two 16-bit unsigned numbers), then two other colors are calculated, such that for each component $c_2 = \frac{2}{3}c_0 + \frac{1}{3}c_1$ and $c_3 = \frac{1}{3}c_0 + \frac{2}{3}c_1$. This mode operates similarly to mode 0xC0 of the original Apple Video codec. Otherwise, if $c_0 \leq c_1$, then $c_2 = \frac{1}{2}c_0 + \frac{1}{2}c_1$ and $c_3$ is transparent black, corresponding to a premultiplied alpha format. This color sometimes causes a black border surrounding the transparent area when linear texture filtering and alpha testing are used, due to colors being interpolated between the color of an opaque texel and a neighbouring black transparent texel. The lookup table is then consulted to determine the color value for each pixel, with a value of 0 corresponding to $c_0$ and a value of 3 corresponding to $c_3$.
DXT2 and DXT3 DXT2 and DXT3 (collectively also known as Block Compression 2 or BC2) convert 16 input pixels (corresponding to a 4×4 pixel block) into 128 bits of output, consisting of 64 bits of alpha channel data (4 bits for each pixel) followed by 64 bits of color data, encoded the same way as DXT1 (with the exception that the 4-color version of the DXT1 algorithm is always used, rather than deciding which version to use based on the relative values of $c_0$ and $c_1$). In DXT2, the color data is interpreted as being premultiplied by alpha; in DXT3 it is interpreted as not having been premultiplied by alpha. DXT2/3 are typically well suited to images with sharp alpha transitions between translucent and opaque areas.
DXT4 and DXT5 DXT4 and DXT5 (collectively also known as Block Compression 3 or BC3) convert 16 input pixels into 128 bits of output, consisting of 64 bits of alpha channel data (two 8-bit alpha values and a 4×4 3-bit lookup table) followed by 64 bits of color data (encoded the same way as DXT1). If $\alpha_0 > \alpha_1$, then six other alpha values are calculated, such that $\alpha_2 = \frac{6\alpha_0 + 1\alpha_1}{7}$, $\alpha_3 = \frac{5\alpha_0 + 2\alpha_1}{7}$, $\alpha_4 = \frac{4\alpha_0 + 3\alpha_1}{7}$, $\alpha_5 = \frac{3\alpha_0 + 4\alpha_1}{7}$, $\alpha_6 = \frac{2\alpha_0 + 5\alpha_1}{7}$, and $\alpha_7 = \frac{1\alpha_0 + 6\alpha_1}{7}$. Otherwise, if $\alpha_0 \leq \alpha_1$, four other alpha values are calculated such that $\alpha_2 = \frac{4\alpha_0 + 1\alpha_1}{5}$, $\alpha_3 = \frac{3\alpha_0 + 2\alpha_1}{5}$, $\alpha_4 = \frac{2\alpha_0 + 3\alpha_1}{5}$, and $\alpha_5 = \frac{1\alpha_0 + 4\alpha_1}{5}$, with $\alpha_6 = 0$ and $\alpha_7 = 255$. The lookup table is then consulted to determine the alpha value for each pixel, with a value of 0 corresponding to $\alpha_0$ and a value of 7 corresponding to $\alpha_7$.
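The decoding rules above can be made concrete in a short Python sketch. Integer division is used as an approximation of the component interpolation (exact rounding varies among decoders, as noted below), and the standard little-endian layout of two 5:6:5 colours followed by a 32-bit index word is assumed for the DXT1 block; the function names are illustrative.

import struct

def rgb565(c):
    # Expand a packed 5:6:5 colour to 8 bits per channel.
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_dxt1_block(block):
    # An 8-byte DXT1 block: colours c0, c1, then 16 two-bit indices.
    c0, c1, bits = struct.unpack("<HHI", block)
    p0, p1 = rgb565(c0), rgb565(c1)
    if c0 > c1:   # four-colour mode
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:         # three colours plus transparent black
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]
    # The index for texel (x, y) sits at bit 2*(4*y + x) of the word.
    return [[palette[(bits >> (2 * (4 * y + x))) & 3] for x in range(4)]
            for y in range(4)]

def dxt5_alpha_palette(a0, a1):
    # The BC3/DXT5 alpha values derived from the formulas above.
    if a0 > a1:
        return [a0, a1] + [((7 - i) * a0 + i * a1) // 7 for i in range(1, 7)]
    return ([a0, a1] + [((5 - i) * a0 + i * a1) // 5 for i in range(1, 5)]
            + [0, 255])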
DXT4's color data is premultiplied by alpha, whereas DXT5's is not. Because DXT4/5 use an interpolated alpha scheme, they generally produce superior results for alpha (transparency) gradients compared with DXT2/3.
Further variants BC4 and BC5 (Block Compression 4 and 5) were added in Direct3D 10. They reuse the alpha channel encoding found in DXT4/5 (BC3). BC6H (sometimes BC6) and BC7 (Block Compression 6H and 7) were added in Direct3D 11. BC6H and BC7 have a much more complex algorithm with a selection of encoding modes, and the quality is much better as a result. These two modes are also specified much more exactly, with ranges of accepted deviation; earlier BCn modes decode slightly differently among GPU vendors.
Data preconditioning BCn textures can be further compressed for on-disk storage and distribution (texture supercompression). An application decompresses this extra layer and sends the BCn data to the GPU as usual. BCn can be combined with Oodle Texture, a lossy preprocessor that modifies the input texture so that the BCn output is more easily compressed by an LZ77 compressor (rate-distortion optimization). BC7 specifically can also use "bc7prep", a lossless pass that re-encodes the texture in a more compressible form (requiring its inverse at decompression). crunch is another tool that performs RDO and, optionally, further re-encoding. In 2021, Microsoft produced a "BCPack" compression algorithm specifically for BCn-compressed textures; the Xbox Series X and S have hardware support for decompressing BCPack streams.
========================================