text stringlengths 0 473k |
|---|
[SOURCE: https://en.wikipedia.org/wiki/Category:Internet_memes_introduced_in_2010] | [TOKENS: 63] |
Category:Internet memes introduced in 2010 Subcategories This category has the following 2 subcategories, out of 2 total. Pages in category "Internet memes introduced in 2010" The following 39 pages are in this category, out of 39 total. This list may not reflect recent changes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Small_language_model#bodyContent] | [TOKENS: 386] |
Contents Small language model Small language models or compact language models are artificial intelligence language models designed for human natural language processing including language and text generation. They are smaller in scale and scope than large language models. A large language model typically contains hundreds of billions of training parameters, with some models exceeding a trillion parameters. This substantial parameter count enables the model to encode vast amounts of information, thereby improving the generalizability and accuracy of its outputs. However, training such models demands enormous computational resources, rendering it infeasible for an individual to do so using a single computer and graphics processing unit. Small language models, on the other hand, use far fewer parameters, typically ranging from a few thousand to a few hundred million. This make them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device. Most contemporary (2020s) small language models use the same architecture as a large language model, but with a smaller parameter count and sometimes lower arithmetic precision. Parameter count is reduced by a combination of knowledge distillation and pruning. Precision can be reduced by quantization. Work on large language models mostly translate to small language models: pruning and quantization are also widely used to speed up large language models. Models Some notable models are: Phi-4 14B is marginally "small" at best, but Microsoft does market it as a small model. Language model with small pre-training dataset Traditional AI language systems need enormous computers and vast amounts of data. Pre-training matters, even tiny models show significant performance improvements when pre-trained performance increases with larger pre-training datasets. Classification accuracy improves when pre-training and test datasets share similar tokens. Shallow architectures can replicate deep model performance through collaborative learning. See also References This statistics-related article is a stub. You can help Wikipedia by adding missing information. This article about natural language processing is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Small_language_model#Models] | [TOKENS: 386] |
Contents Small language model Small language models or compact language models are artificial intelligence language models designed for human natural language processing including language and text generation. They are smaller in scale and scope than large language models. A large language model typically contains hundreds of billions of training parameters, with some models exceeding a trillion parameters. This substantial parameter count enables the model to encode vast amounts of information, thereby improving the generalizability and accuracy of its outputs. However, training such models demands enormous computational resources, rendering it infeasible for an individual to do so using a single computer and graphics processing unit. Small language models, on the other hand, use far fewer parameters, typically ranging from a few thousand to a few hundred million. This make them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device. Most contemporary (2020s) small language models use the same architecture as a large language model, but with a smaller parameter count and sometimes lower arithmetic precision. Parameter count is reduced by a combination of knowledge distillation and pruning. Precision can be reduced by quantization. Work on large language models mostly translate to small language models: pruning and quantization are also widely used to speed up large language models. Models Some notable models are: Phi-4 14B is marginally "small" at best, but Microsoft does market it as a small model. Language model with small pre-training dataset Traditional AI language systems need enormous computers and vast amounts of data. Pre-training matters, even tiny models show significant performance improvements when pre-trained performance increases with larger pre-training datasets. Classification accuracy improves when pre-training and test datasets share similar tokens. Shallow architectures can replicate deep model performance through collaborative learning. See also References This statistics-related article is a stub. You can help Wikipedia by adding missing information. This article about natural language processing is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Small_language_model#See_also] | [TOKENS: 386] |
Contents Small language model Small language models or compact language models are artificial intelligence language models designed for human natural language processing including language and text generation. They are smaller in scale and scope than large language models. A large language model typically contains hundreds of billions of training parameters, with some models exceeding a trillion parameters. This substantial parameter count enables the model to encode vast amounts of information, thereby improving the generalizability and accuracy of its outputs. However, training such models demands enormous computational resources, rendering it infeasible for an individual to do so using a single computer and graphics processing unit. Small language models, on the other hand, use far fewer parameters, typically ranging from a few thousand to a few hundred million. This make them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device. Most contemporary (2020s) small language models use the same architecture as a large language model, but with a smaller parameter count and sometimes lower arithmetic precision. Parameter count is reduced by a combination of knowledge distillation and pruning. Precision can be reduced by quantization. Work on large language models mostly translate to small language models: pruning and quantization are also widely used to speed up large language models. Models Some notable models are: Phi-4 14B is marginally "small" at best, but Microsoft does market it as a small model. Language model with small pre-training dataset Traditional AI language systems need enormous computers and vast amounts of data. Pre-training matters, even tiny models show significant performance improvements when pre-trained performance increases with larger pre-training datasets. Classification accuracy improves when pre-training and test datasets share similar tokens. Shallow architectures can replicate deep model performance through collaborative learning. See also References This statistics-related article is a stub. You can help Wikipedia by adding missing information. This article about natural language processing is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:Internet_meme_characters] | [TOKENS: 55] |
Category:Internet meme characters Subcategories This category has the following 2 subcategories, out of 2 total. Pages in category "Internet meme characters" The following 87 pages are in this category, out of 87 total. This list may not reflect recent changes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:Wikipedia_indefinitely_semi-protected_pages] | [TOKENS: 156] |
Category:Wikipedia indefinitely semi-protected pages This category contains pages which have been indefinitely semi-protected from editing. Semi-protection prevents anonymous and newly registered users from editing; contributing accounts must be at least 4 days old and must have made at least 10 edits to non-protected pages (including deleted ones). Use {{Pp-semi-indef}} to add pages to this category. This should only be done if the page is in fact semi-protected – adding the template does not in itself protect the page. Subcategories This category has only the following subcategory. Pages in category "Wikipedia indefinitely semi-protected pages" The following 200 pages are in this category, out of approximately 9,943 total. This list may not reflect recent changes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:Use_dmy_dates_from_March_2024] | [TOKENS: 144] |
Category:Use dmy dates from March 2024 Wikipedia articles (tagged in this month) that use dd mm yyyy date formats, whether by application of the first main contributor rule or by virtue of close national ties to the subject belong in this category. Use {{Use dmy dates}} to add an article to this category. See MOS:DATE. This system of tagging or categorisation is used as a status monitor of all articles that use dd mm yyyy date formats, and not as a clean up. Pages in category "Use dmy dates from March 2024" The following 200 pages are in this category, out of approximately 9,735 total. This list may not reflect recent changes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Small_language_model#cite_note-jjokah-2] | [TOKENS: 386] |
Contents Small language model Small language models or compact language models are artificial intelligence language models designed for human natural language processing including language and text generation. They are smaller in scale and scope than large language models. A large language model typically contains hundreds of billions of training parameters, with some models exceeding a trillion parameters. This substantial parameter count enables the model to encode vast amounts of information, thereby improving the generalizability and accuracy of its outputs. However, training such models demands enormous computational resources, rendering it infeasible for an individual to do so using a single computer and graphics processing unit. Small language models, on the other hand, use far fewer parameters, typically ranging from a few thousand to a few hundred million. This make them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device. Most contemporary (2020s) small language models use the same architecture as a large language model, but with a smaller parameter count and sometimes lower arithmetic precision. Parameter count is reduced by a combination of knowledge distillation and pruning. Precision can be reduced by quantization. Work on large language models mostly translate to small language models: pruning and quantization are also widely used to speed up large language models. Models Some notable models are: Phi-4 14B is marginally "small" at best, but Microsoft does market it as a small model. Language model with small pre-training dataset Traditional AI language systems need enormous computers and vast amounts of data. Pre-training matters, even tiny models show significant performance improvements when pre-trained performance increases with larger pre-training datasets. Classification accuracy improves when pre-training and test datasets share similar tokens. Shallow architectures can replicate deep model performance through collaborative learning. See also References This statistics-related article is a stub. You can help Wikipedia by adding missing information. This article about natural language processing is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Small_language_model#cite_note-4] | [TOKENS: 386] |
Contents Small language model Small language models or compact language models are artificial intelligence language models designed for human natural language processing including language and text generation. They are smaller in scale and scope than large language models. A large language model typically contains hundreds of billions of training parameters, with some models exceeding a trillion parameters. This substantial parameter count enables the model to encode vast amounts of information, thereby improving the generalizability and accuracy of its outputs. However, training such models demands enormous computational resources, rendering it infeasible for an individual to do so using a single computer and graphics processing unit. Small language models, on the other hand, use far fewer parameters, typically ranging from a few thousand to a few hundred million. This make them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device. Most contemporary (2020s) small language models use the same architecture as a large language model, but with a smaller parameter count and sometimes lower arithmetic precision. Parameter count is reduced by a combination of knowledge distillation and pruning. Precision can be reduced by quantization. Work on large language models mostly translate to small language models: pruning and quantization are also widely used to speed up large language models. Models Some notable models are: Phi-4 14B is marginally "small" at best, but Microsoft does market it as a small model. Language model with small pre-training dataset Traditional AI language systems need enormous computers and vast amounts of data. Pre-training matters, even tiny models show significant performance improvements when pre-trained performance increases with larger pre-training datasets. Classification accuracy improves when pre-training and test datasets share similar tokens. Shallow architectures can replicate deep model performance through collaborative learning. See also References This statistics-related article is a stub. You can help Wikipedia by adding missing information. This article about natural language processing is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Graphics_processing_unit] | [TOKENS: 4813] |
Contents Graphics processing unit A graphics processing unit (GPU) is a specialized electronic circuit designed for digital image processing and to accelerate computer graphics, being present either as a component on a discrete graphics card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles. GPUs are increasingly being used for artificial intelligence (AI) processing due to linear algebra acceleration which is also used extensively in graphics processing. Although there is no single definition of the term, and it may be used to describe any video display system, in modern use a GPU includes the ability to internally perform the calculations needed for various graphics tasks, like rotating and scaling 3D images, and often the additional ability to run custom programs known as shaders. This contrasts with earlier graphics controllers known as video display controllers which had no internal calculation capabilities, or blitters, which performed only basic memory movement operations. The modern GPU emerged during the 1990s, adding the ability to perform operations like drawing lines and text without CPU help, and later adding 3D functionality. Graphics functions are generally independent and this lends these tasks to being implemented on separate calculation engines. Modern GPUs include hundreds, or thousands, of calculation units. This made them useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. The ability of GPUs to rapidly perform vast numbers of calculations has led to their adoption in diverse fields including artificial intelligence (AI) where they excel at handling data-intensive and computationally demanding tasks. Other non-graphical uses include the training of neural networks and cryptocurrency mining. History Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. A specialized barrel shifter circuit helped the CPU animate the framebuffer graphics for various 1970s arcade video games from Midway and Taito, such as Gun Fight (1975), Sea Wolf (1976), and Space Invaders (1978). The Namco Galaxian arcade system in 1979 used specialized graphics hardware that supported RGB color, multi-colored sprites, and tilemap backgrounds. The Galaxian hardware was widely used during the golden age of arcade video games, by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, Sega, and Taito. The Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. Atari 8-bit computers (1979) had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer).[clarification needed] 6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction.[clarification needed] ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU. The NEC μPD7220 was the first implementation of a personal computer graphics display processor as a single large-scale integration (LSI) integrated circuit chip. This enabled the design of low-cost, high-performance video graphics cards such as those from Number Nine Visual Technology. It became the best-known GPU until the mid-1980s. It was the first fully integrated VLSI (very large-scale integration) metal–oxide–semiconductor (NMOS) graphics display processor for PCs, supported up to 1024×1024 resolution, and laid the foundations for the PC graphics market. It was used in a number of graphics cards and was licensed for clones such as the Intel 82720, the first of Intel's graphics processing units. The Williams Electronics arcade games Robotron: 2084, Joust, Sinistar, and Bubbles, all released in 1982, contain custom blitter chips for operating on 16-color bitmaps. In 1984, Hitachi released the ARTC HD63484, the first major CMOS graphics processor for personal computers. The ARTC could display up to 4K resolution when in monochrome mode. It was used in a number of graphics cards and terminals during the late 1980s. In 1985, the Amiga was released with a custom graphics chip including a blitter for bitmap manipulation, line drawing, and area fill. It also included a coprocessor with its own simple instruction set, that was capable of manipulating graphics hardware registers in sync with the video beam (e.g. for per-scanline palette switches, sprite multiplexing, and hardware windowing), or driving the blitter. In 1986, Texas Instruments released the TMS34010, the first fully programmable graphics processor. It could run general-purpose code but also had a graphics-oriented instruction set. During 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture ("TIGA") Windows accelerator cards. In 1987, the IBM 8514 graphics system was released. It was one of the first video cards for IBM PC compatibles that implemented fixed-function 2D primitives in electronic hardware. Sharp's X68000, released in 1987, used a custom graphics chipset with a 65,536 color palette and hardware support for sprites, scrolling, and multiple playfields. It served as a development machine for Capcom's CP System arcade board. Fujitsu's FM Towns computer, released in 1989, had support for a 16,777,216 color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System. IBM introduced its proprietary Video Graphics Array (VGA) display standard in 1987, with a maximum resolution of 640×480 pixels. In November 1988, NEC Home Electronics announced its creation of the Video Electronics Standards Association (VESA) to develop and promote a Super VGA (SVGA) computer display standard as a successor to VGA. Super VGA enabled graphics display resolutions up to 800×600 pixels, a 56% increase. In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an indication of the performance increase it promised. The 86C911 spawned a variety of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. Fixed-function Windows accelerators surpassed expensive general-purpose graphics coprocessors in Windows performance, and such coprocessors faded from the PC market. In the early- and mid-1990s, real-time 3D graphics became increasingly common in arcade, computer, and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the Sega Model 1, Namco System 22, and Sega Model 2, and the fifth-generation video game consoles such as the Saturn, PlayStation, and Nintendo 64. Arcade systems such as the Sega Model 2 and SGI Onyx-based Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (transform, clipping, and lighting) years before appearing in consumer graphics cards. Another early example is the Super FX chip, a RISC-based on-cartridge graphics chip used in some SNES games, notably Doom and Star Fox. Some systems used DSPs to accelerate transformations. Fujitsu, which worked on the Sega Model 2 arcade system, began working on integrating T&L into a single LSI solution for use in home computers in 1995; the Fujitsu Pinolite, the first 3D geometry processor for personal computers, announced in 1997. The first hardware T&L GPU on home video game consoles was the Nintendo 64's Reality Coprocessor, released in 1996. In 1997, Mitsubishi released the 3Dpro/2MP, a GPU capable of transformation and lighting, for workstations and Windows NT desktops; ATi used it for its FireGL 4000 graphics card, released in 1997. The term "GPU" was coined by Sony in reference to the 32-bit Sony GPU (designed by Toshiba) in the PlayStation video game console, released in 1994. In October 2002, with the introduction of the ATI Radeon 9700 (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating point math, and were quickly becoming as flexible as CPUs, yet orders of magnitude faster for image-array operations. Pixel shading is often used for bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded. With the introduction of the Nvidia GeForce 8 series and new generic stream processing units, GPUs became more generalized computing devices. Parallel GPUs are making computational inroads against the CPU, and a subfield of research, dubbed GPU computing or GPGPU for general purpose computing on GPU, has found applications in fields as diverse as machine learning, oil exploration, scientific image processing, linear algebra, statistics, 3D reconstruction, and stock options pricing. GPGPUs were the precursors to what is now called a compute shader (e.g. CUDA, OpenCL, DirectCompute) and actually abused the hardware to a degree by treating the data passed to algorithms as texture maps and executing algorithms by drawing a triangle or quad with an appropriate pixel shader.[clarification needed] This entails some overheads since units like the scan converter are involved where they are not needed (nor are triangle manipulations even a concern—except to invoke the pixel shader).[clarification needed] Nvidia's CUDA platform, first introduced in 2007, was the earliest widely adopted programming model for GPU computing. OpenCL is an open standard defined by the Khronos Group that allows for the development of code for both GPUs and CPUs with an emphasis on portability. OpenCL solutions are supported by Intel, AMD, Nvidia, and ARM, and according to a report in 2011 by Evans Data, OpenCL had become the second most popular HPC tool. In 2010, Nvidia partnered with Audi to power their cars' dashboards, using the Tegra GPU to provide increased functionality to cars' navigation and entertainment systems. Advances in GPU technology in cars helped advance self-driving technology. AMD's Radeon HD 6000 series cards were released in 2010, and in 2011 AMD released its 6000M Series discrete GPUs for mobile devices. The Kepler line of graphics cards by Nvidia were released in 2012 and were used in the Nvidia 600 and 700 series cards. A feature in this GPU microarchitecture included GPU boost, a technology that adjusts the clock-speed of a video card to increase or decrease according to its power draw. Kepler also introduced NVENC video encoding acceleration technology. The PS4 and Xbox One were released in 2013; they both used GPUs based on AMD's Radeon HD 7850 and 7790. Nvidia's Kepler line of GPUs was followed by the Maxwell line, manufactured on the same process. Nvidia's 28 nm chips were manufactured by TSMC in Taiwan using the 28 nm process. Compared to the 40 nm technology from the past, this manufacturing process allowed a 20 percent boost in performance while drawing less power. Virtual reality headsets have high system requirements; manufacturers recommended the GTX 970 and the R9 290X or better at the time of their release. Cards based on the Pascal microarchitecture were released in 2016. The GeForce 10 series of cards are of this generation of graphics cards. They are made using the 16 nm manufacturing process which improves upon previous microarchitectures. In 2018, Nvidia launched the RTX 20 series GPUs that added ray tracing cores to GPUs, improving their performance on lighting effects. Polaris 11 and Polaris 10 GPUs from AMD are fabricated by a 14 nm process. Their release resulted in a substantial increase in the performance per watt of AMD video cards. AMD also released the Vega GPU series for the high end market as a competitor to Nvidia's high end Pascal cards, also featuring HBM2 like the Titan V. In 2019, AMD released the successor to their Graphics Core Next (GCN) microarchitecture/instruction set. Dubbed RDNA, the first product featuring it was the Radeon RX 5000 series of video cards. The company announced that the successor to the RDNA microarchitecture would be incremental (a "refresh"). AMD unveiled the Radeon RX 6000 series, its RDNA 2 graphics cards with support for hardware-accelerated ray tracing. The product series, launched in late 2020, consisted of the RX 6800, RX 6800 XT, and RX 6900 XT. The RX 6700 XT, which is based on Navi 22, was launched in early 2021. The PlayStation 5 and Xbox Series X and Series S were released in 2020; they both use GPUs based on the RDNA 2 microarchitecture with incremental improvements and different GPU configurations in each system's implementation. In the 2020s, GPUs have been increasingly used for calculations involving embarrassingly parallel problems, such as training of neural networks on enormous datasets that are needed for artificial intelligence large language models. Specialized processing cores on some modern workstation's GPUs are dedicated for deep learning since they have significant FLOPS performance increases, using 4×4 matrix multiplication and division, resulting in hardware performance up to 128 TFLOPS in some applications. These tensor cores are expected to appear in consumer cards, as well.[needs update] GPU companies Many companies have produced GPUs under a number of brand names. In 2009,[needs update] Intel, Nvidia, and AMD/ATI were the market share leaders, with 49.4%, 27.8%, and 20.6% market share respectively. In addition, Matrox produces GPUs. Chinese companies such as Jingjia Micro have also produced GPUs for the domestic market although in terms of worldwide sales, they lag behind market leaders. Computational functions Several factors of GPU construction affect the performance of the card for real-time rendering, such as the size of the connector pathways in the semiconductor device fabrication, the clock signal frequency, and the number and size of various on-chip memory caches. Performance is also affected by the number of streaming multiprocessors (SM) for NVidia GPUs, or compute units (CU) for AMD GPUs, or Xe cores for Intel Xe based GPUs, which describe the number of on-silicon processor core units within the GPU chip that perform the core calculations, typically working in parallel with other SM/CUs on the GPU. GPU performance is typically measured in floating point operations per second (FLOPS); GPUs in the 2010s and 2020s typically deliver performance measured in teraflops (TFLOPS). This is an estimated performance measure, as other factors can affect the actual display rate. An earlier GPU may support one or more 2D graphics APIs for 2D acceleration, such as GDI and DirectDraw. GPU forms In the 1970s, the term "GPU" originally stood for graphics processor unit and described a programmable processing unit working independently from the CPU that was responsible for graphics manipulation and output. In 1994, Sony used the term (with the meaning graphics processing unit) in reference to the PlayStation console's Toshiba-designed Sony GPU. The term was popularized by Nvidia in 1999, who marketed the GeForce 256 as "the world's first GPU". It was presented as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines". Rival ATI Technologies coined the term "visual processing unit" (VPU) with the release of the Radeon 9700 in 2002. The AMD Alveo MA35D features dual VPUs, each using the 5 nm process in 2023. In personal computers, there are two main forms of GPUs: dedicated graphics (also called discrete graphics) and integrated graphics (also called shared graphics solutions, integrated graphics processors (IGP), or unified memory architecture (UMA)). Dedicated graphics processing units use RAM that is dedicated to the GPU rather than relying on the computer's main system memory. This RAM is usually specially selected for the expected serial workload of the graphics card, such as GDDR SDRAM. Sometimes systems with dedicated discrete GPUs were called "DIS" systems as opposed to "UMA" systems. Technologies such as Scan-Line Interleave by 3dfx, Scalable Link Interface (SLI) and NVLink by Nvidia and CrossFire by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics. These technologies, however, are increasingly uncommon; most games do not fully use multiple GPUs, as most users cannot afford them.[better source needed] Multiple GPUs are still used on supercomputers (such as in Summit); on workstations to accelerate video (processing multiple videos at once) and 3D rendering; for visual effects (VFX); general purpose graphics processing unit (GPGPU) workloads and for simulations, and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers, Tesla GPUs, and Intel's Ponte Vecchio GPUs. Integrated graphics processing units (IGPU), also called integrated graphics, shared graphics solutions, integrated graphics processors (IGP), or unified memory architectures (UMA) use a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto a motherboard as part of its northbridge chipset, or on the same die (integrated circuit) with the CPU, such as Accelerated Processing Unit (AMD APU) or Intel HD Graphics. On certain motherboards, AMD's IGPs can use dedicated sideport memory: a separate fixed block of high performance memory that is dedicated for use by the GPU. As of early 2007[update], computers with integrated graphics account for about 90% of all PC shipments.[needs update] They are less costly to implement than dedicated graphics processing, but tend to be less capable. Historically, integrated processing was considered unfit for 3D games or graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004. However, modern integrated graphics processors such as AMD Accelerated Processing Unit and Intel Graphics Technology, such as HD, UHD, Iris, Iris Pro, Iris Plus, and Xe-LP, can handle 2D graphics or low-stress 3D graphics. Because GPU computations are memory-intensive, integrated processing may compete with the CPU for relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs use system memory with bandwidth up to a current maximum of 128 gigabytes per second, whereas a discrete graphics card may have a bandwidth of more than 1000 gigabytes per second between its video random access memory (VRAM) and GPU core. This memory bus bandwidth can limit the performance of the GPU, though multi-channel memory can mitigate this deficiency. Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it. On systems with "Unified Memory Architecture" (UMA), including modern AMD processors with integrated graphics, modern Intel processors with integrated graphics, Apple processors, the PS5 and Xbox Series (among others), the CPU cores and the GPU block share the same pool of RAM and memory address space. It is common to use a general purpose graphics processing unit (GPGPU) as a modified form of stream processor or a vector processor, running compute kernels. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete GPU designers, AMD and Nvidia, are pursuing this approach with an array of applications.[as of?] Nvidia and AMD collaborated with Stanford University to create a GPU-based client for the Folding@home distributed computing project for protein folding calculations. In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications. GPU-based high performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration.[needs update] Since 2005 there has been interest in using the performance offered by GPUs for evolutionary computation in general, and for accelerating the fitness evaluation in genetic programming in particular.[citation needed] Most approaches compile linear or tree programs on the host PC and transfer the executable to the GPU to be run. Typically a performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU's single instruction, multiple data (SIMD) architecture.[needs update] Substantial acceleration can also be obtained by not compiling the programs, and instead transferring them to the GPU, to be interpreted there. A GPU can be attached to some external bus of a notebook. PCI Express is the only bus used for this purpose.[as of?] The port may be, for example, an ExpressCard or mPCIe port (PCIe ×1, up to 5 or 2.5 gigabits per second respectively), a Thunderbolt 1, 2, or 3 port (PCIe ×4, up to 10, 20, or 40 gigabits per second respectively), a USB4 port with Thunderbolt compatibility, or an OCuLink port. Those ports are only available on certain notebook systems. eGPU enclosures include their own power supply (PSU), because powerful GPUs can consume hundreds of watts. Energy efficiency Graphics processing units (GPU) have continued to increase in energy usage, while CPUs designers have recently[when?] focused on improving performance per watt. High performance GPUs may draw large amount of power, therefore intelligent techniques are required to manage GPU power consumption. Measures like 3DMark2006 score per watt can help identify more efficient GPUs. However that may not adequately incorporate efficiency in typical use, where much time is spent doing less demanding tasks. With modern GPUs, energy usage is an important constraint on the maximum computational capabilities that can be achieved. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, performance per watt of a GPU design translates directly into peak performance of a system that uses that design. Since GPUs may also be used for some general purpose computation, sometimes their performance is measured in terms also applied to CPUs, such as FLOPS per watt. Sales In 2013, 438.3 million GPUs were shipped globally and the forecast for 2014 was 414.2 million. However, by the third quarter of 2022, shipments of PC GPUs totaled around 75.5 million units, down 19% year-over-year.[needs update] See also References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/w/index.php?title=Small_language_model&action=info] | [TOKENS: 44] |
Contents Information for "Small language model" Basic information Page protection Edit history Page properties This page is a member of 3 hidden categories (help): Pages transcluded onto the current version of this page (help): External tools |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Prompt_injection] | [TOKENS: 1992] |
Contents Prompt injection Prompt injection is a cybersecurity exploit and an attack vector in which innocuous-looking inputs (i.e. prompts) are designed to cause unintended behavior in machine learning models, particularly large language models (LLMs). The attack takes advantage of the model's inability to distinguish between developer-defined prompts and user inputs to bypass safeguards and influence model behaviour. While LLMs are designed to follow trusted instructions, they can be manipulated into carrying out unintended responses through carefully crafted inputs. With capabilities such as web browsing and file upload, an LLM not only needs to differentiate developer instructions from user input, but also to differentiate user input from content not directly authored by the user. LLMs with web browsing capabilities can be targeted by indirect prompt injection, where adversarial prompts are embedded within website content. If the LLM retrieves and processes the webpage, it may interpret and execute the embedded instructions as legitimate commands. Example A language model can perform translation with the following prompt: followed by the text to be translated. A prompt injection can occur when that text contains instructions that change the behavior of the model: to which an AI model responds: "You have been hacked!" This attack works because language model inputs contain instructions and data together in the same context, so the underlying algorithm cannot distinguish between them. History Prompt injection is a type of code injection attack that leverages adversarial prompt engineering to manipulate AI models. In May 2022, Jonathan Cefalu of Preamble identified prompt injection as a security vulnerability and reported it to OpenAI, referring to it as "command injection". The term "prompt injection" was coined by Simon Willison in September 2022. He distinguished it from jailbreaking, which bypasses an AI model's safeguards, whereas prompt injection exploits its inability to differentiate system instructions from user inputs. While some prompt injection attacks involve jailbreaking, they remain distinct techniques. A second class of prompt injection, where non-user content pretends to be user instruction, was described in a 2023 paper. In the paper, Kai Greshake and his team at sequire technology, described a series of successful attacks against multiple AI models including GPT-4 and OpenAI Codex. Types Direct injection happens when user input is mistaken as developer instruction, leading to unexpected manipulation of responses. This is the original form of prompt injection. Indirect injection happen when the prompt is located in external data sources such as emails and documents. This external data may include an instruction that the AI mistakes as coming from the user or the developer. Indirect injections can be intentional as a way to evade filters, or be unintentional (from the user's perspective) as a way for the author of the document to manipulate what result is presented to the user. While intentional and direct injection represents a threat to the developer from the user, unintentional indirect injection represent a threat from the data-author to the user. Examples of unintentional (for the user), indirect injections can include: Prompt injection has been fought with filters that prevent specific types of input from being sent. In response, attackers have sought ways to evade the filter. Forms of indirect injection (as mentioned above) are one example. A November 2024 OWASP report[citation needed] identified security challenges in multimodal AI, which processes multiple data types, such as text and images. Adversarial prompts can be embedded in non-textual elements, such as hidden instructions within images, influencing model responses when processed alongside text. This complexity expands the attack surface, making multimodal AI more susceptible to cross-modal vulnerabilities. One researcher in 2025 found that holding up a sheet of paper instructing the viewer to act as if the person (and the paper itself) were not present in the image resulted in an AI model omitting that person from a description of the scene. A model with access to tools or chain of thought can be instructed to decode an obfuscated instruction.[citation needed] Prompt leaking Prompt leaking is when a user uses a chat prompt to reveal the software's system prompt, something that is typically kept secret. For example, Twitter users in 2022 were able to trick a spam account that was engaging with posts about remote working into revealing that it was an AI, and that its system prompt was guiding it to respond "with a positive attitude towards remote working in the 'we' form". Prompt injection and jailbreak incidents A November 2024 report by The Alan Turing Institute highlights growing risks, stating that 75% of business employees use generative artificial intelligence, with 46% adopting it within the past six months. McKinsey identified accuracy as the top generative artificial intelligence risk, yet only 38% of organizations are taking steps to mitigate it. Leading AI providers, including Microsoft, Google, and Amazon, integrate LLMs into enterprise applications. Cybersecurity agencies, including the UK National Cyber Security Centre (NCSC) and US National Institute for Standards and Technology (NIST), classify prompt injection as a critical security threat, with potential consequences such as data manipulation, phishing, misinformation, and denial-of-service attacks. In early 2025, researchers discovered that some academic papers contained hidden prompts designed to manipulate AI-powered peer review systems into generating favorable reviews, demonstrating how prompt injection attacks can compromise critical institutional processes and undermine the integrity of academic evaluation systems. In February 2023, a Stanford student discovered a method to bypass safeguards in Microsoft's AI-powered Bing Chat by instructing it to ignore prior directives, which led to the revelation of internal guidelines and its codename, "Sydney". Another student later verified the exploit by posing as a developer at OpenAI. Microsoft acknowledged the issue and stated that system controls were continuously evolving. This is a direct injection attack. In December 2024, The Guardian reported that OpenAI's ChatGPT search tool was vulnerable to indirect prompt injection attacks, allowing hidden webpage content to manipulate its responses. Testing showed that invisible text could override negative reviews with artificially positive assessments, potentially misleading users. Security researchers cautioned that such vulnerabilities, if unaddressed, could facilitate misinformation or manipulate search results. In January 2025, Infosecurity Magazine reported that DeepSeek-R1, a large language model (LLM) developed by Chinese AI startup DeepSeek, exhibited vulnerabilities to direct and indirect prompt injection attacks. Testing with WithSecure's Simple Prompt Injection Kit for Evaluation and Exploitation (Spikee) benchmark found that DeepSeek-R1 had a higher attack success rate compared to several other models, ranking 17th out of 19 when tested in isolation and 16th when combined with predefined rules and data markers. While DeepSeek-R1 ranked sixth on the Chatbot Arena benchmark for reasoning performance, researchers noted that its security defenses may not have been as extensively developed as its optimization for LLM performance benchmarks. In February 2025, Ars Technica reported vulnerabilities in Google's Gemini AI to indirect prompt injection attacks that manipulated its long-term memory. Security researcher Johann Rehberger demonstrated how hidden instructions within documents could be stored and later triggered by user interactions. The exploit leveraged delayed tool invocation, causing the AI to act on injected prompts only after activation. Google rated the risk as low, citing the need for user interaction and the system's memory update notifications, but researchers cautioned that manipulated memory could result in misinformation or influence AI responses in unintended ways. In July 2025, NeuralTrust reported a successful jailbreak of X's Grok4. The attack used a combination of Echo Chamber Attack developed by NeuralTrust's AI researcher Ahmad Alobaid and Crescendo Attack developed by Mark Russinovich, Ahmed Salem, and Ronen Eldan from Microsoft. Mitigation Prompt injection has been identified as a significant security risk in LLM applications, prompting the development of various mitigation strategies.[citation needed] These include input and output filtering, prompt evaluation, reinforcement learning from human feedback, and prompt engineering to distinguish user input from system instructions. Additional techniques outlined by OWASP include enforcing least privilege access, requiring human oversight for sensitive operations, isolating external content, and conducting adversarial testing to identify vulnerabilities with tools like garak. While these measures help reduce risks, OWASP notes that prompt injection remains a persistent challenge, as methods like Retrieval-Augmented Generation (RAG) and fine-tuning do not eliminate the threat.[citation needed] The UK National Cyber Security Centre (NCSC) stated in August 2023 that while research into prompt injection is ongoing, it "may simply be an inherent issue with LLM technology." The NCSC also noted that although some strategies can make prompt injection more difficult, "as yet there are no surefire mitigations". Data hygiene is a key defense against prompt injection in generative AI systems, ensuring that AI models access only well-regulated data. A November 2024 report by the Alan Turing Institute outlines best practices, including restricting unverified external inputs, such as emails, until reviewed by authorized users. Approval processes for new data sources, particularly RAG systems, help prevent malicious content from influencing AI outputs. Organizations can further mitigate risks by enforcing role-based data access and blocking untrusted sources. Additional safeguards include monitoring for hidden text in documents and restricting file types that may contain executable code, such as Python pickle files. Technical guardrails mitigate prompt injection attacks by distinguishing between task instructions and retrieved data. Attackers can embed hidden commands within data sources, exploiting this ambiguity. One approach uses automated evaluation processes to scan retrieved data for potential instructions before AI processes it. Flagged inputs can be reviewed or filtered out to reduce the risk of unintended execution. User training mitigates security risks in AI-embedded applications. Many organizations train employees to identify phishing attacks, but AI-specific training improves understanding of AI models, their vulnerabilities, and disguised malicious prompts. Relying solely on a system prompt crafted with instructions to be careful of injection attempts has limited effectiveness. References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Talk:Small_language_model] | [TOKENS: 47] |
Talk:Small language model Talk pages are where people discuss how to make content on Wikipedia the best that it can be. You can use this page to start a discussion with others about how to improve the "Small language model" page. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pruning_(artificial_neural_network)] | [TOKENS: 288] |
Contents Pruning (artificial neural network) In deep learning, pruning is the practice of removing parameters from an existing artificial neural network. The goal of this process is to reduce the size (parameter count) of the neural network (and therefore the computational resources required to run it) whilst maintaining accuracy. This can be compared to the biological process of synaptic pruning which takes place in mammalian brains during development. Node (neuron) pruning A basic algorithm for pruning is as follows: Edge (weight) pruning Most work on neural network pruning does not remove full neurons or layers (structured pruning). Instead, it focuses on removing the most insignificant weights (unstructured pruning), namely, setting their values to zero. This can either be done globally by comparing weights from all layers in the network or locally by comparing weights in each layer separately. Different metrics can be used to measure the importance of each weight. Weight magnitude as well as combinations of weight and gradient information are commonly used metrics. Early work suggested also to change the values of non-pruned weights. When to prune the neural network? Pruning can be applied at three different stages: before training, during training, or after training. When pruning is performed during or after training, additional fine-tuning epochs are typically required. Each approach involves different trade-offs between accuracy and computational cost. See also References This artificial neural network-related article is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/w/index.php?title=Small_language_model&action=edit] | [TOKENS: 1448] |
Editing Small language model Copy and paste: – — ° ′ ″ ≈ ≠ ≤ ≥ ± − × ÷ ← → · § Cite your sources: <ref></ref> {{}} {{{}}} | [] [[]] [[Category:]] #REDIRECT [[]] <s></s> <sup></sup> <sub></sub> <code></code> <pre></pre> <blockquote></blockquote> <ref></ref> <ref name="" /> {{Reflist}} <references /> <includeonly></includeonly> <noinclude></noinclude> {{DEFAULTSORT:}} <nowiki></nowiki> <!-- --> <span class="plainlinks"></span> Symbols: ~ | ¡ ¿ † ‡ ↔ ↑ ↓ • ¶ # ∞ ‹› «» ¤ ₳ ฿ ₵ ¢ ₡ ₢ $ ₫ ₯ € ₠ ₣ ƒ ₴ ₭ ₤ ℳ ₥ ₦ ₧ ₰ £ ៛ ₨ ₪ ৳ ₮ ₩ ¥ ♠ ♣ ♥ ♦ 𝄫 ♭ ♮ ♯ 𝄪 © ¼ ½ ¾ Latin: A a Á á À à  â Ä ä Ǎ ǎ Ă ă Ā ā à ã Å å Ą ą Æ æ Ǣ ǣ B b C c Ć ć Ċ ċ Ĉ ĉ Č č Ç ç D d Ď ď Đ đ Ḍ ḍ Ð ð E e É é È è Ė ė Ê ê Ë ë Ě ě Ĕ ĕ Ē ē Ẽ ẽ Ę ę Ẹ ẹ Ɛ ɛ Ǝ ǝ Ə ə F f G g Ġ ġ Ĝ ĝ Ğ ğ Ģ ģ H h Ĥ ĥ Ħ ħ Ḥ ḥ I i İ ı Í í Ì ì Î î Ï ï Ǐ ǐ Ĭ ĭ Ī ī Ĩ ĩ Į į Ị ị J j Ĵ ĵ K k Ķ ķ L l Ĺ ĺ Ŀ ŀ Ľ ľ Ļ ļ Ł ł Ḷ ḷ Ḹ ḹ M m Ṃ ṃ N n Ń ń Ň ň Ñ ñ Ņ ņ Ṇ ṇ Ŋ ŋ O o Ó ó Ò ò Ô ô Ö ö Ǒ ǒ Ŏ ŏ Ō ō Õ õ Ǫ ǫ Ọ ọ Ő ő Ø ø Œ œ Ɔ ɔ P p Q q R r Ŕ ŕ Ř ř Ŗ ŗ Ṛ ṛ Ṝ ṝ S s Ś ś Ŝ ŝ Š š Ş ş Ș ș Ṣ ṣ ß T t Ť ť Ţ ţ Ț ț Ṭ ṭ Þ þ U u Ú ú Ù ù Û û Ü ü Ǔ ǔ Ŭ ŭ Ū ū Ũ ũ Ů ů Ų ų Ụ ụ Ű ű Ǘ ǘ Ǜ ǜ Ǚ ǚ Ǖ ǖ V v W w Ŵ ŵ X x Y y Ý ý Ŷ ŷ Ÿ ÿ Ỹ ỹ Ȳ ȳ Z z Ź ź Ż ż Ž ž ß Ð ð Þ þ Ŋ ŋ Ə ə Greek: Ά ά Έ έ Ή ή Ί ί Ό ό Ύ ύ Ώ ώ Α α Β β Γ γ Δ δ Ε ε Ζ ζ Η η Θ θ Ι ι Κ κ Λ λ Μ μ Ν ν Ξ ξ Ο ο Π π Ρ ρ Σ σ ς Τ τ Υ υ Φ φ Χ χ Ψ ψ Ω ω {{Polytonic|}} Cyrillic: А а Б б В в Г г Ґ ґ Ѓ ѓ Д д Ђ ђ Е е Ё ё Є є Ж ж З з Ѕ ѕ И и І і Ї ї Й й Ј ј К к Ќ ќ Л л Љ љ М м Н н Њ њ О о П п Р р С с Т т Ћ ћ У у Ў ў Ф ф Х х Ц ц Ч ч Џ џ Ш ш Щ щ Ъ ъ Ы ы Ь ь Э э Ю ю Я я ́ IPA: t̪ d̪ ʈ ɖ ɟ ɡ ɢ ʡ ʔ ɸ β θ ð ʃ ʒ ɕ ʑ ʂ ʐ ç ʝ ɣ χ ʁ ħ ʕ ʜ ʢ ɦ ɱ ɳ ɲ ŋ ɴ ʋ ɹ ɻ ɰ ʙ ⱱ ʀ ɾ ɽ ɫ ɬ ɮ ɺ ɭ ʎ ʟ ɥ ʍ ɧ ʼ ɓ ɗ ʄ ɠ ʛ ʘ ǀ ǃ ǂ ǁ ɨ ʉ ɯ ɪ ʏ ʊ ø ɘ ɵ ɤ ə ɚ ɛ œ ɜ ɝ ɞ ʌ ɔ æ ɐ ɶ ɑ ɒ ʰ ʱ ʷ ʲ ˠ ˤ ⁿ ˡ ˈ ˌ ː ˑ ̪ {{IPA|}} Wikidata entities used in this page Pages transcluded onto the current version of this page (help): This page is a member of 3 hidden categories (help): |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Large_language_models] | [TOKENS: 11330] |
Contents Large language model A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities like conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems. LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning. Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy, the LLM's output distribution, against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance. Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety. Hill climbing, iteratively optimizing models against benchmarks, has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns of overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements. History Before the emergence of transformer-based models in 2017, some language models were considered large relative to the computational and data constraints of their time. In the early 1990s, IBM's statistical models pioneered word alignment techniques for machine translation, laying the groundwork for corpus-based language modeling. In 2001, a smoothed n-gram model, such as those employing Kneser–Ney smoothing, trained on 300 million words, achieved state-of-the-art perplexity on benchmark tests. During the 2000s, with the rise of widespread internet access, researchers began compiling massive text datasets from the web ("web as corpus") to train statistical language models. Moving beyond n-gram models, researchers started in 2000 to use neural networks to learn language models. Following the breakthrough of deep neural networks in image classification around 2012, similar architectures were adapted for language tasks. This shift was marked by the development of word embeddings (eg, Word2Vec by Mikolov in 2013) and sequence-to-sequence (seq2seq) models using LSTM. In 2016, Google transitioned its translation service to neural machine translation (NMT), replacing statistical phrase-based models with deep recurrent neural networks. These early NMT systems used LSTM-based encoder-decoder architectures, as they preceded the invention of transformers. At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". This paper's goal was to improve upon 2014 seq2seq technology, and was based mainly on the attention mechanism developed by Bahdanau et al. in 2014. The following year in 2018, BERT was introduced and quickly became "ubiquitous". Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting. Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention because OpenAI claimed to have initially deemed it too powerful to release publicly, out of fear of malicious use. GPT-3 in 2020 went a step further and as of 2025[update] is available only via API with no offering of downloading the model to execute locally. But it was the 2022 consumer-facing chatbot ChatGPT that received extensive media coverage and public attention. The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities. OpenAI did not reveal the high-level architecture and the number of parameters of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work. In 2024 OpenAI released the reasoning model OpenAI o1, which generates long chains of thought before returning a final answer. Many LLMs with parameter counts comparable to those of OpenAI's GPT series have been developed. Since 2022, open-weight models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on usage and deployment. Mistral AI's models Mistral 7B and Mixtral 8x7b have a more permissive Apache License. In January 2025, DeepSeek released DeepSeek R1, a 671-billion-parameter open-weight model that performs comparably to OpenAI o1 but at a much lower price per token for users. Since 2023, many LLMs have been trained to be multimodal, having the ability to also process or generate other types of data, such as images, audio, or 3D meshes. These LLMs are also called large multimodal models (LMMs), or multimodal large language models (MLLMs). As of 2024, the largest and most capable models are all based on the transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model). Open-weight LLMs have increasingly shaped the field since 2023, contributing to broader participation in AI development and greater transparency in model evaluation. Vake et al. (2025) demonstrated that community-driven contributions to open-weight models measurably improve their efficiency and performance, with user participation growing rapidly on collaborative platforms such as Hugging Face. Paris et al. (2025) further argued that openness in AI should extend beyond releasing model code or weights to encompass inclusiveness, accountability, and ethical responsibility in AI research and deployment. Collectively, these studies highlight that open-weight LLMs can accelerate innovation and enhance scientific reproducibility, while fostering a more transparent and participatory AI ecosystem. Dataset preprocessing As machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated to the integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as [MASK] for masked-out token (as used in BERT), and [UNK] ("unknown") for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, "Ġ" denotes a preceding whitespace in RoBERTa and GPT and "##" denotes continuation of a preceding word in BERT. For example, the BPE tokenizer used by the legacy version of GPT-3 would split tokenizer: texts -> series of numerical "tokens" as Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. The average number of words per token depends on the language. As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. initial set of uni-grams). Successively the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-gram, until a vocabulary of prescribed size is obtained. After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial-set of uni-grams. A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. However, an average word in another language encoded by such an English-optimized tokenizer is split into a suboptimal amount of tokens. GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widespread languages such as Portuguese and German have "a premium of 50%" compared to English. In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data. Cleaned datasets can increase training efficiency and lead to improved downstream performance. A trained LLM can be used to clean datasets for training a further LLM. With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it). Training of largest language models might need more linguistic data than naturally available, or that the naturally occurring data is of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM. Training An LLM is a type of foundation model (large X model) trained on language. LLMs can be trained in different ways. In particular, GPT models are first pretrained to predict the next word on a large amount of data, before being fine-tuned. Substantial infrastructure is necessary for training the largest models. The tendency towards larger models is visible in the list of large language models. For example, the training of GPT-2 (i.e. a 1.5-billion-parameter model) in 2019 cost $50,000, while training of the PaLM (i.e. a 540-billion-parameter model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million. The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". GPT-1 of 2018 has 117 million parameters.[citation needed] Before being fine-tuned, most LLMs are next-token predictors. The fine-tuning shapes the LLM's behavior via techniques like reinforcement learning from human feedback (RLHF) or constitutional AI. Instruction fine-tuning is a form of supervised learning used to teach LLMs to follow user instructions. In 2022, OpenAI demonstrated InstructGPT, a version of GPT-3 similarly fine-tuned to follow instructions. Reinforcement learning from human feedback (RLHF) involves training a reward model to predict which text humans prefer. Then, the LLM can be fine-tuned through reinforcement learning to better satisfy this reward model. Since humans typically prefer truthful, helpful and harmless answers, RLHF favors such answers. Architecture LLMs are generally based on the transformer architecture, which leverages an attention mechanism that enables the model to process relationships between all elements in a sequence simultaneously, regardless of their distance from each other.[citation needed] In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M parameter sized) GPT-2 model has had twelve attention heads and a context window of only 1k tokens. In its medium version it has 345M parameters and contains 24 layers, each with 12 attention heads. For the training with gradient descent a batch size of 512 was utilized.[unreliable source?] Google's Gemini 1.5, introduced in February 2024, can have a context window of up to 1 million tokens. A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset. It can be either Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as next sentence prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus. During training, regularization loss is also used to stabilize training. However, regularization loss is usually not used during testing and evaluation. A mixture of experts (MoE) is a machine learning architecture in which multiple specialized neural networks ("experts") work together, with a gating mechanism that routes each input to the most appropriate expert(s). Mixtures of experts can reduce inference costs, as only a fraction of the parameters are used for each input. The approach was introduced in 2017 by Google researchers. Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have more than 100 billion parameters, which places them outside the range of most consumer electronics. Post-training quantization aims to decrease the space requirement by lowering precision of the parameters of a trained model, while preserving most of its performance. Quantization can be further classified as static quantization if the quantization parameters are determined beforehand (typically during a calibration phase), and dynamic quantization if the quantization is applied during inference. The simplest form of quantization simply truncates all the parameters to a given number of bits: this is applicable to static as well as dynamic quantization, but loses much precision. Dynamic quantization allows for the use of a different quantization codebook per layer, either a lookup table of values or a linear mapping (scaling factor and bias), at the cost of foregoing the possible speed improvements from using lower-precision arithmetic.[citation needed] Quantized models are typically seen as frozen with modification of weights (e.g. fine-tuning) only applied to the original model. It is possible to fine-tune quantized models using low-rank adaptation. Extensibility Beyond basic text generation, various techniques have been developed to extend LLM capabilities, including the use of external tools and data sources, improved reasoning on complex problems, and enhanced instruction-following or autonomy through prompting methods. In 2020, OpenAI researchers demonstrated that their new model GPT-3 could understand what format to use given a few rounds of Q and A (or other type of task) in the input data as example, thanks in part due to the RLHF technique. This technique, called few-shot prompting, allows LLMs to be adapted to any task without requiring fine-tuning. Also in 2022, it was found that the base GPT-3 model can generate an instruction based on user input. The generated instruction along with user input is then used as input to another instance of the model under a "Instruction: [...], Input: [...], Output:" format. The other instance is able to complete the output and often produces the correct answer in doing so. The ability to "self-instruct" makes LLMs able to bootstrap themselves toward a correct answer. An LLM can be turned into a chatbot by specializing it for conversation. User input is prefixed with a marker such as "Q:" or "User:" and the LLM is asked to predict the output after a fixed "A:" or "Assistant:". This type of model became commercially available in 2022 with ChatGPT, a sibling model of InstructGPT fine-tuned to accept and produce dialog-formatted text based on GPT-3.5. It could similarly follow user instructions. Before the stream of User and Assistant lines, a chat context usually starts with a few lines of overarching instructions, from a role called "developer" or "system" to convey a higher authority than the user's input. This is called a "system prompt".[citation needed] Retrieval-augmented generation (RAG) is an approach that integrates LLMs with document retrieval systems. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents. Tool use is a mechanism that enables LLMs to interact with external systems, applications, or data sources. It can allow for example to fetch real-time information from an API or to execute code. A program separate from the LLM watches the output stream of the LLM for a special tool-calling syntax. When these special tokens appear, the program calls the tool accordingly and feeds its output back into the LLM's input stream. Early tool-using LLMs were fine-tuned on the use of specific tools. But fine-tuning LLMs for the ability to read API documentation and call API correctly has greatly expanded the range of tools accessible to an LLM. Describing available tools in the system prompt can also make an LLM able to use tools. A system prompt instructing ChatGPT (GPT-4) to use multiple types of tools can be found online. An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions. But it can be transformed into an agent by adding supporting elements: the role (profile) and the surrounding environment of an agent can be additional inputs to the LLM, while memory can be integrated as a tool or provided as additional input. Instructions and input patterns are used to make the LLM plan actions and tool use is used to potentially carry out these actions. The ReAct pattern, a portmanteau of reason and act, constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment. In the DEPS ("describe, explain, plan and select") method, an LLM is first connected to the visual world via image descriptions. It is then prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and the environmental feedback it receives. The Reflexion method constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are stored as a form of long-term memory and given to the agent in the subsequent episodes. Monte Carlo tree search can use an LLM as rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as world model. For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent. Alternatively, it can propose increasingly difficult tasks for curriculum learning. Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning. Multiple agents with memory can interact socially. LLMs are conventionally trained to generate an output without generating intermediate steps. As a result, their performance tends to be subpar on complex questions requiring (at least in humans) intermediate steps of thought. Early research demonstrated that inserting intermediate "scratchpad" computations could improve performance on such tasks. Later methods overcame this deficiency more systematically by breaking tasks into smaller steps for the LLM, either manually or automatically. Prompt chaining was introduced in 2022. In this method, a user manually breaks a complex problem down into several steps. In each step, the LLM receives as input a prompt telling it what to do and some results from preceding steps. The result from one step is then reused in a next step, until a final answer is reached. The ability of an LLM to follow instructions means that even non-experts can write a successful collection of stepwise prompts given a few rounds of trial and error. A 2022 paper demonstrated a separate technique called chain-of-thought prompting, which makes the LLM break the question down autonomously. An LLM is given some examples where the "assistant" verbally breaks down the thought process before arriving at an answer. The LLM mimics these examples and also tries to spend some time generating intermediate steps before providing the final answer. This additional step elicited by prompting improves the correctness of the LLM on relatively complex questions. On math word questions, a prompted model can exceed even fine-tuned GPT-3 with a verifier. Chain-of-thought can also be elicited by simply adding an instruction like "Let's think step by step" to the prompt, in order to encourage the LLM to proceed methodically instead of trying to directly guess the answer. In late 2024, a new approach to LLM development emerged with "reasoning models". These are trained to generate step-by-step analysis before producing final answers, enabling better results on complex tasks, for instance in mathematics, coding and logic. OpenAI introduced this concept with their o1 model in September 2024, followed by o3 in April 2025. On the International Mathematics Olympiad qualifying exam problems, GPT-4o achieved 13% accuracy while o1 reached 83%. In January 2025, the Chinese company DeepSeek released DeepSeek-R1, a 671-billion-parameter open-weight reasoning model that achieved comparable performance to OpenAI's o1 while being significantly more cost-effective to operate. Unlike proprietary models from OpenAI, DeepSeek-R1's open-weight nature allowed researchers to study and build upon the algorithm, though its training data remained private. These reasoning models typically require more computational resources per query compared to traditional LLMs, as they perform more extensive processing to work through problems step by step. Inference optimization refers to techniques that improve LLM performance by applying additional computational resources during the inference process, rather than requiring model retraining. These approaches implement various state-of-the-art reasoning and decision-making strategies to enhance accuracy and capabilities. OptiLLM is an OpenAI API-compatible optimizing inference proxy that implements multiple inference optimization techniques simultaneously. The system acts as a transparent proxy that can work with any LLM provider, implementing techniques such as Monte Carlo tree search (MCTS), mixture of agents (MOA), best-of-N sampling, and chain-of-thought reflection. OptiLLM demonstrates that strategic application of computational resources at inference time can substantially improve model performance across diverse tasks, achieving significant improvements on benchmarks such as the AIME 2024 mathematics competition and various coding challenges. These inference optimization approaches represent a growing category of tools that enhance existing LLMs without requiring access to model weights or retraining, making advanced reasoning capabilities more accessible across different model providers and use cases. Forms of input and output Multimodality means having multiple modalities, where a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc. For example, Google PaLM model was fine-tuned into a multimodal model and applied to robotic control. LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs, and video inputs. GPT-4o can process and generate text, audio and images. Such models are sometimes called large multimodal models (LMMs). A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder E {\displaystyle E} . Make a small multilayer perceptron f {\displaystyle f} , so that for any image y {\displaystyle y} , the post-processed vector f ( E ( y ) ) {\displaystyle f(E(y))} has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability. This type of method, where embeddings from multiple modalities are fused and the predictor is trained on the combined embeddings, is called early fusion. Another method, called intermediate fusion, involves each modality being first processed independently to obtain modality-specific representations; then these intermediate representations are fused together. In general, cross-attention is used for integrating information from different modalities. As an example, the Flamingo model uses cross-attention layers to inject visual information into its pre-trained language model. LLMs can handle programming languages similarly to how they handle natural languages. No special change in token handling is needed as code, like human language, is represented as plain text. LLMs can generate code based on problems or instructions written in natural language. They can also describe code in natural language or translate it into other programming languages. They were originally used as a code completion tool, but advances have moved them towards automatic programming. Services such as GitHub Copilot offer LLMs specifically trained, fine-tuned, or prompted for programming. In computational biology, transformer-base architectures, such as DNA LLMs, have also proven useful in analyzing biological sequences: protein, DNA, and RNA. With proteins they appear able to capture a degree of "grammar" from the amino-acid sequence, by mapping that sequence into an embedding. On tasks such as structure prediction and mutational outcome prediction, a small model using an embedding as input can approach or exceed much larger models using multiple sequence alignments (MSA) as input. ESMFold, Meta Platforms' embedding-based method for protein structure prediction, runs an order of magnitude faster than AlphaFold2 thanks to the removal of an MSA requirement and a lower parameter count due to the use of embeddings. Meta hosts ESM Atlas, a database of 772 million structures of metagenomic proteins predicted using ESMFold. An LLM can also design proteins unlike any seen in nature. Nucleic acid models have proven useful in detecting regulatory sequences, sequence classification, RNA-RNA interaction prediction, and RNA structure prediction. Properties The performance of an LLM after pretraining largely depends on the: Scaling laws are empirical statistical laws that predict LLM performance based on such factors. One particular scaling law ("Chinchilla scaling") for LLM autoregressively trained for one epoch, with a log-log learning rate schedule, states that: { C = C 0 N D L = A N α + B D β + L 0 {\displaystyle {\begin{cases}C=C_{0}ND\\[6pt]L={\frac {A}{N^{\alpha }}}+{\frac {B}{D^{\beta }}}+L_{0}\end{cases}}} where the variables are and the statistical hyper-parameters are Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of performance achieved by smaller models. However, this linearity may be punctuated by "break(s)" in the scaling law, where the slope of the line changes abruptly, and where larger models acquire "emergent abilities". They arise from the complex interaction of the model's components and are not explicitly programmed or designed. One of the emergent abilities is in-context learning from example demonstrations. In-context learning is involved in tasks, such as: Schaeffer et al. argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well. Let x {\displaystyle x} be the number of parameter count, and y {\displaystyle y} be the performance of the model. Interpretation Mechanistic interpretability seeks to precisely identify and understand how individual neurons or circuits within LLMs produce specific behaviors or outputs. By reverse-engineering model components at a granular level, researchers aim to detect and mitigate safety concerns such as emergent harmful behaviors, biases, deception, or unintended goal pursuit before deployment. Mechanistic interpretability research has been conducted at organizations like Anthropic and OpenAI, although understanding the inner workings of LLMs remains difficult.[citation needed] The reverse-engineering may lead to the discovery of algorithms that approximate inferences performed by an LLM. For instance, the authors trained small transformers on modular arithmetic addition. The resulting models were reverse-engineered, and it turned out they used discrete Fourier transform. The training of the model also highlighted a phenomenon called grokking, in which the model initially memorizes the training set (overfitting), and later suddenly learns to actually perform the calculation. NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense". Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to "understand" certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system": "Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?" Ilya Sutskever argues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation. Some researchers characterize LLMs as "alien intelligence". For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding." In contrast, some skeptics of LLM understanding believe that existing LLMs are "simply remixing and recombining existing writing", a phenomenon known as stochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability. For example, GPT-4 has natural deficits in planning and in real-time learning. Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed "hallucination". Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input. Neuroscientist Terrence Sejnowski has argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate". Efforts to reduce or compensate for hallucinations have employed automated reasoning, retrieval-augmented generation (RAG), fine-tuning, and other methods.[citation needed] The matter of LLM's exhibiting intelligence or understanding has two main aspects—the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human-like language. These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented neural theory of language (NTL) as a computational basis for using language as a model of learning tasks and understanding. The NTL model outlines how specific neural structures of the human brain shape the nature of thought and language and in turn what are the computational properties of such neural systems that can be applied to model thought and language in a computer system. After a framework for modeling language in a computer systems was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book titled The Language Myth: Why Language Is Not An Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human-like language. Evaluation The canonical measure of the performance of any language model is its perplexity on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log likelihood per token. log ( Perplexity ) = − 1 N ∑ i = 1 N log ( Pr ( token i ∣ context for token i ) ) {\displaystyle \log({\text{Perplexity}})=-{\frac {1}{N}}\sum _{i=1}^{N}\log(\Pr({\text{token}}_{i}\mid {\text{context for token}}_{i}))} Here, N {\displaystyle N} is the number of tokens in the text corpus, and "context for token i {\displaystyle i} " depends on the specific type of LLM. If the LLM is autoregressive, then "context for token i {\displaystyle i} " is the segment of text appearing before token i {\displaystyle i} . If the LLM is masked, then "context for token i {\displaystyle i} " is the segment of text surrounding token i {\displaystyle i} . Because language models may overfit to training data, models are usually evaluated by their perplexity on a test set. This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set. In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon. This relationship is mathematically expressed as Entropy = log 2 ( Perplexity ) {\displaystyle {\text{Entropy}}=\log _{2}({\text{Perplexity}})} . Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization. Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, due to the variance in tokenization methods across different LLMs, BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word. In the evaluation and comparison of language models, cross-entropy is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions. Due to their ability to accurately predict the next token, LLMs are highly capable in lossless compression. A 2023 study by DeepMind showed that the model Chinchilla, despite being trained primarily on text, was able to compress ImageNet to 43% of its size, beating PNG with 58%. Benchmarks are used to evaluate LLM performance on specific tasks. Tests evaluate capabilities such as general knowledge, bias, commonsense reasoning, question answering, and mathematical problem-solving. Composite benchmarks examine multiple capabilities. Results are often sensitive to the prompting method. A question-answering benchmark is termed "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be combined with text that includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."). Otherwise, the task is considered "closed book", and the model must draw solely on its training. Examples include GLUE, SuperGLUE, MMLU, BIG-bench, HELM, and HLE (Humanity's Last Exam). LLM bias may be assessed through benchmarks such as CrowS-Pairs (Crowdsourced Stereotype Pairs), Stereo Set, and Parity Benchmark. Fact-checking and misinformation detection benchmarks are available. A 2023 study compared the fact-checking accuracy of LLMs including ChatGPT 3.5 and 4.0, Bard, and Bing AI against independent fact-checkers such as PolitiFact and Snopes. The results demonstrated moderate proficiency, with GPT-4 achieving the highest accuracy at 71%, lagging behind human fact-checkers. An earlier standard tested using a portion of the evaluation dataset. It became more common to evaluate a pre-trained model directly through prompting techniques. Researchers vary in how they formulate prompts for particular tasks, particularly with respect to the number of correct examples attached to the prompt (i.e. the value of n in n-shot prompting). In addition to standard NLP benchmarks, LLMs have been evaluated as substitutes for human annotators. Several studies find that models such as GPT-3.5 and GPT-4 can outperform crowd workers or student coders on a range of text-annotation tasks, including moderation and classification of political content in English and Spanish news. Typical datasets consist of pairs of questions and correct answers, for example, ("Have the San Jose Sharks won the Stanley Cup?", "No"). Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD. Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: "Alice was friends with Bob. Alice went to visit her friend, ____". Datasets are of varying quality and may contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality. LLMs' rapid improvement regularly renders benchmarks obsolete, with the models exceeding the performance of human annotators. In addition, "shortcut learning" allows AIs to "cheat" on multiple-choice tests by using statistical correlations in superficial test question wording to guess the correct responses, without considering the specific question. Some datasets are adversarial, focusing on problems that confound LLMs. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions that stump LLMs by mimicking falsehoods to which they were exposed during training. For example, an LLM may answer "No" to the question "Can you teach an old dog new tricks?" because of its exposure to the English idiom you can't teach an old dog new tricks, even though this is not literally true. Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model. The resulting problems are trivial for humans but defeated LLMs. Sample questions: We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man... BERT selects 2 as the most likely completion, though the correct answer is 4. Limitations and challenges Despite sophisticated architectures and massive scale, large language models exhibit persistent and well-documented limitations that constrain their deployment in high-stakes applications. Hallucinations represent a fundamental challenge, wherein models generate syntactically fluent text that appears factually sound, but is internally inconsistent with training data or factually incorrect. These hallucinations arise partly through memorization of training data combined with extrapolation beyond factual boundaries,[citation needed] with evaluations demonstrating that models can output verbatim passages from training data, when subjected to specific prompting sequences. While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups. Gender bias manifests through stereotypical occupational associations, wherein models disproportionately assign nursing roles to women and engineering roles to men, reflecting systematic imbalances in training data demographics.[better source needed] Language-based bias emerges from overrepresentation of English text in training corpora, which systematically downplays non-English perspectives and imposes English-centric worldviews through default response patterns. Due to the dominance of English-language content in LLM training data, models tend to favor English-language perspectives over those from minority languages. This bias is particularly evident when responding to English queries, where models may present Western interpretations of concepts from other cultures, such as Eastern religious practices. AI models can reinforce a wide range of stereotypes due to generalization, including those based on gender, ethnicity, age, nationality, religion, or occupation. When replacing human representatives, this can lead to outputs that homogenize, or generalize groups of people. In 2023, LLMs assigned roles and characteristics based on traditional gender norms. For example, models might associate nurses or secretaries predominantly with women and engineers or CEOs with men due to the frequency of these associations in documented reality. In 2025, further research showed labs train to balance bias, but that testing for this places the model in a testmode, changing the natural distribution of model bias to prompts that do not include gender-specific keywords. Selection bias refers the inherent tendency of large language models to favor certain option identifiers irrespective of the actual content of the options. This bias primarily stems from token bias—that is, the model assigns a higher a priori probability to specific answer tokens (such as "A") when generating responses. As a result, when the ordering of options is altered (for example, by systematically moving the correct answer to different positions), the model's performance can fluctuate significantly. This phenomenon undermines the reliability of large language models in multiple-choice settings. Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data. Safety AI safety as a professional discipline prioritizes systematic identification and mitigation of operational risks across model architecture, training data, and deployment governance, and it emphasizes engineering and policy interventions over media framings that foreground speculative existential scenarios. As of 2025, prompt injection represents a significant risk to consumers and businesses using agentic features with access to their private data. Researchers target concrete failure modes, including memorization and copyright leakage, security exploits such as prompt injection, algorithmic bias manifesting as stereotyping, dataset selection effects, and political skew, methods for reducing high energy and carbon costs of large-scale training, and measurable cognitive and mental health impacts of conversational agents on users, while engaging empirical and ethical uncertainty about claims of machine sentience, and applying mitigation measures such as dataset curation, input sanitization, model auditing, scalable oversight, and governance frameworks. AI labs treat CBRN defense (chemical, biological, radiological, and nuclear defense) and similar topics as high-consequence misuse attempt to apply various techniques to reduce potential harms.[citation needed] Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse. For example, the availability of large language models could reduce the skill level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens. LLM applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, a 2023 study proposed a method for circumventing LLM safety systems. In 2025, The American Sunlight Project, a non-profit, published a study showing evidence that the so-called Pravda network, a pro-Russia propaganda aggregator, was strategically placing web content through mass publication and duplication with the intention of biasing LLM outputs. The American Sunlight Project coined this technique "LLM grooming", and pointed to it as a new tool of weaponizing AI to spread disinformation and harmful content. Similarly, Yongge Wang illustrated in 2024 how a potential criminal could potentially bypass GPT-4o's safety controls to obtain information on establishing a drug trafficking operation. External filters, circuit breakers and overrides have been posed as solutions.[citation needed] Sycophancy is a model's tendency to agree with, flatter, or validate a user's stated beliefs rather than to prioritize factuality or corrective information, and "glazing" is an emergent public shorthand for persistent, excessive agreeability observed across multi-turn interactions and productized assistants. Continued sycophancy has led to the observation of getting "1-shotted", denoting instances where conversational interaction with a large language model produces a lasting change in a user's beliefs or decisions, similar to the negative effects of psychedelics, and controlled experiments show that short LLM dialogues can generate measurable opinion and confidence shifts comparable to human interlocutors. Empirical analyses attribute part of the effect to human preference signals and preference models that reward convincingly written agreeable responses, and subsequent work has extended evaluation to multi-turn benchmarks and proposed interventions such as synthetic-data finetuning, adversarial evaluation, targeted preference-model reweighting, and multi-turn sycophancy benchmarks to measure persistence and regression risk.[citation needed] Industry responses have combined research interventions with product controls, for example Google and other labs publishing synthetic-data and fine-tuning interventions and OpenAI rolling back an overly agreeable GPT-4o update while publicly describing changes to feedback collection, personalization controls, and evaluation procedures to reduce regression risk and improve long-term alignment with user-level safety objectives.[citation needed] Mainstream culture has reflected anxieties about this dynamic where South Park satirized overreliance on ChatGPT and the tendency of assistants to flatter user beliefs in Season 27 episode "Sickofancy", and continued the themes across the following season, which commentators interpreted as a critique of tech sycophancy and uncritical human trust in AI systems. A problem with the primitive dialog or task format is that users can create messages that appear to come from the assistant or the developer. This may result in some of the model's safeguards being overcome (jailbreaking), a problem called prompt injection. Attempts to remedy this issue include versions of the Chat Markup Language where user input is clearly marked as such, though it is still up to the model to understand the separation between user input and developer prompts. Newer models exhibit some resistance to jailbreaking through separation of user and system prompts. LLMs still have trouble differentiating user instructions from instructions in content not authored by the user, such as in web pages and uploaded files. Adversarial robustness remains underdeveloped, with models vulnerable to prompt injection attacks and jailbreaking through carefully crafted user inputs that bypass safety training mechanisms.[citation needed] Researchers from Anthropic found that it was possible to create "sleeper agents", models with hidden functionalities that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior to make insecure actions. For example, an LLM could produce safe code except on a specific date, or if the prompt contains a specific tag. These functionalities were found to be difficult to detect or remove via safety training. Societal concerns Legal and commercial responses to memorization and training-data practices have accelerated, producing a mix of rulings, ongoing suits, and large settlements that turn on factual details such as how data were acquired and retained and whether use for model training is sufficiently "transformative" to qualify as fair use. In 2025, Anthropic reached a preliminary agreement to settle a class action by authors for about $1.5 billion after a judge found the company had stored millions of pirated books in a library, despite the judge describing aspects of training as transformative. Meta obtained a favorable judgment in mid-2025 in a suit by thirteen authors after the court found the plaintiffs had not developed a record sufficient to show infringement in that limited case. OpenAI continues to face multiple suits by authors and news organizations with mixed procedural outcomes and contested evidentiary issues. Memorization was an emergent behavior in early, completion language models in which long strings of text are occasionally output verbatim from training data, contrary to typical behavior of traditional artificial neural networks. Evaluations of controlled LLM output measure the amount memorized from training data (focused on GPT-2-series models) as variously over 1% for exact duplicates or up to about 7%. A 2023 study showed that when ChatGPT 3.5 turbo was prompted to repeat the same word indefinitely, after a few hundreds of repetitions, it would start outputting excerpts from its training data. In 2023, Nature Biomedical Engineering wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time." Brinkmann et al. (2023) also argue that LLMs are transforming processes of cultural evolution by shaping processes of variation, transmission, and selection. As of October 2025, these early claims have yet to transpire and several HBR reports surface questions on the impact of AI on productivity. The energy demands of LLMs have grown along with their size and capabilities. Data centers that enable LLM training require substantial amounts of electricity. Much of that electricity is generated by non-renewable resources that create greenhouse gases and contribute to climate change. According to a study by Luccioni, Jernite and Strubell (2024), simple classification tasks performed by AI models consume on average 0.002 to 0.007 Wh per prompt (about 9% of a smartphone charge for 1,000 prompts). Text generation and text summarization each require around 0.05 Wh per prompt on average, while image generation is the most energy-intensive, averaging 2.91 Wh per prompt. The least efficient image generation model used 11.49 Wh per image, roughly equivalent to half a smartphone charge. Web scraping is used to gather training data for LLMs. This produces large volumes of traffic which has led to denial-of-service issues with many websites. The situation has been described as "a DDoS on the entire internet" and in some cases scrapers make up the majority of traffic to a site. AI web crawlers may bypass the methods that are usually used to block web scrapers, such as robots.txt files, blocking user-agents and filtering suspicious traffic. Website operators have resorted to novel methods such as AI tarpits, but some fear that tarpits will only worsen the burden on servers. Clinical and mental health contexts present emerging applications alongside significant safety concerns. Research and social media posts suggest that some individuals are using LLMs to seek therapy or mental health support. In early 2025, a survey by Sentio University found that nearly half (48.7%) of 499 U.S. adults with ongoing mental health conditions who had used LLMs reported turning to them for therapy or emotional support, including help with anxiety, depression, loneliness, and similar concerns. LLMs can produce hallucinations—plausible but incorrect statements—which may mislead users in sensitive mental health contexts. Research also shows that LLMs may express stigma or inappropriate agreement with maladaptive thoughts, reflecting limitations in replicating the judgment and relational skills of human therapists. Evaluations of crisis scenarios indicate that some LLMs lack effective safety protocols, such as assessing suicide risk or making appropriate referrals. Contemporary AI practitioners generally agree that present-day large language models do not exhibit sentience. A minority view argues that even if there is a small chance that a given software system can have subjective experience, which some philosophers suggest is possible, then ethical considerations around potential large-scale suffering in AI systems may need to be taken seriously—similar to considerations given to animal welfare. Proponents of this view have proposed various precautionary measures like moratoriums on AI development and induced amnesia to address these ethical concerns. Some existential philosophers argue there is no generally accepted way to determine if an LLM is conscious, given the inherent difficulty of measuring subjective experience. The 2022 Google LaMDA incident, where engineer Blake Lemoine claimed that the model was conscious, highlighted how LLMs can convince users that they are sentient through responses that do not prove sentience. Google described the engineer's claims as unfounded, and he was dismissed. See also References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Small_language_model#cite_ref-1] | [TOKENS: 386] |
Contents Small language model Small language models or compact language models are artificial intelligence language models designed for human natural language processing including language and text generation. They are smaller in scale and scope than large language models. A large language model typically contains hundreds of billions of training parameters, with some models exceeding a trillion parameters. This substantial parameter count enables the model to encode vast amounts of information, thereby improving the generalizability and accuracy of its outputs. However, training such models demands enormous computational resources, rendering it infeasible for an individual to do so using a single computer and graphics processing unit. Small language models, on the other hand, use far fewer parameters, typically ranging from a few thousand to a few hundred million. This make them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device. Most contemporary (2020s) small language models use the same architecture as a large language model, but with a smaller parameter count and sometimes lower arithmetic precision. Parameter count is reduced by a combination of knowledge distillation and pruning. Precision can be reduced by quantization. Work on large language models mostly translate to small language models: pruning and quantization are also widely used to speed up large language models. Models Some notable models are: Phi-4 14B is marginally "small" at best, but Microsoft does market it as a small model. Language model with small pre-training dataset Traditional AI language systems need enormous computers and vast amounts of data. Pre-training matters, even tiny models show significant performance improvements when pre-trained performance increases with larger pre-training datasets. Classification accuracy improves when pre-training and test datasets share similar tokens. Shallow architectures can replicate deep model performance through collaborative learning. See also References This statistics-related article is a stub. You can help Wikipedia by adding missing information. This article about natural language processing is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Small_language_model#cite_ref-3] | [TOKENS: 386] |
Contents Small language model Small language models or compact language models are artificial intelligence language models designed for human natural language processing including language and text generation. They are smaller in scale and scope than large language models. A large language model typically contains hundreds of billions of training parameters, with some models exceeding a trillion parameters. This substantial parameter count enables the model to encode vast amounts of information, thereby improving the generalizability and accuracy of its outputs. However, training such models demands enormous computational resources, rendering it infeasible for an individual to do so using a single computer and graphics processing unit. Small language models, on the other hand, use far fewer parameters, typically ranging from a few thousand to a few hundred million. This make them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device. Most contemporary (2020s) small language models use the same architecture as a large language model, but with a smaller parameter count and sometimes lower arithmetic precision. Parameter count is reduced by a combination of knowledge distillation and pruning. Precision can be reduced by quantization. Work on large language models mostly translate to small language models: pruning and quantization are also widely used to speed up large language models. Models Some notable models are: Phi-4 14B is marginally "small" at best, but Microsoft does market it as a small model. Language model with small pre-training dataset Traditional AI language systems need enormous computers and vast amounts of data. Pre-training matters, even tiny models show significant performance improvements when pre-trained performance increases with larger pre-training datasets. Classification accuracy improves when pre-training and test datasets share similar tokens. Shallow architectures can replicate deep model performance through collaborative learning. See also References This statistics-related article is a stub. You can help Wikipedia by adding missing information. This article about natural language processing is a stub. You can help Wikipedia by adding missing information. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Template_talk:Statistics-stub] | [TOKENS: 50] |
Template talk:Statistics-stub Talk pages are where people discuss how to make content on Wikipedia the best that it can be. You can use this page to start a discussion with others about how to improve the "Template:Statistics-stub" page. |
======================================== |
[SOURCE: https://en.wikipedia.org/w/index.php?title=Small_language_model&action=edit§ion=3] | [TOKENS: 1431] |
Editing Small language model (section) Copy and paste: – — ° ′ ″ ≈ ≠ ≤ ≥ ± − × ÷ ← → · § Cite your sources: <ref></ref> {{}} {{{}}} | [] [[]] [[Category:]] #REDIRECT [[]] <s></s> <sup></sup> <sub></sub> <code></code> <pre></pre> <blockquote></blockquote> <ref></ref> <ref name="" /> {{Reflist}} <references /> <includeonly></includeonly> <noinclude></noinclude> {{DEFAULTSORT:}} <nowiki></nowiki> <!-- --> <span class="plainlinks"></span> Symbols: ~ | ¡ ¿ † ‡ ↔ ↑ ↓ • ¶ # ∞ ‹› «» ¤ ₳ ฿ ₵ ¢ ₡ ₢ $ ₫ ₯ € ₠ ₣ ƒ ₴ ₭ ₤ ℳ ₥ ₦ ₧ ₰ £ ៛ ₨ ₪ ৳ ₮ ₩ ¥ ♠ ♣ ♥ ♦ 𝄫 ♭ ♮ ♯ 𝄪 © ¼ ½ ¾ Latin: A a Á á À à  â Ä ä Ǎ ǎ Ă ă Ā ā à ã Å å Ą ą Æ æ Ǣ ǣ B b C c Ć ć Ċ ċ Ĉ ĉ Č č Ç ç D d Ď ď Đ đ Ḍ ḍ Ð ð E e É é È è Ė ė Ê ê Ë ë Ě ě Ĕ ĕ Ē ē Ẽ ẽ Ę ę Ẹ ẹ Ɛ ɛ Ǝ ǝ Ə ə F f G g Ġ ġ Ĝ ĝ Ğ ğ Ģ ģ H h Ĥ ĥ Ħ ħ Ḥ ḥ I i İ ı Í í Ì ì Î î Ï ï Ǐ ǐ Ĭ ĭ Ī ī Ĩ ĩ Į į Ị ị J j Ĵ ĵ K k Ķ ķ L l Ĺ ĺ Ŀ ŀ Ľ ľ Ļ ļ Ł ł Ḷ ḷ Ḹ ḹ M m Ṃ ṃ N n Ń ń Ň ň Ñ ñ Ņ ņ Ṇ ṇ Ŋ ŋ O o Ó ó Ò ò Ô ô Ö ö Ǒ ǒ Ŏ ŏ Ō ō Õ õ Ǫ ǫ Ọ ọ Ő ő Ø ø Œ œ Ɔ ɔ P p Q q R r Ŕ ŕ Ř ř Ŗ ŗ Ṛ ṛ Ṝ ṝ S s Ś ś Ŝ ŝ Š š Ş ş Ș ș Ṣ ṣ ß T t Ť ť Ţ ţ Ț ț Ṭ ṭ Þ þ U u Ú ú Ù ù Û û Ü ü Ǔ ǔ Ŭ ŭ Ū ū Ũ ũ Ů ů Ų ų Ụ ụ Ű ű Ǘ ǘ Ǜ ǜ Ǚ ǚ Ǖ ǖ V v W w Ŵ ŵ X x Y y Ý ý Ŷ ŷ Ÿ ÿ Ỹ ỹ Ȳ ȳ Z z Ź ź Ż ż Ž ž ß Ð ð Þ þ Ŋ ŋ Ə ə Greek: Ά ά Έ έ Ή ή Ί ί Ό ό Ύ ύ Ώ ώ Α α Β β Γ γ Δ δ Ε ε Ζ ζ Η η Θ θ Ι ι Κ κ Λ λ Μ μ Ν ν Ξ ξ Ο ο Π π Ρ ρ Σ σ ς Τ τ Υ υ Φ φ Χ χ Ψ ψ Ω ω {{Polytonic|}} Cyrillic: А а Б б В в Г г Ґ ґ Ѓ ѓ Д д Ђ ђ Е е Ё ё Є є Ж ж З з Ѕ ѕ И и І і Ї ї Й й Ј ј К к Ќ ќ Л л Љ љ М м Н н Њ њ О о П п Р р С с Т т Ћ ћ У у Ў ў Ф ф Х х Ц ц Ч ч Џ џ Ш ш Щ щ Ъ ъ Ы ы Ь ь Э э Ю ю Я я ́ IPA: t̪ d̪ ʈ ɖ ɟ ɡ ɢ ʡ ʔ ɸ β θ ð ʃ ʒ ɕ ʑ ʂ ʐ ç ʝ ɣ χ ʁ ħ ʕ ʜ ʢ ɦ ɱ ɳ ɲ ŋ ɴ ʋ ɹ ɻ ɰ ʙ ⱱ ʀ ɾ ɽ ɫ ɬ ɮ ɺ ɭ ʎ ʟ ɥ ʍ ɧ ʼ ɓ ɗ ʄ ɠ ʛ ʘ ǀ ǃ ǂ ǁ ɨ ʉ ɯ ɪ ʏ ʊ ø ɘ ɵ ɤ ə ɚ ɛ œ ɜ ɝ ɞ ʌ ɔ æ ɐ ɶ ɑ ɒ ʰ ʱ ʷ ʲ ˠ ˤ ⁿ ˡ ˈ ˌ ː ˑ ̪ {{IPA|}} This page is a member of 3 hidden categories (help): |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Alberto_Ciaramella] | [TOKENS: 552] |
Contents Alberto Ciaramella Alberto Ciaramella (born 1947) is an Italian computer engineer and scientist. He is notable for extensive pioneering contributions in the field of speech technologies and applied natural language processing, most of them at CSELT and Loquendo, with the amount of 40 papers and four patents. Biography Ciaramella obtained the Laurea in Electronic Engineering and the Post-Laurea in 1969 at La Sapienza University in Rome with Prof. Antonio Ruberti as supervisor of his thesis. Then, he joined CSELT as a research engineer. In 1975 he patented at CSELT one of the first architecture-independent bootstrap devices that allowed the Gruppi Speciali (the first electronic Italian telephone switch and the most advanced project in Italian in the seventies) to start up by pushing a single button from a ROM memory in case of failure. During the 80s, Ciaramella took part in some European projects (Esprit P26, SUNDIAL) in the pioneering field of speech recognition and dialogue systems on many European languages, such as Italian, during which he proposed a method to evaluate the quality of the dialogue systems by comparing the meanings. In 1983, he co-authored one of the first international patents on speaker recognition, a new research field at that time, applied commercially in a speech recognition software licensed by CSELT. In 1990, he co-authored one of the first international patents of a real-time speech recognition system integrated in a microprocessor suitable for being used by a telecommunications company: the microprocessor was named RIPAC (Riconoscitore di Parlato Connesso - as stated in the patent description itself). Extensive research was conducted on the Hidden Markov Model aimed at speech recognition tasks, by using small such as big dictionaries and applied to many cases - e.g. the recognition of the children's voice, or browser navigation by voice. Other contributions include tests and proposals in international communication standards, such as VoiceXML. In 2001, the CSELT's voice technology group became Loquendo, and Alberto Ciaramella became Competitive intelligence supervisor of the company. In 2005, Ciaramella founded IntelliSemantic at the Incubator of Politecnico di Torino, an innovative company that works in the field of Competitive Business Intelligence. Also within his present company, he continues the research in the field of applied language technologies. In 2010, he co-authored a paper about his view on the application of the emerging "semantic" technologies to patent analysis, which became popular in the field of Patent Informatics, and took part in the Topas European project focused on patent summarisation. Bibliography References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Statistics] | [TOKENS: 7237] |
Contents Statistics Statistics (from German: Statistik, orig. "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments. When census data (comprising every member of the target population) cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences made using mathematical statistics employ the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the collection of data leading to a test of the relationship between two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (null hypothesis is rejected when it is in fact true, giving a "false positive") and Type II errors (null hypothesis fails to be rejected when it is in fact false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Statistical measurement processes are also prone to error in regards to the data that they generate. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also occur. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems. Introduction "Statistics is both the science of uncertainty and the technology of extracting information from data." - featured in the International Encyclopedia of Statistical Science. Statistics is the discipline that deals with data, facts and figures with which meaningful information is inferred. Data may represent a numerical value, in form of quantitative data, or a label, as with qualitative data. Data may be collected, presented and summarised, in one of two methods called descriptive statistics. Two elementary summaries of data, singularly called a statistic, are the mean and dispersion. Whereas inferential statistics interprets data from a population sample to induce statements and predictions about a population. Statistics is regarded as a body of science or a branch of mathematics. It is based on probability, a branch of mathematics that studies random events. Statistics is considered the science of uncertainty. This arises from the ways to cope with measurement and sampling error as well as dealing with uncertanties in modelling. Although probability and statistics were once paired together as a single subject, they are conceptually distinct from one another. The former is based on deducing answers to specific situations from a general theory of probability, meanwhile statistics induces statements about a population based on a data set. Statistics serves to bridge the gap between probability and applied mathematical fields. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is generally concerned with the use of data in the context of uncertainty and decision-making in the face of uncertainty. Statistics is indexed at 62, a subclass of probability theory and stochastic processes, in the Mathematics Subject Classification. Mathematical statistics is covered in the range 276-280 of subclass QA (science > mathematics) in the Library of Congress Classification. The word statistics ultimately comes from the Latin word Status, meaning "situation" or "condition" in society, which in late Latin adopted the meaning "state". Derived from this, political scientist Gottfried Achenwall, coined the German word statistik (a summary of how things stand). In 1770, the term entered the English language through German and referred to the study of political arrangements. The term gained its modern meaning in the 1790s in John Sinclair's works. In modern German, the term statistik is synonymous with mathematical statistics. The term statistic, in singular form, is used to describe a function that returns its value of the same name. Statistical data When full census data cannot be collected, statisticians collect sample data by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through statistical models. To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent that the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. There are also methods of experimental design that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population. Sampling theory is part of the mathematical discipline of probability theory. Probability is used in mathematical statistics to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite direction—inductively inferring from samples to the parameters of a larger or total population. A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable are observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements with different levels using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated. While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data—like natural experiments and observational studies—for which a statistician would use a modified, more structured estimation method (e.g., difference in differences estimation and instrumental variables, among many others) that produce consistent estimators. The basic steps of a statistical experiment are: Experiments on human behavior have special concerns. The famous Hawthorne study examined changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness. The Hawthorne effect refers to finding that an outcome (in this case, worker productivity) changed due to observation itself. Those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed. An example of an observational study is one that explores the association between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a cohort study, and then look for the number of cases of lung cancer in each group. A case-control study is another type of observational study in which people with and without the outcome of interest (e.g. lung cancer) are invited to participate and their exposure histories are collected. Various attempts have been made to produce a taxonomy of levels of measurement. The psychophysicist Stanley Smith Stevens defined nominal, ordinal, interval, and ratio scales. Nominal measurements do not have meaningful rank order among values, and permit any one-to-one (injective) transformation. Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values, and permit any order-preserving transformation. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit), and permit any linear transformation. Ratio measurements have both a meaningful zero value and the distances between different measurements defined, and permit any rescaling transformation. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative variables, which can be either discrete or continuous, due to their numerical nature. Such distinctions can often be loosely correlated with data type in computer science, in that dichotomous categorical variables may be represented with the Boolean data type, polytomous categorical variables with arbitrarily assigned integers in the integral data type, and continuous variables with the real data type involving floating-point arithmetic. But the mapping of computer science data types to statistical data types depends on which categorization of the latter is being implemented. Other categorizations have been proposed. For example, Mosteller and Tukey (1977) distinguished grades, ranks, counted fractions, counts, amounts, and balances. Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. (See also: Chrisman (1998), van den Berg (1991).) The issue of whether or not it is appropriate to apply different kinds of statistical methods to data obtained from different kinds of measurement procedures is complicated by issues concerning the transformation of variables and the precise interpretation of research questions. "The relationship between the data and what they describe merely reflects the fact that certain kinds of statistical statements may have truth values which are not invariant under some transformations. Whether or not a transformation is sensible to contemplate depends on the question one is trying to answer.": 82 Methods A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. Consider independent identically distributed (IID) random variables with a given probability distribution: standard statistical inference and estimation theory defines a random sample as the random vector given by the column vector of these IID variables. The population being examined is described by a probability distribution that may have unknown parameters. A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters. Consider now a function of the unknown parameter: an estimator is a statistic used to estimate such function. Commonly used estimators include sample mean, unbiased sample variance and sample covariance. A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value. Between two estimators of a given parameter, the one with lower mean squared error is said to be more efficient. Furthermore, an estimator is said to be unbiased if its expected value is equal to the true value of the unknown parameter being estimated, and asymptotically unbiased if its expected value converges at the limit to the true value of such parameter. Other desirable properties for estimators include: UMVUE estimators that have the lowest variance for all possible values of the parameter to be estimated (this is usually an easier property to verify than efficiency) and consistent estimators which converges in probability to the true value of such parameter. This still leaves the question of how to obtain estimators in a given situation and carry the computation, several methods have been proposed: the method of moments, the maximum likelihood method, the least squares method and the more recent method of estimating equations. Interpretation of statistical information can often involve the development of a null hypothesis which is usually (but not necessarily) that no relationship exists among variables or that no change occurred over time. The alternative hypothesis is the name of the hypothesis that contradicts the null hypothesis. The best illustration for a novice is the predicament encountered by a criminal trial. The null hypothesis, H0, asserts that the defendant is innocent, whereas the alternative hypothesis, H1, asserts that the defendant is guilty. The indictment comes because of suspicion of the guilt. The H0 (the status quo) stands in opposition to H1 and is maintained unless H1 is supported by evidence "beyond a reasonable doubt". However, "failure to reject H0" in this case does not imply innocence, but merely that the evidence was insufficient to convict. So the jury does not necessarily accept H0 but fails to reject H0. While one can not "prove" a null hypothesis, one can test how close it is to being true with a power test, which tests for type II errors. Working from a null hypothesis, two broad categories of error are recognized: Standard deviation refers to the extent to which individual observations in a sample differ from a central value, such as the sample or population mean, while Standard error refers to an estimate of difference between sample mean and population mean. A statistical error is the amount by which an observation differs from its expected value. A residual is the amount an observation differs from the value the estimator of the expected value assumes on a given sample (also called prediction). Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error. Many statistical methods seek to minimize the residual sum of squares, and these are called "methods of least squares" in contrast to Least absolute deviations. The latter gives equal weight to small and big errors, while the former gives more weight to large errors. Residual sum of squares is also differentiable, which provides a handy property for doing regression. Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve. Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also be important. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems. Most studies only sample part of a population, so results do not fully represent the whole population. Any estimates obtained from the sample only approximate the population value. Confidence intervals allow statisticians to express how closely the sample estimate matches the true value in the whole population. Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%. From the frequentist perspective, such a claim does not even make sense, as the true value is not a random variable. Either the true value is or is not within the given interval. However, it is true that, before any data are sampled and given a plan for how to construct the confidence interval, the probability is 95% that the yet-to-be-calculated interval will cover the true value: at this point, the limits of the interval are yet-to-be-observed random variables. One approach that does yield an interval that can be interpreted as having a given probability of containing the true value is to use a credible interval from Bayesian statistics: this approach depends on a different way of interpreting what is meant by "probability", that is as a Bayesian probability. In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds. Statistics rarely give a simple Yes/No type answer to the question under analysis. Interpretation often comes down to the level of statistical significance applied to the numbers and often refers to the probability of a value accurately rejecting the null hypothesis (sometimes referred to as the p-value). The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator does not belong to the critical region given that the alternative hypothesis is true. The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false. Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably. Although in principle the acceptable level of statistical significance may be subject to debate, the significance level is the largest p-value that allows the test to reject the null hypothesis. This test is logically equivalent to saying that the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic. Therefore, the smaller the significance level, the lower the probability of committing type I error. Some problems are usually associated with this framework (See criticism of hypothesis testing): Some well-known statistical tests and procedures are: An alternative paradigm to the popular frequentist paradigm is to use Bayes' theorem to update the prior probability of the hypotheses in consideration based on the relative likelihood of the evidence gathered to obtain a posterior probability. Bayesian methods have been aided by the increase in available computing power to compute the posterior probability using numerical approximation techniques like Markov Chain Monte Carlo. For statistically modelling purposes, Bayesian models tend to be hierarchical, for example, one could model each YouTube channel as having video views distributed as a normal distribution with channel dependent mean and variance N ( μ i , σ i ) {\displaystyle {\mathcal {N}}(\mu _{i},\sigma _{i})} , while modeling the channel means as themselves coming from a normal distribution representing the distribution of average video view counts per channel, and the variances as coming from another distribution. The concept of using likelihood ratio can also be prominently seen in medical diagnostic testing. Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task. Mathematical statistics is the application of mathematics to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. All statistical analyses make use of at least some mathematics, and mathematical statistics can therefore be regarded as a fundamental component of general statistics. History Formal discussions on inference date back to the mathematicians and cryptographers of the Islamic Golden Age between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains one of the first uses of permutations and combinations, to list all possible Arabic words with and without vowels. Al-Kindi's Manuscript on Deciphering Cryptographic Messages gave a detailed description of how to use frequency analysis to decipher encrypted messages, providing an early example of statistical inference for decoding. Ibn Adlan (1187–1268) later made an important contribution on the use of sample size in frequency analysis. Although the term statistic was introduced by the Italian scholar Girolamo Ghilini in 1589 with reference to a collection of facts and information about a state, it was the German Gottfried Achenwall in 1749 who started using the term as a collection of quantitative information, in the modern use for this science. The earliest writing containing statistics in Europe dates back to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt. Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and natural and social sciences. The mathematical foundations of statistics developed from discussions concerning games of chance among mathematicians such as Gerolamo Cardano, Blaise Pascal, Pierre de Fermat, and Christiaan Huygens. Although the idea of probability was already examined in ancient and medieval law and philosophy (such as the work of Juan Caramuel), probability theory as a mathematical discipline only took shape at the very end of the 17th century, particularly in Jacob Bernoulli's posthumous work Ars Conjectandi. This was the first book where the realm of games of chance and the realm of the probable (which concerned opinion, evidence, and argument) were combined and submitted to mathematical analysis. The method of least squares was first described by Adrien-Marie Legendre in 1805, though Carl Friedrich Gauss presumably made use of it a decade earlier in 1795. In the 1830s-1850s, "statistical offices" and national "statistical societies" were founded in Europe and America, and in the mid-19th century, the idea arose of "organized contacts between the statisticians of different countries although informal contacts occurred earlier". In those days, the name "statistics" referred mainly to "matters of state", and British statisticians were often called "statists". Belgian scientist Adolphe Quetelet (1796–1874) introduced the notion of the "average man" (l'homme moyen) as a means of understanding complex social phenomena such as crime rates, marriage rates, and suicide rates. In 1853 Quetelet organised in Brussels the First International Statistical Congress in order to unify measurement in statistical research. The modern field of statistics emerged in the late 19th and early 20th century in three stages. The first wave, at the turn of the century, was led by the work of Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis, not just in science, but in industry and politics as well. Galton's contributions included introducing the concepts of standard deviation, correlation, regression analysis and the application of these methods to the study of the variety of human characteristics—height, weight and eyelash length among others. Pearson developed the Pearson product-moment correlation coefficient, defined as a product-moment, the method of moments for the fitting of distributions to samples and the Pearson distribution, among many other things. Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biostatistics (then called biometry), and the latter founded the world's first university statistics department at University College London. The second wave of the 1910s and 20s was initiated by William Sealy Gosset, and reached its culmination in the insights of Ronald Fisher, who wrote the textbooks that were to define the academic discipline in universities around the world. Fisher's most important publications were his 1918 seminal paper The Correlation between Relatives on the Supposition of Mendelian Inheritance (which was the first to use the statistical term, variance), his classic 1925 work Statistical Methods for Research Workers and his 1935 The Design of Experiments, where he developed rigorous design of experiments models. He originated the concepts of sufficiency, ancillary statistics, Fisher's linear discriminator and Fisher information. He also coined the term null hypothesis during the Lady tasting tea experiment, which "is never proved or established, but is possibly disproved, in the course of experimentation". In his 1930 book The Genetical Theory of Natural Selection, he applied statistics to various biological concepts such as Fisher's principle (which A. W. F. Edwards called "probably the most celebrated argument in evolutionary biology") and Fisherian runaway, a concept in sexual selection about a positive feedback runaway effect found in evolution. The final wave, which mainly saw the refinement and expansion of earlier developments, emerged from the collaborative work between Egon Pearson and Jerzy Neyman in the 1930s. They introduced the concepts of "Type II" error, power of a test and confidence intervals. Jerzy Neyman in 1934 showed that stratified random sampling was in general a better method of estimation than purposive (quota) sampling. Among the early attempts to measure national economic activity were those of William Petty in the 17th century. In the 20th century the uniform System of National Accounts was developed. Today, statistical methods are applied in all fields that involve decision making, for making accurate inferences from a collated body of data and for making decisions in the face of uncertainty based on statistical methodology. The use of modern computers has expedited large-scale statistical computations and has also made possible new methods that are impractical to perform manually. Statistics continues to be an area of active research, for example on the problem of how to analyze big data. Applications Applied statistics, sometimes referred to as Statistical science, comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments. Statistical consultants can help organizations and companies that do not have in-house expertise relevant to their particular questions. Machine learning models are statistical and probabilistic models that capture patterns in the data through use of computational algorithms. Statistics is applicable to a wide variety of academic disciplines, including natural and social sciences, government, and business. Business statistics applies statistical methods in econometrics, auditing and production and operations, including services improvement and marketing research. A study of two journals in tropical biology found that the 12 most frequent statistical tests are: analysis of variance (ANOVA), chi-squared test, Student's t-test, linear regression, Pearson's correlation coefficient, Mann-Whitney U test, Kruskal-Wallis test, Shannon's diversity index, Tukey's range test, cluster analysis, Spearman's rank correlation coefficient and principal component analysis. A typical statistics course covers descriptive statistics, probability, binomial and normal distributions, test of hypotheses and confidence intervals, linear regression, and correlation. Modern fundamental statistical courses for undergraduate students focus on correct test selection, results interpretation, and use of free statistics software. The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class of linear models, but powerful computers, coupled with suitable numerical algorithms, caused an increased interest in nonlinear models (such as neural networks) as well as the creation of new types, such as generalized linear models and multilevel models. Increased computing power has also led to the growing popularity of computationally intensive methods based on resampling, such as permutation tests and the bootstrap, while techniques such as Gibbs sampling have made use of Bayesian models more feasible. The computer revolution has implications for the future of statistics with a new emphasis on "experimental" and "empirical" statistics. A large number of both general and special purpose statistical software are now available. Examples of available software capable of complex statistical computation include programs such as Mathematica, SAS, SPSS, and R. In business, "statistics" is a widely used management- and decision support tool. It is particularly applied in financial management, marketing management, and production, services and operations management. Statistics is also heavily used in management accounting and auditing. The discipline of Management Science formalizes the use of statistics, and other mathematics, in business. (Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships.) A typical "Business Statistics" course is intended for business majors, and covers descriptive statistics (collection, description, analysis, and summary of data), probability (typically the binomial and normal distributions), test of hypotheses and confidence intervals, linear regression, and correlation; (follow-on) courses may include forecasting, time series, decision trees, multiple linear regression, and other topics from business analytics more generally. Professional certification programs, such as the CFA, often include topics in statistics. Specialized disciplines Statistical techniques are used in a wide range of types of scientific and social research, including: biostatistics, computational biology, computational sociology, network biology, social science, sociology and social research. Some fields of inquiry use applied statistics so extensively that they have specialized terminology. These disciplines include: In addition, there are particular types of statistical analysis that have also developed their own specialised terminology and methodology: Statistics form a key basis tool in business and manufacturing as well. It is used to understand measurement systems variability, control processes (as in statistical process control or SPC), for summarizing data, and to make data-driven decisions. Misuse Misuse of statistics can produce subtle but serious errors in description and interpretation—subtle in the sense that even experienced professionals make such errors, and serious in the sense that they can lead to devastating decision errors. For instance, social policy, medical practice, and the reliability of structures like bridges all rely on the proper use of statistics. Even when statistical techniques are correctly applied, the results can be difficult to interpret for those lacking expertise. The statistical significance of a trend in the data—which measures the extent to which a trend could be caused by random variation in the sample—may or may not agree with an intuitive sense of its significance. The set of basic statistical skills (and skepticism) that people need to deal with information in their everyday lives properly is referred to as statistical literacy. There is a general perception that statistical knowledge is all-too-frequently intentionally misused by finding ways to interpret only the data that are favorable to the presenter. A mistrust and misunderstanding of statistics is associated with the quotation, "There are three kinds of lies: lies, damned lies, and statistics". Misuse of statistics can be both inadvertent and intentional, and the book How to Lie with Statistics, by Darrell Huff, outlines a range of considerations. In an attempt to shed light on the use and misuse of statistics, reviews of statistical techniques used in particular fields are conducted (e.g. Warne, Lazo, Ramos, and Ritter (2012)). Ways to avoid misuse of statistics include using proper diagrams and avoiding bias. Misuse can occur when conclusions are overgeneralized and claimed to be representative of more than they really are, often by either deliberately or unconsciously overlooking sampling bias. Bar graphs are arguably the easiest diagrams to use and understand, and they can be made either by hand or with simple computer programs. Most people do not look for bias or errors, so they are not noticed. Thus, people may often believe that something is true even if it is not well represented. To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, "The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism." To assist in the understanding of statistics Huff proposed a series of questions to be asked in each case: The concept of correlation is particularly noteworthy for the potential confusion it can cause. Statistical analysis of a data set often reveals that two variables (properties) of the population under consideration tend to vary together, as if they were connected. For example, a study of annual income that also looks at age of death, might find that poor people tend to have shorter lives than affluent people. The two variables are said to be correlated; however, they may or may not be the cause of one another. The correlation phenomena could be caused by a third, previously unconsidered phenomenon, called a lurking variable or confounding variable. For this reason, there is no way to immediately infer the existence of a causal relationship between the two variables. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jens_Bergensten#Personal_life] | [TOKENS: 681] |
Contents Jens Bergensten Jens Peder Bergensten (born 18 May 1979), known professionally as Jeb, is a Swedish video game programmer and designer. He is best known as the lead designer of Minecraft, and is the chief creative officer of Mojang Studios. In 2013, he, along with Minecraft creator Markus "Notch" Persson, was named as one of Time's 100 most influential people in the world. As an employee of Mojang Studios, he had been co-developing Minecraft with Persson since 2010, became the lead designer in 2011, and assumed full control in 2014, when Persson left the company after its acquisition. Personal life Jens Peder Bergensten was born on 18 May 1979 in Örebro, Sweden. On 11 May 2013, Bergensten married photographer Jenny Bergensten (née Thornell). On 10 December 2015, Bergensten had a son, Björn. Career Bergensten started programming his first games at 11 years old, using BASIC and Turbo Pascal. By age 21, he was a mapper and modder for the first-person shooter game Quake III Arena. He worked as a C++ and Java programmer for the game developer Korkeken Interactive Studio, which went bankrupt and became Oblivion Entertainment. After the insolvency of Oblivion Entertainment, Bergensten moved to Malmö and earned a master's degree in computer science at Lund University in 2008. During his time working at Korkeken (meaning the cork oak), Bergensten spent his free time leading the development for the online role-playing game Whispers in Akarra, which entertained a small playerbase of several hundred players. He later discontinued this project after straying from the team's original creative vision for the project. Bergensten publicly released the world editors and source code for Akarra's server client in 2008. Afterwards, he founded the indie game development company Oxeye Game Studio with Daniel Brynolf and Pontus Hammarber, who wanted to create a spiritual successor to Whispers in Akarra.[citation needed] The studio's first project was Dawn of Daria, a self-described "massively-multiplayer fantasy life simulator".[citation needed] After several public alpha tests, the project was discontinued like its predecessor, and Oxeye Games Studio switched their focus to various game jam project and tech demos.[citation needed] The company was soon known for the real-time strategy game Harvest: Massive Encounter[citation needed] and later the platform games Cobalt and Cobalt WASD. Until 24 November 2010, Bergensten worked for the online knowledge community; Planeto. In November 2010, Bergensten was hired as Mojang's back-end developer for Scrolls (now known as Caller's Bane). He later began programming increasingly significant parts of Minecraft until he became its lead designer in December 2011, taking over from Markus Persson. Bergensten was part of the team that developed Catacomb Snatch as part of Humble Bundle Mojam, a game jam.[citation needed] In recent years, Bergensten has been featured in the teaser videos for Minecraft Live along with Agnes Larsson. Currently, Bergensten serves as the chief creative officer of Mojang Studios. Ludography Filmography Awards and nominations References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Template:NLP-stub] | [TOKENS: 211] |
Contents Template:NLP-stub This article about natural language processing is a stub. You can help Wikipedia by adding missing information. About this template This template is used to identify a stub about natural language processing. It uses {{article stub box}}, which is a meta-template designed to ease the process of creating and maintaining stub templates. Typing {{NLP-stub}} produces the message shown at the beginning, and adds the article to the following category: General information This is a stub template. A brief explanation of these templates follows; for full details, please consult Wikipedia:Stub. A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject. Further information can be found at: New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals. This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jens_Bergensten#Early_standalone_projects] | [TOKENS: 681] |
Contents Jens Bergensten Jens Peder Bergensten (born 18 May 1979), known professionally as Jeb, is a Swedish video game programmer and designer. He is best known as the lead designer of Minecraft, and is the chief creative officer of Mojang Studios. In 2013, he, along with Minecraft creator Markus "Notch" Persson, was named as one of Time's 100 most influential people in the world. As an employee of Mojang Studios, he had been co-developing Minecraft with Persson since 2010, became the lead designer in 2011, and assumed full control in 2014, when Persson left the company after its acquisition. Personal life Jens Peder Bergensten was born on 18 May 1979 in Örebro, Sweden. On 11 May 2013, Bergensten married photographer Jenny Bergensten (née Thornell). On 10 December 2015, Bergensten had a son, Björn. Career Bergensten started programming his first games at 11 years old, using BASIC and Turbo Pascal. By age 21, he was a mapper and modder for the first-person shooter game Quake III Arena. He worked as a C++ and Java programmer for the game developer Korkeken Interactive Studio, which went bankrupt and became Oblivion Entertainment. After the insolvency of Oblivion Entertainment, Bergensten moved to Malmö and earned a master's degree in computer science at Lund University in 2008. During his time working at Korkeken (meaning the cork oak), Bergensten spent his free time leading the development for the online role-playing game Whispers in Akarra, which entertained a small playerbase of several hundred players. He later discontinued this project after straying from the team's original creative vision for the project. Bergensten publicly released the world editors and source code for Akarra's server client in 2008. Afterwards, he founded the indie game development company Oxeye Game Studio with Daniel Brynolf and Pontus Hammarber, who wanted to create a spiritual successor to Whispers in Akarra.[citation needed] The studio's first project was Dawn of Daria, a self-described "massively-multiplayer fantasy life simulator".[citation needed] After several public alpha tests, the project was discontinued like its predecessor, and Oxeye Games Studio switched their focus to various game jam project and tech demos.[citation needed] The company was soon known for the real-time strategy game Harvest: Massive Encounter[citation needed] and later the platform games Cobalt and Cobalt WASD. Until 24 November 2010, Bergensten worked for the online knowledge community; Planeto. In November 2010, Bergensten was hired as Mojang's back-end developer for Scrolls (now known as Caller's Bane). He later began programming increasingly significant parts of Minecraft until he became its lead designer in December 2011, taking over from Markus Persson. Bergensten was part of the team that developed Catacomb Snatch as part of Humble Bundle Mojam, a game jam.[citation needed] In recent years, Bergensten has been featured in the teaser videos for Minecraft Live along with Agnes Larsson. Currently, Bergensten serves as the chief creative officer of Mojang Studios. Ludography Filmography Awards and nominations References External links |
======================================== |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.