In particle and condensed matter physics, Goldstone bosons or Nambu–Goldstone bosons (NGBs) are bosons that appear necessarily in models exhibiting spontaneous breakdown of continuous symmetries. They were discovered by Yoichiro Nambu in particle physics within the context of the BCS superconductivity mechanism, and subsequently elucidated by Jeffrey Goldstone, and systematically generalized in the context of quantum field theory. In condensed matter physics such bosons are quasiparticles and are known as Anderson–Bogoliubov modes
https://huggingface.co/datasets/fmars/wiki_stem
In mathematical physics and differential geometry, a gravitational instanton is a four-dimensional complete Riemannian manifold satisfying the vacuum Einstein equations. They are so named because they are analogues in quantum theories of gravity of instantons in Yang–Mills theory. In accordance with this analogy with self-dual Yang–Mills instantons, gravitational instantons are usually assumed to look like four-dimensional Euclidean space at large distances, and to have a self-dual Riemann tensor
In many-body theory, the term Green's function (or Green function) is sometimes used interchangeably with correlation function, but refers specifically to correlators of field operators or creation and annihilation operators. The name comes from the Green's functions used to solve inhomogeneous differential equations, to which they are loosely related. (Specifically, only two-point 'Green's functions' in the case of a non-interacting system are Green's functions in the mathematical sense; the linear operator that they invert is the Hamiltonian operator, which in the non-interacting case is quadratic in the fields)
In theoretical physics, Eugene Wigner and Erdal İnönü discussed the possibility of obtaining from a given Lie group a different (non-isomorphic) Lie group by a group contraction with respect to a continuous subgroup of it. That amounts to a limiting operation on a parameter of the Lie algebra, altering the structure constants of this Lie algebra in a nontrivial singular manner, under suitable circumstances. An example is the Lie algebra of the 3D rotation group SO(3), with [X1, X2] = X3, etc
The group velocity of a wave is the velocity with which the overall envelope shape of the wave's amplitudes—known as the modulation or envelope of the wave—propagates through space. For example, if a stone is thrown into the middle of a very still pond, a circular pattern of capillary waves with a quiescent center appears in the water. The expanding ring of waves is the wave group or wave packet, within which one can discern individual waves that travel faster than the group as a whole
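For a deep-water gravity wave with dispersion relation ω = √(gk) (an assumed illustrative case, not the capillary-wave regime of the pond example), the group velocity works out to half the phase velocity, so individual crests outrun the envelope as described above. A minimal numeric sketch:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def omega(k):
    # Deep-water gravity-wave dispersion relation: omega = sqrt(g k)
    return math.sqrt(G * k)

def phase_velocity(k):
    return omega(k) / k

def group_velocity(k, h=1e-6):
    # Central finite difference for d(omega)/dk
    return (omega(k + h) - omega(k - h)) / (2 * h)

k = 2.0                 # wavenumber in rad/m (arbitrary choice)
vp = phase_velocity(k)
vg = group_velocity(k)  # analytically, vg = vp / 2 for this dispersion relation
```

For other dispersion relations (e.g. pure capillary waves) the ratio differs, and the group can even move faster than the individual crests.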
In cosmology, Gurzadyan–Savvidy (GS) relaxation is a theory developed by Vahe Gurzadyan and George Savvidy to explain the relaxation over time of the dynamics of N-body gravitating systems such as star clusters and galaxies. Stellar systems observed in the Universe – globular clusters and elliptical galaxies – reveal their relaxed state in the high degree of regularity of some of their physical characteristics, such as surface luminosity, velocity dispersion, and geometric shape. The basic mechanism of relaxation of stellar systems has traditionally been taken to be two-body encounters (of stars), which lead to the observed fine-grained equilibrium
In theoretical physics, Hamiltonian field theory is the field-theoretic analogue to classical Hamiltonian mechanics. It is a formalism in classical field theory alongside Lagrangian field theory. It also has applications in quantum field theory
Hamiltonian mechanics emerged in 1833 as a reformulation of Lagrangian mechanics. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces the (generalized) velocities q̇^i used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena
In mathematics and mathematical physics, a coordinate basis or holonomic basis for a differentiable manifold M is a set of basis vector fields {e1, ..., en} defined at every point of a region of the manifold, such that each eα coincides with the coordinate vector field ∂/∂xα of some coordinate chart
The Infeld–Van der Waerden symbols, sometimes called simply Van der Waerden symbols, are invariant symbols associated with the Lorentz group, used in quantum field theory. They are named after Leopold Infeld and Bartel Leendert van der Waerden. The Infeld–Van der Waerden symbols are index notation for Clifford multiplication of covectors on left-handed spinors, giving right-handed spinors, or vice versa
The International Association of Mathematical Physics (IAMP) was founded in 1976 to promote research in mathematical physics. It brings together research mathematicians and theoretical physicists, including students. The association's ordinary members are individual researchers, although associate membership is available to organizations and companies
In physics, a Killing horizon is a geometrical construct used in general relativity and its generalizations to delineate spacetime boundaries without reference to the dynamic Einstein field equations. Mathematically a Killing horizon is a null hypersurface defined by the vanishing of the norm of a Killing vector field (both are named after Wilhelm Killing). It can also be defined as a null hypersurface generated by a Killing vector, which in turn is null at that surface
Lagrangian field theory is a formalism in classical field theory. It is the field-theoretic analogue of Lagrangian mechanics. Lagrangian mechanics is used to analyze the motion of a system of discrete particles each with a finite number of degrees of freedom
In mathematics, the Laplace transform, named after its discoverer Pierre-Simon Laplace, is an integral transform that converts a function of a real variable t (in the time domain) to a function of a complex variable s (in the complex frequency domain, also known as the s-domain or s-plane). The transform has many applications in science and engineering because it is a tool for solving differential equations. In particular, it transforms ordinary differential equations into algebraic equations and convolution into multiplication
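A quick numeric sanity check of the definition F(s) = ∫₀^∞ e^(−st) f(t) dt, using the known transform of f(t) = e^(−2t), namely 1/(s + 2). The truncation bound and step count below are arbitrary illustrative choices:

```python
import math

def laplace_numeric(f, s, T=50.0, n=200000):
    # Trapezoidal approximation of the (truncated) Laplace integral
    # F(s) ~ integral of exp(-s t) f(t) dt over [0, T]
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * f(t)
    return total * h

s = 3.0
F = laplace_numeric(lambda t: math.exp(-2 * t), s)
exact = 1.0 / (s + 2.0)  # the known closed-form transform of exp(-2t)
```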
Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, tends to boost long-period noise in long, gapped records; LSSA mitigates such problems. Unlike in Fourier analysis, data need not be equally spaced to use LSSA
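A minimal single-frequency sketch of the idea: fit A cos(ωt) + B sin(ωt) to unequally spaced samples by solving the 2×2 normal equations directly (variable names and the synthetic signal are illustrative):

```python
import math
import random

def fit_sinusoid(t, y, w):
    # Least-squares fit y ~ A cos(w t) + B sin(w t) on possibly uneven samples:
    # build and solve the 2x2 normal equations for A and B.
    cc = sum(math.cos(w * ti) ** 2 for ti in t)
    ss = sum(math.sin(w * ti) ** 2 for ti in t)
    cs = sum(math.sin(w * ti) * math.cos(w * ti) for ti in t)
    yc = sum(yi * math.cos(w * ti) for ti, yi in zip(t, y))
    ys = sum(yi * math.sin(w * ti) for ti, yi in zip(t, y))
    det = cc * ss - cs * cs
    A = (yc * ss - ys * cs) / det
    B = (ys * cc - yc * cs) / det
    return A, B

random.seed(0)
t = sorted(random.uniform(0.0, 20.0) for _ in range(300))  # unevenly spaced times
y = [2.0 * math.sin(1.5 * ti) for ti in t]                 # pure sinusoid at w = 1.5
A, B = fit_sinusoid(t, y, 1.5)                             # should recover A ~ 0, B ~ 2
```

A full LSSA spectrum repeats this fit over a grid of trial frequencies and reports the fitted power at each.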
In the theory of Lie groups, Lie algebras and their representation theory, a Lie algebra extension e is an enlargement of a given Lie algebra g by another Lie algebra h. Extensions arise in several ways. There is the trivial extension obtained by taking a direct sum of two Lie algebras
In mathematical physics, linear transport theory is the study of equations describing the migration of particles or energy within a host medium when such migration involves random absorption, emission and scattering events. Subject to certain simplifying assumptions, this is a common and useful framework for describing the scattering of light (radiative transfer) or neutrons (neutron transport). Given the laws of individual collision events (in the form of absorption coefficients and scattering kernels/phase functions), the problem of linear transport theory is then to determine the result of a large number of random collisions governed by these laws
In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the solution of a first-order homogeneous linear differential equation for a linear operator. In particular, it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is aggregated as an infinite series, whose terms involve multiple integrals and nested commutators
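The structure of the expansion can be made concrete by writing out its first two terms (standard formulas, reproduced here for orientation):

```latex
% Initial value problem and its exponential representation
Y'(t) = A(t)\,Y(t), \qquad Y(0) = I, \qquad Y(t) = \exp\bigl(\Omega(t)\bigr)

% First two terms of the Magnus series \Omega(t) = \sum_{k\ge 1} \Omega_k(t)
\Omega_1(t) = \int_0^t A(t_1)\,dt_1

\Omega_2(t) = \frac{1}{2}\int_0^t dt_1 \int_0^{t_1} dt_2\,
              \bigl[\,A(t_1),\,A(t_2)\,\bigr]
```

When A(t) commutes with itself at different times, all commutator terms vanish and the series collapses to Ω₁, recovering the familiar solution exp(∫A).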
There are various mathematical descriptions of the electromagnetic field that are used in the study of electromagnetism, one of the four fundamental interactions of nature. In this article, several approaches are discussed; generally speaking, the equations are written in terms of electric and magnetic fields, potentials, and charges with currents. The most common description of the electromagnetic field uses two three-dimensional vector fields called the electric field and the magnetic field
The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism mainly uses a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. These formalisms are distinguished from those for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces (mainly L2 space) and operators on these spaces
In algebraic geometry and theoretical physics, mirror symmetry is a relationship between geometric objects called Calabi–Yau manifolds. The term refers to a situation where two Calabi–Yau manifolds look very different geometrically but are nevertheless equivalent when employed as extra dimensions of string theory. Early cases of mirror symmetry were discovered by physicists
In mathematics, mirror symmetry is a conjectural relationship between certain Calabi–Yau manifolds and a constructed "mirror manifold". The conjecture allows one to relate the number of rational curves on a Calabi–Yau manifold (encoded as Gromov–Witten invariants) to integrals from a family of varieties (encoded as period integrals on a variation of Hodge structures). In short, this means there is a relation between the number of genus g algebraic curves of degree d on a Calabi–Yau variety X and integrals on a dual variety X̌
In computer systems it is typical to define rules for data transfers that optimize overall system behavior. One such consideration is to define coherency granules (CG), units of data that are stored in memory. These units generally have a close relationship to the caches that may be used in the system
A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have a hierarchy of multiple cache levels (L1, L2, often L3, and rarely even L4), with different instruction-specific and data-specific caches at level 1
In computing, a distributed cache is an extension of the traditional concept of cache used in a single locale. A distributed cache may span multiple servers so that it can grow in size and in transactional capacity. It is mainly used to store application data residing in databases and web session data
Ehcache (EE-aytch-kash) is an open source Java distributed cache for general-purpose caching, Java EE and light-weight containers. Ehcache is available under an Apache open source license. Ehcache was developed by Greg Luck starting in 2003
In computing, external memory algorithms or out-of-core algorithms are algorithms that are designed to process data that are too large to fit into a computer's main memory at once. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory (auxiliary memory) such as hard drives or tape drives, or when memory is on a computer network. External memory algorithms are analyzed in the external memory model
In computer science, the five-minute rule is a rule of thumb for deciding whether a data item should be kept in memory, or stored on disk and read back into memory when required. It was first formulated by Jim Gray and Gianfranco Putzolu in 1985, and then subsequently revised in 1997 and 2007 to reflect changes in the relative cost and performance of memory and persistent storage. The rule is as follows: The 5-minute random rule: cache randomly accessed disk pages that are re-used every 5 minutes or less
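The rule comes from a break-even computation: a page is worth caching if keeping it in RAM costs less than the disk accesses it saves. A sketch with illustrative late-1990s-style figures (the prices and rates below are assumptions, not the paper's exact numbers):

```python
def break_even_seconds(pages_per_mb_ram, accesses_per_sec_per_disk,
                       price_per_disk, price_per_mb_ram):
    # Gray/Putzolu-style break-even interval: cache a page if it is
    # re-referenced at least this often (in seconds).
    return (pages_per_mb_ram / accesses_per_sec_per_disk) * \
           (price_per_disk / price_per_mb_ram)

# Assumed figures: 8 KB pages (128 per MB), 64 random accesses/s per disk,
# $2000 per disk drive, $15 per MB of RAM.
interval = break_even_seconds(128, 64, 2000, 15)   # roughly 267 s, ~5 minutes
```

As the relative prices of RAM and disk shift, so does the break-even interval, which is why the rule was revisited in 1997 and 2007.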
Funky caching is the generation, display and storage of dynamic content when a requested static web page resource isn't available. The name is based on the idea of treating the web server, serving static pages, as a cache. However, unlike common reverse caches, the funky cache is part of the web server software, and has the ability to dynamically generate this content
Funnelsort is a comparison-based sorting algorithm. It is similar to mergesort, but it is a cache-oblivious algorithm, designed for a setting where the number of elements to sort is too large to fit in a cache where operations are done. It was introduced by Matteo Frigo, Charles Leiserson, Harald Prokop, and Sridhar Ramachandran in 1999 in the context of the cache oblivious model
In low-power systems, a Hierarchical Value Cache refers to the hierarchical arrangement of Value Caches (VCs) in such a fashion that lower-level VCs observe higher hit rates, but undergo more switching activity on VC hits. The organization is similar to the memory hierarchy, where lower-level caches enjoy higher hit rates, but longer hit latencies. The architecture for a Hierarchical Value Cache is mainly organized along two approaches: Hierarchical Unified Value Cache (HUVC) and Hierarchical Combinational Value Cache (HCVC)
The ETag or entity tag is part of HTTP, the protocol for the World Wide Web. It is one of several mechanisms that HTTP provides for Web cache validation, which allows a client to make conditional requests. This mechanism allows caches to be more efficient and saves bandwidth, as a Web server does not need to send a full response if the content has not changed
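A minimal sketch of the server-side revalidation logic: compute a validator for the representation, and answer 304 Not Modified with an empty body when the client's If-None-Match value still matches. The hashing scheme and handler shape are illustrative assumptions, not any particular server's API:

```python
import hashlib

def make_etag(body):
    # One possible validator scheme: a truncated hash of the representation.
    # (ETag contents are opaque and server-chosen; this choice is an assumption.)
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

def respond(body, if_none_match=None):
    # Hypothetical handler: if the client's cached validator still matches,
    # return 304 with no body; otherwise return 200 with the full body.
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, etag, b""
    return 200, etag, body

body = b"hello world"
status1, etag, _ = respond(body)           # first request: full 200 response
status2, _, payload = respond(body, etag)  # revalidation: 304, body not resent
```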
Infinispan is a distributed cache and key-value NoSQL data store developed by Red Hat. Java applications can embed it as a library or use it as a service in WildFly; non-Java applications can use it as a remote service over TCP/IP. Infinispan is the successor of JBoss Cache
In computer science, locality of reference, also known as the principle of locality, is the tendency of a processor to access the same set of memory locations repetitively over a short period of time. There are two basic types of reference locality – temporal and spatial locality. Temporal locality refers to the reuse of specific data and/or resources within a relatively small time duration
In computing, MOESI is a full cache coherency protocol that encompasses all of the possible states commonly used in other protocols. In addition to the four common MESI protocol states, there is a fifth "Owned" state representing data that is both modified and shared. This avoids the need to write modified data back to main memory before sharing it
In computer programming, a negative cache is a cache that also stores "negative" responses, i.e. failures, such as a lookup that found no result
NetCache is a former web cache software product which was owned and developed by NetApp between 1997 and 2006, and a hardware product family incorporating the NetCache software. The NetCache software started as a commercial fork of the Harvest Object Cache developed by Internet Middleware Corporation (IMC), which consisted of former Harvest project developers including Peter B. Danzig, a professor at University of Southern California
An oblivious RAM (ORAM) simulator is a compiler that transforms algorithms in such a way that the resulting algorithms preserve the input-output behavior of the original algorithm but the distribution of the memory access pattern of the transformed algorithm is independent of the memory access pattern of the original algorithm. The use of ORAMs is motivated by the fact that an adversary can obtain nontrivial information about the execution of a program and the nature of the data that it is dealing with, just by observing the pattern in which various locations of memory are accessed during its execution. An adversary can get this information even if the data values are all encrypted
Peer-to-peer caching (P2P caching) is a computer network traffic management technology used by Internet Service Providers (ISPs) to accelerate content delivered over peer-to-peer (P2P) networks while reducing related bandwidth costs. P2P caching is similar in principle to the content caching long used by ISPs to accelerate Web (HTTP) content. P2P caching temporarily stores popular content that is flowing into an ISP's network
In computer science, a parallel external memory (PEM) model is a cache-aware, external-memory abstract machine. It is the parallel-computing analogue of the single-processor external memory (EM) model. Similarly, it is the cache-aware analogue of the parallel random-access machine (PRAM)
A power law is a mathematical relationship between two quantities in which one is directly proportional to some power of the other. The power law for cache misses was first established by C. K. Chow
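A hedged numeric sketch of the power law for cache misses, miss rate ∝ C^(−α): the exponent α ≈ 0.5 and the baseline figures below are illustrative assumptions, not measured values. With α = 0.5, doubling the cache cuts misses by a factor of √2:

```python
def miss_rate(cache_size_kb, m0, c0, alpha=0.5):
    # Power law of cache misses: miss rate scales as (C / C0) ** (-alpha).
    # m0 is the observed miss rate at baseline cache size c0 (both assumed).
    return m0 * (cache_size_kb / c0) ** (-alpha)

base = miss_rate(32, m0=0.10, c0=32)     # 10% miss rate at a 32 KB baseline
doubled = miss_rate(64, m0=0.10, c0=32)  # same workload, twice the cache
ratio = doubled / base                   # 2 ** -0.5, about 0.707
```

In practice α is fitted per workload; the power law is an empirical regularity, not a guarantee.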
A victim cache is a small, usually fully associative cache placed in the refill path of a CPU cache that stores all the blocks evicted from that level of cache, originally proposed in 1990. In modern architectures, this function is typically performed by Level 3 or Level 4 caches. Victim caching is a hardware technique to improve performance of caches proposed by Norman Jouppi
A Web cache (or HTTP cache) is a system for optimizing the World Wide Web. It is implemented both client-side and server-side. The caching of multimedia and other files can result in less overall delay when browsing the Web
Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map". Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only
The Dynamically Redefined Character Set, or DRCS for short, was a feature of Digital Equipment Corporation's smart terminals starting with the VT200 series in 1983. DRCS added a RAM buffer where new glyphs could be uploaded from the host system using the Sixel data format
8-bit clean is an attribute of computer systems, communication channels, and other devices and software that handle 8-bit character encodings correctly. Such encodings include the ISO/IEC 8859 series and the UTF-8 encoding of Unicode. Until the early 1990s, many programs and data transmission channels were character-oriented and assumed 7-bit data, stripping or repurposing the eighth bit of each byte
BCD (binary-coded decimal), also called alphanumeric BCD, alphameric BCD, BCD Interchange Code, or BCDIC, is a family of representations of numerals, uppercase Latin letters, and some special and control characters as six-bit character codes. Unlike later encodings such as ASCII, BCD codes were not standardized. Different computer manufacturers, and even different product lines from the same manufacturer, often had their own variants, and sometimes included unique characters
A bidirectional text contains two text directionalities, right-to-left (RTL) and left-to-right (LTR). It generally involves text containing different types of alphabets, but may also refer to boustrophedon, which is changing text direction in each row. Many computer programs fail to display bidirectional text correctly
A binary-to-text encoding is encoding of data in plain text. More precisely, it is an encoding of binary data in a sequence of printable characters. These encodings are necessary for transmission of data when the communication channel does not allow binary data (such as email or NNTP) or is not 8-bit clean
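Base64 is one of the most common binary-to-text encodings; a minimal example of round-tripping arbitrary bytes through printable ASCII:

```python
import base64

raw = bytes([0x00, 0xFF, 0x10, 0x80])        # arbitrary binary, not valid text
text = base64.b64encode(raw).decode('ascii')  # printable ASCII representation
back = base64.b64decode(text)                 # recovers the original bytes
```

Every 3 input bytes become 4 output characters (with `=` padding), a roughly 33% size overhead, which is the usual price of surviving channels that are not 8-bit clean.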
In computing, a bucky bit is a bit in a binary representation of a character that is set by pressing on a keyboard modifier key other than the shift key. Setting a bucky bit changes the output character. A bucky bit allows the user to type a wider variety of characters and commands while maintaining a reasonable number of keys on a keyboard
CCIR 476 is a character encoding used in radio data protocols such as SITOR, AMTOR and Navtex. It is a recasting of the ITA2 character encoding, known as Baudot code, from a five-bit code to a seven-bit code. In each character, exactly four of the seven bits are mark bits, and the other three are space bits
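The constant-weight property can be checked by enumeration: with exactly four mark bits out of seven, there are C(7,4) = 35 possible codewords, and any single-bit error changes the weight and is therefore detectable:

```python
from itertools import product

# Enumerate all 7-bit patterns and keep the constant-weight ones:
# CCIR 476 codewords have exactly 4 mark (1) bits out of 7.
valid = [bits for bits in product((0, 1), repeat=7) if sum(bits) == 4]
n = len(valid)  # C(7, 4) = 35 available codewords

# Flipping any single bit of a valid codeword changes its weight,
# so the result can never be another valid codeword.
w = valid[0]
corrupted = (1 - w[0],) + w[1:]
```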
A CCSID (coded character set identifier) is a 16-bit number that represents a particular encoding of a specific code page. For example, Unicode is a code page that has several character encoding schemes (referred to as "transformation forms")—including UTF-8, UTF-16 and UTF-32—but which may or may not actually be accompanied by a CCSID number to indicate that this encoding is being used. The terms code page and CCSID are often used interchangeably, even though they are not synonymous
The Compatibility Encoding Scheme for UTF-16: 8-Bit (CESU-8) is a variant of UTF-8 that is described in Unicode Technical Report #26. A Unicode code point from the Basic Multilingual Plane (BMP), i.e. in the range U+0000 to U+FFFF, is encoded exactly as in UTF-8; a supplementary code point is first represented as a UTF-16 surrogate pair, and each surrogate is then encoded as a three-byte UTF-8-style sequence, giving six bytes in total
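A sketch of the encoding rule, hand-rolled for illustration (the Python standard library has no CESU-8 codec, so the surrogate pair is built manually and each half is UTF-8-encoded via the surrogatepass error handler):

```python
def cesu8_encode(s):
    # Re-express the string as UTF-16 code units, then UTF-8-encode each
    # unit individually: BMP characters come out identical to UTF-8, while
    # supplementary characters become 6 bytes (3 per surrogate) instead of 4.
    out = bytearray()
    for ch in s:
        cp = ord(ch)
        if cp <= 0xFFFF:
            out += ch.encode('utf-8')
        else:
            cp -= 0x10000
            hi = 0xD800 + (cp >> 10)      # high (lead) surrogate
            lo = 0xDC00 + (cp & 0x3FF)    # low (trail) surrogate
            out += chr(hi).encode('utf-8', 'surrogatepass')
            out += chr(lo).encode('utf-8', 'surrogatepass')
    return bytes(out)

enc = cesu8_encode('\U00010400')  # a supplementary-plane character
```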
"Five-Letter Codegroup Filter": uses only upper case ASCII letters and spaces; includes a CRC. --DavidCary (talk) 08:28, 19 February 2021 (UTC) VLQ LEB128 basE91 Z85 diff charset from Ascii85 I wanted to add link to my "just written" implementation, however got a warning about conflict of interests. If anyone other will think that this information will be helpfull feel free to add this link to Article
https://huggingface.co/datasets/fmars/wiki_stem
A character literal is a type of literal in programming for the representation of a single character's value within the source code of a computer program. Languages that have a dedicated character data type generally include character literals; these include C, C++, Java, and Visual Basic. Languages without character data types (like Python or PHP) will typically use strings of length 1 to serve the same purpose a character data type would fulfil
Character encoding detection, charset detection, or code page detection is the process of heuristically guessing the character encoding of a series of bytes that represent text. The technique is recognised to be unreliable and is only used when specific metadata, such as an HTTP Content-Type header, is either not available or is assumed to be untrustworthy. This algorithm usually involves statistical analysis of byte patterns, such as the frequency distribution of trigraphs of various languages encoded in each code page to be detected; such statistical analysis can also be used to perform language detection
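A toy illustration of the fallback idea (real detectors use statistical models over byte n-grams; this sketch only distinguishes valid UTF-8 from a single-byte fallback, and the choice of latin-1 as the fallback is an assumption):

```python
def guess_encoding(data):
    # Minimal heuristic sketch:
    # 1. A UTF-8 byte order mark, or a clean UTF-8 decode, is strong
    #    evidence of UTF-8 (random single-byte text rarely validates).
    # 2. Otherwise fall back to latin-1, which maps every byte value
    #    to some character and so never fails to decode.
    if data.startswith(b'\xef\xbb\xbf'):
        return 'utf-8-sig'
    try:
        data.decode('utf-8')
        return 'utf-8'
    except UnicodeDecodeError:
        return 'latin-1'

g1 = guess_encoding('héllo'.encode('utf-8'))    # validates as UTF-8
g2 = guess_encoding('héllo'.encode('latin-1'))  # lone 0xE9 byte is invalid UTF-8
```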
In computing, a code page is a character encoding and as such it is a specific association of a set of printable characters and control characters with unique numbers. Typically each number represents the binary value in a single byte. (In some contexts these terms are used more precisely; see Character encoding § Character sets, character maps and code pages)
In character encoding terminology, a code point, codepoint or code position is a numerical value that maps to a specific character. Code points usually represent a single grapheme—usually a letter, digit, punctuation mark, or whitespace—but sometimes represent symbols, control characters, or formatting. The set of all possible code points within a given encoding/character set make up that encoding's codespace
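In Python, `ord` and `chr` convert between characters and code points, and the conventional U+ notation is just the code point written in hexadecimal, padded to at least four digits:

```python
ch = 'ñ'
cp = ord(ch)             # numeric code point of the character
label = f'U+{cp:04X}'    # conventional U+ notation, e.g. U+00F1
back = chr(cp)           # map the code point back to the character
```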
The CS Indic character set, or the Classical Sanskrit Indic Character Set, is used by LaTeX to represent text used in the romanization of Sanskrit. It is used in fonts and is based on Code Page 437. Extended versions are the CSX Indic character set and the CSX+ Indic character set
In computer programming, digraphs and trigraphs are sequences of two and three characters, respectively, that appear in source code and, according to a programming language's specification, should be treated as if they were single characters. Various reasons exist for using digraphs and trigraphs: keyboards may not have keys to cover the entire character set of the language, input of special characters may be difficult, text editors may reserve some characters for special use and so on. Trigraphs might also be used for some EBCDIC code pages that lack characters such as { and }
Extended Channel Interpretation (ECI) is an extension to the communication protocol that is used to transmit data from a bar code reader to a host when a bar code symbol is scanned. It enables the application software to receive additional information about the intended interpretation of the message contained within the barcode symbol and even details about the scan itself. ECI was developed as a symbology-independent extension of the Global Label Identifier (GLI) system used in the PDF417 bar code
In ISO/IEC 646 (commonly known as ASCII) and related standards including ISO 8859 and Unicode, a graphic character, also known as printing character (or printable character), is any character intended to be written, printed, or otherwise displayed in a form that can be read by humans. In other words, it is any encoded character that is associated with one or more glyphs. In ISO 646, graphic characters are contained in rows 2 through 7 of the code table
In computing, a hardware code page (HWCP) refers to a code page supported natively by a hardware device such as a display adapter or printer. The glyphs to present the characters are stored in the alphanumeric character generator's resident read-only memory (like ROM or flash) and are thus not user-changeable. They are available for use by the system without having to load any font definitions into the device first
The Lotus International Character Set (LICS) is a proprietary single-byte character encoding introduced in 1985 by Lotus Development Corporation. It is based on the 1983 DEC Multinational Character Set (MCS) for VT220 terminals. As such, LICS is also similar to two other descendants of MCS, the ECMA-94 character set of 1985 and the ISO 8859-1 (Latin-1) character set of 1987
The Lotus Multi-Byte Character Set (LMBCS) is a proprietary multi-byte character encoding originally conceived in 1988 at Lotus Development Corporation with input from Bob Balaban and others. Created around the same time and addressing some of the same problems, LMBCS could be viewed as parallel development and possible alternative to Unicode. For maximum compatibility, later issues of LMBCS incorporate UTF-16 as a subset
Exact measurements of encoding popularity are not possible. Counts of documents differ from counts weighted by the actual use or visibility of those documents. Encoding popularity also varies depending on the language used for the documents, the locale that is the source of the document, and the purpose of the document
RADIX 50 or RAD50 (also referred to as RADIX50, RADIX-50 or RAD-50), is an uppercase-only character encoding created by Digital Equipment Corporation (DEC) for use on their DECsystem, PDP, and VAX computers. RADIX 50's 40-character repertoire (050 in octal) can encode six characters plus four additional bits into one 36-bit machine word (PDP-6, PDP-10/DECsystem-10, DECSYSTEM-20), three characters plus two additional bits into one 18-bit word (PDP-9, PDP-15), or three characters into one 16-bit word (PDP-11, VAX). The actual encoding differs between the 36-bit and 16-bit systems
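A sketch of the 16-bit packing arithmetic: three base-40 digits fit in one word because 40³ = 64000 < 65536. The character-set ordering below follows commonly cited DEC documentation for PDP-11 RAD50 and should be treated as an assumption:

```python
# PDP-11-style RAD50 character set, index = code (ordering assumed):
# space, A-Z, $, ., unused, 0-9  -- 40 characters in total.
RAD50 = ' ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789'

def rad50_pack(s):
    # Pack exactly three characters into one 16-bit word:
    # value = ((c1 * 40) + c2) * 40 + c3
    assert len(s) == 3
    v = 0
    for ch in s:
        v = v * 40 + RAD50.index(ch)
    return v

word = rad50_pack('ABC')  # (1*40 + 2)*40 + 3 = 1683
```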
A six-bit character code is a character encoding designed for use on computers with word lengths that are a multiple of six. Six bits can only encode 64 distinct characters, so these codes generally include only the upper-case letters, the numerals, some punctuation characters, and sometimes control characters. The 7-track magnetic tape format was developed to store data in such codes, along with an additional parity bit
The slate and stylus are tools used by blind people to write text that they can read without assistance. Invented by Charles Barbier as the tool for writing letters that could be read by touch, the slate and stylus allow for a quick, easy, convenient and constant method of making embossed printing for Braille character encoding. Prior methods of making raised printing for the blind required a movable type printing press
https://huggingface.co/datasets/fmars/wiki_stem
Stanford/ITS character set is an extended ASCII character set based on SEASCII with modifications allowing compatibility with 1968 ASCII. It is used as an alternate character set of the SUPDUP protocol for terminals with the %TOSAI and %TOFCI bits set. It is also recommended for TeX implementations on systems with large character sets
https://huggingface.co/datasets/fmars/wiki_stem
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed (after creation). A string is generally considered as a data type and is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding
https://huggingface.co/datasets/fmars/wiki_stem
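The dependence on character encoding noted above can be made concrete by encoding the same string two different ways; the number of stored bytes need not equal the number of characters:

```python
# The same text yields different byte sequences under different encodings.
text = "café"
print(text.encode("utf-8"))    # b'caf\xc3\xa9'  (é takes two bytes)
print(text.encode("latin-1"))  # b'caf\xe9'      (é takes one byte)
# Four characters, but five bytes in the UTF-8 representation.
print(len(text), len(text.encode("utf-8")))
```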
Unicode, formally The Unicode Standard, is an information technology standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. The standard, which is maintained by the Unicode Consortium, defines, as of version 15.0, 149,186 characters covering 161 modern and historic scripts, as well as symbols, thousands of emoji (including in colours), and non-visual control and formatting codes
https://huggingface.co/datasets/fmars/wiki_stem
UTF-7 (7-bit Unicode Transformation Format) is an obsolete variable-length character encoding for representing Unicode text using a stream of ASCII characters. It was originally intended to provide a means of encoding Unicode text for use in Internet E-mail messages that was more efficient than the combination of UTF-8 with quoted-printable. According to its RFC, UTF-7 is not a true "Unicode Transformation Format", as its definition can only encode code points in the BMP (the first 65,536 Unicode code points, which exclude emoji and many other characters)
https://huggingface.co/datasets/fmars/wiki_stem
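Python ships a utf-7 codec, which makes the scheme easy to demonstrate: non-ASCII code points are carried in modified-Base64 runs delimited by '+' and '-', while the byte stream as a whole remains 7-bit ASCII:

```python
# UTF-7 keeps the encoded stream within 7-bit ASCII; non-ASCII characters
# are escaped into modified-Base64 runs.
s = "£10"  # U+00A3 POUND SIGN followed by two ASCII digits
encoded = s.encode("utf-7")
print(encoded)                       # a pure-ASCII byte string
print(all(b < 128 for b in encoded)) # every byte fits in 7 bits
print(encoded.decode("utf-7") == s)  # decoding round-trips the text
```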
UTF-8 is a variable-length character encoding standard used for electronic communication. Defined by the Unicode Standard, the name is derived from Unicode (or Universal Coded Character Set) Transformation Format – 8-bit. UTF-8 is capable of encoding all 1,112,064 valid character code points in Unicode using one to four one-byte (8-bit) code units
https://huggingface.co/datasets/fmars/wiki_stem
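The one-to-four-byte structure is easy to observe directly: ASCII stays one byte, Latin letters with diacritics take two, most other BMP characters take three, and supplementary-plane characters take four:

```python
# UTF-8 uses one to four bytes per code point, with ASCII left unchanged.
for ch in ["A", "é", "€", "𝄞"]:  # U+0041, U+00E9, U+20AC, U+1D11E
    print(f"U+{ord(ch):05X} -> {len(ch.encode('utf-8'))} byte(s)")
# A is 1 byte, é is 2, € is 3, and the supplementary-plane 𝄞 is 4.
```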
ARJ (Archived by Robert Jung) is a software tool designed by Robert K. Jung for creating high-efficiency compressed file archives. ARJ is currently on version 2
https://huggingface.co/datasets/fmars/wiki_stem
In data compression, BCJ, short for Branch/Call/Jump, refers to a technique that improves the compression of machine code by replacing relative branch addresses with absolute ones. This allows a Lempel–Ziv compressor to identify duplicate targets and more efficiently encode them. On decompression, the inverse filter restores the original encoding
https://huggingface.co/datasets/fmars/wiki_stem
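The idea can be shown with a deliberately simplified toy filter for the x86 E8 (CALL rel32) opcode. Real BCJ filters, such as the x86 filter in xz, handle many more cases and edge conditions; this sketch only illustrates why the transform helps: two calls to the same target get identical displacement bytes, which an LZ compressor can then match.

```python
# Toy BCJ-style filter for x86 E8 (CALL rel32) instructions only.
# Real filters (e.g. xz's x86 BCJ filter) are considerably more involved.
import struct

def bcj_encode(code: bytes) -> bytes:
    """Rewrite relative call displacements as absolute target addresses."""
    buf = bytearray(code)
    i = 0
    while i + 5 <= len(buf):
        if buf[i] == 0xE8:
            rel = struct.unpack_from("<i", buf, i + 1)[0]
            # Absolute target = address of the next instruction + displacement.
            absolute = (i + 5 + rel) & 0xFFFFFFFF
            struct.pack_into("<I", buf, i + 1, absolute)
            i += 5
        else:
            i += 1
    return bytes(buf)

def bcj_decode(code: bytes) -> bytes:
    """Inverse filter: restore the original relative displacements."""
    buf = bytearray(code)
    i = 0
    while i + 5 <= len(buf):
        if buf[i] == 0xE8:
            absolute = struct.unpack_from("<I", buf, i + 1)[0]
            rel = (absolute - (i + 5)) & 0xFFFFFFFF
            struct.pack_into("<I", buf, i + 1, rel)
            i += 5
        else:
            i += 1
    return bytes(buf)

# Two calls to the same target (offset 105) from different positions.
code = bytes([0xE8]) + struct.pack("<i", 100) + bytes([0xE8]) + struct.pack("<i", 95)
filtered = bcj_encode(code)
assert filtered[1:5] == filtered[6:10]  # identical bytes -> LZ-matchable
assert bcj_decode(filtered) == code     # the filter is exactly invertible
```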
LHA or LZH is a freeware compression utility and associated file format. It was created in 1988 by Haruyasu Yoshizaki (吉崎栄泰, Yoshizaki Haruyasu), a medical doctor, and was originally named LHarc. A complete rewrite of LHarc, tentatively named LHx, was eventually released as LH
https://huggingface.co/datasets/fmars/wiki_stem
PNGOUT is a freeware command line optimizer for PNG images written by Ken Silverman. The transformation is lossless, meaning that the resulting image decodes pixel-for-pixel identical to the source image. According to its author, the program can often achieve 5–10% higher compression than other optimizers
https://huggingface.co/datasets/fmars/wiki_stem
SQ (squeeze) is a computer program, devised by Richard (Dick) Greenlaw circa 1981, which was used in the early 1980s on both DOS and CP/M computer systems to compress files so they use less space. Files compressed by SQ are identified by changing the middle initial of the extension to "Q", so that text files ended with the extension .TQT, executable files ended with the extension
https://huggingface.co/datasets/fmars/wiki_stem
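The renaming convention can be expressed as a small helper; the squeezed_name function below is a hypothetical illustration of the rule, not part of SQ itself:

```python
# Illustration of the SQ naming rule: the middle letter of a
# three-letter extension becomes 'Q' (e.g. .TXT -> .TQT).
def squeezed_name(filename: str) -> str:
    base, dot, ext = filename.rpartition(".")
    if dot and len(ext) == 3:
        ext = ext[0] + "Q" + ext[2]
    return base + dot + ext

print(squeezed_name("README.TXT"))  # -> README.TQT
```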
The Unarchiver is a proprietary freeware data decompression utility, which supports more formats than Archive Utility (formerly known as BOMArchiveHelper), the built-in archive unpacker program in macOS. It can also handle filenames in various character encodings, created using operating system versions that use those character encodings. The latest version requires Mac OS X Lion or higher
https://huggingface.co/datasets/fmars/wiki_stem
WMA Convert is a software tool for converting audio and video files. The program supports conversion of the MP3, M4A AAC, WAV, and WMA audio formats and the MP4, WMV, and AVI video formats. It also converts M4P files to MP3
https://huggingface.co/datasets/fmars/wiki_stem
ZipIt is a shareware data compression utility for archiving and compressing files on the Classic Mac OS and Mac OS X platforms. It was designed to be highly compatible with PKZIP on MS-DOS machines, reading and writing those files as well as performing any necessary line ending conversion or MacBinary encoding to ensure the files were usable on both platforms. It had an advanced user interface and offered a number of automation features, including AppleScript support
https://huggingface.co/datasets/fmars/wiki_stem
In computing, file comparison is the calculation and display of the differences and similarities between data objects, typically text files such as source code. The methods, implementations, and results are typically called a diff, after the Unix diff utility. The output may be presented in a graphical user interface or used as part of larger tasks in networks, file systems, or revision control
https://huggingface.co/datasets/fmars/wiki_stem
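A minimal diff in the style of the Unix utility can be produced with Python's standard difflib module:

```python
# Produce a unified diff between two versions of a text, in the same
# format the Unix diff utility emits with -u.
import difflib

old = ["apple\n", "banana\n", "cherry\n"]
new = ["apple\n", "blueberry\n", "cherry\n"]
for line in difflib.unified_diff(old, new, fromfile="a.txt", tofile="b.txt"):
    print(line, end="")
# Changed lines appear prefixed with '-' (removed) and '+' (added),
# with unchanged context lines around them.
```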
This article compares computer software tools which are used for accomplishing comparisons of files of various types. The file types addressed by individual file comparison apps vary, but may include text, symbols, images, audio, or video. This category of software tool is often called a "file comparison" or "diff" tool; the terms are effectively equivalent, though "diff" is more commonly associated with the Unix diff utility
https://huggingface.co/datasets/fmars/wiki_stem
Compare++ is an auxiliary tool for programmers and web developers. The tool can quickly perform syntax-aware comparison of text files and folders and do a three-way merge. It is useful for detecting and matching differences in code
https://huggingface.co/datasets/fmars/wiki_stem
Diff-Text is a web-based software tool that efficiently identifies differences between two blocks of plain text. It operates on a closed-source model and offers a donation or pay-what-you-want payment option. Text to be compared is pasted directly into the web page
https://huggingface.co/datasets/fmars/wiki_stem
freedup is a program that scans directories or file lists for duplicate files. The file lists may be provided on an input pipe or generated internally using find with provided options. Additional options allow the search conditions to be specified in more detail
https://huggingface.co/datasets/fmars/wiki_stem
Guiffy SureMerge is a data comparison utility. In addition to comparing files, the program is capable of doing side-by-side comparison of directories and archives. The program is also capable of performing automatic 3-way file merges
https://huggingface.co/datasets/fmars/wiki_stem
MultiEx Commander is a game resource archive manager for Windows published by the Xentax Foundation. Its features include a built-in MexScript (also known as BMS) interpreter, a file extractor and importer, and a stand-alone mod creator, EasyMod. The application is written by Mike Zuurman in Visual Basic
https://huggingface.co/datasets/fmars/wiki_stem
Pretty Diff is a language-aware data comparison utility implemented in TypeScript. The online utility is capable of source code prettification, minification, and comparison of two pieces of input text. It operates by removing code comments from supported languages and then performs a pretty-print operation prior to executing the diff algorithm
https://huggingface.co/datasets/fmars/wiki_stem
Surround SCM is a software configuration management application developed by Seapine Software, now owned by Perforce since 2017. Perforce integrated the software with its Helix ALM product. Surround SCM has a client–server architecture
https://huggingface.co/datasets/fmars/wiki_stem
Total Commander (formerly Windows Commander) is a shareware orthodox file manager for Windows, Windows Phone, Windows Mobile/Windows CE and Android, developed by Christian Ghisler. The program was originally written in Delphi; the latest 64-bit Windows versions are developed with Lazarus. It features a built-in FTP client, tabbed interface, file compare, archive file navigation, and a multi-rename tool with regular expression support
https://huggingface.co/datasets/fmars/wiki_stem
In computing, copy is a command in various operating systems. The command copies computer files from one location, such as a directory, to another
https://huggingface.co/datasets/fmars/wiki_stem
In computing, cp is a command in various Unix and Unix-like operating systems for copying files and directories. The command has three principal modes of operation, expressed by the types of arguments presented to the program for copying a file to another file, one or more files to a directory, or for copying entire directories to another directory. The utility further accepts various command line option flags to detail the operations performed
https://huggingface.co/datasets/fmars/wiki_stem
DirSync Pro is an open-source file synchronization and backup utility for Windows, Linux and macOS. DirSync Pro is based on the program Directory Synchronize (DirSync), which was first released in February 2003 by Elias Gerber. He subsequently developed it with Frank Gerbig and T
https://huggingface.co/datasets/fmars/wiki_stem
FreeFileSync is a free and open-source program used for file synchronization. It is available on Windows, Linux and macOS. The project is backed by donations
https://huggingface.co/datasets/fmars/wiki_stem
Peripheral Interchange Program (PIP) was a utility to transfer files on and between devices on Digital Equipment Corporation's computers. It was first implemented on the PDP-6 architecture by Harrison "Dit" Morse early in the 1960s. It was subsequently implemented for DEC's operating systems for PDP-10, PDP-11, and PDP-8 architectures
https://huggingface.co/datasets/fmars/wiki_stem
Synkron is an open-source multiplatform utility designed for file synchronization of two or more folders, supporting synchronization across computers. It is written in C++ and uses the Qt4 libraries. Synkron is distributed under the terms of the GPL v2
https://huggingface.co/datasets/fmars/wiki_stem
TeraCopy is a freemium file transfer utility designed as an alternative for the built-in Windows Explorer file transfer feature. Its focus is data integrity, file transfer reliability and the ability to pause or resume file transfers. TeraCopy uses dynamically adjusted buffers to reduce seek times
https://huggingface.co/datasets/fmars/wiki_stem
Ultracopier is file-copying software for Windows, macOS, and Linux. It supersedes SuperCopier. Main features include pause/resume of transfers, dynamic speed limitation, on-error resume, error/collision management, data security, intelligent reorganization of transfers to optimize performance, and plugins. The Normal and Ultimate versions share exactly the same source code under the same licence; the Ultimate version merely includes some alternate plugins. All versions are without DRM (explicitly banned by the GPLv3 license) and can be redistributed freely
https://huggingface.co/datasets/fmars/wiki_stem