The Materials Science Laboratory (MSL) of the European Space Agency is a payload on board the International Space Station for materials science experiments in low gravity. It is installed in NASA's first Materials Science Research Rack, located in the Destiny laboratory of the ISS. Its purpose is to process material samples in different ways: directional solidification of metals and alloys, crystal growth of semiconducting materials, thermophysical-property and diffusion experiments on alloys and glass-forming materials, and investigations of polymers and ceramics at the liquid–solid phase transition.
https://huggingface.co/datasets/fmars/wiki_stem
In continuum physics, materials with memory, also referred to as materials with hereditary effects, are a class of materials whose constitutive equations contain a dependence upon the past history of thermodynamic, kinetic, electromagnetic, or other kinds of state variables. The study of these materials arises from the pioneering articles of Ludwig Boltzmann and Vito Volterra, in which they sought an extension of the concept of an elastic material. The key assumption of their theory was that the local stress value at a time t depends upon the history of the local deformation up to t.
Materiomics is the holistic study of material systems. Materiomics examines links between physicochemical material properties and material characteristics and function. The focus of materiomics is system functionality and behavior, rather than a piecewise collection of properties, a paradigm similar to systems biology
A Maxwell material is the simplest model of a viscoelastic material that shows the properties of a typical liquid: viscous flow on long timescales, but additional elastic resistance to fast deformations. It is named for James Clerk Maxwell, who proposed the model in 1867.
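Under constant strain, the series spring–dashpot picture of the Maxwell model gives exponential stress relaxation, σ(t) = E·ε₀·exp(−t/τ) with τ = η/E. A minimal numerical sketch (the parameter values are illustrative, not from the source):

```python
import numpy as np

def maxwell_stress_relaxation(t, strain0, E, eta):
    """Stress in a Maxwell element held at constant strain strain0.

    A spring (modulus E) in series with a dashpot (viscosity eta) relaxes
    exponentially with time constant tau = eta / E.
    """
    tau = eta / E
    return E * strain0 * np.exp(-t / tau)

# Illustrative values: E = 1 GPa, eta = 1 GPa*s, so tau = 1 s.
t = np.linspace(0.0, 5.0, 6)
sigma = maxwell_stress_relaxation(t, strain0=0.01, E=1e9, eta=1e9)
# Stress starts at E*strain0 = 10 MPa and decays toward zero.
```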
Mechanical testing covers a wide range of tests, which can be divided broadly into two types: those that aim to determine a material's mechanical properties independent of geometry, and those that determine the response of a structure to a given action, e.g.
Mechanically stimulated gas emission (MSGE) is a complex phenomenon embracing various physical and chemical processes occurring on the surface and in the bulk of a solid under applied mechanical stress and resulting in the emission of gases. MSGE is part of the more general phenomenon of mechanically stimulated neutral emission (MSNE). The specific characteristic of MSGE, as compared with MSNE, is that the emitted neutral particles are limited to gas molecules.
Melting, or fusion, is a physical process that results in the phase transition of a substance from a solid to a liquid. This occurs when the internal energy of the solid increases, typically by the application of heat or pressure, which increases the substance's temperature to the melting point. At the melting point, the ordering of ions or molecules in the solid breaks down to a less ordered state, and the solid melts to become a liquid
A mesocrystal is a material structure composed of numerous small crystals of similar size and shape, which are arranged in a regular periodic pattern. It is a form of oriented aggregation, where the small crystals have parallel crystallographic alignment but are spatially separated. When the sizes of individual components are at the nanoscale, mesocrystals represent a new class of nanostructured solids made from crystallographically oriented nanoparticles.
Metallurgical failure analysis is the process of determining the mechanism that has caused a metal component to fail. It can identify the cause of failure, providing insight into the root cause and potential solutions to prevent similar failures in the future, as well as culpability, which is important in legal cases. Resolving the source of metallurgical failures can be of financial interest to companies.
Micromeritics is the science and technology of small particles pioneered by Joseph M. DallaValle. It is thus the study of the fundamental and derived properties of individual as well as a collection of particles
Micronization is the process of reducing the average diameter of a solid material's particles. Traditional techniques for micronization focus on mechanical means, such as milling and grinding. Modern techniques make use of the properties of supercritical fluids and manipulate the principles of solubility
The microplane model, conceived in 1984, is a material constitutive model for progressive softening damage. Its advantage over the classical tensorial constitutive models is that it can capture the oriented nature of damage such as tensile cracking, slip, friction, and compression splitting, as well as the orientation of fiber reinforcement. Another advantage is that the anisotropy of materials such as gas shale or fiber composites can be effectively represented
Microstructure is the very small scale structure of a material, defined as the structure of a prepared surface of material as revealed by an optical microscope above 25× magnification. The microstructure of a material (such as metals, polymers, ceramics or composites) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behaviour or wear resistance. These properties in turn govern the application of these materials in industrial practice
Microthermal analysis is a materials characterization technique which combines the thermal analysis principles of differential scanning calorimetry (DSC) with the high spatial resolution of scanning probe microscopy. The instrument consists of a thermal probe, essentially a fine platinum/rhodium alloy wire (5 μm in diameter) coated with a sheath of silver (Wollaston wire). The wire is bent into a V-shape and the silver sheath is etched away to form a fine pointed tip.
A miscibility gap is a region in a phase diagram for a mixture of components where the mixture exists as two or more phases – any region of composition of mixtures where the constituents are not completely miscible. The IUPAC Gold Book defines miscibility gap as "Area within the coexistence curve of an isobaric phase diagram (temperature vs composition) or an isothermal phase diagram (pressure vs composition)." A miscibility gap between isostructural phases may be described as the solvus, a term also used to describe the boundary on a phase diagram between a miscibility gap and other phases.
Mohr–Coulomb theory is a mathematical model (see yield surface) describing the response of brittle materials such as concrete, or rubble piles, to shear stress as well as normal stress. Most of the classical engineering materials follow this rule in at least a portion of their shear failure envelope. Generally the theory applies to materials for which the compressive strength far exceeds the tensile strength
In materials science, MXenes are a class of two-dimensional inorganic compounds that consist of atomically thin layers of transition metal carbides, nitrides, or carbonitrides. MXenes accept a variety of hydrophilic terminations. MXenes were first reported in 2012, and research on them has since grown exponentially.
Mycelium, the fungal equivalent of roots in plants, has been identified as an ecologically friendly substitute for a range of materials across different industries, including but not limited to packaging, fashion, and building materials. Such substitutes present a biodegradable alternative (also known as a "Living Building Material") to conventional materials. Mycelium was first notably examined as an ecologically friendly material alternative in 2007.
Nanochannel glass materials are an experimental mask technology that is an alternative method for fabricating nanostructures, although optical lithography is the predominant patterning technique. Nanochannel glass materials are complex glass structures containing large numbers of parallel hollow channels. In its simplest form, the hollow channels are arranged in geometric arrays with packing densities as great as 10¹¹ channels/cm².
Nanofluidic circuitry is a nanotechnology aiming to control fluids at the nanometer scale. Due to the effect of the electrical double layer within the fluid channel, nanofluids behave significantly differently from their microfluidic counterparts. Its typical characteristic dimensions fall within the range of 1–100 nm.
Nanofluidics is the study of the behavior, manipulation, and control of fluids that are confined to structures of nanometer (typically 1–100 nm) characteristic dimensions (1 nm = 10⁻⁹ m). Fluids confined in these structures exhibit physical behaviors not observed in larger structures, such as those of micrometer dimensions and above, because the characteristic physical scaling lengths of the fluid (e.g.
Nanolamination is the production of fully dense, ultra-fine-grained solids that exhibit a high concentration of interface defects. The properties of fabricated nanolaminates depend on their compositions and thicknesses. Nanolaminates can be grown using atom-by-atom deposition techniques designed with different stacking sequences and layer thicknesses.
Nanotribology is the branch of tribology that studies friction, wear, adhesion and lubrication phenomena at the nanoscale, where atomic interactions and quantum effects are not negligible. The aim of this discipline is characterizing and modifying surfaces for both scientific and technological purposes. Nanotribological research has historically involved both direct and indirect methodologies
The NASLA (Nanostructured Anti-septical Coatings) project involves four small and medium enterprises (SMEs) sharing one common technological problem: the need for antiseptic functionality in their products. The project, funded by the European Union's Seventh Framework Programme, aims at creating new products and knowledge in antiseptic coatings suitable for application on a large variety of surfaces. NASLA's results are hoped to have a clear and immediate exploitation potential to improve or develop new products currently commercialized by the four SMEs: biomedical implants for DIPROMED, agro/food industry equipment for ALCE Calidad and EASRETH, and personnel protective systems for Aero Sekur.
In engineering and materials science, necking is a mode of tensile deformation where relatively large amounts of strain localize disproportionately in a small region of the material. The resulting prominent decrease in local cross-sectional area provides the basis for the name "neck". Because the local strains in the neck are large, necking is often closely associated with yielding, a form of plastic deformation associated with ductile materials, often metals or polymers
Negative thermal expansion (NTE) is an unusual physicochemical process in which some materials contract upon heating, rather than expanding as most materials do. The best-known material with NTE is water between 0 and 4 °C. Water's NTE is the reason why ice floats, rather than sinks, in liquid water.
A non-stick surface is engineered to reduce the ability of other materials to stick to it. Non-stick cookware is a common application, where the non-stick coating allows food to brown without sticking to the pan. Non-stick is often used to refer to surfaces coated with polytetrafluoroethylene (PTFE), a well-known brand of which is Teflon
Nondestructive testing (NDT) is any of a wide group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. The terms nondestructive examination (NDE), nondestructive inspection (NDI), and nondestructive evaluation (NDE) are also commonly used to describe this technology. Because NDT does not permanently alter the article being inspected, it is a highly valuable technique that can save both money and time in product evaluation, troubleshooting, and research.
An ohmic contact is a non-rectifying electrical junction: a junction between two conductors that has a linear current–voltage (I–V) curve as with Ohm's law. Low-resistance ohmic contacts are used to allow charge to flow easily in both directions between the two conductors, without blocking due to rectification or excess power dissipation due to voltage thresholds. By contrast, a junction or contact that does not demonstrate a linear I–V curve is called non-ohmic
An optical modulator is an optical device used to modulate a beam of light. In fiber-optic communication, it acts as a transmitter component, converting information into an optical signal carried through an optical fiber (optical waveguide) or another optical-frequency transmission medium. Depending on which parameter of the light beam is manipulated, such devices are classified as amplitude modulators (the majority), phase modulators, polarization modulators, etc.
The optical properties of a material define how it interacts with light. The optical properties of matter are studied in optical physics, a subfield of optics. The optical properties of matter include: refractive index; dispersion; transmittance and transmission coefficient; absorption; scattering; turbidity; reflectance and reflectivity (reflection coefficient); albedo; perceived color; fluorescence; phosphorescence; photoluminescence; optical bistability; dichroism; birefringence; optical activity; and photosensitivity. A basic distinction is between isotropic materials, which exhibit the same properties regardless of the direction of the light, and anisotropic ones, which exhibit different properties when light passes through them in different directions.
A flow graph is a form of digraph associated with a set of linear algebraic or differential equations: "A signal flow graph is a network of nodes (or points) interconnected by directed branches, representing a set of linear algebraic equations. The nodes in a flow graph are used to represent the variables, or parameters, and the connecting branches represent the coefficients relating these variables to one another. The flow graph is associated with a number of simple rules which enable every possible solution [related to the equations] to be obtained
In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. Frames are used in error detection and correction and the design and analysis of filter banks and more generally in applied mathematics, computer science, and engineering
Free convolution is the free probability analog of the classical notion of convolution of probability measures. Due to the non-commutative nature of free probability theory, one has to talk separately about additive and multiplicative free convolution, which arise from addition and multiplication of free random variables (see below; in the classical case, what would be the analog of free multiplicative convolution can be reduced to additive convolution by passing to logarithms of random variables). These operations have some interpretations in terms of empirical spectral measures of random matrices
A frequency band is an interval in the frequency domain, delimited by a lower frequency and an upper frequency. The term may refer to a radio band (such as wireless communication standards set by the International Telecommunication Union) or an interval of some other spectrum. The frequency range of a system is the range over which it is considered to provide satisfactory performance, such as a useful level of signal with acceptable distortion characteristics
In signal processing and electronics, the frequency response of a system is the quantitative measure of the magnitude and phase of the output as a function of input frequency. The frequency response is widely used in the design and analysis of systems, such as audio and control systems, where it simplifies mathematical analysis by converting governing differential equations into algebraic equations. In an audio system, it may be used to minimize audible distortion by designing components (such as microphones, amplifiers and loudspeakers) so that the overall response is as flat (uniform) as possible across the system's bandwidth.
Gain compression is a reduction in differential or slope gain caused by nonlinearity of the transfer function of the amplifying device. This nonlinearity may be caused by heat due to power dissipation or by overdriving the active device beyond its linear region. It is a large-signal phenomenon of circuits
In telecommunication, the term gating has the following meanings: The process of selecting only those portions of a wave between specified time intervals or between specified amplitude limits. The controlling of signals by means of combinational logic elements. A process in which a predetermined set of conditions, when established, permits a second process to occur
Signal gating is a concept commonly used in the field of electronics and signal processing. It refers to the process of controlling the flow of signals based on certain conditions or criteria. The goal of signal gating is to selectively allow or block the transmission of signals through a circuit or system
Generalized pencil-of-function method (GPOF), also known as matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency. The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems by its transient response, building on Sarkar's past work on the original pencil-of-function method
Within signal processing, in many cases only one image with noise is available, and averaging is then realized in a local neighbourhood. Results are acceptable if the noise is smaller in size than the smallest objects of interest in the image, but blurring of edges is a serious disadvantage. In the case of smoothing within a single image, one has to assume that there are no changes in the gray levels of the underlying image data
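A minimal sketch of such local-neighbourhood averaging, here a 3×3 box blur with edge padding (library implementations differ in border handling; this is an illustration, not a production filter):

```python
import numpy as np

def box_blur_3x3(img):
    """Replace each pixel by the mean of its 3x3 neighbourhood (edge-padded)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + img.shape[0],
                          1 + dj:1 + dj + img.shape[1]]
    return out / 9.0
```

For independent noise, averaging over 9 pixels reduces the noise variance roughly ninefold, at the cost of blurring edges, as noted above.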
In signal processing, group delay and phase delay are two related ways of describing how a signal's frequency components are delayed in time when passing through a linear time-invariant (LTI) system (such as a microphone, coaxial cable, amplifier, loudspeaker, telecommunications system, ethernet cable, digital filter, or analog filter). Phase delay describes the time shift of a sinusoidal component (a sine wave in steady state). Group delay describes the time shift of the envelope of a wave packet, a "pack" or "group" of oscillations centered around one frequency that travel together, formed for instance by multiplying (amplitude modulation) a sine wave by an envelope (such as a tapering function)
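For a pure D-sample delay, both delays equal D at every frequency, which makes it a convenient numerical check: phase delay is −φ(ω)/ω and group delay is −dφ/dω. A sketch evaluating both from the frequency response (the 8-tap system and frequency grid are illustrative choices):

```python
import numpy as np

D = 3                                   # a pure 3-sample delay
h = np.zeros(8)
h[D] = 1.0                              # impulse response
w = np.linspace(0.1, np.pi - 0.1, 64)   # avoid w = 0 (division below)
n = np.arange(len(h))
H = np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w])
phase = np.unwrap(np.angle(H))          # phi(w) = -w*D for this system
phase_delay = -phase / w                # tau_p(w) = -phi(w)/w
group_delay = -np.gradient(phase, w)    # tau_g(w) = -dphi/dw
# Both arrays are (numerically) constant and equal to D = 3.
```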
In digital signal processing, half-band filters are widely used for their efficiency in multi-rate applications. A half-band filter is a low-pass filter that reduces the maximum bandwidth of sampled data by a factor of 2 (one octave). When multiple octaves of reduction are needed, a cascade of half-band filters is common
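A half-band low-pass FIR can be sketched as a windowed sinc with cutoff at a quarter of the sample rate; the defining property is that every coefficient at an even offset from the center tap is exactly zero, which roughly halves the multiplies in a decimate-by-2 stage. The tap count and window below are illustrative choices, not a recommendation:

```python
import numpy as np

def halfband_fir(num_taps=31):
    """Windowed-sinc half-band low-pass FIR with cutoff at fs/4.

    np.sinc(n/2) vanishes at every nonzero even n, so all taps at even
    offsets from the center are exactly zero.
    """
    assert num_taps % 2 == 1, "use an odd tap count"
    n = np.arange(num_taps) - (num_taps - 1) // 2
    return 0.5 * np.sinc(n / 2.0) * np.hamming(num_taps)

h = halfband_fir()
# One decimate-by-2 stage: filter, then keep every other sample.
x = np.random.default_rng(0).standard_normal(256)
y = np.convolve(x, h)[::2]
```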
A head-related transfer function (HRTF), also known as anatomical transfer function (ATF), or a head shadow, is a response that characterizes how an ear receives a sound from a point in space. As sound strikes the listener, the size and shape of the head, ears, and ear canal, the density of the head, and the size and shape of the nasal and oral cavities all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5 kHz with a primary resonance of +17 dB at 2,700 Hz.
The Hexagonal Efficient Coordinate System (HECS), formerly known as Array Set Addressing (ASA), is a coordinate system for hexagonal grids that allows hexagonally sampled images to be efficiently stored and processed on digital systems. HECS represents the hexagonal grid as a set of two interleaved rectangular sub-arrays, which can be addressed by normal integer row and column coordinates and are distinguished with a single binary coordinate. Hexagonal sampling is the optimal approach for isotropically band-limited two-dimensional signals and its use provides a sampling efficiency improvement of 13
The higher-order sinusoidal input describing functions (HOSIDF) were first introduced by dr. ir. P
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable, H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function 1/(πt) (see § Definition). The Hilbert transform has a particularly simple representation in the frequency domain: it imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see § Relationship with the Fourier transform).
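The ±90° phase shift can be checked numerically with an FFT-based discrete Hilbert transform (a sketch of the frequency-domain description, multiplying by −j·sgn(f)). The Hilbert transform of cos(t) is sin(t):

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform via the -j*sgn(f) frequency-domain filter."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))
    return np.fft.ifft(-1j * np.sign(f) * X).real

# Positive-frequency components are shifted by -90 degrees, so
# cos(t) maps to sin(t) (and sin(t) maps to -cos(t)).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
assert np.allclose(hilbert_transform(np.cos(t)), np.sin(t), atol=1e-10)
```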
Homomorphic filtering is a generalized technique for signal and image processing, involving a nonlinear mapping to a different domain in which linear filter techniques are applied, followed by mapping back to the original domain. This concept was developed in the 1960s by Thomas Stockham, Alan V. Oppenheim, and Ronald W
A sinusoid with modulation can be decomposed into, or synthesized from, two amplitude-modulated sinusoids that are offset in phase by one-quarter cycle (90 degrees or π/2 radians). All three sinusoids have the same center frequency. The two amplitude-modulated sinusoids are known as the in-phase (I) and quadrature (Q) components, which describes their relationships with the amplitude- and phase-modulated carrier
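The quarter-cycle decomposition follows from the identity cos(ωt + φ) = cos φ · cos ωt − sin φ · sin ωt, so I = A·cos φ and Q = A·sin φ. A numerical check (carrier frequency and modulation waveforms chosen purely for illustration):

```python
import numpy as np

fc = 10.0                                  # carrier frequency in Hz (illustrative)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
A = 1.0 + 0.5 * np.cos(2 * np.pi * t)      # slow amplitude envelope
phi = 0.3 * np.sin(2 * np.pi * t)          # slow phase modulation
I = A * np.cos(phi)                        # in-phase component
Q = A * np.sin(phi)                        # quadrature component
carrier = 2 * np.pi * fc * t
s = I * np.cos(carrier) - Q * np.sin(carrier)
# s reproduces the amplitude- and phase-modulated carrier A*cos(carrier + phi).
assert np.allclose(s, A * np.cos(carrier + phi))
```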
Instantaneous phase and frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase (also known as local phase or simply phase) of a complex-valued function s(t) is the real-valued function φ(t) = arg{s(t)}, where arg is the complex argument function. The instantaneous frequency is the temporal rate of change of the instantaneous phase.
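For a complex exponential s(t) = exp(j2πf₀t), the instantaneous phase is 2πf₀t and its rate of change recovers f₀. A sketch of computing both numerically (f₀ and the sampling grid are illustrative):

```python
import numpy as np

f0 = 5.0                                         # Hz, illustrative
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
s = np.exp(1j * 2 * np.pi * f0 * t)
phase = np.unwrap(np.angle(s))                   # phi(t) = arg{s(t)}, unwrapped
inst_freq = np.gradient(phase, t) / (2 * np.pi)  # (1/2pi) * dphi/dt, in Hz
# inst_freq is (numerically) constant and equal to f0.
```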
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration started as a method to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity
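Determining displacement from velocity, mentioned above, amounts to computing ∫v dt. A discrete sketch with the trapezoidal rule (the velocity profile v(t) = 2t is an illustrative choice; its exact integral over [0, 3] is 9):

```python
import numpy as np

t = np.linspace(0.0, 3.0, 301)   # time samples
v = 2.0 * t                      # velocity v(t) = 2t
# Trapezoidal rule: sum of average segment heights times segment widths.
displacement = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))
# Exact for linear integrands, so displacement is 9 up to roundoff.
```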
Kernel phases are observable quantities used in high-resolution astronomical imaging for superresolution image creation. They can be seen as a generalization of closure phases to redundant arrays. For this reason, when the wavefront quality requirements are met, kernel phases are an alternative to aperture masking interferometry that can be executed without a mask while retaining its phase-error rejection properties.
In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(R) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space
A low-pass filter is a filter that passes signals with a frequency lower than a selected cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications
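A minimal digital sketch of such a filter is a single-pole IIR low-pass, a discrete analogue of an RC filter (the cutoff, sample rate, and test tones below are illustrative):

```python
import numpy as np

def single_pole_lowpass(x, fc, fs):
    """First-order IIR low-pass: y[n] = y[n-1] + a*(x[n] - y[n-1]).

    The smoothing factor a is chosen so the -3 dB cutoff is approximately
    fc (Hz) at sample rate fs (Hz), mimicking an analog RC low-pass.
    """
    a = 1.0 - np.exp(-2.0 * np.pi * fc / fs)
    y = np.empty(len(x))
    acc = 0.0
    for i, xi in enumerate(x):
        acc += a * (xi - acc)
        y[i] = acc
    return y

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
low = single_pole_lowpass(np.sin(2 * np.pi * 2 * t), fc=10.0, fs=fs)     # passes
high = single_pole_lowpass(np.sin(2 * np.pi * 200 * t), fc=10.0, fs=fs)  # attenuated
```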
In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t) where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically
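A direct numerical illustration of y(t) = (x ∗ h)(t), with a 3-tap moving average as the impulse response (chosen purely for illustration):

```python
import numpy as np

h = np.array([1 / 3, 1 / 3, 1 / 3])      # impulse response: 3-tap moving average
x = np.array([0.0, 3.0, 6.0, 3.0, 0.0])  # arbitrary input signal
y = np.convolve(x, h)                    # y = x * h (full convolution)
# Each fully-overlapped output sample is the average of 3 inputs,
# e.g. y[2] = (x[0] + x[1] + x[2]) / 3 = 3.
```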
Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary D.
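The greedy loop at the heart of MP is short: correlate the residual with every atom, pick the strongest, subtract its contribution, repeat. A sketch for real-valued, unit-norm dictionary columns (the fixed iteration count is a simplification; real implementations stop on a residual-energy threshold):

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Greedy matching pursuit over a dictionary D with unit-norm columns."""
    residual = x.astype(float)
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual                     # correlate with every atom
        k = int(np.argmax(np.abs(corr)))          # best-matching atom
        coeffs[k] += corr[k]                      # accumulate its coefficient
        residual = residual - corr[k] * D[:, k]   # remove its contribution
    return coeffs, residual
```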
The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see the discussion below), also having applications in signal processing
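The edge-preserving behaviour can be seen in a few lines: a sliding-window median removes an isolated spike without smearing a step edge. This self-contained sketch mirrors what library routines such as SciPy's scipy.signal.medfilt do:

```python
import numpy as np

def median_filter_1d(x, size=3):
    """Sliding-window median with edge padding."""
    pad = size // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + size]) for i in range(len(x))])

x = np.array([1.0, 1.0, 9.0, 1.0, 1.0, 5.0, 5.0, 5.0])  # spike, then a step edge
y = median_filter_1d(x)
# The impulse (9.0) is removed while the step from 1 to 5 stays sharp:
# y == [1, 1, 1, 1, 1, 5, 5, 5]
```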
Real-time analog signal processing (R-ASP), as an alternative to DSP-based processing, might be defined as the manipulation of signals in their pristine analog form and in real time to realize specific operations enabling microwave or millimeter-wave and terahertz applications. The exploding demand for higher spectral efficiency in radio has spurred a renewed interest in analog real-time components and systems beyond conventional purely digital signal processing techniques. Although digital devices are unrivaled at low microwave frequencies due to their high flexibility, compact size, low cost and strong reliability, they suffer from major issues at higher microwave and millimeter-wave frequencies, such as poor performance, the high cost of A/D and D/A converters, and excessive power consumption.
A modified Wigner distribution function is a variation of the Wigner distribution function (abbreviated here as WD rather than WDF) with reduced or removed cross-terms. The Wigner distribution was first proposed in 1932 by Eugene Wigner as a correction to classical statistical mechanics. The Wigner distribution function, or Wigner–Ville distribution (WVD) for analytic signals, also has applications in time–frequency analysis.
An array is simply a group of objects, and the array factor is a measure of how much a specific characteristic changes because of the grouping. This phenomenon is observed when antennas are grouped together. The radiation (or reception) pattern of the antenna group is considerably different from that of a single antenna
In signal processing, multidimensional empirical mode decomposition (multidimensional EMD) is an extension of the one-dimensional (1-D) EMD algorithm to a signal encompassing multiple dimensions. The Hilbert–Huang empirical mode decomposition (EMD) process decomposes a signal into intrinsic mode functions combined with the Hilbert spectral analysis, known as the Hilbert–Huang transform (HHT). The multidimensional EMD extends the 1-D EMD algorithm into multiple-dimensional signals
In signal processing, multidimensional signal processing covers all signal processing done using multidimensional signals and systems. While multidimensional signal processing is a subset of signal processing, it is unique in the sense that it deals specifically with data that can only be adequately detailed using more than one dimension. In m-D digital signal processing, useful data is sampled in more than one dimension
Multiscale geometric analysis or geometric multiscale analysis is an emerging area of high-dimensional signal processing and data analysis. Related topics include wavelets, scale space, multi-scale approaches, multiresolution analysis, the singular value decomposition, and compressed sensing. Further reading: Multiscale Geometry and Analysis in High Dimensions, September 7 – December 17, 2004.
In signal processing, multitaper is a spectral density estimation technique developed by David J. Thomson. It can estimate the power spectrum SX of a stationary ergodic finite-variance random process X, given a finite contiguous realization of X as data
https://huggingface.co/datasets/fmars/wiki_stem
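A minimal numerical sketch of the multitaper idea: compute periodograms of several orthogonally tapered copies of the signal and average them to reduce the variance of the spectral estimate. For simplicity this uses sine tapers (Riedel–Sidorenko) as a stand-in for the Slepian (DPSS) tapers that Thomson's method specifies; the parameter choices are illustrative, not canonical.

```python
import numpy as np

def sine_tapers(n, k):
    """First k sine tapers (Riedel-Sidorenko), an orthonormal family that is a
    simple alternative to the Slepian (DPSS) tapers used by Thomson's method."""
    i = np.arange(1, n + 1)
    return np.array([np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * i / (n + 1))
                     for j in range(1, k + 1)])

def multitaper_psd(x, k=5):
    """Average the periodograms of k orthogonally tapered copies of x."""
    n = len(x)
    tapers = sine_tapers(n, k)                       # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

# A 50 Hz tone sampled at 1 kHz for 1 s: the estimate should peak near bin 50.
fs, n = 1000, 1000
t = np.arange(n) / fs
psd = multitaper_psd(np.sin(2 * np.pi * 50 * t))
print(np.argmax(psd))  # peak bin ≈ 50
```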
MUSHRA stands for Multiple Stimuli with Hidden Reference and Anchor and is a methodology for conducting a codec listening test to evaluate the perceived quality of the output from lossy audio compression algorithms. It is defined by ITU-R recommendation BS.1534-3
https://huggingface.co/datasets/fmars/wiki_stem
In linear algebra, the coherence or mutual coherence of a matrix A is defined as the maximum absolute value of the cross-correlations between the columns of A. Formally, let a_1, …, a_m ∈ C^d be the columns of the matrix A, which are assumed to be normalized such that a_i^H a_i = 1
https://huggingface.co/datasets/fmars/wiki_stem
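The definition above translates directly into code: normalize the columns, form the Gram matrix of inner products, and take the largest off-diagonal magnitude. A small NumPy sketch:

```python
import numpy as np

def mutual_coherence(A):
    """Maximum absolute inner product between distinct normalized columns of A."""
    A = A / np.linalg.norm(A, axis=0)   # normalize columns so a_i^H a_i = 1
    G = np.abs(A.conj().T @ A)          # |cross-correlations| between columns
    np.fill_diagonal(G, 0.0)            # ignore each column's self-correlation
    return G.max()

# Orthonormal columns give coherence 0; a repeated column gives coherence 1.
I = np.eye(3)
print(mutual_coherence(I))                              # 0.0
print(mutual_coherence(np.column_stack([I, I[:, 0]])))  # 1.0
```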
The near–far problem or hearability problem is the effect of a strong signal from a near signal source in making it hard for a receiver to hear a weaker signal from a further source due to adjacent-channel interference, co-channel interference, distortion, capture effect, dynamic range limitation, or the like. Such a situation is common in wireless communication systems, in particular CDMA. In some signal jamming techniques, the near–far problem is exploited to disrupt ("jam") communications
https://huggingface.co/datasets/fmars/wiki_stem
Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations
https://huggingface.co/datasets/fmars/wiki_stem
In signal processing, noise is a general term for unwanted (and, in general, unknown) modifications that a signal may suffer during capture, storage, transmission, processing, or conversion. Sometimes the word is also used to mean signals that are random (unpredictable) and carry no useful information, even when they do not interfere with other signals or were introduced intentionally, as in comfort noise. Noise reduction, the recovery of the original signal from the noise-corrupted one, is a very common goal in the design of signal processing systems, especially filters
https://huggingface.co/datasets/fmars/wiki_stem
Noiselets are functions that give the worst-case behavior for Haar wavelet packet analysis. In other words, noiselets are totally incompressible by Haar wavelet packet analysis. Like the canonical and Fourier bases, which are mutually incoherent, noiselets are perfectly incoherent with the Haar basis
https://huggingface.co/datasets/fmars/wiki_stem
Nominal level is the operating level at which an electronic signal processing device is designed to operate. The electronic circuits that make up such equipment are limited in the maximum signal they can handle and the low-level internally generated electronic noise they add to the signal. The difference between the internal noise and the maximum level is the device's dynamic range
https://huggingface.co/datasets/fmars/wiki_stem
In signal processing, nonlinear multidimensional signal processing (NMSP) covers all signal processing using nonlinear multidimensional signals and systems. Nonlinear multidimensional signal processing is a subset of signal processing (multidimensional signal processing). Nonlinear multidimensional systems are used in a broad range of fields such as imaging, teletraffic, communications, hydrology, geology, and economics
https://huggingface.co/datasets/fmars/wiki_stem
In electronics, a norator is a theoretical linear, time-invariant one-port which can have an arbitrary current and voltage between its terminals. A norator represents a controlled voltage or current source with infinite gain. Inserting a norator in a circuit schematic provides whatever current and voltage the outside circuit demands, in particular, the demands of Kirchhoff's circuit laws
https://huggingface.co/datasets/fmars/wiki_stem
In electronics, a nullator is a theoretical linear, time-invariant one-port defined as having zero current and voltage across its terminals. Nullators are strange in the sense that they simultaneously have properties of both a short (zero voltage) and an open circuit (zero current). They are neither current nor voltage sources, yet both at the same time
https://huggingface.co/datasets/fmars/wiki_stem
Optomyography (OMG) was proposed in 2015 as a technique for monitoring muscular activity. OMG can be used for the same applications as electromyography (EMG) and mechanomyography (MMG). However, OMG offers a superior signal-to-noise ratio and improved robustness against the disturbing factors and limitations of EMG and MMG
https://huggingface.co/datasets/fmars/wiki_stem
In rotordynamics, order tracking is a family of signal processing tools aimed at transforming a measured signal from the time domain to the angular (or order) domain. These techniques are applied to asynchronously sampled signals (i.e.
https://huggingface.co/datasets/fmars/wiki_stem
Pairwise error probability is the probability that, for a transmitted signal X, its corresponding but distorted version X̂ will be received. It is called the pairwise error probability because it concerns a pair of signal vectors in a signal constellation. It is mainly used in communication systems
https://huggingface.co/datasets/fmars/wiki_stem
In signal processing, a passthrough is a logic gate that enables a signal to "pass through" unaltered or with little alteration. Sometimes the concept of a passthrough can also involve daisy chain logic. Examples of passthroughs include analog passthrough (for digital TV), the Sega 32X (a passthrough for Sega Genesis video games), and VCRs, DVD recorders, etc
https://huggingface.co/datasets/fmars/wiki_stem
In electronic amplifiers, the phase margin (PM) is the difference between the phase lag φ (< 0) of the amplifier's output signal (relative to its input) and −180°, measured at zero dB gain, i.e. unity gain, where the output signal has the same amplitude as the input
https://huggingface.co/datasets/fmars/wiki_stem
In signal processing, phase response is the relationship between the phase of a sinusoidal input and the output signal passing through any device that accepts input and produces an output signal, such as an amplifier or a filter. Amplifiers, filters, and other devices are often categorized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input, usually a function of the frequency
https://huggingface.co/datasets/fmars/wiki_stem
A phase vocoder is a type of vocoder-purposed algorithm which can interpolate information present in the frequency and time domains of audio signals by using phase information extracted from a frequency transform. The computer algorithm allows frequency-domain modifications to a digital sound file (typically time expansion/compression and pitch shifting). At the heart of the phase vocoder is the short-time Fourier transform (STFT), typically coded using fast Fourier transforms
https://huggingface.co/datasets/fmars/wiki_stem
In mathematics, in functional analysis, several different wavelets are known by the name Poisson wavelet. In one context, the term "Poisson wavelet" is used to denote a family of wavelets labeled by the set of positive integers, the members of which are associated with the Poisson probability distribution. These wavelets were first defined and studied by Karlene A
https://huggingface.co/datasets/fmars/wiki_stem
In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system, such as: stability, causal system / anticausal system, region of convergence (ROC), and minimum phase / non-minimum phase. A pole–zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O. A pole–zero plot can represent either a continuous-time (CT) or a discrete-time (DT) system
https://huggingface.co/datasets/fmars/wiki_stem
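The poles and zeros are simply the roots of the transfer function's denominator and numerator polynomials, so they can be computed directly. A small sketch with a hypothetical continuous-time transfer function H(s) = (s + 1) / (s² + 3s + 2); all poles in the left half-plane indicates a stable CT system:

```python
import numpy as np

def poles_zeros(num, den):
    """Zeros and poles of H = num/den, coefficients given highest order first."""
    return np.roots(num), np.roots(den)

# Hypothetical example: H(s) = (s + 1) / ((s + 1)(s + 2))
zeros, poles = poles_zeros([1, 1], [1, 3, 2])
print(zeros)                          # zero at s = -1
print(poles)                          # poles at s = -1 and s = -2
print(np.all(poles.real < 0))         # True: all poles in the left half-plane
```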
In signal processing, pre-emphasis is a technique to protect against anticipated noise. The idea is to boost (and hence distort) the frequency range that is most susceptible to noise beforehand, so that after a noisy process (transmission over cable, tape recording, ...)
https://huggingface.co/datasets/fmars/wiki_stem
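A common first-order realization of this idea is the filter y[n] = x[n] − α·x[n−1], which boosts high frequencies before the noisy process; the matching de-emphasis filter inverts it afterwards. The coefficient 0.95 below is a typical but arbitrary choice, and the whole sketch is illustrative rather than any specific standard's filter:

```python
import numpy as np

def pre_emphasis(x, alpha=0.95):
    """First-order pre-emphasis: y[n] = x[n] - alpha * x[n-1] (boosts highs)."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    y[1:] = x[1:] - alpha * x[:-1]
    return y

def de_emphasis(y, alpha=0.95):
    """Inverse filter: x[n] = y[n] + alpha * x[n-1] restores the original."""
    x = np.empty(len(y), dtype=float)
    x[0] = y[0]
    for n in range(1, len(y)):
        x[n] = y[n] + alpha * x[n - 1]
    return x

x = np.sin(np.linspace(0, 10, 100))
roundtrip = de_emphasis(pre_emphasis(x))
print(np.allclose(roundtrip, x))   # True: de-emphasis exactly undoes pre-emphasis
```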
In a spread-spectrum system, the process gain (or "processing gain") is the ratio of the spread (or RF) bandwidth to the unspread (or baseband) bandwidth. It is usually expressed in decibels (dB). For example, if a 1 kHz signal is spread to 100 kHz, the process gain expressed as a numerical ratio would be 100000/1000 = 100
https://huggingface.co/datasets/fmars/wiki_stem
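The process-gain arithmetic from the example above is a one-liner; expressing the ratio in decibels uses 10·log10:

```python
import math

def process_gain_db(spread_bw_hz, baseband_bw_hz):
    """Process gain of a spread-spectrum system, in dB."""
    return 10 * math.log10(spread_bw_hz / baseband_bw_hz)

# The example above: a 1 kHz signal spread to 100 kHz.
print(100_000 / 1_000)              # numerical ratio: 100.0
print(process_gain_db(100e3, 1e3))  # 20.0 dB
```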
A pulse in signal processing is a rapid, transient change in the amplitude of a signal from a baseline value to a higher or lower value, followed by a rapid return to the baseline value. Pulse shapes can arise out of a process called pulse shaping. The optimum pulse shape depends on the application
https://huggingface.co/datasets/fmars/wiki_stem
Pulse compression is a signal processing technique commonly used by radar, sonar and echography to either increase the range resolution when pulse length is constrained or increase the signal-to-noise ratio when the peak power and the bandwidth (or equivalently range resolution) of the transmitted signal are constrained. This is achieved by modulating the transmitted pulse and then correlating the received signal with the transmitted pulse. The ideal model for the simplest, and historically first, type of signal a pulse radar or sonar can transmit is a truncated sinusoidal pulse (also called a CW, carrier-wave, pulse) of amplitude A and carrier frequency f_0, truncated by a rectangular function of width T
https://huggingface.co/datasets/fmars/wiki_stem
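The "modulate, then correlate" step can be demonstrated numerically with a linear-FM (chirp) pulse and a matched filter; the sample rate, pulse length, and bandwidth below are arbitrary illustrative values. Correlating the received signal with the transmitted chirp compresses the long pulse into a sharp peak at the target's delay:

```python
import numpy as np

# Illustrative parameters: sample rate, pulse length, chirp bandwidth.
fs, T, B = 1e6, 1e-3, 100e3
t = np.arange(int(fs * T)) / fs
pulse = np.exp(1j * np.pi * (B / T) * t**2)   # linear FM (chirp) pulse

# Received signal: the transmitted pulse delayed inside a longer window.
rx = np.zeros(4000, dtype=complex)
delay = 1500
rx[delay:delay + len(pulse)] = pulse

# Matched filter = cross-correlate the received signal with the pulse
# (np.correlate conjugates its second argument).
mf = np.abs(np.correlate(rx, pulse, mode='valid'))
print(np.argmax(mf))   # 1500: the compressed peak marks the pulse's delay
```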
In signal processing and telecommunication, pulse duration is the interval between the time, during the first transition, that the amplitude of the pulse reaches a specified fraction (level) of its final amplitude, and the time the pulse amplitude drops, on the last transition, to the same level. The interval between the 50% points of the final amplitude is usually used to determine or define pulse duration, and this is understood to be the case unless otherwise specified. Other fractions of the final amplitude, e
https://huggingface.co/datasets/fmars/wiki_stem
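The 50%-level definition can be applied directly to sampled data: find the first and last samples at or above half the final amplitude and take the time between them. A small sketch on a synthetic trapezoidal pulse (the sample values and rate are made up for illustration, and the peak value is used as the "final amplitude"):

```python
import numpy as np

def pulse_duration(t, x, level=0.5):
    """Time between the first and last samples at or above level * peak amplitude.
    The peak of x stands in for the pulse's final amplitude here."""
    threshold = level * x.max()
    above = np.where(x >= threshold)[0]
    return t[above[-1]] - t[above[0]]

# Trapezoidal pulse sampled at 1 kHz: 50% points at 1 ms and 8 ms.
t = np.arange(10) * 0.001
x = np.array([0, 0.5, 1, 1, 1, 1, 1, 1, 0.5, 0], dtype=float)
print(pulse_duration(t, x))   # ≈ 0.007 s
```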
In electronics and telecommunications, pulse shaping is the process of changing a transmitted pulse's waveform to optimize the signal for its intended purpose or the communication channel. This is often done by limiting the bandwidth of the transmission and filtering the pulses to control intersymbol interference. Pulse shaping is particularly important in RF communication for fitting the signal within a certain frequency band and is typically applied after line coding and modulation
https://huggingface.co/datasets/fmars/wiki_stem
The pulse width is a measure of the elapsed time between the leading and trailing edges of a single pulse of energy. The measure is typically used with electrical signals and is widely used in the fields of radar and power supplies. There are two closely related measures
https://huggingface.co/datasets/fmars/wiki_stem
Pulse-density modulation, or PDM, is a form of modulation used to represent an analog signal with a binary signal. In a PDM signal, specific amplitude values are not encoded into codewords of pulses of different weight as they would be in pulse-code modulation (PCM); rather, the relative density of the pulses corresponds to the analog signal's amplitude. The output of a 1-bit DAC is the same as the PDM encoding of the signal
https://huggingface.co/datasets/fmars/wiki_stem
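One common way to generate a PDM stream is a first-order sigma-delta modulator: each output pulse is ±1, and the quantization error is fed back so that the local density of +1 pulses tracks the input amplitude. A minimal sketch, assuming the input is scaled to [−1, 1]:

```python
import numpy as np

def pdm_encode(x):
    """First-order sigma-delta PDM encoder: emits +1/-1 pulses whose local
    density tracks the amplitude of x (assumed to lie in [-1, 1])."""
    out = np.empty(len(x))
    err = 0.0                                # accumulated quantization error
    for n, sample in enumerate(x):
        out[n] = 1.0 if sample >= err else -1.0
        err += out[n] - sample               # feed the error back
    return out

bits = pdm_encode(np.full(1000, 0.5))        # constant amplitude 0.5
print(bits.mean())                           # ≈ 0.5: density matches amplitude
```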
Pulse-width modulation (PWM), or pulse-duration modulation (PDM), is a method of controlling the average power delivered by an electrical signal. The average value of voltage (and current) fed to the load is controlled by switching the supply between 0 and 100% at a rate faster than it takes the load to change significantly. The longer the switch is on, the higher the total power supplied to the load
https://huggingface.co/datasets/fmars/wiki_stem
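The "average value tracks the on-fraction" relationship is easy to check numerically: for a 0/1 PWM waveform, the mean over whole periods equals the duty cycle. A minimal sketch with arbitrary period and resolution:

```python
import numpy as np

def pwm_wave(duty, n_periods=10, samples_per_period=100):
    """0/1 PWM waveform: `duty` is the fraction of each period spent switched on."""
    on = int(duty * samples_per_period)
    period = np.concatenate([np.ones(on), np.zeros(samples_per_period - on)])
    return np.tile(period, n_periods)

# The longer the switch is on per period, the higher the average delivered.
for duty in (0.25, 0.5, 0.75):
    print(duty, pwm_wave(duty).mean())   # mean equals the duty cycle
```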
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding
https://huggingface.co/datasets/fmars/wiki_stem
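The rounding example generalizes to a uniform quantizer: map each input to the nearest multiple of a step size, so the error never exceeds half a step. A minimal sketch:

```python
import numpy as np

def quantize(x, step):
    """Uniform mid-tread quantizer: round each value to the nearest multiple of step."""
    return step * np.round(x / step)

x = np.array([0.12, 0.49, -0.31, 0.8])
q = quantize(x, 0.25)
print(q)                      # each value snapped to a multiple of 0.25
print(np.max(np.abs(x - q)))  # error is bounded by step/2 = 0.125
```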
In telecommunication, a quasi-analog signal is a digital signal that has been converted to a form suitable for transmission over a specified analog channel. The specification of the analog channel should include frequency range, bandwidth, signal-to-noise ratio, and envelope delay distortion. When quasi-analog form of signaling is used to convey message traffic over dial-up telephone systems, it is often referred to as voice-data
https://huggingface.co/datasets/fmars/wiki_stem
Radio frequency sweep, frequency sweep, or RF sweep refers to scanning a radio frequency band to detect signals being transmitted there. A radio receiver with an adjustable receiving frequency is used to do this. A display shows the strength of the signals received at each frequency as the receiver's frequency is adjusted to sweep (scan) the desired frequency band
https://huggingface.co/datasets/fmars/wiki_stem
Word Rescue is an educational platform DOS game written by Karen Crowther (Chun) of Redwood Games and released by Apogee Software in March 1992. It was re-released in 2015 for Steam with support for Windows and Mac OS. The game also allows the player to interact with a pair of Stereoscopic Vision Glasses
https://huggingface.co/datasets/fmars/wiki_stem
World at War: Stalingrad is a 1995 computer wargame developed by Atomic Games and published by Avalon Hill. It is the second game in the World at War series, following Operation Crusader. Stalingrad was followed by D-Day: America Invades (1995)
https://huggingface.co/datasets/fmars/wiki_stem
WWF European Rampage Tour is a game based on the World Wrestling Federation (WWF), created by Arc Developments in 1992 for the Amiga, Atari ST, Commodore 64 and DOS. It capitalizes on the success of the previous WWF game for home computers, WWF WrestleMania, and was aimed predominantly at the European markets. It was the last WWF game released strictly for home computers until the release of WWF With Authority! in 2001
https://huggingface.co/datasets/fmars/wiki_stem
X-Men II: Fall of the Mutants is an action-adventure game for MS-DOS compatible operating systems developed and released by Paragon Software in 1990. It follows the story of the X-Men crossover storyline "Fall of the Mutants". The game is the sequel to Paragon's 1989 X-Men: Madness in Murderworld
https://huggingface.co/datasets/fmars/wiki_stem
Mpxplay is a 32-bit console audio player for MS-DOS and Windows. It supports a wide range of audio codecs, playlists, as well as containers for video formats. The MS-DOS version uses a 32-bit DOS extender (DOS/32 Advanced DOS Extender being the most up-to-date version compatible)
https://huggingface.co/datasets/fmars/wiki_stem