Spatial scale is a specific application of the term scale for describing or categorizing (e.g. into orders of magnitude ) the size of a space (hence spatial ), or the extent at which a phenomenon or process occurs. [ 1 ] [ 2 ] For instance, in physics an object or phenomenon can be called microscopic if it is too small to be visible. In climatology , a micro-climate is a climate which might occur in a mountain , valley or near a lake shore. In statistics , a megatrend is a political, social, economic, environmental or technological trend which involves the whole planet or is expected to last a very long time. The concept is also used in geography , astronomy , and meteorology . [ 3 ] These divisions are somewhat arbitrary; where, on this table, mega- is assigned global scope, it may only apply continentally or even regionally in other contexts. The interpretations of meso- and macro- must then be adjusted accordingly.
https://en.wikipedia.org/wiki/Spatial_scale
Spatial transcriptomics, or spatially resolved transcriptomics , is a method that captures the positional context of transcriptional activity within intact tissue. [ 1 ] The historical precursor to spatial transcriptomics is in situ hybridization , [ 2 ] where the modernized omics terminology refers to the measurement of all the mRNA in a cell rather than select RNA targets. It comprises an important part of spatial biology . Spatial transcriptomics includes methods that can be divided into two modalities, those based in next-generation sequencing for gene detection, and those based in imaging. [ 3 ] Some common approaches to resolve the spatial distribution of transcripts are microdissection techniques, fluorescent in situ hybridization methods, in situ sequencing, in situ capture protocols and in silico approaches. [ 4 ] In situ hybridization was developed in the late 1960s by Joseph G. Gall and Mary-Lou Pardue [ 5 ] [ 6 ] and saw major developments in the 1980s with single molecule FISH ( smFISH ) and in the 2010s with RNAscope, seqFISH, MERFISH, osmFISH, seqFISH+, and DNA microscopy. [ 7 ] Microdissection techniques were first developed in the late 1990s (Laser Capture Microdissection) and combined with RNA-seq profiling in 2013 in Michael Eisen 's lab using fruit fly embryos. [ 4 ] [ 8 ] Spatial genomics as a technique, now referred to as spatial transcriptomics, was initiated in the 1990s by Michael Doyle (of Eolas ), Maurice Pescitelli (of the University of Illinois at Chicago), Betsey Williams (of Harvard), and George Michaels (of George Mason University), [ 9 ] [ 10 ] [ 11 ] as part of the Visible Embryo Project . [ 12 ] [ 13 ] Doyle and his co-investigators described a method called Spatial Analysis of Genomic Activity (SAGA). This spatial indexing concept was expanded upon in 2016 by Jonas Frisén, Joakim Lundeberg, Patrik Ståhl and their colleagues in Stockholm, Sweden. [ 14 ] In 2019, at the Broad Institute , the labs of Fei Chen and Evan Macosko developed Slide-seq, which used barcoded oligos on beads. [ 15 ] In 2019, the first commercial platforms for spatial transcriptomics were launched with Visium by 10X Genomics [ 16 ] and GeoMx Digital Spatial Profiler (DSP) by Nanostring Technologies . [ 17 ] Defining the spatial distribution of mRNA molecules allows for the detection of cellular heterogeneity in tissues, tumours and immune cells, as well as determination of the subcellular distribution of transcripts in various conditions. [ 4 ] This information provides a unique opportunity to decipher the cellular and subcellular architecture of both tissues and individual cells. These methodologies provide crucial insights in the fields of embryology, oncology, immunology, neuroscience, pathology, and histology. [ 4 ] The functioning of the individual cells in multicellular organisms can only be completely explained in the context of identifying their exact location in the body. [ 4 ] Spatial transcriptomics techniques seek to elucidate cells’ properties this way. Below, we look into the methods that connect gene expression to the spatial organization of cells. [ 4 ] Laser capture microdissection enables capturing single cells without causing morphologic alterations. [ 19 ] [ 20 ] It exploits transparent ethylene vinyl acetate film apposed to the histological section and a low-power infrared laser beam. [ 19 ] Once such a beam is directed at the cells of interest, the film directly above the targeted area temporarily melts so that its long-chain polymers cover and tightly capture the cells.
[ 19 ] Then, the section is removed and the cells of interest remain embedded in the film. [ 19 ] This method allows further RNA transcript profiling and cDNA library generation of the retrieved cells. RNA sequencing of the selected regions in individual cryosections is another method that can produce location-based genome-wide expression data. [ 21 ] This method is carried out without laser capture microdissection. It was first used to determine genome-wide spatial patterns of gene expression in cryo-sliced Drosophila embryos. [ 21 ] Essentially, it implies simple preparation of the library from the selected regions of the sample. This method had difficulties in obtaining high-quality RNA-seq libraries from every section due to material loss resulting from the small amount of total RNA in each slice. [ 4 ] [ 21 ] This problem was resolved by adding RNA of a distantly related Drosophila species to each tube after initial RNA extraction. [ 21 ] NanoString's GeoMx Digital Spatial Profiler (DSP) is the first automated commercial instrument developed for spatial profiling of RNAs and proteins in archival formalin-fixed, paraffin-embedded (FFPE) tissue sections. [ 22 ] FFPE is a common sample type in the field of pathology and histology due to its long-term preservation of tissue structure. The GeoMx DSP technology centers around a user's ability to perform "microdissection" based on histological structures, functional compartments, and cell types. However, unlike LCM , gene expression profiling is performed in a nondestructive manner through light, due to a UV-photocleavable barcode engineered into the in situ hybridization probe. To do this, tissue sections on microscope slides are stained with fluorescent antibodies and nuclear dye to visualize the whole tissue section at single cell resolution on the DSP instrument. [ 22 ] Selection of regions of interest occurs through automated segmentation based on fluorescent signal intensities or with drawing tools, including geometric or freehand shapes. [ 22 ] Each region of interest is precisely exposed to UV light and the barcodes are cleaved, collected, and used to identify RNAs or proteins present in the tissue. [ 23 ] The defined regions of interest can vary in size, between ten and six hundred micrometers, allowing a wide variety of structures and cells in the histological sample to be profiled. [ 4 ] [ 24 ] GeoMx DSP can spatially resolve and measure human or mouse whole transcriptomes, [ 25 ] more than 570 proteins, or both RNA and protein in multiomic same-slide protocols. [ 26 ] Transcriptome in vivo analysis (TIVA) is a technique that enables capturing mRNA in live single cells in intact live tissue sections. [ 27 ] It uses a photoactivatable tag. [ 4 ] [ 27 ] The TIVA tag has several functional groups and a trapped poly(U) oligonucleotide coupled to biotin. [ 27 ] A disulfide-linked peptide, which is adjacent to the tag, allows it to penetrate the cell membrane. [ 27 ] Once inside, laser photoactivation is used to unblock the poly(U) oligonucleotide in the cells of interest, so that the TIVA tag hybridizes to mRNAs within the cell. [ 27 ] Then, streptavidin capture of the biotin group is used to extract poly(A)-tailed mRNA molecules bound to unblocked tags, after which these mRNAs are analyzed by RNA sequencing. [ 27 ] This method is limited by low throughput, as only a few single cells can be processed at a time.
[ 4 ] An advanced alternative to the RNA sequencing of individual cryosections described above, RNA tomography (tomo-seq) features better RNA quantification and spatial resolution. [ 28 ] It is also based on tissue cryosectioning with further RNA sequencing of individual sections, yielding genome-wide expression data and preserving spatial information. [ 28 ] In this protocol, usage of carrier RNA is omitted due to linear amplification of cDNA in individual histological sections. [ 28 ] Identical samples are sectioned in different directions, followed by 3D transcriptional reconstruction using the overlapping data. [ 28 ] Overall, this method requires identical samples for each sectioning direction and thus cannot be applied to clinical material. [ 4 ] LCM-seq utilizes laser capture microdissection (LCM) coupled with Smart-Seq2 RNA sequencing; it is applicable down to the single-cell level and can even be used on partially degraded tissues. The workflow includes cryosectioning of tissues followed by laser capture microdissection, where cells are collected directly into lysis buffer and cDNA is generated without the need for RNA isolation, which both simplifies the experimental procedures and lowers technical noise. [ 29 ] [ 30 ] As the positional identity of each cell is recorded during the LCM procedure, the transcriptome obtained after RNA sequencing of the corresponding cDNA library can be assigned to the position from which the cell was isolated. [ 29 ] LCM-seq has been applied to multiple cell types to understand their intrinsic properties, including oculomotor neurons, facial motor neurons, hypoglossal motor neurons, spinal motor neurons, red nucleus neurons, [ 31 ] interneurons, [ 32 ] dopamine neurons, [ 33 ] and chondrocytes. [ 34 ] Geo-seq is a method that utilizes both laser capture microdissection and single-cell RNA sequencing procedures to determine the spatial distribution of the transcriptome in tissue areas approximately ten cells in size. [ 7 ] The workflow involves removal and cryosectioning of tissue followed by laser capture microdissection. [ 7 ] [ 35 ] The extracted tissue is then lysed, and the RNA is purified and reverse transcribed into a cDNA library. [ 7 ] [ 35 ] The library is sequenced, and the transcriptomic profile can be mapped to the original location of the extracted tissue. [ 7 ] [ 35 ] This technique allows the user to define regions of interest in a tissue, extract said tissue and map the transcriptome in a targeted approach. The NICHE-seq method uses photoactivatable fluorescent markers and two-photon laser scanning microscopy to provide spatial data for the transcriptome generated. [ 36 ] The cells bound by the fluorescent marker are photoactivated, dissociated and sorted via fluorescence-activated cell sorting. [ 36 ] This restricts sorting to only labeled, photoactivated cells. [ 36 ] Following sorting, single-cell RNA sequencing generates the transcriptome of the visualized cells. [ 36 ] This method can process thousands of cells within a defined niche at the cost of losing spatial data between cells in the niche. [ 36 ] ProximID is a methodology based on iterative microdigestion of extracted tissue into single cells. [ 37 ] Initial mild digestion steps preserve small interacting structures that are recorded prior to continued digestion. [ 37 ] The single cells are then separated from each structure, undergo scRNA-seq, and are clustered using t-distributed stochastic neighbour embedding.
[ 37 ] [ 38 ] The clustered cells can be mapped to physical interactions based on the interacting structures recorded prior to the microdigestions. [ 37 ] While the throughput of this technique is relatively low, it adds information on physical interactions between cells to the dataset. [ 4 ] CosMx Spatial Molecular Imager is the first high-plex in situ analysis platform to provide spatial multiomics with formalin-fixed paraffin-embedded (FFPE) and fresh frozen (FF) tissue samples at cellular and subcellular resolution. It enables rapid quantification and visualization of up to the whole transcriptome [ 39 ] and 64 validated protein analytes, and is a flexible spatial single-cell imaging platform for cell atlasing, tissue phenotyping, cell-cell interactions, cellular processes, and biomarker discovery. [ 40 ] One of the first techniques able to achieve spatially resolved RNA profiling of individual cells was single-molecule fluorescent in situ hybridization (smFISH). [ 41 ] It implemented short (about 50 nucleotides) oligonucleotide probes conjugated with five fluorophores which could bind to a specific transcript, yielding bright spots in the sample. [ 41 ] Detection of these spots provides quantitative information about expression of certain genes in the cell. [ 41 ] However, the usage of probes labeled with multiple fluorophores was challenged by self-quenching, altered hybridization characteristics, and difficulties in probe synthesis and purification. [ 4 ] Later, this method was modified [ 42 ] by substituting the above-described probes with probes of about 20 nucleotides in length, each coupled to only one fluorophore and complementary in tandem to an mRNA sequence of interest, meaning that they would collectively bind to the targeted mRNA. One such probe by itself would not produce a strong signal, but the cumulative fluorescence of the congregated probes shows up as a bright spot. Since single misbound probes are unlikely to co-localize, false-positive signals in this method are limited. [ 4 ] Thus, this in situ hybridization (ISH) technique reveals the spatial localization of RNA expression via direct imaging of individual RNA molecules in single cells. Another in situ hybridization technique termed RNAscope employs probes with a specific Z-shaped design to simultaneously amplify hybridization signals and suppress background noise. [ 43 ] It allows for the visualization of single RNA molecules in a variety of cell types. [ 44 ] Most steps of RNAscope are similar to the classic ISH protocol. [ 43 ] The tissue sample is fixed onto slides and then treated with RNAscope reagents that permeate the cells. Z-probes are designed in a way that they are only effective when bound in pairs to the target sequence. [ 43 ] This allows another element of this method (the preamplifier) to connect to the top tails of the Z-probes. [ 43 ] Once affixed, the preamplifier serves as a binding site for other elements: amplifiers, which in turn bind to another type of probe, the label probes. [ 43 ] As a result, a bulky structure is formed on the target sequence. Most importantly, the preamplifier fails to bind to a single Z-probe, so nonspecific binding does not produce a signal, eliminating the background noise mentioned above. [ 4 ] [ 43 ] Sequential fluorescence in situ hybridization (seqFISH) is another method that provides identification of mRNA directly in single cells with preservation of their spatial context.
[ 45 ] [ 46 ] This method is carried out in multiple rounds, each of which includes fluorescent probe hybridization, imaging, and subsequent probe stripping. [ 45 ] Various genes are assigned different colors in every round, generating a unique temporal barcode. [ 45 ] Thus, seqFISH distinguishes mRNAs by a sequential color code, such as red-red-green. Nevertheless, this technique has drawbacks, including autofluorescent background and high costs due to the number of probes used in each round. [ 4 ] Conventional FISH methods are limited by the small number of genes that can be simultaneously analyzed due to the small number of distinct color channels, so multiplexed error-robust FISH was designed to overcome this problem. [ 47 ] Multiplexed error-robust FISH (MERFISH) greatly increases the number of RNA species that can be simultaneously imaged in single cells by employing binary code gene labeling in multiple rounds of hybridization. [ 48 ] This approach can measure 140 RNA species at a time using an encoding scheme that both detects and corrects errors. [ 48 ] The core principle lies in identification of genes by combining signals from several consecutive hybridization rounds and assigning N-bit binary barcodes to genes of interest. [ 48 ] The code depends on specific probes and comprises “1” and “0” values, and their combination is set differently for each gene. [ 48 ] Errors are avoided by using six-bit or longer codes, with any two codes differing by at least 3 bits. [ 48 ] A specific probe is created for each RNA species. [ 48 ] Each probe is a target-specific oligonucleotide of 20–30 nucleotides that binds complementarily to its mRNA sequence after permeating the cell. [ 48 ] Then, multiple rounds of hybridization are conducted as follows: in each round, only probes corresponding to a “1” in that round's binary code position are added. [ 48 ] At the end of each round, fluorescent microscopy is used to locate each probe. [ 48 ] As expected, only those mRNAs with a “1” in the assigned position are detected. [ 48 ] The probes are then photobleached and a new subset is added. [ 48 ] Thus, a combination of binary values is retrieved, which makes it possible to distinguish between numerous RNA species. Single-molecule RNA detection at depth by hybridization chain reaction (smHCR) is an advanced seqFISH technique that can overcome the typical complication of autofluorescent background in thick and opaque tissue samples. [ 49 ] In this method, multiple readout probes are bound to the target region of the mRNA. [ 49 ] The target is detected by a set of short DNA probes which attach to it at defined subsequences. [ 49 ] Each DNA probe carries an initiator for the same HCR amplifier. [ 49 ] Then, fluorophore-labeled DNA HCR hairpins penetrate the sample and assemble into fluorescent amplification polymers attached to the initiating probes. [ 49 ] In multiplexed studies, the same two-stage protocol described above is used: all probe sets are introduced simultaneously, just as all HCR amplifiers are; spectrally distinct fluorophores are used for further imaging. [ 49 ] Cyclic-ouroboros smFISH (osmFISH) is an adaptation of smFISH which aims to overcome the challenge of optical crowding. [ 50 ] In osmFISH, transcripts are visualized, and an image is acquired before the probe is stripped and a new transcript is visualized with a different fluorescent probe. [ 50 ] After successive rounds the images are compiled to view the spatial distribution of the RNA.
[ 50 ] Because transcripts are visualized sequentially, the issue of signals interfering with each other is eliminated. [ 4 ] [ 50 ] This method allows the user to generate high-resolution images of larger tissue sections than other related techniques. [ 4 ] Expansion FISH (ExFISH) leverages expansion microscopy to allow for super-resolution imaging of RNA location, even in thick specimens such as brain tissue. [ 51 ] It supports both single-molecule and multiplexed readouts. [ 51 ] Expansion-Assisted Iterative Fluorescence In Situ Hybridization (EASI-FISH) optimizes and builds on ExFISH with improved detection accuracy and robust multi-round processing across samples thicker (300 μm) than what was previously possible. [ 52 ] It also includes a turn-key computational analysis pipeline. [ 53 ] SeqFISH+ resolved optical issues related to spatial crowding by sequential rounds of fluorescence imaging. [ 54 ] First, a primary probe anneals to the targeted mRNA, and then subsequent probes bind to flanking regions of the primary probe, resulting in a unique barcode. [ 54 ] Each readout probe is captured as an image and collapsed into a super-resolved image. [ 54 ] This method allows the user to target up to ten thousand genes at a time. [ 4 ] DNA microscopy is a distinct imaging method for optics-free mapping of molecules’ positions, with simultaneous preservation of sequencing data, carried out in several consecutive in situ reactions. [ 55 ] First, cells are fixed and cDNA is synthesized. [ 55 ] Randomized nucleotides then tag target cDNAs in situ , providing unique labels for each molecule. [ 55 ] Tagged transcripts are amplified in the second in situ reaction, retrieved copies are concatenated, and new randomized nucleotides are added. [ 55 ] Each consecutive concatenation event is labeled, yielding unique event identifiers. [ 55 ] An algorithm then generates images of the original transcripts based on molecular proximities decoded from the obtained concatenated sequences, while single-nucleotide information about the target is recorded as well. [ 55 ] The ISS padlock method [ 56 ] is based on padlock probing, [ 57 ] rolling-circle amplification (RCA), [ 58 ] and sequencing by ligation chemistry. [ 59 ] Within intact tissue sections, mRNA is reverse transcribed into cDNA, which is followed by mRNA degradation by RNase H. [ 4 ] The method can then be carried out in two ways. The first, gap-targeted sequencing, involves padlock probe binding to the cDNA with a gap between the ends of the probe, which is targeted for sequencing by ligation. [ 4 ] DNA polymerization then fills this gap and a DNA circle is created by DNA ligation. [ 4 ] In the other, barcode-targeted sequencing, DNA circularization of a padlock probe carrying a barcode sequence is carried out by ligation only. [ 4 ] In both versions of the method, the ends are ligated forming a circle of DNA. [ 4 ] Target amplification is then performed by RCA, yielding micrometer-sized RCA products (RCPs). [ 4 ] RCPs consist of repeats of the padlock probe sequence. [ 4 ] These DNA molecules are then subjected to sequencing by ligation, decoding either a gap-filled sequence or an up to four-base-long barcode within the probe with adjacent ends, depending on the version. [ 4 ] The no-gap variant claims higher sensitivity, while the gap-filled one reads out the actual RNA sequence of the transcript.
[ 4 ] Later, this method was improved by automation on a microfluidic platform and by substitution of sequencing by ligation with sequencing-by-hybridization technology. [ 4 ] Fluorescent in situ sequencing (FISSEQ), [ 60 ] like ISS padlock, is a method that uses reverse transcription, rolling-circle amplification, and sequencing by ligation techniques. [ 4 ] It allows spatial transcriptome analysis in fixed cells. [ 4 ] RNA is first reverse transcribed into cDNA with regular and modified amine-bases and tagged random hexamer RT primers. [ 4 ] Amine-bases mediate the cross-linkage of cDNA to its cellular surroundings. [ 4 ] Then the cDNA is circularized by ligation and amplified by RCA. [ 4 ] Single-stranded DNA nanoballs of 200–400 nm in diameter are obtained as a result. [ 4 ] Thus, these nanoballs comprise numerous tandem repeats of the cDNA sequence. Then sequencing is performed via SOLiD sequencing by ligation. [ 4 ] Positions of both the reverse transcription products and the clonally amplified RCPs are maintained via cross-linkage to the cellular matrix components mentioned previously, creating a 3D in situ RNA-seq library within the cell. [ 4 ] Once bound with fluorescent probes featuring different colors, amplicons become highly fluorescent, which allows visual detection of the signal; however, the image-processing algorithm relies on read alignment to reference sequences rather than signal intensity. [ 4 ] Barcode in situ targeted sequencing (Barista-seq) is an improvement on the gap padlock probe methodology, boasting a fivefold increase in efficiency and an increased read length of fifteen bases, and is compatible with Illumina sequencing platforms. [ 4 ] [ 38 ] The method also uses padlock probes and rolling circle amplification; however, unlike the gap padlock method, this approach uses sequencing-by-synthesis and crosslinking. [ 38 ] The crosslinking to the cellular matrix is performed in the same way as in FISSEQ. [ 4 ] [ 38 ] Spatially-resolved transcript amplicon readout mapping (STARmap) utilizes a padlock probe with an additional primer which allows for direct amplification of mRNA, forgoing the need for reverse transcription. [ 4 ] [ 61 ] Similar to other padlock probe based methods, amplification occurs via rolling circle amplification. [ 4 ] The DNA amplicons are chemically modified and embedded into a polymerized hydrogel within the cell. [ 4 ] Captured RNA can then be sequenced in situ , providing three-dimensional locations of the mRNA within each cell. [ 4 ] STOmics is a pioneer in advancing spatially-resolved transcriptomic analysis through its proprietary SpaTial Enhanced REsolution Omics-Sequencing (Stereo-seq) technology. [ 62 ] It combines in situ capture with DNB sequencing (DNB-seq), which is based on lithographically etched chips (patterned arrays) for in situ sequencing. Unlike other micrometer-level in situ capture technologies, standard DNB chips have spots approximately 220 nm in diameter with a center-to-center distance of 500 nm, providing up to 20,000 spots for tissue RNA capture per 10 mm linear distance, or 4×10⁸ spots per cm². Therefore, STOmics can show higher resolution and a wider field of view than other in situ capture technologies. [ 63 ] The first widely-adopted method was described by Ståhl et al. in a landmark 2016 paper in Science, [ 64 ] coining the term "spatial transcriptomics."
This methodology relies on diffusion of mRNA from a fresh frozen tissue section for capture of the polyadenylated mRNAs via hybridization to oligo(dT) sequences attached to a glass slide. [ 64 ] The glass slide is arrayed with "spots" that contain an oligo(dT) sequence to capture mRNA transcripts, a spatial barcode sequence to indicate the x and y position on the arrayed slide, an amplification and sequencing handle to generate sequencing libraries, and a unique molecular identifier to quantitate transcript abundance. [ 64 ] Frozen tissue samples are cut using a cryotome , then fixed, stained, and carefully laid flat onto the microarray. [ 64 ] Next, enzymatic permeabilization allows RNA molecules to diffuse to the microarray slide for hybridization of polyadenylated mRNA molecules to the oligo(dT) sequence tails. [ 64 ] Reverse transcription is then carried out in situ for first-strand synthesis. [ 64 ] As a result, spatially marked complementary DNA (cDNA) is synthesized, providing information about gene expression in the exact location of the tissue section. [ 64 ] From the cDNA, libraries are generated for short-read sequencing. In summary, this spatial transcriptomics protocol combines parallel sequencing and staining of the same sample. [ 64 ] In the downstream analysis, bioinformatic tools allow overlay of the tissue image with the gene expression. The output is a map of the captured gene expression within a tissue section. It is important to mention that the first generation of the arrayed slides comprised about 1,000 spots of 100-μm diameter, limiting resolution to ~10–40 cells per spot. [ 64 ] This technology was the basis of a company founded in 2012 called Spatial Transcriptomics. In 2018, 10X Genomics acquired Spatial Transcriptomics [ 65 ] as the foundation for the 10X Visium platform. Slide-seq relies on the attachment of RNA-binding, DNA-barcoded microbeads to a rubber-coated glass coverslip. [ 4 ] [ 66 ] The microbeads are mapped to their spatial location via SOLiD sequencing. [ 66 ] Tissue sections are transferred to this coverslip to capture extracted RNA. Captured RNA is amplified and sequenced. [ 66 ] Transcript localization is determined by the barcode oligonucleotide sequence from the bead that captured it. [ 66 ] APEX-seq allows for the assessment of the spatial transcriptome in different regions of a cell. [ 4 ] [ 67 ] The method utilizes the APEX2 gene, expressed in live cells that are incubated with biotin-phenol and hydrogen peroxide. [ 67 ] Under these conditions, the APEX2 enzyme catalyses the transfer of biotin groups to RNA molecules, which can then be purified via streptavidin bead purification. [ 67 ] The purified transcripts are then sequenced to determine which molecules were in close proximity to the biotin-tagging enzyme. [ 67 ] High-definition spatial transcriptomics (HDST) begins with decoding the location of mRNA capture beads in wells on a glass slide. [ 68 ] This is accomplished by sequential hybridization to the barcode oligonucleotide sequence of each bead. [ 68 ] Once the location of each bead is decoded, a tissue sample can be placed on the slide and permeabilized. [ 68 ] The captured transcripts are then sequenced. [ 68 ] HDST uses smaller beads than Slide-seq and thus can resolve at a spatial resolution of two micrometers, compared to ten micrometers for Slide-seq. [ 4 ] The 10X Genomics Visium assay is a newer and improved version of the Spatial Transcriptomics assay.
It also utilizes spotted arrays of mRNA-capturing probes on the surface of glass slides, but with an increased number of spots, a smaller spot size, and an increased number of capture probes per spot. Within each of the four capture areas of the Visium Spatial Gene Expression slides, there are approximately 5000 barcoded spots, which in turn contain millions of spatially barcoded capture oligonucleotides. Tissue mRNA is released upon permeabilization and binds to the barcoded oligos, enabling capture of gene expression information. Each barcoded spot is 55 μm in diameter, and the distance from the center of one spot to the center of another is approximately 100 μm. The spots are staggered to minimize the distance between them. On average, mRNA from anywhere between 1 and 10 cells is captured per spot, which provides near single-cell resolution. [ 69 ] The Curio Bio SEEKER assay is similar in concept to the 10X Genomics Visium but has a higher density of spots. Contrary to the 10X Genomics Visium HD, which uses RNA probes that have to be pre-defined for species like human or mouse, SEEKER has a similar density and resolution, but will assay any fresh frozen tissue sample, using poly(A)-based capture of all the mRNAs in the sample. In silico spatial reconstruction with ISH implies computational reconstruction of cells’ locations according to their expression profiles. [ 4 ] Several similar methods based on this principle exist. [ 70 ] [ 71 ] They co-analyze single-cell transcriptomics data and available ISH-based gene expression atlases of the same cell type. [ 4 ] Based on these data, cells are then assigned to their positions in the tissue. This method is limited by the availability of ISH references. [ 4 ] Additionally, it becomes more complicated when assigning cells in complex tissues. This approach is not applicable to clinical samples due to the lack of paired references. [ 4 ] The reported success rate for the exact allocation of cells in brain tissue was 81%. [ 70 ] Mapping the transcriptome using the Distmap algorithm requires high-throughput single cell sequencing and an existing in situ hybridization atlas for the tissue of interest. [ 4 ] [ 72 ] The Distmap algorithm generates a virtual 3D model of the tissue of interest using the transcriptomes of sequenced cells and said reference atlas. [ 4 ] [ 72 ] The transcriptomes can be clustered into cell types using t-distributed stochastic neighbour embedding and mapped to the 3D model using virtual in situ hybridization. [ 72 ] [ 73 ] Essentially, this algorithm takes data generated from single cells in a dissociated tissue and is able to map individual transcripts to where the cell type exists in the tissue using virtual in situ hybridization.
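To make the in silico reconstruction idea concrete, the following is a minimal Python sketch of assigning dissociated single cells to tissue positions by comparing their binarized expression of a few marker genes against a reference in situ hybridization atlas. The gene columns, atlas values, and the simple agreement score are illustrative assumptions, not the actual Distmap algorithm or any published reference data.

# Minimal sketch: assign single cells to positions in a reference ISH atlas
# by counting how many (binarized) marker genes agree. Toy data only.
import numpy as np

# Reference ISH atlas: rows = tissue positions, columns = three marker genes (1 = expressed)
atlas = np.array([
    [1, 0, 0],   # position 0
    [1, 1, 0],   # position 1
    [0, 1, 1],   # position 2
])

# Binarized single-cell expression profiles: rows = cells, columns = the same genes
cells = np.array([
    [1, 1, 0],   # cell 0
    [0, 1, 1],   # cell 1
])

# Score each (cell, position) pair by the number of genes in agreement,
# then assign every cell to its best-matching position.
agreement = (cells[:, None, :] == atlas[None, :, :]).sum(axis=2)
assignment = agreement.argmax(axis=1)
print(assignment)   # [1 2]: cell 0 -> position 1, cell 1 -> position 2

Real methods use probabilistic scores over hundreds of genes and report a distribution of likely positions per cell rather than a single hard assignment; the nearest-match rule above is only meant to show the shape of the computation.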
https://en.wikipedia.org/wiki/Spatial_transcriptomics
Spatial–temporal reasoning is an area of artificial intelligence that draws from the fields of computer science , cognitive science , and cognitive psychology . The theoretical goal, on the cognitive side, involves representing and reasoning about spatial-temporal knowledge in the mind. The applied goal, on the computing side, involves developing high-level control systems for automata that navigate and understand time and space. A convergent result in cognitive psychology is that the connection relation is the first spatial relation that human babies acquire, followed by understanding orientation relations and distance relations. Internal relations among the three kinds of spatial relations can be computationally and systematically explained within the theory of cognitive prism. Without addressing internal relations among spatial relations, AI researchers have contributed many fragmentary representations. Examples of temporal calculi include Allen's interval algebra and Vilain and Kautz's point algebra . The most prominent spatial calculi are mereotopological calculi , Frank 's cardinal direction calculus , Freksa's double cross calculus, Egenhofer and Franzosa's 4- and 9-intersection calculi , Ligozat's flip-flop calculus , various region connection calculi (RCC), and the Oriented Point Relation Algebra. Recently, spatio-temporal calculi have been designed that combine spatial and temporal information. For example, the spatiotemporal constraint calculus (STCC) by Gerevini and Nebel combines Allen's interval algebra with RCC-8. Moreover, the qualitative trajectory calculus (QTC) allows for reasoning about moving objects. An emphasis in the literature has been on qualitative spatial-temporal reasoning, which is based on qualitative abstractions of temporal and spatial aspects of the common-sense background knowledge on which our human perspective of physical reality is based. Methodologically, qualitative constraint calculi restrict the vocabulary of rich mathematical theories dealing with temporal or spatial entities such that specific aspects of these theories can be treated within decidable fragments with simple qualitative (non- metric ) languages. In contrast to mathematical or physical theories about space and time, qualitative constraint calculi allow for rather inexpensive reasoning about entities located in space and time. For this reason, the limited expressiveness of qualitative representation formalisms is a benefit if such reasoning tasks need to be integrated into applications. For example, some of these calculi may be implemented for handling spatial GIS queries efficiently and some may be used for navigating, and communicating with, a mobile robot . Most of these calculi can be formalized as abstract relation algebras , such that reasoning can be carried out at a symbolic level. For computing solutions of a constraint network , the path-consistency algorithm is an important tool; a minimal sketch of this idea for the simple point algebra is given below.
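The sketch below illustrates path-consistency-style constraint propagation for the point algebra, one of the simplest qualitative temporal calculi mentioned above: relations between time points are subsets of {<, =, >}, and each triangle of the network is repeatedly tightened using a composition table. It is a simplified illustration written for this article, not a full constraint-calculus library, and the example network at the end is invented for demonstration.

# Qualitative constraint propagation (path-consistency style) for the point algebra.
from itertools import product

BASE = {"<", "=", ">"}

# Composition of base relations: comp[(a, b)] = possible relations x R z
# given x a y and y b z.
comp = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): set(BASE),
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): set(BASE), (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    """Compose two disjunctive relations (sets of base relations)."""
    out = set()
    for a, b in product(r1, r2):
        out |= comp[(a, b)]
    return out

def path_consistency(n, rel):
    """Tighten an n-point network rel[(i, j)] until a fixpoint, or report inconsistency."""
    converse = {"<": ">", ">": "<", "=": "="}
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            tightened = rel[(i, j)] & compose(rel[(i, k)], rel[(k, j)])
            if not tightened:
                return None                     # empty relation: inconsistent network
            if tightened != rel[(i, j)]:
                rel[(i, j)] = tightened
                rel[(j, i)] = {converse[r] for r in tightened}
                changed = True
    return rel

# Example: p0 < p1 and p1 < p2; propagation infers p0 < p2.
rel = {(i, j): set(BASE) for i in range(3) for j in range(3) if i != j}
rel[(0, 1)], rel[(1, 0)] = {"<"}, {">"}
rel[(1, 2)], rel[(2, 1)] = {"<"}, {">"}
print(path_consistency(3, rel)[(0, 2)])   # {'<'}

For the point algebra this propagation is enough to decide consistency; for richer calculi such as RCC-8 or Allen's interval algebra, path consistency is used in the same way but is only an approximation that is typically combined with backtracking search.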
https://en.wikipedia.org/wiki/Spatial–temporal_reasoning
Spatio-spectral scanning [ 1 ] is one of four techniques for hyperspectral imaging , the other three being spatial scanning, [ 2 ] spectral scanning [ 3 ] and non-scanning, or snapshot hyperspectral imaging . The technique was designed to put into practice the concept of 'tilted sampling' of the hyperspectral data cube , which had been deemed difficult to achieve. [ 4 ] Spatio-spectral scanning yields a series of thin, diagonal slices of the data cube. Figuratively speaking, each acquired image is a 'rainbow-colored' spatial map of the scene. More precisely, each image represents two spatial dimensions, one of which is wavelength-coded. To acquire the spectrum of a given object point, scanning is needed. Spatio-spectral scanning combines some advantages of spatial and spectral scanning. Depending on the context of application, one can choose between a mobile and a stationary platform. Moreover, each image is a spatial map of the scene, facilitating pointing, focusing, and data analysis. This is particularly valuable for irregular or irretrievable scanning movements. Being based on dispersion, spatio-spectral scanning systems yield high spatial and spectral resolution. A prototypical spatio-spectral scanning system, introduced in June 2014, consists of a basic slit spectroscope (slit + dispersive element) at some suitable, non-zero distance before a camera. (If the effective camera distance is zero, the system is applicable to spatial scanning.) The imaging process is based on spectrally-decoded camera obscura projections: a series of projections from a continuous array of pinholes (i.e. the slit) is projected onto the dispersive element, each projection contributing a rainbow-colored strip to the recorded two-dimensional image. The field of view in the wavelength-coded spatial dimension asymptotically approaches the dispersion angle of the dispersive element as the camera distance from the dispersive element approaches infinity. [ 1 ] Scanning is achieved by moving the camera transverse to the slit (stationary platform), or by moving the entire system transverse to the slit (mobile platform). An advanced spatio-spectral scanning system, proposed in June 2014, consists of a dispersive element before a spatial scanning system. (This allows for easy switching between spatial and spatio-spectral scanning.) The imaging process is based on spectral analysis of a strip of a dispersed image of the scene. The field of view in the wavelength-coded spatial dimension equals the dispersion angle of the dispersive element. [ 1 ] As in the more basic system, scanning is achieved by transverse movement of the slit or by moving the system relative to the scene.
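The following toy numerical model (not an optical simulation) illustrates the 'tilted sampling' idea: each recorded frame is a diagonal slice of the hyperspectral data cube in which one spatial axis is wavelength-coded, and the full spectrum of any point is recovered only by scanning over all frames. The cube dimensions, the wrap-around sampling rule, and the values are arbitrary illustrative assumptions.

# Toy model of spatio-spectral scanning: frames are diagonal slices of the data cube.
import numpy as np

nx, ny, nl = 4, 5, 5               # spatial (x, y) and spectral (lambda) sizes
cube = np.random.rand(nx, ny, nl)  # the "true" hyperspectral data cube

def spatio_spectral_frame(cube, step):
    """One scan frame: pixel (x, y) records wavelength index (y + step) mod nl,
    so wavelength varies linearly along the y axis (a diagonal cube slice)."""
    nx, ny, nl = cube.shape
    frame = np.empty((nx, ny))
    for y in range(ny):
        frame[:, y] = cube[:, y, (y + step) % nl]
    return frame

# Scanning over all steps lets the full spectrum of any point be reassembled.
frames = [spatio_spectral_frame(cube, s) for s in range(nl)]
x0, y0 = 2, 3
spectrum = np.array([frames[(l - y0) % nl][x0, y0] for l in range(nl)])
assert np.allclose(spectrum, cube[x0, y0, :])   # spectrum recovered by scanning
print(spectrum)

A spatial-scanning system would instead record one column of (x, lambda) per frame, and a spectral-scanning system one full (x, y) image per wavelength; the diagonal sampling shown here is what distinguishes the spatio-spectral approach.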
https://en.wikipedia.org/wiki/Spatiospectral_scanning
Spatiotemporal gene expression is the activation of genes within specific tissues of an organism at specific times during development . Gene activation patterns vary widely in complexity. Some are straightforward and static, such as the pattern of tubulin, which is expressed in all cells at all times in life. Some, on the other hand, are extraordinarily intricate and difficult to predict and model, with expression fluctuating wildly from minute to minute or from cell to cell. Spatiotemporal variation plays a key role in generating the diversity of cell types found in developed organisms; since the identity of a cell is specified by the collection of genes actively expressed within that cell, if gene expression were uniform spatially and temporally, there could be at most one kind of cell. Consider the gene wingless, a member of the wnt family of genes. In the early embryonic development of the model organism Drosophila melanogaster , or fruit fly, wingless is expressed across almost the entire embryo in alternating stripes spaced three cells apart. This pattern is lost by the time the organism develops into a larva, but wingless is still expressed in a variety of tissues such as the wing imaginal discs , patches of tissue that will develop into the adult wings. The spatiotemporal pattern of wingless gene expression is determined by a network of regulatory interactions consisting of the effects of many different genes such as even-skipped and Krüppel. What causes spatial and temporal differences in the expression of a single gene? Because current expression patterns depend strictly on previous expression patterns, there is a regress problem: explaining what caused the first differences in gene expression. The process by which uniform gene expression becomes spatially and temporally differential is known as symmetry breaking . For example, in the case of embryonic Drosophila development, the genes nanos and bicoid are asymmetrically expressed in the oocyte because maternal cells deposit messenger RNA (mRNA) for these genes in the poles of the egg before it is laid . One way to identify the expression pattern of a particular gene is to place a reporter gene downstream of its promoter. In this configuration, the promoter will cause the reporter gene to be expressed only where and when the gene of interest is expressed. The expression distribution of the reporter gene can be determined by visualizing it. For example, the reporter gene green fluorescent protein can be visualized by stimulating it with blue light and then using a digital camera to record green fluorescent emission. If the promoter of the gene of interest is unknown, there are several ways to identify its spatiotemporal distribution. Immunohistochemistry involves preparing an antibody with specific affinity for the protein associated with the gene of interest. The distribution of this antibody can then be visualized by a technique such as fluorescent labeling. Immunohistochemistry has the advantages of being methodologically feasible and relatively inexpensive. Its disadvantages include non-specificity of the antibody leading to false-positive identification of expression. Poor penetration of the antibody into the target tissue can lead to false-negative results.
Furthermore, since immunohistochemistry visualizes the protein generated by the gene, if the protein product diffuses between cells, or has a particularly short or long half-life relative to the mRNA from which it is translated, this can lead to a distorted interpretation of which cells are expressing the mRNA . In situ hybridization is an alternate method in which a "probe," a synthetic nucleic acid with a sequence complementary to the mRNA of the gene, is added to the tissue. This probe is then chemically tagged so that it can be visualized later. This technique enables visualization specifically of mRNA-producing cells without any of the artifacts associated with immunohistochemistry. However, it is notoriously difficult, and requires knowledge of the sequence of DNA corresponding to the gene of interest. A method called enhancer-trap screening reveals the diversity of spatiotemporal gene expression patterns possible in an organism. In this technique, DNA that encodes a reporter gene is randomly inserted into the genome. Depending on the gene promoters proximal to the insertion point, the reporter gene will be expressed in particular tissues at particular points in development. While enhancer-trap-derived expression patterns do not necessarily reflect the actual patterns of expression of specific genes, they reveal the variety of spatiotemporal patterns that are accessible to evolution. Reporter genes can be visualized in living organisms, but both immunohistochemistry and in situ hybridization must be performed in fixed tissues. Techniques that require fixation of tissue can only generate a single time point per individual organism. However, using live animals instead of fixed tissue can be crucial in dynamically understanding expression patterns over an individual's lifespan. Either way, variation between individuals can confound the interpretation of temporal expression patterns. Several methods are being pursued for controlling gene expression spatially, temporally and to different degrees. One method uses an operon inducer/repressor system, which provides temporal control of gene expression. To control gene expression spatially, inkjet printers are under development for printing ligands on gel cultures. [ 1 ] Another popular approach involves the use of light to control gene expression in a spatiotemporal fashion. Since light can be controlled easily in space, time and degree, several methods of controlling gene expression at the DNA and RNA levels [ 2 ] have been developed and are under study. For example, RNA interference can be controlled using light, [ 3 ] [ 4 ] and patterning of gene expression has been performed in cell monolayers [ 5 ] and in zebrafish embryos using caged morpholinos [ 6 ] or peptide nucleic acids, [ 7 ] [ 8 ] [ 9 ] demonstrating spatiotemporal control of gene expression. Recently, light-based control has been demonstrated at the DNA level using a transgene-based system [ 10 ] or caged triplex-forming oligonucleotides. [ 11 ]
https://en.wikipedia.org/wiki/Spatiotemporal_gene_expression
Spatiotemporal patterns are patterns that occur in a wide range of natural phenomena and are characterized by both spatial and temporal patterning. The general rules of pattern formation hold. In contrast to "static", purely spatial patterns, the full complexity of spatiotemporal patterns can only be recognized over time. Any kind of traveling wave is a good example of a spatiotemporal pattern. Besides the shape and amplitude of the wave (spatial part), its time-varying position (and possibly shape) in space is an essential part of the entire pattern. The distinction between spatial and spatio-temporal patterns in nature is not clear-cut, because a static, invariable pattern will never occur in the strict sense. Even rock formations will slowly change on a time-scale of tens of millions of years; therefore the distinction lies in the time scale of change in relation to human experience . Even the snapshot state of a dune will usually be taken as an example of a purely spatial pattern, although this is clearly not the case. It is thus apt to say that spatiotemporal patterns in nature are the rule rather than the exception. Many hydrodynamical systems show spatiotemporal pattern formation. Any type of reaction–diffusion system that produces spatial patterns will also, due to the time dependency of both reactions and diffusion, produce spatiotemporal patterns. Neural networks, both artificial and natural , produce a virtually unbounded variety of spatiotemporal patterns, in sensory perception , learning, thinking and reasoning as well as in spontaneous activity . It has for example been demonstrated that spiral waves , signatures of many excitable systems, can occur in neocortical preparations. [ 1 ] All communication, including language , relies on spatiotemporal encoding of information: producing and transmitting sound variations or any other type of signal, i.e. single building blocks of information that are varied over time. Even though written language appears to exist only as a (2D) spatial concatenation of letters ( strings ), it must be decoded sequentially over time. Any kind of language that is understood by organisms is thus ultimately a transcoding of neural spatiotemporal signals and will, in successful communication, evoke patterns of neural activity in the recipient similar to those that existed in the sender. For example, the warning call of a bird when it perceives a predator will produce a similar type and degree of alarm (ultimately a certain kind of neural activity pattern) in other individuals, even though they have not yet seen or heard the potential attacker. Even artificial languages , e.g. computer languages , are not read and interpreted in one step, but sequentially; thus, their meaningfully arranged vocabulary (e.g. " computer code ") can be seen as a spatiotemporal pattern. As a particular type of language, the "static" (neglecting random transcription errors , recombination and mutation ) DNA and its transcription pattern over time yields biologically essential spatiotemporal patterns. Gene regulatory networks are responsible for regulating the time course of gene expression levels, which can be analyzed using expression profiling . Criminals show spatiotemporal patterns when planning and executing their activities, and these patterns may be used to predict their future behaviour . Temporal patterns may apply to preferred times when crimes are committed. Spatial patterns can be identified in potential targets and routes for criminal activities.
Spatial patterns may be used to identify the most likely locations for crimes to occur, or to identify potential escape routes. In addition, criminals often use temporal and spatial patterns to hide their activities, such as by committing crimes in areas with low population density or in areas with limited surveillance. By understanding spatiotemporal patterns in relation to crime, law enforcement and crime prevention professionals can develop strategies to better prevent and respond to criminal activities. For example, law enforcement can use spatiotemporal patterns to identify crime hot spots and to determine the most effective strategies for responding to these areas. In addition, law enforcement and crime prevention professionals can use spatiotemporal patterns to identify and monitor potential suspects or areas of criminal activity. [ 2 ]
https://en.wikipedia.org/wiki/Spatiotemporal_pattern
Speakeasy was an interactive numerical computing environment that also featured an interpreted programming language . It was initially developed for internal use at the Physics Division of Argonne National Laboratory by the theoretical physicist Stanley Cohen . [ 4 ] He eventually founded Speakeasy Computing Corporation to make the program available commercially. [ 5 ] Speakeasy is a very long-lasting numerical package. In fact, the original version of the environment was built around a core dynamic data repository called "Named storage" developed in the early 1960s, [ 6 ] [ 7 ] while the most recent version was released in 2006. Speakeasy was intended to make the computational work of the physicists at the Argonne National Laboratory easier. [ 8 ] Speakeasy was initially conceived to work on mainframes (the only kind of computers at that time), and was subsequently ported to new platforms ( minicomputers , personal computers ) as they became available. Porting the same code to different platforms was made easier by using Mortran metalanguage macros to handle system dependencies and compiler deficiencies and differences. [ 9 ] Speakeasy is currently available on several platforms: PCs running Windows , macOS , Linux , departmental computers and workstations running several flavors of Linux, AIX or Solaris . Speakeasy was also among the first [ citation needed ] interactive numerical computing environments, having been implemented in such a way on a CDC 3600 system, and later on IBM TSO machines while TSO was in beta-testing at the Argonne National Laboratory. By 1984 it was available on Digital Equipment Corporation 's VAX systems. [ 10 ] [ 11 ] Almost from the beginning (as soon as dynamic linking functionality became available in operating systems), Speakeasy has featured the capability of expanding its operational vocabulary using separate modules, dynamically linked to the core processor as they are needed. For that reason such modules were called "linkules" (LINKable-modULES). [ 12 ] They are functions with a generalized interface, which can be written in FORTRAN [ 13 ] or in C . [ citation needed ] The independence of each of the new modules from the others and from the main processor is of great help in improving the system, and was especially so in the old days. This easy way of expanding the functionalities of the main processor was often exploited by the users to develop their own specialized packages. Besides the programs, functions and subroutines the user can write in Speakeasy's own interpreted language, linkules add functionality executed with the typical performance of compiled programs. Among the packages developed by the users, one of the most important is "Modeleasy", originally developed as "FEDeasy" [ 14 ] in the early 1970s at the research department of the Federal Reserve Board of Governors in Washington, D.C. [ 15 ] Modeleasy implements special objects and functions for the estimation and simulation of large econometric models. [ 16 ] Its evolution led eventually to its distribution as an independent product. The symbol :_ (colon+underscore) is both the Speakeasy logo and the prompt of the interactive session. The dollar sign is used for delimiting comments; the ampersand is used to continue a statement on the following physical line, in which case the prompt becomes :& (colon+ampersand); a semicolon can separate statements written on the same physical line.
As its name suggests, Speakeasy was intended to offer a syntax as friendly as possible to the user, and as close as possible to the spoken language. The best example of that is given by the set of commands for reading/writing data from/to the permanent storage, whose keywords are ordinary English words. Variables (i.e. Speakeasy objects) are given a name up to 255 characters long when the LONGNAME option is ON, and up to 8 characters otherwise (for backward compatibility). They are dynamically typed, depending on the value assigned to them. Arguments of functions are usually not required to be surrounded by parentheses or separated by commas, provided that the context remains clear and unambiguous; the parentheses and the commas can then simply be omitted. Many other syntax simplifications are possible; for example, an object named 'a' holding a ten-element array of zeroes can be defined by any of several equivalent statements. Speakeasy is a vector-oriented language: giving a structured argument to a function of a scalar, the result is usually an object with the same structure as the argument, in which each element is the result of the function applied to the corresponding element of the argument. For example, the result of the function sin applied to an array (let us call it x ) generated by the function grid is the array answer whose element answer (i) equals sin ( x (i)) for each i from 1 to noels (x) (the number of elements of x ). In other words, such a statement is equivalent to a program loop that computes sin ( x (i)) element by element. The vector-oriented statements avoid writing programs for such loops and are much faster than them. By the very first statement of the session, the user can define the size of the "named storage" (or "work area", or "allocator"), which is allocated once and for all at the beginning of the session. Within this fixed-size work area, the Speakeasy processor dynamically creates and destroys the work objects as needed. A user-tunable [ 17 ] garbage collection mechanism is provided to maximize the size of the free block in the work area, packing the defined objects in the low end or in the high end of the allocator. At any time, the user can ask about the used or remaining space in the work area. Within reasonable conformity and compatibility constraints, the Speakeasy objects can be operated on using the same algebraic syntax. From this point of view, and considering the dynamic and structured nature of the data held in the "named storage", it is possible to say that Speakeasy implemented from the beginning a very raw form of operator overloading , and a pragmatic approach to some features of what was later called " Object Oriented Programming ", although it did not evolve further in that direction. Speakeasy provides a set of predefined "families" of data objects: scalars, arrays (up to 15 dimensions), matrices, sets, and time series. The elemental data can be of kind real (8 bytes), complex (2×8 bytes), character-literal or name-literal (matrix elements can be real or complex; time series values can only be real). For time series processing, five types of missing values are provided. They are denoted by N.A. (not available), N.C. (not computable) and N.D. (not defined), along with N.B. and N.E., the meanings of which are not predetermined and are left available to the linkule developer. They are internally represented by specific (and very small) numeric values, acting as codes.
All the time series operations take care of the presence of missing values, propagating them appropriately in the results. Depending on a specific setting, missing values can be represented by the above notation, by a question mark symbol, or by a blank (useful in tables). When used in input, the question mark is interpreted as an N.A. missing value. In numerical objects other than time series, the concept of "missing values" is meaningless, and the numerical operations on them use the actual numeric values regardless of whether they correspond to "missing value codes" or not (although "missing value codes" can be input and shown as such). Note that, in other contexts, a question mark may have a different meaning: for example, when used as the first (and possibly only) character of a command line, it means a request to show more of a long error message (which ends with a "+" symbol). Some support is provided for logical values, relational operators (the Fortran syntax can be used) and logical expressions. Logical values are actually stored as numeric values, with 0 meaning false and non-zero (1 on output) meaning true. Special objects such as "PROGRAM", "SUBROUTINE" and "FUNCTION" objects (collectively referred to as procedures ) can be defined for automating operations. Another way of running several instructions with a single command is to store them in a use-file and make the processor read them by means of the USE command. "USEing" a use-file is the simplest way of performing several instructions with minimal typed input. (This operation roughly corresponds to "sourcing" a file in other scripting languages.) A use-file is an alternate input source to the standard console and can contain all the commands a user can input from the keyboard (hence no multi-line flow control constructs are allowed). The processor reads and executes use-files one line at a time. Use-file execution can be concatenated but not nested, i.e. control does not return to the caller at the completion of the called use-file. Full programming capability is achieved using "procedures". They are actually Speakeasy objects, which must be defined in the work area to be executed. An option is available to have procedures automatically retrieved and loaded from external storage as they are needed. Procedures can contain any of the execution flow control constructs available in the Speakeasy programming language. A program can be run simply by invoking its name or by using it as the argument of the command EXECUTE. In the latter case, a further argument can identify a label from which the execution will begin. Speakeasy programs differ from the other procedures in being executed at the same scoping "level" from which they are referenced; hence they have full visibility of all the objects defined at that level, and all the objects created during their execution are left there for subsequent use. For that reason, no argument list is needed. Subroutines and Functions are executed at a new scoping level, which is removed when they finish. Communication with the calling scoping level is carried out through the argument list (in both directions). This implements data hiding, i.e. objects created within a Subroutine or a Function are not visible to other Subroutines and Functions except through argument lists. A global level is available for storing objects which must be visible from within any procedure, e.g. the procedures themselves.
Functions differ from Subroutines in that they also return a functional value; references to them can be part of more complex statements and are replaced by the returned functional value when the statement is evaluated. To some extent, Speakeasy Subroutines and Functions are very similar to the Fortran procedures of the same names. An IF-THEN-ELSE construct is available for conditional execution, and two forms of FOR-NEXT construct are provided for looping. A "GO TO label" statement is provided for jumping, while a Fortran-like computed GO TO statement can be used for multiple branching. An ON ERROR mechanism, with several options, provides a means for error handling. Linkules are functions usually written in Fortran (or, without official support, in C). With the aid of Mortran or C macros and an API library, they can interface with the Speakeasy work area to retrieve, define, and manipulate any Speakeasy object. Most of the Speakeasy operational vocabulary is implemented via linkules. They can be statically linked to the core engine, or dynamically loaded as they are needed, provided they are properly compiled as shared objects (Unix) or DLLs (Windows).
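To make the vector-oriented evaluation and the time-series missing-value handling described above more concrete, here is a minimal sketch in Python rather than Speakeasy (the function names, sentinel codes, and behaviour are illustrative assumptions, not the actual Speakeasy implementation):

```python
import numpy as np

# Vector-oriented evaluation: applying a scalar function to a structured
# object yields an object of the same structure, element by element.
x = np.arange(-np.pi, np.pi + 1e-9, np.pi / 32)   # stand-in for grid(-pi, pi, pi/32)
answer = np.sin(x)                                 # one vectorized statement ...

answer_loop = np.empty_like(x)                     # ... equivalent to an explicit loop
for i in range(x.size):                            # x.size plays the role of noels(x)
    answer_loop[i] = np.sin(x[i])
assert np.allclose(answer, answer_loop)

# Missing-value propagation in time-series arithmetic (codes are assumptions):
NA, NC = "N.A.", "N.C."

def add_series(a, b):
    """Element-wise sum; a missing operand makes the corresponding result missing."""
    return [NA if ai in (NA, NC) or bi in (NA, NC) else ai + bi
            for ai, bi in zip(a, b)]

print(add_series([10.0, NA, 12.5], [9.5, 11.0, NC]))   # [19.5, 'N.A.', 'N.A.']
```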
https://en.wikipedia.org/wiki/Speakeasy_(computational_environment)
A special-use permit authorizes land uses that are allowed and encouraged by the ordinance and declared harmonious with the applicable zoning district. [ 1 ] Land use is governed by a set of regulations generally known as ordinances or municipal codes, which are authorized by the state's zoning enabling law. Within an ordinance is a list of land use designations commonly known as zoning. Each different type of zone has its own set of allowed uses. These are known as by-right uses. Then there is an extra set of uses known as special uses. To build a use that is listed as a special use, a special-use permit (or conditional-use permit) must be obtained. An example of a special-use permit may be found in a church applying for one to construct a church building in a residential neighborhood. Although the church building is not a residential building, the zoning law may allow churches in the residential neighborhood provided the local zoning authority can review the impact on the neighborhood. This process grants discretion to the local zoning authority to ensure that an acceptable land use does not disrupt the zoning scheme because of its particular location. The Standard State Zoning Enabling Act allows special-use permits based upon a finding of compatibility with surrounding areas and with developments already permitted under the general provisions of the ordinance. If the local zoning authority grants a special-use permit that exceeds the discretion allowed to it, then an incidence of spot zoning may arise. Such discretion then may be attacked as ultra vires, and the special-use permit overturned as an unconstitutional violation of equal protection. Special-use permits are also required when a property has been deemed a "nonconforming use." A permit for a nonconforming use will allow the owner of a previously-compliant property to continue the existing use. This often arises when a property has been rezoned or amortized. Amortization is unconstitutional in many states, and is a controversial tool according to many property rights advocacy groups. [ 2 ] An example involving special-use permits may be found when a business or other organization uses U.S. Forest Service land for commercial purposes. Special-use permits may be revoked after the initial period if the use is deemed not to have met the proposed public need. This then allows another operator to apply for the special-use permit, which will be evaluated on whether they are likely to succeed and meet the public need, as well as follow all other criteria such as proper resource management and respectful public use. If deemed successful after the initial period, the permit can be renewed for a longer period. Special land-use permits are also issued by the U.S. Forest Service for the operation of ski areas and other recreational facilities in national forests. These facilities are operated by commercial providers who serve the public in a way that the U.S. Forest Service would not otherwise provide; the U.S. Forest Service does not run the facilities itself and does not provide the benefits that the commercial businesses provide. Special-use permits have also been issued for other purposes, such as in Alaska during the summer of 2015, when special fishing permits were issued to feed firefighters who had difficulty receiving supplies via land routes due to the forest fires that they were fighting in remote areas.
In broadcasting , a restricted service licence ( UK ) or special temporary authority ( US ) may be issued by a broadcasting authority for temporary changes or set-ups for a radio station or television station . This may be for a temporary LPFM station for a special event (an RSL), or for an unexpected situation such as having to operate at low power from an emergency radio antenna or radio tower after a disaster or major equipment failure (STA).
https://en.wikipedia.org/wiki/Special-use_permit
In logic, especially as applied in mathematics, concept A is a special case or specialization of concept B precisely if every instance of A is also an instance of B but not vice versa, or equivalently, if B is a generalization of A. [ 1 ] A limiting case is a type of special case which is arrived at by taking some aspect of the concept to the extreme of what is permitted in the general case. If B is true, one can immediately deduce that A is true as well, and if A is false, B can also be immediately deduced to be false. A degenerate case is a special case which is in some way qualitatively different from almost all of the cases allowed. Special case examples include the following:
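As a small worked illustration of these notions (the examples below are supplied for clarity and are not taken from the article's own list):

```latex
\begin{aligned}
&\text{General statement } B:\ \forall x \in \mathbb{R},\; x^{2} \ge 0,
\qquad \text{special case } A:\ 3^{2} \ge 0,\\
&\text{so } B \text{ true} \implies A \text{ true, and } A \text{ false} \implies B \text{ false}.\\
&\text{Limiting case: a circle is the limiting case of an ellipse as the eccentricity } e \to 0.\\
&\text{Degenerate case: a circle of radius } r = 0 \text{ degenerates to a single point.}
\end{aligned}
```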
https://en.wikipedia.org/wiki/Special_case
In the petroleum industry, special core analysis, often abbreviated SCAL or SPCAN, is a laboratory procedure [ 1 ] for conducting flow experiments on core plugs taken from a petroleum reservoir. Special core analysis is distinguished from "routine (RCAL) or conventional (CCAL) core analysis" by adding more experiments, in particular including measurements of two-phase flow properties, determining relative permeability, capillary pressure, wettability, and electrical properties. Due to the time-consuming and costly character of SCAL measurements, routine core analysis (RCAL) data should be inspected thoroughly to select a representative subset of samples for SCAL. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Special_core_analysis
In mathematics, the special linear group $\operatorname{SL}(n,R)$ of degree $n$ over a commutative ring $R$ is the set of $n \times n$ matrices with determinant $1$, with the group operations of ordinary matrix multiplication and matrix inversion. This is the normal subgroup of the general linear group given by the kernel of the determinant, where $R^{\times}$ is the multiplicative group of $R$ (that is, $R$ excluding $0$ when $R$ is a field). These elements are "special" in that they form an algebraic subvariety of the general linear group: they satisfy a polynomial equation (since the determinant is polynomial in the entries). When $R$ is the finite field of order $q$, the notation $\operatorname{SL}(n,q)$ is sometimes used. The special linear group $\operatorname{SL}(n,\mathbb{R})$ can be characterized as the group of volume- and orientation-preserving linear transformations of $\mathbb{R}^{n}$. This corresponds to the interpretation of the determinant as measuring change in volume and orientation. When $F$ is $\mathbb{R}$ or $\mathbb{C}$, $\operatorname{SL}(n,F)$ is a Lie subgroup of $\operatorname{GL}(n,F)$ of dimension $n^{2}-1$. The Lie algebra $\mathfrak{sl}(n,F)$ of $\operatorname{SL}(n,F)$ consists of all $n \times n$ matrices over $F$ with vanishing trace. The Lie bracket is given by the commutator. Any invertible matrix can be uniquely represented according to the polar decomposition as the product of a unitary matrix and a Hermitian matrix with positive eigenvalues. The determinant of the unitary matrix lies on the unit circle, while that of the Hermitian matrix is real and positive. Since for a matrix from the special linear group the product of these two determinants must be $1$, each of them must be $1$. Therefore, a special linear matrix can be written as the product of a special unitary matrix (or special orthogonal matrix in the real case) and a positive definite Hermitian matrix (or symmetric matrix in the real case) having determinant $1$. It follows that the topology of the group $\operatorname{SL}(n,\mathbb{C})$ is the product of the topology of $\operatorname{SU}(n)$ and the topology of the group of Hermitian matrices of unit determinant with positive eigenvalues. A Hermitian matrix of unit determinant with positive eigenvalues can be uniquely expressed as the exponential of a traceless Hermitian matrix, and therefore the topology of this group is that of $(n^{2}-1)$-dimensional Euclidean space. [ 1 ] Since $\operatorname{SU}(n)$ is simply connected, [ 2 ] $\operatorname{SL}(n,\mathbb{C})$ is also simply connected, for all $n \geq 2$. The topology of $\operatorname{SL}(n,\mathbb{R})$ is the product of the topology of $\operatorname{SO}(n)$ and the topology of the group of symmetric matrices with positive eigenvalues and unit determinant.
Since the latter matrices can be uniquely expressed as the exponential of symmetric traceless matrices, this latter topology is that of $(n+2)(n-1)/2$-dimensional Euclidean space. Thus, the group $\operatorname{SL}(n,\mathbb{R})$ has the same fundamental group as $\operatorname{SO}(n)$; that is, $\mathbb{Z}$ for $n=2$ and $\mathbb{Z}_{2}$ for $n>2$. [ 3 ] In particular this means that $\operatorname{SL}(n,\mathbb{R})$, unlike $\operatorname{SL}(n,\mathbb{C})$, is not simply connected, for $n>1$. Two related subgroups, which in some cases coincide with $\operatorname{SL}$, and in other cases are accidentally conflated with $\operatorname{SL}$, are the commutator subgroup of $\operatorname{GL}$ and the group generated by transvections. These are both subgroups of $\operatorname{SL}$ (transvections have determinant $1$, and $\det$ is a map to an abelian group, so $[\operatorname{GL},\operatorname{GL}] < \operatorname{SL}$), but in general do not coincide with it. The group generated by transvections is denoted $\operatorname{E}(n,A)$ (for elementary matrices) or $\operatorname{TV}(n,A)$. By the second Steinberg relation, for $n \geq 3$, transvections are commutators, so for $n \geq 3$, $\operatorname{E}(n,A) < [\operatorname{GL}(n,A),\operatorname{GL}(n,A)]$. For $n=2$, transvections need not be commutators (of $2 \times 2$ matrices), as seen for example when $A$ is $\mathbb{F}_{2}$, the field of two elements. In that case the commutator subgroup of $\operatorname{GL}(2,\mathbb{F}_{2})$ is isomorphic to $A_{3}$, whereas $\operatorname{E}(2,\mathbb{F}_{2}) = \operatorname{SL}(2,\mathbb{F}_{2}) = \operatorname{GL}(2,\mathbb{F}_{2})$ is isomorphic to $S_{3}$, where $A_{3}$ and $S_{3}$ respectively denote the alternating and symmetric group on 3 letters. However, if $A$ is a field with more than 2 elements, then $\operatorname{E}(2,A) = [\operatorname{GL}(2,A),\operatorname{GL}(2,A)]$, and if $A$ is a field with more than 3 elements, $\operatorname{E}(2,A) = [\operatorname{SL}(2,A),\operatorname{SL}(2,A)]$. In some circumstances these coincide: the special linear group over a field or a Euclidean domain is generated by transvections, and the stable special linear group over a Dedekind domain is generated by transvections. For more general rings the stable difference is measured by the special Whitehead group $SK_{1}(A) = \operatorname{SL}(A)/\operatorname{E}(A)$, where $\operatorname{SL}(A)$ and $\operatorname{E}(A)$ are the stable groups of the special linear group and elementary matrices. If working over a ring where $\operatorname{SL}$ is generated by transvections (such as a field or Euclidean domain), one can give a presentation of $\operatorname{SL}$ using transvections with some relations. Transvections satisfy the Steinberg relations, but these are not sufficient: the resulting group is the Steinberg group, which is not the special linear group, but rather the universal central extension of the commutator subgroup of $\operatorname{GL}$.
A sufficient set of relations for $\operatorname{SL}(n,\mathbb{Z})$ for $n \geq 3$ is given by two of the Steinberg relations, plus a third relation (Conder, Robertson & Williams 1992, p. 19). Let $T_{ij} := e_{ij}(1)$ be the elementary matrix with 1's on the diagonal and in the $ij$ position, and 0's elsewhere (and $i \neq j$). Then the relations $[T_{ij},T_{jk}] = T_{ik}$ for $i \neq k$, $[T_{ij},T_{kl}] = 1$ for $i \neq l$ and $j \neq k$, together with $(T_{12}T_{21}^{-1}T_{12})^{4} = 1$, are a complete set of relations for $\operatorname{SL}(n,\mathbb{Z})$, $n \geq 3$. In characteristic other than 2, the set of matrices with determinant $\pm 1$ forms another subgroup of $\operatorname{GL}$, with $\operatorname{SL}$ as an index 2 subgroup (necessarily normal); in characteristic 2 this is the same as $\operatorname{SL}$. This forms a short exact sequence of groups: $1 \to \operatorname{SL}(n,F) \to \operatorname{SL}^{\pm}(n,F) \to \{\pm 1\} \to 1$. This sequence splits by taking any matrix with determinant $-1$, for example the diagonal matrix $(-1,1,\dots,1)$. If $n=2k+1$ is odd, the negative identity matrix $-I$ is in $\operatorname{SL}^{\pm}(n,F)$ but not in $\operatorname{SL}(n,F)$ and thus the group splits as an internal direct product $\operatorname{SL}^{\pm}(2k+1,F) \cong \operatorname{SL}(2k+1,F) \times \{\pm I\}$. However, if $n=2k$ is even, $-I$ is already in $\operatorname{SL}(n,F)$, $\operatorname{SL}^{\pm}$ does not split, and in general it is a non-trivial group extension. Over the real numbers, $\operatorname{SL}^{\pm}(n,\mathbb{R})$ has two connected components, corresponding to $\operatorname{SL}(n,\mathbb{R})$ and another component, which are isomorphic with an identification depending on a choice of point (a matrix with determinant $-1$). In odd dimension these are naturally identified by $-I$, but in even dimension there is no single natural identification. The group $\operatorname{GL}(n,F)$ splits over its determinant (using $F^{\times} = \operatorname{GL}(1,F) \to \operatorname{GL}(n,F)$ as the monomorphism from $F^{\times}$ to $\operatorname{GL}(n,F)$; see semidirect product), and therefore $\operatorname{GL}(n,F)$ can be written as a semidirect product of $\operatorname{SL}(n,F)$ by $F^{\times}$: $\operatorname{GL}(n,F) = \operatorname{SL}(n,F) \rtimes F^{\times}$.
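These algebraic facts can be checked numerically; the following minimal sketch in Python with NumPy and SciPy (libraries, dimensions and random values are illustrative choices, not part of the article) verifies that the exponential of a traceless matrix has determinant 1, and that a generic invertible matrix factors as a determinant-carrying diagonal matrix times an element of SL, mirroring the semidirect-product splitting above:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# 1) Lie algebra check: A traceless  =>  det(exp(A)) = exp(tr(A)) = 1,
#    so exp(A) lies in SL(3, R).
A = rng.standard_normal((3, 3))
A -= (np.trace(A) / 3) * np.eye(3)          # project onto sl(3, R)
g = expm(A)
assert np.isclose(np.linalg.det(g), 1.0)

# 2) Determinant splitting: any invertible g factors as
#    g = diag(det(g), 1, 1) @ s  with  det(s) = 1,
#    mirroring GL(n, F) = SL(n, F) ⋊ F^×.
g = rng.standard_normal((3, 3))             # generic element of GL(3, R)
d = np.linalg.det(g)
section = np.diag([d, 1.0, 1.0])            # image of d under F^× -> GL(n, F)
s = np.linalg.inv(section) @ g
assert np.isclose(np.linalg.det(s), 1.0)
assert np.allclose(section @ s, g)
```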
https://en.wikipedia.org/wiki/Special_linear_group
Specialty chemicals (also called specialties or effect chemicals) are particular chemical products that provide a wide variety of effects on which many other industry sectors rely. Some of the categories of speciality chemicals are adhesives, agrichemicals, cleaning materials, colors, cosmetic additives, construction chemicals, elastomers, flavors, food additives, fragrances, industrial gases, lubricants, paints, polymers, surfactants, and textile auxiliaries. Other industrial sectors such as automotive, aerospace, food, cosmetics, agriculture, manufacturing, and textiles are highly dependent on such products. [ 1 ] Speciality chemicals are materials used on the basis of their performance or function. Consequently, in addition to "effect" chemicals they are sometimes referred to as "performance" chemicals or "formulation" chemicals. They can be unique molecules or mixtures of molecules known as formulations. The physical and chemical characteristics of the single molecules or the formulated mixtures of molecules, together with the composition of the mixtures, influence the performance of the end product. In commercial applications the companies providing these products more often than not provide targeted customer service and innovative, individual technical solutions for their customers. [ 2 ] This is a differentiating component of the service provided by speciality chemical producers when they are compared to the other sub-sectors of the chemical industry such as fine chemicals, commodity chemicals, petrochemicals and pharmaceuticals. In the USA the speciality chemical manufacturers are members of the Society of Chemical Manufacturers and Affiliates (SOCMA). In the United Kingdom such companies are members of the British Association for Chemical Specialties (BACS). [ 3 ] SOCMA states that "Specialty chemicals differ from commodity chemicals in that each one may have only one or two uses, while commodities may have dozens of different applications for each chemical. While commodity chemicals make up most of the production volume (by weight) in the global marketplace, specialty chemicals make up most of the diversity (number of different chemicals) in commerce at any given time." The specialty chemicals industry is a sector within the broader chemical industry that produces a diverse range of high-value chemicals and materials used in various applications. These chemicals, also known as performance or effect chemicals, are formulated to provide specific functions, enhance product performance, or meet specific customer requirements. Specialty chemicals are used in a wide array of industries, including automotive, aerospace, agriculture, construction, electronics, food and beverage, personal care, pharmaceutical, and textile. Specialty chemicals are usually manufactured in batch chemical plants using batch processing techniques. [ 4 ] A batch process is one in which a defined quantity of product is made from a fixed input of raw materials during a measured period of time. The batch process most often consists of introducing accurately measured amounts of starting materials into a vessel, followed by a series of operations involving mixing, heating, cooling, further chemical reactions, distillation, crystallization, separation, drying, packaging etc., taking place at predetermined and scheduled intervals.
The manufacturing processes are supported by activities such as quality testing, storage, warehousing, logistics of the products, and the management of by-products and waste streams by recycling, treatment and disposal. For the next "batch" the equipment may be cleaned and the above processes repeated. Most specialty chemicals are organic chemicals that are used in a wide range of everyday products used by consumers and industry. It is a consumer-driven sector and as such the specialty chemical industry has to be innovative, entrepreneurial and consumer-driven. In contrast to the production of commodity chemicals, which are usually made in large-scale single-product manufacturing units to achieve economies of scale, specialty manufacturing units are required to be flexible because the products, raw materials, processes, operating conditions and equipment mix may change on a regular basis to respond to the needs of customers. In the United Kingdom there are many speciality chemical companies that are members of the British Association for Chemical Specialties (BACS) and also the Chemical Industries Association (CIA). Most of these companies have their manufacturing units based in the north of England; many of them, for example, are members of the Northeast of England Process Industry Cluster (NEPIC). The specialty chemicals market was valued at US$627.7 billion in 2021 and is expected to grow to $886.2 billion by 2030, at a CAGR of 4.3% during 2022–2030. [ 5 ] These speciality products are marketed as pesticides, speciality polymers, electronic chemicals, surfactants, construction chemicals, industrial cleaners, flavors and fragrances, speciality coatings, printing inks, water-soluble polymers, food additives, paper chemicals, oil field chemicals, plastic adhesives, adhesives and sealants, cosmetic chemicals, water management chemicals, catalysts, and textile chemicals. The world's top five specialty chemicals segments in 2012 were specialty polymers, industrial and institutional (I&I) cleaners, construction chemicals, electronic chemicals, and flavors and fragrances. These segments had a market share of about 36%. The ten largest segments accounted for 62% of total annual specialty chemicals sales. [ 6 ] The specialty chemical market is complex and each specialty chemicals business segment comprises many sub-segments, each with individualized product, market and competitive profiles. This has given rise to a wide range of business needs and opportunities; consequently there are a large number of speciality chemical companies around the world. Many of these companies are SMEs with their own niche products and sometimes a technology focus. The common stock of over 400 speciality chemical companies from around the world is identified by Bloomberg, [ 7 ] providers of global business and financial information. There are many more privately owned speciality chemical companies that are not quoted on the global stock markets. In 2010, the 10 largest European speciality chemical companies were BASF, AkzoNobel, Clariant, Evonik, Cognis, Kemira, Lanxess, Rhodia, Wacker and Croda. [ 8 ] By definition, speciality chemicals are produced in relatively small quantities but they represent 28 per cent of EU chemical sales. [ 9 ] The 10 largest USA speciality chemical companies are The Lubrizol Corporation, Huntsman, Ashland, Chemtura, Rockwood, Albemarle, Cabot, W. R. Grace, Ferro Corporation, and Cytec Industries.
[ 10 ] The emergence of India as a manufacturer and supplier of speciality chemicals has had a major impact on the global speciality chemical industry. In India many speciality chemical companies are members of national organisations such as the Indian Chemical Council (ICC) and the Indian Speciality Chemical Manufacturers' Association (ISCMA). The wide capability of these companies extends into all sectors and sub-sectors of the speciality chemical market. [ 11 ] The United Kingdom has 1,300 speciality chemical companies, which have a combined annual turnover of £11.2 billion. [ 12 ] The products of these UK companies are sold globally and contribute significantly to the UK's export trade. With over £30 billion of exports, the chemical industry is the last remaining net-exporting manufacturing industry in the UK, [ 13 ] and speciality chemicals make up a significant proportion of this. The products include dyestuffs, paints, explosives, adhesives, flavors and fragrances, photographic chemicals, unrecorded media and various industrial specialities. Because speciality chemical manufacturers, unlike commodity chemical manufacturers, are less dependent on large-scale infrastructure, speciality chemical companies can be found in almost all regions of the UK. Some 80% of the United Kingdom chemical industry is based in the north of the country, and consequently there are concentrations of speciality chemical companies in Yorkshire and in the membership of the Northeast of England Process Industry Cluster (NEPIC). [ 14 ] Specialty chemicals can be broadly categorized into the following segments: [ 15 ] [ 16 ] [ 17 ]
https://en.wikipedia.org/wiki/Speciality_chemicals
In the branch of mathematics known as topology, the specialization (or canonical) preorder is a natural preorder on the set of the points of a topological space. For most spaces that are considered in practice, namely for all those that satisfy the T0 separation axiom, this preorder is even a partial order (called the specialization order). On the other hand, for T1 spaces the order becomes trivial and is of little interest. The specialization order is often considered in applications in computer science, where T0 spaces occur in denotational semantics. The specialization order is also important for identifying suitable topologies on partially ordered sets, as is done in order theory. Consider any topological space X. The specialization preorder ≤ on X relates two points of X when one lies in the closure of the other. However, various authors disagree on which 'direction' the order should go. What is agreed is that if x ∈ cl{ y } (where cl{ y } denotes the closure of the singleton set { y }, i.e. the intersection of all closed sets containing { y }), we say that x is a specialization of y and that y is a generalization of x; this is commonly written y ⤳ x. Unfortunately, the property "x is a specialization of y" is alternatively written as "x ≤ y" and as "y ≤ x" by various authors (see, respectively, [ 1 ] and [ 2 ]). Both definitions have intuitive justifications: in the case of the former, we have However, in the case where our space X is the prime spectrum Spec(R) of a commutative ring R (which is the motivational situation in applications related to algebraic geometry), then under our second definition of the order, we have For the sake of consistency, for the remainder of this article we will take the first definition, that "x is a specialization of y" be written as x ≤ y. We then see, These restatements help to explain why one speaks of a "specialization": y is more general than x, since it is contained in more open sets. This is particularly intuitive if one views closed sets as properties that a point x may or may not have. The more closed sets contain a point, the more properties the point has, and the more special it is. The usage is consistent with the classical logical notions of genus and species; and also with the traditional use of generic points in algebraic geometry, in which closed points are the most specific, while a generic point of a space is one contained in every nonempty open subset. Specialization as an idea is applied also in valuation theory. The intuition of upper elements being more specific is typically found in domain theory, a branch of order theory that has ample applications in computer science. Let X be a topological space and let ≤ be the specialization preorder on X. Every open set is an upper set with respect to ≤ and every closed set is a lower set. The converses are not generally true. In fact, a topological space is an Alexandrov-discrete space if and only if every upper set is also open (or equivalently every lower set is also closed). Let A be a subset of X. The smallest upper set containing A is denoted ↑A and the smallest lower set containing A is denoted ↓A. In case A = { x } is a singleton one uses the notation ↑x and ↓x. For x ∈ X one has: The lower set ↓x is always closed; however, the upper set ↑x need not be open or closed. The closed points of a topological space X are precisely the minimal elements of X with respect to ≤.
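For a finite space the definition can be applied mechanically; the following small sketch in Python (the two-point Sierpiński space is chosen purely for illustration) computes "x ≤ y" as "x lies in the closure of {y}", i.e. every open set containing x also contains y:

```python
# Sierpinski space: points {0, 1}; open sets are {}, {1}, {0, 1}.
points = [0, 1]
opens = [set(), {1}, {0, 1}]

def is_specialization(x, y):
    """x <= y  iff  x is in cl({y}), i.e. every open set containing x contains y."""
    return all(y in U for U in opens if x in U)

for x in points:
    for y in points:
        if is_specialization(x, y):
            print(f"{x} <= {y}")
# Prints 0 <= 0, 0 <= 1, 1 <= 1: the closed point 0 is the minimal element.
```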
As suggested by the name, the specialization preorder is a preorder, i.e. it is reflexive and transitive. The equivalence relation determined by the specialization preorder is just that of topological indistinguishability. That is, x and y are topologically indistinguishable if and only if x ≤ y and y ≤ x. Therefore, the antisymmetry of ≤ is precisely the T0 separation axiom: if x and y are indistinguishable then x = y. In this case it is justified to speak of the specialization order. On the other hand, the symmetry of the specialization preorder is equivalent to the R0 separation axiom: x ≤ y if and only if x and y are topologically indistinguishable. It follows that if the underlying topology is T1, then the specialization order is discrete, i.e. one has x ≤ y if and only if x = y. Hence, the specialization order is of little interest for T1 topologies, especially for all Hausdorff spaces. Any continuous function f between two topological spaces is monotone with respect to the specialization preorders of these spaces: x ≤ y implies f(x) ≤ f(y). The converse, however, is not true in general. In the language of category theory, we then have a functor from the category of topological spaces to the category of preordered sets that assigns a topological space its specialization preorder. This functor has a left adjoint, which places the Alexandrov topology on a preordered set. There are spaces that are more specific than T0 spaces for which this order is interesting: the sober spaces. Their relationship to the specialization order is more subtle: For any sober space X with specialization order ≤, we have One may describe the second property by saying that open sets are inaccessible by directed suprema. A topology is order consistent with respect to a certain order ≤ if it induces ≤ as its specialization order and it has the above property of inaccessibility with respect to (existing) suprema of directed sets in ≤. The specialization order yields a tool to obtain a preorder from every topology. It is natural to ask for the converse too: Is every preorder obtained as a specialization preorder of some topology? Indeed, the answer to this question is positive and there are in general many topologies on a set X that induce a given order ≤ as their specialization order. The Alexandrov topology of the order ≤ plays a special role: it is the finest topology that induces ≤. The other extreme, the coarsest topology that induces ≤, is the upper topology, the least topology in which all complements of sets ↓x (for x in X) are open. There are also interesting topologies in between these two extremes. The finest sober topology that is order consistent in the above sense for a given order ≤ is the Scott topology. The upper topology however is still the coarsest sober order-consistent topology. In fact, its open sets are even inaccessible by any suprema. Hence any sober space with specialization order ≤ is finer than the upper topology and coarser than the Scott topology. Yet, such a space may fail to exist, that is, there exist partial orders for which there is no sober order-consistent topology. In particular, the Scott topology is not necessarily sober.
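Conversely, the Alexandrov topology mentioned above can be built directly from a preorder by taking the open sets to be exactly the upper sets; a brief sketch in Python (the three-element chain is an illustrative choice):

```python
from itertools import chain, combinations

# Example preorder (a chain chosen for illustration): a <= b <= c.
points = ['a', 'b', 'c']
leq = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('a', 'b'), ('b', 'c'), ('a', 'c')}

def is_upper_set(S):
    """S is an upper set if x in S and x <= y together imply y in S."""
    return all(y in S for (x, y) in leq if x in S)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Open sets of the Alexandrov topology = all upper sets of the preorder.
opens = [set(S) for S in powerset(points) if is_upper_set(set(S))]
print(opens)   # [set(), {'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
```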
https://en.wikipedia.org/wiki/Specialization_(pre)order
Specialized pro-resolving mediators ( SPM , also termed specialized proresolving mediators ) are a large and growing class of cell signaling molecules formed in cells by the metabolism of polyunsaturated fatty acids (PUFA) by one or a combination of lipoxygenase , cyclooxygenase , and cytochrome P450 monooxygenase enzymes. Pre-clinical studies, primarily in animal models and human tissues, implicate SPM in orchestrating the resolution of inflammation. [ 1 ] [ 2 ] [ 3 ] Prominent members include the resolvins and protectins. SPM join the long list of other physiological agents which tend to limit inflammation (see Inflammation § Resolution), including glucocorticoids, interleukin 10 (an anti-inflammatory cytokine), interleukin 1 receptor antagonist (an inhibitor of the action of the pro-inflammatory cytokine interleukin 1), annexin A1 (an inhibitor of the formation of pro-inflammatory metabolites of polyunsaturated fatty acids), and the gaseous mediators carbon monoxide (see Carbon monoxide § Physiology), nitric oxide (see Nitric oxide § Biological functions), and hydrogen sulfide (see Hydrogen sulfide §§ Biosynthesis and Signalling role). [ 4 ] [ 5 ] The absolute as well as relative roles of the SPM along with other physiological anti-inflammatory agents in resolving human inflammatory responses remain to be defined precisely. However, studies suggest that synthetic SPM that are resistant to being metabolically inactivated hold promise of being clinically useful pharmacological tools for preventing and resolving a wide range of pathological inflammatory responses along with the tissue destruction and morbidity that these responses cause. Based on animal model studies, the inflammation-based diseases which may be treated by such metabolically resistant SPM analogs include not only pathological and tissue-damaging responses to invading pathogens but also a wide array of pathological conditions in which inflammation is a contributing factor, such as allergic inflammatory diseases (e.g. asthma, rhinitis), autoimmune diseases (e.g. rheumatoid arthritis, systemic lupus erythematosus), psoriasis, atherosclerosis leading to heart attacks and strokes, type 1 and type 2 diabetes, the metabolic syndrome, and certain dementia syndromes (e.g. Alzheimer's disease, Huntington's disease). [ 1 ] [ 2 ] [ 3 ] Many of the SPM are metabolites of omega−3 fatty acids and have been proposed to be responsible for the anti-inflammatory actions that are attributed to omega−3 fatty acid-rich diets. [ 6 ] Through most of the early period of its study, the acute inflammatory response was regarded as a self-limiting innate immune system reaction to invading foreign organisms, tissue injuries, and other insults. These reactions were orchestrated by various soluble signaling agents such as a) foreign organism-derived N-formylated oligopeptide chemotactic factors (e.g. N-formylmethionine-leucyl-phenylalanine); b) complement components C5a and C3a, which are chemotactic factors formed during the activation of the host's blood complement system by invading organisms or injured tissues; and c) host cell-derived pro-inflammatory cytokines (e.g. interleukin 1s), host-derived pro-inflammatory chemokines (e.g. CXCL8, CCL2, CCL3, CCL4, CCL5, CCL11, CXCL10), platelet-activating factor, and PUFA metabolites including in particular leukotrienes (e.g. LTB4), hydroxyeicosatetraenoic acids (e.g. 5-HETE, 12-HETE), the hydroxylated heptadecatrienoic acid 12-HHT, and oxoeicosanoids (e.g. 5-oxo-ETE).
These agents functioned as pro-inflammatory signals by increasing the permeability of local blood vessels; activating tissue-bound pro-inflammatory cells such as mast cells and macrophages; and attracting to nascent inflammatory sites and activating circulating neutrophils, monocytes, eosinophils, gamma delta T cells, and natural killer T cells. The cited cells then proceeded to neutralize invading organisms, limit tissue injury, and initiate tissue repair. Hence, the classic inflammatory response was viewed as fully regulated by the soluble signaling agents. That is, the agents formed, orchestrated an inflammatory cell response, and then dissipated to allow resolution of the response. [ 7 ] In 1984, however, Charles N. Serhan, Mats Hamberg and Bengt Samuelsson discovered that human neutrophils metabolize arachidonic acid to two novel products that contain 3 hydroxyl residues and 4 double bonds viz., 5,6,15-trihydroxy-7,9,11,13-icosatetraenoic acid and 5,14,15-trihydroxy-6,8,10,12-icosatetraenoic acid. [ 8 ] [ 9 ] These products are now termed lipoxin A4 and B4, respectively. While initially found to have in vitro activity suggesting that they might act as pro-inflammatory agents, Serhan and colleagues and other groups found that the lipoxins, as well as a large number of newly discovered metabolites of other PUFA, possess primarily if not exclusively anti-inflammatory activities and therefore may be crucial for causing the resolution of inflammation. In this view, inflammatory responses are not self-limiting but rather limited by the formation of a particular group of PUFA metabolites that counteract the actions of pro-inflammatory signals. [ 10 ] Later, these PUFA metabolites were classified together and termed specialized pro-resolving mediators (i.e. SPM). [ 11 ] The production and activities of the SPM suggest a new view of inflammation wherein the initial response to foreign organisms, tissue injury, or other insults involves numerous soluble cell signaling molecules that not only recruit various cell types to promote inflammation but concurrently cause these cells to produce SPM, which feed back on their parent and other cells to dampen their pro-inflammatory activity and to promote repair. Resolution of an inflammatory response is thus an active rather than self-limiting process, set into motion at least in part by the initiating pro-inflammatory mediators (e.g. prostaglandin E2 and prostaglandin D2), which instruct relevant cells to produce SPM and to assume a more anti-inflammatory phenotype. Resolution of the normal inflammatory response, then, may involve switching the production of pro-inflammatory to anti-inflammatory PUFA metabolites. Excessive inflammatory responses to insult, as well as many pathological inflammatory responses that contribute to diverse diseases such as atherosclerosis, obesity, diabetes, Alzheimer's disease, inflammatory bowel disease, etc. (see Inflammation § Disorders), may reflect, in part, a failure in this class switching. Diseases caused or worsened by non-adaptive inflammatory responses may be amenable to treatment with SPM or synthetic SPM which, unlike natural SPM, resist in vivo metabolic inactivation. [ 12 ] [ 2 ] [ 13 ] [ 14 ] The SPM possess overlapping activities which work to resolve inflammation.
SPMs (typically more than one for each listed action) have the following anti-inflammatory activities on the indicated cell types as defined in animal and human model studies: [ 1 ] [ 15 ] [ 16 ] [ 17 ] SPMs also stimulate anti-inflammatory and tissue-reparative types of responses in epithelial cells, endothelial cells, fibroblasts, smooth muscle cells, osteoclasts, osteoblasts, goblet cells, and kidney podocytes, [ 1 ] as well as activating the heme oxygenase system of cells, thereby increasing the production of the tissue-protective gasotransmitter, carbon monoxide (see Carbon monoxide § Physiology), in inflamed tissues. [ 18 ] SPM are metabolites of arachidonic acid (AA), eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), or n −3 DPA (i.e. 7 Z ,10 Z ,13 Z ,16 Z ,19 Z - docosapentaenoic acid or clupanodonic acid); these metabolites are termed lipoxins (Lx), resolvins (Rv), protectins (PD) (also termed neuroprotectins [NP]), and maresins (MaR). EPA, DHA, and n −3 DPA are n −3 fatty acids; their conversions to SPM are proposed to be one mechanism by which n −3 fatty acids may ameliorate inflammatory diseases (see Omega−3 fatty acid § Inflammation). [ 19 ] SPM act, at least in part, by activating or inhibiting cells through binding to specific cellular receptors, thereby activating these receptors or blocking their activation. Human cells synthesize LxA4 and LxB4 by serially metabolizing arachidonic acid (5 Z ,8 Z ,11 Z ,14 Z -eicosatetraenoic acid) with a) ALOX15 (or possibly ALOX15B) followed by ALOX5; b) ALOX5 followed by ALOX15 (or possibly ALOX15B); or c) ALOX5 followed by ALOX12. Cells and, indeed, humans treated with aspirin form the 15 R -hydroxy epimer lipoxins of these two 15 S -lipoxins, viz. 15-epi-LXA4 and 15-epi-LXB4, through a pathway that involves ALOX5 followed by aspirin-treated cyclooxygenase-2 (COX-2). Aspirin-treated COX-2, while inactive in metabolizing arachidonic acid to prostanoids, metabolizes this PUFA to 15 R -hydroperoxy-eicosatetraenoic acid, whereas the ALOX15 (or ALOX15B) pathway metabolizes arachidonic acid to 15 S -hydroperoxy-eicosatetraenoic acid. The two aspirin-triggered lipoxins (AT-lipoxins) or epi-lipoxins differ structurally from LxA4 and LxB4 only in the S versus R chirality of their 15-hydroxyl residue. Numerous studies have found that these metabolites have potent anti-inflammatory activity in vitro and in animal models, and that in humans they may stimulate cells by binding to certain receptors on these cells. [ 13 ] [ 20 ] [ 21 ] The following table lists the structural formulae (ETE stands for eicosatetraenoic acid), major activities, and cellular receptor targets (where known). Resolvins are metabolites of the omega−3 fatty acids EPA, DHA, and 7 Z ,10 Z ,13 Z ,16 Z ,19 Z - docosapentaenoic acid ( n −3 DPA). All three of these omega−3 fatty acids are abundant in salt water fish, fish oils, and other seafood. [ 19 ] n −3 DPA (also termed clupanodonic acid) is to be distinguished from its n −6 DPA isomer, i.e. 4 Z ,7 Z ,10 Z ,13 Z ,16 Z -docosapentaenoic acid, also termed osbond acid.
Cells metabolize EPA (5 Z ,8 Z ,11 Z ,14 Z ,17 Z -eicosapentaenoic acid) by a cytochrome P450 monooxygenase(s) (in infected tissues a bacterial cytochrome P450 may supply this activity) or aspirin-treated cyclooxygenase-2 to 18 R -hydroperoxy-EPA, which is then reduced to 18 R -hydroxy-EPA and further metabolized by ALOX5 to 5 S -hydroperoxy-18 R -hydroxy-EPA; the latter product may be reduced to its 5,18-dihydroxy product, RvE2, or converted to its 5,6-epoxide and then acted on by an epoxide hydrolase to form a 5,12,18-trihydroxy derivative, RvE1. In vitro, ALOX5 can convert 18 S -hydroxy-EPA to the 18 S analog of RvE1, termed 18 S -RvE1. 18 R -hydroxy-EPA or 18 S -hydroxy-EPA may also be metabolized by ALOX15 to a 17 S -hydroperoxy intermediate and then reduced to the 17 S -hydroxy product, RvE3. RvE3, as detected in in vitro studies, is a dihydroxy mixture of 18 S -dihydroxy (i.e. 18 S -RvE3) and 18 R -dihydroxy (i.e. 18 R -RvE3) isomers, both of which, similar to the other aforementioned metabolites, possess potent SPM activity in in vitro and/or animal models. [ 24 ] [ 25 ] [ 26 ] In vitro studies find that ALOX5 can convert 18 S -hydroperoxy-EPA to the 18 S -hydroxy analog of RvE2, termed 18 S -RvE2. 18 S -RvE2, however, has little or no SPM activity [ 26 ] and is therefore not considered to be an SPM here. The following table lists the structural formulae (EPA stands for eicosapentaenoic acid), major activities, and cellular receptor targets (where known). Cells metabolize DHA (4 Z ,7 Z ,10 Z ,13 Z ,16 Z ,19 Z -docosahexaenoic acid) by either ALOX15 or a cytochrome P450 monooxygenase(s) (bacteria may supply the cytochrome P450 activity in infected tissues) or aspirin-treated cyclooxygenase-2 to 17 S -hydroperoxy-DHA, which is reduced to 17 S -hydroxy-DHA. ALOX5 metabolizes this intermediate to a) 7 S -hydroperoxy,17 S -hydroxy-DHA, which is then reduced to its 7 S ,17 S -dihydroxy analog, RvD5; b) 4 S -hydroperoxy,17 S -hydroxy-DHA, which is reduced to its 4 S ,17 S -dihydroxy analog, RvD6; c) 7 S ,8 S -epoxy-17 S -DHA, which is then hydrolyzed to 7,8,17-trihydroxy and 7,16,17-trihydroxy products, RvD1 and RvD2, respectively; and d) 4 S ,5 S -epoxy-17 S -DHA, which is then hydrolyzed to 4,11,17-trihydroxy and 4,5,17-trihydroxy products, RvD3 and RvD4, respectively. These six RvDs possess a 17 S -hydroxy residue; however, if aspirin-treated cyclooxygenase-2 is the initiating enzyme, they contain a 17 R -hydroxy residue and are termed 17 R -RvDs, aspirin-triggered-RvDs, or AT-RvDs 1 through 6. In certain cases, the final structures of these AT-RvDs are assumed by analogy to the structures of their RvD counterparts. Studies have found that most (and presumably all) of these metabolites have potent anti-inflammatory activity in vitro and/or in animal models. [ 23 ] [ 24 ] [ 25 ] [ 30 ] The following table lists the structural formulae, major activities with citations, and cellular receptor targets of D series resolvins. n −3 DPA (i.e. 7 Z ,10 Z ,13 Z ,16 Z ,19 Z -docosapentaenoic acid)-derived resolvins are recently identified SPM. In the model system used to identify them, human platelets are pretreated with aspirin to form acetylated COX-2 or with the statin atorvastatin to form S -nitrosylated COX-2, thereby modifying this enzyme's activity. The modified enzyme metabolizes n −3 DPA to a 13 R -hydroperoxy-n−3 DPA intermediate which is passed over to nearby human neutrophils; these cells then metabolize the intermediate to four poly-hydroxyl metabolites termed resolvin T1 (RvT1), RvT2, RvT3, and RvT4.
These T series resolvins also form in mice undergoing experimental inflammatory responses and have potent in vitro and in vivo anti-inflammatory activity; they are particularly effective in reducing systemic inflammation as well as increasing the survival of mice injected with lethal doses of E. coli bacteria. [ 25 ] [ 38 ] [ 39 ] Another set of newly described n−3 DPA resolvins, RvD1 n−3 , RvD2 n−3 , and RvD5 n−3 , have been named based on their presumed structural analogies to the DHA-derived resolvins RvD1, RvD2, and RvD5, respectively. These three n −3 DPA-derived resolvins have not been defined with respect to the chirality of their hydroxyl residues or the cis–trans isomerism of their double bonds but do possess potent anti-inflammatory activity in animal models and human cells; they also have protective actions in increasing the survival of mice subjected to E. coli sepsis. [ 39 ] The following table lists the structural formulae (DPA stands for docosapentaenoic acid), major activities and cellular receptor targets (where known). Cells metabolize DHA by either ALOX15, by a bacterial or mammalian cytochrome P450 monooxygenase (Cyp1a1, Cyp1a2, or Cyp1b1 in mice; see CYP450 §§ CYP families in humans and P450s in other species) or by aspirin-treated cyclooxygenase-2 to 17 S -hydroperoxy or 17 R -hydroperoxy intermediates (see previous subsection); this intermediate is then converted to a 16 S ,17 S - epoxide which is then hydrolyzed (probably by a soluble epoxide hydrolase) to protectin D1 (PD1, also termed neuroprotectin D1 [NPD1] when formed in neural tissue). [ 2 ] PDX is formed by the metabolism of DHA by two serial lipoxygenases, probably a 15-lipoxygenase and ALOX12. 22-Hydroxy-PD1 (also termed 22-hydroxy-NPD1) is formed by the omega oxidation of PD1, probably by an unidentified cytochrome P450 enzyme. While omega-oxidation products of most bioactive PUFA metabolites are far weaker than their precursors, 22-hydroxy-PD1 is as potent as PD1 in inflammatory assays. Aspirin-triggered-PD1 (AT-PD1 or AT-NPD1) is the 17 R -hydroxyl diastereomer of PD1 formed by the initial metabolism of DHA by aspirin-treated COX-2, or possibly a cytochrome P450 enzyme, to 17 R -hydroxy-DHA and its subsequent metabolism, possibly in a manner similar to that which forms PD1. 10-Epi-PD1 (ent-AT-NPD1), the 10 S -hydroxy diastereomer of PD1, has been detected in small amounts in human neutrophils. While its in vivo synthetic pathway has not been defined, 10-epi-PD1 has anti-inflammatory activity. [ 25 ] [ 43 ] The following table lists the structural formulae (DHA stands for docosahexaenoic acid), major activities, cellular receptor targets (where known), and Wikipedia pages giving further information on the activity and syntheses. n −3 DPA-derived protectins with structural similarities to PD1 and PD2 have been described, determined to be formed in vitro and in animal models, and termed PD1 n−3 and PD2 n−3 , respectively. These products are presumed to be formed in mammals by the metabolism of n−3 DPA by an unidentified 15-lipoxygenase activity to a 16,17-epoxide intermediate and the subsequent conversion of this intermediate to the di-hydroxyl products PD1 n−3 and PD2 n−3 . PD1 n−3 has anti-inflammatory activity in a mouse model of peritonitis; PD2 n−3 has anti-inflammatory activity in an in vitro model. [ 39 ] [ 47 ] The following table lists the structural formulae (DPA stands for docosapentaenoic acid), major activities and cellular receptor targets (where known).
Cells metabolize DHA by ALOX12, other lipoxygenases (12/15-lipoxygenase in mice), or an unidentified pathway to a 13 S ,14 S - epoxide -4 Z ,7 Z ,9 E ,11 E ,16 Z ,19 Z -DHA intermediate (13 S ,14 S -epoxy-maresin, MaR) and then hydrolyze this intermediate by an epoxide hydrolase activity (which ALOX12 and mouse 12/15-lipoxygenase possess) to MaR1 and MaR2. During this metabolism, cells also form 7-epi-MaR1, i.e. the 7 S -12 E isomer of MaR1, as well as the 14 S -hydroxy and 14 R -hydroxy metabolites of DHA. The latter hydroxy metabolites can be converted by an unidentified cytochrome P450 enzyme to maresin-like 1 (MaR-L1) and MaR-L2 by omega oxidation; alternatively, DHA may be first metabolized to 22-hydroxy-DHA by CYP1A2, CYP2C8, CYP2C9, CYP2D6, CYP2E1, or CYP3A4 and then metabolized through the cited epoxide-forming pathways to MaR-L1 and MaR-L2. Studies have found that these metabolites have potent anti-inflammatory activity in vitro and in animal models. [ 14 ] [ 24 ] [ 25 ] The following table lists the structural formulae (DHA stands for docosahexaenoic acid), major activities and cellular receptor targets (where known). n −3 DPA-derived maresins are presumed to be formed in mammals by metabolism of n −3 DPA by an undefined 12-lipoxygenase activity to a 14-hydroperoxy-DPA intermediate and the subsequent conversion of this intermediate to di-hydroxyl products which have been termed MaR1 n−3 , MaR2 n−3 , and MaR3 n−3 based on their structural analogies to MaR1, MaR2, and MaR3, respectively. MaR1 n−3 and MaR2 n−3 have been found to possess anti-inflammatory activity in in vitro assays of human neutrophil function. These n−3 DPA-derived maresins have not been defined with respect to the chirality of their hydroxyl residues or the cis–trans isomerism of their double bonds. [ 39 ] The following table lists the structural formulae (DPA stands for docosapentaenoic acid), major activities and cellular receptor targets (where known). The following PUFA metabolites, while not yet formally classified as SPM, have been recently described and determined to have anti-inflammatory activity. 10 R ,17 S -dihydroxy-7 Z ,11 E ,13 E ,15 Z ,19 Z -docosapentaenoic acid (10 R ,17 S -diHDPA EEZ ) has been found in inflamed exudates of animal models and possesses in vitro and in vivo anti-inflammatory activity nearly as potent as that of PD1. [ 44 ] n −6 DPA (i.e. 4 Z ,7 Z ,10 Z ,13 Z ,16 Z -docosapentaenoic acid or osbond acid) is an isomer of n −3 DPA ( clupanodonic acid ), differing from the latter fatty acid only in the location of its 5 double bonds. Cells metabolize n −6 DPA to 7-hydroxy-DPA n−6 , 10,17-dihydroxy-DPA n−6 , and 7,17-dihydroxy-DPA n−3 ; the former two metabolites have been shown to possess anti-inflammatory activity in in vitro and animal model studies. [ 39 ] Cells metabolize DHA and n −3 DPA by COX-2 to 13-hydroxy-DHA and 13-hydroxy-DPA n−3 products and by aspirin-treated COX-2 to 17-hydroxy-DHA and 17-hydroxy-DPA n−3 products, and may then oxidize these products to their corresponding oxo (i.e. ketone) derivatives, 13-oxo-DHA (also termed electrophilic fatty acid oxo derivative or EFOX-D6), 13-oxo-DPA n−3 (EFOX-D5), 17-oxo-DHA (17-EFOX-D6), and 17-oxo-DPA n−3 (17-EFOX-D3). These oxo metabolites directly activate the nuclear receptor peroxisome proliferator-activated receptor gamma and possess anti-inflammatory activity as assessed in in vitro systems. [ 39 ] DHA ethanolamide (the DHA analog of arachidonoyl ethanolamide, i.e.
anandamide) is metabolized to 10,17-dihydroxydocosahexaenoyl ethanolamide (10,17-diHDHEA) and/or 15-hydroxy-16(17)-epoxy-docosapentaenoyl ethanolamide (15-HEDPEA) by mouse brain tissue and human neutrophils. Both compounds possess anti-inflammatory activity in vitro; 15-HEDPEA also has tissue-protective effects in mouse models of lung injury and tissue reperfusion. Like anandamide, both compounds activate the cannabinoid receptor. [ 50 ] [ 51 ] PUFA derivatives containing a cyclopentenone structure are chemically reactive and can form adducts with various tissue targets, particularly proteins. Certain of these PUFA-cyclopentenones bind to the sulfur residues in the KEAP1 component of the KEAP1- NFE2L2 protein complex in the cytosol of cells. This negates KEAP1's ability to bind NFE2L2; in consequence, NFE2L2 becomes free to translocate to the nucleus and stimulate the transcription of genes that encode proteins active in detoxifying reactive oxygen species; this effect tends to reduce inflammatory reactions. PUFA-cyclopentenones may likewise react with the IKK2 component of the cytosolic IKK2- NFκB protein complex, thereby inhibiting NFκB from stimulating the transcription of genes that encode various pro-inflammatory proteins. One or both of these mechanisms appears to contribute to the ability of certain highly reactive PUFA-cyclopentenones to exhibit SPM activity. The PUFA-cyclopentenones include two prostaglandins (PG), Δ12-PGJ2 and 15-deoxy-Δ12,14-PGJ2, and two isoprostanes, 5,6-epoxyisoprostane E2 and 5,6-epoxyisoprostane A2. Both PGJ2s are arachidonic acid-derived metabolites made by cyclooxygenases, primarily COX-2, which is induced in many cell types during inflammation. Both isoprostanes form non-enzymatically as a result of the attack by reactive oxygen species on arachidonic acid bound to cellular phospholipids; they are then released from the phospholipids and are free to attack their target proteins. All four products have been shown to form and to possess SPM activity in various in vitro studies of human and animal tissue as well as in in vivo studies of animal models of inflammation; they have been termed pro-resolving mediators of inflammation. [ 52 ] Mice made deficient in their 12/15-lipoxygenase gene (Alox15) exhibit a prolonged inflammatory response, along with various other aspects of a pathologically enhanced inflammatory response, in experimental models of corneal injury, airway inflammation, and peritonitis. These mice also show an accelerated rate of progression of atherosclerosis, whereas mice made to overexpress 12/15-lipoxygenase exhibit a delayed rate of atherosclerosis development. Rabbits overexpressing Alox15 exhibited reduced tissue destruction and bone loss in a model of periodontitis. [ 2 ] Similarly, Alox5-deficient mice exhibit a worsened inflammatory component, failure to resolve, and/or decreased survival in experimental models of respiratory syncytial virus disease, Lyme disease, Toxoplasma gondii disease, and corneal injury. [ 2 ] These studies indicate that the suppression of inflammation is a major function of 12/15-lipoxygenase and Alox5, along with the SPMs they make, in at least certain rodent experimental inflammation models; although these rodent lipoxygenases differ from human ALOX15 and ALOX5 in the profile of the PUFA metabolites that they make as well as various other parameters (e.g.
tissue distribution), these genetic studies suggest that human ALOX15, ALOX5, and the SPMs they make may play similar anti-inflammatory functions in humans. Concurrent knockout of the three members of the CYP1 family of cytochrome P450 enzymes in mice, i.e. Cyp1a1, Cyp1a2, and Cyp1b1, caused an increase in the recruitment of neutrophils to the peritoneum in mice undergoing experimental peritonitis; these triple knockout mice also exhibited an increase in the peritoneal fluid LTB4 level and decreases in the levels of peritoneal fluid NPD1 as well as the precursors to various SPMs, including 5-hydroxyeicosatetraenoic acid, 15-hydroxyeicosatetraenoic acid, 18-hydroxyeicosapentaenoic acid, 17-hydroxydocosahexaenoic acid, and 14-hydroxydocosahexaenoic acid. These results support the notion that Cyp1 enzymes contribute to the production of certain SPMs and to inflammatory responses in mice; CYP1 enzymes may therefore play a similar role in humans. [ 53 ] In a randomized controlled trial, AT-LXA4 and a comparatively stable analog of LXB4, 15 R/S -methyl-LXB4, reduced the severity of eczema in a study of 60 infants. [ 54 ] [ 55 ] A synthetic analog of RvE1 is in clinical phase III testing (see Phases of clinical research) for the treatment of the inflammation-based dry eye syndrome; along with this study, other clinical trials (NCT01639846, NCT01675570, NCT00799552 and NCT02329743) using an RvE1 analogue to treat various ocular conditions are underway. [ 16 ] RvE1, MaR1, and NPD1 are in clinical development studies for the treatment of neurodegenerative diseases and hearing loss. [ 2 ] And, in a single study, inhaled LXA4 decreased LTC4-initiated bronchoprovocation in patients with asthma. [ 16 ]
https://en.wikipedia.org/wiki/Specialized_pro-resolving_mediators
Specialty drugs or specialty pharmaceuticals are a recent designation of pharmaceuticals [ 1 ] [ 2 ] classified as high-cost, [ 3 ] [ 4 ] [ 5 ] high complexity and/or high touch . [ 4 ] Specialty drugs are often biologics [ 3 ] [ 6 ] —"drugs derived from living cells" [ 7 ] that are injectable or infused (although some are oral medications). [ 4 ] They are used to treat complex or rare chronic conditions such as cancer , rheumatoid arthritis , hemophilia , H.I.V. [ 5 ] psoriasis , [ 3 ] inflammatory bowel disease [ 3 ] and hepatitis C . [ 4 ] [ 8 ] In 1990 there were 10 specialty drugs on the market, [ 9 ] around five years later nearly 30, [ 10 ] by 2008 200, [ 10 ] and by 2015 300. [ 11 ] Drugs can be defined as specialty because of their high price. [ 3 ] [ 4 ] [ 5 ] Medicare defines any drug with a negotiated price of $670 per month or more as a specialty drug. These drugs are placed in a specialty tier requiring a higher patient cost sharing. [ 11 ] [ 12 ] Drugs are also identified as specialty when there is a special handling requirement [ 3 ] or the drug is only available via a limited distributions network. [ 3 ] By 2015 "specialty medications accounted for one-third of all spending on drugs in the United States, up from 19 percent in 2004 and heading toward 50 percent in the next 10 years", [ 5 ] according to IMS Health . According to a 2010 article in Forbes , specialty drugs for rare diseases became more expensive "than anyone imagined" and their success came "at a time when the traditional drug business of selling medicines to the masses" was "in decline". [ 13 ] In 2015 analysis by The Wall Street Journal suggested the large premium was due to the perceived value of rare disease treatments which usually are very expensive when compared to treatments for more common diseases. [ 14 ] Medications must be either identified as high cost, high complexity or high touch to be classified as a specialty medication by Magellan Rx Management. [ 4 ] Specialty pharmaceuticals are defined as "high-cost oral or injectable medications used to treat complex chronic conditions". [ 4 ] According to a 2013 article in the Journal of Managed Care & Specialty Pharmacy , on the increasingly important role of specialty drugs in the treatment of chronic conditions and their cost, drugs are most typically defined as specialty because they are expensive. [ 3 ] Other criteria used to define a drug as specialty include "biologic drugs, the need to inject or infuse the drug, the requirement for special handling, or drug availability only via a limited distribution network". [ 3 ] The price of specialty drugs compared to non-specialty drugs is very high, "more than $1,000 per 30-day supply". [ 4 ] [ 5 ] Specialty drugs cover over forty therapeutic categories and special disease states with over 500 drugs. [ 4 ] Vogenberg claims that there is no standard definition of a specialty drug which is one of the reasons they are difficult to manage. [ 6 ] "[T]hose pharmaceuticals that usually require special handling, administration, unique inventory management, and a high level of patient monitoring and support to consumers with specific chronic conditions, acute events, or complex therapies, and provides comprehensive patient education services and coordination with the patient and prescriber." [ 15 ] Drugs are most typically defined as specialty because they are expensive. [ 3 ] They are high cost "both in total and on a per-patient basis". 
[ 16 ] High-cost medications are typically priced at more than $1,000 per 30-day supply. [ 4 ] [ 5 ] The Medicare Part D program "defines a specialty drug as one that costs more than $600 per month". [ 16 ] [ 17 ] Most of the prescriptions filled by Pennsylvania-licensed Philidor Rx Services, [ 18 ] [ 19 ] [ 20 ] a specialty online mail-order pharmacy that mainly sold expensive Valeant Pharmaceuticals International Inc. drugs such as Solodyn , Jublia , [ 14 ] [ 23 ] [ 24 ] and Tretinoin directly to patients and handled insurance claims on the customers' behalf, [ 21 ] [ 22 ] would be considered specialty drugs. [ 25 ] Specialty drugs are more complex to manufacture. [ 16 ] They are "highly complex medications, typically biology-based, that structurally mimic compounds found within the body". [ 4 ] Specialty drugs are often biologics [ 3 ] [ 6 ] ("drugs derived from living cells"), but biologics are "not always deemed to be specialty drugs". [ 7 ] Biologics "may be produced by biotechnology methods and other cutting-edge technologies. Gene-based and cellular biologics, for example, often are at the forefront of biomedical research, and may be used to treat a variety of medical conditions for which no other treatments are available." [ 26 ] According to the FDA, "In contrast to most drugs that are chemically synthesized and their structure is known, most biologics are complex mixtures that are not easily identified or characterized. Biological products, including those manufactured by biotechnology, tend to be heat sensitive and susceptible to microbial contamination. Therefore, it is necessary to use aseptic principles from initial manufacturing steps, which is also in contrast to most conventional drugs. Biological products often represent the cutting-edge of biomedical research and, in time, may offer the most effective means to treat a variety of medical illnesses and conditions that presently have no other treatments available." The FDA further states that [ 26 ] "Biological products include a wide range of products such as vaccines, blood and blood components, allergenics, somatic cells, gene therapy, tissues, and recombinant therapeutic proteins. Biologics can be composed of sugars, proteins, or nucleic acids or complex combinations of these substances, or may be living entities such as cells and tissues. Biologics are isolated from a variety of natural sources—human, animal, or microorganism..." Some specialty drugs can be oral medications or self-administered injectables. Others may be professionally administered injectables or infusions. [ 4 ] High-touch patient care management is usually required to control side effects and ensure compliance. Specialized handling and distribution are also necessary to ensure appropriate medication administration. [ 4 ] Specialty drug patient care management is meant to be both high-technology and high-touch care, or patient-centered care with "more face-to-face time, more personal connections". Patient-centered care is defined by the Institute of Medicine as "care that is respectful of and responsive to individual patient preferences, needs and values". [ 27 ] Specialty drugs may be "difficult for patients to take without ongoing clinical support". [ 16 ] Specialty drugs might have special requirements for handling procedures and administration, including the necessity of controlled environments such as highly specific temperature controls to ensure product integrity.
[ 16 ] They are often only available via a limited distribution network such as a specialty pharmacy. [ 3 ] Specialty drugs may be "challenging for providers to manage". [ 16 ] Specialty drugs may be taken "by relatively small patient populations presenting with complex medical conditions". [ 16 ] "Specialty pharmacies have their roots in the 1970s, when they began delivering temperature-controlled drugs to treat cancer, HIV, infertility and hemophilia." [ 28 ] "The business grew as more drugs became available for patients to inject themselves and as insurers sought to manage expenses for patients with chronic conditions, according to a report from IMS Health. Manufacturers have increasingly relied on these pharmacies when it comes to fragile medicines that need special handling or have potentially dangerous side effects that require them to be taken under a management program." According to The American Journal of Managed Care , in 1990 there were 10 specialty drugs on the market. [ 9 ] According to the National Center for Biotechnology Information , by the mid-1990s there were fewer than 30 specialty drugs on the market, but by 2008 that number had increased to 200. [ 10 ] Specialty drugs may also be designated as orphan drugs or ultra-orphan drugs under the U.S. Orphan Drug Act of 1983 . This act was enacted to facilitate the development of orphan drugs, that is, drugs for rare diseases such as Huntington's disease , myoclonus , amyotrophic lateral sclerosis , Tourette syndrome and muscular dystrophy, which affect small numbers of individuals residing in the United States. [ 29 ] Not all specialty drugs are orphan drugs. According to Thomson Reuters in their 2012 publication "The Economic Power of Orphan Drugs", there has been increased investment in orphan drug research and development partly since the U.S. Congress enacted the Orphan Drug Act, giving an extra monopoly for drugs for "orphan diseases" that affected fewer than 200,000 people in the country. [ 13 ] Similar acts came into existence in other regions of the world, many driven by "high-profile philanthropic funding". [ 30 ] [ 31 ] According to a 2010 article in Forbes , prior to 1983 drug companies largely ignored rare diseases and focused on drugs that affected millions of patients. [ 13 ] The term specialty drugs was used as early as 1988 in a New York Times article about Eastman Kodak Company 's acquisition of the New York-based Sterling Drug Inc. , maker of specialty drugs along with many other diverse products. [ 2 ] When Shire Pharmaceuticals acquired BioChem Pharma in 2000, they created a specialty pharmaceuticals company. [ 32 ] By 2001 Shire was one of the fastest-growing specialty pharmaceutical companies in the world. [ 33 ] By 2001 CVS 's specialty pharmacy ProCare was the "largest integrated retail/mail provider of specialty pharmacy services" in the United States. [ 34 ] : 10 It was consolidated with their pharmacy benefit management company PharmaCare in 2002. In their 2001 annual report, CVS anticipated that the "$16 billion specialty pharmacy market" would grow at "an even faster rate than traditional pharmacy due in large part to the robust pipeline of biotechnology drugs". [ 34 ] By 2014 CVS Caremark, Express Scripts and Walgreens represented more than 50% of the specialty drug market in the United States. [ 35 ] : 4 When an increasing number of oral oncology agents first entered the market between 2000 and 2010, most cancer care was provided in community oncology practices.
By 2008 many other drugs had been developed to treat cancer, and drug development had grown into a multibillion-dollar industry. [ 36 ] In 2003 the Medicare Prescription Drug, Improvement, and Modernization Act [ 37 ] was enacted. [ 38 ] The largest overhaul of Medicare in the public health program's 38-year history, it included Medicare Part D , an entitlement benefit for prescription drugs funded through tax breaks and subsidies. In 2004 the U.S. Centers for Medicare and Medicaid Services (CMS) prepared a report on final guidance regarding access to the drug coverage enacted under the act, in which they included a specialty drug tier in the prescription drug formulary. [ 1 ] At that time CMS guidelines included four tiers: tier 1 included preferred generics, tier 2 included preferred brands, tier 3 included non-preferred brands and generics, and tier 4 included specialty drugs. [ 1 ] By January 1, 2006, the controversial Medicare Part D was put into effect. It was a massive expansion of the federal government's provision of prescription drug coverage to previously uninsured Americans, particularly seniors. [ 39 ] : 69 In 2006 in the United States there was no standard nomenclature, so sellers could call a plan anything they wanted and cover whatever drugs they wanted. [ 40 ] By 2008 most prescription medication plans in the United States used specialty drug tiers, and some had a separate benefit tier for injectable drugs. Beneficiary cost sharing was higher for drugs in these tiers. [ 41 ] By 2011 in the United States a growing number of Medicare Part D health insurance plans (which normally include generic, preferred, and non-preferred tiers with an accompanying rate of cost-sharing or co-payment) had added an "additional tier for high-cost drugs which is referred to as a specialty tier". [ 42 ] : 1 By 2014 in the United States, in the new Health Insurance Marketplace following the implementation of the U.S. Affordable Care Act (also known as Obamacare), [ 43 ] most health plans had a four- or five-tier prescription drug formulary with specialty drugs in the highest of the tiers. [ 44 ] According to an AARP 2015 report, "All but 4 of the 46 therapeutic categories of specialty drug products had average annual retail price increases that exceeded the rate of general inflation in 2013. Price increases by therapeutic category ranged from 1.7 percent to 77.2 percent." [ 45 ] On September 27, 2007 President George W. Bush signed the Food and Drug Administration Amendments Act of 2007 (FDAAA), which authorized the FDA to require Risk Evaluation and Mitigation Strategies (REMS) on medications where necessary to minimize the risks associated with some drugs. Such medications were designated as specialty drugs and required specialty pharmacies. When the FDA approves a new drug, it may require a REMS program, which "may contain any combination of 5 criteria: Medication Guide, Communication Plan, Elements to Assure Safe Use, Implementation System, and Timetable for Submission of Assessments". [ 46 ] "In 2010, 48% of all new molecular entities, and 60% of all new specialty drug approvals, required a REMS program." Risk-reduction mechanisms can include the "use of specialized distribution partners" such as specialty pharmacies. [ 47 ] In 2013 the FDA introduced the breakthrough therapy designation program, which cut the development process of new therapies by several years.
This meant that the FDA could "introduce important medicines to the market based on very promising phase 2 rather than phase 3 clinical trial results". Shortly after the law was enacted, Ivacaftor , in January 2013, became the first drug to receive the breakthrough therapy designation. [ 48 ] On February 3, 2015 New York-based Pfizer 's drug Ibrance was approved through the FDA's Breakthrough Therapy designation program as a treatment for advanced breast cancer. [ 49 ] It can only be ordered through specialty pharmacies and sells for "$9,850 for a month or $118,200 per year". [ 50 ] According to a statement by the New York-based Pfizer the price "is not the cost that most patients or payors pay" since most prescriptions are dispensed through health plans, which negotiate discounts for medicines or get government-mandated price concessions. [ 50 ] According to Express Scripts, [ 51 ] "[T]he pharmacy landscape [in the United States] underwent a seismic change, and the budgetary impact to healthcare payers was significant. U.S. prescription drug spend increased 13.1% in 2014 – the largest annual increase since 2003 – and this was largely driven by an unprecedented 30.9% increase in spending on specialty medications. Utilization of traditional medications stayed flat (-0.1%), while the use of specialty drugs increased 5.8%. The largest factors contributing to the increased spending, however, were the price increases for these medication categories – 6.5% for traditional and 25.2% for specialty. While specialty medications represent only 1% of all U.S. prescriptions, these medications represented 31.8% of all 2014 drug spend – an increase from 27.7% in 2013." By 2015 "specialty medications account for one-third of all spending on drugs in the United States, up from 19 percent in 2004 and heading toward 50 percent in the next 10 years, according to IMS Health, which tracks prescriptions". [ 5 ] The specialty pharmacy business had $20 billion in sales in 2005. By 2014 it had grown to "$78 billion in sales". [ 5 ] In Canada by 2013 "specialty drugs made up less than 1.3 percent of all Canadian prescriptions, but accounted for 24 percent of Canada's total spending on prescription drugs". [ 52 ] When Randy Vogenberg of the Institute for Integrated Healthcare in Massachusetts and a co-leader of the Midwest Business Group initiative, began investigating specialty drugs in 2003, it "wasn't showing up on the radar". By 2009 specialty drugs had started doubling in cost and payers such as employers began to question. [ 6 ] Vogenberg observed that by 2014 health care reform had changed the landscape for specialty drugs. There is a shift away from a marketplace based on a predominately clinical perspective, to one that puts economics first and clinical second. [ 53 ] : 15 Many factors contribute to the continuing increase in price of specialty drugs. Development of specialty drugs not only costs more, but they also take longer to develop than other large market pharmaceuticals [ 54 ] (See Drug development ). In addition, there are often fewer drug choices for rare or hard-to-treat diseases. [ 55 ] This results in less competition in the marketplace for these drugs due to patent protection, which allows these firms to act as monopolists (See Drug Price Competition and Patent Term Restoration Act ). Due to this lack of competition, policies that serve to limit prices in other markets can be ineffective or even counter-productive when applied to specialty drugs. 
[ 56 ] High prices for specialty drugs are a problem for both patients and payers. Patients frequently have difficulty paying for these medications, which can lead to lack of access to treatment. [ 57 ] Specialty drugs are now so expensive that they are leading to increases in insurance premiums. [ 58 ] Control of specialty drug prices will require research to identify effective policy options, which may include decreasing regulation, limiting patent protection, allowing negotiation of drug prices by Medicare, or pricing drugs based on their effectiveness. [ 59 ] In the United States, private insurance payers will favor a lower-cost agent, preferring generics and biosimilars to more expensive specialty drugs, if there is no peer-reviewed or evidence-based justification for the latter. [ 53 ] According to a 2012 report by Sun Life Financial, the average cost of specialty drug claims was $10,753 versus $185 for non-specialty drugs, and the cost of specialty drugs continues to rise. With such steep prices, by 2012 specialty drugs represented 15-20% of prescription drug reimbursement claims. [ 60 ] Patient advocacy groups that lobby for payment for specialty drugs include the Alliance for Patient Access (AfPA), formed in 2006, which according to a 2014 article in the Wall Street Journal "represents physicians and is largely funded by the pharmaceutical industry. The contributors mostly include brand-name drug makers and biotechs, but some—such as Pfizer and Amgen—are also developing biosimilars." [ 61 ] In 2013 AfPA director David Charles published an article on specialty drugs in which he agreed with the findings of the Congressional Budget Office that spending on prescription medications "saves costs in other areas of healthcare spending". [ 62 ] He observed that specialty drugs are so high priced that many patients do not fill prescriptions, resulting in an increase in more serious health problems. His article referred to specialty drugs such as "new cancer drugs specially formulated for patients with specific genetic markers". [ 62 ] He attributed the high cost of these drugs to their being "individualized medications based on diagnostic testing" and "biologics", or medicines created through biologic processes rather than chemically synthesized like most pharmaceuticals. [ 62 ] He argued that there should be a slight increase in co-pays for the more commonly used lower-tier medications to allow a lower co-pay for those who "require high-cost specialty tier medications". [ 62 ] According to the 2014 Express Scripts Drug Trend Report, [ 51 ] the most significant increase in prescription drug spending in the United States in 2014 was due to "increased inflation and utilization of hepatitis C and compounded medications". [ 51 ] "Excluding those two therapy classes, overall drug spend would have increased only 6.4%." [ 51 ] The "top three specialty therapy classes—inflammatory conditions, multiple sclerosis and oncology—contributed 55.9% of the spend for all specialty medications billed through the pharmacy benefit in 2014". The U.S. spent 742.6% more on hepatitis C medications in 2014 than it did in 2013; this therapy class was not among the top 10 specialty classes in 2013. [ 51 ] As the market demanded specialization in drug distribution and clinical management of complex therapies, specialized pharma (SP) evolved. [ 63 ] By 2001 CVS 's specialty pharmacy ProCare was the "largest integrated retail/mail provider of specialty pharmacy services" in the United States.
[ 34 ] : 10 It was consolidated with their pharmacy benefit management company, PharmaCare, in 2002. In their 2001 annual report, CVS anticipated that the "$16 billion specialty pharmacy market" would grow at "an even faster rate than traditional pharmacy due in large part to the robust pipeline of biotechnology drugs". [ 34 ] By 2014 CVS Caremark, Express Scripts and Walgreens represented more than 50% of the specialty drug market in the United States. [ 35 ] : 4 The specialty pharmacy business had $20 billion in sales in 2005. By 2014 it had grown to "$78 billion in sales". [ 5 ] Specialty pharmacies came into existence as a result of unmet needs. According to the National Comprehensive Cancer Network, the "primary goals of specialty pharmacies are to ensure the appropriate use of medications, maximize drug adherence, enhance patient satisfaction through direct interaction with healthcare professionals, minimize cost impact, and optimize pharmaceutical care outcomes and delivery of information". [ 64 ] McKesson Specialty Care Solutions, a division of McKesson Corporation, is "one of the largest distributors of specialty drugs, biologics and rheumatology drugs to community-based specialty practices". It is "a leader in the development, implementation and management of FDA-mandated Risk Evaluation and Mitigation Strategies (REMS) for manufacturers". [ 65 ] For example, ProStrakan Group plc, an international pharmaceutical company based in the UK, works with McKesson Specialty Care Solutions to administer its FDA-approved Risk Evaluation and Mitigation Strategy (REMS) program for Abstral . [ 65 ] URAC's Specialty Pharmacy Accreditation "provides an external validation of excellence in Specialty Pharmacy Management and provides Continuous Quality Improvement (CQI) oriented processes that improve operations and enhance compliance". [ 15 ] Specialty pharmaceuticals or biologics are a significant part of the treatment market, yet additional work remains to be done to manage their costs. Defining biologics has been described as a matter of perspective, with variation between chemists, physicians, payers, microbiologists and regulators. [ 66 ] A payer may define a biologic by cost, while a biochemist may look at composition and structure, and a provider at means of delivery or action on the body. [ 66 ] The FDA generally defines biologics as "a wide range of products [that] ...can be composed of sugars, proteins, or nucleic acids or complex combinations of these substances, or may be living entities such as cells and tissues. Biologics are isolated from a variety of natural sources—human, animal, or microorganism—and may be produced by biotechnology methods and other cutting-edge technologies". [ 67 ] Due to the complexity, risk of adverse events and allergic reactions associated with biologics, management is very important for the safety of patients. [ 68 ] Management includes areas from patient education and adherence to the delivery of the medication. These medications often require very specific storage conditions, monitoring of temperature and agitation, and proper reconstitution of the drug. [ 68 ] Because of the high risk of error and adverse events, provider management of delivery is required, especially for injection or infusion of some biologic medications. Such biologics are often coded in a way that ties reimbursement to delivery by a provider, either a specialty pharmacist or a medical care provider with those skills.
[ 69 ] As more biologics are being designed to be self-administered pharmacists are supporting the management of these drugs.  They make calls to remind patients of the need for refills, provide education to patients, monitor patients for adverse events and work with primary care provider offices to monitor the outcomes of the medication. [ 69 ] The high cost of specialty pharmaceuticals is one of their defining characteristics; as such, cost-containment is high on the list of all the players in the arena.  For physician-administered biologics, cost-containment is often handled by volume purchasing of biologic drugs for discounted pricing, formularies, step therapy to attempt other treatment before beginning biologics and administrative fees by insurers to keep physicians from artificially inflating requested reimbursement from insurance companies. [ 70 ] [ 69 ] [ 68 ] Cost-containment for self-administered biologics tends to occur via requiring authorization to be prescribed those drugs and benefit design, such as coinsurance for cost-sharing. [ 70 ] [ 69 ] The 21st Century Cures Act which addressed fast-tracking approval of specialty pharmaceuticals was particularly beneficial for dealing with the development of 2nd run biologics (which might be more easily understood as "generic biologics", though they do not exist). [ 71 ] Debate around the act raised some important questions about the efficacy of biologics and their continued high costs. Some call for insurers to pay only the cost of production to manufacturers until the benefit of these biologics can be proven long-term, stating that insurers should not bear the full cost of products that may be unreliable or have only limited efficacy. [ 71 ] Achieving this would require conducting studies that assess value, such as comparative effectiveness studies and using those studies to determine pricing. Comparative effectiveness would examine all aspects of the use of biologics, from outcomes such as clinical benefits and potential harms, to efficiency of administration, public health benefits and patient productivity after treatment. [ 72 ] This is a new direction in managing the high costs of specialty pharmaceuticals and not without challenges.  One of the barriers is strict regulation by the Food and Drug Administration of what pharmaceutical manufacturers may communicate to the public, limiting that communication to formulary committees for managed care, for example. [ 73 ] Additionally, studies tend to be constructed using observational design, instead of as randomized controlled trials, limiting their usefulness for real-world application. [ 73 ] Difficulties experienced with patient adherence to specialty pharmaceuticals also limit the availability of real-world outcomes data for biologics. [ 71 ] [ 69 ] In 2016, real world data evaluating the efficacy of biologics was only publicly available for multiple myeloma through ICER (where biologics were found to be overpriced for their outcomes) [ 72 ] and for hepatitis C treatment (which achieved high cure rates—90%—for patients co-infected with HIV and Hep C) through Curant Health. [ 71 ] These studies show how useful value-based pricing may become for cost-containment in the field.  The good news is that there are effectiveness studies on biologics currently underway aiming to provide more of this data. 
[ 71 ] Biologics or biological products for human use are regulated by the Center for Biologics Evaluation and Research (CBER), overseen by the Office of Medical Products and Tobacco , within the U.S. Food and Drug Administration which includes the Public Health Service Act and the Federal Food, Drug and Cosmetic Act . [ 74 ] "CBER protects and advances the public health by ensuring that biological products are safe and effective and available to those who need them. CBER also provides the public with information to promote the safe and appropriate use of biological products." There are multiple players in specialty drugs including the employer, the health plan, the pharmacy benefits manager and it is unclear who should be in charge of controlling costs and monitoring care. [ 6 ] Pharmacies generally buy a product from a wholesaler and sell (Buy & Bill) it to the patient and provide basic drug use information and counseling. According to Maria Hardin, vice president of patient services for the National Organization for Rare Disorders , an alliance of voluntary health and patient advocacy groups working with rare diseases, "As the cost of drugs increases, management of the financial side has gotten more complex... The issues range from Medicare Part D to tiered benefits, prior authorizations, and no benefits. These patients need a pharmacy with the expertise and the clout to go to bat for them. If the patient doesn't get treated, the specialty pharmacy doesn't get paid." [ 75 ] Alexion Pharmaceuticals was one of the pioneers in the use of a business model of developing drugs to combat rare diseases. "Knowing the value of specialty drugs as well as its own stock is Alexion's business." [ 76 ] Since other big pharmaceutical companies had tended to ignore these markets, Alexion had minimal competition at first. Insurance companies have generally been willing to pay high prices for such drugs; since few of their customers need the drugs, a high price does not significantly impact the insurance companies outlays. [ 76 ] Alexion is thus seeking a stronger position in the lucrative rare disease market, and is willing to pay a premium to obtain that position. [ 14 ] [ 76 ] The rare disease market is seen as desirable because insurers have minimal motive to deny claims (due to small population sizes of patients) and are unable to negotiate better drug prices due to lack of competition. of May 2015, Alexion is currently seeking approval of its second drug, Strensiq. It will be used to treat hypophosphatasia, a rare metabolic disorder. In 2015 Alexion estimated that Synageva, its specialty drug for lysosomal acid lipase deficiency , a fatal genetic disorder, could eventually have annual sales of more than $1 billion. [ 14 ] [ 77 ] Companies like Magellan RX Management provide a "single source for high-touch patient care management to control side effects, patient support and education to ensure compliance or continued treatment, and specialized handling and distribution of medications directly to the patient or care provider. Specialty medications may be covered under either the medical or pharmacy benefit." [ 4 ] According to an article published in 2014 in the journal Pharmacoeconomics , [ 78 ] "[s]pecialty pharmacies combine medication dispensing with clinical disease management. Their services have been used to improve patient outcomes and contain costs of specialty pharmaceuticals. 
These may be part of independent pharmacy businesses, retail pharmacy chains, wholesalers, pharmacy benefit managers, or health insurance companies. Over the last several years, payers have been transitioning to obligate beneficiaries to receive self-administered agents from contracted specialty pharmacies, limiting the choice of acceptable specialty pharmacy providers (SPPs) for patient services." [ 78 ] Managed care organizations contract with specialty pharmacy vendors. "Managed care organizations (MCOs) are using varied strategies to manage utilization and costs. For example, 58% of 109 MCOs surveyed implement prior authorizations for MS specialty therapies." [ 78 ] The Academy of Managed Care Pharmacy (AMCP) designates a product as a specialty drug if "[i]t requires a difficult or unusual process of delivery to the patient (preparation, handling, storage, inventory, distribution, Risk Evaluation and Mitigation Strategy (REMS) programs, data collection, or administration) or, Patient management prior to or following administration (monitoring, disease or therapeutic support systems)". [ 79 ] Health plans consider "high cost" (on average a minimum monthly cost of US$1,200) to be a determining factor in identifying a specialty drug. [ 79 ] Tom Westrich, of St. Louis, Missouri-based Centric Health Resources, a specialty pharmacy, described how their specialty drugs treat ultra-orphan diseases with a total patient population of 20,000 nationwide. [ 75 ] The top ten specialty pharmacies in 2014 were CVS Specialty's parent company CVS Health with $20.5B in sales, Express Scripts 's Accredo at $15B, Walgreens Boots Alliance 's Walgreens Specialty at $8.5B, UnitedHealth Group 's OptumRx at $2.4B, Diplomat Pharmacy at $2.1B, Catamaran 's BriovaRx at $2.0B, Prime Therapeutics 's specialty pharmacy at $1.8B, Omnicare 's Advanced Care Scripts at $1.3B, Humana 's RightsourceRx at $1.2B, and Avella at $0.8B. All other specialty pharmacies accounted for $22.4B of sales in 2014, for a total of $78B. [ 80 ] In 2010 the United States enacted a new health law which had unintended consequences. Because of the 2010 law, drug companies like Genentech informed children's hospitals that they would no longer get discounts for certain cancer medicines such as the orphan drugs Avastin , Herceptin , Rituxan , Tarceva , or Activase . This cost hospitals millions of dollars. [ 81 ] There is a debate about whether specialty drugs should be managed as a medical benefit or a pharmaceutical benefit. Infused or injected medications are usually covered under the medical benefit, and oral medications are covered under the pharmacy benefit. Self-injected medications may be either. [ 82 ] "Many biologics, such as chemotherapy drugs, are administered in a doctor's office and require extensive monitoring, further driving up costs." [ 6 ] Chemotherapy is usually delivered intravenously , although a number of agents can be administered orally (e.g. the specialty drugs melphalan (trade name Alkeran), busulfan , and capecitabine ). Delcath Systems, Inc. (NASDAQ: DCTH), a specialty pharmaceutical and medical device company, manufactures melphalan. [ 83 ] By 2011 oral medications for cancer patients represented approximately 35% of cancer medications. Prior to the increase in oral cancer drugs, community cancer centers were used to managing office-administered chemotherapy treatments.
At that time "the majority of community oncology practices were unfamiliar with the process of prescribing and obtaining drugs that are covered under the pharmacy benefit" and "conventional retail pharmacy chains were ill-prepared to stock oral oncology agents, and were not set up to deliver the counseling that often accompanies these medications". [ 84 ] According to IMS Health, "Specialty pharmaceutical spending is on the rise and is expected to increase from approximately $55 billion in 2005 to $1.7 trillion in 2030, according to the Pharmaceutical Care Management Association. That reflects an increase from 24% of total drug spend in 2005 to an estimated 44% of a health plan's total drug expenditure in 2030." [ 35 ] While CVS, Accredo, and Walgreens led the specialty pharmacy (SP) market in revenue in 2014, there are constant changes through mergers and acquisitions in terms of SPs and specialty distributors (SDs). [ 85 ] The SP/SD network shares common strengths, such as "in-depth clinical management, coordinated/comprehensive care, and early limited distribution network success", and common weaknesses, such as "lack of ability to customize services, poor integration experience and outcomes, and strained pharma relations". [ 85 ] BioScrip was acquired by Walgreens in 2012. [ 86 ] Specialty companies like Genzyme and MedImmune were acquired and are transitioning to a new business model. [ 53 ] : 12 According to Nicolas Basta, by 2013 there was "a spate of new entities" called hub services, "mechanisms by which manufacturers can keep a grip on the marketplace" in specialty pharma. The "biggest and oldest of these organizations" are "offshoots of insurance companies or [Pharmacy benefit managers] PBMs, such as Express Scripts' combination of Accredo and CuraScript (both specialty pharmacies) and HealthBridge (physician and patient support). UnitedHealth, an insurance company, operates OptumRx, a PBM, which has a specialty unit within it. Cigna has Tel-Drug, a mail-order pharmacy and support system." [ 87 ] Basta described how hubs have been around since about 2002, "starting out as 'reimbursement hubs'", usually provided as a service by manufacturers to help patients and providers navigate the process of obtaining permission to use, and reimbursement for, expensive specialty therapies. Industry observers look to pioneering efforts by Genentech and Genzyme under the tenure of Henri Termeer , "when some of their earliest biotech products entered the marketplace". [ 87 ] Specialty hubs provide reimbursement support to physicians and patients as well as patient education, including medical hotlines. There is also voluntary program enrollment and registry intake with Patient Assistance Program management. According to a 2007 study by employees of Express Scripts or its wholly owned subsidiary CuraScript on specialty pharmacy costs, if payers manage cost control through copayments with patients, there is an increased risk that patients will forgo essential but expensive specialty drugs, [ 88 ] : 6 and that health outcomes will be compromised. [ 39 ] : 69 In 2007 these researchers suggested the adoption of formularies and other traditional drug-management tools. They also recommended specialty drug utilization management programs that guide treatment plans and improve outpatient compliance. [ 88 ] : 88 By 2010 Alexion Pharmaceuticals 's Soliris was considered to be the most expensive drug in the world. [ 13 ] In a 2012 article in the New York Times, journalist Andrew Pollack described how Don M.
Bailey, a mechanical engineer by training who became interim president of Questcor Pharmaceuticals, Inc. (Questcor) in May 2007, initiated a new pricing model for Acthar in August 2007, [ 89 ] when it was classified by the FDA as an orphan drug and a specialty drug to treat infantile spasms. [ 90 ] Questcor, a biopharmaceutical company, focuses on the treatment of patients with "serious, difficult-to-treat autoimmune and inflammatory disorders". Its primary product is FDA-approved Acthar, an injectable drug that is used for the treatment of 19 indications. [ 91 ] At the same time, Questcor created "an expanded safety net for patients using Acthar", provided a "group of Medical Science Liaisons to work with health care providers who are administering Acthar", and limited distribution to its sole specialty distributor, Curascript . The 2007 pricing model brought "Acthar in line with the cost of treatments for other very rare diseases". [ 89 ] The cost for a course of treatment in 2007 was estimated at about "$80,000–$100,000". [ 89 ] Acthar is now manufactured through a contractor on Prince Edward Island , Canada. [ 91 ] The price increased from $40 a vial to $700 and continued to increase. [ 90 ] By 2012 the price of a vial of Acthar was $28,400, [ 90 ] and it was considered to be one of the world's most expensive drugs in 2013. By 2014 the price of Gilead 's specialty drug for hepatitis C, Sovaldi or sofosbuvir , was $84,000 to $168,000 for a course of treatment in the U.S., and £35,000 for 12 weeks in the UK. [ 92 ] Sovaldi is on the World Health Organization's list of the most important medications needed in a basic health system, and its steep price is highly controversial. [ 93 ] [ 94 ] [ 95 ] In 2014 the U.S. spent 742.6% more on hepatitis C medications than it did in 2013. [ 51 ] In September 2015, Martin Shkreli was criticized by several health organizations [ 96 ] for obtaining manufacturing licenses on old, out-of-patent , [ 97 ] life-saving medicines including pyrimethamine (brand name Daraprim ), which is used to treat patients with toxoplasmosis , malaria , some cancers, and AIDS , [ 98 ] and then increasing the price of the drug in the US from $13.50 to $750 per pill, a 5,455% increase. [ 99 ] [ 100 ] In an interview with Bloomberg News , Shkreli claimed that despite the price increase, patient co-pays would be lower, that many patients would get the drug at no cost, that the company had expanded its free drug program, and that it sells half of the drugs for one dollar. [ 101 ] [ 102 ] In 2015 Bloomberg News used the term 'captive pharmacies' to describe alleged exclusive agreements such as those between the specialty mail-order pharmacy Philidor and Valeant , and between mail-order pharmacy Linden Care and Horizon Pharma Plc . In November 2015 Express Scripts Holding Co., the largest U.S. manager of prescription drug benefits, "removed the mail-order pharmacy Linden Care LLC from its network after concluding it dispensed a large portion of its medications from Horizon Pharma Plc and didn't fulfill its contractual agreements". Express Scripts was "evaluating other 'captive pharmacies' that it said are mostly distributing Horizon drugs". In 2015 specialty pharmacies like Philidor drew attention for the lengths they went to in order to fill prescriptions with brand-name drugs and then secure insurance reimbursement.
[ 28 ] According to Pfenex, a clinical-stage biotechnology company, the proposed terms in the Trans-Pacific Partnership, a trade agreement between twelve Pacific Rim countries, meant that all participating countries had to adopt the United States' lengthy drug patent exclusivity protection period of 12 years for biologics and specialty drugs. [ 103 ] In 1981 an episode of the television series Quincy, M.E. , starring Jack Klugman as Quincy and entitled "Seldom Silent, Never Heard", brought the plight of children with orphan diseases to public attention. In the episode, Jeffrey, a young boy with Tourette syndrome , died after falling from a building. Dr. Arthur Ciotti ( Michael Constantine ), a medical doctor who had been researching Tourette syndrome for years, wanted to study Jeffrey's brain to discover the cause of and cure for the rare disease. He explained to Quincy that drug companies, like the one where he worked, were not interested in doing the research because so few people were afflicted with such diseases that it was not financially viable. [ 104 ] In a 1982 episode, "Give Me Your Weak", Klugman as Quincy testified before Congress in an effort to get the Orphan Drug Act passed. He was moved by the dilemma of a young mother with myoclonus . [ 13 ] [ 105 ] [ 106 ] [ 107 ] [ 108 ]
https://en.wikipedia.org/wiki/Specialty_drugs_in_the_United_States
In the domain of systems engineering , Specialty Engineering is defined as the engineering disciplines that are not typical of the main engineering effort. More common engineering efforts in systems engineering, such as hardware , software , and human factors engineering, may be used as major elements in a majority of systems engineering efforts and therefore are not viewed as "special". Examples of specialty engineering include electromagnetic interference , safety, and physical security. [ 1 ] Less common engineering domains such as electromagnetic interference, electrical grounding , safety, security, electrical power filtering/uninterruptible supply, manufacturability, and environmental engineering may be included in systems engineering efforts where they have been identified to address special system implementations. These less common but just as important engineering efforts are then viewed as "specialty engineering". However, if the specific system has a standard implementation of, for example, environmental or security engineering, the situation is reversed, and human factors engineering or hardware/software engineering may be the "specialty engineering" domain. The key takeaway is that the context of the systems engineering project and its unique needs are fundamental when determining what the specialty engineering efforts are. The benefit of citing "specialty engineering" in planning is that it notifies all team levels that special management and science factors may need to be accounted for and may influence the project. Specialty engineering may also be cited by commercial entities and others to specify their unique abilities. Eisner, Howard (2002). Essentials of Project and Systems Engineering Management. Wiley. p. 217.
https://en.wikipedia.org/wiki/Specialty_engineering
Specialty pharmacy refers to distribution channels designed to handle specialty drugs : pharmaceutical therapies that are high cost, [ 1 ] [ 2 ] [ 3 ] high complexity [ 3 ] or high touch . [ 2 ] High touch refers to a higher degree of complexity in terms of distribution, administration, or patient management, which drives up the cost of the drugs. In the early years specialty pharmacy providers attached "high-touch services to their overall price tags", arguing that patients who receive specialty pharmaceuticals "need high levels of ancillary and follow-up care to ensure that the drug spend is not wasted on them." [ 4 ] An example of a specialty drug that would only be available through specialty pharmacy is interferon beta-1a ( Avonex ), a treatment for MS that requires a refrigerated chain of distribution and costs $17,000 a year. [ 1 ] Some specialty pharmacies deal in pharmaceuticals that treat complex or rare chronic conditions such as [ 5 ] cancer , rheumatoid arthritis , hemophilia , H.I.V., [ 3 ] psoriasis , [ 1 ] inflammatory bowel disease (IBD) [ 1 ] or Hepatitis C . [ 2 ] [ 6 ] "Specialty pharmacies are seen as a reliable distribution channel for expensive drugs, offering patients convenience and lower costs while maximizing insurance reimbursements from those companies that cover the drug. Patients typically pay the same co-payments whether or not their insurers cover the drug." [ 7 ] As the market demanded specialization in drug distribution and clinical management of complex therapies, specialized pharma (SP) evolved. [ 2 ] [ 8 ] Specialty pharmacies may handle therapies that are biologics, [ 1 ] [ 9 ] [ 10 ] and that are injectable or infused (although some are oral medications). [ 2 ] By 2008 pharmacy benefit managers dominated the specialty pharmacy market, having acquired smaller specialty pharmacies. PBMs administer specialty pharmacies in their network and can "negotiate better prices and frequently offer a complete menu of specialty pharmaceuticals and related services to serve as an attractive 'one-stop shop' for health plans and employers." [ 4 ] In the mid-1990s, there were fewer than 30 specialty drugs on the market, but by 2008 that number had increased to 200. [ 4 ] Specialty pharmacies initially served a limited number of people with a small number of high-cost, low-volume, and high-maintenance conditions, such as hemophilia and Gaucher's disease . [ 4 ] In 1991 the FDA approved the first version of Genzyme 's orphan drug alglucerase , the only treatment for Gaucher's disease . [ 11 ] [ 12 ] : 123 At that time, according to Genzyme CEO Henri Termeer , a pioneer in the biotechnology industry business model, one treatment of Ceredase for one patient took 22,000 placentas annually to manufacture, a difficult and expensive procedure. [ 13 ] A new version of Ceredase, called Cerezyme ( imiglucerase ), which Genzyme produced in 1994 using genetically modified cells in vitro, was cheaper and easier to produce and was approved in several countries. [ 13 ] [ 14 ] [ 15 ] [ 16 ] In 2005 there were only about 4,500 patients on Cerezyme. [ 13 ] In marketing imiglucerase, Termeer introduced the innovative and successful business strategy that became a model for the biotechnology or life sciences industry in general and specialty pharmacy in particular.
Genzyme's added revenue from profits on highly priced orphan or specialty drugs like imiglucerase, which had no competition, was used to undertake research and development for other drugs and to allow it to fund programs that distribute a small portion of production for free. [ 13 ] In 2005 Termeer created what would eventually become a feature service of specialty pharmacy by hiring 34 people to help patients acquire insurance plans that would cover the cost of their drugs. [ 13 ] By 2005 Cerezyme cost the average patient (including babies) $200,000 a year, and could cost a single adult patient as much as $520,000 a year even though it cost Genzyme less than $52,000 to manufacture; the price was sustained because the alternative for patients is severe debilitation and early death. [ 13 ] In 1992 Stadtlanders Pharmacy, a subsidiary of Bergen Brunswig Corporation, was a grassroots company in Pittsburgh that occupied one floor of a seven-story office building, had only a handful of employees, and sold drugs by mail order to patients with chronic conditions and "higher-than-average prescription prices". In 1992 this included "ancillary to primary therapy to manage side effects, as well as HIV, transplant, and a new growth area, multiple sclerosis (MS)." [ 4 ] By 1995 Stadtlanders added others, including growth hormones . [ 4 ] Compared to retail drugstores that dealt in high volumes of lower-margin drugs, Stadtlanders' successful business model focused on lower volumes of higher-priced drugs, resulting in "healthier revenues." By about 1995 there were fewer than 30 specialty drugs on the market. [ 4 ] By 2000 Stadtlanders "generated annual revenues of $500 million by selling drugs by mail-order to patients with chronic conditions." [ 17 ] In the 1990s specialized pharmacies were mainly mom-and-pop organizations and the specialty pharmacy industry was highly fragmented. [ 17 ] In the 1990s more expensive lifesaving therapies became available, and pharmacies like Stadtlanders began to do more than fill prescriptions. They would fill out the cumbersome insurance paperwork for patients to secure reimbursement, often from Medicare, and coordinate "benefits to eliminate the potentially enormous out-of-pocket costs." They were able to keep these specialty drugs in stock when most retail pharmacies could not. In this way they could intervene for patients who "needed immediate access to therapies to prevent organ rejection" but who "did not have the money for such payments, nor did they have the expertise they needed to complete the forms." These "pharmacies also coordinated referrals from hospital discharge planners and delivered the medication to the patients' homes to allow therapy to begin immediately upon hospital discharge." [ 4 ] "These pharmacies grew through word of mouth; nurses and physicians heard from their patients about the 'special services' provided by these pharmacies and started to refer patients in similar predicaments." In 1999 CVS launched ProCare , a "chain of specialty pharmacies, about 1,500 square feet in size, serving patients with chronic diseases and conditions that require complex and expensive drug regimens." CVS was an early pioneer in online mail-delivery prescription filling, a service it operated through CVS.com, a rebranded website it acquired with the 1999 purchase of Soma.com, the first major online pharmacy.
[ 17 ] By September 2000, when CVS acquired Stadtlanders for $124 million, it had become "one of the largest employers" in Allegheny County, Pennsylvania . [ 4 ] [ 17 ] By 1999, the market for specialty pharmaceuticals was estimated at about $16 billion and was "a particularly fast-growing segment of the drug industry." [ 17 ] CVS became a consolidator in the specialty pharmacy market. [ 17 ] By the end of 2000, CVS's specialty pharmacy business consisted of mail-order operations and 46 CVS ProCare pharmacies located in 17 states and the District of Columbia. Overall, CVS saw its revenues surpass the $20 billion mark for the first time in 2000, while net income reached a record $746 million. [ 17 ] By 2001 CVS' specialty pharmacy ProCare was the "largest integrated retail/mail provider of specialty pharmacy services" in the United States. [ 18 ] : 10 It was consolidated with their pharmacy benefit management company, PharmaCare, in 2002. In their 2001 annual report CVS anticipated that the "$16 billion specialty pharmacy market" would grow at "an even faster rate than traditional pharmacy due in large part to the robust pipeline of biotechnology drugs." [ 18 ] Niche independents like Stadtlanders and smaller specialty pharmacies were acquired by larger corporations as the specialty pharmacy industry's "profitability grew exponentially." [ 4 ] By 2008 the specialty pharmacy marketplace was "dominated primarily by traditional" pharmacy benefit management firms that "merged with previously existing specialty pharmacies, or those that are retail-based or insurer-owned. These organizations typically have the muscle to negotiate better prices and frequently offer a complete menu of specialty pharmaceuticals and related services to serve as an attractive 'one-stop shop' for health plans and employers." [ 4 ] By 2007 specialty costs began to drive the overall pharmacy trend. [ 4 ] According to Express Scripts' 2007 Drug Trend Report, in 2007 there was a 14% increase in the specialty drug trend. There was a 60.4% increase in utilization, a 37.4% increase in costs and a 4.9% increase in new medications. [ 4 ] By 2008 there were more than 200 specialty pharmaceuticals on the market. [ 4 ] By 2014 CVS Caremark, Express Scripts and Walgreens represented more than 50% of the specialty drug market in the United States; [ 19 ] : 4 other specialty providers included Commcare Pharmacy . By 2015 the trend among U.S. pharmacies was to "incorporate specialty pharmacy in their operations." [ 20 ] Specialty pharmacy offers services such as "adherence management, benefits investigation, and patient education", while facing the challenge of space limitations for item stocking. [ 20 ] [ 21 ] Pharmacy benefit managers, working with employers, health plan companies and government programs, have dominated the specialty pharmacy market in the United States since 2008. [ 4 ] According to the American Pharmacists Association (APhA), "Historically, a pharmacy benefit manager (PBM) is a third-party administrator of prescription drug programs. PBMs are primarily responsible for developing and maintaining the formulary, contracting with pharmacies, negotiating discounts and rebates with drug manufacturers, and processing and paying prescription drug claims. For the most part, they work with self-insured companies and government programs striving to maintain or reduce the pharmacy expenditures of the plan while concurrently trying to improve health care outcomes." [ 22 ]
https://en.wikipedia.org/wiki/Specialty_pharmacy
Speciation is the evolutionary process by which populations evolve to become distinct species . The biologist Orator F. Cook coined the term in 1906 for cladogenesis , the splitting of lineages, as opposed to anagenesis , phyletic evolution within lineages. [ 1 ] [ 2 ] [ 3 ] Charles Darwin was the first to describe the role of natural selection in speciation in his 1859 book On the Origin of Species . [ 4 ] He also identified sexual selection as a likely mechanism, but found it problematic. There are four geographic modes of speciation in nature, based on the extent to which speciating populations are isolated from one another: allopatric , peripatric , parapatric , and sympatric . Whether genetic drift is a minor or major contributor to speciation is the subject of much ongoing discussion. [ 5 ] Rapid sympatric speciation can take place through polyploidy , such as by doubling of chromosome number; the result is progeny which are immediately reproductively isolated from the parent population. New species can also be created through hybridization , followed by reproductive isolation, if the hybrid is favoured by natural selection. [ citation needed ] In addressing the origin of species, there are two key issues: the evolutionary mechanisms of speciation, and what accounts for the separateness and individuality of species in nature. Since Charles Darwin's time, efforts to understand the nature of species have primarily focused on the first aspect, and it is now widely agreed that the critical factor behind the origin of new species is reproductive isolation. [ 6 ] In On the Origin of Species (1859), Darwin interpreted biological evolution in terms of natural selection, but was perplexed by the clustering of organisms into species. [ 7 ] Chapter 6 of Darwin's book is entitled "Difficulties of the Theory". In discussing these "difficulties" he noted: "Firstly, why, if species have descended from other species by insensibly fine gradations, do we not everywhere see innumerable transitional forms? Why is not all nature in confusion instead of the species being, as we see them, well defined?" This dilemma can be described as the absence or rarity of transitional varieties in habitat space. [ 8 ] Another dilemma, [ 9 ] related to the first one, is the absence or rarity of transitional varieties in time. Darwin pointed out that by the theory of natural selection "innumerable transitional forms must have existed", and wondered "why do we not find them embedded in countless numbers in the crust of the earth". That clearly defined species actually do exist in nature in both space and time implies that some fundamental feature of natural selection operates to generate and maintain species. [ 7 ] It has been argued that the resolution of Darwin's first dilemma lies in the fact that out-crossing sexual reproduction has an intrinsic cost of rarity. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] The cost of rarity arises as follows. If, on a resource gradient, a large number of separate species evolve, each exquisitely adapted to a very narrow band on that gradient, each species will, of necessity, consist of very few members. Finding a mate under these circumstances may present difficulties when many of the individuals in the neighborhood belong to other species. Under these circumstances, if any species' population size happens, by chance, to increase (at the expense of one or other of its neighboring species, if the environment is saturated), this will immediately make it easier for its members to find sexual partners.
The members of the neighboring species, whose population sizes have decreased, experience greater difficulty in finding mates, and therefore form pairs less frequently than the larger species. This has a snowball effect, with large species growing at the expense of the smaller, rarer species, eventually driving them to extinction . Eventually, only a few species remain, each distinctly different from the other. [ 10 ] [ 11 ] [ 13 ] Rarity not only imposes the risk of failure to find a mate, but it may also incur indirect costs, such as the resources expended or risks taken to seek out a partner at low population densities. [ citation needed ] Rarity brings with it other costs. Rare and unusual features are very seldom advantageous. In most instances, they indicate a ( non-silent ) mutation , which is almost certain to be deleterious. It therefore behooves sexual creatures to avoid mates sporting rare or unusual features ( koinophilia ). [ 16 ] [ 17 ] Sexual populations therefore rapidly shed rare or peripheral phenotypic features, thus canalizing the entire external appearance, as illustrated in the accompanying image of the African pygmy kingfisher , Ispidina picta . This uniformity of all the adult members of a sexual species has stimulated the proliferation of field guides on birds, mammals, reptiles, insects, and many other taxa , in which a species can be described with a single illustration (or two, in the case of sexual dimorphism ). Once a population has become as homogeneous in appearance as is typical of most species (and is illustrated in the photograph of the African pygmy kingfisher), its members will avoid mating with members of other populations that look different from themselves. [ 18 ] Thus, the avoidance of mates displaying rare and unusual phenotypic features inevitably leads to reproductive isolation, one of the hallmarks of speciation. [ 19 ] [ 20 ] [ 21 ] [ 22 ] In the contrasting case of organisms that reproduce asexually , there is no cost of rarity; consequently, there are only benefits to fine-scale adaptation. Thus, asexual organisms very frequently show the continuous variation in form (often in many different directions) that Darwin expected evolution to produce, making their classification into "species" (more correctly, morphospecies ) very difficult. [ 10 ] [ 16 ] [ 17 ] [ 23 ] [ 24 ] [ 25 ] All forms of natural speciation have taken place over the course of evolution ; however, debate persists as to the relative importance of each mechanism in driving biodiversity . [ 26 ] One example of natural speciation is the diversity of the three-spined stickleback , a marine fish that, after the last glacial period , has undergone speciation into new freshwater colonies in isolated lakes and streams. Over an estimated 10,000 generations, the sticklebacks show structural differences that are greater than those seen between different genera of fish including variations in fins, changes in the number or size of their bony plates, variable jaw structure, and color differences. [ 27 ] During allopatric (from the ancient Greek allos , "other" + patrā , "fatherland") speciation, a population splits into two geographically isolated populations (for example, by habitat fragmentation due to geographical change such as mountain formation ). The isolated populations then undergo genotypic or phenotypic divergence as: (a) they become subjected to dissimilar selective pressures; (b) different mutations arise in the two populations. 
When the populations come back into contact, they have evolved such that they are reproductively isolated and are no longer capable of exchanging genes . Island genetics is the term associated with the tendency of small, isolated genetic pools to produce unusual traits. Examples include insular dwarfism and the radical changes among certain famous island chains, for example on Komodo . The Galápagos Islands are particularly famous for their influence on Charles Darwin. During his five weeks there he heard that Galápagos tortoises could be identified by island, and noticed that finches differed from one island to another, but it was only nine months later that he speculated that such facts could show that species were changeable. When he returned to England , his speculation on evolution deepened after experts informed him that these were separate species, not just varieties, and famously that other differing Galápagos birds were all species of finches. Though the finches were less important for Darwin, more recent research has shown the birds now known as Darwin's finches to be a classic case of adaptive evolutionary radiation . [ 28 ] In peripatric speciation, a subform of allopatric speciation, new species are formed in isolated, smaller peripheral populations that are prevented from exchanging genes with the main population. It is related to the concept of a founder effect , since small populations often undergo bottlenecks . Genetic drift is often proposed to play a significant role in peripatric speciation. [ 29 ] [ 30 ] Case studies include Mayr's investigation of bird fauna; [ 31 ] the Australian bird Petroica multicolor ; [ 32 ] and reproductive isolation in populations of Drosophila subject to population bottlenecking. [ citation needed ] In parapatric speciation, there is only partial separation of the zones of two diverging populations afforded by geography; individuals of each species may come in contact or cross habitats from time to time, but reduced fitness of the heterozygote leads to selection for behaviours or mechanisms that prevent their interbreeding . Parapatric speciation is modelled on continuous variation within a "single", connected habitat acting as a source of natural selection rather than the effects of isolation of habitats produced in peripatric and allopatric speciation. [ 33 ] Parapatric speciation may be associated with differential landscape-dependent selection . Even if there is gene flow between two populations, strong differential selection may impede assimilation and different species may eventually develop. [ 34 ] Habitat differences may be more important in the development of reproductive isolation than the isolation time. Caucasian rock lizards Darevskia rudis , D. valentini and D. portschinskii all hybridize with each other in their hybrid zone ; however, hybridization is stronger between D. portschinskii and D. rudis , which separated earlier but live in similar habitats, than between D. valentini and the other two species, which separated later but live in climatically different habitats. [ 35 ] Ecologists refer to [ clarification needed ] parapatric and peripatric speciation in terms of ecological niches . A niche must be available in order for a new species to be successful. Ring species such as Larus gulls have been claimed to illustrate speciation in progress, though the situation may be more complex. [ 36 ] The grass Anthoxanthum odoratum may be starting parapatric speciation in areas of mine contamination. [ 37 ]
Sympatric speciation is the formation of two or more descendant species from a single ancestral species all occupying the same geographic location. Often-cited examples of sympatric speciation are found in insects that become dependent on different host plants in the same area. [ 38 ] [ 39 ] The best known example of sympatric speciation is that of the cichlids of East Africa inhabiting the Rift Valley lakes , particularly Lake Victoria , Lake Malawi and Lake Tanganyika . There are over 800 described species, and according to estimates, there could be well over 1,600 species in the region. Their evolution is cited as an example of both natural and sexual selection . [ 40 ] [ 41 ] A 2008 study suggests that sympatric speciation has occurred in Tennessee cave salamanders . [ 42 ] Sympatric speciation driven by ecological factors may also account for the extraordinary diversity of crustaceans living in the depths of Siberia's Lake Baikal . [ 43 ] Budding speciation has been proposed as a particular form of sympatric speciation, whereby small groups of individuals become progressively more isolated from the ancestral stock by breeding preferentially with one another. This type of speciation would be driven by the conjunction of various advantages of inbreeding, such as the expression of advantageous recessive phenotypes, reducing the recombination load, and reducing the cost of sex. [ 44 ] The hawthorn fly ( Rhagoletis pomonella ), also known as the apple maggot fly, appears to be undergoing sympatric speciation. [ 45 ] Different populations of hawthorn fly feed on different fruits. A distinct population emerged in North America in the 19th century some time after apples , a non-native species, were introduced. This apple-feeding population normally feeds only on apples and not on the historically preferred fruit of hawthorns . The current hawthorn-feeding population does not normally feed on apples. Evidence such as the fact that six out of thirteen allozyme loci are different, that hawthorn flies mature later in the season and take longer to mature than apple flies, and that there is little evidence of interbreeding (researchers have documented a 4–6% hybridization rate) suggests that sympatric speciation is occurring. [ 46 ] Reinforcement, also called the Wallace effect , is the process by which natural selection increases reproductive isolation. [ 19 ] It may occur after two populations of the same species are separated and then come back into contact. If their reproductive isolation was complete, then they will have already developed into two separate incompatible species. If their reproductive isolation is incomplete, then further mating between the populations will produce hybrids, which may or may not be fertile. If the hybrids are infertile, or fertile but less fit than their ancestors, then there will be further reproductive isolation and speciation has essentially occurred, as in horses and donkeys . [ 47 ] One line of reasoning behind this is that if the parents of the hybrid offspring each have naturally selected traits for their own certain environments, the hybrid offspring will bear traits from both, and therefore would not fit either ecological niche as well as either parent (ecological speciation). The low fitness of the hybrids would cause selection to favor assortative mating , which would control hybridization.
This is sometimes called the Wallace effect after the evolutionary biologist Alfred Russel Wallace , who suggested in the late 19th century that it might be an important factor in speciation. [ 48 ] Conversely, if the hybrid offspring are more fit than their ancestors, then the populations will merge back into the same species within the area they are in contact. [ citation needed ] Another important theoretical mechanism is the emergence of intrinsic genetic incompatibilities, addressed in the Bateson-Dobzhansky-Muller model . [ 49 ] Genes from allopatric populations will have different evolutionary backgrounds and are never tested together until hybridization at secondary contact, when negative epistatic interactions will be exposed. In other words, new alleles will emerge in a population and only pass through selection if they work well together with other genes in the same population, but they may not be compatible with genes in an allopatric population, whether those are other newly derived alleles or retained ancestral alleles. This is only revealed through hybridization. [ 49 ] [ 50 ] Such incompatibilities cause lower fitness in hybrids regardless of the ecological environment, and are thus intrinsic, although they can originate from the adaptation to different environments. [ 51 ] The accumulation of such incompatibilities increases faster and faster with time, creating a "snowball" effect. [ 52 ] There is a large amount of evidence supporting this theory, primarily from laboratory populations such as Drosophila and Mus , and some genes involved in incompatibilities have been identified. [ 50 ] Reinforcement favoring reproductive isolation is required for both parapatric and sympatric speciation. Without reinforcement, the geographic area of contact between different forms of the same species, called their "hybrid zone", will not develop into a boundary between the different species. Hybrid zones are regions where diverged populations meet and interbreed. Hybrid offspring are common in these regions, which are usually created by diverged species coming into secondary contact . Without reinforcement, the two species would have uncontrolled interbreeding. [ citation needed ] Reinforcement may be induced in artificial selection experiments as described below. Ecological selection is "the interaction of individuals with their environment during resource acquisition". [ 53 ] Natural selection is inherently involved in the process of speciation, whereby, "under ecological speciation, populations in different environments, or populations exploiting different resources, experience contrasting natural selection pressures on the traits that directly or indirectly bring about the evolution of reproductive isolation". [ 54 ] Evidence for the role ecology plays in the process of speciation exists. Studies of stickleback populations support ecologically-linked speciation arising as a by-product, [ 55 ] alongside numerous studies of parallel speciation, where isolation evolves more readily between independent populations of species adapting to contrasting environments than between independent populations adapting to similar environments. [ 56 ] Much of the evidence for ecological speciation has been "...accumulated from top-down studies of adaptation and reproductive isolation". [ 56 ] Sexual selection can drive speciation in a clade, independently of natural selection . [ 57 ] However, the term "speciation", in this context, tends to be used in two different, but not mutually exclusive, senses.
The first and most commonly used sense refers to the "birth" of new species. That is, the splitting of an existing species into two separate species, or the budding off of a new species from a parent species, both driven by a biological "fashion fad" (a preference for a feature, or features, in one or both sexes, that do not necessarily have any adaptive qualities). [ 57 ] [ 58 ] [ 59 ] [ 60 ] In the second sense, "speciation" refers to the widespread tendency of sexual creatures to be grouped into clearly defined species, [ 61 ] [ 20 ] rather than forming a continuum of phenotypes both in time and space – which would be the more obvious or logical consequence of natural selection. This was indeed recognized by Darwin as problematic, and included in his On the Origin of Species (1859), under the heading "Difficulties of the Theory". [ 7 ] There are several suggestions as to how mate choice might play a significant role in resolving Darwin's dilemma . [ 20 ] [ 10 ] [ 16 ] [ 17 ] [ 18 ] [ 62 ] If speciation takes place in the absence of natural selection, it might be referred to as nonecological speciation . [ 63 ] [ 64 ] New species have been created by animal husbandry , but the dates and methods of the initiation of such species are not clear. Often, the domestic counterpart can still interbreed and produce fertile offspring with its wild ancestor. This is the case with domestic cattle , which can be considered the same species as several varieties of wild ox , gaur , and yak ; and with domestic sheep that can interbreed with the mouflon . [ 65 ] [ 66 ] The best-documented creations of new species in the laboratory were performed in the late 1980s. William R. Rice and George W. Salt bred Drosophila melanogaster fruit flies using a maze with three different choices of habitat, such as light/dark and wet/dry. Each generation was placed into the maze, and the groups of flies that came out of two of the eight exits were set apart to breed with each other in their respective groups. After thirty-five generations, the two groups and their offspring were isolated reproductively because of their strong habitat preferences: they mated only within the areas they preferred, and so did not mate with flies that preferred the other areas. [ 67 ] The history of such attempts is described by Rice and Ellen E. Hostert (1993). [ 68 ] [ 69 ] Diane Dodd used a laboratory experiment to show how reproductive isolation can develop in Drosophila pseudoobscura fruit flies after several generations by rearing them on different media, one starch-based and the other maltose-based. [ 70 ] Dodd's experiment has been replicated many times, including with other kinds of fruit flies and foods. [ 71 ] Such rapid evolution of reproductive isolation may sometimes be a relic of infection by Wolbachia bacteria. [ 72 ] An alternative explanation is that these observations are consistent with sexually-reproducing animals being inherently reluctant to mate with individuals whose appearance or behavior is different from the norm. The risk that such deviations are due to heritable maladaptations is high. Thus, if an animal, unable to predict natural selection's future direction, is conditioned to produce the fittest offspring possible, it will avoid mates with unusual habits or features. [ 73 ] [ 74 ] [ 16 ] [ 17 ] [ 18 ] Sexual creatures then inevitably group themselves into reproductively isolated species. [ 17 ] Few speciation genes have been found. They usually involve the reinforcement process in the late stages of speciation.
In 2008, a speciation gene causing reproductive isolation was reported. [ 75 ] It causes hybrid sterility between related subspecies. The order of speciation of three groups from a common ancestor may be unclear or unknown; a collection of three such species is referred to as a "trichotomy". [ citation needed ] Polyploidy is a mechanism that has caused many rapid speciation events in sympatry because, for example, tetraploid x diploid matings often result in sterile triploid progeny (the tetraploid parent contributes a diploid gamete and the diploid parent a haploid gamete, so the offspring carry three chromosome sets). [ 76 ] However, among plants, not all polyploids are reproductively isolated from their parents, and gene flow may still occur, such as through triploid hybrid x diploid matings that produce tetraploids, or matings between meiotically unreduced gametes from diploids and gametes from tetraploids (see also hybrid speciation ). [ citation needed ] It has been suggested that many of the existing plant and most animal species have undergone an event of polyploidization in their evolutionary history. [ 77 ] [ 78 ] Reproduction of successful polyploid species is sometimes asexual, by parthenogenesis or apomixis , as for unknown reasons many asexual organisms are polyploid. Rare instances of polyploid mammals are known, but most often result in prenatal death. [ 79 ] Hybridization between two different species sometimes leads to a distinct phenotype . This phenotype can also be fitter than the parental lineage, and as such natural selection may then favor these individuals. Eventually, if reproductive isolation is achieved, it may lead to a separate species. However, reproductive isolation between hybrids and their parents is particularly difficult to achieve, and thus hybrid speciation is considered an extremely rare event. The Mariana mallard is thought to have arisen from hybrid speciation. [ citation needed ] Hybridization is an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome ) is tolerated in plants more readily than in animals. [ 80 ] [ 81 ] Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. [ 78 ] Polyploids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations. [ 82 ] Hybridization without change in chromosome number is called homoploid hybrid speciation. It is considered very rare but has been shown in Heliconius butterflies [ 83 ] and sunflowers . Polyploid speciation, which involves changes in chromosome number, is a more common phenomenon, especially in plant species. [ citation needed ] Theodosius Dobzhansky , who studied fruit flies in the early days of genetic research in the 1930s, speculated that parts of chromosomes that switch from one location to another might cause a species to split into two different species. He mapped out how it might be possible for sections of chromosomes to relocate themselves in a genome. Those mobile sections can cause sterility in inter-species hybrids, which can act as a speciation pressure. In theory, his idea was sound, but scientists long debated whether it actually happened in nature. Eventually a competing theory involving the gradual accumulation of mutations was shown to occur in nature so often that geneticists largely dismissed the moving gene hypothesis. [ 84 ] However, research in 2006 showed that the jumping of a gene from one chromosome to another can contribute to the birth of new species.
[ 85 ] This validates the reproductive isolation mechanism, a key component of speciation. [ 86 ] There is debate as to the rate at which speciation events occur over geologic time. While some evolutionary biologists claim that speciation events have remained relatively constant and gradual over time (known as "Phyletic gradualism" – see diagram), some palaeontologists such as Niles Eldredge and Stephen Jay Gould [ 87 ] have argued that species usually remain unchanged over long stretches of time, and that speciation occurs only over relatively brief intervals, a view known as punctuated equilibrium . (See diagram, and Darwin's dilemma .) [ citation needed ] Evolution can be extremely rapid, as shown in the creation of domesticated animals and plants in a very short geological space of time, spanning only a few tens of thousands of years. Maize ( Zea mays ), for instance, was created in Mexico in only a few thousand years, starting about 7,000 to 12,000 years ago. [ 88 ] This raises the question of why the long term rate of evolution is far slower than is theoretically possible. [ 89 ] [ 90 ] [ 91 ] [ 92 ] Evolution is imposed on species or groups. It is not planned or striven for in some Lamarckist way. [ 93 ] The mutations on which the process depends are random events, and, except for the " silent mutations " which do not affect the functionality or appearance of the carrier, are thus usually disadvantageous, and their chance of proving to be useful in the future is vanishingly small. Therefore, while a species or group might benefit from being able to adapt to a new environment by accumulating a wide range of genetic variation, this is to the detriment of the individuals who have to carry these mutations until a small, unpredictable minority of them ultimately contributes to such an adaptation. Thus, the capability to evolve would require group selection , a concept discredited by (for example) George C. Williams , [ 94 ] John Maynard Smith [ 95 ] and Richard Dawkins [ 96 ] [ 97 ] [ 98 ] [ 99 ] as selectively disadvantageous to the individual. The resolution to Darwin's second dilemma might thus come about as follows: If sexual individuals are disadvantaged by passing mutations on to their offspring, they will avoid mutant mates with strange or unusual characteristics. [ 74 ] [ 16 ] [ 17 ] [ 62 ] Mutations that affect the external appearance of their carriers will then rarely be passed on to the next and subsequent generations. They would therefore seldom be tested by natural selection. Evolution is, therefore, effectively halted or slowed down considerably. The only mutations that can accumulate in a population, on this punctuated equilibrium view, are ones that have no noticeable effect on the outward appearance and functionality of their bearers (i.e., they are "silent" or " neutral mutations ", which can be, and are, used to trace the relatedness and age of populations and species . [ 16 ] [ 100 ] ) This argument implies that evolution can only occur if mutant mates cannot be avoided, as a result of a severe scarcity of potential mates. This is most likely to occur in small, isolated communities . These occur most commonly on small islands, in remote valleys, lakes, river systems, or caves, [ 101 ] or during the aftermath of a mass extinction . [ 100 ] Under these circumstances, not only is the choice of mates severely restricted but population bottlenecks, founder effects, genetic drift and inbreeding cause rapid, random changes in the isolated population's genetic composition. 
[ 101 ] Furthermore, hybridization with a related species trapped in the same isolate might introduce additional genetic changes. If an isolated population such as this survives its genetic upheavals , and subsequently expands into an unoccupied niche, or into a niche in which it has an advantage over its competitors, a new species, or subspecies, will have come into being. In geological terms, this will be an abrupt event. A resumption of avoiding mutant mates will thereafter result, once again, in evolutionary stagnation. [ 87 ] [ 90 ] In apparent confirmation of this punctuated equilibrium view of evolution, the fossil record of an evolutionary progression typically consists of species that suddenly appear, and ultimately disappear, hundreds of thousands or millions of years later, without any change in external appearance. [ 87 ] [ 100 ] [ 102 ] Graphically, these fossil species are represented by lines parallel with the time axis, whose lengths depict how long each of them existed. The fact that the lines remain parallel with the time axis illustrates the unchanging appearance of each of the fossil species depicted on the graph. During each species' existence new species appear at random intervals, each also lasting many hundreds of thousands of years before disappearing without a change in appearance. The exact relatedness of these concurrent species is generally impossible to determine. This is illustrated in the diagram depicting the distribution of hominin species through time since the hominins separated from the line that led to the evolution of their closest living primate relatives, the chimpanzees. [ 102 ] For similar evolutionary time lines see, for instance, the paleontological list of African dinosaurs , Asian dinosaurs , the Lampriformes and Amiiformes . [ citation needed ]
https://en.wikipedia.org/wiki/Speciation
The American Species Survival Plan or SSP program was developed in 1981 by the (American) Association of Zoos and Aquariums to help ensure the survival of selected species in zoos and aquariums , [ 1 ] most of which are threatened or endangered in the wild. SSP programs focus on animals that are near threatened, threatened, endangered, or otherwise in danger of extinction in the wild, when zoo and zoology conservationists believe captive breeding programs will aid in their chances of survival. [ 2 ] These programs help maintain healthy and genetically diverse animal populations within the Association of Zoos and Aquariums-accredited zoo community. [ 3 ] AZA accredited zoos and AZA conservation partners that are involved in SSP programs engage in cooperative population management and conservation efforts that include research, conservation genetics, public education, reintroduction , and in situ or field conservation projects. [ 1 ] The process for selecting recommended species is guided by Taxon Advisory Groups, whose sole objective is to curate Regional Collection Plans for the conservation needs of a species and how AZA institutions will cooperate to reach those needs. [ 4 ] Today, there are almost 300 existing SSP programs. [ 5 ] The SSP has been met with widespread success in ensuring that, should a species population become functionally extinct in its natural habitat, a viable population still exists within a zoological setting. This has also led to AZA species reintroduction programs, examples of which include the black-footed ferret , the California condor , the northern riffleshell , the golden lion tamarin , the Karner blue butterfly , the Oregon spotted frog , the palila finch , the red wolf , and the Wyoming toad . [ 6 ] An SSP master plan is a document produced by the SSP coordinator (generally a zoo professional under the guidance of an elected management committee) [ 1 ] for a certain species. This document sets ex situ population goals and other management recommendations to achieve the maximum genetic diversity and demographic stability for a species, given transfer and space constraints. [ 2 ] As of 2025, there are 295 species that are a part of the Species Survival Plan program. [ 7 ] [ note 1 ]
https://en.wikipedia.org/wiki/Species_Survival_Plan
Species affinis (commonly abbreviated to: sp. aff. , aff. , or affin. ) is taxonomic terminology in zoology and botany . In open nomenclature it indicates that available material or evidence suggests that the proposed species is related to, has an affinity to, but is not identical to, the species with the binomial name it comes after. [ 1 ] The Latin word affinis can be translated as "closely related to", or "akin to". [ 2 ] An author who inserts n. sp. aff. or sp. nov. aff. before a species name thereby states the opinion that the specimen is a new, previously undescribed species, [ 3 ] but that there may not (yet) be enough information to complete a formal description. To use aff. alone implies that the specimen differs suggestively from the holotype, but that further work is necessary to confirm that it is a novel species. An example would be: a gastropod shell listed as Lucapina aff. L. aegis would mean that this shell somewhat resembles the shell of Lucapina aegis , but is thought more likely to be another species, either closely related to, or closely resembling, Lucapina aegis . In a suitable context it also may suggest the possibility that the shell belongs to a species that has not yet been described . Within entomology , species proxima (Latin: 'the nearest species', abbreviated prox. or sp. prox. ) and species near (abbreviated nr. or sp. nr. ) indicate a specimen is similar to, but distinct from, a described species. [ 4 ] The use of aff. is similar to other indicators of open nomenclature such as cf. , sp. , or ?, [ 3 ] but the latter indicate that the species is uncertain rather than undescribed.
https://en.wikipedia.org/wiki/Species_affinis
In biology, a species complex is a group of closely related organisms that are so similar in appearance and other features that the boundaries between them are often unclear. The taxa in the complex may be able to hybridize readily with each other, further blurring any distinctions. Terms that are sometimes used synonymously but have more precise meanings are cryptic species for two or more species hidden under one species name, sibling species for two (or more) species that are each other's closest relative, and species flock for a group of closely related species that live in the same habitat. As informal taxonomic ranks , species group , species aggregate , macrospecies , and superspecies are also in use. Two or more taxa that were once considered conspecific (of the same species) may later be subdivided into infraspecific taxa (taxa within a species, such as plant varieties ); such a subdivision may produce a complex of ranks, but it is not a species complex. In most cases, a species complex is a monophyletic group of species with a common ancestor, but there are exceptions. It may represent an early stage after speciation in which the species were separated for a long time period without evolving morphological differences. Hybrid speciation can be a component in the evolution of a species complex. Species complexes are ubiquitous and are identified by the rigorous study of differences between individual species that uses minute morphological details, tests of reproductive isolation , or DNA -based methods, such as molecular phylogenetics and DNA barcoding . The existence of extremely similar species may cause local and global species diversity to be underestimated. The recognition of similar-but-distinct species is important for disease and pest control and in conservation biology, although the drawing of dividing lines between species can be inherently difficult . A species complex is typically considered as a group of close but distinct species. [ 5 ] The concept is closely tied to the definition of a species. Modern biology understands a species as a "separately evolving metapopulation lineage " but acknowledges that the criteria to delimit species may depend on the group studied. [ 6 ] Thus, many traditionally defined species, based only on morphological similarity, have been found to be several distinct species when other criteria, such as genetic differentiation or reproductive isolation , are applied. [ 7 ] A more restricted use applies the term to a group of species among which hybridisation has occurred or is occurring, which leads to intermediate forms and blurred species boundaries. [ 8 ] The informal classification, superspecies, can be exemplified by the grizzled skipper butterfly, which is a superspecies that is further divided into three subspecies. [ 9 ] Some authors apply the term to a species with intraspecific variability , which might be a sign of ongoing or incipient speciation . Examples are ring species [ 10 ] [ 11 ] or species with subspecies , in which it is often unclear if they should be considered separate species. [ 12 ] Several terms are used synonymously for a species complex, but some of them may also have slightly different or narrower meanings. In the nomenclature codes of zoology and bacteriology, no taxonomic ranks are defined at the level between subgenus and species, [ 13 ] [ 14 ] but the botanical code defines four ranks below subgenus (section, subsection, series, and subseries).
[ 15 ] Different informal taxonomic solutions have been used to indicate a species complex. Distinguishing close species within a complex requires the study of often very small differences. Morphological differences may be minute and visible only by the use of adapted methods, such as microscopy . However, distinct species sometimes have no morphological differences. [ 21 ] In those cases, other characters, such as in the species' life history , behavior , physiology , and karyology , may be explored. For example, territorial songs are indicative of species in the treecreepers , a bird genus with few morphological differences. [ 22 ] Mating tests are common in some groups such as fungi to confirm the reproductive isolation of two species. [ 23 ] Analysis of DNA sequences is becoming increasingly standard for species recognition and may, in many cases, be the only useful method. [ 21 ] Different methods are used to analyse such genetic data, such as molecular phylogenetics or DNA barcoding . Such methods have greatly contributed to the discovery of cryptic species, [ 21 ] [ 24 ] including such emblematic species as the fly agaric , [ 2 ] the water fleas , [ 25 ] or the African elephants . [ 3 ] Species forming a complex have typically diverged very recently from each other, which sometimes allows the retracing of the process of speciation . Species with differentiated populations, such as ring species , are sometimes seen as an example of early, ongoing speciation: a species complex in formation. Nevertheless, similar but distinct species have sometimes been isolated for a long time without evolving differences, a phenomenon known as "morphological stasis". [ 21 ] For example, the Amazonian frog Pristimantis ockendeni is actually at least three different species that diverged over 5 million years ago. [ 27 ] Stabilizing selection has been invoked as a force maintaining similarity in species complexes, especially when they adapted to special environments (such as a host in the case of symbionts or extreme environments). [ 21 ] This may constrain possible directions of evolution; in such cases, strongly divergent selection is not to be expected. [ 21 ] Also, asexual reproduction, such as through apomixis in plants, may separate lineages without producing a great degree of morphological differentiation. A species complex is usually a group that has one common ancestor (a monophyletic group), but closer examination can sometimes disprove that. For example, yellow-spotted "fire salamanders" in the genus Salamandra , formerly all classified as one species S. salamandra , are not monophyletic: the Corsican fire salamander 's closest relative has been shown to be the entirely black Alpine salamander . [ 26 ] In such cases, similarity has arisen from convergent evolution . Hybrid speciation can lead to unclear species boundaries through a process of reticulate evolution , in which species have two parent species as their most recent common ancestors . In such cases, the hybrid species may have intermediate characters, such as in Heliconius butterflies. [ 28 ] Hybrid speciation has been observed in various species complexes, such as insects, fungi, and plants. In plants, hybridization often takes place through polyploidization , and hybrid plant species are called nothospecies . Sources differ on whether or not members of a species group share a range . 
A source from Iowa State University Department of Agronomy states that members of a species group usually have partially overlapping ranges but do not interbreed with one another. [ 29 ] A Dictionary of Zoology ( Oxford University Press 1999) describes a species group as a complex of related species that exist allopatrically and explains that the "grouping can often be supported by experimental crosses in which only certain pairs of species will produce hybrids ." [ 30 ] The examples given below may support both uses of the term "species group." Often, such complexes do not become evident until a new species is introduced into the system, which breaks down existing species barriers. An example is the introduction of the Spanish slug in Northern Europe , where interbreeding with the local black slug and red slug , which were traditionally considered clearly separate species that did not interbreed, shows that they may actually be just subspecies of the same species. [ 31 ] [ 32 ] Where closely related species co-exist in sympatry , it is often a particular challenge to understand how the similar species persist without outcompeting each other. Niche partitioning is one mechanism invoked to explain that. Indeed, studies in some species complexes suggest that species divergence has gone in parallel with ecological differentiation, with species now preferring different microhabitats. [ 33 ] Similar methods also found that the Amazonian frog Eleutherodactylus ockendeni is actually at least three different species that diverged over 5 million years ago. [ 27 ] A species flock may arise when a species penetrates a new geographical area and diversifies to occupy a variety of ecological niches , a process known as adaptive radiation . The first species flock to be recognized as such was the 13 species of Darwin's finches on the Galápagos Islands described by Charles Darwin . It has been suggested that cryptic species complexes are very common in the marine environment. [ 34 ] That suggestion came before the detailed analysis of many systems using DNA sequence data, but has since been proven correct. [ 35 ] The increased use of DNA sequences in the investigation of organismal diversity (also called phylogeography and DNA barcoding ) has led to the discovery of a great many cryptic species complexes in all habitats. In the marine bryozoan Celleporella hyalina , [ 36 ] detailed morphological analyses and mating compatibility tests between the isolates identified by DNA sequence analysis were used to confirm that these groups consisted of more than 10 ecologically distinct species, which had been diverging for many millions of years. Pests, disease-causing species, and their vectors have direct importance for humans. When they are found to be cryptic species complexes, the ecology and the virulence of each of these species need to be re-evaluated to devise appropriate control strategies, as their diversity increases the capacity for more dangerous strains to develop. Examples are cryptic species in the malaria vector genus of mosquito, Anopheles , the fungi causing cryptococcosis , and sister species of Bactrocera tryoni , or the Queensland fruit fly. That pest is indistinguishable from two sister species except that B. tryoni inflicts widespread, devastating damage to Australian fruit crops, but the sister species do not. [ 38 ] When a species is found to be several phylogenetically distinct species, each typically has a smaller distribution range and population size than had been reckoned.
The different species can also differ in their ecology, such as by having different breeding strategies or habitat requirements, which must be taken into account for appropriate management. For example, giraffe populations and subspecies differ genetically to such an extent that they may be considered species. Although the giraffe, as a whole, is not considered to be threatened, if each cryptic species is considered separately, there is a much higher level of threat. [ 39 ]
https://en.wikipedia.org/wiki/Species_complex
A species description is a formal scientific description of a newly encountered species , typically articulated through a scientific publication . Its purpose is to provide a clear description of a new species of organism and explain how it differs from species that have been previously described or related species. For a species to be considered valid, a species description must follow established guidelines and naming conventions dictated by relevant nomenclature codes . These include the International Code of Zoological Nomenclature (ICZN) for animals, the International Code of Nomenclature for algae, fungi, and plants (ICN) for plants, and the International Committee on Taxonomy of Viruses (ICTV) for viruses. A species description often includes photographs or other illustrations of type material and information regarding where this material is deposited. The publication in which the species is described gives the new species a formal scientific name . Some 1.9 million species have been identified and described, out of some 8.7 million that may actually exist. [ 1 ] Additionally, over five billion species have gone extinct over the history of life on Earth . [ 2 ] A name of a new species becomes valid ( available in zoological terminology) with the date of publication of its formal scientific description. Once the scientist has performed the necessary research to determine that the discovered organism represents a new species, the scientific results are summarized in a scientific manuscript, either as part of a book or as a paper to be submitted to a scientific journal . A scientific species description must fulfill several formal criteria specified by the nomenclature codes , e.g. selection of at least one type specimen . These criteria are intended to ensure that the species name is clear and unambiguous; for example, the International Code of Zoological Nomenclature states that "Authors should exercise reasonable care and consideration in forming new names to ensure that they are chosen with their subsequent users in mind and that, as far as possible, they are appropriate, compact, euphonious , memorable, and do not cause offence." [ 3 ] Species names are written in the 26 letters of the Latin alphabet, but many species names are based on words from other languages, and are Latinized. Once the manuscript has been accepted for publication, [ 4 ] the new species name is officially created. Once a species name has been assigned and approved, it can generally not be changed except in the case of error. For example, a species of beetle ( Anophthalmus hitleri ) was named by a German collector after Adolf Hitler in 1933, when he had recently become chancellor of Germany. [ 5 ] It is not clear whether such a dedication would be considered acceptable or appropriate today, but the name remains in use. [ 5 ] Species names have been chosen on many different bases. The most common bases are the species' external appearance, its place of origin, or a dedication to a certain person. Examples would include a bat species named for the two stripes on its back ( Saccopteryx bilineata ), a frog named for its Bolivian origin ( Phyllomedusa boliviana ), and an ant species dedicated to the actor Harrison Ford ( Pheidole harrisonfordi ). A scientific name in honor of a person or persons is known as a taxonomic eponym or eponymic; patronym and matronym are the gendered terms for this. [ 6 ] [ 7 ] A number of humorous species names also exist.
Literary examples include the genus name Borogovia (an extinct dinosaur), which is named after the borogove, a mythical character from Lewis Carroll 's poem " Jabberwocky ". A second example, Macrocarpaea apparata (a tall plant) was named after the magical spell "to apparate" from the Harry Potter novels by J. K. Rowling , as it seemed to appear out of nowhere. [ 8 ] In 1975, the British naturalist Peter Scott proposed the binomial name Nessiteras rhombopteryx ("Ness monster with diamond-shaped fin") for the Loch Ness Monster; it was soon spotted that it was an anagram of "Monster hoax by Sir Peter S". Species have frequently been named by scientists in recognition of supporters and benefactors. For example, the genus Victoria (a flowering waterplant) was named in honour of Queen Victoria of Great Britain. More recently, a species of lemur ( Avahi cleesei ) was named after the actor John Cleese in recognition of his work to publicize the plight of lemurs in Madagascar. Non-profit ecological organizations may also allow benefactors to name new species in exchange for financial support for taxonomic research and nature conservation. A German non-profit organisation, BIOPAT – Patrons for Biodiversity , has raised more than $450,000 for research and conservation through sponsorship of over 100 species using this model. [ 9 ] An individual example of this system is the Callicebus aureipalatii (or "monkey of the Golden Palace"), which was named after the Golden Palace casino in recognition of a $650,000 contribution to the Madidi National Park in Bolivia in 2005. [ 10 ] The International Code of Nomenclature for algae, fungi, and plants discourages this practice somewhat: "Recommendation 20A. Authors forming generic names should comply with the following ... (h) Not dedicate genera to persons quite unconcerned with botany, mycology, phycology, or natural science in general." [ 11 ] Early biologists often published entire volumes or multiple-volume works of descriptions in an attempt to catalog all known species. These catalogs typically featured extensive descriptions of each species and were often illustrated upon reprinting. The first of these large catalogs was Aristotle 's History of Animals , published around 343 BC. Aristotle included descriptions of creatures, mostly fish and invertebrates, in his homeland, and several mythological creatures rumored to live in far-away lands, such as the manticore . In 77 AD Pliny the Elder dedicated several volumes of his Natural History to the description of all life forms he knew to exist. He appears to have read Aristotle's work since he writes about many of the same far-away mythological creatures. Toward the end of the 12th century, Konungs skuggsjá , an Old Norse philosophical didactic work, featured several descriptions of the whales, seals, and monsters of the Icelandic seas. These descriptions were brief and often erroneous, and they included a description of the mermaid and a rare island-like sea monster called hafgufu . The author was hesitant to mention the beast (known today to be fictitious) for fear of its size, but felt it was important enough to be included in his descriptions. [ 12 ] However, the earliest recognized species authority is Carl Linnaeus , who standardized the modern taxonomy system beginning with his Systema Naturae in 1735. [ 13 ] As the catalog of known species was increasing rapidly, it became impractical to maintain a single work documenting every species. 
Publishing a paper documenting a single species was much faster and could be done by scientists with narrower scopes of study. For example, a scientist who discovered a new species of insect would not need to understand plants, or frogs, or even insects which did not resemble the species, but would only need to understand closely related insects. Formal species descriptions today follow strict guidelines set forth by the codes of nomenclature . Very detailed formal descriptions are made by scientists, who usually study the organism closely for a considerable time. A diagnosis may be used instead of, [ 14 ] or as well as, [ 15 ] the description. A diagnosis specifies the distinction between the new species and other species, and it does not necessarily have to be based on morphology. [ 16 ] In recent times, new species descriptions have been made without voucher specimens, and this has been controversial. [ 17 ] According to the RetroSOS report, [ 18 ] the following numbers of species have been described each year in the 2000s.
https://en.wikipedia.org/wiki/Species_description
Species distribution , or species dispersion , [ 1 ] is the manner in which a biological taxon is spatially arranged. [ 2 ] The geographic limits of a particular taxon's distribution are its range , often represented as shaded areas on a map. Patterns of distribution change depending on the scale at which they are viewed, from the arrangement of individuals within a small family unit, to patterns within a population, or the distribution of the entire species as a whole (range). Species distribution is not to be confused with dispersal , which is the movement of individuals away from their region of origin or from a population center of high density . In biology , the range of a species is the geographical area within which that species can be found. Within that range, distribution is the general structure of the species population , while dispersion is the variation in its population density . Range is often described in terms of qualities such as whether it is continuous or disjunct: disjunct distribution occurs when two or more areas of the range of a taxon are considerably separated from each other geographically. Distribution patterns may change by season, through distribution by humans, in response to the availability of resources, and in response to other abiotic and biotic factors. There are three main types of abiotic factors. An example of the effects of abiotic factors on species distribution can be seen in drier areas, where most individuals of a species will gather around water sources, forming a clumped distribution. Researchers from the Arctic Ocean Diversity (ARCOD) project have documented rising numbers of warm-water crustaceans in the seas around Norway's Svalbard Islands. ARCOD is part of the Census of Marine Life, a huge 10-year project involving researchers in more than 80 nations that aims to chart the diversity, distribution and abundance of life in the oceans. Marine life has been increasingly affected by global climate change . This study shows that, as ocean temperatures rise, species are beginning to travel into the cold and harsh Arctic waters. Even the snow crab has extended its range 500 km north. Biotic factors such as predation, disease, and inter- and intra-specific competition for resources such as food, water, and mates can also affect how a species is distributed. For example, biotic factors in a quail's environment would include their prey (insects and seeds), competition from other quail, and their predators, such as the coyote. [ 5 ] An advantage of a herd, community, or other clumped distribution is that a population can detect predators earlier, at a greater distance, and potentially mount an effective defense. Due to limited resources, populations may be evenly distributed to minimize competition, [ 6 ] as is found in forests, where competition for sunlight produces an even distribution of trees. [ 7 ] One key factor in determining species distribution is the phenology of the organism. Plants are well documented as examples showing how phenology is an adaptive trait that can influence fitness in changing climates. [ 8 ] Physiology can influence species distributions in an environmentally sensitive manner because physiology underlies movement such as exploration and dispersal . Individuals that are more prone to dispersal have higher metabolism, locomotor performance, corticosterone levels, and immunity. [ 9 ] Humans are one of the largest distributors of species due to the current trends in globalization and the expansion of the transportation industry.
For example, large tankers often fill their ballasts with water at one port and empty them in another, causing a wider distribution of aquatic species. [ 10 ] On large scales, the pattern of distribution among individuals in a population is clumped. [ 11 ] One common example of a bird species' range is a land area bordering a water body such as an ocean, river, or lake; this is called a coastal strip . As a second example, some species of birds depend on water, usually a river, swamp, or water-related forest, and live in a river corridor . A separate example of a river corridor would be one that includes the entire drainage, with the edge of the range delimited by mountains or higher elevations; the river itself would be a smaller percentage of this entire wildlife corridor , but the corridor is created because of the river. A further example of a bird wildlife corridor would be a mountain range corridor. In the United States, the Sierra Nevada range in the west and the Appalachian Mountains in the east are two examples of this habitat, used in summer and winter by separate species, for different reasons. Bird species in these corridors are either connected to the species' main range (a contiguous range) or occupy an isolated geographic range, forming a disjunct range. Birds leaving the area, if they migrate , would either remain connected to the main range or have to fly over land not connected to the wildlife corridor; in the latter case they would be passage migrants over land on which they stop only for intermittent, hit-or-miss visits. On large scales, the pattern of distribution among individuals in a population is clumped. On small scales, the pattern may be clumped, regular, or random. [ 11 ] Clumped distribution , also called aggregated distribution , clumped dispersion or patchiness , is the most common type of dispersion found in nature. In clumped distribution, the distance between neighboring individuals is minimized. This type of distribution is found in environments that are characterized by patchy resources. Animals need certain resources to survive, and when these resources become rare during certain parts of the year animals tend to "clump" together around these crucial resources. Individuals might be clustered together in an area due to social factors such as selfish herds and family groups. Organisms that usually serve as prey form clumped distributions in areas where they can hide and detect predators easily. Another cause of clumped distribution is the inability of offspring to move independently from their habitat. This is seen in juvenile animals that are immobile and strongly dependent upon parental care. For example, the bald eagle 's nest of eaglets exhibits a clumped species distribution because all the offspring are in a small subset of a survey area before they learn to fly. Clumped distribution can be beneficial to the individuals in that group. However, in some herbivore cases, such as cows and wildebeests, the vegetation around them can suffer, especially if animals target one plant in particular. Clumped distribution in species acts as a mechanism against predation as well as an efficient mechanism to trap or corner prey. African wild dogs, Lycaon pictus , use the technique of communal hunting to increase their success rate at catching prey. Studies have shown that larger packs of African wild dogs tend to have a greater number of successful kills.
A prime example of clumped distribution due to patchy resources is the wildlife in Africa during the dry season; lions, hyenas, giraffes, elephants, gazelles, and many more animals are clumped by small water sources that are present in the severe dry season. [ 12 ] It has also been observed that extinct and threatened species are more likely to be clumped in their distribution on a phylogeny. The reasoning behind this is that they share traits that increase vulnerability to extinction because related taxa are often located within the same broad geographical or habitat types where human-induced threats are concentrated. Using recently developed complete phylogenies for mammalian carnivores and primates it has been shown that in the majority of instances threatened species are far from randomly distributed among taxa and phylogenetic clades and display clumped distribution. [ 13 ] A contiguous distribution is one in which individuals are closer together than they would be if they were randomly or evenly distributed, i.e., it is clumped distribution with a single clump. [ 14 ] Less common than clumped distribution, uniform distribution, also known as even distribution, is evenly spaced. [ 15 ] Uniform distributions are found in populations in which the distance between neighboring individuals is maximized. The need to maximize the space between individuals generally arises from competition for a resource such as moisture or nutrients, or as a result of direct social interactions between individuals within the population, such as territoriality. For example, penguins often exhibit uniform spacing by aggressively defending their territory among their neighbors. The burrows of great gerbils for example are also regularly distributed, [ 16 ] which can be seen on satellite images. [ 17 ] Plants also exhibit uniform distributions, like the creosote bushes in the southwestern region of the United States. Salvia leucophylla is a species in California that naturally grows in uniform spacing. This flower releases chemicals called terpenes which inhibit the growth of other plants around it and results in uniform distribution. [ 18 ] This is an example of allelopathy , which is the release of chemicals from plant parts by leaching, root exudation, volatilization, residue decomposition and other processes. Allelopathy can have beneficial, harmful, or neutral effects on surrounding organisms. Some allelochemicals even have selective effects on surrounding organisms; for example, the tree species Leucaena leucocephala exudes a chemical that inhibits the growth of other plants but not those of its own species, and thus can affect the distribution of specific rival species. Allelopathy usually results in uniform distributions, and its potential to suppress weeds is being researched. [ 19 ] Farming and agricultural practices often create uniform distribution in areas where it would not previously exist, for example, orange trees growing in rows on a plantation. Random distribution, also known as unpredictable spacing, is the least common form of distribution in nature and occurs when the members of a given species are found in environments in which the position of each individual is independent of the other individuals: they neither attract nor repel one another. Random distribution is rare in nature as biotic factors, such as the interactions with neighboring individuals, and abiotic factors, such as climate or soil conditions, generally cause organisms to be either clustered or spread. 
Random distribution usually occurs in habitats where environmental conditions and resources are consistent. This pattern of dispersion is characterized by the lack of any strong social interactions between species. For example, when dandelion seeds are dispersed by wind, random distribution will often occur as the seedlings land in random places determined by uncontrollable factors. Oyster larvae can also travel hundreds of kilometers powered by sea currents, which can result in their random distribution. Random distributions exhibit chance clumps (see Poisson clumping ). There are various ways to determine the distribution pattern of species. The Clark–Evans nearest neighbor method [ 20 ] can be used to determine if a distribution is clumped, uniform, or random. [ 21 ] To utilize the Clark–Evans nearest neighbor method, researchers examine a population of a single species. The distance of an individual to its nearest neighbor is recorded for each individual in the sample. For two individuals that are each other's nearest neighbor, the distance is recorded twice, once for each individual. To receive accurate results, it is suggested that the number of distance measurements is at least 50. The average distance between nearest neighbors is compared to the expected distance in the case of random distribution to give the ratio R, the mean observed nearest neighbor distance divided by the mean distance expected under complete spatial randomness (for a population of density ρ, the expected mean distance is 1/(2√ρ)). If this ratio R is equal to 1, then the population is randomly dispersed. If R is significantly greater than 1, the population is evenly dispersed. Lastly, if R is significantly less than 1, the population is clumped. Statistical tests (such as the t-test, chi squared, etc.) can then be used to determine whether R is significantly different from 1. The variance/mean ratio method focuses mainly on determining whether a species fits a randomly spaced distribution, but can also be used as evidence for either an even or clumped distribution. [ 22 ] To utilize the variance/mean ratio method, data are collected from several random samples of a given population. In this analysis, it is imperative that data from at least 50 sample plots are considered. The number of individuals present in each sample is compared to the expected counts in the case of random distribution. The expected distribution can be found using the Poisson distribution . If the variance/mean ratio is equal to 1, the population is found to be randomly distributed. If it is significantly greater than 1, the population is found to be clumped. Finally, if the ratio is significantly less than 1, the population is found to be evenly distributed. Typical statistical tests used to find the significance of the variance/mean ratio include Student's t-test and chi squared . However, many researchers believe that species distribution models based on statistical analysis, without including ecological models and theories, are too incomplete for prediction. Instead of conclusions based on presence-absence data, probabilities that convey the likelihood a species will occupy a given area are preferred, because these models include an estimate of confidence in the likelihood of the species being present or absent. They are also more valuable than data collected based on simple presence or absence because models based on probability allow the formation of spatial maps that indicate how likely a species is to be found in a particular area. Similar areas can then be compared to see how likely it is that a species will occur there also; this leads to a relationship between habitat suitability and species occurrence. [ 23 ]
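Both indices described above are straightforward to compute from field data. The sketch below is a minimal Python illustration, assuming point coordinates for every individual in a rectangular plot and counts from equal-sized sample plots; the data and function names are invented for the example, and edge effects (which bias the nearest-neighbor statistic near plot boundaries) are ignored.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_ratio(coords, plot_area):
    """Clark-Evans R: mean observed nearest-neighbor distance divided by the
    mean distance expected under complete spatial randomness, 1 / (2 * sqrt(density))."""
    coords = np.asarray(coords, dtype=float)
    tree = cKDTree(coords)
    # k=2 because each point's nearest neighbor at distance 0 is itself
    distances, _ = tree.query(coords, k=2)
    observed_mean = distances[:, 1].mean()
    density = len(coords) / plot_area
    expected_mean = 1.0 / (2.0 * np.sqrt(density))
    return observed_mean / expected_mean

def variance_mean_ratio(counts_per_plot):
    """Variance/mean ratio of counts from equal-sized sample plots:
    ~1 suggests random, >1 clumped, <1 even (Poisson expectation as the baseline)."""
    counts = np.asarray(counts_per_plot, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Invented data: 60 individuals mapped in a 100 m x 100 m plot, plus 50 quadrat counts.
rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(60, 2))
quadrat_counts = rng.poisson(lam=3.0, size=50)

print("Clark-Evans R:", round(clark_evans_ratio(points, plot_area=100 * 100), 2))
print("variance/mean ratio:", round(variance_mean_ratio(quadrat_counts), 2))
```

A significance test (for example a z- or t-test on R, as noted above) would still be needed before calling either statistic meaningfully different from 1.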
Species distribution can be predicted based on the pattern of biodiversity at spatial scales. A general hierarchical model can integrate disturbance, dispersal and population dynamics. Based on factors such as dispersal, disturbance, resources, limiting climate, and the distributions of other species, predictions of species distribution can create a bio-climate range, or bio-climate envelope. The envelope can range from a local to a global scale or from density independence to density dependence. The hierarchical model takes into consideration the requirements, impacts or resources as well as local extinctions due to disturbance factors. Models can integrate the dispersal/migration model, the disturbance model, and the abundance model. Species distribution models (SDMs) can be used to assess climate change impacts and conservation management issues. Species distribution models include presence/absence models, dispersal/migration models, disturbance models, and abundance models. A prevalent way of creating predicted distribution maps for different species is to reclassify a land cover layer depending on whether or not the species in question would be predicted to inhabit each cover type. This simple SDM is often modified through the use of range data or ancillary information, such as elevation or distance to water. Recent studies have indicated that the grid size used can have an effect on the output of these species distribution models. [ 24 ] The standard 50x50 km grid size can select up to 2.89 times more area than when the same species is modeled with a 1x1 km grid. This has several consequences for species conservation planning under climate change predictions (global climate models, which are frequently used in the creation of species distribution models, usually consist of 50–100 km grids), as it could lead to over-prediction of future ranges in species distribution modeling. This can result in the misidentification of protected areas intended for a species' future habitat. The Species Distribution Grids Project is an effort led by Columbia University to create maps and databases of the whereabouts of various animal species. This work is centered on preventing deforestation and prioritizing areas based on species richness. [ 25 ] As of April 2009, data are available for global amphibian distributions, as well as birds and mammals in the Americas. The map gallery Gridded Species Distribution contains sample maps for the Species Grids data set. These maps are not exhaustive but rather contain a representative sample of the types of data available for download.
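The land-cover reclassification approach described above is essentially a lookup from cover class to predicted presence, optionally masked by ancillary layers. A minimal sketch, assuming hypothetical integer cover codes, an invented suitability list, and a made-up elevation ceiling, could look like this:

```python
import numpy as np

# Hypothetical land cover raster: integer codes for cover classes
# (1 = deciduous forest, 2 = conifer forest, 3 = cropland, 4 = urban, 5 = wetland).
land_cover = np.array([
    [1, 1, 3, 4],
    [1, 2, 3, 4],
    [5, 2, 2, 3],
])

# Assumed cover types the species is predicted to inhabit (illustrative only).
suitable_classes = [1, 2, 5]
predicted_presence = np.isin(land_cover, suitable_classes)

# Ancillary information, e.g. an assumed 1500 m elevation ceiling for the species.
elevation = np.array([
    [ 300,  350, 1700, 1800],
    [ 400,  900, 1600, 1750],
    [ 250, 1200, 1400, 1550],
])
predicted_presence &= (elevation <= 1500)

print(predicted_presence.astype(int))
```

Changing the cell size of the input rasters in a sketch like this is also the simplest way to see the grid-size sensitivity discussed above.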
https://en.wikipedia.org/wiki/Species_distribution
Species distribution modelling (SDM) , also known as environmental (or ecological) niche modelling (ENM) , habitat modelling , predictive habitat distribution modelling , and range mapping [ 1 ] uses ecological models to predict the distribution of a species across geographic space and time using environmental data. The environmental data are most often climate data (e.g. temperature, precipitation), but can include other variables such as soil type, water depth, and land cover. SDMs are used in several research areas in conservation biology , ecology and evolution . These models can be used to understand how environmental conditions influence the occurrence or abundance of a species, and for predictive purposes ( ecological forecasting ). Predictions from an SDM may be of a species’ future distribution under climate change, a species’ past distribution in order to assess evolutionary relationships, or the potential future distribution of an invasive species. Predictions of current and/or future habitat suitability can be useful for management applications (e.g. reintroduction or translocation of vulnerable species, reserve placement in anticipation of climate change). There are two main types of SDMs. Correlative SDMs , also known as climate envelope models , bioclimatic models , or resource selection function models , model the observed distribution of a species as a function of environmental conditions. [ 1 ] Mechanistic SDMs , also known as process-based models or biophysical models , use independently derived information about a species' physiology to develop a model of the environmental conditions under which the species can exist. [ 2 ] The extent to which such modelled data reflect real-world species distributions will depend on a number of factors, including the nature, complexity, and accuracy of the models used and the quality of the available environmental data layers; the availability of sufficient and reliable species distribution data as model input; and the influence of various factors such as barriers to dispersal , geologic history, or biotic interactions , that increase the difference between the realized niche and the fundamental niche. Environmental niche modelling may be considered a part of the discipline of biodiversity informatics . A. F. W. Schimper used geographical and environmental factors to explain plant distributions in his 1898 Pflanzengeographie auf physiologischer Grundlage ( Plant Geography Upon a Physiological Basis ) and his 1908 work of the same name. [ 3 ] Andrew Murray used the environment to explain the distribution of mammals in his 1866 The Geographical Distribution of Mammals . [ 4 ] Robert Whittaker's work with plants and Robert MacArthur's work with birds strongly established the role the environment plays in species distributions. [ 1 ] Elgene O. Box constructed environmental envelope models to predict the range of tree species. [ 5 ] His computer simulations were among the earliest uses of species distribution modelling. [ 1 ] The adoption of more sophisticated generalised linear models (GLMs) made it possible to create more sophisticated and realistic species distribution models. The expansion of remote sensing and the development of GIS-based environmental modelling increase the amount of environmental information available for model-building and made it easier to use. [ 1 ] SDMs originated as correlative models. 
Correlative SDMs model the observed distribution of a species as a function of geographically referenced climatic predictor variables using multiple regression approaches. Given a set of geographically referred observed presences of a species and a set of climate maps, a model defines the most likely environmental ranges within which a species lives. Correlative SDMs assume that species are at equilibrium with their environment and that the relevant environmental variables have been adequately sampled. The models allow for interpolation between a limited number of species occurrences. For these models to be effective, it is required to gather observations not only of species presences, but also of absences, that is, where the species does not live. Records of species absences are typically not as common as records of presences, thus often "random background" or "pseudo-absence" data are used to fit these models. If there are incomplete records of species occurrences, pseudo-absences can introduce bias. Since correlative SDMs are models of a species’ observed distribution, they are models of the realized niche (the environments where a species is found), as opposed to the fundamental niche (the environments where a species can be found, or where the abiotic environment is appropriate for the survival). For a given species, the realized and fundamental niches might be the same, but if a species is geographically confined due to dispersal limitation or species interactions, the realized niche will be smaller than the fundamental niche . Correlative SDMs are easier and faster to implement than mechanistic SDMs, and can make ready use of available data. Since they are correlative however, they do not provide much information about causal mechanisms and are not good for extrapolation. They will also be inaccurate if the observed species range is not at equilibrium (e.g. if a species has been recently introduced and is actively expanding its range). Mechanistic SDMs are more recently developed. In contrast to correlative models, mechanistic SDMs use physiological information about a species (taken from controlled field or laboratory studies) to determine the range of environmental conditions within which the species can persist. [ 2 ] These models aim to directly characterize the fundamental niche, and to project it onto the landscape. A simple model may simply identify threshold values outside of which a species can't survive. A more complex model may consist of several sub-models, e.g. micro-climate conditions given macro-climate conditions, body temperature given micro-climate conditions, fitness or other biological rates (e.g. survival, fecundity) given body temperature (thermal performance curves), resource or energy requirements, and population dynamics . Geographically referenced environmental data are used as model inputs. Because the species distribution predictions are independent of the species’ known range, these models are especially useful for species whose range is actively shifting and not at equilibrium, such as invasive species. Mechanistic SDMs incorporate causal mechanisms and are better for extrapolation and non-equilibrium situations. However, they are more labor-intensive to create than correlational models and require the collection and validation of a lot of physiological data, which may not be readily available. The models require many assumptions and parameter estimates, and they can become very complicated. 
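As a toy illustration of the correlative approach, the sketch below fits a logistic regression (a generalized linear model of the kind listed later among the "regression" methods) to presence and pseudo-absence points using two climate predictors, then scores habitat suitability for new cells. The data, coefficients, and variable names are synthetic; a real analysis would use occurrence records, true environmental layers, and one of the dedicated tools discussed below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic environment at 200 surveyed locations:
# column 0 = mean annual temperature (deg C), column 1 = annual precipitation (mm).
n = 200
env = np.column_stack([rng.uniform(0, 30, n), rng.uniform(200, 2000, n)])

# Synthetic "observations": the imaginary species favors warm, wet sites.
logit = -6 + 0.25 * env[:, 0] + 0.002 * env[:, 1]
presence = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Correlative SDM: model observed presence/absence as a function of climate.
mean, std = env.mean(axis=0), env.std(axis=0)          # standardize predictors
model = LogisticRegression().fit((env - mean) / std, presence)

# Project habitat suitability onto new (made-up) grid cells.
new_cells = np.array([[5.0, 400.0],      # cool and dry
                      [22.0, 1500.0]])   # warm and wet
suitability = model.predict_proba((new_cells - mean) / std)[:, 1]
print(suitability.round(2))
```

Note that such a model only describes the realized niche of the training data; extrapolating it to novel climates runs into exactly the limitations described above.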
Dispersal, biotic interactions, and evolutionary processes present challenges, as they are not usually incorporated into either correlative or mechanistic models. Correlative and mechanistic models can be used in combination to gain additional insights. For example, a mechanistic model could be used to identify areas that are clearly outside the species' fundamental niche, and these areas can be marked as absences or excluded from analysis. See [ 6 ] for a comparison between mechanistic and correlative models. There are a variety of mathematical methods that can be used for fitting, selecting, and evaluating correlative SDMs. Models include "profile" methods, which are simple statistical techniques that use, e.g., environmental distance to known sites of occurrence, such as BIOCLIM [ 7 ] [ 8 ] and DOMAIN; "regression" methods (e.g. forms of generalized linear models); and " machine learning " methods such as maximum entropy (MAXENT). Ten machine learning techniques used in SDM are reviewed in [ 9 ]. Furthermore, ensemble models can be created from several model outputs to create a model that captures components of each. Often the mean or median value across several models is used as an ensemble. Similarly, consensus models are models that fall closest to some measure of central tendency of all models; consensus models can be individual model runs or ensembles of several models. SPACES is an online environmental niche modelling platform that allows users to design and run dozens of the most prominent methods in a high-performance, multi-platform, browser-based environment. MaxEnt is the most widely used method/software; it uses presence-only data and performs well when there are few presence records available. ModEco implements various methods. DIVA-GIS has an easy-to-use (and good for educational use) implementation of BIOCLIM. The Biodiversity and Climate Change Virtual Laboratory (BCCVL) is a "one stop modelling shop" that simplifies the process of biodiversity and climate impact modelling. It connects the research community to Australia's national computational infrastructure by integrating a suite of tools in a coherent online environment. Users can access global climate and environmental datasets or upload their own data, perform data analysis across six different experiment types with a suite of 17 different methods, and easily visualize, interpret and evaluate the results of the models. Experiment types include: Species Distribution Model, Multispecies Distribution Model, Species Trait Model (currently under development), Climate Change Projection, Biodiverse Analysis and Ensemble Analysis. Examples of BCCVL SDM outputs are available online. Another example is Ecocrop, which is used to determine the suitability of a crop to a specific environment. [ 11 ] This database system can also project crop yields and evaluate the impact of environmental factors such as climate change on plant growth and suitability. [ 12 ] Most niche modelling methods are available in the R packages 'dismo' , 'biomod2' and 'mopa' . Software developers may want to build on the openModeller project. The Collaboratory for Adaptation to Climate Change ( adapt.nd.edu ) has implemented an online version of openModeller that allows users to design and run openModeller in a high-performance, browser-based environment to allow for multiple parallel experiments without the limitations of local processor power.
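To make the "profile" and ensemble ideas above concrete, here is a small Python sketch of a percentile envelope in the spirit of BIOCLIM, followed by a simple ensemble mean. It is an illustrative simplification with invented data, not the published BIOCLIM algorithm nor the behaviour of any of the packages listed above.

```python
import numpy as np

def envelope_score(presence_env, candidate_env, lower_q=5, upper_q=95):
    """Percentile-envelope ("profile") score: 1 if every variable at a candidate
    cell lies within the chosen percentile range of the values at presence sites,
    else 0. An illustrative simplification, not the published BIOCLIM algorithm."""
    lo = np.percentile(presence_env, lower_q, axis=0)
    hi = np.percentile(presence_env, upper_q, axis=0)
    inside = (candidate_env >= lo) & (candidate_env <= hi)
    return inside.all(axis=1).astype(float)

# Invented inputs: environmental values at presence sites and at two grid cells.
rng = np.random.default_rng(1)
presence_env = rng.normal(loc=[18.0, 1200.0], scale=[3.0, 200.0], size=(80, 2))
grid_env = np.array([[17.5, 1150.0],    # near the center of the envelope
                     [ 2.0,  300.0]])   # far outside it

envelope = envelope_score(presence_env, grid_env)

# Simple ensemble: average suitability across models (here the envelope score
# and the hypothetical output of a second, unspecified model).
other_model = np.array([0.9, 0.2])
ensemble = np.mean([envelope, other_model], axis=0)
print("envelope:", envelope, "ensemble:", ensemble)
```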
https://en.wikipedia.org/wiki/Species_distribution_modelling
In biology, a species complex is a group of closely related organisms that are so similar in appearance and other features that the boundaries between them are often unclear. The taxa in the complex may be able to hybridize readily with each other, further blurring any distinctions. Terms that are sometimes used synonymously but have more precise meanings are cryptic species for two or more species hidden under one species name, sibling species for two (or more) species that are each other's closest relative, and species flock for a group of closely related species that live in the same habitat. As informal taxonomic ranks , species group , species aggregate , macrospecies , and superspecies are also in use. Two or more taxa that were once considered conspecific (of the same species) may later be subdivided into infraspecific taxa (taxa within a species, such as plant varieties ), which may be a complex ranking but it is not a species complex. In most cases, a species complex is a monophyletic group of species with a common ancestor, but there are exceptions. It may represent an early stage after speciation in which the species were separated for a long time period without evolving morphological differences. Hybrid speciation can be a component in the evolution of a species complex. Species complexes are ubiquitous and are identified by the rigorous study of differences between individual species that uses minute morphological details, tests of reproductive isolation , or DNA -based methods, such as molecular phylogenetics and DNA barcoding . The existence of extremely similar species may cause local and global species diversity to be underestimated. The recognition of similar-but-distinct species is important for disease and pest control and in conservation biology although the drawing of dividing lines between species can be inherently difficult . A species complex is typically considered as a group of close, but distinct species. [ 5 ] Obviously, the concept is closely tied to the definition of a species. Modern biology understands a species as "separately evolving metapopulation lineage " but acknowledges that the criteria to delimit species may depend on the group studied. [ 6 ] Thus, many traditionally defined species, based only on morphological similarity, have been found to be several distinct species when other criteria, such as genetic differentiation or reproductive isolation , are applied. [ 7 ] A more restricted use applies the term to a group of species among which hybridisation has occurred or is occurring, which leads to intermediate forms and blurred species boundaries. [ 8 ] The informal classification, superspecies, can be exemplified by the grizzled skipper butterfly, which is a superspecies that is further divided into three subspecies. [ 9 ] Some authors apply the term to a species with intraspecific variability , which might be a sign of ongoing or incipient speciation . Examples are ring species [ 10 ] [ 11 ] or species with subspecies , in which it is often unclear if they should be considered separate species. [ 12 ] Several terms are used synonymously for a species complex, but some of them may also have slightly different or narrower meanings. In the nomenclature codes of zoology and bacteriology, no taxonomic ranks are defined at the level between subgenus and species, [ 13 ] [ 14 ] but the botanical code defines four ranks below subgenus (section, subsection, series, and subseries). 
[ 15 ] Different informal taxonomic solutions have been used to indicate a species complex. Distinguishing close species within a complex requires the study of often very small differences. Morphological differences may be minute and visible only by the use of adapted methods, such as microscopy . However, distinct species sometimes have no morphological differences. [ 21 ] In those cases, other characters, such as in the species' life history , behavior , physiology , and karyology , may be explored. For example, territorial songs are indicative of species in the treecreepers , a bird genus with few morphological differences. [ 22 ] Mating tests are common in some groups such as fungi to confirm the reproductive isolation of two species. [ 23 ] Analysis of DNA sequences is becoming increasingly standard for species recognition and may, in many cases, be the only useful method. [ 21 ] Different methods are used to analyse such genetic data, such as molecular phylogenetics or DNA barcoding . Such methods have greatly contributed to the discovery of cryptic species, [ 21 ] [ 24 ] including such emblematic species as the fly agaric , [ 2 ] the water fleas , [ 25 ] or the African elephants . [ 3 ] Species forming a complex have typically diverged very recently from each other, which sometimes allows the retracing of the process of speciation . Species with differentiated populations, such as ring species , are sometimes seen as an example of early, ongoing speciation: a species complex in formation. Nevertheless, similar but distinct species have sometimes been isolated for a long time without evolving differences, a phenomenon known as "morphological stasis". [ 21 ] For example, the Amazonian frog Pristimantis ockendeni is actually at least three different species that diverged over 5 million years ago. [ 27 ] Stabilizing selection has been invoked as a force maintaining similarity in species complexes, especially when they adapted to special environments (such as a host in the case of symbionts or extreme environments). [ 21 ] This may constrain possible directions of evolution; in such cases, strongly divergent selection is not to be expected. [ 21 ] Also, asexual reproduction, such as through apomixis in plants, may separate lineages without producing a great degree of morphological differentiation. A species complex is usually a group that has one common ancestor (a monophyletic group), but closer examination can sometimes disprove that. For example, yellow-spotted "fire salamanders" in the genus Salamandra , formerly all classified as one species S. salamandra , are not monophyletic: the Corsican fire salamander 's closest relative has been shown to be the entirely black Alpine salamander . [ 26 ] In such cases, similarity has arisen from convergent evolution . Hybrid speciation can lead to unclear species boundaries through a process of reticulate evolution , in which species have two parent species as their most recent common ancestors . In such cases, the hybrid species may have intermediate characters, such as in Heliconius butterflies. [ 28 ] Hybrid speciation has been observed in various species complexes, such as insects, fungi, and plants. In plants, hybridization often takes place through polyploidization , and hybrid plant species are called nothospecies . Sources differ on whether or not members of a species group share a range . 
A source from the Iowa State University Department of Agronomy states that members of a species group usually have partially overlapping ranges but do not interbreed with one another. [ 29 ] A Dictionary of Zoology ( Oxford University Press 1999) describes a species group as a complex of related species that exist allopatrically and explains that the "grouping can often be supported by experimental crosses in which only certain pairs of species will produce hybrids ." [ 30 ] The examples given below may support both uses of the term "species group." Often, such complexes do not become evident until a new species is introduced into the system, which breaks down existing species barriers. An example is the introduction of the Spanish slug in Northern Europe , where interbreeding with the local black slug and red slug , which were traditionally considered clearly separate species that did not interbreed, shows that they may actually be just subspecies of the same species. [ 31 ] [ 32 ] Where closely related species co-exist in sympatry , it is often a particular challenge to understand how the similar species persist without outcompeting each other. Niche partitioning is one mechanism invoked to explain that. Indeed, studies in some species complexes suggest that species divergence has gone in parallel with ecological differentiation, with species now preferring different microhabitats. [ 33 ] Similar methods also found that the Amazonian frog Eleutherodactylus ockendeni is actually at least three different species that diverged over 5 million years ago. [ 27 ] A species flock may arise when a species penetrates a new geographical area and diversifies to occupy a variety of ecological niches , a process known as adaptive radiation . The first species flock to be recognized as such was the 13 species of Darwin's finches on the Galápagos Islands described by Charles Darwin . It has been suggested that cryptic species complexes are very common in the marine environment. [ 34 ] That suggestion came before the detailed analysis of many systems using DNA sequence data, but it has been proven to be correct. [ 35 ] The increased use of DNA sequence data in the investigation of organismal diversity (also called phylogeography and DNA barcoding ) has led to the discovery of a great many cryptic species complexes in all habitats. In the marine bryozoan Celleporella hyalina , [ 36 ] detailed morphological analyses and mating compatibility tests between the isolates identified by DNA sequence analysis were used to confirm that these groups consisted of more than 10 ecologically distinct species, which had been diverging for many millions of years. Pests, species that cause diseases, and their vectors have direct importance for humans. When they are found to be cryptic species complexes, the ecology and the virulence of each of these species need to be re-evaluated to devise appropriate control strategies, as their diversity increases the capacity for more dangerous strains to develop. Examples are cryptic species in the malaria vector genus of mosquito, Anopheles , the fungi causing cryptococcosis , and sister species of Bactrocera tryoni , the Queensland fruit fly. That pest is indistinguishable from two sister species except that B. tryoni inflicts widespread, devastating damage to Australian fruit crops, but the sister species do not. [ 38 ] When a species is found to be several phylogenetically distinct species, each typically has a smaller distribution range and population size than had been reckoned.
The different species can also differ in their ecology, such as by having different breeding strategies or habitat requirements, which must be taken into account for appropriate management. For example, giraffe populations and subspecies differ genetically to such an extent that they may be considered species. Although the giraffe, as a whole, is not considered to be threatened, if each cryptic species is considered separately, there is a much higher level of threat. [ 39 ]
https://en.wikipedia.org/wiki/Species_flock
In ecology , species homogeneity is a lack of biodiversity . Species richness is the fundamental unit with which to assess the homogeneity of an environment. Therefore, any reduction in species richness , especially of endemic species , could be argued to promote the production of a homogeneous environment. Homogeneity is pronounced in agriculture and forestry; in particular, industrial agriculture and forestry use a limited number of species. [ 1 ] About 7,000 plants (2.6% of all plant species) have been collected or cultivated for human consumption. Of these, a mere 200 have been domesticated and only a dozen contribute about 75% of the global intake of plant-derived calories. 95% of world consumption of protein derives from a few domesticated species, i.e. poultry , cattle and pigs . There are about 1,000 commercial fish species, but in aquaculture fewer than 10 species dominate global production. Human food production therefore rests on the tips of pyramids of biodiversity, leaving the majority of species not utilised and not domesticated. [ 2 ] Species naturally migrate and expand their ranges, utilising new habitats and resources, e.g. the cattle egret . These natural invasions , incursions in the absence of anthropogenic influences, occur "when an intervening barrier is removed, or through the development of biotic or abiotic transportation mechanisms, able to overcome the barrier in question". [ 3 ] Introductions, or human-mediated invasions, have in the last century become more frequent. [ 4 ] It is estimated that on an average day more than 3,000 species are in transit aboard ocean-going vessels. [ 5 ] Using species richness as the unit with which to assess global homogeneity, it appears that anthropogenic assistance in alien species establishment has done much to reduce the number of endemic species, especially on remote islands. Some 'species-poor' habitats may, however, benefit in diversity if an invader can occupy an empty niche. Arguably, that environment becomes more diverse; equally, it has also "become more similar to the rest of the world", [ 6 ] though ecological interactions between the invaders and the natives are likely to be unique. Indeed, many species are so well naturalised that they are considered native, yet they were originally introduced, with the best examples probably being the Roman and Norman introductions of the hare and the rabbit respectively to Britain. [ 7 ] Introduction of non-endemic species and subsequent eradication of species can happen remarkably fast; evolutionary tempo is, however, slow, and a "succession of rapid change [will] result in a great impoverishment". [ 8 ] That impoverishment will indeed equate to a world that is more similar, as there will simply be fewer species to generate differences.
https://en.wikipedia.org/wiki/Species_homogeneity
In biological classification , a species inquirenda is a species of doubtful identity requiring further investigation. [ 2 ] The use of the term in English-language biological literature dates back to at least the early nineteenth century. [ 3 ] The term taxon inquirendum is broader in meaning and refers to an incompletely defined taxon whose taxonomic validity is uncertain or disputed by different experts, or which is impossible to identify; further characterization is required. Certain species names may be designated unplaced names , which Plants of the World Online defines as "names that cannot be accepted, nor can they be put into synonymy". Unplaced names may be names which were not validly published, later homonyms which are therefore illegitimate, or species which cannot be accepted because the genus name is not accepted. Species names may remain unplaced if there is no accepted species in a genus in which it can be placed, or if the type material for the species is not known to exist, is insufficient to establish a clear identity, or has not been studied by experts in the group in order to properly place it. [ 4 ] Nomenclatural instability refers to the phenomenon where there is disagreement or lack of consensus regarding the naming of a species, which can lead to multiple proposed names for the same species. Synonymy occurs when different names are proposed for the same species; the names become synonyms , which can complicate classification and identification.
https://en.wikipedia.org/wiki/Species_inquirenda
The ecological and biogeographical concept of the species pool describes all species available that could potentially colonize and inhabit a focal habitat area. [ 1 ] [ 2 ] The concept lays emphasis on the fact that "local communities aren't closed systems , and that the species occupying any local site typically came from somewhere else"; however, the species pool concept may suffer from the logical fallacy of composition . [ 3 ] Most local communities have just a fraction of their species pool present. The concept is derived from MacArthur and Wilson's Island Biogeography Theory , which examines the factors that affect the species richness of isolated natural communities. It helps to understand the composition and richness of local communities and how they are influenced by biogeographic and evolutionary processes acting at large spatial and temporal scales. [ 1 ] The absent portion of the species pool, known as dark diversity , has been used to understand processes influencing local communities. [ 4 ] Methods to estimate potential but absent species are developing. [ 4 ] It has been hypothesized that there might be a direct correlation between species richness and the size of the species pool for plant communities . [ 5 ] Elsewhere, it was reported that "trade-offs and species pool structure (size and trait distribution) determines the shape of the plant productivity–diversity relationship". [ 6 ]
https://en.wikipedia.org/wiki/Species_pool
Species reintroduction is the deliberate release of a species into the wild, from captivity or other areas where the organism is capable of survival. [ 1 ] The goal of species reintroduction is to establish a healthy, genetically diverse , self-sustaining population to an area where it has been extirpated, or to augment an existing population . [ 2 ] Species that may be eligible for reintroduction are typically threatened or endangered in the wild. However, reintroduction of a species can also be for pest control ; for example, wolves being reintroduced to a wild area to curb an overpopulation of deer. Because reintroduction may involve returning native species to localities where they had been extirpated, some prefer the term " reestablishment ". [ 1 ] Humans have been reintroducing species for food and pest control for thousands of years. However, the practice of reintroducing for conservation is much younger, starting in the 20th century. [ 3 ] There are a variety of approaches to species reintroduction. The optimal strategy will depend on the biology of the organism. [ 4 ] The first matter to address when beginning a species reintroduction is whether to source individuals in situ , from wild populations, or ex situ , from captivity in a zoo or botanic garden, for example. In situ sourcing for restorations involves moving individuals from an existing wild population to a new site where the species was formerly extirpated . Ideally, populations should be sourced in situ when possible due to the numerous risks associated with reintroducing organisms from captive populations to the wild. [ 5 ] To ensure that reintroduced populations have the best chance of surviving and reproducing, individuals should be sourced from populations that genetically and ecologically resemble the recipient population. [ 6 ] Generally, sourcing from populations with similar environmental conditions to the reintroduction site will maximize the chance that reintroduced individuals are well adapted to the habitat of the reintroduction site otherwise there are possibilities that they will not take to their environment. . [ 7 ] [ 6 ] One consideration for in situ sourcing is at which life stage the organisms should be collected, transported, and reintroduced. For instance, with plants, it is often ideal to transport them as seeds as they have the best chance of surviving translocation at this stage. However, some plants are difficult to establish as seed and may need to be translocated as juveniles or adults. [ 4 ] In situations where in situ collection of individuals is not feasible, such as for rare and endangered species with too few individuals existing in the wild, ex situ collection is possible. Ex situ collection methods allow storage of individuals that have high potential for reintroduction. Storage examples include germplasm stored in seed banks, sperm and egg banks, cryopreservation , and tissue culture. [ 5 ] Methods that allow for storage of a high numbers of individuals also aim to maximize genetic diversity. Stored materials generally have long lifespans in storage, but some species do lose viability when stored as seed. [ 8 ] Tissue culture and cryopreservation techniques have only been perfected for a few species. [ 9 ] Organisms may also be kept in living collections in captivity. Living collections are more costly than storing germplasm and hence can support only a fraction of the individuals that ex situ sourcing can. [ 5 ] Risk increases when sourcing individuals to add to living collections. 
Loss of genetic diversity is a concern because fewer individuals stored. [ 10 ] Individuals may also become genetically adapted to captivity, which often adversely affects the reproductive fitness of individuals. Adaptation to captivity may make individuals less suitable for reintroduction to the wild. Thus, efforts should be made to replicate wild conditions and minimize time spent in captivity whenever possible. [ 11 ] Reintroduction biology is a relatively young discipline and continues to be a work in progress. No strict and accepted definition of reintroduction success exists, but it has been proposed that the criteria widely used to assess the conservation status of endangered taxa, such as the IUCN Red List criteria, should be used to assess reintroduction success. [ 12 ] Successful reintroduction programs should yield viable and self-sustainable populations in the long-term. The IUCN/SSC Re-introduction Specialist Group & Environment Agency, in their 2011 Global Re-introduction Perspectives, compiled reintroduction case studies from around the world. [ 13 ] 184 case studies were reported on a range of species which included invertebrates , fish , amphibians , reptiles , birds , mammals , and plants . Assessments from all of the studies included goals, success indicators, project summary, major difficulties faced, major lessons learned, and success of project with reasons for success or failure. A similar assessment focused solely on plants found high rates of success for rare species reintroductions. [ 14 ] An analysis of data from the Center for Plant Conservation International Reintroduction Registry found that, for the 49 cases where data were available, 92% of the reintroduced plant populations survived two years. The Siberian tiger population has rebounded from 40 individuals in the 1940s to around 500 in 2007. The Siberian tiger population is now the largest un-fragmented tiger population in the world. [ 15 ] Yet, a high proportion of translocations and reintroductions have not been successful in establishing viable populations. [ 16 ] For instance, in China reintroduction of captive Giant Pandas have had mixed effects. The initial pandas released from captivity all died quickly after reintroduction. [ 17 ] Even now that they have improved their ability to reintroduce pandas, concern remains over how well the captive-bred pandas will fare with their wild relatives. [ 18 ] Many factors can attribute to the success or failure of a reintroduction. Predators, food, pathogens, competitors, and weather can all affect a reintroduced population's ability to grow, survive, and reproduce. The number of animals reintroduced in an attempt should also vary with factors such as social behavior, expected rates of predation, and density in the wild. [ 19 ] Animals raised in captivity may experience stress during captivity or translocation, which can weaken their immune systems. [ 20 ] The IUCN reintroduction guidelines emphasize the need for an assessment of the availability of suitable habitat as a key component of reintroduction planning. [ 21 ] Poor assessment of the release site can increase the chances that the species will reject the site and perhaps move to a less suitable environment. This can decrease the species fitness and thus decrease chances for survival. [ 20 ] They state that restoration of the original habitat and amelioration of causes of extinction must be explored and considered as essential conditions for these projects. 
Unfortunately, the monitoring period that should follow reintroductions often remains neglected. [ 22 ] When a species has been extirpated from a site where it previously existed, individuals that will comprise the reintroduced population must be sourced from wild or captive populations. When sourcing individuals for reintroduction, it is important to consider local adaptation , adaptation to captivity (for ex situ conservation ), the possibility of inbreeding depression and outbreeding depression , and taxonomy , ecology , and genetic diversity of the source population. [ 2 ] Reintroduced populations experience increased vulnerability to influences of drift , selection , and gene flow evolutionary processes due to their small sizes, climatic and ecological differences between source and native habitats, and presence of other mating-compatible populations. [ 11 ] [ 23 ] [ 24 ] [ 25 ] If the species slated for reintroduction is rare in the wild, it is likely to have unusually low population numbers, and care should be taken to avoid inbreeding and inbreeding depression . [ 2 ] Inbreeding can change the frequency of allele distribution in a population, and potentially result in a change to crucial genetic diversity. [ 2 ] Additionally, outbreeding depression can occur if a reintroduced population can hybridize with existing populations in the wild, which can result in offspring with reduced fitness, and less adaptation to local conditions. To minimize both, practitioners should source for individuals in a way that captures as much genetic diversity as possible, and attempt to match source site conditions to local site conditions as much as possible. [ 2 ] Capturing as much genetic diversity as possible, measured as heterozygosity , is suggested in species reintroductions. [ 2 ] Some protocols suggest sourcing approximately 30 individuals from a population will capture 95% of the genetic diversity. [ 2 ] Maintaining genetic diversity in the recipient population is crucial to avoiding the loss of essential local adaptations, minimizing inbreeding depression, and maximizing fitness of the reintroduced population. Plants or animals that undergo reintroduction may exhibit reduced fitness if they are not sufficiently adapted to local environmental conditions. Therefore, researchers should consider ecological and environmental similarity of source and recipient sites when selecting populations for reintroduction. Environmental factors to consider include climate and soil traits (pH, percent clay, silt and sand, percent combustion carbon, percent combustion nitrogen, concentration of Ca, Na, Mg, P, K). [ 6 ] Historically, sourcing plant material for reintroductions has followed the rule "local is best," as the best way to preserve local adaptations, with individuals for reintroductions selected from the most geographically proximate population. [ 26 ] However, geographic distance was shown in a common garden experiment to be an insufficient predictor of fitness. [ 6 ] Additionally, projected climatic shifts induced by climate change have led to the development of new seed sourcing protocols that aim to source seeds that are best adapted to project climate conditions. [ 27 ] Conservation agencies have developed seed transfer zones that serve as guidelines for how far plant material can be transported before it will perform poorly. 
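The "approximately 30 individuals" guideline mentioned above can be motivated with a standard back-of-envelope calculation: in a random diploid sample of n founders (2n gene copies), an allele at frequency p is included with probability 1 - (1 - p)^(2n). The short Python sketch below is an illustration of that arithmetic, not the derivation used in the cited protocols.

```python
# Probability that an allele at frequency p is included in a random diploid
# sample of n founders (2n gene copies); assumes random mating and sampling.
def capture_probability(n_individuals, allele_frequency):
    return 1.0 - (1.0 - allele_frequency) ** (2 * n_individuals)

for n in (10, 20, 30, 50):
    print(n, round(capture_probability(n, allele_frequency=0.05), 3))
# With n = 30, an allele at frequency 0.05 is sampled with probability ~0.95,
# which is one way to motivate the "about 30 founders" rule of thumb.
```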
[ 28 ] Seed transfer zones take into account proximity, ecological conditions, and climatic conditions in order to predict how plant performance will vary from one zone to the next. A study of the reintroduction of Castilleja levisecta found that the source populations most physically near the reintroduction site performed the poorest in a field experiment, while those from the source population whose ecological conditions most closely matched the reintroduction site performed best, demonstrating the importance of matching the evolved adaptations of a population to the conditions at the reintroduction site. [ 29 ] Some reintroduction programs use plants or animals from captive populations to form a reintroduced population. [ 2 ] When reintroducing individuals from a captive population to the wild, there is a risk that they have adapted to captivity due to differential selection of genotypes in captivity versus the wild. The genetic basis of this adaptation is selection of rare, recessive alleles that are deleterious in the wild but preferred in captivity. [ 11 ] Consequently, animals adapted to captivity show reduced stress tolerance, increased tameness, and loss of local adaptations. [ 30 ] Plants also can show adaptations to captivity through changes in drought tolerance, nutrient requirements, and seed dormancy requirements. [ 31 ] Extent of adaptation is directly related to intensity of selection, genetic diversity, effective population size and number of generations in captivity. Characteristics selected for in captivity are overwhelmingly disadvantageous in the wild, so such adaptations can lead to reduced fitness following reintroduction. Reintroduction projects that introduce wild animals generally experience higher success rates than those that use captive-bred animals. [ 11 ] Genetic adaptation to captivity can be minimized through management methods: by maximizing generation length and number of new individuals added to the captive population; minimizing effective population size, number of generations spent in captivity, and selection pressure; and reducing genetic diversity by fragmenting the population. [ 2 ] [ 11 ] For plants, minimizing adaptation to captivity is usually achieved by sourcing plant material from a seed bank , where individuals are preserved as wild-collected seeds, and have not had the chance to adapt to conditions in captivity. However, this method is only plausible for plants with seed dormancy . [ 11 ] In reintroductions from captivity, translocation of animals from captivity to the wild has implications for both captive and wild populations. Reintroduction of genetically valuable animals from captivity improves genetic diversity of reintroduced populations while depleting captive populations; conversely, genetically valuable captive-bred animals may be closely related to individuals in the wild and thus increase risk of inbreeding depression if reintroduced. Increasing genetic diversity is favored with removal of genetically overrepresented individuals from captive populations and addition of animals with low genetic relatedness to the wild. [ 32 ] [ 33 ] However, in practice, initial reintroduction of individuals with low genetic value to the captive population is recommended to allow for genetic assessment before translocation of valuable individuals. [ 33 ] A cooperative approach to reintroduction by ecologists and biologists could improve research techniques. 
For both preparation and monitoring of reintroductions, increasing contacts between academic population biologists and wildlife managers is encouraged within the Survival Species Commission and the IUCN. The IUCN states that a re-introduction requires a multidisciplinary approach involving a team of persons drawn from a variety of backgrounds. [ 21 ] A survey by Wolf et al. in 1998 indicated that 64% of reintroduction projects have used subjective opinion to assess habitat quality. [ 20 ] This means that most reintroduction evaluation has been based on human anecdotal evidence and not enough has been based on statistical findings. Seddon et al. (2007) suggest that researchers contemplating future reintroductions should specify goals, overall ecological purpose, and inherent technical and biological limitations of a given reintroduction, and planning and evaluation processes should incorporate both experimental and modeling approaches. [ 3 ] Monitoring the health of individuals, as well as the survival, is important; both before and after the reintroduction. Intervention may be necessary if the situation proves unfavorable. [ 21 ] Population dynamics models that integrate demographic parameters and behavioral data recorded in the field can lead to simulations and tests of a priori hypotheses. Using previous results to design further decisions and experiments is a central concept of adaptive management . In other words, learning by doing can help in future projects. Population ecologists should therefore collaborate with biologists, ecologists, and wildlife management to improve reintroduction programs. [ 34 ] For reintroduced populations to successfully establish and maximize reproductive fitness, practitioners should perform genetic tests to select which individuals will be the founders of reintroduced populations and to continue monitoring populations post-reintroduction. [ 4 ] A number of methods are available to measure the genetic relatedness between and variation among individuals within populations. Common genetic diversity assessment tools include microsatellite markers, mitochondrial DNA analyses, alloenzymes , and amplified fragment length polymorphism markers. [ 35 ] Post-reintroduction, genetic monitoring tools can be used to obtain data such as population abundance, effective population size , and population structure , and can also be used to identify instances of inbreeding within reintroduced populations or hybridization with existing populations that are genetically compatible. Long-term genetic monitoring is recommended post-reintroduction to track changes in genetic diversity of the reintroduced population and determine success of a reintroduction program. Adverse genetic changes such as loss of heterozygosity may indicate management intervention, such as population supplementation, is necessary for survival of the reintroduced population. [ 36 ] [ 37 ] [ 38 ] The RSG is a network of specialists whose aim is to combat the ongoing and massive loss of biodiversity by using re-introductions as a responsible tool for the management and restoration of biodiversity. It does this by actively developing and promoting sound inter-disciplinary scientific information, policy, and practice to establish viable wild populations in their natural habitats. The role of the RSG is to promote the re-establishment of viable populations in the wild of animals and plants. 
The need for this role was felt due to the increased demand from re-introduction practitioners, the global conservation community and increase in re-introduction projects worldwide. Increasing numbers of animal and plant species are becoming rare, or even extinct in the wild. In an attempt to re-establish populations, species can – in some instances – be re-introduced into an area, either through translocation from existing wild populations, or by re-introducing captive-bred animals or artificially propagated plants.
https://en.wikipedia.org/wiki/Species_reintroduction
Species sorting is a mechanism in the metacommunity framework of ecology whereby species distributions and abundances can be related to the environmental or biotic conditions in a particular habitat. The species sorting paradigm [ 1 ] describes a system of habitat patches with different environmental conditions that organisms can move between. Species are able to disperse to patches with suitable environmental conditions, resulting in patterns where environmental conditions can predict the species found in a particular habitat . [ 2 ]
https://en.wikipedia.org/wiki/Species_sorting
The species–area relationship or species–area curve describes the relationship between the area of a habitat , or of part of a habitat, and the number of species found within that area. Larger areas tend to contain larger numbers of species, and empirically, the relative numbers seem to follow systematic mathematical relationships. [ 1 ] The species–area relationship is usually constructed for a single type of organism, such as all vascular plants or all species of a specific trophic level within a particular site. It is rarely, if ever, constructed for all types of organisms, simply because of the prodigious data requirements. It is related but not identical to the species discovery curve . Ecologists have proposed a wide range of factors determining the slope and elevation of the species–area relationship. [ 2 ] These factors include the relative balance between immigration and extinction, [ 3 ] rate and magnitude of disturbance on small vs. large areas, [ 3 ] predator-prey dynamics, [ 4 ] and clustering of individuals of the same species as a result of dispersal limitation or habitat heterogeneity . [ 5 ] The species–area relationship has been reputed to follow from the 2nd law of thermodynamics . [ 6 ] In contrast to these "mechanistic" explanations, others assert the need to test whether the pattern is simply the result of a random sampling process. [ 7 ] Species–area relationships are often evaluated in conservation science in order to predict extinction rates in the case of habitat loss and habitat fragmentation . [ 8 ] Authors have classified the species–area relationship according to the type of habitats being sampled and the census design used. Frank W. Preston , an early investigator of the theory of the species–area relationship, divided it into two types: samples (a census of a contiguous habitat that grows in the census area, also called "mainland" species–area relationships), and isolates (a census of discontiguous habitats, such as islands, also called "island" species–area relationships). [ 1 ] Michael Rosenzweig also notes that species–area relationships for very large areas (those collecting different biogeographic provinces or continents) behave differently from species–area relationships from islands or smaller contiguous areas. [ 2 ] It has been presumed that "island"-like species–area relationships have steeper slopes (in log–log space ) than "mainland" relationships, [ 2 ] but a 2006 meta-analysis of almost 700 species–area relationships found the former had lower slopes than the latter. [ 9 ] Regardless of census design and habitat type, species–area relationships are often fitted with a simple function. Frank Preston advocated the power function based on his investigation of the lognormal species-abundance distribution . [ 1 ] If $S$ is the number of species, $A$ is the habitat area, and $z$ is the slope of the species–area relationship in log–log space, then the power-function species–area relationship is $S = cA^{z}$. Here $c$ is a constant which depends on the unit used for area measurement, and equals the number of species that would exist if the habitat area were confined to one square unit.
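In practice, $z$ and $c$ are typically estimated by fitting a straight line to log-transformed data, as in the linearized form given in the next paragraph. A minimal sketch in Python with invented island counts:

```python
import numpy as np

# Invented island survey: area (km^2) and number of species recorded.
area = np.array([1, 5, 10, 50, 100, 500, 1000], dtype=float)
species = np.array([12, 20, 25, 41, 50, 80, 95], dtype=float)

# The power model S = c * A**z is linear in log-log space:
# log S = log c + z * log A, so fit a straight line to the logged data.
z, log_c = np.polyfit(np.log(area), np.log(species), deg=1)
c = np.exp(log_c)
print(f"z = {z:.2f}, c = {c:.1f}")

# Predicted richness for a new area under the fitted curve.
print("predicted S at 200 km^2:", round(c * 200 ** z, 1))
```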
The graph looks like a straight line on log–log axes , and can be linearized as $\log(S) = \log(cA^{z}) = \log(c) + z\log(A)$. In contrast, Henry Gleason championed the semilog model, $S = \log(cA^{z}) = \log(c) + z\log(A)$, which looks like a straight line on semilog axes , where the area is logged and the number of species is arithmetic. In either case, the species–area relationship is almost always decelerating (has a negative second derivative) when plotted arithmetically. [ 10 ] Species–area relationships are often graphed for islands (or habitats that are otherwise isolated from one another, such as woodlots in an agricultural landscape) of different sizes. [ 3 ] Although larger islands tend to have more species, a smaller island may have more than a larger one. In contrast, species–area relationships for contiguous habitats will always rise as area increases, provided that the sample plots are nested within one another. The species–area relationship for mainland areas (contiguous habitats) will differ according to the census design used to construct it. [ 11 ] A common method is to use quadrats of successively larger size so that the area enclosed by each one includes the area enclosed by the smaller one (i.e. areas are nested). In the first part of the 20th century, plant ecologists often used the species–area curve to estimate the minimum size of a quadrat necessary to adequately characterize a community. This is done by plotting the curve (usually on arithmetic axes, not log–log or semilog axes), and estimating the area after which using larger quadrats results in the addition of only a few more species. This is called the minimal area. A quadrat that encloses the minimal area is called a relevé , and using species–area curves in this way is called the relevé method. It was largely developed by the Swiss ecologist Josias Braun-Blanquet . [ 12 ] Estimation of the minimal area from the curve is necessarily subjective, so some authors prefer to define the minimal area as the area enclosing at least 95 percent (or some other large proportion) of the total species found. The problem with this is that the species–area curve does not usually approach an asymptote , so it is not obvious what should be taken as the total. [ 12 ] The number of species always increases with area up to the point where the area of the entire world has been accumulated. [ 13 ]
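The "95 percent of the total species" definition of minimal area is straightforward to operationalize for nested-quadrat data, with the caveat noted above that the "total" is only the total found in the largest area actually sampled. A short sketch with made-up numbers:

```python
import numpy as np

# Invented nested-quadrat data: quadrat area (m^2) and cumulative species count.
area = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
cumulative_species = np.array([5, 8, 12, 15, 18, 19, 20, 20], dtype=float)

# Minimal area defined operationally as the smallest sampled area holding at
# least 95% of the total species found across all nested quadrats.
threshold = 0.95 * cumulative_species.max()
minimal_area = area[np.argmax(cumulative_species >= threshold)]
print("estimated minimal area:", minimal_area, "m^2")   # 32.0 with these numbers
```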
https://en.wikipedia.org/wiki/Species–area_relationship
Specific-pathogen-free (SPF) is a term used for laboratory animals that are guaranteed free of particular pathogens . Use of SPF animals ensures that specified diseases do not interfere with an experiment. For example, absence of respiratory pathogens such as influenza is desirable when investigating a drug's effect on lung function. The animals can be born through a caesarian section then special care taken so the newborn does not acquire infections, such as use of sterile isolation units with a positive pressure differential to keep all outside air and pathogens from entering. Everything that needs to be inserted into the isolator, such as food, water and equipment needs to be completely sterilized and disinfected, and inserted through an airlock that can be disinfected before opening from the inside. A disadvantage is that any contact with pathogens may be fatal. This is because the animals have no protective bacterial microbiota on the skin or in the intestine or respiratory tract , and because they have no natural immunity to common infections as they have never been exposed to them. To certify SPF, the population is checked for presence of ( antibodies against) the specified pathogens. For SPF eggs the specific pathogens are: Avian Adenovirus Group I, Avian Adenovirus Group II (HEV), Avian Adenovirus Group III (EDS), Avian Encephalomyelitis, Avian Influenza (Type A), Avian Nephritis Virus, Avian Paramyxovirus Type 2, Avian Reovirus S 1133, Avian Rhinotracheitis Virus; Avian Rotavirus; Avian Tuberculosis M. avium; Chicken Anemia Virus; Endogenous GS Antigen; Fowl Pox; Hemophilus paragallinarum Serovars A, B, C; Infectious Bronchitis - Ark; Infectious Bronchitis - Conn; Infectious Bronchitis - JMK; Infectious Bronchitis - Mass; Infectious Bursal Disease Type 1; Infectious Bursal Disease Type 2; Infectious Laryngotracheitis; Lymphoid Leukosis A, B; Avian Lymphoid Leukosis Virus; Lymphoid Leukosis Viruses A, B, C, D, E, J; Marek's Disease (Serotypes 1,2, 3); Mycoplasma gallisepticum; Mycoplasma synoviae; Newcastle Disease LaSota; Reticuloendotheliosis Virus; Salmonella pullorum-gallinarum; Salmonella species; When by accident some infection does occur, the population is said to have minimal disease status. The population is regularly checked to ensure the status still holds. SPF eggs can be used to make vaccines . Mice raised under SPF conditions (no Helicobacter pylori ) were shown to develop colitis rather than enterocolitis . [ 1 ]
https://en.wikipedia.org/wiki/Specific-pathogen-free
Specific absorption rate ( SAR ) is a measure of the rate at which energy is absorbed per unit mass by a human body when exposed to a radio frequency (RF) electromagnetic field . It is defined as the power absorbed per mass of tissue and has units of watts per kilogram (W/kg). [ 1 ] SAR is usually averaged either over the whole body, or over a small sample volume (typically 1 g or 10 g of tissue). The value cited is then the maximum level measured in the body part studied over the stated volume or mass. SAR for electromagnetic energy can be calculated from the electric field within the tissue as where SAR measures exposure to fields between 100 kHz and 10 GHz (known as radio waves). [ 2 ] It is commonly used to measure power absorbed from mobile phones and during MRI scans. The value depends heavily on the geometry of the part of the body that is exposed to the RF energy and on the exact location and geometry of the RF source. Thus tests must be made with each specific source, such as a mobile-phone model and at the intended position of use. When measuring the SAR due to a mobile phone the phone is placed against a representation of a human head (a "SAR Phantom") in a talk position. The SAR value is then measured at the location that has the highest absorption rate in the entire head, which in the case of a mobile phone is often as close to the phone's antenna as possible. Measurements are made for different positions on both sides of the head and at different frequencies representing the frequency bands at which the device can transmit. Depending on the size and capabilities of the phone, additional testing may also be required to represent usage of the device while placed close to the user's body and/or extremities. Various governments have defined maximum SAR levels for RF energy emitted by mobile devices: SAR values are heavily dependent on the size of the averaging volume. Without information about the averaging volume used, comparisons between different measurements cannot be made. Thus, the European 10-gram ratings should be compared among themselves, and the American 1-gram ratings should only be compared among themselves. To check SAR on your mobile phone, review the documentation provided with the phone, dial *#07# (only works on some models) or visit the manufacturer's website. For magnetic resonance imaging the limits (described in IEC 60601-2-33 ) are slightly more complicated: SAR limits set by law do not consider that the human body is particularly sensitive to the power peaks or frequencies responsible for the microwave hearing effect . [ 3 ] [ 4 ] Frey reports that the microwave hearing effect occurs with average power density exposures of 400 μW/cm 2 , well below SAR limits (as set by government regulations). [ 3 ] Notes: In comparison to the short term, relatively intensive exposures described above, for long-term environmental exposure of the general public there is a limit of 0.08 W/kg averaged over the whole body. [ 2 ] A whole-body average SAR of 0.4 W/kg has been chosen as the restriction that provides adequate protection for occupational exposure. An additional safety factor of 5 is introduced for exposure of the public, giving an average whole-body SAR limit of 0.08 W/kg. 
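For reference, the expression for SAR in terms of the local electric field that is alluded to earlier in this article appears to have dropped out of the text. The standard point definition, offered here as a reconstruction rather than necessarily the exact form the original used, is SAR = σ(r) |E(r)|² / ρ(r), where σ is the electrical conductivity of the tissue (S/m), E is the RMS electric field induced in the tissue (V/m), and ρ is the tissue mass density (kg/m³). Averaging this quantity over the whole body, or over a 1 g or 10 g sample volume, gives the reported SAR values in W/kg.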
The FCC guide "Specific Absorption Rate (SAR) For Cell Phones: What It Means For You", after detailing the limitations of SAR values, offers the following "bottom line" editorial: ALL cell phones must meet the FCC’s RF exposure standard, which is set at a level well below that at which laboratory testing indicates, and medical and biological experts generally agree, adverse health effects could occur. For users who are concerned with the adequacy of this standard or who otherwise wish to further reduce their exposure, the most effective means to reduce exposure are to hold the cell phone away from the head or body and to use a speakerphone or hands-free accessory. These measures will generally have much more impact on RF energy absorption than the small difference in SAR between individual cell phones, which, in any event, is an unreliable comparison of RF exposure to consumers, given the variables of individual use. [ 5 ] In order to find out possible advantages and the interaction mechanisms of electromagnetic fields (EMF), the minimum SAR (or intensity) that could have biological effect (MSBE) would be much more valuable in comparison to studying high-intensity fields. Such studies can possibly shed light on thresholds of non-ionizing radiation effects and cell capabilities (e.g., oxidative response ). In addition, it is more likely to reduce the complexity of the EMF interaction targets in cell cultures by lowering the exposure power, which at least reduces the overall rise in temperature. This parameter might differ regarding the case under study and depends on the physical and biological conditions of the exposed target. [ 6 ] The FCC regulations for SAR are contained in 47 C.F.R. 1.1307(b), 1.1310, 2.1091, 2.1093 and also discussed in OET Bulletin No. 56, " Questions and Answers About the Biological Effects and Potential Hazards of Radiofrequency Electromagnetic Fields. " [ 7 ] Specific energy absorption rate (SAR) averaged over the whole body or over parts of the body, is defined as the rate at which energy is absorbed per unit mass of body tissue and is expressed in watts per kilogram (W/kg). Whole body SAR is a widely accepted measure for relating adverse thermal effects to RF exposure. [ 8 ] Legislative acts in the European Union include directive 2013/35/EU of the European Parliament and of the Council of 26 June 2013 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (electromagnetic fields) (20th individual Directive within the meaning of Article 16(1) of Directive 89/391/EEC) and repealing Directive 2004/40/EC) in its annex III "THERMAL EFFECTS" for "EXPOSURE LIMIT VALUES AND ACTION LEVELS IN THE FREQUENCY RANGE FROM 100 kHz TO 300 GHz". [ 9 ]
https://en.wikipedia.org/wiki/Specific_absorption_rate
Specific appetite , also known as specific hunger , is a drive to eat foods with specific flavors or other characteristics. [ 1 ] Regulation of homeostasis is essential to the survival of animals. Because the nutritional content of a diet will vary with environmental and other conditions, it is useful for animals to have a mechanism to ensure that their nutritional needs are within the appropriate range. Specific appetite is one such mechanism. Specific appetite has been demonstrated in various species for a number of vitamins and minerals, as well as calories, protein, and water. Unfortunately, specific appetite is very difficult to study experimentally, as there are a number of factors that influence food choice. Very little is known about the specific mechanisms inducing specific appetite, and the genes encoding for specific appetites are mostly speculative. Very few specific appetites for particular nutrients have been identified in humans. The most robustly identified are salt appetite / sodium appetite . The problem with many other nutrients is that they do not have distinctly identifiable tastes, and only two other specific appetites, for iron and calcium, have been identified with experimental rigour so far. Other appetites are thus currently classified as learned appetites , which are not innate appetites that are triggered automatically in the absence of certain nutrients, but learned behaviours, aversions to or preferences for certain foods as they become associated with experiences of malnutrition and illness. [ 1 ] If a food source has an identifiable flavor, an animal can learn to associate the positive effects of alleviation of a certain nutrient deficiency with consumption of that food. This has been demonstrated in a variety of species: lambs offered free choice of various foods will compensate for phosphorus, sodium, and calcium deficiencies. [ 2 ] Domestic fowl have demonstrated specific appetites for calcium, zinc, and phosphorus, thiamine , protein in general, and methionine and lysine . Heat-stressed fowls seek out vitamin C , which alleviates the consequences of heat stress. [ 3 ] Learned specific appetites are not necessarily a result of an animal's ability to detect the presence of a nutrient. Because nutrient deficiencies of various types can have stressful effects which vary depending on the missing nutrient, subsequent ingestion of that nutrient is associated with relief of certain signs. An animal may therefore associate the flavor of a food that is high in a certain nutrient with relief of the signs of that nutrient deficiency, while not seeking out other foods rich in the same nutrient. An unlearned appetite is one that an animal possesses at birth, without conditioning through prior stress followed by alleviation with specific foods. An unlearned appetite suggests a physiological mechanism for detecting the absence of a nutrient as well as a signalling component that directs the animal to seek out the missing nutrient. An example of an unlearned appetite might be caloric appetite, as seen in all domestic animals . Other unlearned appetites are more difficult to demonstrate. In one study, protein-deficient rats that had not previously experienced protein deficiency demonstrated strong preferences for high-protein foods such as soybean, gluten, and ovalbumin within thirty minutes of food presentation. This preference was not seen in controls, and was also exhibited by pregnant females with higher protein needs who were not experimentally protein-deficient. 
[ 4 ] Rats also seem to have an unlearned appetite for calcium and sodium. [ 5 ] In addition, zinc-depleted chicks show preferences for zinc-rich feeds. [ 6 ] Specific appetite can be indirectly induced under experimental circumstances. In one study, normal (sodium-replete) rats exposed to angiotensin II via infusion directly into the brain developed a strong sodium appetite which persisted for months. [ 7 ] However, the conclusions of this experiment have been contested. [ 8 ] Nicotine implants in rats have been shown to induce a specific appetite for sucrose , even after removal of the implants. [ 9 ] There is very little strong evidence for specific appetite in humans. However, it has been demonstrated that humans have the ability to taste calcium, [ 10 ] and indirect evidence supports the idea that patients on kidney dialysis who develop hypocalcemia prefer cheese with greater amounts of calcium added. [ 11 ] Exercise also increases the preference for salt. [ 12 ] Some diseases, including Gitelman syndrome and the salt-wasting variant of Congenital adrenal hyperplasia , impair the kidney's ability to retain sodium in the body and cause a specific craving for sodium. [ 13 ] Extreme sodium depletion in human volunteers has been demonstrated to increase the desire for high-salt foods. [ 14 ] While the most common nutritional disorders in humans concern excessive intake of calories, malnutrition remains a problem. For example, the link between insufficient dietary calcium and bone disorders is well established [ 15 ] Commonly people have an appetite for meat or eggs, high protein foods. But these may be expensive or otherwise unavailable. A specific appetite for protein may be unsatisfied with the ingestion of a diet deficient in protein. Because protein is vitally important to maintaining the structures of the body’s systems, a form of protein-mediated hunger has been hypothesised. The protein leverage hypothesis posits that hunger is principally satiated by protein, with caloric intake being escalated until this need is met. [ 16 ]
https://en.wikipedia.org/wiki/Specific_appetite
Specific detectivity , or D* , for a photodetector is a figure of merit used to characterize performance, equal to the reciprocal of noise-equivalent power (NEP), normalized per square root of the sensor's area and frequency bandwidth (reciprocal of twice the integration time). Specific detectivity is given by D ∗ = A Δ f N E P {\displaystyle D^{*}={\frac {\sqrt {A\Delta f}}{NEP}}} , where A {\displaystyle A} is the area of the photosensitive region of the detector, Δ f {\displaystyle \Delta f} is the bandwidth, and NEP the noise equivalent power in units [W]. It is commonly expressed in Jones units ( c m ⋅ H z / W {\displaystyle cm\cdot {\sqrt {Hz}}/W} ) in honor of Robert Clark Jones who originally defined it. [ 1 ] [ 2 ] Given that noise-equivalent power can be expressed as a function of the responsivity R {\displaystyle {\mathfrak {R}}} (in units of A / W {\displaystyle A/W} or V / W {\displaystyle V/W} ) and the noise spectral density S n {\displaystyle S_{n}} (in units of A / H z 1 / 2 {\displaystyle A/Hz^{1/2}} or V / H z 1 / 2 {\displaystyle V/Hz^{1/2}} ) as N E P = S n R {\displaystyle NEP={\frac {S_{n}}{\mathfrak {R}}}} , it is common to see the specific detectivity expressed as D ∗ = R ⋅ A S n {\displaystyle D^{*}={\frac {{\mathfrak {R}}\cdot {\sqrt {A}}}{S_{n}}}} . It is often useful to express the specific detectivity in terms of relative noise levels present in the device. A common expression is given below. With q as the electronic charge, λ {\displaystyle \lambda } is the wavelength of interest, h is the Planck constant, c is the speed of light, k is the Boltzmann constant, T is the temperature of the detector, R 0 A {\displaystyle R_{0}A} is the zero-bias dynamic resistance area product (often measured experimentally, but also expressible in noise level assumptions), η {\displaystyle \eta } is the quantum efficiency of the device, and Φ b {\displaystyle \Phi _{b}} is the total flux of the source (often a blackbody) in photons/sec/cm 2 . Detectivity can be measured from a suitable optical setup using known parameters. You will need a known light source with known irradiance at a given standoff distance. The incoming light source will be chopped at a certain frequency, and then each wavelength will be integrated over a given time constant over a given number of frames. In detail, we compute the bandwidth Δ f {\displaystyle \Delta f} directly from the integration time constant t c {\displaystyle t_{c}} . Next, an average signal and rms noise needs to be measured from a set of N {\displaystyle N} frames. This is done either directly by the instrument, or done as post-processing. Now, the computation of the radiance H {\displaystyle H} in W/sr/cm 2 must be computed where cm 2 is the emitting area. Next, emitting area must be converted into a projected area and the solid angle ; this product is often called the etendue . This step can be obviated by the use of a calibrated source, where the exact number of photons/s/cm 2 is known at the detector. If this is unknown, it can be estimated using the black-body radiation equation, detector active area A d {\displaystyle A_{d}} and the etendue. This ultimately converts the outgoing radiance of the black body in W/sr/cm 2 of emitting area into one of W observed on the detector. The broad-band responsivity, is then just the signal weighted by this wattage. where From this metric noise-equivalent power can be computed by taking the noise level over the responsivity. 
Similarly, noise-equivalent irradiance can be computed using the responsivity in units of photons/s/W instead of in units of the signal. Finally, the specific detectivity is obtained by dividing the square root of the detector area–bandwidth product by the noise-equivalent power. This article incorporates public domain material from Federal Standard 1037C (General Services Administration).
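As a concrete illustration of the definitions used in this article, the sketch below evaluates NEP = S_n/R and D* = sqrt(A·Δf)/NEP = R·sqrt(A)/S_n for a hypothetical photodetector; the area, responsivity and noise-density values are invented for illustration.

```python
import math

# Illustrative (made-up) detector parameters:
area_cm2 = 0.01          # photosensitive area A, cm^2
responsivity = 0.5       # R, A/W
noise_density = 1e-13    # S_n, A/Hz^0.5
bandwidth = 1.0          # Δf, Hz (D* is conventionally quoted per 1 Hz)

nep = noise_density / responsivity                # noise-equivalent power for 1 Hz, W
d_star = math.sqrt(area_cm2 * bandwidth) / nep    # Jones = cm·Hz^0.5/W

# Equivalent form written directly in terms of responsivity and noise density:
d_star_alt = responsivity * math.sqrt(area_cm2) / noise_density
print(f"NEP = {nep:.2e} W, D* = {d_star:.2e} Jones ({d_star_alt:.2e})")
```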
https://en.wikipedia.org/wiki/Specific_detectivity
Specific dynamic action ( SDA ), also known as thermic effect of food ( TEF ) or dietary induced thermogenesis ( DIT ), is the amount of energy expenditure above the basal metabolic rate due to the cost of processing food for use and storage. [ 1 ] Heat production by brown adipose tissue which is activated after consumption of a meal is an additional component of dietary induced thermogenesis. [ 2 ] The thermic effect of food is one of the components of metabolism along with resting metabolic rate and the exercise component. A commonly used estimate of the thermic effect of food is about 10% of one's caloric intake, though the effect varies substantially for different food components. For example, dietary fat is very easy to process and has very little thermic effect, while protein is hard to process and has a much larger thermic effect. [ 3 ] The thermic effect of food is increased by both aerobic training of sufficient duration and intensity or by anaerobic weight training . However, the increase is marginal, amounting to 7-8 calories per hour. [ 1 ] The primary determinants of daily TEF are the total caloric content of the meals and the macronutrient composition of the meals ingested. Meal frequency has little to no effect on TEF; assuming total calorie intake for the days are equivalent. Some studies suggest that TEF may be reduced in patients with obesity, although research findings are mixed. [ 4 ] A low TEF could potentially predispose to obesity by reducing energy expenditure and promoting a positive energy balance. Alternatively, the presence of a low TEF in many patients with obesity may be explained by insulin resistance, which can be related to excess dietary energy and vary in severity with the degree of obesity. Over time, chronic hyperinsulinemia could diminish the ability of cells to respond to insulin, promote adiposity and peripheral insulin resistance, and worsen the risk for type 2 diabetes. Some research shows that TEF may be more severely impaired in patients depending on the grade of insulin resistance, although some other studies have not reproduced that finding. [ 5 ] The mechanism of TEF is unknown, although it may be related in part to increases in sympathetic activity induced by eating food. [ 6 ] : 505 TEF has been described as the energy used in the distribution of nutrients and metabolic processes in the liver, [ 7 ] but a hepatectomized animal shows no signs of TEF and intravenous injection of amino acids results in an effect equal to that of oral ingestion of the same amino acids. [ 6 ] : 505 The thermic effect of food is the energy required for digestion, absorption, and disposal of ingested nutrients. Its magnitude depends on the composition of the food consumed: Raw celery and grapefruit are often claimed to have negative caloric balance (requiring more energy to digest than recovered from the food), presumably because the thermic effect is greater than the caloric content due to the high fibre matrix that must be unraveled to access their carbohydrates. However, there has been no research carried out to test this hypothesis and a significant amount of the thermic effect depends on the insulin sensitivity of the individual, with more insulin-sensitive individuals having a significant effect while individuals with increasing resistance have negligible to zero effects. 
[ 10 ] [ 11 ] The Functional Food Centre at Oxford Brookes University conducted a study into the effects of chilli peppers and medium-chain triglycerides (MCT) on diet-induced thermogenesis (DIT). They concluded that "adding chilli and MCT to meals increases DIT by over 50% which over time may accumulate to help induce weight loss and prevent weight gain or regain". [ 12 ] Australia's Human Nutrition conducted a study on the effect of meal content in lean women's diets on the thermic effect of food and found that the inclusion of an ingredient containing increased soluble fibre and amylose did not reduce spontaneous food intake, but rather was associated with higher subsequent energy intakes despite its reduced glycaemic and insulinemic effects. [ 13 ] The thermic effect of food should be measured for a period of time greater than or equal to five hours. [ 14 ] A study published in The American Journal of Clinical Nutrition reported that TEF lasts beyond six hours for the majority of people. [ 14 ]
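The dependence of TEF on meal composition described above can be turned into a rough per-meal estimator. In the sketch below the thermic fractions (protein ≈ 25%, carbohydrate ≈ 7.5%, fat ≈ 2%) are illustrative midpoints of ranges commonly quoted in the nutrition literature, not values taken from this article, which gives only the ≈10% overall figure.

```python
# Illustrative thermic-effect fractions (assumed, not from the article):
TEF_FRACTION = {"protein": 0.25, "carbohydrate": 0.075, "fat": 0.02}
KCAL_PER_GRAM = {"protein": 4, "carbohydrate": 4, "fat": 9}

def thermic_effect(grams_by_macro):
    """Rough TEF estimate (kcal) for a meal given as grams of each macronutrient."""
    tef = total = 0.0
    for macro, grams in grams_by_macro.items():
        kcal = grams * KCAL_PER_GRAM[macro]
        total += kcal
        tef += kcal * TEF_FRACTION[macro]
    return tef, total

tef, total = thermic_effect({"protein": 30, "carbohydrate": 60, "fat": 20})
print(f"meal: {total:.0f} kcal, estimated TEF: {tef:.0f} kcal ({100 * tef / total:.1f}%)")
```

For this example meal the estimate comes out near 10% of intake, consistent with the commonly used overall figure, while a higher-protein meal would push the fraction upward.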
https://en.wikipedia.org/wiki/Specific_dynamic_action
Specific energy or massic energy is energy per unit mass . It is also sometimes called gravimetric energy density , which is not to be confused with energy density , which is defined as energy per unit volume. It is used to quantify, for example, stored heat and other thermodynamic properties of substances such as specific internal energy , specific enthalpy , specific Gibbs free energy , and specific Helmholtz free energy . It may also be used for the kinetic energy or potential energy of a body. Specific energy is an intensive property , whereas energy and mass are extensive properties . The SI unit for specific energy is the joule per kilogram (J/kg). Other units still in use worldwide in some contexts are the kilocalorie per gram (Cal/g or kcal/g), mostly in food-related topics, and watt-hours per kilogram (W⋅h/kg) in the field of batteries. In some countries the Imperial unit BTU per pound (Btu/lb) is used in some engineering and applied technical fields. [ 1 ] Specific energy has the same units as specific strength , which is related to the maximum specific energy of rotation an object can have without flying apart due to centrifugal force . The concept of specific energy is related to but distinct from the notion of molar energy in chemistry , that is energy per mole of a substance, which uses units such as joules per mole, or the older but still widely used calories per mole. [ 2 ] The following table shows the factors for conversion to J/kg of some non- SI units : For a table giving the specific energy of many different fuels as well as batteries, see the article Energy density . For ionising radiation , the gray is the SI unit of specific energy absorbed by matter known as absorbed dose , from which the SI unit the sievert is calculated for the stochastic health effect on tissues, known as dose equivalent . The International Committee for Weights and Measures states: "In order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H , the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H ." [ 6 ] Energy density is the amount of energy per mass or volume of food. The energy density of a food can be determined from the label by dividing the energy per serving (usually in kilojoules or food calories ) by the serving size (usually in grams, milliliters or fluid ounces). An energy unit commonly used in nutritional contexts within non-metric countries (e.g. the United States) is the "dietary calorie," "food calorie," or "Calorie" with a capital "C" and is commonly abbreviated as "Cal." A nutritional Calorie is equivalent to a thousand chemical or thermodynamic calories (abbreviated "cal" with a lower case "c") or one kilocalorie (kcal). Because food energy is commonly measured in Calories, the energy density of food is commonly called "caloric density". [ 7 ] In the metric system, the energy unit commonly used on food labels is the kilojoule (kJ) or megajoule (MJ). Energy density is thus commonly expressed in metric units of cal/g, kcal/g, J/g, kJ/g, MJ/kg, cal/mL, kcal/mL, J/mL, or kJ/mL. Energy density measures the energy released when the food is metabolized by a healthy organism when it ingests the food (see food energy for calculation). In aerobic environments, this typically requires oxygen as an input and generates waste products such as carbon dioxide and water. 
Besides alcohol , the only sources of food energy are carbohydrates , fats and proteins , which make up ninety percent of the dry weight of food. [ 8 ] Therefore, water content is the most important factor in computing energy density. In general, proteins have lower energy densities (≈16 kJ/g) than carbohydrates (≈17 kJ/g), whereas fats provide much higher energy densities (≈38 kJ/g), [ 8 ] 2 + 1 ⁄ 4 times as much energy. Fats contain more carbon-carbon and carbon-hydrogen bonds than carbohydrates or proteins, yielding higher energy density. [ 9 ] Foods that derive most of their energy from fat have a much higher energy density than those that derive most of their energy from carbohydrates or proteins, even if the water content is the same. Nutrients with a lower absorption, such as fiber or sugar alcohols , lower the energy density of foods as well. A moderate energy density would be 1.6 to 3 calories per gram (7–13 kJ/g); salmon, lean meat, and bread would fall in this category. Foods with high energy density have more than three calories per gram (>13 kJ/g) and include crackers, cheese, chocolate, nuts, [ 10 ] and fried foods like potato or tortilla chips. Energy density is sometimes more useful than specific energy for comparing fuels. For example, liquid hydrogen fuel has a higher specific energy (energy per unit mass) than gasoline does, but a much lower volumetric energy density. [ citation needed ] Specific mechanical energy , rather than simply energy, is often used in astrodynamics , because gravity changes the kinetic and potential specific energies of a vehicle in ways that are independent of the mass of the vehicle, consistent with the conservation of energy in a Newtonian gravitational system . The specific energy of an object such as a meteoroid falling on the Earth from outside the Earth's gravitational well is at least one half the square of the escape velocity of 11.2 km/s. This comes to 63 MJ/kg (15 kcal/g, or 15 tonnes TNT equivalent per tonne). Comets have even more energy, typically moving with respect to the Sun, when in our vicinity, at about the square root of two times the speed of the Earth. This comes to 42 km/s, or a specific energy of 882 MJ/kg. The speed relative to the Earth may be more or less, depending on direction. Since the speed of the Earth around the Sun is about 30 km/s, a comet's speed relative to the Earth can range from 12 to 72 km/s, the latter corresponding to 2592 MJ/kg. If a comet with this speed fell to the Earth it would gain another 63 MJ/kg, yielding a total of 2655 MJ/kg with a speed of 72.9 km/s. Since the equator is moving at about 0.5 km/s, the impact speed has an upper limit of 73.4 km/s, giving an upper limit for the specific energy of a comet hitting the Earth of about 2690 MJ/kg. If the Hale-Bopp comet (50 km in diameter) had hit Earth, it would have vaporized the oceans and sterilized the surface of Earth. [ 11 ]
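The figures in the preceding paragraphs are simply the specific kinetic energy v²/2 expressed in MJ/kg; the short check below reproduces them.

```python
# Specific kinetic energy is v^2 / 2; reproducing the article's figures.
def specific_energy_mj_per_kg(speed_km_s):
    v = speed_km_s * 1e3          # m/s
    return 0.5 * v**2 / 1e6       # MJ/kg

print(specific_energy_mj_per_kg(11.2))   # ~63 MJ/kg, Earth escape velocity
print(specific_energy_mj_per_kg(42.0))   # ~882 MJ/kg, comet speed near Earth's orbit
print(specific_energy_mj_per_kg(72.0))   # ~2592 MJ/kg, maximum comet speed relative to Earth
# Adding the ~63 MJ/kg gained falling down Earth's gravity well gives ~2655 MJ/kg,
# corresponding to an impact speed of sqrt(2 * 2655e6) ≈ 72.9 km/s.
```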
https://en.wikipedia.org/wiki/Specific_energy
Specific Fan Power ( SFP ) is a parameter that quantifies the energy efficiency of fan air-movement systems. It is a measure of the electric power needed to drive a fan (or collection of fans), relative to the amount of air circulated through the fan(s). It is not constant for a given fan, but changes with both air flow rate and fan pressure rise. For a given fan system and operating point (combination of flow rate and pressure rise), SFP is defined as SFP = P / q_v , where P is the electrical power drawn by the fan or fans (W) and q_v is the volumetric air flow rate circulated through them (m³/s). There are various sub-definitions of SFP for different specific applications, including SFP e (building energy performance calculations), SFP v (for performance verification tests), SFP i (individual fan), SFP AHU (air handling unit), SFP FCU (fan coil unit), and SFP BLDG (whole building). These are explained in [ 1 ] and in part in [ 2 ]. Reference 1 also describes how to account for intermittently operated fans, e.g. kitchen hoods, and for part-load performance in variable air volume (VAV) systems. SFP can be expressed in the equivalent SI units W/(m³/s), W·s/m³, J/m³ and Pa, or equivalently kW/(m³/s), kJ/m³ and kPa. SFP can therefore be expressed in units of pressure, since pressure is a measure of energy per m³ of air. The relationship between SFP, fan pressure rise, and fan system efficiency is simply SFP = Δp_t / η_tot , where Δp_t is the fan total pressure rise (equal to the total pressure loss in the ventilation system) and η_tot is the overall efficiency of the fan system. In the case of an ideal lossless fan system (i.e. η_tot = 1) the SFP is exactly equal to the fan pressure rise. In reality the fan system efficiency is often in the range 0 to 60% (i.e. η_tot < 0.6); it is lowest for small fans or inefficient operating points (e.g. throttled flow or free flow). The efficiency is a function of the total losses in the fan system, including aerodynamic losses in the fan, friction losses in the drive (e.g. belt), losses in the electric motor, and losses in variable speed drive power electronics. For more insight into how to maximise energy efficiency and minimize noise in fan systems, see reference 1.
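A minimal numeric sketch of the two relations above, SFP = P/q_v and SFP = Δp_t/η_tot; the power, airflow, pressure rise and efficiency figures are invented for illustration.

```python
def specific_fan_power(electric_power_w, airflow_m3_s):
    """SFP in W/(m^3/s), numerically equal to Pa (joules per m^3 of air moved)."""
    return electric_power_w / airflow_m3_s

# Illustrative air handling unit: 3.6 kW of fan power moving 2.4 m^3/s of air.
sfp = specific_fan_power(3600.0, 2.4)      # 1500 W/(m^3/s) = 1.5 kW/(m^3/s)

# Consistency check with SFP = Δp_t / η_tot, assuming a 900 Pa total pressure
# rise and 60 % overall fan-system efficiency:
delta_p, eta_tot = 900.0, 0.6
print(sfp, delta_p / eta_tot)              # both 1500.0
```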
https://en.wikipedia.org/wiki/Specific_fan_power
Specific force ( SF ) is a mass-specific quantity defined as force divided by mass, i.e. force per unit mass . It is a physical quantity of kind acceleration , with dimension of length per time squared and units of metre per second squared (m·s −2 ). It is normally applied to forces other than gravity , to emulate the relationship between gravitational acceleration and gravitational force . It can also be called mass-specific weight (weight per unit mass), as the weight of an object is equal to the magnitude of the gravity force acting on it. The g-force is an instance of specific force measured in units of the standard gravity ( g ) instead of m/s², i.e., in multiples of g (e.g., "3 g "). The (mass-)specific force is not a coordinate acceleration , but rather a proper acceleration , which is the acceleration relative to free fall. Forces, specific forces, and proper accelerations are the same in all reference frames, but coordinate accelerations are frame-dependent. For free bodies, the specific force is the cause of, and a measure of, the body's proper acceleration. The acceleration of an object free-falling towards the Earth depends on the reference frame (it disappears in the free-fall frame, also called the inertial frame), but any g-force "acceleration" will be present in all frames. The specific force is zero for freely falling objects, since gravity acting alone does not produce g-forces or specific forces. Accelerometers on the surface of the Earth measure a constant 9.8 m/s² even when they are not accelerating (that is, when they do not undergo coordinate acceleration), because they register the proper acceleration produced by the g-force exerted by the ground (gravity acting alone never produces g-force or specific force). Accelerometers measure specific force (proper acceleration), which is the acceleration relative to free fall, [ 1 ] not the "standard" acceleration that is relative to a coordinate system. In open channel hydraulics , specific force ( F_s ) has a different meaning: F_s = Q²/(gA) + zA, where Q is the discharge, g is the acceleration due to gravity, A is the cross-sectional area of flow, and z is the depth of the centroid of flow area A. [ 2 ]
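For the open-channel meaning, the sketch below evaluates F_s = Q²/(gA) + zA for a rectangular channel, where the centroid of the flow area lies at half the flow depth; the discharge and channel dimensions are made up for illustration.

```python
# Specific force (momentum function) for a rectangular open channel.
g = 9.81  # m/s^2

def specific_force_rect(Q, width, depth):
    A = width * depth           # flow area, m^2
    z_centroid = depth / 2.0    # depth of the centroid of A below the surface, m
    return Q**2 / (g * A) + z_centroid * A   # m^3 (force per unit specific weight)

print(specific_force_rect(Q=12.0, width=4.0, depth=1.5))   # ≈ 6.95 m^3
```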
https://en.wikipedia.org/wiki/Specific_force
Specific granules are secretory vesicles found exclusively in cells of the immune system called granulocytes . The term is sometimes described as applying specifically to neutrophils, [ 1 ] and sometimes it is applied to other cell types as well. [ 2 ] These granules store a mixture of cytotoxic molecules, including many enzymes and antimicrobial peptides , that are released by a process called degranulation following activation of the granulocyte by an immune stimulus. Specific granules are also known as "secondary granules". [ 3 ] The particular mixture of cytotoxic molecules stored in specific granules differs between the different types of granulocyte. Specific granule deficiency can be associated with CEBPE . [ 4 ]
https://en.wikipedia.org/wiki/Specific_granule
In thermodynamics , the specific heat capacity (symbol c ) of a substance is the amount of heat that must be added to one unit of mass of the substance in order to cause an increase of one unit in temperature . It is also referred to as massic heat capacity or as the specific heat. More formally it is the heat capacity of a sample of the substance divided by the mass of the sample. [ 1 ] The SI unit of specific heat capacity is joule per kelvin per kilogram , J⋅kg −1 ⋅K −1 . [ 2 ] For example, the heat required to raise the temperature of 1 kg of water by 1 K is 4184 joules , so the specific heat capacity of water is 4184 J⋅kg −1 ⋅K −1 . [ 3 ] Specific heat capacity often varies with temperature, and is different for each state of matter . Liquid water has one of the highest specific heat capacities among common substances, about 4184 J⋅kg −1 ⋅K −1 at 20 °C; but that of ice, just below 0 °C, is only 2093 J⋅kg −1 ⋅K −1 . The specific heat capacities of iron , granite , and hydrogen gas are about 449 J⋅kg −1 ⋅K −1 , 790 J⋅kg −1 ⋅K −1 , and 14300 J⋅kg −1 ⋅K −1 , respectively. [ 4 ] While the substance is undergoing a phase transition , such as melting or boiling, its specific heat capacity is technically undefined, because the heat goes into changing its state rather than raising its temperature. The specific heat capacity of a substance, especially a gas, may be significantly higher when it is allowed to expand as it is heated (specific heat capacity at constant pressure ) than when it is heated in a closed vessel that prevents expansion (specific heat capacity at constant volume ). These two values are usually denoted by c p {\displaystyle c_{p}} and c V {\displaystyle c_{V}} , respectively; their quotient γ = c p / c V {\displaystyle \gamma =c_{p}/c_{V}} is the heat capacity ratio . The term specific heat may also refer to the ratio between the specific heat capacities of a substance at a given temperature and of a reference substance at a reference temperature, such as water at 15 °C; [ 5 ] much in the fashion of specific gravity . Specific heat capacity is also related to other intensive measures of heat capacity with other denominators. If the amount of substance is measured as a number of moles , one gets the molar heat capacity instead, whose SI unit is joule per kelvin per mole, J⋅mol −1 ⋅K −1 . If the amount is taken to be the volume of the sample (as is sometimes done in engineering), one gets the volumetric heat capacity , whose SI unit is joule per kelvin per cubic meter , J⋅m −3 ⋅K −1 . One of the first scientists to use the concept was Joseph Black , an 18th-century medical doctor and professor of medicine at Glasgow University . He measured the specific heat capacities of many substances, using the term capacity for heat . [ 6 ] In 1756 or soon thereafter, Black began an extensive study of heat. [ 7 ] In 1760 he realized that when two different substances of equal mass but different temperatures are mixed, the changes in number of degrees in the two substances differ, though the heat gained by the cooler substance and lost by the hotter is the same. Black related an experiment conducted by Daniel Gabriel Fahrenheit on behalf of Dutch physician Herman Boerhaave . 
For clarity, he then described a hypothetical, but realistic variant of the experiment: If equal masses of 100 °F water and 150 °F mercury are mixed, the water temperature increases by 20 ° and the mercury temperature decreases by 30 ° (both arriving at 120 °F), even though the heat gained by the water and lost by the mercury is the same. This clarified the distinction between heat and temperature. It also introduced the concept of specific heat capacity, being different for different substances. Black wrote: “Quicksilver [mercury] ... has less capacity for the matter of heat than water.” [ 8 ] [ 9 ] The specific heat capacity of a substance, usually denoted by c {\displaystyle c} or s {\displaystyle s} , is the heat capacity C {\displaystyle C} of a sample of the substance, divided by the mass M {\displaystyle M} of the sample: [ 10 ] c = C M = 1 M ⋅ d Q d T , {\displaystyle c={\frac {C}{M}}={\frac {1}{M}}\cdot {\frac {\mathrm {d} Q}{\mathrm {d} T}},} where d Q {\displaystyle \mathrm {d} Q} represents the amount of heat needed to uniformly raise the temperature of the sample by a small increment d T {\displaystyle \mathrm {d} T} . Like the heat capacity of an object, the specific heat capacity of a substance may vary, sometimes substantially, depending on the starting temperature T {\displaystyle T} of the sample and the pressure p {\displaystyle p} applied to it. Therefore, it should be considered a function c ( p , T ) {\displaystyle c(p,T)} of those two variables. These parameters are usually specified when giving the specific heat capacity of a substance. For example, "Water (liquid): c p {\displaystyle c_{p}} = 4187 J⋅kg −1 ⋅K −1 (15 °C)." [ 11 ] When not specified, published values of the specific heat capacity c {\displaystyle c} generally are valid for some standard conditions for temperature and pressure . However, the dependency of c {\displaystyle c} on starting temperature and pressure can often be ignored in practical contexts, e.g. when working in narrow ranges of those variables. In those contexts one usually omits the qualifier ( p , T ) {\displaystyle (p,T)} and approximates the specific heat capacity by a constant c {\displaystyle c} suitable for those ranges. Specific heat capacity is an intensive property of a substance, an intrinsic characteristic that does not depend on the size or shape of the amount in consideration. (The qualifier "specific" in front of an extensive property often indicates an intensive property derived from it. [ 12 ] ) The injection of heat energy into a substance, besides raising its temperature, usually causes an increase in its volume and/or its pressure, depending on how the sample is confined. The choice made about the latter affects the measured specific heat capacity, even for the same starting pressure p {\displaystyle p} and starting temperature T {\displaystyle T} . Two particular choices are widely used: The value of c V {\displaystyle c_{V}} is always less than the value of c p {\displaystyle c_{p}} for all fluids. This difference is particularly notable in gases where values under constant pressure are typically 30% to 66.7% greater than those at constant volume. Hence the heat capacity ratio of gases is typically between 1.3 and 1.67. [ 13 ] The specific heat capacity can be defined and measured for gases, liquids, and solids of fairly general composition and molecular structure. 
These include gas mixtures, solutions and alloys, or heterogenous materials such as milk, sand, granite, and concrete, if considered at a sufficiently large scale. The specific heat capacity can be defined also for materials that change state or composition as the temperature and pressure change, as long as the changes are reversible and gradual. Thus, for example, the concepts are definable for a gas or liquid that dissociates as the temperature increases, as long as the products of the dissociation promptly and completely recombine when it drops. The specific heat capacity is not meaningful if the substance undergoes irreversible chemical changes, or if there is a phase change , such as melting or boiling, at a sharp temperature within the range of temperatures spanned by the measurement. The specific heat capacity of a substance is typically determined according to the definition; namely, by measuring the heat capacity of a sample of the substance, usually with a calorimeter , and dividing by the sample's mass. Several techniques can be applied for estimating the heat capacity of a substance, such as differential scanning calorimetry . [ 14 ] [ 15 ] The specific heat capacities of gases can be measured at constant volume, by enclosing the sample in a rigid container. On the other hand, measuring the specific heat capacity at constant volume can be prohibitively difficult for liquids and solids, since one often would need impractical pressures in order to prevent the expansion that would be caused by even small increases in temperature. Instead, the common practice is to measure the specific heat capacity at constant pressure (allowing the material to expand or contract as it wishes), determine separately the coefficient of thermal expansion and the compressibility of the material, and compute the specific heat capacity at constant volume from these data according to the laws of thermodynamics. [ citation needed ] The SI unit for specific heat capacity is joule per kelvin per kilogram ⁠ J / kg⋅K ⁠ , J⋅K −1 ⋅kg −1 . Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same as joule per degree Celsius per kilogram: J/(kg⋅°C). Sometimes the gram is used instead of kilogram for the unit of mass: 1 J⋅g −1 ⋅K −1 = 1000 J⋅kg −1 ⋅K −1 . The specific heat capacity of a substance (per unit of mass) has dimension L 2 ⋅Θ −1 ⋅T −2 , or (L/T) 2 /Θ. Therefore, the SI unit J⋅kg −1 ⋅K −1 is equivalent to metre squared per second squared per kelvin (m 2 ⋅K −1 ⋅s −2 ). Professionals in construction , civil engineering , chemical engineering , and other technical disciplines, especially in the United States , may use English Engineering units including the pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or Rankine (°R = ⁠ 5 / 9 ⁠ K, about 0.555556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.056 J), [ 16 ] [ 17 ] as the unit of heat. In those contexts, the unit of specific heat capacity is BTU/lb⋅°R, or 1 ⁠ BTU / lb⋅°R ⁠ = 4186.68 ⁠ J / kg⋅K ⁠ . [ 18 ] The BTU was originally defined so that the average specific heat capacity of water would be 1 BTU/lb⋅°F. [ 19 ] Note the value's similarity to that of the calorie - 4187 J/kg⋅°C ≈ 4184 J/kg⋅°C (~.07%) - as they are essentially measuring the same energy, using water as a basis reference, scaled to their systems' respective lbs and °F, or kg and °C. In chemistry, heat amounts were often measured in calories . 
Confusingly, there are two common units with that name, respectively denoted cal and Cal : While these units are still used in some contexts (such as kilogram calorie in nutrition ), their use is now deprecated in technical and scientific fields. When heat is measured in these units, the unit of specific heat capacity is usually: Note that while cal is 1 ⁄ 1000 of a Cal or kcal, it is also per gram instead of kilo gram : ergo, in either unit, the specific heat capacity of water is approximately 1. The temperature of a sample of a substance reflects the average kinetic energy of its constituent particles (atoms or molecules) relative to its center of mass. However, not all energy provided to a sample of a substance will go into raising its temperature, exemplified via the equipartition theorem . Statistical mechanics predicts that at room temperature and ordinary pressures, an isolated atom in a gas cannot store any significant amount of energy except in the form of kinetic energy, unless multiple electronic states are accessible at room temperature (such is the case for atomic fluorine). [ 21 ] Thus, the heat capacity per mole at room temperature is the same for all of the noble gases as well as for many other atomic vapors. More precisely, c V , m = 3 R / 2 ≈ 12.5 J ⋅ K − 1 ⋅ m o l − 1 {\displaystyle c_{V,\mathrm {m} }=3R/2\approx \mathrm {12.5\,J\cdot K^{-1}\cdot mol^{-1}} } and c P , m = 5 R / 2 ≈ 21 J ⋅ K − 1 ⋅ m o l − 1 {\displaystyle c_{P,\mathrm {m} }=5R/2\approx \mathrm {21\,J\cdot K^{-1}\cdot mol^{-1}} } , where R ≈ 8.31446 J ⋅ K − 1 ⋅ m o l − 1 {\displaystyle R\approx \mathrm {8.31446\,J\cdot K^{-1}\cdot mol^{-1}} } is the ideal gas unit (which is the product of Boltzmann conversion constant from kelvin microscopic energy unit to the macroscopic energy unit joule , and the Avogadro number ). Therefore, the specific heat capacity (per gram, not per mole) of a monatomic gas will be inversely proportional to its (adimensional) atomic weight A {\displaystyle A} . That is, approximately, c V ≈ 12470 J ⋅ K − 1 ⋅ k g − 1 / A c p ≈ 20785 J ⋅ K − 1 ⋅ k g − 1 / A {\displaystyle c_{V}\approx \mathrm {12470\,J\cdot K^{-1}\cdot kg^{-1}} /A\quad \quad \quad c_{p}\approx \mathrm {20785\,J\cdot K^{-1}\cdot kg^{-1}} /A} For the noble gases, from helium to xenon, these computed values are On the other hand, a polyatomic gas molecule (consisting of two or more atoms bound together) can store heat energy in additional degrees of freedom. Its kinetic energy contributes to the heat capacity in the same way as monatomic gases, but there are also contributions from the rotations of the molecule and vibration of the atoms relative to each other (including internal potential energy ). There may also be contributions to the heat capacity from excited electronic states for molecules where the energy gap between the ground state and the excited state is sufficiently small, such as NO . [ 22 ] For a few systems, quantum spin statistics can also be important contributions to the heat capacity, even at room temperature. The analysis of the heat capacity of H 2 due to ortho/para separation, [ 23 ] which arises from nuclear spin statistics, has been referred to as "one of the great triumphs of post-quantum mechanical statistical mechanics." [ 24 ] These extra degrees of freedom or "modes" contribute to the specific heat capacity of the substance. 
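The per-mass values for the noble gases referred to above (the computed values appear to have been lost from the text) can be regenerated directly from c_V ≈ 12470/A and c_p ≈ 20785/A; a minimal sketch, using standard atomic weights:

```python
# Per-mass heat capacities of monatomic (noble) gases from c_V,m = 3R/2 and
# c_p,m = 5R/2 divided by the molar mass (A grams per mole).
R = 8.31446  # J/(mol·K)
atomic_weight = {"He": 4.0026, "Ne": 20.180, "Ar": 39.948, "Kr": 83.798, "Xe": 131.29}

for gas, A in atomic_weight.items():
    c_v = 1.5 * R / (A / 1000)   # J/(kg·K), ≈ 12470 / A
    c_p = 2.5 * R / (A / 1000)   # J/(kg·K), ≈ 20785 / A
    print(f"{gas}: c_V ≈ {c_v:.0f}, c_p ≈ {c_p:.0f} J/(kg·K)")
```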
Namely, when heat energy is injected into a gas with polyatomic molecules, only part of it will go into increasing their kinetic energy, and hence the temperature; the rest will go to into the other degrees of freedom. To achieve the same increase in temperature, more heat energy is needed for a gram of that substance than for a gram of a monatomic gas. Thus, the specific heat capacity per mole of a polyatomic gas depends both on the molecular mass and the number of degrees of freedom of the molecules. [ 25 ] [ 26 ] [ 27 ] Quantum statistical mechanics predicts that each rotational or vibrational mode can only take or lose energy in certain discrete amounts (quanta), and that this affects the system’s thermodynamic properties. Depending on the temperature, the average heat energy per molecule may be too small compared to the quanta needed to activate some of those degrees of freedom. Those modes are said to be "frozen out". In that case, the specific heat capacity of the substance increases with temperature, sometimes in a step-like fashion as mode becomes unfrozen and starts absorbing part of the input heat energy. For example, the molar heat capacity of nitrogen N 2 at constant volume is c V , m = 20.6 J ⋅ K − 1 ⋅ m o l − 1 {\displaystyle c_{V,\mathrm {m} }=\mathrm {20.6\,J\cdot K^{-1}\cdot mol^{-1}} } (at 15 °C, 1 atm), which is 2.49 R {\displaystyle 2.49R} . [ 28 ] That is the value expected from the Equipartition Theorem if each molecule had 5 kinetic degrees of freedom. These turn out to be three degrees of the molecule's velocity vector, plus two degrees from its rotation about an axis through the center of mass and perpendicular to the line of the two atoms. Because of those two extra degrees of freedom, the specific heat capacity c V {\displaystyle c_{V}} of N 2 (736 J⋅K −1 ⋅kg −1 ) is greater than that of an hypothetical monatomic gas with the same molecular mass 28 (445 J⋅K −1 ⋅kg −1 ), by a factor of ⁠ 5 / 3 ⁠ . The vibrational and electronic degrees of freedom do not contribute significantly to the heat capacity in this case, due to the relatively large energy level gaps for both vibrational and electronic excitation in this molecule. This value for the specific heat capacity of nitrogen is practically constant from below −150 °C to about 300 °C. In that temperature range, the two additional degrees of freedom that correspond to vibrations of the atoms, stretching and compressing the bond, are still "frozen out". At about that temperature, those modes begin to "un-freeze" as vibrationally excited states become accessible. As a result c V {\displaystyle c_{V}} starts to increase rapidly at first, then slower as it tends to another constant value. It is 35.5 J⋅K −1 ⋅mol −1 at 1500 °C, 36.9 at 2500 °C, and 37.5 at 3500 °C. [ 29 ] The last value corresponds almost exactly to the value predicted by the Equipartition Theorem, since in the high-temperature limit the theorem predicts that the vibrational degree of freedom contributes twice as much to the heat capacity as any one of the translational or rotational degrees of freedom. Starting from the fundamental thermodynamic relation one can show, c p − c v = α 2 T ρ β T {\displaystyle c_{p}-c_{v}={\frac {\alpha ^{2}T}{\rho \beta _{T}}}} where A derivation is discussed in the article Relations between specific heats . 
For an ideal gas , if ρ {\displaystyle \rho } is expressed as molar density in the above equation, this equation reduces simply to Mayer 's relation, C p , m − C v , m = R {\displaystyle C_{p,m}-C_{v,m}=R\!} where C p , m {\displaystyle C_{p,m}} and C v , m {\displaystyle C_{v,m}} are intensive property heat capacities expressed on a per mole basis at constant pressure and constant volume, respectively. The specific heat capacity of a material on a per mass basis is c = ∂ C ∂ m , {\displaystyle c={\partial C \over \partial m},} which in the absence of phase transitions is equivalent to c = E m = C m = C ρ V , {\displaystyle c=E_{m}={C \over m}={C \over {\rho V}},} where For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, d p = 0 {\displaystyle dp=0} ) or isochoric (constant volume, d V = 0 {\displaystyle dV=0} ) processes. The corresponding specific heat capacities are expressed as c p = ( ∂ C ∂ m ) p , c V = ( ∂ C ∂ m ) V . {\displaystyle {\begin{aligned}c_{p}&=\left({\frac {\partial C}{\partial m}}\right)_{p},\\c_{V}&=\left({\frac {\partial C}{\partial m}}\right)_{V}.\end{aligned}}} A related parameter to c {\displaystyle c} is C V − 1 {\displaystyle CV^{-1}} , the volumetric heat capacity . In engineering practice, c V {\displaystyle c_{V}} for solids or liquids often signifies a volumetric heat capacity, rather than a constant-volume one. In such cases, the mass-specific heat capacity is often explicitly written with the subscript m {\displaystyle m} , as c m {\displaystyle c_{m}} . Of course, from the above relationships, for solids one writes c m = C m = c V ρ . {\displaystyle c_{m}={\frac {C}{m}}={\frac {c_{V}}{\rho }}.} For pure homogeneous chemical compounds with established molecular or molar mass or a molar quantity is established, heat capacity as an intensive property can be expressed on a per mole basis instead of a per mass basis by the following equations analogous to the per mass equations: C p , m = ( ∂ C ∂ n ) p = molar heat capacity at constant pressure C V , m = ( ∂ C ∂ n ) V = molar heat capacity at constant volume {\displaystyle {\begin{alignedat}{3}C_{p,m}=\left({\frac {\partial C}{\partial n}}\right)_{p}&={\text{molar heat capacity at constant pressure}}\\C_{V,m}=\left({\frac {\partial C}{\partial n}}\right)_{V}&={\text{molar heat capacity at constant volume}}\end{alignedat}}} where n = number of moles in the body or thermodynamic system . One may refer to such a per mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per-mass basis. The polytropic heat capacity is calculated at processes if all the thermodynamic properties (pressure, volume, temperature) change C i , m = ( ∂ C ∂ n ) = molar heat capacity at polytropic process {\displaystyle C_{i,m}=\left({\frac {\partial C}{\partial n}}\right)={\text{molar heat capacity at polytropic process}}} The most important polytropic processes run between the adiabatic and the isotherm functions, the polytropic index is between 1 and the adiabatic exponent ( γ or κ ) The dimensionless heat capacity of a material is C ∗ = C n R = C N k B {\displaystyle C^{*}={\frac {C}{nR}}={\frac {C}{Nk_{\text{B}}}}} where Again, SI units shown for example. 
In the Ideal gas article, dimensionless heat capacity C ∗ {\displaystyle C^{*}} is expressed as c ^ {\displaystyle {\hat {c}}} . From the definition of entropy T d S = δ Q {\displaystyle TdS=\delta Q} the absolute entropy can be calculated by integrating from zero kelvins temperature to the final temperature T f S ( T f ) = ∫ T = 0 T f δ Q T = ∫ 0 T f δ Q d T d T T = ∫ 0 T f C ( T ) d T T . {\displaystyle S(T_{f})=\int _{T=0}^{T_{f}}{\frac {\delta Q}{T}}=\int _{0}^{T_{f}}{\frac {\delta Q}{dT}}{\frac {dT}{T}}=\int _{0}^{T_{f}}C(T)\,{\frac {dT}{T}}.} The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, thus violating the third law of thermodynamics . One of the strengths of the Debye model is that (unlike the preceding Einstein model) it predicts the proper mathematical form of the approach of heat capacity toward zero, as absolute zero temperature is approached. The theoretical maximum heat capacity for larger and larger multi-atomic gases at higher temperatures, also approaches the Dulong–Petit limit of 3 R , so long as this is calculated per mole of atoms, not molecules. The reason is that gases with very large molecules, in theory have almost the same high-temperature heat capacity as solids, lacking only the (small) heat capacity contribution that comes from potential energy that cannot be stored between separate molecules in a gas. The Dulong–Petit limit results from the equipartition theorem , and as such is only valid in the classical limit of a microstate continuum , which is a high temperature limit. For light and non-metallic elements, as well as most of the common molecular solids based on carbon compounds at standard ambient temperature , quantum effects may also play an important role, as they do in multi-atomic gases. These effects usually combine to give heat capacities lower than 3 R per mole of atoms in the solid, although in molecular solids, heat capacities calculated per mole of molecules in molecular solids may be more than 3 R . For example, the heat capacity of water ice at the melting point is about 4.6 R per mole of molecules, but only 1.5 R per mole of atoms. The lower than 3 R number "per atom" (as is the case with diamond and beryllium) results from the “freezing out” of possible vibration modes for light atoms at suitably low temperatures, just as in many low-mass-atom gases at room temperatures. Because of high crystal binding energies, these effects are seen in solids more often than liquids: for example the heat capacity of liquid water is twice that of ice at near the same temperature, and is again close to the 3 R per mole of atoms of the Dulong–Petit theoretical maximum. For a more modern and precise analysis of the heat capacities of solids, especially at low temperatures, it is useful to use the idea of phonons . See Debye model . The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law, R is the gas constant). 
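As a check on the entropy integral above, and on the statement that the heat capacity must vanish at absolute zero for that integral to converge, the sketch below integrates S = ∫ C(T) dT/T numerically for a solid obeying the Debye T³ law, C(T) = aT³, and compares it with the closed form aT_f³/3. The constant a and the final temperature are arbitrary illustrative values.

```python
import numpy as np

a, T_f = 2.0e-3, 20.0                        # illustrative Debye-law constant and temperature
T = np.linspace(1e-6, T_f, 200_000)          # start just above 0 to avoid 0/0 in C(T)/T
S_numeric = np.trapz(a * T**3 / T, T)        # ∫ C(T)/T dT = ∫ a*T^2 dT
print(S_numeric, a * T_f**3 / 3)             # both ≈ 5.33; finite because C(0) = 0
```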
Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below. For liquids and gases, it is important to know the pressure to which given heat capacity data refer. Most published data are given for standard pressure. However, different standard conditions for temperature and pressure have been defined by different organizations. The International Union of Pure and Applied Chemistry (IUPAC) changed its recommendation from one atmosphere to the round value 100 kPa (≈750.062 Torr). [ notes 1 ] Measuring the specific heat capacity at constant volume can be prohibitively difficult for liquids and solids. That is, small temperature changes typically require large pressures to maintain a liquid or solid at constant volume, implying that the containing vessel must be nearly rigid or at least very strong (see coefficient of thermal expansion and compressibility ). Instead, it is easier to measure the heat capacity at constant pressure (allowing the material to expand or contract freely) and solve for the heat capacity at constant volume using mathematical relationships derived from the basic thermodynamic laws. The heat capacity ratio , or adiabatic index, is the ratio of the heat capacity at constant pressure to heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor. For an ideal gas , evaluating the partial derivatives above according to the equation of state , where R is the gas constant , for an ideal gas [ 30 ] P V = n R T , C P − C V = T ( ∂ P ∂ T ) V , n ( ∂ V ∂ T ) P , n , P = n R T V ⇒ ( ∂ P ∂ T ) V , n = n R V , V = n R T P ⇒ ( ∂ V ∂ T ) P , n = n R P . {\displaystyle {\begin{alignedat}{3}PV&=nRT,&\\C_{P}-C_{V}&=T\left({\frac {\partial P}{\partial T}}\right)_{V,n}\left({\frac {\partial V}{\partial T}}\right)_{P,n},&\\P&={\frac {nRT}{V}}\Rightarrow \left({\frac {\partial P}{\partial T}}\right)_{V,n}&={\frac {nR}{V}},\\V&={\frac {nRT}{P}}\Rightarrow \left({\frac {\partial V}{\partial T}}\right)_{P,n}&={\frac {nR}{P}}.\end{alignedat}}} Substituting T ( ∂ P ∂ T ) V , n ( ∂ V ∂ T ) P , n = T n R V n R P = n R T V n R P = P n R P = n R , {\displaystyle T\left({\frac {\partial P}{\partial T}}\right)_{V,n}\left({\frac {\partial V}{\partial T}}\right)_{P,n}=T{\frac {nR}{V}}{\frac {nR}{P}}={\frac {nRT}{V}}{\frac {nR}{P}}=P{\frac {nR}{P}}=nR,} this equation reduces simply to Mayer 's relation: C P , m − C V , m = R . {\displaystyle C_{P,m}-C_{V,m}=R.} The differences in heat capacities as defined by the above Mayer relation is only exact for an ideal gas and would be different for any real gas. The specific heat capacity of a material on a per mass basis is c = ∂ C ∂ m , {\displaystyle c={\frac {\partial C}{\partial m}},} which in the absence of phase transitions is equivalent to c = E m = C m = C ρ V , {\displaystyle c=E_{m}={\frac {C}{m}}={\frac {C}{\rho V}},} where For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, d P = 0 {\displaystyle {\text{d}}P=0} ) or isochoric (constant volume, d V = 0 {\displaystyle {\text{d}}V=0} ) processes. 
The corresponding specific heat capacities are expressed as c P = ( ∂ C ∂ m ) P , c V = ( ∂ C ∂ m ) V . {\displaystyle {\begin{aligned}c_{P}&=\left({\frac {\partial C}{\partial m}}\right)_{P},\\c_{V}&=\left({\frac {\partial C}{\partial m}}\right)_{V}.\end{aligned}}} From the results of the previous section, dividing through by the mass gives the relation c P − c V = α 2 T ρ β T . {\displaystyle c_{P}-c_{V}={\frac {\alpha ^{2}T}{\rho \beta _{T}}}.} A related parameter to c {\displaystyle c} is C / V {\displaystyle C/V} , the volumetric heat capacity . In engineering practice, c V {\displaystyle c_{V}} for solids or liquids often signifies a volumetric heat capacity, rather than a constant-volume one. In such cases, the specific heat capacity is often explicitly written with the subscript m {\displaystyle m} , as c m {\displaystyle c_{m}} . Of course, from the above relationships, for solids one writes c m = C m = c volumetric ρ . {\displaystyle c_{m}={\frac {C}{m}}={\frac {c_{\text{volumetric}}}{\rho }}.} For pure homogeneous chemical compounds with established molecular or molar mass , or a molar quantity , heat capacity as an intensive property can be expressed on a per- mole basis instead of a per-mass basis by the following equations analogous to the per mass equations: C P , m = ( ∂ C ∂ n ) P = molar heat capacity at constant pressure, C V , m = ( ∂ C ∂ n ) V = molar heat capacity at constant volume, {\displaystyle {\begin{alignedat}{3}C_{P,m}&=\left({\frac {\partial C}{\partial n}}\right)_{P}&={\text{molar heat capacity at constant pressure,}}\\C_{V,m}&=\left({\frac {\partial C}{\partial n}}\right)_{V}&={\text{molar heat capacity at constant volume,}}\end{alignedat}}} where n is the number of moles in the body or thermodynamic system . One may refer to such a per-mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per-mass basis. The polytropic heat capacity is calculated at processes if all the thermodynamic properties (pressure, volume, temperature) change: C i , m = ( ∂ C ∂ n ) = molar heat capacity at polytropic process. {\displaystyle C_{i,m}=\left({\frac {\partial C}{\partial n}}\right)={\text{molar heat capacity at polytropic process.}}} The most important polytropic processes run between the adiabatic and the isotherm functions, the polytropic index is between 1 and the adiabatic exponent ( γ or κ ). The dimensionless heat capacity of a material is C ∗ = C n R = C N k B , {\displaystyle C^{*}={\frac {C}{nR}}={\frac {C}{Nk_{\text{B}}}},} where In the ideal gas article, dimensionless heat capacity C ∗ {\displaystyle C^{*}} is expressed as c ^ {\displaystyle {\hat {c}}} and is related there directly to half the number of degrees of freedom per particle. This holds true for quadratic degrees of freedom, a consequence of the equipartition theorem . More generally, the dimensionless heat capacity relates the logarithmic increase in temperature to the increase in the dimensionless entropy per particle S ∗ = S / N k B {\displaystyle S^{*}=S/Nk_{\text{B}}} , measured in nats . C ∗ = d S ∗ d ( ln ⁡ T ) . {\displaystyle C^{*}={\frac {{\text{d}}S^{*}}{{\text{d}}(\ln T)}}.} Alternatively, using base-2 logarithms, C ∗ {\displaystyle C^{*}} relates the base-2 logarithmic increase in temperature to the increase in the dimensionless entropy measured in bits . 
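To show how small the term c_P − c_V = α²T/(ρβ_T) typically is for a condensed phase, here is a minimal Python sketch; it is not from the article, and the property values for liquid water near 25 °C are rounded, illustrative figures.

```python
# Illustrative, rounded property values for liquid water near 25 degC.
alpha  = 2.1e-4      # volumetric thermal expansion coefficient, 1/K (approx.)
beta_T = 4.6e-10     # isothermal compressibility, 1/Pa (approx.)
rho    = 997.0       # density, kg/m^3 (approx.)
T      = 298.15      # temperature, K

# c_P - c_V = alpha^2 * T / (rho * beta_T), in J/(kg K)
diff = alpha**2 * T / (rho * beta_T)
print(f"c_P - c_V ~ {diff:.0f} J/(kg K)")

# Compared with c_P of liquid water (~4180 J/(kg K)), the difference is small,
# which is why c_P and c_V are often treated as interchangeable for liquids
# and solids, but not for gases.
```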
[ 31 ] From the definition of entropy T d S = δ Q , {\displaystyle T\,{\text{d}}S=\delta Q,} the absolute entropy can be calculated by integrating from zero to the final temperature T f : S ( T f ) = ∫ T = 0 T f δ Q T = ∫ 0 T f δ Q d T d T T = ∫ 0 T f C ( T ) d T T . {\displaystyle S(T_{\text{f}})=\int _{T=0}^{T_{\text{f}}}{\frac {\delta Q}{T}}=\int _{0}^{T_{\text{f}}}{\frac {\delta Q}{{\text{d}}T}}{\frac {{\text{d}}T}{T}}=\int _{0}^{T_{\text{f}}}C(T)\,{\frac {{\text{d}}T}{T}}.} In theory, the specific heat capacity of a substance can also be derived from its abstract thermodynamic modeling by an equation of state and an internal energy function . To apply the theory, one considers the sample of the substance (solid, liquid, or gas) for which the specific heat capacity can be defined; in particular, that it has homogeneous composition and fixed mass M {\displaystyle M} . Assume that the evolution of the system is always slow enough for the internal pressure P {\displaystyle P} and temperature T {\displaystyle T} be considered uniform throughout. The pressure P {\displaystyle P} would be equal to the pressure applied to it by the enclosure or some surrounding fluid, such as air. The state of the material can then be specified by three parameters: its temperature T {\displaystyle T} , the pressure P {\displaystyle P} , and its specific volume ν = V / M {\displaystyle \nu =V/M} , where V {\displaystyle V} is the volume of the sample. (This quantity is the reciprocal 1 / ρ {\displaystyle 1/\rho } of the material's density ρ = M / V {\displaystyle \rho =M/V} .) Like T {\displaystyle T} and P {\displaystyle P} , the specific volume ν {\displaystyle \nu } is an intensive property of the material and its state, that does not depend on the amount of substance in the sample. Those variables are not independent. The allowed states are defined by an equation of state relating those three variables: F ( T , P , ν ) = 0. {\displaystyle F(T,P,\nu )=0.} The function F {\displaystyle F} depends on the material under consideration. The specific internal energy stored internally in the sample, per unit of mass, will then be another function U ( T , P , ν ) {\displaystyle U(T,P,\nu )} of these state variables, that is also specific of the material. The total internal energy in the sample then will be M U ( T , P , ν ) {\displaystyle M\,U(T,P,\nu )} . For some simple materials, like an ideal gas , one can derive from basic theory the equation of state F = 0 {\displaystyle F=0} and even the specific internal energy U {\displaystyle U} In general, these functions must be determined experimentally for each substance. The absolute value of this quantity U {\displaystyle U} is undefined, and (for the purposes of thermodynamics) the state of "zero internal energy" can be chosen arbitrarily. However, by the law of conservation of energy , any infinitesimal increase M d U {\displaystyle M\,\mathrm {d} U} in the total internal energy M U {\displaystyle MU} must be matched by the net flow of heat energy d Q {\displaystyle \mathrm {d} Q} into the sample, plus any net mechanical energy provided to it by enclosure or surrounding medium on it. The latter is − P d V {\displaystyle -P\,\mathrm {d} V} , where d V {\displaystyle \mathrm {d} V} is the change in the sample's volume in that infinitesimal step. 
[ 32 ] Therefore d Q − P d V = M d U {\displaystyle \mathrm {d} Q-P\,\mathrm {d} V=M\,\mathrm {d} U} hence d Q M − P d ν = d U {\displaystyle {\frac {\mathrm {d} Q}{M}}-P\,\mathrm {d} \nu =\mathrm {d} U} If the volume of the sample (hence the specific volume of the material) is kept constant during the injection of the heat amount d Q {\displaystyle \mathrm {d} Q} , then the term P d ν {\displaystyle P\,\mathrm {d} \nu } is zero (no mechanical work is done). Then, dividing by d T {\displaystyle \mathrm {d} T} , d Q M d T = d U d T {\displaystyle {\frac {\mathrm {d} Q}{M\,\mathrm {d} T}}={\frac {\mathrm {d} U}{\mathrm {d} T}}} where d T {\displaystyle \mathrm {d} T} is the change in temperature that resulted from the heat input. The left-hand side is the specific heat capacity at constant volume c V {\displaystyle c_{V}} of the material. For the heat capacity at constant pressure, it is useful to define the specific enthalpy of the system as the sum h ( T , P , ν ) = U ( T , P , ν ) + P ν {\displaystyle h(T,P,\nu )=U(T,P,\nu )+P\nu } . An infinitesimal change in the specific enthalpy will then be d h = d U + V d P + P d V {\displaystyle \mathrm {d} h=\mathrm {d} U+V\,\mathrm {d} P+P\,\mathrm {d} V} therefore d Q M + V d P = d h {\displaystyle {\frac {\mathrm {d} Q}{M}}+V\,\mathrm {d} P=\mathrm {d} h} If the pressure is kept constant, the second term on the left-hand side is zero, and d Q M d T = d h d T {\displaystyle {\frac {\mathrm {d} Q}{M\,\mathrm {d} T}}={\frac {\mathrm {d} h}{\mathrm {d} T}}} The left-hand side is the specific heat capacity at constant pressure c P {\displaystyle c_{P}} of the material. In general, the infinitesimal quantities d T , d P , d V , d U {\displaystyle \mathrm {d} T,\mathrm {d} P,\mathrm {d} V,\mathrm {d} U} are constrained by the equation of state and the specific internal energy function. Namely, { d T ∂ F ∂ T ( T , P , V ) + d P ∂ F ∂ P ( T , P , V ) + d V ∂ F ∂ V ( T , P , V ) = 0 d T ∂ U ∂ T ( T , P , V ) + d P ∂ U ∂ P ( T , P , V ) + d V ∂ U ∂ V ( T , P , V ) = d U {\displaystyle {\begin{cases}\displaystyle \mathrm {d} T{\frac {\partial F}{\partial T}}(T,P,V)+\mathrm {d} P{\frac {\partial F}{\partial P}}(T,P,V)+\mathrm {d} V{\frac {\partial F}{\partial V}}(T,P,V)&=&0\\[2ex]\displaystyle \mathrm {d} T{\frac {\partial U}{\partial T}}(T,P,V)+\mathrm {d} P{\frac {\partial U}{\partial P}}(T,P,V)+\mathrm {d} V{\frac {\partial U}{\partial V}}(T,P,V)&=&\mathrm {d} U\end{cases}}} Here ( ∂ F / ∂ T ) ( T , P , V ) {\displaystyle (\partial F/\partial T)(T,P,V)} denotes the (partial) derivative of the state equation F {\displaystyle F} with respect to its T {\displaystyle T} argument, keeping the other two arguments fixed, evaluated at the state ( T , P , V ) {\displaystyle (T,P,V)} in question. The other partial derivatives are defined in the same way. These two equations on the four infinitesimal increments normally constrain them to a two-dimensional linear subspace space of possible infinitesimal state changes, that depends on the material and on the state. The constant-volume and constant-pressure changes are only two particular directions in this space. This analysis also holds no matter how the energy increment d Q {\displaystyle \mathrm {d} Q} is injected into the sample, namely by heat conduction , irradiation, electromagnetic induction , radioactive decay , etc. 
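The constant-volume and constant-pressure definitions derived above can be checked against the simplest model, the ideal gas. The sketch below is an illustration only (the c_v and R/M values for dry air are rounded textbook figures, not data from the article); it differentiates assumed U(T) and h(T) numerically and recovers the per-mass form of Mayer's relation, c_P − c_V = R/M.

```python
# Per-unit-mass ideal-gas model for dry air (illustrative, rounded values).
R_specific = 287.0    # specific gas constant R/M for air, J/(kg K) (approx.)
c_v        = 718.0    # assumed constant-volume specific heat, J/(kg K) (approx.)

def u(T):
    """Specific internal energy of an ideal gas: depends on T only."""
    return c_v * T

def h(T):
    """Specific enthalpy h = u + P*nu = u + R_specific*T for an ideal gas."""
    return u(T) + R_specific * T

# Numerical derivatives at an arbitrary temperature.
T, dT = 300.0, 1e-3
c_v_num = (u(T + dT) - u(T - dT)) / (2 * dT)
c_p_num = (h(T + dT) - h(T - dT)) / (2 * dT)

print(f"c_V = {c_v_num:.1f} J/(kg K), c_P = {c_p_num:.1f} J/(kg K)")
print(f"c_P - c_V = {c_p_num - c_v_num:.1f} J/(kg K)  (should equal R/M = {R_specific})")
```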
For any specific volume ν {\displaystyle \nu } , denote p ν ( T ) {\displaystyle p_{\nu }(T)} the function that describes how the pressure varies with the temperature T {\displaystyle T} , as allowed by the equation of state, when the specific volume of the material is forcefully kept constant at ν {\displaystyle \nu } . Analogously, for any pressure P {\displaystyle P} , let ν P ( T ) {\displaystyle \nu _{P}(T)} be the function that describes how the specific volume varies with the temperature, when the pressure is kept constant at P {\displaystyle P} . Namely, those functions are such that F ( T , p ν ( T ) , ν ) = 0 {\displaystyle F(T,p_{\nu }(T),\nu )=0} and F ( T , P , ν P ( T ) ) = 0 {\displaystyle F(T,P,\nu _{P}(T))=0} for any values of T , P , ν {\displaystyle T,P,\nu } . In other words, the graphs of p ν ( T ) {\displaystyle p_{\nu }(T)} and ν P ( T ) {\displaystyle \nu _{P}(T)} are slices of the surface defined by the state equation, cut by planes of constant ν {\displaystyle \nu } and constant P {\displaystyle P} , respectively. Then, from the fundamental thermodynamic relation it follows that c P ( T , P , ν ) − c V ( T , P , ν ) = T [ d p ν d T ( T ) ] [ d ν P d T ( T ) ] {\displaystyle c_{P}(T,P,\nu )-c_{V}(T,P,\nu )=T\left[{\frac {\mathrm {d} p_{\nu }}{\mathrm {d} T}}(T)\right]\left[{\frac {\mathrm {d} \nu _{P}}{\mathrm {d} T}}(T)\right]} This equation can be rewritten as c P ( T , P , ν ) − c V ( T , P , ν ) = ν T α 2 β T , {\displaystyle c_{P}(T,P,\nu )-c_{V}(T,P,\nu )=\nu T{\frac {\alpha ^{2}}{\beta _{T}}},} where both depending on the state ( T , P , ν ) {\displaystyle (T,P,\nu )} . The heat capacity ratio , or adiabatic index, is the ratio c P / c V {\displaystyle c_{P}/c_{V}} of the heat capacity at constant pressure to heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor. The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3 R = 24.94 joules per kelvin per mole of atoms ( Dulong–Petit law , R is the gas constant ). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below. However, attention should be made for the consistency of such ab-initio considerations when used along with an equation of state for the considered material. [ 33 ] For an ideal gas , evaluating the partial derivatives above according to the equation of state , where R is the gas constant , for an ideal gas [ 34 ] P V = n R T , C P − C V = T ( ∂ P ∂ T ) V , n ( ∂ V ∂ T ) P , n , P = n R T V ⇒ ( ∂ P ∂ T ) V , n = n R V , V = n R T P ⇒ ( ∂ V ∂ T ) P , n = n R P . 
{\displaystyle {\begin{alignedat}{3}PV&=nRT,\\C_{P}-C_{V}&=T\left({\frac {\partial P}{\partial T}}\right)_{V,n}\left({\frac {\partial V}{\partial T}}\right)_{P,n},\\P&={\frac {nRT}{V}}\Rightarrow \left({\frac {\partial P}{\partial T}}\right)_{V,n}&={\frac {nR}{V}},\\V&={\frac {nRT}{P}}\Rightarrow \left({\frac {\partial V}{\partial T}}\right)_{P,n}&={\frac {nR}{P}}.\end{alignedat}}} Substituting T ( ∂ P ∂ T ) V , n ( ∂ V ∂ T ) P , n = T n R V n R P = n R T V n R P = P n R P = n R , {\displaystyle T\left({\frac {\partial P}{\partial T}}\right)_{V,n}\left({\frac {\partial V}{\partial T}}\right)_{P,n}=T{\frac {nR}{V}}{\frac {nR}{P}}={\frac {nRT}{V}}{\frac {nR}{P}}=P{\frac {nR}{P}}=nR,} this equation reduces simply to Mayer 's relation: C P , m − C V , m = R . {\displaystyle C_{P,m}-C_{V,m}=R.} The difference in heat capacities as defined by the above Mayer relation is only exact for an ideal gas and would be different for any real gas. 
https://en.wikipedia.org/wiki/Specific_heat_capacity
Specific impulse (usually abbreviated I sp ) is a measure of how efficiently a reaction mass engine, such as a rocket using propellant or a jet engine using fuel, generates thrust . In general, this is a ratio of the impulse , i.e. change in momentum, per mass of propellant. This is equivalent to "thrust per massflow". The resulting unit is equivalent to velocity, although it does not represent any physical velocity (see below); it is more properly thought of in terms of momentum per mass, since this represents a physical momentum and physical mass. The practical meaning of the measurement varies with different types of engines. Car engines consume onboard fuel, breathe environmental air to burn the fuel, and react (through the tires) against the ground beneath them. In this case, the only sensible interpretation is momentum per fuel burned. Chemical rocket engines, by contrast, carry aboard all of their combustion ingredients and reaction mass, so the only practical measure is momentum per reaction mass. Airplane engines are in the middle, as they only react against airflow through the engine, but some of this reaction mass (and combustion ingredients) is breathed rather than carried on board. As such, "specific impulse" could be taken to mean either "per reaction mass", as with a rocket, or "per fuel burned" as with cars. The latter is the traditional and common choice. In sum, specific impulse is not practically comparable between different types of engines. In any case, specific impulse can be taken as a measure of efficiency. In cars and planes, it typically corresponds with fuel mileage; in rocketry, it corresponds to the achievable delta- v , [ 1 ] [ 2 ] which is the typical way to measure changes between orbits. Rocketry traditionally uses a "bizarre" choice of units: rather than speaking of momentum-per-mass, or velocity, the rocket industry typically converts units of velocity to units of time by dividing by a standard reference acceleration, that being standard gravity g 0 . This is a historical result of competing units, imperial units vs metric units . They shared a common unit of time (seconds) but not common units of distance or mass, so this conversion by reference to g 0 became a standard way to make international comparisons. This choice of reference conversion is arbitrary and the resulting units of time have no physical meaning. The only physical quantities are the momentum change and the mass used to achieve it. For any chemical rocket engine, the momentum transfer efficiency depends heavily on the effectiveness of the nozzle ; the nozzle is the primary means of converting reactant energy (e.g. thermal or pressure energy) into a flow of momentum all directed the same way. Therefore, nozzle shape and effectiveness has a great impact on total momentum transfer from the reaction mass to the rocket. Efficiency of conversion of input energy to reactant energy also matters; be that thermal energy in combustion engines or electrical energy in ion engines , the engineering involved in converting such energy to outbound momentum can have high impact on specific impulse. Specific impulse in turn has deep impacts on the achievable delta-v and associated orbits achievable, and (by the rocket equation) mass fraction required to achieve a given delta-v. Optimizing the tradeoffs between mass fraction and specific impulse is one of the fundamental engineering challenges in rocketry. 
Although the specific impulse has units equivalent to velocity, it almost never corresponds to any physical velocity. In chemical and cold gas rockets, the shape of the nozzle has a high impact on the energy-to-momentum conversion, and is never perfect, and there are other sources of losses and inefficiencies (e.g. the details of the combustion in such engines). As such, the physical exhaust velocity is higher than the "effective exhaust velocity", i.e. that "velocity" suggested by the specific impulse. In any case, the momentum exchanged and the mass used to generate it are physically real measurements. Typically, rocket nozzles work better when the ambient pressure is lower, i.e. better in space than in atmosphere. Ion engines operate without a nozzle, although they have other sources of losses, such that the effective exhaust velocity is lower than the physical exhaust velocity. Although the car industry almost never uses specific impulse on any practical level, the measure can be defined, and makes good contrast against other engine types. Car engines breathe external air to combust their fuel, and (via the wheels) react against the ground. As such, the only meaningful way to interpret "specific impulse" is as "thrust per fuel flow", although one must also specify if the force is measured at the crankshaft or at the wheels, since there are transmission losses. Such a measure corresponds to fuel mileage . In an aerodynamic context, there are similarities to both cars and rockets. Like cars, airplane engines breathe outside air; unlike cars they react only against fluids flowing through the engine (including the propellers as applicable). As such, there are several possible ways to interpret "specific impulse": as thrust per fuel flow, as thrust per breathing-flow, or as thrust per "turbine-flow" (i.e. excluding air through the propeller/bypass fan). Since the air breathed is not a direct cost, with wide engineering leeway on how much to breathe, the industry traditionally chooses the "thrust per fuel flow" interpretation with its focus on cost efficiency. In this interpretation, the resulting specific impulse numbers are much higher than for rocket engines, although this comparison is essentially meaningless since the interpretations — with or without reaction mass — are so different. As with all kinds of engines, there are many engineering choices and tradeoffs that affect specific impulse. Nonlinear air resistance and the engine's inability to keep a high specific impulse at a fast burn rate are limiting factors to the fuel consumption rate. As with rocket engines, the interpretation of specific impulse as a "velocity" has no physical meaning. Since the usual interpretation excludes much of the reaction mass, the physical velocity of the reactants downstream is much lower than the I sp "velocity". Specific impulse should not be confused with energy efficiency , which can decrease as specific impulse increases, since propulsion systems that give high specific impulse require high energy to do so. [ 3 ] Specific impulse should not be confused with total thrust . Thrust is the force supplied by the engine and depends on the propellant mass flow through the engine. Specific impulse measures the thrust per propellant mass flow. Thrust and specific impulse are related by the design and propellants of the engine in question, but this relationship is tenuous: in most cases, high thrust and high specific impulse are mutually exclusive engineering goals. 
For example, LH 2 /LO 2 bipropellant produces higher I sp (due to higher chemical energy and lower exhaust molecular mass) but lower thrust than RP-1 / LO 2 (due to higher density and propellant flow). In many cases, propulsion systems with very high specific impulse—some ion thrusters reach 25x-35x better I sp than chemical engines—produce correspondingly low thrust. [ 4 ] When calculating specific impulse, only propellant carried with the vehicle before use is counted, in the standard interpretation. This usage best corresponds to the cost of operating the vehicle. For a chemical rocket, unlike a plane or car, the propellant mass therefore would include both fuel and oxidizer . For any vehicle, optimising for specific impulse is generally not the same as optimising for total performance or total cost. In rocketry, a heavier engine with a higher specific impulse may not be as effective in gaining altitude, distance, or velocity as a lighter engine with a lower specific impulse, especially if the latter engine possesses a higher thrust-to-weight ratio . This is a significant reason for most rocket designs having multiple stages. The first stage can be optimised for high thrust to effectively fight gravity drag and air drag, while the later stages, operating strictly in orbit and in vacuum, can be optimised much more easily for higher specific impulse, especially for high delta-v orbits. The amount of propellant could be defined either in units of mass or weight . If mass is used, specific impulse is an impulse per unit of mass, which dimensional analysis shows to be equivalent to units of speed; this interpretation is commonly labeled the effective exhaust velocity . If a force-based unit system is used, impulse is divided by propellant weight (weight is a measure of force), resulting in units of time. The problem with weight, as a measure of quantity, is that it depends on the acceleration applied to the propellant, which is arbitrary with no relation to the design of the engine. Historically, standard gravity was the reference conversion between weight and mass. But since technology has progressed to the point that we can measure Earth gravity's variation across the surface, and where such differences can cause differences in practical engineering projects (not to mention science projects on other solar bodies), modern science and engineering focus on mass as the measure of quantity, so as to remove the acceleration dependence. As such, measuring specific impulse by propellant mass gives it the same meaning for a car at sea level, an airplane at cruising altitude, or a helicopter on Mars . No matter the choice of mass or weight, the resulting quotient of "velocity" or "time" has no physical meaning. Due to various losses in real engines, the actual exhaust velocity is different from the I sp "velocity" (and for cars there isn't even a sensible definition of "actual exhaust velocity"). Rather, the specific impulse is just that: a physical momentum from a physical quantity of propellant (be that in mass or weight). The particular habit in rocketry of measuring I sp in seconds results from the above historical circumstances. Since metric and imperial units had in common only the unit of time, this was the most convenient way to make international comparisons. However, the choice of the reference acceleration conversion ( g 0 ) is arbitrary, and as above, the interpretation in terms of time or speed has no physical meaning. 
The most common unit for specific impulse is the second, as values are identical regardless of whether the calculations are done in SI , imperial , or US customary units. Nearly all manufacturers quote their engine performance in seconds, and the unit is also useful for specifying aircraft engine performance. [ 5 ] The use of metres per second to specify effective exhaust velocity is also reasonably common. The unit is intuitive when describing rocket engines, although the effective exhaust speed of the engines may be significantly different from the actual exhaust speed, especially in gas-generator cycle engines. For airbreathing jet engines , the effective exhaust velocity is not physically meaningful, although it can be used for comparison purposes. [ 6 ] Metres per second are numerically equivalent to newton-seconds per kg (N·s/kg), and SI measurements of specific impulse can be written in terms of either units interchangeably. This unit highlights the definition of specific impulse as impulse per unit mass of propellant. Specific fuel consumption is inversely proportional to specific impulse and has units of g/(kN·s) or lb/(lbf·h). Specific fuel consumption is used extensively for describing the performance of air-breathing jet engines. [ 7 ] Specific impulse, measured in seconds, can be thought of as how many seconds one kilogram of fuel can produce one kilogram of thrust. Or, more precisely, how many seconds a given propellant, when paired with a given engine, can accelerate its own initial mass at 1 g. The longer it can accelerate its own mass, the more delta-V it delivers to the whole system. In other words, given a particular engine and a mass of a particular propellant, specific impulse measures for how long a time that engine can exert a continuous force (thrust) until fully burning that mass of propellant. A given mass of a more energy-dense propellant can burn for a longer duration than some less energy-dense propellant made to exert the same force while burning in an engine. Different engine designs burning the same propellant may not be equally efficient at directing their propellant's energy into effective thrust. For all vehicles, specific impulse (impulse per unit weight-on-Earth of propellant) in seconds can be defined by the following equation: [ 8 ] I s p = F a v g m ˙ ⋅ g 0 {\displaystyle I_{sp}={\frac {F_{avg}}{{\dot {m}}\cdot g_{0}}}} Where: I s p = I t o t a l m ⋅ g 0 {\displaystyle I_{sp}={\frac {I_{total}}{m\cdot g_{0}}}} Where: I sp in seconds is the amount of time a rocket engine can generate thrust, given a quantity of propellant the weight of which is equal to the engine's thrust. The advantage of this formulation is that it may be used for rockets, where all the reaction mass is carried on board, as well as airplanes, where most of the reaction mass is taken from the atmosphere. In addition, giving the result as a unit of time makes the result easily comparable between calculations in SI units, imperial units, US customary units or other unit framework. The English unit pound mass is more commonly used than the slug, and when using pounds per second for mass flow rate, it is more convenient to express standard gravity as 1 pound-force per pound-mass. Note that this is equivalent to 32.17405 ft/s2, but expressed in more convenient units. This gives: F thrust = I sp ⋅ m ˙ ⋅ ( 1 l b f l b m ) . 
{\displaystyle F_{\text{thrust}}=I_{\text{sp}}\cdot {\dot {m}}\cdot \left(1\mathrm {\frac {lbf}{lbm}} \right).} In rocketry, the only reaction mass is the propellant, so the specific impulse is calculated using an alternative method, giving results with units of seconds. Specific impulse is defined as the thrust integrated over time per unit weight -on-Earth of the propellant: [ 9 ] I sp = v e g 0 , {\displaystyle I_{\text{sp}}={\frac {v_{\text{e}}}{g_{0}}},} where In rockets, due to atmospheric effects, the specific impulse varies with altitude, reaching a maximum in a vacuum. This is because the exhaust velocity is not simply a function of the chamber pressure, but is a function of the difference between the interior and exterior of the combustion chamber . Values are usually given for operation at sea level ("sl") or in a vacuum ("vac"). Because of the geocentric factor of g 0 in the equation for specific impulse, many prefer an alternative definition. The specific impulse of a rocket can be defined in terms of thrust per unit mass flow of propellant. This is an equally valid (and in some ways somewhat simpler) way of defining the effectiveness of a rocket propellant. For a rocket, the specific impulse defined in this way is simply the effective exhaust velocity relative to the rocket, v e . "In actual rocket nozzles, the exhaust velocity is not really uniform over the entire exit cross section and such velocity profiles are difficult to measure accurately. A uniform axial velocity, v e , is assumed for all calculations which employ one-dimensional problem descriptions. This effective exhaust velocity represents an average or mass equivalent velocity at which propellant is being ejected from the rocket vehicle." [ 10 ] The two definitions of specific impulse are proportional to one another, and related to each other by: v e = g 0 ⋅ I sp , {\displaystyle v_{\text{e}}=g_{0}\cdot I_{\text{sp}},} where This equation is also valid for air-breathing jet engines, but is rarely used in practice. (Note that different symbols are sometimes used; for example, c is also sometimes seen for exhaust velocity. While the symbol I sp {\displaystyle I_{\text{sp}}} might logically be used for specific impulse in units of (N·s 3 )/(m·kg); to avoid confusion, it is desirable to reserve this for specific impulse measured in seconds.) It is related to the thrust , or forward force on the rocket by the equation: [ 11 ] F thrust = v e ⋅ m ˙ , {\displaystyle F_{\text{thrust}}=v_{\text{e}}\cdot {\dot {m}},} where m ˙ {\displaystyle {\dot {m}}} is the propellant mass flow rate, which is the rate of decrease of the vehicle's mass. A rocket must carry all its propellant with it, so the mass of the unburned propellant must be accelerated along with the rocket itself. Minimizing the mass of propellant required to achieve a given change in velocity is crucial to building effective rockets. The Tsiolkovsky rocket equation shows that for a rocket with a given empty mass and a given amount of propellant, the total change in velocity it can accomplish is proportional to the effective exhaust velocity. A spacecraft without propulsion follows an orbit determined by its trajectory and any gravitational field. Deviations from the corresponding velocity pattern (these are called Δ v ) are achieved by sending exhaust mass in the direction opposite to that of the desired velocity change. When an engine is run within the atmosphere, the exhaust velocity is reduced by atmospheric pressure, in turn reducing specific impulse. 
This is a reduction in the effective exhaust velocity, versus the actual exhaust velocity achieved in vacuum conditions. In the case of gas-generator cycle rocket engines, more than one exhaust gas stream is present as turbopump exhaust gas exits through a separate nozzle. Calculating the effective exhaust velocity requires averaging the two mass flows as well as accounting for any atmospheric pressure. [ 12 ] For air-breathing jet engines, particularly turbofans , the actual exhaust velocity and the effective exhaust velocity are different by orders of magnitude. This happens for several reasons. First, a good deal of additional momentum is obtained by using air as reaction mass, such that combustion products in the exhaust have more mass than the burned fuel. Next, inert gases in the atmosphere absorb heat from combustion, and through the resulting expansion provide additional thrust. Lastly, for turbofans and other designs there is even more thrust created by pushing against intake air which never sees combustion directly. These all combine to allow a better match between the airspeed and the exhaust speed, which saves energy/propellant and enormously increases the effective exhaust velocity while reducing the actual exhaust velocity. [ 13 ] Again, this is because the mass of the air is not counted in the specific impulse calculation, thus attributing all of the thrust momentum to the mass of the fuel component of the exhaust, and omitting the reaction mass, inert gas, and effect of driven fans on overall engine efficiency from consideration. Essentially, the momentum of engine exhaust includes a lot more than just fuel, but specific impulse calculation ignores everything but the fuel. Even though the effective exhaust velocity for an air-breathing engine seems nonsensical in the context of actual exhaust velocity, this is still useful for comparing absolute fuel efficiency of different engines. A related measure, the density specific impulse , sometimes also referred to as Density Impulse and usually abbreviated as I s d is the product of the average specific gravity of a given propellant mixture and the specific impulse. [ 14 ] While less important than the specific impulse, it is an important measure in launch vehicle design, as a low specific impulse implies that bigger tanks will be required to store the propellant, which in turn will have a detrimental effect on the launch vehicle's mass ratio . [ 15 ] Specific impulse is inversely proportional to specific fuel consumption (SFC) by the relationship I sp = 1/( g o ·SFC) for SFC in kg/(N·s) and I sp = 3600/SFC for SFC in lb/(lbf·hr). An example of a specific impulse measured in time is 453 seconds, which is equivalent to an effective exhaust velocity of 4.440 km/s (14,570 ft/s), for the RS-25 engines when operating in a vacuum. [ 36 ] An air-breathing jet engine typically has a much larger specific impulse than a rocket; for example a turbofan jet engine may have a specific impulse of 6,000 seconds or more at sea level whereas a rocket would be between 200 and 400 seconds. [ 37 ] An air-breathing engine is thus much more propellant efficient than a rocket engine, because the air serves as reaction mass and oxidizer for combustion which does not have to be carried as propellant, and the actual exhaust speed is much lower, so the kinetic energy the exhaust carries away is lower and thus the jet engine uses far less energy to generate thrust. 
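As a worked illustration of the relations quoted above (I sp = F/(ṁ·g 0 ), v e = g 0 ·I sp , and I sp = 3600/SFC for SFC in lb/(lbf·h)), the short Python sketch below uses the RS-25 vacuum figure cited in the text together with a rounded, purely illustrative propellant mass flow and turbofan cruise SFC; none of these example numbers come from the article itself.

```python
g0 = 9.80665                     # standard gravity, m/s^2

# Rocket example: RS-25 vacuum specific impulse quoted in the text.
isp_rs25 = 453.0                 # s
v_e = g0 * isp_rs25              # effective exhaust velocity, m/s
print(f"RS-25: effective exhaust velocity ~ {v_e/1000:.2f} km/s")   # ~4.44 km/s

# Thrust from mass flow: F = Isp * mdot * g0 (mdot chosen for illustration).
mdot = 100.0                     # illustrative propellant mass flow, kg/s
thrust = isp_rs25 * mdot * g0
print(f"Thrust at {mdot:.0f} kg/s: {thrust/1000:.0f} kN")

# Jet example: specific impulse from thrust-specific fuel consumption,
# using an assumed cruise TSFC of 0.6 lb/(lbf h) for a modern turbofan.
sfc_lb_per_lbf_hr = 0.6
isp_jet = 3600.0 / sfc_lb_per_lbf_hr
print(f"Turbofan: Isp ~ {isp_jet:.0f} s")   # ~6000 s, per-fuel-burned convention
```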
[ 38 ] While the actual exhaust velocity is lower for air-breathing engines, the effective exhaust velocity is very high for jet engines. This is because the effective exhaust velocity calculation assumes that the carried propellant is providing all the reaction mass and all the thrust. Hence effective exhaust velocity is not physically meaningful for air-breathing engines; nevertheless, it is useful for comparison with other types of engines. [ 39 ] The highest specific impulse for a chemical propellant ever test-fired in a rocket engine was 542 seconds (5.32 km/s) with a tripropellant of lithium , fluorine , and hydrogen . However, this combination is impractical. Lithium and fluorine are both extremely corrosive, lithium ignites on contact with air, fluorine ignites on contact with most fuels, and hydrogen, while not hypergolic, is an explosive hazard. Fluorine and the hydrogen fluoride (HF) in the exhaust are very toxic, which damages the environment, makes work around the launch pad difficult, and makes getting a launch license that much more difficult. The rocket exhaust is also ionized, which would interfere with radio communication with the rocket. [ 40 ] [ 41 ] [ 42 ] Nuclear thermal rocket engines differ from conventional rocket engines in that energy is supplied to the propellants by an external nuclear heat source instead of the heat of combustion . [ 43 ] The nuclear rocket typically operates by passing liquid hydrogen gas through an operating nuclear reactor. Testing in the 1960s yielded specific impulses of about 850 seconds (8,340 m/s), about twice that of the Space Shuttle engines. [ 44 ] A variety of other rocket propulsion methods, such as ion thrusters , give much higher specific impulse but with much lower thrust; for example the Hall-effect thruster on the SMART-1 satellite has a specific impulse of 1,640 s (16.1 km/s) but a maximum thrust of only 68 mN (0.015 lbf). [ 45 ] The variable specific impulse magnetoplasma rocket (VASIMR) engine currently in development will theoretically yield 20 to 300 km/s (66,000 to 984,000 ft/s), and a maximum thrust of 5.7 N (1.3 lbf). [ 46 ]
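The figures quoted for the SMART-1 Hall-effect thruster can be tied together with the same relations. The minimal sketch below (illustrative only) estimates the propellant mass flow implied by the quoted thrust and specific impulse, which makes the high-I sp , low-thrust trade-off concrete.

```python
g0 = 9.80665            # standard gravity, m/s^2

# Values quoted in the text for the SMART-1 Hall-effect thruster.
isp = 1640.0            # s
thrust = 0.068          # N (68 mN)

v_e = g0 * isp                      # effective exhaust velocity, m/s
mdot = thrust / v_e                 # propellant mass flow, kg/s, from F = v_e * mdot
print(f"effective exhaust velocity ~ {v_e/1000:.1f} km/s")   # ~16.1 km/s
print(f"propellant mass flow ~ {mdot*1e6:.1f} mg/s")          # a few mg/s

# A high-Isp, low-thrust engine spends propellant very slowly, which is why
# such thrusters suit long, low-acceleration missions rather than launch.
```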
https://en.wikipedia.org/wiki/Specific_impulse
In theoretical chemistry , Specific ion Interaction Theory ( SIT theory) is a theory used to estimate single- ion activity coefficients in electrolyte solutions at relatively high concentrations . [ 1 ] [ 2 ] It does so by taking into consideration interaction coefficients between the various ions present in solution. Interaction coefficients are determined from equilibrium constant values obtained with solutions at various ionic strengths . The determination of SIT interaction coefficients also yields the value of the equilibrium constant at infinite dilution. This theory arises from the need to derive activity coefficients of solutes when their concentrations are too high to be predicted accurately by the Debye–Hückel theory . Activity coefficients are needed because an equilibrium constant is defined in chemical thermodynamics as the ratio of activities but is usually measured using concentrations . The protonation of a monobasic acid will be used to simplify the presentation. The equilibrium for protonation of the conjugate base , A − , of the acid HA may be written as A − + H + ⇌ HA, for which the association constant K is defined as K = {HA} / ( {H + } {A − } ), where {HA}, {H + }, and {A − } represent the activity of the corresponding chemical species. The role of water in the association equilibrium is ignored as in all but the most concentrated solutions the activity of water is constant. K is defined here as an association constant, the reciprocal of an acid dissociation constant . Each activity term { } can be expressed as the product of a concentration [ ] and an activity coefficient γ. For example, {A − } = [A − ] γ A− , where the square brackets represent a concentration and γ is an activity coefficient. Thus the equilibrium constant can be expressed as a product of a concentration ratio and an activity coefficient ratio. Taking the logarithms of the concentration quotient: log 10 K = log 10 K 0 + log 10 γ H+ + log 10 γ A− − log 10 γ HA , where: K 0 is the hypothetical value that the equilibrium constant K would have if the solution of the acid HA was infinitely diluted and the activity coefficients of all the species in solution were equal to one. It is a common practice to determine equilibrium constants in solutions containing an electrolyte at high ionic strength such that the activity coefficients are effectively constant. However, when the ionic strength is changed the measured equilibrium constant will also change, so there is a need to estimate individual (single ion) activity coefficients. Debye–Hückel theory provides a means to do this, but it is accurate only at very low concentrations. Hence the need for an extension to Debye–Hückel theory. Two main approaches have been used: SIT theory, discussed here, and the Pitzer equations . [ 3 ] [ 4 ] SIT theory was first proposed by Brønsted [ 5 ] in 1922 and was further developed by Guggenheim in 1955. [ 1 ] Scatchard [ 6 ] extended the theory in 1936 to allow the interaction coefficients to vary with ionic strength. The theory was mainly of theoretical interest until 1945 because of the difficulty of determining equilibrium constants before the glass electrode was invented. Subsequently, Ciavatta [ 2 ] developed the theory further in 1980. The activity coefficient of the j th ion in solution is written as γ j when concentrations are on the molal concentration scale and as y j when concentrations are on the molar concentration scale. (The molality scale is preferred in thermodynamics because molal concentrations are independent of temperature). 
The basic idea of SIT theory is that the activity coefficient can be expressed as log 10 γ j = − z j 2 (0.51 √I) / (1 + 1.5 √I) + Σ k ε(j,k) m k on the molality scale, or log 10 y j = − z j 2 (0.51 √I) / (1 + 1.5 √I) + Σ k b(j,k) c k on the molarity scale, where z is the electrical charge on the ion, I is the ionic strength, ε and b are interaction coefficients and m and c are concentrations. The summation extends over the other ions present in solution, which includes the ions produced by the background electrolyte. The first term in these expressions comes from Debye–Hückel theory. The second term shows how the contributions from "interaction" are dependent on concentration. Thus, the interaction coefficients are used as corrections to Debye–Hückel theory when concentrations are higher than the region of validity of that theory. The activity coefficient of a neutral species can be assumed to depend linearly on ionic strength, as in log 10 γ = k m I , where k m is a Sechenov coefficient. [ 7 ] In the example of a monobasic acid HA, assuming that the background electrolyte is the salt NaNO 3 , the interaction coefficients will be ε(H + , NO 3 − ) and ε(A − , Na + ), for interaction between H + and NO 3 − , and between A − and Na + . Firstly, equilibrium constants are determined at a number of different ionic strengths, at a chosen temperature and particular background electrolyte. The interaction coefficients are then determined by fitting to the observed equilibrium constant values. The procedure also provides the value of K at infinite dilution. It is not limited to monobasic acids [ 8 ] and can also be applied to metal complexes. [ 9 ] The SIT and Pitzer approaches have been compared recently. [ 10 ] The Bromley equation [ 11 ] has also been compared to both SIT and Pitzer equations . [ 12 ] It has been shown that the SIT equation is a practical simplification of a more complicated hypothesis, [ 13 ] that is rigorously applicable only at trace concentrations of reactant and product species immersed in a surrounding electrolyte medium.
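The following Python sketch illustrates the shape of a SIT calculation. It is not code from any cited source; the Debye–Hückel constants (0.51 and 1.5, appropriate near 25 °C) and the interaction coefficient used in the example are illustrative assumptions.

```python
import math

def log10_gamma_sit(z, ionic_strength, interactions):
    """
    SIT estimate of log10 of a single-ion activity coefficient (molality scale).

    z              : charge of the ion
    ionic_strength : molal ionic strength I
    interactions   : list of (epsilon, m_k) pairs, one per counter-ion present,
                     where epsilon is the SIT interaction coefficient (kg/mol)
                     and m_k the molality of that ion.
    """
    sqrt_I = math.sqrt(ionic_strength)
    debye_huckel = -z**2 * 0.51 * sqrt_I / (1.0 + 1.5 * sqrt_I)   # ~25 degC constants
    interaction_sum = sum(eps * m for eps, m in interactions)
    return debye_huckel + interaction_sum

# Illustrative example: H+ in a 1.0 mol/kg NaNO3 background electrolyte,
# with an assumed epsilon(H+, NO3-) of 0.07 kg/mol.
I = 1.0
log_gamma_H = log10_gamma_sit(z=+1, ionic_strength=I, interactions=[(0.07, 1.0)])
print(f"log10 gamma(H+) ~ {log_gamma_H:.3f}  (gamma ~ {10**log_gamma_H:.2f})")
```

Fitting ε to equilibrium constants measured at several ionic strengths, as described above, would then amount to a least-squares fit of this expression to the observed log K values.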
https://en.wikipedia.org/wiki/Specific_ion_interaction_theory
Specific modulus is a materials property consisting of the elastic modulus per mass density of a material. It is also known as the stiffness to weight ratio or specific stiffness . High specific modulus materials find wide use in aerospace applications where minimum structural weight is required. The dimensional analysis yields units of distance squared per time squared. The equation can be written as: specific modulus = E / ρ , where E {\displaystyle E} is the elastic modulus and ρ {\displaystyle \rho } is the density. The utility of specific modulus is to find materials which will produce structures with minimum weight, when the primary design limitation is deflection or physical deformation, rather than load at breaking—this is also known as a "stiffness-driven" structure. Many common structures are stiffness-driven over much of their use, such as airplane wings, bridges, masts, and bicycle frames. To emphasize the point, consider the issue of choosing a material for building an airplane. Aluminum seems obvious because it is "lighter" than steel, but steel is stronger than aluminum, so one could imagine using thinner steel components to save weight without sacrificing (tensile) strength. The problem with this idea is that there would be a significant sacrifice of stiffness, allowing, e.g., wings to flex unacceptably. Because it is stiffness, not tensile strength, that drives this kind of decision for airplanes, we say that they are stiffness-driven. The connection details of such structures may be more sensitive to strength (rather than stiffness) issues due to effects of stress risers . Specific modulus is not to be confused with specific strength , a term that compares strength to density. The use of specific stiffness in tension applications is straightforward. Both stiffness in tension and total mass for a given length are directly proportional to cross-sectional area . Thus performance of a beam in tension will depend on Young's modulus divided by density . Specific stiffness can be used in the design of beams subject to bending or Euler buckling , since bending and buckling are stiffness-driven. However, the role that density plays changes depending on the problem's constraints. Examining the formulas for buckling and deflection , we see that the force required to achieve a given deflection or to achieve buckling depends directly on Young's modulus . Examining the density formula, we see that the mass of a beam depends directly on the density. Thus if a beam's cross-sectional dimensions are constrained and weight reduction is the primary goal, performance of the beam will depend on Young's modulus divided by density . By contrast, if a beam's weight is fixed, its cross-sectional dimensions are unconstrained, and increased stiffness is the primary goal, the performance of the beam will depend on Young's modulus divided by either density squared or cubed. This is because a beam's overall stiffness , and thus its resistance to Euler buckling when subjected to an axial load and to deflection when subjected to a bending moment , is directly proportional to both the Young's modulus of the beam's material and the second moment of area (area moment of inertia) of the beam. Comparing the list of area moments of inertia with formulas for area gives the appropriate relationship for beams of various configurations. Consider a beam whose cross-sectional area increases in two dimensions, e.g. a solid round beam or a solid square beam. 
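The airplane example above can be made quantitative. The sketch below (illustrative only; the material values are rounded handbook figures, not data from the article) computes E/ρ for aluminium and structural steel and shows that, despite their very different absolute stiffness, their specific moduli are nearly identical.

```python
# Rounded, illustrative material properties.
materials = {
    #            E (Pa),  rho (kg/m^3)
    "aluminium": (69e9,   2700.0),
    "steel":     (200e9,  7850.0),
}

for name, (E, rho) in materials.items():
    specific_modulus = E / rho          # units: (m/s)^2, equivalently J/kg
    print(f"{name:9s}: E/rho = {specific_modulus/1e6:.1f} MJ/kg")

# Both come out near 25 MJ/kg, which is why simply substituting thin steel for
# aluminium does not buy weight savings in a stiffness-driven structure.
```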
By combining the area and density formulas, we can see that, for a given mass and length, the cross-sectional area of this beam scales as the inverse of the density, so its radius will vary with approximately the inverse of the square root of the density. By examining the formulas for area moment of inertia , we can see that the stiffness of this beam will vary approximately as the fourth power of the radius. Thus the second moment of area will vary approximately as the inverse of the density squared, and performance of the beam will depend on Young's modulus divided by density squared . Consider a beam whose cross-sectional area increases in one dimension, e.g. a thin-walled round beam or a rectangular beam whose height but not width is varied. By combining the area and density formulas, we can see that the radius or height of this beam will vary with approximately the inverse of the density for a given mass. By examining the formulas for area moment of inertia , we can see that the stiffness of this beam will vary approximately as the third power of the radius or height. Thus the second moment of area will vary approximately as the inverse of the cube of the density, and performance of the beam will depend on Young's modulus divided by density cubed . However, caution must be exercised in using this metric. Thin-walled beams are ultimately limited by local buckling and lateral-torsional buckling . These buckling modes depend on material properties other than stiffness and density, so the stiffness-over-density-cubed metric is at best a starting point for analysis. For example, most wood species score better than most metals on this metric, but many metals can be formed into useful beams with much thinner walls than could be achieved with wood, given wood's greater vulnerability to local buckling. The performance of thin-walled beams can also be greatly modified by relatively minor variations in geometry such as flanges and stiffeners. [ 1 ] [ 2 ] [ 3 ] Note that the ultimate strength of a beam in bending depends on the ultimate strength of its material and its section modulus , not its stiffness and second moment of area. Its deflection, however, and thus its resistance to Euler buckling, will depend on these two latter values.
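For the mass-constrained beam cases just described, the relevant figures of merit become E/ρ² and E/ρ³ rather than E/ρ. A small sketch (again with rounded, illustrative property values; "softwood" here stands for a generic species loaded along the grain) shows how the ranking of materials changes with the exponent, subject to the local-buckling caveats noted above.

```python
# Rounded, illustrative properties: Young's modulus (Pa) and density (kg/m^3).
materials = {
    "steel":     (200e9, 7850.0),
    "aluminium": (69e9,  2700.0),
    "softwood":  (10e9,   450.0),   # generic value along the grain
}

for exponent in (1, 2, 3):
    ranking = sorted(
        ((E / rho**exponent, name) for name, (E, rho) in materials.items()),
        reverse=True,
    )
    order = " > ".join(name for _, name in ranking)
    print(f"E/rho^{exponent}: {order}")

# With E/rho the three materials are roughly comparable; with E/rho^2 and
# E/rho^3 the low-density material pulls far ahead, which is the effect
# described in the text for weight-limited, dimension-unconstrained beams.
```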
https://en.wikipedia.org/wiki/Specific_modulus
In the gravitational two-body problem , the specific orbital energy ε {\displaystyle \varepsilon } (or specific vis-viva energy ) of two orbiting bodies is the constant quotient of their mechanical energy (the sum of their mutual potential energy , ε p {\displaystyle \varepsilon _{p}} , and their kinetic energy , ε k {\displaystyle \varepsilon _{k}} ) to their reduced mass . [ 1 ] According to the orbital energy conservation equation (also referred to as vis-viva equation), it does not vary with time: ε = ε k + ε p = v 2 2 − μ r = − 1 2 μ 2 h 2 ( 1 − e 2 ) = − μ 2 a {\displaystyle {\begin{aligned}\varepsilon &=\varepsilon _{k}+\varepsilon _{p}\\&={\frac {v^{2}}{2}}-{\frac {\mu }{r}}=-{\frac {1}{2}}{\frac {\mu ^{2}}{h^{2}}}\left(1-e^{2}\right)=-{\frac {\mu }{2a}}\end{aligned}}} where It is a kind of specific energy , typically expressed in units of MJ kg {\displaystyle {\frac {\text{MJ}}{\text{kg}}}} (mega joule per kilogram) or km 2 s 2 {\displaystyle {\frac {{\text{km}}^{2}}{{\text{s}}^{2}}}} (squared kilometer per squared second). For an elliptic orbit the specific orbital energy is the negative of the additional energy required to accelerate a mass of one kilogram to escape velocity ( parabolic orbit ). For a hyperbolic orbit , it is equal to the excess energy compared to that of a parabolic orbit. In this case the specific orbital energy is also referred to as characteristic energy . For an elliptic orbit , the specific orbital energy equation, when combined with conservation of specific angular momentum at one of the orbit's apsides , simplifies to: [ 2 ] ε = − μ 2 a {\displaystyle \varepsilon =-{\frac {\mu }{2a}}} where For an elliptic orbit with specific angular momentum h given by h 2 = μ p = μ a ( 1 − e 2 ) {\displaystyle h^{2}=\mu p=\mu a\left(1-e^{2}\right)} we use the general form of the specific orbital energy equation, ε = v 2 2 − μ r {\displaystyle \varepsilon ={\frac {v^{2}}{2}}-{\frac {\mu }{r}}} with the relation that the relative velocity at periapsis is v p 2 = h 2 r p 2 = h 2 a 2 ( 1 − e ) 2 = μ a ( 1 − e 2 ) a 2 ( 1 − e ) 2 = μ ( 1 − e 2 ) a ( 1 − e ) 2 {\displaystyle v_{p}^{2}={h^{2} \over r_{p}^{2}}={h^{2} \over a^{2}(1-e)^{2}}={\mu a\left(1-e^{2}\right) \over a^{2}(1-e)^{2}}={\mu \left(1-e^{2}\right) \over a(1-e)^{2}}} Thus our specific orbital energy equation becomes ε = μ a [ 1 − e 2 2 ( 1 − e ) 2 − 1 1 − e ] = μ a [ ( 1 − e ) ( 1 + e ) 2 ( 1 − e ) 2 − 1 1 − e ] = μ a [ 1 + e 2 ( 1 − e ) − 2 2 ( 1 − e ) ] = μ a [ e − 1 2 ( 1 − e ) ] {\displaystyle \varepsilon ={\frac {\mu }{a}}{\left[{1-e^{2} \over 2(1-e)^{2}}-{1 \over 1-e}\right]}={\frac {\mu }{a}}{\left[{(1-e)(1+e) \over 2(1-e)^{2}}-{1 \over 1-e}\right]}={\frac {\mu }{a}}{\left[{1+e \over 2(1-e)}-{2 \over 2(1-e)}\right]}={\frac {\mu }{a}}{\left[{e-1 \over 2(1-e)}\right]}} and finally with the last simplification we obtain: ε = − μ 2 a {\displaystyle \varepsilon =-{\mu \over 2a}} For a parabolic orbit this equation simplifies to ε = 0. {\displaystyle \varepsilon =0.} For a hyperbolic trajectory this specific orbital energy is either given by ε = μ 2 a . {\displaystyle \varepsilon ={\mu \over 2a}.} or the same as for an ellipse, depending on the convention for the sign of a . In this case the specific orbital energy is also referred to as characteristic energy (or C 3 {\displaystyle C_{3}} ) and is equal to the excess specific energy compared to that for a parabolic orbit. It is related to the hyperbolic excess velocity v ∞ {\displaystyle v_{\infty }} (the orbital velocity at infinity) by 2 ε = C 3 = v ∞ 2 . 
{\displaystyle 2\varepsilon =C_{3}=v_{\infty }^{2}.} It is relevant for interplanetary missions. Thus, if orbital position vector ( r {\displaystyle \mathbf {r} } ) and orbital velocity vector ( v {\displaystyle \mathbf {v} } ) are known at one position, and μ {\displaystyle \mu } is known, then the energy can be computed and from that, for any other position, the orbital speed. For an elliptic orbit the rate of change of the specific orbital energy with respect to a change in the semi-major axis is μ 2 a 2 {\displaystyle {\frac {\mu }{2a^{2}}}} where In the case of circular orbits, this rate is one half of the gravitation at the orbit. This corresponds to the fact that for such orbits the total energy is one half of the potential energy, because the kinetic energy is minus one half of the potential energy. If the central body has radius R , then the additional specific energy of an elliptic orbit compared to being stationary at the surface is − μ 2 a + μ R = μ ( 2 a − R ) 2 a R {\displaystyle -{\frac {\mu }{2a}}+{\frac {\mu }{R}}={\frac {\mu (2a-R)}{2aR}}} The quantity 2 a − R {\displaystyle 2a-R} is the height the ellipse extends above the surface, plus the periapsis distance (the distance the ellipse extends beyond the center of the Earth). For the Earth and a {\displaystyle a} just little more than R {\displaystyle R} the additional specific energy is ( g R / 2 ) {\displaystyle (gR/2)} ; which is the kinetic energy of the horizontal component of the velocity, i.e. 1 2 V 2 = 1 2 g R {\textstyle {\frac {1}{2}}V^{2}={\frac {1}{2}}gR} , V = g R {\displaystyle V={\sqrt {gR}}} . The International Space Station has an orbital period of 91.74 minutes (5504 s), hence by Kepler's Third Law the semi-major axis of its orbit is 6,738 km. [ citation needed ] The specific orbital energy associated with this orbit is −29.6 MJ/kg: the potential energy is −59.2 MJ/kg, and the kinetic energy 29.6 MJ/kg. Compared with the potential energy at the surface, which is −62.6 MJ/kg., the extra potential energy is 3.4 MJ/kg, and the total extra energy is 33.0 MJ/kg. The average speed is 7.7 km/s, the net delta-v to reach this orbit is 8.1 km/s (the actual delta-v is typically 1.5–2.0 km/s more for atmospheric drag and gravity drag ). The increase per meter would be 4.4 J/kg; this rate corresponds to one half of the local gravity of 8.8 m/s 2 . For an altitude of 100 km (radius is 6471 km): The energy is −30.8 MJ/kg: the potential energy is −61.6 MJ/kg, and the kinetic energy 30.8 MJ/kg. Compare with the potential energy at the surface, which is −62.6 MJ/kg. The extra potential energy is 1.0 MJ/kg, the total extra energy is 31.8 MJ/kg. The increase per meter would be 4.8 J/kg; this rate corresponds to one half of the local gravity of 9.5 m/s 2 . The speed is 7.8 km/s, the net delta-v to reach this orbit is 8.0 km/s. Taking into account the rotation of the Earth, the delta-v is up to 0.46 km/s less (starting at the equator and going east) or more (if going west). For Voyager 1 , with respect to the Sun: Hence: ε = ε k + ε p = v 2 2 − μ r = 146 k m 2 s − 2 − 8 k m 2 s − 2 = 138 k m 2 s − 2 {\displaystyle \varepsilon =\varepsilon _{k}+\varepsilon _{p}={\frac {v^{2}}{2}}-{\frac {\mu }{r}}=\mathrm {146\,km^{2}s^{-2}} -\mathrm {8\,km^{2}s^{-2}} =\mathrm {138\,km^{2}s^{-2}} } Thus the hyperbolic excess velocity (the theoretical orbital velocity at infinity) is given by v ∞ = 16.6 k m / s {\displaystyle v_{\infty }=\mathrm {16.6\,km/s} } However, Voyager 1 does not have enough velocity to leave the Milky Way . 
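The ISS figures quoted above follow directly from ε = −μ/(2a) and the vis-viva equation. A short Python check (Earth's μ is a standard constant; the semi-major axis is the one quoted in the text) reproduces the −29.6 MJ/kg energy, the 7.7 km/s speed, and the potential/kinetic split.

```python
import math

mu = 3.986004418e14        # Earth's standard gravitational parameter, m^3/s^2
a = 6_738_000.0            # ISS semi-major axis from the text, m

# Specific orbital energy of the (near-circular) orbit.
eps = -mu / (2 * a)
print(f"specific orbital energy ~ {eps/1e6:.1f} MJ/kg")      # ~ -29.6 MJ/kg

# Orbital speed from the vis-viva equation evaluated at r = a (circular orbit).
v = math.sqrt(mu * (2 / a - 1 / a))
print(f"orbital speed ~ {v/1000:.1f} km/s")                   # ~ 7.7 km/s

# Potential and kinetic parts, for comparison with the figures in the text.
eps_p = -mu / a
eps_k = v**2 / 2
print(f"potential ~ {eps_p/1e6:.1f} MJ/kg, kinetic ~ {eps_k/1e6:.1f} MJ/kg")
```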
The computed speed applies far away from the Sun, but at such a position that the potential energy with respect to the Milky Way as a whole has changed negligibly, and only if there is no strong interaction with celestial bodies other than the Sun. Assume: Then the time-rate of change of the specific energy of the rocket is v ⋅ a {\displaystyle \mathbf {v} \cdot \mathbf {a} } : an amount v ⋅ ( a − g ) {\displaystyle \mathbf {v} \cdot (\mathbf {a} -\mathbf {g} )} for the kinetic energy and an amount v ⋅ g {\displaystyle \mathbf {v} \cdot \mathbf {g} } for the potential energy. The change of the specific energy of the rocket per unit change of delta-v is v ⋅ a | a | {\displaystyle {\frac {\mathbf {v\cdot a} }{|\mathbf {a} |}}} which is | v | times the cosine of the angle between v and a . Thus, when applying delta-v to increase specific orbital energy, this is done most efficiently if a is applied in the direction of v , and when | v | is large. If the angle between v and g is obtuse, for example in a launch and in a transfer to a higher orbit, this means applying the delta-v as early as possible and at full capacity. See also gravity drag . When passing by a celestial body it means applying thrust when nearest to the body. When gradually making an elliptic orbit larger, it means applying thrust each time when near the periapsis. Such maneuver is called an Oberth maneuver or powered flyby. When applying delta-v to decrease specific orbital energy, this is done most efficiently if a is applied in the direction opposite to that of v , and again when | v | is large. If the angle between v and g is acute, for example in a landing (on a celestial body without atmosphere) and in a transfer to a circular orbit around a celestial body when arriving from outside, this means applying the delta-v as late as possible. When passing by a planet it means applying thrust when nearest to the planet. When gradually making an elliptic orbit smaller, it means applying thrust each time when near the periapsis. If a is in the direction of v : Δ ε = ∫ v d ( Δ v ) = ∫ v a d t {\displaystyle \Delta \varepsilon =\int v\,d(\Delta v)=\int v\,adt}
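The relations above lend themselves to a short numerical sketch. The Python fragment below (not part of the article) first computes ε from a position and velocity vector and recovers the orbital speed at a given radius, then checks Δε = ∫ v d(Δv) for the same prograde burn applied at two different speeds, illustrating the Oberth effect described above; the value of μ for Earth and all example figures are assumed purely for illustration.

import math

MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's standard gravitational parameter (assumed value)

def specific_orbital_energy(r_vec, v_vec, mu=MU_EARTH):
    # epsilon = v^2/2 - mu/r, constant along the orbit
    r = math.sqrt(sum(x * x for x in r_vec))
    v = math.sqrt(sum(x * x for x in v_vec))
    return 0.5 * v * v - mu / r

def speed_at_radius(epsilon, r, mu=MU_EARTH):
    # invert epsilon = v^2/2 - mu/r to get the orbital speed at radius r
    return math.sqrt(2.0 * (epsilon + mu / r))

def energy_gain(v0, delta_v, steps=10000):
    # integrate d(epsilon) = v d(dv) for a prograde burn starting at speed v0,
    # neglecting gravity losses during the (assumed short) burn
    dv = delta_v / steps
    v, gain = v0, 0.0
    for _ in range(steps):
        gain += v * dv
        v += dv
    return gain  # tends to v0*delta_v + delta_v**2/2

# ISS-like circular orbit at the ~6,738 km semi-major axis quoted above
eps = specific_orbital_energy((6.738e6, 0.0, 0.0), (0.0, 7.69e3, 0.0))
print(eps / 1e6)                           # about -29.6 MJ/kg
print(speed_at_radius(eps, 6.738e6))       # recovers roughly 7.7 km/s

# the same 500 m/s burn gains far more specific energy at high speed (Oberth effect)
print(energy_gain(10_000.0, 500.0) / 1e6)  # ~5.1 MJ/kg when moving at 10 km/s
print(energy_gain(2_000.0, 500.0) / 1e6)   # ~1.1 MJ/kg when moving at 2 km/s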
https://en.wikipedia.org/wiki/Specific_orbital_energy
In the natural sciences , including physiology and engineering , a specific quantity generally refers to an intensive quantity obtained by the ratio of an extensive quantity of interest by another extensive quantity (usually mass or volume ). If mass is the divisor quantity, the specific quantity is a massic quantity . [ 1 ] If volume is the divisor quantity, the specific quantity is a volumic quantity . [ citation needed ] For example, massic leaf area is leaf area divided by leaf mass and volumic leaf area is leaf area divided by leaf volume. Derived SI units involve reciprocal kilogram (kg −1 ), e.g., square metre per kilogram (m 2 · kg −1 ). Another kind of specific quantity, termed named specific quantity , is a generalization of the original concept. The divisor quantity is not restricted to mass, and name of the divisor is usually placed before "specific" in the full term (e.g., " thrust-specific fuel consumption "). Named and unnamed specific quantities are given for the terms below. Per unit of mass (short form of mass-specific ): Volume-specific quantity , the quotient of a physical quantity and volume ("per unit volume"), also called volumic quantities: [ 2 ] Area-specific quantity , the quotient of a physical quantity and area ("per unit area"), also called areic quantities: [ 2 ] Length-specific quantity , the quotient of a physical quantity and length ("per unit length"), also called lineic quantities: [ 2 ] In chemistry: Per unit of other types. The dividing unit is sometimes added before the term "specific", and sometimes omitted.
https://en.wikipedia.org/wiki/Specific_quantity
In chemistry , specific rotation ( [α] ) is a property of a chiral chemical compound . [ 1 ] : 244 It is defined as the change in orientation of monochromatic plane-polarized light , per unit distance–concentration product, as the light passes through a sample of a compound in solution. [ 2 ] : 2–65 Compounds which rotate the plane of polarization of a beam of plane polarized light clockwise are said to be dextrorotary , and correspond with positive specific rotation values, while compounds which rotate the plane of polarization of plane polarized light counterclockwise are said to be levorotary , and correspond with negative values. [ 1 ] : 245 If a compound is able to rotate the plane of polarization of plane-polarized light, it is said to be “ optically active ”. Specific rotation is an intensive property , distinguishing it from the more general phenomenon of optical rotation . As such, the observed rotation ( α ) of a sample of a compound can be used to quantify the enantiomeric excess of that compound, provided that the specific rotation ( [α] ) for the enantiopure compound is known. The variance of specific rotation with wavelength—a phenomenon known as optical rotatory dispersion —can be used to find the absolute configuration of a molecule. [ 3 ] : 124 The concentration of bulk sugar solutions is sometimes determined by comparison of the observed optical rotation with the known specific rotation. The CRC Handbook of Chemistry and Physics defines specific rotation as: For an optically active substance, defined by [α] θ λ = α/γl, where α is the angle through which plane polarized light is rotated by a solution of mass concentration γ and path length l. Here θ is the Celsius temperature and λ the wavelength of the light at which the measurement is carried out. [ 2 ] Values for specific rotation are reported in units of deg·mL·g −1 ·dm −1 , which are typically shortened to just degrees , wherein the other components of the unit are tacitly assumed. [ 4 ] These values should always be accompanied by information about the temperature, solvent and wavelength of light used, as all of these variables can affect the specific rotation. As noted above, temperature and wavelength are frequently reported as a superscript and subscript, respectively, while the solvent is reported parenthetically, or omitted if it happens to be water. Optical rotation is measured with an instrument called a polarimeter . There is a linear relationship between the observed rotation and the concentration of optically active compound in the sample. There is a nonlinear relationship between the observed rotation and the wavelength of light used. Specific rotation is calculated using either of two equations, depending on whether the sample is a pure chemical to be tested or that chemical dissolved in solution. This equation is used: In this equation, α (Greek letter "alpha") is the measured rotation in degrees, l is the path length in decimeters, and ρ (Greek letter "rho") is the density of the liquid in g/mL, for a sample at a temperature T (given in degrees Celsius) and wavelength λ (in nanometers). If the wavelength of the light used is 589 nanometers ( the sodium D line ), the symbol “D” is used. The sign of the rotation (+ or −) is always given. 
For solutions, a slightly different equation is used: In this equation, α (Greek letter "alpha") is the measured rotation in degrees, l is the path length in decimeters, c is the concentration in g/mL, T is the temperature at which the measurement was taken (in degrees Celsius), and λ is the wavelength in nanometers. [ 10 ] For practical and historical reasons, concentrations are often reported in units of g/100mL. In this case, a correction factor in the numerator is necessary: [ 1 ] : 248 [ 3 ] : 123 When using this equation, the concentration and the solvent may be provided in parentheses after the rotation. The rotation is reported using degrees, and no units of concentration are given (it is assumed to be g/100mL). The sign of the rotation (+ or −) is always given. If the wavelength of the light used is 589 nanometer (the sodium D line ), the symbol “D” is used. If the temperature is omitted, it is assumed to be at standard room temperature (20 °C). For example, the specific rotation of a compound would be reported in the scientific literature as: [ 11 ] If a compound has a very large specific rotation or a sample is very concentrated, the actual rotation of the sample may be larger than 180°, and so a single polarimeter measurement cannot detect when this has happened (for example, the values +270° and −90° are not distinguishable, nor are the values 361° and 1°). In these cases, measuring the rotation at several different concentrations allows one to determine the true value. Another method would be to use shorter path-lengths to perform the measurements. In cases of very small or very large angles, one can also use the variation of specific rotation with wavelength to facilitate measurement. Switching wavelength is particularly useful when the angle is small. Many polarimeters are equipped with a mercury lamp (in addition to the sodium lamp) for this purpose. If the specific rotation, [ α ] λ {\displaystyle {[\alpha ]_{\lambda }}} of a pure chiral compound is known, it is possible to use the observed specific rotation, [ α ] obs {\displaystyle {[\alpha ]_{\text{obs}}}} to determine the enantiomeric excess ( ee ), or "optical purity", of a sample of the compound, by using the formula: [ 3 ] : 124 For example, if a sample of bromobutane measured under standard conditions has an observed specific rotation of −9.2°, this indicates that the net effect is due to (9.2°/23.1°)(100%) = 40% of the R enantiomer . The remainder of the sample is a racemic mixture of the enantiomers (30% R and 30% S), which has no net contribution to the observed rotation. The enantiomeric excess is 40%; the total concentration of R is 70%. However, in practice the utility of this method is limited, as the presence of small amounts of highly rotating impurities can greatly affect the rotation of a given sample. Moreover, the optical rotation of a compound may be non-linearly dependent on its enantiomeric excess because of aggregation in solution. For these reasons other methods of determining the enantiomeric ratio, such as gas chromatography or HPLC with a chiral column, are generally preferred. The variation of specific rotation with wavelength is called optical rotatory dispersion (ORD). ORD can be used in conjunction with computational methods to determine the absolute configuration of certain compounds. [ 12 ]
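The solution formula and the enantiomeric-excess relation above translate directly into a small calculation. The Python sketch below is not from the article; it simply restates those formulas and reuses the bromobutane figures quoted above.

def specific_rotation_solution(alpha_obs_deg, path_dm, conc_g_per_100ml):
    # [alpha] = 100 * alpha / (l * c) when the concentration is given in g/100 mL
    return 100.0 * alpha_obs_deg / (path_dm * conc_g_per_100ml)

def enantiomeric_excess_percent(alpha_obs_deg, alpha_pure_deg):
    # ee (%) = 100 * [alpha]_obs / [alpha]_pure
    return 100.0 * alpha_obs_deg / alpha_pure_deg

# bromobutane example from above: observed -9.2 deg against -23.1 deg for the pure enantiomer
ee = enantiomeric_excess_percent(-9.2, -23.1)
print(round(ee))                               # ~40% enantiomeric excess of the R enantiomer
print(round(50 + ee / 2), round(50 - ee / 2))  # ~70% R and ~30% S overall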
https://en.wikipedia.org/wiki/Specific_rotation
Specific speed N s , is used to characterize turbomachinery speed. [ 1 ] Common commercial and industrial practices use dimensioned versions which are of equal utility. Specific speed is most commonly used in pump applications to define the suction specific speed [1] —a quasi non-dimensional number that categorizes pump impellers as to their type and proportions. In Imperial units it is defined as the speed in revolutions per minute at which a geometrically similar impeller would operate if it were of such a size as to deliver one gallon per minute against one foot of hydraulic head . In metric units flow may be in l/s or m 3 /s and head in m, and care must be taken to state the units used. Performance is defined as the ratio of the pump or turbine against a reference pump or turbine, which divides the actual performance figure to provide a unitless figure of merit . The resulting figure would more descriptively be called the "ideal-reference-device-specific performance." This resulting unitless ratio may loosely be expressed as a "speed," only because the performance of the reference ideal pump is linearly dependent on its speed, so that the ratio of [device-performance to reference-device-performance] is also the increased speed at which the reference device would need to operate, in order to produce the performance, instead of its reference speed of "1 unit." Specific speed is an index used to predict desired pump or turbine performance. i.e. it predicts the general shape of a pump's impeller . It is this impeller's "shape" that predicts its flow and head characteristics so that the designer can then select a pump or turbine most appropriate for a particular application. Once the desired specific speed is known, basic dimensions of the unit's components can be easily calculated. Several mathematical definitions of specific speed (all of them actually ideal-device-specific) have been created for different devices and applications. Low-specific speed radial flow impellers develop hydraulic head principally through centrifugal force . Pumps of higher specific speeds develop head partly by centrifugal force and partly by axial force. An axial flow or propeller pump with a specific speed of 10,000 or greater generates its head exclusively through axial forces. Radial impellers are generally low flow/high head designs whereas axial flow impellers are high flow/low head designs. In theory, the discharge of a "purely" centrifugal machine (pump, turbine, fan, etc.) is tangential to the rotation of the impeller whereas a "purely" axial-flow machine's discharge will be parallel to the axis of rotation. There are also machines that exhibit a combination of both properties and are specifically referred to as "mixed-flow" machines. Centrifugal pump impellers have specific speed values ranging from 500 to 10,000 (English units), with radial flow pumps at 500 to 4,000, mixed flow at 2,000 to 8,000, and axial flow pumps at 7,000 to 20,000. Values of specific speed less than 500 are associated with positive displacement pumps . As the specific speed increases, the ratio of the impeller outlet diameter to the inlet or eye diameter decreases. This ratio becomes 1.0 for a true axial flow impeller. The following equation gives a dimensionless specific speed: N s = n Q ( g H ) 3 / 4 {\displaystyle N_{s}={\frac {n{\sqrt {Q}}}{(gH)^{3/4}}}} where: Note that the units used affect the specific speed value in the above equation and consistent units should be used for comparisons. 
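As an illustrative sketch (not part of the article), the dimensionless form of the equation above can be evaluated directly; the pump figures used below are assumed purely for demonstration, and the rotational speed must be supplied in radians per second for the result to be truly dimensionless.

import math

def dimensionless_specific_speed(n_rad_s, q_m3_s, head_m, g=9.81):
    # N_s = n * sqrt(Q) / (g*H)^(3/4), with all quantities in coherent SI units
    return n_rad_s * math.sqrt(q_m3_s) / (g * head_m) ** 0.75

# e.g. a pump at 1450 rpm delivering 0.05 m^3/s against 20 m of head (illustrative)
n = 1450 * 2.0 * math.pi / 60.0
print(dimensionless_specific_speed(n, 0.05, 20.0))   # ~0.65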
Pump specific speed can be calculated using British gallons or using Metric units (m 3 /s and metres head), changing the values listed above. The suction specific speed is mainly used to see if there will be problems with cavitation during the pump's operation on the suction side. [ 2 ] It is defined by centrifugal and axial pumps' inherent physical characteristics and operating point. [ 3 ] The suction specific speed of a pump will define the range of operation in which a pump will experience stable operation. [ 4 ] The higher the suction specific speed, then the smaller the range of stable operation, up to the point of cavitation at 8500 (unitless). The envelope of stable operation is defined in terms of the best efficiency point of the pump. The suction specific speed is defined as: [ 5 ] N s s = n Q N P S H R 0.75 {\displaystyle N_{ss}={\frac {n{\sqrt {Q}}}{{NPSH}_{R}^{0.75}}}} where: The specific speed value for a turbine is the speed of a geometrically similar turbine which would produce unit power (one kilowatt) under unit head (one meter). [ 6 ] The specific speed of a turbine is given by the manufacturer (along with other ratings) and will always refer to the point of maximum efficiency. This allows accurate calculations to be made of the turbine's performance for a range of heads. Well-designed efficient machines typically use the following values: Impulse turbines have the lowest n s values, typically ranging from 1 to 10, a Pelton wheel is typically around 4, Francis turbines fall in the range of 10 to 100, while Kaplan turbines are at least 100 or more, all in imperial units. [ 7 ] To derive the Turbine specific speed equation we first start with the Power formula for water then using proportionalities with η,ρ, and g being constant they can be removed. The power of the turbine is therefore only dependent on the head H and flow Q. let: Now utilising the constant speed ratio at the turbine tip, the following proportionality can be made that the tangential velocity of the turbine blade is proportional to the square root of the head. But from rotational speed in RPM to linear speed in m/s the following equation and proportionality can be made. The flow through a turbine is the product of flow velocity and area so the flow through a turbine can be quantified. and as shown previously: So using the above 2, the following is obtained By combining the equation for diameter and tangential speed, with tangential speed and head a relationship between flow and head can be reached. Substituting this back into the power equation gives: To convert this proportionality into an equation a factor of proportionality, say K, must be introduced which gives: Now assuming our original proposition of producing 1 kilowatt at 1m head our speed N becomes our specific speed N s {\displaystyle N_{s}} . So substituting these values into our equation gives: Now we know K {\displaystyle K} we have a complete formula for specific speed, N s {\displaystyle N_{s}} : So rearranging for Specific Speed give the final following result: where: Expressed in English units , the "specific speed" is defined as n s = n √ P / h 5/4 Expressed in metric units , the "specific speed" is n s = 0.2626 n √ P / h 5/4 The factor 0.2626 is only required when the specific speed is to be adjusted to English units. In countries which use the metric system, the factor is omitted, and quoted specific speeds are correspondingly larger. 
[ citation needed ] Given the flow and head available at a specific hydro site, and the RPM requirement of the generator, the specific speed can be calculated. The result is the main criterion for turbine selection, or the starting point for the analytical design of a new turbine. Once the desired specific speed is known, basic dimensions of the turbine parts can be easily calculated. Turbine calculations:
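The worked turbine calculations referred to above are not reproduced here. As an independent, hedged sketch of the metric convention just described (n_s = n√P / h^(5/4) with speed in rpm, power in kW and head in m, the 0.2626 factor being omitted as stated above), one might write:

import math

def turbine_specific_speed_metric(n_rpm, power_kw, head_m):
    # n_s = n * sqrt(P) / h^(5/4), metric convention (adjustment factor omitted)
    return n_rpm * math.sqrt(power_kw) / head_m ** 1.25

# e.g. a 500 kW unit on 25 m of head driven at 600 rpm (illustrative figures)
print(turbine_specific_speed_metric(600.0, 500.0, 25.0))   # ~240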
https://en.wikipedia.org/wiki/Specific_speed
In the field of hydrogeology , storage properties are physical properties that characterize the capacity of an aquifer to release groundwater . These properties are storativity (S) , specific storage (S s ) and specific yield (S y ) . According to Groundwater , by Freeze and Cherry (1979), specific storage, S s {\displaystyle S_{s}} [m −1 ], of a saturated aquifer is defined as the volume of water that a unit volume of the aquifer releases from storage under a unit decline in hydraulic head. [ 1 ] They are often determined using some combination of field tests (e.g., aquifer tests ) and laboratory tests on aquifer material samples. Recently, these properties have been also determined using remote sensing data derived from Interferometric synthetic-aperture radar . [ 2 ] [ 3 ] Storativity or the storage coefficient is the volume of water released from storage per unit decline in hydraulic head in the aquifer, per unit area of the aquifer. Storativity is a dimensionless quantity, and is always greater than 0. For a confined aquifer or aquitard, storativity is the vertically integrated specific storage value. Specific storage is the volume of water released from one unit volume of the aquifer under one unit decline in head. This is related to both the compressibility of the aquifer and the compressibility of the water itself. Assuming the aquifer or aquitard is homogeneous : For an unconfined aquifer, storativity is approximately equal to the specific yield ( S y {\displaystyle S_{y}} ) since the release from specific storage ( S s {\displaystyle S_{s}} ) is typically orders of magnitude less ( S s b ≪ S y {\displaystyle S_{s}b\ll \!\ S_{y}} ). The specific storage is the amount of water that a portion of an aquifer releases from storage, per unit mass or volume of the aquifer, per unit change in hydraulic head, while remaining fully saturated. Mass specific storage is the mass of water that an aquifer releases from storage, per mass of aquifer, per unit decline in hydraulic head: where Volumetric specific storage (or volume-specific storage ) is the volume of water that an aquifer releases from storage, per volume of the aquifer, per unit decline in hydraulic head (Freeze and Cherry, 1979): where In hydrogeology , volumetric specific storage is much more commonly encountered than mass specific storage . Consequently, the term specific storage generally refers to volumetric specific storage . In terms of measurable physical properties, specific storage can be expressed as where The compressibility terms relate a given change in stress to a change in volume (a strain). These two terms can be defined as: where These equations relate a change in total or water volume ( V t {\displaystyle V_{t}} or V w {\displaystyle V_{w}} ) per change in applied stress (effective stress — σ e {\displaystyle \sigma _{e}} or pore pressure — p {\displaystyle p} ) per unit volume. The compressibilities (and therefore also S s ) can be estimated from laboratory consolidation tests (in an apparatus called a consolidometer), using the consolidation theory of soil mechanics (developed by Karl Terzaghi ). Aquifer-test analyses provide estimates of aquifer -system storage coefficients by examining the drawdown and recovery responses of water levels in wells to applied stresses, typically induced by pumping from nearby wells. [ 4 ] Elastic and inelastic skeletal storage coefficients can be estimated through a graphical method developed by Riley. 
[ 5 ] This method involves plotting the applied stress ( hydraulic head ) on the y-axis against vertical strain or displacement (compaction) on the x-axis. The inverse slopes of the dominant linear trends in these compaction-head trajectories indicate the skeletal storage coefficients. The displacements used to build the stress-strain curve can be determined by extensometers , [ 5 ] [ 6 ] InSAR [ 7 ] or levelling . [ 8 ] Laboratory consolidation tests yield measurements of the coefficient of consolidation within the inelastic range and provide estimates of vertical hydraulic conductivity . [ 9 ] The inelastic skeletal specific storage of the sample can be determined by calculating the ratio of vertical hydraulic conductivity to the coefficient of consolidation. Simulations of land subsidence incorporate data on aquifer-system storage and hydraulic conductivity . Calibrating these models can lead to optimized estimates of storage coefficients and vertical hydraulic conductivity . [ 8 ] [ 10 ] Specific yield , also known as the drainable porosity, is a ratio and is the volumetric fraction of the bulk aquifer volume that a given aquifer will yield when all the water is allowed to drain out of it under the forces of gravity: where It is primarily used for unconfined aquifers since the elastic storage component, S s {\displaystyle S_{s}} , is relatively small and usually has an insignificant contribution. Specific yield can be close to effective porosity, but several subtleties make this value more complicated than it seems. Some water always remains in the formation, even after drainage; it clings to the grains of sand and clay. Also, the value of a specific yield may not be fully realized for a very long time due to complications caused by unsaturated flow. Problems related to unsaturated flow are simulated using the numerical solution of Richards Equation , which requires estimation of the specific yield, or the numerical solution of the Soil Moisture Velocity Equation , which does not require estimation of the specific yield.
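The storage relations above can be summarized in a short numerical sketch. The expression used below, S_s = ρ_w·g·(α + n·β), is the standard Freeze and Cherry form implied but not spelled out in the text, and all numerical values are assumed for illustration.

RHO_WATER = 1000.0   # kg/m^3 (assumed)
G = 9.81             # m/s^2

def specific_storage(alpha_skeleton, porosity, beta_water=4.4e-10):
    # Ss = rho_w * g * (alpha + n*beta), in 1/m;
    # alpha: aquifer skeleton compressibility [1/Pa], beta: water compressibility [1/Pa]
    return RHO_WATER * G * (alpha_skeleton + porosity * beta_water)

def storativity_confined(ss_per_m, thickness_m):
    # S = Ss * b for a confined aquifer of thickness b (dimensionless)
    return ss_per_m * thickness_m

ss = specific_storage(alpha_skeleton=1e-8, porosity=0.3)   # sand-like values, illustrative
print(ss)                              # ~1e-4 per metre
print(storativity_confined(ss, 50.0))  # ~5e-3 for a 50 m thick confined aquifer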
https://en.wikipedia.org/wiki/Specific_storage
The specific strength is a material's (or muscle's) strength (force per unit area at failure) divided by its density . It is also known as the strength-to-weight ratio or strength/weight ratio or strength-to-mass ratio . In fiber or textile applications, tenacity is the usual measure of specific strength. The SI unit for specific strength is Pa ⋅ m 3 / kg , or N ⋅m/kg, which is dimensionally equivalent to m 2 /s 2 , though the latter form is rarely used. Specific strength has the same units as specific energy , and is related to the maximum specific energy of rotation that an object can have without flying apart due to centrifugal force . Another way to describe specific strength is breaking length , also known as self support length : the maximum length of a vertical column of the material (assuming a fixed cross-section) that could suspend its own weight when supported only at the top. For this measurement, the definition of weight is the force of gravity at the Earth's surface ( standard gravity , 9.80665 m/s 2 ) applying to the entire length of the material, not diminishing with height. This usage is more common with certain specialty fiber or textile applications. The materials with the highest specific strengths are typically fibers such as carbon fiber , glass fiber and various polymers, and these are frequently used to make composite materials (e.g. carbon fiber-epoxy ). These materials and others such as titanium , aluminium , magnesium and high strength steel alloys are widely used in aerospace and other applications where weight savings are worth the higher material cost. Note that strength and stiffness are distinct. Both are important in design of efficient and safe structures. where L {\displaystyle L} is the length, T s {\displaystyle T_{s}} is the tensile strength, ρ {\displaystyle \rho } is the density and g {\displaystyle \mathbf {g} } is the acceleration due to gravity ( ≈ 9.8 {\displaystyle \approx 9.8} m/s 2 {\displaystyle ^{2}} ) The data of this table is from best cases, and has been established for giving a rough figure. Note: Multiwalled carbon nanotubes have the highest tensile strength of any material yet measured, with labs producing them at a tensile strength of 63 GPa, [ 36 ] still well below their theoretical limit of 300 GPa. The first nanotube ropes (20 mm long) whose tensile strength was published (in 2000) had a strength of 3.6 GPa, still well below their theoretical limit. [ 41 ] The density is different depending on the manufacturing method, and the lowest value is 0.037 or 0.55 (solid). [ 37 ] The International Space Elevator Consortium uses the "Yuri" as a name for the SI units describing specific strength. Specific strength is of fundamental importance in the description of space elevator cable materials. One Yuri is conceived to be the SI unit for yield stress (or breaking stress) per unit of density of a material under tension. One Yuri equals 1 Pa⋅m 3 /kg or 1 N ⋅ m / kg , which is the breaking/yielding force per linear density of the cable under tension. [ 42 ] [ 43 ] A functional Earth space elevator would require a tether of 30–80 megaYuri (corresponding to 3100–8200 km of breaking length). [ 44 ] The null energy condition places a fundamental limit on the specific strength of any material. [ 40 ] The specific strength is bounded to be no greater than c 2 ≈ 9 × 10 13 kN ⋅ m / kg , where c is the speed of light . 
This limit is achieved by electric and magnetic field lines, QCD flux tubes , and the fundamental strings hypothesized by string theory . [ citation needed ] Tenacity is the customary measure of strength of a fiber or yarn . It is usually defined as the ultimate (breaking) force of the fiber (in gram -force units) divided by the denier . Because denier is a measure of the linear density, the tenacity works out to be not a measure of force per unit area, but rather a quasi-dimensionless measure analogous to specific strength. [ 45 ] A tenacity of 1 {\displaystyle 1} corresponds to: [ citation needed ] 1 g ⋅ 9.80665 m s − 2 1 g / 9000 m = 9.80665 m s − 2 1 / 9000 m = 9.80665 m s − 2 9000 m = 88259.85 m 2 s − 2 {\displaystyle {\frac {1{\rm {\,g}}\cdot 9.80665{\rm {\,ms^{-2}}}}{1{\rm {\,g}}/9000{\rm {\,m}}}}={\frac {9.80665{\rm {\,ms^{-2}}}}{1/9000{\rm {\,m}}}}=9.80665{\rm {\,ms^{-2}}}\,9000{\rm {\,m}}=88259.85{\rm {\,m^{2}s^{-2}}}} Tenacity is mostly expressed in reports as cN/tex.
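The breaking-length relation above, L = T_s/(ρ·g), and the specific strength itself are simple ratios; the Python sketch below shows the arithmetic with assumed, order-of-magnitude material figures rather than the article's tabulated values.

G0 = 9.80665  # m/s^2, standard gravity as used in the breaking-length definition above

def specific_strength(tensile_strength_pa, density_kg_m3):
    # strength-to-weight ratio in Pa*m^3/kg (equivalently N*m/kg, i.e. "Yuri")
    return tensile_strength_pa / density_kg_m3

def breaking_length_km(tensile_strength_pa, density_kg_m3):
    # self-support length L = Ts / (rho * g), returned in kilometres
    return tensile_strength_pa / (density_kg_m3 * G0) / 1000.0

# illustrative figures (assumed, not the table's exact values)
print(specific_strength(500e6, 7850.0))    # a structural steel: ~6.4e4 N*m/kg
print(breaking_length_km(500e6, 7850.0))   # ~6.5 km breaking length
print(breaking_length_km(3.5e9, 1750.0))   # a carbon-fibre-class material: ~200 km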
https://en.wikipedia.org/wiki/Specific_strength
Specific surface area ( SSA ) is a property of solids defined as the total surface area (SA) of a material per unit mass , [ 1 ] (with units of m 2 /kg or m 2 /g). Alternatively, it may be defined as SA per solid or bulk volume [ 2 ] [ 3 ] (units of m 2 /m 3 or m −1 ). It is a physical value that can be used to determine the type and properties of a material (e.g. soil or snow ). It has a particular importance for adsorption , heterogeneous catalysis , and reactions on surfaces . Values obtained for specific surface area depend on the method of measurement. In adsorption based methods, the size of the adsorbate molecule (the probe molecule), the exposed crystallographic planes at the surface and measurement temperature all affect the obtained specific surface area. [ 4 ] For this reason, in addition to the most commonly used Brunauer–Emmett–Teller (N 2 -BET) adsorption method, several techniques have been developed to measure the specific surface area of particulate materials at ambient temperatures and at controllable scales, including methylene blue (MB) staining, ethylene glycol monoethyl ether (EGME) adsorption, [ 5 ] electrokinetic analysis of complex-ion adsorption [ 4 ] and a Protein Retention (PR) method. [ 6 ] A number of international standards exist for the measurement of specific surface area, including ISO standard 9277. [ 7 ] The SSA can be simply calculated from a particle size distribution , making some assumption about the particle shape. This method, however, fails to account for surface associated with the surface texture of the particles. The SSA can be measured by adsorption using the BET isotherm . This has the advantage of measuring the surface of fine structures and deep texture on the particles. However, the results can differ markedly depending on the substance adsorbed. The BET theory has inherent limitations but has the advantage to be simple and to yield adequate relative answers when the solids are chemically similar. In relatively rare cases, more complicated models based on thermodynamic approaches, or even quantum chemistry, may be applied to improve the consistency of the results, but at the cost of much more complex calculations requiring advanced knowledge and a good understanding from the operator. [ 8 ] This depends upon a relationship between the specific surface area and the resistance to gas-flow of a porous bed of powder. The method is simple and quick, and yields a result that often correlates well with the chemical reactivity of a powder. However, it fails to measure much of the deep surface texture.
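For the particle-size-distribution route mentioned above, a common simplifying assumption (not stated explicitly in the text) is that the particles are smooth, non-porous spheres, in which case the geometric SSA is 6/(ρ·d); a minimal sketch with assumed figures:

def ssa_monodisperse_spheres(diameter_m, density_kg_m3):
    # geometric SSA for uniform spheres: surface/mass = 6 / (rho * d), in m^2/kg;
    # ignores surface texture and porosity, which is why BET values are usually higher
    return 6.0 / (density_kg_m3 * diameter_m)

# 1 micrometre particles with a silica-like density of 2200 kg/m^3 (illustrative)
print(ssa_monodisperse_spheres(1e-6, 2200.0))   # ~2700 m^2/kg, i.e. ~2.7 m^2/g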
https://en.wikipedia.org/wiki/Specific_surface_area
Specific ultraviolet absorbance ( SUVA ) is the absorbance of ultraviolet light in a water sample at a specified wavelength, normalized for dissolved organic carbon (DOC) concentration. [ 1 ] Specific UV absorbance (SUVA) is used analytically to measure the aromatic character of dissolved organic matter, since UV absorbance at these wavelengths reflects the density of electron conjugation associated with aromatic bonds. [ 2 ] To derive SUVA, the absorbance of UVC light ( UV spectrum subtypes ) at 254 nm or 280 nm [ 2 ] is first measured in units of absorbance per meter of path length; the sample must often be diluted with ultrapure water because absorbance can be high. [ 3 ] Because increasing dissolved organic carbon concentration increases absorbance in the UV range, the measured absorbance has to be normalized to the concentration of dissolved organic carbon in mg per L to reveal differences in the aromatic quality of the water. Aromatic character is used in the study of dissolved organic matter from mineral or organic soils as an assay of whether the dissolved organic carbon in the water is labile (a ready source of energy) or derives from a relatively old, recalcitrant source of carbon. However, although SUVA is a good indicator of aromaticity, caution must be used when inferring reactivity from it. [ 1 ] Measures of water purity often rely on measuring turbidity , not aromaticity. [ citation needed ]
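Numerically, the normalisation described above reduces to dividing the path-length-corrected absorbance by the DOC concentration; a minimal sketch in which the cuvette size, absorbance and DOC values are assumed for illustration:

def suva_254(absorbance_254, path_length_cm, doc_mg_per_l):
    # convert the measured absorbance to per-metre, then normalise by DOC (mg/L);
    # result is in L/(mg*m)
    absorbance_per_m = (absorbance_254 / path_length_cm) * 100.0
    return absorbance_per_m / doc_mg_per_l

# e.g. A254 = 0.15 in a 1 cm cuvette with DOC = 5 mg/L (illustrative numbers)
print(suva_254(0.15, 1.0, 5.0))   # 3.0 L/(mg*m)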
https://en.wikipedia.org/wiki/Specific_ultraviolet_absorbance
In respiratory physiology , specific ventilation is defined as the ratio of the volume of gas entering a region of the lung (ΔV) following an inspiration, divided by the end-expiratory volume (V 0 ) of that same lung region: SV = ΔV ⁄ V 0 It is a dimensionless quantity. For the whole human lung, given an indicative tidal volume of 0.6 L and a functional residual capacity of 2.5 L, average SV is of the order of 0.24. The distribution of specific ventilation within the lung can be inferred using Multiple Breath Washout (MBW) experiments [ 1 ] or imaging techniques such as Positron Emission Tomography (PET) using 13 N, Magnetic Resonance Imaging (MRI) using either hyperpolarized gas ( 3 He , 129 Xe ) or proton MRI (oxygen enhanced imaging).
https://en.wikipedia.org/wiki/Specific_ventilation
In thermodynamics , the specific volume of a substance (symbol: ν , nu ) is the quotient of the substance's volume ( V ) by its mass ( m ): It is a mass-specific intrinsic property of the substance. It is the reciprocal of density ρ ( rho ) and it is also related to the molar volume and molar mass : The standard unit of specific volume is cubic meters per kilogram (m 3 /kg), but other units include ft 3 /lb, ft 3 /slug, or mL/g. [ 1 ] Specific volume for an ideal gas is related to the molar gas constant ( R ) and the gas's temperature ( T ), pressure ( P ), and molar mass ( M ): It is based on the ideal gas law , P V = n R T {\displaystyle PV={nRT}} , and the amount of substance , n = m / M {\textstyle n=m/M} Specific volume is commonly applied to: Imagine a variable-volume, airtight chamber containing a certain number of atoms of oxygen gas. Consider the following four examples: Specific volume is a property of materials, defined as the number of cubic meters occupied by one kilogram of a particular substance. The standard unit is the meter cubed per kilogram (m 3 /kg or m 3 ·kg −1 ). Sometimes specific volume is expressed in terms of the number of cubic centimeters occupied by one gram of a substance. In this case, the unit is the centimeter cubed per gram (cm 3 /g or cm 3 ·g −1 ). To convert m 3 /kg to cm 3 /g, multiply by 1000; conversely, multiply by 0.001. Specific volume is inversely proportional to density. If the density of a substance doubles, its specific volume, as expressed in the same base units, is cut in half. If the density drops to 1/10 its former value, the specific volume, as expressed in the same base units, increases by a factor of 10. The density of gases changes with even slight variations in temperature, while the densities of liquids and solids, which are generally thought of as incompressible, change very little. Because specific volume is the inverse of the density of a substance, care must be taken when dealing with situations that involve gases: small changes in temperature have a noticeable effect on specific volumes. The average density of human blood is 1060 kg/m 3 . The specific volume that corresponds to that density is 0.00094 m 3 /kg. Notice that the average specific volume of blood is almost identical to that of water: 0.00100 m 3 /kg. [ 2 ] Suppose one sets out to determine the specific volume of an ideal gas, such as superheated steam, using the equation ν = RT / P , where the pressure is 2500 lbf/in 2 , R is 0.596, and the temperature is 1960 °R . In that case, the specific volume would equal 0.4672 ft 3 /lb. However, if the temperature is changed to 1160 °R , the specific volume of the superheated steam changes to 0.2765 ft 3 /lb, a decrease of about 41% (the new value is 59% of the original). Knowing the specific volumes of two or more substances allows one to find useful information for certain applications. For a substance X with a specific volume of 0.657 cm 3 /g and a substance Y with a specific volume of 0.374 cm 3 /g, the density of each substance can be found by taking the inverse of the specific volume; therefore, substance X has a density of 1.522 g/cm 3 and substance Y has a density of 2.673 g/cm 3 . With this information, the specific gravities of each substance relative to one another can be found. The specific gravity of substance X with respect to Y is 0.569, while the specific gravity of Y with respect to X is 1.756. Therefore, substance X will not sink if placed on Y.
[ 3 ] The specific volume of a non-ideal solution is the sum of the partial specific volumes of the components: M is the molar mass of the mixture. Specific volume can be used in place of volume, as it is an intensive property tied to the system. The table below displays densities and specific volumes for various common substances that may be useful. The values were recorded at standard temperature and pressure, which is defined as air at 0 °C (273.15 K, 32 °F) and 1 atm (101.325 kN/m 2 , 101.325 kPa, 14.7 psia, 0 psig, 30 in Hg, 760 torr). [ 4 ] * values not taken at standard temperature and pressure
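The ideal-gas relation quoted above, ν = RT/(PM), is easy to check numerically in coherent SI units; the sketch below uses dry air at 0 °C and 1 atm as an assumed illustration rather than the imperial steam example.

R_UNIVERSAL = 8.314462618  # J/(mol*K)

def specific_volume_ideal_gas(temperature_k, pressure_pa, molar_mass_kg_mol):
    # nu = R*T / (P*M), in m^3/kg
    return R_UNIVERSAL * temperature_k / (pressure_pa * molar_mass_kg_mol)

# dry air, M ~ 0.02897 kg/mol, at 0 degC and 1 atm (assumed values)
nu_air = specific_volume_ideal_gas(273.15, 101325.0, 0.02897)
print(nu_air)          # ~0.77 m^3/kg
print(1.0 / nu_air)    # ~1.29 kg/m^3, the familiar density of air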
https://en.wikipedia.org/wiki/Specific_volume
The specific weight , also known as the unit weight (symbol γ , the Greek letter gamma ), is a volume-specific quantity defined as the weight W divided by the volume V of a material: γ = W / V {\displaystyle \gamma =W/V} Equivalently, it may also be formulated as the product of density , ρ , and gravity acceleration , g : γ = ρ g {\displaystyle \gamma =\rho \,g} Its unit of measurement in the International System of Units (SI) is newton per cubic metre (N/m 3 ), with base units of kg ⋅ m −2 ⋅ s −2 . A commonly used value is the specific weight of water on Earth at 4 °C (39 °F), which is 9.807 kilonewtons per cubic metre or 62.43 pounds-force per cubic foot . [ 1 ] The density of a material is defined as mass divided by volume, typically expressed in units of kg/m 3 . Unlike density, specific weight is not a fixed property of a material, as it depends on the value of the gravitational acceleration , which varies with location (e.g., Earth's gravity ). For simplicity, the standard gravity (a constant) is often assumed, usually taken as 9.81 m/s 2 . Pressure may also affect values, depending upon the bulk modulus of the material, but generally, at moderate pressures, has a less significant effect than the other factors. [ 2 ] In fluid mechanics , specific weight represents the force exerted by gravity on a unit volume of a fluid. For this reason, units are expressed as force per unit volume (e.g., N/m 3 or lbf/ft 3 ). Specific weight can be used as a characteristic property of a fluid. [ 2 ] Specific weight is often used as a property of soil to solve earthwork problems. In soil mechanics, specific weight may refer to: Specific weight can be used in civil engineering and mechanical engineering to determine the weight of a structure designed to carry certain loads while remaining intact and remaining within limits regarding deformation .
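Because specific weight is simply γ = ρ·g, the computation is a one-liner; the sketch below reproduces the water value quoted above, with standard gravity assumed.

def specific_weight(density_kg_m3, g=9.80665):
    # gamma = rho * g, in N/m^3
    return density_kg_m3 * g

print(specific_weight(1000.0) / 1000.0)   # water: ~9.81 kN/m^3 (9.807 kN/m^3 at 4 degC, as above)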
https://en.wikipedia.org/wiki/Specific_weight
This specification is usually called the SEMI E95-0200 standard, [ 1 ] where SEMI stands for Semiconductor Equipment and Materials International . It was originally published in February 2000, and the latest technical revision is SEMI E95-1101. [ 2 ] This standard addresses the area of processing content with the direct intention of developing common software standards, so that problems involving operator training, operation specifications, and efficient development can be resolved more easily.
https://en.wikipedia.org/wiki/Specification_for_human_interface_for_semiconductor_manufacturing_equipment
A specification tree shows all specifications of a technical system under development in a hierarchical order. For a spacecraft system it has the following levels:
https://en.wikipedia.org/wiki/Specification_tree
Specificity in symbiosis refers to the taxonomic range with which an organism associates in a symbiosis. [ 1 ] [ 2 ] [ 3 ] [ 4 ] In a symbiosis between a larger organism such as a plant or an animal (called the host) and a microorganism (called the symbiont ), specificity can be looked at both from the perspective of the host, i.e. how many different species of symbionts the host associates with (symbiont specificity), and from the perspective of the symbiont, i.e. how many different host species a symbiont can associate with (host specificity). There are two major approaches to determining specificity: the field-based (ecological) approach and the physiological (experimental) approach. [ 1 ] In the field-based approach, specificity is assessed by determining the natural range of hosts or symbionts an organism associates with. In the physiological approach, combinations of potential symbiotic partners are brought together artificially in the laboratory and the successful establishment of symbiosis is assessed. For example, while in the laboratory the midgut crypts of the bean bug Riptortus pedestris can be colonized by a large diversity of bacterial species, in nature colonization occurs with only one specific Burkholderia species. [ 5 ]
https://en.wikipedia.org/wiki/Specificity_(symbiosis)
A structural load or structural action is a mechanical load (more generally a force ) applied to structural elements . [ 1 ] [ 2 ] A load causes stress , deformation , displacement or acceleration in a structure . Structural analysis , a discipline in engineering , analyzes the effects of loads on structures and structural elements. Excess load may cause structural failure , so this should be considered and controlled during the design of a structure. Particular mechanical structures—such as aircraft , satellites , rockets , space stations , ships , and submarines —are subject to their own particular structural loads and actions. [ 3 ] Engineers often evaluate structural loads based upon published regulations , contracts , or specifications . Accepted technical standards are used for acceptance testing and inspection . In civil engineering , specified loads are the best estimate of the actual loads a structure is expected to carry. These loads come in many different forms, such as people, equipment, vehicles, wind, rain, snow, earthquakes, the building materials themselves, etc. Specified loads also known as characteristic loads in many cases. Buildings will be subject to loads from various sources. The principal ones can be classified as live loads (loads which are not always present in the structure), dead loads (loads which are permanent and immovable excepting redesign or renovation) and wind load, as described below. In some cases structures may be subject to other loads, such as those due to earthquakes or pressures from retained material. The expected maximum magnitude of each is referred to as the characteristic load. Dead loads are static forces that are relatively constant for an extended time. They can be in tension or compression . The term can refer to a laboratory test method or to the normal usage of a material or structure. Live loads are usually variable or moving loads . These can have a significant dynamic element and may involve considerations such as impact , momentum , vibration , slosh dynamics of fluids, etc. An impact load is one whose time of application on a material is less than one-third of the natural period of vibration of that material. Cyclic loads on a structure can lead to fatigue damage, cumulative damage, or failure. These loads can be repeated loadings on a structure or can be due to vibration . Imposed loads are those associated with occupation and use of the building; their magnitude is less clearly defined and is generally related to the use of the building. Structural loads are an important consideration in the design of buildings. Building codes require that structures be designed and built to safely resist all actions that they are likely to face during their service life, while remaining fit for use. [ 4 ] Minimum loads or actions are specified in these building codes for types of structures, geographic locations, usage and building materials . [ 5 ] Structural loads are split into categories by their originating cause. In terms of the actual load on a structure, there is no difference between dead or live loading, but the split occurs for use in safety calculations or ease of analysis on complex models. To meet the requirement that design strength be higher than maximum loads, building codes prescribe that, for structural design, loads are increased by load factors. These load factors are, roughly, a ratio of the theoretical design strength to the maximum load expected in service. 
They are developed to help achieve the desired level of reliability of a structure [ 6 ] based on probabilistic studies that take into account the load's originating cause, recurrence, distribution, and static or dynamic nature. [ 7 ] The dead load includes loads that are relatively constant over time, including the weight of the structure itself, and immovable fixtures such as walls, plasterboard or carpet . The roof is also a dead load. Dead loads are also known as permanent or static loads . Building materials are not dead loads until constructed in permanent position. [ 8 ] [ 9 ] [ 10 ] IS875(part 1)-1987 give unit weight of building materials, parts, components. Live loads, or imposed loads, are temporary, of short duration, or a moving load . These dynamic loads may involve considerations such as impact , momentum , vibration , slosh dynamics of fluids and material fatigue . Live loads, sometimes also referred to as probabilistic loads, include all the forces that are variable within the object's normal operation cycle not including construction or environmental loads. Roof and floor live loads are produced during maintenance by workers, equipment and materials, and during the life of the structure by movable objects, such as planters and people. Bridge live loads are produced by vehicles traveling over the deck of the bridge. Environmental loads are structural loads caused by natural forces such as wind, rain, snow, earthquake or extreme temperatures. Engineers must also be aware of other actions that may affect a structure, such as: A load combination results when more than one load type acts on the structure. Building codes usually specify a variety of load combinations together with load factors (weightings) for each load type in order to ensure the safety of the structure under different maximum expected loading scenarios. For example, in designing a staircase , a dead load factor may be 1.2 times the weight of the structure, and a live load factor may be 1.6 times the maximum expected live load. These two "factored loads" are combined (added) to determine the "required strength" of the staircase. The size of the load factor is based on the probability of exceeding any specified design load. Dead loads have small load factors, such as 1.2, because weight is mostly known and accounted for, such as structural members, architectural elements and finishes, large pieces of mechanical, electrical and plumbing (MEP) equipment, and for buildings, it's common to include a Super Imposed Dead Load (SIDL) of around 5 pounds per square foot (psf) accounting for miscellaneous weight such as bolts and other fasteners, cabling, and various fixtures or small architectural elements. Live loads, on the other hand, can be furniture, moveable equipment, or the people themselves, and may increase beyond normal or expected amounts in some situations, so a larger factor of 1.6 attempts to quantify this extra variability. Snow will also use a maximum factor of 1.6, while lateral loads (earthquakes and wind) are defined such that a 1.0 load factor is practical. Multiple loads may be added together in different ways, such as 1.2*Dead + 1.0*Live + 1.0*Earthquake + 0.2*Snow, or 1.2*Dead + 1.6(Snow, Live(roof), OR Rain) + (1.0*Live OR 0.5*Wind). For aircraft, loading is divided into two major categories: limit loads and ultimate loads. [ 11 ] Limit loads are the maximum loads a component or structure may carry safely. 
Ultimate loads are the limit loads times a factor of 1.5 or the point beyond which the component or structure will fail. [ 11 ] Gust loads are determined statistically and are provided by an agency such as the Federal Aviation Administration . Crash loads are loosely bounded by the ability of structures to survive the deceleration of a major ground impact . [ 12 ] Other loads that may be critical are pressure loads (for pressurized, high-altitude aircraft) and ground loads. Loads on the ground can be from adverse braking or maneuvering during taxiing . Aircraft are constantly subjected to cyclic loading. These cyclic loads can cause metal fatigue . [ 13 ]
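The load-combination idea described above (for buildings, not the aircraft case) can be sketched as picking the largest of several factored sums; the combinations and numbers below are LRFD-style illustrations only, not a substitute for the full set required by a governing building code.

def governing_factored_load(dead, live, snow=0.0, wind=0.0):
    # evaluate a few illustrative strength-design combinations like those quoted
    # above and return the largest (governing) factored demand
    combinations = [
        1.4 * dead,
        1.2 * dead + 1.6 * live + 0.5 * snow,
        1.2 * dead + 1.6 * snow + 1.0 * live,
        1.2 * dead + 1.0 * wind + 1.0 * live + 0.5 * snow,
    ]
    return max(combinations)

# e.g. a floor strip carrying 3.0 kPa dead, 2.4 kPa live and 1.0 kPa snow (illustrative)
print(governing_factored_load(3.0, 2.4, snow=1.0))   # ~7.9 kPa factored demand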
https://en.wikipedia.org/wiki/Specified_load
Specim, Spectral Imaging Ltd is a European technology firm headquartered in Oulu , Finland. Specim manufactures and sells imaging spectrographs, hyperspectral cameras and systems. Specim's airborne AISA hyperspectral cameras have been utilized for example in monitoring the environmental effects of major industrial catastrophes such as Deepwater Horizon oil spill and Red mud spill . [ 2 ] [ 3 ] In 2010, Specim was widely credited for its Thermal Infrared Hyperspectral Cameras, including a position as a Prism Awards for Photonics Innovation finalist. [ 4 ] The credited Specim Owl is world's first Thermal Hyperspectral Camera that can efficiently be used for outdoor surveillance and UAV applications without an external light source such as the Sun or the Moon. [ 5 ] In 2013, together with Germany's Forschungszentrum Jülich research centre, Specim developed and thoroughly tested the novel Hyplant airborne hyperspectral sensor. This was the first airborne sensor to map the fluorescence over large areas. Since then it has been used to map various types of vegetation all over Europe and also in the USA. This project is one step in assessing feasibility of possible new ESA satellite instrument that could provide global maps of vegetation fluorescence called the Fluorescence Explorer (FLEX). [ 6 ] [ 7 ] In 2016, Specim announced the launch of new camera series, the Specim FX series, designed for industrial needs. Specim FX10 and FX17 were the first in a series of small, fast and flexible FX cameras for industrial applications e.g. food processing, recycling and pharmaceuticals. [ 8 ] In 2017, Specim revealed the World’s first mobile hyperspectral camera, Specim IQ, which allows you to make hyperspectral measurements and analysis anywhere. Specim IQ is a full hyperspectral imaging system that combines a spectral camera, a scanner, a computer, a frame grabber, data acquisition and processing software, data storage, a display, a keyboard, and power supplies in one mobile device. Since its launch, the Specim IQ has been used in various on-site studies in phenotyping and archaeology, ranging from analysing rock paintings to studying cultural heritage inside a Pharaoh's tomb and plant phenotyping and disease analysis. [ 9 ] [ 10 ] [ 11 ] [ 12 ] In the spring of 2020, Specim’s investors and founders felt the company had reached a suitable point of maturity and opportunity, complemented by external interest, and initiated a competitive trade sale process resulting in offers from several global players. Of these, the Board determined Konica Minolta to be the perfect home for Specim’s technology, customers and employees. In November 2020 Konica Minolta acquired Specim, Spectral Imaging Ltd. to be part of their sensing business. After the acquisition Specim will maintain its existing offices and facilities in Oulu. [ 13 ]
https://en.wikipedia.org/wiki/Specim
Specimen provenance complications (SPCs) result from instances of biopsy specimen transposition, extraneous/foreign cell contamination or misidentification of cells used in clinical or anatomical pathology . If left undetected, SPCs can lead to serious diagnostic mistakes and adverse patient outcomes. According to recent reports from the American Cancer Society , an estimated 12.7 million cases of cancer were diagnosed in 2008. That number is expected to rise to more than 20 million by 2030 due to population growth and aging alone. The problem will likely be further exacerbated by the widespread adoption of certain lifestyle factors (smoking, poor diet, physical inactivity, etc.) that increase the risk of developing the disease. [ 1 ] The process of collecting and evaluating the biopsy specimens used to render these cancer diagnoses involves nearly 20 steps and numerous medical professionals from the time the sample is originally taken from the patient to the time it is received by pathology for analysis. [ 2 ] With such a complex process executed at a large scale, the potential for a variety of Specimen Provenance Complications is a serious concern for both physicians and patients. While enforcement of strict protocols and procedures for the handling of samples helps to minimize error, identification issues in anatomic and clinical pathology labs still occur. The most common error is a mislabeled or unlabeled specimen. Another potential complication is the presence of a contaminant tissue fragment - commonly referred to as a floater - that does not belong to the patient being evaluated. Floaters can be introduced in the laboratory during tissue sectioning, processing or gross dissection, or potentially in a clinical setting as well when the biopsy is being performed. [ 3 ] If one of these floaters is from a malignant specimen, a healthy patient could be falsely diagnosed as having cancer. Medical research, reports from respected news organizations and real-life cases document the existence of Specimen Provenance Complications in the diagnostic testing cycle for cancer. One such report from the Wall Street Journal indicates that three to five percent of specimens taken each year are defective in some way, whether that be from insufficient extraction of tumor cells, a mix-up of patients’ samples or some other issue. [ 4 ] A study published in the Journal of Urology in 2014 concluded that more than 1 in every 200 prostate biopsy patients is misdiagnosed due to undetected specimen provenance complications. [ 5 ] A study conducted by the College of American Pathologists extrapolated that reported misidentification errors from 120 pathology labs would result in more than 160,000 adverse patient outcomes per year. [ 6 ] The study further cautioned that the true incidence of both errors and resulting adverse events would be much higher than can be presently measured since the research results were based solely on errors that were actually detected. To determine an estimate for the rate of occult Specimen Provenance Complication occurrence, researchers at Washington University School of Medicine conducted prospective analysis of approximately 13,000 prostate biopsies performed as part of routine clinical practice. Published in the American Journal of Clinical Pathology , this study classified biopsy misidentification errors into two segments: a complete transposition between patients (Type 1) and contamination of a patient’s tissue with one or more unrelated patients (Type 2). 
The frequency of occult Type 1 and Type 2 errors was found to be 0.26% and 0.67% respectively, or a combined error rate of 0.93%. However, each case involves at least two individuals, so this error rate actually underestimates the percentage of patients potentially affected by incidents of biopsy misidentification. Furthermore, the study demonstrated that errors occur across a variety of practice types and diagnostic laboratories, indicating no one setting is immune from this problem. [ 7 ] Additionally, in a study published in the American Journal of Clinical Pathology in 2015, the researchers at Washington University’s School of Medicine Genomics and Pathology Services center in St Louis, MO, determined that 2% of tissue samples received in their lab for next generation sequencing were contaminated by another person’s DNA to an extent to be clinically significant (i.e. greater than 5% of the sample was contaminated). [ 8 ] As data substantiates, SPCs are an under-recognized problem in clinical practice that warrants further investigation and consideration of additional safety measures such as required DNA testing to confirm the identity of biopsy samples. [ 9 ] In terms of outcomes, diagnostic mistakes due to Specimen Provenance Complications can have devastating results for both patients and the medical professionals involved in their care. One patient may receive an unnecessary treatment that significantly affects his or her quality of life, while the other patient’s cancer remains undiagnosed, and thus continues to advance. An example of the consequences of SPCs is the story of a Long Island, New York woman who underwent an unnecessary double mastectomy due to a misidentification error that caused her biopsy test results to be switched with those of another patient. Consequently, necessary treatment was delayed for the woman who did have breast cancer. [ 10 ] In another case, a young Australian woman received an unnecessary radical hysterectomy , leaving her infertile and dependent on hormone replacement therapy, after her biopsy sample was contaminated with malignant tissue from another patient. [ 11 ] To ensure the diagnostic accuracy of pathology lab results and prevent these types of adverse outcomes, a DNA Specimen Provenance Assignment (DSPA) also known as a DNA Specimen Provenance Assay test can be performed to confirm that surgical biopsies being evaluated belong exclusively to the patient being diagnosed and that they are free from contamination.
https://en.wikipedia.org/wiki/Specimen_provenance_complications
Specol (trade name Stimune) is a water-in-oil adjuvant composed of defined and purified light mineral oil (Stills, 2005). It has been suggested as an alternative to Freund's adjuvant for hyperactivation of the immune response in rabbits (Leenaars et al., 1994). Specol can be used for antigens of low immunogenicity and can be administered equally effectively by the subcutaneous or intraperitoneal routes. The histological lesions produced are fewer than those produced by using complete Freund's adjuvant. However, pain and distress following administration of these two adjuvants appear to be similar (based on limited data).
https://en.wikipedia.org/wiki/Specol
A spectator ion is an ion that exists both as a reactant and a product in a chemical equation of an aqueous solution. [1] For example, in the reaction of aqueous solutions of sodium carbonate and copper(II) sulfate, the Na^+ and SO4^2− ions are spectator ions, since they remain unchanged on both sides of the equation. They simply "watch" the other ions react and do not take part in the reaction themselves, hence the name. [1] They are present in total ionic equations to balance the charges of the ions, whereas the Cu^2+ and CO3^2− ions combine to form a precipitate of solid CuCO3. In reaction stoichiometry, spectator ions are removed from a complete ionic equation to form a net ionic equation. For the above example, the complete ionic equation is 2 Na^+ (aq) + CO3^2− (aq) + Cu^2+ (aq) + SO4^2− (aq) → 2 Na^+ (aq) + SO4^2− (aq) + CuCO3 (s), and removing the spectator ions (Na^+ and SO4^2−) yields the net ionic equation Cu^2+ (aq) + CO3^2− (aq) → CuCO3 (s). Spectator ion concentration only affects the Debye length. In contrast, the concentrations of potential determining ions affect the surface potential (through surface chemical reactions) as well as the Debye length. A net ionic equation ignores the spectator ions that were part of the original equation. [1] Thus, the net ionic equation only shows the ions that reacted to produce the precipitate, [1] and the total ionic reaction is therefore different from the net reaction.
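As a rough illustration of how spectator ions drop out of a total ionic equation, the following sketch identifies the species common to both sides and what remains for the net ionic equation. The string labels for the ions are purely illustrative and are not a chemical data format.

```python
from collections import Counter

# Toy sketch: spectator ions are the species that appear unchanged on both
# sides of the total ionic equation. The string labels are illustrative only.
reactants = Counter({"Na+": 2, "CO3 2-": 1, "Cu 2+": 1, "SO4 2-": 1})
products = Counter({"Na+": 2, "SO4 2-": 1, "CuCO3(s)": 1})

spectators = reactants & products        # species unchanged on both sides
net_left = reactants - spectators        # what actually reacts
net_right = products - spectators        # what is actually formed

print("spectator ions:", dict(spectators))
print("net ionic equation:", dict(net_left), "->", dict(net_right))
```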
https://en.wikipedia.org/wiki/Spectator_ion
In coordination chemistry, a spectator ligand is a ligand that does not participate in chemical reactions of the complex. Instead, spectator ligands (vs "actor ligands") occupy coordination sites. [1] Spectator ligands tend to be polydentate, such that the M-spectator ensemble is kinetically inert. Although they do not participate in reactions of the metal, spectator ligands influence the reactivity of the metal center to which they are bound. These ligands are sometimes referred to as ancillary ligands. [2] Several different classes of ligand can be considered spectator ligands. A few examples include trispyrazolylborates (Tp), cyclopentadienyl ligands (Cp), and many chelating diphosphines such as 1,2-bis(diphenylphosphino)ethane (dppe). Varying the substituents on the spectator ligands greatly influences the solubility, stability, and the electronic and steric properties of the metal complex. In the area of platinum-based antineoplastic agents, spectator (and nonspectator) ligands greatly affect efficacy. [3]
https://en.wikipedia.org/wiki/Spectator_ligand
Mass spectrometry is a scientific technique for measuring the mass-to-charge ratio of ions. It is often coupled to chromatographic techniques such as gas- or liquid chromatography and has found widespread adoption in the fields of analytical chemistry and biochemistry, where it can be used to identify and characterize small molecules and proteins (proteomics). The large volume of data produced in a typical mass spectrometry experiment requires that computers be used for data storage and processing. Over the years, different manufacturers of mass spectrometers have developed various proprietary data formats for handling such data, which makes it difficult for academic scientists to directly manipulate their data. To address this limitation, several open, XML-based data formats have recently been developed by the Trans-Proteomic Pipeline at the Institute for Systems Biology to facilitate data manipulation and innovation in the public sector. [1] These data formats are described here. JCAMP-DX was one of the earliest attempts to supply a standardized file format for data exchange in mass spectrometry. JCAMP-DX was initially developed for infrared spectrometry. It is an ASCII-based format and therefore not very compact, even though it includes standards for file compression. JCAMP was officially released in 1988. [2] Together with the American Society for Mass Spectrometry, a JCAMP-DX format for mass spectrometry was developed with the aim of preserving legacy data. [3] The Analytical Data Interchange Format for Mass Spectrometry (ANDI) is a format for exchanging data. Many mass spectrometry software packages can read or write ANDI files. ANDI is specified in the ASTM E1947 standard. [4] ANDI is based on netCDF, which is a software tool library for writing and reading data files. ANDI was initially developed for chromatography-MS data and therefore was not used in the proteomics gold rush, where new formats based on XML were developed. [5] AnIML is a joint effort of IUPAC and ASTM International to create an XML-based standard that covers a wide variety of analytical techniques, including mass spectrometry. [6] mzData was the first attempt by the Proteomics Standards Initiative (PSI) of the Human Proteome Organization (HUPO) to create a standardized format for mass spectrometry data. [7] This format is now deprecated and has been replaced by mzML. [8] mzXML is an XML (eXtensible Markup Language) based common file format for proteomics mass spectrometric data. [9] [10] This format was developed at the Seattle Proteome Center/Institute for Systems Biology while the HUPO-PSI was trying to specify the standardized mzData format, and it is still in use in the proteomics community. Yet Another Format for Mass Spectrometry (YAFMS) is a proposal to store data in a four-table, serverless relational database schema, with data extraction and appending carried out using SQL queries. [11] Since having two formats (mzData and mzXML) for representing the same information was an undesirable state, a joint effort was set up by HUPO-PSI, the SPC/ISB and instrument vendors to create a unified standard borrowing the best aspects of both mzData and mzXML, and intended to replace them. Originally called dataXML, it was officially announced as mzML. [12] The first specification was published in June 2008. [13] This format was officially released at the 2008 American Society for Mass Spectrometry meeting, and has since been relatively stable with very few updates. On 1 June 2009, mzML 1.1.0 was released.
There are no planned further changes as of 2013. Instead of defining new file formats and writing converters for proprietary vendor formats, a group of scientists proposed defining a common application programming interface to shift the burden of standards compliance to the instrument manufacturers' existing data access libraries. [14] The mz5 format addresses the performance problems of the previous XML-based formats. It uses the mzML ontology, but saves the data using the HDF5 backend for reduced storage space requirements and improved read/write speed. [15] The imzML standard was proposed to exchange data from mass spectrometry imaging in a standardized XML file based on the mzML ontology. It splits experimental data into an XML file and spectral data into a binary file; both files are linked by a universally unique identifier. [16] mzDB saves data in an SQLite database to save on storage space and improve access times, as the data points can be queried from a relational database. [17] Toffee is an open lossless file format for data-independent acquisition mass spectrometry. It leverages HDF5 and aims to achieve file sizes similar to those of the proprietary, closed vendor formats. [18] mzMLb is another take on using an HDF5 backend for performant raw data storage. It, however, preserves the mzML XML data structure and stays compliant with the existing standard. [19] The Allotrope Foundation curates an HDF5- and triplestore-based file format named Allotrope Data Format (ADF) and a flat JSON representation, ASM, short for Allotrope Simple Model. Both are based on the Allotrope Foundation Ontologies (AFO) and contain schemas for mass spectrometry and for chromatography coupled with MS detectors. [20] Below is a table of different file format extensions. (*) Note that the RAW formats of each vendor are not interchangeable; software from one cannot handle the RAW files from another. (**) Micromass was acquired by Waters in 1997. (***) Finnigan is a division of Thermo. There are several viewers for mzXML, mzML and mzData. These viewers are of two types: Free Open Source Software (FOSS) or proprietary. In the FOSS viewer category, one can find MZmine, [22] mineXpert2 (mzXML, mzML, native timsTOF, xy, MGF, BafAscii), [23] MS-Spectre, [24] TOPPView (mzXML, mzML and mzData), [25] Spectra Viewer, [26] SeeMS, [27] msInspect, [28] and jmzML. [29] In the proprietary category, one can find PEAKS, [30] Insilicos, [31] Mascot Distiller, [32] and Elsci Peaksel. [33] There is a viewer for ITA images. [34] ITA and ITM images can be parsed with the pySPM Python library. [35] Converters also exist between the open formats (for example from mzData or mzXML to mzML) and from various proprietary vendor formats.
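As a minimal illustration of how the XML-based formats can be inspected without vendor software, the following Python sketch streams through an mzML file with only the standard library and lists its spectra. The file name is hypothetical, the namespace URI is assumed to be the one commonly used by mzML, and production code would normally rely on a dedicated reader rather than raw XML parsing.

```python
import xml.etree.ElementTree as ET

# Hedged sketch: stream through "example.mzML" (hypothetical file) and print
# basic attributes of each <spectrum> element. Binary peak data are base64
# encoded inside <binary> elements and are not decoded here.
MZML_NS = "{http://psi.hupo.org/ms/mzml}"   # assumed mzML namespace URI

count = 0
for event, elem in ET.iterparse("example.mzML", events=("end",)):
    if elem.tag == MZML_NS + "spectrum":
        count += 1
        print(elem.get("id"), "data points:", elem.get("defaultArrayLength"))
        elem.clear()   # release memory for spectra already processed
print("total spectra:", count)
```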
https://en.wikipedia.org/wiki/Spectra_Viewer
In convex geometry, a spectrahedron is a shape that can be represented as the solution set of a linear matrix inequality. Alternatively, the set of n × n positive semidefinite matrices forms a convex cone in R^(n × n), and a spectrahedron is a shape that can be formed by intersecting this cone with an affine subspace. Spectrahedra are the feasible regions of semidefinite programs. [1] The images of spectrahedra under linear or affine transformations are called projected spectrahedra or spectrahedral shadows. Every spectrahedral shadow is a convex set that is also semialgebraic, but the converse (conjectured to be true until 2017) is false. [2] An example of a spectrahedron is the spectraplex, defined as the set { X ∈ S+^n : Tr(X) = 1 }, where S+^n is the set of n × n positive semidefinite matrices and Tr(X) is the trace of the matrix X. [3] The spectraplex is a compact set, and can be thought of as the "semidefinite" analog of the simplex.
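To make the linear-matrix-inequality description concrete, the following sketch checks membership in a spectrahedron numerically; the unit-disk example and the tolerance are illustrative choices and not part of the article.

```python
import numpy as np

def in_spectrahedron(x, A0, As, tol=1e-9):
    """Check whether A0 + sum_i x[i] * As[i] is positive semidefinite."""
    M = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
    return np.min(np.linalg.eigvalsh(M)) >= -tol

def in_spectraplex(X, tol=1e-9):
    """Check membership in the spectraplex: symmetric PSD with trace 1."""
    sym = np.allclose(X, X.T, atol=tol)
    psd = np.min(np.linalg.eigvalsh((X + X.T) / 2)) >= -tol
    return sym and psd and abs(np.trace(X) - 1.0) <= tol

# Illustrative example: the unit disk {(x, y): x^2 + y^2 <= 1} written as the
# spectrahedron {(x, y): [[1 + x, y], [y, 1 - x]] is positive semidefinite}.
A0 = np.eye(2)
As = [np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [1.0, 0.0]])]
print(in_spectrahedron([0.3, 0.4], A0, As))   # True, inside the disk
print(in_spectrahedron([0.9, 0.9], A0, As))   # False, outside the disk
print(in_spectraplex(np.diag([0.5, 0.5])))    # True
```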
https://en.wikipedia.org/wiki/Spectrahedron
The Spectral Database for Organic Compounds (SDBS) is a free online searchable database hosted by the National Institute of Advanced Industrial Science and Technology (AIST) in Japan that contains spectral data for ca 34,000 organic molecules. [1] The database is available in English and in Japanese, and it includes six types of spectra: laser Raman spectra, electron ionization mass spectra (EI-MS), Fourier-transform infrared (FT-IR) spectra, 1H nuclear magnetic resonance (1H-NMR) spectra, 13C nuclear magnetic resonance (13C-NMR) spectra and electron paramagnetic resonance (EPR) spectra. [2] The construction of the database started in 1982. Most of the spectra were acquired and recorded in AIST and some of the collections are still being updated. [3] Since 1997, the database can be accessed free of charge, but its use requires agreeing to a disclaimer; the total accumulated number of times accessed reached 550 million by the end of January 2015. [4] The database contains ca 3,500 Raman spectra. The spectra were recorded in the region of 4,000 – 0 cm−1 with an excitation wavelength of 4,800 nm and a slit width of 100 – 200 micrometers. This collection is not being updated. [4] The EI-MS spectra were measured with a JEOL JMS-01SG or a JEOL JMS-700 spectrometer, by the electron ionization method, with an electron accelerating voltage of 75 eV and an ion accelerating voltage of 8 – 10 kV. The direct or reservoir inlet systems were used. The accuracy of the mass number is 0.5. This collection contains ca 25,000 EI-MS spectra and is being updated. [4] The FT-IR spectra were recorded using a Nicolet 170SX or a JASCO FT/IR-410 spectrometer. For spectra recorded on the Nicolet spectrometer, the data were stored at intervals of 0.5 cm−1 in the 4,000 – 2,000 cm−1 region and of 0.25 cm−1 in the 2,000 – 400 cm−1 region, and the spectral resolution was 0.25 cm−1. For spectra recorded on the JASCO spectrometer, the resolution as well as the intervals was 0.5 cm−1. Samples from solids were prepared using the KBr disc or the Nujol paste methods; samples from liquids were prepared with the liquid film method. This collection contains ca 54,100 spectra and is being updated. [4] The 1H NMR spectra were recorded at a resonance frequency of 400 MHz with a resolution of 0.0625 Hz or at 90 MHz with a resolution of 0.125 Hz. The spectral acquisition was carried out using a flip angle of 22.5 – 30.0 degrees and a pulse repetition time of 30 seconds. [4] Samples were prepared by dissolution in deuterated chloroform (CDCl3), deuterium oxide (D2O), or deuterated dimethylsulfoxide (DMSO-d6). [5] Each spectrum is accompanied by a list of peaks with their respective intensities and chemical shifts reported in ppm and in Hz. Most spectra show the peak assignment. This collection contains ca 15,900 spectra and is being updated. [4] The 13C NMR spectra were recorded on several spectrometers with resonance frequencies ranging from 15 MHz to 100 MHz and a resolution ranging from 0.025 to 0.045 ppm. Spectra were acquired using a pulse flip angle of 22.5 – 45 degrees and a pulse repetition time of 4 – 7 seconds. [4] Samples were prepared by dissolution in CDCl3, D2O, or DMSO-d6. [5] Each spectrum is accompanied by a list of the observed peaks with their respective chemical shifts in ppm and their intensities. Most spectra show the peak assignment. This collection contains ca 14,200 spectra and is being updated. [4] The EPR collection contains ca 2,000 spectra.
The measuring conditions and sample preparation are described for each particular spectrum. This collection stopped being updated in 1987. [4] The database can be searched by entering one or more of the following parameters: chemical name (partial or full matching can be requested), molecular formula, number of different types of atoms present in the molecule (as a single value or as a range of values), molecular weight (as a single value or as a range of values), CAS Registry Number or SDBS number. In all cases "%" or "*" can be used as wildcards. The result of the search includes all the available spectra for the search parameters entered. Results can be sorted by molecular weight, number of carbons or SDBS number in ascending or descending order. [6] If a spectrum of an unknown chemical compound is available, a reverse search can be carried out by entering the values of the chemical shift, frequency or mass of the peaks in the NMR, FT-IR or EI-MS spectrum, respectively. This type of search returns all the chemical compounds in the database that have the entered spectral characteristics. [6]
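The reverse search described above can be thought of as peak matching within a tolerance. The sketch below mimics that idea on invented 13C chemical-shift data; it is only an illustration of the general approach, not the actual SDBS search algorithm.

```python
# Toy sketch of a "reverse search": given observed 13C chemical shifts, rank
# candidate compounds by how many reference peaks match within a tolerance.
# The reference data below are invented for illustration only.
reference = {
    "compound A": [14.1, 22.7, 31.9, 128.5],
    "compound B": [21.4, 125.3, 129.0, 137.8],
}

def matches(observed, peaks, tol=0.5):
    """Count observed shifts that lie within tol ppm of a reference peak."""
    return sum(any(abs(o - p) <= tol for p in peaks) for o in observed)

observed = [21.2, 125.5, 129.1]
ranked = sorted(reference, key=lambda name: matches(observed, reference[name]),
                reverse=True)
print(ranked)   # "compound B" ranks first for these invented shifts
```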
https://en.wikipedia.org/wiki/Spectral_Database_for_Organic_Compounds
Spectral Genomics, Inc. was a technology spin-off company from Baylor College of Medicine, selling aCGH microarrays and related software. The company was founded in February 2000 by BCM Technologies. Spectral licensed technology invented by its founders Alan Bradley, Ph.D., and Wei-wen Cai, Ph.D. The company raised $3.0 million in its first financing round in August 2001. In March 2004 the company raised an additional $9.4 million in its second financing round. In March 2005, GE Healthcare became the exclusive distributor for Spectral Genomics's products outside of North America. Spectral Genomics was acquired by PerkinElmer in May 2006, ending GE's distribution agreement.
https://en.wikipedia.org/wiki/Spectral_Genomics
Spectral bands are regions of a given spectrum, having a specific range of wavelengths or frequencies. Most often, the term refers to electromagnetic bands, regions of the electromagnetic spectrum. [1] More generally, spectral bands may also be defined in the spectra of other types of signals, e.g., a noise spectrum. A frequency band is an interval in the frequency domain, limited by a lower frequency and an upper frequency. For example, it may refer to a radio band, such as the wireless communication standards set by the International Telecommunication Union. [2] In nuclear physics, spectral bands refer to the electromagnetic emission of polyatomic systems, including condensed materials, large molecules, etc. Each spectral line corresponds to the difference between two energy levels of an atom. In molecules, these levels can split. When the number of atoms is large, one gets a continuum of energy levels, the so-called spectral bands. They are often labeled in the same way as the monatomic lines. The bands may overlap. In general, the energy spectrum can be given by a density function, describing the number of energy levels of the quantum system in a given interval. Spectral bands have constant density, and when the bands overlap, the corresponding densities are added. A band spectrum is the name given to a group of lines that are so closely spaced and arranged in a regular sequence that they appear as a band. It is a colored band, separated by dark spaces on the two sides and arranged in a regular sequence. Within one band there are various sharper and broader colored lines that are closer together on one side and more widely spaced on the other. The intensity in each band falls off from a definite limit on one side and is indistinct on the other. A complete band spectrum contains a number of lines in each band. This kind of spectrum is produced when the emitting substance is in the molecular state; therefore, such spectra are also called molecular spectra. They are emitted, for example, by molecules in a vacuum tube or in a carbon-arc core containing a metallic salt. The band spectrum is the combination of many different spectral lines, resulting from molecular vibrational, rotational, and electronic transitions. Spectroscopy studies spectral bands for astronomy and other purposes. Many systems are characterized by the spectral band to which they respond.
https://en.wikipedia.org/wiki/Spectral_band
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in units of hertz (symbol Hz). It may refer more specifically to two subcategories: passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum; baseband bandwidth is equal to the upper cutoff frequency of a low-pass filter or baseband signal, which includes a zero frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy, and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. [a] For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the fractional bandwidth is smaller. Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practically useful definition will refer to the frequencies beyond which performance is degraded. In the case of frequency response, degradation could, for example, mean more than 3 dB below the maximum value, or it could mean below a certain absolute value. As with any definition of the width of a function, many definitions are suitable for different purposes. In the context of, for example, the sampling theorem and Nyquist sampling rate, bandwidth typically refers to baseband bandwidth. In the context of Nyquist symbol rate or Shannon–Hartley channel capacity for communication systems, it refers to passband bandwidth. The Rayleigh bandwidth of a simple radar pulse is defined as the inverse of its duration. For example, a one-microsecond pulse has a Rayleigh bandwidth of one megahertz. [1] The essential bandwidth is defined as the portion of a signal spectrum in the frequency domain which contains most of the energy of the signal. [2] In some contexts, the signal bandwidth in hertz refers to the frequency range in which the signal's spectral density (in W/Hz or V2/Hz) is nonzero or above a small threshold value. The threshold value is often defined relative to the maximum value, and is most commonly the 3 dB point, that is, the point where the spectral density is half its maximum value (or the spectral amplitude, in V or V/√Hz, is 70.7% of its maximum).
[ 3 ] This figure, with a lower threshold value, can be used in calculations of the lowest sampling rate that will satisfy the sampling theorem . The bandwidth is also used to denote system bandwidth , for example in filter or communication channel systems. To say that a system has a certain bandwidth means that the system can process signals with that range of frequencies, or that the system reduces the bandwidth of a white noise input to that bandwidth. The 3 dB bandwidth of an electronic filter or communication channel is the part of the system's frequency response that lies within 3 dB of the response at its peak, which, in the passband filter case, is typically at or near its center frequency , and in the low-pass filter is at or near its cutoff frequency . If the maximum gain is 0 dB, the 3 dB bandwidth is the frequency range where attenuation is less than 3 dB. 3 dB attenuation is also where power is half its maximum. This same half-power gain convention is also used in spectral width , and more generally for the extent of functions as full width at half maximum (FWHM). In electronic filter design, a filter specification may require that within the filter passband , the gain is nominally 0 dB with a small variation, for example within the ±1 dB interval. In the stopband (s), the required attenuation in decibels is above a certain level, for example >100 dB. In a transition band the gain is not specified. In this case, the filter bandwidth corresponds to the passband width, which in this example is the 1 dB-bandwidth. If the filter shows amplitude ripple within the passband, the x dB point refers to the point where the gain is x dB below the nominal passband gain rather than x dB below the maximum gain. In signal processing and control theory the bandwidth is the frequency at which the closed-loop system gain drops 3 dB below peak. In communication systems, in calculations of the Shannon–Hartley channel capacity , bandwidth refers to the 3 dB-bandwidth. In calculations of the maximum symbol rate , the Nyquist sampling rate , and maximum bit rate according to the Hartley's law , the bandwidth refers to the frequency range within which the gain is non-zero. The fact that in equivalent baseband models of communication systems, the signal spectrum consists of both negative and positive frequencies, can lead to confusion about bandwidth since they are sometimes referred to only by the positive half, and one will occasionally see expressions such as B = 2 W {\displaystyle B=2W} , where B {\displaystyle B} is the total bandwidth (i.e. the maximum passband bandwidth of the carrier-modulated RF signal and the minimum passband bandwidth of the physical passband channel), and W {\displaystyle W} is the positive bandwidth (the baseband bandwidth of the equivalent channel model). For instance, the baseband model of the signal would require a low-pass filter with cutoff frequency of at least W {\displaystyle W} to stay intact, and the physical passband channel would require a passband filter of at least B {\displaystyle B} to stay intact. The absolute bandwidth is not always the most appropriate or useful measure of bandwidth. For instance, in the field of antennas the difficulty of constructing an antenna to meet a specified absolute bandwidth is easier at a higher frequency than at a lower frequency. 
For this reason, bandwidth is often quoted relative to the frequency of operation which gives a better indication of the structure and sophistication needed for the circuit or device under consideration. There are two different measures of relative bandwidth in common use: fractional bandwidth ( B F {\displaystyle B_{\mathrm {F} }} ) and ratio bandwidth ( B R {\displaystyle B_{\mathrm {R} }} ). [ 4 ] In the following, the absolute bandwidth is defined as follows, B = Δ f = f H − f L {\displaystyle B=\Delta f=f_{\mathrm {H} }-f_{\mathrm {L} }} where f H {\displaystyle f_{\mathrm {H} }} and f L {\displaystyle f_{\mathrm {L} }} are the upper and lower frequency limits respectively of the band in question. Fractional bandwidth is defined as the absolute bandwidth divided by the center frequency ( f C {\displaystyle f_{\mathrm {C} }} ), B F = Δ f f C . {\displaystyle B_{\mathrm {F} }={\frac {\Delta f}{f_{\mathrm {C} }}}\,.} The center frequency is usually defined as the arithmetic mean of the upper and lower frequencies so that, f C = f H + f L 2 {\displaystyle f_{\mathrm {C} }={\frac {f_{\mathrm {H} }+f_{\mathrm {L} }}{2}}\ } and B F = 2 ( f H − f L ) f H + f L . {\displaystyle B_{\mathrm {F} }={\frac {2(f_{\mathrm {H} }-f_{\mathrm {L} })}{f_{\mathrm {H} }+f_{\mathrm {L} }}}\,.} However, the center frequency is sometimes defined as the geometric mean of the upper and lower frequencies, f C = f H f L {\displaystyle f_{\mathrm {C} }={\sqrt {f_{\mathrm {H} }f_{\mathrm {L} }}}} and B F = f H − f L f H f L . {\displaystyle B_{\mathrm {F} }={\frac {f_{\mathrm {H} }-f_{\mathrm {L} }}{\sqrt {f_{\mathrm {H} }f_{\mathrm {L} }}}}\,.} While the geometric mean is more rarely used than the arithmetic mean (and the latter can be assumed if not stated explicitly) the former is considered more mathematically rigorous. It more properly reflects the logarithmic relationship of fractional bandwidth with increasing frequency. [ 5 ] For narrowband applications, there is only marginal difference between the two definitions. The geometric mean version is inconsequentially larger. For wideband applications they diverge substantially with the arithmetic mean version approaching 2 in the limit and the geometric mean version approaching infinity. Fractional bandwidth is sometimes expressed as a percentage of the center frequency ( percent bandwidth , % B {\displaystyle \%B} ), % B F = 100 Δ f f C . {\displaystyle \%B_{\mathrm {F} }=100{\frac {\Delta f}{f_{\mathrm {C} }}}\,.} Ratio bandwidth is defined as the ratio of the upper and lower limits of the band, B R = f H f L . {\displaystyle B_{\mathrm {R} }={\frac {f_{\mathrm {H} }}{f_{\mathrm {L} }}}\,.} Ratio bandwidth may be notated as B R : 1 {\displaystyle B_{\mathrm {R} }:1} . The relationship between ratio bandwidth and fractional bandwidth is given by, B F = 2 B R − 1 B R + 1 {\displaystyle B_{\mathrm {F} }=2{\frac {B_{\mathrm {R} }-1}{B_{\mathrm {R} }+1}}} and B R = 2 + B F 2 − B F . {\displaystyle B_{\mathrm {R} }={\frac {2+B_{\mathrm {F} }}{2-B_{\mathrm {F} }}}\,.} Percent bandwidth is a less meaningful measure in wideband applications. A percent bandwidth of 100% corresponds to a ratio bandwidth of 3:1. All higher ratios up to infinity are compressed into the range 100–200%. Ratio bandwidth is often expressed in octaves (i.e., as a frequency level ) for wideband applications. An octave is a frequency ratio of 2:1 leading to this expression for the number of octaves, log 2 ⁡ ( B R ) . 
{\displaystyle \log _{2}\left(B_{\mathrm {R} }\right).} The noise equivalent bandwidth (or equivalent noise bandwidth (enbw) ) of a system of frequency response H ( f ) {\displaystyle H(f)} is the bandwidth of an ideal filter with rectangular frequency response centered on the system's central frequency that produces the same average power outgoing H ( f ) {\displaystyle H(f)} when both systems are excited with a white noise source. The value of the noise equivalent bandwidth depends on the ideal filter reference gain used. Typically, this gain equals | H ( f ) | {\displaystyle |H(f)|} at its center frequency, [ 6 ] but it can also equal the peak value of | H ( f ) | {\displaystyle |H(f)|} . The noise equivalent bandwidth B n {\displaystyle B_{n}} can be calculated in the frequency domain using H ( f ) {\displaystyle H(f)} or in the time domain by exploiting the Parseval's theorem with the system impulse response h ( t ) {\displaystyle h(t)} . If H ( f ) {\displaystyle H(f)} is a lowpass system with zero central frequency and the filter reference gain is referred to this frequency, then: B n = ∫ − ∞ ∞ | H ( f ) | 2 d f 2 | H ( 0 ) | 2 = ∫ − ∞ ∞ | h ( t ) | 2 d t 2 | ∫ − ∞ ∞ h ( t ) d t | 2 . {\displaystyle B_{n}={\frac {\int _{-\infty }^{\infty }|H(f)|^{2}df}{2|H(0)|^{2}}}={\frac {\int _{-\infty }^{\infty }|h(t)|^{2}dt}{2\left|\int _{-\infty }^{\infty }h(t)dt\right|^{2}}}\,.} The same expression can be applied to bandpass systems by substituting the equivalent baseband frequency response for H ( f ) {\displaystyle H(f)} . The noise equivalent bandwidth is widely used to simplify the analysis of telecommunication systems in the presence of noise. In photonics , the term bandwidth carries a variety of meanings: A related concept is the spectral linewidth of the radiation emitted by excited atoms.
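As a numerical companion to the definitions above, the sketch below computes fractional and ratio bandwidth for an arbitrary illustrative band and then cross-checks a numerically integrated noise equivalent bandwidth of a first-order low-pass filter against the analytic value (π/2)·f_c; all numbers are chosen only for illustration.

```python
import numpy as np

# Fractional and ratio bandwidth for an illustrative band from 88 to 108 MHz.
f_L, f_H = 88e6, 108e6
B = f_H - f_L                          # absolute bandwidth
f_C = (f_H + f_L) / 2                  # arithmetic-mean centre frequency
B_F = B / f_C                          # fractional bandwidth
B_R = f_H / f_L                        # ratio bandwidth
assert abs(B_F - 2 * (B_R - 1) / (B_R + 1)) < 1e-12   # identity from the text
print(f"B = {B/1e6:.0f} MHz, B_F = {100 * B_F:.1f} %, B_R = {B_R:.3f}:1")

# Noise equivalent bandwidth of a single-pole low-pass with 3 dB cutoff f_c.
f_c = 1000.0                           # Hz
f = np.linspace(0.0, 2e6, 2_000_001)   # fine grid, well beyond the cutoff
H2 = 1.0 / (1.0 + (f / f_c) ** 2)      # |H(f)|^2, with |H(0)|^2 = 1
B_n = np.sum(H2) * (f[1] - f[0])       # one-sided integral / |H(0)|^2
print(B_n, np.pi / 2 * f_c)            # both approximately 1570.8 Hz
```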
https://en.wikipedia.org/wiki/Spectral_bandwidth
The spectral dimension is a real-valued quantity that characterizes a spacetime geometry and topology. It characterizes a spread into space over time, e.g. an ink drop diffusing in a glass of water or the evolution of a pandemic in a population. Its definition is as follows: if a phenomenon spreads as t^n, with t the time, then the spectral dimension is 2n. The spectral dimension depends on the topology of the space, e.g., the distribution of neighbors in a population, and on the diffusion rate. In physics, the concept of spectral dimension is used, among other things, in quantum gravity, [1] [2] [3] [4] [5] percolation theory, superstring theory, [6] and quantum field theory. [7] The diffusion of ink in an isotropic homogeneous medium like still water evolves as t^(3/2), giving a spectral dimension of 3. Ink in a 2D Sierpiński triangle diffuses along a more complicated path and thus more slowly, as t^0.6826, giving a spectral dimension of 1.3652. [8]
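The definition above can be illustrated numerically: simulate ordinary diffusion, measure how the volume of the spreading cloud grows with time, and read off twice the fitted exponent. The sketch below does this for plain three-dimensional Brownian motion, where the expected answer is 3; all parameters are arbitrary, and the RMS radius cubed is used as a simple proxy for the spread.

```python
import numpy as np

# Simulate Brownian walkers in 3D, fit volume ~ t**n, report 2*n (expect ~3).
rng = np.random.default_rng(0)
n_walkers, n_steps = 4000, 500

steps = rng.normal(size=(n_steps, n_walkers, 3))   # Brownian increments
positions = np.cumsum(steps, axis=0)               # walker trajectories

t = np.arange(1, n_steps + 1)
rms_radius = np.sqrt((positions ** 2).sum(axis=2).mean(axis=1))
volume = rms_radius ** 3                           # proxy for the spread

n = np.polyfit(np.log(t), np.log(volume), 1)[0]    # log-log slope
print("estimated spectral dimension:", 2 * n)      # approximately 3
```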
https://en.wikipedia.org/wiki/Spectral_dimension
A spectral energy distribution (SED) is a plot of energy versus frequency or wavelength of light (not to be confused with a 'spectrum' of flux density vs frequency or wavelength). [1] It is used in many branches of astronomy to characterize astronomical sources. For example, in radio astronomy SEDs are used to show the emission from synchrotron radiation, free-free emission and other emission mechanisms. In infrared astronomy, SEDs can be used to classify young stellar objects. The count rates observed from a given astronomical radiation source have no simple relationship to the flux from that source, such as might be incident at the top of the Earth's atmosphere. [2] This lack of a simple relationship is due in no small part to the complex properties of radiation detectors. [2] These detector properties can be divided into several categories.
https://en.wikipedia.org/wiki/Spectral_energy_distribution
Spectral flatness or tonality coefficient, [1] [2] also known as Wiener entropy, [3] [4] is a measure used in digital signal processing to characterize an audio spectrum. Spectral flatness is typically measured in decibels, and provides a way to quantify how much a sound resembles a pure tone, as opposed to being noise-like. [2] The meaning of tonal in this context is in the sense of the amount of peaks or resonant structure in a power spectrum, as opposed to the flat spectrum of white noise. A high spectral flatness (approaching 1.0 for white noise) indicates that the spectrum has a similar amount of power in all spectral bands — this would sound similar to white noise, and the graph of the spectrum would appear relatively flat and smooth. A low spectral flatness (approaching 0.0 for a pure tone) indicates that the spectral power is concentrated in a relatively small number of bands — this would typically sound like a mixture of sine waves, and the spectrum would appear "spiky". [5] Dubnov [2] has shown that spectral flatness is equivalent to the information-theoretic concept of mutual information known as dual total correlation. The spectral flatness is calculated by dividing the geometric mean of the power spectrum by the arithmetic mean of the power spectrum, i.e. Flatness = (∏_{n=0}^{N−1} x(n))^{1/N} / ((1/N) ∑_{n=0}^{N−1} x(n)), where x(n) represents the magnitude of bin number n and N is the number of bins. Note that a single (or more) empty bin yields a flatness of 0, so this measure is most useful when bins are generally not empty. The ratio produced by this calculation is often converted to a decibel scale for reporting, with a maximum of 0 dB and a minimum of −∞ dB. The spectral flatness can also be measured within a specified sub-band, rather than across the whole band. This measurement is one of the many audio descriptors used in the MPEG-7 standard, in which it is labelled "AudioSpectralFlatness". In birdsong research, it has been used as one of the features measured on birdsong audio, when testing similarity between two excerpts. [6] Spectral flatness has also been used in the analysis of electroencephalography (EEG) diagnostics and research, [7] and psychoacoustics in humans. [8]
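A direct implementation of the ratio above takes only a few lines; the sketch below uses NumPy, a small epsilon to guard against empty bins, and arbitrary test signals. Note that the flatness of a single noise periodogram comes out noticeably below 1 because of the statistical fluctuations of its bins.

```python
import numpy as np

def spectral_flatness(signal, eps=1e-12):
    """Geometric mean of the power spectrum divided by its arithmetic mean."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + eps   # eps avoids empty bins
    return np.exp(np.mean(np.log(power))) / np.mean(power)

fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)

print(spectral_flatness(rng.normal(size=fs)))          # noise-like: well above 0
print(spectral_flatness(np.sin(2 * np.pi * 440 * t)))  # pure tone: close to 0
```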
https://en.wikipedia.org/wiki/Spectral_flatness
In spectroscopy, spectral flux density is the quantity that describes the rate at which energy is transferred by electromagnetic radiation through a real or virtual surface, per unit surface area and per unit wavelength (or, equivalently, per unit frequency). It is a radiometric rather than a photometric measure. In SI units it is measured in W m−3, although it can be more practical to use W m−2 nm−1 (1 W m−2 nm−1 = 1 GW m−3 = 1 W mm−3) or W m−2 μm−1 (1 W m−2 μm−1 = 1 MW m−3), or, when expressed per unit frequency, W·m−2·Hz−1, jansky or solar flux units. The terms irradiance, radiant exitance, radiant emittance, and radiosity are closely related to spectral flux density. The terms used to describe spectral flux density vary between fields, sometimes including adjectives such as "electromagnetic" or "radiative", and sometimes dropping the word "density". Applications include: For the flux density received from a remote unresolvable "point source", the measuring instrument, usually telescopic, though not able to resolve any detail of the source itself, must be able to optically resolve enough details of the sky around the point source, so as to record radiation coming from it only, uncontaminated by radiation from other sources. In this case, [1] spectral flux density is the quantity that describes the rate at which energy transferred by electromagnetic radiation is received from that unresolved point source, per unit receiving area facing the source, per unit wavelength range. At any given wavelength λ, the spectral flux density, Fλ, can be determined by the following procedure: Spectral flux density is often used as the quantity on the y-axis of a graph representing the spectrum of a light source, such as a star. There are two main approaches to definition of the spectral flux density at a measuring point in an electromagnetic radiative field. One may be conveniently here labelled the 'vector approach', the other the 'scalar approach'. The vector definition refers to the full spherical integral of the spectral radiance (also known as the specific radiative intensity or specific intensity) at the point, while the scalar definition refers to the many possible hemispheric integrals of the spectral radiance (or specific intensity) at the point. The vector definition seems to be preferred for theoretical investigations of the physics of the radiative field. The scalar definition seems to be preferred for practical applications. The vector approach defines flux density as a vector at a point of space and time prescribed by the investigator. To distinguish this approach, one might speak of the 'full spherical flux density'. In this case, nature tells the investigator what is the magnitude, direction, and sense of the flux density at the prescribed point. [2] [3] [4] [5] [6] [7] For the flux density vector, one may write F(x, t; ν) = ∫Ω I(x, t; n̂, ν) n̂ dω(n̂), where I(x, t; n̂, ν) denotes the spectral radiance (or specific intensity) at the point x at time t and frequency ν, n̂ denotes a variable unit vector with origin at the point x, dω(n̂) denotes an element of solid angle around n̂, and Ω indicates that the integration extends over the full range of solid angles of a sphere.
Mathematically, defined as an unweighted integral over the solid angle of a full sphere, the flux density is the first moment of the spectral radiance (or specific intensity) with respect to solid angle. [5] It is not common practice to make the full spherical range of measurements of the spectral radiance (or specific intensity) at the point of interest, as is needed for the mathematical spherical integration specified in the strict definition; the concept is nevertheless used in theoretical analysis of radiative transfer. As described below, if the direction of the flux density vector is known in advance because of a symmetry, namely that the radiative field is uniformly layered and flat, then the vector flux density can be measured as the 'net flux', by algebraic summation of two oppositely sensed scalar readings in the known direction, perpendicular to the layers. At a given point in space, in a steady-state field, the vector flux density, a radiometric quantity, is equal to the time-averaged Poynting vector, [8] an electromagnetic field quantity. [4] [7] Within the vector approach to the definition, however, there are several specialized sub-definitions. Sometimes the investigator is interested only in a specific direction, for example the vertical direction referred to a point in a planetary or stellar atmosphere, because the atmosphere there is considered to be the same in every horizontal direction, so that only the vertical component of the flux is of interest. Then the horizontal components of flux are considered to cancel each other by symmetry, leaving only the vertical component of the flux as non-zero. In this case [4] some astrophysicists think in terms of the astrophysical flux (density), which they define as the vertical component of the flux (of the above general definition) divided by the number π. And sometimes [4] [5] the astrophysicist uses the term Eddington flux to refer to the vertical component of the flux (of the above general definition) divided by the number 4π. The scalar approach defines flux density as a scalar-valued function of a direction and sense in space prescribed by the investigator at a point prescribed by the investigator. Sometimes [9] this approach is indicated by the use of the term 'hemispheric flux'. For example, an investigator of thermal radiation, emitted from the material substance of the atmosphere, received at the surface of the earth, is interested in the vertical direction, and the downward sense in that direction. This investigator thinks of a unit area in a horizontal plane, surrounding the prescribed point. The investigator wants to know the total power of all the radiation from the atmosphere above in every direction, propagating with a downward sense, received by that unit area. [10] [11] [12] [13] [14] For the flux density scalar for the prescribed direction and sense, we may write F(x, t; ν) = ∫Ω+ I(x, t; n̂, ν) cos(θ(n̂)) dω(n̂), where, with the notation above, Ω+ indicates that the integration extends only over the solid angles of the relevant hemisphere, and θ(n̂) denotes the angle between n̂ and the prescribed direction. The term cos(θ(n̂)) is needed on account of Lambert's law.
[ 15 ] Mathematically, the quantity F ( x , t ; ν ) {\displaystyle F(\mathbf {x} ,t;\nu )} is not a vector because it is a positive scalar-valued function of the prescribed direction and sense, in this example, of the downward vertical. In this example, when the collected radiation is propagating in the downward sense, the detector is said to be "looking upwards". The measurement can be made directly with an instrument (such as a pyrgeometer) that collects the measured radiation all at once from all the directions of the imaginary hemisphere; in this case, Lambert-cosine-weighted integration of the spectral radiance (or specific intensity) is not performed mathematically after the measurement; the Lambert-cosine-weighted integration has been performed by the physical process of measurement itself. In a flat horizontal uniformly layered radiative field, the hemispheric fluxes, upwards and downwards, at a point, can be subtracted to yield what is often called the net flux . The net flux then has a value equal to the magnitude of the full spherical flux vector at that point, as described above. The radiometric description of the electromagnetic radiative field at a point in space and time is completely represented by the spectral radiance (or specific intensity) at that point. In a region in which the material is uniform and the radiative field is isotropic and homogeneous , let the spectral radiance (or specific intensity) be denoted by I ( x , t ; r 1 , ν ) , a scalar-valued function of its arguments x , t , r 1 , and ν , where r 1 denotes a unit vector with the direction and sense of the geometrical vector r from the source point P 1 to the detection point P 2 , where x denotes the coordinates of P 1 , at time t and wave frequency ν . Then, in the region, I ( x , t ; r 1 , ν ) takes a constant scalar value, which we here denote by I . In this case, the value of the vector flux density at P 1 is the zero vector, while the scalar or hemispheric flux density at P 1 in every direction in both senses takes the constant scalar value π I . The reason for the value π I is that the hemispheric integral is half the full spherical integral, and the integrated effect of the angles of incidence of the radiation on the detector requires a halving of the energy flux according to Lambert's cosine law ; the solid angle of a sphere is 4 π . The vector definition is suitable for the study of general radiative fields. The scalar or hemispheric spectral flux density is convenient for discussions in terms of the two-stream model of the radiative field, which is reasonable for a field that is uniformly stratified in flat layers, when the base of the hemisphere is chosen to be parallel to the layers, and one or other sense (up or down) is specified. In an inhomogeneous non-isotropic radiative field, the spectral flux density defined as a scalar-valued function of direction and sense contains much more directional information than does the spectral flux density defined as a vector, but the full radiometric information is customarily stated as the spectral radiance (or specific intensity). For the present purposes, the light from a star, and for some particular purposes, the light of the sun, can be treated as a practically collimated beam , but apart from this, a collimated beam is rarely if ever found in nature, [ 16 ] though artificially produced beams can be very nearly collimated. [ 17 ] The spectral radiance (or specific intensity) is suitable for the description of an uncollimated radiative field. 
The integrals of spectral radiance (or specific intensity) with respect to solid angle, used above, are singular for exactly collimated beams, or may be viewed as Dirac delta functions . Therefore, the specific radiative intensity is unsuitable for the description of a collimated beam, while spectral flux density is suitable for that purpose. [ 18 ] At a point within a collimated beam, the spectral flux density vector has a value equal to the Poynting vector , [ 8 ] a quantity defined in the classical Maxwell theory of electromagnetic radiation. [ 7 ] [ 19 ] [ 20 ] Sometimes it is more convenient to display graphical spectra with vertical axes that show the relative spectral flux density . In this case, the spectral flux density at a given wavelength is expressed as a fraction of some arbitrarily chosen reference value. Relative spectral flux densities are expressed as pure numbers without any units. Spectra showing the relative spectral flux density are used when we are interested in comparing the spectral flux densities of different sources; for example, if we want to show how the spectra of blackbody sources vary with absolute temperature, it is not necessary to show the absolute values. The relative spectral flux density is also useful if we wish to compare a source's flux density at one wavelength with the same source's flux density at another wavelength; for example, if we wish to demonstrate how the Sun's spectrum peaks in the visible part of the EM spectrum, a graph of the Sun's relative spectral flux density will suffice.
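As a numerical check of the isotropic-field result quoted above (a hemispheric flux of πI for constant radiance I), the following sketch integrates a constant radiance over one hemisphere with the Lambert cosine weight; the radiance value and grid size are arbitrary.

```python
import numpy as np

# Hemispheric (scalar) flux for an isotropic field of constant radiance I:
# F = integral over the hemisphere of I * cos(theta) d(omega), which should
# approach pi * I.
I = 2.5                                       # arbitrary constant radiance
theta = np.linspace(0.0, np.pi / 2, 200_001)  # polar angle from the normal
dtheta = theta[1] - theta[0]

# The azimuthal integral of a constant contributes a factor of 2*pi, leaving
# the integrand I * cos(theta) * sin(theta) over theta.
F = 2 * np.pi * np.sum(I * np.cos(theta) * np.sin(theta)) * dtheta
print(F, np.pi * I)                           # both approximately 7.854
```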
https://en.wikipedia.org/wiki/Spectral_flux_density
In signal processing and related disciplines, aliasing is a phenomenon in which a signal reconstructed from samples contains low-frequency components that are not present in the original signal. It occurs when the original signal contains components at frequencies exceeding a certain frequency called the Nyquist frequency, fs/2, where fs is the sampling frequency (undersampling). This happens because typical reconstruction methods select low-frequency components, while there are a number of frequency components, called aliases, whose sampling yields identical samples. The term also often refers to the distortion or artifact that results when a signal reconstructed from samples is different from the original continuous signal. Aliasing can occur in signals sampled in time, for instance in digital audio or the stroboscopic effect, and is referred to as temporal aliasing. Aliasing in spatially sampled signals (e.g., moiré patterns in digital images) is referred to as spatial aliasing. Aliasing is generally avoided by applying low-pass filters or anti-aliasing filters (AAF) to the input signal before sampling and when converting a signal from a higher to a lower sampling rate. Suitable reconstruction filtering should then be used when restoring the sampled signal to the continuous domain or converting a signal from a lower to a higher sampling rate. For spatial anti-aliasing, the types of anti-aliasing include fast approximate anti-aliasing (FXAA), multisample anti-aliasing, and supersampling. When a digital image is viewed, a reconstruction is performed by a display or printer device, and by the eyes and the brain. If the image data is processed incorrectly during sampling or reconstruction, the reconstructed image will differ from the original image, and an alias is seen. An example of spatial aliasing is the moiré pattern observed in a poorly pixelized image of a brick wall. Spatial anti-aliasing techniques avoid such poor pixelizations. Aliasing can be caused either by the sampling stage or the reconstruction stage; these may be distinguished by calling sampling aliasing prealiasing and reconstruction aliasing postaliasing. [1] Temporal aliasing is a major concern in the sampling of video and audio signals. Music, for instance, may contain high-frequency components that are inaudible to humans. If a piece of music is sampled at 32,000 samples per second (Hz), any frequency components at or above 16,000 Hz (the Nyquist frequency for this sampling rate) will cause aliasing when the music is reproduced by a digital-to-analog converter (DAC). The high frequencies in the analog signal will appear as lower frequencies (wrong alias) in the recorded digital sample and, hence, cannot be reproduced by the DAC. To prevent this, an anti-aliasing filter is used to remove components above the Nyquist frequency prior to sampling. In video or cinematography, temporal aliasing results from the limited frame rate, and causes the wagon-wheel effect, whereby a spoked wheel appears to rotate too slowly or even backwards. Aliasing has changed its apparent frequency of rotation. A reversal of direction can be described as a negative frequency. Temporal aliasing frequencies in video and cinematography are determined by the frame rate of the camera, but the relative intensity of the aliased frequencies is determined by the shutter timing (exposure time) or the use of a temporal aliasing reduction filter during filming.
[ 2 ] [ unreliable source? ] Like the video camera, most sampling schemes are periodic; that is, they have a characteristic sampling frequency in time or in space. Digital cameras provide a certain number of samples ( pixels ) per degree or per radian, or samples per mm in the focal plane of the camera. Audio signals are sampled ( digitized ) with an analog-to-digital converter , which produces a constant number of samples per second. Some of the most dramatic and subtle examples of aliasing occur when the signal being sampled also has periodic content. Actual signals have a finite duration and their frequency content, as defined by the Fourier transform , has no upper bound. Some amount of aliasing always occurs when such continuous functions over time are sampled. Functions whose frequency content is bounded ( bandlimited ) have an infinite duration in the time domain. If sampled at a high enough rate, determined by the bandwidth , the original function can, in theory, be perfectly reconstructed from the infinite set of samples. Sometimes aliasing is used intentionally on signals with no low-frequency content, called bandpass signals. Undersampling , which creates low-frequency aliases, can produce the same result, with less effort, as frequency-shifting the signal to lower frequencies before sampling at the lower rate. Some digital channelizers exploit aliasing in this way for computational efficiency. [ 3 ] (See Sampling (signal processing) , Nyquist rate (relative to sampling) , and Filter bank .) Sinusoids are an important type of periodic function, because realistic signals are often modeled as the summation of many sinusoids of different frequencies and different amplitudes (for example, with a Fourier series or transform ). Understanding what aliasing does to the individual sinusoids is useful in understanding what happens to their sum. When sampling a function at frequency f s (i.e., the sampling interval is 1/ f s ), the following functions of time ( t ) yield identical sets of samples if the sampling starts from t = 0 {\textstyle t=0} such that t = 1 f s n {\displaystyle t={\frac {1}{f_{s}}}n} where n = 0 , 1 , 2 , 3 {\textstyle n=0,1,2,3} , and so on: { sin ⁡ ( 2 π ( f + N f s ) t + φ ) , N = 0 , ± 1 , ± 2 , ± 3 , … } . {\displaystyle \{\sin(2\pi (f+Nf_{s})t+\varphi ),N=0,\pm 1,\pm 2,\pm 3,\ldots \}.} A frequency spectrum of the samples produces equally strong responses at all those frequencies. Without collateral information, the frequency of the original function is ambiguous. So, the functions and their frequencies are said to be aliases of each other. Noting the sine functions as odd functions : thus, we can write all the alias frequencies as positive values: f N ( f ) ≜ | f + N f s | {\displaystyle f_{_{N}}(f)\triangleq \left|f+Nf_{\rm {s}}\right|} . For example, a snapshot of the lower right frame of Fig.2 shows a component at the actual frequency f {\displaystyle f} and another component at alias f − 1 ( f ) {\displaystyle f_{_{-1}}(f)} . As f {\displaystyle f} increases during the animation, f − 1 ( f ) {\displaystyle f_{_{-1}}(f)} decreases. The point at which they are equal ( f = f s / 2 ) {\displaystyle (f=f_{s}/2)} is an axis of symmetry called the folding frequency , also known as Nyquist frequency . Aliasing matters when one attempts to reconstruct the original waveform from its samples. The most common reconstruction technique produces the smallest of the f N ( f ) {\displaystyle f_{_{N}}(f)} frequencies. 
So, it is usually important that f 0 ( f ) {\displaystyle f_{0}(f)} be the unique minimum. A necessary and sufficient condition for that is f s / 2 > | f | , {\displaystyle f_{s}/2>|f|,} called the Nyquist condition . The lower left frame of Fig.2 depicts the typical reconstruction result of the available samples. Until f {\displaystyle f} exceeds the Nyquist frequency, the reconstruction matches the actual waveform (upper left frame). After that, it is the low frequency alias of the upper frame. The figures below offer additional depictions of aliasing, due to sampling. A graph of amplitude vs frequency (not time) for a single sinusoid at frequency 0.6 f s and some of its aliases at 0.4 f s , 1.4 f s , and 1.6 f s would look like the 4 black dots in Fig.3. The red lines depict the paths ( loci ) of the 4 dots if we were to adjust the frequency and amplitude of the sinusoid along the solid red segment (between f s /2 and f s ). No matter what function we choose to change the amplitude vs frequency, the graph will exhibit symmetry between 0 and f s . Folding is often observed in practice when viewing the frequency spectrum of real-valued samples, such as Fig.4. Complex sinusoids are waveforms whose samples are complex numbers ( z = A e i θ = A ( cos ⁡ θ + i sin ⁡ θ ) {\textstyle z=Ae^{i\theta }=A(\cos \theta +i\sin \theta )} ), and the concept of negative frequency is necessary to distinguish them. In that case, the frequencies of the aliases are given by just : f N ( f ) = f + N f s . (In real sinusoids, as shown in the above, all alias frequencies can be written as positive frequencies f N ( f ) ≜ | f + N f s | {\displaystyle f_{_{N}}(f)\triangleq \left|f+Nf_{\rm {s}}\right|} because of sine functions as odd functions.) Therefore, as f increases from 0 to f s , f −1 ( f ) also increases (from – f s to 0). Consequently, complex sinusoids do not exhibit folding . When the condition f s /2 > f is met for the highest frequency component of the original signal, then it is met for all the frequency components, a condition called the Nyquist criterion . That is typically approximated by filtering the original signal to attenuate high frequency components before it is sampled. These attenuated high frequency components still generate low-frequency aliases, but typically at low enough amplitudes that they do not cause problems. A filter chosen in anticipation of a certain sample frequency is called an anti-aliasing filter . The filtered signal can subsequently be reconstructed, by interpolation algorithms, without significant additional distortion. Most sampled signals are not simply stored and reconstructed. But the fidelity of a theoretical reconstruction (via the Whittaker–Shannon interpolation formula ) is a customary measure of the effectiveness of sampling. Historically the term aliasing evolved from radio engineering because of the action of superheterodyne receivers . When the receiver shifts multiple signals down to lower frequencies, from RF to IF by heterodyning , an unwanted signal, from an RF frequency equally far from the local oscillator (LO) frequency as the desired signal, but on the wrong side of the LO, can end up at the same IF frequency as the wanted one. If it is strong enough it can interfere with reception of the desired signal. This unwanted signal is known as an image or alias of the desired signal. 
The first written use of the terms "alias" and "aliasing" in signal processing appears to be in a 1949 unpublished Bell Laboratories technical memorandum [ 4 ] by John Tukey and Richard Hamming . That paper includes an example of frequency aliasing dating back to 1922. The first published use of the term "aliasing" in this context is due to Blackman and Tukey in 1958. [ 5 ] In their preface to the Dover reprint [ 6 ] of this paper, they point out that the idea of aliasing had been illustrated graphically by Stumpf [ 7 ] ten years prior. The 1949 Bell technical report refers to aliasing as though it is a well-known concept, but does not offer a source for the term. Gwilym Jenkins and Maurice Priestley credit Tukey with introducing it in this context, [ 8 ] though an analogous concept of aliasing had been introduced a few years earlier [ 9 ] in fractional factorial designs . While Tukey did significant work in factorial experiments [ 10 ] and was certainly aware of aliasing in fractional designs, [ 11 ] it cannot be determined whether his use of "aliasing" in signal processing was consciously inspired by such designs. Aliasing occurs whenever the use of discrete elements to capture or produce a continuous signal causes frequency ambiguity. Spatial aliasing, particular of angular frequency, can occur when reproducing a light field or sound field with discrete elements, as in 3D displays or wave field synthesis of sound. [ 12 ] This aliasing is visible in images such as posters with lenticular printing : if they have low angular resolution, then as one moves past them, say from left-to-right, the 2D image does not initially change (so it appears to move left), then as one moves to the next angular image, the image suddenly changes (so it jumps right) – and the frequency and amplitude of this side-to-side movement corresponds to the angular resolution of the image (and, for frequency, the speed of the viewer's lateral movement), which is the angular aliasing of the 4D light field. The lack of parallax on viewer movement in 2D images and in 3-D film produced by stereoscopic glasses (in 3D films the effect is called " yawing ", as the image appears to rotate on its axis) can similarly be seen as loss of angular resolution, all angular frequencies being aliased to 0 (constant). The qualitative effects of aliasing can be heard in the following audio demonstration. Six sawtooth waves are played in succession, with the first two sawtooths having a fundamental frequency of 440 Hz (A4), the second two having fundamental frequency of 880 Hz (A5), and the final two at 1760 Hz (A6). The sawtooths alternate between bandlimited (non-aliased) sawtooths and aliased sawtooths and the sampling rate is 22050 Hz. The bandlimited sawtooths are synthesized from the sawtooth waveform's Fourier series such that no harmonics above the Nyquist frequency (11025 Hz = 22050 Hz / 2 here) are present. The aliasing distortion in the lower frequencies is increasingly obvious with higher fundamental frequencies, and while the bandlimited sawtooth is still clear at 1760 Hz, the aliased sawtooth is degraded and harsh with a buzzing audible at frequencies lower than the fundamental. A form of spatial aliasing can also occur in antenna arrays or microphone arrays used to estimate the direction of arrival of a wave signal, as in geophysical exploration by seismic waves. Waves must be sampled more densely than two points per wavelength , or the wave arrival direction becomes ambiguous. [ 13 ]
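The audio demonstration described above can be reproduced in outline with a short, hedged sketch: a bandlimited sawtooth is synthesized from its Fourier series with all harmonics above the Nyquist frequency omitted, while a naive sawtooth is evaluated directly and therefore aliases. The fundamental (440 Hz) and sampling rate (22050 Hz) follow the text; the synthesis code itself is only illustrative.

```python
import numpy as np

fs = 22050                 # sampling rate from the demonstration above
f0 = 440.0                 # fundamental (A4)
t = np.arange(fs) / fs     # one second of samples

# Naive sawtooth: evaluated directly, so harmonics above fs/2 fold back as aliases
naive = 2.0 * (t * f0 - np.floor(t * f0 + 0.5))

# Bandlimited sawtooth: Fourier series truncated at the Nyquist frequency fs/2
k_max = int((fs / 2) // f0)            # highest harmonic kept (25 for 440 Hz)
k = np.arange(1, k_max + 1)
bandlimited = (2/np.pi) * np.sum(
    ((-1)**(k[:, None] + 1) / k[:, None]) * np.sin(2*np.pi*k[:, None]*f0*t),
    axis=0)
```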
https://en.wikipedia.org/wiki/Spectral_folding
In quantum mechanics , the spectral gap of a system is the energy difference between its ground state and its first excited state . [ 1 ] [ 2 ] The mass gap is the spectral gap between the vacuum and the lightest particle. A Hamiltonian with a spectral gap is called a gapped Hamiltonian , and one without such a gap is called gapless . In solid-state physics , the most important spectral gap is for the many-body system of electrons in a solid material, in which case it is often known as an energy gap . In quantum many-body systems, ground states of gapped Hamiltonians have exponential decay of correlations. [ 3 ] [ 4 ] [ 5 ] In 2015, it was shown that the problem of determining the existence of a spectral gap is undecidable in two or more dimensions. [ 6 ] [ 7 ] The authors used an aperiodic tiling of quantum Turing machines and showed that this hypothetical material becomes gapped if and only if the machine halts. [ 8 ] The one-dimensional case was also proven undecidable in 2020 by constructing a chain of interacting qudits divided into blocks that gain energy if and only if they represent a full computation by a Turing machine, and showing that this system becomes gapped if and only if the machine does not halt. [ 9 ]
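As a concrete, illustrative example (not taken from the article), the spectral gap of a small quantum many-body Hamiltonian can be computed by exact diagonalization; here a transverse-field Ising chain is used, with model and parameters chosen purely for demonstration.

```python
import numpy as np
from functools import reduce

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op(single, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    mats = [single if i == site else I2 for i in range(n)]
    return reduce(np.kron, mats)

def ising_gap(n=6, J=1.0, h=0.5):
    """Spectral gap E1 - E0 of a transverse-field Ising chain (open ends)."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op(Z, i, n) @ op(Z, i + 1, n)
    for i in range(n):
        H -= h * op(X, i, n)
    energies = np.linalg.eigvalsh(H)
    return energies[1] - energies[0]

print(ising_gap())   # energy difference between ground and first excited state
```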
https://en.wikipedia.org/wiki/Spectral_gap_(physics)
Spectral imaging is an umbrella term for energy-resolved X-ray imaging in medicine. [ 1 ] The technique makes use of the energy dependence of X-ray attenuation to either increase the contrast-to-noise ratio , or to provide quantitative image data and reduce image artefacts by so-called material decomposition. Dual-energy imaging, i.e. imaging at two energy levels, is a special case of spectral imaging and is still the most widely used terminology, but the terms "spectral imaging" and "spectral CT" have been coined to acknowledge the fact that photon-counting detectors have the potential for measurements at a larger number of energy levels. [ 2 ] [ 3 ] The first medical application of spectral imaging appeared in 1953 when B. Jacobson at the Karolinska University Hospital , inspired by X-ray absorption spectroscopy , presented a method called "dichromography" to measure the concentration of iodine in X-ray images. [ 4 ] In the 70's, spectral computed tomography (CT) with exposures at two different voltage levels was proposed by G.N. Hounsfield in his landmark CT paper. [ 5 ] The technology evolved rapidly during the 70's and 80's, [ 6 ] [ 7 ] but technical limitations, such as motion artifacts, [ 8 ] for long held back widespread clinical use. In recent years, however, two fields of technological breakthrough have spurred a renewed interest in energy-resolved imaging. Firstly, single-scan energy-resolved CT was introduced for routine clinical use in 2006 and is now available by several major manufacturers, [ 9 ] which has resulted in a large and expanding number of clinical applications. Secondly, energy-resolving photon-counting detectors start to become available for clinical practice; the first commercial photon-counting system was introduced for mammography in 2003, [ 10 ] and CT systems are at the verge of being feasible for routine clinical use. [ 11 ] An energy-resolved imaging system probes the object at two or more photon energy levels. In a generic imaging system, the projected signal in a detector element at energy level Ω ∈ { E 1 , E 2 , E 3 , … } {\textstyle \Omega \in \{E_{1},E_{2},E_{3},\ldots \}} is [ 1 ] where q {\textstyle q} is the number of incident photons, Φ {\textstyle \Phi } is the normalized incident energy spectrum, and Γ {\textstyle \Gamma } is the detector response function. Linear attenuation coefficients and integrated thicknesses for materials that make up the object are denoted μ {\textstyle \mu } and t {\textstyle t} (attenuation according to Lambert–Beers law ). Two conceivable ways of acquiring spectral information are to either vary q × Φ {\textstyle q\times \Phi } with Ω {\textstyle \Omega } , or to have Ω {\textstyle \Omega } -specific Γ {\textstyle \Gamma } , here denoted incidence-based and detection-based methods, respectively. Most elements appearing naturally in human bodies are of low atomic number and lack absorption edges in the diagnostic X-ray energy range. The two dominating X-ray interaction effects are then Compton scattering and the photo-electric effect , which can be assumed to be smooth and with separable and independent material and energy dependences. The linear attenuation coefficients can hence be expanded as [ 6 ] In contrast-enhanced imaging, high-atomic-number contrast agents with K absorption edges in the diagnostic energy range may be present in the body. 
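The generic forward model described above can be sketched numerically. The snippet below assumes the standard Lambert–Beer form implied by the text, i.e. a detected signal obtained by integrating the incident spectrum q × Φ, the detector response Γ, and the attenuation exp(−Σ μ i t i ) over energy; the spectra, attenuation curves and thicknesses are invented placeholders rather than values from the source. Material decomposition, discussed next, amounts to solving such equations, acquired at two or more energy levels, for the unknown thicknesses.

```python
import numpy as np

# Energy grid (keV) and a made-up description of two acquisitions (low/high spectra)
E = np.linspace(20, 120, 101)
q = 1e6                                         # incident photons per acquisition
Phi = {"low":  np.exp(-(E - 50)**2 / 300),      # illustrative incident spectra
       "high": np.exp(-(E - 80)**2 / 600)}
Phi = {k: v / np.trapz(v, E) for k, v in Phi.items()}   # normalize each spectrum
Gamma = np.ones_like(E)                         # ideal detector response

# Illustrative linear attenuation coefficients (1/cm) and thicknesses (cm) of two basis materials
mu = {"soft_tissue": 0.3 * (E / 50.0) ** -2.8 + 0.18,
      "bone":        1.5 * (E / 50.0) ** -3.0 + 0.30}
t  = {"soft_tissue": 10.0, "bone": 1.5}

def detected(spectrum):
    """Projected signal: incident spectrum attenuated by Lambert-Beer, weighted by detector response."""
    atten = np.exp(-sum(mu[m] * t[m] for m in mu))
    return q * np.trapz(spectrum * Gamma * atten, E)

n_low, n_high = detected(Phi["low"]), detected(Phi["high"])
print(n_low, n_high)   # two energy-resolved measurements, the input to material decomposition
```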
K-edge energies are material specific, which means that the energy dependence of the photo-electric effect is no longer separable from the material properties, and an additional term can be added to Eq. ( 2 ) according to [ 12 ] where a K {\textstyle a_{K}} and f K {\textstyle f_{K}} are the material coefficient and energy dependency of contrast-agent material K {\textstyle K} . Summing the energy bins in Eq. ( 1 ) ( n = ∑ n Ω {\textstyle n=\sum n_{\Omega }} ) yields a conventional non-energy-resolved image, but because X-ray contrast varies with energy, a weighted sum ( n = ∑ w Ω × n Ω {\textstyle n=\sum w_{\Omega }\times n_{\Omega }} ) optimizes the contrast-to-noise-ratio (CNR) and enables a higher CNR at a constant patient dose or a lower dose at a constant CNR. [ 13 ] The benefit of energy weighting is highest where the photo-electric effect dominates and lower in high-energy regions dominated by Compton scattering (with weaker energy dependence). Energy weighting was pioneered by Tapiovaara and Wagner [ 13 ] and has subsequently been refined for projection imaging [ 14 ] [ 15 ] and CT [ 16 ] with CNR improvements ranging from a few percent up to tenth of percent for heavier elements and an ideal CT detector. [ 17 ] An example with a realistic detector was presented by Berglund et al. who modified a photon-counting mammography system and raised the CNR of clinical images by 2.2–5.2%. [ 18 ] Equation ( 1 ) can be treated as a system of equations with material thicknesses as unknowns, a technique broadly referred to as material decomposition. System properties and linear attenuation coefficients need to be known, either explicitly (by modelling) or implicitly (by calibration). In CT, implementing material decomposition post reconstruction (image-based decomposition) does not require coinciding projection data, but the decomposed images may suffer from beam-hardening artefacts because the reconstruction algorithm is generally non-reversible. [ 19 ] Applying material decomposition directly in projection space instead (projection-based decomposition), [ 6 ] can in principle eliminate beam-hardening artefacts because the decomposed projections are quantitative, but the technique requires coinciding projection data such as from a detection-based method. In the absence of K-edge contrast agents and any other information about the object (e.g. thickness), the limited number of independent energy dependences according to Eq. ( 2 ) means that the system of equations can only be solved for two unknowns, and measurements at two energies ( | Ω | = 2 {\textstyle |\Omega |=2} ) are necessary and sufficient for a unique solution of t 1 {\textstyle t_{1}} and t 2 {\textstyle t_{2}} . [ 7 ] Materials 1 and 2 are referred to as basis materials and are assumed to make up the object; any other material present in the object will be represented by a linear combination of the two basis materials. Material-decomposed images can be used to differentiate between healthy and malignant tissue, such as micro calcifications in the breast , [ 20 ] ribs and pulmonary nodules, [ 21 ] cysts , solid tumors and normal breast tissue, [ 22 ] posttraumatic bone bruises ( bone marrow edema ) and the bone itself, [ 23 ] different types of renal calculi (stones), [ 24 ] and gout in the joints. 
[ 25 ] The technique can also be used to characterize healthy tissue, such as the composition of breast tissue (an independent risk factor for breast cancer) [ 26 ] [ 27 ] [ 28 ] and bone-mineral density (an independent risk factor for fractures and all-cause mortality). [ 29 ] Finally, virtual autopsies with spectral imaging can facilitate detection and characterization of bullets, knife tips, glass or shell fragments etc. [ 30 ] The basis-material representation can be readily converted to images showing the amounts of photoelectric and Compton interactions by invoking Eq. ( 2 ), and to images of effective-atomic-number and electron density distributions. [ 6 ] As the basis-material representation is sufficient to describe the linear attenuation of the object, it is possible to calculate virtual monochromatic images, which is useful for optimizing the CNR to a certain imaging task, analogous to energy weighting. For instance, the CNR between grey and white brain matter is maximized at medium energies, whereas artefacts caused by photon starvation are minimized at higher virtual energies. [ 31 ] In contrast-enhanced imaging , additional unknowns may be added to the system of equations according to Eq. ( 3 ) if one or several K absorption edges are present in the imaged energy range, a technique often referred to as K-edge imaging. With one K-edge contrast agent, measurements at three energies ( | Ω | = 3 {\textstyle |\Omega |=3} ) are necessary and sufficient for a unique solution, two contrast agents can be differentiated with four energy bins ( | Ω | = 4 {\textstyle |\Omega |=4} ), etc. K-edge imaging can be used to either enhance and quantify, or to suppress a contrast agent. Enhancement of contrast agents can be used for improved detection and diagnosis of tumors, [ 32 ] which exhibit increased retention of contrast agents. Further, differentiation between iodine and calcium is often challenging in conventional CT, but energy-resolved imaging can facilitate many procedures by, for instance, suppressing bone contrast [ 33 ] and improving characterization of atherosclerotic plaque . [ 34 ] Suppression of contrast agents is employed in so-called virtual unenhanced or virtual non-contrast (VNC) images. VNC images are free from iodine staining (contrast-agent residuals), [ 35 ] can save dose to the patient by reducing the need for an additional non-contrast acquisition, [ 36 ] can improve radiotherapy dose calculations from CT images, [ 37 ] and can help in distinguishing between contrast agent and foreign objects. [ 38 ] Most studies of contrast-enhanced spectral imaging have used iodine, which is a well-established contrast agent, but the K edge of iodine at 33.2 keV is not optimal for all applications and some patients are hypersensitive to iodine. Other contrast agents have therefore been proposed, such as gadolinium (K edge at 50.2 keV), [ 39 ] nanoparticle silver (K edge at 25.5 keV), [ 40 ] zirconium (K edge at 18.0 keV), [ 41 ] and gold (K edge at 80.7 keV). [ 42 ] Some contrast agents can be targeted, [ 43 ] which opens up possibilities for molecular imaging , and using several contrast agents with different K-edge energies in combination with photon-counting detectors with a corresponding number of energy thresholds enable multi-agent imaging. [ 44 ] Incidence-based methods obtain spectral information by acquiring several images at different tube voltage settings, possibly in combination with different filtering. Temporal differences between the exposures (e.g. 
patient motion, variation in contrast-agent concentration) long limited practical implementations, [ 6 ] but dual-source CT [ 9 ] and subsequently rapid kV switching [ 45 ] have now virtually eliminated the time between exposures. Splitting the incident radiation of a scanning system into two beams with different filtration is another way to quasi-simultaneously acquire data at two energy levels. [ 46 ] Detection-based methods instead obtain spectral information by splitting the spectrum after interaction in the object. So-called sandwich detectors consist of two (or more) detector layers, where the top layer preferentially detects low-energy photons and the bottom layer detects a harder spectrum. [ 47 ] [ 48 ] Detection-based methods enable projection-based material decomposition because the two energy levels measured by the detector represent identical ray paths. Further, spectral information is available from every scan, which has work-flow advantages. [ 49 ] The currently most advanced detection-based method is based on photon-counting detectors . As opposed to conventional detectors , which integrate all photon interactions over the exposure time, photon-counting detectors are fast enough to register and measure the energy of single photon events. [ 50 ] Hence, the number of energy bins and the spectral separation are not determined by physical properties of the system (detector layers, source / filtration etc.), but by the detector electronics, which increases efficiency and the degrees of freedom, and enables elimination of electronic noise . The first commercial photon-counting application was the MicroDose mammography system, introduced by Sectra Mamea in 2003 (later acquired by Philips), [ 10 ] and spectral imaging was launched on this platform in 2013. [ 51 ] The MicroDose system was based on silicon strip detectors, [ 10 ] [ 51 ] a technology that has subsequently been refined for CT with up to eight energy bins. [ 52 ] [ 53 ] Silicon as a sensor material benefits from high charge-collection efficiency, ready availability of high-quality high-purity silicon crystals, and established methods for test and assembly. [ 54 ] The relatively low photo-electric cross section can be compensated for by arranging the silicon wafers edge on, [ 55 ] which also enables depth segments. [ 56 ] Cadmium telluride (CdTe) and cadmium–zinc telluride (CZT) are also being investigated as sensor materials. [ 57 ] [ 58 ] [ 59 ] The higher atomic number of these materials results in a higher photo-electric cross section, which is advantageous, but the higher fluorescent yield degrades spectral response and induces cross talk. [ 60 ] [ 61 ] Manufacturing of macro-sized crystals of these materials has so far posed practical challenges and leads to charge trapping [ 62 ] and long-term polarization effects (build-up of space charge). [ 63 ] Other solid-state materials, such as gallium arsenide [ 64 ] and mercuric iodide , [ 65 ] as well as gas detectors, [ 66 ] are currently quite far from clinical implementation. The main intrinsic challenge of photon-counting detectors for medical imaging is pulse pileup, [ 62 ] which results in lost counts and reduced energy resolution because several pulses are counted as one. Pileup will always be present in photon-counting detectors because of the Poisson distribution of incident photons, but detector speeds are now so high that acceptable pileup levels at CT count rates begin to come within reach. [ 67 ]
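The text notes that pulse pileup causes lost counts at high rates. A common way to quantify this, used here as an assumption rather than something stated in the source, is a paralyzable dead-time model in which the registered rate is n·exp(−n·τ) for a true rate n and dead time τ:

```python
import numpy as np

def registered_rate(true_rate, dead_time):
    """Paralyzable dead-time model: events arriving within `dead_time` of a previous
    event extend the dead period, so the registered rate is n * exp(-n * tau)."""
    return true_rate * np.exp(-true_rate * dead_time)

tau = 20e-9                                   # assumed per-channel dead time (20 ns)
true_rates = np.array([1e6, 1e7, 1e8, 5e8])   # incident photons per second per channel
for n in true_rates:
    m = registered_rate(n, tau)
    print(f"{n:.0e} cps -> {m:.3e} registered ({100*(1 - m/n):.1f}% lost to pileup)")
```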
https://en.wikipedia.org/wiki/Spectral_imaging_(radiography)
The Fourier transform of a function of time, s(t), is a complex-valued function of frequency, S(f), often referred to as a frequency spectrum . Any linear time-invariant operation on s(t) produces a new spectrum of the form H(f)•S(f), which changes the relative magnitudes and/or angles ( phase ) of the non-zero values of S(f). Any other type of operation creates new frequency components that may be referred to as spectral leakage in the broadest sense. Sampling , for instance, produces leakage, which we call aliases of the original spectral component. For Fourier transform purposes, sampling is modeled as a product between s(t) and a Dirac comb function. The spectrum of a product is the convolution between S(f) and another function, which inevitably creates the new frequency components. But the term 'leakage' usually refers to the effect of windowing , which is the product of s(t) with a different kind of function, the window function . Window functions happen to have finite duration, but that is not necessary to create leakage. Multiplication by a time-variant function is sufficient. The Fourier transform of the function cos( ωt ) is zero, except at frequency ± ω . However, many other functions and waveforms do not have convenient closed-form transforms. Alternatively, one might be interested in their spectral content only during a certain time period. In either case, the Fourier transform (or a similar transform) can be applied on one or more finite intervals of the waveform. In general, the transform is applied to the product of the waveform and a window function. Any window (including rectangular) affects the spectral estimate computed by this method. The effects are most easily characterized by their effect on a sinusoidal s(t) function, whose unwindowed Fourier transform is zero for all but one frequency. The customary frequency of choice is 0 Hz, because the windowed Fourier transform is simply the Fourier transform of the window function itself (see § Examples of window functions ) : When both sampling and windowing are applied to s(t), in either order, the leakage caused by windowing is a relatively localized spreading of frequency components, with often a blurring effect, whereas the aliasing caused by sampling is a periodic repetition of the entire blurred spectrum. Windowing of a simple waveform like cos( ωt ) causes its Fourier transform to develop non-zero values (commonly called spectral leakage) at frequencies other than ω . The leakage tends to be worst (highest) near ω and least at frequencies farthest from ω . If the waveform under analysis comprises two sinusoids of different frequencies, leakage can interfere with our ability to distinguish them spectrally. Possible types of interference are often broken down into two opposing classes as follows: If the component frequencies are dissimilar and one component is weaker, then leakage from the stronger component can obscure the weaker one's presence. But if the frequencies are too similar, leakage can render them unresolvable even when the sinusoids are of equal strength. Windows that are effective against the first type of interference, namely where components have dissimilar frequencies and amplitudes, are called high dynamic range . Conversely, windows that can distinguish components with similar frequencies and amplitudes are called high resolution . 
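A minimal numerical sketch (not from the article) of the leakage just described: a sinusoid whose frequency falls between DFT bins is analyzed with a rectangular and with a Hann window, and the leakage level far from the peak is compared. The window length and frequencies are arbitrary choices.

```python
import numpy as np

N = 128
n = np.arange(N)
x = np.cos(2 * np.pi * 10.5 * n / N)           # frequency between DFT bins -> leakage

rect = np.ones(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # periodic Hann window

for name, w in [("rectangular", rect), ("Hann", hann)]:
    X = np.fft.rfft(x * w)
    mag_db = 20 * np.log10(np.abs(X) / np.abs(X).max() + 1e-12)
    # Leakage level roughly 40 bins away from the peak near bin 10.5
    print(name, "level 40 bins from the peak: %.1f dB" % mag_db[50])
```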
The rectangular window is an example of a window that is high resolution but low dynamic range , meaning it is good for distinguishing components of similar amplitude even when the frequencies are also close, but poor at distinguishing components of different amplitude even when the frequencies are far away. High-resolution, low-dynamic-range windows such as the rectangular window also have the property of high sensitivity , which is the ability to reveal relatively weak sinusoids in the presence of additive random noise. That is because the noise produces a stronger response with high-dynamic-range windows than with high-resolution windows. At the other extreme of the range of window types are windows with high dynamic range but low resolution and sensitivity. High-dynamic-range windows are most often justified in wideband applications , where the spectrum being analyzed is expected to contain many different components of various amplitudes. In between the extremes are moderate windows, such as Hann and Hamming . They are commonly used in narrowband applications , such as the spectrum of a telephone channel. In summary, spectral analysis involves a trade-off between resolving comparable strength components with similar frequencies ( high resolution / sensitivity ) and resolving disparate strength components with dissimilar frequencies ( high dynamic range ). That trade-off occurs when the window function is chosen. [ 1 ] : p.90 When the input waveform is time-sampled, instead of continuous, the analysis is usually done by applying a window function and then a discrete Fourier transform (DFT). But the DFT provides only a sparse sampling of the actual discrete-time Fourier transform (DTFT) spectrum. Figure 2, row 3 shows a DTFT for a rectangularly-windowed sinusoid. The actual frequency of the sinusoid is indicated as "13" on the horizontal axis. Everything else is leakage, exaggerated by the use of a logarithmic presentation. The unit of frequency is "DFT bins"; that is, the integer values on the frequency axis correspond to the frequencies sampled by the DFT. [ 2 ] : p.56 eq.(16) So the figure depicts a case where the actual frequency of the sinusoid coincides with a DFT sample, and the maximum value of the spectrum is accurately measured by that sample. In row 4, it misses the maximum value by 1 ⁄ 2 bin, and the resultant measurement error is referred to as scalloping loss (inspired by the shape of the peak). For a known frequency, such as a musical note or a sinusoidal test signal, matching the frequency to a DFT bin can be prearranged by choices of a sampling rate and a window length that results in an integer number of cycles within the window. The concepts of resolution and dynamic range tend to be somewhat subjective, depending on what the user is actually trying to do. But they also tend to be highly correlated with the total leakage, which is quantifiable. It is usually expressed as an equivalent bandwidth, B. It can be thought of as redistributing the DTFT into a rectangular shape with height equal to the spectral maximum and width B. [ A ] [ 3 ] The more the leakage, the greater the bandwidth. It is sometimes called noise equivalent bandwidth or equivalent noise bandwidth , because it is proportional to the average power that will be registered by each DFT bin when the input signal contains a random noise component (or is just random noise). A graph of the power spectrum , averaged over time, typically reveals a flat noise floor , caused by this effect. 
The height of the noise floor is proportional to B. So two different window functions can produce different noise floors, as seen in figures 1 and 3. In signal processing , operations are chosen to improve some aspect of quality of a signal by exploiting the differences between the signal and the corrupting influences. When the signal is a sinusoid corrupted by additive random noise, spectral analysis distributes the signal and noise components differently, often making it easier to detect the signal's presence or measure certain characteristics, such as amplitude and frequency. Effectively, the signal-to-noise ratio (SNR) is improved by distributing the noise uniformly, while concentrating most of the sinusoid's energy around one frequency. Processing gain is a term often used to describe an SNR improvement. The processing gain of spectral analysis depends on the window function, both its noise bandwidth (B) and its potential scalloping loss. These effects partially offset, because windows with the least scalloping naturally have the most leakage. Figure 3 depicts the effects of three different window functions on the same data set, comprising two equal strength sinusoids in additive noise. The frequencies of the sinusoids are chosen such that one encounters no scalloping and the other encounters maximum scalloping. Both sinusoids suffer less SNR loss under the Hann window than under the Blackman-Harris window. In general (as mentioned earlier), this is a deterrent to using high-dynamic-range windows in low-dynamic-range applications. The formulas provided at § Examples of window functions produce discrete sequences, as if a continuous window function has been "sampled". (See an example at Kaiser window .) Window sequences for spectral analysis are either symmetric or 1-sample short of symmetric (called periodic , [ 4 ] [ 5 ] DFT-even , or DFT-symmetric [ 2 ] : p.52 ). For instance, a true symmetric sequence, with its maximum at a single center-point, is generated by the MATLAB function hann(9,'symmetric') . Deleting the last sample produces a sequence identical to hann(8,'periodic') . Similarly, the sequence hann(8,'symmetric') has two equal center-points. [ 6 ] Some functions have one or two zero-valued end-points, which are unnecessary in most applications. Deleting a zero-valued end-point has no effect on its DTFT (spectral leakage). But the function designed for N + 1 or N + 2 samples, in anticipation of deleting one or both end points, typically has a slightly narrower main lobe, slightly higher sidelobes, and a slightly smaller noise-bandwidth. [ 7 ] The predecessor of the DFT is the finite Fourier transform , and window functions were "always an odd number of points and exhibit even symmetry about the origin". [ 2 ] : p.52 In that case, the DTFT is entirely real-valued. When the same sequence is shifted into a DFT data window , [ 0 ≤ n ≤ N ] , {\displaystyle [0\leq n\leq N],} the DTFT becomes complex-valued except at frequencies spaced at regular intervals of 1 / N . {\displaystyle 1/N.} [ a ] Thus, when sampled by an N {\displaystyle N} -length DFT, the samples (called DFT coefficients ) are still real-valued. An approximation is to truncate the N +1-length sequence (effectively w [ N ] = 0 {\displaystyle w[N]=0} ), and compute an N {\displaystyle N} -length DFT. The DTFT (spectral leakage) is slightly affected, but the samples remain real-valued. 
[ 8 ] [ B ] The terms DFT-even and periodic refer to the idea that if the truncated sequence were repeated periodically, it would be even-symmetric about n = 0 , {\displaystyle n=0,} and its DTFT would be entirely real-valued. But the actual DTFT is generally complex-valued, except for the N {\displaystyle N} DFT coefficients. Spectral plots like those at § Examples of window functions , are produced by sampling the DTFT at much smaller intervals than 1 / N {\displaystyle 1/N} and displaying only the magnitude component of the complex numbers. An exact method to sample the DTFT of an N +1-length sequence at intervals of 1 / N {\displaystyle 1/N} is described at DTFT § L=N+1 . Essentially, w [ N ] {\displaystyle w[N]} is combined with w [ 0 ] {\displaystyle w[0]} (by addition), and an N {\displaystyle N} -point DFT is done on the truncated sequence. Similarly, spectral analysis would be done by combining the n = 0 {\displaystyle n=0} and n = N {\displaystyle n=N} data samples before applying the truncated symmetric window. That is not a common practice, even though truncated windows are very popular. [ 2 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ b ] The appeal of DFT-symmetric windows is explained by the popularity of the fast Fourier transform (FFT) algorithm for implementation of the DFT, because truncation of an odd-length sequence results in an even-length sequence. Their real-valued DFT coefficients are also an advantage in certain esoteric applications [ C ] where windowing is achieved by means of convolution between the DFT coefficients and an unwindowed DFT of the data. [ 14 ] [ 2 ] : p.62 [ 1 ] : p.85 In those applications, DFT-symmetric windows (even or odd length) from the Cosine-sum family are preferred, because most of their DFT coefficients are zero-valued, making the convolution very efficient. [ D ] [ 1 ] : p.85 When selecting an appropriate window function for an application, this comparison graph may be useful. The frequency axis has units of FFT "bins" when the window of length N is applied to data and a transform of length N is computed. For instance, the value at frequency ⁠ 1 / 2 ⁠ "bin" is the response that would be measured in bins k and k + 1 to a sinusoidal signal at frequency k + ⁠ 1 / 2 ⁠ . It is relative to the maximum possible response, which occurs when the signal frequency is an integer number of bins. The value at frequency ⁠ 1 / 2 ⁠ is referred to as the maximum scalloping loss of the window, which is one metric used to compare windows. The rectangular window is noticeably worse than the others in terms of that metric. Other metrics that can be seen are the width of the main lobe and the peak level of the sidelobes, which respectively determine the ability to resolve comparable strength signals and disparate strength signals. The rectangular window (for instance) is the best choice for the former and the worst choice for the latter. What cannot be seen from the graphs is that the rectangular window has the best noise bandwidth, which makes it a good candidate for detecting low-level sinusoids in an otherwise white noise environment. Interpolation techniques, such as zero-padding and frequency-shifting, are available to mitigate its potential scalloping loss.
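Two of the quantitative window metrics discussed above, the noise equivalent bandwidth and the maximum scalloping loss, can be computed directly from a window sequence. The sketch below uses the standard definitions (ENBW in bins as N·Σw²/(Σw)², scalloping loss as the response half-way between bins relative to an on-bin response); these formulas are assumptions consistent with, but not quoted from, the text.

```python
import numpy as np

def enbw_bins(w):
    """Noise equivalent bandwidth in DFT bins: N * sum(w^2) / (sum(w))^2."""
    N = len(w)
    return N * np.sum(w**2) / np.sum(w)**2

def scalloping_loss_db(w):
    """Response to a sinusoid half-way between bins, relative to an on-bin sinusoid."""
    N = len(w)
    n = np.arange(N)
    on_bin   = np.abs(np.sum(w * np.exp(-2j*np.pi*0.0*n/N)))
    half_bin = np.abs(np.sum(w * np.exp(-2j*np.pi*0.5*n/N)))
    return 20 * np.log10(half_bin / on_bin)

N = 1024
n = np.arange(N)
windows = {"rectangular": np.ones(N),
           "Hann": 0.5 - 0.5*np.cos(2*np.pi*n/N)}
for name, w in windows.items():
    print(f"{name:12s}  ENBW = {enbw_bins(w):.3f} bins, "
          f"max scalloping loss = {scalloping_loss_db(w):.2f} dB")
```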
https://en.wikipedia.org/wiki/Spectral_leakage
A spectral line is a weaker or stronger region in an otherwise uniform and continuous spectrum . It may result from emission or absorption of light in a narrow frequency range, compared with the nearby frequencies. Spectral lines are often used to identify atoms and molecules . These "fingerprints" can be compared to the previously collected ones of atoms [ 1 ] and molecules, [ 2 ] and are thus used to identify the atomic and molecular components of stars and planets , which would otherwise be impossible. Spectral lines are the result of interaction between a quantum system (usually atoms , but sometimes molecules or atomic nuclei ) and a single photon . When a photon has about the right amount of energy (which is connected to its frequency) [ 3 ] to allow a change in the energy state of the system (in the case of an atom this is usually an electron changing orbitals ), the photon is absorbed. Then the energy will be spontaneously re-emitted, either as one photon at the same frequency as the original one or in a cascade, where the sum of the energies of the photons emitted will be equal to the energy of the one absorbed (assuming the system returns to its original state). A spectral line may be observed either as an emission line or an absorption line . Which type of line is observed depends on the type of material and its temperature relative to another emission source. An absorption line is produced when photons from a hot, broad spectrum source pass through a cooler material. The intensity of light, over a narrow frequency range, is reduced due to absorption by the material and re-emission in random directions. By contrast, a bright emission line is produced when photons from a hot material are detected, perhaps in the presence of a broad spectrum from a cooler source. The intensity of light, over a narrow frequency range, is increased due to emission by the hot material. Spectral lines are highly atom-specific, and can be used to identify the chemical composition of any medium. Several elements, including helium , thallium , and caesium , were discovered by spectroscopic means. Spectral lines also depend on the temperature and density of the material, so they are widely used to determine the physical conditions of stars and other celestial bodies that cannot be analyzed by other means. Depending on the material and its physical conditions, the energy of the involved photons can vary widely, with the spectral lines observed across the electromagnetic spectrum , from radio waves to gamma rays . Strong spectral lines in the visible part of the electromagnetic spectrum often have a unique Fraunhofer line designation, such as K for a line at 393.366 nm emerging from singly-ionized calcium atom, Ca + , though some of the Fraunhofer "lines" are blends of multiple lines from several different species . In other cases, the lines are designated according to the level of ionization by adding a Roman numeral to the designation of the chemical element . Neutral atoms are denoted with the Roman numeral I, singly ionized atoms with II, and so on, so that, for example: Cu II — copper ion with +1 charge, Cu 1+ Fe III — iron ion with +2 charge, Fe 2+ More detailed designations usually include the line wavelength and may include a multiplet number (for atomic lines) or band designation (for molecular lines). Many spectral lines of atomic hydrogen also have designations within their respective series , such as the Lyman series or Balmer series . 
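As a small worked example (not part of the article), the Ca II K line quoted above at 393.366 nm corresponds to a photon energy of about 3.15 eV via E = hc/λ:

```python
# Photon energy of the Ca II K line at 393.366 nm, E = h*c / lambda
h = 6.62607015e-34      # Planck constant (J s)
c = 2.99792458e8        # speed of light (m/s)
lam = 393.366e-9        # wavelength (m)

E_joule = h * c / lam
E_eV = E_joule / 1.602176634e-19
print(f"E = {E_joule:.3e} J = {E_eV:.3f} eV")   # about 3.15 eV
```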
Originally all spectral lines were classified into series: the principal series , sharp series , and diffuse series . These series exist across atoms of all elements, and the patterns for all atoms are well-predicted by the Rydberg-Ritz formula . These series were later associated with suborbitals. There are a number of effects which control spectral line shape . A spectral line extends over a tiny spectral band with a nonzero range of frequencies, not a single frequency (i.e., a nonzero spectral width ). In addition, its center may be shifted from its nominal central wavelength. There are several reasons for this broadening and shift. These reasons may be divided into two general categories – broadening due to local conditions and broadening due to extended conditions. Broadening due to local conditions is due to effects which hold in a small region around the emitting element, usually small enough to assure local thermodynamic equilibrium . Broadening due to extended conditions may result from changes to the spectral distribution of the radiation as it traverses its path to the observer. It also may result from the combining of radiation from a number of regions which are far from each other. The lifetime of excited states results in natural broadening, also known as lifetime broadening. The uncertainty principle relates the lifetime of an excited state (due to spontaneous radiative decay or the Auger process ) with the uncertainty of its energy. Some authors use the term "radiative broadening" to refer specifically to the part of natural broadening caused by the spontaneous radiative decay. [ 4 ] A short lifetime will have a large energy uncertainty and a broad emission. This broadening effect results in an unshifted Lorentzian profile . The natural broadening can be experimentally altered only to the extent that decay rates can be artificially suppressed or enhanced. [ 5 ] The atoms in a gas which are emitting radiation will have a distribution of velocities. Each photon emitted will be "red"- or "blue"-shifted by the Doppler effect depending on the velocity of the atom relative to the observer. The higher the temperature of the gas, the wider the distribution of velocities in the gas. Since the spectral line is a combination of all of the emitted radiation, the higher the temperature of the gas, the broader the spectral line emitted from that gas. This broadening effect is described by a Gaussian profile and there is no associated shift. The presence of nearby particles will affect the radiation emitted by an individual particle. There are two limiting cases by which this occurs: Pressure broadening may also be classified by the nature of the perturbing force as follows: Inhomogeneous broadening is a general term for broadening because some emitting particles are in a different local environment from others, and therefore emit at a different frequency. This term is used especially for solids, where surfaces, grain boundaries, and stoichiometry variations can create a variety of local environments for a given atom to occupy. In liquids, the effects of inhomogeneous broadening is sometimes reduced by a process called motional narrowing . Certain types of broadening are the result of conditions over a large region of space rather than simply upon conditions that are local to the emitting particle. Opacity broadening is an example of a non-local broadening mechanism. Electromagnetic radiation emitted at a particular point in space can be reabsorbed as it travels through space. 
This absorption depends on wavelength. The line is broadened because the photons at the line center have a greater reabsorption probability than the photons at the line wings. Indeed, the reabsorption near the line center may be so great as to cause a self reversal in which the intensity at the center of the line is less than in the wings. This process is also sometimes called self-absorption . Radiation emitted by a moving source is subject to Doppler shift due to a finite line-of-sight velocity projection. If different parts of the emitting body have different velocities (along the line of sight), the resulting line will be broadened, with the line width proportional to the width of the velocity distribution. For example, radiation emitted from a distant rotating body, such as a star , will be broadened due to the line-of-sight variations in velocity on opposite sides of the star (this effect usually referred to as rotational broadening). The greater the rate of rotation, the broader the line. Another example is an imploding plasma shell in a Z-pinch . Each of these mechanisms can act in isolation or in combination with others. Assuming each effect is independent, the observed line profile is a convolution of the line profiles of each mechanism. For example, a combination of the thermal Doppler broadening and the impact pressure broadening yields a Voigt profile . However, the different line broadening mechanisms are not always independent. For example, the collisional effects and the motional Doppler shifts can act in a coherent manner, resulting under some conditions even in a collisional narrowing , known as the Dicke effect . The phrase "spectral lines", when not qualified, usually refers to lines having wavelengths in the visible band of the full electromagnetic spectrum . Many spectral lines occur at wavelengths outside this range. At shorter wavelengths, which correspond to higher energies, ultraviolet spectral lines include the Lyman series of hydrogen . At the much shorter wavelengths of X-rays , the lines are known as characteristic X-rays because they remain largely unchanged for a given chemical element, independent of their chemical environment. Longer wavelengths correspond to lower energies, where the infrared spectral lines include the Paschen series of hydrogen. At even longer wavelengths, the radio spectrum includes the 21-cm line used to detect neutral hydrogen throughout the cosmos .
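The combination of mechanisms mentioned above can be sketched numerically: a Gaussian Doppler profile, with FWHM computed from the standard relation Δλ/λ = sqrt(8 ln2 · kT/(m c²)), is convolved with a Lorentzian (pressure/natural) profile to give a Voigt-like line. The chosen line (Hα at 656.3 nm), temperature and Lorentzian width are illustrative values, not taken from the article.

```python
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant (J/K)
c   = 2.99792458e8      # speed of light (m/s)
m_H = 1.6735575e-27     # mass of a hydrogen atom (kg)

# Thermal Doppler FWHM of the H-alpha line (656.3 nm) at T = 10^4 K
T = 1.0e4
lam0 = 656.3e-9
fwhm_doppler = lam0 * np.sqrt(8 * np.log(2) * k_B * T / (m_H * c**2))
print(f"Doppler FWHM ~ {fwhm_doppler*1e9:.4f} nm")

# Convolve a Gaussian (Doppler) with a Lorentzian (pressure/natural) profile
x = np.linspace(-0.2e-9, 0.2e-9, 4001)              # wavelength offset (m)
sigma = fwhm_doppler / (2 * np.sqrt(2 * np.log(2)))
gamma = 0.01e-9                                      # illustrative Lorentzian half-width (m)
gauss   = np.exp(-x**2 / (2 * sigma**2))
lorentz = gamma**2 / (x**2 + gamma**2)
voigt = np.convolve(gauss, lorentz, mode="same")
voigt /= voigt.max()

above_half = x[voigt >= 0.5]
print(f"Voigt-like FWHM ~ {(above_half.max() - above_half.min())*1e9:.4f} nm")
```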
https://en.wikipedia.org/wiki/Spectral_line
The analysis of line intensity ratios is an important tool to obtain information about laboratory and space plasmas. In emission spectroscopy , the intensity of spectral lines can provide various kinds of information about the condition of the plasma (or gas ). It might be used to determine the temperature or density of the plasma. Since the measurement of an absolute intensity in an experiment can be challenging, the ratio of different spectral line intensities can also be used to obtain information about the plasma. The emission intensity density of an atomic transition from the upper state to the lower state is: [ 1 ] P u → l = N u ℏ ω u → l A u → l {\displaystyle P_{u\rightarrow l}=N_{u}\ \hbar \omega _{u\rightarrow l}\ A_{u\rightarrow l}} where N u {\displaystyle N_{u}} is the population density of the upper state, ℏ ω u → l {\displaystyle \hbar \omega _{u\rightarrow l}} is the energy of the emitted photon, and A u → l {\displaystyle A_{u\rightarrow l}} is the Einstein coefficient for spontaneous emission from the upper to the lower state. The population of atomic states N is generally dependent on plasma temperature and density. Generally, the hotter and denser the plasma, the more strongly the higher atomic states are populated. The observation or non-observation of spectral lines from certain ion species can, therefore, help to give a rough estimate of the plasma parameters. More accurate results can be obtained by comparing line intensities: P u 1 → l 1 P u 2 → l 2 = N u 1 ω u 1 → l 1 A u 1 → l 1 N u 2 ω u 2 → l 2 A u 2 → l 2 {\displaystyle {\frac {P_{u_{1}\rightarrow l_{1}}}{P_{u_{2}\rightarrow l_{2}}}}={\frac {N_{u_{1}}\omega _{u_{1}\rightarrow l_{1}}A_{u_{1}\rightarrow l_{1}}}{N_{u_{2}}\omega _{u_{2}\rightarrow l_{2}}A_{u_{2}\rightarrow l_{2}}}}} The transition frequencies and the Einstein coefficients of the transitions are well known and listed in various tables, such as the NIST Atomic Spectra Database. Atomic modeling [ 2 ] is often required to determine the population densities N u 1 {\displaystyle N_{u_{1}}} and N u 2 {\displaystyle N_{u_{2}}} as a function of density and temperature. While Saha's equation and Boltzmann's formula may be used to determine the temperature of a plasma in thermal equilibrium, the density dependence usually requires atomic modeling.
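A hedged sketch of the temperature dependence described above: if the upper levels are populated according to a Boltzmann distribution, N u ∝ g u exp(−E u /kT), the ratio of two line intensities of the same species becomes a function of temperature and can serve as a thermometer. The level energies, statistical weights and Einstein coefficients below are placeholders, not real atomic data.

```python
import numpy as np

k_B = 8.617333262e-5          # Boltzmann constant (eV/K)

def line_ratio(T, E1, g1, A1, w1, E2, g2, A2, w2):
    """Ratio P1/P2 of two emission lines of the same species, assuming
    Boltzmann-populated upper levels: N_u proportional to g_u * exp(-E_u / kT)."""
    N1 = g1 * np.exp(-E1 / (k_B * T))
    N2 = g2 * np.exp(-E2 / (k_B * T))
    return (N1 * w1 * A1) / (N2 * w2 * A2)

# Placeholder level data: upper-level energy (eV), statistical weight,
# Einstein A coefficient (1/s), transition frequency (arbitrary consistent units)
line1 = dict(E1=12.0, g1=4, A1=5e7, w1=1.0)
line2 = dict(E2=14.0, g2=2, A2=3e7, w2=1.2)

for T in (5e3, 1e4, 2e4, 5e4):
    print(f"T = {T:8.0f} K  ->  P1/P2 = {line_ratio(T, **line1, **line2):.3f}")
```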
https://en.wikipedia.org/wiki/Spectral_line_ratios
Spectral line shape or spectral line profile describes the form of an electromagnetic spectrum in the vicinity of a spectral line – a region of stronger or weaker intensity in the spectrum. Ideal line shapes include Lorentzian , Gaussian and Voigt functions, whose parameters are the line position, maximum height and half-width. [ 1 ] Actual line shapes are determined principally by Doppler , collision and proximity broadening. For each system the half-width of the shape function varies with temperature, pressure (or concentration ) and phase. A knowledge of shape function is needed for spectroscopic curve fitting and deconvolution . A spectral line can result from an electron transition in an atom, molecule or ion, which is associated with a specific amount of energy, E . When this energy is measured by means of some spectroscopic technique, the line is not infinitely sharp, but has a particular shape. Numerous factors can contribute to the broadening of spectral lines . Broadening can only be mitigated by the use of specialized techniques, such as Lamb dip spectroscopy. The principal sources of broadening are: Observed spectral line shape and line width are also affected by instrumental factors. The observed line shape is a convolution of the intrinsic line shape with the instrument transfer function . [ 3 ] Each of these mechanisms, and others , can act in isolation or in combination. If each effect is independent of the other, the observed line profile is a convolution of the line profiles of each mechanism. Thus, a combination of Doppler and pressure broadening effects yields a Voigt profile. A Lorentzian line shape function can be represented as where L signifies a Lorentzian function standardized, for spectroscopic purposes, to a maximum value of 1; [ note 1 ] x {\displaystyle x} is a subsidiary variable defined as where p 0 {\displaystyle p_{0}} is the position of the maximum (corresponding to the transition energy E ), p is a position, and w is the full width at half maximum (FWHM), the width of the curve when the intensity is half the maximum intensity (this occurs at the points p = p 0 ± w 2 {\displaystyle p=p_{0}\pm {\frac {w}{2}}} ). The unit of p 0 {\displaystyle p_{0}} , p {\displaystyle p} and w {\displaystyle w} is typically wavenumber or frequency . The variable x is dimensionless and is zero at p = p 0 {\displaystyle p=p_{0}} . In Nuclear Magnetic Resonance it is possible to measure spectra in a phase sensitive manner. In those cases it is important to decompose the Lorentzian into its absorptive and dispersive parts, meaning real and imaginary parts respectively. The full Lorentzian lineshape is a result from the Fourier Transform of a Free Induction Decay [ 4 ] and takes the following form: L ( ω ) = 1 1 T 1 − i ( ω − ω 0 ) = F [ e − t / T 1 e i ω 0 t ] {\displaystyle L(\omega )={\frac {1}{{\frac {1}{T_{1}}}-i(\omega -\omega _{0})}}={\mathcal {F}}[e^{-t/T_{1}}e^{i\omega _{0}t}]} This can be expanded into the real and imaginary part by quadratic expansion of the denominator: L ( ω ) = 1 / T 1 ( 1 / T 1 ) 2 + ω 2 + i ω ( 1 / T 1 ) 2 + ω 2 {\displaystyle L(\omega )={\frac {1/T_{1}}{(1/T_{1})^{2}+\omega ^{2}}}+{\frac {i\omega }{(1/T_{1})^{2}+\omega ^{2}}}} Taking only the real part of this expression yields the less general, but more common form of the Lorentz lineshape. The Gaussian line shape has the standardized form, The subsidiary variable, x , is defined in the same way as for a Lorentzian shape. 
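The two standardized shape functions just described can be written compactly in the dimensionless variable x; the forms below, L(x) = 1/(1 + x²) and G(x) = exp(−ln 2 · x²), are the standard expressions consistent with that description (maximum 1 at x = 0, value 1/2 at x = ±1) and are given here as an illustration rather than quoted from the source.

```python
import numpy as np

def x_subsidiary(p, p0, w):
    """Dimensionless position: x = (p - p0) / (w / 2), zero at the peak maximum."""
    return (p - p0) / (w / 2)

def lorentzian(x):
    """Standardized Lorentzian: maximum 1 at x = 0, value 1/2 at x = +/-1."""
    return 1.0 / (1.0 + x**2)

def gaussian(x):
    """Standardized Gaussian: maximum 1 at x = 0, value 1/2 at x = +/-1."""
    return np.exp(-np.log(2) * x**2)

x = np.array([-1.0, 0.0, 1.0])
print(lorentzian(x))   # [0.5 1.  0.5]
print(gaussian(x))     # [0.5 1.  0.5]
```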
Both this function and the Lorentzian have a maximum value of 1 at x = 0 and a value of 1/2 at x =±1. The third line shape that has a theoretical basis is the Voigt function , a convolution of a Gaussian and a Lorentzian, where σ and γ are half-widths. The computation of a Voigt function and its derivatives are more complicated than a Gaussian or Lorentzian. [ 5 ] A spectroscopic peak may be fitted to multiples of the above functions or to sums or products of functions with variable parameters. [ 6 ] The above functions are all symmetrical about the position of their maximum. [ note 2 ] Asymmetric functions have also been used. [ 7 ] [ note 3 ] For atoms in the gas phase the principal effects are Doppler and pressure broadening. Lines are relatively sharp on the scale of measurement so that applications such as atomic absorption spectroscopy (AAS) and Inductively coupled plasma atomic emission spectroscopy (ICP) are used for elemental analysis . Atoms also have distinct x-ray spectra that are attributable to the excitation of inner shell electrons to excited states. The lines are relatively sharp because the inner electron energies are not very sensitive to the atom's environment. This is applied to X-ray fluorescence spectroscopy of solid materials. For molecules in the gas phase, the principal effects are Doppler and pressure broadening. This applies to rotational spectroscopy , [ 8 ] rotational-vibrational spectroscopy and vibronic spectroscopy . For molecules in the liquid state or in solution, collision and proximity broadening predominate and lines are much broader than lines from the same molecule in the gas phase. [ 9 ] [ 10 ] Line maxima may also be shifted. Because there are many sources of broadening, the lines have a stable distribution , tending towards a Gaussian shape. The shape of lines in a nuclear magnetic resonance (NMR) spectrum is determined by the process of free induction decay . This decay is approximately exponential , so the line shape is Lorentzian. [ 11 ] This follows because the Fourier transform of an exponential function in the time domain is a Lorentzian in the frequency domain. In NMR spectroscopy the lifetime of the excited states is relatively long, so the lines are very sharp, producing high-resolution spectra. Gadolinium-based pharmaceuticals alter the relaxation time, and hence spectral line shape, of those protons that are in water molecules that are transiently attached to the paramagnetic atoms, resulting contrast enhancement of the MRI image. [ 12 ] This allows better visualisation of some brain tumours. [ 12 ] Some spectroscopic curves can be approximated by the sum of a set of component curves. For example, when Beer's law applies, the total absorbance, A , at wavelength λ, is a linear combination of the absorbance due to the individual components, k , at concentration , c k . ε is an extinction coefficient . In such cases the curve of experimental data may be decomposed into sum of component curves in a process of curve fitting . This process is also widely called deconvolution. Curve deconvolution and curve fitting are completely different mathematical procedures. [ 7 ] [ 13 ] Curve fitting can be used in two distinct ways. Spectroscopic curves can be subjected to numerical differentiation . [ 19 ] When the data points in a curve are equidistant from each other the Savitzky–Golay convolution method may be used. [ 20 ] The best convolution function to use depends primarily on the signal-to-noise ratio of the data. 
[ 21 ] The first derivative (slope, d y d x {\displaystyle {\frac {dy}{dx}}} ) of all single line shapes is zero at the position of maximum height. This is also true of the third derivative; odd derivatives can be used to locate the position of a peak maximum. [ 22 ] The second derivatives, d 2 y d x 2 {\displaystyle {\frac {d^{2}y}{dx^{2}}}} , of both Gaussian and Lorentzian functions have a reduced half-width. This can be used to apparently improve spectral resolution . The diagram shows the second derivative of the black curve in the diagram above it. Whereas the smaller component produces a shoulder in the spectrum, it appears as a separate peak in the 2nd. derivative. [ note 4 ] Fourth derivatives, d 4 y d x 4 {\displaystyle {\frac {d^{4}y}{dx^{4}}}} , can also be used, when the signal-to-noise ratio in the spectrum is sufficiently high. [ 23 ] Deconvolution can be used to apparently improve spectral resolution . In the case of NMR spectra, the process is relatively straight forward, because the line shapes are Lorentzian, and the convolution of a Lorentzian with another Lorentzian is also Lorentzian. The Fourier transform of a Lorentzian is an exponential. In the co-domain (time) of the spectroscopic domain (frequency) convolution becomes multiplication. Therefore, a convolution of the sum of two Lorentzians becomes a multiplication of two exponentials in the co-domain. Since, in FT-NMR, the measurements are made in the time domain division of the data by an exponential is equivalent to deconvolution in the frequency domain. A suitable choice of exponential results in a reduction of the half-width of a line in the frequency domain. This technique has been rendered all but obsolete by advances in NMR technology. [ 24 ] A similar process has been applied for resolution enhancement of other types of spectra, with the disadvantage that the spectrum must be first Fourier transformed and then transformed back after the deconvoluting function has been applied in the spectrum's co-domain. [ 13 ]
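The resolution-enhancement idea described above can be illustrated with a Savitzky–Golay second derivative (here via SciPy, an implementation choice not taken from the article): a weak peak that appears only as a shoulder in the spectrum shows up as a distinct negative minimum in the second derivative. Peak positions, widths and filter settings are made up for the example.

```python
import numpy as np
from scipy.signal import savgol_filter

# Two overlapping Gaussian peaks: the smaller one only appears as a shoulder
x = np.linspace(0, 10, 1001)
spectrum = np.exp(-(x - 4.8)**2 / 0.5) + 0.4 * np.exp(-(x - 5.8)**2 / 0.3)

# Savitzky-Golay smoothing + second derivative (equidistant points assumed)
d2 = savgol_filter(spectrum, window_length=31, polyorder=3, deriv=2,
                   delta=x[1] - x[0])

# Pronounced negative minima of the 2nd derivative mark the underlying peak positions
interior_min = (d2[1:-1] < d2[:-2]) & (d2[1:-1] < d2[2:])
mask = np.concatenate(([False], interior_min, [False])) & (d2 < -0.1)
print(x[mask])   # close to 4.8 and 5.8: the shoulder becomes a separate feature
```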
https://en.wikipedia.org/wiki/Spectral_line_shape
In signal processing , the power spectrum S x x ( f ) {\displaystyle S_{xx}(f)} of a continuous time signal x ( t ) {\displaystyle x(t)} describes the distribution of power into frequency components f {\displaystyle f} composing that signal. [ 1 ] According to Fourier analysis , any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including noise ) as analyzed in terms of its frequency content, is called its spectrum . When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density . More commonly used is the power spectral density (PSD, or simply power spectrum ), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The PSD then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating x 2 ( t ) {\displaystyle x^{2}(t)} over the time domain, as dictated by Parseval's theorem . [ 1 ] The spectrum of a physical process x ( t ) {\displaystyle x(t)} often contains essential information about the nature of x {\displaystyle x} . For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field E ( t ) {\displaystyle E(t)} as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform , and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph , or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency. However this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes , as well as in many other branches of physics and engineering . Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency . [ 1 ] In physics , the signal might be a wave, such as an electromagnetic wave , an acoustic wave , or the vibration of a mechanism. The power spectral density (PSD) of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in SI units of watts per hertz (abbreviated as W/Hz). [ 2 ] When a signal is defined in terms only of a voltage , for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance . 
So one might use units of V 2 Hz −1 for the PSD. Energy spectral density (ESD) would have units of V 2 s Hz −1 , since energy has units of power multiplied by time (e.g., watt-hour ). [ 3 ] In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m 2 /Hz. In the analysis of random vibrations , units of g 2 Hz −1 are frequently used for the PSD of acceleration , where g denotes the g-force . [ 4 ] Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x ( t ) will remain unspecified, but the independent variable will be assumed to be that of time. A PSD can be either a one-sided function of only positive frequencies or a two-sided function of both positive and negative frequencies but with only half the amplitude. Noise PSDs are generally one-sided in engineering and two-sided in physics. [ 5 ] In signal processing , the energy of a signal x ( t ) {\displaystyle x(t)} is given by E ≜ ∫ − ∞ ∞ | x ( t ) | 2 d t . {\displaystyle E\triangleq \int _{-\infty }^{\infty }\left|x(t)\right|^{2}\ dt.} Assuming the total energy is finite (i.e. x ( t ) {\displaystyle x(t)} is a square-integrable function ) allows applying Parseval's theorem (or Plancherel's theorem ). [ 6 ] That is, ∫ − ∞ ∞ | x ( t ) | 2 d t = ∫ − ∞ ∞ | x ^ ( f ) | 2 d f , {\displaystyle \int _{-\infty }^{\infty }|x(t)|^{2}\,dt=\int _{-\infty }^{\infty }\left|{\hat {x}}(f)\right|^{2}\,df,} where x ^ ( f ) = ∫ − ∞ ∞ e − i 2 π f t x ( t ) d t , {\displaystyle {\hat {x}}(f)=\int _{-\infty }^{\infty }e^{-i2\pi ft}x(t)\ dt,} is the Fourier transform of x ( t ) {\displaystyle x(t)} at frequency f {\displaystyle f} (in Hz ). [ 7 ] The theorem also holds true in the discrete-time cases. Since the integral on the left-hand side is the energy of the signal, the value of | x ^ ( f ) | 2 d f {\displaystyle \left|{\hat {x}}(f)\right|^{2}df} can be interpreted as a density function multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal at frequency f {\displaystyle f} in the frequency interval f + d f {\displaystyle f+df} . Therefore, the energy spectral density of x ( t ) {\displaystyle x(t)} is defined as: [ 8 ] The function S ¯ x x ( f ) {\displaystyle {\bar {S}}_{xx}(f)} and the autocorrelation of x ( t ) {\displaystyle x(t)} form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also Periodogram ). As a physical example of how one might measure the energy spectral density of a signal, suppose V ( t ) {\displaystyle V(t)} represents the potential (in volts ) of an electrical pulse propagating along a transmission line of impedance Z {\displaystyle Z} , and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law , the power delivered to the resistor at time t {\displaystyle t} is equal to V ( t ) 2 / Z {\displaystyle V(t)^{2}/Z} , so the total energy is found by integrating V ( t ) 2 / Z {\displaystyle V(t)^{2}/Z} with respect to time over the duration of the pulse. 
To find the value of the energy spectral density S ¯ x x ( f ) {\displaystyle {\bar {S}}_{xx}(f)} at frequency f {\displaystyle f} , one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies ( Δ f {\displaystyle \Delta f} , say) near the frequency of interest and then measure the total energy E ( f ) {\displaystyle E(f)} dissipated across the resistor. The value of the energy spectral density at f {\displaystyle f} is then estimated to be E ( f ) / Δ f {\displaystyle E(f)/\Delta f} . In this example, since the power V ( t ) 2 / Z {\displaystyle V(t)^{2}/Z} has units of V 2 Ω −1 , the energy E ( f ) {\displaystyle E(f)} has units of V 2 s Ω −1 = J , and hence the estimate E ( f ) / Δ f {\displaystyle E(f)/\Delta f} of the energy spectral density has units of J Hz −1 , as required. In many situations, it is common to forget the step of dividing by Z {\displaystyle Z} so that the energy spectral density instead has units of V 2 Hz −1 . This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values x n {\displaystyle x_{n}} such as a signal sampled at discrete times t n = t 0 + ( n Δ t ) {\displaystyle t_{n}=t_{0}+(n\,\Delta t)} : S ¯ x x ( f ) = lim N → ∞ ( Δ t ) 2 | ∑ n = − N N x n e − i 2 π f n Δ t | 2 ⏟ | x ^ d ( f ) | 2 , {\displaystyle {\bar {S}}_{xx}(f)=\lim _{N\to \infty }(\Delta t)^{2}\underbrace {\left|\sum _{n=-N}^{N}x_{n}e^{-i2\pi fn\,\Delta t}\right|^{2}} _{\left|{\hat {x}}_{d}(f)\right|^{2}},} where x ^ d ( f ) {\displaystyle {\hat {x}}_{d}(f)} is the discrete-time Fourier transform of x n . {\displaystyle x_{n}.} The sampling interval Δ t {\displaystyle \Delta t} is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit Δ t → 0. {\displaystyle \Delta t\to 0.} But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (also see normalized frequency ) The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the power spectral density (PSD) which exists for stationary processes ; this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the variance of a function over time x ( t ) {\displaystyle x(t)} (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when there is no physical power involved. If one were to create a physical voltage source which followed x ( t ) {\displaystyle x(t)} and applied it to the terminals of a one ohm resistor , then indeed the instantaneous power dissipated in that resistor would be given by x 2 ( t ) {\displaystyle x^{2}(t)} watts . 
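The energy spectral density defined above is straightforward to check numerically. The following is a minimal sketch, assuming NumPy; the Gaussian test pulse, sampling rate and record length are illustrative choices. It approximates the Fourier transform by a scaled FFT and verifies Parseval's theorem for the resulting energy spectral density.

```python
import numpy as np

fs = 1000.0                        # sampling rate in Hz (assumed)
dt = 1.0 / fs
t = np.arange(-1.0, 1.0, dt)       # finite observation window
x = np.exp(-t**2 / (2 * 0.05**2))  # Gaussian pulse: a square-integrable (finite-energy) signal

# Discrete approximation of x_hat(f) = integral of x(t) e^{-i 2 pi f t} dt
X = np.fft.fft(x) * dt
f = np.fft.fftfreq(t.size, d=dt)

esd = np.abs(X) ** 2               # energy spectral density |x_hat(f)|^2

# Parseval / Plancherel check: energy in the time domain equals energy in the frequency domain
energy_time = np.sum(np.abs(x) ** 2) * dt
energy_freq = np.sum(esd) * (f[1] - f[0])
print(energy_time, energy_freq)    # the two values agree to numerical precision
```

The factor of dt applied to the FFT plays the role of the sampling interval Δt discussed above, which keeps the physical units consistent.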
The average power P {\displaystyle P} of a signal x ( t ) {\displaystyle x(t)} over all time is therefore given by the following time average, where the period T {\displaystyle T} is centered about some arbitrary time t = t 0 {\displaystyle t=t_{0}} : P = lim T → ∞ 1 T ∫ t 0 − T / 2 t 0 + T / 2 | x ( t ) | 2 d t {\displaystyle P=\lim _{T\to \infty }{\frac {1}{T}}\int _{t_{0}-T/2}^{t_{0}+T/2}\left|x(t)\right|^{2}\,dt} Whenever it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral, the average power can also be written as P = lim T → ∞ 1 T ∫ − ∞ ∞ | x T ( t ) | 2 d t , {\displaystyle P=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }\left|x_{T}(t)\right|^{2}\,dt,} where x T ( t ) = x ( t ) w T ( t ) {\displaystyle x_{T}(t)=x(t)w_{T}(t)} and w T ( t ) {\displaystyle w_{T}(t)} is unity within the arbitrary period and zero elsewhere. When P {\displaystyle P} is non-zero, the integral must grow to infinity at least as fast as T {\displaystyle T} does. That is the reason why we cannot use the energy of the signal, which is that diverging integral. In analyzing the frequency content of the signal x ( t ) {\displaystyle x(t)} , one might like to compute the ordinary Fourier transform x ^ ( f ) {\displaystyle {\hat {x}}(f)} ; however, for many signals of interest the ordinary Fourier transform does not formally exist. [ nb 1 ] However, under suitable conditions, certain generalizations of the Fourier transform (e.g. the Fourier-Stieltjes transform ) still adhere to Parseval's theorem . As such, P = lim T → ∞ 1 T ∫ − ∞ ∞ | x ^ T ( f ) | 2 d f , {\displaystyle P=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }|{\hat {x}}_{T}(f)|^{2}\,df,} where the integrand defines the power spectral density : [ 9 ] [ 10 ] The convolution theorem then allows regarding | x ^ T ( f ) | 2 {\displaystyle |{\hat {x}}_{T}(f)|^{2}} as the Fourier transform of the time convolution of x T ∗ ( − t ) {\displaystyle x_{T}^{*}(-t)} and x T ( t ) {\displaystyle x_{T}(t)} , where * represents the complex conjugate. In order to deduce Eq.2, we will find an expression for [ x ^ T ( f ) ] ∗ {\displaystyle [{\hat {x}}_{T}(f)]^{*}} that will be useful for the purpose. In fact, we will demonstrate that [ x ^ T ( f ) ] ∗ = F { x T ∗ ( − t ) } {\displaystyle [{\hat {x}}_{T}(f)]^{*}={\mathcal {F}}\left\{x_{T}^{*}(-t)\right\}} . Let's start by noting that F { x T ∗ ( − t ) } = ∫ − ∞ ∞ x T ∗ ( − t ) e − i 2 π f t d t {\displaystyle {\begin{aligned}{\mathcal {F}}\left\{x_{T}^{*}(-t)\right\}&=\int _{-\infty }^{\infty }x_{T}^{*}(-t)e^{-i2\pi ft}dt\end{aligned}}} and let z = − t {\displaystyle z=-t} , so that z → − ∞ {\displaystyle z\rightarrow -\infty } when t → ∞ {\displaystyle t\rightarrow \infty } and vice versa. So ∫ − ∞ ∞ x T ∗ ( − t ) e − i 2 π f t d t = ∫ ∞ − ∞ x T ∗ ( z ) e i 2 π f z ( − d z ) = ∫ − ∞ ∞ x T ∗ ( z ) e i 2 π f z d z = ∫ − ∞ ∞ x T ∗ ( t ) e i 2 π f t d t {\displaystyle {\begin{aligned}\int _{-\infty }^{\infty }x_{T}^{*}(-t)e^{-i2\pi ft}dt&=\int _{\infty }^{-\infty }x_{T}^{*}(z)e^{i2\pi fz}\left(-dz\right)\\&=\int _{-\infty }^{\infty }x_{T}^{*}(z)e^{i2\pi fz}dz\\&=\int _{-\infty }^{\infty }x_{T}^{*}(t)e^{i2\pi ft}dt\end{aligned}}} Where, in the last line, we have made use of the fact that z {\displaystyle z} and t {\displaystyle t} are dummy variables. 
So, we have F { x T ∗ ( − t ) } = ∫ − ∞ ∞ x T ∗ ( − t ) e − i 2 π f t d t = ∫ − ∞ ∞ x T ∗ ( t ) e i 2 π f t d t = ∫ − ∞ ∞ x T ∗ ( t ) [ e − i 2 π f t ] ∗ d t = [ ∫ − ∞ ∞ x T ( t ) e − i 2 π f t d t ] ∗ = [ F { x T ( t ) } ] ∗ = [ x ^ T ( f ) ] ∗ {\displaystyle {\begin{aligned}{\mathcal {F}}\left\{x_{T}^{*}(-t)\right\}&=\int _{-\infty }^{\infty }x_{T}^{*}(-t)e^{-i2\pi ft}dt\\&=\int _{-\infty }^{\infty }x_{T}^{*}(t)e^{i2\pi ft}dt\\&=\int _{-\infty }^{\infty }x_{T}^{*}(t)[e^{-i2\pi ft}]^{*}dt\\&=\left[\int _{-\infty }^{\infty }x_{T}(t)e^{-i2\pi ft}dt\right]^{*}\\&=\left[{\mathcal {F}}\left\{x_{T}(t)\right\}\right]^{*}\\&=\left[{\hat {x}}_{T}(f)\right]^{*}\end{aligned}}} q.e.d. Now, let's demonstrate eq.2 by using the demonstrated identity. In addition, we will make the subtitution u ( t ) = x T ∗ ( − t ) {\displaystyle u(t)=x_{T}^{*}(-t)} . In this way, we have: | x ^ T ( f ) | 2 = [ x ^ T ( f ) ] ∗ ⋅ x ^ T ( f ) = F { x T ∗ ( − t ) } ⋅ F { x T ( t ) } = F { u ( t ) } ⋅ F { x T ( t ) } = F { u ( t ) ∗ x T ( t ) } = ∫ − ∞ ∞ [ ∫ − ∞ ∞ u ( τ − t ) x T ( t ) d t ] e − i 2 π f τ d τ = ∫ − ∞ ∞ [ ∫ − ∞ ∞ x T ∗ ( t − τ ) x T ( t ) d t ] e − i 2 π f τ d τ , {\displaystyle {\begin{aligned}\left|{\hat {x}}_{T}(f)\right|^{2}&=[{\hat {x}}_{T}(f)]^{*}\cdot {\hat {x}}_{T}(f)\\&={\mathcal {F}}\left\{x_{T}^{*}(-t)\right\}\cdot {\mathcal {F}}\left\{x_{T}(t)\right\}\\&={\mathcal {F}}\left\{u(t)\right\}\cdot {\mathcal {F}}\left\{x_{T}(t)\right\}\\&={\mathcal {F}}\left\{u(t)\mathbin {\mathbf {*} } x_{T}(t)\right\}\\&=\int _{-\infty }^{\infty }\left[\int _{-\infty }^{\infty }u(\tau -t)x_{T}(t)dt\right]e^{-i2\pi f\tau }d\tau \\&=\int _{-\infty }^{\infty }\left[\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }\ d\tau ,\end{aligned}}} where the convolution theorem has been used when passing from the 3rd to the 4th line. Now, if we divide the time convolution above by the period T {\displaystyle T} and take the limit as T → ∞ {\displaystyle T\rightarrow \infty } , it becomes the autocorrelation function of the non-windowed signal x ( t ) {\displaystyle x(t)} , which is denoted as R x x ( τ ) {\displaystyle R_{xx}(\tau )} , provided that x ( t ) {\displaystyle x(t)} is ergodic , which is true in most, but not all, practical cases. [ nb 2 ] lim T → ∞ 1 T | x ^ T ( f ) | 2 = ∫ − ∞ ∞ [ lim T → ∞ 1 T ∫ − ∞ ∞ x T ∗ ( t − τ ) x T ( t ) d t ] e − i 2 π f τ d τ = ∫ − ∞ ∞ R x x ( τ ) e − i 2 π f τ d τ {\displaystyle \lim _{T\to \infty }{\frac {1}{T}}\left|{\hat {x}}_{T}(f)\right|^{2}=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }\ d\tau =\int _{-\infty }^{\infty }R_{xx}(\tau )e^{-i2\pi f\tau }d\tau } Assuming the ergodicity of x ( t ) {\displaystyle x(t)} , the power spectral density can be found once more as the Fourier transform of the autocorrelation function ( Wiener–Khinchin theorem ). [ 11 ] Many authors use this equality to actually define the power spectral density. [ 12 ] The power of the signal in a given frequency band [ f 1 , f 2 ] {\displaystyle [f_{1},f_{2}]} , where 0 < f 1 < f 2 {\displaystyle 0<f_{1}<f_{2}} , can be calculated by integrating over frequency. 
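The Wiener–Khinchin relation derived above has an exact discrete counterpart that can be checked directly. The sketch below, assuming NumPy and using an arbitrary white-noise record, compares the periodogram with the discrete Fourier transform of the sample autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 1024, 1e-3
T = N * dt
x = rng.standard_normal(N)                     # one realization of a noise signal

# Periodogram-style PSD estimate:  (dt^2 / T) |sum_n x_n e^{-i 2 pi f n dt}|^2
X = np.fft.fft(x)
periodogram = (dt ** 2 / T) * np.abs(X) ** 2

# Unnormalized sample autocorrelation at non-negative lags m = 0 .. N-1
c = np.correlate(x, x, mode="full")[N - 1:]

# Discrete counterpart of the Wiener-Khinchin relation: since the autocorrelation
# is symmetric in the lag for a real signal, |X_k|^2 = 2 Re(FFT(c))_k - c_0
S_from_acf = (dt ** 2 / T) * (2 * np.fft.fft(c).real - c[0])

print(np.allclose(periodogram, S_from_acf))    # True
```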
Since S x x ( − f ) = S x x ( f ) {\displaystyle S_{xx}(-f)=S_{xx}(f)} , an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used): P bandlimited = 2 ∫ f 1 f 2 S x x ( f ) d f {\displaystyle P_{\textsf {bandlimited}}=2\int _{f_{1}}^{f_{2}}S_{xx}(f)\,df} More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval T {\displaystyle T} is finite rather than approaching infinity. This results in decreased spectral coverage and resolution since frequencies of less than 1 / T {\displaystyle 1/T} are not sampled, and results at frequencies which are not an integer multiple of 1 / T {\displaystyle 1/T} are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of x ( t ) {\displaystyle x(t)} evaluated over the specified time window. Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete time variables x n {\displaystyle x_{n}} . As before, we can consider a window of − N ≤ n ≤ N {\displaystyle -N\leq n\leq N} with the signal sampled at discrete times t n = t 0 + ( n Δ t ) {\displaystyle t_{n}=t_{0}+(n\,\Delta t)} for a total measurement period T = ( 2 N + 1 ) Δ t {\displaystyle T=(2N+1)\,\Delta t} . S x x ( f ) = lim N → ∞ ( Δ t ) 2 T | ∑ n = − N N x n e − i 2 π f n Δ t | 2 {\displaystyle S_{xx}(f)=\lim _{N\to \infty }{\frac {(\Delta t)^{2}}{T}}\left|\sum _{n=-N}^{N}x_{n}e^{-i2\pi fn\,\Delta t}\right|^{2}} Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when N {\displaystyle N} (and thus T {\displaystyle T} ) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram . This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval T {\displaystyle T} approach infinity. [ 13 ] If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation . Some properties of the PSD include: [ 14 ] Given two signals x ( t ) {\displaystyle x(t)} and y ( t ) {\displaystyle y(t)} , each of which possess power spectral densities S x x ( f ) {\displaystyle S_{xx}(f)} and S y y ( f ) {\displaystyle S_{yy}(f)} , it is possible to define a cross power spectral density ( CPSD ) or cross spectral density ( CSD ). To begin, let us consider the average power of such a combined signal. 
P = lim T → ∞ 1 T ∫ − ∞ ∞ [ x T ( t ) + y T ( t ) ] ∗ [ x T ( t ) + y T ( t ) ] d t = lim T → ∞ 1 T ∫ − ∞ ∞ | x T ( t ) | 2 + x T ∗ ( t ) y T ( t ) + y T ∗ ( t ) x T ( t ) + | y T ( t ) | 2 d t {\displaystyle {\begin{aligned}P&=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }\left[x_{T}(t)+y_{T}(t)\right]^{*}\left[x_{T}(t)+y_{T}(t)\right]dt\\&=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }|x_{T}(t)|^{2}+x_{T}^{*}(t)y_{T}(t)+y_{T}^{*}(t)x_{T}(t)+|y_{T}(t)|^{2}dt\\\end{aligned}}} Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain S x y ( f ) = lim T → ∞ 1 T [ x ^ T ∗ ( f ) y ^ T ( f ) ] S y x ( f ) = lim T → ∞ 1 T [ y ^ T ∗ ( f ) x ^ T ( f ) ] {\displaystyle {\begin{aligned}S_{xy}(f)&=\lim _{T\to \infty }{\frac {1}{T}}\left[{\hat {x}}_{T}^{*}(f){\hat {y}}_{T}(f)\right]&S_{yx}(f)&=\lim _{T\to \infty }{\frac {1}{T}}\left[{\hat {y}}_{T}^{*}(f){\hat {x}}_{T}(f)\right]\end{aligned}}} where, again, the contributions of S x x ( f ) {\displaystyle S_{xx}(f)} and S y y ( f ) {\displaystyle S_{yy}(f)} are already understood. Note that S x y ∗ ( f ) = S y x ( f ) {\displaystyle S_{xy}^{*}(f)=S_{yx}(f)} , so the full contribution to the cross power is, generally, from twice the real part of either individual CPSD . Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit T → ∞ {\displaystyle T\to \infty } becomes the Fourier transform of a cross-correlation function. [ 16 ] S x y ( f ) = ∫ − ∞ ∞ [ lim T → ∞ 1 T ∫ − ∞ ∞ x T ∗ ( t − τ ) y T ( t ) d t ] e − i 2 π f τ d τ = ∫ − ∞ ∞ R x y ( τ ) e − i 2 π f τ d τ S y x ( f ) = ∫ − ∞ ∞ [ lim T → ∞ 1 T ∫ − ∞ ∞ y T ∗ ( t − τ ) x T ( t ) d t ] e − i 2 π f τ d τ = ∫ − ∞ ∞ R y x ( τ ) e − i 2 π f τ d τ , {\displaystyle {\begin{aligned}S_{xy}(f)&=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )y_{T}(t)dt\right]e^{-i2\pi f\tau }d\tau =\int _{-\infty }^{\infty }R_{xy}(\tau )e^{-i2\pi f\tau }d\tau \\S_{yx}(f)&=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }y_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }d\tau =\int _{-\infty }^{\infty }R_{yx}(\tau )e^{-i2\pi f\tau }d\tau ,\end{aligned}}} where R x y ( τ ) {\displaystyle R_{xy}(\tau )} is the cross-correlation of x ( t ) {\displaystyle x(t)} with y ( t ) {\displaystyle y(t)} and R y x ( τ ) {\displaystyle R_{yx}(\tau )} is the cross-correlation of y ( t ) {\displaystyle y(t)} with x ( t ) {\displaystyle x(t)} . In light of this, the PSD is seen to be a special case of the CSD for x ( t ) = y ( t ) {\displaystyle x(t)=y(t)} . If x ( t ) {\displaystyle x(t)} and y ( t ) {\displaystyle y(t)} are real signals (e.g. voltage or current), their Fourier transforms x ^ ( f ) {\displaystyle {\hat {x}}(f)} and y ^ ( f ) {\displaystyle {\hat {y}}(f)} are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSD s scaled by a factor of two. 
CPSD Full = 2 S x y ( f ) = 2 S y x ( f ) {\displaystyle \operatorname {CPSD} _{\text{Full}}=2S_{xy}(f)=2S_{yx}(f)} For discrete signals x n and y n , the relationship between the cross-spectral density and the cross-covariance is S x y ( f ) = ∑ n = − ∞ ∞ R x y ( τ n ) e − i 2 π f τ n Δ τ {\displaystyle S_{xy}(f)=\sum _{n=-\infty }^{\infty }R_{xy}(\tau _{n})e^{-i2\pi f\tau _{n}}\,\Delta \tau } The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model . A common non-parametric technique is the periodogram . The spectral density is usually estimated using Fourier transform methods (such as the Welch method ), but other techniques such as the maximum entropy method can also be used. Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible light (perceived as color ), musical notes (perceived as pitch ), radio/TV (specified by their frequency, or sometimes wavelength ) and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. And additionally there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Or a continuous spectrum may show narrow frequency intervals which are strongly enhanced corresponding to resonances, or frequency intervals containing almost zero power as would be produced by a notch filter . The concept and use of the power spectrum of a signal is fundamental in electrical engineering , especially in electronic communication systems , including radio communications , radars , and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals. The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density. Primordial fluctuations , density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.
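As a concrete illustration of the estimation techniques mentioned above, the following sketch contrasts a single periodogram with Welch's method of averaged short-term periodograms, and integrates the one-sided PSD over a band to recover the band-limited power. It assumes SciPy's signal module is available; the test signal, segment length and band edges are illustrative choices.

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 50 * t) + rng.standard_normal(t.size)   # 50 Hz tone in white noise

# A single periodogram: an unbiased but very "noisy" PSD estimate
f_p, Pxx_per = signal.periodogram(x, fs=fs)

# Welch's method: average many short-term periodograms to reduce the variance
f_w, Pxx_welch = signal.welch(x, fs=fs, nperseg=1024)

# Band-limited power from the one-sided PSD (SciPy's one-sided convention
# already includes the factor of 2 discussed above)
band = (f_w >= 45.0) & (f_w <= 55.0)
P_band = np.sum(Pxx_welch[band]) * (f_w[1] - f_w[0])
print(P_band)      # approximately 0.5, the power of the unit-amplitude sine, plus a little noise
```

Parametric alternatives, such as fitting an autoregressive model, proceed analogously, with the PSD then evaluated from the fitted model coefficients rather than from averaged periodograms.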
https://en.wikipedia.org/wiki/Spectral_phase
In radiometry , photometry , and color science , a spectral power distribution ( SPD ) measurement describes the power per unit area per unit wavelength of an illumination ( radiant exitance ). More generally, the term spectral power distribution can refer to the concentration, as a function of wavelength, of any radiometric or photometric quantity (e.g. radiant energy , radiant flux , radiant intensity , radiance , irradiance , radiant exitance , radiosity , luminance , luminous flux , luminous intensity , illuminance , luminous emittance ). [ 1 ] [ 2 ] [ 3 ] [ 4 ] Knowledge of the SPD is crucial for optical-sensor system applications. Optical properties such as transmittance , reflectivity , and absorbance as well as the sensor response are typically dependent on the incident wavelength. [ 3 ] Mathematically, for the spectral power distribution of a radiant exitance or irradiance one may write M(λ) = ∂²Φ/(∂A ∂λ) ≈ Φ/(A Δλ), where M ( λ ) is the spectral irradiance (or exitance) of the light ( SI units: W/m³ = kg·m⁻¹·s⁻³); Φ is the radiant flux of the source (SI unit: watt, W); A is the area over which the radiant flux is integrated (SI unit: square meter, m²); and λ is the wavelength (SI unit: meter, m). (Note that it is often more convenient to express the wavelength of light in terms of nanometers ; spectral exitance is then expressed in units of W·m⁻²·nm⁻¹.) The approximation is valid when the area and the wavelength interval are small. [ 5 ] The ratio of the spectral concentration (irradiance or exitance) at a given wavelength to the concentration at a reference wavelength provides the relative SPD. [ 4 ] This can be written as M_rel(λ) = M(λ)/M(λ₀), where λ₀ is the reference wavelength. Because the luminance of lighting fixtures and other light sources is handled separately, a spectral power distribution may be normalized in some manner, often to unity at 555 or 560 nanometers, coinciding with the peak of the eye's luminosity function . [ 2 ] [ 6 ] The SPD can be used to determine the response of a sensor at a specified wavelength, by comparing the output power of the sensor to the input power as a function of wavelength; this ratio defines the sensor's spectral responsivity. [ 7 ] Knowing the responsivity is beneficial for selecting the illumination, the interacting materials, and the optical components so as to optimize the performance of a system's design. The spectral power distribution over the visible spectrum from a source can have varying concentrations of relative SPDs. The interactions between light and matter affect the absorption and reflectance properties of materials and subsequently produce a color that varies with the source illumination. [ 8 ] For example, the relative spectral power distribution of the sun produces a white appearance if observed directly, but when the sunlight illuminates the Earth's atmosphere the sky appears blue under normal daylight conditions. This stems from the optical phenomenon called Rayleigh scattering , which preferentially scatters the shorter wavelengths and hence gives the blue appearance. [ 3 ] The human visual response relies on trichromacy to process color appearance. While the human visual response integrates over all wavelengths, the relative spectral power distribution provides color-appearance modeling information, as the most concentrated wavelength band(s) become the primary contributors to the perceived color.
[ 8 ] This is useful in photometry and colorimetry , since the perceived color changes with the source illumination and its spectral distribution; it also underlies metamerism , in which an object's color appearance changes between sources. [ 8 ] The spectral makeup of the source is likewise tied to its color temperature , with sources of different color temperature producing differences in color appearance. [ 4 ]
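As a simple numerical illustration of a relative spectral power distribution, the sketch below (assuming NumPy) evaluates a Planck blackbody spectrum as the source and normalizes it to unity at a 560 nm reference wavelength, as discussed above. The source temperature and wavelength grid are illustrative choices.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23      # SI constants

def planck_spectral_exitance(lam_m, T):
    """Blackbody spectral exitance M(lambda) in W m^-2 m^-1."""
    return (2 * np.pi * h * c**2 / lam_m**5) / (np.exp(h * c / (lam_m * kB * T)) - 1)

lam_nm = np.arange(380, 781, 5)               # visible range, in nm
M = planck_spectral_exitance(lam_nm * 1e-9, T=6500.0)

# Relative SPD: ratio to the value at the 560 nm reference wavelength
M_ref = planck_spectral_exitance(560e-9, T=6500.0)
relative_spd = M / M_ref
print(relative_spd[lam_nm == 560])            # exactly 1 at the reference wavelength
```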
https://en.wikipedia.org/wiki/Spectral_power_distribution
Spectral purity is a term used in both optics and signal processing . In optics, it refers to the quantification of the monochromaticity of a given light sample. [ 1 ] This is a particularly important parameter in areas like laser operation and time measurement. Spectral purity is easier to achieve in devices that generate visible and ultraviolet light, since higher frequency light results in greater spectral purity. In signal processing, spectral purity is defined as the inherent stability of a signal, or how clean a spectrum is compared to what it should be.
https://en.wikipedia.org/wiki/Spectral_purity
Spectral regrowth refers to the intermodulation products that are generated when a digital transmitter is added to an analog communication system. [ 1 ]
https://en.wikipedia.org/wiki/Spectral_regrowth
The spectral resolution of a spectrograph , or, more generally, of a frequency spectrum , is a measure of its ability to resolve features in the electromagnetic spectrum . It is usually denoted by Δ λ {\displaystyle \Delta \lambda } , and is closely related to the resolving power of the spectrograph, defined as R = λ Δ λ , {\displaystyle R={\frac {\lambda }{\Delta \lambda }},} where Δ λ {\displaystyle \Delta \lambda } is the smallest difference in wavelengths that can be distinguished at a wavelength of λ {\displaystyle \lambda } . For example, the Space Telescope Imaging Spectrograph (STIS) can distinguish features 0.17 nm apart at a wavelength of 1000 nm, giving it a resolution of 0.17 nm and a resolving power of about 5,900. An example of a high resolution spectrograph is the Cryogenic High-Resolution IR Echelle Spectrograph (CRIRES+) installed at ESO 's Very Large Telescope , which has a spectral resolving power of up to 100,000. [ 1 ] The spectral resolution can also be expressed in terms of physical quantities, such as velocity; then it describes the difference between velocities Δ v {\displaystyle \Delta v} that can be distinguished through the Doppler effect . Then, the resolution is Δ v {\displaystyle \Delta v} and the resolving power is R = c Δ v , {\displaystyle R={\frac {c}{\Delta v}},} where c {\displaystyle c} is the speed of light . The STIS example above then has a spectral resolution of 51 km/s . IUPAC defines resolution in optical spectroscopy as the minimum wavenumber, wavelength or frequency difference between two lines in a spectrum that can be distinguished. [ 2 ] Resolving power, R , is given by the transition wavenumber, wavelength or frequency, divided by the resolution. [ 3 ]
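The two expressions for the resolving power can be checked against the STIS figures quoted above; a short calculation (written here in Python) reproduces the quoted values.

```python
c = 299_792.458          # speed of light in km/s

wavelength = 1000.0      # nm
d_lambda = 0.17          # nm, smallest distinguishable wavelength difference
R = wavelength / d_lambda
print(R)                 # about 5.9e3, i.e. a resolving power of roughly 5900

dv = c / R               # equivalent velocity resolution via the Doppler effect
print(dv)                # about 51 km/s
```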
https://en.wikipedia.org/wiki/Spectral_resolution
Spectral shape analysis relies on the spectrum ( eigenvalues and/or eigenfunctions ) of the Laplace–Beltrami operator to compare and analyze geometric shapes. Since the spectrum of the Laplace–Beltrami operator is invariant under isometries , it is well suited for the analysis or retrieval of non-rigid shapes, i.e. bendable objects such as humans, animals, plants, etc. The Laplace–Beltrami operator is involved in many important differential equations, such as the heat equation and the wave equation . It can be defined on a Riemannian manifold as the divergence of the gradient of a real-valued function f : Its spectral components can be computed by solving the Helmholtz equation (or Laplacian eigenvalue problem): The solutions are the eigenfunctions φ i {\displaystyle \varphi _{i}} (modes) and corresponding eigenvalues λ i {\displaystyle \lambda _{i}} , representing a diverging sequence of positive real numbers. The first eigenvalue is zero for closed domains or when using the Neumann boundary condition . For some shapes, the spectrum can be computed analytically (e.g. rectangle, flat torus, cylinder, disk or sphere). For the sphere, for example, the eigenfunctions are the spherical harmonics . The most important properties of the eigenvalues and eigenfunctions are that they are isometry invariants. In other words, if the shape is not stretched (e.g. a sheet of paper bent into the third dimension), the spectral values will not change. Bendable objects, like animals, plants and humans, can move into different body postures with only minimal stretching at the joints. The resulting shapes are called near-isometric and can be compared using spectral shape analysis. Geometric shapes are often represented as 2D curved surfaces, 2D surface meshes (usually triangle meshes ) or 3D solid objects (e.g. using voxels or tetrahedra meshes). The Helmholtz equation can be solved for all these cases. If a boundary exists, e.g. a square, or the volume of any 3D geometric shape, boundary conditions need to be specified. Several discretizations of the Laplace operator exist (see Discrete Laplace operator ) for the different types of geometry representations. Many of these operators do not approximate well the underlying continuous operator. The ShapeDNA is one of the first spectral shape descriptors. It is the normalized beginning sequence of the eigenvalues of the Laplace–Beltrami operator. [ 1 ] [ 2 ] Its main advantages are the simple representation (a vector of numbers) and comparison, scale invariance, and in spite of its simplicity a very good performance for shape retrieval of non-rigid shapes. [ 3 ] Competitors of shapeDNA include singular values of Geodesic Distance Matrix (SD-GDM) [ 4 ] and Reduced BiHarmonic Distance Matrix (R-BiHDM). [ 5 ] However, the eigenvalues are global descriptors, therefore the shapeDNA and other global spectral descriptors cannot be used for local or partial shape analysis. The global point signature [ 6 ] at a point x {\displaystyle x} is a vector of scaled eigenfunctions of the Laplace–Beltrami operator computed at x {\displaystyle x} (i.e. the spectral embedding of the shape). The GPS is a global feature in the sense that it cannot be used for partial shape matching. 
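A minimal discrete sketch of these ideas, assuming NumPy: a small grid graph stands in for a triangulated surface, its combinatorial graph Laplacian stands in for a discretized Laplace–Beltrami operator, and its eigendecomposition yields both a ShapeDNA-like vector of leading eigenvalues and spectral (GPS-like) coordinates for each vertex. The grid size and the number of retained modes are illustrative choices.

```python
import numpy as np

# Adjacency matrix of an n x n grid graph (4-neighbour connectivity)
n = 15
N = n * n
idx = np.arange(N).reshape(n, n)
A = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        if i + 1 < n:
            A[idx[i, j], idx[i + 1, j]] = A[idx[i + 1, j], idx[i, j]] = 1
        if j + 1 < n:
            A[idx[i, j], idx[i, j + 1]] = A[idx[i, j + 1], idx[i, j]] = 1

L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)            # eigenvalues ascending, eigenvectors orthonormal

shape_dna_like = vals[1:9]                # leading nonzero eigenvalues, a global descriptor
spectral_coords = vecs[:, 1:4]            # low-frequency spectral coordinates per vertex
print(vals[:5])                           # the first eigenvalue is ~0, as for a closed domain
```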
The heat kernel signature [ 7 ] makes use of the eigen-decomposition of the heat kernel : For each point on the surface the diagonal of the heat kernel h t ( x , x ) {\displaystyle h_{t}(x,x)} is sampled at specific time values t j {\displaystyle t_{j}} and yields a local signature that can also be used for partial matching or symmetry detection. The WKS [ 8 ] follows a similar idea to the HKS, replacing the heat equation with the Schrödinger wave equation. The IWKS [ 9 ] improves the WKS for non-rigid shape retrieval by introducing a new scaling function to the eigenvalues and aggregating a new curvature term. SGWS is a local descriptor that is not only isometric invariant, but also compact, easy to compute and combines the advantages of both band-pass and low-pass filters. An important facet of SGWS is the ability to combine the advantages of WKS and HKS into a single signature, while allowing a multiresolution representation of shapes. [ 10 ] The spectral decomposition of the graph Laplacian associated with complex shapes (see Discrete Laplace operator ) provides eigenfunctions (modes) which are invariant to isometries. Each vertex on the shape could be uniquely represented with a combinations of the eigenmodal values at each point, sometimes called spectral coordinates: Spectral matching consists of establishing the point correspondences by pairing vertices on different shapes that have the most similar spectral coordinates. Early work [ 11 ] [ 12 ] [ 13 ] focused on sparse correspondences for stereoscopy. Computational efficiency now enables dense correspondences on full meshes, for instance between cortical surfaces. [ 14 ] Spectral matching could also be used for complex non-rigid image registration , which is notably difficult when images have very large deformations. [ 15 ] Such image registration methods based on spectral eigenmodal values indeed capture global shape characteristics, and contrast with conventional non-rigid image registration methods which are often based on local shape characteristics (e.g., image gradients).
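The heat kernel signature described above follows directly from the Laplacian eigenpairs, h_t(x, x) = Σ_i exp(−λ_i t) φ_i(x)². The following self-contained sketch, assuming NumPy and using the Laplacian of a small path graph purely for illustration, computes the signature at a few diffusion times.

```python
import numpy as np

def heat_kernel_signature(vals, vecs, times):
    """HKS h_t(x, x) = sum_i exp(-lambda_i t) phi_i(x)^2, per vertex and per time."""
    decay = np.exp(-np.outer(vals, times))   # shape (n_modes, n_times)
    return (vecs ** 2) @ decay               # shape (n_vertices, n_times)

# Tiny self-contained example: Laplacian of a path graph on 10 vertices
n = 10
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)

times = np.array([0.1, 1.0, 10.0])           # sampled diffusion times t_j
hks = heat_kernel_signature(vals, vecs, times)
print(hks.shape)                             # (10, 3): a 3-value local signature per vertex
```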
https://en.wikipedia.org/wiki/Spectral_shape_analysis
In scientific imaging , the two-dimensional spectral signal-to-noise ratio (SSNR) is a signal-to-noise measure that gives the normalised cross-correlation coefficient between several two-dimensional images over corresponding rings in Fourier space, as a function of spatial frequency. [ 1 ] It is a multi-particle extension of the Fourier ring correlation (FRC), which is related to the Fourier shell correlation . The SSNR is a popular method for finding the resolution of a class average in cryo-electron microscopy . In its definition, F r i , k {\displaystyle F_{r_{i},k}} is the complex structure factor for image k {\displaystyle k} for a pixel r i {\displaystyle r_{i}} at radius R {\displaystyle R} . It is possible to convert the SSNR into an equivalent FRC.
https://en.wikipedia.org/wiki/Spectral_signal-to-noise_ratio
Spectral signature is the variation of reflectance or emittance of a material with respect to wavelength (i.e., reflectance/emittance as a function of wavelength). [ 1 ] The spectral signature of stars indicates the composition of the stellar atmosphere . The spectral signature of an object is a function of the incident EM wavelength and of the material's interaction with that section of the electromagnetic spectrum . The measurements can be made with various instruments, including a task-specific spectrometer , although the most common method is separation of the red, green, blue and near-infrared portions of the EM spectrum as acquired by digital cameras . Calibration spectral signatures, collected under specific illumination, are used to apply corrections to airborne or satellite digital imagery. The user of one kind of spectroscope looks through it at a tube of ionized gas . The user sees specific lines of colour falling on a graduated scale. Each substance will have its own unique pattern of spectral lines. Most remote sensing applications process digital images to extract spectral signatures at each pixel and use them to divide the image into groups of similar pixels ( segmentation ) using different approaches. As a last step, they assign a class to each group (classification) by comparing with known spectral signatures. Depending on pixel resolution, a pixel can represent many spectral signatures "mixed" together; that is why much remote sensing analysis is aimed at "unmixing" these mixtures. Ultimately, correctly matching the spectral signature recorded by an image pixel with the spectral signature of known materials leads to accurate classification in remote sensing .
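A hedged sketch of the classification step described above, assuming NumPy: each pixel is assigned the class of the reference signature closest to it, here by spectral angle. The reference signatures, the four-band layout and the use of a spectral-angle criterion are illustrative assumptions rather than a prescribed method.

```python
import numpy as np

# Known reference signatures: reflectance in 4 bands (R, G, B, NIR), illustrative values
references = {
    "water":      np.array([0.05, 0.06, 0.04, 0.02]),
    "vegetation": np.array([0.04, 0.08, 0.05, 0.45]),
    "bare_soil":  np.array([0.20, 0.25, 0.30, 0.35]),
}

def classify(pixel):
    """Assign the class whose reference signature makes the smallest spectral angle with the pixel."""
    def angle(a, b):
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))
    return min(references, key=lambda name: angle(pixel, references[name]))

print(classify(np.array([0.05, 0.09, 0.06, 0.40])))   # -> "vegetation"
```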
https://en.wikipedia.org/wiki/Spectral_signature
In radio electronics or acoustics , spectral splatter (also called switch noise ) refers to spurious emissions that result from an abrupt change in the transmitted signal, usually when transmission is started or stopped. [ 1 ] For example, a device transmitting a sine wave produces a single peak in the frequency spectrum; however, if the device abruptly starts or stops transmitting this sine wave, it will emit noise at frequencies other than the frequency of the sine wave. This noise is known as spectral splatter. When the signal is represented in the time domain , an abrupt change may not be visually apparent; in the frequency domain , however, the abrupt change causes the appearance of spikes at various frequencies. A sharper change in the time domain usually results in more spikes or stronger spikes in the frequency domain. Spectral splatter can thus be reduced by making the change more smooth. Controlling the power ramp shape (i.e. the way in which the signal increases ("power-on ramp") or falls off ("power-down ramp")) can help reduce the splatter. In some cases one can use a filter to remove unwanted emissions. Note that a completely abrupt change (in the mathematical sense) is not possible in physical reality; the change is always somewhat smoothed naturally, for example due to the capacitance (in electronics) or inertia (in acoustics) of the components involved. In radio electronics, the need to minimize spectral splatter arises because signals are usually required by government regulations to be contained in a particular frequency band , defined by a spectral mask . Spectral splatter can cause emissions that violate this mask.
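The effect is easy to reproduce numerically. In the sketch below (assuming NumPy; carrier frequency, record length and ramp length are illustrative choices), a sine burst keyed on and off abruptly is compared with the same burst shaped by raised-cosine power-on and power-down ramps; the far-out spectral content of the ramped burst is much lower.

```python
import numpy as np

fs, f0 = 1_000_000.0, 100_000.0               # sample rate and carrier frequency (Hz)
t = np.arange(0, 0.004, 1 / fs)
carrier = np.sin(2 * np.pi * f0 * t)

gate = np.zeros_like(t)
gate[1000:3000] = 1.0                         # transmitter keyed on and off abruptly
abrupt = carrier * gate

ramp_len = 200                                # raised-cosine power-on / power-down ramps
ramp = 0.5 * (1 - np.cos(np.pi * np.arange(ramp_len) / ramp_len))
smooth_gate = gate.copy()
smooth_gate[1000:1000 + ramp_len] = ramp
smooth_gate[3000 - ramp_len:3000] = ramp[::-1]
smooth = carrier * smooth_gate

f = np.fft.rfftfreq(t.size, 1 / fs)

def spectrum_db(s):
    return 20 * np.log10(np.abs(np.fft.rfft(s)) + 1e-12)

far = np.abs(f - f0) > 50_000                 # frequencies well outside the intended channel
print(spectrum_db(abrupt)[far].max(), spectrum_db(smooth)[far].max())   # ramped burst splatters far less
```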
https://en.wikipedia.org/wiki/Spectral_splatter
In linear algebra and functional analysis , a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators , which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras . See also spectral theory for a historical perspective. Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces . The spectral theorem also provides a canonical decomposition, called the spectral decomposition , of the underlying vector space on which the operator acts. Augustin-Louis Cauchy proved the spectral theorem for symmetric matrices , i.e., that every real, symmetric matrix is diagonalizable. In addition, Cauchy was the first to be systematic about determinants . [ 1 ] [ 2 ] The spectral theorem as generalized by John von Neumann is today perhaps the most important result of operator theory . This article mainly focuses on the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space. We begin by considering a Hermitian matrix on C n {\displaystyle \mathbb {C} ^{n}} (but the following discussion will be adaptable to the more restrictive case of symmetric matrices on R n {\displaystyle \mathbb {R} ^{n}} ). We consider a Hermitian map A on a finite-dimensional complex inner product space V endowed with a positive definite sesquilinear inner product ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } . The Hermitian condition on A {\displaystyle A} means that for all x , y ∈ V , ⟨ A x , y ⟩ = ⟨ x , A y ⟩ . {\displaystyle \langle Ax,y\rangle =\langle x,Ay\rangle .} An equivalent condition is that A * = A , where A * is the Hermitian conjugate of A . In the case that A is identified with a Hermitian matrix, the matrix of A * is equal to its conjugate transpose . (If A is a real matrix , then this is equivalent to A T = A , that is, A is a symmetric matrix .) This condition implies that all eigenvalues of a Hermitian map are real: To see this, it is enough to apply it to the case when x = y is an eigenvector. (Recall that an eigenvector of a linear map A is a non-zero vector v such that Av = λv for some scalar λ . The value λ is the corresponding eigenvalue . Moreover, the eigenvalues are roots of the characteristic polynomial .) Theorem — If A is Hermitian on V , then there exists an orthonormal basis of V consisting of eigenvectors of A . Each eigenvalue of A is real. We provide a sketch of a proof for the case where the underlying field of scalars is the complex numbers . By the fundamental theorem of algebra , applied to the characteristic polynomial of A , there is at least one complex eigenvalue λ 1 and corresponding eigenvector v 1 , which must by definition be non-zero. 
Then since λ 1 ⟨ v 1 , v 1 ⟩ = ⟨ A ( v 1 ) , v 1 ⟩ = ⟨ v 1 , A ( v 1 ) ⟩ = λ ¯ 1 ⟨ v 1 , v 1 ⟩ , {\displaystyle \lambda _{1}\langle v_{1},v_{1}\rangle =\langle A(v_{1}),v_{1}\rangle =\langle v_{1},A(v_{1})\rangle ={\bar {\lambda }}_{1}\langle v_{1},v_{1}\rangle ,} we find that λ 1 is real. Now consider the space K n − 1 = span ( v 1 ) ⊥ {\displaystyle {\mathcal {K}}^{n-1}={\text{span}}(v_{1})^{\perp }} , the orthogonal complement of v 1 . By Hermiticity, K n − 1 {\displaystyle {\mathcal {K}}^{n-1}} is an invariant subspace of A . To see that, consider any k ∈ K n − 1 {\displaystyle k\in {\mathcal {K}}^{n-1}} so that ⟨ k , v 1 ⟩ = 0 {\displaystyle \langle k,v_{1}\rangle =0} by definition of K n − 1 {\displaystyle {\mathcal {K}}^{n-1}} . To satisfy invariance, we need to check if A ( k ) ∈ K n − 1 {\displaystyle A(k)\in {\mathcal {K}}^{n-1}} . This is true because, ⟨ A ( k ) , v 1 ⟩ = ⟨ k , A ( v 1 ) ⟩ = ⟨ k , λ 1 v 1 ⟩ = 0 {\displaystyle \langle A(k),v_{1}\rangle =\langle k,A(v_{1})\rangle =\langle k,\lambda _{1}v_{1}\rangle =0} . Applying the same argument to K n − 1 {\displaystyle {\mathcal {K}}^{n-1}} shows that A has at least one real eigenvalue λ 2 {\displaystyle \lambda _{2}} and corresponding eigenvector v 2 ∈ K n − 1 ⊥ v 1 {\displaystyle v_{2}\in {\mathcal {K}}^{n-1}\perp v_{1}} . This can be used to build another invariant subspace K n − 2 = span ( { v 1 , v 2 } ) ⊥ {\displaystyle {\mathcal {K}}^{n-2}={\text{span}}(\{v_{1},v_{2}\})^{\perp }} . Finite induction then finishes the proof. The matrix representation of A in a basis of eigenvectors is diagonal, and by the construction the proof gives a basis of mutually orthogonal eigenvectors; by choosing them to be unit vectors one obtains an orthonormal basis of eigenvectors. A can be written as a linear combination of pairwise orthogonal projections, called its spectral decomposition . Let V λ = { v ∈ V : A v = λ v } {\displaystyle V_{\lambda }=\{v\in V:Av=\lambda v\}} be the eigenspace corresponding to an eigenvalue λ {\displaystyle \lambda } . Note that the definition does not depend on any choice of specific eigenvectors. In general, V is the orthogonal direct sum of the spaces V λ {\displaystyle V_{\lambda }} where the λ {\displaystyle \lambda } ranges over the spectrum of A {\displaystyle A} . When the matrix being decomposed is Hermitian, the spectral decomposition is a special case of the Schur decomposition (see the proof in case of normal matrices below). The spectral decomposition is a special case of the singular value decomposition , which states that any matrix A ∈ C m × n {\displaystyle A\in \mathbb {C} ^{m\times n}} can be expressed as A = U Σ V ∗ {\displaystyle A=U\Sigma V^{*}} , where U ∈ C m × m {\displaystyle U\in \mathbb {C} ^{m\times m}} and V ∈ C n × n {\displaystyle V\in \mathbb {C} ^{n\times n}} are unitary matrices and Σ ∈ R m × n {\displaystyle \Sigma \in \mathbb {R} ^{m\times n}} is a diagonal matrix. The diagonal entries of Σ {\displaystyle \Sigma } are uniquely determined by A {\displaystyle A} and are known as the singular values of A {\displaystyle A} . If A {\displaystyle A} is Hermitian, then A ∗ = A {\displaystyle A^{*}=A} and V Σ U ∗ = U Σ V ∗ {\displaystyle V\Sigma U^{*}=U\Sigma V^{*}} which implies U = V {\displaystyle U=V} . The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A * A = AA * . 
One can show that A is normal if and only if it is unitarily diagonalizable using the Schur decomposition . That is, any matrix can be written as A = UTU * , where U is unitary and T is upper triangular . If A is normal, then one sees that TT * = T * T . Therefore, T must be diagonal since a normal upper triangular matrix is diagonal (see normal matrix ). The converse is obvious. In other words, A is normal if and only if there exists a unitary matrix U such that A = U D U ∗ , {\displaystyle A=UDU^{*},} where D is a diagonal matrix . Then, the entries of the diagonal of D are the eigenvalues of A . The column vectors of U are the eigenvectors of A and they are orthonormal. Unlike the Hermitian case, the entries of D need not be real. In the more general setting of Hilbert spaces, which may have an infinite dimension, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case. Theorem — Suppose A is a compact self-adjoint operator on a (real or complex) Hilbert space V . Then there is an orthonormal basis of V consisting of eigenvectors of A . Each eigenvalue is real. As for Hermitian matrices, the key point is to prove the existence of at least one nonzero eigenvector. One cannot rely on determinants to show existence of eigenvalues, but one can use a maximization argument analogous to the variational characterization of eigenvalues. If the compactness assumption is removed, then it is not true that every self-adjoint operator has eigenvectors. For example, the multiplication operator M x {\displaystyle M_{x}} on L 2 ( [ 0 , 1 ] ) {\displaystyle L^{2}([0,1])} which takes each ψ ( x ) ∈ L 2 ( [ 0 , 1 ] ) {\displaystyle \psi (x)\in L^{2}([0,1])} to x ψ ( x ) {\displaystyle x\psi (x)} is bounded and self-adjoint, but has no eigenvectors. However, its spectrum, suitably defined, is still equal to [ 0 , 1 ] {\displaystyle [0,1]} , see spectrum of bounded operator . The next generalization we consider is that of bounded self-adjoint operators on a Hilbert space. Such operators may have no eigenvectors: for instance let A be the operator of multiplication by t on L 2 ( [ 0 , 1 ] ) {\displaystyle L^{2}([0,1])} , that is, [ 3 ] [ A f ] ( t ) = t f ( t ) . {\displaystyle [Af](t)=tf(t).} This operator does not have any eigenvectors in L 2 ( [ 0 , 1 ] ) {\displaystyle L^{2}([0,1])} , though it does have eigenvectors in a larger space. Namely the distribution f ( t ) = δ ( t − t 0 ) {\displaystyle f(t)=\delta (t-t_{0})} , where δ {\displaystyle \delta } is the Dirac delta function , is an eigenvector when construed in an appropriate sense. The Dirac delta function is however not a function in the classical sense and does not lie in the Hilbert space L 2 [0, 1] . Thus, the delta-functions are "generalized eigenvectors" of A {\displaystyle A} but not eigenvectors in the usual sense. In the absence of (true) eigenvectors, one can look for a "spectral subspace" consisting of an almost eigenvector , i.e, a closed subspace V E {\displaystyle V_{E}} of V {\displaystyle V} associated with a Borel set E ⊂ σ ( A ) {\displaystyle E\subset \sigma (A)} in the spectrum of A {\displaystyle A} . This subspace can be thought of as the closed span of generalized eigenvectors for A {\displaystyle A} with eigen values in E {\displaystyle E} . 
[ 4 ] In the above example, where [ A f ] ( t ) = t f ( t ) , {\displaystyle [Af](t)=tf(t),\;} we might consider the subspace of functions supported on a small interval [ a , a + ε ] {\displaystyle [a,a+\varepsilon ]} inside [ 0 , 1 ] {\displaystyle [0,1]} . This space is invariant under A {\displaystyle A} and for any f {\displaystyle f} in this subspace, A f {\displaystyle Af} is very close to a f {\displaystyle af} . Each subspace, in turn, is encoded by the associated projection operator, and the collection of all the subspaces is then represented by a projection-valued measure . One formulation of the spectral theorem expresses the operator A as an integral of the coordinate function over the operator's spectrum σ ( A ) {\displaystyle \sigma (A)} with respect to a projection-valued measure. [ 5 ] A = ∫ σ ( A ) λ d π ( λ ) . {\displaystyle A=\int _{\sigma (A)}\lambda \,d\pi (\lambda ).} When the self-adjoint operator in question is compact , this version of the spectral theorem reduces to something similar to the finite-dimensional spectral theorem above, except that the operator is expressed as a finite or countably infinite linear combination of projections, that is, the measure consists only of atoms. An alternative formulation of the spectral theorem says that every bounded self-adjoint operator is unitarily equivalent to a multiplication operator, a relatively simple type of operator. Theorem [ 6 ] — Let A {\displaystyle A} be a bounded self-adjoint operator on a Hilbert space V {\displaystyle V} . Then there is a measure space ( X , Σ , μ ) {\displaystyle (X,\Sigma ,\mu )} and a real-valued essentially bounded measurable function λ {\displaystyle \lambda } on X {\displaystyle X} and a unitary operator U : V → L 2 ( X , μ ) {\displaystyle U:V\to L^{2}(X,\mu )} such that U ∗ T U = A , {\displaystyle U^{*}TU=A,} where T {\displaystyle T} is the multiplication operator : [ T f ] ( x ) = λ ( x ) f ( x ) {\displaystyle [Tf](x)=\lambda (x)f(x)} and | T | {\displaystyle \vert T\vert } = | λ | ∞ {\displaystyle =\vert \lambda \vert _{\infty }} . Multiplication operators are a direct generalization of diagonal matrices. A finite-dimensional Hermitian vector space V {\displaystyle V} may be coordinatized as the space of functions f : B → C {\displaystyle f:B\to \mathbb {C} } from a basis B {\displaystyle B} to the complex numbers, so that the B {\displaystyle B} -coordinates of a vector are the values of the corresponding function f {\displaystyle f} . The finite-dimensional spectral theorem for a self-adjoint operator A : V → V {\displaystyle A:V\to V} states that there exists an orthonormal basis of eigenvectors B {\displaystyle B} , so that the inner product becomes the dot product with respect to the B {\displaystyle B} -coordinates: thus V {\displaystyle V} is isomorphic to L 2 ( B , μ ) {\displaystyle L^{2}(B,\mu )} for the discrete unit measure μ {\displaystyle \mu } on B {\displaystyle B} . Also A {\displaystyle A} is unitarily equivalent to the multiplication operator [ T f ] ( v ) = λ ( v ) f ( v ) {\displaystyle [Tf](v)=\lambda (v)f(v)} , where λ ( v ) {\displaystyle \lambda (v)} is the eigenvalue of v ∈ B {\displaystyle v\in B} : that is, A {\displaystyle A} multiplies each B {\displaystyle B} -coordinate by the corresponding eigenvalue λ ( v ) {\displaystyle \lambda (v)} , the action of a diagonal matrix. Finally, the operator norm | A | = | T | {\displaystyle |A|=|T|} is equal to the magnitude of the largest eigenvector | λ | ∞ {\displaystyle |\lambda |_{\infty }} . 
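The finite-dimensional picture just described is easy to verify numerically. The following sketch, assuming NumPy, diagonalizes a random Hermitian matrix: the eigenvalues come out real, the eigenvectors are orthonormal, and the matrix is recovered as the eigenvalue-weighted sum of rank-one orthogonal projections, i.e. as multiplication by the eigenvalues in the eigenbasis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                       # Hermitian: A* = A

vals, U = np.linalg.eigh(A)                    # real eigenvalues, unitary U

print(np.allclose(U.conj().T @ U, np.eye(n)))          # eigenvectors are orthonormal
print(np.allclose(A, U @ np.diag(vals) @ U.conj().T))  # A = U diag(lambda) U*

# Spectral decomposition as a sum of rank-one orthogonal projections
A_rebuilt = sum(vals[i] * np.outer(U[:, i], U[:, i].conj()) for i in range(n))
print(np.allclose(A, A_rebuilt))               # True
```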
The spectral theorem is the beginning of the vast research area of functional analysis called operator theory ; see also spectral measure . There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now λ {\displaystyle \lambda } may be complex-valued. There is also a formulation of the spectral theorem in terms of direct integrals . It is similar to the multiplication-operator formulation, but more canonical. Let A {\displaystyle A} be a bounded self-adjoint operator and let σ ( A ) {\displaystyle \sigma (A)} be the spectrum of A {\displaystyle A} . The direct-integral formulation of the spectral theorem associates two quantities to A {\displaystyle A} . First, a measure μ {\displaystyle \mu } on σ ( A ) {\displaystyle \sigma (A)} , and second, a family of Hilbert spaces { H λ } , λ ∈ σ ( A ) . {\displaystyle \{H_{\lambda }\},\,\,\lambda \in \sigma (A).} We then form the direct integral Hilbert space ∫ R ⊕ H λ d μ ( λ ) . {\displaystyle \int _{\mathbf {R} }^{\oplus }H_{\lambda }\,d\mu (\lambda ).} The elements of this space are functions (or "sections") s ( λ ) , λ ∈ σ ( A ) , {\displaystyle s(\lambda ),\,\,\lambda \in \sigma (A),} such that s ( λ ) ∈ H λ {\displaystyle s(\lambda )\in H_{\lambda }} for all λ {\displaystyle \lambda } . The direct-integral version of the spectral theorem may be expressed as follows: [ 7 ] Theorem — If A {\displaystyle A} is a bounded self-adjoint operator, then A {\displaystyle A} is unitarily equivalent to the "multiplication by λ {\displaystyle \lambda } " operator on ∫ R ⊕ H λ d μ ( λ ) {\displaystyle \int _{\mathbf {R} }^{\oplus }H_{\lambda }\,d\mu (\lambda )} for some measure μ {\displaystyle \mu } and some family { H λ } {\displaystyle \{H_{\lambda }\}} of Hilbert spaces. The measure μ {\displaystyle \mu } is uniquely determined by A {\displaystyle A} up to measure-theoretic equivalence; that is, any two measure associated to the same A {\displaystyle A} have the same sets of measure zero. The dimensions of the Hilbert spaces H λ {\displaystyle H_{\lambda }} are uniquely determined by A {\displaystyle A} up to a set of μ {\displaystyle \mu } -measure zero. The spaces H λ {\displaystyle H_{\lambda }} can be thought of as something like "eigenspaces" for A {\displaystyle A} . Note, however, that unless the one-element set λ {\displaystyle \lambda } has positive measure, the space H λ {\displaystyle H_{\lambda }} is not actually a subspace of the direct integral. Thus, the H λ {\displaystyle H_{\lambda }} 's should be thought of as "generalized eigenspace"—that is, the elements of H λ {\displaystyle H_{\lambda }} are "eigenvectors" that do not actually belong to the Hilbert space. Although both the multiplication-operator and direct integral formulations of the spectral theorem express a self-adjoint operator as unitarily equivalent to a multiplication operator, the direct integral approach is more canonical. First, the set over which the direct integral takes place (the spectrum of the operator) is canonical. Second, the function we are multiplying by is canonical in the direct-integral approach: Simply the function λ ↦ λ {\displaystyle \lambda \mapsto \lambda } . A vector φ {\displaystyle \varphi } is called a cyclic vector for A {\displaystyle A} if the vectors φ , A φ , A 2 φ , … {\displaystyle \varphi ,A\varphi ,A^{2}\varphi ,\ldots } span a dense subspace of the Hilbert space. 
Suppose A {\displaystyle A} is a bounded self-adjoint operator for which a cyclic vector exists. In that case, there is no distinction between the direct-integral and multiplication-operator formulations of the spectral theorem. Indeed, in that case, there is a measure μ {\displaystyle \mu } on the spectrum σ ( A ) {\displaystyle \sigma (A)} of A {\displaystyle A} such that A {\displaystyle A} is unitarily equivalent to the "multiplication by λ {\displaystyle \lambda } " operator on L 2 ( σ ( A ) , μ ) {\displaystyle L^{2}(\sigma (A),\mu )} . [ 8 ] This result represents A {\displaystyle A} simultaneously as a multiplication operator and as a direct integral, since L 2 ( σ ( A ) , μ ) {\displaystyle L^{2}(\sigma (A),\mu )} is just a direct integral in which each Hilbert space H λ {\displaystyle H_{\lambda }} is just C {\displaystyle \mathbb {C} } . Not every bounded self-adjoint operator admits a cyclic vector; indeed, by the uniqueness in the direct integral decomposition, this can occur only when all the H λ {\displaystyle H_{\lambda }} 's have dimension one. When this happens, we say that A {\displaystyle A} has "simple spectrum" in the sense of spectral multiplicity theory . That is, a bounded self-adjoint operator that admits a cyclic vector should be thought of as the infinite-dimensional generalization of a self-adjoint matrix with distinct eigenvalues (i.e., each eigenvalue has multiplicity one). Although not every A {\displaystyle A} admits a cyclic vector, it is easy to see that we can decompose the Hilbert space as a direct sum of invariant subspaces on which A {\displaystyle A} has a cyclic vector. This observation is the key to the proofs of the multiplication-operator and direct-integral forms of the spectral theorem. One important application of the spectral theorem (in whatever form) is the idea of defining a functional calculus . That is, given a function f {\displaystyle f} defined on the spectrum of A {\displaystyle A} , we wish to define an operator f ( A ) {\displaystyle f(A)} . If f {\displaystyle f} is simply a positive power, f ( x ) = x n {\displaystyle f(x)=x^{n}} , then f ( A ) {\displaystyle f(A)} is just the n {\displaystyle n} -th power of A {\displaystyle A} , A n {\displaystyle A^{n}} . The interesting cases are where f {\displaystyle f} is a nonpolynomial function such as a square root or an exponential. Either of the versions of the spectral theorem provides such a functional calculus. [ 9 ] In the direct-integral version, for example, f ( A ) {\displaystyle f(A)} acts as the "multiplication by f {\displaystyle f} " operator in the direct integral: [ f ( A ) s ] ( λ ) = f ( λ ) s ( λ ) . {\displaystyle [f(A)s](\lambda )=f(\lambda )s(\lambda ).} That is to say, each space H λ {\displaystyle H_{\lambda }} in the direct integral is a (generalized) eigenspace for f ( A ) {\displaystyle f(A)} with eigenvalue f ( λ ) {\displaystyle f(\lambda )} . Many important linear operators which occur in analysis , such as differential operators , are unbounded . There is also a spectral theorem for self-adjoint operators that applies in these cases. To give an example, every constant-coefficient differential operator is unitarily equivalent to a multiplication operator. Indeed, the unitary operator that implements this equivalence is the Fourier transform ; the multiplication operator is a type of Fourier multiplier . In general, spectral theorem for self-adjoint operators may take several equivalent forms. 
[ 10 ] Notably, all of the formulations given in the previous section for bounded self-adjoint operators (the projection-valued measure version, the multiplication-operator version, and the direct-integral version) continue to hold for unbounded self-adjoint operators, with small technical modifications to deal with domain issues. Specifically, the only reason the multiplication operator A {\displaystyle A} on L 2 ( [ 0 , 1 ] ) {\displaystyle L^{2}([0,1])} is bounded is the choice of domain [ 0 , 1 ] {\displaystyle [0,1]} . The same operator on, e.g., L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} would be unbounded. The notion of "generalized eigenvectors" naturally extends to unbounded self-adjoint operators, as they are characterized as non-normalizable eigenvectors. Contrary to the case of almost eigenvectors , however, the eigenvalues can be real or complex and, even if they are real, do not necessarily belong to the spectrum. Nevertheless, for self-adjoint operators there always exists a real subset of "generalized eigenvalues" such that the corresponding set of eigenvectors is complete . [ 11 ]
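In finite dimensions, the functional calculus described above reduces to applying f to the eigenvalues in an eigenbasis. The sketch below, assuming NumPy, forms f(A) = U f(D) U* for a real symmetric matrix; the choice f = exp is illustrative, and the polynomial case f(x) = x³ is used as a cross-check against the ordinary matrix power.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                              # real symmetric (self-adjoint)

vals, U = np.linalg.eigh(A)                    # A = U diag(vals) U^T with U orthogonal

def apply_function(f, vals, U):
    """f(A) = U diag(f(lambda_i)) U^T: multiplication by f(lambda) in the eigenbasis."""
    return U @ np.diag(f(vals)) @ U.T

expA = apply_function(np.exp, vals, U)         # matrix exponential via the functional calculus

# Cross-check: for f(x) = x**3 the functional calculus must give the ordinary matrix power
cubeA = apply_function(lambda x: x ** 3, vals, U)
print(np.allclose(cubeA, A @ A @ A))           # True
```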
https://en.wikipedia.org/wiki/Spectral_theorem
In mathematics , spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces . [ 1 ] It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. [ 2 ] The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter. [ 3 ] The name spectral theory was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid , in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous. Hilbert himself was surprised by the unexpected application of this theory, noting that "I developed my theory of infinitely many variables from purely mathematical interests, and even called it 'spectral analysis' without any presentiment that it would later find application to the actual spectrum of physics." [ 4 ] There have been three main ways to formulate spectral theory, each of which find use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics , exemplified by the work of von Neumann . [ 5 ] The further theory built on this to address Banach algebras in general. This development leads to the Gelfand representation , which covers the commutative case , and further into non-commutative harmonic analysis . The difference can be seen in making the connection with Fourier analysis . The Fourier transform on the real line is in one sense the spectral theory of differentiation as a differential operator . But for that to cover the phenomena one has already to deal with generalized eigenfunctions (for example, by means of a rigged Hilbert space ). On the other hand, it is simple to construct a group algebra , the spectrum of which captures the Fourier transform's basic properties, and this is carried out by means of Pontryagin duality . One can also study the spectral properties of operators on Banach spaces . For example, compact operators on Banach spaces have many spectral properties similar to that of matrices . The background in the physics of vibrations has been explained in this way: [ 6 ] Spectral theory is connected with the investigation of localized vibrations of a variety of different objects, from atoms and molecules in chemistry to obstacles in acoustic waveguides . These vibrations have frequencies , and the issue is to decide when such localized vibrations occur, and how to go about computing the frequencies. This is a very complicated problem since every object has not only a fundamental tone but also a complicated series of overtones , which vary radically from one body to another. Such physical ideas have nothing to do with the mathematical theory on a technical level, but there are examples of indirect involvement (see for example Mark Kac 's question Can you hear the shape of a drum? ). 
Hilbert's adoption of the term "spectrum" has been attributed to an 1897 paper of Wilhelm Wirtinger on Hill differential equation (by Jean Dieudonné ), and it was taken up by his students during the first decade of the twentieth century, among them Erhard Schmidt and Hermann Weyl . The conceptual basis for Hilbert space was developed from Hilbert's ideas by Erhard Schmidt and Frigyes Riesz . [ 7 ] [ 8 ] It was almost twenty years later, when quantum mechanics was formulated in terms of the Schrödinger equation , that the connection was made to atomic spectra ; a connection with the mathematical physics of vibration had been suspected before, as remarked by Henri Poincaré , but rejected for simple quantitative reasons, absent an explanation of the Balmer series . [ 9 ] The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous, rather than being an object of Hilbert's spectral theory. Consider a bounded linear transformation T defined everywhere over a general Banach space . We form the transformation: R ζ = ( ζ I − T ) − 1 . {\displaystyle R_{\zeta }=\left(\zeta I-T\right)^{-1}.} Here I is the identity operator and ζ is a complex number . The inverse of an operator T , that is T −1 , is defined by: T T − 1 = T − 1 T = I . {\displaystyle TT^{-1}=T^{-1}T=I.} If the inverse exists, T is called regular . If it does not exist, T is called singular . With these definitions, the resolvent set of T is the set of all complex numbers ζ such that R ζ exists and is bounded . This set often is denoted as ρ ( T ). The spectrum of T is the set of all complex numbers ζ such that R ζ fails to exist or is unbounded. Often the spectrum of T is denoted by σ ( T ). The function R ζ for all ζ in ρ ( T ) (that is, wherever R ζ exists as a bounded operator) is called the resolvent of T . The spectrum of T is therefore the complement of the resolvent set of T in the complex plane. [ 10 ] Every eigenvalue of T belongs to σ ( T ), but σ ( T ) may contain non-eigenvalues. [ 11 ] This definition applies to a Banach space, but of course other types of space exist as well; for example, topological vector spaces include Banach spaces, but can be more general. [ 12 ] [ 13 ] On the other hand, Banach spaces include Hilbert spaces , and it is these spaces that find the greatest application and the richest theoretical results. [ 14 ] With suitable restrictions, much can be said about the structure of the spectra of transformations in a Hilbert space. In particular, for self-adjoint operators , the spectrum lies on the real line and (in general) is a spectral combination of a point spectrum of discrete eigenvalues and a continuous spectrum . [ 15 ] In functional analysis and linear algebra the spectral theorem establishes conditions under which an operator can be expressed in simple form as a sum of simpler operators. As a full rigorous presentation is not appropriate for this article, we take an approach that avoids much of the rigor and satisfaction of a formal treatment with the aim of being more comprehensible to a non-specialist. This topic is easiest to describe by introducing the bra–ket notation of Dirac for operators. [ 16 ] [ 17 ] As an example, a very particular linear operator L might be written as a dyadic product : [ 18 ] [ 19 ] in terms of the "bra" ⟨ b 1 | and the "ket" | k 1 ⟩. A function f is described by a ket as | f ⟩. 
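Before continuing with the bra–ket formalism, the resolvent and spectrum just defined can be illustrated numerically in the finite-dimensional case, where the spectrum is simply the set of eigenvalues and the resolvent norm blows up as ζ approaches it. The sketch below is illustrative only; the matrix and the probe points are arbitrary choices.

import numpy as np

# A finite-dimensional stand-in for the bounded operator T.
T = np.array([[0.0, 1.0],
              [-2.0, 3.0]])   # eigenvalues 1 and 2
I = np.eye(2)

def resolvent_norm(zeta):
    """Return ||(zeta*I - T)^-1||, or infinity when zeta*I - T is singular."""
    try:
        return np.linalg.norm(np.linalg.inv(zeta * I - T), ord=2)
    except np.linalg.LinAlgError:
        return np.inf

# On the resolvent set the resolvent exists and is bounded...
print(resolvent_norm(5.0 + 1.0j))          # modest value: 5+i lies in rho(T)

# ...while approaching a point of the spectrum its norm grows without bound.
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, resolvent_norm(1.0 + eps))  # 1 is an eigenvalue of T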
The function f ( x ) defined on the coordinates ( x 1 , x 2 , x 3 , … ) {\displaystyle (x_{1},x_{2},x_{3},\dots )} is denoted as and the magnitude of f by where the notation (*) denotes a complex conjugate . This inner product choice defines a very specific inner product space , restricting the generality of the arguments that follow. [ 14 ] The effect of L upon a function f is then described as: expressing the result that the effect of L on f is to produce a new function | k 1 ⟩ {\displaystyle |k_{1}\rangle } multiplied by the inner product represented by ⟨ b 1 | f ⟩ {\displaystyle \langle b_{1}|f\rangle } . A more general linear operator L might be expressed as: where the { λ i } {\displaystyle \{\,\lambda _{i}\,\}} are scalars and the { | e i ⟩ } {\displaystyle \{\,|e_{i}\rangle \,\}} are a basis and the { ⟨ f i | } {\displaystyle \{\,\langle f_{i}|\,\}} a reciprocal basis for the space. The relation between the basis and the reciprocal basis is described, in part, by: If such a formalism applies, the { λ i } {\displaystyle \{\,\lambda _{i}\,\}} are eigenvalues of L and the functions { | e i ⟩ } {\displaystyle \{\,|e_{i}\rangle \,\}} are eigenfunctions of L . The eigenvalues are in the spectrum of L . [ 20 ] Some natural questions are: under what circumstances does this formalism work, and for what operators L are expansions in series of other operators like this possible? Can any function f be expressed in terms of the eigenfunctions (are they a Schauder basis ) and under what circumstances does a point spectrum or a continuous spectrum arise? How do the formalisms for infinite-dimensional spaces and finite-dimensional spaces differ, or do they differ? Can these ideas be extended to a broader class of spaces? Answering such questions is the realm of spectral theory and requires considerable background in functional analysis and matrix algebra . This section continues in the rough and ready manner of the above section using the bra–ket notation, and glossing over the many important details of a rigorous treatment. [ 21 ] A rigorous mathematical treatment may be found in various references. [ 22 ] In particular, the dimension n of the space will be finite. Using the bra–ket notation of the above section, the identity operator may be written as: where it is supposed as above that { | e i ⟩ } {\displaystyle \{|e_{i}\rangle \}} are a basis and the { ⟨ f i | } {\displaystyle \{\langle f_{i}|\}} a reciprocal basis for the space satisfying the relation: This expression of the identity operation is called a representation or a resolution of the identity. [ 21 ] [ 22 ] This formal representation satisfies the basic property of the identity: valid for every positive integer k . Applying the resolution of the identity to any function in the space | ψ ⟩ {\displaystyle |\psi \rangle } , one obtains: which is the generalized Fourier expansion of ψ in terms of the basis functions { e i }. [ 23 ] Here c i = ⟨ f i | ψ ⟩ {\displaystyle c_{i}=\langle f_{i}|\psi \rangle } . Given some operator equation of the form: with h in the space, this equation can be solved in the above basis through the formal manipulations: which converts the operator equation to a matrix equation determining the unknown coefficients c j in terms of the generalized Fourier coefficients ⟨ f j | h ⟩ {\displaystyle \langle f_{j}|h\rangle } of h and the matrix elements O j i = ⟨ f j | O | e i ⟩ {\displaystyle O_{ji}=\langle f_{j}|O|e_{i}\rangle } of the operator O . 
The role of spectral theory arises in establishing the nature and existence of the basis and the reciprocal basis. In particular, the basis might consist of the eigenfunctions of some linear operator L : with the { λ i } the eigenvalues of L from the spectrum of L . Then the resolution of the identity above provides the dyad expansion of L : Using spectral theory, the resolvent operator R : can be evaluated in terms of the eigenfunctions and eigenvalues of L , and the Green's function corresponding to L can be found. Applying R to some arbitrary function in the space, say φ {\displaystyle \varphi } , This function has poles in the complex λ -plane at each eigenvalue of L . Thus, using the calculus of residues : where the line integral is over a contour C that includes all the eigenvalues of L . Suppose our functions are defined over some coordinates { x j }, that is: Introducing the notation where δ(x − y) = δ(x 1 − y 1 , x 2 − y 2 , x 3 − y 3 , ...) is the Dirac delta function , [ 24 ] we can write Then: The function G(x, y; λ) defined by: is called the Green's function for operator L , and satisfies: [ 25 ] Consider the operator equation: in terms of coordinates: A particular case is λ = 0. The Green's function of the previous section is: and satisfies: Using this Green's function property: Then, multiplying both sides of this equation by h ( z ) and integrating: which suggests the solution is: That is, the function ψ ( x ) satisfying the operator equation is found if we can find the spectrum of O , and construct G , for example by using: There are many other ways to find G , of course. [ 26 ] See the articles on Green's functions and on Fredholm integral equations . It must be kept in mind that the above mathematics is purely formal, and a rigorous treatment involves some pretty sophisticated mathematics, including a good background knowledge of functional analysis , Hilbert spaces , distributions and so forth. Consult these articles and the references for more detail. Optimization problems may be the most useful examples about the combinatorial significance of the eigenvalues and eigenvectors in symmetric matrices, especially for the Rayleigh quotient with respect to a matrix M . Theorem Let M be a symmetric matrix and let x be the non-zero vector that maximizes the Rayleigh quotient with respect to M . Then, x is an eigenvector of M with eigenvalue equal to the Rayleigh quotient . Moreover, this eigenvalue is the largest eigenvalue of M . Proof Assume the spectral theorem. Let the eigenvalues of M be λ 1 ≤ λ 2 ≤ ⋯ ≤ λ n {\displaystyle \lambda _{1}\leq \lambda _{2}\leq \cdots \leq \lambda _{n}} . Since the { v i } {\displaystyle \{v_{i}\}} form an orthonormal basis , any vector x can be expressed in this basis as The way to prove this formula is pretty easy. Namely, evaluate the Rayleigh quotient with respect to x : where we used Parseval's identity in the last line. Finally we obtain that so the Rayleigh quotient is always less than λ n {\displaystyle \lambda _{n}} . [ 27 ]
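The Rayleigh-quotient statement above is easy to check numerically in finite dimensions. The sketch below (illustrative only; the random matrix is an arbitrary test case) samples the quotient over random vectors and compares the result with the largest eigenvalue returned by an eigensolver.

import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix M.
B = rng.standard_normal((5, 5))
M = (B + B.T) / 2

def rayleigh(M, x):
    return (x @ M @ x) / (x @ x)

# The quotient never exceeds the largest eigenvalue...
lam_max = np.linalg.eigvalsh(M)[-1]
samples = [rayleigh(M, rng.standard_normal(5)) for _ in range(10_000)]
assert max(samples) <= lam_max + 1e-12

# ...and it is attained at the corresponding eigenvector.
w, V = np.linalg.eigh(M)
assert np.isclose(rayleigh(M, V[:, -1]), lam_max)
print("max sampled quotient:", max(samples), " largest eigenvalue:", lam_max)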
https://en.wikipedia.org/wiki/Spectral_theory
In telecommunications, spectral width is the width of a spectral band, i.e., the range of wavelengths or frequencies over which the magnitude of the spectral components is significant, that is, equal to or greater than a specified fraction of the largest magnitude. In fiber-optic communication applications, the usual method of specifying spectral width is the full width at half maximum (FWHM). This is the same convention used for bandwidth, defined as the frequency range over which power drops by no more than half (at most −3 dB). The FWHM method may be difficult to apply when the spectrum has a complex shape. Another method of specifying spectral width is a special case of the root-mean-square deviation, in which the independent variable is the wavelength λ and f(λ) is a suitable radiometric quantity. The relative spectral width, Δλ/λ, is also frequently used, where Δλ is the width obtained by one of the methods above and λ is the center wavelength.
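As a concrete illustration of the two conventions, the sketch below (illustrative only; the Gaussian line shape and its parameters are an arbitrary test case) estimates both the FWHM and the RMS width of a sampled spectrum.

import numpy as np

# Sampled spectrum: a Gaussian line centred at 1550 nm with sigma = 0.2 nm.
wavelength = np.linspace(1548.0, 1552.0, 4001)          # nm
sigma_true = 0.2
power = np.exp(-0.5 * ((wavelength - 1550.0) / sigma_true) ** 2)

# Full width at half maximum: span of the samples at or above half the peak.
above_half = wavelength[power >= 0.5 * power.max()]
fwhm = above_half.max() - above_half.min()

# RMS width: standard deviation of wavelength weighted by the spectrum.
mean_wl = np.sum(wavelength * power) / np.sum(power)
rms_width = np.sqrt(np.sum((wavelength - mean_wl) ** 2 * power) / np.sum(power))

print(f"FWHM      = {fwhm:.3f} nm  (expected ~ {2.355 * sigma_true:.3f} nm)")
print(f"RMS width = {rms_width:.3f} nm  (expected ~ {sigma_true:.3f} nm)")
print(f"relative spectral width = {fwhm / mean_wl:.2e}")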
https://en.wikipedia.org/wiki/Spectral_width
A spectrochemical series is a list of ligands ordered by ligand "strength", and a list of metal ions based on oxidation number, group and identity of the element. For a metal ion, the ligands modify the difference in energy Δ between the d orbitals, called the ligand-field splitting parameter in ligand field theory, or the crystal-field splitting parameter in crystal field theory. The splitting parameter is reflected in the ion's electronic and magnetic properties, such as its spin state, and in optical properties such as its color and absorption spectrum. The spectrochemical series was first proposed in 1938 based on the results of absorption spectra of cobalt complexes. [ 1 ] A partial spectrochemical series listing ligands from small Δ to large Δ is given below. [ 2 ] (For a table, see the ligand page.) Weak field ligands: H2O, F−, Cl−, OH−. Strong field ligands: CO, CN−, NH3, PPh3. Ligands on the left end of this spectrochemical series are generally regarded as weaker ligands; they cannot cause forcible pairing of electrons within the 3d level and thus form outer-orbital octahedral complexes that are high spin. Ligands on the right of the series are stronger ligands; they form inner-orbital octahedral complexes after forcible pairing of electrons within the 3d level and hence are called low-spin ligands. However, it is known that "the spectrochemical series is essentially backwards from what it should be for a reasonable prediction based on the assumptions of crystal field theory." [ 3 ] This deviation from crystal field theory highlights the weakness of its assumption of purely ionic bonds between metal and ligand. The order of the spectrochemical series can be derived from the understanding that ligands are frequently classified by their donor or acceptor abilities. Some, like NH3, are σ-bond donors only, with no orbitals of appropriate symmetry for π-bonding interactions. Bonding of these ligands to metals is relatively simple, using only the σ bonds to create relatively weak interactions. Another example of a σ-bonding ligand is ethylenediamine; however, ethylenediamine has a stronger effect than ammonia, generating a larger ligand-field splitting Δ. Ligands that have occupied p orbitals are potentially π donors. These ligands tend to donate these electrons to the metal along with the σ-bonding electrons, exhibiting stronger metal–ligand interactions and an effective decrease of Δ. Halide ligands are primary examples of π-donor ligands, along with OH−. When ligands have vacant π* and d orbitals of suitable energy, there is the possibility of pi backbonding, and the ligands may be π acceptors. This addition to the bonding scheme increases Δ. Ligands such as CN− and CO do this very effectively. [ 4 ] Metal ions can also be arranged in order of increasing Δ; this order is largely independent of the identity of the ligand. [ 5 ] In general, it is not possible to say whether a given ligand will exert a strong field or a weak field on a given metal ion. However, when the metal ion is considered, two useful trends are observed: Δ increases with the oxidation number of the metal, and Δ increases down a group (3d < 4d < 5d).
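A highly simplified sketch of how the series is used in practice: a ligand's position is read off a standard textbook ordering (slightly extended beyond the partial list above), and from that a crude high-spin/low-spin prediction is made for octahedral 3d complexes. The cut-off chosen here is an assumption for illustration only; real predictions also depend on the metal, its oxidation state and the pairing energy.

# Simplified, illustrative classification; not a quantitative model.
SERIES = ["I-", "Br-", "Cl-", "F-", "OH-", "H2O", "NH3", "en", "PPh3", "CN-", "CO"]
STRONG_FIELD_FROM = SERIES.index("NH3")   # crude cut-off assumed for this sketch

def octahedral_spin(d_electrons: int, ligand: str) -> str:
    """High spin fills eg before pairing; low spin pairs in t2g first.
    Only d4-d7 octahedral complexes can differ between the two cases."""
    if not 4 <= d_electrons <= 7:
        return "same configuration for weak- and strong-field ligands"
    strong = SERIES.index(ligand) >= STRONG_FIELD_FROM
    return "low spin" if strong else "high spin"

print(octahedral_spin(6, "H2O"))   # e.g. a d6 aqua complex -> high spin
print(octahedral_spin(6, "CN-"))   # e.g. a d6 cyanide complex -> low spin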
https://en.wikipedia.org/wiki/Spectrochemical_series
Spectrochemistry is the application of spectroscopy in several fields of chemistry. It includes the analysis of spectra in chemical terms, the use of spectra to derive the structure of chemical compounds, and the qualitative and quantitative analysis of their presence in a sample. It is a method of chemical analysis that relies on the measurement of the wavelengths and intensity of electromagnetic radiation. [ 1 ] It was not until 1666 that Isaac Newton showed that white light from the Sun could be dispersed into a continuous series of colors; Newton introduced the word spectrum to describe this phenomenon. He used a small aperture to define a beam of light, a lens to collimate it, a glass prism to disperse it, and a screen to display the resulting spectrum. Newton's analysis of light was the beginning of the science of spectroscopy. Later it became clear that the Sun's radiation has components outside the visible portion of the spectrum: in 1800 William Herschel showed that the Sun's radiation extends into the infrared, and in 1801 Johann Wilhelm Ritter made a similar observation in the ultraviolet. Joseph von Fraunhofer extended Newton's discovery by observing that the Sun's spectrum, when sufficiently dispersed, is crossed by fine dark lines now known as Fraunhofer lines. Fraunhofer also developed the diffraction grating, which disperses light in much the same way as a glass prism but with some advantages: because the grating uses the interference of light to produce diffraction, it provides a direct measurement of the wavelengths of the diffracted beams. By extending Thomas Young's demonstration that a light beam passing through slits emerges as a pattern of light and dark fringes, Fraunhofer was able to measure the wavelengths of spectral lines directly. Despite his enormous achievements, however, Fraunhofer was unable to explain the origin of the spectral lines he observed. It was not until 33 years after his death that Gustav Kirchhoff established that each element and compound has a unique spectrum, and that by studying the spectrum of an unknown source one can determine its chemical composition; with these advances, spectroscopy became a truly scientific method for analyzing the structures of chemical compounds. By recognizing that each atom and molecule has its own spectrum, Kirchhoff and Robert Bunsen established spectroscopy as a scientific tool for probing atomic and molecular structure and founded the field of spectrochemical analysis for determining the composition of materials. [ 3 ] IR spectrum tables are commonly organized by frequency [ 4 ] or by compound class. [ 5 ] To use an IR spectrum table, first find the frequency or the compound in the first column, depending on which type of table is being used, then read off the corresponding values for absorption, appearance and other attributes. Absorption values are usually given in cm −1 . Note that not every frequency range has an associated compound class. Invasive ductal carcinoma (IDC) is one of the most common types of breast cancer, accounting for 8 out of 10 of all invasive breast cancers. According to the American Cancer Society, more than 180,000 women in the United States find out that they have breast cancer each year, and most are diagnosed with this specific type. [ 6 ] While early detection is essential to reducing the death rate, a breast tumour may already contain more than 10,000,000 cells by the time it can be observed by X-ray mammography.
However, the infrared-spectrum approach proposed by Szu et al. appears more promising, potentially detecting breast cancer cells several months ahead of a mammogram. Clinical tests were carried out with the approval of the Institutional Review Board of National Taiwan University Hospital: from August 2007 to June 2008, 35 patients aged 30–66 (average age 49) were enrolled in the project. The results established that a success rate of about 63% could be achieved with the cross-sectional data, and it was concluded that breast cancers may be detected more accurately by cross-referencing S 1 maps of multiple three-points. [ 7 ] Lignin in plant cells is a complex amorphous polymer biosynthesized from three aromatic alcohols, namely p-coumaryl, coniferyl and sinapyl alcohols. Lignin is a highly branched polymer and accounts for 15–30% by weight of lignocellulosic biomass (LCBM); its structure varies significantly with the type of LCBM, and its composition depends on the degradation process. [ 8 ] The biosynthesis consists mainly of radical coupling reactions and generates a particular lignin polymer in each plant species. Because of this complex structure, various molecular spectroscopic methods have been applied to resolve the aromatic units and the different interunit linkages in lignin from distinct plant species. [ 9 ]
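Returning to the IR correlation tables described earlier, the sketch below shows a minimal lookup. The table is a small, illustrative subset of typical textbook group frequencies, not a complete correlation chart, and the ranges are approximate.

# Minimal sketch of using an IR correlation table.
IR_TABLE = [
    # (low cm^-1, high cm^-1, appearance, assignment)  -- approximate values
    (3200, 3550, "strong, broad", "O-H stretch (alcohol)"),
    (2850, 3000, "medium",        "C-H stretch (alkane)"),
    (2210, 2260, "weak",          "C#N stretch (nitrile)"),
    (1700, 1750, "strong",        "C=O stretch (carbonyl)"),
    (1620, 1680, "medium",        "C=C stretch (alkene)"),
]

def assign(frequency_cm1: float):
    """Return the table rows whose range contains the observed frequency."""
    return [row for row in IR_TABLE if row[0] <= frequency_cm1 <= row[1]]

for band in (1715.0, 2950.0, 1900.0):
    print(band, "->", assign(band) or "no entry in this abbreviated table")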
https://en.wikipedia.org/wiki/Spectrochemistry
Spectroelectrochemistry (SEC) is a set of multi-response analytical techniques in which complementary chemical information (electrochemical and spectroscopic) is obtained in a single experiment. Spectroelectrochemistry provides a comprehensive view of the phenomena that take place in the electrode process. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The first spectroelectrochemical experiment was carried out by Theodore Kuwana, PhD, in 1964. [ 6 ] The main objective of spectroelectrochemical experiments is to obtain simultaneous, time-resolved and in-situ electrochemical and spectroscopic information on reactions taking place on the electrode surface. [ 1 ] The technique is based on studying the interaction of a beam of electromagnetic radiation with the compounds involved in these reactions. Changes in the optical and electrical signals make it possible to follow the evolution of the electrode process. The techniques on which spectroelectrochemistry is based are: Spectroelectrochemistry provides molecular, thermodynamic and kinetic information about reagents, products and/or intermediates involved in the electron transfer process. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] There are different spectroelectrochemical techniques based on the combination of spectroscopic and electrochemical techniques. Regarding electrochemistry, the most common techniques used are: The general classification of the spectroelectrochemical techniques is based on the spectroscopic technique chosen. Ultraviolet-visible (UV-Vis) absorption spectroelectrochemistry is a technique that studies the absorption of electromagnetic radiation in the UV-Vis regions of the spectrum, providing molecular information related to the electronic levels of molecules. [ 10 ] It provides qualitative as well as quantitative information. UV-Vis spectroelectrochemistry helps to characterize compounds and materials, and determines concentrations and parameters such as absorptivity coefficients, diffusion coefficients, formal potentials or electron transfer rates. [ 11 ] [ 12 ] Photoluminescence (PL) is a phenomenon in which some compounds, after absorbing specific electromagnetic radiation, relax to a lower energy state through the emission of photons. This spectroelectrochemical technique is limited to compounds with fluorescent or luminescent properties, and the experiments suffer strong interference from ambient light. [ 1 ] The technique provides structural information and quantitative information with very good detection limits. [ 8 ] Infrared spectroscopy is based on the fact that molecules absorb electromagnetic radiation at characteristic frequencies related to their vibrational structure. Infrared (IR) spectroelectrochemistry is a technique that allows the characterization of molecules based on the strength, stiffness and number of bonds present. It also detects the presence of compounds, determines the concentration of species during a reaction, the structure of compounds, the properties of the chemical bonds, etc. [ 10 ] Raman spectroelectrochemistry is based on the inelastic scattering, or Raman scattering, of monochromatic light when it strikes a molecule, providing information about the vibrational energy of that molecule. The Raman spectrum provides highly specific information about the structure and composition of the molecules, acting as a true fingerprint of them. [ 1 ] It has been extensively used to study single-wall carbon nanotubes [ 13 ] and graphene.
[ 14 ] X-ray spectroelectrochemistry is a technique that studies the interaction of high-energy radiation with matter during an electrode process. X-rays can give rise to absorption, emission or scattering phenomena, allowing both quantitative and qualitative analysis depending on the phenomenon taking place. [ 8 ] [ 9 ] [ 10 ] All these processes involve electronic transitions in the inner shells of the atoms involved. In particular, it is interesting to study the radiation, absorption and emission processes that take place during an electron transfer reaction; in these processes, the promotion or relaxation of an electron can occur between an outer shell and an inner shell of the atom. Nuclear magnetic resonance (NMR) is a technique used to obtain physical, chemical, electronic and structural information about molecules from the chemical shift of the resonance frequencies of nuclear spins in the sample. Its combination with electrochemical techniques can provide detailed and quantitative information about the functional groups, topology, dynamics and three-dimensional structure of molecules in solution during a charge transfer process. The area under an NMR peak is proportional to the number of nuclear spins that give rise to it, so peak integrals can be used to determine the composition quantitatively. Electron paramagnetic resonance (EPR) is a technique that allows the detection of free radicals formed in chemical or biological systems. In addition, it studies the symmetry and electronic distribution of paramagnetic ions. This is a highly specific technique because the magnetic parameters are characteristic of each ion or free radical. [ 15 ] The physical principles of this technique are analogous to those of NMR, but in the case of EPR electron spins are excited instead of nuclear spins, which is of interest in certain electrode reactions. The versatility of spectroelectrochemistry is increasing due to the possibility of using several electrochemical techniques in different spectral regions depending on the purpose of the study and the information of interest. [ 12 ] The main advantages of spectroelectrochemical techniques are: Due to the high versatility of the technique, the field of applications is considerably wide. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 16 ]
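As a hedged illustration of the kind of quantitative analysis that the UV-Vis spectroelectrochemistry described above allows, the sketch below combines the Beer-Lambert law with Faraday's law to follow the concentration of an electrogenerated chromophore during a potential step. The molar absorptivity, path length, cell volume and current trace are invented example values, not data from the cited sources.

import numpy as np

F = 96485.0          # Faraday constant, C/mol
n_electrons = 1      # electrons transferred per molecule (assumed)
epsilon = 1.2e4      # molar absorptivity of the product, L/(mol*cm) (assumed)
path_length = 0.05   # optical path of the thin-layer cell, cm (assumed)
volume = 2.0e-4      # solution volume probed, L (assumed)

# Example chronoamperometric current trace (A) sampled every 0.1 s (invented).
t = np.arange(0.0, 10.0, 0.1)
current = 5.0e-5 * np.exp(-t / 3.0)

# Faraday's law: moles of product generated = integrated charge / (nF).
charge = np.cumsum(current) * 0.1                         # C, rectangle rule
conc_from_charge = charge / (n_electrons * F * volume)    # mol/L

# Beer-Lambert law: the same concentration predicts the absorbance trace.
absorbance = epsilon * path_length * conc_from_charge

print(f"final concentration: {conc_from_charge[-1]:.2e} mol/L")
print(f"final absorbance   : {absorbance[-1]:.3f}")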
https://en.wikipedia.org/wiki/Spectroelectrochemistry
The spectroheliograph is an instrument used in astronomy which captures a photographic image of the Sun at a single wavelength of light, a monochromatic image. The wavelength is usually chosen to coincide with a spectral wavelength of one of the chemical elements present in the Sun. It was developed independently by George Ellery Hale and Henri-Alexandre Deslandres in the 1890s [ 1 ] and further refined in 1932 by Robert R. McMath to take motion pictures. The instrument comprises a prism or diffraction grating and a narrow slit that passes a single wavelength (a monochromator). The light is focused onto a photographic medium and the slit is moved across the disk of the Sun to form a complete image. It is now possible to make a filter that transmits a narrow band of wavelengths which produces a similar image, but spectroheliographs remain in use. [ 2 ]
https://en.wikipedia.org/wiki/Spectroheliograph
A spectrohelioscope is a type of solar telescope designed by George Ellery Hale in 1924 to allow the Sun to be viewed in a selected wavelength of light. The name comes from Latin- and Greek-based words: "Spectro," referring to the optical spectrum , "helio," referring to the Sun, and "scope," as in telescope. The basic spectrohelioscope is a complex machine that uses a spectroscope to scan the surface of the Sun. The image from the objective lens is focused on a narrow slit revealing only a thin portion of the Sun's surface. The light is then passed through a prism or diffraction grating to spread the light into a spectrum. The spectrum is then focused on another slit that allows only a narrow part of the spectrum (the desired wavelength of light for viewing) to pass. The light is finally focused on an eyepiece so the surface of the Sun can be seen. The view, however, would be only a narrow strip of the Sun's surface. The slits are moved in unison to scan across the whole surface of the Sun giving a full image. Independently nodding mirrors can be used instead of moving slits to produce the same scan: the first mirror selects a slice of the Sun, the second selects the desired wavelength. The spectroheliograph is a similar device, but images the Sun at a particular wavelength photographically and is still in use [ 1 ] in professional observatories.
https://en.wikipedia.org/wiki/Spectrohelioscope
Spectrophotometry is a branch of electromagnetic spectroscopy concerned with the quantitative measurement of the reflection or transmission properties of a material as a function of wavelength. [ 2 ] Spectrophotometry uses photometers , known as spectrophotometers, that can measure the intensity of a light beam at different wavelengths. Although spectrophotometry is most commonly applied to ultraviolet, visible , and infrared radiation, modern spectrophotometers can interrogate wide swaths of the electromagnetic spectrum , including x-ray , ultraviolet , visible , infrared , or microwave wavelengths. Spectrophotometry is a tool that hinges on the quantitative analysis of molecules depending on how much light is absorbed by colored compounds. Important features of spectrophotometers are spectral bandwidth (the range of colors it can transmit through the test sample), the percentage of sample transmission, the logarithmic range of sample absorption, and sometimes a percentage of reflectance measurement. A spectrophotometer is commonly used for the measurement of transmittance or reflectance of solutions, transparent or opaque solids, such as polished glass, or gases. Although many biochemicals are colored, as in, they absorb visible light and therefore can be measured by colorimetric procedures, even colorless biochemicals can often be converted to colored compounds suitable for chromogenic color-forming reactions to yield compounds suitable for colorimetric analysis. [ 3 ] : 65 However, they can also be designed to measure the diffusivity on any of the listed light ranges that usually cover around 200–2500 nm using different controls and calibrations . [ 2 ] Within these ranges of light, calibrations are needed on the machine using standards that vary in type depending on the wavelength of the photometric determination . [ 4 ] An example of an experiment in which spectrophotometry is used is the determination of the equilibrium constant of a solution. A certain chemical reaction within a solution may occur in a forward and reverse direction, where reactants form products and products break down into reactants. At some point, this chemical reaction will reach a point of balance called an equilibrium point. To determine the respective concentrations of reactants and products at this point, the light transmittance of the solution can be tested using spectrophotometry. The amount of light that passes through the solution is indicative of the concentration of certain chemicals that do not allow light to pass through. The absorption of light is due to the interaction of light with the electronic and vibrational modes of molecules. Each type of molecule has an individual set of energy levels associated with the makeup of its chemical bonds and nuclei and thus will absorb light of specific wavelengths, or energies, resulting in unique spectral properties. [ 5 ] This is based upon its specific and distinct makeup. The use of spectrophotometers spans various scientific fields, such as physics , materials science , chemistry , biochemistry , chemical engineering , and molecular biology . [ 6 ] They are widely used in many industries including semiconductors, laser and optical manufacturing, printing and forensic examination, as well as in laboratories for the study of chemical substances. Spectrophotometry is often used in measurements of enzyme activities, determinations of protein concentrations, determinations of enzymatic kinetic constants, and measurements of ligand binding reactions. 
[ 3 ] : 65 Ultimately, a spectrophotometer is able to determine, depending on the control or calibration, what substances are present in a target and exactly how much through calculations of observed wavelengths. In astronomy , the term spectrophotometry refers to the measurement of the spectrum of a celestial object in which the flux scale of the spectrum is calibrated as a function of wavelength , usually by comparison with an observation of a spectrophotometric standard star, and corrected for the absorption of light by the Earth's atmosphere. [ 7 ] Invented by Arnold O. Beckman in 1940 [ disputed – discuss ] , the spectrophotometer was created with the aid of his colleagues at his company National Technical Laboratories founded in 1935 which would become Beckman Instrument Company and ultimately Beckman Coulter . This would come as a solution to the previously created spectrophotometers which were unable to absorb the ultraviolet correctly. He would start with the invention of Model A where a glass prism was used to absorb the UV light. It would be found that this did not give satisfactory results, therefore in Model B, there was a shift from a glass to a quartz prism which allowed for better absorbance results. From there, Model C was born with an adjustment to the wavelength resolution which ended up having three units of it produced. The last and most popular model became Model D which is better recognized now as the DU spectrophotometer which contained the instrument case, hydrogen lamp with ultraviolet continuum, and a better monochromator. [ 8 ] It was produced from 1941 to 1976 where the price for it in 1941 was US$ 723 (far-UV accessories were an option at additional cost). In the words of Nobel chemistry laureate Bruce Merrifield , it was "probably the most important instrument ever developed towards the advancement of bioscience." [ 9 ] Once it became discontinued in 1976, [ 10 ] Hewlett-Packard created the first commercially available diode-array spectrophotometer in 1979 known as the HP 8450A. [ 11 ] Diode-array spectrophotometers differed from the original spectrophotometer created by Beckman because it was the first single-beam microprocessor-controlled spectrophotometer that scanned multiple wavelengths at a time in seconds. It irradiates the sample with polychromatic light which the sample absorbs depending on its properties. Then it is transmitted back by grating the photodiode array which detects the wavelength region of the spectrum. [ 12 ] Since then, the creation and implementation of spectrophotometry devices has increased immensely and has become one of the most innovative instruments of our time. There are two major classes of devices: single-beam and double-beam. A double-beam spectrophotometer [ 13 ] compares the light intensity between two light paths, one path containing a reference sample and the other the test sample. A single-beam spectrophotometer measures the relative light intensity of the beam before and after a test sample is inserted. Although comparison measurements from double-beam instruments are easier and more stable, single-beam instruments can have a larger dynamic range and are optically simpler and more compact. Additionally, some specialized instruments, such as spectrophotometers built onto microscopes or telescopes, are single-beam instruments due to practicality. Historically, spectrophotometers use a monochromator containing a diffraction grating to produce the analytical spectrum. The grating can either be movable or fixed. 
If a single detector, such as a photomultiplier tube or photodiode is used, the grating can be scanned stepwise (scanning spectrophotometer) so that the detector can measure the light intensity at each wavelength (which will correspond to each "step"). Arrays of detectors (array spectrophotometer), such as charge-coupled devices (CCD) or photodiode arrays (PDA) can also be used. In such systems, the grating is fixed and the intensity of each wavelength of light is measured by a different detector in the array. Additionally, most modern mid-infrared spectrophotometers use a Fourier transform technique to acquire the spectral information. This technique is called Fourier transform infrared spectroscopy . When making transmission measurements, the spectrophotometer quantitatively compares the fraction of light that passes through a reference solution and a test solution, then electronically compares the intensities of the two signals and computes the percentage of transmission of the sample compared to the reference standard. For reflectance measurements, the spectrophotometer quantitatively compares the fraction of light that reflects from the reference and test samples. Light from the source lamp is passed through a monochromator, which diffracts the light into a "rainbow" of wavelengths through a rotating prism and outputs narrow bandwidths of this diffracted spectrum through a mechanical slit on the output side of the monochromator. These bandwidths are transmitted through the test sample. Then the photon flux density (watts per meter squared usually) of the transmitted or reflected light is measured with a photodiode, CCD or other light sensor . The transmittance or reflectance value for each wavelength of the test sample is then compared with the transmission or reflectance values from the reference sample. Most instruments will apply a logarithmic function to the linear transmittance ratio to calculate the 'absorbency' of the sample, a value which is proportional to the 'concentration' of the chemical being measured. In short, the sequence of events in a scanning spectrophotometer is as follows: In an array spectrophotometer, the sequence is as follows: [ 14 ] Many older spectrophotometers must be calibrated by a procedure known as "zeroing", to balance the null current output of the two beams at the detector. The transmission of a reference substance is set as a baseline (datum) value, so the transmission of all other substances is recorded relative to the initial "zeroed" substance. The spectrophotometer then converts the transmission ratio into 'absorbency', the concentration of specific components of the test sample relative to the initial substance. [ 6 ] Some common types of spectrophotometers include the following: [ 15 ] Spectrophotometry is an important technique used in many biochemical experiments that involve DNA, RNA, and protein isolation, enzyme kinetics and biochemical analyses. [ 16 ] Since samples in these applications are not readily available in large quantities, they are especially suited to be analyzed in this non-destructive technique. In addition, precious sample can be saved by utilizing a micro-volume platform where as little as 1uL of sample is required for complete analyses. [ 17 ] A brief explanation of the procedure of spectrophotometry includes comparing the absorbency of a blank sample that does not contain a colored compound to a sample that contains a colored compound. 
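A minimal numerical sketch of that blank-versus-sample comparison, assuming simple single-beam readings: a dark reading, a "zeroed" reference (blank) reading and a sample reading are combined to give transmittance and absorbance. The detector counts below are invented example values; the coloring reactions themselves are described next.

import math

# Single-beam readings at one wavelength (invented detector counts).
dark_counts      = 120.0     # detector signal with the beam blocked
reference_counts = 50_000.0  # blank / "zeroed" reference solution
sample_counts    = 21_000.0  # test sample

# Transmittance relative to the reference, after dark subtraction.
transmittance = (sample_counts - dark_counts) / (reference_counts - dark_counts)

# Absorbance is the negative decadic logarithm of the transmittance.
absorbance = -math.log10(transmittance)

print(f"T = {100 * transmittance:.1f} %   A = {absorbance:.3f}")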
This coloring can be accomplished by either a dye such as Coomassie Brilliant Blue G-250 dye measured at 595 nm or by an enzymatic reaction as seen between β-galactosidase and ONPG (turns sample yellow) measured at 420  nm. [ 3 ] : 21–119 The spectrophotometer is used to measure colored compounds in the visible region of light (between 350 nm and 800 nm), [ 3 ] : 65 thus it can be used to find more information about the substance being studied. In biochemical experiments, a chemical and/or physical property is chosen and the procedure that is used is specific to that property to derive more information about the sample, such as the quantity, purity, enzyme activity, etc. Spectrophotometry can be used for a number of techniques such as determining optimal wavelength absorbance of samples, determining optimal pH for absorbance of samples, determining concentrations of unknown samples, and determining the pKa of various samples. [ 3 ] : 21–119 Spectrophotometry is also a helpful process for protein purification [ 18 ] and can also be used as a method to create optical assays of a compound. Spectrophotometric data can also be used in conjunction with the Beer–Lambert Equation, A = − log 10 ⁡ T = ϵ c l = O D {\textstyle A=-\log _{10}T=\epsilon cl=OD} , to determine various relationships between transmittance and concentration, and absorbance and concentration. [ 3 ] : 21–119 Because a spectrophotometer measures the wavelength of a compound through its color, a dye-binding substance can be added so that it can undergo a color change and be measured. [ 19 ] It is possible to know the concentrations of a two-component mixture using the absorption spectra of the standard solutions of each component. To do this, it is necessary to know the extinction coefficient of this mixture at two wavelengths and the extinction coefficients of solutions that contain the known weights of the two components. [ 20 ] In addition to the traditional Beer-Lamberts law model, cuvette based label free spectroscopy can be used, which add an optical filter in the pathways of the light, enabling the spectrophotometer to quantify concentration, size and refractive index of samples following the hands law. [ 21 ] Spectrophotometers have been developed and improved over decades and have been widely used among chemists. Additionally, Spectrophotometers are specialized to measure either UV or Visible light wavelength absorbance values. [ 3 ] : 21–119 It is considered to be a highly accurate instrument that is also very sensitive and therefore extremely precise, especially in determining color change. [ 22 ] This method is also convenient for use in laboratory experiments because it is an inexpensive and relatively simple process. Most spectrophotometers are used in the UV and visible regions of the spectrum, and some of these instruments also operate into the near- infrared region as well. The concentration of a protein can be estimated by measuring the OD at 280 nm due to the presence of tryptophan, tyrosine and phenylalanine. This method is not very accurate since the composition of proteins varies greatly and proteins with none of these amino acids do not have maximum absorption at 280 nm. Nucleic acid contamination can also interfere. This method requires a spectrophotometer capable of measuring in the UV region with quartz cuvettes. [ 3 ] : 135 Ultraviolet-visible (UV-vis) spectroscopy involves energy levels that excite electronic transitions. 
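Before going further into those electronic transitions, here is a hedged sketch of the two-component mixture analysis described above: it amounts to solving two simultaneous Beer-Lambert equations, one per wavelength. All extinction coefficients and absorbances below are invented example values.

import numpy as np

path_length = 1.0   # cm

# Molar extinction coefficients (L/(mol*cm)) of components X and Y at two
# wavelengths, taken from standard solutions (invented example values).
#                    component X  component Y
epsilons = np.array([[15000.0,     3000.0],    # at wavelength 1
                     [ 2000.0,    11000.0]])   # at wavelength 2

# Measured absorbances of the mixture at the same two wavelengths.
A_mixture = np.array([0.620, 0.410])

# Beer-Lambert for the mixture: A(lambda) = l * sum_i eps_i(lambda) * c_i,
# which is a 2x2 linear system for the two unknown concentrations.
concentrations = np.linalg.solve(path_length * epsilons, A_mixture)
print(f"c_X = {concentrations[0]:.2e} mol/L, c_Y = {concentrations[1]:.2e} mol/L")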
Absorption of UV-vis light excites molecules that are in ground-states to their excited-states. [ 5 ] Visible region 400–700 nm spectrophotometry is used extensively in colorimetry science. It is a known fact that it operates best at the range of 0.2–0.8 O.D. Ink manufacturers, printing companies, textiles vendors, and many more, need the data provided through colorimetry. They take readings in the region of every 5–20 nanometers along the visible region, and produce a spectral reflectance curve or a data stream for alternative presentations. These curves can be used to test a new batch of colorant to check if it makes a match to specifications, e.g., ISO printing standards. Traditional visible region spectrophotometers cannot detect if a colorant or the base material has fluorescence. This can make it difficult to manage color issues if for example one or more of the printing inks is fluorescent. Where a colorant contains fluorescence, a bi-spectral fluorescent spectrophotometer is used. There are two major setups for visual spectrum spectrophotometers, d/8 (spherical) and 0/45. The names are due to the geometry of the light source, observer and interior of the measurement chamber. Scientists use this instrument to measure the amount of compounds in a sample. If the compound is more concentrated more light will be absorbed by the sample; within small ranges, the Beer–Lambert law holds and the absorbance between samples vary with concentration linearly. In the case of printing measurements two alternative settings are commonly used- without/with uv filter to control better the effect of uv brighteners within the paper stock. Samples are usually prepared in cuvettes ; depending on the region of interest, they may be constructed of glass , plastic (visible spectrum region of interest), or quartz (Far UV spectrum region of interest). Some applications require small volume measurements which can be performed with micro-volume platforms. As described in the applications section, spectrophotometry can be used in both qualitative and quantitative analysis of DNA, RNA, and proteins. Qualitative analysis can be used and spectrophotometers are used to record spectra of compounds by scanning broad wavelength regions to determine the absorbance properties (the intensity of the color) of the compound at each wavelength. [ 5 ] One experiment that can demonstrate the various uses that visible spectrophotometry can have is the separation of β-galactosidase from a mixture of various proteins. Largely, spectrophotometry is best used to help quantify the amount of purification your sample has undergone relative to total protein concentration. By running an affinity chromatography, B-Galactosidase can be isolated and tested by reacting collected samples with Ortho-Nitrophenyl-β-galactoside (ONPG) and determining if the sample turns yellow. [ 3 ] : 21–119 Following this testing the sample at 420 nm for specific interaction with ONPG and at 595 for a Bradford Assay the amount of purification can be assessed quantitatively. [ 3 ] : 21–119 In addition to this spectrophotometry can be used in tandem with other techniques such as SDS-Page electrophoresis in order to purify and isolate various protein samples. Spectrophotometers designed for the infrared region are quite different because of the technical requirements of measurement in that region. 
One major factor is the type of photosensors that are available for different spectral regions, but infrared measurement is also challenging because virtually everything emits IR as thermal radiation, especially at wavelengths beyond about 5 μm. Another complication is that quite a few materials such as glass and plastic absorb infrared, making it incompatible as an optical medium. Ideal optical materials are salts , which do not absorb strongly. Samples for IR spectrophotometry may be smeared between two discs of potassium bromide or ground with potassium bromide and pressed into a pellet. Where aqueous solutions are to be measured, insoluble silver chloride is used to construct the cell. Spectroradiometers , which operate almost like the visible region spectrophotometers, are designed to measure the spectral density of illuminants. Applications may include evaluation and categorization of lighting for sales by the manufacturer, or for the customers to confirm the lamp they decided to purchase is within their specifications. Components:
https://en.wikipedia.org/wiki/Spectrophotometry
A spectroradiometer is a light measurement tool that is able to measure both the wavelength and amplitude of the light emitted from a light source. Spectrometers discriminate the wavelength based on the position the light hits at the detector array allowing the full spectrum to be obtained with a single acquisition. Most spectrometers have a base measurement of counts which is the un-calibrated reading and is thus impacted by the sensitivity of the detector to each wavelength. By applying a calibration , the spectrometer is then able to provide measurements of spectral irradiance , spectral radiance and/or spectral flux. This data is also then used with built in or PC software and numerous algorithms to provide readings or Irradiance (W/cm2), Illuminance (lux or fc), Radiance (W/sr), Luminance (cd), Flux (Lumens or Watts), Chromaticity, Color Temperature, Peak and Dominant Wavelength. Some more complex spectrometer software packages also allow calculation of PAR μmol/m 2 /s, Metamerism, and candela calculations based on distance and include features like 2- and 20-degree observer, baseline overlay comparisons, transmission and reflectance. Spectrometers are available in numerous packages and sizes covering many wavelength ranges. The effective wavelength (spectral) range of a spectrometer is determined not only by the grating dispersion ability but also depends on the detectors' sensitivity range. Limited by the semiconductor's band gap the silicon-based detector responds to 200-1100 nm while the InGaAs based detector is sensitive to 900-1700 nm (or out to 2500 nm with cooling). Lab/Research spectrometers often cover a broad spectral range from UV to NIR and require a PC. There are also IR Spectrometers that require higher power to run a cooling system. Many Spectrometers can be optimized for a specific range i.e. UV, or VIS and combined with a second system to allow more precise measurements, better resolution, and eliminate some of the more common errors found in broadband system such as stray light and lack of sensitivity. Portable devices are also available for numerous spectral ranges covering UV to NIR and offer many different package styles and sizes. Hand held systems with integrated displays typically have built in optics, and an onboard computer with pre-programmed software. Mini spectrometers are also able to be used hand held, or in the lab as they are powered and controlled by a PC and require a USB cable. Input optics may be incorporated or are commonly attached by a fiber optic light guide. There are also micro Spectrometers smaller than a quarter that can be integrated into a system, or used stand alone. The field of spectroradiometry concerns itself with the measurement of absolute radiometric quantities in narrow wavelength intervals. [ 1 ] It is useful to sample the spectrum with narrow bandwidth and wavelength increments because many sources have line structures [ 2 ] Most often in spectroradiometry, spectral irradiance is the desired measurement. In practice, the average spectral irradiance is measured, shown mathematically as the approximation: Where E {\displaystyle E} is the spectral irradiance, Φ {\displaystyle \Phi } is the radiant flux of the source ( SI unit: watt , W) within a wavelength interval Δ λ {\displaystyle \Delta \lambda } (SI unit: meter , m), incident on the surface area, A {\displaystyle A} (SI unit: square meter, m 2 ). The SI unit for spectral irradiance is W/m 3 . 
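Putting the approximation above into code (the definitions in the text fix it as E ≈ Φ / (A·Δλ)), the sketch below computes the average spectral irradiance for one wavelength bin and converts it to the practical submultiples discussed next. The flux, detector area and bin width are invented example values.

# Average spectral irradiance over one wavelength bin: E = Phi / (A * dlambda).
radiant_flux = 2.0e-6          # Phi, W received in this bin (invented value)
area         = 1.0e-4          # A, m^2 (a 1 cm^2 detector aperture)
bin_width    = 1.0e-9          # delta lambda, m (a 1 nm bin)

E_si = radiant_flux / (area * bin_width)        # W/m^3, i.e. W per m^2 per m

# The same value in the more practical submultiple, microwatt per cm^2 per nm:
# 1 W/m^3 = 1e6 uW / (1e4 cm^2 * 1e9 nm) = 1e-7 uW/(cm^2*nm).
E_practical = E_si * 1e-7

print(f"E = {E_si:.3e} W/m^3 = {E_practical:.3f} uW/(cm^2*nm)")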
However it is often more useful to measure area in terms of centimeters and wavelength in nanometers , thus submultiples of the SI units of spectral irradiance will be used, for example μW/cm 2 *nm [ 3 ] Spectral irradiance will vary from point to point on the surface in general. In practice, it is important note how radiant flux varies with direction, the size of the solid angle subtended by the source at each point on the surface, and the orientation of the surface. Given these considerations, it is often more prudent to use a more rigorous form of the equation to account for these dependencies [ 3 ] Note that the prefix "spectral" is to be understood as an abbreviation of the phrase "spectral concentration of" which is understood and defined by the CIE as the "quotient of the radiometric quantity taken over an infinitesimal range on either side of a given wavelength, by the range". [ 4 ] The spectral power distribution (SPD) of a source describes how much flux reaches the sensor over a particular wavelength and area. This effectively expresses the per-wavelength contribution to the radiometric quantity being measured. The SPD of a source is commonly shown as an SPD curve. SPD curves provide a visual representation of the color characteristics of a light source, showing the radiant flux emitted by the source at various wavelengths across the visible spectrum [ 5 ] It is also a metric by which we can evaluate a light source's ability to render colors, that is, whether a certain color stimulus can be properly rendered under a given illuminant . The quality of a given spectroradiometric system is a function of its electronics, optical components, software, power supply, and calibration. Under ideal laboratory conditions and with highly trained experts, it is possible to achieve small (a few tenths to a few percent) errors in measurements. However, in many practical situations, there is the likelihood of errors on the order of 10 percent [ 3 ] Several types of error are at play when taking physical measurements. The three basic types of error noted as the limiting factors of accuracy of measurement are random, systematic, and periodic errors [ 6 ] In addition to these generic sources of error, a few of the more specific reasons for error in spectroradiometry include: Gamma-scientific, a California-based manufacturer of light measurement devices, lists seven factors affecting the accuracy and performance of their spectroradiometers, due to either the system calibration, the software and power supply, the optics, or the measurement engine itself. [ 7 ] Stray light is unwanted wavelength radiation reaching the incorrect detector element. It generates erroneous electronic counts not related to designed spectral signal for the pixel or element of the detector array. It can come from light scatter and reflection of imperfect optical elements as well as higher order diffraction effects. The second order effect can be removed or at least dramatically reduced, by installing order sorting filters before the detector. A Si detector's sensitivity to visible and NIR is nearly an order of magnitude larger than that in the UV range. This means that the pixels at the UV spectral position respond to stray light in visible and NIR much more strongly than to their own designed spectral signal. Therefore, the stray light impacts in UV region are much more significant as compared to visible and NIR pixels. This situation gets worse the shorter the wavelength. 
When measuring broad band light with small fraction of UV signals, the stray light impact can sometimes be dominant in the UV range since the detector pixels are already struggling to get enough UV signals from the source. For this reason, calibration using a QTH standard lamp can have huge errors (more than 100%) below 350 nm and a deuterium standard lamp is required for more accurate calibration in this region. In fact, absolute light measurement in the UV region can have large errors even with the correct calibration when majority of the electronic counts in these pixels is result of the stray light (longer wavelength strikes instead of the actual UV light). There are numerous companies that offer calibration for spectrometers, but not all are equal. It is important to find a traceable, certified laboratory to perform calibration. The calibration certificate should state the light source used (ex: Halogen, Deuterium, Xenon, LED), and the uncertainty of the calibration for each band (UVC, UVB, VIS..), each wavelength in nm, or for the full spectrum measured. It should also list the confidence level for the calibration uncertainty. Like a camera, most spectrometers allow the user to select the exposure time and quantity of samples to be collected. Setting the integration time and the number of scans is an important step. Too long of an integration time can cause saturation. (In a camera photo this could appear as a large white spot, where as in a spectrometer it can appear as a dip, or cut off peak) Too short an integration time can generate noisy results (In a camera photo this would be a dark or blurry area, where as in a spectrometer this may appear are spiky or unstable readings). The exposure time is the time the light falls on the sensor during a measurement. Adjusting this parameter changes the overall sensitivity of the instrument, as changing the exposure time does for a camera. The minimum integration time varies by instrument with a minimum of .5 msec and a maximum of about 10 minutes per scan. A practical setting is in the range of 3 to 999 ms depending on the light intensity. The integration time should be adjusted for a signal which does not exceed the maximum counts (16-bit CCD has 65,536, 14-bit CCD has 16,384). Saturation occurs when the integration time is set too high. Typically, a peak signal of about 85% of the maximum is a good target and yields a good S/N ratio. (ex: 60K counts or 16K counts respectively) The number of scans indicates how many measurements will be averaged. Other things being equal, the Signal-to-Noise Ratio (SNR) of the collected spectra improves by the square root of the number N of scans averaged. For example, if 16 spectral scans are averaged, the SNR is improved by a factor of 4 over that of a single scan. S/N ratio is measured at the input light level which reaches the full scale of the spectrometer. It is the ratio of signal counts Cs (usually at full scale) to RMS (root mean square) noise at this light level. This noise includes the dark noise Nd, the shot noise Ns related to the counts generated by the input light and read out noise. This is the best S/N ratio one can get from the spectrometer for light measurements. The essential components of a spectroradiometric system are as follows: The front-end optics of a spectroradiometer includes the lenses, diffusers, and filters that modify the light as it first enters the system. For Radiance an optic with a narrow field of view is required. For total flux an integrating sphere is required. 
The essential components of a spectroradiometric system are as follows.

The front-end optics of a spectroradiometer include the lenses, diffusers, and filters that modify the light as it first enters the system. For radiance measurements, an optic with a narrow field of view is required; for total flux, an integrating sphere; and for irradiance, cosine-correcting optics. The material used for these elements determines what type of light can be measured. For example, to take UV measurements, quartz rather than glass lenses, optical fibers, Teflon diffusers, and barium-sulphate-coated integrating spheres are often used to ensure accurate UV measurement. [8]

To perform spectral analysis of a source, monochromatic light at every wavelength would be needed to create a spectral response of the illuminant. A monochromator is used to sample wavelengths from the source and essentially produce a monochromatic signal. It is effectively a variable filter, selectively separating and transmitting a specific wavelength or band of wavelengths from the full spectrum of measured light and excluding any light that falls outside that band. [9] A typical monochromator achieves this through the use of entrance and exit slits, collimating and focusing optics, and a wavelength-dispersing element such as a diffraction grating or prism. [6] Modern monochromators are manufactured with diffraction gratings, and diffraction gratings are used almost exclusively in spectroradiometric applications. Diffraction gratings are preferable because of their versatility, low attenuation, extensive wavelength range, lower cost, and more constant dispersion. [9] Single or double monochromators can be used depending on the application, with double monochromators generally providing more precision because of the additional dispersion and baffling between gratings. [8]

The detector used in a spectroradiometer is determined by the wavelength range over which the light is being measured, as well as the required dynamic range and sensitivity of the measurements. Basic spectroradiometer detector technologies generally fall into one of three groups: photoemissive detectors (e.g. photomultiplier tubes), semiconductor devices (e.g. silicon), or thermal detectors (e.g. thermopiles). [10] The spectral response of a given detector is determined by its core materials. For example, photocathodes found in photomultiplier tubes can be manufactured from certain elements to be solar-blind (sensitive to UV and non-responsive to light in the visible or IR). [11]

CCD (charge-coupled device) arrays are typically one-dimensional (linear) or two-dimensional (area) arrays of thousands or millions of individual detector elements (also known as pixels). Together with CMOS sensors, they provide silicon- or InGaAs-based multichannel array detectors capable of measuring UV, visible, and near-infrared light. A CMOS (complementary metal-oxide-semiconductor) sensor differs from a CCD in that it adds an amplifier to each photodiode. This is called an active pixel sensor because the amplifier is part of the pixel. Transistor switches connect each photodiode to the intrapixel amplifier at the time of readout.

The logging system is often simply a personal computer. In initial signal processing, the signal often needs to be amplified and converted for use with the control system. The lines of communication between the monochromator, the detector output, and the computer should be optimized to ensure the desired metrics and features are available. [8] The commercially available software included with spectroradiometric systems often comes with useful reference functions for further calculation of measurements, such as the CIE color matching functions and the V(λ) curve. [12]
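As an example of how such reference functions are applied, the sketch below converts a measured spectral irradiance into illuminance by weighting it with the photopic luminosity function V(λ) and integrating over wavelength. The coarse 50 nm grid, the approximate V(λ) values, and the measured spectrum are illustrative only; real software would use tabulated CIE data at 1 nm or 5 nm steps.

```python
# Illuminance (lux) from spectral irradiance E_lambda in W m^-2 nm^-1:
#   E_v = 683 lm/W * integral over lambda of V(lambda) * E_lambda(lambda) d(lambda)
# The integral is approximated here with a rectangle rule on a coarse grid.

wavelengths_nm = [400, 450, 500, 550, 600, 650, 700]                  # sample grid for the arrays below
v_lambda = [0.0004, 0.038, 0.323, 0.995, 0.631, 0.107, 0.004]         # approximate photopic V(lambda)
e_lambda = [0.02, 0.03, 0.05, 0.06, 0.05, 0.03, 0.01]                 # hypothetical measurement

step_nm = 50
illuminance_lux = 683 * step_nm * sum(v * e for v, e in zip(v_lambda, e_lambda))
print(f"{illuminance_lux:.0f} lx")
```

The same pattern, with the three CIE color matching functions in place of V(λ), yields the XYZ tristimulus values of the source.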
Spectroradiometers are used in many applications and can be made to meet a wide variety of specifications. At the low-cost end of the range, it is possible to build a basic optical spectrometer using an optical disc as a diffraction grating and a basic webcam, using a CFL lamp to calibrate the wavelengths. [15] A calibration using a source of known spectrum can then turn the spectrometer into a spectroradiometer by interpreting the brightness of the photograph's pixels. [16] A DIY build is affected by some extra error sources in the photo-to-value conversion: photographic noise (requiring dark frame subtraction) and non-linearity in the CCD-to-photograph conversion (possibly addressed by using a raw image format). [17]
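A minimal sketch of the processing chain for such a DIY build might look like the following: subtract a dark frame, fit a linear pixel-to-wavelength mapping from two known mercury emission lines of the CFL lamp, and divide by a sensitivity curve derived from a source of known spectrum. The file names, the pixel positions of the reference lines, and the assumption of linear dispersion are all hypothetical choices for illustration, not prescriptions from the cited sources.

```python
import numpy as np

def column_profile(frame):
    """Collapse a (rows, cols) grayscale frame into one intensity value per column."""
    return frame.mean(axis=0)

# Dark-frame subtraction removes the photographic noise floor.
light = np.load("cfl_frame.npy").astype(float)    # hypothetical captured frame
dark = np.load("dark_frame.npy").astype(float)    # same exposure, lens capped
signal = column_profile(light) - column_profile(dark)

# Wavelength calibration: assume linear dispersion, fitted to two known
# mercury lines of the CFL lamp (the pixel positions here are made up).
lam1, px1 = 435.8, 198.0    # blue Hg line (nm) and its pixel column
lam2, px2 = 546.1, 312.0    # green Hg line (nm) and its pixel column
nm_per_px = (lam2 - lam1) / (px2 - px1)
wavelength = lam1 + (np.arange(signal.size) - px1) * nm_per_px

# Relative radiometric calibration against a source of known spectrum:
# counts from the known source divided by its published spectrum give a
# per-pixel sensitivity, which is divided out of subsequent measurements.
reference_counts = np.load("halogen_counts.npy")       # hypothetical, dark-subtracted
reference_spectrum = np.load("halogen_spectrum.npy")   # published values on the same pixel grid
sensitivity = reference_counts / reference_spectrum
calibrated = signal / sensitivity
```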
https://en.wikipedia.org/wiki/Spectroradiometer