A computed tomography scan (CT scan), formerly called computed axial tomography scan (CAT scan), is a medical imaging technique used to obtain detailed internal images of the body. The personnel who perform CT scans are called radiographers or radiology technologists. CT scanners use a rotating X-ray tube and a row of detectors placed in a gantry to measure X-ray attenuation by different tissues inside the body. The multiple X-ray measurements taken from different angles are then processed on a computer using tomographic reconstruction algorithms to produce tomographic (cross-sectional) images (virtual "slices") of a body. CT scans can be used in patients with metallic implants or pacemakers, for whom magnetic resonance imaging (MRI) is contraindicated. Since its development in the 1970s, CT scanning has proven to be a versatile imaging technique. While CT is most prominently used in medical diagnosis, it can also be used to form images of non-living objects. The 1979 Nobel Prize in Physiology or Medicine was awarded jointly to South African-American physicist Allan MacLeod Cormack and British electrical engineer Godfrey Hounsfield "for the development of computer-assisted tomography". == Types == On the basis of image acquisition and procedures, various types of scanners are available on the market. === Sequential CT === Sequential CT, also known as step-and-shoot CT, is a scanning method in which the CT table moves stepwise. The table increments to a particular location and then stops, which is followed by rotation of the X-ray tube and acquisition of a slice. The table then increments again, and another slice is taken. Because the table movement must stop while each slice is taken, the total scanning time is increased. === Spiral CT === Spinning tube, commonly called spiral CT or helical CT, is an imaging technique in which an entire X-ray tube is spun around the central axis of the area being scanned. These are the dominant type of scanner on the market because they have been manufactured longer and offer a lower cost of production and purchase. The main limitation of this type of CT is the bulk and inertia of the equipment (the X-ray tube assembly and detector array on the opposite side of the circle), which limits the speed at which the equipment can spin. Some designs use two X-ray sources and detector arrays offset by an angle as a technique to improve temporal resolution. === Electron beam tomography === Electron beam tomography (EBT) is a specific form of CT in which a large enough X-ray tube is constructed so that only the path of the electrons, travelling between the cathode and anode of the X-ray tube, is swept using deflection coils. This type has a major advantage in that sweep speeds can be much faster, allowing for less blurry imaging of moving structures, such as the heart and arteries. Fewer scanners of this design have been produced compared with spinning tube types, mainly due to the higher cost associated with building a much larger X-ray tube and detector array, and the limited anatomical coverage. === Dual energy CT === Dual energy CT, also known as spectral CT, is an advancement of computed tomography in which two energies are used to create two sets of data. A dual energy CT may employ dual-source, single-source with dual detector layer, or single-source with energy-switching methods to obtain two different sets of data. Dual source CT is an advanced scanner with two X-ray tube and detector systems, unlike conventional single-tube systems. 
These two detector systems are mounted on a single gantry at 90° in the same plane. Dual source CT scanners allow fast scanning with higher temporal resolution by acquiring a full CT slice in only half a rotation. Fast imaging reduces motion blurring at high heart rates and potentially allows for shorter breath-hold times. This is particularly useful for ill patients who have difficulty holding their breath or who are unable to take heart-rate-lowering medication. Single source with energy switching is another mode of dual energy CT in which a single tube is operated at two different energies by switching between them rapidly. === CT perfusion imaging === CT perfusion imaging is a specific form of CT to assess flow through blood vessels whilst injecting a contrast agent. Blood flow, blood transit time, and organ blood volume can all be calculated with reasonable sensitivity and specificity. This type of CT may be used on the heart, although sensitivity and specificity for detecting abnormalities are still lower than for other forms of CT. It may also be used on the brain, where CT perfusion imaging can often detect poor brain perfusion well before it is detected using a conventional spiral CT scan, making it better suited to stroke diagnosis than other CT types. === PET CT === Positron emission tomography–computed tomography is a hybrid CT modality which combines, in a single gantry, a positron emission tomography (PET) scanner and an X-ray computed tomography (CT) scanner, to acquire sequential images from both devices in the same session, which are combined into a single superposed (co-registered) image. Thus, functional imaging obtained by PET, which depicts the spatial distribution of metabolic or biochemical activity in the body, can be more precisely aligned or correlated with anatomic imaging obtained by CT scanning. PET-CT gives both anatomical and functional details of an organ under examination and is helpful in detecting different types of cancer. == Medical use == Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement conventional X-ray imaging and medical ultrasonography. It has more recently been used for preventive medicine or screening for disease, for example, CT colonography for people with a high risk of colon cancer, or full-motion heart scans for people with a high risk of heart disease. Several institutions offer full-body scans for the general population, although this practice goes against the advice and official position of many professional organizations in the field, primarily due to the radiation dose applied. The use of CT scans has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007 and more than 80 million in 2015. === Head === CT scanning of the head is typically used to detect infarction (stroke), tumors, calcifications, haemorrhage, and bone trauma. Of the above, hypodense (dark) structures can indicate edema and infarction, hyperdense (bright) structures indicate calcifications and haemorrhage, and bone trauma can be seen as disjunction in bone windows. Tumors can be detected by the swelling and anatomical distortion they cause, or by surrounding edema. CT scanning of the head is also used in CT-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer. 
=== Neck === Contrast CT is generally the initial study of choice for neck masses in adults. CT of the thyroid plays an important role in the evaluation of thyroid cancer. CT scan often incidentally finds thyroid abnormalities, and so is often the preferred investigation modality for thyroid abnormalities. === Lungs === A CT scan can be used for detecting both acute and chronic changes in the lung parenchyma, the tissue of the lungs. It is particularly relevant here because normal two-dimensional X-rays do not show such defects. A variety of techniques are used, depending on the suspected abnormality. For evaluation of chronic interstitial processes such as emphysema, and fibrosis, thin sections with high spatial frequency reconstructions are used; often scans are performed both on inspiration and expiration. This special technique is called high resolution CT that produces a sampling of the lung, and not continuous images. Bronchial wall thickening can be seen on lung CTs and generally (but not always) implies inflammation of the bronchi. An incidentally found nodule in the absence of symptoms (sometimes referred to as an incidentaloma) may raise concerns that it might represent a tumor, either benign or malignant. Perhaps persuaded by fear, patients and doctors sometimes agree to an intensive schedule of CT scans, sometimes up to every three months and beyond the recommended guidelines, in an attempt to do surveillance on the nodules. However, established guidelines advise that patients without a prior history of cancer and whose solid nodules have not grown over a two-year period are unlikely to have any malignant cancer. For this reason, and because no research provides supporting evidence that intensive surveillance gives better outcomes, and because of risks associated with having CT scans, patients should not receive CT screening in excess of those recommended by established guidelines. === Angiography === Computed tomography angiography (CTA) is a type of contrast CT to visualize the arteries and veins throughout the body. This ranges from arteries serving the brain to those bringing blood to the lungs, kidneys, arms and legs. An example of this type of exam is CT pulmonary angiogram (CTPA) used to diagnose pulmonary embolism (PE). It employs computed tomography and an iodine-based contrast agent to obtain an image of the pulmonary arteries. CT scans can reduce the risk of angiography by providing clinicians with more information about the positioning and number of clots prior to the procedure. === Cardiac === A CT scan of the heart is performed to gain knowledge about cardiac or coronary anatomy. Traditionally, cardiac CT scans are used to detect, diagnose, or follow up coronary artery disease. More recently CT has played a key role in the fast-evolving field of transcatheter structural heart interventions, more specifically in the transcatheter repair and replacement of heart valves. The main forms of cardiac CT scanning are: Coronary CT angiography (CCTA): the use of CT to assess the coronary arteries of the heart. The subject receives an intravenous injection of radiocontrast, and then the heart is scanned using a high-speed CT scanner, allowing radiologists to assess the extent of occlusion in the coronary arteries, usually to diagnose coronary artery disease. Coronary CT calcium scan: also used for the assessment of severity of coronary artery disease. 
Specifically, it looks for calcium deposits in the coronary arteries that can narrow arteries and increase the risk of a heart attack. A typical coronary CT calcium scan is done without the use of radiocontrast, but it can also be derived from contrast-enhanced images. To better visualize the anatomy, post-processing of the images is common. Most common are multiplanar reconstructions (MPR) and volume rendering. For more complex anatomies and procedures, such as heart valve interventions, a true 3D reconstruction or a 3D print is created based on these CT images to gain a deeper understanding. === Abdomen and pelvis === CT is an accurate technique for diagnosis of abdominal diseases like Crohn's disease and gastrointestinal bleeding, and for the diagnosis and staging of cancer, as well as follow-up after cancer treatment to assess response. It is commonly used to investigate acute abdominal pain. Non-contrast-enhanced CT scans are the gold standard for diagnosing kidney stone disease. They allow clinicians to estimate the size, volume, and density of stones, helping to guide further treatment, with size being especially important in predicting the time to spontaneous passage of a stone. === Axial skeleton and extremities === For the axial skeleton and extremities, CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the area of interest in multiple planes. Fractures, ligamentous injuries, and dislocations can easily be recognized with a 0.2 mm resolution. With modern dual-energy CT scanners, new areas of use have been established, such as aiding in the diagnosis of gout. === Biomechanical use === CT is used in biomechanics to quickly reveal the geometry, anatomy, density and elastic moduli of biological tissues. == Other uses == === Industrial use === Industrial CT scanning (industrial computed tomography) is a process which uses X-ray equipment to produce 3D representations of components both externally and internally. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for CT scanning have been flaw detection, failure analysis, metrology, assembly analysis, image-based finite element methods and reverse engineering applications. CT scanning is also employed in the imaging and conservation of museum artifacts. === Aviation security === CT scanning has also found an application in transport security (predominantly airport security), where it is currently used in a materials analysis context for explosives detection (the CTX explosive-detection device) and is also under consideration for automated baggage/parcel security scanning using computer vision based object recognition algorithms that target the detection of specific threat items based on 3D appearance (e.g. guns, knives, liquid containers). Its use in airport security was pioneered at Shannon Airport in March 2022, ending that airport's ban on liquids over 100 ml; Heathrow Airport plans a full roll-out of the technology on 1 December 2022, and the TSA has spent $781.2 million on an order for over 1,000 scanners, ready to go live in the summer. === Geological use === X-ray CT is used in geological studies to quickly reveal materials inside a drill core. Dense minerals such as pyrite and barite appear brighter, and less dense components such as clay appear dull, in CT images. === Paleontological use === Traditional methods of studying fossils are often destructive, such as the use of thin sections and physical preparation. 
X-ray CT is used in paleontology to non-destructively visualize fossils in 3D. This has many advantages. For example, fragile structures that might never otherwise be accessible can be examined. In addition, one can freely move around models of fossils in virtual 3D space to inspect them without damaging the fossil. === Cultural heritage use === X-ray CT and micro-CT can also be used for the conservation and preservation of objects of cultural heritage. For many fragile objects, direct research and observation can be damaging and can degrade the object over time. Using CT scans, conservators and researchers are able to determine the material composition of the objects they are exploring, such as the position of ink along the layers of a scroll, without any additional harm. These scans have been optimal for research focused on the workings of the Antikythera mechanism or the text hidden inside the charred outer layers of the En-Gedi Scroll. However, they are not optimal for every object subject to these kinds of research questions, as there are certain artifacts, like the Herculaneum papyri, in which the material composition has very little variation along the inside of the object. After scanning these objects, computational methods can be employed to examine their insides, as was the case with the virtual unwrapping of the En-Gedi Scroll and the Herculaneum papyri. Micro-CT has also proved useful for analyzing more recent artifacts such as still-sealed historic correspondence that employed the technique of letterlocking (complex folding and cuts), which provided a "tamper-evident locking mechanism". Further examples of use cases in archaeology are imaging the contents of sarcophagi or ceramics. Recently, CWI in Amsterdam has collaborated with the Rijksmuseum to investigate the internal details of art objects in a framework called IntACT. === Microorganism research === Different types of fungus can degrade wood to different degrees. Using three-dimensional X-ray CT with sub-micron resolution, one Belgian research group showed that fungi can penetrate micropores of 0.6 μm under certain conditions. === Timber sawmill === Sawmills use industrial CT scanners to detect round defects, for instance knots, to improve the total value of timber production. Most sawmills are planning to incorporate this robust detection tool to improve productivity in the long run; however, the initial investment cost is high. == Interpretation of results == === Presentation === The result of a CT scan is a volume of voxels, which may be presented to a human observer by various methods, which broadly fit into the following categories: slices (of varying thickness), projections, and volume rendering (VR). Thin slice is generally regarded as planes representing a thickness of less than 3 mm, while thick slice is generally regarded as planes representing a thickness between 3 mm and 5 mm. Projections include maximum intensity projection and average intensity projection. Technically, all volume renderings become projections when viewed on a 2-dimensional display, making the distinction between projections and volume renderings somewhat vague. Volume rendering models typically combine, for example, coloring and shading in order to create realistic and observable representations. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at the patient from the feet. 
Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. ==== Grayscale ==== Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while iron or steel can completely block the X-ray beam and is, therefore, responsible for well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. ==== Windowing ==== CT data sets have a very high dynamic range which must be reduced for display or printing. This is typically done via a process of "windowing", which maps a range (the "window") of pixel values to a grayscale ramp. For example, CT images of the brain are commonly viewed with a window extending from 0 HU to 80 HU. Pixel values of 0 and lower are displayed as black; values of 80 and higher are displayed as white; values within the window are displayed as a gray intensity proportional to position within the window. The window used for display must be matched to the X-ray density of the object of interest, in order to optimize the visible detail. Window width and window level parameters are used to control the windowing of a scan. ==== Multiplanar reconstruction and projections ==== Multiplanar reconstruction (MPR) is the process of converting data from one anatomical plane (usually transverse) to other planes. It can be used for thin slices as well as projections. Multiplanar reconstruction is possible because present CT scanners provide almost isotropic resolution. MPR is used in almost every scan. The spine is frequently examined with it. An image of the spine in the axial plane can only show one vertebral bone at a time and cannot show its relation to the other vertebral bones. By reformatting the data in other planes, visualization of the relative position can be achieved in the sagittal and coronal planes. New software allows the reconstruction of data in non-orthogonal (oblique) planes, which helps in the visualization of organs which are not in orthogonal planes. Oblique reconstruction is better suited for visualization of the anatomical structure of the bronchi, as they do not lie orthogonal to the direction of the scan. Curved-plane reconstruction (or curved planar reformation, CPR) is performed mainly for the evaluation of vessels. This type of reconstruction helps to straighten the bends in a vessel, thereby helping to visualize a whole vessel in a single image or in multiple images. After a vessel has been "straightened", measurements such as cross-sectional area and length can be made. 
This is helpful in the preoperative assessment of a surgical procedure. For 2D projections used in radiation therapy for quality assurance and planning of external beam radiotherapy, including digitally reconstructed radiographs, see Beam's eye view. ==== Volume rendering ==== A threshold value of radiodensity is set by the operator (e.g., a level that corresponds to bone). With the help of edge detection image processing algorithms, a 3D model can be constructed from the initial data and displayed on screen. Various thresholds can be used to obtain multiple models; each anatomical component, such as muscle, bone and cartilage, can be differentiated on the basis of the different colours given to them. However, this mode of operation cannot show interior structures. Surface rendering is a limited technique as it displays only the surfaces that meet a particular threshold density and which are towards the viewer. In volume rendering, however, transparency, colours and shading are used, which makes it easy to present a volume in a single image. For example, pelvic bones could be displayed as semi-transparent, so that, even when viewing at an oblique angle, one part of the image does not hide another. === Image quality === ==== Dose versus image quality ==== An important issue within radiology today is how to reduce the radiation dose during CT examinations without compromising the image quality. In general, higher radiation doses result in higher-resolution images, while lower doses lead to increased image noise and unsharp images. However, increased dosage raises the risk of adverse side effects, including radiation-induced cancer – a four-phase abdominal CT gives the same radiation dose as 300 chest X-rays. Several methods exist that can reduce the exposure to ionizing radiation during a CT scan. New software technology can significantly reduce the required radiation dose. New iterative tomographic reconstruction algorithms (e.g., iterative Sparse Asymptotic Minimum Variance) could offer super-resolution without requiring a higher radiation dose. The examination can also be individualized, with the radiation dose adjusted to the body type and the body organ examined; different body types and organs require different amounts of radiation. Higher resolution is not always needed, such as in the detection of small pulmonary masses. ==== Artifacts ==== Although images produced by CT are generally faithful representations of the scanned volume, the technique is susceptible to a number of artifacts, such as the following: Streak artifact Streaks are often seen around materials that block most X-rays, such as metal or bone. Numerous factors contribute to these streaks: undersampling, photon starvation, motion, beam hardening, and Compton scatter. This type of artifact commonly occurs in the posterior fossa of the brain, or if there are metal implants. The streaks can be reduced using newer reconstruction techniques. Approaches such as metal artifact reduction (MAR) can also reduce this artifact. MAR techniques include spectral imaging, where CT images are taken with photons of different energy levels, and then synthesized into monochromatic images with special software such as GSI (Gemstone Spectral Imaging). Partial volume effect This appears as "blurring" of edges. It is due to the scanner being unable to differentiate between a small amount of high-density material (e.g., bone) and a larger amount of lower density (e.g., cartilage). 
The reconstruction assumes that the X-ray attenuation within each voxel is homogeneous; this may not be the case at sharp edges. This is most commonly seen in the z-direction (craniocaudal direction), due to the conventional use of highly anisotropic voxels, which have a much lower out-of-plane resolution, than in-plane resolution. This can be partially overcome by scanning using thinner slices, or an isotropic acquisition on a modern scanner. Ring artifact Probably the most common mechanical artifact, the image of one or many "rings" appears within an image. They are usually caused by the variations in the response from individual elements in a two dimensional X-ray detector due to defect or miscalibration. Ring artifacts can largely be reduced by intensity normalization, also referred to as flat field correction. Remaining rings can be suppressed by a transformation to polar space, where they become linear stripes. A comparative evaluation of ring artefact reduction on X-ray tomography images showed that the method of Sijbers and Postnov can effectively suppress ring artefacts. Noise This appears as grain on the image and is caused by a low signal to noise ratio. This occurs more commonly when a thin slice thickness is used. It can also occur when the power supplied to the X-ray tube is insufficient to penetrate the anatomy. Windmill Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with filters or a reduction in pitch. Beam hardening This can give a "cupped appearance" when grayscale is visualized as height. It occurs because conventional sources, like X-ray tubes emit a polychromatic spectrum. Photons of higher photon energy levels are typically attenuated less. Because of this, the mean energy of the spectrum increases when passing the object, often described as getting "harder". This leads to an effect increasingly underestimating material thickness, if not corrected. Many algorithms exist to correct for this artifact. They can be divided into mono- and multi-material methods. == Advantages == CT scanning has several advantages over traditional two-dimensional medical radiography. First, CT eliminates the superimposition of images of structures outside the area of interest. Second, CT scans have greater image resolution, enabling examination of finer details. CT can distinguish between tissues that differ in radiographic density by 1% or less. Third, CT scanning enables multiplanar reformatted imaging: scan data can be visualized in the transverse (or axial), coronal, or sagittal plane, depending on the diagnostic task. The improved resolution of CT has permitted the development of new investigations. For example, CT angiography avoids the invasive insertion of a catheter. CT scanning can perform a virtual colonoscopy with greater accuracy and less discomfort for the patient than a traditional colonoscopy. Virtual colonography is far more accurate than a barium enema for detection of tumors and uses a lower radiation dose. CT is a moderate-to-high radiation diagnostic technique. The radiation dose for a particular examination depends on multiple factors: volume scanned, patient build, number and type of scan protocol, and desired resolution and image quality. Two helical CT scanning parameters, tube current and pitch, can be adjusted easily and have a profound effect on radiation. 
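As a rough illustration of how these two parameters scale dose: to first order, the volume CT dose index in helical scanning is proportional to the tube current–time product and inversely proportional to pitch. The sketch below encodes only that first-order relationship; the baseline dose and the example numbers are illustrative assumptions, not values from this article.

```python
def scaled_ctdi_vol(ctdi_vol_ref_mgy, mas_ref, mas_new, pitch_ref, pitch_new):
    """First-order scaling of the volume CT dose index (CTDIvol).

    Assumes dose is proportional to the tube current-time product (mAs)
    and inversely proportional to helical pitch; the reference values
    here are illustrative, not taken from any particular scanner.
    """
    return ctdi_vol_ref_mgy * (mas_new / mas_ref) * (pitch_ref / pitch_new)

# Example: halving the tube current and raising the pitch from 1.0 to 1.4
# reduces an assumed 12 mGy baseline to roughly 4.3 mGy.
print(scaled_ctdi_vol(12.0, mas_ref=200, mas_new=100, pitch_ref=1.0, pitch_new=1.4))
```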
CT scanning is more accurate than two-dimensional radiographs in evaluating anterior interbody fusion, although it may still over-read the extent of fusion. == Adverse effects == === Cancer === The radiation used in CT scans can damage body cells, including DNA molecules, which can lead to radiation-induced cancer. The radiation doses received from CT scans are variable. Compared to the lowest-dose X-ray techniques, CT scans can have a 100 to 1,000 times higher dose than conventional X-rays. However, a lumbar spine X-ray has a similar dose to a head CT. Articles in the media often exaggerate the relative dose of CT by comparing the lowest-dose X-ray techniques (chest X-ray) with the highest-dose CT techniques. In general, a routine abdominal CT has a radiation dose similar to three years of average background radiation. Large-scale population-based studies have consistently demonstrated that low-dose radiation from CT scans has impacts on cancer incidence in a variety of cancers. For example, in a large population-based Australian cohort, it was found that up to 3.7% of brain cancers were caused by CT scan radiation. Some experts project that in the future, between three and five percent of all cancers will result from medical imaging. An Australian study of 10.9 million people reported that the increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. In this group, one in every 1,800 CT scans was followed by an excess cancer. If the lifetime risk of developing cancer is 40%, then the absolute risk rises to 40.05% after a CT. The risks of CT scan radiation are especially important in patients undergoing recurrent CT scans within a short time span of one to five years. Some experts note that CT scans are known to be "overused," and "there is distressingly little evidence of better health outcomes associated with the current high rate of scans." On the other hand, a recent paper analyzing the data of patients who received high cumulative doses showed a high degree of appropriate use. This creates an important issue of cancer risk for these patients. Moreover, a highly significant finding that was previously unreported is that some patients received a dose of more than 100 mSv from CT scans in a single day, which counters existing criticisms some investigators may have regarding the effects of protracted versus acute exposure. There are contrarian views, and the debate is ongoing. Some studies have shown that publications indicating an increased risk of cancer from typical doses of body CT scans are plagued with serious methodological limitations and several highly improbable results, concluding that no evidence indicates such low doses cause any long-term harm. One study estimated that as many as 0.4% of cancers in the United States resulted from CT scans, and that this may have increased to as much as 1.5 to 2% based on the rate of CT use in 2007. Others dispute this estimate, as there is no consensus that the low levels of radiation used in CT scans cause damage. Lower radiation doses are used in many cases, such as in the investigation of renal colic. A person's age plays a significant role in the subsequent risk of cancer. The estimated lifetime cancer mortality risk from an abdominal CT in a one-year-old is 0.1%, or 1 in 1,000 scans. The risk for someone who is 40 years old is half that for someone who is 20 years old, with substantially less risk in the elderly. 
The International Commission on Radiological Protection estimates that exposing a fetus to 10 mGy (a unit of radiation exposure) increases the rate of cancer before 20 years of age from 0.03% to 0.04% (for reference, a CT pulmonary angiogram exposes a fetus to 4 mGy). A 2012 review did not find an association between medical radiation and cancer risk in children, noting however the existence of limitations in the evidence on which the review is based. CT scans can be performed with different settings for lower exposure in children, with most manufacturers of CT scanners as of 2007 having this function built in. Furthermore, certain conditions can require children to be exposed to multiple CT scans. Current recommendations are to inform patients of the risks of CT scanning. However, employees of imaging centers tend not to communicate such risks unless patients ask. === Contrast reactions === In the United States half of CT scans are contrast CTs using intravenously injected radiocontrast agents. The most common reactions to these agents are mild, including nausea, vomiting, and an itching rash. Severe life-threatening reactions may rarely occur. Overall, reactions occur in 1 to 3% of people with nonionic contrast and 4 to 12% of people with ionic contrast. Skin rashes may appear within a week in 3% of people. The older radiocontrast agents caused anaphylaxis in 1% of cases, while the newer, low-osmolar agents cause reactions in 0.01–0.04% of cases. Death occurs in about 2 to 30 people per 1,000,000 administrations, with newer agents being safer. There is a higher risk of mortality in those who are female, elderly, or in poor health, usually secondary to either anaphylaxis or acute kidney injury. The contrast agent may induce contrast-induced nephropathy. This occurs in 2 to 7% of people who receive these agents, with greater risk in those who have preexisting kidney failure, preexisting diabetes, or reduced intravascular volume. People with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT. Those with severe kidney failure requiring dialysis require less strict precautions, as their kidneys have so little function remaining that any further damage would not be noticeable and the dialysis will remove the contrast agent; it is normally recommended, however, to arrange dialysis as soon as possible following contrast administration to minimize any adverse effects of the contrast. In addition to the use of intravenous contrast, orally administered contrast agents are frequently used when examining the abdomen. These are frequently the same as the intravenous contrast agents, merely diluted to approximately 10% of the concentration. However, oral alternatives to iodinated contrast exist, such as very dilute (0.5–1% w/v) barium sulfate suspensions. Dilute barium sulfate has the advantage that it does not cause allergic-type reactions or kidney failure, but it cannot be used in patients with suspected bowel perforation or suspected bowel injury, as leakage of barium sulfate from damaged bowel can cause fatal peritonitis. Side effects from contrast agents, administered intravenously in some CT scans, might impair kidney performance in patients with kidney disease, although this risk is now believed to be lower than previously thought. 
=== Scan dose === The table reports average radiation exposures; however, there can be a wide variation in radiation doses between similar scan types, where the highest dose can be as much as 22 times higher than the lowest dose. A typical plain film X-ray involves a radiation dose of 0.01 to 0.15 mGy, while a typical CT can involve 10–20 mGy for specific organs, and can go up to 80 mGy for certain specialized CT scans. For purposes of comparison, the world average dose rate from naturally occurring sources of background radiation is 2.4 mSv per year, equal for practical purposes in this application to 2.4 mGy per year. While there is some variation, most people (99%) receive less than 7 mSv per year as background radiation. Medical imaging as of 2007 accounted for half of the radiation exposure of those in the United States, with CT scans making up two thirds of this amount. In the United Kingdom it accounts for 15% of radiation exposure. The average radiation dose from medical sources is ≈0.6 mSv per person globally as of 2007. Those in the nuclear industry in the United States are limited to doses of 50 mSv a year and 100 mSv every 5 years. Lead is the main material used by radiography personnel for shielding against scattered X-rays. ==== Radiation dose units ==== The radiation dose reported in the gray or mGy unit is proportional to the amount of energy that the irradiated body part is expected to absorb, and the physical effect (such as DNA double strand breaks) on the cells' chemical bonds by X-ray radiation is proportional to that energy. The sievert unit is used in the report of the effective dose. The sievert unit, in the context of CT scans, does not correspond to the actual radiation dose that the scanned body part absorbs but to another radiation dose of another scenario: the whole body absorbing a radiation dose of a magnitude estimated to have the same probability of inducing cancer as the CT scan. Thus, as is shown in the table above, the actual radiation that is absorbed by a scanned body part is often much larger than the effective dose suggests. A specific measure, termed the computed tomography dose index (CTDI), is commonly used as an estimate of the radiation absorbed dose for tissue within the scan region, and is automatically computed by medical CT scanners. The equivalent dose is the effective dose of a case in which the whole body would actually absorb the same radiation dose, and the sievert unit is used in its report. In the case of non-uniform radiation, or radiation given to only part of the body, which is common for CT examinations, using the local equivalent dose alone would overstate the biological risks to the entire organism. ==== Effects of radiation ==== Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e., cancer and heritable effects, involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells. The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or one in 2,000. Because of the increased susceptibility of fetuses to radiation exposure, the radiation dosage of a CT scan is an important consideration in the choice of medical imaging in pregnancy. 
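The one-in-2,000 figure quoted above can be roughly reproduced with a linear no-threshold estimate. Assuming a nominal whole-population risk coefficient of about 5% per sievert (approximately the value used by the ICRP, stated here as an assumption for illustration rather than a figure from this article), an 8 mSv abdominal CT gives

$8\ \text{mSv} \times 0.005\,\%\ \text{per mSv} \approx 0.04\,\%$

of added lifetime cancer risk, which is of the same order as the 0.05% (one in 2,000) estimate above.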
==== Excess doses ==== In October 2009, the US Food and Drug Administration (FDA) initiated an investigation of brain perfusion CT (PCT) scans, based on radiation burns caused by incorrect settings at one particular facility for this particular type of CT scan. Over 200 patients were exposed to radiation at approximately eight times the expected dose over an 18-month period; over 40% of them lost patches of hair. This event prompted a call for increased CT quality assurance programs. It was noted that "while unnecessary radiation exposure should be avoided, a medically needed CT scan obtained with appropriate acquisition parameters has benefits that outweigh the radiation risks." Similar problems have been reported at other centers. These incidents are believed to be due to human error. == Procedure == The CT scan procedure varies according to the type of study and the organ being imaged. The patient lies on the CT table and the table is centered according to the body part. An IV line is established in the case of contrast-enhanced CT. After the appropriate volume and rate of contrast are selected on the pressure injector, a scout image is taken to localize and plan the scan. Once the plan is selected, the contrast is given. The raw data is processed according to the study and proper windowing is applied to make the scans easy to interpret. === Preparation === Patient preparation may vary according to the type of scan. General patient preparation includes: signing the informed consent; removal of metallic objects and jewelry from the region of interest; changing into a hospital gown according to hospital protocol; and checking of kidney function, especially creatinine and urea levels (in the case of contrast-enhanced CT, CECT). == Mechanism == Computed tomography operates by using an X-ray generator that rotates around the object; X-ray detectors are positioned on the opposite side of the circle from the X-ray source. As the X-rays pass through the patient, they are attenuated differently by various tissues according to the tissue density. A visual representation of the raw data obtained is called a sinogram, yet it is not sufficient for interpretation. Once the scan data has been acquired, the data must be processed using a form of tomographic reconstruction, which produces a series of cross-sectional images. These cross-sectional images are made up of small units called pixels or voxels. Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while iron or steel can completely extinguish the X-ray beam and is, therefore, responsible for well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. 
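To make the windowing described earlier concrete, the sketch below maps raw Hounsfield values to an 8-bit grayscale ramp using a window level and width. The brain window of 0 HU to 80 HU (level 40, width 80) is the example given in the Windowing section above; the array of voxel values and the helper name are illustrative assumptions.

```python
import numpy as np

def apply_window(hu_values, level, width):
    """Map Hounsfield units to 0-255 grayscale using a window level/width.

    Values at or below (level - width/2) become black (0), values at or
    above (level + width/2) become white (255), and values in between
    are scaled linearly, as described in the Windowing section above.
    """
    low, high = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu_values, low, high)
    return np.round((clipped - low) / (high - low) * 255).astype(np.uint8)

# Brain window from the text: 0 HU to 80 HU, i.e. level 40 HU, width 80 HU.
slice_hu = np.array([-1000, 0, 30, 40, 80, 400, 2000])  # illustrative voxel values
print(apply_window(slice_hu, level=40, width=80))        # -> [  0   0  96 128 255 255 255]
```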
Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. Initially, the images generated in CT scans were in the transverse (axial) anatomical plane, perpendicular to the long axis of the body. Modern scanners allow the scan data to be reformatted as images in other planes. Digital geometry processing can generate a three-dimensional image of an object inside the body from a series of two-dimensional radiographic images taken by rotation around a fixed axis. These cross-sectional images are widely used for medical diagnosis and therapy. === Contrast === Contrast media used for X-ray CT, as well as for plain film X-ray, are called radiocontrasts. Radiocontrasts for CT are, in general, iodine-based. This is useful to highlight structures such as blood vessels that otherwise would be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Often, images are taken both with and without radiocontrast. == History == The history of X-ray computed tomography goes back to at least 1917 with the mathematical theory of the Radon transform. In October 1963, William H. Oldendorf received a U.S. patent for a "radiant energy apparatus for investigating selected areas of interior objects obscured by dense material". The first commercially viable CT scanner was invented by Godfrey Hounsfield in 1972. It is often claimed that revenues from the sales of The Beatles' records in the 1960s helped fund the development of the first CT scanner at EMI. The first production X-ray CT machines were in fact called EMI scanners. === Etymology === The word tomography is derived from the Greek tome 'slice' and graphein 'to write'. Computed tomography was originally known as the "EMI scan" as it was developed in the early 1970s at a research branch of EMI, a company best known today for its music and recording business. It was later known as computed axial tomography (CAT or CT scan) and body section röntgenography. The term CAT scan is no longer in technical use because current CT scans enable for multiplanar reconstructions. This makes CT scan the most appropriate term, which is used by radiologists in common vernacular as well as in textbooks and scientific papers. In Medical Subject Headings (MeSH), computed axial tomography was used from 1977 to 1979, but the current indexing explicitly includes X-ray in the title. The term sinogram was introduced by Paul Edholm and Bertil Jacobson in 1975. == Society and culture == === Campaigns === In response to increased concern by the public and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently Campaign which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients. 
This initiative has been endorsed and applied by a growing list of various professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in Radiology. Following upon the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population called Image Wisely. The World Health Organization and International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose. === Prevalence === Use of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed in the United States in 2007, accounting for close to half of the total per-capita dose rate from radiologic and nuclear medicine procedures. Of the CT scans, six to eleven percent are done in children, an increase of seven to eightfold from 1980. Similar increases have been seen in Europe and Asia. In Calgary, Canada, 12.1% of people who present to the emergency with an urgent complaint received a CT scan, most commonly either of the head or of the abdomen. The percentage who received CT, however, varied markedly by the emergency physician who saw them from 1.8% to 25%. In the emergency department in the United States, CT or MRI imaging is done in 15% of people who present with injuries as of 2007 (up from 6% in 1998). The increased use of CT scans has been the greatest in two fields: screening of adults (screening CT of the lung in smokers, virtual colonoscopy, CT cardiac screening, and whole-body CT in asymptomatic patients) and CT imaging of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially for the diagnosis of appendicitis). As of 2007, in the United States a proportion of CT scans are performed unnecessarily. Some estimates place this number at 30%. There are a number of reasons for this including: legal concerns, financial incentives, and desire by the public. For example, some healthy people avidly pay to receive full-body CT scans as screening. In that case, it is not at all clear that the benefits outweigh the risks and costs. Deciding whether and how to treat incidentalomas is complex, radiation exposure is not negligible, and the money for the scans involves opportunity cost. == Manufacturers == Major manufacturers of CT scanning devices and equipment are: Canon Medical Systems Corporation Fujifilm Healthcare GE HealthCare Neusoft Medical Systems Philips Siemens Healthineers United Imaging == Research == Photon-counting computed tomography is a CT technique currently under development. Typical CT scanners use energy integrating detectors; photons are measured as a voltage on a capacitor which is proportional to the X-rays detected. However, this technique is susceptible to noise and other factors which can affect the linearity of the voltage to X-ray intensity relationship. Photon counting detectors (PCDs) are still affected by noise but it does not change the measured counts of photons. 
PCDs have several potential advantages, including improving signal (and contrast) to noise ratios, reducing doses, improving spatial resolution, and through use of several energies, distinguishing multiple contrast agents. PCDs have only recently become feasible in CT scanners due to improvements in detector technologies that can cope with the volume and rate of data required. As of February 2016, photon counting CT is in use at three sites. Some early research has found the dose reduction potential of photon counting CT for breast imaging to be very promising. In view of recent findings of high cumulative doses to patients from recurrent CT scans, there has been a push for scanning technologies and techniques that reduce ionising radiation doses to patients to sub-milliSievert (sub-mSv in the literature) levels during the CT scan process, a goal that has been lingering. == See also == == References == == External links == Development of CT imaging CT Artefacts—PPT by David Platten Filler A (2009-06-30). "The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI". Nature Precedings: 1. doi:10.1038/npre.2009.3267.4. ISSN 1756-0357. Boone JM, McCollough CH (2021). "Computed tomography turns 50". Physics Today. 74 (9): 34–40. Bibcode:2021PhT....74i..34B. doi:10.1063/PT.3.4834. ISSN 0031-9228. S2CID 239718717.
Wikipedia/Multidetector_computed_tomography
Rate-distortion optimization (RDO) is a method of improving video quality in video compression. The name refers to the optimization of the amount of distortion (loss of video quality) against the amount of data required to encode the video, the rate. While it is primarily used by video encoders, rate-distortion optimization can be used to improve quality in any encoding situation (image, video, audio, or otherwise) where decisions have to be made that affect both file size and quality simultaneously. == Background == The classical method of making encoding decisions is for the video encoder to choose the result which yields the highest quality output image. However, this has the disadvantage that the choice it makes might require more bits while giving comparatively little quality benefit. One common example of this problem is in motion estimation, and in particular regarding the use of quarter-pixel-precision motion estimation. Adding the extra precision to the motion of a block during motion estimation might increase quality, but in some cases that extra quality isn't worth the extra bits necessary to encode the motion vector to a higher precision. == How it works == Rate-distortion optimization solves the aforementioned problem by acting as a video quality metric, measuring both the deviation from the source material and the bit cost for each possible decision outcome. The bits are mathematically measured by multiplying the bit cost by the Lagrangian, a value representing the relationship between bit cost and quality for a particular quality level. The deviation from the source is usually measured as the mean squared error, in order to maximize the PSNR video quality metric. Calculating the bit cost is made more difficult by the entropy encoders in modern video codecs, requiring the rate-distortion optimization algorithm to pass each block of video to be tested to the entropy coder to measure its actual bit cost. In MPEG codecs, the full process consists of a discrete cosine transform, followed by quantization and entropy encoding. Because of this, rate-distortion optimization is much slower than most other block-matching metrics, such as the simple sum of absolute differences (SAD) and sum of absolute transformed differences (SATD). As such it is usually used only for the final steps of the motion estimation process, such as deciding between different partition types in H.264/AVC. == List of encoders that support RDO ==
* Ateme H.264 encoder
* Grass Valley ViBE encoders (SD & HD MPEG-2/MPEG-4)
* Harmonic Electra 8000 encoder (SD & HD MPEG-2/MPEG-4)
* libavcodec
* MainConcept H.264 encoder
* Microsoft VC-1 encoder
* Tandberg Television SD MPEG-2 EN8100
* Tandberg Television HD MPEG-4 EN8190
* Tandberg Television SD & HD MPEG-4 iPlex
* Theora 1.1-alpha1 and later (the "Thusnelda" branch)
* x264 H.264 encoder
* x265 H.265 encoder
* Xvid MPEG-4 ASP encoder
* H.264/AVC reference software JM (Joint Model)
* H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (Intel Quick Sync Video acceleration) hardware encoder
* HEVC reference software HM (HEVC Test Model)
* Kvazaar (partial)
== References ==
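As a concrete sketch of the trade-off described above, the snippet below picks between two candidate encodings by minimizing the Lagrangian cost J = D + λ·R. The candidate names, distortion values, bit counts, and λ values are made-up illustrative numbers, not constants from any particular encoder.

```python
def rd_cost(distortion, bits, lam):
    """Lagrangian rate-distortion cost: J = D + lambda * R."""
    return distortion + lam * bits

def choose_mode(candidates, lam):
    """Pick the candidate (name, distortion, bits) with the lowest RD cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

# Hypothetical motion-vector precisions: full-pel vs. quarter-pel.
# Quarter-pel lowers distortion (e.g. SSD) but costs extra bits for the vector.
candidates = [
    ("full-pel MV", 950.0, 6),      # higher distortion, cheaper to code
    ("quarter-pel MV", 900.0, 14),  # lower distortion, more bits
]

# A small lambda favors quality; a large lambda favors fewer bits.
print(choose_mode(candidates, lam=2.0))   # -> ('quarter-pel MV', 900.0, 14)
print(choose_mode(candidates, lam=10.0))  # -> ('full-pel MV', 950.0, 6)
```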
Wikipedia/Rate–distortion_optimization
The term Blahut–Arimoto algorithm is often used to refer to a class of algorithms for computing numerically either the information theoretic capacity of a channel, the rate-distortion function of a source or a source encoding (i.e. compression to remove the redundancy). They are iterative algorithms that eventually converge to one of the maxima of the optimization problem that is associated with these information theoretic concepts. == History and application == For the case of channel capacity, the algorithm was independently invented by Suguru Arimoto and Richard Blahut. In addition, Blahut's treatment gives algorithms for computing rate distortion and generalized capacity with input constraints (i.e. the capacity-cost function, analogous to rate-distortion). These algorithms are most applicable to the case of arbitrary finite alphabet sources. Much work has been done to extend them to more general problem instances. Recently, a version of the algorithm that accounts for continuous and multivariate outputs was proposed with applications in cellular signaling. There exists also a version of the Blahut–Arimoto algorithm for directed information. == Algorithm for Channel Capacity == A discrete memoryless channel (DMC) can be specified using two random variables $X, Y$ with alphabets $\mathcal{X}, \mathcal{Y}$, and a channel law as a conditional probability distribution $p(y|x)$. The channel capacity, defined as $C := \sup_{p \sim X} I(X;Y)$, indicates the maximum efficiency with which a channel can communicate, in units of bits per use. Now if we denote the cardinalities $|\mathcal{X}| = n$, $|\mathcal{Y}| = m$, then $p_{Y|X}$ is an $n \times m$ matrix, whose $i$-th row, $j$-th column entry we denote by $w_{ij}$. For the case of channel capacity, the algorithm was independently invented by Suguru Arimoto and Richard Blahut. They both found the following expression for the capacity of a DMC with channel law: $C = \max_{\mathbf{p}} \max_{Q} \sum_{i=1}^{n} \sum_{j=1}^{m} p_i w_{ij} \log\left(\frac{Q_{ji}}{p_i}\right)$, where $\mathbf{p}$ and $Q$ are maximized over the following requirements: $\mathbf{p}$ is a probability distribution on $X$; that is, if we write $\mathbf{p}$ as $(p_1, p_2, \ldots, p_n)$, then $\sum_{i=1}^{n} p_i = 1$. $Q = (q_{ji})$ is an $m \times n$ matrix that behaves like a transition matrix from $Y$ to $X$ with respect to the channel law. That is, for all $1 \leq i \leq n$, $1 \leq j \leq m$: $q_{ji} \geq 0$, with $q_{ji} = 0 \Leftrightarrow w_{ij} = 0$, and every row sums up to 1, i.e. $\sum_{i=1}^{n} q_{ji} = 1$. Then upon picking a random probability distribution $\mathbf{p}^0 := (p_1^0, p_2^0, \ldots, p_n^0)$ on $X$, we can generate a sequence $(\mathbf{p}^0, Q^0, \mathbf{p}^1, Q^1, \ldots)$ iteratively as follows: $q_{ji}^{t} := \frac{p_i^t w_{ij}}{\sum_{k=1}^{n} p_k^t w_{kj}}$ and $p_k^{t+1} := \frac{\prod_{j=1}^{m} (q_{jk}^t)^{w_{kj}}}{\sum_{i=1}^{n} \prod_{j=1}^{m} (q_{ji}^t)^{w_{ij}}}$ for $t = 0, 1, 2, \ldots$. Then, using the theory of optimization, specifically coordinate descent, Yeung showed that the sequence indeed converges to the required maximum. That is, $\lim_{t \to \infty} \sum_{i=1}^{n} \sum_{j=1}^{m} p_i^t w_{ij} \log\left(\frac{Q_{ji}^t}{p_i^t}\right) = C$. So given a channel law $p(y|x)$, the capacity can be numerically estimated up to arbitrary precision. == Algorithm for Rate-Distortion == Suppose we have a source $X$ with probability $p(x)$ of any given symbol. We wish to find an encoding $p(\hat{x}|x)$ that generates a compressed signal $\hat{X}$ from the original signal while minimizing the expected distortion $\langle d(x,\hat{x}) \rangle$, where the expectation is taken over the joint probability of $X$ and $\hat{X}$. We can find an encoding that minimizes the rate-distortion functional locally by repeating the following iteration until convergence: $p_{t+1}(\hat{x}|x) = \frac{p_t(\hat{x}) \exp(-\beta d(x,\hat{x}))}{\sum_{\hat{x}} p_t(\hat{x}) \exp(-\beta d(x,\hat{x}))}$ and $p_{t+1}(\hat{x}) = \sum_{x} p(x) p_{t+1}(\hat{x}|x)$, where $\beta$ is a parameter related to the slope in the rate-distortion curve that we are targeting, and thus to how much we favor compression versus distortion (higher $\beta$ means less compression). == References ==
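The two iterations above translate almost directly into code. Below is a minimal numpy sketch, assuming the channel is given as a row-stochastic matrix with entries $w_{ij} = p(y_j|x_i)$ and, for the rate-distortion case, a finite distortion matrix; the binary symmetric channel in the demo and the function names are illustrative choices, not taken from this article.

```python
import numpy as np

def ba_channel_capacity(w, iters=500):
    """Blahut-Arimoto iteration for the capacity of a DMC.

    w[i, j] = p(y_j | x_i), an n x m row-stochastic matrix. Returns the
    capacity estimate in bits per channel use and the maximizing input
    distribution p.
    """
    n, _ = w.shape
    p = np.full(n, 1.0 / n)                       # start from a uniform input distribution
    for _ in range(iters):
        # q[j, i] proportional to p_i * w_ij, normalized over i (backward channel)
        q = (p[:, None] * w).T
        q /= q.sum(axis=1, keepdims=True)
        # p_k proportional to prod_j q[j, k] ** w[k, j]
        log_p = np.sum(w * np.log(q.T + 1e-300), axis=1)
        p = np.exp(log_p - log_p.max())
        p /= p.sum()
    # Recompute the backward channel for the final p and evaluate the capacity expression
    q = (p[:, None] * w).T
    q /= q.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p[:, None] * w * np.log2(q.T / p[:, None])
    return np.nansum(terms), p

def ba_rate_distortion(px, d, beta, iters=500):
    """Blahut-Arimoto iteration for a point on the rate-distortion curve.

    px[i] is the source distribution, d[i, k] = d(x_i, xhat_k) is the
    distortion matrix, and beta controls the rate/distortion trade-off.
    Returns the conditional encoder p(xhat | x) and the marginal p(xhat).
    """
    _, m = d.shape
    q = np.full(m, 1.0 / m)                       # initial reproduction marginal
    for _ in range(iters):
        cond = q[None, :] * np.exp(-beta * d)     # unnormalized p(xhat | x)
        cond /= cond.sum(axis=1, keepdims=True)
        q = px @ cond                             # new marginal p(xhat)
    return cond, q

# Demo: binary symmetric channel with crossover 0.1; the capacity should be
# close to 1 - H2(0.1), about 0.531 bits per use, with a uniform input.
w_bsc = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
cap, p_opt = ba_channel_capacity(w_bsc)
print(round(cap, 3), p_opt)
```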
Wikipedia/Blahut–Arimoto_algorithm
In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variable X {\displaystyle X} , which may be any member x {\displaystyle x} within the set X {\displaystyle {\mathcal {X}}} and is distributed according to p : X → [ 0 , 1 ] {\displaystyle p\colon {\mathcal {X}}\to [0,1]} , the entropy is H ( X ) := − ∑ x ∈ X p ( x ) log ⁡ p ( x ) , {\displaystyle \mathrm {H} (X):=-\sum _{x\in {\mathcal {X}}}p(x)\log p(x),} where Σ {\displaystyle \Sigma } denotes the sum over the variable's possible values. The choice of base for log {\displaystyle \log } , the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem. Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition E [ − log ⁡ p ( X ) ] {\displaystyle \mathbb {E} [-\log p(X)]} generalizes the above. == Introduction == The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the occurrence of a very low probability event. 
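As a quick illustration of this definition, the following short Python sketch evaluates H(X) in bits for a few simple distributions. It is written for illustration only; the function name and the example distributions (including the four-symbol source examined below) are choices made here, not part of the original text.

```python
import numpy as np

def shannon_entropy(p, base=2):
    """H(X) = -sum p(x) log p(x) for a discrete distribution p, with 0 log 0 taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p) / np.log(base)))

print(shannon_entropy([0.5, 0.5]))              # fair coin: 1.0 bit
print(shannon_entropy([1.0, 0.0]))              # certain outcome: 0.0 bits
print(shannon_entropy([1/6] * 6))               # fair die: log2(6) ≈ 2.585 bits
print(shannon_entropy([0.7, 0.26, 0.02, 0.02])) # the four-symbol source below: ≈ 1.09 bits
```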
The information content, also called the surprisal or self-information, of an event E {\displaystyle E} is a function that increases as the probability p ( E ) {\displaystyle p(E)} of an event decreases. When p ( E ) {\displaystyle p(E)} is close to 1, the surprisal of the event is low, but if p ( E ) {\displaystyle p(E)} is close to 0, the surprisal of the event is high. This relationship is described by the function log ⁡ ( 1 p ( E ) ) , {\displaystyle \log \left({\frac {1}{p(E)}}\right),} where log {\displaystyle \log } is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, log is the only function that satisfies а specific set of conditions defined in section § Characterization. Hence, we can define the information, or surprisal, of an event E {\displaystyle E} by I ( E ) = log ⁡ ( 1 p ( E ) ) , {\displaystyle I(E)=\log \left({\frac {1}{p(E)}}\right),} or equivalently, I ( E ) = − log ⁡ ( p ( E ) ) . {\displaystyle I(E)=-\log(p(E)).} Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial.: 67  This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability ( p = 1 / 6 {\displaystyle p=1/6} ) than each outcome of a coin toss ( p = 1 / 2 {\displaystyle p=1/2} ). Consider a coin with probability p of landing on heads and probability 1 − p of landing on tails. The maximum surprise is when p = 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit (similarly, one trit with equiprobable values contains log 2 ⁡ 3 {\displaystyle \log _{2}3} (about 1.58496) bits of information because it can have one of three values). The minimum surprise is when p = 0 (impossibility) or p = 1 (certainty) and the entropy is zero bits. When the entropy is zero, sometimes referred to as unity, there is no uncertainty at all – no freedom of choice – no information. Other values of p give entropies between zero and one bits. === Example === Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect. English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. 
After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message.: 234  == Definition == Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable X {\textstyle X} , which takes values in the set X {\displaystyle {\mathcal {X}}} and is distributed according to p : X → [ 0 , 1 ] {\displaystyle p:{\mathcal {X}}\to [0,1]} such that p ( x ) := P [ X = x ] {\displaystyle p(x):=\mathbb {P} [X=x]} : H ( X ) = E [ I ⁡ ( X ) ] = E [ − log ⁡ p ( X ) ] . {\displaystyle \mathrm {H} (X)=\mathbb {E} [\operatorname {I} (X)]=\mathbb {E} [-\log p(X)].} Here E {\displaystyle \mathbb {E} } is the expected value operator, and I is the information content of X.: 11 : 19–20  I ⁡ ( X ) {\displaystyle \operatorname {I} (X)} is itself a random variable. The entropy can explicitly be written as: H ( X ) = − ∑ x ∈ X p ( x ) log b ⁡ p ( x ) , {\displaystyle \mathrm {H} (X)=-\sum _{x\in {\mathcal {X}}}p(x)\log _{b}p(x),} where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10. In the case of p ( x ) = 0 {\displaystyle p(x)=0} for some x ∈ X {\displaystyle x\in {\mathcal {X}}} , the value of the corresponding summand 0 logb(0) is taken to be 0, which is consistent with the limit:: 13  lim p → 0 + p log ⁡ ( p ) = 0. {\displaystyle \lim _{p\to 0^{+}}p\log(p)=0.} One may also define the conditional entropy of two variables X {\displaystyle X} and Y {\displaystyle Y} taking values from sets X {\displaystyle {\mathcal {X}}} and Y {\displaystyle {\mathcal {Y}}} respectively, as:: 16  H ( X | Y ) = − ∑ x , y ∈ X × Y p X , Y ( x , y ) log ⁡ p X , Y ( x , y ) p Y ( y ) , {\displaystyle \mathrm {H} (X|Y)=-\sum _{x,y\in {\mathcal {X}}\times {\mathcal {Y}}}p_{X,Y}(x,y)\log {\frac {p_{X,Y}(x,y)}{p_{Y}(y)}},} where p X , Y ( x , y ) := P [ X = x , Y = y ] {\displaystyle p_{X,Y}(x,y):=\mathbb {P} [X=x,Y=y]} and p Y ( y ) = P [ Y = y ] {\displaystyle p_{Y}(y)=\mathbb {P} [Y=y]} . This quantity should be understood as the remaining randomness in the random variable X {\displaystyle X} given the random variable Y {\displaystyle Y} . === Measure theory === Entropy can be formally defined in the language of measure theory as follows: Let ( X , Σ , μ ) {\displaystyle (X,\Sigma ,\mu )} be a probability space. Let A ∈ Σ {\displaystyle A\in \Sigma } be an event. The surprisal of A {\displaystyle A} is σ μ ( A ) = − ln ⁡ μ ( A ) . {\displaystyle \sigma _{\mu }(A)=-\ln \mu (A).} The expected surprisal of A {\displaystyle A} is h μ ( A ) = μ ( A ) σ μ ( A ) . {\displaystyle h_{\mu }(A)=\mu (A)\sigma _{\mu }(A).} A μ {\displaystyle \mu } -almost partition is a set family P ⊆ P ( X ) {\displaystyle P\subseteq {\mathcal {P}}(X)} such that μ ( ∪ ⁡ P ) = 1 {\displaystyle \mu (\mathop {\cup } P)=1} and μ ( A ∩ B ) = 0 {\displaystyle \mu (A\cap B)=0} for all distinct A , B ∈ P {\displaystyle A,B\in P} . (This is a relaxation of the usual conditions for a partition.) The entropy of P {\displaystyle P} is H μ ( P ) = ∑ A ∈ P h μ ( A ) . {\displaystyle \mathrm {H} _{\mu }(P)=\sum _{A\in P}h_{\mu }(A).} Let M {\displaystyle M} be a sigma-algebra on X {\displaystyle X} . The entropy of M {\displaystyle M} is H μ ( M ) = sup P ⊆ M H μ ( P ) . 
{\displaystyle \mathrm {H} _{\mu }(M)=\sup _{P\subseteq M}\mathrm {H} _{\mu }(P).} Finally, the entropy of the probability space is H μ ( Σ ) {\displaystyle \mathrm {H} _{\mu }(\Sigma )} , that is, the entropy with respect to μ {\displaystyle \mu } of the sigma-algebra of all measurable subsets of X {\displaystyle X} . Recent studies on layered dynamical systems have introduced the concept of symbolic conditional entropy, further extending classical entropy measures to more abstract informational structures. == Example == Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as a Bernoulli process. The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because H ( X ) = − ∑ i = 1 n p ( x i ) log b ⁡ p ( x i ) = − ∑ i = 1 2 1 2 log 2 ⁡ 1 2 = − ∑ i = 1 2 1 2 ⋅ ( − 1 ) = 1. {\displaystyle {\begin{aligned}\mathrm {H} (X)&=-\sum _{i=1}^{n}{p(x_{i})\log _{b}p(x_{i})}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\log _{2}{\frac {1}{2}}}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\cdot (-1)}=1.\end{aligned}}} However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then H ( X ) = − p log 2 ⁡ p − q log 2 ⁡ q = − 0.7 log 2 ⁡ ( 0.7 ) − 0.3 log 2 ⁡ ( 0.3 ) ≈ − 0.7 ⋅ ( − 0.515 ) − 0.3 ⋅ ( − 1.737 ) = 0.8816 < 1. {\displaystyle {\begin{aligned}\mathrm {H} (X)&=-p\log _{2}p-q\log _{2}q\\[1ex]&=-0.7\log _{2}(0.7)-0.3\log _{2}(0.3)\\[1ex]&\approx -0.7\cdot (-0.515)-0.3\cdot (-1.737)\\[1ex]&=0.8816<1.\end{aligned}}} Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.: 14–15  == Characterization == To understand the meaning of −Σ pi log(pi), first define an information function I in terms of an event i with probability pi. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information: I(p) is monotonically decreasing in p: an increase in the probability of an event decreases the information from an observed event, and vice versa. I(1) = 0: events that always occur do not communicate information. I(p1·p2) = I(p1) + I(p2): the information learned from independent events is the sum of the information learned from each event. I(p) is a twice continuously differentiable function of p. Given two independent events, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn equiprobable outcomes of the joint event. 
This means that if log2(n) bits are needed to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both. Shannon discovered that a suitable choice of I {\displaystyle \operatorname {I} } is given by: I ⁡ ( p ) = log ⁡ ( 1 p ) = − log ⁡ ( p ) . {\displaystyle \operatorname {I} (p)=\log \left({\tfrac {1}{p}}\right)=-\log(p).} In fact, the only possible values of I {\displaystyle \operatorname {I} } are I ⁡ ( u ) = k log ⁡ u {\displaystyle \operatorname {I} (u)=k\log u} for k < 0 {\displaystyle k<0} . Additionally, choosing a value for k is equivalent to choosing a value x > 1 {\displaystyle x>1} for k = − 1 / log ⁡ x {\displaystyle k=-1/\log x} , so that x corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties. The different units of information (bits for the binary logarithm log2, nats for the natural logarithm ln, bans for the decimal logarithm log10 and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides log2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits. The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves. === Alternative characterization === Another characterization of entropy uses the following properties. We denote pi = Pr(X = xi) and Ηn(p1, ..., pn) = Η(X). Continuity: H should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount. Symmetry: H should be unchanged if the outcomes xi are re-ordered. That is, H n ( p 1 , p 2 , … , p n ) = H n ( p i 1 , p i 2 , … , p i n ) {\displaystyle \mathrm {H} _{n}\left(p_{1},p_{2},\ldots ,p_{n}\right)=\mathrm {H} _{n}\left(p_{i_{1}},p_{i_{2}},\ldots ,p_{i_{n}}\right)} for any permutation { i 1 , . . . , i n } {\displaystyle \{i_{1},...,i_{n}\}} of { 1 , . . . , n } {\displaystyle \{1,...,n\}} . Maximum: H n {\displaystyle \mathrm {H} _{n}} should be maximal if all the outcomes are equally likely i.e. H n ( p 1 , … , p n ) ≤ H n ( 1 n , … , 1 n ) {\displaystyle \mathrm {H} _{n}(p_{1},\ldots ,p_{n})\leq \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)} . Increasing number of outcomes: for equiprobable events, the entropy should increase with the number of outcomes i.e. H n ( 1 n , … , 1 n ⏟ n ) < H n + 1 ( 1 n + 1 , … , 1 n + 1 ⏟ n + 1 ) . {\displaystyle \mathrm {H} _{n}{\bigg (}\underbrace {{\frac {1}{n}},\ldots ,{\frac {1}{n}}} _{n}{\bigg )}<\mathrm {H} _{n+1}{\bigg (}\underbrace {{\frac {1}{n+1}},\ldots ,{\frac {1}{n+1}}} _{n+1}{\bigg )}.} Additivity: given an ensemble of n uniformly distributed elements that are partitioned into k boxes (sub-systems) with b1, ..., bk elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box. ==== Discussion ==== The rule of additivity has the following consequences: for positive integers bi where b1 + ... 
+ bk = n, H n ( 1 n , … , 1 n ) = H k ( b 1 n , … , b k n ) + ∑ i = 1 k b i n H b i ( 1 b i , … , 1 b i ) . {\displaystyle \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)=\mathrm {H} _{k}\left({\frac {b_{1}}{n}},\ldots ,{\frac {b_{k}}{n}}\right)+\sum _{i=1}^{k}{\frac {b_{i}}{n}}\,\mathrm {H} _{b_{i}}\left({\frac {1}{b_{i}}},\ldots ,{\frac {1}{b_{i}}}\right).} Choosing k = n, b1 = ... = bn = 1 this implies that the entropy of a certain outcome is zero: Η1(1) = 0. This implies that the efficiency of a source set with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory). The characterization here imposes an additive property with respect to a partition of a set. Meanwhile, the conditional probability is defined in terms of a multiplicative property, P ( A ∣ B ) ⋅ P ( B ) = P ( A ∩ B ) {\displaystyle P(A\mid B)\cdot P(B)=P(A\cap B)} . Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities inherit simple relation, in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisals μ ( A ) ⋅ ln ⁡ μ ( A ) {\displaystyle \mu (A)\cdot \ln \mu (A)} for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, log 2 {\displaystyle \log _{2}} lends itself to practical interpretations. Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a "logic of partitions" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on. === Alternative characterization via additivity and subadditivity === Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties: Subadditivity: H ( X , Y ) ≤ H ( X ) + H ( Y ) {\displaystyle \mathrm {H} (X,Y)\leq \mathrm {H} (X)+\mathrm {H} (Y)} for jointly distributed random variables X , Y {\displaystyle X,Y} . Additivity: H ( X , Y ) = H ( X ) + H ( Y ) {\displaystyle \mathrm {H} (X,Y)=\mathrm {H} (X)+\mathrm {H} (Y)} when the random variables X , Y {\displaystyle X,Y} are independent. Expansibility: H n + 1 ( p 1 , … , p n , 0 ) = H n ( p 1 , … , p n ) {\displaystyle \mathrm {H} _{n+1}(p_{1},\ldots ,p_{n},0)=\mathrm {H} _{n}(p_{1},\ldots ,p_{n})} , i.e., adding an outcome with probability zero does not change the entropy. Symmetry: H n ( p 1 , … , p n ) {\displaystyle \mathrm {H} _{n}(p_{1},\ldots ,p_{n})} is invariant under permutation of p 1 , … , p n {\displaystyle p_{1},\ldots ,p_{n}} . Small for small probabilities: lim q → 0 + H 2 ( 1 − q , q ) = 0 {\displaystyle \lim _{q\to 0^{+}}\mathrm {H} _{2}(1-q,q)=0} . ==== Discussion ==== It was shown that any function H {\displaystyle \mathrm {H} } satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector p 1 , … , p n {\displaystyle p_{1},\ldots ,p_{n}} . 
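The subadditivity and additivity properties above can be checked numerically on a small example. The following sketch is illustrative only (the joint distribution is made up for the purpose of the check); it verifies subadditivity for a dependent pair and additivity for an independent pair built from the same marginals.

```python
import numpy as np

def H(p, base=2):
    """Entropy in bits of a probability array (joint or marginal); zero entries contribute 0."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p) / np.log(base)))

# A made-up joint distribution p(x, y) on a 2-by-3 alphabet.
pxy = np.array([[0.20, 0.10, 0.10],
                [0.15, 0.25, 0.20]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

print(H(pxy) <= H(px) + H(py))                         # subadditivity: True
print(np.isclose(H(np.outer(px, py)), H(px) + H(py)))  # additivity for independent X, Y: True
print(np.isclose(H(pxy), H(px) + H(py)))               # False: this X and Y are dependent
```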
It is worth noting that if we drop the "small for small probabilities" property, then H {\displaystyle \mathrm {H} } must be a non-negative linear combination of the Shannon entropy and the Hartley entropy. == Further properties == The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X: Adding or removing an event with probability zero does not contribute to the entropy: H n + 1 ( p 1 , … , p n , 0 ) = H n ( p 1 , … , p n ) . {\displaystyle \mathrm {H} _{n+1}(p_{1},\ldots ,p_{n},0)=\mathrm {H} _{n}(p_{1},\ldots ,p_{n}).} The maximal entropy of an event with n different outcomes is logb(n): it is attained by the uniform probability distribution. That is, uncertainty is maximal when all possible events are equiprobable:: 29  H ( p 1 , … , p n ) ≤ log b ⁡ n . {\displaystyle \mathrm {H} (p_{1},\dots ,p_{n})\leq \log _{b}n.} The entropy or the amount of information revealed by evaluating (X,Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as:: 16  H ( X , Y ) = H ( X | Y ) + H ( Y ) = H ( Y | X ) + H ( X ) . {\displaystyle \mathrm {H} (X,Y)=\mathrm {H} (X|Y)+\mathrm {H} (Y)=\mathrm {H} (Y|X)+\mathrm {H} (X).} If Y = f ( X ) {\displaystyle Y=f(X)} where f {\displaystyle f} is a function, then H ( f ( X ) | X ) = 0 {\displaystyle \mathrm {H} (f(X)|X)=0} . Applying the previous formula to H ( X , f ( X ) ) {\displaystyle \mathrm {H} (X,f(X))} yields H ( X ) + H ( f ( X ) | X ) = H ( f ( X ) ) + H ( X | f ( X ) ) , {\displaystyle \mathrm {H} (X)+\mathrm {H} (f(X)|X)=\mathrm {H} (f(X))+\mathrm {H} (X|f(X)),} so H ( f ( X ) ) ≤ H ( X ) {\displaystyle \mathrm {H} (f(X))\leq \mathrm {H} (X)} , the entropy of a variable can only decrease when the latter is passed through a function. If X and Y are two independent random variables, then knowing the value of Y doesn't influence our knowledge of the value of X (since the two don't influence each other by independence): H ( X | Y ) = H ( X ) . {\displaystyle \mathrm {H} (X|Y)=\mathrm {H} (X).} More generally, for any random variables X and Y, we have: 29  H ( X | Y ) ≤ H ( X ) . {\displaystyle \mathrm {H} (X|Y)\leq \mathrm {H} (X).} The entropy of two simultaneous events is no more than the sum of the entropies of each individual event i.e., H ( X , Y ) ≤ H ( X ) + H ( Y ) {\displaystyle \mathrm {H} (X,Y)\leq \mathrm {H} (X)+\mathrm {H} (Y)} , with equality if and only if the two events are independent.: 28  The entropy H ( p ) {\displaystyle \mathrm {H} (p)} is concave in the probability mass function p {\displaystyle p} , i.e.: 30  H ( λ p 1 + ( 1 − λ ) p 2 ) ≥ λ H ( p 1 ) + ( 1 − λ ) H ( p 2 ) {\displaystyle \mathrm {H} (\lambda p_{1}+(1-\lambda )p_{2})\geq \lambda \mathrm {H} (p_{1})+(1-\lambda )\mathrm {H} (p_{2})} for all probability mass functions p 1 , p 2 {\displaystyle p_{1},p_{2}} and 0 ≤ λ ≤ 1 {\displaystyle 0\leq \lambda \leq 1} .: 32  Accordingly, the negative entropy (negentropy) function is convex, and its convex conjugate is LogSumExp. == Aspects == === Relationship to thermodynamic entropy === The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics. 
In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy S = − k B ∑ i p i ln ⁡ p i , {\displaystyle S=-k_{\text{B}}\sum _{i}p_{i}\ln p_{i}\,,} where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Ludwig Boltzmann (1872). The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927: S = − k B T r ( ρ ln ⁡ ρ ) , {\displaystyle S=-k_{\text{B}}\,{\rm {Tr}}(\rho \ln \rho )\,,} where ρ is the density matrix of the quantum mechanical system and Tr is the trace. At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant kB indicates, the changes in S / kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy. The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by his equation: S = k B ln ⁡ W , {\displaystyle S=k_{\text{B}}\ln W,} where S {\displaystyle S} is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), W is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and kB is the Boltzmann constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is pi = 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently kB times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate. In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: maximum entropy thermodynamics). 
Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient. === Data compression === Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text. If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger. A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.: 60–65  The authors estimate humankind technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. 
They break the information into three categories—to store information on a medium, to receive information through one-way broadcast networks, or to exchange information through two-way telecommunications networks. === Entropy as a measure of diversity === Entropy is one of several ways to measure biodiversity and is applied in the form of the Shannon index. A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of 1D, the true diversity index with parameter equal to 1. The Shannon index is related to the proportional abundances of types. === Entropy of a sequence === There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message: the self-information of an individual message or symbol taken from a given probability distribution (message or sequence seen as an individual event), the joint entropy of the symbols forming the message or sequence (seen as a set of events), the entropy rate of a stochastic process (message or sequence is seen as a succession of events). (The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information. It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate. Shannon himself used the term in this way. If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest. The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... 
treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence has an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., F(1) =1, F(2) = 1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence. === Limitations of entropy in cryptography === In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) 2 127 {\displaystyle 2^{127}} guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called guesswork can be used to measure the effort required for a brute force attack. Other problems may arise from non-uniform distributions used in cryptography. For example, a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy) it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all. === Data as a Markov process === A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independent of the last characters), the binary entropy is: H ( S ) = − ∑ i p i log ⁡ p i , {\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\log p_{i},} where pi is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is: H ( S ) = − ∑ i p i ∑ j p i ( j ) log ⁡ p i ( j ) , {\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}\ p_{i}(j)\log p_{i}(j),} where i is a state (certain preceding characters) and p i ( j ) {\displaystyle p_{i}(j)} is the probability of j given i as the previous character. For a second order Markov source, the entropy rate is H ( S ) = − ∑ i p i ∑ j p i ( j ) ∑ k p i , j ( k ) log ⁡ p i , j ( k ) . {\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}p_{i}(j)\sum _{k}p_{i,j}(k)\ \log p_{i,j}(k).} == Efficiency (normalized entropy) == A source set X {\displaystyle {\mathcal {X}}} with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency: η ( X ) = H H max = − ∑ i = 1 n p ( x i ) log b ⁡ ( p ( x i ) ) log b ⁡ ( n ) . {\displaystyle \eta (X)={\frac {H}{H_{\text{max}}}}=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}.} Applying the basic properties of the logarithm, this quantity can also be expressed as: η ( X ) = − ∑ i = 1 n p ( x i ) log b ⁡ ( p ( x i ) ) log b ⁡ ( n ) = ∑ i = 1 n log b ⁡ ( p ( x i ) − p ( x i ) ) log b ⁡ ( n ) = ∑ i = 1 n log n ⁡ ( p ( x i ) − p ( x i ) ) = log n ⁡ ( ∏ i = 1 n p ( x i ) − p ( x i ) ) . 
{\displaystyle {\begin{aligned}\eta (X)&=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}=\sum _{i=1}^{n}{\frac {\log _{b}\left(p(x_{i})^{-p(x_{i})}\right)}{\log _{b}(n)}}\\[1ex]&=\sum _{i=1}^{n}\log _{n}\left(p(x_{i})^{-p(x_{i})}\right)=\log _{n}\left(\prod _{i=1}^{n}p(x_{i})^{-p(x_{i})}\right).\end{aligned}}} Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy log b ⁡ ( n ) {\displaystyle {\log _{b}(n)}} . Furthermore, the efficiency is indifferent to the choice of (positive) base b, as indicated by the insensitivity within the final logarithm above thereto. == Entropy for continuous random variables == === Differential entropy === The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support X {\displaystyle \mathbb {X} } on the real line is defined by analogy, using the above form of the entropy as an expectation:: 224  H ( X ) = E [ − log ⁡ f ( X ) ] = − ∫ X f ( x ) log ⁡ f ( x ) d x . {\displaystyle \mathrm {H} (X)=\mathbb {E} [-\log f(X)]=-\int _{\mathbb {X} }f(x)\log f(x)\,\mathrm {d} x.} This is the differential entropy (or continuous entropy). A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann. Although the analogy between both functions is suggestive, the following question must be set: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably limiting density of discrete points. To answer this question, a connection must be established between the two functions: In order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by pn. As the continuous domain is generalized, the width must be made explicit. To do this, start with a continuous function f discretized into bins of size Δ {\displaystyle \Delta } . By the mean-value theorem there exists a value xi in each bin such that f ( x i ) Δ = ∫ i Δ ( i + 1 ) Δ f ( x ) d x {\displaystyle f(x_{i})\Delta =\int _{i\Delta }^{(i+1)\Delta }f(x)\,dx} the integral of the function f can be approximated (in the Riemannian sense) by ∫ − ∞ ∞ f ( x ) d x = lim Δ → 0 ∑ i = − ∞ ∞ f ( x i ) Δ , {\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=\lim _{\Delta \to 0}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta ,} where this limit and "bin size goes to zero" are equivalent. We will denote H Δ := − ∑ i = − ∞ ∞ f ( x i ) Δ log ⁡ ( f ( x i ) Δ ) {\displaystyle \mathrm {H} ^{\Delta }:=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log \left(f(x_{i})\Delta \right)} and expanding the logarithm, we have H Δ = − ∑ i = − ∞ ∞ f ( x i ) Δ log ⁡ ( f ( x i ) ) − ∑ i = − ∞ ∞ f ( x i ) Δ log ⁡ ( Δ ) . {\displaystyle \mathrm {H} ^{\Delta }=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(\Delta ).} As Δ → 0, we have ∑ i = − ∞ ∞ f ( x i ) Δ → ∫ − ∞ ∞ f ( x ) d x = 1 ∑ i = − ∞ ∞ f ( x i ) Δ log ⁡ ( f ( x i ) ) → ∫ − ∞ ∞ f ( x ) log ⁡ f ( x ) d x . 
{\displaystyle {\begin{aligned}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta &\to \int _{-\infty }^{\infty }f(x)\,dx=1\\\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))&\to \int _{-\infty }^{\infty }f(x)\log f(x)\,dx.\end{aligned}}} Note; log(Δ) → −∞ as Δ → 0, requires a special definition of the differential or continuous entropy: h [ f ] = lim Δ → 0 ( H Δ + log ⁡ Δ ) = − ∫ − ∞ ∞ f ( x ) log ⁡ f ( x ) d x , {\displaystyle h[f]=\lim _{\Delta \to 0}\left(\mathrm {H} ^{\Delta }+\log \Delta \right)=-\int _{-\infty }^{\infty }f(x)\log f(x)\,dx,} which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for n → ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension). === Limiting density of discrete points === It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when x is a dimensioned variable. f(x) will then have the units of 1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If Δ is some "standard" value of x (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as: H = ∫ − ∞ ∞ f ( x ) log ⁡ ( f ( x ) Δ ) d x , {\displaystyle \mathrm {H} =\int _{-\infty }^{\infty }f(x)\log(f(x)\,\Delta )\,dx,} and the result will be the same for any choice of units for x. In fact, the limit of discrete entropy as N → ∞ {\displaystyle N\rightarrow \infty } would also include a term of log ⁡ ( N ) {\displaystyle \log(N)} , which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme. === Relative entropy === Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)m(dx) for some non-negative m-integrable function f with m-integral 1, then the relative entropy can be defined as D K L ( p ‖ m ) = ∫ log ⁡ ( f ( x ) ) p ( d x ) = ∫ f ( x ) log ⁡ ( f ( x ) ) m ( d x ) . {\displaystyle D_{\mathrm {KL} }(p\|m)=\int \log(f(x))p(dx)=\int f(x)\log(f(x))m(dx).} In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure m. 
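In the discrete case, with the counting measure as reference, this reduces to the familiar Kullback–Leibler divergence Σ p(x) log(p(x)/m(x)). The following sketch is illustrative code written for this summary (the distributions are arbitrary examples); it checks that the divergence is non-negative, vanishes when p = m, and takes different values for different reference measures m.

```python
import numpy as np

def relative_entropy(p, m, base=2):
    """D(p || m) = sum p(x) log(p(x)/m(x)) for discrete distributions.
    Assumes m(x) > 0 wherever p(x) > 0; terms with p(x) = 0 contribute 0."""
    p, m = np.asarray(p, dtype=float), np.asarray(m, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / m[mask]) / np.log(base)))

p = np.array([0.7, 0.2, 0.1])
uniform = np.array([1/3, 1/3, 1/3])
skewed = np.array([0.5, 0.3, 0.2])

print(relative_entropy(p, p))        # 0.0: the divergence vanishes when p = m
print(relative_entropy(p, uniform))  # ≈ 0.43 bits; equals log2(3) - H(p) for a uniform reference
print(relative_entropy(p, skewed))   # ≈ 0.12 bits: a different non-negative value for a different m
```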
The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure m. == Use in number theory == Terence Tao used entropy to make a useful connection in his work on the Erdős discrepancy problem. Intuitively, the idea behind the proof is as follows. Here the relevant random variable is defined using the Liouville function (a function useful for studying the distribution of primes), X H = λ ( n + H ) {\displaystyle \lambda (n+H)} . If there is little information, in the sense of Shannon entropy, between consecutive such random variables, then the sum over an interval [n, n+H] can become arbitrarily large; for example, a sequence of +1's (one of the values X H can take) has trivially low entropy and its sum becomes large. The key insight was that showing that the entropy decreases by non-negligible amounts as H grows, which in turn forces unbounded growth of a quantity built from this random variable, is equivalent to showing the unbounded growth required by the Erdős discrepancy problem. The proof is quite involved; it brought together breakthroughs not just in the novel use of Shannon entropy, but also in the use of the Liouville function along with averages of modulated multiplicative functions in short intervals. Proving it also broke the "parity barrier" for this specific problem. While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction. == Use in combinatorics == Entropy has become a useful quantity in combinatorics. === Loomis–Whitney inequality === A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset A ⊆ Zd, we have | A | d − 1 ≤ ∏ i = 1 d | P i ( A ) | {\displaystyle |A|^{d-1}\leq \prod _{i=1}^{d}|P_{i}(A)|} where Pi is the orthogonal projection in the ith coordinate: P i ( A ) = { ( x 1 , … , x i − 1 , x i + 1 , … , x d ) : ( x 1 , … , x d ) ∈ A } . {\displaystyle P_{i}(A)=\{(x_{1},\ldots ,x_{i-1},x_{i+1},\ldots ,x_{d}):(x_{1},\ldots ,x_{d})\in A\}.} The proof follows as a simple corollary of Shearer's inequality: if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then H [ ( X 1 , … , X d ) ] ≤ 1 r ∑ i = 1 n H [ ( X j ) j ∈ S i ] {\displaystyle \mathrm {H} [(X_{1},\ldots ,X_{d})]\leq {\frac {1}{r}}\sum _{i=1}^{n}\mathrm {H} [(X_{j})_{j\in S_{i}}]} where ( X j ) j ∈ S i {\displaystyle (X_{j})_{j\in S_{i}}} is the Cartesian product of random variables Xj with indexes j in Si (so the dimension of this vector is equal to the size of Si). We sketch how Loomis–Whitney follows from this: indeed, let X be a random variable distributed uniformly over A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) Η(X) = log|A|, where |A| denotes the cardinality of A. Let Si = {1, 2, ..., i−1, i+1, ..., d}. The range of ( X j ) j ∈ S i {\displaystyle (X_{j})_{j\in S_{i}}} is contained in Pi(A) and hence H [ ( X j ) j ∈ S i ] ≤ log ⁡ | P i ( A ) | {\displaystyle \mathrm {H} [(X_{j})_{j\in S_{i}}]\leq \log |P_{i}(A)|} . Now use this to bound the right side of Shearer's inequality and exponentiate both sides of the resulting inequality to obtain the bound. === Approximation to binomial coefficient === For integers 0 < k < n let q = k/n.
Then 2 n H ( q ) n + 1 ≤ ( n k ) ≤ 2 n H ( q ) , {\displaystyle {\frac {2^{n\mathrm {H} (q)}}{n+1}}\leq {\tbinom {n}{k}}\leq 2^{n\mathrm {H} (q)},} where : 43  H ( q ) = − q log 2 ⁡ ( q ) − ( 1 − q ) log 2 ⁡ ( 1 − q ) . {\displaystyle \mathrm {H} (q)=-q\log _{2}(q)-(1-q)\log _{2}(1-q).} A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately 2 n H ( k / n ) {\displaystyle 2^{n\mathrm {H} (k/n)}} . == Use in machine learning == Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty. Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node. The information gain in decision trees I G ( Y , X ) {\displaystyle IG(Y,X)} , which is equal to the difference between the entropy of Y {\displaystyle Y} and the conditional entropy of Y {\displaystyle Y} given X {\displaystyle X} , quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute X {\displaystyle X} . The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally. Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior. Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between ground truth and predicted distributions. In general, cross entropy is a measure of the differences between two datasets similar to the KL divergence (also known as relative entropy). == See also == == Notes == == References == This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. == Further reading == === Textbooks on information theory === Cover, T.M., Thomas, J.A. (2006), Elements of Information Theory – 2nd Ed., Wiley-Interscience, ISBN 978-0-471-24195-9 MacKay, D.J.C. (2003), Information Theory, Inference and Learning Algorithms, Cambridge University Press, ISBN 978-0-521-64298-9 Arndt, C. (2004), Information Measures: Information and its Description in Science and Engineering, Springer, ISBN 978-3-540-40855-0 Gray, R. M. (2011), Entropy and Information Theory, Springer. Martin, Nathaniel F.G.; England, James W. (2011). Mathematical Theory of Entropy. Cambridge University Press. ISBN 978-0-521-17738-2. Shannon, C.E., Weaver, W. (1949) The Mathematical Theory of Communication, Univ of Illinois Press. ISBN 0-252-72548-4 Stone, J. V. (2014), Chapter 1 of Information Theory: A Tutorial Introduction Archived 3 June 2016 at the Wayback Machine, University of Sheffield, England. ISBN 978-0956372857. == External links == "Entropy", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Entropy" Archived 4 June 2016 at the Wayback Machine at Rosetta Code—repository of implementations of Shannon entropy in different programming languages. Entropy Archived 31 May 2016 at the Wayback Machine an interdisciplinary journal on all aspects of the entropy concept. Open access.
Wikipedia/Shannon_Entropy
The conditional quantum entropy is an entropy measure used in quantum information theory. It is a generalization of the conditional entropy of classical information theory. For a bipartite state ρ A B {\displaystyle \rho ^{AB}} , the conditional entropy is written S ( A | B ) ρ {\displaystyle S(A|B)_{\rho }} , or H ( A | B ) ρ {\displaystyle H(A|B)_{\rho }} , depending on the notation being used for the von Neumann entropy. The quantum conditional entropy was defined in terms of a conditional density operator ρ A | B {\displaystyle \rho _{A|B}} by Nicolas Cerf and Chris Adami, who showed that quantum conditional entropies can be negative, something that is forbidden in classical physics. The negativity of quantum conditional entropy is a sufficient criterion for quantum non-separability. In what follows, we use the notation S ( ⋅ ) {\displaystyle S(\cdot )} for the von Neumann entropy, which will simply be called "entropy". == Definition == Given a bipartite quantum state ρ A B {\displaystyle \rho ^{AB}} , the entropy of the joint system AB is S ( A B ) ρ = d e f S ( ρ A B ) {\displaystyle S(AB)_{\rho }\ {\stackrel {\mathrm {def} }{=}}\ S(\rho ^{AB})} , and the entropies of the subsystems are S ( A ) ρ = d e f S ( ρ A ) = S ( t r B ρ A B ) {\displaystyle S(A)_{\rho }\ {\stackrel {\mathrm {def} }{=}}\ S(\rho ^{A})=S(\mathrm {tr} _{B}\rho ^{AB})} and S ( B ) ρ {\displaystyle S(B)_{\rho }} . The von Neumann entropy measures an observer's uncertainty about the value of the state, that is, how much the state is a mixed state. By analogy with the classical conditional entropy, one defines the conditional quantum entropy as S ( A | B ) ρ = d e f S ( A B ) ρ − S ( B ) ρ {\displaystyle S(A|B)_{\rho }\ {\stackrel {\mathrm {def} }{=}}\ S(AB)_{\rho }-S(B)_{\rho }} . An equivalent operational definition of the quantum conditional entropy (as a measure of the quantum communication cost or surplus when performing quantum state merging) was given by Michał Horodecki, Jonathan Oppenheim, and Andreas Winter. == Properties == Unlike the classical conditional entropy, the conditional quantum entropy can be negative. This is true even though the (quantum) von Neumann entropy of single variable is never negative. The negative conditional entropy is also known as the coherent information, and gives the additional number of bits above the classical limit that can be transmitted in a quantum dense coding protocol. Positive conditional entropy of a state thus means the state cannot reach even the classical limit, while the negative conditional entropy provides for additional information. == References == Nielsen, Michael A.; Chuang, Isaac L. (2010). Quantum Computation and Quantum Information (2nd ed.). Cambridge: Cambridge University Press. ISBN 978-1-107-00217-3. OCLC 844974180. Wilde, Mark M. (2017), "Preface to the Second Edition", Quantum Information Theory, Cambridge University Press, pp. xi–xii, arXiv:1106.1445, Bibcode:2011arXiv1106.1445W, doi:10.1017/9781316809976.001, ISBN 9781316809976, S2CID 2515538
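The definition S(A|B)ρ = S(AB)ρ − S(B)ρ and the possibility of negative values can be checked directly on small density matrices. The following NumPy sketch is illustrative only (the helper names and the two example states are choices made here, not code from the cited texts): for a maximally entangled Bell state it returns S(A|B) = −1, while for a classically correlated mixture it returns 0.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def reduced_state_B(rho_ab, dA, dB):
    """Partial trace over subsystem A of a (dA*dB) x (dA*dB) density matrix."""
    return np.trace(rho_ab.reshape(dA, dB, dA, dB), axis1=0, axis2=2)

# Maximally entangled Bell state |phi+> = (|00> + |11>)/sqrt(2): S(AB) = 0, S(B) = 1.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(phi, phi)

# Classically correlated mixture (|00><00| + |11><11|)/2: S(AB) = 1, S(B) = 1.
rho_cc = np.diag([0.5, 0.0, 0.0, 0.5])

for name, rho in [("Bell state", rho_bell), ("classical mixture", rho_cc)]:
    S_AB = von_neumann_entropy(rho)
    S_B = von_neumann_entropy(reduced_state_B(rho, 2, 2))
    print(name, "S(A|B) =", S_AB - S_B)   # -1.0 for the Bell state, 0.0 for the mixture
```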
Wikipedia/Conditional_quantum_entropy
In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. Probability density is the probability per unit length, in other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample. More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and the area under the entire curve is equal to 1. The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables. == Example == Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives exactly 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on. In this example, the ratio (probability of living during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour−1). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour−1. This quantity 2 hour−1 is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour−1) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. 
For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour−1)×(1 nanosecond) ≈ 6×10−13 (using the unit conversion 3.6×1012 nanoseconds = 1 hour). There is a probability density function f with f(5 hours) = 2 hour−1. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window. == Absolutely continuous univariate distributions == A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable X {\displaystyle X} has density f X {\displaystyle f_{X}} , where f X {\displaystyle f_{X}} is a non-negative Lebesgue-integrable function, if: Pr [ a ≤ X ≤ b ] = ∫ a b f X ( x ) d x . {\displaystyle \Pr[a\leq X\leq b]=\int _{a}^{b}f_{X}(x)\,dx.} Hence, if F X {\displaystyle F_{X}} is the cumulative distribution function of X {\displaystyle X} , then: F X ( x ) = ∫ − ∞ x f X ( u ) d u , {\displaystyle F_{X}(x)=\int _{-\infty }^{x}f_{X}(u)\,du,} and (if f X {\displaystyle f_{X}} is continuous at x {\displaystyle x} ) f X ( x ) = d d x F X ( x ) . {\displaystyle f_{X}(x)={\frac {d}{dx}}F_{X}(x).} Intuitively, one can think of f X ( x ) d x {\displaystyle f_{X}(x)\,dx} as being the probability of X {\displaystyle X} falling within the infinitesimal interval [ x , x + d x ] {\displaystyle [x,x+dx]} . == Formal definition == (This definition may be extended to any probability distribution using the measure-theoretic definition of probability.) A random variable X {\displaystyle X} with values in a measurable space ( X , A ) {\displaystyle ({\mathcal {X}},{\mathcal {A}})} (usually R n {\displaystyle \mathbb {R} ^{n}} with the Borel sets as measurable subsets) has as probability distribution the pushforward measure X∗P on ( X , A ) {\displaystyle ({\mathcal {X}},{\mathcal {A}})} : the density of X {\displaystyle X} with respect to a reference measure μ {\displaystyle \mu } on ( X , A ) {\displaystyle ({\mathcal {X}},{\mathcal {A}})} is the Radon–Nikodym derivative: f = d X ∗ P d μ . {\displaystyle f={\frac {dX_{*}P}{d\mu }}.} That is, f is any measurable function with the property that: Pr [ X ∈ A ] = ∫ X − 1 A d P = ∫ A f d μ {\displaystyle \Pr[X\in A]=\int _{X^{-1}A}\,dP=\int _{A}f\,d\mu } for any measurable set A ∈ A . {\displaystyle A\in {\mathcal {A}}.} === Discussion === In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof). It is not possible to define a density with reference to an arbitrary measure (e.g. one can not choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere. == Further details == Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere. The standard normal distribution has probability density f ( x ) = 1 2 π e − x 2 / 2 . 
{\displaystyle f(x)={\frac {1}{\sqrt {2\pi }}}\,e^{-x^{2}/2}.} If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as E ⁡ [ X ] = ∫ − ∞ ∞ x f ( x ) d x . {\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }x\,f(x)\,dx.} Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point. A distribution has a density function if its cumulative distribution function F(x) is absolutely continuous. In this case: F is almost everywhere differentiable, and its derivative can be used as probability density: d d x F ( x ) = f ( x ) . {\displaystyle {\frac {d}{dx}}F(x)=f(x).} If a probability distribution admits a density, then the probability of every one-point set {a} is zero; the same holds for finite and countable sets. Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero. In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following: If dt is an infinitely small number, the probability that X is included within the interval (t, t + dt) is equal to f(t) dt, or: Pr ( t < X < t + d t ) = f ( t ) d t . {\displaystyle \Pr(t<X<t+dt)=f(t)\,dt.} == Link between discrete and continuous distributions == It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above, it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability 1⁄2 each. The density of probability associated with this variable is: f ( t ) = 1 2 ( δ ( t + 1 ) + δ ( t − 1 ) ) . {\displaystyle f(t)={\frac {1}{2}}(\delta (t+1)+\delta (t-1)).} More generally, if a discrete variable can take n different values among real numbers, then the associated probability density function is: f ( t ) = ∑ i = 1 n p i δ ( t − x i ) , {\displaystyle f(t)=\sum _{i=1}^{n}p_{i}\,\delta (t-x_{i}),} where x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are the discrete values accessible to the variable and p 1 , … , p n {\displaystyle p_{1},\ldots ,p_{n}} are the probabilities associated with these values. This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability. == Families of densities == It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. 
For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by μ {\displaystyle \mu } and σ 2 {\displaystyle \sigma ^{2}} respectively, giving the family of densities f ( x ; μ , σ 2 ) = 1 σ 2 π e − 1 2 ( x − μ σ ) 2 . {\displaystyle f(x;\mu ,\sigma ^{2})={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x-\mu }{\sigma }}\right)^{2}}.} Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring— equals 1). This normalization factor is outside the kernel of the distribution. Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones. == Densities associated with multiple variables == For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is Pr ( X 1 , … , X n ∈ D ) = ∫ D f X 1 , … , X n ( x 1 , … , x n ) d x 1 ⋯ d x n . {\displaystyle \Pr \left(X_{1},\ldots ,X_{n}\in D\right)=\int _{D}f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{n}.} If F(x1, ..., xn) = Pr(X1 ≤ x1, ..., Xn ≤ xn) is the cumulative distribution function of the vector (X1, ..., Xn), then the joint probability density function can be computed as a partial derivative f ( x ) = ∂ n F ∂ x 1 ⋯ ∂ x n | x {\displaystyle f(x)=\left.{\frac {\partial ^{n}F}{\partial x_{1}\cdots \partial x_{n}}}\right|_{x}} === Marginal densities === For i = 1, 2, ..., n, let fXi(xi) be the probability density function associated with variable Xi alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X1, ..., Xn by integrating over all values of the other n − 1 variables: f X i ( x i ) = ∫ f ( x 1 , … , x n ) d x 1 ⋯ d x i − 1 d x i + 1 ⋯ d x n . {\displaystyle f_{X_{i}}(x_{i})=\int f(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{i-1}\,dx_{i+1}\cdots dx_{n}.} === Independence === Continuous random variables X1, ..., Xn admitting a joint density are all independent from each other if f X 1 , … , X n ( x 1 , … , x n ) = f X 1 ( x 1 ) ⋯ f X n ( x n ) . 
{\displaystyle f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{X_{1}}(x_{1})\cdots f_{X_{n}}(x_{n}).} === Corollary === If the joint probability density function of a vector of n random variables can be factored into a product of n functions of one variable f X 1 , … , X n ( x 1 , … , x n ) = f 1 ( x 1 ) ⋯ f n ( x n ) , {\displaystyle f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{1}(x_{1})\cdots f_{n}(x_{n}),} (where each fi is not necessarily a density) then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by f X i ( x i ) = f i ( x i ) ∫ f i ( x ) d x . {\displaystyle f_{X_{i}}(x_{i})={\frac {f_{i}(x_{i})}{\int f_{i}(x)\,dx}}.} === Example === This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call R → {\displaystyle {\vec {R}}} a 2-dimensional random vector of coordinates (X, Y): the probability to obtain R → {\displaystyle {\vec {R}}} in the quarter plane of positive x and y is Pr ( X > 0 , Y > 0 ) = ∫ 0 ∞ ∫ 0 ∞ f X , Y ( x , y ) d x d y . {\displaystyle \Pr \left(X>0,Y>0\right)=\int _{0}^{\infty }\int _{0}^{\infty }f_{X,Y}(x,y)\,dx\,dy.} == Function of random variables and change of variables in the probability density function == If the probability density function of a random variable (or vector) X is given as fX(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape fg(X) = fY using a known (for instance, uniform) random number generator. It is tempting to think that in order to find the expected value E(g(X)), one must first find the probability density fg(X) of the new random variable Y = g(X). However, rather than computing E ⁡ ( g ( X ) ) = ∫ − ∞ ∞ y f g ( X ) ( y ) d y , {\displaystyle \operatorname {E} {\big (}g(X){\big )}=\int _{-\infty }^{\infty }yf_{g(X)}(y)\,dy,} one may find instead E ⁡ ( g ( X ) ) = ∫ − ∞ ∞ g ( x ) f X ( x ) d x . {\displaystyle \operatorname {E} {\big (}g(X){\big )}=\int _{-\infty }^{\infty }g(x)f_{X}(x)\,dx.} The values of the two integrals are the same in all cases in which both X and g(X) actually have probability density functions. It is not necessary that g be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician. === Scalar to scalar === Let g : R → R {\displaystyle g:\mathbb {R} \to \mathbb {R} } be a monotonic function, then the resulting density function is f Y ( y ) = f X ( g − 1 ( y ) ) | d d y ( g − 1 ( y ) ) | . {\displaystyle f_{Y}(y)=f_{X}{\big (}g^{-1}(y){\big )}\left|{\frac {d}{dy}}{\big (}g^{-1}(y){\big )}\right|.} Here g−1 denotes the inverse function. This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is, | f Y ( y ) d y | = | f X ( x ) d x | , {\displaystyle \left|f_{Y}(y)\,dy\right|=\left|f_{X}(x)\,dx\right|,} or f Y ( y ) = | d x d y | f X ( x ) = | d d y ( x ) | f X ( x ) = | d d y ( g − 1 ( y ) ) | f X ( g − 1 ( y ) ) = | ( g − 1 ) ′ ( y ) | ⋅ f X ( g − 1 ( y ) ) . 
{\displaystyle f_{Y}(y)=\left|{\frac {dx}{dy}}\right|f_{X}(x)=\left|{\frac {d}{dy}}(x)\right|f_{X}(x)=\left|{\frac {d}{dy}}{\big (}g^{-1}(y){\big )}\right|f_{X}{\big (}g^{-1}(y){\big )}={\left|\left(g^{-1}\right)'(y)\right|}\cdot f_{X}{\big (}g^{-1}(y){\big )}.} For functions that are not monotonic, the probability density function for y is ∑ k = 1 n ( y ) | d d y g k − 1 ( y ) | ⋅ f X ( g k − 1 ( y ) ) , {\displaystyle \sum _{k=1}^{n(y)}\left|{\frac {d}{dy}}g_{k}^{-1}(y)\right|\cdot f_{X}{\big (}g_{k}^{-1}(y){\big )},} where n(y) is the number of solutions in x for the equation g ( x ) = y {\displaystyle g(x)=y} , and g k − 1 ( y ) {\displaystyle g_{k}^{-1}(y)} are these solutions. === Vector to vector === Suppose x is an n-dimensional random variable with joint density f. If y = G(x), where G is a bijective, differentiable function, then y has density pY: p Y ( y ) = f ( G − 1 ( y ) ) | det [ d G − 1 ( z ) d z | z = y ] | {\displaystyle p_{Y}(\mathbf {y} )=f{\Bigl (}G^{-1}(\mathbf {y} ){\Bigr )}\left|\det \left[\left.{\frac {dG^{-1}(\mathbf {z} )}{d\mathbf {z} }}\right|_{\mathbf {z} =\mathbf {y} }\right]\right|} with the differential regarded as the Jacobian of the inverse of G(⋅), evaluated at y. For example, in the 2-dimensional case x = (x1, x2), suppose the transform G is given as y1 = G1(x1, x2), y2 = G2(x1, x2) with inverses x1 = G1−1(y1, y2), x2 = G2−1(y1, y2). The joint distribution for y = (y1, y2) has density p Y 1 , Y 2 ( y 1 , y 2 ) = f X 1 , X 2 ( G 1 − 1 ( y 1 , y 2 ) , G 2 − 1 ( y 1 , y 2 ) ) | ∂ G 1 − 1 ∂ y 1 ∂ G 2 − 1 ∂ y 2 − ∂ G 1 − 1 ∂ y 2 ∂ G 2 − 1 ∂ y 1 | . {\displaystyle p_{Y_{1},Y_{2}}(y_{1},y_{2})=f_{X_{1},X_{2}}{\big (}G_{1}^{-1}(y_{1},y_{2}),G_{2}^{-1}(y_{1},y_{2}){\big )}\left\vert {\frac {\partial G_{1}^{-1}}{\partial y_{1}}}{\frac {\partial G_{2}^{-1}}{\partial y_{2}}}-{\frac {\partial G_{1}^{-1}}{\partial y_{2}}}{\frac {\partial G_{2}^{-1}}{\partial y_{1}}}\right\vert .} === Vector to scalar === Let V : R n → R {\displaystyle V:\mathbb {R} ^{n}\to \mathbb {R} } be a differentiable function and X {\displaystyle X} be a random vector taking values in R n {\displaystyle \mathbb {R} ^{n}} , f X {\displaystyle f_{X}} be the probability density function of X {\displaystyle X} and δ ( ⋅ ) {\displaystyle \delta (\cdot )} be the Dirac delta function. It is possible to use the formulas above to determine f Y {\displaystyle f_{Y}} , the probability density function of Y = V ( X ) {\displaystyle Y=V(X)} , which will be given by f Y ( y ) = ∫ R n f X ( x ) δ ( y − V ( x ) ) d x . {\displaystyle f_{Y}(y)=\int _{\mathbb {R} ^{n}}f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,d\mathbf {x} .} This result leads to the law of the unconscious statistician: E Y ⁡ [ Y ] = ∫ R y f Y ( y ) d y = ∫ R y ∫ R n f X ( x ) δ ( y − V ( x ) ) d x d y = ∫ R n ∫ R y f X ( x ) δ ( y − V ( x ) ) d y d x = ∫ R n V ( x ) f X ( x ) d x = E X ⁡ [ V ( X ) ] . 
{\displaystyle {\begin{aligned}\operatorname {E} _{Y}[Y]&=\int _{\mathbb {R} }yf_{Y}(y)\,dy\\&=\int _{\mathbb {R} }y\int _{\mathbb {R} ^{n}}f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,d\mathbf {x} \,dy\\&=\int _{{\mathbb {R} }^{n}}\int _{\mathbb {R} }yf_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,dy\,d\mathbf {x} \\&=\int _{\mathbb {R} ^{n}}V(\mathbf {x} )f_{X}(\mathbf {x} )\,d\mathbf {x} =\operatorname {E} _{X}[V(X)].\end{aligned}}} Proof: Let Z {\displaystyle Z} be a collapsed random variable with probability density function p Z ( z ) = δ ( z ) {\displaystyle p_{Z}(z)=\delta (z)} (i.e., a constant equal to zero). Let the random vector X ~ {\displaystyle {\tilde {X}}} and the transform H {\displaystyle H} be defined as H ( Z , X ) = [ Z + V ( X ) X ] = [ Y X ~ ] . {\displaystyle H(Z,X)={\begin{bmatrix}Z+V(X)\\X\end{bmatrix}}={\begin{bmatrix}Y\\{\tilde {X}}\end{bmatrix}}.} It is clear that H {\displaystyle H} is a bijective mapping, and the Jacobian of H − 1 {\displaystyle H^{-1}} is given by: d H − 1 ( y , x ~ ) d y d x ~ = [ 1 − d V ( x ~ ) d x ~ 0 n × 1 I n × n ] , {\displaystyle {\frac {dH^{-1}(y,{\tilde {\mathbf {x} }})}{dy\,d{\tilde {\mathbf {x} }}}}={\begin{bmatrix}1&-{\frac {dV({\tilde {\mathbf {x} }})}{d{\tilde {\mathbf {x} }}}}\\\mathbf {0} _{n\times 1}&\mathbf {I} _{n\times n}\end{bmatrix}},} which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that f Y , X ( y , x ) = f X ( x ) δ ( y − V ( x ) ) , {\displaystyle f_{Y,X}(y,x)=f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )},} which if marginalized over x {\displaystyle x} leads to the desired probability density function. == Sums of independent random variables == The probability density function of the sum of two independent random variables U and V, each of which has a probability density function, is the convolution of their separate density functions: f U + V ( x ) = ∫ − ∞ ∞ f U ( y ) f V ( x − y ) d y = ( f U ∗ f V ) ( x ) {\displaystyle f_{U+V}(x)=\int _{-\infty }^{\infty }f_{U}(y)f_{V}(x-y)\,dy=\left(f_{U}*f_{V}\right)(x)} It is possible to generalize the previous relation to a sum of N independent random variables, with densities U1, ..., UN: f U 1 + ⋯ + U ( x ) = ( f U 1 ∗ ⋯ ∗ f U N ) ( x ) {\displaystyle f_{U_{1}+\cdots +U}(x)=\left(f_{U_{1}}*\cdots *f_{U_{N}}\right)(x)} This can be derived from a two-way change of variables involving Y = U + V and Z = V, similarly to the example below for the quotient of independent random variables. == Products and quotients of independent random variables == Given two independent random variables U and V, each of which has a probability density function, the density of the product Y = UV and quotient Y = U/V can be computed by a change of variables. === Example: Quotient distribution === To compute the quotient Y = U/V of two independent random variables U and V, define the following transformation: Y = U / V Z = V {\displaystyle {\begin{aligned}Y&=U/V\\[1ex]Z&=V\end{aligned}}} Then, the joint density p(y,z) can be computed by a change of variables from U,V to Y,Z, and Y can be derived by marginalizing out Z from the joint density. 
The inverse transformation is U = Y Z V = Z {\displaystyle {\begin{aligned}U&=YZ\\V&=Z\end{aligned}}} The absolute value of the Jacobian matrix determinant J ( U , V ∣ Y , Z ) {\displaystyle J(U,V\mid Y,Z)} of this transformation is: | det [ ∂ u ∂ y ∂ u ∂ z ∂ v ∂ y ∂ v ∂ z ] | = | det [ z y 0 1 ] | = | z | . {\displaystyle \left|\det {\begin{bmatrix}{\frac {\partial u}{\partial y}}&{\frac {\partial u}{\partial z}}\\{\frac {\partial v}{\partial y}}&{\frac {\partial v}{\partial z}}\end{bmatrix}}\right|=\left|\det {\begin{bmatrix}z&y\\0&1\end{bmatrix}}\right|=|z|.} Thus: p ( y , z ) = p ( u , v ) J ( u , v ∣ y , z ) = p ( u ) p ( v ) J ( u , v ∣ y , z ) = p U ( y z ) p V ( z ) | z | . {\displaystyle p(y,z)=p(u,v)\,J(u,v\mid y,z)=p(u)\,p(v)\,J(u,v\mid y,z)=p_{U}(yz)\,p_{V}(z)\,|z|.} And the distribution of Y can be computed by marginalizing out Z: p ( y ) = ∫ − ∞ ∞ p U ( y z ) p V ( z ) | z | d z {\displaystyle p(y)=\int _{-\infty }^{\infty }p_{U}(yz)\,p_{V}(z)\,|z|\,dz} This method crucially requires that the transformation from U,V to Y,Z be bijective. The above transformation meets this because Z can be mapped directly back to V, and for a given V the quotient U/V is monotonic. This is similarly the case for the sum U + V, difference U − V and product UV. Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables. === Example: Quotient of two standard normals === Given two standard normal variables U and V, the quotient can be computed as follows. First, the variables have the following density functions: p ( u ) = 1 2 π e − u 2 / 2 p ( v ) = 1 2 π e − v 2 / 2 {\displaystyle {\begin{aligned}p(u)&={\frac {1}{\sqrt {2\pi }}}e^{-{u^{2}}/{2}}\\[1ex]p(v)&={\frac {1}{\sqrt {2\pi }}}e^{-{v^{2}}/{2}}\end{aligned}}} We transform as described above: Y = U / V Z = V {\displaystyle {\begin{aligned}Y&=U/V\\[1ex]Z&=V\end{aligned}}} This leads to: p ( y ) = ∫ − ∞ ∞ p U ( y z ) p V ( z ) | z | d z = ∫ − ∞ ∞ 1 2 π e − 1 2 y 2 z 2 1 2 π e − 1 2 z 2 | z | d z = ∫ − ∞ ∞ 1 2 π e − 1 2 ( y 2 + 1 ) z 2 | z | d z = 2 ∫ 0 ∞ 1 2 π e − 1 2 ( y 2 + 1 ) z 2 z d z = ∫ 0 ∞ 1 π e − ( y 2 + 1 ) u d u u = 1 2 z 2 = − 1 π ( y 2 + 1 ) e − ( y 2 + 1 ) u | u = 0 ∞ = 1 π ( y 2 + 1 ) {\displaystyle {\begin{aligned}p(y)&=\int _{-\infty }^{\infty }p_{U}(yz)\,p_{V}(z)\,|z|\,dz\\[5pt]&=\int _{-\infty }^{\infty }{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}y^{2}z^{2}}{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}z^{2}}|z|\,dz\\[5pt]&=\int _{-\infty }^{\infty }{\frac {1}{2\pi }}e^{-{\frac {1}{2}}\left(y^{2}+1\right)z^{2}}|z|\,dz\\[5pt]&=2\int _{0}^{\infty }{\frac {1}{2\pi }}e^{-{\frac {1}{2}}\left(y^{2}+1\right)z^{2}}z\,dz\\[5pt]&=\int _{0}^{\infty }{\frac {1}{\pi }}e^{-\left(y^{2}+1\right)u}\,du&&u={\tfrac {1}{2}}z^{2}\\[5pt]&=\left.-{\frac {1}{\pi \left(y^{2}+1\right)}}e^{-\left(y^{2}+1\right)u}\right|_{u=0}^{\infty }\\[5pt]&={\frac {1}{\pi \left(y^{2}+1\right)}}\end{aligned}}} This is the density of a standard Cauchy distribution. 
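The result can be checked numerically. A short Python sketch (assuming NumPy; the sample size and bin grid are arbitrary choices) draws the quotient of two independent standard normals and compares the empirical density with 1/(π(1 + y²)):

```python
# Monte Carlo check that U/V for independent standard normals is standard Cauchy.
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(1_000_000)
v = rng.standard_normal(1_000_000)
y = u / v

edges = np.linspace(-5, 5, 101)
counts, _ = np.histogram(y, bins=edges)
width = edges[1] - edges[0]
empirical = counts / (y.size * width)            # empirical density on [-5, 5]
centers = 0.5 * (edges[:-1] + edges[1:])
cauchy = 1.0 / (np.pi * (1.0 + centers**2))      # density derived above
print(np.max(np.abs(empirical - cauchy)))        # small sampling error, close to 0
```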
== See also == Density estimation – Estimate of an unobservable underlying probability density function Kernel density estimation – Estimator Likelihood function – Function related to statistics and probability theory List of probability distributions Probability amplitude – Complex number whose squared absolute value is a probability Probability mass function – Discrete-variable probability distribution Secondary measure – Concept in mathematics Merging independent probability density functions Uses as position probability density: Atomic orbital – Function describing an electron in an atom Home range – The area in which an animal lives and moves on a periodic basis == References == == Further reading == Billingsley, Patrick (1979). Probability and Measure. New York, Toronto, London: John Wiley and Sons. ISBN 0-471-00710-2. Casella, George; Berger, Roger L. (2002). Statistical Inference (Second ed.). Thomson Learning. pp. 34–37. ISBN 0-534-24312-6. Stirzaker, David (2003). Elementary Probability. Cambridge University Press. ISBN 0-521-42028-8. Chapters 7 to 9 are about continuous variables. == External links == Ushakov, N.G. (2001) [1994], "Density of a probability distribution", Encyclopedia of Mathematics, EMS Press Weisstein, Eric W. "Probability density function". MathWorld.
Wikipedia/Joint_probability_density_function
In algebra and in particular in algebraic combinatorics, a quasisymmetric function is any element in the ring of quasisymmetric functions which is in turn a subring of the formal power series ring with a countable number of variables. This ring generalizes the ring of symmetric functions. This ring can be realized as a specific limit of the rings of quasisymmetric polynomials in n variables, as n goes to infinity. This ring serves as universal structure in which relations between quasisymmetric polynomials can be expressed in a way independent of the number n of variables (but its elements are neither polynomials nor functions). == Definitions == The ring of quasisymmetric functions, denoted QSym, can be defined over any commutative ring R such as the integers. Quasisymmetric functions are power series of bounded degree in variables x 1 , x 2 , x 3 , … {\displaystyle x_{1},x_{2},x_{3},\dots } with coefficients in R, which are shift invariant in the sense that the coefficient of the monomial x 1 α 1 x 2 α 2 ⋯ x k α k {\displaystyle x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}\cdots x_{k}^{\alpha _{k}}} is equal to the coefficient of the monomial x i 1 α 1 x i 2 α 2 ⋯ x i k α k {\displaystyle x_{i_{1}}^{\alpha _{1}}x_{i_{2}}^{\alpha _{2}}\cdots x_{i_{k}}^{\alpha _{k}}} for any strictly increasing sequence of positive integers i 1 < i 2 < ⋯ < i k {\displaystyle i_{1}<i_{2}<\cdots <i_{k}} indexing the variables and any positive integer sequence ( α 1 , α 2 , … , α k ) {\displaystyle (\alpha _{1},\alpha _{2},\ldots ,\alpha _{k})} of exponents. Much of the study of quasisymmetric functions is based on that of symmetric functions. A quasisymmetric function in finitely many variables is a quasisymmetric polynomial. Both symmetric and quasisymmetric polynomials may be characterized in terms of actions of the symmetric group S n {\displaystyle S_{n}} on a polynomial ring in n {\displaystyle n} variables x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} . One such action of S n {\displaystyle S_{n}} permutes variables, changing a polynomial p ( x 1 , … , x n ) {\displaystyle p(x_{1},\dots ,x_{n})} by iteratively swapping pairs ( x i , x i + 1 ) {\displaystyle (x_{i},x_{i+1})} of variables having consecutive indices. Those polynomials unchanged by all such swaps form the subring of symmetric polynomials. A second action of S n {\displaystyle S_{n}} conditionally permutes variables, changing a polynomial p ( x 1 , … , x n ) {\displaystyle p(x_{1},\ldots ,x_{n})} by swapping pairs ( x i , x i + 1 ) {\displaystyle (x_{i},x_{i+1})} of variables except in monomials containing both variables. Those polynomials unchanged by all such conditional swaps form the subring of quasisymmetric polynomials. One quasisymmetric polynomial in four variables x 1 , x 2 , x 3 , x 4 {\displaystyle x_{1},x_{2},x_{3},x_{4}} is the polynomial x 1 2 x 2 x 3 + x 1 2 x 2 x 4 + x 1 2 x 3 x 4 + x 2 2 x 3 x 4 . {\displaystyle x_{1}^{2}x_{2}x_{3}+x_{1}^{2}x_{2}x_{4}+x_{1}^{2}x_{3}x_{4}+x_{2}^{2}x_{3}x_{4}.\,} The simplest symmetric polynomial containing these monomials is x 1 2 x 2 x 3 + x 1 2 x 2 x 4 + x 1 2 x 3 x 4 + x 2 2 x 3 x 4 + x 1 x 2 2 x 3 + x 1 x 2 2 x 4 + x 1 x 3 2 x 4 + x 2 x 3 2 x 4 + x 1 x 2 x 3 2 + x 1 x 2 x 4 2 + x 1 x 3 x 4 2 + x 2 x 3 x 4 2 . 
{\displaystyle {\begin{aligned}x_{1}^{2}x_{2}x_{3}+x_{1}^{2}x_{2}x_{4}+x_{1}^{2}x_{3}x_{4}+x_{2}^{2}x_{3}x_{4}+x_{1}x_{2}^{2}x_{3}+x_{1}x_{2}^{2}x_{4}+x_{1}x_{3}^{2}x_{4}+x_{2}x_{3}^{2}x_{4}\\{}+x_{1}x_{2}x_{3}^{2}+x_{1}x_{2}x_{4}^{2}+x_{1}x_{3}x_{4}^{2}+x_{2}x_{3}x_{4}^{2}.\,\end{aligned}}} == Important bases == QSym is a graded R-algebra, decomposing as QSym = ⨁ n ≥ 0 QSym n , {\displaystyle \operatorname {QSym} =\bigoplus _{n\geq 0}\operatorname {QSym} _{n},\,} where QSym n {\displaystyle \operatorname {QSym} _{n}} is the R {\displaystyle R} -span of all quasisymmetric functions that are homogeneous of degree n {\displaystyle n} . Two natural bases for QSym n {\displaystyle \operatorname {QSym} _{n}} are the monomial basis { M α } {\displaystyle \{M_{\alpha }\}} and the fundamental basis { F α } {\displaystyle \{F_{\alpha }\}} indexed by compositions α = ( α 1 , α 2 , … , α k ) {\displaystyle \alpha =(\alpha _{1},\alpha _{2},\ldots ,\alpha _{k})} of n {\displaystyle n} , denoted α ⊨ n {\displaystyle \alpha \vDash n} . The monomial basis consists of M 0 = 1 {\displaystyle M_{0}=1} and all formal power series M α = ∑ i 1 < i 2 < ⋯ < i k x i 1 α 1 x i 2 α 2 ⋯ x i k α k . {\displaystyle M_{\alpha }=\sum _{i_{1}<i_{2}<\cdots <i_{k}}x_{i_{1}}^{\alpha _{1}}x_{i_{2}}^{\alpha _{2}}\cdots x_{i_{k}}^{\alpha _{k}}.\,} The fundamental basis consists F 0 = 1 {\displaystyle F_{0}=1} and all formal power series F α = ∑ α ⪰ β M β , {\displaystyle F_{\alpha }=\sum _{\alpha \succeq \beta }M_{\beta },\,} where α ⪰ β {\displaystyle \alpha \succeq \beta } means we can obtain α {\displaystyle \alpha } by adding together adjacent parts of β {\displaystyle \beta } , for example, (3,2,4,2) ⪰ {\displaystyle \succeq } (3,1,1,1,2,1,2). Thus, when the ring R {\displaystyle R} is the ring of rational numbers, one has QSym n = span Q ⁡ { M α ∣ α ⊨ n } = span Q ⁡ { F α ∣ α ⊨ n } . {\displaystyle \operatorname {QSym} _{n}=\operatorname {span} _{\mathbb {Q} }\{M_{\alpha }\mid \alpha \vDash n\}=\operatorname {span} _{\mathbb {Q} }\{F_{\alpha }\mid \alpha \vDash n\}.\,} Then one can define the algebra of symmetric functions Λ = Λ 0 ⊕ Λ 1 ⊕ ⋯ {\displaystyle \Lambda =\Lambda _{0}\oplus \Lambda _{1}\oplus \cdots } as the subalgebra of QSym spanned by the monomial symmetric functions m 0 = 1 {\displaystyle m_{0}=1} and all formal power series m λ = ∑ M α , {\displaystyle m_{\lambda }=\sum M_{\alpha },} where the sum is over all compositions α {\displaystyle \alpha } which rearrange to the integer partition λ {\displaystyle \lambda } . Moreover, we have Λ n = Λ ∩ QSym n {\displaystyle \Lambda _{n}=\Lambda \cap \operatorname {QSym} _{n}} . For example, F ( 1 , 2 ) = M ( 1 , 2 ) + M ( 1 , 1 , 1 ) {\displaystyle F_{(1,2)}=M_{(1,2)}+M_{(1,1,1)}} and m ( 2 , 1 ) = M ( 2 , 1 ) + M ( 1 , 2 ) . {\displaystyle m_{(2,1)}=M_{(2,1)}+M_{(1,2)}.} Other important bases for quasisymmetric functions include the basis of quasisymmetric Schur functions, the "type I" and "type II" quasisymmetric power sums, and bases related to enumeration in matroids. == Applications == Quasisymmetric functions have been applied in enumerative combinatorics, symmetric function theory, representation theory, and number theory. Applications of quasisymmetric functions include enumeration of P-partitions, permutations, tableaux, chains of posets, reduced decompositions in finite Coxeter groups (via Stanley symmetric functions), and parking functions. 
In symmetric function theory and representation theory, applications include the study of Schubert polynomials, Macdonald polynomials, Hecke algebras, and Kazhdan–Lusztig polynomials. Often quasisymmetric functions provide a powerful bridge between combinatorial structures and symmetric functions. == Related algebras == As a graded Hopf algebra, the dual of the ring of quasisymmetric functions is the ring of noncommutative symmetric functions. Every symmetric function is also a quasisymmetric function, and hence the ring of symmetric functions is a subalgebra of the ring of quasisymmetric functions. The ring of quasisymmetric functions is the terminal object in the category of graded Hopf algebras with a single character. Hence any such Hopf algebra has a morphism to the ring of quasisymmetric functions. One example of this is the peak algebra. === Other related algebras === The Malvenuto–Reutenauer algebra is a Hopf algebra based on permutations that relates the rings of symmetric functions, quasisymmetric functions, and noncommutative symmetric functions (denoted Sym, QSym, and NSym respectively), as depicted in the following commutative diagram. The duality between QSym and NSym mentioned above is reflected in the main diagonal of this diagram. Many related Hopf algebras were constructed from Hopf monoids in the category of species by Aguiar and Mahajan. One can also construct the ring of quasisymmetric functions in noncommuting variables. == References == == External links == BIRS Workshop on Quasisymmetric Functions
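To make the monomial basis concrete, the following Python sketch (assuming SymPy; the function name monomial_qsym is ad hoc) builds M_α restricted to n variables and checks that M_(2,1,1) in four variables is exactly the example quasisymmetric polynomial from the Definitions section:

```python
# Monomial quasisymmetric polynomial M_alpha in n variables, as a SymPy expression.
from itertools import combinations
from sympy import symbols, expand

def monomial_qsym(alpha, n):
    xs = symbols(f"x1:{n + 1}")                          # x1, ..., xn
    total = 0
    for idx in combinations(range(n), len(alpha)):       # i1 < i2 < ... < ik
        term = 1
        for i, a in zip(idx, alpha):
            term *= xs[i] ** a
        total += term
    return expand(total)

x1, x2, x3, x4 = symbols("x1:5")
example = x1**2*x2*x3 + x1**2*x2*x4 + x1**2*x3*x4 + x2**2*x3*x4
print(monomial_qsym((2, 1, 1), 4) - example)             # prints 0
```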
Wikipedia/Quasisymmetric_function
In signal processing, discrete transforms are mathematical transforms, often linear transforms, of signals between discrete domains, such as between discrete time and discrete frequency. Many common integral transforms used in signal processing have their discrete counterparts. For example, for the Fourier transform the counterpart is the discrete Fourier transform. In addition to spectral analysis of signals, discrete transforms play an important role in data compression, signal detection, digital filtering and correlation analysis. The discrete cosine transform (DCT) is the most widely used transform coding compression algorithm in digital media, followed by the discrete wavelet transform (DWT). Transforms between a discrete domain and a continuous domain are not discrete transforms. For example, the discrete-time Fourier transform and the Z-transform, from discrete time to continuous frequency, and the Fourier series, from continuous time to discrete frequency, are outside the class of discrete transforms. Classical signal processing deals with one-dimensional discrete transforms. Other application areas, such as image processing, computer vision, high-definition television, visual telephony, etc., make use of two-dimensional and, in general, multidimensional discrete transforms. == See also == List of transforms § Discrete transforms == References ==
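As a minimal illustration of a transform between discrete domains (assuming NumPy's FFT routines), the discrete Fourier transform maps N time samples to N frequency coefficients, and the inverse transform recovers the original samples:

```python
# N discrete-time samples map to N discrete-frequency coefficients and back.
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0])     # 4 discrete-time samples
X = np.fft.fft(x)                        # 4 discrete-frequency coefficients (DFT)
print(X.shape)                           # (4,)
print(np.allclose(np.fft.ifft(X), x))    # True: the transform is invertible
```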
Wikipedia/Discrete_transform
The joint quantum entropy generalizes the classical joint entropy to the context of quantum information theory. Intuitively, given two quantum states ρ {\displaystyle \rho } and σ {\displaystyle \sigma } , represented as density operators that are subparts of a quantum system, the joint quantum entropy is a measure of the total uncertainty or entropy of the joint system. It is written S ( ρ , σ ) {\displaystyle S(\rho ,\sigma )} or H ( ρ , σ ) {\displaystyle H(\rho ,\sigma )} , depending on the notation being used for the von Neumann entropy. Like other entropies, the joint quantum entropy is measured in bits, i.e. the logarithm is taken in base 2. In this article, we will use S ( ρ , σ ) {\displaystyle S(\rho ,\sigma )} for the joint quantum entropy. == Background == In information theory, for any classical random variable X {\displaystyle X} , the classical Shannon entropy H ( X ) {\displaystyle H(X)} is a measure of how uncertain we are about the outcome of X {\displaystyle X} . For example, if X {\displaystyle X} is a probability distribution concentrated at one point, the outcome of X {\displaystyle X} is certain and therefore its entropy H ( X ) = 0 {\displaystyle H(X)=0} . At the other extreme, if X {\displaystyle X} is the uniform probability distribution with n {\displaystyle n} possible values, intuitively one would expect X {\displaystyle X} is associated with the most uncertainty. Indeed, such uniform probability distributions have maximum possible entropy H ( X ) = log 2 ⁡ ( n ) {\displaystyle H(X)=\log _{2}(n)} . In quantum information theory, the notion of entropy is extended from probability distributions to quantum states, or density matrices. For a state ρ {\displaystyle \rho } , the von Neumann entropy is defined by − Tr ⁡ ρ log ⁡ ρ . {\displaystyle -\operatorname {Tr} \rho \log \rho .} Applying the spectral theorem, or Borel functional calculus for infinite dimensional systems, we see that it generalizes the classical entropy. The physical meaning remains the same. A maximally mixed state, the quantum analog of the uniform probability distribution, has maximum von Neumann entropy. On the other hand, a pure state, or a rank one projection, will have zero von Neumann entropy. We write the von Neumann entropy S ( ρ ) {\displaystyle S(\rho )} (or sometimes H ( ρ ) {\displaystyle H(\rho )} . == Definition == Given a quantum system with two subsystems A and B, the term joint quantum entropy simply refers to the von Neumann entropy of the combined system. This is to distinguish from the entropy of the subsystems. In symbols, if the combined system is in state ρ A B {\displaystyle \rho ^{AB}} , the joint quantum entropy is then S ( ρ A , ρ B ) = S ( ρ A B ) = − Tr ⁡ ( ρ A B log ⁡ ( ρ A B ) ) . {\displaystyle S(\rho ^{A},\rho ^{B})=S(\rho ^{AB})=-\operatorname {Tr} (\rho ^{AB}\log(\rho ^{AB})).} Each subsystem has its own entropy. The state of the subsystems are given by the partial trace operation. == Properties == The classical joint entropy is always at least equal to the entropy of each individual system. This is not the case for the joint quantum entropy. If the quantum state ρ A B {\displaystyle \rho ^{AB}} exhibits quantum entanglement, then the entropy of each subsystem may be larger than the joint entropy. This is equivalent to the fact that the conditional quantum entropy may be negative, while the classical conditional entropy may never be. Consider a maximally entangled state such as a Bell state. 
If ρ A B {\displaystyle \rho ^{AB}} is a Bell state, say, | Ψ ⟩ = 1 2 ( | 00 ⟩ + | 11 ⟩ ) , {\displaystyle \left|\Psi \right\rangle ={\frac {1}{\sqrt {2}}}\left(|00\rangle +|11\rangle \right),} then the total system is a pure state, with entropy 0, while each individual subsystem is a maximally mixed state, with maximum von Neumann entropy log ⁡ 2 = 1 {\displaystyle \log 2=1} . Thus the joint entropy of the combined system is less than that of its subsystems. This is because for entangled states, definite states cannot be assigned to subsystems, resulting in positive entropy. Notice that the above phenomenon cannot occur if a state is a separable pure state. In that case, the reduced states of the subsystems are also pure. Therefore, all entropies are zero. == Relations to other entropy measures == The joint quantum entropy S ( ρ A B ) {\displaystyle S(\rho ^{AB})} can be used to define the conditional quantum entropy: S ( ρ A | ρ B ) = d e f S ( ρ A , ρ B ) − S ( ρ B ) {\displaystyle S(\rho ^{A}|\rho ^{B})\ {\stackrel {\mathrm {def} }{=}}\ S(\rho ^{A},\rho ^{B})-S(\rho ^{B})} and the quantum mutual information: I ( ρ A : ρ B ) = d e f S ( ρ A ) + S ( ρ B ) − S ( ρ A , ρ B ) {\displaystyle I(\rho ^{A}:\rho ^{B})\ {\stackrel {\mathrm {def} }{=}}\ S(\rho ^{A})+S(\rho ^{B})-S(\rho ^{A},\rho ^{B})} These definitions parallel the use of the classical joint entropy to define the conditional entropy and mutual information. == See also == Quantum relative entropy Quantum mutual information == References == Nielsen, Michael A. and Isaac L. Chuang, Quantum Computation and Quantum Information. Cambridge University Press, 2000. ISBN 0-521-63235-8
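A short numerical check of this example (assuming NumPy; the partial traces are written with einsum and the helper names are ad hoc) compares the joint entropy of the Bell state with the entropies of its reduced states:

```python
# Joint entropy 0 versus subsystem entropies 1 bit each for a Bell state.
import numpy as np

def S(rho):                                    # von Neumann entropy in bits
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_A = np.einsum('ijkj->ik', rho)              # trace out B
rho_B = np.einsum('ijik->jk', rho)              # trace out A
print(S(rho.reshape(4, 4)), S(rho_A), S(rho_B)) # approximately 0.0, 1.0, 1.0
```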
Wikipedia/Joint_quantum_entropy
The S transform, as a time–frequency distribution, was developed in 1994 for analyzing geophysics data. In this way, the S transform is a generalization of the short-time Fourier transform (STFT), extending the continuous wavelet transform and overcoming some of its disadvantages. For one, modulation sinusoids are fixed with respect to the time axis; this localizes the scalable Gaussian window dilations and translations in the S transform. Moreover, the S transform doesn't have a cross-term problem and yields better signal clarity than the Gabor transform. However, the S transform has its own disadvantages: the clarity is worse than that of the Wigner distribution function and Cohen's class distribution functions. A fast S transform algorithm was invented in 2010. It reduces the computational complexity from O[N²·log(N)] to O[N·log(N)] and makes the transform one-to-one, where the transform has the same number of points as the source signal or image, compared to a storage complexity of N² for the original formulation. An implementation is available to the research community under an open source license. A general formulation of the S transform makes clear the relationship to other time–frequency transforms such as the Fourier, short-time Fourier, and wavelet transforms. == Definition == There are several ways to represent the idea of the S transform. Here, the S transform is derived as the phase correction of the continuous wavelet transform with the window being the Gaussian function. S-Transform S x ( t , f ) = ∫ − ∞ ∞ x ( τ ) | f | e − π ( t − τ ) 2 f 2 e − j 2 π f τ d τ {\displaystyle S_{x}(t,f)=\int _{-\infty }^{\infty }x(\tau )|f|e^{-\pi (t-\tau )^{2}f^{2}}e^{-j2\pi f\tau }\,d\tau } Inverse S-Transform x ( τ ) = ∫ − ∞ ∞ [ ∫ − ∞ ∞ S x ( t , f ) d t ] e j 2 π f τ d f {\displaystyle x(\tau )=\int _{-\infty }^{\infty }\left[\int _{-\infty }^{\infty }S_{x}(t,f)\,dt\right]\,e^{j2\pi f\tau }\,df} == Modified form == Spectrum Form The above definition implies that the S-transform can be expressed as the convolution of ( x ( τ ) e − j 2 π f τ ) {\displaystyle (x(\tau )e^{-j2\pi f\tau })} and ( | f | e − π t 2 f 2 ) {\displaystyle (|f|e^{-\pi t^{2}f^{2}})} . Applying the Fourier transform to both ( x ( τ ) e − j 2 π f τ ) {\displaystyle (x(\tau )e^{-j2\pi f\tau })} and ( | f | e − π t 2 f 2 ) {\displaystyle (|f|e^{-\pi t^{2}f^{2}})} gives S x ( t , f ) = ∫ − ∞ ∞ X ( f + α ) e − π α 2 / f 2 e j 2 π α t d α {\displaystyle S_{x}(t,f)=\int _{-\infty }^{\infty }X(f+\alpha )\,e^{-\pi \alpha ^{2}/f^{2}}\,e^{j2\pi \alpha t}\,d\alpha } . Discrete-time S-transform From the spectrum form of the S-transform, we can derive the discrete-time S-transform. Let t = n Δ T f = m Δ F α = p Δ F {\displaystyle t=n\Delta _{T}\,\,f=m\Delta _{F}\,\,\alpha =p\Delta _{F}} , where Δ T {\displaystyle \Delta _{T}} is the sampling interval and Δ F {\displaystyle \Delta _{F}} is the frequency sampling interval. The discrete-time S-transform can then be expressed as: S x ( n Δ T , m Δ F ) = ∑ p = 0 N − 1 X [ ( p + m ) Δ F ] e − π p 2 m 2 e j 2 π p n N {\displaystyle S_{x}(n\Delta _{T}\,,m\Delta _{F})=\sum _{p=0}^{N-1}X[(p+m)\,\Delta _{F}]\,e^{-\pi {\frac {p^{2}}{m^{2}}}}\,e^{\frac {j2\pi pn}{N}}} == Implementation of discrete-time S-transform == Below is the pseudocode of the implementation.
Step 1. Compute X [ p Δ F ] {\displaystyle X[p\Delta _{F}]\,} loop over m (voices) { Step 2. Compute e − π p 2 m 2 {\displaystyle e^{-\pi {\frac {p^{2}}{m^{2}}}}} for f = m Δ F {\displaystyle f=m\Delta _{F}} Step 3. Move X [ p Δ F ] {\displaystyle X[p\Delta _{F}]} to X [ ( p + m ) Δ F ] {\displaystyle X[(p+m)\Delta _{F}]} Step 4. Multiply Step 2 and Step 3: B [ m , p ] = X [ ( p + m ) Δ F ] ⋅ e − π p 2 m 2 {\displaystyle B[m,p]=X[(p+m)\Delta _{F}]\cdot e^{-\pi {\frac {p^{2}}{m^{2}}}}} Step 5. IDFT( B [ m , p ] {\displaystyle B[m,p]} ). Repeat. } == Comparison with other time–frequency analysis tools == === Comparison with Gabor transform === The only difference between the Gabor transform (GT) and the S transform is the window size. For the GT, the window is a Gaussian function ( e − π ( t − τ ) 2 ) {\displaystyle (e^{-\pi (t-\tau )^{2}})} , whereas the window function for the S transform is a function of f. With a window width that scales inversely with frequency, the S transform performs well in frequency domain analysis when the input frequency is low. When the input frequency is high, the S transform has better clarity in the time domain. This property makes the S transform a powerful tool for analyzing sound, because human hearing is more sensitive to the low-frequency part of a sound signal. === Comparison with Wigner transform === The main problem with the Wigner transform is the cross term, which stems from the auto-correlation function in the Wigner transform. This cross term may cause noise and distortions in signal analyses. S-transform analyses avoid this issue. === Comparison with the short-time Fourier transform === We can compare the S transform and the short-time Fourier transform (STFT). First, a high-frequency signal, a low-frequency signal, and a high-frequency burst signal are used in the experiment to compare the performance. The frequency-dependent resolution of the S transform allows the detection of the high-frequency burst. On the other hand, because the STFT uses a constant window width, its result has poorer definition. In the second experiment, two more high-frequency bursts are added to crossed chirps. In the result, all four frequencies were detected by the S transform. On the other hand, the two high-frequency bursts were not detected by the STFT; their cross term caused the STFT to show a single component at a lower frequency. == Applications == Signal filtering Magnetic resonance imaging (MRI) Power system disturbance recognition The S transform has been proven to be able to identify a few types of disturbances, like voltage sag, voltage swell, momentary interruption, and oscillatory transients. The S transform can also be applied to other types of disturbances such as notches and harmonics with sags and swells. The S transform generates contours which are suitable for simple visual inspection. However, the wavelet transform requires specific tools like standard multiresolution analysis. Geophysical signal analysis Reflection seismology Global seismology == See also == Laplace transform Wavelet transform Short-time Fourier transform == References == == Further reading == Rocco Ditommaso, Felice Carlo Ponzo, Gianluca Auletta (2015). Damage detection on framed structures: modal curvature evaluation using Stockwell Transform under seismic excitation. Earthquake Engineering and Engineering Vibration. June 2015, Volume 14, Issue 2, pp 265–274. Rocco Ditommaso, Marco Mucciarelli, Felice C. Ponzo (2010).
S-Transform based filter applied to the analysis of non-linear dynamic behaviour of soil and buildings. 14th European Conference on Earthquake Engineering. Proceedings Volume. Ohrid, Republic of Macedonia. August 30 – September 3, 2010. (downloadable from http://roccoditommaso.xoom.it) M. Mucciarelli, M. Bianca, R. Ditommaso, M.R. Gallipoli, A. Masi, C Milkereit, S. Parolai, M. Picozzi, M. Vona (2011). FAR FIELD DAMAGE ON RC BUILDINGS: THE CASE STUDY OF NAVELLI DURING THE L’AQUILA (ITALY) SEISMIC SEQUENCE, 2009. Bulletin of Earthquake Engineering. doi:10.1007/s10518-010-9201-y. J. J. Ding, "Time-frequency analysis and wavelet transform course note," the Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2007. Jaya Bharata Reddy, Dusmanta Kumar Mohanta, and B. M. Karan, "Power system disturbance recognition using wavelet and s-transform techniques," Birla institute of Technology, Mesra, Ranchi-835215, 2004. B. Boashash, "Notes on the use of the wigner distribution for time frequency signal analysis", IEEE Trans. on Acoust. Speech. and Signal Processing, vol. 26, no. 9, 1987 R. N. Bracewell, The Fourier Transform and Its Applications, McGraw Hill Book Company, New York, 1978 E. O. Brigham, The Fast Fourier Transform, Prentice-Hall Inc., Englewood Cliffs, New Jersey, 1974 Cohen, L. (1989). "Time-frequency distributions—A review". Proc. IEEE. 77 (7): 941–981. CiteSeerX 10.1.1.1026.2853. doi:10.1109/5.30749. I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis", IEEE Trans. on Information Theory, vol. 36, no. 5, Sept. 1990 Farge, M. (1992). "Wavelet transforms and their application to turbulence". Annual Review of Fluid Mechanics. 24: 395–457. doi:10.1146/annurev.fluid.24.1.395. D. Gabor, "Theory of communication", J. Inst. Elect. Eng., vol. 93, no. 3, pp. 429–457, 1946 Goupillaud, P.; Grossmann, A.; Morlet, J. (1984). "Cycle-octave and related transforms in seismic analysis". Geoexploration. 23: 85–102. doi:10.1016/0016-7142(84)90025-5. F. Hlawatsch and G. F. Boudreuax-Bartels, 1992 "Linear and quadratic timefrequency signal representations", IEEE Signal Processing Magazine, pp. 21–67 Rioul, O.; Vetterli, M. (1991). "Wavelets and signal processing" (PDF). IEEE Signal Processing Magazine. 8 (4): 14–38. Bibcode:1991ISPM....8...14R. doi:10.1109/79.91217. S2CID 13266737. R. K. Young, Wavelet Theory and its Applications, Kluwer Academic Publishers, Dordrecht,1993
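The spectrum form and the pseudocode above translate almost line by line into array operations. The following Python sketch (assuming NumPy; the wrap-around indexing of p and the handling of the zero voice are implementation choices, not fixed by the article) computes the discrete-time S transform of a short test signal:

```python
# Discrete-time S transform following the spectrum-form steps (a sketch).
import numpy as np

def s_transform(x):
    N = len(x)
    X = np.fft.fft(x)                           # Step 1: spectrum X[p*dF]
    p = np.fft.fftfreq(N, d=1.0 / N)            # signed index p, wraps at N/2
    voices = N // 2 + 1
    S = np.zeros((voices, N), dtype=complex)
    S[0, :] = x.mean()                          # zero voice: signal mean
    for m in range(1, voices):                  # loop over voices m
        gauss = np.exp(-np.pi * p**2 / m**2)    # Step 2: Gaussian for f = m*dF
        shifted = np.roll(X, -m)                # Step 3: X[(p + m)*dF]
        S[m, :] = np.fft.ifft(shifted * gauss)  # Steps 4-5: multiply and IDFT
    return S

t = np.arange(256) / 256.0
x = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 60 * t**2)   # test signal
S = s_transform(x)
print(S.shape)   # (129, 256): one row per voice, one column per time sample
```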
Wikipedia/S_transform
In operator theory, a dilation of an operator T on a Hilbert space H is an operator on a larger Hilbert space K, whose restriction to H composed with the orthogonal projection onto H is T. More formally, let T be a bounded operator on some Hilbert space H, and H be a subspace of a larger Hilbert space H' . A bounded operator V on H' is a dilation of T if P H V | H = T {\displaystyle P_{H}\;V|_{H}=T} where P H {\displaystyle P_{H}} is an orthogonal projection on H. V is said to be a unitary dilation (respectively, normal, isometric, etc.) if V is unitary (respectively, normal, isometric, etc.). T is said to be a compression of V. If an operator T has a spectral set X {\displaystyle X} , we say that V is a normal boundary dilation or a normal ∂ X {\displaystyle \partial X} dilation if V is a normal dilation of T and σ ( V ) ⊆ ∂ X {\displaystyle \sigma (V)\subseteq \partial X} . Some texts impose an additional condition. Namely, that a dilation satisfy the following (calculus) property: P H f ( V ) | H = f ( T ) {\displaystyle P_{H}\;f(V)|_{H}=f(T)} where f(T) is some specified functional calculus (for example, the polynomial or H∞ calculus). The utility of a dilation is that it allows the "lifting" of objects associated to T to the level of V, where the lifted objects may have nicer properties. See, for example, the commutant lifting theorem. == Applications == We can show that every contraction on Hilbert spaces has a unitary dilation. A possible construction of this dilation is as follows. For a contraction T, the operator D T = ( I − T ∗ T ) 1 2 {\displaystyle D_{T}=(I-T^{*}T)^{\frac {1}{2}}} is positive, where the continuous functional calculus is used to define the square root. The operator DT is called the defect operator of T. Let V be the operator on H ⊕ H {\displaystyle H\oplus H} defined by the matrix V = [ T D T ∗ D T − T ∗ ] . {\displaystyle V={\begin{bmatrix}T&D_{T^{*}}\\\ D_{T}&-T^{*}\end{bmatrix}}.} V is clearly a dilation of T. Also, T(I - T*T) = (I - TT*)T and a limit argument imply T D T = D T ∗ T . {\displaystyle TD_{T}=D_{T^{*}}T.} Using this one can show, by calculating directly, that V is unitary, therefore a unitary dilation of T. This operator V is sometimes called the Julia operator of T. Notice that when T is a real scalar, say T = cos ⁡ θ {\displaystyle T=\cos \theta } , we have V = [ cos ⁡ θ sin ⁡ θ sin ⁡ θ − cos ⁡ θ ] . {\displaystyle V={\begin{bmatrix}\cos \theta &\sin \theta \\\ \sin \theta &-\cos \theta \end{bmatrix}}.} which is just the unitary matrix describing rotation by θ. For this reason, the Julia operator V(T) is sometimes called the elementary rotation of T. We note here that in the above discussion we have not required the calculus property for a dilation. Indeed, direct calculation shows the Julia operator fails to be a "degree-2" dilation in general, i.e. it need not be true that T 2 = P H V 2 | H {\displaystyle T^{2}=P_{H}\;V^{2}|_{H}} . However, it can also be shown that any contraction has a unitary dilation which does have the calculus property above. This is Sz.-Nagy's dilation theorem. More generally, if R ( X ) {\displaystyle {\mathcal {R}}(X)} is a Dirichlet algebra, any operator T with X {\displaystyle X} as a spectral set will have a normal ∂ X {\displaystyle \partial X} dilation with this property. This generalises Sz.-Nagy's dilation theorem as all contractions have the unit disc as a spectral set. == Notes == == References ==
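For finite-dimensional Hilbert spaces the construction above can be checked numerically. The following Python sketch (assuming NumPy; herm_sqrt is an ad hoc helper implementing the square root of a positive semidefinite Hermitian matrix) builds the Julia operator of a contraction and verifies that it is a unitary dilation:

```python
# Julia operator V of a contraction T: V is unitary and compresses back to T.
import numpy as np

def herm_sqrt(M):
    # square root of a positive semidefinite Hermitian matrix via eigendecomposition
    w, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(w, 0, None))) @ U.conj().T

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
T = 0.5 * A / np.linalg.norm(A, 2)              # scale so that ||T|| < 1
I = np.eye(3)
D_T = herm_sqrt(I - T.conj().T @ T)             # defect operator of T
D_Ts = herm_sqrt(I - T @ T.conj().T)            # defect operator of T*
V = np.block([[T, D_Ts], [D_T, -T.conj().T]])   # Julia operator

print(np.allclose(V.conj().T @ V, np.eye(6)))   # True: V is unitary
print(np.allclose(V[:3, :3], T))                # True: P_H V|_H = T
```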
Wikipedia/Dilation_(operator_theory)
The complex wavelet transform (CWT) is a complex-valued extension to the standard discrete wavelet transform (DWT). It is a two-dimensional wavelet transform which provides multiresolution, sparse representation, and useful characterization of the structure of an image. Further, it purveys a high degree of shift-invariance in its magnitude, which was investigated in. However, a drawback to this transform is that it exhibits 2 d {\displaystyle 2^{d}} (where d {\displaystyle d} is the dimension of the signal being transformed) redundancy compared to a separable (DWT). The use of complex wavelets in image processing was originally set up in 1995 by J.M. Lina and L. Gagnon in the framework of the Daubechies orthogonal filters banks. It was then generalized in 1997 by Nick Kingsbury of Cambridge University. In the area of computer vision, by exploiting the concept of visual contexts, one can quickly focus on candidate regions, where objects of interest may be found, and then compute additional features through the CWT for those regions only. These additional features, while not necessary for global regions, are useful in accurate detection and recognition of smaller objects. Similarly, the CWT may be applied to detect the activated voxels of cortex and additionally the temporal independent component analysis (tICA) may be utilized to extract the underlying independent sources whose number is determined by Bayesian information criterion [1]. == Dual-tree complex wavelet transform == The dual-tree complex wavelet transform (DTCWT) calculates the complex transform of a signal using two separate DWT decompositions (tree a and tree b). If the filters used in one are specifically designed different from those in the other it is possible for one DWT to produce the real coefficients and the other the imaginary. This redundancy of two provides extra information for analysis but at the expense of extra computational power. It also provides approximate shift-invariance (unlike the DWT) yet still allows perfect reconstruction of the signal. The design of the filters is particularly important for the transform to occur correctly and the necessary characteristics are: The low-pass filters in the two trees must differ by half a sample period Reconstruction filters are the reverse of analysis All filters from the same orthonormal set Tree a filters are the reverse of tree b filters Both trees have the same frequency response == See also == Continuous wavelet transform Wavelet series == References == == External links == An MPhil thesis: Complex wavelet transforms and their applications CWT for EMG analysis A paper on DTCWT Another full paper 3-D DT MRI data visualization Multidimensional, mapping-based complex wavelet transforms Image Analysis Using a Dual-Tree M {\displaystyle M} -band Wavelet Transform (2006), preprint, Caroline Chaux, Laurent Duval, Jean-Christophe Pesquet Noise covariance properties in dual-tree wavelet decompositions (2007), preprint, Caroline Chaux, Laurent Duval, Jean-Christophe Pesquet A nonlinear Stein based estimator for multichannel image denoising (2007), preprint, Caroline Chaux, Laurent Duval, Amel Benazza-Benyahia, Jean-Christophe Pesquet Caroline Chaux website ( M {\displaystyle M} -band dual-tree wavelets) Laurent Duval website ( M {\displaystyle M} -band dual-tree wavelets) James E. 
Fowler (dual-tree wavelets for video and hyperspectral image compression) Nick Kingsbury website (dual-tree wavelets) Jean-Christophe Pesquet website ( M {\displaystyle M} -band dual-tree wavelets) Ivan Selesnick (dual-tree wavelets)
Wikipedia/Complex_wavelet_transform
The stationary wavelet transform (SWT) is a wavelet transform algorithm designed to overcome the lack of translation-invariance of the discrete wavelet transform (DWT). Translation-invariance is achieved by removing the downsamplers and upsamplers in the DWT and upsampling the filter coefficients by a factor of 2 ( j − 1 ) {\displaystyle 2^{(j-1)}} in the j {\displaystyle j} th level of the algorithm. The SWT is an inherently redundant scheme as the output of each level of SWT contains the same number of samples as the input – so for a decomposition of N levels there is a redundancy of N in the wavelet coefficients. This algorithm is more famously known by the French expression à trous, meaning “with holes”, which refers to inserting zeros in the filters. It was introduced by Holschneider et al. == Definition == The basic discrete wavelet transform (DWT) algorithm is adapted to yield a stationary wavelet transform (SWT) which is independent of the origin. The approach of the SWT is simple, which is by applying suitable high-pass and low-pass filters to the data at each level, resulting in the generation of two sequences at the subsequent level. Without employment of downsampling techniques, the length of the new sequences is maintained to be the same as the original sequences. Rather than employing decimation similar to the standard wavelet transform which removes elements, the filters at each level are adjusted by augmenting them with zero-padding, as explained in the following: Z x 2 j = x j , Z x 2 j + 1 = 0 {\displaystyle {Zx}_{2j}=x_{j},\ {Zx}_{2j+1}=0} for all integers j {\displaystyle j} D 0 r H [ r ] = H D 0 r {\displaystyle D_{0}^{r}H^{\left[r\right]}=HD_{0}^{r}} D 0 r G [ r ] = G D 0 r {\displaystyle D_{0}^{r}G^{\left[r\right]}=GD_{0}^{r}} where Z {\displaystyle Z} is the operator that intersperses a given sequence with zeros, for all integers j {\displaystyle j} . D 0 r {\displaystyle D_{0}^{r}} is the binary decimation operator H [ r ] {\displaystyle H^{\left[r\right]}} is a filter with weights h 2 r [ r ] j = h j {\displaystyle {h_{2^{r}}^{\left[r\right]}}_{j}=h_{j}} and h k [ r ] = 0 {\displaystyle h_{k}^{\left[r\right]}=0} if k {\displaystyle k} is not a multiple of 2 r . {\displaystyle 2^{r}.} G [ r ] {\displaystyle G^{\left[r\right]}} is a filter with weights g 2 r [ r ] j = h j {\displaystyle {g_{2^{r}}^{\left[r\right]}}_{j}=h_{j}} and g k [ r ] = 0 {\displaystyle g_{k}^{\left[r\right]}=0} if k {\displaystyle k} is not a multiple of 2 r . {\displaystyle 2^{r}.} The design of the filters H [ r ] {\displaystyle H^{\left[r\right]}} and G [ r ] {\displaystyle G^{\left[r\right]}} involve of inserting a zero between every adjacent pair of elements in the filter H [ r − 1 ] {\displaystyle H^{\left[r-1\right]}} and G [ r − 1 ] {\displaystyle G^{\left[r-1\right]}} respectively. The designation of a J {\displaystyle a^{J}} as the original sequence c J {\displaystyle c^{J}} is required before defining the stationary wavelet transform. a j − 1 = H [ J − j ] a j {\displaystyle a^{j-1}=H^{\left[J-j\right]}a^{j}} , for j = J , J − 1 , … , 1 {\displaystyle j=J,J-1,\ \ldots \ ,1\ } b j − 1 = G [ J − j ] a j {\displaystyle b^{j-1}=G^{\left[J-j\right]}a^{j}} , for j = J , J − 1 , … , 1 {\displaystyle j=J,J-1,\ \ldots \ ,1} where a j = b j {\displaystyle a^{j}=b^{j}} , given the length of a j {\displaystyle a^{j}} is 2 J {\displaystyle 2^{J}} == Implementation == The following block diagram depicts the digital implementation of SWT. 
In the above diagram, filters in each level are up-sampled versions of the previous (see figure below). == Applications == A few applications of SWT are specified below. === Image enhancement === The SWT can be used to perform image resolution enhancement to provide better image quality. The main drawback of enhancing image resolution through the conventional method, interpolation, is the loss of the high-frequency components: the smoothing inherent in interpolation yields a blurry image in which fine details and sharp edges are absent or reduced. Information in the high-frequency components (edges) is crucial for achieving better quality in the super-resolved image. The method first decomposes the input image into various subband images by applying a one-level DWT; three of the subband images capture the high-frequency components of the input image. After that, the SWT is applied; its purpose is to mitigate the information loss produced by the downsampling in each DWT subband. Fortified and corrected high-frequency subbands are formed by summing up the high-frequency subbands from the DWT and the SWT, and as a result the output image has sharpened edges. === Signal denoising === The traditional denoising procedure mainly consists of first transforming the signal to another domain, then applying thresholding, and lastly performing the inverse transformation to reconstruct the signal. The stationary wavelet transform is introduced to resolve the pseudo-Gibbs phenomena caused by the shift dependence of the discrete wavelet transform; these artifacts degrade the image quality after the reconstruction process. The modified procedure is simple: first perform the stationary wavelet transform on the signal, then threshold the coefficients, and finally transform back. A brief explanation is as follows. Unlike the discrete wavelet transform, the SWT does not downsample the signal at each level. Instead, it maintains the original sampling rate throughout the decomposition process, and this captures both the high- and low-frequency components in an effective way. As the noise is often spread across all scales, with a small contribution in magnitude, thresholding is applied to the wavelet coefficients as the next step. Coefficients below a certain threshold level are set to zero or reduced, resulting in the separation of the signal from the noise. After the noise coefficients are removed or suppressed, so that the reconstruction process does not consider them, the denoised signal is clearer. Such denoising is commonly used for biomedical signals (e.g. ECG) and for image denoising, and the effectiveness of SWT in signal denoising makes it a valuable tool in real-world applications in various fields. Other applications include pattern recognition, brain image classification, and pathological brain detection. == Code example == Here is an example of applying the stationary wavelet transform to a chirp signal, coded with Python (a minimal sketch is given after this article): Install required packages to Python Import libraries in Python Main code Output == Synonyms == Redundant wavelet transform Algorithme à trous Quasi-continuous wavelet transform Translation invariant wavelet transform Shift invariant wavelet transform Cycle spinning Maximal overlap wavelet transform (MODWT) Undecimated wavelet transform (UWT) == See also == Wavelet packet decomposition Wavelet transform == References ==
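A minimal sketch of such a chirp example, assuming the NumPy, SciPy and PyWavelets packages are available (for example via pip install numpy scipy PyWavelets); the signal length, wavelet, decomposition level and threshold are illustrative choices rather than values from the original listing. The sketch builds a noisy chirp, applies a two-level SWT, soft-thresholds the detail coefficients as in the denoising procedure described above, and reconstructs with the inverse transform:

```python
# Stationary wavelet transform of a noisy chirp, followed by simple wavelet denoising.
# Assumes: pip install numpy scipy PyWavelets
import numpy as np
import pywt
from scipy.signal import chirp

level = 2
n = 1024                                           # pywt.swt needs a length divisible by 2**level
t = np.linspace(0, 1, n, endpoint=False)
signal = chirp(t, f0=5, f1=60, t1=1, method="linear")
noisy = signal + 0.2 * np.random.default_rng(0).normal(size=n)

# Undecimated (stationary) transform: every level keeps all n samples, hence the redundancy.
coeffs = pywt.swt(noisy, wavelet="db4", level=level)   # list of (approximation, detail) pairs

# Soft-threshold the detail coefficients (illustrative universal threshold from the finest level).
sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(n))
denoised_coeffs = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]

# Inverse SWT reconstructs a signal of the original length.
denoised = pywt.iswt(denoised_coeffs, wavelet="db4")

print("samples per level:", [cD.shape[0] for _, cD in coeffs])
print("noisy RMSE   :", round(float(np.sqrt(np.mean((noisy - signal) ** 2))), 4))
print("denoised RMSE:", round(float(np.sqrt(np.mean((denoised - signal) ** 2))), 4))
```

Because the SWT keeps every level at the full signal length, the printed per-level sample counts all equal the input length, in contrast to the halving that the decimated DWT would show.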
Wikipedia/Stationary_wavelet_transform
In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder. The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding is done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, used for error detection and correction, or line coding, the means for mapping data onto a signal. Data compression algorithms present a space-time complexity trade-off between the bytes needed to store or transmit information and the computational resources needed to perform the encoding and decoding. The design of data compression schemes involves balancing the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources or time required to compress and decompress the data. == Lossless == Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy. The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. In the mid-1980s, following work by Terry Welch, the Lempel–Ziv–Welch (LZW) algorithm rapidly became the method of choice for most general-purpose compression systems. LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems. LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded. Grammar-based codes can compress highly repetitive input extremely effectively, for instance, a biological data collection of the same or closely related species, a huge versioned document collection, internet archives, etc. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string. Practical grammar compression algorithms include Sequitur and Re-Pair. The strongest modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling. In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding. 
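As a minimal illustration of the run-length encoding idea mentioned above, the following sketch encodes a sequence of pixel values as (count, symbol) pairs and decodes it back without loss; the pair-based format is an illustrative choice rather than any standard's wire format:

```python
# Run-length encoding sketch: replace runs of repeated symbols with (count, symbol) pairs.
from itertools import groupby

def rle_encode(data):
    """Encode an iterable of symbols as a list of (count, symbol) pairs."""
    return [(len(list(group)), symbol) for symbol, group in groupby(data)]

def rle_decode(pairs):
    """Rebuild the original sequence from (count, symbol) pairs."""
    return [symbol for count, symbol in pairs for _ in range(count)]

pixels = ["red"] * 279 + ["blue"] * 3 + ["red"] * 2
encoded = rle_encode(pixels)
print(encoded)                          # [(279, 'red'), (3, 'blue'), (2, 'red')]
assert rle_decode(encoded) == pixels    # lossless: decoding restores the input exactly
```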
Arithmetic coding is a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It can achieve superior compression compared to other techniques such as the better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of the probability distribution of the input data. An early example of the use of arithmetic coding was in an optional (but not widely used) feature of the JPEG image coding standard. It has since been applied in various other designs including H.263, H.264/MPEG-4 AVC and HEVC for video coding. Archive software typically has the ability to adjust the "dictionary size", where a larger size demands more random-access memory during compression and decompression but achieves stronger compression, especially on repeating patterns in files' content. == Lossy == In the late 1980s, digital images became more common, and standards for lossless image compression emerged. In the early 1990s, lossy compression methods began to be widely used. In these schemes, some loss of information is accepted as dropping nonessential detail can save storage space. There is a corresponding trade-off between preserving information and reducing size. Lossy data compression schemes are designed by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to the variations in color. JPEG image compression works in part by rounding off nonessential bits of information. A number of popular compression formats exploit these perceptual differences, including psychoacoustics for sound, and psychovisuals for images and video. Most forms of lossy compression are based on transform coding, especially the discrete cosine transform (DCT). It was first proposed in 1972 by Nasir Ahmed, who then developed a working algorithm with T. Natarajan and K. R. Rao in 1973, before introducing it in January 1974. DCT is the most widely used lossy compression method, and is used in multimedia formats for images (such as JPEG and HEIF), video (such as MPEG, AVC and HEVC) and audio (such as MP3, AAC and Vorbis). Lossy image compression is used in digital cameras to increase storage capacities. Similarly, DVDs, Blu-ray and streaming video use lossy video coding formats. Lossy compression is extensively used in video. In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the audio signal. Compression of human speech is often performed with even more specialized techniques; speech coding is distinguished as a separate discipline from general-purpose audio compression. Speech coding is used in internet telephony, while general-purpose audio compression is used, for example, for CD ripping and is decoded by audio players. Lossy compression can cause generation loss. 
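As a minimal sketch of the DCT-based transform coding that underlies JPEG-style lossy compression, the following applies a 2-D DCT to a single 8×8 block, quantizes the coefficients so that most of them become zero, and reconstructs an approximation with the inverse transform; the block contents and the flat quantization step are illustrative choices, not any codec's actual tables:

```python
# JPEG-style transform coding sketch: 2-D DCT, coarse quantization, inverse DCT.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# Illustrative 8x8 block: a smooth gradient plus a little texture, values in 0..255.
block = (np.add.outer(np.arange(8), np.arange(8)) * 12.0 + rng.normal(0, 4, (8, 8))).clip(0, 255)

coeffs = dctn(block, norm="ortho")            # energy concentrates in low-frequency coefficients
q_step = 40.0                                 # one flat quantization step (real codecs use tables)
quantized = np.round(coeffs / q_step)         # most high-frequency coefficients round to zero
reconstructed = idctn(quantized * q_step, norm="ortho")

print("nonzero coefficients kept:", np.count_nonzero(quantized), "/ 64")
print(f"max absolute pixel error: {np.abs(reconstructed - block).max():.1f} (scale 0..255)")
```

Only the few surviving quantized coefficients would need to be stored or transmitted, which is where the size reduction comes from; the reconstruction error is the "loss" accepted in exchange.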
== Theory == The theoretical basis for compression is provided by information theory and, more specifically, Shannon's source coding theorem; domain-specific theories include algorithmic information theory for lossless compression and rate–distortion theory for lossy compression. These areas of study were essentially created by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Other topics associated with compression include coding theory and statistical inference. === Machine learning === There is a close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution). Conversely, an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence". An alternative view is that compression algorithms implicitly map strings into implicit feature space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) one can define an associated vector space ℵ, such that C(.) maps an input string x to the vector norm ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is precluded by space; instead, this view examines three representative lossless compression methods: LZW, LZ77, and PPM. According to AIXI theory, a connection more directly explained in Hutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since one cannot unzip it without both, but there may be an even smaller combined form. Examples of AI-powered audio/video compression software include NVIDIA Maxine and AIVC. Examples of software that can perform AI-powered image compression include OpenCV, TensorFlow, MATLAB's Image Processing Toolbox (IPT) and High-Fidelity Generative Image Compression. In unsupervised machine learning, k-means clustering can be utilized to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such as image compression. Data compression aims to reduce the size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by the centroid of its points. This process condenses extensive datasets into a more compact set of representative points. Particularly beneficial in image and signal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving the core information of the original data while significantly decreasing the required storage space. Large language models (LLMs) are also efficient lossless data compressors on some data sets, as demonstrated by DeepMind's research with the Chinchilla 70B model. 
Developed by DeepMind, Chinchilla 70B effectively compressed data, outperforming conventional methods such as Portable Network Graphics (PNG) for images and Free Lossless Audio Codec (FLAC) for audio. It achieved compression of image and audio data to 43.4% and 16.4% of their original sizes, respectively. There is, however, some reason to be concerned that the data set used for testing overlaps the LLM training data set, making it possible that the Chinchilla 70B model is only an efficient compression tool on data it has already been trained on. === Data differencing === Data compression can be viewed as a special case of data differencing. Data differencing consists of producing a difference given a source and a target, with patching reproducing the target given a source and a difference. Since there is no separate source and target in data compression, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a difference from nothing. This is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data. The term differential compression is used to emphasize the data differencing connection. == Uses == === Image === Entropy coding originated in the 1940s with the introduction of Shannon–Fano coding, the basis for Huffman coding which was developed in 1950. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969. An important image compression technique is the discrete cosine transform (DCT), a technique developed in the early 1970s. DCT is the basis for JPEG, a lossy compression format which was introduced by the Joint Photographic Experts Group (JPEG) in 1992. JPEG greatly reduces the amount of data required to represent an image at the cost of a relatively small reduction in image quality and has become the most widely used image file format. Its highly efficient DCT-based compression algorithm was largely responsible for the wide proliferation of digital images and digital photos. Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed in 1984. It is used in the GIF format, introduced in 1987. DEFLATE, a lossless compression algorithm specified in 1996, is used in the Portable Network Graphics (PNG) format. Wavelet compression, the use of wavelets in image compression, began after the development of DCT coding. The JPEG 2000 standard was introduced in 2000. In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004. === Audio === Audio data compression, not to be confused with dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements of audio data. The compression algorithms of audio compression formats are implemented in software as audio codecs. In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, quantization, DCT and linear prediction to reduce the amount of information used to represent the uncompressed data. Lossy audio compression algorithms provide higher compression and are used in numerous audio applications including Vorbis and MP3. 
These algorithms almost all rely on psychoacoustics to eliminate or reduce fidelity of less audible sounds, thereby reducing the space required to store or transmit them. The acceptable trade-off between loss of audio quality and transmission or storage size depends upon the application. For example, one 640 MB compact disc (CD) holds approximately one hour of uncompressed high fidelity music, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in the MP3 format at a medium bit rate. A digital sound recorder can typically store around 200 hours of clearly intelligible speech in 640 MB. Lossless audio compression produces a representation of digital data that can be decoded to an exact digital duplicate of the original. Compression ratios are around 50–60% of the original size, which is similar to those for generic lossless data compression. Lossless codecs use curve fitting or linear prediction as a basis for estimating the signal. Parameters describing the estimation and the difference between the estimation and the actual signal are coded separately. A number of lossless audio compression formats exist. See list of lossless codecs for a listing. Some formats are associated with a distinct system, such as Direct Stream Transfer, used in Super Audio CD and Meridian Lossless Packing, used in DVD-Audio, Dolby TrueHD, Blu-ray and HD DVD. Some audio file formats feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, and OptimFROG DualStream. When audio files are to be processed, either by further compression or for editing, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of a lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies. ==== Lossy audio compression ==== Lossy audio compression is used in a wide range of applications. In addition to standalone audio-only applications of file playback in MP3 players or computers, digitally compressed audio streams are used in most video DVDs, digital television, streaming media on the Internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression, by discarding less-critical data based on psychoacoustic optimizations. Psychoacoustics recognizes that not all data in an audio stream can be perceived by the human auditory system. Most lossy compression reduces redundancy by first identifying perceptually irrelevant sounds, that is, sounds that are very hard to hear. Typical examples include high frequencies or sounds that occur at the same time as louder sounds. Those irrelevant sounds are coded with decreased accuracy or not at all. Due to the nature of lossy algorithms, audio quality suffers a digital generation loss when a file is decompressed and recompressed. This makes lossy compression unsuitable for storing the intermediate results in professional audio engineering applications, such as sound editing and multitrack recording. 
However, lossy formats such as MP3 are very popular with end-users as the file size is reduced to 5-20% of the original size and a megabyte can store about a minute's worth of music at adequate quality. Several proprietary lossy compression algorithms have been developed that provide higher quality audio performance by using a combination of lossless and lossy algorithms with adaptive bit rates and lower compression ratios. Examples include aptX, LDAC, LHDC, MQA and SCL6. ===== Coding methods ===== To determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain, typically the frequency domain. Once transformed, component frequencies can be prioritized according to how audible they are. Audibility of spectral components is assessed using the absolute threshold of hearing and the principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours may also be used to weigh the perceptual importance of components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models. Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders. LPC uses a model of the human vocal tract to analyze speech sounds and infer the parameters used by the model to produce them moment to moment. These changing parameters are transmitted or stored and used to drive another model in the decoder which reproduces the sound. Lossy formats are often used for the distribution of streaming audio or interactive communication (such as in cell phone networks). In such applications, the data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications. Latency is introduced by the methods used to encode and decode the data. Some codecs will analyze a longer segment, called a frame, of the data to optimize efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. The inherent latency of the coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with a telephone conversation, significant delays may seriously degrade the perceived quality. In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples that must be analyzed before a block of audio is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model in the frequency domain, and latency is on the order of 23 ms. ===== Speech encoding ===== Speech encoding is an important category of audio data compression. The perceptual models used to estimate what aspects of speech a human ear can hear are generally somewhat different from those used for music. 
The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using a relatively low bit rate. This is accomplished, in general, by some combination of two approaches: Only encoding sounds that could be made by a single human voice. Throwing away more of the data in the signal—keeping just enough to reconstruct an "intelligible" voice rather than the full frequency range of human hearing. The earliest algorithms used in speech encoding (and audio data compression in general) were the A-law algorithm and the μ-law algorithm. ==== History ==== Early audio research was conducted at Bell Labs. There, in 1950, C. Chapin Cutler filed the patent on differential pulse-code modulation (DPCM). In 1973, Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan. Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC). Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm which achieved a significant compression ratio for its time. Perceptual coding is used by modern audio compression formats such as MP3 and AAC. Discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, provided the basis for the modified discrete cosine transform (MDCT) used by modern audio compression formats such as MP3, Dolby Digital, and AAC. MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The world's first commercial broadcast automation audio compression system was developed by Oscar Bonello, an engineering professor at the University of Buenos Aires. In 1983, using the psychoacoustic principle of the masking of critical bands first published in 1967, he started developing a practical application based on the recently developed IBM PC computer, and the broadcast automation system was launched in 1987 under the name Audicom. 35 years later, almost all the radio stations in the world were using this technology manufactured by a number of companies because the inventor refused to patent his work, preferring to publish it and leave it in the public domain. A literature compendium for a large variety of audio coding systems was published in the IEEE's Journal on Selected Areas in Communications (JSAC), in February 1988. While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual techniques and some kind of frequency analysis and back-end noiseless coding. === Video === Uncompressed video requires a very high data rate. Although lossless video compression codecs perform at a compression factor of 5 to 12, a typical H.264 lossy compression video has a compression factor between 20 and 200. The two key video compression techniques used in video coding standards are the DCT and motion compensation (MC). 
Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT video coding (block motion compensation). Most video codecs are used alongside audio compression techniques to store the separate but complementary data streams as one combined package using so-called container formats. ==== Encoding theory ==== Video data may be represented as a series of still image frames. Such data usually contains abundant amounts of spatial and temporal redundancy. Video compression algorithms attempt to reduce redundancy and store information more compactly. Most video compression formats and codecs exploit both spatial and temporal redundancy (e.g. through difference coding with motion compensation). Similarities can be encoded by only storing differences between e.g. temporally adjacent frames (inter-frame coding) or spatially adjacent pixels (intra-frame coding). Inter-frame compression (a temporal delta encoding) (re)uses data from one or more earlier or later frames in a sequence to describe the current frame. Intra-frame coding, on the other hand, uses only data from within the current frame, effectively being still-image compression. The intra-frame video coding formats used in camcorders and video editing employ simpler compression that uses only intra-frame prediction. This simplifies video editing software, as it prevents a situation in which a compressed frame refers to data that the editor has deleted. Usually, video compression additionally employs lossy compression techniques like quantization that reduce aspects of the source data that are (more or less) irrelevant to human visual perception by exploiting perceptual features of human vision. For example, small differences in color are more difficult to perceive than are changes in brightness. Compression algorithms can average a color across these similar areas in a manner similar to those used in JPEG image compression. As in all lossy compression, there is a trade-off between video quality and bit rate, cost of processing the compression and decompression, and system requirements. Highly compressed video may present visible or distracting artifacts. Methods other than the prevalent DCT-based transform formats, such as fractal compression, matching pursuit and the use of a discrete wavelet transform (DWT), have been the subject of some research, but are typically not used in practical products. Wavelet compression is used in still-image coders and video coders without motion compensation. Interest in fractal compression seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods. ===== Inter-frame coding ===== In inter-frame coding, individual frames of a video sequence are compared from one frame to the next, and the video compression codec records the differences to the reference frame. If the frame contains areas where nothing has moved, the system can simply issue a short command that copies that part of the previous frame into the next one. If sections of the frame move in a simple manner, the compressor can emit a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than data generated by intra-frame compression. Usually, the encoder will also transmit a residue signal which describes the remaining more subtle differences to the reference imagery. 
Using entropy coding, these residue signals have a more compact representation than the full signal. In areas of video with more motion, the compression must encode more data to keep up with the larger number of pixels that are changing. Commonly during explosions, flames, flocks of animals, and in some panning shots, the high-frequency detail leads to quality decreases or to increases in the variable bitrate. ==== Hybrid block-based transform formats ==== Many commonly used video compression methods (e.g., those in standards approved by the ITU-T or ISO) share the same basic architecture that dates back to H.261 which was standardized in 1988 by the ITU-T. They mostly rely on the DCT, applied to rectangular blocks of neighboring pixels, and temporal prediction using motion vectors, as well as nowadays also an in-loop filtering step. In the prediction stage, various deduplication and difference-coding techniques are applied that help decorrelate data and describe new data based on already transmitted data. Then rectangular blocks of remaining pixel data are transformed to the frequency domain. In the main lossy processing stage, frequency domain data gets quantized in order to reduce information that is irrelevant to human visual perception. In the last stage statistical redundancy gets largely eliminated by an entropy coder which often applies some form of arithmetic coding. In an additional in-loop filtering stage various filters can be applied to the reconstructed image signal. By computing these filters also inside the encoding loop they can help compression because they can be applied to reference material before it gets used in the prediction process and they can be guided using the original signal. The most popular examples are deblocking filters that blur out blocking artifacts from quantization discontinuities at transform block boundaries. ==== History ==== In 1967, A.H. Robinson and C. Cherry proposed a run-length encoding bandwidth compression scheme for the transmission of analog television signals. The DCT, which is fundamental to modern video compression, was introduced by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. H.261, which debuted in 1988, commercially introduced the prevalent basic architecture of video compression technology. It was the first video coding format based on DCT compression. H.261 was developed by a number of companies, including Hitachi, PictureTel, NTT, BT and Toshiba. The most popular video coding standards used for codecs have been the MPEG standards. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which was developed by a number of companies, primarily Sony, Thomson and Mitsubishi Electric. MPEG-2 became the standard video format for DVD and SD digital television. In 1999, it was followed by MPEG-4/H.263. It was also developed by a number of companies, primarily Mitsubishi Electric, Hitachi and Panasonic. H.264/MPEG-4 AVC was developed in 2003 by a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. AVC commercially introduced the modern context-adaptive binary arithmetic coding (CABAC) and context-adaptive variable-length coding (CAVLC) algorithms. 
AVC is the main video encoding standard for Blu-ray Discs, and is widely used by video sharing websites and streaming internet services such as YouTube, Netflix, Vimeo, and iTunes Store, web software such as Adobe Flash Player and Microsoft Silverlight, and various HDTV broadcasts over terrestrial and satellite television. === Genetics === Genetics compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published a genetic compression algorithm that does not use a reference genome for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression and is less computationally intensive than the leading general-purpose compression utilities. For this, Chanda, Elhaik, and Bader introduced MAF-based encoding (MAFE), which reduces the heterogeneity of the dataset by sorting SNPs by their minor allele frequency, thus homogenizing the dataset. Other algorithms developed in 2009 and 2013 (DNAZip and GenomeZip) have compression ratios of up to 1200-fold—allowing 6 billion basepair diploid human genomes to be stored in 2.5 megabytes (relative to a reference genome or averaged over many genomes). For a benchmark in genetics/genomics data compressors, see == Outlook and currently unused potential == It is estimated that the total amount of data that is stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007, but when the corresponding content is optimally compressed, this only represents 295 exabytes of Shannon information. == See also == == References == == External links == "Part 3: Video compression", Data Compression Basics Pierre Larbier, Using 10-bit AVC/H.264 Encoding with 4:2:2 for Broadcast Contribution, Ateme, archived from the original on 2009-09-05 Why does 10-bit save bandwidth (even when content is 8-bit)? at the Wayback Machine (archived 2017-08-30) Which compression technology should be used? at the Wayback Machine (archived 2017-08-30) Introduction to Compression Theory (PDF), Wiley, archived (PDF) from the original on 2007-09-28 EBU subjective listening tests on low-bitrate audio codecs Audio Archiving Guide: Music Formats (Guide for helping a user pick out the right codec) MPEG 1&2 video compression intro (pdf format) at the Wayback Machine (archived September 28, 2007) hydrogenaudio wiki comparison Introduction to Data Compression by Guy E Blelloch from CMU Explanation of lossless signal compression method used by most codecs Videsignline – Intro to Video Compression at the Wayback Machine (archived 2010-03-15) Data Footprint Reduction Technology at the Wayback Machine (archived 2013-05-27) What is Run length Coding in video compression
Wikipedia/Data_compression_algorithm
A generalized probabilistic theory (GPT) is a general framework to describe the operational features of arbitrary physical theories. A GPT must specify what kind of physical systems one can find in the lab, as well as rules to compute the outcome statistics of any experiment involving labeled preparations, transformations and measurements. The framework of GPTs has been used to define hypothetical non-quantum physical theories which nonetheless possess quantum theory's most remarkable features, such as entanglement or teleportation. Notably, a small set of physically motivated axioms is enough to single out the GPT representation of quantum theory. The mathematical formalism of GPTs has been developed since the 1950s and 1960s by many authors, and rediscovered independently several times. The earliest ideas are due to Segal and Mackey, although the first comprehensive and mathematically rigorous treatment can be traced back to the work of Ludwig, Dähn, and Stolz, all three based at the University of Marburg. While the formalism in these earlier works is less similar to the modern one, already in the early 1970s the ideas of the Marburg school had matured and the notation had developed towards the modern usage, thanks also to the independent contribution of Davies and Lewis. The books by Ludwig and the proceedings of a conference held in Marburg in 1973 offer a comprehensive account of these early developments. The term "generalized probabilistic theory" itself was coined by Jonathan Barrett in 2007, based on the version of the framework introduced by Lucien Hardy. Note that some authors use the term operational probabilistic theory (OPT). OPTs are an alternative way to define hypothetical non-quantum physical theories, based on the language of category theory, in which one specifies the axioms that should be satisfied by observations. == Definition == A GPT is specified by a number of mathematical structures, namely: a family of state spaces, each of which represents a physical system; a composition rule (usually corresponding to a tensor product), which specifies how joint state spaces are formed; a set of measurements, which map states to probabilities and are usually described by an effect algebra; a set of possible physical operations, i.e., transformations that map state spaces to state spaces. It can be argued that if one can prepare a state $x$ and a different state $y$, then one can also toss a (possibly biased) coin which lands on one side with probability $p$ and on the other with probability $1-p$ and prepare either $x$ or $y$, depending on the side the coin lands on. The resulting state is a statistical mixture of the states $x$ and $y$, and in GPTs such statistical mixtures are described by convex combinations, in this case $px+(1-p)y$. For this reason all state spaces are assumed to be convex sets. Following a similar reasoning, one can argue that also the set of measurement outcomes and the set of physical operations must be convex. Additionally it is always assumed that measurement outcomes and physical operations are affine maps, i.e. that if $\Phi$ is a physical transformation, then we must have $\Phi(px+(1-p)y)=p\Phi(x)+(1-p)\Phi(y)$, and similarly for measurement outcomes. 
This follows from the argument that we should obtain the same outcome if we first prepare a statistical mixture and then apply the physical operation, or if we prepare a statistical mixture of the outcomes of the physical operations. Note that physical operations are a subset of all affine maps which transform states into states, as we must require that a physical operation yields a valid state even when it is applied to a part of a system (the notion of "part" is subtle: it is specified by explaining how different system types compose and how the global parameters of the composite system are affected by local operations). For practical reasons it is often assumed that a general GPT is embedded in a finite-dimensional vector space, although infinite-dimensional formulations exist. == Classical, quantum, and beyond == Classical theory is a GPT where states correspond to probability distributions and both measurements and physical operations are stochastic maps. One can see that in this case all state spaces are simplexes. Standard quantum information theory is a GPT where system types are described by a natural number $D$ which corresponds to the complex Hilbert space dimension. States of the systems of Hilbert space dimension $D$ are described by the normalized positive semidefinite matrices, i.e. by the density matrices. Measurements are identified with positive operator-valued measures (POVMs), and the physical operations are completely positive maps. Systems compose via the tensor product of the underlying complex Hilbert spaces. Real quantum theory is the GPT which is obtained from standard quantum information theory by restricting the theory to real Hilbert spaces. It does not satisfy the axiom of local tomography. The framework of GPTs has provided examples of consistent physical theories which cannot be embedded in quantum theory and indeed exhibit very non-quantum features. One of the first ones was Box-world, the theory with maximal non-local correlations. Other examples are theories with third-order interference and the family of GPTs known as generalized bits. Many features that were considered purely quantum are actually present in all non-classical GPTs. These include the impossibility of universal broadcasting, i.e., the no-cloning theorem; the existence of incompatible measurements; and the existence of entangled states or entangled measurements. == See also == Quantum foundations == References ==
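As a concrete illustration of how standard quantum theory fits the GPT framework described above, the following sketch represents states as density matrices, a two-outcome measurement as a POVM, and checks that outcome probabilities are affine on convex mixtures of states; the particular states, effect and mixing weight are arbitrary illustrative choices:

```python
# Quantum theory as a GPT: states are density matrices, effects are POVM elements, and
# outcome probabilities (Born rule) are affine on convex mixtures of states.
import numpy as np

# Two pure qubit states (illustrative choices).
ket0 = np.array([1.0, 0.0], dtype=complex)
ket_plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
x = np.outer(ket0, ket0.conj())            # density matrix |0><0|
y = np.outer(ket_plus, ket_plus.conj())    # density matrix |+><+|

# A two-outcome measurement {E, I - E}; E is an illustrative effect satisfying 0 <= E <= I.
E = np.array([[0.8, 0.1], [0.1, 0.3]], dtype=complex)

def outcome_probability(rho):
    """Probability of the outcome associated with effect E, p = Tr(E rho)."""
    return float(np.real(np.trace(E @ rho)))

# A statistical mixture p*x + (1-p)*y is again a valid state (state spaces are convex).
p = 0.3
mixture = p * x + (1 - p) * y

# Affinity of measurements: Tr(E(px + (1-p)y)) = p Tr(Ex) + (1-p) Tr(Ey).
lhs = outcome_probability(mixture)
rhs = p * outcome_probability(x) + (1 - p) * outcome_probability(y)
print(f"p(outcome | mixture) = {lhs:.4f}, affine combination = {rhs:.4f}")
assert np.isclose(lhs, rhs)
```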
Wikipedia/Generalized_probabilistic_theory
Frame rate, most commonly expressed in frame/s, frames per second or FPS, is typically the frequency (rate) at which consecutive images (frames) are captured or displayed. This definition applies to film and video cameras, computer animation, and motion capture systems. In these contexts, frame rate may be used interchangeably with frame frequency and refresh rate, which are expressed in hertz. Additionally, in the context of computer graphics performance, FPS is the rate at which a system, particularly a GPU, is able to generate frames, and refresh rate is the frequency at which a display shows completed frames. In electronic camera specifications frame rate refers to the maximum possible rate at which frames could be captured, but in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number than the frame rate. == Human vision == The temporal sensitivity and resolution of human vision varies depending on the type and characteristics of visual stimulus, and it differs between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion. Modulated light (such as a computer display) is perceived as stable by the majority of participants in studies when the rate is higher than 50 Hz. This perception of modulated light as steady is known as the flicker fusion threshold. However, when the modulated light is non-uniform and contains an image, the flicker fusion threshold can be much higher, in the hundreds of hertz. With regard to image recognition, people have been found to recognize a specific image in an unbroken series of different images, each of which lasts as little as 13 milliseconds. Persistence of vision sometimes accounts for a very short single-millisecond visual stimulus having a perceived duration of between 100 ms and 400 ms. Multiple stimuli that are very short are sometimes perceived as a single stimulus, such as a 10 ms green flash of light immediately followed by a 10 ms red flash of light perceived as a single yellow flash of light. == Film and video == === Silent film === Early silent films had stated frame rates anywhere from 16 to 24 frames per second (FPS), but since the cameras were hand-cranked, the rate often changed during the scene to fit the mood. Projectionists could also change the frame rate in the theater by adjusting a rheostat controlling the voltage powering the film-carrying mechanism in the projector. Film companies often intended for theaters to show their silent films at a higher frame rate than that at which they were filmed. These frame rates were enough to give a sense of motion, but the motion was perceived as jerky. To minimize the perceived flicker, projectors employed dual- and triple-blade shutters, so each frame was displayed two or three times, increasing the flicker rate to 48 or 72 hertz and reducing eye strain. Thomas Edison said that 46 frames per second was the minimum needed for the eye to perceive motion: "Anything less will strain the eye." In the mid to late 1920s, the frame rate for silent film increased to 20–26 FPS. === Sound film === When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive than the eye to changes in frequency. Many theaters had shown silent films at 22 to 26 FPS, which is why the industry chose 24 FPS for sound film as a compromise. 
From 1927 to 1930, as various studios updated equipment, the rate of 24 FPS became standard for 35 mm sound film. At 24 FPS, the film travels through the projector at a rate of 456 millimetres (18.0 in) per second. This allowed simple two-blade shutters to give a projected series of images at 48 per second, satisfying Edison's recommendation. Many modern 35 mm film projectors use three-blade shutters to give 72 images per second—each frame is flashed on screen three times. === Animation === In drawn animation, moving characters are often shot "on twos", that is to say, one drawing is shown for every two frames of film (which usually runs at 24 frames per second), meaning there are only 12 drawings per second. Even though the image update rate is low, the fluidity is satisfactory for most subjects. However, when a character is required to perform a quick movement, it is usually necessary to revert to animating "on ones", as "twos" are too slow to convey the motion adequately. A blend of the two techniques keeps the eye fooled without unnecessary production cost. Animation for most "Saturday morning cartoons" was produced as cheaply as possible and was most often shot on "threes" or even "fours", i.e. three or four frames per drawing. This translates to only 8 or 6 drawings per second respectively. Anime is also usually drawn on threes or twos. === Modern video standards === Due to the mains frequency of electric grids, analog television broadcast was developed with frame rates of 50 Hz (most of the world) or 60 Hz (Canada, US, Mexico, Philippines, Japan, South Korea). The frequency of the electricity grid was extremely stable and therefore it was logical to use it for synchronization. The introduction of color television technology made it necessary to lower that 60 FPS frequency by 0.1% to avoid "dot crawl", a display artifact appearing on legacy black-and-white displays, showing up on highly-color-saturated surfaces. It was found that by lowering the frame rate by 0.1%, the undesirable effect was minimized. As of 2021, video transmission standards in North America, Japan, and South Korea are still based on 60/1.001 ≈ 59.94 images per second. Two sizes of images are typically used: 1920×1080 ("1080i/p") and 1280×720 ("720p"). Confusingly, interlaced formats are customarily stated at 1/2 their image rate, 29.97/25 FPS, and double their image height, but these statements are purely custom; in each format, 60 images per second are produced. A resolution of 1080i produces 59.94 or 50 1920×540 images, each squashed to half-height in the photographic process and stretched back to fill the screen on playback in a television set. The 720p format produces 59.94/50 or 29.97/25 1280×720p images, not squeezed, so that no expansion or squeezing of the image is necessary. This confusion was industry-wide in the early days of digital video software, with much software being written incorrectly because its developers believed that only 29.97 complete images were expected each second. While it was true that each picture element was polled and sent only 29.97 times per second, the pixel location immediately below that one was polled 1/60 of a second later, part of a completely separate image for the next 1/60-second frame. At its native 24 FPS rate, film could not be displayed on 60 Hz video without the necessary pulldown process, often leading to "judder": to convert 24 frames per second into 60 frames per second, every odd frame is repeated, playing twice, while every even frame is tripled. 
This creates uneven motion, appearing stroboscopic. Other conversions have similar uneven frame doubling. Newer video standards support 120, 240, or 300 frames per second, so frames can be evenly sampled for standard frame rates such as 24, 48 and 60 FPS film or 25, 30, 50 or 60 FPS video. Of course, these higher frame rates may also be displayed at their native rates. === Electronic camera specifications === In electronic camera specifications frame rate refers to the maximum possible rate at which frames can be captured (e.g. if the exposure time were set to near-zero), but in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number than the frame rate. == Computer games == In computer video games, frame rate plays an important part in the experience as, unlike film, games are rendered in real-time. 60 frames per second has for a long time been considered the minimum frame rate for smoothly animated game play. Video games designed for PAL markets, before the sixth generation of video game consoles, had lower frame rates by design due to the 50 Hz output. This noticeably made fast-paced games, such as racing or fighting games, run slower; less frequently, developers accounted for the frame rate difference and altered the game code to achieve (nearly) identical pacing across both regions, with varying degrees of success. Computer monitors marketed to competitive PC gamers can hit 360 Hz, 500 Hz, or more. High frame rates make action scenes look less blurry, such as sprinting through the wilderness in an open world game, spinning rapidly to face an opponent in a first-person shooter, or keeping track of details during an intense fight in a multiplayer online battle arena. Input latency is also reduced. Some people may have difficulty perceiving the differences between high frame rates, though. Frame time is related to frame rate, but it measures the time between frames. A game could maintain an average of 60 frames per second but appear choppy because of a poor frame time. Game reviews sometimes average the worst 1% of frame rates, reported as the 99th percentile, to measure how choppy the game appears. A small difference between the average frame rate and the 99th percentile would generally indicate a smooth experience. To mitigate the choppiness of poorly optimized games, players can set frame rate caps closer to their 99th percentile. When a game's frame rate is different than the display's refresh rate, screen tearing can occur. Vsync mitigates this, but it caps the frame rate to the display's refresh rate, increases input lag, and introduces judder. Variable refresh rate displays automatically set their refresh rate equal to the game's frame rate, as long as it is within the display's supported range. == Frame rate up-conversion == Frame rate up-conversion (FRC) is the process of increasing the temporal resolution of a video sequence by synthesizing one or more intermediate frames between two consecutive frames. A low frame rate causes aliasing, yields abrupt motion artifacts, and degrades the video quality. Consequently, the temporal resolution is an important factor affecting video quality. Algorithms for FRC are widely used in applications, including visual quality enhancement, video compression and slow-motion video generation. === Methods === Most FRC methods can be categorized into optical flow or kernel-based and pixel hallucination-based methods. 
==== Flow-based FRC ==== Flow-based methods linearly combine predicted optical flows between two input frames to approximate flows from the target intermediate frame to the input frames. Some of these methods also use flow reversal (projection) for more accurate image warping. Moreover, there are algorithms that give different weights to overlapped flow vectors depending on the object depth of the scene via a flow projection layer. ==== Pixel hallucination-based FRC ==== Pixel hallucination-based methods apply deformable convolution in the center-frame generator, replacing optical flows with offset vectors. There are algorithms that also interpolate middle frames with the help of deformable convolution in the feature domain. However, since these methods directly hallucinate pixels, unlike the flow-based FRC methods, the predicted frames tend to be blurry when fast-moving objects are present. == See also == Delta timing Federal Standard 1037C Film-out Flicker fusion threshold Glossary of video terms High frame rate List of motion picture film formats Micro stuttering MIL-STD-188 Movie projector Time-lapse photography Video compression == References == == External links == "Temporal Rate Conversion"—a very detailed guide about the visual interference of TV, video & PC (Wayback Machine copy) Compare frames per second: which looks better?—a web tool to visually compare differences in frame rate and motion blur.
Wikipedia/Burst_rate
Super-resolution imaging (SR) is a class of techniques that improve the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced. In some radar and sonar imaging applications (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC) and compressed sensing-based algorithms (e.g., SAMV) are employed to achieve SR over the standard periodogram algorithm. Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy. == Basic concepts == Because some of the ideas surrounding super-resolution raise fundamental issues, there is a need at the outset to examine the relevant physical and information-theoretical principles: Diffraction limit: The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by laws of physics, whether formulated by the diffraction equations in the wave theory of light or equivalently the uncertainty principle for photons in quantum mechanics. Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it. One does not so much “break” as “run around” the diffraction limit. New procedures probing electro-magnetic disturbances at the molecular level (in the so-called near field) remain fully consistent with Maxwell's equations. Spatial-frequency domain: A succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial-frequency, beyond which pattern elements fail to be transferred into the optical image, i.e., are not resolved. But in fact what is set by diffraction theory is the width of the passband, not a fixed upper limit. No laws of physics are broken when a spatial frequency band beyond the cut-off spatial frequency is swapped for one inside it: this has long been implemented in dark-field microscopy. Nor are information-theoretical rules broken when superimposing several bands; disentangling them in the received image needs assumptions of object invariance during multiple exposures, i.e., the substitution of one kind of uncertainty for another. Information: When the term super-resolution is used in techniques of inferring object details from statistical treatment of the image within standard resolution limits, for example, averaging multiple exposures, it involves an exchange of one kind of information (extracting signal from noise) for another (the assumption that the target has remained invariant). Resolution and localization: True resolution involves the distinction of whether a target, e.g. a star or a spectral line, is single or double, ordinarily requiring separable peaks in the image. When a target is known to be single, its location can be determined with higher precision than the image width by finding the centroid (center of gravity) of its image light distribution. The word ultra-resolution had been proposed for this process but it did not catch on, and the high-precision localization procedure is typically referred to as super-resolution. 
The technical achievements of enhancing the performance of image-forming and image-sensing devices now classified as super-resolution make full use of, but always stay within, the bounds imposed by the laws of physics and information theory. == Techniques == === Optical or diffractive super-resolution === Substituting spatial-frequency bands: Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis. ==== Multiplexing spatial-frequency bands ==== An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that need not even be within the passband, is superimposed on the target. The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. moiré fringes, and carries information about target detail which simple unstructured illumination does not. The “superresolved” components, however, need disentangling to be revealed. For an example, see structured illumination. ==== Multiple parameter use within traditional diffraction limit ==== If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would use normal passband transmission but are then separately decoded to reconstitute target structure with extended resolution. ==== Probing near-field electromagnetic disturbance ==== The usual discussion of super-resolution involves conventional imaging of an object by an optical system. But modern technology allows probing the electromagnetic disturbance within molecular distances of the source, which has superior resolution properties; see also evanescent waves and the development of the superlens. === Geometrical or image-processing super-resolution === ==== Multi-exposure image noise reduction ==== When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit. ==== Single-frame deblurring ==== Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it. ==== Sub-pixel image localization ==== The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels. Provided that there is enough light, this can be achieved with arbitrary precision, much better than the pixel width of the detecting apparatus and the resolution limit for the decision of whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is the basis of what has become known as super-resolution microscopy, e.g. stochastic optical reconstruction microscopy (STORM), where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity.
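The centroid ("center of gravity") estimate just described can be made concrete with a short numerical sketch. It is a minimal illustration rather than production code: the function name and the synthetic Gaussian spot are assumptions made for the example, and practical localization pipelines also handle background and noise.

import numpy as np

def centroid(image):
    # Intensity-weighted center of mass of a small patch, in pixel units.
    image = np.asarray(image, dtype=float)
    rows, cols = np.indices(image.shape)
    total = image.sum()
    return (rows * image).sum() / total, (cols * image).sum() / total

# A blurred spot whose true center lies between pixel centers:
y, x = np.mgrid[0:7, 0:7]
spot = np.exp(-((y - 3.3) ** 2 + (x - 2.7) ** 2) / 2.0)
print(centroid(spot))  # approximately (3.3, 2.7), i.e. located to a fraction of a pixel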
==== Bayesian induction beyond traditional diffraction limit ==== Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object. The classical example is Toraldo di Francia's proposition of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, and requires the prior limitation to the choice "single or double?" The approach can take the form of extrapolating the image in the frequency domain, by assuming that the object is an analytic function, and that we can exactly know the function values in some interval. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for radar, astronomy, microscopy or magnetic resonance imaging. More recently, a fast single image super-resolution algorithm based on a closed-form solution to ℓ 2 − ℓ 2 {\displaystyle \ell _{2}-\ell _{2}} problems has been proposed and demonstrated to accelerate most of the existing Bayesian super-resolution methods significantly. == Aliasing == Geometrical SR reconstruction algorithms are possible if and only if the input low resolution images have been under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction image is embedded in the low-frequency content of each of the observed images. Given a sufficient number of observation images, and if the set of observations vary in their phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed. In practice, this frequency-based approach is not used for reconstruction, but even in the case of spatial approaches (e.g. shift-add fusion), the presence of aliasing is still a necessary condition for SR reconstruction. == Technical implementations == There are many both single-frame and multiple-frame variants of SR. Multiple-frame SR uses the sub-pixel shifts between multiple low resolution images of the same scene. It creates an improved resolution image fusing information from all low resolution images, and the created higher resolution images are better descriptions of the scene. Single-frame SR methods attempt to magnify the image without producing blur. These methods use other parts of the low resolution images, or other unrelated images, to guess what the high-resolution image should look like. Algorithms can also be divided by their domain: frequency or space domain. Originally, super-resolution methods worked well only on grayscale images, but researchers have found methods to adapt them to color camera images. Recently, the use of super-resolution for 3D data has also been shown. == Research == There is promising research on using deep convolutional networks to perform super-resolution. In particular work has been demonstrated showing the transformation of a 20x microscope image of pollen grains into a 1500x scanning electron microscope image using it. 
While this technique can increase the information content of an image, there is no guarantee that the upscaled features exist in the original image and deep convolutional upscalers should not be used in analytical applications with ambiguous inputs. These methods can hallucinate image features, which can make them unsafe for medical use. == See also == Optical resolution Oversampling Video super-resolution Single-particle trajectory Superoscillation == References == === Other related work === Curtis, Craig H.; Milster, Tom D. (October 1992). "Analysis of Superresolution in Magneto-Optic Data Storage Devices". Applied Optics. 31 (29): 6272–6279. Bibcode:1992ApOpt..31.6272M. doi:10.1364/AO.31.006272. PMID 20733840. Zalevsky, Z.; Mendlovic, D. (2003). Optical Superresolution. Springer. ISBN 978-0-387-00591-1. Caron, J.N. (September 2004). "Rapid supersampling of multiframe sequences by use of blind deconvolution". Optics Letters. 29 (17): 1986–1988. Bibcode:2004OptL...29.1986C. doi:10.1364/OL.29.001986. PMID 15455755. Clement, G.T.; Huttunen, J.; Hynynen, K. (2005). "Superresolution ultrasound imaging using back-projected reconstruction". Journal of the Acoustical Society of America. 118 (6): 3953–3960. Bibcode:2005ASAJ..118.3953C. doi:10.1121/1.2109167. PMID 16419839. Geisler, W.S.; Perry, J.S. (2011). "Statistics for optimal point prediction in natural images". Journal of Vision. 11 (12): 14. doi:10.1167/11.12.14. PMC 5144165. PMID 22011382. Cheung, V.; Frey, B. J.; Jojic, N. (20–25 June 2005). Video epitomes (PDF). Conference on Computer Vision and Pattern Recognition (CVPR). Vol. 1. pp. 42–49. doi:10.1109/CVPR.2005.366. Bertero, M.; Boccacci, P. (October 2003). "Super-resolution in computational imaging". Micron. 34 (6–7): 265–273. doi:10.1016/s0968-4328(03)00051-9. PMID 12932769. Borman, S.; Stevenson, R. (1998). "Spatial Resolution Enhancement of Low-Resolution Image Sequences – A Comprehensive Review with Directions for Future Research" (Technical report). University of Notre Dame. Borman, S.; Stevenson, R. (1998). Super-resolution from image sequences — a review (PDF). Midwest Symposium on Circuits and Systems. Park, S. C.; Park, M. K.; Kang, M. G. (May 2003). "Super-resolution image reconstruction: a technical overview". IEEE Signal Processing Magazine. 20 (3): 21–36. Bibcode:2003ISPM...20...21P. doi:10.1109/MSP.2003.1203207. S2CID 12320918. Farsiu, S.; Robinson, D.; Elad, M.; Milanfar, P. (August 2004). "Advances and Challenges in Super-Resolution". International Journal of Imaging Systems and Technology. 14 (2): 47–57. doi:10.1002/ima.20007. S2CID 12351561. Elad, M.; Hel-Or, Y. (August 2001). "Fast Super-Resolution Reconstruction Algorithm for Pure Translational Motion and Common Space-Invariant Blur". IEEE Transactions on Image Processing. 10 (8): 1187–1193. Bibcode:2001ITIP...10.1187E. CiteSeerX 10.1.1.11.2502. doi:10.1109/83.935034. PMID 18255535. Irani, M.; Peleg, S. (June 1990). Super Resolution From Image Sequences (PDF). International Conference on Pattern Recognition. Vol. 2. pp. 115–120. Sroubek, F.; Cristobal, G.; Flusser, J. (2007). "A Unified Approach to Superresolution and Multichannel Blind Deconvolution". IEEE Transactions on Image Processing. 16 (9): 2322–2332. Bibcode:2007ITIP...16.2322S. doi:10.1109/TIP.2007.903256. PMID 17784605. S2CID 6367149. Calabuig, Alejandro; Micó, Vicente; Garcia, Javier; Zalevsky, Zeev; Ferreira, Carlos (March 2011). "Single-exposure super-resolved interferometric microscopy by red–green–blue multiplexing". Optics Letters. 
36 (6): 885–887. Bibcode:2011OptL...36..885C. doi:10.1364/OL.36.000885. PMID 21403717. Chan, Wai-San; Lam, Edmund; Ng, Michael K.; Mak, Giuseppe Y. (September 2007). "Super-resolution reconstruction in a computational compound-eye imaging system". Multidimensional Systems and Signal Processing. 18 (2–3): 83–101. Bibcode:2007MSySP..18...83C. doi:10.1007/s11045-007-0022-3. S2CID 16452552. Ng, Michael K.; Shen, Huanfeng; Lam, Edmund Y.; Zhang, Liangpei (2007). "A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video". EURASIP Journal on Advances in Signal Processing. 2007: 074585. Bibcode:2007EJASP2007..104N. doi:10.1155/2007/74585. hdl:10722/73871. Glasner, D.; Bagon, S.; Irani, M. (October 2009). Super-Resolution from a Single Image (PDF). International Conference on Computer Vision (ICCV).; "example and results". Ben-Ezra, M.; Lin, Zhouchen; Wilburn, B.; Zhang, Wei (July 2011). "Penrose Pixels for Super-Resolution" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 33 (7): 1370–1383. CiteSeerX 10.1.1.174.8804. doi:10.1109/TPAMI.2010.213. PMID 21135446. S2CID 184868. Berliner, L.; Buffa, A. (2011). "Super-resolution variable-dose imaging in digital radiography: quality and dose reduction with a fluoroscopic flat-panel detector". Int J Comput Assist Radiol Surg. 6 (5): 663–673. doi:10.1007/s11548-011-0545-9. PMID 21298404. Timofte, R.; De Smet, V.; Van Gool, L. (November 2014). A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution (PDF). 12th Asian Conference on Computer Vision (ACCV).; "codes and data". Huang, J.-B; Singh, A.; Ahuja, N. (June 2015). Single Image Super-Resolution from Transformed Self-Exemplars. IEEE Conference on Computer Vision and Pattern Recognition.; "project page". CHRISTENSEN-JEFFRIES, T.; COUTURE, O.; DAYTON, P.A.; ELDAR, Y.C.; HYNYNEN, K.; KIESSLING, F.; O’REILLY, M.; PINTON, G.F.; SCHMITZ, G.; TANG, M.-X.; TANTER, M.; VAN SLOUN, R.J.G. (2020). "Super-resolution Ultrasound Imaging". Ultrasound Med. Biol. 46 (4): 865–891. doi:10.1016/j.ultrasmedbio.2019.11.013. PMC 8388823. PMID 31973952.
Wikipedia/Super-resolution_imaging
Frame rate, most commonly expressed in frame/s, frames per second or FPS, is typically the frequency (rate) at which consecutive images (frames) are captured or displayed. This definition applies to film and video cameras, computer animation, and motion capture systems. In these contexts, frame rate may be used interchangeably with frame frequency and refresh rate, which are expressed in hertz. Additionally, in the context of computer graphics performance, FPS is the rate at which a system, particularly a GPU, is able to generate frames, and refresh rate is the frequency at which a display shows completed frames. In electronic camera specifications frame rate refers to the maximum possible rate frames could be captured, but in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number than the frame rate. == Human vision == The temporal sensitivity and resolution of human vision varies depending on the type and characteristics of visual stimulus, and it differs between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion. Modulated light (such as a computer display) is perceived as stable by the majority of participants in studies when the rate is higher than 50 Hz. This perception of modulated light as steady is known as the flicker fusion threshold. However, when the modulated light is non-uniform and contains an image, the flicker fusion threshold can be much higher, in the hundreds of hertz. With regard to image recognition, people have been found to recognize a specific image in an unbroken series of different images, each of which lasts as little as 13 milliseconds. Persistence of vision sometimes accounts for very short single-millisecond visual stimulus having a perceived duration of between 100 ms and 400 ms. Multiple stimuli that are very short are sometimes perceived as a single stimulus, such as a 10 ms green flash of light immediately followed by a 10 ms red flash of light perceived as a single yellow flash of light. == Film and video == === Silent film === Early silent films had stated frame rates anywhere from 16 to 24 frames per second (FPS), but since the cameras were hand-cranked, the rate often changed during the scene to fit the mood. Projectionists could also change the frame rate in the theater by adjusting a rheostat controlling the voltage powering the film-carrying mechanism in the projector. Film companies often intended for theaters to show their silent films at a higher frame rate than that at which they were filmed. These frame rates were enough for the sense of motion, but it was perceived as jerky motion. To minimize the perceived flicker, projectors employed dual- and triple-blade shutters, so each frame was displayed two or three times, increasing the flicker rate to 48 or 72 hertz and reducing eye strain. Thomas Edison said that 46 frames per second was the minimum needed for the eye to perceive motion: "Anything less will strain the eye." In the mid to late 1920s, the frame rate for silent film increased to 20–26 FPS. === Sound film === When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive than the eye to changes in frequency. Many theaters had shown silent films at 22 to 26 FPS, which is why the industry chose 24 FPS for sound film as a compromise. 
From 1927 to 1930, as various studios updated equipment, the rate of 24 FPS became standard for 35 mm sound film. At 24 FPS, the film travels through the projector at a rate of 456 millimetres (18.0 in) per second. This allowed simple two-blade shutters to give a projected series of images at 48 per second, satisfying Edison's recommendation. Many modern 35 mm film projectors use three-blade shutters to give 72 images per second—each frame is flashed on screen three times. === Animation === In drawn animation, moving characters are often shot "on twos", that is to say, one drawing is shown for every two frames of film (which usually runs at 24 frames per second), meaning there are only 12 drawings per second. Even though the image update rate is low, the fluidity is satisfactory for most subjects. However, when a character is required to perform a quick movement, it is usually necessary to revert to animating "on ones", as "twos" are too slow to convey the motion adequately. A blend of the two techniques keeps the eye fooled without unnecessary production cost. Animation for most "Saturday morning cartoons" was produced as cheaply as possible and was most often shot on "threes" or even "fours", i.e. three or four frames per drawing. This translates to only 8 or 6 drawings per second respectively. Anime is also usually drawn on threes or twos. === Modern video standards === Due to the mains frequency of electric grids, analog television broadcast was developed with frame rates of 50 Hz (most of the world) or 60 Hz (Canada, US, Mexico, Philippines, Japan, South Korea). The frequency of the electricity grid was extremely stable and was therefore a logical reference to use for synchronization. The introduction of color television technology made it necessary to lower that 60 FPS frequency by 0.1% to avoid "dot crawl", a display artifact appearing on legacy black-and-white displays, showing up on highly-color-saturated surfaces. It was found that by lowering the frame rate by 0.1%, the undesirable effect was minimized. As of 2021, video transmission standards in North America, Japan, and South Korea are still based on 60/1.001 ≈ 59.94 images per second. Two sizes of images are typically used: 1920×1080 ("1080i/p") and 1280×720 ("720p"). Confusingly, interlaced formats are customarily stated at 1/2 their image rate, 29.97/25 FPS, and double their image height, but these statements are purely custom; in each format, 59.94 or 50 images per second are produced. A resolution of 1080i produces 59.94 or 50 1920×540 images per second, each squashed to half-height in the photographic process and stretched back to fill the screen on playback in a television set. The 720p format produces 59.94/50 or 29.97/25 1280×720 images per second, not squeezed, so that no expansion or squeezing of the image is necessary. This confusion was industry-wide in the early days of digital video software; much software was written incorrectly because its developers assumed that only 29.97 complete images arrived each second. While it was true that each picture element was polled and sent only 29.97 times per second, the pixel location immediately below that one was polled 1/60 of a second later, part of a completely separate image for the next 1/60-second frame. At its native 24 FPS rate, film could not be displayed on 60 Hz video without the necessary pulldown process, often leading to "judder": to convert 24 frames per second into 60 frames per second, every odd frame is repeated, playing twice, while every even frame is tripled.
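The 24-to-60 conversion just described follows a fixed repetition cadence, commonly called 2:3 (or 3:2) pulldown. The sketch below is an illustration only and does not refer to any particular tool; it simply counts how many times each film frame is shown.

def pulldown_counts(num_frames):
    # 2:3 pulldown: the 1st, 3rd, 5th, ... film frames are shown twice and the
    # 2nd, 4th, 6th, ... are shown three times, so 24 film frames become
    # 24 * 2.5 = 60 video images per second.
    return [2 if i % 2 == 0 else 3 for i in range(num_frames)]

counts = pulldown_counts(24)
print(counts[:6])   # [2, 3, 2, 3, 2, 3]
print(sum(counts))  # 60 images for one second of 24 fps film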
This creates uneven motion, appearing stroboscopic. Other conversions have similar uneven frame doubling. Newer video standards support 120, 240, or 300 frames per second, so frames can be evenly sampled for standard frame rates such as 24, 48 and 60 FPS film or 25, 30, 50 or 60 FPS video. Of course, these higher frame rates may also be displayed at their native rates. === Electronic camera specifications === In electronic camera specifications, frame rate refers to the maximum possible rate at which frames can be captured (e.g. if the exposure time were set to near zero), but in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number than the frame rate. == Computer games == In computer video games, frame rate plays an important part in the experience as, unlike film, games are rendered in real-time. 60 frames per second has for a long time been considered the minimum frame rate for smoothly animated game play. Video games designed for PAL markets, before the sixth generation of video game consoles, had lower frame rates by design due to the 50 Hz output. This noticeably made fast-paced games, such as racing or fighting games, run slower; less frequently, developers accounted for the frame rate difference and altered the game code to achieve (nearly) identical pacing across both regions, with varying degrees of success. Computer monitors marketed to competitive PC gamers can hit 360 Hz, 500 Hz, or more. High frame rates make action scenes look less blurry, such as sprinting through the wilderness in an open world game, spinning rapidly to face an opponent in a first-person shooter, or keeping track of details during an intense fight in a multiplayer online battle arena. Input latency is also reduced. Some people may have difficulty perceiving the differences between high frame rates, though. Frame time is related to frame rate, but it measures the time between frames. A game could maintain an average of 60 frames per second but appear choppy because of occasional long frame times. Game reviews sometimes average the worst 1% of frame rates, reported as the 99th percentile, to measure how choppy the game appears. A small difference between the average frame rate and 99th percentile would generally indicate a smooth experience. To mitigate the choppiness of poorly optimized games, players can set frame rate caps closer to their 99th percentile. When a game's frame rate is different from the display's refresh rate, screen tearing can occur. Vsync mitigates this, but it caps the frame rate to the display's refresh rate, increases input lag, and introduces judder. Variable refresh rate displays automatically set their refresh rate equal to the game's frame rate, as long as it is within the display's supported range. == Frame rate up-conversion == Frame rate up-conversion (FRC) is the process of increasing the temporal resolution of a video sequence by synthesizing one or more intermediate frames between two consecutive frames. A low frame rate causes aliasing, yields abrupt motion artifacts, and degrades the video quality. Consequently, the temporal resolution is an important factor affecting video quality. Algorithms for FRC are widely used in applications including visual quality enhancement, video compression, and slow-motion video generation. === Methods === Most FRC methods can be categorized into optical-flow- or kernel-based methods and pixel hallucination-based methods.
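Before the two method families are described, a baseline helps frame the problem: the simplest way to synthesize an intermediate frame is to average the two neighbouring frames, which produces ghosting on moving content and is exactly what the flow-based and hallucination-based approaches below try to avoid. The sketch is illustrative only; the function name and the toy frames are assumptions made for the example.

import numpy as np

def naive_midframe(frame_a, frame_b, t=0.5):
    # Blend two frames at interpolation time t in [0, 1]. No motion compensation
    # is performed, so moving objects appear doubled ("ghosted"); flow-based FRC
    # instead warps each frame along estimated motion before blending.
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    return ((1.0 - t) * a + t * b).astype(frame_a.dtype)

f0 = np.zeros((4, 4), dtype=np.uint8); f0[1, 0] = 255  # bright pixel at x = 0
f1 = np.zeros((4, 4), dtype=np.uint8); f1[1, 3] = 255  # the same pixel moved to x = 3
print(naive_midframe(f0, f1)[1])  # row 1 becomes [127, 0, 0, 127]: ghosting, not motion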
==== Flow-based FRC ==== Flow-based methods linearly combine predicted optical flows between the two input frames to approximate the flows from the target intermediate frame to the input frames. Some methods also apply flow reversal (projection) for more accurate image warping. Others assign different weights to overlapping flow vectors, depending on the object depth of the scene, via a flow projection layer. ==== Pixel hallucination-based FRC ==== Pixel hallucination-based methods apply deformable convolution in the center-frame generator, replacing optical flows with offset vectors. Some algorithms also interpolate middle frames with deformable convolution in the feature domain. However, since these methods directly hallucinate pixels, unlike the flow-based FRC methods, the predicted frames tend to be blurry when fast-moving objects are present. == See also == Delta timing Federal Standard 1037C Film-out Flicker fusion threshold Glossary of video terms High frame rate List of motion picture film formats Micro stuttering MIL-STD-188 Movie projector Time-lapse photography Video compression == References == == External links == "Temporal Rate Conversion"—a very detailed guide about the visual interference of TV, video & PC (Wayback Machine copy) Compare frames per second: which looks better?—a web tool to visually compare differences in frame rate and motion blur.
Wikipedia/Update_rate
Variable refresh rate (VRR) refers to a dynamic display that can continuously and seamlessly change its refresh rate without user input. A display supporting a variable refresh rate usually supports a specific range of refresh rates (e.g. 30 Hertz through 144 Hertz). This is called the VRR range. The refresh rate can continuously vary seamlessly anywhere within this range. == Purpose == On displays with a fixed refresh rate, a frame can only be shown on the screen at specific intervals, evenly spaced apart. If a new frame is not ready when that interval arrives, then the old frame is held on screen until the next interval (stutter) or a mixture of the old frame and the completed part of the new frame is shown (tearing). Conversely, if the frame is ready before the interval arrives, then it won't be shown until that interval arrives. Variable refresh rates eliminate these issues by matching the refresh rates of a display to be in sync with the frame rate from a video input, making the display motion more smooth. Although VRR is strongly associated with video games due to such content having unpredictable, discontinuous frame rates that would benefit from the technology, it is also useful for media whose frame rate is fixed and known in advance, such as film and video. Being able to sync the refresh rate with industry standard framerates (24, 30, and 60 FPS), it again helps to eliminate screen tearing. VRR also has use in power management, by temporarily lowering the refresh rate of a display during instances when there is little movement on the screen to save power. == History == Vector displays had a variable refresh rate on their cathode-ray tube (CRT), depending on the number of vectors on the screen, since more vectors took more time to draw on their screen. Since the 2010s, raster displays gained several industry standards for variable refresh rates. Historically, there was only a limited selection of fixed refresh rates for common display modes. == Implementations == Variable refresh rate display technologies include several industry standards and proprietary standards: AMD FreeSync Nvidia G-Sync DisplayPort 1.2a's optional Adaptive-Sync feature HDMI 2.1 Variable Refresh Rate (VRR) Apple ProMotion Qualcomm Q-Sync == References == == External links == TestUFO Animation: Variable Refresh Rate Simulation
Wikipedia/Variable_refresh_rate
In motion picture technology—either film or video—high frame rate (HFR) refers to higher frame rates than typical prior practice. The frame rate for motion picture film cameras was typically 24 frames per second (fps) with multiple flashes on each frame during projection to prevent flicker. Analog television and video employed interlacing where only half of the image (known as a video field) was recorded and played back/refreshed at once but at twice the rate of what would be allowed for progressive video of the same bandwidth, resulting in smoother playback, as opposed to progressive video which is more similar to how celluloid works. The field rate of analog television and video systems was typically 50 or 60 fields per second. Usage of frame rates higher than 24 fps for feature motion pictures and higher than 30 fps for other applications are emerging trends. Filmmakers may capture their projects in a high frame rate so that it can be evenly converted to multiple lower rates for distribution. == History of frame rates in cinema == In early cinema history, there was no standard frame rate established. Thomas Edison's early films were shot at 40 fps, while the Lumière Brothers used 16 fps. This had to do with a combination of the use of a hand crank rather than a motor, which created variable frame rates because of the inconsistency of the cranking of the film through the camera. After the introduction of synch sound recording, 24 fps became the industry standard frame rate for capture and projection of motion pictures. 24 fps was chosen because it was the minimum frame rate that would produce adequate sound quality. This was done because film was expensive, and using the lowest possible frame rate would use the least amount of film. A few film formats have experimented with frame rates higher than the 24 fps standard. The original 3-strip Cinerama features of the 1950s ran at 26 fps. The first two Todd-AO 70 mm features, Oklahoma! (1955) and Around the World in 80 Days (1956) were shot and projected at 30 fps. Douglas Trumbull's 70 mm Showscan film format operated at 60 fps. The IMAX HD film Momentum, presented at Seville Expo '92, was shot and projected at 48 fps. IMAX HD has also been used in film-based theme park attractions, including Disney's Soarin' Over California. The proposed Maxivision 48 format ran 35 mm film at 48 fps, but was never commercially deployed. Digital Cinema Initiatives has published a document outlining recommended practice for high frame rate digital cinema. This document outlines the frame rates and resolutions that can be used in high frame rate digital theatrical presentations with currently available equipment. In the case of cinema shot on film, as opposed to (whether analog or digital) video, HFR offers an additional benefit beyond temporal smoothness and motion blur. Especially for stationary subject matter, when shot with sufficiently fast stock, the physically random repositioning of film grains in each frame at higher rates effectively oversamples the image's spatial resolution beyond the minimum fineness of individual grains when viewed. == Usage in the film industry == Peter Jackson's The Hobbit film series, beginning with The Hobbit: An Unexpected Journey in December 2012, used a shooting and projection frame rate of 48 frames per second, becoming the first feature film with a wide release to do so. Its 2013 sequel, The Hobbit: The Desolation of Smaug and 2014 sequel, The Hobbit: The Battle of the Five Armies, followed suit. 
All films also have versions which are converted and projected at 24 fps. In 2016, Ang Lee released Billy Lynn's Long Halftime Walk. Unlike The Hobbit trilogy, which used 48 frames per second, the picture shot and projected selected scenes in 120 frames per second, which is five times faster than the 24 frames per second standard used in Hollywood. Lee's 2019 Gemini Man was also shot and distributed in 120 frames per second. Other filmmakers who intend to use the high frame rate format include James Cameron in his Avatar sequels and Andy Serkis in his adaptation of George Orwell's Animal Farm. In early 2022, Cameron announced that HFR conversions for his previous films, Avatar and Titanic, were in the works. Avatar: The Way of Water released on December 16, 2022 with a dynamic frame rate. Some scenes are displayed up to 48 fps, while others are displayed in a more traditional, slower rate. == Out of the cinema == Even when shot on film, frame rates higher than 24 fps and 30 fps are quite common in TV drama and in-game cinematics. ~50 or ~60 frames per second have been universal in television and video equipment, broadcast, and storage standards since their inception. Support for native 120 fps content is a primary feature of new Ultra-high-definition television standards such as ATSC 3.0. Some media players are capable of showing arbitrarily high framerates and almost all computers and smart devices can handle such formats as well. In recent years some televisions have the ability to take normal 24 fps videos and "upconvert" them to HFR content by interpolating the motion of the picture, creating new computer generated frames between each two key frames and running them at higher refresh rate. Similar computer programs allow for that as well but with higher precision and better quality as the computing power of the PC has grown, either realtime or offline. Filmmakers may originate their projects at 120, 240 or 300 fps so that it may be evenly pulled down to various multiple differing frame rates for distribution, such as 25, 30, 50, and 60 fps for video and 24, 48 or 60 fps for cinematic theater. The same is also done when creating slow motion sequences and is sometimes referred to as "overcranking." == Video file recording methods == Usually, cameras (including those in mobile phones) historically had two ways of encoding high framerate (or slow motion) video into the video file: the real-time method and the menial method. == See also == High-motion Motion interpolation Variable refresh rate Soap opera effect == References ==
Wikipedia/High_frame_rate
The C0 and C1 control code or control character sets define control codes for use in text by computer systems that use ASCII and derivatives of ASCII. The codes represent additional information about the text, such as the position of a cursor, an instruction to start a new line, or a message that the text has been received. C0 codes are the range 0x00–0x1F and the default C0 set was originally defined in ISO 646 (ASCII). C1 codes are the range 0x80–0x9F and the default C1 set was originally defined in ECMA-48 (harmonized later with ISO 6429). The ISO/IEC 2022 system of specifying control and graphic characters allows other C0 and C1 sets to be available for specialized applications, but they are rarely used. == C0 controls == ASCII defines 32 control characters, plus the DEL character. This large number of codes was desirable at the time, as multi-byte controls would require implementation of a state machine in the terminal, which was very difficult with contemporary electronics and mechanical terminals. Only a few codes have maintained their use: BEL, ESC, and the format effector (FEn) characters BS, TAB, LF, VT, FF, and CR. Others are unused or have acquired different meanings, such as NUL being the C string terminator. Some data transfer protocols such as ANPA-1312, Kermit, and XMODEM do make extensive use of SOH, STX, ETX, EOT, ACK, NAK and SYN for purposes approximating their original definitions; and some file formats use the "Information Separators" (ISn), such as the Unix info format and Python's splitlines string method. The names of some codes were changed in ISO 6429:1992 (or ECMA-48:1991) to be neutral with respect to writing direction. The abbreviations used were not changed, as the standard had already specified that those would remain unchanged when the standard is translated to other languages. Both new and old names exist for the renamed controls (the old name is the one matching the abbreviation). Unicode provides Control Pictures that can replace C0 control characters to make them visible on screen. However, caret notation is used more often. == C1 controls == In 1973, ECMA-35 and ISO 2022 attempted to define a method so an 8-bit "extended ASCII" code could be converted to a corresponding 7-bit code, and vice versa. In a 7-bit environment, the Shift Out (SO) would change the meaning of the 96 bytes 0x20 through 0x7F (i.e. all but the C0 control codes), to be the characters that an 8-bit environment would print if it used the same code with the high bit set. This meant that the range 0x80 through 0x9F could not be printed in a 7-bit environment; thus it was decided that no alternative character set could use them, and that these codes should be additional control codes, which became known as the C1 control codes. To allow a 7-bit environment to use these new controls, the sequences ESC @ through ESC _ were to be considered equivalent. The later ISO 8859 standards abandoned support for 7-bit codes, but preserved this range of control characters. The first C1 control code set to be registered for use with ISO 2022 was DIN 31626, a specialised set for bibliographic use, registered in 1979. The more common general-use ISO/IEC 6429 set was registered in 1983, although the ECMA-48 specification upon which it was based had first been published in 1976; a corresponding Japanese standard is JIS X 0211 (formerly JIS C 6323). Symbolic names defined by RFC 1345 and early drafts of ISO 10646, but not in ISO/IEC 6429 (PAD, HOP and SGC), are also used.
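The equivalence between the one-byte C1 controls and their two-character ESC forms is a fixed offset: the C1 code 0x80 + n corresponds to ESC followed by the byte 0x40 + n. A minimal sketch of the conversion, written for illustration only (the function names are not from any standard library):

def c1_to_escape(byte):
    # Map an 8-bit C1 control (0x80-0x9F) to its 7-bit form ESC Fe (Fe in 0x40-0x5F).
    if not 0x80 <= byte <= 0x9F:
        raise ValueError("not a C1 control code")
    return bytes([0x1B, byte - 0x40])

def escape_to_c1(seq):
    # Inverse mapping: a two-byte ESC Fe sequence back to the single 8-bit C1 byte.
    if len(seq) != 2 or seq[0] != 0x1B or not 0x40 <= seq[1] <= 0x5F:
        raise ValueError("not a two-byte C1 escape sequence")
    return bytes([seq[1] + 0x40])

print(c1_to_escape(0x9B))      # b'\x1b[' : CSI expressed as ESC [
print(escape_to_c1(b"\x1b["))  # b'\x9b'  : back to the one-byte CSI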
Except for SS2 and SS3 in EUC-JP text, and NEL in text transcoded from EBCDIC, the 8-bit forms of these codes were almost never used. CSI, DCS and OSC are used to control text terminals and terminal emulators, but almost always by using their 7-bit escape code representations. Nowadays if these codes are encountered it is far more likely they are intended to be printing characters from that position of Windows-1252 or Mac OS Roman. Except for NEL, Unicode does not provide a "control picture" for any of these. There is no well-known variation of Caret notation for them either. == Other control code sets == The ISO/IEC 2022 (ECMA-35) extension mechanism allowed escape sequences to change the C0 and C1 sets. The standard C0 control character set shown above is chosen with the sequence ESC ! @ and the above C1 set chosen with the sequence ESC " C. Several official and unofficial alternatives have been defined, but this is pretty much obsolete. Most were forced to retain a good deal of compatibility with the ASCII controls for interoperability. The standard makes ESC, SP and DEL "fixed" coded characters, which are available in their ASCII locations in all encodings that conform to the standard. It also specifies that if a C0 set included transmission control (TCn) codes, they must be encoded at their ASCII locations and could not be put in a C1 set, and any new transmission controls must be in a C1 set. === Alternative C0 character sets === ANPA-1312, a text markup language used for news transmission, replaces several C0 control characters. IPTC 7901, the newer international version of the above, has its own variations. Videotex has a completely different set. Teletext also defines a set similar to Videotex. T.61/T.51, and others replaced EM and GS with SS2 and SS3 so these functions could be used in a 7-bit environment without resorting to escape sequences. Some sets replaced FS with SS2, (same as ANPA-1312). The now-withdrawn JIS C 6225, designated JIS X 0207 in later sources. replaced FS with CEX or "Control Extension" which introduces control sequences for vertical text behaviour, superscripts and subscripts and for transmitting custom character graphics. === Alternative C1 character sets === A specialized C1 control code set is registered for bibliographic use (including string collation), such as by MARC-8. Various specialised C1 control code sets are registered for use by Videotex formats. The Stratus VOS operating system uses a C1 set called the NLS control set. It includes SS1 (Single-Shift 1) through SS15 (Single-Shift 15) controls, used to invoke individual characters from pre-defined supplementary character sets, in a similar manner to the single-shift mechanism of ISO/IEC 2022. The only single-shift controls defined by ISO/IEC 2022 are SS2 and SS3; these are retained in the VOS set at their original code points and function the same way. EBCDIC defines up to 29 additional control codes besides those present in ASCII. When translating EBCDIC to Unicode (or to ISO 8859), these codes are mapped to C1 control characters in a manner specified by IBM's Character Data Representation Architecture (CDRA). Although the New Line (NL) does translate to the ISO/IEC 6429 NEL (although it is often swapped with LF, following UNIX line ending convention), the remainder of the control codes do not correspond. For example, the EBCDIC control SPS and the ECMA-48 control PLU are both used to begin a superscript or end a subscript, but are not mapped to one another. 
Extended-ASCII-mapped EBCDIC can therefore be regarded as having its own C1 set, although it is not registered with the ISO-IR registry for ISO/IEC 2022. == Unicode == Unicode reserves the 65 code points described above for compatibility with the C0 and C1 control codes, giving them the general category Cc (control). These are: U+0000–U+001F (C0 controls) and U+007F (DEL) assigned to the C0 Controls and Basic Latin block, and U+0080–U+009F (C1 controls) assigned to the C1 Controls and Latin-1 Supplement block. Unicode only specifies semantics for the C0 format controls HT, LF, VT, FF, and CR (note BS is missing); the C0 information separators FS, GS, RS, US (and SP); and the C1 control NEL. The rest of the codes are transparent to Unicode and their meanings are left to higher-level protocols, with ISO/IEC 6429 suggested as a default. Unicode includes many additional format effector characters besides these, such as marks, embeds, isolates and pops for explicit bidirectional formatting, and the zero-width joiner and non-joiner for controlling ligature use. However these are given the general category Cf (format) rather than Cc. == See also == Control Pictures - Unicode graphical representation characters for the C0 control codes ANSI escape code == Footnotes == == References == The Unicode Standard C0 Controls and Basic Latin C1 Controls and Latin-1 Supplement Control Pictures The Unicode Standard, Version 6.1.0, Chapter 16: Special Areas and Format Characters ATIS Telecom Glossary 2007 De litteris regentibus C1 quaestiones septem or Are C1 characters legal in XHTML 1.0? W3C I18N FAQ: HTML, XHTML, XML and Control Codes International register of coded character sets to be used with escape sequences
Wikipedia/C0_and_C1_control_codes
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set. There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements. Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied. A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered. == Definition == The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols. The formal notion starts with a finite set A, often called the alphabet, which is totally ordered. That is, for any two symbols a and b in A that are not the same symbol, either a < b or b < a. The words of A are the finite sequences of symbols from A, including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence ε {\displaystyle \varepsilon } with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows: Given two different words of the same length, say a = a1a2...ak and b = b1b2...bk, the order of the two words depends on the alphabetic order of the symbols in the first place i where the two words differ (counting from the beginning of the words): a < b if and only if ai < bi in the underlying order of the alphabet A. If two words have different lengths, the usual lexicographical order pads the shorter one with "blanks" (a special symbol that is treated as smaller than every element of A) at the end until the words are the same length, and then the words are compared as in the previous case. However, in combinatorics, another convention is frequently used for the second case, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called shortlex order. In lexicographical order, the word "Thomas" appears before "Thompson" because they first differ at the fifth letter ('a' and 'p'), and letter 'a' comes before the letter 'p' in the alphabet. Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering. An important property of the lexicographical order is that for each n, the set of words of length n is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length n is finite (or equivalently, every non-empty subset has a least element). It is not true that the set of all finite words is well-ordered; for example, the infinite set of words {b, ab, aab, aaab, ... } has no lexicographically earliest element. == Numeral systems and dates == The lexicographical order is used not only in dictionaries, but also commonly for numbers and dates. 
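A small sketch makes the connection to numbers and dates concrete; it anticipates the points developed below. Built-in string comparison in most programming languages is lexicographic over character codes, so ISO 8601 dates and zero-padded numerals sort correctly as plain strings, while unpadded numerals do not. The variable names are illustrative only.

dates = ["2021-12-31", "2021-02-01", "1999-06-15"]
print(sorted(dates))    # ['1999-06-15', '2021-02-01', '2021-12-31'], chronological order

numbers = ["9", "10", "2"]
print(sorted(numbers))  # ['10', '2', '9']: lexicographic, not numeric
print(sorted(n.zfill(2) for n in numbers))  # ['02', '09', '10']: zero-padding restores numeric order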
One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. On the other hand, with the positional notation of the Hindu–Arabic numeral system, comparing numbers is easy, because the natural order on natural numbers is the same as the shortlex variant of the lexicographic order. In fact, with positional notation, a natural number is represented by a sequence of numerical digits, and a natural number is larger than another one if either it has more digits (ignoring leading zeroes) or the number of digits is the same and the first (most significant) digit which differs is larger. For real numbers written in decimal notation, a slightly different variant of the lexicographical order is used: the parts on the left of the decimal point are compared as before; if they are equal, the parts at the right of the decimal point are compared with the lexicographical order. The padding 'blank' in this context is a trailing "0" digit. When negative numbers are also considered, one has to reverse the order when comparing two negative numbers. This is not usually a problem for humans, but it may be for computers (testing the sign takes some time). This is one of the reasons for adopting two's complement representation for representing signed integers in computers. Another example of a non-dictionary use of lexicographical ordering appears in the ISO 8601 standard for dates, which expresses a date as YYYY-MM-DD. This formatting scheme has the advantage that the lexicographical order on sequences of characters that represent dates coincides with the chronological order: an earlier CE date is smaller in the lexicographical order than a later date up to year 9999. This date ordering makes computerized sorting of dates easier by avoiding the need for a separate sorting algorithm. == Monoid of words == The monoid of words over an alphabet A is the free monoid over A. That is, the elements of the monoid are the finite sequences (words) of elements of A (including the empty sequence, of length 0), and the operation (multiplication) is the concatenation of words. A word u is a prefix (or 'truncation') of another word v if there exists a word w such that v = uw. By this definition, the empty word ( ε {\displaystyle \varepsilon } ) is a prefix of every word, and every word is a prefix of itself (with w = ε {\displaystyle =\varepsilon } ); care must be taken if these cases are to be excluded. With this terminology, the above definition of the lexicographical order becomes more concise: Given a partially or totally ordered set A, and two words a and b over A such that b is non-empty, then one has a < b under lexicographical order, if at least one of the following conditions is satisfied: either a is a prefix of b, or there exist words u, v, w (possibly empty) and elements x and y of A such that x < y, a = uxv, and b = uyw. Notice that, due to the prefix condition in this definition, ε < b for all b ≠ ε , {\displaystyle \varepsilon <b\,\,{\text{ for all }}b\neq \varepsilon ,} where ε {\displaystyle \varepsilon } is the empty word. If < {\displaystyle \,<\,} is a total order on A , {\displaystyle A,} then so is the lexicographic order on the words of A . {\displaystyle A.} However, in general this is not a well-order, even if the alphabet A {\displaystyle A} is well-ordered. For instance, if A = {a, b}, the language {a^n b | n ≥ 0} has no least element in the lexicographical order: ... < aab < ab < b.
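The failure of well-ordering just illustrated can be checked directly with string comparison, which in Python follows the lexicographical order with the prefix-is-smaller convention. This is only an illustration; the loop bound is arbitrary.

# Each word a^n b is strictly smaller than the previous one a^(n-1) b, so the
# descending chain b > ab > aab > aaab > ... never reaches a least element.
words = ["a" * n + "b" for n in range(6)]  # ['b', 'ab', 'aab', 'aaab', ...]
for longer, shorter in zip(words[1:], words):
    assert longer < shorter                # lexicographic comparison of strings
print(" > ".join(reversed(words[:4])))     # aaab > aab > ab > b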
Since many applications require well orders, a variant of the lexicographical orders is often used. This well-order, sometimes called shortlex or quasi-lexicographical order, consists in considering first the lengths of the words (if length(a) < length(b), then a < b {\displaystyle a<b} ), and, if the lengths are equal, using the lexicographical order. If the order on A is a well-order, the same is true for the shortlex order. == Cartesian products == The lexicographical order defines an order on an n-ary Cartesian product of ordered sets, which is a total order when all these sets are themselves totally ordered. An element of a Cartesian product E 1 × ⋯ × E n {\displaystyle E_{1}\times \cdots \times E_{n}} is a sequence whose i {\displaystyle i} th element belongs to E i {\displaystyle E_{i}} for every i . {\displaystyle i.} As evaluating the lexicographical order of sequences compares only elements which have the same rank in the sequences, the lexicographical order extends to Cartesian products of ordered sets. Specifically, given two partially ordered sets A {\displaystyle A} and B , {\displaystyle B,} the lexicographical order on the Cartesian product A × B {\displaystyle A\times B} is defined as ( a , b ) ≤ ( a ′ , b ′ ) if and only if a < a ′ or ( a = a ′ and b ≤ b ′ ) , {\displaystyle (a,b)\leq \left(a^{\prime },b^{\prime }\right){\text{ if and only if }}a<a^{\prime }{\text{ or }}\left(a=a^{\prime }{\text{ and }}b\leq b^{\prime }\right),} The result is a partial order. If A {\displaystyle A} and B {\displaystyle B} are each totally ordered, then the result is a total order as well. The lexicographical order of two totally ordered sets is thus a linear extension of their product order. One can define similarly the lexicographic order on the Cartesian product of an infinite family of ordered sets, if the family is indexed by the natural numbers, or more generally by a well-ordered set. This generalized lexicographical order is a total order if each factor set is totally ordered. Unlike the finite case, an infinite product of well-orders is not necessarily well-ordered by the lexicographical order. For instance, the set of countably infinite binary sequences (by definition, the set of functions from natural numbers to { 0 , 1 } , {\displaystyle \{0,1\},} also known as the Cantor space { 0 , 1 } ω {\displaystyle \{0,1\}^{\omega }} ) is not well-ordered; the subset of sequences that have precisely one 1 {\displaystyle 1} (that is, { 100000..., 010000..., 001000..., ... }) does not have a least element under the lexicographical order induced by 0 < 1 , {\displaystyle 0<1,} because 100000... > 010000... > 001000... > ... is an infinite descending chain. Similarly, the infinite lexicographic product is not Noetherian either because 011111... < 101111... < 110111 ... < ... is an infinite ascending chain. == Functions over a well-ordered set == The functions from a well-ordered set X {\displaystyle X} to a totally ordered set Y {\displaystyle Y} may be identified with sequences indexed by X {\displaystyle X} of elements of Y . {\displaystyle Y.} They can thus be ordered by the lexicographical order, and for two such functions f {\displaystyle f} and g , {\displaystyle g,} the lexicographical order is thus determined by their values for the smallest x {\displaystyle x} such that f ( x ) ≠ g ( x ) . {\displaystyle f(x)\neq g(x).} If Y {\displaystyle Y} is also well-ordered and X {\displaystyle X} is finite, then the resulting order is a well-order. 
As shown above, if X {\displaystyle X} is infinite this is not the case. == Finite subsets == In combinatorics, one has often to enumerate, and therefore to order the finite subsets of a given set S . {\displaystyle S.} For this, one usually chooses an order on S . {\displaystyle S.} Then, sorting a subset of S {\displaystyle S} is equivalent to convert it into an increasing sequence. The lexicographic order on the resulting sequences induces thus an order on the subsets, which is also called the lexicographical order. In this context, one generally prefer to sort first the subsets by cardinality, such as in the shortlex order. Therefore, in the following, we will consider only orders on subsets of fixed cardinal. For example, using the natural order of the integers, the lexicographical ordering on the subsets of three elements of S = { 1 , 2 , 3 , 4 , 5 , 6 } {\displaystyle S=\{1,2,3,4,5,6\}} is 123 < 124 < 125 < 126 < 134 < 135 < 136 < 145 < 146 < 156 < 234 < 235 < 236 < 245 < 246 < 256 < 345 < 346 < 356 < 456. For ordering finite subsets of a given cardinality of the natural numbers, the colexicographical order (see below) is often more convenient, because all initial segments are finite, and thus the colexicographical order defines an order isomorphism between the natural numbers and the set of sets of n {\displaystyle n} natural numbers. This is not the case for the lexicographical order, as, with the lexicographical order, we have, for example, 12 n < 134 {\displaystyle 12n<134} for every n > 2. {\displaystyle n>2.} == Group orders of Zn == Let Z n {\displaystyle \mathbb {Z} ^{n}} be the free Abelian group of rank n , {\displaystyle n,} whose elements are sequences of n {\displaystyle n} integers, and operation is the addition. A group order on Z n {\displaystyle \mathbb {Z} ^{n}} is a total order, which is compatible with addition, that is a < b if and only if a + c < b + c . {\displaystyle a<b\quad {\text{ if and only if }}\quad a+c<b+c.} The lexicographical ordering is a group order on Z n . {\displaystyle \mathbb {Z} ^{n}.} The lexicographical ordering may also be used to characterize all group orders on Z n . {\displaystyle \mathbb {Z} ^{n}.} In fact, n {\displaystyle n} linear forms with real coefficients, define a map from Z n {\displaystyle \mathbb {Z} ^{n}} into R n , {\displaystyle \mathbb {R} ^{n},} which is injective if the forms are linearly independent (it may be also injective if the forms are dependent, see below). The lexicographic order on the image of this map induces a group order on Z n . {\displaystyle \mathbb {Z} ^{n}.} Robbiano's theorem is that every group order may be obtained in this way. More precisely, given a group order on Z n , {\displaystyle \mathbb {Z} ^{n},} there exist an integer s ≤ n {\displaystyle s\leq n} and s {\displaystyle s} linear forms with real coefficients, such that the induced map φ {\displaystyle \varphi } from Z n {\displaystyle \mathbb {Z} ^{n}} into R s {\displaystyle \mathbb {R} ^{s}} has the following properties; φ {\displaystyle \varphi } is injective; the resulting isomorphism from Z n {\displaystyle \mathbb {Z} ^{n}} to the image of φ {\displaystyle \varphi } is an order isomorphism when the image is equipped with the lexicographical order on R s . {\displaystyle \mathbb {R} ^{s}.} == Colexicographic order == The colexicographic or colex order is a variant of the lexicographical order that is obtained by reading finite sequences from the right to the left instead of reading them from the left to the right. 
More precisely, whereas the lexicographical order between two sequences is defined by a1a2...ak <lex b1b2 ... bk if ai < bi for the first i where ai and bi differ, the colexicographical order is defined by a1a2...ak <colex b1b2...bk if ai < bi for the last i where ai and bi differ In general, the difference between the colexicographical order and the lexicographical order is not very significant. However, when considering increasing sequences, typically for coding subsets, the two orders differ significantly. For example, for ordering the increasing sequences (or the sets) of two natural integers, the lexicographical order begins by 12 < 13 < 14 < 15 < ... < 23 < 24 < 25 < ... < 34 < 35 < ... < 45 < ..., and the colexicographic order begins by 12 < 13 < 23 < 14 < 24 < 34 < 15 < 25 < 35 < 45 < .... The main property of the colexicographical order for increasing sequences of a given length is that every initial segment is finite. In other words, the colexicographical order for increasing sequences of a given length induces an order isomorphism with the natural numbers, and allows enumerating these sequences. This is frequently used in combinatorics, for example in the proof of the Kruskal–Katona theorem. == Monomials == When considering polynomials, the order of the terms does not matter in general, as the addition is commutative. However, some algorithms, such as polynomial long division, require the terms to be in a specific order. Many of the main algorithms for multivariate polynomials are related with Gröbner bases, concept that requires the choice of a monomial order, that is a total order, which is compatible with the monoid structure of the monomials. Here "compatible" means that a < b implies a c < b c , {\displaystyle a<b{\text{ implies }}ac<bc,} if the monoid operation is denoted multiplicatively. This compatibility implies that the product of a polynomial by a monomial does not change the order of the terms. For Gröbner bases, a further condition must be satisfied, namely that every non-constant monomial is greater than the monomial 1. However this condition is not needed for other related algorithms, such as the algorithms for the computation of the tangent cone. As Gröbner bases are defined for polynomials in a fixed number of variables, it is common to identify monomials (for example x 1 x 2 3 x 4 x 5 2 {\displaystyle x_{1}x_{2}^{3}x_{4}x_{5}^{2}} ) with their exponent vectors (here [1, 3, 0, 1, 2]). If n is the number of variables, every monomial order is thus the restriction to N n {\displaystyle \mathbb {N} ^{n}} of a monomial order of Z n {\displaystyle \mathbb {Z} ^{n}} (see above § Group orders of Zn Z n , {\displaystyle \mathbb {Z} ^{n},} for a classification). One of these admissible orders is the lexicographical order. It is, historically, the first to have been used for defining Gröbner bases, and is sometimes called pure lexicographical order for distinguishing it from other orders that are also related to a lexicographical order. Another one consists in comparing first the total degrees, and then resolving the conflicts by using the lexicographical order. This order is not widely used, as either the lexicographical order or the degree reverse lexicographical order have generally better properties. The degree reverse lexicographical order consists also in comparing first the total degrees, and, in case of equality of the total degrees, using the reverse of the colexicographical order. 
That is, given two exponent vectors, one has [ a 1 , … , a n ] < [ b 1 , … , b n ] {\displaystyle [a_{1},\ldots ,a_{n}]<[b_{1},\ldots ,b_{n}]} if either a 1 + ⋯ + a n < b 1 + ⋯ + b n , {\displaystyle a_{1}+\cdots +a_{n}<b_{1}+\cdots +b_{n},} or a 1 + ⋯ + a n = b 1 + ⋯ + b n and a i > b i for the largest i for which a i ≠ b i . {\displaystyle a_{1}+\cdots +a_{n}=b_{1}+\cdots +b_{n}\quad {\text{ and }}\quad a_{i}>b_{i}{\text{ for the largest }}i{\text{ for which }}a_{i}\neq b_{i}.} For this ordering, the monomials of degree one have the same order as the corresponding indeterminates (this would not be the case if the reverse lexicographical order would be used). For comparing monomials in two variables of the same total degree, this order is the same as the lexicographic order. This is not the case with more variables. For example, for exponent vectors of monomials of degree two in three variables, one has for the degree reverse lexicographic order: [ 0 , 0 , 2 ] < [ 0 , 1 , 1 ] < [ 1 , 0 , 1 ] < [ 0 , 2 , 0 ] < [ 1 , 1 , 0 ] < [ 2 , 0 , 0 ] {\displaystyle [0,0,2]<[0,1,1]<[1,0,1]<[0,2,0]<[1,1,0]<[2,0,0]} For the lexicographical order, the same exponent vectors are ordered as [ 0 , 0 , 2 ] < [ 0 , 1 , 1 ] < [ 0 , 2 , 0 ] < [ 1 , 0 , 1 ] < [ 1 , 1 , 0 ] < [ 2 , 0 , 0 ] . {\displaystyle [0,0,2]<[0,1,1]<[0,2,0]<[1,0,1]<[1,1,0]<[2,0,0].} A useful property of the degree reverse lexicographical order is that a homogeneous polynomial is a multiple of the least indeterminate if and only if its leading monomial (its greater monomial) is a multiple of this least indeterminate. == See also == Collation Kleene–Brouwer order Lexicographic preferences - an application of lexicographic order in economics. Lexicographic optimization - an algorithmic problem of finding a lexicographically-maximal element. Lexicographic order topology on the unit square Lexicographic ordering in tensor abstract index notation Lexicographically minimal string rotation Leximin order Long line (topology) Lyndon word Pre-order - the name of the lexicographical order (of bits) in a binary tree traversal Star product, a different way of combining partial orders Shortlex order Orders on the Cartesian product of totally ordered sets == References == == External links == Learning materials related to Lexicographic and colexicographic order at Wikiversity
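The monomial orders described above can be checked with a short Python sketch; it is only an illustration with ad hoc key functions, not a computer algebra routine. Sorting exponent vectors by total degree and, for ties, by the reversed vector with negated entries reproduces the degree reverse lexicographic listing, while plain tuple comparison within a fixed degree reproduces the lexicographic one.

```python
def grevlex_key(a):
    # Degree reverse lexicographic order: compare total degrees first; ties are
    # broken at the last position where the vectors differ, the vector with the
    # larger entry there being the smaller monomial.
    return (sum(a), tuple(-x for x in reversed(a)))

def lex_key(a):
    # Lexicographic comparison (graded by total degree, which is constant here).
    return (sum(a), a)

# All exponent vectors of total degree 2 in three variables.
vectors = [(i, j, k) for i in range(3) for j in range(3) for k in range(3)
           if i + j + k == 2]

print(sorted(vectors, key=grevlex_key))
# [(0, 0, 2), (0, 1, 1), (1, 0, 1), (0, 2, 0), (1, 1, 0), (2, 0, 0)]
print(sorted(vectors, key=lex_key))
# [(0, 0, 2), (0, 1, 1), (0, 2, 0), (1, 0, 1), (1, 1, 0), (2, 0, 0)]
```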
Wikipedia/Colexicographic_order
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed (after creation). A string is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding. More general, string may also denote a sequence (or list) of data other than just characters. Depending on the programming language and precise data type used, a variable declared to be a string may either cause storage in memory to be statically allocated for a predetermined maximum length or employ dynamic allocation to allow it to hold a variable number of elements. When a string appears literally in source code, it is known as a string literal or an anonymous string. In formal languages, which are used in mathematical logic and theoretical computer science, a string is a finite sequence of symbols that are chosen from a set called an alphabet. == Purpose == A primary purpose of strings is to store human-readable text, like words and sentences. Strings are used to communicate information from a computer program to the user of the program. A program may also accept string input from its user. Further, strings may store data expressed as characters yet not intended for human reading. Example strings and their purposes: A message like "file upload complete" is a string that software shows to end users. In the program's source code, this message would likely appear as a string literal. User-entered text, like "I got a new job today" as a status update on a social media service. Instead of a string literal, the software would likely store this string in a database. Alphabetical data, like "AGATGCCGT" representing nucleic acid sequences of DNA. Computer settings or parameters, like "?action=edit" as a URL query string. Often these are intended to be somewhat human-readable, though their primary purpose is to communicate to computers. The term string may also designate a sequence of data or computer records other than characters — like a "string of bits" — but when used without qualification it refers to strings of characters. == History == Use of the word "string" to mean any items arranged in a line, series or succession dates back centuries. In 19th-century typesetting, compositors used the term "string" to denote a length of type printed on paper; the string would be measured to determine the compositor's pay. Use of the word "string" to mean "a sequence of symbols or linguistic elements in a definite order" emerged from mathematics, symbolic logic, and linguistic theory to speak about the formal behavior of symbolic systems, setting aside the symbols' meaning. For example, logician C. I. Lewis wrote in 1918: A mathematical system is any set of strings of recognisable marks in which some of the strings are taken initially and the remainder derived from these by operations performed according to rules which are independent of any meaning assigned to the marks. That a system should consist of 'marks' instead of sounds or odours is immaterial. According to Jean E. Sammet, "the first realistic string handling and pattern matching language" for computers was COMIT in the 1950s, followed by the SNOBOL language of the early 1960s. == String datatypes == A string datatype is a datatype modeled on the idea of a formal string. 
Strings are such an important and useful datatype that they are implemented in nearly every programming language. In some languages they are available as primitive types and in others as composite types. The syntax of most high-level programming languages allows for a string, usually quoted in some way, to represent an instance of a string datatype; such a meta-string is called a literal or string literal. === String length === Although formal strings can have an arbitrary finite length, the length of strings in real languages is often constrained to an artificial maximum. In general, there are two types of string datatypes: fixed-length strings, which have a fixed maximum length to be determined at compile time and which use the same amount of memory whether this maximum is needed or not, and variable-length strings, whose length is not arbitrarily fixed and which can use varying amounts of memory depending on the actual requirements at run time (see Memory management). Most strings in modern programming languages are variable-length strings. Of course, even variable-length strings are limited in length by the amount of available memory. The string length can be stored as a separate integer (which may put another artificial limit on the length) or implicitly through a termination character, usually a character value with all bits zero such as in C programming language. See also "Null-terminated" below. === Character encoding === String datatypes have historically allocated one byte per character, and, although the exact character set varied by region, character encodings were similar enough that programmers could often get away with ignoring this, since characters a program treated specially (such as period and space and comma) were in the same place in all the encodings a program would encounter. These character sets were typically based on ASCII or EBCDIC. If text in one encoding was displayed on a system using a different encoding, text was often mangled, though often somewhat readable and some computer users learned to read the mangled text. Logographic languages such as Chinese, Japanese, and Korean (known collectively as CJK) need far more than 256 characters (the limit of a one 8-bit byte per-character encoding) for reasonable representation. The normal solutions involved keeping single-byte representations for ASCII and using two-byte representations for CJK ideographs. Use of these with existing code led to problems with matching and cutting of strings, the severity of which depended on how the character encoding was designed. Some encodings such as the EUC family guarantee that a byte value in the ASCII range will represent only that ASCII character, making the encoding safe for systems that use those characters as field separators. Other encodings such as ISO-2022 and Shift-JIS do not make such guarantees, making matching on byte codes unsafe. These encodings also were not "self-synchronizing", so that locating character boundaries required backing up to the start of a string, and pasting two strings together could result in corruption of the second string. Unicode has simplified the picture somewhat. Most programming languages now have a datatype for Unicode strings. Unicode's preferred byte stream format UTF-8 is designed not to have the problems described above for older multibyte encodings. 
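As a small illustration of the encoding issues just described, the following Python sketch (standard library only) shows the one-byte-per-character assumption breaking down for non-ASCII text and the mangled but partly readable result of decoding bytes under the wrong single-byte encoding.

```python
text = "héllo, 世界"

utf8 = text.encode("utf-8")
print(len(text), "characters,", len(utf8), "UTF-8 bytes")   # 9 characters, 14 bytes

# Decoding those bytes under a legacy single-byte assumption mangles the text,
# though the ASCII portion remains readable (classic mojibake).
print(repr(utf8.decode("latin-1")))

# EUC-style multibyte encodings use more than one byte per CJK ideograph.
print(len("世界".encode("euc-jp")))   # 4 bytes for 2 characters
```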
UTF-8, UTF-16 and UTF-32 require the programmer to know that the fixed-size code units are different from the "characters"; the main difficulty currently is incorrectly designed APIs that attempt to hide this difference (UTF-32 does make code points fixed-sized, but these are not "characters" due to composing codes). === Implementations === Some languages, such as C++, Perl and Ruby, normally allow the contents of a string to be changed after it has been created; these are termed mutable strings. In other languages, such as Java, JavaScript, Lua, Python, and Go, the value is fixed and a new string must be created if any alteration is to be made; these are termed immutable strings. Some of these languages with immutable strings also provide another type that is mutable, such as Java and .NET's StringBuilder, the thread-safe Java StringBuffer, and the Cocoa NSMutableString. There are both advantages and disadvantages to immutability: although immutable strings may require inefficiently creating many copies, they are simpler and completely thread-safe. Strings are typically implemented as arrays of bytes, characters, or code units, in order to allow fast access to individual units or substrings—including characters when they have a fixed length. A few languages such as Haskell implement them as linked lists instead. Many high-level languages provide strings as a primitive data type, such as JavaScript and PHP, while most others provide them as a composite data type, some with special language support in writing literals, for example, Java and C#. Some languages, such as C, Prolog and Erlang, avoid implementing a dedicated string datatype at all, instead adopting the convention of representing strings as lists of character codes. Even in programming languages having a dedicated string type, a string can usually be iterated over as a sequence of character codes, like lists of integers or other values. === Representations === Representations of strings depend heavily on the choice of character repertoire and the method of character encoding. Older string implementations were designed to work with the repertoire and encoding defined by ASCII, or more recent extensions like the ISO 8859 series. Modern implementations often use the extensive repertoire defined by Unicode along with a variety of complex encodings such as UTF-8 and UTF-16. The term byte string usually indicates a general-purpose string of bytes, rather than strings of only (readable) characters, strings of bits, or such. Byte strings often imply that bytes can take any value and any data can be stored as-is, meaning that there should be no value interpreted as a termination value. Most string implementations are very similar to variable-length arrays with the entries storing the character codes of corresponding characters. The principal difference is that, with certain encodings, a single logical character may take up more than one entry in the array. This happens for example with UTF-8, where single codes (UCS code points) can take anywhere from one to four bytes, and single characters can take an arbitrary number of codes. In these cases, the logical length of the string (number of characters) differs from the physical length of the array (number of bytes in use). UTF-32 avoids the first part of the problem. 
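To make the mutable/immutable distinction described under Implementations concrete, here is a minimal Python sketch; Python's str is one of the immutable types named above, and bytearray is used here only as a stand-in for a mutable string type.

```python
s = "immutable"
try:
    s[0] = "I"                      # str does not support item assignment
except TypeError as error:
    print("str is immutable:", error)

t = s.replace("i", "I", 1)          # any "modification" yields a new object
print(s, t, s is t)                 # immutable Immutable False

buf = bytearray(b"mutable")         # a mutable sequence of bytes
buf[0:1] = b"M"                     # changed in place, no new object created
print(buf.decode("ascii"))          # Mutable
```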
==== Null-terminated ==== The length of a string can be stored implicitly by using a special terminating character; often this is the null character (NUL), which has all bits zero, a convention used and perpetuated by the popular C programming language. Hence, this representation is commonly referred to as a C string. This representation of an n-character string takes n + 1 space (1 for the terminator), and is thus an implicit data structure. In terminated strings, the terminating code is not an allowable character in any string. Strings with length field do not have this limitation and can also store arbitrary binary data. An example of a null-terminated string stored in a 10-byte buffer, along with its ASCII (or more modern UTF-8) representation as 8-bit hexadecimal numbers is: The length of the string in the above example, "FRANK", is 5 characters, but it occupies 6 bytes. Characters after the terminator do not form part of the representation; they may be either part of other data or just garbage. (Strings of this form are sometimes called ASCIZ strings, after the original assembly language directive used to declare them.) ==== Byte- and bit-terminated ==== Using a special byte other than null for terminating strings has historically appeared in both hardware and software, though sometimes with a value that was also a printing character. $ was used by many assembler systems, : used by CDC systems (this character had a value of zero), and the ZX80 used " since this was the string delimiter in its BASIC language. Somewhat similar, "data processing" machines like the IBM 1401 used a special word mark bit to delimit strings at the left, where the operation would start at the right. This bit had to be clear in all other parts of the string. This meant that, while the IBM 1401 had a seven-bit word, almost no-one ever thought to use this as a feature, and override the assignment of the seventh bit to (for example) handle ASCII codes. Early microcomputer software relied upon the fact that ASCII codes do not use the high-order bit, and set it to indicate the end of a string. It must be reset to 0 prior to output. ==== Length-prefixed ==== The length of a string can also be stored explicitly, for example by prefixing the string with the length as a byte value. This convention is used in many Pascal dialects; as a consequence, some people call such a string a Pascal string or P-string. Storing the string length as byte limits the maximum string length to 255. To avoid such limitations, improved implementations of P-strings use 16-, 32-, or 64-bit words to store the string length. When the length field covers the address space, strings are limited only by the available memory. If the length is bounded, then it can be encoded in constant space, typically a machine word, thus leading to an implicit data structure, taking n + k space, where k is the number of characters in a word (8 for 8-bit ASCII on a 64-bit machine, 1 for 32-bit UTF-32/UCS-4 on a 32-bit machine, etc.). If the length is not bounded, encoding a length n takes log(n) space (see fixed-length code), so length-prefixed strings are a succinct data structure, encoding a string of length n in log(n) + n space. In the latter case, the length-prefix field itself does not have fixed length, therefore the actual string data needs to be moved when the string grows such that the length field needs to be increased. 
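A minimal Python sketch of the two layouts described above; the buffers below mirror the "FRANK" example, with arbitrary padding bytes standing in for whatever follows the stored string.

```python
def read_c_string(buf: bytes) -> bytes:
    """Null-terminated: the string runs up to (not including) the first NUL byte."""
    end = buf.find(b"\x00")
    return buf[:end] if end >= 0 else buf

def read_p_string(buf: bytes) -> bytes:
    """Length-prefixed (Pascal-style): the first byte holds the length."""
    return buf[1:1 + buf[0]]

c_buf = b"FRANK\x00" + b"kefw"      # 10-byte buffer: 5 characters stored in 6 bytes
p_buf = b"\x05FRANK" + b"kefw"      # 10-byte buffer: length byte plus 5 characters

print(read_c_string(c_buf))          # b'FRANK'
print(read_p_string(p_buf))          # b'FRANK'
print(c_buf.hex(" "))                # 46 52 41 4e 4b 00 6b 65 66 77
```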
Here is a Pascal string stored in a 10-byte buffer, along with its ASCII / UTF-8 representation: ==== Strings as records ==== Many languages, including object-oriented ones, implement strings as records with an internal structure like: However, since the implementation is usually hidden, the string must be accessed and modified through member functions. text is a pointer to a dynamically allocated memory area, which might be expanded as needed. See also string (C++). ==== Other representations ==== Both character termination and length codes limit strings: For example, C character arrays that contain null (NUL) characters cannot be handled directly by C string library functions: Strings using a length code are limited to the maximum value of the length code. Both of these limitations can be overcome by clever programming. It is possible to create data structures and functions that manipulate them that do not have the problems associated with character termination and can in principle overcome length code bounds. It is also possible to optimize the string represented using techniques from run length encoding (replacing repeated characters by the character value and a length) and Hamming encoding. While these representations are common, others are possible. Using ropes makes certain string operations, such as insertions, deletions, and concatenations more efficient. The core data structure in a text editor is the one that manages the string (sequence of characters) that represents the current state of the file being edited. While that state could be stored in a single long consecutive array of characters, a typical text editor instead uses an alternative representation as its sequence data structure—a gap buffer, a linked list of lines, a piece table, or a rope—which makes certain string operations, such as insertions, deletions, and undoing previous edits, more efficient. === Security concerns === The differing memory layout and storage requirements of strings can affect the security of the program accessing the string data. String representations requiring a terminating character are commonly susceptible to buffer overflow problems if the terminating character is not present, caused by a coding error or an attacker deliberately altering the data. String representations adopting a separate length field are also susceptible if the length can be manipulated. In such cases, program code accessing the string data requires bounds checking to ensure that it does not inadvertently access or change data outside of the string memory limits. String data is frequently obtained from user input to a program. As such, it is the responsibility of the program to validate the string to ensure that it represents the expected format. Performing limited or no validation of user input can cause a program to be vulnerable to code injection attacks. == Literal strings == Sometimes, strings need to be embedded inside a text file that is both human-readable and intended for consumption by a machine. This is needed in, for example, source code of programming languages, or in configuration files. In this case, the NUL character does not work well as a terminator since it is normally invisible (non-printable) and is difficult to input via a keyboard. Storing the string length would also be inconvenient as manual computation and tracking of the length is tedious and error-prone. 
Two common representations are: Surrounded by quotation marks (ASCII 0x22 double quote "str" or ASCII 0x27 single quote 'str'), used by most programming languages. To be able to include special characters such as the quotation mark itself, newline characters, or non-printable characters, escape sequences are often available, usually prefixed with the backslash character (ASCII 0x5C). Terminated by a newline sequence, for example in Windows INI files. == Non-text strings == While character strings are a very common use of strings, a string in computer science may refer generically to any sequence of homogeneously typed data. A bit string or byte string, for example, may be used to represent non-textual binary data retrieved from a communications medium. This data may or may not be represented by a string-specific datatype, depending on the needs of the application, the desire of the programmer, and the capabilities of the programming language being used. If the programming language's string implementation is not 8-bit clean, data corruption may ensue. C programmers draw a sharp distinction between a "string", a.k.a. a "string of characters", which by definition is always null terminated, and an "array of characters", which may be stored in the same array but is often not null terminated. Using C string handling functions on such an array of characters often seems to work, but later leads to security problems. == String processing algorithms == There are many algorithms for processing strings, each with various trade-offs. Competing algorithms can be analyzed with respect to run time, storage requirements, and so forth. The name stringology was coined in 1984 by computer scientist Zvi Galil for the theory of algorithms and data structures used for string processing. Some categories of algorithms include: string searching algorithms for finding a given substring or pattern, string manipulation algorithms, sorting algorithms, regular expression algorithms, parsing a string, and sequence mining. Advanced string algorithms often employ complex mechanisms and data structures, among them suffix trees and finite-state machines. == Character string-oriented languages and utilities == Character strings are such a useful datatype that several languages have been designed in order to make string processing applications easy to write. Examples include the following languages: AWK, Icon, MUMPS, Perl, Rexx, Ruby, sed, SNOBOL, Tcl, and TTM. Many Unix utilities perform simple string manipulations and can be used to easily program some powerful string processing algorithms. Files and finite streams may be viewed as strings. Some APIs like Multimedia Control Interface, embedded SQL or printf use strings to hold commands that will be interpreted. Many scripting programming languages, including Perl, Python, Ruby, and Tcl, employ regular expressions to facilitate text operations. Perl is particularly noted for its regular expression use, and many other languages and applications implement Perl compatible regular expressions. Some languages such as Perl and Ruby support string interpolation, which permits arbitrary expressions to be evaluated and included in string literals. == Character string functions == String functions are used to create strings or change the contents of a mutable string. They are also used to query information about a string. The set of functions and their names varies depending on the computer programming language. 
The most basic example of a string function is the string length function – the function that returns the length of a string (not counting any terminator characters or any of the string's internal structural information) and does not modify the string. This function is often named length or len. For example, length("hello world") would return 11. Another common function is concatenation, where a new string is created by appending two strings; often this is done with the + addition operator. Some microprocessors' instruction set architectures contain direct support for string operations, such as block copy (e.g., REPNZ MOVSB in Intel x86). == Formal theory == Let Σ be a finite set of distinct, unambiguous symbols (alternatively called characters), called the alphabet. A string (or word or expression) over Σ is any finite sequence of symbols from Σ. For example, if Σ = {0, 1}, then 01011 is a string over Σ. The length of a string s is the number of symbols in s (the length of the sequence) and can be any non-negative integer; it is often denoted as |s|. The empty string is the unique string over Σ of length 0, and is denoted ε or λ. The set of all strings over Σ of length n is denoted Σn. For example, if Σ = {0, 1}, then Σ2 = {00, 01, 10, 11}. We have Σ0 = {ε} for every alphabet Σ. The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σn, Σ ∗ = ⋃ n ∈ N ∪ { 0 } Σ n {\displaystyle \Sigma ^{*}=\bigcup _{n\in \mathbb {N} \cup \{0\}}\Sigma ^{n}} For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, ...}. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length. A set of strings over Σ (i.e. any subset of Σ*) is called a formal language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros, {ε, 1, 00, 11, 001, 010, 100, 111, 0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111, ...}, is a formal language over Σ. === Concatenation and substrings === Concatenation is an important binary operation on Σ*. For any two strings s and t in Σ*, their concatenation is defined as the sequence of symbols in s followed by the sequence of characters in t, and is denoted st. For example, if Σ = {a, b, ..., z}, s = bear, and t = hug, then st = bearhug and ts = hugbear. String concatenation is an associative, but non-commutative operation. The empty string ε serves as the identity element; for any string s, εs = sε = s. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers (that is, a function L : Σ ∗ ↦ N ∪ { 0 } {\displaystyle L:\Sigma ^{*}\mapsto \mathbb {N} \cup \{0\}} , such that L ( s t ) = L ( s ) + L ( t ) ∀ s , t ∈ Σ ∗ {\displaystyle L(st)=L(s)+L(t)\quad \forall s,t\in \Sigma ^{*}} ). A string s is said to be a substring or factor of t if there exist (possibly empty) strings u and v such that t = usv. The relation "is a substring of" defines a partial order on Σ*, the least element of which is the empty string. === Prefixes and suffixes === A string s is said to be a prefix of t if there exists a string u such that t = su. If u is nonempty, s is said to be a proper prefix of t. Symmetrically, a string s is said to be a suffix of t if there exists a string u such that t = us. If u is nonempty, s is said to be a proper suffix of t. Suffixes and prefixes are substrings of t. 
Both the relations "is a prefix of" and "is a suffix of" are prefix orders. === Reversal === The reverse of a string is a string with the same symbols but in reverse order. For example, if s = abc (where a, b, and c are symbols of the alphabet), then the reverse of s is cba. A string that is the reverse of itself (e.g., s = madam) is called a palindrome, which also includes the empty string and all strings of length 1. === Rotations === A string s = uv is said to be a rotation of t if t = vu. For example, if Σ = {0, 1} the string 0011001 is a rotation of 0100110, where u = 00110 and v = 01. As another example, the string abc has three different rotations, viz. abc itself (with u=abc, v=ε), bca (with u=bc, v=a), and cab (with u=c, v=ab). === Lexicographical ordering === It is often useful to define an ordering on a set of strings. If the alphabet Σ has a total order (cf. alphabetical order) one can define a total order on Σ* called lexicographical order. The lexicographical order is total if the alphabetical order is, but is not well-founded for any nontrivial alphabet, even if the alphabetical order is. For example, if Σ = {0, 1} and 0 < 1, then the lexicographical order on Σ* includes the relationships ε < 0 < 00 < 000 < ... < 0001 < ... < 001 < ... < 01 < 010 < ... < 011 < 0110 < ... < 01111 < ... < 1 < 10 < 100 < ... < 101 < ... < 111 < ... < 1111 < ... < 11111 ... With respect to this ordering, e.g. the infinite set { 1, 01, 001, 0001, 00001, 000001, ... } has no minimal element. See Shortlex for an alternative string ordering that preserves well-foundedness. For the example alphabet, the shortlex order is ε < 0 < 1 < 00 < 01 < 10 < 11 < 000 < 001 < 010 < 011 < 100 < 101 < 110 < 111 < 0000 < 0001 < 0010 < 0011 < ... < 1111 < 00000 < 00001 ... === String operations === A number of additional operations on strings commonly occur in the formal theory. These are given in the article on string operations. === Topology === Strings admit the following interpretation as nodes on a graph, where k is the number of symbols in Σ: Fixed-length strings of length n can be viewed as the integer locations in an n-dimensional hypercube with sides of length k-1. Variable-length strings (of finite length) can be viewed as nodes on a perfect k-ary tree. Infinite strings (otherwise not considered here) can be viewed as infinite paths on a k-node complete graph. The natural topology on the set of fixed-length strings or variable-length strings is the discrete topology, but the natural topology on the set of infinite strings is the limit topology, viewing the set of infinite strings as the inverse limit of the sets of finite strings. This is the construction used for the p-adic numbers and some constructions of the Cantor set, and yields the same topology. Isomorphisms between string representations of topologies can be found by normalizing according to the lexicographically minimal string rotation. 
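The two orderings in this subsection can be generated directly; the following Python sketch (standard library only) lists the first few binary strings in shortlex order and illustrates why the plain lexicographical order has no least element on the set {1, 01, 001, ...}.

```python
from itertools import product

def strings_up_to(alphabet, max_len):
    """All strings over the alphabet of length 0..max_len."""
    for n in range(max_len + 1):
        for symbols in product(alphabet, repeat=n):
            yield "".join(symbols)

words = list(strings_up_to("01", 4))

# Shortlex: order by length first, then alphabetically.
shortlex = sorted(words, key=lambda w: (len(w), w))
print(" < ".join(w or "ε" for w in shortlex[:16]))
# ε < 0 < 1 < 00 < 01 < 10 < 11 < 000 < 001 < 010 < 011 < 100 < 101 < 110 < 111 < 0000

# Plain lexicographic order: each longer run of leading zeros is smaller still, so
# the minimum of any finite sample keeps dropping and the infinite set has no least element.
sample = ["1", "01", "001", "0001", "00001", "000001"]
print(min(sample))   # 000001
```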
== See also == Binary-safe — a property of string manipulating functions treating their input as raw data stream Bit array — a string of binary digits C string handling — overview of C string handling C++ string handling — overview of C++ string handling Comparison of programming languages (string functions) Connection string — passed to a driver to initiate a connection (e.g., to a database) Empty string — its properties and representation in programming languages Incompressible string — a string that cannot be compressed by any algorithm Rope (data structure) — a data structure for efficiently manipulating long strings String metric — notions of similarity between strings == References ==
Wikipedia/Character_string_(computer_science)
The Hartley function is a measure of uncertainty, introduced by Ralph Hartley in 1928. If a sample from a finite set A uniformly at random is picked, the information revealed after the outcome is known is given by the Hartley function H 0 ( A ) := l o g b | A | , {\displaystyle H_{0}(A):=\mathrm {log} _{b}\vert A\vert ,} where |A| denotes the cardinality of A. If the base of the logarithm is 2, then the unit of uncertainty is the shannon (more commonly known as bit). If it is the natural logarithm, then the unit is the nat. Hartley used a base-ten logarithm, and with this base, the unit of information is called the hartley (aka ban or dit) in his honor. It is also known as the Hartley entropy or max-entropy. == Hartley function, Shannon entropy, and Rényi entropy == The Hartley function coincides with the Shannon entropy (as well as with the Rényi entropies of all orders) in the case of a uniform probability distribution. It is a special case of the Rényi entropy since: H 0 ( X ) = 1 1 − 0 log ⁡ ∑ i = 1 | X | p i 0 = log ⁡ | X | . {\displaystyle H_{0}(X)={\frac {1}{1-0}}\log \sum _{i=1}^{|{\mathcal {X}}|}p_{i}^{0}=\log |{\mathcal {X}}|.} But it can also be viewed as a primitive construction, since, as emphasized by Kolmogorov and Rényi, the Hartley function can be defined without introducing any notions of probability (see Uncertainty and information by George J. Klir, p. 423). == Characterization of the Hartley function == The Hartley function only depends on the number of elements in a set, and hence can be viewed as a function on natural numbers. Rényi showed that the Hartley function in base 2 is the only function mapping natural numbers to real numbers that satisfies H ( m n ) = H ( m ) + H ( n ) {\displaystyle H(mn)=H(m)+H(n)} (additivity) H ( m ) ≤ H ( m + 1 ) {\displaystyle H(m)\leq H(m+1)} (monotonicity) H ( 2 ) = 1 {\displaystyle H(2)=1} (normalization) Condition 1 says that the uncertainty of the Cartesian product of two finite sets A and B is the sum of uncertainties of A and B. Condition 2 says that a larger set has larger uncertainty. == Derivation of the Hartley function == We want to show that the Hartley function, log2(n), is the only function mapping natural numbers to real numbers that satisfies H ( m n ) = H ( m ) + H ( n ) {\displaystyle H(mn)=H(m)+H(n)\,} (additivity) H ( m ) ≤ H ( m + 1 ) {\displaystyle H(m)\leq H(m+1)\,} (monotonicity) H ( 2 ) = 1 {\displaystyle H(2)=1\,} (normalization) Let f be a function on positive integers that satisfies the above three properties. From the additive property, we can show that for any integer n and k, f ( n k ) = k f ( n ) . {\displaystyle f(n^{k})=kf(n).\,} Let a, b, and t be any positive integers. There is a unique integer s determined by a s ≤ b t ≤ a s + 1 . ( 1 ) {\displaystyle a^{s}\leq b^{t}\leq a^{s+1}.\qquad (1)} Therefore, s log 2 ⁡ a ≤ t log 2 ⁡ b ≤ ( s + 1 ) log 2 ⁡ a {\displaystyle s\log _{2}a\leq t\log _{2}b\leq (s+1)\log _{2}a\,} and s t ≤ log 2 ⁡ b log 2 ⁡ a ≤ s + 1 t . {\displaystyle {\frac {s}{t}}\leq {\frac {\log _{2}b}{\log _{2}a}}\leq {\frac {s+1}{t}}.} On the other hand, by monotonicity, f ( a s ) ≤ f ( b t ) ≤ f ( a s + 1 ) . {\displaystyle f(a^{s})\leq f(b^{t})\leq f(a^{s+1}).\,} Using equation (1), one gets s f ( a ) ≤ t f ( b ) ≤ ( s + 1 ) f ( a ) , {\displaystyle sf(a)\leq tf(b)\leq (s+1)f(a),\,} and s t ≤ f ( b ) f ( a ) ≤ s + 1 t . {\displaystyle {\frac {s}{t}}\leq {\frac {f(b)}{f(a)}}\leq {\frac {s+1}{t}}.} Hence, | f ( b ) f ( a ) − log 2 ⁡ ( b ) log 2 ⁡ ( a ) | ≤ 1 t . 
{\displaystyle \left\vert {\frac {f(b)}{f(a)}}-{\frac {\log _{2}(b)}{\log _{2}(a)}}\right\vert \leq {\frac {1}{t}}.} Since t can be arbitrarily large, the difference on the left hand side of the above inequality must be zero, f ( b ) f ( a ) = log 2 ⁡ ( b ) log 2 ⁡ ( a ) . {\displaystyle {\frac {f(b)}{f(a)}}={\frac {\log _{2}(b)}{\log _{2}(a)}}.} So, f ( a ) = μ log 2 ⁡ ( a ) {\displaystyle f(a)=\mu \log _{2}(a)\,} for some constant μ, which must be equal to 1 by the normalization property. == See also == Rényi entropy Min-entropy == References == This article incorporates material from Hartley function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. This article incorporates material from Derivation of Hartley function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
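As a quick numerical illustration of the definition and the characterization above, here is a minimal Python sketch (standard library only; the helper names are ad hoc).

```python
import math

def hartley(n: int, base: float = 2.0) -> float:
    """H0 of a set of n equally likely outcomes (shannons for base 2, hartleys for base 10)."""
    return math.log(n, base)

def shannon_entropy(probs, base: float = 2.0) -> float:
    return -sum(p * math.log(p, base) for p in probs if p > 0)

n = 8
print(hartley(n))                       # ≈ 3.0 shannons (bits)
print(shannon_entropy([1 / n] * n))     # ≈ 3.0, coinciding for the uniform distribution
print(hartley(n, base=10))              # ≈ 0.903 hartleys

# Additivity and normalization from the characterization:
print(math.isclose(hartley(6 * 7), hartley(6) + hartley(7)))   # True
print(hartley(2))                       # 1.0
```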
Wikipedia/Hartley_function
In mathematics, the digamma function is defined as the logarithmic derivative of the gamma function: ψ ( z ) = d d z ln ⁡ Γ ( z ) = Γ ′ ( z ) Γ ( z ) . {\displaystyle \psi (z)={\frac {\mathrm {d} }{\mathrm {d} z}}\ln \Gamma (z)={\frac {\Gamma '(z)}{\Gamma (z)}}.} It is the first of the polygamma functions. This function is strictly increasing and strictly concave on ( 0 , ∞ ) {\displaystyle (0,\infty )} , and it asymptotically behaves as ψ ( z ) ∼ ln ⁡ z − 1 2 z , {\displaystyle \psi (z)\sim \ln {z}-{\frac {1}{2z}},} for complex numbers with large modulus ( | z | → ∞ {\displaystyle |z|\rightarrow \infty } ) in the sector | arg ⁡ z | < π − ε {\displaystyle |\arg z|<\pi -\varepsilon } for any ε > 0 {\displaystyle \varepsilon >0} . The digamma function is often denoted as ψ 0 ( x ) , ψ ( 0 ) ( x ) {\displaystyle \psi _{0}(x),\psi ^{(0)}(x)} or Ϝ (the uppercase form of the archaic Greek consonant digamma meaning double-gamma). Gamma. == Relation to harmonic numbers == The gamma function obeys the equation Γ ( z + 1 ) = z Γ ( z ) . {\displaystyle \Gamma (z+1)=z\Gamma (z).\,} Taking the logarithm on both sides and using the functional equation property of the log-gamma function gives: log ⁡ Γ ( z + 1 ) = log ⁡ ( z ) + log ⁡ Γ ( z ) , {\displaystyle \log \Gamma (z+1)=\log(z)+\log \Gamma (z),} Differentiating both sides with respect to z gives: ψ ( z + 1 ) = ψ ( z ) + 1 z {\displaystyle \psi (z+1)=\psi (z)+{\frac {1}{z}}} Since the harmonic numbers are defined for positive integers n as H n = ∑ k = 1 n 1 k , {\displaystyle H_{n}=\sum _{k=1}^{n}{\frac {1}{k}},} the digamma function is related to them by ψ ( n ) = H n − 1 − γ , {\displaystyle \psi (n)=H_{n-1}-\gamma ,} where H0 = 0, and γ is the Euler–Mascheroni constant. For half-integer arguments the digamma function takes the values ψ ( n + 1 2 ) = − γ − 2 ln ⁡ 2 + ∑ k = 1 n 2 2 k − 1 = − γ − 2 ln ⁡ 2 + 2 H 2 n − H n . {\displaystyle \psi \left(n+{\tfrac {1}{2}}\right)=-\gamma -2\ln 2+\sum _{k=1}^{n}{\frac {2}{2k-1}}=-\gamma -2\ln 2+2H_{2n}-H_{n}.} == Integral representations == If the real part of z is positive then the digamma function has the following integral representation due to Gauss: ψ ( z ) = ∫ 0 ∞ ( e − t t − e − z t 1 − e − t ) d t . {\displaystyle \psi (z)=\int _{0}^{\infty }\left({\frac {e^{-t}}{t}}-{\frac {e^{-zt}}{1-e^{-t}}}\right)\,dt.} Combining this expression with an integral identity for the Euler–Mascheroni constant γ {\displaystyle \gamma } gives: ψ ( z + 1 ) = − γ + ∫ 0 1 ( 1 − t z 1 − t ) d t . {\displaystyle \psi (z+1)=-\gamma +\int _{0}^{1}\left({\frac {1-t^{z}}{1-t}}\right)\,dt.} The integral is Euler's harmonic number H z {\displaystyle H_{z}} , so the previous formula may also be written ψ ( z + 1 ) = ψ ( 1 ) + H z . {\displaystyle \psi (z+1)=\psi (1)+H_{z}.} A consequence is the following generalization of the recurrence relation: ψ ( w + 1 ) − ψ ( z + 1 ) = H w − H z . {\displaystyle \psi (w+1)-\psi (z+1)=H_{w}-H_{z}.} An integral representation due to Dirichlet is: ψ ( z ) = ∫ 0 ∞ ( e − t − 1 ( 1 + t ) z ) d t t . {\displaystyle \psi (z)=\int _{0}^{\infty }\left(e^{-t}-{\frac {1}{(1+t)^{z}}}\right)\,{\frac {dt}{t}}.} Gauss's integral representation can be manipulated to give the start of the asymptotic expansion of ψ {\displaystyle \psi } . ψ ( z ) = log ⁡ z − 1 2 z − ∫ 0 ∞ ( 1 2 − 1 t + 1 e t − 1 ) e − t z d t . 
{\displaystyle \psi (z)=\log z-{\frac {1}{2z}}-\int _{0}^{\infty }\left({\frac {1}{2}}-{\frac {1}{t}}+{\frac {1}{e^{t}-1}}\right)e^{-tz}\,dt.} This formula is also a consequence of Binet's first integral for the gamma function. The integral may be recognized as a Laplace transform. Binet's second integral for the gamma function gives a different formula for ψ {\displaystyle \psi } which also gives the first few terms of the asymptotic expansion: ψ ( z ) = log ⁡ z − 1 2 z − 2 ∫ 0 ∞ t d t ( t 2 + z 2 ) ( e 2 π t − 1 ) . {\displaystyle \psi (z)=\log z-{\frac {1}{2z}}-2\int _{0}^{\infty }{\frac {t\,dt}{(t^{2}+z^{2})(e^{2\pi t}-1)}}.} From the definition of ψ {\displaystyle \psi } and the integral representation of the gamma function, one obtains ψ ( z ) = 1 Γ ( z ) ∫ 0 ∞ t z − 1 ln ⁡ ( t ) e − t d t , {\displaystyle \psi (z)={\frac {1}{\Gamma (z)}}\int _{0}^{\infty }t^{z-1}\ln(t)e^{-t}\,dt,} with ℜ z > 0 {\displaystyle \Re z>0} . == Infinite product representation == The function ψ ( z ) / Γ ( z ) {\displaystyle \psi (z)/\Gamma (z)} is an entire function, and it can be represented by the infinite product ψ ( z ) Γ ( z ) = − e 2 γ z ∏ k = 0 ∞ ( 1 − z x k ) e z x k . {\displaystyle {\frac {\psi (z)}{\Gamma (z)}}=-e^{2\gamma z}\prod _{k=0}^{\infty }\left(1-{\frac {z}{x_{k}}}\right)e^{\frac {z}{x_{k}}}.} Here x k {\displaystyle x_{k}} is the kth zero of ψ {\displaystyle \psi } (see below), and γ {\displaystyle \gamma } is the Euler–Mascheroni constant. Note: This is also equal to − d d z 1 Γ ( z ) {\displaystyle -{\frac {d}{dz}}{\frac {1}{\Gamma (z)}}} due to the definition of the digamma function: Γ ′ ( z ) Γ ( z ) = ψ ( z ) {\displaystyle {\frac {\Gamma '(z)}{\Gamma (z)}}=\psi (z)} . == Series representation == === Series formula === Euler's product formula for the gamma function, combined with the functional equation and an identity for the Euler–Mascheroni constant, yields the following expression for the digamma function, valid in the complex plane outside the negative integers (Abramowitz and Stegun 6.3.16): ψ ( z + 1 ) = − γ + ∑ n = 1 ∞ ( 1 n − 1 n + z ) , z ≠ − 1 , − 2 , − 3 , … , = − γ + ∑ n = 1 ∞ ( z n ( n + z ) ) , z ≠ − 1 , − 2 , − 3 , … . {\displaystyle {\begin{aligned}\psi (z+1)&=-\gamma +\sum _{n=1}^{\infty }\left({\frac {1}{n}}-{\frac {1}{n+z}}\right),\qquad z\neq -1,-2,-3,\ldots ,\\&=-\gamma +\sum _{n=1}^{\infty }\left({\frac {z}{n(n+z)}}\right),\qquad z\neq -1,-2,-3,\ldots .\end{aligned}}} Equivalently, ψ ( z ) = − γ + ∑ n = 0 ∞ ( 1 n + 1 − 1 n + z ) , z ≠ 0 , − 1 , − 2 , … , = − γ + ∑ n = 0 ∞ z − 1 ( n + 1 ) ( n + z ) , z ≠ 0 , − 1 , − 2 , … . {\displaystyle {\begin{aligned}\psi (z)&=-\gamma +\sum _{n=0}^{\infty }\left({\frac {1}{n+1}}-{\frac {1}{n+z}}\right),\qquad z\neq 0,-1,-2,\ldots ,\\&=-\gamma +\sum _{n=0}^{\infty }{\frac {z-1}{(n+1)(n+z)}},\qquad z\neq 0,-1,-2,\ldots .\end{aligned}}} ==== Evaluation of sums of rational functions ==== The above identity can be used to evaluate sums of the form ∑ n = 0 ∞ u n = ∑ n = 0 ∞ p ( n ) q ( n ) , {\displaystyle \sum _{n=0}^{\infty }u_{n}=\sum _{n=0}^{\infty }{\frac {p(n)}{q(n)}},} where p(n) and q(n) are polynomials of n. Performing partial fraction on un in the complex field, in the case when all roots of q(n) are simple roots, u n = p ( n ) q ( n ) = ∑ k = 1 m a k n + b k . {\displaystyle u_{n}={\frac {p(n)}{q(n)}}=\sum _{k=1}^{m}{\frac {a_{k}}{n+b_{k}}}.} For the series to converge, lim n → ∞ n u n = 0 , {\displaystyle \lim _{n\to \infty }nu_{n}=0,} otherwise the series will be greater than the harmonic series and thus diverge. 
Hence ∑ k = 1 m a k = 0 , {\displaystyle \sum _{k=1}^{m}a_{k}=0,} and ∑ n = 0 ∞ u n = ∑ n = 0 ∞ ∑ k = 1 m a k n + b k = ∑ n = 0 ∞ ∑ k = 1 m a k ( 1 n + b k − 1 n + 1 ) = ∑ k = 1 m ( a k ∑ n = 0 ∞ ( 1 n + b k − 1 n + 1 ) ) = − ∑ k = 1 m a k ( ψ ( b k ) + γ ) = − ∑ k = 1 m a k ψ ( b k ) . {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }u_{n}&=\sum _{n=0}^{\infty }\sum _{k=1}^{m}{\frac {a_{k}}{n+b_{k}}}\\&=\sum _{n=0}^{\infty }\sum _{k=1}^{m}a_{k}\left({\frac {1}{n+b_{k}}}-{\frac {1}{n+1}}\right)\\&=\sum _{k=1}^{m}\left(a_{k}\sum _{n=0}^{\infty }\left({\frac {1}{n+b_{k}}}-{\frac {1}{n+1}}\right)\right)\\&=-\sum _{k=1}^{m}a_{k}{\big (}\psi (b_{k})+\gamma {\big )}\\&=-\sum _{k=1}^{m}a_{k}\psi (b_{k}).\end{aligned}}} With the series expansion of higher rank polygamma function a generalized formula can be given as ∑ n = 0 ∞ u n = ∑ n = 0 ∞ ∑ k = 1 m a k ( n + b k ) r k = ∑ k = 1 m ( − 1 ) r k ( r k − 1 ) ! a k ψ ( r k − 1 ) ( b k ) , {\displaystyle \sum _{n=0}^{\infty }u_{n}=\sum _{n=0}^{\infty }\sum _{k=1}^{m}{\frac {a_{k}}{(n+b_{k})^{r_{k}}}}=\sum _{k=1}^{m}{\frac {(-1)^{r_{k}}}{(r_{k}-1)!}}a_{k}\psi ^{(r_{k}-1)}(b_{k}),} provided the series on the left converges. === Taylor series === The digamma has a rational zeta series, given by the Taylor series at z = 1. This is ψ ( z + 1 ) = − γ − ∑ k = 1 ∞ ( − 1 ) k ζ ( k + 1 ) z k , {\displaystyle \psi (z+1)=-\gamma -\sum _{k=1}^{\infty }(-1)^{k}\,\zeta (k+1)\,z^{k},} which converges for |z| < 1. Here, ζ(n) is the Riemann zeta function. This series is easily derived from the corresponding Taylor's series for the Hurwitz zeta function. === Newton series === The Newton series for the digamma, sometimes referred to as Stern series, derived by Moritz Abraham Stern in 1847, reads ψ ( s ) = − γ + ( s − 1 ) − ( s − 1 ) ( s − 2 ) 2 ⋅ 2 ! + ( s − 1 ) ( s − 2 ) ( s − 3 ) 3 ⋅ 3 ! ⋯ , ℜ ( s ) > 0 , = − γ − ∑ k = 1 ∞ ( − 1 ) k k ( s − 1 k ) ⋯ , ℜ ( s ) > 0. {\displaystyle {\begin{aligned}\psi (s)&=-\gamma +(s-1)-{\frac {(s-1)(s-2)}{2\cdot 2!}}+{\frac {(s-1)(s-2)(s-3)}{3\cdot 3!}}\cdots ,\quad \Re (s)>0,\\&=-\gamma -\sum _{k=1}^{\infty }{\frac {(-1)^{k}}{k}}{\binom {s-1}{k}}\cdots ,\quad \Re (s)>0.\end{aligned}}} where (sk) is the binomial coefficient. It may also be generalized to ψ ( s + 1 ) = − γ − 1 m ∑ k = 1 m − 1 m − k s + k − 1 m ∑ k = 1 ∞ ( − 1 ) k k { ( s + m k + 1 ) − ( s k + 1 ) } , ℜ ( s ) > − 1 , {\displaystyle \psi (s+1)=-\gamma -{\frac {1}{m}}\sum _{k=1}^{m-1}{\frac {m-k}{s+k}}-{\frac {1}{m}}\sum _{k=1}^{\infty }{\frac {(-1)^{k}}{k}}\left\{{\binom {s+m}{k+1}}-{\binom {s}{k+1}}\right\},\qquad \Re (s)>-1,} where m = 2, 3, 4, ... === Series with Gregory's coefficients, Cauchy numbers and Bernoulli polynomials of the second kind === There exist various series for the digamma containing rational coefficients only for the rational arguments. In particular, the series with Gregory's coefficients Gn is ψ ( v ) = ln ⁡ v − ∑ n = 1 ∞ | G n | ( n − 1 ) ! ( v ) n , ℜ ( v ) > 0 , {\displaystyle \psi (v)=\ln v-\sum _{n=1}^{\infty }{\frac {{\big |}G_{n}{\big |}(n-1)!}{(v)_{n}}},\qquad \Re (v)>0,} ψ ( v ) = 2 ln ⁡ Γ ( v ) − 2 v ln ⁡ v + 2 v + 2 ln ⁡ v − ln ⁡ 2 π − 2 ∑ n = 1 ∞ | G n ( 2 ) | ( v ) n ( n − 1 ) ! , ℜ ( v ) > 0 , {\displaystyle \psi (v)=2\ln \Gamma (v)-2v\ln v+2v+2\ln v-\ln 2\pi -2\sum _{n=1}^{\infty }{\frac {{\big |}G_{n}(2){\big |}}{(v)_{n}}}\,(n-1)!,\qquad \Re (v)>0,} ψ ( v ) = 3 ln ⁡ Γ ( v ) − 6 ζ ′ ( − 1 , v ) + 3 v 2 ln ⁡ v − 3 2 v 2 − 6 v ln ⁡ ( v ) + 3 v + 3 ln ⁡ v − 3 2 ln ⁡ 2 π + 1 2 − 3 ∑ n = 1 ∞ | G n ( 3 ) | ( v ) n ( n − 1 ) ! 
, ℜ ( v ) > 0 , {\displaystyle \psi (v)=3\ln \Gamma (v)-6\zeta '(-1,v)+3v^{2}\ln {v}-{\frac {3}{2}}v^{2}-6v\ln(v)+3v+3\ln {v}-{\frac {3}{2}}\ln 2\pi +{\frac {1}{2}}-3\sum _{n=1}^{\infty }{\frac {{\big |}G_{n}(3){\big |}}{(v)_{n}}}\,(n-1)!,\qquad \Re (v)>0,} where (v)n is the rising factorial (v)n = v(v+1)(v+2) ... (v+n-1), Gn(k) are the Gregory coefficients of higher order with Gn(1) = Gn, Γ is the gamma function and ζ is the Hurwitz zeta function. Similar series with the Cauchy numbers of the second kind Cn reads ψ ( v ) = ln ⁡ ( v − 1 ) + ∑ n = 1 ∞ C n ( n − 1 ) ! ( v ) n , ℜ ( v ) > 1 , {\displaystyle \psi (v)=\ln(v-1)+\sum _{n=1}^{\infty }{\frac {C_{n}(n-1)!}{(v)_{n}}},\qquad \Re (v)>1,} A series with the Bernoulli polynomials of the second kind has the following form ψ ( v ) = ln ⁡ ( v + a ) + ∑ n = 1 ∞ ( − 1 ) n ψ n ( a ) ( n − 1 ) ! ( v ) n , ℜ ( v ) > − a , {\displaystyle \psi (v)=\ln(v+a)+\sum _{n=1}^{\infty }{\frac {(-1)^{n}\psi _{n}(a)\,(n-1)!}{(v)_{n}}},\qquad \Re (v)>-a,} where ψn(a) are the Bernoulli polynomials of the second kind defined by the generating equation z ( 1 + z ) a ln ⁡ ( 1 + z ) = ∑ n = 0 ∞ z n ψ n ( a ) , | z | < 1 , {\displaystyle {\frac {z(1+z)^{a}}{\ln(1+z)}}=\sum _{n=0}^{\infty }z^{n}\psi _{n}(a)\,,\qquad |z|<1\,,} It may be generalized to ψ ( v ) = 1 r ∑ l = 0 r − 1 ln ⁡ ( v + a + l ) + 1 r ∑ n = 1 ∞ ( − 1 ) n N n , r ( a ) ( n − 1 ) ! ( v ) n , ℜ ( v ) > − a , r = 1 , 2 , 3 , … {\displaystyle \psi (v)={\frac {1}{r}}\sum _{l=0}^{r-1}\ln(v+a+l)+{\frac {1}{r}}\sum _{n=1}^{\infty }{\frac {(-1)^{n}N_{n,r}(a)(n-1)!}{(v)_{n}}},\qquad \Re (v)>-a,\quad r=1,2,3,\ldots } where the polynomials Nn,r(a) are given by the following generating equation ( 1 + z ) a + m − ( 1 + z ) a ln ⁡ ( 1 + z ) = ∑ n = 0 ∞ N n , m ( a ) z n , | z | < 1 , {\displaystyle {\frac {(1+z)^{a+m}-(1+z)^{a}}{\ln(1+z)}}=\sum _{n=0}^{\infty }N_{n,m}(a)z^{n},\qquad |z|<1,} so that Nn,1(a) = ψn(a). Similar expressions with the logarithm of the gamma function involve these formulas ψ ( v ) = 1 v + a − 1 2 { ln ⁡ Γ ( v + a ) + v − 1 2 ln ⁡ 2 π − 1 2 + ∑ n = 1 ∞ ( − 1 ) n ψ n + 1 ( a ) ( v ) n ( n − 1 ) ! } , ℜ ( v ) > − a , {\displaystyle \psi (v)={\frac {1}{v+a-{\tfrac {1}{2}}}}\left\{\ln \Gamma (v+a)+v-{\frac {1}{2}}\ln 2\pi -{\frac {1}{2}}+\sum _{n=1}^{\infty }{\frac {(-1)^{n}\psi _{n+1}(a)}{(v)_{n}}}(n-1)!\right\},\qquad \Re (v)>-a,} and ψ ( v ) = 1 1 2 r + v + a − 1 { ln ⁡ Γ ( v + a ) + v − 1 2 ln ⁡ 2 π − 1 2 + 1 r ∑ n = 0 r − 2 ( r − n − 1 ) ln ⁡ ( v + a + n ) + 1 r ∑ n = 1 ∞ ( − 1 ) n N n + 1 , r ( a ) ( v ) n ( n − 1 ) ! } , {\displaystyle \psi (v)={\frac {1}{{\tfrac {1}{2}}r+v+a-1}}\left\{\ln \Gamma (v+a)+v-{\frac {1}{2}}\ln 2\pi -{\frac {1}{2}}+{\frac {1}{r}}\sum _{n=0}^{r-2}(r-n-1)\ln(v+a+n)+{\frac {1}{r}}\sum _{n=1}^{\infty }{\frac {(-1)^{n}N_{n+1,r}(a)}{(v)_{n}}}(n-1)!\right\},} where ℜ ( v ) > − a {\displaystyle \Re (v)>-a} and r = 2 , 3 , 4 , … {\displaystyle r=2,3,4,\ldots } . == Reflection formula == The digamma and polygamma functions satisfy reflection formulas similar to that of the gamma function: ψ ( 1 − x ) − ψ ( x ) = π cot ⁡ π x {\displaystyle \psi (1-x)-\psi (x)=\pi \cot \pi x} . ψ ′ ( − x ) + ψ ′ ( x ) = π 2 sin 2 ⁡ ( π x ) + 1 x 2 {\displaystyle \psi '(-x)+\psi '(x)={\frac {\pi ^{2}}{\sin ^{2}(\pi x)}}+{\frac {1}{x^{2}}}} . == Recurrence formula and characterization == The digamma function satisfies the recurrence relation ψ ( x + 1 ) = ψ ( x ) + 1 x . 
{\displaystyle \psi (x+1)=\psi (x)+{\frac {1}{x}}.} Thus, it can be said to "telescope" ⁠1/x⁠, for one has Δ [ ψ ] ( x ) = 1 x {\displaystyle \Delta [\psi ](x)={\frac {1}{x}}} where Δ is the forward difference operator. This satisfies the recurrence relation of a partial sum of the harmonic series, thus implying the formula ψ ( n ) = H n − 1 − γ {\displaystyle \psi (n)=H_{n-1}-\gamma } where γ is the Euler–Mascheroni constant. Actually, ψ is the only solution of the functional equation F ( x + 1 ) = F ( x ) + 1 x {\displaystyle F(x+1)=F(x)+{\frac {1}{x}}} that is monotonic on R+ and satisfies F(1) = −γ. This fact follows immediately from the uniqueness of the Γ function given its recurrence equation and convexity restriction. This implies the useful difference equation: ψ ( x + N ) − ψ ( x ) = ∑ k = 0 N − 1 1 x + k {\displaystyle \psi (x+N)-\psi (x)=\sum _{k=0}^{N-1}{\frac {1}{x+k}}} == Some finite sums involving the digamma function == There are numerous finite summation formulas for the digamma function. Basic summation formulas, such as ∑ r = 1 m ψ ( r m ) = − m ( γ + ln ⁡ m ) , {\displaystyle \sum _{r=1}^{m}\psi \left({\frac {r}{m}}\right)=-m(\gamma +\ln m),} ∑ r = 1 m ψ ( r m ) ⋅ exp ⁡ 2 π r k i m = m ln ⁡ ( 1 − exp ⁡ 2 π k i m ) , k ∈ Z , m ∈ N , k ≠ m {\displaystyle \sum _{r=1}^{m}\psi \left({\frac {r}{m}}\right)\cdot \exp {\dfrac {2\pi rki}{m}}=m\ln \left(1-\exp {\frac {2\pi ki}{m}}\right),\qquad k\in \mathbb {Z} ,\quad m\in \mathbb {N} ,\ k\neq m} ∑ r = 1 m − 1 ψ ( r m ) ⋅ cos ⁡ 2 π r k m = m ln ⁡ ( 2 sin ⁡ k π m ) + γ , k = 1 , 2 , … , m − 1 {\displaystyle \sum _{r=1}^{m-1}\psi \left({\frac {r}{m}}\right)\cdot \cos {\dfrac {2\pi rk}{m}}=m\ln \left(2\sin {\frac {k\pi }{m}}\right)+\gamma ,\qquad k=1,2,\ldots ,m-1} ∑ r = 1 m − 1 ψ ( r m ) ⋅ sin ⁡ 2 π r k m = π 2 ( 2 k − m ) , k = 1 , 2 , … , m − 1 {\displaystyle \sum _{r=1}^{m-1}\psi \left({\frac {r}{m}}\right)\cdot \sin {\frac {2\pi rk}{m}}={\frac {\pi }{2}}(2k-m),\qquad k=1,2,\ldots ,m-1} are due to Gauss. 
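Several of the identities above can be checked numerically. The sketch below is only a crude illustration: it approximates ψ by a central difference of the standard library's math.lgamma rather than by any dedicated routine.

```python
import math

def psi(x: float, h: float = 1e-6) -> float:
    """Rough numerical digamma: central difference of log-gamma."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

gamma = 0.5772156649015329           # Euler–Mascheroni constant

# Recurrence psi(x + 1) = psi(x) + 1/x
x = 3.7
print(psi(x + 1), psi(x) + 1 / x)

# psi(n) = H_{n-1} - gamma for positive integers n
n = 10
harmonic = sum(1 / k for k in range(1, n))
print(psi(n), harmonic - gamma)

# One of Gauss's sums: sum_{r=1..m} psi(r/m) = -m (gamma + ln m)
m = 5
print(sum(psi(r / m) for r in range(1, m + 1)), -m * (gamma + math.log(m)))
```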
More complicated formulas, such as ∑ r = 0 m − 1 ψ ( 2 r + 1 2 m ) ⋅ cos ⁡ ( 2 r + 1 ) k π m = m ln ⁡ ( tan ⁡ π k 2 m ) , k = 1 , 2 , … , m − 1 {\displaystyle \sum _{r=0}^{m-1}\psi \left({\frac {2r+1}{2m}}\right)\cdot \cos {\frac {(2r+1)k\pi }{m}}=m\ln \left(\tan {\frac {\pi k}{2m}}\right),\qquad k=1,2,\ldots ,m-1} ∑ r = 0 m − 1 ψ ( 2 r + 1 2 m ) ⋅ sin ⁡ ( 2 r + 1 ) k π m = − π m 2 , k = 1 , 2 , … , m − 1 {\displaystyle \sum _{r=0}^{m-1}\psi \left({\frac {2r+1}{2m}}\right)\cdot \sin {\dfrac {(2r+1)k\pi }{m}}=-{\frac {\pi m}{2}},\qquad k=1,2,\ldots ,m-1} ∑ r = 1 m − 1 ψ ( r m ) ⋅ cot ⁡ π r m = − π ( m − 1 ) ( m − 2 ) 6 {\displaystyle \sum _{r=1}^{m-1}\psi \left({\frac {r}{m}}\right)\cdot \cot {\frac {\pi r}{m}}=-{\frac {\pi (m-1)(m-2)}{6}}} ∑ r = 1 m − 1 ψ ( r m ) ⋅ r m = − γ 2 ( m − 1 ) − m 2 ln ⁡ m − π 2 ∑ r = 1 m − 1 r m ⋅ cot ⁡ π r m {\displaystyle \sum _{r=1}^{m-1}\psi \left({\frac {r}{m}}\right)\cdot {\frac {r}{m}}=-{\frac {\gamma }{2}}(m-1)-{\frac {m}{2}}\ln m-{\frac {\pi }{2}}\sum _{r=1}^{m-1}{\frac {r}{m}}\cdot \cot {\frac {\pi r}{m}}} ∑ r = 1 m − 1 ψ ( r m ) ⋅ cos ⁡ ( 2 ℓ + 1 ) π r m = − π m ∑ r = 1 m − 1 r ⋅ sin ⁡ 2 π r m cos ⁡ 2 π r m − cos ⁡ ( 2 ℓ + 1 ) π m , ℓ ∈ Z {\displaystyle \sum _{r=1}^{m-1}\psi \left({\frac {r}{m}}\right)\cdot \cos {\dfrac {(2\ell +1)\pi r}{m}}=-{\frac {\pi }{m}}\sum _{r=1}^{m-1}{\frac {r\cdot \sin {\dfrac {2\pi r}{m}}}{\cos {\dfrac {2\pi r}{m}}-\cos {\dfrac {(2\ell +1)\pi }{m}}}},\qquad \ell \in \mathbb {Z} } ∑ r = 1 m − 1 ψ ( r m ) ⋅ sin ⁡ ( 2 ℓ + 1 ) π r m = − ( γ + ln ⁡ 2 m ) cot ⁡ ( 2 ℓ + 1 ) π 2 m + sin ⁡ ( 2 ℓ + 1 ) π m ∑ r = 1 m − 1 ln ⁡ sin ⁡ π r m cos ⁡ 2 π r m − cos ⁡ ( 2 ℓ + 1 ) π m , ℓ ∈ Z {\displaystyle \sum _{r=1}^{m-1}\psi \left({\frac {r}{m}}\right)\cdot \sin {\dfrac {(2\ell +1)\pi r}{m}}=-(\gamma +\ln 2m)\cot {\frac {(2\ell +1)\pi }{2m}}+\sin {\dfrac {(2\ell +1)\pi }{m}}\sum _{r=1}^{m-1}{\frac {\ln \sin {\dfrac {\pi r}{m}}}{\cos {\dfrac {2\pi r}{m}}-\cos {\dfrac {(2\ell +1)\pi }{m}}}},\qquad \ell \in \mathbb {Z} } ∑ r = 1 m − 1 ψ 2 ( r m ) = ( m − 1 ) γ 2 + m ( 2 γ + ln ⁡ 4 m ) ln ⁡ m − m ( m − 1 ) ln 2 ⁡ 2 + π 2 ( m 2 − 3 m + 2 ) 12 + m ∑ ℓ = 1 m − 1 ln 2 ⁡ sin ⁡ π ℓ m {\displaystyle \sum _{r=1}^{m-1}\psi ^{2}\left({\frac {r}{m}}\right)=(m-1)\gamma ^{2}+m(2\gamma +\ln 4m)\ln {m}-m(m-1)\ln ^{2}2+{\frac {\pi ^{2}(m^{2}-3m+2)}{12}}+m\sum _{\ell =1}^{m-1}\ln ^{2}\sin {\frac {\pi \ell }{m}}} are due to works of certain modern authors (see e.g. Appendix B in Blagouchine (2014)). We also have 1 + 1 2 + 1 3 + . . . + 1 k − 1 − γ = 1 k ∑ n = 0 k − 1 ψ ( 1 + n k ) , k = 2 , 3 , . . . {\displaystyle 1+{\frac {1}{2}}+{\frac {1}{3}}+...+{\frac {1}{k-1}}-\gamma ={\frac {1}{k}}\sum _{n=0}^{k-1}\psi \left(1+{\frac {n}{k}}\right),k=2,3,...} == Gauss's digamma theorem == For positive integers r and m (r < m), the digamma function may be expressed in terms of Euler's constant and a finite number of elementary functions ψ ( r m ) = − γ − ln ⁡ ( 2 m ) − π 2 cot ⁡ ( r π m ) + 2 ∑ n = 1 ⌊ m − 1 2 ⌋ cos ⁡ ( 2 π n r m ) ln ⁡ sin ⁡ ( π n m ) {\displaystyle \psi \left({\frac {r}{m}}\right)=-\gamma -\ln(2m)-{\frac {\pi }{2}}\cot \left({\frac {r\pi }{m}}\right)+2\sum _{n=1}^{\left\lfloor {\frac {m-1}{2}}\right\rfloor }\cos \left({\frac {2\pi nr}{m}}\right)\ln \sin \left({\frac {\pi n}{m}}\right)} which holds, because of its recurrence equation, for all rational arguments. == Multiplication theorem == The multiplication theorem of the Γ {\displaystyle \Gamma } -function is equivalent to ψ ( n z ) = 1 n ∑ k = 0 n − 1 ψ ( z + k n ) + ln ⁡ n . 
{\displaystyle \psi (nz)={\frac {1}{n}}\sum _{k=0}^{n-1}\psi \left(z+{\frac {k}{n}}\right)+\ln n.} == Asymptotic expansion == The digamma function has the asymptotic expansion ψ ( z ) ∼ ln ⁡ z + ∑ n = 1 ∞ ζ ( 1 − n ) z n = ln ⁡ z − ∑ n = 1 ∞ B n n z n , {\displaystyle \psi (z)\sim \ln z+\sum _{n=1}^{\infty }{\frac {\zeta (1-n)}{z^{n}}}=\ln z-\sum _{n=1}^{\infty }{\frac {B_{n}}{nz^{n}}},} where Bk is the kth Bernoulli number and ζ is the Riemann zeta function. The first few terms of this expansion are: ψ ( z ) ∼ ln ⁡ z − 1 2 z − 1 12 z 2 + 1 120 z 4 − 1 252 z 6 + 1 240 z 8 − 1 132 z 10 + 691 32760 z 12 − 1 12 z 14 + ⋯ . {\displaystyle \psi (z)\sim \ln z-{\frac {1}{2z}}-{\frac {1}{12z^{2}}}+{\frac {1}{120z^{4}}}-{\frac {1}{252z^{6}}}+{\frac {1}{240z^{8}}}-{\frac {1}{132z^{10}}}+{\frac {691}{32760z^{12}}}-{\frac {1}{12z^{14}}}+\cdots .} Although the infinite sum does not converge for any z, any finite partial sum becomes increasingly accurate as z increases. The expansion can be found by applying the Euler–Maclaurin formula to the sum ∑ n = 1 ∞ ( 1 n − 1 z + n ) {\displaystyle \sum _{n=1}^{\infty }\left({\frac {1}{n}}-{\frac {1}{z+n}}\right)} The expansion can also be derived from the integral representation coming from Binet's second integral formula for the gamma function. Expanding t / ( t 2 + z 2 ) {\displaystyle t/(t^{2}+z^{2})} as a geometric series and substituting an integral representation of the Bernoulli numbers leads to the same asymptotic series as above. Furthermore, expanding only finitely many terms of the series gives a formula with an explicit error term: ψ ( z ) = ln ⁡ z − 1 2 z − ∑ n = 1 N B 2 n 2 n z 2 n + ( − 1 ) N + 1 2 z 2 N ∫ 0 ∞ t 2 N + 1 d t ( t 2 + z 2 ) ( e 2 π t − 1 ) . {\displaystyle \psi (z)=\ln z-{\frac {1}{2z}}-\sum _{n=1}^{N}{\frac {B_{2n}}{2nz^{2n}}}+(-1)^{N+1}{\frac {2}{z^{2N}}}\int _{0}^{\infty }{\frac {t^{2N+1}\,dt}{(t^{2}+z^{2})(e^{2\pi t}-1)}}.} == Inequalities == When x > 0, the function ln ⁡ x − 1 2 x − ψ ( x ) {\displaystyle \ln x-{\frac {1}{2x}}-\psi (x)} is completely monotonic and in particular positive. This is a consequence of Bernstein's theorem on monotone functions applied to the integral representation coming from Binet's first integral for the gamma function. Additionally, by the convexity inequality 1 + t ≤ e t {\displaystyle 1+t\leq e^{t}} , the integrand in this representation is bounded above by e − t z / 2 {\displaystyle e^{-tz}/2} . Consequently 1 x − ln ⁡ x + ψ ( x ) {\displaystyle {\frac {1}{x}}-\ln x+\psi (x)} is also completely monotonic. It follows that, for all x > 0, ln ⁡ x − 1 x ≤ ψ ( x ) ≤ ln ⁡ x − 1 2 x . {\displaystyle \ln x-{\frac {1}{x}}\leq \psi (x)\leq \ln x-{\frac {1}{2x}}.} This recovers a theorem of Horst Alzer. Alzer also proved that, for s ∈ (0, 1), 1 − s x + s < ψ ( x + 1 ) − ψ ( x + s ) , {\displaystyle {\frac {1-s}{x+s}}<\psi (x+1)-\psi (x+s),} Related bounds were obtained by Elezovic, Giordano, and Pecaric, who proved that, for x > 0 , ln ⁡ ( x + 1 2 ) − 1 x < ψ ( x ) < ln ⁡ ( x + e − γ ) − 1 x , {\displaystyle \ln(x+{\tfrac {1}{2}})-{\frac {1}{x}}<\psi (x)<\ln(x+e^{-\gamma })-{\frac {1}{x}},} where γ = − ψ ( 1 ) {\displaystyle \gamma =-\psi (1)} is the Euler–Mascheroni constant. The constants ( 0.5 {\displaystyle 0.5} and e − γ ≈ 0.56 {\displaystyle e^{-\gamma }\approx 0.56} ) appearing in these bounds are the best possible. 
The mean value theorem implies the following analog of Gautschi's inequality: If x > c, where c ≈ 1.461 is the unique positive real root of the digamma function, and if s > 0, then exp ⁡ ( ( 1 − s ) ψ ′ ( x + 1 ) ψ ( x + 1 ) ) ≤ ψ ( x + 1 ) ψ ( x + s ) ≤ exp ⁡ ( ( 1 − s ) ψ ′ ( x + s ) ψ ( x + s ) ) . {\displaystyle \exp \left((1-s){\frac {\psi '(x+1)}{\psi (x+1)}}\right)\leq {\frac {\psi (x+1)}{\psi (x+s)}}\leq \exp \left((1-s){\frac {\psi '(x+s)}{\psi (x+s)}}\right).} Moreover, equality holds if and only if s = 1. Inspired by the harmonic mean value inequality for the classical gamma function, Horzt Alzer and Graham Jameson proved, among other things, a harmonic mean-value inequality for the digamma function: − γ ≤ 2 ψ ( x ) ψ ( 1 x ) ψ ( x ) + ψ ( 1 x ) {\displaystyle -\gamma \leq {\frac {2\psi (x)\psi ({\frac {1}{x}})}{\psi (x)+\psi ({\frac {1}{x}})}}} for x > 0 {\displaystyle x>0} Equality holds if and only if x = 1 {\displaystyle x=1} . == Computation and approximation == The asymptotic expansion gives an easy way to compute ψ(x) when the real part of x is large. To compute ψ(x) for small x, the recurrence relation ψ ( x + 1 ) = 1 x + ψ ( x ) {\displaystyle \psi (x+1)={\frac {1}{x}}+\psi (x)} can be used to shift the value of x to a higher value. Beal suggests using the above recurrence to shift x to a value greater than 6 and then applying the above expansion with terms above x14 cut off, which yields "more than enough precision" (at least 12 digits except near the zeroes). As x goes to infinity, ψ(x) gets arbitrarily close to both ln(x − ⁠1/2⁠) and ln x. Going down from x + 1 to x, ψ decreases by ⁠1/x⁠, ln(x − ⁠1/2⁠) decreases by ln(x + ⁠1/2⁠) / (x − ⁠1/2⁠), which is more than ⁠1/x⁠, and ln x decreases by ln(1 + ⁠1/x⁠), which is less than ⁠1/x⁠. From this we see that for any positive x greater than ⁠1/2⁠, ψ ( x ) ∈ ( ln ⁡ ( x − 1 2 ) , ln ⁡ x ) {\displaystyle \psi (x)\in \left(\ln \left(x-{\tfrac {1}{2}}\right),\ln x\right)} or, for any positive x, exp ⁡ ψ ( x ) ∈ ( x − 1 2 , x ) . {\displaystyle \exp \psi (x)\in \left(x-{\tfrac {1}{2}},x\right).} The exponential exp ψ(x) is approximately x − ⁠1/2⁠ for large x, but gets closer to x at small x, approaching 0 at x = 0. For x < 1, we can calculate limits based on the fact that between 1 and 2, ψ(x) ∈ [−γ, 1 − γ], so ψ ( x ) ∈ ( − 1 x − γ , 1 − 1 x − γ ) , x ∈ ( 0 , 1 ) {\displaystyle \psi (x)\in \left(-{\frac {1}{x}}-\gamma ,1-{\frac {1}{x}}-\gamma \right),\quad x\in (0,1)} or exp ⁡ ψ ( x ) ∈ ( exp ⁡ ( − 1 x − γ ) , e exp ⁡ ( − 1 x − γ ) ) . {\displaystyle \exp \psi (x)\in \left(\exp \left(-{\frac {1}{x}}-\gamma \right),e\exp \left(-{\frac {1}{x}}-\gamma \right)\right).} From the above asymptotic series for ψ, one can derive an asymptotic series for exp(−ψ(x)). The series matches the overall behaviour well, that is, it behaves asymptotically as it should for large arguments, and has a zero of unbounded multiplicity at the origin too. 1 exp ⁡ ψ ( x ) ∼ 1 x + 1 2 ⋅ x 2 + 5 4 ⋅ 3 ! ⋅ x 3 + 3 2 ⋅ 4 ! ⋅ x 4 + 47 48 ⋅ 5 ! ⋅ x 5 − 5 16 ⋅ 6 ! ⋅ x 6 + ⋯ {\displaystyle {\frac {1}{\exp \psi (x)}}\sim {\frac {1}{x}}+{\frac {1}{2\cdot x^{2}}}+{\frac {5}{4\cdot 3!\cdot x^{3}}}+{\frac {3}{2\cdot 4!\cdot x^{4}}}+{\frac {47}{48\cdot 5!\cdot x^{5}}}-{\frac {5}{16\cdot 6!\cdot x^{6}}}+\cdots } This is similar to a Taylor expansion of exp(−ψ(1 / y)) at y = 0, but it does not converge. (The function is not analytic at infinity.) A similar series exists for exp(ψ(x)) which starts with exp ⁡ ψ ( x ) ∼ x − 1 2 . 
{\displaystyle \exp \psi (x)\sim x-{\frac {1}{2}}.} If one calculates the asymptotic series for ψ(x+1/2) it turns out that there are no odd powers of x (there is no x−1 term). This leads to the following asymptotic expansion, which saves computing terms of even order. exp ⁡ ψ ( x + 1 2 ) ∼ x + 1 4 ! ⋅ x − 37 8 ⋅ 6 ! ⋅ x 3 + 10313 72 ⋅ 8 ! ⋅ x 5 − 5509121 384 ⋅ 10 ! ⋅ x 7 + ⋯ {\displaystyle \exp \psi \left(x+{\tfrac {1}{2}}\right)\sim x+{\frac {1}{4!\cdot x}}-{\frac {37}{8\cdot 6!\cdot x^{3}}}+{\frac {10313}{72\cdot 8!\cdot x^{5}}}-{\frac {5509121}{384\cdot 10!\cdot x^{7}}}+\cdots } Similar in spirit to the Lanczos approximation of the Γ {\displaystyle \Gamma } -function is Spouge's approximation. Another alternative is to use the recurrence relation or the multiplication formula to shift the argument of ψ ( x ) {\displaystyle \psi (x)} into the range 1 ≤ x ≤ 3 {\displaystyle 1\leq x\leq 3} and to evaluate the Chebyshev series there. == Special values == The digamma function has values in closed form for rational numbers, as a result of Gauss's digamma theorem. Some are listed below: ψ ( 1 ) = − γ ψ ( 1 2 ) = − 2 ln ⁡ 2 − γ ψ ( 1 3 ) = − π 2 3 − 3 ln ⁡ 3 2 − γ ψ ( 1 4 ) = − π 2 − 3 ln ⁡ 2 − γ ψ ( 1 6 ) = − π 3 2 − 2 ln ⁡ 2 − 3 ln ⁡ 3 2 − γ ψ ( 1 8 ) = − π 2 − 4 ln ⁡ 2 − π + ln ⁡ ( 2 + 1 ) − ln ⁡ ( 2 − 1 ) 2 − γ . {\displaystyle {\begin{aligned}\psi (1)&=-\gamma \\\psi \left({\tfrac {1}{2}}\right)&=-2\ln {2}-\gamma \\\psi \left({\tfrac {1}{3}}\right)&=-{\frac {\pi }{2{\sqrt {3}}}}-{\frac {3\ln {3}}{2}}-\gamma \\\psi \left({\tfrac {1}{4}}\right)&=-{\frac {\pi }{2}}-3\ln {2}-\gamma \\\psi \left({\tfrac {1}{6}}\right)&=-{\frac {\pi {\sqrt {3}}}{2}}-2\ln {2}-{\frac {3\ln {3}}{2}}-\gamma \\\psi \left({\tfrac {1}{8}}\right)&=-{\frac {\pi }{2}}-4\ln {2}-{\frac {\pi +\ln \left({\sqrt {2}}+1\right)-\ln \left({\sqrt {2}}-1\right)}{\sqrt {2}}}-\gamma .\end{aligned}}} Moreover, by taking the logarithmic derivative of | Γ ( b i ) | 2 {\displaystyle |\Gamma (bi)|^{2}} or | Γ ( 1 2 + b i ) | 2 {\displaystyle |\Gamma ({\tfrac {1}{2}}+bi)|^{2}} where b {\displaystyle b} is real-valued, it can easily be deduced that Im ⁡ ψ ( b i ) = 1 2 b + π 2 coth ⁡ ( π b ) , {\displaystyle \operatorname {Im} \psi (bi)={\frac {1}{2b}}+{\frac {\pi }{2}}\coth(\pi b),} Im ⁡ ψ ( 1 2 + b i ) = π 2 tanh ⁡ ( π b ) . {\displaystyle \operatorname {Im} \psi ({\tfrac {1}{2}}+bi)={\frac {\pi }{2}}\tanh(\pi b).} Apart from Gauss's digamma theorem, no such closed formula is known for the real part in general. We have, for example, at the imaginary unit the numerical approximation Re ⁡ ψ ( i ) = − γ − ∑ n = 0 ∞ n − 1 n 3 + n 2 + n + 1 ≈ 0.09465. {\displaystyle \operatorname {Re} \psi (i)=-\gamma -\sum _{n=0}^{\infty }{\frac {n-1}{n^{3}+n^{2}+n+1}}\approx 0.09465.} == Roots of the digamma function == The roots of the digamma function are the saddle points of the complex-valued gamma function. Thus they lie all on the real axis. The only one on the positive real axis is the unique minimum of the real-valued gamma function on R+ at x0 = 1.46163214496836234126.... All others occur single between the poles on the negative axis: x1 = −0.50408300826445540925... x2 = −1.57349847316239045877... x3 = −2.61072086844414465000... x4 = −3.63529336643690109783... ⋮ {\displaystyle \vdots } Already in 1881, Charles Hermite observed that x n = − n + 1 ln ⁡ n + O ( 1 ( ln ⁡ n ) 2 ) {\displaystyle x_{n}=-n+{\frac {1}{\ln n}}+O\left({\frac {1}{(\ln n)^{2}}}\right)} holds asymptotically. 
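Hermite's estimate can be compared directly with the tabulated roots. The sketch below, which assumes SciPy is available for its digamma routine and brentq root finder, brackets each root between consecutive poles and contrasts it with −n + 1/ln n; for the small n listed above the estimate is still quite rough, which is what motivates the refined formulas given next.

```python
# Compare the tabulated roots x_1..x_4 of the digamma function with
# Hermite's asymptotic estimate x_n ~ -n + 1/ln(n).  Each exact root is
# bracketed between consecutive poles -n and -(n-1) and found with a
# standard bracketing root finder (requires SciPy).
from math import log
from scipy.special import digamma
from scipy.optimize import brentq

tabulated = {1: -0.50408300826445540925,
             2: -1.57349847316239045877,
             3: -2.61072086844414465000,
             4: -3.63529336643690109783}

for n in (2, 3, 4):   # Hermite's formula is undefined at n = 1 (ln 1 = 0)
    # psi tends to -infinity just right of the pole at -n and to +infinity
    # just left of the pole at -(n-1), so the root is bracketed in between.
    root = brentq(digamma, -n + 1e-6, -n + 1 - 1e-6)
    hermite = -n + 1.0 / log(n)
    print(f"n={n}: root={root:+.6f}  tabulated={tabulated[n]:+.6f}  "
          f"Hermite={hermite:+.6f}")
```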
A better approximation of the location of the roots is given by x n ≈ − n + 1 π arctan ⁡ ( π ln ⁡ n ) n ≥ 2 {\displaystyle x_{n}\approx -n+{\frac {1}{\pi }}\arctan \left({\frac {\pi }{\ln n}}\right)\qquad n\geq 2} and using a further term it becomes still better x n ≈ − n + 1 π arctan ⁡ ( π ln ⁡ n + 1 8 n ) n ≥ 1 {\displaystyle x_{n}\approx -n+{\frac {1}{\pi }}\arctan \left({\frac {\pi }{\ln n+{\frac {1}{8n}}}}\right)\qquad n\geq 1} which both spring off the reflection formula via 0 = ψ ( 1 − x n ) = ψ ( x n ) + π tan ⁡ π x n {\displaystyle 0=\psi (1-x_{n})=\psi (x_{n})+{\frac {\pi }{\tan \pi x_{n}}}} and substituting ψ(xn) by its not convergent asymptotic expansion. The correct second term of this expansion is ⁠1/2n⁠, where the given one works well to approximate roots with small n. Another improvement of Hermite's formula can be given: x n = − n + 1 log ⁡ n − 1 2 n ( log ⁡ n ) 2 + O ( 1 n 2 ( log ⁡ n ) 2 ) . {\displaystyle x_{n}=-n+{\frac {1}{\log n}}-{\frac {1}{2n(\log n)^{2}}}+O\left({\frac {1}{n^{2}(\log n)^{2}}}\right).} Regarding the zeros, the following infinite sum identities were recently proved by István Mező and Michael Hoffman ∑ n = 0 ∞ 1 x n 2 = γ 2 + π 2 2 , ∑ n = 0 ∞ 1 x n 3 = − 4 ζ ( 3 ) − γ 3 − γ π 2 2 , ∑ n = 0 ∞ 1 x n 4 = γ 4 + π 4 9 + 2 3 γ 2 π 2 + 4 γ ζ ( 3 ) . {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }{\frac {1}{x_{n}^{2}}}&=\gamma ^{2}+{\frac {\pi ^{2}}{2}},\\\sum _{n=0}^{\infty }{\frac {1}{x_{n}^{3}}}&=-4\zeta (3)-\gamma ^{3}-{\frac {\gamma \pi ^{2}}{2}},\\\sum _{n=0}^{\infty }{\frac {1}{x_{n}^{4}}}&=\gamma ^{4}+{\frac {\pi ^{4}}{9}}+{\frac {2}{3}}\gamma ^{2}\pi ^{2}+4\gamma \zeta (3).\end{aligned}}} In general, the function Z ( k ) = ∑ n = 0 ∞ 1 x n k {\displaystyle Z(k)=\sum _{n=0}^{\infty }{\frac {1}{x_{n}^{k}}}} can be determined and it is studied in detail by the cited authors. The following results ∑ n = 0 ∞ 1 x n 2 + x n = − 2 , ∑ n = 0 ∞ 1 x n 2 − x n = γ + π 2 6 γ {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }{\frac {1}{x_{n}^{2}+x_{n}}}&=-2,\\\sum _{n=0}^{\infty }{\frac {1}{x_{n}^{2}-x_{n}}}&=\gamma +{\frac {\pi ^{2}}{6\gamma }}\end{aligned}}} also hold true. == Regularization == The digamma function appears in the regularization of divergent integrals ∫ 0 ∞ d x x + a , {\displaystyle \int _{0}^{\infty }{\frac {dx}{x+a}},} this integral can be approximated by a divergent general Harmonic series, but the following value can be attached to the series ∑ n = 0 ∞ 1 n + a = − ψ ( a ) . {\displaystyle \sum _{n=0}^{\infty }{\frac {1}{n+a}}=-\psi (a).} == In applied mathematics == Many notable probability distributions use the gamma function in the definition of their probability density or mass functions. Then in statistics when doing maximum likelihood estimation on models involving such distributions, the digamma function naturally appears when the derivative of the log-likelihood is taken for finding the maxima. == See also == Polygamma function Trigamma function Chebyshev expansions of the digamma function in Wimp, Jet (1961). "Polynomial approximations to integral transforms". Math. Comp. 15 (74): 174–178. doi:10.1090/S0025-5718-61-99221-3. == References == == External links == OEIS sequence A020759 (Decimal expansion of (-1)*Gamma'(1/2)/Gamma(1/2) where Gamma(x) denotes the Gamma function)—psi(1/2) OEIS: A047787 psi(1/3), OEIS: A200064 psi(2/3), OEIS: A020777 psi(1/4), OEIS: A200134 psi(3/4), OEIS: A200135 to OEIS: A200138 psi(1/5) to psi(4/5).
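As an illustration of the applied-mathematics remark above, here is a hedged sketch of one way ψ enters maximum-likelihood estimation. For a gamma distribution with shape k and scale θ (an example chosen here for concreteness, not one singled out by the text above), setting the derivative of the log-likelihood to zero gives θ̂ = x̄/k̂ together with the condition ln k̂ − ψ(k̂) = ln x̄ − mean(ln x); the code solves that condition by bisection using SciPy's digamma, on synthetic data generated only for the demonstration.

```python
# Fitting the shape k of a gamma distribution by maximum likelihood.
# The first-order condition is  ln(k) - psi(k) = ln(mean(x)) - mean(ln(x)),
# and ln(k) - psi(k) is strictly decreasing in k, so bisection suffices.
import math
import random
from scipy.special import digamma

def fit_gamma_shape(data, lo=1e-3, hi=1e3, iters=200):
    s = math.log(sum(data) / len(data)) - sum(map(math.log, data)) / len(data)
    f = lambda k: math.log(k) - digamma(k) - s   # decreasing in k, root = MLE
    for _ in range(iters):
        mid = math.sqrt(lo * hi)                 # bisection in log scale
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

random.seed(0)
true_shape, true_scale = 3.0, 2.0
data = [random.gammavariate(true_shape, true_scale) for _ in range(50_000)]
k_hat = fit_gamma_shape(data)
theta_hat = (sum(data) / len(data)) / k_hat
print(k_hat, theta_hat)   # close to (3.0, 2.0)
```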
Wikipedia/Digamma_function
The Adaptive Multi-Rate (AMR, AMR-NB or GSM-AMR) audio codec is an audio compression format optimized for speech coding. AMR is a multi-rate narrowband speech codec that encodes narrowband (200–3400 Hz) signals at variable bit rates ranging from 4.75 to 12.2 kbit/s with toll quality speech starting at 7.4 kbit/s. AMR was adopted as the standard speech codec by 3GPP in October 1999 and is now widely used in GSM and UMTS. It uses link adaptation to select from one of eight different bit rates based on link conditions. AMR is also a file format for storing spoken audio using the AMR codec. Many modern mobile telephone handsets can store short audio recordings in the AMR format, and both free and proprietary programs exist (see Software support) to convert between this and other formats, although AMR is a speech format and is unlikely to give ideal results for other audio. The common filename extension is .amr. There also exists another storage format for AMR that is suitable for applications with more advanced demands on the storage format, like random access or synchronization with video. This format is the 3GPP-specified 3GP container format based on ISO base media file format. == Usage == The frames contain 160 samples and are 20 milliseconds long. AMR uses various techniques, such as ACELP, DTX, VAD and CNG. The usage of AMR requires optimized link adaptation that selects the best codec mode to meet the local radio channel and capacity requirements. If the radio conditions are bad, source coding is reduced and channel coding is increased. This improves the quality and robustness of the network connection while sacrificing some voice clarity. In the particular case of AMR this improvement is somewhere around S/N = 4–6 dB for usable communication. The new intelligent system allows the network operator to prioritize capacity or quality per base station. There are a total of 14 modes of the AMR codec, eight are available in a full rate channel (FR) and six on a half rate channel (HR). == Features == Sampling frequency 8 kHz/13-bit (160 samples for 20 ms frames), filtered to 200–3400 Hz. The AMR codec uses eight source codecs with bit-rates of 12.2, 10.2, 7.95, 7.40, 6.70, 5.90, 5.15 and 4.75 kbit/s. Generates frame length of 95, 103, 118, 134, 148, 159, 204, or 244 bits for AMR FR bit rates 4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, or 12.2 kbit/s, respectively. AMR HR frame lengths are different. AMR utilizes discontinuous transmission (DTX), with voice activity detection (VAD) and comfort noise generation (CNG) to reduce bandwidth usage during silence periods Algorithmic delay is 20 ms per frame. For bit-rates of 12.2, there is no "algorithm" look-ahead delay. For other rates, look-ahead delay is 5 ms. Note that there is 5 ms "dummy" look-ahead delay, to allow seamless frame-wise mode switching with the rest of rates. AMR is a hybrid speech coder, and as such transmits both speech parameters and a waveform signal Linear predictive coding (LPC) is used to synthesize the speech from a residual waveform. The LPC parameters are encoded as line spectral pairs (LSP). The residual waveform is coded using algebraic code-excited linear prediction (ACELP). The complexity of the algorithm is rated at 5, using a relative scale where G.711 is 1 and G.729a is 15. 
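The frame sizes listed above determine both the nominal bit rates and the size of stored .amr files. The sketch below first checks that each per-frame bit count divided by the 20 ms frame duration reproduces the advertised rate, and then estimates file sizes assuming the simple single-channel storage layout of RFC 4867 (a 6-byte magic header, then one table-of-contents byte plus an octet-aligned payload per frame); that layout, and the neglect of DTX/comfort-noise frames, are assumptions of this sketch rather than statements from the text above.

```python
import math

# Frame sizes (bits per 20 ms frame) for the eight AMR-NB modes listed
# above, keyed by their nominal bit rates in kbit/s.
FRAME_BITS = {4.75: 95, 5.15: 103, 5.90: 118, 6.70: 134,
              7.40: 148, 7.95: 159, 10.2: 204, 12.2: 244}
FRAME_DURATION_S = 0.020  # 160 samples at 8 kHz

def check_rates() -> None:
    # Each frame carries `bits` bits every 20 ms, so the rate is bits/0.02.
    for kbps, bits in FRAME_BITS.items():
        assert abs(bits / FRAME_DURATION_S - kbps * 1000) < 1e-9

def amr_file_size(kbps: float, seconds: float) -> int:
    """Rough size in bytes of a single-channel .amr file.

    Assumes the simple storage layout of RFC 4867: a 6-byte "#!AMR" magic
    header (including its trailing newline), then one table-of-contents
    byte plus the octet-aligned speech payload per 20 ms frame.  This is
    an estimate only; DTX frames would make real files smaller.
    """
    frames = math.ceil(seconds / FRAME_DURATION_S)
    payload = math.ceil(FRAME_BITS[kbps] / 8)
    return 6 + frames * (1 + payload)

if __name__ == "__main__":
    check_rates()
    # One minute of speech at the highest mode, 12.2 kbit/s:
    print(amr_file_size(12.2, 60))   # 96006 bytes, roughly 96 kB
```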
PSQM testing under ideal conditions yields mean opinion scores of 4.14 for AMR (12.2 kbit/s), compared to 4.45 for G.711 (μ-law) PSQM testing under network stress yields mean opinion scores of 3.79 for AMR (12.2 kbit/s), compared to 4.13 for G.711 (μ-law) == Licensing and patent issues == AMR codecs incorporate several patents of Nokia, Ericsson, NTT and VoiceAge, the last one being the License Administrator for the AMR patent pools. VoiceAge also accepts submission of patents for determination of their possible essentiality to these standards. The initial fee for professional content creation tools and "real-time channel" products is US$6,500. The minimum annual royalty is $10,000, which, in the first year, excludes the initial fee. Per-channel license fees fall from $0.99 to $0.50 with volume, up to a maximum of $2 million annually. In the category of personal computer products, e.g., media players, the AMR decoder is licensed for free. The license fee for a sold encoder falls from $0.40 to $0.30 with volume, up to a maximum of $300,000 annually. The minimum annual royalty is not applied to licensed products that fall under the category of personal computer products and use only the free decoder. More information: VoiceAge licensing information, including pricing to license the AMR codecs 3GPP legal issues The 3G Patent Platform and its licensing policy AMR Codecs as Shared Libraries — legal notices for usage of amrnb and amrwb libraries based on the reference implementation == Software support == 3GPP TS 26.073 – AMR speech Codec (C source code) – reference implementation Audacity (beta version 1.3) via the FFmpeg integration libraries (both input and output format) FFmpeg with OpenCORE AMR libraries Android Used for voice recorder. AMR Codecs as Shared Libraries – amrnb and amrwb libraries development site. These libraries are based on the reference implementation and were created to prevent embedding of possibly patented source code into many open source projects. Open source software to convert the .amr format: RetroCode, Amr2Wav, both are in an early developmental stage AMR Player is freeware to play AMR audio files, and can convert AMR from/to MP3/WAV audio format. Nokia Multimedia Converter 2.0 can convert (create) samples, one can use Nokia's conversion tool to create both .amr and .awb files. It works in Windows 7 as well if the setup is run in XP compatibility mode. 
MPlayer (SMPlayer, KMPlayer) Parole Media Player 0.8.1 (in Ubuntu 16.04) QuickTime Player and multimedia framework RealPlayer version 11 and later VLC media player version 1.1.0 and later (input format only, not output format) ffdshow Apple iPhone (can play back AMR files) iOS & macOS (iMessage) BlackBerry smartphones (used for voice recorder file format, while BlackBerry 10 cannot play AMR format) K-Lite Codec Pack Media Player Classic Home Cinema, around 1.7.1 foobar2000 with the component foo_input_amr == See also == Adaptive Multi-Rate Wideband (AMR-WB) Extended Adaptive Multi-Rate – Wideband (AMR-WB+) Half Rate Full Rate Enhanced Full Rate (EFR) Sampling rate IS-641 3GP Comparison of audio coding formats RTP audio video profile == References == == External links == 3GPP TS 26.090 – Mandatory Speech Codec speech processing functions; Adaptive Multi-Rate (AMR) speech codec; Transcoding functions 3GPP TS 26.071 – Mandatory Speech Codec speech processing functions; AMR Speech Codec; General Description 3GPP codecs specifications; 3G and beyond / GSM, 26 series RFC 4867 – RTP Payload Format and File Storage Format for the Adaptive Multi-Rate (AMR) and Adaptive Multi-Rate Wideband (AMR-WB) Audio Codecs RFC 4281 – The Codecs Parameter for "Bucket" Media Types
Wikipedia/Adaptive_Multi-Rate
Enhanced Full Rate or EFR or GSM-EFR or GSM 06.60 is a speech coding standard that was developed in order to improve the quality of GSM. Enhanced Full Rate was developed by Nokia and the Université de Sherbrooke (Canada). In 1995, ETSI selected the Enhanced Full Rate voice codec as the industry standard codec for GSM/DCS. == Technology == The sampling rate is 8000 samples/s, leading to a bit rate for the encoded bit stream of 12.2 kbit/s. The coding scheme is the so-called Algebraic Code Excited Linear Prediction Coder (ACELP). The encoder is fed with data consisting of samples with a resolution of 13 bits left justified in a 16-bit word. The three least significant bits are set to 0. The decoder outputs data in the same format. The Enhanced Full Rate (GSM 06.60) technical specification describes the detailed mapping from input blocks of 160 speech samples in 13-bit uniform PCM format to encoded blocks of 244 bits and from encoded blocks of 244 bits to output blocks of 160 reconstructed speech samples. It also specifies the conversion between A-law or μ-law (PCS 1900) 8-bit PCM and 13-bit uniform PCM. This part of the specification also describes the codec down to the bit level, enabling verification of compliance with a high degree of confidence by use of a set of digital test sequences. These test sequences are described in GSM 06.54 and are available on disks. This standard is defined in ETSI ETS 300 726 (GSM 06.60). The packing is specified in ETSI Technical Specification TS 101 318. ETSI selected the Enhanced Full Rate voice codec as the industry standard codec for GSM/DCS in 1995. Enhanced Full Rate was also chosen as the industry standard in the US market for the PCS 1900 GSM frequency band. == Licensing and patent issues == The Enhanced Full Rate codec incorporates several patents. It uses the patented ACELP technology, which is licensed by the VoiceAge Corporation. Enhanced Full Rate was developed by Nokia and the Université de Sherbrooke (Canada). == See also == Half Rate Full Rate Adaptive Multi-Rate (AMR) Adaptive Multi-Rate Wideband (AMR-WB) Extended Adaptive Multi-Rate - Wideband (AMR-WB+) Comparison of audio coding formats == References == == External links == RFC 3551 - GSM-EFR (GSM 06.60) ETS 300 726 (GSM 06.60) 3GPP TS06.60 - technical specification Summary of GSM Codecs
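The figures in the Technology section above can be tied together with a few lines of arithmetic: 244 encoded bits per block of 160 samples at 8000 samples/s gives the 12.2 kbit/s rate, and the 13-bit samples are carried left-justified in 16-bit words with the three least significant bits zeroed. The following sketch is illustrative only and is not part of the GSM 06.60 reference code.

```python
SAMPLE_RATE = 8000      # samples per second
BLOCK_SAMPLES = 160     # samples per encoded block (20 ms)
BLOCK_BITS = 244        # encoded bits per block

bit_rate = BLOCK_BITS * SAMPLE_RATE / BLOCK_SAMPLES   # 12200.0 bit/s = 12.2 kbit/s
raw_rate = 13 * SAMPLE_RATE                           # 104000 bit/s of 13-bit PCM
print(bit_rate, raw_rate / bit_rate)                  # 12200.0, roughly 8.5x reduction

def pack_sample(sample_13bit: int) -> int:
    """Left-justify a signed 13-bit sample in a 16-bit word.

    The three least significant bits of the word are zero, as the encoder
    interface described above requires.
    """
    if not -4096 <= sample_13bit <= 4095:
        raise ValueError("sample out of 13-bit range")
    return (sample_13bit << 3) & 0xFFFF   # two's-complement 16-bit word

assert pack_sample(1) == 0x0008
assert pack_sample(-1) == 0xFFF8
```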
Wikipedia/Enhanced_Full_Rate
In mathematics, a measure-preserving dynamical system is an object of study in the abstract formulation of dynamical systems, and ergodic theory in particular. Measure-preserving systems obey the Poincaré recurrence theorem, and are a special case of conservative systems. They provide the formal, mathematical basis for a broad range of physical systems, and, in particular, many systems from classical mechanics (in particular, most non-dissipative systems) as well as systems in thermodynamic equilibrium. == Definition == A measure-preserving dynamical system is defined as a probability space and a measure-preserving transformation on it. In more detail, it is a system ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} with the following structure: X {\displaystyle X} is a set, B {\displaystyle {\mathcal {B}}} is a σ-algebra over X {\displaystyle X} , μ : B → [ 0 , 1 ] {\displaystyle \mu :{\mathcal {B}}\rightarrow [0,1]} is a probability measure, so that μ ( X ) = 1 {\displaystyle \mu (X)=1} , and μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} , T : X → X {\displaystyle T:X\rightarrow X} is a measurable transformation which preserves the measure μ {\displaystyle \mu } , i.e., ∀ A ∈ B μ ( T − 1 ( A ) ) = μ ( A ) {\displaystyle \forall A\in {\mathcal {B}}\;\;\mu (T^{-1}(A))=\mu (A)} . == Discussion == One may ask why the measure preserving transformation is defined in terms of the inverse μ ( T − 1 ( A ) ) = μ ( A ) {\displaystyle \mu (T^{-1}(A))=\mu (A)} instead of the forward transformation μ ( T ( A ) ) = μ ( A ) {\displaystyle \mu (T(A))=\mu (A)} . This can be understood intuitively. Consider the typical measure on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} , and a map T x = 2 x mod 1 = { 2 x if x < 1 / 2 2 x − 1 if x > 1 / 2 {\displaystyle Tx=2x\mod 1={\begin{cases}2x{\text{ if }}x<1/2\\2x-1{\text{ if }}x>1/2\\\end{cases}}} . This is the Bernoulli map. Now, distribute an even layer of paint on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} , and then map the paint forward. The paint on the [ 0 , 1 / 2 ] {\displaystyle [0,1/2]} half is spread thinly over all of [ 0 , 1 ] {\displaystyle [0,1]} , and the paint on the [ 1 / 2 , 1 ] {\displaystyle [1/2,1]} half as well. The two layers of thin paint, layered together, recreates the exact same paint thickness. More generally, the paint that would arrive at subset A ⊂ [ 0 , 1 ] {\displaystyle A\subset [0,1]} comes from the subset T − 1 ( A ) {\displaystyle T^{-1}(A)} . For the paint thickness to remain unchanged (measure-preserving), the mass of incoming paint should be the same: μ ( A ) = μ ( T − 1 ( A ) ) {\displaystyle \mu (A)=\mu (T^{-1}(A))} . Consider a mapping T {\displaystyle {\mathcal {T}}} of power sets: T : P ( X ) → P ( X ) {\displaystyle {\mathcal {T}}:P(X)\to P(X)} Consider now the special case of maps T {\displaystyle {\mathcal {T}}} which preserve intersections, unions and complements (so that it is a map of Borel sets) and also sends X {\displaystyle X} to X {\displaystyle X} (because we want it to be conservative). Every such conservative, Borel-preserving map can be specified by some surjective map T : X → X {\displaystyle T:X\to X} by writing T ( A ) = T − 1 ( A ) {\displaystyle {\mathcal {T}}(A)=T^{-1}(A)} . Of course, one could also define T ( A ) = T ( A ) {\displaystyle {\mathcal {T}}(A)=T(A)} , but this is not enough to specify all such possible maps T {\displaystyle {\mathcal {T}}} . 
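The paint-spreading argument can be checked numerically. The sketch below takes the Bernoulli (doubling) map with Lebesgue measure on [0, 1), verifies μ(T⁻¹(A)) = μ(A) exactly for an interval, estimates the same identity by Monte Carlo, and notes why the forward image would not work. It is a toy illustration under these specific choices, not part of the formal development above.

```python
import random

def T(x: float) -> float:
    """The doubling (Bernoulli) map T(x) = 2x mod 1 on [0, 1)."""
    return (2.0 * x) % 1.0

def preimage_intervals(a: float, b: float):
    """T^{-1}([a, b)): the two half-scale copies of the interval."""
    return [(a / 2.0, b / 2.0), ((a + 1.0) / 2.0, (b + 1.0) / 2.0)]

# Exact check of mu(T^{-1}(A)) = mu(A) for an interval A = [a, b).
a, b = 0.2, 0.7
preimage_measure = sum(hi - lo for lo, hi in preimage_intervals(a, b))
assert abs(preimage_measure - (b - a)) < 1e-12

# Monte Carlo check: for X uniform on [0, 1), the fraction of points whose
# image lands in A estimates mu(T^{-1}(A)), which should equal mu(A) = b - a.
random.seed(0)
N = 200_000
hits = sum(a <= T(random.random()) < b for _ in range(N))
print(hits / N, b - a)   # both close to 0.5

# By contrast, the *forward* image of a set need not keep its measure:
# T maps [0, 1/2) onto all of [0, 1), so "mu(T(A)) = mu(A)" fails there,
# which is exactly why the definition is stated with preimages.
```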
That is, conservative, Borel-preserving maps T {\displaystyle {\mathcal {T}}} cannot, in general, be written in the form T ( A ) = T ( A ) ; {\displaystyle {\mathcal {T}}(A)=T(A);} . μ ( T − 1 ( A ) ) {\displaystyle \mu (T^{-1}(A))} has the form of a pushforward, whereas μ ( T ( A ) ) {\displaystyle \mu (T(A))} is generically called a pullback. Almost all properties and behaviors of dynamical systems are defined in terms of the pushforward. For example, the transfer operator is defined in terms of the pushforward of the transformation map T {\displaystyle T} ; the measure μ {\displaystyle \mu } can now be understood as an invariant measure; it is just the Frobenius–Perron eigenvector of the transfer operator (recall, the FP eigenvector is the largest eigenvector of a matrix; in this case it is the eigenvector which has the eigenvalue one: the invariant measure.) There are two classification problems of interest. One, discussed below, fixes ( X , B , μ ) {\displaystyle (X,{\mathcal {B}},\mu )} and asks about the isomorphism classes of a transformation map T {\displaystyle T} . The other, discussed in transfer operator, fixes ( X , B ) {\displaystyle (X,{\mathcal {B}})} and T {\displaystyle T} , and asks about maps μ {\displaystyle \mu } that are measure-like. Measure-like, in that they preserve the Borel properties, but are no longer invariant; they are in general dissipative and so give insights into dissipative systems and the route to equilibrium. In terms of physics, the measure-preserving dynamical system ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} often describes a physical system that is in equilibrium, for example, thermodynamic equilibrium. One might ask: how did it get that way? Often, the answer is by stirring, mixing, turbulence, thermalization or other such processes. If a transformation map T {\displaystyle T} describes this stirring, mixing, etc. then the system ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} is all that is left, after all of the transient modes have decayed away. The transient modes are precisely those eigenvectors of the transfer operator that have eigenvalue less than one; the invariant measure μ {\displaystyle \mu } is the one mode that does not decay away. The rate of decay of the transient modes are given by (the logarithm of) their eigenvalues; the eigenvalue one corresponds to infinite half-life. == Informal example == The microcanonical ensemble from physics provides an informal example. Consider, for example, a fluid, gas or plasma in a box of width, length and height w × l × h , {\displaystyle w\times l\times h,} consisting of N {\displaystyle N} atoms. A single atom in that box might be anywhere, having arbitrary velocity; it would be represented by a single point in w × l × h × R 3 . {\displaystyle w\times l\times h\times \mathbb {R} ^{3}.} A given collection of N {\displaystyle N} atoms would then be a single point somewhere in the space ( w × l × h ) N × R 3 N . {\displaystyle (w\times l\times h)^{N}\times \mathbb {R} ^{3N}.} The "ensemble" is the collection of all such points, that is, the collection of all such possible boxes (of which there are an uncountably-infinite number). This ensemble of all-possible-boxes is the space X {\displaystyle X} above. In the case of an ideal gas, the measure μ {\displaystyle \mu } is given by the Maxwell–Boltzmann distribution. 
It is a product measure, in that if p i ( x , y , z , v x , v y , v z ) d 3 x d 3 p {\displaystyle p_{i}(x,y,z,v_{x},v_{y},v_{z})\,d^{3}x\,d^{3}p} is the probability of atom i {\displaystyle i} having position and velocity x , y , z , v x , v y , v z {\displaystyle x,y,z,v_{x},v_{y},v_{z}} , then, for N {\displaystyle N} atoms, the probability is the product of N {\displaystyle N} of these. This measure is understood to apply to the ensemble. So, for example, one of the possible boxes in the ensemble has all of the atoms on one side of the box. One can compute the likelihood of this, in the Maxwell–Boltzmann measure. It will be enormously tiny, of order O ( 2 − 3 N ) . {\displaystyle {\mathcal {O}}\left(2^{-3N}\right).} Of all possible boxes in the ensemble, this is a ridiculously small fraction. The only reason that this is an "informal example" is because writing down the transition function T {\displaystyle T} is difficult, and, even if written down, it is hard to perform practical computations with it. Difficulties are compounded if there are interactions between the particles themselves, like a van der Waals interaction or some other interaction suitable for a liquid or a plasma; in such cases, the invariant measure is no longer the Maxwell–Boltzmann distribution. The art of physics is finding reasonable approximations. This system does exhibit one key idea from the classification of measure-preserving dynamical systems: two ensembles, having different temperatures, are inequivalent. The entropy for a given canonical ensemble depends on its temperature; as physical systems, it is "obvious" that when the temperatures differ, so do the systems. This holds in general: systems with different entropy are not isomorphic. == Examples == Unlike the informal example above, the examples below are sufficiently well-defined and tractable that explicit, formal computations can be performed. μ could be the normalized angle measure dθ/2π on the unit circle, and T a rotation. See equidistribution theorem; the Bernoulli scheme; the interval exchange transformation; with the definition of an appropriate measure, a subshift of finite type; the base flow of a random dynamical system; the flow of a Hamiltonian vector field on the tangent bundle of a closed connected smooth manifold is measure-preserving (using the measure induced on the Borel sets by the symplectic volume form) by Liouville's theorem (Hamiltonian); for certain maps and Markov processes, the Krylov–Bogolyubov theorem establishes the existence of a suitable measure to form a measure-preserving dynamical system. == Generalization to groups and monoids == The definition of a measure-preserving dynamical system can be generalized to the case in which T is not a single transformation that is iterated to give the dynamics of the system, but instead is a monoid (or even a group, in which case we have the action of a group upon the given probability space) of transformations Ts : X → X parametrized by s ∈ Z (or R, or N ∪ {0}, or [0, +∞)), where each transformation Ts satisfies the same requirements as T above. In particular, the transformations obey the rules: T 0 = i d X : X → X {\displaystyle T_{0}=\mathrm {id} _{X}:X\rightarrow X} , the identity function on X; T s ∘ T t = T t + s {\displaystyle T_{s}\circ T_{t}=T_{t+s}} , whenever all the terms are well-defined; T s − 1 = T − s {\displaystyle T_{s}^{-1}=T_{-s}} , whenever all the terms are well-defined. The earlier, simpler case fits into this framework by defining Ts = Ts for s ∈ N. 
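The first example in the list above, rotation of the circle with the normalized angle measure, also illustrates the one-parameter family of transformations just described. The sketch below uses an arbitrarily chosen irrational rotation number; it checks the composition laws T_0 = id, T_s ∘ T_t = T_{t+s} and T_s⁻¹ = T_{−s} at random points, and then checks measure preservation in the time-average sense suggested by the equidistribution theorem.

```python
import math
import random

ALPHA = math.sqrt(2.0) - 1.0   # an arbitrarily chosen irrational rotation number

def R(s: float, x: float) -> float:
    """Rotation flow R_s(x) = x + s*ALPHA (mod 1) on the circle [0, 1)."""
    return (x + s * ALPHA) % 1.0

def circle_close(u: float, v: float, tol: float = 1e-9) -> bool:
    """Compare two circle points, identifying 0 and 1."""
    d = abs(u - v) % 1.0
    return min(d, 1.0 - d) < tol

# Composition laws of the one-parameter family, checked at random points.
random.seed(1)
for _ in range(100):
    x, s, t = random.random(), random.uniform(-5, 5), random.uniform(-5, 5)
    assert circle_close(R(0.0, x), x)                  # T_0 is the identity
    assert circle_close(R(s, R(t, x)), R(s + t, x))    # T_s o T_t = T_{t+s}
    assert circle_close(R(-s, R(s, x)), x)             # T_s^{-1} = T_{-s}

# Measure preservation in the time-average sense: the orbit of one point
# under the time-one rotation spends a fraction of its time in [a, b)
# equal to the normalized measure (length) of that arc.
a, b = 0.25, 0.6
x, hits, N = 0.1, 0, 100_000
for _ in range(N):
    x = R(1.0, x)
    hits += a <= x < b
print(hits / N, b - a)   # both close to 0.35
```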
== Homomorphisms == The concept of a homomorphism and an isomorphism may be defined. Consider two dynamical systems ( X , A , μ , T ) {\displaystyle (X,{\mathcal {A}},\mu ,T)} and ( Y , B , ν , S ) {\displaystyle (Y,{\mathcal {B}},\nu ,S)} . Then a mapping φ : X → Y {\displaystyle \varphi :X\to Y} is a homomorphism of dynamical systems if it satisfies the following three properties: The map φ {\displaystyle \varphi \ } is measurable. For each B ∈ B {\displaystyle B\in {\mathcal {B}}} , one has μ ( φ − 1 B ) = ν ( B ) {\displaystyle \mu (\varphi ^{-1}B)=\nu (B)} . For μ {\displaystyle \mu } -almost all x ∈ X {\displaystyle x\in X} , one has φ ( T x ) = S ( φ x ) {\displaystyle \varphi (Tx)=S(\varphi x)} . The system ( Y , B , ν , S ) {\displaystyle (Y,{\mathcal {B}},\nu ,S)} is then called a factor of ( X , A , μ , T ) {\displaystyle (X,{\mathcal {A}},\mu ,T)} . The map φ {\displaystyle \varphi \;} is an isomorphism of dynamical systems if, in addition, there exists another mapping ψ : Y → X {\displaystyle \psi :Y\to X} that is also a homomorphism, which satisfies for μ {\displaystyle \mu } -almost all x ∈ X {\displaystyle x\in X} , one has x = ψ ( φ x ) {\displaystyle x=\psi (\varphi x)} ; for ν {\displaystyle \nu } -almost all y ∈ Y {\displaystyle y\in Y} , one has y = φ ( ψ y ) {\displaystyle y=\varphi (\psi y)} . Hence, one may form a category of dynamical systems and their homomorphisms. == Generic points == A point x ∈ X is called a generic point if the orbit of the point is distributed uniformly according to the measure. == Symbolic names and generators == Consider a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} , and let Q = {Q1, ..., Qk} be a partition of X into k measurable pair-wise disjoint sets. Given a point x ∈ X, clearly x belongs to only one of the Qi. Similarly, the iterated point Tnx can belong to only one of the parts as well. The symbolic name of x, with regards to the partition Q, is the sequence of integers {an} such that T n x ∈ Q a n . {\displaystyle T^{n}x\in Q_{a_{n}}.} The set of symbolic names with respect to a partition is called the symbolic dynamics of the dynamical system. A partition Q is called a generator or generating partition if μ-almost every point x has a unique symbolic name. == Operations on partitions == Given a partition Q = {Q1, ..., Qk} and a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} , define the T-pullback of Q as T − 1 Q = { T − 1 Q 1 , … , T − 1 Q k } . {\displaystyle T^{-1}Q=\{T^{-1}Q_{1},\ldots ,T^{-1}Q_{k}\}.} Further, given two partitions Q = {Q1, ..., Qk} and R = {R1, ..., Rm}, define their refinement as Q ∨ R = { Q i ∩ R j ∣ i = 1 , … , k , j = 1 , … , m , μ ( Q i ∩ R j ) > 0 } . {\displaystyle Q\vee R=\{Q_{i}\cap R_{j}\mid i=1,\ldots ,k,\ j=1,\ldots ,m,\ \mu (Q_{i}\cap R_{j})>0\}.} With these two constructs, the refinement of an iterated pullback is defined as ⋁ n = 0 N T − n Q = { Q i 0 ∩ T − 1 Q i 1 ∩ ⋯ ∩ T − N Q i N where i ℓ = 1 , … , k , ℓ = 0 , … , N , μ ( Q i 0 ∩ T − 1 Q i 1 ∩ ⋯ ∩ T − N Q i N ) > 0 } {\displaystyle {\begin{aligned}\bigvee _{n=0}^{N}T^{-n}Q&=\{Q_{i_{0}}\cap T^{-1}Q_{i_{1}}\cap \cdots \cap T^{-N}Q_{i_{N}}\\&{}\qquad {\mbox{ where }}i_{\ell }=1,\ldots ,k,\ \ell =0,\ldots ,N,\ \\&{}\qquad \qquad \mu \left(Q_{i_{0}}\cap T^{-1}Q_{i_{1}}\cap \cdots \cap T^{-N}Q_{i_{N}}\right)>0\}\\\end{aligned}}} which plays crucial role in the construction of the measure-theoretic entropy of a dynamical system. 
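Before turning to entropy, the symbolic-name construction can be made concrete. The sketch below uses the doubling map T(x) = 2x mod 1 with the two-cell partition Q = {[0, 1/2), [1/2, 1)}; the symbolic name of a point is then just its binary expansion, so Q is a generating partition, and the refinement of iterated pullbacks consists of dyadic intervals. The particular numbers are illustrative only.

```python
def T(x: float) -> float:
    """Doubling map T(x) = 2x mod 1."""
    return (2.0 * x) % 1.0

def symbolic_name(x: float, n_symbols: int) -> list[int]:
    """Symbolic name of x for the partition Q = {Q_0 = [0,1/2), Q_1 = [1/2,1)}.

    The n-th symbol records which cell the iterate T^n(x) falls in.
    """
    name = []
    for _ in range(n_symbols):
        name.append(0 if x < 0.5 else 1)
        x = T(x)
    return name

# For this partition the symbolic name is just the binary expansion of x,
# so almost every point gets a unique name: Q is a generating partition.
x = 0.8125                    # = 0.1101 in binary
print(symbolic_name(x, 6))    # [1, 1, 0, 1, 0, 0]

# The refinement Q v T^{-1}Q v ... v T^{-N}Q consists of the dyadic
# intervals of length 2^-(N+1): each cell collects the points sharing
# their first N+1 symbols.
N = 3
cells = {tuple(symbolic_name(k / 2 ** (N + 1) + 1e-12, N + 1))
         for k in range(2 ** (N + 1))}
print(len(cells))             # 16 = 2^(N+1) distinct cells
```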
== Measure-theoretic entropy == The entropy of a partition Q {\displaystyle {\mathcal {Q}}} is defined as H ( Q ) = − ∑ Q ∈ Q μ ( Q ) log ⁡ μ ( Q ) . {\displaystyle H({\mathcal {Q}})=-\sum _{Q\in {\mathcal {Q}}}\mu (Q)\log \mu (Q).} The measure-theoretic entropy of a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} with respect to a partition Q = {Q1, ..., Qk} is then defined as h μ ( T , Q ) = lim N → ∞ 1 N H ( ⋁ n = 0 N T − n Q ) . {\displaystyle h_{\mu }(T,{\mathcal {Q}})=\lim _{N\rightarrow \infty }{\frac {1}{N}}H\left(\bigvee _{n=0}^{N}T^{-n}{\mathcal {Q}}\right).} Finally, the Kolmogorov–Sinai metric or measure-theoretic entropy of a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} is defined as h μ ( T ) = sup Q h μ ( T , Q ) . {\displaystyle h_{\mu }(T)=\sup _{\mathcal {Q}}h_{\mu }(T,{\mathcal {Q}}).} where the supremum is taken over all finite measurable partitions. A theorem of Yakov Sinai in 1959 shows that the supremum is actually obtained on partitions that are generators. Thus, for example, the entropy of the Bernoulli process is log 2, since almost every real number has a unique binary expansion. That is, one may partition the unit interval into the intervals [0, 1/2) and [1/2, 1]. Every real number x is either less than 1/2 or not; and likewise so is the fractional part of 2nx. If the space X is compact and endowed with a topology, or is a metric space, then the topological entropy may also be defined. If T {\displaystyle T} is an ergodic, piecewise expanding, and Markov on X ⊂ R {\displaystyle X\subset \mathbb {R} } , and μ {\displaystyle \mu } is absolutely continuous with respect to the Lebesgue measure, then we have the Rokhlin formula (section 4.3 and section 12.3 ): h μ ( T ) = ∫ ln ⁡ | d T / d x | μ ( d x ) {\displaystyle h_{\mu }(T)=\int \ln |dT/dx|\mu (dx)} This allows calculation of entropy of many interval maps, such as the logistic map. Ergodic means that T − 1 ( A ) = A {\displaystyle T^{-1}(A)=A} implies A {\displaystyle A} has full measure or zero measure. Piecewise expanding and Markov means that there is a partition of X {\displaystyle X} into finitely many open intervals, such that for some ϵ > 0 {\displaystyle \epsilon >0} , | T ′ | ≥ 1 + ϵ {\displaystyle |T'|\geq 1+\epsilon } on each open interval. Markov means that for each I i {\displaystyle I_{i}} from those open intervals, either T ( I i ) ∩ I i = ∅ {\displaystyle T(I_{i})\cap I_{i}=\emptyset } or T ( I i ) ∩ I i = I i {\displaystyle T(I_{i})\cap I_{i}=I_{i}} . == Classification and anti-classification theorems == One of the primary activities in the study of measure-preserving systems is their classification according to their properties. That is, let ( X , B , μ ) {\displaystyle (X,{\mathcal {B}},\mu )} be a measure space, and let U {\displaystyle U} be the set of all measure preserving systems ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} . An isomorphism S ∼ T {\displaystyle S\sim T} of two transformations S , T {\displaystyle S,T} defines an equivalence relation R ⊂ U × U . {\displaystyle {\mathcal {R}}\subset U\times U.} The goal is then to describe the relation R {\displaystyle {\mathcal {R}}} . A number of classification theorems have been obtained; but quite interestingly, a number of anti-classification theorems have been found as well. 
The anti-classification theorems state that there are more than a countable number of isomorphism classes, and that a countable amount of information is not sufficient to classify isomorphisms. The first anti-classification theorem, due to Hjorth, states that if U {\displaystyle U} is endowed with the weak topology, then the set R {\displaystyle {\mathcal {R}}} is not a Borel set. There are a variety of other anti-classification results. For example, replacing isomorphism with Kakutani equivalence, it can be shown that there are uncountably many non-Kakutani equivalent ergodic measure-preserving transformations of each entropy type. These stand in contrast to the classification theorems. These include: Ergodic measure-preserving transformations with a pure point spectrum have been classified. Bernoulli shifts are classified by their metric entropy. See Ornstein theory for more. == See also == Krylov–Bogolyubov theorem on the existence of invariant measures Poincaré recurrence theorem – Certain dynamical systems will eventually return to (or approximate) their initial state == References == == Further reading == Michael S. Keane, "Ergodic theory and subshifts of finite type", (1991), appearing as Chapter 2 in Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, Tim Bedford, Michael Keane and Caroline Series, Eds. Oxford University Press, Oxford (1991). ISBN 0-19-853390-X (Provides expository introduction, with exercises, and extensive references.) Lai-Sang Young, "Entropy in Dynamical Systems" (pdf; ps), appearing as Chapter 16 in Entropy, Andreas Greven, Gerhard Keller, and Gerald Warnecke, eds. Princeton University Press, Princeton, NJ (2003). ISBN 0-691-11338-6 T. Schürmann and I. Hoffmann, The entropy of strange billiards inside n-simplexes. J. Phys. A 28(17), page 5033, 1995. PDF-Document (gives a more involved example of measure-preserving dynamical system.)
Wikipedia/Metric_entropy
The Atkinson–Shiffrin model (also known as the multi-store model or modal model) is a model of memory proposed in 1968 by Richard Atkinson and Richard Shiffrin. The model asserts that human memory has three separate components: a sensory register, where sensory information enters memory, a short-term store, also called working memory or short-term memory, which receives and holds input from both the sensory register and the long-term store, and a long-term store, where information which has been rehearsed (explained below) in the short-term store is held indefinitely. Since its first publication this model has come under much scrutiny and has been criticized for various reasons (described below), but it is notable for the significant influence it had in stimulating memory research. == Summary == The model is an explanation of how memory processes work. The three-part, multi-store model was first described by Atkinson and Shiffrin in 1968, though the idea of distinct memory stores was by no means new at the time. William James described a distinction between primary and secondary memory in 1890, where primary memory consisted of thoughts held for a short time in consciousness and secondary memory consisted of a permanent, unconscious store. At the time, however, it was still contested whether separate memory stores were a parsimonious description of memory. A summary of the evidence given for the distinction between long-term and short-term stores is given below. Additionally, Atkinson and Shiffrin included a sensory register alongside the previously theorized primary and secondary memory, as well as a variety of control processes which regulate the transfer of memory. Following its first publication, multiple extensions of the model have been put forth, such as a precategorical acoustic store, the search of associative memory model, the perturbation model, and permastore. Additionally, alternative frameworks have been proposed, such as procedural reinstatement, a distinctiveness model, and Baddeley and Hitch's model of working memory, among others. == Sensory register == When an environmental stimulus is detected by the senses, it is briefly available in what Atkinson and Shiffrin called the sensory registers (also sensory buffers or sensory memory). Though this store is generally referred to as "the sensory register" or "sensory memory", it is actually composed of multiple registers, one for each sense. The sensory registers do not process the information carried by the stimulus, but rather detect and hold information for milliseconds to seconds to be used in short-term memory. For this reason Atkinson and Shiffrin also called the registers "buffers", as they prevent immense amounts of information from overwhelming higher-level cognitive processes. Information is only transferred to short-term memory when attention is given to it; otherwise it decays rapidly and is forgotten. While it is generally agreed that there is a sensory register for each sense, most of the research in the area has focused on the visual and auditory systems. === Iconic memory === Iconic memory, which is associated with the visual system, is perhaps the most researched of the sensory registers. The original evidence suggesting sensory stores which are separate from short-term and long-term memory was experimentally demonstrated for the visual system using a tachistoscope. Iconic memory is limited only by the field of vision.
That is, as long as a stimulus has entered the field of vision there is no limit to the amount of visual information iconic memory can hold at any one time. As noted above, sensory registers do not allow for further processing of information, and as such iconic memory only holds information for visual stimuli such as shape, size, color and location (but not semantic meaning). As the higher-level processes are limited in their capacities, not all information from sensory memory can be conveyed. It has been argued that the momentary mental freezing of visual input allows for the selection of specific aspects which should be passed on for further memory processing. The biggest limitation of iconic memory is the rapid decay of the information stored there; items in iconic memory decay after only 0.5–1.0 seconds. === Echoic memory === Echoic memory, coined by Ulric Neisser, refers to information that is registered by the auditory system. As with iconic memory, echoic memory only holds superficial aspects of sound (e.g. pitch, tempo, or rhythm) and it has a nearly limitless capacity. Echoic memory is generally cited as having a duration of between 1.5 and 5 seconds depending on context but has been shown to last up to 20 seconds in the absence of competing information. == Short-term store == While much of the information in sensory memory decays and is forgotten, some is attended to. The information that is attended is transferred to the short-term store (also short-term memory, working memory; note that while these terms are often used interchangeably they were not originally intended to be used as such). === Duration === As with sensory memory, the information that enters short-term memory decays and is lost, but the information in the short-term store has a longer duration, approximately 18–20 seconds when the information is not being actively rehearsed, though it is possible that this depends on modality and could be as long as 30 seconds. Fortunately, the information can be held in the short-term store for much longer through what Atkinson and Shiffrin called rehearsal. For auditory information rehearsal can be taken in a literal sense: continually repeating the items. However, the term can be applied for any information that is attended to, such as when a visual image is intentionally held in mind. Finally, information in the short-term store does not have to be of the same modality as its sensory input. For example, written text which enters visually can be held as auditory information, and likewise auditory input can be visualized. On this model, rehearsal of information allows for it to be stored more permanently in the long-term store. Atkinson and Shiffrin discussed this at length for auditory and visual information but did not give much attention to the rehearsal/storage of other modalities due to the experimental difficulties of studying those modalities. === Capacity === There is a limit to the amount of information that can be held in the short-term store: 7 ± 2 chunks. These chunks, which were noted by Miller in his seminal paper The Magical Number Seven, Plus or Minus Two, are defined as independent items of information. It is important to note that some chunks are perceived as one unit though they could be broken down into multiple items, for example "1066" can be either the series of four digits "1, 0, 6, 6" or the semantically grouped item "1066" which is the year the Battle of Hastings was fought. 
Chunking allows for large amounts of information to be held in memory: 149283141066 is twelve individual items, well outside the limit of the short-term store, but it can be grouped semantically into the 4 chunks "Columbus[1492] ate[8] pie[314→3.14→π] at the Battle of Hastings[1066]". Because short-term memory is limited in capacity, it severely limits the amount of information that can be attended to at any one time. == Long-term store == The long-term store (also long-term memory) is a more or less permanent store. Information that is stored here can be "copied" and transferred to the short-term store where it can be attended to and manipulated. === Transfer from STS === Information is postulated to enter the long-term store from the short-term store more or less automatically. According to Atkinson and Shiffrin model, transfer from the short-term store to the long-term store is occurring for as long as the information is being attended to in the short-term store. In this way, varying amounts of attention result in varying amounts of time in short-term memory. Ostensibly, the longer an item is held in short-term memory, the stronger its memory trace will be in long-term memory. Atkinson and Shiffrin cite evidence for this transfer mechanism in studies by Hebb (1961) and Melton (1963) which show that repeated rote repetition enhances long-term memory. One may also think to the original Ebbinghaus memory experiments showing that forgetting increases for items which are studied fewer times. Finally, the authors note that there are stronger encoding processes than simple rote rehearsal, namely relating the new information to information which has already made its way into the long-term store. === Capacity and duration === In this model, as with most models of memory, long-term memory is assumed to be nearly limitless in its duration and capacity. It is most often the case that brain structures begin to deteriorate and fail before any limit of learning is reached. This is not to assume that any item which is stored in long-term memory is accessible at any point in the lifetime. Rather, it is noted that the connections, cues, or associations to the memory deteriorate; the memory remains intact but unreachable. == Evidence for distinct stores == At the time of the original publication there was a schism in the field of memory on the issue of a single process or dual-process model of memory, the two processes referring to short-term and long-term memory. Atkinson and Shiffrin cite hippocampal lesion studies as compelling evidence for a separation of the two stores. These studies showed that patients with bilateral damage to the hippocampal region had nearly no ability to form new long-term memories though their short-term memory remained intact. One may also be familiar with similar evidence found through the study of Henry Molaison, famously known as H.M., who underwent a severe bilateral medial temporal lobectomy which removed most of his hippocampal regions. These data suggest that there is indeed a clear separation between the short-term and long-term stores. == Criticism == === Sensory register as a separate store === One of the early and central criticisms to the Atkinson–Shiffrin model was the inclusion of the sensory registers as part of memory. Specifically, the original model seemed to describe the sensory registers as both a structure and a control process. Parsimony would suggest that if the sensory registers are actually control processes, there is no need for a tri-partite system. 
Later revisions to the model addressed these claims and incorporated the sensory registers with the short-term store. === Division and nature of working memory === Baddeley and Hitch have in turn called into question the specific structure of the short-term store, proposing that it is subdivided into multiple components. While the different components were not specifically addressed in the original Atkinson-Shiffrin model, the authors do note that little research has been done investigating the different ways sensory modalities may be represented in the short-term store. Thus the model of working memory given by Baddeley and Hitch should be viewed as a refinement of the original model. === Rehearsal as the sole transfer mechanism === The model has been further criticized as suggesting that rehearsal is the key process that initiates and facilitates transfer of information into LTM. There is very little evidence supporting this hypothesis, and long-term recall can in fact be better predicted by a levels-of-processing framework. In this framework, items which are encoded at a deeper, more semantic level are shown to have stronger traces in long-term memory. This criticism is somewhat unfounded as Atkinson and Shiffrin clearly state a difference between rehearsal and coding, where coding is akin to elaborative processes which levels-of-processing would call deep-processing. In this light, the levels-of-processing framework could be seen as more of an extension of the Atkinson-Shiffrin model rather than a refutation. === Division of long-term memory === In the case of long-term memory it is unlikely that different types of information, such as the motor skills to ride a bike, memory for vocabulary, and memory for personal life events are stored in the same fashion. Endel Tulving notes the importance of encoding specificity in long-term memory. To clarify, there are definite differences in the way information is stored depending on whether it is episodic (memories of events), procedural (knowledge of how to do things), or semantic (general knowledge). A short (non-inclusive) example comes from the study of Henry Molaison (H.M.): learning a simple motor task (tracing a star pattern in a mirror), which involves implicit and procedural long-term storage, is unaffected by bilateral lesioning of the hippocampal regions while other forms of long-term memory, like vocabulary learning (semantic) and memories for events, are severely impaired. === Further reading === For more thorough and technical reviews of the main criticisms please refer to the following resources: Raaijmakers, Jeroen G. W. (1993). "The story of the two-store model of memory: past criticisms, current status, and future directions". Attention and performance. Vol. XIV (silver jubilee volume). Cambridge, MA: MIT Press. pp. 467–488. ISBN 978-0-262-13284-8. Baddeley, Alan (April 1994). "The magical number seven: still magic after all these years?". Psychological Review. 101 (2): 353–356. doi:10.1037/0033-295X.101.2.353. PMID 8022967. == Search of associative memory (SAM) == Due to the above and other criticism through the 1970s, the original model underwent many revisions to account for phenomena it could not explain. The "search of associative memory" (SAM) model is the culmination of that work. The SAM model uses a two-phase memory system: short- and long-term stores. Unlike the original Atkinson–Shiffrin model, there is no sensory store in the SAM model. 
=== Short-term store === Short-term store takes on the form of a buffer, which has a limited capacity. The model assumes a buffer rehearsal system in which the buffer has a size, r. Items enter the short-term store and accompany other items that are already present in the buffer, until size r has been reached. Once the buffer is at full capacity, when new items enter, they replace an item, r, which already exists in the buffer. A probability of 1/r determines which already existing item will be replaced from the buffer. In general, items that have been in the buffer for longer are more likely to be replaced by new items. === Long-term store === The long-term store is responsible for storing relationships between different items and of items to their contexts. Context information refers to the situational and temporal factors present at the time when an item is in the short-term store, such as emotional feelings or environmental details. The amount of item-context information which is transferred to the long-term store is proportional to the amount of time that the item remains in the short-term store. On the other hand, the strength of the item-item associations is proportional to the amount of time that two items simultaneously existed in the short-term store. === Retrieval from long-term store === It is best to show how items are recalled from the long-term store using an example. Assume a participant has just studied a list of word pairs and is now being tested on his memory of those pairs. If the prior list contained, blanket – ocean, the test would be to recall ocean when prompted with blanket – ?. Memories stored in long-term store are retrieved through a logical process involving the assembly of cues, sampling, recovery, and evaluation of recovery. According to the model, when an item needs to be recalled from memory the individual assembles the various cues for the item in the short-term store. In this case, the cues would be any cues surrounding the pair blanket – ocean, like the words that preceded and followed it, what the participant was feeling at the time, how far into the list the words were, etc. Using these cues the individual determines which area of the long-term store to search and then samples any items with associations to the cues. This search is automatic and unconscious, which is how the authors would explain how an answer "pops" into one's head. The items which are eventually recovered, or recalled, are those with the strongest associations to the cue item, here blanket. Once an item has been recovered it is evaluated, here the participant would decide whether blanket – [recovered word] matches blanket – ocean. If there is a match, or if the participant believes there is a match, the recovered word is output. Otherwise the search starts from the beginning using different cues or weighting cues differently if possible. === Recency effects === The usefulness of the SAM model and in particular its model of the short-term store is often demonstrated by its application to the recency effect in free recall. When serial-position curves are applied to SAM, a strong recency effect is observed, but this effect is strongly diminished when a distractor, usually arithmetic, is placed in between study and test trials. The recency effect occurs because items at the end of the test list are likely to still be present in short-term store and therefore retrieved first. 
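The buffer rules just described are simple enough to simulate. The sketch below is a toy version, not the full SAM model: items enter a buffer of size r, a full buffer replaces an occupant chosen with probability 1/r, and long-term recall is made proportional to time spent in the buffer (the proportionality constant, list length, and buffer size are illustrative choices, not fitted SAM parameters). Averaged over many trials it reproduces the strong recency effect described above, and appending distractor items after the list removes it, in line with the displacement account that follows.

```python
import random

def simulate_trial(list_length=20, r=4, distractor_items=0,
                   ltm_rate=0.05, rng=random):
    """One simulated study trial under the buffer account sketched above.

    Items enter a rehearsal buffer of size r; once it is full, each new
    item replaces a buffer occupant chosen with probability 1/r.  Time
    spent in the buffer feeds long-term storage (ltm_rate per step is an
    illustrative constant).  Recall succeeds if an item is still in the
    buffer at test or is retrieved from the long-term store.
    """
    buffer, time_in_buffer = [], [0] * list_length
    for item in range(list_length + distractor_items):
        if len(buffer) < r:
            buffer.append(item)
        else:
            buffer[rng.randrange(r)] = item    # replace with probability 1/r each
        for occupant in buffer:
            if occupant < list_length:
                time_in_buffer[occupant] += 1
    recalled = {i for i in buffer if i < list_length}
    for item in range(list_length):
        if rng.random() < ltm_rate * time_in_buffer[item]:
            recalled.add(item)
    return recalled

def recall_curve(trials=20_000, list_length=20, **kwargs):
    counts = [0] * list_length
    for _ in range(trials):
        for item in simulate_trial(list_length=list_length, **kwargs):
            counts[item] += 1
    return [round(c / trials, 3) for c in counts]

random.seed(0)
print(recall_curve())                       # strong recency at the last positions
print(recall_curve(distractor_items=10))    # recency largely wiped out
```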
However, when new information is processed, this item enters the short-term store and displaces other information from it. When a distracting task is given after the presentation of all items, information from this task displaces the last items from short-term store, resulting in a substantial reduction of recency. === Problems for the SAM model === The SAM model faces serious problems in accounting for long-term recency data and long-range contiguity data. While both of these effects are observed, the short-term store cannot account for the effects. Since a distracting task after the presentation of word pairs or large interpresentation intervals filled with distractors would be expected to displace the last few studied items from the short-term store, recency effects are still observed. According to the rules of the short-term store, recency and contiguity effects should be eliminated with these distractors as the most recently studied items would no longer be present in the short-term memory. Currently, the SAM model competes with single-store free recall models of memory, such as the Temporal Context Model. Additionally, the original model assumes that the only significant associations between items are those formed during the study portion of an experiment. In other words, it does not account for the effects of prior knowledge about to-be-studied items. A more recent extension of the model incorporates various features which allow the model to account for memory store for the effects of prior semantic knowledge and prior episodic knowledge. The extension proposes a store for preexisting semantic associations; a contextual drift mechanism allowing for decontextualisation of knowledge, e.g. if you first learned a banana was a fruit because you put it in the same class as apple, you do not always have to think of apples to know bananas are fruits; a memory search mechanism that uses both episodic and semantic associations, as opposed to a unitary mechanism; and a large lexicon including both words from prior lists and unpresented words. == References == == External links == Simply Psychology webpage on multistore model
Wikipedia/Atkinson–Shiffrin_memory_model
Psychoacoustics is the branch of psychophysics involving the scientific study of the perception of sound by the human auditory system. It is the branch of science studying the psychological responses associated with sound including noise, speech, and music. Psychoacoustics is an interdisciplinary field including psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science. == Background == Hearing is not a purely mechanical phenomenon of wave propagation, but is also a sensory and perceptual event. When a person hears something, that something arrives at the ear as a mechanical sound wave traveling through the air, but within the ear it is transformed into neural action potentials. These nerve pulses then travel to the brain where they are perceived. Hence, in many problems in acoustics, such as for audio processing, it is advantageous to take into account not just the mechanics of the environment, but also the fact that both the ear and the brain are involved in a person's listening experience. The inner ear, for example, does significant signal processing in converting sound waveforms into neural stimuli; this processing renders certain differences between waveforms imperceptible. Data compression techniques, such as MP3, make use of this fact. In addition, the ear has a nonlinear response to sounds of different intensity levels; this nonlinear response is called loudness. Telephone networks and audio noise reduction systems make use of this fact by nonlinearly compressing data samples before transmission and then expanding them for playback. Another effect of the ear's nonlinear response is that sounds that are close in frequency produce phantom beat notes, or intermodulation distortion products. == Limits of perception == The human ear can nominally hear sounds in the range 20 to 20000 Hz. The upper limit tends to decrease with age; most adults are unable to hear above 16000 Hz. Under ideal laboratory conditions, the lowest frequency that has been identified as a musical tone is 12 Hz. Tones between 4 and 16 Hz can be perceived via the body's sense of touch. Human perception of audio signal time separation has been measured to be less than 10 μs. This does not mean that frequencies above 100 kHz (1/10 μs) are audible, but that time discrimination is not directly coupled with frequency range. Frequency resolution of the ear is about 3.6 Hz within the octave of 1000–2000 Hz. That is, changes in pitch larger than 3.6 Hz can be perceived in a clinical setting. However, even smaller pitch differences can be perceived through other means. For example, the interference of two pitches can often be heard as a repetitive variation in the volume of the tone. This amplitude modulation occurs with a frequency equal to the difference in frequencies of the two tones and is known as beating. The semitone scale used in Western musical notation is not a linear frequency scale but logarithmic. Other scales have been derived directly from experiments on human hearing perception, such as the mel scale and Bark scale (these are used in studying perception, but not usually in musical composition), and these are approximately logarithmic in frequency at the high-frequency end, but nearly linear at the low-frequency end. The intensity range of audible sounds is enormous. Human eardrums are sensitive to variations in sound pressure and can detect pressure changes from as small as a few micropascals (μPa) to greater than 100 kPa.
For this reason, sound pressure level is also measured logarithmically, with all pressures referenced to 20 μPa (or 1.97385×10⁻¹⁰ atm). The lower limit of audibility is therefore defined as 0 dB, but the upper limit is not as clearly defined. The upper limit is more a question of the potential to cause noise-induced hearing loss. A more rigorous exploration of the lower limits of audibility determines that the minimum threshold at which a sound can be heard is frequency dependent. By measuring this minimum intensity for test tones of various frequencies, a frequency-dependent absolute threshold of hearing (ATH) curve may be derived. Typically, the ear shows a peak of sensitivity (i.e., its lowest ATH) between 1 and 5 kHz, though the threshold changes with age, with older ears showing decreased sensitivity above 2 kHz. The ATH is the lowest of the equal-loudness contours. Equal-loudness contours indicate the sound pressure levels (dB SPL), over the range of audible frequencies, that are perceived as being of equal loudness. Equal-loudness contours were first measured by Fletcher and Munson at Bell Labs in 1933 using pure tones reproduced via headphones, and the data they collected are called Fletcher–Munson curves. Because subjective loudness was difficult to measure, the Fletcher–Munson curves were averaged over many subjects. Robinson and Dadson refined the process in 1956 to obtain a new set of equal-loudness curves for a frontal sound source measured in an anechoic chamber. The Robinson–Dadson curves were standardized as ISO 226 in 1986. In 2003, ISO 226 was revised using data collected from 12 international studies. == Sound localization == Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in loudness, tone and timing between the two ears to allow us to localize sound sources. Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the zenith or vertical angle, and the distance (for static sounds) or velocity (for moving sounds). Humans, like most four-legged animals, are adept at detecting direction in the horizontal plane, but less so in the vertical direction, because the ears are placed symmetrically. Some species of owls have their ears placed asymmetrically and can detect sound in all three planes, an adaptation to hunt small mammals in the dark. == Masking effects == Suppose a listener can hear a given acoustical signal under silent conditions. When a signal is playing while another sound is being played, the signal has to be stronger for the listener to hear it. The interfering signal is known as the masker and the impeded listening, masking. The masker does not need to have the frequency components of the original signal for masking to happen. A masked signal can be heard even though it is weaker than the masker. Masking happens when a signal and a masker are played together—for instance, when one person whispers while another person shouts—and the listener does not hear the weaker signal as it has been masked by the louder masker. Masking can also happen to a signal before a masker starts or after a masker stops. For example, a sudden loud clap sound can make sounds inaudible immediately preceding or following. The effect of backward masking is weaker than forward masking. The masking effect has been widely studied in psychoacoustical research and is exploited in lossy audio encoding, such as MP3.
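Simultaneous masking is the property that lossy coders exploit: spectral components lying below the masking threshold raised by a strong neighbouring component can be coarsely quantised or discarded. The following toy sketch (Python with NumPy; the single triangular threshold, its 1 dB-per-bin slope, and the function name toy_masking_filter are simplifying assumptions and not the model used by any particular codec, which would work with critical bands and tuned spreading functions) drops FFT bins that fall below a crude threshold derived from the strongest bin:

```python
import numpy as np

def toy_masking_filter(signal, slope_db_per_bin=1.0, floor_db=-60.0):
    """Illustrative simultaneous-masking filter.

    Builds a crude masking threshold that decays linearly (in dB) with
    distance from the strongest FFT bin, then zeroes every bin whose level
    falls below that threshold.
    """
    spectrum = np.fft.rfft(signal)
    level_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

    peak_bin = np.argmax(level_db)
    distance = np.abs(np.arange(level_db.size) - peak_bin)
    threshold_db = np.maximum(level_db[peak_bin] - slope_db_per_bin * distance, floor_db)

    spectrum[level_db < threshold_db] = 0.0   # bins treated as inaudible are discarded
    kept = np.count_nonzero(spectrum)
    return np.fft.irfft(spectrum, n=signal.size), kept

# A loud 1 kHz tone masks a much weaker tone 50 Hz away in this toy model.
sr = 16_000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 1000 * t) + 0.001 * np.sin(2 * np.pi * 1050 * t)
_, kept_bins = toy_masking_filter(audio)
print("non-zero bins kept:", kept_bins)
```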
== Missing fundamental == When presented with a harmonic series of frequencies in the relationship 2f, 3f, 4f, 5f, etc. (where f is a specific frequency), humans tend to perceive that the pitch is f. An audible example can be found on YouTube. == Software == The psychoacoustic model provides for high quality lossy signal compression by describing which parts of a given digital audio signal can be removed (or aggressively compressed) safely—that is, without significant losses in the (consciously) perceived quality of the sound. It can explain how a sharp clap of the hands might seem painfully loud in a quiet library but is hardly noticeable after a car backfires on a busy, urban street. This provides great benefit to the overall compression ratio, and psychoacoustic analysis routinely leads to compressed music files that are one-tenth to one-twelfth the size of high-quality masters, but with discernibly less proportional quality loss. Such compression is a feature of nearly all modern lossy audio compression formats. Some of these formats include Dolby Digital (AC-3), MP3, Opus, Ogg Vorbis, AAC, WMA, MPEG-1 Layer II (used for digital audio broadcasting in several countries), and ATRAC, the compression used in MiniDisc and some Walkman models. Psychoacoustics is based heavily on human anatomy, especially the ear's limitations in perceiving sound as outlined previously. To summarize, these limitations are: High-frequency limit Absolute threshold of hearing Temporal masking (forward masking, backward masking) Simultaneous masking (also known as spectral masking) A compression algorithm can assign a lower priority to sounds outside the range of human hearing. By carefully shifting bits away from the unimportant components and toward the important ones, the algorithm ensures that the sounds a listener is most likely to perceive are most accurately represented. == Music == Psychoacoustics includes topics and studies that are relevant to music psychology and music therapy. Theorists such as Benjamin Boretz consider some of the results of psychoacoustics to be meaningful only in a musical context. Irv Teibel's Environments series LPs (1969–79) are an early example of commercially available sounds released expressly for enhancing psychological abilities. == Applied psychoacoustics == Psychoacoustics has long enjoyed a symbiotic relationship with computer science. Internet pioneers J. C. R. Licklider and Bob Taylor both completed graduate-level work in psychoacoustics, while BBN Technologies originally specialized in consulting on acoustics issues before it began building the first packet-switched network. Licklider wrote a paper entitled "A duplex theory of pitch perception". Psychoacoustics is applied within many fields of software development, where developers map proven and experimental mathematical patterns in digital signal processing. Many audio compression codecs such as MP3 and Opus use a psychoacoustic model to increase compression ratios. The success of conventional audio systems for the reproduction of music in theatres and homes can be attributed to psychoacoustics and psychoacoustic considerations gave rise to novel audio systems, such as psychoacoustic sound field synthesis. Furthermore, scientists have experimented with limited success in creating new acoustic weapons, which emit frequencies that may impair, harm, or kill. Psychoacoustics are also leveraged in sonification to make multiple independent data dimensions audible and easily interpretable. 
This enables auditory guidance without the need for spatial audio and is used in sonification computer games and other applications, such as drone flying and image-guided surgery. It is also applied today within music, where musicians and artists continue to create new auditory experiences by masking unwanted frequencies of instruments, causing other frequencies to be enhanced. Yet another application is in the design of small or lower-quality loudspeakers, which can use the phenomenon of missing fundamentals to give the effect of bass notes at lower frequencies than the loudspeakers are physically able to produce (see references). Automobile manufacturers engineer their engines and even doors to have a certain sound. == See also == === Related fields === Cognitive neuroscience of music Music psychology === Psychoacoustic topics === == References == === Notes === === Sources === == External links == The Musical Ear—Perception of Sound Archived 2005-12-25 at the Wayback Machine Müller C, Schnider P, Persterer A, Opitz M, Nefjodova MV, Berger M (1993). "[Applied psychoacoustics in space flight]". Wien Med Wochenschr (in German). 143 (23–24): 633–5. PMID 8178525.—Simulation of Free-field Hearing by Head Phones GPSYCHO—An Open-source Psycho-Acoustic and Noise-Shaping Model for ISO-Based MP3 Encoders. Definition of: perceptual audio coding Java applet demonstrating masking Temporal Masking HyperPhysics Concepts—Sound and Hearing The MP3 as Standard Object
Wikipedia/Psychoacoustic_model
Smart Bitrate Control, commonly referred to as SBC, was a technique for achieving greatly improved video compression efficiency using the DivX 3.11 Alpha video codec or Microsoft's proprietary MPEG4v2 video codec and the Nandub video encoder. SBC relied on two main technologies to achieve this improved efficiency: Multipass encoding and Variable Keyframe Intervals (VKI). SBC ceased to be commonly used after XviD and DivX development progressed to a point where they incorporated the same features that SBC pioneered and could offer even more efficient video compression without the need for a specialized application. Files created by SBC are compatible with DivX 3.11 Alpha and can be decoded by most codecs that support ISO MPEG4 video. == Technical details == The DivX 3.11 Alpha codec allowed a user to control three aspects of the encoding process: the average bitrate, keyframe interval, and whether the codec preserved smoother motion or more detailed images. DivX attempted to encode an entire movie at an average bitrate the user specified, varying the quality of the video in order to achieve the target bitrate. This meant that a simple section of video, such as a still image, would look very good, but complex video, such as an action scene, would look very bad. DivX's keyframe placement was also very simplistic, it would place keyframes only on the interval that the user selected, every 300 frames (10 seconds at 30 frame/s) by default. Nandub's multipass encoding encoded the video twice; in the first pass it would analyze the video (and write information to a log file), in the second it would actually produce the output file. Instead of varying the image quality to achieve an average bitrate, this allowed SBC to vary the bitrate to achieve an average quality, using higher bitrate for more complex scenes and lower bitrate for simpler scenes. VKI would place keyframes only where needed, such as at scene changes, rather than at a fixed interval. This significantly improved both the compression efficiency and visual quality of the resulting video. A VKI patch (called the DivX Scene Detect Patch) was also available for DivX to allow for VKI functionality without using Nandub, but it offered inferior performance compared to the VKI algorithms included in Nandub. Nandub was a modification of the open source VirtualDub video encoder performed by Nando that incorporated SBC features. == See also == Variable bitrate == External links == Nandub project page at Sourceforge Guide to SBC at Digital-Digest Nandub Options Explained by Koepi, Encoding in Nandub on Doom9.org
Wikipedia/Smart_Bitrate_Control
Haiku, originally OpenBeOS, is a free and open-source operating system for personal computers. It is a community-driven continuation of BeOS and aims to be binary-compatible with it, but is largely a reimplementation with the exception of certain components like the Deskbar. The Haiku project began in 2001, supported by the nonprofit Haiku Inc., and the operating system remains in beta. == History and project == On 17 August 2001 Palm, Inc. announced the purchase of Be, Inc., marking the end of BeOS development. The day after, Michael Phipps started the OpenBeOS project to support the BeOS user community by creating an open-source, backward-compatible replacement for BeOS. Palm refused to license the BeOS code to a third-party, meaning that OpenBeOS had to be reverse-engineered. In 2003, Phipps founded the non-profit organization Haiku, Inc. in Rochester, New York, United States, to financially support development. In 2004, the project held its first North American developers' conference, WalterCon; it was also announced on this day that OpenBeOS was renamed to Haiku to avoid infringing on Palm's trademarks. The BeUnited.org nonprofit organization, which promoted open standards for BeOS-compatible operating system projects, announced that Haiku would be its "reference platform". In February 2007, the project held a Tech Talk at Googleplex, attended by ex-Be engineers as well as Jean-Louis Gassée who voiced his support for the project. There is also an annual conference, BeGeistert, held in Germany since 1998 when BeOS was active. == Development == Apart from the graphical user interface (Tracker and Deskbar, which were open sourced with BeOS 5), Haiku is original software. The modular design of BeOS allowed individual components of Haiku to initially be developed in teams in relative isolation, in many cases developing them as replacements for the BeOS components prior to the completion of other parts of the operating system. The first project by OpenBeOS was a community-created "stop-gap" update for BeOS 5.0.3 in 2002, featuring open source replacement for some BeOS components. The kernel of NewOS, for x86, SuperH, and PowerPC architectures was successfully forked that same year, and Haiku has been based on it since. The app_server window manager was completed in 2005. In July 2006, Haiku developer Stephan Aßmus introduced Icon-O-Matic, an icon editor, and a storage format (HVIF) with a rendering engine based on Anti-Grain Geometry. The PackageInstaller was created by Lukasz Zemczak at the 2007 Google Summer of Code. Java support was eventually added by a team from BeUnited who had ported it to BeOS, followed by WLAN from the FreeBSD stack. Alongside a port to GCC4, the first alpha release finally arrived after seven years of development. Initially targeting full BeOS 5 compatibility, a community poll was launched to redefine the future of Haiku beyond a free software refactoring of BeOS from the late 1990s. It was decided to add support for contemporary systems, protocols, hardware, web standards, and compatibility with FLOSS libraries. On October 27, 2009, Haiku obtained Qt4 support. The WebPositive browser was first preloaded with Alpha2, replacing BeZillaBrowser. After this, much time was spent on building a package management system, which went live in September 2013. 
Beta1 arrived in 2018, and one of the most notable new features was the PackageFS and package installation through the HaikuDepot and pkgman; Beta1 was the first official Haiku release to support full package management. Wine was first ported to Haiku in 2022. === Release history === == Architecture == As with BeOS, Haiku is written in C++ and provides an object-oriented API. The Haiku kernel is a modular hybrid kernel which began as a fork of NewOS, a modular monokernel written by former Be Inc. engineer Travis Geiselbrecht. Many features have been implemented, including a virtual file system (VFS) layer and symmetric multiprocessing (SMP) support. It runs on 32-bit and 64-bit x86 processors, and recently has been ported to RISC-V; there is also a port for ARM under development, but it is currently far behind the x86 port. The application program interface (API) is based on that of BeOS, which is divided into a number of "kits" which collect related classes together and bear some relation to the library which contains the supporting code. In 2007, Access Co. Ltd., the owner of Be Inc.'s intellectual property, released the text of the API documentation (the BeBook) under a Creative Commons licence. The boot loader is filesystem agnostic and can also chainload GRUB, LILO and NTLDR. Since the Beta1 release, Haiku's memory management includes ASLR, DEP, and SMAP. Graphics operations and window management are handled by the app_server. VESA is used as a fallback video output mode. Haiku is POSIX-compatible and has translation layers for X11 and Wayland. == User interface == The graphical user interface is formed of Tracker, a file manager, and the Deskbar, an always-on-top taskbar that is placed in the upper right corner of the screen and contains a menu, tray, and a list of running programs. Tracker is an evolution from OpenTracker, which was released under a license with two addenda restricting the use of Be Inc. trademarks; Zeta also included a modified OpenTracker in its own operating system. The icons in Haiku were designed by Stephan Aßmus (known by the nickname "stippi"). Aßmus also created the Haiku Vector Icon Format (HVIF), a vector storage format for icons in Haiku that is aimed at fast rendering and small file sizes. == Software == Package management is done by the graphical application HaikuDepot, and a command-line equivalent called pkgman. Packages can also be activated by installing them from remote repositories with pkgman, or dropping them over a special packages directory. Haiku package management mounts activated packages over a read-only system directory. The Haiku package management system performs dependency solving with libsolv from the openSUSE project. Haiku comes with a number of preloaded applications, such as a WebKit-based web browser WebPositive, a document reader BePDF, a simple web server PoorMan, text editors Pe and StyledEdit, an IRC client Vision, and a Bash-based terminal emulator Terminal. === Compatibility with BeOS === Haiku R1 aims to be compatible with BeOS 5 at both the source and binary level, allowing software written and compiled for BeOS to be compiled and run without modification on Haiku. The 64-bit version of Haiku, however, does not have BeOS compatibility at the binary level, although the API remains compatible at the source level. (The same would apply to other non-IA32 ports, such as RISC-V.) Installation of these PKG format files is done using the PackageInstaller.
== Reception == In 2013 after the release of Haiku Alpha 4, Ars Technica reviewed the operating system and praised it for being fast, but ultimately stating that it "may not be much more than an interesting diversion, something to play with on a spare bit of hardware". Haiku Beta 4 was reviewed by ZDNET in 2023 where it stated: "Haiku is for those who experienced either NeXT or AfterStep and want an operating system that looks and feels a bit old school but performs faster than any OS they've ever experienced." It further praised Haiku's kernel, file system, and object-oriented API. As of 2018, the Free Software Foundation has included Haiku in a list of non-endorsed operating systems because: "Haiku includes some software that you're not allowed to modify. It also includes nonfree firmware blobs." == See also == Comparison of operating systems List of BeOS applications ZETA Syllable Desktop == References == == External links == Official website Haiku Inc. company website Haiku at DistroWatch Haiku Tech Talk at Google (February 13, 2007) on YouTube Ryan Leavengood (May 2012). "The Dawn of Haiku OS". IEEE Spectrum. Archived from the original on February 3, 2013. Retrieved April 30, 2012. Hardware List, hardware compatible with Haiku (at Besly)
Wikipedia/Haiku_Applications
Communication Theory is a quarterly peer-reviewed academic journal publishing research articles, theoretical essays, and reviews on topics of broad theoretical interest from across the range of communication studies. It was established in 1991 and the current editor-in-chief is Thomas Hanitzsch (University of Munich). According to the Journal Citation Reports, the journal has a 2014 impact factor of 1.667, ranking it 13th out of 76 journals in the category "Communication". It is published by Wiley-Blackwell on behalf of the International Communication Association. Communication theories are frameworks used by scholars and practitioners to understand and predict how information is conveyed, interpreted, and understood. They explore basic elements of communication, from interpersonal conversations to mass media messaging, and provide insights into message crafting, technology influence, and media priorities. == Editors == The following persons have been editor-in-chief of the journal: == References == == External links == Official website Former official website
Wikipedia/Communication_Theory_(journal)
"Communication Theory as a Field" is a 1999 article by Robert T. Craig, attempting to unify the academic field of communication theory. Craig argues that communication theorists can become unified in dialogue by charting what he calls the "dialogical dialectical tension", or the similarities and differences in their understanding of "communication" and demonstrating how those elements create tension within the field. Craig mapped these similarities and differences into seven suggested traditions of communication theory and showed how each of these traditions understand communication, as well as how each tradition's understanding creates tension with the other traditions. The article has received multiple awards, has become the foundation for many communication theory textbooks, and has been translated into several different languages. "Communication theory as a field" has created two main dialogues between Craig and other theorists. Myers argued that Craig misrepresented the theoretical assumptions of his theory, and that the theory itself does not distinguish between good and bad theories. Craig responded that Myers misunderstood not only the basic argument of the article, but also misrepresented his own case study. Russill proposed pragmatism as an eighth tradition of communication theory, Craig responded by expanding this idea and placing Russill's proposition in conversation with the other seven traditions. == Recognition and awards == "Communication Theory as a Field" has been recognized by multiple associations for its influence upon the field of communication. These awards include the Best Article Award from the International Communication Association as well as the Golden Anniversary Monograph Award from the National Communication Association. That work has since been translated into French and Russian. The theory presented in "Communication Theory as a Field" has become the basis of the book "Theorizing Communication" which Craig co-edited with Heidi Muller, as well as being adopted by several other communication theory textbooks as a new framework for understanding the field of communication theory. == Metamodel == Sparked by the "Third Debate" within the field of International Relations Theory in the 1980s, "Communication Theory as a Field" expanded the conversation regarding disciplinary identity in the field of communication. At that time, communication theory textbooks had little to no agreement on how to present the field or which theories to include in their textbooks. This article has since become the foundational framework for four different textbooks to introduce the field of communication. In this article Craig "proposes a vision for communication theory that takes a huge step toward unifying this rather disparate field and addressing its complexities." To move toward this unifying vision Craig focused on communication theory as a practical discipline and shows how "various traditions of communication theory can be engaged in dialogue on the practice of communication." In this deliberative process theorists would engage in dialog about the "practical implications of communication theories." In the end Craig proposes seven different traditions of communication theory and outlines how each one of them would engage the others in dialogue. 
Craig argues that while the study of communication and communication theory has become a rich and flourishing field "Communication theory as an identifiable field of study does not yet exist" and the field of communication theory has become fragmented into separate domains which simply ignore each other. This inability to engage in dialog with one another causes theorists to view communication from isolated viewpoints, and denies them the richness that is available when engaging different perspectives. Craig argues that communication theorists are all engaging in the study of practical communication. By doing so different traditions are able to have a common ground from which a dialog can form, albeit each taking a different perspective of communication. Through this process of forming a dialog between theorists with different viewpoints on communication "communication theory can fully engage with the ongoing practical discourse (or metadiscourse) about communication in society." The communication discipline began not as a single discipline, but through many different disciplines independently researching communication. This interdisciplinary beginning has separated theorists through their different conceptions of communication, rather than unifying them in the common topic of communication. Craig argues that the solution to this incoherence in the field of communication is not a unified theory of communication, but to create a dialogue between these theorists which engages these differences with one another to create new understandings of communication. To achieve this dialog Craig proposes what he calls "Dialogical-Dialectical coherence," or a "common awareness of certain complementaries and tensions among different types of communication theory." Craig believes that the different theories cannot develop in total isolation from one another, therefore this dialogical-dialectical coherence will provide a set of background assumptions from which different theories can engage each other in productive argumentation. Craig argues for a metatheory, or "second level" theory which deals with "first level" theories about communication. This second level metamodel of communication theory would help to understand the differences between first level communication traditions. With this thesis in place, Craig proposes seven suggested traditions of communication that have emerged and each of which have their own way of understanding communication. Rhetorical: views communication as the practical art of discourse. Semiotic: views communication as the mediation by signs. Phenomenological: communication is the experience of dialogue with others. Cybernetic: communication is the flow of information. Socio-psychological: communication is the interaction of individuals. Socio-cultural: communication is the production and reproduction of the social order. Critical: communication is the process in which all assumptions can be challenged. These proposed seven traditions of communication theory are then placed on two separate tables first to show how each traditions different interpretation of communication defines the tradition's vocabulary, communication problems, and commonplaces, and next to show what argumentation between the traditions would look like. Craig then outlines the specifics of each tradition. 
=== Conclusion === Craig concluded with an open invitation to explore how the differences in these theories might shed light on key issues, show where new traditions could be created, and engaging communication theory with communication problems through metadiscourse. Craig further proposes several future traditions that could possibly be fit into the metamodel. A feminist tradition where communication is theorized as "connectedness to others", an aesthetic tradition theorizing communication as "embodied performance", an economic tradition theorizing communication as "exchange", and a spiritual tradition theorizing communication on a "nonmaterial or mystical plane of existence." == Response == === Myers, constitutive metamodel, and truth === In 2001 Myers, a computer-mediated communication scholar from Loyola University New Orleans, criticizes Craig's ideas in "A Pox on All Compromises: A reply to Craig (1999)." Myers makes two main arguments against Craig's article. Myers argues that Craig misrepresents the metamodel, and that the lack of any critical truth within Craigs construction is problematic for the field of communication theory. The metamodel is misrepresented by unjustly arguing that there is a separation between first and second level constitutive models while hiding the paradox within this statement, and that it privileges the constitutive model rather than another theoretical conception. Next Myers argues that Craig fails to draw any way to discern truth within the theories. Using a case study regarding the rise and fall of technological determinism among computer-mediated communication scholars, Myers argues that a metamodel needs to provide some mechanism that will "reduce misrepresentation and mistake" in evaluating theory. Myers frames Craig's ideas of collective discourse without an evaluative criteria of what is good theory and bad theory as "a Mad Hatter's tea party" which will "allow all to participate in this party of discourse" but will not be able to "inform any of the participants when it is time to leave." ==== Craig's response to Myers ==== Craig responded, in "an almost Jamesian reply", that Myers criticisms were not founded in actual inconsistencies within Craigs argument. Rather they were founded in the difference between Myers and Craig's "respective notions of truth and the proper role of empirical truth as a criterion for adjudicating among theories." In regard to Myers first claim that the separation between first level theories and second level metatheory is paradoxical and therefore an inaccurate or misguided distinction, Craig admits that there is a paradox inherent within a separation between first order theories and metatheory but "slippage between logical levels is an inherent feature (or bug) of communication, and we should not forget that theory is, among other things, communication. " Craig cites Gregory Bateson as pointing out that while the theory of logical types forbids the mixing of different "levels" to avoid paradox, "practical communication necessarily does exactly that." Actual communication is fraught with paradox, and while a logicians ideal would have us try and resolve these paradoxes, in actual practice we don't because there is no way to do so. In actually occurring communication people employ different means of dealing with this paradox, but resolving the paradox is not a possible solution. 
Craig argues that Myers has been unable to prove any inconsistency or misrepresentation when it came to using the constitutive model for his metamodel. Rather than trying to subvert every other theory to a constitutional model, Craig used the constitutive model not for some theory of truth or logical necessity, but because the constitutive model pragmatically will accomplish the goal of the project, that of opening up a space from which competing theories of communication can interact. With this the constitutional model will be able to maintain a theoretical cosmopolitanism. On the second argument that the metamodel lacks any empirical truth criteria, Craig argues that not only did Myers miss the point of the metamodel by claiming it should evaluate the truth of theories but that Myers own case study fails to back up his point. The metamodel itself does not distinguish the falseness of other models. However, contrary to Myers claim, the metamodel does allow theorists engaged in discussion to judge the validity of theories "on the basis of empirical evidence in ordinary reasonable ways." What the metamodel does deny is a universally established absolute truth in the field of communication theory. Craig points out that Myers was correct in that the metamodel is ill-equipped to judge theories as valid or invalid, it also doesn't do a good job of closing "the Antarctic ozone hole or solve other problems for which it was not designed." The case study that Myers presents is the debate about technological determinism in the realm of Computer Mediated Communication. Craig points out that this debate occurred between social scientific researchers. This type of research has a shared commitment to empirical research methods. So in spite of already possessing a shared truth criteria, these researchers failed to prevent errors Myers hopes would be avoided by holding onto a form of absolute truth. This case study would be a good critique of empirical truth but "how it supports a critique of the constitutive metamodel is less than apparent." By relying upon this case study Myers sabotages his argument for establishing an absolute truth criteria, demonstrating that "we would gain little by holding on to such a criterion." === Russill, pragmatism as an eighth tradition === After this exchange between Myers and Craig, there was no real disciplinary discussion of the metamodel besides textbooks which used the metamodel as a framework for introducing the field. Then in 2004 in an unpublished dissertation, which was mentioned in a footnote in his 2005 "The road not Taken: William James's Radical Empiricism and Communication Theory," Russill proposed the possibly of pragmatism as an eighth tradition of communication studies. This was attempted by using "Craig's rules" for the requirements of a tradition in communication theory which Russill formulates as "a problem formulation..., an initial vocabulary..., and arguments for the plausibility of this viewpoint in relation to prevailing traditions of theory." Russill did not write his dissertation with the goal of constructing a tradition of communication theory, rather he was attempting to "resuscitate and reconstruct Dewey's theory of the public as a pragmatist theory of democratic communication." To accomplish this goal Russill places Dewey in conversation with a variety of theorists including William James, John Locke, James Carey, Michel Foucault, Jürgen Habermas, and Walter Lippmann among others. 
Russill makes the argument that the pragmatist tradition "conceptualizes communication in response to the problem of incommensurability." Incommensurability being how a pluralistic society can engage in cooperation when there is an absence "of common, absolute standards for resolving differences." Russill briefly attempted to construct a pragmatist tradition of communication only to establish Dewey's theory of the public within that tradition. To do this he outlines pragmatism as a tradition that identifies the problem formulation as "incommensurability", and the vocabulary as "democracy, publics, power, criticism, response-ability, triple contingency." ==== Craig's response to Russill ==== Craig responds to this in "Pragmatism in the Field of Communication Theory" and mentions that while Russill "does not entirely follow 'Craig's Rules'" for a new tradition of communication theory, Russill "does define a pragmatist tradition in terms of a distinct way of framing the problem of communication and articulates premises that make the tradition theoretically and practically plausible." Craig points out that Russill is not the first communication theorists who writes on pragmatism, however he is the first to use the constitutive metamodel to define it as a tradition of communication. This conception of pragmatism as an eighth tradition of communication studies allows a new space for theories, which Craig identified as either ambiguously placed or neglected, to "immediately snap into focus as contributors to a distinct [pragmatic] tradition." To fully outline a new tradition of communication theory, Russill would have had to fully incorporate that tradition within the dialogical-dialectical matrix. Russill failed to fully consider the full range of criticism which would occur between the Pragmatist tradition and the other traditions of communication. Craig uses the dialogical-dialectical matrix to outline how pragmatism could be incorporated into the metamodel. == See also == Meta-ethics Metaphilosophy == Notes == == References == Anderson, John Arthur (1996). Communication Theory: Epistemological Foundations. Guilford Press. ISBN 978-1-57230-083-5. Retrieved February 2, 2011. Anderson, James A.; Baym, Geoffrey (December 2004). "Philosophies and Philosophic Issues in Communication, 1995-2004". Journal of Communication. 55 (4): 437–448. doi:10.1111/j.1460-2466.2004.tb02647.x. Craig, Robert T. (May 1999). "Communication Theory as a Field" (PDF). Communication Theory. 9 (2): 119–161. doi:10.1111/j.1468-2885.1999.tb00355.x. Retrieved January 8, 2011. Craig, Robert T. (May 2001). "Minding My Metamodel, Mending Myers". Communication Theory. 11 (2): 231–240. doi:10.1111/j.1468-2885.2001.tb00241.x. Craig, Robert T. (May 2007). "Pragmatism in the Field of Communication Theory". Communication Theory. 2007 (17): 125–145. doi:10.1111/j.1468-2885.2007.00292.x. Craig, Robert (2006). "A Path Through the Methodological Divides" (PDF). KEIO Communication Review. 28: 9–17. Retrieved January 8, 2011. Craig, Robert T. (2009a). "Reflection on "Communication Theory as a Field" (PDF). Revue internationale de communication sociale et publique (2): 7–12. doi:10.4000/communiquer.346. Retrieved January 29, 2011. Craig, Robert T. (2009b). "La communication en tant que champ d'études" (PDF). Revue internationale de communication sociale et publique. 1: 1–42. Retrieved January 8, 2011. Craig, Robert; Muller, Heidi, eds. (April 2007). Theorizing Communication: Readings Across the Traditions. SAGE Publications. 
ISBN 978-1-4129-5237-8. Retrieved January 29, 2011. D'Angelo, Paul (December 2002). "News Framing as a Multiparadigmatic Research Program:A Response to Entman". Journal of Communication. 52 (4): 870–888. doi:10.1111/j.1460-2466.2002.tb02578.x. S2CID 146693414. Donsback, Wolfgang (September 2006). "The Identity of Communication Research" (PDF). Journal of Communication. 54 (4): 589–615. doi:10.1111/j.1460-2466.2006.00294.x. Archived from the original (PDF) on July 20, 2011. Retrieved January 28, 2011. Griffin, Emory A. (2006). A First Look at Communication Theory (6 ed.). McGraw-Hill. ISBN 9780073010182. Retrieved January 29, 2011. Jimenez, Leonarda; Guillem, Susana (August 2009). "Does Communication Studies Have an Identity? Setting the Bases for Contemporary Research". Catalan Journal of Communication and Cultural Studies. 1 (1): 15–27. doi:10.1386/cjcs.1.1.15_1. Lindlof, Thomas R.; Taylor, Bryan C. (2002). Qualitative Communication Research Methods (2 ed.). Sage Publications Ltd. ISBN 9780761924944. Retrieved January 28, 2011. Littlejohn, Stephen; Foss, Karen (2008). Theories of Human Communication (PDF) (9 ed.). Thomson and Wadsworth. Archived from the original (PDF) on December 14, 2010. Retrieved January 23, 2011. Miller, Katherine (2005). Communication Theories:Perspectives, Processes, and Contexts (2 ed.). McGraw-Hill. ISBN 9780072937947. Retrieved January 29, 2011. Myers, David (May 2001). "A Pox on All Compromises: Reply to Craig(1999)". Communication Theory. 11 (2): 218–230. doi:10.1111/j.1468-2885.2001.tb00240.x. Penman, Robyn (2000). Reconstructing Communicating: looking to a Future. Lawrence Erlbaum Associates, Inc. ISBN 9781410605832. Retrieved January 28, 2011. Russill, Chris (May 2004). Barton, Richard L.; Bettig, Ronald V.; Nichols, John S.; et al. (eds.). Toward a Pragmatist Theory of Communication (PhD thesis). Russill, Chris (2005). "The road not Taken: William James's Radical Empiricism and Communication". The Communication Review. 8 (3): 277–305. doi:10.1080/10714420500240474. ISSN 1547-7487. S2CID 143442291.
Wikipedia/Communication_theory_as_a_field
A nanonetwork or nanoscale network is a set of interconnected nanomachines (devices a few hundred nanometers or a few micrometers at most in size) which are able to perform only very simple tasks such as computing, data storing, sensing and actuation. Nanonetworks are expected to expand the capabilities of single nanomachines both in terms of complexity and range of operation by allowing them to coordinate, share and fuse information. Nanonetworks enable new applications of nanotechnology in the biomedical field, environmental research, military technology and industrial and consumer goods applications. Nanoscale communication is defined in IEEE P1906.1. == Communication approaches == Classical communication paradigms need to be revised for the nanoscale. The two main alternatives for communication in the nanoscale are based either on electromagnetic communication or on molecular communication. === Electromagnetic === This is defined as the transmission and reception of electromagnetic radiation from components based on novel nanomaterials. Recent advancements in carbon and molecular electronics have opened the door to a new generation of electronic nanoscale components such as nanobatteries, nanoscale energy harvesting systems, nano-memories, logical circuitry in the nanoscale and even nano-antennas. From a communication perspective, the unique properties observed in nanomaterials will decide on the specific bandwidths for emission of electromagnetic radiation, the time lag of the emission, or the magnitude of the emitted power for a given input energy, amongst others. For the time being, two main alternatives for electromagnetic communication in the nanoscale have been envisioned. First, it has been experimentally demonstrated that it is possible to receive and demodulate an electromagnetic wave by means of a nanoradio, i.e., an electromechanically resonating carbon nanotube which is able to decode an amplitude or frequency modulated wave. Second, graphene-based nano-antennas have been analyzed as potential electromagnetic radiators in the terahertz band. === Molecular === Molecular communication is defined as the transmission and reception of information by means of molecules. The different molecular communication techniques can be classified according to the type of molecule propagation into walkway-based, flow-based or diffusion-based communication. In walkway-based molecular communication, the molecules propagate through pre-defined pathways by using carrier substances, such as molecular motors. This type of molecular communication can also be achieved by using E. coli bacteria, which move by chemotaxis, as carriers. In flow-based molecular communication, the molecules propagate through diffusion in a fluidic medium whose flow and turbulence are guided and predictable. The hormonal communication through blood streams inside the human body is an example of this type of propagation. The flow-based propagation can also be realized by using carrier entities whose motion can be constrained on the average along specific paths, despite showing a random component. A good example of this case is given by pheromonal long range molecular communications. In diffusion-based molecular communication, the molecules propagate through spontaneous diffusion in a fluidic medium. In this case, the molecules can be subject solely to the laws of diffusion or can also be affected by non-predictable turbulence present in the fluidic medium.
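For the ideal (free) diffusion case, the channel response to an instantaneous release of molecules follows from the diffusion Green's function, and its long tail is what produces the temporal spreading and intersymbol interference discussed in the next paragraph. The sketch below (Python; the molecule count, distance, and diffusion coefficient are arbitrary illustrative assumptions) evaluates that response at a fixed transmitter–receiver distance:

```python
import math

def diffusion_impulse_response(q_molecules, distance_m, diff_coeff, t_s):
    """Concentration at distance_m from a point that instantaneously released
    q_molecules, after t_s seconds of ideal 3-D free diffusion:
        c(r, t) = Q / (4*pi*D*t)^(3/2) * exp(-r^2 / (4*D*t))
    """
    if t_s <= 0:
        return 0.0
    spread = 4.0 * math.pi * diff_coeff * t_s
    return q_molecules / spread ** 1.5 * math.exp(-distance_m ** 2 / (4.0 * diff_coeff * t_s))

# Illustrative numbers only: 10^4 molecules, 10 micrometres, and a diffusion
# coefficient typical of a small molecule in water (~1e-9 m^2/s).
Q, r, D = 1e4, 10e-6, 1e-9
t_peak = r ** 2 / (6.0 * D)   # time at which the concentration at r peaks
for t in (0.5 * t_peak, t_peak, 5 * t_peak, 50 * t_peak):
    print(f"t = {t:8.3f} s   c = {diffusion_impulse_response(Q, r, D, t):.3e} molecules/m^3")
```

The slow decay after the peak is the temporal spreading: molecules from one emission keep arriving long after the next symbol has been transmitted.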
Pheromonal communication, when pheromones are released into a fluidic medium, such as air or water, is an example of diffusion-based architecture. Other examples of this kind of transport include calcium signaling among cells, as well as quorum sensing among bacteria. Based on the macroscopic theory of ideal (free) diffusion, the impulse response of a unicast molecular communication channel was reported in a paper that identified that the ideal diffusion-based molecular communication channel experiences temporal spreading. Such temporal spreading has a deep impact on the performance of the system, for example in creating intersymbol interference (ISI) at the receiving nanomachine. In order to detect the concentration-encoded molecular signal, two detection methods named sampling-based detection (SD) and energy-based detection (ED) have been proposed. While the SD approach is based on the concentration amplitude of only one sample taken at a suitable time instant during the symbol duration, the ED approach is based on the total accumulated number of molecules received during the entire symbol duration. In order to reduce the impact of ISI, a controlled pulse-width based molecular communication scheme has been analysed. Further work showed that it is possible to realize multilevel amplitude modulation based on ideal diffusion. Comprehensive studies of pulse-based binary and sinusoid-based concentration-encoded molecular communication systems have also been carried out. == See also == IEEE P1906.1 Recommended Practice for Nanoscale and Molecular Communication Framework == References == == External links == IEEE Communications Society Best Readings in Nanoscale Communication Networks Stack Exchange Page for Q&A on NanoNetworking Nanoscale Networking in Industry Instructions to join P1906.1 Working Group MONACO Project – Broadband Wireless Networking Laboratory at Georgia Tech, Atlanta, Georgia, US GRANET Project – Broadband Wireless Networking Laboratory at Georgia Tech, Atlanta, Georgia, US NaNoNetworking Center in Catalunya at Universitat Politècnica de Catalunya, Barcelona, Catalunya, Spain Molecular communication research at York University, Toronto, Canada Research on Molecular Communication at University of Ottawa, Ottawa, Canada Intelligence Networking Lab. at Yonsei University, Korea Wiki on Molecular Communication at University of California, Irvine, California, US Home page of the IEEE Communications Society Emerging Technical Subcommittee on Nanoscale, Molecular, and Quantum Networking. P1906.1 – Recommended Practice for Nanoscale and Molecular Communication Framework IEEE 802.15 Terahertz Interest Group Nano Communication Networks (Elsevier) Journal A simulation tool for nanoscale biological networks – Elsevier presentation NanoNetworking Research Group (NRG) at Boğaziçi University, Istanbul, Turkey
Wikipedia/Nanoscale_networking
A network processor is an integrated circuit which has a feature set specifically targeted at the networking application domain. Network processors are typically software programmable devices and would have generic characteristics similar to general purpose central processing units that are commonly used in many different types of equipment and products. == History of development == In modern telecommunications networks, information (voice, video, data) is transferred as packet data (termed packet switching) which is in contrast to older telecommunications networks that carried information as analog signals such as in the public switched telephone network (PSTN) or analog TV/Radio networks. The processing of these packets has resulted in the creation of integrated circuits (IC) that are optimised to deal with this form of packet data. Network processors have specific features or architectures that are provided to enhance and optimise packet processing within these networks. Network processors have evolved into ICs with specific functions. This evolution has resulted in more complex and more flexible ICs being created. The newer circuits are programmable and thus allow a single hardware IC design to undertake a number of different functions, where the appropriate software is installed. Network processors are used in the manufacture of many different types of network equipment such as: Routers, software routers and switches (Inter-network processors) Firewalls Session border controllers Intrusion detection devices Intrusion prevention devices Network monitoring systems Network security (secure cryptoprocessors) === Reconfigurable Match-Tables === Reconfigurable Match-Tables were introduced in 2013 to allow switches to operate at high speeds while maintaining flexibility in the network protocols they handle and in the processing applied to packets. P4 is used to program the chips. The company Barefoot Networks was based around these processors and was later purchased by Intel in 2019. An RMT pipeline relies on three main stages: the programmable parser, the Match-Action tables, and the programmable deparser. The parser reads the packet in chunks and processes these chunks to find out which protocols are used in the packet (Ethernet, VLAN, IPv4...) and extracts certain fields from the packet into the Packet Header Vector (PHV). Certain fields in the PHV may be reserved for special uses, such as which headers are present or the total packet length. The protocols are typically programmable, and so are the fields to extract. The Match-Action tables are a series of units that read an input PHV and match certain fields in it using a crossbar and CAM memory; the result is a wide instruction that operates on one or more fields of the PHV, together with data to support this instruction. The output PHV is then sent to the next MA stage or to the deparser. The deparser takes in the PHV as well as the original packet and its metadata (to fill in missing bits that were not extracted into the PHV) and then outputs the modified packet as chunks. Like the parser, it is typically programmable and may reuse some of the configuration files. FlexNIC attempts to apply this model to Network Interface Controllers allowing servers to send and receive packets at high speeds while maintaining protocol flexibility and without increasing the CPU overhead.
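As a rough illustration of the parser / match-action / deparser split described above, the sketch below (Python; the field names, the single exact-match table, and the decision to model only an Ethernet-then-IPv4 parse graph are simplifying assumptions, not the behaviour of any real RMT chip or of P4) parses a packet into a packet header vector, applies one match-action stage, and re-emits the packet:

```python
import struct

def parse(packet: bytes) -> dict:
    """Toy programmable parser: fills a Packet Header Vector (PHV)
    from an Ethernet header and, if present, an IPv4 header."""
    phv = {"pkt_len": len(packet)}
    eth_type = struct.unpack("!H", packet[12:14])[0]
    phv["eth_dst"], phv["eth_src"], phv["eth_type"] = packet[0:6], packet[6:12], eth_type
    if eth_type == 0x0800 and len(packet) >= 34:   # IPv4 header follows Ethernet
        phv["ipv4_ttl"] = packet[22]
        phv["ipv4_dst"] = packet[30:34]
    return phv

# One exact-match table: key = destination IPv4 address; the action (set the
# egress port, decrement TTL) stands in for the "wide instruction" of an MA stage.
MATCH_TABLE = {bytes([10, 0, 0, 1]): {"egress_port": 3}}

def match_action(phv: dict) -> dict:
    entry = MATCH_TABLE.get(phv.get("ipv4_dst"))
    if entry:
        phv["egress_port"] = entry["egress_port"]
        phv["ipv4_ttl"] = max(phv["ipv4_ttl"] - 1, 0)
    else:
        phv["egress_port"] = None                   # no match: drop or punt
    return phv

def deparse(phv: dict, original: bytes) -> bytes:
    """Toy deparser: writes modified PHV fields back into the packet bytes."""
    out = bytearray(original)
    if "ipv4_ttl" in phv:
        out[22] = phv["ipv4_ttl"]
    return bytes(out)

# Minimal Ethernet+IPv4 frame (checksums ignored for brevity).
frame = (bytes(6) + bytes(6) + b"\x08\x00" + bytes([0x45, 0]) + bytes(6)
         + bytes([64, 6, 0, 0]) + bytes(4) + bytes([10, 0, 0, 1]))
phv = match_action(parse(frame))
print(phv["egress_port"], deparse(phv, frame)[22])  # -> 3 63
```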
== Generic functions == In the generic role as a packet processor, a number of optimised features or functions are typically present in a network processor, which include: Pattern matching – the ability to find specific patterns of bits or bytes within packets in a packet stream. Key lookup – the ability to quickly undertake a database lookup using a key (typically an address in a packet) to find a result, typically routing information. Computation Data bitfield manipulation – the ability to change certain data fields contained in the packet as it is being processed. Queue management – as packets are received, processed and scheduled to be sent onwards, they are stored in queues. Control processing – the micro operations of processing a packet are controlled at a macro level which involves communication and orchestration with other nodes in a system. Quick allocation and re-circulation of packet buffers. == Architectural paradigms == In order to deal with high data-rates, several architectural paradigms are commonly used: Pipeline of processors - each stage of the pipeline consisting of a processor performing one of the functions listed above. Parallel processing with multiple processors, often including multithreading. Specialized microcoded engines to more efficiently accomplish the tasks at hand. With the advent of multicore architectures, network processors can be used for higher layer (L4-L7) processing. Additionally, traffic management, which is a critical element in L2-L3 network processing and used to be executed by a variety of co-processors, has become an integral part of the network processor architecture, and a substantial part of its silicon area ("real estate") is devoted to the integrated traffic manager. Modern network processors are also equipped with low-latency high-throughput on-chip interconnection networks optimized for the exchange of small messages among cores (few data words). Such networks can be used as an alternative facility for the efficient inter-core communication aside of the standard use of shared memory. == Applications == Using the generic function of the network processor, a software program implements an application that the network processor executes, resulting in the piece of physical equipment performing a task or providing a service. Some of the applications types typically implemented as software running on network processors are: Packet or frame discrimination and forwarding, that is, the basic operation of a router or switch. Quality of service (QoS) enforcement – identifying different types or classes of packets and providing preferential treatment for some types or classes of packet at the expense of other types or classes of packet. Access Control functions – determining whether a specific packet or stream of packets should be allowed to traverse the piece of network equipment. Encryption of data streams – built in hardware-based encryption engines allow individual data flows to be encrypted by the processor. TCP offload processing == See also == Content processor Multi-core processor Knowledge-based processor Active networking Computer engineering Internet List of defunct network processor companies Network Processing Forum Queueing theory Network on a chip Network interface controller == References ==
Wikipedia/Network_processing
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, and causal inference. == Methodology == === Optimization problems === In a genetic algorithm, a population of candidate solutions (called individuals, creatures, organisms, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. A typical genetic algorithm requires: a genetic representation of the solution domain, a fitness function to evaluate the solution domain. A standard representation of each candidate solution is as an array of bits (also called bit set or bit string). Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming. Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators. ==== Initialization ==== The population size depends on the nature of the problem, but typically contains hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found or the distribution of the sampling probability tuned to focus in those areas of greater interest. ==== Selection ==== During each successive generation, a portion of the existing population is selected to reproduce for a new generation. 
Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming. The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem-dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise. In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used. ==== Genetic operators ==== The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination), and mutation. For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods that are based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes. These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children. Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search. Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms. It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem's complexity class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). 
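Returning to the knapsack example above, a possible fitness function for a bit-string chromosome can be sketched as follows; the item values, weights and capacity are invented for illustration, and the convention of scoring invalid (overweight) packings as 0 follows the description in the text.
<syntaxhighlight lang="python">
# Hypothetical item data: (value, weight) pairs and a knapsack capacity.
ITEMS = [(10, 5), (40, 4), (30, 6), (50, 3), (25, 7)]
CAPACITY = 15

def knapsack_fitness(bits):
    # bit i set to 1 means item i is packed into the knapsack
    value = sum(v for (v, w), b in zip(ITEMS, bits) if b)
    weight = sum(w for (v, w), b in zip(ITEMS, bits) if b)
    # invalid representations (over capacity) get fitness 0
    return value if weight <= CAPACITY else 0

print(knapsack_fitness([1, 1, 0, 1, 0]))   # value 100, weight 12 -> 100
print(knapsack_fitness([1, 1, 1, 1, 1]))   # weight 25 exceeds 15  -> 0
</syntaxhighlight>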
A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required. ==== Heuristics ==== In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution. ==== Termination ==== This generational process is repeated until a termination condition has been reached. Common terminating conditions are: A solution is found that satisfies minimum criteria Fixed number of generations reached Allocated budget (computation time/money) reached The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results Manual inspection Combinations of the above == The building block hypothesis == Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of: A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness. A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic. Goldberg describes the heuristic as follows: "Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings. "Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks." Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms. 
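One way the termination criteria listed above might be combined in practice is sketched below; the function signature and its thresholds are illustrative assumptions rather than a standard interface, and best_history is assumed to record the best fitness seen in each generation.
<syntaxhighlight lang="python">
def should_terminate(generation, best_history, max_generations=200,
                     target_fitness=None, plateau_window=25):
    # fixed budget of generations reached
    if generation >= max_generations:
        return True
    # a solution satisfying the minimum criteria has been found
    if target_fitness is not None and best_history and best_history[-1] >= target_fitness:
        return True
    # the best fitness has plateaued over the last plateau_window generations
    if len(best_history) > plateau_window and best_history[-1] <= best_history[-plateau_window]:
        return True
    return False
</syntaxhighlight>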
== Limitations == The practical use of a genetic algorithm has limitations, especially as compared to alternative optimization algorithms: Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. It is apparent that amalgamation of approximate models may be one of the most promising approaches to convincingly use GA to solve complex real life problems. Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or a plane . In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts. The "better" solution is only in comparison to other solutions. As a result, the stopping criterion is not clear in every problem. In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein, any group of individuals of sufficient similarity (niche radius) have a penalty added, which will reduce the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique would be to simply replace part of the population with randomly generated individuals, when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. 
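One common concrete reading of the "niche penalty" mentioned above is fitness sharing, in which an individual's fitness is divided by the number of sufficiently similar population members. The following sketch assumes bit-string chromosomes and uses Hamming distance as the similarity measure; the radius value and function names are arbitrary.
<syntaxhighlight lang="python">
def hamming(a, b):
    # number of positions in which two equal-length bit strings differ
    return sum(x != y for x, y in zip(a, b))

def shared_fitness(population, raw_fitness, niche_radius=3):
    # divide each raw fitness by the size of its "niche": the set of
    # individuals within niche_radius, which always includes itself
    adjusted = []
    for ind in population:
        niche_count = sum(1 for other in population
                          if hamming(ind, other) <= niche_radius)
        adjusted.append(raw_fitness(ind) / niche_count)
    return adjusted
</syntaxhighlight>
Crowded regions of the search space are thereby down-weighted, which reduces their representation in subsequent generations, as described above.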
In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation. Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems. GAs cannot effectively solve problems in which the only fitness measure is a binary pass/fail outcome (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure. For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g.: ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches. == Variants == === Chromosome representation === The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered but is really a misnomer because it does not really represent the building block theory that was proposed by John Henry Holland in the 1970s. This theory is not without support though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains. When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily affected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution. 
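The standard conversion between ordinary binary and reflected (Gray) code is short enough to show directly; the sketch below also illustrates why consecutive integers differ in only a single bit under Gray coding, which is what mitigates the Hamming-wall effect described above.
<syntaxhighlight lang="python">
def binary_to_gray(n):
    # reflected binary (Gray) code of a non-negative integer
    return n ^ (n >> 1)

def gray_to_binary(g):
    # invert the transform by XOR-ing progressively shifted copies
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# consecutive integers differ in exactly one bit of their Gray code
for i in range(8):
    print(i, format(binary_to_gray(i), "03b"))
</syntaxhighlight>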
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained as the set of real values in a finite population of chromosomes as forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation. An expansion of the Genetic Algorithm accessible problem domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogenously encoded genes into one chromosome. This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes. Another important expansion of the Genetic Algorithm (GA) accessible solution space was driven by the need to make representations amenable to variable levels of knowledge about the solution states. Variable-length representations were inspired by the observation that, in nature, evolution tends to progress from simpler organisms to more complex ones—suggesting an underlying rationale for embracing flexible structures. A second, more pragmatic motivation was that most real-world engineering and knowledge-based problems do not naturally conform to rigid knowledge structures. These early innovations in variable-length representations laid essential groundwork for the development of Genetic programming, which further extended the classical GA paradigm. Such representations required enhancements to the simplistic genetic operators used for fixed-length chromosomes, enabling the emergence of more sophisticated and adaptive GA models. === Elitism === A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next. === Parallel implementations === Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction. Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function. === Adaptive GAs === Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) is another significant and promising variant of genetic algorithms. 
The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have analyzed GA convergence analytically. Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust the pc and pm in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. There are more examples of AGA variants: Successive zooming method is an early example of improving convergence. In CAGA (clustering-based adaptive genetic algorithm), through the use of clustering analysis to judge the optimization states of the population, the adjustment of pc and pm depends on these optimization states. Recent approaches use more abstract variables for deciding pc and pm. Examples are dominance & co-dominance principles and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search space anisotropicity. It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations to find the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA while overcoming the lack of robustness of hill climbing. This means that the rules of genetic variation may have a different meaning in the natural case. For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA adding a number of steps from paternal DNA and so on. This is like adding vectors that more probably may follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency. A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination. A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA, GEMGA and LLGA. == Problem domains == Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems. As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex fitness landscape as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. 
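Returning to the adaptive adjustment of the mutation probability pm described above, one simple rule of this kind (a sketch only, not the exact scheme from the literature cited there) lowers pm for solutions that are already fitter than the population average and raises it for below-average solutions, which helps preserve good individuals while keeping the population diverse.
<syntaxhighlight lang="python">
def adaptive_mutation_prob(f, f_avg, f_max, pm_low=0.001, pm_high=0.05):
    # f: fitness of the individual; f_avg, f_max: population statistics
    if f_max == f_avg:
        return pm_high          # population has converged, mutate aggressively
    if f >= f_avg:
        # scale pm down linearly as f approaches the current best fitness
        return pm_low + (pm_high - pm_low) * (f_max - f) / (f_max - f_avg)
    return pm_high              # below-average solutions get the high rate

print(adaptive_mutation_prob(12.0, 10.0, 20.0))   # ~0.040
print(adaptive_mutation_prob(20.0, 10.0, 20.0))   # 0.001 (best individual)
</syntaxhighlight>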
Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain). Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, walking methods for computer figures, optimal design of aerodynamic bodies in complex flowfields In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task: [I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate. [...] I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs. == History == In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution started as early as in 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solution to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998). Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game, artificial evolution only became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. 
Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania. === Commercial products === In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes. In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version. Since the 1990s, MATLAB has built in three derivative-free optimization heuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search). == Related techniques == === Parent fields === Genetic algorithms are a sub-field: Evolutionary algorithms Evolutionary computing Metaheuristics Stochastic optimization Optimization === Related fields === ==== Evolutionary algorithms ==== Evolutionary algorithms is a sub-field of evolutionary computing. Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain. They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Evolutionary programming (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. They use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents. Estimation of Distribution Algorithm (EDA) substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled or generated from guided-crossover. Genetic programming (GP) is a related technique popularized by John Koza in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list structures typical of genetic algorithms. There are many variants of Genetic Programming, including Cartesian genetic programming, Gene expression programming, grammatical evolution, Linear genetic programming, Multi expression programming etc. Grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, like in classical GAs, to groups or subset of items. The idea behind this GA evolution proposed by Emanuel Falkenauer is that solving some complex problems, a.k.a. clustering or partitioning problems where a set of items must be split into disjoint group of items in an optimal way, would better be achieved by making characteristics of the groups of items equivalent to genes. 
These kind of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth, is arguably the best technique to date. Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference. ==== Swarm intelligence ==== Swarm intelligence is a sub-field of evolutionary computing. Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas. Although considered an Estimation of distribution algorithm, Particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems the PSO is often more computationally efficient than the GAs, especially in unconstrained problems with continuous variables. ==== Other evolutionary computing algorithms ==== Evolutionary computation is a sub-field of the metaheuristic methods. Memetic algorithm (MA), often called hybrid genetic algorithm among others, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms. Bacteriologic algorithms (BA) inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, there is not one individual that fits the whole environment. So, one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining. Cultural algorithm (CA) consists of the population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space. Differential evolution (DE) inspired by migration of superorganisms. Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information. 
Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder (average information) of the Gaussian simultaneously keeping the mean fitness constant. ==== Other metaheuristic methods ==== Metaheuristic methods broadly fall within stochastic optimisation methods. Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA algorithm by starting with a relatively high rate of mutation and decreasing it over time along a given schedule. Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space. Extremal optimization (EO) Unlike GAs, which work with a population of candidate solutions, EO evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions. ==== Other stochastic optimisation methods ==== The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration. Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics. == See also == Genetic programming List of genetic algorithm applications Genetic algorithms in signal processing (a.k.a. 
particle filters) Propagation of schema Universal Darwinism Metaheuristics Learning classifier system Rule-based machine learning == References == == Bibliography == == External links == === Resources === [1] Provides a list of resources in the genetic algorithms field An Overview of the History and Flavors of Evolutionary Algorithms === Tutorials === Genetic Algorithms - Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand An excellent introduction to GA by John Holland and with an application to the Prisoner's Dilemma An online interactive Genetic Algorithm tutorial for a reader to practise or learn how a GA works: Learn step by step or watch global convergence in batch, change the population size, crossover rates/bounds, mutation rates/bounds and selection mechanisms, and add constraints. A Genetic Algorithm Tutorial by Darrell Whitley Computer Science Department Colorado State University An excellent tutorial with much theory "Essentials of Metaheuristics", 2009 (225 p). Free open text by Sean Luke. Global Optimization Algorithms – Theory and Application Archived 11 September 2008 at the Wayback Machine Genetic Algorithms in Python Tutorial with the intuition behind GAs and Python implementation. Genetic Algorithms evolves to solve the prisoner's dilemma. Written by Robert Axelrod.
Wikipedia/Genetic_Algorithm
Active Network, LLC, is an American multinational corporation headquartered in Dallas, Texas, that provides software as a service for activity and participant management. ACTIVE's management software supports a range of clients including: races, nonprofits, outdoor activities, camps, sports, schools, and universities. On October 18, 2022, the Consumer Financial Protection Bureau (CFPB) sued the online event registration company ACTIVE Network for tricking people trying to sign up for fundraising road races and other events, into enrolling into its annual subscription discount club, Active Advantage. The CFPB’s lawsuit describes how ACTIVE automatically and unlawfully enrolled families into its discount club by using digital duplicity. Consumers, many of whom just thought they were registering for a community race or event, ended up being enrolled into a costly membership club. The CFPB is suing to require ACTIVE to change this unlawful enrollment practice, reimburse consumers, and pay a penalty. == History == ACTIVE was founded in 1998 by Jim Woodman under the name of Active USA as an information source for recreational athletes looking for regional sports information. In December 1999, ActiveUSA and Racegate merged in a deal that brought TicketMaster City Search as a major investor and the combined entity became headquartered in La Jolla, Calif. ACTIVE underwent a round of mergers and acquisitions in 1999 and 2000 with companies including: LeagueLink, Inc, Eteamz and Sierra Digital (RecWare). On May 25, 2011, ACTIVE went public at $15 per share under the ticker symbol “ACTV” and completed its initial public offering two days later. On September 30, 2013, ACTIVE announced it was to be acquired by Vista Equity Partners for $1.05 billion, completing the sale on November 15 of the same year, at which time the company's president, Darko Dejanovic, was named CEO. Following its acquisition, ACTIVE moved its corporate headquarters from San Diego to Dallas. In March 2015, ACTIVE launched ACTIVEkids.com, a site focused on kid specific events and activities. Following the launch of ACTIVEkids.com, ACTIVE announced a new data insights platform, Activity Cloud, to provide business intelligence to event organizers in May 2015. In September 2017, Global Payments Inc. completed the acquisition of ACTIVE's communities and sports divisions. == Products == === Current === ACTIVE.com – Online race and activity registration portal ACTIVEkids.com – Online kids events and activity registration portal IPICO Sports – Race timing chip systems (Purchased February 2015) Activity Cloud – Data insights for event organizers (Launched May 2015) ActiveWorks Endurance – Online race management and marketing ActiveWorks Camps & Class Manager – Online camp management software ActiveWorks Hy-Tek – Online swim team and meet management software ActiveNet – Facility and program management software LeagueOne – League management software TeamPages – Sports websites for teams and leagues JumpForward – Collegiate athlete recruiting management (Purchased May 2016) Maximum Solutions – Recreation management (Purchased January 2017) Thriva – Online registration software and marketing RegCenter – Online registration management === Former === ActiveWorks Outdoors – Online campground, hunting and fishing, marina, venue, lodging and golf management (Vista Equity retained ownership during the Global Payments Inc. 
acquisition of ACTIVE) FellowshipOne – Church management software (Sold to Ministry Brands in March 2016) ServiceU – Event, giving and ticketing management (Sold to Ministry Brands in March 2016) ActiveGolf – Software for online tee time booking and golf operations. (Sold to GolfNow in October 2014) == Acquisitions == === ReserveAmerica === ACTIVE purchased ReserveAmerica from IAC in 2009 to extend their offerings to campground booking services. ACTIVE traded 3.5 million shares of convertible preferred stock in ACTIVE as part of the deal, giving IAC a nine percent holding in the company. ReserveAmerica provides reservations for campgrounds at state and national parks as well as reservation services for the Army Corps of Engineers, federal campgrounds and privately operated campgrounds. === StarCite === In 2012, ACTIVE purchased StarCite, a strategic meetings management technology, and created the Business Solutions group. In February 2014, the division was spun off and merged with Lanyon, a hospitality travel company, into Lanyon. === IPICO Sports === Philip Lockwood founded Mercury Sports Group (MSG) in 2003 with the goal to revolutionize race timing. At the same time, IPICO Inc. was working on their RFID technology in South Africa while looking for ways to expand into the global market. After working together to build technology for sports applications, MSG was acquired by IPICO Inc. to form IPICO Sports in 2008 following the shipment of the first Elite Reader kit in March 2007. In 2009, IPICO won Frost & Sullivan's North American RFID Sports Technology Leadership of the Year Award. In 2015, IPICO Sports was acquired by event management and registration company ACTIVE Network, LLC. IPICO race timing chips have been used to track a variety of sporting events, including the Main Cross Country Festival of Champions. Additionally, the BolderBOULDER worked with IPICO Sports and End Result starting in 2007, tracking times for over 53,000 participants in the 2009 race. IPICO also teamed up with End Result on the 2008 Ironman World Championship, among other races. In addition to running races and triathlons, other events utilize IPICO's sport timing. One example is the American Birkebeiner cross country skiing event. ACTIVE acquired IPICO Sports in February 2015. === JumpForward === In May 2016, ACTIVE acquired JumpForward, a provider for NCAA recruiting, compliance, and business office management. === Maximum Solutions === In January 2017, ACTIVE acquired recreation management software provider Maximum Solutions. == Hacking incident == In August 2016, a computer breach of Active Network's system that processes online hunting and fishing license sales was announced. Several million records containing personal information of persons with Oregon, Idaho and Washington state licenses were exposed. == Controversies == Active Network was charged by the Alameda County District Attorney in 2016 for violating California consumer protection laws. The complaint alleges Active Network enrolled consumers into a free trial of their product Active Advantage without disclosing sufficient information that it would become a paid subscription without cancellation. The matter was settled with Active Network agreeing to pay $2.7 million in civil penalties and setting aside $1 million in restitution payments for the approximately 100,000 California consumers who were enrolled between 2010 and 2013. == References == == External links == Official website
Wikipedia/Active_Network,_LLC
Non-standard positional numeral systems here designates numeral systems that may loosely be described as positional systems, but that do not entirely comply with the following description of standard positional systems: In a standard positional numeral system, the base b is a positive integer, and b different numerals are used to represent all non-negative integers. The standard set of numerals contains the b values 0, 1, 2, etc., up to b − 1, but the value is weighted according to the position of the digit in a number. The value of a digit string like pqrs in base b is given by the polynomial form p × b³ + q × b² + r × b + s. The numbers written in superscript represent the powers of the base used. For instance, in hexadecimal (b = 16), using the numerals A for 10, B for 11 etc., the digit string 7A3F means 7 × 16³ + 10 × 16² + 3 × 16 + 15, which written in our normal decimal notation is 31295. Upon introducing a radix point "." and a minus sign "−", real numbers can be represented up to arbitrary accuracy. This article summarizes facts on some non-standard positional numeral systems. In most cases, the polynomial form in the description of standard systems still applies. Some historical numeral systems may be described as non-standard positional numeral systems. E.g., the sexagesimal Babylonian notation and the Chinese rod numerals, which can be classified as standard systems of base 60 and 10, respectively, counting the space representing zero as a numeral, can also be classified as non-standard systems, more specifically, mixed-base systems with unary components, considering the primitive repeated glyphs making up the numerals. However, most of the non-standard systems listed below have never been intended for general use, but were devised by mathematicians or engineers for special academic or technical use. == Bijective numeration systems == A bijective numeral system with base b uses b different numerals to represent all non-negative integers. However, the numerals have values 1, 2, 3, etc. up to and including b, whereas zero is represented by an empty digit string. For example, it is possible to have decimal without a zero. === Base one (unary numeral system) === Unary is the bijective numeral system with base b = 1. In unary, one numeral is used to represent all positive integers. The value of the digit string pqrs given by the polynomial form can be simplified into p + q + r + s since bⁿ = 1 for all n. Non-standard features of this system include: The value of a digit does not depend on its position. Thus, one can easily argue that unary is not a positional system at all. Introducing a radix point in this system will not enable representation of non-integer values. The single numeral represents the value 1, not the value 0 = b − 1. The value 0 cannot be represented (or is implicitly represented by an empty digit string). == Signed-digit representation == In some systems, while the base is a positive integer, negative digits are allowed. Non-adjacent form is a particular system where the base is b = 2. In the balanced ternary system, the base is b = 3, and the numerals have the values −1, 0 and +1 (rather than 0, 1 and 2 as in the standard ternary system, or 1, 2 and 3 as in the bijective ternary system).
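As an illustration of the signed-digit idea, the following Python sketch converts integers to and from balanced ternary; the digit values −1, 0, +1 follow the description above, and the carry-based conversion is one standard way (among several) to compute the digits.
<syntaxhighlight lang="python">
def to_balanced_ternary(n):
    # digits of n, most significant first, each digit in {-1, 0, 1};
    # works for negative n too because Python's % is always non-negative
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # a remainder of 2 becomes -1 with a carry
            digits.append(-1)
            n += 1
        else:
            digits.append(r)
        n //= 3
    return digits[::-1]

def from_balanced_ternary(digits):
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

print(to_balanced_ternary(8))              # [1, 0, -1], i.e. 9 - 1 = 8
print(from_balanced_ternary([1, 0, -1]))   # 8
print(to_balanced_ternary(-2))             # [-1, 1], i.e. -3 + 1 = -2
</syntaxhighlight>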
== Gray code == The reflected binary code, also known as the Gray code, is closely related to binary numbers, but some bits are inverted, depending on the parity of the higher order bits. == Graphical and physical variants == Cistercian numerals are a decimal positional numeral system, but the positions are not aligned as in common decimal notation; instead, they are attached to the top-right, top-left, bottom-right and bottom-left of a vertical stem, respectively, and thus limited to four in number (so only integers from 0 to 9999 can be represented). The system has close similarities to standard positional numeral systems, but may also be compared to e.g. Greek numerals, where different sets of symbols (in fact, Greek letters) are used for the ones, tens, hundreds and thousands, likewise giving an upper limit on the numbers that can be represented. Similarly, in computers, e.g. the long integer format is a standard binary system (apart from the sign bit), but it has a limited number of positions, and the physical locations for the representations of the digits may not be aligned. In an analog odometer and in an abacus, the decimal digits are aligned but limited in number. == Bases that are not positive integers == A few positional systems have been suggested in which the base b is not a positive integer. === Negative base === Negative-base systems include negabinary, negaternary and negadecimal, with bases −2, −3, and −10 respectively; in base −b the number of different numerals used is b. Due to the properties of negative numbers raised to powers, all integers, positive and negative, can be represented without a sign. === Complex base === In a purely imaginary base bi system, where b is an integer larger than 1 and i the imaginary unit, the standard set of digits consists of the b² numbers from 0 to b² − 1. It can be generalized to other complex bases, giving rise to the complex-base systems. === Non-integer base === In non-integer bases, the number of different numerals used clearly cannot be b. Instead, the numerals 0 to ⌊b⌋ are used. For example, golden ratio base (phinary) uses the two different numerals 0 and 1. == Mixed bases == It is sometimes convenient to consider positional numeral systems where the weights associated with the positions do not form a geometric sequence 1, b, b², b³, etc., starting from the least significant position, as given in the polynomial form. Examples include: Measuring time often uses a mix of base 24 for hours (or base 12 on an analog clock), and base 60 for minutes and seconds, with each part often written base 10, as in 20:15:00 representing twenty hours and fifteen minutes. Similarly, giving an angle in degrees, minutes and seconds (sometimes with decimals), can be interpreted as a mixed-radix system. Non-decimal currencies have been common, e.g. in Commonwealth countries that before decimalization used pounds, shillings and pennies. The Mayan numeral system was base 20, but when applied to the calendar it was a mixed-radix system as one of its positions represented a multiplication by 18 rather than 20, in order to fit a 360-day calendar. The factorial number system is a mixed-radix system where the weights form a sequence where each weight is an integer multiple of the previous one, and the number of permitted digit values varies accordingly from position to position.
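A short Python sketch of the factorial number system just described, in which the digit with weight k! may range from 0 to k; the helper names are illustrative only.
<syntaxhighlight lang="python">
def to_factoradic(n):
    # digits of a non-negative integer, least significant first;
    # the digit at index i has weight (i+1)! and lies in 0..(i+1)
    digits = []
    radix = 2
    while n > 0:
        n, r = divmod(n, radix)
        digits.append(r)
        radix += 1
    return digits or [0]

def from_factoradic(digits):
    # digits least significant first, as produced above
    value, factorial = 0, 1
    for i, d in enumerate(digits, start=1):
        factorial *= i
        value += d * factorial
    return value

print(to_factoradic(7))                      # [1, 0, 1]: 1*1! + 0*2! + 1*3! = 7
print(from_factoradic(to_factoradic(463)))   # 463
</syntaxhighlight>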
Sequences where each weight is not an integer multiple of the previous weight may also be used, but then every integer may not have a unique representation. For example, Fibonacci coding uses the digits 0 and 1, weighted according to the Fibonacci sequence (1, 2, 3, 5, 8, ...); a unique representation of all non-negative integers may be ensured by forbidding consecutive 1s. Binary-coded decimal (BCD) are mixed base systems where bits (binary digits) are used to express decimal digits. E.g., in 1001 0011, each group of four bits may represent a decimal digit (in this example 9 and 3, so the eight bits combined represent decimal 93). The weights associated with these 8 positions are 80, 40, 20, 10, 8, 4, 2 and 1. Uniqueness is ensured by requiring that, in each group of four bits, if the first bit is 1, the next two must be 00. == Asymmetric numeral systems == Asymmetric numeral systems are systems used in computer science where each digit can have different bases, usually non-integer. In these, not only are the bases of a given digit different, they can be also nonuniform and altered in an asymmetric way to encode information more efficiently. They are optimized for chosen non-uniform probability distributions of symbols, using on average approximately Shannon entropy bits per symbol. == See also == List of numeral systems Komornik–Loreti constant == External links == Expansions in non-integer bases: the top order and the tail == References ==
Wikipedia/Non-standard_positional_numeral_systems
In information theory, polar codes are linear block error-correcting codes. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize or become sparse), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. Polar codes were developed by Erdal Arikan, a professor of electrical engineering at Bilkent University. Notably, polar codes have modest encoding and decoding complexity O(n log n), which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can reach the fundamental lower bounds for energy consumption of two-dimensional circuitry to within an O(n^ε polylog n) factor for any ε > 0. == Industrial applications == Polar codes have some limitations when used in industrial applications. Primarily, the original design of the polar codes achieves capacity when block sizes are asymptotically large with a successive cancellation decoder. However, with the block sizes used in industry, the performance of successive cancellation decoding is poor compared to well-defined and implemented coding schemes such as low-density parity-check code (LDPC) and turbo code. Polar performance can be improved with successive cancellation list decoding, but its usability in real applications is still questionable due to very poor implementation efficiencies caused by the iterative approach. In October 2016, Huawei announced that it had achieved 27 Gbit/s in 5G field trial tests using polar codes for channel coding. The improvements have been introduced so that the channel performance has now almost closed the gap to the Shannon limit, which sets the bar for the maximum rate for a given bandwidth and a given noise level. In November 2016, 3GPP agreed to adopt polar codes for the eMBB (Enhanced Mobile Broadband) control channels for the 5G NR (New Radio) interface. At the same meeting, 3GPP agreed to use LDPC for the corresponding data channel. == PAC codes == In 2019, Arıkan suggested employing a convolutional pre-transformation before polar coding. This pre-transformed variant of polar codes was dubbed polarization-adjusted convolutional (PAC) codes. It was shown that the pre-transformation can effectively improve the distance properties of polar codes by reducing the number of minimum-weight and, in general, small-weight codewords, resulting in improved block error rates under near-maximum-likelihood (ML) decoding algorithms such as Fano decoding and list decoding. Fano decoding is a tree search algorithm that determines the transmitted codeword by utilizing an optimal metric function to efficiently guide the search process. PAC codes are also equivalent to post-transforming polar codes with certain cyclic codes. At short blocklengths, such codes outperform both convolutional codes and CRC-aided list decoding of conventional polar codes.
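For illustration, the basic polar transform (multiplication by the n-fold Kronecker power of the 2×2 kernel [[1, 0], [1, 1]] over GF(2)) can be sketched in a few lines of Python. This shows only the transform itself: the choice of frozen versus information bit positions, which is where the actual code design lies, and the bit-reversal permutation used in some formulations are deliberately omitted, and the example input below is arbitrary.
<syntaxhighlight lang="python">
def polar_transform(u):
    # u: list of 0/1 bits whose length is a power of two
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        # butterfly stage: (a, b) -> (a XOR b, b) on pairs `step` apart
        for start in range(0, n, 2 * step):
            for i in range(start, start + step):
                x[i] ^= x[i + step]
        step *= 2
    return x

# Example with N = 8: the positions of the four "information" bits are chosen
# arbitrarily here; real designs pick them by channel-reliability analysis.
u = [0, 0, 0, 1, 0, 1, 1, 0]
print(polar_transform(u))
</syntaxhighlight>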
== Neural Polar Decoders == Neural Polar Decoders (NPDs) are an advancement in channel coding that combine neural networks (NNs) with polar codes, providing unified decoding for channels with or without memory, without requiring an explicit channel model. They use four neural networks to approximate the functions of polar decoding: the embedding (E) NN, the check-node (F) NN, the bit-node (G) NN, and the embedding-to-LLR (H) NN. The weights of these NNs are determined by estimating the mutual information of the synthetic channels. By the end of training, the weights of the NPD are fixed and can then be used for decoding. The computational complexity of NPDs is determined by the parameterization of the neural networks, unlike successive cancellation (SC) trellis decoders, whose complexity is determined by the channel model and which are typically used for finite-state channels (FSCs). The computational complexity of NPDs is O(k d N log₂ N), where k is the number of hidden units in the neural networks, d is the dimension of the embedding, and N is the block length. In contrast, the computational complexity of SC trellis decoders is O(|S|³ N log₂ N), where S is the state space of the channel model. NPDs can be integrated into SC decoding schemes such as SC list decoding and CRC-aided SC decoding. They are also compatible with non-uniform and i.i.d. input distributions by integrating them into the Honda-Yamamoto scheme. This flexibility allows NPDs to be used in various decoding scenarios, improving error correction performance while maintaining manageable computational complexity. == References == == External links == AFF3CT home page: A Fast Forward Error Correction Toolbox for high speed polar code simulations in software
Wikipedia/Polar_code_(coding_theory)
In coding theory, the Sardinas–Patterson algorithm is a classical algorithm for determining in polynomial time whether a given variable-length code is uniquely decodable, named after August Albert Sardinas and George W. Patterson, who published it in 1953. The algorithm carries out a systematic search for a string which admits two different decompositions into codewords. As Knuth reports, the algorithm was rediscovered about ten years later in 1963 by Floyd, despite the fact that it was at the time already well known in coding theory. == Idea of the algorithm == Consider the code {a ↦ 1, b ↦ 011, c ↦ 01110, d ↦ 1110, e ↦ 10011}. This code, which is based on an example by Berstel, is an example of a code which is not uniquely decodable, since the string 011101110011 can be interpreted as the sequence of codewords 01110 – 1110 – 011, but also as the sequence of codewords 011 – 1 – 011 – 10011. Two possible decodings of this encoded string are thus given by cdb and babe. In general, such a string can be found by the following idea: In the first round, we choose two codewords x_1 and y_1 such that x_1 is a prefix of y_1, that is, x_1 w = y_1 for some "dangling suffix" w. If one tries first x_1 = 011 and y_1 = 01110, the dangling suffix is w = 10. If we manage to find two sequences x_2, …, x_p and y_2, …, y_q of codewords such that x_2 ⋯ x_p = w y_2 ⋯ y_q, then we are finished: for then the string x = x_1 x_2 ⋯ x_p can alternatively be decomposed as y_1 y_2 ⋯ y_q, and we have found the desired string having at least two different decompositions into codewords. In the second round, we try out two different approaches: the first trial is to look for a codeword that has w as prefix. Then we obtain a new dangling suffix w', with which we can continue our search. If we eventually encounter a dangling suffix that is itself a codeword (or the empty word), then the search will terminate, as we know there exists a string with two decompositions. The second trial is to seek a codeword that is itself a prefix of w. In our example, we have w = 10, and the sequence 1 is a codeword. We can thus also continue with w' = 0 as the new dangling suffix. == Precise description of the algorithm == The algorithm is described most conveniently using quotients of formal languages. In general, for two sets of strings D and N, the (left) quotient N^{-1}D is defined as the residual words obtained from D by removing some prefix in N. Formally, N^{-1}D = { y | xy ∈ D and x ∈ N }. Now let C denote the (finite) set of codewords in the given code. The algorithm proceeds in rounds, where we maintain in each round not only one dangling suffix as described above, but the (finite) set of all potential dangling suffixes. Starting with round i = 1, the set of potential dangling suffixes will be denoted by S_i.
The sets S i {\displaystyle S_{i}} are defined inductively as follows: S 1 = C − 1 C ∖ { ε } {\displaystyle S_{1}=C^{-1}C\setminus \{\varepsilon \}} . Here, the symbol ε {\displaystyle \varepsilon } denotes the empty word. S i + 1 = C − 1 S i ∪ S i − 1 C {\displaystyle S_{i+1}=C^{-1}S_{i}\cup S_{i}^{-1}C} , for all i ≥ 1 {\displaystyle i\geq 1} . The algorithm computes the sets S i {\displaystyle S_{i}} in increasing order of i {\displaystyle i} . As soon as one of the S i {\displaystyle S_{i}} contains a word from C or the empty word, then the algorithm terminates and answers that the given code is not uniquely decodable. Otherwise, once a set S i {\displaystyle S_{i}} equals a previously encountered set S j {\displaystyle S_{j}} with j < i {\displaystyle j<i} , then the algorithm would enter in principle an endless loop. Instead of continuing endlessly, it answers that the given code is uniquely decodable. == Termination and correctness of the algorithm == Since all sets S i {\displaystyle S_{i}} are sets of suffixes of a finite set of codewords, there are only finitely many different candidates for S i {\displaystyle S_{i}} . Since visiting one of the sets for the second time will cause the algorithm to stop, the algorithm cannot continue endlessly and thus must always terminate. More precisely, the total number of dangling suffixes that the algorithm considers is at most equal to the total of the lengths of the codewords in the input, so the algorithm runs in polynomial time as a function of this input length. By using a suffix tree to speed the comparison between each dangling suffix and the codewords, the time for the algorithm can be bounded by O(nk), where n is the total length of the codewords and k is the number of codewords. The algorithm can be implemented using a pattern matching machine. The algorithm can also be implemented to run on a nondeterministic Turing machine that uses only logarithmic space; the problem of testing unique decipherability is NL-complete, so this space bound is optimal. A proof that the algorithm is correct, i.e. that it always gives the correct answer, is found in the textbooks by Salomaa and by Berstel et al. == See also == Kraft's inequality in some cases provides a quick way to exclude the possibility that a given code is uniquely decodable. Prefix codes and block codes are important classes of codes which are uniquely decodable by definition. Timeline of information theory Post's correspondence problem is similar, yet undecidable. == Notes == == References == Berstel, Jean; Perrin, Dominique; Reutenauer, Christophe (2010). Codes and automata. Encyclopedia of Mathematics and its Applications. Vol. 129. Cambridge: Cambridge University Press. ISBN 978-0-521-88831-8. Zbl 1187.94001. Berstel, Jean; Reutenauer, Christophe (2011). Noncommutative rational series with applications. Encyclopedia of Mathematics and Its Applications. Vol. 137. Cambridge: Cambridge University Press. ISBN 978-0-521-19022-0. Zbl 1250.68007. Knuth, Donald E. (December 2003). "Robert W Floyd, In Memoriam". SIGACT News. 34 (4): 3–13. doi:10.1145/954092.954488. S2CID 35605565. Rodeh, M. (1982). "A fast test for unique decipherability based on suffix trees (Corresp.)". IEEE Transactions on Information Theory. 28 (4): 648–651. doi:10.1109/TIT.1982.1056535.. Apostolico, A.; Giancarlo, R. (1984). "Pattern matching machine implementation of a fast test for unique decipherability". Information Processing Letters. 18 (3): 155–158. doi:10.1016/0020-0190(84)90020-6.. Rytter, Wojciech (1986). 
"The space complexity of the unique decipherability problem". Information Processing Letters. 23 (1): 1–3. doi:10.1016/0020-0190(86)90121-3. MR 0853618.. Salomaa, Arto (1981). Jewels of Formal Language Theory. Pitman Publishing. ISBN 0-273-08522-0. Zbl 0487.68064. Sardinas, August Albert; Patterson, George W. (1953), "A necessary and sufficient condition for the unique decomposition of coded messages", Convention Record of the I.R.E., 1953 National Convention, Part 8: Information Theory, pp. 104–108. Further reading Robert G. Gallager: Information Theory and Reliable Communication. Wiley, 1968
Wikipedia/Sardinas–Patterson_algorithm
Algebraic geometry codes, often abbreviated AG codes, are a type of linear code that generalize Reed–Solomon codes. The Russian mathematician V. D. Goppa constructed these codes for the first time in 1982. == History == The name of these codes has evolved since the publication of Goppa's paper describing them. Historically these codes have also been referred to as geometric Goppa codes; however, this is no longer the standard term used in coding theory literature. This is due to the fact that Goppa codes are a distinct class of codes which were also constructed by Goppa in the early 1970s. These codes attracted interest in the coding theory community because they have the ability to surpass the Gilbert–Varshamov bound; at the time this was discovered, the Gilbert–Varshamov bound had not been broken in the 30 years since its discovery. This was demonstrated by Tfasman, Vladut, and Zink in the same year as the code construction was published, in their paper "Modular curves, Shimura curves, and Goppa codes, better than Varshamov-Gilbert bound". The name of this paper may be one source of confusion affecting references to algebraic geometry codes throughout 1980s and 1990s coding theory literature. == Construction == In this section the construction of algebraic geometry codes is described. The section starts with the ideas behind Reed–Solomon codes, which are used to motivate the construction of algebraic geometry codes. === Reed–Solomon codes === Algebraic geometry codes are a generalization of Reed–Solomon codes. Constructed by Irving Reed and Gustave Solomon in 1960, Reed–Solomon codes use univariate polynomials to form codewords, by evaluating polynomials of sufficiently small degree at the points in a finite field F q {\displaystyle \mathbb {F} _{q}} . Formally, Reed–Solomon codes are defined in the following way. Let F q = { α 1 , … , α q } {\displaystyle \mathbb {F} _{q}=\{\alpha _{1},\dots ,\alpha _{q}\}} . Set positive integers k ≤ n ≤ q {\displaystyle k\leq n\leq q} . Let F q [ x ] < k := { f ∈ F q [ x ] : deg ⁡ f < k } {\displaystyle \mathbb {F} _{q}[x]_{<k}:=\left\{f\in \mathbb {F} _{q}[x]:\deg f<k\right\}} The Reed–Solomon code R S ( q , n , k ) {\displaystyle RS(q,n,k)} is the evaluation code R S ( q , n , k ) = { ( f ( α 1 ) , f ( α 2 ) , … , f ( α n ) ) : f ∈ F q [ x ] < k } ⊆ F q n . {\displaystyle RS(q,n,k)=\left\{\left(f(\alpha _{1}),f(\alpha _{2}),\dots ,f(\alpha _{n})\right):f\in \mathbb {F} _{q}[x]_{<k}\right\}\subseteq \mathbb {F} _{q}^{n}.} === Codes from algebraic curves === Goppa observed that F q {\displaystyle \mathbb {F} _{q}} can be considered as an affine line, with corresponding projective line P F q 1 {\displaystyle \mathbb {P} _{\mathbb {F} _{q}}^{1}} . Then, the polynomials in F q [ x ] < k {\displaystyle \mathbb {F} _{q}[x]_{<k}} (i.e. the polynomials of degree less than k {\displaystyle k} over F q {\displaystyle \mathbb {F} _{q}} ) can be thought of as polynomials with pole allowance no more than k {\displaystyle k} at the point at infinity in P F q 1 {\displaystyle \mathbb {P} _{\mathbb {F} _{q}}^{1}} . With this idea in mind, Goppa looked toward the Riemann–Roch theorem. The elements of a Riemann–Roch space are exactly those functions with pole order restricted below a given threshold, with the restriction being encoded in the coefficients of a corresponding divisor. 
Evaluating those functions at the rational points on an algebraic curve X {\displaystyle X} over F q {\displaystyle \mathbb {F} _{q}} (that is, the points in F q 2 {\displaystyle \mathbb {F} _{q}^{2}} on the curve X {\displaystyle X} ) gives a code in the same sense as the Reed-Solomon construction. However, because the parameters of algebraic geometry codes are connected to algebraic function fields, the definitions of the codes are often given in the language of algebraic function fields over finite fields. Nevertheless, it is important to remember the connection to algebraic curves, as this provides a more geometrically intuitive method of thinking about AG codes as extensions of Reed-Solomon codes. Formally, algebraic geometry codes are defined in the following way. Let F / F q {\displaystyle F/\mathbb {F} _{q}} be an algebraic function field, D = P 1 + ⋯ + P n {\displaystyle D=P_{1}+\dots +P_{n}} be the sum of n {\displaystyle n} distinct places of F / F q {\displaystyle F/\mathbb {F} _{q}} of degree one, and G {\displaystyle G} be a divisor with disjoint support from D {\displaystyle D} . The algebraic geometry code C L ( D , G ) {\displaystyle C_{\mathcal {L}}(D,G)} associated with divisors D {\displaystyle D} and G {\displaystyle G} is defined as C L ( D , G ) := { ( f ( P 1 ) , … , f ( P n ) ) : f ∈ L ( G ) } ⊆ F q n . {\displaystyle C_{\mathcal {L}}(D,G):=\lbrace (f(P_{1}),\dots ,f(P_{n})):f\in {\mathcal {L}}(G)\rbrace \subseteq \mathbb {F} _{q}^{n}.} More information on these codes may be found in both introductory texts as well as advanced texts on coding theory. == Examples == === Reed-Solomon codes === One can see that R S ( q , n , k ) = C L ( D , ( k − 1 ) P ∞ ) {\displaystyle RS(q,n,k)={\mathcal {C}}_{\mathcal {L}}(D,(k-1)P_{\infty })} where P ∞ {\displaystyle P_{\infty }} is the point at infinity on the projective line P F q 1 {\displaystyle \mathbb {P} _{\mathbb {F} _{q}}^{1}} and D = P 1 + ⋯ + P q {\displaystyle D=P_{1}+\dots +P_{q}} is the sum of the other F q {\displaystyle \mathbb {F} _{q}} -rational points. === One-point Hermitian codes === The Hermitian curve is given by the equation x q + 1 = y q + y {\displaystyle x^{q+1}=y^{q}+y} considered over the field F q 2 {\displaystyle \mathbb {F} _{q^{2}}} . This curve is of particular importance because it meets the Hasse–Weil bound with equality, and thus has the maximal number of affine points over F q 2 {\displaystyle \mathbb {F} _{q^{2}}} . With respect to algebraic geometry codes, this means that Hermitian codes are long relative to the alphabet they are defined over. The Riemann–Roch space of the Hermitian function field is given in the following statement. For the Hermitian function field F q 2 ( x , y ) {\displaystyle \mathbb {F} _{q^{2}}(x,y)} given by x q + 1 = y q + y {\displaystyle x^{q+1}=y^{q}+y} and for m ∈ Z + {\displaystyle m\in \mathbb {Z} ^{+}} , the Riemann–Roch space L ( m P ∞ ) {\displaystyle {\mathcal {L}}(mP_{\infty })} is L ( m P ∞ ) = ⟨ x a y b : 0 ≤ b ≤ q − 1 , a q + b ( q + 1 ) ≤ m ⟩ , {\displaystyle {\mathcal {L}}(mP_{\infty })=\left\langle x^{a}y^{b}:0\leq b\leq q-1,aq+b(q+1)\leq m\right\rangle ,} where P ∞ {\displaystyle P_{\infty }} is the point at infinity on H q ( F q 2 ) {\displaystyle {\mathcal {H}}_{q}(\mathbb {F} _{q^{2}})} . With that, the one-point Hermitian code can be defined in the following way. Let H q {\displaystyle {\mathcal {H}}_{q}} be the Hermitian curve defined over F q 2 {\displaystyle \mathbb {F} _{q^{2}}} . 
Let P ∞ {\displaystyle P_{\infty }} be the point at infinity on H q ( F q 2 ) {\displaystyle {\mathcal {H}}_{q}(\mathbb {F} _{q^{2}})} , and D = P 1 + ⋯ + P n {\displaystyle D=P_{1}+\cdots +P_{n}} be a divisor supported by the n := q 3 {\displaystyle n:=q^{3}} distinct F q 2 {\displaystyle \mathbb {F} _{q^{2}}} -rational points on H q {\displaystyle {\mathcal {H}}_{q}} other than P ∞ {\displaystyle P_{\infty }} . The one-point Hermitian code C ( D , m P ∞ ) {\displaystyle C(D,mP_{\infty })} is C ( D , m P ∞ ) := { ( f ( P 1 ) , … , f ( P n ) ) : f ∈ L ( m P ∞ ) } ⊆ F q 2 n . {\displaystyle C(D,mP_{\infty }):=\left\lbrace (f(P_{1}),\dots ,f(P_{n})):f\in {\mathcal {L}}(mP_{\infty })\right\rbrace \subseteq \mathbb {F} _{q^{2}}^{n}.} == References ==
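To make the evaluation-code viewpoint behind these constructions concrete, the short Python sketch below builds a small Reed–Solomon code RS(7, 7, 3) over the prime field GF(7), where field arithmetic is simply arithmetic modulo 7. The helper names are our own and the example is a hand-rolled illustration, not code from a coding-theory library.

```python
# Evaluation-code sketch of RS(q, n, k) over a prime field, here GF(7).

q = 7                      # field size (prime, so GF(7) is the integers mod 7)
n, k = 7, 3                # evaluate at all q points; messages are polynomials of degree < k
points = list(range(q))    # alpha_1 .. alpha_q

def encode(message):
    """Evaluate the polynomial with coefficients `message` (degree < k) at every point."""
    assert len(message) == k
    return [sum(c * pow(a, i, q) for i, c in enumerate(message)) % q for a in points]

codeword = encode([2, 5, 1])        # f(x) = 2 + 5x + x^2
print(codeword)                     # [2, 1, 2, 5, 3, 3, 5]

# Two distinct messages agree on at most k - 1 = 2 evaluation points,
# so distinct codewords differ in at least n - k + 1 = 5 positions.
other = encode([2, 5, 2])
print(sum(a != b for a, b in zip(codeword, other)))   # Hamming distance (here 6) >= 5
```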
Wikipedia/Algebraic_geometry_code
Basic Rate Interface (BRI, 2B+D, 2B1D) or Basic Rate Access is an Integrated Services Digital Network (ISDN) configuration intended primarily for use in subscriber lines similar to those that have long been used for voice-grade telephone service. As such, an ISDN BRI connection can use the existing telephone infrastructure at a business. The BRI configuration provides 2 data (bearer) channels (B channels) at 64 kbit/s each and 1 control (delta) channel (D channel) at 16 kbit/s. The B channels are used for voice or user data, and the D channel is used for any combination of data, control signaling, and X.25 packet networking. The 2 B channels can be aggregated by channel bonding providing a total data rate of 128 kbit/s. The BRI ISDN service is commonly installed for residential or small business service (ISDN PABX) in many countries. In contrast to the BRI, the Primary Rate Interface (PRI) configuration provides more B channels and operates at a higher bit rate. == Physical interfaces == The BRI is split in two sections: a) in-house cabling (S/T reference point or S-bus) from the ISDN terminal up to the network termination 1 (NT1) and b) transmission from the NT1 to the central office (U reference point). The in-house part is defined in I.430 produced by the International Telecommunication Union (ITU). The S/T-interface (S0) uses four wires; one pair for the uplink and another pair for the downlink. It offers a full-duplex mode of operation. The I.430 protocol defines 48-bit packets comprising 16 bits from the B1 channel, 16 bits from B2 channel, 4 bits from the D channel, and 12 bits used for synchronization purposes. These packets are sent at a rate of 4 kHz, resulting in a gross bit rate of 192 kbit/s and – giving the data rates listed above – a maximum possible throughput of 144kbit/s. The S0 offers point-to-point or point-to-multipoint operation; Max length: 900 metres (3,000 ft) (point-to-point), 300 metres (980 ft) (point-to-multipoint). The Up Interface uses two wires. The gross bit rate is 160 kbit/s; 144 kbit/s throughput, 12 kbit/s sync and 4 kbit/s maintenance. The signals on the U reference point are encoded by two modulation techniques: 2B1Q in North America, Italy and Switzerland, and 4B3T elsewhere. Depending on the applicable cable length, two varieties are implemented, UpN and Up0. The Uk0 interface uses one wire pair with echo cancellation for the long last mile cable between the telephone exchange and the network terminator. The maximum length of this BRI section is between 4 and 8 kilometres (2.5 and 5.0 mi). == References == == External links == "BRI defined on birds-eye.net". Archived from the original on 2012-08-25. Retrieved 2012-08-25.
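The rate figures quoted above follow directly from the I.430 framing. Below is a minimal arithmetic sketch; the variable names are ours, while the 16+16+4+12 bit split and the 4 kHz frame rate are taken from the section above.

```python
# Back-of-the-envelope check of the I.430 / BRI numbers.

frame_bits = 48          # bits per I.430 frame
frame_rate = 4000        # frames per second ("sent at a rate of 4 kHz")
b1, b2, d, sync = 16, 16, 4, 12

gross = frame_bits * frame_rate                 # gross bit rate on the S/T interface
payload = (b1 + b2 + d) * frame_rate            # usable 2B + D throughput

print(gross, payload)                           # 192000 144000
print(b1 * frame_rate, d * frame_rate)          # 64000 (one B channel), 16000 (D channel)
```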
Wikipedia/Basic_Rate_Interface
An order of magnitude is generally a factor of ten. A quantity growing by four orders of magnitude implies it has grown by a factor of 10,000 or 10^4. However, because computers are binary, orders of magnitude are sometimes given as powers of two. This article presents a list of multiples, sorted by orders of magnitude, for bit rates measured in bits per second. Since some bit rates may be measured in other quantities of data or time (like MB/s), information to assist with converting to and from these formats is provided. This article assumes the following: A group of 8 bits (8 bit) constitutes one byte (1 B). The byte is the most common unit of measurement of information (megabyte, mebibyte, gigabyte, gibibyte, etc.). The decimal SI prefixes kilo, mega etc., are powers of 10. The power-of-two equivalents are the binary prefixes kibi, mebi, etc. Accordingly: 1 kB (kilobyte) = 1000 bytes = 8000 bits 1 KiB (kibibyte) = 2^10 bytes = 1024 bytes = 8192 bits 1 kbit (kilobit) = 125 bytes = 1000 bits 1 Kibit (kibibit) = 2^10 bits = 1024 bits = 128 bytes == See also == Data-rate units List of interface bit rates Spectral efficiency Orders of magnitude (data) Orders of magnitude (time) == References ==
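The assumed unit relationships above reduce to simple multiplications. A minimal sketch, written as plain Python arithmetic (the unit symbols used as dictionary keys are just labels for this example):

```python
# Decimal vs. binary prefix conversions for the four units listed above.

KILO, KIBI = 10**3, 2**10

def to_bits(value, unit):
    """Convert a quantity in the given unit to bits."""
    factors = {
        "kB":    KILO * 8,   # kilobyte = 1000 bytes = 8000 bits
        "KiB":   KIBI * 8,   # kibibyte = 1024 bytes = 8192 bits
        "kbit":  KILO,       # kilobit  = 1000 bits
        "Kibit": KIBI,       # kibibit  = 1024 bits
    }
    return value * factors[unit]

print(to_bits(1, "kB"), to_bits(1, "KiB"), to_bits(1, "kbit"), to_bits(1, "Kibit"))
# 8000 8192 1000 1024
```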
Wikipedia/Orders_of_magnitude_(bit_rate)
The μ-law algorithm (sometimes written mu-law, often abbreviated as u-law) is a companding algorithm, primarily used in 8-bit PCM digital telecommunications systems in North America and Japan. It is one of the two companding algorithms in the G.711 standard from ITU-T, the other being the similar A-law. A-law is used in regions where digital telecommunication signals are carried on E-1 circuits, e.g. Europe. The terms PCMU, G711u or G711MU are used for G711 μ-law. Companding algorithms reduce the dynamic range of an audio signal. In analog systems, this can increase the signal-to-noise ratio (SNR) achieved during transmission; in the digital domain, it can reduce the quantization error (hence increasing the signal-to-quantization-noise ratio). These SNR increases can be traded instead for reduced bandwidth for equivalent SNR. At the cost of a reduced peak SNR, it can be mathematically shown that μ-law's non-linear quantization effectively increases dynamic range by 33 dB or 5+1⁄2 bits over a linearly-quantized signal, hence 13.5 bits (which rounds up to 14 bits) is the most resolution required for an input digital signal to be compressed for 8-bit μ-law. == Algorithm types == The μ-law algorithm may be described in an analog form and in a quantized digital form. === Continuous === For a given input x, the equation for μ-law encoding is F ( x ) = sgn ⁡ ( x ) ln ⁡ ( 1 + μ | x | ) ln ⁡ ( 1 + μ ) , − 1 ≤ x ≤ 1 , {\displaystyle F(x)=\operatorname {sgn}(x){\dfrac {\ln(1+\mu |x|)}{\ln(1+\mu )}},\quad -1\leq x\leq 1,} where μ = 255 in the North American and Japanese standards, and sgn(x) is the sign function. The range of this function is −1 to 1. μ-law expansion is then given by the inverse equation: F − 1 ( y ) = sgn ⁡ ( y ) ( 1 + μ ) | y | − 1 μ , − 1 ≤ y ≤ 1. {\displaystyle F^{-1}(y)=\operatorname {sgn}(y){\dfrac {(1+\mu )^{|y|}-1}{\mu }},\quad -1\leq y\leq 1.} === Discrete === The discrete form is defined in ITU-T Recommendation G.711. G.711 is unclear about how to code the values at the limit of a range (e.g. whether +31 codes to 0xEF or 0xF0). However, G.191 provides example code in the C language for a μ-law encoder. The difference between the positive and negative ranges, e.g. the negative range corresponding to +30 to +1 is −31 to −2. This is accounted for by the use of 1's complement (simple bit inversion) rather than 2's complement to convert a negative value to a positive value during encoding. == Implementation == The μ-law algorithm may be implemented in several ways: Analog Use an amplifier with non-linear gain to achieve companding entirely in the analog domain. Non-linear ADC Use an analog-to-digital converter with quantization levels which are unequally spaced to match the μ-law algorithm. Digital Use the quantized digital version of the μ-law algorithm to convert data once it is in the digital domain. Software/DSP Use the continuous version of the μ-law algorithm to calculate the companded values. == Usage justification == μ-law encoding is used because speech has a wide dynamic range. In analog signal transmission, in the presence of relatively constant background noise, the finer detail is lost. Given that the precision of the detail is compromised anyway, and assuming that the signal is to be perceived as audio by a human, one can take advantage of the fact that the perceived acoustic intensity level or loudness is logarithmic by compressing the signal using a logarithmic-response operational amplifier (Weber–Fechner law). 
In telecommunications circuits, most of the noise is injected on the lines, thus after the compressor, the intended signal is perceived as significantly louder than the static, compared to an uncompressed source. This became a common solution, and thus, prior to common digital usage, the μ-law specification was developed to define an interoperable standard. This pre-existing algorithm had the effect of significantly lowering the amount of bits required to encode a recognizable human voice in digital systems. A sample could be effectively encoded using μ-law in as little as 8 bits, which conveniently matched the symbol size of the majority of common computers. μ-law encoding effectively reduced the dynamic range of the signal, thereby increasing the coding efficiency while biasing the signal in a way that results in a signal-to-distortion ratio that is greater than that obtained by linear encoding for a given number of bits. The μ-law algorithm is also used in the .au format, which dates back at least to the SPARCstation 1 by Sun Microsystems as the native method used by the /dev/audio interface, widely used as a de facto standard for sound on Unix systems. The au format is also used in various common audio APIs such as the classes in the sun.audio Java package in Java 1.1 and in some C# methods. This plot illustrates how μ-law concentrates sampling in the smaller (softer) values. The horizontal axis represents the byte values 0-255 and the vertical axis is the 16-bit linear decoded value of μ-law encoding. == Comparison with A-law == The μ-law algorithm provides a slightly larger dynamic range than the A-law at the cost of worse proportional distortions for small signals. By convention, A-law is used for an international connection if at least one country uses it. == See also == Dynamic range compression Signal compression (disambiguation) G.711, a waveform speech coder using either A-law or μ-law encoding Tapered floating point == References == This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 22 January 2022. == External links == Waveform Coding Techniques – details of implementation A-Law and mu-Law Companding Implementations Using the TMS320C54x (PDF) TMS320C6000 μ-Law and A-Law Companding with Software or the McBSP (PDF) A-law and μ-law realisation (in C) u-law implementation in C-language with example code
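For illustration, the continuous μ-law formulas given above can be transcribed directly into code. The sketch below uses μ = 255 and is only a floating-point illustration of the companding curve; a real G.711 codec uses the quantized, table-driven form defined in the standard.

```python
# Continuous mu-law companding curve (mu = 255), transcribed from the formulas above.

import math

MU = 255.0

def mulaw_encode(x):
    """F(x) for -1 <= x <= 1."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_decode(y):
    """Inverse F^-1(y) for -1 <= y <= 1."""
    return math.copysign(((1.0 + MU) ** abs(y) - 1.0) / MU, y)

for x in (0.9, 0.1, 0.01, -0.5):
    y = mulaw_encode(x)
    print(f"x={x:+.3f}  encoded={y:+.3f}  decoded={mulaw_decode(y):+.5f}")

# Small inputs are mapped to proportionally larger coded values (e.g. 0.01 -> about 0.23),
# which is the companding effect described above: more coding resolution near zero.
```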
Wikipedia/Mu-law_algorithm
In telecommunication and information theory, the code rate (or information rate) of a forward error correction code is the proportion of the data-stream that is useful (non-redundant). That is, if the code rate is k / n {\displaystyle k/n} , then for every k bits of useful information the coder generates a total of n bits of data, of which n − k {\displaystyle n-k} are redundant. If R is the gross bit rate or data signalling rate (inclusive of redundant error coding), the net bit rate (the useful bit rate exclusive of error correction codes) is ≤ R ⋅ k / n {\displaystyle \leq R\cdot k/n} . For example: The code rate of a convolutional code will typically be 1⁄2, 2⁄3, 3⁄4, 5⁄6, 7⁄8, etc., corresponding to one redundant bit inserted after every single, second, third, etc., bit. The code rate of the octet-oriented Reed–Solomon block code denoted RS(204,188) is 188/204, meaning that 204 − 188 = 16 redundant octets (or bytes) are added to each block of 188 octets of useful information. A few error correction codes, such as rateless erasure codes, do not have a fixed code rate. Note that bit/s is a more widespread unit of measurement for the information rate, implying that it is synonymous with net bit rate or useful bit rate exclusive of error-correction codes. == See also == Entropy rate Information rate Punctured code == References ==
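As a worked illustration of these relations, the snippet below applies the RS(204,188) and rate-1/2 convolutional examples from above to a gross bit rate that is purely an invented figure for this sketch; the variable names are ours.

```python
# Code-rate arithmetic; the 10 Mbit/s gross rate is an illustrative figure only.

k, n = 188, 204
code_rate = k / n                      # fraction of each block that is useful data
redundant_octets = n - k               # 16 redundant octets per 188-octet block

gross_bit_rate = 10_000_000            # illustrative gross (channel) bit rate in bit/s
net_bit_rate = gross_bit_rate * code_rate

print(round(code_rate, 4), redundant_octets)   # 0.9216 16
print(round(net_bit_rate))                     # about 9215686 bit/s of useful data

# A rate-1/2 convolutional code adds one redundant bit per data bit,
# so 1 Mbit/s of useful data needs 2 Mbit/s on the channel:
print(int(1_000_000 / (1 / 2)))                # 2000000
```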
Wikipedia/Code_rate
In telecommunications, data transfer rate is the average number of bits (bitrate), characters or symbols (baudrate), or data blocks per unit time passing through a communication link in a data-transmission system. Common data rate units are multiples of bits per second (bit/s) and bytes per second (B/s). For example, the data rates of modern residential high-speed Internet connections are commonly expressed in megabits per second (Mbit/s). == Standards for unit symbols and prefixes == === Unit symbol === The ISQ symbols for the bit and byte are bit and B, respectively. In the context of data-rate units, one byte consists of 8 bits, and is synonymous with the unit octet. The abbreviation bps is often used to mean bit/s, so that when a 1 Mbps connection is advertised, it usually means that the maximum achievable bandwidth is 1 Mbit/s (one million bits per second), which is 0.125 MB/s (megabyte per second), or about 0.1192 MiB/s (mebibyte per second). The Institute of Electrical and Electronics Engineers (IEEE) uses the symbol b for bit. === Unit prefixes === In both the SI and ISQ, the prefix k stands for kilo, meaning 1000, while Ki is the symbol for the binary prefix kibi-, meaning 1024. The binary prefixes were introduced in 1998 by the International Electrotechnical Commission (IEC) and in IEEE 1541-2002 which was reaffirmed on 27 March 2008. The letter K is often used as a non-standard abbreviation for 1,024, especially in "KB" to mean KiB, the kilobyte in its binary sense. In the context of data rates, however, typically only decimal prefixes are used, and they have their standard SI interpretation. === Variations === In 1999, the IEC published Amendment 2 to "IEC 60027-2: Letter symbols to be used in electrical technology – Part 2: Telecommunications and electronics". This standard, approved in 1998, introduced the prefixes kibi-, mebi-, gibi-, tebi-, pebi-, and exbi- to be used in specifying binary multiples of a quantity. The name is derived from the first two letters of the original SI prefixes followed by bi (short for binary). It also clarifies that the SI prefixes are used only to mean powers of 10 and never powers of 2. == Decimal multiples of bits == These units are often used in a manner inconsistent with the IEC standard. 
=== Kilobit per second === Kilobit per second (symbol kbit/s or kb/s, often abbreviated "kbps") is a unit of data transfer rate equal to: 1,000 bits per second 125 bytes per second === Megabit per second === Megabit per second (symbol Mbit/s or Mb/s, often abbreviated "Mbps") is a unit of data transfer rate equal to: 1,000 kilobits per second 1,000,000 bits per second 125,000 bytes per second 125 kilobytes per second === Gigabit per second === Gigabit per second (symbol Gbit/s or Gb/s, often abbreviated "Gbps") is a unit of data transfer rate equal to: 1,000 megabits per second 1,000,000 kilobits per second 1,000,000,000 bits per second 125,000,000 bytes per second 125 megabytes per second === Terabit per second === Terabit per second (symbol Tbit/s or Tb/s, sometimes abbreviated "Tbps") is a unit of data transfer rate equal to: 1,000 gigabits per second 1,000,000 megabits per second 1,000,000,000 kilobits per second 1,000,000,000,000 bits per second 125,000,000,000 bytes per second 125 gigabytes per second === Petabit per second === Petabit per second (symbol Pbit/s or Pb/s, sometimes abbreviated "Pbps") is a unit of data transfer rate equal to: 1,000 terabits per second 1,000,000 gigabits per second 1,000,000,000 megabits per second 1,000,000,000,000 kilobits per second 1,000,000,000,000,000 bits per second 125,000,000,000,000 bytes per second 125 terabytes per second == Decimal multiples of bytes == These units are often not used in the suggested ways; see § Variations. === Kilobyte per second === kilobyte per second (kB/s) (sometimes abbreviated "kBps") is a unit of data transfer rate equal to: 8,000 bits per second 1,000 bytes per second 8 kilobits per second === Megabyte per second === megabyte per second (MB/s) (can be abbreviated as MBps) is a unit of data transfer rate equal to: 8,000,000 bits per second 1,000,000 bytes per second 1,000 kilobytes per second 8 megabits per second === Gigabyte per second === gigabyte per second (GB/s) (can be abbreviated as GBps) is a unit of data transfer rate equal to: 8,000,000,000 bits per second 1,000,000,000 bytes per second 1,000,000 kilobytes per second 1,000 megabytes per second 8 gigabits per second === Terabyte per second === terabyte per second (TB/s) (can be abbreviated as TBps) is a unit of data transfer rate equal to: 8,000,000,000,000 bits per second 1,000,000,000,000 bytes per second 1,000,000,000 kilobytes per second 1,000,000 megabytes per second 1,000 gigabytes per second 8 terabits per second == Conversion table == == Examples of bit rates == == See also == Binary prefix Bit rate List of interface bit rates Orders of magnitude (bit rate) Orders of magnitude (data) Metric prefix Instructions per second == Notes == == References == International Electrotechnical Commission (2007). "Prefixes for binary multiples" (archived). Retrieved on 2007-05-06. - updated page Archived 2020-05-11 at the Wayback Machine lacks table but now references IEC 80000-13:2008 rather than IEC 60027-2. IEC 60027-2 "Letter symbols to be used in electrical technology – Part 2: Telecommunications and electronics+ Donald Knuth: "What is a kilobyte?" Archived 2016-03-05 at the Wayback Machine == External links == Valid8 Data Rate Calculator
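The tables above reduce to a handful of multiplications. As a minimal sketch (the function names are ours), the following converts the 1 Mbit/s example from the Unit symbol section into decimal megabytes and binary mebibytes per second:

```python
# Decimal data-rate unit conversions.

BITS_PER_BYTE = 8

def mbit_s_to_mb_s(mbit_s):
    """Mbit/s (decimal) to MB/s (decimal megabytes per second)."""
    return mbit_s * 1_000_000 / BITS_PER_BYTE / 1_000_000

def mbit_s_to_mib_s(mbit_s):
    """Mbit/s (decimal) to MiB/s (binary mebibytes per second)."""
    return mbit_s * 1_000_000 / BITS_PER_BYTE / 2**20

print(mbit_s_to_mb_s(1))              # 0.125 MB/s
print(round(mbit_s_to_mib_s(1), 4))   # about 0.1192 MiB/s
```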
Wikipedia/Data-rate_units
In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time. The bit rate is expressed in the unit bit per second (symbol: bit/s), often in conjunction with an SI prefix such as kilo (1 kbit/s = 1,000 bit/s), mega (1 Mbit/s = 1,000 kbit/s), giga (1 Gbit/s = 1,000 Mbit/s) or tera (1 Tbit/s = 1,000 Gbit/s). The non-standard abbreviation bps is often used to replace the standard symbol bit/s, so that, for example, 1 Mbps is used to mean one million bits per second. In most computing and digital communication environments, one byte per second (symbol: B/s) corresponds roughly to 8 bit/s. 1 byte = 8 bits However if stop bits, start bits, and parity bits need to be factored in, a higher number of bits per second will be required to achieve a throughput of the same number of bytes. == Prefixes == When quantifying large or small bit rates, SI prefixes (also known as metric prefixes or decimal prefixes) are used, thus: Binary prefixes are sometimes used for bit rates. The International Standard (IEC 80000-13) specifies different symbols for binary and decimal (SI) prefixes (e.g., 1 KiB/s = 1024 B/s = 8192 bit/s, and 1 MiB/s = 1024 KiB/s). == In data communications == === Gross bit rate === In digital communication systems, the physical layer gross bitrate, raw bitrate, data signaling rate, gross data transfer rate or uncoded transmission rate (sometimes written as a variable Rb or fb) is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead. In case of serial communications, the gross bit rate is related to the bit transmission time T b {\displaystyle T_{\text{b}}} as: R b = 1 T b , {\displaystyle R_{\text{b}}={1 \over T_{\text{b}}},} The gross bit rate is related to the symbol rate or modulation rate, which is expressed in bauds or symbols per second. However, the gross bit rate and the baud value are equal only when there are only two levels per symbol, representing 0 and 1, meaning that each symbol of a data transmission system carries exactly one bit of data; for example, this is not the case for modern modulation systems used in modems and LAN equipment. For most line codes and modulation methods: symbol rate ≤ gross bit rate {\displaystyle {\text{symbol rate}}\leq {\text{gross bit rate}}} More specifically, a line code (or baseband transmission scheme) representing the data using pulse-amplitude modulation with 2 N {\displaystyle 2^{N}} different voltage levels, can transfer N {\displaystyle N} bits per pulse. A digital modulation method (or passband transmission scheme) using 2 N {\displaystyle 2^{N}} different symbols, for example 2 N {\displaystyle 2^{N}} amplitudes, phases or frequencies, can transfer N {\displaystyle N} bits per symbol. 
This results in: gross bit rate = symbol rate × N {\displaystyle {\text{gross bit rate}}={\text{symbol rate}}\times N} An exception from the above is some self-synchronizing line codes, for example Manchester coding and return-to-zero (RTZ) coding, where each bit is represented by two pulses (signal states), resulting in: gross bit rate = symbol rate/2 {\displaystyle {\text{gross bit rate = symbol rate/2}}} A theoretical upper bound for the symbol rate in baud, symbols/s or pulses/s for a certain spectral bandwidth in hertz is given by the Nyquist law: symbol rate ≤ Nyquist rate = 2 × bandwidth {\displaystyle {\text{symbol rate}}\leq {\text{Nyquist rate}}=2\times {\text{bandwidth}}} In practice this upper bound can only be approached for line coding schemes and for so-called vestigial sideband digital modulation. Most other digital carrier-modulated schemes, for example ASK, PSK, QAM and OFDM, can be characterized as double sideband modulation, resulting in the following relation: symbol rate ≤ bandwidth {\displaystyle {\text{symbol rate}}\leq {\text{bandwidth}}} In case of parallel communication, the gross bit rate is given by ∑ i = 1 n log 2 ⁡ M i T i {\displaystyle \sum _{i=1}^{n}{\frac {\log _{2}{M_{i}}}{T_{i}}}} where n is the number of parallel channels, Mi is the number of symbols or levels of the modulation in the ith channel, and Ti is the symbol duration time, expressed in seconds, for the ith channel. === Information rate === The physical layer net bitrate, information rate, useful bit rate, payload rate, net data transfer rate, coded transmission rate, effective data rate or wire speed (informal language) of a digital communication channel is the capacity excluding the physical layer protocol overhead, for example time division multiplex (TDM) framing bits, redundant forward error correction (FEC) codes, equalizer training symbols and other channel coding. Error-correcting codes are common especially in wireless communication systems, broadband modem standards and modern copper-based high-speed LANs. The physical layer net bitrate is the datarate measured at a reference point in the interface between the data link layer and physical layer, and may consequently include data link and higher layer overhead. In modems and wireless systems, link adaptation (automatic adaptation of the data rate and the modulation and/or error coding scheme to the signal quality) is often applied. In that context, the term peak bitrate denotes the net bitrate of the fastest and least robust transmission mode, used for example when the distance is very short between sender and transmitter. Some operating systems and network equipment may detect the "connection speed" (informal language) of a network access technology or communication device, implying the current net bit rate. The term line rate in some textbooks is defined as gross bit rate, in others as net bit rate. The relationship between the gross bit rate and net bit rate is affected by the FEC code rate according to the following. net bit rate ≤ gross bit rate × code rate The connection speed of a technology that involves forward error correction typically refers to the physical layer net bit rate in accordance with the above definition. For example, the net bitrate (and thus the "connection speed") of an IEEE 802.11a wireless network is the net bit rate of between 6 and 54 Mbit/s, while the gross bit rate is between 12 and 72 Mbit/s inclusive of error-correcting codes. 
The net bit rate of ISDN2 Basic Rate Interface (2 B-channels + 1 D-channel) of 64+64+16 = 144 kbit/s also refers to the payload data rates, while the D channel signalling rate is 16 kbit/s. The net bit rate of the Ethernet 100BASE-TX physical layer standard is 100 Mbit/s, while the gross bitrate is 125 Mbit/s, due to the 4B5B (four bit over five bit) encoding. In this case, the gross bit rate is equal to the symbol rate or pulse rate of 125 megabaud, due to the NRZI line code. In communications technologies without forward error correction and other physical layer protocol overhead, there is no distinction between gross bit rate and physical layer net bit rate. For example, the net as well as gross bit rate of Ethernet 10BASE-T is 10 Mbit/s. Due to the Manchester line code, each bit is represented by two pulses, resulting in a pulse rate of 20 megabaud. The "connection speed" of a V.92 voiceband modem typically refers to the gross bit rate, since there is no additional error-correction code. It can be up to 56,000 bit/s downstream and 48,000 bit/s upstream. A lower bit rate may be chosen during the connection establishment phase due to adaptive modulation – slower but more robust modulation schemes are chosen in case of poor signal-to-noise ratio. Due to data compression, the actual data transmission rate or throughput (see below) may be higher. The channel capacity, also known as the Shannon capacity, is a theoretical upper bound for the maximum net bitrate, exclusive of forward error correction coding, that is possible without bit errors for a certain physical analog node-to-node communication link. net bit rate ≤ channel capacity The channel capacity is proportional to the analog bandwidth in hertz. This proportionality is called Hartley's law. Consequently, the net bit rate is sometimes called digital bandwidth capacity in bit/s. === Network throughput === The term throughput, essentially the same thing as digital bandwidth consumption, denotes the achieved average useful bit rate in a computer network over a logical or physical communication link or through a network node, typically measured at a reference point above the data link layer. This implies that the throughput often excludes data link layer protocol overhead. The throughput is affected by the traffic load from the data source in question, as well as from other sources sharing the same network resources. See also measuring network throughput. === Goodput (data transfer rate) === Goodput or data transfer rate refers to the achieved average net bit rate that is delivered to the application layer, exclusive of all protocol overhead, data packets retransmissions, etc. For example, in the case of file transfer, the goodput corresponds to the achieved file transfer rate. The file transfer rate in bit/s can be calculated as the file size (in bytes) divided by the file transfer time (in seconds) and multiplied by eight. As an example, the goodput or data transfer rate of a V.92 voiceband modem is affected by the modem physical layer and data link layer protocols. It is sometimes higher than the physical layer data rate due to V.44 data compression, and sometimes lower due to bit-errors and automatic repeat request retransmissions. If no data compression is provided by the network equipment or protocols, we have the following relation: goodput ≤ throughput ≤ maximum throughput ≤ net bit rate for a certain communication path. 
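A short numerical sketch ties together the relations in this section, reusing the 100BASE-TX figures quoted above; the file-transfer case at the end is an invented example, and the function names are ours.

```python
# Relations between gross bit rate, code rate, net bit rate, and goodput.

def net_bit_rate(gross_bit_rate, code_rate):
    # net bit rate <= gross bit rate x code rate
    return gross_bit_rate * code_rate

# 100BASE-TX: 125 Mbit/s gross with 4B5B coding (code rate 4/5) -> 100 Mbit/s net
print(net_bit_rate(125_000_000, 4 / 5))          # 100000000.0

def goodput(file_size_bytes, transfer_seconds):
    # file transfer rate in bit/s = file size in bytes / transfer time in seconds x 8
    return file_size_bytes / transfer_seconds * 8

# Hypothetical case: a 25 MB file delivered in 4 s gives 50 Mbit/s of goodput.
print(goodput(25_000_000, 4))                    # 50000000.0
```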
=== Progress trends === These are examples of physical layer net bit rates in proposed communication standard interfaces and devices: == Multimedia == In digital multimedia, bit rate represents the amount of information, or detail, that is stored per unit of time of a recording. The bitrate depends on several factors: The original material may be sampled at different frequencies. The samples may use different numbers of bits. The data may be encoded by different schemes. The information may be digitally compressed by different algorithms or to different degrees. Generally, choices are made about the above factors in order to achieve the desired trade-off between minimizing the bitrate and maximizing the quality of the material when it is played. If lossy data compression is used on audio or visual data, differences from the original signal will be introduced; if the compression is substantial, or lossy data is decompressed and recompressed, this may become noticeable in the form of compression artifacts. Whether these affect the perceived quality, and if so how much, depends on the compression scheme, encoder power, the characteristics of the input data, the listener's perceptions, the listener's familiarity with artifacts, and the listening or viewing environment. The encoding bit rate of a multimedia file is its size in bytes divided by the playback time of the recording (in seconds), multiplied by eight. For real-time streaming multimedia, the encoding bit rate is the goodput that is required to avoid playback interruption. The term average bitrate is used in case of variable bitrate multimedia source coding schemes. In this context, the peak bit rate is the maximum number of bits required for any short-term block of compressed data. A theoretical lower bound for the encoding bit rate for lossless data compression is the source information rate, also known as the entropy rate. The bitrates in this section are approximately the minimum that the average listener in a typical listening or viewing environment, when using the best available compression, would perceive as not significantly worse than the reference standard. === Audio === ==== CD-DA ==== Compact Disc Digital Audio (CD-DA) uses 44,100 samples per second, each with a bit depth of 16, a format sometimes abbreviated like "16bit / 44.1kHz". CD-DA is also stereo, using a left and right channel, so the amount of audio data per second is double that of mono, where only a single channel is used. The bit rate of PCM audio data can be calculated with the following formula: bit rate = sample rate × bit depth × channels {\displaystyle {\text{bit rate}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}} For example, the bit rate of a CD-DA recording (44.1 kHz sampling rate, 16 bits per sample and two channels) can be calculated as follows: 44 , 100 × 16 × 2 = 1 , 411 , 200 bit/s = 1 , 411.2 kbit/s {\displaystyle 44,100\times 16\times 2=1,411,200\ {\text{bit/s}}=1,411.2\ {\text{kbit/s}}} The cumulative size of a length of PCM audio data (excluding a file header or other metadata) can be calculated using the following formula: size in bits = sample rate × bit depth × channels × time . 
{\displaystyle {\text{size in bits}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}\times {\text{time}}.} The cumulative size in bytes can be found by dividing the file size in bits by the number of bits in a byte, which is eight: size in bytes = size in bits 8 {\displaystyle {\text{size in bytes}}={\frac {\text{size in bits}}{8}}} Therefore, 80 minutes (4,800 seconds) of CD-DA data requires 846,720,000 bytes of storage: 44 , 100 × 16 × 2 × 4 , 800 8 = 846 , 720 , 000 bytes ≈ 847 MB ≈ 807.5 MiB {\displaystyle {\frac {44,100\times 16\times 2\times 4,800}{8}}=846,720,000\ {\text{bytes}}\approx 847\ {\text{MB}}\approx 807.5\ {\text{MiB}}} where MiB is mebibytes with binary prefix Mi, meaning 220 = 1,048,576. ==== MP3 ==== The MP3 audio format provides lossy data compression. Audio quality improves with increasing bitrate: 32 kbit/s – generally acceptable only for speech 96 kbit/s – generally used for speech or low-quality streaming 128 or 160 kbit/s – mid-range bitrate quality 192 kbit/s – medium quality bitrate 256 kbit/s – a commonly used high-quality bitrate 320 kbit/s – highest level supported by the MP3 standard ==== Other audio ==== 700 bit/s – lowest bitrate open-source speech codec Codec2, but Codec2 sounds much better at 1.2 kbit/s 800 bit/s – minimum necessary for recognizable speech, using the special-purpose FS-1015 speech codecs 2.15 kbit/s – minimum bitrate available through the open-source Speex codec 6 kbit/s – minimum bitrate available through the open-source Opus codec 8 kbit/s – telephone quality using speech codecs 32–500 kbit/s – lossy audio as used in Ogg Vorbis 256 kbit/s – Digital Audio Broadcasting (DAB) MP2 bit rate required to achieve a high quality signal 292 kbit/s – Sony Adaptive Transform Acoustic Coding (ATRAC) for use on the MiniDisc Format 400 kbit/s–1,411 kbit/s – lossless audio as used in formats such as Free Lossless Audio Codec, WavPack, or Monkey's Audio to compress CD audio 1,411.2 kbit/s – Linear PCM sound format of CD-DA 5,644.8 kbit/s – DSD, which is a trademarked implementation of PDM sound format used on Super Audio CD. 6.144 Mbit/s – E-AC-3 (Dolby Digital Plus), an enhanced coding system based on the AC-3 codec 9.6 Mbit/s – DVD-Audio, a digital format for delivering high-fidelity audio content on a DVD. DVD-Audio is not intended to be a video delivery format and is not the same as video DVDs containing concert films or music videos. These discs cannot be played on a standard DVD-player without DVD-Audio logo. 
18 Mbit/s – advanced lossless audio codec based on Meridian Lossless Packing (MLP) === Video === 16 kbit/s – videophone quality (minimum necessary for a consumer-acceptable "talking head" picture using various video compression schemes) 128–384 kbit/s – business-oriented videoconferencing quality using video compression 400 kbit/s YouTube 240p videos (using H.264) 750 kbit/s YouTube 360p videos (using H.264) 1 Mbit/s YouTube 480p videos (using H.264) 1.15 Mbit/s max – VCD quality (using MPEG1 compression) 2.5 Mbit/s YouTube 720p videos (using H.264) 3.5 Mbit/s typ – Standard-definition television quality (with bit-rate reduction from MPEG-2 compression) 3.8 Mbit/s YouTube 720p60 (60 FPS) videos (using H.264) 4.5 Mbit/s YouTube 1080p videos (using H.264) 6.8 Mbit/s YouTube 1080p60 (60 FPS) videos (using H.264) 9.8 Mbit/s max – DVD (using MPEG2 compression) 8 to 15 Mbit/s typ – HDTV quality (with bit-rate reduction from MPEG-4 AVC compression) 19 Mbit/s approximate – HDV 720p (using MPEG2 compression) 24 Mbit/s max – AVCHD (using MPEG4 AVC compression) 25 Mbit/s approximate – HDV 1080i (using MPEG2 compression) 29.4 Mbit/s max – HD DVD 40 Mbit/s max – 1080p Blu-ray Disc (using MPEG2, MPEG4 AVC or VC-1 compression) 250 Mbit/s max – DCP (using JPEG 2000 compression) 1.4 Gbit/s – 10-bit 4:4:4 uncompressed 1080p at 24 FPS === Notes === For technical reasons (hardware/software protocols, overheads, encoding schemes, etc.) the actual bit rates used by some of the compared-to devices may be significantly higher than listed above. For example, telephone circuits using μlaw or A-law companding (pulse code modulation) yield 64 kbit/s. == See also == == References == == External links == Live Video Streaming Bitrate Calculator Calculate bitrate for video and live streams DVD-HQ bit rate calculator Calculate bit rate for various types of digital video media. Maximum PC - Do Higher MP3 Bit Rates Pay Off? Valid8 Data Rate Calculator
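The CD-DA figures in the audio subsection above can be reproduced from the PCM formulas given there; the following short sketch (with our own function names) checks them.

```python
# PCM bit rate and cumulative size, per the CD-DA formulas above.

def pcm_bit_rate(sample_rate, bit_depth, channels):
    return sample_rate * bit_depth * channels              # bit/s

def pcm_size_bytes(sample_rate, bit_depth, channels, seconds):
    return pcm_bit_rate(sample_rate, bit_depth, channels) * seconds // 8

print(pcm_bit_rate(44_100, 16, 2))                         # 1411200 bit/s
print(pcm_size_bytes(44_100, 16, 2, 4_800))                # 846720000 bytes for 80 minutes
print(round(pcm_size_bytes(44_100, 16, 2, 4_800) / 2**20, 1))   # about 807.5 MiB
```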
Wikipedia/Transmission_rate
In computer networking, upstream refers to the direction in which data can be transferred from the client to the server (uploading). This differs greatly from downstream not only in theory and usage, but also in that upstream speeds are usually at a premium. Whereas downstream speed is important to the average home user for purposes of downloading content, uploads are used mainly for web server applications and similar processes where the sending of data is critical. Upstream speeds are also important to users of peer-to-peer software. ADSL and cable modems are asymmetric, with the upstream data rate much lower than that of its downstream. Symmetric connections such as Symmetric Digital Subscriber Line (SDSL) and T1, however, offer identical upstream and downstream rates. If a node A on the Internet is closer (fewer hops away) to the Internet backbone than a node B, then A is said to be upstream of B or conversely, B is downstream of A. Related to this is the idea of upstream providers. An upstream provider is usually a large ISP that provides Internet access to a local ISP. Hence, the word upstream also refers to the data connection between two ISPs. == See also == Upstream server Return channel == References ==
Wikipedia/Upstream_(networking)
In the mathematical theory of probability, the entropy rate or source information rate is a function assigning an entropy to a stochastic process. For a strongly stationary process, the conditional entropy for latest random variable eventually tend towards this rate value. == Definition == A process X {\displaystyle X} with a countable index gives rise to the sequence of its joint entropies H n ( X 1 , X 2 , … X n ) {\displaystyle H_{n}(X_{1},X_{2},\dots X_{n})} . If the limit exists, the entropy rate is defined as H ( X ) := lim n → ∞ 1 n H n . {\displaystyle H(X):=\lim _{n\to \infty }{\tfrac {1}{n}}H_{n}.} Note that given any sequence ( a n ) n {\displaystyle (a_{n})_{n}} with a 0 = 0 {\displaystyle a_{0}=0} and letting Δ a k := a k − a k − 1 {\displaystyle \Delta a_{k}:=a_{k}-a_{k-1}} , by telescoping one has a n = ∑ k = 1 n Δ a k {\displaystyle a_{n}={\textstyle \sum _{k=1}^{n}}\Delta a_{k}} . The entropy rate thus computes the mean of the first n {\displaystyle n} such entropy changes, with n {\displaystyle n} going to infinity. The behaviour of joint entropies from one index to the next is also explicitly subject in some characterizations of entropy. == Discussion == While X {\displaystyle X} may be understood as a sequence of random variables, the entropy rate H ( X ) {\displaystyle H(X)} represents the average entropy change per one random variable, in the long term. It can be thought of as a general property of stochastic sources - this is the subject of the asymptotic equipartition property. === For strongly stationary processes === A stochastic process also gives rise to a sequence of conditional entropies, comprising more and more random variables. For strongly stationary stochastic processes, the entropy rate equals the limit of that sequence H ( X ) = lim n → ∞ H ( X n | X n − 1 , X n − 2 , … X 1 ) {\displaystyle H(X)=\lim _{n\to \infty }H(X_{n}|X_{n-1},X_{n-2},\dots X_{1})} The quantity given by the limit on the right is also denoted H ′ ( X ) {\displaystyle H'(X)} , which is motivated to the extent that here this is then again a rate associated with the process, in the above sense. === For Markov chains === Since a stochastic process defined by a Markov chain that is irreducible and aperiodic has a stationary distribution, the entropy rate is independent of the initial distribution. For example, consider a Markov chain defined on a countable number of states. Given its right stochastic transition matrix P i j {\displaystyle P_{ij}} and an entropy h i := − ∑ j P i j log ⁡ P i j {\displaystyle h_{i}:=-\sum _{j}P_{ij}\log P_{ij}} associated with each state, one finds H ( X ) = ∑ i μ i h i , {\displaystyle \displaystyle H(X)=\sum _{i}\mu _{i}h_{i},} where μ i {\displaystyle \mu _{i}} is the asymptotic distribution of the chain. In particular, it follows that the entropy rate of an i.i.d. stochastic process is the same as the entropy of any individual member in the process. === For hidden Markov models === The entropy rate of hidden Markov models (HMM) has no known closed-form solution. However, it has known upper and lower bounds. Let the underlying Markov chain X 1 : ∞ {\displaystyle X_{1:\infty }} be stationary, and let Y 1 : ∞ {\displaystyle Y_{1:\infty }} be the observable states, then we have H ( Y n | X 1 , Y 1 : n − 1 ) ≤ H ( Y ) ≤ H ( Y n | Y 1 : n − 1 ) {\displaystyle H(Y_{n}|X_{1},Y_{1:n-1})\leq H(Y)\leq H(Y_{n}|Y_{1:n-1})} and at the limit of n → ∞ {\displaystyle n\to \infty } , both sides converge to the middle. 
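As a numerical illustration of the Markov-chain formula above, the following sketch computes the entropy rate of a two-state chain. The transition matrix is an invented example, and the stationary distribution is obtained by simple iteration rather than by solving the eigenvector equation exactly.

```python
# Entropy rate of a small Markov chain: H(X) = sum_i mu_i * h_i  (in bits per step).

import math

P = [[0.9, 0.1],      # right stochastic transition matrix P_ij (illustrative values)
     [0.4, 0.6]]

def row_entropy(row):
    # h_i = -sum_j P_ij log2 P_ij
    return -sum(p * math.log2(p) for p in row if p > 0)

def stationary(P, iterations=10_000):
    # iterate mu <- mu P from a uniform start until it settles
    mu = [1.0 / len(P)] * len(P)
    for _ in range(iterations):
        mu = [sum(mu[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return mu

mu = stationary(P)
H = sum(m * row_entropy(row) for m, row in zip(mu, P))
print(mu)   # approximately [0.8, 0.2]
print(H)    # approximately 0.57 bits per step
```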
== Applications == The entropy rate may be used to estimate the complexity of stochastic processes. It is used in diverse applications ranging from characterizing the complexity of languages, blind source separation, through to optimizing quantizers and data compression algorithms. For example, a maximum entropy rate criterion may be used for feature selection in machine learning. == See also == Information source (mathematics) Markov information source Asymptotic equipartition property Maximal entropy random walk - chosen to maximize entropy rate == References == == External links == Cover, T. and Thomas, J. Elements of Information Theory. John Wiley and Sons, Inc. Second Edition, 2006.
Wikipedia/Source_information_rate
In digital communications, a chip is a pulse of a direct-sequence spread spectrum (DSSS) code, such as a pseudo-random noise (PN) code sequence used in direct-sequence code-division multiple access (CDMA) channel access techniques. In a binary direct-sequence system, each chip is typically a rectangular pulse of +1 or −1 amplitude, which is multiplied by a data sequence (similarly +1 or −1 representing the message bits) and by a carrier waveform to make the transmitted signal. The chips are therefore just the bit sequence out of the code generator; they are called chips to avoid confusing them with message bits. The chip rate of a code is the number of pulses per second (chips per second) at which the code is transmitted (or received). The chip rate is larger than the symbol rate, meaning that one symbol is represented by multiple chips. The ratio is known as the spreading factor (SF) or processing gain: SF = chip rate symbol rate {\displaystyle {\mbox{SF}}={\frac {\mbox{chip rate}}{\mbox{symbol rate}}}} == Orthogonal variable spreading factor == Orthogonal variable spreading factor (OVSF) is an implementation of code-division multiple access (CDMA) where before each signal is transmitted, the signal is spread over a wide spectrum range through the use of a user's code. Users' codes are carefully chosen to be mutually orthogonal to each other. These codes are derived from an OVSF code tree, and each user is given a different code. An OVSF code tree is a complete binary tree that reflects the construction of Hadamard matrices. == See also == Baud GPS signals == References == == External links == CDMA basics
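The spreading-factor relation above amounts to a single division. In the small sketch below, the 3.84 Mchip/s figure is the W-CDMA chip rate, used here only as an example, and the spreading factor of 128 is an arbitrary choice; neither value comes from this article.

```python
# SF = chip rate / symbol rate

chip_rate = 3_840_000            # chips per second (W-CDMA value, used as an example)
spreading_factor = 128           # one symbol is spread over 128 chips (arbitrary choice)
symbol_rate = chip_rate / spreading_factor

print(symbol_rate)               # 30000.0 symbols per second
print(chip_rate / symbol_rate)   # recovers the spreading factor, 128.0
```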
Wikipedia/Chip_rate
The attention schema theory (AST) of consciousness is a neuroscientific and evolutionary theory of consciousness (or subjective awareness) developed by neuroscientist Michael Graziano at Princeton University. It proposes that brains construct subjective awareness as a schematic model of the process of attention. The theory is a materialist theory of consciousness. It shares similarities with the illusionist ideas of philosophers like Daniel Dennett, Patricia Churchland, and Keith Frankish. Graziano proposed that an attention schema is like the body schema. Just as the brain constructs a simplified model of the body to monitor and control its movement, it also constructs a simplified model of attention to help monitor and control its own attention. The information in that model, portraying an incomplete and simplified version of attention, leads the brain to conclude that it has a non-physical essence of awareness. Thus subjective awareness is the brain's efficient but imperfect model of its own attention. This approach intends to explain how awareness and attention are similar in many respects, yet are sometimes dissociated; how the brain can be aware of internal and external events, and provides testable predictions. In the theory, an attention schema necessarily evolved due to its fundamental adaptive uses in perception, cognition, and social interaction. == Description == The AST describes how an information-processing machine can claim to have a conscious, subjective experience, while having no means to discern the difference between its claim and reality. In the theory, the brain is an information processor captive to the information constructed within it. In this approach, the challenge of explaining consciousness is not, "How does the brain produce an ineffable internal experience," but rather, "How does the brain construct a quirky self description, and what is the useful cognitive role of that self model?" In other words, because we claim to be conscious, some mechanism in the brain must therefore have computed the requisite information about consciousness to enable the system to output that claim. AST proposes that this is an adaptive function: it serves as an internal model of one of the brain's most important features: attention. A crucial aspect of the theory is model-based knowledge. The brain constructs rich internal models that lie beneath the level of higher cognition or of language. Cognition has partial access to those internal models, and the content of those models are reported as literal reality. The AST can be summarized in three broad points: The brain is an information-processing device. The brain has a capacity to focus its processing resources more on some signals than on others. That focus may be on incoming sensory signals or internal information such as recalled memories. This ability is called attention. The brain also builds a set of information, or a representation, descriptive of its own attention. This internal model is the attention schema. The attention schema allows a machine to make claims about its consciousness. When it claims to be conscious of concept X (to have a subjective awareness or a mental possession of X), the machine is using higher cognition to access an attention schema, and reporting the information therein. === Example === Suppose a person looks at an apple. When the person reports, "I have a subjective experience of that shiny red apple," three items are linked together in that claim: the self, the apple, and a subjective experience. 
The claim about the presence of a self depends on cognitive access to a self model. Without a self model, and its requisite information, the system would be unable to make claims referencing itself. The claim about the presence of, and properties of, an apple depends on cognitive access to a model of the apple, presumably constructed in the visual system. Again, without the requisite information, the system would be unable to make any claims about the apple or its visual properties. In AST, the claim about the presence of subjective experience depends on cognitive access to an internal model of attention. That internal model does not provide a scientifically precise description of attention, complete with the details of neurons, lateral inhibitory synapses, and competitive signals. The model is silent on the physical mechanisms of attention. Instead, like all internal models in the brain, it is simplified and schematic for the sake of efficiency. Accessing the information within these three linked internal models, the cognitive machinery claims that there is a self, that there is an apple, and that the self has a mental possession of the apple. The mental possession is invisible and has no physically descriptive properties, but it has a general location inside the body and a specific anchor to the apple. This mental essence empowers the self to understand, react to, and remember the apple. The brain (as a machine), relying on its (incomplete and inaccurate) model of attention, claims to have a metaphysical consciousness of the apple. === Subjective experience === In the AST, subjective experience (consciousness, or mental possession of an object or experience) is a simplified construct that describes the act of attending to something. The internal model of attention is not constructed at a higher cognitive level. It is not a cognitive self theory, nor is it learned. Instead, it is constructed beneath the level of cognition and is automatic (and necessary), much like the internal model of the apple and the self. The attention schema is a perception-like model of attention, distinct from higher-order cognitive models such as beliefs or intellectually reasoned theories. This explains how a machine with an attention schema contains the requisite information to claim to have a consciousness of something, whether of an apple, a thought, or of itself. The machine can understand consciousness in the same ways that we do; on accessing its internal information, it does not find explanatory meta-information telling it that it is merely computing a conclusion from an internal model; instead, it finds only the narrow contents of the internal models. In AST, humans are machines of that sort. === Connection with illusionism === AST is consistent with the perspective called illusionism. The term "illusion", however, may have connotations that are not quite apt for this theory. Three issues with that label arise. In the AST, the attention schema is a well-functioning internal model, which is not normally in error. This differs from the common view of an 'illusion' as dismissible or harmful. An illusion is often equated with a mirage, falsely indicating a presence that actually does not exist. If consciousness is an illusion, then by implication nothing real is present behind the illusion. But in the AST, consciousness is a good, if detail-poor, account of attention, which is a physical and mechanistic process emergent from the interactions of neurons.
When one claims to be subjectively conscious of something, one is providing a schematized version of the physical reality. An illusion is experienced by some agent. When calling consciousness an illusion, one needs to be careful to define what is meant by "experience" so as to avoid circularity. The AST is not a theory of how the brain has experiences, but rather of how a machine can make the claim to have experiences. By being stuck in a logic loop, or captive to its own internal information, an intelligent agent cannot avoid making such a claim. == Proposed functions of the attention schema == The central hypothesis in AST is that the brain constructs an internal model of attention, the attention schema. Its primary adaptive function is to enable a better, more flexible control of attention. Two main types of functions have been proposed for the attention schema: control of attention and social cognition. === Control of attention === In dynamical systems control, a fundamental principle is that a control system works better and more flexibly if it constructs an internal model of the item it controls. An airplane autopilot system works better if it incorporates a model of the dynamics of the airplane. An airflow and temperature controller for a building works better if it incorporates a rich, predictive model of the building's airflow and temperature dynamics. Similarly, the brain's controller of attention should work better by constructing an internal model of what attention is, how it changes over time, what its consequences are, and what state it is in at any moment. Thus the brain's controller of attention should incorporate an internal model of attention – a set of information that is continuously updated and that reflects the dynamics and the changing state of attention. Since attention is one of the most pervasive and important processes in the brain, the proposed attention schema, helping to control attention, would be of fundamental importance to the system. A growing set of behavioral evidence supports this hypothesis. When subjective awareness of a visual stimulus is absent, people can still direct attention to that stimulus, but that attention loses some aspects of control. It is less stable over time, and is less adaptable when trained on perturbations. Initial experiments suggest that this may be true, and support the proposal that awareness acts like the internal model for the control of attention. === Social cognition === A second proposed function of an attention schema is for social cognition – using the attention schema to model the attentional states of others as well as of ourselves. In effect, just as humans attribute awareness to themselves, it is also attributed to others. An advantage of this use of an attention schema is in behavioral prediction, which can aid in the survival of intelligent social agents. This is because an agent's attention influences its behavior: what an agent is attending to, it is likely to behave toward (and vice versa). An internal model of attention, and its dynamics and consequences, would be useful for predicting behavior. An intelligent agent can also plan its own future partly by predicting its own actions. Research into AST therefore focuses on the overlap between one's own claims of awareness and one's attributions of awareness to others. Initial research using brain scanning in humans suggests that both processes recruit cortical networks that converge on the temporoparietal junction.
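To make the control-theoretic point from the "Control of attention" subsection concrete, the following is a minimal sketch in Python (our own illustration, not a model drawn from the AST literature): a hypothetical linear system is steered toward a target by a purely reactive controller and by a controller that carries an internal model of that system, and the model-based controller tracks the target more accurately. The plant parameters A and B, the target, and both controllers are assumptions made only for this example.

# Minimal sketch (assumed toy example): a controller that uses an internal
# model of the system it steers outperforms a purely reactive controller.
A, B = 0.9, 0.5          # assumed linear plant: x[t+1] = A*x[t] + B*u[t]
TARGET = 1.0

def reactive_controller(x):
    # No internal model: push proportionally toward the target.
    return 0.5 * (TARGET - x)

def model_based_controller(x):
    # Uses an internal model of the plant (A, B) to compute the input that
    # would place the next state exactly on the target.
    return (TARGET - A * x) / B

def run(controller, steps=20):
    x, total_error = 0.0, 0.0
    for _ in range(steps):
        x = A * x + B * controller(x)
        total_error += abs(TARGET - x)
    return total_error

print("cumulative tracking error, reactive controller   :", round(run(reactive_controller), 3))
print("cumulative tracking error, model-based controller:", round(run(model_based_controller), 3))

In the same spirit, AST proposes that an attention schema gives the brain's controller of attention the kind of predictive internal model that the second controller has of its plant.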
== Analogy to the body schema == AST was developed in analogy to the psychological and neuroscientific work on the body schema, an area of research to which Graziano contributed heavily in his previous publications. In this section, the central ideas of AST are explained by use of the analogy to the body schema. === Example === Suppose a person, Kevin, has reached out and grasped an apple, and is asked what he is holding. He can say that the object is an apple, and can describe its properties. This is because Kevin's brain has constructed a schematic description of the apple, here called an internal model. This internal model is a set of information, about size, color, shape, and location, that is constantly updated as new signals are processed. This model allows Kevin's brain to react to the apple and even predict how it may behave in different circumstances. Kevin's brain has constructed an apple schema. His cognitive and linguistic processors can access this internal model of an apple, and thus he can answer questions about it. Now Kevin is asked, "How are you holding the apple? What is your physical relationship to the apple?" Once again Kevin can answer. The reason is that, in addition to an internal model of the apple, Kevin's brain also constructs an internal model of his body, including his arm and hand. This internal model (the body schema) is a set of information, constantly updated as new signals are processed, about the size and shape of Kevin's limbs, how they are hinged, how they tend to move, and their state at each moment and in the near future. The primary purpose of this body schema is to allow Kevin's brain to control movement. Because he knows the state that his arm is in, he can better guide its movement. A side effect of this internal body schema is that he can explicitly talk about his body. His cognitive and linguistic processors can access this body schema, and therefore Kevin can answer, "I am grasping the apple with my hand, while my arm is outstretched." However, the body schema is limited. If Kevin is asked, "How many muscles are in your arm? Where do they attach to the bones?" he cannot answer based on his body schema. He may have intellectual knowledge learned from a book, but he has no immediate insight into the muscles of his particular arm. The body schema is a reduced model which lacks that level of mechanistic detail. AST takes this analysis one step further, and includes Kevin's ability to pay attention to the apple, in addition to physically grasping it. In AST's definition, attention means that Kevin's brain has focused processing resources on the apple. The internal model of the apple has been 'boosted in strength', and as a result Kevin's brain processes the apple more deeply and is more likely to store it in memory or to trigger a response to it. In this definition, attention is a mechanistic, data-handling process. It involves a relative deployment of processing resources to a specific signal. Now if Kevin is asked, "What is your mental relationship to the apple?", he can answer this question too. According to AST, this is because Kevin's brain constructs not only an internal model of the apple and his body, but also an internal model of his attention. This attention schema is a set of information describing what attention is, its basic properties, dynamics, consequences, and its state at a particular moment.
Kevin's cognitive and linguistic machinery has access to this internal model, and therefore Kevin can describe his mental relationship to the apple. However, just as in the case of the body schema, the attention schema lacks information about its mechanistic details. It does not contain information about neurons, synapses, or electrochemical signals that make attention possible. As a result, Kevin reports an experience lacking clear physical attributes. He says, "I have a mental grasp of the apple. That mental possession, in and of itself, has no physical properties. It just is. It's vaguely located inside me. It is what allows me to know about that apple, remember it, and react to it. It's my mental self taking hold of the apple – my experience of the apple." Here Kevin describes a subjective, experiential consciousness of the apple, which seems (from his point of view) to transcend any physical mechanism. Again, this is only because it is an incomplete description of the physical reality: Kevin's account of his consciousness is a simplified, schematic description of his state of attention. The example given above relates to a consciousness of an apple. The same reasoning can be applied to other concepts, such as consciousness of a sound, a memory, or oneself as a whole. == See also == Integrated information theory of consciousness Global workspace theory of consciousness == References == == Further reading == == External links == How Consciousness Works. And Why We Believe in Ghosts (21 August 2013) – Michael Graziano – Aeon Consciousness and the Unashamed Rationalist (30 August 2013) – Michael Graziano – HuffPost Are We Really Conscious? (10 October 2014) – Michael Graziano – The New York Times Can We Make Consciousness into an Engineering Problem? (10 July 2015) – Michael Graziano – Aeon Rethinking Consciousness: A Q&A with Michael Graziano (29 July 2015) – Evan Nesterak & Michael Graziano – The Psych Report Archived 31 December 2019 at the Wayback Machine What is Consciousness? Dr. Michael Graziano on Attention Schema Theory (4 February 2018) – Isabel Pastor Guzman & Michael Graziano – Brain World
Wikipedia/Attention_schema_theory
In the philosophy of mind, double-aspect theory is the view that the mental and the physical are two aspects of, or perspectives on, the same substance. It is also called dual-aspect monism, not to be confused with mind–body dualism. The theory's relationship to neutral monism is ill-defined. Neutral monism and the dual-aspect theory share a central claim: there is an underlying reality that is neither mental nor physical. But that is where the agreement stops. Neutral monism has no room for the central feature of the dual-aspect theory: the mental and physical aspects, sides, or properties that characterize the underlying entities of dual-aspect theory. The neutral monist accepts the mental/physical distinction. According to Harald Atmanspacher, "dual-aspect approaches consider the mental and physical domains of reality as aspects, or manifestations, of an underlying undivided reality in which the mental and the physical do not exist as separate domains. In such a framework, the distinction between mind and matter results from an epistemic split that separates the aspects of the underlying reality. Consequently, the status of the psychophysically neutral domain is considered as ontic relative to the mind–matter distinction". == Theories == Possible double-aspect theorists include: Baruch Spinoza, who believed that Nature or God (Deus sive Natura) has infinite aspects, but that Extension and Mind are the only aspects of which we have knowledge. Arthur Schopenhauer, who considered the fundamental aspects of reality to be Will and Representation. David Bohm, who used implicate and explicate order as a means of displaying dual aspects. Gustav Fechner Mark Solms, neuropsychoanalyst, for whom dual-aspect monism represents a matrix of ontological juxtaposition of psychoanalytical and neuroscientific knowledge from two distinct perspectives: looking from the inside and looking from the outside. George Henry Lewes Thomas Jay Oord - calls his version "Material-Mental Monism" John Polkinghorne Brian O'Shaughnessy on the dual aspect theory of the Will Thomas Nagel David Chalmers, who explores a double-aspect view of information, with similarities to Kenneth Sayre's information-based neutral monism J. A. Scott Kelso, The Complementary Nature (MIT Press, 2006) attempts to reconcile what it calls "the philosophy of complementary pairs" with the science of coordination dynamics. === Pauli-Jung conjecture === Pauli and Jung's approach to dual-aspect monism has a very specific further feature, namely that different aspects may show a complementarity in a quantum physical sense. That is, the Pauli-Jung conjecture implies that with regard to mental and physical states there may be incompatible descriptions of different parts that emerge from the whole. This stands in close analogy to quantum physics, where complementary properties cannot be determined jointly with accuracy. Atmanspacher further refers to Paul Bernays' views on complementarity in physics and in philosophy when he states that "Two descriptions are complementary if they mutually exclude each other, yet are both necessary to describe a situation exhaustively." == See also == Anomalous monism Neutral monism Property dualism Samkhya darsana == Notes == == External links == Neutral Monism in Relation to Dual Aspect Theory
Wikipedia/Double-aspect_theory
The Science of Consciousness (TSC; formerly Toward a Science of Consciousness) is an international academic conference that has been held biannually since 1994. It is organized by the Center for Consciousness Studies of the University of Arizona. Alternate conferences are held in Arizona (either Tucson or Phoenix), with the others in locations worldwide. The conference is devoted exclusively to the investigation of consciousness. == Associated people == The main organizer is Stuart Hameroff, an anesthesiologist and the director of the center that hosts the conference. One of the speakers at the first conference, David Chalmers, co-organized some of the following ones, until the event drifted too far from the scientific mainstream. == Conference books == Three books published by MIT Press have resulted from the conference. John Benjamins published a book containing selected proceedings from TSC 1999. == Independent academic coverage == An essay review, "Toward a science of consciousness: Tucson I and II" by J. Gray, was printed in ISR Interdisciplinary Science Reviews, Volume 24, Issue 4 (1 April 1999), pp. 255–260. Michael Punt reviewed TSC 2002 in the journal Leonardo. In the Journal of Consciousness Exploration & Research, Christopher Holvenstot reviewed TSC 2011, likening it to The Greatest Show on Earth. A review of TSC 2012 may be found in the Journal of Consciousness Studies. A commentary on dropping the word "Toward" was published in the Journal of Consciousness Studies in 2016. == Media coverage == Chapter 8 of John Horgan's book The Undiscovered Mind is entirely devoted to his experiences at the first (1994) TSC conference. In The New York Times, George Johnson wrote about the 2016 conference that "wild speculations and carnivalesque pseudoscience were juxtaposed with sober sessions". The conference and its main organizers were the subject of a long feature in June 2018, first in the Chronicle of Higher Education, and re-published in The Guardian. Tom Bartlett concluded that the conference was "more or less the Stuart [Hameroff] Show. He decides who will and who will not present. [...] Some consciousness researchers believe that the whole shindig has gone off the rails, that it’s seriously damaging the field of consciousness studies, and that it should be shut down." == See also == Association for the Scientific Study of Consciousness, which puts on a similar series of conferences about consciousness. == References == == External links == TSC webpage
Wikipedia/The_Science_of_Consciousness
Holonomic brain theory is a branch of neuroscience investigating the idea that consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. Holonomic brain theory is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry. This specific theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm building on the initial theories of holograms originally formulated by Dennis Gabor. It describes human cognition by modeling the brain as a holographic storage network. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These oscillations are waves and create wave interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform. Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which can also be analyzed with a Fourier transform. In a hologram, any part of the hologram with sufficient size contains the whole of the stored information. In this theory, a piece of a long-term memory is similarly distributed over a dendritic arbor so that each part of the dendritic network contains all the information stored over the entire network. This model allows for important aspects of human consciousness, including the fast associative memory that allows for connections between different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. a certain cluster of neurons). == Origins and development == In 1946 Dennis Gabor invented the hologram mathematically, describing a system where an image can be reconstructed through information that is stored throughout the hologram. He demonstrated that the information pattern of a three-dimensional object can be encoded in a beam of light, which is more-or-less two-dimensional. Gabor also developed a mathematical model for demonstrating a holographic associative memory. One of Gabor's colleagues, Pieter Jacobus Van Heerden, also developed a related holographic mathematical memory model in 1963. This model contained the key aspect of non-locality, which became important years later when, in 1967, experiments by both Braitenberg and Kirschfield showed that exact localization of memory in the brain was false. Karl Pribram had worked with psychologist Karl Lashley on Lashley's engram experiments, which used lesions to determine the exact location of specific memories in primate brains. Lashley made small lesions in the brains and found that these had little effect on memory. On the other hand, Pribram removed large areas of cortex, leading to multiple serious deficits in memory and cognitive function. Memories were not stored in a single neuron or exact location, but were spread over the entirety of a neural network. Lashley suggested that brain interference patterns could play a role in perception, but was unsure how such patterns might be generated in the brain or how they would lead to brain function. 
Several years later an article by neurophysiologist John Eccles described how a wave could be generated at the branching ends of pre-synaptic axons. Multiple of these waves could create interference patterns. Soon after, Emmett Leith was successful in storing visual images through the interference patterns of laser beams, inspired by Gabor's previous use of Fourier transformations to store information within a hologram. After studying the work of Eccles and that of Leith, Pribram put forward the hypothesis that memory might take the form of interference patterns that resemble laser-produced holograms. In 1980, physicist David Bohm presented his ideas of holomovement and Implicate and explicate order. Pribram became aware of Bohm's work in 1975 and realized that, since a hologram could store information within patterns of interference and then recreate that information when activated, it could serve as a strong metaphor for brain function. Pribram was further encouraged in this line of speculation by the fact that neurophysiologists Russell and Karen DeValois together established "the spatial frequency encoding displayed by cells of the visual cortex was best described as a Fourier transform of the input pattern." == Theory overview == === Hologram and holonomy === A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram. Both processes of storage and retrieval are carried out in a way described by Fourier transformation equations. As long as a part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image, but the image may have unwanted changes, called noise. An analogy to this is the broadcasting region of a radio antenna. In each smaller individual location within the entire area it is possible to access every channel, similar to how the entirety of the information of a hologram is contained within a part. Another analogy of a hologram is the way sunlight illuminates objects in the visual field of an observer. It doesn't matter how narrow the beam of sunlight is. The beam always contains all the information of the object, and when conjugated by a lens of a camera or the eyeball, produces the same full three-dimensional image. The Fourier transform formula converts spatial forms to spatial wave frequencies and vice versa, as all objects are in essence vibratory structures. Different types of lenses, acting similarly to optic lenses, can alter the frequency nature of information that is transferred. This non-locality of information storage within the hologram is crucial, because even if most parts are damaged, the entirety will be contained within even a single remaining part of sufficient size. Pribram and others noted the similarities between an optical hologram and memory storage in the human brain. According to the holonomic brain theory, memories are stored within certain general regions, but stored non-locally within those regions. This allows the brain to maintain function and memory even when it is damaged. It is only when there exist no parts big enough to contain the whole that the memory is lost. This can also explain why some children retain normal intelligence when large portions of their brain—in some cases, half—are removed. 
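The "whole in every part" property described above can be illustrated numerically. The following sketch in Python (a loose illustration of distributed, Fourier-domain storage, not a simulation of Pribram's model; the pattern, block size, and all numbers are arbitrary assumptions) stores a small pattern as its two-dimensional Fourier transform, discards most of the stored coefficients, and reconstructs a blurred but complete version of the whole pattern rather than a cropped piece of it.

# Loose illustration: information stored in the Fourier domain is distributed,
# so a fragment of the "hologram" still yields a degraded version of the whole.
import numpy as np

size = 64
y, x = np.mgrid[0:size, 0:size]
pattern = np.zeros((size, size))
for cy, cx in [(16, 20), (40, 45), (50, 12)]:   # a few bright blobs
    pattern += np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 20.0)

# "Store" the pattern as its 2-D Fourier transform (a stand-in for a hologram).
hologram = np.fft.fftshift(np.fft.fft2(pattern))

# Keep only a small central fragment of the stored transform; discard the rest.
fragment = np.zeros_like(hologram)
c, keep = size // 2, 8
fragment[c - keep:c + keep, c - keep:c + keep] = hologram[c - keep:c + keep, c - keep:c + keep]

# Reconstruct from the fragment alone; the result is blurred ("noisy") but
# still resembles the entire original pattern.
recon = np.real(np.fft.ifft2(np.fft.ifftshift(fragment)))
corr = np.corrcoef(pattern.ravel(), recon.ravel())[0, 1]
print(f"correlation between original and fragment-based reconstruction: {corr:.2f}")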
Distributed storage can also explain why memory is not lost when the brain is sliced in different cross-sections. Pribram proposed that neural holograms were formed by the diffraction patterns of oscillating electric waves within the cortex. Representation occurs as a dynamical transformation in a distributed network of dendritic microprocesses. It is important to note the difference between the idea of a holonomic brain and a holographic one. Pribram does not suggest that the brain functions as a single hologram. Rather, the waves within smaller neural networks create localized holograms within the larger workings of the brain. This patch holography is called holonomy or windowed Fourier transformation. A holographic model can also account for other features of memory that more traditional models cannot. The Hopfield memory model has an early memory saturation point; as this point is approached, memory retrieval drastically slows and becomes unreliable. On the other hand, holographic memory models have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and model forgetting through "lossy storage". === Synaptodendritic web === In classic brain theory the summation of electrical inputs to the dendrites and soma (cell body) of a neuron either inhibits the neuron or excites it and sets off an action potential down the axon to where it synapses with the next neuron. However, this fails to account for different varieties of synapses beyond the traditional axodendritic (axon to dendrite). There is evidence for the existence of other kinds of synapses, including serial synapses and those between dendrites and soma and between different dendrites. Many synaptic locations are functionally bipolar, meaning they can both send and receive impulses from each neuron, distributing input and output over the entire group of dendrites. Processes in this dendritic arbor, the network of teledendrons and dendrites, occur due to the oscillations of polarizations in the membrane of the fine-fibered dendrites, not due to the propagated nerve impulses associated with action potentials. Pribram posits that the length of the delay of an input signal in the dendritic arbor before it travels down the axon is related to mental awareness. The shorter the delay, the more unconscious the action, while a longer delay indicates a longer period of awareness. A study by David Alkon showed that after unconscious Pavlovian conditioning there was a proportionally greater reduction in the volume of the dendritic arbor, akin to synaptic elimination when experience increases the automaticity of an action. Pribram and others theorize that, while unconscious behavior is mediated by impulses through nerve circuits, conscious behavior arises from microprocesses in the dendritic arbor. At the same time, the dendritic network is extremely complex, able to receive 100,000 to 200,000 inputs in a single tree, due to the large amount of branching and the many dendritic spines protruding from the branches. Furthermore, synaptic hyperpolarization and depolarization remain somewhat isolated due to the resistance from the narrow dendritic spine stalk, allowing a polarization to spread without much interruption to the other spines. This spread is further aided intracellularly by the microtubules and extracellularly by glial cells.
These polarizations act as waves in the synaptodendritic network, and the existence of multiple waves at once gives rise to interference patterns. === Deep and surface structure of memory === Pribram suggests that there are two layers of cortical processing: a surface structure of separated and localized neural circuits and a deep structure of the dendritic arborization that binds the surface structure together. The deep structure contains distributed memory, while the surface structure acts as the retrieval mechanism. Binding occurs through the temporal synchronization of the oscillating polarizations in the synaptodendritic web. It had been thought that binding only occurred when there was no phase lead or lag present, but a study by Saul and Humphrey found that cells in the lateral geniculate nucleus do in fact produce these. Here phase lead and lag act to enhance sensory discrimination, acting as a frame to capture important features. These filters are also similar to the lenses necessary for holographic functioning. Pribram notes that holographic memories show large capacities, parallel processing and content addressability for rapid recognition, associative storage for perceptual completion and for associative recall. In systems endowed with memory storage, these interactions therefore lead to progressively more self-determination. == Recent studies == While Pribram originally developed the holonomic brain theory as an analogy for certain brain processes, several papers (including some more recent ones by Pribram himself) have proposed that the similarity between hologram and certain brain functions is more than just metaphorical, but actually structural. Others still maintain that the relationship is only analogical. Several studies have shown that the same series of operations used in holographic memory models are performed in certain processes concerning temporal memory and optomotor responses. This indicates at least the possibility of the existence of neurological structures with certain holonomic properties. Other studies have demonstrated the possibility that biophoton emission (biological electrical signals that are converted to weak electromagnetic waves in the visible range) may be a necessary condition for the electric activity in the brain to store holographic images. These may play a role in cell communication and certain brain processes including sleep, but further studies are needed to strengthen current ones. Other studies have shown the correlation between more advanced cognitive function and homeothermy. Taking holographic brain models into account, this temperature regulation would reduce distortion of the signal waves, an important condition for holographic systems. See: Computation approach in terms of holographic codes and processing. == Criticism and alternative models == Pribram's holonomic model of brain function did not receive widespread attention at the time, but other quantum models have been developed since, including brain dynamics by Jibu & Yasue and Vitiello's dissipative quantum brain dynamics. Though not directly related to the holonomic model, they continue to move beyond approaches based solely in classic brain theory. === Correlograph === In 1969 scientists D. Wilshaw, O. P. Buneman and H. Longuet-Higgins proposed an alternative, non-holographic model that fulfilled many of the same requirements as Gabor's original holographic model. 
The Gabor model did not explain how the brain could use Fourier analysis on incoming signals or how it would deal with the low signal-noise ratio in reconstructed memories. Longuet-Higgin's correlograph model built on the idea that any system could perform the same functions as a Fourier holograph if it could correlate pairs of patterns. It uses minute pinholes that do not produce diffraction patterns to create a similar reconstruction as that in Fourier holography. Like a hologram, a discrete correlograph can recognize displaced patterns and store information in a parallel and non-local way so it usually will not be destroyed by localized damage. They then expanded the model beyond the correlograph to an associative net where the points become parallel lines arranged in a grid. Horizontal lines represent axons of input neurons while vertical lines represent output neurons. Each intersection represents a modifiable synapse. Though this cannot recognize displaced patterns, it has a greater potential storage capacity. This was not necessarily meant to show how the brain is organized, but instead to show the possibility of improving on Gabor's original model. One property of the associative net that makes it attractive as a neural model is that good retrieval can be obtained even when some of the storage elements are damaged or when some of the components of the address are incorrect. P. Van Heerden countered this model by demonstrating mathematically that the signal-noise ratio of a hologram could reach 50% of ideal. He also used a model with a 2D neural hologram network for fast searching imposed upon a 3D network for large storage capacity. A key quality of this model was its flexibility to change the orientation and fix distortions of stored information, which is important for our ability to recognize an object as the same entity from different angles and positions, something the correlograph and association network models lack. == See also == Gestalt psychology – Theory of perception Orchestrated objective reduction – Theory of a quantum origin of consciousness Quantum cognition – Application of quantum theory mathematics to cognitive phenomena Quantum mysticism – Pseudoscience purporting to build on the principles of quantum mechanics Self-organizing map – Machine learning technique useful for dimensionality reduction Sparse distributed memory – Mathematical model of memory Visual perception – Ability to interpret the surrounding environment using light in the visible spectrum == References == === Works cited === Bohm, David (1980). Wholeness and the Implicate Order. London: Routledge. ISBN 0-7100-0971-2. Pribram, Karl (1986). "Holonomic Brain Theory In Imaging And Object Perception". Acta Psychologica. 63 (1–3): 175–210. doi:10.1016/0001-6918(86)90062-4. PMID 3591432. Pribram, Karl (1991). Brain and Perception: Holonomy and Structure in Figural Processing. Lawrence Erlbaum Associates. == Further reading == Aerts, Diedrick; Czachor, Marek; Sozzo, Sandro (2011). Privman, V.; Ovchinnikov, V. (eds.). Quantum Interaction Approach in Cognition, Artificial Intelligence, and Robots. IARIA, Proceedings of the Fifth International Conference on Quantum, Nano and Micro Technologies. Brussels University Press. pp. 35–40. arXiv:1104.3345. Mishlove, Jeffrey (1998). "The Holographic Brain: Karl Pribram, Ph.D. interview". TWM.co.nz. Archived from the original on 2006-05-18. Retrieved 2012-05-18. Peruš, Mitja; Loo, Chu Kiong (2011). 
Biological And Quantum Computing For Human Vision: Holonomic Models And Applications. Medical Information Sciences Reference. ISBN 978-1615207855. Pribram, Karl (1993). Rethinking Neural Networks: Quantum Fields And Biological Data. Lawrence Erlbaum Associates and INNS Press. Pribram, Karl (2007). "Holonomic brain theory". Scholarpedia. 2 (5). Washington, DC: Georgetown University: 2735. Bibcode:2007SchpJ...2.2735P. doi:10.4249/scholarpedia.2735. Pribram, Karl (2013). The Form Within. Prospecta Press. Talbot, Michael (2011). The Holographic Universe. HarperCollins. == External links == KarlPribram.com, hosts PDFs of Pribram's articles about HBT in English and Spanish
Wikipedia/Holonomic_brain_theory
The Dehaene–Changeux model (DCM), also known as the global neuronal workspace, or global cognitive workspace model, is a part of Bernard Baars's global workspace model for consciousness. It is a computer model of the neural correlates of consciousness programmed as a neural network. It attempts to reproduce the swarm behaviour of the brain's higher cognitive functions such as consciousness, decision-making and the central executive functions. It was developed by cognitive neuroscientists Stanislas Dehaene and Jean-Pierre Changeux beginning in 1986. It has been used to provide a predictive framework for the study of inattentional blindness and the solving of the Tower of London test. == History == The Dehaene–Changeux model was initially established as a spin glass neural network attempting to represent learning and, among other objectives, to provide a stepping stone towards artificial learning. It would later be used to predict observable reaction times within the priming paradigm and in inattentional blindness. == Structure == === General structure === The Dehaene–Changeux model is a meta neural network (i.e. a network of neural networks) composed of a very large number of integrate-and-fire neurons programmed in either a stochastic or deterministic way. The neurons are organised in complex thalamo-cortical columns with long-range connexions and a critical role played by the interaction between von Economo's areas. Each thalamo-cortical column is composed of pyramidal cells and inhibitory interneurons receiving a long-distance excitatory neuromodulation which could represent noradrenergic input. === A swarm and a multi-agent system composed of neural networks === Among others, Cohen & Hudson (2002) had already used "meta neural networks as intelligent agents for diagnosis". Similarly to Cohen & Hudson, Dehaene & Changeux have established their model as an interaction of meta-neural networks (thalamocortical columns) themselves programmed in the manner of a "hierarchy of neural networks that together act as an intelligent agent", in order to use them as a system composed of a large number of inter-connected intelligent agents for predicting the self-organized behaviour of the neural correlates of consciousness. Jain et al. (2002) had already clearly identified spiking neurons as intelligent agents, since the lower bound on the computational power of networks of spiking neurons is the capacity to simulate, in real time and for boolean-valued inputs, any Turing machine. Since the DCM is composed of a very large number of interacting sub-networks which are themselves intelligent agents, it is formally a multi-agent system programmed as a swarm of neural networks and, a fortiori, of spiking neurons. == Behavior == The DCM exhibits several surcritical emergent behaviors, such as multistability and a Hopf bifurcation between two very different regimes which may represent either sleep or arousal, with various all-or-none behaviors which Dehaene et al. use to determine a testable taxonomy of different states of consciousness. == Scholarly reception == === Self-organized criticality === The Dehaene–Changeux model contributed to the study of nonlinearity and self-organized criticality, in particular as an explanatory model of the brain's emergent behaviors, including consciousness. Studying the brain's phase-locking and large-scale synchronization, Kitzbichler et al.
(2011a) confirmed that criticality is a property of human brain functional network organization at all frequency intervals in the brain's physiological bandwidth. Furthermore, exploring the neural dynamics of cognitive effort following, inter alia, the Dehaene–Changeux model, Kitzbichler et al. (2011b) demonstrated how cognitive effort breaks the modularity of mind to make human brain functional networks transiently adopt a more efficient but less economical configuration. Werner (2007a) used the Dehaene–Changeux global neuronal workspace to defend the use of statistical physics approaches for exploring phase transitions, scaling and universality properties of the so-called "Dynamic Core" of the brain, with relevance to the macroscopic electrical activity in EEG and EMG. Furthermore, building from the Dehaene–Changeux model, Werner (2007b) proposed that the application of the twin concepts of scaling and universality of the theory of non-equilibrium phase transitions can serve as an informative approach for elucidating the nature of underlying neural mechanisms, with emphasis on the dynamics of recursively reentrant activity flow in intracortical and cortico-subcortical neuronal loops. Friston (2000) also claimed that "the nonlinear nature of asynchronous coupling enables the rich, context-sensitive interactions that characterize real brain dynamics, suggesting that it plays a role in functional integration that may be as important as synchronous interactions". === States of consciousness and phenomenology === The model contributed to the study of phase transitions in the brain under sedation, notably GABA-ergic sedation such as that induced by propofol (Murphy et al. 2011, Stamatakis et al. 2010). The Dehaene–Changeux model was contrasted and cited in the study of collective consciousness and its pathologies (Wallace et al. 2007). Boly et al. (2007) used the model for a reverse somatotopic study, demonstrating a correlation between baseline brain activity and somatosensory perception in humans. Boly et al. (2008) also used the DCM in a study of the baseline state of consciousness of the human brain's default network. === Adversarial collaboration to test the Dehaene–Changeux model and integrated information theory === In 2019, the Templeton Foundation announced funding in excess of $6,000,000 to test opposing empirical predictions of the Dehaene–Changeux model and a rival theory (integrated information theory, or IIT). The originators of both theories signed off on the experimental protocols and data analyses, as well as on the exact conditions under which their championed theory would count as having correctly predicted the outcome or not. Initial results were revealed in June 2023. None of the Dehaene–Changeux model predictions passed the pre-registered threshold that had been agreed upon, while two out of three of IIT's predictions did. == Publications == Rialle, V and Stip, E. (May 1994). "Cognitive modeling in psychiatry: from symbolic models to parallel and distributed models". J Psychiatry Neurosci. 19(3): 178–192. Zigmond, Michael J. (1999). Fundamental neuroscience. Academic Press, p. 1551. Dehaene, Stanislas (2001). The cognitive neuroscience of consciousness. MIT Press, p. 13. Ravi Prakash, Om Prakash, Shashi Prakash, Priyadarshi Abhishek, and Sachin Gandotra (2008). "Global workspace model of consciousness and its electromagnetic correlates". Ann Indian Acad Neurol. Jul–Sep; 11(3): 146–153. doi:10.4103/0972-2327.42933. Gazzaniga, Michael S. (2004). The cognitive neurosciences. MIT Press, p.
1146. Laureys, Steven; et al. (2006). The boundaries of consciousness: neurobiology and neuropathology. Volume 150 of Progress in Brain Research. Elsevier, p. 45. Naccache, L. (March 2007). "Cognitive aging considered from the point of view of cognitive neurosciences of consciousness". Psychologie & NeuroPsychiatrie du vieillissement. Volume 5, Number 1, 17–21. Hans Liljenström, Peter Århem (2008). Consciousness transitions: phylogenetic, ontogenetic, and physiological aspects. Elsevier, p. 126. Tim Bayne, Axel Cleeremans, Patrick Wilken (2009). The Oxford companion to consciousness. Oxford University Press, p. 332. Bernard J. Baars, Nicole M. Gage (2010). Cognition, brain, and consciousness: introduction to cognitive neuroscience. Academic Press, p. 287. Carlos Hernández, Ricardo Sanz, Jaime Gómez-Ramirez, Leslie S. Smith, Amir Hussain, Antonio Chella, Igor Aleksander (2011). From Brains to Systems: Brain-Inspired Cognitive Systems. Volume 718 of Advances in Experimental Medicine and Biology Series. Springer, p. 230. == See also == Artificial consciousness Complex system Neuroscience == References == == External links == "Selected publications of Stanislas Dehaene" INSERM-CEA Cognitive Neuroimaging Unit.
Wikipedia/Dehaene–Changeux_model
Global workspace theory (GWT) is a framework for thinking about consciousness introduced in 1988 by cognitive scientist Bernard Baars. It was developed to qualitatively explain a large set of matched pairs of conscious and unconscious processes. GWT has been influential in modeling consciousness and higher-order cognition as emerging from competition and integrated flows of information across widespread, parallel neural processes. Bernard Baars derived inspiration for the theory from the blackboard system of early artificial intelligence system architectures, where independent programs shared information; the global workspace is its cognitive analog. Global workspace theory is one of the leading theories of consciousness. While aspects of GWT are matters of debate, it remains a focus of current research, including brain interpretations and computational simulations. == Theater metaphor == GWT uses the metaphor of a theater, with conscious thought being like material illuminated on the main stage. Attention acts as a spotlight, bringing some of the brain's unconscious activity into conscious awareness on the global workspace. Baars wrote in his 1997 article "In the Theatre of Consciousness" in the Journal of Consciousness Studies that the concept describes: [A] stage, an attentional spotlight shining on the stage, actors to represent the contents of conscious experience, an audience, and a few invisible people behind the scenes, who exercise great influence on whatever becomes visible on stage. The stage receives sensory and abstract information, but only events in the spotlight shining on the stage are completely conscious. A review of Baars' 1997 book In the Theater of Consciousness: The Workspace of the Mind further described: Thus peripheral and central sensory stimuli, imagination, and intuition compete for the center of attention, from where they address the unconscious processes of memory, interpretation, automatic routines, and motivation which, in turn, affect the control and context operators running the show from behind the scenes. In a discussion with Susan Blackmore in her book Conversations on Consciousness, Baars said: From my point of view, the metaphor that is useful for understanding consciousness is the theatre metaphor, which also happens to be quite ancient, going back at least to Plato in the West, and to the Vedanta scriptures in the East. The theatre metaphor, in a simple way, says that what’s conscious is like the bright spot cast by a spotlight on to the stage of a theatre. What’s unconscious is everything else: all the people sitting in the audience are unconscious components of the brain which get information from consciousness; and there are people sitting behind the scenes, the director and the playwright and so on, who are shaping the contents of consciousness, telling the actor in the light spot what to say. It’s a very simple metaphor, but it turns out to be quite useful. Baars distinguishes this from Cartesian theater: "You don't have a little self sitting in the theatre". == The model == The brain contains many specialized processes or modules that operate in parallel, largely unconsciously. The global workspace is a functional hub of broadcast and integration that allows information to be disseminated across modules. As such, GWT can be classified as a functionalist theory of consciousness. When sensory input, memories, or internal representations receive attention, they enter the global workspace and become accessible to various cognitive processes.
As elements compete for attention, those that succeed gain entry to the global workspace, allowing their information to be distributed and coordinated throughout the whole cognitive system. GWT resembles the concept of working memory and is proposed to correspond to a 'momentarily active, subjectively experienced' event in working memory. It facilitates top-down control of attention, working memory, planning, and problem-solving through this information sharing. GWT involves a fleeting memory with a duration of a few seconds (much shorter than the 10–30 seconds of classical working memory). GWT contents are proposed to correspond to what we are conscious of, and are broadcast to a multitude of unconscious cognitive brain processes, which may be called receiving processes. Other unconscious processes, operating in parallel with limited communication between them, can form coalitions which can act as input processes to the global workspace. Since globally broadcast messages can evoke actions in receiving processes throughout the brain, the global workspace may be used to exercise executive control to perform voluntary actions. Individual as well as allied processes compete for access to the global workspace, striving to disseminate their messages to all other processes in an effort to recruit more cohorts and thereby increase the likelihood of achieving their goals. Incoming stimuli need to be stored temporarily in order to be able to compete for attention and conscious access. Kouider and Dehaene predicted the existence of a sensory memory buffer that maintains stimuli for "a few hundreds of milliseconds". Recent research offers preliminary evidence for such a buffer store and indicates a gradual but rapid decay with extraction of meaningful information severely impaired after 300 ms and most data being completely lost after 700 ms. Baars asserts that working memory "is closely associated with conscious experience, though not identical to it." Conscious events may involve more necessary conditions, such as interacting with a "self" system, and an executive interpreter in the brain, such as has been suggested by a number of authors including Michael S. Gazzaniga. Nevertheless, GWT can successfully model a number of characteristics of consciousness, such as its role in handling novel situations, its limited capacity, its sequential nature, and its ability to trigger a vast range of unconscious brain processes. Moreover, GWT lends itself well to computational modeling. Stan Franklin's IDA model is one such computational implementation of GWT. See also Dehaene et al. (2003), Shanahan and Bao's "Global Workspace Network" model. GWT also specifies "behind the scenes" contextual systems, which shape conscious contents without ever becoming conscious, such as the dorsal cortical stream of the visual system. This architectural approach leads to specific neural hypotheses. Sensory events in different modalities may compete with each other for consciousness if their contents are incompatible. For example, the audio and video track of a movie will compete rather than fuse if the two tracks are out of sync by more than 100 ms., approximately. The 100 ms time domain corresponds closely with the known brain physiology of consciousness, including brain rhythms in the alpha-theta-gamma domain, and event-related potentials in the 200–300 ms domain. 
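Because GWT lends itself to computational modeling, as noted above, the following toy sketch in Python (our own illustration, not Baars's theory, Franklin's IDA, or any published implementation; the module names and random "activation strengths" are assumptions for the example) shows the bare competition-and-broadcast cycle: parallel specialist processes bid for access to a single workspace, and the winning content is broadcast to every process.

# Toy illustration of a global-workspace cycle: competition, then broadcast.
import random

class Process:
    def __init__(self, name):
        self.name = name
        self.received = []                 # contents broadcast to this process

    def bid(self):
        # Each specialist proposes a content with some activation strength.
        return (random.random(), f"content from {self.name}")

    def receive(self, content):
        self.received.append(content)

def workspace_cycle(processes):
    # Competition: the most active proposal gains access to the workspace.
    strength, content = max(p.bid() for p in processes)
    # Broadcast: the winning content is made available to all processes.
    for p in processes:
        p.receive(content)
    return content

random.seed(1)
modules = [Process(n) for n in ("vision", "audition", "memory", "planning")]
for step in range(3):
    print(f"cycle {step}: broadcast -> {workspace_cycle(modules)}")

Published implementations such as Franklin's IDA are, of course, far richer, adding coalition formation, learning, and timing constraints on top of a bare loop like this.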
However, much of this timing research is based on studies of unconscious priming, and recent studies show that many of the methods used for unconscious priming are flawed. == Global neuronal workspace == Stanislas Dehaene extended the global workspace with the "neuronal avalanche", showing how sensory information gets selected to be broadcast throughout the cortex. Many brain regions, including the prefrontal cortex, the anterior temporal lobe, the inferior parietal lobe, and the precuneus, send and receive numerous projections to and from a broad variety of distant brain regions, allowing the neurons there to integrate information over space and time. Multiple sensory modules can therefore converge onto a single coherent interpretation, for example, a "red sports car zooming by". This global interpretation is broadcast back to the global workspace, creating the conditions for the emergence of a single state of consciousness, at once differentiated and integrated. Alternatively, the theory of practopoiesis suggests that the global workspace is achieved in the brain primarily through fast adaptive mechanisms of nerve cells. According to that theory, connectivity does not matter much. What is critical, rather, is that neurons can rapidly adapt to the sensory context within which they operate. Notably, for achieving a global workspace, the theory presumes that these fast adaptive mechanisms have the capability to learn when and how to adapt. == Criticism == J. W. Dalton has criticized the global workspace theory on the grounds that it provides, at best, an account of the cognitive function of consciousness, and fails even to address the deeper problem of its nature, of what consciousness is, and of how any mental process whatsoever can be conscious: the hard problem of consciousness. However, the abstract of A. C. Elitzur's 1997 paper summarized that while GWT "does not address the 'hard problems,' namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition". In Consciousness: A Very Short Introduction, Susan Blackmore said there are two possible interpretations of GWT and it is often hard to tell which people mean, but "in the first version, the hard problem remains: something magical happens to turn unconscious items into conscious ones. In the second, it disappears, but we have to give up the idea that some items are conscious and others not". == See also == Artificial consciousness Cognitive map Cognitive model Conceptual space Image schema LIDA (cognitive architecture) Multiple drafts model of consciousness Neural correlates of consciousness Sparse distributed memory == Notes == == References == == Further reading == Baars, Bernard J. (2002) The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Sciences, 6 (1), 47–52. Baars, Bernard J. (2017). "The Global Workspace Theory of Consciousness: Predictions and Results". In Schneider, Susan; Velmans, Max (eds.). The Blackwell Companion to Consciousness (2nd ed.). Wiley-Blackwell. pp. 227–242. doi:10.1002/9781119132363.ch16. ISBN 978-0-470-67406-2. Blackmore, Susan (May 2002). "There Is No Stream of Consciousness". Journal of Consciousness Studies. 9 (5–6). Retrieved 3 May 2025. Blackmore, Susan (2004). Why Global Workspace Theory cannot explain consciousness. Presentation. Damasio, A.R. (1989). Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition.
Cognition 33. 1–2:25–62. Dehaene, S., Sergent, C. and Changeux, J.-P. (2003). A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proc. National Academy of Science (USA) 100. 14: 8520–8525. Metzinger, T. (ed) (2000). Neural Correlates of Consciousness: Empirical and Conceptual Questions. MIT Press. == External links == Continuous updates on Global Workspace Theory by Baars and colleagues and published articles for download Synopsis by Baars and Katherine McGovern Review of Bernard Baars' A Cognitive Theory of Consciousness Robinson R (2009) Exploring the "Global Workspace" of Consciousness
Wikipedia/Global_workspace_theory
Orchestrated objective reduction (Orch OR) is a theory postulating that consciousness originates at the quantum level inside neurons (rather than being a product of neural connections). The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. It is proposed that the theory may answer the hard problem of consciousness and provide a mechanism for free will. The hypothesis was first put forward in the early 1990s by Nobel laureate in physics Roger Penrose and anaesthesiologist Stuart Hameroff. The hypothesis combines approaches from molecular biology, neuroscience, pharmacology, philosophy, quantum information theory, and quantum gravity. While some other theories assert that consciousness emerges as the complexity of the computations performed by cerebral neurons increases, Orch OR posits that consciousness is based on non-computable quantum processing performed by qubits formed collectively on cellular microtubules, a process significantly amplified in neurons. The qubits are based on oscillating dipoles forming superposed resonance rings in helical pathways throughout lattices of microtubules. The oscillations are either electric, due to charge separation from London forces, or magnetic, due to electron spin (and possibly also due to nuclear spins, which can remain isolated for longer periods); they occur in gigahertz, megahertz and kilohertz frequency ranges. Orchestration refers to the hypothetical process by which connective proteins, such as microtubule-associated proteins (MAPs), influence or orchestrate qubit state reduction by modifying the spacetime separation of their superimposed states. The latter is based on Penrose's objective-collapse theory for interpreting quantum mechanics, which postulates the existence of an objective threshold governing the collapse of quantum states, related to the difference of the spacetime curvature of these states in the universe's fine-scale structure. Orchestrated objective reduction has been criticized from its inception by mathematicians, philosophers, and scientists. The criticism concentrated on three issues: Penrose's interpretation of Gödel's theorem; Penrose's abductive reasoning linking non-computability to quantum events; and the brain's unsuitability to host the quantum phenomena required by the theory, since it is considered too "warm, wet and noisy" to avoid decoherence. == Background == In 1931, mathematician and logician Kurt Gödel proved that any effectively generated theory capable of proving basic arithmetic cannot be both consistent and complete. In other words, a mathematically sound theory lacks the means to prove its own consistency. In his first book concerning consciousness, The Emperor's New Mind (1989), Roger Penrose argued that, because human mathematicians can see the truth of such "Gödel-type propositions", human mathematical understanding cannot be captured by any algorithmic formal system; this reasoning, together with a similar earlier argument by philosopher John Lucas, is known as the Penrose–Lucas argument. The argument leaves open the question of the physical basis of the proposed non-computable behaviour. Most physical laws are computable, and thus algorithmic. However, Penrose determined that wave function collapse was a prime candidate for a non-computable process. In quantum mechanics, particles are treated differently from the objects of classical mechanics. Particles are described by wave functions that evolve according to the Schrödinger equation. Non-stationary wave functions are linear combinations of the eigenstates of the system, a phenomenon described by the superposition principle.
When a quantum system interacts with a classical system—i.e. when an observable is measured—the system appears to collapse to a random eigenstate of that observable from a classical vantage point. If collapse is truly random, then no process or algorithm can deterministically predict its outcome. This provided Penrose with a candidate for the physical basis of the non-computable process that he hypothesized to exist in the brain. However, he disliked the random nature of environmentally induced collapse, as randomness was not a promising basis for mathematical understanding. Penrose proposed that isolated systems may still undergo a new form of wave function collapse, which he called objective reduction (OR). Penrose sought to reconcile general relativity and quantum theory using his own ideas about the possible structure of spacetime. He suggested that at the Planck scale curved spacetime is not continuous, but discrete. He further postulated that each separated quantum superposition has its own piece of spacetime curvature, a blister in spacetime. Penrose suggests that gravity exerts a force on these spacetime blisters, which become unstable above the Planck scale of 10⁻³⁵ m and collapse to just one of the possible states. The rough threshold for OR is given by Penrose's indeterminacy principle: τ ≈ ℏ/E_G, where τ is the time until OR occurs, E_G is the gravitational self-energy or the degree of spacetime separation given by the superpositioned mass, and ℏ is the reduced Planck constant. Thus, the greater the mass–energy of the object, the faster it will undergo OR, and vice versa. Mesoscopic objects could collapse on a timescale relevant to neural processing. An essential feature of Penrose's theory is that the choice of states when objective reduction occurs is selected neither randomly (as are choices following wave function collapse) nor algorithmically. Rather, states are selected by a "non-computable" influence embedded in the Planck scale of spacetime geometry. Penrose claimed that such information is Platonic, representing pure mathematical truths, which relates to Penrose's ideas concerning the three worlds: the physical, the mental, and the Platonic mathematical world. In Shadows of the Mind (1994), Penrose briefly indicates that this Platonic world could also include aesthetic and ethical values, but he does not commit to this further hypothesis. The Penrose–Lucas argument was criticized by mathematicians, computer scientists, and philosophers, and the consensus among experts in these fields is that the argument fails, with different authors attacking different aspects of the argument. Minsky argued that because humans can believe false ideas to be true, human mathematical understanding need not be consistent and consciousness may easily have a deterministic basis. Feferman argued that mathematicians do not progress by mechanistic search through proofs, but by trial-and-error reasoning, insight and inspiration, and that machines do not share this approach with humans. == Orch OR == Penrose outlined a predecessor to Orch OR in The Emperor's New Mind, coming to the problem from a mathematical viewpoint and in particular Gödel's theorem, but lacked a detailed proposal for how quantum processes could be implemented in the brain.
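As a rough numerical aside on the indeterminacy relation τ ≈ ℏ/E_G above, the sketch below (not part of the original article) estimates the collapse time for a hypothetical mesoscopic superposition. It approximates E_G by G·m²/d for two well-separated lumps of mass m a distance d apart; that approximation and the example numbers are illustrative assumptions only.

```python
# Order-of-magnitude illustration of Penrose's OR threshold tau ~ hbar / E_G.
# The estimate E_G ~ G * m**2 / d (two well-separated lumps of mass m a
# distance d apart) is an assumption of this sketch, not the detailed
# mass-distribution calculation used in the actual proposal.

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

def or_collapse_time(mass_kg: float, separation_m: float) -> float:
    """Approximate objective-reduction time tau = hbar / E_G."""
    e_g = G * mass_kg**2 / separation_m   # crude gravitational self-energy
    return hbar / e_g

# Hypothetical example: a femtogram-scale superposition separated by ~1 nm.
tau = or_collapse_time(1e-18, 1e-9)
print(f"tau ~ {tau:.2e} s")   # larger mass-energy -> faster collapse, as stated above
```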
Stuart Hameroff separately worked in cancer research and anesthesia, which gave him an interest in brain processes. Hameroff read Penrose's book and suggested to him that microtubules within neurons were suitable candidate sites for quantum processing, and ultimately for consciousness. Throughout the 1990s, the two collaborated on the Orch OR theory, which Penrose published in Shadows of the Mind (1994). Hameroff's contribution to the theory derived from his study of the neural cytoskeleton, and particularly of microtubules. As neuroscience has progressed, the role of the cytoskeleton and microtubules has assumed greater importance. In addition to providing structural support, microtubule functions include axoplasmic transport and control of the cell's movement, growth and shape. Orch OR combines the Penrose–Lucas argument with Hameroff's hypothesis on quantum processing in microtubules. It proposes that when condensates in the brain undergo an objective wave function reduction, their collapse connects noncomputational decision-making to experiences embedded in spacetime's fundamental geometry. The theory further proposes that the microtubules both influence and are influenced by the conventional activity at the synapses between neurons. === Microtubule computation === Hameroff proposed that microtubules were suitable candidates for quantum processing. Microtubules are made up of tubulin protein subunits. The tubulin protein dimers of the microtubules have hydrophobic pockets that may contain delocalized π electrons. Tubulin has other, smaller non-polar regions, for example 8 tryptophans per tubulin, which contain π electron-rich indole rings distributed throughout tubulin with separations of roughly 2 nm. Hameroff claims that this is close enough for the tubulin π electrons to become quantum entangled. During entanglement, particle states become inseparably correlated. Hameroff originally suggested in the fringe Journal of Cosmology that the tubulin-subunit electrons would form a Bose–Einstein condensate. He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too was rejected by Reimers's group. Hameroff and Penrose contested the conclusion, considering that Reimers's microtubule model was oversimplified. Hameroff then proposed that condensates in microtubules in one neuron can link with microtubule condensates in other neurons and glial cells via the gap junctions of electrical synapses. Hameroff proposed that the gap between the cells is sufficiently small that quantum objects can tunnel across it, allowing them to extend across a large area of the brain. He further postulated that the action of this large-scale quantum activity is the source of 40 Hz gamma waves, building upon the much less controversial theory that gap junctions are related to the gamma oscillation. === Related experimental results === ==== Superradiance ==== In a study Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics shorten the duration of a process called delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility (in a 2024 study, superradiance was confirmed to occur in networks of tryptophans, which are found in microtubules).
Tuszyński told New Scientist that "We're not at the level of interpreting this physiologically, saying 'Yeah, this is where consciousness begins,' but it may." The 2024 study, called Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures and published in The Journal of Physical Chemistry, confirmed superradiance in networks of tryptophans. Large networks of tryptophans are a warm and noisy environment, in which quantum effects typically are not expected to take place. The results of the study were theoretically predicted and then experimentally confirmed by the researchers. Majed Chergui, who led the experimental team, stated that "It's a beautiful result. It took very precise and careful application of standard protein spectroscopy methods, but guided by the theoretical predictions of our collaborators, we were able to confirm a stunning signature of superradiance in a micron-scale biological system." Marlan Scully, a physicist well-known for his work in the field of theoretical quantum optics, said "We will certainly be examining closely the implications for quantum effects in living systems for years to come." The study states that "by analyzing the coupling with the electromagnetic field of mega-networks of tryptophans present in these biologically relevant architectures, we find the emergence of collective quantum optical effects, namely, superradiant and subradiant eigenmodes. ... our work demonstrates that collective and cooperative UV excitations in mega-networks of tryptophans support robust quantum states in protein aggregates, with observed consequences even under thermal equilibrium conditions." ==== Microtubule quantum vibration theory of anesthetic action ==== In an experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules farther than expected, which did not occur when repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex due to the wide range of length scales in the fluid-filled extracellular space. At high concentrations (~5 MAC) the anesthetic gas halothane causes reversible depolymerization of microtubules. This cannot be the mechanism of anesthetic action, however, because human anesthesia is performed at 1 MAC. (Neither Penrose nor Hameroff claims that depolymerization is the mechanism of action for Orch OR.) At ~1 MAC halothane, reported minor changes in tubulin protein expression (~1.3-fold) in primary cortical neurons after exposure to halothane and isoflurane are not evidence that tubulin directly interacts with general anesthetics, but rather show that the proteins controlling tubulin production are possible anesthetic targets. A further proteomic study reports 0.5 mM [14C]halothane binding to tubulin monomers alongside three dozen other proteins. In addition, modulation of microtubule stability has been reported during anthracene general anesthesia of tadpoles. The study called Direct Modulation of Microtubule Stability Contributes to Anthracene General Anesthesia claims to provide "strong evidence that destabilization of neuronal microtubules provides a path to achieving general anesthesia". A highly disputed theory put forth in the mid-1990s by Hameroff and Penrose posits that consciousness is based on quantum vibrations in tubulin/microtubules inside brain neurons.
Computer modeling of tubulin's atomic structure found that anesthetic gas molecules bind adjacent to amino acid aromatic rings of non-polar π-electrons and that collective quantum dipole oscillations among all π-electron resonance rings in each tubulin showed a spectrum with a common mode peak at 613 THz. Simulated presence of 8 different anesthetic gases abolished the 613 THz peak, whereas the presence of 2 different nonanesthetic gases did not affect the 613 THz peak, from which it was speculated that this 613 THz peak in microtubules could be related to consciousness and anesthetic action. Another study that Hameroff was a part of claims to show "anesthetic molecules can impair π-resonance energy transfer and exciton hopping in 'quantum channels' of tryptophan rings in tubulin, and thus account for selective action of anesthetics on consciousness and memory". In a study published in August 2024, an undergraduate group led by a Wellesley College professor found that rats given epothilone B, a drug that binds to microtubules, took over a minute longer to fall unconscious when exposed to an anesthetic gas. == Criticism == Orch OR has been criticized both by physicists and neuroscientists who consider it to be a poor model of brain physiology. Orch OR has also been criticized for lacking explanatory power; the philosopher Patricia Churchland wrote, "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules." David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why particular macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no particular reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either. === Endogenous ferritin quenches microtubule radiance and would prevent Orch-OR === While some of the studies noted above purport to show superradiance and an influence of anesthetics on decreasing excitation diffusion through microtubules, those studies were performed under artificial conditions that failed to include microtubule associated proteins like ferritin, which quenches microtubule superradiance. Extensive evidence that was published prior to those studies establishes that ferritin interacts with microtubules in vivo and is essential for microtubule stability and function and should have been addressed by those studies. === Decoherence in living organisms === In 2000 Max Tegmark claimed that any quantum coherent system in the brain would undergo effective wave function collapse due to environmental interaction long before it could influence neural processes (the "warm, wet and noisy" argument, as it later came to be known). He determined the decoherence timescale of microtubule entanglement at brain temperatures to be on the order of femtoseconds, far too brief for neural processing. Christof Koch and Klaus Hepp also agreed that quantum coherence does not play, or does not need to play any major role in neurophysiology. 
Koch and Hepp concluded that "The empirical demonstration of slowly decoherent and controllable quantum bits in neurons connected by electrical or chemical synapses, or the discovery of an efficient quantum algorithm for computations performed by the brain, would do much to bring these speculations from the 'far-out' to the mere 'very unlikely'." In response to Tegmark's claims, Hagan, Tuszynski and Hameroff claimed that Tegmark did not address the Orch OR model, but instead a model of his own construction. This involved superpositions of quanta separated by 24 nm rather than the much smaller separations stipulated for Orch OR. As a result, Hameroff's group claimed a decoherence time seven orders of magnitude greater than Tegmark's, although still far below 25 ms. Hameroff's group also suggested that the Debye layer of counterions could screen thermal fluctuations, and that the surrounding actin gel might enhance the ordering of water, further screening noise. They also suggested that incoherent metabolic energy could further order water, and finally that the configuration of the microtubule lattice might be suitable for quantum error correction, a means of resisting quantum decoherence. In 2009, Reimers et al. and McKemmish et al. published critical assessments. Earlier versions of the theory had required tubulin electrons to form either Bose–Einstein or Fröhlich condensates, and the Reimers group noted the lack of empirical evidence that such condensates could occur. Additionally, they calculated that microtubules could only support weak 8 MHz coherence. McKemmish et al. argued that aromatic molecules cannot switch states because they are delocalised, and that changes in tubulin protein conformation driven by GTP conversion would result in a prohibitive energy requirement. In 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness. === Neuroscience === Biology-based criticisms have been offered, including a lack of explanation for the probabilistic release of neurotransmitter from presynaptic axon terminals and an error in the calculated number of tubulin dimers per cortical neuron. In 2014, Penrose and Hameroff published responses to some criticisms and revisions to many of the theory's peripheral assumptions, while retaining the core hypothesis. == See also == == References == == Sources == Hofstadter, Douglas (1979), Gödel, Escher, Bach: an Eternal Golden Braid Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2 Turing, Alan (October 1950). "Computing Machinery and Intelligence". Mind. 59 (236): 433–460. doi:10.1093/mind/LIX.236.433. ISSN 1460-2113. JSTOR 2251299. S2CID 14636783. == External links == Center for Consciousness Studies Hameroff's "Quantum Consciousness" site Hameroff, Stuart; Bandyopadhyay, Anirban; Lauretta, Dante (8 May 2024). "Consciousness came before life". Institute of Art and Ideas. Penrose, Roger (1999). "Science and the Mind". Kavli Institute for Theoretical Physics Public Lectures.
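As a numerical footnote to the timescale dispute in the Decoherence section above, the short sketch below (not from the article) restates the comparison in round numbers. The femtosecond base value follows the text's characterization of Tegmark's estimate, the factor of 10^7 is the "seven orders of magnitude" claimed by Hameroff's group, and the 25 ms target corresponds to one cycle of the 40 Hz gamma oscillation mentioned earlier; all three are used purely as illustrative inputs.

```python
# Round-number comparison of the decoherence timescales discussed above.
tegmark_estimate = 1e-15                      # ~femtoseconds (as characterized above)
revised_estimate = tegmark_estimate * 1e7     # "seven orders of magnitude greater"
gamma_cycle = 1 / 40.0                        # one 40 Hz gamma cycle = 25 ms

print(f"Tegmark estimate: {tegmark_estimate:.1e} s")
print(f"Revised estimate: {revised_estimate:.1e} s")
print(f"25 ms target:     {gamma_cycle:.1e} s")
print(f"Remaining gap:    ~{gamma_cycle / revised_estimate:.1e}x")   # still far short
```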
Wikipedia/Orchestrated_objective_reduction
Developed in his (1999) book, "The Feeling of What Happens", Antonio Damasio's theory of consciousness proposes that consciousness arises from the interactions between the brain, the body, and the environment. According to this theory, consciousness is not a unitary experience, but rather emerges from the dynamic interplay between different brain regions and their corresponding bodily states. Damasio argues that our conscious experiences are influenced by the emotional responses that are generated by our body's interactions with the environment, and that these emotional responses play a crucial role in shaping our conscious experience. This theory emphasizes the importance of the body and its physiological processes in the emergence of consciousness. Damasio's three-layered theory is based on a hierarchy of stages, with each stage building upon the last. The most basic representation of the organism is referred to as the Protoself; building upon it are Core Consciousness and, finally, Extended Consciousness. Damasio's approach to explaining the development of consciousness relies on three notions: emotion, feeling, and feeling a feeling. Emotions are a collection of unconscious neural responses that give rise to feelings. Emotions are complex reactions to stimuli that cause observable external changes in the organism. A feeling arises when the organism becomes aware of the changes it is experiencing as a result of external or internal stimuli. Antonio Damasio's work on consciousness: 1. Holistic Approach: Damasio argues that consciousness isn't just a brain function but involves the entire body. He suggests that the brain works in tandem with older biological systems like the endocrine and immune systems, emphasizing a holistic view of consciousness. 2. Homeostasis as Central: Damasio's theory places homeostasis at the core of consciousness, proposing that consciousness evolved to help organisms maintain internal stability, which is crucial for survival. 3. Microbiome Influence: Damasio highlights the role of the gut microbiome in influencing brain function and emotional states, suggesting that our consciousness is affected by the microbial environment within our bodies. 4. Dual Mind Registers: He distinguishes between two mental registers: one for cognitive functions like reasoning, and another for emotions and feelings, which are tied to the body's state. == Protoself == According to Damasio's theory of consciousness, the protoself is the first stage in the hierarchical process of consciousness generation. Shared by many species, the protoself is the most basic representation of the organism, and it arises from the brain's constant interaction with the body. The protoself is an unconscious process that creates a "map" of the body's physiological state, which is then used by the brain to generate conscious experience. This "map" is constantly updated as the brain receives new stimuli from the body, and it forms the foundation for the development of more complex forms of consciousness. Damasio asserts that the protoself is signified by a collection of neural patterns that are representative of the body's internal state. The function of this 'self' is to constantly detect and record, moment by moment, the internal physical changes that affect the homeostasis of the organism. The Protoself does not represent a traditional sense of self; rather, it is a pre-conscious state, which provides a reference for the core self and autobiographical self to build from.
As Damasio puts it, "Protoself is a coherent collection of neural patterns, which map moment-by-moment the state of the physical structure of the organism" (Damasio 1999). Multiple brain areas are required for the protoself to function: namely, the hypothalamus, which controls the general homeostasis of the organism; the brain stem, whose nuclei map body signals; and the insular cortex, whose function is linked to emotion. These brain areas work together to keep up with the constant process of collecting neural patterns to map the current status of the body's responses to environmental changes. The protoself does not require language in order to function; moreover, it is a direct report of one's experience. In this state, emotion begins to manifest itself as second-order neural patterns located in subcortical areas of the brain. Emotion acts as a neural object, from which a physical reaction can be drawn. This reaction causes the organism to become aware of the changes that are affecting it. From this realization springs Damasio's notion of "feeling". This occurs when the patterns contributing to emotion manifest as mental images, or brain movies. When the body is modified by these neural objects, the second layer of self emerges. This is known as core consciousness. == Core consciousness == Significantly more evolved than the protoself is the second layer of Damasio's theory, Core Consciousness. This emergent process occurs when an organism becomes consciously aware of feelings associated with changes occurring to its internal bodily state; it is able to recognize that its thoughts are its own, and that they are formulated in its own perspective. It develops a momentary sense of self, as the brain continuously builds representative images, based on communications received from the Protoself. This level of consciousness is not exclusive to human beings and remains consistent and stable throughout the lifetime of the organism. The image is a result of mental patterns which are caused by an interaction with internal or external stimulus. A relationship is established between the organism and the object it is observing as the brain continuously creates images to represent the organism's experience of qualia. Damasio's definition of emotion is that of an unconscious reaction to any internal or external stimulus which activates neural patterns in the brain. 'Feeling' emerges as a still unconscious state which simply senses the changes affecting the Protoself due to the emotional state. These patterns develop into mental images, which then float into the organism's awareness. Put simply, consciousness is the feeling of knowing a feeling. When the organism becomes aware of the feeling that its bodily state (Protoself) is being affected by its experiences, or response to emotion, Core Consciousness is born. The brain continues to present a nonverbal narrative sequence of images in the mind of the organism, based on its relationship to objects. An object in this context can be anything from a person, to a melody, to a neural image. Core consciousness is concerned only with the present moment, here and now. It does not require language or memory, nor can it reflect on past experiences or project itself into the future. == Extended consciousness == When consciousness moves beyond the here and now, Damasio's third and final layer emerges as Extended Consciousness. This level could not exist without its predecessors, and, unlike them, requires a vast use of conventional memory.
Therefore, an injury to a person's memory center can cause damage to their extended consciousness without hurting the other layers. The autobiographical self draws on memory of past experiences, which involves use of higher thought. This autobiographical layer of self is developed gradually over time. Working memory is necessary for an extensive display of items to be recalled and referenced. Linguistic areas of the brain are activated to enhance the organism's experience; however, according to the language of thought hypothesis, language would not necessarily be required. == Criticism == Damasio's theory of consciousness has been met with criticism for its lack of explanation regarding the generation of conscious experiences by the brain. Researchers have posited that the brain's interaction with the body alone cannot account for the complexity of conscious experience, and that additional factors must be considered. Furthermore, the theory has been criticized for its inadequate treatment of the concept of self-awareness and its lack of a clear method of measuring consciousness, which hinders empirical testing and evaluation. === Formalistic Elements === Theories of emotion currently fall into four main categories which follow one another in a historical series: evolutionary (ethological), physiological, neurological, and cognitive. Evolutionary theories derive from Darwin's The Expression of the Emotions in Man and Animals. Physiological theories suggest that responses within the body are responsible for emotions. Neurological theories propose that activity within the brain leads to emotional responses. Cognitive theories argue that thoughts and other mental activity play an essential role in forming emotions. Note that no current theory of emotion falls strictly within a single category; rather, each theory uses one approach to form its core premises, from which it is then able to extend its main postulates. Damasio's tri-level view of the human mind, which posits that we share the two lowest levels with other animals, has been suggested before. For example, see Dyer (www.conscious-computation.webnode.com). Dyer's triune expansion compares to Damasio's as follows: 1. sensorimotor stage (cf. Damasio's Protoself), 2. spatiotemporal stage (Core Consciousness), 3. cognolinguistic stage (Extended Consciousness). An important feature of Damasio's theory (one that it shares with Dyer's theory) is the key role played by mental images, consciously mediating the information exchange between the endocrine and cognitive systems. LeDoux and Brown have a different view of how emotion is connected to general cognition. They place emotionality on a similar level as that of other cognitive states. (In fairness, both Dyer's and Damasio's models concur on this point, i.e. that emotionality is not isolated to a particular layer within the tri-level framework.) Earlier, less sophisticated models placed the emotions strictly within the limbic circuits, where their primary role was to consciously respond to, as well as cause responses within, the hypothalamus, the interface between intentional mind states and metabolic (endocrine) body states. Emotionality is demonstrably a global mind state, just like consciousness. For example, we can be simultaneously aware of a pain (low-level) in our body and an idea (high-level) that enters our imagination (working memory).
Likewise, our (low-level) emotional reaction to a painful workplace injury (fear, threat to well-being) can coexist with our (high-level) feeling of anger and indignation at the co-worker who failed to follow safety guidelines. === Substantive Process === A careful reading of Damasio's work reveals that he distinguishes his theories from those of his predecessors in how the formalistic elements interact with each other in a dynamically integrated system. E.g., the suggestion of a dynamic neural map ultimately posits that we are the instantaneous configuration of a neural state in the present moment, rather than the supporting biological construct. I.e., our conscious identity is the software, not the hardware, even though our unique hardware constrains how we operate as the software. === Need for consciousness and qualia === A common criticism stems from the fact that both knowing and feeling can be processed with equal success without conscious awareness, as machines, for instance, demonstrate, and that those models do not explain the need for consciousness and qualia. == References ==
Wikipedia/Damasio's_theory_of_consciousness
How the Self Controls Its Brain is a book by Sir John Eccles, proposing a theory of philosophical dualism, and offering a justification of how there can be mind-brain action without violating the principle of the conservation of energy. The model was developed jointly with the nuclear physicist Friedrich Beck in the period 1991–1992. Eccles called the fundamental neural units of the cerebral cortex "dendrons", which are cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex, each cylinder being about 60 micrometres in diameter. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or "psychon", representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of the firing of selected neurons through a quantum tunneling effect in synaptic exocytosis, while in perception the reverse process takes place. == Previous mention of the "psychon" == The earliest prior use of the word "psychon" with a similar meaning of an "element of consciousness" is in the book "Concerning Fluctuating and Inaudible Sounds" by K. Dunlap in 1908. The most popular prior use is in Robert Heinlein's short story Gulf, wherein a character refers to the fastest speed of thought possible as "one psychon per chronon". == See also == Brain and mind Dualistic interactionism == References ==
Wikipedia/How_the_Self_Controls_Its_Brain
Weather satellite pictures are often broadcast as high-resolution picture transmissions (HRPTs), color high-resolution picture transmissions (CHRPTs) for Chinese weather satellite transmissions, or advanced high-resolution picture transmissions (AHRPTs) for EUMETSAT weather satellite transmissions. HRPT transmissions are available around the world and are available from both polar and geostationary weather satellites. The polar satellites travel in orbits that allow each location on Earth to be covered by the weather satellite twice per day, while the geostationary satellites remain in one location at the equator, taking weather images of the Earth from that location over the equator. The sensor on weather satellites that picks up the data transmitted in HRPT is referred to as an Advanced Very High Resolution Radiometer (AVHRR) for NOAA satellites. == Broadcast signal == The working frequency band for HRPT is L band at 1.670–1.710 GHz and the modulation type is BPSK. On NOAA KLM satellites the transmission power is 6.35 watts, or 38.03 dBm. The METOP-A satellite broadcasts with a bandwidth of 4.5 MHz, using QPSK and AHRPT. == Reception == === Hardware === In order to receive HRPT transmissions, a high-gain antenna is required, such as a small satellite dish, a helical antenna, or a crossed yagi. Basic reception equipment includes a parabolic dish antenna attached to an Azimuth-Elevation unit. The HRPT signal is further enhanced with a 1.7 GHz pre-amplifier. An HRPT receiver unit and a dish tracking controller are required to steer the Azimuth-Elevation unit controlling the parabolic dish. As an alternative to receiving direct broadcast from polar orbiting satellites, users in Europe and Africa can also receive rebroadcast data from the EUMETSAT EUMETCAST service via Digital Video Broadcasting using a simple stationary satellite dish. === Software === Both commercial and free software for demodulating HRPT transmission signals exists: An example of commercial demodulation software is XHRPT Decoder. Free software exists as part of the GNU Radio package, such as the GR-NOAA blocks and flowcharts distributed by Manuel Bülo. Free software for decoding data packets contained in HRPT is available, for example DWDSAT HRPT Viewer V1.1.0 or AAPP with Satpy. == Satellite status == == Notes and references ==
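As a rough illustration of the signal figures quoted above, the short sketch below converts the NOAA KLM transmit power to dBm and estimates the free-space path loss at L band. It is not drawn from the article: the 850 km slant range stands in for a roughly overhead pass of a polar orbiter and is an illustrative assumption, as is ignoring every other term of a real link budget (antenna gains, pointing, atmospheric losses).

```python
import math

def watts_to_dbm(p_watts: float) -> float:
    """Convert transmit power in watts to dBm."""
    return 10 * math.log10(p_watts * 1000)

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Standard free-space path loss formula, distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# 6.35 W quoted above for NOAA KLM satellites:
print(f"{watts_to_dbm(6.35):.2f} dBm")                             # ~38.03 dBm, matching the text

# Hypothetical near-overhead pass: ~850 km slant range at 1.7 GHz.
print(f"{free_space_path_loss_db(850, 1700):.1f} dB path loss")    # ~155.6 dB
```

Figures like the path loss above are why the Reception section calls for a high-gain dish or helical antenna plus a 1.7 GHz pre-amplifier.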
Wikipedia/High-resolution_picture_transmission
16K resolution is a display resolution with approximately 16,000 pixels horizontally. The most commonly discussed 16K resolution as of 2025 is 15360 × 8640, which doubles the pixel count of 8K UHD in each dimension, for a total of four times as many pixels. This resolution has 132.7 megapixels, 16 times as many pixels as 4K resolution and 64 times as many pixels as 1080p resolution. As of May 2025, 16K resolutions can be run in prototype displays, large public displays or using multi-monitor setups with AMD Eyefinity, or Nvidia Surround or Mosaic Technology. == History == In 2016, AMD announced a target for their future graphics cards to support 16K resolution with a refresh rate of 240 Hz for "true immersion" in VR. Linus Tech Tips released a series of videos in 2017 attempting to play video games at 16K using sixteen 4K monitors. In 2018, US filmmaker Martin Lisius released a short time-lapse film titled "Prairie Wind" that he produced using a 2-camera Canon EOS 5DS system he developed. Two still images were stitched together to create one 15985 × 5792 pixel image and then rendered as 16K resolution video with an extremely wide aspect ratio of 2.76∶1. This is among the first known 16K videos to exist. Innolux displayed the world's first 100-inch 16K (15360 × 8640) display module at Touch Taiwan in August 2018. Sony introduced a 64 by 18 foot (19.5 m × 5.5 m) commercial 16K display at NAB 2019 that is set to be released in Japan. It is made up of 576 modules (each 360 × 360) in a formation of 48 by 12 modules, forming a 17280 × 4320 screen with a 4∶1 aspect ratio. On June 26, 2019, VESA formally released the DisplayPort 2.0 standard with support for one 16K (15360 × 8640-pixel) display supporting 30-bit-per-pixel 4:4:4 RGB/Y′CBCR-color HDR video at a refresh rate of 60 Hz using DSC video compression. Sphere, an entertainment venue located in Las Vegas, Nevada, opened with a wraparound interior LED screen on September 29, 2023. According to Sphere Entertainment, the screen within the theater is 160,000 sq ft (15,000 m2) and supports a 16K (16000 × 16000-pixel) resolution, making it the highest resolution LED screen in the world. == See also == Virtual reality 32K resolution – digital video formats with a horizontal resolution of around 32,000 pixels 10K resolution – digital video formats with a horizontal resolution of around 10,000 pixels, aimed at non-television computer monitor usage 8K resolution – digital video formats with a horizontal resolution of around 8,000 pixels 5K resolution – digital video formats with a horizontal resolution of around 5,000 pixels, aimed at non-television computer monitor usage 4K resolution – digital video formats with a horizontal resolution of around 4,000 pixels 2K resolution – digital video formats with a horizontal resolution of around 2,000 pixels High-definition television (HDTV) – digital video formats with resolutions of 1280 × 720 or 1920 × 1080 Graphics display resolution Sphere (venue), an entertainment venue with the world's first 16k resolution LED screen Postcard from Earth, a film shot in 18k resolution to display at Sphere == References ==
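The pixel counts and data rates implied by the figures above can be checked with a little arithmetic; the sketch below (an illustration, not part of the article) uses the 30-bit, 60 Hz DisplayPort 2.0 example to show why DSC compression is involved.

```python
# Back-of-the-envelope figures for the 15360 x 8640 resolution discussed above.
WIDTH, HEIGHT = 15360, 8640
BITS_PER_PIXEL = 30      # 30 bpp RGB, as in the DisplayPort 2.0 example above
REFRESH_HZ = 60

pixels = WIDTH * HEIGHT
print(f"{pixels:,} pixels ({pixels / 1e6:.1f} megapixels)")          # ~132.7 MP

raw_bps = pixels * BITS_PER_PIXEL * REFRESH_HZ
print(f"~{raw_bps / 1e9:.1f} Gbit/s uncompressed at 60 Hz")          # ~238.9 Gbit/s

print(f"{pixels // (3840 * 2160)}x the pixels of 4K UHD")            # 16
print(f"{pixels // (1920 * 1080)}x the pixels of 1080p")             # 64
```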
Wikipedia/16k_resolution
High-resolution audio is a term for music files with bit depth greater than 16-bit and sampling frequency higher than the 44.1 kHz or 48 kHz used in CD and DVD formats. The Audio Engineering Society (AES), Consumer Technology Association (CTA) and Japan Audio Society (JAS) set 24-bit/96 kHz as the minimum requirement to fulfill the standard. The Recording Academy Producers & Engineers Wing also cites 24-bit/96 kHz as the preferred resolution for tracking, mixing and mastering audio. It is supported by media formats such as DVD-Audio, DualDisc and High Fidelity Pure Audio, download stores like Bandcamp, HDtracks and Qobuz, and streaming platforms including Apple Music, Amazon Music and Tidal. Research into high-resolution audio began in the late 1980s and recordings were made available on the consumer market in 1996. Other bit depth/sample rate combinations that are often marketed as "high-resolution" include 1-bit/2.8224 MHz (DSD), 20-bit/44.1 kHz (HDCD), 24-bit/44.1, 88.2 or 176.4 kHz, 24-bit/48, 96 or 192 kHz, and 24-bit/352.8 kHz (DXD). Reference-grade digital-to-analog converters that oversample to very high rates such as 24-bit/384 kHz, 32-bit/384 kHz and 32-bit/768 kHz are also available for both consumer and professional use. Sony's LDAC, Dolby's Digital Plus and Lenbrook's MQA are marketed as "hi-res"; however, these codecs employ lossy compression and can often have lower bit rates than Compact Disc Digital Audio, and thus cannot be classified as "true high-resolution." == Definitions == High-resolution audio is generally used to refer to audio files that have a higher sample rate and/or bit depth than that of Compact Disc Digital Audio (CD-DA), which operates at 44.1 kHz/16-bit. The Recording Industry Association of America (RIAA), in cooperation with the Consumer Electronics Association, DEG: The Digital Entertainment Group, and The Recording Academy Producers & Engineers Wing, formulated the following definition of high-resolution audio in June 2014: "lossless audio capable of reproducing the full spectrum of sound from recordings which have been mastered from better than CD quality music sources which represent what the artists, producers and engineers originally intended." File formats capable of storing high-resolution audio include FLAC, ALAC, WAV, AIFF, MQA and DSD (the format used by SACD). == History == One of the first attempts to market high-resolution audio was High Definition Compatible Digital in 1995, an encoding/decoding technique using standard CD audio. This was followed by two more optical disc formats claiming sonic superiority over CD-DA: SACD in 1999, and DVD-Audio in 2000. These formats offer additional benefits such as multi-channel surround sound. Following a format war, none of these achieved widespread adoption. Following the rise in online music retailing at the start of the 21st century, high-resolution audio downloads were introduced by HDtracks starting in 2008. Further attempts to market high-resolution audio on optical disc followed with Pure Audio Blu-ray in 2009, and High Fidelity Pure Audio in 2013. Competition in online high-resolution audio retail stepped up in 2014 with the announcement of Neil Young's Pono service. In 2014, the Japan Electronics and Information Technology Industries Association (JEITA) announced a specification and accompanying "Hi-Res AUDIO" logo for consumer audio products, administered by the Japan Audio Society (JAS). The standard sets minimums of 96 kHz sample rate and 24-bit depth, and for analog processes, 40 kHz.
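To put the minimums above in perspective, a minimal sketch (not from the article) of the raw PCM data rates involved: lossless codecs such as FLAC reduce these figures, and the lossy codecs mentioned earlier can fall below even the CD rate.

```python
# Uncompressed PCM data rates implied by the resolutions discussed above.
def pcm_bitrate_kbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    """Raw (uncompressed) PCM bit rate in kbit/s for the given format."""
    return sample_rate_hz * bit_depth * channels / 1000

cd = pcm_bitrate_kbps(44_100, 16)      # Compact Disc Digital Audio
hires = pcm_bitrate_kbps(96_000, 24)   # the 24-bit/96 kHz "hi-res" minimum

print(f"CD audio:      {cd:.0f} kbit/s")     # ~1411 kbit/s
print(f"24-bit/96 kHz: {hires:.0f} kbit/s")  # ~4608 kbit/s
print(f"Ratio:         {hires / cd:.2f}x")   # ~3.27x more raw data
```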
The related "Hi-Res Audio Wireless" standard additionally requires support for the LDAC, LHDC, LC3plus and MQair codecs. Sony reaffirmed its commitment towards the development in the high-resolution audio segment by offering a slew of Hi-Res Audio products. == Streaming services == As of 2021, some music streaming services such as Tidal, Qobuz, Amazon Music, and Apple Music have options to enable the playback of high-resolution audio files. == Controversy == Whether there is any benefit to high-resolution audio over CD-DA is controversial, with some sources claiming sonic superiority: "The DSD process used for producing SACDs captures more of the nuances from a performance and reproduces them with a clarity and transparency not possible with CD."—The Mariinsky record label of the Mariinsky Ballet (formerly Kirov Ballet), St. Petersburg, Russia, that sells Super Audio CDs (SACDs) "The claimed main-benefit of high-resolution audio files is superior sound quality [...] 24-bit/96 kHz or 24-bit/192 kHz files should, therefore, more closely replicate the sound quality that the musicians and engineers were working with in the studio."—What Hi-Fi? "...music professionals with access to first generation data have widely reported subjectively better sound, and a meta-analysis of previously published listening tests comparing high resolution to CD found a clear, though small, audible difference that significantly increased when the listening tests included standard training (i.e., with experience in listening)."—Journal of the Audio Engineering Society, Volume 67, Issue 5 ...and with other opinions ranging from skeptical to highly critical: "If [the music business] cared about sound quality in the first place, they would make all of the releases sound great in every format they sell: MP3, FLAC, CD, iTunes, or LP."—cnet "Impractical overkill that nobody can afford"—Gizmodo "A solution to a problem that doesn't exist, a business model based on willful ignorance and scamming people."—Xiph.org Business magazine Bloomberg Businessweek suggests that caution is in order with regard to high-resolution audio: "There is reason to be wary, given consumer electronics companies' history of pushing advancements whose main virtue is to require everyone to buy new gadgets." High-resolution files that are downloaded from niche websites that cater to audiophile listeners often include different mastering in the release – thus many comparisons of CD to these releases are evaluating differences in mastering, rather than bit depth. Most early papers using blind listening tests concluded that differences are not audible by the sample of listeners taking the test. Blind tests have shown that musicians and composers are unable to distinguish higher resolutions from 16-bit audio at 48 kHz. One 2014 paper showed that dithering using outdated methods produces audible artifacts in blind listening tests. Joshua Reiss performed a meta-analysis on 20 of published tests, saying that trained listeners could distinguish between hi-resolution recordings and their CD equivalents under blind conditions. 
In a paper published in the July 2016 issue of the AES Journal, Reiss says that, although the individual tests had mixed results, and that the effect was "small and difficult to detect," the overall result was that trained listeners could distinguish between high-resolution recordings and their CD equivalents under blind conditions: "Overall, there was a small but statistically significant ability to discriminate between standard-quality audio (44.1 or 48 kHz, 16 bit) and high-resolution audio (beyond standard quality). When subjects were trained, the ability to discriminate was far more significant." Hiroshi Nittono pointed out that the results in Reiss's paper showed that the ability to distinguish hi resolution audio from CD quality audio "was only slightly better than chance". Some technical explanations for sonic superiority cite the improved time domain impulse response of the anti-aliasing filter allowed by higher sample rates. This reduces the energy spread in time from transient signals such as plucking a string or striking a cymbal. == See also == High fidelity Loudness war == Notes == == References ==
Wikipedia/High-resolution_audio
The Multi-Color Graphics Array or MCGA is a video subsystem built into the motherboard of the IBM PS/2 Model 30, introduced in April 1987, and Model 25, introduced later in August 1987; no standalone MCGA cards were ever made. The MCGA supports all CGA display modes plus 640 × 480 monochrome at a refresh rate of 60 Hz, and 320 × 200 with 256 colors (out of an 18-bit RGB palette of 262,144) at 70 Hz. The display adapter uses a DE-15 connector, sometimes referred to as HD-15. MCGA is similar to VGA in that it had a 256-color mode (the 256-color mode in VGA was sometimes referred to as MCGA) and uses 15-pin analog connectors. The PS/2 chipset's limited abilities prevent EGA compatibility and high-resolution multi-color VGA display modes. The tenure of MCGA was brief; the PS/2 Model 25 and Model 30 were discontinued by 1992, and the only manufacturer to produce a clone of this display adapter was Epson, in the Equity Ie and PSE-30, since the VGA standard introduced at the same time was considered superior. == Software support == The 256-color mode proved most popular for gaming. 256-color VGA games ran fine on MCGA as long as they stuck to the basic 320 × 200 256-color mode and didn't attempt to use VGA-specific features such as multiple screen pages. Games lacking support for 256-color graphics were forced to fall back to four-color CGA mode (or not run at all) due to the incompatibility with EGA video modes (320 × 200, 640 × 200, or 640 × 350, all in 16 colors). Some games, including point-and-click adventures from Sierra On-line and Lucasfilm Games, as well as simulation and strategy titles from Microprose, solved this problem for low-resolution titles by supporting the MCGA's 320 × 200 256-color mode and picking the colors most resembling the EGA 16-color RGB palette, while leaving the other available colors in that mode unused. Higher resolution titles were often unsupported unless graphics could be converted into either MCGA low or high (640 × 480 monochrome, which would also support 640 × 400 and 640 × 350 with some letterboxing) resolution mode in an acceptable fashion. An alternative approach used by a small number of (generally earlier) games was to use four-color CGA assets but make use of the adaptor's ability to freely change the palette for a slightly enhanced appearance. == Output capabilities == MCGA offered: 640 × 480 monochrome (mode 11h) 320 × 200 in 256 colors (from a palette of 262,144; mode 13h) CGA compatible modes: 40 × 25 text mode with 8×8 pixel font (effective resolution of 320 × 200; mode 0/1h) 80 × 25 text mode with 8×8 pixel font (effective resolution of 640 × 200; mode 2/3h) 320 × 200 in four colors from a 16 color hardware palette with a pixel aspect ratio of 1:1.2. (mode 4/5h) 640 × 200 in two colors with a pixel aspect ratio of 1:2.4 (mode 6h) == See also == List of defunct graphics chips and card companies == References == Mueller, Scott (1992). Upgrading and Repairing PCs (Second ed.). Que Books. ISBN 0-88022-856-3.
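To make the storage demands of the listed modes concrete, here is a small arithmetic sketch (an illustration, not a description of the adapter's actual memory layout): the 256-color mode needs one byte per pixel, the 640 × 480 monochrome mode one bit, and the CGA-compatible four-color mode two bits.

```python
# Framebuffer sizes implied by the MCGA modes listed above (pure arithmetic).
modes = {
    "320x200, 256 colors (mode 13h)":     (320, 200, 8),  # 8 bits per pixel
    "640x480, monochrome (mode 11h)":     (640, 480, 1),  # 1 bit per pixel
    "320x200, 4 colors (CGA mode 4/5h)":  (320, 200, 2),  # 2 bits per pixel
}

for name, (width, height, bits_per_pixel) in modes.items():
    size_bytes = width * height * bits_per_pixel // 8
    print(f"{name}: {size_bytes:,} bytes ({size_bytes / 1024:.1f} KB)")
```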
Wikipedia/Multi-Color_Graphics_Array
The Enhanced Graphics Adapter (EGA) is an IBM PC graphics adapter and de facto computer display standard from 1984 that superseded the CGA standard introduced with the original IBM PC, and was itself superseded by the VGA standard in 1987. In addition to the original EGA card manufactured by IBM, many compatible third-party cards were manufactured, and EGA graphics modes continued to be supported by VGA and later standards. == History == EGA was introduced in October 1984 by IBM, shortly after its new PC/AT. The EGA could be installed in previously released IBM PCs, but required a ROM upgrade on the mainboard. Chips and Technologies' first product, announced in September 1985, was a four-chip EGA chipset that handled the functions of 19 of IBM's proprietary chips on the original Enhanced Graphics Adapter. By that November's COMDEX, more than a half dozen companies had announced EGA-compatible boards based on C&T's chipset. The first EGA-compatible board was the Vega, released by Video Seven in December 1985 and using C&T's chipset. The Vega was half the length of the original IBM EGA board. Between 1984 and 1987, several third-party manufacturers produced compatible cards, such as the Autoswitch EGA or Genoa Systems' Super EGA chipset. Later cards supporting an extended version of the VGA were similarly named Super VGA. The EGA standard was made obsolete in 1987 by the introduction of MCGA and VGA with the PS/2 computer line. == Adoption == By 1985 InfoWorld described EGA as the "next graphics standard", but with "sluggish sales" because of high cost and lack of software support. The magazine said that "market reaction ... although positive, has not been overwhelming, in part because the EGA's complexity has slowed software vendors' efforts to support it". Commercial software began supporting EGA soon after its introduction, with The Ancient Art of War, released in 1984. Microsoft Flight Simulator v2.12, Jet, Silent Service, and Cyrus, all released in 1985, offered EGA support, along with Windows 1.0. Sierra's King's Quest III, released in 1986, was one of the earliest mainstream PC games to use it. The first clone boards appeared in late 1985, lowering EGA's cost. By 1987, EGA support was commonplace. Most software made up to 1991 could run in EGA, although the vast majority of commercial games used 320 × 200 with 16 colors for backward compatibility with CGA and Tandy, and to support users who did not own an enhanced EGA monitor. 350-line modes were mostly used by freeware/shareware games and application software, although SimCity is a notable example of a commercial game that runs in 640 × 350 with 16 colors mode. Modern adventure games, like The Crimson Diamond, use freeware tools like the Adventure Game Studio to create games with EGA-style color palettes but with modern features. == Hardware design == The original IBM EGA was an 8-bit PC ISA card with 64 KB of onboard RAM. An optional daughter-board (the Graphics Memory Expansion Card) provided a minimum of 64 KB additional RAM, and up to 192 KB if fully populated with the Graphics Memory Module Kit. Without these upgrades, the card would be limited to four colors in 640 × 350 mode. Output was via direct-drive RGB, as with the CGA, but no composite video output was included. MDA and CGA monitors could be driven, as well as newly released enhanced color monitors for use specifically with EGA. EGA-specific monitors used a dual-sync design which could switch from the 15.7 kHz of 200-line modes to 21.8 kHz for 350-line modes.
Many EGA cards have DIP switches on the back of the card to select the monitor type. If CGA is selected, the card will operate in 200-line mode and use 8×8 characters in text mode. If EGA is selected, the card will operate in 350-line mode and use 8×14 text. Some third-party cards using the EGA specification were sold with the full 128 KB of RAM from the factory, while others included as much as 256 KB to enable multiple graphics pages, multiple text-mode character sets, and large scrolling displays. A few third-party cards, such as the ATI Technologies EGA Wonder, built on the EGA standard to additionally offer features such as extended graphics modes as high as 800 × 560 and automatic monitor type detection. == Capabilities == EGA produces a display of up to 16 colors (using a fixed palette, or one selected from a gamut of 64 colors (6-bit RGB), depending on mode) at several resolutions up to 640 × 350 pixels, as well as two monochrome modes at higher resolutions. EGA cards include a ROM to extend the system BIOS for additional graphics functions, and a custom CRT controller (CRTC). The IBM EGA CRTC supports all of the modes of the IBM MDA and CGA adapters through specific mode options, but it is not fully register-compatible with the Motorola MC6845 used in those cards, so software that directly programs the registers to select modes may produce different results on the EGA. Supported resolutions are 320 × 200 and 640 × 200 (on a CGA or EGA monitor), 720 × 350 and 640 × 350 (on an MDA monitor) and 320 × 350 and 640 × 350 (on an EGA monitor). EGA scans at 21.8 kHz when 350-line modes are used and 15.7 kHz when 200-line modes are used. For both horizontal scan rates, the vertical scan rate is 60 Hz. In the 640 × 350 high-resolution mode, which requires an enhanced EGA monitor, 16 colors can be selected from a palette of 64, comprising all combinations of two bits per pixel (four levels of intensity) for red, green and blue. On EGA adapters with only 64 KB of video RAM, only 4 colors can be selected per pixel. The 640 × 200 and 320 × 200 graphics modes provide backward compatibility with CGA software and monitors, but they can use the entire 16-color CGA palette simultaneously, instead of the smaller 4-color palettes that CGA is limited to in those modes. EGA's 16-color graphic modes use bit planes and mask registers together with CPU bitwise operations for accelerated graphics. The same techniques went on to be used in the VGA. === Modes === EGA supports: 640 × 350 × 16 colors (from a 6 bit palette of 64 colors), pixel aspect ratio of 1:1.37. 640 × 350 × 2 colors, pixel aspect ratio of 1:1.37. 640 × 200 × 16 colors, pixel aspect ratio of 1:2.4. 320 × 200 × 16 colors, pixel aspect ratio of 1:1.2. Text modes: 40 × 25 with 8 × 8 pixel font (effective resolution of 320 × 200) 80 × 25 with 8 × 8 pixel font (effective resolution of 640 × 200) 80 × 25 with 8 × 14 pixel font (effective resolution of 640 × 350) 80 × 43 with 8 × 8 pixel font (effective resolution of 640 × 344) Extended graphics modes of third-party boards: 640 × 400 640 × 480 720 × 540 800 × 560 === Color palette === With the EGA, all 16 CGA colors can be used simultaneously, and each can be mapped in from a larger palette of 64 colors (two bits each for red, green and blue). The CGA's alternate brown color is included in the larger palette, so it can be used without any additional display hardware. The later VGA standard built on this by mapping each of the 64 colors in from a larger, customizable, palette of 256. 
Standard EGA monitors do not support use of the extended color palette in 200-line modes, because the monitor cannot distinguish between being connected to a CGA card or being connected to an EGA card outputting a 200-line mode. EGA redefines some pins of the connector to carry the extended color information. If the monitor were connected to a CGA card, these pins would not carry valid color information, and the screen might be garbled if the monitor were to interpret them as such. For this reason, standard EGA monitors will use the CGA pin assignment in 200-line modes, so the monitor can also be used with a CGA card. Some EGA monitors are switchable, meaning that they can be set up to use the full palette even in 200-line modes, often through a mechanical switch. Only a few commercial games were released with support for the extended color palette in 320 × 200 or 640 × 200 (including the DOS version of Super Off Road). When selecting a color from the EGA palette, two bits are used for the red, green and blue channels to signal values of 0, 1, 2 or 3. For instance, to select the color magenta, the red and blue values would be medium intensity (2, or 10 in binary) and the green value would be off (0). The table below displays an example palette matching the standard 16 CGA colors, with their representations in rgbRGB binary (internal card bit order), where the lowercase letters are the low-intensity bits, and uppercase letters are high-intensity bits. Decimal and hexadecimal values (converted to equivalent 24-bit sRGB web colors) are also shown. The following images illustrate the full EGA palette in detail. === Specifications === EGA uses a female nine-pin D-subminiature (DE-9) connector for output, identical to the CGA connector. The signal standard and pinout is backward-compatible with CGA, allowing EGA monitors to be used on CGA cards and conversely. When operating in EGA modes, pins 2, 6 and 7 are repurposed for EGA's secondary RGB signals (see pinout table below). When operating in 200-line CGA modes, the EGA card is fully backward compatible with a standard IBM CGA monitor; however, third-party monitors had varying compatibility. Third-party monitors sometimes connected pin two to ground internally. When connected to an EGA card, this shorts the EGA's secondary red output to ground and can damage the card. Also, some monitors were wired with pin two as their sole ground, and these will not work with the EGA. Conversely, an EGA monitor should work with a CGA adapter, but if it is not set to CGA mode, the secondary red signal will be grounded (always zero), and the secondary blue will be floating (unconnected), causing all high-intensity colors except brown to display incorrectly, and all colors to potentially have a variable blue tint due to the indeterminate state of the unconnected secondary blue. The IBM 5154 EGA monitor has a special IBM 5153 CGA compatibility mode when operating with CGA sync signals and automatically changes to the CGA pinout to avoid all of the mentioned problems when operating in this mode. The original IBM EGA card includes a feature connector (blue connector J4, see first photo on this page), providing access to two RCA connectors at the back of card, in addition to several analog and digital signals that the EGA adaptor can be configured to use. A light pen interface was also present on the original card. 
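As a small illustration of the palette encoding described above, the sketch below decodes a 6-bit EGA palette value, written in the rgbRGB bit order used in the palette discussion, into a 24-bit sRGB triple. The linear mapping of the four per-channel levels onto 0, 85, 170 and 255 is a common convention and an assumption of this sketch rather than something stated in the article.

```python
# Decode a 6-bit EGA palette value (bit order rgbRGB: lowercase = low-intensity
# bits, uppercase = high-intensity bits) into an (R, G, B) triple of 0-255 values.
def ega_to_rgb24(value: int) -> tuple:
    def level(hi: int, lo: int) -> int:
        return (2 * hi + lo) * 85          # four levels -> 0, 85, 170, 255 (assumed mapping)
    red   = level((value >> 2) & 1, (value >> 5) & 1)
    green = level((value >> 1) & 1, (value >> 4) & 1)
    blue  = level(value & 1,        (value >> 3) & 1)
    return (red, green, blue)

print(ega_to_rgb24(0b010100))   # (170, 85, 0) -> the CGA-compatible brown
print(ega_to_rgb24(0b111111))   # (255, 255, 255) -> full-intensity white
```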
=== Memory mapping === For color text and CGA graphics modes, video memory is mapped to 16 KB of addresses beginning at address B8000h, and in monochrome (MDA-compatible) text mode, video memory occupies 16 KB beginning at B0000h. These address mappings are for backward compatibility. For modes new to the EGA, the video memory begins at address A0000h and occupies 64 KB. The different base addresses for color vs. monochrome modes makes it possible for an EGA to be used simultaneously with a monochrome graphics card in the same computer, or for an EGA in MDA text mode to be used simultaneously with a CGA in the same computer. EGA's native graphics modes are planar, as opposed to the interleaved CGA and Hercules modes. Video memory is divided into four "planes" (except 640 × 350 × 2, which has one plane), one for each component of the RGBI color space. Each pixel is represented by one bit in each plane. If a bit in the red plane is on, but none of the equivalent bits in the other pages are, a red pixel will appear in that location on screen. If all the other bits for that particular pixel were also on, it would become white, and so forth. Planes are different sizes depending on the mode: All planes reside at segment A000 in the CPU's address space. They are bank-switched, and only one plane can be read on the CPU bus at once; however, the programmer may set the control registers on the card to select which planes are written to and write to several at once. An exception is read mode 1, in which all four planes are read and compared with programmed "Color Compare" data, and a byte indicating the result of comparing all four planes can be read on the I/O bus. == See also == JEGA (Japanese Enhanced Graphics Adapter for AX computers) Video card Graphics display resolution Graphics processing unit List of display interfaces List of monochrome and RGB color formats – 6-bit RGB section List of 16-bit computer color palettes – EGA section Professional Graphics Controller Hercules InColor Card VGA compatible text mode – EGA's own modes are just a subset, and all features are nearly the same List of defunct graphics chips and card companies == References == == External links == Representative screenshots of EGA games
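The planar arrangement described in the Memory mapping section can be illustrated with a short sketch (not from the article) that reassembles one pixel's 4-bit color index from four plane bitmaps held in ordinary byte arrays; real EGA hardware exposes the planes through bank-switched segment A000 and the card's registers, which this toy model does not attempt to reproduce.

```python
# Assemble a pixel's 4-bit color index from four EGA-style bit planes
# (one bit per pixel per plane, eight pixels packed into each byte).
def pixel_color_index(planes, x, y, width):
    """Combine the bit at (x, y) from each plane into a 0-15 color index."""
    pixel_number = y * width + x
    byte_index = pixel_number // 8
    bit_mask = 0x80 >> (pixel_number % 8)      # leftmost pixel = most significant bit
    index = 0
    for plane_number, plane in enumerate(planes):   # planes 0-3: conventionally blue, green, red, intensity
        if plane[byte_index] & bit_mask:
            index |= 1 << plane_number
    return index

# Toy 8x1 image: only the first pixel is set, in the red and intensity planes.
planes = [bytes([0x00]), bytes([0x00]), bytes([0x80]), bytes([0x80])]
print(pixel_color_index(planes, 0, 0, 8))   # 12, light red in the default 16-color palette
```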
Wikipedia/Enhanced_Graphics_Adapter
8K resolution refers to an image or display resolution with a width of approximately 8,000 pixels. 8K UHD (7680 × 4320) is the highest resolution defined in the Rec. 2020 (UHDTV) standard. 8K display resolution is the successor to 4K resolution. TV manufacturers pushed to make 4K a new standard by 2017. At CES 2012, the first prototype 8K TVs were unveiled by Japanese electronics corporation Sharp. The feasibility of a fast transition to this new standard is questionable given the absence of broadcasting resources. In 2018, Strategy Analytics predicted that 8K-ready devices would still account for only 3% of UHD TVs by 2023, with global sales of 11 million units a year. However, TV manufacturers remained optimistic, as the 4K market grew much faster than expected, with actual sales exceeding projections nearly six-fold in 2016. In 2013, transmission networks' capability to carry such high resolutions was limited by internet speeds, so transmitting the high data rates relied on satellite broadcast. The demand is expected to drive the adoption of video compression standards and to place significant pressure on physical communication networks in the near future. As of 2018, few cameras had the capability to shoot video in 8K, NHK being one of the few companies to have created a small broadcasting camera with an 8K image sensor. By 2018, the Red Digital Cinema camera company had delivered three 8K cameras, with both full-frame and Super 35 sensor options. == History == Japan's public broadcaster NHK was the first to start research and development of 4320p resolution in 1995, and the format was first displayed in 2005. The format was standardized by SMPTE in October 2007. The interface was standardized by SMPTE in August 2010 and recommended as the international standard for television by ITU-R in 2012. This was followed by public displays at electronics shows, screenings of the 2014 Winter Olympics in Sochi and public viewings in February 2014, and the FIFA World Cup in Brazil in June 2014, using HEVC with partners AstroDesign and Ikegami Electronics. On January 6, 2015, the MHL Consortium announced the release of the superMHL specification, which will support 8K resolution at 120 fps, 48-bit video, the Rec. 2020 color space, high dynamic range support, a 32-pin reversible superMHL connector, and power charging of up to 40 watts. On March 1, 2016, the Video Electronics Standards Association (VESA) unveiled DisplayPort 1.4, a new format that allows the use of 8K resolution (7680 × 4320) at 60 Hz with HDR and 32 audio channels through USB-C. On January 4, 2017, the HDMI Forum announced HDMI 2.1, featuring support for 8K video with HDR, to be "released early in Q2 2017". The 8K Association was formed at CES 2019 to help develop the 8K ecosystem. In early February 2020, Samsung Electronics announced during its Unpacked event that the Samsung Galaxy S20 can record video in 8K, which uses 600 MB of storage per minute. === First cameras === On April 6, 2013, Astrodesign Inc. announced the AH-4800, capable of recording 8K resolution. In April 2015, Red announced that its newly unveiled Red Weapon VV is also capable of recording 8K footage. In October 2016, Red announced two additional 8K cameras, the Red Weapon 8K S35 and the Red Epic-W 8K S35. The Red Weapon Dragon VV was discontinued on October 7, 2017, when Red unveiled the Red Weapon Monstro VV, its fourth camera capable of shooting 8K, with improvements in dynamic range and noise reduction, among other features.
=== Mobile phone cameras === In May 2019, mobile phone vendors started releasing the first mobile phones with 8K video recording capabilities, such as the ZTE Nubia Red Magic 3 series. This was enabled by mobile phone image sensors with sufficient resolution and by sufficient chipset performance. Mobile phones with intermediate 5K (2880p) or 6K (3240p) video recording, however, have never been released. Asus ZenFone 7 Redmi K30 Pro Redmi K40 ROG Phone 3 Vivo X50 Vivo X60 LG V60 ThinQ Nubia Z20 Samsung Galaxy Note 20 Samsung Galaxy S20 Samsung Galaxy S21 Samsung Galaxy S22 Samsung Galaxy S23 Samsung Galaxy S24 Xiaomi Mi 10 Xiaomi Mi 10 Ultra Xiaomi Mi 10T Xiaomi Mi 11 === Productions and broadcasting === In 2007, the original 65 mm negative of the 1992 film Baraka was re-scanned at 8K with a film scanner built specifically for the job at FotoKem Laboratories, and used to remaster the 2008 Blu-ray release. Chicago Sun-Times critic Roger Ebert described the Blu-ray release as "the finest video disc I have ever viewed or ever imagined." A similar 8K scan/4K intermediate digital restoration of Lawrence of Arabia was made for Blu-ray and theatrical re-release during 2012 by Sony Pictures to celebrate the film's 50th anniversary. According to Grover Crisp, executive VP of restoration at Sony Pictures, the new 8K scan has such high resolution that, when examined, it showed a series of fine concentric lines in a pattern "reminiscent of a fingerprint" near the top of the frame. This was caused by the film emulsion melting and cracking in the desert heat during production. Sony had to hire a third party to minimize or eliminate the rippling artifacts in the new restored version. On May 17, 2013, the Franklin Institute premiered To Space and Back, an 8K×8K, 60 fps, 3D video running approximately 25 minutes. During its first run at the Fels Planetarium it was played at 4K, 60 fps. In November 2013, NHK screened "The Chorus", an experimental drama short film shot in 8K with 22.2 channel sound, at the Tokyo Film Festival. On May 1, 2015, an 8K abstract computer animation was screened at the Filmatic Festival at the University of California, San Diego. The work was created as an assignment in the VIS 40/ICAM 40 Introduction to Computing in the Arts class taught at UCSD by Associate Teaching Professor Brett Stalbaum during the winter quarter of 2015, with each student producing three hundred 8192 × 4800 pixel frames. The work's music soundtrack was composed by Mark Matamoros. On January 6, 2016, director James Gunn stated that the 2017 film Guardians of the Galaxy Vol. 2 would be the first feature film to be shot in 8K, using the Red Weapon 8K VV. The movie was shot with 8K cameras (and partially with 4K cameras), although the digital intermediate of the movie is in a lower resolution. Japanese public broadcaster NHK began research and development on 8K in 1995, having spent over $1 billion on R&D since then. Codenamed Super Hi-Vision (named after its old Hi-Vision analog HDTV system), NHK was also simultaneously working on the development of 22.2 channel surround sound audio. The world's first 8K television was unveiled by Sharp at the Consumer Electronics Show (CES) in 2012.
Experimental transmissions of the resolution were tested during the 2012 Summer Olympics and at the Cannes Film Festival, where Beauties À La Carte, a 27-minute short, was shown publicly on a 220-inch screen. These were accompanied by a three-year roadmap entailing the launch of 8K test broadcasting in 2016 and plans to roll out full 8K services by 2018, in time for the 2020 Summer Olympics, which were delayed to 2021 due to the COVID-19 pandemic. Ultimately, about 200 hours of material from the Tokyo Olympics, including the opening and closing ceremonies, were broadcast in 8K (on the NHK BS8K channel). The specification for an 8K Blu-ray format was also completed by the Blu-ray Disc Association for use in Japan by December 2017. As of the end of 2024, there are no standalone Blu-ray players certified as 8K capable (even though PC Blu-ray drives able to read 100 GB and 128 GB discs are sold commercially), and no home video releases in 8K on physical media by any major studio. On December 1, 2018, NHK launched BS8K, a broadcast channel transmitting at 8K resolution. Films shown in 8K on that channel include 2001: A Space Odyssey (1968) and My Fair Lady (1964); in both cases, 70 mm analog prints were used as the basis for the remastered 8K versions. New productions filmed with digital 8K cameras have also been aired in 8K on that channel; they include the Edo period drama Kikyo – The Return and the WWII dramas Wife of a Spy and Gift of Fire. In Germany, the third season of Das Boot (a historical drama TV series set on a Nazi submarine) was made available in 8K on the Samsung TV Plus streaming service. == Gaming == Sony announced that the PlayStation 5 would support 8K graphics. Microsoft then announced its Xbox Series X with 8K graphics support, released in November 2020. The GeForce RTX 3090, released in September 2020 at an MSRP of $1499, was marketed by Nvidia as the first graphics card capable of 8K 60 fps HDR gaming, recording, and streaming with ShadowPlay on PCs. However, its successor, the GeForce RTX 4090, is often regarded as the first graphics card to achieve playable frame rates at 8K in many modern titles. == Resolutions == === 7680 × 4320 === This is the resolution of the UHDTV2 format defined in SMPTE ST 2036–1, as well as the 8K UHDTV format defined in ITU-R BT.2020. It was also chosen by the DVB project as the resolution for its 8K broadcasting standard, UHD-2. It has 33.2 million total pixels, and is double the linear resolution of 4K UHD (four times as many total pixels), three times the linear resolution of 1440p (nine times as many total pixels), four times the linear resolution of 1080p (16 times as many total pixels), and six times the linear resolution of 720p (36 times as many total pixels).
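As a quick check of the pixel arithmetic above, the following Python snippet recomputes the totals and ratios; the width and height values are the standard ones for each format, stated here as assumptions for the comparison.

```python
# Pixel-count arithmetic behind the comparisons above (illustrative only).
resolutions = {
    "8K UHD": (7680, 4320),
    "4K UHD": (3840, 2160),
    "1440p": (2560, 1440),
    "1080p": (1920, 1080),
    "720p": (1280, 720),
}

w8, h8 = resolutions["8K UHD"]
total_8k = w8 * h8
print(f"8K UHD total pixels: {total_8k:,}")  # 33,177,600 (~33.2 million)

for name, (w, h) in resolutions.items():
    if name == "8K UHD":
        continue
    linear = w8 / w              # ratio of horizontal (linear) resolution
    area = total_8k / (w * h)    # ratio of total pixel counts
    print(f"vs {name}: {linear:.0f}x linear, {area:.0f}x total pixels")
```

Running it reproduces the figures in the text: 2x/4x versus 4K UHD, 3x/9x versus 1440p, 4x/16x versus 1080p, and 6x/36x versus 720p.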
== Devices == === TVs === Sharp's 85" 8K LCD TV, 7680 × 4320 resolution – International Consumer Electronics Show (CES) 2012 Panasonic's 145" 8K Plasma Display, 7680 × 4320 resolution – Internationale Funkausstellung Berlin (IFA) 2012 LG's 98" 8K LCD TV, 7680 × 4320 resolution – Internationale Funkausstellung Berlin (IFA) 2014 Panasonic's 55" 8K 120 Hz LCD, 7680 × 4320 resolution – International Consumer Electronics Show (CES) 2015 Samsung's 110" 8K 3D LCD TV, 7680 × 4320 resolution – International Consumer Electronics Show (CES) 2015 BOE's 98" 8K TV at CEATEC 2015 LG's 98-inch UH9800 with ColorPrime Plus technology – International Consumer Electronics Show (CES) 2016 Samsung's 98-inch SUHD 8K curved TV – International Consumer Electronics Show (CES) 2016 Hisense's 98-inch ULED 8K TV – International Consumer Electronics Show (CES) 2016 Changhong's 98-inch 98ZHQ2R "8K Super UHD", 7680 × 4320 resolution – International Consumer Electronics Show (CES) 2016 Samsung's Q9S 85-inch QLED TV – International Consumer Electronics Show (CES) 2018 LG's 88 inch 8K OLED TV – International Consumer Electronics Show (CES) 2018 Samsung's Q900 R – 65, 75, 82, and 85 inch 8K QLED TV models at CES 2019 Sony's Z9G/ZG9 – 85 inch and 100 inch 8K Ultra HD Bravia TV International Consumer Electronics Show (CES) 2019 Sony's Z8H/ZH8 – 75 and 85 inch 8K Ultra HD Bravia TV International Consumer Electronics Show (CES) 2020 Sony's Z9J – 65, 75, 85, and 100 inch 8K Ultra HD Bravia International Consumer Electronics Show 2021 virtual event. Sony's Z9K – 65, 75, 85, and 100 inch 8K Ultra HD Bravia International Consumer Electronics Show 2022 hybrid event. TCL's 75 inch 8K QLED TV – FIBA Basketball World Cup 2019 Edition displayed at IFA 2018 Hisense's U9E – 75 inch 8K QLED at IFA global press conference 2019 Samsung's QLED 8K TV Q700T/Q800T/Q950TS === Projectors === Digital Projection INSIGHT Laser 8K at Integrated Systems Europe 2018 === Monitors === Canon 30" 8K reference display – September 2015 Sharp's prototype 27-inch 8K 120 Hz IGZO desktop monitor with HDR (CEATEC 2016) Philips 328P8K 8K UHD desktop Monitor (CES 2017) Dell UltraSharp 32 Ultra HD 8K Monitor (UP3218K) (CES 2017) BOE 8K 13.3 inch Narrow Bezel Laptop Display at CITE 2018 ==== 8K VR Headset ==== Pimax Vision 8K X, made up of two 3840 × 2160 screens @ 90 Hz, started crowdfunding in October 2017, with the product released and shipped in September 2020. === Cameras === Astrodesign AH-4800, 1.7-inch CMOS camera capable of recording in 8K resolution. Unveiled on April 6, 2013. RED Weapon Vista Vision 35MM 8K (8192 × 4320) at 60 fps in full-sensor mode, or up to 75 fps in a scope (2.40∶1) frame format. The camera has a 40.96 mm × 21.6 mm sensor based on the previous generation Dragon sensor. Unveiled at NAB 2015, released end of 2015. RED DSMC2 Helium with an S35MM 8K 29.9 mm × 15.77 mm 35.4 megapixel CMOS sensor—up to 60 fps at 8K (8192 × 4320) and 75 fps at 8K 2.4∶1 (8192 × 3456) with a dynamic range of 16.5+ stops; limited release July 2016, general release October 2016. RED Epic-W with an S35MM 8K 29.9 mm × 15.77 mm 35.4 megapixel CMOS Helium sensor—up to 30 fps at 8K (8192 × 4320) with a dynamic range of 16.5+ stops; release date: October 2016. RED DSMC2 Monstro 8K VV 40.96 mm × 21.60 mm 35.4 megapixel CMOS "wider than full frame" Monstro sensor—up to 60 fps at 8K (8192 × 4320) and 75 fps at 8K 2.4∶1 (8192 × 3456) with a dynamic range of 17+ stops; release date: October 2017. Ikegami S35MM SHK-810 8K broadcast camera. Unveiled at NAB 2015.
Hitachi S35MM SK-UHD8060 broadcast camera. Unveiled at NAB 2015. Hitachi S35MM SK-UHD8000 broadcast camera. Production version of the SK-UHD8060. Canon Cinema EOS System S35MM 8K camera. Unveiled September 2015. Panavision DXL 35MM 8K 60 fps and HDR Digital Cinematography Camera (Vista Vision Sensor). May 2016 Sharp S35MM 8C-B60A 8K Professional broadcast Camcorder. November 2017 Cinemartin Fran 8K VV Global Shutter, announced on May 8, 2018, starting sales in fall 2018. The company went bankrupt on April 1, 2019, and the camera is no longer available; it never reached the production stage, only a prototype. Blackmagic URSA Mini Pro 12K, originally 110 fps in 8K; since a September 2020 firmware update, up to 120 fps for DCI, 16:9 and 6:5 anamorphic aspect ratio modes and up to 160 fps for the 2.4:1 aspect ratio mode. Canon EOS R5 camera. Announced July 9, 2020. Sony Alpha 1 flagship mirrorless camera. Announced by Sony Electronics in San Diego, CA, on January 26, 2021. Nikon Z 9 camera. Announced on October 28, 2021. Sony VENICE 2 camera (Full-Frame 8.6K image sensor). Announced November 15, 2021. Fujifilm GFX100 II camera. Announced on 12 September 2023. ==== Action Cameras ==== Byroras CA100 shoots 8K @ 15 fps, up to 40 m underwater Nello X3K+ shoots 8K @ 15 fps Insta360 Ace Pro shoots 8K @ 24 fps ==== Smartphones with 8K camera ==== Samsung Galaxy S20 series, shoots 8K @ 24 fps, went on sale from February 2020 Samsung Galaxy Note 20 series, shoots 8K @ 24 fps, went on sale from August 2020 Samsung Galaxy S21 series, shoots 8K @ 24 fps, went on sale from January 2021 Samsung Galaxy S22 series, shoots 8K @ 24 fps, went on sale from February 2022 Samsung Galaxy S23 series, shoots 8K @ 30 fps, went on sale from February 2023 Samsung Galaxy S24 series, shoots 8K @ 30 fps, went on sale from January 2024 Samsung Galaxy S25 series, shoots 8K @ 30 fps, went on sale from February 2025 Asus ROG Phone 3, shoots 8K @ 30 fps, went on sale from July 2020 Asus ROG Phone 5, shoots 8K @ 30 fps, went on sale from April 2021 Asus ZenFone 7, shoots 8K @ 30 fps, went on sale from September 2020 Asus ZenFone 8, shoots 8K @ 24 fps, went on sale from May 2021 Asus ZenFone 8 Flip, shoots 8K @ 30 fps, went on sale from May 2021 Lenovo Legion Duel 2, shoots 8K @ 24 fps, went on sale from May 2021 LG V60 ThinQ, shoots 8K @ 30 fps, went on sale from March 2020 Motorola Edge 20 Pro, shoots 8K @ 24 fps, went on sale from August 2021 OnePlus 9/9 Pro, shoots 8K @ 30 fps, went on sale from March 2021 Xiaomi Mi 10/Mi 10 Pro, shoots 8K @ 30 fps, went on sale from February 2020 Xiaomi Mi 10 Ultra, shoots 8K @ 24 fps, went on sale from August 2020 Xiaomi Mi 10T/Mi 10T Pro, shoots 8K @ 30 fps, went on sale from October 2020 Xiaomi Mi 11, shoots 8K @ 24/30 fps, went on sale from January 2021 Xiaomi Mi 11 Pro/Mi 11 Ultra, shoots 8K @ 24 fps, went on sale from April 2021 Xiaomi MIX 4, shoots 8K @ 24 fps, went on sale from August 2021 Redmi K30 Pro, Poco F2 Pro, shoots 8K @ 24/30 fps, went on sale from March–May 2020 Redmi K40 Pro/K40 Pro+, shoots 8K @ 30 fps, went on sale from March 2021 Sharp Aquos R5G, shoots 8K @ 30 fps, went on sale from July 2020 Vivo X50 Pro+, shoots 8K @ 30 fps, went on sale from July 2020 Vivo X60 Pro+, shoots 8K @ 30 fps, went on sale from January 2021 ZTE Axon 30 Ultra, shoots 8K @ 30 fps, went on sale from April 2021 ZTE Nubia Red Magic 3, shoots 8K @ 15 fps, went on sale from May 2019 ZTE Nubia Red Magic 3s, shoots 8K @ 15 fps, went on sale from September 2019 ZTE Nubia Red Magic 5G, shoots 8K @ 15 fps, went on sale from
March 2020 ZTE Nubia Red Magic 5S, shoots 8K @ 30 fps, went on sale from August 2020 ZTE Nubia Red Magic 6/6 Pro, shoots 8K @ 30 fps, went on sale from March 2021 ZTE Nubia Red Magic 6R, shoots 8K @ 30 fps, went on sale from June 2021 ZTE Nubia Z20, shoots 8K @ 15 fps, went on sale from August 2019 ZTE Nubia Z30 Pro, shoots 8K @ 30 fps, went on sale from May 2021 ==== 8K VR camera ==== QooCam 8K, first affordable 8K 360° VR camera, with built-in video stitching. Insta360 Pro 2 === Fulldome === Definiti 8K theaters, 8192 × 8192 resolution === Consoles === Sony's PlayStation 5 (not the Pro version) is capable of outputting 8K via HDMI 2.1, but such resolutions are currently unavailable due to Sony's native 4K cap; it went on sale in November 2020. Sony removed mentions of 8K from the PlayStation 5 box between late January and mid-February 2024. Sony's PlayStation 5 Pro is capable of outputting 8K at 60 Hz using PSSR, but only some games work with it; it went on sale in November 2024. == See also == 2K resolution – digital video formats with a horizontal resolution of around 2,000 pixels 4K resolution – digital video formats with a horizontal resolution of around 4,000 pixels 5K resolution – digital video formats with a horizontal resolution of around 5,000 pixels, aimed at non-television computer monitor usage 10K resolution – digital video formats with a horizontal resolution of around 10,000 pixels, aimed at non-television computer monitor usage 16K resolution – digital video formats with a horizontal resolution of around 16,000 pixels 32K resolution Ultra-high-definition television (UHDTV) – digital video formats with resolutions of 4K (3840 × 2160) and 8K (7680 × 4320) Rec. 2020 – ITU-R Recommendation for UHDTV Digital movie camera Digital cinematography – makes extensive use of UHD video List of large sensor interchangeable-lens video cameras == References == == External links == Media related to 8K UHD cameras at Wikimedia Commons
Wikipedia/8K_resolution
The brain is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. It consists of nervous tissue and is typically located in the head (cephalization), usually near organs for special senses such as vision, hearing, and olfaction. Being the most specialized organ, it is responsible for receiving information from the sensory nervous system, processing that information (thought, cognition, and intelligence) and the coordination of motor control (muscle activity and endocrine system). While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates. In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons, typically communicating with one another via cytoplasmic processes known as dendrites and axons. Axons are usually myelinated and carry trains of rapid micro-electric signal pulses called action potentials to target specific recipient cells in other areas of the brain or distant parts of the body. The prefrontal cortex, which controls executive functions, is particularly well developed in humans. Physiologically, brains exert centralized control over a body's other organs. They act on the rest of the body both by generating patterns of muscle activity and by driving the secretion of chemicals called hormones. This centralized control allows rapid and coordinated responses to changes in the environment. Some basic types of responsiveness such as reflexes can be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful control of behavior based on complex sensory input requires the information integrating capabilities of a centralized brain. The operations of individual brain cells are now understood in considerable detail but the way they cooperate in ensembles of millions is yet to be solved. Recent models in modern neuroscience treat the brain as a biological computer, very different in mechanism from a digital computer, but similar in the sense that it acquires information from the surrounding world, stores it, and processes it in a variety of ways. This article compares the properties of brains across the entire range of animal species, with the greatest attention to vertebrates. It deals with the human brain insofar as it shares the properties of other brains. The ways in which the human brain differs from other brains are covered in the human brain article. Several topics that might be covered here are instead covered there because much more can be said about them in a human context. 
The most important that are covered in the human brain article are brain disease and the effects of brain damage. == Structure == The shape and size of the brain varies greatly between species, and identifying common features is often difficult. Nevertheless, there are a number of principles of brain architecture that apply across a wide range of species. Some aspects of brain structure are common to almost the entire range of animal species; others distinguish "advanced" brains from more primitive ones, or distinguish vertebrates from invertebrates. The simplest way to gain information about brain anatomy is by visual inspection, but many more sophisticated techniques have been developed. Brain tissue in its natural state is too soft to work with, but it can be hardened by immersion in alcohol or other fixatives, and then sliced apart for examination of the interior. Visually, the interior of the brain consists of areas of so-called grey matter, with a dark color, separated by areas of white matter, with a lighter color. Further information can be gained by staining slices of brain tissue with a variety of chemicals that bring out areas where specific types of molecules are present in high concentrations. It is also possible to examine the microstructure of brain tissue using a microscope, and to trace the pattern of connections from one brain area to another. === Cellular structure === The brains of all species are composed primarily of two broad classes of brain cells: neurons and glial cells. Glial cells (also known as glia or neuroglia) come in several types, and perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Neurons, however, are usually considered the most important cells in the brain. In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons. The property that makes neurons unique is their ability to send signals to specific target cells, sometimes over long distances. They send these signals by means of an axon, which is a thin protoplasmic fiber that extends from the cell body and projects, usually with numerous branches, to other areas, sometimes nearby, sometimes in distant parts of the brain or body. The length of an axon can be extraordinary: for example, if a pyramidal cell (an excitatory neuron) of the cerebral cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become a cable a few centimeters in diameter, extending more than a kilometer. These axons transmit signals in the form of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along the axon at speeds of 1–100 meters per second. Some neurons emit action potentials constantly, at rates of 10–100 per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst of action potentials. Axons transmit signals to other neurons by means of specialized junctions called synapses. A single axon may make as many as several thousand synaptic connections with other cells. When an action potential, traveling along an axon, arrives at a synapse, it causes a chemical called a neurotransmitter to be released. The neurotransmitter binds to receptor molecules in the membrane of the target cell. 
Synapses are the key functional elements of the brain. The essential function of the brain is cell-to-cell communication, and synapses are the points at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses; even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are excitatory (exciting the target cell); others are inhibitory; others work by activating second messenger systems that change the internal chemistry of their target cells in complex ways. A large number of synapses are dynamically modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory. Most of the space in the brain is taken up by axons, which are often bundled together in what are called nerve fiber tracts. A myelinated axon is wrapped in a fatty insulating sheath of myelin, which serves to greatly increase the speed of signal propagation. (There are also unmyelinated axons). Myelin is white, making parts of the brain filled exclusively with nerve fibers appear as light-colored white matter, in contrast to the darker-colored grey matter that marks areas with high densities of neuron cell bodies. == Evolution == === Generic bilaterian nervous system === Except for a few primitive organisms such as sponges (which have no nervous system) and cnidarians (which have a diffuse nervous system consisting of a nerve net), all living multicellular animals are bilaterians, meaning animals with a bilaterally symmetric body plan (that is, left and right sides that are approximate mirror images of each other). All bilaterians are thought to have descended from a common ancestor that appeared late in the Cryogenian period, 700–650 million years ago, and it has been hypothesized that this common ancestor had the shape of a simple tubeworm with a segmented body. At a schematic level, that basic worm-shape continues to be reflected in the body and nervous system architecture of all modern bilaterians, including vertebrates. The fundamental bilateral body form is a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The brain is small and simple in some species, such as nematode worms; in other species, such as vertebrates, it is a large and very complex organ. Some types of worms, such as leeches, also have an enlarged ganglion at the back end of the nerve cord, known as a "tail brain". There are a few types of existing bilaterians that lack a recognizable brain, including echinoderms and tunicates. It has not been definitively established whether the existence of these brainless species indicates that the earliest bilaterians lacked a brain, or whether their ancestors evolved in a way that led to the disappearance of a previously existing brain structure. === Invertebrates === This category includes tardigrades, arthropods, molluscs, and numerous types of worms. The diversity of invertebrate body plans is matched by an equal diversity in brain structures. Two groups of invertebrates have notably complex brains: arthropods (insects, crustaceans, arachnids, and others), and cephalopods (octopuses, squids, and similar molluscs). 
The brains of arthropods and cephalopods arise from twin parallel nerve cords that extend through the body of the animal. Arthropods have a central brain, the supraesophageal ganglion, with three divisions and large optical lobes behind each eye for visual processing. Cephalopods such as the octopus and squid have the largest brains of any invertebrates. There are several invertebrate species whose brains have been studied intensively because they have properties that make them convenient for experimental work: Fruit flies (Drosophila), because of the large array of techniques available for studying their genetics, have been a natural subject for studying the role of genes in brain development. In spite of the large evolutionary distance between insects and mammals, many aspects of Drosophila neurogenetics have been shown to be relevant to humans. The first biological clock genes, for example, were identified by examining Drosophila mutants that showed disrupted daily activity cycles. A search in the genomes of vertebrates revealed a set of analogous genes, which were found to play similar roles in the mouse biological clock—and therefore almost certainly in the human biological clock as well. Studies done on Drosophila also show that most neuropil regions of the brain are continuously reorganized throughout life in response to specific living conditions. The nematode worm Caenorhabditis elegans, like Drosophila, has been studied largely because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model organism for studying the way that genes control development. One of the advantages of working with this worm is that the body plan is very stereotyped: the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed each one under an electron microscope, then visually matched fibers from section to section, to map out every neuron and synapse in the entire body. The complete neuronal wiring diagram of C. elegans – its connectome – was thereby obtained. Nothing approaching this level of detail is available for any other organism, and the information gained has enabled a multitude of studies that would not otherwise have been possible. The sea slug Aplysia californica was chosen by Nobel Prize-winning neurophysiologist Eric Kandel as a model for studying the cellular basis of learning and memory, because of the simplicity and accessibility of its nervous system, and it has been examined in hundreds of experiments. === Vertebrates === The first vertebrates appeared over 500 million years ago (Mya) during the Cambrian period, and may have resembled the modern jawless fish (hagfish and lamprey) in form. Jawed vertebrates appeared by 445 Mya, tetrapods by 350 Mya, amniotes by 310 Mya and mammaliaforms by 200 Mya (approximately). Each vertebrate clade has an equally long evolutionary history, but the brains of modern fish, amphibians, reptiles, birds and mammals show a gradient of size and complexity that roughly follows the evolutionary sequence. All of these brains contain the same set of basic anatomical structures, but many are rudimentary in the hagfish, whereas in mammals the foremost part (forebrain, especially the telencephalon) is greatly developed and expanded. Brains are most commonly compared in terms of their mass.
The relationship between brain size, body size and other variables has been studied across a wide range of vertebrate species. As a rule of thumb, brain size increases with body size, but not in a simple linear proportion. In general, smaller animals tend to have proportionally larger brains, measured as a fraction of body size. For mammals, the relationship between brain volume and body mass essentially follows a power law with an exponent of about 0.75. This formula describes the central tendency, but every family of mammals departs from it to some degree, in a way that reflects in part the complexity of their behavior. For example, primates have brains 5 to 10 times larger than the formula predicts. Predators, who have to implement various hunting strategies against the ever changing anti-predator adaptations, tend to have larger brains relative to body size than their prey. All vertebrate brains share a common underlying form, which appears most clearly during early stages of embryonic development. In its earliest form, the brain appears as three vesicular swellings at the front end of the neural tube; these swellings eventually become the forebrain (prosencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon), respectively. At the earliest stages of brain development, the three areas are roughly equal in size. In many aquatic/semiaquatic vertebrates such as fish and amphibians, the three parts remain similar in size in adults, but in terrestrial tetrapods such as mammals, the forebrain becomes much larger than the other parts, the hindbrain develops a bulky dorsal extension known as the cerebellum, and the midbrain becomes very small as a result. The brains of vertebrates are made of very soft tissue. Living brain tissue is pinkish on the outside and mostly white on the inside, with subtle variations in color. Vertebrate brains are surrounded by a system of connective tissue membranes called meninges, which separate the skull from the brain. Cerebral arteries pierce the outer two layers of the meninges, the dura and arachnoid mater, into the subarachnoid space and perfuse the brain parenchyma via arterioles perforating into the innermost layer of the meninges, the pia mater. The endothelial cells in the cerebral blood vessel walls are joined tightly to one another, forming the blood–brain barrier, which blocks the passage of many toxins and pathogens (though at the same time blocking antibodies and some drugs, thereby presenting special challenges in treatment of diseases of the brain). As a result of the osmotic restriction by the blood-brain barrier, the metabolites within the brain are cleared mostly by bulk flow of the cerebrospinal fluid within the glymphatic system instead of via venules like other parts of the body. Neuroanatomists usually divide the vertebrate brain into six main subregions: the telencephalon (the cerebral hemispheres), diencephalon (thalamus and hypothalamus), mesencephalon (midbrain), cerebellum, pons and medulla oblongata, with the midbrain, pons and medulla often collectively called the brainstem. Each of these areas has a complex internal structure. Some parts, such as the cerebral cortex and the cerebellar cortex, are folded into convoluted gyri and sulci in order to maximize surface area within the available intracranial space. Other parts, such as the thalamus and hypothalamus, consist of many small clusters of nuclei known as "ganglia". 
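To make the brain-to-body scaling law mentioned earlier in this section concrete, here is a minimal Python sketch of the ~0.75 power-law relation. The constant of proportionality is taxon-dependent and not given in the text, so the sketch works only with ratios, in which that constant cancels; the specific numbers are purely illustrative.

```python
# Illustrative sketch of allometric brain scaling: expected brain mass
# scales roughly as body_mass ** 0.75. The taxon-dependent constant of
# proportionality is omitted, since it cancels in the ratios below.

EXPONENT = 0.75

def expected_brain_mass_ratio(body_mass_ratio: float) -> float:
    """How much heavier the expected brain is when the body is
    `body_mass_ratio` times heavier."""
    return body_mass_ratio ** EXPONENT

def brain_fraction_ratio(body_mass_ratio: float) -> float:
    """How the brain-to-body mass fraction changes with body size."""
    return body_mass_ratio ** (EXPONENT - 1.0)

# An animal with 10x the body mass is expected to have only ~5.6x the
# brain mass, so its brain is a smaller fraction of its body:
print(round(expected_brain_mass_ratio(10.0), 2))  # 5.62
print(round(brain_fraction_ratio(10.0), 2))       # 0.56
```

In these terms, primate brains being 5 to 10 times larger than the formula predicts corresponds to an observed-to-expected ratio of 5–10, which is the idea behind the encephalization quotient discussed later in this article.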
Thousands of distinguishable areas can be identified within the vertebrate brain based on fine distinctions of neural structure, chemistry, and connectivity. Although the same basic components are present in all vertebrate brains, some branches of vertebrate evolution have led to substantial distortions of brain geometry, especially in the forebrain area. The brain of a shark shows the basic components in a straightforward way, but in teleost fishes (the great majority of existing fish species), the forebrain has become "everted", like a sock turned inside out. In birds, there are also major changes in forebrain structure. These distortions can make it difficult to match brain components from one species with those of another species. Here is a list of some of the most important vertebrate brain components, along with a brief description of their functions as currently understood: The medulla, along with the spinal cord, contains many small nuclei involved in a wide variety of sensory and involuntary motor functions such as vomiting, heart rate and digestive processes. The pons lies in the brainstem directly above the medulla. Among other things, it contains nuclei that control often voluntary but simple acts such as sleep, respiration, swallowing, bladder function, equilibrium, eye movement, facial expressions, and posture. The hypothalamus is a small region at the base of the forebrain, whose complexity and importance belies its size. It is composed of numerous small nuclei, each with distinct connections and neurochemistry. The hypothalamus is engaged in additional involuntary or partially voluntary acts such as sleep and wake cycles, eating and drinking, and the release of some hormones. The thalamus is a collection of nuclei with diverse functions: some are involved in relaying information to and from the cerebral hemispheres, while others are involved in motivation. The subthalamic area (zona incerta) seems to contain action-generating systems for several types of "consummatory" behaviors such as eating, drinking, defecation, and copulation. The cerebellum modulates the outputs of other brain systems, whether motor-related or thought related, to make them certain and precise. Removal of the cerebellum does not prevent an animal from doing anything in particular, but it makes actions hesitant and clumsy. This precision is not built-in but learned by trial and error. The muscle coordination learned while riding a bicycle is an example of a type of neural plasticity that may take place largely within the cerebellum. 10% of the brain's total volume consists of the cerebellum and 50% of all neurons are held within its structure. The optic tectum allows actions to be directed toward points in space, most commonly in response to visual input. In mammals, it is usually referred to as the superior colliculus, and its best-studied function is to direct eye movements. It also directs reaching movements and other object-directed actions. It receives strong visual inputs, but also inputs from other senses that are useful in directing actions, such as auditory input in owls and input from the thermosensitive pit organs in snakes. In some primitive fishes, such as lampreys, this region is the largest part of the brain. The superior colliculus is part of the midbrain. The pallium is a layer of grey matter that lies on the surface of the forebrain and is the most complex and most recent evolutionary development of the brain as an organ. In reptiles and mammals, it is called the cerebral cortex. 
Multiple functions involve the pallium, including smell and spatial memory. In mammals, where it becomes so large as to dominate the brain, it takes over functions from many other brain areas. In many mammals, the cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci. The folds increase the surface area of the cortex and therefore increase the amount of gray matter and the amount of information that can be stored and processed. The hippocampus, strictly speaking, is found only in mammals. However, the area it derives from, the medial pallium, has counterparts in all vertebrates. There is evidence that this part of the brain is involved in complex events such as spatial memory and navigation in fishes, birds, reptiles, and mammals. The basal ganglia are a group of interconnected structures in the forebrain. The primary function of the basal ganglia appears to be action selection: they send inhibitory signals to all parts of the brain that can generate motor behaviors, and in the right circumstances can release the inhibition, so that the action-generating systems are able to execute their actions. Reward and punishment exert their most important neural effects by altering connections within the basal ganglia. The olfactory bulb is a special structure that processes olfactory sensory signals and sends its output to the olfactory part of the pallium. It is a major brain component in many vertebrates, but is greatly reduced in humans and other primates (whose senses are dominated by information acquired by sight rather than smell). ==== Reptiles ==== Modern reptiles and mammals diverged from a common ancestor around 320 million years ago. The number of extant reptiles far exceeds the number of mammalian species, with 11,733 recognized species of reptiles compared to 5,884 extant mammals. Along with the species diversity, reptiles have diverged in terms of external morphology, from limbless to tetrapod gliders to armored chelonians, reflecting adaptive radiation to a diverse array of environments. Morphological differences are reflected in the nervous system phenotype, such as: absence of lateral motor column neurons in snakes, which innervate limb muscles controlling limb movements; absence of motor neurons that innervate trunk muscles in tortoises; presence of innervation from the trigeminal nerve to pit organs responsible to infrared detection in snakes. Variation in size, weight, and shape of the brain can be found within reptiles. For instance, crocodilians have the largest brain volume to body weight proportion, followed by turtles, lizards, and snakes. Reptiles vary in the investment in different brain sections. Crocodilians have the largest telencephalon, while snakes have the smallest. Turtles have the largest diencephalon per body weight whereas crocodilians have the smallest. On the other hand, lizards have the largest mesencephalon. Yet their brains share several characteristics revealed by recent anatomical, molecular, and ontogenetic studies. Vertebrates share the highest levels of similarities during embryological development, controlled by conserved transcription factors and signaling centers, including gene expression, morphological and cell type differentiation. In fact, high levels of transcriptional factors can be found in all areas of the brain in reptiles and mammals, with shared neuronal clusters enlightening brain evolution. 
Conserved transcription factors show that evolution acted on different areas of the brain either by retaining similar morphology and function or by diversifying them. Anatomically, the reptilian brain has fewer subdivisions than the mammalian brain; however, it has numerous conserved aspects, including the organization of the spinal cord and cranial nerves, as well as an elaborated pattern of brain organization. Elaborated brains are characterized by neuronal cell bodies that have migrated away from the periventricular matrix, the region of neuronal development, to form organized nuclear groups. Aside from reptiles and mammals, other vertebrates with elaborated brains include hagfish, galeomorph sharks, skates, rays, teleosts, and birds. Overall, elaborated brains are subdivided into forebrain, midbrain, and hindbrain. The hindbrain coordinates and integrates sensory and motor inputs and outputs responsible for, but not limited to, walking, swimming, or flying. It contains input and output axons interconnecting the spinal cord, midbrain and forebrain, transmitting information from the external and internal environments. The midbrain links sensory, motor, and integrative components received from the hindbrain, connecting it to the forebrain. The tectum, which includes the optic tectum and torus semicircularis, receives auditory, visual, and somatosensory inputs, forming integrated maps of the sensory and visual space around the animal. The tegmentum receives incoming sensory information and forwards motor responses to and from the forebrain. The isthmus connects the hindbrain with the midbrain. The forebrain region is particularly well developed and is further divided into the diencephalon and telencephalon. The diencephalon is involved in regulating eye and body movement in response to visual stimuli, sensory information, circadian rhythms, olfactory input, and the autonomic nervous system. The telencephalon is involved in the control of movements, sensory systems, and cognitive functions, and contains the neurotransmitters and neuromodulators responsible for integrating inputs and transmitting outputs. ==== Birds ==== ==== Mammals ==== The most obvious difference between the brains of mammals and other vertebrates is their size. On average, a mammal has a brain roughly twice as large as that of a bird of the same body size, and ten times as large as that of a reptile of the same body size. Size, however, is not the only difference: there are also substantial differences in shape. The hindbrain and midbrain of mammals are generally similar to those of other vertebrates, but dramatic differences appear in the forebrain, which is greatly enlarged and also altered in structure. The cerebral cortex is the part of the brain that most strongly distinguishes mammals. In non-mammalian vertebrates, the surface of the cerebrum is lined with a comparatively simple three-layered structure called the pallium. In mammals, the pallium evolves into a complex six-layered structure called neocortex or isocortex. Several areas at the edge of the neocortex, including the hippocampus and amygdala, are also much more extensively developed in mammals than in other vertebrates. The elaboration of the cerebral cortex carries with it changes to other brain areas. The superior colliculus, which plays a major role in visual control of behavior in most vertebrates, shrinks to a small size in mammals, and many of its functions are taken over by visual areas of the cerebral cortex.
The cerebellum of mammals contains a large portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates. In placentals, there is a wide nerve tract connecting the cerebral hemispheres called the corpus callosum. ===== Primates ===== The brains of humans and other primates contain the same structures as the brains of other mammals, but are generally larger in proportion to body size. The encephalization quotient (EQ) is used to compare brain sizes across species. It takes into account the nonlinearity of the brain-to-body relationship. Humans have an average EQ in the 7-to-8 range, while most other primates have an EQ in the 2-to-3 range. Dolphins have values higher than those of primates other than humans, but nearly all other mammals have EQ values that are substantially lower. Most of the enlargement of the primate brain comes from a massive expansion of the cerebral cortex, especially the prefrontal cortex and the parts of the cortex involved in vision. The visual processing network of primates includes at least 30 distinguishable brain areas, with a complex web of interconnections. It has been estimated that visual processing areas occupy more than half of the total surface of the primate neocortex. The prefrontal cortex carries out functions that include planning, working memory, motivation, attention, and executive control. It takes up a much larger proportion of the brain for primates than for other species, and an especially large fraction of the human brain. == Development == The brain develops in an intricately orchestrated sequence of stages. It changes in shape from a simple swelling at the front of the nerve cord in the earliest embryonic stages, to a complex array of areas and connections. Neurons are created in special zones that contain stem cells, and then migrate through the tissue to reach their ultimate locations. Once neurons have positioned themselves, their axons sprout and navigate through the brain, branching and extending as they go, until the tips reach their targets and form synaptic connections. In a number of parts of the nervous system, neurons and synapses are produced in excessive numbers during the early stages, and then the unneeded ones are pruned away. For vertebrates, the early stages of neural development are similar across all species. As the embryo transforms from a round blob of cells into a wormlike structure, a narrow strip of ectoderm running along the midline of the back is induced to become the neural plate, the precursor of the nervous system. The neural plate folds inward to form the neural groove, and then the lips that line the groove merge to enclose the neural tube, a hollow cord of cells with a fluid-filled ventricle at the center. At the front end, the ventricles and cord swell to form three vesicles that are the precursors of the prosencephalon (forebrain), mesencephalon (midbrain), and rhombencephalon (hindbrain). At the next stage, the forebrain splits into two vesicles called the telencephalon (which will contain the cerebral cortex, basal ganglia, and related structures) and the diencephalon (which will contain the thalamus and hypothalamus). At about the same time, the hindbrain splits into the metencephalon (which will contain the cerebellum and pons) and the myelencephalon (which will contain the medulla oblongata). 
Each of these areas contains proliferative zones where neurons and glial cells are generated; the resulting cells then migrate, sometimes for long distances, to their final positions. Once a neuron is in place, it extends dendrites and an axon into the area around it. Axons, because they commonly extend a great distance from the cell body and need to reach specific targets, grow in a particularly complex way. The tip of a growing axon consists of a blob of protoplasm called a growth cone, studded with chemical receptors. These receptors sense the local environment, causing the growth cone to be attracted or repelled by various cellular elements, and thus to be pulled in a particular direction at each point along its path. The result of this pathfinding process is that the growth cone navigates through the brain until it reaches its destination area, where other chemical cues cause it to begin generating synapses. Considering the entire brain, thousands of genes create products that influence axonal pathfinding. The synaptic network that finally emerges is only partly determined by genes, though. In many parts of the brain, axons initially "overgrow", and then are "pruned" by mechanisms that depend on neural activity. In the projection from the eye to the midbrain, for example, the structure in the adult contains a very precise mapping, connecting each point on the surface of the retina to a corresponding point in a midbrain layer. In the first stages of development, each axon from the retina is guided to the right general vicinity in the midbrain by chemical cues, but then branches very profusely and makes initial contact with a wide swath of midbrain neurons. The retina, before birth, contains special mechanisms that cause it to generate waves of activity that originate spontaneously at a random point and then propagate slowly across the retinal layer. These waves are useful because they cause neighboring neurons to be active at the same time; that is, they produce a neural activity pattern that contains information about the spatial arrangement of the neurons. This information is exploited in the midbrain by a mechanism that causes synapses to weaken, and eventually vanish, if activity in an axon is not followed by activity of the target cell. The result of this sophisticated process is a gradual tuning and tightening of the map, leaving it finally in its precise adult form. Similar things happen in other brain areas: an initial synaptic matrix is generated as a result of genetically determined chemical guidance, but then gradually refined by activity-dependent mechanisms, partly driven by internal dynamics, partly by external sensory inputs. In some cases, as with the retina-midbrain system, activity patterns depend on mechanisms that operate only in the developing brain, and apparently exist solely to guide development. In humans and many other mammals, new neurons are created mainly before birth, and the infant brain contains substantially more neurons than the adult brain. There are, however, a few areas where new neurons continue to be generated throughout life. The two areas for which adult neurogenesis is well established are the olfactory bulb, which is involved in the sense of smell, and the dentate gyrus of the hippocampus, where there is evidence that the new neurons play a role in storing newly acquired memories. With these exceptions, however, the set of neurons that is present in early childhood is the set that is present for life. 
Glial cells are different: as with most types of cells in the body, they are generated throughout the lifespan. There has long been debate about whether the qualities of mind, personality, and intelligence can be attributed to heredity or to upbringing. Although many details remain to be settled, neuroscience shows that both factors are important. Genes determine both the general form of the brain and how it reacts to experience, but experience is required to refine the matrix of synaptic connections, resulting in greatly increased complexity. The presence or absence of experience is critical at key periods of development. Additionally, the quantity and quality of experience are important. For example, animals raised in enriched environments demonstrate thick cerebral cortices, indicating a high density of synaptic connections, compared to animals with restricted levels of stimulation. == Physiology == The functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses. === Neurotransmitters and receptors === Neurotransmitters are chemicals that are released at synapses when the local membrane is depolarised and Ca2+ enters into the cell, typically when an action potential arrives at the synapse – neurotransmitters attach themselves to receptor molecules on the membrane of the synapse's target cell (or cells), and thereby alter the electrical or chemical properties of the receptor molecules. With few exceptions, each neuron in the brain releases the same chemical neurotransmitter, or combination of neurotransmitters, at all the synaptic connections it makes with other neurons; this rule is known as Dale's principle. Thus, a neuron can be characterized by the neurotransmitters that it releases. The great majority of psychoactive drugs exert their effects by altering specific neurotransmitter systems. This applies to drugs such as cannabinoids, nicotine, heroin, cocaine, alcohol, fluoxetine, chlorpromazine, and many others. The two neurotransmitters that are most widely found in the vertebrate brain are glutamate, which almost always exerts excitatory effects on target neurons, and gamma-aminobutyric acid (GABA), which is almost always inhibitory. Neurons using these transmitters can be found in nearly every part of the brain. Because of their ubiquity, drugs that act on glutamate or GABA tend to have broad and powerful effects. Some general anesthetics act by reducing the effects of glutamate; most tranquilizers exert their sedative effects by enhancing the effects of GABA. There are dozens of other chemical neurotransmitters that are used in more limited areas of the brain, often areas dedicated to a particular function. Serotonin, for example—the primary target of many antidepressant drugs and many dietary aids—comes exclusively from a small brainstem area called the raphe nuclei. Norepinephrine, which is involved in arousal, comes exclusively from a nearby small area called the locus coeruleus. Other neurotransmitters such as acetylcholine and dopamine have multiple sources in the brain but are not as ubiquitously distributed as glutamate and GABA. 
=== Electrical activity === As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity when the animal is actively engaged in a task, called beta and gamma waves. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology. === Metabolism === All vertebrates have a blood–brain barrier that allows metabolism inside the brain to operate differently from metabolism in other parts of the body. The neurovascular unit regulates cerebral blood flow so that activated neurons can be supplied with energy. Glial cells play a major role in brain metabolism by controlling the chemical composition of the fluid that surrounds neurons, including levels of ions and nutrients. Brain tissue consumes a large amount of energy in proportion to its volume, so large brains place severe metabolic demands on animals. The need to limit body weight in order, for example, to fly, has apparently led to selection for a reduction of brain size in some species, such as bats. Most of the brain's energy consumption goes into sustaining the electric charge (membrane potential) of neurons. Most vertebrate species devote between 2% and 8% of basal metabolism to the brain. In primates, however, the percentage is much higher—in humans it rises to 20–25%. The energy consumption of the brain does not vary greatly over time, but active regions of the cerebral cortex consume somewhat more energy than inactive regions; this forms the basis for the functional brain imaging methods of PET, fMRI, and NIRS. The brain typically gets most of its energy from oxygen-dependent metabolism of glucose (i.e., blood sugar), but ketones provide a major alternative source, together with contributions from medium chain fatty acids (caprylic and heptanoic acids), lactate, acetate, and possibly amino acids. == Function == Information from the sense organs is collected in the brain. There it is used to determine what actions the organism is to take. The brain processes the raw data to extract information about the structure of the environment. Next it combines the processed information with information about the current needs of the animal and with memory of past circumstances. Finally, on the basis of the results, it generates motor response patterns. These signal-processing tasks require intricate interplay between a variety of functional subsystems. The function of the brain is to provide coherent control over the actions of an animal. 
A centralized brain allows groups of muscles to be co-activated in complex patterns; it also allows stimuli impinging on one part of the body to evoke responses in other parts, and it can prevent different parts of the body from acting at cross-purposes to each other. === Perception === The human brain is provided with information about light, sound, the chemical composition of the atmosphere, temperature, the position of the body in space (proprioception), the chemical composition of the bloodstream, and more. In other animals additional senses are present, such as the infrared heat-sense of snakes, the magnetic field sense of some birds, or the electric field sense mainly seen in aquatic animals. Each sensory system begins with specialized receptor cells, such as photoreceptor cells in the retina of the eye, or vibration-sensitive hair cells in the cochlea of the ear. The axons of sensory receptor cells travel into the spinal cord or brain, where they transmit their signals to a first-order sensory nucleus dedicated to one specific sensory modality. This primary sensory nucleus sends information to higher-order sensory areas that are dedicated to the same modality. Eventually, via a way-station in the thalamus, the signals are sent to the cerebral cortex, where they are processed to extract the relevant features, and integrated with signals coming from other sensory systems. === Motor control === Motor systems are areas of the brain that are involved in initiating body movements, that is, in activating muscles. Except for the muscles that control the eye, which are driven by nuclei in the midbrain, all the voluntary muscles in the body are directly innervated by motor neurons in the spinal cord and hindbrain. Spinal motor neurons are controlled both by neural circuits intrinsic to the spinal cord, and by inputs that descend from the brain. The intrinsic spinal circuits implement many reflex responses, and contain pattern generators for rhythmic movements such as walking or swimming. The descending connections from the brain allow for more sophisticated control. The brain contains several motor areas that project directly to the spinal cord. At the lowest level are motor areas in the medulla and pons, which control stereotyped movements such as walking, breathing, or swallowing. At a higher level are areas in the midbrain, such as the red nucleus, which is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex, a strip of tissue located at the posterior edge of the frontal lobe. The primary motor cortex sends projections to the subcortical motor areas, but also sends a massive projection directly to the spinal cord, through the pyramidal tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements. Other motor-related brain areas exert secondary effects by projecting to the primary motor areas. Among the most important secondary areas are the premotor cortex, supplementary motor area, basal ganglia, and cerebellum. In addition to all of the above, the brain and spinal cord contain extensive circuitry to control the autonomic nervous system which controls the movement of the smooth muscle of the body. === Sleep === Many animals alternate between sleeping and waking in a daily cycle. Arousal and alertness are also modulated on a finer time scale by a network of brain areas. 
A key component of the sleep system is the suprachiasmatic nucleus (SCN), a tiny part of the hypothalamus located directly above the point at which the optic nerves from the two eyes cross. The SCN contains the body's central biological clock. Neurons there show activity levels that rise and fall with a period of about 24 hours, circadian rhythms: these activity fluctuations are driven by rhythmic changes in expression of a set of "clock genes". The SCN continues to keep time even if it is excised from the brain and placed in a dish of warm nutrient solution, but it ordinarily receives input from the optic nerves, through the retinohypothalamic tract (RHT), that allows daily light-dark cycles to calibrate the clock. The SCN projects to a set of areas in the hypothalamus, brainstem, and midbrain that are involved in implementing sleep-wake cycles. An important component of the system is the reticular formation, a group of neuron-clusters scattered diffusely through the core of the lower brain. Reticular neurons send signals to the thalamus, which in turn sends activity-level-controlling signals to every part of the cortex. Damage to the reticular formation can produce a permanent state of coma. Sleep involves great changes in brain activity. Until the 1950s it was generally believed that the brain essentially shuts off during sleep, but this is now known to be far from true; activity continues, but patterns become very different. There are two types of sleep: REM sleep (with dreaming) and NREM (non-REM, usually without dreaming) sleep, which repeat in slightly varying patterns throughout a sleep episode. Three broad types of distinct brain activity patterns can be measured: REM, light NREM and deep NREM. During deep NREM sleep, also called slow wave sleep, activity in the cortex takes the form of large synchronized waves, whereas in the waking state it is noisy and desynchronized. Levels of the neurotransmitters norepinephrine and serotonin drop during slow wave sleep, and fall almost to zero during REM sleep; levels of acetylcholine show the reverse pattern. === Homeostasis === For any animal, survival requires maintaining a variety of parameters of bodily state within a limited range of variation: these include temperature, water content, salt concentration in the bloodstream, blood glucose levels, blood oxygen level, and others. The ability of an animal to regulate the internal environment of its body—the milieu intérieur, as the pioneering physiologist Claude Bernard called it—is known as homeostasis (Greek for "standing still"). Maintaining homeostasis is a crucial function of the brain. The basic principle that underlies homeostasis is negative feedback: any time a parameter diverges from its set-point, sensors generate an error signal that evokes a response that causes the parameter to shift back toward its optimum value. (This principle is widely used in engineering, for example in the control of temperature using a thermostat.) In vertebrates, the part of the brain that plays the greatest role is the hypothalamus, a small region at the base of the forebrain whose size does not reflect its complexity or the importance of its function. The hypothalamus is a collection of small nuclei, most of which are involved in basic biological functions. Some of these functions relate to arousal or to social interactions such as sexuality, aggression, or maternal behaviors; but many of them relate to homeostasis. 
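The negative-feedback principle described above can be illustrated with a deliberately simple simulation. The sketch below is not a model of any real hypothalamic circuit; the set-point, gain, and disturbance values are arbitrary assumptions chosen only to show how an error signal pushes a regulated variable back toward its set-point.

# Minimal negative-feedback (thermostat-style) sketch; all numbers are illustrative assumptions.
def regulate(setpoint=37.0, gain=0.3, steps=20):
    temperature = 39.0          # start away from the set-point (a hypothetical disturbance)
    history = []
    for _ in range(steps):
        error = setpoint - temperature      # sensor: deviation from the set-point
        temperature += gain * error         # effector response proportional to the error
        history.append(round(temperature, 3))
    return history

print(regulate())   # values move steadily back toward 37.0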
Several hypothalamic nuclei receive input from sensors located in the lining of blood vessels, conveying information about temperature, sodium level, glucose level, blood oxygen level, and other parameters. These hypothalamic nuclei send output signals to motor areas that can generate actions to rectify deficiencies. Some of the outputs also go to the pituitary gland, a tiny gland attached to the brain directly underneath the hypothalamus. The pituitary gland secretes hormones into the bloodstream, where they circulate throughout the body and induce changes in cellular activity. === Motivation === Individual animals need to express survival-promoting behaviors, such as seeking food, water, shelter, and a mate. The motivational system in the brain monitors the current state of satisfaction of these goals, and activates behaviors to meet any needs that arise. The motivational system works largely by a reward–punishment mechanism. When a particular behavior is followed by favorable consequences, the reward mechanism in the brain is activated, which induces structural changes inside the brain that cause the same behavior to be repeated later, whenever a similar situation arises. Conversely, when a behavior is followed by unfavorable consequences, the brain's punishment mechanism is activated, inducing structural changes that cause the behavior to be suppressed when similar situations arise in the future. Most organisms studied to date use a reward–punishment mechanism: for instance, worms and insects can alter their behavior to seek food sources or to avoid dangers. In vertebrates, the reward–punishment system is implemented by a specific set of brain structures, at the heart of which lie the basal ganglia, a set of interconnected areas at the base of the forebrain. The basal ganglia are the central site at which decisions are made: the basal ganglia exert a sustained inhibitory control over most of the motor systems in the brain; when this inhibition is released, a motor system is permitted to execute the action it is programmed to carry out. Rewards and punishments function by altering the relationship between the inputs that the basal ganglia receive and the decision-signals that are emitted. The reward mechanism is better understood than the punishment mechanism, because its role in drug abuse has caused it to be studied very intensively. Research has shown that the neurotransmitter dopamine plays a central role: addictive drugs such as cocaine, amphetamine, and nicotine either cause dopamine levels to rise or cause the effects of dopamine inside the brain to be enhanced. === Learning and memory === Almost all animals are capable of modifying their behavior as a result of experience—even the most primitive types of worms. Because behavior is driven by brain activity, changes in behavior must somehow correspond to changes inside the brain. Already in the late 19th century theorists like Santiago Ramón y Cajal argued that the most plausible explanation is that learning and memory are expressed as changes in the synaptic connections between neurons. Until 1970, however, experimental evidence to support the synaptic plasticity hypothesis was lacking. In 1971 Tim Bliss and Terje Lømo published a paper on a phenomenon now called long-term potentiation: the paper showed clear evidence of activity-induced synaptic changes that lasted for at least several days.
Since then technical advances have made these sorts of experiments much easier to carry out, and thousands of studies have been made that have clarified the mechanism of synaptic change, and uncovered other types of activity-driven synaptic change in a variety of brain areas, including the cerebral cortex, hippocampus, basal ganglia, and cerebellum. Brain-derived neurotrophic factor (BDNF) and physical activity appear to play a beneficial role in the process. Neuroscientists currently distinguish several types of learning and memory that are implemented by the brain in distinct ways: Working memory is the ability of the brain to maintain a temporary representation of information about the task that an animal is currently engaged in. This sort of dynamic memory is thought to be mediated by the formation of cell assemblies—groups of activated neurons that maintain their activity by constantly stimulating one another. Episodic memory is the ability to remember the details of specific events. This sort of memory can last for a lifetime. Much evidence implicates the hippocampus in playing a crucial role: people with severe damage to the hippocampus sometimes show amnesia, that is, inability to form new long-lasting episodic memories. Semantic memory is the ability to learn facts and relationships. This sort of memory is probably stored largely in the cerebral cortex, mediated by changes in connections between cells that represent specific types of information. Instrumental learning is the ability for rewards and punishments to modify behavior. It is implemented by a network of brain areas centered on the basal ganglia. Motor learning is the ability to refine patterns of body movement by practicing, or more generally by repetition. A number of brain areas are involved, including the premotor cortex, basal ganglia, and especially the cerebellum, which functions as a large memory bank for microadjustments of the parameters of movement. == Research == The field of neuroscience encompasses all approaches that seek to understand the brain and the rest of the nervous system. Psychology seeks to understand mind and behavior, and neurology is the medical discipline that diagnoses and treats diseases of the nervous system. The brain is also the most important organ studied in psychiatry, the branch of medicine that works to study, prevent, and treat mental disorders. Cognitive science seeks to unify neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial intelligence and similar fields) and philosophy. The oldest method of studying the brain is anatomical, and until the middle of the 20th century, much of the progress in neuroscience came from the development of better cell stains and better microscopes. Neuroanatomists study the large-scale structure of the brain as well as the microscopic structure of neurons and their components, especially synapses. Among other tools, they employ a plethora of stains that reveal neural structure, chemistry, and connectivity. In recent years, the development of immunostaining techniques has allowed investigation of neurons that express specific sets of genes. Also, functional neuroanatomy uses medical imaging techniques to correlate variations in human brain structure with differences in cognition or behavior. Neurophysiologists study the chemical, pharmacological, and electrical properties of the brain: their primary tools are drugs and recording devices. 
Thousands of experimentally developed drugs affect the nervous system, some in highly specific ways. Recordings of brain activity can be made using electrodes, either glued to the scalp as in EEG studies, or implanted inside the brains of animals for extracellular recordings, which can detect action potentials generated by individual neurons. Because the brain does not contain pain receptors, it is possible using these techniques to record brain activity from animals that are awake and behaving without causing distress. The same techniques have occasionally been used to study brain activity in human patients with intractable epilepsy, in cases where there was a medical necessity to implant electrodes to localize the brain area responsible for epileptic seizures. Functional imaging techniques such as fMRI are also used to study brain activity; these techniques have mainly been used with human subjects, because they require a conscious subject to remain motionless for long periods of time, but they have the great advantage of being noninvasive. Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior. Computational neuroscience encompasses two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation. On one hand, it is possible to write a computer program to simulate the operation of a group of neurons by making use of systems of equations that describe their electrochemical activity; such simulations are known as biologically realistic neural networks. On the other hand, it is possible to study algorithms for neural computation by simulating, or mathematically analyzing, the operations of simplified "units" that have some of the properties of neurons but abstract out much of their biological complexity. The computational functions of the brain are studied both by computer scientists and neuroscientists. Computational neurogenetic modeling is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. Recent years have seen increasing applications of genetic and genomic techniques to the study of the brain and a focus on the roles of neurotrophic factors and physical activity in neuroplasticity. The most common subjects are mice, because of the availability of technical tools. It is now possible with relative ease to "knock out" or mutate a wide variety of genes, and then examine the effects on brain function. More sophisticated approaches are also being used: for example, using Cre-Lox recombination it is possible to activate or deactivate genes in specific parts of the brain, at specific times. 
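As a concrete illustration of the "biologically realistic" neuron simulations mentioned above, here is a minimal sketch of a single leaky integrate-and-fire unit, one of the simplest idealized neuron models; the membrane parameters and input current below are illustrative assumptions, not values from any particular study.

# Leaky integrate-and-fire neuron: a minimal sketch with illustrative (assumed) parameters.
def simulate_lif(i_input=1.6, dt=0.1, t_max=100.0,
                 tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    steps = int(t_max / dt)
    for step in range(steps):
        # dV/dt = (-(V - V_rest) + I) / tau, integrated with a simple Euler step
        v += dt * (-(v - v_rest) + i_input) / tau
        if v >= v_threshold:          # threshold crossing emits a spike
            spike_times.append(step * dt)
            v = v_reset               # membrane potential is reset after the spike
    return spike_times

print(len(simulate_lif()), "spikes in 100 ms of simulated time")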
Recent years have also seen rapid advances in single-cell sequencing technologies, and these have been used to leverage the cellular heterogeneity of the brain as a means of better understanding the roles of distinct cell types in disease and biology (as well as how genomic variants influence individual cell types). In 2024, investigators studied a large integrated dataset of almost 3 million nuclei from the human prefrontal cortex of 388 individuals. In doing so, they annotated 28 cell types to evaluate expression and chromatin variation across gene families and drug targets. They identified about half a million cell type–specific regulatory elements and about 1.5 million single-cell expression quantitative trait loci (i.e., genomic variants with strong statistical associations with changes in gene expression within specific cell types), which were then used to build cell-type regulatory networks (the study also describes cell-to-cell communication networks). These networks were found to manifest cellular changes in aging and neuropsychiatric disorders. As part of the same investigation, a machine learning model was designed to accurately impute single-cell expression (this model prioritized ~250 disease-risk genes and drug targets with associated cell types). === History === The oldest brain to have been discovered was in Armenia in the Areni-1 cave complex. The brain, estimated to be over 5,000 years old, was found in the skull of a 12 to 14-year-old girl. Although the brain was shriveled, it was well preserved due to the climate found inside the cave. Early philosophers were divided as to whether the seat of the soul lies in the brain or heart. Aristotle favored the heart, and thought that the function of the brain was merely to cool the blood. Democritus, the inventor of the atomic theory of matter, argued for a three-part soul, with intellect in the head, emotion in the heart, and lust near the liver. The unknown author of On the Sacred Disease, a medical treatise in the Hippocratic Corpus, came down unequivocally in favor of the brain, writing: Men ought to know that from nothing else but the brain come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations. ... And by the same organ we become mad and delirious, and fears and terrors assail us, some by night, and some by day, and dreams and untimely wanderings, and cares that are not suitable, and ignorance of present circumstances, desuetude, and unskillfulness. All these things we endure from the brain, when it is not healthy... The Roman physician Galen also argued for the importance of the brain, and theorized in some depth about how it might work. Galen traced out the anatomical relationships among brain, nerves, and muscles, demonstrating that all muscles in the body are connected to the brain through a branching network of nerves. He postulated that nerves activate muscles mechanically by carrying a mysterious substance he called pneumata psychikon, usually translated as "animal spirits". Galen's ideas were widely known during the Middle Ages, but not much further progress came until the Renaissance, when detailed anatomical study resumed, combined with the theoretical speculations of René Descartes and those who followed him. Descartes, like Galen, thought of the nervous system in hydraulic terms.
He believed that the highest cognitive functions are carried out by a non-physical res cogitans, but that the majority of behaviors of humans, and all behaviors of animals, could be explained mechanistically. The first real progress toward a modern understanding of nervous function, though, came from the investigations of Luigi Galvani (1737–1798), who discovered that a shock of static electricity applied to an exposed nerve of a dead frog could cause its leg to contract. Since that time, each major advance in understanding has followed more or less directly from the development of a new technique of investigation. Until the early years of the 20th century, the most important advances were derived from new methods for staining cells. Particularly critical was the invention of the Golgi stain, which (when correctly used) stains only a small fraction of neurons, but stains them in their entirety, including cell body, dendrites, and axon. Without such a stain, brain tissue under a microscope appears as an impenetrable tangle of protoplasmic fibers, in which it is impossible to determine any structure. In the hands of Camillo Golgi, and especially of the Spanish neuroanatomist Santiago Ramón y Cajal, the new stain revealed hundreds of distinct types of neurons, each with its own unique dendritic structure and pattern of connectivity. In the first half of the 20th century, advances in electronics enabled investigation of the electrical properties of nerve cells, culminating in work by Alan Hodgkin, Andrew Huxley, and others on the biophysics of the action potential, and the work of Bernard Katz and others on the electrochemistry of the synapse. These studies complemented the anatomical picture with a conception of the brain as a dynamic entity. Reflecting the new understanding, in 1942 Charles Sherrington visualized the workings of the brain waking from sleep: The great topmost sheet of the mass, that where hardly a light had twinkled or moved, becomes now a sparkling field of rhythmic flashing points with trains of traveling sparks hurrying hither and thither. The brain is waking and with it the mind is returning. It is as if the Milky Way entered upon some cosmic dance. Swiftly the head mass becomes an enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns. The invention of electronic computers in the 1940s, along with the development of mathematical information theory, led to a realization that brains can potentially be understood as information processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated the brain as essentially a digital computer in disguise, as for example in John von Neumann's 1958 book, The Computer and the Brain. Over the years, though, accumulating information about the electrical responses of brain cells recorded from behaving animals has steadily moved theoretical concepts in the direction of increasing realism. One of the most influential early contributions was a 1959 paper titled What the frog's eye tells the frog's brain: the paper examined the visual responses of neurons in the retina and optic tectum of frogs, and came to the conclusion that some neurons in the tectum of the frog are wired to combine elementary responses in a way that makes them function as "bug perceivers". 
A few years later David Hubel and Torsten Wiesel discovered cells in the primary visual cortex of monkeys that become active when sharp edges move across specific points in the field of view—a discovery for which they won a Nobel Prize. Follow-up studies in higher-order visual areas found cells that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract types of cognition such as space. Theorists have worked to understand these response patterns by constructing mathematical models of neurons and neural networks, which can be simulated using computers. Some useful models are abstract, focusing on the conceptual structure of neural algorithms rather than the details of how they are implemented in the brain; other models attempt to incorporate data about the biophysical properties of real neurons. No model on any level is yet considered to be a fully valid description of brain function, though. The essential difficulty is that sophisticated computation by neural networks requires distributed processing in which hundreds or thousands of neurons work cooperatively—current methods of brain activity recording are only capable of isolating action potentials from a few dozen neurons at a time. Furthermore, even single neurons appear to be complex and capable of performing computations. So, brain models that do not reflect this are too abstract to be representative of brain operation; models that do try to capture this are very computationally expensive and arguably intractable with present computational resources. However, the Human Brain Project is trying to build a realistic, detailed computational model of the entire human brain. The wisdom of this approach has been publicly contested, with high-profile scientists on both sides of the argument. In the second half of the 20th century, developments in chemistry, electron microscopy, genetics, computer science, functional brain imaging, and other fields progressively opened new windows into brain structure and function. In the United States, the 1990s were officially designated as the "Decade of the Brain" to commemorate advances made in brain research, and to promote funding for such research. In the 21st century, these trends have continued, and several new approaches have come into prominence, including multielectrode recording, which allows the activity of many brain cells to be recorded all at the same time; genetic engineering, which allows molecular components of the brain to be altered experimentally; genomics, which allows variations in brain structure to be correlated with variations in DNA properties and neuroimaging. == Society and culture == === As food === Animal brains are used as food in numerous cuisines. === In rituals === Some archaeological evidence suggests that the mourning rituals of European Neanderthals also involved the consumption of the brain. The Fore people of Papua New Guinea are known to eat human brains. In funerary rituals, those close to the dead would eat the brain of the deceased to create a sense of immortality. A prion disease called kuru has been traced to this. 
== See also == == References == == External links == The Brain from Top to Bottom, at McGill University "The Brain", BBC Radio 4 discussion with Vivian Nutton, Jonathan Sawday & Marina Wallace (In Our Time, May 8, 2008) Our Quest to Understand the Brain – with Matthew Cobb Royal Institution lecture. Archived at Ghostarchive.
Wikipedia/Brain_function
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. == Optimization problems == Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete: An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set. A problem with continuous variables is known as a continuous optimization, in which optimal arguments from a continuous set must be found. They can include constrained problems and multimodal problems. An optimization problem can be represented in the following way: Given: a function f : A → {\displaystyle \mathbb {R} } from some set A to the real numbers. Sought: an element x0 ∈ A such that f(x0) ≤ f(x) for all x ∈ A ("minimization") or such that f(x0) ≥ f(x) for all x ∈ A ("maximization"). Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework. Since the following is valid: {\displaystyle f(\mathbf {x} _{0})\geq f(\mathbf {x} )\Leftrightarrow -f(\mathbf {x} _{0})\leq -f(\mathbf {x} ),} it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it is always necessary to continuously evaluate the quality of a data model by using a cost function where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error. Typically, A is some subset of the Euclidean space {\displaystyle \mathbb {R} ^{n}}, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions. The function f is variously called an objective function, criterion function, loss function, cost function (minimization), utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization.
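As a minimal illustration of the minimization convention just described, the sketch below converts a toy maximization problem into a minimization by negating the objective and then scans a small grid of candidate inputs; the objective, grid, and step size are arbitrary assumptions chosen only for illustration.

# Maximizing f is equivalent to minimizing -f: a toy grid-search sketch (all values illustrative).
def f(x):
    return -(x - 2.0) ** 2 + 3.0        # a concave toy objective with its maximum at x = 2

candidates = [i * 0.01 for i in range(-500, 501)]   # a coarse search grid on [-5, 5]
best_x = min(candidates, key=lambda x: -f(x))       # minimize the negated objective
print(best_x, f(best_x))                             # approximately x = 2, f(x) = 3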
A local minimum x* is defined as an element for which there exists some δ > 0 such that {\displaystyle \forall \mathbf {x} \in A\;{\text{where}}\;\left\Vert \mathbf {x} -\mathbf {x} ^{\ast }\right\Vert \leq \delta ,\,} the expression f(x*) ≤ f(x) holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly. While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima. A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem. == Notation == Optimization problems are often expressed with special notation. Here are some examples: === Minimum and maximum value of a function === Consider the following notation: {\displaystyle \min _{x\in \mathbb {R} }\;\left(x^{2}+1\right)} This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers {\displaystyle \mathbb {R} }. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation {\displaystyle \max _{x\in \mathbb {R} }\;2x} asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined". === Optimal input arguments === Consider the following notation: {\displaystyle {\underset {x\in (-\infty ,-1]}{\operatorname {arg\,min} }}\;x^{2}+1,} or equivalently {\displaystyle {\underset {x}{\operatorname {arg\,min} }}\;x^{2}+1,\;{\text{subject to:}}\;x\in (-\infty ,-1].} This represents the value (or values) of the argument x in the interval (−∞,−1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, that is, it does not belong to the feasible set.
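Before turning to the two-variable maximization example that follows, a quick numerical check of this constrained argmin example can be sketched as below; the sampling range and resolution are arbitrary assumptions, and the code simply confirms that the feasible minimizer of x² + 1 on (−∞, −1] sits at the boundary point x = −1.

# Numerical check of arg min of x**2 + 1 over the feasible set (-inf, -1]; sample bounds assumed.
feasible = [-1.0 - i * 0.001 for i in range(5000)]   # samples of the feasible set on [-6, -1]
x_star = min(feasible, key=lambda x: x ** 2 + 1)
print(x_star)   # -1.0: the unconstrained minimizer x = 0 is infeasible, so the optimum lies on the boundary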
Similarly, {\displaystyle {\underset {x\in [-5,5],\;y\in \mathbb {R} }{\operatorname {arg\,max} }}\;x\cos y,} or equivalently {\displaystyle {\underset {x,\;y}{\operatorname {arg\,max} }}\;x\cos y,\;{\text{subject to:}}\;x\in [-5,5],\;y\in \mathbb {R} ,} represents the {x, y} pair (or pairs) that maximizes (or maximize) the value of the objective function x cos y, with the added constraint that x lie in the interval [−5,5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form {5, 2kπ} and {−5, (2k + 1)π}, where k ranges over all integers. Operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum. == History == Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and John von Neumann and other researchers worked on the theoretical aspects of linear programming (such as the theory of duality) around the same time. Other notable researchers in mathematical optimization include the following: == Major subfields == Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming or as a generalization of linear or convex quadratic programming. Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron or a polytope if it is bounded. Second-order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs. Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming. Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone. Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program. Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming. Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming. Fractional programming studies optimization of ratios of two nonlinear functions.
The special class of concave fractional programs can be transformed into a convex optimization problem. Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it. Stochastic programming studies the case in which some of the constraints or parameters depend on random variables. Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set. Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one. Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process. Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions. Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that an optimal solution will be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems. Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning). Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints. Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling. Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model. In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time): Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area. Optimal control theory is a generalization of the calculus of variations which introduces control policies. Dynamic programming is an approach to solving stochastic optimization problems with randomness and unknown model parameters. It studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation. Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities. === Multi-objective optimization === Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set.
The curve created by plotting weight against stiffness of the best designs is known as the Pareto frontier. A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: if it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering. === Multi-modal or global optimization === Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. Classical optimization techniques, because of their iterative approach, do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization, and simulated annealing. == Classification of critical points and extrema == === Feasibility problem === The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative. === Existence === The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum. === Necessary conditions for optimality === One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by the Lagrange multiplier method.
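The Lagrange multiplier method just mentioned can be shown on a small worked example; the following sketch (which assumes the sympy library is available) finds the minimizer of x² + y² subject to x + y = 1 by solving the first-order conditions of the Lagrangian, giving x = y = 1/2.

# Lagrange multipliers for: minimize x**2 + y**2 subject to x + y = 1 (sympy assumed available).
from sympy import symbols, diff, solve

x, y, lam = symbols("x y lam", real=True)
objective = x**2 + y**2
constraint = x + y - 1
lagrangian = objective - lam * constraint

# Stationarity of the Lagrangian plus the constraint give the first-order conditions.
conditions = [diff(lagrangian, v) for v in (x, y, lam)]
print(solve(conditions, (x, y, lam)))   # {x: 1/2, y: 1/2, lam: 1}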
The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'. === Sufficient conditions for optimality === While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. === Sensitivity and continuity of optima === The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics. The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters. === Calculus of optimization === For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, such as those arising in the loss-function minimization of neural networks. Positive-negative momentum estimation has been proposed as a way to help the iterates escape local minima and converge toward the global minimum of the objective function. Further, critical points can be classified using the definiteness of the Hessian matrix: if the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point. Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. When the objective function is a convex function, any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods. === Global convergence === More generally, if the objective function is not a quadratic function, then many optimization methods use additional strategies to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points.
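A minimal sketch of the multistart idea mentioned at the end of the previous paragraph is given below; it assumes numpy and scipy are available and runs a BFGS local optimizer from several random starting points on a simple multimodal test function, keeping the best result found. The test function, the number of starts, and the search interval are illustrative assumptions.

# Multistart: run a local optimizer (BFGS) from several starting points and keep the best result.
# numpy and scipy are assumed to be available; the test function and settings are illustrative.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # A simple one-dimensional multimodal function with several local minima.
    return np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(-5.0, 5.0, size=1)              # random starting point
    result = minimize(objective, x0, method="BFGS")  # local optimizer
    if best is None or result.fun < best.fun:
        best = result

print(best.x, best.fun)   # the lowest local minimum found across all starts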
== Computational optimization techniques == To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge). === Optimization algorithms === Simplex algorithm of George Dantzig, designed for linear programming; extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming; variants of the simplex algorithm that are especially suited for network optimization; combinatorial algorithms; quantum optimization algorithms. === Iterative methods === The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high. One major criterion for optimizers is the number of required function evaluations, as these often dominate the computational effort, usually far exceeding the work done within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate; e.g., approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration, the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself. Methods that evaluate Hessians (or approximate Hessians, using finite differences): Newton's method. Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems. Some versions can handle large-dimensional problems. Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians. Methods that evaluate gradients, or approximate gradients in some way (or even subgradients): Coordinate descent methods: Algorithms which update a single coordinate in each iteration. Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite–precision computers.) Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems. Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient–projection methods are similar to conjugate–gradient methods.
Bundle method of descent: An iterative method for small–medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods). Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods. Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems). Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000). Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation. Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used. Interpolation methods Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below. Mirror descent === Heuristics === Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics: == Applications == === Mechanics === Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since you can view rigid body dynamics as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be done by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem. Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is the engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems. This approach may be applied in cosmology and astrophysics. === Economics and finance === Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63. In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. 
Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics. Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments. === Electrical engineering === Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas, electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993. Optimization techniques are also used in power-flow analysis. === Civil engineering === Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management and schedule optimization. === Operations research === Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods. === Control engineering === Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled. === Geophysics === Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used. === Molecular modeling === Nonlinear optimization methods are widely used in conformational analysis. 
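As a minimal sketch of the gradient-based iterative methods described in the section above, the following code minimizes an invented two-angle "conformational energy" surface with plain gradient descent and a finite-difference gradient (N+1 function evaluations per step). The energy function, step size, and iteration count are illustrative assumptions, not a real force field or a production conformational-analysis method.

```python
import numpy as np

def energy(angles):
    """Invented periodic energy surface; stands in for a real force field."""
    phi, psi = angles
    return np.cos(3 * phi) + np.cos(2 * psi) + 0.5 * np.cos(phi + psi)

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: one extra function evaluation per variable."""
    f0 = f(x)
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

x = np.array([0.3, 1.2])   # initial torsion angles (radians)
step = 0.05                # fixed step size, kept small for stability
for _ in range(500):
    x -= step * fd_gradient(energy, x)

print("local minimum near angles:", x, "energy:", energy(x))
```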
=== Computational systems biology ===
Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways.
=== Machine learning ===
== Solvers ==
== See also ==
== Notes ==
== Further reading ==
Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. ISBN 0-521-83378-7.
Gill, P. E.; Murray, W.; Wright, M. H. (1982). Practical Optimization. London: Academic Press. ISBN 0-12-283952-8.
Lee, Jon (2004). A First Course in Combinatorial Optimization. Cambridge University Press. ISBN 0-521-01012-8.
Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin: Springer. ISBN 0-387-30303-0.
G. L. Nemhauser, A. H. G. Rinnooy Kan and M. J. Todd (eds.): Optimization, Elsevier, (1989).
Stanislav Walukiewicz: Integer Programming, Springer, ISBN 978-9048140688, (1990).
R. Fletcher: Practical Methods of Optimization, 2nd Ed., Wiley, (2000).
Panos M. Pardalos: Approximation and Complexity in Numerical Optimization: Continuous and Discrete Problems, Springer, ISBN 978-1-44194829-8, (2000).
Xiaoqi Yang, K. L. Teo, Lou Caccetta (eds.): Optimization Methods and Applications, Springer, ISBN 978-0-79236866-3, (2001).
Panos M. Pardalos and Mauricio G. C. Resende (eds.): Handbook of Applied Optimization, Oxford Univ Pr on Demand, ISBN 978-0-19512594-8, (2002).
Wil Michiels, Emile Aarts, and Jan Korst: Theoretical Aspects of Local Search, Springer, ISBN 978-3-64207148-5, (2006).
Der-San Chen, Robert G. Batson, and Yu Dang: Applied Integer Programming: Modeling and Solution, Wiley, ISBN 978-0-47037306-4, (2010).
Mykel J. Kochenderfer and Tim A. Wheeler: Algorithms for Optimization, The MIT Press, ISBN 978-0-26203942-0, (2019).
Vladislav Bukshtynov: Optimization: Success in Practice, CRC Press (Taylor & Francis), ISBN 978-1-03222947-8, (2023).
Rosario Toscano: Solving Optimization Problems with the Heuristic Kalman Algorithm: New Stochastic Methods, Springer, ISBN 978-3-031-52458-5, (2024).
Immanuel M. Bomze, Tibor Csendes, Reiner Horst and Panos M. Pardalos: Developments in Global Optimization, Kluwer Academic, ISBN 978-1-4419-4768-0, (2010).
== External links ==
"Decision Tree for Optimization Software". Links to optimization source codes
"Global optimization".
"EE364a: Convex Optimization I". Course from Stanford University.
Varoquaux, Gaël. "Mathematical Optimization: Finding Minima of Functions".
Wikipedia/Energy_function
The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge about a system is the one with largest entropy, in the context of precisely stated prior data (such as a proposition that expresses testable information). Another way of stating this: Take precisely stated prior data or testable information about a probability distribution function. Consider the set of all trial probability distributions that would encode the prior data. According to this principle, the distribution with maximal information entropy is the best choice. == History == The principle was first expounded by E. T. Jaynes in two papers in 1957, where he emphasized a natural correspondence between statistical mechanics and information theory. In particular, Jaynes argued that the Gibbsian method of statistical mechanics is sound by also arguing that the entropy of statistical mechanics and the information entropy of information theory are the same concept. Consequently, statistical mechanics should be considered a particular application of a general tool of logical inference and information theory. == Overview == In most practical cases, the stated prior data or testable information is given by a set of conserved quantities (average values of some moment functions), associated with the probability distribution in question. This is the way the maximum entropy principle is most often used in statistical thermodynamics. Another possibility is to prescribe some symmetries of the probability distribution. The equivalence between conserved quantities and corresponding symmetry groups implies a similar equivalence for these two ways of specifying the testable information in the maximum entropy method. The maximum entropy principle is also needed to guarantee the uniqueness and consistency of probability assignments obtained by different methods, statistical mechanics and logical inference in particular. The maximum entropy principle makes explicit our freedom in using different forms of prior data. As a special case, a uniform prior probability density (Laplace's principle of indifference, sometimes called the principle of insufficient reason), may be adopted. Thus, the maximum entropy principle is not merely an alternative way to view the usual methods of inference of classical statistics, but represents a significant conceptual generalization of those methods. However these statements do not imply that thermodynamical systems need not be shown to be ergodic to justify treatment as a statistical ensemble. In ordinary language, the principle of maximum entropy can be said to express a claim of epistemic modesty, or of maximum ignorance. The selected distribution is the one that makes the least claim to being informed beyond the stated prior data, that is to say the one that admits the most ignorance beyond the stated prior data. == Testable information == The principle of maximum entropy is useful explicitly only when applied to testable information. Testable information is a statement about a probability distribution whose truth or falsity is well-defined. For example, the statements the expectation of the variable x {\displaystyle x} is 2.87 and p 2 + p 3 > 0.6 {\displaystyle p_{2}+p_{3}>0.6} (where p 2 {\displaystyle p_{2}} and p 3 {\displaystyle p_{3}} are probabilities of events) are statements of testable information. 
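To make the first example concrete, the following sketch finds the distribution over a six-sided die that maximizes entropy subject to the stated expectation of 2.87. The six-outcome support and the bisection solver are illustrative choices; the exponential (Gibbs) form of the solution used here is derived in the general-solution section below.

```python
import numpy as np

# Maximum entropy over a die constrained to have mean 2.87.
# The solution has the form p_i proportional to exp(lambda * x_i);
# lambda is found by bisection, since the implied mean is increasing in lambda.
x = np.arange(1, 7, dtype=float)
target_mean = 2.87

def mean_for(lam):
    w = np.exp(lam * x)
    p = w / w.sum()
    return p @ x

lo, hi = -10.0, 10.0          # bracketing interval for the Lagrange multiplier
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid

lam = 0.5 * (lo + hi)
p = np.exp(lam * x); p /= p.sum()
print("lambda ~", round(lam, 4))
print("max-ent probabilities:", np.round(p, 4), "mean:", round(p @ x, 4))
```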
Given testable information, the maximum entropy procedure consists of seeking the probability distribution which maximizes information entropy, subject to the constraints of the information. This constrained optimization problem is typically solved using the method of Lagrange multipliers. Entropy maximization with no testable information respects the universal "constraint" that the sum of the probabilities is one. Under this constraint, the maximum entropy discrete probability distribution is the uniform distribution, p i = 1 n f o r a l l i ∈ { 1 , … , n } . {\displaystyle p_{i}={\frac {1}{n}}\ {\rm {for\ all}}\ i\in \{\,1,\dots ,n\,\}.} == Applications == The principle of maximum entropy is commonly applied in two ways to inferential problems: === Prior probabilities === The principle of maximum entropy is often used to obtain prior probability distributions for Bayesian inference. Jaynes was a strong advocate of this approach, claiming the maximum entropy distribution represented the least informative distribution. A large amount of literature is now dedicated to the elicitation of maximum entropy priors and links with channel coding. === Posterior probabilities === Maximum entropy is a sufficient updating rule for radical probabilism. Richard Jeffrey's probability kinematics is a special case of maximum entropy inference. However, maximum entropy is not a generalisation of all such sufficient updating rules. === Maximum entropy models === Alternatively, the principle is often invoked for model specification: in this case the observed data itself is assumed to be the testable information. Such models are widely used in natural language processing. An example of such a model is logistic regression, which corresponds to the maximum entropy classifier for independent observations. === Probability density estimation === One of the main applications of the maximum entropy principle is in discrete and continuous density estimation. Similar to support vector machine estimators, the maximum entropy principle may require the solution to a quadratic programming problem, and thus provide a sparse mixture model as the optimal density estimator. One important advantage of the method is its ability to incorporate prior information in the density estimation. == General solution for the maximum entropy distribution with linear constraints == === Discrete case === We have some testable information I about a quantity x taking values in {x1, x2,..., xn}. We assume this information has the form of m constraints on the expectations of the functions fk; that is, we require our probability distribution to satisfy the moment inequality/equality constraints: ∑ i = 1 n Pr ( x i ) f k ( x i ) ≥ F k k = 1 , … , m . {\displaystyle \sum _{i=1}^{n}\Pr(x_{i})f_{k}(x_{i})\geq F_{k}\qquad k=1,\ldots ,m.} where the F k {\displaystyle F_{k}} are observables. We also require the probability density to sum to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1 giving the constraint ∑ i = 1 n Pr ( x i ) = 1. 
{\displaystyle \sum _{i=1}^{n}\Pr(x_{i})=1.} The probability distribution with maximum information entropy subject to these inequality/equality constraints is of the form: Pr ( x i ) = 1 Z ( λ 1 , … , λ m ) exp ⁡ [ λ 1 f 1 ( x i ) + ⋯ + λ m f m ( x i ) ] , {\displaystyle \Pr(x_{i})={\frac {1}{Z(\lambda _{1},\ldots ,\lambda _{m})}}\exp \left[\lambda _{1}f_{1}(x_{i})+\cdots +\lambda _{m}f_{m}(x_{i})\right],} for some λ 1 , … , λ m {\displaystyle \lambda _{1},\ldots ,\lambda _{m}} . It is sometimes called the Gibbs distribution. The normalization constant is determined by: Z ( λ 1 , … , λ m ) = ∑ i = 1 n exp ⁡ [ λ 1 f 1 ( x i ) + ⋯ + λ m f m ( x i ) ] , {\displaystyle Z(\lambda _{1},\ldots ,\lambda _{m})=\sum _{i=1}^{n}\exp \left[\lambda _{1}f_{1}(x_{i})+\cdots +\lambda _{m}f_{m}(x_{i})\right],} and is conventionally called the partition function. (The Pitman–Koopman theorem states that the necessary and sufficient condition for a sampling distribution to admit sufficient statistics of bounded dimension is that it have the general form of a maximum entropy distribution.) The λk parameters are Lagrange multipliers. In the case of equality constraints their values are determined from the solution of the nonlinear equations F k = ∂ ∂ λ k log ⁡ Z ( λ 1 , … , λ m ) . {\displaystyle F_{k}={\frac {\partial }{\partial \lambda _{k}}}\log Z(\lambda _{1},\ldots ,\lambda _{m}).} In the case of inequality constraints, the Lagrange multipliers are determined from the solution of a convex optimization program with linear constraints. In both cases, there is no closed form solution, and the computation of the Lagrange multipliers usually requires numerical methods. === Continuous case === For continuous distributions, the Shannon entropy cannot be used, as it is only defined for discrete probability spaces. Instead Edwin Jaynes (1963, 1968, 2003) gave the following formula, which is closely related to the relative entropy (see also differential entropy). H c = − ∫ p ( x ) log ⁡ p ( x ) q ( x ) d x {\displaystyle H_{c}=-\int p(x)\log {\frac {p(x)}{q(x)}}\,dx} where q(x), which Jaynes called the "invariant measure", is proportional to the limiting density of discrete points. For now, we shall assume that q is known; we will discuss it further after the solution equations are given. A closely related quantity, the relative entropy, is usually defined as the Kullback–Leibler divergence of p from q (although it is sometimes, confusingly, defined as the negative of this). The inference principle of minimizing this, due to Kullback, is known as the Principle of Minimum Discrimination Information. We have some testable information I about a quantity x which takes values in some interval of the real numbers (all integrals below are over this interval). We assume this information has the form of m constraints on the expectations of the functions fk, i.e. we require our probability density function to satisfy the inequality (or purely equality) moment constraints: ∫ p ( x ) f k ( x ) d x ≥ F k k = 1 , … , m . {\displaystyle \int p(x)f_{k}(x)\,dx\geq F_{k}\qquad k=1,\dotsc ,m.} where the F k {\displaystyle F_{k}} are observables. We also require the probability density to integrate to one, which may be viewed as a primitive constraint on the identity function and an observable equal to 1 giving the constraint ∫ p ( x ) d x = 1. 
{\displaystyle \int p(x)\,dx=1.} The probability density function with maximum Hc subject to these constraints is: p ( x ) = 1 Z ( λ 1 , … , λ m ) q ( x ) exp ⁡ [ λ 1 f 1 ( x ) + ⋯ + λ m f m ( x ) ] {\displaystyle p(x)={\frac {1}{Z(\lambda _{1},\dotsc ,\lambda _{m})}}q(x)\exp \left[\lambda _{1}f_{1}(x)+\dotsb +\lambda _{m}f_{m}(x)\right]} with the partition function determined by Z ( λ 1 , … , λ m ) = ∫ q ( x ) exp ⁡ [ λ 1 f 1 ( x ) + ⋯ + λ m f m ( x ) ] d x . {\displaystyle Z(\lambda _{1},\dotsc ,\lambda _{m})=\int q(x)\exp \left[\lambda _{1}f_{1}(x)+\dotsb +\lambda _{m}f_{m}(x)\right]\,dx.} As in the discrete case, in the case where all moment constraints are equalities, the values of the λ k {\displaystyle \lambda _{k}} parameters are determined by the system of nonlinear equations: F k = ∂ ∂ λ k log ⁡ Z ( λ 1 , … , λ m ) . {\displaystyle F_{k}={\frac {\partial }{\partial \lambda _{k}}}\log Z(\lambda _{1},\dotsc ,\lambda _{m}).} In the case with inequality moment constraints the Lagrange multipliers are determined from the solution of a convex optimization program. The invariant measure function q(x) can be best understood by supposing that x is known to take values only in the bounded interval (a, b), and that no other information is given. Then the maximum entropy probability density function is p ( x ) = A ⋅ q ( x ) , a < x < b {\displaystyle p(x)=A\cdot q(x),\qquad a<x<b} where A is a normalization constant. The invariant measure function is actually the prior density function encoding 'lack of relevant information'. It cannot be determined by the principle of maximum entropy, and must be determined by some other logical method, such as the principle of transformation groups or marginalization theory. === Examples === For several examples of maximum entropy distributions, see the article on maximum entropy probability distributions. == Justifications for the principle of maximum entropy == Proponents of the principle of maximum entropy justify its use in assigning probabilities in several ways, including the following two arguments. These arguments take the use of Bayesian probability as given, and are thus subject to the same postulates. === Information entropy as a measure of 'uninformativeness' === Consider a discrete probability distribution among m {\displaystyle m} mutually exclusive propositions. The most informative distribution would occur when one of the propositions was known to be true. In that case, the information entropy would be equal to zero. The least informative distribution would occur when there is no reason to favor any one of the propositions over the others. In that case, the only reasonable probability distribution would be uniform, and then the information entropy would be equal to its maximum possible value, log ⁡ m {\displaystyle \log m} . The information entropy can therefore be seen as a numerical measure which describes how uninformative a particular probability distribution is, ranging from zero (completely informative) to log ⁡ m {\displaystyle \log m} (completely uninformative). By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess. Thus the maximum entropy distribution is the only reasonable distribution. 
The dependence of the solution on the dominating measure represented by m ( x ) {\displaystyle m(x)} is however a source of criticisms of the approach since this dominating measure is in fact arbitrary. === The Wallis derivation === The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell–Boltzmann statistics in statistical mechanics, although the conceptual emphasis is quite different. It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept. The information entropy function is not assumed a priori, but rather is found in the course of the argument; and the argument leads naturally to the procedure of maximizing the information entropy, rather than treating it in some other way. Suppose an individual wishes to make a probability assignment among m {\displaystyle m} mutually exclusive propositions. They have some testable information, but are not sure how to go about including this information in their probability assessment. They therefore conceive of the following random experiment. They will distribute N {\displaystyle N} quanta of probability (each worth 1 / N {\displaystyle 1/N} ) at random among the m {\displaystyle m} possibilities. (One might imagine that they will throw N {\displaystyle N} balls into m {\displaystyle m} buckets while blindfolded. In order to be as fair as possible, each throw is to be independent of any other, and every bucket is to be the same size.) Once the experiment is done, they will check if the probability assignment thus obtained is consistent with their information. (For this step to be successful, the information must be a constraint given by an open set in the space of probability measures). If it is inconsistent, they will reject it and try again. If it is consistent, their assessment will be p i = n i N {\displaystyle p_{i}={\frac {n_{i}}{N}}} where p i {\displaystyle p_{i}} is the probability of the i {\displaystyle i} th proposition, while ni is the number of quanta that were assigned to the i {\displaystyle i} th proposition (i.e. the number of balls that ended up in bucket i {\displaystyle i} ). Now, in order to reduce the 'graininess' of the probability assignment, it will be necessary to use quite a large number of quanta of probability. Rather than actually carry out, and possibly have to repeat, the rather long random experiment, the protagonist decides to simply calculate and use the most probable result. The probability of any particular result is the multinomial distribution, P r ( p ) = W ⋅ m − N {\displaystyle Pr(\mathbf {p} )=W\cdot m^{-N}} where W = N ! n 1 ! n 2 ! ⋯ n m ! {\displaystyle W={\frac {N!}{n_{1}!\,n_{2}!\,\dotsb \,n_{m}!}}} is sometimes known as the multiplicity of the outcome. The most probable result is the one which maximizes the multiplicity W {\displaystyle W} . Rather than maximizing W {\displaystyle W} directly, the protagonist could equivalently maximize any monotonic increasing function of W {\displaystyle W} . They decide to maximize 1 N log ⁡ W = 1 N log ⁡ N ! n 1 ! n 2 ! ⋯ n m ! = 1 N log ⁡ N ! ( N p 1 ) ! ( N p 2 ) ! ⋯ ( N p m ) ! = 1 N ( log ⁡ N ! − ∑ i = 1 m log ⁡ ( ( N p i ) ! ) ) . 
{\displaystyle {\begin{aligned}{\frac {1}{N}}\log W&={\frac {1}{N}}\log {\frac {N!}{n_{1}!\,n_{2}!\,\dotsb \,n_{m}!}}\\[6pt]&={\frac {1}{N}}\log {\frac {N!}{(Np_{1})!\,(Np_{2})!\,\dotsb \,(Np_{m})!}}\\[6pt]&={\frac {1}{N}}\left(\log N!-\sum _{i=1}^{m}\log((Np_{i})!)\right).\end{aligned}}} At this point, in order to simplify the expression, the protagonist takes the limit as N → ∞ {\displaystyle N\to \infty } , i.e. as the probability levels go from grainy discrete values to smooth continuous values. Using Stirling's approximation, they find lim N → ∞ ( 1 N log ⁡ W ) = 1 N ( N log ⁡ N − ∑ i = 1 m N p i log ⁡ ( N p i ) ) = log ⁡ N − ∑ i = 1 m p i log ⁡ ( N p i ) = log ⁡ N − log ⁡ N ∑ i = 1 m p i − ∑ i = 1 m p i log ⁡ p i = ( 1 − ∑ i = 1 m p i ) log ⁡ N − ∑ i = 1 m p i log ⁡ p i = − ∑ i = 1 m p i log ⁡ p i = H ( p ) . {\displaystyle {\begin{aligned}\lim _{N\to \infty }\left({\frac {1}{N}}\log W\right)&={\frac {1}{N}}\left(N\log N-\sum _{i=1}^{m}Np_{i}\log(Np_{i})\right)\\[6pt]&=\log N-\sum _{i=1}^{m}p_{i}\log(Np_{i})\\[6pt]&=\log N-\log N\sum _{i=1}^{m}p_{i}-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=\left(1-\sum _{i=1}^{m}p_{i}\right)\log N-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=-\sum _{i=1}^{m}p_{i}\log p_{i}\\[6pt]&=H(\mathbf {p} ).\end{aligned}}} All that remains for the protagonist to do is to maximize entropy under the constraints of their testable information. They have found that the maximum entropy distribution is the most probable of all "fair" random distributions, in the limit as the probability levels go from discrete to continuous. === Compatibility with Bayes' theorem === Giffin and Caticha (2007) state that Bayes' theorem and the principle of maximum entropy are completely compatible and can be seen as special cases of the "method of maximum relative entropy". They state that this method reproduces every aspect of orthodox Bayesian inference methods. In addition this new method opens the door to tackling problems that could not be addressed by either the maximal entropy principle or orthodox Bayesian methods individually. Moreover, recent contributions (Lazar 2003, and Schennach 2005) show that frequentist relative-entropy-based inference approaches (such as empirical likelihood and exponentially tilted empirical likelihood – see e.g. Owen 2001 and Kitamura 2006) can be combined with prior information to perform Bayesian posterior analysis. Jaynes stated Bayes' theorem was a way to calculate a probability, while maximum entropy was a way to assign a prior probability distribution. It is however, possible in concept to solve for a posterior distribution directly from a stated prior distribution using the principle of minimum cross-entropy (or the Principle of Maximum Entropy being a special case of using a uniform distribution as the given prior), independently of any Bayesian considerations by treating the problem formally as a constrained optimisation problem, the Entropy functional being the objective function. For the case of given average values as testable information (averaged over the sought after probability distribution), the sought after distribution is formally the Gibbs (or Boltzmann) distribution the parameters of which must be solved for in order to achieve minimum cross entropy and satisfy the given testable information. == Relevance to physics == The principle of maximum entropy bears a relation to a key assumption of kinetic theory of gases known as molecular chaos or Stosszahlansatz. 
This asserts that the distribution function characterizing particles entering a collision can be factorized. Though this statement can be understood as a strictly physical hypothesis, it can also be interpreted as a heuristic hypothesis regarding the most probable configuration of particles before colliding. == See also == == Notes == == References == Bajkova, A. T. (1992). "The generalization of maximum entropy method for reconstruction of complex functions". Astronomical and Astrophysical Transactions. 1 (4): 313–320. Bibcode:1992A&AT....1..313B. doi:10.1080/10556799208230532. Fornalski, K.W.; Parzych, G.; Pylak, M.; Satuła, D.; Dobrzyński, L. (2010). "Application of Bayesian reasoning and the Maximum Entropy Method to some reconstruction problems" (PDF). Acta Physica Polonica A. 117 (6): 892–899. Bibcode:2010AcPPA.117..892F. doi:10.12693/APhysPolA.117.892. Giffin, A. and Caticha, A., 2007, Updating Probabilities with Data and Moments Guiasu, S.; Shenitzer, A. (1985). "The principle of maximum entropy". The Mathematical Intelligencer. 7 (1): 42–48. doi:10.1007/bf03023004. S2CID 53059968. Harremoës, P.; Topsøe (2001). "Maximum entropy fundamentals". Entropy. 3 (3): 191–226. Bibcode:2001Entrp...3..191H. doi:10.3390/e3030191. Jaynes, E. T. (1963). "Information Theory and Statistical Mechanics". In Ford, K. (ed.). Statistical Physics. New York: Benjamin. p. 181. Jaynes, E. T., 1986 (new version online 1996), "Monkeys, kangaroos and N", in Maximum-Entropy and Bayesian Methods in Applied Statistics, J. H. Justice (ed.), Cambridge University Press, Cambridge, p. 26. Kapur, J. N.; and Kesavan, H. K., 1992, Entropy Optimization Principles with Applications, Boston: Academic Press. ISBN 0-12-397670-7 Kitamura, Y., 2006, Empirical Likelihood Methods in Econometrics: Theory and Practice, Cowles Foundation Discussion Papers 1569, Cowles Foundation, Yale University. Lazar, N (2003). "Bayesian empirical likelihood". Biometrika. 90 (2): 319–326. doi:10.1093/biomet/90.2.319. Owen, A. B., 2001, Empirical Likelihood, Chapman and Hall/CRC. ISBN 1-58-488071-6. Schennach, S. M. (2005). "Bayesian exponentially tilted empirical likelihood". Biometrika. 92 (1): 31–46. doi:10.1093/biomet/92.1.31. Uffink, Jos (1995). "Can the Maximum Entropy Principle be explained as a consistency requirement?" (PDF). Studies in History and Philosophy of Modern Physics. 26B (3): 223–261. Bibcode:1995SHPMP..26..223U. CiteSeerX 10.1.1.27.6392. doi:10.1016/1355-2198(95)00015-1. hdl:1874/2649. Archived from the original (PDF) on 2006-06-03. == Further reading == Boyd, Stephen; Lieven Vandenberghe (2004). Convex Optimization (PDF). Cambridge University Press. p. 362. ISBN 0-521-83378-7. Retrieved 2008-08-24. Ratnaparkhi A. (1997) "A simple introduction to maximum entropy models for natural language processing" Technical Report 97-08, Institute for Research in Cognitive Science, University of Pennsylvania. An easy-to-read introduction to maximum entropy methods in the context of natural language processing. Tang, A.; Jackson, D.; Hobbs, J.; Chen, W.; Smith, J. L.; Patel, H.; Prieto, A.; Petrusca, D.; Grivich, M. I.; Sher, A.; Hottowy, P.; Dabrowski, W.; Litke, A. M.; Beggs, J. M. (2008). "A Maximum Entropy Model Applied to Spatial and Temporal Correlations from Cortical Networks in Vitro". Journal of Neuroscience. 28 (2): 505–518. doi:10.1523/JNEUROSCI.3359-07.2008. PMC 6670549. PMID 18184793. Open access article containing pointers to various papers and software implementations of Maximum Entropy Model on the net.
Wikipedia/Maximum_entropy_principle
In variational Bayesian methods, the evidence lower bound (often abbreviated ELBO, also sometimes called the variational lower bound or negative variational free energy) is a useful lower bound on the log-likelihood of some observed data. The ELBO is useful because it provides a guarantee on the worst-case for the log-likelihood of some distribution (e.g. p ( X ) {\displaystyle p(X)} ) which models a set of data. The actual log-likelihood may be higher (indicating an even better fit to the distribution) because the ELBO includes a Kullback-Leibler divergence (KL divergence) term which decreases the ELBO due to an internal part of the model being inaccurate despite good fit of the model overall. Thus improving the ELBO score indicates either improving the likelihood of the model p ( X ) {\displaystyle p(X)} or the fit of a component internal to the model, or both, and the ELBO score makes a good loss function, e.g., for training a deep neural network to improve both the model overall and the internal component. (The internal component is q ϕ ( ⋅ | x ) {\displaystyle q_{\phi }(\cdot |x)} , defined in detail later in this article.) == Definition == Let X {\displaystyle X} and Z {\displaystyle Z} be random variables, jointly distributed with distribution p θ {\displaystyle p_{\theta }} . For example, p θ ( X ) {\displaystyle p_{\theta }(X)} is the marginal distribution of X {\displaystyle X} , and p θ ( Z ∣ X ) {\displaystyle p_{\theta }(Z\mid X)} is the conditional distribution of Z {\displaystyle Z} given X {\displaystyle X} . Then, for a sample x ∼ p data {\displaystyle x\sim p_{\text{data}}} , and any distribution q ϕ {\displaystyle q_{\phi }} , the ELBO is defined as L ( ϕ , θ ; x ) := E z ∼ q ϕ ( ⋅ | x ) [ ln ⁡ p θ ( x , z ) q ϕ ( z | x ) ] . {\displaystyle L(\phi ,\theta ;x):=\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}\right].} The ELBO can equivalently be written as L ( ϕ , θ ; x ) = E z ∼ q ϕ ( ⋅ | x ) [ ln ⁡ p θ ( x , z ) ] + H [ q ϕ ( z | x ) ] = ln p θ ( x ) − D K L ( q ϕ ( z | x ) | | p θ ( z | x ) ) . {\displaystyle {\begin{aligned}L(\phi ,\theta ;x)=&\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {}p_{\theta }(x,z)\right]+H[q_{\phi }(z|x)]\\=&\mathbb {\ln } {}\,p_{\theta }(x)-D_{KL}(q_{\phi }(z|x)||p_{\theta }(z|x)).\\\end{aligned}}} In the first line, H [ q ϕ ( z | x ) ] {\displaystyle H[q_{\phi }(z|x)]} is the entropy of q ϕ {\displaystyle q_{\phi }} , which relates the ELBO to the Helmholtz free energy. In the second line, ln ⁡ p θ ( x ) {\displaystyle \ln p_{\theta }(x)} is called the evidence for x {\displaystyle x} , and D K L ( q ϕ ( z | x ) | | p θ ( z | x ) ) {\displaystyle D_{KL}(q_{\phi }(z|x)||p_{\theta }(z|x))} is the Kullback-Leibler divergence between q ϕ {\displaystyle q_{\phi }} and p θ {\displaystyle p_{\theta }} . Since the Kullback-Leibler divergence is non-negative, L ( ϕ , θ ; x ) {\displaystyle L(\phi ,\theta ;x)} forms a lower bound on the evidence (ELBO inequality) ln ⁡ p θ ( x ) ≥ E z ∼ q ϕ ( ⋅ | x ) [ ln ⁡ p θ ( x , z ) q ϕ ( z | x ) ] . {\displaystyle \ln p_{\theta }(x)\geq \mathbb {\mathbb {E} } _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }(z\vert x)}}\right].} == Motivation == === Variational Bayesian inference === Suppose we have an observable random variable X {\displaystyle X} , and we want to find its true distribution p ∗ {\displaystyle p^{*}} . This would allow us to generate data by sampling, and estimate probabilities of future events. 
In general, it is impossible to find p ∗ {\displaystyle p^{*}} exactly, forcing us to search for a good approximation. That is, we define a sufficiently large parametric family { p θ } θ ∈ Θ {\displaystyle \{p_{\theta }\}_{\theta \in \Theta }} of distributions, then solve for min θ L ( p θ , p ∗ ) {\displaystyle \min _{\theta }L(p_{\theta },p^{*})} for some loss function L {\displaystyle L} . One possible way to solve this is by considering small variation from p θ {\displaystyle p_{\theta }} to p θ + δ θ {\displaystyle p_{\theta +\delta \theta }} , and solve for L ( p θ , p ∗ ) − L ( p θ + δ θ , p ∗ ) = 0 {\displaystyle L(p_{\theta },p^{*})-L(p_{\theta +\delta \theta },p^{*})=0} . This is a problem in the calculus of variations, thus it is called the variational method. Since there are not many explicitly parametrized distribution families (all the classical distribution families, such as the normal distribution, the Gumbel distribution, etc, are far too simplistic to model the true distribution), we consider implicitly parametrized probability distributions: First, define a simple distribution p ( z ) {\displaystyle p(z)} over a latent random variable Z {\displaystyle Z} . Usually a normal distribution or a uniform distribution suffices. Next, define a family of complicated functions f θ {\displaystyle f_{\theta }} (such as a deep neural network) parametrized by θ {\displaystyle \theta } . Finally, define a way to convert any f θ ( z ) {\displaystyle f_{\theta }(z)} into a distribution (in general simple too, but unrelated to p ( z ) {\displaystyle p(z)} ) over the observable random variable X {\displaystyle X} . For example, let f θ ( z ) = ( f 1 ( z ) , f 2 ( z ) ) {\displaystyle f_{\theta }(z)=(f_{1}(z),f_{2}(z))} have two outputs, then we can define the corresponding distribution over X {\displaystyle X} to be the normal distribution N ( f 1 ( z ) , e f 2 ( z ) ) {\displaystyle {\mathcal {N}}(f_{1}(z),e^{f_{2}(z)})} . This defines a family of joint distributions p θ {\displaystyle p_{\theta }} over ( X , Z ) {\displaystyle (X,Z)} . It is very easy to sample ( x , z ) ∼ p θ {\displaystyle (x,z)\sim p_{\theta }} : simply sample z ∼ p {\displaystyle z\sim p} , then compute f θ ( z ) {\displaystyle f_{\theta }(z)} , and finally sample x ∼ p θ ( ⋅ | z ) {\displaystyle x\sim p_{\theta }(\cdot |z)} using f θ ( z ) {\displaystyle f_{\theta }(z)} . In other words, we have a generative model for both the observable and the latent. Now, we consider a distribution p θ {\displaystyle p_{\theta }} good, if it is a close approximation of p ∗ {\displaystyle p^{*}} : p θ ( X ) ≈ p ∗ ( X ) {\displaystyle p_{\theta }(X)\approx p^{*}(X)} since the distribution on the right side is over X {\displaystyle X} only, the distribution on the left side must marginalize the latent variable Z {\displaystyle Z} away. In general, it's impossible to perform the integral p θ ( x ) = ∫ p θ ( x | z ) p ( z ) d z {\displaystyle p_{\theta }(x)=\int p_{\theta }(x|z)p(z)dz} , forcing us to perform another approximation. Since p θ ( x ) = p θ ( x | z ) p ( z ) p θ ( z | x ) {\displaystyle p_{\theta }(x)={\frac {p_{\theta }(x|z)p(z)}{p_{\theta }(z|x)}}} (Bayes' Rule), it suffices to find a good approximation of p θ ( z | x ) {\displaystyle p_{\theta }(z|x)} . So define another distribution family q ϕ ( z | x ) {\displaystyle q_{\phi }(z|x)} and use it to approximate p θ ( z | x ) {\displaystyle p_{\theta }(z|x)} . This is a discriminative model for the latent. 
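A minimal sketch of such an implicitly parametrized generative model is given below. The tiny hand-written f_theta (standing in for a deep neural network), its parameter values, and the standard-normal latent prior are all illustrative assumptions rather than anything prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters theta of a tiny "network" f_theta with two outputs.
theta = {"w1": 1.5, "b1": -0.2, "w2": 0.3, "b2": -1.0}

def f_theta(z, theta):
    """Map a latent value z to (mean, log-variance) of the observable x."""
    mean = theta["w1"] * np.tanh(z) + theta["b1"]
    log_var = theta["w2"] * z + theta["b2"]
    return mean, log_var

def sample_joint(n, theta):
    """Ancestral sampling from p_theta(x, z): z ~ p(z), then x ~ p_theta(x | z)."""
    z = rng.standard_normal(n)                 # simple latent prior p(z) = N(0, 1)
    mean, log_var = f_theta(z, theta)
    x = mean + np.exp(0.5 * log_var) * rng.standard_normal(n)
    return x, z

x, z = sample_joint(5, theta)
print("latents z:", np.round(z, 3))
print("observations x:", np.round(x, 3))
```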
The entire situation is summarized in the following table: In Bayesian language, X {\displaystyle X} is the observed evidence, and Z {\displaystyle Z} is the latent/unobserved. The distribution p {\displaystyle p} over Z {\displaystyle Z} is the prior distribution over Z {\displaystyle Z} , p θ ( x | z ) {\displaystyle p_{\theta }(x|z)} is the likelihood function, and p θ ( z | x ) {\displaystyle p_{\theta }(z|x)} is the posterior distribution over Z {\displaystyle Z} . Given an observation x {\displaystyle x} , we can infer what z {\displaystyle z} likely gave rise to x {\displaystyle x} by computing p θ ( z | x ) {\displaystyle p_{\theta }(z|x)} . The usual Bayesian method is to estimate the integral p θ ( x ) = ∫ p θ ( x | z ) p ( z ) d z {\displaystyle p_{\theta }(x)=\int p_{\theta }(x|z)p(z)dz} , then compute by Bayes' rule p θ ( z | x ) = p θ ( x | z ) p ( z ) p θ ( x ) {\displaystyle p_{\theta }(z|x)={\frac {p_{\theta }(x|z)p(z)}{p_{\theta }(x)}}} . This is expensive to perform in general, but if we can simply find a good approximation q ϕ ( z | x ) ≈ p θ ( z | x ) {\displaystyle q_{\phi }(z|x)\approx p_{\theta }(z|x)} for most x , z {\displaystyle x,z} , then we can infer z {\displaystyle z} from x {\displaystyle x} cheaply. Thus, the search for a good q ϕ {\displaystyle q_{\phi }} is also called amortized inference. All in all, we have found a problem of variational Bayesian inference. === Deriving the ELBO === A basic result in variational inference is that minimizing the Kullback–Leibler divergence (KL-divergence) is equivalent to maximizing the log-likelihood: E x ∼ p ∗ ( x ) [ ln ⁡ p θ ( x ) ] = − H ( p ∗ ) − D K L ( p ∗ ( x ) ‖ p θ ( x ) ) {\displaystyle \mathbb {E} _{x\sim p^{*}(x)}[\ln p_{\theta }(x)]=-H(p^{*})-D_{\mathit {KL}}(p^{*}(x)\|p_{\theta }(x))} where H ( p ∗ ) = − E x ∼ p ∗ [ ln ⁡ p ∗ ( x ) ] {\displaystyle H(p^{*})=-\mathbb {\mathbb {E} } _{x\sim p^{*}}[\ln p^{*}(x)]} is the entropy of the true distribution. So if we can maximize E x ∼ p ∗ ( x ) [ ln ⁡ p θ ( x ) ] {\displaystyle \mathbb {E} _{x\sim p^{*}(x)}[\ln p_{\theta }(x)]} , we can minimize D K L ( p ∗ ( x ) ‖ p θ ( x ) ) {\displaystyle D_{\mathit {KL}}(p^{*}(x)\|p_{\theta }(x))} , and consequently find an accurate approximation p θ ≈ p ∗ {\displaystyle p_{\theta }\approx p^{*}} . To maximize E x ∼ p ∗ ( x ) [ ln ⁡ p θ ( x ) ] {\displaystyle \mathbb {E} _{x\sim p^{*}(x)}[\ln p_{\theta }(x)]} , we simply sample many x i ∼ p ∗ ( x ) {\displaystyle x_{i}\sim p^{*}(x)} , i.e. use importance sampling N max θ E x ∼ p ∗ ( x ) [ ln ⁡ p θ ( x ) ] ≈ max θ ∑ i ln ⁡ p θ ( x i ) {\displaystyle N\max _{\theta }\mathbb {E} _{x\sim p^{*}(x)}[\ln p_{\theta }(x)]\approx \max _{\theta }\sum _{i}\ln p_{\theta }(x_{i})} where N {\displaystyle N} is the number of samples drawn from the true distribution. This approximation can be seen as overfitting. In order to maximize ∑ i ln ⁡ p θ ( x i ) {\displaystyle \sum _{i}\ln p_{\theta }(x_{i})} , it's necessary to find ln ⁡ p θ ( x ) {\displaystyle \ln p_{\theta }(x)} : ln ⁡ p θ ( x ) = ln ⁡ ∫ p θ ( x | z ) p ( z ) d z {\displaystyle \ln p_{\theta }(x)=\ln \int p_{\theta }(x|z)p(z)dz} This usually has no closed form and must be estimated. 
The usual way to estimate integrals is Monte Carlo integration with importance sampling: ∫ p θ ( x | z ) p ( z ) d z = E z ∼ q ϕ ( ⋅ | x ) [ p θ ( x , z ) q ϕ ( z | x ) ] {\displaystyle \int p_{\theta }(x|z)p(z)dz=\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[{\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}\right]} where q ϕ ( z | x ) {\displaystyle q_{\phi }(z|x)} is a sampling distribution over z {\displaystyle z} that we use to perform the Monte Carlo integration. So we see that if we sample z ∼ q ϕ ( ⋅ | x ) {\displaystyle z\sim q_{\phi }(\cdot |x)} , then p θ ( x , z ) q ϕ ( z | x ) {\displaystyle {\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}} is an unbiased estimator of p θ ( x ) {\displaystyle p_{\theta }(x)} . Unfortunately, this does not give us an unbiased estimator of ln ⁡ p θ ( x ) {\displaystyle \ln p_{\theta }(x)} , because ln {\displaystyle \ln } is nonlinear. Indeed, we have by Jensen's inequality, ln ⁡ p θ ( x ) = ln ⁡ E z ∼ q ϕ ( ⋅ | x ) [ p θ ( x , z ) q ϕ ( z | x ) ] ≥ E z ∼ q ϕ ( ⋅ | x ) [ ln ⁡ p θ ( x , z ) q ϕ ( z | x ) ] {\displaystyle \ln p_{\theta }(x)=\ln \mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[{\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}\right]\geq \mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}\right]} In fact, all the obvious estimators of ln ⁡ p θ ( x ) {\displaystyle \ln p_{\theta }(x)} are biased downwards, because no matter how many samples of z i ∼ q ϕ ( ⋅ | x ) {\displaystyle z_{i}\sim q_{\phi }(\cdot |x)} we take, we have by Jensen's inequality: E z i ∼ q ϕ ( ⋅ | x ) [ ln ⁡ ( 1 N ∑ i p θ ( x , z i ) q ϕ ( z i | x ) ) ] ≤ ln ⁡ E z i ∼ q ϕ ( ⋅ | x ) [ 1 N ∑ i p θ ( x , z i ) q ϕ ( z i | x ) ] = ln ⁡ p θ ( x ) {\displaystyle \mathbb {E} _{z_{i}\sim q_{\phi }(\cdot |x)}\left[\ln \left({\frac {1}{N}}\sum _{i}{\frac {p_{\theta }(x,z_{i})}{q_{\phi }(z_{i}|x)}}\right)\right]\leq \ln \mathbb {E} _{z_{i}\sim q_{\phi }(\cdot |x)}\left[{\frac {1}{N}}\sum _{i}{\frac {p_{\theta }(x,z_{i})}{q_{\phi }(z_{i}|x)}}\right]=\ln p_{\theta }(x)} Subtracting the right side, we see that the problem comes down to a biased estimator of zero: E z i ∼ q ϕ ( ⋅ | x ) [ ln ⁡ ( 1 N ∑ i p θ ( z i | x ) q ϕ ( z i | x ) ) ] ≤ 0 {\displaystyle \mathbb {E} _{z_{i}\sim q_{\phi }(\cdot |x)}\left[\ln \left({\frac {1}{N}}\sum _{i}{\frac {p_{\theta }(z_{i}|x)}{q_{\phi }(z_{i}|x)}}\right)\right]\leq 0} At this point, we could branch off towards the development of an importance-weighted autoencoder, but we will instead continue with the simplest case with N = 1 {\displaystyle N=1} : ln ⁡ p θ ( x ) = ln ⁡ E z ∼ q ϕ ( ⋅ | x ) [ p θ ( x , z ) q ϕ ( z | x ) ] ≥ E z ∼ q ϕ ( ⋅ | x ) [ ln ⁡ p θ ( x , z ) q ϕ ( z | x ) ] {\displaystyle \ln p_{\theta }(x)=\ln \mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[{\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}\right]\geq \mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}\right]} The tightness of the inequality has a closed form: ln ⁡ p θ ( x ) − E z ∼ q ϕ ( ⋅ | x ) [ ln ⁡ p θ ( x , z ) q ϕ ( z | x ) ] = D K L ( q ϕ ( ⋅ | x ) ‖ p θ ( ⋅ | x ) ) ≥ 0 {\displaystyle \ln p_{\theta }(x)-\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}\right]=D_{\mathit {KL}}(q_{\phi }(\cdot |x)\|p_{\theta }(\cdot |x))\geq 0} We have thus obtained the ELBO function: L ( ϕ , θ ; x ) := ln ⁡ p θ ( x ) − D K L ( q ϕ ( ⋅ | x ) ‖ p θ ( ⋅ | x ) ) {\displaystyle L(\phi ,\theta ;x):=\ln p_{\theta }(x)-D_{\mathit {KL}}(q_{\phi }(\cdot |x)\|p_{\theta }(\cdot |x))} === Maximizing 
the ELBO === For fixed x {\displaystyle x} , the optimization max θ , ϕ L ( ϕ , θ ; x ) {\displaystyle \max _{\theta ,\phi }L(\phi ,\theta ;x)} simultaneously attempts to maximize ln ⁡ p θ ( x ) {\displaystyle \ln p_{\theta }(x)} and minimize D K L ( q ϕ ( ⋅ | x ) ‖ p θ ( ⋅ | x ) ) {\displaystyle D_{\mathit {KL}}(q_{\phi }(\cdot |x)\|p_{\theta }(\cdot |x))} . If the parametrization for p θ {\displaystyle p_{\theta }} and q ϕ {\displaystyle q_{\phi }} are flexible enough, we would obtain some ϕ ^ , θ ^ {\displaystyle {\hat {\phi }},{\hat {\theta }}} , such that we have simultaneously ln ⁡ p θ ^ ( x ) ≈ max θ ln ⁡ p θ ( x ) ; q ϕ ^ ( ⋅ | x ) ≈ p θ ^ ( ⋅ | x ) {\displaystyle \ln p_{\hat {\theta }}(x)\approx \max _{\theta }\ln p_{\theta }(x);\quad q_{\hat {\phi }}(\cdot |x)\approx p_{\hat {\theta }}(\cdot |x)} Since E x ∼ p ∗ ( x ) [ ln ⁡ p θ ( x ) ] = − H ( p ∗ ) − D K L ( p ∗ ( x ) ‖ p θ ( x ) ) {\displaystyle \mathbb {E} _{x\sim p^{*}(x)}[\ln p_{\theta }(x)]=-H(p^{*})-D_{\mathit {KL}}(p^{*}(x)\|p_{\theta }(x))} we have ln ⁡ p θ ^ ( x ) ≈ max θ − H ( p ∗ ) − D K L ( p ∗ ( x ) ‖ p θ ( x ) ) {\displaystyle \ln p_{\hat {\theta }}(x)\approx \max _{\theta }-H(p^{*})-D_{\mathit {KL}}(p^{*}(x)\|p_{\theta }(x))} and so θ ^ ≈ arg ⁡ min D K L ( p ∗ ( x ) ‖ p θ ( x ) ) {\displaystyle {\hat {\theta }}\approx \arg \min D_{\mathit {KL}}(p^{*}(x)\|p_{\theta }(x))} In other words, maximizing the ELBO would simultaneously allow us to obtain an accurate generative model p θ ^ ≈ p ∗ {\displaystyle p_{\hat {\theta }}\approx p^{*}} and an accurate discriminative model q ϕ ^ ( ⋅ | x ) ≈ p θ ^ ( ⋅ | x ) {\displaystyle q_{\hat {\phi }}(\cdot |x)\approx p_{\hat {\theta }}(\cdot |x)} . == Main forms == The ELBO has many possible expressions, each with some different emphasis. E z ∼ q ϕ ( ⋅ | x ) [ ln ⁡ p θ ( x , z ) q ϕ ( z | x ) ] = ∫ q ϕ ( z | x ) ln ⁡ p θ ( x , z ) q ϕ ( z | x ) d z {\displaystyle \mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}\right]=\int q_{\phi }(z|x)\ln {\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}dz} This form shows that if we sample z ∼ q ϕ ( ⋅ | x ) {\displaystyle z\sim q_{\phi }(\cdot |x)} , then ln ⁡ p θ ( x , z ) q ϕ ( z | x ) {\displaystyle \ln {\frac {p_{\theta }(x,z)}{q_{\phi }(z|x)}}} is an unbiased estimator of the ELBO. ln ⁡ p θ ( x ) − D K L ( q ϕ ( ⋅ | x ) ‖ p θ ( ⋅ | x ) ) {\displaystyle \ln p_{\theta }(x)-D_{\mathit {KL}}(q_{\phi }(\cdot |x)\;\|\;p_{\theta }(\cdot |x))} This form shows that the ELBO is a lower bound on the evidence ln ⁡ p θ ( x ) {\displaystyle \ln p_{\theta }(x)} , and that maximizing the ELBO with respect to ϕ {\displaystyle \phi } is equivalent to minimizing the KL-divergence from p θ ( ⋅ | x ) {\displaystyle p_{\theta }(\cdot |x)} to q ϕ ( ⋅ | x ) {\displaystyle q_{\phi }(\cdot |x)} . E z ∼ q ϕ ( ⋅ | x ) [ ln ⁡ p θ ( x | z ) ] − D K L ( q ϕ ( ⋅ | x ) ‖ p ) {\displaystyle \mathbb {E} _{z\sim q_{\phi }(\cdot |x)}[\ln p_{\theta }(x|z)]-D_{\mathit {KL}}(q_{\phi }(\cdot |x)\;\|\;p)} This form shows that maximizing the ELBO simultaneously attempts to keep q ϕ ( ⋅ | x ) {\displaystyle q_{\phi }(\cdot |x)} close to p {\displaystyle p} and concentrate q ϕ ( ⋅ | x ) {\displaystyle q_{\phi }(\cdot |x)} on those z {\displaystyle z} that maximizes ln ⁡ p θ ( x | z ) {\displaystyle \ln p_{\theta }(x|z)} . 
That is, the approximate posterior q ϕ ( ⋅ | x ) {\displaystyle q_{\phi }(\cdot |x)} balances between staying close to the prior p {\displaystyle p} and moving towards the maximum likelihood arg ⁡ max z ln ⁡ p θ ( x | z ) {\displaystyle \arg \max _{z}\ln p_{\theta }(x|z)} . === Data-processing inequality === Suppose we take N {\displaystyle N} independent samples from p ∗ {\displaystyle p^{*}} , and collect them in the dataset D = { x 1 , . . . , x N } {\displaystyle D=\{x_{1},...,x_{N}\}} , then we have empirical distribution q D ( x ) = 1 N ∑ i δ x i {\displaystyle q_{D}(x)={\frac {1}{N}}\sum _{i}\delta _{x_{i}}} . Fitting p θ ( x ) {\displaystyle p_{\theta }(x)} to q D ( x ) {\displaystyle q_{D}(x)} can be done, as usual, by maximizing the loglikelihood ln ⁡ p θ ( D ) {\displaystyle \ln p_{\theta }(D)} : D K L ( q D ( x ) ‖ p θ ( x ) ) = − 1 N ∑ i ln ⁡ p θ ( x i ) − H ( q D ) = − 1 N ln ⁡ p θ ( D ) − H ( q D ) {\displaystyle D_{\mathit {KL}}(q_{D}(x)\|p_{\theta }(x))=-{\frac {1}{N}}\sum _{i}\ln p_{\theta }(x_{i})-H(q_{D})=-{\frac {1}{N}}\ln p_{\theta }(D)-H(q_{D})} Now, by the ELBO inequality, we can bound ln ⁡ p θ ( D ) {\displaystyle \ln p_{\theta }(D)} , and thus D K L ( q D ( x ) ‖ p θ ( x ) ) ≤ − 1 N L ( ϕ , θ ; D ) − H ( q D ) {\displaystyle D_{\mathit {KL}}(q_{D}(x)\|p_{\theta }(x))\leq -{\frac {1}{N}}L(\phi ,\theta ;D)-H(q_{D})} The right-hand-side simplifies to a KL-divergence, and so we get: D K L ( q D ( x ) ‖ p θ ( x ) ) ≤ − 1 N ∑ i L ( ϕ , θ ; x i ) − H ( q D ) = D K L ( q D , ϕ ( x , z ) ; p θ ( x , z ) ) {\displaystyle D_{\mathit {KL}}(q_{D}(x)\|p_{\theta }(x))\leq -{\frac {1}{N}}\sum _{i}L(\phi ,\theta ;x_{i})-H(q_{D})=D_{\mathit {KL}}(q_{D,\phi }(x,z);p_{\theta }(x,z))} This result can be interpreted as a special case of the data processing inequality. In this interpretation, maximizing L ( ϕ , θ ; D ) = ∑ i L ( ϕ , θ ; x i ) {\displaystyle L(\phi ,\theta ;D)=\sum _{i}L(\phi ,\theta ;x_{i})} is minimizing D K L ( q D , ϕ ( x , z ) ; p θ ( x , z ) ) {\displaystyle D_{\mathit {KL}}(q_{D,\phi }(x,z);p_{\theta }(x,z))} , which upper-bounds the real quantity of interest D K L ( q D ( x ) ; p θ ( x ) ) {\displaystyle D_{\mathit {KL}}(q_{D}(x);p_{\theta }(x))} via the data-processing inequality. That is, we append a latent space to the observable space, paying the price of a weaker inequality for the sake of more computationally efficient minimization of the KL-divergence. == References == == Notes ==
Wikipedia/Variational_free_energy
The free energy principle is a mathematical principle of information physics. Its application to fMRI brain imaging data as a theoretical framework suggests that the brain reduces surprise or uncertainty by making predictions based on internal models and uses sensory input to update its models so as to improve the accuracy of its predictions. This principle approximates an integration of Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. From it, wide-ranging inferences have been made about brain function, perception, and action. Its applicability to living systems has been questioned. == Overview == In biophysics and cognitive science, the free energy principle is a mathematical principle describing a formal account of the representational capacities of physical systems: that is, why things that exist look as if they track properties of the systems to which they are coupled. It establishes that the dynamics of physical systems minimise a quantity known as surprisal (which is the negative log probability of some outcome); or equivalently, its variational upper bound, called free energy. The principle is used especially in Bayesian approaches to brain function, but also some approaches to artificial intelligence; it is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception-action loops in neuroscience. The free energy principle models the behaviour of systems that are distinct from, but coupled to, another system (e.g., an embedding environment), where the degrees of freedom that implement the interface between the two systems is known as a Markov blanket. More formally, the free energy principle says that if a system has a "particular partition" (i.e., into particles, with their Markov blankets), then subsets of that system will track the statistical structure of other subsets (which are known as internal and external states or paths of a system). The free energy principle is based on the Bayesian idea of the brain as an “inference engine.” Under the free energy principle, systems pursue paths of least surprise, or equivalently, minimize the difference between predictions based on their model of the world and their sense and associated perception. This difference is quantified by variational free energy and is minimized by continuous correction of the world model of the system, or by making the world more like the predictions of the system. By actively changing the world to make it closer to the expected state, systems can also minimize the free energy of the system. Friston assumes this to be the principle of all biological reaction. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods. The free energy principle is a mathematical principle of information physics: much like the principle of maximum entropy or the principle of least action, it is true on mathematical grounds. To attempt to falsify the free energy principle is a category mistake, akin to trying to falsify calculus by making empirical observations. (One cannot invalidate a mathematical theory in this way; instead, one would need to derive a formal contradiction from the theory.) 
In a 2018 interview, Friston explained what it entails for the free energy principle to not be subject to falsification: I think it is useful to make a fundamental distinction at this point—that we can appeal to later. The distinction is between a state and process theory; i.e., the difference between a normative principle that things may or may not conform to, and a process theory or hypothesis about how that principle is realized. Under this distinction, the free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton's principle of stationary action, it cannot be falsified. It cannot be disproven. In fact, there's not much you can do with it, unless you ask whether measurable systems conform to the principle. On the other hand, hypotheses that the brain performs some form of Bayesian inference or predictive coding are what they are—hypotheses. These hypotheses may or may not be supported by empirical evidence. There are many examples of these hypotheses being supported by empirical evidence. == Background == The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz's work on unconscious inference and subsequent treatments in psychology and machine learning. Variational free energy is a function of observations and a probability density over their hidden causes. This variational density is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation to Bayesian model evidence. Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world. However, free energy is also an upper bound on the self-information of outcomes, where the long-term average of surprise is entropy. This means that if a system acts to minimise free energy, it will implicitly place an upper bound on the entropy of the outcomes – or sensory states – it samples. === Relationship to other theories === Active inference is closely related to the good regulator theorem and related accounts of self-organisation, such as self-assembly, pattern formation, autopoiesis and practopoiesis. It addresses the themes considered in cybernetics, synergetics and embodied cognition. Because free energy can be expressed as the expected energy of observations under the variational density minus its entropy, it is also related to the maximum entropy principle. Finally, because the time average of energy is action, the principle of minimum variational free energy is a principle of least action. Active inference allowing for scale invariance has also been applied to other theories and domains. For instance, it has been applied to sociology, linguistics and communication, semiotics, and epidemiology among others. Negative free energy is formally equivalent to the evidence lower bound, which is commonly used in machine learning to train generative models, such as variational autoencoders. == Action and perception == Active inference applies the techniques of approximate Bayesian inference to infer the causes of sensory data from a 'generative' model of how that data is caused and then uses these inferences to guide action. 
Bayes' rule characterizes the probabilistically optimal inversion of such a causal model, but applying it is typically computationally intractable, leading to the use of approximate methods. In active inference, the leading class of such approximate methods are variational methods, for both practical and theoretical reasons: practical, as they often lead to simple inference procedures; and theoretical, because they are related to fundamental physical principles, as discussed above. These variational methods proceed by minimizing an upper bound on the divergence between the Bayes-optimal inference (or 'posterior') and its approximation according to the method. This upper bound is known as the free energy, and we can accordingly characterize perception as the minimization of the free energy with respect to inbound sensory information, and action as the minimization of the same free energy with respect to outbound action information. This holistic dual optimization is characteristic of active inference, and the free energy principle is the hypothesis that all systems which perceive and act can be characterized in this way. In order to exemplify the mechanics of active inference via the free energy principle, a generative model must be specified, and this typically involves a collection of probability density functions which together characterize the causal model. One such specification is as follows. The system is modelled as inhabiting a state space X {\displaystyle X} , in the sense that its states form the points of this space. The state space is then factorized according to X = Ψ × S × A × R {\displaystyle X=\Psi \times S\times A\times R} , where Ψ {\displaystyle \Psi } is the space of 'external' states that are 'hidden' from the agent (in the sense of not being directly perceived or accessible), S {\displaystyle S} is the space of sensory states that are directly perceived by the agent, A {\displaystyle A} is the space of the agent's possible actions, and R {\displaystyle R} is a space of 'internal' states that are private to the agent. Keeping with the Figure 1, note that in the following the ψ ˙ , ψ , s , a {\displaystyle {\dot {\psi }},\psi ,s,a} and μ {\displaystyle \mu } are functions of (continuous) time t {\displaystyle t} . The generative model is the specification of the following density functions: A sensory model, p S : S × Ψ × A → R {\displaystyle p_{S}:S\times \Psi \times A\to \mathbb {R} } , often written as p S ( s ∣ ψ , a ) {\displaystyle p_{S}(s\mid \psi ,a)} , characterizing the likelihood of sensory data given external states and actions; a stochastic model of the environmental dynamics, p Ψ : Ψ × Ψ × A → R {\displaystyle p_{\Psi }:\Psi \times \Psi \times A\to \mathbb {R} } , often written p Ψ ( ψ ˙ ∣ ψ , a ) {\displaystyle p_{\Psi }({\dot {\psi }}\mid \psi ,a)} , characterizing how the external states are expected by the agent to evolve over time t {\displaystyle t} , given the agent's actions; an action model, p A : A × R × S → R {\displaystyle p_{A}:A\times R\times S\to \mathbb {R} } , written p A ( a ∣ μ , s ) {\displaystyle p_{A}(a\mid \mu ,s)} , characterizing how the agent's actions depend upon its internal states and sensory data; and an internal model, p R : R × S → R {\displaystyle p_{R}:R\times S\to \mathbb {R} } , written p R ( μ ∣ s ) {\displaystyle p_{R}(\mu \mid s)} , characterizing how the agent's internal states depend upon its sensory data. 
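As a purely illustrative sketch, the four densities can be written down for a hypothetical one-dimensional agent, with Gaussian forms chosen only for convenience; none of these functional forms or numbers are specified by the source. Their product anticipates the joint model given in the next paragraph.

```python
import numpy as np

def gauss(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def p_S(s, psi, a):           # sensory model: s is a noisy reading of psi
    return gauss(s, psi, 0.1)

def p_Psi(psi_dot, psi, a):   # dynamics: flow of psi drifts it toward the action
    return gauss(psi_dot, a - psi, 0.5)

def p_A(a, mu, s):            # action model: toy choice that ignores s, centres a on mu
    return gauss(a, mu, 0.2)

def p_R(mu, s):               # internal model: internal state tracks the sensation
    return gauss(mu, s, 0.2)

def joint(psi_dot, s, a, mu, psi):
    """Product of the four factors, i.e. p(psi_dot, s, a, mu | psi)."""
    return p_S(s, psi, a) * p_Psi(psi_dot, psi, a) * p_A(a, mu, s) * p_R(mu, s)

print(joint(psi_dot=0.1, s=0.9, a=1.0, mu=0.95, psi=1.0))
```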
These density functions determine the factors of a "joint model", which represents the complete specification of the generative model, and which can be written as p ( ψ ˙ , s , a , μ ∣ ψ ) = p S ( s ∣ ψ , a ) p Ψ ( ψ ˙ ∣ ψ , a ) p A ( a ∣ μ , s ) p R ( μ ∣ s ) {\displaystyle p({\dot {\psi }},s,a,\mu \mid \psi )=p_{S}(s\mid \psi ,a)p_{\Psi }({\dot {\psi }}\mid \psi ,a)p_{A}(a\mid \mu ,s)p_{R}(\mu \mid s)} . Bayes' rule then determines the "posterior density" p Bayes ( ψ ˙ | s , a , μ , ψ ) {\displaystyle p_{\text{Bayes}}({\dot {\psi }}|s,a,\mu ,\psi )} , which expresses a probabilistically optimal belief about the external state ψ ˙ {\displaystyle {\dot {\psi }}} given the preceding state ψ {\displaystyle \psi } and the agent's actions, sensory signals, and internal states. Since computing p Bayes {\displaystyle p_{\text{Bayes}}} is computationally intractable, the free energy principle asserts the existence of a "variational density" q ( ψ ˙ | s , a , μ , ψ ) {\displaystyle q({\dot {\psi }}|s,a,\mu ,\psi )} , where q {\displaystyle q} is an approximation to p Bayes {\displaystyle p_{\text{Bayes}}} . One then defines the free energy as F ( μ , a ; s ) ⏟ f r e e − e n e r g y = E q ( ψ ˙ ) [ − log ⁡ p ( ψ ˙ , s , a , μ ∣ ψ ) ] ⏟ expected energy − H [ q ( ψ ˙ ∣ s , a , μ , ψ ) ] ⏟ e n t r o p y = − log ⁡ p ( s ) ⏟ s u r p r i s e + K L [ q ( ψ ˙ ∣ s , a , μ , ψ ) ∥ p Bayes ( ψ ˙ ∣ s , a , μ , ψ ) ] ⏟ d i v e r g e n c e ≥ − log ⁡ p ( s ) ⏟ s u r p r i s e {\displaystyle {\begin{aligned}{\underset {\mathrm {free-energy} }{\underbrace {F(\mu ,a\,;s)} }}&={\underset {\text{expected energy}}{\underbrace {\mathbb {E} _{q({\dot {\psi }})}[-\log p({\dot {\psi }},s,a,\mu \mid \psi )]} }}-{\underset {\mathrm {entropy} }{\underbrace {\mathbb {H} [q({\dot {\psi }}\mid s,a,\mu ,\psi )]} }}\\&={\underset {\mathrm {surprise} }{\underbrace {-\log p(s)} }}+{\underset {\mathrm {divergence} }{\underbrace {\mathbb {KL} [q({\dot {\psi }}\mid s,a,\mu ,\psi )\parallel p_{\text{Bayes}}({\dot {\psi }}\mid s,a,\mu ,\psi )]} }}\\&\geq {\underset {\mathrm {surprise} }{\underbrace {-\log p(s)} }}\end{aligned}}} and defines action and perception as the joint optimization problem μ ∗ = a r g m i n μ { F ( μ , a ; s ) ) } a ∗ = a r g m i n a { F ( μ ∗ , a ; s ) } {\displaystyle {\begin{aligned}\mu ^{*}&={\underset {\mu }{\operatorname {arg\,min} }}\{F(\mu ,a\,;\,s))\}\\a^{*}&={\underset {a}{\operatorname {arg\,min} }}\{F(\mu ^{*},a\,;\,s)\}\end{aligned}}} where the internal states μ {\displaystyle \mu } are typically taken to encode the parameters of the 'variational' density q {\displaystyle q} and hence the agent's "best guess" about the posterior belief over Ψ {\displaystyle \Psi } . Note that the free energy is also an upper bound on a measure of the agent's (marginal, or average) sensory surprise, and hence free energy minimization is often motivated by the minimization of surprise. == Free energy minimisation == === Free energy minimisation and self-organisation === Free energy minimisation has been proposed as a hallmark of self-organising systems when cast as random dynamical systems. This formulation rests on a Markov blanket (comprising action and sensory states) that separates internal and external states. 
If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states: lim T → ∞ 1 T ∫ 0 T F ( s ( t ) , μ ( t ) ) d t ⏟ free-action ≥ lim T → ∞ 1 T ∫ 0 T − log ⁡ p ( s ( t ) ∣ m ) ⏟ surprise d t = H [ p ( s ∣ m ) ] {\displaystyle \lim _{T\to \infty }{\frac {1}{T}}{\underset {\text{free-action}}{\underbrace {\int _{0}^{T}F(s(t),\mu (t))\,dt} }}\geq \lim _{T\to \infty }{\frac {1}{T}}\int _{0}^{T}{\underset {\text{surprise}}{\underbrace {-\log p(s(t)\mid m)} }}\,dt=H[p(s\mid m)]} This is because – under ergodic assumptions – the long-term average of surprise is entropy. This bound resists a natural tendency to disorder – of the sort associated with the second law of thermodynamics and the fluctuation theorem. However, formulating a unifying principle for the life sciences in terms of concepts from statistical physics, such as random dynamical system, non-equilibrium steady state and ergodicity, places substantial constraints on the theoretical and empirical study of biological systems with the risk of obscuring all features that make biological systems interesting kinds of self-organizing systems. === Free energy minimisation and Bayesian inference === All Bayesian inference can be cast in terms of free energy minimisation. When free energy is minimised with respect to internal states, the Kullback–Leibler divergence between the variational and posterior density over hidden states is minimised. This corresponds to approximate Bayesian inference – when the form of the variational density is fixed – and exact Bayesian inference otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering (e.g., Kalman filtering). It is also used in Bayesian model selection, where free energy can be usefully decomposed into complexity and accuracy: F ( s , μ ) ⏟ free-energy = D K L [ q ( ψ ∣ μ ) ∥ p ( ψ ∣ m ) ] ⏟ complexity − E q [ log ⁡ p ( s ∣ ψ , m ) ] ⏟ a c c u r a c y {\displaystyle {\underset {\text{free-energy}}{\underbrace {F(s,\mu )} }}={\underset {\text{complexity}}{\underbrace {D_{\mathrm {KL} }[q(\psi \mid \mu )\parallel p(\psi \mid m)]} }}-{\underset {\mathrm {accuracy} }{\underbrace {E_{q}[\log p(s\mid \psi ,m)]} }}} Models with minimum free energy provide an accurate explanation of data, under complexity costs; cf. Occam's razor and more formal treatments of computational costs. Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data). === Free energy minimisation and thermodynamics === Variational free energy is an information-theoretic functional and is distinct from thermodynamic (Helmholtz) free energy. However, the complexity term of variational free energy shares the same fixed point as Helmholtz free energy (under the assumption the system is thermodynamically closed but not isolated). This is because if sensory perturbations are suspended (for a suitably long period of time), complexity is minimised (because accuracy can be neglected). At this point, the system is at equilibrium and internal states minimise Helmholtz free energy, by the principle of minimum energy. === Free energy minimisation and information theory === Free energy minimisation is equivalent to maximising the mutual information between sensory states and internal states that parameterise the variational density (for a fixed entropy variational density). 
This relates free energy minimization to the principle of minimum redundancy. == Free energy minimisation in neuroscience == Free energy minimisation provides a useful way to formulate normative (Bayes optimal) models of neuronal inference and learning under uncertainty and therefore subscribes to the Bayesian brain hypothesis. The neuronal processes described by free energy minimisation depend on the nature of hidden states: Ψ = X × Θ × Π {\displaystyle \Psi =X\times \Theta \times \Pi } that can comprise time-dependent variables, time-invariant parameters and the precision (inverse variance or temperature) of random fluctuations. Minimising variables, parameters, and precision correspond to inference, learning, and the encoding of uncertainty, respectively. === Perceptual inference and categorisation === Free energy minimisation formalises the notion of unconscious inference in perception and provides a normative (Bayesian) theory of neuronal processing. The associated process theory of neuronal dynamics is based on minimising free energy through gradient descent. This corresponds to generalised Bayesian filtering (where ~ denotes a variable in generalised coordinates of motion and D {\displaystyle D} is a derivative matrix operator): μ ~ ˙ = D μ ~ − ∂ μ F ( s , μ ) | μ = μ ~ {\displaystyle {\dot {\tilde {\mu }}}=D{\tilde {\mu }}-\partial _{\mu }F(s,\mu ){\Big |}_{\mu ={\tilde {\mu }}}} Usually, the generative models that define free energy are non-linear and hierarchical (like cortical hierarchies in the brain). Special cases of generalised filtering include Kalman filtering, which is formally equivalent to predictive coding – a popular metaphor for message passing in the brain. Under hierarchical models, predictive coding involves the recurrent exchange of ascending (bottom-up) prediction errors and descending (top-down) predictions that is consistent with the anatomy and physiology of sensory and motor systems. === Perceptual learning and memory === In predictive coding, optimising model parameters through a gradient descent on the time integral of free energy (free action) reduces to associative or Hebbian plasticity and is associated with synaptic plasticity in the brain. === Perceptual precision, attention and salience === Optimizing the precision parameters corresponds to optimizing the gain of prediction errors (cf., Kalman gain). In neuronally plausible implementations of predictive coding, this corresponds to optimizing the excitability of superficial pyramidal cells and has been interpreted in terms of attentional gain. With regard to the top-down vs. bottom-up controversy, which has been addressed as a major open problem of attention, a computational model has succeeded in illustrating the circular nature of the interplay between top-down and bottom-up mechanisms. Using an established emergent model of attention, namely SAIM, the authors proposed a model called PE-SAIM, which, in contrast to the standard version, approaches selective attention from a top-down position. The model takes into account the transmission of prediction errors to the same level or a level above, in order to minimise the energy function that indicates the difference between the data and its cause, or, in other words, between the generative model and the posterior. To increase validity, they also incorporated neural competition between stimuli into their model. 
A notable feature of this model is the reformulation of the free energy function only in terms of prediction errors during task performance: ∂ E t o t a l ( Y V P , X S N , x C N , y K N ) ∂ y m n S N = x m n C N − b C N ε n m C N + b C N ∑ k ( ε k n m K N ) {\displaystyle {\dfrac {\partial E^{total}(Y^{VP},X^{SN},x^{CN},y^{KN})}{\partial y_{mn}^{SN}}}=x_{mn}^{CN}-b^{CN}\varepsilon _{nm}^{CN}+b^{CN}\sum _{k}(\varepsilon _{knm}^{KN})} where E t o t a l {\displaystyle E^{total}} is the total energy function of the neural networks entail, and ε k n m K N {\displaystyle \varepsilon _{knm}^{KN}} is the prediction error between the generative model (prior) and posterior changing over time. Comparing the two models reveals a notable similarity between their respective results while also highlighting a remarkable discrepancy, whereby – in the standard version of the SAIM – the model's focus is mainly upon the excitatory connections, whereas in the PE-SAIM, the inhibitory connections are leveraged to make an inference. The model has also proved to be fit to predict the EEG and fMRI data drawn from human experiments with high precision. In the same vein, Yahya et al. also applied the free energy principle to propose a computational model for template matching in covert selective visual attention that mostly relies on SAIM. According to this study, the total free energy of the whole state-space is reached by inserting top-down signals in the original neural networks, whereby we derive a dynamical system comprising both feed-forward and backward prediction error. == Active inference == When gradient descent is applied to action a ˙ = − ∂ a F ( s , μ ~ ) {\displaystyle {\dot {a}}=-\partial _{a}F(s,{\tilde {\mu }})} , motor control can be understood in terms of classical reflex arcs that are engaged by descending (corticospinal) predictions. This provides a formalism that generalizes the equilibrium point solution – to the degrees of freedom problem – to movement trajectories. === Active inference and optimal control === Active inference is related to optimal control by replacing value or cost-to-go functions with prior beliefs about state transitions or flow. This exploits the close connection between Bayesian filtering and the solution to the Bellman equation. However, active inference starts with (priors over) flow f = Γ ⋅ ∇ V + ∇ × W {\displaystyle f=\Gamma \cdot \nabla V+\nabla \times W} that are specified with scalar V ( x ) {\displaystyle V(x)} and vector W ( x ) {\displaystyle W(x)} value functions of state space (cf., the Helmholtz decomposition). Here, Γ {\displaystyle \Gamma } is the amplitude of random fluctuations and cost is c ( x ) = f ⋅ ∇ V + ∇ ⋅ Γ ⋅ V {\displaystyle c(x)=f\cdot \nabla V+\nabla \cdot \Gamma \cdot V} . The priors over flow p ( x ~ ∣ m ) {\displaystyle p({\tilde {x}}\mid m)} induce a prior over states p ( x ∣ m ) = exp ⁡ ( V ( x ) ) {\displaystyle p(x\mid m)=\exp(V(x))} that is the solution to the appropriate forward Kolmogorov equations. In contrast, optimal control optimises the flow, given a cost function, under the assumption that W = 0 {\displaystyle W=0} (i.e., the flow is curl free or has detailed balance). Usually, this entails solving backward Kolmogorov equations. === Active inference and optimal decision (game) theory === Optimal decision problems (usually formulated as partially observable Markov decision processes) are treated within active inference by absorbing utility functions into prior beliefs. 
In this setting, states that have a high utility (low cost) are states an agent expects to occupy. By equipping the generative model with hidden states that model control, policies (control sequences) that minimise variational free energy lead to high utility states. Neurobiologically, neuromodulators such as dopamine are considered to report the precision of prediction errors by modulating the gain of principal cells encoding prediction error. This is closely related to – but formally distinct from – the role of dopamine in reporting prediction errors per se and related computational accounts. === Active inference and cognitive neuroscience === Active inference has been used to address a range of issues in cognitive neuroscience, brain function and neuropsychiatry, including action observation, mirror neurons, saccades and visual search, eye movements, sleep, illusions, attention, action selection, consciousness, hysteria and psychosis. Explanations of action in active inference often depend on the idea that the brain has 'stubborn predictions' that it cannot update, leading to actions that cause these predictions to come true. == See also == Action-specific perception – Psychological theory that people perceive their environment and events within itPages displaying wikidata descriptions as a fallback Affordance – Possibility of an action on an object or environment Autopoiesis – System capable of producing itself Bayesian approaches to brain function – Explaining the brain's abilities through statistical principles Constructal law - Law of design evolution in nature, animate and inanimate Decision theory – Branch of applied probability theory Embodied cognition – Interdisciplinary theory Entropic force – Physical force that originates from thermodynamics instead of fundamental interactions Principle of minimum energy – thermodynamic formulation based on the second lawPages displaying wikidata descriptions as a fallback Info-metrics – Interdisciplinary approach to scientific modelling and information processing Optimal control – Mathematical way of attaining a desired output from a dynamic system Adaptive system – System that can adapt to the environment Predictive coding – Theory of brain function Self-organization – Process of creating order by local interactions Surprisal – Basic quantity derived from the probability of a particular event occurring from a random variablePages displaying short descriptions of redirect targets Synergetics (Haken) – school of thought on thermodynamics and systems phenomena developed by Hermann HakenPages displaying wikidata descriptions as a fallback Variational Bayesian methods – Mathematical methods used in Bayesian inference and machine learning == References == == External links == Behavioral and Brain Sciences (by Andy Clark)
Wikipedia/Active_inference
The fast wavelet transform is a mathematical algorithm designed to turn a waveform or signal in the time domain into a sequence of coefficients based on an orthogonal basis of small finite waves, or wavelets. The transform can be easily extended to multidimensional signals, such as images, where the time domain is replaced with the space domain. This algorithm was introduced in 1989 by Stéphane Mallat. It has as theoretical foundation the device of a finitely generated, orthogonal multiresolution analysis (MRA). In the terms given there, one selects a sampling scale J with sampling rate of 2J per unit interval, and projects the given signal f onto the space V J {\displaystyle V_{J}} ; in theory by computing the scalar products s n ( J ) := 2 J ⟨ f ( t ) , φ ( 2 J t − n ) ⟩ , {\displaystyle s_{n}^{(J)}:=2^{J}\langle f(t),\varphi (2^{J}t-n)\rangle ,} where φ {\displaystyle \varphi } is the scaling function of the chosen wavelet transform; in practice by any suitable sampling procedure under the condition that the signal is highly oversampled, so P J [ f ] ( x ) := ∑ n ∈ Z s n ( J ) φ ( 2 J x − n ) {\displaystyle P_{J}[f](x):=\sum _{n\in \mathbb {Z} }s_{n}^{(J)}\,\varphi (2^{J}x-n)} is the orthogonal projection or at least some good approximation of the original signal in V J {\displaystyle V_{J}} . The MRA is characterised by its scaling sequence a = ( a − N , … , a 0 , … , a N ) {\displaystyle a=(a_{-N},\dots ,a_{0},\dots ,a_{N})} or, as Z-transform, a ( z ) = ∑ n = − N N a n z − n {\displaystyle a(z)=\sum _{n=-N}^{N}a_{n}z^{-n}} and its wavelet sequence b = ( b − N , … , b 0 , … , b N ) {\displaystyle b=(b_{-N},\dots ,b_{0},\dots ,b_{N})} or b ( z ) = ∑ n = − N N b n z − n {\displaystyle b(z)=\sum _{n=-N}^{N}b_{n}z^{-n}} (some coefficients might be zero). Those allow to compute the wavelet coefficients d n ( k ) {\displaystyle d_{n}^{(k)}} , at least some range k=M,...,J-1, without having to approximate the integrals in the corresponding scalar products. Instead, one can directly, with the help of convolution and decimation operators, compute those coefficients from the first approximation s ( J ) {\displaystyle s^{(J)}} . == Forward DWT == For the discrete wavelet transform (DWT), one computes recursively, starting with the coefficient sequence s ( J ) {\displaystyle s^{(J)}} and counting down from k = J − 1 to some M < J, s n ( k ) := 1 2 ∑ m = − N N a m s 2 n + m ( k + 1 ) {\displaystyle s_{n}^{(k)}:={\frac {1}{2}}\sum _{m=-N}^{N}a_{m}s_{2n+m}^{(k+1)}} or s ( k ) ( z ) := ( ↓ 2 ) ( a ∗ ( z ) ⋅ s ( k + 1 ) ( z ) ) {\displaystyle s^{(k)}(z):=(\downarrow 2)(a^{*}(z)\cdot s^{(k+1)}(z))} and d n ( k ) := 1 2 ∑ m = − N N b m s 2 n + m ( k + 1 ) {\displaystyle d_{n}^{(k)}:={\frac {1}{2}}\sum _{m=-N}^{N}b_{m}s_{2n+m}^{(k+1)}} or d ( k ) ( z ) := ( ↓ 2 ) ( b ∗ ( z ) ⋅ s ( k + 1 ) ( z ) ) {\displaystyle d^{(k)}(z):=(\downarrow 2)(b^{*}(z)\cdot s^{(k+1)}(z))} , for k = J − 1, J − 2, ..., M and all n ∈ Z {\displaystyle n\in \mathbb {Z} } . In the Z-transform notation: The downsampling operator ( ↓ 2 ) {\displaystyle (\downarrow 2)} reduces an infinite sequence, given by its Z-transform, which is simply a Laurent series, to the sequence of the coefficients with even indices, ( ↓ 2 ) ( c ( z ) ) = ∑ k ∈ Z c 2 k z − k {\displaystyle (\downarrow 2)(c(z))=\sum _{k\in \mathbb {Z} }c_{2k}z^{-k}} . 
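As a concrete illustration of the analysis step, the following sketch uses the Haar filters a = (1, 1) and b = (1, -1), which are consistent with the normalisation of the recursion above; the filter choice and the toy signal are assumptions for the example, and the synthesis step anticipates the inverse transform described below.

```python
import numpy as np

# Haar filters in the normalisation used above: a = (a_0, a_1) = (1, 1), b = (1, -1).
a = np.array([1.0, 1.0])
b = np.array([1.0, -1.0])

def analysis_step(s):
    """One forward DWT step: s_n^(k) = 1/2 * sum_m a_m s_{2n+m}^(k+1), likewise for d."""
    s = np.asarray(s, dtype=float)
    approx = 0.5 * (a[0] * s[0::2] + a[1] * s[1::2])   # smoothed, half-rate signal
    detail = 0.5 * (b[0] * s[0::2] + b[1] * s[1::2])   # detail coefficients
    return approx, detail

def synthesis_step(approx, detail):
    """One inverse DWT step (see the Inverse DWT section): upsample and filter."""
    s = np.empty(2 * len(approx))
    s[0::2] = approx + detail     # a_0 * s_n + b_0 * d_n
    s[1::2] = approx - detail     # a_1 * s_n + b_1 * d_n
    return s

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])   # toy samples s^(J)
s1, d1 = analysis_step(signal)        # level J-1
s2, d2 = analysis_step(s1)            # level J-2
print(s1, d1)                         # approx [5, 11, 7, 5], detail [-1, -1, 1, 0]
print(np.allclose(synthesis_step(s1, d1), signal))               # True: perfect reconstruction
```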
The starred Laurent-polynomial a ∗ ( z ) {\displaystyle a^{*}(z)} denotes the adjoint filter, it has time-reversed adjoint coefficients, a ∗ ( z ) = ∑ n = − N N a − n ∗ z − n {\displaystyle a^{*}(z)=\sum _{n=-N}^{N}a_{-n}^{*}z^{-n}} . (The adjoint of a real number being the number itself, of a complex number its conjugate, of a real matrix the transposed matrix, of a complex matrix its hermitian adjoint). Multiplication is polynomial multiplication, which is equivalent to the convolution of the coefficient sequences. It follows that P k [ f ] ( x ) := ∑ n ∈ Z s n ( k ) φ ( 2 k x − n ) {\displaystyle P_{k}[f](x):=\sum _{n\in \mathbb {Z} }s_{n}^{(k)}\,\varphi (2^{k}x-n)} is the orthogonal projection of the original signal f or at least of the first approximation P J [ f ] ( x ) {\displaystyle P_{J}[f](x)} onto the subspace V k {\displaystyle V_{k}} , that is, with sampling rate of 2k per unit interval. The difference to the first approximation is given by P J [ f ] ( x ) = P k [ f ] ( x ) + D k [ f ] ( x ) + ⋯ + D J − 1 [ f ] ( x ) , {\displaystyle P_{J}[f](x)=P_{k}[f](x)+D_{k}[f](x)+\dots +D_{J-1}[f](x),} where the difference or detail signals are computed from the detail coefficients as D k [ f ] ( x ) := ∑ n ∈ Z d n ( k ) ψ ( 2 k x − n ) , {\displaystyle D_{k}[f](x):=\sum _{n\in \mathbb {Z} }d_{n}^{(k)}\,\psi (2^{k}x-n),} with ψ {\displaystyle \psi } denoting the mother wavelet of the wavelet transform. == Inverse DWT == Given the coefficient sequence s ( M ) {\displaystyle s^{(M)}} for some M < J and all the difference sequences d ( k ) {\displaystyle d^{(k)}} , k = M,...,J − 1, one computes recursively s n ( k + 1 ) := ∑ k = − N N a k s 2 n − k ( k ) + ∑ k = − N N b k d 2 n − k ( k ) {\displaystyle s_{n}^{(k+1)}:=\sum _{k=-N}^{N}a_{k}s_{2n-k}^{(k)}+\sum _{k=-N}^{N}b_{k}d_{2n-k}^{(k)}} or s ( k + 1 ) ( z ) = a ( z ) ⋅ ( ↑ 2 ) ( s ( k ) ( z ) ) + b ( z ) ⋅ ( ↑ 2 ) ( d ( k ) ( z ) ) {\displaystyle s^{(k+1)}(z)=a(z)\cdot (\uparrow 2)(s^{(k)}(z))+b(z)\cdot (\uparrow 2)(d^{(k)}(z))} for k = J − 1,J − 2,...,M and all n ∈ Z {\displaystyle n\in \mathbb {Z} } . In the Z-transform notation: The upsampling operator ( ↑ 2 ) {\displaystyle (\uparrow 2)} creates zero-filled holes inside a given sequence. That is, every second element of the resulting sequence is an element of the given sequence, every other second element is zero or ( ↑ 2 ) ( c ( z ) ) := ∑ n ∈ Z c n z − 2 n {\displaystyle (\uparrow 2)(c(z)):=\sum _{n\in \mathbb {Z} }c_{n}z^{-2n}} . This linear operator is, in the Hilbert space ℓ 2 ( Z , R ) {\displaystyle \ell ^{2}(\mathbb {Z} ,\mathbb {R} )} , the adjoint to the downsampling operator ( ↓ 2 ) {\displaystyle (\downarrow 2)} . == See also == Lifting scheme Fast Fourier transform == References == S.G. Mallat "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, no. 7. July 1989. I. Daubechies, Ten Lectures on Wavelets. SIAM, 1992. A.N. Akansu Multiplierless Suboptimal PR-QMF Design Proc. SPIE 1818, Visual Communications and Image Processing, p. 723, November, 1992 A.N. Akansu Multiplierless 2-band Perfect Reconstruction Quadrature Mirror Filter (PR-QMF) Banks US Patent 5,420,891, 1995 A.N. Akansu Multiplierless PR Quadrature Mirror Filters for Subband Image Coding IEEE Trans. Image Processing, p. 1359, September 1996 M.J. Mohlenkamp, M.C. Pereyra Wavelets, Their Friends, and What They Can Do for You (2008 EMS) p. 38 B.B. 
Hubbard The World According to Wavelets: The Story of a Mathematical Technique in the Making (1998 Peters) p. 184 S.G. Mallat A Wavelet Tour of Signal Processing (1999 Academic Press) p. 255 A. Teolis Computational Signal Processing with Wavelets (1998 Birkhäuser) p. 116 Y. Nievergelt Wavelets Made Easy (1999 Springer) p. 95 == Further reading == G. Beylkin, R. Coifman, V. Rokhlin, "Fast wavelet transforms and numerical algorithms" Comm. Pure Appl. Math., 44 (1991) pp. 141–183 doi:10.1002/cpa.3160440202 (This article has been cited over 2400 times.)
Wikipedia/Fast_wavelet_transform
Adam7 is an interlacing algorithm for raster images, best known as the interlacing scheme optionally used in PNG images. An Adam7 interlaced image is broken into seven subimages, which are defined by replicating an 8×8 pattern of pass numbers across the full image. The subimages are then stored in the image file in numerical order. Adam7 uses seven passes and operates in both dimensions, compared to only four passes in the vertical dimension used by GIF. This means that an approximation of the entire image can be perceived much more quickly in the early passes, particularly if interpolation algorithms such as bicubic interpolation are used. == History == Adam7 is named after Adam M. Costello, who suggested the method on February 2, 1995, and after the seven steps involved. It is a rearrangement of a five-pass scheme that had earlier been proposed by Lee Daniel Crocker. Alternative speculative proposals at the time included square spiral interlacing and using Peano curves, but these were rejected as being overcomplicated. == Passes == The pixels included in each pass follow directly from the 8×8 pattern: the cumulative fractions of pixels encoded after the seven passes are 1/64, 1/32, 1/16, 1/8, 1/4, 1/2 and 1, so each pass doubles the number of pixels available (the sketch following this article makes this explicit). When rendering, the image will generally be interpolated at earlier stages, rather than just these pixels being rendered. == Related algorithms == Adam7 is a multiscale model of the data, similar to a discrete wavelet transform with Haar wavelets, though it starts from an 8×8 block and downsamples the image, rather than decimating (low-pass filtering, then downsampling). It thus offers worse frequency behavior, showing artifacts (pixelation) at the early stages, in return for simpler implementation. === Iteration === Adam7 arises from iteration of a basic pattern over a 2×2 block, which may be interpreted as "folding" in the vertical and horizontal dimensions. Similarly, GIF interlacing 1324 can be seen as iteration of the 12 pattern, but only in the vertical direction (12 expands to 1.2., which is filled in as 1324). Using this 3-pass pattern means the first pass is (1/2)² = 1/4 (25%) of the image. Iterating this pattern once yields a 5-pass scheme, in which the first pass is (1/4)² = 1/16 (6.25%) of the image. Iterating again yields the 7-pass Adam7 scheme, where the first pass is (1/8)² = 1/64 (1.5625%) of the image. In principle this can be iterated further, yielding a 9-pass scheme, an 11-pass scheme, and so forth; alternatively an adaptive number of passes can be used, as many as the image size will allow (so that the first pass consists of a single pixel), as is usual in scale-free multiscale modeling. In the context in which PNG was developed (i.e., for the image sizes and connection speeds in question), a 7-pass scheme was seen as sufficient, and preferable to a simple 5-pass scheme. == References == == External links == Animated comparison of Adam7 and GIF interlacing
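The pass structure can be made explicit with a short sketch. The 8×8 pass pattern below is the one defined in the PNG specification; the helper names are otherwise arbitrary. The loop reproduces the cumulative pixel fractions quoted in the Iteration section, starting at 1/64 and doubling with each pass.

```python
import numpy as np

# The 8x8 Adam7 pass pattern (as given in the PNG specification); entry (y, x)
# is the pass in which pixel (x mod 8, y mod 8) of the full image is transmitted.
ADAM7 = np.array([
    [1, 6, 4, 6, 2, 6, 4, 6],
    [7, 7, 7, 7, 7, 7, 7, 7],
    [5, 6, 5, 6, 5, 6, 5, 6],
    [7, 7, 7, 7, 7, 7, 7, 7],
    [3, 6, 4, 6, 3, 6, 4, 6],
    [7, 7, 7, 7, 7, 7, 7, 7],
    [5, 6, 5, 6, 5, 6, 5, 6],
    [7, 7, 7, 7, 7, 7, 7, 7],
])

def pass_of(x, y):
    """Pass number (1-7) in which pixel (x, y) of the full image is sent."""
    return int(ADAM7[y % 8, x % 8])

print(pass_of(0, 0), pass_of(4, 0), pass_of(1, 1))    # 1 2 7

# Cumulative fraction of pixels available after each pass, per 8x8 tile.
for p in range(1, 8):
    done = np.count_nonzero(ADAM7 <= p)
    print(f"after pass {p}: {done}/64 = {100 * done / 64:.4f}% of pixels")
# after pass 1: 1/64 = 1.5625% ... after pass 7: 64/64 = 100.0000%
```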
Wikipedia/Adam7_algorithm
In the mathematics of signal processing, the harmonic wavelet transform, introduced by David Edward Newland in 1993, is a wavelet-based linear transformation of a given function into a time-frequency representation. It combines advantages of the short-time Fourier transform and the continuous wavelet transform. It can be expressed in terms of repeated Fourier transforms, and its discrete analogue can be computed efficiently using a fast Fourier transform algorithm. == Harmonic wavelets == The transform uses a family of "harmonic" wavelets indexed by two integers j (the "level" or "order") and k (the "translation"), given by w ( 2 j t − k ) {\displaystyle w(2^{j}t-k)\!} , where w ( t ) = e i 4 π t − e i 2 π t i 2 π t . {\displaystyle w(t)={\frac {e^{i4\pi t}-e^{i2\pi t}}{i2\pi t}}.} These functions are orthogonal, and their Fourier transforms are a square window function (constant in a certain octave band and zero elsewhere). In particular, they satisfy: ∫ − ∞ ∞ w ∗ ( 2 j t − k ) ⋅ w ( 2 j ′ t − k ′ ) d t = 1 2 j δ j , j ′ δ k , k ′ {\displaystyle \int _{-\infty }^{\infty }w^{*}(2^{j}t-k)\cdot w(2^{j'}t-k')\,dt={\frac {1}{2^{j}}}\delta _{j,j'}\delta _{k,k'}} ∫ − ∞ ∞ w ( 2 j t − k ) ⋅ w ( 2 j ′ t − k ′ ) d t = 0 {\displaystyle \int _{-\infty }^{\infty }w(2^{j}t-k)\cdot w(2^{j'}t-k')\,dt=0} where "*" denotes complex conjugation and δ {\displaystyle \delta } is Kronecker's delta. As the order j increases, these wavelets become more localized in Fourier space (frequency) and in higher frequency bands, and conversely become less localized in time (t). Hence, when they are used as a basis for expanding an arbitrary function, they represent behaviors of the function on different timescales (and at different time offsets for different k). However, it is possible to combine all of the negative orders (j < 0) together into a single family of "scaling" functions φ ( t − k ) {\displaystyle \varphi (t-k)} where φ ( t ) = e i 2 π t − 1 i 2 π t . {\displaystyle \varphi (t)={\frac {e^{i2\pi t}-1}{i2\pi t}}.} The function φ is orthogonal to itself for different k and is also orthogonal to the wavelet functions for non-negative j: ∫ − ∞ ∞ φ ∗ ( t − k ) ⋅ φ ( t − k ′ ) d t = δ k , k ′ {\displaystyle \int _{-\infty }^{\infty }\varphi ^{*}(t-k)\cdot \varphi (t-k')\,dt=\delta _{k,k'}} ∫ − ∞ ∞ w ∗ ( 2 j t − k ) ⋅ φ ( t − k ′ ) d t = 0 for j ≥ 0 {\displaystyle \int _{-\infty }^{\infty }w^{*}(2^{j}t-k)\cdot \varphi (t-k')\,dt=0{\text{ for }}j\geq 0} ∫ − ∞ ∞ φ ( t − k ) ⋅ φ ( t − k ′ ) d t = 0 {\displaystyle \int _{-\infty }^{\infty }\varphi (t-k)\cdot \varphi (t-k')\,dt=0} ∫ − ∞ ∞ w ( 2 j t − k ) ⋅ φ ( t − k ′ ) d t = 0 for j ≥ 0. {\displaystyle \int _{-\infty }^{\infty }w(2^{j}t-k)\cdot \varphi (t-k')\,dt=0{\text{ for }}j\geq 0.} In the harmonic wavelet transform, therefore, an arbitrary real- or complex-valued function f ( t ) {\displaystyle f(t)} (in L2) is expanded in the basis of the harmonic wavelets (for all integers j) and their complex conjugates: f ( t ) = ∑ j = − ∞ ∞ ∑ k = − ∞ ∞ [ a j , k w ( 2 j t − k ) + a ~ j , k w ∗ ( 2 j t − k ) ] , {\displaystyle f(t)=\sum _{j=-\infty }^{\infty }\sum _{k=-\infty }^{\infty }\left[a_{j,k}w(2^{j}t-k)+{\tilde {a}}_{j,k}w^{*}(2^{j}t-k)\right],} or alternatively in the basis of the wavelets for non-negative j supplemented by the scaling functions φ: f ( t ) = ∑ k = − ∞ ∞ [ a k φ ( t − k ) + a ~ k φ ∗ ( t − k ) ] + ∑ j = 0 ∞ ∑ k = − ∞ ∞ [ a j , k w ( 2 j t − k ) + a ~ j , k w ∗ ( 2 j t − k ) ] . 
{\displaystyle f(t)=\sum _{k=-\infty }^{\infty }\left[a_{k}\varphi (t-k)+{\tilde {a}}_{k}\varphi ^{*}(t-k)\right]+\sum _{j=0}^{\infty }\sum _{k=-\infty }^{\infty }\left[a_{j,k}w(2^{j}t-k)+{\tilde {a}}_{j,k}w^{*}(2^{j}t-k)\right].} The expansion coefficients can then, in principle, be computed using the orthogonality relationships: a j , k = 2 j ∫ − ∞ ∞ f ( t ) ⋅ w ∗ ( 2 j t − k ) d t a ~ j , k = 2 j ∫ − ∞ ∞ f ( t ) ⋅ w ( 2 j t − k ) d t a k = ∫ − ∞ ∞ f ( t ) ⋅ φ ∗ ( t − k ) d t a ~ k = ∫ − ∞ ∞ f ( t ) ⋅ φ ( t − k ) d t . {\displaystyle {\begin{aligned}a_{j,k}&{}=2^{j}\int _{-\infty }^{\infty }f(t)\cdot w^{*}(2^{j}t-k)\,dt\\{\tilde {a}}_{j,k}&{}=2^{j}\int _{-\infty }^{\infty }f(t)\cdot w(2^{j}t-k)\,dt\\a_{k}&{}=\int _{-\infty }^{\infty }f(t)\cdot \varphi ^{*}(t-k)\,dt\\{\tilde {a}}_{k}&{}=\int _{-\infty }^{\infty }f(t)\cdot \varphi (t-k)\,dt.\end{aligned}}} For a real-valued function f(t), a ~ j , k = a j , k ∗ {\displaystyle {\tilde {a}}_{j,k}=a_{j,k}^{*}} and a ~ k = a k ∗ {\displaystyle {\tilde {a}}_{k}=a_{k}^{*}} so one can cut the number of independent expansion coefficients in half. This expansion has the property, analogous to Parseval's theorem, that: ∑ j = − ∞ ∞ ∑ k = − ∞ ∞ 2 − j ( | a j , k | 2 + | a ~ j , k | 2 ) = ∑ k = − ∞ ∞ ( | a k | 2 + | a ~ k | 2 ) + ∑ j = 0 ∞ ∑ k = − ∞ ∞ 2 − j ( | a j , k | 2 + | a ~ j , k | 2 ) = ∫ − ∞ ∞ | f ( x ) | 2 d x . {\displaystyle {\begin{aligned}&\sum _{j=-\infty }^{\infty }\sum _{k=-\infty }^{\infty }2^{-j}\left(|a_{j,k}|^{2}+|{\tilde {a}}_{j,k}|^{2}\right)\\&{}=\sum _{k=-\infty }^{\infty }\left(|a_{k}|^{2}+|{\tilde {a}}_{k}|^{2}\right)+\sum _{j=0}^{\infty }\sum _{k=-\infty }^{\infty }2^{-j}\left(|a_{j,k}|^{2}+|{\tilde {a}}_{j,k}|^{2}\right)\\&{}=\int _{-\infty }^{\infty }|f(x)|^{2}\,dx.\end{aligned}}} Rather than computing the expansion coefficients directly from the orthogonality relationships, however, it is possible to do so using a sequence of Fourier transforms. This is much more efficient in the discrete analogue of this transform (discrete t), where it can exploit fast Fourier transform algorithms. == References == Newland, David E. (8 October 1993). "Harmonic wavelet analysis". Proceedings of the Royal Society of London. A. 443 (1917): 203–225. Bibcode:1993RSPSA.443..203N. doi:10.1098/rspa.1993.0140. JSTOR 52388. S2CID 122912891. Silverman, B. W.; Vassilicos, J. C., eds. (2000). Wavelets: The Key to Intermittent Information?. Oxford University Press. ISBN 0-19-850716-X. Boashash, Boualem, ed. (2003). Time Frequency Signal Analysis and Processing: A Comprehensive Reference. Elsevier. ISBN 0-08-044335-4.
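The orthogonality relations above can be checked numerically. The sketch below evaluates the inner products on a truncated, discretised time grid, so the results only agree with the stated values to a few decimal places; the grid limits and spacing are arbitrary choices made for the example.

```python
import numpy as np

def w(t):
    """Harmonic wavelet w(t) = (exp(i4*pi*t) - exp(i2*pi*t)) / (i2*pi*t), with w(0) = 1."""
    t = np.asarray(t, dtype=float)
    out = np.ones(t.shape, dtype=complex)
    nz = t != 0.0
    out[nz] = (np.exp(4j * np.pi * t[nz]) - np.exp(2j * np.pi * t[nz])) / (2j * np.pi * t[nz])
    return out

# Crude numerical check of the orthogonality relations on a truncated grid;
# because w(t) decays only like 1/t, the values match to a few decimals.
t, dt = np.linspace(-400, 400, 1_600_001, retstep=True)

def inner(j, k, jp, kp):
    return np.sum(np.conj(w(2.0**j * t - k)) * w(2.0**jp * t - kp)) * dt

print(abs(inner(0, 0, 0, 0)))        # ~1.0  (2^{-j} with j = 0)
print(abs(inner(1, 0, 1, 0)))        # ~0.5  (2^{-j} with j = 1)
print(abs(inner(0, 0, 0, 3)))        # ~0.0  (orthogonal translates)
print(abs(inner(0, 0, 1, 0)))        # ~0.0  (orthogonal levels)
```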
Wikipedia/Newland_transform
Adaptive bitrate streaming is a technique used in streaming multimedia over computer networks. While in the past most video or audio streaming technologies utilized streaming protocols such as RTP with RTSP, today's adaptive streaming technologies are based almost exclusively on HTTP, and are designed to work efficiently over large distributed HTTP networks. Adaptive bitrate streaming works by detecting a user's bandwidth and CPU capacity in real time, adjusting the quality of the media stream accordingly. It requires the use of an encoder which encodes a single source media (video or audio) at multiple bit rates. The player client switches between streaming the different encodings depending on available resources. This results in providing very little buffering, faster start times and a good experience for both high-end and low-end connections. More specifically, adaptive bitrate streaming is a method of video streaming over HTTP where the source content is encoded at multiple bit rates. Each of the different bit rate streams are segmented into small multi-second parts. The segment size can vary depending on the particular implementation, but they are typically between two and ten seconds. First, the client downloads a manifest file that describes the available stream segments and their respective bit rates. During stream start-up, the client usually requests the segments from the lowest bit rate stream. If the client finds that the network throughput is greater than the bit rate of the downloaded segment, then it will request a higher bit rate segment. Later, if the client finds that the network throughput has deteriorated, it will request a lower bit rate segment. An adaptive bitrate (ABR) algorithm in the client performs the key function of deciding which bit rate segments to download, based on the current state of the network. Several types of ABR algorithms are in commercial use: throughput-based algorithms use the throughput achieved in recent prior downloads for decision-making (e.g., throughput rule in dash.js), buffer-based algorithms use only the client's current buffer level (e.g., BOLA in dash.js), and hybrid algorithms combine both types of information (e.g., DYNAMIC in dash.js). == Current uses == Post-production houses, content delivery networks and studios use adaptive bit rate technology in order to provide consumers with higher quality video using less manpower and fewer resources. The creation of multiple video outputs, particularly for adaptive bit rate streaming, adds great value to consumers. If the technology is working properly, the end user or consumer's content should play back without interruption and potentially go unnoticed. Media companies have been actively using adaptive bit rate technology for many years now and it has essentially become standard practice for high-end streaming providers; permitting little buffering when streaming high-resolution feeds (begins with low-resolution and climbs). == Benefits of adaptive bitrate streaming == Traditional server-driven adaptive bitrate streaming provides consumers of streaming media with the best-possible experience, since the media server automatically adapts to any changes in each user's network and playback conditions. The media and entertainment industry also benefit from adaptive bitrate streaming. As the video space grows, content delivery networks and video providers can provide customers with a superior viewing experience. 
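The client-side decision rule described above, which chooses the bitrate of the next segment from recent network measurements and the buffer level, can be illustrated with a deliberately simplified sketch. The bitrate ladder, safety factor and buffer threshold below are arbitrary assumptions; production players such as dash.js use considerably more elaborate throughput estimation and hybrid buffer/throughput logic.

```python
def choose_bitrate(available_bitrates_kbps, recent_throughputs_kbps,
                   buffer_seconds, safety_factor=0.8, panic_buffer=5.0):
    """Pick a bitrate for the next segment.

    A toy throughput-based rule with a buffer safeguard: estimate throughput as
    the harmonic mean of recent segment downloads, keep a safety margin, and
    drop to the lowest rate whenever the playout buffer is nearly empty.
    """
    ladder = sorted(available_bitrates_kbps)
    if buffer_seconds < panic_buffer or not recent_throughputs_kbps:
        return ladder[0]                       # protect against stalls / cold start
    n = len(recent_throughputs_kbps)
    harmonic_mean = n / sum(1.0 / t for t in recent_throughputs_kbps)
    budget = safety_factor * harmonic_mean
    affordable = [b for b in ladder if b <= budget]
    return affordable[-1] if affordable else ladder[0]

# Example: a 4-rung bitrate ladder (kbit/s) and some measured download throughputs.
ladder = [350, 800, 1800, 4000]
print(choose_bitrate(ladder, [2500, 3100, 2800], buffer_seconds=18.0))  # 1800
print(choose_bitrate(ladder, [2500, 3100, 2800], buffer_seconds=2.0))   # 350
```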
Adaptive bitrate technology requires additional encoding, but simplifies the overall workflow and creates better results. HTTP-based adaptive bitrate streaming technologies yield additional benefits over traditional server-driven adaptive bitrate streaming. First, since the streaming technology is built on top of HTTP, contrary to RTP-based adaptive streaming, the packets have no difficulties traversing firewall and NAT devices. Second, since HTTP streaming is purely client-driven, all adaptation logic resides at the client. This reduces the requirement of persistent connections between server and client application. Furthermore, the server is not required to maintain session state information on each client, increasing scalability. Finally, existing HTTP delivery infrastructure, such as HTTP caches and servers, can be seamlessly adopted. A scalable CDN is used to deliver media streaming to an Internet audience. The CDN receives the stream from the source at its Origin server, then replicates it to many or all of its Edge cache servers. The end-user requests the stream and is redirected to the "closest" Edge server. This can be tested using libdash and the Distributed DASH (D-DASH) dataset, which has several mirrors across Europe, Asia and the US. The use of HTTP-based adaptive streaming allows the Edge server to run simple HTTP server software, which is cheap or free to license, reducing software licensing cost compared to costly media server licences (e.g. Adobe Flash Media Streaming Server). The CDN cost for HTTP streaming media is then similar to the cost of an HTTP web caching CDN. == History == Adaptive bit rate over HTTP was created by the DVD Forum at the WG1 Special Streaming group in October 2002. The group was co-chaired by Toshiba and Phoenix Technologies, and the expert group included collaborators from Microsoft, Apple Computer, DTS Inc., Warner Brothers, 20th Century Fox, Digital Deluxe, Disney, Macromedia and Akamai. The technology was originally called DVDoverIP and was an integral effort of the DVD ENAV book. The concept came from storing MPEG-1 and MPEG-2 DVD TS sectors in small 2 KB files, which would be served to the player using an HTTP server. The MPEG-1 segments provided the lower-bandwidth stream, while the MPEG-2 segments provided a higher bit rate stream. The original XML schema provided a simple playlist of bit rates, languages and server URLs. The first working prototype was presented to the DVD Forum by Phoenix Technologies at the Harman Kardon Lab in Villingen, Germany. == Implementations == Adaptive bit rate streaming was introduced by Move Networks in 2006 and is now being developed and utilized by Adobe Systems, Apple, Microsoft and Octoshape. In October 2010, Move Networks was awarded a patent for their adaptive bit rate streaming (US patent number 7818444). === Dynamic Adaptive Streaming over HTTP (DASH) === Dynamic Adaptive Streaming over HTTP (DASH), also known as MPEG-DASH, is the only adaptive bit-rate HTTP-based streaming solution that is an international standard. MPEG-DASH technology was developed under MPEG. Work on DASH started in 2010; it became a Draft International Standard in January 2011 and an International Standard in November 2011. The MPEG-DASH standard was published as ISO/IEC 23009-1:2012 in April 2012. MPEG-DASH is a technology related to Adobe Systems HTTP Dynamic Streaming, Apple Inc. HTTP Live Streaming (HLS) and Microsoft Smooth Streaming. 
DASH is based on Adaptive HTTP streaming (AHS) in 3GPP Release 9 and on HTTP Adaptive Streaming (HAS) in Open IPTV Forum Release 2. As part of their collaboration with MPEG, 3GPP Release 10 has adopted DASH (with specific codecs and operating modes) for use over wireless networks. The goal of standardizing an adaptive streaming solution is to assure the market that the solution can work universally, unlike other solutions that are more specific to certain vendors, such as Apple’s HLS, Microsoft’s Smooth Streaming, or Adobe’s HDS. Available implementations are the HTML5-based bitdash MPEG-DASH player as well as the open source C++-based DASH client access library libdash of bitmovin GmbH, the DASH tools of the Institute of Information Technology (ITEC) at Alpen-Adria University Klagenfurt, the multimedia framework of the GPAC group at Telecom ParisTech, and the dash.js player of the DASH-IF. === Apple HTTP Live Streaming (HLS) === HTTP Live Streaming (HLS) is an HTTP-based media streaming communications protocol implemented by Apple Inc. as part of QuickTime X and iOS. HLS supports both live and video-on-demand content. It works by breaking down media streams or files into short pieces (media segments) which are stored as MPEG-TS or fragmented MP4 files. This is typically done at multiple bitrates using a stream or file segmenter application, also known as a packager. One such segmenter implementation is provided by Apple. Additional packagers are available, including free and open source offerings such as Google's Shaka Packager, as well as various commercial tools such as Unified Streaming. The segmenter is also responsible for producing a set of playlist files in the M3U8 format which describe the media chunks. Each playlist is specific to a given bitrate, and contains the relative or absolute URLs to the chunks for that bitrate. The client is then responsible for requesting the appropriate playlist depending on available bandwidth. HTTP Live Streaming is a standard feature in iPhone OS 3.0 and newer versions. Apple has submitted its solution to the IETF for consideration as an Informational Request for Comments; it was officially accepted as RFC 8216. A number of proprietary and open source solutions exist for both the server implementation (segmenter) and the client player. HLS streams can be identified by the playlist URL format extension of m3u8 or the MIME type application/vnd.apple.mpegurl. These adaptive streams can be made available in many different bitrates, and the client device interacts with the server to obtain the best available bitrate which can reliably be delivered. Playback of HLS is supported on many platforms, including Safari and native apps on macOS / iOS, Microsoft Edge on Windows 10, ExoPlayer on Android, and the Roku platform. Many smart TVs also have native support for HLS. Playing HLS on other platforms such as Chrome / Firefox is typically achieved via a browser / JavaScript player implementation. Many open source and commercial players are available, including hls.js, video.js http-streaming, BitMovin, JWPlayer, THEOplayer, etc. === Adobe HTTP Dynamic Streaming (HDS) === "HTTP Dynamic streaming is the process of efficiently delivering streaming video to users by dynamically switching among different streams of varying quality and size during playback. This provides users with the best possible viewing experience their bandwidth and local computer hardware (CPU) can support. 
Another major goal of dynamic streaming is to make this process smooth and seamless to users, so that if up-scaling or down-scaling the quality of the stream is necessary, it is a smooth and nearly unnoticeable switch without disrupting the continuous playback." The latest versions of Flash Player and Flash Media Server support adaptive bit-rate streaming over the traditional RTMP protocol, as well as HTTP, similar to the HTTP-based solutions from Apple and Microsoft, HTTP dynamic streaming being supported in Flash Player 10.1 and later. HTTP-based streaming has the advantage of not requiring any firewall ports being opened outside of the normal ports used by web browsers. HTTP-based streaming also allows video fragments to be cached by browsers, proxies, and CDNs, drastically reducing the load on the source server. === Microsoft Smooth Streaming (MSS) === Smooth Streaming is an IIS Media Services extension that enables adaptive streaming of media to clients over HTTP. The format specification is based on the ISO base media file format and standardized by Microsoft as the Protected Interoperable File Format. Microsoft is actively involved with 3GPP, MPEG and DECE organizations' efforts to standardize adaptive bit-rate HTTP streaming. Microsoft provides Smooth Streaming Client software development kits for Silverlight and Windows Phone 7, as well as a Smooth Streaming Porting Kit that can be used for other client operating systems, such as Apple iOS, Android, and Linux. IIS Media Services 4.0, released in November 2010, introduced a feature which enables Live Smooth Streaming H.264/AAC videos to be dynamically repackaged into the Apple HTTP Adaptive Streaming format and delivered to iOS devices without the need for re-encoding. Microsoft has successfully demonstrated delivery of both live and on-demand 1080p HD video with Smooth Streaming to Silverlight clients. In 2010, Microsoft also partnered with NVIDIA to demonstrate live streaming of 1080p stereoscopic 3D video to PCs equipped with NVIDIA 3D Vision technology. === Common Media Application Format (CMAF) === CMAF is a presentation container format used for the delivery of both HLS and MPEG-DASH. Hence it is intended to simplify delivery of HTTP-based streaming media. It was proposed in 2016 by Apple and Microsoft and officially published in 2018. === QuavStreams Adaptive Streaming over HTTP === QuavStreams Adaptive Streaming is a multimedia streaming technology developed by Quavlive. The streaming server is an HTTP server that has multiple versions of each video, encoded at different bitrates and resolutions. The server delivers the encoded video/audio frames switching from one level to another, according to the current available bandwidth. The control is entirely server-based, so the client does not need special additional features. The streaming control employs feedback control theory. Currently, QuavStreams supports H.264/MP3 codecs muxed into the FLV container and VP8/Vorbis codecs muxed into the WEBM container. === Uplynk === Uplynk delivers HD adaptive bitrate streaming to multiple platforms, including iOS, Android, Windows Mac, Linux, and Roku, across various browser combinations, by encoding video in the cloud using a single non-proprietary adaptive streaming format. Rather than streaming and storing multiple formats for different platforms and devices, Uplynk stores and streams only one. 
The first studio to use this technology for delivery was Disney–ABC Television Group, using it for video encoding for web, mobile and tablet streaming apps on the ABC Player, ABC Family and Watch Disney apps, as well as the live Watch Disney Channel, Watch Disney Junior, and Watch Disney XD. === Self-learning clients === In recent years, the benefits of self-learning algorithms in adaptive bitrate streaming have been investigated in academia. While most of the initial self-learning approaches are implemented at the server-side (e.g. performing admission control using reinforcement learning or artificial neural networks), more recent research is focusing on the development of self-learning HTTP Adaptive Streaming clients. Multiple approaches have been presented in literature using the SARSA or Q-learning algorithm. In all of these approaches, the client state is modeled using, among others, information about the current perceived network throughput and buffer filling level. Based on this information, the self-learning client autonomously decides which quality level to select for the next video segment. The learning process is steered using feedback information, representing the Quality of Experience (QoE) (e.g. based on the quality level, the number of switches and the number of video freezes). Furthermore, it was shown that multi-agent Q-learning can be applied to improve QoE fairness among multiple adaptive streaming clients. == Criticisms == HTTP-based adaptive bit rate technologies are significantly more operationally complex than traditional streaming technologies. Some of the documented considerations are things such as additional storage and encoding costs, and challenges with maintaining quality globally. There have also been some interesting dynamics found around the interactions between complex adaptive bit rate logic competing with complex TCP flow control logic. However, these criticisms have been outweighed in practice by the economics and scalability of HTTP delivery: whereas non-HTTP streaming solutions require massive deployment of specialized streaming server infrastructure, HTTP-based adaptive bit-rate streaming can leverage the same HTTP web servers used to deliver all other content over the Internet. With no single clearly defined or open standard for the digital rights management used in the above methods, there is no 100% compatible way of delivering restricted or time-sensitive content to any device or player. This also proves to be a problem with digital rights management being employed by any streaming protocol. The method of segmenting files into smaller files used by some implementations (as used by HTTP Live Streaming) could be deemed unnecessary due to the ability of HTTP clients to request byte ranges from a single video asset file that could have multiple video tracks at differing bit rates with the manifest file only indicating track number and bit rate. However, this approach allows for serving of chunks by any simple HTTP server and so therefore guarantees CDN compatibility. Implementations using byte ranges such as Microsoft Smooth Streaming require a dedicated HTTP server such as IIS to respond to the requests for video asset chunks. == See also == Multiple description coding Hierarchical modulation – alternative with reduced storage and authoring demands == References == == Further reading == The Next Big Thing in Video: Adaptive Bitrate Streaming Archived 19 June 2010 at the Wayback Machine
Wikipedia/Adaptive_bitrate_streaming
Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model"; in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference. == Introduction == Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model. Konishi and Kitagawa state "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". The conclusion of a statistical inference is a statistical proposition. Some common forms of statistical proposition are the following: a point estimate, i.e. a particular value that best approximates some parameter of interest; an interval estimate, e.g. a confidence interval (or set estimate), i.e. an interval constructed using a dataset drawn from a population so that, under repeated sampling of such datasets, such intervals would contain the true parameter value with the probability at the stated confidence level; a credible interval, i.e. a set of values containing, for example, 95% of posterior belief; rejection of a hypothesis; clustering or classification of data points into groups. == Models and assumptions == Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference. Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn. === Degree of models/assumptions === Statisticians distinguish between three levels of modeling assumptions: Fully parametric: The probability distributions describing the data-generation process are assumed to be fully described by a family of probability distributions involving only a finite number of unknown parameters. For example, one may assume that the distribution of population values is truly Normal, with unknown mean and variance, and that datasets are generated by 'simple' random sampling. The family of generalized linear models is a widely used and flexible class of parametric models. Non-parametric: The assumptions made about the process generating the data are much less than in parametric statistics and may be minimal. 
For example, every continuous probability distribution has a median, which may be estimated using the sample median or the Hodges–Lehmann–Sen estimator, which has good properties when the data arise from simple random sampling. Semi-parametric: This term typically implies assumptions 'in between' fully and non-parametric approaches. For example, one may assume that a population distribution has a finite mean. Furthermore, one may assume that the mean response level in the population depends in a truly linear manner on some covariate (a parametric assumption) but not make any parametric assumption describing the variance around that mean (i.e. about the presence or possible form of any heteroscedasticity). More generally, semi-parametric models can often be separated into 'structural' and 'random variation' components. One component is treated parametrically and the other non-parametrically. The well-known Cox model is a set of semi-parametric assumptions. === Importance of valid models/assumptions === Whatever level of assumption is made, correctly calibrated inference, in general, requires these assumptions to be correct; i.e. that the data-generating mechanisms really have been correctly specified. Incorrect assumptions of 'simple' random sampling can invalidate statistical inference. More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions. Incorrect assumptions of Normality in the population also invalidates some forms of regression-based inference. The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal." In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population." Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed. ==== Approximate distributions ==== Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these. With finite samples, approximation results measure how close a limiting distribution approaches the statistic's sample distribution: For example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem. Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience. Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence, Bregman divergence, and the Hellinger distance. 
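One way to gauge how good the normal approximation is for a given sample size is direct simulation. The following sketch is a toy illustration; the exponential population, sample sizes and replication count are arbitrary choices made for the example.

```python
import math
import random

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def max_cdf_gap(sample_size, replications=50_000, seed=1):
    """Largest gap, over the simulated points, between the empirical CDF of the
    standardized sample mean (for an exponential(1) population) and the
    standard normal CDF."""
    rng = random.Random(seed)
    means = sorted(sum(rng.expovariate(1.0) for _ in range(sample_size)) / sample_size
                   for _ in range(replications))
    # Standardize: the exponential(1) population has mean 1 and standard deviation 1.
    zs = [(m - 1.0) * math.sqrt(sample_size) for m in means]
    return max(abs((i + 1) / replications - normal_cdf(z)) for i, z in enumerate(zs))

for n in (2, 10, 100):
    print(n, round(max_cdf_gap(n), 3))
# The gap shrinks as the sample size grows, roughly like 1/sqrt(n), as the
# Berry-Esseen theorem suggests for this skewed population.
```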
With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples. However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation. The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families). === Randomization-based models === For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments. Statistical inference from randomized studies is also more straightforward than many other situations. In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information. Objective randomization allows properly inductive procedures. Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures. (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences.) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena. However, a good observational study may be better than a bad randomized experiment. The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model. However, at any time, some hypotheses cannot be tested using objective statistical models, which accurately describe randomized experiments or random samples. In some cases, such randomized studies are uneconomical or unethical. ==== Model-based analysis of randomized experiments ==== It is standard practice to refer to a statistical model, e.g., a linear or logistic models, when analyzing data from randomized experiments. However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme. 
Seriously misleading results can be obtained analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units. ==== Model-free randomization inference ==== Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. Model-free techniques instead combine, evolve, ensemble and train algorithms that adapt dynamically to the process under study and learn the intrinsic characteristics of the observations. For example, model-free simple linear regression is based either on: a random design, where the pairs of observations $(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)$ are independent and identically distributed (iid), or a deterministic design, where the variables $X_1, X_2, \ldots, X_n$ are deterministic, but the corresponding response variables $Y_1, Y_2, \ldots, Y_n$ are random and independent with a common conditional distribution, i.e., $P(Y_j \leq y \mid X_j = x) = D_x(y)$, which is independent of the index $j$. In either case, model-free randomization inference for features of the common conditional distribution $D_x(\cdot)$ relies on some regularity conditions, e.g. functional smoothness. For instance, the population feature conditional mean, $\mu(x) = E(Y \mid X = x)$, can be consistently estimated via local averaging or local polynomial fitting, under the assumption that $\mu(x)$ is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case the conditional mean $\mu(x)$. == Paradigms for inference == Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms. Bandyopadhyay and Forster describe four paradigms: the classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean-Information Criterion-based paradigm. === Frequentist inference === This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging. ==== Examples of frequentist inference ==== p-value Confidence interval Null hypothesis significance testing ==== Frequentist inference, objectivity, and decision theory ==== One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman develops these procedures in terms of pre-experiment probabilities.
That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach. The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions. In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property. However, loss-functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared error loss functions, in that they minimize expected loss. While statisticians using frequentist inference must choose for themselves the parameters of interest, and the estimators/test statistic to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'. === Bayesian inference === The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate into one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions. There are several different justifications for using the Bayesian approach. ==== Examples of Bayesian inference ==== Credible interval for interval estimation Bayes factors for model comparison ==== Bayesian inference, subjectivity and decision theory ==== Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes Factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.) Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. 
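The "intuitively reasonable" posterior summaries described above can be illustrated with a minimal conjugate sketch (not drawn from the article; the uniform prior and the 7-successes-out-of-10 data are assumptions chosen purely for illustration):

```python
from scipy import stats

# Prior belief about a success probability theta, expressed as Beta(1, 1),
# i.e. a uniform prior; the observed data are 7 successes and 3 failures.
alpha_prior, beta_prior = 1.0, 1.0
successes, failures = 7, 3

# By conjugacy, the posterior is again a Beta distribution.
posterior = stats.beta(alpha_prior + successes, beta_prior + failures)

print("posterior mean   :", posterior.mean())
print("posterior median :", posterior.median())
print("95% equal-tailed credible interval:", posterior.interval(0.95))
```

With more data the credible interval narrows as the posterior concentrates; none of these summaries requires an explicitly stated utility function, in line with the discussion above, though they do depend on the stated prior.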
Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs. === Likelihood-based inference === Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics by using the likelihood function, denoted as $L(x \mid \theta)$, which quantifies the probability of observing the given data $x$, assuming a specific set of parameter values $\theta$. In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data. The process of likelihood-based inference usually involves the following steps: Formulating the statistical model: A statistical model is defined based on the problem at hand, specifying the distributional assumptions and the relationship between the observed data and the unknown parameters. The model can be simple, such as a normal distribution with known variance, or complex, such as a hierarchical model with multiple levels of random effects. Constructing the likelihood function: Given the statistical model, the likelihood function is constructed by evaluating the joint probability density or mass function of the observed data as a function of the unknown parameters. This function represents the probability of observing the data for different values of the parameters. Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function. This can be achieved analytically in simple cases or, more generally, with numerical optimization algorithms. The estimated parameter values, conventionally denoted $\hat{\theta}$, are the maximum likelihood estimates (MLEs). Assessing uncertainty: Once the MLEs are obtained, it is crucial to quantify the uncertainty associated with the parameter estimates. This can be done by calculating standard errors, confidence intervals, or conducting hypothesis tests based on asymptotic theory or simulation techniques such as bootstrapping. Model checking: After obtaining the parameter estimates and assessing their uncertainty, it is important to assess the adequacy of the statistical model. This involves checking the assumptions made in the model and evaluating the fit of the model to the data using goodness-of-fit tests, residual analysis, or graphical diagnostics. Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model. === AIC-based inference === The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection. AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.)
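As a concrete illustration of AIC-based model selection (a minimal sketch, not taken from the article; the simulated data, the Gaussian error assumption, and the two candidate polynomial models are illustrative choices), one can fit competing models by maximum likelihood and compare AIC = 2k - 2 ln(L), preferring the model with the smaller value:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)   # true relationship is linear

def gaussian_aic(y, fitted, n_params):
    # Maximized Gaussian log-likelihood with the error variance set to its
    # MLE (the mean squared residual), then AIC = 2k - 2 ln(L).
    n = y.size
    sigma2 = np.mean((y - fitted) ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * n_params - 2 * log_lik

for degree in (1, 2):
    coeffs = np.polyfit(x, y, degree)
    fitted = np.polyval(coeffs, x)
    # degree + 1 regression coefficients plus one error-variance parameter
    print(f"degree {degree}: AIC = {gaussian_aic(y, fitted, degree + 2):.2f}")
```

Under this setup the linear fit usually attains the lower AIC: the quadratic term adds a parameter without meaningfully improving the fit, which is exactly the fit-versus-simplicity trade-off AIC is designed to penalize.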
=== Other paradigms for inference === ==== Minimum description length ==== The minimum description length (MDL) principle has been developed from ideas in information theory and the theory of Kolmogorov complexity. The (MDL) principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches. However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically. In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling. The MDL principle has been applied in communication-coding theory in information theory, in linear regression, and in data mining. The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory. ==== Fiducial inference ==== Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious. However this argument is the same as that which shows that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities. ==== Structural inference ==== Developing ideas of Fisher and of Pitman from 1938 to 1939, George A. Barnard developed "structural inference" or "pivotal inference", an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. Donald A. S. Fraser developed a general theory for structural inference based on group theory and applied this to linear models. The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist. == Inference topics == The topics below are usually included in the area of statistical inference. Statistical assumptions Statistical decision theory Estimation theory Statistical hypothesis testing Revising opinions in statistics Design of experiments, the analysis of variance, and regression Survey sampling Summarizing statistical data == Predictive inference == Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations. Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability, but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti. 
The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics). De Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper, and has since been propounded by such statisticians as Seymour Geisser. == See also == Algorithmic inference Induction (philosophy) Informal inferential reasoning Information field theory Population proportion Philosophy of statistics Prediction interval Predictive analytics Predictive modelling Stylometry == Notes == == References == === Citations === === Sources === == Further reading == Casella, G., Berger, R. L. (2002). Statistical Inference. Duxbury Press. ISBN 0-534-24312-6 Freedman, D.A. (1991). "Statistical models and shoe leather". Sociological Methodology. 21: 291–313. doi:10.2307/270939. JSTOR 270939. Held L., Bové D.S. (2014). Applied Statistical Inference—Likelihood and Bayes (Springer). Lenhard, Johannes (2006). "Models and Statistical Inference: the controversy between Fisher and Neyman–Pearson" (PDF). British Journal for the Philosophy of Science. 57: 69–91. doi:10.1093/bjps/axi152. S2CID 14136146. Lindley, D (1958). "Fiducial distribution and Bayes' theorem". Journal of the Royal Statistical Society, Series B. 20: 102–7. doi:10.1111/j.2517-6161.1958.tb00278.x. Rahlf, Thomas (2014). "Statistical Inference", in Claude Diebolt, and Michael Haupert (eds.), "Handbook of Cliometrics ( Springer Reference Series)", Berlin/Heidelberg: Springer. Reid, N.; Cox, D. R. (2014). "On Some Principles of Statistical Inference". International Statistical Review. 83 (2): 293–308. doi:10.1111/insr.12067. hdl:10.1111/insr.12067. S2CID 17410547. Sagitov, Serik (2022). "Statistical Inference". Wikibooks. http://upload.wikimedia.org/wikipedia/commons/f/f9/Statistical_Inference.pdf Young, G.A., Smith, R.L. (2005). Essentials of Statistical Inference, CUP. ISBN 0-521-83971-8 == External links == Statistical Inference – lecture on the MIT OpenCourseWare platform Statistical Inference – lecture by the National Programme on Technology Enhanced Learning An online, Bayesian (MCMC) demo/calculator is available at causaScientia
Wikipedia/Statistical_Inference
The Hartley function is a measure of uncertainty, introduced by Ralph Hartley in 1928. If a sample from a finite set A uniformly at random is picked, the information revealed after the outcome is known is given by the Hartley function $H_0(A) := \log_b |A|$, where |A| denotes the cardinality of A. If the base of the logarithm is 2, then the unit of uncertainty is the shannon (more commonly known as bit). If it is the natural logarithm, then the unit is the nat. Hartley used a base-ten logarithm, and with this base, the unit of information is called the hartley (aka ban or dit) in his honor. It is also known as the Hartley entropy or max-entropy. == Hartley function, Shannon entropy, and Rényi entropy == The Hartley function coincides with the Shannon entropy (as well as with the Rényi entropies of all orders) in the case of a uniform probability distribution. It is a special case of the Rényi entropy since: $H_0(X) = \frac{1}{1-0} \log \sum_{i=1}^{|\mathcal{X}|} p_i^0 = \log |\mathcal{X}|$. But it can also be viewed as a primitive construction, since, as emphasized by Kolmogorov and Rényi, the Hartley function can be defined without introducing any notions of probability (see Uncertainty and information by George J. Klir, p. 423). == Characterization of the Hartley function == The Hartley function only depends on the number of elements in a set, and hence can be viewed as a function on natural numbers. Rényi showed that the Hartley function in base 2 is the only function mapping natural numbers to real numbers that satisfies $H(mn) = H(m) + H(n)$ (additivity), $H(m) \leq H(m+1)$ (monotonicity), and $H(2) = 1$ (normalization). Condition 1 says that the uncertainty of the Cartesian product of two finite sets A and B is the sum of uncertainties of A and B. Condition 2 says that a larger set has larger uncertainty. == Derivation of the Hartley function == We want to show that the Hartley function, $\log_2 n$, is the only function mapping natural numbers to real numbers that satisfies $H(mn) = H(m) + H(n)$ (additivity), $H(m) \leq H(m+1)$ (monotonicity), and $H(2) = 1$ (normalization). Let f be a function on positive integers that satisfies the above three properties. From the additive property, we can show that for any integer n and k, $f(n^k) = k f(n)$. Let a, b, and t be any positive integers. There is a unique integer s determined by $a^s \leq b^t \leq a^{s+1}. \qquad (1)$ Therefore, $s \log_2 a \leq t \log_2 b \leq (s+1) \log_2 a$ and $\frac{s}{t} \leq \frac{\log_2 b}{\log_2 a} \leq \frac{s+1}{t}$. On the other hand, by monotonicity, $f(a^s) \leq f(b^t) \leq f(a^{s+1})$. Using equation (1), one gets $s f(a) \leq t f(b) \leq (s+1) f(a)$, and $\frac{s}{t} \leq \frac{f(b)}{f(a)} \leq \frac{s+1}{t}$. Hence, $\left| \frac{f(b)}{f(a)} - \frac{\log_2 b}{\log_2 a} \right| \leq \frac{1}{t}$. Since t can be arbitrarily large, the difference on the left hand side of the above inequality must be zero, $\frac{f(b)}{f(a)} = \frac{\log_2 b}{\log_2 a}$. So, $f(a) = \mu \log_2 a$ for some constant μ, which must be equal to 1 by the normalization property. == See also == Rényi entropy Min-entropy == References == This article incorporates material from Hartley function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. This article incorporates material from Derivation of Hartley function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
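As an appended illustration (not part of the article itself), the Hartley function and the characterizing properties above can be checked numerically; the set sizes used below are arbitrary example values.

```python
import math

def hartley(cardinality, base=2):
    """Hartley function H_0(A) = log_base |A| for a set of the given size."""
    return math.log(cardinality, base)

print(hartley(8, base=2))        # 3 shannons (bits)
print(hartley(8, base=math.e))   # about 2.079 nats
print(hartley(8, base=10))       # about 0.903 hartleys (bans)

# Additivity: the uncertainty of a Cartesian product A x B is the sum of the
# uncertainties of A and B; normalization: a two-element set carries one bit.
m, n = 6, 20
assert math.isclose(hartley(m * n), hartley(m) + hartley(n))
assert math.isclose(hartley(2), 1.0)
```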
Wikipedia/Max-entropy
Zooniverse is a citizen science web portal owned and operated by the Citizen Science Alliance. It is home to some of the Internet's largest, most popular and most successful citizen science projects. The organization grew from the original Galaxy Zoo project and now hosts dozens of projects which allow volunteers to participate in crowdsourced scientific research. It has headquarters at Oxford University and the Adler Planetarium. Unlike many early internet-based citizen science projects (such as SETI@home) which used spare computer processing power to analyse data, known as volunteer computing, Zooniverse projects require the active participation of human volunteers to complete research tasks. Projects have been drawn from disciplines including astronomy, ecology, cell biology, humanities, and climate science. As of 14 February 2014, the Zooniverse community consisted of more than 1 million registered volunteers. By March 2019, that number had reportedly risen to 1.6 million. The volunteers are often collectively referred to as "Zooites". The data collected from the various projects has led to the publication of more than 100 scientific papers. A daily news website called 'The Daily Zooniverse' provides information on the different projects under the Zooniverse umbrella, and has a presence on social media. The founder and former principal investigator (P.I.) of the project, Chris Lintott, published a book called The Crowd & the Cosmos: Adventures in the Zooniverse in 2019. In September 2023 the role of P.I. was taken over by Laura Trouille, VP of Science Engagement at the Adler Planetarium, who was co-P.I. for Zooniverse from 2015-2023. == Citizen Science Alliance == The Zooniverse is hosted by the Citizen Science Alliance, which is governed by a board of directors from seven institutions in the United Kingdom and the United States. The partners are the Adler Planetarium, Johns Hopkins University, University of Minnesota, National Maritime Museum, University of Nottingham, Oxford University and Vizzuality. == Projects == === Art projects === === Space projects === === Nature and climate projects === === Biology Projects === === Humanities projects === === Physics projects === == Retired projects == == Project Builder == Zooniverse supports Project Builder, a tool that allows anyone to create their own project by uploading a dataset of images, video files or sound files. In Project Builder a Project Owner creates a workflow for the projects, a tutorial, a field guide and the talk forum of the Project and can add collaborators, researchers and moderators to their project. The moderators for example will have partial administrator rights in the talk, but cannot change anything concerning the workflow. == Zooniverse Mobile App == Only certain kinds of projects can be enabled on Zooniverse mobile app (Android & iOS). 
== See also == Amateur exoplanet discoveries 9Spitch – Galaxy in the constellation Cetus AWI0005x3s – Red dwarf star in the constellation Carina LSPM J0207+3331 – Star in the constellation Taurus Hanny's Voorwerp – Astronomical object appearing as a bright blob, discovered by Hanny van Arkel Green Pea Galaxies – Possible type of luminous blue compact galaxy K2-138 – Star in the constellation Aquarius PH1b – Circumbinary Neptunian planet orbiting the Kepler-64 star system Stargazing Live – Live BBC television programme Tabby's Star – Star noted for unusual dimming events == References == Media related to Zooniverse at Wikimedia Commons
Wikipedia/Zooniverse_(citizen_science_project)
There are several methods currently used by astronomers to detect distant exoplanets from Earth. Theoretically, some of these methods can be used to detect Earth as an exoplanet from distant star systems. == History == In June 2021, astronomers identified 1,715 stars (with likely related exoplanetary systems) within 326 light-years (100 parsecs) that have a favorable positional vantage point—in relation to the Earth Transit Zone (ETZ)—of detecting Earth as an exoplanet transiting the Sun since the beginnings of human civilization (about 5,000 years ago); an additional 319 stars are expected to arrive at this special vantage point in the next 5,000 years. Seven known exoplanet hosts, including Ross 128, may be among these stars. Teegarden's Star and Trappist-1 may be expected to see the Earth in 29 and 1,642 years, respectively. Radio waves, emitted by humans, have reached over 75 of the closest stars that were studied. In June 2021, astronomers reported identifying 29 planets in habitable zones that may be capable of observing the Earth. Earlier, in October 2020, astronomers had initially identified 508 such stars within 326 light-years (100 parsecs) that would have a favorable positional vantage point—in relation to the Earth Transit Zone (ETZ)—of detecting Earth as an exoplanet transiting the Sun. Transit method is the most popular tool used to detect exoplanets and the most common tool to spectroscopically analyze exoplanetary atmospheres. As a result, such studies, based on the transit method, will be useful in the search for life on exoplanets beyond the Solar System by the SETI program, Breakthrough Listen Initiative, as well as upcoming exoplanetary TESS mission searches. Detectability of Earth from distant star-based systems may allow for the detectability of humanity and/or analysis of Earth from distant vantage points such as via "atmospheric SETI" for the detection of atmospheric compositions explainable only by use of (artificial) technology like air pollution containing nitrogen dioxide from e.g. transportation technologies. The easiest or most likely artificial signals from Earth to be detectable are brief pulses transmitted by anti-ballistic missile (ABM) early-warning and space-surveillance radars during the Cold War and later astronomical and military radars. Unlike the earliest and conventional radio- and television-broadcasting which has been claimed to be undetectable at short distances, such signals could be detected from very distant, possibly star-based, receiver stations – any single of which would detect brief episodes of powerful pulses repeating with intervals of one Earth day – and could be used to detect both Earth as well as the presence of a radar-utilizing civilization on it. Studies have suggested that radio broadcast leakage – with the program material likely not being detectable – may be a technosignature detectable at distances of up to a hundred light years with technology equivalent to the Square Kilometer Array if the location of Earth is known. Likewise, if Earth's location can be and is known, it may be possible to use atmospheric analysis to detect life or favorable conditions for it on Earth via biosignatures, including MERMOZ instruments that may be capable of remotely detecting living matter on Earth. == Experiments == In 1980s, astronomer Carl Sagan persuaded NASA to perform an experiment of detecting life and civilization on Earth using instruments of the Galileo spacecraft. 
Galileo was launched in October 1989; during its Earth flyby in December 1990, when it was 960 km (600 mi) from the planet's surface, it turned its instruments to observe Earth. Sagan's paper was titled "A search for life on Earth from the Galileo spacecraft"; he wrote that "high-resolution images of Australia and Antarctica obtained as Galileo flew overhead did not yield signs of civilization"; other measurements showed the presence of vegetation and detected radio transmissions. == See also == == References == == External links == Extrasolar Planets Encyclopaedia by the Paris Observatory
Wikipedia/Detecting_Earth_from_distant_star-based_systems
In science fiction and fantasy literatures, the term insectoid ("insect-like") denotes any fantastical fictional creature sharing physical or other traits with ordinary insects (or arachnids). Most frequently, insect-like or spider-like extraterrestrial life forms is meant; in such cases convergent evolution may presumably be responsible for the existence of such creatures. Occasionally, an earth-bound setting — such as in the film The Fly (1958), in which a scientist is accidentally transformed into a grotesque human–fly hybrid, or Kafka's famous novella The Metamorphosis (1915), which does not bother to explain how a man becomes an enormous insect — is the venue. == Etymology == The term insectoid denotes any creature or object that shares a similar body or traits with common earth insects and arachnids. The term is a combination of "insect" and "-oid" (a suffix denoting similarity). == History == Insect-like extraterrestrials have long been a part of the tradition of science fiction. In the 1902 film A Trip to the Moon, Georges Méliès portrayed the Selenites (moon inhabitants) as insectoid. The Woggle-Bug appeared in L. Frank Baum's Oz books beginning in 1904. Olaf Stapledon incorporates insectoids in his 1937 Star Maker novel. In the pulp fiction novels, insectoid creatures were frequently used as the antagonists threatening the damsel in distress. Notable later depictions of hostile insect aliens include the antagonistic "Arachnids", or "Bugs", in Robert A. Heinlein's novel Starship Troopers (1959) and the "buggers" in Orson Scott Card's Ender's Game series (from 1985). The hive mind, or group mind, is a theme in science fiction going back to the alien hive society depicted in H. G. Wells's The First Men in the Moon (1901). Hive minds often imply a lack, or loss, of individuality, identity, or personhood. The individuals forming the hive may specialize in different functions, in the manner of social insects. The hive queen has been a figure in novels including C. J. Cherryh's Serpent's Reach (1981) and the Alien film franchise (from 1979). Insectoid sexuality has been addressed in Philip Jose Farmer's The Lovers (1952) Octavia Butler's Xenogenesis novels (from 1987) and China Miéville's Perdido Street Station (2000). == Analysis == The motif of the insect became widely used in science fiction as an "abject human/insect hybrids that form the most common enemy" in related media. Bugs or bug-like shapes have been described as a common trope in them, and the term 'insectoid' is considered "almost a cliche" with regards to the "ubiquitous way of representing alien life". In expressing his ambivalence with regard to science fiction, insectoids were on his mind when Carl Sagan complained of the type of story which "simply ignores what we know of molecular biology and Darwinian evolution.... I have...problems with films in which spiders 30 feet tall are menacing the cities of earth: Since insects and arachnids breathe by diffusion, such marauders would asphyxiate before they could savage their first metropolis". 
== Examples == A wide range of different fiction has featured different insectoids ranging from characters and races: === Literature === Science fiction writer Bob Olsen (1884–1956) wrote a sequence of short stories, two of which involve humans experiencing the life of ants ("The Ant with the Human Soul", Amazing Stories Quarterly, Spring/Summer 1932 and "Perils Among the Drivers", Amazing Stories, March 1934) and one ("Six-Legged Gangsters", Amazing Stories, June 1935) told from the ants' point of view. L. Sprague de Camp's novel Rogue Queen (1951), describes the methods of procreation and social mores in a humanoid society patterned after bees. === Comics === ==== Marvel Comics ==== The Arthrosians The Brood Bug The Chr'Ylites The Horde Human Fly The Klklk The Kt'kn The Sakaaran Natives Miek The Sligs The Sm'ggani The Vrellnexians ==== DC Comics ==== The Bugs of New Genesis Forager Mantis Charaxas The Circadians The Freshishs Hellgrammite Insect Queen The Kwai The Progeny Red Bee II The Tchk-Tchkii The Tyreans ==== Image Comics ==== The Thraxans === Games === The Tyranids from Warhammer 40,000 The Grekka Targs and Skrashers from StarTopia The Thri-kreen from Dungeons & Dragons and especially the Dark Sun and Spelljammer settings, "praying mantis man" appearing as antagonists and a player character race. The Rachni from Mass Effect The Zerg from StarCraft The Bugs from Helldivers and the Terminids from Helldivers 2. === Films === The Bugs from Men in Black The Bugs from Starship Troopers The Wasp Woman from Monkeybone The Xenomorph from the Alien franchise === Television === Beetlemon and Stingmon from the Digimon franchise Buzz-Off and Webstor from Masters of the Universe The Empress of the Racknoss, the Malmooth, the Time Beetle, the Vespiform, the Viperox, the Wiirn, and the Zarbi from Doctor Who The Irkens from Invader ZIM Stingfly from A.T.O.M. Sweet-Bee from She-Ra: Princess of Power The Xindi-Insectoids from Star Trek: Enterprise == See also == Bug-eyed monster Insects in mythology Insects in religion List of fictional arthropods List of reptilian humanoids List of piscine and amphibian humanoids == References == == External links ==
Wikipedia/Insectoids_in_science_fiction
The cultural impact of extraterrestrial contact is the corpus of changes to terrestrial science, technology, religion, politics, and ecosystems resulting from contact with an extraterrestrial civilization. This concept is closely related to the search for extraterrestrial intelligence (SETI), which attempts to locate intelligent life as opposed to analyzing the implications of contact with that life. The potential changes from extraterrestrial contact could vary greatly in magnitude and type, based on the extraterrestrial civilization's level of technological advancement, degree of benevolence or malevolence, and level of mutual comprehension between itself and humanity. The medium through which humanity is contacted, be it electromagnetic radiation, direct physical interaction, extraterrestrial artifact, or otherwise, may also influence the results of contact. Incorporating these factors, various systems have been created to assess the implications of extraterrestrial contact. The implications of extraterrestrial contact, particularly with a technologically superior civilization, have often been likened to the meeting of two vastly different human cultures on Earth, a historical precedent being the Columbian Exchange. Such meetings have generally led to the destruction of the civilization receiving contact (as opposed to the "contactor", which initiates contact), and therefore destruction of human civilization is a possible outcome. Extraterrestrial contact is also analogous to the numerous encounters between non-human native and invasive species occupying the same ecological niche. However, the absence of verified public contact to date means tragic consequences are still largely speculative. == Background == === Search for extraterrestrial intelligence === To detect extraterrestrial civilizations with radio telescopes, one must identify an artificial, coherent signal against a background of various natural phenomena that also produce radio waves. Telescopes capable of this include the Allen Telescope Array in Hat Creek, California and the new Five hundred meter Aperture Spherical Telescope in China and formerly the now demolished Arecibo Observatory in Puerto Rico. Various programs to detect extraterrestrial intelligence have had government funding in the past. Project Cyclops was commissioned by NASA in the 1970s to investigate the most effective way to search for signals from intelligent extraterrestrial sources, but the report's recommendations were set aside in favor of the much more modest approach of Messaging to Extra-Terrestrial Intelligence (METI), the sending of messages that intelligent extraterrestrial beings might intercept. NASA then drastically reduced funding for SETI programs, which have since turned to private donations to continue their search. With the discovery in the late 20th and early 21st centuries of numerous extrasolar planets, some of which may be habitable, governments have once more become interested in funding new programs. In 2006 the European Space Agency launched COROT, the first spacecraft dedicated to the search for exoplanets, and in 2009 NASA launched the Kepler space observatory for the same purpose. By February 2013 Kepler had detected 105 of the 5,943 confirmed exoplanets, and one of them, Kepler-22b, is potentially habitable. After it was discovered, the SETI Institute resumed the search for an intelligent extraterrestrial civilization, focusing on Kepler's candidate planets, with funding from the United States Air Force. 
Newly discovered planets, particularly ones that are potentially habitable, have enabled SETI and METI programs to refocus projects for communication with extraterrestrial intelligence. In 2009 A Message From Earth (AMFE) was sent toward the Gliese 581 planetary system, which contains two potentially habitable planets, the confirmed Gliese 581d and the more habitable but unconfirmed Gliese 581g. In the SETILive project, which began in 2012, human volunteers analyze data from the Allen Telescope Array to search for possible alien signals that computers might miss because of terrestrial radio interference. The data for the study is obtained by observing Kepler target stars with the radio telescope. In addition to radio-based methods, some projects, such as SEVENDIP (Search for Extraterrestrial Visible Emissions from Nearby Developed Intelligent Populations) at the University of California, Berkeley, are using other regions of the electromagnetic spectrum to search for extraterrestrial signals. Various other projects are not searching for coherent signals, but want to rather use electromagnetic radiation to find other evidence of extraterrestrial intelligence, such as megascale astroengineering projects. Several signals, such as the Wow! signal, have been detected in the history of the search for extraterrestrial intelligence, but none have yet been confirmed as being of intelligent origin. === Impact assessment === The implications of extraterrestrial contact depend on the method of discovery, the nature of the extraterrestrial beings, and their location relative to the Earth. Considering these factors, the Rio scale has been devised in order to provide a more quantitative picture of the results of extraterrestrial contact. More specifically, the scale gauges whether communication was conducted through radio, the information content of any messages, and whether discovery arose from a deliberately beamed message (and if so, whether the detection was the result of a specialized SETI effort or through general astronomical observations) or by the detection of occurrences such as radiation leakage from astroengineering installations. The question of whether or not a purported extraterrestrial signal has been confirmed as authentic, and with what degree of confidence, will also influence the impact of the contact. The Rio scale was modified in 2011 to include a consideration of whether contact was achieved through an interstellar message or through a physical extraterrestrial artifact, with a suggestion that the definition of artifact be expanded to include "technosignatures", including all indications of intelligent extraterrestrial life other than the interstellar radio messages sought by traditional SETI programs. A study by astronomer Steven J. Dick at the United States Naval Observatory considered the cultural impact of extraterrestrial contact by analyzing events of similar significance in the history of science. The study argues that the impact would be most strongly influenced by the information content of the message received, if any. It distinguishes short-term and long-term impact. 
Seeing radio-based contact as a more plausible scenario than a visit from extraterrestrial spacecraft, the study rejects the commonly stated analogy of European colonization of the Americas as an accurate model for information-only contact, preferring events of profound scientific significance, such as the Copernican and Darwinian revolutions, as more predictive of how humanity might be impacted by extraterrestrial contact. The physical distance between the two civilizations has also been used to assess the cultural impact of extraterrestrial contact. Historical examples show that the greater the distance, the less the contacted civilization perceives a threat to itself and its culture. Therefore, contact occurring within the Solar System, and especially in the immediate vicinity of Earth, is likely to be the most disruptive and negative for humanity. On a smaller scale, people close to the epicenter of contact would experience a greater effect than would those living farther away, and a contact having multiple epicenters would cause a greater shock than one with a single epicenter. Space scientists Martin Dominik and John Zarnecki state that in the absence of any data on the nature of extraterrestrial intelligence, one must predict the cultural impact of extraterrestrial contact on the basis of generalizations encompassing all life and of analogies with history. The beliefs of the general public about the effect of extraterrestrial contact have also been studied. A poll of United States and Chinese university students in 2000 provides factor analysis of responses to questions about, inter alia, the participants' belief that extraterrestrial life exists in the Universe, that such life may be intelligent, and that humans will eventually make contact with it. The study shows significant weighted correlations between participants' belief that extraterrestrial contact may either conflict with or enrich their personal religious beliefs and how conservative such religious beliefs are. The more conservative the respondents, the more harmful they considered extraterrestrial contact to be. Other significant correlation patterns indicate that students took the view that the search for extraterrestrial intelligence may be futile or even harmful. Psychologists Douglas Vakoch and Yuh-shiow Lee conducted a survey to assess people's reactions to receiving a message from extraterrestrials, including their judgments about likelihood that extraterrestrials would be malevolent. "People who view the world as a hostile place are more likely to think extraterrestrials will be hostile," Vakoch told USA Today. === Post-detection protocols === Various protocols have been drawn up detailing a course of action for scientists and governments after extraterrestrial contact. Post-detection protocols must address three issues: what to do in the first weeks after receiving a message from an extraterrestrial source; whether or not to send a reply; and analyzing the long-term consequences of the message received. No post-detection protocol, however, is binding under national or international law, and Dominik and Zarnecki consider the protocols likely to be ignored if contact occurs. One of the first post-detection protocols, the "Declaration of Principles for Activities Following the Detection of Extraterrestrial Intelligence", was created by the SETI Permanent Committee of the International Academy of Astronautics (IAA). 
It was later approved by the Board of Trustees of the IAA and by the International Institute of Space Law, and still later by the International Astronomical Union (IAU), the Committee on Space Research, the International Union of Radio Science, and others. It was subsequently endorsed by most researchers involved in the search for extraterrestrial intelligence, including the SETI Institute. The Declaration of Principles contains the following broad provisions: Any person or organization detecting a signal should try to verify that it is likely to be of intelligent origin before announcing it. The discoverer of a signal should, for the purposes of independent verification, communicate with other signatories of the Declaration before making a public announcement, and should also inform their national authorities. Once a given astronomical observation has been determined to be a credible extraterrestrial signal, the astronomical community should be informed through the Central Bureau for Astronomical Telegrams of the IAU. The Secretary-General of the United Nations and various other global scientific unions should also be informed. Following confirmation of an observation's extraterrestrial origin, news of the discovery should be made public. The discoverer has the right to make the first public announcement. All data confirming the discovery should be published to the international scientific community and stored in an accessible form as permanently as possible. Should evidence for extraterrestrial intelligence take the form of electromagnetic signals, the Secretary-General of the International Telecommunication Union (ITU) should be contacted, and may request in the next ITU Weekly Circular to minimize terrestrial use of the electromagnetic frequency bands in which the signal was detected. Neither the discoverer nor anyone else should respond to an observed extraterrestrial intelligence; doing so requires international agreement under separate procedures. The SETI Permanent Committee of the IAA and Commission 51 of the IAU should continually review procedures regarding detection of extraterrestrial intelligence and management of data related to such discoveries. A committee comprising members from various international scientific unions, and other bodies designated by the committee, should regulate continued SETI research. A separate "Proposed Agreement on the Sending of Communications to Extraterrestrial Intelligence" was subsequently created. It proposes an international commission, membership of which would be open to all interested nations, to be constituted on detection of extraterrestrial intelligence. This commission would decide whether to send a message to the extraterrestrial intelligence, and if so, would determine the contents of the message on the basis of principles such as justice, respect for cultural diversity, honesty, and respect for property and territory. The draft proposes to forbid the sending of any message by an individual nation or organization without the permission of the commission, and suggests that, if the detected intelligence poses a danger to human civilization, the United Nations Security Council should authorize any message to extraterrestrial intelligence. However, this proposal, like all others, has not been incorporated into national or international law. 
Paul Davies, a member of the SETI Post-Detection Taskgroup, has stated that post-detection protocols, calling for international consultation before taking any major steps regarding the detection, are unlikely to be followed by astronomers, who would put the advancement of their careers over the word of a protocol that is not part of national or international law. == Contact scenarios and considerations == Scientific literature and science fiction have put forward various models of the ways in which extraterrestrial and human civilizations might interact. Their predictions range widely, from sophisticated civilizations that could advance human civilization in many areas to imperial powers that might draw upon the forces necessary to subjugate humanity. Some theories suggest that an extraterrestrial civilization could be advanced enough to dispense with biology, living instead inside of advanced computers. The implications of discovery depend heavily on the level of aggressiveness of the civilization interacting with humanity, its ethics, and how much human and extraterrestrial biologies have in common. These factors may govern the quantity and type of dialogue that can take place. The question of whether contact is via signals from distant places or via probes or extraterrestrials in Earth's vicinity (or both) will also govern the magnitude of the long-term implications of contact. In the case of communication using electromagnetic signals, the long silence between the reception of one message and another would mean that the content of any message would particularly affect the consequences of contact (see also #Scientific and technological and #Political below), as would the extent of mutual comprehension. Concerning probes, a study suggested the first interstellar probe to transit between two civilizations is not likely to be the civilization's earliest (e.g. the ones sent first) but a more advanced one as (at least) the departure speed is thought to (likely) improve for at least some duration per each civilization, which e.g. may have implications for the type of probes to expect and the impacts of any probes sent earlier. === Friendly civilizations === Many writers have speculated on the ways in which a friendly civilization might interact with humankind. Albert Harrison, a professor emeritus of psychology at the University of California, Davis, thought that a highly advanced civilization might teach humanity such things as a physical theory of everything, how to use zero-point energy, or how to travel faster than light. They suggest that collaboration with such a civilization could initially be in the arts and humanities before moving to the hard sciences, and even that artists may spearhead collaboration. Seth D. Baum, of the Global Catastrophic Risk Institute, and others consider that the greater longevity of cooperative civilizations in comparison to uncooperative and aggressive ones might render extraterrestrial civilizations in general more likely to aid humanity. In contrast to these views, Paolo Musso, a member of the SETI Permanent Study Group of the International Academy of Astronautics (IAA) and the Pontifical Academy of Sciences, took the view that extraterrestrial civilizations possess, like humans, a morality driven not entirely by altruism but for individual benefit as well, thus leaving open the possibility that at least some extraterrestrial civilizations are hostile. 
Futurist Allen Tough suggests that an extremely advanced extraterrestrial civilization, recalling its own past of war and plunder and knowing that it possesses superweapons that could destroy it, would be likely to try to help humans rather than to destroy them. He identifies three approaches that a friendly civilization might take to help humanity: Intervention only to avert catastrophe: this would involve occasional limited intervention to stop events that could destroy human civilization completely, such as nuclear war or asteroid impact. Advice and action with consent: under this approach, the extraterrestrials would be more closely involved in terrestrial affairs, advising world leaders and acting with their consent to protect against danger. Forcible corrective action: the extraterrestrials could require humanity to reduce major risks against its will, intending to help humans advance to the next stage of civilization. Tough considers advising and acting only with consent to be a more likely choice than the forceful option. While coercive aid may be possible, and advanced extraterrestrials would recognize their own practices as superior to those of humanity, it may be unlikely that this method would be used in cultural cooperation. Lemarchand suggests that instruction of a civilization in its "technological adolescence", such as humanity, would probably focus on morality and ethics rather than on science and technology, to ensure that the civilization did not destroy itself with technology it was not yet ready to use. According to Tough, it is unlikely that the avoidance of immediate dangers and prevention of future catastrophes would be conducted through radio, as these tasks would demand constant surveillance and quick action. However, cultural cooperation might take place through radio or a space probe in the Solar System, as radio waves could be used to communicate information about advanced technologies and cultures to humanity. Even if an ancient and advanced extraterrestrial civilization wished to help humanity, humans could suffer from a loss of identity and confidence due to the technological and cultural prowess of the extraterrestrial civilization. However, a friendly civilization may calibrate its contact with humanity in such a way as to minimize unintended consequences. Michael A. G. Michaud suggests that a friendly and advanced extraterrestrial civilization may even avoid all contact with an emerging intelligent species like humanity, to ensure that the less advanced civilization can develop naturally at its own pace; this is known as the zoo hypothesis. === Hostile civilizations === Science fiction often depicts humans successfully repelling alien invasions, but scientists more often take the view that an extraterrestrial civilization with sufficient power to reach the Earth would be able to destroy human civilization or humanity with minimal effort. Operations that are enormous on a human scale, such as destroying all major population centers on a planet, bombarding a planet with deadly neutron radiation, or even traveling to another planetary system in order to lay waste to it, may be important tools for a hostile civilization. Deardorff speculates that a small proportion of the intelligent life forms in the galaxy may be aggressive, but the actual aggressiveness or benevolence of the civilizations would cover a wide spectrum, with some civilizations "policing" others. Civilizations may not be homogeneous and contain different factions or subgroups. 
According to Harrison and Dick, hostile extraterrestrial life may indeed be rare in the Universe, just as belligerent and autocratic nations on Earth have been the ones that lasted for the shortest periods of time, and humanity is seeing a shift away from these characteristics in its own sociopolitical systems. In addition, the causes of war may be diminished greatly for a civilization with access to the galaxy, as there are prodigious quantities of natural resources in space accessible without resort to violence. SETI researcher Carl Sagan believed that a civilization with the technological prowess needed to reach the stars and come to Earth must have transcended war to be able to avoid self-destruction. Representatives of such a civilization would treat humanity with dignity and respect, and humanity, with its relatively backward technology, would have no choice but to reciprocate. Seth Shostak, an astronomer at the SETI Institute, disagrees, stating that the finite quantity of resources in the galaxy would cultivate aggression in any intelligent species, and that an explorer civilization that would want to contact humanity would be aggressive. Similarly, Ragbir Bhathal claimed that since the laws of evolution would be the same on another habitable planet as they are on Earth, an extremely advanced extraterrestrial civilization might be motivated to colonize Earth in a manner similar to the European colonization of much of the rest of the world. Disputing these analyses, David Brin states that while an extraterrestrial civilization may have an imperative to act for no benefit to itself, it would be naïve to suggest that such a trait would be prevalent throughout the galaxy. Brin points to the fact that in many moral systems on Earth, such as the Aztec or Carthaginian one, non-military killing has been accepted and even "exalted" by society, and further mentions that such acts are not confined to humans but can be found throughout the animal kingdom. Baum et al. speculate that highly advanced civilizations are unlikely to come to Earth to enslave humans, as the achievement of their level of advancement would have required them to solve the problems of labor and resources by other means, such as creating a sustainable environment and using mechanized labor. Moreover, humans may be an unsuitable food source for extraterrestrials because of marked differences in biochemistry. For example, the chirality of molecules used by terrestrial biota may differ from that used by extraterrestrial beings. Douglas Vakoch argues that transmitting intentional signals does not increase the risk of an alien invasion, contrary to concerns raised by British cosmologist Stephen Hawking, because "any civilization that has the ability to travel between the stars can already pick up our accidental radio and TV leakage" at a distance of several hundred light-years. The artificial signals from Earth most easily or most likely to be detected are brief pulses transmitted by anti-ballistic missile (ABM) early-warning and space-surveillance radars during the Cold War, and by later astronomical and military radars. Unlike early, conventional radio and television broadcasting, which has been claimed to be undetectable beyond short distances, such signals could also be detected from relatively distant receiver stations in certain regions. Politicians have also commented on the likely human reaction to contact with hostile species.
In a 1987 speech at the 42nd Session of the United Nations General Assembly, Ronald Reagan said, "I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world." === Equally advanced and more advanced civilizations === Robert Freitas speculated in 1978 that the technological advancement and energy usage of a civilization, measured either relative to another civilization or in absolute terms by its rating on the Kardashev scale, may play an important role in the outcome of extraterrestrial contact. Given the infeasibility of interstellar space flight for civilizations at a technological level similar to that of humanity, interactions between such civilizations would have to take place by radio. Because of the long transit times of radio waves between stars, such interactions would not lead to the establishment of diplomatic relations, nor any significant future interaction at all, between the two civilizations. According to Freitas, direct contact with civilizations significantly more advanced than humanity would have to take place within the Solar System, as only the more advanced society would have the resources and technology to cross interstellar space. Consequently, such contact could only be with civilizations rated as Type II or higher on the Kardashev scale, as Type I civilizations would be incapable of regular interstellar travel. Freitas expected that such interactions would be carefully planned by the more advanced civilization to avoid mass societal shock for humanity. However much planning an extraterrestrial civilization may do before contacting humanity, humans might still experience great shock and terror at its arrival, especially as they would lack any understanding of the contacting civilization. Ben Finney compares the situation to that of the tribespeople of New Guinea, an island that was settled fifty thousand years ago during the last glacial period but saw little contact with the outside world until the arrival of European colonial powers in the late 19th and early 20th centuries. The huge difference between the indigenous stone-age society and the Europeans' technical civilization caused unexpected behaviors among the native populations, known as cargo cults: to coax the gods into bringing them the technology that the Europeans possessed, the natives created wooden "radio stations" and "airstrips" as a form of sympathetic magic. Finney argues that humanity may misunderstand the true meaning of an extraterrestrial transmission to Earth, much as the people of New Guinea could not understand the source of modern goods and technologies. He concludes that the results of extraterrestrial contact will become known over the long term with rigorous study, rather than as fast, sharp events briefly making newspaper headlines. Billingham has suggested that a civilization which is far more technologically advanced than humanity is also likely to be culturally and ethically advanced, and would therefore be unlikely to conduct astroengineering projects that would harm human civilization. Such projects could include Dyson spheres, which completely enclose stars and capture all energy coming from them. Even if such a project were well within the capability of an advanced civilization and would provide an enormous amount of energy, Billingham suggests it would not be undertaken. For similar reasons, such civilizations would not readily give humanity the knowledge required to build such devices.
Nevertheless, the existence of such capabilities would at least show that civilizations have survived "technological adolescence". Despite the caution that such an advanced civilization would exercise in dealing with the less mature human civilization, Sagan imagined that an advanced civilization might send those on Earth an Encyclopædia Galactica describing the sciences and cultures of many extraterrestrial societies. Whether an advanced extraterrestrial civilization would send humanity a decipherable message is a matter of debate in itself. Sagan argued that a highly advanced extraterrestrial civilization would bear in mind that they were communicating with a relatively primitive one and therefore would try to ensure that the receiving civilization would be able to understand the message. Marvin Minsky believed that aliens might think similarly to humans because of shared constraints, permitting communication. Arguing against this view, astronomer Guillermo Lemarchand stated that an advanced civilization would probably encrypt a message with high information content, such as an Encyclopædia Galactica, in order to ensure that only other ethically advanced civilizations would be able to understand it. Douglas Vakoch assumes it may take some time to decode any message, telling ABC News that "I don't think we're going to understand immediately what they have to say." "There’s going to be a lot of guesswork in trying to interpret another civilization," he told Science Friday, adding that "in some ways, any message we get from an extraterrestrial will be like a cosmic Rorschach ink blot test." === Interstellar groups of civilizations === Given the age of the galaxy, Harrison surmises that "galactic clubs" might exist, groupings of civilizations from across the galaxy. Such clubs could begin as loose confederations or alliances, eventually developing into powerful unions of many civilizations. If humanity could enter into a dialogue with one extraterrestrial civilization, it might be able to join such a galactic club. As more extraterrestrial civilizations, or unions thereof, are found, these could also become assimilated into such a club. Sebastian von Hoerner has suggested that entry into a galactic club may be a way for humanity to handle the culture shock arising from contact with an advanced extraterrestrial civilization. Whether a broad spectrum of civilizations from many places in the galaxy would even be able to cooperate is disputed by Michaud, who states that civilizations with huge differences in the technologies and resources at their command "may not consider themselves even remotely equal". It is unlikely that humanity would meet the basic requirements for membership at its current low level of technological advancement. A galactic club may, William Hamilton speculates, set extremely high entrance requirements that are unlikely to be met by less advanced civilizations. When two Canadian astronomers argued that they potentially discovered 234 extraterrestrial civilizations through analysis of the Sloan Digital Sky Survey database, Douglas Vakoch doubted their explanation for their findings, noting that it would be unusual for all of these stars to pulse at exactly the same frequency unless they were part of a coordinated network: "If you take a step back," he said, "that would mean you have 234 independent stars that all decided to transmit the exact same way." 
Michaud suggests that an interstellar grouping of civilizations might take the form of an empire, which need not necessarily be a force for evil, but may provide for peace and security throughout its jurisdiction. Owing to the distances between the stars, such an empire would not necessarily maintain control solely by military force, but might rather tolerate local cultures and institutions to the extent that these would not pose a threat to the central imperial authority. Such tolerance may, as has happened historically on Earth, extend to allowing nominal self-rule of specific regions by existing institutions, while maintaining that area as a puppet or client state to accomplish the aims of the imperial power. However, particularly advanced powers may use methods, including faster-than-light travel, to make centralized administration more effective. In contrast to the belief that an extraterrestrial civilization would want to establish an empire, Ćirković proposes that an extraterrestrial civilization would maintain equilibrium rather than expand outward. In such an equilibrium, a civilization would only colonize a small number of stars, aiming to maximize efficiency rather than to expand into massive and unsustainable imperial structures. This contrasts with the classic Kardashev Type III civilization, which has access to the energy output of an entire galaxy and is not subject to any limits on its future expansion. According to this view, advanced civilizations may not resemble the classic examples in science fiction, but might more closely reflect the small, independent Greek city-states, with an emphasis on cultural rather than territorial growth. === Extraterrestrial artifacts === An extraterrestrial civilization may choose to communicate with humanity by means of artifacts or probes rather than by radio, for various reasons. While probes may take a long time to reach the Solar System, once there they would be able to hold a sustained dialogue that would be impossible using radio from hundreds or thousands of light-years away. Radio would be completely unsuitable for surveillance and continued monitoring of a civilization, and should an extraterrestrial civilization wish to perform these activities on humanity, artifacts may be the only option other than to send large, crewed spacecraft to the Solar System. Although faster-than-light travel has been seriously considered by physicists such as Miguel Alcubierre, Tough speculates that the enormous amount of energy required to achieve such speeds under currently proposed mechanisms means that robotic probes traveling at conventional speeds will still have an advantage for various applications. However, 2013 research at NASA's Johnson Space Center suggested that faster-than-light travel with the Alcubierre drive might require dramatically less energy than previously thought, needing only about 1 tonne of exotic mass-energy to move a spacecraft at 10 times the speed of light, in contrast to previous estimates which held that exotic mass-energy comparable to the mass of Jupiter would be required to power a faster-than-light spacecraft. According to Tough, an extraterrestrial civilization might want to send various types of information to humanity by means of artifacts, such as an Encyclopædia Galactica, containing the wisdom of countless extraterrestrial cultures, or perhaps an invitation to engage in diplomacy with them.
A civilization that sees itself on the brink of decline might use the abilities it still possesses to send probes throughout the galaxy, with its cultures, values, religions, sciences, technologies, and laws, so that these may not die along with the civilization itself. Freitas finds numerous reasons why interstellar probes may be a preferred method of communication among extraterrestrial civilizations wishing to make contact with Earth. A civilization aiming to learn more about the distribution of life within the galaxy might, he speculates, send probes to a large number of star systems, rather than using radio, as one cannot ensure a response by radio but can (he says) ensure that probes will return to their sender with data on the star systems they survey. Furthermore, probes would enable the surveying of non-intelligent populations, or those not yet capable of space navigation (like humans before the 20th century), as well as intelligent populations that might not wish to provide information about themselves and their planets to extraterrestrial civilizations. In addition, the greater energy required to send living beings rather than a robotic probe would, according to Michaud, be only used for purposes such as a one-way migration. Freitas points out that probes, unlike the interstellar radio waves commonly targeted by SETI searches, could store information for long, perhaps geological, timescales, and could emit strong radio signals unambiguously recognizable as being of intelligent origin, rather than being dismissed as a UFO or a natural phenomenon. Probes could also modify any signal they send to suit the system they were in, which would be impossible for a radio transmission originating from outside the target star system. Moreover, the use of small robotic probes with widely distributed beacons in individual systems, rather than a small number of powerful, centralized beacons, would provide a security advantage to the civilization using them. Rather than revealing the location of a radio beacon powerful enough to signal the whole galaxy and risk such a powerful device being compromised, decentralized beacons installed on robotic probes need not reveal any information that an extraterrestrial civilization prefers others not to have. Given the age of the Milky Way galaxy, an ancient extraterrestrial civilization may have existed and sent probes to the Solar System millions or even billions of years before the evolution of Homo sapiens. Thus, a probe sent may have been nonfunctional for millions of years before humans learn of its existence. Such a "dead" probe would not pose an imminent threat to humanity, but would prove that interstellar flight is possible. However, if an active probe were to be discovered, humans would react much more strongly than they would to the discovery of a probe that has long since ceased to function. == Further implications of contact == === Theological === The confirmation of extraterrestrial intelligence could have a profound impact on religious doctrines, potentially causing theologians to reinterpret scriptures to accommodate the new discoveries. However, a survey of people with many different religious beliefs indicated that their faith would not be affected by the discovery of extraterrestrial intelligence, and another study, conducted by Ted Peters of the Pacific Lutheran Theological Seminary, shows that most people would not consider their religious beliefs superseded by it. 
Surveys of religious leaders indicate that only a small percentage are concerned that the existence of extraterrestrial intelligence might fundamentally contradict the views of the adherents of their religion. Gabriel Funes, the chief astronomer of the Vatican Observatory and a papal adviser on science, has stated that the Catholic Church would be likely to welcome extraterrestrial visitors warmly. There are many UFO religions such as Raëlism. Astronomer David Weintraub suggests unambiguous contact would result in more of these kinds of beliefs and communities, saying "There undoubtedly would be people who would find this as an opportunity or an excuse to call attention to themselves for whatever reason and there would be new religions". Contact with extraterrestrial intelligence would not be completely inconsequential for religion. The Peters study showed that most non-religious people, and a significant minority of religious people, believe that the world could face a religious crisis, even if their own beliefs were unaffected. Contact with extraterrestrial intelligence would be most likely to cause a problem for western religions, in particular traditionalist Christianity, because of the geocentric nature of western faiths. The discovery of extraterrestrial life would not contradict basic conceptions of God, however, and seeing that science has challenged established dogma in the past, for example with the theory of evolution, it is likely that existing religions will adapt similarly to the new circumstances. Douglas Vakoch argues that it is not likely that the discovery of extraterrestrial life will impact religious beliefs. In the view of Musso, a global religious crisis would be unlikely even for Abrahamic faiths, as the studies of himself and others on Christianity, the most "anthropocentric" religion, see no conflict between that religion and the existence of extraterrestrial intelligence. In addition, the cultural and religious values of extraterrestrial species would likely be shared over centuries if contact is to occur by radio, meaning that rather than causing a huge shock to humanity, such information would be viewed much as archaeologists and historians view ancient artifacts and texts. Funes speculates that a decipherable message from extraterrestrial intelligence could initiate an interstellar exchange of knowledge in various disciplines, including whatever religions an extraterrestrial civilization may host. Billingham further suggests that an extremely advanced and friendly extraterrestrial civilization might put an end to present-day religious conflicts and lead to greater religious toleration worldwide. On the other hand, Jill Tarter puts forward the view that contact with extraterrestrial intelligence might eliminate religion as we know it and introduce humanity to an all-encompassing faith. Vakoch doubts that humans would be inclined to adopt extraterrestrial religions, telling ABC News "I think religion meets very human needs, and unless extraterrestrials can provide a replacement for it, I don't think religion is going to go away," and adding, "if there are incredibly advanced civilizations with a belief in God, I don't think Richard Dawkins will start believing." === Political === According to experts such as Niklas Hedman, executive director of UN Office for Outer Space Affairs, there are "no international agreements or mechanisms in place for how humanity would handle an encounter with extraterrestrial intelligence". 
Tim Folger speculates that news of radio contact with an extraterrestrial civilization would prove impossible to suppress and would travel rapidly, though Cold War scientific literature on the subject contradicts this. Media coverage of the discovery would probably die down quickly, though, as scientists began to decipher the message and learn its true impact. Different branches of government (for example legislative, executive, and judiciary) may pursue their own policies, potentially giving rise to power struggles. Even in the event of a single contact with no follow-up, radio contact may prompt fierce disagreements as to which bodies have the authority to represent humanity as a whole. Michaud hypothesizes that the fear arising from direct contact may cause nation-states to put aside their conflicts and work together for the common defense of humanity. Apart from the question of who would represent the Earth as a whole, contact could create other international problems, such as the degree of involvement of governments foreign to the one whose radio astronomers received the signal. The United Nations discussed various issues of foreign relations immediately before the launch of the Voyager probes, each of which carries a golden record in case it is found by extraterrestrial intelligence; Voyager 1 crossed into interstellar space in 2012. Among the issues discussed were what messages would best represent humanity, what format they should take, how to convey the cultural history of the Earth, and what international groups should be formed to study extraterrestrial intelligence in greater detail. According to Luca Codignola of the University of Genoa, contact with a powerful extraterrestrial civilization is comparable to occasions where one powerful civilization destroyed another, such as the arrival of Christopher Columbus and Hernán Cortés in the Americas and the subsequent destruction of the indigenous civilizations and their ways of life. However, the applicability of such a model to contact with extraterrestrial civilizations, and that specific interpretation of the arrival of the European colonists to the Americas, have been disputed. Even so, any large difference between the power of an extraterrestrial civilization and our own could be demoralizing and potentially cause or accelerate the collapse of human society. Being discovered by a "superior" extraterrestrial civilization, and continued contact with it, might have psychological effects that could destroy a civilization, as is claimed to have happened in the past on Earth. Even in the absence of close contact between humanity and extraterrestrials, high-information messages from an extraterrestrial civilization to humanity have the potential to cause a great cultural shock. Sociologist Donald Tarter has conjectured that knowledge of extraterrestrial culture and theology has the potential to compromise human allegiance to existing organizational structures and institutions. The cultural shock of meeting an extraterrestrial civilization may be spread over decades or even centuries if an extraterrestrial message to humanity is extremely difficult to decipher. A study suggests there may be a threat arising from the perception among state actors that other state-level actors could seek to gain an information monopoly on communications with an extraterrestrial intelligence, or from subsequent actions based on that perception.
It recommends transparency and data sharing, further development of post-detection protocols (see above), and better education of policymakers in this space. === Legal === Contact with extraterrestrial civilizations would raise legal questions, such as the rights of the extraterrestrial beings. An extraterrestrial arriving on Earth might only have the protection of animal cruelty statutes. Much as various classes of human being, such as women, children, and indigenous people, were initially denied human rights, so might extraterrestrial beings, who could therefore be legally owned and killed. If such a species were not to be treated as a legal animal, there would arise the challenge of defining the boundary between a legal person and a legal animal, considering the numerous factors that constitute intelligence. Some ethicists (see below) are considering "how the rights of a completely unfamiliar alien species would fit into our legal and ethical frameworks" and there is a case for "human rights" to evolve into "sentient rights". Freitas considers that even if an extraterrestrial being were to be afforded legal personhood, problems of nationality and immigration would arise. An extraterrestrial being would not have a legally recognized earthly citizenship, and drastic legal measures might be required in order to account for the technically illegal immigration of extraterrestrial individuals. If contact were to take place through electromagnetic signals, these issues would not arise. Rather, issues relating to patent and copyright law regarding who, if anyone, has rights to the information from the extraterrestrial civilization would be the primary legal problem. === Scientific and technological === The scientific and technological impact of extraterrestrial contact through electromagnetic waves would probably be quite small, especially at first. However, if the message contains a large amount of information, deciphering it could give humans access to a galactic heritage perhaps predating the formation of the Solar System, which may greatly advance our technology and science. A possible negative effect could be to demoralize research scientists as they come to know that what they are researching may already be known to another civilization. On the other hand, extraterrestrial civilizations with malicious intent could send (unfiltered) information that could enable or help human civilization to destroy itself, such as powerful computer viruses, knowledge to build an advanced artificial intelligence, or information on how to make extremely potent weapons that humans would not yet be able to use responsibly. While the motives for such an action are unknown, it may require minimal energy use on the part of the extraterrestrials. It is also possible that such information could be sent without malicious intent. According to Musso, however, computer viruses in particular would be nearly impossible unless the extraterrestrials possessed detailed knowledge of human computer architectures, which could only happen if a human message sent to the stars were composed with little thought to security. Even a virtual machine on which extraterrestrials could run computer programs could be designed specifically for the purpose, bearing little relation to computer systems commonly used on Earth.
In addition, humans could send messages to extraterrestrials detailing that they do not want access to the Encyclopædia Galactica until they have reached a suitable level of advancement, thus possibly raising chances that harmful impacts of technology from recipient extraterrestrials are mitigated. Extraterrestrial technology could have profound impacts on the nature of human culture and civilization. Just as television provided a new outlet for a wide variety of political, religious, and social groups, and as the printing press made the Bible available to the common people of Europe, allowing them to interpret it for themselves, so an extraterrestrial technology might change humanity in ways not immediately apparent. Harrison speculates that a knowledge of extraterrestrial technologies could increase the gap between scientific and cultural progress, leading to societal shock and an inability to compensate for negative effects of technology. He gives the example of improvements in agricultural technology during the Industrial Revolution, which displaced thousands of farm laborers until society could retrain them for jobs suited to the new social order. Contact with an extraterrestrial civilization far more advanced than humanity could cause a much greater shock than the Industrial Revolution, or anything previously experienced by humanity. Michaud suggests that humanity could be impacted by an influx of extraterrestrial science and technology in the same way that medieval European scholars were impacted by the knowledge of Arab scientists. Humanity might at first revere the knowledge as having the potential to advance the human species, and might even feel inferior to the extraterrestrial species, but would gradually grow in arrogance as it gained more and more intimate knowledge of the science, technology, and other cultural developments of an advanced extraterrestrial civilization. The discovery of extraterrestrial intelligence would have various impacts on biology and astrobiology. The discovery of extraterrestrial life in any form, intelligent or non-intelligent, would give humanity greater insight into the nature of life on Earth and would improve the conception of how the tree of life is organized. Human biologists could possibly learn about extraterrestrial biochemistry and observe how it differs from that found on Earth. This knowledge could help human civilization to learn which aspects of life are common throughout the universe and which are possibly specific to Earth. === Worldviews === Some have argued that confirmed reliable detection of extraterrestrial intelligence or contact may be one of the biggest moments in human history and would have major implications for humanity including its contemporary prevalent worldviews, not just from implications within the fields of theology (see above) and science (see above), similar to the paradigm shift away from geocentrism as a dominant element of human worldviews. Harvard astronomer and lead scientist of The Galileo Project, Avi Loeb, has argued that humanity is not ready to adopt a sense of what he calls "cosmic modesty" and that this could change if the project detects "relics" of more advanced civilizations. Loeb postulates that if we find that we "are not the smartest kid on the cosmic block, it will give us a different perspective" – such as the way we think about our place in the universe, for example with relevance to prevalent religious worldviews, in which humans may often be considered unique or exceptional. 
According to Major John R. King, potential sociological consequences of alien contact may include (1) initial shock and consternation, (2) loss or reduction of ego, (3) modification of human values, (4) a decrease in the status of [certain] scientists, and (5) a reevaluation of religions. The "mediocrity principle", which claims that "there is nothing special about Earth's status or position in the Universe", could present a great challenge to Abrahamic religions, which "teach that human beings are purposefully created by God and occupy a privileged position in relation to other creatures". Some have nevertheless argued that the "discovery of life elsewhere in the Universe would not compromise God's love for Earth life", even though there is no "positive affirmation of alien life" in popular religious texts such as the Bible, and even though other civilisations may be "completely unaware of Jesus' story" and may have no such popular story from their own past. There is widespread belief that religions would adapt to contact. === Ethics === Astroethics refers to the contemplation and development of ethical standards for a variety of outer space issues, including questions of how to interact with extraterrestrial intelligences remotely or in close encounters. It concerns not only human ethics but also the ethics of non-human intelligences, including whether, and which, rights they would afford humans, individually or collectively. === Ecological and biological-warfare impacts === An extraterrestrial civilization might bring to Earth pathogens or invasive life forms that do not harm its own biosphere. Alien pathogens could decimate the human population, which would have no immunity to them, or they might use terrestrial livestock or plants as hosts, causing indirect harm to humans. Invasive organisms brought by extraterrestrial civilizations could cause great ecological harm because of the terrestrial biosphere's lack of defenses against them. On the other hand, pathogens and invasive species of extraterrestrial origin might differ enough from terrestrial organisms in their biology to have no adverse effects. Furthermore, pathogens and parasites on Earth are generally suited to only a small and exclusive set of environments, to which extraterrestrial pathogens would have had no opportunity to adapt. If an extraterrestrial civilization bearing malice towards humanity gained sufficient knowledge of terrestrial biology and weaknesses in the immune systems of terrestrial biota, it might be able to create extremely potent biological weapons. Even a civilization without malicious intent could inadvertently cause harm to humanity by not taking account of all the risks of its actions. According to Baum, even if an extraterrestrial civilization were to communicate using electromagnetic signals alone, it could send humanity information with which humans themselves could create lethal biological weapons. == See also ==
Archaeology, Anthropology, and Interstellar Communication
Relative species abundance
== Further reading ==
Dick, Steven J. (2015). The Impact of Discovering Life Beyond Earth. Cambridge University Press. ISBN 978-1-107-10998-8.
Ashkenazi, Michael (2016). What We Know About Extraterrestrial Intelligence. Springer. ISBN 978-3-319-44455-0.
Vakoch, Douglas (2013). Astrobiology, History, and Society: Life Beyond Earth and the Impact of Discovery. Springer. ISBN 978-3-642-43540-9.
== External links ==
SETI Institute
Cultural Aspects of SETI
Introduction to ExtraTerrestrial Intelligence
Wikipedia/Potential_cultural_impact_of_extraterrestrial_contact
The Center for Astrophysics | Harvard & Smithsonian (CfA), previously known as the Harvard–Smithsonian Center for Astrophysics, is an astrophysics research institute jointly operated by the Harvard College Observatory and Smithsonian Astrophysical Observatory. Founded in 1973 and headquartered in Cambridge, Massachusetts, United States, the CfA leads a broad program of research in astronomy, astrophysics, Earth and space sciences, as well as science education. The CfA either leads or participates in the development and operations of more than fifteen ground- and space-based astronomical research observatories across the electromagnetic spectrum, including the forthcoming Giant Magellan Telescope (GMT) and the Chandra X-ray Observatory, one of NASA's Great Observatories. Hosting more than 850 scientists, engineers, and support staff, the CfA is among the largest astronomical research institutes in the world. Its projects have included Nobel Prize-winning advances in cosmology and high energy astrophysics, the discovery of many exoplanets, and the first image of a black hole. The CfA also serves a major role in the global astrophysics research community: the CfA's Astrophysics Data System (ADS), for example, has been universally adopted as the world's online database of astronomy and physics papers. Known for most of its history as the "Harvard-Smithsonian Center for Astrophysics", the CfA rebranded in 2018 to its current name in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. Lisa Kewley has served as the director of the CfA since 2022. == History of the CfA == The Center for Astrophysics | Harvard & Smithsonian is not formally an independent legal organization, but rather an institutional entity operated under a memorandum of understanding between Harvard University and the Smithsonian Institution. This collaboration was formalized on July 1, 1973, with the goal of coordinating the related research activities of the Harvard College Observatory (HCO) and the Smithsonian Astrophysical Observatory (SAO) under the leadership of a single director, and housed within the same complex of buildings on the Harvard campus in Cambridge, Massachusetts. The CfA's history is therefore also that of the two fully independent organizations that comprise it. With a combined history of more than 300 years, HCO and SAO have been host to major milestones in astronomical history that predate the CfA's founding. These are briefly summarized below. === History of the Smithsonian Astrophysical Observatory (SAO) === Samuel Pierpont Langley, the third Secretary of the Smithsonian, founded the Smithsonian Astrophysical Observatory on the south yard of the Smithsonian Castle (on the U.S. National Mall) on March 1, 1890. The Astrophysical Observatory's initial, primary purpose was to "record the amount and character of the Sun's heat". Charles Greeley Abbot was named SAO's first director, and the observatory operated solar telescopes to take daily measurements of the Sun's intensity in different regions of the optical electromagnetic spectrum. In doing so, the observatory enabled Abbot to make critical refinements to the Solar constant, as well as to serendipitously discover Solar variability. It is likely that SAO's early history as a solar observatory was part of the inspiration behind the Smithsonian's "sunburst" logo, designed in 1965 by Crimilda Pontes. In 1955, the scientific headquarters of SAO moved from Washington, D.C. 
to Cambridge, Massachusetts, to affiliate with the Harvard College Observatory (HCO). Fred Lawrence Whipple, then the chairman of the Harvard Astronomy Department, was named the new director of SAO. The collaborative relationship between SAO and HCO therefore predates the official creation of the CfA by 18 years. SAO's move to Harvard's campus also resulted in a rapid expansion of its research program. Following the launch of Sputnik (the world's first human-made satellite) in 1957, SAO accepted a national challenge to create a worldwide satellite-tracking network, collaborating with the United States Air Force on Project Space Track. With the creation of NASA the following year and throughout the Space Race, SAO led major efforts in the development of orbiting observatories and large ground-based telescopes, laboratory and theoretical astrophysics, as well as the application of computers to astrophysical problems. === History of Harvard College Observatory (HCO) === Partly in response to renewed public interest in astronomy following the 1835 return of Halley's Comet, the Harvard College Observatory was founded in 1839, when the Harvard Corporation appointed William Cranch Bond as an "Astronomical Observer to the University". For its first four years of operation, the observatory was situated at the Dana-Palmer House (where Bond also resided) near Harvard Yard, and consisted of little more than three small telescopes and an astronomical clock. In his 1840 book recounting the history of the college, then-Harvard President Josiah Quincy III noted that "there is wanted a reflecting telescope equatorially mounted". This telescope, the 15-inch "Great Refractor", opened seven years later (in 1847) at the top of Observatory Hill in Cambridge (where it still exists today, housed in the oldest of the CfA's complex of buildings). The telescope was the largest in the United States from 1847 until 1867. William Bond and pioneer photographer John Adams Whipple used the Great Refractor to produce the first clear daguerreotypes of the Moon (winning them an award at the 1851 Great Exhibition in London). Bond and his son, George Phillips Bond (the second director of HCO), used it to discover Saturn's 8th moon, Hyperion (which was also independently discovered by William Lassell). Under the directorship of Edward Charles Pickering from 1877 to 1919, the observatory became the world's major producer of stellar spectra and magnitudes, established an observing station in Peru, and applied mass-production methods to the analysis of data. It was during this time that HCO became host to a series of major discoveries in astronomical history, powered by the observatory's so-called "Computers" (women hired by Pickering as skilled workers to process astronomical data). These "Computers" included Williamina Fleming, Annie Jump Cannon, Henrietta Swan Leavitt, Florence Cushman and Antonia Maury, all widely recognized today as major figures in scientific history. Henrietta Swan Leavitt, for example, discovered the so-called period-luminosity relation for Classical Cepheid variable stars, establishing the first major "standard candle" with which to measure the distance to galaxies. Now called "Leavitt's law", the discovery is regarded as one of the most foundational and important in the history of astronomy; astronomers like Edwin Hubble, for example, would later use Leavitt's law to establish that the Universe is expanding, the primary piece of evidence for the Big Bang model.
Following Pickering's death in 1919, the directorship of HCO passed in 1921 to Harlow Shapley (a major participant in the so-called "Great Debate" of 1920). This era of the observatory was made famous by the work of Cecilia Payne-Gaposchkin, who became the first woman to earn a PhD in astronomy from Radcliffe College (a short walk from the observatory). Payne-Gaposchkin's 1925 thesis proposed that stars were composed primarily of hydrogen and helium, an idea thought ridiculous at the time. Between Shapley's tenure and the formation of the CfA, the observatory was directed by Donald H. Menzel and then Leo Goldberg, both of whom maintained widely recognized programs in solar and stellar astrophysics. Menzel played a major role in encouraging the Smithsonian Astrophysical Observatory to move to Cambridge and collaborate more closely with HCO. === Joint history as the Center for Astrophysics (CfA) === The collaborative foundation for what would ultimately give rise to the Center for Astrophysics began with SAO's move to Cambridge in 1955. Fred Whipple, who was already chair of the Harvard Astronomy Department (housed within HCO since 1931), was named SAO's new director at the start of this new era; an early test of the model for a unified directorship across HCO and SAO. The following 18 years would see the two independent entities merge ever closer together, operating effectively (but informally) as one large research center. This joint relationship was formalized as the new Harvard–Smithsonian Center for Astrophysics on July 1, 1973. George B. Field, then affiliated with Berkeley, was appointed as its first director. That same year, a new astronomical journal, the CfA Preprint Series, was created, and a CfA/SAO instrument flying aboard Skylab discovered coronal holes on the Sun. The founding of the CfA also coincided with the birth of X-ray astronomy as a new, major field that was largely dominated by CfA scientists in its early years. Riccardo Giacconi, regarded as the "father of X-ray astronomy", founded the High Energy Astrophysics Division within the new CfA by moving most of his research group (then at American Science and Engineering) to SAO in 1973. That group would later go on to launch the Einstein Observatory (the first imaging X-ray telescope) in 1978, and ultimately lead the proposals and development of what would become the Chandra X-ray Observatory. Chandra, the third of NASA's Great Observatories and still the most powerful X-ray telescope in history, continues operations today as part of the CfA's Chandra X-ray Center. Giacconi would later win the 2002 Nobel Prize in Physics for his foundational work in X-ray astronomy. Shortly after the launch of the Einstein Observatory, the CfA's Steven Weinberg won the 1979 Nobel Prize in Physics for his work on electroweak unification. The following decade saw the start of the landmark CfA Redshift Survey (the first attempt to map the large scale structure of the Universe), as well as the release of the "Field Report", a highly influential Astronomy and Astrophysics Decadal Survey chaired by the outgoing CfA Director George Field. He would be replaced in 1982 by Irwin Shapiro, who during his tenure as director (1982 to 2004) oversaw the expansion of the CfA's observing facilities around the world, including the newly named Fred Lawrence Whipple Observatory, the Infrared Telescope (IRT) aboard the Space Shuttle, the 6.5-meter Multiple Mirror Telescope (MMT), the SOHO satellite, and the launch of Chandra in 1999.
CfA-led discoveries throughout this period include canonical work on Supernova 1987A, the "CfA2 Great Wall" (then the largest known coherent structure in the Universe), the best-yet evidence for supermassive black holes, and the first convincing evidence for an extrasolar planet. The 1980s also saw the CfA play a distinct role in the history of computer science and the internet: in 1986, SAO started developing SAOImage, one of the world's first X11-based applications made publicly available (its successor, DS9, remains the most widely used astronomical FITS image viewer worldwide). During this time, scientists and software developers at the CfA also began work on what would become the Astrophysics Data System (ADS), one of the world's first online databases of research papers. By 1993, the ADS was running the first routine transatlantic queries between databases, a foundational aspect of the internet today. == The CfA today == === Research at the CfA === Charles Alcock, known for a number of major works related to massive compact halo objects, was named the third director of the CfA in 2004. Today Alcock oversees one of the largest and most productive astronomical institutes in the world, with more than 850 staff and an annual budget in excess of $100 million. The Harvard Department of Astronomy, housed within the CfA, maintains a continual complement of approximately 60 PhD students, more than 100 postdoctoral researchers, and roughly 25 undergraduate astronomy and astrophysics majors from Harvard College. SAO, meanwhile, hosts a long-running and highly rated REU Summer Intern program as well as many visiting graduate students. The CfA estimates that roughly 10% of the professional astrophysics community in the United States spent at least a portion of their career or education there. The CfA is either a lead or major partner in the operations of the Fred Lawrence Whipple Observatory, the Submillimeter Array, MMT Observatory, the South Pole Telescope, VERITAS, and a number of other smaller ground-based telescopes. The CfA's 2019–2024 Strategic Plan includes the construction of the Giant Magellan Telescope as a driving priority for the center. Along with the Chandra X-ray Observatory, the CfA plays a central role in a number of space-based observing facilities, including the recently launched Parker Solar Probe, Kepler space telescope, the Solar Dynamics Observatory (SDO), and Hinode. The CfA, via the Smithsonian Astrophysical Observatory, recently played a major role in the Lynx X-ray Observatory, a NASA-funded large mission concept study commissioned as part of the 2020 Astronomy and Astrophysics Decadal Survey ("Astro2020"). If launched, Lynx would be the most powerful X-ray observatory constructed to date, enabling order-of-magnitude advances in capability over Chandra. SAO is one of the 13 stakeholder institutes for the Event Horizon Telescope Board, and the CfA hosts its Array Operations Center. In 2019, the project revealed the first direct image of a black hole. The result is widely regarded as a triumph not only of observational astronomy, but of its intersection with theoretical astrophysics. Union of the observational and theoretical subfields of astrophysics has been a major focus of the CfA since its founding. In 2018, the CfA rebranded, changing its official name to the "Center for Astrophysics | Harvard & Smithsonian" in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. 
Today, the CfA receives roughly 70% of its funding from NASA, 22% from Smithsonian federal funds, and 4% from the National Science Foundation. The remaining 4% comes from contributors including the United States Department of Energy, the Annenberg Foundation, as well as other gifts and endowments. === Organizational structure === Research across the CfA is organized into six divisions and seven research centers:
==== Scientific divisions within the CfA ====
Atomic and Molecular Physics (AMP)
High Energy Astrophysics (HEA)
Optical and Infrared Astronomy (OIR)
Radio and Geoastronomy (RG)
Solar, Stellar, and Planetary Sciences (SSP)
Theoretical Astrophysics (TA)
==== Centers hosted at the CfA ====
Chandra X-ray Center (CXC), the science operations center for NASA's Chandra X-ray Observatory
Institute for Theory and Computation (ITC)
Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP)
Center for Parallel Astrophysical Computing (CPAC)
Minor Planet Center (MPC)
Telescope Data Center (TDC)
Radio Telescope Data Center (RTDC)
Solar & Stellar X-ray Group (SSXG)
The CfA is also host to the Harvard University Department of Astronomy, large central engineering and computation facilities, the Science Education Department, the John G. Wolbach Library, the world's largest database of astronomy and physics papers (ADS), and the world's largest collection of astronomical photographic plates. === Observatories operated with CfA participation ===
==== Ground-based observatories ====
Fred Lawrence Whipple Observatory
Magellan telescopes
MMT Observatory
Event Horizon Telescope
South Pole Telescope
Submillimeter Array
1.2-Meter Millimeter-Wave Telescope
Very Energetic Radiation Imaging Telescope Array System (VERITAS)
==== Space-based observatories and probes ====
Chandra X-ray Observatory
Transiting Exoplanet Survey Satellite (TESS)
Parker Solar Probe
Hinode
Kepler
Solar Dynamics Observatory (SDO)
Solar and Heliospheric Observatory (SOHO)
Spitzer Space Telescope
==== Planned future observatories ====
Lynx X-ray Observatory
Giant Magellan Telescope
Murchison Widefield Array
Square Kilometer Array
Pan-STARRS
Vera C. Rubin Observatory (formerly called the Large Synoptic Survey Telescope)
== See also ==
Clara Sousa-Silva, research scientist
List of astronomical observatories
== External links ==
Official website
Wikipedia/Harvard-Smithsonian_Center_for_Astrophysics
The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way Galaxy. The equation was formulated in 1961 by Frank Drake, not for purposes of quantifying the number of civilizations, but as a way to stimulate scientific dialogue at the first scientific meeting on the search for extraterrestrial intelligence (SETI). The equation summarizes the main concepts which scientists must contemplate when considering the question of other radio-communicative life. It is more properly thought of as an approximation than as a serious attempt to determine a precise number. Criticism related to the Drake equation focuses not on the equation itself, but on the fact that the estimated values for several of its factors are highly conjectural, the combined multiplicative effect being that the uncertainty associated with any derived value is so large that the equation cannot be used to draw firm conclusions. == Equation == The Drake equation is: N = R∗ · fp · ne · fl · fi · fc · L, where:
N = the number of civilizations in the Milky Way galaxy with which communication might be possible (i.e. which are on the current past light cone);
R∗ = the average rate of star formation in our Galaxy.
fp = the fraction of those stars that have planets.
ne = the average number of planets that can potentially support life per star that has planets.
fl = the fraction of planets that could support life that actually develop life at some point.
fi = the fraction of planets with life that go on to develop intelligent life (civilizations).
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space.
L = the length of time for which such civilizations release detectable signals into space.
This form of the equation first appeared in Drake's 1965 paper. == History == In September 1959, physicists Giuseppe Cocconi and Philip Morrison published an article in the journal Nature with the provocative title "Searching for Interstellar Communications". Cocconi and Morrison argued that radio telescopes had become sensitive enough to pick up transmissions that might be broadcast into space by civilizations orbiting other stars. Such messages, they suggested, might be transmitted at a wavelength of 21 cm (1,420.4 MHz). This is the wavelength of radio emission by neutral hydrogen, the most common element in the universe, and they reasoned that other intelligences might see this as a logical landmark in the radio spectrum. Two months later, Harvard University astronomy professor Harlow Shapley speculated on the number of inhabited planets in the universe, saying "The universe has 10 million, million, million suns (10 followed by 18 zeros) similar to our own. One in a million has planets around it. Only one in a million million has the right combination of chemicals, temperature, water, days and nights to support planetary life as we know it. This calculation arrives at the estimated figure of 100 million worlds where life has been forged by evolution." Seven months after Cocconi and Morrison published their article, Drake began searching for extraterrestrial intelligence in an experiment called Project Ozma. It was the first systematic search for signals from communicative extraterrestrial civilizations.
Using the 85 ft (26 m) dish of the National Radio Astronomy Observatory, Green Bank in Green Bank, West Virginia, Drake monitored two nearby Sun-like stars: Epsilon Eridani and Tau Ceti, slowly scanning frequencies close to the 21 cm wavelength for six hours per day from April to July 1960. The project was well designed, inexpensive, and simple by today's standards. It detected no signals. Soon thereafter, Drake hosted the first conference on the search for extraterrestrial intelligence and the detection of its radio signals. The meeting was held at the Green Bank facility in 1961. The equation that bears Drake's name arose out of his preparations for the meeting. As I planned the meeting, I realized a few day[s] ahead of time we needed an agenda. And so I wrote down all the things you needed to know to predict how hard it's going to be to detect extraterrestrial life. And looking at them it became pretty evident that if you multiplied all these together, you got a number, N, which is the number of detectable civilizations in our galaxy. This was aimed at the radio search, and not at the search for primordial or primitive life forms. The ten attendees were conference organizer J. Peter Pearman, Frank Drake, Philip Morrison, businessman and radio amateur Dana Atchley, chemist Melvin Calvin, astronomer Su-Shu Huang, neuroscientist John C. Lilly, inventor Barney Oliver, astronomer Carl Sagan, and radio-astronomer Otto Struve. These participants called themselves "The Order of the Dolphin" (because of Lilly's work on dolphin communication), and commemorated their first meeting with a plaque at the observatory hall. == Usefulness == The Drake equation amounts to a summary of the factors affecting the likelihood that we might detect radio communication from intelligent extraterrestrial life. The last three parameters, fi, fc, and L, are not known and are very difficult to estimate, with values ranging over many orders of magnitude (see § Criticism). Therefore, the usefulness of the Drake equation is not in the solving, but rather in the contemplation of all the various concepts which scientists must incorporate when considering the question of life elsewhere, and in giving that question a basis for scientific analysis. The equation has helped draw attention to some particular scientific problems related to life in the universe, for example abiogenesis, the development of multi-cellular life, and the development of intelligence itself. Within the limits of existing human technology, any practical search for distant intelligent life must necessarily be a search for some manifestation of a distant technology. After about 50 years, the Drake equation is still of seminal importance because it is a 'road map' of what we need to learn in order to solve this fundamental existential question. It also formed the backbone of astrobiology as a science; although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories. Some 50 years of SETI have failed to find anything, even though radio telescopes, receiver techniques, and computational abilities have improved significantly since the early 1960s. SETI efforts since 1961 have conclusively ruled out widespread alien emissions near the 21 cm hydrogen line.
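Because the equation is a simple product of seven factors, its structure can be made concrete with a short computational sketch. The following Python function is not from Drake's papers; the function name, argument names, and sample values are assumptions chosen purely for illustration, and the printed result carries no evidential weight.

```python
def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the estimated number of detectable civilizations in the galaxy.

    r_star   -- average rate of star formation (stars per year), R*
    f_p      -- fraction of stars that have planets, fp
    n_e      -- average number of potentially habitable planets per such star, ne
    f_l      -- fraction of those planets that actually develop life, fl
    f_i      -- fraction of life-bearing planets that develop intelligence, fi
    f_c      -- fraction of intelligent civilizations that release detectable signals, fc
    lifetime -- number of years such civilizations remain detectable, L
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime


# Purely illustrative inputs (not endorsed estimates): one star formed per year,
# half of stars with planets, one habitable planet each, life and intelligence
# certain, 10% of civilizations detectable, detectable for 10,000 years.
print(drake_equation(1.0, 0.5, 1.0, 1.0, 1.0, 0.1, 10_000))  # prints 500.0
```

Because every factor enters multiplicatively, the uncertainty in N compounds the uncertainties of all seven inputs, which is the source of the criticism noted above.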
== Estimates == === Original estimates === There is considerable disagreement on the values of these parameters, but the 'educated guesses' used by Drake and his colleagues in 1961 were: R∗ = 1 yr−1 (1 star formed per year, on the average over the life of the galaxy; this was regarded as conservative) fp = 0.2 to 0.5 (one fifth to one half of all stars formed will have planets) ne = 1 to 5 (stars with planets will have between 1 and 5 planets capable of developing life) fl = 1 (100% of these planets will develop life) fi = 1 (100% of which will develop intelligent life) fc = 0.1 to 0.2 (10–20% of which will be able to communicate) L = somewhere between 1000 and 100,000,000 years Inserting the above minimum numbers into the equation gives a minimum N of 20 (see: Range of results). Inserting the maximum numbers gives a maximum of 50,000,000. Drake states that given the uncertainties, the original meeting concluded that N ≈ L, and there were probably between 1000 and 100,000,000 planets with civilizations in the Milky Way Galaxy. === Current estimates === This section discusses and attempts to list the best current estimates for the parameters of the Drake equation. ==== Rate of star creation in this Galaxy, R∗ ==== Calculations in 2010, from NASA and the European Space Agency indicate that the rate of star formation in this Galaxy is about 0.68–1.45 M☉ of material per year. To get the number of stars per year, we divide this by the initial mass function (IMF) for stars, where the average new star's mass is about 0.5 M☉. This gives a star formation rate of about 1.5–3 stars per year. ==== Fraction of those stars that have planets, fp ==== Analysis of microlensing surveys, in 2012, has found that fp may approach 1—that is, stars are orbited by planets as a rule, rather than the exception; and that there are one or more bound planets per Milky Way star. ==== Average number of planets that might support life per star that has planets, ne ==== In November 2013, astronomers reported, based on Kepler space telescope data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of sun-like stars and red dwarf stars within the Milky Way Galaxy. 11 billion of these estimated planets may be orbiting sun-like stars. Since there are about 100 billion stars in the galaxy, this implies fp · ne is roughly 0.4. The nearest planet in the habitable zone is Proxima Centauri b, which is as close as about 4.2 light-years away. The consensus at the Green Bank meeting was that ne had a minimum value between 3 and 5. Dutch science journalist Govert Schilling has opined that this is optimistic. Even if planets are in the habitable zone, the number of planets with the right proportion of elements is difficult to estimate. Brad Gibson, Yeshe Fenner, and Charley Lineweaver determined that about 10% of star systems in the Milky Way Galaxy are hospitable to life, by having heavy elements, being far from supernovae and being stable for a sufficient time. The discovery of numerous gas giants in close orbit with their stars has introduced doubt that life-supporting planets commonly survive the formation of their stellar systems. So-called hot Jupiters may migrate from distant orbits to near orbits, in the process disrupting the orbits of habitable planets. On the other hand, the variety of star systems that might have habitable zones is not just limited to solar-type stars and Earth-sized planets. 
It is now estimated that even tidally locked planets close to red dwarf stars might have habitable zones, although the flaring behavior of these stars might speak against this. The possibility of life on moons of gas giants (such as Jupiter's moon Europa, or Saturn's moons Titan and Enceladus) adds further uncertainty to this figure. The authors of the rare Earth hypothesis propose a number of additional constraints on habitability for planets, including being in galactic zones with suitably low radiation, high star metallicity, and low enough density to avoid excessive asteroid bombardment. They also propose that it is necessary to have a planetary system with large gas giants which provide bombardment protection without a hot Jupiter; and a planet with plate tectonics, a large moon that creates tidal pools, and moderate axial tilt to generate seasonal variation. ==== Fraction of the above that actually go on to develop life, fl ==== Geological evidence from the Earth suggests that fl may be high; life on Earth appears to have begun around the same time as favorable conditions arose, suggesting that abiogenesis may be relatively common once conditions are right. However, this evidence only looks at the Earth (a single model planet), and contains anthropic bias, as the planet of study was not chosen randomly, but by the living organisms that already inhabit it (ourselves). From a classical hypothesis testing standpoint, without assuming that the underlying distribution of fl is the same for all planets in the Milky Way, there are zero degrees of freedom, permitting no valid estimates to be made. If life (or evidence of past life) were to be found on Mars, Europa, Enceladus or Titan that developed independently from life on Earth it would imply a value for fl close to 1. While this would raise the number of degrees of freedom from zero to one, there would remain a great deal of uncertainty on any estimate due to the small sample size, and the chance they are not really independent. Countering this argument is that there is no evidence for abiogenesis occurring more than once on the Earth—that is, all terrestrial life stems from a common origin. If abiogenesis were more common it would be speculated to have occurred more than once on the Earth. Scientists have searched for this by looking for bacteria that are unrelated to other life on Earth, but none have been found yet. It is also possible that life arose more than once, but that other branches were out-competed, or died in mass extinctions, or were lost in other ways. Biochemists Francis Crick and Leslie Orgel laid special emphasis on this uncertainty: "At the moment we have no means at all of knowing" whether we are "likely to be alone in the galaxy (Universe)" or whether "the galaxy may be pullulating with life of many different forms." As an alternative to abiogenesis on Earth, they proposed the hypothesis of directed panspermia, which states that Earth life began with "microorganisms sent here deliberately by a technological society on another planet, by means of a special long-range unmanned spaceship". In 2020, a paper by scholars at the University of Nottingham proposed an "Astrobiological Copernican" principle, based on the Principle of Mediocrity, and speculated that "intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution". In the authors' framework, fl, fi, and fc are all set to a probability of 1 (certainty). 
Their resultant calculation concludes there are more than thirty current technological civilizations in the galaxy (disregarding error bars). ==== Fraction of the above that develops intelligent life, fi ==== This value remains particularly controversial. Those who favor a low value, such as the biologist Ernst Mayr, point out that of the billions of species that have existed on Earth, only one has become intelligent and from this, infer a tiny value for fi. Likewise, the Rare Earth hypothesis, notwithstanding their low value for ne above, also think a low value for fi dominates the analysis. Those who favor higher values note the generally increasing complexity of life over time, concluding that the appearance of intelligence is almost inevitable, implying an fi approaching 1. Skeptics point out that the large spread of values in this factor and others make all estimates unreliable. (See Criticism). In addition, while it appears that life developed soon after the formation of Earth, the Cambrian explosion, in which a large variety of multicellular life forms came into being, occurred a considerable amount of time after the formation of Earth, which suggests the possibility that special conditions were necessary. Some scenarios such as the snowball Earth or research into extinction events have raised the possibility that life on Earth is relatively fragile. Research on any past life on Mars is relevant since a discovery that life did form on Mars but ceased to exist might raise the estimate of fl but would indicate that in half the known cases, intelligent life did not develop. Estimates of fi have been affected by discoveries that the Solar System's orbit is circular in the galaxy, at such a distance that it remains out of the spiral arms for tens of millions of years (evading radiation from novae). Also, Earth's large moon may aid the evolution of life by stabilizing the planet's axis of rotation. There has been quantitative work to begin to define f l ⋅ f i {\displaystyle f_{\mathrm {l} }\cdot f_{\mathrm {i} }} . One example is a Bayesian analysis published in 2020. In the conclusion, the author cautions that this study applies to Earth's conditions. In Bayesian terms, the study favors the formation of intelligence on a planet with identical conditions to Earth but does not do so with high confidence. Planetary scientist Pascal Lee of the SETI Institute proposes that this fraction is very low (0.0002). He based this estimate on how long it took Earth to develop intelligent life (1 million years since Homo erectus evolved, compared to 4.6 billion years since Earth formed). ==== Fraction of the above revealing their existence via signal release into space, fc ==== For deliberate communication, the one example we have (the Earth) does not do much explicit communication, though there are some efforts covering only a tiny fraction of the stars that might look for human presence. (See Arecibo message, for example). There is considerable speculation why an extraterrestrial civilization might exist but choose not to communicate. However, deliberate communication is not required, and calculations indicate that current or near-future Earth-level technology might well be detectable to civilizations not too much more advanced than present day humans. By this standard, the Earth is a communicating civilization. Another question is what percentage of civilizations in the galaxy are close enough for us to detect, assuming that they send out signals. 
For example, existing Earth radio telescopes could only detect Earth radio transmissions from roughly a light year away. ==== Lifetime of such a civilization wherein it communicates its signals into space, L ==== Michael Shermer estimated L as 420 years, based on the duration of sixty historical Earthly civilizations. Using 28 civilizations more recent than the Roman Empire, he calculates a figure of 304 years for "modern" civilizations. It could also be argued from Michael Shermer's results that the fall of most of these civilizations was followed by later civilizations that carried on the technologies, so it is doubtful that they are separate civilizations in the context of the Drake equation. In the expanded version, including reappearance number, this lack of specificity in defining single civilizations does not matter for the result, since such a civilization turnover could be described as an increase in the reappearance number rather than increase in L, stating that a civilization reappears in the form of the succeeding cultures. Furthermore, since none could communicate over interstellar space, the method of comparing with historical civilizations could be regarded as invalid. David Grinspoon has argued that once a civilization has developed enough, it might overcome all threats to its survival. It will then last for an indefinite period of time, making the value for L potentially billions of years. If this is the case, then he proposes that the Milky Way Galaxy may have been steadily accumulating advanced civilizations since it formed. He proposes that the last factor L be replaced with fIC · T, where fIC is the fraction of communicating civilizations that become "immortal" (in the sense that they simply do not die out), and T representing the length of time during which this process has been going on. This has the advantage that T would be a relatively easy-to-discover number, as it would simply be some fraction of the age of the universe. It has also been hypothesized that once a civilization has learned of a more advanced one, its longevity could increase because it can learn from the experiences of the other. The astronomer Carl Sagan speculated that all of the terms, except for the lifetime of a civilization, are relatively high and the determining factor in whether there are large or small numbers of civilizations in the universe is the civilization lifetime, or in other words, the ability of technological civilizations to avoid self-destruction. In Sagan's case, the Drake equation was a strong motivating factor for his interest in environmental issues and his efforts to warn against the dangers of nuclear warfare. Paleobiologist Olev Vinn suggests that the lifetime of most technological civilizations is brief due to inherited behavior patterns present in all intelligent organisms. These behaviors, incompatible with civilized conditions, inevitably lead to self-destruction soon after the emergence of advanced technologies. An intelligent civilization might not be organic, as some have suggested that artificial general intelligence may replace humanity. === Range of results === As many skeptics have pointed out, the Drake equation can give a very wide range of values, depending on the assumptions, as the values used in portions of the Drake equation are not well established. In particular, the result can be N ≪ 1, meaning we are likely alone in the galaxy, or N ≫ 1, implying there are many civilizations we might contact. 
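The breadth of that range can be illustrated numerically by sampling each uncertain factor from a broad interval and multiplying. The sketch below is purely illustrative: the sampling ranges are hypothetical placeholders chosen only to span several orders of magnitude, not the published estimates discussed in this article.

import math, random

def sample_drake(rng):
    # Draw each factor from a hypothetical log-uniform range (illustrative only).
    def loguniform(lo, hi):
        return 10 ** rng.uniform(math.log10(lo), math.log10(hi))
    r_star = loguniform(1, 3)        # stars formed per year
    f_p    = loguniform(0.2, 1)
    n_e    = loguniform(0.1, 5)
    f_l    = loguniform(1e-3, 1)
    f_i    = loguniform(1e-9, 1)
    f_c    = loguniform(0.01, 0.2)
    L      = loguniform(300, 1e9)    # years
    return r_star * f_p * n_e * f_l * f_i * f_c * L

rng = random.Random(0)
samples = sorted(sample_drake(rng) for _ in range(100_000))
print("smallest N:", samples[0])
print("median N  :", samples[len(samples) // 2])
print("largest N :", samples[-1])

With ranges like these, the sampled values of N run from far below 1 to far above 1, which is the substance of the criticism discussed later: the product inherits, and compounds, the uncertainty of its least constrained factors.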
One of the few points of wide agreement is that the presence of humanity implies a probability of intelligence arising of greater than zero. As an example of a low estimate, combining NASA's star formation rates, the rare Earth hypothesis value of fp · ne · fl = 10−5, Mayr's view on intelligence arising, Drake's view of communication, and Shermer's estimate of lifetime: R∗ = 1.5–3 yr−1, fp · ne · fl = 10−5, fi = 10−9, fc = 0.2[Drake, above], and L = 304 years gives: N = 1.5 × 10−5 × 10−9 × 0.2 × 304 = 9.1 × 10−13, i.e., suggesting that we are probably alone in this galaxy, and possibly in the observable universe. On the other hand, with larger values for each of the parameters above, values of N can be derived that are greater than 1. The following higher values have been proposed for each of the parameters: R∗ = 1.5–3 yr−1, fp = 1, ne = 0.2, fl = 0.13, fi = 1, fc = 0.2[Drake, above], and L = 10⁹ years. Use of these parameters gives: N = 3 × 1 × 0.2 × 0.13 × 1 × 0.2 × 10⁹ = 15,600,000. Monte Carlo simulations of estimates of the Drake equation factors based on a stellar and planetary model of the Milky Way have resulted in the number of civilizations varying by a factor of 100. === Possible former technological civilizations === In 2016, Adam Frank and Woodruff Sullivan modified the Drake equation to determine just how unlikely the event of a technological species arising on a given habitable planet must be, to give the result that Earth hosts the only technological species that has ever arisen, for two cases: (a) this Galaxy, and (b) the universe as a whole. By asking this different question, one removes the lifetime and simultaneous communication uncertainties. Since the numbers of habitable planets per star can today be reasonably estimated, the only remaining unknown in the Drake equation is the probability that a habitable planet ever develops a technological species over its lifetime. For Earth to have the only technological species that has ever occurred in the universe, they calculate that the probability of any given habitable planet ever developing a technological species must be less than 2.5×10−24. Similarly, for Earth to have been the only case of hosting a technological species over the history of this Galaxy, the odds of a habitable zone planet ever hosting a technological species must be less than 1.7×10−11 (about 1 in 60 billion). The figure for the universe implies that it is extremely unlikely that Earth hosts the only technological species that has ever occurred. On the other hand, for this Galaxy one must think that fewer than 1 in 60 billion habitable planets develop a technological species for there not to have been at least a second case of such a species over the past history of this Galaxy. == Modifications == As many observers have pointed out, the Drake equation is a very simple model that omits potentially relevant parameters, and many changes and modifications to the equation have been proposed. One line of modification, for example, attempts to account for the uncertainty inherent in many of the terms. Combining the estimates of the original six factors by major researchers via a Monte Carlo procedure leads to a best value for the non-longevity factors of 0.85 1/years. This result differs insignificantly from the estimate of unity given both by Drake and the Cyclops report. Others note that the Drake equation ignores many concepts that might be relevant to the odds of contacting other civilizations.
For example, David Brin states: "The Drake equation merely speaks of the number of sites at which ETIs spontaneously arise. The equation says nothing directly about the contact cross-section between an ETIS and contemporary human society". Because it is the contact cross-section that is of interest to the SETI community, many additional factors and modifications of the Drake equation have been proposed. Colonization It has been proposed to generalize the Drake equation to include additional effects of alien civilizations colonizing other star systems. Each original site expands with an expansion velocity v, and establishes additional sites that survive for a lifetime L. The result is a more complex set of 3 equations. Reappearance factor The Drake equation may furthermore be multiplied by how many times an intelligent civilization may occur on planets where it has happened once. Even if an intelligent civilization reaches the end of its lifetime after, for example, 10,000 years, life may still prevail on the planet for billions of years, permitting the next civilization to evolve. Thus, several civilizations may come and go during the lifespan of one and the same planet. Thus, if nr is the average number of times a new civilization reappears on the same planet where a previous civilization once has appeared and ended, then the total number of civilizations on such a planet would be 1 + nr, which is the actual reappearance factor added to the equation. The factor depends on what generally is the cause of civilization extinction. If it is generally by temporary uninhabitability, for example a nuclear winter, then nr may be relatively high. On the other hand, if it is generally by permanent uninhabitability, such as stellar evolution, then nr may be almost zero. In the case of total life extinction, a similar factor may be applicable for fl, that is, how many times life may appear on a planet where it has appeared once. METI factor Alexander Zaitsev said that to be in a communicative phase and emit dedicated messages are not the same. For example, humans, although being in a communicative phase, are not a communicative civilization; we do not practise such activities as the purposeful and regular transmission of interstellar messages. For this reason, he suggested introducing the METI factor (messaging to extraterrestrial intelligence) to the classical Drake equation. He defined the factor as "the fraction of communicative civilizations with clear and non-paranoid planetary consciousness", or alternatively expressed, the fraction of communicative civilizations that actually engage in deliberate interstellar transmission. The METI factor is somewhat misleading since active, purposeful transmission of messages by a civilization is not required for them to receive a broadcast sent by another that is seeking first contact. It is merely required they have capable and compatible receiver systems operational; however, this is a variable humans cannot accurately estimate. Biogenic gases Astronomer Sara Seager proposed a revised equation that focuses on the search for planets with biosignature gases. These gases are produced by living organisms that can accumulate in a planet atmosphere to levels that can be detected with remote space telescopes. 
The Seager equation looks like this: N = N∗ · FQ · FHZ · FO · FL · FS where: N = the number of planets with detectable signs of life N∗ = the number of stars observed FQ = the fraction of stars that are quiet FHZ = the fraction of stars with rocky planets in the habitable zone FO = the fraction of those planets that can be observed FL = the fraction that have life FS = the fraction on which life produces a detectable signature gas Seager stresses, "We're not throwing out the Drake Equation, which is really a different topic," explaining, "Since Drake came up with the equation, we have discovered thousands of exoplanets. We as a community have had our views revolutionized as to what could possibly be out there. And now we have a real question on our hands, one that's not related to intelligent life: Can we detect any signs of life in any way in the very near future?" Carl Sagan's version of the Drake equation American astronomer Carl Sagan made some modifications to the Drake equation and presented it in the 1980 program Cosmos: A Personal Voyage. The modified equation is shown below: N = N∗ · fp · ne · fl · fi · fc · fL where N = the number of civilizations in the Milky Way galaxy with which communication might be possible (i.e. which are on the current past light cone); and N∗ = the number of stars in the Milky Way Galaxy fp = the fraction of those stars that have planets. ne = the average number of planets that can potentially support life per star that has planets. fl = the fraction of planets that could support life that actually develop life at some point. fi = the fraction of planets with life that go on to develop intelligent life (civilizations). fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space. fL = the fraction of a planetary lifetime graced by a technological civilization == Criticism == Criticism of the Drake equation is varied. Firstly, many of the terms in the equation are largely or entirely based on conjecture. Star formation rates are well-known, and the incidence of planets has a sound theoretical and observational basis, but the other terms in the equation become very speculative. The uncertainties revolve around the present day understanding of the evolution of life, intelligence, and civilization, not physics. No statistical estimates are possible for some of the parameters, where only one example is known. The net result is that the equation cannot be used to draw firm conclusions of any kind, and the resulting margin of error is huge, far beyond what some consider acceptable or meaningful. Others point out that the equation was formulated before our understanding of the universe had matured. Astrophysicist Ethan Siegel said: The Drake equation, when it was put forth, made an assumption about the Universe that we now know is untrue: It assumed that the Universe was eternal and static in time.
As we learned only a few years after Frank Drake first proposed his equation, the Universe doesn’t exist in a steady state, where it’s unchanging in time, but rather has evolved from a hot, dense, energetic, and rapidly expanding state: a hot Big Bang that occurred over a finite duration in our cosmic past. One reply to such criticisms is that even though the Drake equation currently involves speculation about unmeasured parameters, it was intended as a way to stimulate dialogue on these topics. Then the focus becomes how to proceed experimentally. Indeed, Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference. === Fermi paradox === A civilization lasting for tens of millions of years could be able to spread throughout the galaxy, even at the slow speeds foreseeable with present-day technology. However, no confirmed signs of civilizations or intelligent life elsewhere have been found, either in this Galaxy or in the observable universe of 2 trillion galaxies. According to this line of thinking, the tendency to fill (or at least explore) all available territory seems to be a universal trait of living things, so the Earth should have already been colonized, or at least visited, but no evidence of this exists. Hence Fermi's question "Where is everybody?". A large number of explanations have been proposed to explain this lack of contact; a book published in 2015 elaborated on 75 different explanations. In terms of the Drake Equation, the explanations can be divided into three classes: Few intelligent civilizations ever arise. This is an argument that at least one of the first few terms, R∗ · fp · ne · fl · fi, has a low value. The most common suspect is fi, but explanations such as the rare Earth hypothesis argue that ne is the small term. Intelligent civilizations exist, but we see no evidence, meaning fc is small. Typical arguments include that civilizations are too far apart, it is too expensive to spread throughout the galaxy, civilizations broadcast signals for only a brief period of time, communication is dangerous, and many others. The lifetime of intelligent, communicative civilizations is short, meaning the value of L is small. Drake suggested that a large number of extraterrestrial civilizations would form, and he further speculated that the lack of evidence of such civilizations may be because technological civilizations tend to disappear rather quickly. Typical explanations include it is the nature of intelligent life to destroy itself, it is the nature of intelligent life to destroy others, they tend to be destroyed by natural events, and others. These lines of reasoning lead to the Great Filter hypothesis, which states that since there are no observed extraterrestrial civilizations despite the vast number of stars, at least one step in the process must be acting as a filter to reduce the final value. According to this view, either it is very difficult for intelligent life to arise, or the lifetime of technologically advanced civilizations, or the period of time they reveal their existence must be relatively short. An analysis by Anders Sandberg, Eric Drexler and Toby Ord suggests "a substantial ex ante (predicted) probability of there being no other intelligent life in our observable universe". == In popular culture == The equation was cited by Gene Roddenberry as supporting the multiplicity of inhabited planets shown on Star Trek, the television series he created. 
However, Roddenberry did not have the equation with him, and he was forced to "invent" it for his original proposal. The equation Roddenberry invented is: Ff²(MgE) − C¹Ri¹ · M = L/So Regarding Roddenberry's fictional version of the equation, Drake himself commented that a number raised to the first power is just the number itself. A commemorative plate on NASA's Europa Clipper mission, which launched on October 14, 2024, features a poem by the U.S. Poet Laureate Ada Limón, waveforms of the word 'water' in 103 languages, a schematic of the water hole, the Drake equation, and a portrait of planetary scientist Ron Greeley. The track Abiogenesis on the Carbon Based Lifeforms album World of Sleepers features the Drake equation in a spoken voice-over. == See also == Astrobiology – Science concerned with life in the universe Goldilocks principle – Analogy for optimal conditions Kardashev scale – Measure of a civilization's evolution Planetary habitability – Known extent to which a planet is suitable for life Ufology – Study of UFOs Lincoln index – Statistical measure The Search for Life: The Drake Equation, BBC documentary == Notes == == References == == Further reading == Morton, Oliver (2002). "A Mirror in the Sky". In Graham Farmelo (ed.). It Must Be Beautiful. Granta Books. ISBN 1-86207-555-7. Rood, Robert T.; James S. Trefil (1981). Are We Alone? The Possibility of Extraterrestrial Civilizations. New York: Scribner. ISBN 0684178427. Vakoch, Douglas A.; Dowd, Matthew F., eds. (2015). The Drake Equation: Estimating the Prevalence of Extraterrestrial Life Through the Ages. Cambridge, UK: Cambridge University Press. ISBN 978-1-10-707365-4. == External links == Interactive Drake Equation Calculator Frank Drake's 2010 article on "The Origin of the Drake Equation" "Only a matter of time, says Frank Drake". A Q&A with Frank Drake in February 2010 Drake, Frank (December 2004). "The E.T. Equation, Recalculated". Wired. Macromedia Flash page allowing the user to modify Drake's values from PBS's Nova "The Drake Equation", Astronomy Cast episode #23; includes full transcript Animated simulation of the Drake equation. (Archived 8 December 2015 at the Wayback Machine) "The Alien Equation", BBC Radio program Discovery (22 September 2010) "Reflections on the Equation" (PDF), by Frank Drake, 2013
Wikipedia/Drake_equation
The communication with extraterrestrial intelligence (CETI) is a branch of the search for extraterrestrial intelligence (SETI) that focuses on composing and deciphering interstellar messages that theoretically could be understood by another technological civilization. The best-known CETI experiment of its kind was the 1974 Arecibo message composed by Frank Drake. There are multiple independent organizations and individuals engaged in CETI research; the generic application of abbreviations CETI and SETI (search for extraterrestrial intelligence) in this article should not be taken as referring to any particular organization (such as the SETI Institute). CETI research has focused on four broad areas: mathematical languages, pictorial systems such as the Arecibo message, algorithmic communication systems (ACETI), and computational approaches to detecting and deciphering "natural" language communication. There remain many undeciphered writing systems in human communication, such as Linear A, discovered by archeologists. Much of the research effort is directed at how to overcome similar problems of decipherment that arise in many scenarios of interplanetary communication. On 13 February 2015, scientists (including Douglas Vakoch, David Grinspoon, Seth Shostak, and David Brin) at an annual meeting of the American Association for the Advancement of Science, discussed active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea. That same week, a statement was released, signed by many in the SETI community, that a "worldwide scientific, political, and humanitarian discussion must occur before any message is sent". On 28 March 2015, a related essay was written by Seth Shostak and published in The New York Times. == History == In the 19th century, many books and articles speculated about the possible inhabitants of other planets. Many people believed that intelligent beings might live on the Moon, Mars, and/or Venus. Since travel to other planets was not possible at the time, some people suggested ways to signal extraterrestrials even before radio was discovered. Carl Friedrich Gauss is often credited with an 1820 proposal that a giant triangle and three squares, the Pythagoras, could be drawn on the Siberian tundra. The outlines of the shapes would have been ten-mile-wide strips of pine forest, whereas the interiors could be filled with rye or wheat. Joseph Johann Littrow proposed in 1819 to use the Sahara as a sort of blackboard. Giant trenches several hundred yards wide could delineate twenty-mile-wide shapes. Then the trenches would be filled with water, and then enough kerosene could be poured on top of the water to burn for six hours. Using this method, a different signal could be sent every night. Meanwhile, other astronomers were looking for signs of life on other planets. In 1822, Franz von Paula Gruithuisen thought he saw a giant city and evidence of agriculture on the Moon, but astronomers using more powerful instruments refuted his claims. Gruithuisen also believed he saw evidence of life on Venus. Ashen light had previously been observed on the dark side of Venus, and he postulated that it was caused by a great fire festival put on by the inhabitants to celebrate their new emperor. Later he revised his position, stating that the Venusians could be burning their rainforest to make more farmland. By the late 1800s, the possibility of life on the Moon was put to rest. 
Astronomers at that time believed in the Kant-Laplace hypothesis, which stated that the farthest planets from the sun are the oldest – therefore Mars was more likely to have advanced civilizations than Venus. Subsequent investigations focused on contacting Martians. In 1877, Giovanni Schiaparelli announced he had discovered "canali" ("channels" in Italian, which occur naturally, and mistranslated as "canals", which are artificial) on Mars. This was followed by thirty years of enthusiasm about the possibility of life on Mars. Eventually the Martian canals proved illusory. The inventor Charles Cros was convinced that pinpoints of light observed on Mars and Venus were the lights of large cities. He spent years of his life trying to get funding for a giant mirror with which to signal the Martians. The mirror would be focused on the Martian desert, where the intense reflected sunlight could be used to burn figures into the Martian sand. Inventor Nikola Tesla mentioned many times during his career that he thought his inventions such as his Tesla coil, used in the role of a "resonant receiver", could be used to communicate with other planets, and that he even had observed repetitive signals of what he believed were extraterrestrial radio communications coming from Venus or Mars in 1899. These "signals" turned out to be terrestrial radiation, however. Around 1900, the Guzman Prize was created; the first person to establish interplanetary communication would be awarded 100,000 francs, under one stipulation: Mars was excluded because Madame Guzman thought communicating with Mars would be too easy to deserve a prize. == Mathematical and scientific languages == === Lincos (Lingua cosmica) === Published in 1960 by Hans Freudenthal, Lincos: Design of a Language for Cosmic Intercourse, expands upon Astraglossa to create a general-purpose language derived from basic mathematics and logic symbols. Several researchers have expanded further upon Freudenthal's work. A dictionary resembling Lincos was featured in the Carl Sagan novel Contact and its film adaptation. === Astraglossa === Published in 1963 by Lancelot Hogben, "Astraglossa" is an essay describing a system for combining numbers and operators in a series of short and long pulses. In Hogben's system, short pulses represent numbers, while trains of long pulses represent symbols for addition, subtraction, etc. === Carl Sagan === In the 1985 science fiction novel Contact, Carl Sagan explored in some depth how a message might be constructed to allow communication with an alien civilization, using prime numbers as a starting point, followed by various universal principles and facts of mathematics and science. Sagan also edited a nonfiction book on the subject. An updated collection of articles on the same topic was published in 2011. === A language based on the fundamental facts of science === Published in 1992 by Carl Devito and Richard Oehrle, A language based on the fundamental facts of science is a paper describing a language similar in syntax to Astraglossa and Lincos, but which builds its vocabulary around known physical properties. === Busch general-purpose binary language used in Lone Signal transmissions === In 2010, Michael W. Busch created a general-purpose binary language later used in the Lone Signal project to transmit crowdsourced messages to extraterrestrial intelligence (METI). 
This was followed by an attempt to extend the syntax used in the Lone Signal hailing message to communicate in a way that, while neither mathematical nor strictly logical, was nonetheless understandable given the prior definition of terms and concepts in the Lone Signal hailing message. == Pictorial messages == Pictorial communication systems seek to describe fundamental mathematical or physical concepts via simplified diagrams sent as bitmaps. These messages necessarily assume that the recipient has similar visual capabilities and can understand basic mathematics and geometry. A common critique of pictorial systems is that they presume a shared understanding of special shapes, which may not be the case with a species with substantially different vision, and therefore a different way of interpreting visual information. For instance, an arrow representing the movement of some object might be misinterpreted as a weapon firing. === Pioneer probes === Two etched plaques, known as the Pioneer plaques, were included aboard the Pioneer 10 and Pioneer 11 spacecraft when they launched in 1972 and 1973. The plaques depict the specific location of the Solar System within the galaxy and the Earth within the Solar System, as well as the form of the human body. === Voyager probes === Launched in 1977, the Voyager probes carried two golden records that were inscribed with diagrams similar to the Pioneer plaques, depicting the human form, the Solar System, and its location. Also included were recordings of images and sounds from Earth. === Arecibo message === The Arecibo message, transmitted in 1974, was a 1,679-pixel bitmap that, when properly arranged into 73 rows and 23 columns, shows the numbers one through ten; the atomic numbers of hydrogen, carbon, nitrogen, oxygen, and phosphorus; the formulas for the sugars and bases that make up the nucleotides of DNA; the number of nucleotides in the human genome; the double helix structure of DNA; a simple illustration of a human being and its height; the human population of Earth; a diagram of the Solar System; and an illustration of the Arecibo telescope with its diameter. === Cosmic Call messages === The Cosmic Call messages consisted of a few digital sections – "Rosetta Stone", a copy of the Arecibo Message, the Bilingual Image Glossary, and the Braastad message – as well as text, audio, video, and other image files submitted for transmission by people around the world. The "Rosetta Stone" was composed by Stéphane Dumas and Yvan Dutil, and represents a multi-page bitmap that builds a vocabulary of symbols representing numbers and mathematical operations. The message proceeds from basic mathematics to progressively more complex concepts, including physical processes and objects (such as a hydrogen atom). The message was designed with a noise-resistant format and characters that make it resistant to alteration by noise. These messages were transmitted in 1999 and 2003 from Evpatoria Planetary Radar in Russia under the scientific guidance of Alexander L. Zaitsev. Richard Braastad coordinated the overall project. Star systems to which the messages were sent include the following: == Multi-modal messages == === Teen-Age Message === The Teen-Age Message, composed by Russian scientists (Zaitsev, Gindilis, Pshenichner, Filippova) and teens, was transmitted from the 70-m dish of Evpatoria Deep Space Center in Ukraine to six star systems resembling that of the Sun on August 29 and September 3 and 4, 2001. 
The message consists of three parts: Section 1 represents a coherent-sounding radio signal with slow Doppler wavelength tuning to imitate transmission from the Sun's center. This signal was transmitted in order to help extraterrestrials detect the TAM and diagnose the radio propagation effect of the interstellar medium. Section 2 is analog information representing musical melodies performed on the theremin. This electric musical instrument produces a quasi-monochromatic signal, which is easily detectable across interstellar distances. There were seven musical compositions in the First Theremin Concert for Aliens. The 14-minute analog transmission of the theremin concert would take almost 50 hours by digital means; see The First Musical Interstellar Radio Message. Section 3 represents a well-known Arecibo-like binary digital information: the logotype of the TAM, bilingual Russian and English greeting to aliens, and image glossary. Star systems to which the message was sent are the following: === Cosmic Call 2 (Cosmic Call 2003) message === The Cosmic Call-2 message contained text, images, video, music, the Dutil/Dumas message, a copy of the 1974 Arecibo message, BIG = Bilingual Image Glossary, the AI program Ella, and the Braastad message. == Algorithmic messages == Algorithmic communication systems are a relatively new field within CETI. In these systems, which build upon early work on mathematical languages, the sender describes a small set of mathematic and logic symbols that form the basis for a rudimentary programming language that the recipient can run on a virtual machine. Algorithmic communication has a number of advantages over static pictorial and mathematical messages, including: localized communication (the recipient can probe and interact with the programs within a message, without transmitting a reply to the sender and then waiting years for a response), forward error correction (the message might contain algorithms that process data elsewhere in the message), and the ability to embed proxy agents within the message. In principle, a sophisticated program when run on a fast enough computing substrate, may exhibit complex behavior and perhaps, intelligence. === CosmicOS === CosmicOS, designed by Paul Fitzpatrick at MIT, describes a virtual machine that is derived from lambda calculus. === Logic Gate Matrices === Logic Gate Matrices (a.k.a. LGM), developed by Brian McConnell, describes a universal virtual machine that is constructed by connecting coordinates in an n-dimensional space via mathematics and logic operations, for example: (1,0,0) <-- (OR (0,0,1) (0,0,2)). Using this method, one may describe an arbitrarily complex computing substrate as well as the instructions to be executed on it. == Natural language messages == This research focuses on the event that we receive a signal or message that is either not directed at us (eavesdropping) or one that is in its natural communicative form. To tackle this difficult scenario, methods are being developed that will detect if a signal has structure indicative of an intelligent source, categorize the type of structure detected, and then decipher its content, from its physical level encoding and patterns to the parts-of-speech that encode internal and external ontologies. 
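One concrete screening statistic used in this line of work, described in more detail below, is the frequency–rank distribution of the tokens in a signal: token frequencies in human languages fall roughly on a straight line of slope −1 when frequency is plotted against rank on log–log axes. The following minimal Python sketch assumes the signal has already been segmented into tokens, which is itself a hard problem for an unknown signal.

import math
from collections import Counter

def zipf_slope(tokens):
    # Least-squares slope of log10(frequency) versus log10(rank).
    # Values near -1 are suggestive of Zipf-like, language-like structure.
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log10(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log10(f) for f in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

text = "the quick brown fox jumps over the lazy dog and the quick dog"
print(zipf_slope(text.split()))

On a toy input like this the estimate is of course meaningless; the heuristic only becomes informative on long token streams, and even then a Zipf-like slope is suggestive rather than conclusive.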
Primarily, this structure modeling focuses on the search for generic human and inter-species language universals to devise computational methods by which language may be discriminated from non-language, and core structural syntactic elements of unknown languages may be detected. Aims of this research include contributing to the understanding of language structure and the detection of intelligent language-like features in signals, in order to aid the search for extraterrestrial intelligence. The problem goal is therefore to separate language from non-language without dialogue, and learn something about the structure of language in the passing. The language may not be human (animals, aliens, computers, etc.), the perceptual space may be unknown, and human language structure cannot be presumed, but must begin somewhere. The language signal should be approached from a naive viewpoint, increasing ignorance and assuming as little as possible. If a sequence can be tokenized, that is, separated into "words", an unknown human language may be distinguished from many other data sequences by the frequency distribution of the tokens. Human languages conform to a Zipfian distribution, while many (but not all) other data sequences do not. It has been proposed that an alien language also might conform to such a distribution. When displayed in a log-log graph of frequency vs. rank, this distribution would appear as a somewhat straight line with a slope of approximately -1. SETI scientist Laurance Doyle explains that the slope of a line that represents individual tokens in a stream of tokens may indicate whether the stream contains linguistic or other structured content. If the line angles at 45°, the stream contains such content. If the line is flat, it does not. == CETI researchers == Frank Drake (SETI Institute): SETI pioneer, composed the Arecibo message. Dr John Elliott: research into developing strategies, which are based on receiving a 'natural' language message, that look at developing algorithms to detect if an ET signal has intelligent-like structure and if so, then how to decipher its content. Author of many papers in this area and a contributor to SETI's book on interstellar communication. Other contributions include message design and construction; member of: International Academy of Astronautics, SETI Permanent Study Group; International Task Group for the Post-detection identification of unknown radio signals. Laurence Doyle (SETI Institute): studies animal communication, and has developed statistical measures of complexity in animal utterances as well as human language. Stephane Dumas: developed Cosmic Call messages, as well as a general technique for generating 2-D symbols that remain recognizable even if corrupted by noise. Yvan Dutil: developed Cosmic Call messages with Stephane Dumas. Paul Fitzpatrick (MIT): developed CosmicOS system based on lambda calculus Brian McConnell: developed framework for algorithmic communication systems (ACETI) from 2000 to 2002. Marvin Minsky (MIT AI researcher): Believes that aliens may think similarly to humans because of shared constraints, permitting communication. First proposed the idea of including algorithms within an interstellar message. Carl Sagan: co-authored the Arecibo message and was heavily involved in SETI throughout his life. Douglas Vakoch (METI): editor of Archaeology, Anthropology, and Interstellar Communication, a 2014 essay collection on CETI. 
Alexander Zaitsev (IRE, Russia): composed Teen Age Message with Boris Pshenichner, Lev Gindilis, Lilia Filippova, et al., composed Bilingual Image Glossary for Cosmic Call 2003 Message, Scientific Manager of transmitting from Evpatoria Planetary Radar the Cosmic Call 1999, the Teen Age Message 2001, and the Cosmic Call 2003, Scientific consultant for A Message From Earth project. Michael W. Busch: (Lone Signal) created the binary encoding system for the ongoing Lone Signal hailing message. Jacob Haqq Misra: (Lone Signal) is the chief science officer for the ongoing Lone Signal active SETI project. == Interspecies communication == Some researchers have concluded that in order to communicate with extraterrestrial species, humanity must first try to communicate with Earth's intelligent animal species. John C. Lilly worked with interspecies communication by teaching dolphins English (successful with rhythms, not with understandability, given their different mouth/blowhole shapes). He practiced various disciplines of spirituality and also ingested psychedelic drugs such as LSD and (later) ketamine in the company of dolphins. He tried to determine whether he could communicate non-verbally with dolphins, and also tried to determine if some extraterrestrial radio signals are intelligent communications. Similarly, Laurance Doyle, Robert Freitas and Brenda McCowan compare the complexity of cetacean and human languages to help determine whether a specific signal from space is complex enough to represent a message that needs to be decoded. == See also == == References == == Further reading ==
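A small numerical detail behind the Arecibo message format described in the Pictorial messages section above is worth spelling out: 1,679 is a semiprime, the product of the primes 23 and 73, so a recipient trying to lay the bits out as a rectangle has only two non-trivial grids to test, one of which (73 rows by 23 columns) produces the intended pictures. A minimal illustration of that factor search:

def rectangle_shapes(n):
    # All non-trivial (rows, columns) arrangements of an n-bit message.
    return [(rows, n // rows) for rows in range(2, n) if n % rows == 0]

print(rectangle_shapes(1679))  # [(23, 73), (73, 23)] -- only two candidate grids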
Wikipedia/Communication_with_extraterrestrial_intelligence
Strategic Explorations of Exoplanets and Disks with Subaru (SEEDS) is a multi-year survey that used the Subaru Telescope on Mauna Kea, Hawaii, in an effort to directly image extrasolar planets and protoplanetary/debris disks around hundreds of nearby stars. SEEDS is a Japanese-led international project. It consists of some 120 researchers from a number of institutions in Japan, the U.S. and the EU. The survey's headquarters is at the National Astronomical Observatory of Japan (NAOJ), and the project is led by Principal Investigator Motohide Tamura. The goals of the survey are to address the following key issues in the study of extrasolar planets and disks: the detection and census of exoplanets in the regions around solar-mass and massive stars; the evolution of protoplanetary disks and debris disks; and the link between exoplanets and circumstellar disks. == Observations and Results == The direct imaging survey was carried out with a suite of high-contrast instrumentation at the large Subaru 8.2 m telescope, including a second-generation adaptive optics (AO) system with 188 actuators (AO188) and a dedicated coronagraph instrument called HiCIAO. Observations began in late October 2009 and were completed in early January 2015, having observed roughly 500 nearby stars (including duplicates). The survey was conducted in the H-band (1.65 micron), and once a planet/companion candidate was detected, it was also observed at other near-infrared wavelengths. SEEDS has reported four candidate planets to date. The first is GJ 758 b, with a mass of around 10–30 Jupiter masses, orbiting a Sun-like star. The projected distance from the central star to the companion is 29 AU, and the system is around 52 light years away. The second discovery was of a very faint planet orbiting a Sun-like star named GJ 504. The projected distance from the central star is 44 AU, and the system is 59 light years away. The central star itself is bright, visible to the naked eye (V ~ 5 mag), but the planet is very dim, 17–20 mag at infrared wavelengths. The planet's mass, estimated from its luminosity and age, is only 3–4.5 Jupiter masses. It is one of the lightest-mass planets ever imaged. The survey also discovered a likely superjovian-mass planet named Kappa Andromedae b, orbiting a young B-type star 2.8 times the mass of the Sun. HD 100546 b was confirmed as a planet with a disk system around a very young star as part of the SEEDS survey. SEEDS has also reported the detection of three brown dwarfs in the Pleiades cluster as part of the Open Cluster category survey, and several stellar or substellar companions around planetary systems known from radial velocity detections. SEEDS has detected interesting fine structures in disks around dozens of young stars. These disks exhibit gaps, spiral arms, rings, and other structures at radial distances similar to those where the outer planets are imaged. These structures can be considered to be "signposts" of planets. The results obtained on disks support the need for a new planet formation model. Additional planet and disk discoveries include: New high resolution imaging of the AB Aurigae system Detection of extended outer regions of the debris ring around HR 4796 A New reflected light imaging of the transitional disk gap around LkCa 15 Explorations for outer massive bodies around the transiting planet system HAT-P-7 Imaging discovery of the debris disk around HIP 79977 First infrared images of the inner gap in the 2MASS J16042165-2130284 transitional disk.
Direct imaging discovery of a large inner gap in the protoplanetary disk around PDS 70 Discovery of spiral structures in the transitional disk around SAO 206462 Discovery of a stellar companion to the extrasolar planet system HAT-P-7 Near-IR scattered light detection of the spiral-armed transitional disk of the star MWC 758 Direct imaging of the UX Tau A pre-transitional disk revealing gap structures Scattered light imaging of the MWC 80 protoplanetary disk at a historic minimum of the near-IR excess High-contrast imaging discovery of architecture in the LkCa 15 transitional disk Submillimeter and near-infrared observation of the transitional disk around Sz 91 First high resolution infrared images of circumstellar disk around SU Aur revealing tidal-like tails == See also == Definition of 'Extrasolar planet' Direct imaging of extrasolar planets List of extrasolar planets List of planet types == References ==
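To relate the projected separations quoted in this article to what the instrument must actually resolve on the sky, it helps to convert them into angles. The sketch below does so for the two companions whose separations and distances are given above; the conversion is the standard small-angle relation (1 arcsecond subtends 1 AU at 1 parsec), and the rounding is approximate.

LY_PER_PARSEC = 3.2616  # approximate

def separation_arcsec(sep_au, dist_ly):
    # Small-angle relation: separation in arcsec = separation [AU] / distance [pc].
    return sep_au / (dist_ly / LY_PER_PARSEC)

print(round(separation_arcsec(29, 52), 2))  # GJ 758 b: roughly 1.8 arcsec
print(round(separation_arcsec(44, 59), 2))  # GJ 504 b: roughly 2.4 arcsec

Separations of one to a few arcseconds around bright stars are exactly the regime where adaptive optics and a coronagraph such as HiCIAO are needed: the difficulty is not the angular resolution itself but suppressing the star's glare at those small angles.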
Wikipedia/Strategic_Explorations_of_Exoplanets_and_Disks_with_Subaru
The Center for Life Detection (CLD) is a collaboration among scientists and technologists from NASA’s Ames Research Center and Goddard Spaceflight Center, which formed in 2018 to support the planning and implementation of missions that will seek evidence of life beyond Earth. CLD is supported by NASA’s Planetary Science Division and is one of three core teams in the Network for Life Detection. CLD’s perspectives on life detection science and technology development are summarized in “Groundwork for Life Detection”, a white paper submitted to and cited in the 2023-2032 Planetary Science and Astrobiology Decadal Survey. == Activities == The search for life elsewhere is among the NASA Science Mission Directorate's high-level priorities (Science 2020-2024: A Vision for Scientific Excellence, Priority 1). The Center for Life Detection was founded to support this search by: conducting research on biosignature “detectability” to help inform target/sample selection and measurement strategies/requirements; developing tools and engagement activities that enable members of the broader astrobiology community to formulate their knowledge, research, and expertise in a way that facilitates use in mission planning; supporting the instrument development community in mapping existing and emerging measurement technology to life detection science objectives, in order to establish science traceability and identify technology development needs. == Research == Multiple worlds within and beyond the Solar System are considered potentially habitable by virtue of the presence of liquid water, and mission concepts to seek evidence of life on these worlds are being developed. On Earth, the abundance distribution of life and its products ranges over many orders of magnitude, as a function of multiple environmental and ecological factors. Similar variability can be expected both within and among inhabited worlds beyond Earth, if any exist, and understanding it can inform target selection, observing strategies, and measurement requirements for missions that seek evidence of life. To build this understanding, scientists in CLD conduct research to assess how environmental factors affect “detectability” – the extent to which life, if present, would express itself in characteristic, observable features. This research is responsive to a recommendation in the National Academies Consensus Report on Astrobiology Strategy (NASEM ABS): “Detectability: NASA should support expanding biosignature research to address gaps in understanding biosignature preservation and the breadth of possible false positives and false negative signatures”. The research is conducted with applications to Mars, Ocean Worlds, and Exoplanets. == The Life Detection Forum Project == The astrobiology knowledge that will be required for life detection mission concept development and science definition is diverse, often taking forms that do not map clearly to mission design, and diffuse, in that it is spread across many scientific disciplines and a wide-ranging literature. The Life Detection Forum (LDF) project seeks to develop a ‘living’, community-driven suite of tools to centralize the requisite body of knowledge and organize it in a way that streamlines its use in program planning, mission concept development, and interpretation of findings. Researchers in CLD work actively to engage a diverse range of communities in the use of this tool in order to harness expertise that is not well represented in the traditional sphere of space science. 
The LDF is being built as a web-based platform that can be populated and continually updated by a broad user base, in order to track the evolving state of knowledge regarding life detection science and technology. The core module of the system, released in early 2021, is the Life Detection Knowledge Base (LDKB). LDKB is a system for organizing user-provided knowledge about objects, patterns, or processes that might serve as evidence for life according to their bearing on the potential for false positive or false negative results. A technology-oriented counterpart to LDKB, the Measurement Technology Module (MTM), is currently in development. MTM will house user-contributed information regarding current and emerging technologies that could be used to support life detection objectives. When combined, LDKB and MTM will provide a basis for establishing science traceability and identifying technology development needs. The Life Detection Forum Project is responsive to the NASEM ABS recommendation: “NASA should aid the community in developing a comprehensive framework [...] to guide testing and evaluation of in situ and remote biosignatures.” == Workshops == CLD sought extensive community involvement in the development of LDF tools and early stages of LDKB content development, through a series of workshops and hands-on community engagement activities. Introduction to the Life Detection Forum Project (Summer 2019) Special session, approx. 130 participants Introduction to, and feedback on, the LDF concept and a basic working model Criteria for Life Detection Measurements (Fall 2020) Two community workshops, 60+ participants Establish & vet the evaluative organizing basis for LDKB The Life Detection Knowledge Base (January 2021) Rollout of LDKB at a community workshop of > 150 participants LDKB Content Development Groups (Spring-Fall 2021) CLD-facilitated, community-based user groups (100+ participants, active 6–8 months) Content provision in 5 theme areas, beta testing of LDKB, build & train user base Future of the Search for Life (Spring 2022) 2-week workshop, 100 participants Engage scientists and technologists to discuss high-priority approaches to life detection, define measurement requirements, and identify corresponding measurement technology gaps == References == == External links == Astrobiology.nasa.gov https://www.nfold.org/
Wikipedia/Center_for_Life_Detection_Science
The SOPHIE (Spectrographe pour l’Observation des Phénomènes des Intérieurs stellaires et des Exoplanètes, literally meaning "spectrograph for the observation of the phenomena of the stellar interiors and of the exoplanets") échelle spectrograph is a high-resolution echelle spectrograph installed on the 1.93 m reflector telescope at the Haute-Provence Observatory located in south-eastern France. The purpose of this instrument is asteroseismology and extrasolar planet detection by the radial velocity method. It builds upon and replaces the older ELODIE spectrograph. This instrument was made available for use by the general astronomical community in October 2006. == Characteristics == The covered wavelength range extends from 387.2 to 694.3 nanometers. The spectrograph is fed from the Cassegrain focus through either one of two separate optical fiber sets, yielding two different spectral resolutions (HE and HR modes). The instrument is entirely computer-controlled. A standard data reduction pipeline automatically processes the data upon every CCD readout cycle. HR mode is the high-resolution mode. It incorporates a 40 micrometre exit slit to achieve a high spectral resolution of R = 75000. HE mode is the high-efficiency mode. It is used when higher throughput is desired, particularly for faint objects; in this mode the spectral resolution is set to R = 40000. The R2 échelle diffraction grating has 52.65 grooves per millimeter and was manufactured by Richardson Gratings. It is blazed at 65° and its size is 20.4 cm × 40.8 cm. It is mounted in a fixed configuration. The spectrum is projected onto the E2V Technologies type 44-82 CCD detector of 4096 × 2048 pixels kept at a constant temperature of –100 °C. This grating yields 41 spectral orders, of which 39 are currently extracted, to obtain wavelengths between 387.2 nm and 694.3 nm. == Performance == In HE mode, a signal-to-noise ratio (per pixel) of 27 was reached in 90 min for an object of magnitude 14.5 in the V band. The stability of the instrument can be described by the lowest dispersion achievable for radial velocity observations, in m/s. In HR mode the short-term stability has been measured to be 1.3 m/s, while it is 2 m/s for longer timescales. == See also == CORALIE spectrograph HARPS spectrograph == References == == External links == SOPHIE Home Page
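The quoted figure of 41 spectral orders between 387.2 nm and 694.3 nm can be roughly cross-checked from the grating parameters given in the Characteristics section above. The sketch below assumes the simple Littrow grating condition m·λ = 2d·sin(θblaze), which ignores the details of SOPHIE's actual optical layout, so it is only an order-of-magnitude consistency check rather than a description of the real instrument.

```python
# Back-of-envelope estimate of SOPHIE's echelle order range.
# Assumes the Littrow condition m * lambda = 2 * d * sin(blaze),
# which is only an approximation to the real optical design.
import math

grooves_per_mm = 52.65
blaze_deg = 65.0
lambda_min_nm = 387.2
lambda_max_nm = 694.3

d_nm = 1e6 / grooves_per_mm                         # groove spacing in nm (~18,993 nm)
k = 2 * d_nm * math.sin(math.radians(blaze_deg))    # m * lambda at blaze (~34,400 nm)

m_red = math.floor(k / lambda_max_nm)               # lowest (reddest) usable order
m_blue = math.ceil(k / lambda_min_nm)               # highest (bluest) usable order
print(m_red, m_blue, m_blue - m_red + 1)            # -> 49 89 41, consistent with the quoted 41 orders
```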
Wikipedia/SOPHIE_échelle_spectrograph
The Hart–Tipler conjecture is the idea that an absence of detectable Von Neumann probes is contrapositive evidence that no intelligent life exists outside of the Solar System. This idea was first proposed in opposition to the Drake equation in a 1975 paper by Michael H. Hart titled "Explanation for the Absence of Extraterrestrials on Earth". Assuming that the probes traveled at 1/10 the speed of light and that no time was lost in building new ships upon arriving at the destination, Hart surmised that a wave of Von Neumann probes could cross the galaxy in approximately 650,000 years, a brief span of time relative to the estimated age of the universe at 13.7 billion years. Hart’s argument was extended by cosmologist Frank Tipler in his 1981 paper entitled "Extraterrestrial intelligent beings do not exist". The conjecture is the first of many proposed solutions to the Fermi paradox (the conflict between the lack of obvious evidence for alien life and various high probability estimates for its existence). In this case, the solution is that there is no other intelligent life because such estimates are incorrect. The conjecture is named after astrophysicist Michael H. Hart and mathematical physicist and cosmologist Frank Tipler. == Background == There is no reliable or reproducible evidence that aliens have visited Earth, and no transmissions or other evidence of intelligent extraterrestrial life have been detected or observed anywhere other than Earth in the Universe. If intelligent life existed elsewhere, the argument runs, it would by now have produced enough self-replicating spacecraft, known as von Neumann probes, to have reached the Solar System and been detected; no such probes have been found. This absence runs counter to the knowledge that the Universe is filled with a very large number of planets, some of which likely hold conditions hospitable for life, and that life typically expands until it fills all available niches. These contradictory observations form the basis for the Fermi paradox, of which the Hart–Tipler conjecture is one proposed solution. == Relationship to other proposed Fermi paradox solutions == The firstborn hypothesis is a special case of the Hart–Tipler conjecture which states that no other intelligent life has been discovered because humanity is the first intelligent life in the universe. According to the Berserker hypothesis, the absence of interstellar probes is not evidence of life's absence, since such probes could "go berserk" and destroy other civilizations, before self-destructing. == References ==
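Hart's 650,000-year figure depends on his specific assumptions about how a colonization wave propagates, which are not reproduced here, but a simple distance-over-speed estimate with an assumed galactic diameter of roughly 100,000 light-years already gives the same order of magnitude:

```python
# Order-of-magnitude check of Hart's galaxy-crossing timescale.
# The 100,000 light-year diameter is an assumed round figure; Hart's own
# colonization-wave model yields ~650,000 years under his assumptions.
galaxy_diameter_ly = 100_000    # assumed diameter of the Milky Way, in light-years
probe_speed_c = 0.1             # probe speed as a fraction of the speed of light

crossing_time_yr = galaxy_diameter_ly / probe_speed_c   # light-years / (fraction of c) -> years
print(f"{crossing_time_yr:.0f} years")                  # -> 1000000 years, same order as Hart's estimate
```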
Wikipedia/Hart–Tipler_conjecture
The Terrestrial Planet Finder (TPF) was a proposed project by NASA to construct a system of space telescopes for detecting extrasolar terrestrial planets. TPF was postponed several times and finally cancelled in 2011. There were two telescope systems under consideration, the TPF-I, which had several small telescopes, and TPF-C, which used one large telescope. == History == In May 2002, NASA chose two TPF mission architecture concepts for further study and technology development. Each would use a different means to achieve the same goal—to block the light from a parent star in order to see its much smaller, dimmer planets. The technological challenge of imaging planets near their much brighter star has been likened to finding a firefly near the beam of a distant searchlight. Additional goals of the mission would include the characterization of the surfaces and atmospheres of newfound planets, and looking for the chemical signatures of life. The two planned architectures were: Infrared astronomical interferometer (TPF-I): Multiple small telescopes on a fixed structure or on separated spacecraft floating in precision formation would simulate a much larger, very powerful telescope. The interferometer would use a technique called nulling to reduce the starlight by a factor of one million, thus enabling the detection of the very dim infrared emission from the planets. Visible Light Coronagraph (TPF-C): A large optical telescope, with a mirror three to four times bigger and at least 100 times more precise than the Hubble Space Telescope, would collect starlight and the very dim reflected light from the planets. The telescope would have special optics to reduce the starlight by a factor of one billion, thus enabling astronomers to detect faint planets. NASA and Jet Propulsion Laboratory (JPL) were to issue calls for proposals seeking input on the development and demonstration of technologies to implement the two architectures, and on scientific research relevant to planet finding. Launch of TPF-C had been anticipated to occur around 2014, and TPF-I possibly by 2020. According to NASA's 2007 budget documentation, released on 6 February 2006, the project was deferred indefinitely. In June 2006, a House of Representatives subcommittee voted to provide funding for the TPF along with the long-sought mission to Europa, a moon of Jupiter that might harbor extraterrestrial life. Congressional spending limits under House Resolution 20 passed on 31 January 2007, by the United States House of Representatives and 14 February by the U.S. Senate postponed the program indefinitely. Actual funding has not materialized, and TPF remains a concept. In June 2011, the TPF (and SIM) programs were reported as "cancelled". 
== Top 10 target stars == == See also == Automated Planet Finder – Robotic optical telescope searching for extrasolar planets CoRoT – European space telescope that operated between 2006 and 2014 Darwin – 2007 European study concept of an array of space observatories High Accuracy Radial Velocity Planet Searcher – Instrument for detecting planets James Webb Space Telescope – NASA/ESA/CSA space telescope launched in 2021 Kepler space telescope – NASA space telescope for exoplanetology (2009–2018) List of NASA cancellations Navigator Program – NASA project Space Interferometry Mission – Cancelled NASA space telescope == References == == External links == Canceling NASA's Terrestrial Planet Finder: The White House's Increasingly Nearsighted "Vision" For Space Exploration Congressional Inaction Leaves Science Still Devastated. Planetary Society (2006-11-26). Current status of TPF development work (March 2007) Interferometric Nulling at TNO Planet Imager (PI)
Wikipedia/Terrestrial_Planet_Finder
Extraterrestrial intelligence (ETI) refers to hypothetical intelligent extraterrestrial life. No such life has ever been verifiably observed to exist. The question of whether other inhabited worlds might exist has been debated since ancient history. The modern form of the concept emerged when the Copernican Revolution demonstrated that the Earth was a planet revolving around the Sun, and other planets were, conversely, other worlds. The question of whether other inhabited planets or moons exist was a natural consequence of this new understanding. It has become one of the most speculative questions in science and is a central theme of both science fiction and popular culture. An alternative name for it is "Extraterrestrial Technological Instantiations" (ETI). The term was coined to avoid the use of terms such as "civilizations", "species", and "intelligence", as those may prove to be ambiguous and open to interpretation, or simply inapplicable in its local context. == Intelligence == Intelligence is, along with the more precise concept of sapience, used to describe extraterrestrial life with similar cognitive abilities as humans. Another interchangeable term is sophoncy, being wise or wiser, first coined by Karen Anderson and published in the 1966 works by her husband Poul Anderson. Sentience, like consciousness, is a concept sometimes mistakenly used to refer to the concept of intelligence and sapience, since it does not exclude forms of life that are non-sapient (or more broadly non-intelligent or non-conscious). The term extraterrestrial civilization frames a more particular case of extraterrestrial intelligence. It is the possible long-term result of intelligent and specifically sapient extraterrestrial life. == Probability == The Copernican principle is generalized to the relativistic concept that humans are not privileged observers of the universe. Many prominent scientists, including Stephen Hawking have proposed that the sheer scale of the universe makes it improbable for intelligent life not to have emerged elsewhere. However, Fermi's Paradox highlights the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilization and humanity's lack of contact with, or evidence for, such civilizations. So far, there is no observation of extraterrestrial life, including intelligent extraterrestrial life. The Kardashev scale is a speculative method of measuring a civilization's level of technological advancement, based on the amount of energy a civilization is able to utilize. The Drake equation is a probabilistic framework used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. == Search for extraterrestrial intelligence == There has been a search for signals from extraterrestrial intelligence for several decades, with no significant results. Active SETI (Active Search for Extra-Terrestrial Intelligence) is the attempt to send messages to intelligent extraterrestrial life. Active SETI messages are usually sent in the form of radio signals. Physical messages like that of the Pioneer plaque may also be considered an active SETI message. Communication with extraterrestrial intelligence (CETI) is a branch of the search for extraterrestrial intelligence that focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. The best-known CETI experiment was the 1974 Arecibo message composed by Frank Drake and Carl Sagan. 
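The Drake equation mentioned above is simply a product of factors, N = R* · fp · ne · fl · fi · fc · L. The sketch below evaluates it for one purely illustrative set of parameter values; none of these numbers are taken from the article, and published estimates for the individual factors span many orders of magnitude.

```python
# Illustrative evaluation of the Drake equation.
# All parameter values below are assumptions chosen only for this example;
# real estimates for each factor vary enormously.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable, communicative civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.0,   # average star formation rate (stars per year)
    f_p=0.5,      # fraction of stars with planets
    n_e=2,        # potentially habitable planets per planetary system
    f_l=0.1,      # fraction of those on which life appears
    f_i=0.01,     # fraction of life-bearing planets that develop intelligence
    f_c=0.1,      # fraction of intelligent species that become detectable
    L=10_000,     # years a civilization remains detectable
)
print(N)  # -> 1.0 with these illustrative inputs
```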
There are multiple independent organizations and individuals engaged in CETI research. The U.S. government's position, in line with that of most relevant experts, is that "chances of contact with an extraterrestrial intelligence are extremely small, given the distances involved." This line of thinking has led some to conclude that first contact might be made with extraterrestrial artificial intelligence, rather than with biological beings. The Wow! signal remains the best candidate for an extraterrestrial radio signal ever detected, though the fact that no similar signal has ever been observed again makes attribution of the signal to any cause difficult if not impossible. On 14 June 2022, astronomers working with China's FAST telescope reported the possibility of having detected artificial (presumably alien) signals, but cautioned that further studies are required to determine whether some kind of natural radio interference may be the source. On 18 June 2022, Dan Werthimer, chief scientist for several SETI-related projects, reportedly noted that “These signals are from radio interference; they are due to radio pollution from earthlings, not from E.T.” == Potential cultural impact of extraterrestrial contact == The potential changes from extraterrestrial contact could vary greatly in magnitude and type, based on the extraterrestrial civilization's level of technological advancement, degree of benevolence or malevolence, and level of mutual comprehension between itself and humanity. Some theories suggest that an extraterrestrial civilization could be advanced enough to dispense with biology, living instead inside advanced computers. The medium through which humanity is contacted, be it electromagnetic radiation, direct physical interaction, extraterrestrial artefact, or otherwise, may also influence the results of contact. Incorporating these factors, various systems have been created to assess the implications of extraterrestrial contact. The implications of extraterrestrial contact, particularly with a technologically superior civilization, have often been likened to the meeting of two vastly different human cultures on Earth, a historical precedent being the Columbian Exchange. Such meetings have generally led to the destruction of the civilization receiving contact (as opposed to the "contactor", which initiates contact), and therefore destruction of human civilization is a possible outcome. However, the absence of any such contact to date means such conjecture is largely speculative. == UFOlogy == The extraterrestrial hypothesis is the idea that some UFOs are vehicles containing or sent by extraterrestrial beings (usually called aliens in this context). As an explanation for UFOs, ETI is sometimes contrasted with EDI (extradimensional intelligence), for example by J. Allen Hynek. In 2023, House lawmakers held a hearing to examine how the executive branch handles reports of UFOs. == In culture == The theories and reception of the probability of intelligent life have been a recurring cultural element, particularly in popular culture since the prospect and achievement of spaceflight. In 2003, New Mexico even declared 14 February to be Extraterrestrial Culture Day.
== See also == Extraterrestrial life Rare Earth hypothesis Cosmic pluralism Fermi Paradox Extraterrestrials in fiction First contact (science fiction) Contact (1997 film) Life origination beyond planets Messaging to Extra-Terrestrial Intelligence (METI or Active SETI) Quiet and loud aliens Sapience Sentiocentrism Extraterrestrial: The First Sign of Intelligent Life Beyond Earth, 2021 popular science book by Avi Loeb == References == == External links == An astrophysicist's view of UFOs (Adam Frank; New York Times; 30 May 2021)
Wikipedia/Extraterrestrial_intelligence
ELODIE was an echelle spectrograph installed on the 1.93m reflector at the Observatoire de Haute-Provence in south-eastern France. Its optical instrumentation was developed by André Baranne from the Marseille Observatory. The purpose of the instrument was extrasolar planet detection by the radial velocity method. ELODIE's first light was achieved in 1993. The instrument was decommissioned in August 2006 and replaced in September 2006 by SOPHIE, a new instrument of the same type but with improved features. == Characteristics == The instrument could observe the electromagnetic spectrum over a wavelength range of 389.5 nm to 681.5 nm in a single exposure, split into 67 spectral orders. The instrument, which was located in a temperature-controlled room, was fed with optical fibers from the Cassegrain focus. The observatory provided an integrated data reduction pipeline which fully reduced the spectra immediately after acquisition and allowed the user to measure radial velocities to an accuracy as good as ±7 m/s. Over 34,000 spectra were taken with ELODIE, over 20,000 of which are publicly available through a dedicated on-line archive. The instrument was the result of a collaboration between the observatories of Haute-Provence, Geneva and Marseille. A publication describing the instrument appeared in Astronomy & Astrophysics Supplements. == Discovered planets == The first extrasolar planet to be discovered orbiting a Sun-like star, 51 Pegasi b, was discovered in 1995 using ELODIE. Michel Mayor and Didier Queloz received the Nobel Prize in Physics in 2019 for their achievement. Over twenty such planets have been found with ELODIE. The instrument was also used to find a planet by the transit method. == See also == CORALIE spectrograph is a similar instrument at La Silla Observatory in Chile List of extrasolar planets == References == == External links == (in English) The ELODIE Archive
Wikipedia/ELODIE_spectrograph
The Berkeley Open Infrastructure for Network Computing (BOINC, pronounced –rhymes with "oink") is an open-source middleware system for volunteer computing (a type of distributed computing). Developed originally to support SETI@home, it became the platform for many other applications in areas as diverse as medicine, molecular biology, mathematics, linguistics, climatology, environmental science, and astrophysics, among others. The purpose of BOINC is to enable researchers to utilize processing resources of personal computers and other devices around the world. BOINC development began with a group based at the Space Sciences Laboratory (SSL) at the University of California, Berkeley, and led by David P. Anderson, who also led SETI@home. As a high-performance volunteer computing platform, BOINC brings together 34,236 active participants employing 136,341 active computers (hosts) worldwide, processing daily on average 20.164 PetaFLOPS as of 16 November 2021 (it would be the 21st largest processing capability in the world compared with an individual supercomputer). The National Science Foundation (NSF) funds BOINC through awards SCI/0221529, SCI/0438443 and SCI/0721124. Guinness World Records ranks BOINC as the largest computing grid in the world. BOINC code runs on various operating systems, including Microsoft Windows, macOS, Android, Linux, and FreeBSD. BOINC is free software released under the terms of the GNU Lesser General Public License (LGPL). == History == BOINC was originally developed to manage the SETI@home project. David P. Anderson has said that he chose its name because he wanted something that was not "imposing", but rather "light, catchy, and maybe - like 'Unix' - a little risqué", so he "played around with various acronyms and settled on 'BOINC'". The original SETI client was a non-BOINC software exclusively for SETI@home. It was one of the first volunteer computing projects, and not designed with a high level of security. As a result, some participants in the project attempted to cheat the project to gain "credits", while others submitted entirely falsified work. BOINC was designed, in part, to combat these security breaches. The BOINC project started in February 2002, and its first version was released on April 10, 2002. The first BOINC-based project was Predictor@home, launched on June 9, 2004. In 2009, AQUA@home deployed multi-threaded CPU applications for the first time, followed by the first OpenCL application in 2010. As of 15 August 2022, there are 33 projects on the official list. There are also, however, BOINC projects not included on the official list. Each year, an international BOINC Workshop is hosted to increase collaboration among project administrators. In 2021, the workshop was hosted virtually. While not affiliated with BOINC officially, there have been several independent projects that reward BOINC users for their participation, including Charity Engine (sweepstakes based on processing power with prizes funded by private entities who purchase computational time of CE users), Bitcoin Utopia (now defunct), and Gridcoin (a blockchain which mints coins based on processing power). == Design and structure == BOINC is software that can exploit the unused CPU and GPU cycles on computer hardware to perform scientific computing. In 2008, BOINC's website announced that Nvidia had developed a language called CUDA that uses GPUs for scientific computing. With NVIDIA's assistance, several BOINC-based projects (e.g., MilkyWay@home. 
SETI@home) developed applications that run on NVIDIA GPUs using CUDA. BOINC added support for the ATI/AMD family of GPUs in October 2009. The GPU applications run from 2 to 10 times faster than the former CPU-only versions. GPU support (via OpenCL) was added for computers using macOS with AMD Radeon graphic cards, with the current BOINC client supporting OpenCL on Windows, Linux, and macOS. GPU support is also provided for Intel GPUs. BOINC consists of a server system and client software that communicate to process and distribute work units and return results. === Mobile application === A BOINC app also exists for Android, allowing every person owning an Android device – smartphone, tablet and/or Kindle – to share their unused computing power. The user is allowed to select the research projects they want to support, if it is in the app's available project list. By default, the application will allow computing only when the device is connected to a WiFi network, is being charged, and the battery has a charge of at least 90%. Some of these settings can be changed to users needs. Not all BOINC projects are available and some of the projects are not compatible with all versions of Android operating system or availability of work is intermittent. Currently available projects are Asteroids@home, Einstein@Home, LHC@home, Moo! Wrapper, Rosetta@home, World Community Grid and Yoyo@home. As of September 2021, the most recent version of the mobile application can only be downloaded from the BOINC website or the F-Droid repository as the official Google Play store does not allow downloading and running executables not signed by the app developer and each BOINC project has their own executable files. === User interfaces === BOINC can be controlled remotely by remote procedure calls (RPC), from the command line, and from a BOINC Manager. BOINC Manager currently has two "views": the Advanced View and the Simplified GUI. The Grid View was removed in the 6.6.x clients as it was redundant. The appearance (skin) of the Simplified GUI is user-customizable, in that users can create their own designs. === Account managers === A BOINC Account Manager is an application that manages multiple BOINC project accounts across multiple computers (CPUs) and operating systems. Account managers were designed for people who are new to BOINC or have several computers participating in several projects. The account manager concept was conceived and developed jointly by GridRepublic and BOINC. Current and past account managers include: BAM! (BOINC Account Manager) (The first publicly available Account Manager, released for public use on May 30, 2006) GridRepublic (Follows the ideas of simplicity and neatness in account management) Charity Engine (Non-profit account manager for hire, uses prize draws and continuous charity fundraising to motivate people to join the grid) Science United (An account manager designed to make BOINC easier to use which automatically selects vetted BOINC projects for users based on desired research areas such as "medicine" or "physics") Dazzler (Open-source Account Manager, to ease institutional management resources) === Credit system === The BOINC Credit System is designed to avoid bad hardware and cheating by validating results before granting credit. The credit management system helps to ensure that users are returning results which are both statistically and scientifically accurate. 
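A common way volunteer-computing platforms guard against faulty hardware and cheating is replicated computation: the same work unit is issued to several hosts, and credit is granted only once a quorum of returned results agree. The sketch below illustrates that general idea; it is a conceptual illustration rather than BOINC's actual validator code, and the function names, quorum size and exact-match comparison are assumptions made for the example (real validators typically allow small numerical tolerances).

```python
# Conceptual sketch of quorum-based result validation, in the spirit of
# replicated volunteer computing. This is NOT BOINC's real validator;
# names, the quorum size and the agreement test are illustrative assumptions.
from collections import Counter

def canonical_result(results, quorum=2):
    """Return the result value reported by at least `quorum` hosts, else None."""
    counts = Counter(results.values())
    value, hits = counts.most_common(1)[0]
    return value if hits >= quorum else None

# Three hosts return results for the same work unit; one host is faulty.
returned = {"host_a": 42.0, "host_b": 42.0, "host_c": 41.1}
valid = canonical_result(returned)
if valid is not None:
    # Grant credit only to hosts whose result matches the canonical one.
    credited = [h for h, r in returned.items() if r == valid]
    print("canonical result:", valid, "credited hosts:", credited)
else:
    print("no quorum reached; the work unit would be reissued to another host")
```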
Online volunteer computing is a complicated and variable mix of long-term users, retiring users and new users with different personal aspirations. == Projects == BOINC is used by many groups and individuals. Some BOINC projects are based at universities and research labs while others are independent areas of research or interest. === Active === === Completed === == See also == == References == == External links == Official website
Wikipedia/Berkeley_Open_Infrastructure_for_Network_Computing
The term "information algebra" refers to mathematical techniques of information processing. Classical information theory goes back to Claude Shannon. It is a theory of information transmission, looking at communication and storage. However, it has not been considered so far that information comes from different sources and that it is therefore usually combined. It has furthermore been neglected in classical information theory that one wants to extract those parts out of a piece of information that are relevant to specific questions. A mathematical phrasing of these operations leads to an algebra of information, describing basic modes of information processing. Such an algebra involves several formalisms of computer science, which seem to be different on the surface: relational databases, multiple systems of formal logic or numerical problems of linear algebra. It allows the development of generic procedures of information processing and thus a unification of basic methods of computer science, in particular of distributed information processing. Information relates to precise questions, comes from different sources, must be aggregated, and can be focused on questions of interest. Starting from these considerations, information algebras (Kohlas 2003) are two-sorted algebras ( Φ , D ) {\displaystyle (\Phi ,D)} : Where Φ {\displaystyle \Phi } is a semigroup, representing combination or aggregation of information, and D {\displaystyle D} is a lattice of domains (related to questions) whose partial order reflects the granularity of the domain or the question, and a mixed operation representing focusing or extraction of information. == Information and its operations == More precisely, in the two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)} , the following operations are defined Additionally, in D {\displaystyle D} the usual lattice operations (meet and join) are defined. == Axioms and definition == The axioms of the two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)} , in addition to the axioms of the lattice D {\displaystyle D} : A two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)} satisfying these axioms is called an Information Algebra. == Order of information == A partial order of information can be introduced by defining ϕ ≤ ψ {\displaystyle \phi \leq \psi } if ϕ ⊗ ψ = ψ {\displaystyle \phi \otimes \psi =\psi } . This means that ϕ {\displaystyle \phi } is less informative than ψ {\displaystyle \psi } if it adds no new information to ψ {\displaystyle \psi } . The semigroup Φ {\displaystyle \Phi } is a semilattice relative to this order, i.e. ϕ ⊗ ψ = ϕ ∨ ψ {\displaystyle \phi \otimes \psi =\phi \vee \psi } . Relative to any domain (question) x ∈ D {\displaystyle x\in D} a partial order can be introduced by defining ϕ ≤ x ψ {\displaystyle \phi \leq _{x}\psi } if ϕ ⇒ x ≤ ψ ⇒ x {\displaystyle \phi ^{\Rightarrow x}\leq \psi ^{\Rightarrow x}} . It represents the order of information content of ϕ {\displaystyle \phi } and ψ {\displaystyle \psi } relative to the domain (question) x {\displaystyle x} . == Labeled information algebra == The pairs ( ϕ , x ) {\displaystyle (\phi ,x)\ } , where ϕ ∈ Φ {\displaystyle \phi \in \Phi } and x ∈ D {\displaystyle x\in D} such that ϕ ⇒ x = ϕ {\displaystyle \phi ^{\Rightarrow x}=\phi } form a labeled Information Algebra. 
More precisely, in the two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)\ } , the following operations are defined == Models of information algebras == Here follows an incomplete list of instances of information algebras: Relational algebra: The reduct of a relational algebra with natural join as combination and the usual projection is a labeled information algebra, see Example. Constraint systems: Constraints form an information algebra (Jaffar & Maher 1994). Semiring valued algebras: C-Semirings induce information algebras (Bistarelli, Montanari & Rossi1997);(Bistarelli et al. 1999);(Kohlas & Wilson 2006). Logic: Many logic systems induce information algebras (Wilson & Mengin 1999). Reducts of cylindric algebras (Henkin, Monk & Tarski 1971) or polyadic algebras are information algebras related to predicate logic (Halmos 2000). Module algebras: (Bergstra, Heering & Klint 1990);(de Lavalette 1992). Linear systems: Systems of linear equations or linear inequalities induce information algebras (Kohlas 2003). === Worked-out example: relational algebra === Let A {\displaystyle {\mathcal {A}}} be a set of symbols, called attributes (or column names). For each α ∈ A {\displaystyle \alpha \in {\mathcal {A}}} let U α {\displaystyle U_{\alpha }} be a non-empty set, the set of all possible values of the attribute α {\displaystyle \alpha } . For example, if A = { name , age , income } {\displaystyle {\mathcal {A}}=\{{\texttt {name}},{\texttt {age}},{\texttt {income}}\}} , then U name {\displaystyle U_{\texttt {name}}} could be the set of strings, whereas U age {\displaystyle U_{\texttt {age}}} and U income {\displaystyle U_{\texttt {income}}} are both the set of non-negative integers. Let x ⊆ A {\displaystyle x\subseteq {\mathcal {A}}} . An x {\displaystyle x} -tuple is a function f {\displaystyle f} so that dom ( f ) = x {\displaystyle {\hbox{dom}}(f)=x} and f ( α ) ∈ U α {\displaystyle f(\alpha )\in U_{\alpha }} for each α ∈ x {\displaystyle \alpha \in x} The set of all x {\displaystyle x} -tuples is denoted by E x {\displaystyle E_{x}} . For an x {\displaystyle x} -tuple f {\displaystyle f} and a subset y ⊆ x {\displaystyle y\subseteq x} the restriction f [ y ] {\displaystyle f[y]} is defined to be the y {\displaystyle y} -tuple g {\displaystyle g} so that g ( α ) = f ( α ) {\displaystyle g(\alpha )=f(\alpha )} for all α ∈ y {\displaystyle \alpha \in y} . A relation R {\displaystyle R} over x {\displaystyle x} is a set of x {\displaystyle x} -tuples, i.e. a subset of E x {\displaystyle E_{x}} . The set of attributes x {\displaystyle x} is called the domain of R {\displaystyle R} and denoted by d ( R ) {\displaystyle d(R)} . For y ⊆ d ( R ) {\displaystyle y\subseteq d(R)} the projection of R {\displaystyle R} onto y {\displaystyle y} is defined as follows: π y ( R ) := { f [ y ] ∣ f ∈ R } . {\displaystyle \pi _{y}(R):=\{f[y]\mid f\in R\}.} The join of a relation R {\displaystyle R} over x {\displaystyle x} and a relation S {\displaystyle S} over y {\displaystyle y} is defined as follows: R ⋈ S := { f ∣ f ( x ∪ y ) -tuple , f [ x ] ∈ R , f [ y ] ∈ S } . 
{\displaystyle R\bowtie S:=\{f\mid f\quad (x\cup y){\hbox{-tuple}},\quad f[x]\in R,\;f[y]\in S\}.} As an example, let R {\displaystyle R} and S {\displaystyle S} be the following relations: R = name age A 34 B 47 S = name income A 20'000 B 32'000 {\displaystyle R={\begin{matrix}{\texttt {name}}&{\texttt {age}}\\{\texttt {A}}&{\texttt {34}}\\{\texttt {B}}&{\texttt {47}}\\\end{matrix}}\qquad S={\begin{matrix}{\texttt {name}}&{\texttt {income}}\\{\texttt {A}}&{\texttt {20'000}}\\{\texttt {B}}&{\texttt {32'000}}\\\end{matrix}}} Then the join of R {\displaystyle R} and S {\displaystyle S} is: R ⋈ S = name age income A 34 20'000 B 47 32'000 {\displaystyle R\bowtie S={\begin{matrix}{\texttt {name}}&{\texttt {age}}&{\texttt {income}}\\{\texttt {A}}&{\texttt {34}}&{\texttt {20'000}}\\{\texttt {B}}&{\texttt {47}}&{\texttt {32'000}}\\\end{matrix}}} A relational database with natural join ⋈ {\displaystyle \bowtie } as combination and the usual projection π {\displaystyle \pi } is an information algebra. The operations are well defined since d ( R ⋈ S ) = d ( R ) ∪ d ( S ) {\displaystyle d(R\bowtie S)=d(R)\cup d(S)} If x ⊆ d ( R ) {\displaystyle x\subseteq d(R)} , then d ( π x ( R ) ) = x {\displaystyle d(\pi _{x}(R))=x} . It is easy to see that relational databases satisfy the axioms of a labeled information algebra: semigroup ( R 1 ⋈ R 2 ) ⋈ R 3 = R 1 ⋈ ( R 2 ⋈ R 3 ) {\displaystyle (R_{1}\bowtie R_{2})\bowtie R_{3}=R_{1}\bowtie (R_{2}\bowtie R_{3})} and R ⋈ S = S ⋈ R {\displaystyle R\bowtie S=S\bowtie R} transitivity If x ⊆ y ⊆ d ( R ) {\displaystyle x\subseteq y\subseteq d(R)} , then π x ( π y ( R ) ) = π x ( R ) {\displaystyle \pi _{x}(\pi _{y}(R))=\pi _{x}(R)} . combination If d ( R ) = x {\displaystyle d(R)=x} and d ( S ) = y {\displaystyle d(S)=y} , then π x ( R ⋈ S ) = R ⋈ π x ∩ y ( S ) {\displaystyle \pi _{x}(R\bowtie S)=R\bowtie \pi _{x\cap y}(S)} . idempotency If x ⊆ d ( R ) {\displaystyle x\subseteq d(R)} , then R ⋈ π x ( R ) = R {\displaystyle R\bowtie \pi _{x}(R)=R} . support If x = d ( R ) {\displaystyle x=d(R)} , then π x ( R ) = R {\displaystyle \pi _{x}(R)=R} . == Connections == Valuation algebras Dropping the idempotency axiom leads to valuation algebras. These axioms have been introduced by (Shenoy & Shafer 1990) to generalize local computation schemes (Lauritzen & Spiegelhalter 1988) from Bayesian networks to more general formalisms, including belief function, possibility potentials, etc. (Kohlas & Shenoy 2000). For a book-length exposition on the topic see Pouly & Kohlas (2011). Domains and information systems Compact Information Algebras (Kohlas 2003) are related to Scott domains and Scott information systems (Scott 1970);(Scott 1982);(Larsen & Winskel 1984). Uncertain information Random variables with values in information algebras represent probabilistic argumentation systems (Haenni, Kohlas & Lehmann 2000). Semantic information Information algebras introduce semantics by relating information to questions through focusing and combination (Groenendijk & Stokhof 1984);(Floridi 2004). Information flow Information algebras are related to information flow, in particular classifications (Barwise & Seligman 1997). Tree decomposition Information algebras are organized into a hierarchical tree structure, and decomposed into smaller problems. Semigroup theory ... 
Compositional models Such models may be defined within the framework of information algebras: https://arxiv.org/abs/1612.02587 Extended axiomatic foundations of information and valuation algebras The concept of conditional independence is basic for information algebras and a new axiomatic foundation of information algebras, based on conditional independence, extending the old one (see above) is available: https://arxiv.org/abs/1701.02658 == Historical Roots == The axioms for information algebras are derived from the axiom system proposed in (Shenoy and Shafer, 1990), see also (Shafer, 1991). == References == Barwise, J.; Seligman, J. (1997), Information Flow: The Logic of Distributed Systems, Cambridge U.K.: Number 44 in Cambridge Tracts in Theoretical Computer Science, Cambridge University Press Bergstra, J.A.; Heering, J.; Klint, P. (1990), "Module algebra", Journal of the ACM, 73 (2): 335–372, doi:10.1145/77600.77621, S2CID 7910431 Bistarelli, S.; Fargier, H.; Montanari, U.; Rossi, F.; Schiex, T.; Verfaillie, G. (1999), "Semiring-based CSPs and valued CSPs: Frameworks, properties, and comparison", Constraints, 4 (3): 199–240, doi:10.1023/A:1026441215081, S2CID 17232456, archived from the original on March 10, 2022 Bistarelli, Stefano; Montanari, Ugo; Rossi, Francesca (1997), "Semiring-based constraint satisfaction and optimization", Journal of the ACM, 44 (2): 201–236, CiteSeerX 10.1.1.45.5110, doi:10.1145/256303.256306, S2CID 4003767 de Lavalette, Gerard R. Renardel (1992), "Logical semantics of modularisation", in Egon Börger; Gerhard Jäger; Hans Kleine Büning; Michael M. Richter (eds.), CSL: 5th Workshop on Computer Science Logic, Volume 626 of Lecture Notes in Computer Science, Springer, pp. 306–315, ISBN 978-3-540-55789-0 Floridi, Luciano (2004), "Outline of a theory of strongly semantic information" (PDF), Minds and Machines, 14 (2): 197–221, doi:10.1023/b:mind.0000021684.50925.c9, S2CID 3058065 Groenendijk, J.; Stokhof, M. (1984), Studies on the Semantics of Questions and the Pragmatics of Answers, PhD thesis, Universiteit van Amsterdam Haenni, R.; Kohlas, J.; Lehmann, N. (2000), "Probabilistic argumentation systems" (PDF), in J. Kohlas; S. Moral (eds.), Handbook of Defeasible Reasoning and Uncertainty Management Systems, Dordrecht: Volume 5: Algorithms for Uncertainty and Defeasible Reasoning, Kluwer, pp. 221–287, archived from the original on January 25, 2005 Halmos, Paul R. (2000), "An autobiography of polyadic algebras", Logic Journal of the IGPL, 8 (4): 383–392, doi:10.1093/jigpal/8.4.383, S2CID 36156234 Henkin, L.; Monk, J. D.; Tarski, A. (1971), Cylindric Algebras, Amsterdam: North-Holland, ISBN 978-0-7204-2043-2 Jaffar, J.; Maher, M. J. (1994), "Constraint logic programming: A survey", Journal of Logic Programming, 19/20: 503–581, doi:10.1016/0743-1066(94)90033-7 Kohlas, J. (2003), Information Algebras: Generic Structures for Inference, Springer-Verlag, ISBN 978-1-85233-689-9 Kohlas, J.; Shenoy, P.P. (2000), "Computation in valuation algebras", in J. Kohlas; S. Moral (eds.), Handbook of Defeasible Reasoning and Uncertainty Management Systems, Volume 5: Algorithms for Uncertainty and Defeasible Reasoning, Dordrecht: Kluwer, pp. 5–39 Kohlas, J.; Wilson, N. (2006), Exact and approximate local computation in semiring-induced valuation algebras (PDF), Technical Report 06-06, Department of Informatics, University of Fribourg, archived from the original on September 24, 2006 Larsen, K. G.; Winskel, G. 
(1984), "Using information systems to solve recursive domain equations effectively", in Gilles Kahn; David B. MacQueen; Gordon D. Plotkin (eds.), Semantics of Data Types, International Symposium, Sophia-Antipolis, France, June 27–29, 1984, Proceedings, vol. 173 of Lecture Notes in Computer Science, Berlin: Springer, pp. 109–129 Lauritzen, S. L.; Spiegelhalter, D. J. (1988), "Local computations with probabilities on graphical structures and their application to expert systems", Journal of the Royal Statistical Society, Series B, 50 (2): 157–224, doi:10.1111/j.2517-6161.1988.tb01721.x Pouly, Marc; Kohlas, Jürg (2011), Generic Inference: A Unifying Theory for Automated Reasoning, John Wiley & Sons, ISBN 978-1-118-01086-0 Scott, Dana S. (1970), Outline of a mathematical theory of computation, Technical Monograph PRG–2, Oxford University Computing Laboratory, Programming Research Group Scott, D.S. (1982), "Domains for denotational semantics", in M. Nielsen; E.M. Schmitt (eds.), Automata, Languages and Programming, Springer, pp. 577–613 Shafer, G. (1991), An axiomatic study of computation in hypertrees, Working Paper 232, School of Business, University of Kansas Shenoy, P. P.; Shafer, G. (1990). "Axioms for probability and belief-function proagation". In Ross D. Shachter; Tod S. Levitt; Laveen N. Kanal; John F. Lemmer (eds.). Uncertainty in Artificial Intelligence 4. Vol. 9. Amsterdam: Elsevier. pp. 169–198. doi:10.1016/B978-0-444-88650-7.50019-6. hdl:1808/144. ISBN 978-0-444-88650-7. {{cite book}}: |journal= ignored (help) Wilson, Nic; Mengin, Jérôme (1999), "Logical deduction using the local computation framework", in Anthony Hunter; Simon Parsons (eds.), Symbolic and Quantitative Approaches to Reasoning and Uncertainty, European Conference, ECSQARU'99, London, UK, July 5–9, 1999, Proceedings, volume 1638 of Lecture Notes in Computer Science, Springer, pp. 386–396, ISBN 978-3-540-66131-3
Wikipedia/Valuation_algebra
Signal modulation is the process of varying one or more properties of a periodic waveform in electronics and telecommunication for the purpose of transmitting information. The process encodes information in form of the modulation or message signal onto a carrier signal to be transmitted. For example, the message signal might be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing a sequence of binary digits, a bitstream from a computer. This carrier wave usually has a much higher frequency than the message signal does. This is because it is impractical to transmit signals with low frequencies. Generally, receiving a radio wave requires a radio antenna with a length that is one-fourth of the wavelength of the transmitted wave. For low frequency radio waves, wavelength is on the scale of kilometers and building such a large antenna is not practical. Another purpose of modulation is to transmit multiple channels of information through a single communication medium, using frequency-division multiplexing (FDM). For example, in cable television (which uses FDM), many carrier signals, each modulated with a different television channel, are transported through a single cable to customers. Since each carrier occupies a different frequency, the channels do not interfere with each other. At the destination end, the carrier signal is demodulated to extract the information bearing modulation signal. A modulator is a device or circuit that performs modulation. A demodulator (sometimes detector) is a circuit that performs demodulation, the inverse of modulation. A modem (from modulator–demodulator), used in bidirectional communication, can perform both operations. The lower frequency band occupied by the modulation signal is called the baseband, while the higher frequency band occupied by the modulated carrier is called the passband. In analog modulation, an analog modulation signal is "impressed" on the carrier. Examples are amplitude modulation (AM) in which the amplitude (strength) of the carrier wave is varied by the modulation signal, and frequency modulation (FM) in which the frequency of the carrier wave is varied by the modulation signal. These were the earliest types of modulation, and are used to transmit an audio signal representing sound in AM and FM radio broadcasting. More recent systems use digital modulation, which impresses a digital signal consisting of a sequence of binary digits (bits), a bitstream, on the carrier, by means of mapping bits to elements from a discrete alphabet to be transmitted. This alphabet can consist of a set of real or complex numbers, or sequences, like oscillations of different frequencies, so-called frequency-shift keying (FSK) modulation. A more complicated digital modulation method that employs multiple carriers, orthogonal frequency-division multiplexing (OFDM), is used in WiFi networks, digital radio stations and digital cable television transmission. == Analog modulation methods == In analog modulation, the modulation is applied continuously in response to the analog information signal. 
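As a minimal illustration of the idea, the sketch below builds a conventional AM waveform, in which the carrier amplitude is scaled by the instantaneous value of a single-tone message; the sampling rate, tone frequency, carrier frequency and modulation index are arbitrary values chosen only for the example.

```python
# Minimal amplitude-modulation example: s(t) = [1 + m*x(t)] * cos(2*pi*fc*t).
# The specific frequencies and the modulation index are illustrative choices.
import numpy as np

fs = 100_000                      # sampling rate, Hz
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal
fm, fc, m = 1_000, 10_000, 0.5    # message tone, carrier frequency, modulation index

message = np.cos(2 * np.pi * fm * t)     # baseband message x(t)
carrier = np.cos(2 * np.pi * fc * t)     # high-frequency carrier
am_signal = (1 + m * message) * carrier  # conventional AM (double-sideband with carrier)
print(am_signal.shape)                   # -> (1000,)
```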
Common analog modulation techniques include: Amplitude modulation (AM) (here the amplitude of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal) Double-sideband modulation (DSB) Double-sideband modulation with carrier (DSB-WC) (used on the AM radio broadcasting band) Double-sideband suppressed-carrier transmission (DSB-SC) Double-sideband reduced carrier transmission (DSB-RC) Single-sideband modulation (SSB, or SSB-AM) Single-sideband modulation with carrier (SSB-WC) Single-sideband modulation suppressed carrier modulation (SSB-SC) Vestigial sideband modulation (VSB, or VSB-AM) Quadrature amplitude modulation (QAM) Angle modulation, which is approximately constant envelope Frequency modulation (FM) (here the frequency of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal) Phase modulation (PM) (here the phase shift of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal) Transpositional Modulation (TM), in which the waveform inflection is modified resulting in a signal where each quarter cycle is transposed in the modulation process. TM is a pseudo-analog modulation (AM). Where an AM carrier also carries a phase variable phase f(ǿ). TM is f(AM,ǿ) == Digital modulation methods == In digital modulation, an analog carrier signal is modulated by a discrete signal. Digital modulation methods can be considered as digital-to-analog conversion and the corresponding demodulation or detection as analog-to-digital conversion. The changes in the carrier signal are chosen from a finite number of M alternative symbols (the modulation alphabet). A simple example: A telephone line is designed for transferring audible sounds, for example, tones, and not digital bits (zeros and ones). Computers may, however, communicate over a telephone line by means of modems, which are representing the digital bits by tones, called symbols. If there are four alternative symbols (corresponding to a musical instrument that can generate four different tones, one at a time), the first symbol may represent the bit sequence 00, the second 01, the third 10 and the fourth 11. If the modem plays a melody consisting of 1000 tones per second, the symbol rate is 1000 symbols/second, or 1000 baud. Since each tone (i.e., symbol) represents a message consisting of two digital bits in this example, the bit rate is twice the symbol rate, i.e. 2000 bits per second. According to one definition of digital signal, the modulated signal is a digital signal. According to another definition, the modulation is a form of digital-to-analog conversion. Most textbooks would consider digital modulation schemes as a form of digital transmission, synonymous to data transmission; very few would consider it as analog transmission. === Fundamental digital modulation methods === The most fundamental digital modulation techniques are based on keying: PSK (phase-shift keying): a finite number of phases are used. FSK (frequency-shift keying): a finite number of frequencies are used. ASK (amplitude-shift keying): a finite number of amplitudes are used. QAM (quadrature amplitude modulation): a finite number of at least two phases and at least two amplitudes are used. In QAM, an in-phase signal (or I, with one example being a cosine waveform) and a quadrature phase signal (or Q, with an example being a sine wave) are amplitude modulated with a finite number of amplitudes and then summed. 
It can be seen as a two-channel system, each channel using ASK. The resulting signal is equivalent to a combination of PSK and ASK. In all of the above methods, each of these phases, frequencies or amplitudes are assigned a unique pattern of binary bits. Usually, each phase, frequency or amplitude encodes an equal number of bits. This number of bits comprises the symbol that is represented by the particular phase, frequency or amplitude. If the alphabet consists of M = 2 N {\displaystyle M=2^{N}} alternative symbols, each symbol represents a message consisting of N bits. If the symbol rate (also known as the baud rate) is f S {\displaystyle f_{S}} symbols/second (or baud), the data rate is N f S {\displaystyle Nf_{S}} bit/second. For example, with an alphabet consisting of 16 alternative symbols, each symbol represents 4 bits. Thus, the data rate is four times the baud rate. In the case of PSK, ASK or QAM, where the carrier frequency of the modulated signal is constant, the modulation alphabet is often conveniently represented on a constellation diagram, showing the amplitude of the I signal at the x-axis, and the amplitude of the Q signal at the y-axis, for each symbol. === Modulator and detector principles of operation === PSK and ASK, and sometimes also FSK, are often generated and detected using the principle of QAM. The I and Q signals can be combined into a complex-valued signal I+jQ (where j is the imaginary unit). The resulting so called equivalent lowpass signal or equivalent baseband signal is a complex-valued representation of the real-valued modulated physical signal (the so-called passband signal or RF signal). These are the general steps used by the modulator to transmit data: Group the incoming data bits into codewords, one for each symbol that will be transmitted. Map the codewords to attributes, for example, amplitudes of the I and Q signals (the equivalent low pass signal), or frequency or phase values. Adapt pulse shaping or some other filtering to limit the bandwidth and form the spectrum of the equivalent low pass signal, typically using digital signal processing. Perform digital to analog conversion (DAC) of the I and Q signals (since today all of the above is normally achieved using digital signal processing, DSP). Generate a high-frequency sine carrier waveform, and perhaps also a cosine quadrature component. Carry out the modulation, for example by multiplying the sine and cosine waveform with the I and Q signals, resulting in the equivalent low pass signal being frequency shifted to the modulated passband signal or RF signal. Sometimes this is achieved using DSP technology, for example direct digital synthesis using a waveform table, instead of analog signal processing. In that case, the above DAC step should be done after this step. Amplification and analog bandpass filtering to avoid harmonic distortion and periodic spectrum. At the receiver side, the demodulator typically performs: Bandpass filtering. Automatic gain control, AGC (to compensate for attenuation, for example fading). Frequency shifting of the RF signal to the equivalent baseband I and Q signals, or to an intermediate frequency (IF) signal, by multiplying the RF signal with a local oscillator sine wave and cosine wave frequency (see the superheterodyne receiver principle). Sampling and analog-to-digital conversion (ADC) (sometimes before or instead of the above point, for example by means of undersampling). 
Equalization filtering, for example, a matched filter, compensation for multipath propagation, time spreading, phase distortion and frequency selective fading, to avoid intersymbol interference and symbol distortion. Detection of the amplitudes of the I and Q signals, or the frequency or phase of the IF signal. Quantization of the amplitudes, frequencies or phases to the nearest allowed symbol values. Mapping of the quantized amplitudes, frequencies or phases to codewords (bit groups). Parallel-to-serial conversion of the codewords into a bit stream. Pass the resultant bit stream on for further processing such as removal of any error-correcting codes. As is common to all digital communication systems, the design of both the modulator and demodulator must be done simultaneously. Digital modulation schemes are possible because the transmitter-receiver pair has prior knowledge of how data is encoded and represented in the communications system. In all digital communication systems, both the modulator at the transmitter and the demodulator at the receiver are structured so that they perform inverse operations. Asynchronous methods do not require a receiver reference clock signal that is phase synchronized with the sender carrier signal. In this case, modulation symbols (rather than bits, characters, or data packets) are asynchronously transferred. The opposite is synchronous modulation. === List of common digital modulation techniques === The most common digital modulation techniques are: Phase-shift keying (PSK) Binary PSK (BPSK), using M=2 symbols Quadrature PSK (QPSK), using M=4 symbols 8PSK, using M=8 symbols 16PSK, using M=16 symbols Differential PSK (DPSK) Differential QPSK (DQPSK) Offset QPSK (OQPSK) π/4–QPSK Frequency-shift keying (FSK) Audio frequency-shift keying (AFSK) Multi-frequency shift keying (M-ary FSK or MFSK) Dual-tone multi-frequency signaling (DTMF) Amplitude-shift keying (ASK) On-off keying (OOK), the most common ASK form M-ary vestigial sideband modulation, for example 8VSB Quadrature amplitude modulation (QAM), a combination of PSK and ASK Polar modulation like QAM a combination of PSK and ASK Continuous phase modulation (CPM) methods Minimum-shift keying (MSK) Gaussian minimum-shift keying (GMSK) Continuous-phase frequency-shift keying (CPFSK) Orthogonal frequency-division multiplexing (OFDM) modulation Discrete multitone (DMT), including adaptive modulation and bit-loading Wavelet modulation Trellis coded modulation (TCM), also known as Trellis modulation Spread spectrum techniques Direct-sequence spread spectrum (DSSS) Chirp spread spectrum (CSS) according to IEEE 802.15.4a CSS uses pseudo-stochastic coding Frequency-hopping spread spectrum (FHSS) applies a special scheme for channel release MSK and GMSK are particular cases of continuous phase modulation. Indeed, MSK is a particular case of the sub-family of CPM known as continuous-phase frequency-shift keying (CPFSK) which is defined by a rectangular frequency pulse (i.e. a linearly increasing phase pulse) of one-symbol-time duration (total response signaling). OFDM is based on the idea of frequency-division multiplexing (FDM), but the multiplexed streams are all parts of a single original stream. The bit stream is split into several parallel data streams, each transferred over its own sub-carrier using some conventional digital modulation scheme. The modulated sub-carriers are summed to form an OFDM signal. This dividing and recombining help with handling channel impairments. 
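A minimal numerical sketch of these two building blocks (mapping bits onto a QPSK constellation, then combining the resulting sub-carrier symbols into one OFDM symbol with an inverse FFT) is shown below. The 64-subcarrier block size and the omission of a cyclic prefix, pilot tones and pulse shaping are simplifications made purely for illustration.

```python
# Toy OFDM transmitter and ideal receiver: QPSK mapping plus an inverse FFT.
# Real systems add a cyclic prefix, pilots and pulse shaping; those are
# omitted here, and the 64-subcarrier block size is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64
bits = rng.integers(0, 2, size=2 * n_subcarriers)   # 2 bits per QPSK symbol

# Map bit pairs to QPSK points: the first bit sets the sign of I, the second
# the sign of Q (bit 0 -> +1, bit 1 -> -1), normalized to unit energy.
i = 1 - 2 * bits[0::2]
q = 1 - 2 * bits[1::2]
symbols = (i + 1j * q) / np.sqrt(2)

ofdm_symbol = np.fft.ifft(symbols)                   # one time-domain OFDM symbol

# Receiver side over an ideal channel: FFT back and detect the signs of I and Q.
recovered = np.fft.fft(ofdm_symbol)
bits_hat = np.empty_like(bits)
bits_hat[0::2] = (recovered.real < 0).astype(int)
bits_hat[1::2] = (recovered.imag < 0).astype(int)
print(np.array_equal(bits, bits_hat))                # -> True
```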
OFDM is considered as a modulation technique rather than a multiplex technique since it transfers one bit stream over one communication channel using one sequence of so-called OFDM symbols. OFDM can be extended to multi-user channel access method in the orthogonal frequency-division multiple access (OFDMA) and multi-carrier code-division multiple access (MC-CDMA) schemes, allowing several users to share the same physical medium by giving different sub-carriers or spreading codes to different users. Of the two kinds of RF power amplifier, switching amplifiers (Class D amplifiers) cost less and use less battery power than linear amplifiers of the same output power. However, they only work with relatively constant-amplitude-modulation signals such as angle modulation (FSK or PSK) and CDMA, but not with QAM and OFDM. Nevertheless, even though switching amplifiers are completely unsuitable for normal QAM constellations, often the QAM modulation principle are used to drive switching amplifiers with these FM and other waveforms, and sometimes QAM demodulators are used to receive the signals put out by these switching amplifiers. === Automatic digital modulation recognition (ADMR) === Automatic digital modulation recognition in intelligent communication systems is one of the most important issues in software-defined radio and cognitive radio. According to incremental expanse of intelligent receivers, automatic modulation recognition becomes a challenging topic in telecommunication systems and computer engineering. Such systems have many civil and military applications. Moreover, blind recognition of modulation type is an important problem in commercial systems, especially in software-defined radio. Usually in such systems, there are some extra information for system configuration, but considering blind approaches in intelligent receivers, we can reduce information overload and increase transmission performance. Obviously, with no knowledge of the transmitted data and many unknown parameters at the receiver, such as the signal power, carrier frequency and phase offsets, timing information, etc., blind identification of the modulation is made fairly difficult. This becomes even more challenging in real-world scenarios with multipath fading, frequency-selective and time-varying channels. There are two main approaches to automatic modulation recognition. The first approach uses likelihood-based methods to assign an input signal to a proper class. Another recent approach is based on feature extraction. === Digital baseband modulation === Digital baseband modulation changes the characteristics of a baseband signal, i.e., one without a carrier at a higher frequency. This can be used as equivalent signal to be later frequency-converted to a carrier frequency, or for direct communication in baseband. The latter methods both involve relatively simple line codes, as often used in local buses, and complicated baseband signalling schemes such as used in DSL. == Pulse modulation methods == Pulse modulation schemes aim at transferring a narrowband analog signal over an analog baseband channel as a two-level signal by modulating a pulse wave. Some pulse modulation schemes also allow the narrowband analog signal to be transferred as a digital signal (i.e., as a quantized discrete-time signal) with a fixed bit rate, which can be transferred over an underlying digital transmission system, for example, some line code. 
These are not modulation schemes in the conventional sense since they are not channel coding schemes, but should be considered as source coding schemes, and in some cases analog-to-digital conversion techniques.
Analog-over-analog methods:
Pulse-amplitude modulation (PAM)
Pulse-width modulation (PWM) and pulse-depth modulation (PDM)
Pulse-frequency modulation (PFM)
Pulse-position modulation (PPM)
Analog-over-digital methods:
Pulse-code modulation (PCM)
Differential PCM (DPCM)
Adaptive DPCM (ADPCM)
Delta modulation (DM or Δ-modulation)
Delta-sigma modulation (ΣΔ)
Continuously variable slope delta modulation (CVSDM), also called adaptive delta modulation (ADM)
Pulse-density modulation (PDM)
== Miscellaneous modulation techniques ==
The use of on-off keying to transmit Morse code at radio frequencies is known as continuous wave (CW) operation.
Adaptive modulation
Space modulation is a method whereby signals are modulated within airspace such as that used in instrument landing systems.
The microwave auditory effect has been pulse modulated with audio waveforms to evoke understandable spoken numbers.
== See also ==
== References ==
== Further reading ==
Bryant, James; Analog Devices (2013). "Multipliers vs. Modulators" (PDF). Analog Dialogue. 47 (2): 3.
== External links ==
Interactive presentation of soft-demapping for AWGN-channel in a web-demo, Institute of Telecommunications, University of Stuttgart
Modem (Modulation and Demodulation)
CodSim 2.0: Open source Virtual Laboratory for Digital Data Communications Model, Department of Computer Architecture, University of Malaga. Simulates Digital line encodings and Digital Modulations. Written in HTML for any web browser.
Wikipedia/Pulse_modulation_methods
In information theory, an entropy coding (or entropy encoding) is any lossless data compression method that attempts to approach the lower bound declared by Shannon's source coding theorem, which states that any lossless data compression method must have an expected code length greater than or equal to the entropy of the source. More precisely, the source coding theorem states that for any source distribution, the expected code length satisfies {\displaystyle \operatorname {E} _{x\sim P}[\ell (d(x))]\geq \operatorname {E} _{x\sim P}[-\log _{b}(P(x))]}, where ℓ is the function specifying the number of symbols in a code word, d is the coding function, b is the number of symbols used to make output codes and P is the probability of the source symbol. An entropy coding attempts to approach this lower bound. Two of the most common entropy coding techniques are Huffman coding and arithmetic coding. If the approximate entropy characteristics of a data stream are known in advance (especially for signal compression), a simpler static code may be useful. These static codes include universal codes (such as Elias gamma coding or Fibonacci coding) and Golomb codes (such as unary coding or Rice coding). Since 2014, data compressors have started using the asymmetric numeral systems family of entropy coding techniques, which allows the compression ratio of arithmetic coding to be combined with a processing cost similar to that of Huffman coding. == Entropy as a measure of similarity == Besides using entropy coding as a way to compress digital data, an entropy encoder can also be used to measure the amount of similarity between streams of data and already existing classes of data. This is done by generating an entropy coder/compressor for each class of data; unknown data is then classified by feeding the uncompressed data to each compressor and seeing which compressor yields the highest compression. The coder with the best compression is probably the coder trained on the data that was most similar to the unknown data.
== See also ==
Arithmetic coding
Asymmetric numeral systems (ANS)
Context-adaptive binary arithmetic coding (CABAC)
Huffman coding
Range coding
== References ==
== External links ==
Information Theory, Inference, and Learning Algorithms, by David MacKay (2003), gives an introduction to Shannon theory and data compression, including Huffman coding and arithmetic coding.
Source Coding, by T. Wiegand and H. Schwarz (2011).
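As a small illustration of the bound above (with b = 2 and a made-up source distribution, so purely an assumption for demonstration), the following sketch builds a binary Huffman code and compares its expected code length with the Shannon entropy of the source.

```python
import heapq
import math

# Illustrative source distribution (an assumption, not data from the text).
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon entropy: the lower bound on expected code length in bits per symbol.
entropy = -sum(p * math.log2(p) for p in probs.values())

# Build a binary Huffman code with a priority queue of (probability, id, codes).
heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, codes1 = heapq.heappop(heap)   # two least probable subtrees
    p2, _, codes2 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in codes1.items()}
    merged.update({s: "1" + c for s, c in codes2.items()})
    heapq.heappush(heap, (p1 + p2, counter, merged))
    counter += 1
codebook = heap[0][2]

expected_length = sum(probs[s] * len(c) for s, c in codebook.items())

# The expected Huffman code length can approach but never undercut the entropy.
print(f"entropy = {entropy:.3f} bits, expected Huffman length = {expected_length:.3f} bits")
assert expected_length >= entropy - 1e-9
```

For this dyadic distribution the Huffman code meets the entropy exactly (1.75 bits per symbol); for general distributions it can exceed the bound by up to one bit per symbol, which is the gap that arithmetic coding and asymmetric numeral systems are designed to close.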
Wikipedia/Entropy_coded
The Military Reaction Force, Military Reconnaissance Force or Mobile Reconnaissance Force (MRF) was a covert intelligence-gathering and counterinsurgency unit of the British Army active in Northern Ireland during the Troubles. The unit was formed during the summer of 1971 and operated until late 1972 or early 1973. MRF teams operated in plain clothes and civilian vehicles, equipped with pistols and submachine guns. They were tasked with tracking and arresting or killing members of the Provisional Irish Republican Army (IRA). It is alleged that the MRF killed a number of Catholic civilians in drive-by shootings. The MRF also handled informers within the paramilitary groups and ran several front companies to gather intelligence. In October 1972, the IRA uncovered and attacked two of the MRF's front companies—a mobile laundry service and a massage parlour—which contributed to the unit's dissolution. The MRF was succeeded by the Special Reconnaissance Unit (SRU; or 14 Intelligence Company) and, later, by the Force Research Unit (FRU). == Origins and structure == The MRF was established in the summer of 1971. It appears to have its origins in ideas and techniques developed by Brigadier Sir Frank Kitson, a senior commander in the British Army, who had created "counter gangs" to defeat the Kenya Land and Freedom Army in the Colony and Protectorate of Kenya during the Mau Mau rebellion. He was the author of two books on counter-insurgency tactics: Gangs & Counter Gangs (1960) and Low Intensity Operations (1971). From 1970 to 1972, Kitson served in Northern Ireland as commander of the 39th Infantry Brigade. It has been claimed that he was responsible for establishing the MRF and that the unit was attached to his brigade. The MRF was based at Palace Barracks in the Belfast suburb of Holywood. The MRF's first commander was Captain Arthur Watchus. In June 1972, he was succeeded as commander by Captain James 'Hamish' McGregor. The unit consisted of up to 40 men, handpicked from throughout the British Army. A Ministry of Defence review concluded the MRF had "no provision for detailed command and control". == Modus operandi == In March 1994, the UK's Junior Defence Minister Jeremy Hanley issued the following description of the MRF in reply to a parliamentary written question: "The MRF was a small military unit which, during the period 1971 to 1973, was responsible for carrying out surveillance tasks in Northern Ireland in those circumstances where soldiers in uniform and with Army vehicles would be too easily recognized". Martin Dillon described the MRF's purpose as being "to draw the Provisional IRA into a shooting war with loyalists in order to distract the IRA from its objective of attacking the Army". Many details about the unit's modus operandi have been revealed by former members. One issued a statement to the Troops Out Movement in July 1978. In 2012–13, a former MRF member using the covername 'Simon Cursey' gave a number of interviews and published the book MRF Shadow Troop about his time in the unit. In November 2013, a BBC Panorama documentary was aired about the MRF. It drew on information from seven former members, as well as a number of other sources. The Panorama documentary identified 10 unarmed civilians who were shot by the MRF (the MRF was disbanded in 1973). The MRF had both a "defensive" surveillance role and an "offensive" role. MRF operatives patrolled the streets in these cars in teams of two to four, tracking down and arresting or killing suspected IRA members. 
They were armed with Browning pistols and Sterling sub-machine guns. Former MRF members admitted that the unit shot unarmed people without warning, both IRA members and civilians. Former MRF members claim they had a list of targets they were ordered to "shoot on sight". One member interviewed for the BBC's Panorama, Soldier F, said "We were not there to act like an army unit, we were there to act like a terror group". Soldier H said "We operated initially with them thinking that we were the UVF", to which Soldier F added: "We wanted to cause confusion". Another said that their role was "to draw out the IRA and to minimise their activities". They said they fired on groups of people manning defensive barricades, on the assumption that some might be armed. The MRF member who made a statement in 1978 opined that the unit's role was one of "repression through fear, terror and violence". He said that the unit had been trained to use weapons favoured by the IRA. Republicans argued that the MRF deliberately attacked civilians for two main reasons: firstly, to draw the IRA into a sectarian conflict with loyalists and divert it from its campaign against the state; and secondly, to show Catholics that the IRA could not protect them, thus draining its support. The MRF's surveillance operations included the use of front companies (see below) and disguises. Former members claim they posed as road sweepers, dustmen and even homeless meths-drinkers while carrying out surveillance. The MRF is known to have used agents referred to as 'Freds'. These were republican or loyalist paramilitaries who were recruited by the Intelligence Corps. The Freds would work inside paramilitary groups, feeding back information to the MRF. They were also ferried through Belfast in armoured cars, and through the gunslit would point-out paramilitary individuals of note. Through this method the MRF compiled extensive photographs and dossiers of Belfast militants of both factions. == Alleged attacks on civilians == In 1972, it is alleged MRF teams carried out a number of drive-by shootings in Catholic and Irish nationalist areas of Belfast, some of which were attributed to Ulster loyalist paramilitaries. MRF members have asserted the unit's involvement in most of these attacks. There are also allegations that the unit helped loyalists to carry out attacks. === McGurk's Bar bombing === On 4 December 1971, the loyalist Ulster Volunteer Force (UVF) detonated a time bomb at the door of McGurk's public house, located on the corner of North Queen Street and Great George's Street in Belfast. The pub was frequented by Irish Catholics/nationalists. The explosion caused the building to collapse, killing fifteen Catholic civilians and wounding seventeen more. It was the deadliest attack in Belfast during the Troubles. The book Killing For Britain (2009), written by former UVF member 'John Black', claims that the MRF organised the bombing and helped the bombers get in and out of the area. Two days before the bombing, republican prisoners had escaped from nearby Crumlin Road Prison. Security was tightened, and there were many checkpoints in the area at the time. However, locals claimed that the security forces helped the bombers by removing the checkpoints an hour before the attack. One of the bombers—Robert Campbell—said that their original target had been The Gem, a nearby pub that was allegedly linked to the Official IRA. It is claimed the MRF plan was to help the UVF bomb The Gem, and then blame the bombing on the Provisional IRA. 
This would start a feud between the two IRA factions, diverting them from their fight against the security forces and draining their support. Campbell said that The Gem had security outside and, after waiting for almost an hour, they decided to bomb the nearest 'Catholic pub' instead. Immediately afterward, the security forces claimed that a bomb had accidentally exploded while being handled by IRA members inside McGurk's. === Whiterock Road shooting === On 15 April 1972, brothers Gerry and John Conway—both Catholic civilians—were walking along Whiterock Road to catch a bus. As they passed St Thomas's School, a car stopped, and three men leapt out and began shooting at them with pistols. The brothers ran, but both were shot and wounded. Witnesses said one of the gunmen returned to the car and spoke into a handset radio. Shortly afterward two armoured personnel carriers arrived, and there was a conversation between the uniformed and the plainclothes soldiers. The three vehicles then left, and the brothers were taken by ambulance to the Royal Victoria Hospital. The British Army told journalists that an army patrol had encountered two wanted men, that one had fired at the patrol, and that the patrol returned fire. In a 1978 interview, a former MRF member claimed he had been one of the gunmen. He confirmed that the brothers were unarmed but claimed his patrol had mistaken the brothers for two IRA men whom the MRF were ordered to "shoot on sight". === Andersonstown shootings === On 12 May 1972, the British government announced there would be no disciplinary action against the soldiers involved in Bloody Sunday. That night, MRF personnel shot seven Catholic civilians in the Andersonstown area. An MRF team in an unmarked car approached a checkpoint manned by members of the Catholic Ex-Servicemen's Association (CESA) at the entrance to Riverdale Park South. The CESA was an unarmed vigilante organization set up by former members of the British Army to protect Catholic areas. The car stopped, and then reversed. One of the MRF men opened fire from the car with a sub-machine gun, killing Catholic civilian Patrick McVeigh (44) and wounding four others. The car continued on, turned, and then drove past the scene of the shooting. All of the men were local residents and McVeigh, who was shot through the back, had stopped to chat to the CESA members as he walked home. He was a married father of six children. The British Army told journalists that gunmen in a passing car had fired indiscriminately at civilians and called it an "apparently motiveless crime". The car had come from a Protestant area and had returned the same way. This, together with the spokesman's statement, implied that loyalists were responsible. An inquest into the attack was held in December 1972, where it was admitted that the car's occupants were soldiers belonging to an undercover unit known as the MRF. The soldiers did not appear at the inquest but issued statements to it, claiming they had been shot at by six gunmen and were returning fire. However, eyewitnesses said none of the CESA members were armed, and this was supported by forensic evidence. The MRF members involved were never prosecuted. There is no evidence that any of their targets were in the IRA. An MRF member stated in 1978 that their intention was to make it look like a loyalist attack, thus provoking sectarian conflict and "taking the heat off the Army". Minutes before the shooting at the checkpoint, two other Catholic civilians had been shot nearby by another MRF team. 
The two young men—Aidan McAloon and Eugene Devlin—had taken a taxi home from a disco and were dropped off at Slievegallion Drive. As they began walking along the street, in the direction of a vigilante barricade, the MRF team opened fire on them from an unmarked car. The MRF team told the Royal Military Police that they had shot a man who was firing a rifle. Witnesses said there was no gunman on the street, and police forensics experts found no evidence that McAloon or Devlin had fired weapons. Two weeks later, on 27 May, Catholic civilian Gerard Duddy (20) was killed in a drive-by shooting at the same spot where Patrick McVeigh was killed. His death was blamed on loyalists. === Killing of Jean Smith === On the night of 9 June 1972, Catholic civilian Jean Smith (or Smyth) was shot dead on the Glen Road. Jean was a 24-year-old mother of one. She was shot while sitting in the passenger seat of a car at the Glen Road bus terminus. As her male companion turned the car, he heard what he thought was a tyre bursting. When he got out to check, the car was hit by a burst of automatic gunfire. Smith was shot in the head and died shortly afterward. Her companion stopped a passing taxi and asked the driver to take her to hospital. However, the taxi was then stopped by police and diverted to Andersonstown RUC base, where they were held for several hours. The security forces blamed the killing on the IRA. In October 1973, however, the Belfast Telegraph published an article suggesting that Smith could have been shot by the MRF. Documents uncovered from the British National Archives reveal that the MRF fired shots in the area that night. They claim to have fired at two gunmen and hit one of them. The Belfast Telegraph article also suggested that Smith could have been shot by the IRA, who fired on the car thinking it was carrying MRF members. The IRA deny this and claim that it was not in the area at the time of the shooting. === Glen Road shooting === On 22 June 1972, the Provisional IRA announced that it would begin a ceasefire in four days, as a prelude to secret talks with the British government. That afternoon, MRF members in an unmarked car shot and wounded three Catholic men standing by a car at Glen Road bus terminus. A man in a nearby house was also wounded by the gunfire. Shortly afterward, the MRF unit's car was stopped by the Royal Ulster Constabulary (RUC) and the occupants were arrested. Inside the car was a Thompson sub-machine gun, "for years the IRA's favourite weapon". One of the MRF members—Clive Graham Williams—was charged with attempted murder. He told the court that two of the men had been armed, and one had fired at the MRF car. He claimed he was returning fire. Witnesses said that none of the civilians were armed, and that it was an unprovoked attack. Police forensics experts found no evidence that the civilians had fired weapons. However, key witnesses were not called to give evidence in person, and Williams was acquitted on 26 June 1973. He was later promoted and awarded the Military Medal for bravery. === St James's Crescent shooting === On the night of 27 September 1972, the MRF shot dead Catholic civilian Daniel Rooney and wounded his friend Brendan Brennan. They were shot from a passing car while standing on a street corner at St James's Crescent, in the Falls district. British Army told journalists that the two men fired at an undercover Army patrol and that the patrol returned fire. They also claimed that the two men were IRA members. 
The IRA, the men's families, and residents of the area denied this, and Rooney's name has never appeared on a republican roll of honour. An inquest was held in December 1973. The court was told that forensic tests on the men's hands and clothing found no firearms residue. The six soldiers involved repeated the British Army's claim, but they did not appear at the inquest. Their statements were read by a police officer and they were referred to by initials. === New Lodge Six === There are also allegations that the MRF was involved in a drive-by shooting in the Catholic New Lodge area on 3 February 1973. The car's occupants opened fire on a group of young people standing outside a pub on Antrim Road, killing IRA members James Sloan and James McCann and wounding others. The gunmen drove on and allegedly fired at another group of people outside a takeaway. In the hours that followed, a further four people—an IRA member and three civilians—were shot dead in the area by British snipers. The dead became known as the "New Lodge Six". In June 1973, the Northern Ireland Civil Rights Association issued advice on how to behave in the event of being "shot by MRF/SAS squads", saying for example that people should "pretend to be dead until the squad moves away". == Front companies == The MRF ran a number of front companies in Belfast during the early 1970s. They included Four Square Laundry (a mobile laundry service operating in nationalist West Belfast) and the Gemini massage parlour on Antrim Road. The MRF also had an office at College Square. All were set up to gather intelligence on the Provisional Irish Republican Army (IRA) and Irish nationalist movement. A Four Square van visited houses in nationalist West Belfast twice a week to collect and deliver laundry. One "employee" (a young man) drove the van while another (a young woman) collected and delivered the laundry. Both were from Northern Ireland. Four Square initially gathered customers by offering "discount vouchers", which were numbered and colour-coded by street. Clothes collected for washing were first forensically checked for traces of explosives, as well as blood or firearms residue. They were also compared to previous laundry loads from the same house—the sudden presence of different-sized clothes could indicate that the house was harbouring an IRA member. Surveillance operatives and equipment were hidden in the back of the van or in a compartment in the roof. Further intelligence was gathered by staff observing and "chatting" to locals whilst collecting their laundry. However, in September 1972, the IRA found that two of its members—Seamus Wright and Kevin McKee—were working for the MRF as double agents. Under interrogation, McKee told the IRA about the MRF's operations, including the laundry and the massage parlour. The leaders of the Provisional IRA Belfast Brigade ordered that the companies immediately be put under surveillance. This surveillance confirmed that McKee's information was correct. The IRA later took Wright and McKee to South Armagh, where they were "executed" as spies. Their bodies were recovered in 2015. === October 1972 attacks === Following these revelations, the leaders of the IRA's Belfast Brigade planned an operation against the MRF, which was to take place on 2 October 1972. The 2nd Battalion would attack the Four Square Laundry van and the office at College Square, while the 3rd Battalion would raid the massage parlour. 
At about 11:20AM on 2 October, IRA volunteers ambushed the Four Square Laundry van in the nationalist Twinbrook area of West Belfast. Four volunteers were involved: one drove the car while three others did the shooting. They shot dead the driver, an undercover British soldier of the Royal Engineers, and machine-gunned the roof compartment where undercover operatives were thought to be hiding. The other Four Square employee—a female soldier from the Women's Royal Army Corps (WRAC)—was collecting and delivering laundry from a nearby house at the time. The residents, who thought that loyalists were attacking the van, took her into the house and kept her safe. The woman was later secretly invested at Buckingham Palace with an MBE. About an hour later, the same IRA unit raided College Square but found nobody there. Meanwhile, a unit of the 3rd Battalion made for the room above the massage parlour, which they believed was being used to gather intelligence. They claimed to have shot three undercover soldiers: two men and a woman. According to some sources, the IRA claimed to have killed two surveillance officers allegedly hidden in the laundry van, and two MRF members at the massage parlour. However, the British Army only confirmed the death of the van driver on that day. Brendan Hughes said that the operation "was a great morale booster for the IRA and for the people that were involved". The MRF, realising its undercover operations were blown, disbanded the units and was itself disbanded shortly afterwards. The incident was believed to have prompted the establishment of a new undercover intelligence unit: the 14 Intelligence Company (also known as "The Det"). == See also == 14 Intelligence Company '71 (film) Drummuckavall ambush Force Research Unit Glasdrumman ambush Glenanne gang Operation Conservation Special Reconnaissance Unit == References ==
Wikipedia/Military_Reaction_Force
The RAF Advanced Air Striking Force (AASF) comprised the light bombers of 1 Group RAF Bomber Command, which took part in the Battle of France during the Second World War. Before hostilities began, it had been agreed between the United Kingdom and France that in case of war, the short-range aircraft of Bomber Command would move to French airfields to operate against targets in Nazi Germany. The AASF was formed on 24 August 1939 from the ten squadrons of Fairey Battle light bombers of 1 Group under the command of Air Vice-Marshal Patrick Playfair and was dispatched to airfields in the Rheims area on 2 September 1939. The AASF was answerable to the Air Ministry and independent of the British Expeditionary Force. For unity of command, the AASF and the Air Component of the BEF (Air Vice-Marshal Charles Blount), came under the command of British Air Forces in France (Air Vice-Marshal Arthur Barratt) on 15 January 1940. Using the bombers for attacks on strategic targets in Germany was set aside, due to Anglo-French reluctance to provoke German retaliation; attacks on German military forces and their communications were substituted. The Battle of France began with the German invasion of the Low Countries on 10 May 1940. The Battle squadrons suffered 40 per cent losses on 10 May, 100 per cent on 11 May and 63 per cent on 12 May. In 48 hours the number of operational AASF bombers fell from 135 to 72. On 14 May the AASF made a maximum effort, 63 Battles and eight Bristol Blenheims attacked targets near Sedan. More than half the bombers were lost, bringing AASF losses to 75 per cent. The remaining bombers began to operate at night and periodically by day, sometimes with fighter escorts. From 10 May to the end of the month, the AASF lost 119 Battle crews killed and 100 aircraft. Experience, better tactics and periods of bad weather from 15 May to 5 June led to losses of 0.5 per cent, albeit with a similar reduction in effectiveness. On 14 June, the remaining Battles returned to Britain; the Hurricane squadrons returned on 18 June and rejoined Fighter Command. The AASF was dissolved on 26 June, the Battles returning to 1 Group, Bomber Command, to prepare for operations against a German invasion, along with the rest of the Royal Air Force. == Background == === 1930s Anglo-French air policy === Once British rearmament began, the air policy of the British government was to have air defences sufficient to defeat an attack and an offensive force equal to that of the Luftwaffe. With no land border to defend, British resources had been concentrated on radar stations, anti-aircraft guns and increasing the number of the most modern fighter aircraft. If Germany attacked, the British intended to take the war to the Germans by attacking strategically important targets with its heavy bombers, types unsuitable for operations in direct support of land forces. Implementation of the policy required a considerable number of first-class fighter aircraft to defeat an attacker and bombers to destroy ground targets. In 1938 the RAF expansion programme was intended to provide means for the air defence of Britain and for counter-offensive operations against Germany. Army co-operation received few resources and no plans were made for RAF participation in mass land operations or the dispatch abroad of large expeditionary air forces. The Western Plan was devised by the Air Ministry for mobilisation and the deployment of squadrons to their wartime airfields. 
Provision was made for the immediate dispatch of an Advanced Air Striking Force of ten squadrons to France, followed by a second echelon of ten more. Refuelling facilities were also planned for other squadrons, the arrangements for transport and servicing being co-ordinated with the army; thought was also given to basing squadrons in Belgium if it was invaded by Germany. In February 1939, the British Cabinet had authorised joint planning with the French and preferably with Belgium and the Netherlands in case of war with Germany, Italy and Japan. Two weeks before the first meeting, Germany occupied the rump of Czechoslovakia; war preparations took on a new urgency and staff conversations began on 29 March 1939. Agreement was reached with France to base the AASF on French airfields but only to bring them closer to their intended targets in Germany, until longer-range types became available. French strategy emphasised the defence of the national territory and Allied efforts were expected to give equal emphasis but the British refused to stake everything on the success of a defensive campaign against the Germans in France. The different circumstances of the two countries led the French to rely on a mass land army, with air defence a secondary concern; the Armée de l'Air was hampered in the late 1930s by the slow progress of its re-equipment, lacking anti-aircraft guns, sufficient fighter aircraft and the means to detect and track enemy aircraft. Observation services relied on civilian telephones and in October 1939, the Armée de l'Air had only 549 fighters, 131 of which were considered anciens (obsolete). Lack of aircraft led the French to advocate a bombing policy of tactical co-operation with the armies, attacking German forces and communications in the front line, rather than the strategic bombing of Germany, for fear of retaliation. From the spring of 1939, arrangements were made for the reception of the AASF, the defence of British bases in France, bombing policy in support of ground forces confronting a German attack in the Benelux countries, and operations against the Luftwaffe. A supply of British bombs was dumped near Reims, disguised as a sale to the Armée de l'Air. Discussion of strategic air operations against the German war economy was delayed because the British did not expect to begin such operations as soon as war was declared and because the French had no bombers capable of them. In the last days of peace, the Cabinet limited air bombardment strictly to military objectives which were narrowly defined and a joint declaration was issued concerning the policy of following the rules of war pertaining to poison gas, submarine warfare and air attacks on merchant ships, to avoid provoking the Germans while the Anglo-French air forces were being built up. === August 1939 === On 24 August 1939, the British government gave orders for the armed forces partially to mobilise and on 2 September No. 1 Group RAF (Air Vice-Marshal Patrick Playfair) sent its ten Fairey Battle day-bomber squadrons to France according to plans made by the British and French earlier in the year. The group was the first echelon of the AASF and flew from RAF Abingdon, RAF Harwell, RAF Benson, RAF Boscombe Down and RAF Bicester. Group headquarters became the AASF when the order to move to France was received and the home station HQs became 71, 72 and 74–76 Wings. The Bristol Blenheims of No.
2 Group RAF were to become the second echelon as 70, 79 and 81–83 Wings, flying from RAF Upper Heyford, RAF Wattisham, RAF Watton, RAF West Raynham and RAF Wyton; 70 Wing with 18 and 57 squadrons was converting from Battles to Blenheims and intended for the Air Component once the re-equipment was complete. On 3 September, as the British government declared war on Germany; the AASF Battle squadrons were getting used to their French airfields, which were somewhat rudimentary compared to their well-developed Bomber Command stations, some having to wait for the French to deliver aviation fuel. === September === Strategic bombing operations did not take place as the Luftwaffe and Allied strategic bombers observed a tacit truce. The French tried to divert German resources from their Invasion of Poland with the Saar Offensive (7–16 September), in which the Battle squadrons were to participate. The main Luftwaffe bases were too far inside Germany but airstrips, supply dumps and reserves would be well within the range of the Battles. The British and French governments feared that they had more to lose by courting German retaliation but this deprived the Battle squadrons of a chance to test their equipment and tactics. The preparations did establish that Battles would attack targets within 10 mi (16 km) of the front line, including fleeting opportunities, much against the wishes of Group Captain John Slessor, a former director of plans at the Air Ministry, who stressed that the Battle crews were not trained for close support. Other officers thought that "...it is about as far as the Battle will be able to get with a return ticket". Playfair had the fuselage fuel tank removed from AASF Battles and the bomb bays were to be modified to carry 40 lb (18 kg) anti-personnel bombs, once the equipment arrived in November. ==== Battle reconnaissances ==== Neither the French nor the British wanted the AASF to sit idle and the Battles began to conduct "high-altitude", formation, photographic reconnaissance sorties, to map the German front line but the Battle had a service ceiling of only 25,000 ft (7,600 m) and needed to be much lower for formation flying. Battle sorties began on the morning of 10 September, three aircraft of 150 Squadron flying inside Allied lines and photographing obliquely. On 19 September the Battles began to fly beyond the front line, 10 mi (16 km) at first, then 20 mi (32 km). Playfair, mindful of the risk, tried to time sorties to coincide with French fighter operations in the vicinity and wanted close escorts if German fighters were around. Three Battles from 103 Squadron and three from 218 Squadron reconnoitred on 17 September, the Battles encountering intermittent anti-aircraft fire (FlaK). Bad weather led to a two-day lull, then on 20 September, three Battles from 88 Squadron west of Saarbrücken were attacked by three Messerschmitt Bf 109Ds, which shot down two of the Battles, whose defensive fire was ineffective. One Battle pilot crash-landed and his aircraft caught fire, killing the observer and gunner; the other Battle crashed the same way and the third Battle was hit in its fuel tanks and incinerated the crew. Playfair concluded that Battles should receive an escort anywhere near the front line but the Air Ministry rejected the claim that more fighters were necessary and he had to ask the French instead. 
The Chief of Staff of the French Air Force (Chef d'état-major de l'Armée de l'air), Général d'Armée Aérienne Joseph Vuillemin, was short of fighters but promised to help, provided the British helped themselves. At a meeting on 28 September, the British representative repeated the claim that tight formation-flying and collective firepower obviated the need for escorts and Vuillemin cancelled French co-operation. Two days later, five Battles from 150 Squadron on reconnaissance near Saarbrücken and Merzig, were attacked by eight Bf 109Es. The Battles closed up but four were shot down, most in flames. The surviving Battle pilot ran for home and crashed on landing but saved his crew. The squadron immediately fitted its aircraft with an extra rear-facing gun in the bomb-aiming position against attacks from below and behind; in England the Air Ministry blamed the Battle, rather than faulty tactics and equipment and declared it obsolete. For protection against fighter attack, 85 lb (39 kg) of armour for each aircraft was rushed to France and 15 and 40 squadrons returned to Britain to convert to Blenheims, being replaced by 114 and 139 squadrons which were already flying the type. ==== AASF tactical changes ==== In England, discussions for a second-generation Battle took place and the AASF was ordered to train Battle crews for low-level tactical operations to avoid the Bf 109s. The crews practised attacks on road vehicles from as low as 50 ft (15 m) and some rehearsals had fighter escorts, a new task given to the AASF Hurricane squadrons. Air Chief Marshal Robert Brooke-Popham, having been dug out of retirement, inspected the RAF in France and organised a meeting with Fairey, the Air Ministry and crews from 150 Squadron, to discuss protection against ground fire. Fairey considered that the Battle was already at its maximum weight and that self-sealing fuel tanks and armour could be added only by reducing the bomb load or range. No one in Britain knew that the Battles in France had already had their fuselage fuel tanks removed, which had saved 300 lb (140 kg). Fairey suggested a ventral machine-gun [40 lb (18 kg)], crew armour [100 lb (45 kg)], safer fuel tanks [100 lb (45 kg)], armour around the rear gunner [25 lb (11 kg)] and another 80 lb (36 kg) of ventral (underside) armour. Only the 25 lb (11 kg) armour plate for the rear gunner needed to be manufactured and the extra armour was ordered to France as soon as possible. Battle fuel tanks were to be given the French Semape coating, which easily plugged holes from rifle-calibre bullets and also gave some protection from 20 mm (0.79 in) cannon fire. Semape would use up the 100 lb (45 kg) allotted for fuel tank protection and was still under test but 26 lb (12 kg) of armour plates were added to the rear of the tanks against hits from behind. On 18 December, twenty-two Vickers Wellington medium bombers were sent to attack German ships in the Battle of the Heligoland Bight and eighteen were lost, many shot down in flames; some of those not shot down ran out of fuel from punctured fuel tanks. The fitting of self-sealing tanks became a crisis measure for Bomber Command and took precedence over the Battle modification, especially as their existing tanks had been armoured against hits from behind and the conversion was put back to March 1940. The extra armour decided on in September was apparently sent to France but never fitted to the Battles. 
Operational instructions issued by BAFF included a warning that "Bomber aircraft have proved extremely useful in support of an advancing army, especially against weak anti-aircraft resistance, but it is not clear that a bomber force used against an advancing army well supported by all forms of anti-aircraft defence and a large force of fighter aircraft, will be economically effective". The RAF had tried to improve the performance of its aircraft; streamlining the Blenheim had added 15 mph (24 km/h) to its speed. To remedy the vulnerability of Battles to attacks from below, a rear-facing machine-gun was fitted to the bomb-aimer's position but "...it needed a contortionist to fire it": "To enable the gunner to fire backwards behind the tail, the gun swivels on a mounting fixed in the bombing aperture and is made capable of firing upside down, being provided with extra sights which will work in this position. The gunner wears a special harness enabling him to assume an almost upside-down position." Fairey designed a well in the floor of the bomb aiming position for the gunner to lie prone facing the rear but the change would need three months for development and testing. With 500 Battles in storage, the modifications could be done at Fairey and the aircraft swapped with those in France without interfering with AASF operations but the idea was shelved. To speed production of new aircraft, a review was held to strip existing machines of superfluous equipment and the committee suggested that for tactical bombing, the Battle autopilot [80 lb (36 kg)], night flying gear [44 lb (20 kg)], bomb sight [34 lb (15 kg)] and the navigator–bomb-aimer [200 lb (91 kg)] could be dispensed with, saving 358 lb (162 kg), which would allow the fitting of more forward-firing guns with no net increase in weight. The Air Ministry prevaricated and the equipment was not removed, the Ministry even deprecating the use of the existing forward-firing gun, which was supposed to be reserved for engagements with German fighters, not for strafing unless circumstances were exceptional. === October–December === On 11 October, Luftwaffe Dornier Do 17 bombers began to cross the lines at high altitude and one flew at 20,000 ft (6,100 m) over the 1 Squadron AASF fighter base at Vassincourt Airfield, only to be shot down near Vausigny. The two Hawker Hurricane fighter squadrons (67 Wing) were part of the AASF to provide fighter protection for their bases, with another squadron of Hurricanes in England made available as a reinforcement. The second echelon squadrons of 2 Group, with seven Blenheim squadrons and two Armstrong Whitworth Whitley medium bomber squadrons, stood ready to move to France if the Germans attacked. At 10:00 a.m. on 8 November, 73 Squadron shot down a Do 17, its first victory of the war. To counter the high-flying Dorniers, seven fighter sectors were established on 21 November in Zone d'Opérations Aériennes Nord (ZOAN, "Air Zone North") and Zone d'Opérations Aériennes Est (ZOAE, "Air Zone East") and on 22 November, 1 Squadron shot down two Do 17s in the morning, a Hurricane force-landing after being hit in the engine; early in the afternoon, three Hurricanes over Metz shared a Heinkel He 111 with the Armée de l'Air, one Hurricane being damaged in a collision with a French fighter; 73 Squadron claimed two Dorniers shot down and one damaged, shared with French fighters.
For most of December, flying was washed out by bad weather but on 21 December, two Hurricanes shot down a French Potez 637 over Villers-sur-Meuse, with only one survivor. The next day, five Bf 109s bounced three 73 Squadron Hurricanes and shot two down. == Prelude == === January 1940 === ==== British Air Forces in France ==== On the declaration of war, the Air Component had come under the command of Lord Gort the Commander-in-Chief of the BEF and the AASF remained under Bomber Command control but based with the Armée de l'Air. Thought had been given to liaison and Air Missions had been installed in the main Allied headquarters but training exercises showed that communication was inadequate. In January 1940, command of the AASF and the Air Component was unified under Air Marshal Arthur Barratt as Air Officer Commanding-in-Chief British Air Forces in France (BAFF), the Air Component being detached from the BEF while remaining under its operational control and Bomber Command losing the AASF, since the Battle would not be used for strategic bombing. Barratt was charged with giving "full assurance" to the BEF of air support and to provide the BEF with ...such bomber squadrons as the latter may, in consultation with him, consider necessary from time to time. Since the British held only a small part of the Western Front, Barratt was expected to operate in the context of the immediate needs of the Allies. In France the new arrangement worked well but the War Office and the Air Ministry never agreed on what support should be given to the Field Force of the BEF. When Air Marshal Charles Portal replaced Edgar Ludlow-Hewitt as AOC-in-C Bomber Command on 3 April, he prevented the second echelon of the AASF from going to France, with the agreement of Cyril Newall, the Chief of the Air Staff. Portal took the view that fifty Blenheims attempting to attack an advancing army, using out of date information, could not achieve results commensurate with the expected losses. On 8 May he wrote, I am convinced that the proposed employment of these units is fundamentally unsound, and if it is persisted in it is likely to have disastrous consequences on the future of the war in the air. The airfields occupied by the first echelon were still being equipped for operations and would become dangerously congested if the second echelon arrived. Barratt questioned the wisdom of an assumption that because the AASF was behind the Maginot Line, mobility was less important than that of the Air Component and approval was eventually given to make the AASF semi-mobile. Motorisation came too late and the AASF had to beg, steal or borrow French vehicles when squadrons changed base; by late April, the AASF strength had risen to 6,859 men. === February–March === More flying was possible in January but the air forces spent most of February on the ground, with many of the aircrews on leave. The weather became much better for flying and on 2 March a Dornier was shot down by two 1 Squadron Hurricanes, one of the British pilots being killed while attempting a forced landing after being hit in the engine by return fire; next day, British fighters shot down a He 111. On 3 March, two 73 Squadron pilots escorting a Potez 63 at 20,000 ft (6,100 m) spotted seven He 111s 5,000 ft (1,500 m) higher and gave chase, only to be attacked by six Bf 109s. A Bf 109 overshot one of the Hurricanes, which fired on it as it drew ahead. The Bf 109 fell, leaving a trail of black smoke, the eleventh victory for the squadron. 
The Hurricane was hit by the third Bf 109 and the pilot only just managed to reach a French airfield and make an emergency landing. On the morning of 4 March, a 1 Squadron Hurricane shot down a Bf 109 over Germany and later, three other Hurricanes of the squadron attacked nine Messerschmitt Bf 110s north of Metz and shot one down. On 29 March, three Hurricanes of 1 Squadron were attacked by Bf 109s and Bf 110s over Bouzonville, a Bf 109 being shot down at Apach and a Bf 110 north-west of Bitche; a Hurricane pilot was killed trying to land at Brienne-le-Château. === April – 9 May === In April the usual reconnaissance flights were flown by the Luftwaffe and larger formations of fighters patrolled the front line, with formations of up to three Luftwaffe squadrons (Staffeln) flying at high altitude as far as Nancy and Metz. Reconnaissance aircraft began to cross the front line in squadron strength to benefit from greater firepower on the most dangerous part of the journey, before dispersing towards their objectives. Hurricanes shot down a Bf 109 on 7 April at Ham-sous-Varsberg and on 9 April, when the Germans began Operation Weserübung, the invasion of Denmark and Norway, Bomber Command aircraft were diverted to operations in Scandinavia. The Battle squadrons took over leaflet raids over Germany by night and suffered no losses. The situation was unchanged until the night of 9/10 May, when the heavy guns of the German and French armies began to bombard the Maginot and Siegfried lines. In early May, the AASF had 416 aircraft, 256 light bombers, 110 of the 200 serviceable bombers being Battles. The Armée de l'Air had less than a hundred bombers, 75 per cent of which were obsolescent. The Luftwaffe in the west had 3,530 operational aircraft, including about 1,300 bombers and 380 dive bombers. == Battle of France (Fall Gelb) == === 10 May === As dawn broke, German bombers made a coordinated, hour-long attack on 72 airfields in the Netherlands, Belgium and France, inflicting severe losses on the Belgian Air Component (Belgische Luchtmacht/Force aérienne belge) and the Royal Netherlands Air Force (Koninklijke Luchtmacht). The Luftwaffe bombers flew in formations of three to thirty Heinkel 111, Dornier 17 or Junkers 88s but had the least effect on the British and French airfields, over which British and French fighters intercepted the German raiders. Nine British-occupied bases were attacked to little effect. Hurricanes of 1 Squadron at Vassincourt patrolled the Maginot Line from 4:00 a.m. and shot down a He 111 for one Hurricane damaged. At 5:30 a.m. A Flight shot down a Do 17 near Dun-sur-Meuse for one Hurricane crash-landed. At Rouvres, two 73 Squadron Hurricanes attacked three bombers over the airfield, damaging one for a Hurricane forced down damaged. At 5:00 a.m. four Hurricanes attacked eleven Do 17s near the airfield, one Hurricane landing in flames with a badly burned pilot and one Hurricane returning damaged. More Hurricanes were scrambled and shot down two Do 17s; a He 111 was shot down soon afterwards. Orders to 73 Squadron led to it moving back from its forward airfield to its base in the AASF area around Reims. From England, 501 Squadron, with Hurricanes, landed at Bétheniville to join the AASF and went into action within the hour against forty He 111 bombers. A transport aircraft ferrying pilots and ground crews of the squadron crashed on landing; three pilots were killed and six injured.
The AASF bomber squadrons remained on the ground waiting for orders but the bombing policy established by Grand Quartier Général (GQG, French supreme headquarters) did not require the British to obtain permission. At Chauny, Barratt and d'Astier discussed reconnaissance reports and Barratt ordered the AASF into action. A German column had been reported in Luxembourg by a French reconnaissance aircraft several hours earlier. The French bomber squadrons received orders and counter-orders; some were sent to make low-level demonstrations to reassure French troops and were intercepted by German fighters. The AASF squadrons had been on stand-by since 6:00 a.m., one flight in each squadron at thirty minutes' readiness and the other at two hours' notice. Barratt called General Alphonse Georges, commander of the Théâtre d'Opérations du Nord-Est (North-eastern Theatre of Operations) to tell him that the AASF would commence operations but it took until 12:20 p.m. to give the order to attack. Thirty-two Battles from 12, 103, 105, 142, 150, 218 and 226 squadrons flew at low altitude, in groups of two to four bombers, to attack German columns. The first wave of eight Battles had support from five 1 Squadron and three 73 Squadron Hurricanes, sent to patrol over Luxembourg City and clear away German fighters. The two fighter formations were not co-ordinated and had only vague orders; the three 73 Squadron Hurricanes attacked a force of unescorted German bombers and were bounced by German fighters before they made contact with the Battles. At least one Hurricane was shot down; the 1 Squadron pilots saw what was happening but were too low to help. The Battles hedge-hopped towards the target and evaded the German fighters but were well inside the range of German ground fire. Two Battles of 12 Squadron attacked at 30 ft (9.1 m) and one was shot down as it approached the target. The second aircraft strafed the column with its forward firing machine-gun and bombed; neither side could miss and the Battle crash-landed in a field. Another twelve Battles were shot down and most of the rest were damaged. In the afternoon, a second raid by 32 Battles flying at 250 ft (76 m) was intercepted by Bf 109s and ten were shot down by fighters and ground fire. During the day, AASF and Air Component Hurricanes claimed sixty Luftwaffe aircraft shot down, sixteen probables and twenty-two damaged. The AASF Hurricanes had flown 47 sorties and been provisionally credited with shooting down six bombers for five Hurricanes shot down or force-landed in a 1999 analysis by Cull et al. No aircraft From Bomber Command in England appeared because the British state was preoccupied with a change in government. Barratt requested support and during the night, Bomber Command sent 36 Wellington bombers to attack Waalhaven and eight Whitleys from 77 and 102 squadrons bombed transport bottlenecks into the southern Netherlands at Geldern, Goch and Aldekirk; Rees and Wesel over the German border also being raided. === 11 May === As Blenheim crews of 114 Squadron at Vraux were preparing to take off to attack German tank columns in the Ardennes, nine Dornier 17s appeared at treetop height and bombed them, destroying several Blenheims, damaging others and causing casualties. 
From 9:30 to 10:00 a.m., eight Battles in two flights of two sections each from 88 and 218 squadrons took off to raid German troop concentrations near Prüm 10 mi (16 km) over the border in the Rhineland, where two panzer divisions had begun their westwards advance the day before and were already past Chabrehez, 20 mi (32 km) inside Belgium. From Reims, the Battles had to make a 60 mi (97 km) flight diagonally across the front. The raid was the first by 88 Squadron whose two sections flew 300 yd (270 m) apart to give the Germans no time to react. The Battles received constant small-arms fire at the vicinity of Neufchâteau, 50 mi (80 km) from Prüm and for the rest of the approach. An aircraft from the second flight force-landed near Bastogne, two more were lost near St Vith and the surviving aircraft had fuel sloshing around the cockpit. The pilot turned back and attacked a column in a narrow valley at Udler, 15 mi (24 km) short of Prüm but the bomb-release gear had been damaged and they did not drop; the Battle managed to return and land at Vassincourt, the four Battles from 218 Squadron disappeared. An attack planned for the afternoon was cancelled because of the dusk and because Barratt wanted to conserve his aircraft. The Belgian government appealed to the Allies to destroy the Albert Canal bridges around Maastricht but the Germans had already installed many anti-aircraft guns there. Six Belgian Battles out of nine from Aeltre were shot down around noon along with two of the six fighter escorts, the three survivors causing no damage. Six Blenheims from 21 Squadron and six from 110 Squadron in Britain attacked next from 3,000 ft (910 m). As the bombers approached they met massed anti-aircraft fire and broke formation to attack from different directions, only to spot Bf 109s and form up again. Four Blenheims were shot down, the rest were damaged and no bomb hit the target. Ten modern French LeO 451s from GB I/12 and II/12, escorted by Morane-Saulnier M.S.406 fighters, attempted the first French bombing raid of the battle and set fire to some German vehicles but failed to hit the bridges. The Morane pilots attacked the German fighters and claimed five Bf 109s for four Moranes; one LeO 451 was shot down and the rest so badly damaged that they were out of action for several days. During the night of 11/12 May, Barratt called on Bomber Command to attack transport targets around München-Gladbach; Whitleys from 51, 58, 77 and 102 squadrons, with Hampdens from 44, 49, 50, 61 and 144 squadrons sent 36 bombers but five Hampdens returned early and only half the remainder claimed to have bombed the target. A Whitley and two Hampdens were shot down, the two Hampden crews, minus a pilot, making their way back to Allied lines. AASF, Air Component and 11 Group Hurricane pilots claimed 55 German aircraft and French fighter pilots in the RAF area claimed another 15; analysis by Cull et al. in 1999 attributed 34 Luftwaffe aircraft destroyed or damaged to Hurricane pilots. === 12 May === At 7:00 a.m., nine Blenheims of 139 Squadron flew from Plivot to attack a German column near Tongeren but were intercepted by fifty Bf 109s and lost seven aircraft, two of the crews returning on foot after crash-landing. At Amifontaine, 12 Squadron was briefed for an attack on the bridges near Maastricht with six Battles. After the fate of the Belgian Battles the day before, the commander asked for volunteers and every pilot stepped forward; the six crews on standby were chosen. 
Two Blenheim squadrons were supposed to attack Maastricht at the same time as a diversion and twelve Hurricane squadrons were flying in support but half of these were operating to the north-west and the others were only flying in the vicinity, except for 1 Squadron, which was to sweep ahead to clear away German fighters. Three Battles of B Flight were to attack the bridge at Veldwezelt and three from A Flight the bridge at Vroenhoven. Two Battles of A Flight took off at 8:00 a.m. and climbed to 7,000 ft (2,100 m); 15 mi (24 km) short of Maastricht, the aircraft received anti-aircraft fire, surprising the crews with the extent of the German advance. The Hurricane pilots saw about 120 German fighters above them and attacked; three Bf 109s and six Hurricanes were shot down. During the diversion, A Flight dived over the Maastricht−Tongeren road towards the Vroenhoven bridge covered by three Hurricanes; a Bf 109 closed on the leading aircraft, then veered off towards the second Battle, which hid in a cloud. The Battles dived from 6,000 ft (1,800 m) and bombed at 2,000 ft (610 m), both being hit in the engine; one Battle came down in a field, the crew being captured. The second Battle crew, having shaken off the Bf 109, saw bombs from the first Battle explode on the bridge and hit the water and the side of the canal. The second Battle pilot turned away, amidst a web of tracer from ground fire and was then damaged by a Bf 109 fighter, which was hit by the rear gunner. The port fuel tank caught fire and the pilot ordered the crew to parachute, then he noticed that the fire had gone out. The pilot nursed the bomber home but ran out of fuel a few miles short and landed in a field; the observer got back to Amifontaine but the gunner was taken prisoner. Five minutes later, B Flight attacked the bridge at Veldwezelt, having flown over Belgium in line astern at 50 ft (15 m). One Battle was hit and caught fire before the target, bombed and crashed near the canal; the pilot, despite severe burns, saved the crew, who were taken prisoner. A second Battle was hit, zoomed while on fire, dived into the ground and exploded, killing the crew. The third Battle made a steep turn near the bridge then dived into it, destroying the west end. German engineers began immediately to build a pontoon bridge and as they worked, 24 Blenheims from 2 Group in England attacked the bridges at Maastricht; ten were shot down. At 1:00 p.m. 18 Breguet 693s from GA 18, with Morane 406 fighter escorts, attacked German tank columns in the area of Hasselt, St Trond, Liège and Maastricht, losing eight bombers. Twelve LeO 451s attacked columns around Tongeren, St Trond and Waremme at 6:30 p.m. and survived, despite most being damaged. Late in the afternoon, fifteen Battles flew against German troops near Bouillon and six were shot down. During the night, forty Blenheims of 2 Group flew in relays against the Maastricht bridges with few losses. At daybreak, the AASF intervened against the German advance towards Sedan for the first time, three Battles of 103 Squadron attacking a bridge over the Semois, the last river east of the Meuse. The Battles flew very low and all returned. At about 1:00 p.m. three more Battles of 103 Squadron attacked the bridge from 4,000 ft (1,200 m) and were intercepted by Bf 110s. The Battles dived and hedge-hopped to evade the fighters, bombed a pontoon bridge next to the ruins of the original one from 20 ft (6.1 m) and escaped. At about 3:00 p.m.
three Battles of 150 Squadron bombed German columns around Neufchâteau and Bertrix, east of Bouillon. One Battle was hit and crashed in flames but the other two bombed from 100 ft (30 m) and got away. At 5:00 p.m. three 103 Squadron Battles and three from 218 Squadron attacked in the vicinity of Bouillon; the Battles from 103 Squadron flew individually at low altitude and those of 218 Squadron flew in formation at 1,000 ft (300 m). General cover was provided by the Hurricanes of 73 Squadron but they claimed only a Henschel Hs 126 reconnaissance aircraft. Two 218 Squadron Battles were shot down and the low-level attack by 103 Squadron cost two more, the squadron having decided to dispense with the navigator for tactical operations by day. The surviving crew of 103 Squadron had also protected themselves by attacking a German tank column west of the target and running for home, according to the original AASF intention of attacking the first German troops encountered. Barratt had decided that the Battles should attack from a higher altitude to reduce losses from ground fire but Playfair took the view that the new policy would not put the Battles out of range of German anti-aircraft guns. The results of the operations on 12 May gave no conclusive evidence that low attacks were more dangerous. In the sixty sorties since 10 May the Battle squadrons had lost thirty aircraft and in the evening Barratt was ordered to conserve his force until the climax of the battle. In emergencies, the AASF was supposed to maintain a tempo of two-hourly attacks but this proved impossible; Playfair was ordered to rest the Battle squadrons on 13 May. By the end of the day, the AASF had been reduced to 72 serviceable bombers. AASF and Air Component Hurricanes were confronted by more Bf 109s over the front line, which shot down at least six of the twelve Hurricanes lost. The two Hurricane forces claimed 60 Luftwaffe aircraft shot down, probables or damaged, 27 being attributed to them in a 1999 analysis. === 13 May === Fifty miles north of the AASF bases, opposite the Meuse, the crisis of the Battle of France was beginning but the Allied commanders still took the threat in the Low Countries more seriously. Four Battles of 76 Wing (12, 142 and 226 squadrons) received orders to attack German forces around Wageningen, about 250 mi (400 km) away but the raid was cancelled because of poor weather. Later on, seven Battles of 226 Squadron were sent to attack German columns near Breda, 200 mi (320 km) distant, despite the target being closer to 2 Group in England. No German columns were found; the Battles demolished a factory to block the road and returned safely. Information about the situation on the Meuse began to arrive and AASF HQ began to consider a contingency plan to evacuate to fields further south. During the evening a French pilot saw Germans crossing the Meuse at Dinant and landed at the closest airfield, which was that of 12 Squadron, so that the German crossing could be attacked quickly, but Playfair and Barratt refused to allow it. Pressure on the British air commanders increased during the night when Billotte, the commander of Groupe d'armées 1 (1st Army Group), told Barratt and d'Astier that "victory or defeat hinges on the destruction of those bridges". The Germans had bridgeheads on the west bank of the Meuse and were building pontoon bridges to get tanks across; Barratt and d'Astier were told to make an immediate maximum effort.
Unlike the permanent bridges attacked on 12 May, the German defences at Sedan were not organised, pontoon bridges were more vulnerable and the river was much closer to the AASF airfields, commensurately further from Luftwaffe bases. French bombers made two attacks during the day and overnight the French bombed German rear areas as the Blenheims of 2 Group attacked the Maastricht bridges and railways at Aachen and Eindhoven. Ten Hurricanes were lost on 13 May, six to German fighters for a claim of five Bf 109s and five Bf 110s, double the number eventually attributed to AASF and Air Component Hurricanes. Total claims were 37 German aircraft shot down, probables or damaged and 21 recognised in a 1999 analysis. The Hurricane squadrons in France lost 27 fighters shot down, 22 to German fighters, seventeen pilots being killed and five wounded. The Hurricane pilots claimed 83 German aircraft shot down, probables or damaged, later reduced to 46. === 14 May === At dawn, six Battles from 103 Squadron attacked the pontoon bridges over the Meuse at Gaulier north of Sedan; all of the Battles returned and some of the pontoons may have been damaged. At 7:00 a.m., four Battles attacked and returned safely. French apprehensions about the situation grew so intense that the Armée de l'Air decided to use obsolete Amiot 143 bombers and Barratt agreed to make a maximum effort. Hurricane squadrons from the north were to reinforce the AASF but still only to fly in the general area of the Battles, along with French fighters. After a second attack from the French-based bombers, 2 Group were to attack from England. At 9:00 a.m. eight Breguet 693s with fifteen Hurricane and fifteen Bloch 152 fighter escorts, attacked German tanks at Bazeilles and the pontoons between Douzy and Vrigne-sur-Meuse, against scattered anti-aircraft and fighter opposition; all the Breguets returned. Just after noon, eight LeO 451s and 13 Amiot 143s, also with fifteen Hurricane and fifteen Bloch 152 fighter escorts, attacked the same targets; three Amiots and a LeO were shot down. From 3:00 p.m. to 3:45 p.m. 45 Battles attacked the bridges and 18 Battles with eight Blenheims went for German columns. Some Battles flew higher, reducing the risk of hits by ground fire but became more vulnerable to fighters. Five Battles from 12 Squadron dive-bombed a crossroads at Givonne against intense small-arms fire; two managed to bomb but only one Battle returned. Eight Battles from 142 Squadron flew in pairs to attack pontoon bridges from low level, with bombs fuzed for an eleven-second delay. The pairs were intercepted by German fighters; four Battles were shot down, at least two by fighters. Six Battles of 226 Squadron tried to dive-bomb bridges at Douzy and Mouzon against ground fire. One aircraft was damaged and turned back; three more Battles were shot down. Seven of eleven 105 Squadron Battles were lost, one Battle landing at a nearby friendly airfield and another crash-landing. Four Battles of 150 Squadron were shot down by Bf 109s and eight from 103 Squadron bombed the Meuse crossings at very low altitude or in dives. Three of the Battles were hit but made it back to Allied areas before crash-landing, all but one pilot surviving and returning to base. Ten of eleven Battles from 218 Squadron were shot down and of the ten Battles from 88 Squadron, four against bridges and six to bomb columns between Bouillon and Givonne, nine returned. 
The operation was the costliest to the RAF of its kind in the war; 35 Battles had been lost from the 63 that attacked, along with five of the eight Blenheims. The survivors were too damaged to form a second wave. The afternoon attacks had met a much more effective defence than those in the morning and flying higher over German ground fire had only brought the Battles closer to German fighters. The German XIX Corps reported constant air attacks, which delayed the crossing of German tanks to the west bank of the Meuse. Every serviceable French bomber had flown and since 10 May, the Armée de l'Air had lost 135 fighters, 21 bombers and 76 other aircraft. Six Battle crews returned on foot through German-held territory but 102 aircrew had been killed or captured and more than 200 Hurricanes had been lost in four days. As night fell, 28 Blenheims of 2 Group attacked the bridges and seven were shot down, two coming down behind Allied lines. In Britain, Air Marshal Hugh Dowding, the Air Officer Commanding RAF Fighter Command, was heard by the War Cabinet. Having already been ordered to send another 32 Hurricanes to France, Dowding urged that French requests for another ten fighter squadrons be refused. The Air Staff took the losses as proof that tactical operations were not worth the cost, despite it working so well for the Luftwaffe, and judged the Battle to be obsolete, despite the Blenheim, German Junkers Ju 87s and the new French Breguet 693 bombers suffering just as many losses when not escorted by fighters. Playfair and Barratt appealed for more fighters and got a few, despite calls from everywhere for more. Barratt demanded that no more Battles be sent to France without self-sealing tanks; until then the Battles would fly at night, except for crews with insufficient training in night operations or in dire emergency. The AASF and Air Component Hurricane squadrons lost 27 aircraft, 22 to German fighters, 15 pilots being killed and four wounded; another two pilots had been killed and one wounded by German bombers or ground fire. The Hurricane squadrons claimed 83 German aircraft shot down, probables or damaged and a 1999 analysis attributed 46 German aircraft shot down or damaged to British fighters. === 15–16 May === After the AASF losses from 10 to 14 May, attacks on the Meuse bridgeheads on 15 May were made by Bomber Command squadrons based in England. German mobile forces broke out of the bridgehead at Sedan and at 11:00 a.m. twelve Blenheims from 2 Group attacked German columns around Dinant as 150 French fighters patrolled in relays. The RAF sent another sixteen Blenheims escorted by 27 French fighters at 3:00 p.m. to attack bridges near Samoy and German tanks at Monthermé and Mezières, from which four Blenheims were lost. On the night of 15/16 May around twenty Battles flew and attacked targets at Bouillon, Sedan and Monthermé for no loss but cloud cover made navigation and target finding difficult; fires were seen but no-one claimed great results. Night raids were suspended because Barratt expected the Germans to wheel south behind the Maginot Line and ordered the Battle squadrons to retire to bases around Troyes in southern Champagne, where, during the Phoney War, the army and the RAF had prepared many airfields and several grass airstrips.
Amid confusion caused by Luftwaffe attacks on the airfields and roads full of troops and refugees, the squadrons began to retire, many of the Battle squadrons being out of action during the moves, which turned out to be unnecessary when the Germans drove west instead of south. The AASF had been deemed a static unit, protected by the Maginot Line and was 600 lorries short of even its slender establishment of vehicles. The AASF was fortunate that the Germans went west and there was time to fetch most of the equipment, using 300 new lorries from the US, loaned by the French, at the behest of the Air Attaché in Paris. Drivers were rushed by air from Britain but were ignorant of the vehicles, the locations of AASF bases and of France; someone loaded the starting handles, jacks and tools onto a lorry bound for the west coast, under the impression that they were superfluous spare parts. BAFF losses since 10 May stood at 86 Battles, 39 Blenheims, nine Westland Lysander army co-operation aircraft and 71 Hurricanes; Bomber Command had lost 43 aircraft, mainly from 2 Group. The AASF and Air Component Hurricanes suffered 21 losses, half to Bf 110s and three to Bf 109s; five pilots were killed, two taken prisoner and four were wounded. The Hurricane pilots claimed fifty German aircraft, later reduced to 27 in a 1999 analysis by Cull, Lander and Weiss. On 16 May, 103 Squadron moved south with full bomb loads to be ready as soon as they reached their new airfields but the squadron was not called on and the other squadrons seemed more intent on settling in, despite the disaster on the Meuse. === 17 May === The nine surviving Blenheims of 114 and 139 squadrons were transferred to the Air Component, reducing the AASF to six Battle and three Hurricane squadrons; for the next five days the AASF flew few missions, most of those at night. The AASF withdrew 105 and 218 squadrons and their remaining aircraft, transferring crews to the other squadrons; 218 Squadron aircraft flying a few sorties before the change. The six squadrons sent away as much superfluous equipment as possible to become more mobile. In March, 98 Squadron had been based at Nantes as a reserve and sent crews and machines to the active squadrons; a shortage of gunners led to pilots substituting for gunners on occasion. Bomber Command sent twelve Blenheims of 82 Squadron, 2 Group, to attack German troops at Gembloux; ten were shot down by Bf 109s, an eleventh by ground fire and the twelfth Blenheim was damaged but returned to base. As the French armies and the BEF in the north retreated, the most exposed Air Component squadrons were withdrawn westwards and land line communication with BAFF HQ, south of the German advance, was cut. No Hurricane pilots of the AASF and Air Component were killed but a minimum of 16 Hurricanes were shot down and one pilot taken prisoner. AASF, Air Component and 11 Group squadrons claimed 55 Luftwaffe aircraft shot down, probables or damaged, later reduced to 28 by Cull et al. === 18 May === On 18 May, the Battle squadrons were on stand-by but only 103 Squadron flew operations. Targets around St Quentin were bombed but low-level attacks were abandoned and with no escorts, the Battles flew in ones and twos, as fast as possible, trusting the manoeuvrability of the Battle rather than formation flying to evade fighters, despite being 100 mph (160 km/h) slower. 
The Battles flew at 8,000 ft (2,400 m) and attacked in a shallow dive, dropping bombs with instantaneous fuzes at 4,000 ft (1,200 m), all the Battles returning safely. At least 33 Hurricanes were shot down, most by German fighters. Seven pilots were killed, five made prisoners of war and four were wounded. Half of the Hurricane claims were against bombers and many Bf 110 fighters were shot down. The AASF and Air Component Hurricane squadrons claimed 97 German aircraft shot down, probables or damaged, later reduced by Cull et al. to 46. === 19 May === On 19 May the German advance in the north led to the Air Component squadrons retiring to English bases. The AASF had 12, 88, 103, 142, 150, 218 and 226 squadrons available. D'Astier tried to support a counter-attack near Laon but the AASF HQ was out of touch and Barratt knew nothing about it, except that German forces were west of the Montcornet–Neufchâtel road. Reconnaissance reports showed Barratt that to the east, more German troops were north of Rethel, a threat to the AASF bases further south. All but 226 Squadron were ordered to make another maximum effort by day against anything they saw on the roads between Rethel and Montcornet. The wrong orders reached 88 Squadron, which attacked around Hirson, an important point on the German drive west, and 142 Squadron attacked targets far to the west around Laon, near the French counter-attack, unlike the Rethel area, which was on the fringe of the offensive and only occupied by screening forces. Six Battles of 150 Squadron attacked road columns near Fraillicourt and Chappes, against intense anti-aircraft fire, one Battle being shot down and two making emergency landings at the nearest Allied airfield. The crews of 218 Squadron bombed tanks and lorries near Hauteville and Château-Porcien, in shallow dives from 7,000 to 6,000 ft (2,100 to 1,800 m), bombing from 4,000 to 2,000 ft (1,220 to 610 m). Plenty of targets were found by 88 Squadron, which, with 218 Squadron, lost no aircraft. Three Battles from 142 Squadron were shot down west of Laon and of six Battles sent by 12 Squadron north of Rethel, which found only one column to bomb, two were lost, one to a Bf 109. Six Battles sent by 103 Squadron bombed targets near Rethel and all came home. The raids did nothing to assist the French counter-attack. The Germans had passed beyond the terrain bottlenecks further east. Six Battles out of 36 had been lost, an 18 per cent loss rate, which was a considerable improvement on the 50 per cent rate from 10 to 15 May but still unsustainable. Barratt concluded that night operations were the only way to save the Battle force from destruction. AASF and Air Component Hurricanes claimed 112 Luftwaffe aircraft shot down, probables or damaged (later reduced to 56) for a loss of 22 Hurricanes shot down and 13 force-landed, 24 to Bf 109s, which suffered 14 losses to the Hurricanes. Eight fighter pilots were killed, seven were wounded and three taken prisoner. === 20–21 May === On 20 May, 73 Squadron flew one patrol, 1 Squadron and 501 Squadron being rested and no claims or losses were recorded for the AASF Hurricanes. The commander of 1 Squadron asked for the relief of tired pilots and eight were immediately dispatched, three arriving the same day. Twelve Hurricanes were shot down, at least seven by ground fire while strafing, three pilots being killed and one captured. AASF and Air Component pilots claimed forty aircraft shot down, probables or damaged, reduced by Cull et al. to 18.
During the night of 20/21 May, 38 Battles were sent to bomb communications around Givet, Dinant, Fumay, Monthermé and Charleville-Mézières. Misty conditions around the Meuse led to few of the Battle pilots claiming hits on anything and one aircraft was lost. Delays caused to the German advance in these areas could have no effect on the beginning of the Battle of Arras 100 mi (160 km) to the west. On 21 May the AASF and Air Component Hurricanes claimed four German aircraft, three of which were recognised by Cull et al. for the loss of three Hurricanes, one pilot killed, one taken prisoner and one returned unhurt. The AASF bombers resumed daylight operations and 33 Battles, in small flights, attacked German columns near Reims, indirectly supported by 26 Hurricanes in the area. Attacks near Le Cateau and St Quentin could have been on French troops by mistake; poor air reconnaissance made it difficult to find German forces or their objectives. === 21–22 May === By 21 May, some Lysanders of 4 Squadron, attached to the BEF HQ, were all that remained of the Air Component in France. The pilots of the three AASF Hurricane squadrons were exhausted; most of those of 1 Squadron were replaced and 73 Squadron pilots were given notice of their replacement; the 501 Squadron pilots, having been in France only since 10 May, had to remain. Hurricane reconnaissance sorties discovered the German advance from Cambrai on Arras and the ground control organisation at Merville ordered the Hurricanes that were airborne to strafe the German columns. When fighter squadrons on escort or fighter patrol turned for home, they began to use up their ammunition on ground targets. The fighters still attacked in tight formations and in fifty ground strafing sorties by the various British fighter forces in France, six Hurricanes were shot down and three pilots killed. With another French counter-attack due on 22 May, the British government demanded a greater effort from the RAF and during the night of 21/22 May, 41 Battles were prepared for raids on the Ardennes. After 12 Battles had taken off, the Air Ministry cancelled the operation in favour of operations against German tanks the next day around Amiens, Arras and Abbeville, contrary to Barratt's better judgement since tanks were small targets and the Battles would have to attack at low altitude. The patrol area was too far west to assist a French counter-attack south from Douai on Cambrai but might have had an effect on operations on the right flank of the BEF. The weather remained poor and after several aircraft took off, the raid was cancelled. Tanks were seen around Doullens, Amiens and Bapaume and one was claimed for the loss of one Battle and three damaged. Night operations by the AASF Battles against the Meuse crossings had suffered few losses but their training in night flying during the spring could not overcome the inherent difficulty of night navigation and target-finding. During the night of 22/23 May, 103 Squadron was sent to bomb Trier on the German–Luxembourg border. === 23 May === Uncertainty about supporting an Allied attack south or a retreat north led to a dawn attack by the Battles of 88 Squadron around Douai and Arras being cancelled. To prevent an Allied retreat to Dunkirk being cut off, bombing of German forces to the north-west of Arras was substituted. The new raid was cancelled too and 88 Squadron flew no operations.
In the evening, 12 Squadron sent four Battles against German tanks on the Arras–Doullens road but the weather deteriorated and only two of the bombers found the target; all four aircraft returned. Four Battles from 150 Squadron made dive-bombing attacks on German tanks at the exit of the village of Ransart and vehicles in a stand of trees further south. One pilot dropped his bombs then strafed another column that appeared, despite considerable anti-aircraft fire; the four Battles survived despite all being met with ground fire. During the night of 23/24 May, 37 Battles bombed targets at Monthermé and Fumay. Hurricane squadrons of the AASF and Fighter Command engaged Bf 109s in northern France, claiming six fighters for the loss of ten Hurricanes. === 24 May === No Battle sorties were flown in daylight on 24 May but 73 Squadron Hurricanes from Gaye, near Paris, claimed one Luftwaffe aircraft for the loss of two aircraft and one pilot severely burned. During the night of 24/25 May 41 Battles attacked railway sidings at Libramont, supply dumps at Florenville and the roads through Sedan, Fumay, Givet and Dinant with 20 lb (9.1 kg) incendiary, 40 lb (18 kg) anti-personnel and the usual 250 lb (110 kg) bombs. AASF night sorties equalled the number flown by the Armée de l'Air but the Battles were not built for night operations, despite night flying training; the limited view from the observer compartment left the occupant unable to help with target finding. Long-range night sorties were extremely difficult and attacks on the Meuse crossings from the AASF bases around Troyes required a 100 mi (160 km) cross-country flight. The Allied retreats towards the Channel and North Sea coasts had multiplied the number of supply routes open to the German armies, making attacks on the Meuse crossings less effective. === 25 May === During the night of 24/25 May, Frankfurt, 100 mi (160 km) over the German border, was attacked by 88 Squadron, a 250 mi (400 km)-flight of dubious relevance to the fighting in France and Belgium. During the day, about ten Battles from various squadrons attacked German columns on the Abbeville–Hesdin road for the loss of one aircraft, the crew managing to return on foot. A Do. 17 was claimed by 73 Squadron for the loss of a Hurricane and the capture of the pilot. The code-breakers at Bletchley Park in Britain decrypted a Luftwaffe signal received at 1:30 p.m., that revealed a conference of Luftwaffe commanders to be held on the next day at the HQ of Fliegerkorps VIII at 9:00 a.m. in Roumont Château, near Libramont in southern Belgium. The commanders were to arrive at Ochamps airfield nearby, at 8:30 a.m. The Air Ministry received the information in the early hours of 25/26 May and signalled the news to the AASF HQ at 2:40 a.m., sending more intelligence at 5:15 a.m. === 26 May === Playfair passed orders to attack the château to 103, 142 and 150 squadrons at 7:30 a.m. At 9:00 a.m. fourteen Battles set off for the target supported by Hurricanes of 1 and 73 squadrons. The Battles flew in pairs and the leader of the two aircraft from 150 Squadron lost touch with the other Battle after flying into a storm and descending to 5,000 ft (1,500 m). The crew eventually found the château, dive-bombed from 3,000 ft (910 m) and were attacked by four Bf 110s. The pilot tried to hedge hop out of danger and as the Bf 110s gave chase, the pilot fired at a German aircraft he had spotted landing on an airstrip ahead. 
The Battle was hit on the armour fitted to resist fighter attack and the robust structure of the airframe protected the crew; the pilot only had to force-land after the engine was hit. The pilot escaped but the navigator and gunner were taken prisoner. Most of the rest of the Battles found the target through the stormy weather and damaged the building but inflicted no casualties; a second Battle was lost. No more operations in daylight took place until 28 May. === 27–31 May === The Battles were grounded by bad weather on the night of 26/27 May but thirty-six Battles attacked German targets on the night of 27/28 May and a fire was reported after bombing near Florenville in Belgium. The weather grounded most of the Battles on the nights of 28/29, 29/30 and 30/31 May. On 28 May, the AASF Battles began attacks to obstruct the massing of German forces on the Somme and Aisne rivers and the bridgeheads that the Germans had established over the Somme. The Battles used 40 lb (17.5 kg) General Purpose bombs during the day for the first time, which could be dropped from lower altitudes and were better suited to hit dispersed troops and vehicles. Six Battles from 226 Squadron attempted to dive-bomb targets around Laon but failed because of low cloud; three tried to bomb targets through gaps in the clouds and three returned with their loads. Six more 226 Squadron Battles were later sent to bomb roads into Amiens and Péronne; one Battle returned early after a window panel blew out and the others used steep or shallow dive-bombing attacks against German troops and transport. After bombing, the pilots used their forward-firing guns to strafe every German column they saw, one pilot taking five minutes to exhaust his ammunition; a Battle received damage but all returned. Day sorties were also flown by 103 Squadron on the roads into Abbeville, one Battle being severely damaged. Since 14 May the Battles had flown about 100 sorties by day for a loss of nine aircraft, a considerable reduction in the rate of loss, despite the lack of self-sealing fuel tanks, armour for the engines and close fighter escorts. Since 10 May, the AASF had lost more than 119 aircrew killed and 100 Battles. == Fall Rot == === 1–4 June === Operations in May proved the French right about the lack of fighters in the AASF. Fighter sweeps and patrols in the general area of Battle operations had proved futile and Barratt judged that day bombers needed close escort by fighters if they were to survive. From 20 May, Blenheims flying from Britain had received fighter escorts and losses were drastically reduced; the AASF Battles needed similar protection, which required more than the three Hurricane squadrons in the AASF could provide. Barratt informed the Air Ministry that either the fighters should be reinforced or the bombers returned to Britain. Given the desperate situation in France, returning the AASF was unthinkable; Barratt said that fighter reinforcements were needed immediately, not as a panic measure once the German offensive resumed. The British government and the Air Staff refused to weaken British air defences and not even the AASF fighter losses were replaced.
Barratt decided to limit daylight Battle sorties to the number that the Hurricanes could escort and keep the rest on night raids. The build-up in the bridgeheads over the Somme at Péronne, Amiens and Abbeville continued, making German intentions obvious. The AASF bombers were used to interrupt the German preparations on the night of 31 May/1 June but some Battles were sent to attack the Rhine bridges near Mainz, about 100 mi (160 km) east of the Meuse; military traffic over the Rhine could have no influence on the battle soon to begin. The night bombing operations were also disrupted by the move of the Battle squadrons from their airfields around Troyes to Tours, 150 mi (240 km) west of Paris, 200 mi (320 km) from the nearest German forces and 300 mi (480 km) from targets in the Ardennes. The Battles of 12 Squadron moved from Échemines to Sougé then had to use Échemines as an advanced base. During the move, 103 Squadron and 150 Squadron flew several sorties against targets in the Ardennes and Germany. On the night of 3/4 June, 12 Squadron made five sorties to attack rail lines near Trier. On 4 June, the Battle squadrons were stood down for maintenance and to "settle in"; Fall Rot (the German offensive) began the next day. === 5 June === The German offensive over the Somme and Aisne rivers began with attacks from the three Somme bridgeheads but made only slow progress. The AASF had only 18 operational Hurricanes, which were used to protect Rouen, not the Battles, which had not flown in daylight since 28 May. With the French sending every aircraft that could fly, Barratt returned the Battles to day operations. At 7:30 p.m. eleven Battles of 12 and 150 squadrons flew to Échemines and then attacked German columns on the Péronne–Roye and Amiens–Montdidier roads, although many crews failed to find targets and some mistakenly attacked French tanks near Tricot, 25 mi (40 km) behind the French front line; when French fighters arrived, the Battles flew away. The AASF used Échemines and other airfields as advanced bases for day and night operations, despite complaints that their facilities were inadequate and that Échemines had recently been in use as a main base. The German armies managed to enlarge the Somme bridgeheads by nightfall and during the night, Battles attacked the airfield near Guise but eleven were sent to attack targets in the Ardennes, which was of no help to the French armies. === 6 June === The three AASF fighter squadrons were brought up to strength, although this was still inadequate; a deputation from the 51st (Highland) Division even went to BAFF HQ to demand more protection. Nine Battles escorted by a flight of 73 Squadron Hurricanes flew from Échemines at 4:30 p.m. against German columns on the Ham–Péronne road and others bombed tanks and vehicles between Péronne and Roye. Bf 109s got past the Hurricanes and attacked two Battles, which survived. Raids that night were made closer to the front line, where they might have immediate effect and the pilots flew lower. After the first take-off attempt at 11:45 p.m. and a second try at 1:15 a.m. were interrupted by German air raids, Battles from Harbouville attacked a Somme bridge north of Abbeville, roads out of the town and roads out of Amiens. As the Battles returned, several were damaged by another German raid. At Échemines, twelve Battles set forth to bomb airfields and other targets near Laon, Guise and St Quentin, one pilot claiming hits on a fuel dump.
=== 7 June === The French defence of the Abbeville bridgehead began to collapse and 22 Battles attacked German columns between the Abbeville–Blangy-sur-Bresle road and Poix. The aircraft bombed from 2,000 to 3,000 ft (610 to 910 m) to evade ground fire, despite having an escort of only a flight of 73 Squadron Hurricanes, the escorts flying close to the bombers. When escorted, the Battles flew to the target in formation, attacked singly then ran for home. When there were no escorts, the Battles flew alone or in pairs, relying on their manoeuvrability to escape from fighters, their targets being close enough to the front line to give them a chance of escaping; several Battles survived fighter attacks but three were shot down. In the first three days of Fall Rot the Battles flew 42 day sorties for a loss of three aircraft, despite their lack of self-sealing fuel tanks, armour against ground fire and the small number of escorts. The reduction in losses was marked but 17 French day bomber groups had managed to fly 300 sorties over the same period; Barratt remained reluctant to risk more Battles in daylight. === 8 June === As the French armies resisted the German offensive from the Somme bridgeheads, reconnaissance aircraft returned with evidence of an imminent German attack over the Aisne, to the east of Paris. During the night of 7/8 June, eight Battles attacked the Laon–Soissons road after German tanks were reported there. The battle at the Somme bridgeheads retained its priority because the French defences north of Amiens began to collapse (Poix having been lost on the evening of 7 June). The AASF fighter force was reinforced by 17 Squadron and 242 Squadron from England. At 1:30 p.m. twelve Battles attacked German columns in the area of Abbeville, Longpré, Poix and Aumale, escorted by seven Hurricanes; three Battles were shot down. At 3:30 p.m. another eleven Battles attacked, despite the Hurricane escorts not arriving; the pilots reported that German tank and lorry columns were 5 mi (8.0 km) long and one Battle was lost. As one pilot returned, he saw a formation of Ju 87s, dived through and damaged one Stuka with his forward-firing gun, before the Bf 109 escorts intervened. The Battle gunner was wounded but claimed a Bf 109 and the pilot made an emergency landing south of Paris; another Battle pilot engaged a Junkers Ju 88 bomber. === 9 June === During the night of 8/9 June, several Battles bombed the Somme river crossings at Amiens and Abbeville and seven sorties were flown against the forests around Laon, north of the Aisne, where German troops were thought to be hiding. During the day, an attack on German tank, artillery and troop columns near Argueil was planned but the diversion of the Hurricanes to protect the 51st (Highland) Division led to the operation being cancelled. The expected German offensive over the Aisne began against determined French resistance but on the Channel coast the Germans had broken through. During the night of 9/10 June, ten Battles were sent to bomb bridges and roads at Abbeville and Amiens; nine more, using the staging post of Échemines airfield, attacked targets around Laon, dropping incendiary bombs on Forêt de Saint-Gobain to the north-west, to flush out German troops thought to be using it for cover; lorries driving south with their lights on were also attacked.
=== 10 June === In the afternoon, twelve Battles attacked German units close to Vernon on the Seine, about half-way between Paris and Rouen; one Battle being shot down and another damaged, thought to have been hit by a Hurricane. Later, twelve Battles attacked German motorised columns near Vernon, the Seine bridge at Pont Saint-Pierre and a bridge further south. Despite all of the Hurricanes being sent to cover the evacuation from Le Havre, no Battles were lost. Fifteen Battles returned to the targets after dark and seven were sent to attack the Meuse crossings. === 11 June === The speed of the German advance made the use of forward airfields like Échemines as staging posts redundant and at dawn, twelve Battles bombed crossings over the Seine to the south of Les Andelys. Around noon, six Battles with fighter escort bombed more bridges in the area and in the afternoon sixteen escorted Battles made similar attacks. After a request from the French naval commander of Le Havre to attack tanks thought to be close to the port, six Battles with escorts tried to attack the tanks but found only a couple of armoured vehicles and attacked them. It turned out that the tanks had reached the coast and turned north to cut off the French IX Corps and the 51st (Highland) Division. The Battles had managed a minimum of 38 sorties for a loss of two or three aircraft. Another 24 Battles were to continue the attacks downstream of Rouen that night but only five took off due to inclement weather. === 12 June === At dawn, nine Battles bombed the roads around Les Andelys, north of the Seine, for no loss. In the afternoon, twelve Battles bombed a concentration of vehicles blocked at the Pont de l'Arche railway bridge south of Le Manoir, forcing German engineers repairing it to flee for cover, also for no loss. An attack on pontoon bridges south of Les Andelys was thought to have failed in poor visibility, for one Battle lost and one damaged. Fifteen Battles were sent out to the roads around Les Andelys that night but only seven reached the target in continuing poor weather. === 13 June === At dawn six Battles flew an armed reconnaissance around Vernon and Évreux, again in poor weather, which made it impossible for fighters to escort the Battles, difficult for the crews to see targets and for German fighters to intercept them; there were no losses. Later operations by 150 and 142 squadrons were flown in such bad weather that two of the seven Battles turned back. No fighters were able to escort the Battles and four of the Battles were shot down by Bf 109s. During the afternoon, the French armies east of Paris tried to retreat from the Aisne to the Marne but two armies diverged and German armoured columns rushed through the gap, overtaking some of the retreating French troops past Montmirail towards the Seine. At 3:00 p.m. twelve Battles attacked German columns south of the town and lost one Battle. So many German tanks and vehicles were seen that a maximum effort was made by 26 Battles without fighter escort, except for some French fighters which were busy protecting French bombers. The British bombers were attacked by German fighters and by massed anti-aircraft fire; six Battles were shot down, four by fighters. Paris was declared an open city and with the end of hostilities likely, preparations to evacuate the AASF continued as the Battle squadrons fought on without fighter escorts.
=== 14–26 June === Fighter operations were still hampered by bad weather and escorting the Battles was made more difficult by a disorganised retirement of the fighter squadrons to other airfields. Ten Battles tried to attack German columns near Évreux but could not find them in the bad weather. Two Battles reconnoitring near Paris spotted two Bf 109s on Le Coudray airfield, south of Paris, which had just been evacuated. During the afternoon, nine Battles attacked woods around Évreux and the airfield, two of the three bombers from 12 Squadron being shot down. Orders arrived from Britain during the evening for the Battle squadrons to return and the AASF bombers prepared to make a final attack at dawn. Ten Battles of 150 Squadron, with Hurricane escorts, took off to attack targets around Évreux again. The Battles landed at Nantes, the first stage of their departure from France; the other squadrons managed twelve sorties and took the same route back; about sixty Battles returning to Britain. The remaining AASF Hurricanes began operations to cover evacuations from ports on the French Atlantic coast. Nantes, Brest and St Nazaire were defended by 1, 73 and 242 squadrons, St Malo and Cherbourg by 17 and 501 squadrons flying from Dinard in Brittany and later the Channel Islands. On 18 June, 1 and 73 squadrons, the first to France in 1939, were the last to leave, although many unserviceable Hurricanes and those without fuel were abandoned, not all of them being destroyed. AASF headquarters was disbanded on 26 June 1940. == Aftermath == === Analysis === ==== Fighters ==== Flying Officer Paul Richey of 1 Squadron told a staff officer from AASF HQ that "We're operating in penny packets and are always hopelessly inferior in numbers to the formations we meet. My humble opinion is that we should not operate in formations less than two squadrons strong on bomber cover." In 1999, Cull et al. wrote that Hurricane tactics in France were inept, the fighters being sent into action in threes or sixes against far larger Luftwaffe formations. Fighter pilots had been trained to attack bombers over England, beyond the range of German single-engined fighter escorts and formation flying received more emphasis than observation, dog-fighting and gunnery. Experience gained during the Phoney War was not generally applied; some Hurricane pilots enjoyed great success but the Hurricane squadrons suffered needless casualties for lack of training and leadership. The use of tight formations meant that Hurricane pilots seldom saw the aircraft that shot them down. Few squadron commanders flew on operations and some were flagrantly incompetent. Flying from improvised and unprotected airstrips, rather than the airfields enjoyed by the Fighter Command squadrons in Britain, reduced the number of serviceable aircraft. Most Hurricane engagements took place against bombers, reconnaissance aircraft and the twin-engined Bf 110 heavy fighter, inferior in manoeuvrability but present in much greater numbers. The Hurricanes made bomber escorts necessary but suffered disproportionate losses when they met Luftwaffe Bf 109s which used stalking tactics, attacking unseen out of the sun rather than dog-fighting. ==== Bombers ==== According to Greg Baughen (2016), on 10 May the Fairey Battle had been a disaster as a tactical bomber; when flying high, the Battles were shot down by fighters and when low, by anti-aircraft fire.
With fuel for a range of 1,000 mi (1,600 km) in un-armoured and non-self-sealing tanks, carrying a navigator in a cabin with a poor view outside, the Battle was highly unsuitable for short-range, low altitude, tactical attacks. Extra armour and self-sealing fuel tanks had been delivered to France but not fitted. The crews lacked experience and at first may have taken too long to bomb, sometimes attacking over flat ground, which gave a good view of the target and an equally good view of the aircraft from the ground. With more experience, pilots used ground features for cover and made shorter approach runs. The German invasion demonstrated that RAF crews were not sufficiently equipped or trained for daylight tactical operations and army support; during the Battle of Arras (21 May) the BEF had received no air cover. The attacks on 14 May were catastrophic and the Air Staff claimed that the losses suffered by the AASF showed that tactical air support was not a feasible operation of war. The ministry claimed that the Battle was obsolete, rather than in need of more armour, self-sealing tanks, guns and fighter support. After the war, German officers said that day bombing caused them many delays and that they had not noticed the British night bombing offensive against the Ruhr. The AASF had made a start on becoming a flexible tactical air force; daylight operations by the Battles had more effect on the German offensive than Wellington bombers flying by night. Sending low-performance aircraft over the battlefield might seem suicidal but slow biplanes like Dutch Fokker C.X bombers, the German Henschel Hs 123 dive-bomber and close support aircraft and British Hawker Hector army co-operation aircraft had been used in 1940 for ground attacks at low altitude. Against the ragged forward edge of advanced forces, with ill-organised anti-aircraft defences, such slow, light and highly-manoeuvrable aircraft could hit targets and escape. ==== Command ==== By 19 May, the lack of centralised command made Allied problems much worse; in the north the BEF commanded the Air Component, in the south, the AASF flew in support of the army but the BAFF HQ had to move south to Coulommiers, separating it from the ZOAN HQ. The Air Ministry continued its strategic bombing campaign, Fighter Command was preoccupied by the air defence of Britain, 2 Group was commanded jointly by Bomber Command and BAFF and the French command structure was similarly fragmented. After Dunkirk, Barratt realised that the AASF, down to six Battle squadrons, needed more than its three Hurricane squadrons for day operations. Barratt offered the Air Ministry the choice of a better-balanced force or the withdrawal of the AASF to Britain. If more fighters were to be sent to France it was vital that they be dispatched promptly, not after the Allies had been forced into another retreat. The Cabinet discussed the situation on 3 June; 323 Hurricanes had been lost in May, 226 new ones had been delivered and Fighter Command had 500 operational aircraft but the Cabinet refused to increase the three AASF fighter squadrons. The Air Staff found it difficult to explain away the success of Luftwaffe tactical operations but claimed that Luftwaffe air superiority was due to the advance of the German armies and that it was unworkable for armies in retreat, which became a self-fulfilling prophecy. 
After the Battle of France, the Air Ministry continued to define air superiority as the possession of more bombers than an opposing air force, despite the period from 1939 to late 1940 exploding many interwar theories of warfare. Land battles had not resembled those of the First World War and bombers had not ended wars in a few weeks. The Air Staff emphasised the value of ground strafing, which had been demonstrated by the Hurricane squadrons of the AASF and the Air Component, rather than bombing in support of the army. === Casualties === The Battle squadrons suffered a 40 per cent loss on 10 May, 100 per cent on 11 May and 63 per cent on 12 May. In 48 hours the number of operational AASF bombers fell from 135 to 72. On 14 May the AASF made a maximum effort, sending 63 Battles and eight Blenheims to attack targets near Sedan; more than half were lost, cumulative AASF losses reaching 75 per cent. In three weeks more than 100 Battles had been shot down and 119 aircrew killed. The remaining bombers began to operate mostly at night and from 15 May to 5 June losses fell to 0.5 per cent, albeit with much reduced bombing accuracy. In 2017, Greg Baughen wrote that an Air Ministry study estimated that from 5 to 15 June, the AASF Battles had flown 264 day sorties for a loss of 23 aircraft, a 9 per cent rate. Although the loss of RAF records during the débâcle made totals unreliable, the loss rate was probably about the same and, while high, contrasted favourably with the 50 per cent rate from 10 to 15 May. Blenheim losses averaged 7 per cent but from 20 May, when they received escorts, the rate fell to 5 per cent; Martin Maryland bombers of the Armée de l'Air had a loss rate of 4 per cent, the lowest of the Allied day bombers. In the ten days from the beginning of Fall Rot until the return of the Battle squadrons to Britain, the 2 Group Blenheims flew 473 sorties and the Armée de l'Air 619, the 264 Battle sorties contributing 20 per cent of the total, a considerable achievement for six squadrons. From 10 May to 24 June the AASF lost 229 aircraft and the Air Component another 279. In five weeks, the RAF lost 1,500 men killed, wounded and missing and 1,029 aircraft. === Subsequent operations === Back in Britain, the surviving AASF aircrew were sent on leave; as BAFF had been dissolved, the bomber squadrons reverted to 1 Group, Bomber Command and the Hurricanes to Fighter Command. The Air Ministry contemplated how to use the Battle crews but Bomber Command wanted nothing to do with army support. Despite the losses in France, there were more than 300 Battles in storage and the type was in production as a trainer. The two Battle squadrons withdrawn when the AASF was reduced in size converted to Blenheims but 98 Squadron, despite a grievous loss of personnel, was reformed with Battles. The RAF set up 67 Group to defend Northern Ireland with 88 and 226 squadrons; 98 Squadron was chosen to go to Iceland for coastal reconnaissance. Playfair advocated the use of Battles as night bombers, noting the improvement in accuracy when crews flew low, but against an invasion, waiting until dark would be impossible. On 5 July, 1 Group, based at Hucknall, with the four remaining Battle squadrons, their 45 Battles and 55 crews at RAF Binbrook and RAF Newton, was declared operational for emergencies. For the rest of the summer the Battle squadrons stood by in case of invasion, later joined by the Polish 300 Ziemi Mazowieckiej, 301 Ziemi Pomorskiej, 304 Ziemi Śląskiej and 305 Ziemi Wielkopolska bomber squadrons.
== Gallery == == See also == List of Royal Air Force commands == Notes == == Footnotes == == References == == Further reading == Bond, Brian (1980). British Military Policy between the Two World Wars. Oxford: Clarendon Press. ISBN 0-19-822464-8. Butler, James (1971) [1957]. Grand Strategy: September 1939 – June 1941. History of the Second World War United Kingdom Military Series. Vol. II (2nd ed.). HMSO. ISBN 978-0-11-630095-9 – via Internet Archive. Murland, Jerry (2022). Allied Air Operations 1939–1940: The War over France and the Low Countries. Barnsley: Pen & Sword Aviation. ISBN 978-1-39908-771-1. Powell, M. L. (2014). Army Co-operation Command and Tactical Air Power Development in Britain 1940–43: The Role of Army Co-operation Command in Army Air Support (PhD thesis). Birmingham University. pp. 71–103. OCLC 890146868. Docket uk.bl.ethos.607269. Retrieved 5 October 2018. Richey, P. H. M. (2002) [1941]. Fighter Pilot: A Personal Record of the Campaign in France 1939–1940 (repr. Cassell Military Paperbacks ed.). London: Batsford. ISBN 978-1-4072-2128-1. Terraine, J. (1998) [1985]. The Right of the Line: The Royal Air Force in the European War 1939–1945 (repr. Wordsworth Editions, Ware ed.). London: Hodder and Stoughton. ISBN 978-1-85326-683-6. == External links == Royal Air Force Order of Battle, France, 10 May 1940
Wikipedia/RAF_Advanced_Air_Striking_Force
Fish (sometimes capitalised as FISH) was the UK's GC&CS Bletchley Park codename for any of several German teleprinter stream ciphers used during World War II. Enciphered teleprinter traffic was used between German High Command and Army Group commanders in the field, so its intelligence value (Ultra) was of the highest strategic value to the Allies. This traffic normally passed over landlines, but as German forces extended their geographic reach beyond western Europe, they had to resort to wireless transmission. Bletchley Park decrypts of messages enciphered with the Enigma machines revealed that the Germans called one of their wireless teleprinter transmission systems "Sägefisch" ('sawfish'), which led British cryptographers to refer to encrypted German radiotelegraphic traffic as "Fish." The code "Tunny" ('tuna') was the name given to the first non-Morse link, and it was subsequently used for the Lorenz SZ machines and the traffic enciphered by them. == History == In June 1941, the British "Y" wireless intercept stations, as well as receiving Enigma-enciphered Morse code traffic, started to receive non-Morse traffic which was initially called NoMo. NoMo1 was a German army link between Berlin and Athens, and NoMo2 a temporary air force link between Berlin and Königsberg. The parallel Enigma-enciphered link to NoMo2, which was being read by the Government Code and Cypher School at Bletchley Park, revealed that the Germans called the wireless teleprinter transmission systems "Sägefisch" (sawfish). This led the British to use the code Fish, dubbing the machine and its traffic Tunny. The enciphering/deciphering equipment was called a Geheimschreiber (secret writer), which, like Enigma, used a symmetrical substitution alphabet. The teleprinter code used was the International Telegraph Alphabet No. 2 (ITA2)—Murray's modification of the 5-bit Baudot code. When the Germans invaded Russia during World War II, they began to use a new type of enciphered transmission between central headquarters and headquarters in the field. The transmissions were known as Fish at Bletchley Park. (See Lorenz cipher, Cryptanalysis of the Lorenz cipher.) The German army used Fish for communications between the highest authorities in Berlin and the high-ranking officials of the German Army in the field. The Fish traffic which the personnel at Bletchley Park intercepted contained discussions, orders, situation reports and many more details about the intentions of the German Army. However, these transmissions were so challenging to decrypt that even with the assistance of the high-speed Colossus computer, the messages could not be read until several days later. "Vital intelligence was obtained about Hitler's intentions in the run-up to D-Day 1944." == Traffic code names == === Tunny === The NoMo1 link was initially named Tunny (for tunafish), a name which went on to be used both for the Lorenz SZ40/42 machines and for the Bletchley Park analogues of them. The NoMo1 link was subsequently renamed Codfish. A large number of Tunny links were monitored by the Y-station at Knockholt and given names of fish. Most of these were between the Oberkommando der Wehrmacht (German High Command, OKW) in Berlin and German army commands throughout occupied Europe. The Tunny links were based on two central transmitting and receiving points, one at Strausberg near Berlin for Army generals in the West and one at Königsberg in Prussia for the Eastern Front. The number of radio links jumped from eight in mid-1943 to fourteen or fifteen.
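The symmetrical substitution described above is the reason a single Tunny machine could serve for both enciphering and deciphering: the Lorenz SZ machines combined each 5-bit ITA2 plaintext character with a key character from the machine's wheels by modulo-2 (XOR) addition, and applying the same key again restores the plaintext. The following minimal Python sketch illustrates only that property; the message, the keystream and all values are hypothetical, and it does not reproduce the SZ40/42 wheel mechanism, the real ITA2 character assignments or any Bletchley Park method.

    import random

    def combine(chars, keystream):
        # Add each 5-bit teleprinter character to a key character modulo 2 (XOR).
        return [c ^ k for c, k in zip(chars, keystream)]

    # Hypothetical plaintext and keystream as 5-bit values (0-31); a real Tunny
    # link derived its keystream from the twelve wheels of the SZ machine.
    rng = random.Random(1941)
    plaintext = [rng.randrange(32) for _ in range(12)]
    keystream = [rng.randrange(32) for _ in range(12)]

    ciphertext = combine(plaintext, keystream)   # enciphering
    recovered = combine(ciphertext, keystream)   # deciphering with the same key
    assert recovered == plaintext                # XOR is its own inverse

Because the same operation serves in both directions, identical equipment could be used at each end of a link, which is consistent with the paired "send" and "receive" Tunny machines carried by the mobile links described below.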
In 1941 the initial experimental Tunny link was between Berlin and Athens/Salonika. By D-Day in 1944 there were twenty-six links, based on Königsberg and Strausberg. Other links were usually mobile, carried in two trucks, one with radio equipment and one with a "send" Tunny machine and a "receive" Tunny machine. The links carried very high-grade intelligence: messages from Hitler and the High Command to various Army Group commanders in the field. Cryptanalysis of the Lorenz cipher at Bletchley Park, assisted initially by a machine called Heath Robinson and later by the Colossus computers, yielded a great deal of valuable high-level intelligence. Tunny decrypts provided high-grade intelligence of unprecedented quality. Walter Jacobs, a US Army codebreaker who worked at Bletchley Park, wrote in an official report on the operation to break Tunny that in March 1945 alone 'upward of five million letters of current transmission, containing intelligence of the highest order, were deciphered'. === Sturgeon === This was the name given to traffic encoded with the Siemens and Halske T52 Geheimschreiber. In May 1940, after the German invasion of Norway, the Swedish mathematician and cryptographer Arne Beurling used traffic intercepted from telegraph lines that passed through Sweden to break this cipher. Although Bletchley Park eventually diagnosed and broke Sturgeon, the relatively low value of the intelligence gained, compared to the effort involved, meant that they did not read much of its traffic. === Thrasher === This was the name used for traffic enciphered on a Geheimschreiber that was probably the Siemens T43 one-time tape machine. This was used only on a few circuits in the later stages of the war; it was diagnosed at Bletchley Park but considered to be unbreakable. == List of senior staff involved at Bletchley Park == Including both executives and cryptographers on FISH (Tunny) in the Testery. Ralph Tester — linguist and head of the Testery Peter Benenson — codebreaker John Christie — codebreaker Tom Colvill — general manager Peter Edgerley — codebreaker Peter Ericsson — shift-leader, linguist and senior codebreaker Peter Hilton — codebreaker and mathematician Roy Jenkins — codebreaker and later cabinet minister Victor Masters — shift-leader Max Newman — mathematician and codebreaker who later set up the Newmanry Denis Oswald — linguist and senior codebreaker Jerry Roberts — shift-leader, linguist and senior codebreaker John Thompson — codebreaker John Tiltman — senior codebreaker and intelligence officer Alan Turing — mathematician, mainly on Enigma traffic W.T. Tutte — codebreaker and mathematician == See also == Colossus (computer) Heath Robinson (codebreaking machine) TICOM Turingery == Notes == == References ==
Wikipedia/Fish_(cryptography)
The Daily Telegraph, known online and elsewhere as The Telegraph, is a British daily broadsheet conservative newspaper published in London by Telegraph Media Group and distributed in the United Kingdom and internationally. It was founded by Arthur B. Sleigh in 1855 as The Daily Telegraph and Courier. The Telegraph is considered a newspaper of record in the UK. The paper's motto, "Was, is, and will be", was included in its emblem which was used for over a century starting in 1858. In 2013, The Daily Telegraph and The Sunday Telegraph, which started in 1961, were merged, although the latter retains its own editor. It is politically conservative and supports the Conservative Party. It was moderately liberal politically before the late 1870s. The Telegraph has had a number of news scoops, including the outbreak of World War II by rookie reporter Clare Hollingworth, described as "the scoop of the century", the 2009 parliamentary expenses scandal – which led to a number of high-profile political resignations and for which it was named 2009 British Newspaper of the Year – its 2016 undercover investigation on the England football manager Sam Allardyce, and the Lockdown Files in 2023. In May 2025, investment management firm RedBird Capital Partners announced plans to acquire the newspaper's publisher for £500 million (about $674 million). == History == === Founding and early history === The Daily Telegraph and Courier was founded by Colonel Arthur B. Sleigh in June 1855 to air a personal grievance against the future commander-in-chief of the British Army, Prince George, Duke of Cambridge. Joseph Moses Levy, the owner of The Sunday Times, agreed to print the newspaper, and the first edition was published on 29 June 1855. The paper cost 2d and was four pages long. Nevertheless, the first edition stressed the quality and independence of its articles and journalists: "We shall be guided by a high tone of independent action." As the paper was not a success, Sleigh was unable to pay Levy the printing bill. Levy took over the newspaper, his aim being to produce a cheaper newspaper than his main competitors in London, the Daily News and The Morning Post, to expand the size of the overall market. Levy appointed his son, Edward Levy-Lawson, Lord Burnham, and Thornton Leigh Hunt to edit the newspaper. Lord Burnham relaunched the paper as The Daily Telegraph, with the slogan "the largest, best, and cheapest newspaper in the world". Hunt laid out the newspaper's principles in a memorandum sent to Levy: "We should report all striking events in science, so told that the intelligent public can understand what has happened and can see its bearing on our daily life and our future. The same principle should apply to all other events—to fashion, to new inventions, to new methods of conducting business". In 1876, Jules Verne published his novel Michael Strogoff, whose plot takes place during a fictional uprising and war in Siberia. Verne included among the book's characters a war correspondent of The Daily Telegraph, named Harry Blount—who is depicted as an exceptionally dedicated, resourceful and brave journalist, taking great personal risks to follow closely the ongoing war and bring accurate news of it to The Telegraph's readership, ahead of competing papers. === 1901 to 1945 === In 1908, The Daily Telegraph printed an article in the form of an interview with Kaiser Wilhelm II of Germany that damaged Anglo-German relations and added to international tensions in the build-up to World War I. 
In 1928, Harry Lawson Webster Levy-Lawson, 2nd Baron Burnham, the son of the first Baron Burnham, sold the paper to William Berry, 1st Viscount Camrose, in partnership with his brother Gomer Berry, 1st Viscount Kemsley, and Edward Iliffe, 1st Baron Iliffe. In 1937, the newspaper absorbed The Morning Post, which traditionally espoused a conservative position and sold predominantly amongst the retired officer class. Originally William Ewart Berry, 1st Viscount Camrose, bought The Morning Post with the intention of publishing it alongside The Daily Telegraph, but poor sales of the former led him to merge the two. For some years, the paper was retitled The Daily Telegraph and Morning Post before it reverted to just The Daily Telegraph. In the late 1930s, Victor Gordon Lennox, The Telegraph's diplomatic editor, published an anti-appeasement private newspaper, The Whitehall Letter, which received much of its information from leaks by Sir Robert Vansittart, the Permanent Under-Secretary of the Foreign Office, and Rex Leeper, the Foreign Office's Press Secretary. As a result, Gordon Lennox was monitored by MI5. In 1939, The Telegraph published Clare Hollingworth's scoop that Germany was to invade Poland. In November 1940, Fleet Street, with its close proximity to the river and docklands, was subjected to almost daily bombing raids by the Luftwaffe, and The Telegraph started printing in Manchester at Kemsley House (now The Printworks entertainment venue), which was run by Camrose's brother Kemsley. Manchester quite often printed the entire run of The Telegraph when its Fleet Street offices were under threat. The name Kemsley House was changed to Thomson House in 1959. In 1986, printing of Northern editions of the Daily and Sunday Telegraph moved to Trafford Park and in 2008 to Newsprinters at Knowsley, Liverpool. During the Second World War, The Daily Telegraph covertly helped in the recruitment of code-breakers for Bletchley Park. The ability to solve The Telegraph's crossword in under 12 minutes was considered to be a recruitment test. The newspaper was asked to organise a crossword competition, after which each of the successful participants was contacted and asked if they would be prepared to undertake "a particular type of work as a contribution to the war effort". The competition itself was won by F. H. W. Hawes of Dagenham, who finished the crossword in less than eight minutes. === 1946 to 1985 === Both the Camrose (Berry) and Burnham (Levy-Lawson) families remained involved in management until Conrad Black took control in 1986. On the death of his father in 1954, Seymour Berry, 2nd Viscount Camrose, assumed the chairmanship of the Daily Telegraph with his brother Michael Berry, Baron Hartwell, as his editor-in-chief. During this period, the company saw the launch of sister paper The Sunday Telegraph in 1961. === 1986 to 2004 === Canadian businessman Conrad Black, through companies controlled by him, bought the Telegraph Group in 1986. Black, through his holding company Ravelston Corporation, owned 78% of Hollinger Inc., which in turn owned 30% of Hollinger International. Hollinger International in turn owned the Telegraph Group and other publications such as the Chicago Sun-Times, the Jerusalem Post and The Spectator. On 18 January 2004, Black was dismissed as chairman of the Hollinger International board over allegations of financial wrongdoing. Black was also sued by the company. Later that day, it was reported that the Barclay brothers had agreed to purchase Black's 78% interest in Hollinger Inc. 
for £245m, giving them a controlling interest in the company, and to buy out the minority shareholders later. However, a lawsuit was filed by the Hollinger International board to try to block Black from selling his shares in Hollinger Inc. until an investigation into his dealings was completed. Black filed a countersuit but, eventually, United States judge Leo Strine sided with the Hollinger International board and blocked Black from selling his Hollinger Inc. shares to the twins. On 7 March 2004, the twins announced that they were launching another bid, this time just for The Daily Telegraph and its Sunday sister paper rather than all of Hollinger Inc. The then owner of the Daily Express, Richard Desmond, was also interested in purchasing the paper, selling his interest in several pornographic magazines to finance the initiative. Desmond withdrew in March 2004, when the price climbed above £600m, as did Daily Mail and General Trust plc a few months later on 17 June. === Since 2004 === In November 2004, The Telegraph celebrated the tenth anniversary of its website, Electronic Telegraph, now renamed www.telegraph.co.uk. The Electronic Telegraph launched in 1994 with The Daily Telegraph Guide to the Internet by writer Sue Schofield for an annual charge of £180.00. On 8 May 2006, the first stage of a major redesign of the website took place, with a wider page layout and greater prominence for audio, video and journalist blogs. On 10 October 2005, The Daily Telegraph relaunched to incorporate a tabloid sports section and a new standalone business section. The Daily Mail's star columnist and political analyst Simon Heffer left that paper in October 2005 to rejoin The Daily Telegraph, where he has become associate editor. Heffer has written two columns a week for the paper since late October 2005 and is a regular contributor to the news podcast. In November 2005, the first regular podcast service by a newspaper in the UK was launched. Just before Christmas 2005, it was announced that The Telegraph titles would be moving from Canada Place in Canary Wharf to new offices at Victoria Plaza at 111 Buckingham Palace Road near Victoria Station in central London. The new office features a "hub and spoke" layout for the newsroom to produce content for print and online editions. In October 2006, with its relocation to Victoria, the company was renamed the Telegraph Media Group, repositioning itself as a multimedia company. On 2 September 2008, the Daily Telegraph was printed with colour on each page for the first time when it left Westferry for Newsprinters at Broxbourne, Hertfordshire, part of Rupert Murdoch's News International. The paper is also printed in Liverpool and Glasgow by Newsprinters. In May 2009, the daily and Sunday editions published details of MPs' expenses. This led to a number of high-profile resignations from both the ruling Labour administration and the Conservative opposition. In June 2014, The Telegraph was criticised by Private Eye for its policy of replacing experienced journalists and news managers with less-experienced staff and search engine optimisers. On 26 October 2019, the Financial Times reported that the Barclay brothers were about to put the Telegraph Media Group up for sale. The Financial Times also reported that the Daily Mail and General Trust (owner of the Daily Mail, The Mail on Sunday, Metro and Ireland on Sunday) would be interested in buying. The Daily Telegraph supported Liz Truss in the July–September 2022 Conservative Party leadership election. 
In July 2023, it was announced that Lloyds Banking Group had appointed Mike McTighe as chairman of Press Acquisitions Limited and May Corporation Limited in order to spearhead the sale of The Telegraph and The Spectator. ==== Accusation of news coverage influence by advertisers ==== In July 2014, the Daily Telegraph was criticised for carrying links on its website to pro-Kremlin articles supplied by a Russian state-funded publication that downplayed any Russian involvement in the downing of the passenger jet Malaysia Airlines Flight 17. These had featured on its website as part of a commercial deal, but were later removed. As of 2014, the paper was paid £900,000 a year to include the supplement Russia Beyond the Headlines, a publication sponsored by the Rossiyskaya Gazeta, the Russian government's official newspaper. In February 2015, the chief political commentator of the Daily Telegraph, Peter Oborne, resigned. Oborne accused the paper of a "form of fraud on its readers" for its coverage of the bank HSBC in relation to a Swiss tax-dodging scandal that was widely covered by other news media. He alleged that editorial decisions about news content had been heavily influenced by the advertising arm of the newspaper because of commercial interests. Jay Rosen at New York University stated that Oborne's resignation statement was "one of the most important things a journalist has written about journalism lately". Oborne cited other instances of advertising strategy influencing the content of articles, linking the refusal to take an editorial stance on the repression of democratic demonstrations in Hong Kong to the advertising revenue the Telegraph received from China. Additionally, he said that favourable reviews of the Cunard cruise liner Queen Mary II appeared in the Telegraph, noting: "On 10 May last year The Telegraph ran a long feature on Cunard's Queen Mary II liner on the news review page. This episode looked to many like a plug for an advertiser on a page normally dedicated to serious news analysis. I again checked and certainly Telegraph competitors did not view Cunard's liner as a major news story. Cunard is an important Telegraph advertiser." In response, the Telegraph called Oborne's statement an "astonishing and unfounded attack, full of inaccuracy and innuendo". Later that month, Telegraph editor Chris Evans invited journalists at the newspaper to contribute their thoughts on the issue. Press Gazette reported later in 2015 that Oborne had joined the Daily Mail tabloid newspaper and The Telegraph had "issued new guidelines over the way editorial and commercial staff work together". As of January 2017, the Telegraph Media Group had had more complaints upheld against it by the press regulator IPSO than any other UK newspaper. Most of these findings pertained to inaccuracy, as with other UK newspapers. In October 2017, a number of major western news organisations whose coverage had irked Beijing were excluded from the event at which Xi Jinping unveiled the new Politburo. However, the Daily Telegraph had been granted an invitation to the event. In April 2019, Business Insider reported The Telegraph had partnered with Facebook to publish articles "downplaying 'technofears' and praising the company". ==== Premature obituaries ==== The paper published premature obituaries for Cockie Hoogterp, the second wife of Baron Blixen; for Dave Swarbrick in 1999; and for Dorothy Southworth Ritter, the widow of Tex Ritter and mother of John Ritter, in August 2001. 
==== Accusation of antisemitism ==== Editors for both the Daily Telegraph and the Sunday Telegraph have been criticised by Guardian columnist Owen Jones for publishing and authoring articles which promote the antisemitic "Cultural Marxism" conspiracy theory. In 2018, Allister Heath, the editor of the Sunday Telegraph, wrote that "Cultural Marxism is running rampant." Assistant comment editor of the Daily Telegraph Sherelle Jacobs also used the term in 2019. The Daily Telegraph also published a piece by an anonymous civil servant who stated: "There is a strong presence of Anglophobia, combined with cultural Marxism that runs through the civil service." ==== False allegations of Islamic extremism ==== In January 2019, the paper published an article written by Camilla Tominey titled "Police called in after Scout group run from mosque is linked to Islamic extremist and Holocaust denier", in which it was reported that the police were investigating Ahammed Hussain, the leader of the Scout group at the Lewisham Islamic Centre, over alleged links to extremist Muslim groups that promoted terrorism and antisemitism. In January 2020, the paper issued an official apology and accepted that the article contained many falsehoods, and that Hussain had never supported or promoted terrorism, or been antisemitic. The paper paid Hussain damages and costs. In a letter sent to Hussain's lawyers accompanying the text of their published apology, the newspaper's lawyers wrote: "The article was published by our client following receipt of information in good faith from the Scout Association and the Henry Jackson Society; nevertheless our client now accepts that the article (using that expression to refer to both print and online versions) is defamatory of your client and will apologise to him for publishing it." ==== China Watch ==== In 2016, the Hong Kong Free Press reported that The Daily Telegraph was receiving £750,000 annually to carry a supplement called 'China Watch' as part of a commercial deal with Chinese state-run newspaper China Daily. The Guardian reported in 2018 that the China Watch supplement was being carried by The Telegraph along with other newspapers of record such as The New York Times, The Wall Street Journal and Le Figaro. The Telegraph published the supplement once a month in print, and published it online at least until March 2020. In April 2020, The Telegraph removed China Watch from its website, along with another advertisement feature section by Chinese state-run media outlet People's Daily Online. The paper had run many pieces critical of China since the start of the COVID-19 pandemic. ==== COVID-19 misinformation ==== In January 2021, the British press regulator, the Independent Press Standards Organisation, ordered The Daily Telegraph to publish a correction to two "significantly misleading" claims in a comment article published by Toby Young. The July 2020 article, "When we have herd immunity Boris will face a reckoning on this pointless and damaging lockdown," spread COVID-19 misinformation claiming that the common cold provided "natural immunity" to COVID-19 and that London was "probably approaching herd immunity". The regulator said that a correction was appropriate rather than a more serious response due to the level of scientific uncertainty at the time the comment was published. At the time of the ruling, The Telegraph had removed the comment article but had not issued a correction. 
==== Climate change ==== The Telegraph has published multiple columns and news articles which promote pseudoscientific views on climate change, and which misleadingly cast climate change as a matter of active scientific debate despite the scientific consensus on the subject. It has published columns about the "conspiracy behind the Anthropogenic Global Warming myth", described climate scientists as "white-coated prima donnas and narcissists," and claimed that "global warming causes about as much damage as benefits." In 2015, a Telegraph news article incorrectly claimed that scientists had predicted a mini ice age by 2030. The climate change-denying journalist James Delingpole was the first to use the term "Climategate", on his Telegraph blog, for a manufactured controversy in which emails leaked from climate scientists ahead of the Copenhagen climate summit were misleadingly presented to give the appearance that the scientists were engaged in fraud. In 2014, The Telegraph was one of several media titles to give evidence to the House of Commons Select Committee inquiry 'Communicating climate science'. The paper told MPs that it believed climate change was happening and that humans play a role in it. Editors told the committee, "we believe that the climate is changing, that the reason for that change includes human activity, but that human ingenuity and adaptability should not be ignored in favour of economically damaging prescriptions." In November 2023, the journalist and climate activist group DeSmog published its judgments for coverage of environmental topics in 171 of The Telegraph's opinion pieces from April to October 2023. DeSmog stated that of these 171 pieces, 85 per cent were categorized as "anti-green", defined as "attacking climate policy, questioning climate science and ridiculing environmental groups." ==== Owen Paterson ==== The Daily Telegraph, and in particular its columnist and former editor Charles Moore, staunchly supported Owen Paterson, a former MP and minister who resigned after it was found that he had breached advocacy rules by lobbying ministers for fees. A plan to overhaul the Commons standards system and spare Paterson from being suspended, and from the possible recall petition that would have followed, was leaked to the newspaper and "approvingly" splashed across its front page. Boris Johnson flew back from the COP 26 summit in Glasgow to attend a Telegraph journalists' reunion at the Garrick and left the club with Moore the same evening. ==== 2023–2024 takeover bid ==== In June 2023, The Guardian and other newspapers reported that, following a breakdown in discussions relating to a financial dispute, Lloyds Bank was planning to take control of the companies owning the Telegraph titles and the Spectator and sell them off. Representatives of the Barclay family described the reports as "irresponsible". By 20 October, a sale of the publications had been initiated after bankers seized control. Lloyds appointed receivers and started shopping the brands to bidders. By November, it was revealed that a bid had been agreed with RedBird IMI, a joint venture between RedBird Capital Partners and International Media Investments, a firm based in the United Arab Emirates and owned by Sheikh Mansour bin Zayed Al Nahyan. The bid would see the firm take over The Telegraph, while allowing the Barclay family to repay a debt of £1.2 billion to Lloyds Bank. 
Conservative MPs raised national security concerns, and pushed the government to investigate the bid, as the United Arab Emirates had a poor record on freedom of speech. Culture Secretary Lucy Frazer issued a public interest intervention notice on 30 November, preventing the group from taking over without further scrutiny from the media regulator Ofcom over potential breaches of media standards. Conservative MPs also called on Deputy Prime Minister Oliver Dowden to use the National Security and Investment Act 2021 to investigate the Emirati-backed bid. The Spectator's chairman, Andrew Neil, threatened to quit if the sale was approved, saying: "You cannot have a major mainstream newspaper group owned by an undemocratic government or dictatorship where no one has a vote." Fraser Nelson, editor of The Spectator, which would be included in the sale, also opposed the move, saying, "the very reason why a foreign government would want to buy a sensitive asset is the very reason why a national government should be wary of selling them." In March 2024, the Lords voted through a new law imposing restrictions on foreign governments' ownership of British newspapers and magazines, including limiting them to a stake of no more than 0.1 per cent. In April 2024, the UK government effectively banned RedBird IMI from taking over The Telegraph and The Spectator by introducing new laws which prevented foreign governments from owning British newspapers. RedBird also confirmed it would withdraw its takeover plans, saying they were "no longer feasible". In April 2024, RedBird IMI confirmed that it would put The Telegraph up for sale again and begin an open auction. However, the Abu Dhabi-backed fund indicated that it would seek to recoup the £600 million it had spent acquiring the newspaper, or would otherwise retain some involvement. The Telegraph was left in limbo, as staff remained blocked from taking strategic decisions. The owner of The New York Sun, Dovid Efune, emerged as a leading bidder but struggled to complete a takeover of the paper. The Columbia Journalism Review dubbed it "the newspaper auction from hell". On 17 January 2025, David Castelblanco, a partner at the Abu Dhabi fund RedBird, urged The Telegraph to make significant job cuts, including over 100 non-editorial roles. He also advised executives to halt planned editorial investments, which included expansions of the U.S. newsroom. The intervention was seen as likely to raise concerns about foreign interference and to fuel fears of foreign influence over The Telegraph's decision-making. On 19 January, Sir Iain Duncan Smith stated that the UAE should not be allowed to acquire the British newspaper. He also accused the UK government of "foot-dragging" over the process due to fear of upsetting the Emirates, and asked for an explanation about the Digital Markets, Competition and Consumers Act 2024. Sir Ed Davey also called for Culture Secretary Lisa Nandy to set a deadline for The Telegraph's sale, and urged ministers to ensure that the Abu Dhabi fund was "not improperly meddling in the meantime". == Circulation == The Daily Telegraph had a circulation of 270,000 in 1856 and 240,000 in 1863. Circulation was 1,393,094 in 1968, 1,358,875 in 1978, 1,439,000 in 1980, 1,235,000 in 1984 and 1,133,173 in 1988. The paper had a circulation of 363,183 in December 2018, not including bulk sales. Circulation declined further until the paper withdrew from newspaper circulation audits in 2020. 
The bulk of its readership has moved online; the Telegraph Media Group reported a subscription number of 1,035,710 for December 2023, composed of 117,586 for its print edition, 688,012 for its digital version and 230,112 for other subscriptions. == Political stance == The Daily Telegraph supported Whig, and moderate liberal ideas, before the late 1870s. The Daily Telegraph is politically conservative and has endorsed the Conservative Party at every UK general election since 1945. The personal links between the paper's editors and the leadership of the Conservative Party, along with the paper's generally right-wing stance and influence over Conservative activists, have led the paper commonly to be referred to, especially in Private Eye, as the Torygraph. When the Barclay brothers purchased the Telegraph Group for around £665 million in late June 2004, Sir David Barclay suggested that The Daily Telegraph might no longer be the "house newspaper" of the Conservatives in the future. In an interview with The Guardian, he said: "Where the government are right we shall support them." The editorial board endorsed the Conservative Party in the 2005 general election. During the 2014 Scottish independence referendum, the paper supported the Better Together 'No' Campaign. Alex Salmond, the former leader of the SNP, called The Telegraph "extreme" on Question Time in September 2015. In the 2016 United Kingdom European Union membership referendum, it endorsed voting to leave the EU. In December 2015, The Daily Telegraph was fined £30,000 for "sending an unsolicited email to hundreds of thousands of its subscribers, urging them to vote for the Conservatives." During the 2019 Conservative Party leadership election, The Daily Telegraph endorsed their former columnist Boris Johnson. In 2019, former columnist Graham Norton, who had left the paper in late 2018, said "about a year before I left, it took a turn" and criticised it for "toxic" political stances, namely for a piece defending US Supreme Court then-nominee Brett Kavanaugh and for being "a mouthpiece for Boris Johnson" whose columns were allegedly published with "no fact-checking at all". === LGBT+ rights === In 2012, prior to the legalisation of same-sex marriage in the United Kingdom, Telegraph View published an editorial stating that it was a "pointless distraction" as "many [gay couples] already avail themselves of the civil partnerships introduced by Labour". The Telegraph wrote in another editorial that same year that it feared that changing "the law on gay marriage risks inflaming anti-homosexual bigotry". In 2015, the newspaper published an article by former editor Charles Moore claiming a "gay rights sharia" was dictating what the LGBT+ community should believe, following Dolce & Gabbana's openly gay founders criticising gay adoptions. Moore wrote: "If you are gay, Mr Strudwick seemed to assert, there are certain things you must believe. Nothing else is permitted under the gay rights sharia." Moore has previously expressed his views that civil partnerships achieved a "balance" for heterosexual and homosexual couples. In 2013, he wrote: "Respectable people are truly terrified of being thought anti-homosexual. In a way, they are right to be, because attacking people for their personal preferences can be a nasty thing." Also in 2015, The Telegraph published its "Out at Work" list, naming "the top 50 list of LGBT executives". 
Since then, The Telegraph has appeared to shift towards a more liberal attitude on LGBT+ issues, publishing articles arguing that then-Prime Minister Theresa May needed to be "serious about LGBT equality" and that "bathroom bills" in Texas – which were criticised as being transphobic – were "a Kafkaesque state intrusion". The newspaper also featured an article written by Maria Munir about their experience coming out to President Barack Obama as non-binary. Stonewall CEO Ruth Hunt penned an article in The Telegraph after the Orlando nightclub shooting in June 2016 arguing that the attack on a gay nightclub "grew out of everyday homophobia". Also in 2016, Telegraph Executive Director Lord Black was awarded Peer of the Year at the 2016 PinkNews Awards for his campaigning on LGBT rights. The Telegraph has published articles which have been criticised by PinkNews as transphobic. In 2017, the newspaper published an article by Allison Pearson titled "Will our spineless politicians' love affair with LGBT ever end?", arguing that asking NHS patients their sexual orientation was unnecessary, and another in 2018 with the headline "The tyranny of the transgender minority has got to be stopped." == Sister publications == === The Sunday Telegraph === The Daily Telegraph's sister Sunday paper was founded in 1961. The writer Sir Peregrine Worsthorne is probably the best known journalist associated with the title (1961–1997), eventually being editor for three years from 1986. In 1989, the Sunday title was briefly merged into a seven-day operation under Max Hastings's overall control. In 2005, the paper was revamped, with Stella being added to the more traditional television and radio section. It costs £2.20 and includes separate Money, Living, Sport and Business supplements. Circulation of The Sunday Telegraph in July 2010 was 505,214 (ABC). === Young Telegraph === Young Telegraph was a weekly section of The Daily Telegraph published as a 14-page supplement in the weekend edition of the newspaper. Young Telegraph featured a mixture of news, features, cartoon strips and product reviews aimed at 8–12-year-olds. It was edited by Damien Kelleher (1993–1997) and Kitty Melrose (1997–1999). Launched in 1990, the award-winning supplement also ran original serialised stories featuring popular brands such as Young Indiana Jones and the British children's sitcom Maid Marian and Her Merry Men. It featured the cartoon "Mad Gadget" by Chris Winn, and a computer game, Mad Gadget: Lost In Time (1993), and a book, Mad Gadget: Gadget Mad (1995), were produced. In 1995, an interactive spin-off called Electronic Young Telegraph (EYT) was launched on floppy disk. Described as an interactive computer magazine for children, Electronic Young Telegraph was edited by Adam Tanswell, who led the relaunch of the product on CD-Rom in 1998. Electronic Young Telegraph featured original content including interactive quizzes, informative features and computer games, as well as entertainment news and reviews. It was later re-branded as T:Drive in 1999. === Website === Telegraph.co.uk is the online version of the newspaper. It uses the banner title The Telegraph and includes articles from the print editions of The Daily Telegraph and The Sunday Telegraph, as well as web-only content such as breaking news, features, picture galleries and blogs. It was named UK Consumer Website of the Year in 2007 and Digital Publisher of the Year in 2009 by the Association of Online Publishers. The site is overseen by Kate Day, digital director of Telegraph Media Group. 
Other staff include Shane Richmond, head of technology (editorial), and Ian Douglas, head of digital production. From November 2012, international visitors to the Telegraph.co.uk site had to sign up for a subscription package. Visitors had access to 20 free articles a month before having to subscribe for unlimited access. In March 2013, the pay meter system was also rolled out in the UK. The site, which has been the focus of the group's efforts to create an integrated news operation producing content for print and online from the same newsroom, completed a relaunch during 2008 involving the use of the Escenic content management system, popular among northern European and Scandinavian newspaper groups. Telegraph TV is a video on demand service run by The Daily Telegraph and the Sunday Telegraph. It is hosted on The Telegraph's website, telegraph.co.uk. Telegraph.co.uk became the most popular UK newspaper site in April 2008. It was overtaken by Guardian.co.uk in April 2009 and later by "Mail Online". In December 2010, "Telegraph.co.uk" was the third most visited British newspaper website with 1.7 million daily browsers compared to 2.3 million for "Guardian.co.uk" and nearly 3 million for "Mail Online". In October 2023, "Telegraph.co.uk" was the tenth most visited UK newspaper site, with 13.8 million monthly visits, compared to the most popular, the BBC, with 38.3 million. ==== History ==== The website was launched, under the name Electronic Telegraph, at midday on 15 November 1994 at the headquarters of The Daily Telegraph at Canary Wharf in London Docklands, with Ben Rooney as its first editor. It was Europe's first daily web-based newspaper. The modern internet was then still in its infancy, with as few as 10,000 websites estimated to have existed at the time – compared to more than 100 million by 2009. In 1994, only around 1% of the British population (some 600,000 people) had internet access at home, compared to more than 80% in 2009. Initially, the site published only the top stories from the print edition of the newspaper, but it gradually increased its coverage until virtually all of the newspaper was carried online and the website was also publishing original material. The website was hosted on a Sun Microsystems Sparc 20 server and connected via a 64 kbit/s leased line from Demon Internet. An early coup for the site was the publication of articles by Ambrose Evans-Pritchard on Bill Clinton and the Whitewater controversy. The availability of the articles online brought a large American audience to the site. In 1997, the Clinton administration issued a 331-page report that accused Evans-Pritchard of peddling "right-wing inventions". Derek Bishton, who by then had succeeded Rooney as editor, later wrote: "In the days before ET it would have been highly unlikely that anyone in the US would have been aware of Evans-Pritchard's work – and certainly not to the extent that the White House would be forced to issue such a lengthy rebuttal." Bishton, who later became consulting editor for Telegraph Media Group, was followed as editor by Richard Burton, who was made redundant in August 2006. Edward Roussel replaced Burton. ==== My Telegraph ==== My Telegraph offers a platform for readers to have their own blog, save articles, and network with other readers. Launched in May 2007, My Telegraph won a Cross Media Award from international newspaper organisation IFRA in October 2007. 
One of the judges, Robert Cauthorn, described the project as "the best deployment of blogging yet seen in any newspaper anywhere in the world". == Notable stories == In December 2010, Telegraph reporters posing as constituents secretly recorded Business Secretary Vince Cable. In an undisclosed part of the transcript given to the BBC's Robert Peston by a whistleblower unhappy that The Telegraph had not published Cable's comments in full, Cable stated in reference to Rupert Murdoch's News Corporation takeover bid for BSkyB, "I have declared war on Mr Murdoch and I think we are going to win." Following this revelation, Cable had his responsibility for media affairs – including ruling on Murdoch's takeover plans – withdrawn from his role as business secretary. In May 2011, the Press Complaints Commission upheld a complaint regarding The Telegraph's use of subterfuge: "On this occasion, the commission was not convinced that the public interest was such as to justify proportionately this level of subterfuge." In July 2011, a firm of private investigators hired by The Telegraph to track the source of the leak concluded there was a "strong suspicion" that two former Telegraph employees who had moved to News International, one of them Will Lewis, had gained access to the transcript and audio files and leaked them to Peston. === 2009 MP expenses scandal === In May 2009, The Daily Telegraph obtained a full copy of all the expenses claims of British Members of Parliament. The Telegraph began publishing, in instalments from 8 May 2009, certain MPs' expenses. The Telegraph justified publishing the information by contending that the official version due to be released would have omitted key details about the redesignation of second-home nominations. This led to a number of high-profile resignations from both the ruling Labour administration and the Conservative opposition. === 2016 Sam Allardyce investigation === In September 2016, Telegraph reporters posing as businessmen filmed England manager Sam Allardyce offering advice on how to get around FA rules on player third-party ownership and negotiating a £400,000 deal. The investigation saw Allardyce leave his job by mutual consent on 27 September, making the statement "entrapment has won". == Reception and historical value == Denise Bates included The Daily Telegraph in a list of national newspapers which, because of the quality of their reporting or the extent of their audience, stand out and are likely to be used for historical research. The editors of Encyclopaedia Britannica said that The Daily Telegraph has consistently had a "high standard of reporting". The Daily Telegraph was renowned for its foreign correspondents. According to the DNCJ, during the nineteenth century, The Daily Telegraph had excellent coverage of the arts. In 1989, Nicholas and Erbach said that The Daily Telegraph was factually accurate, and that its reputation for accuracy extended outside the country. === Awards === The Daily Telegraph was named National Newspaper of the Year in 2009, 1996 and 1993, while The Sunday Telegraph won the same award in 1999. Its investigation into the 2009 expenses scandal was named the "Scoop of the Year" in 2009, with William Lewis winning "Journalist of the Year". The Telegraph won "Team of the Year" in 2004 for its coverage of the Iraq War. The paper also won "Columnist of the Year" three years running from 2002 to 2004: Zoë Heller (2002), Robert Harris (2003) and Boris Johnson (2004). 
== Charity and fundraising work == In 1979, following a letter in The Daily Telegraph and a Government report highlighting the shortfall in care available for premature babies, Bliss, the special care baby charity, was founded. In 2009, as part of the Bliss 30th birthday celebrations, the charity was chosen as one of four beneficiaries of the newspaper's Christmas Charity Appeal. In February 2010, a cheque was presented to Bliss for £120,000. In 2014, The Telegraph designed a newspaper-themed Paddington Bear statue, one of fifty located around London prior to the release of the film Paddington, which was auctioned to raise funds for the National Society for the Prevention of Cruelty to Children (NSPCC). == Notable people == === Editors === === Notable columnists and journalists === == See also == List of the oldest newspapers History of journalism Newspaper of record == References == == Further reading == Burnham, E. F. L. (1955). Peterborough Court: the story of the Daily Telegraph. Cassell. Hart-Davis, Duff (1991). The house the Berrys built: inside the Telegraph, 1928-1986. Sevenoaks: Coronet. ISBN 9780340553367. Hastings, Max (October 2024). Editor. London: Pan Macmillan. ISBN 9781035057344. A memoir of Hastings' ten years as the paper's editor. Merrill, John C. and Harold A. Fisher. The world's great dailies: profiles of fifty newspapers (1980) pp. 111–16. William Camrose: Giant of Fleet Street by his son Lord Hartwell. Illustrated biography with black-and-white photographic plates and includes an index. Concerns his links with The Daily Telegraph. == External links == Official website
Wikipedia/Daily_Telegraph
The Office of Strategic Services (OSS) was the first intelligence agency of the United States, formed during World War II. The OSS was established as an agency of the Joint Chiefs of Staff (JCS) to coordinate espionage activities behind enemy lines for all branches of the United States Armed Forces. Other OSS functions included the use of propaganda, subversion, and post-war planning. The OSS was dissolved a month after the end of the war. Intelligence tasks were resumed shortly afterwards and carried on by its successors, the Strategic Services Unit (SSU), the Department of State's Bureau of Intelligence and Research (INR), and the Central Intelligence Group (CIG), the intermediary precursor to the independent Central Intelligence Agency (CIA). On December 14, 2016, the organization was collectively honored with a Congressional Gold Medal. == Origin == Before the OSS, the various departments of the executive branch, including the State, Treasury, Navy, and War Departments, conducted American intelligence activities on an ad hoc basis, with no overall direction, coordination, or control. The US Army and US Navy had separate code-breaking departments: Signal Intelligence Service and OP-20-G. (A previous code-breaking operation of the State Department, the MI-8, run by Herbert Yardley, had been shut down in 1929 by Secretary of State Henry Stimson, who deemed it an inappropriate function for the diplomatic arm because "gentlemen don't read each other's mail.") The FBI was responsible for domestic security and anti-espionage operations. President Franklin D. Roosevelt was concerned about American intelligence deficiencies. On the suggestion of William Stephenson, the senior British intelligence officer in the western hemisphere, Roosevelt requested that William J. Donovan draft a plan for an intelligence service based on the British Secret Intelligence Service (MI6) and Special Operations Executive (SOE). Donovan envisioned a single agency responsible for foreign intelligence and special operations involving commandos, disinformation, partisan and guerrilla activities. Donovan worked closely with Australian-born British intelligence officer Charles Howard 'Dick' Ellis, who has been credited with writing the blueprint. Said Ellis: I was soon requested to draft a blueprint for an American intelligence agency, the equivalent of BSC [British Security Co-ordination] and based on these British wartime improvisations... detailed tables of organisation were disclosed to Washington... among these were the organisational tables that led to the birth of General William Donovan's OSS. After submitting his (and Ellis's) work, "Memorandum of Establishment of Service of Strategic Information", Donovan was appointed "Coordinator of Information" on July 11, 1941, heading the new organization known as the Office of the Coordinator of Information (COI). Ellis, described as Donovan's "right-hand man", "effectively ran the organization". Writes Fink: Ellis was sent from New York by William Stephenson "to Washington to open a sub-station to facilitate daily liaison with Donovan, who reciprocated by sending [future Director of Central Intelligence, DCI] Allen Welsh Dulles to liaise with BSC in the Rockefeller Center". According to Thomas F. Troy, paraphrasing Stephenson, Ellis "was the tradecraft expert, the organization man, the one who furnished Bill Donovan with charts and memoranda on running an intelligence organization". 
Donovan had responsibilities but no actual powers and the existing US agencies were skeptical if not hostile to the British. Until some months after Pearl Harbor, the bulk of OSS intelligence came from the UK. British Security Co-ordination (BSC), under the direction of Ellis, trained the first OSS agents in Canada, until training stations were set up in the US with guidance from BSC instructors, who also provided information on how the SOE was arranged and managed. The British immediately made available their short-wave broadcasting capabilities to Europe, Africa, and the Far East and provided equipment for agents until American production was established. Writes Fink: William Casey, who headed up OSS's Europe-based human-intelligence operations, the Secret Intelligence Branch, and went on to become director of the CIA, wrote in his autobiography, The Secret War Against Hitler, that Ellis was not only writing blueprints but involved in on-the-ground, logistical programs: "Dick Ellis, [an] experienced British pro, helped establish training centres, mostly around Washington." United States Assistant Secretary of State Adolf Berle commented: "The really active head of the intelligence section in [William] Donovan's [OSS] group is [Ellis] ... in other words, [Stephenson's] assistant in the British intelligence [sic] is running Donovan's intelligence service." The Office of Strategic Services was established by a Presidential military order issued by President Roosevelt on June 13, 1942, to collect and analyze strategic information required by the Joint Chiefs of Staff and to conduct special operations not assigned to other agencies. During the war, the OSS supplied policymakers with facts and estimates, but the OSS never had jurisdiction over all foreign intelligence activities. The FBI was left responsible for intelligence work in Latin America, and the Army and Navy continued to develop and rely on their own sources of intelligence. Donald Downes, who was developing counterintelligence capabilities in Washington, explained the situation in his memoir: Edgar Hoover was out for Donovan's scalp and any type of co-operation was pretty well one-sided. Not only OSS, but the British Secret Intelligence, many of whose investigations were bound to lead to America, were constantly being hounded by the FBI... A friend of ours in the Department of Justice had warned us that Edgar Hoover believed we were 'penetrating' embassies and that he was annoyed. == Activities == OSS proved especially useful in providing a worldwide overview of the German war effort, its strengths and weaknesses. In direct operations it was successful in supporting Operation Torch in French North Africa in 1942, where it identified pro-Allied potential supporters and located landing sites. OSS operations in neutral countries, especially Stockholm, Sweden, provided in-depth information on German advanced technology. The Madrid station set up agent networks in France that supported the Allied invasion of southern France in 1944. Most famous were the operations in Switzerland run by Allen Dulles that provided extensive information on German strength, air defenses, submarine production, and the V-1 and V-2 weapons. It revealed some of the secret German efforts in chemical and biological warfare. Switzerland's station also supported resistance fighters in France, Austria and Italy, and helped with the surrender of German forces in Italy in 1945. 
For the duration of World War II, the Office of Strategic Services conducted multiple activities and missions, including collecting intelligence by spying, performing acts of sabotage, waging propaganda war, organizing and coordinating anti-Nazi resistance groups in Europe, and providing military training for anti-Japanese guerrilla movements in Asia, among other things. At the height of its influence during World War II, the OSS employed almost 24,000 people. From 1943 to 1945, the OSS played a major role in training Kuomintang troops in China and Burma, and recruited Kachin and other indigenous irregular forces for sabotage as well as guides for Allied forces in Burma fighting the Japanese Army. Among other activities, the OSS helped arm, train, and supply resistance movements in areas occupied by the Axis powers during World War II, including Mao Zedong's Red Army in China (known as the Dixie Mission) and the Viet Minh in French Indochina. OSS officer Archimedes Patti played a central role in OSS operations in French Indochina and met frequently with Ho Chi Minh in 1945. One of the greatest accomplishments of the OSS during World War II was its penetration of Nazi Germany by OSS operatives. The OSS was responsible for training German and Austrian individuals for missions inside Germany. These agents included exiled communists and Socialist party members, labor activists, anti-Nazi prisoners-of-war, and German and Jewish refugees. The OSS also recruited and ran one of the war's most important spies, the German diplomat Fritz Kolbe. From 1943 the OSS was in contact with the Austrian resistance group around Kaplan Heinrich Maier. As a result, plans and the locations of production facilities for V-2 rockets, Tiger tanks and aircraft (Messerschmitt Bf 109, Messerschmitt Me 163 Komet, etc.) were passed on to Allied general staffs to enable Allied bombers to carry out accurate air strikes. Through its contacts with the Semperit factory near Auschwitz, the Maier group provided very early information about the mass murder of Jews. The group was gradually dismantled by the German authorities because of a double agent who worked for both the OSS and the Gestapo. This exposed a transfer of money from the Americans to Vienna via Istanbul and Budapest, and most of the group's members were executed after a People's Court hearing. In 1943, the Office of Strategic Services set up operations in Istanbul. Turkey, as a neutral country during the Second World War, was a place where both the Axis and Allied powers had spy networks. The railroads connecting central Asia with Europe, as well as Turkey's close proximity to the Balkan states, placed it at a crossroads of intelligence gathering. The goal of the OSS Istanbul operation, called Project Net-1, was to infiltrate, and foment subversive action in, the territories of the old Ottoman and Austro-Hungarian Empires. The head of operations at OSS Istanbul was a banker from Chicago named Lanning "Packy" Macfarland, who maintained a cover story as a banker for the American lend-lease program. Macfarland hired Alfred Schwarz, an Austrian businessman (born 25 April 1904 in Prostějov, Austria-Hungary; died 13 August 1988 in Lucerne, Switzerland), who came to be known as "Dogwood" and ended up establishing the Dogwood information chain. Dogwood in turn hired a personal assistant named Walter Arndt and established himself as an employee of the Istanbul Western Electrik Kompani. Through Schwarz and Arndt the OSS was able to infiltrate anti-fascist groups in Austria, Hungary, and Germany. 
Schwarz was able to convince Romanian, Bulgarian, Hungarian, and Swiss diplomatic couriers to smuggle American intelligence information into these territories and establish contact with elements antagonistic to the Nazis and their collaborators. Couriers and agents memorized information and produced analytical reports; when they were not able to memorize effectively, they recorded information on microfilm and hid it in their shoes or hollowed pencils. Through this process, information about the Nazi regime made its way to Macfarland and the OSS in Istanbul and eventually to Washington. While the OSS "Dogwood-chain" produced a lot of information, its reliability was increasingly questioned by British intelligence. By May 1944, through collaboration between the OSS, British intelligence, Cairo, and Washington, the entire Dogwood-chain was found to be unreliable and dangerous. The planting of phony information into the OSS network had been intended to misdirect Allied resources. Schwarz's Dogwood-chain, which was the largest American intelligence-gathering tool in occupied territory, was shortly thereafter shut down. The OSS purchased Soviet code and cipher material (or Finnish information on them) from émigré Finnish army officers in late 1944. Secretary of State Edward Stettinius, Jr., protested that this violated an agreement President Roosevelt made with the Soviet Union not to interfere with Soviet cipher traffic from the United States. General Donovan might have copied the papers before returning them the following January, but there is no record of Arlington Hall receiving them, and CIA and NSA archives have no surviving copies. This codebook was in fact used as part of the Venona decryption effort, which helped uncover large-scale Soviet espionage in North America. RYPE was the codename of the airborne unit that was dropped in the mountains of Snåsa, Norway, on March 24, 1945, to carry out sabotage actions behind enemy lines. From the base at the Gjefsjøen mountain farm, the group conducted successful railroad sabotage operations, with the intention of preventing the withdrawal of German forces from northern Norway. Operasjon Rype was the only U.S. operation on German-occupied Norwegian soil during World War II. The group consisted mainly of Norwegian Americans recruited from the 99th Infantry Battalion. Operasjon Rype was led by William Colby. The OSS sent four teams of two under Captain Stephen Vinciguerra (codename Algonquin; teams Alsace, Poissy, S&S and Student) with Operation Varsity in March 1945 to infiltrate and report from behind enemy lines, but none succeeded. Team S&S had two agents in Wehrmacht uniforms and a captured Kübelwagen, and was to report by radio. But the Kübelwagen was put out of action while in the glider; three tires and the long-range radio were shot up (German gunners were told to attack the gliders, not the tow planes). == Weapons and gadgets == The OSS espionage and sabotage operations produced a steady demand for highly specialized equipment. General Donovan invited experts, organized workshops, and funded labs that later formed the core of the Research & Development Branch. Boston chemist Stanley P. 
Lovell became its first head, and Donovan humorously called him his "Professor Moriarty". Throughout the war years, the OSS Research & Development Branch successfully adapted Allied weapons and espionage equipment, and produced its own line of novel spy tools and gadgets, including silenced pistols, lightweight sub-machine guns, "Beano" grenades that exploded upon impact, explosives disguised as lumps of coal ("Black Joe") or bags of Chinese flour ("Aunt Jemima"), acetone time delay fuses for limpet mines, compasses hidden in uniform buttons, playing cards that concealed maps, a 16mm Kodak camera in the shape of a matchbox, tasteless poison tablets ("K" and "L" pills), and cigarettes laced with tetrahydrocannabinol acetate (an extract of Indian hemp) to induce uncontrollable chattiness. The OSS also developed innovative communication equipment such as wiretap gadgets, electronic beacons for locating agents, and the "Joan-Eleanor" portable radio system that made it possible for operatives on the ground to establish secure contact with a plane that was preparing to land or drop cargo. The Research & Development Branch also printed fake German and Japanese-issued identification cards, and various passes, ration cards, and counterfeit money. On August 28, 1943, Stanley Lovell was asked to make a presentation in front of a hostile Joint Chiefs of Staff, who were skeptical of OSS plans beyond collecting military intelligence and were ready to split the OSS between the Army and the Navy. While explaining the purpose and mission of his department and introducing various gadgets and tools, he reportedly casually dropped into a waste basket a Hedy, a panic-inducing explosive device in the shape of a firecracker, which shortly produced a loud shrieking sound followed by a deafening boom. The presentation was interrupted and did not resume, since everyone in the room fled. In reality, the Hedy, jokingly named after Hollywood movie star Hedy Lamarr for her ability to distract men, later saved the lives of some trapped OSS operatives. Not all projects worked. Some ideas were odd, such as a failed attempt to use insects to spread anthrax in Spain. Stanley Lovell was later quoted saying, "It was my policy to consider any method whatever that might aid the war, however unorthodox or untried". In 1939, a young physician named Christian J. Lambertsen developed an oxygen rebreather set (the Lambertsen Amphibious Respiratory Unit); in 1942, after it had already been rejected by the U.S. Navy, he demonstrated it to the OSS in a pool at the Shoreham Hotel in Washington, D.C. The OSS not only bought into the concept but hired Lambertsen to lead the program and build up the dive element for the organization. His responsibilities included training and developing methods of combining self-contained diving and swimmer delivery, including the Lambertsen Amphibious Respiratory Unit, for the OSS "Operational Swimmer Group". Growing involvement of the OSS with coastal infiltration and water-based sabotage eventually led to the creation of the OSS Maritime Unit. == Headquarters and field offices == The bulk of the OSS, after the expansion out of and away from COI, eventually found itself headquartered at a complex near 23rd Street and E Street in Washington, D.C. This complex was unassuming, appearing to be a mix of normal government offices and apartment buildings to nearby residents and office workers. It is known as the "Navy Hill Complex," "Potomac Hill Complex," and the "E Street Complex." 
The OSS Society and State Department have engaged in efforts with the National Park Service to add the Headquarters complex to the National Register of Historic Places. == Training facilities == At Camp X, near Whitby, Ontario, an "assassination and elimination" training program was operated by the British Special Operations Executive, assigning exceptional masters in the art of knife-wielding combat, such as William E. Fairbairn and Eric A. Sykes, to instruct trainees. Many members of the Office of Strategic Services also were trained there. It was dubbed "the school of mayhem and murder" by George Hunter White who trained at the facility in the 1940's. Beginning in January 1941, Colonel Millard Preston Goodfellow, creator and Director of the Special Operations Branch (at this time still known as SA/G within the COI), negotiated with the National Park Service to obtain three tracts of land to be dedicated as training camps for both SA/G and SA/B. In March, he assigned Garland H. Williams to be the Training Director of these facilities. Commander N.G.A Woolley was loaned to COI by the British Navy and helped Donovan and Goodfellow to organize underwater training and craft landing. From these incipient beginnings, the Office of Strategic Services opened camps in the United States, and finally abroad. Prince William Forest Park (then known as Chopawamsic Recreational Demonstration Area) was the site of an OSS training camp that operated from 1942 to 1945. Area "C", consisting of approximately 6,000 acres (24 km2), was used extensively for communications training, whereas Area "A" was used for training some of the OGs (Operational Groups). Catoctin Mountain Park, now the location of Camp David, was the site of OSS training Area "B" where the first Special Operations, or SO, were trained. Special Operations was modeled after Great Britain's Special Operations Executive, which included parachute, sabotage, self-defense, weapons, and leadership training to support guerrilla or partisan resistance. Considered most mysterious of all was the "cloak and dagger" Secret Intelligence, or SI branch. Secret Intelligence employed "country estates as schools for introducing recruits into the murky world of espionage. Thus, it established Training Areas E and RTU-11 ("the Farm") in spacious manor houses with surrounding horse farms." Morale Operations training included psychological warfare and propaganda. The Congressional Country Club (Area F) in Bethesda, Maryland, was the primary OSS training facility. The Facilities of the Catalina Island Marine Institute at Toyon Bay on Santa Catalina Island, Calif., are composed (in part) of a former OSS survival training camp. The National Park Service commissioned a study of OSS National Park training facilities by Professor John Chambers of Rutgers University. The main OSS training camps abroad were located initially in Great Britain, French Algeria, and Egypt; later as the Allies advanced, a school was established in southern Italy. In the Far East, OSS training facilities were established in India, Ceylon, and then China. The London branch of the OSS, its first overseas facility, was at 70 Grosvenor Street, W1. In addition to training local agents, the overseas OSS schools also provided advanced training and field exercises for graduates of the training camps in the United States and for Americans who enlisted in the OSS in the war zones. The most famous of the latter was Virginia Hall in France. 
The OSS's Mediterranean training center in Cairo, Egypt, known to many as the Spy School, was a lavish palace called Ras el Kanayas, belonging to King Farouk's brother-in-law. It was modeled after the SOE's training facility STS 102 in Haifa, Palestine. Americans whose heritage stemmed from Italy, Yugoslavia, and Greece were trained at the "Spy School" and also sent to STS 102 for parachute, weapons, and commando training as well as Morse code and encryption lessons. After completing their spy training, these agents were sent back on missions to the Balkans and Italy, where their accents would not pose a problem for their assimilation. == Personnel == The names of all 13,000 OSS personnel and documents of their OSS service, previously a closely guarded secret, were released by the US National Archives on August 14, 2008. Among the 24,000 names in the released files were those of Sterling Hayden, Milton Wolff, Carl C. Cable, Julia Child, Ralph Bunche, Arthur Goldberg, Saul K. Padover, Arthur Schlesinger, Jr., Bruce Sundlun, William Colby, René Joyeuse, and John Ford. The 750,000 pages in the 35,000 personnel files include applications of people who were not recruited or hired, as well as the service records of those who served. OSS soldiers were primarily inducted from the United States Armed Forces. Other members included foreign nationals, among them displaced individuals from the former czarist Russia, such as Prince Serge Obolensky. Donovan sought independent thinkers, and to bring together intelligent, quick-witted individuals who could think out of the box, he chose them from all walks of life and backgrounds, without distinction of culture or religion. Donovan was quoted as saying, "I'd rather have a young lieutenant with enough guts to disobey a direct order than a colonel too regimented to think for himself." In a matter of a few short months, he formed an organization that equalled and then rivalled Great Britain's Secret Intelligence Service and its Special Operations Executive. Donovan, inspired by Britain's SOE, assembled an outstanding group of clinical psychologists to carry out evaluations of potential OSS candidates at a variety of sites; primary among these was Station S in Northern Virginia, near where Dulles International Airport now stands. Recent research drawing on the surviving records of the OSS Station S program describes how those characteristics (independent thought, effective intelligence, interpersonal skills) were found among OSS candidates. One such agent was Ivy League polyglot and Jewish American baseball catcher Moe Berg, who played 15 seasons in the major leagues. As a Secret Intelligence agent, he was dispatched to seek information on German physicist Werner Heisenberg and his knowledge of the atomic bomb. One of the most highly decorated and flamboyant OSS soldiers was US Marine Colonel Peter Ortiz. Enlisting early in the war as a French Foreign Legionnaire, he went on to join the OSS and became the most highly decorated US Marine in the OSS during World War II. Julia Child, who later authored cookbooks, worked directly under Donovan. René Joyeuse, M.D., M.S., FACS, was a Swiss, French, and American soldier, physician, and researcher who distinguished himself as an agent of Allied intelligence in German-occupied France during World War II. He received the US Army Distinguished Service Cross for his actions with the OSS; after the war he became a physician and researcher and was a co-founder of the American Trauma Society.
"Jumping Joe" Savoldi (code name Sampson) was recruited by the OSS in 1942 because of his hand-to-hand combat and language skills as well as his deep knowledge of the Italian geography and Benito Mussolini's compound. He was assigned to the Special Operations Branch and took part in missions in North Africa, Italy, and France during 1943–1945. One of the forefathers of today's commandos was Navy Lieutenant Jack Taylor. He was sequestered by the OSS early in the war and had a long career behind enemy lines. Taro and Mitsu Yashima, both Japanese political dissidents who were imprisoned in Japan for protesting its militarist regime, worked for the OSS in psychological warfare against the Japanese Empire. Nisei linguists In late 1943, a representative from OSS visited the 442nd Infantry Regiment looking to recruit volunteers willing to undertake "extremely hazardous assignment." All selected were Nisei. The recruits were assigned to OSS Detachments 101 and 202, in the China-Burma-India Theater. "Once deployed, they were to interrogate prisoners, translate documents, monitor radio communications, and conduct covert operations... Detachment 101 and 102's clandestine operations were extremely successful." == Dissolution into other agencies == On September 20, 1945, President Harry S. Truman signed Executive Order 9621, terminating the OSS. Due to administrative error, the order only allowed the agency ten days to close. The State Department took over the Research and Analysis Branch (R&A); it became the Bureau of Intelligence and Research, The War Department took over the Secret Intelligence (SI) and Counter-Espionage (X-2) Branches, which were then housed in the new Strategic Services Unit (SSU). Brigadier General John Magruder (formerly Donovan's Deputy Director for Intelligence in OSS) became the new SSU director. He oversaw the liquidation of the OSS and managed the institutional preservation of its clandestine intelligence capability. In January 1946, President Truman created the Central Intelligence Group (CIG), which was the direct precursor to the CIA. SSU assets, which now constituted a streamlined "nucleus" of clandestine intelligence, were transferred to the CIG in mid-1946 and reconstituted as the Office of Special Operations (OSO). The National Security Act of 1947 established the Central Intelligence Agency, which then took up some OSS functions. The direct descendant of the paramilitary component of the OSS is the CIA Special Activities Division. Today, the joint-branch United States Special Operations Command, founded in 1987, uses the same spearhead design on its insignia, as homage to its indirect lineage. The Defense Intelligence Agency currently manages the OSS' mandate to provide strategic military intelligence to the Joint Chiefs of Staff and the Secretary of Defense and to coordinate human espionage activities across the United States Armed Forces (through the Defense Clandestine Service) and was awarded status as an OSS Heritage organization by the OSS Society. == Branches == == Detachments == US Army units attached to the OSS == See also == Charles Douglas Jackson Dick Ellis Operation Jedburgh Operation Paperclip Paramarines Special Forces (United States Army) X-2 Counter Espionage Branch History of espionage Art Looting Investigation Unit (ALIU) Millard Preston Goodfellow William J. Donovan The Pond Garland H. Williams == Citations == == General and cited references == Paulson, Alan (1995). "Required reading: OSS Weapons". Fighting Firearms. 3 (2): 20–21, 80–81. 
Brunner, John (1991). OSS Crossbows. Phillips Publications. ISBN 0932572154.
Brunner, John (2005). OSS Weapons II. Phillips Publications. ISBN 978-0932572431.
== Further reading ==
== External links ==
"The Office of Strategic Services: America's First Intelligence Agency"
National Park Service Report on OSS Training Facilities
Collection of Documents at the Franklin D. Roosevelt Presidential Museum and Library, Part 1 and Part 2
The OSS Society
OSS Reborn
Works by Office of Strategic Services at Project Gutenberg
Office of Strategic Services collection at Internet Archive
Works by or about Office of Strategic Services at the Internet Archive
Works by Office of Strategic Services at LibriVox (public domain audiobooks)