The BOttle MAnnequin ABsorber (BOMAB) phantom was developed by Bush in 1949 (Bush 1949) and has since been accepted in North America as the industry standard (ANSI 1995) for calibrating whole body counting systems. The phantom consists of 10 polyethylene bottles, either circular or elliptical cylinders, that represent the head, neck, chest, abdomen, thighs, calves, and arms. Each section is filled with an aqueous radioactive solution whose activity is proportional to the volume of that section, simulating a homogeneous distribution of material throughout the body. The solution is also acidified and contains a stable-element carrier so that the radioactivity does not plate out on the container walls. Because the phantom contains a known amount of radioactivity, it can be used to calibrate a whole body counter by relating the observed response to that known activity. As different radioactive materials emit gamma photons of different energies, the calibration has to be repeated to cover the expected energy range, usually 120 to 2,000 keV. Examples of radioactive isotopes used for efficiency calibration include 57Co, 60Co, 88Y, 137Cs and 152Eu. Although the phantom was designed to be used lying down, it can be used in any orientation.
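To make the calibration relation concrete, here is a minimal Python sketch of converting a measured response into a body burden using phantom-derived efficiencies; the activities, count rates, and energies are illustrative assumptions, not values from the standard.

```python
# Minimal sketch of an efficiency calibration with a BOMAB-style phantom.
# All numbers are illustrative assumptions, not measured values.

# Known activities (Bq) loaded into the phantom and net count rates (cps)
# observed by the whole body counter at each calibration energy (keV).
calibration = {
    122:  {"activity_bq": 50_000, "net_cps": 145.0},   # e.g. 57Co
    662:  {"activity_bq": 40_000, "net_cps": 310.0},   # e.g. 137Cs
    1332: {"activity_bq": 30_000, "net_cps": 180.0},   # e.g. 60Co
}

# Counting efficiency at each energy: observed response per unit activity.
efficiency = {e: d["net_cps"] / d["activity_bq"] for e, d in calibration.items()}

def activity_from_count_rate(net_cps: float, energy_kev: int) -> float:
    """Estimate a body burden (Bq) from a measured net count rate."""
    return net_cps / efficiency[energy_kev]

print(activity_from_count_rate(3.1, 662))  # unknown subject, 137Cs line -> 400 Bq
```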
== Other uses ==
Performance testing: BOMAB phantoms are sometimes used by performance-testing organizations to test operating assay facilities. Phantoms containing known quantities of radioactive material are sent to assay facilities as blind samples.
Design characteristics: Phantoms can be used to evaluate the relative effect of size, shape and positioning on the performance of in vivo measurement equipment.
Background: A water-filled BOMAB is often used to estimate the (blank) background for in vivo assay systems.
Detection limits: A BOMAB filled with approximately 140 g of potassium, the nominal content of a 70 kg man (with its naturally occurring 40K activity), is sometimes used to estimate the detection sensitivity of in vivo personnel counting systems.
== See also ==
Computational human phantom
Imaging phantom
== References ==
Bush F. The integral dose received from a uniformly distributed radioactive isotope. British Journal of Radiology 22:96-102; 1949.
Health Physics Society. Specifications for the Bottle Manikin Absorber Phantom. An American National Standard. New York: American National Standards Institute; ANSI/HPS N13.35; 1995.
CuproBraze is a copper-alloy heat exchanger technology for high-temperature, high-pressure environments such as those in modern diesel engines. The technology, developed by the International Copper Association (ICA), is licensed for free to heat exchanger manufacturers around the world. Applications for CuproBraze include charge air coolers, radiators, oil coolers, climate control systems, and heat transfer cores. CuproBraze is suited to charge air coolers and radiators in heavy industry, where machinery must operate for long periods under harsh conditions without failures. The technology is intended for off-road vehicles, trucks, buses, industrial engines, generators, locomotives, and military equipment. It is also used for light trucks, SUVs and passenger cars with special needs. CuproBraze provides new materials for heat exchanger parts that were previously made of soldered copper/brass plate fin, soldered copper/brass serpentine fin, or brazed aluminum serpentine fin, suiting them to more demanding applications. Aluminum heat exchangers are viable and economical for cars, light trucks, and other light-duty applications. However, they are not well suited to environments characterized by high operating temperatures, humidity, vibration, salty corrosive air, and air pollution. In these environments, the additional tensile strength, durability, and corrosion resistance that CuproBraze technology provides are useful. The CuproBraze technology uses brazing instead of soldering to join copper and brass radiator components. The heat exchangers are made with anneal-resistant copper and brass alloys. The tubes are fabricated from brass strip and coated with a brazing filler in the form of a powder-based paste, or an amorphous brazing foil is laid between the tube and fin. Another method coats the tube in-line on the tube mill, using the twin wire-arc spray process in which the wire is the braze alloy, deposited on the tube as it is being manufactured at 200-400 fpm. This saves the later process step of coating the tube.
The coated tubes, along with copper fins and brass headers and side supports, are fitted together into a core assembly, which is brazed in a furnace. The technology enables brazed serpentine fins to be used in copper-brass heat exchanger designs. The benefits include tougher joints. == Performance properties == CuproBraze performs better than alternative materials in several respects. === Thermal performance === The ability to withstand elevated temperatures is essential in high-heat applications. Aluminum alloys are challenged at higher temperatures because of their lower melting points: the yield strength of aluminum is compromised above 200 °C, and problems with fatigue cracking are exacerbated at elevated temperatures. CuproBraze heat exchangers can operate correctly at temperatures of 290 °C and above. Anneal-resistant copper and brass strip ensure that radiator cores maintain their strength without softening, despite exposure to high brazing temperatures. === Heat transfer efficiency === Cooling efficiency is a measure of heat rejection from a given space by a heat exchanger. The overall thermal efficiency of a heat exchanger core depends on many factors, such as the thermal conductivity of fins and tubes; the strength and weight of the fins and tubes; the spacing, size, thickness and shape of fins and tubes; and the velocity of the air passing through the core. The main performance criterion for heat exchangers is cooling efficiency. Heat exchanger cores made from copper and brass can reject more heat per unit volume than any other material, which is why copper-brass heat exchangers generally have greater cooling efficiency than alternative materials. Brazed copper-brass heat exchangers are also more rugged than soldered copper-brass and alternative materials, including brazed aluminum serpentine designs.
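As a rough illustration of the role of fin conductivity in heat rejection, the following sketch evaluates the textbook adiabatic-tip fin equation for copper and aluminum fins of identical geometry; the dimensions, convection coefficient, and conductivity values are illustrative assumptions, not CuproBraze design data.

```python
# Minimal sketch comparing single-fin heat rejection for copper vs aluminum,
# using the standard adiabatic-tip straight-fin model. All dimensions and
# coefficients are illustrative assumptions, not CuproBraze design data.
import math

def fin_heat_rate(k, h=100.0, t=0.05e-3, w=10e-3, L=8e-3, dT=60.0):
    """Heat rate (W) of a thin rectangular fin: q = sqrt(h*P*k*Ac)*dT*tanh(m*L)."""
    Ac = t * w                  # cross-sectional area (m^2)
    P = 2 * (t + w)             # perimeter exposed to the air stream (m)
    m = math.sqrt(h * P / (k * Ac))
    return math.sqrt(h * P * k * Ac) * dT * math.tanh(m * L)

for name, k in [("copper alloy", 370.0), ("aluminum alloy", 180.0)]:
    print(f"{name:15s} k={k:5.0f} W/m.K  q/fin={fin_heat_rate(k):.2f} W")
```

For the same geometry, the higher-conductivity fin rejects more heat, which is also why thinner gauges can be tolerated at a given duty.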
Air pressure drop is another factor in heat exchanger design. A heat exchanger core with a smaller air pressure drop from the front to the back of the core (i.e., from the windward to the leeward side in a wind tunnel test) is more efficient. Air pressure drops are typically 24% less for CuproBraze than for aluminum heat exchangers. This advantage, responsible for a 6% increase in heat rejection, contributes to CuproBraze's overall greater efficiency. Since copper's thermal conductivity is higher than aluminum's, copper has a higher capacity to dissipate heat. By using thinner material gauges in combination with higher fin density, heat dissipation capacity with CuproBraze can be increased while keeping air pressure drops at reasonable levels. === Size === Due to its high heat transfer efficiency, CuproBraze can achieve the same heat rejection with a smaller core. Hence, a significant reduction in frontal area and volume is achievable with CuproBraze versus other materials. === Strength and durability === Three new alloys were developed to enhance the strength and durability of CuproBraze heat exchangers: 1) an anneal-resistant fin material that maintains its strength after brazing; 2) an anneal-resistant tube alloy that retains its fine grain structure after brazing and provides ductility and fatigue strength in the brazed heat exchanger core; and 3) the brazing alloy. Brazing at 650 °C creates a joint that is stronger than a soldered joint and comparable in strength to a welded joint. Unlike welding, brazing does not melt the base metals, so it is better suited to joining dissimilar alloys. CuproBraze has more strength at elevated temperatures than soldered copper-brass or aluminum. Because copper expands less thermally than aluminum, there is less thermal stress both during the manufacturing of CuproBraze and in its use as a heat exchanger.
CuproBraze heat exchangers have stronger tube-to-header joints than those of other constructions. These braze joints are critical in heat exchangers and must be leak-free. CuproBraze also tolerates higher internal pressures because its thin-gauge, high-strength materials provide stronger support for the tubes. The material is also less sensitive to poor-quality coolants than aluminum heat exchangers are. Test results demonstrate a much longer fatigue life for CuproBraze joints compared with similar soldered copper-brass or brazed aluminum joints. Stronger joints allow the use of thinner fins and new radiator and cooler designs. The copper fins are not easily bent when dirty radiators are washed with high-pressure water. Anticorrosive coatings further improve strength and resistance against humidity, sand erosion, and stone impingement on copper fins. For further information, see CuproBraze: Durability and Reliability (technology series) and CuproBraze Durability (design criteria series). === Emissions === New legislation in Europe, Japan and the U.S. calls for strong reductions in NOx and particulate emissions from diesel engines used in trucks, buses, power plants, and other heavy equipment. These goals can in part be met by using cleaner-performing turbocharged diesel engines with charge air coolers. Turbocharging enables higher power output, and charge air coolers allow that power to be produced more efficiently by reducing the temperature of the air charge entering the engine, thereby increasing its density. The charge air cooler, located between the turbocharger and the engine air inlet manifold, is an air-to-air heat exchanger. It reduces the inlet air temperature of a turbocharged diesel engine from 200 °C to 45 °C while increasing inlet air density to raise engine efficiency. Even higher inlet temperatures (246 °C or higher) and boost pressures may be necessary to comply with future emissions standards. Present-day charge air cooler systems based on aluminum alloys experience durability problems at the temperatures and pressures necessary to meet the U.S. Tier 4i standards for stationary and mobile engines.
Published reports estimate that the average life of an aluminum charge air cooler is currently about 3,500 hours. Aluminum is near its upper technological limit for accommodating higher temperatures and thermal stress levels, because the tensile strength of the metal declines rapidly at 150 °C and repetitive thermal cycling between 150 °C and 200 °C substantially weakens it. Thermal cycling creates weak spots in aluminum tubes, which in turn cause charge air coolers to fail. A potential option is to install stainless steel precoolers ahead of aluminum charge air coolers, but limited space and the complexity of this solution weigh against it. A CuproBraze charge air cooler can operate at temperatures as high as 290 °C without creep, fatigue, or other metallurgical problems. === Corrosion resistance === Exterior corrosion resistance in a heat exchanger is especially important in coastal areas, humid areas, polluted areas, and mining operations. The corrosion mechanisms of copper and aluminum alloys differ. The CuproBraze tube alloy contains 85% copper, which provides strong resistance against dezincification and stress corrosion cracking. Copper alloys tend to corrode uniformly over entire surfaces at known rates, and this predictability of copper corrosion is important for proper maintenance management. Aluminum, on the other hand, is more likely to corrode locally by pitting, eventually resulting in holes. In accelerated corrosion tests, such as SWAAT for salt spray and marine conditions, CuproBraze performed better than aluminum. The corrosion resistance of CuproBraze is also generally better than that of soft-soldered heat exchangers, because the materials in CuproBraze heat exchangers are of equal nobility, so galvanic differences are minimized. On soft-soldered heat exchangers, the solder is less noble than the fin and tube materials and can suffer galvanic attack in corrosive environments.
=== Repairability === CuproBraze can be repaired with little complexity, an advantage that is important in remote areas where spare parts may be limited. CuproBraze can be repaired with lead-free soft solder (for example, 97% tin, 3% copper) or with common silver-containing brazing alloys. === Antimicrobial === Biofouling is often a problem in HVAC systems that operate in warm, dark, and humid environments. The antimicrobial properties of CuproBraze alloys help eliminate foul odors, thereby improving indoor air quality. CuproBraze is being investigated for mobile air conditioning units as a solution to bad odors from fungus and bacteria in aluminum-based heat exchange systems. == Uses == Russian OEMs, such as Kamaz and Ural Automotive Plant, use CuproBraze radiators and charge air coolers in heavy-duty trucks for off-highway and on-highway applications. Other manufacturers include UAZ and GAZ (Russia) and MAZ (Belarus). The Finnish Radiator Manufacturing Company, also known as FinnRadiator, produces 95% of its radiators and charge air coolers with CuproBraze for OEM manufacturers of off-road construction equipment. Nakamura Jico Co., Ltd. (Japan) manufactures CuproBraze heat exchangers for construction equipment, locomotives and on-highway trucks. Young Touchstone supplies CuproBraze radiators for MotivePower's diesel-powered commuter train locomotives in North America. Siemens AG Transportation Systems plans to use the technology in its Asia Runner locomotive for South Vietnam and other Asian markets. Bombardier Transportation uses CuproBraze heat exchangers to cool transformer oil in electric locomotives; these large oil coolers have been used successfully in coal trains for South African Railways. Kohler Power Systems Americas, one of the largest users of diesel engines for power generation, adopted CuproBraze for diesel engine turbocharger air-to-air cooling in its "gen sets". == See also == Copper in heat exchangers == Further reading ==
Palmqvist U., Liljedahl M. and Falkenö A., 2007. Copper and its Properties for HVAC Systems; SAE Technical Paper Series 2007-01-1385; https://web.archive.org/web/20121023013350/http://store.sae.org/
Falkenö A., 2006. Environmentally Driven Development of New Heat Exchanger Materials; SAE Technical Paper Series 2006-01-0727; https://web.archive.org/web/20121023013350/http://store.sae.org/
Falkenö A., Tapper L., Ainali M. and Gustafsson B., 2003. The Influence of the Brazing Parameters on the Quality of the Heat-Exchanger Made by the CuproBraze Process; SAE Technical Paper Series 2003-04-0037; https://web.archive.org/web/20121023013350/http://store.sae.org/
Tapper L. and Ainali M., 2001. Interactions between the materials in the tube-fin-joints in brazed copper–brass heat exchangers; SAE Technical Paper Series 2001-01-1726.
Ainali M., Korpinen T. and Forsén O., 2001. External Corrosion Resistance of CuproBraze Radiators; SAE Technical Paper Series 2001-01-1718; https://web.archive.org/web/20121023013350/http://store.sae.org/
Korpinen T., 2001. Electrochemical Tests with Copper/Brass Radiator Tube Materials in Coolants; SAE Technical Paper Series 2001-01-1754; https://web.archive.org/web/20121023013350/http://store.sae.org/
Gustafsson B. and Scheel J., 2000. CuproBraze Mobile Heat Exchanger Technology; SAE Technical Paper Series 2000-01-3456; https://web.archive.org/web/20121023013350/http://store.sae.org/
== References ==
Ground truth is information that is known to be real or true, provided by direct observation and measurement (i.e. empirical evidence) as opposed to information provided by inference. == Etymology == The Oxford English Dictionary (s.v. ground truth) records the use of the word Groundtruth in the sense of 'fundamental truth' in Henry Ellison's poem "The Siberian Exile's Tale", published in 1833. == Usage == The term "ground truth" can be used as a noun, an adjective, and a verb.
Noun: "ground truth" (no hyphen). Example: "The ground truth is essential for training accurate models."
Adjective: "ground-truth" (hyphenated compound adjective). Example: "We need to use ground-truth data to validate the model."
Verb: "to ground-truth" or "to groundtruth" (compound verb). Example: "We need to ground-truth the results to ensure their accuracy."
== Statistics and machine learning == "Ground truth" may be seen as a conceptual term relative to the knowledge of the truth concerning a specific question: it is the ideal expected result. Ground truth is used in statistical models to prove or disprove research hypotheses, and the term "ground truthing" refers to the process of gathering the proper objective (provable) data for such a test. Compare with gold standard. For example, suppose we are testing a stereo vision system to see how well it can estimate 3D positions. The "ground truth" might be the positions given by a laser rangefinder, which is known to be much more accurate than the camera system. Bayesian spam filtering is a common example of supervised learning, in which the algorithm is manually taught the differences between spam and non-spam. This depends on the ground truth of the messages used to train the algorithm; inaccuracies in the ground truth will correlate with inaccuracies in the resulting spam/non-spam verdicts.
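A minimal sketch of one facet of this dependence: a filter's verdicts can only be learned from, and judged against, reference labels, so errors in those labels directly distort the measured accuracy. All numbers here are invented for illustration.

```python
# Minimal sketch of how errors in ground-truth labels distort what we can
# conclude about a spam filter. A filter that is right 95% of the time is
# scored against reference labels of which a growing fraction are wrong.
import random

random.seed(0)
N = 10_000
truth = [random.random() < 0.5 for _ in range(N)]               # real spam status
verdicts = [t if random.random() < 0.95 else not t for t in truth]  # filter output

for noise in (0.0, 0.05, 0.2):
    # Reference ("ground truth") labels, a fraction of which are flipped.
    reference = [t if random.random() > noise else not t for t in truth]
    measured = sum(v == r for v, r in zip(verdicts, reference)) / N
    print(f"label error {noise:4.0%}: measured accuracy {measured:.3f}")

# True accuracy is 0.95; with noisy ground truth the expected measurement
# falls to 0.95*(1 - noise) + 0.05*noise.
```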
== Remote sensing == In remote sensing, "ground truth" refers to information collected at the imaged location. Ground truth allows image data to be related to real features and materials on the ground. The collection of ground truth data enables calibration of remote-sensing data and aids in the interpretation and analysis of what is being sensed. Examples include cartography, meteorology, analysis of aerial photographs, satellite imagery and other techniques in which data are gathered at a distance. More specifically, ground truth may refer to a process in which "pixels" on a satellite image are compared to what is imaged (at the time of capture) in order to verify the contents of the "pixels" in the image (noting that the concept of a "pixel" is imaging-system-dependent). In the case of a classified image, supervised classification can help determine the accuracy of the classification performed by the remote sensing system, which can minimize classification error. Ground truthing is usually done on site, correlating what is known with surface observations and measurements of various properties of the features of the ground resolution cells under study in the remotely sensed digital image. The process also involves taking geographic coordinates of the ground resolution cell with GPS technology and comparing them with the coordinates of the "pixel" being studied, provided by the remote sensing software, to understand and analyze the location errors and how they may affect a particular study, as in the sketch below.
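A minimal sketch of that location-error check, computing the great-circle distance between a GPS fix taken at a ground resolution cell and the coordinates the remote sensing software assigns to the corresponding pixel; both coordinate pairs are illustrative assumptions.

```python
# Minimal sketch of a ground-truth location-error check: compare a field GPS
# fix against the coordinates reported for the studied pixel. The coordinate
# pairs are illustrative assumptions.
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6_371_000.0):
    """Great-circle distance in meters between two lat/lon points (spherical Earth)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

gps_fix = (43.87870, -103.45901)       # field measurement (ground truth)
pixel_center = (43.87878, -103.45890)  # location assigned to the pixel

print(f"location error: {haversine_m(*gps_fix, *pixel_center):.1f} m")
```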
Ground truth is important in the initial supervised classification of an image. When the identity and location of land cover types are known through a combination of field work, maps, and personal experience, these areas are known as training sites. The spectral characteristics of these areas are used to train the remote sensing software, using decision rules to classify the rest of the image. These decision rules, such as Maximum Likelihood Classification, Parallelepiped Classification, and Minimum Distance Classification, offer different techniques to classify an image. Additional ground truth sites allow the analyst to establish an error matrix that validates the accuracy of the classification method used. Different classification methods may have different percentages of error for a given classification project, so it is important to choose a method that works well with the number of classes used while producing the least error. Ground truth also helps with atmospheric correction: since satellite images have to pass through the atmosphere, they can be distorted by atmospheric absorption, and ground truth helps to fully identify objects in the photos. === Errors of commission === An error of commission occurs when a pixel reports the presence of a feature (such as a tree) that, in reality, is absent. Ground truthing ensures that the error matrices have a higher accuracy percentage than would be the case if no pixels were ground-truthed. This value is the complement of the user's accuracy, i.e. Commission Error = 1 - user's accuracy. === Errors of omission === An error of omission occurs when pixels of a certain type, for example maple trees, are not classified as maple trees. Ground-truthing helps to ensure that pixels are classified correctly and that the error matrices are more accurate. This value is the complement of the producer's accuracy, i.e. Omission Error = 1 - producer's accuracy.
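These quantities can be made concrete with a small error matrix. In this minimal sketch the class names and pixel counts are invented: rows hold the ground-truth class of ground-truthed pixels and columns hold the class assigned by the classifier.

```python
# Minimal sketch of an error (confusion) matrix built from ground-truthed
# pixels, with the accuracy measures defined above. Counts are illustrative.
classes = ["maple", "pine", "water"]

# matrix[i][j] = pixels whose ground truth is classes[i], labelled classes[j].
matrix = [
    [43, 5, 2],
    [4, 38, 1],
    [0, 2, 55],
]

total = sum(sum(row) for row in matrix)
correct = sum(matrix[i][i] for i in range(len(classes)))
print(f"overall accuracy: {correct / total:.2%}")

for i, name in enumerate(classes):
    row_sum = sum(matrix[i])                                   # ground-truth pixels
    col_sum = sum(matrix[r][i] for r in range(len(classes)))   # pixels labelled so
    producers = matrix[i][i] / row_sum                         # producer's accuracy
    users = matrix[i][i] / col_sum                             # user's accuracy
    print(f"{name:6s} omission error: {1 - producers:.2%}  "
          f"commission error: {1 - users:.2%}")
```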
== Geographical information systems == In GIS, spatial data are modeled either as fields (as in remote sensing raster images) or as objects (as in vector map representations). They are modeled from the real world (also called geographical reality), typically by a cartographic process. Geographic information and positioning systems such as GIS, GPS, and GNSS have become so widespread that the term "ground truth" has taken on special meaning in that context. If the location coordinates returned by a location method such as GPS are an estimate of a location, then the "ground truth" is the actual location on Earth. A smart phone might return a set of estimated location coordinates such as 43.87870, -103.45901. The ground truth being estimated by those coordinates is the tip of George Washington's nose on Mount Rushmore. The accuracy of the estimate is the maximum distance between the location coordinates and the ground truth. We could say in this case that the estimate's accuracy is 10 meters, meaning that the point on Earth represented by the location coordinates is thought to be within 10 meters of George's nose, the ground truth. Informally, the coordinates indicate where we think George Washington's nose is located, and the ground truth is where it really is. In practice, a smart phone or hand-held GPS unit is routinely able to estimate the ground truth to within 6–10 meters, while specialized instruments can reduce GPS measurement error to under a centimeter. == Military usage == US military slang uses "ground truth" to refer to the facts comprising a tactical situation, as opposed to intelligence reports, mission plans, and other descriptions reflecting the conative or policy-based projections of the military-industrial complex. The term appears in the title of the Iraq War documentary film The Ground Truth (2006), and also in military publications; for example, Stars and Stripes wrote: "Stripes decided to figure out what the ground truth was in Iraq." == See also == Baseline (science) Calibration Foundationalism == References == == External links == Forestry Organization Remote Sensing Technology Project (includes an example of an error matrix)
Sanger sequencing is a method of DNA sequencing that involves electrophoresis and is based on the random incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication. After first being developed by Frederick Sanger and colleagues in 1977, it became the most widely used sequencing method for approximately 40 years. An automated instrument using slab gel electrophoresis and fluorescent labels was first commercialized by Applied Biosystems in March 1987. Later, automated slab gels were replaced with automated capillary array electrophoresis. More recently, high-volume Sanger sequencing has been replaced by next-generation sequencing methods, especially for large-scale, automated genome analyses. However, the Sanger method remains in wide use for smaller-scale projects and for validation of deep sequencing results. It still has the advantage over short-read sequencing technologies (such as Illumina) that it can produce DNA sequence reads of more than 500 nucleotides while maintaining a very low error rate, with accuracies around 99.99%. Sanger sequencing is still actively used in public health initiatives, such as sequencing the spike protein gene of SARS-CoV-2 and the surveillance of norovirus outbreaks through the United States Centers for Disease Control and Prevention (CDC)'s CaliciNet surveillance network. == Method == The classical chain-termination method requires a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleotide triphosphates (dNTPs), and modified dideoxynucleotide triphosphates (ddNTPs), the latter of which terminate DNA strand elongation. These chain-terminating nucleotides lack the 3'-OH group required for the formation of a phosphodiester bond between two nucleotides, causing DNA polymerase to cease extension of DNA when a ddNTP is incorporated. The ddNTPs may be radioactively or fluorescently labelled for detection in automated sequencing machines. The DNA sample is divided into four separate sequencing reactions, each containing all four of the standard deoxynucleotides (dATP, dGTP, dCTP and dTTP) and the DNA polymerase.
To each reaction only one of the four dideoxynucleotides (ddATP, ddGTP, ddCTP, or ddTTP) is added, while the other added nucleotides are ordinary ones. The deoxynucleotide concentration should be approximately 100-fold higher than that of the corresponding dideoxynucleotide (e.g. 0.5 mM dTTP : 0.005 mM ddTTP) to allow enough fragments to be produced while still transcribing the complete sequence; the ddNTP concentration also depends on the desired length of sequence. Following rounds of template DNA extension from the bound primer, the resulting DNA fragments are heat denatured and separated by size using gel electrophoresis. (In the original publication of 1977, the formation of base-paired loops of ssDNA was a cause of serious difficulty in resolving bands at some locations.) Separation is frequently performed using a denaturing polyacrylamide-urea gel with each of the four reactions run in one of four individual lanes (lanes A, T, G, C). The DNA bands may then be visualized by autoradiography or UV light, and the DNA sequence can be read directly off the X-ray film or gel image. On such an autoradiogram, the dark bands correspond to DNA fragments of different lengths; a dark band in a lane indicates a DNA fragment that results from chain termination after incorporation of a dideoxynucleotide (ddATP, ddGTP, ddCTP, or ddTTP). The relative positions of the different bands among the four lanes, read from bottom to top, then give the DNA sequence.
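A minimal simulation of the four-reaction scheme and the bottom-to-top read-out just described; the template sequence, number of template copies, and termination probability are invented for illustration.

```python
# Minimal sketch of the classical four-reaction chain-termination scheme:
# fragments terminate wherever a ddNTP is incorporated, and the sequence is
# read by ordering fragment lengths across the four lanes.
import random

random.seed(1)
TEMPLATE = "GATTACAGGC"   # bases to be read on the newly synthesized strand
DD_RATIO = 0.01           # chance of ddNTP incorporation at each matching base

def run_reaction(ddntp):
    """Return the fragment lengths produced in the reaction spiked with one ddNTP."""
    fragments = set()
    for _ in range(2000):               # many template copies extend in parallel
        for pos, base in enumerate(TEMPLATE, start=1):
            if base == ddntp and random.random() < DD_RATIO:
                fragments.add(pos)      # chain terminated at this length
                break
    return fragments

lanes = {b: run_reaction(b) for b in "ACGT"}

# Read the gel: for each fragment length (bottom of gel = shortest fragment),
# the lane it appears in names the base at that position.
read = "".join(base for length in range(1, len(TEMPLATE) + 1)
               for base, frags in lanes.items() if length in frags)
print(read)  # with enough template copies, reproduces TEMPLATE
```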
Technical variations of chain-termination sequencing include tagging with nucleotides containing radioactive phosphorus for radiolabelling, or using a primer labeled at the 5' end with a fluorescent dye. Dye-primer sequencing facilitates reading in an optical system, making analysis faster and more economical and enabling automation. The later development by Leroy Hood and coworkers of fluorescently labeled ddNTPs and primers set the stage for automated, high-throughput DNA sequencing. Chain-termination methods have greatly simplified DNA sequencing. For example, chain-termination-based kits are commercially available that contain the reagents needed for sequencing, pre-aliquoted and ready to use. Limitations include non-specific binding of the primer to the DNA, affecting accurate read-out of the DNA sequence, and DNA secondary structures affecting the fidelity of the sequence. === Dye-terminator sequencing === Dye-terminator sequencing utilizes labelling of the chain-terminator ddNTPs, which permits sequencing in a single reaction rather than the four reactions of the labelled-primer method. In dye-terminator sequencing, each of the four dideoxynucleotide chain terminators is labelled with a fluorescent dye that emits light at a different wavelength. Owing to its greater expediency and speed, dye-terminator sequencing is now the mainstay of automated sequencing. Its limitations include dye effects due to differences in the incorporation of the dye-labelled chain terminators into the DNA fragment, resulting in unequal peak heights and shapes in the electronic DNA sequence trace electropherogram (a type of chromatogram) after capillary electrophoresis. This problem has been addressed with the use of modified DNA polymerase enzyme systems and dyes that minimize incorporation variability, as well as methods for eliminating "dye blobs". The dye-terminator sequencing method, along with automated high-throughput DNA sequence analyzers, was used for the vast majority of sequencing projects until the introduction of next-generation sequencing. === Automation and sample preparation === Automated DNA-sequencing instruments (DNA sequencers) can sequence up to 384 DNA samples in a single batch, and batch runs may occur up to 24 times a day. DNA sequencers separate strands by size (or length) using capillary electrophoresis; they detect and record dye fluorescence, and output the data as fluorescent peak trace chromatograms.
Sequencing reactions (thermocycling and labelling), cleanup, and re-suspension of samples in a buffer solution are performed separately before the samples are loaded onto the sequencer. A number of commercial and non-commercial software packages can trim low-quality DNA traces automatically. These programs score the quality of each peak and remove low-quality base peaks, which are generally located at the ends of the sequence. The accuracy of such algorithms is inferior to visual examination by a human operator, but it is adequate for automated processing of large sequence data sets. === Applications of dye-terminator sequencing === Public health laboratories support patient diagnostics as well as environmental surveillance of potentially toxic substances and circulating biological pathogens. Public health laboratories (PHLs) and other laboratories around the world played a pivotal role in providing rapid sequencing data for the surveillance of the virus SARS-CoV-2, the causative agent of COVID-19, during the pandemic, which was declared a public health emergency on January 30, 2020. Laboratories were tasked with the rapid implementation of sequencing methods and asked to provide accurate data to assist in decision-making models for the development of policies to mitigate the spread of the virus. Many laboratories turned to next-generation sequencing methodologies, while others supported these efforts with Sanger sequencing. Although most laboratories implemented whole-genome sequencing of the virus, others opted to sequence only specific genes, such as the S-gene, which encodes the spike protein. The high mutation rate of SARS-CoV-2 leads to genetic differences within the S-gene, and these differences have played a role in the infectivity of the virus. Sanger sequencing of the S-gene provides a quick, accurate, and more affordable way of retrieving the genetic code.
Laboratories in lower-income countries may not have the capability to implement expensive applications such as next-generation sequencing, so Sanger methods may prevail in supporting the generation of sequencing data for the surveillance of variants. Sanger sequencing is also the "gold standard" of the norovirus surveillance methods used by the Centers for Disease Control and Prevention's (CDC) CaliciNet network. CaliciNet is an outbreak surveillance network established in March 2009. The goal of the network is to collect sequencing data on circulating noroviruses in the United States and to trigger downstream action to determine the source of infection and mitigate the spread of the virus. The CaliciNet network has identified many infections as foodborne illnesses, and these data can be published and used to develop recommendations for preventing food contamination. The methods employed for detection of norovirus involve targeted amplification of specific areas of the genome. The amplicons are then sequenced using dye-terminator Sanger sequencing, and the chromatograms and sequences generated are analyzed with a software package developed in BioNumerics. Sequences are tracked and strain relatedness is studied to infer epidemiological relevance. === Challenges === Common challenges of DNA sequencing with the Sanger method include poor quality in the first 15–40 bases of the sequence due to primer binding, and deteriorating quality of sequencing traces after 700–900 bases. Base-calling software such as Phred typically provides an estimate of quality to aid in trimming the low-quality regions of sequences.
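A minimal sketch of such quality trimming, assuming invented per-base quality scores and a simple sliding-window rule rather than the actual Phred algorithm.

```python
# Minimal sketch of automatic quality trimming of a Sanger read: drop
# low-quality base calls at the read ends using per-base quality scores.
# The scores, threshold, and window size are illustrative assumptions.

def trim_read(seq, quals, threshold=20, window=5):
    """Keep the span covered by sliding windows whose minimum quality >= threshold."""
    ok = [min(quals[i:i + window]) >= threshold
          for i in range(len(seq) - window + 1)]
    if not any(ok):
        return ""
    start = ok.index(True)
    end = len(ok) - 1 - ok[::-1].index(True) + window  # end of last good window
    return seq[start:end]

seq = "NNACGTACGTTAGCACGTNNNN"
quals = [2, 3, 30, 31, 28, 33, 35, 30, 29, 31, 30,
         28, 27, 30, 31, 25, 24, 22, 4, 3, 2, 2]
print(trim_read(seq, quals))  # low-quality leading and trailing bases removed
```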
In cases where DNA fragments are cloned before sequencing, the resulting sequence may contain parts of the cloning vector. In contrast, PCR-based cloning and next-generation sequencing technologies based on pyrosequencing often avoid the use of cloning vectors. Recently, one-step Sanger sequencing methods (combined amplification and sequencing) such as Ampliseq and SeqSharp have been developed that allow rapid sequencing of target genes without cloning or prior amplification. Current methods can directly sequence only relatively short (300–1000 nucleotide) DNA fragments in a single reaction. The main obstacle to sequencing DNA fragments above this size limit is insufficient separation power for resolving large DNA fragments that differ in length by only one nucleotide. == Microfluidic Sanger sequencing == Microfluidic Sanger sequencing is a lab-on-a-chip application for DNA sequencing in which the Sanger sequencing steps (thermal cycling, sample purification, and capillary electrophoresis) are integrated on a wafer-scale chip using nanoliter-scale sample volumes. This technology generates long and accurate sequence reads while obviating many of the significant shortcomings of the conventional Sanger method (e.g. high consumption of expensive reagents, reliance on expensive equipment, personnel-intensive manipulations) by integrating and automating the Sanger sequencing steps. In its modern inception, high-throughput genome sequencing involves fragmenting the genome into small single-stranded pieces, followed by amplification of the fragments by polymerase chain reaction (PCR). Adopting the Sanger method, each DNA fragment is irreversibly terminated with the incorporation of a fluorescently labeled dideoxy chain-terminating nucleotide, producing a DNA “ladder” of fragments that each differ in length by one base and bear a base-specific fluorescent label at the terminal base. Amplified base ladders are then separated by capillary array electrophoresis (CAE) with automated, in situ “finish-line” detection of the fluorescently labeled ssDNA fragments, which provides an ordered sequence of the fragments. These sequence reads are then computer-assembled into overlapping or contiguous sequences (termed "contigs") which resemble the full genomic sequence once fully assembled. Sanger methods achieve maximum read lengths of approximately 800 bp (typically 500–600 bp with non-enriched DNA). The longer read lengths of Sanger methods offer significant advantages over other sequencing methods, especially for sequencing repetitive regions of the genome.
Short-read sequence data are particularly problematic for sequencing new genomes (de novo) and for sequencing highly rearranged genome segments, typically those seen in cancer genomes or in regions of chromosomes that exhibit structural variation. === Applications of microfluidic sequencing technologies === Other useful applications of DNA sequencing include single nucleotide polymorphism (SNP) detection, single-strand conformation polymorphism (SSCP) heteroduplex analysis, and short tandem repeat (STR) analysis. Resolving DNA fragments according to differences in size and/or conformation is the most critical step in studying these features of the genome. === Device design === The sequencing chip has a four-layer construction, consisting of three 100-mm-diameter glass wafers (on which device elements are microfabricated) and a polydimethylsiloxane (PDMS) membrane. Reaction chambers and capillary electrophoresis channels are etched between the top two glass wafers, which are thermally bonded. Three-dimensional channel interconnections and microvalves are formed by the PDMS and the bottom manifold glass wafer. The device consists of three functional units, each corresponding to one of the Sanger sequencing steps. The thermal cycling (TC) unit is a 250-nanoliter reaction chamber with an integrated resistive temperature detector, microvalves, and a surface heater. Movement of reagent between the top all-glass layer and the lower glass-PDMS layer occurs through 500-μm-diameter via-holes. After thermal cycling, the reaction mixture undergoes purification in the capture/purification chamber and is then injected into the capillary electrophoresis (CE) chamber. The CE unit consists of a 30-cm capillary folded into a compact switchback pattern via 65-μm-wide turns. === Sequencing chemistry ===
Thermal cycling: In the TC reaction chamber, dye-terminator sequencing reagent, template DNA, and primers are loaded and thermal-cycled for 35 cycles (95 °C for 12 seconds and 60 °C for 55 seconds).
Purification: The charged reaction mixture (containing extension fragments, template DNA, and excess sequencing reagent) is conducted through a capture/purification chamber at 30 °C via a 33-V/cm electric field applied between the capture outlet and inlet ports. The capture gel through which the sample is driven consists of 40 μM oligonucleotide (complementary to the primers) covalently bound to a polyacrylamide matrix. Extension fragments are immobilized by the gel matrix, while excess primer, template, free nucleotides, and salts are eluted through the capture waste port. The capture gel is then heated to 67–75 °C to release the extension fragments.
Capillary electrophoresis: Extension fragments are injected into the CE chamber, where they are electrophoresed through a 125–167 V/cm field. === Platforms === The Apollo 100 platform (Microchip Biotechnologies Inc., Dublin, California) integrates the first two Sanger sequencing steps (thermal cycling and purification) in a fully automated system. The manufacturer claims that samples are ready for capillary electrophoresis within three hours of the sample and reagents being loaded into the system, and the platform requires only sub-microliter volumes of reagents. === Comparisons to other sequencing techniques === The ultimate goal of high-throughput sequencing is to develop systems that are low-cost and extremely efficient at obtaining extended (longer) read lengths. Longer read lengths in each single electrophoretic separation substantially reduce the cost associated with de novo DNA sequencing and the number of templates needed to sequence DNA contigs at a given redundancy. Microfluidics may allow for faster, cheaper and easier sequence assembly. == See also == Maxam–Gilbert sequencing Second-generation sequencing Third-generation sequencing == References == == Further reading ==
Dewey FE, Pan S, Wheeler MT, Quake SR, Ashley EA (February 2012). "DNA sequencing: clinical applications of new DNA sequencing technologies". Circulation. 125 (7): 931–944. doi:10.1161/CIRCULATIONAHA.110.972828. PMC 3364518. PMID 22354974.
Sanger F, Coulson AR, Barrell BG, Smith AJ, Roe BA (October 1980). "Cloning in single-stranded bacteriophage as an aid to rapid DNA sequencing". Journal of Molecular Biology. 143 (2): 161–178. doi:10.1016/0022-2836(80)90196-5. PMID 6260957.
== External links == MBI Says New Tool That Automates Sanger Sample Prep Cuts Reagent and Labor Costs
A cloned enzyme donor immunoassay (CEDIA) is a competitive homogeneous enzyme immunoassay. The assay makes use of two fragments of an enzyme, each of which is individually inactive; under the right conditions in solution, the fragments spontaneously reassemble into the active enzyme. For use in biochemical assays, one of the enzyme fragments is attached to the analyte of interest. This analyte-enzyme-fragment conjugate can still reassemble with the other enzyme fragment to form active enzyme, but it cannot do so if the attached analyte is bound by an antibody. To determine the quantity of analyte in a sample, an aliquot of the sample is added to a solution containing the enzyme-fragment-analyte conjugate, the other enzyme fragment, antibody directed against the analyte, and substrate for the enzyme reaction. The analyte in the sample competes with the enzyme-fragment-analyte conjugate for the antibody. A high concentration of analyte in the sample leaves relatively little of the conjugate blocked by antibody, so more active enzyme forms and enzyme activity is high. Conversely, a low concentration of analyte in the sample leaves relatively much of the conjugate blocked by antibody, so less active enzyme forms and enzyme activity is low.
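The competition can be illustrated with a minimal stoichiometric sketch; the amounts, the equal-affinity assumption, and the simple linear partitioning are invented simplifications, not a model of any particular commercial assay.

```python
# Minimal sketch of the CEDIA dose-response logic: sample analyte competes
# with the enzyme-fragment-analyte conjugate for a fixed amount of antibody,
# so more analyte leaves more conjugate free to form active enzyme.
# All amounts are in arbitrary illustrative units.

ANTIBODY = 50.0    # antibody binding sites available
CONJUGATE = 30.0   # enzyme-fragment-analyte conjugate added to every tube

def enzyme_activity(analyte):
    """Relative activity, assuming antibody binds analyte and conjugate equally."""
    bound_total = min(ANTIBODY, analyte + CONJUGATE)  # antibody may be in excess
    # Antibody distributes between analyte and conjugate in proportion to amount.
    bound_conjugate = bound_total * CONJUGATE / (analyte + CONJUGATE)
    free_conjugate = CONJUGATE - bound_conjugate
    return free_conjugate / CONJUGATE  # fraction able to form active enzyme

for analyte in (0, 10, 50, 200):
    print(f"analyte {analyte:3d} -> relative enzyme activity "
          f"{enzyme_activity(analyte):.2f}")
```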
In molecular biology, a consensus site is a site on a protein that is often modified in a particular way. Modifications may include N- or O-linked glycosylation, phosphorylation, tyrosine sulfation, or others.
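For instance, the classic consensus site for N-linked glycosylation is the sequon Asn-X-Ser/Thr, where X is any residue except proline. A minimal sketch of scanning a protein sequence for it (the sequence itself is invented):

```python
# Minimal sketch of scanning a protein sequence for the N-linked
# glycosylation consensus site N-X-S/T (X = any residue except proline).
import re

SEQUON = re.compile(r"N[^P][ST]")

protein = "MKANLSWTTNPSGQNVTAL"  # illustrative sequence; note NPS is excluded
for m in SEQUON.finditer(protein):
    print(f"possible N-glycosylation site at position {m.start() + 1}: {m.group()}")
```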
Oncogenomics is a sub-field of genomics that characterizes cancer-associated genes, focusing on genomic, epigenomic and transcriptomic alterations in cancer. Cancer is a genetic disease caused by the accumulation of DNA mutations and epigenetic alterations leading to unrestrained cell proliferation and neoplasm formation. The goal of oncogenomics is to identify new oncogenes or tumor suppressor genes that may provide new insights into cancer diagnosis, predicting the clinical outcome of cancers, and new targets for cancer therapies. The success of targeted cancer therapies such as Gleevec, Herceptin and Avastin raised the hope that oncogenomics could elucidate new targets for cancer treatment. Besides understanding the underlying genetic mechanisms that initiate or drive cancer progression, oncogenomics targets personalized cancer treatment. Cancer develops through DNA mutations and epigenetic alterations that accumulate randomly; identifying and targeting the mutations in an individual patient may lead to increased treatment efficacy. The completion of the Human Genome Project facilitated the field of oncogenomics and increased the ability of researchers to find oncogenes. Sequencing technologies and global methylation profiling techniques have been applied to the study of oncogenomics. == History == The genomics era began in the 1990s, with the generation of DNA sequences of many organisms. In the 21st century, the completion of the Human Genome Project enabled the study of functional genomics and the examination of tumor genomes. Cancer is a main focus. The epigenomics era began more recently, around 2000. One major source of epigenetic change is altered methylation of CpG islands at the promoter regions of genes (see DNA methylation in cancer). A number of recently devised methods can assess the DNA methylation status in cancers versus normal tissues. Some methods assess the methylation of CpGs located in different classes of loci, including CpG islands, shores, and shelves, as well as promoters, gene bodies, and intergenic regions. Cancer is also a major focus of epigenetic studies.
Access to whole cancer genome sequencing is important to cancer (or cancer genome) research because:
Mutations are the immediate cause of cancer and define the tumor phenotype.
Access to cancerous and normal tissue samples from the same patient, and the fact that most cancer mutations represent somatic events, allow the identification of cancer-specific mutations.
Cancer mutations are cumulative and sometimes related to disease stage. Metastasis and drug resistance are distinguishable.
Access to methylation profiling is important to cancer research because:
Epi-drivers, along with Mut-drivers, can act as immediate causes of cancers.
Cancer epimutations are cumulative and sometimes related to disease stage.
=== Whole genome sequencing === The first cancer genome was sequenced in 2008. This study sequenced a typical acute myeloid leukaemia (AML) genome and its normal counterpart genome obtained from the same patient. The comparison revealed ten mutated genes. Two were already thought to contribute to tumor progression: an internal tandem duplication of the FLT3 receptor tyrosine kinase gene, which activates kinase signaling and is associated with a poor prognosis, and a four-base insertion in exon 12 of the NPM1 gene (NPMc). These mutations are found in 25–30% of AML tumors and are thought to contribute to disease progression rather than to cause it directly. The remaining eight were new mutations, all single-base changes. Four were in gene families strongly associated with cancer pathogenesis (PTPRT, CDH24, PCLKC and SLC15A1). The other four had no previous association with cancer pathogenesis, but had potential functions in metabolic pathways suggesting mechanisms by which they could act to promote cancer (KNDC1, GPR124, EB12, GRINC1B). These genes are involved in pathways known to contribute to cancer pathogenesis, but before this study most would not have been candidates for targeted gene therapy.
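A minimal sketch of the tumor-versus-normal comparison underlying such studies, treating variant calls as set elements so that candidate somatic mutations are those absent from the matched normal genome; the variants shown are invented for illustration.

```python
# Minimal sketch of identifying candidate somatic mutations by comparing
# variants called in a tumor genome against the patient's matched normal
# genome. The variant tuples are illustrative assumptions.

# (chromosome, position, reference base, observed base)
normal_variants = {
    ("chr1", 1_014_143, "C", "T"),    # inherited (germline) variant
    ("chr7", 55_181_378, "G", "A"),
}
tumor_variants = {
    ("chr1", 1_014_143, "C", "T"),    # germline variant also seen in tumor
    ("chr7", 55_181_378, "G", "A"),
    ("chr13", 28_018_505, "T", "C"),  # appears only in the tumor
}

# Somatic candidates are variants present in the tumor but not the normal.
somatic = tumor_variants - normal_variants
for chrom, pos, ref, alt in sorted(somatic):
    print(f"{chrom}:{pos} {ref}>{alt} (candidate somatic mutation)")
```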
This analysis validated the approach of whole cancer genome sequencing for identifying somatic mutations, and the importance of sequencing normal and tumor cell genomes in parallel. In 2011, the genome of an exceptional bladder cancer patient whose tumor had been eliminated by the drug everolimus was sequenced, revealing mutations in two genes, TSC1 and NF2. The mutations dysregulated mTOR, the protein inhibited by everolimus, allowing the tumor cells to proliferate without limit. As a result, in 2015 the Exceptional Responders Initiative was created at the National Cancer Institute. The initiative allows exceptional patients (those who have responded positively for at least six months to a cancer drug that usually fails) to have their genomes sequenced to identify the relevant mutations. Once identified, other patients can be screened for those mutations and then be given the drug. To that end, a nationwide cancer drug trial began in 2015, involving up to twenty-four hundred centers. Patients with appropriate mutations are matched with one of more than forty drugs. In 2014, the Center for Molecular Oncology rolled out the MSK-IMPACT test, a screening tool that looks for mutations in 341 cancer-associated genes. By 2015, more than five thousand patients had been screened. Patients with appropriate mutations are eligible to enroll in clinical trials that provide targeted therapy. == Technologies == Genomics technologies include: === Genome sequencing ===
DNA sequencing: Pyrosequencing-based sequencers offer a relatively low-cost method to generate sequence data.
Array comparative genome hybridization: Measures DNA copy-number differences between normal and cancer genomes, using the fluorescence intensity from fluorescently labeled samples that are hybridized to known probes on a microarray.
Representational oligonucleotide microarray analysis: Detects copy number variation using amplified restriction-digested genomic fragments that are hybridized to human oligonucleotides, achieving a resolution between 30 and 35 kb.
Digital karyotyping: Detects copy number variation using genomic tags obtained via restriction enzyme digests. The tags are linked into ditags, concatenated, cloned, sequenced and mapped back to the reference genome to evaluate tag density.
Bacterial artificial chromosome (BAC)-end sequencing (end-sequence profiling): Identifies chromosomal breakpoints by generating a BAC library from a cancer genome and sequencing the clone ends. BAC clones that contain chromosome aberrations have end sequences that do not map to a similar region of the reference genome, thus identifying a chromosomal breakpoint.
=== Transcriptomes ===
Microarrays: Assess transcript abundance. They are useful for classification and prognosis, raise the possibility of differential treatment approaches, and aid the identification of mutations in protein-coding regions. The relative abundance of alternative transcripts has become an important feature of cancer research; particular alternative transcript forms correlate with specific cancer types.
RNA-Seq
=== Bioinformatics and functional analysis of oncogenes === Bioinformatics technologies allow the statistical analysis of genomic data. The functional characteristics of many oncogenes have yet to be established. Potential functions include their transformational capabilities relating to tumour formation and specific roles at each stage of cancer development. After the detection of somatic cancer mutations across a cohort of cancer samples, bioinformatic computational analyses can be carried out to identify likely functional and likely driver mutations. Three main approaches are routinely used for this identification: mapping mutations, assessing the effect of a mutation on the function of a protein or a regulatory element, and finding signs of positive selection across a cohort of tumors. The approaches are not necessarily sequential; however, there are important relationships of precedence between elements from the different approaches, and different tools are used at each step. === Operomics === Operomics aims to integrate genomics, transcriptomics and proteomics to understand the molecular mechanisms that underlie cancer development.
== Comparative oncogenomics == Comparative oncogenomics uses cross-species comparisons to identify oncogenes. This research involves studying cancer genomes, transcriptomes and proteomes in model organisms such as mice, identifying potential oncogenes, and referring back to human cancer samples to see whether homologues of these oncogenes are important in causing human cancers. Genetic alterations in mouse models are similar to those found in human cancers. These models are generated by methods including retroviral insertion mutagenesis and graft transplantation of cancerous cells. == Source of cancer driver mutations, cancer mutagenesis == Mutations provide the raw material for natural selection in evolution and can be caused by errors of DNA replication, the action of exogenous mutagens, or endogenous DNA damage. The machinery of replication and genome maintenance can itself be damaged by mutations, or altered by physiological conditions and differential levels of expression in cancer. As pointed out by Gao et al., the stability and integrity of the human genome are maintained by the DNA-damage response (DDR) system. Unrepaired DNA damage is a major cause of mutations that drive carcinogenesis. If DNA repair is deficient, DNA damage tends to accumulate. Such excess DNA damage can increase mutational errors during DNA replication due to error-prone translesion synthesis, and can also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations can give rise to cancer. DDR genes are often repressed in human cancer by epigenetic mechanisms. Such repression may involve DNA methylation of promoter regions or repression of DDR genes by a microRNA. Epigenetic repression of DDR genes occurs more frequently than gene mutation in many types of cancer (see Cancer epigenetics), so epigenetic repression often plays a more important role than mutation in reducing the expression of DDR genes. This reduced expression of DDR genes is likely an important driver of carcinogenesis.
Nucleotide sequence context influences mutation probability, and analysis of mutational (mutable) DNA motifs can be essential for understanding the mechanisms of mutagenesis in cancer. Such motifs represent the fingerprints of interactions between DNA and mutagens, and between DNA and repair, replication, or modification enzymes. Examples of motifs are the AID motif WRCY/RGYW (W = A or T, R = purine and Y = pyrimidine) with C to T/G/A mutations, and the error-prone DNA pol η-attributed AID-related mutations (A to G/C/G) in WA/TW motifs. Another (agnostic) way to analyze the observed mutational spectra and the DNA sequence context of mutations in tumors involves pooling all mutations of different types and contexts from cancer samples into a discrete distribution. If multiple cancer samples are available, their context-dependent mutations can be represented as a non-negative matrix. This matrix can be further decomposed into components (mutational signatures), each of which ideally describes an individual mutagenic factor. Several computational methods have been proposed for solving this decomposition problem. The first implementation of the non-negative matrix factorization (NMF) method is available in the Sanger Institute Mutational Signature Framework in the form of a MATLAB package. If only mutations from a single tumor sample are available, the DeconstructSigs R package and the MutaGene server can identify the contributions of different mutational signatures for that single sample. In addition, the MutaGene server provides mutagen- or cancer-specific mutational background models and signatures that can be applied to calculate the expected DNA and protein site mutability, decoupling the relative contributions of mutagenesis and selection in carcinogenesis.
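A minimal sketch of the decomposition step using scikit-learn's NMF implementation, with a small random count matrix standing in for a real mutation catalog (production frameworks add rank selection, bootstrapping, and signature matching):

```python
# Minimal sketch of decomposing a catalog of context-dependent mutation
# counts into mutational signatures with non-negative matrix factorization.
# The count matrix is a tiny random stand-in for real data (rows: tumor
# samples; columns: 96 mutation types in trinucleotide context, e.g. A[C>T]G).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
counts = rng.poisson(lam=5.0, size=(20, 96)).astype(float)  # samples x contexts

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
exposures = model.fit_transform(counts)   # per-sample signature activities
signatures = model.components_            # per-signature context profiles

# Normalize each signature so its 96 context probabilities sum to one.
signatures /= signatures.sum(axis=1, keepdims=True)
print(exposures.shape, signatures.shape)  # (20, 3) (3, 96)
```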
== Synthetic lethality == Synthetic lethality arises when a combination of deficiencies in the expression of two or more genes leads to cell death, whereas a deficiency in only one of these genes does not. The deficiencies can arise through mutations, epigenetic alterations, or inhibitors of one of the genes. The therapeutic potential of synthetic lethality as an efficacious anti-cancer strategy is continually improving. The applicability of synthetic lethality to targeted cancer therapy has grown with the work of scientists including Ronald A. DePinho and colleagues on what is termed 'collateral lethality'. Muller et al. found that passenger genes with chromosomal proximity to tumor suppressor genes are collaterally deleted in some cancers. Thus, identifying collaterally deleted redundant genes that carry out essential cellular functions may provide an untapped reservoir of targets for a synthetic lethality approach. Collateral lethality therefore holds great potential for the identification of novel and selective therapeutic targets in oncology. In 2012, Muller et al. showed that homozygous deletion of the redundant essential glycolytic gene ENO1 in human glioblastoma (GBM) is the consequence of its proximity to 1p36 tumor suppressor locus deletions and may hold potential for a synthetic lethality approach to GBM inhibition. ENO1 is one of three homologous genes (with ENO2 and ENO3) that encode mammalian enolase enzymes. ENO2, which encodes enolase 2, is mostly expressed in neural tissues, leading to the postulation that in ENO1-deleted GBM, ENO2 may be the ideal target as the redundant homologue of ENO1. Muller found that both genetic and pharmacological ENO2 inhibition in GBM cells with homozygous ENO1 deletion elicits a synthetic lethality outcome through the selective killing of GBM cells. In 2016, Muller and colleagues discovered the antibiotic SF2312 as a highly potent nanomolar-range enolase inhibitor that preferentially inhibits glioma cell proliferation and glycolytic flux in ENO1-deleted cells. SF2312 was shown to be more efficacious than the pan-enolase inhibitor PhAH and to have more specificity for ENO2 over ENO1. Subsequent work by the same team showed that the same approach could be applied to pancreatic cancer, in which homozygous deletion of SMAD4 results in the collateral deletion of mitochondrial malic enzyme 2 (ME2), an oxidative decarboxylase essential for redox homeostasis.
deletion of mitochondrial malic enzyme 2 (ME2), an oxidative decarboxylase essential for redox homeostasis. Dey et al. show that ME2 genomic deletion in pancreatic ductal adenocarcinoma cells results in high endogenous reactive oxygen species, consistent with KRAS-driven pancreatic cancer, and essentially primes ME2-null cells for synthetic lethality by depletion of the redundant NAD(P)+-dependent isoform ME3. The effects of ME3 depletion were found to be mediated by inhibition of de novo nucleotide synthesis resulting from AMPK activation and mitochondrial ROS-mediated apoptosis. Meanwhile, Oike et al. demonstrated the generalizability of the concept by targeting redundant essential genes in processes other than metabolism, namely the SMARCA4 and SMARCA2 subunits in the chromatin-remodeling SWI/SNF complex. Some oncogenes are essential for survival of all cells (not only cancer cells). Thus, drugs that knock out these oncogenes (and thereby kill cancer cells) may also damage normal cells, inducing significant illness. However, other genes may be essential to cancer cells but not to healthy cells. Treatments based on the principle of synthetic lethality have prolonged the survival of cancer patients, and show promise for future advances in reversal of carcinogenesis. A major type of synthetic lethality operates on the DNA repair defect that often initiates a cancer, and is still present in the tumor cells. Some examples are given here. BRCA1 or BRCA2 expression is deficient in a majority of high-grade breast and ovarian cancers, usually due to epigenetic methylation of their promoters or epigenetic repression by an over-expressed microRNA (see articles BRCA1 and BRCA2). BRCA1 and BRCA2 are important components of the major pathway for homologous recombinational repair of double-strand breaks. If one or the other is deficient, it increases the risk of cancer, especially breast or ovarian cancer. A back-up DNA repair pathway, for some of the damage usually repaired by BRCA1 and BRCA2, depends on PARP1.
Thus, many ovarian cancers respond to an FDA-approved treatment with a PARP inhibitor, causing synthetic lethality to cancer cells deficient in BRCA1 or BRCA2. As of 2016, this treatment was also being evaluated for breast cancer and numerous other cancers in Phase III clinical trials. There are two pathways for homologous recombinational repair of double-strand breaks. The major pathway depends on BRCA1, PALB2 and BRCA2, while an alternative pathway depends on RAD52. Pre-clinical studies, involving epigenetically reduced or mutated BRCA-deficient cells (in culture or injected into mice), show that inhibition of RAD52 is synthetically lethal with BRCA deficiency. Mutations in genes employed in DNA mismatch repair (MMR) cause a high mutation rate. In tumors, such frequent subsequent mutations often generate "non-self" immunogenic antigens. A human Phase II clinical trial, with 41 patients, evaluated one synthetic lethal approach for tumors with or without MMR defects. The product of gene PD-1 ordinarily represses cytotoxic immune responses. Inhibition of this gene allows a greater immune response. When cancer patients with a defect in MMR in their tumors were exposed to an inhibitor of PD-1, 67–78% of patients experienced immune-related progression-free survival. In contrast, for patients without defective MMR, addition of a PD-1 inhibitor led to immune-related progression-free survival in only 11% of patients. Thus inhibition of PD-1 is primarily synthetically lethal with MMR defects. ARID1A, a chromatin modifier, is required for non-homologous end joining, a major pathway that repairs double-strand breaks in DNA, and also has transcription regulatory roles. ARID1A mutations are one of the 12 most common carcinogenic mutations. Mutation or epigenetically decreased expression of ARID1A has been found in 17 types of cancer. Pre-clinical studies in cells and in mice show that synthetic lethality for ARID1A deficiency occurs through either inhibition of the methyltransferase activity of EZH2 or addition of the kinase inhibitor dasatinib.
Another approach is to individually knock out each gene in a genome and observe the effect on normal and cancerous cells. If the knockout of an otherwise nonessential gene has little or no effect on healthy cells but is lethal to cancerous cells containing a mutated oncogene, then the system-wide suppression of that gene can destroy cancerous cells while leaving healthy ones relatively undamaged. The technique was used to identify PARP-1 inhibitors to treat BRCA1/BRCA2-associated cancers. In this case, the combined presence of PARP-1 inhibition and of the cancer-associated mutations in BRCA genes is lethal only to the cancerous cells.
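The gene-by-gene screening logic just described can be illustrated with a toy sketch (the gene names, viability values, and thresholds are hypothetical; real screens rely on replicates, normalization, and statistical testing):

# Toy differential-viability filter for a knockout screen
# (hypothetical genes, numbers, and thresholds).
viability_after_knockout = {
    # gene: (normal-cell viability, cancer-cell viability); 1.0 = unaffected
    "GENE_A": (0.95, 0.12),  # candidate synthetic-lethal target
    "GENE_B": (0.20, 0.15),  # essential in both cell types: too toxic
    "GENE_C": (0.97, 0.90),  # dispensable in both: no effect
}

candidates = [
    gene
    for gene, (normal, cancer) in viability_after_knockout.items()
    if normal > 0.8 and cancer < 0.3
]
print(candidates)  # ['GENE_A']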
== Databases for cancer research == The Cancer Genome Project is an initiative to map out all somatic mutations in cancer. The project systematically sequences the exons and flanking splice junctions of the genomes of primary tumors and cancerous cell lines. COSMIC software displays the data generated from these experiments. As of February 2008, the CGP had identified 4,746 genes and 2,985 mutations in 1,848 tumours. The Cancer Genome Anatomy Project includes information on research into cancer genomes, transcriptomes and proteomes. Progenetix is an oncogenomic reference database, presenting cytogenetic and molecular-cytogenetic tumor data. Oncomine has compiled data from cancer transcriptome profiles. The integrative oncogenomics database IntOGen and the Gitools datasets integrate multidimensional human oncogenomic data classified by tumor type. The first version of IntOGen focused on the role of deregulated gene expression and CNV in cancer. A later version emphasized mutational cancer driver genes across 28 tumor types. All releases of IntOGen data are made available at the IntOGen database. The International Cancer Genome Consortium is the biggest project to collect human cancer genome data. The data is accessible through the ICGC website. The BioExpress® Oncology Suite contains gene expression data from primary, metastatic and benign tumor samples and
normal samples, including matched adjacent controls. The suite includes hematological malignancy samples for many well-known cancers. Specific databases for model animals include the Retrovirus Tagged Cancer Gene Database (RTCGD), which compiled research on retroviral and transposon insertional mutagenesis in mouse tumors. == Gene families == Mutational analysis of entire gene families revealed that genes of the same family have similar functions, as predicted by similar coding sequences and protein domains. Two such classes are the kinase family, involved in adding phosphate groups to proteins, and the phosphatase family, involved with removing phosphate groups from proteins. These families were first examined because of their apparent role in transducing cellular signals of cell growth or death. In particular, more than 50% of colorectal cancers carry a mutation in a kinase or phosphatase gene. The phosphatidylinositol 3-kinase (PIK3CA) gene encodes lipid kinases that commonly contain mutations in colorectal, breast, gastric, lung and various other cancers. Drug therapies can inhibit PIK3CA. Another example is the BRAF gene, one of the first to be implicated in melanomas. BRAF encodes a serine/threonine kinase that is involved in the RAS-RAF-MAPK growth signaling pathway. Mutations in BRAF cause constitutive phosphorylation and activity in 59% of melanomas. Before BRAF, the genetic mechanism of melanoma development was unknown and therefore the prognosis for patients was poor. == Mitochondrial DNA == Mitochondrial DNA (mtDNA) mutations are linked to the formation of tumors. Four types of mtDNA mutations have been identified: === Point mutations === Point mutations have been observed in the coding and non-coding region of the mtDNA contained in cancer cells. In individuals with bladder, head/neck and lung cancers, the point mutations within the coding region resemble each other. This suggests that when a healthy cell transforms into a tumor cell (a neoplastic transformation), the mitochondria seem to become
homogeneous. Abundant point mutations within the non-coding D-loop region of cancerous mitochondria suggest that mutations within this region might be an important characteristic in some cancers. === Deletions === This type of mutation is sporadically detected due to its small size (< 1 kb). The appearance of certain specific mtDNA mutations (a 264-bp deletion and a 66-bp deletion in the complex I subunit gene ND1) in multiple types of cancer provides some evidence that small mtDNA deletions might appear at the beginning of tumorigenesis. It also suggests that the number of mitochondria containing these deletions increases as the tumor progresses. An exception is a relatively large deletion that appears in many cancers (known as the "common deletion"), but more mtDNA large-scale deletions have been found in normal cells than in tumor cells. This may be due to a seemingly adaptive process by which tumor cells eliminate any mitochondria that contain these large-scale deletions (the "common deletion" is > 4 kb). === Insertions === Two small mtDNA insertions of ~260 and ~520 bp can be present in breast cancer, gastric cancer, hepatocellular carcinoma (HCC) and colon cancer, as well as in normal cells. No correlation between these insertions and cancer has been established. === Copy number mutations === The characterization of mtDNA via real-time polymerase chain reaction assays shows quantitative alteration of mtDNA copy number in many cancers. An increase in copy number is expected to occur because of oxidative stress. On the other hand, a decrease is thought to be caused by somatic point mutations in the replication origin site of the H-strand and/or the D310 homopolymeric C-stretch in the D-loop region, mutations in the p53 (tumor suppressor gene) mediated pathway, and/or inefficient enzyme activity due to POLG mutations. Any increase or decrease in copy number then remains constant within tumor
cells. The fact that the amount of mtDNA is constant in tumor cells suggests that the amount of mtDNA is controlled by a much more complicated system in tumor cells, rather than simply altered as a consequence of abnormal cell proliferation. The role of mtDNA content in human cancers apparently varies for particular tumor types or sites. In one survey, 57.7% (500/867) of tumors contained somatic point mutations, and of the 1,172 mutations surveyed, 37.8% (443/1,172) were located in the D-loop control region, 13.1% (154/1,172) were located in the tRNA or rRNA genes, and 49.1% (575/1,172) were found in the mRNA genes needed for producing complexes required for mitochondrial respiration. === Diagnostic applications === Some anticancer drugs target mtDNA and have shown positive results in killing tumor cells. Research has used mitochondrial mutations as biomarkers for cancer cell therapy. It is easier to target mutations within mitochondrial DNA than within nuclear DNA because the mitochondrial genome is much smaller and easier to screen for specific mutations. MtDNA content alterations found in blood samples might serve as a screening marker for predicting future cancer susceptibility as well as for tracking malignant tumor progression. Along with these potentially helpful characteristics, mtDNA is not under the control of the cell cycle and is important for maintaining ATP generation and mitochondrial homeostasis. These characteristics make targeting mtDNA a practical therapeutic strategy. == Cancer biomarkers == Several biomarkers can be useful in cancer staging, prognosis and treatment. They range from single-nucleotide polymorphisms (SNPs), chromosomal aberrations, changes in DNA copy number, microsatellite instability and promoter region methylation to high or low protein levels. Between 2013 and 2019 only 6.8% of people with cancer in two US states underwent genetic testing, suggesting broad under-utilization of information that could improve treatment decisions and patient outcomes. == See also ==
International Cancer Genome Consortium Personalized onco-genomics The Cancer Genome Atlas == References ==
Elizabeth Janet Browne (née Bell, born 30 March 1950) is a British historian of science, known especially for her work on the history of 19th-century biology. She taught at the Wellcome Trust Centre for the History of Medicine, University College, London, before returning to Harvard. She is currently Aramont Professor of the History of Science at Harvard University. == Biography == Browne is the daughter of Douglas Bell CBE (1905–1993) and his wife Betty Bell. She married Nicholas Browne in 1972; they have two daughters. Browne gained a BA from Trinity College, Dublin, in 1972, and an MSc (1973) and PhD (1978) in the history of science from Imperial College, London. She was a research fellow at Harvard University. She received an honorary Doctor of Science (ScD) degree from Trinity College, Dublin, in 2009 in recognition of her contribution to the biographical knowledge of Charles Darwin. After working as an associate editor on the University of Cambridge Library project to collect, edit, and publish the correspondence of Charles Darwin, she wrote a two-volume biography of the naturalist: Charles Darwin: Voyaging (1995), on his youth and years on the Beagle, and Charles Darwin: The Power of Place (2002), covering the years after the publication of his theory of evolution. The latter book has received acclaim for its innovative interpretation of the role of Darwin's correspondence in the formation of his scientific theory and recruitment of scientific support. In 2004, this volume won the History of Science Society's Pfizer Award, the Society's highest honor awarded to individual works of scholarship. In 2003, it also won the James Tait Black Memorial Prize for Biography. In 2020 she was admitted as a member of the Royal Irish Academy. Browne currently serves as the Aramont Professor in the History of Science at Harvard
University. She specializes in life sciences, natural history, and evolutionary biology from the 17th to the 20th century. == Publications == The following is a selection of Browne's publications, chosen primarily by convenience from internet searches, but also to indicate the timespan over which she has published. Browne, Janet (1978). "The Charles Darwin - Joseph Hooker correspondence: an analysis of manuscript resources and their use in biography". Journal of the Society for the Bibliography of Natural History. 8 (4): 351–366. doi:10.3366/jsbnh.1978.8.part_4.351. Browne, Janet (March 1980). "Darwin's botanical arithmetic and the "principle of divergence," 1854–1858". Journal of the History of Biology. 13 (1): 53–89. doi:10.1007/BF00125354. PMID 11610734. S2CID 5801204. Browne, Janet (December 1989). "Botany for Gentlemen: Erasmus Darwin and "The Loves of the Plants"" (PDF). Isis. 80 (4): 593–621. doi:10.1086/355166. JSTOR 234174. S2CID 53599751. Browne, Janet (1990). "Spas and Sensibilities: Darwin at Malvern". Medical History. Supplement No.10 (10): 102–113. doi:10.1017/s0025727300071027. PMC 2557456. PMID 11622586. Browne, Janet (1992). "A science of empire: British biogeography before Darwin". Revue d'histoire des sciences. 45 (4): 453–475. doi:10.3406/rhs.1992.4244. Browne, Janet (1995). Charles Darwin: vol. 1 Voyaging. London: Jonathan Cape. ISBN 1-84413-314-1. Browne, Janet (1996). Charles Darwin: Voyaging. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-02606-0. Browne, Janet (December 2001). "Darwin in Caricature: A Study in the Popularisation and Dissemination of Evolution". Proceedings of the American Philosophical Society. 145 (4): 496–509. JSTOR 1558189. Browne, Janet (2002). Charles Darwin: vol. 2 The Power of Place. London: Jonathan Cape. ISBN 0-7126-6837-3. Browne, Janet (2003). "Charles Darwin as a Celebrity". Science in Context. 16 (1–2): 175–194. doi:10.1017/S0269889703000772. S2CID 145301604. Browne, Janet (2006). Darwin's Origin of Species: A Biography. Crows Nest NSW, Australia: Allen & Unwin. ISBN 978-1-74114-784-1. Retrieved 30 July 2010. Also ISBN 1-74114-784-0 Adrian Desmond, James Moore & Janet Browne (2007). Charles Darwin. Oxford and New York:
Oxford University Press. ISBN 978-0-19-921354-2. Retrieved 30 July 2010. Browne, Janet (2009). "Darwin the Scientist". Cold Spring Harbor Symposia on Quantitative Biology. 74: 1–7. doi:10.1101/sqb.2009.74.047. PMID 20508059. Browne, Janet (2010). "Making Darwin: Biography and the Changing Representations of Charles Darwin". Journal of Interdisciplinary History. 40 (3): 347–373. doi:10.1162/jinh.2010.40.3.347. S2CID 145165183. == References == == External links == Quotations related to Janet Browne at Wikiquote Janet Browne profile at Harvard University
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision. == A == A* search Pronounced "A-star". A graph traversal and pathfinding algorithm which is used in many fields of computer science due to its completeness, optimality, and optimal efficiency.
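As an illustration of the A* entry above, a minimal Python sketch (hypothetical interface: nodes are hashable, neighbors(n) yields (neighbor, step_cost) pairs, and h is assumed to be an admissible heuristic):

# Minimal A* sketch (illustrative only).
import heapq
import itertools

def a_star(start, goal, neighbors, h):
    tie = itertools.count()  # heap tie-breaker for equal priorities
    frontier = [(h(start), next(tie), 0.0, start, None)]
    best_g = {start: 0.0}
    parent = {}
    while frontier:
        _, _, g, node, prev = heapq.heappop(frontier)
        if node in parent:  # already expanded
            continue
        parent[node] = prev
        if node == goal:    # walk parents back to recover the path
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        for nbr, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), next(tie), g2, nbr, node))
    return None

# Usage: shortest path on a 5x5 grid, Manhattan distance as the heuristic.
def grid_neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < 5 and 0 <= q[1] < 5:
            yield q, 1.0

print(a_star((0, 0), (4, 4), grid_neighbors,
             lambda p: abs(p[0] - 4) + abs(p[1] - 4)))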
abductive logic programming (ALP) A high-level knowledge-representation framework that can be used to solve problems declaratively based on abductive reasoning. It extends normal logic programming by allowing some predicates to be incompletely defined, declared as abducible predicates. abductive reasoning Also abduction, abductive inference, or retroduction. A form of logical inference which starts with an observation or set of observations and then seeks to find the simplest and most likely explanation. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it. ablation The removal of a component of an AI system. An ablation study aims to determine the contribution of a component to an AI system by removing the component, and then analyzing the resultant performance of the system. abstract data type A mathematical model for data types, where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. abstraction The process of removing physical, spatial, or temporal details or attributes in the study of objects or systems in order to more closely attend to other details of interest. accelerating change A perceived increase in the rate of technological change throughout history, which may suggest faster and more profound change in the future and may or
may not be accompanied by equally profound social and cultural change. action language A language for specifying state transition systems, commonly used to create formal models of the effects of actions on the world. Action languages are commonly used in the artificial intelligence and robotics domains, where they describe how actions affect the states of systems over time, and may be used for automated planning. action model learning An area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as the input for automated planners. action selection A way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behaviour in an agent environment. activation function In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs.
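For illustration, two widely used activation functions in a minimal NumPy sketch (the example values are hypothetical and not tied to any particular system):

# Two common activation functions (illustrative sketch).
import numpy as np

def relu(x):
    # Rectified linear unit: passes positive inputs, zeroes out the rest.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

z = np.array([-2.0, 0.0, 3.0])  # a node's weighted input sum
print(relu(z))     # [0. 0. 3.]
print(sigmoid(z))  # approximately [0.119 0.5 0.953]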
adaptive algorithm An algorithm that changes its behavior at the time it is run, based on an a priori defined reward mechanism or criterion. adaptive neuro fuzzy inference system (ANFIS) Also adaptive network-based fuzzy inference system. A kind of artificial neural network that is based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s. Since it integrates both neural networks and fuzzy logic principles, it has the potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have learning capability to approximate nonlinear functions. Hence, ANFIS is considered to be a universal estimator. For using the ANFIS in a
more efficient and optimal way, one can use the best parameters obtained by a genetic algorithm. admissible heuristic In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path. affective computing Also artificial emotional intelligence or emotion AI. The study and development of systems and devices that can recognize, interpret, process, and simulate human affects. Affective computing is an interdisciplinary field spanning computer science, psychology, and cognitive science. agent architecture A blueprint for software agents and intelligent control systems, depicting the arrangement of components. The architectures implemented by intelligent agents are referred to as cognitive architectures. AI accelerator A class of microprocessor or computer system designed as hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision, and machine learning. AI-complete In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm. algorithm An unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks. algorithmic efficiency A property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on usage of different resources. Algorithmic efficiency can be thought of as analogous to
engineering productivity for a repeating or continuous process. algorithmic probability In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. AlphaGo A computer program that plays the board game Go. It was developed by Alphabet Inc.'s Google DeepMind in London. AlphaGo has several versions including AlphaGo Zero, AlphaGo Master, AlphaGo Lee, etc. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicaps on a full-sized 19×19 board. ambient intelligence (AmI) Electronic environments that are sensitive and responsive to the presence of people. analysis of algorithms The determination of the computational complexity of algorithms, that is the amount of time, storage and/or other resources necessary to execute them. Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). analytics The discovery, interpretation, and communication of meaningful patterns in data. answer set programming (ASP) A form of declarative programming oriented towards difficult (primarily NP-hard) search problems. It is based on the stable model (answer set) semantics of logic programming. In ASP, search problems are reduced to computing stable models, and answer set solvers—programs for generating stable models—are used to perform search. ant colony optimization (ACO) A probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. anytime algorithm An algorithm that can return a valid solution to a problem even if it is interrupted before it ends. application programming interface (API) A set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a
set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. An API may be for a web-based system, operating system, database system, computer hardware, or software library. approximate string matching Also fuzzy string searching. The technique of finding strings that match a pattern approximately (rather than exactly). The problem of approximate string matching is typically divided into two sub-problems: finding approximate substring matches inside a given string and finding dictionary strings that match the pattern approximately. approximation error The discrepancy between an exact value and some approximation to it. argumentation framework Also argumentation system. A way to deal with contentious information and draw conclusions from it. In an abstract argumentation framework, entry-level information is a set of abstract arguments that, for instance, represent data or a proposition. Conflicts between arguments are represented by a binary relation on the set of arguments. In concrete terms, an argumentation framework is represented as a directed graph in which the nodes are the arguments and the arrows represent the attack relation. There exist some extensions of Dung's framework, such as logic-based argumentation frameworks or value-based argumentation frameworks. artificial general intelligence (AGI) A type of AI that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. artificial immune system (AIS) A class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving. artificial intelligence (AI) Also machine intelligence. Any intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science,
AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". Artificial Intelligence Markup Language An XML dialect for creating natural language software agents. Association for the Advancement of Artificial Intelligence (AAAI) An international, nonprofit, scientific society devoted to promoting research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence (AI), improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions. asymptotic computational complexity In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation. attention mechanism Machine learning-based attention is a mechanism mimicking cognitive attention. It calculates "soft" weights for each word, more precisely for its embedding, in the context window. It can do so either in parallel (such as in transformers) or sequentially (such as in recursive neural networks). "Soft" weights can change during each runtime, in contrast to "hard" weights, which are (pre-)trained and fine-tuned and remain frozen afterwards. Multiple attention heads are used in transformer-based large language models.
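A minimal single-head scaled dot-product attention sketch in NumPy (illustrative only; the shapes and random data are hypothetical):

# Minimal single-head scaled dot-product attention (illustrative).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Soft weights: each query position attends to every key position.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, embedding dimension 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)
print(out.shape, w.shape)  # (4, 8) (4, 4)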
attributional calculus A logic and representation system defined by Ryszard S. Michalski. It combines elements of predicate logic, propositional calculus, and multi-valued logic. Attributional calculus provides a formal language for natural induction, an inductive learning process whose results are in forms natural to people. augmented reality (AR) An interactive experience of a real-world environment where the
objects that reside in the real world are "augmented" by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory. autoencoder A type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). A common implementation is the variational autoencoder (VAE). automata theory The study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science). automated machine learning (AutoML) A field of machine learning (ML) which aims to automatically configure an ML system to maximize its performance (e.g., classification accuracy). automated planning and scheduling Also simply AI planning. A branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory. automated reasoning An area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science, and even philosophy. autonomic computing (AC) The self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity from operators and users. Initiated by IBM in 2001, this initiative ultimately aimed to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. autonomous car Also self-driving car, robot car, and driverless
car. A vehicle that is capable of sensing its environment and moving with little or no human input. autonomous robot A robot that performs behaviors or tasks with a high degree of autonomy. Autonomous robotics is usually considered to be a subfield of artificial intelligence, robotics, and information engineering. == B == backpropagation A method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network. Backpropagation is shorthand for "the backward propagation of errors", since an error is computed at the output and distributed backwards throughout the network's layers. It is commonly used to train deep neural networks, a term referring to neural networks with more than one hidden layer. backpropagation through structure (BPTS) A gradient-based technique for training recursive neural networks, proposed in a 1996 paper by Christoph Goller and Andreas Küchler. backpropagation through time (BPTT) A gradient-based technique for training certain types of recurrent neural networks, such as Elman networks. The algorithm was independently derived by numerous researchers. backward chaining Also backward reasoning. An inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications. bag-of-words model A simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used for computer vision. The bag-of-words model is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a feature for training a classifier.
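A minimal sketch of the idea on a hypothetical toy document:

# Bag-of-words: word order is discarded, multiplicity is kept.
from collections import Counter

document = "the cat sat on the mat"
bow = Counter(document.split())
print(bow)  # Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})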
bag-of-words model in computer vision In computer vision, the bag-of-words model
(BoW model) can be applied to image classification, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features. batch normalization A technique for improving the performance and stability of artificial neural networks. It is a technique to provide any layer in a neural network with inputs that are zero mean/unit variance. Batch normalization was introduced in a 2015 paper. It is used to normalize the input layer by adjusting and scaling the activations. Bayesian programming A formalism and a methodology for specifying probabilistic models and solving problems when less than the necessary information is available. bees algorithm A population-based search algorithm which was developed by Pham, Ghanbarzadeh et al. in 2005. It mimics the food foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighborhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for the application of the bees algorithm is that some measure of distance between the solutions is defined. The effectiveness and specific abilities of the bees algorithm have been proven in a number of studies. behavior informatics (BI) The informatics of behaviors so as to obtain behavior intelligence and behavior insights. behavior tree (BT) A mathematical model of plan execution used in computer science, robotics, control systems and video games. Behavior trees describe switching between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple
tasks are implemented. BTs present some similarities to hierarchical state machines, with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes BTs less error-prone and very popular in the game developer community. BTs have been shown to generalize several other control architectures. belief–desire–intention software model (BDI) A software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer. bias–variance tradeoff In statistics and machine learning, the bias–variance tradeoff is the property of a set of predictive models whereby models with a lower bias in parameter estimation have a higher variance of the parameter estimates across samples, and vice versa. big data A term used to refer to data sets that are too large or complex for traditional data-processing application software to adequately deal with. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate. Big O notation A mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations
invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. binary tree A tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. Some authors allow the binary tree to be the empty set as well. blackboard system An artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. Boltzmann machine Also stochastic Hopfield network with hidden units. A type of stochastic recurrent neural network and Markov random field. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks. Boolean satisfiability problem Also propositional satisfiability problem; abbreviated SATISFIABILITY or SAT. The problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable
because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. boosting A machine learning ensemble metaheuristic for primarily reducing bias (as opposed to variance), by training models sequentially, each one correcting the errors of its predecessor. bootstrap aggregating Also bagging or bootstrapping. A machine learning ensemble metaheuristic for primarily reducing variance (as opposed to bias), by training multiple models independently and averaging their predictions. brain technology Also self-learning know-how system. A technology that employs the latest findings in neuroscience. The term was first introduced by the Artificial Intelligence Laboratory in Zurich, Switzerland, in the context of the ROBOY project. Brain Technology can be employed in robots, know-how management systems and any other application with self-learning capabilities. In particular, Brain Technology applications allow the visualization of the underlying learning architecture, often termed "know-how maps". branching factor In computing, tree data structures, and game theory, the number of children at each node, the outdegree. If this value is not uniform, an average branching factor can be calculated. brute-force search Also exhaustive search or generate and test. A very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.
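A minimal sketch connecting brute-force search to the Boolean satisfiability example above (a toy enumerator, not a practical SAT solver; the helper name is hypothetical):

# Brute-force satisfiability check: enumerate every truth assignment.
from itertools import product

def satisfiable(formula, variables):
    return any(
        formula(dict(zip(variables, bits)))
        for bits in product([True, False], repeat=len(variables))
    )

print(satisfiable(lambda v: v["a"] and not v["b"], ["a", "b"]))  # True
print(satisfiable(lambda v: v["a"] and not v["a"], ["a"]))       # False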
== C == capsule neural network (CapsNet) A machine learning system that is a type of artificial neural network (ANN) that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization. case-based reasoning (CBR) Broadly construed, the process of solving new problems based on the solutions of similar past problems. chatbot Also smartbot, talkbot, chatterbot, bot, IM bot, interactive agent, conversational interface, or artificial conversational entity. A computer
program or an artificial intelligence which conducts a conversation via auditory or textual methods. cloud robotics A field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centred on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers in the cloud, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities while reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots that have an intelligent "brain" in the cloud. The "brain" consists of a data center, knowledge base, task planners, deep learning, information processing, environment models, communication support, etc. cluster analysis Also clustering. The task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics.
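A minimal cluster-analysis sketch using k-means from scikit-learn (the data and parameters are hypothetical):

# Minimal clustering sketch with k-means (scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hypothetical groups of 2-D points, centered at (0, 0) and (5, 5).
points = np.vstack([
    rng.normal(0.0, 0.5, size=(20, 2)),
    rng.normal(5.0, 0.5, size=(20, 2)),
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels)  # points in the same group receive the same cluster label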
Cobweb An incremental system for hierarchical conceptual clustering. COBWEB was invented by Professor Douglas H. Fisher, currently at Vanderbilt University. COBWEB incrementally organizes observations into a classification tree. Each node in a classification tree represents a class (concept) and is labeled by a probabilistic concept that summarizes the attribute-value distributions of objects classified under the node. This classification tree can be used to predict missing attributes or the class of a
new object. cognitive architecture The Institute of Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments." cognitive computing In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, CC is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimulus. cognitive science The interdisciplinary scientific study of the mind and its processes. combinatorial optimization In Operations Research, applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects. committee machine A type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response. The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare ensembles of classifiers. commonsense knowledge In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", that all humans are expected to know. The first AI program to address common sense knowledge was Advice Taker in 1959 by John McCarthy. commonsense reasoning A branch of artificial intelligence concerned with simulating the human ability to make presumptions about the type and essence of ordinary situations they encounter every day. computational chemistry A branch of chemistry that uses computer simulation to assist in solving chemical problems. computational complexity theory Focuses on
classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. computational creativity Also artificial creativity, mechanical creativity, creative computing, or creative computation. A multidisciplinary endeavour that includes the fields of artificial intelligence, cognitive psychology, philosophy, and the arts. computational cybernetics The integration of cybernetics and computational intelligence techniques. computational humor A branch of computational linguistics and artificial intelligence which uses computers in humor research. computational intelligence (CI) Usually refers to the ability of a computer to learn a specific task from data or experimental observation. computational learning theory In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms. computational linguistics An interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective, as well as the study of appropriate computational approaches to linguistic questions. computational mathematics The mathematical research in areas of science where computing plays an essential role. computational neuroscience Also theoretical neuroscience or mathematical neuroscience. A branch of neuroscience which employs mathematical models, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology, and cognitive abilities of the nervous system. computational number theory Also algorithmic number theory. The study of algorithms for performing number theoretic computations. computational problem In theoretical computer science, a computational problem is a mathematical object representing a collection of questions that computers might be able to solve. computational statistics Also statistical computing. The interface between statistics and computer science. computer-automated design (CAutoD) Design automation usually refers to electronic design automation, or Design Automation, which is
a Product Configurator. Extending Computer-Aided Design (CAD), automated design and computer-automated design are concerned with a broader range of applications, such as automotive engineering, civil engineering, composite material design, control engineering, dynamic system identification and optimization, financial systems, industrial equipment, mechatronic systems, steel construction, structural optimisation, and the invention of novel systems. More recently, traditional CAD simulation is seen to be transformed to CAutoD by biologically inspired machine learning, including heuristic search techniques such as evolutionary computation and swarm intelligence algorithms. computer audition (CA) See machine listening. computer science The theory, experimentation, and engineering that form the basis for the design and use of computers. It involves the study of algorithms that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems. computer vision An interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. concept drift In predictive analytics and machine learning, concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. connectionism An approach in the field of cognitive science that hopes to explain mental phenomena using artificial neural networks. consistent heuristic In the study of path-finding problems in artificial intelligence, a heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighboring vertex to the goal, plus the cost of reaching that neighbor. constrained conditional model (CCM) A machine learning and inference framework that augments the learning of
conditional (probabilistic or discriminative) models with declarative constraints. constraint logic programming A form of constraint programming, in which logic programming is extended to include concepts from constraint satisfaction. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is A(X,Y) :- X+Y>0, B(X), C(Y). In this clause, X+Y>0 is a constraint; A(X,Y), B(X), and C(Y) are literals as in regular logic programming. This clause states one condition under which the statement A(X,Y) holds: X+Y is greater than zero and both B(X) and C(Y) are true. constraint programming A programming paradigm wherein relations between variables are stated in the form of constraints. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found. constructed language Also conlang. A language whose phonology, grammar, and vocabulary are consciously devised, instead of having developed naturally. Constructed languages may also be referred to as artificial, planned, or invented languages. control theory A subfield of mathematics, used in control systems engineering, that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner without delay or overshoot and ensuring control stability. convolutional neural network In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural network most commonly applied to image analysis. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. crossover Also recombination. In
genetic algorithms and evolutionary computation, a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and is analogous to the crossover that happens during sexual reproduction in biological organisms. Solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions are typically mutated before being added to the population. == D == Darkforest A computer Go program developed by Facebook, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search. The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them. With the update, the system is known as Darkfmcts3. Dartmouth workshop The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by many (though not all) to be the seminal event for artificial intelligence as a field. data augmentation In data analysis, techniques used to increase the amount of data. Data augmentation helps reduce overfitting when training a learning algorithm. data fusion The process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source. data integration The process of combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data) and the need to share existing data explodes. It
has become the focus of extensive theoretical work, and numerous open problems remain unsolved. data mining The process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. data science An interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured, similar to data mining. Data science is a "concept to unify statistics, data analysis, machine learning, and their related methods" in order to "understand and analyze actual phenomena" with data. It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science. data set Also dataset. A collection of data. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows. data warehouse (DW or DWH) Also enterprise data warehouse (EDW). A system used for reporting and data analysis. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in one single place. Datalog A declarative logic programming language that syntactically is a subset of Prolog. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing.
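A minimal sketch of Datalog-style bottom-up evaluation, rendered here in Python for illustration (the classic ancestor program; the facts are hypothetical):

# Datalog-style bottom-up evaluation of the ancestor program,
# computed as a fixpoint over the parent facts:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
parent = {("alice", "bob"), ("bob", "carol")}

ancestor = set(parent)
changed = True
while changed:  # iterate until no new facts can be derived
    derived = {
        (x, z)
        for (x, y) in parent
        for (y2, z) in ancestor
        if y == y2
    }
    changed = not derived <= ancestor
    ancestor |= derived

print(sorted(ancestor))
# [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]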
decision boundary In the case of backpropagation-based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers in the network. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of Rn as shown by the universal approximation theorem, thus it can have an arbitrary decision boundary. decision support system (DSS) An information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both. decision theory Also theory of choice. The study of the reasoning underlying an agent's choices. Decision theory can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values, and descriptive decision theory, which analyzes how existing, possibly irrational agents actually make decisions. decision tree learning Uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and machine learning.
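A minimal sketch of decision tree learning using scikit-learn, assuming the library is installed; the toy data and class labels are invented for illustration:

```python
# A tiny decision-tree classifier: branch tests on the features lead to
# leaf conclusions about the target value.
from sklearn.tree import DecisionTreeClassifier

# Observations: [height_cm, weight_kg]; target: 0 = cat, 1 = dog
X = [[25, 4], [30, 5], [55, 20], [60, 25]]
y = [0, 0, 1, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(tree.predict([[28, 4.5], [58, 22]]))  # -> [0 1]
```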
declarative programming A programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow. deductive classifier A type of artificial intelligence inference engine. It takes as input a set of declarations
in a frame language about a domain such as medical research or molecular biology. For example, the names of classes, sub-classes, properties, and restrictions on allowable values. Deep Blue was a chess-playing computer developed by IBM. It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls. deep learning A subset of machine learning that focuses on utilizing neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be either supervised, semi-supervised, or unsupervised. DeepMind Technologies A British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, with research centres in Canada, France, and the United States. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a neural Turing machine, or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain. The company made headlines in 2016 after its AlphaGo program beat human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film. A more general program, AlphaZero, beat the most powerful programs playing Go, chess, and shogi (Japanese chess) after a few days of play against itself using reinforcement learning. default logic A
non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions. Density-based spatial clustering of applications with noise (DBSCAN) A clustering algorithm proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu in 1996. description logic (DL) A family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between DL expressivity and reasoning complexity by supporting different sets of mathematical constructors. developmental robotics (DevRob) Also epigenetic robotics. A scientific field which aims at studying the developmental mechanisms, architectures, and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. diagnosis Concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on observations, which provide information on the current behaviour. dialogue system Also conversational agent (CA). A computer system intended to converse with a human with a coherent structure. Dialogue systems have employed text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel. diffusion model In machine learning, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of latent variable models. They are Markov chains trained using variational inference. The goal of diffusion models is to learn the
latent structure of a dataset by modeling the way in which data points diffuse through the latent space. In computer vision, this means that a neural network is trained to denoise images blurred with Gaussian noise by learning to reverse the diffusion process. It mainly consists of three major components: the forward process, the reverse process, and the sampling procedure. Three examples of generic diffusion modeling frameworks used in computer vision are denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. Dijkstra's algorithm An algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, road networks.
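A compact Python sketch of Dijkstra's algorithm using a binary heap; the adjacency-list encoding and example graph are illustrative choices:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a weighted graph given as
    {node: [(neighbour, edge_weight), ...]} with non-negative weights."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist

roads = {"A": [("B", 5), ("C", 2)], "C": [("B", 1)], "B": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'C': 2, 'B': 3}
```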
dimensionality reduction Also dimension reduction. The process of reducing the number of random variables under consideration by obtaining a set of principal variables. It can be divided into feature selection and feature extraction. discrete system Any system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals. distributed artificial intelligence (DAI) Also decentralized artificial intelligence. A subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems. double descent A phenomenon in
statistics and machine learning where a model with a small number of parameters and a model with an extremely large number of parameters have a small test error, but a model whose number of parameters is about the same as the number of data points used to train the model will have a large error. This phenomenon has been considered surprising, as it contradicts assumptions about overfitting in classical machine learning. dropout Also dilution. A regularization technique for reducing overfitting in artificial neural networks by preventing complex co-adaptations on training data. dynamic epistemic logic (DEL) A logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur. == E == eager learning A learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is made to the system. early stopping A regularization technique often used when training a machine learning model with an iterative method such as gradient descent.
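A schematic Python sketch of early stopping; train_step and validate are hypothetical placeholders standing in for a real training loop and validation pass:

```python
def train_with_early_stopping(train_step, validate, max_epochs=100, patience=5):
    """Generic early-stopping loop: stop once the validation loss has not
    improved for `patience` consecutive epochs. `train_step` runs one epoch
    of optimisation; `validate` returns the current validation loss."""
    best_loss, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        train_step()
        val_loss = validate()
        if val_loss < best_loss:
            best_loss, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print(f"stopping early at epoch {epoch}")
                break
    return best_loss
```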
Ebert test A test which gauges whether a computer-based synthesized voice can tell a joke with sufficient skill to cause people to laugh. It was proposed by film critic Roger Ebert at the 2011 TED conference as a challenge to software developers to have a computerized voice master the inflections, delivery, timing, and intonations of a speaking human. The test is similar to the Turing test proposed by Alan Turing in 1950 as a way to gauge a computer's ability to exhibit intelligent behavior by generating performance indistinguishable from a human being. echo state network (ESN) A recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity
and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can (re)produce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be minimized easily by solving a linear system. embodied agent Also interface agent. An intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment. embodied cognitive science An interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: 1) the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity, 2) the formation of a common set of general principles of intelligent behavior, and 3) the experimental use of robotic agents in controlled environments. error-driven learning A sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to minimize some error feedback. It is a type of reinforcement learning. ensemble learning The use of multiple machine learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
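A minimal sketch of one simple ensemble scheme, hard majority voting; the three toy "models" below are stand-ins for trained classifiers:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine several trained classifiers by hard voting: each model
    predicts a label and the most common label wins."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Illustrative 'models': three weak rules classifying a number as 'big'/'small'
models = [lambda x: "big" if x > 10 else "small",
          lambda x: "big" if x > 8 else "small",
          lambda x: "big" if x > 15 else "small"]
print(majority_vote(models, 12))  # -> 'big' (two of three models vote 'big')
```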
epoch In machine learning, particularly in the creation of artificial neural networks, an epoch is training the model for one cycle through the full training dataset. Small models are typically trained for as many epochs as it takes to reach
the best performance on the validation dataset. The largest models may train for only one epoch. ethics of artificial intelligence The part of the ethics of technology specific to artificial intelligence. evolutionary algorithm (EA) A subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators. evolutionary computation A family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character. evolving classification function (ECF) Evolving classification functions are used for classifying and clustering in the field of machine learning and artificial intelligence, typically employed for data stream mining tasks in dynamic and changing environments. existential risk The hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. expert system A computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. == F == fast-and-frugal trees A type of classification tree. Fast-and-frugal trees can be used as decision-making tools which operate as lexicographic classifiers, and, if required, associate an action (decision) to each class or category. feature An individual measurable property or characteristic of a phenomenon. In computer vision and image processing, a feature is
a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in an image (such as points, edges, or objects), or the result of a general neighborhood operation or feature detection applied to the image. feature extraction In machine learning, pattern recognition, and image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations. feature learning Also representation learning. In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. feature selection In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. federated learning A machine learning technique that allows for training models on multiple devices with decentralized data, thus helping preserve the privacy of individual users and their data. first-order logic Also first-order predicate calculus or predicate logic. A collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists X such that X is Socrates and X is a man", where "there exists" is a quantifier and X is
a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations. fluent A condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time. formal language A set of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules. forward chaining Also forward reasoning. One of the two main methods of reasoning when using an inference engine; it can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining. Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.
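A toy Python sketch of this fire-rules-until-fixpoint loop; the rules and facts below are invented for illustration:

```python
# Hypothetical if-then rules: antecedent set -> consequent.
rules = [({"has_fur", "says_woof"}, "is_dog"),
         ({"is_dog"}, "is_mammal")]
facts = {"has_fur", "says_woof"}

# Repeatedly fire any rule whose antecedent (If clause) is fully known,
# adding its consequent (Then clause) to the facts, until nothing changes.
changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)  # includes 'is_dog' and then, by chaining, 'is_mammal'
```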
frame An artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". Frames are the primary data structure used in artificial intelligence frame language. frame language A technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and
capabilities of frame and object-oriented languages overlap significantly. frame problem The problem of finding adequate collections of axioms for a viable description of a robot environment. friendly artificial intelligence Also friendly AI or FAI. A hypothetical artificial general intelligence (AGI) that would have a positive effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensure it is adequately constrained. futures studies The study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them. fuzzy control system A control system based on fuzzy logic—a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively). fuzzy logic A form of many-valued logic in which the truth values of variables may have any degree of "truthfulness" represented by a real number in the range between 0 (completely false) and 1 (completely true), inclusive. Consequently, it is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false, in contrast to Boolean logic, where the truth values of variables may only be the integer values 0 or 1.
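A minimal sketch of fuzzy truth values in Python, using the common min/max interpretation of AND/OR (one of several possible choices of fuzzy connectives); the example propositions and degrees are invented:

```python
# Toy fuzzy-logic connectives: truth values are reals in [0, 1].
# min/max are the standard (Godel) interpretations of AND/OR.
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

warm = 0.7    # "the room is warm" is 0.7 true
bright = 0.4  # "the room is bright" is 0.4 true
print(fuzzy_and(warm, bright))  # 0.4
print(fuzzy_or(warm, bright))   # 0.7
print(fuzzy_not(warm))          # ~0.3 (floating point)
```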
fuzzy rule A rule used within fuzzy logic systems to infer an output based on input variables. fuzzy set In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition — an
element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1. In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics. == G == game theory The study of mathematical models of strategic interaction between rational decision-makers. general game playing (GGP) The design of artificial intelligence programs able to run and play more than one game successfully. generalization The concept that humans, other animals, and artificial neural networks use past learning in present situations of learning if the conditions in the situations are regarded as similar. generalization error For supervised learning applications in machine learning and statistical learning theory, generalization error (also known as the out-of-sample error or the risk) is a measure of how accurately a learning algorithm is able to predict outcomes for previously unseen data. generative adversarial network (GAN) A class of machine learning systems in which two neural networks contest with each other in a zero-sum game framework. generative artificial intelligence Artificial intelligence capable of generating text, images, or other media in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics, typically using transformer-based deep neural networks. generative pretrained
transformer (GPT) A large language model based on the transformer architecture that generates text. It is first pretrained to predict the next token in texts (a token is typically a word, subword, or punctuation). After pretraining, GPT models can generate human-like text by repeatedly predicting the token that they would expect to follow. GPT models are usually also fine-tuned, for example with reinforcement learning from human feedback to reduce hallucination or harmful behaviour, or to format the output in a conversational format. genetic algorithm (GA) A metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection. genetic operator An operator used in genetic algorithms to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another in order for the algorithm to be successful. glowworm swarm optimization A swarm intelligence optimization algorithm based on the behaviour of glowworms (also known as fireflies or lightning bugs). gradient boosting A machine learning technique based on boosting in a functional space, where the target is pseudo-residuals instead of residuals as in traditional boosting. graph (abstract data type) In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics; specifically, the field of graph theory. graph (discrete mathematics) In mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called vertices (also called
nodes or points) and each of the related pairs of vertices is called an edge (also called an arc or line). graph database (GDB) A database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A key concept of the system is the graph (or edge or relationship), which directly relates data items in the store: a collection of nodes of data and edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly, and in many cases retrieved with one operation. Graph databases hold the relationships between data as a priority. Querying relationships within a graph database is fast because they are perpetually stored within the database itself. Relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data. graph theory The study of graphs, which are mathematical structures used to model pairwise relations between objects. graph traversal Also graph search. The process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal.
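A minimal Python sketch of one common traversal order, breadth-first search; the example graph is illustrative:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal: visit vertices in order of distance
    from `start`; `graph` maps each vertex to its neighbours."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph.get(u, []):
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(g, "a"))  # ['a', 'b', 'c', 'd']
```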
== H == hallucination A response generated by AI that contains false or misleading information presented as fact. heuristic A technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it
may approximate the exact solution. hidden layer A layer of neurons in an artificial neural network that is neither an input layer nor an output layer. hyper-heuristic A heuristic search method that seeks to automate the process of selecting, combining, generating, or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems, often by the incorporation of machine learning techniques. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem. hyperparameter A parameter that can be set in order to define any configurable part of a machine learning model's learning process. hyperparameter optimization The process of choosing a set of optimal hyperparameters for a learning algorithm. hyperplane A decision boundary in machine learning classifiers that partitions the input space into two or more sections, with each section corresponding to a unique class label. == I == IEEE Computational Intelligence Society A professional society of the Institute of Electrical and Electronics Engineers (IEEE) focussing on "the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained". incremental learning A method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e. to further train the model. It represents a dynamic technique of supervised and unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms. inference engine A component of the system that applies logical rules to the knowledge base to deduce new information. information integration (II) The merging
of information from heterogeneous sources with differing conceptual, contextual and typographical representations. It is used in data mining and consolidation of data from unstructured or semi-structured resources. Typically, information integration refers to textual representations of knowledge but is sometimes applied to rich-media content. Information fusion, which is a related term, involves the combination of information into a new set of information towards reducing redundancy and uncertainty. Information Processing Language (IPL) A programming language that includes features intended to help with programs performing simple problem-solving actions, such as lists, dynamic memory allocation, data types, recursion, functions as arguments, generators, and cooperative multitasking. IPL invented the concept of list processing, albeit in an assembly-language style. intelligence amplification (IA) Also cognitive augmentation, machine augmented intelligence, and enhanced intelligence. The effective use of information technology in augmenting human intelligence. intelligence explosion A possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown, at the time of the technological singularity. intelligent agent (IA) An autonomous entity which acts upon an environment, directing its activity towards achieving goals (i.e. it is an agent), using observation through sensors and consequent actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. intelligent control A class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms. intelligent personal assistant Also virtual assistant or personal digital assistant. A software agent that can perform tasks or services for an individual based on verbal commands. Sometimes the term "chatbot" is used to refer to virtual assistants generally or specifically
accessed by online chat (or in some cases online chat programs that are exclusively for entertainment purposes). Some virtual assistants are able to interpret human speech and respond via synthesized voices. Users can ask their assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and calendars with verbal commands. interpretation An assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics. intrinsic motivation An intelligent agent is intrinsically motivated to act if the information content alone, of the experience resulting from the action, is the motivating factor. Information content in this context is measured in the information theory sense as quantifying uncertainty. A typical intrinsic motivation is to search for unusual (surprising) situations, in contrast to a typical extrinsic motivation such as the search for food. Intrinsically motivated artificial agents display behaviours akin to exploration and curiosity. issue tree Also logic tree. A graphical breakdown of a question that dissects it into its different components vertically and that progresses into details as it reads to the right. Issue trees are useful in problem solving to identify the root causes of a problem as well as to identify its potential solutions. They also provide a reference point to see how each piece fits into the whole picture of a problem. == J == junction tree algorithm Also Clique Tree. A method used in machine learning to extract marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction
tree. The graph is called a tree because it branches into different sections of data; nodes of variables are the branches. == K == kernel method In machine learning, kernel methods are a class of algorithms for pattern analysis, whose best known member is the support vector machine (SVM). The general task of pattern analysis is to find and study general types of relations (e.g., cluster analysis, rankings, principal components, correlations, classifications) in datasets. KL-ONE A well-known knowledge representation system in the tradition of semantic networks and frames; that is, it is a frame language. The system is an attempt to overcome semantic indistinctness in semantic network representations and to explicitly represent conceptual information as a structured inheritance network. k-nearest neighbors A non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. knowledge acquisition The process used to define the rules and ontologies required for a knowledge-based system. The phrase was first used in conjunction with expert systems to describe the initial tasks associated with developing an expert system, namely finding and interviewing domain experts and capturing their knowledge via rules, objects, and frame-based ontologies. knowledge-based system (KBS) A computer program that reasons and uses a knowledge base to solve complex problems. The term is broad and refers to many different kinds of systems. The one common theme that unites all knowledge-based systems is an attempt to represent knowledge explicitly and a reasoning system that allows it to derive new knowledge. Thus, a knowledge-based system has two distinguishing features: a knowledge base and an inference engine. knowledge distillation The process of transferring knowledge from a large machine learning model to a smaller one.
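A minimal numpy sketch of the soft-target idea often used in distillation; the temperature, logits, and loss form are illustrative assumptions rather than a fixed recipe:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

# Teacher's logits for one example; a higher temperature softens the
# distribution, exposing how the large model ranks the wrong classes.
teacher_logits = np.array([8.0, 2.0, 1.0])
student_logits = np.array([4.0, 2.5, 0.5])
T = 4.0

soft_targets = softmax(teacher_logits, T)   # teacher's softened predictions
student_probs = softmax(student_logits, T)

# Distillation loss: cross-entropy between soft targets and student output.
loss = -np.sum(soft_targets * np.log(student_probs))
print(soft_targets.round(3), loss.round(3))
```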
knowledge engineering (KE) All technical, scientific, and social aspects
involved in building, maintaining, and using knowledge-based systems. knowledge extraction The creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data. Knowledge Interchange Format (KIF) A computer language designed to enable systems to share and reuse information from knowledge-based systems. KIF is similar to frame languages such as KL-ONE and LOOM, but unlike such languages its primary role is not intended as a framework for the expression or use of knowledge but rather for the interchange of knowledge between systems. The designers of KIF likened it to PostScript. PostScript was not designed primarily as a language to store and manipulate documents but rather as an interchange format for systems and devices to share documents. In the same way KIF is meant to facilitate sharing of knowledge across different systems that use different languages, formalisms, platforms, etc. knowledge representation and reasoning (KR² or KR&R) The field of artificial intelligence dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings
from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers. k-means clustering A method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. == L == language model A probabilistic model that manipulates natural language. large language model (LLM) A language model with a large number of parameters (typically at least a billion) that are adjusted during training. Due to its size, it requires a lot of data and computing capability to train. Large language models are usually based on the transformer architecture. lazy learning In machine learning, lazy learning is a learning method in which generalization of the training data is, in theory, delayed until a query is made to the system, as opposed to in eager learning, where the system tries to generalize the training data before receiving queries. Lisp (programming language) (LISP) A family of programming languages with a long history and a distinctive, fully parenthesized prefix notation. logic programming A type of programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP), and Datalog. long short-term memory (LSTM) An artificial recurrent neural network architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections that make it a
"general purpose computer" (that is, it can compute anything that a Turing machine can). It can not only process single data points (such as images), but also entire sequences of data (such as speech or video). == M == machine vision (MV) The technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision is a term encompassing a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as security and vehicle guidance. Markov chain A stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Markov decision process (MDP) A discrete time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming and reinforcement learning. mathematical optimization Also mathematical programming. In mathematics, computer science, and operations research, the selection of a best element (with regard to some criterion) from some set of available alternatives. machine learning (ML) The scientific study of algorithms and statistical models that computer systems use in order to perform a specific task effectively without using explicit instructions, relying on patterns and inference instead. machine listening Also computer audition (CA). A
general field of study of algorithms and systems for audio understanding by machine. machine perception The capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them. mechanism design A field in economics and game theory that takes an engineering approach to designing economic mechanisms or incentives, toward desired objectives, in strategic settings, where players act rationally. Because it starts at the end of the game, then goes backwards, it is also called reverse game theory. It has broad applications, from economics and politics (markets, auctions, voting procedures) to networked-systems (internet interdomain routing, sponsored search auctions). mechatronics Also mechatronic engineering. A multidisciplinary branch of engineering that focuses on the engineering of both electrical and mechanical systems, and also includes a combination of robotics, electronics, computer, telecommunications, systems, control, and product engineering. metabolic network reconstruction and simulation Allows for an in-depth insight into the molecular mechanisms of a particular organism. In particular, these models correlate the genome with molecular physiology. metaheuristic In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a set of solutions which is too large to be completely sampled. model checking In computer science, model checking or property checking is, for a given model of a system, exhaustively and automatically checking whether this model meets a given specification. Typically, one has hardware or software systems in mind, whereas the specification contains safety requirements such as the absence of deadlocks and similar critical states that can cause the system to crash. Model checking
is a technique for automatically verifying correctness properties of finite-state systems. modus ponens In propositional logic, modus ponens is a rule of inference. It can be summarized as "P implies Q and P is asserted to be true, therefore Q must be true." modus tollens In propositional logic, modus tollens is a valid argument form and a rule of inference. It is an application of the general truth that if a statement is true, then so is its contrapositive. The inference rule modus tollens asserts that the inference from "P implies Q" to "not-Q implies not-P" is valid. Monte Carlo tree search In computer science, Monte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes. multi-agent system (MAS) Also self-organized system. A computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning. multilayer perceptron (MLP) In deep learning, a multilayer perceptron (MLP) is a name for a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is not linearly separable. multi-swarm optimization A variant of particle swarm optimization (PSO) based on the use of multiple sub-swarms instead of one (standard) swarm. The general approach in multi-swarm optimization is that each sub-swarm focuses on a specific region while a specific diversification method decides where and when to launch the sub-swarms. The multi-swarm framework is especially suited for optimization on multi-modal problems, where multiple (local) optima exist. mutation A genetic operator used to maintain genetic diversity from one generation of a population of genetic
algorithm chromosomes to the next. It is analogous to biological mutation. Mutation alters one or more gene values in a chromosome from its initial state. In mutation, the solution may change entirely from the previous solution. Hence, a GA can arrive at a better solution by using mutation. Mutation occurs during evolution according to a user-definable mutation probability. This probability should be set low. If it is set too high, the search will turn into a primitive random search. Mycin An early backward chaining expert system that used artificial intelligence to identify bacteria causing severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient's body weight – the name derived from the antibiotics themselves, as many antibiotics have the suffix "-mycin". The MYCIN system was also used for the diagnosis of blood clotting diseases. == N == naive Bayes classifier In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.
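A toy Python sketch of a naive Bayes text classifier with Laplace smoothing; the miniature training set and word-set encoding are invented for illustration:

```python
import math
from collections import Counter, defaultdict

# Documents as word sets, with a class label each.
train = [({"free", "money"}, "spam"), ({"meeting", "money"}, "ham"),
         ({"free", "offer"}, "spam"), ({"project", "meeting"}, "ham")]

labels = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
for words, label in train:
    word_counts[label].update(words)

def predict(words):
    best, best_score = None, -math.inf
    for label, n in labels.items():
        score = math.log(n / len(train))  # log prior
        for w in words:                   # naive independence assumption
            score += math.log((word_counts[label][w] + 1) / (n + 2))  # Laplace smoothing
        if score > best_score:
            best, best_score = label, score
    return best

print(predict({"free", "money"}))  # -> 'spam'
```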
naive semantics An approach used in computer science for representing basic knowledge about a specific domain; it has been used in applications such as the representation of the meaning of natural language sentences in artificial intelligence applications. In a general setting the term has been used to refer to the use of a limited store of generally understood knowledge about a specific domain in the world, and has been applied to fields such as the knowledge-based design of data schemas. name binding In programming languages, name binding is the association of entities (data and/or code) with identifiers. An identifier bound to an object is said to reference that object. Machine languages have no built-in notion of identifiers, but name-object bindings as a service and notation for the
programmer are implemented by programming languages. Binding is intimately connected with scoping, as scope determines which names bind to which objects – at which locations in the program code (lexically) and in which one of the possible execution paths (temporally). Use of an identifier id in a context that establishes a binding for id is called a binding (or defining) occurrence. In all other occurrences (e.g., in expressions, assignments, and subprogram calls), an identifier stands for what it is bound to; such occurrences are called applied occurrences. named-entity recognition (NER) Also entity identification, entity chunking, and entity extraction. A subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. named graph A key concept of Semantic Web architecture in which a set of Resource Description Framework statements (a graph) are identified using a URI, allowing descriptions to be made of that set of statements such as context, provenance information or other such metadata. Named graphs are a simple extension of the RDF data model through which graphs can be created, but the model lacks an effective means of distinguishing between them once published on the Web at large. natural language generation (NLG) A software process that transforms structured data into plain-English content. It can be used to produce long-form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out loud by a text-to-speech system. natural language processing (NLP) A subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers
and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. natural language programming An ontology-assisted way of programming in terms of natural-language sentences, e.g. English. network motif All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs. One important local property of networks is the presence of so-called network motifs, which are defined as recurrent and statistically significant sub-graphs or patterns. neural machine translation (NMT) An approach to machine translation that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model. neural network A neural network can refer to either a neural circuit of biological neurons (sometimes also called a biological neural network), or a network of artificial neurons or nodes in the case of an artificial neural network. Artificial neural networks are used for solving artificial intelligence (AI) problems; they model connections of biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be −1 and 1.
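A one-neuron Python sketch of this weighted-sum-plus-activation computation, using a sigmoid activation as an illustrative choice; the weights and inputs are invented:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs (a linear
    combination) passed through a sigmoid activation, squashing the
    output into the range (0, 1)."""
    linear = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-linear))  # sigmoid activation

# Positive weights excite, negative weights inhibit.
print(neuron([1.0, 0.5], weights=[2.0, -1.0], bias=0.1))  # ~0.83
```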
neural Turing machine (NTM) A recurrent neural network model. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient
descent. An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone. neuro-fuzzy Combinations of artificial neural networks and fuzzy logic. neurocybernetics Also brain–computer interface (BCI), neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI). A direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. neuromorphic engineering Also neuromorphic computing. A concept describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors. node A basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers. nondeterministic algorithm An algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. nouvelle AI Nouvelle AI differs from classical AI by aiming to produce robots with intelligence levels similar to insects. Researchers believe that intelligence can emerge organically from simple behaviors as these intelligences interact with the "real world", instead of using the constructed worlds which symbolic AIs typically need to have programmed into them. NP In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class
used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time. NP-completeness In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly (in polynomial time), such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty. NP-hardness Also non-deterministic polynomial-time hardness. In computational complexity theory, the defining property of a class of problems that are, informally, "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem. == O == Occam's razor Also Ockham's razor or Ocham's razor. The problem-solving principle that states that when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions; the principle is not meant to filter out hypotheses that make different predictions. The idea is attributed to the English Franciscan friar William of Ockham (c. 1287–1347), a scholastic philosopher and theologian. offline learning A machine learning training approach in which a model is trained on a fixed dataset that is not updated during the learning process. online machine learning A method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once.
ontology learning Also ontology extraction, ontology generation, or ontology acquisition. The automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. OpenAI The for-profit corporation OpenAI LP, whose parent organization is the non-profit organization OpenAI Inc, which conducts research in the field of artificial intelligence (AI) with the stated aim to promote and develop friendly AI in such a way as to benefit humanity as a whole. OpenCog A project that aims to build an open-source artificial intelligence framework. OpenCog Prime is an architecture for robot and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system. Open Mind Common Sense An artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. open-source software (OSS) A type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative, public manner. Open-source software is
a prominent example of open collaboration. overfitting "The production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". In other words, an overfitted model memorizes training data details but cannot generalize to new data. Conversely, an underfitted model is too simple to capture the complexity of the training data. == P == partial order reduction A technique for reducing the size of the state-space to be searched by a model checking or automated planning and scheduling algorithm. It exploits the commutativity of concurrently executed transitions, which result in the same state when executed in different orders. partially observable Markov decision process (POMDP) A generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP. particle swarm optimization (PSO) A computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
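A hedged one-dimensional sketch in Python, minimizing f(x) = x²; the inertia weight and the cognitive/social coefficients below are conventional but arbitrary values, not prescribed by the method itself.

```python
import random

def f(x):
    return x * x  # toy objective to minimize

w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights
positions = [random.uniform(-10, 10) for _ in range(20)]
velocities = [0.0] * 20
personal_best = positions[:]                 # best position seen by each particle
global_best = min(positions, key=f)          # best position seen by the swarm

for _ in range(100):
    for i in range(20):
        r1, r2 = random.random(), random.random()
        # Velocity blends inertia, attraction to the particle's own best,
        # and attraction to the swarm's best.
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (personal_best[i] - positions[i])
                         + c2 * r2 * (global_best - positions[i]))
        positions[i] += velocities[i]
        if f(positions[i]) < f(personal_best[i]):
            personal_best[i] = positions[i]
        if f(positions[i]) < f(global_best):
            global_best = positions[i]

print(global_best)  # approaches 0, the minimizer of f
```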
pathfinding Also pathing. The plotting, by a computer application, of the shortest route between two points. It is a more practical variant on solving mazes. This field of research is based heavily on Dijkstra's algorithm for finding a shortest path on a weighted graph. pattern recognition Concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories. perceptron An algorithm for supervised learning of binary classifiers.
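An illustrative Python sketch of the classic perceptron update rule, here learning the logical AND function; the epoch count is an arbitrary choice, and the bias is folded in as an extra weight.

```python
# Perceptron training on AND: weights move toward a separating
# hyperplane whenever a point is misclassified.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0, 0.0]  # [w1, w2, bias]

def predict(x, w):
    s = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1 if s >= 0 else 0

for _ in range(20):                  # epochs (arbitrary)
    for x, target in data:
        error = target - predict(x, weights)
        weights[0] += error * x[0]   # classic perceptron update rule
        weights[1] += error * x[1]
        weights[2] += error          # bias update

print([predict(x, weights) for x, _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop reaches a perfect classifier in finitely many updates.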
predicate logic Also first-order logic and first-order predicate calculus. A collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man" one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic. predictive analytics A variety of statistical techniques from data mining, predictive modelling, and machine learning, that analyze current and historical facts to make predictions about future or otherwise unknown events. principal component analysis (PCA) A statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component, in turn, has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables. principle of rationality Also rationality principle. A principle coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework. It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The Poverty of Historicism. According to Popper's rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior which he used to drive his model of situational logic. probabilistic programming (PP) A programming paradigm in which probabilistic models are specified and inference for these models is performed automatically. It represents an attempt to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable. It can be used to create systems that help make decisions in the face of uncertainty. Programming languages used for probabilistic programming are referred to as "probabilistic programming languages" (PPLs). production system A computer program typically used to provide some form of AI, which consists primarily of a set of rules about behavior, but also includes the mechanism necessary to follow those rules as the system responds to states of the world. programming language A formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms. Prolog A logic programming language associated with artificial intelligence and computational linguistics. Prolog has its roots in first-order logic, a formal logic, and unlike
many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations. propositional calculus Also propositional logic, statement logic, sentential calculus, sentential logic, and zeroth-order logic. A branch of logic which deals with propositions (which can be true or false) and argument flow. Compound propositions are formed by connecting propositions by logical connectives. The propositions without logical connectives are called atomic propositions. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic. proximal policy optimization (PPO) A reinforcement learning algorithm for training an intelligent agent's decision function to accomplish difficult tasks. Python An interpreted, high-level, general-purpose programming language created by Guido van Rossum and first released in 1991. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. PyTorch A machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. == Q == Q-learning A model-free reinforcement learning algorithm for learning the value of an action in a particular state.
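A minimal tabular sketch in Python on an invented five-state corridor task; the learning rate, discount factor, and exploration rate are arbitrary illustrative values.

```python
import random

# Tabular Q-learning: states 0..4, actions 0 (left) and 1 (right);
# reaching state 4 pays reward 1 and ends the episode.
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(5)]       # Q[state][action]

for _ in range(500):                      # episodes
    s = 0
    while s != 4:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[s].index(max(Q[s]))
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Core update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print(Q)  # right-moving actions accumulate the highest values
```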
qualification problem In philosophy and artificial intelligence (especially knowledge-based systems), the qualification problem is concerned with the impossibility of listing all of the preconditions required for a real-world action to have its intended effect. It might be posed as how to deal with the things that prevent an agent from achieving its intended result. It is strongly connected to, and opposite the ramification side of, the frame problem. quantifier In logic, quantification specifies the quantity of specimens in the domain of discourse that satisfy an open formula. The two most common quantifiers mean "for all" and "there exists". For example, in arithmetic, quantifiers allow one to say that the natural numbers go on forever, by writing that for all n (where n is a natural number), there is another number (say, the successor of n) which is one bigger than n. quantum computing The use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically. query language Query languages or data query languages (DQLs) are computer languages used to make queries in databases and information systems. Broadly, query languages can be classified according to whether they are database query languages or information retrieval query languages. The difference is that a database query language attempts to give factual answers to factual questions, while an information retrieval query language attempts to find documents containing information that is relevant to an area of inquiry. == R == R programming language A programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis. radial basis function network In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many
uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment. random forest Also random decision forest. An ensemble learning method for classification, regression, and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set. reasoning system In information technology, a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems. recurrent neural network (RNN) A class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. regression analysis A set of statistical processes for estimating the relationships between a dependent variable (often called the outcome or response variable, or label in machine learning) and one or more error-free independent variables (often called regressors, predictors, covariates, explanatory variables, or features). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. regularization A set of techniques such as dropout, early stopping, and L1 and L2 regularization to reduce overfitting when training a learning algorithm. reinforcement learning
(RL) An area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised and unsupervised learning. It differs from supervised learning in that labelled input/output pairs need not be presented, and sub-optimal actions need not be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). reinforcement learning from human feedback (RLHF) A technique that involves training a "reward model" to predict how humans rate the quality of generated content, and then training a generative AI model to satisfy this reward model via reinforcement learning. It can be used, for example, to make the generative AI model more truthful or less harmful. representation learning See feature learning. reservoir computing A framework for computation that may be viewed as an extension of neural networks. Typically an input signal is fed into a fixed (random) dynamical system called a reservoir and the dynamics of the reservoir map the input to a higher dimension. Then a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage and the reservoir is fixed. Liquid-state machines and echo state networks are two major types of reservoir computing. Resource Description Framework (RDF) A family of World Wide Web Consortium (W3C) specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description or modeling of information that is implemented in web resources, using a variety of syntax notations and data serialization formats. It is also used in knowledge management applications. restricted
Boltzmann machine (RBM) A generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. Rete algorithm A pattern matching algorithm for implementing rule-based systems. The algorithm was developed to efficiently apply many rules or patterns to many objects, or facts, in a knowledge base. It is used to determine which of the system's rules should fire based on its data store, its facts. robotics An interdisciplinary branch of science and engineering that includes mechanical engineering, electronic engineering, information engineering, computer science, and others. Robotics deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing. rule-based system In computer science, a rule-based system is used to store and manipulate knowledge to interpret information in a useful way. It is often used in artificial intelligence applications and research. Normally, the term rule-based system is applied to systems involving human-crafted or curated rule sets. Rule-based systems constructed using automatic rule inference, such as rule-based machine learning, are normally excluded from this system type. == S == satisfiability In mathematical logic, satisfiability and validity are elementary concepts of semantics. A formula is satisfiable if it is possible to find an interpretation (model) that makes the formula true. A formula is valid if all interpretations make the formula true. The opposites of these concepts are unsatisfiability and invalidity, that is, a formula is unsatisfiable if none of the interpretations make the formula true, and invalid if some interpretation makes the formula false. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition. search algorithm Any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of
a problem domain, either with discrete or continuous values. selection The stage of a genetic algorithm in which individual genomes are chosen from a population for later breeding (using the crossover operator). self-management The process by which computer systems manage their own operation without human intervention. semantic network Also frame network. A knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields. semantic reasoner Also reasoning engine, rules engine, or simply reasoner. A piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. semantic query Allows for queries and analytics of an associative and contextual nature. Semantic queries enable the retrieval of both explicitly and implicitly derived information based on syntactic, semantic and structural information contained in data. They are designed to deliver precise results (possibly the distinctive selection of one single piece of information) or to answer more fuzzy and wide-open questions through pattern matching and digital reasoning. semantics In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically valid strings defined by a specific programming language, showing the computation involved. If the evaluation were
of syntactically invalid strings, the result would be non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or an explanation of how the program will be executed on a certain platform, hence creating a model of computation. semi-supervised learning Also weak supervision. A machine learning training paradigm characterized by using a combination of a small amount of human-labeled data (used exclusively in supervised learning) with a large amount of unlabeled data (used exclusively in unsupervised learning). sensor fusion The combining of sensory data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were used individually. separation logic An extension of Hoare logic, a way of reasoning about programs. The assertion language of separation logic is a special case of the logic of bunched implications (BI). similarity learning An area of supervised learning closely related to classification and regression, but the goal is to learn a similarity function that measures how similar or related two objects are. It has applications in ranking, in recommendation systems, visual identity tracking, face verification, and speaker verification. simulated annealing (SA) A probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem.
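A minimal Python sketch minimizing f(x) = x²; the starting point, step size, and geometric cooling schedule are arbitrary illustrative choices rather than part of the method's definition.

```python
import math
import random

def f(x):
    return x * x  # toy objective to minimize

x = 10.0                      # current solution (arbitrary start)
T = 1.0                       # temperature
while T > 1e-3:
    candidate = x + random.uniform(-1, 1)
    delta = f(candidate) - f(x)
    # Always accept improvements; accept worse moves with probability
    # exp(-delta / T), which shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate
    T *= 0.99                 # geometric cooling schedule
print(x)  # ends near 0, the global minimum
```

The occasional acceptance of worse moves at high temperature is what lets the search escape local optima before the cooling schedule freezes it near a good solution.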
situated approach In artificial intelligence research, the situated approach builds agents that are designed to behave successfully in their environment. This requires designing AI "from the bottom up" by focusing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills. situation calculus A logic formalism designed for representing and reasoning about dynamical domains. Selective Linear Definite clause resolution Also simply SLD resolution. The basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses. software A collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. software engineering The application of engineering to the development of software in a systematic method. spatial-temporal reasoning An area of artificial intelligence which draws from the fields of computer science, cognitive science, and cognitive psychology. The theoretic goal—on the cognitive side—involves representing and reasoning about spatial-temporal knowledge in mind. The applied goal—on the computing side—involves developing high-level control systems of automata for navigating and understanding time and space. SPARQL An RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. sparse dictionary learning Also sparse coding or SDL. A feature learning method aimed at finding a sparse representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves. speech recognition An interdisciplinary subfield of computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT). It incorporates knowledge and research in the linguistics, computer science, and electrical engineering fields. spiking neural network
(SNN) An artificial neural network that more closely mimics a natural neural network. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. state In information technology and computer science, a program is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system. statistical classification In machine learning and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient (sex, blood pressure, presence or absence of certain symptoms, etc.). Classification is an example of pattern recognition. state–action–reward–state–action (SARSA) A reinforcement learning algorithm for learning a Markov decision process policy. statistical relational learning (SRL) A subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure. Note that SRL is sometimes called Relational Machine Learning (RML) in the literature. Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming. stochastic optimization (SO) Any optimization method that generates and uses random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints.
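As a minimal illustration, pure random search is a stochastic optimization method in the simplest sense: the algorithm itself draws random variables to propose candidate solutions. The objective function and search interval below are invented for this sketch.

```python
import random

def f(x):
    return (x - 2) ** 2 + 1  # toy objective; minimum at x = 2

# Random search: repeatedly sample the interval and keep the best point.
best_x = random.uniform(-5, 5)
for _ in range(10_000):
    candidate = random.uniform(-5, 5)
    if f(candidate) < f(best_x):
        best_x = candidate
print(best_x)  # close to 2, the minimizer
```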