A septic tank is an underground chamber made of concrete, fiberglass, or plastic through which domestic wastewater ( sewage ) flows for basic sewage treatment . [ 2 ] Settling and anaerobic digestion processes reduce solids and organics, but the treatment efficiency is only moderate (referred to as "primary treatment"). [ 2 ] Septic tank systems are a type of simple onsite sewage facility . They can be used in areas that are not connected to a sewerage system, such as rural areas. The treated liquid effluent is commonly disposed of in a septic drain field , which provides further treatment. Nonetheless, groundwater pollution can occur and remains a concern.
The term "septic" refers to the anaerobic bacterial environment that develops in the tank and decomposes or mineralizes the waste discharged into it. Septic tanks can be coupled with other onsite wastewater treatment units such as biofilters or aerobic systems involving artificially forced aeration . [ 3 ]
The rate of accumulation of sludge—also called septage or fecal sludge —is faster than the rate of decomposition. [ 2 ] Therefore, the accumulated fecal sludge must be periodically removed, which is commonly done with a vacuum truck . [ 4 ]
A septic tank consists of one or more concrete or plastic tanks of between 4,500 and 7,500 litres (1,000 and 2,000 gallons); one end is connected to an inlet wastewater pipe and the other to a septic drain field . Generally these pipe connections are made with a T pipe, allowing liquid to enter and exit without disturbing any crust on the surface. [ citation needed ] Today, the design of the tank usually incorporates two chambers, each equipped with an access opening and cover, and separated by a dividing wall with openings located about midway between the floor and roof of the tank.
Wastewater enters the first chamber of the tank, allowing solids to settle and scum to float. The settled solids are anaerobically digested, reducing the volume of solids. The liquid component flows through the dividing wall into the second chamber, where further settlement takes place. One option for the effluent is drainage into the septic drain field , also referred to as a leach field, drain field or seepage field, depending upon locality. A percolation test is required prior to installation to ensure the porosity of the soil is adequate to serve as a drain field. [ 5 ] [ 6 ]
Septic tank effluent can also be conveyed to secondary treatment, typically in constructed wetlands. Constructed wetlands benefit from the septic tank's effectiveness at removing solids, which keeps the wetland from clogging quickly.
Septic tank effluent can also be conveyed to a centralized treatment facility.
The remaining impurities are trapped and eliminated in the soil , with the excess water eliminated through percolation into the soil, through evaporation , and by uptake through the root system of plants and eventual transpiration or entering groundwater or surface water . A piping network, often laid in a stone-filled trench (see weeping tile ), distributes the wastewater throughout the field with multiple drainage holes in the network. The size of the drain field is proportional to the volume of wastewater and inversely proportional to the porosity of the drainage field. The entire septic system can operate by gravity alone or, where topographic considerations require, with inclusion of a lift pump .
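The proportionality described above can be sketched numerically. The helper and constants below are illustrative assumptions only (real drain field sizing follows local codes and the percolation test mentioned later), but they capture the stated relationship: area scales with daily wastewater volume and inversely with the soil's hydraulic application rate.

```python
def drain_field_area(daily_flow_l, application_rate_l_per_m2_day):
    """Required drain field area in m^2 (illustrative model only).

    Area is proportional to wastewater volume and inversely
    proportional to the soil's application rate, which stands in
    for porosity here. Constants below are invented for the example.
    """
    return daily_flow_l / application_rate_l_per_m2_day

# The same 600 L/day household needs less area in permeable sandy
# soil (assumed 25 L/m^2/day) than in tight clay (assumed 10 L/m^2/day).
sandy = drain_field_area(600, 25)  # 24.0 m^2
clay = drain_field_area(600, 10)   # 60.0 m^2
```

A real design would start from measured percolation results rather than an assumed application rate.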
Certain septic tank designs include siphons or other devices to increase the volume and velocity of outflow to the drainage field. These help to fill the drainage pipe more evenly and extend the drainage field life by preventing premature clogging or bioclogging .
An Imhoff tank is a two-stage septic system where the sludge is digested in a separate tank. This avoids mixing digested sludge with incoming sewage. Also, some septic tank designs have a second stage where the effluent from the anaerobic first stage is aerated before it drains into the seepage field.
A properly designed and normally operating septic system is odour-free. Besides periodic inspection and emptying, a septic tank should last for decades with minimal maintenance, with concrete, fibreglass, or plastic tanks lasting about 50 years. [ 7 ]
Waste that is not decomposed by the anaerobic digestion must eventually be removed from the septic tank. Otherwise the septic tank fills up and wastewater containing undecomposed material discharges directly to the drainage field. Not only is this detrimental for the environment but, if the sludge overflows the septic tank into the leach field, it may clog the leach field piping or decrease the soil porosity itself, requiring expensive repairs.
When a septic tank is emptied, the accumulated sludge ( septage , also known as fecal sludge [ 8 ] ) is pumped out of the tank by a vacuum truck . How often the septic tank must be emptied depends on the volume of the tank relative to the input of solids, the amount of indigestible solids, and the ambient temperature (because anaerobic digestion occurs more efficiently at higher temperatures), as well as usage, system characteristics and the requirements of the relevant authority.
Some health authorities require tanks to be emptied at prescribed intervals, while others leave it up to the decision of an inspector. Some systems require pumping every few years or sooner, while others may be able to go 10–20 years between pumpings. An older system with an undersized tank that is being used by a large family will require much more frequent pumping than a new system used by only a few people. Anaerobic decomposition is rapidly restarted when the tank is refilled. [ citation needed ]
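The factors listed above can be combined into a rough back-of-the-envelope model. Everything here is an assumption for illustration: the per-person sludge accumulation rate, the one-third-full pumping trigger, and the occupancy figures are invented, and real intervals should follow an inspector's advice or local rules.

```python
def years_until_pumping(tank_volume_l, occupants,
                        sludge_rate_l_per_person_year=70,
                        full_fraction=1/3):
    """Years until accumulated sludge reaches `full_fraction` of the
    tank. The default rate and fraction are illustrative guesses, not
    engineering values; temperature and usage shift them considerably."""
    sludge_capacity_l = tank_volume_l * full_fraction
    return sludge_capacity_l / (occupants * sludge_rate_l_per_person_year)

# A 4,500 L tank serving two people vs. the same tank serving six:
# the larger household reaches the pumping threshold three times sooner.
print(round(years_until_pumping(4500, 2), 1))  # 10.7
print(round(years_until_pumping(4500, 6), 1))  # 3.6
```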
An empty tank may be damaged by hydrostatic pressure causing the tank to partially "float" out of the ground, especially in flood situations or very wet ground conditions. [ 9 ]
Another option is "scheduled desludging" of septic tanks, an approach that has been initiated in several Asian countries including the Philippines, Malaysia, Vietnam, Indonesia, and India. [ 10 ] In this process, every property along a defined route is covered, and the property occupiers are informed in advance about the desludging that will take place.
The maintenance of a septic system is often the responsibility of the resident or property owner. Some forms of abuse or neglect include the following:
Septic tank additives have been promoted by some manufacturers with the aim of improving effluent quality, reducing sludge build-up, and reducing odors. These additives—which are commonly based on " effective microorganisms "—are usually costly in the longer term and fail to live up to expectations. [ 14 ] It has been estimated that in the U.S. more than 1,200 septic system additives were available on the market in 2011. [ 15 ] Very little peer-reviewed and replicated field research exists regarding the efficacy of these biological septic tank additives. [ 15 ]
While a properly maintained and located septic tank poses no greater environmental problems than centralized municipal sewage treatment, [ 16 ] certain problems can arise with a septic tank in an unsuitable location, and septic tank failures are typically more expensive to fix or replace than municipal sewer connections. [ 16 ] Since septic systems require large drainfields , they are unsuitable for densely built areas.
Some constituents of wastewater, especially sulfates , are reduced under the anaerobic conditions of septic tanks to hydrogen sulfide , a pungent and toxic gas. Nitrates and organic nitrogen compounds can be reduced to ammonia . Because of the anaerobic conditions, fermentation and methanogenesis processes take place, which may generate carbon dioxide and/or methane . Both carbon dioxide and methane are greenhouse gases, with methane having a global warming potential about 25 times larger than carbon dioxide. This makes septic tanks potential greenhouse gas emitters. This methane can be burned to produce energy for local use. [ 17 ]
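The warming comparison above amounts to a one-line conversion. The factor of 25 is the global warming potential stated in the text; the emission mass in the example is invented for illustration.

```python
GWP_METHANE = 25  # methane vs. carbon dioxide, per the text

def co2_equivalent_kg(methane_kg, gwp=GWP_METHANE):
    """CO2-equivalent mass of a methane emission."""
    return methane_kg * gwp

# A hypothetical tank venting 10 kg of CH4 per year has the warming
# impact of 250 kg of CO2 over the same horizon.
print(co2_equivalent_kg(10))  # 250
```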
Septic tanks by themselves are ineffective at removing nitrogen compounds that have potential to cause algal blooms in waterways into which affected water from a septic system finds its way. This can be remedied by using a nitrogen-reducing technology, [ 18 ] such as hybrid constructed wetlands, or by simply ensuring that the leach field is properly sited to prevent direct entry of effluent into bodies of water. [ citation needed ]
The fermentation processes cause the contents of a septic tank to be anaerobic with a low redox potential, which keeps phosphates in a soluble and, thus, mobilized form. Phosphates discharged from a septic tank into the environment can trigger prolific plant growth including algal blooms, which can also include blooms of potentially toxic cyanobacteria .
The soil's capacity to retain phosphorus is usually large enough to handle the load through a normal residential septic tank. An exception occurs when septic drain fields are located in sandy or coarser soils on property adjacent to a water body. Because of limited particle surface area, these soils can become saturated with phosphates. Phosphates will progress beyond the treatment area, posing a threat of eutrophication to surface waters. [ 19 ]
Pathogens hazardous to humans, such as E. coli and other coliform bacteria , are often reported following failures of septic tanks. [ 20 ]
A properly functioning septic system, on the other hand, provides a significant reduction of pathogens compared to direct discharge, due to settling (in the tank) and soil absorption (in the drain field). Log reductions of 4–8 for coliform bacteria and 0–2 for viruses are achieved in the effluent. Parasitic worm eggs are also removed. Additional filters may be added to improve removal performance, although they will need to be replaced periodically. [ 21 ]
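A "log reduction" is a base-10 measure: each log unit removes 90% of the remaining organisms. The small helper below is illustrative only, converting the reduction figures quoted above into surviving fractions.

```python
import math

def surviving_fraction(log_reduction):
    """Fraction of organisms remaining after a given log reduction."""
    return 10.0 ** (-log_reduction)

# 4-log: 0.01% of coliforms remain; 8-log: one in a hundred million.
# 0-log (the low end quoted for viruses) removes nothing at all.
assert math.isclose(surviving_fraction(4), 1e-4)
assert math.isclose(surviving_fraction(8), 1e-8)
assert surviving_fraction(0) == 1.0
```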
In areas with high population density, groundwater pollution beyond acceptable limits may occur. Some small towns have incurred the expense of building centralized wastewater treatment systems because of this problem, with the extended collection systems accounting for much of the cost. To reduce residential development that might increase the demand to construct an expensive centralized sewerage system, building moratoriums and limitations on the subdivision of property are often imposed. Ensuring existing septic tanks are functioning properly can also be helpful for a limited time, but becomes less effective as a primary remediation strategy as population density increases.
In areas adjacent to water bodies with fish or shellfish intended for human consumption, improperly maintained and failing septic systems contribute to pollution levels that can force harvest restrictions and/or commercial or recreational harvest closures.
In the United States , the 2008 American Housing Survey indicated that about 20 percent of all households rely on septic tanks, [ 22 ] and that the overwhelming majority of systems are located in rural (50%) and suburban (47%) areas. [ 22 ] Indianapolis is one example of a large city where many of the city's neighborhoods still rely on separate septic systems. [ 23 ] In Europe, septic systems are generally limited to rural areas. [ citation needed ]
In the European Union the EN 12566 standard provides the general requirements for packaged and site assembled treatment plants used for domestic wastewater treatment.
Part 1 ( EN 12566-1 ) is for septic tanks that are prefabricated or factory manufactured and made of polyethylene , glass reinforced polyester , polypropylene , PVC-U , steel or concrete . Part 4 ( EN 12566-4 ) regulates septic tanks that are assembled on site from prefabricated kits, generally of concrete construction. Certified septic tanks of both types must pass a standardized hydraulic test to assess their ability to retain suspended solids within the system. Additionally, their structural adequacy in relevant ground conditions is assessed in terms of water-tightness, treatment efficiency, and structural behaviour. [ 24 ]
In France , about 4 million households (or 20% of the population) are using on-site wastewater disposal systems ( l’assainissement non collectif ), [ 25 ] including septic tanks ( fosse septique ). The legal framework for regulating the construction and maintenance of septic systems was introduced in 1992 and updated in 2009 and 2012 with the intent to establish the technical requirements applicable to individual sewerage systems. [ 26 ] Septic tanks in France are subject to inspection at least once every four years by SPANC ( Service Public d’Assainissement Non Collectif ), a professional body appointed by the respective local authorities to enforce wastewater collection laws. Following the introduction of EN 12566, the discharge of effluent directly into ditches or watercourses is prohibited, unless the effluent meets prescribed standards. [ 27 ]
According to the Census of Ireland 2011 , 27.5% of Irish households (i.e. about 440,000 households), with the majority in rural areas, use an individual septic tank. [ 28 ]
Following a European Court of Justice judgment made against Ireland in 2009 that deemed the country non-compliant with the Waste Framework Directive in relation to domestic wastewaters disposed of in the countryside, the Water Services (Amendment) Act 2012 was passed in order to regulate wastewater discharges from domestic sources that are not connected to the public sewer network and to provide arrangements for registration and inspection of existing individual domestic wastewater treatment systems. [ 29 ] [ 30 ]
Additionally, a code of practice has been developed by the Environmental Protection Agency to regulate the planning and construction of new septic tanks, secondary treatment systems, septic drain fields and filter systems. [ 31 ] Direct discharge of septic tank effluent into groundwater is prohibited in Ireland, while the indirect discharge via unsaturated subsoil into groundwater, e.g. by means of a septic drain field, or the direct discharge into surface water is permissible in accordance with a Water Pollution Act license. [ 31 ] Registered septic tanks must be desludged by an authorized contractor at least once a year; the removed fecal sludge is disposed of, either to a managed municipal wastewater treatment facility or to agriculture provided that nutrient management regulations are met. [ 31 ]
Since 2015, only certain property owners in England and Wales with septic tanks or small packaged sewage treatment systems need to register their systems and either apply for a permit or qualify for an exemption with the Environment Agency . [ 32 ] Permits are required for systems that discharge more than a certain volume of effluent in a given time or that discharge effluent directly into sensitive areas (e.g., some groundwater protection zones). [ 33 ] In general, permits are not granted for new septic tanks that discharge directly into surface waters. A septic tank discharging into a watercourse must be replaced or upgraded by 1 January 2020 with a sewage treatment plant (also called an onsite sewage facility ), or sooner if the property is sold before this date, or if the Environment Agency (EA) finds that it is causing pollution.
In Northern Ireland , the Department of the Environment must give permission for all wastewater discharges where it is proposed that the discharge will go to a waterway or soil infiltration system. The discharge consent will outline conditions relating to the quality and quantity of the discharge in order to ensure the receiving waterway or the underground aquifer can absorb the discharge. [ 34 ]
The Water Environment Regulations 2011 regulate the registration of septic tank systems in Scotland . Proof of registration is required when new properties are being developed or existing properties change ownership. [ 35 ]
In Australia, septic tank design and installation requirements are regulated by State Governments, through Departments of Health and Environmental Protection Agencies. Regulation may include Codes of Practice [ 36 ] [ 37 ] and Legislation. [ 38 ] Regulatory requirements for the design and installation of septic tanks commonly reference Australian Standards (1547 and 1546). Capacity requirements for septic tanks may be outlined within Codes of Practice, and can vary between states.
Mainly because of water leaching from the effluent drains of many closely spaced septic systems, [ 39 ] many council districts (e.g. Sunshine Coast, Queensland ) have banned septic systems and require them to be replaced with much more expensive small-scale sewage treatment systems that actively pump air into the tank, producing an aerobic environment. [ citation needed ] Septic systems have to be replaced as part of any new building application, regardless of how well the old system performed. [ citation needed ]
According to the US Environmental Protection Agency , in the United States it is the homeowner's responsibility to maintain their septic system. [ 40 ] Anyone who neglects this responsibility will eventually face costly repairs when solids escape the tank and clog the clarified liquid effluent disposal system.
In Washington , for example, a "shellfish protection district" or "clean water district" is a geographic service area designated by a county to protect water quality and tideland resources. The district provides a mechanism to generate local funds for water quality services to control non-point sources of pollution, such as septic system maintenance. The district also serves as an educational resource, calling attention to the pollution sources that threaten shellfish growing waters. [ 41 ]
The term "septic tank", or more usually "septic", is used in some parts of Britain as a slang term to refer to Americans, [ 42 ] from the Cockney rhyming slang "septic tank" for "Yank". [ 43 ] This is sometimes further shortened to "seppo" by Australians . [ 44 ]
Source: https://en.wikipedia.org/wiki/Septic_tank
Septins are a group of GTP - binding proteins expressed in all eukaryotic cells except plants . [ 1 ] [ 2 ] [ 3 ] Different septins form protein complexes with each other. These complexes can further assemble into filaments, rings and gauzes. Assembled as such, septins function in cells by localizing other proteins , either by providing a scaffold to which proteins can attach, or by forming a barrier preventing the diffusion of molecules from one compartment of the cell to another, [ 2 ] [ 3 ] [ 4 ] [ 5 ] or in the cell cortex as a barrier to the diffusion of membrane-bound proteins. [ 6 ]
Septins have been implicated in the localization of cellular processes at the site of cell division , and at the cell membrane at sites where specialized structures like cilia or flagella are attached to the cell body. [ 4 ] In yeast cells, they compartmentalize parts of the cell and build scaffolding to provide structural support during cell division at the septum , from which they derive their name. [ 3 ] Research in human cells suggests that septins build cages around pathogenic bacteria that immobilize the microbes and prevent them from invading other cells. [ 7 ]
As filament forming proteins, septins can be considered part of the cytoskeleton . [ 4 ] Apart from forming non-polar filaments, septins associate with cell membranes , the cell cortex, actin filaments and microtubules . [ 4 ] [ 6 ]
Septins are P-Loop -NTPase proteins that range in molecular weight from 30 to 65 kDa. Septins are highly conserved between different eukaryotic species. They are composed of a variable-length proline -rich N-terminus with a basic phosphoinositide -binding motif important for membrane association, a GTP-binding domain , a highly conserved Septin Unique Element domain, and a C-terminal extension including a coiled coil domain of varying length. [ 4 ]
Septins interact either via their respective GTP-binding domains, or via both their N- and C-termini. Different organisms express different numbers of septins, from which symmetric oligomers are formed. For example, in yeast the octameric complex formed is Cdc11-Cdc12-Cdc3-Cdc10-Cdc10-Cdc3-Cdc12-Cdc11. [ 8 ] In humans, hexameric or octameric complexes are possible. Initially, the human complex was reported as Sept7-Sept6-Sept2-Sept2-Sept6-Sept7; [ 9 ] more recently this order has been revised to Sept2-Sept6-Sept7-Sept7-Sept6-Sept2 [ 10 ] (or Sept2-Sept6-Sept7-Sept3-Sept3-Sept7-Sept6-Sept2 [ 11 ] in the case of octameric hetero-oligomers). These complexes then associate to form non-polar filaments, filament bundles, cages or ring structures in cells. [ 4 ]
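The mirror-symmetric subunit orders listed above can be expressed as a tiny construction. The `palindromic_complex` helper is purely illustrative, not an established API; it simply mirrors a half-list so each monomer appears twice, reflected about the center.

```python
def palindromic_complex(half):
    """Build a symmetric septin hetero-oligomer from its half-list."""
    return half + half[::-1]

# The revised human hexamer and the octameric variant from the text:
hexamer = palindromic_complex(["Sept2", "Sept6", "Sept7"])
octamer = palindromic_complex(["Sept2", "Sept6", "Sept7", "Sept3"])
assert hexamer == ["Sept2", "Sept6", "Sept7", "Sept7", "Sept6", "Sept2"]
assert octamer == ["Sept2", "Sept6", "Sept7", "Sept3",
                   "Sept3", "Sept7", "Sept6", "Sept2"]
```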
Septins are found in fungi , animals , and some eukaryotic algae but are not found in plants. [ 1 ]
There are seven different septins in Saccharomyces cerevisiae . Five of those are involved in mitosis, while two (Spr3 and Spr28) are specific to sporulation . [ 2 ] [ 3 ] Mitotic septins (Cdc3, Cdc10, Cdc11, Cdc12, Shs1) form a ring structure at the bud neck during cell division . [ 2 ] [ 4 ] They are involved in the selection of the bud-site, the positioning of the mitotic spindle , polarized growth, and cytokinesis . The sporulating septins (Spr3, Spr28) localize together with Cdc3 and Cdc11 to the edges of prospore membranes. [ 2 ]
Septins form a specialised region in the cell cortex known as the septin cortex. [ 12 ] The septin cortex undergoes several changes throughout the cell cycle : The first visible septin structure is a distinct ring which appears ~15 min before bud emergence. After bud emergence, the ring broadens to assume the shape of an hourglass around the mother-bud neck. During cytokinesis , the septin cortex splits into a double ring which eventually disappears. How can the septin cortex undergo such dramatic changes, although some of its functions may require it to be a stable structure? FRAP analysis has revealed that the turnover of septins at the neck undergoes multiple changes during the cell cycle . The predominant, functional conformation is characterized by a low turnover rate (frozen state), during which the septins are phosphorylated . Structural changes require a destabilization of the septin cortex (fluid state) induced by dephosphorylation prior to bud emergence, ring splitting and cell separation. [ 3 ]
The composition of the septin cortex does not only vary throughout the cell cycle but also along the mother-bud axis. This polarity of the septin network allows concentration of some proteins primarily to the mother side of the neck, some to the center and others to the bud site.
The septins act as a scaffold, recruiting many proteins . These protein complexes are involved in cytokinesis , chitin deposition, cell polarity, spore formation, in the morphogenesis checkpoint, spindle alignment checkpoint and bud site selection.
Budding yeast cytokinesis is driven by two septin-dependent, redundant processes: recruitment and contraction of the actomyosin ring, and formation of the septum by vesicle fusion with the plasma membrane . In contrast to septin mutants , disruption of a single pathway leads only to a delay in cytokinesis , not complete failure of cell division . Hence, the septins are predicted to act at the most upstream level of cytokinesis .
After the isotropic - apical switch in budding yeast , cortical components, supposedly of the exocyst and polarisome , are delocalized from the apical pole to the entire plasma membrane of the bud, but not the mother cell. The septin ring at the neck serves as a cortical barrier that prevents membrane diffusion of these factors between the two compartments. This asymmetric distribution is abolished in septin mutants .
Some conditional septin mutants do not form buds at their normal axial location. Moreover, the typical localization of some bud-site-selection factors in a double ring at the neck is lost or disturbed in these mutants . This indicates that the septins may serve as anchoring site for such factors in axially budding cells.
Since their discovery in S. cerevisiae , septin homologues have been found in other eukaryotic species, including filamentous fungi . Septins in filamentous fungi display a variety of different shapes within single cells , where they control aspects of filamentous morphology . [ 13 ] [ 14 ]
The genome of C. albicans encodes homologues to all S. cerevisiae septins. Without the Cdc3 and Cdc12 genes, Candida albicans cannot proliferate; other septins affect morphology and chitin deposition but are not essential. Candida albicans can display different morphologies of vegetative growth, which determine the appearance of septin structures. Newly forming hyphae form a septin ring at the base, double rings form at sites of hyphal septation, and a septin cap forms at hyphal tips. Elongated septin filaments encircle the spherical chlamydospores . Double rings of septins at the septation site also show growth polarity, with the ring nearer the growing tip disassembling while the basal ring remains intact. [ 13 ]
Five septins are found in A. nidulans (AnAspAp, AnAspBp, AnAspCp, AnAspDp, AnAspEp). AnAspBp forms single rings at septation sites that eventually split into double rings. Additionally, AnAspBp forms a ring at sites of branch emergence which broadens into a band as the branch grows. As in C. albicans , double rings reflect the polarity of the hypha . In Aspergillus nidulans , polarity is conveyed by disassembly of the more basal ring (the ring further from the hyphal growth tip), leaving the apical ring intact, potentially as a growth guidance cue. [ 2 ] [ 13 ]
The ascomycete A. gossypii possesses homologues to all S. cerevisiae septins, with one being duplicated ( AgCDC3, AgCDC10, AgCDC11A, AgCDC11B, AgCDC12, AgSEP7 ). In vivo studies of AgSep7p- GFP have revealed that septins assemble into discontinuous hyphal rings close to growing tips and sites of branch formation, [ 2 ] and into asymmetric structures at the base of branching points. Rings are made of filaments which are long and diffuse close to growing tips and short and compact further away from the tip. During septum formation, the septin ring splits into two to form a double ring. Agcdc3Δ, Agcdc10Δ and Agcdc12Δ deletion mutants display aberrant morphology and are defective for actin -ring formation, chitin -ring formation, and sporulation . Due to the lack of septa , septin deletion mutants are highly sensitive, and damage of a single hypha can result in complete lysis of a young mycelium .
In contrast to septins in yeast , and unlike other cytoskeletal components of animals, animal septins do not form one continuous network in cells but several dispersed networks in the cytoplasm and at the cell cortex . These are integrated with actin bundles and microtubules . For example, the actin bundling protein anillin is required for correct spatial control of septin organization. [ 5 ] In the sperm cells of mammals , septins form a stable ring called the annulus in the tail. In mice (and potentially in humans, too), defective annulus formation leads to male infertility. [ 4 ] [ 5 ]
In humans, septins are involved in cytokinesis , cilium formation and neurogenesis through their capability to recruit other proteins or serve as a diffusion barrier. There are 13 different human genes coding for septins. The septin proteins produced by these genes are grouped into four subfamilies, each named after its founding member: (i) SEPT2 ( SEPT1 , SEPT4 , SEPT5 ), (ii) SEPT3 ( SEPT9 , SEPT12 ), (iii) SEPT6 ( SEPT8 , SEPT10 , SEPT11 , SEPT14 ), and (iv) SEPT7 . Septin protein complexes assemble into either hetero- hexamers (monomers from three different subfamilies, each present in two copies; 3 x 2 = 6) or hetero- octamers (monomers from four different subfamilies, each present in two copies; 4 x 2 = 8). These hetero-oligomers in turn form higher-order structures such as filaments and rings. [ 4 ] [ 5 ] [ 1 ]
Septins form cage-like structures around bacterial pathogens , immobilizing harmful microbes and preventing them from invading healthy cells. This cellular defence system could potentially be exploited to create therapies for dysentery and other illnesses . For example, Shigella is a bacterium that causes lethal diarrhoea in humans. To propagate from cell to cell, Shigella bacteria develop actin - polymer 'tails', which propel the microbes and allow them to gain entry into neighbouring host cells. As part of the immune response, human cells produce a cell-signalling protein called TNF-α, which triggers thick bundles of septin filaments to encircle the microbes within the infected host cell. [ 15 ] Microbes that become trapped in these septin cages are broken down by autophagy . [ 16 ] Disruptions in septins and mutations in the genes that code for them could be involved in causing leukaemia , colon cancer and neurodegenerative conditions such as Parkinson's disease and Alzheimer's disease . Potential therapies for these, as well as for bacterial conditions such as dysentery caused by Shigella , might bolster the body's immune system with drugs that mimic the behaviour of TNF-α and allow the septin cages to proliferate. [ 7 ]
In the nematode worm Caenorhabditis elegans there are two genes coding for septins, and septin complexes contain the two different septins in a tetrameric UNC59-UNC61-UNC61-UNC59 complex. Septins in C. elegans concentrate at the cleavage furrow and the spindle midbody during cell division . Septins are also involved in cell migration and axon guidance in C. elegans . [ 2 ]
The septin localized in the mitochondria is called mitochondrial septin (M-septin). It was identified as a CRMP /CRAM-interacting protein in the developing rat brain. [ 17 ]
The septins were discovered in 1970 by Leland H. Hartwell and colleagues in a screen for temperature-sensitive mutants affecting cell division (cdc mutants) in yeast ( Saccharomyces cerevisiae ). The screen revealed four mutants that prevented cytokinesis at the restrictive temperature. The corresponding genes represent the four original septins, ScCDC3, ScCDC10, ScCDC11, and ScCDC12 . [ 3 ] [ 4 ] Despite disrupted cytokinesis, the cells continued budding , DNA synthesis , and nuclear division , which resulted in large multinucleate cells with multiple, elongated buds. In 1976, analysis of electron micrographs revealed ~20 evenly spaced striations of 10-nm filaments around the mother-bud neck in wild-type but not in septin-mutant cells. [ 3 ] [ 4 ] [ 13 ] Immunofluorescence studies revealed that the septin proteins colocalize into a septin ring at the neck. [ 4 ] [ 13 ] The localization of all four septins is disrupted in conditional Sccdc3 and Sccdc12 mutants, indicating interdependence of the septin proteins. Strong support for this finding was provided by biochemical studies: the four original septins co-purified on affinity columns , together with a fifth septin protein, encoded by ScSEP7 or ScSHS1 . Purified septins from budding yeast, Drosophila , Xenopus , and mammalian cells are able to self-associate in vitro to form filaments. [ 13 ] How the septins interact in vitro to form hetero-oligomers that assemble into filaments was studied in detail in S. cerevisiae .
Micrographs of purified filaments raised the possibility that the septins are organized in parallel to the mother-bud axis. The 10-nm striations seen on electron micrographs may be the result of lateral interaction between the filaments. Mutant strains lacking factors important for septin organization support this view. Instead of continuous rings, the septins form bars oriented along the mother-bud axis in deletion mutants of ScGIN4, ScNAP1 and ScCLA4 .
Source: https://en.wikipedia.org/wiki/Septin
In biology , a septum ( Latin for something that encloses ; pl. septa ) is a wall, dividing a cavity or structure into smaller ones. A cavity or structure divided in this way may be referred to as septate .
Histological septa are seen throughout most tissues of the body, particularly where they are needed to stiffen soft cellular tissue, and they also provide planes of ingress for small blood vessels. [ citation needed ] Because the dense collagen fibres of a septum usually extend out into the softer adjacent tissues, microscopic fibrous septa are less clearly defined than the macroscopic types of septa listed above. [ 13 ] In rare instances, a septum is a cross-wall that divides a structure into smaller parts. [ 14 ]
The septum (cell biology) is the boundary formed between dividing cells in the course of cell division . [ 15 ]
A coral septum is one of the radial calcareous plates in the corallites of a coral . [ 18 ]
Annelids have septa that divide their coelom into segmented chambers. [ 19 ]
Many shelled organisms have septa subdividing their shell chamber, including rhizopods , cephalopods and gastropods , the latter seemingly serving as a defence against shell-boring predators. [ 20 ] [ 21 ] | https://en.wikipedia.org/wiki/Septum |
The septum transversum is a thick mass of cranial mesenchyme , formed in the embryo , that gives rise to parts of the thoracic diaphragm and the ventral mesentery of the foregut in the developed human being and other mammals.
The septum transversum originally arises as the most cranial part of the mesenchyme on day 22. [ 1 ] During craniocaudal folding, it assumes a position cranial to the developing heart at the level of the cervical vertebrae. [ 1 ] During subsequent weeks the dorsal end of the embryo grows much faster than its ventral counterpart resulting in an apparent descent of the ventrally located septum transversum. At week 8, it can be found at the level of the thoracic vertebrae. [ 1 ] [ 2 ]
After successful craniocaudal folding the septum transversum picks up innervation from the adjacent ventral rami of spinal nerves C3, C4 and C5, thus forming the precursor of the phrenic nerve . During the descent of the septum, the phrenic nerve is carried along and assumes its descending pathway.
During embryonic development of the thoracic diaphragm , myoblast cells from the septum invade the other components of the diaphragm. They thus give rise to the motor and sensory innervation of the muscular diaphragm by the phrenic nerve .
The cranial part of the septum transversum gives rise to the central tendon of the diaphragm, [ 1 ] and is the origin of the myoblasts that invade the pleuroperitoneal folds resulting in the formation of the muscular diaphragm. [ 3 ]
The caudal part of the septum transversum is invaded by the hepatic diverticulum which divides within it to form the liver and thus gives rise to the ventral mesentery of the foregut, which in turn is the precursor of the lesser omentum , the visceral peritoneum of the liver and the falciform ligament .
Though not derived from the septum transversum, development of the liver is highly dependent upon signals originating there. Bone morphogenetic protein 2 (BMP-2), BMP-4 and BMP-7 produced by the septum transversum combine with fibroblast growth factor (FGF) signals from the cardiac mesoderm to induce part of the foregut to differentiate towards a hepatic fate. [ 4 ]
A sequence-controlled polymer is a macromolecule , in which the sequence of monomers is controlled to some degree. [ 1 ] [ 2 ] This control can be, but is not necessarily, absolute. In other words, a sequence-controlled polymer can be uniform (its dispersity Ð is equal to 1) or non-uniform (Ð>1). For example, an alternating copolymer synthesized by radical polymerization is a sequence-controlled polymer, even if it is also a non-uniform polymer, in which chains have different chain-lengths and slightly different compositions. [ 2 ] A biopolymer (for example a protein ) with a perfectly-defined primary structure is also a sequence-controlled polymer. However, in the case of uniform macromolecules, the term sequence-defined polymer can also be used.
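The distinction between uniform (Ð = 1) and non-uniform (Ð > 1) chains can be made concrete by computing the dispersity from a chain-mass distribution, Ð = Mw/Mn. A minimal sketch; the chain masses below are invented for illustration:

```python
# Dispersity D = Mw / Mn, computed from a list of individual chain masses.
# Mn (number average) = sum(M_i) / N
# Mw (weight average) = sum(M_i^2) / sum(M_i)

def dispersity(masses):
    """Return (Mn, Mw, D) for a list of individual chain masses."""
    n = len(masses)
    mn = sum(masses) / n                            # number-average molar mass
    mw = sum(m * m for m in masses) / sum(masses)   # weight-average molar mass
    return mn, mw, mw / mn

# A uniform sample: every chain identical -> D = 1.0
print(dispersity([1000.0] * 5))

# A non-uniform sample: mixed chain lengths -> D > 1
print(dispersity([800.0, 1000.0, 1200.0]))
```

For the uniform sample the two averages coincide exactly, giving Ð = 1; any spread in chain length makes Mw exceed Mn.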
In comparison with traditional polymers , the composition of sequence-controlled polymers can be precisely defined via chemical synthetic methods, such as multicomponent reactions, click reactions etc. Such tunable polymerization endows sequence-controlled polymers with particular properties, and thereby applications based on sequence-controlled polymers (e.g. information storage, [ 3 ] biomaterials , [ 3 ] nanomaterials [ 4 ] etc.) have been developed.
In nature, DNA , RNA , proteins and other macromolecules can also be recognized as sequence-controlled polymers for their well-ordered structural skeletons. DNA, based on A-T and C-G base pairs, is formed in well-aligned sequences. Through precise sequences of DNA, 20 amino acids are able to generate sequential peptide chains with three-dimensional structures through the transcription and translation processes. These ordered sequences of different constituents endow organisms with complicated and diverse functions.
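The template flow described above (DNA → mRNA → peptide) can be illustrated in a few lines. This is a deliberately minimal sketch: the codon table below contains only the four codons used in the example, not the full 64-entry genetic code.

```python
# Minimal sketch of transcription and translation.
# CODON_TABLE covers only the codons used here; a real implementation
# would use the complete genetic code table.

CODON_TABLE = {  # mRNA codon -> one-letter amino acid ('*' = stop)
    "AUG": "M", "UUU": "F", "GGC": "G", "UAA": "*",
}

def transcribe(dna):
    """DNA coding strand -> mRNA (T replaced by U)."""
    return dna.replace("T", "U")

def translate(mrna):
    """Read codons in frame until a stop codon is reached."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":
            break
        peptide.append(aa)
    return "".join(peptide)

mrna = transcribe("ATGTTTGGCTAA")
print(translate(mrna))  # -> MFG
```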
Traditional polymers usually consist of one repeating unit or several repeating units, arranged in random sequences. Sequence-controlled polymers are composed of different repeating units, which are arranged in an ordered manner. In order to control the sequence, various kinds of synthetic methodologies have been developed.
DNA, RNA and proteins are the most common sequence-controlled polymers in living creatures. Inspired by them, polymerization methods that use DNA or RNA as templates to control the sequence of a polymer have been developed. At first, taking DNA or RNA as templates, scientists developed a series of peptide nucleic acid (PNA)-based polymers, without using DNA polymerases . [ 6 ] [ 7 ] But this method is limited in polymerization scale and yield. [ 1 ] After that, the polymerase chain reaction (PCR) was developed, which is currently still the most extensively used sequence-regulation method. [ 8 ] By employing enzymes, the yields and scales are greatly increased, but the specificity of enzymes towards natural peptides limits this technique to a certain degree. Nowadays, more attention is paid to the utilization of ribosomes to directly mimic the transcription and translation process. [ 9 ] This technology, called protein engineering , is considered the most promising biological polymerization method for the synthesis of sequence-controlled polymers.
Other than biological polymerization methods, scientists have also developed numerous chemical synthetic methods for sequence-controlled polymers. Compared with biological polymerization, chemical polymerization can provide better diversity but most of the chemical methods cannot offer the efficiency and specificity of biological methods. [ 1 ]
One of the chemical polymerization methods is solid-phase synthesis, which can be used to synthesize peptides consisting of natural and non-natural amino acids. In this method, the monomers are attached to the polymer chain via amidation between a carboxyl group and an amino group. For the purpose of sequence control, the amino groups are usually protected by the 9-fluorenylmethyloxycarbonyl group ( Fmoc ) or the t-butyloxycarbonyl group (Boc), [ 10 ] which can be removed under basic and acidic conditions respectively to allow the next round of chain elongation.
Radical polymerization is one of the most commonly used polymerization methods. About 50% of commercially available polymers are synthesized via radical polymerization. [ 11 ] However, this method has an apparent disadvantage: sequences and polymeric features cannot be well modulated. To overcome these constraints, scientists optimized the employed protocols. The first reported example was the time-controlled sequential addition of highly-reactive N-substituted maleimides in the atom transfer radical polymerization of styrene , which led to programmed sequences of functional monomers. [ 12 ] The development of single-molecule addition into atom-transfer radical polymerization (ATRP), which enhances the sequence control of radical polymerization, was also reported. [ 13 ] Other solutions include the use of intermediate purification steps to isolate the desired oligomer sequence in between subsequent reversible addition−fragmentation chain-transfer polymerizations (RAFT polymerizations). Both flash column chromatography [ 14 ] and recycling size exclusion chromatography [ 15 ] have been proven successful in this regard. RAFT single unit monomer insertion (SUMI) has recently been developed as an emerging technology for precise control of the monomer sequence. [ 16 ]
Because of the intrinsic shortcomings of radical polymerization for sequence-controlled polymers, non-radical polymerizations have also been developed. Among those non-radical methods, azide-alkyne cycloaddition (also known as the click reaction), [ 18 ] olefin metathesis [ 19 ] and others are utilized to construct sequence-controlled polymers. Relying on these specific chemical reactions, monomers are accurately added to the polymer chain and a well-ordered chain is accomplished stepwise. Meanwhile, by applying multiple chemical reactions, chemists have also developed multi-component reactions [ 20 ] to accelerate the construction of polymer skeletons and to enhance variety. Beyond the aforementioned, one research group developed a molecular machine that successfully achieved sequence-controlled polymerization of oligopeptides . [ 21 ]
The most important characteristic of sequence-controlled polymers is the controllable sequence of the polymer backbone. Nonetheless, realizing precise sequence control, and regulating sequences in longer polymer backbones, remain the most urgent issues to be addressed in the field of sequence-controlled polymers. Great efforts have been made to develop and optimize methods that improve the sequence control of existing synthetic methods, and to devise brand-new methods with better synthetic efficiency and sequence control.
One of the most significant advantages of sequence-controlled biosynthesis over chemical synthetic methods is that biomolecules (including DNA and RNA) can initiate their polymerization using highly programmed templates. Hence, biosynthetic methods, like PCR , are still considered among the most cogent ways to develop sequence-controlled polymers.
Modulating the reactivity between the monomer and the growing polymer chain is another approach to enhance sequence control. [ 22 ] The rationale for this method is that the monomer is first activated by one catalyst into a dormant species, which can then participate in polymerization once a second catalyst is introduced. An example is the use of HI as the first catalyst and ZnI 2 as the second catalyst to achieve sequence-controlled polymerization of vinyl ethers and styrene derivatives. [ 23 ]
In this approach, a recognition site on the polymer is used to non-covalently anchor the monomer to the polymer chain, after which the monomer undergoes chemical insertion into the polymeric backbone. One successful example demonstrates that methacrylic acid (the monomer) can be radically incorporated into a backbone featuring a recognizable cationic site (a protonated primary amine pendant). [ 24 ] Driven by this site-specific reaction, sequence-controlled polymerization can be achieved by using a template adorned with different recognizable pendants.
The most distinguishable feature of sequence-controlled polymers is the well-ordered chains composed of different repeating units. [ 25 ] By encoding the repeating units, the correspondingly synthesized sequence-controlled polymer can be used for data storage. [ 26 ] When the monomers are modified with bioactive moieties, the obtained sequence-controlled polymer can be used to treat diseases. The property of sequence control makes sequence-controlled polymers an ideal platform to install various kinds of pendants (such as drugs or catalysts ), whereby diverse functions and applications can be realized.
Sequence-defined polymer (Syn. sequence-specific polymer, sequence-ordered polymer) is a uniform macromolecule with an exact chain-length and a perfectly defined sequence of monomers . [ 1 ] [ 2 ] In other words, each monomer unit is at a defined position in the chain e.g. peptides , proteins , oligonucleotides . [ 3 ] Sequence-defined polymers constitute therefore a subclass of the field of sequence-controlled polymers . [ 1 ] [ 4 ]
| https://en.wikipedia.org/wiki/Sequence-defined_polymer |
A sequence related amplified polymorphism ( SRAP ) is a molecular technique , developed by G. Li and C. F. Quiros in 2001, [ 1 ] for detecting genetic variation in the open reading frames (ORFs) of genomes of plants and related organisms . [ 2 ]
| https://en.wikipedia.org/wiki/Sequence-related_amplified_polymorphism |
SequenceBase , a privately held company, [ 1 ] [ 2 ] is an international patent sequence information provider headquartered in Edison, NJ, USA.
SequenceBase develops and markets the SequenceBase Research Portal [ 3 ] [ 4 ] [ 5 ] [ 6 ] to the biotechnology , legal, pharmaceutical, scientific, technical and academic bioinformatics communities.
Clarivate Analytics acquired SequenceBase on 9 September 2019. [ 7 ]
USGENE provides searchable access to all available peptide and nucleotide sequences from the published applications and issued patents of the United States Patent and Trademark Office (USPTO) . [ 8 ] USGENE can be searched directly via the SequenceBase Research Portal [ 9 ] or via STN International [ 10 ] [ 11 ] by FIZ Karlsruhe . The SequenceBase Research Portal offers BLAST+ as a sequence searching method . | https://en.wikipedia.org/wiki/SequenceBase |
A sequence in biology is the one-dimensional ordering of monomers , covalently linked within a biopolymer ; it is also referred to as the primary structure of a biological macromolecule . While it can refer to many different molecules, the term sequence is most often used to refer to a DNA sequence [ 1 ] or a protein sequence . [ 2 ]
| https://en.wikipedia.org/wiki/Sequence_(biology) |
The methods for sequence analysis of synthetic polymers differ from the sequence analysis of biopolymers (e. g. DNA or proteins ). Synthetic polymers are produced by chain-growth or step-growth polymerization and show thereby polydispersity , whereas biopolymers are synthesized by complex template-based mechanisms and are sequence-defined and monodisperse . Synthetic polymers are a mixture of macromolecules of different length and sequence and are analysed via statistical measures (e. g. the degree of polymerization, comonomer composition or dyad and triad fractions). [ 1 ]
Nuclear magnetic resonance (NMR) spectroscopy is known as the most widely applied and “one of the most powerful techniques” for the sequence analysis of synthetic copolymers. [ 1 ] [ 2 ] NMR spectroscopy allows determination of the relative abundance of comonomer sequences at the level of dyads and in cases of small repeat units even triads or more. It also allows the detection and quantification of chain defects and chain end groups , cyclic oligomers and by-products. [ 2 ] However, limitations of NMR spectroscopy are that it cannot, so far, provide information about the sequence distribution along the chain, like gradients, clusters or a long-range order. [ 1 ]
Monitoring the relative abundance of comonomer sequences is a common technique and is used, for example, to observe the progress of transesterification reactions between polyethylene terephthalate (PET) and polyethylene naphthalate (PEN) in their blends.
During such a transesterification reaction, three resonances representing four diads can be distinguished via 1 H NMR spectroscopy by different chemical shifts of the oxyethylene units: The diads -terephthalate-oxyethylene-terephthalate- (TET) and -naphthalate-oxyethylene-naphthalate- (NEN), which are also present in the homopolymers polyethylene naphthalate and polyethylene terephthalate, as well as the (indistinguishable) diads -terephthalate-oxyethylene-naphthalate- (TEN) and -naphthalate-oxyethylene-terephthalate- (NET), which are exclusively present in the copolymer. In the spectrum of a 1:1 physical PET/PEN mixture, only the resonances corresponding to the diads TET and NEN are present at 4.90 and 5.00 ppm, respectively. Once a transesterification reaction occurs, a new resonance at 4.95 ppm emerges that increases in intensity with the reaction time, corresponding to the TEN / NET sequences. [ 2 ]
The example of polyethylene naphthalate and polyethylene terephthalate is relatively simple, as only the aromatic parts of the polymers differ (naphthalate vs. terephthalate). In a blend of polyethylene naphthalate and polytrimethylene terephthalate, six resonances can already be distinguished, since both oxyethylene and oxypropylene each give rise to three resonances. [ 3 ] The sequence patterns can become even more complex when triads can be distinguished spectroscopically. [ 2 ] The extractable information is limited by the difference in chemical shift and the resonance width. In addition to 1 H NMR spectroscopy, 13 C NMR spectroscopy is also a common method for the sequencing shown above, which is characterized in particular by a very narrow resonance width.
Deconvolution and assignment of these triad-based resonances allows a quantitative determination of the degree of randomness and the average block length via integration of the distinguishable resonances. In a 1:1 mixture of two linear two-component 1:1 polycondensates (A 1 B 1 ) n and (A 2 B 2 ) n (with molecular weight high enough to neglect chain ends), the following two equations are valid:
[Aᵢ] = [Bᵢ], wherein i = 1, 2 (1)
[A₁B₂] = [A₂B₁] (2)
Equation 1 states that the molar ratio of all four repeat units is identical and equation 2 states that both types of copolymer are of identical concentration. In this case, the degree of randomness χ is calculated as given by equation 3:
χ = [AᵢBⱼ / (A₁A₂)], wherein i, j = 1, 2 (3)
At the beginning of a transreaction process (e. g. transesterification or transamidation), the degree of randomness χ ≈ 0, as the system comprises a physical mixture of homopolymers or block copolymers. During the transreaction process χ increases, up to χ = 1 for a fully random copolymer. If χ > 1, it indicates a tendency of the monomers to form alternating structures, up to χ = 2 for a completely alternating copolymer. [ 4 ] The degree of randomness χ thereby gives statistical information about the polymer sequence. The calculation can be modified for three-component [ 5 ] and four-component [ 6 ] polycondensates.
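As an illustration, the degree of randomness for the PET/PEN case can be computed from integrated diad intensities. The sketch below assumes a 1:1 comonomer ratio (as in equations 1–3 above), in which case χ reduces to twice the heterodiad fraction; the intensity values are invented, not real NMR data.

```python
# Degree of randomness from integrated diad resonances of a 1:1 system.
# Intensities are invented illustrative numbers, not measured NMR data.

def degree_of_randomness(i_tet, i_nen, i_hetero):
    """chi for a 1:1 copolymer, from diad intensities.

    i_hetero is the combined TEN + NET intensity. With both comonomer
    mole fractions equal to 0.5, chi is the heterodiad fraction divided
    by the value expected for a random copolymer (2 * 0.5 * 0.5 = 0.5):
    0 for a physical blend, 1 for fully random, 2 for alternating.
    """
    total = i_tet + i_nen + i_hetero
    x_hetero = i_hetero / total
    return x_hetero / (2 * 0.5 * 0.5)

print(degree_of_randomness(1.0, 1.0, 0.0))  # physical blend   -> 0.0
print(degree_of_randomness(1.0, 1.0, 2.0))  # random copolymer -> 1.0
print(degree_of_randomness(0.0, 0.0, 1.0))  # alternating      -> 2.0
```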
NMR spectroscopy is used in industrially relevant systems to study the sequence distribution of copolymers or the occurrence of transesterification in polyester blends. A change in sequence distribution can affect the crystallinity , and transesterification can affect the compatibility of two otherwise incompatible polyesters. Depending on their degree of randomness, copolyesters can show different thermal transitions and behaviours. [ 7 ]
Other options besides traditional NMR spectroscopy for sequence analysis are listed here; [ 8 ] these include Kerr-effect for characterization of polymer microstructures, MALDI-TOF mass spectrometry , depolymerization (controlled chemical degradation of macromolecules) via chain-end depolymerization (i.e., unzipping) and nanopore analysis (most of such reported studies, however, have focused on poly(ethylene glycol) , PEG).
This article incorporates text by Marcus Knappert available under the CC BY-SA 3.0 license. | https://en.wikipedia.org/wiki/Sequence_analysis_of_synthetic_polymers |
In bioinformatics , sequence assembly refers to aligning and merging fragments from a longer DNA sequence in order to reconstruct the original sequence. [ 1 ] This is needed as DNA sequencing technology might not be able to 'read' whole genomes in one go, but rather reads small pieces of between 20 and 30,000 bases, depending on the technology used. [ 1 ] Typically, the short fragments (reads) result from shotgun sequencing genomic DNA, or gene transcripts ( ESTs ). [ 1 ]
The problem of sequence assembly can be compared to taking many copies of a book, passing each of them through a shredder with a different cutter, and piecing the text of the book back together just by looking at the shredded pieces. Besides the obvious difficulty of this task, there are some extra practical issues: the original may have many repeated paragraphs, and some shreds may be modified during shredding to have typos. Excerpts from another book may also be added in, and some shreds may be completely unrecognizable.
There are three approaches to assembling sequencing data:
Reference-guided assembly is a combination of the other types. This type is applied to long reads to mimic the advantages of short reads (i.e. call quality). The logic behind it is to group the reads by smaller windows within the reference. Reads in each group are then reduced in size using the k-mer approach to select the highest-quality and most probable contiguous sequences (contigs). Contigs are then joined together to create a scaffold. The final consensus is made by closing any gaps in the scaffold.
The first sequence assemblers began to appear in the late 1980s and early 1990s as variants of simpler sequence alignment programs to piece together vast quantities of fragments generated by automated sequencing instruments called DNA sequencers . [ 2 ] As the sequenced organisms grew in size and complexity (from small viruses over plasmids to bacteria and finally eukaryotes ), the assembly programs used in these genome projects needed increasingly sophisticated strategies to handle:
Faced with the challenge of assembling the first larger eukaryotic genomes—the fruit fly Drosophila melanogaster in 2000 and the human genome just a year later—scientists developed assemblers like Celera Assembler [ 4 ] and Arachne [ 5 ] able to handle genomes of 130 million (e.g., the fruit fly D. melanogaster ) to 3 billion (e.g., the human genome) base pairs. Subsequent to these efforts, several other groups, mostly at the major genome sequencing centers, built large-scale assemblers, and an open source effort known as AMOS [ 6 ] was launched to bring together all the innovations in genome assembly technology under the open source framework.
Expressed sequence tag or EST assembly was an early strategy, dating from the mid-1990s to the mid-2000s, to assemble individual genes rather than whole genomes. [ 7 ] The problem differs from genome assembly in several ways. The input sequences for EST assembly are fragments of the transcribed mRNA of a cell and represent only a subset of the whole genome. [ 7 ] A number of algorithmical problems differ between genome and EST assembly. For instance, genomes often have large amounts of repetitive sequences, concentrated in the intergenic regions. Transcribed genes contain many fewer repeats, making assembly somewhat easier. On the other hand, some genes are expressed (transcribed) in very high numbers (e.g., housekeeping genes ), which means that unlike whole-genome shotgun sequencing, the reads are not uniformly sampled across the genome.
EST assembly is made much more complicated by features like (cis-) alternative splicing , trans-splicing , single-nucleotide polymorphism , and post-transcriptional modification . Beginning in 2008 when RNA-Seq was invented, EST sequencing was replaced by this far more efficient technology, described under de novo transcriptome assembly .
In terms of complexity and time requirements, de-novo assemblies are orders of magnitude slower and more memory intensive than mapping assemblies. This is mostly due to the fact that the assembly algorithm needs to compare every read with every other read (an operation that has a naive time complexity of O( n 2 )). Current de-novo genome assemblers may use different types of graph-based algorithms, such as the: [ 8 ]
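The all-against-all comparison behind the O(n²) cost can be made tangible with a toy greedy assembler that repeatedly merges the pair of reads with the longest exact suffix/prefix overlap. This is only a sketch of the idea (real assemblers use indexed overlap or de Bruijn graphs and tolerate sequencing errors); the reads are invented.

```python
# Toy greedy overlap assembler. Each round compares every read against
# every other read (the O(n^2) step) and merges the best-overlapping pair.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a matching a prefix of b."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):          # all-vs-all comparison
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if i is None:      # no overlaps left: just concatenate remainder
            return "".join(reads)
        merged = reads[i] + reads[j][k:]
        reads = [r for n, r in enumerate(reads) if n not in (i, j)]
        reads.append(merged)
    return reads[0]

print(greedy_assemble(["ATTAGACC", "GACCTGCC", "TGCCAGTA"]))
# -> ATTAGACCTGCCAGTA
```

The greedy strategy is fast to state but not optimal: with repeats longer than the overlap length it can merge reads in the wrong order, which is exactly why graph-based formulations are preferred in practice.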
Referring to the comparison drawn to shredded books in the introduction: while for mapping assemblies one would have a very similar book as a template (perhaps with the names of the main characters and a few locations changed), de-novo assemblies present a more daunting challenge in that one would not know beforehand whether this would become a science book, a novel, a catalogue, or even several books. Also, every shred would be compared with every other shred.
Handling repeats in de-novo assembly requires the construction of a graph representing neighboring repeats. Such information can be derived from reading a long fragment covering the repeats in full or only its two ends . On the other hand, in a mapping assembly, parts with multiple or no matches are usually left for another assembling technique to look into. [ 3 ]
The complexity of sequence assembly is driven by two major factors: the number of fragments and their lengths. While more and longer fragments allow better identification of sequence overlaps, they also pose problems as the underlying algorithms show quadratic or even exponential complexity behaviour to both number of fragments and their length. And while shorter sequences are faster to align, they also complicate the layout phase of an assembly as shorter reads are more difficult to use with repeats or near identical repeats.
In the earliest days of DNA sequencing, scientists could only gain a few sequences of short length (some dozen bases) after weeks of work in laboratories. Hence, these sequences could be aligned in a few minutes by hand.
In 1975, the dideoxy termination method (AKA Sanger sequencing ) was invented and until shortly after 2000, the technology was improved up to a point where fully automated machines could churn out sequences in a highly parallelised mode 24 hours a day. Large genome centers around the world housed complete farms of these sequencing machines, which in turn led to the necessity of assemblers to be optimised for sequences from whole-genome shotgun sequencing projects where the reads
With the Sanger technology, bacterial projects with 20,000 to 200,000 reads could easily be assembled on one computer. Larger projects, like the human genome with approximately 35 million reads, needed large computing farms and distributed computing.
By 2004 / 2005, pyrosequencing had been brought to commercial viability by 454 Life Sciences . [ 9 ] This new sequencing method generated reads much shorter than those of Sanger sequencing: initially about 100 bases, now 400-500 bases. [ 9 ] Its much higher throughput and lower cost (compared to Sanger sequencing) pushed the adoption of this technology by genome centers, which in turn pushed development of sequence assemblers that could efficiently handle the read sets. The sheer amount of data coupled with technology-specific error patterns in the reads delayed development of assemblers; at the beginning in 2004 only the Newbler assembler from 454 was available. Released in mid-2007, the hybrid version of the MIRA assembler by Chevreux et al. [ 10 ] was the first freely available assembler that could assemble 454 reads as well as mixtures of 454 reads and Sanger reads. Assembling sequences from different sequencing technologies was subsequently coined hybrid assembly . [ 10 ]
From 2006, the Illumina (previously Solexa) technology has been available and can generate about 100 million reads per run on a single sequencing machine. Compare this to the 35 million reads of the human genome project which needed several years to be produced on hundreds of sequencing machines. [ 11 ] Illumina was initially limited to a length of only 36 bases, making it less suitable for de novo assembly (such as de novo transcriptome assembly ), but newer iterations of the technology achieve read lengths above 100 bases from both ends of a 3-400bp clone. [ 11 ] Announced at the end of 2007, the SHARCGS assembler [ 12 ] by Dohm et al. was the first published assembler that was used for an assembly with Solexa reads. It was quickly followed by a number of others.
Later, new technologies like SOLiD from Applied Biosystems , Ion Torrent and SMRT were released and new technologies (e.g. Nanopore sequencing ) continue to emerge. Despite the higher error rates of these technologies they are important for assembly because their longer read length helps to address the repeat problem. [ 11 ] It is impossible to assemble through a perfect repeat that is longer than the maximum read length; however, as reads become longer the chance of a perfect repeat that large becomes small. This gives longer sequencing reads an advantage in assembling repeats even if they have low accuracy (≈85%). [ 11 ]
Most sequence assemblers have some algorithms built in for quality control, such as Phred . [ 13 ] However, such measures do not assess assembly completeness in terms of gene content. Some tools evaluate the quality of an assembly after the fact.
For instance, BUSCO (Benchmarking Universal Single-Copy Orthologs) is a measure of gene completeness in a genome, gene set, or transcriptome , using the fact that many genes are present only as single-copy genes in most genomes. [ 14 ] The initial BUSCO sets represented 3023 genes for vertebrates , 2675 for arthropods , 843 for metazoans , 1438 for fungi and 429 for eukaryotes . This table shows an example for human and fruit fly genomes: [ 14 ]
Different organisms have distinct regions of higher complexity within their genome; hence, different computational approaches are needed. Some of the commonly used algorithms are:
In general, there are three steps in assembling sequencing reads into a scaffold:
For a list of de-novo assemblers, see De novo sequence assemblers . For a list of mapping aligners, see List of sequence alignment software § Short-read sequence alignment .
Some of the common tools used in different assembly steps are listed in the following table: | https://en.wikipedia.org/wiki/Sequence_assembly |
In the field of bioinformatics , a sequence database is a type of biological database that is composed of a large collection of computerized (" digital ") nucleic acid sequences , protein sequences , or other polymer sequences stored on a computer. The UniProt database is an example of a protein sequence database. As of 2013 it contained over 40 million sequences and is growing at an exponential rate. [ 1 ] Historically, sequences were published in paper form, but as the number of sequences grew, this storage method became unsustainable.
Searching in a sequence database involves looking for similarities between a genomic/protein sequence and a query string and, finding the sequence in the database that "best" matches the target sequence (based on criteria which vary depending on the search method). The number of matches/hits is used to formulate a score that determines the similarity between the sequence query and the sequences in the sequence database. [ 2 ] The main goal is to have a good balance between the two criteria.
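A naive version of such a search can be sketched by scoring each database entry by the number of k-mers it shares with the query and ranking hits by that score. This is a toy stand-in for real heuristics such as BLAST; the database names and sequences are invented.

```python
# Naive sequence-database search: score each entry by shared k-mers with
# the query, then rank by descending score.

def kmers(seq, k=3):
    """Set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def search(query, database, k=3):
    """Return (name, score) pairs sorted by descending shared k-mer count."""
    q = kmers(query, k)
    hits = [(name, len(q & kmers(seq, k))) for name, seq in database.items()]
    return sorted(hits, key=lambda h: -h[1])

db = {  # invented example sequences
    "seq_a": "ATGGCGTACGTT",
    "seq_b": "ATGGCGTTTTTT",
    "seq_c": "CCCCCCCCCCCC",
}
print(search("ATGGCGTACG", db))
# -> [('seq_a', 8), ('seq_b', 5), ('seq_c', 0)]
```

Real tools refine this idea with substitution matrices, gapped extension of seed matches, and statistics (e.g. E-values) so that the score reflects biological similarity rather than raw substring counts.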
The need for sequence databases originated in 1950 when Frederick Sanger reported the primary structure of insulin. He won his second Nobel Prize for creating methods for sequencing nucleic acids, and his comparative approach is what sparked other protein biochemists to begin collecting amino acid sequences, marking the beginning of molecular databases. [ 3 ]
In 1965 Margaret Dayhoff and her team at the National Biomedical Research Foundation (NBRF) published "The Atlas of Protein Sequence and Structure". They put all known protein sequences in the Atlas , even unpublished material. This can be seen as the first attempt to create a molecular database. They made use of the newly computerized (1964) Medical Literature Analysis and Retrieval System (MEDLARS) at the National Institutes of Health (NIH). The team used computers to store the data but had to manually type and proofread each sequence, which had a high cost in time and money. [ 3 ]
In 1966 the team released the second edition of the Atlas, double the size of the first. It contained about 1000 sequences, which at the time was described as an information explosion. The National Biomedical Research Foundation (NBRF) was on the cutting edge of using computers for medicine and biology at this time, and Dayhoff and her team used its mainframe computers to determine amino acid sequences of protein molecules. The number of known sequences continued to grow, allowing for a deeper comparative analysis of proteins than ever before. This led to many developments, such as probabilistic models of amino acid substitution, sequence alignment, and phylogenetic trees of evolutionary relationships of proteins. [ 3 ]
The entire sequencing process became fully automated. [ 3 ]
The first nucleotide sequence database, the European Molecular Biology Laboratory (EMBL) Nucleotide Sequence Data Library (now the European Nucleotide Archive), was created. The Human Genome Project began in 1988. The project's goal was to sequence and map all the genes in a human, which required the capability to create and use a large sequence database. [ 4 ]
Today there are many sequence databases, tools for using them, and easy access to them. One of the largest is GenBank , which contains over 2 billion sequences. [ 3 ]
Records in sequence databases are deposited from a wide range of sources, from individual researchers to large genome sequencing centers. As a result, the sequences themselves, and especially the biological annotations attached to these sequences, may vary in quality. There is much redundancy, as multiple labs may submit numerous sequences that are identical, or nearly identical, to others in the databases. [ 5 ]
Many annotations of the sequences are based not on laboratory experiments, but on the results of sequence similarity searches for previously annotated sequences. Once a sequence has been annotated based on similarity to others, and itself deposited in the database, it can also become the basis for future annotations. This can lead to a transitive annotation problem because there may be several such annotation transfers by sequence similarity between a particular database record and actual wet lab experimental information. [ 6 ] Therefore, care must be taken when interpreting the annotation data from sequence databases.
Most current database search algorithms rank alignments by a score computed under a particular scoring system. [ 7 ] Because no single system fits every problem, a practical solution is to make a variety of scoring systems available so that the most suitable one can be chosen.
A search algorithm produces an ordered list of hits, but a high rank does not by itself establish biological significance. [ 8 ] | https://en.wikipedia.org/wiki/Sequence_database |
In software engineering , a sequence diagram [ 1 ] shows process interactions arranged in time sequence. This diagram depicts the processes and objects involved and the sequence of messages exchanged as needed to carry out the functionality. Sequence diagrams are typically associated with use case realizations in the 4+1 architectural view model of the system under development. Sequence diagrams are sometimes called event diagrams or event scenarios .
For a particular scenario of a use case , the diagrams show the events that external actors generate, their order, and possible inter-system events. [ 2 ] The diagram emphasizes events that cross the system boundary from actors to systems. A system sequence diagram should be done for the main success scenario of the use case , and frequent or complex alternative scenarios.
There are two kinds of sequence diagrams:
A sequence diagram shows, as parallel vertical lines ( lifelines ), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them in the order in which they occur. This allows for the graphical specification of simple runtime scenarios.
A system sequence diagram should specify and show the following:
Professionals often use system sequence diagrams during development to illustrate how certain tasks are carried out between users and the system. These tasks may be repetitive, simple, or complex. The purpose is to illustrate the use case in a visual format. Familiarity with the Unified Modeling Language (UML) is needed to construct a system sequence diagram. These models show the logic behind the actors (people who affect the system) and the system in performing the task. Reading a sequence diagram begins at the top with the actor(s) or system(s), which are located at the top of the page. Under each actor or system hangs a long dashed line, called a "lifeline". Actions are drawn as lines extending between these lifelines; where an action line meets a lifeline, it shows an interaction involving that actor or system. Messages will often appear at the top or bottom of a system sequence diagram to illustrate the action in detail. For example, a request by an actor to log in would be represented by login (username, password). After each action is performed, the response or next action is placed under the previous one. By reading down the lifelines, one can see in detail how and in what order the actions in the provided model are performed.
If the lifeline is that of an object, it demonstrates a role. Leaving the instance name blank can represent anonymous and unnamed instances.
Messages, written with horizontal arrows with the message name written above them, display interaction. Solid arrow heads represent synchronous calls, open arrow heads represent asynchronous messages , and dashed lines represent reply messages. [ 3 ] If a caller sends a synchronous message, it must wait until the message is done, such as invoking a subroutine. If a caller sends an asynchronous message, it can continue processing and need not wait for a response. Asynchronous calls are present in multithreaded applications, event-driven applications, and in message-oriented middleware .
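The synchronous/asynchronous distinction maps directly onto ordinary programming constructs. The sketch below (a hypothetical login/audit scenario, not drawn from any UML tool) uses Python's asyncio: the caller awaits the synchronous-style message before proceeding, but fires the asynchronous message as a task and keeps working:

```python
import asyncio

async def service(request):
    # stands in for the receiving lifeline; the return value is the
    # dashed reply arrow in the diagram
    await asyncio.sleep(0.01)
    return f"reply to {request}"

async def caller():
    # synchronous message: the caller blocks (awaits) until the reply arrives,
    # like invoking a subroutine
    reply = await service("login(username, password)")
    # asynchronous message: fire-and-continue; no waiting for a response
    audit = asyncio.create_task(service("audit"))
    # ... the caller keeps processing here while the audit message is in flight
    await audit  # the late reply can optionally be collected afterwards
    return reply

print(asyncio.run(caller()))  # reply to login(username, password)
```

The `await` on the first call corresponds to a solid arrowhead; creating the task without awaiting it immediately corresponds to an open arrowhead.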
Activation boxes, or method -call boxes, are opaque rectangles drawn on top of lifelines to represent that processes are being performed in response to the message (ExecutionSpecifications in UML ).
Objects calling methods on themselves use messages and add new activation boxes on top of any others to indicate a further level of processing . If an object is destroyed (removed from memory ), an X is drawn below the lifeline, and the dashed line ceases to be drawn below it. It should be the result of a message, either from the object itself, or another.
A message sent from outside the diagram can be represented by a message originating from a filled-in circle ( found message in UML) or from a border of the sequence diagram ( gate in UML).
UML has introduced significant improvements to the capabilities of sequence diagrams. Most of these improvements are based on the idea of interaction fragments [ 4 ] which represent smaller pieces of an enclosing interaction. Multiple interaction fragments are combined to create a variety of combined fragments , [ 5 ] which are then used to model interactions that include parallelism, conditional branches, and optional interactions. | https://en.wikipedia.org/wiki/Sequence_diagram |
A sequence graph , also called an alignment graph , breakpoint graph , or adjacency graph, is a bidirected graph used in comparative genomics . The structure consists of multiple graphs or genomes, with edges representing adjacencies between segments in a genome [ 1 ] and vertices representing DNA segments. Traversing a connected component of segments and adjacency edges (called a thread ) yields a sequence, which typically represents a genome or a section of a genome. The segments can be thought of as synteny blocks, with the edges dictating how to arrange these blocks in a particular genome, and the labelling of the adjacency edges representing bases that are not contained in synteny blocks.
Before constructing a sequence graph, there must be at least two genomes represented as directed graphs with edges as threads (adjacency edges) and vertices as DNA segments. The genomes should be labeled P and Q, while the sequence graph is labeled as BreakpointGraph( P, Q ). [ 2 ]
The directional vertices of Q and their edges are arranged in the order of P. Once completed, the edges of Q are reconnected to their original vertices. After all edges have been matched the vertex directions are removed and instead each vertex is labeled as v h (vertex head) and v t (vertex tail).
Similarity between genomes is reflected in the number of cycles (independent systems) within the sequence graph, denoted cycles( P, Q ). The maximum possible number of cycles is equal to the number of vertices in the sequence graph.
Upon receiving genomes P (+a +b -c) and Q (+a +b -c), [ 1 ] Q should be realigned to follow the direction edges (red) of P. The vertices should be renamed from a, b, c to a h a t , b h b t , c h c t and the edges of P and Q should be connected to their original vertices (P edges = black, Q edges = green). Remove the directional edges (red). The number of cycles in G(P, Q) is 1 while the max possible is 3.
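The cycle count for circular genomes over the same gene set can be sketched as below. This is illustrative only: the gene and extremity names are hypothetical, and linear genomes would additionally require telomere handling.

```python
def adjacencies(genome):
    """Circular genome given as a list of signed genes, e.g. ['+a', '+b', '-c'].
    Returns the adjacency edges between gene extremities (tail '_t' / head '_h')."""
    ends = []
    for gene in genome:
        sign, name = gene[0], gene[1:]
        ends.append((name + "_t", name + "_h") if sign == "+"
                    else (name + "_h", name + "_t"))
    # each adjacency joins the trailing extremity of one gene
    # to the leading extremity of the next (wrapping around)
    return [(ends[i][1], ends[(i + 1) % len(ends)][0]) for i in range(len(ends))]

def count_cycles(P, Q):
    """Number of alternating P-edge / Q-edge cycles in BreakpointGraph(P, Q)."""
    pn, qn = {}, {}
    for x, y in adjacencies(P):
        pn[x], pn[y] = y, x
    for x, y in adjacencies(Q):
        qn[x], qn[y] = y, x
    seen, cycles = set(), 0
    for start in pn:
        if start in seen:
            continue
        cycles += 1
        node, use_p = start, True
        while node not in seen:  # walk the cycle, alternating edge colours
            seen.add(node)
            node = pn[node] if use_p else qn[node]
            use_p = not use_p
    return cycles

# identical genomes give the maximum number of cycles (one per adjacency)
print(count_cycles(["+a", "+b", "+c"], ["+a", "+b", "+c"]))  # 3
# one inversion breaks two adjacencies, merging them into a single longer cycle
print(count_cycles(["+a", "+b", "+c"], ["+a", "-b", "+c"]))  # 2
```

The gap between the observed and maximum cycle counts is what rearrangement-distance arguments of the Alekseyev–Pevzner kind build on.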
Alekseyev and Pevzner use sequence graphs to create their own algorithm to study the genome rearrangement history of several mammals, as well as a way to overcome problems with current ancestral reconstruction of genomes. [ 1 ]
Sequence graphs can be used to represent multiple sequence alignments with the addition of a new kind of edge representing homology between segments. [ 3 ] For a set of genomes, one can create an acyclic breakpoint graph with a thread for each genome. For two segments ( a , b ) and ( c , d ), where a , b , c , and d represent the endpoints of the two segments, homology edges can be created from a to c and b to d , or from a to d and b to c , representing the two possible orientations of the homology. The advantage of representing a multiple sequence alignment this way is that it is possible to include inversions and other structural rearrangements that would not be allowable in a matrix representation.
If there are multiple possible paths when traversing a thread in a sequence graph, multiple sequences can be represented by the same thread. This means it is possible to create a sequence graph that represents a population of individuals with slightly different genomes - with each genome corresponding to one path through the graph. These graphs have been proposed as a replacement for the reference human genome . [ 4 ] | https://en.wikipedia.org/wiki/Sequence_graph |
Sequence homology is the biological homology between DNA , RNA , or protein sequences , defined in terms of shared ancestry in the evolutionary history of life . Two segments of DNA can have shared ancestry because of three phenomena: either a speciation event (orthologs), or a duplication event (paralogs), or else a horizontal (or lateral) gene transfer event (xenologs). [ 1 ]
Homology among DNA, RNA, or proteins is typically inferred from their nucleotide or amino acid sequence similarity. Significant similarity is strong evidence that two sequences are related by evolutionary changes from a common ancestral sequence. Alignments of multiple sequences are used to indicate which regions of each sequence are homologous.
The term "percent homology" is often used to mean "sequence similarity", that is, the percentage of identical residues ( percent identity ) or the percentage of residues conserved with similar physicochemical properties ( percent similarity ), e.g. leucine and isoleucine. Based on the definition of homology given above, this terminology is incorrect, since sequence similarity is the observation and homology is the conclusion. [ 3 ] Sequences are either homologous or not. [ 3 ] It follows that the term "percent homology" is a misnomer. [ 4 ]
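For concreteness, percent identity over a gapped pairwise alignment can be computed as in the sketch below. This is one common convention; tools differ on details such as which denominator to use (alignment columns, shorter sequence, or ungapped positions):

```python
def percent_identity(a, b):
    """Percent identity of two aligned, equal-length sequences ('-' = gap),
    counted over columns where at least one sequence has a residue."""
    assert len(a) == len(b), "sequences must already be aligned"
    matches = sum(1 for x, y in zip(a, b) if x == y and x != "-")
    columns = sum(1 for x, y in zip(a, b) if x != "-" or y != "-")
    return 100.0 * matches / columns

print(percent_identity("ACG-T", "ACGAT"))  # 4 matches over 5 columns = 80.0
```

Whatever the convention, the resulting number quantifies similarity, not homology, for the reasons given above.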
As with morphological and anatomical structures, sequence similarity might occur because of convergent evolution , or, as with shorter sequences, by chance, meaning that they are not homologous. Homologous sequence regions are also called conserved . This is not to be confused with conservation in amino acid sequences, where the amino acid at a specific position has been substituted with a different one that has functionally equivalent physicochemical properties.
Partial homology can occur where a segment of the compared sequences has a shared origin, while the rest does not. Such partial homology may result from a gene fusion event.
Homologous sequences are orthologous if they are inferred to be descended from the same ancestral sequence separated by a speciation event: when a species diverges into two separate species, the copies of a single gene in the two resulting species are said to be orthologous. Orthologs, or orthologous genes, are genes in different species that originated by vertical descent from a single gene of the last common ancestor . The term "ortholog" was coined in 1970 by the molecular evolutionist Walter Fitch . [ 5 ]
For instance, the plant Flu regulatory protein is present both in Arabidopsis (multicellular higher plant) and Chlamydomonas (single cell green algae). The Chlamydomonas version is more complex: it crosses the membrane twice rather than once, contains additional domains and undergoes alternative splicing. However, it can fully substitute the much simpler Arabidopsis protein, if transferred from algae to plant genome by means of genetic engineering . Significant sequence similarity and shared functional domains indicate that these two genes are orthologous genes, [ 6 ] inherited from the shared ancestor .
Orthology is strictly defined in terms of ancestry. Given that the exact ancestry of genes in different organisms is difficult to ascertain due to gene duplication and genome rearrangement events, the strongest evidence that two similar genes are orthologous is usually found by carrying out phylogenetic analysis of the gene lineage. Orthologs often, but not always, have the same function. [ 7 ]
Orthologous sequences provide useful information in taxonomic classification and phylogenetic studies of organisms. The pattern of genetic divergence can be used to trace the relatedness of organisms. Two organisms that are very closely related are likely to display very similar DNA sequences between two orthologs. Conversely, an organism that is further removed evolutionarily from another organism is likely to display a greater divergence in the sequence of the orthologs being studied. [ citation needed ]
Given their tremendous importance for biology and bioinformatics , orthologous genes have been organized in several specialized databases that provide tools to identify and analyze orthologous gene sequences. These resources employ approaches that can generally be classified into those that use heuristic analysis of all pairwise sequence comparisons, and those that use phylogenetic methods. Sequence comparison methods were first pioneered in the COGs database in 1997, [ 8 ] and have since been extended and automated in twelve different databases, the most advanced being AYbRAH (Analyzing Yeasts by Reconstructing Ancestry of Homologs), [ 9 ] as well as the databases listed below. Some tools predict orthologs de novo from the input protein sequences without providing a precomputed database; among these are SonicParanoid and OrthoFinder.
Tree-based phylogenetic approaches aim to distinguish speciation from gene duplication events by comparing gene trees with species trees, as implemented in databases and software tools such as:
A third category of hybrid approaches uses both heuristic and phylogenetic methods to construct clusters and determine trees, for example:
Paralogous genes are genes that are related via duplication events in the last common ancestor (LCA) of the species being compared. They result from the mutation of duplicated genes during separate speciation events. When descendants from the LCA share mutated homologs of the original duplicated genes then those genes are considered paralogs. [ 1 ]
As an example, in the LCA, one gene (gene A) may get duplicated to make a separate similar gene (gene B), those two genes will continue to get passed to subsequent generations. During speciation, one environment will favor a mutation in gene A (gene A1), producing a new species with genes A1 and B. Then in a separate speciation event, one environment will favor a mutation in gene B (gene B1) giving rise to a new species with genes A and B1. The descendants' genes A1 and B1 are paralogous to each other because they are homologs that are related via a duplication event in the last common ancestor of the two species. [ 1 ]
Additional classifications of paralogs include alloparalogs (out-paralogs) and symparalogs (in-paralogs). Alloparalogs are paralogs that evolved from gene duplications that preceded the given speciation event. In other words, alloparalogs are paralogs that evolved from duplication events that happened in the LCA of the organisms being compared. The example above is an example alloparalogy. Symparalogs are paralogs that evolved from gene duplication of paralogous genes in subsequent speciation events. From the example above, if the descendant with genes A1 and B underwent another speciation event where gene A1 duplicated, the new species would have genes B, A1a, and A1b. In this example, genes A1a and A1b are symparalogs. [ 1 ]
Paralogous genes can shape the structure of whole genomes and thus explain genome evolution to a large extent. Examples include the Homeobox ( Hox ) genes in animals. These genes not only underwent gene duplications within chromosomes but also whole genome duplications . As a result, Hox genes in most vertebrates are clustered across multiple chromosomes with the HoxA-D clusters being the best studied. [ 39 ]
Another example are the globin genes which encode myoglobin and hemoglobin and are considered to be ancient paralogs. Similarly, the four known classes of hemoglobins ( hemoglobin A , hemoglobin A2 , hemoglobin B , and hemoglobin F ) are paralogs of each other. While each of these proteins serves the same basic function of oxygen transport, they have already diverged slightly in function: fetal hemoglobin (hemoglobin F) has a higher affinity for oxygen than adult hemoglobin. Function is not always conserved, however. Human angiogenin diverged from ribonuclease , for example, and while the two paralogs remain similar in tertiary structure, their functions within the cell are now quite different. [ citation needed ]
It is often asserted that orthologs are more functionally similar than paralogs of similar divergence, but several papers have challenged this notion. [ 40 ] [ 41 ] [ 42 ]
Paralogs are often regulated differently, e.g. by having different tissue-specific expression patterns (see Hox genes). However, they can also be regulated differently on the protein level. For instance, Bacillus subtilis encodes two paralogues of glutamate dehydrogenase : GudB is constitutively transcribed whereas RocG is tightly regulated. In their active, oligomeric states, both enzymes show similar enzymatic rates. However, swaps of enzymes and promoters cause severe fitness losses, thus indicating promoter–enzyme coevolution. Characterization of the proteins shows that, compared to RocG, GudB's enzymatic activity is highly dependent on glutamate and pH. [ 43 ]
Sometimes, large regions of chromosomes share gene content similar to other chromosomal regions within the same genome. [ 44 ] They are well characterised in the human genome, where they have been used as evidence to support the 2R hypothesis . Sets of duplicated, triplicated and quadruplicated genes, with the related genes on different chromosomes, are deduced to be remnants from genome or chromosomal duplications. A set of paralogy regions is together called a paralogon . [ 45 ] Well-studied sets of paralogy regions include regions of human chromosome 2, 7, 12 and 17 containing Hox gene clusters, collagen genes, keratin genes and other duplicated genes, [ 46 ] regions of human chromosomes 4, 5, 8 and 10 containing neuropeptide receptor genes, NK class homeobox genes and many more gene families , [ 47 ] [ 48 ] [ 49 ] and parts of human chromosomes 13, 4, 5 and X containing the ParaHox genes and their neighbors. [ 50 ] The Major histocompatibility complex (MHC) on human chromosome 6 has paralogy regions on chromosomes 1, 9 and 19. [ 51 ] Much of the human genome seems to be assignable to paralogy regions. [ 52 ]
Ohnologous genes are paralogous genes that have originated by a process of whole-genome duplication . The name was first given in honour of Susumu Ohno by Ken Wolfe. [ 53 ] Ohnologues are useful for evolutionary analysis because all ohnologues in a genome have been diverging for the same length of time (since their common origin in the whole genome duplication). Ohnologues are also known to show greater association with cancers, dominant genetic disorders, and pathogenic copy number variations. [ 54 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ]
Homologs resulting from horizontal gene transfer between two organisms are termed xenologs. Xenologs can have different functions if the new environment is vastly different for the horizontally moving gene. In general, though, xenologs typically have similar function in both organisms. The term was coined by Walter Fitch. [ 5 ]
Homoeologous (also spelled homeologous) chromosomes or parts of chromosomes are those brought together following inter-species hybridization and allopolyploidization to form a hybrid genome , and whose relationship was completely homologous in an ancestral species. [ 59 ] In allopolyploids, the homologous chromosomes within each parental sub-genome should pair faithfully during meiosis , leading to disomic inheritance; however in some allopolyploids, the homoeologous chromosomes of the parental genomes may be nearly as similar to one another as the homologous chromosomes, leading to tetrasomic inheritance (four chromosomes pairing at meiosis), intergenomic recombination , and reduced fertility. [ citation needed ]
Gametology denotes the relationship between homologous genes on non-recombining, opposite sex chromosomes . The term was coined by García-Moreno and Mindell (2000). [ 60 ] Gametologs result from the origination of genetic sex determination and barriers to recombination between sex chromosomes. Examples of gametologs include CHDW and CHDZ in birds. [ 60 ] | https://en.wikipedia.org/wiki/Sequence_homology |
In molecular biology , the sequence hypothesis was first formally proposed in the review "On Protein Synthesis " by Francis Crick in 1958. [ 1 ] It states that the sequence of bases in the genetic material ( DNA or RNA ) determines the sequence of amino acids for which that segment of nucleic acid codes, and this amino acid sequence determines the three-dimensional structure into which the protein folds. The three-dimensional structure of a protein is required for a protein to be functional. This hypothesis then lays the essential link between information stored and inherited in nucleic acids to the chemical processes which enable life to exist. [ 2 ]
Or, as Crick put it in 1958:
In its simplest form it [the Sequence Hypothesis] assumes that the specificity of a piece of nucleic acid is expressed solely by the sequence of its bases , and that this sequence is a (simple) code for the amino acid sequence of a particular protein.
This hypothesis appears to be rather widely held. Its virtue is that it unites several remarkable pairs of generalisations: the central biochemical importance of proteins and the dominating role of genes , and in particular of their nucleic acid; the linearity of protein molecules (considered covalently) and the genetic linearity within the functional gene [...]; the simplicity of the composition of protein molecules and the simplicity of the nucleic acids.
This description is further amplified in the article and, in discussing how a protein folds up into its three-dimensional structure, Crick suggested that "the folding is simply a function of the order of the amino acids " in the protein. [ 1 ] : 144 | https://en.wikipedia.org/wiki/Sequence_hypothesis |
In bioinformatics , a sequence logo is a graphical representation of the sequence conservation of nucleotides (in a strand of DNA / RNA ) or amino acids (in protein sequences ). [ 1 ] A sequence logo is created from a collection of aligned sequences and depicts the consensus sequence and diversity of the sequences.
Sequence logos are frequently used to depict sequence characteristics such as protein-binding sites in DNA or functional units in proteins.
A sequence logo consists of a stack of letters at each position.
The relative sizes of the letters indicate their frequency in the sequences.
The total height of the letters depicts the information content of the position, in bits.
To create sequence logos, related DNA, RNA or protein sequences, or DNA sequences that have common conserved binding sites, are aligned so that the most conserved parts create good alignments. A sequence logo can then be created from the conserved multiple sequence alignment . The sequence logo shows how well residues are conserved at each position: the better the conservation at a position, the taller the stack of letters at that position. Different residues at the same position are scaled according to their frequency. The height of the entire stack of residues is the information content of the position, measured in bits . Sequence logos can be used to represent conserved DNA binding sites , where transcription factors bind.
The information content (y-axis) of position i is given by: [ 2 ]

R_i = log2( s ) − ( H_i + e_n )

where H_i is the uncertainty (sometimes called the Shannon entropy ) of position i :

H_i = − Σ_b f_(b,i) log2 f_(b,i)

Here, f_(b,i) is the relative frequency of base or amino acid b at position i , and e_n is the small-sample correction for an alignment of n letters. [ 2 ] [ 3 ] The height of letter b in column i is given by:

height_(b,i) = f_(b,i) × R_i

The approximation for the small-sample correction, e_n , is given by:

e_n ≈ (1 / ln 2) × ( s − 1) / (2 n )

where s is 4 for nucleotides, 20 for amino acids, and n is the number of sequences in the alignment.
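The information content and letter heights for a column can be computed directly from the relative frequencies, the entropy H_i, and the small-sample correction e_n. A sketch (function and variable names are mine):

```python
import math

def letter_heights(column, alphabet="ACGT"):
    """column: string of aligned residues at one position, e.g. 'AAAT'.
    Returns {letter: height in bits} for that position."""
    s, n = len(alphabet), len(column)
    freqs = {b: column.count(b) / n for b in alphabet}
    # Shannon uncertainty: H_i = -sum_b f_(b,i) * log2 f_(b,i)
    H = -sum(f * math.log2(f) for f in freqs.values() if f > 0)
    # small-sample correction: e_n ~= (1/ln 2) * (s - 1) / (2n)
    e_n = (s - 1) / (2 * math.log(2) * n)
    # information content in bits, clamped at zero
    R = max(math.log2(s) - (H + e_n), 0.0)
    return {b: f * R for b, f in freqs.items() if f > 0}

h = letter_heights("AAAA")
print(round(h["A"], 3))  # perfectly conserved column of 4 sequences: 1.459 bits
```

A completely uninformative column such as "ACGT" yields zero-height letters, since H_i already equals log2(s) before the correction is subtracted.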
A consensus logo is a simplified variation of a sequence logo that can be embedded in text format.
Like a sequence logo, a consensus logo is created from a collection of aligned protein or DNA/RNA sequences and conveys information about the conservation of each position of a sequence motif or sequence alignment [ 1 ] [ 4 ] . However, a consensus logo displays only conservation information, and not explicitly the frequency information of each nucleotide or amino acid at each position. Instead of a stack made of several characters, denoting the relative frequency of each character, the consensus logo depicts the degree of conservation of each position using the height of the consensus character at that position.
The main, and obvious, advantage of consensus logos over sequence logos is their ability to be embedded as text in any Rich Text Format supporting editor/viewer and, therefore, in scientific manuscripts. As described above, the consensus logo is a cross between sequence logos and consensus sequences . As a result, compared to a sequence logo, the consensus logo omits information (the relative contribution of each character to the conservation of that position in the motif/alignment). Hence, a sequence logo should be used preferentially whenever possible. That being said, the need to include graphic figures in order to display sequence logos has perpetuated the use of consensus sequences in scientific manuscripts, even though they fail to convey information on both conservation and frequency. [ 5 ] Consensus logos represent therefore an improvement over consensus sequences whenever motif/alignment information has to be constrained to text.
Hidden Markov models (HMMs) not only consider the information content of aligned positions in an alignment, but also of insertions and deletions. In an HMM sequence logo used by Pfam , three rows are added to indicate the frequencies of occupancy (presence) and insertion, as well as the expected insertion length. [ 6 ] | https://en.wikipedia.org/wiki/Sequence_logo |
In biology, a sequence motif is a nucleotide or amino-acid sequence pattern that is widespread and usually assumed to be related to biological function of the macromolecule. For example, an N -glycosylation site motif can be defined as Asn, followed by anything but Pro, followed by either Ser or Thr, followed by anything but a Pro residue .
When a sequence motif appears in the exon of a gene , it may encode the " structural motif " of a protein ; that is a stereotypical element of the overall structure of the protein. Nevertheless, motifs need not be associated with a distinctive secondary structure . " Noncoding " sequences are not translated into proteins, and nucleic acids with such motifs need not deviate from the typical shape (e.g. the "B-form" DNA double helix ).
Outside of gene exons, there exist regulatory sequence motifs and motifs within the " junk ", such as satellite DNA . Some of these are believed to affect the shape of nucleic acids [ 1 ] (see for example RNA self-splicing ), but this is only sometimes the case. For example, many DNA binding proteins that have affinity for specific DNA binding sites bind DNA in only its double-helical form. They are able to recognize motifs through contact with the double helix's major or minor groove.
Short coding motifs, which appear to lack secondary structure, include those that label proteins for delivery to particular parts of a cell , or mark them for phosphorylation .
Within a sequence or database of sequences, researchers search and find motifs using computer-based techniques of sequence analysis , such as BLAST . Such techniques belong to the discipline of bioinformatics . See also consensus sequence .
Consider the N -glycosylation site motif mentioned above:
This pattern may be written as N{P}[ST]{P} where N = Asn, P = Pro, S = Ser, T = Thr; {X} means any amino acid except X ; and [XY] means either X or Y .
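This notation translates almost mechanically into a regular expression, with {X} becoming a negated character class. A minimal sketch, handling only the constructs shown here rather than the full pattern syntax:

```python
import re

def motif_to_regex(pattern):
    """Convert e.g. 'N{P}[ST]{P}' to the regex 'N[^P][ST][^P]'.
    [XY] alternatives are already valid regex character classes."""
    return re.compile(re.sub(r"\{([A-Z]+)\}", r"[^\1]", pattern))

glyc = motif_to_regex("N{P}[ST]{P}")
print(bool(glyc.search("AANGSA")))  # True: N, G (not P), S, A (not P)
print(bool(glyc.search("AANPSA")))  # False: Pro directly after Asn
```

This is the kind of transformation motif-scanning tools perform internally before searching a sequence.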
The notation [XY] does not give any indication of the probability of X or Y occurring in the pattern. Observed probabilities can be graphically represented using sequence logos . Sometimes patterns are defined in terms of a probabilistic model such as a hidden Markov model .
The notation [XYZ] means X or Y or Z , but does not indicate the likelihood of any particular match. For this reason, two or more patterns are often associated with a single motif: the defining pattern, and various typical patterns.
For example, the defining sequence for the IQ motif may be taken to be:
where x signifies any amino acid, and the square brackets indicate an alternative (see below for further details about notation).
Usually, however, the first letter is I , and both [RK] choices resolve to R . Since the last choice is so wide, the pattern IQxxxRGxxxR is sometimes equated with the IQ motif itself, but a more accurate description would be a consensus sequence for the IQ motif .
Several notations for describing motifs are in use but most of them are variants of standard notations for regular expressions and use these conventions:
The fundamental idea behind all these notations is the matching principle, which assigns a meaning to a sequence of elements of the pattern notation:
Thus the pattern [AB] [CDE] F matches the six amino acid sequences corresponding to ACF , ADF , AEF , BCF , BDF , and BEF .
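The matching principle can be made concrete in a few lines (a sketch; the pattern is represented here as a list of per-position alternative sets):

```python
from itertools import product

# The pattern [AB] [CDE] F as a list of per-position alternatives.
pattern = [("A", "B"), ("C", "D", "E"), ("F",)]

# Every matched sequence is one choice from each position, concatenated.
matches = ["".join(choice) for choice in product(*pattern)]
print(matches)  # ['ACF', 'ADF', 'AEF', 'BCF', 'BDF', 'BEF']
```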
Different pattern description notations have other ways of forming pattern elements. One of these notations is the PROSITE notation, described in the following subsection.
The PROSITE notation uses the IUPAC one-letter codes and conforms to the above description with the exception that a concatenation symbol, ' - ', is used between pattern elements, but it is often dropped between letters of the pattern alphabet.
PROSITE allows the following pattern elements in addition to those described previously:
Some examples:
The signature of the C2H2-type zinc finger domain is:
A matrix of numbers containing scores for each residue or nucleotide at each position of a fixed-length motif. There are two types of weight matrices.
An example of a PFM from the TRANSFAC database for the transcription factor AP-1:
The first column specifies the position; the second through fifth columns give the number of occurrences of A, C, G, and T, respectively, at that position; and the last column gives the IUPAC notation for that position.
Note that the sums of occurrences for A, C, G, and T for each row should be equal because the PFM is derived from aggregating several consensus sequences.
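A sketch of how such a matrix is used in practice (the counts below are illustrative only, not the TRANSFAC AP-1 values):

```python
BASES = "ACGT"

# Illustrative PFM: counts of A, C, G, T at each of four positions,
# aggregated from five hypothetical aligned sequences.
pfm = [
    (5, 0, 0, 0),
    (0, 0, 5, 0),
    (3, 0, 2, 0),
    (0, 4, 0, 1),
]

# Each row sums to the same total: the number of sequences.
assert len({sum(row) for row in pfm}) == 1

# Convert counts to a position probability matrix and read a consensus.
ppm = [[count / sum(row) for count in row] for row in pfm]
consensus = "".join(BASES[row.index(max(row))] for row in pfm)
print(consensus)  # AGAC
```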
The sequence motif discovery process has been well developed since the 1990s. In particular, most existing motif discovery research focuses on DNA motifs. With the advances in high-throughput sequencing, motif discovery is challenged both by sequence-pattern degeneracy and by the computational scalability required for data-intensive analyses.
Process of discovery
Motif discovery happens in three major phases. In the pre-processing stage, sequences are prepared in assembly and cleaning steps: assembly involves selecting sequences that contain the desired motif in large quantities and removing unwanted sequences using clustering, while cleaning removes any confounding elements. In the discovery stage, sequences are represented using consensus strings or Position-specific Weight Matrices (PWM) ; an objective function is then chosen and a suitable search algorithm is applied to uncover the motifs. Finally, the post-processing stage evaluates the discovered motifs. [ 2 ]
There are software programs which, given multiple input sequences, attempt to identify one or more candidate motifs. One example is the Multiple EM for Motif Elicitation (MEME) algorithm, which generates statistical information for each candidate. [ 3 ] There are more than 100 publications detailing motif discovery algorithms; Weirauch et al . evaluated many related algorithms in a 2013 benchmark. [ 4 ] The planted motif search is another motif discovery method based on a combinatorial approach.
Motifs have also been discovered by taking a phylogenetic approach and studying similar genes in different species. For example, by aligning the amino acid sequences specified by the GCM ( glial cells missing ) gene in man, mouse and D. melanogaster , Akiyama and others discovered a pattern which they called the GCM motif in 1996. [ 5 ] It spans about 150 amino acid residues, and begins as follows:
Here each . signifies a single amino acid or a gap, and each * indicates one member of a closely related family of amino acids. The authors were able to show that the motif has DNA binding activity.
A similar approach is commonly used by modern protein domain databases such as Pfam : human curators select a pool of sequences known to be related and use computer programs to align them and produce the motif profile (Pfam uses HMMs ), which can be used to identify other related proteins. [ 6 ] A phylogenetic approach can also be used to enhance the de novo MEME algorithm, with PhyloGibbs being an example. [ 7 ]
In 2017, MotifHyades was introduced as a motif discovery tool that can be applied directly to paired sequences. [ 8 ]
In 2018, a Markov random field approach was proposed to infer DNA motifs from the DNA-binding domains of proteins. [ 9 ]
Motif Discovery Algorithms
Motif discovery algorithms use diverse strategies to uncover patterns in DNA sequences. They integrate enumerative, probabilistic, and nature-inspired approaches, and combining multiple methods has proven effective in improving identification accuracy.
Enumerative Approach: [ 2 ]
The enumerative approach generates and evaluates candidate motifs exhaustively. Simple word enumeration techniques, such as YMF and DREME, systematically scan the sequence for short motifs. Clustering-based methods such as CisFinder employ nucleotide substitution matrices for motif clustering, effectively mitigating redundancy. Tree-based methods like Weeder and FMotif exploit tree structures, and graph-theoretic methods (e.g., WINNOWER) employ graph representations.
Probabilistic Approach: [ 2 ]
This approach uses probability models to discern motifs within sequences. MEME, a deterministic example, employs Expectation-Maximization to optimize Position Weight Matrices (PWMs) and uncover conserved regions in unaligned DNA sequences. In contrast, stochastic methods such as Gibbs sampling start motif discovery from random motif position assignments and iteratively refine the predictions. This probabilistic framework captures the uncertainty inherent in motif discovery.
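The Gibbs-sampling strategy just described can be sketched as a toy implementation (the sequences and the simple count-product score are invented for illustration; this is not the algorithm of any published tool):

```python
import math
import random

def gibbs_motif_search(seqs, w, iters=1000, seed=0):
    """Toy Gibbs-sampling motif finder for a motif of width w."""
    rng = random.Random(seed)
    # Start from random motif positions, one per sequence.
    pos = [rng.randrange(len(s) - w + 1) for s in seqs]
    for _ in range(iters):
        i = rng.randrange(len(seqs))  # sequence whose position is resampled
        # Profile with pseudocounts, built from all sequences except i.
        counts = [{b: 1 for b in "ACGT"} for _ in range(w)]
        for j, s in enumerate(seqs):
            if j != i:
                for k in range(w):
                    counts[k][s[pos[j] + k]] += 1
        # Score every window of sequence i under the profile, then draw
        # a new position with probability proportional to its score.
        scores = [math.prod(counts[k][seqs[i][start + k]] for k in range(w))
                  for start in range(len(seqs[i]) - w + 1)]
        r, acc = rng.random() * sum(scores), 0.0
        for start, sc in enumerate(scores):
            acc += sc
            if acc >= r:
                pos[i] = start
                break
    return [s[p:p + w] for s, p in zip(seqs, pos)]

# Toy data with the motif TACGT planted in each sequence.
seqs = ["AAAATACGTAAAA", "CCCCCTACGTCCC", "TACGTGGGGGGGG", "TTTTTTTTTACGT"]
found = gibbs_motif_search(seqs, w=5)
print(found)
```

With enough iterations the sampled windows tend to converge on the planted motif, though as a stochastic method the result can vary between runs.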
Advanced Approach: [ 2 ]
More advanced motif discovery embraces sophisticated techniques, with Bayesian modeling [ 10 ] taking center stage. LOGOS and BaMM, for example, combine Bayesian approaches and Markov models for motif identification. The incorporation of Bayesian clustering methods strengthens the probabilistic foundation, providing a broader framework for pattern recognition in DNA sequences.
Nature-Inspired and Heuristic Algorithms: [ 2 ]
In this category, algorithms draw inspiration from biological processes. Genetic Algorithms (GA) , exemplified by FMGA and MDGA, [ 11 ] navigate the motif search through genetic operators and specialized strategies. Particle Swarm Optimization (PSO) , Artificial Bee Colony (ABC) , and Cuckoo Search (CS) algorithms, featured in GAEM, GARP, and MACS, apply swarm-intelligence principles to the search. Hybrid approaches that combine these heuristics underscore the adaptability of such algorithms in the intricate domain of motif discovery.
The E. coli lactose operon repressor LacI ( PDB : 1lcc chain A) and E. coli catabolite gene activator ( PDB : 3gap chain A) both have a helix-turn-helix motif, but their amino acid sequences do not show much similarity, as shown in the table below. In 1997, Matsuda et al . devised a code they called the "three-dimensional chain code" for representing the protein structure as a string of letters. This encoding scheme reveals the similarity between the proteins much more clearly than the amino acid sequence (example from article): [ 12 ] The code encodes the torsion angles between alpha-carbons of the protein backbone . "W" always corresponds to an alpha helix. | https://en.wikipedia.org/wiki/Sequence_motif
A sequence number is a consecutive number in a sequence of numbers, usually integers ( natural numbers ). Sequence numbers have many practical applications. They can be used, among other things, as part of serial numbers on manufactured parts, in case management, [ 1 ] or in databases as a surrogate key for registering and identifying unique entries in a table [ 2 ] [ 3 ] (in which case it is used as a primary key ). [ 4 ] [ 5 ]
Historically, the Norwegian Mapping Authority has used sequence numbers for land registration as a placeholder in cases where an organization number or national identity number has not been known. [ 6 ]
In elections in Norway , sequence numbers are used in the duplicate check to prevent votes being counted twice or to detect duplicate ballots . [ 7 ]
An example of a sequence number being used as a surrogate key is the snr number used by Statistics Norway since 1970, [ 8 ] which uniquely identifies a person even if their social security number changes. The snr number will then be linked to both social security numbers, and act as a link that ensures that each person can be identified by a unique key at all times. [ 8 ]
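The surrogate-key idea can be sketched as follows (the identifier formats and function names are invented for illustration, not Statistics Norway's actual scheme):

```python
from itertools import count

# A sequence-number generator used as a surrogate key: each person is
# assigned the next number once, and keeps it even if the natural key
# (here, a social security number) later changes.
next_key = count(1)
key_by_ssn = {}

def register(ssn):
    key_by_ssn[ssn] = next(next_key)
    return key_by_ssn[ssn]

def change_ssn(old_ssn, new_ssn):
    # The surrogate key survives the change of natural key.
    key_by_ssn[new_ssn] = key_by_ssn.pop(old_ssn)

k = register("01017012345")  # hypothetical 11-digit number
change_ssn("01017012345", "01017054321")
print(k, key_by_ssn["01017054321"])  # 1 1
```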
A distinction is sometimes made between a sequence number and a serial number. For example, a Swiss locomotive may have the designation " Re 465 003-2", [ 9 ] where: | https://en.wikipedia.org/wiki/Sequence_number |
A sequence of events recorder (SER) [ 1 ] is an intelligent standalone microprocessor based system, which monitors external inputs and records the time and sequence of the changes. [ 2 ] [ 3 ] They usually have an external time source such as a GPS or radio clock . When wired inputs change state, the time and state of each change is recorded.
SERs enable rapid root cause analysis after multiple events have occurred due to the secure recording of the sequence of events in the order of occurrence. SERs are therefore utilized as a diagnostic tool to minimize plant downtime. SERs are often interfaced with a SCADA system, distributed control system or programmable logic controller (PLC).
SER reports are used by electrical engineers to analyze large and small electrical system blackouts. After the Northeast blackout of 2003 , the North American Electric Reliability Corporation specified that electrical system data should be time-tagged to the nearest millisecond .
In 1984, the Tetragenics Company, a subsidiary of the Montana Power Company, introduced the first remote terminal unit (RTU) that time-tagged events to the nearest millisecond, and now there are also other RTUs with this capability. Digital protective relays and some PLCs now also include time-tagging to the nearest millisecond; SCADA systems that incorporate these devices provide SER functions without a dedicated SER device.
| https://en.wikipedia.org/wiki/Sequence_of_events_recorder
Sequence saturation mutagenesis ( SeSaM ) is a chemo-enzymatic random mutagenesis method applied for the directed evolution of proteins and enzymes . [ citation needed ] It is one of the most common saturation mutagenesis techniques . In four PCR -based reaction steps, phosphorothioate nucleotides are inserted in the gene sequence, cleaved and the resulting fragments elongated by universal or degenerate nucleotides . These nucleotides are then replaced by standard nucleotides, allowing for a broad distribution of nucleic acid mutations spread over the gene sequence with a preference to transversions and with a unique focus on consecutive point mutations, both difficult to generate by other mutagenesis techniques. The technique was developed by Professor Ulrich Schwaneberg at Jacobs University Bremen and RWTH Aachen University .
SeSaM was developed to overcome several of the major limitations of standard mutagenesis methods based on simple error-prone PCR (epPCR) techniques. These epPCR techniques rely on polymerases and thus encounter limitations which mainly result from the fact that only single, but very rarely consecutive, nucleic acid [ citation needed ] substitutions are performed, and that these substitutions usually occur only at specific, favored positions. In addition, transversions of nucleic acids are much less likely than transitions and require specifically designed polymerases with an altered bias . [ 1 ] These characteristics of epPCR-catalyzed nucleic acid exchanges, together with the fact that the genetic code is degenerate , decrease the resulting diversity on the amino acid level: synonymous substitutions preserve the amino acid, and conservative mutations to amino acids with similar physico-chemical properties, such as size and hydrophobicity, are strongly prevalent. [ 2 ] [ 3 ] By non-specific introduction of universal bases at every position in the gene sequence, SeSaM overcomes the polymerase bias favoring transition substitutions at specific positions and opens the complete gene sequence to a diverse array of amino acid exchanges. [ 4 ]
During the development of the SeSaM-method, several modifications were introduced that allowed for the introduction of several mutations simultaneously. [ 5 ] Another advancement of the method was achieved by introduction of degenerate bases instead of universal inosine and the use of optimized DNA polymerases, further increasing the ratio of introduced transversions. [ 6 ] This modified SeSaM-TV+ method in addition allows for and favors the introduction of two consecutive nucleotide exchanges, broadening strongly the spectrum of amino acids that may be substituted.
By several optimizations including the application of an improved chimera polymerase in Step III of the SeSaM-TV-II method [ 7 ] [ 8 ] and the addition of an alternative degenerate nucleotide for efficient substitution of thymine and cytosine bases and increased mutation frequency in SeSaM-P/R, [ 9 ] generated libraries were further improved with regard to transversion number and the number of consecutive mutations was raised to 2–4 consecutive mutations with a rate of consecutive mutations of up to 30%. [ 10 ]
The SeSaM-method consists of four PCR-based steps which can be executed within two to three days. Major parts include the incorporation of phosphorothioate nucleotides, the chemical fragmentation at these positions, the introduction of universal or degenerate bases and their replacement by natural nucleotides inserting point mutations.
Initially, universal “SeSaM”-sequences are inserted by PCR with gene-specific primers binding in front of and behind the gene of interest. The gene of interest with its flanking regions is amplified to introduce these SeSaM_fwd and SeSaM_rev sequences and to generate template for consecutive PCR steps.
The resulting so-called fwd and rev templates are then amplified in a PCR reaction with a pre-defined mixture of phosphorothioate and standard nucleotides to ensure an even distribution of inserted mutations over the full length of the gene. The PCR products of Step 1 are cleaved specifically at the phosphorothioate bonds, generating a pool of single-stranded DNA fragments of different lengths starting from the universal primer.
In Step 2 of SeSaM, the DNA single strands are elongated by one to several universal or degenerate bases (depending on the modification of SeSaM applied), catalyzed by terminal deoxynucleotidyl transferase (TdT). This is the key step for introducing the characteristic consecutive mutations that randomly mutate entire codons.
Subsequently, in Step 3 a PCR is performed recombining the single stranded DNA fragments with the corresponding full-length reverse template, generating the full-length double stranded gene including universal or degenerate bases in its sequence.
By replacement of the universal/degenerate bases in the gene sequence by random standard nucleotides in SeSaM Step 4, a diverse array of full-length gene sequences with substitution mutations is generated, including a high load of transversions and subsequent substitution mutations.
SeSaM is used to directly optimize proteins at the amino acid level, but also to preliminarily identify amino acid positions to test in saturation mutagenesis for the ideal amino acid exchange. SeSaM has been successfully applied in numerous directed evolution campaigns of different classes of enzymes for their improvement towards selected properties such as cellulase for ionic liquid resistance, [ 11 ] protease with increased detergent tolerance, [ 12 ] glucose oxidase for analytical application, [ 13 ] phytase with increased thermostability [ 14 ] and monooxygenase with improved catalytic efficiency using alternative electron donors. [ 15 ] SeSaM is patent protected by US770374 B2 in over 13 countries and is one of the platform technologies of SeSaM-Biotech GmbH . | https://en.wikipedia.org/wiki/Sequence_saturation_mutagenesis
In evolutionary biology , sequence space is a way of representing all possible sequences (for a protein , gene or genome ). [ 1 ] [ 2 ] The sequence space has one dimension per amino acid or nucleotide in the sequence leading to highly dimensional spaces . [ 3 ] [ 4 ]
Most sequences in sequence space have no function, leaving relatively small regions that are populated by naturally occurring genes. [ 5 ] Each protein sequence is adjacent to all other sequences that can be reached through a single mutation . [ 6 ] It has been estimated that the whole functional protein sequence space has been explored by life on the Earth. [ 7 ] Evolution by natural selection can be visualised as the process of sampling nearby sequences in sequence space and moving to any with improved fitness over the current one.
A sequence space is usually laid out as a grid. For protein sequence spaces, each residue in the protein is represented by a dimension with 20 possible positions along that axis corresponding to the possible amino acids. [ 3 ] [ 4 ] Hence there are 400 possible dipeptides arranged in a 20×20 space, but that expands to 10^130 for even a small protein of 100 amino acids, arranged in a space with 100 dimensions. Although such overwhelming multidimensionality cannot be visualised or represented diagrammatically, it provides a useful abstract model to think about the range of proteins and evolution from one sequence to another.
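The arithmetic in this paragraph can be checked directly, since the space grows as 20 to the power of the sequence length:

```python
# 20 amino acids per position: the sequence space grows as 20**L.
dipeptides = 20 ** 2
small_protein = 20 ** 100

print(dipeptides)               # 400
print(len(str(small_protein)))  # 131 digits, i.e. roughly 10**130
```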
These highly multidimensional spaces can be compressed to 2 or 3 dimensions using principal component analysis . A fitness landscape is simply a sequence space with an extra vertical axis of fitness added for each sequence. [ 8 ]
Despite the diversity of protein superfamilies, sequence space is extremely sparsely populated by functional proteins. Most random protein sequences have no fold or function. [ 9 ] Enzyme superfamilies , therefore, exist as tiny clusters of active proteins in a vast empty space of non-functional sequence. [ 10 ] [ 11 ]
The density of functional proteins in sequence space, and the proximity of different functions to one another is a key determinant in understanding evolvability . [ 12 ] The degree of interpenetration of two neutral networks of different activities in sequence space will determine how easy it is to evolve from one activity to another. The more overlap between different activities in sequence space, the more cryptic variation for promiscuous activity will be. [ 13 ]
Protein sequence space has been compared to the Library of Babel , a theoretical library containing all possible books that are 410 pages long. [ 14 ] [ 15 ] In the Library of Babel , finding any book that made sense was impossible due to the sheer number and lack of order. The same would be true of protein sequences if it were not for natural selection, which has selected out only protein sequences that make sense. Additionally, each protein sequence is surrounded by a set of neighbours (point mutants) that are likely to have at least some function.
On the other hand, the effective "alphabet" of the sequence space may in fact be quite small, reducing the useful number of amino acids from 20 to a much lower number. For example, in an extremely simplified view, all amino acids can be sorted into two classes (hydrophobic/polar) by hydrophobicity and still allow many common structures to show up. Early life on Earth may have had only four or five types of amino acids to work with, [ 16 ] and researchers have shown that functional proteins can be created from wild-type ones by a similar alphabet-reduction process. [ 17 ] [ 18 ] Reduced alphabets are also useful in bioinformatics , as they provide an easy way of analyzing protein similarity. [ 19 ] [ 20 ]
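A sketch of alphabet reduction in the simplest two-class case (this particular hydrophobic/polar partition is one common convention, not a canonical standard; some residues, such as tyrosine, are classified differently in other schemes):

```python
# Collapse the 20-letter amino acid alphabet to a 2-letter
# hydrophobic (H) / polar (P) alphabet.
HYDROPHOBIC = set("AVLIMFWYC")  # one common, but not universal, partition

def to_hp(seq):
    return "".join("H" if aa in HYDROPHOBIC else "P" for aa in seq)

print(to_hp("MKVLAYE"))  # HPHHHHP
```

Two sequences with the same HP string are candidates for sharing a fold even when their full 20-letter sequences look dissimilar.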
A major focus in the field of protein engineering is on creating DNA libraries that sample regions of sequence space, often with the goal of finding mutants of proteins with enhanced functions compared to the wild type . These libraries are created either by using a wild type sequence as a template and applying one or more mutagenesis techniques to make different variants of it, or by creating proteins from scratch using artificial gene synthesis . These libraries are then screened or selected , and ones with improved phenotypes are used for the next round of mutagenesis. | https://en.wikipedia.org/wiki/Sequence_space_(evolution) |
In genetics and biochemistry , sequencing means to determine the primary structure (sometimes incorrectly called the primary sequence) of an unbranched biopolymer . Sequencing results in a symbolic linear depiction known as a sequence which succinctly summarizes much of the atomic-level structure of the sequenced molecule.
DNA sequencing is the process of determining the nucleotide order of a given DNA fragment. So far, most DNA sequencing has been performed using the chain termination method developed by Frederick Sanger . This technique uses sequence-specific termination of a DNA synthesis reaction using modified nucleotide substrates. However, new sequencing technologies such as pyrosequencing are gaining an increasing share of the sequencing market. More genome data are now being produced by pyrosequencing than by Sanger DNA sequencing. Pyrosequencing has enabled rapid genome sequencing. Bacterial genomes can be sequenced in a single run with severalfold coverage with this technique. This technique was also used recently to sequence the genome of James Watson . [ 1 ]
The sequence of DNA encodes the necessary information for living things to survive and reproduce. Determining the sequence is therefore useful in fundamental research into why and how organisms live, as well as in applied subjects. Because of the key importance DNA has to living things, knowledge of DNA sequences is useful in practically any area of biological research. For example, in medicine it can be used to identify, diagnose, and potentially develop treatments for genetic diseases. Similarly, research into pathogens may lead to treatments for contagious diseases. Biotechnology is a burgeoning discipline, with the potential for many useful products and services.
The Carlson curve is a term coined by The Economist [ 2 ] to describe the biotechnological equivalent of Moore's law , and is named after author Rob Carlson. [ 3 ] Carlson accurately predicted the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law. [ 4 ] Carlson curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis , and a range of physical and computational tools used in protein expression and in determining protein structures.
In chain terminator sequencing (Sanger sequencing), extension is initiated at a specific site on the template DNA by using a short oligonucleotide 'primer' complementary to the template at that region. The oligonucleotide primer is extended using a DNA polymerase , an enzyme that replicates DNA. Included with the primer and DNA polymerase are the four deoxynucleotide bases (DNA building blocks), along with a low concentration of a chain-terminating nucleotide (most commonly a di- deoxynucleotide). The dideoxynucleotides lack the OH group at both the 2' and the 3' position of the ribose molecule, so once they are incorporated into a DNA molecule they prevent it from being further elongated. In this method four different vessels are employed, each containing only one of the four dideoxyribonucleotides; the incorporation of the chain-terminating nucleotides by the DNA polymerase at random positions results in a series of related DNA fragments of different sizes, each terminating with a given dideoxyribonucleotide. The fragments are then size-separated by electrophoresis in a slab polyacrylamide gel or, more commonly now, in a narrow glass tube (capillary) filled with a viscous polymer.
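The read-out logic of the four-vessel scheme can be sketched as follows: each vessel yields fragments ending at every occurrence of its dideoxy base in the newly synthesised strand, and sorting all fragments by length (the electrophoresis ladder) re-reads the sequence. The example strand is invented:

```python
# Sketch of the four-vessel read-out: each ddNTP vessel yields fragments
# ending at every occurrence of its base in the newly synthesised strand.
synthesised = "ATGGCATC"  # invented example strand
fragments = {base: [i + 1 for i, x in enumerate(synthesised) if x == base]
             for base in "ACGT"}

# Sorting all (length, base) pairs mimics reading the electrophoresis
# ladder from the shortest fragment to the longest.
ladder = sorted((length, base)
                for base, lengths in fragments.items()
                for length in lengths)
read = "".join(base for _, base in ladder)
print(read)  # ATGGCATC (the ladder re-reads the synthesised strand)
```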
An alternative to labelling the primer is to label the terminators instead, commonly called 'dye terminator sequencing'. The major advantage of this approach is that the complete sequencing can be performed in a single reaction, rather than the four needed with the labeled-primer approach. This is accomplished by labelling each of the dideoxynucleotide chain-terminators with a separate fluorescent dye, which fluoresces at a different wavelength . This method is easier and quicker than the dye primer approach, but may produce more uneven data peaks (different heights), due to a template-dependent difference in the incorporation of the large dye chain-terminators. This problem has been significantly reduced with the introduction of new enzymes and dyes that minimize incorporation variability.
This method is now used for the vast majority of sequencing reactions as it is both simpler and cheaper. The major reason for this is that the primers do not have to be separately labelled (which can be a significant expense for a single-use custom primer), although this is less of a concern with frequently used 'universal' primers. This is changing rapidly due to the increasing cost-effectiveness of second- and third-generation systems from Illumina, 454, ABI, Helicos, and Dover.
The pyrosequencing method is based on the detection of pyrophosphate release on nucleotide incorporation. Before performing pyrosequencing, the DNA strand to be sequenced has to be amplified by PCR. Then the order in which the nucleotides are added in the sequencer is chosen (i.e. G-A-T-C). When a specific nucleotide is added, if the DNA polymerase incorporates it into the growing chain, pyrophosphate is released and converted into ATP by ATP sulfurylase. The ATP powers the luciferase-mediated oxidation of luciferin; this reaction generates a light signal recorded as a pyrogram peak. In this way, each nucleotide incorporation is correlated with a signal. The light signal is proportional to the number of nucleotides incorporated during the synthesis of the DNA strand (e.g. two identical nucleotides incorporated in one dispensation give a peak of double height). When the added nucleotide is not incorporated into the DNA molecule, no signal is recorded; the enzyme apyrase removes any unincorporated nucleotides remaining in the reaction.
This method requires neither fluorescently-labelled nucleotides nor gel electrophoresis.
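The pyrogram logic described above can be sketched assuming a fixed G-A-T-C dispensation order (the strand is invented, and real instruments differ in detail):

```python
from itertools import cycle, islice

def pyrogram(strand, order="GATC", cycles=3):
    """Peak per dispensation: the length of the run of the dispensed
    base at the current synthesis position (0 if nothing incorporates)."""
    peaks, pos = [], 0
    for base in islice(cycle(order), len(order) * cycles):
        run = 0
        while pos + run < len(strand) and strand[pos + run] == base:
            run += 1
        peaks.append((base, run))
        pos += run
    return peaks

peaks = pyrogram("GGATC")
print(peaks[:5])  # [('G', 2), ('A', 1), ('T', 1), ('C', 1), ('G', 0)]
```

Note how the homopolymer GG produces a single dispensation with signal 2 rather than two separate peaks.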
Pyrosequencing, which was developed by Pål Nyrén and Mostafa Ronaghi , has been commercialized by Biotage (for low-throughput sequencing) and 454 Life Sciences (for high-throughput sequencing). The latter platform sequences roughly 100 megabases [now up to 400 megabases] in a seven-hour run with a single machine. In the array-based method (commercialized by 454 Life Sciences), single-stranded DNA is annealed to beads and amplified via EmPCR . These DNA-bound beads are then placed into wells on a fiber-optic chip along with enzymes which produce light in the presence of ATP . When free nucleotides are washed over this chip, light is produced as ATP is generated when nucleotides join with their complementary base pairs . Addition of one (or more) nucleotide(s) results in a reaction that generates a light signal that is recorded by the CCD camera in the instrument. The signal strength is proportional to the number of nucleotides, for example, homopolymer stretches, incorporated in a single nucleotide flow. [1]
Whereas the sections above describe various sequencing methods, separate related terms are used when a large portion of a genome is sequenced. Several platforms were developed to perform exome sequencing (a subset of all DNA across all chromosomes that encodes genes) or whole genome sequencing (sequencing of all the nuclear DNA of a human).
RNA is less stable in the cell, and also more prone to nuclease attack experimentally. As RNA is generated by transcription from DNA, the information is already present in the cell's DNA. However, it is sometimes desirable to sequence RNA molecules. While sequencing DNA gives a genetic profile of an organism, sequencing RNA reflects only the sequences that are actively expressed in the cells. To sequence RNA, the usual method is first to reverse transcribe the RNA extracted from the sample to generate cDNA fragments. This can then be sequenced as described above.
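The reverse-transcription read-out amounts to taking the reverse complement of the RNA, with U pairing to A (a minimal sketch):

```python
# cDNA is synthesised antiparallel to the RNA template,
# so the cDNA read is the reverse complement (U pairs with A).
PAIR = {"A": "T", "U": "A", "G": "C", "C": "G"}

def cdna(rna):
    return "".join(PAIR[base] for base in reversed(rna))

print(cdna("AUGGCU"))  # AGCCAT
```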
The bulk of the RNA expressed in cells consists of ribosomal RNAs and small RNAs , which are essential for cellular translation but often not the focus of a study. This fraction can be removed in vitro to enrich for the messenger RNA (mRNA), which is usually of interest. Derived from exons, these mRNAs are later translated into proteins that support particular cellular functions. The expression profile therefore indicates cellular activity, which is of particular interest in studies of disease, cellular behaviour, and responses to reagents or stimuli. Eukaryotic RNA molecules are not necessarily co-linear with their DNA template, as introns are excised; this adds complexity to mapping the read sequences back to the genome and thereby identifying their origin.
For more information on the capabilities of next-generation sequencing applied to whole transcriptomes see: RNA-Seq and MicroRNA Sequencing .
Methods for performing protein sequencing include:
If the gene encoding the protein is known, it is currently much easier to sequence the DNA and infer the protein sequence. Determining part of a protein's amino-acid sequence (often one end) by one of the above methods may be sufficient to identify a clone carrying this gene.
Though polysaccharides are also biopolymers, it is not so common to talk of 'sequencing' a polysaccharide, for several reasons. Although many polysaccharides are linear, many have branches. Many different units (individual monosaccharides ) can be used, and bonded in different ways. However, the main theoretical reason is that whereas the other polymers listed here are primarily generated in a 'template-dependent' manner by one processive enzyme, each individual join in a polysaccharide may be formed by a different enzyme . In many cases the assembly is not uniquely specified; depending on which enzyme acts, one of several different units may be incorporated. This can lead to a family of similar molecules being formed. This is particularly true for plant polysaccharides. Methods for the structure determination of oligosaccharides and polysaccharides include NMR spectroscopy and methylation analysis . [ 5 ] | https://en.wikipedia.org/wiki/Sequencing |
Sequencing batch reactors ( SBR ) or sequential batch reactors are a type of activated sludge process for the treatment of wastewater . SBRs treat wastewater such as sewage or output from anaerobic digesters or mechanical biological treatment facilities in batches. Oxygen is bubbled through the mixture of wastewater and activated sludge to reduce the organic matter (measured as biochemical oxygen demand (BOD) and chemical oxygen demand (COD)). The treated effluent may be suitable for discharge to surface waters or possibly for use on land.
While there are several configurations of SBRs, the basic process is similar. The installation consists of one or more tanks that can be operated as plug flow or completely mixed reactors. [ 1 ] The tanks have a “flow through” system, with raw wastewater ( influent ) coming in at one end and treated water ( effluent ) flowing out the other. In systems with multiple tanks, while one tank is in settle/decant mode the other is aerating and filling. In some systems, tanks contain a section known as the bio-selector, which consists of a series of walls or baffles which direct the flow either from side to side of the tank or under and over consecutive baffles. This helps to mix the incoming influent and the returned activated sludge (RAS), beginning the biological digestion process before the liquor enters the main part of the tank.
There are five stages in the treatment process: [ 1 ]
First, the inlet valve is opened and the tank is filled, while mixing is provided by mechanical means but no air is added; this stage is also called the anoxic stage. During the second stage, aeration of the mixed liquor is performed using fixed or floating mechanical pumps or by transferring air into fine bubble diffusers fixed to the floor of the tank. No aeration or mixing is provided in the third stage, and the settling of suspended solids begins. During the fourth stage the outlet valve opens and the "clean" supernatant liquor exits the tank. In the fifth, idle stage the tank waits between the end of one cycle and the start of the next. [ 2 ] : 3–8, 19
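As a rough illustration, the five-stage cycle can be laid out as a timed sequence. The stage names follow the description above; the durations are purely illustrative, chosen only to be consistent with the typical 60 to 90 minute aeration and equal settling time mentioned later in the text.

```python
# Minimal sketch of one SBR operating cycle as a sequence of timed
# stages. Durations (in minutes) are illustrative, not prescriptive.

SBR_CYCLE = [
    ("fill",   30),   # anoxic fill: influent enters, mechanical mixing only
    ("react",  75),   # aeration: air supplied through diffusers or pumps
    ("settle", 75),   # no mixing or aeration; suspended solids settle
    ("decant", 30),   # supernatant withdrawn through the outlet valve
    ("idle",   15),   # tank waits for the next batch
]

def run_cycle(cycle):
    """Yield (stage, start_minute, end_minute) tuples for one cycle."""
    t = 0
    for stage, minutes in cycle:
        yield stage, t, t + minutes
        t += minutes

for stage, start, end in run_cycle(SBR_CYCLE):
    print(f"{stage:>6}: {start:3d}-{end:3d} min")
```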
Aeration times vary according to the plant size and the composition/quantity of the incoming liquor, but are typically 60 to 90 minutes. The addition of oxygen to the liquor encourages the multiplication of aerobic bacteria and they consume the nutrients. This process encourages the conversion of nitrogen from its reduced ammonia form to oxidized nitrite and nitrate forms, a process known as nitrification .
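The two microbial oxidation steps of nitrification can be written as simplified stoichiometric equations (biomass synthesis is omitted):

```latex
% Nitrification: ammonium is oxidized to nitrite, then to nitrate.
\begin{align*}
\mathrm{NH_4^+} + \tfrac{3}{2}\,\mathrm{O_2} &\longrightarrow \mathrm{NO_2^-} + \mathrm{H_2O} + 2\,\mathrm{H^+} \\
\mathrm{NO_2^-} + \tfrac{1}{2}\,\mathrm{O_2} &\longrightarrow \mathrm{NO_3^-}
\end{align*}
```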
To remove phosphorus compounds from the liquor, aluminium sulfate (alum) is often added during this period. It reacts to form non-soluble compounds, which settle into the sludge in the next stage. [ 3 ]
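The underlying alum chemistry can be sketched as dissolution followed by precipitation of insoluble aluminium phosphate (a simplified view that ignores competing hydrolysis reactions):

```latex
% Alum dissolves, and free aluminium ions precipitate phosphate
% as insoluble aluminium phosphate, which settles into the sludge.
\begin{align*}
\mathrm{Al_2(SO_4)_3} &\longrightarrow 2\,\mathrm{Al^{3+}} + 3\,\mathrm{SO_4^{2-}} \\
\mathrm{Al^{3+}} + \mathrm{PO_4^{3-}} &\longrightarrow \mathrm{AlPO_4}\!\downarrow
\end{align*}
```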
The settling stage is usually the same length in time as the aeration. During this stage the sludge formed by the bacteria is allowed to settle to the bottom of the tank. The aerobic bacteria continue to multiply until the dissolved oxygen is all but used up. Conditions in the tank, especially near the bottom are now more suitable for the anaerobic bacteria to flourish. Many of these, and some of the bacteria which would prefer an oxygen environment, now start to use oxidized nitrogen instead of oxygen gas (as an alternate terminal electron acceptor ) and convert the nitrogen to a gaseous state, as nitrogen oxides or, ideally, molecular nitrogen ( dinitrogen , N 2 ) gas. This is known as denitrification .
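The stepwise reduction chain of denitrification, from nitrate down to dinitrogen gas, can be summarized as:

```latex
% Stepwise reduction of oxidized nitrogen to dinitrogen gas.
\mathrm{NO_3^-} \longrightarrow \mathrm{NO_2^-} \longrightarrow \mathrm{NO}
\longrightarrow \mathrm{N_2O} \longrightarrow \mathrm{N_2}
```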
An anoxic SBR can be used for anaerobic processes, such as the removal of ammonia via Anammox , or the study of slow-growing microorganisms. [ 4 ] In this case, the reactors are purged of oxygen by flushing with inert gas and there is no aeration.
As the bacteria multiply and die, the sludge within the tank increases over time and a waste activated sludge (WAS) pump removes some of the sludge during the settling stage to a digester for further treatment. The quantity or “age” of sludge within the tank is closely monitored, as this can have a marked effect on the treatment process.
The sludge is allowed to settle until the top 20 to 30 percent of the tank contents is clear water.
The decanting stage most commonly involves the slow lowering of a scoop or “trough” into the basin. This has a piped connection to a lagoon where the final effluent is stored for disposal to a wetland, tree plantation, ocean outfall, or to be further treated for use on parks, golf courses etc.
In some situations in which a traditional treatment plant cannot fulfill required treatment (due to higher loading rates, stringent treatment requirements, etc.), the owner might opt to convert their traditional system into a multi-SBR plant. Conversion to SBR will create a longer sludge age, minimizing sludge handling requirements downstream of the SBR. [ 2 ] : 8–10
The reverse can also be done, converting SBR systems into extended aeration (EA) systems . SBR treatment systems that cannot cope with a sudden, sustained increase of influent may easily be converted into EA plants. Extended aeration plants are more flexible in flow rate, eliminating restrictions presented by pumps located throughout the SBR systems. Clarifiers can be retrofitted in the equalization tanks of the SBR. | https://en.wikipedia.org/wiki/Sequencing_batch_reactor |
Sequencing by hybridization is a class of methods for determining the order in which nucleotides occur on a strand of DNA . It is typically used to look for small changes relative to a known DNA sequence . [ 1 ] The binding of one strand of DNA to its complementary strand in the DNA double-helix (known as hybridization ) is sensitive to even single-base mismatches when the hybrid region is short or when specialized mismatch detection proteins are present. This is exploited in a variety of ways, most notably via DNA chips or microarrays carrying thousands to billions of synthetic oligonucleotides found in a genome of interest, plus many known variations or even all possible single-base variations. [ 2 ] [ 3 ]
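As a toy illustration of why short hybrids are sensitive to single-base mismatches (illustrative code only, not any platform's actual chemistry), one can model a probe as the reverse complement of its target region and count mismatched positions:

```python
# Toy model of hybridization-based mismatch detection: a probe
# "hybridizes" to a target region when its reverse complement matches
# the region; counting mismatches mimics the sensitivity of short
# hybrids to single-base differences.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def mismatches(probe: str, target_region: str) -> int:
    """Number of mismatched positions between a probe and the
    equal-length region it is designed to hybridize to."""
    expected = reverse_complement(probe)
    return sum(1 for a, b in zip(expected, target_region) if a != b)

def hybridizes(probe: str, target_region: str, max_mismatch: int = 0) -> bool:
    return mismatches(probe, target_region) <= max_mismatch

target = "ACGTGCAT"
probe = reverse_complement(target)   # perfect-match probe
variant = "ACGTGGAT"                 # same region with one base changed
print(mismatches(probe, target))     # 0
print(mismatches(probe, variant))    # 1
print(hybridizes(probe, variant))    # False
```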
The type of sequencing by hybridization described above has largely been displaced by other methods, including sequencing by synthesis and sequencing by ligation (as well as pore-based methods). However, hybridization of oligonucleotides is still used in some sequencing schemes, including hybridization-assisted pore-based sequencing and reversible hybridization. [ 4 ] | https://en.wikipedia.org/wiki/Sequencing_by_hybridization |
Sequenom, Inc. is an American company based in San Diego , California. It develops enabling molecular technologies and highly sensitive laboratory genetic tests for noninvasive prenatal testing (NIPT). Sequenom's wholly owned subsidiary, Sequenom Center for Molecular Medicine (SCMM), offers multiple clinical molecular genetics tests to patients, including MaterniT21 PLUS, a noninvasive prenatal test for trisomy 21 , trisomy 18 , and trisomy 13 , and the SensiGene Fetal RHD genotyping test.
The company went public via an initial public offering in 2000. [ 1 ] In June 2014 the company sold its biosciences unit to Agena Bioscience for up to $35.8 million. [ 2 ] In July 2016, it was announced that diagnostic and testing giant LabCorp would acquire Sequenom, paying $2.40 for every outstanding share of Sequenom stock. The acquisition was completed in September 2016. [ 3 ]
Companies also offering non-invasive prenatal genetic testing include Ariosa, [ 4 ] Ravgen , [ 5 ] Illumina (Verinata Health), [ 6 ] PerkinElmer and Natera (The Panorama Prenatal Test). [ 7 ] Other companies and universities that are working towards developing non-invasive prenatal testing include Stanford University . [ 8 ]
In January 2012, Sequenom entered a patent battle with competing companies, Ariosa and Natera , accusing them of infringing the "540 patent" ( US 6258540 ). [ 9 ] The cases are Sequenom Inc. v. Natera Inc. 12-cv-0184, Sequenom v. Ariosa Diagnostics Inc. , 12-cv-0189, U.S. District Court, Southern District of California (San Diego), and Ariosa v. Sequenom .
Verinata Health and Stanford University later filed suit against Sequenom in a dispute over the 'Quake patent'. Verinata claims that Sequenom's lawyers sent it a letter in 2010 alleging that "'the practice of non-invasive prenatal diagnostics, including diagnosis of the Down Syndrome and other genetic disorders, using cell-free nucleic acids in a sample of maternal blood infringes' the '540 patent, as well as the claims of a pending United States Patent Application." [ 10 ] The '540 patent was invented by Isis Ltd. and expires in 2017.
Stanford University owns the Quake patents and licensing rights; Verinata is its exclusive licensee. [ 10 ]
In April 2012, Sequenom acquired two pending patents from Helicos Biosciences . In consideration for the sale and transfer of the purchased assets, Sequenom paid Helicos $1.3 million. The Helicos patent applications (US Patent application 12/709,057 and 12/727,824) cover methods for detecting fetal nucleic acids and diagnosing fetal abnormalities. [ 11 ]
In July 2012, the United States District Court denied Sequenom's motion for a preliminary injunction against Ariosa Diagnostics. [ 12 ]
In August 2013, the Court of Appeals for the Federal Circuit vacated the District Court decision and remanded the case to the District Court. [ 13 ]
In the Ariosa litigation, the District Court (N.D.Cal.) held that the '540 patent was invalid because it claimed a natural phenomenon, the presence of cell-free fetal DNA fragments in maternal blood. On June 13, 2015, the CAFC affirmed the District Court's judgment. [ 14 ] Finally, on December 2, 2015, the Federal Circuit declined to rehear en banc . [ 15 ]
In 2009, Sequenom Center for Molecular Medicine (SCMM) was expected to launch the SEQureDx prenatal screening tests for Down syndrome and Rhesus D. Subsequent investigation revealed significant flaws in the studies of the test's effectiveness. [ 16 ] As a result, the board of directors of Sequenom fired CEO Harry Stylli, senior vice president of research and development Elizabeth Dragon and three other employees after a probe discovered that the company had failed to adequately supervise its Down syndrome test. CFO Paul Hawran also resigned. Board chairman Harry F. Hixson Jr. was named interim CEO and director Ronald M. Lindsay was appointed to replace Dragon. Dragon has since been charged by the Securities and Exchange Commission (SEC) because she "lied to the public about the accuracy of Sequenom's prenatal screening test for Down syndrome". [ 17 ] She died on February 26, 2011. [ 18 ] [ 19 ]
In 2010, Sequenom paid $14 million to settle a shareholder class-action lawsuit that arose from the errors in the development of the Down syndrome test. [ 20 ] Sequenom executives are under investigation by the SEC for insider trading before announcement of problems with the test. [ 21 ] [ 22 ]
On September 1, 2011, Sequenom entered into a cease-and-desist order with SEC. [ 23 ]
MaterniT21 PLUS is Sequenom Center for Molecular Medicine's prenatal test for trisomy 21 ( Down syndrome ), trisomy 18 ( Edwards syndrome ) and trisomy 13 ( Patau syndrome ). The test operates by sampling cell-free DNA in the mother's blood, which contains some DNA from the fetus . The proportion of DNA from sequences on chromosome 21, 18, or 13 can indicate whether the fetus has a trisomy of that chromosome. In a clinical study of 1,696 pregnancies at high risk for Down syndrome, the test correctly identified 98.6% of the actual cases of Down syndrome (209 out of 212), with a false positive rate of 0.2% (3 of 1,471 pregnancies without Down syndrome); the test gave no result in 0.8% of the cases tested (13 of 1,696). [ 24 ]
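The headline figures quoted above follow directly from the study's raw counts; the short sketch below simply recomputes them (counts taken from the text):

```python
# Recomputing the reported performance figures of the 1,696-pregnancy
# study from its raw counts.

detected, cases = 209, 212           # true positives among Down syndrome cases
false_pos, unaffected = 3, 1471      # false positives among unaffected pregnancies
no_result, tested = 13, 1696         # samples yielding no result

sensitivity = detected / cases                  # fraction of cases identified
false_positive_rate = false_pos / unaffected    # fraction of unaffected flagged
no_call_rate = no_result / tested               # fraction with no result

print(f"sensitivity:         {sensitivity:.1%}")          # 98.6%
print(f"false positive rate: {false_positive_rate:.1%}")  # 0.2%
print(f"no-call rate:        {no_call_rate:.1%}")         # 0.8%
```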
The primary advantage of MaterniT21 PLUS over the other major high accuracy tests for Down syndrome, Amniocentesis and Chorionic villus sampling , is that MaterniT21 PLUS is noninvasive. [ 24 ] Because amniocentesis and chorionic villus sampling are invasive, they have a chance of causing miscarriage . [ 25 ]
On August 4, 2011, Sequenom said it would call its new blood test for Down syndrome in pregnancy MaterniT21 when the product went on sale in the United States. [ 26 ] [ 27 ] [ 28 ] [ 29 ]
On August 11, 2011, Sequenom announced a European licensing agreement with LifeCodexx. The companies agreed to collaborate in the development and launch of a trisomy 21 laboratory-developed test and other aneuploidy testing in Germany, Austria, Switzerland, and Liechtenstein, with the potential for additional launches in other countries. Under the initial five-year licensing agreement, Sequenom granted LifeCodexx licenses to key patent rights, including European Patent EP0994963B1 and pending application EP2183693A1, that enable the development and commercialization of a non-invasive aneuploidy test utilizing circulating cell-free fetal DNA in maternal plasma. [ 30 ]
On October 24, 2011, the International Society for Prenatal Diagnosis (ISPD) issued a rapid response statement in response to the launch of Sequenom's noninvasive trisomy 21 (MaterniT21) test. [ 31 ]
On October 17, 2011 Sequenom announced that a clinical validation study leading to the introduction of the MaterniT21 LDT had been published in the journal Genetics in Medicine. [ 32 ] On October 17, 2011 Sequenom Center for Molecular Medicine announced the launch of MaterniT21 Noninvasive Prenatal Test for Down Syndrome. [ 29 ]
Sequenom OncoMap Version 3 – the "core" set interrogates ~450 mutations in 35 genes; an "extended" set interrogates ~700 mutations in 113 genes. [ 33 ]
Sequenom OncoCarta (OncoMap) identifies 396 unique "druggable" or "actionable" mutations in 33 cancer genes. In total, 417 mutations are identified. [ 34 ] [ 35 ] [ 36 ] [ 37 ]
MassARRAY spectrometry is more sensitive than PreTect HPV-Proofer and Consensus PCR for type-specific detection of high-risk oncogenic human papillomavirus genotypes in cervical cancer . [ 38 ]
On October 4, 2011 Sequenom introduced iPLEX ADME PGx Panel on MassARRAY System, developed to genotype polymorphisms in genes associated with drug absorption, distribution, metabolism, and excretion (ADME). This Research Use Only (RUO) panel contains a set of pre-designed single nucleotide polymorphisms ( SNP ), insertions and deletions ( INDELS ) and copy number variation ( CNV ) assays for use in the investigation of variants with demonstrated relevance to drug metabolism. After detection on the MassARRAY (RUO) system, a proprietary software solution is then used to score and qualify polymorphisms to create a unique haplotype report. [ 39 ] | https://en.wikipedia.org/wiki/Sequenom |
In mathematical logic , a sequent is a very general kind of conditional assertion.
A sequent may have any number m of condition formulas A i (called " antecedents ") and any number n of asserted formulas B j (called "succedents" or " consequents "). A sequent is understood to mean that if all of the antecedent conditions are true, then at least one of the consequent formulas is true. This style of conditional assertion is almost always associated with the conceptual framework of sequent calculus .
Sequents are best understood in the context of the following three kinds of logical judgments :
Thus sequents are a generalization of simple conditional assertions, which are a generalization of unconditional assertions.
The word "OR" here is the inclusive OR . [ 1 ] The motivation for disjunctive semantics on the right side of a sequent comes from three main benefits.
All three of these benefits were identified in the founding paper by Gentzen (1934 , p. 194).
Not all authors have adhered to Gentzen's original meaning for the word "sequent". For example, Lemmon (1965) used the word "sequent" strictly for simple conditional assertions with one and only one consequent formula. [ 2 ] The same single-consequent definition for a sequent is given by Huth & Ryan 2004 , p. 5.
In a general sequent of the form Γ ⊢ Σ,
both Γ and Σ are sequences of logical formulas, not sets . Therefore both the number and order of occurrences of formulas are significant. In particular, the same formula may appear twice in the same sequence. The full set of sequent calculus inference rules contains rules to swap adjacent formulas on the left and on the right of the assertion symbol (and thereby arbitrarily permute the left and right sequences), and also to insert arbitrary formulas and remove duplicate copies within the left and the right sequences. (However, Smullyan (1995 , pp. 107–108), uses sets of formulas in sequents instead of sequences of formulas. Consequently the three pairs of structural rules called "thinning", "contraction" and "interchange" are not required.)
The symbol ' ⊢ {\displaystyle \vdash } ' is often referred to as the " turnstile ", "right tack", "tee", "assertion sign" or "assertion symbol". It is often read, suggestively, as "yields", "proves" or "entails".
Since every formula in the antecedent (the left side) must be true to conclude the truth of at least one formula in the succedent (the right side), adding formulas to either side results in a weaker sequent, while removing them from either side gives a stronger one. This is one of the symmetry advantages which follows from the use of disjunctive semantics on the right hand side of the assertion symbol, whereas conjunctive semantics is adhered to on the left hand side.
In the extreme case where the list of antecedent formulas of a sequent is empty, the consequent is unconditional. This differs from the simple unconditional assertion because the number of consequents is arbitrary, not necessarily a single consequent. Thus for example, ' ⊢ B 1 , B 2 ' means that either B 1 , or B 2 , or both must be true. An empty antecedent formula list is equivalent to the "always true" proposition, called the " verum ", denoted "⊤". (See Tee (symbol) .)
In the extreme case where the list of consequent formulas of a sequent is empty, the rule is still that at least one term on the right be true, which is clearly impossible . This is signified by the 'always false' proposition, called the " falsum ", denoted "⊥". Since the consequence is false, at least one of the antecedents must be false. Thus for example, ' A 1 , A 2 ⊢ ' means that at least one of the antecedents A 1 and A 2 must be false.
One sees here again a symmetry because of the disjunctive semantics on the right hand side. If the left side is empty, then one or more right-side propositions must be true. If the right side is empty, then one or more of the left-side propositions must be false.
The doubly extreme case ' ⊢ ', where both the antecedent and consequent lists of formulas are empty is " not satisfiable ". [ 3 ] In this case, the meaning of the sequent is effectively ' ⊤ ⊢ ⊥ '. This is equivalent to the sequent ' ⊢ ⊥ ', which clearly cannot be valid.
A sequent of the form ' ⊢ α, β ', for logical formulas α and β, means that either α is true or β is true (or both). But it does not mean that either α is a tautology or β is a tautology. To clarify this, consider the example ' ⊢ B ∨ A, C ∨ ¬A '. This is a valid sequent because either B ∨ A is true or C ∨ ¬A is true. But neither of these expressions is a tautology in isolation. It is the disjunction of these two expressions which is a tautology.
Similarly, a sequent of the form ' α, β ⊢ ', for logical formulas α and β, means that either α is false or β is false. But it does not mean that either α is a contradiction or β is a contradiction. To clarify this, consider the example ' B ∧ A, C ∧ ¬A ⊢ '. This is a valid sequent because either B ∧ A is false or C ∧ ¬A is false. But neither of these expressions is a contradiction in isolation. It is the conjunction of these two expressions which is a contradiction.
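These semantic readings can be checked mechanically. The sketch below (illustrative code, not from any source) decides validity of a propositional sequent by brute-force truth tables, reproducing both examples above:

```python
from itertools import product

def sequent_valid(antecedents, succedents, variables):
    """A sequent is valid iff in every truth assignment in which all
    antecedent formulas are true, at least one succedent formula is
    true. Formulas are functions from an assignment dict to bool."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(f(env) for f in antecedents) and not any(f(env) for f in succedents):
            return False
    return True

# ' ⊢ B ∨ A, C ∨ ¬A ' from the text: valid, although neither
# succedent formula is a tautology on its own.
b_or_a = lambda e: e["B"] or e["A"]
c_or_not_a = lambda e: e["C"] or not e["A"]
print(sequent_valid([], [b_or_a, c_or_not_a], ["A", "B", "C"]))  # True
print(sequent_valid([], [b_or_a], ["A", "B", "C"]))              # False

# ' B ∧ A, C ∧ ¬A ⊢ ': valid, because the two antecedent formulas
# are jointly contradictory (though neither is alone).
b_and_a = lambda e: e["B"] and e["A"]
c_and_not_a = lambda e: e["C"] and not e["A"]
print(sequent_valid([b_and_a, c_and_not_a], [], ["A", "B", "C"]))  # True
```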
Most proof systems provide ways to deduce one sequent from another. These inference rules are written with a list of sequents above and below a horizontal line . Such a rule indicates that if everything above the line is true, so is everything under the line.
A typical rule is:
This indicates that if we can deduce that Γ , α {\displaystyle \Gamma ,\alpha } yields Σ {\displaystyle \Sigma } , and that Γ {\displaystyle \Gamma } yields α {\displaystyle \alpha } , then we can also deduce that Γ {\displaystyle \Gamma } yields Σ {\displaystyle \Sigma } . (See also the full set of sequent calculus inference rules .)
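Rendered in standard inference-line notation, the rule just described (a context-sharing form of Gentzen's cut rule) reads:

```latex
% Context-sharing (additive) form of the cut rule.
\frac{\Gamma, \alpha \vdash \Sigma \qquad \Gamma \vdash \alpha}
     {\Gamma \vdash \Sigma}\;(\mathrm{Cut})
```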
The assertion symbol in sequents originally meant exactly the same as the implication operator. But over time, its meaning has changed to signify provability within a theory rather than semantic truth in all models.
In 1934, Gentzen did not define the assertion symbol ' ⊢ ' in a sequent to signify provability. He defined it to mean exactly the same as the implication operator ' ⇒ '. Using ' → ' instead of ' ⊢ ' and ' ⊃ ' instead of ' ⇒ ', he wrote: "The sequent A 1 , ..., A μ → B 1 , ..., B ν signifies, as regards content, exactly the same as the formula (A 1 & ... & A μ ) ⊃ (B 1 ∨ ... ∨ B ν )". [ 4 ] (Gentzen employed the right-arrow symbol between the antecedents and consequents of sequents. He employed the symbol ' ⊃ ' for the logical implication operator.)
In 1939, Hilbert and Bernays stated likewise that a sequent has the same meaning as the corresponding implication formula. [ 5 ]
In 1944, Alonzo Church emphasized that Gentzen's sequent assertions did not signify provability.
Numerous publications after this time have stated that the assertion symbol in sequents does signify provability within the theory where the sequents are formulated. Curry in 1963, [ 7 ] Lemmon in 1965, [ 2 ] and Huth and Ryan in 2004 [ 8 ] all state that the sequent assertion symbol signifies provability. However, Ben-Ari (2012 , p. 69) states that the assertion symbol in Gentzen-system sequents, which he denotes as ' ⇒ ', is part of the object language, not the metalanguage. [ 9 ]
According to Prawitz (1965): "The calculi of sequents can be understood as meta-calculi for the deducibility relation in the corresponding systems of natural deduction." [ 10 ] And furthermore: "A proof in a calculus of sequents can be looked upon as an instruction on how to construct a corresponding natural deduction." [ 11 ] In other words, the assertion symbol is part of the object language for the sequent calculus, which is a kind of meta-calculus, but simultaneously signifies deducibility in an underlying natural deduction system.
A sequent is a formalized statement of provability that is frequently used when specifying calculi for deduction . In the sequent calculus, the name sequent is used for the construct, which can be regarded as a specific kind of judgment , characteristic to this deduction system.
The intuitive meaning of the sequent Γ ⊢ Σ {\displaystyle \Gamma \vdash \Sigma } is that under the assumption of Γ the conclusion of Σ is provable. Classically, the formulae on the left of the turnstile can be interpreted conjunctively while the formulae on the right can be considered as a disjunction . This means that, when all formulae in Γ hold, then at least one formula in Σ also has to be true. If the succedent is empty, this is interpreted as falsity, i.e. Γ ⊢ {\displaystyle \Gamma \vdash } means that Γ proves falsity and is thus inconsistent. On the other hand an empty antecedent is assumed to be true, i.e., ⊢ Σ {\displaystyle \vdash \Sigma } means that Σ follows without any assumptions, i.e., it is always true (as a disjunction). A sequent of this form, with Γ empty, is known as a logical assertion .
Of course, other intuitive explanations are possible, which are classically equivalent. For example, Γ ⊢ Σ {\displaystyle \Gamma \vdash \Sigma } can be read as asserting that it cannot be the case that every formula in Γ is true and every formula in Σ is false (this is related to the double-negation interpretations of classical intuitionistic logic , such as Glivenko's theorem ).
In any case, these intuitive readings are only pedagogical. Since formal proofs in proof theory are purely syntactic , the meaning of (the derivation of) a sequent is only given by the properties of the calculus that provides the actual rules of inference .
Barring any contradictions in the technically precise definition above we can describe sequents in their introductory logical form. Γ {\displaystyle \Gamma } represents a set of assumptions that we begin our logical process with, for example "Socrates is a man" and "All men are mortal". The Σ {\displaystyle \Sigma } represents a logical conclusion that follows under these premises. For example "Socrates is mortal" follows from a reasonable formalization of the above points and we could expect to see it on the Σ {\displaystyle \Sigma } side of the turnstile . In this sense, ⊢ {\displaystyle \vdash } means the process of reasoning, or "therefore" in English.
The general notion of sequent introduced here can be specialized in various ways. A sequent is said to be an intuitionistic sequent if there is at most one formula in the succedent (although multi-succedent calculi for intuitionistic logic are also possible). More precisely, the restriction of the general sequent calculus to single-succedent-formula sequents, with the same inference rules as for general sequents, constitutes an intuitionistic sequent calculus. (This restricted sequent calculus is denoted LJ.)
Similarly, one can obtain calculi for dual-intuitionistic logic (a type of paraconsistent logic ) by requiring that sequents be singular in the antecedent.
In many cases, sequents are also assumed to consist of multisets or sets instead of sequences. Thus one disregards the order or even the numbers of occurrences of the formulae. For classical propositional logic this does not yield a problem, since the conclusions that one can draw from a collection of premises do not depend on these data. In substructural logic , however, this may become quite important.
Natural deduction systems use single-consequence conditional assertions, but they typically do not use the same sets of inference rules as Gentzen introduced in 1934. In particular, tabular natural deduction systems, which are very convenient for practical theorem-proving in propositional calculus and predicate calculus, were applied by Suppes (1999) and Lemmon (1965) for teaching introductory logic in textbooks.
Historically, sequents have been introduced by Gerhard Gentzen in order to specify his famous sequent calculus . [ 12 ] In his German publication he used the word "Sequenz". However, in English, the word " sequence " is already used as a translation to the German "Folge" and appears quite frequently in mathematics. The term "sequent" then has been created in search for an alternative translation of the German expression.
Kleene [ 13 ] makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'." | https://en.wikipedia.org/wiki/Sequent |
In mathematical logic , sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen ) instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference , giving a better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic , in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms . In that case, sequents signify conditional theorems of a first-order theory rather than conditional tautologies.
Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments.
In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules , relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules.
Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus . In a typical argument, quantifiers are eliminated, then propositional calculus is applied to unquantified expressions (which typically contain free variables ), and then the quantifiers are reintroduced. This very much parallels the way in which mathematical proofs are carried out in practice by mathematicians. Predicate calculus proofs are generally much easier to discover with this approach, and are often shorter. Natural deduction systems are more suited to practical theorem-proving. Sequent calculus systems are more suited to theoretical analysis.
In proof theory and mathematical logic , sequent calculus is a family of formal systems sharing a certain style of inference and certain formal properties. The first sequent calculi systems, LK and LJ , were introduced in 1934/1935 by Gerhard Gentzen [ 1 ] as a tool for studying natural deduction in first-order logic (in classical and intuitionistic versions, respectively). Gentzen's so-called "Main Theorem" ( Hauptsatz ) about LK and LJ was the cut-elimination theorem , [ 2 ] [ 3 ] a result with far-reaching meta-theoretic consequences, including consistency . Gentzen further demonstrated the power and flexibility of this technique a few years later, applying a cut-elimination argument to give a (transfinite) proof of the consistency of Peano arithmetic , in surprising response to Gödel's incompleteness theorems . Since this early work, sequent calculi, also called Gentzen systems , [ 4 ] [ 5 ] [ 6 ] [ 7 ] and the general concepts relating to them, have been widely applied in the fields of proof theory, mathematical logic, and automated deduction .
One way to classify different styles of deduction systems is to look at the form of judgments in the system, i.e. , which things may appear as the conclusion of a (sub)proof. The simplest judgment form is used in Hilbert-style deduction systems , where a judgment has the form B ,
where B {\displaystyle B} is any formula of first-order logic (or whatever logic the deduction system applies to, e.g. , propositional calculus or a higher-order logic or a modal logic ). The theorems are those formulas that appear as the concluding judgment in a valid proof. A Hilbert-style system needs no distinction between formulas and judgments; we make one here solely for comparison with the cases that follow.
The price paid for the simple syntax of a Hilbert-style system is that complete formal proofs tend to get extremely long. Concrete arguments about proofs in such a system almost always appeal to the deduction theorem . This leads to the idea of including the deduction theorem as a formal rule in the system, which happens in natural deduction .
In natural deduction, judgments have the shape A 1 , A 2 , … , A n ⊢ B ,
where the A i {\displaystyle A_{i}} 's and B {\displaystyle B} are again formulas and n ≥ 0 {\displaystyle n\geq 0} . In other words, a judgment consists of a list (possibly empty) of formulas on the left-hand side of a turnstile symbol " ⊢ {\displaystyle \vdash } ", with a single formula on the right-hand side, [ 8 ] [ 9 ] [ 10 ] (though permutations of the A i {\displaystyle A_{i}} 's are often immaterial). The theorems are those formulae B {\displaystyle B} such that ⊢ B {\displaystyle \vdash B} (with an empty left-hand side) is the conclusion of a valid proof.
(In some presentations of natural deduction, the A i {\displaystyle A_{i}} s and the turnstile are not written down explicitly; instead a two-dimensional notation from which they can be inferred is used.)
The standard semantics of a judgment in natural deduction is that it asserts that whenever [ 11 ] A 1 {\displaystyle A_{1}} , A 2 {\displaystyle A_{2}} , etc., are all true, B {\displaystyle B} will also be true. The judgments
A 1 , … , A n ⊢ B
and
⊢ ( A 1 ∧ ⋯ ∧ A n ) → B
are equivalent in the strong sense that a proof of either one may be extended to a proof of the other.
Finally, sequent calculus generalizes the form of a natural deduction judgment to A 1 , … , A n ⊢ B 1 , … , B k ,
a syntactic object called a sequent. The formulas on left-hand side of the turnstile are called the antecedent , and the formulas on right-hand side are called the succedent or consequent ; together they are called cedents or sequents . [ 12 ] Again, A i {\displaystyle A_{i}} and B i {\displaystyle B_{i}} are formulas, and n {\displaystyle n} and k {\displaystyle k} are nonnegative integers, that is, the left-hand-side or the right-hand-side (or neither or both) may be empty. As in natural deduction, theorems are those B {\displaystyle B} where ⊢ B {\displaystyle \vdash B} is the conclusion of a valid proof.
The standard semantics of a sequent is an assertion that whenever every A_i is true, at least one B_i will also be true. [ 13 ] Thus the empty sequent, having both cedents empty, is false. [ 14 ] One way to express this is that a comma to the left of the turnstile should be thought of as an "and", and a comma to the right of the turnstile should be thought of as an (inclusive) "or". The sequents
and
are equivalent in the strong sense that a proof of either sequent may be extended to a proof of the other sequent.
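This truth-conditional reading can be checked mechanically. The sketch below (Python; the helper name is illustrative, not from the article) evaluates a sequent under one fixed assignment of truth values: the sequent holds unless every antecedent formula is true and every succedent formula is false.

```python
def sequent_holds(antecedent, succedent):
    """Semantics of a sequent A_1,...,A_n |- B_1,...,B_k under fixed truth values.

    `antecedent` and `succedent` are lists of booleans (the truth values of
    the component formulas). The sequent holds unless all A_i are true and
    no B_j is true.
    """
    return not (all(antecedent) and not any(succedent))

# Whenever every A_i is true, at least one B_j must be true:
print(sequent_holds([True, True], [False, True]))   # True
print(sequent_holds([True, True], [False, False]))  # False
# The empty sequent, having both cedents empty, is false:
print(sequent_holds([], []))                        # False
```

Note that `all([])` is `True` and `any([])` is `False` in Python, so the empty sequent comes out false, matching the semantics described above.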
At first sight, this extension of the judgment form may appear to be a strange complication—it is not motivated by an obvious shortcoming of natural deduction, and it is initially confusing that the comma seems to mean entirely different things on the two sides of the turnstile. However, in a classical context the semantics of the sequent can also (by propositional tautology) be expressed either as
¬A_1 ∨ ¬A_2 ∨ … ∨ ¬A_n ∨ B_1 ∨ B_2 ∨ … ∨ B_k
(at least one of the As is false, or one of the Bs is true) or as
¬(A_1 ∧ A_2 ∧ … ∧ A_n ∧ ¬B_1 ∧ ¬B_2 ∧ … ∧ ¬B_k)
(it cannot be the case that all of the As are true and all of the Bs are false).
In these formulations, the only difference between formulas on either side of the turnstile is that one side is negated. Thus, swapping left for right in a sequent corresponds to negating all of the constituent formulas. This means that a symmetry such as De Morgan's laws , which manifests itself as logical negation on the semantic level, translates directly into a left–right symmetry of sequents—and indeed, the inference rules in sequent calculus for dealing with conjunction (∧) are mirror images of those dealing with disjunction (∨).
Many logicians feel [ citation needed ] that this symmetric presentation offers a deeper insight into the structure of the logic than other styles of proof system, where the classical duality of negation is not as apparent in the rules.
Gentzen drew a sharp distinction between his single-output natural deduction systems (NK and NJ) and his multiple-output sequent calculus systems (LK and LJ). He wrote that the intuitionistic natural deduction system NJ was somewhat ugly. [ 15 ] He said that the special role of the excluded middle in the classical natural deduction system NK is removed in the classical sequent calculus system LK. [ 16 ] He said that the sequent calculus LJ gave more symmetry than natural deduction NJ in the case of intuitionistic logic, as also in the case of classical logic (LK versus NK). [ 17 ] Then he said that, in addition to these reasons, the sequent calculus with multiple succedent formulas is intended particularly for his principal theorem ("Hauptsatz"). [ 18 ]
The word "sequent" is taken from the word "Sequenz" in Gentzen's 1934 paper. [ 1 ] Kleene makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'." [ 19 ]
Sequent calculus can be seen as a tool for proving formulas in propositional logic , similar to the method of analytic tableaux . It gives a series of steps that allows one to reduce the problem of proving a logical formula to simpler and simpler formulas until one arrives at trivial ones. [ 20 ]
Consider the following formula:
((p → r) ∨ (q → r)) → ((p ∧ q) → r)
This is written in the following form, where the proposition that needs to be proven is to the right of the turnstile symbol ⊢:
⊢ ((p → r) ∨ (q → r)) → ((p ∧ q) → r)
Now, instead of proving this from the axioms, it is enough to assume the premise of the implication and then try to prove its conclusion. [ 21 ] Hence one moves to the following sequent:
(p → r) ∨ (q → r) ⊢ (p ∧ q) → r
Again the right-hand side includes an implication, whose premise can further be assumed so that only its conclusion needs to be proven:
(p → r) ∨ (q → r), p ∧ q ⊢ r
Since the arguments in the left-hand side are assumed to be related by conjunction, this can be replaced by the following:
(p → r) ∨ (q → r), p, q ⊢ r
This is equivalent to proving the conclusion in both cases of the disjunction on the first argument on the left. Thus we may split the sequent to two, where we now have to prove each separately:
p → r, p, q ⊢ r
q → r, p, q ⊢ r
In the case of the first judgment, we rewrite p → r as ¬p ∨ r and split the sequent again to get:
¬p, p, q ⊢ r
r, p, q ⊢ r
The second sequent is done; the first sequent can be further simplified into:
p, q ⊢ r, p
This process can always be continued until there are only atomic formulas in each side.
The process can be graphically described by a rooted tree , as depicted on the right. The root of the tree is the formula we wish to prove; the leaves consist of atomic formulas only. The tree is known as a reduction tree . [ 20 ] [ 22 ]
The items to the left of the turnstile are understood to be connected by conjunction, and those to the right by disjunction. Therefore, when both consist only of atomic symbols, the sequent is accepted axiomatically (and always true) if and only if at least one of the symbols on the right also appears on the left.
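For a sequent in which both sides consist only of atomic symbols, this acceptance criterion is a one-line check (a sketch; the function name is illustrative):

```python
def atomic_sequent_accepted(left, right):
    """An all-atomic sequent is accepted axiomatically iff some atom
    occurs on both sides of the turnstile."""
    return bool(set(left) & set(right))

print(atomic_sequent_accepted(["p", "q"], ["r", "p"]))  # True: p is shared
print(atomic_sequent_accepted(["p", "q"], ["r"]))       # False
```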
Following are the rules by which one proceeds along the tree. Whenever one sequent is split into two, the tree vertex has two child vertices, and the tree is branched. Additionally, one may freely change the order of the arguments in each side; Γ and Δ stand for possible additional arguments. [ 20 ]
The usual term for the horizontal line used in Gentzen-style layouts for natural deduction is inference line . [ 23 ]
Starting with any formula in propositional logic, by a series of steps, the right side of the turnstile can be processed until it includes only atomic symbols. Then, the same is done for the left side. Since every logical operator appears in one of the rules above, and is removed by the rule, the process terminates when no logical operators remain: The formula has been decomposed .
Thus, the sequents in the leaves of the trees include only atomic symbols, which are either provable by the axiom or not, according to whether one of the symbols on the right also appears on the left.
It is easy to see that the steps in the tree preserve the semantic truth value of the formulas implied by them, with conjunction understood between the tree's different branches whenever there is a split. It is also obvious that an axiom is provable if and only if it is true for every assignment of truth values to the atomic symbols. Thus this system is sound and complete for classical propositional logic.
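The whole reduction-tree procedure for classical propositional logic can be written out directly. The sketch below uses an encoding of my own (atoms as strings, compound formulas as tuples); it decomposes a sequent until only atoms remain and then applies the acceptance criterion described above.

```python
# Formulas: "p" (atom), ("not", A), ("and", A, B), ("or", A, B), ("imp", A, B).

def provable(left, right):
    """Decide a propositional sequent left |- right via its reduction tree."""
    # Decompose a non-atomic formula on the left, if any.
    for i, f in enumerate(left):
        if isinstance(f, tuple):
            rest = left[:i] + left[i + 1:]
            op = f[0]
            if op == "not":
                return provable(rest, right + [f[1]])
            if op == "and":
                return provable(rest + [f[1], f[2]], right)
            if op == "or":   # the tree branches: prove both cases
                return provable(rest + [f[1]], right) and provable(rest + [f[2]], right)
            if op == "imp":  # G, A->B |- D  iff  G |- A, D  and  G, B |- D
                return provable(rest, right + [f[1]]) and provable(rest + [f[2]], right)
    # Then decompose on the right.
    for i, f in enumerate(right):
        if isinstance(f, tuple):
            rest = right[:i] + right[i + 1:]
            op = f[0]
            if op == "not":
                return provable(left + [f[1]], rest)
            if op == "and":
                return provable(left, rest + [f[1]]) and provable(left, rest + [f[2]])
            if op == "or":
                return provable(left, rest + [f[1], f[2]])
            if op == "imp":
                return provable(left + [f[1]], rest + [f[2]])
    # Only atoms remain: accept iff some atom appears on both sides.
    return bool(set(left) & set(right))

print(provable([], [("or", "p", ("not", "p"))]))  # True: excluded middle
print(provable([], ["p"]))                        # False
```

Each rule removes one connective, so the recursion terminates, and the leaves are exactly the all-atomic sequents described above.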
Sequent calculus is related to other axiomatizations of classical propositional calculus, such as Frege's propositional calculus or Jan Łukasiewicz's axiomatization (itself a part of the standard Hilbert system ): Every formula that can be proven in these has a reduction tree. This can be shown as follows: Every proof in propositional calculus uses only axioms and the inference rules. Each use of an axiom scheme yields a true logical formula, and can thus be proven in sequent calculus; examples for these are shown below . The only inference rule in the systems mentioned above is modus ponens , which is implemented by the cut rule.
This section introduces the rules of the sequent calculus LK (standing for Logistischer klassischer Kalkül) as introduced by Gentzen in 1934. [ 24 ] A (formal) proof in this calculus is a finite sequence of sequents, where each of the sequents is derivable from sequents appearing earlier in the sequence by using one of the rules below.
The following notation will be used:
Note that, contrary to the rules for proceeding along the reduction tree presented above, the following rules are for moving in the opposite direction, from axioms to theorems. Thus they are exact mirror-images of the rules above, except that here symmetry is not implicitly assumed, and rules regarding quantification are added.
Restrictions : In the rules marked with (†), (∀R) and (∃L), the variable y must not occur free anywhere in the respective lower sequents.
The above rules can be divided into two major groups: logical and structural ones. Each of the logical rules introduces a new logical formula either on the left or on the right of the turnstile ⊢. In contrast, the structural rules operate on the structure of the sequents, ignoring the exact shape of the formulas. The two exceptions to this general scheme are the axiom of identity (I) and the rule of (Cut).
Although stated in a formal way, the above rules allow for a very intuitive reading in terms of classical logic. Consider, for example, the rule (∧L1). It says that, whenever one can prove that Δ can be concluded from some sequence of formulas that contain A, then one can also conclude Δ from the (stronger) assumption that A ∧ B holds. Likewise, the rule (¬R) states that, if Γ and A suffice to conclude Δ, then from Γ alone one can either still conclude Δ or A must be false, i.e. ¬A holds. All the rules can be interpreted in this way.
For an intuition about the quantifier rules, consider the rule (∀R). Of course, concluding that ∀x A holds just from the fact that A[y/x] is true is not in general possible. If, however, the variable y is not mentioned elsewhere (i.e. it can still be chosen freely, without influencing the other formulas), then one may assume that A[y/x] holds for any value of y. The other rules should then be pretty straightforward.
Instead of viewing the rules as descriptions for legal derivations in predicate logic, one may also consider them as instructions for the construction of a proof for a given statement. In this case the rules can be read bottom-up; for example, (∧R) says that, to prove that A ∧ B follows from the assumptions Γ and Σ, it suffices to prove that A can be concluded from Γ and B can be concluded from Σ, respectively. Note that, given some antecedent, it is not clear how this is to be split into Γ and Σ. However, there are only finitely many possibilities to be checked since the antecedent by assumption is finite. This also illustrates how proof theory can be viewed as operating on proofs in a combinatorial fashion: given proofs for both A and B, one can construct a proof for A ∧ B.
When looking for some proof, most of the rules offer more or less direct recipes of how to do this. The rule of cut is different: it states that, when a formula A can be concluded and this formula may also serve as a premise for concluding other statements, then the formula A can be "cut out" and the respective derivations are joined. When constructing a proof bottom-up, this creates the problem of guessing A (since it does not appear at all below). The cut-elimination theorem is thus crucial to the applications of sequent calculus in automated deduction: it states that all uses of the cut rule can be eliminated from a proof, implying that any provable sequent can be given a cut-free proof.
The second rule that is somewhat special is the axiom of identity (I). The intuitive reading of this is obvious: every formula proves itself. Like the cut rule, the axiom of identity is somewhat redundant: the completeness of atomic initial sequents states that the rule can be restricted to atomic formulas without any loss of provability.
Observe that all rules have mirror companions, except the ones for implication. This reflects the fact that the usual language of first-order logic does not include the "is not implied by" connective ↚ that would be the De Morgan dual of implication. Adding such a connective with its natural rules would make the calculus completely left–right symmetric.
Here is the derivation of "⊢ A ∨ ¬A", known as the law of excluded middle (tertium non datur in Latin).
Next is the proof of a simple fact involving quantifiers. Note that the converse is not true, and its falsity can be seen when attempting to derive it bottom-up, because an existing free variable cannot be used in substitution in the rules (∀R) and (∃L).
For something more interesting we shall prove ((A → (B ∨ C)) → (((B → ¬A) ∧ ¬C) → ¬A)). It is straightforward to find the derivation, which exemplifies the usefulness of LK in automated proving.
These derivations also emphasize the strictly formal structure of the sequent calculus. For example, the logical rules as defined above always act on a formula immediately adjacent to the turnstile, such that the permutation rules are necessary. Note, however, that this is in part an artifact of the presentation, in the original style of Gentzen. A common simplification involves the use of multisets of formulas in the interpretation of the sequent, rather than sequences, eliminating the need for an explicit permutation rule. This corresponds to shifting commutativity of assumptions and derivations outside the sequent calculus, whereas LK embeds it within the system itself.
For certain formulations (i.e. variants) of the sequent calculus, a proof in such a calculus is isomorphic to an upside-down, closed analytic tableau . [ 25 ]
The structural rules deserve some additional discussion.
Weakening (W) allows the addition of arbitrary elements to a sequence. Intuitively, this is allowed in the antecedent because we can always restrict the scope of our proof (if all cars have wheels, then it's safe to say that all black cars have wheels); and in the succedent because we can always allow for alternative conclusions (if all cars have wheels, then it's safe to say that all cars have either wheels or wings).
Contraction (C) and Permutation (P) assure that neither the order (P) nor the multiplicity of occurrences (C) of elements of the sequences matters. Thus, one could instead of sequences also consider sets .
The extra effort of using sequences, however, is justified since part or all of the structural rules may be omitted. Doing so, one obtains the so-called substructural logics .
This system of rules can be shown to be both sound and complete with respect to first-order logic, i.e. a statement A follows semantically from a set of premises Γ (Γ ⊨ A) if and only if the sequent Γ ⊢ A can be derived by the above rules. [ 26 ]
In the sequent calculus, the rule of cut is admissible . This result is also referred to as Gentzen's Hauptsatz ("Main Theorem"). [ 2 ] [ 3 ]
The above rules can be modified in various ways:
There is some freedom of choice regarding the technical details of how sequents and structural rules are formalized without changing what sequents the system derives.
First of all, as mentioned above, the sequents can be viewed to consist of sets or multisets . In this case, the rules for permuting and (when using sets) contracting formulas are unnecessary.
The rule of weakening becomes admissible if the axiom (I) is changed to derive any sequent of the form Γ, A ⊢ A, Δ. Any weakening that appears in a derivation can then be moved to the beginning of the proof. This may be a convenient change when constructing proofs bottom-up.
One may also change whether rules with more than one premise share the same context for each of those premises or split their contexts between them: For example, (∨L) may instead be formulated as
Contraction and weakening make this version of the rule interderivable with the version above, although in their absence, as in linear logic , these rules define different connectives.
One can introduce ⊥, the absurdity constant representing false, with the axiom:
⊥ ⊢
Or if, as described above, weakening is to be an admissible rule, then with the axiom:
Γ, ⊥ ⊢ Δ
With ⊥, negation can be subsumed as a special case of implication, via the definition (¬A) ⟺ (A → ⊥).
Alternatively, one may restrict or forbid the use of some of the structural rules. This yields a variety of substructural logic systems. They are generally weaker than LK ( i.e. , they have fewer theorems), and thus not complete with respect to the standard semantics of first-order logic. However, they have other interesting properties that have led to applications in theoretical computer science and artificial intelligence .
Surprisingly, some small changes in the rules of LK suffice to turn it into a proof system for intuitionistic logic. [ 27 ] To this end, one has to restrict to sequents with at most one formula on the right-hand side, [ 28 ] and modify the rules to maintain this invariant. For example, (∨L) is reformulated as follows (where C is an arbitrary formula):
The resulting system is called LJ. It is sound and complete with respect to intuitionistic logic and admits a similar cut-elimination proof. This can be used in proving disjunction and existence properties .
In fact, the only rules in LK that need to be restricted to single-formula consequents are (→R), (¬R) (which can be seen as a special case of (→R), as described above) and (∀R). When multi-formula consequents are interpreted as disjunctions, all of the other inference rules of LK are derivable in LJ, while the rules (→R) and (∀R) become
and (when y does not occur free in the bottom sequent)
These rules are not intuitionistically valid.

Source: https://en.wikipedia.org/wiki/Sequent_calculus
In computer science , the process calculi (or process algebras ) are a diverse family of related approaches for formally modelling concurrent systems . Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation ). Leading examples of process calculi include CSP , CCS , ACP , and LOTOS . [ 1 ] More recent additions to the family include the π-calculus , the ambient calculus , PEPA , the fusion calculus and the join-calculus .
While the variety of existing process calculi is very large (including variants that incorporate stochastic behaviour, timing information, and specializations for studying molecular interactions), there are several features that all process calculi have in common: [ 2 ]
To define a process calculus , one starts with a set of names (or channels ) whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow: [ 3 ]
Parallel composition of two processes P and Q, usually written P | Q, is the key primitive distinguishing the process calculi from sequential models of computation. Parallel composition allows computation in P and Q to proceed simultaneously and independently. But it also allows interaction, that is, synchronisation and flow of information from P to Q (or vice versa) on a channel shared by both. Crucially, an agent or process can be connected to more than one channel at a time.
Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message waits until another agent has received the message. Asynchronous channels do not require any such synchronization. In some process calculi (notably the π-calculus ) channels themselves can be sent in messages through (other) channels, allowing the topology of process interconnections to change. Some process calculi also allow channels to be created during the execution of a computation.
Interaction can be (but isn't always) a directed flow of information. That is, input and output can be distinguished as dual interaction primitives. Process calculi that make such distinctions typically define an input operator (e.g. x(v)) and an output operator (e.g. x⟨y⟩), both of which name an interaction point (here x) that is used to synchronise with a dual interaction primitive.
Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will specify the data to be sent. In x⟨y⟩, this data is y. Similarly, if an input expects to receive data, one or more bound variables will act as place-holders to be substituted by data, when it arrives. In x(v), v plays that role. The choice of the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process calculi.
Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as: first receive some data on x and then send that data on y. Sequential composition can be used for such purposes. It is well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated with input or output, or both. For example, the process x(v)·P will wait for an input on x. Only when this input has occurred will the process P be activated, with the received data through x substituted for the identifier v.
The key operational reduction rule, containing the computational essence of process calculi, can be given solely in terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the calculi, but the essence remains roughly the same. The reduction rule is:
x⟨y⟩·P | x(v)·Q ⟶ P | Q[y/v]
The interpretation of this reduction rule is:
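A toy implementation makes the reduction concrete. The encoding below is entirely my own (tuples for processes, a Python list for parallel composition) and is only a sketch: it performs one communication step, substituting the sent name for the receiver's bound variable in its continuation.

```python
NIL = ("nil",)

def subst(proc, var, name):
    """Replace free occurrences of channel variable `var` by `name` in `proc`."""
    tag = proc[0]
    if tag == "nil":
        return proc
    if tag == "out":   # ("out", channel, message, continuation)
        _, ch, msg, cont = proc
        return ("out", name if ch == var else ch,
                       name if msg == var else msg, subst(cont, var, name))
    if tag == "in":    # ("in", channel, bound_var, continuation)
        _, ch, v, cont = proc
        ch = name if ch == var else ch
        if v == var:                 # the binder shadows `var`: stop here
            return ("in", ch, v, cont)
        return ("in", ch, v, subst(cont, var, name))
    raise ValueError(tag)

def step(procs):
    """Apply the reduction rule once: x<y>.P | x(v).Q  ->  P | Q[y/v]."""
    for i, p in enumerate(procs):
        for j, q in enumerate(procs):
            if i != j and p[0] == "out" and q[0] == "in" and p[1] == q[1]:
                rest = [r for k, r in enumerate(procs) if k not in (i, j)]
                return rest + [p[3], subst(q[3], q[2], p[2])]
    return None  # no communication is possible

# x<z>.0 | x(v).v<w>.0  reduces to  0 | z<w>.0
procs = [("out", "x", "z", NIL), ("in", "x", "v", ("out", "v", "w", NIL))]
print(step(procs))
```

Note how the received name z replaces the bound variable v, so the receiver's continuation becomes an output on z: the channel topology has changed, as in the π-calculus.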
The class of processes that P is allowed to range over as the continuation of the output operation substantially influences the properties of the calculus.
Processes do not limit the number of connections that can be made at a given interaction point. But interaction points allow interference (i.e. interaction). For the synthesis of compact, minimal and compositional systems, the ability to restrict interference is crucial. Hiding operations allow control of the connections made between interaction points when composing agents in parallel. Hiding can be denoted in a variety of ways. For example, in the π-calculus the hiding of a name x in P can be expressed as (νx)P, while in CSP it might be written as P \ {x}.
The operations presented so far describe only finite interaction and are consequently insufficient for full computability, which includes non-terminating behaviour. Recursion and replication are operations that allow finite descriptions of infinite behaviour. Recursion is well known from the sequential world. Replication !P can be understood as abbreviating the parallel composition of a countably infinite number of P processes:
!P = P | P | P | ⋯
Process calculi generally also include a null process (variously denoted as nil, 0, STOP, δ, or some other appropriate symbol) which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on top of which more interesting processes can be generated.
Process algebra has been studied for discrete time and continuous time (real time or dense time). [ 4 ]
In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a computable function , with μ-recursive functions , Turing machines and the lambda calculus possibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports the Church-Turing thesis . Another shared feature is more rarely commented on: they all are most readily understood as models of sequential computation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in 1973 emerged from this line of inquiry.
Research on process calculi began in earnest with Robin Milner 's seminal work on the Calculus of Communicating Systems (CCS) during the period from 1973 to 1980. C.A.R. Hoare 's Communicating Sequential Processes (CSP) first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s. There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982 Jan Bergstra and Jan Willem Klop began work on what came to be known as the Algebra of Communicating Processes (ACP), and introduced the term process algebra to describe their work. [ 1 ] CCS, CSP, and ACP constitute the three major branches of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi.
Various process calculi have been studied and not all of them fit the paradigm sketched here. The most prominent example may be the ambient calculus . This is to be expected as process calculi are an active field of study. Currently research on process calculi focuses on the following problems.
The ideas behind process algebra have given rise to several tools including:
The history monoid is the free object that is generically able to represent the histories of individual communicating processes. A process calculus is then a formal language imposed on a history monoid in a consistent fashion. [ 6 ] That is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed state transitions. Thus, a process calculus is to a history monoid what a formal language is to a free monoid (a formal language is a subset of the set of all possible finite-length strings of an alphabet generated by the Kleene star ).
The use of channels for communication is one of the features distinguishing the process calculi from other models of concurrency , such as Petri nets and the actor model (see Actor model and process calculi ). One of the fundamental motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making it easier to reason about processes algebraically.

Source: https://en.wikipedia.org/wiki/Sequential_composition
In object-oriented programming , sequential coupling (also known as temporal coupling ) is a form of coupling where a class requires its methods to be called in a particular sequence. This may be an anti-pattern , depending on context.
Methods whose name starts with Init, Begin, Start, etc. may indicate the existence of sequential coupling.
Using a car as an analogy , if the user steps on the gas without first starting the engine, the car does not crash, fail, or throw an exception; it simply fails to accelerate.
Sequential coupling can be refactored with the template method pattern to overcome the problems posed by the usage of this anti-pattern . [ 1 ]

Source: https://en.wikipedia.org/wiki/Sequential_coupling
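A minimal sketch of both the anti-pattern and the refactoring (the class and method names here are hypothetical, chosen only for illustration):

```python
# Anti-pattern: callers must remember to call connect() before query().
class FragileClient:
    def __init__(self):
        self._ready = False

    def connect(self):
        self._ready = True

    def query(self, q):
        if not self._ready:
            raise RuntimeError("connect() must be called before query()")
        return f"result of {q}"

# Template method refactoring: the base class fixes the required sequence
# once, so callers can no longer get the order wrong.
class Client:
    def run(self, q):                  # the template method
        self._connect()
        try:
            return self._query(q)
        finally:
            self._close()

    def _connect(self): ...            # hooks for subclasses
    def _close(self): ...

    def _query(self, q):
        raise NotImplementedError

class EchoClient(Client):
    def _query(self, q):
        return f"result of {q}"

print(EchoClient().run("ping"))  # result of ping
```

Callers of `EchoClient` only ever see `run`, which encapsulates the mandatory call order, while subclasses override the individual steps.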
First recognised by John Wozencraft , sequential decoding is a limited-memory technique for decoding tree codes . Sequential decoding is mainly used as an approximate decoding algorithm for long constraint-length convolutional codes . This approach may not be as accurate as the Viterbi algorithm but can save a substantial amount of computer memory. It was used to decode a convolutional code in the 1968 Pioneer 9 mission.
Sequential decoding explores the tree code in a way that tries to minimise the computational cost and the memory required to store the tree.
There is a range of sequential decoding approaches based on the choice of metric and algorithm. Metrics include:
Algorithms include:
Given a partially explored tree (represented by a set of nodes which are the limit of exploration), we would like to know the best node from which to explore further. The Fano metric (named after Robert Fano ) allows one to calculate which node is the best to explore further. This metric is optimal given no other constraints (e.g. memory).
For a binary symmetric channel (with error probability p) the Fano metric can be derived via Bayes' theorem . We are interested in following the most likely path P_i given an explored state of the tree X and a received sequence r. Using the language of probability and Bayes' theorem we want to choose the maximum over i of:
Pr(P_i | X, r) ∝ Pr(r | P_i, X) Pr(P_i | X)
We now introduce the following notation: b is the number of coded bits per branch of the tree; n_i is the number of branches along the partial path P_i; d_i is the number of the first n_i b coded bits along P_i that differ from the received sequence; N is the total number of branches in a complete path; and R is the code rate.
We express the likelihood Pr(r | P_i, X) as p^{d_i} (1−p)^{n_i b − d_i} 2^{−(N − n_i) b} (by using the binary symmetric channel likelihood for the first n_i b bits followed by a uniform prior over the remaining bits).
We express the prior Pr(P_i | X) in terms of the number of branch choices one has made, n_i, and the number of branches from each node, 2^{Rb}, giving Pr(P_i | X) = 2^{−n_i R b}.
Therefore:
Pr(P_i | X, r) ∝ p^{d_i} (1−p)^{n_i b − d_i} 2^{−(N − n_i) b} 2^{−n_i R b}
We can equivalently maximise the log of this probability, i.e. (dropping the constant term −Nb)
d_i log2(p) + (n_i b − d_i) log2(1−p) + n_i b (1 − R)
This last expression is the Fano metric. The important point to see is that we have two terms here: one based on the number of wrong bits and one based on the number of right bits. We can therefore update the Fano metric simply by adding log2(p) + 1 − R for each non-matching bit and log2(1−p) + 1 − R for each matching bit.
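Under these per-bit updates, the metric of a partial path can be accumulated bit by bit. A small sketch (the function name is mine, not from the text):

```python
from math import log2

def fano_branch_update(matches, p, R):
    """Total Fano metric increment for a run of coded bits on a BSC with
    crossover probability p and code rate R. Each matching bit adds
    log2(1-p) + 1 - R; each non-matching bit adds log2(p) + 1 - R."""
    total = 0.0
    for m in matches:  # iterable of booleans, one per coded bit
        total += (log2(1 - p) if m else log2(p)) + 1 - R
    return total

# Three matching bits and one channel error, with p = 0.1 and R = 1/2:
print(fano_branch_update([True, True, True, False], p=0.1, R=0.5))
```

For small p the matching-bit increment is positive and the non-matching increment strongly negative, so wrong paths quickly fall behind the correct one.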
For sequential decoding to be a good choice of decoding algorithm, the number of states explored should remain small (otherwise an algorithm which deliberately explores all states, e.g. the Viterbi algorithm , may be more suitable). For a particular noise level there is a maximum coding rate R_0 called the computational cutoff rate where there is a finite backtracking limit. For the binary symmetric channel:
R_0 = 1 − log2(1 + 2√(p(1−p)))
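For the binary symmetric channel the cutoff rate has the standard closed form R_0 = 1 − log2(1 + 2√(p(1−p))); a quick computation (sketch, function name mine):

```python
from math import log2, sqrt

def cutoff_rate_bsc(p):
    """Computational cutoff rate R0 of a binary symmetric channel with
    crossover probability p: R0 = 1 - log2(1 + 2*sqrt(p*(1-p)))."""
    return 1 - log2(1 + 2 * sqrt(p * (1 - p)))

print(cutoff_rate_bsc(0.0))  # 1.0 (noiseless channel)
print(cutoff_rate_bsc(0.5))  # 0.0 (useless channel)
```

Operating below R_0 keeps the expected amount of backtracking finite; above it, the work per decoded bit blows up.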
The simplest algorithm to describe is the "stack algorithm", in which the best N paths found so far are stored. Sequential decoding may introduce an additional error above Viterbi decoding when the correct path has N or more higher-scoring paths above it; at this point the best path will drop off the stack and no longer be considered.
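A minimal sketch of the stack algorithm, using a toy rate-1 "identity" code in which each tree branch is labelled by its input bit and one received bit is consumed per level. A real decoder would instead expand each branch through a convolutional encoder; the stack bound and helper names here are illustrative assumptions:

```python
import heapq
import math

def stack_decode(received, p, R, stack_size=8):
    """Toy stack-algorithm sketch: keep the best partial paths in a
    priority queue, always extend the path with the highest Fano
    metric, and prune the stack when it grows beyond stack_size."""
    match = math.log2(1 - p) + 1 - R
    mismatch = math.log2(p) + 1 - R
    stack = [(0.0, [])]                # (negated metric, path) min-heap
    while True:
        neg_metric, path = heapq.heappop(stack)
        if len(path) == len(received):
            return path                # first full-length pop = best path
        for bit in (0, 1):             # branch labels of the toy code tree
            inc = match if bit == received[len(path)] else mismatch
            heapq.heappush(stack, (neg_metric - inc, path + [bit]))
        if len(stack) > stack_size:
            stack = heapq.nsmallest(stack_size, stack)  # drop worst paths
            heapq.heapify(stack)
```

With this trivial code the decoder simply follows the received bits, but the same loop structure applies when each branch emits the output of a real encoder.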
The famous Fano algorithm (named after Robert Fano ) has a very low memory requirement and hence is suited to hardware implementations. This algorithm explores backwards and forward from a single point on the tree. | https://en.wikipedia.org/wiki/Sequential_decoding |
Sequential hermaphroditism (called dichogamy in botany ) is one of the two types of hermaphroditism , the other type being simultaneous hermaphroditism . It occurs when the organism's sex changes at some point in its life. [ 1 ] A sequential hermaphrodite produces eggs (female gametes ) and sperm (male gametes ) at different stages in life. [ 2 ] Sequential hermaphroditism occurs in many fish , gastropods , and plants. Species that can undergo these changes do so as a normal event within their reproductive cycle, usually cued by either social structure or the achievement of a certain age or size. [ 3 ]
In animals, the different types of change are male to female ( protandry or protandrous hermaphroditism ), female to male ( protogyny or protogynous hermaphroditism ), [ 4 ] and bidirectional ( serial or bidirectional hermaphroditism ). [ 5 ] Both protogynous and protandrous hermaphroditism allow the organism to switch between functional male and functional female. [ 6 ] Bidirectional hermaphrodites have the capacity for sex change in either direction between male and female or female and male, potentially repeatedly during their lifetime. [ 5 ] These various types of sequential hermaphroditism may indicate that there is no advantage based on the original sex of an individual organism. [ 6 ] Those that change gonadal sex can have both female and male germ cells in the gonads or can change from one complete gonadal type to the other during their last life stage. [ 7 ]
In plants, individual flowers are called dichogamous if their function has the two sexes separated in time, although the plant as a whole may have functionally male and functionally female flowers open at any one moment. A flower is protogynous if its function is first female, then male, and protandrous if its function is first male then female. It used to be thought that this reduced inbreeding , [ 8 ] but it may be a more general mechanism for reducing pollen-pistil interference. [ 9 ] [ clarification needed ]
Hermaphroditic fishes are almost exclusively sequential—simultaneous hermaphroditism is only known to occur in a few fishes, such as the Rivulid killifish Kryptolebias marmoratus [ 10 ] and hamlets . Teleost fishes are the only vertebrate lineage where sequential hermaphroditism occurs. [ 3 ]
In general, protandrous hermaphrodites are animals that develop as males, but can later reproduce as females. [ 11 ] However, protandry features a spectrum of different forms, which are characterized by the overlap between male and female reproductive function throughout an organism's lifetime:
Furthermore, there are also species that reproduce as both sexes throughout their lifespans (i.e. simultaneous hermaphrodites ), but shift their reproductive resources from male to female over time. [ 13 ]
Protandry occurs in a widespread range of animal phyla. [ 14 ] In fact, protandrous hermaphroditism occurs in many fish, [ 15 ] mollusks , [ 12 ] and crustaceans , [ 16 ] but is completely absent in terrestrial vertebrates. [ 11 ]
Protandrous fishes include teleost species in the families Pomacentridae , Sparidae , and Gobiidae . [ 17 ] A common example of a protandrous species is the clownfish, which has a very structured society. In the species Amphiprion percula , there are zero to four individuals excluded from breeding and a breeding pair living in a sea anemone . Dominance is based on size, the female being the largest and the reproductive male being the second largest. The rest of the group is made up of progressively smaller males that do not breed and have no functioning gonads. [ 18 ] If the female dies, in many cases, the reproductive male gains weight and becomes the female for that group. The largest non-breeding male then sexually matures and becomes the reproductive male for the group. [ 19 ]
Other protandrous fishes can be found in the orders Clupeiformes , Siluriformes , and Stomiiformes . Since these groups are distantly related and have many intermediate relatives that are not protandrous, it strongly suggests that protandry evolved multiple times. [ 20 ]
Phylogenies support this assumption because ancestral states differ for each family. For example, the ancestral state of the family Pomacentridae was gonochoristic (single-sexed), indicating that protandry evolved within the family. [ 17 ] Therefore, because other families also contain protandrous species, protandry likely has evolved multiple times. [ citation needed ]
Other examples of protandrous animals include:
Protogynous hermaphrodites are animals that are born female and at some point in their lifespan change sex to male. [ 27 ] Protogyny is a more common form of sequential hermaphroditism in fish, especially when compared to protandry. [ 28 ] As the animal ages, it shifts sex to become a male animal due to internal or external triggers, undergoing physiological and behavioral changes. [ 29 ] In many fishes, female fecundity increases continuously with age, while in other species larger males have a selective advantage (such as in harems), so it is hypothesized that the mating system can determine whether it is more selectively advantageous to be a male or female when an organism's body is larger. [ 27 ] [ 17 ]
Protogyny is the most common form of hermaphroditism in fish in nature. [ 30 ] About 75% of the 500 known sequentially hermaphroditic fish species are protogynous and often have polygynous mating systems. [ 31 ] [ 32 ] In these systems, large males use aggressive territorial defense to dominate female mating. This causes small males to have a severe reproductive disadvantage, which promotes strong selection of size-based protogyny. [ 33 ] Therefore, if an individual is small, it is more reproductively advantageous to be female because they will still be able to reproduce, unlike small males. [ citation needed ]
Common model organisms for this type of sequential hermaphroditism are wrasses . They are one of the largest families of coral reef fish and belong to the family Labridae. Wrasses are found around the world in all marine habitats and tend to bury themselves in sand at night or when they feel threatened. [ 34 ] In wrasses, the larger of a mating pair is the male, while the smaller is the female. In most cases, females and immature males have a uniform color while the male has the terminal bicolored phase. [ 35 ] Large males hold territories and try to pair spawn, while small to mid-size initial-phase males live with females and group spawn . [ 36 ] In other words, both the initial- and terminal-phase males can breed, but they differ in the way they do it.
In the California sheephead ( Semicossyphus pulcher ), a type of wrasse, when the female changes to male, the ovaries degenerate and spermatogenic crypts appear in the gonads. [ 37 ] The general structure of the gonads remains ovarian after the transformation and the sperm is transported through a series of ducts on the periphery of the gonad and oviduct . Here, sex change is age-dependent. For example, the California sheephead stays a female for four to six years before changing sex [ 35 ] since all California sheephead are born female. [ 38 ]
Bluehead wrasses begin life as males or females, but females can change sex and function as males. Young females and males start with a dull initial-phase coloration before progressing into a brilliant terminal-phase coloration, which has a change in intensity of color, stripes, and bars. Terminal-phase coloration occurs when males become large enough to defend territory. [ 39 ] Initial-phase males have larger testes than larger, terminal phase males, which enables the initial-phase males to produce a large amount of sperm. This strategy allows these males to compete with the larger territorial male. [ 40 ]
Botryllus schlosseri , a colonial tunicate , is a protogynous hermaphrodite. In a colony, eggs are released about two days before the peak of sperm emission. [ 41 ] Although self-fertilization is avoided and cross-fertilization favored by this strategy, self-fertilization is still possible. Self-fertilized eggs develop with a substantially higher frequency of anomalies during cleavage than cross-fertilized eggs (23% vs. 1.6%). [ 41 ] Also a significantly lower percentage of larvae derived from self-fertilized eggs metamorphose, and the growth of the colonies derived from their metamorphosis is significantly lower. These findings suggest that self-fertilization gives rise to inbreeding depression associated with developmental deficits that are likely caused by expression of deleterious recessive mutations. [ 42 ]
Other examples of protogynous organisms include:
The ultimate cause of a biological event determines how the event makes organisms better adapted to their environment, and thus why evolution by natural selection has produced that event. While a large number of ultimate causes of hermaphroditism have been proposed, the two causes most relevant to sequential hermaphroditism are the size-advantage model [ 27 ] and protection against inbreeding. [ 54 ]
The size-advantage model states that individuals of a given sex reproduce more effectively if they are a certain size or age. To create selection for sequential hermaphroditism, small individuals must have higher reproductive fitness as one sex and larger individuals must have higher reproductive fitness as the opposite sex. For example, eggs are larger than sperm, thus larger individuals are able to make more eggs, so individuals could maximize their reproductive potential by beginning life as male and then turning female upon achieving a certain size. [ 54 ]
In most ectotherms , body size and female fecundity are positively correlated, [ 4 ] which supports the size-advantage model. Kazancioglu and Alonzo (2010) performed the first comparative analysis of sex change in Labridae . Their analysis supports the size-advantage model and suggests that sequential hermaphroditism is correlated with the size advantage. They determined that dioecy was less likely to occur when the size advantage is stronger than other advantages. [ 55 ] Warner suggests that selection for protandry may occur in populations where female fecundity is augmented with age and individuals mate randomly. Selection for protogyny may occur where there are traits in the population that depress male fecundity at early ages (territoriality, mate selection or inexperience) and where female fecundity is decreased with age; the latter seems to be rare in the field. [ 4 ] An example of territoriality favoring protogyny occurs when there is a need to protect the habitat and being a large male is advantageous for this purpose. In the mating aspect, a large male has a higher chance of mating, while this has no effect on female mating fitness. [ 55 ] Thus, Warner suggests that female fecundity has more impact on sequential hermaphroditism than the age structure of the population. [ 4 ]
The size-advantage model predicts that sex change would only be absent if the relationship between size/age with reproductive potential is identical in both sexes. With this prediction one would assume that hermaphroditism is very common, but this is not the case. Sequential hermaphroditism is very rare and according to scientists this is due to some cost that decreases fitness in sex changers as opposed to those who do not change sex. Some of the hypotheses proposed for the dearth of hermaphrodites are the energetic cost of sex change, genetic and/or physiological barriers to sex change, and sex-specific mortality rates. [ 4 ] [ 56 ] [ 57 ]
In 2009, Kazancioglu and Alonzo found that dioecy was only favored when the cost of changing sex was very large. This indicates that the cost of sex change does not by itself explain the rarity of sequential hermaphroditism. [ 58 ]
The size-advantage model also explains under which mating systems protogyny or protandry would be more adaptive. [ 54 ] [ 59 ] In a haremic mating system, with one large male controlling access to numerous females for mating, this large male achieves greater reproductive success than a small female, as he can fertilize numerous batches of eggs. So in this kind of haremic mating system (such as in many wrasses), protogyny is the most adaptive strategy ("breed as a female when small, and then change to male when you're large and able to control a harem"). In a paired mating system (one male mates with one female, such as in clownfish or moray eels) the male can only fertilize one batch of eggs, whereas the female needs only a small male to fertilize her batch of eggs, so the larger she is, the more eggs she will be able to produce and have fertilized. Therefore, in this kind of paired mating system, protandry is the most adaptive strategy ("breed as a male when small, and then change to female when you're larger"). [ citation needed ]
Sequential hermaphroditism can also protect against inbreeding in populations of organisms that have low enough motility and/or are sparsely distributed enough that there is a considerable risk of siblings encountering each other after reaching sexual maturity, and interbreeding. If siblings are all the same or similar ages, and if they all begin life as one sex and then transition to the other sex at about the same age, then siblings are highly likely to be the same sex at any given time. This should dramatically reduce the likelihood of inbreeding. Both protandry and protogyny are known to help prevent inbreeding in plants, [ 2 ] and many examples of sequential hermaphroditism attributable to inbreeding prevention have been identified in a wide variety of animals. [ 54 ]
The proximate cause of a biological event concerns the molecular and physiological mechanisms that produce the event. Many studies have focused on the proximate causes of sequential hermaphroditism, which may be caused by various hormonal and enzyme changes in organisms. [ citation needed ]
The role of aromatase has been widely studied in this area. Aromatase is an enzyme that controls the androgen / estrogen ratio in animals by catalyzing the conversion of testosterone into oestradiol , which is irreversible. It has been discovered that the aromatase pathway mediates sex change in both directions in organisms. [ 60 ] Many studies also involve understanding the effect of aromatase inhibitors on sex change. One such study was performed by Kobayashi et al. In their study they tested the role of estrogens in male three-spot wrasses ( Halichoeres trimaculatus ). They discovered that fish treated with aromatase inhibitors showed decreased gonodal weight, plasma estrogen level and spermatogonial proliferation in the testis as well as increased androgen levels. Their results suggest that estrogens are important in the regulation of spermatogenesis in this protogynous hermaphrodite. [ 61 ]
Previous studies have also investigated sex reversal mechanisms in teleost fish. During sex reversal, their whole gonads including the germinal epithelium undergoes significant changes, remodeling, and reformation. One study on the teleost Synbranchus marmoratus found that metalloproteinases (MMPs) were involved in gonadal remodeling. In this process, the ovaries degenerated and were slowly replaced by the germinal male tissue. In particular, the action of MMPs induced significant changes in the interstitial gonadal tissue, allowing for reorganization of germinal epithelial tissue. The study also found that sex steroids help in the sex reversal process by being synthesized as Leydig cells replicate and differentiate. Thus, the synthesis of sex steroids coincides with gonadal remodeling, which is triggered by MMPs produced by germinal epithelial tissue. These results suggests that MMPs and changes in steroid levels play a large role in sequential hermaphroditism in teleosts. [ 62 ]
Sequential hermaphrodites almost always have a sex ratio biased towards the birth sex, and consequently experience significantly more reproductive success after switching sexes. According to population genetics theory, this should decrease genetic diversity and effective population size (Ne). However, a study of the ecologically similar santer sea bream ( gonochoric ) and slinger sea bream (protogynous) in South African waters found that genetic diversities were similar in the two species, and while the instantaneous Ne was lower for the sex-changer, the two were similar over a relatively short time horizon. [ 63 ] The ability of these organisms to change biological sex has allowed for better reproductive success because certain genes can be passed down more easily from generation to generation. The change in sex also allows organisms to reproduce if no individuals of the opposite sex are already present. [ 64 ]
Sequential hermaphroditism in plants is the process in which a plant changes its sex during its lifetime. It is very rare: fewer than 0.1% of recorded plant species entirely change their sex. [ 65 ] The Patchy Environment Model and Size Dependent Sex Allocation are the two environmental factors which drive sequential hermaphroditism in plants. The Patchy Environment Model states that plants maximize the use of their resources by changing their sex. For example, if a plant benefits more from the resources of a given environment in a certain sex, it will change to that sex. Furthermore, Size Dependent Sex Allocation holds that in sequentially hermaphroditic plants it is preferable to change sex in a way that maximizes overall fitness relative to size over time. [ 66 ] Similar to maximizing the use of resources, if the combination of size and fitness for a certain sex is more beneficial, the plant will change to that sex. Evolutionarily, sequential hermaphrodites emerged as certain species obtained a reproductive advantage by changing their sex. [ citation needed ]
Arisaema triphyllum (Jack in the pulpit) is a plant species which is commonly cited as exercising sequential hermaphroditism. [ 67 ] [ 68 ] As A. triphyllum grows, it develops from a nonsexual juvenile plant, to a young all-male plant, to a male-and-female plant, to an all-female plant. This means that A. triphyllum is changing its sex from male to female over the course of its lifetime as its size increases, showcasing Size Dependent Sex Allocation. Another example is Arisaema dracontium or the green dragon, which can change its sex on a yearly basis. [ 67 ] The sex of A. dracontium is also dependent on size: the smaller flowers are male while the larger flowers are both male and female. Typically in Arisaema species, small flowers only contain stamens, meaning they are males. Larger flowers can contain both stamen and pistils or only pistils, meaning they can be either hermaphrodites or strictly female. [ 67 ]
Striped maple trees ( Acer pensylvanicum ) have been shown to change sex over a period of several years, and are sequential hermaphrodites. [ 69 ] When branches were removed from striped maple trees [ 70 ] they changed to female or to female and male as a response to the damage. Sickness will also trigger a sex change to either female or female and male. [ 70 ]
In the context of the sexuality of flowering plants (angiosperms), there are two forms of dichogamy: protogyny —female function precedes male function—and protandry —male function precedes female function.
For example, in Asteraceae the bisexual tubular (disk) florets are usually protandrous, whereas in Acacia and Banksia the flowers are protogynous: the style of the female phase elongates first, and later, in the male phase, the anthers shed pollen. [ citation needed ]
Historically, dichogamy has been regarded as a mechanism for reducing inbreeding . [ 8 ] However, a survey of the angiosperms found that self-incompatible (SI) plants, which are incapable of inbreeding, were as likely to be dichogamous as were self-compatible (SC) plants. [ 71 ] This finding led to a reinterpretation of dichogamy as a more general mechanism for reducing the impact of pollen - pistil interference on pollen import and export. [ 9 ] [ 72 ] Unlike the inbreeding avoidance hypothesis, which focused on female function, this interference-avoidance hypothesis considers both reproductive functions. [ citation needed ]
In many hermaphroditic plant species, the close physical proximity of anthers and stigma makes interference unavoidable, either within a flower or between flowers on an inflorescence . Within-flower interference, which occurs when either the pistil interrupts pollen removal or the anthers prevent pollen deposition, can result in autonomous or facilitated self-pollination. [ 73 ] [ 9 ] Between-flower interference results from similar mechanisms, except that the interfering structures occur on different flowers within the same inflorescence and it requires pollinator activity. This results in geitonogamous pollination, the transfer of pollen between flowers of the same individual. [ 74 ] [ 73 ] In contrast to within-flower interference, geitonogamy necessarily involves the same processes as outcrossing: pollinator attraction, reward provisioning, and pollen removal. Therefore, between-flower interference not only carries the cost of self-fertilization ( inbreeding depression [ 75 ] [ 76 ] ), but also reduces the amount of pollen available for export (so-called "pollen discounting" [ 77 ] ). Because pollen discounting diminishes outcross siring success, interference avoidance may be an important evolutionary force in floral biology. [ 77 ] [ 78 ] [ 72 ] [ 79 ] Dichogamy may reduce between-flower interference by reducing or eliminating the temporal overlap between stigma and anthers within an inflorescence. Large inflorescences attract more pollinators, potentially enhancing reproductive success by increasing pollen import and export. [ 80 ] [ 81 ] [ 82 ] [ 75 ] [ 83 ] [ 84 ] However, large inflorescences also increase the opportunities for both geitonogamy and pollen discounting, so that the opportunity for between-flower interference increases with inflorescence size. [ 78 ] Consequently, the evolution of floral display size may represent a compromise between maximizing pollinator visitation and minimizing geitonogamy and pollen discounting (Barrett et al., 1994). [ 85 ] [ 86 ] [ 87 ]
Protandry may be particularly relevant to this compromise, because it often results in an inflorescence structure with female phase flowers positioned below male phase flowers. [ 88 ] Given the tendency of many insect pollinators to forage upwards through inflorescences, [ 89 ] protandry may enhance pollen export by reducing between-flower interference. [ 90 ] [ 8 ] Furthermore, this enhanced pollen export should increase as floral display size increases, because between-flower interference should increase with floral display size. These effects of protandry on between-flower interference may decouple the benefits of large inflorescences from the consequences of geitonogamy and pollen discounting. Such a decoupling would provide a significant reproductive advantage through increased pollinator visitation and siring success. [ citation needed ]
It has been demonstrated experimentally that dichogamy both reduced rates of self-fertilization and enhanced outcross siring success through reductions in geitonogamy and pollen discounting, respectively. [ 90 ] The influence of inflorescence size on this siring advantage shows a bimodal distribution, with increased siring success at both small and large display sizes. [ 91 ]
The duration of stigmatic receptivity plays a key role in regulating the isolation of the male and female stages in dichogamous plants, and stigmatic receptivity can be influenced by both temperature and humidity. [ 92 ] In the moth-pollinated orchid Satyrium longicauda , protandry tends to promote male mating success. [ 93 ] | https://en.wikipedia.org/wiki/Sequential_hermaphroditism |
Sequential pattern mining is a topic of data mining concerned with finding statistically relevant patterns between data examples where the values are delivered in a sequence. [ 1 ] [ 2 ] It is usually presumed that the values are discrete, and thus time series mining is closely related, but usually considered a different activity. Sequential pattern mining is a special case of structured data mining .
There are several key traditional computational problems addressed within this field. These include building efficient databases and indexes for sequence information, extracting the frequently occurring patterns, comparing sequences for similarity , and recovering missing sequence members. In general, sequence mining problems can be classified as string mining which is typically based on string processing algorithms and itemset mining which is typically based on association rule learning . Local process models [ 3 ] extend sequential pattern mining to more complex patterns that can include (exclusive) choices, loops, and concurrency constructs in addition to the sequential ordering construct.
String mining typically deals with a limited alphabet for items that appear in a sequence , but the sequence itself is typically very long. Examples of an alphabet are the ASCII character set used in natural-language text, the nucleotide bases 'A', 'G', 'C' and 'T' in DNA sequences , or the amino acids for protein sequences . In biological applications, analysis of the arrangement of the alphabet in strings can be used to examine gene and protein sequences and determine their properties. Knowing the sequence of letters of a DNA or a protein is not an ultimate goal in itself. Rather, the major task is to understand the sequence in terms of its structure and biological function . This is typically achieved first by identifying individual regions or structural units within each sequence and then assigning a function to each structural unit. In many cases this requires comparing a given sequence with previously studied ones. The comparison between strings becomes complicated when insertions , deletions and mutations occur in a string.
A survey and taxonomy of the key algorithms for sequence comparison for bioinformatics is presented by Abouelhoda & Ghanem (2010), which include: [ 4 ]
Some problems in sequence mining lend themselves to discovering frequent itemsets and the order they appear, for example, one is seeking rules of the form "if a {customer buys a car}, he or she is likely to {buy insurance} within 1 week", or in the context of stock prices, "if {Nokia up and Ericsson up}, it is likely that {Motorola up and Samsung up} within 2 days". Traditionally, itemset mining is used in marketing applications for discovering regularities between frequently co-occurring items in large transactions. For example, by analysing transactions of customer shopping baskets in a supermarket, one can produce a rule which reads "if a customer buys onions and potatoes together, he or she is likely to also buy hamburger meat in the same transaction".
A survey and taxonomy of the key algorithms for item set mining is presented by Han et al. (2007). [ 5 ]
The two common techniques applied to sequence databases for frequent itemset mining are the influential apriori algorithm and the more recent FP-growth technique.
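A minimal Apriori sketch over market-basket style transactions (the function and variable names are illustrative, not from any specific library):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori sketch: generate candidate k-itemsets from
    frequent (k-1)-itemsets, pruning candidates with an infrequent subset."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        # number of transactions containing the whole itemset
        return sum(itemset <= t for t in transactions)

    frequent = {frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        # Apriori pruning: every (k-1)-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        frequent = {c for c in candidates if support(c) >= min_support}
        result |= frequent
        k += 1
    return result
```

On the supermarket example above, baskets containing onions and potatoes together at least `min_support` times would yield the frequent pair {onion, potato}, from which association rules can then be derived.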
With great variation in products and user buying behavior, the shelf on which products are displayed is one of the most important resources in a retail environment. Retailers can not only increase their profit but also decrease cost by proper management of shelf-space allocation and product display. To solve this problem, George and Binu (2013) have proposed an approach to mine user buying patterns using the PrefixSpan algorithm and to place products on shelves based on the order of the mined purchasing patterns. [ 6 ]
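A compact PrefixSpan-style sketch for sequences of single items — a simplification of the full algorithm, which also handles itemsets within sequence elements:

```python
def prefixspan(sequences, min_support, prefix=None):
    """Recursively grow frequent prefixes: count items supported by at
    least min_support sequences, then mine each projected database
    (the suffixes following the first occurrence of the item)."""
    prefix = prefix or []
    results = []
    counts = {}
    for seq in sequences:
        for item in set(seq):           # count each sequence at most once
            counts[item] = counts.get(item, 0) + 1
    for item, count in sorted(counts.items()):
        if count < min_support:
            continue
        new_prefix = prefix + [item]
        results.append((new_prefix, count))
        # project: keep the suffix after the first occurrence of `item`
        projected = [seq[seq.index(item) + 1:]
                     for seq in sequences if item in seq]
        projected = [s for s in projected if s]
        results.extend(prefixspan(projected, min_support, new_prefix))
    return results
```

Each returned pattern is a list of items occurring in that order (not necessarily contiguously) in at least `min_support` of the input sequences.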
Commonly used algorithms include: | https://en.wikipedia.org/wiki/Sequential_pattern_mining |
The sequential structure alignment program (SSAP) in chemistry , physics , and biology is a method that uses double dynamic programming to produce a structural alignment based on atom-to-atom vectors in structure space. [ 1 ] [ 2 ] Instead of the alpha carbons typically used in structural alignment, SSAP constructs its vectors from the beta carbons for all residues except glycine, a method which thus takes into account the rotameric state of each residue as well as its location along the backbone. SSAP works by first constructing a series of inter-residue distance vectors between each residue and its nearest non-contiguous neighbors on each protein. A series of matrices are then constructed containing the vector differences between neighbors for each pair of residues for which vectors were constructed. Dynamic programming applied to each resulting matrix determines a series of optimal local alignments which are then summed into a "summary" matrix to which dynamic programming is applied again to determine the overall structural alignment.
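The double dynamic programming idea can be sketched in miniature. Here each residue's "view" is simplified to a one-dimensional vector of distances to the other residues, the scoring function C/(d + eps) is an illustrative stand-in for SSAP's actual vector-difference score, and gap penalties are omitted for brevity:

```python
import itertools

def nw(S):
    """Gap-penalty-free global DP over a score matrix S."""
    n, m = len(S), len(S[0])
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = max(D[i - 1][j - 1] + S[i - 1][j - 1],
                          D[i - 1][j], D[i][j - 1])
    return D

def best_path(S):
    """Trace back one optimal set of matched index pairs through S."""
    D = nw(S)
    i, j, path = len(S), len(S[0]), []
    while i > 0 and j > 0:
        if D[i][j] == D[i - 1][j - 1] + S[i - 1][j - 1]:
            path.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif D[i][j] == D[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return path

def ssap_sketch(views_a, views_b, C=10.0, eps=1.0):
    """Toy double dynamic programming: a lower-level alignment is run
    for every residue pair (i, j); each optimal lower-level path votes
    its scores into a summary matrix, which is then aligned itself."""
    summary = [[0.0] * len(views_b) for _ in views_a]
    for i, j in itertools.product(range(len(views_a)), range(len(views_b))):
        S = [[C / (abs(a - b) + eps) for b in views_b[j]] for a in views_a[i]]
        for p, q in best_path(S):
            summary[p][q] += S[p][q]
    return best_path(summary)
```

For two identical toy "structures" the summary matrix is dominated by its diagonal, so the final alignment pairs each residue with its counterpart.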
SSAP originally produced only pairwise alignments but has since been extended to multiple alignments as well. [ 3 ] It has been applied in an all-to-all fashion to produce a hierarchical fold classification scheme known as CATH (Class, Architecture, Topology, Homology), [ 4 ] which has been used to construct the CATH Protein Structure Classification database.
Generally, SSAP scores above 80 are associated with highly similar structures. Scores between 70 and 80 indicate a similar fold with minor variations. Structures yielding a score between 60 and 70 do not generally contain the same fold, but usually belong to the same protein class with common structural motifs. [ 5 ] | https://en.wikipedia.org/wiki/Sequential_structure_alignment_program |
Sequential walking is a technique that can be used to assign various 2D NMR spectra. In a 2D experiment, cross peaks must be correlated to the correct nuclei. Using sequential walking, the correct nuclei can be assigned to their cross peaks. The assigned cross peaks can give valuable information such as spatial interactions between nuclei.
In a NOESY of DNA , for example, each nucleotide has a different chemical shift associated with it. In general, A's are further downfield, T's are further upfield, and C's and G's are intermediate. Each nucleotide has protons on the deoxyribose sugar, which can be assigned using sequential walking. To do this, the first nucleotide in the sequence must be identified. Knowing the DNA sequence helps, but in general the first nucleotide can be determined using the following rules.
1. 2' and 2" protons of a nucleotide will show up in its column, as well as in the column of the next nucleotide in the sequence. For example, in the sequence CATG, in the column for C, its own 2' and 2" protons will be seen, but none of the other nucleotides. For A, its own 2' and 2" protons will be seen, as well as those from C.
2. Methyl groups on the nucleotide are seen in the column for the nucleotide containing a methyl group, as well as for the nucleotide preceding it. For example, in CATG, the A and T will contain the methyl peak corresponding to the methyl group on T, but G will not.
Once the first nucleotide has been found, the next nucleotide can be determined because its column should contain the 2' and 2" protons of the previous nucleotide. This is done by "walking" across the spectrum. This process is then repeated sequentially until all nucleotides have been assigned.
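The walking procedure can be illustrated with a toy assignment routine. Here each residue gets a unique label (e.g. C1, A2, ...), and `columns` is a schematic stand-in for real peak lists, recording which residues' 2'/2" peaks appear in each residue's column:

```python
def walk_sequence(columns):
    """Recover the order of residues from NOESY-style columns.

    `columns` maps each residue label to the set of labels whose 2'/2"
    proton peaks appear in that residue's column (its own, plus those
    of the preceding residue, per rule 1 above)."""
    # The first residue is the only one whose column shows just its own peaks.
    first = next(label for label, peaks in columns.items()
                 if peaks == {label})
    sequence = [first]
    while len(sequence) < len(columns):
        prev = sequence[-1]
        # "Walk" to the residue whose column also shows prev's peaks.
        sequence.append(next(label for label, peaks in columns.items()
                             if label != prev and prev in peaks))
    return sequence
```

For the CATG example, the column for C1 contains only its own peaks, so the walk starts there and proceeds C1 → A2 → T3 → G4.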
This nuclear magnetic resonance –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Sequential_walking |
Sequest (often stylized as SEQUEST) is a tandem mass spectrometry data analysis program used for protein identification. [ 1 ] Sequest matches collections of tandem mass spectra to peptide sequences that have been generated from databases of protein sequences.
Sequest identifies each tandem mass spectrum individually. The software evaluates protein sequences from a database to compute the list of peptides that could result from each. The peptide's intact mass is known from the mass spectrum, and Sequest uses this information to determine the set of candidate peptide sequences that could meaningfully be compared to the spectrum, by including only those near the mass of the observed peptide ion. For each candidate peptide, Sequest predicts a theoretical tandem mass spectrum, and compares these theoretical spectra to the observed tandem mass spectrum by the use of cross correlation . The candidate sequence with the best-matching theoretical tandem mass spectrum is reported as the best identification for this spectrum. | https://en.wikipedia.org/wiki/Sequest
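The matching procedure described above can be sketched in a highly simplified form. Everything below is illustrative rather than Sequest's actual implementation: the residue-mass table covers only a few amino acids, the "theoretical spectrum" is a crude b-ion ladder, and a plain dot product stands in for Sequest's XCorr cross-correlation score.

```python
# Illustrative sketch only; not Sequest's real algorithm or data model.
RESIDUE = {"G": 57.021, "A": 71.037, "S": 87.032, "V": 99.068, "K": 128.095}
WATER, PROTON = 18.011, 1.008   # approximate monoisotopic masses

def peptide_mass(pep):
    """Intact (neutral) peptide mass: residue masses plus one water."""
    return sum(RESIDUE[a] for a in pep) + WATER

def theoretical_spectrum(pep):
    """Crude b-ion ladder: cumulative prefix masses plus a proton,
    binned to the nearest Da (real Sequest predicts several ion series)."""
    spec, m = {}, PROTON
    for a in pep[:-1]:
        m += RESIDUE[a]
        spec[round(m)] = 1.0
    return spec

def xcorr_like(theoretical, observed):
    """Dot product of binned spectra; a stand-in for Sequest's XCorr,
    which additionally subtracts a shifted-correlation background."""
    return sum(i * observed.get(mz, 0.0) for mz, i in theoretical.items())

def best_match(database, precursor_mass, observed, tol=0.5):
    # 1. keep only candidates near the observed intact peptide mass
    cands = [p for p in database if abs(peptide_mass(p) - precursor_mass) <= tol]
    # 2. report the candidate whose predicted spectrum matches best
    return max(cands, key=lambda p: xcorr_like(theoretical_spectrum(p), observed),
               default=None)
```

The precursor-mass filter is what keeps the search tractable: only peptides whose computed intact mass falls within the tolerance window are ever scored against the spectrum.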
Sequitur (or Nevill-Manning–Witten algorithm ) is a recursive algorithm developed by Craig Nevill-Manning and Ian H. Witten in 1997 [ 1 ] that infers a hierarchical structure ( context-free grammar ) from a sequence of discrete symbols. The algorithm operates in linear space and time. It can be used in data compression software applications. [ 2 ]
The sequitur algorithm constructs a grammar by substituting repeating phrases in the given sequence with new rules and therefore produces a concise representation of the sequence. For example, if the sequence is
the algorithm will produce
While scanning the input sequence, the algorithm follows two constraints for generating its grammar efficiently: digram uniqueness and rule utility .
Whenever a new symbol is scanned from the sequence, it is appended to the last scanned symbol to form a new digram . If this digram has been formed earlier, then a new rule is made to replace both occurrences of the digram.
Therefore, it ensures that no digram occurs more than once in the grammar. For example, in the sequence S→abaaba , when the first four symbols are already scanned, digrams formed are ab, ba, aa . When the fifth symbol is read, a new digram 'ab' is formed which exists already. Therefore, both instances of 'ab' are replaced by a new rule (say, A) in S . Now, the grammar becomes S→AaAa, A→ab , and the process continues until no repeated digram exists in the grammar.
This constraint ensures that all the rules are used more than once in the right sides of all the productions of the grammar, i.e., if a rule occurs just once, it should be removed from the grammar and its occurrence should be substituted with the symbols from which it is created. Continuing the above example, if one scans the last symbol and applies digram uniqueness for 'Aa', then the grammar will produce: S→BB, A→ab, B→Aa . Now, rule 'A' occurs only once in the grammar, in B→Aa . Therefore, A is deleted and finally the grammar becomes S→BB, B→aba .
This constraint helps reduce the number of rules in the grammar.
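Both constraints can be demonstrated with a toy implementation. The sketch below is illustrative only: it rescans the whole grammar after every change, whereas the real algorithm maintains digram uniqueness and rule utility incrementally to achieve linear time and space.

```python
def infer_grammar(seq):
    """Toy Sequitur-style grammar inference (illustrative, not linear-time).
    Enforces digram uniqueness by replacing any repeated digram with a
    fresh nonterminal, and rule utility by inlining rules used only once."""
    top = list(seq)                        # body of the start rule S
    rules = {}                             # nonterminal -> list of symbols
    names = iter("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

    def bodies():
        return [top] + list(rules.values())

    def repeated_digram():
        seen = {}
        for body in bodies():
            for i in range(len(body) - 1):
                dg = (body[i], body[i + 1])
                if dg in seen:
                    b, j = seen[dg]
                    if b is not body or j + 1 < i:   # non-overlapping repeat
                        return dg
                else:
                    seen[dg] = (body, i)
        return None

    while (dg := repeated_digram()) is not None:
        name = next(names)
        for body in bodies():              # digram uniqueness
            i = 0
            while i < len(body) - 1:
                if (body[i], body[i + 1]) == dg:
                    body[i:i + 2] = [name]
                i += 1
        rules[name] = list(dg)
        for r in list(rules):              # rule utility
            others = [top] + [b for k, b in rules.items() if k != r]
            if sum(b.count(r) for b in others) == 1:
                for body in others:        # inline the single use
                    i = 0
                    while i < len(body):
                        if body[i] == r:
                            body[i:i + 1] = rules[r]
                        i += 1
                del rules[r]
    return "".join(top), {k: "".join(v) for k, v in rules.items()}
```

Running it on the example from the text, `infer_grammar("abaaba")` yields the top-level rule `BB` with the single rule B→aba, exactly as the rule-utility step above produces.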
The algorithm works by scanning a sequence of terminal symbols and building a list of all the symbol pairs which it has read. Whenever a second occurrence of a pair is discovered, the two occurrences are replaced in the sequence by an invented nonterminal symbol , the list of symbol pairs is adjusted to match the new sequence, and scanning continues. If a pair's nonterminal symbol is used only in the just created symbol's definition, the used symbol is replaced by its definition and the symbol is removed from the defined nonterminal symbols. Once the scanning has been completed, the transformed sequence can be interpreted as the top-level rule in a grammar for the original sequence. The rule definitions for the nonterminal symbols which it contains can be found in the list of symbol pairs. Those rule definitions may themselves contain additional nonterminal symbols whose rule definitions can also be read from elsewhere in the list of symbol pairs. [ 3 ] | https://en.wikipedia.org/wiki/Sequitur_algorithm |
Serafim Batzoglou is a Greek-American researcher in the field of computational genomics, currently serving as the Chief Data Officer at Seer Inc. [ 1 ]
Before that, he was Chief Data Officer at insitro, co-founded DNAnexus , [ 2 ] served as VP of computational genomics at Illumina, Inc. , and was a professor of computer science at Stanford University between 2001 and 2016, where he worked alongside Daphne Koller (founder and CEO of insitro) in the computer science department.
His research lab focused on computational genomics with special interest in developing algorithms, machine learning methods, and systems for the analysis of large scale genomic data. [ 3 ] He has also been involved with the Human Genome Project and ENCODE .
Batzoglou did his undergraduate studies at MIT and obtained his PhD in computer science from MIT in 2000 under the supervision of Bonnie Berger . [ 4 ] | https://en.wikipedia.org/wiki/Serafim_Batzoglou |
The Sereny test is a test used to assess the invasiveness of enteroinvasive Escherichia coli , Shigella species, and Listeria monocytogenes . [ 1 ] [ 2 ] [ 3 ]
It is done by inoculating a suspension of bacteria into a guinea pig's eye. Severe mucopurulent conjunctivitis and severe keratitis indicate a positive test. [ 3 ]
This medical diagnostic article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Sereny_test |
Sergei Nikolaevich Winogradsky ForMemRS [ 1 ] ( Russian : Сергей Николаевич Виноградский ; Ukrainian : Сергій Миколайович Виноградський ; 13 September [ O.S. 1 September] 1856, Kyiv – 24 February 1953, Brie-Comte-Robert ), [ 2 ] also published under the name Sergius Winogradsky , [ 3 ] was a Ukrainian and Russian microbiologist , ecologist and soil scientist who pioneered the cycle-of-life concept. [ 4 ] [ 5 ] Winogradsky discovered the first known form of lithotrophy during his research with Beggiatoa in 1887. He reported that Beggiatoa oxidized hydrogen sulfide (H 2 S) as an energy source and formed intracellular sulfur droplets. [ 3 ] This research provided the first example of lithotrophy, but not autotrophy . Born in Kyiv, the capital of present-day Ukraine, he is also celebrated by that nation. [ 6 ]
His research on nitrifying bacteria would report the first known form of chemoautotrophy , showing how a lithotroph fixes carbon dioxide (CO 2 ) to make organic compounds . [ 7 ]
He is best known in school science as the inventor of the Winogradsky column technique for the study of sediment microbes.
Winogradsky was born in Kyiv , Russian Empire to a family of wealthy lawyers. Among his paternal ancestors were Cossack atamans, and on the maternal side he was linked to the Skoropadsky family . [ 8 ] In his youth Winogradsky was "strictly devoted to the Orthodox faith ", though he later became irreligious. [ 9 ]
After graduating from the 2nd Kiev Gymnasium in 1873, he began studying law, but he entered the Imperial Conservatoire of Music in Saint Petersburg in 1875 to study piano. [ 1 ] However, after two years of music training, he entered the Saint Petersburg Imperial University in 1877 to study chemistry under Nikolai Menshutkin and botany under Andrei Famintsyn , [ 1 ] receiving his degree in 1881 and staying on for a master's in botany, which he received in 1884. In 1885, he moved to the University of Straßburg to work under the renowned botanist Anton de Bary , subsequently becoming renowned for his work on sulfur bacteria.
In 1888, after de Bary's death, he relocated to Zürich , where he began investigating the process of nitrification, identifying the genera Nitrosomonas and Nitrosococcus , which oxidize ammonium to nitrite , and Nitrobacter , which oxidizes nitrite to nitrate . [ 10 ]
He returned to St. Petersburg for the period 1891–1905, obtaining his doctoral degree in 1902 and from then on heading the division of general microbiology of the Institute of Experimental Medicine . During this period, he identified the obligate anaerobe Clostridium pasteurianum , which is capable of fixing atmospheric nitrogen. In St. Petersburg he trained Vasily Omelianski , who popularized Winogradsky's concepts and methodology in the Soviet Union during the following decades. [ 11 ]
In 1901, he was elected an honorary member of the Moscow Society of Naturalists and, in 1902, a corresponding member of the French Academy of Sciences . In 1905, due to ill health, he left the institute and moved from St. Petersburg to the town of Gorodok in Podolia, where he had owned a large estate since 1892. While serving as director of the Institute of Experimental Medicine, Winogradsky had renounced his salary, which was transferred to a special account and later used to build a room for a scientific library; he himself lived on the income from the estate, where agricultural work was carried out. [ citation needed ]
In Gorodok Winogradsky addressed the problems of agriculture and soil science. He introduced new management methods and modern technology, and bought the best varieties of seeds, plants, and livestock. His estate became one of the richest and most successful in Podolia, and remained profitable even during the First World War, when it fell under Austro-Hungarian occupation. [ citation needed ]
He retired from active scientific work in 1905, dividing his time between his private estate in Gorodok and Switzerland.
After the revolution of 1917, Winogradsky went first to Switzerland and then to Belgrade. In 1922, he accepted an invitation to head the Pasteur Institute 's division of agricultural bacteriology at an experimental station at Brie-Comte-Robert , France, about 30 km from Paris. During this period, he worked on a number of topics, among them iron bacteria, nitrifying bacteria, nitrogen fixation by Azotobacter , cellulose -decomposing bacteria, and culture methods for soil microorganisms. In 1923 Winogradsky became an honorary member of the Russian Academy of Sciences despite his emigration. He retired from active life in 1940 and died in Brie-Comte-Robert in 1953.
Winogradsky discovered various biogeochemical cycles and parts of these cycles. These discoveries include
Winogradsky is best known for discovering chemoautotrophy, which soon became popularly known as chemosynthesis , the process by which organisms derive energy from a number of different inorganic compounds and obtain carbon in the form of carbon dioxide . Previously, it was believed that autotrophs obtained their energy solely from light , not from reactions of inorganic compounds. With the discovery of organisms that oxidized inorganic compounds such as hydrogen sulfide and ammonium as energy sources, autotrophs could be divided into two groups: photoautotrophs and chemoautotrophs. Winogradsky was one of the first researchers to attempt to understand microorganisms outside of the medical context, making him among the first students of microbial ecology and environmental microbiology .
The Winogradsky column remains an important display of chemoautotrophy and microbial ecology, demonstrated in microbiology lectures around the world. [ 12 ] | https://en.wikipedia.org/wiki/Sergei_Winogradsky |
Sergey Vladimirovich Fomin (Сергей Владимирович Фомин) (born 16 February 1958 in Saint Petersburg , Russia ) is a Russian American mathematician who has made important contributions in combinatorics and its relations with algebra , geometry , and representation theory . Together with Andrei Zelevinsky , he introduced cluster algebras .
Fomin received his M.Sc. in 1979 and his Ph.D. in 1982 from St. Petersburg State University under the direction of Anatoly Vershik and Leonid Osipov. [ 1 ] Prior to his appointment at the University of Michigan, he held positions at the Massachusetts Institute of Technology from 1992 to 2000, at the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences , and at the Saint Petersburg Electrotechnical University . Fomin studied at the 45th Physics-Mathematics School and later taught mathematics there. [ 2 ]
Fomin's contributions include | https://en.wikipedia.org/wiki/Sergey_Fomin |
Sergey Piletsky is a professor of Bioanalytical Chemistry and the Research Director for School of Chemistry, University of Leicester , United Kingdom. [ 1 ]
Sergey graduated from Kyiv University , Ukraine, obtaining an MSc in chemistry in 1985, and conducted research on the synthesis of polymers selective for nucleic acids, for which he was awarded a PhD in 1991. Cranfield University awarded him a DSc for his work on molecularly imprinted polymers for diagnostic applications. [ 1 ]
Sergey is a recipient of Royal Society Wolfson Research Merit Award, [ 1 ] Leverhulme Trust Fellowship, DFG Fellowship from the Institute of Analytical Chemistry, Award of President of Ukraine, and Japan Society for Promotion of Science and Technology Fellowship. [ 1 ]
Sergey's work in molecular imprinting focuses on: (i) the fundamental study of the recognition properties of molecularly imprinted polymers; [ 2 ] [ 3 ] (ii) the development of sensors and assays for environmental and clinical analysis; [ 4 ] and (iii) the development of molecularly imprinted polymer nanoparticles for theranostic applications. [ 5 ]
Sergey introduced computational design into the field of molecular imprinting, modelling the non-covalent interactions between the template molecule and the polymer. He also demonstrated the technique known as 'bite and switch', wherein functional groups first bond non-covalently with the binding site, but during the rebinding step the polymer matrix forms irreversible covalent bonds with the target molecule. [ 6 ] A number of research groups around the world follow his ideas in developing functional imprinted polymers for a variety of applications. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Sergey_Piletsky
Sergio Verdú (born Barcelona , Spain , August 15, 1958) is a former professor of electrical engineering and specialist in information theory . Until September 22, 2018, he was the Eugene Higgins Professor of Electrical Engineering at Princeton University , where he taught and conducted research on information theory in the Information Sciences and Systems Group. He was also affiliated with the program in Applied and Computational Mathematics. He was dismissed from the faculty following a university investigation of alleged sexual misconduct. [ 1 ]
Verdú received the Telecommunications Engineering degree from the Polytechnic University of Catalonia , Barcelona , Spain , in 1980 and the PhD degree in Electrical Engineering from the University of Illinois at Urbana-Champaign in 1984. Conducted at the Coordinated Science Laboratory of the University of Illinois , his doctoral research was supervised by Vincent Poor and pioneered the field of multiuser detection . In 1998, his book Multiuser Detection was published by Cambridge University Press .
A Title IX investigation by Princeton, made public in 2017 by the Huffington Post , determined that Verdú had violated Princeton's sexual-misconduct policy. Previously, Yeohee Im, a graduate student at Princeton, had reported Verdú for sexual harassment.
According to the Princeton Dean of Faculty, there were allegations that Verdú had also harassed others, but only the one student was willing to make a formal complaint. Verdú denied the findings of the investigation, stating: "The university advised me not to reply but I categorically deny that there were any advances or any sexual harassment." [ 2 ] [ 3 ] He was subsequently dismissed from Princeton University as of September 22, 2018, following further consideration by the university, which said that "an investigation established that Dr. Verdu violated the university's policy prohibiting consensual relations with students, and its policy requiring honesty and cooperation in university matters". [ 4 ]
Verdú appealed the decision in the District Court. According to the materials of the United States Court of Appeals for the Third Circuit: [ 5 ] "Verdu states three theories under which Princeton discriminated against him: erroneous outcome, selective enforcement, and retaliation." The District Court dismissed Verdú's complaint.
His papers have received several awards:
He served as president of the IEEE Information Theory Society in 1997.
He was the founding editor-in-chief of the journal Foundations and Trends in Communications and Information Theory . | https://en.wikipedia.org/wiki/Sergio_Verdú |
Serial ATA International Organization ( SATA-IO ) is an independent, non-profit organization which provides the computing industry with guidance and support for implementing the SATA specification. SATA-IO was developed by and for leading industry companies. It was officially formed in July 2004 by incorporating the previous Serial ATA Working Group which had been established in February 2000 to specify Serial ATA for desktop applications. [ 1 ]
SATA-IO is affiliated directly to INCITS, and indirectly via INCITS to ANSI. The organization has many members; it is currently [ as of? ] led by ATP Electronics , Dell , Hewlett-Packard , HGST , Intel , Marvell , PMC-Sierra , SanDisk , Seagate Technology , and Western Digital .
This article about an international organization is a stub . You can help Wikipedia by expanding it .
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Serial_ATA_International_Organization |
The Serial Bus Protocol 2 (SBP-2) standard is a transport protocol within the Serial Bus, IEEE Std 1394 -1995 (also known as FireWire or i.Link), developed by T10 . [ 1 ]
Original work on Serial Bus Protocol started as an attempt to adapt SCSI to IEEE Std 1394-1995 serial interface. Later on it was recognized that SBP-2 may have a more general use, and the work on the standard was targeted to provide a generic framework for delivery of commands, data, and status between Serial Bus peripherals.
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Serial_Bus_Protocol_2 |
Serial Peripheral Interface ( SPI ) is a de facto standard (with many variants ) for synchronous serial communication , used primarily in embedded systems for short-distance wired communication between integrated circuits .
SPI follows a master–slave architecture , [ 1 ] where a master device orchestrates communication with one or more slave devices by driving the clock and chip select signals. Some devices support changing master and slave roles on the fly.
Motorola 's original specification (from the early 1980s) uses four logic signals , also known as lines or wires, to support full duplex communication. It is sometimes called a four-wire serial bus to contrast with three-wire variants which are half duplex , and with the two-wire I²C and 1-Wire serial buses.
Typical applications include interfacing microcontrollers with peripheral chips for Secure Digital cards, liquid crystal displays , analog-to-digital and digital-to-analog converters , flash and EEPROM memory, and various communication chips.
Although SPI is a synchronous serial interface, [ 2 ] it is different from Synchronous Serial Interface (SSI). SSI employs differential signaling and provides only a single simplex communication channel.
Commonly, SPI has four logic signals. Variations may use different names or have different signals.
MOSI on a master outputs to MOSI on a slave. MISO on a slave outputs to MISO on a master.
Each device internally uses a shift register for serial communication; together, these registers form an inter-chip circular buffer .
Slave devices should use tri-state outputs so their MISO signal becomes high impedance (electrically disconnected) when the device is not selected. Slaves without tri-state outputs cannot share a MISO line with other slaves without using an external tri-state buffer.
To begin communication, the SPI master first selects a slave device by pulling its SS low. (Note: the bar above SS indicates it is an active low signal, so a low voltage means "selected", while a high voltage means "not selected")
If a waiting period is required, such as for an analog-to-digital conversion, the master must wait for at least that period of time before issuing clock cycles. [ note 2 ]
During each SPI clock cycle, full-duplex transmission of a single bit occurs. The master sends a bit on the MOSI line while the slave sends a bit on the MISO line, and then each reads their corresponding incoming bit. This sequence is maintained even when only one-directional data transfer is intended.
Transmission using a single slave involves one shift register in the master and one shift register in the slave, both of some given word size (e.g. 8 bits). The transmissions often consist of eight-bit words, but other word-sizes are also common, for example, sixteen-bit words for touch-screen controllers or audio codecs, such as the TSC2101 by Texas Instruments, or twelve-bit words for many digital-to-analog or analog-to-digital converters.
Data is usually shifted out with the most-significant bit (MSB) first, but the original specification has an LSBFE ("LSB-First Enable") option to control whether data is transferred least- or most-significant bit first. On the clock edge, both master and slave shift out a bit to their counterpart. On the next clock edge, each receiver samples the transmitted bit and stores it in its shift register as the new least-significant bit. After all bits have been shifted out and in, the master and slave have exchanged register values. If more data needs to be exchanged, the shift registers are reloaded and the process repeats. Transmission may continue for any number of clock cycles. When complete, the master stops toggling the clock signal and typically deselects the slave.
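The register exchange described above can be modelled with a short simulation (a sketch assuming MSB-first transfer and one word per exchange, not any particular controller's register interface):

```python
def spi_exchange(master_reg, slave_reg, bits=8):
    """Simulate one SPI transfer, MSB first: each clock cycle, the master
    shifts a bit out on MOSI while the slave shifts one out on MISO, and
    each stores the incoming bit as its register's new LSB. After `bits`
    cycles the two registers have exchanged values."""
    mask = (1 << bits) - 1
    for _ in range(bits):
        mosi = (master_reg >> (bits - 1)) & 1   # master's MSB out
        miso = (slave_reg >> (bits - 1)) & 1    # slave's MSB out
        master_reg = ((master_reg << 1) | miso) & mask
        slave_reg = ((slave_reg << 1) | mosi) & mask
    return master_reg, slave_reg
```

For example, exchanging 0xA5 (master) with 0x3C (slave) leaves the master holding 0x3C and the slave holding 0xA5, which is why SPI is full duplex even when only one direction's data is of interest.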
If a single slave device is used, its SS pin may be fixed to logic low if the slave permits it. With multiple slave devices, a multidrop configuration requires an independent SS signal from the master for each slave device, while a daisy-chain configuration only requires one SS signal.
Every slave on the bus that has not been selected should disregard the input clock and MOSI signals. To prevent contention on MISO, non-selected slaves must use tri-state outputs; slaves that lack them need external tri-state buffers to ensure this. [ 3 ]
In addition to setting the clock frequency, the master must also configure the clock polarity and phase with respect to the data. Motorola [ 4 ] [ 5 ] named these two options CPOL and CPHA (for clock polarity and clock phase) respectively, a convention most vendors have also adopted.
The SPI timing diagram shown is further described below:
The combinations of polarity and phases are referred to by these "SPI mode" numbers with CPOL as the high order bit and CPHA as the low order bit:
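With that numbering, the mode number is simply CPOL and CPHA packed into two bits; a one-line helper makes the mapping explicit:

```python
def spi_mode(cpol, cpha):
    """Compose the SPI mode number: CPOL is the high-order bit and
    CPHA the low-order bit, as described above."""
    return (cpol << 1) | cpha

# The four resulting modes:
#   mode 0: CPOL=0, CPHA=0    mode 1: CPOL=0, CPHA=1
#   mode 2: CPOL=1, CPHA=0    mode 3: CPOL=1, CPHA=1
```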
Notes:
Some slave devices are designed to ignore any SPI communications in which the number of clock pulses is greater than specified. Others do not care, ignoring extra inputs and continuing to shift the same output bit. It is common for different devices to use SPI communications with different lengths, as, for example, when SPI is used to access an IC's scan chain by issuing a command word of one size (perhaps 32 bits) and then getting a response of a different size (perhaps 153 bits, one for each pin in that scan chain).
Interrupts are outside the scope of SPI; their usage is neither forbidden nor specified, and so may optionally be implemented.
Microcontrollers configured as slave devices may have hardware support for generating interrupt signals to themselves when data words are received or overflow occurs in a receive FIFO buffer, [ 6 ] and may also set up an interrupt routine when their slave select input line is pulled low or high.
SPI slaves sometimes use an out-of-band signal (another wire) to send an interrupt signal to a master. Examples include pen-down interrupts from touchscreen sensors, thermal limit alerts from temperature sensors , alarms issued by real-time clock chips, SDIO [ note 3 ] and audio jack insertions for an audio codec . Interrupts to master may also be faked by using polling (similarly to USB 1.1 and 2.0 ).
SPI lends itself to a "bus driver" software design. Software for attached devices is written to call a "bus driver" that handles the actual low-level SPI hardware. This permits the driver code for attached devices to port easily to other hardware or a bit-banging software implementation.
A software implementation (" bit-banging ") of SPI's protocol as a master performs output and input simultaneously. For CPHA=0 and CPOL=0, SCLK is pulled low before SS is activated, bits are sampled on SCLK's rising edge, and bits are output on SCLK's falling edge.
Bit-banging a slave's protocol is similar but different from above. An implementation might involve busy waiting for SS to fall or triggering an interrupt routine when SS falls, and then shifting in and out bits when the received SCLK changes appropriately for however long the transfer size is.
Though the previous operation section focused on a basic interface with a single slave, SPI can instead communicate with multiple slaves using multidrop, daisy chain, or expander configurations.
In the multidrop bus configuration, each slave has its own SS , and the master selects only one at a time. MISO, SCLK, and MOSI are each shared by all devices. This is the way SPI is normally used.
Since the MISO pins of the slaves are connected together, they are required to be tri-state pins (high, low or high-impedance), where the high-impedance output must be applied when the slave is not selected. Slave devices not supporting tri-state may be used in multidrop configuration by adding a tri-state buffer chip controlled by its SS signal. [ 3 ] (Since only a single signal line needs to be tristated per slave, one typical standard logic chip that contains four tristate buffers with independent gate inputs can be used to interface up to four slave devices to an SPI bus)
Caveat: All SS signals should start high (to indicate no slaves are selected) before sending initialization messages to any slave, so other uninitialized slaves ignore messages not addressed to them. This is a concern if the master uses general-purpose input/output (GPIO) pins (which may default to an undefined state) for SS and if the master uses separate software libraries to initialize each device. One solution is to configure all GPIOs used for SS to output a high voltage for all slaves before running initialization code from any of those software libraries. Another solution is to add a pull-up resistor on each SS , to ensure that all SS signals are initially high. [ 3 ]
Some products that implement SPI may be connected in a daisy chain configuration, where the first slave's output is connected to the second slave's input, and so on with subsequent slaves, until the final slave, whose output is connected back to the master's input. This effectively merges the individual communication shift registers of each slave to form a single larger combined shift register that shifts data through the chain. This configuration only requires a single SS line from the master, rather than a separate SS line for each slave. [ 7 ]
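The merged-shift-register behaviour can be illustrated with a small simulation (a sketch of the topology, not any particular device's protocol):

```python
def daisy_chain_shift(master_out, slave_regs, bits=8):
    """Simulate one word-length transfer through daisy-chained slaves:
    the master's word shifts into the first slave, each slave's old
    contents shift into the next, and the last slave's old word returns
    to the master — one long combined shift register."""
    mask = (1 << bits) - 1
    master_in = 0
    for _ in range(bits):
        bit_in = (master_out >> (bits - 1)) & 1         # master's MSB out
        master_out = (master_out << 1) & mask
        for i, reg in enumerate(slave_regs):
            out_bit = (reg >> (bits - 1)) & 1           # this slave's MSB
            slave_regs[i] = ((reg << 1) | bit_in) & mask
            bit_in = out_bit                            # feeds the next slave
        master_in = (master_in << 1) | bit_in           # last slave -> master
    return master_in
```

After one word-length transfer, each slave holds its predecessor's original word and the master receives the last slave's, so filling every slave with fresh data takes as many word transfers as there are slaves in the chain.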
In addition to using SPI-specific slaves, daisy-chained SPI can include discrete shift registers for more pins of inputs (e.g. using the parallel-in serial-out 74xx165) [ 8 ] or outputs (e.g. using the serial-in parallel-out 74xx595) [ 9 ] chained indefinitely. Other applications that can potentially interoperate with daisy-chained SPI include SGPIO , JTAG , [ 10 ] and I²C .
Expander configurations use SPI-controlled addressing units (e.g. binary decoders , demultiplexers , or shift registers) to add chip selects.
For example, one SS can be used for transmitting to a SPI-controlled demultiplexer an index number controlling its select signals, while another SS is routed through that demultiplexer according to that index to select the desired slave. [ 11 ]
SPI is used to talk to a variety of peripherals, such as
Board real estate and wiring savings compared to a parallel bus are significant, and have earned SPI a solid role in embedded systems. That is true for most system-on-a-chip processors, both with higher-end 32-bit processors such as those using ARM , MIPS , or PowerPC and with lower-end microcontrollers such as the AVR , PIC , and MSP430 . These chips usually include SPI controllers capable of running in either master or slave mode. In-system programmable AVR controllers (including blank ones) can be programmed using SPI. [ 12 ]
Chip or FPGA based designs sometimes use SPI to communicate between internal components; on-chip real estate can be as costly as its on-board cousin. And for high-performance systems, FPGAs sometimes use SPI to interface as a slave to a host, as a master to sensors, or for flash memory used to bootstrap if they are SRAM-based.
The full-duplex capability makes SPI very simple and efficient for single master/single slave applications. Some devices use the full-duplex mode to implement an efficient, swift data stream for applications such as digital audio , digital signal processing , or telecommunications channels , but most off-the-shelf chips stick to half-duplex request/response protocols.
SPI implementations have a wide variety of protocol variations. Some devices are transmit-only; others are receive-only. Slave selects are sometimes active-high rather than active-low. Some devices send the least-significant bit first. Signal levels depend entirely on the chips involved. And while the baseline SPI protocol has no command codes, every device may define its own protocol of command codes. Some variations are minor or informal, while others have an official defining document and may be considered to be separate but related protocols.
Motorola in 1983 listed [ 13 ] three 6805 8-bit microcomputers that have an integrated "Serial Peripheral Interface", whose functionality is described in a 1984 manual. [ 14 ]
Motorola's 1987 Application Note AN991 "Using the Serial Peripheral Interface to Communicate Between Multiple Microcomputers" [ 15 ] (now under NXP , last revised 2002 [ 5 ] ) informally serves as the "official" defining document for SPI.
Some devices have timing variations from Motorola's CPOL/CPHA modes. Sending data from slave to master may use the opposite clock edge as master to slave. Devices often require extra clock idle time before the first clock or after the last one, or between a command and its response.
Some devices have two clocks, one to read data, and another to transmit it into the device. Many of the read clocks run from the slave select line.
Different transmission word sizes are common. Many SPI chips only support messages that are multiples of 8 bits. Such chips can not interoperate with the JTAG or SGPIO protocols, or any other protocol that requires messages that are not multiples of 8 bits.
Some devices don't use slave select, and instead manage protocol state machine entry/exit using other methods.
Anyone needing an external connector for SPI defines their own or uses another standard connection such as: UEXT , Pmod , various JTAG connectors , Secure Digital card socket, etc.
Some devices require an additional flow control signal from slave to master, indicating when data is ready. This leads to a 5-wire protocol instead of the usual 4. Such a ready or enable signal is often active-low, and needs to be enabled at key points such as after commands or between words. Without such a signal, data transfer rates may need to be slowed down significantly, or protocols may need to have dummy bytes inserted, to accommodate the worst case for the slave response time. Examples include initiating an ADC conversion, addressing the right page of flash memory, and processing enough of a command that device firmware can load the first word of the response. (Many SPI masters do not support that signal directly, and instead rely on fixed delays.)
SafeSPI [ 16 ] is an industry standard for SPI in automotive applications. Its main focus is the transmission of sensor data between different devices.
In electrically noisy environments, since SPI has few signals, it can be economical to reduce the effects of common mode noise by adapting SPI to use low-voltage differential signaling . [ 17 ] Another advantage is that the controlled devices can be designed to loop-back to test signal integrity. [ 18 ]
A Queued Serial Peripheral Interface ( QSPI ; distinct from, but sharing an abbreviation with, the Quad SPI described in § Quad SPI ) is a type of SPI controller that uses a data queue to transfer data across an SPI bus. [ 19 ] It has a wrap-around mode allowing continuous transfers to and from the queue with only intermittent attention from the CPU. Consequently, the peripherals appear to the CPU as memory-mapped parallel devices. This feature is useful in applications such as control of an A/D converter . Other programmable features in Queued SPI are chip selects and transfer length/delay.
SPI controllers from different vendors support different feature sets; such direct memory access (DMA) queues are not uncommon, although they may be associated with separate DMA engines rather than the SPI controller itself, such as used by Multichannel Buffered Serial Port ( MCBSP ). [ note 6 ] Most SPI master controllers integrate support for up to four slave selects, [ note 7 ] although some require slave selects to be managed separately through GPIO lines.
Note that Queued SPI is different from Quad SPI , and some processors even confusingly allow a single "QSPI" interface to operate in either quad or queued mode! [ 20 ]
Microwire, [ 21 ] often spelled μWire , is essentially a predecessor of SPI and a trademark of National Semiconductor . It's a strict subset of SPI: half-duplex, and using SPI mode 0. Microwire chips tend to need slower clock rates than newer SPI versions; perhaps 2 MHz vs. 20 MHz. Some Microwire chips also support a three-wire mode.
Microwire/Plus [ 22 ] is an enhancement of Microwire and features full-duplex communication and support for SPI modes 0 and 1. There was no specified improvement in serial clock speed.
Three-wire variants of SPI restricted to a half-duplex mode use a single bidirectional data line called SISO (slave out/slave in) or MOMI (master out/master in) instead of SPI's two unidirectional lines (MOSI and MISO). Three-wire tends to be used for lower-performance parts, such as small EEPROMs used only during system startup, certain sensors, and Microwire . Few SPI controllers support this mode, although it can be easily bit-banged in software.
For instances where the full-duplex nature of SPI is not used, an extension uses both data pins in a half-duplex configuration to send two bits per clock cycle. Typically a command byte is sent requesting a response in dual mode, after which the MOSI line becomes SIO0 (serial I/O 0) and carries even bits, while the MISO line becomes SIO1 and carries odd bits. Data is still transmitted most-significant bit first, but SIO1 carries bits 7, 5, 3 and 1 of each byte, while SIO0 carries bits 6, 4, 2 and 0.
This is particularly popular among SPI ROMs, which have to send a large amount of data, and comes in two variants: [ 23 ] [ 24 ]
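The even/odd bit interleaving of dual mode can be sketched directly; the helper names below are illustrative, not part of any device's API.

```python
def byte_to_dual_lanes(b):
    """Split one byte across the two dual-SPI data lines, MSB first:
    SIO1 carries the odd bits (7, 5, 3, 1) and SIO0 the even bits
    (6, 4, 2, 0), so each clock cycle moves one (SIO1, SIO0) pair."""
    return [((b >> (7 - 2 * i)) & 1, (b >> (6 - 2 * i)) & 1) for i in range(4)]

def dual_lanes_to_byte(pairs):
    """Reassemble four (SIO1, SIO0) pairs back into a byte."""
    b = 0
    for sio1, sio0 in pairs:
        b = (b << 2) | (sio1 << 1) | sio0
    return b
```

A byte therefore takes four clocks instead of eight, with bit 7 appearing on SIO1 in the first cycle.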
Quad SPI ( QSPI ; distinct from, but sharing an abbreviation with, the Queued-SPI described in § Intelligent SPI controllers ) goes beyond dual SPI, adding two more I/O lines (SIO2 and SIO3) and sending 4 data bits per clock cycle. Again, it is requested by special commands, which enable quad mode after the command itself is sent in single mode. [ 23 ] [ 24 ]
Further extending quad SPI, some devices support a "quad everything" mode where all communication takes place over 4 data lines, including commands. [ 25 ] This is variously called "QPI" [ 24 ] (not to be confused with Intel QuickPath Interconnect ) or "serial quad I/O" (SQI). [ 26 ]
This requires programming a configuration bit in the device and requires care after reset to establish communication.
In addition to using multiple lines for I/O, some devices increase the transfer rate by using double data rate transmission. [ 27 ] [ 28 ]
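Combining the lane-count variants with double data rate gives a simple raw-throughput relation: one bit per lane per clock edge event. The sketch below is illustrative only (the function name is an assumption, and real transfers add command and turnaround overhead).

```python
def spi_throughput_bps(clock_hz, lanes=1, ddr=False):
    """Raw SPI data throughput in bits per second: one bit per lane
    per clock event; double data rate clocks data on both edges."""
    return clock_hz * lanes * (2 if ddr else 1)
```

For example, a 50 MHz quad-lane DDR link moves 400 Mbit/s of raw data, eight times a plain single-lane link at the same clock.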
Although there are some similarities between SPI and the JTAG (IEEE 1149.1-2013) protocol, they are not interchangeable. JTAG is specifically intended to provide reliable test access to the I/O pins from an off-board controller with less precise signal delay and skew parameters, while SPI has many varied applications. While not strictly a level sensitive interface, the JTAG protocol supports the recovery of both setup and hold violations between JTAG devices by reducing the clock rate or changing the clock's duty cycles. Consequently, the JTAG interface is not intended to support extremely high data rates. [ 29 ]
SGPIO is essentially another (incompatible) application stack for SPI designed for particular backplane management activities. [ citation needed ] SGPIO uses 3-bit messages.
Intel has developed a successor to its Low Pin Count (LPC) bus that it calls the Enhanced Serial Peripheral Interface (eSPI) bus. Intel aims to reduce the number of pins required on motherboards and increase throughput compared to LPC, reduce the working voltage to 1.8 volts to facilitate smaller chip manufacturing processes, allow eSPI peripherals to share SPI flash devices with the host (the LPC bus did not allow firmware hubs to be used by LPC peripherals), tunnel previous out-of-band pins through eSPI, and allow system designers to trade off cost and performance. [ 30 ] [ 31 ]
An eSPI bus can either be shared with SPI devices to save pins or be separate from an SPI bus to allow more performance, especially when eSPI devices need to use SPI flash devices. [ 30 ]
This standard defines an Alert# signal that is used by an eSPI slave to request service from the master. In a performance-oriented design, or a design with only one eSPI slave, each eSPI slave has its Alert# pin connected to a dedicated Alert# pin on the eSPI master, allowing the master to grant low-latency service because it knows which slave needs service and does not need to poll all of the slaves. In a budget design with more than one eSPI slave, the Alert# pins of all the slaves are connected to one Alert# pin on the eSPI master in a wired-OR connection, which requires the master to poll all the slaves to determine which ones need service whenever the shared Alert# signal is pulled low by one or more peripherals. The Alert# signal returns high only after every requesting slave has been serviced, since any slave still needing service continues to hold the line low. [ 30 ]
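The budget (shared Alert#) arrangement can be sketched as a small simulation; the names below are illustrative, not part of the eSPI specification.

```python
def wired_or_alert(slave_requests):
    """Shared, active-low, open-drain Alert#: the line reads 0 (asserted)
    if any slave is requesting service, else 1."""
    return 0 if any(slave_requests) else 1

def slaves_to_poll(slave_requests):
    """With a shared Alert#, the master cannot tell who asserted the line,
    so it must poll every slave and collect the requesters."""
    if wired_or_alert(slave_requests) == 0:
        return [i for i, req in enumerate(slave_requests) if req]
    return []
```

With per-slave Alert# pins the polling step disappears, which is exactly the latency advantage the performance-oriented design buys.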
This standard allows designers to use 1-bit, 2-bit, or 4-bit communications at speeds from 20 to 66 MHz to further allow designers to trade off performance and cost. [ 30 ]
Communications that were out-of-band of LPC like general-purpose input/output (GPIO) and System Management Bus (SMBus) should be tunneled through eSPI via virtual wire cycles and out-of-band message cycles respectively in order to remove those pins from motherboard designs using eSPI. [ 30 ]
This standard supports standard memory cycles with lengths of 1 byte to 4 kilobytes of data, short memory cycles with lengths of 1, 2, or 4 bytes that have much less overhead compared to standard memory cycles, and I/O cycles with lengths of 1, 2, or 4 bytes of data which are low overhead as well. This significantly reduces overhead compared to the LPC bus, where all cycles except for the 128-byte firmware hub read cycle spend more than one-half of all of the bus's throughput and time in overhead. The standard memory cycle allows a length of anywhere from 1 byte to 4 kilobytes in order to allow its larger overhead to be amortised over a large transaction. eSPI slaves are allowed to initiate bus master versions of all of the memory cycles. Bus master I/O cycles, which were introduced by the LPC bus specification, and ISA-style DMA including the 32-bit variant introduced by the LPC bus specification, are not present in eSPI. Therefore, bus master memory cycles are the only allowed DMA in this standard. [ 30 ]
eSPI slaves are allowed to use the eSPI master as a proxy to perform flash operations on a standard SPI flash memory slave on behalf of the requesting eSPI slave. [ 30 ]
64-bit memory addressing is also added, but is only permitted when there is no equivalent 32-bit address. [ 30 ]
The Intel Z170 chipset can be configured to implement either this bus or a variant of the LPC bus that is missing its ISA-style DMA capability and is underclocked to 24 MHz instead of the standard 33 MHz. [ 32 ]
The eSPI bus is also adopted by AMD Ryzen chipsets.
Single-board computers may provide pin access to SPI hardware units. For instance, the Raspberry Pi's J8 header exposes at least two SPI units that can be used via Linux drivers or Python .
There are a number of USB adapters that allow a desktop PC or smartphone with USB to communicate with SPI chips (e.g. CH341A/B [ 33 ] based or FT 221xs [ 34 ] ). They are used for embedded systems, chips ( FPGA , ASIC , and SoC ) and peripheral testing, programming and debugging. Many of them also provide scripting or programming capabilities (e.g. Visual Basic , C / C++ , VHDL ) and can be used with open source programs like flashrom , IMSProg, SNANDer or avrdude for flash , EEPROM , bootloader and BIOS programming.
The key SPI parameters are: the maximum supported frequency for the serial interface, command-to-command latency, and the maximum length for SPI commands. It is possible to find SPI adapters on the market today that support up to 100 MHz serial interfaces, with virtually unlimited access length.
Because SPI is a de facto standard, some SPI host adapters also support protocols beyond traditional 4-wire SPI (for example, the quad-SPI protocol or other custom serial protocols that derive from SPI [ 35 ] ).
Logic analyzers are tools which collect, timestamp , analyze, decode, store, and view the high-speed waveforms, to help debug and develop. Most logic analyzers have the capability to decode SPI bus signals into high-level protocol data with human-readable labels.
SPI waveforms can be seen on analog channels (and/or via digital channels in mixed-signal oscilloscopes ). [ 36 ] Most oscilloscope vendors offer optional support for SPI protocol analysis (2-, 3-, and 4-wire SPI) with triggering.
Various alternative abbreviations for the four common SPI signals are used. (This section omits overbars indicating active-low.)
Microchip uses host and client though keeps the abbreviations MOSI and MISO. [ 40 ]
Serial Analysis of Gene Expression ( SAGE ) is a transcriptomic technique used by molecular biologists to produce a snapshot of the messenger RNA population in a sample of interest in the form of small tags that correspond to fragments of those transcripts. Several variants have been developed since, most notably a more robust version, LongSAGE, [ 2 ] RL-SAGE [ 3 ] and the most recent SuperSAGE. [ 4 ] Many of these have improved the technique with the capture of longer tags, enabling more confident identification of a source gene.
Briefly, SAGE experiments proceed as follows:
The output of SAGE is a list of short sequence tags and the number of times each is observed. Using sequence databases a researcher can usually determine, with some confidence, from which original mRNA (and therefore which gene ) the tag was extracted.
Statistical methods can be applied to tag and count lists from different samples in order to determine which genes are more highly expressed. For example, a normal tissue sample can be compared against a corresponding tumor to determine which genes tend to be more (or less) active.
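As a sketch of this counting-based comparison (illustrative only: real analyses use proper statistical tests and library-size normalisation):

```python
from collections import Counter

def tag_counts(tags):
    """Tally how many times each SAGE tag was observed in a sample."""
    return Counter(tags)

def fold_change(normal, tumor, tag):
    """Naive expression ratio for one tag between two samples,
    normalised by total tag count; a pseudocount of 1 avoids
    division by zero for unobserved tags."""
    n_total, t_total = sum(normal.values()), sum(tumor.values())
    n = (normal[tag] + 1) / (n_total + 1)
    t = (tumor[tag] + 1) / (t_total + 1)
    return t / n
```

A fold change well above or below 1 for a tag then flags its source gene as more or less active in the tumor sample.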
In 1979 teams at Harvard and Caltech extended the basic idea of making DNA copies of mRNAs in vitro to amplifying a library of such copies in bacterial plasmids. [ 5 ] In 1982–1983, the idea of selecting random or semi-random clones from such a cDNA library for sequencing was explored by Greg Sutcliffe and coworkers [ 6 ] and by Putney et al., who sequenced 178 clones from a rabbit muscle cDNA library. [ 7 ] In 1991 Adams and co-workers coined the term expressed sequence tag (EST) and initiated more systematic sequencing of cDNAs as a project (starting with 600 brain cDNAs). [ 8 ] The identification of ESTs proceeded rapidly; millions of ESTs are now available in public databases (e.g. GenBank ).
In 1995, the idea of reducing the tag length from 100–800 bp down to 10–22 bp helped reduce the cost of mRNA surveys. [ 9 ] In the same year, the original SAGE protocol was published by Victor Velculescu at the Oncology Center of Johns Hopkins University . [ 9 ] Although SAGE was originally conceived for use in cancer studies, it has been successfully used to describe the transcriptome of other diseases and in a wide variety of organisms.
The general goal of the technique is similar to the DNA microarray . However, SAGE sampling is based on sequencing mRNA output, not on hybridization of mRNA output to probes, so transcription levels are measured more quantitatively than by microarray. In addition, the mRNA sequences do not need to be known a priori , so genes or gene variants which are not known can be discovered. Microarray experiments are much cheaper to perform, so large-scale studies do not typically use SAGE. Quantifying gene expressions is more exact in SAGE because it involves directly counting the number of transcripts whereas spot intensities in microarrays fall in non-discrete gradients and are prone to background noise.
MicroRNAs , or miRNAs for short, are small (~22 nt) segments of RNA which have been found to play a crucial role in gene regulation. One of the most commonly used methods for cloning and identifying miRNAs within a cell or tissue was developed in the Bartel Lab and published in a paper by Lau et al. (2001). Since then, several variant protocols have arisen, but most have the same basic format, and the procedure is quite similar to SAGE: the small RNAs are isolated, linkers are added to each, and the RNA is converted to cDNA by RT-PCR . Following this, the linkers, containing internal restriction sites, are digested with the appropriate restriction enzyme and the sticky ends are ligated together into concatemers. Following concatenation, the fragments are ligated into plasmids and are used to transform bacteria to generate many copies of the plasmid containing the inserts. These may then be sequenced to identify the miRNAs present, as well as to analyse the expression level of a given miRNA by counting the number of times it is present, similar to SAGE.
LongSAGE was a more robust version of the original SAGE, developed in 2002, which had a higher throughput, using 20 μg of mRNA to generate a cDNA library of thousands of tags. [ 10 ] Robust LongSAGE (RL-SAGE) further improved on the LongSAGE protocol, generating a complete cDNA library from as little as 50 ng of mRNA, much less than the 2 μg required by LongSAGE, [ 10 ] while using a lower number of ditag polymerase chain reactions ( PCR ). [ 11 ]
SuperSAGE is a derivative of SAGE that uses the type III- endonuclease EcoP15I of phage P1 , to cut 26 bp long sequence tags from each transcript's cDNA , expanding the tag-size by at least 6 bp as compared to the predecessor techniques SAGE and LongSAGE. [ 12 ] The longer tag-size allows for a more precise allocation of the tag to the corresponding transcript, because each additional base increases the precision of the annotation considerably.
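The gain in precision from longer tags can be illustrated with a naive back-of-the-envelope model (an assumption for illustration: uniform random base composition, under which a given position matches a tag of length L with probability 4^-L):

```python
def expected_random_matches(tag_len, transcriptome_bases):
    """Expected number of chance occurrences of a given tag in a
    transcriptome, under a naive uniform-random base model: each
    position matches with probability 4**-tag_len."""
    return transcriptome_bases * 4 ** -tag_len
```

In a transcriptome of ~10^8 bases, a 10 bp SAGE tag is expected to occur by chance dozens of times, whereas a 26 bp SuperSAGE tag is expected to occur by chance essentially never, which is why each added base improves the annotation considerably.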
As in the original SAGE protocol, so-called ditags are formed using blunt-ended tags. However, SuperSAGE avoids the bias observed during the less random LongSAGE 20 bp ditag-ligation. [ 13 ] By direct sequencing with high-throughput sequencing techniques ( next-generation sequencing , i.e. pyrosequencing ), hundreds of thousands or millions of tags can be analyzed simultaneously, producing very precise and quantitative gene expression profiles . Therefore, tag-based gene expression profiling, also called "digital gene expression profiling" (DGE), can today provide highly accurate transcription profiles that overcome the limitations of microarrays . [ 14 ] [ 15 ]
In the mid-2010s, several techniques combined with next-generation sequencing were developed that employ the "tag" principle for "digital gene expression profiling" but without the use of the tagging enzyme. The "MACE" approach (Massive Analysis of cDNA Ends) generates tags within the last 1500 bp of a transcript. The technique no longer depends on restriction enzymes and thereby circumvents bias related to the absence or location of the restriction site within the cDNA. Instead, the cDNA is randomly fragmented and the 3' ends are sequenced from the 5' end of the cDNA molecule that carries the poly-A tail. The sequencing length of the tag can be freely chosen. Because of this, the tags can be assembled into contigs and their annotation can be drastically improved, so MACE is also used for the analysis of non-model organisms. In addition, the longer contigs can be screened for polymorphisms. As UTRs show a large number of polymorphisms between individuals, the MACE approach can be applied for allele determination, allele-specific gene expression profiling and the search for molecular markers for breeding. The approach also allows determining alternative polyadenylation of transcripts. Because MACE only requires the 3' ends of transcripts, even partly degraded RNA can be analyzed with less degradation-dependent bias. MACE uses unique molecular identifiers to allow identification of PCR bias. [ 16 ]
The serial binary adder or bit-serial adder is a digital circuit that performs binary addition bit by bit. The serial full adder has three single-bit inputs for the numbers to be added and the carry in. There are two single-bit outputs for the sum and carry out. The carry-in signal is the previously calculated carry-out signal. The addition is performed by adding each bit, lowest to highest, one per clock cycle.
Serial binary addition is done by a flip-flop and a full adder . The flip-flop takes the carry-out signal on each clock cycle and provides its value as the carry-in signal on the next clock cycle. After all of the bits of the input operands have arrived, all of the bits of the sum have come out of the sum output.
The serial binary subtractor operates the same as the serial binary adder, except the subtracted number is converted to its two's complement before being added. Alternatively, the number to be subtracted is converted to its ones' complement , by inverting its bits, and the carry flip-flop is initialized to a 1 instead of to 0 as in addition. The ones' complement plus the 1 is the two's complement.
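The adder and subtractor described above can be sketched as a simulation, with an ordinary variable standing in for the carry flip-flop; bit lists are LSB first, matching the order in which bits arrive.

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def serial_add(x_bits, y_bits, carry=0):
    """Serial binary addition, LSB first: the `carry` variable plays
    the role of the flip-flop, feeding each cycle's carry-out back
    as the next cycle's carry-in."""
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def serial_subtract(x_bits, y_bits):
    """Serial subtraction x - y: invert y (ones' complement) and
    initialise the carry flip-flop to 1, which adds the two's
    complement of y."""
    return serial_add(x_bits, [1 - b for b in y_bits], carry=1)
```

For example, adding 6 ([0, 1, 1] LSB first) and 3 ([1, 1, 0]) produces sum bits [1, 0, 0] with a final carry of 1, i.e. 1001 binary = 9.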
A serial dilution is the step-wise dilution of a substance in solution , either by using a constant dilution factor , or by using a variable factor between dilutions. If the dilution factor at each step is constant, this results in a geometric progression of the concentration in a logarithmic fashion. A ten-fold serial dilution could be 1 M , 0.1 M, 0.01 M, 0.001 M ... Serial dilutions are used to accurately create highly diluted solutions as well as solutions for experiments resulting in concentration curves with a logarithmic scale . A tenfold dilution for each step is called a logarithmic dilution or log-dilution , a 3.16-fold (10 0.5 -fold) dilution is called a half-logarithmic dilution or half-log dilution , and a 1.78-fold (10 0.25 -fold) dilution is called a quarter-logarithmic dilution or quarter-log dilution . Serial dilutions are widely used in experimental sciences, including biochemistry , pharmacology , microbiology , and physics .
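A constant-factor dilution series can be computed directly as a geometric progression; a minimal sketch (the function name is illustrative):

```python
def serial_dilution(c0, factor, steps):
    """Concentrations produced by repeatedly diluting a starting
    concentration c0 by a constant factor: a geometric progression
    c0, c0/factor, c0/factor**2, ..."""
    return [c0 / factor ** i for i in range(steps + 1)]
```

A ten-fold series starting at 1 M thus gives 1 M, 0.1 M, 0.01 M, 0.001 M, matching the example above; a half-log series would use `factor=10**0.5`.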
In biology and medicine , besides the more conventional uses described above, serial dilution may also be used to reduce the concentration of microscopic organisms or cells in a sample. Since, for instance, the number and size of bacterial colonies that grow on an agar plate in a given time is concentration-dependent, and since many other diagnostic techniques involve physically counting the number of micro-organisms or cells on special slides printed with grids (for comparing concentrations of two organisms or cell types in the sample) or in wells of a given volume (for absolute concentrations), dilution can be useful for getting more manageable results. [ 1 ] Serial dilution is also a cheaper and simpler method for preparing cultures from a single cell than optical tweezers and micromanipulators . [ 2 ]
Serial dilution is one of the core foundational practices of homeopathy , with " succussion ", or shaking, occurring between each dilution. In homeopathy, serial dilutions (called potentisation) are often taken so far that by the time the last dilution is completed, no molecules of the original substance are likely to remain. [ 3 ] [ 4 ]
Serial homology is a special type of homology , defined by Owen as "representative or repetitive relation in the segments of the same organism." [ 1 ] Ernst Haeckel preferred the term "homotypy" for the same phenomenon.
Classical examples of serial homologies are the development of forelimbs and hind limbs of tetrapods and the iterative structure of the vertebrae . [ 2 ]
Serial manipulators are the most common industrial robots and they are designed as a series of links connected by motor-actuated joints that extend from a base to an end-effector. Often they have an anthropomorphic arm structure described as having a "shoulder", an "elbow", and a "wrist".
Serial robots usually have six joints, because it requires at least six degrees of freedom to place a manipulated object in an arbitrary position and orientation in the workspace of the robot.
A popular application for serial robots in today's industry is the pick-and-place assembly robot, called a SCARA robot, which has four degrees of freedom.
In its most general form, a serial robot consists of a number of rigid links connected by joints. Simplicity considerations in manufacturing and control have led to robots with only revolute or prismatic joints and orthogonal, parallel and/or intersecting joint axes (instead of arbitrarily placed joint axes). Donald L. Pieper derived the first practically relevant result in this context, [ 1 ] referred to as the 321 kinematic structure : the inverse kinematics of serial manipulators with six revolute joints, and with three consecutive joint axes intersecting, can be solved in closed form, i.e. analytically. This result had a tremendous influence on the design of industrial robots.
The main advantage of a serial manipulator is a large workspace with respect to the size of the robot and the floor space it occupies. The main disadvantages of these robots are:
The position and orientation of a robot's end effector are derived from the joint positions by means of a geometric model of the robot arm. For serial robots, the mapping from joint positions to end-effector pose is easy, while the inverse mapping is more difficult. Therefore, most industrial robots have special designs that reduce the complexity of the inverse mapping.
The reachable workspace of a robot's end-effector is the manifold of reachable frames. The dextrous workspace consists of the points of the reachable workspace where the robot can generate velocities that span the complete tangent space at that point, i.e., it can translate the manipulated object with three degrees of freedom, and rotate the object with three degrees of rotation freedom. The relationships between joint space and Cartesian space coordinates of the object held by the robot are in general multiple-valued: the same pose can be reached by the serial arm in different ways, each with a different set of joint coordinates. Hence the reachable workspace of the robot is divided in configurations (also called assembly modes), in which the kinematic relationships are locally one-to-one.
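A planar two-link arm is the textbook illustration of these multiple-valued relationships: forward kinematics is a direct formula, while inverse kinematics returns two configurations (the elbow-up and elbow-down assembly modes). A sketch, with illustrative names and link lengths:

```python
from math import atan2, acos, cos, sin

def forward(l1, l2, q1, q2):
    """Forward kinematics of a planar 2-link arm: joint angles to end point."""
    return (l1 * cos(q1) + l2 * cos(q1 + q2),
            l1 * sin(q1) + l2 * sin(q1 + q2))

def inverse(l1, l2, x, y):
    """Inverse kinematics: the same end point is reached by two joint
    configurations (elbow-up and elbow-down), one per sign of q2."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    sols = []
    for sign in (+1, -1):
        q2 = sign * acos(c2)
        q1 = atan2(y, x) - atan2(l2 * sin(q2), l1 + l2 * cos(q2))
        sols.append((q1, q2))
    return sols
```

Both returned joint sets map back to the same Cartesian point through `forward`, which is exactly the locally one-to-one, globally multiple-valued behaviour described above.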
A singularity is a configuration of a serial manipulator in which the joint parameters no longer completely define the position and orientation of the end-effector. Singularities occur in configurations when joint axes align in a way that reduces the ability of the arm to position the end-effector. For example when a serial manipulator is fully extended it is in what is known as the boundary singularity. [ 2 ]
At a singularity the end-effector loses one or more degrees of twist freedom (instantaneously, the end-effector cannot move in these directions). Serial robots with less than six independent joints are always singular in the sense that they can never span a six-dimensional twist space. This is often called an architectural singularity.
A singularity is usually not an isolated point in the workspace of the robot, but a sub-manifold.
A redundant manipulator has more than six degrees of freedom which means that it has additional joint parameters [ 3 ] that allow the configuration of the robot to change while it holds its end-effector in a fixed position and orientation.
A typical redundant manipulator has seven joints, for example three at the shoulder, one elbow joint and three at the wrist. This manipulator can move its elbow around a circle while it maintains a specific position and orientation of its end-effector.
A snake robot has many more than six degrees of freedom and is often called hyper-redundant.
In set theory a serial relation is a homogeneous relation expressing the connection of an element of a sequence to the following element. The successor function used by Peano to define natural numbers is the prototype for a serial relation.
Bertrand Russell used serial relations in The Principles of Mathematics [ 1 ] (1903) as he explored the foundations of order theory and its applications. The term serial relation was also used by B. A. Bernstein for an article showing that particular common axioms in order theory are nearly incompatible: connectedness , irreflexivity, and transitivity . [ 2 ]
A serial relation R is an endorelation on a set U . As stated by Russell, ∀ x ∃ y x R y , {\displaystyle \forall x\exists y\ xRy,} where the universal and existential quantifiers refer to U . In contemporary language of relations , this property defines a total relation . But a total relation may be heterogeneous. Serial relations are of historic interest.
For a relation R , let { y : xRy } denote the "successor neighborhood" of x . A serial relation can be equivalently characterized as a relation for which every element has a non-empty successor neighborhood. Similarly, an inverse serial relation is a relation in which every element has non-empty "predecessor neighborhood". [ 3 ]
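On a finite set this characterization can be checked directly; a minimal sketch (names are illustrative), using the successor function modulo n as the prototypical serial relation:

```python
def is_serial(universe, relation):
    """A relation R on U is serial iff every x in U has a non-empty
    successor neighborhood { y : x R y }."""
    return all(any((x, y) in relation for y in universe) for x in universe)
```

The strict order < on a finite set fails seriality because its greatest element has an empty successor neighborhood, while wrapping the successor around (mod n) restores it.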
In normal modal logic , the extension of fundamental axiom set K by the serial property results in axiom set D . [ 4 ]
Relations are used to develop series in The Principles of Mathematics . The prototype is Peano's successor function as a one-one relation on the natural numbers . Russell's series may be finite or generated by a relation giving cyclic order . In that case, the point-pair separation relation is used for description. To define a progression, he requires the generating relation to be a connected relation . Then ordinal numbers are derived from progressions, the finite ones are finite ordinals. [ 1 ] : Chapter 28: Progressions and ordinal numbers Distinguishing open and closed series [ 1 ] : 234 results in four total orders: finite, one end, no end and open, and no end and closed. [ 1 ] : 202
Contrary to other writers, Russell admits negative ordinals. For motivation, consider the scales of measurement using scientific notation , where a power of ten represents a decade of measure. Informally, this parameter corresponds to orders of magnitude used to quantify physical units. The parameter takes on negative as well as positive values.
Russell adopted the term stretch from Meinong , who had contributed to the theory of distance. [ 5 ] Stretch refers to the intermediate terms between two points in a series, and the "number of terms measures the distance and divisibility of the whole." [ 1 ] : 181 To explain Meinong, Russell refers to the Cayley–Klein metric , which uses stretch coordinates in anharmonic ratios which determine distance by using logarithm. [ 1 ] : 255 [ 6 ]
Time stretch microscopy , also known as serial time-encoded amplified imaging/microscopy or stretched time-encoded amplified imaging/microscopy ( STEAM ), is a fast real-time optical imaging method that provides MHz frame rate, ~100 ps shutter speed, and ~30 dB (× 1000) optical image gain. Based on the photonic time stretch technique, STEAM holds world records for shutter speed and frame rate in continuous real-time imaging. STEAM employs the photonic time stretch with internal Raman amplification to realize optical image amplification, circumventing the fundamental trade-off between sensitivity and speed that affects virtually all optical imaging and sensing systems. This method uses a single-pixel photodetector , eliminating the need for the detector array and its readout time limitations. Avoiding this problem and featuring optical image amplification for improved sensitivity at high image acquisition rates, STEAM's shutter speed is at least 1000 times faster than the best CCD [ 1 ] and CMOS [ 2 ] cameras. Its frame rate is 1000 times faster than the fastest CCD cameras and 10–100 times faster than the fastest CMOS cameras .
Time stretch microscopy and its application to microfluidics for classification of biological cells was invented at UCLA. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] It combines the concept of spectrally encoded illumination with the photonic time stretch, an ultrafast real-time data acquisition technology developed earlier in the same lab to create a femtosecond real-time single-shot digitizer , [ 11 ] and a single shot stimulated Raman spectrometer. [ 12 ] The first demonstration was a one-dimensional version [ 3 ] and later a two-dimensional version. [ 4 ] Later, a fast imaging vibrometer was created by extending the system to an interferometric configuration. [ 13 ] The technology was then extended to time stretch quantitative phase imaging ( TS-QPI ) for label-free classification of blood cells and combined with artificial intelligence (AI) for classification of cancer cells in blood with over 96% accuracy. [ 14 ] The system measured 16 biophysical parameters of cells simultaneously in a single shot and performed hyper-dimensional classification using a Deep Neural Network (DNN). The results were compared with other machine learning classification algorithms such as logistic regression and naive Bayes, with the highest accuracy obtained with deep learning.
This was later extended to "Deep Cytometry", [ 15 ] in which the computationally intensive tasks of image processing and feature extraction before deep learning were avoided by directly feeding the time-stretch line scans, each representing a laser pulse, into a deep convolutional neural network. This direct classification of raw time-stretched data reduced the inference time by orders of magnitude, to 700 microseconds on a GPU-accelerated processor. At a flow speed of 1 m/s the cells move less than a millimeter in that time. Therefore, this ultrashort inference time is fast enough for cell sorting.
Fast real-time optical imaging technology is indispensable for studying dynamical events such as shockwaves , laser fusion , chemical dynamics in living cells, neural activity, laser surgery , microfluidics, and MEMS . The usual techniques of conventional CCD and CMOS cameras are inadequate for capturing fast dynamical processes with high sensitivity and speed; there are technological limitations—it takes time to read out the data from the sensor array and there's a fundamental trade-off between sensitivity and speed: at high frame rates, fewer photons are collected during each frame, a problem that affects nearly all optical imaging systems.
The streak camera , used for diagnostics in laser fusion, plasma radiation, and combustion, operates in burst mode only (providing just several frames) and requires synchronization of the camera with the event to be captured. It is therefore unable to capture random or transient events in biological systems. Stroboscopes play a complementary role: they can capture the dynamics of fast events, but only if the event is repetitive, such as rotations, vibrations, and oscillations. They are unable to capture non-repetitive random events that occur only once or do not occur at regular intervals.
The basic principle involves two steps, both performed optically. In the first step, the spectrum of a broadband optical pulse is converted by a spatial disperser into a rainbow that illuminates the target. Here the rainbow pulse consists of many subpulses of different colors (frequencies), meaning that the different frequency components (colors) of the rainbow pulse are incident onto different spatial coordinates on the object. Therefore, the spatial information (image) of the object is encoded into the spectrum of the resultant reflected or transmitted rainbow pulse. The image-encoded reflected or transmitted rainbow pulse returns to the same spatial disperser or enters another spatial disperser to combine the colors of the rainbow back into a single pulse. Here STEAM's shutter speed or exposure time corresponds to the temporal width of the rainbow pulse. In the second step, the spectrum is mapped into a serial temporal signal that is stretched in time using dispersive Fourier transform to slow it down such that it can be digitized in real-time. The time stretch happens inside a dispersive fibre that is pumped to create internal Raman amplification. Here the image is optically amplified by stimulated Raman scattering to overcome the thermal noise level of the detector. The amplified time-stretched serial image stream is detected by a single-pixel photodetector and the image is reconstructed in the digital domain. Subsequent pulses capture repetitive frames; hence the laser pulse repetition rate corresponds to the frame rate of STEAM. The second step is implemented by the time stretch analogue-to-digital converter , also known as the time stretch recording scope (TiSER).
The simultaneous stretching and amplification is also known as amplified time stretch dispersive Fourier transformation (TS-DFT). [ 16 ] [ 17 ] The amplified time stretch technology was developed earlier to demonstrate analog-to-digital conversion with femtosecond real-time sampling rate [ 11 ] and to demonstrate stimulated Raman spectroscopy in single shot at millions of frames per second. [ 12 ] Amplified time stretch is a process in which the spectrum of an optical pulse is mapped by large group-velocity dispersion into a slowed-down temporal waveform and amplified simultaneously by the process of stimulated Raman scattering . Consequently, the optical spectrum can be captured with a single-pixel photodetector and digitized in real-time. Pulses are repeated for repetitive measurements of the optical spectrum. Amplified time stretch DFT consists of a dispersive fibre pumped by lasers and wavelength-division multiplexers that couple the lasers into and out of the dispersive fibre. Amplified dispersive Fourier transformation was originally developed to enable ultra-wideband analog-to-digital converters and has also been used for high-throughput real-time spectroscopy . The resolution of the STEAM imager is mainly determined by the diffraction limit, the sampling rate of the back-end digitizer, and the spatial dispersers. [ 18 ]
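The core of the dispersive Fourier transform is a linear wavelength-to-time mapping: group-velocity dispersion delays each spectral component in proportion to its wavelength offset. A minimal numeric sketch of this mapping follows; the dispersion and bandwidth values are assumed for illustration and are not parameters of the cited systems.

```python
# Sketch of the wavelength-to-time mapping performed by time stretch
# dispersive Fourier transform: dispersion delays each spectral
# component proportionally to its wavelength offset, serializing the
# spectrum in time. All parameter values are illustrative.

def arrival_time_ps(wavelength_nm, center_nm=1550.0,
                    total_dispersion_ps_per_nm=1000.0):
    """Relative arrival time (ps) of one spectral component after the
    dispersive fibre, in the linear group-delay approximation."""
    return total_dispersion_ps_per_nm * (wavelength_nm - center_nm)

# A 20 nm optical bandwidth is stretched into a 20 ns waveform,
# slow enough for a real-time electronic digitizer to record.
stretch_ps = arrival_time_ps(1560.0) - arrival_time_ps(1540.0)
print(stretch_ps)  # 20000.0
```

Larger total dispersion gives a slower serial waveform (relaxing the digitizer's bandwidth requirement) at the cost of a longer record per frame.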
Time-stretch quantitative phase imaging ( TS-QPI ) is an imaging technique based on time-stretch technology for simultaneous measurement of phase and intensity spatial profiles. [ 19 ] [ 20 ] [ 21 ] [ 22 ] Developed at UCLA, it has led to the development of time stretch artificial intelligence microscope. [ 19 ]
In time stretch imaging, the object's spatial information is encoded in the spectrum of laser pulses, each with a sub-nanosecond pulse duration. Each pulse, representing one frame of the camera, is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging. Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (optical pulse width) and the drop in the peak optical power resulting from the time stretch. [ 23 ] These issues are solved in time stretch imaging by implementing a low noise-figure Raman amplifier within the dispersive device that performs time stretching. Moreover, the warped stretch transform can be used in time stretch imaging to achieve optical image compression and nonuniform spatial resolution over the field-of-view.
In the coherent version of the time-stretch camera, the imaging is combined with spectral interferometry to measure quantitative phase [ 24 ] [ 25 ] and intensity images in real-time and at high throughput. Integrated with a microfluidic channel, the coherent time stretch imaging system measures both the quantitative optical phase shift and the loss of individual cells as a high-speed imaging flow cytometer, capturing millions of line-images per second at flow speeds as high as a few meters per second, reaching a throughput of up to one hundred thousand cells per second. Time stretch quantitative phase imaging can be combined with machine learning to achieve highly accurate label-free classification of the cells.
This method is useful for a broad range of scientific, industrial, and biomedical applications that require high shutter speeds and frame rates. The one-dimensional version can be employed for displacement sensing, [ citation needed ] barcode reading, [ citation needed ] and blood screening; [ 26 ] the two-dimensional version for real-time observation, diagnosis, and evaluation of shockwaves, microfluidic flow, [ 27 ] neural activity, MEMS, [ 28 ] and laser ablation dynamics. [ citation needed ] The three-dimensional version is useful for range detection, [ citation needed ] dimensional metrology, [ citation needed ] and surface vibrometry and velocimetry. [ 29 ]
Big data brings not only opportunity but also challenges to biomedical and scientific instrumentation, as acquisition and processing units are overwhelmed by a torrent of data. The need to compress massive volumes of data in real-time has fueled interest in nonuniform stretch transformations, operations that reshape the data according to its sparsity.
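The idea behind such transformations can be sketched in ordinary code: a monotonic warp of the coordinate axis, followed by a uniform sampler, yields samples that are densest where the warp is steepest. The tanh profile below is an arbitrary stand-in for the actual nonlinear group delay, chosen only to illustrate the principle.

```python
from math import tanh

# Sketch of warped (nonuniform) stretch sampling: a nonlinear,
# monotonic warp of the image coordinate followed by a *uniform*
# sampler concentrates samples where the warp is steepest, here the
# information-rich center. The tanh warp is illustrative only.

def warp(x, k=2.5):
    return tanh(k * x) / tanh(k)    # monotonic map of [-1, 1] onto itself

def inverse_warp(y):
    lo, hi = -1.0, 1.0              # bisection on the monotonic warp
    for _ in range(60):
        mid = (lo + hi) / 2
        if warp(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

uniform = [-1 + 2 * i / 40 for i in range(41)]        # uniform sampler
object_samples = [inverse_warp(u) for u in uniform]   # nonuniform on object
spacing = [b - a for a, b in zip(object_samples, object_samples[1:])]
# sample spacing is finest at the center of the field of view
print(spacing[20] < spacing[0])  # True
```

A steeper warp near the center trades resolution at the edges for resolution in the information-rich region, without increasing the total number of samples.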
Researchers at UCLA have demonstrated image compression performed in the optical domain and in real-time. [ 30 ] Using nonlinear group delay dispersion and time-stretch imaging, they optically warped the image such that the information-rich portions are sampled at a higher sample density than the sparse regions. This was achieved by restructuring the image before optical-to-electrical conversion, followed by a uniform electronic sampler. Reconstruction of the nonuniformly stretched image shows that the resolution is higher where information is rich and lower where information is sparse and relatively unimportant. The information-rich region at the center is well preserved while maintaining the same sampling rate as the uniform case, without down-sampling. Image compression was demonstrated at 36 million frames per second in real time. | https://en.wikipedia.org/wiki/Serial_time-encoded_amplified_microscopy
In combinatorial data analysis , seriation is the process of finding an arrangement of all objects in a set in a linear order , so as to optimize a given loss function . [ 1 ] The main goal is exploratory: to reveal structural information.
This combinatorics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Seriation_(statistics) |
Sericitic alteration or sericitization is a process of mineral alteration caused by hydrothermal fluids invading permeable country rock . Plagioclase feldspar within the rock is converted to sericite, which typically consists of fine-grained white mica and related minerals (sericite is not a single mineral, but a field term for any fine-grained white phyllosilicate whose precise identity cannot be determined). Sericitic alteration occurs within the phyllic alteration zone.
This geochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Sericitic_alteration |
In order-theoretic mathematics, a series-parallel partial order is a partially ordered set built up from smaller series-parallel partial orders by two simple composition operations. [ 1 ] [ 2 ]
The series-parallel partial orders may be characterized as the N-free finite partial orders; they have order dimension at most two. [ 1 ] [ 3 ] They include weak orders and the reachability relationship in directed trees and directed series–parallel graphs . [ 2 ] [ 3 ] The comparability graphs of series-parallel partial orders are cographs . [ 2 ] [ 4 ]
Series-parallel partial orders have been applied in job shop scheduling , [ 5 ] machine learning of event sequencing in time series data, [ 6 ] transmission sequencing of multimedia data, [ 7 ] and throughput maximization in dataflow programming . [ 8 ]
Series-parallel partial orders have also been called multitrees; [ 4 ] however, that name is ambiguous: the name multitree also refers to partial orders with no four-element diamond suborder [ 9 ] and to other structures formed from multiple trees.
Consider P and Q , two partially ordered sets . The series composition of P and Q , written P ; Q , [ 7 ] P * Q , [ 2 ] or P ⧀ Q , [ 1 ] is the partially ordered set whose elements are the disjoint union of the elements of P and Q . In P ; Q , two elements x and y that both belong to P or that both belong to Q have the same order relation that they do in P or Q respectively. However, for every pair x , y where x belongs to P and y belongs to Q , there is an additional order relation x ≤ y in the series composition. Series composition is an associative operation : one can write P ; Q ; R as the series composition of three orders, without ambiguity about how to combine them pairwise, because both of the parenthesizations ( P ; Q ); R and P ; ( Q ; R ) describe the same partial order. However, it is not a commutative operation , because switching the roles of P and Q will produce a different partial order that reverses the order relations of pairs with one element in P and one in Q . [ 1 ]
The parallel composition of P and Q , written P || Q , [ 7 ] P + Q , [ 2 ] or P ⊕ Q , [ 1 ] is defined similarly, from the disjoint union of the elements in P and the elements in Q , with pairs of elements that both belong to P or both to Q having the same order as they do in P or Q respectively. In P || Q , a pair x , y is incomparable whenever x belongs to P and y belongs to Q . Parallel composition is both commutative and associative. [ 1 ]
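The two composition operations can be written down directly for finite orders. A minimal sketch follows, with each poset represented as a pair (elements, strict order relation) and element names assumed disjoint between the two operands:

```python
# Finite partial orders as (elements, strict order relation) pairs;
# a minimal sketch of the series and parallel composition operations.
# Element names are assumed disjoint between the two operands.

def series(P, Q):
    (ep, rp), (eq, rq) = P, Q
    # every element of P lies below every element of Q
    return (ep | eq, rp | rq | {(x, y) for x in ep for y in eq})

def parallel(P, Q):
    (ep, rp), (eq, rq) = P, Q
    # no new relations: elements of P and Q remain incomparable
    return (ep | eq, rp | rq)

def single(name):
    return ({name}, set())

# (a ; b) || (c ; d): two two-element chains side by side
P = parallel(series(single('a'), single('b')),
             series(single('c'), single('d')))
print(sorted(P[1]))  # [('a', 'b'), ('c', 'd')]
```

The representation makes the algebraic properties stated above easy to check: `series(series(A, B), C) == series(A, series(B, C))` while `series(A, B) != series(B, A)`, and `parallel` is both commutative and associative.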
The class of series-parallel partial orders is the set of partial orders that can be built up from single-element partial orders using these two operations. Equivalently, it is the smallest set of partial orders that includes the single-element partial order and is closed under the series and parallel composition operations. [ 1 ] [ 2 ]
A weak order is the series parallel partial order obtained from a sequence of composition operations in which all of the parallel compositions are performed first, and then the results of these compositions are combined using only series compositions. [ 2 ]
The partial order N with the four elements a , b , c , and d and exactly the three order relations a ≤ b ≥ c ≤ d is an example of a fence or zigzag poset; its Hasse diagram has the shape of the capital letter "N". It is not series-parallel, because there is no way of splitting it into the series or parallel composition of two smaller partial orders. A partial order P is said to be N-free if there does not exist a set of four elements in P such that the restriction of P to those elements is order-isomorphic to N . The series-parallel partial orders are exactly the nonempty finite N-free partial orders. [ 1 ] [ 2 ] [ 3 ]
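The N-free condition can be checked by brute force on small examples; a sketch (real recognition algorithms, cited below, run in linear time):

```python
from itertools import combinations, permutations

# Brute-force N-freeness check for a finite poset given as a set of
# elements and its strict order relation, a set of (x, y) pairs with
# x < y. Illustrative only; practical algorithms are far faster.

def is_n_free(elements, rel):
    rel = set(rel)
    for quad in combinations(elements, 4):
        for a, b, c, d in permutations(quad):
            # the fence a < b > c < d, with no other comparabilities
            wanted = {(a, b), (c, b), (c, d)}
            pairs = {(x, y) for x in quad for y in quad
                     if x != y and (x, y) in rel}
            if pairs == wanted:
                return False
    return True

# The four-element "N" itself is, of course, not N-free:
N = ({'a', 'b', 'c', 'd'}, {('a', 'b'), ('c', 'b'), ('c', 'd')})
print(is_n_free(*N))  # False
# A four-element chain is N-free:
chain_rel = {('w', 'x'), ('w', 'y'), ('w', 'z'),
             ('x', 'y'), ('x', 'z'), ('y', 'z')}
print(is_n_free({'w', 'x', 'y', 'z'}, chain_rel))  # True
```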
It follows immediately from this (although it can also be proven directly) that any nonempty restriction of a series-parallel partial order is itself a series-parallel partial order. [ 1 ]
The order dimension of a partial order P is the minimum size of a realizer of P , a set of linear extensions of P with the property that, for every two distinct elements x and y of P , x ≤ y in P if and only if x has an earlier position than y in every linear extension of the realizer. Series-parallel partial orders have order dimension at most two. If P and Q have realizers { L 1 , L 2 } and { L 3 , L 4 }, respectively, then { L 1 L 3 , L 2 L 4 } is a realizer of the series composition P ; Q , and { L 1 L 3 , L 4 L 2 } is a realizer of the parallel composition P || Q . [ 2 ] [ 3 ] A partial order is series-parallel if and only if it has a realizer in which one of the two permutations is the identity and the other is a separable permutation .
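The concatenation construction for realizers can be verified mechanically on a small example. A sketch, under the convention that a realizer is a list of linear extensions (Python lists) over disjoint element names:

```python
# Sketch verifying the realizer construction described above: from
# realizers {L1, L2} of P and {L3, L4} of Q, {L1L3, L2L4} realizes
# P ; Q and {L1L3, L4L2} realizes P || Q.

def realizes(realizer, elements, rel):
    """True iff x < y in rel exactly when x precedes y in every list."""
    pos = [{e: i for i, e in enumerate(L)} for L in realizer]
    for x in elements:
        for y in elements:
            if x == y:
                continue
            below = all(p[x] < p[y] for p in pos)
            if below != ((x, y) in rel):
                return False
    return True

# P: the chain a < b (its only extension, listed twice)
# Q: c and d incomparable (the two opposite orders)
L1, L2 = ['a', 'b'], ['a', 'b']
L3, L4 = ['c', 'd'], ['d', 'c']

series_rel = {('a', 'b')} | {(x, y) for x in 'ab' for y in 'cd'}
print(realizes([L1 + L3, L2 + L4], set('abcd'), series_rel))    # True

parallel_rel = {('a', 'b')}
print(realizes([L1 + L3, L4 + L2], set('abcd'), parallel_rel))  # True
```

Reversing the second concatenation (L4L2 instead of L2L4) is exactly what makes every P element incomparable to every Q element in the parallel case.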
It is known that a partial order P has order dimension two if and only if there exists a conjugate order Q on the same elements, with the property that any two distinct elements x and y are comparable on exactly one of these two orders. In the case of series parallel partial orders, a conjugate order that is itself series parallel may be obtained by performing a sequence of composition operations in the same order as the ones defining P on the same elements, but performing a series composition for each parallel composition in the decomposition of P and vice versa. More strongly, although a partial order may have many different conjugates, every conjugate of a series parallel partial order must itself be series parallel. [ 2 ]
Any partial order may be represented (usually in more than one way) by a directed acyclic graph in which there is a path from x to y whenever x and y are elements of the partial order with x ≤ y . The graphs that represent series-parallel partial orders in this way have been called vertex series parallel graphs, and their transitive reductions (the graphs of the covering relations of the partial order) are called minimal vertex series parallel graphs. [ 3 ] Directed trees and (two-terminal) series parallel graphs are examples of minimal vertex series parallel graphs; therefore, series parallel partial orders may be used to represent reachability relations in directed trees and series parallel graphs. [ 2 ] [ 3 ]
The comparability graph of a partial order is the undirected graph with a vertex for each element and an undirected edge for each pair of distinct elements x , y with either x ≤ y or y ≤ x . That is, it is formed from a minimal vertex series parallel graph by forgetting the orientation of each edge. The comparability graph of a series-parallel partial order is a cograph : the series and parallel composition operations of the partial order give rise to operations on the comparability graph that form the disjoint union of two subgraphs or that connect two subgraphs by all possible edges; these two operations are the basic operations from which cographs are defined. Conversely, every cograph is the comparability graph of a series-parallel partial order. If a partial order has a cograph as its comparability graph, then it must be a series-parallel partial order, because every other kind of partial order has an N suborder that would correspond to an induced four-vertex path in its comparability graph, and such paths are forbidden in cographs. [ 2 ] [ 4 ]
The forbidden suborder characterization of series-parallel partial orders can be used as a basis for an algorithm that tests whether a given binary relation is a series-parallel partial order, in an amount of time that is linear in the number of related pairs. [ 2 ] [ 3 ] Alternatively, if a partial order is described as the reachability order of a directed acyclic graph , it is possible to test whether it is a series-parallel partial order, and if so compute its transitive closure, in time proportional to the number of vertices and edges in the transitive closure; it remains open whether the time to recognize series-parallel reachability orders can be improved to be linear in the size of the input graph. [ 10 ]
If a series-parallel partial order is represented as an expression tree describing the series and parallel composition operations that formed it, then the elements of the partial order may be represented by the leaves of the expression tree. A comparison between any two elements may be performed algorithmically by searching for the lowest common ancestor of the corresponding two leaves; if that ancestor is a parallel composition, the two elements are incomparable, and otherwise the order of the series composition operands determines the order of the elements. In this way, a series-parallel partial order on n elements may be represented in O ( n ) space with O (1) time to determine any comparison value. [ 2 ]
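The expression-tree comparison scheme can be sketched directly. For brevity this walk compares root-to-leaf paths, which takes time proportional to the tree depth; a production version would use a constant-time lowest-common-ancestor structure to reach the O(1) bound stated above.

```python
# Sketch of comparisons in a series-parallel order stored as an
# expression tree: the answer is read off at the lowest common
# ancestor of the two leaves.

class Node:
    def __init__(self, kind, children=(), label=None):
        self.kind = kind          # 'leaf', 'series', or 'parallel'
        self.children = list(children)
        self.label = label

def paths(node, prefix=()):
    """Map each leaf label to its path of (node, child-index) steps."""
    if node.kind == 'leaf':
        return {node.label: prefix}
    out = {}
    for i, ch in enumerate(node.children):
        out.update(paths(ch, prefix + ((node, i),)))
    return out

def compare(path_x, path_y):
    """'<', '>' or '||' according to the lowest common ancestor."""
    for (nx, ix), (ny, iy) in zip(path_x, path_y):
        if nx is ny and ix != iy:
            if nx.kind == 'parallel':
                return '||'
            return '<' if ix < iy else '>'
    return '='

def leaf(s):
    return Node('leaf', label=s)

# (a ; b) || c
tree = Node('parallel', [Node('series', [leaf('a'), leaf('b')]), leaf('c')])
p = paths(tree)
print(compare(p['a'], p['b']))  # '<'
print(compare(p['a'], p['c']))  # '||'
```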
It is NP-complete to test, for two given series-parallel partial orders P and Q , whether P contains a restriction isomorphic to Q . [ 3 ]
Although the problem of counting the number of linear extensions of an arbitrary partial order is #P-complete , [ 11 ] it may be solved in polynomial time for series-parallel partial orders. Specifically, if L ( P ) denotes the number of linear extensions of a partial order P , then L ( P ; Q ) = L ( P ) L ( Q ) and L ( P || Q ) = ((| P | + | Q |)! / (| P |! | Q |!)) L ( P ) L ( Q ),
so the number of linear extensions may be calculated using an expression tree with the same form as the decomposition tree of the given series-parallel order. [ 2 ]
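A sketch of this computation over the decomposition tree, using the product rule for series composition and the binomial interleaving rule for parallel composition:

```python
from math import comb

# Counting linear extensions of a series-parallel order over its
# decomposition tree: L(P;Q) = L(P) L(Q), and
# L(P||Q) = C(|P|+|Q|, |P|) L(P) L(Q), since the two extensions may
# be interleaved in every possible way.
# A node is ('leaf',), ('series', l, r) or ('parallel', l, r).

def count(node):
    """Return (number of elements, number of linear extensions)."""
    if node[0] == 'leaf':
        return 1, 1
    (n1, l1), (n2, l2) = count(node[1]), count(node[2])
    if node[0] == 'series':
        return n1 + n2, l1 * l2
    return n1 + n2, comb(n1 + n2, n1) * l1 * l2

# An antichain of 3 elements: all 3! = 6 orderings are extensions.
anti3 = ('parallel', ('parallel', ('leaf',), ('leaf',)), ('leaf',))
print(count(anti3))  # (3, 6)
```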
Mannila & Meek (2000) use series-parallel partial orders as a model for the sequences of events in time series data. They describe machine learning algorithms for inferring models of this type, and demonstrate its effectiveness at inferring course prerequisites from student enrollment data and at modeling web browser usage patterns. [ 6 ]
Amer et al. (1994) argue that series-parallel partial orders are a good fit for modeling the transmission sequencing requirements of multimedia presentations. They use the formula for computing the number of linear extensions of a series-parallel partial order as the basis for analyzing multimedia transmission algorithms. [ 7 ]
Choudhary et al. (1994) use series-parallel partial orders to model the task dependencies in a dataflow model of massive data processing for computer vision . They show that, by using series-parallel orders for this problem, it is possible to efficiently construct an optimized schedule that assigns different tasks to different processors of a parallel computing system in order to optimize the throughput of the system. [ 8 ]
A class of orderings somewhat more general than series-parallel partial orders is provided by PQ trees , data structures that have been applied in algorithms for testing whether a graph is planar and recognizing interval graphs . [ 12 ] A P node of a PQ tree allows all possible orderings of its children, like a parallel composition of partial orders, while a Q node requires the children to occur in a fixed linear ordering, like a series composition of partial orders. However, unlike series-parallel partial orders, PQ trees allow the linear ordering of any Q node to be reversed. | https://en.wikipedia.org/wiki/Series-parallel_partial_order |
In mathematics, a multisection of a power series is a new power series composed of equally spaced terms extracted unaltered from the original series. Formally, if one is given a power series
∑_n a_n z^n,
then its multisection is a power series of the form
∑_m a_{qm+p} z^{qm+p},
where p , q are integers, with 0 ≤ p < q . Series multisection represents one of the common transformations of generating functions .
A multisection of the series of an analytic function
f(z) = ∑_n a_n z^n
has a closed-form expression in terms of the function f :
∑_m a_{qm+p} z^{qm+p} = (1/q) ∑_{k=0}^{q−1} ω^{−kp} f(ω^k z),
where ω = e^{2πi/q} is a primitive q -th root of unity . This expression is often called a root of unity filter. This solution was first discovered by Thomas Simpson . [ 1 ]
In general, the bisections of a series are the even and odd parts of the series.
Consider the geometric series
∑_{n=0}^{∞} z^n = 1/(1 − z), valid for |z| < 1.
By setting z → z^q in the above series, its multisections are easily seen to be
∑_{m=0}^{∞} z^{qm+p} = z^p/(1 − z^q).
Remembering that the sum of the multisections must equal the original series, we recover the familiar identity
∑_{p=0}^{q−1} z^p/(1 − z^q) = 1/(1 − z).
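The root of unity filter is easy to check numerically on the geometric series; a short sketch (the sample point z is arbitrary, chosen inside the disc of convergence):

```python
import cmath

# Numeric check of the root-of-unity filter on the geometric series
# f(z) = 1/(1 - z): the (p, q)-multisection should equal
# z**p / (1 - z**q).

def multisection(f, z, p, q):
    w = cmath.exp(2j * cmath.pi / q)   # primitive q-th root of unity
    return sum(w ** (-k * p) * f(w ** k * z) for k in range(q)) / q

f = lambda z: 1 / (1 - z)
z, p, q = 0.37 + 0.21j, 2, 5
lhs = multisection(f, z, p, q)
rhs = z ** p / (1 - z ** q)
print(abs(lhs - rhs) < 1e-12)  # True
```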
The exponential function
e^z = ∑_{n=0}^{∞} z^n/n!
by means of the above formula for analytic functions separates into
∑_{m=0}^{∞} z^{qm+p}/(qm+p)! = (1/q) ∑_{k=0}^{q−1} ω^{−kp} e^{ω^k z}.
The bisections are trivially the hyperbolic functions :
∑_{m=0}^{∞} z^{2m}/(2m)! = (1/2)(e^z + e^{−z}) = cosh z,
∑_{m=0}^{∞} z^{2m+1}/(2m+1)! = (1/2)(e^z − e^{−z}) = sinh z.
Higher order multisections are found by noting that all such series must be real-valued along the real line. By taking the real part and using standard trigonometric identities, the formulas may be written in explicitly real form as
∑_{m=0}^{∞} z^{qm+p}/(qm+p)! = (1/q) ∑_{k=0}^{q−1} e^{z cos(2πk/q)} cos(z sin(2πk/q) − 2πkp/q).
These can be seen as solutions to the linear differential equation f^{(q)}(z) = f(z) with boundary conditions f^{(k)}(0) = δ_{k,p} , using Kronecker delta notation. In particular, the trisections are
∑_{m=0}^{∞} z^{3m+p}/(3m+p)! = (1/3)[e^z + 2 e^{−z/2} cos((√3/2)z − 2πp/3)],  p = 0, 1, 2,
and the quadrisections are
(1/2)(cosh z + cos z), (1/2)(sinh z + sin z), (1/2)(cosh z − cos z), (1/2)(sinh z − sin z), for p = 0, 1, 2, 3 respectively.
Multisection of a binomial expansion
(1 + x)^n = ∑_k C(n, k) x^k
at x = 1 gives the following identity for the sum of binomial coefficients with step q :
∑_k C(n, p + qk) = (1/q) ∑_{j=0}^{q−1} (2 cos(πj/q))^n cos(π(n − 2p)j/q).
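This identity for stepped sums of binomial coefficients can be checked numerically; a sketch, with the test values of n, p, q chosen arbitrarily:

```python
from math import comb, cos, pi

# Numeric check of the multisection identity for binomial coefficients:
# sum over k of C(n, p + q*k) equals
# (1/q) * sum_{j=0}^{q-1} (2*cos(pi*j/q))**n * cos(pi*(n - 2*p)*j/q).

def binom_step_sum(n, p, q):
    return sum(comb(n, p + q * k) for k in range((n - p) // q + 1))

def multisection_formula(n, p, q):
    total = sum((2 * cos(pi * j / q)) ** n * cos(pi * (n - 2 * p) * j / q)
                for j in range(q))
    return round(total / q)

n, p, q = 20, 1, 3
print(binom_step_sum(n, p, q), multisection_formula(n, p, q))  # 349526 349526
```

The trigonometric side is a finite sum of q terms, illustrating how multisection turns an infinite (here, length-n) sum into a short closed form.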
Series multisection converts an infinite sum into a finite sum. It is used, for example, in a key step of a standard proof of Gauss's digamma theorem , which gives a closed-form solution to the digamma function evaluated at rational values p / q . | https://en.wikipedia.org/wiki/Series_multisection |
Protein structures ( PDB ): 1X4A , 2M7S , 2M8D , 2O3D , 3BEG , 4C0O
Entrez gene: 6426 (human); 110809 (mouse)
Ensembl : ENSG00000136450 (human); ENSMUSG00000018379 (mouse)
UniProt : Q07955 (human); Q6PDM2 (mouse)
RefSeq mRNA: NM_001078166 , NM_006924 (human); NM_001078167 , NM_173374 (mouse)
RefSeq protein: NP_001071634 , NP_008855 (human); NP_001071635 , NP_775550 (mouse)
Serine/arginine-rich splicing factor 1 ( SRSF1 ), also known as alternative splicing factor 1 ( ASF1 ), pre-mRNA-splicing factor SF2 ( SF2 ) or ASF1/SF2, is a protein that in humans is encoded by the SRSF1 gene . [ 5 ] ASF/SF2 is an essential sequence-specific splicing factor involved in pre-mRNA splicing . [ 6 ] [ 7 ] [ 8 ] SRSF1 is the gene that codes for ASF/SF2 [ 9 ] and is found on chromosome 17 . The resulting splicing factor is a protein of approximately 33 kDa . [ 10 ] ASF/SF2 is necessary for all splicing reactions to occur, and influences splice site selection in a concentration-dependent manner, resulting in alternative splicing . [ 7 ] In addition to being involved in the splicing process, ASF/SF2 also mediates post-splicing activities, such as mRNA nuclear export and translation . [ 11 ]
ASF/SF2 is an SR protein , and as such, contains two functional modules: an arginine - serine rich region (RS domain), where the bulk of ASF/SF2 regulation takes place, and two RNA recognition motifs (RRMs), through which ASF/SF2 interacts with RNA and other splicing factors. [ 12 ] [ 13 ] These modules have different functions within general splicing factor function. [ 13 ]
ASF/SF2 is an integral part of numerous components of the splicing process. ASF/SF2 is required for 5’ splice site cleavage and selection, and is capable of discriminating between cryptic and authentic splice sites. [ 10 ] Subsequent lariat formation during the first chemical step of pre-mRNA splicing also requires ASF/SF2. [ 10 ] ASF/SF2 promotes recruitment of the U1 snRNP to the 5’ splice site, and bridges the 5’ and 3’ splice sites to facilitate splicing reactions. [ 8 ] ASF/SF2 also associates with the U2 snRNP. [ 15 ] During the reaction, ASF/SF2 promotes the use of intron proximal sites and hinders the use of intron distal sites, affecting alternative splicing . [ 16 ] [ 17 ] Alternative splicing is affected by ASF/SF2 in a concentration-dependent manner; differing concentrations of ASF/SF2 is a mechanism for alternative splicing regulation, and will result in differing amounts of product isoforms . [ 6 ] ASF/SF2 accomplishes this regulation through direct or indirect binding to exonic splicing enhancer (ESE) sequences. [ 16 ]
ASF/SF2, in the presence of eIF4E , promotes the initiation of translation of ribosome -bound mRNA by suppressing the activity of 4E-BP and recruiting molecules for further regulation of translation. [ 11 ] ASF/SF2 interacts with the nuclear export protein TAP in a regulated manner, controlling the export of mature mRNA from the nucleus . [ 18 ] An increase in cellular ASF/SF2 will also increase the efficiency of nonsense-mediated mRNA decay (NMD), favoring NMD that occurs before mRNA release from the nucleus over NMD that occurs after mRNA export from the nucleus to the cytoplasm. [ 19 ] This shift in NMD caused by increased ASF/SF2 is accompanied by overall enhancement of the pioneer round of translation, through eIF4E-bound mRNA translation and subsequent translationally active ribosomes, increased association of pioneer translation initiation complexes with ASF/SF2, and increased levels of active TAP. [ 19 ]
ASF/SF2 has the ability to be phosphorylated at the serines in its RS domain by the SR-specific protein kinase SRPK1 . [ 13 ] SRPK1 and ASF/SF2 form an unusually stable complex, with an apparent K d of 50 nM. [ 12 ] [ 18 ] SRPK1 selectively phosphorylates up to twelve serines in the RS domain of ASF/SF2 through a directional and processive mechanism, moving from the C terminus to the N terminus . [ 13 ] This multi-phosphorylation directs ASF/SF2 to the nucleus, influencing a number of protein-protein interactions associated with splicing. [ 13 ] ASF/SF2's function in export of mature mRNA from the nucleus is dependent on its phosphorylation state; dephosphorylation of ASF/SF2 facilitates binding to TAP, [ 13 ] while phosphorylation directs ASF/SF2 to nuclear speckles . [ 18 ] Both phosphorylation and dephosphorylation of ASF/SF2 are important and necessary for proper splicing to occur, as sequential phosphorylation and dephosphorylation mark the transitions between stages in the splicing process. [ 20 ] In addition, hypophosphorylation and hyperphosphorylation of ASF/SF2 by Clk/Sty can lead to inhibition of splicing. [ 13 ]
ASF/SF2 is involved in genomic stability; it is thought that RNA Polymerase recruits ASF/SF2 to nascent RNA transcripts to impede formation of mutagenic DNA:RNA hybrid R-loop structures between the transcript and the template DNA. [ 8 ] In this way, ASF/SF2 is protecting cells from the potential deleterious effects of transcription itself. [ 8 ] ASF/SF2 is also implicated in cellular mechanisms to hinder exon skipping and to ensure splicing is occurring accurately and correctly. [ 10 ]
ASF/SF2 has been shown to have a critical function in heart development, [ 12 ] embryogenesis, tissue formation, cell motility, and cell viability in general. [ 21 ] [ 22 ]
SFRS1 is a proto-oncogene , and thus ASF/SF2 can act as an oncoprotein; it can alter the splicing patterns of crucial cell cycle regulatory genes and suppressor genes . [ 13 ] ASF/SF2 controls the splicing of various tumor suppressor genes, kinases , and kinase receptors , all of which have the potential to be alternatively spliced into oncogenic isoforms. [ 23 ] As such, ASF/SF2 is an important target for cancer therapy, as it is over-expressed in many tumors . [ 13 ]
Modifications and defects in the alternative splicing pathway are associated with a variety of human diseases. [ 24 ]
ASF/SF2 is involved in the replication of HIV-1 , as HIV-1 needs a delicate balance of spliced and unspliced forms of its viral DNA. [ 25 ] ASF/SF2 action in the replication of HIV-1 is a potential target for HIV therapy. [ 25 ] ASF/SF2 is also implicated in the production of T cell receptors in Systemic Lupus Erythematosus , altering specific chain expression in T cell receptors through alternative splicing. [ 26 ] [ 27 ]
ASF/SF2 has been shown to interact with: | https://en.wikipedia.org/wiki/Serine/arginine-rich_splicing_factor_1 |
Methylotrophs are a diverse group of microorganisms that can use reduced one-carbon compounds, such as methanol or methane , as the carbon source for their growth, as well as multi-carbon compounds that contain no carbon-carbon bonds, such as dimethyl ether and dimethylamine . This group of microorganisms also includes those capable of assimilating reduced one-carbon compounds by way of carbon dioxide using the ribulose bisphosphate pathway. [ 1 ] These organisms should not be confused with methanogens , which, on the contrary, produce methane as a by-product from various one-carbon compounds such as carbon dioxide.
Some methylotrophs can degrade the greenhouse gas methane , in which case they are called methanotrophs . The abundance, purity, and low price of methanol compared to commonly used sugars make methylotrophs well-suited organisms for the production of amino acids , vitamins, recombinant proteins, single-cell proteins , co-enzymes and cytochromes .
The key intermediate in methylotrophic metabolism is formaldehyde, which can be diverted to either assimilatory or dissimilatory pathways. [ 2 ] Methylotrophs produce formaldehyde through oxidation of methanol and/or methane. Methane oxidation requires the enzyme methane monooxygenase ( MMO ). [ 3 ] [ 4 ] Methylotrophs with this enzyme are given the name methanotrophs . The oxidation of methane (or methanol) can be assimilatory or dissimilatory in nature (see figure). If dissimilatory, the formaldehyde intermediate is oxidized completely to CO2 to produce reductant and energy. [ 5 ] [ 6 ] If assimilatory, the formaldehyde intermediate is used to synthesize a 3-carbon (C3) compound for the production of biomass. [ 2 ] [ 7 ] Many methylotrophs use multi-carbon compounds for anabolism, thus limiting their use of formaldehyde to dissimilatory processes; however, methanotrophs are generally limited to only C1 metabolism. [ 2 ] [ 5 ]
Methylotrophs use the electron transport chain to conserve energy produced from the oxidation of C1 compounds. An additional activation step is required in methanotrophic metabolism to allow degradation of chemically-stable methane. This oxidation to methanol is catalyzed by MMO, which incorporates one oxygen atom from O2 into methane and reduces the other oxygen atom to water, requiring two equivalents of reducing power. [ 4 ] [ 5 ] Methanol is then oxidized to formaldehyde through the action of methanol dehydrogenase ( MDH ) in bacteria, [ 12 ] or a non-specific alcohol oxidase in yeast. [ 13 ] Electrons from methanol oxidation are passed to a membrane-associated quinone of the electron transport chain to produce ATP . [ 14 ]
In dissimilatory processes, formaldehyde is completely oxidized to CO2 and excreted. Formaldehyde is oxidized to formate via the action of formaldehyde dehydrogenase ( FALDH ), which provides electrons directly to a membrane-associated quinone of the electron transport chain, usually cytochrome b or c. [ 2 ] [ 5 ] In the case of NAD+-associated dehydrogenases, NADH is produced. [ 7 ]
Finally, formate is oxidized to CO2 by cytoplasmic or membrane-bound formate dehydrogenase ( FDH ), producing NADH [ 15 ] and CO2 .
The main metabolic challenge for methylotrophs is the assimilation of single carbon units into biomass. Through de novo synthesis, methylotrophs must form carbon-carbon bonds between one-carbon (C1) molecules. This is an energy-intensive process, which facultative methylotrophs avoid by using a range of larger organic compounds. [ 16 ] However, obligate methylotrophs must assimilate C1 molecules. [ 2 ] [ 5 ] There are four distinct assimilation pathways, with the common theme of generating one C3 molecule. [ 2 ] Bacteria use three of these pathways [ 7 ] [ 11 ] while fungi use one. [ 17 ] All four pathways incorporate three C1 molecules into multi-carbon intermediates, then cleave one intermediate into a new C3 molecule. The remaining intermediates are rearranged to regenerate the original multi-carbon intermediates.
Each species of methylotrophic bacteria has a single dominant assimilation pathway. [ 5 ] The three characterized pathways for carbon assimilation are the ribulose monophosphate (RuMP) and serine pathways of formaldehyde assimilation, as well as the ribulose bisphosphate (RuBP) pathway of CO2 assimilation. [ 2 ] [ 7 ] [ 11 ] [ 18 ]
Unlike the other assimilatory pathways, bacteria using the RuBP pathway derive all of their organic carbon from CO2 assimilation. [ 5 ] [ 19 ] This pathway was first elucidated in photosynthetic autotrophs and is better known as the Calvin cycle. [ 19 ] [ 20 ] Shortly thereafter, methylotrophic bacteria that could grow on reduced C1 compounds were found to use this pathway. [ 21 ]
First, 3 molecules of ribulose 5-phosphate are phosphorylated to ribulose 1,5-bisphosphate ( RuBP ). The enzyme ribulose bisphosphate carboxylase ( RuBisCO ) carboxylates these RuBP molecules, which produces 6 molecules of 3-phosphoglycerate ( PGA ). The enzyme phosphoglycerate kinase phosphorylates PGA into 1,3-diphosphoglycerate (DPGA). Reduction of 6 DPGA by the enzyme glyceraldehyde phosphate dehydrogenase generates 6 molecules of the C3 compound glyceraldehyde-3-phosphate ( GAP ). One GAP molecule is diverted towards biomass while the other 5 molecules regenerate the 3 molecules of ribulose 5-phosphate. [ 7 ] [ 20 ]
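The carbon arithmetic of one turn of the RuBP cycle described above can be checked with a short script; the per-metabolite carbon counts are standard biochemistry, and the script only verifies that the stated stoichiometry balances.

```python
# Carbon bookkeeping for one turn of the RuBP (Calvin) cycle:
# 3 Ru5P (C5) + 3 CO2 -> 6 PGA (C3) -> 6 GAP (C3);
# 1 GAP to biomass, 5 GAP regenerate the 3 Ru5P.
C = {"Ru5P": 5, "RuBP": 5, "CO2": 1, "PGA": 3, "GAP": 3}

carbons_in  = 3 * C["RuBP"] + 3 * C["CO2"]  # 15 + 3 = 18 C enter RuBisCO
carbons_pga = 6 * C["PGA"]                  # 18 C leave as 6 PGA
to_biomass  = 1 * C["GAP"]                  # net gain per turn: 3 C
regenerated = 5 * C["GAP"]                  # 15 C rebuild the acceptors

assert carbons_in == carbons_pga            # carboxylation step balances
assert regenerated == 3 * C["Ru5P"]         # regeneration balances
assert carbons_in - regenerated == to_biomass
```

The final assertion captures the shared theme of all four pathways: three C1 units fixed per turn yield exactly one net C3 unit for biomass.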
A new pathway was suspected when RuBisCO was not found in the methanotroph Methylomonas methanica . [ 22 ] Through radio-labelling experiments, it was shown that M. methanica used the ribulose monophosphate (RuMP) pathway. This led researchers to propose that the RuMP cycle may have preceded the RuBP cycle. [ 5 ]
Like the RuBP cycle, this cycle begins with 3 molecules of ribulose-5-phosphate. However, instead of phosphorylating ribulose-5-phosphate, 3 molecules of formaldehyde form a C-C bond through an aldol condensation, producing 3 C6 molecules of 3-hexulose 6-phosphate (hexulose phosphate). One of these molecules of hexulose phosphate is converted into GAP and either pyruvate or dihydroxyacetone phosphate ( DHAP ). The pyruvate or DHAP is used towards biomass while the other 2 hexulose phosphate molecules and the molecule of GAP are used to regenerate the 3 molecules of ribulose-5-phosphate. [ 6 ] [ 22 ]
Unlike the other assimilatory pathways, the serine cycle uses carboxylic acids and amino acids as intermediates instead of carbohydrates. [ 5 ] [ 23 ] First, 2 molecules of formaldehyde are added to 2 molecules of the amino acid glycine . This produces two molecules of the amino acid serine , the key intermediate of this pathway. These serine molecules eventually produce 2 molecules of 2-phosphoglycerate , with one C3 molecule going towards biomass and the other being used to regenerate glycine. Notably, the regeneration of glycine also requires a molecule of CO2 ; the serine pathway therefore differs from the other three pathways in its requirement for both formaldehyde and CO2 . [ 22 ] [ 23 ]
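The serine cycle's distinctive carbon balance — two formaldehyde plus one CO2 yielding one net C3 unit — can likewise be verified with simple bookkeeping; the counts below follow the description above and are illustrative only.

```python
# Serine cycle carbon bookkeeping: 2 HCHO condense with 2 glycine (C2) to
# give 2 serine (C3), which yield 2 x 2-phosphoglycerate (C3). One C3 goes
# to biomass; the other, plus one CO2, regenerates the 2 glycine.
glycine = 2 * 2                    # starting pool: 2 glycine x 2 C = 4 C
serine  = glycine + 2 * 1          # + 2 formaldehyde (C1)          = 6 C
biomass = 3                        # one 2-phosphoglycerate (C3) diverted
regen   = (serine - biomass) + 1   # remaining C3 + 1 CO2           = 4 C

C1_in = 2 + 1                      # total C1 fixed: 2 HCHO + 1 CO2 = 3 C

assert regen == glycine            # glycine pool is restored
assert C1_in == biomass            # net: 3 C1 units -> one C3 unit
```

Despite using amino and carboxylic acids rather than sugars, the net accounting matches the other three pathways: three C1 molecules in, one C3 molecule out.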
Methylotrophic yeast metabolism differs from that of bacteria primarily in the enzymes used and the carbon assimilation pathway. Unlike bacteria, which use bacterial MDH, methylotrophic yeasts oxidize methanol in their peroxisomes with a non-specific alcohol oxidase. This produces formaldehyde as well as hydrogen peroxide. [ 24 ] [ 25 ] Compartmentalization of this reaction in peroxisomes likely sequesters the hydrogen peroxide produced. Catalase is produced in the peroxisomes to deal with this harmful by-product. [ 17 ] [ 24 ]
The dihydroxyacetone (DHA) pathway, also known as the xylulose monophosphate (XuMP) pathway, is found exclusively in yeast. [ 24 ] [ 26 ] This pathway assimilates 3 molecules of formaldehyde into 1 molecule of DHAP, using 3 molecules of xylulose 5-phosphate as the key intermediate.
DHA synthase acts as a transferase (transketolase) to transfer part of xylulose 5-phosphate to DHA. These 3 molecules of DHA are then phosphorylated to DHAP by triokinase . Like the other cycles, 3 C3 molecules are produced, with 1 molecule being directed for use as cell material. The other 2 molecules are used to regenerate xylulose 5-phosphate. [ 27 ]
As key players in the carbon cycle , methylotrophs help mitigate global warming primarily through the uptake of methane and other greenhouse gases. In aqueous environments, methanogenic archaea produce 40–50% of the world's methane. Symbiosis between methanogens and methanotrophic bacteria greatly decreases the amount of methane released into the atmosphere. [ 28 ]
This symbiosis is also important in the marine environment. Marine bacteria are very important to food webs and biogeochemical cycles , particularly in coastal surface waters but also in other key ecosystems such as hydrothermal vents . There is evidence of widespread and diverse groups of methylotrophs in the ocean that have potential to significantly impact marine and estuarine ecosystems. [ 29 ] One-carbon compounds used as a carbon and energy source by methylotrophs are found throughout the ocean. These compounds include methane , methanol , methylated amines , methyl halides, and methylated sulfur compounds, such as dimethylsulfide (DMS) and dimethylsulfoxide ( DMSO ). [ 30 ] Some of these compounds are produced by phytoplankton and some come from the atmosphere. Studies incorporating a wider range of one-carbon substrates have found increasing diversity of methylotrophs, suggesting that the diversity of this bacterial group has not yet fully been explored. [ 30 ]
Because these compounds are volatile and impact the climate and atmosphere, research on the interaction of these bacteria with one-carbon compounds can also improve understanding of the air-sea fluxes of these compounds, which affect climate predictions. [ 31 ] [ 29 ] For example, it is uncertain whether the ocean acts as a net source or sink of atmospheric methanol, but a diverse set of methylotrophs use methanol as their main energy source. In some regions, methylotrophs have been found to be a net sink of methanol, [ 32 ] while in others a product of methylotroph activity, methylamine , has been found to be emitted from the ocean and form aerosols. [ 29 ] The net direction of these fluxes depends on the utilization by methylotrophs.
Studies have found that methylotrophic capacity varies with the productivity of a system, so the impacts of methylotrophy are likely seasonal. Because some of the one-carbon compounds used by methylotrophs, such as methanol and TMAO , are produced by phytoplankton, their availability will vary temporally and seasonally depending on phytoplankton blooms , weather events, and other ecosystem inputs. [ 33 ] This means that methylotrophic metabolism is expected to follow similar dynamics, which will then impact biogeochemical cycles and carbon fluxes. [ 29 ]
Impacts of methylotrophs were also found in deep-sea hydrothermal vents . Methylotrophs, along with sulfur oxidizers and iron oxidizers, expressed key proteins associated with carbon fixation . [ 34 ] These types of studies will contribute to further understanding of deep sea carbon cycling and the connectivity between deep ocean and surface carbon cycling. The expansion of omics technologies has accelerated research on the diversity of methylotrophs, their abundance and activity in a variety of environmental niches , and their interspecies interactions. [ 35 ] Further research must be done on these bacteria and the overall effect of bacterial drawdown and transformation of one-carbon compounds in the ocean. Current evidence points to a potentially substantial role for methylotrophs in the ocean in the cycling of carbon but also potentially in the global nitrogen, sulfur and phosphorus cycles as well as the air-sea flux of carbon compounds, which could have global climate impacts. [ 31 ]
The use of methylotrophs in the agricultural sector is another way in which they can potentially impact the environment. Traditional chemical fertilizers supply nutrients not readily available from soil but can have some negative environmental impacts and are costly to produce. [ 36 ] Methylotrophs have high potential as alternative biofertilizers and bioinoculants due to their ability to form mutualistic relationships with several plant species. [ 37 ] Methylotrophs provide plants with nutrients such as soluble phosphorus and fixed nitrogen and also play a role in the uptake of said nutrients. [ 36 ] [ 37 ] Additionally, they can help plants respond to environmental stressors through the production of phytohormones . [ 36 ] Methylotrophic growth also inhibits the growth of harmful plant pathogens and induces systemic resistance. [ 37 ] Methylotrophic biofertilizers used either alone or together with chemical fertilizers have been shown to increase both crop yield and quality without loss of nutrients. [ 36 ]
The serine octamer cluster in physical chemistry is an unusually stable cluster consisting of eight serine molecules (Ser), implicated in the origin of homochirality . [ 1 ] [ 2 ] This cluster was first discovered in mass spectrometry experiments. Electrospray ionization of an aerosol of serine in methanol results in a mass spectrum with a prominent ion peak at m/z 841, corresponding to the Ser 8 +H + cation . Smaller and larger clusters are virtually absent from the spectrum, and the number 8 is therefore called a magic number . The same octamer ions are also produced by rapid evaporation of a serine solution on a hot (200–250 °C) metal surface or by sublimation of solid serine. After production, detection is again by mass spectrometric means. For the discussion of homochirality, these laboratory production methods are designed to mimic prebiotic conditions.
The cluster is not only unusually stable but also unusual in its strong homochiral preference. A racemic serine solution produces only a minimal amount of cluster, whereas solutions of either pure enantiomer form maximal amounts of the corresponding homochiral D-Ser 8 or L-Ser 8 . In another experiment, cluster formation from a racemic mixture containing deuterium-enriched L-serine results in a product distribution with hardly any 50/50 D/L clusters, but a preference for either D- or L-enantioenriched clusters.
A model for chiral amplification has been proposed whereby enantioenriched clusters are formed from a non-racemic mixture already enriched in L-serine as a result of a mirror-symmetry breaking process. Cluster formation is followed by isolation, and on subsequent dissociation of the cluster a serine solution forms with a higher concentration of L-serine than in the original mixture. A cycle can be maintained in which each turn results in an incremental enrichment in L-serine. Many such cycles eventually result in enantiopure L-serine. This model has been experimentally verified.
Chiral transmission is assumed to take place through so-called substitution reactions of serine clusters. In these reactions, a serine monomer in a cluster can be replaced by another small biologically relevant molecule. For instance, Ser 8 reacts with glucose (Glc) to give the Ser 6 Glc 3 + Na + cluster. Moreover, the cluster of synthetic L-glucose with Ser 8 is less abundant than that with the biological D-glucose.
In drug development , a serious adverse event ( SAE ) is defined as any untoward medical occurrence during a human drug trial that at any dose results in death, is life-threatening, requires inpatient hospitalization or prolongation of existing hospitalization, results in persistent or significant disability or incapacity, or is a congenital anomaly or birth defect.
The term "life-threatening" in the definition of "serious" refers to an event in which the patient was at risk of death at the time of the event; it does not refer to an event which hypothetically might have caused death if it were more severe. [ 2 ] Adverse events are more broadly defined by international regulation as “Any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product and which does not necessarily have to have a causal relationship with this treatment.” [ 2 ]
Investigators in human clinical trials are obligated to report these events in clinical study reports. [ 3 ] Research suggests that these events are often inadequately reported in publicly available reports. [ 4 ] Because of the lack of these data and uncertainty about methods for synthesising them, individuals conducting systematic reviews and meta-analyses of therapeutic interventions often unknowingly overemphasise health benefit. [ 5 ] To balance the overemphasis on benefit, scholars have called for more complete reporting of harm from clinical trials. [ 6 ]
Serious adverse reactions are serious adverse events judged to be related to drug therapy. A SUSAR (suspected unexpected serious adverse reaction) should be reported to a drug regulatory authority under an investigational license by using the CIOMS form (or in some countries an equivalent form). "Unexpected" means that, for an authorised (approved) medicinal product, the event is not described in the product's labeling, or, in the case of an investigational (not yet approved) product, that the event is not listed in the Investigator's Brochure . That is, the AE is unexpected for the drug or device. [ citation needed ]
An adverse effect is an adverse event which is believed to be caused by a health intervention. [ citation needed ]
A serodiscordant relationship, also known as mixed-status , is one where one partner is infected by HIV and the other is not. [ 1 ] This contrasts with seroconcordant relationships, in which both partners are of the same HIV status. Without effective prevention measures, serodiscordant relationships can significantly contribute to the spread of HIV/AIDS , with the risk varying based on the type and frequency of sexual activity and the viral load of the HIV-positive partner. [ 2 ]
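The way per-act transmission risk compounds over repeated exposures can be illustrated with the standard independent-trials formula. The per-act probability used below is a made-up placeholder, not an epidemiological estimate, since actual risk depends on the type of activity, viral load, and prevention measures, as noted above.

```python
# Cumulative transmission risk over repeated exposures, assuming a fixed,
# independent per-act probability (a simplifying assumption).
def cumulative_risk(p_per_act: float, n_acts: int) -> float:
    """P(at least one transmission) over n independent acts."""
    return 1.0 - (1.0 - p_per_act) ** n_acts

# A hypothetical p = 0.001 per act, over 100 acts:
risk = cumulative_risk(0.001, 100)  # ~0.095, far above the single-act risk
```

The point of the sketch is qualitative: even a small per-act probability accumulates substantially over an ongoing relationship, which is why sustained prevention measures matter.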
Globally, an estimated 34 million people are living with HIV, with 68% residing in sub-Saharan African nations such as Lesotho [ 3 ] and 50% of cases affecting women. In the United States, over 140,000 HIV-serodiscordant heterosexual couples are estimated, with 52% of HIV-positive women in a national study reporting serodiscordant partnerships. Similarly, in sub-Saharan Africa, 47% of HIV-positive women are in stable serodiscordant relationships. [ 4 ] The World Health Organization reports that up to 50% of individuals living with HIV in ongoing relationships worldwide have partners who are HIV-negative. [ 5 ]
Serodiscordant couples face numerous issues not faced by seroconcordant couples, including decisions as to what level of sexual activity is comfortable for them, knowing that practicing safer sex reduces but does not eliminate the risk of transmission to the HIV-negative partner. There are also potential psychological issues arising out of taking care of a sick partner, and survivor guilt . [ 6 ] Psychological impacts include anger, fear, grief, and loss of control, often exacerbated by the secrecy surrounding a partner's status. Financial strains may also be accentuated as one partner becomes ill and potentially less able or unable to work. [ 7 ]
Research involving serodiscordant couples has offered insights into how the virus is passed and how individuals who are HIV positive may be able to reduce the risk of passing the virus to their partner. [ 8 ]
Experts estimate that there are thousands of serodiscordant couples in the US who wish to have children. [ 9 ] The Special Program of Assisted Reproduction was developed in 1996 to help serodiscordant couples conceive safely; however, it is designed solely to help couples in which the male partner is infected.
The WHO 's consolidated guideline on sexual and reproductive health and rights for women living with HIV provides strategies to minimize HIV transmission risks in serodiscordant relationships when planning pregnancy. Key recommendations include antiretroviral therapy (ART) to suppress the viral load in the HIV-positive partner, pre-exposure prophylaxis (PrEP) for the HIV-negative partner, timing unprotected intercourse during peak fertility, screening and treating sexually transmitted infections in both partners, voluntary medical male circumcision for HIV-negative men, and assisted reproductive techniques such as semen insemination . [ 10 ]
Serology is the scientific study of serum and other body fluids . In practice, the term usually refers to the diagnostic identification of antibodies in the serum. [ 1 ] Such antibodies are typically formed in response to an infection (against a given microorganism ), [ 2 ] against other foreign proteins (in response, for example, to a mismatched blood transfusion ), or to one's own proteins (in instances of autoimmune disease ). In each case, the procedure is simple. [ citation needed ]
Serological tests are diagnostic methods that are used to identify antibodies and antigens in a patient's sample. Serological tests may be performed to diagnose infections and autoimmune illnesses , to check if a person has immunity to certain diseases, and in many other situations, such as determining an individual's blood type . [ 1 ] Serological tests may also be used in forensic serology to investigate crime scene evidence. [ 3 ] Several methods can be used to detect antibodies and antigens, including ELISA , [ 4 ] agglutination , precipitation , complement-fixation , and fluorescent antibodies and more recently chemiluminescence . [ 5 ]
In microbiology , serologic tests are used to determine if a person has antibodies against a specific pathogen , or to detect antigens associated with a pathogen in a person's sample. [ 6 ] Serologic tests are especially useful for organisms that are difficult to culture by routine laboratory methods, like Treponema pallidum (the causative agent of syphilis ), or viruses . [ 7 ]
The presence of antibodies against a pathogen in a person's blood indicates that they have been exposed to that pathogen. Most serologic tests measure one of two types of antibodies: immunoglobulin M (IgM) and immunoglobulin G (IgG). IgM is produced in high quantities shortly after a person is exposed to the pathogen, and production declines quickly thereafter. IgG is also produced on the first exposure, but not as quickly as IgM. On subsequent exposures, the antibodies produced are primarily IgG, and they remain in circulation for a prolonged period of time. [ 6 ]
This affects the interpretation of serology results: a positive result for IgM suggests that a person is currently or recently infected, while a positive result for IgG and negative result for IgM suggests that the person may have been infected or immunized in the past. Antibody testing for infectious diseases is often done in two phases: during the initial illness (acute phase) and after recovery (convalescent phase). The amount of antibody in each specimen ( antibody titer ) is compared, and a significantly higher amount of IgG in the convalescent specimen suggests infection as opposed to previous exposure. [ 8 ] False negative results for antibody testing can occur in people who are immunosuppressed , as they produce lower amounts of antibodies, and in people who receive antimicrobial drugs early in the course of the infection. [ 7 ]
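The acute/convalescent titer comparison described above can be sketched as a small check. Titers are written as reciprocal dilutions (1:8 → 8), and the fourfold-rise cutoff is a common laboratory convention assumed here rather than a figure stated in this article.

```python
# Paired-specimen comparison: a >= fold_threshold rise between the acute
# and convalescent titers is taken as evidence of recent infection.
def significant_rise(acute_titer: int, convalescent_titer: int,
                     fold_threshold: int = 4) -> bool:
    """True if the convalescent titer shows at least a fold_threshold rise."""
    return convalescent_titer >= fold_threshold * acute_titer
```

For example, a rise from 1:8 to 1:64 (8-fold) would suggest recent infection, whereas 1:32 to 1:64 (2-fold) would be more consistent with past exposure or immunization.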
Blood typing is typically performed using serologic methods. The antigens on a person's red blood cells, which determine their blood type , are identified using reagents that contain antibodies, called antisera . When the antibodies bind to red blood cells that express the corresponding antigen, they cause red blood cells to clump together (agglutinate), which can be identified visually. The person's blood group antibodies can also be identified by adding plasma to cells that express the corresponding antigen and observing the agglutination reactions. [ 9 ] [ 6 ]
Other serologic methods used in transfusion medicine include crossmatching and the direct and indirect antiglobulin tests . Crossmatching is performed before a blood transfusion to ensure that the donor blood is compatible. It involves adding the recipient's plasma to the donor blood cells and observing for agglutination reactions. [ 9 ] The direct antiglobulin test is performed to detect if antibodies are bound to red blood cells inside the person's body, which is abnormal and can occur in conditions like autoimmune hemolytic anemia , hemolytic disease of the newborn and transfusion reactions . [ 10 ] The indirect antiglobulin test is used to screen for antibodies that could cause transfusion reactions and identify certain blood group antigens. [ 11 ]
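The forward-typing logic described above — agglutination of the patient's red cells with anti-A and anti-B antisera — reduces to a small lookup. This sketch uses standard ABO logic and omits Rh typing and reverse (serum) typing for brevity.

```python
# Forward ABO typing: which antisera agglutinate the red cells tells us
# which antigens those cells carry.
def abo_forward_type(agglutinates_anti_a: bool, agglutinates_anti_b: bool) -> str:
    return {
        (True,  True):  "AB",   # both A and B antigens present
        (True,  False): "A",
        (False, True):  "B",
        (False, False): "O",    # neither antigen present
    }[(agglutinates_anti_a, agglutinates_anti_b)]
```

In practice the forward result is confirmed against reverse typing: a group A patient's plasma should agglutinate B cells but not A cells, and a discrepancy between the two readouts is investigated before reporting.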
Serologic tests can help to diagnose autoimmune disorders by identifying abnormal antibodies directed against a person's own tissues ( autoantibodies ). [ 12 ] Autoantibody profiles vary from person to person. [ citation needed ]
A 2016 research paper by Metcalf et al., amongst whom were Neil Ferguson and Jeremy Farrar , stated that serological surveys are often used by epidemiologists to determine the prevalence of a disease in a population. Such surveys are sometimes performed by random, anonymous sampling from samples taken for other medical tests or to assess the prevalence of antibodies of a specific organism or protective titre of antibodies in a population. Serological surveys are usually used to quantify the proportion of people or animals in a population positive for a specific antibody or the titre or concentrations of an antibody. These surveys are potentially the most direct and informative technique available to infer the dynamics of a population's susceptibility and level of immunity. The authors proposed a World Serology Bank (or serum bank) and foresaw "associated major methodological developments in serological testing, study design , and quantitative analysis , which could drive a step change in our understanding and optimum control of infectious diseases ." [ 13 ]
In a helpful reply entitled "Opportunities and challenges of a World Serum Bank", de Lusignan and Correa observed [ 14 ] that the
principal ethical and logistical challenges that need to be overcome are the methods of obtaining specimens, how informed consent is acquired in busy practices, and the filling in of gaps in patient sampling .
In another helpful reply on the World Serum Bank, the Australian researcher Karen Coates declared that: [ 15 ]
Improved serological surveillance would allow governments , aid agencies , and policy writers to direct public health resources to where they are needed most. A better understanding of infection dynamics with respect to the changing patterns of global weather should inform policy measures including where to concentrate vaccination efforts and insect control measures.
In April 2020, Justin Trudeau formed the COVID-19 Immunity Task Force , whose mandate is to carry out serological surveys in the midst of the COVID-19 pandemic . [ 16 ] [ 17 ]
Serotonin ( /ˌsɛrəˈtoʊnɪn, ˌsɪərə-/ ) [ 6 ] [ 7 ] [ 8 ] , also known as 5-hydroxytryptamine ( 5-HT ), is a monoamine neurotransmitter with a wide range of functions in both the central nervous system (CNS) and peripheral tissues. It is involved in mood, cognition, reward, learning, memory, and physiological processes such as vomiting and vasoconstriction. [ 9 ] In the CNS, serotonin regulates mood, appetite, and sleep. [ 10 ] [ unreliable medical source ] [ 11 ] [ unreliable medical source ]
Most of the body's serotonin—about 90%—is synthesized in the gastrointestinal tract by enterochromaffin cells , where it regulates intestinal movements. [ 12 ] [ 13 ] [ 14 ] It is also produced in smaller amounts in the brainstem's raphe nuclei , the skin's Merkel cells , pulmonary neuroendocrine cells , and taste receptor cells of the tongue. Once secreted, serotonin is taken up by platelets in the blood, which release it during clotting to promote vasoconstriction and platelet aggregation. [ 15 ] Around 8% of the body's serotonin is stored in platelets, and 1–2% is found in the CNS. [ 16 ]
Serotonin acts as both a vasoconstrictor and vasodilator depending on concentration and context, influencing hemostasis and blood pressure regulation. [ 17 ] It plays a role in stimulating myenteric neurons and enhancing gastrointestinal motility through uptake and release cycles in platelets and surrounding tissue. [ 18 ] Biochemically, serotonin is an indoleamine synthesized from tryptophan and metabolized primarily in the liver to 5-hydroxyindoleacetic acid (5-HIAA).
Serotonin is targeted by several classes of antidepressants , including selective serotonin reuptake inhibitors (SSRIs) and serotonin–norepinephrine reuptake inhibitors (SNRIs), which block reabsorption in the synapse to elevate its levels. It is found in nearly all bilateral animals , including insects, spiders and worms, [ 19 ] and also occurs in fungi and plants . [ 20 ] In plants and insect venom, it serves a defensive function by inducing pain. [ 21 ] Serotonin released by pathogenic amoebae may cause diarrhea in the human gut, [ 22 ] while its presence in seeds and fruits is thought to stimulate digestion and facilitate seed dispersal. [ 23 ] [ failed verification ]
Biochemically, the indoleamine molecule derives from the amino acid tryptophan , via the (rate-limiting) hydroxylation of the 5 position on the ring (forming the intermediate 5-hydroxytryptophan ), and then decarboxylation to produce serotonin. [ 24 ] Its preferred conformations are determined by the orientation of the ethylamine side chain, giving six distinct conformers. [ 25 ]
Serotonin crystallizes in the chiral space group P2 1 2 1 2 1 , forming different hydrogen-bonding interactions between serotonin molecules via N-H...O and O-H...N intermolecular bonds. [ 26 ] Serotonin also forms several salts, including the pharmaceutical formulation serotonin adipate. [ 27 ]
Serotonin is involved in numerous physiological processes, [ 28 ] including sleep , [ 29 ] thermoregulation , learning and memory , pain , (social) behavior, [ 30 ] sexual activity , feeding, motor activity, neural development, [ 31 ] and biological rhythms . [ 32 ] In less complex animals, such as some invertebrates , serotonin regulates feeding and other processes. [ 33 ] In plants serotonin synthesis seems to be associated with stress signals. [ 20 ] [ 34 ] Despite its longstanding prominence in pharmaceutical advertising, the claim that low serotonin levels cause depression is not supported by scientific evidence. [ 35 ] [ 36 ] [ 37 ]
Serotonin primarily acts through its receptors and its effects depend on which cells and tissues express these receptors. [ 32 ]
Metabolism involves first oxidation by monoamine oxidase to 5-hydroxyindoleacetaldehyde (5-HIAL). [ 38 ] [ 39 ] The rate-limiting step is hydride transfer from serotonin to the flavin cofactor. [ 40 ] There follows oxidation by aldehyde dehydrogenase (ALDH) to 5-hydroxyindoleacetic acid ( 5-HIAA ), an indoleacetic acid derivative. The latter is then excreted by the kidneys.
The serotonin receptors are located on the cell membrane of nerve cells and other cell types in animals, and mediate the effects of serotonin as the endogenous ligand and of a broad range of pharmaceutical and psychedelic drugs . There are currently 14 known serotonin receptors, including the serotonin 5-HT 1 ( 1A , 1B , 1D , 1E , 1F ), 5-HT 2 ( 2A , 2B , 2C ), 5-HT 3 , 5-HT 4 , 5-HT 5 ( 5A , 5B ), 5-HT 6 , and 5-HT 7 receptors . Except for the serotonin 5-HT 3 receptor , a ligand-gated ion channel , all other 5-HT receptors are G-protein-coupled receptors (also called seven-transmembrane, or heptahelical receptors) that activate an intracellular second messenger cascade. [ 41 ] The 5-HT 5B receptor is present in rodents but not in humans.
In addition to the serotonin receptors, serotonin is an agonist of the trace amine-associated receptor 1 (TAAR1) in some species. [ 42 ] [ 43 ] It is a weak TAAR1 partial agonist in rats, but is inactive at the TAAR1 in mice and humans. [ 42 ] [ 43 ]
The cryo-EM structures of the serotonin 5-HT 2A receptor with serotonin, as well as with various serotonergic psychedelics , have been solved and published by Bryan L. Roth and colleagues. [ 44 ] [ 45 ]
Serotonergic action is terminated primarily via uptake of 5-HT from the synapse. This is accomplished through the specific monoamine transporter for 5-HT, SERT , on the presynaptic neuron. Various agents can inhibit 5-HT reuptake, including cocaine , dextromethorphan (an antitussive ), tricyclic antidepressants and selective serotonin reuptake inhibitors (SSRIs). A 2006 study found that a significant portion of 5-HT's synaptic clearance is due to the selective activity of the plasma membrane monoamine transporter (PMAT) which actively transports the molecule across the membrane and back into the presynaptic cell. [ 46 ]
In contrast to the high affinity of SERT, the PMAT has been identified as a low-affinity transporter, with an apparent K m of 114 micromoles/l for serotonin, which is approximately 230 times higher than that of SERT. However, the PMAT, despite its relatively low serotonergic affinity, has a considerably higher transport "capacity" than SERT, "resulting in roughly comparable uptake efficiencies to SERT ... in heterologous expression systems." [ 46 ] The study also suggests that the administration of SSRIs such as fluoxetine and sertraline may be associated with an inhibitory effect on PMAT activity when used at higher than normal dosages ( IC 50 test values used in trials were 3–4 fold higher than typical prescriptive dosage).
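The SERT/PMAT comparison can be made concrete with Michaelis–Menten kinetics. The PMAT Km (114 µM) and the ~230-fold affinity ratio are taken from the study cited above, while the Vmax values below are invented placeholders chosen only to show how a high-capacity, low-affinity transporter can match a high-affinity one at low substrate concentration (uptake efficiency ≈ Vmax/Km).

```python
# Michaelis-Menten comparison of SERT and PMAT uptake at low 5-HT.
def mm_rate(vmax: float, km: float, s: float) -> float:
    """Michaelis-Menten uptake rate at substrate concentration s."""
    return vmax * s / (km + s)

KM_PMAT = 114.0           # uM, from the cited study
KM_SERT = 114.0 / 230.0   # ~0.5 uM, implied by the ~230x affinity ratio
VMAX_SERT = 1.0           # arbitrary units (placeholder)
VMAX_PMAT = 230.0         # placeholder: high capacity offsets low affinity

s = 0.01                  # a low 5-HT concentration (uM), placeholder
ratio = mm_rate(VMAX_PMAT, KM_PMAT, s) / mm_rate(VMAX_SERT, KM_SERT, s)
# ratio is close to 1: comparable clearance despite the affinity gap
```

When s is far below both Km values, each rate reduces to (Vmax/Km)·s, so equal Vmax/Km ratios give "roughly comparable uptake efficiencies," as the quoted study puts it.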
Serotonin can also signal through a nonreceptor mechanism called serotonylation, in which serotonin modifies proteins. [ 47 ] This process underlies serotonin's effects upon platelet-forming cells ( thrombocytes ) in which it links to the modification of signaling enzymes called GTPases that then trigger the release of vesicle contents by exocytosis . [ 48 ] A similar process underlies the pancreatic release of insulin. [ 47 ]
The effects of serotonin upon vascular smooth muscle tone – the biological function after which serotonin was originally named – depend upon the serotonylation of proteins involved in the contractile apparatus of muscle cells. [ 49 ]
The neurons of the raphe nuclei are the principal source of 5-HT release in the brain. [ 57 ] There are nine raphe nuclei, designated B1–B9, which contain the majority of serotonin-containing neurons (some scientists chose to group the nuclei raphes lineares into one nucleus), all of which are located along the midline of the brainstem , and centered on the reticular formation . [ 58 ] [ 59 ] Axons from the neurons of the raphe nuclei form a neurotransmitter system reaching almost every part of the central nervous system. Axons of neurons in the lower raphe nuclei terminate in the cerebellum and spinal cord , while the axons of the higher nuclei spread out in the entire brain.
It is the dorsal part of the raphe nucleus that contains neurons projecting to the central nervous system. Serotonin-releasing neurons in this area receive input from a large number of areas, notably from prefrontal cortex , lateral habenula , preoptic area , substantia nigra and amygdala . [ 60 ] These neurons are thought to communicate the expectation of rewards in the near future, a quantity called state value in reinforcement learning . [ 61 ]
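"State value" here is the reinforcement-learning quantity V(s), the expected discounted future reward from state s. A minimal TD(0) sketch (a standard algorithm, not taken from this article) shows how such a value estimate is learned from repeated experience:

```python
# TD(0): nudge the value estimate of state s toward the observed reward
# plus the discounted value of the successor state. This is standard RL
# bookkeeping, not a model of raphe physiology.
def td0_update(V: dict, s, reward: float, s_next,
               alpha: float = 0.1, gamma: float = 0.9) -> None:
    td_error = reward + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * td_error

V = {}
# A cue state reliably followed by reward 1.0 and a terminal state:
for _ in range(1000):
    td0_update(V, "cue", 1.0, "terminal")
# V["cue"] converges toward 1.0, the expected near-term reward after the cue
```

In this framing, the hypothesis cited above is that serotonergic firing tracks a quantity like V(s), signalling the expectation of rewards in the near future.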
The serotonin nuclei may also be divided into two main groups, the rostral and caudal containing three and four nuclei respectively. The rostral group consists of the caudal linear nuclei (B8), the dorsal raphe nuclei (B6 and B7) and the median raphe nuclei (B5, B8 and B9), that project into multiple cortical and subcortical structures. The caudal group consists of the nucleus raphe magnus (B3), raphe obscurus nucleus (B2), raphe pallidus nucleus (B1), and lateral medullary reticular formation, that project into the brainstem. [ 62 ]
The serotonergic pathway is involved in sensorimotor function, with pathways from the dorsal and median raphe nuclei projecting into cortical, subcortical, and spinal areas involved in motor activity. Pharmacological manipulation suggests that serotonergic activity increases with motor activity, while firing rates of serotonergic neurons increase with intense visual stimuli. Animal models suggest that kainate signaling negatively regulates serotonin actions in the retina, with possible implications for the control of the visual system. [ 63 ] The descending projections form a pathway that inhibits pain, called the "descending inhibitory pathway", which may be relevant to disorders such as fibromyalgia, migraine, and other pain conditions, and to the efficacy of antidepressants in treating them. [ 64 ]
Serotonergic projections from the caudal nuclei are involved in regulating mood and emotion, and hypo- [ 65 ] or hyper-serotonergic [ 66 ] states may be involved in depression and sickness behavior.
Serotonin is released into the synapse, or space between neurons, and diffuses over a relatively wide gap (>20 nm) to activate 5-HT receptors located on the dendrites , cell bodies, and presynaptic terminals of adjacent neurons.
When humans smell food, dopamine is released to increase the appetite . But, unlike in worms, serotonin does not increase anticipatory behaviour in humans; instead, the serotonin released while eating activates 5-HT 2C receptors on dopamine-producing cells. This halts their dopamine release, and thereby serotonin decreases appetite. Drugs that block 5-HT 2C receptors make the body unable to recognize when it is no longer hungry or otherwise in need of nutrients, and are associated with weight gain, [ 67 ] especially in people with a low number of receptors. [ 68 ] The expression of 5-HT 2C receptors in the hippocampus follows a diurnal rhythm , [ 69 ] as does serotonin release in the ventromedial nucleus , which peaks in the morning, when the motivation to eat is strongest. [ 70 ]
In macaques , alpha males have twice the level of serotonin in the brain as subordinate males and females (measured by the concentration of 5-HIAA in the cerebrospinal fluid (CSF)). Dominance status and CSF serotonin levels appear to be positively correlated. When dominant males were removed from such groups, subordinate males began competing for dominance. Once new dominance hierarchies were established, serotonin levels of the new dominant individuals also increased to double those in subordinate males and females. The reason why serotonin levels are only high in dominant males, but not dominant females, has not yet been established. [ 71 ]
In humans, levels of 5-HT 1A receptor inhibition in the brain show a negative correlation with aggression, [ 72 ] and a mutation in the gene that codes for the 5-HT 2A receptor may double the risk of suicide for those with that genotype. [ 73 ] Serotonin in the brain is not usually degraded after use, but is collected by serotonergic neurons via serotonin transporters on their cell surfaces. Studies have revealed that nearly 10% of the total variance in anxiety-related personality depends on genetic variation governing where, when, and how many serotonin transporters the neurons deploy. [ 74 ]
Serotonin regulates gastrointestinal (GI) function. The gut epithelium contains enterochromaffin cells , which release serotonin in response to food in the lumen . This makes the gut contract around the food. Platelets in the veins draining the gut collect excess serotonin. Serotonin abnormalities are often seen in gastrointestinal disorders such as constipation and irritable bowel syndrome. [ 75 ]
If irritants are present in the food, the enterochromaffin cells release more serotonin to make the gut move faster, i.e., to cause diarrhea, so the gut is emptied of the noxious substance. If serotonin is released into the blood faster than the platelets can absorb it, the level of free serotonin in the blood increases. This activates 5-HT 3 receptors in the chemoreceptor trigger zone that stimulate vomiting . [ 76 ] Thus, drugs and toxins that stimulate serotonin release from enterochromaffin cells in the gut wall can induce emesis. The enterochromaffin cells not only react to bad food but are also very sensitive to irradiation and cancer chemotherapy . Drugs that block 5-HT 3 receptors are very effective in controlling the nausea and vomiting produced by cancer treatment, and are considered the gold standard for this purpose. [ 77 ]
The lung , [ 78 ] including that of reptiles, [ 79 ] contains specialized epithelial cells that occur as solitary cells or as clusters called neuroepithelial bodies, also known as bronchial Kulchitsky cells or K cells . [ 80 ] These are enterochromaffin cells that, like those in the gut, release serotonin. [ 80 ] Their function is probably vasoconstriction during hypoxia . [ 78 ]
Serotonin is also produced by Merkel cells which are part of the somatosensory system. [ 81 ]
In mice and humans, alterations in serotonin levels and signalling have been shown to regulate bone mass. [ 82 ] [ 83 ] [ 84 ] [ 85 ] Mice that lack brain serotonin have osteopenia , while mice that lack gut serotonin have high bone density. In humans, increased blood serotonin levels have been shown to be negatively associated with bone density. Serotonin can also be synthesized, albeit at very low levels, in bone cells. It mediates its actions on bone cells through three different receptors: via 5-HT 1B receptors it negatively regulates bone mass, while via 5-HT 2B receptors and 5-HT 2C receptors it does so positively. There is a very delicate balance between the physiological role of gut serotonin and its pathology. An increase in the extracellular content of serotonin results in a complex relay of signals in the osteoblasts culminating in FoxO1/ Creb and ATF4 dependent transcriptional events. [ 86 ] Following the 2008 finding that gut serotonin regulates bone mass, mechanistic investigation into what regulates serotonin synthesis in the gut in the regulation of bone mass began. Piezo1 has been shown to act as a sensor of single-stranded RNA (ssRNA) in the gut, governing 5-HT production and relaying this information to the bone through serotonin synthesis. Intestinal epithelium-specific deletion of mouse Piezo1 profoundly disturbed gut peristalsis, impeded experimental colitis, and suppressed serum 5-HT levels. Because of the resulting systemic 5-HT deficiency, conditional knockout of Piezo1 increased bone formation. Notably, fecal ssRNA was identified as a natural Piezo1 ligand, and ssRNA-stimulated 5-HT synthesis from the gut was evoked in a MyD88/TRIF-independent manner. Colonic infusion of RNase A suppressed gut motility and increased bone mass.
These findings suggest gut ssRNA as a master determinant of systemic 5-HT levels, pointing to the ssRNA-Piezo1 axis as a potential prophylactic target for the treatment of bone and gut disorders. Studies in 2008, 2010, and 2019 have opened up the possibility of using serotonin research to treat bone mass disorders. [ 87 ] [ 88 ]
Since serotonin signals resource availability, it is not surprising that it affects organ development. Many human and animal studies have shown that nutrition in early life can influence, in adulthood, such things as body fatness, blood lipids, blood pressure, atherosclerosis , behavior, learning, and longevity. [ 89 ] [ 90 ] [ 91 ] Rodent experiments show that neonatal exposure to SSRIs makes persistent changes in the serotonergic transmission of the brain, resulting in behavioral changes, [ 92 ] [ 93 ] which are reversed by treatment with antidepressants. [ 94 ] By treating normal mice and knockout mice lacking the serotonin transporter with fluoxetine, scientists showed that normal emotional reactions in adulthood, like a short latency to escape foot shocks and an inclination to explore new environments, are dependent on active serotonin transporters during the neonatal period. [ 95 ] [ 96 ]
Human serotonin can also act directly as a growth factor. Liver damage increases cellular expression of 5-HT 2A and 5-HT 2B receptors , mediating compensatory regrowth of the liver (see Liver § Regeneration and transplantation ). [ 97 ] Serotonin present in the blood then stimulates cellular growth to repair liver damage. [ 98 ]
5-HT 2B receptors also activate osteocytes , which build up bone. [ 99 ] However, serotonin also inhibits osteoblasts through 5-HT 1B receptors. [ 100 ]
Serotonin, in addition, evokes endothelial nitric oxide synthase activation and, through a 5-HT1B receptor -mediated mechanism, stimulates the phosphorylation of p44/p42 mitogen-activated protein kinase in bovine aortic endothelial cell cultures. [ 101 ] In blood, serotonin is collected from plasma by platelets, which store it. It is thus active wherever platelets bind in damaged tissue: as a vasoconstrictor to stop bleeding, and as a fibrocyte mitogen (growth factor) to aid healing. [ 102 ]
Serotonin also regulates white and brown adipose tissue function, and adipocytes are capable of producing 5-HT independently of the gut. Serotonin increases lipogenesis through HTR2A in white adipose tissue and suppresses thermogenesis in brown adipose tissue via HTR3. [ 103 ]
Several classes of drugs target the serotonin system, including some antidepressants , anxiolytics , antipsychotics , analgesics , antimigraine drugs , antiemetics , appetite suppressants , and anticonvulsants , as well as psychedelics and entactogens .
At rest, serotonin is stored within the vesicles of presynaptic neurons. When stimulated by nerve impulses, serotonin is released as a neurotransmitter into the synapse, reversibly binding to the postsynaptic receptor to induce a nerve impulse on the postsynaptic neuron. Serotonin can also bind to auto-receptors on the presynaptic neuron to regulate the synthesis and release of serotonin. Normally serotonin is taken back into the presynaptic neuron to stop its action, then reused or broken down by monoamine oxidase. [ 104 ]
Drugs that alter serotonin levels are used in treating depression , generalized anxiety disorder , and social phobia . Monoamine oxidase inhibitors (MAOIs) prevent the breakdown of monoamine neurotransmitters (including serotonin), and therefore increase concentrations of the neurotransmitter in the brain. MAOI therapy is associated with many adverse drug reactions, and patients are at risk of hypertensive emergency triggered by foods with high tyramine content, and certain drugs. Some drugs inhibit the re-uptake of serotonin, making it stay in the synaptic cleft longer. The tricyclic antidepressants (TCAs) inhibit the reuptake of both serotonin and norepinephrine . The newer selective serotonin reuptake inhibitors ( SSRIs ) have fewer side-effects and fewer interactions with other drugs. [ 105 ]
Certain SSRI medications have been shown to lower serotonin levels below the baseline after chronic use, despite initial increases. [ 106 ] The 5-HTTLPR polymorphism influences the number of serotonin transporters in the brain, with more serotonin transporters causing decreased duration and magnitude of serotonergic signaling. [ 107 ] Individuals carrying the 5-HTTLPR variant (l/l) that causes more serotonin transporters to be formed are also found to be more resilient against depression and anxiety. [ 108 ] [ 109 ]
Besides their use in treating depression and anxiety, certain serotonergic antidepressants are also approved and used to treat fibromyalgia , neuropathic pain , and chronic fatigue syndrome . [ 110 ] [ 111 ]
Azapirone anxiolytics like buspirone and tandospirone act as serotonin 5-HT 1A receptor agonists . [ 112 ] [ 113 ]
Many antipsychotics bind to and modulate serotonin receptors , including the serotonin 5-HT 1A , 5-HT 2A , 5-HT 2B , 5-HT 2C , 5-HT 6 , and 5-HT 7 receptors , among others. [ 114 ] [ 115 ] Activation of serotonin 5-HT 1A receptors and blockade of serotonin 5-HT 2A receptors may contribute to the therapeutic antipsychotic effects of these agents, whereas antagonism of serotonin 5-HT 2C receptors has been especially implicated in side effects of antipsychotics. [ 114 ] [ 115 ]
Antimigraine agents such as the triptans like sumatriptan act as agonists of the serotonin 5-HT 1B , 5-HT 1D , and/or 5-HT 1F receptors . [ 116 ] [ 117 ] Earlier antimigraine agents were the ergoline derivatives and ergot -related drugs such as ergotamine , dihydroergotamine , and methysergide , which act as non-selective serotonin receptor agonists . [ 117 ] [ 118 ] [ 119 ]
Some serotonin 5-HT 3 receptor antagonists , such as ondansetron , granisetron , and tropisetron , are important antiemetic agents. [ 120 ] [ 121 ] They are particularly important in treating the nausea and vomiting that occur during anticancer chemotherapy using cytotoxic drugs . [ 121 ] Another application is in the treatment of postoperative nausea and vomiting . [ 120 ]
Some serotonin releasing agents , serotonin reuptake inhibitors , and/or serotonin 5-HT 2C receptor agonists , such as fenfluramine , dexfenfluramine , chlorphentermine , sibutramine , and lorcaserin , have been approved and used as appetite suppressants for purposes of weight loss in the treatment of overweight or obesity . [ 122 ] [ 123 ] [ 124 ] [ 125 ] [ 126 ] Several of the preceding agents have been withdrawn from the market due to toxicity , such as cardiac fibrosis or pulmonary hypertension . [ 126 ]
Although it was previously withdrawn from the market as an appetite suppressant, fenfluramine was reintroduced as an anticonvulsant for treatment of seizures in certain rare forms of epilepsy like Dravet syndrome and Lennox–Gastaut syndrome . [ 127 ] Selective serotonin 5-HT 2C receptor agonists, like lorcaserin, bexicaserin , and BMB-101 , are also being developed for this use. [ 127 ] [ 128 ] [ 129 ] [ 130 ]
Serotonergic psychedelics , including drugs like psilocybin (found in psilocybin mushrooms ), dimethyltryptamine (DMT) (found in ayahuasca ), lysergic acid diethylamide (LSD), mescaline (found in peyote cactus ), and 5-MeO-DMT (found in Anadenanthera trees and the Bufo alvarius toad), are non-selective agonists of the serotonin receptors and mediate their hallucinogenic effects specifically by activation of the serotonin 5-HT 2A receptor . [ 131 ] [ 132 ] [ 133 ] This is evidenced by the fact that serotonin 5-HT 2A receptor antagonists and so-called " trip killers " like ketanserin block the hallucinogenic effects of serotonergic psychedelics in humans, among many other findings. [ 131 ] [ 132 ] [ 134 ] Some serotonergic psychedelics, like psilocin , DMT, and 5-MeO-DMT, are substituted tryptamines and are very similar in chemical structure to serotonin. [ 133 ]
Serotonin itself, despite acting as a serotonin 5-HT 2A receptor agonist, is thought to be non-hallucinogenic. [ 135 ] The hallucinogenic effects of serotonergic psychedelics appear to be mediated by activation of serotonin 5-HT 2A receptors expressed in a population of cortical neurons in the medial prefrontal cortex (mPFC). [ 136 ] [ 135 ] These serotonin 5-HT 2A receptors, unlike most serotonin and related receptors, are expressed intracellularly . [ 136 ] [ 135 ] In addition, the neurons containing them lack expression of the serotonin transporter (SERT), which normally transports serotonin from the extracellular space to the intracellular space within neurons. [ 136 ] [ 135 ] Serotonin itself is too hydrophilic to enter serotonergic neurons without the SERT, and hence these serotonin 5-HT 2A receptors are inaccessible to serotonin. [ 136 ] [ 135 ] Conversely, serotonergic psychedelics are more lipophilic than serotonin and readily enter these neurons. [ 136 ] [ 135 ] In addition to explaining why serotonin does not show psychedelic effects, these findings may explain why drugs that increase serotonin levels, like selective serotonin reuptake inhibitors (SSRIs) and various other types of serotonergic agents, do not produce psychedelic effects. [ 136 ] [ 135 ] Artificial expression of the SERT in these medial prefrontal cortex neurons resulted in the serotonin releasing agent para -chloroamphetamine (PCA), which does not normally show psychedelic-like effects, being able to produce psychedelic-like effects in animals. [ 135 ]
Although serotonin itself is non-hallucinogenic, administration of very high doses of a serotonin precursor , like tryptophan or 5-hydroxytryptophan (5-HTP), or intracerebroventricular injection of high doses of serotonin directly into the brain, can produce psychedelic-like effects in animals. [ 137 ] [ 138 ] [ 139 ] These psychedelic-like effects can be abolished by indolethylamine N -methyltransferase (INMT) inhibitors , which block conversion of serotonin and other endogenous tryptamines into N - methylated tryptamines, including N -methylserotonin (NMS; norbufotenin), bufotenin (5-hydroxy- N , N -dimethyltryptamine; 5-HO-DMT), N -methyltryptamine (NMT), and N , N -dimethyltryptamine (DMT). [ 138 ] [ 140 ] [ 139 ] These N -methyltryptamines are much more lipophilic than serotonin and, in contrast, are able to diffuse into serotonergic neurons and activate intracellular serotonin 5-HT 2A receptors. [ 138 ] [ 139 ] [ 136 ] [ 135 ] Another possible metabolite of serotonin with psychedelic-like effects in animals is 5-methoxytryptamine (5-MT). [ 141 ] [ 142 ] [ 143 ]
DMT is a naturally occurring endogenous compound in the body. [ 144 ] [ 145 ] [ 146 ] In relation to the fact that serotonin itself is unable to activate intracellular serotonin 5-HT 2A receptors, it is possible that DMT might be the endogenous ligand of these receptors rather than serotonin. [ 136 ] [ 135 ]
The entactogen MDMA is a serotonin releasing agent and, while it also possesses other actions such as concomitant release of norepinephrine and dopamine and weak direct agonism of the serotonin 5-HT 2 receptors , its serotonin release plays a key role in its unique entactogenic effects. [ 147 ] Entactogens like MDMA should be distinguished from other drugs such as stimulants like amphetamine and psychedelics like LSD , although MDMA itself also has some characteristics of both of these types of agents. [ 147 ] [ 148 ] Coadministration of selective serotonin reuptake inhibitors (SSRIs), which block the serotonin transporter (SERT) and prevent MDMA from inducing serotonin release, markedly reduces the subjective effects of MDMA, demonstrating the key role of serotonin in the effects of the drug. [ 149 ] Serotonin releasing agents like MDMA achieve much greater increases in serotonin levels than SSRIs and have far more robust subjective effects. [ 150 ] [ 151 ] [ 152 ] [ 153 ] Many other entactogens besides MDMA are also known. [ 154 ] [ 155 ] [ 148 ]
Extremely high levels of serotonin or activation of certain serotonin receptors can cause a condition known as serotonin syndrome , with toxic and potentially fatal effects. In practice, such toxic levels are essentially impossible to reach through an overdose of a single antidepressant drug; they instead require a combination of serotonergic agents, such as an SSRI with an MAOI , which can occur even at therapeutic doses. [ 156 ] [ 157 ] However, serotonin syndrome can occur with overdose of certain serotonin receptor agonists, like the NBOMe series of serotonergic psychedelics. [ 158 ] [ 159 ] [ 160 ]
The intensity of the symptoms of serotonin syndrome varies over a wide spectrum, and the milder forms are seen even at nontoxic levels. [ 161 ] It is estimated that about 14% of patients who overdose on SSRIs experience serotonin syndrome; the fatality rate is between 2% and 12%. [ 156 ] [ 162 ] [ 163 ]
Some serotonergic agonist drugs cause fibrosis anywhere in the body, particularly the syndrome of retroperitoneal fibrosis , as well as cardiac valve fibrosis . [ 164 ]
In the past, three groups of serotonergic drugs have been epidemiologically linked with these syndromes. These are the serotonergic vasoconstrictive antimigraine drugs ( ergotamine and methysergide ), [ 164 ] the serotonergic appetite suppressant drugs ( fenfluramine , chlorphentermine , and aminorex ), and certain anti-Parkinsonian dopaminergic agonists, which also stimulate serotonergic 5-HT 2B receptors. These include pergolide and cabergoline , but not the more dopamine-specific lisuride . [ 165 ]
As with fenfluramine, some of these drugs have been withdrawn from the market after groups taking them showed a statistical increase in one or more of the side effects described. An example is pergolide , which had been declining in use since 2003, when it was reported to be associated with cardiac fibrosis. [ 166 ]
Two independent studies published in The New England Journal of Medicine in January 2007 implicated pergolide, along with cabergoline , in causing valvular heart disease . [ 167 ] [ 168 ] As a result, the FDA removed pergolide from the United States market in March 2007. [ 169 ] (Since cabergoline is approved in the United States not for Parkinson's disease but for hyperprolactinemia, the drug remains on the market. Treatment of hyperprolactinemia requires lower doses than treatment of Parkinson's disease, diminishing the risk of valvular heart disease.) [ 170 ]
Serotonin is used by a variety of single-cell organisms for various purposes. SSRIs have been found to be toxic to algae. [ 171 ] The gastrointestinal parasite Entamoeba histolytica secretes serotonin, causing a sustained secretory diarrhea in some people. [ 22 ] [ 172 ] Patients infected with E. histolytica have been found to have highly elevated serum serotonin levels, which returned to normal following resolution of the infection. [ 173 ] E. histolytica also responds to the presence of serotonin by becoming more virulent. [ 174 ] This means serotonin secretion not only serves to increase the spread of entamoebas by giving the host diarrhea, but also serves to coordinate their behaviour according to their population density, a phenomenon known as quorum sensing . Outside the gut of a host, there are no cells for the entamoebas to provoke into releasing serotonin, so the serotonin concentration is very low. Low serotonin signals to the entamoebas that they are outside a host, and they become less virulent to conserve energy. When they enter a new host, they multiply in the gut and become more virulent as the enterochromaffin cells are provoked by them and the serotonin concentration increases.
In drying seeds , serotonin production is a way to get rid of the buildup of poisonous ammonia . The ammonia is collected and incorporated into the indole portion of L - tryptophan , which is then decarboxylated by tryptophan decarboxylase to give tryptamine, which is in turn hydroxylated by a cytochrome P450 monooxygenase , yielding serotonin. [ 175 ]
However, since serotonin is a major gastrointestinal tract modulator, it may be produced in the fruits of plants as a way of speeding the passage of seeds through the digestive tract, in the same way as many well-known seed and fruit associated laxatives. Serotonin is found in mushrooms , fruits , and vegetables . The highest values, 25–400 mg/kg, have been found in nuts of the walnut ( Juglans ) and hickory ( Carya ) genera. Serotonin concentrations of 3–30 mg/kg have been found in plantains , pineapples , bananas , kiwifruit , plums , and tomatoes . Moderate levels of 0.1–3 mg/kg have been found in a wide range of tested vegetables. [ 23 ] [ 20 ]
Serotonin is one compound of the poison contained in stinging nettles ( Urtica dioica ), where it causes pain on injection in the same manner as its presence in insect venoms. [ 21 ] It is also naturally found in Paramuricea clavata , or the Red Sea Fan. [ 176 ]
Serotonin and tryptophan have been found in chocolate with varying cocoa contents. The highest serotonin content (2.93 μg/g) was found in chocolate with 85% cocoa, and the highest tryptophan content (13.27–13.34 μg/g) was found in 70–85% cocoa. The intermediate in the synthesis from tryptophan to serotonin, 5-hydroxytryptophan, was not found. [ 177 ]
Root development in Arabidopsis thaliana is stimulated and modulated by serotonin – in various ways at various concentrations. [ 178 ]
Serotonin serves as a plant defense chemical against fungi. When infected with Fusarium crown rot ( Fusarium pseudograminearum ), wheat ( Triticum aestivum ) greatly increases its production of tryptophan to synthesize new serotonin. [ 179 ] The function of this is poorly understood, [ 179 ] but wheat also produces serotonin when infected by Stagonospora nodorum – in that case to retard spore production. [ 180 ] The model cereal Brachypodium distachyon – used as a research substitute for wheat and other production cereals – also produces serotonin, coumaroyl -serotonin, and feruloyl -serotonin in response to F. graminearum , producing a slight antimicrobial effect. B. distachyon produces more serotonin (and conjugates) in response to deoxynivalenol (DON)-producing F. graminearum than to non-DON-producing strains. [ 181 ] Solanum lycopersicum produces many AA conjugates – including several of serotonin – in its leaves, stems, and roots in response to Ralstonia solanacearum infection. [ 182 ]
Serotonin occurs in several hallucinogenic mushrooms of the genus Panaeolus . [ 183 ]
Serotonin functions as a neurotransmitter in the nervous systems of most animals.
For example, in the roundworm Caenorhabditis elegans , which feeds on bacteria, serotonin is released as a signal in response to positive events, such as finding a new source of food or, in male animals, finding a female with which to mate. [ 184 ] When a well-fed worm feels bacteria on its cuticle , dopamine is released, which slows it down; if it is starved, serotonin is also released, which slows the animal down further. This mechanism increases the amount of time animals spend in the presence of food. [ 185 ] The released serotonin activates the muscles used for feeding, while octopamine suppresses them. [ 186 ] [ 187 ] Serotonin diffuses to serotonin-sensitive neurons, which control the animal's perception of nutrient availability.
If lobsters are injected with serotonin, they behave like dominant individuals, whereas octopamine causes subordinate behavior . [ 30 ] A crayfish that is frightened may flip its tail to flee, and the effect of serotonin on this behavior depends largely on the animal's social status. Serotonin inhibits the fleeing reaction in subordinates, but enhances it in socially dominant or isolated individuals. The reason for this is that social experience alters the proportion of serotonin receptors (5-HT receptors) that have opposing effects on the fight-or-flight response . The effect of 5-HT 1 receptors predominates in subordinate animals, while that of 5-HT 2 receptors predominates in dominants. [ 188 ]
Serotonin is a common component of invertebrate venoms, salivary glands, nervous tissues, and various other tissues, across molluscs, insects, crustaceans, scorpions, various kinds of worms, and jellyfish. [ 21 ] Adult Rhodnius prolixus – hematophagous on vertebrates – secrete lipocalins into the wound during feeding. In 2003 these lipocalins were demonstrated to sequester serotonin to prevent vasoconstriction (and possibly coagulation) in the host. [ 189 ]
Serotonin is evolutionarily conserved and appears across the animal kingdom. It is seen in insect processes in roles similar to those in the human central nervous system, such as memory, appetite, sleep, and behavior. [ 190 ] [ 19 ] Some circuits in mushroom bodies are serotonergic. [ 191 ] (See the specific Drosophila example below, §Dipterans .)
Locust swarming is initiated but not maintained by serotonin, [ 192 ] with release being triggered by tactile contact between individuals. [ 193 ] This transforms social preference from aversion to a gregarious state that enables coherent groups. [ 194 ] [ 193 ] [ 192 ] Learning in flies and honeybees is affected by the presence of serotonin. [ 195 ] [ 196 ]
Insect 5-HT receptors have similar sequences to the vertebrate versions, but pharmacological differences have been seen. Invertebrate drug responses have been far less characterized than mammalian pharmacology, and the potential for species-selective insecticides has been discussed. [ 197 ]
Wasps and hornets have serotonin in their venom, [ 198 ] which causes pain and inflammation, [ 199 ] [ 21 ] as do scorpions . [ 200 ] [ 21 ] The ant Pheidole dentata takes on more and more tasks in the colony as it ages, requiring it to respond to an increasing number of olfactory cues in the course of performing them. In 2006, this olfactory response broadening was demonstrated to go along with increased serotonin and dopamine , but not octopamine . [ 201 ]
If flies are fed serotonin, they are more aggressive; flies depleted of serotonin still exhibit aggression, but they do so much less frequently. [ 202 ] In the fly crop , serotonin plays a vital role in the contractions that produce digestive motility. The serotonin that acts on the crop is exogenous to the crop itself, and 2012 research suggested that it probably originates in the serotonergic neural plexus of the thoracic-abdominal synganglion. [ 203 ] In 2011, a serotonergic mushroom body circuit in Drosophila was found to work in concert with Amnesiac to form memories. [ 191 ] In 2007, serotonin was found to promote aggression in Diptera , counteracted by neuropeptide F – a surprising find, given that both also promote courtship , which is usually similar to aggression in most respects. [ 191 ]
Serotonin, also referred to as 5-hydroxytryptamine (5-HT), is a neurotransmitter best known for its involvement in mood disorders in humans. It is also a widely distributed neuromodulator among vertebrates and invertebrates. [ 204 ] Serotonin has been associated with many physiological systems, such as the cardiovascular system and thermoregulation , and with behavioral functions including circadian rhythm , appetite, aggressive and sexual behavior, sensorimotor reactivity and learning, and pain sensitivity. [ 205 ] Serotonin's function in neurological systems, along with specific vertebrate behaviors strongly associated with it, is discussed below, together with two relevant case studies of serotonin development in teleost fish and mice .
In mammals, 5-HT is highly concentrated in the substantia nigra , ventral tegmental area and raphe nuclei . Lower concentrations are found in other brain regions and the spinal cord. [ 204 ] 5-HT neurons are also highly branched, indicating that they are structurally suited to influencing multiple areas of the CNS at the same time, although this trait appears to be exclusive to mammals. [ 205 ]
Vertebrates are multicellular organisms in the phylum Chordata that possess a backbone and a nervous system . This includes mammals, fish, reptiles, birds, etc. In humans, the nervous system is composed of the central and peripheral nervous system ; little is known about the specific mechanisms of neurotransmitters in most other vertebrates. However, it is known that while serotonin is involved in stress and behavioral responses, it is also important in cognitive functions . [ 204 ] Brain organization in most vertebrates includes 5-HT cells in the hindbrain . [ 204 ] In addition, 5-HT is often found in other sections of the brain in non-placental vertebrates, including the basal forebrain and pretectum . [ 206 ] Since the location of serotonin receptors contributes to behavioral responses, this suggests serotonin is part of specific pathways in non-placental vertebrates that are not present in amniotic organisms. [ 207 ] Teleost fish and mice are the organisms most often used to study the connection between serotonin and vertebrate behavior. Both show similarities in the effect of serotonin on behavior, but differ in the mechanisms by which the responses occur.
There are few studies of serotonin in dogs. One study reported that serotonin values were higher at dawn than at dusk. [ 208 ] In another study, serum 5-HT levels did not appear to be associated with dogs' behavioural response to a stressful situation. [ 209 ] The urinary serotonin/creatinine ratio in bitches tended to be higher 4 weeks after surgery. In addition, serotonin was positively correlated with both cortisol and progesterone, but not with testosterone, after ovariohysterectomy . [ 210 ]
Like non-placental vertebrates, teleost fish also possess 5-HT cells in other sections of the brain, including the basal forebrain . [ 206 ] Danio rerio (zebrafish) is a species of teleost fish often used for studying serotonin within the brain. Although much remains unknown about serotonergic systems in vertebrates, their importance in moderating stress and social interaction is established. [ 211 ] It is hypothesized that AVT and CRF cooperate with serotonin in the hypothalamic-pituitary-interrenal axis . [ 206 ] These neuropeptides influence the plasticity of the teleost, affecting its ability to change and respond to its environment. Subordinate fish in social settings show a drastic increase in 5-HT concentrations. [ 211 ] Long-term high levels of 5-HT contribute to the inhibition of aggression in subordinate fish. [ 211 ]
Researchers at the Department of Pharmacology and Medical Chemistry used serotonergic drugs on male mice to study the effects of selected drugs on their behavior. [ 212 ] Mice in isolation exhibit increased levels of agonistic behavior towards one another. The study found that serotonergic drugs reduce aggression in isolated mice while simultaneously increasing social interaction. [ 212 ] Each of the treatments uses a different mechanism for targeting aggression, but ultimately all have the same outcome. While the study shows that serotonergic drugs successfully target serotonin receptors, it does not reveal the specific mechanisms that affect behavior, as all types of drugs tended to reduce aggression in isolated male mice. [ 212 ] Aggressive mice kept out of isolation may respond differently to changes in serotonin reuptake.
As in humans, serotonin is deeply involved in regulating behavior in most other vertebrates. This includes not only stress responses and social behaviors, but also mood. Defects in serotonin pathways can lead to intense variations in mood, as well as symptoms of mood disorders, and these are present in more than just humans.
One of the most researched aspects of social interaction in which serotonin is involved is aggression. Aggression is regulated by the 5-HT system, as serotonin levels can either induce or inhibit aggressive behaviors, as seen in mice (see section on Mice) and crabs. [ 212 ] While this is widely accepted, it is unknown whether serotonin interacts directly or indirectly with the parts of the brain influencing aggression and other behaviors. [ 204 ] Studies show that serotonin levels rise and fall drastically during social interactions, and these changes generally correlate with inhibiting or inciting aggressive behavior. [ 213 ] The exact mechanism by which serotonin influences social behaviors is unknown, as pathways in the 5-HT system of various vertebrates can differ greatly. [ 204 ]
Serotonin is important in environmental response pathways, along with other neurotransmitters . [ 214 ] Specifically, it has been found to be involved in auditory processing in social settings, as primary sensory systems are connected to social interactions. [ 215 ] Serotonin is found in the inferior colliculus (IC) of the midbrain, which processes species-specific and non-specific social interactions and vocalizations. [ 215 ] The IC also receives acoustic projections that convey signals to auditory processing regions. [ 215 ] Research has proposed that serotonin shapes the auditory information received by the IC and is therefore influential in responses to auditory stimuli. [ 215 ] This can affect how an organism responds to the sounds of predatory or other impactful species in its environment, as serotonin uptake can influence aggression or social interaction.
Mood can be described not as a specific momentary emotion but as a relatively long-lasting emotional state. Serotonin's association with mood is best known from various forms of depression and bipolar disorders in humans. [ 205 ] Disorders of serotonergic activity potentially contribute to the many symptoms of major depression, such as overall mood, activity, suicidal thoughts and sexual and cognitive dysfunction . Selective serotonin reuptake inhibitors (SSRIs) are a class of drugs demonstrated to be an effective treatment in major depressive disorder and are the most prescribed class of antidepressants. SSRIs block the reuptake of serotonin, leaving more serotonin available to the receiving neuron. Animals have been studied for decades in order to understand depressive behavior among species. One of the most familiar assays, the forced swimming test (FST), measures potential antidepressant activity. [ 205 ] Rats were placed in an inescapable container of water, and the time spent immobile and the number of active behaviors (such as splashing or climbing) were compared before and after a panel of antidepressant drugs was administered. Antidepressants that selectively inhibit norepinephrine (NE) reuptake were shown to reduce immobility and selectively increase climbing without affecting swimming. SSRIs, by contrast, also reduced immobility but increased swimming without affecting climbing. This study demonstrated the importance of behavioral tests for antidepressants, as they can distinguish drug classes by the behavioral components they affect. [ 205 ]
In the nematode C. elegans , artificial depletion of serotonin or the increase of octopamine cues behavior typical of a low-food environment: C. elegans becomes more active, and mating and egg-laying are suppressed, while the opposite occurs if serotonin is increased or octopamine is decreased in this animal. [ 33 ] Serotonin is necessary for normal nematode male mating behavior, [ 216 ] and the inclination to leave food to search for a mate. [ 217 ] The serotonergic signaling used to adapt the worm's behaviour to fast changes in the environment affects insulin -like signaling and the TGF beta signaling pathway , [ 218 ] which control long-term adaptation.
In the fruit fly, insulin both regulates blood sugar and acts as a growth factor . Thus, in the fruit fly, serotonergic neurons regulate the adult body size by affecting insulin secretion. [ 219 ] [ 220 ] Serotonin has also been identified as the trigger for swarm behavior in locusts. [ 194 ] In humans, though insulin regulates blood sugar and IGF regulates growth, serotonin controls the release of both hormones, modulating insulin release from the beta cells in the pancreas through serotonylation of GTPase signaling proteins. [ 47 ] Exposure to SSRIs during pregnancy reduces fetal growth. [ 221 ]
Genetically altered C. elegans worms that lack serotonin have an increased reproductive lifespan, may become obese, and sometimes present with arrested development at a dormant larval state . [ 222 ] [ 223 ]
Serotonin is known to regulate aging, learning, and memory. The first evidence comes from the study of longevity in C. elegans . [ 218 ] During the early phase of aging [ vague ] , the level of serotonin increases, which alters locomotory behaviors and associative memory. [ 224 ] The effect is reversed by mutations and drugs (including mianserin and methiothepin ) that inhibit serotonin receptors . This observation does not contradict the notion that serotonin levels decline in mammals and humans, which is typically seen in the late, but not early [ vague ] , phase of aging.
In animals and humans, serotonin is synthesized from the amino acid L - tryptophan by a short metabolic pathway consisting of two enzymes , tryptophan hydroxylase (TPH) and aromatic amino acid decarboxylase (DDC), and the coenzyme pyridoxal phosphate . The TPH-mediated reaction is the rate-limiting step in the pathway.
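As an illustrative (not biochemical) sketch, the two-step pathway above can be modeled with simple first-order kinetics. All rate constants below are hypothetical, chosen only to show how a slow first (TPH) step limits overall serotonin output:

```python
# Toy first-order model of the pathway
#   tryptophan --(TPH)--> 5-HTP --(DDC)--> serotonin
# Rate constants are hypothetical; the point is that k_tph << k_ddc
# makes the first (hydroxylation) step rate-limiting.

def simulate(trp0=100.0, k_tph=0.01, k_ddc=0.5, dt=0.01, t_end=200.0):
    """Forward-Euler integration of the two-step mass-action system."""
    trp, htp, sero = trp0, 0.0, 0.0  # tryptophan, 5-HTP, serotonin
    for _ in range(int(t_end / dt)):
        d_trp = -k_tph * trp
        d_htp = k_tph * trp - k_ddc * htp
        d_sero = k_ddc * htp
        trp += d_trp * dt
        htp += d_htp * dt
        sero += d_sero * dt
    return trp, htp, sero

trp, htp, sero = simulate()
# The intermediate pool stays tiny while serotonin accumulates:
print(f"Trp={trp:.1f}  5-HTP={htp:.2f}  5-HT={sero:.1f}")
```

Because the TPH rate constant is far smaller than the decarboxylase one, 5-HTP never accumulates appreciably: overall flux is set by the first step, mirroring its rate-limiting role.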
TPH has been shown to exist in two forms: TPH1 , found in several tissues , and TPH2 , which is a neuron-specific isoform . [ 225 ]
Serotonin can be synthesized from tryptophan in the lab using Aspergillus niger and Psilocybe coprophila as catalysts. The first phase, from tryptophan to 5-hydroxytryptophan, requires letting tryptophan sit in ethanol and water for 7 days, then mixing in enough HCl (or another acid) to bring the pH to 3, and then adding NaOH to raise the pH to 13 for 1 hour. Aspergillus niger is the catalyst for this first phase. The second phase, synthesizing serotonin itself from the 5-hydroxytryptophan intermediate, requires adding ethanol and water and letting the mixture sit for 30 days. The next two steps are the same as in the first phase: adding HCl to bring the pH to 3, and then adding NaOH to make the pH strongly basic at 13 for 1 hour. This phase uses Psilocybe coprophila as the catalyst for the reaction. [ 226 ]
Serotonin taken orally does not pass into the serotonergic pathways of the central nervous system, because it does not cross the blood–brain barrier . [ 9 ] However, tryptophan and its metabolite 5-hydroxytryptophan (5-HTP), from which serotonin is synthesized, do cross the blood–brain barrier. These agents are available as dietary supplements and in various foods, and may be effective serotonergic agents.
One product of serotonin breakdown is 5-hydroxyindoleacetic acid (5-HIAA), which is excreted in the urine . Serotonin and 5-HIAA are sometimes produced in excess amounts by certain tumors or cancers , and levels of these substances may be measured in the urine to test for these tumors.
Indium tin oxide is recommended as the electrode material in electrochemical investigation of serotonin concentrations produced, detected, or consumed by microbes . [ 227 ] A mass spectrometry technique was developed in 1994 to measure the molecular weight of both natural and synthetic serotonins. [ 228 ]
It had been known to physiologists for over a century that a vasoconstrictor material appears in serum when blood is allowed to clot. [ 229 ] In 1935, the Italian Vittorio Erspamer , working in Pavia, showed that an extract from enterochromaffin cells made intestines contract. Some believed it contained adrenaline , but two years later, Erspamer was able to show it was a previously unknown amine , which he named "enteramine". [ 230 ] [ 231 ] In 1948, Maurice M. Rapport , Arda Green , and Irvine Page of the Cleveland Clinic discovered a vasoconstrictor substance in blood serum , and since it was a serum agent affecting vascular tone, they named it serotonin. [ 232 ]
In 1952, enteramine was shown to be the same substance as serotonin, and as the broad range of physiological roles was elucidated, the abbreviation 5-HT of the proper chemical name 5-hydroxytryptamine became the preferred name in the pharmacological field. [ 233 ] Synonyms of serotonin include: 5-hydroxytryptamine, enteramine, substance DS, and 3-(β-aminoethyl)-5-hydroxyindole. [ 234 ] In 1953, Betty Twarog and Page discovered serotonin in the central nervous system. [ 235 ] Page regarded Erspamer's work on Octopus vulgaris , Discoglossus pictus , Hexaplex trunculus , Bolinus brandaris , Sepia , Mytilus , and Ostrea as valid and fundamental to understanding this newly identified substance, but regarded his earlier results in various models – especially those from rat blood – to be too confounded by the presence of other bioactive chemicals, including some other vasoactives . [ 236 ]
Serotonin, given orally at a dose of 100 mg, produced effects in humans including blood pressure changes, abdominal cramps , muscle aches , and a feeling of sedation . [ 237 ] [ 238 ] [ 239 ] In contrast to psychedelic drugs like LSD , no hallucinogenic effects were reported. [ 237 ] [ 238 ] [ 239 ] In other studies, serotonin, at low intravenous doses of 2 to 6 mg, had no effects on electroencephalogram (EEG) readings in humans. [ 240 ] In accordance with the preceding findings, it has been stated that administration of serotonin in humans produces no psychoactive effects that cannot be attributed to anxiety caused by its profound peripheral adverse effects, including circulatory disturbance , other autonomic effects, and vomiting . [ 240 ] [ 241 ] Intracerebroventricular injection of serotonin has been studied in patients with severe psychiatric conditions , but little information about its psychoactive effects is provided. [ 241 ] [ 242 ]
It is thought that exogenous serotonin is too hydrophilic to cross the blood–brain barrier and too metabolically unstable, owing to rapid metabolism by monoamine oxidase (MAO), to produce drug -like central effects in humans when administered peripherally . [ 238 ] [ 243 ] However, close analogues of serotonin that are more lipophilic and metabolically stable , like bufotenin ( N , N -dimethylserotonin), 5-MeO-DMT ( N , N , O -trimethylserotonin), and 5-MeO-AMT (α, O -dimethylserotonin), among many others, are active and produce pronounced centrally mediated effects in humans. [ 243 ] [ 244 ] These drugs are non-selective serotonin receptor agonists like serotonin and are serotonergic psychedelics due to activation of the serotonin 5-HT 2A receptor . [ 243 ] [ 245 ] [ 244 ] α-Methylserotonin is well-studied in preclinical research , but is not known to have been tested in humans. [ 243 ] | https://en.wikipedia.org/wiki/Serotonin_and_aging
Serotonylation is a receptor-independent signaling mechanism by which serotonin activates intracellular processes by forming long-lasting covalent bonds to proteins . [ 1 ] It occurs through the modification of proteins by the attachment of serotonin to their glutamine residues. This happens through the enzyme transglutaminase and the creation of glutamyl-amide bonds. This process occurs after serotonin is transported into the cell, rather than at the plasma membrane, as with the brief interactions serotonin has when it activates 5-HT receptors .
Serotonylation is the process by which serotonin effects the exocytosis of alpha-granules from platelets (also known as thrombocytes ). [ 1 ] This involves the serotonylation of small GTPases such as Rab4 and RhoA . It has been suggested that "further understanding of the specific hormonal role of 5-HT in hemostasis and thrombosis is important to possibly prevent and treat deleterious hemorrhagic and cardiovascular disorders." [ 1 ] Serotonylation has recently been identified as playing a critical role in pulmonary hypertension . [ 2 ]
Serotonylation, again via small GTPases, is also involved in the process by which serotonin controls the release of insulin from beta cells in the pancreas, and thus the regulation of blood glucose levels. [ 3 ] This role helps explain why defects in transglutaminase can lead to glucose intolerance . [ 3 ] Though small GTPases are involved, the existence of a large amount of protein-bound serotonin suggests the presence of other, as yet unidentified, serotonylation interactions. [ 3 ]
Serotonylation of proteins other than small GTPases underlies the regulation of vascular smooth muscle "tone" in blood vessels including the aorta . [ 4 ] This may occur through serotonylation modifying proteins integral to contractility and the cytoskeleton, such as alpha-actin , beta-actin , gamma-actin , myosin heavy chain and filamin A . [ 4 ]
According to some, [ 4 ] serotonin was "named for its source ( sero -) and ability to modify smooth muscle tone (tonin)", an effect that may depend (some controversy exists) upon serotonylation. [ 4 ]
The term serotonylation was created in 2003 by Diego J. Walther and colleagues of the Max Planck Institute for Molecular Genetics in a paper in the journal Cell . [ 1 ] | https://en.wikipedia.org/wiki/Serotonylation |
A serotype or serovar is a distinct variation within a species of bacteria or virus or among immune cells of different individuals. These microorganisms , viruses, or cells are classified together based on their shared reactivity between their surface antigens and a particular antiserum , allowing the classification of organisms to a level below the species . [ 1 ] [ 2 ] [ 3 ] A group of serovars with common antigens is called a serogroup or sometimes serocomplex . [ clarification needed ]
Serotyping often plays an essential role in determining species and subspecies. The Salmonella genus of bacteria, for example, has been determined to have over 2600 serotypes. Vibrio cholerae , the species of bacteria that causes cholera , has over 200 serotypes, based on cell antigens. Only two of them have been observed to produce the potent enterotoxin that results in cholera: O1 and O139. [ citation needed ]
Serotypes were discovered in hemolytic streptococci by the American microbiologist Rebecca Lancefield in 1933. [ 4 ]
Serotyping is the process of determining the serotype of an organism, using prepared antisera that bind to a set of known antigens. Some antisera detect multiple known antigens and are known as polyvalent or broad ; others are monovalent . For example, what was once described as HLA-A9 is now subdivided into two more specific serotypes (" split antigens "), HLA-A23 and HLA-A24 . As a result, A9 is now known as a "broad" serotype. [ 5 ] For organisms with many possible serotypes, first obtaining a polyvalent match can reduce the number of tests required. [ 6 ]
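The test-saving logic of polyvalent screening can be sketched as follows; the pool layout and serotype names are invented for illustration:

```python
# Illustrative two-stage serotyping: screen with polyvalent ("broad")
# antisera first, then test monovalent antisera only within the
# reacting pool.  Pool layout and serotype names are hypothetical.

POOLS = {  # pool name -> serotypes its polyvalent antiserum covers
    "poly-A": ["S1", "S2", "S3", "S4"],
    "poly-B": ["S5", "S6", "S7", "S8"],
    "poly-C": ["S9", "S10", "S11", "S12"],
}

def identify(true_serotype):
    """Return (identified serotype, number of agglutination tests used)."""
    tests = 0
    for members in POOLS.values():
        tests += 1                      # one polyvalent screen per pool
        if true_serotype in members:    # pool antiserum agglutinated
            for serotype in members:
                tests += 1              # one monovalent test per member
                if serotype == true_serotype:
                    return serotype, tests
    return None, tests

result, n_tests = identify("S7")
print(result, n_tests)  # at most 3 + 4 = 7 tests instead of up to 12
```

With 12 serotypes in three pools, the worst case is 7 tests rather than 12; the saving grows quickly with the number of serotypes.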
The binding between a surface antigen and the antiserum can be experimentally observed in many forms. A number of bacterial species, including Streptococcus pneumoniae , display the Quellung reaction visible under a microscope. [ 7 ] Others such as Shigella (and E. coli ) and Salmonella are traditionally detected using a slide agglutination test. [ 6 ] [ 8 ] HLA types were originally determined with the complement fixation test . [ 9 ] Newer procedures include the latex fixation test and various other immunoassays .
"Molecular serotyping" refers to methods that replace the antibody-based test with a test based on the nucleic acid sequence – therefore actually a kind of genotyping . By analyzing which surface antigen-defining allele(s) are present, these methods can produce faster results. However, their results may not always agree with traditional serotyping, as they can fail to account for factors that affect the expression of antigen-determining genes. [ 10 ] [ 11 ]
The immune system is capable of discerning a cell as being 'self' or 'non-self' according to that cell's serotype. In humans, that serotype is largely determined by human leukocyte antigen (HLA), the human version of the major histocompatibility complex . Cells determined to be non-self are usually recognized by the immune system as foreign, causing an immune response, such as hemagglutination . Serotypes differ widely between individuals; therefore, if cells from one human (or animal) are introduced into another random human, those cells are often determined to be non-self because they do not match the self-serotype. For this reason, transplants between genetically non-identical humans often induce a problematic immune response in the recipient, leading to transplant rejection . In some situations, this effect can be reduced by serotyping both recipient and potential donors to determine the closest HLA match. [ 12 ]
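A toy sketch of this matching step, with invented loci sets and antigen names (real HLA typing involves more loci and allele-level resolution):

```python
# Toy donor-selection sketch: count shared HLA antigens per locus.
# All names below are illustrative, not real typing data.

RECIPIENT = {"HLA-A": {"A2", "A24"}, "HLA-B": {"B7", "B44"}, "HLA-DR": {"DR4", "DR15"}}

DONORS = {
    "donor1": {"HLA-A": {"A2", "A3"},  "HLA-B": {"B7", "B8"},   "HLA-DR": {"DR4", "DR15"}},
    "donor2": {"HLA-A": {"A1", "A11"}, "HLA-B": {"B27", "B44"}, "HLA-DR": {"DR1", "DR3"}},
}

def match_count(recipient, donor):
    """Shared antigens across all loci (6 would be a full match here)."""
    return sum(len(recipient[locus] & donor[locus]) for locus in recipient)

best = max(DONORS, key=lambda name: match_count(RECIPIENT, DONORS[name]))
print(best, match_count(RECIPIENT, DONORS[best]))
```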
Most bacteria produce antigenic substances on the outer surface that can be distinguished by serotyping.
The LPS (O) and capsule (K) antigens are themselves important pathogenicity factors . [ 6 ] [ 15 ]
Some antigens are invariant among a taxonomic group. Presence of these antigens would not be useful for classification lower than the species level, but may inform identification. One example is the enterobacterial common antigen (ECA), universal to all Enterobacterales . [ 16 ]
E. coli have 187 possible O antigens (6 later removed from list, 3 actually producing no LPS), [ 17 ] 53 H antigens, [ 18 ] and at least 72 K antigens. [ 19 ] Among these three, the O antigen has the best correlation with lineages; as a result, the O antigen is used to define the "serogroup" and is also used to define strains in taxonomy and epidemiology. [ 17 ]
Shigella are only classified by their O antigen, as they are non-motile and produce no flagella. Across the four "species", there are 15 + 11 + 20 + 2 = 48 serotypes. [ 6 ] Some of these O antigens have equivalents in E. coli , which also cladistically include Shigella . [ 20 ]
The Kauffman–White classification scheme is the basis for naming the manifold serovars of Salmonella . To date, more than 2600 different serotypes have been identified. [ 21 ] A Salmonella serotype is determined by the unique combination of reactions of cell surface antigens . For Salmonella , the O and H antigens are used. [ 22 ] There are two species of Salmonella : Salmonella bongori and Salmonella enterica . Salmonella enterica can be subdivided into six subspecies.
The process to identify the serovar of the bacterium consists of finding the formula of surface antigens which represent the variations of the bacteria. The traditional method for determining the antigen formula is agglutination reactions on slides . The agglutination between the antigen and the antibody is performed with a specific antiserum , which reacts with the antigen to produce a visible mass. The O antigen is tested with a bacterial suspension from an agar plate , whereas the H antigen is tested with a bacterial suspension from a broth culture. The scheme classifies the serovar depending on the antigen formula obtained via the agglutination reactions. [ 8 ] Additional serotyping methods and alternative subtyping methodologies have been reviewed by Wattiau et al. [ 23 ]
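In code, the scheme's final step reduces to a lookup from an antigenic formula ("O antigens : phase-1 H : phase-2 H") to a serovar name. The two formulas below are the widely published ones for S. Typhimurium and S. Enteritidis; the dictionary itself is a drastic simplification of the real scheme:

```python
# Sketch of the Kauffman-White lookup step: antigenic formula -> serovar.
# A real implementation would cover thousands of serovars.

KAUFFMANN_WHITE = {
    "1,4,[5],12:i:1,2": "Typhimurium",
    "1,9,12:g,m:-": "Enteritidis",
}

def serovar(o_antigens, h_phase1, h_phase2):
    """Assemble the antigenic formula and look up the serovar name."""
    formula = f"{o_antigens}:{h_phase1}:{h_phase2}"
    return KAUFFMANN_WHITE.get(formula, "unnamed: " + formula)

print(serovar("1,4,[5],12", "i", "1,2"))  # Typhimurium
```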
Streptococcus pneumoniae has 93 capsular serotypes, 91 of which use the Wzy enzyme pathway. The Wzy pathway is used by almost all Gram-positive bacteria, by lactococci and streptococci (exopolysaccharide), and is also responsible for group 1 and 4 Gram-negative capsules. [ 24 ]
Many other organisms can be classified using recognition by antibodies. | https://en.wikipedia.org/wiki/Serotype |
In physiology , serous fluid or serosal fluid (originating from the Medieval Latin word serosus , from Latin serum ) is any of various body fluids resembling serum , that are typically pale yellow or transparent and of a benign nature. The fluid fills the inside of body cavities . Serous fluid originates from serous glands , with secretions enriched with proteins and water . Serous fluid may also originate from mixed glands, which contain both mucous and serous cells. A common trait of serous fluids is their role in assisting digestion , excretion , and respiration .
In medical fields, especially cytopathology , serous fluid is a synonym for effusion fluids from various body cavities . Examples of effusion fluid are pleural effusion and pericardial effusion . There are many causes of effusions which include involvement of the cavity by cancer . Cancer in a serous cavity is called a serous carcinoma . Cytopathology evaluation is recommended to evaluate the causes of effusions in these cavities. [ 1 ]
Saliva consists of mucus and serous fluid; the serous fluid contains the enzyme amylase , which is important for the digestion of carbohydrates . The minor salivary glands of von Ebner , present on the tongue, secrete lipase . The parotid gland produces purely serous saliva. The other major salivary glands produce mixed (serous and mucus) saliva.
Another type of serous fluid is secreted by the serous membranes (serosa), two-layered membranes which line the body cavities . Serous membrane fluid collects on microvilli on the outer layer, acting as a lubricant that reduces friction from muscle movement. This can be seen in the lungs , with the pleural cavity .
Pericardial fluid is a serous fluid secreted by the serous layer of the pericardium into the pericardial cavity . The pericardium consists of two layers, an outer fibrous layer and the inner serous layer. This serous layer has two membranes which enclose the pericardial cavity into which is secreted the pericardial fluid.
Blood serum is the component of blood that is neither a blood cell nor a clotting factor . Blood serum and blood plasma are similar, but serum does not contain any clotting factors such as fibrinogen , prothrombin , thromboplastin and many others. Serum includes all proteins not used in coagulation (clotting) and all the electrolytes , antibodies , antigens , hormones and any exogenous substances, such as drugs and microorganisms . | https://en.wikipedia.org/wiki/Serous_fluid |
Serpent's Wall ( Ukrainian : Змієві вали , romanized : Zmiievi valy ) is an ancient system of earthworks ( valla ) located in the middle Dnieper Ukraine (Naddniprianshchyna) [ 2 ] that stretch across primarily Kyiv Oblast , Ukraine . They seem to be similar in purpose and character to Trajan's Wall situated to the southwest in Bessarabia . The remaining ancient walls have a total length of 1,000 km [ 1 ] and constitute less than 20% of the original wall system. [ 2 ]
According to legend, the earthworks are the result of ancient events when a mythical hero ( bohatyr ), Kozmodemian (or Borysohlib), in order to slay the gargantuan dragon (serpent) Gornych, [ 3 ] harnessed it to a giant plow and furrowed the earth. [ 2 ] Gornych perished, leaving furrows, on both sides of which were immense banks of earth that became known as the Serpent's Wall. [ 2 ]
The ancient walls were built between the 2nd century BC and 7th century AD, according to carbon dating . There are three theories as to what peoples built the walls: either the Sarmatians against the Scythians , or the Goths of Oium against the Huns , or the Early East Slavs against the nomads of the southern steppes . In Slavic culture , the warlike nomads are often associated with the winged dragon , hence the name.
On the right bank of the Dnieper, between its tributaries the Teteriv and Ros , the remnants of the walls form six lines stretching from west to east. [ 2 ] One Serpent's Wall passed over the left bank of the Dnieper and its tributary the Sula . [ 2 ]
The 1974–85 explorations established that the Serpent's Wall is a remnant of wood-and-earth fortifications built at the end of the 10th and the first half of the 11th centuries, with a smaller part built in the 12th century, to protect middle Dnieper Ukraine and Kyiv from the Pechenegs and Cumans . [ 2 ] Later excavation, southeast of the historical town of Pereiaslav, brought to light ceramic materials dated to the third or fourth century and attributed to the Chernyakhov culture. Remains of a timber construction were also found in a trial excavation of 2019 near the village of Khotsky. While none of these findings are conclusive, they concur in suggesting a later date of construction than earlier theories. [ 1 ]
Due to military unrest in the region, specifically Russia's 2022 attempt to surround Kyiv, the damage to nearby sections of the Serpent's Wall is as yet impossible to assess. [ 1 ]
| https://en.wikipedia.org/wiki/Serpent's_Wall
A serpentine curve is a curve whose equation is of the form x 2 y + a 2 y − a b x = 0 {\displaystyle x^{2}y+a^{2}y-abx=0} .
Equivalently, it has a parametric representation x = a cot ⁡ t {\displaystyle x=a\cot t} , y = b sin ⁡ t cos ⁡ t {\displaystyle y=b\sin t\cos t} ,
or functional representation y = a b x / ( x 2 + a 2 ) {\displaystyle y={\frac {abx}{x^{2}+a^{2}}}} .
The curve has an inflection point at the origin. It has local extrema at x = ± a {\displaystyle x=\pm a} , with a maximum value of y = b / 2 {\displaystyle y=b/2} and a minimum value of y = − b / 2 {\displaystyle y=-b/2} .
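The stated extrema follow directly from the functional form y = a b x / ( x 2 + a 2 ) {\displaystyle y=abx/(x^{2}+a^{2})} by a short differentiation:

```latex
y'(x) \;=\; \frac{ab\,(x^{2}+a^{2}) - abx\cdot 2x}{(x^{2}+a^{2})^{2}}
      \;=\; \frac{ab\,(a^{2}-x^{2})}{(x^{2}+a^{2})^{2}},
\qquad
y'(\pm a) = 0,
\qquad
y(\pm a) = \frac{\pm a^{2}b}{2a^{2}} = \pm\frac{b}{2}.
```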
Solving the implicit equation for x {\displaystyle x} (as a quadratic in x {\displaystyle x} ), we get
x = a ( b ± b 2 − 4 y 2 ) 2 y {\displaystyle x={\frac {a\left(b\pm {\sqrt {b^{2}-4y^{2}}}\right)}{2y}}} ,
which is real precisely when | y | ≤ b / 2 {\displaystyle |y|\leq b/2} , consistent with the extrema above.
Serpentine curves were studied by L'Hôpital and Huygens , and named and classified by Newton .
| https://en.wikipedia.org/wiki/Serpentine_curve
Serratus is a large-scale viroinformatics platform for uncovering the total genetic diversity of Earth's virome . Originating with the goal of uncovering novel coronaviruses [ 1 ] that may have been incidentally sequenced by other researchers, the project expanded to encompass all RNA viruses : those which encode a viral RNA-dependent RNA polymerase (RdRp).
By the end of 2020, approximately 15,000 distinct RNA virus sequences were known from public databases, measured by the number of distinct RdRps (differing by more than 10% in amino acid sequence). Using a bioinformatics workflow optimized for large-scale cloud computing , the research team analyzed 5.7 million freely available sequencing datasets (20.4 petabytes of raw data) in the Sequence Read Archive (SRA) in only 11 days, at a computing cost of US$23,900. [ 2 ] This analysis yielded 132,000 novel viral RdRps, representing nearly an order of magnitude increase in the known genetic diversity of RNA viruses. [ 3 ]
Within the database, RNA viruses are classified according to their RdRp palmprint , [ 4 ] a type of molecular barcode . The palmprint can be used as a computationally efficient index for identifying which SRA sequencing runs contain a particular RNA virus. Such an index allows for targeted analysis of the raw sequencing datasets from which novel RNA viruses can be characterized. [ 5 ]
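A minimal sketch of the index idea: map each palmprint (here just a stand-in string; the run accessions are invented) to the set of SRA runs in which it was observed, so finding every dataset containing a given virus becomes a dictionary lookup rather than a petabyte-scale rescan:

```python
# Toy inverted index: palmprint barcode -> set of SRA run accessions.
# All identifiers below are hypothetical examples.

from collections import defaultdict

observations = [  # (SRA run accession, palmprint) pairs
    ("SRR0000001", "palmprint_A"),
    ("SRR0000002", "palmprint_B"),
    ("SRR0000003", "palmprint_A"),
]

index = defaultdict(set)
for run, palmprint in observations:
    index[palmprint].add(run)

print(sorted(index["palmprint_A"]))  # every run containing that virus
```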
All Serratus data are freely available under the INDSC release policy . | https://en.wikipedia.org/wiki/Serratus_(virology)
In mathematics , Serre's modularity conjecture , introduced by Jean-Pierre Serre ( 1975 , 1987 ), states that an odd, irreducible, two-dimensional Galois representation over a finite field arises from a modular form. A stronger version of this conjecture specifies the weight and level of the modular form. The conjecture in the level 1 case was proved by Chandrashekhar Khare in 2005, [ 1 ] and a proof of the full conjecture was completed jointly by Khare and Jean-Pierre Wintenberger in 2008. [ 2 ]
The conjecture concerns the absolute Galois group G Q {\displaystyle G_{\mathbb {Q} }} of the rational number field Q {\displaystyle \mathbb {Q} } .
Let ρ {\displaystyle \rho } be an absolutely irreducible , continuous, two-dimensional representation of G Q {\displaystyle G_{\mathbb {Q} }} over a finite field F = F ℓ r {\displaystyle F=\mathbb {F} _{\ell ^{r}}} .
Additionally, assume ρ {\displaystyle \rho } is odd, meaning the image of complex conjugation has determinant -1.
To any normalized modular eigenform f = q + a 2 q 2 + a 3 q 3 + ⋯ {\displaystyle f=q+a_{2}q^{2}+a_{3}q^{3}+\cdots }
of level N = N ( ρ ) {\displaystyle N=N(\rho )} , weight k = k ( ρ ) {\displaystyle k=k(\rho )} , and some Nebentype character χ : Z / N Z → C ∗ {\displaystyle \chi :\mathbb {Z} /N\mathbb {Z} \rightarrow \mathbb {C} ^{*}} ,
a theorem due to Shimura, Deligne, and Serre-Deligne attaches to f {\displaystyle f} a representation ρ f : G Q → G L 2 ( O ) , {\displaystyle \rho _{f}:G_{\mathbb {Q} }\rightarrow \mathrm {GL} _{2}({\mathcal {O}}),}
where O {\displaystyle {\mathcal {O}}} is the ring of integers in a finite extension of Q ℓ {\displaystyle \mathbb {Q} _{\ell }} . This representation is characterized by the condition that for all prime numbers p {\displaystyle p} , coprime to N ℓ {\displaystyle N\ell } we have Tr ⁡ ( ρ f ( Frob p ) ) = a p {\displaystyle \operatorname {Tr} (\rho _{f}({\text{Frob}}_{p}))=a_{p}}
and det ( ρ f ( Frob p ) ) = p k − 1 χ ( p ) . {\displaystyle \det(\rho _{f}({\text{Frob}}_{p}))=p^{k-1}\chi (p).}
Reducing this representation modulo the maximal ideal of O {\displaystyle {\mathcal {O}}} gives a mod ℓ {\displaystyle \ell } representation ρ f ¯ {\displaystyle {\overline {\rho _{f}}}} of G Q {\displaystyle G_{\mathbb {Q} }} .
Serre's conjecture asserts that for any representation ρ {\displaystyle \rho } as above, there is a modular eigenform f {\displaystyle f} such that ρ f ¯ ≅ ρ {\displaystyle {\overline {\rho _{f}}}\cong \rho } .
The level and weight of the conjectural form f {\displaystyle f} are explicitly conjectured in Serre's article. In addition, he derives a number of results from this conjecture, among them Fermat's Last Theorem and the now-proven Taniyama–Weil (or Taniyama–Shimura) conjecture, now known as the modularity theorem (although this implies Fermat's Last Theorem, Serre proves it directly from his conjecture).
The strong form of Serre's conjecture describes the level and weight of the modular form.
The optimal level is the Artin conductor of the representation, with the power of l {\displaystyle l} removed.
A proof of the level 1 and small weight cases of the conjecture was obtained in 2004 by Chandrashekhar Khare and Jean-Pierre Wintenberger , [ 3 ] and by Luis Dieulefait , [ 4 ] independently.
In 2005, Chandrashekhar Khare obtained a proof of the level 1 case of Serre's conjecture, [ 5 ] and in 2008 a proof of the full conjecture in collaboration with Jean-Pierre Wintenberger. [ 6 ]
In abstract algebra, specifically the theory of Lie algebras, Serre's theorem states: given a (finite reduced) root system Φ, there exists a finite-dimensional semisimple Lie algebra whose root system is the given Φ.
Given a root system Φ in a Euclidean space with an inner product (·, ·), the pairing ⟨β, α⟩ = 2(α, β)/(α, α), and a fixed base {α_1, …, α_n}, there exists a Lie algebra g generated by the 3n elements e_i, f_i, h_i (for 1 ≤ i ≤ n) and relations:
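The displayed relations did not survive extraction. In one standard convention (e.g. the presentations of Serre and Humphreys, with the pairing ⟨β, α⟩ = 2(α, β)/(α, α) as above; sign and index conventions for the Cartan integers vary between sources), they are the Serre relations:

```latex
[h_i, h_j] = 0, \qquad [e_i, f_j] = \delta_{ij}\, h_i,
\qquad [h_i, e_j] = \langle \alpha_j, \alpha_i \rangle\, e_j,
\qquad [h_i, f_j] = -\langle \alpha_j, \alpha_i \rangle\, f_j,
% and, for i \neq j, the "Serre relations" proper:
(\operatorname{ad} e_i)^{\,1-\langle \alpha_j, \alpha_i \rangle}(e_j) = 0,
\qquad
(\operatorname{ad} f_i)^{\,1-\langle \alpha_j, \alpha_i \rangle}(f_j) = 0 .
```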
We also have that g is a finite-dimensional semisimple Lie algebra with the Cartan subalgebra h = ⊕_i h_i (the span of the h_i), and that the root system of g is Φ.
The square matrix [⟨α_i, α_j⟩]_{1 ≤ i, j ≤ n} is called the Cartan matrix. Thus, with this notion, the theorem states that, given a Cartan matrix A, there exists a unique (up to isomorphism) finite-dimensional semisimple Lie algebra g(A) associated to A. The construction of a semisimple Lie algebra from a Cartan matrix can be generalized by weakening the definition of a Cartan matrix. The (generally infinite-dimensional) Lie algebra associated to a generalized Cartan matrix is called a Kac–Moody algebra.
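As a concrete illustration of the Cartan matrix [⟨α_i, α_j⟩], here is a small sketch that computes it from an explicit choice of simple roots. The coordinates chosen for the type A2 base below are an assumed standard realization (two unit vectors at 120°), not taken from the article:

```python
# Compute the Cartan matrix [<a_i, a_j>]_{i,j} from a base of simple roots,
# using the pairing <b, a> = 2(a, b)/(a, a) defined in the text.

def cartan_matrix(simple_roots):
    """Entry (i, j) is <alpha_i, alpha_j> = 2(alpha_j, alpha_i)/(alpha_j, alpha_j)."""
    def inner(a, b):
        return sum(x * y for x, y in zip(a, b))
    n = len(simple_roots)
    return [
        [round(2 * inner(simple_roots[i], simple_roots[j])
               / inner(simple_roots[j], simple_roots[j]))
         for j in range(n)]
        for i in range(n)
    ]

# Assumed realization of the A2 base: unit vectors at an angle of 120 degrees.
a2_base = [(1.0, 0.0), (-0.5, 3 ** 0.5 / 2)]
print(cartan_matrix(a2_base))  # [[2, -1], [-1, 2]]
```

For roots of unequal length the matrix is no longer symmetric; for instance, the base (1, 0), (−1, 1) of type B2 yields [[2, −1], [−2, 2]].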
The proof here is taken from (Serre 1966, Ch. VI, Appendix) and (Kac 1990, Theorem 1.2).
Let a_ij = ⟨α_i, α_j⟩ and then let g̃ be the Lie algebra generated by (1) the generators e_i, f_i, h_i and (2) the relations:
Let h be the free vector space spanned by the h_i, V the free vector space with a basis v_1, …, v_n, and T = ⊕_{l=0}^∞ V^⊗l the tensor algebra over it. Consider the following representation of a Lie algebra:
given by: for a ∈ T, h ∈ h, λ ∈ h*,
It is not trivial that this is indeed a well-defined representation, and this has to be checked by hand. From this representation one deduces the following properties: let ñ_+ (resp. ñ_−) be the subalgebras of g̃ generated by the e_i's (resp. the f_i's).
For each ideal i of g̃, one can easily show that i is homogeneous with respect to the grading given by the root space decomposition; i.e., i = ⊕_α (g̃_α ∩ i). It follows that the sum of ideals intersecting h trivially itself intersects h trivially. Let r be the sum of all ideals intersecting h trivially. Then there is a vector space decomposition: r = (r ∩ ñ_−) ⊕ (r ∩ ñ_+). In fact, it is a g̃-module decomposition. Let
Then it contains a copy of h, which is identified with h, and
where n_+ (resp. n_−) are the subalgebras generated by the images of the e_i's (resp. the images of the f_i's).
One then shows: (1) the derived algebra [g, g] here is the same as the algebra g in the lead, (2) it is finite-dimensional and semisimple, and (3) [g, g] = g.
| https://en.wikipedia.org/wiki/Serre's_theorem_on_a_semisimple_Lie_algebra
In algebraic geometry, a branch of mathematics, Serre duality is a duality for the coherent sheaf cohomology of algebraic varieties, proved by Jean-Pierre Serre. The basic version applies to vector bundles on a smooth projective variety, but Alexander Grothendieck found wide generalizations, for example to singular varieties. On an n-dimensional variety, the theorem says that a cohomology group H^i is the dual space of another one, H^{n−i}. Serre duality is the analog for coherent sheaf cohomology of Poincaré duality in topology, with the canonical line bundle replacing the orientation sheaf.
The Serre duality theorem is also true in complex geometry more generally, for compact complex manifolds that are not necessarily projective complex algebraic varieties . In this setting, the Serre duality theorem is an application of Hodge theory for Dolbeault cohomology , and may be seen as a result in the theory of elliptic operators .
These two different interpretations of Serre duality coincide for non-singular projective complex algebraic varieties, by an application of Dolbeault's theorem relating sheaf cohomology to Dolbeault cohomology.
Let X be a smooth variety of dimension n over a field k. Define the canonical line bundle K_X to be the bundle of n-forms on X, the top exterior power of the cotangent bundle:
Suppose in addition that X is proper (for example, projective) over k. Then Serre duality says: for an algebraic vector bundle E on X and an integer i, there is a natural isomorphism:
of finite-dimensional k-vector spaces. Here ⊗ denotes the tensor product of vector bundles. It follows that the dimensions of the two cohomology groups are equal:
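Written out (the displayed isomorphism was lost in extraction), the basic form of Serre duality reads, with E^∨ denoting the dual bundle:

```latex
H^i(X, E) \;\cong\; H^{\,n-i}\!\left(X,\; K_X \otimes E^{\vee}\right)^{*},
\qquad\text{hence}\qquad
h^i(X, E) \;=\; h^{\,n-i}\!\left(X,\; K_X \otimes E^{\vee}\right).
```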
As in Poincaré duality, the isomorphism in Serre duality comes from the cup product in sheaf cohomology. Namely, the composition of the cup product with a natural trace map on H^n(X, K_X) is a perfect pairing:
The trace map is the analog for coherent sheaf cohomology of integration in de Rham cohomology. [ 1 ]
Serre also proved the same duality statement for X a compact complex manifold and E a holomorphic vector bundle. [ 2 ] Here, the Serre duality theorem is a consequence of Hodge theory. Namely, on a compact complex manifold X equipped with a Riemannian metric, there is a Hodge star operator:
where dim_C X = n. Additionally, since X is complex, there is a splitting of the complex differential forms into forms of type (p, q). The Hodge star operator (extended complex-linearly to complex-valued differential forms) interacts with this grading as:
Notice that the holomorphic and anti-holomorphic indices have switched places. There is a conjugation on complex differential forms which interchanges forms of type (p, q) and (q, p), and if one defines the conjugate-linear Hodge star operator by ⋆̄ω = ⋆ω̄, then we have:
Using the conjugate-linear Hodge star, one may define a Hermitian L²-inner product on complex differential forms, by:
where now α ∧ ⋆̄β is an (n, n)-form, and in particular a complex-valued 2n-form, and can therefore be integrated on X with respect to its canonical orientation. Furthermore, suppose (E, h) is a Hermitian holomorphic vector bundle. Then the Hermitian metric h gives a conjugate-linear isomorphism between E and its dual vector bundle, say τ : E → E*. Defining ⋆̄_E(ω ⊗ s) = ⋆̄ω ⊗ τ(s), one obtains an isomorphism:
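The L²-inner product mentioned above, whose displayed formula did not survive extraction, is the integral pairing:

```latex
\langle \alpha, \beta \rangle_{L^2}
  \;=\; \int_X \alpha \wedge \bar{\star}\,\beta ,
\qquad \alpha, \beta \in \Omega^{p,q}(X).
```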
where Ω^{p,q}(X, E) = Ω^{p,q}(X) ⊗ Γ(E) consists of smooth E-valued complex differential forms. Using the pairing between E and E* given by τ and h, one can therefore define a Hermitian L²-inner product on such E-valued forms by:
where here ∧_h means the wedge product of differential forms combined with the pairing between E and E* given by h.
The Hodge theorem for Dolbeault cohomology asserts that if we define:
where ∂̄_E is the Dolbeault operator of E and ∂̄_E* is its formal adjoint with respect to the inner product, then:
On the left is Dolbeault cohomology, and on the right is the vector space of harmonic E-valued differential forms defined by:
Using this description, the Serre duality theorem can be stated as follows: The isomorphism ⋆̄_E induces a complex-linear isomorphism:
This can be easily proved using the Hodge theory above. Namely, if [α] is a cohomology class in H^{p,q}(X, E) with unique harmonic representative α, then:
with equality if and only if α = 0. In particular, the complex linear pairing:
between the harmonic spaces of E-valued (p, q)-forms and E*-valued (n−p, n−q)-forms is non-degenerate, and induces the isomorphism in the Serre duality theorem.
The statement of Serre duality in the algebraic setting may be recovered by taking p = 0 and applying Dolbeault's theorem, which states that:
where on the left is Dolbeault cohomology and on the right sheaf cohomology, where Ω^p denotes the sheaf of holomorphic (p, 0)-forms. In particular, we obtain:
where we have used that the sheaf of holomorphic (n, 0)-forms is just the canonical bundle of X.
A fundamental application of Serre duality is to algebraic curves. (Over the complex numbers, it is equivalent to consider compact Riemann surfaces.) For a line bundle L on a smooth projective curve X over a field k, the only possibly nonzero cohomology groups are H^0(X, L) and H^1(X, L). Serre duality describes the H^1 group in terms of an H^0 group (for a different line bundle). [ 3 ] That is more concrete, since H^0 of a line bundle is simply its space of sections.
Serre duality is especially relevant to the Riemann–Roch theorem for curves. For a line bundle L of degree d on a curve X of genus g, the Riemann–Roch theorem says that:
Using Serre duality, this can be restated in more elementary terms:
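Spelled out (the displayed formulas were lost in extraction), Riemann–Roch and its Serre-dual restatement read:

```latex
h^0(X, L) - h^1(X, L) \;=\; d - g + 1,
% and, since Serre duality gives h^1(X, L) = h^0(X, K_X \otimes L^{-1}),
h^0(X, L) - h^0\!\left(X,\, K_X \otimes L^{-1}\right) \;=\; d - g + 1 .
```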
The latter statement (expressed in terms of divisors ) is in fact the original version of the theorem from the 19th century. This is the main tool used to analyze how a given curve can be embedded into projective space and hence to classify algebraic curves.
Example: Every global section of a line bundle of negative degree is zero. Moreover, the degree of the canonical bundle is 2g − 2. Therefore, Riemann–Roch implies that for a line bundle L of degree d > 2g − 2, h^0(X, L) is equal to d − g + 1. When the genus g is at least 2, it follows by Serre duality that h^1(X, TX) = h^0(X, K_X^⊗2) = 3g − 3. Here H^1(X, TX) is the first-order deformation space of X. This is the basic calculation needed to show that the moduli space of curves of genus g has dimension 3g − 3.
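The numerology in the example above can be packaged into a short sketch; the function names are ours, and the formulas are only valid in the stated degree and genus ranges:

```python
# Dimension counts from Riemann-Roch plus Serre duality on a smooth
# projective curve of genus g, following the example in the text.

def h0_large_degree(d, g):
    """h^0 of a degree-d line bundle when d > 2g - 2 (so h^1 = 0 by duality)."""
    if d <= 2 * g - 2:
        raise ValueError("formula only determines h^0 when d > 2g - 2")
    return d - g + 1

def moduli_dimension(g):
    """Dimension of the moduli space of genus-g curves (g >= 2): h^0(K^2) = 3g - 3."""
    if g < 2:
        raise ValueError("formula assumes genus at least 2")
    # deg(K^2) = 2(2g - 2) = 4g - 4 > 2g - 2, so the formula above applies.
    return h0_large_degree(4 * g - 4, g)

print(moduli_dimension(2), moduli_dimension(3))  # 3 6
```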
Another formulation of Serre duality holds for all coherent sheaves , not just vector bundles. As a first step in generalizing Serre duality, Grothendieck showed that this version works for schemes with mild singularities, Cohen–Macaulay schemes , not just smooth schemes.
Namely, for a Cohen–Macaulay scheme X of pure dimension n over a field k, Grothendieck defined a coherent sheaf ω_X on X called the dualizing sheaf. (Some authors call this sheaf K_X.) Suppose in addition that X is proper over k. For a coherent sheaf E on X and an integer i, Serre duality says that there is a natural isomorphism:
of finite-dimensional k-vector spaces. [ 4 ] Here the Ext group is taken in the abelian category of O_X-modules. This includes the previous statement, since Ext^i_X(E, ω_X) is isomorphic to H^i(X, E* ⊗ ω_X) when E is a vector bundle.
In order to use this result, one has to determine the dualizing sheaf explicitly, at least in special cases. When X is smooth over k, ω_X is the canonical line bundle K_X defined above. More generally, if X is a Cohen–Macaulay subscheme of codimension r in a smooth scheme Y over k, then the dualizing sheaf can be described as an Ext sheaf: [ 5 ]
When X is a local complete intersection of codimension r in a smooth scheme Y, there is a more elementary description: the normal bundle of X in Y is a vector bundle of rank r, and the dualizing sheaf of X is given by: [ 6 ]
In this case, X is a Cohen–Macaulay scheme with ω_X a line bundle, which says that X is Gorenstein.
Example: Let X be a complete intersection in projective space P^n over a field k, defined by homogeneous polynomials f_1, …, f_r of degrees d_1, …, d_r. (To say that this is a complete intersection means that X has dimension n − r.) There are line bundles O(d) on P^n for integers d, with the property that homogeneous polynomials of degree d can be viewed as sections of O(d). Then the dualizing sheaf of X is the line bundle:
by the adjunction formula. For example, the dualizing sheaf of a plane curve X of degree d is O(d − 3)|_X.
In particular, we can compute the number of complex deformations, equal to dim(H^1(X, TX)), for a quintic threefold in P^4, a Calabi–Yau variety, using Serre duality. Since the Calabi–Yau property ensures K_X ≅ O_X, Serre duality shows that H^1(X, TX) ≅ H^2(X, O_X ⊗ Ω_X) ≅ H^2(X, Ω_X), so the number of complex moduli is equal to h^{2,1} in the Hodge diamond. Of course, the last statement depends on the Bogomolov–Tian–Todorov theorem, which states that every deformation of a Calabi–Yau is unobstructed.
Grothendieck's theory of coherent duality is a broad generalization of Serre duality, using the language of derived categories. For any scheme X of finite type over a field k, there is an object ω•_X of the bounded derived category of coherent sheaves on X, D^b_coh(X), called the dualizing complex of X over k. Formally, ω•_X is the exceptional inverse image f^! O_Y, where f is the given morphism X → Y = Spec(k). When X is Cohen–Macaulay of pure dimension n, ω•_X is ω_X[n]; that is, it is the dualizing sheaf discussed above, viewed as a complex in (cohomological) degree −n. In particular, when X is smooth over k, ω•_X is the canonical line bundle placed in degree −n.
Using the dualizing complex, Serre duality generalizes to any proper scheme X over k. Namely, there is a natural isomorphism of finite-dimensional k-vector spaces:
for any object E in D^b_coh(X). [ 7 ]
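One common way to write this displayed isomorphism (lost in extraction) is via Ext groups into the dualizing complex; for X smooth of dimension n, substituting ω•_X = K_X[n] recovers the Cohen–Macaulay statement above:

```latex
\operatorname{Ext}^{-i}_X\!\left(E, \omega_X^{\bullet}\right)
  \;\cong\; H^i(X, E)^{*},
% smooth case, with omega_X^bullet = K_X[n]:
\operatorname{Ext}^{\,n-i}_X\!\left(E, K_X\right) \;\cong\; H^i(X, E)^{*}.
```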
More generally, for a proper scheme X over k, an object E in D^b_coh(X), and F a perfect complex in D_perf(X), one has the elegant statement:
Here the tensor product means the derived tensor product, as is natural in derived categories. (To compare with previous formulations, note that Ext^i_X(E, ω_X) can be viewed as Hom_X(E, ω_X[i]).) When X is also smooth over k, every object in D^b_coh(X) is a perfect complex, and so this duality applies to all E and F in D^b_coh(X). The statement above is then summarized by saying that F ↦ F ⊗ ω•_X is a Serre functor on D^b_coh(X) for X smooth and proper over k. [ 8 ]
Serre duality holds more generally for proper algebraic spaces over a field. [ 9 ] | https://en.wikipedia.org/wiki/Serre_duality
The Serum Metabolome database is a free web database about small molecule metabolites found in human serum and their concentration values. The database includes chemical data, clinical data and molecular/biochemistry data from literature and experiment. This database also references many other databases, such as KEGG , PubChem , MetaCyc , ChEBI , PDB , Swiss-Prot , GenBank , and Human Metabolome Database ( HMDB ).
The Serum Metabolome database is maintained by David S. Wishart .
The Serum Metabolome database protocol is available via its website. [ 1 ]
| https://en.wikipedia.org/wiki/Serum_Metabolome_Database
Chloride is an anion in the human body needed for metabolism (the process of turning food into energy). [ 1 ] It also helps keep the body's acid-base balance. The amount of serum chloride is carefully controlled by the kidneys . [ 2 ]
Chloride ions have important physiological roles. For instance, in the central nervous system , the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl − into specific neurons. Also, the chloride-bicarbonate exchanger biological transport protein relies on the chloride ion to increase the blood 's capacity of carbon dioxide , in the form of the bicarbonate ion; this is the mechanism underpinning the chloride shift occurring as the blood passes through oxygen-consuming capillary beds.
The normal blood reference range of chloride for adults in most labs is 96 to 106 milliequivalents (mEq) per liter. The normal range may vary slightly from lab to lab. Normal ranges are usually shown next to results in the lab report. A diagnostic test may use a chloridometer to determine the serum chloride level.
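A minimal sketch of how such a reference range is applied to a result. The cutoffs below are the adult range quoted above; as the text notes, real laboratories substitute their own validated values:

```python
# Flag a serum chloride result against the adult reference range
# of 96-106 mEq/L quoted in the text (lab-specific ranges vary).

def flag_chloride(value_meq_l, low=96.0, high=106.0):
    """Return 'low', 'normal', or 'high' for a serum chloride result."""
    if value_meq_l < low:
        return "low"
    if value_meq_l > high:
        return "high"
    return "normal"

print(flag_chloride(101.0))  # normal
```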
The North American Dietary Reference Intake recommends a daily intake of between 2300 and 3600 mg/day for 25-year-old males.
| https://en.wikipedia.org/wiki/Serum_chloride
Serum iron is a medical laboratory test that measures the amount of circulating iron that is bound to transferrin and freely circulates in the blood. Clinicians order this laboratory test when they are concerned about iron deficiency, which can cause anemia and other problems. 65% of the iron in the body is bound up in hemoglobin molecules in red blood cells. About 4% is bound up in myoglobin molecules. Around 30% of the iron in the body is stored as ferritin or hemosiderin in the spleen, the bone marrow and the liver. Small amounts of iron can be found in other molecules in cells throughout the body. None of this iron is directly accessible by testing the serum. [ citation needed ]
However, some iron is circulating in the serum. Transferrin is a molecule produced by the liver that binds one or two iron(III) ions , i.e. ferric iron, Fe 3+ ; transferrin is essential if stored iron is to be moved and used. Most of the time, about 30% of the available sites on the transferrin molecule are filled. The test for serum iron uses blood drawn from veins to measure the iron ions that are bound to transferrin and circulating in the blood. This test should be done after 12 hours of fasting. The extent to which sites on transferrin molecules are filled by iron ions can be another helpful clinical indicator, known as percent transferrin saturation . Another lab test saturates the sample to measure the total amount of transferrin; this test is called total iron-binding capacity (TIBC). These three tests are generally done at the same time, and taken together are an important part of the diagnostic process for conditions such as anemia , iron deficiency anemia , anemia of chronic disease and haemochromatosis . [ citation needed ]
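The percent transferrin saturation mentioned above is a simple ratio of the two measurements. A sketch, with hypothetical example values:

```python
# Percent transferrin saturation = serum iron / TIBC * 100.
# Both inputs must be in the same units (e.g. ug/dL); values are illustrative.

def transferrin_saturation(serum_iron, tibc):
    """Percent of transferrin iron-binding sites occupied by iron."""
    if tibc <= 0:
        raise ValueError("TIBC must be positive")
    return 100.0 * serum_iron / tibc

print(transferrin_saturation(100, 400))  # 25.0
```

A result near 30% corresponds to the typical site occupancy quoted above.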
Normal reference ranges are:
μg/dL = micrograms per deciliter .
Laboratories often use different units, and "normal" may vary by population and the lab techniques used; consult the reporting laboratory's reference values when interpreting a specific test result. | https://en.wikipedia.org/wiki/Serum_iron
Serum protein electrophoresis ( SPEP or SPE ) is a laboratory test that examines specific proteins in the blood called globulins . [ 1 ] The most common indications for a serum protein electrophoresis test are to diagnose or monitor multiple myeloma , a monoclonal gammopathy of uncertain significance (MGUS), or further investigate a discrepancy between a low albumin and a relatively high total protein. Unexplained bone pain, anemia, proteinuria , chronic kidney disease , and hypercalcemia are also signs of multiple myeloma, and indications for SPE. [ 2 ] Blood must first be collected, usually into an airtight vial or syringe . Electrophoresis is a laboratory technique in which the blood serum (the fluid portion of the blood after the blood has clotted) is applied to either an acetate membrane soaked in a liquid buffer, [ 3 ] or to a buffered agarose gel matrix, or into liquid in a capillary tube, and exposed to an electric current to separate the serum protein components into five major fractions by size and electrical charge: serum albumin , alpha-1 globulins , alpha-2 globulins , beta 1 and 2 globulins , and gamma globulins .
Proteins are separated by both electrical forces and electroendosmotic forces. The net charge on a protein is based on the sum charge of its amino acids and the pH of the buffer. Proteins are applied to a solid matrix such as an agarose gel, or a cellulose acetate membrane in a liquid buffer, and electric current is applied. Proteins with a negative charge will migrate towards the positively charged anode. Albumin has the most negative charge and will migrate furthest towards the anode. Endosmotic flow is the movement of liquid towards the cathode, which causes proteins with a weaker charge to move backwards from the application site. Gamma proteins are primarily separated by endosmotic forces. [ 4 ] The drawing of the electrophoretic bands provided by the laboratory may be difficult to remember, and medical students, residents, nurses, and non-specialized medical practitioners may find visual mnemonics useful to recall the five main bands and the shape of normal serum electrophoresis. [ 5 ]
In capillary electrophoresis, there is no solid matrix. Proteins are separated primarily by strong electroendosmotic forces. The sample is injected into a capillary with a negative surface charge. A high current is applied, and negatively charged proteins such as albumin try to move towards the anode. Liquid buffer flows towards the cathode, and drags proteins with a weaker charge. [ 6 ] [ 7 ]
Albumin is the major fraction in a normal SPEP. A fall of 30% is necessary before the decrease shows on electrophoresis. Usually a single band is seen. Heterozygous individuals may produce bisalbuminemia – two equally staining bands, the product of two genes. Some variants give rise to a wide band or two bands of unequal intensity but none of these variants is associated with disease. [ 8 ] Increased anodic mobility results from the binding of bilirubin , nonesterified fatty acids , penicillin and acetylsalicylic acid , and occasionally from tryptic digestion in acute pancreatitis . [ citation needed ]
Absence of albumin, known as analbuminaemia , is rare. A decreased level of albumin, however, is common in many diseases, including liver disease , malnutrition , malabsorption, protein-losing nephropathy and enteropathy. [ 9 ]
Even staining in this zone is due to alpha-1 lipoprotein ( high density lipoprotein – HDL). A decrease occurs in severe inflammation, acute hepatitis, and cirrhosis. Nephrotic syndrome can also lead to a decrease in the albumin level, due to its loss in the urine through a damaged, leaky glomerulus. An increase appears in severe alcoholics and in women during pregnancy and puberty. [ citation needed ]
The high levels of AFP that may occur in hepatocellular carcinoma may result in a sharp band between the albumin and the alpha-1 zone. [ citation needed ]
Orosomucoid and antitrypsin migrate together, but orosomucoid stains poorly, so alpha-1 antitrypsin (AAT) constitutes most of the alpha-1 band. Alpha-1 antitrypsin has an SH group, and thiol compounds may be bound to the protein, altering its mobility. A decreased band is seen in the deficiency state. It is decreased in the nephrotic syndrome, [ 10 ] and absence could indicate possible alpha 1-antitrypsin deficiency. This eventually leads to emphysema from unregulated neutrophil elastase activity in the lung tissue. The alpha-1 fraction does not disappear in alpha 1-antitrypsin deficiency, however, because other proteins, including alpha-lipoprotein and orosomucoid, also migrate there. As a positive acute phase reactant, AAT is increased in acute inflammation. [ citation needed ]
Bence Jones protein may bind to and retard the alpha-1 band. [ citation needed ]
Two faint bands may be seen representing alpha 1-antichymotrypsin and vitamin D binding protein . These bands fuse and intensify in early inflammation due to an increase in alpha 1-antichymotrypsin, an acute phase protein . [ citation needed ]
This zone consists principally of alpha-2 macroglobulin (AMG or A2M) and haptoglobin . There are typically low levels in haemolytic anaemia (haptoglobin is a suicide molecule which binds with free haemoglobin released from red blood cells and these complexes are rapidly removed by phagocytes ). Haptoglobin is raised as part of the acute phase response, resulting in a typical elevation in the alpha-2 zone during inflammation. A normal alpha-2 and an elevated alpha-1 zone is a typical pattern in hepatic metastasis and cirrhosis.
Haptoglobin/haemoglobin complexes migrate more cathodally than haptoglobin as seen in the alpha-2 – beta interzone. This is typically seen as a broadening of the alpha-2 zone.
Alpha-2 macroglobulin may be elevated in children and the elderly. This is seen as a sharp front to the alpha-2 band. AMG is markedly raised (10-fold increase or greater) in association with glomerular protein loss, as in nephrotic syndrome. Due to its large size, AMG cannot pass through glomeruli, while other lower-molecular-weight proteins are lost. Enhanced synthesis of AMG accounts for its absolute increase in nephrotic syndrome. Increased AMG is also noted in rats with no albumin, indicating that this is a response to low albumin rather than to nephrotic syndrome itself. [ 11 ]
AMG is mildly elevated early in the course of diabetic nephropathy . [ citation needed ]
Cold insoluble globulin forms a band here which is not seen in plasma because it is precipitated by heparin . There are low levels in inflammation and high levels in pregnancy. [ citation needed ]
Beta lipoprotein forms an irregular crenated band in this zone. High levels are seen in type II hypercholesterolaemia , hypertriglyceridemia , and in the nephrotic syndrome.
Transferrin and beta-lipoprotein ( LDL ) comprise the beta-1 band. Increased beta-1 protein due to an increased level of free transferrin is typical of iron deficiency anemia, pregnancy, and oestrogen therapy. Increased beta-1 protein due to LDL elevation occurs in hypercholesterolemia. Decreased beta-1 protein occurs in acute or chronic inflammation. [ citation needed ]
Beta-2 comprises C3 ( complement protein 3). It is raised in the acute phase response. Depression of C3 occurs in autoimmune disorders as the complement system is activated and the C3 becomes bound to immune complexes and removed from serum. Fibrinogen, a beta-2 protein, is found in normal plasma but absent in normal serum. Occasionally, blood drawn from heparinized patients does not fully clot, resulting in a visible fibrinogen band between the beta and gamma globulins. [ citation needed ]
C-reactive protein is found in between the beta and gamma zones, producing beta/gamma fusion. IgA has the most anodal mobility and typically migrates in the region between the beta and gamma zones, also causing a beta/gamma fusion in patients with cirrhosis, respiratory infection, skin disease, or rheumatoid arthritis (increased IgA). Fibrinogen from plasma samples will likewise be seen in the beta–gamma region. [ citation needed ]
The immunoglobulins or antibodies are generally the only proteins present in the normal gamma region. Of note, any protein migrating in the gamma region will be stained and appear on the gel, which may include protein contaminants, artifacts, or certain medications. Depending on whether an agarose or capillary method is used, interferences vary. Immunoglobulins consist of heavy chains (μ, δ, γ, α, and ε) and light chains (κ and λ). A normal gamma zone should appear as a smooth 'blush', or smear, with no asymmetry or sharp peaks. [ 12 ] The gamma globulins may be elevated ( hypergammaglobulinemia ), decreased ( hypogammaglobulinaemia ), or have an abnormal peak or peaks. Note that immunoglobulins may also be found in other zones; IgA typically migrates in the beta-gamma zone, and in particular, pathogenic immunoglobulins may migrate anywhere, including the alpha regions. [ citation needed ]
Hypogammaglobulinaemia is easily identifiable as a "slump" or decrease in the gamma zone. It is normal in infants, and is found in patients with X-linked agammaglobulinemia . IgA deficiency occurs in about 1 in 500 of the population and may be suggested by pallor in the gamma zone. Of note, hypogammaglobulinaemia may also be seen in the context of MGUS or multiple myeloma. [ citation needed ]
If the gamma zone shows an increase, the first step in interpretation is to establish whether the region is narrow or wide. A broad, "swell-like" (wide) elevation indicates polyclonal immunoglobulin production. If the zone is elevated in an asymmetric manner, or with one or more narrow peaks or "spikes", it could indicate clonal production of one or more immunoglobulins. [ 13 ]
Polyclonal gammopathy is indicated by a "swell-like" elevation in the gamma zone, which typically indicates a non-neoplastic condition (although it is not exclusive to non-neoplastic conditions). The most common causes of polyclonal hypergammaglobulinaemia detected by electrophoresis are severe infection , chronic liver disease, rheumatoid arthritis, systemic lupus erythematosus and other connective tissue diseases. [ citation needed ]
A narrow spike is suggestive of a monoclonal gammopathy, also known as a restricted band or "M-spike". To confirm that the restricted band is an immunoglobulin, follow-up testing with immunofixation , or immunodisplacement/immunosubtraction (capillary methods), is performed. Therapeutic monoclonal antibodies (mAb) also migrate in this region and may be misinterpreted as a monoclonal gammopathy; they too may be identified by immunofixation or immunodisplacement/immunosubtraction, as they are structurally comparable to human immunoglobulins. [ 14 ] The most common cause of a restricted band is MGUS (monoclonal gammopathy of undetermined significance), which, although a necessary precursor, only rarely progresses to multiple myeloma (on average, 1% per year). [ 15 ] Typically, a monoclonal gammopathy is malignant or clonal in origin, myeloma being the most common cause of IgA and IgG spikes; chronic lymphocytic leukaemia and lymphosarcoma are not uncommon and usually give rise to IgM paraproteins . Note that up to 8% of healthy geriatric patients may have a monoclonal spike. [ 16 ] Waldenström's macroglobulinaemia (IgM), amyloidosis, plasma cell leukemia and solitary plasmacytomas also produce an M-spike.
Oligoclonal gammopathy is indicated by one or more discrete clones. [ citation needed ]
Lysozyme may be seen as a band cathodal to gamma in myelomonocytic leukaemia in which it is released from tumour cells. [ citation needed ] | https://en.wikipedia.org/wiki/Serum_protein_electrophoresis |
Server-side request forgery ( SSRF ) is a type of computer security exploit where an attacker abuses the functionality of a server causing it to access or manipulate information in the realm of that server that would otherwise not be directly accessible to the attacker. [ 1 ] [ 2 ]
Similar to cross-site request forgery , which utilizes a web client (for example, a web browser) within the domain as a proxy for attacks, an SSRF attack utilizes a vulnerable server within the domain as a proxy .
If a parameter of a URL is vulnerable to this attack, an attacker may be able to devise ways to interact with the server directly (via localhost) or with backend servers that are not accessible to external users. In this way, an attacker can potentially scan the entire internal network and retrieve sensitive information.
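As an illustrative sketch (not from the source article), a server-side "fetch this URL" endpoint can reduce its SSRF exposure by resolving the requested hostname and rejecting private, loopback, or link-local addresses before fetching. The function name and policy below are assumptions for illustration; note that a resolve-then-fetch check remains subject to DNS-rebinding races, so production systems typically also pin the resolved address.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url):
    """Naive SSRF guard: allow only http(s) URLs that resolve exclusively
    to public IP addresses. Illustrative only -- not a complete defense."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the hostname; an attacker may point a DNS name at 127.0.0.1.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

# Loopback and non-http schemes are rejected; a public address passes.
print(is_safe_url("http://127.0.0.1/admin"))  # -> False
print(is_safe_url("ftp://example.com/file"))  # -> False
print(is_safe_url("http://8.8.8.8/"))         # -> True
```

An allow-list of known-good hosts is stricter than this deny-list of address ranges and is generally preferred where feasible.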
In a basic (non-blind) SSRF attack, the response is displayed to the attacker: the server fetches the URL requested by the attacker and sends the response back.
In a blind SSRF attack, the response is not sent back to the attacker, who must therefore devise indirect ways to confirm the vulnerability.
This computer security article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Server-side_request_forgery |
ServerNet is a switched fabric communications link primarily used in proprietary computers made by Tandem Computers , Compaq , and HP .
Its features include good scalability , clean fault containment, error detection and failover . The ServerNet architecture specification defines a connection between nodes, either processor or high performance I/O nodes such as storage devices.
Tandem Computers developed the original ServerNet architecture and protocols for use in its own proprietary computer systems starting in 1992, and released the first ServerNet systems in 1995. Early attempts to license the technology and interface chips to other companies failed, due in part to a disconnect between the culture of selling complete hardware/software/middleware computer systems and that needed for selling and supporting chips and licensing technology. A follow-on development effort ported the Virtual Interface Architecture to ServerNet, with PCI interface boards connecting personal computers. InfiniBand directly inherited many ServerNet features. As of 2017, systems still ship based on the ServerNet architecture.
| https://en.wikipedia.org/wiki/ServerNet
The Server Base System Architecture ( SBSA ) is a hardware system architecture for servers based on 64-bit ARM processors. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Historically, ARM-based products have often been tailored for specific applications and power profiles. Variation between ARM-based hardware platforms has been an impediment requiring operating system adjustments for each product.
The SBSA seeks to strengthen the ARM ecosystem by specifying a minimal set of standardized features so that an OS built for this standard platform should function correctly without modification on all hardware products compliant with the specification.
Existing specifications for USB, PCIe, ACPI, TPM, and other standards are incorporated to solidify the specification.
Firmware issues are addressed separately in the Server Base Boot Requirements (SBBR) specification. [ 5 ]
The Architecture Compliance Suite (ACS) checks whether an environment is compliant with the SBSA specification, and is provided under an Apache 2 open source license. It is available at https://github.com/ARM-software/sbsa-acs .
The specification defines levels of compliance, with level 0 being the most basic, and successive levels building on prior levels. In the words of the spec, "Unless explicitly stated, all specification items belonging to level N apply to levels greater than N."
Levels 0, 1, and 2 have been deprecated and folded into level 3.
Level 3 contains base-level specifications for:
Level 4 extends level 3, e.g. with support for the RAS fault recovery extensions of the ARMv8.2 specification.
Level 5 extends level 4, e.g. with support for stage 2 translation control from the hypervisor as specified in ARMv8.4.
Level 6 extends level 5, e.g. with support for speculative execution safety features.
Level 7 extends level 6, e.g. with support for Arm Memory System Resource Partitioning and Monitoring (MPAM) and Performance Monitoring Unit (PMU) features.
Initial public version of the SBSA was announced on January 29, 2014.
SBSA Version 3.0 was released on February 1, 2016.
SBSA Version 5.0 was released on May 30, 2018.
SBSA Version 6.0 was released on September 16, 2019.
SBSA Version 6.1 was released on September 15, 2020.
SBSA Version 7.0 was released on January 31, 2021.
SBSA Version 7.1 was released on October 6, 2022.
| https://en.wikipedia.org/wiki/Server_Base_System_Architecture
Server Normal Format ( SNF ) is a bitmap font format used by the X Window System . It is one of the oldest X Window font formats. Nowadays it is rarely used, though it is still supported by the latest X.org server. SNF fonts had the problem of being platform-dependent, and therefore needed to be compiled on each system. In 1991, X11 moved away from SNF fonts to the Portable Compiled Format , which could be shared between systems. [ 1 ]
| https://en.wikipedia.org/wiki/Server_Normal_Format
A server is a computer that provides information to other computers called " clients " on a computer network . [ 1 ] This architecture is called the client–server model . Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. [ 2 ] Typical servers are database servers , file servers , mail servers , print servers , web servers , game servers , and application servers . [ 3 ]
Client–server systems are most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgment. Designating a computer as "server-class hardware" implies that it is specialized for running servers. This often implies that it is more powerful and reliable than standard personal computers , although large computing clusters may instead be composed of many relatively simple, replaceable server components.
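The request–response cycle can be sketched with a minimal TCP server and client; the message format (`ACK: …`) is an arbitrary stand-in for illustration, not a real protocol:

```python
import socket
import threading

def run_server(listener):
    """Accept one connection, read a request, send a response -- one
    request-response cycle."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"ACK: {request}".encode())

# The server binds an ephemeral localhost port; the client connects, sends
# a request, and blocks until the response arrives.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=run_server, args=(listener,))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"GET /time")
response = client.recv(1024).decode()
client.close()
t.join()
listener.close()
print(response)  # -> ACK: GET /time
```

Real servers loop over `accept()` and usually handle each connection concurrently; the single-shot version above only isolates the request-response exchange itself.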
The use of the word server in computing comes from queueing theory , [ 4 ] where it dates to the mid 20th century, being notably used in Kendall (1953) (along with "service"), the paper that introduced Kendall's notation . In earlier papers, such as the Erlang (1909) , more concrete terms such as "[telephone] operators" are used.
In computing, "server" dates at least to RFC 5 (1969), [ 5 ] one of the earliest documents describing ARPANET (the predecessor of Internet ), and is contrasted with "user", distinguishing two types of host : "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, [ 6 ] contrasting "serving-host" with "using-host".
The Jargon File defines server in the common sense of a process performing service for requests, usually remote, [ 7 ] with the 1981 version reading: [ 8 ]
SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a computer other than the one on which the server runs.
The average utilization of a server in the early 2000s was 5 to 15%, but with the adoption of virtualization this figure started to increase to reduce the number of servers needed. [ 9 ]
Strictly speaking, the term server refers to a computer program or process (running program). Through metonymy , it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host . In addition to server , the words serve and service (as verb and as noun respectively) are frequently used, though servicer and servant are not. [ a ] The word service (noun) may refer to the abstract form of functionality, e.g. Web service . Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service . Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve [up] web pages to users" or "service their requests".
The server is part of the client–server model ; in this model, a server serves data for clients . The nature of communication between a client and server is request and response . This is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general-purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server . Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.
While request–response is the most common client-server design, there are others, such as the publish–subscribe pattern . In the publish-subscribe pattern, clients register with a pub-sub server, subscribing to specified types of messages; this initial registration may be done by request-response. Thereafter, the pub-sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request-response. [ 10 ]
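A minimal in-process sketch of the publish–subscribe pattern follows; the class and method names are illustrative, not taken from any particular library:

```python
from collections import defaultdict

class PubSubServer:
    """Toy pub-sub broker: clients register interest in topics, and the
    server pushes each published message to matching subscribers."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # Initial registration, analogous to the one-time request-response
        # handshake mentioned above.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The server pushes; subscribers issue no further requests.
        for callback in self.subscribers[topic]:
            callback(message)

broker = PubSubServer()
received = []
broker.subscribe("alerts", received.append)
broker.publish("alerts", "disk full")
broker.publish("metrics", "cpu 42%")  # no subscriber for this topic -> dropped
print(received)  # -> ['disk full']
```

In a networked deployment the callbacks would be replaced by open connections or delivery endpoints, but the push-based flow is the same.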
The role of a server is to share data as well as to share resources and distribute work. A server computer can serve its own computer programs as well; depending on the scenario, this could be part of a quid pro quo transaction, or simply a technical possibility. The following table shows several scenarios in which a server is used.
Almost the entire structure of the Internet is based upon a client–server model. High-level root nameservers , DNS , and routers direct the traffic on the internet. There are millions of servers connected to the Internet, running continuously throughout the world [ 13 ] and virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers. There are exceptions that do not use dedicated servers; for example, peer-to-peer file sharing and some implementations of telephony (e.g. pre-Microsoft Skype ).
Hardware requirements for servers vary widely, depending on the server's purpose and its software. Servers are often more powerful and expensive than the clients that connect to them.
The name server is used for both the hardware and the software. For hardware, it is usually limited to high-end machines, although server software can run on a variety of hardware.
Since servers are usually accessed over a network, many run unattended without a computer monitor, input device, audio hardware or USB interfaces. Many servers do not have a graphical user interface (GUI); they are configured and managed remotely. Remote management can be conducted via various methods, including Microsoft Management Console (MMC), PowerShell , SSH and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO .
Large traditional single servers would need to be run for long periods without interruption. Availability would have to be very high, making hardware reliability and durability extremely important. Mission-critical enterprise servers would be very fault tolerant and use specialized hardware with low failure rates in order to maximize uptime . Uninterruptible power supplies might be incorporated to guard against power failure. Servers typically include hardware redundancy such as dual power supplies , RAID disk systems, and ECC memory , [ 14 ] along with extensive pre-boot memory testing and verification. Critical components might be hot swappable , allowing technicians to replace them on the running server without shutting it down, and to guard against overheating, servers might have more powerful fans or use water cooling . They will often be able to be configured, powered up and down, or rebooted remotely, using out-of-band management , typically based on IPMI . Server casings are usually flat and wide , and designed to be rack-mounted, either on 19-inch racks or on Open Racks .
These types of servers are often housed in dedicated data centers . These will normally have very stable power and Internet and increased security. Noise is also less of a concern, but power consumption and heat output can be a serious issue. Server rooms are equipped with air conditioning devices.
A server farm or server cluster is a collection of computer servers maintained by an organization to supply server functionality far beyond the capability of a single device. Modern data centers are now often built of very large clusters of much simpler servers, [ 15 ] and there is a collaborative effort, the Open Compute Project , built around this concept.
A class of small specialist servers called network appliances are generally at the low end of the scale, often being smaller than common desktop computers.
A mobile server has a portable form factor, e.g. a laptop . [ 16 ] In contrast to large data centers or rack servers, the mobile server is designed for on-the-road or ad hoc deployment into emergency, disaster or temporary environments where traditional servers are not feasible due to their power requirements, size, and deployment time. [ 17 ] The main beneficiaries of so-called "server on the go" technology include network managers, software or database developers, training centers, military personnel, law enforcement, forensics, emergency relief groups, and service organizations. [ 18 ] To facilitate portability, features such as the keyboard , display , battery ( uninterruptible power supply , to provide power redundancy in case of failure), and mouse are all integrated into the chassis.
On the Internet, the dominant operating systems among servers are UNIX-like open-source distributions , such as those based on Linux and FreeBSD , [ 19 ] with Windows Server also having a significant share. Proprietary operating systems such as z/OS and macOS Server are also deployed, but in much smaller numbers. Servers that run Linux are commonly used as web servers or database servers, while Windows Server is often used in networks made up of Windows clients.
Specialist server-oriented operating systems have traditionally had features such as:
In practice, today many desktop and server operating systems share similar code bases , differing mostly in configuration.
In 2010, data centers (servers, cooling, and other electrical infrastructure) were responsible for 1.1–1.5% of electrical energy consumption worldwide and 1.7–2.2% in the United States. [ 21 ] [ needs update ] One estimate is that total energy consumption for information and communications technology saves more than 5 times its carbon footprint [ 22 ] in the rest of the economy by increasing efficiency.
Global energy consumption is increasing due to the increasing demand for data and bandwidth. The Natural Resources Defense Council (NRDC) states that data centers used 91 billion kilowatt-hours (kWh) of electrical energy in 2013, which accounted for about 3% of global electricity usage. [ 23 ] [ needs update ]
Environmental groups have focused on the carbon emissions of data centers, which account for about 200 million metric tons of carbon dioxide per year. | https://en.wikipedia.org/wiki/Server_software
The purpose of service-oriented device architecture (SODA) is to enable devices to be connected to a service-oriented architecture (SOA). Currently, developers connect enterprise services to an enterprise service bus (ESB) using the various web service standards that have evolved since the advent of XML in 1998. With SODA, developers are able to connect devices to the ESB and users can access devices in exactly the same manner that they would access any other web service.
| https://en.wikipedia.org/wiki/Service-oriented_device_architecture
Service-oriented Software Engineering (SOSE), also referred to as service engineering , [ 1 ] is a software engineering methodology focused on the development of software systems by composition of reusable services ( service-orientation ) often provided by other service providers. Since it involves composition, it shares many characteristics of component-based software engineering , the composition of software systems from reusable components, but it adds the ability to dynamically locate necessary services at run-time. These services may be provided by others as web services , but the essential element is the dynamic nature of the connection between the service users and the service providers. [ 2 ]
There are three types of actors in a service-oriented interaction: service providers, service users and service registries. They participate in a dynamic collaboration which can vary from time to time. Service providers are software services that publish their capabilities and availability with service registries. Service users are software systems (which may be services themselves) that accomplish some task through the use of services provided by service providers. Service users use service registries to discover and locate the service providers they can use. This discovery and location occurs dynamically when the service user requests them from a service registry. [ 2 ]
| https://en.wikipedia.org/wiki/Service-oriented_software_engineering
The Service Evaluation System ( SES ) was an operations support system developed by Bell Laboratories and used by telephone companies beginning in the late 1960s. [ 1 ] Many local, long distance , and operator circuit-switching systems provided special dedicated circuits to the SES to monitor the quality of customer connections during the call setup process. Calls were selected at random by switching systems and one-way voice connections were established to the SES monitoring center. [ 2 ]
During this era, most voice connections used analog trunk circuits that were designed to conform with the Via Net Loss plan established by Bell Laboratories. The purpose of the VNL plan and five-level long distance switching hierarchy was to minimize the number of trunk circuits in a call and maximize the voice quality of the connections. Excessive loss in a voice connection meant that subscribers may have difficulty hearing each other. This was particularly important in the 1960s when dial up data connections were developed with the use of analog modems . The SES evaluated multi-frequency outpulsing signaling as well as voice impairments including sound amplitude, noise, echo, and a variety of other parameters. Deployment of common-channel signaling systems such as Common Channel Interoffice Signaling and later Signaling System #7 obviated the need to monitor multi-frequency signaling as it became obsolete.
The Service Evaluation System was described in Notes on the Network published by AT&T in 1970, 1975, 1980 and later versions published by Bell Communications Research (now Telcordia Technologies ) in 1983, 1986, 1990, 1994, and 2000. | https://en.wikipedia.org/wiki/Service_Evaluation_System |
The Standard Interface for Real-time Information or SIRI is an XML protocol to allow distributed computers to exchange real-time information about public transport services and vehicles.
The protocol is a CEN standard, developed originally as a technical standard with initial participation by France, Germany ( Verband Deutscher Verkehrsunternehmen ), Scandinavia, and the UK ( RTIG ).
SIRI is based on the CEN Transmodel abstract model for public transport information, and comprises a general purpose model, and an XML schema for public transport information.
A SIRI White Paper is available for further information on the protocol. [ 1 ]
CEN SIRI allows pairs of server computers to exchange structured real-time information about schedules, vehicles, and connections, together with informational messages related to the operation of the services. The information can be used for many different purposes, for example:
CEN SIRI includes a number of optional capabilities.
Different countries may specify a country profile of the subset of SIRI capabilities that they wish to adopt.
The CEN SIRI standard has two distinct components:
SIRI V1.0 defined eight functional services:
Two further functional services have been added as part of the CEN SIRI specification;
The CEN SIRI Common Protocol Framework can be used by other standards to define their own Functional Services. Two CEN standards that do this are:
Version 2.0 of SIRI, representing the CEN documents as published, is currently available as a set of XSD files packaged as a zip file.
SIRI is maintained under a maintenance regime, with version control managed by a working group of the CEN TC/278 Working Group 3 . Later versions of the schema are available at the same site, together with change notes.
The CEN SIRI standard was developed from European national standards for real-time data exchange, in particular the German VDV 453 standard, between 2000 and 2005, and included eight functional services. V1.0 became a CEN Technical Standard in 2006 and a full CEN standard in 2009.
Two additional functional services were added later: Situation Exchange (SX) (Technical Standard 2009, Standard 2016) and Facility Monitoring (FM) (2011).
A number of small enhancements were subsequently added as informal changes creating interim releases v1.1, v1.2, etc.
Two other CEN standards were developed that made use of the 'SIRI Common Protocol Framework' to define their own functional services; NeTEx (v1.0 published in 2014) and Open API for distributed journey planning (v 1.0 published in 2017).
Version 2.0 of CEN-SIRI was adopted in 2015. It is backwards compatible with v1.0, and both formalises the adoption of the interim enhancements and adds a number of additional features.
An important new addition in SIRI v2.0 was the description of a uniform transform for rendering CEN-SIRI messages into a flat format that can be used in simple http requests without an XML rendering.
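As a hedged illustration of that flat rendering idea, the sketch below builds a SIRI-Lite-style HTTP query string. The base URL is hypothetical, and the parameter names are modelled on SIRI's StopMonitoring request; real deployments define their own endpoints and country-profile parameters.

```python
from urllib.parse import urlencode

# Hypothetical endpoint for illustration only; actual base URLs vary
# by deployment.
BASE_URL = "https://example.org/siri/2.0/stop-monitoring.json"

def build_stop_monitoring_request(stop_code, max_visits=5):
    """Render a SIRI-style request as flat key/value pairs in a plain
    HTTP query string, in the spirit of the SIRI v2.0 uniform transform."""
    params = {
        "MonitoringRef": stop_code,       # stop to monitor
        "MaximumStopVisits": max_visits,  # cap on returned arrivals
    }
    return f"{BASE_URL}?{urlencode(params)}"

url = build_stop_monitoring_request("STOP1234", max_visits=3)
print(url)
# -> https://example.org/siri/2.0/stop-monitoring.json?MonitoringRef=STOP1234&MaximumStopVisits=3
```

The same request in full CEN-SIRI would be an XML `StopMonitoringRequest` document; the flat form trades expressiveness for ease of use in simple HTTP clients.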
Different SIRI implementations are used in a number of sites globally | https://en.wikipedia.org/wiki/Service_Interface_for_Real_Time_Information |
Service Request Transport Protocol (GE-SRTP) is a protocol developed by GE Intelligent Platforms (earlier GE Fanuc) for the transfer of data from programmable logic controllers . The protocol is used over Ethernet; almost all GE automation equipment supports GE-SRTP when equipped with an Ethernet port. Any SRTP client is capable of reading and writing the system memory of any number of remote SRTP-capable devices.
| https://en.wikipedia.org/wiki/Service_Request_Transport_Protocol
A service account or application account is a digital identity used by an application software or service to interact with other applications or the operating system . They are often used for machine to machine communication (M2M), for example for application programming interfaces (API). [ 1 ] The service account may be a privileged identity within the context of the application. [ 2 ]
Local service accounts can interact with various components of the operating system, which makes coordination of password changes difficult. [ 3 ] In practice this causes passwords for service accounts to rarely be changed, which poses a considerable security risk for an organization. [ 3 ]
Some types of service accounts do not have a password. [ 4 ]
Service accounts are often used by applications for access to databases , running batch jobs or scripts , or for accessing other applications. Such privileged identities often have extensive access to an organization's underlying data stores residing in applications or databases. [ 3 ]
Passwords for such accounts are often built into applications and saved in plain text files , a vulnerability which may be replicated across several servers to provide fault tolerance for applications. This poses a significant risk for an organization, since the application often hosts the type of data that is of interest to advanced persistent threats . [ 3 ]
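A common mitigation, sketched below under the assumption of a Python service (the function and variable names are illustrative), is to provision the service account credential through the process environment or a secrets manager rather than a plain text file replicated across servers:

```python
import os

def get_service_password(env_var="SERVICE_DB_PASSWORD"):
    """Read a service account credential from the environment instead of
    a plain text file on disk. Failing loudly when the credential is
    missing beats silently falling back to a hard-coded default."""
    password = os.environ.get(env_var)
    if password is None:
        raise RuntimeError(f"credential {env_var} not provisioned")
    return password

# Stand-in for real provisioning by an orchestrator or secrets manager.
os.environ["SERVICE_DB_PASSWORD"] = "example-only"
print(get_service_password())  # -> example-only
```

Environment variables are still visible to the process owner, so a dedicated secrets manager with audit logging and rotation is generally preferable for high-value service accounts.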
Service accounts are non-personal digital identities and can be shared. [ 3 ]
Google Cloud lists several possibilities for misuse of service accounts: [ 4 ] | https://en.wikipedia.org/wiki/Service_account |
Service assurance , in telecommunications , is the application of policies and processes by a Communications Service Provider (CSP) to ensure that services offered over networks meet a pre-defined service quality level for an optimal subscriber experience.
The practice of service assurance enables CSPs to identify faults in the network and resolve these issues in a timely manner so as to minimize service downtime. The practice also includes policies and processes to proactively pinpoint, diagnose and resolve service quality degradations or device malfunctions before subscribers are impacted.
Service assurance encompasses the following:
There are many drivers for service assurance adoption, with some considering the most important to be the ability to measure the performance of a service. The quality of a subscriber's service experience can be directly linked to customer churn . [ 1 ] Therefore, maintaining satisfactory service quality levels is key to creating "customer stickiness". [ 2 ]
Other factors driving growing interest in service assurance include increasing competition, new challenges due to the convergence of networks, services, applications and devices, enabling services over IP and the merging of IT and telecommunications services. [ 3 ] But ultimately, it is the CSP’s ability to ensure a satisfactory level of QoS that will have the greatest impact on revenue. [ 4 ]
The importance of service performance is also reinforced by research stating that two thirds of subscribers will stop trying a new service after two failed attempts with that service. [ 5 ] Therefore, it is increasingly apparent that service assurance tools must be put in place prior to the introduction of a new service if it is to be successful in the market. This is particularly true of deployments of such services as VoIP , IPTV and mobile video. [ 6 ]
Service assurance spending by CSPs is forecast to grow to $USD 3.0 billion by 2011. Leading global service assurance providers include InfoVista , VIAVI, TEOCO , Ericsson , nsn , EXFO, MYCOM OSI , Centina, [ 7 ] Anritsu , Epitiro, Riverbed Technology , Spirent , Empirix , JumpSoft , Computer Associates , EMC, Telcordia , Tektronix , RADCOM , CENX, Agilent , Cisco , HP , IBM , IBM Tivoli/Netcool and Softenger (I) Pvt Ltd. [ 8 ] | https://en.wikipedia.org/wiki/Service_assurance |
Service design is the activity of planning and arranging people, infrastructure, communication and material components of a service in order to improve its quality, and the interaction between the service provider and its users. Service design may function as a way to inform changes to an existing service or create a new service entirely. [ 1 ]
The purpose of service design methodologies is to establish the most effective practices for designing services, according to both the needs of users and the competencies and capabilities of service providers. If a successful method of service design is adapted then the service will be user-friendly and relevant to the users, while being sustainable and competitive for the service provider. For this purpose, service design uses methods and tools derived from different disciplines, ranging from ethnography [ 2 ] [ 3 ] [ 4 ] [ 5 ] to information and management science [ 6 ] to interaction design . [ 7 ] [ 8 ]
Service design concepts and ideas are typically portrayed visually, using different representation techniques according to the culture, skill and level of understanding of the stakeholders involved in the service processes (Krucken and Meroni, 2006). [ 9 ] With the advent of emerging technologies from the Fourth Industrial Revolution, the significance of Service Design has increased, as it is believed to facilitate a more feasible productization of these new technologies into the market. [ 10 ]
Service design practice is the specification and construction of processes which deliver valuable capacities for action to a particular user. Service design practice can be both tangible and intangible, and can involve artifacts or other elements such as communication, environment and behaviour. [ 11 ] Several of the authors of service design theory, including Pierre Eiglier, [ 12 ] Richard Normann , [ 13 ] and Nicola Morelli, [ 14 ] propose that services come into existence at the same moment they are both provided and used . In contrast, products are created and "exist" before being purchased and used. [ 14 ] While a designer can prescribe the exact configuration of a product, they cannot prescribe in the same way the result of the interaction between users and service providers , [ 7 ] nor can they prescribe the form and characteristics of any emotional value produced by the service.
Consequently, service design is an activity that, among other things, suggests behavioural patterns or "scripts" for the actors interacting in the service. Understanding how these patterns interweave and support each other is an important aspect of the character of design and service. [ 15 ] This allows greater user freedom, and better provider adaptability to the users' needs.
Service design is the process of creating and improving services to meet the needs and expectations of customers. [ 16 ]
Service design involves creating a service concept that defines the customer's experience, as well as the physical, human, and technological resources required to deliver the service. Service design focuses on the experience, including customer interactions, service delivery, and support processes. [ 17 ]
Early contributions to service design were made by G. Lynn Shostack, a bank and marketing manager and consultant, [ 18 ] in the form of written articles and books. [ 19 ] [ 20 ] The activity of designing a service was considered to be part of the domain of marketing and management disciplines in the early years. [ 19 ] For instance, in 1982 Shostack proposed the integration of the design of material components (products) and immaterial components (services). [ 19 ] This design process, according to Shostack, can be documented and codified using a " service blueprint " to map the sequence of events in a service and its essential functions in an objective and explicit manner. [ 19 ] A service blueprint is an extension of a user journey map , and this document specifies all the interactions a user has with an organisation throughout their user lifecycle. [ 21 ]
Servicescape is a model developed by B.H. Booms and Mary Jo Bitner to focus upon the impact of the physical environment in which a service process takes place [ 22 ] and to explain the actions of people within the service environment, with a view to designing environments which accomplish organisational goals in terms of achieving desired responses.
In 1991, service design was first introduced as a design discipline by professors Michael Erlhoff and Brigit Mager at Köln International School of Design (KISD) . [ 23 ] In 2004, the Service Design Network was launched by Köln International School of Design , Carnegie Mellon University , Linköpings Universitet , Politecnico di Milano and Domus Academy in order to create an international network for service design academics and professionals. [ 24 ]
In 2001, Livework, the first service design and innovation consultancy, opened for business in London. [ 25 ] In 2003, Engine, initially founded in 2000 in London as an ideation company, positioned themselves as a service design consultancy. [ 26 ]
The 2018 book, This Is Service Design Doing: Applying Service Design Thinking in the Real World , by Adam Lawrence, Jakob Schneider, Marc Stickdorn, and Markus Edgar Hormess, proposes six service design principles: [ 27 ]
In the 2011 book, This is Service Design Thinking: Basics, Tools, Cases , [ 28 ] the first principle is “ user-centred ”. "User" refers to any user of the service system, including customers and employees. Thus, the authors revised “user-centred” to “ human-centred ” in their new book, This is Service Design Doing , to clarify that 'human' includes service providers, customers, and all other relevant stakeholders. For instance, service design must consider not only the customer experience , but also the interests of all relevant people in retailing. [ 28 ]
“Collaborative” and “iterative” come from the principle “ co-creative ” in This is Service Design Thinking . [ 28 ] The service exists with the participation of users, and is created by a group of people from different backgrounds. In most cases, people tend to focus only on the meaning of “collaborative”, stressing the co-operative and interdisciplinary nature of service design, while ignoring the caveat that a service only exists with the participation of a user. Therefore, in the definition of the new service design principles, “co-creative” is divided into the two principles of “collaborative” and “iterative”. “Collaborative” indicates the process of creation by all stakeholders from different backgrounds. “Iterative” describes service design as an iterative process that keeps evolving to adapt to changes in the business.
“Sequential” means that services need to be displayed logically, rhythmically and visually. Service design is a dynamic process over a period of time, and the timeline is important for users in the service system. For example, when a customer shops on an online website, the first information shown should be the regions to which the products can be delivered. In this way, if the customer finds that the products cannot be delivered to their region, they will not continue to browse the products on the website.
Service is often invisible and occurs in a state the user cannot perceive. “Real” means that the intangible service needs to be displayed in a tangible way. For example, when people order food in a restaurant, they cannot perceive the various attributes of the food. If the restaurant shows the cultivation and picking process of its vegetables, people can perceive the intangible services backstage, such as the cultivation of organic vegetables, and get a quality service experience. This also helps the restaurant establish a natural and organic brand image with customers.
Thinking in a holistic way is the cornerstone of service design. Holistic thinking needs to consider both intangible and tangible service, and ensure that every moment the user interacts with the service (such moments are known as touchpoints ) is considered and optimised. Holistic thinking also needs to recognise that users follow multiple logics to complete an experience process. Thus, a service designer should think about each aspect from different perspectives to ensure that no needs are left unattended to.
Together with the most traditional methods used for product design, service design requires methods and tools to control new elements of the design process, such as time and the interaction between actors. An overview of the methodologies for designing services was proposed by Nicola Morelli in 2006, [ 6 ] who suggests three main directions:
Analytical tools refer to anthropology , social studies , ethnography and the social construction of technology . Appropriate elaborations of these tools have been proposed with video-ethnography [ 4 ] [ 5 ] and different observation techniques to gather data about users' actions. [ 29 ] Other methods developed in the design discipline, such as cultural probes, aim to capture information on users in their context of use (Gaver, Dunne et al. 1999; Lindsay and Rocchi 2003).
Design tools aim at producing a blueprint of the service, which describes the nature and characteristics of the interaction in the service. Design tools include service scenarios (which describe the interaction) and use cases (which illustrate the detail of time sequences in a service encounter). Both techniques are already used in software and systems engineering to capture the functional requirements of a system. However, when used in service design, they have been adequately adapted to include more information concerning material and immaterial components of a service, as well as time sequences and physical flows. [ 6 ] Crowdsourced information has been shown to be highly beneficial in providing such information for service design purposes, particularly when the information has either a very low or very high monetary value. [ 30 ] Other techniques, such as IDEF0 , just in time and total quality management are used to produce functional models of the service system and to control its processes. However, it is important to note that such tools may prove too rigid to describe services in which users are supposed to have an active role, because of the high level of uncertainty related to the user's behaviour.
Because of the need for communication between inner mechanisms of services and actors (such as final users), representation techniques are critical in service design. For this reason, storyboards are often used to illustrate the interaction of the front office . [ 31 ] Other representation techniques have been used to illustrate the system of interactions or a "platform" in a service (Manzini, Collina et al. 2004). Recently, video sketching (Jegou 2009, Keitsch et al. 2010) and prototypes (Blomkvist 2014) have also been used to produce quick and effective tools to stimulate users' participation in the development of the service and their involvement in the value production process.
In the United Kingdom, British Standard BS 7000-3:1994, part of the BS 7000 - Design management systems series, covers service design. [ 32 ]
Public sector service design is associated with civic technology , open government , e-government , and can constitute either government-led or citizen-led initiatives. The public sector is the part of the economy composed of public services and public enterprises . Public services include public goods and governmental services such as the military , police , infrastructure ( public roads , bridges , tunnels , water supply , sewers , electrical grids , telecommunications , etc.), public transit , public education , along with health care and those working for the government itself, such as elected officials . Due to new investments in hospitals, schools, cultural institutions and security infrastructures in the last few years, the public sector has expanded in many countries. The number of jobs in public services has also grown; such growth can be associated with the large and rapid social change that is in itself a trigger for fresh design. In this context, some governments are considering service design as a means to bring about better-designed public services. [ 33 ]
In 2002, MindLab , a public sector innovation and service design group, was established by the Danish ministries of Business and Growth, Employment, and Children and Education. [ 34 ] MindLab was one of the world's first public sector design innovation labs, and its work inspired the proliferation of similar labs and user-centred design methodologies deployed in many countries worldwide. [ 35 ] The design methods used at MindLab typically follow an iterative approach of prototyping and testing, to evolve not just government projects but also the government's organisational structure, using ethnographically inspired user research, creative ideation processes, and visualisation and modelling of service prototypes. [ 34 ] [ 35 ] [ 36 ] In Denmark, design within the public sector has been applied to a variety of projects, including rethinking Copenhagen's waste management, improving social interactions between convicts and guards in Danish prisons, transforming services in Odense for mentally disabled adults, and more. [ 34 ]
In 2007 and 2008, documents from the British government explored the concept of "user-driven public services" and scenarios of highly personalised public services. [ 37 ] [ 38 ] The documents proposed a new view of the role of service providers and users in the development of new and highly customised public services, employing user involvement methods. [ 37 ] [ 38 ] While this approach has been explored through an early initiative in the UK, the possibilities of service design for the public sector are also being researched, picked up, and promoted in European Union countries including Belgium. [ 39 ]
The Behavioural Insights Team (BIT) was originally established under the auspices of the Cabinet Office in 2010, in order to apply nudge theory to try to improve UK government policy interventions and save money. In 2014, BIT was 'spun out' to become a company allied to Nesta (charity) , with Nesta, BIT employees, and the UK government each owning a third of the new business. [ 40 ] That same year a nudge unit was added to the United States government under President Obama, referred to as the 'US Nudge Unit', working within the White House Office of Science and Technology Policy . [ 41 ]
In recent years New Zealand has seen a significant increase in the use of service design approaches and methods applied to challenges faced by the public sector. One instance of service design approaches being applied is the Family 100 project, which focused on the experiences of families living in urban poverty in Auckland. A report, " Speaking for Ourselves ", and a companion empathy tool, " Demonstrating the complexities of being poor ", were released in July 2014. The report and empathy tool were the result of a collective service design effort by the Auckland Council , Auckland City Mission, ThinkPlace (a service design consultancy), as well as researchers from Waikato University, Massey University, and the University of Auckland. Since its release the report has seen extensive use and has assisted in both the engagement of stakeholders and the development of public services focused on achieving better outcomes for those experiencing urban poverty.
Real-world service design work can offer new and useful approaches but can also entail challenges in practice, as identified in field research (see e.g. Jevnaker et al., 2015). [ 42 ] A practical example of service design thinking can be found at the Myyrmanni shopping mall in Vantaa , Finland. Management wanted to improve customer flow to the second floor, as there were queues at the landscape lifts while the KONE steel car lifts were ignored. In 2010, KONE implemented its 'People Flow' service design thinking by turning the elevators into a Hall of Fame for the 'Incredibles' comic strip characters. Making the elevators more attractive to the public solved the people flow problem. This case of service design thinking by the KONE elevator company is used in the literature as an example of extending products into services. [ 43 ]
Clinical service redesign is an approach to improving quality and productivity in health care. A redesign is ideally clinically led and involves all stakeholders (e.g. primary and secondary care clinicians, senior management, patients, commissioners etc.) to ensure national and local clinical standards are set and communicated across the care settings. By following the patient's journey or pathway, the team can focus on improving both the patient experience and the outcomes of care. | https://en.wikipedia.org/wiki/Service_design |
A Service Design Sprint is a time-constrained Service Design project that uses Design Thinking and Service Design tools to create a new service or improve an existing one. The term Service Design Sprint was first mentioned by Tenny Pinheiro in his book The Service Startup: Design Thinking Gets Lean (Elsevier; 2014). [ 1 ]
The Minimum Valuable Service methodology used in a Service Design Sprint [ 2 ] combines Agile -based approaches with Service-dominant logic and Service Design tools [ 3 ] to help product development teams understand, co-design, and prototype complex service scenarios with low resources and within the timespan of a week. The methodology, created by Tenny Pinheiro in 2014, [ 4 ] was designed to be used by startups in their Agile sprints.
A Service Design Sprint differs from a traditional Design Sprint [ 5 ] due to its service-dominant logic inclination. [ 6 ] Since its inception, the approach has been used by startup accelerators, educational institutions such as the University of Lapland in Finland and MIT , and Fortune 500 companies in many different sectors. [ 7 ]
The Minimum Valuable Service model [ 8 ] is divided into four phases each containing a set of tools. | https://en.wikipedia.org/wiki/Service_design_sprint |
Service innovation is used to refer to many things. These include, but are not limited to:
The Finnish research agency TEKES defines service innovation as "a new or significantly improved service concept that is taken into practice. It can be for example a new customer interaction channel, a distribution system or a technological concept or a combination of them. A service innovation always includes replicable elements that can be identified and systematically reproduced in other cases or environments. The replicable element can be the service outcome or the service process as such or a part of them. A service innovation benefits both the service producer and customers and it improves its developer's competitive edge. A service innovation is a service product or service process that is based on some technology or systematic method. In services however, the innovation does not necessarily relate to the novelty of the technology itself but the innovation often lies in the non-technological areas. Service innovations can for instance be new solutions in the customer interface, new distribution methods, novel application of technology in the service process, new forms of operation with the supply chain or new ways to organize and manage services."
Another definition proposed by Van Ark et al. (2003) [1] states it as a "new or considerably changed service concept, client interaction channel, service delivery system or technological concept that individually, but most likely in combination, leads to one or more (re)new(ed) service functions that are new to the firm and do change the service/good offered on the market and do require structurally new technological, human or organizational capabilities of the service organization." This definition covers the notions of technological and non-technological innovation. Non-technological innovations in services mainly arise from investment in intangible inputs.
Much of the literature on what makes for successful innovations of this kind comes from the New Service Development research field (e.g. Johne and Storey, 1998; [2] Nijssen et al., 2006 [3] ). Service design practitioners have also extensively discussed the features of effective service products and experiences. One of the key aspects of many service activities is the high involvement of the client/customer/user in the production of the final service. Additionally, firms cooperate with both horizontal (e.g., competitors) and vertical (e.g., suppliers) business partners in order to develop relevant service innovations. Without this co-production (i.e. the interactivity of service production), the service would often not be created. This co-production, together with the intangibility of many service products, causes service innovation to often take forms rather different from those familiar through studies of innovation in manufacturing. Innovation researchers have, for this reason, stressed that much service innovation is hard to capture in traditional categories like product or process innovation, and that its effects are diverse. [ 1 ] The co-production process, and the interactions between service provider and client, can also form the focus of innovation. A growing number of professional associations have service sections that promote service innovation research, including INFORMS , ISSIP , and others.
Thus den Hertog (2000), [4] who identifies four “dimensions” of service innovation, takes quite a different direction from much standard innovation theorizing.
In practice, the majority of service innovations will almost certainly involve various combinations of these four dimensions. For instance:
An elaboration of this model to suggest six dimensions of innovation was developed in the course of work on creative sectors, by Green, Miles and Rutter. [7] As well as Technology and Production process, four dimensions were specified whose linkages are very strong in creative sectors like videogames, advertising and design: Cultural Product, Cultural Concept, Delivery and User Interface.
The service innovation literature is surprisingly poorly related to the literature on new product development, which has spawned a line of study on new service development. This often focuses on the managerially important issue of what makes for successful service innovation. See, for example, Johne and Storey (1998), who reviewed numerous New Service Development studies.
Ian Miles of the Manchester Institute of Innovation Research (MIoIR), The University of Manchester, is one of the scholars in the study of 'service innovation'. He coined the term in his 1993 article in the journal FUTURES (Vol. 25, No. 6, pp. 653–672). [8] He listed a series of characteristic features of services, and associated these with particular types of innovation. Such innovations are often aimed at overcoming problems associated with service characteristics, such as the difficulty of demonstrating the service to the client, or the problems of storing and building up stocks of the service.
After Miles (1993), numerous studies were made; one of the more recent studies to reach similar conclusions was a qualitative survey of service organizations by Candi (2007). [ 2 ] Note that the “product”-related innovations below have a lot in common with new service development as discussed above. In the following list, features of services are linked to innovation strategies by the symbol >>>.
Additionally, a number of general tendencies in the innovation process in services have been noted. These include:
In the traditional product-service system (PSS) business model, industries develop products with value-added services instead of the product alone, and provide their customers with the services they need. In this relationship, the manufacturer's market goal is not a one-time product sale but continuous profit from customers through a total service solution that can satisfy unmet customer needs. Most PSS systems focus on ‘human-generated or human-related data’ instead of ‘machine-generated or industrial data’, which may include machine controllers, sensors, manufacturing systems, etc. Early work used web-based product monitoring for remote product services, including GM OnStar Telematics, Otis Remote Elevator Maintenance (REM), and GE Medical InSite, during the 1990s.
In recent years policy makers have begun to consider the potential for promoting services innovation as part of their economic development strategies. Such consideration has, in part, been driven by the growing contribution that service activities make to national and regional economies. It also reflects the emerging recognition that traditional policy measures such as R&D grants and technology transfer supports have been developed from a manufacturing perspective of the innovation process.
The European Commission and the OECD have been particularly active in seeking to generate reflection on services innovation and its policy implications. This has resulted in studies such as the OECD's reports into knowledge intensive services, [9] and the European Commission Expert Group report on services innovation – the report of the group, "Fostering Innovation in Services" [ 3 ] as well as various TrendChart studies. [ 4 ] The European Commission has also launched a number of Knowledge Intensive Services Platforms designed to act as laboratories for new public policies for services innovation. Few economic development agencies at the member state level, and fewer still at the regional level, have translated this new thinking on services innovation into policy action. Finland is an exception, where knowledge intensive business services have been a focus of much regional work (esp. the Uusimaa region).
Finland has been active in thinking about the policy implications of services innovation. This has seen TEKES – the Finnish Funding Agency for Technology and Innovation – launch the SERVE initiative, designed to support ‘Finnish companies and research organizations in the development of innovative service concepts that can be reproduced or replicated and where some technology or systematic method is applied.’ Germany has also undertaken initiatives for services R&D. Canada and Norway have programs as well.
Ireland has been considering a services-focused innovation policy, with Forfás – its national policy and advisory board for enterprise, trade, science, technology and innovation – having undertaken a review of Ireland's existing policy and support measures for innovation, and outlined options for a new policy and framework environment in support of service innovation activity. [10]
At the regional level, limited information is available on how Europe's regions are responding to the challenges presented by service innovation. CM International has recently published a European survey on services innovation and regional policy responses. The results suggest that very few regions in France, the UK and Ireland have an explicit focus on services and innovation. Many do, however, express a desire to address this issue in the future. [11] | https://en.wikipedia.org/wiki/Service_innovation |
In information technology , a service level indicator ( SLI ) is a measure of the service level provided by a service provider to a customer. SLIs form the basis of service level objectives (SLOs), which in turn form the basis of service level agreements (SLAs); [ 1 ] an SLI can be called an SLA metric (also customer service metric , or simply service metric ).
Though every system differs in the services it provides, common SLIs are often used. These include latency , throughput , availability , and error rate; others include durability (in storage systems), end-to-end latency (for complex data processing systems, especially pipelines), and correctness. [ 1 ]
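As a minimal illustration (the function names and sample figures below are invented for this sketch, not taken from any particular monitoring system), ratio-based SLIs such as availability and error rate are typically computed as good events over total events and then compared against an SLO threshold:

```python
def availability_sli(successful: int, total: int) -> float:
    """Fraction of requests served successfully (a common ratio-based SLI)."""
    if total == 0:
        return 1.0  # no traffic: treat the objective as met by convention
    return successful / total


def error_rate_sli(failed: int, total: int) -> float:
    """Fraction of requests that failed."""
    if total == 0:
        return 0.0
    return failed / total


# Hypothetical measurement window: 999,950 of 1,000,000 requests succeeded.
sli = availability_sli(999_950, 1_000_000)
slo = 0.9999  # a hypothetical "four nines" availability objective
print(f"availability = {sli:.6f}, meets SLO: {sli >= slo}")
```

An SLA would then attach consequences (e.g. service credits) to sustained violation of such an objective over an agreed measurement window.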
| https://en.wikipedia.org/wiki/Service_level_indicator |
A product's service life is its period of use in service. Several related terms describe more precisely a product's life, from the point of manufacture, storage, and distribution, and eventual use.
Service life has been defined as "a product's total life in use from the point of sale to the point of discard" and distinguished from replacement life , "the period after which the initial purchaser returns to the shop for a replacement". [ 3 ] Determining a product's expected service life as part of business policy ( product life cycle management ) involves using tools and calculations from maintainability and reliability analysis . Service life represents a commitment made by the item's manufacturer and is usually specified as a median. It is the time that any manufactured item can be expected to be "serviceable" or supported by its manufacturer . [ citation needed ]
Service life is not to be confused with shelf life , which deals with storage time, or with technical life, the maximum period during which the product can physically function. [ 3 ] Service life also differs from predicted life , expressed in terms of mean time before failure (MTBF) or maintenance-free operating period (MFOP). Predicted life is useful in that a manufacturer may estimate, by modeling and calculation, a general rule for honoring warranty claims, or for planning mission fulfillment. The difference between service life and predicted life is clearest when considering mission time and reliability in comparison to MTBF and service life. For example, a missile system can have a mission time of less than one minute, a service life of 20 years, an active MTBF of 20 minutes, a dormant MTBF of 50 years, and reliability of 99.9999%.
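The relationship between mission time, MTBF, and reliability in examples like the one above is often sketched with the constant-failure-rate (exponential) lifetime model, under which R(t) = exp(−t / MTBF). The figures below are illustrative only and are not meant to reproduce the missile example's quoted reliability:

```python
import math

def mission_reliability(mission_time: float, mtbf: float) -> float:
    """Probability of surviving a mission of the given length, assuming a
    constant failure rate (exponential model): R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_time / mtbf)

# A short mission keeps reliability high even with a modest active MTBF:
r = mission_reliability(0.5, 20.0)  # 0.5-minute mission, 20-minute MTBF
print(f"R = {r:.4f}")
```

Very high quoted reliabilities (e.g. 99.9999%) generally require mission times that are tiny relative to MTBF, or redundancy beyond this single-component model.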
Consumers will have different expectations about service life and longevity [ 4 ] [ 5 ] based upon factors such as use, cost, and quality.
Manufacturers will commit to a very conservative service life, usually 2 to 5 years for most commercial and consumer products (for example, computer peripherals and components ). However, large and expensive durable goods are not consumable , and maintenance activity will factor heavily into their service life. Again, an airliner might have a mission time of 11 hours, a predicted active MTBF of 10,000 hours without maintenance (or 15,000 hours with maintenance), reliability of .99999, and a service life of 40 years.
The most common model for item lifetime is the bathtub curve , a plot of the varying failure rate as a function of time. During early life, the bathtub shows increased failures, usually witnessed during product development . The middle portion of the bathtub, or 'useful life', is a slightly inclined, nearly constant failure rate period where the consumer enjoys the benefit conferred by the product. As time increases further, the curve reaches a period of increasing failures, modeling the product's wear-out phase.
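A bathtub-shaped failure rate is commonly approximated by superposing a decaying infant-mortality term, a constant useful-life term, and a growing wear-out term. All coefficients below are made up purely to produce the characteristic shape:

```python
import math

def bathtub_hazard(t: float) -> float:
    """Toy bathtub-shaped failure-rate curve (time and rate units arbitrary)."""
    infant = 0.05 * math.exp(-t / 2.0)  # early failures fade out
    useful = 0.01                       # near-constant rate in mid-life
    wearout = 0.0001 * t ** 2           # failures climb as the item wears out
    return infant + useful + wearout

# Failure rate is high early, lowest in mid-life, and high again late:
early, mid, late = bathtub_hazard(0.0), bathtub_hazard(10.0), bathtub_hazard(40.0)
print(early, mid, late)
```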
For an individual product, the component parts may each have independent service lives, resulting in several bathtub curves. For instance, a tire will have a service life partitioning related to the tread and the casing.
For maintainable items, those wear-out items that are determined by logistical analysis to be provisioned for sparing and replacement will assure a longer service life than for items without such planning. A simple example is automotive tires: failure to plan for this wear-out item would limit automotive service life to the extent of a single set of tires.
An individual tire's life also follows the bathtub curve . After installation, there is a non-negligible probability of failure, which may be related to material or workmanship, or even to the process of mounting the tire, which may introduce some small damage. After the initial period, the tire will perform, barring defect-introducing events such as encountering a road hazard (a nail or a pothole ), for a long duration relative to its expected service life, which is a function of several variables (design, material, process). After a period, the failure probability will rise; for some tires, this will occur after the tread is worn out. Then, a secondary market for tires puts a retread on the tire, thereby extending the service life. It is not uncommon for an 80,000-mile tire to perform well beyond that limit. [ 6 ]
It may be difficult to obtain reliable longevity data about many consumer products because, in general, actuarial analysis is not pursued to the same extent as that needed to support insurance decisions. However, some attempts to provide this type of information have been made. An example is the collection of estimates for household components provided by the Old House Web, [ 7 ] which gathers data from the Appliance Statistical Review and various institutes involved with the homebuilding trade.
Some engine manufacturers, such as Navistar and Volvo, use a so-called B-life rating, [ 8 ] based on the engine manufacturer's durability data, [ 9 ] with the B10 and B50 indices measuring the life expectancy of an engine . [ 10 ]
When exposed to high temperatures, the lithium-ion batteries in smartphones are easily damaged and can fail faster than expected; letting the device's battery run down completely too often has a similar effect. Debris and other contaminants that enter through small cracks in the phone can also shorten a smartphone's life expectancy. One of the most common factors that causes smartphones and other electronic devices to die quickly is physical impact and breakage, which can severely damage the internal pieces. [ 11 ]
For certain products, such as those that cannot be serviced during their operational life for technical reasons, a manufacturer may calculate a product's expected performance at both the beginning of operational life (BOL) and end of operational life (EOL). Batteries and other components that degrade over time may affect the operation of a product. The performance of mission critical components is therefore calculated for EOL, with the components exceeding their specification at BOL. For example, with spaceflight hardware, which must survive in the harsh environment of space, the capacity to generate electricity from solar panels or radioisotope thermoelectric generator (RTG) is likely to reduce throughout a mission, but must still meet a specific requirement at EOL in order to complete the mission. A spacecraft may also have a BOL mass that is greater than its EOL mass as propellant is depleted during its operational life. | https://en.wikipedia.org/wiki/Service_life |
Service quality ( SQ ), in its contemporary conceptualisation, is a comparison of perceived expectations (E) of a service with perceived performance (P), giving rise to the equation SQ = P − E. [ 1 ] This conceptualistion of service quality has its origins in the expectancy-disconfirmation paradigm. [ 2 ]
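The equation SQ = P − E can be stated as a one-line function; the survey scores below are hypothetical values on a 1–7 rating scale.

```python
def service_quality(perceived, expected):
    """SQ = P - E: positive means expectations were exceeded,
    negative means a service quality gap."""
    return perceived - expected

print(service_quality(5.2, 6.0))  # perceived performance falls short
```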
A business with high service quality will meet or exceed customer expectations whilst remaining economically competitive . [ 3 ] Evidence from empirical studies suggests that improved service quality increases profitability and long term economic competitiveness. Improvements to service quality may be achieved by improving operational processes; identifying problems quickly and systematically; establishing valid and reliable service performance measures and measuring customer satisfaction and other performance outcomes. [ 3 ]
From the viewpoint of business administration , service quality is an achievement in customer service . [ 4 ] It reflects at each service encounter. Customers form service expectations from past experiences, word of mouth and marketing communications. [ 5 ] In general, customers compare perceived service with expected service, and if the former falls short of the latter the customers are disappointed.
For example, in the case of Taj Hotels Resorts and Palaces , where TAJ remained the old-world luxury brand in the five-star category, umbrella branding was diluting the image of the TAJ brand: although the different hotels, such as Vivanta by Taj in the four-star category, Gateway in the three-star category and Ginger as the two-star economy brand, were positioned and categorised differently, customers still expected the high quality of Taj.
The measurement of subjective aspects of customer service depends on the conformity of the expected benefit with the perceived result. This in turn depends upon the customer's expectation of the service they might receive and the service provider's ability and talent to present this expected service. Successful companies add benefits to their offering that not only satisfy the customers but also surprise and delight them. Delighting customers is a matter of exceeding their expectations.
Pre-defined objective criteria may be unattainable in practice, in which case, the best possible achievable result becomes the ideal. The objective ideal may still be poor, in subjective terms.
Service quality can be related to service potential (for example, worker's qualifications); service process (for example, the quickness of service) and service result (customer satisfaction).
Individual service quality states the service quality of employees as distinct from the quality that the customers perceived. [ 6 ]
Historically, scholars have treated service quality as very difficult to define and measure, due to the inherent intangible nature of services, which are often experienced subjectively. [ 7 ]
One of the earliest attempts to grapple with the service quality concept came from the so-called Nordic School . In this approach, service quality was seen as having two basic dimensions: [ 8 ]
The technical quality is relatively objective and therefore easy to measure. However, difficulties arise when trying to evaluate functional quality. [ 8 ]
A customer's expectation of a particular service is determined by factors such as recommendations, personal needs and past experiences. The expected service and the perceived service sometimes may not be equal, thus leaving a gap. The service quality model, or 'GAP model', developed in 1985, highlights the main requirements for delivering high service quality. It identifies five 'gaps' that cause unsuccessful delivery. Customers generally have a tendency to compare the service they 'experience' with the service they 'expect'. If the experience does not match the expectation, a gap arises. [ 9 ] Given the emphasis on expectations, this approach to measuring service quality is known as the expectancy-disconfirmation paradigm and is the dominant model in the consumer behaviour and marketing literature. [ 10 ]
A model of service quality, based on the expectancy-disconformation paradigm, and developed by A. Parasuraman , Valarie A. Zeithaml and Len Berry , identifies the principal dimensions (or components) of service quality and proposes a scale for measuring service quality, known as SERVQUAL . The model's developers originally identified ten dimensions of service quality that influence customer's perceptions of service quality. [ 11 ] However, after extensive testing and retesting, some of the dimensions were found to be autocorrelated and the total number of dimensions was reduced to five, namely - reliability, assurance, tangibles, empathy and responsiveness. These five dimensions are thought to represent the dimensions of service quality across a range of industries and settings. [ 12 ] Among students of marketing, the mnemonic, RATER , an acronym formed from the first letter of each of the five dimensions, is often used as an aid to recall.
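A SERVQUAL-style survey can be summarised as a gap score (P − E) per RATER dimension. The sketch below is a minimal illustration with made-up item scores, not the instrument's actual questionnaire or weighting.

```python
def gap_scores(expectations, perceptions):
    """Per-dimension gap score: mean perception minus mean expectation.
    Negative values indicate a service quality shortfall."""
    return {
        dim: sum(perceptions[dim]) / len(perceptions[dim])
             - sum(expectations[dim]) / len(expectations[dim])
        for dim in expectations
    }

# Hypothetical item scores (1-7 scale) for two of the five RATER dimensions
E = {"reliability": [6, 7, 6], "assurance": [5, 6, 6]}
P = {"reliability": [5, 6, 5], "assurance": [6, 6, 6]}
print(gap_scores(E, P))  # reliability falls short; assurance slightly exceeds
```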
In spite of the dominance of the expectancy-disconfirmation paradigm, scholars have questioned its validity. In particular scholars have pointed out the expectancy-disconfirmation approach had its roots in consumer research and was fundamentally concerned with measuring customer satisfaction rather than service quality. In other words, questions surround the face validity of the model and whether service quality can be conceptualised as a gap . [ 13 ]
Measuring service quality may involve both subjective and objective processes. In both cases, it is often some aspect of customer satisfaction which is being assessed. However, customer satisfaction is an indirect measure of service quality. Research has also indicated that the presence of service quality leads to several outcomes including changes in perceived value, customer satisfaction and loyalty intentions with consumers. [ 14 ] [ 15 ]
Given the widespread use of internet and e-commerce , researchers have also sought to define and measure e-service quality. Parasuraman, Zeithaml, and Malhotra (2005, p. 5) define e-service quality as the “extent to which a website facilitates efficient and effective shopping, purchasing, and delivery.” Wolfinbarger and Gilly (2003, p. 183) define e-service quality as “the beginning to the end of the transaction including information search, website navigation, order, customer service interactions, delivery, and satisfaction with the ordered product”. [ 16 ] [ 17 ]
A recent paper examined research on e-service quality. [ 18 ] The author identified four dimensions of e-service quality: website design, fulfillment, customer service, and security and privacy.
Subjective processes can be assessed through characteristics (assessed by the SERVQUAL method), through incidents (assessed via critical incident theory ) and through problems (assessed by Frequenz-Relevanz-Analyse, a German term for frequency-relevance analysis). The most important and most widely used method for measuring subjective elements of service quality is the SERVQUAL method. [ citation needed ]
Objective processes may be subdivided into primary processes and secondary processes. During primary processes, silent customers create test episodes of service or the service episodes of normal customers are observed. In secondary processes, quantifiable factors such as numbers of customer complaints or numbers of returned goods are analysed in order to make inferences about service quality.
In general, an improvement in service design and delivery helps achieve higher levels of service quality. For example, in service design, changes can be brought about in the design of service products and facilities. On the other hand, in service delivery, changes can be brought about in the service delivery processes, the environment in which the service delivery takes place and improvements in the interaction processes between customers and service providers.
Various techniques can be used to make changes such as: Quality function deployment (QFD); failsafing ; moving the line of visibility and the line of accessibility; and blueprinting .
In order to ensure and increase the 'conformance quality' of services, that is, service delivery happening as designed, various methods are available. Some of these include guaranteeing ; mystery shopping ; recovering; setting standards and measuring; statistical process control and Customer involvement management . [ 19 ]
The relationship between service quality and customer satisfaction has received considerable attention in academic literature. The results of most research studies have indicated that service quality and customer satisfaction are indeed independent but closely related constructs, and that a rise in one is likely to result in an increase in the other. [ 20 ] [ 21 ] | https://en.wikipedia.org/wiki/Service_quality |
In IEEE 802.11 wireless local area networking standards (including Wi‑Fi ), a service set is a group of wireless network devices which share a service set identifier ( SSID )—typically the natural language label that users see as a network name. (For example, all of the devices that together form and use a Wi‑Fi network called "Foo" are a service set.) A service set forms a logical network of nodes operating with shared link-layer networking parameters; they form one logical network segment.
A service set is either a basic service set ( BSS ) or an extended service set ( ESS ).
A basic service set is a subgroup, within a service set, of devices that share physical-layer medium access characteristics (e.g. radio frequency, modulation scheme, security settings) such that they are wirelessly networked. The basic service set is defined by a basic service set identifier ( BSSID ) shared by all devices within it. The BSSID is a 48-bit label that conforms to MAC-48 conventions. While a device may have multiple BSSIDs, usually each BSSID is associated with at most one basic service set at a time. [ 1 ]
A basic service set should not be confused with the coverage area of an access point, known as the basic service area ( BSA ). [ 2 ]
An infrastructure BSS is created by an infrastructure device called an access point ( AP ) for other devices to join. (Note that the term IBSS is not used for this type of BSS but refers to the independent type discussed below.) The operating parameters of the infrastructure BSS are defined by the AP. [ 3 ] The Wi‑Fi segments of common home and business networks are examples of this type.
Each basic service set has a unique identifier, a BSSID, which is a 48-bit number that follows MAC address conventions. [ 4 ] An infrastructure BSSID is usually non-configurable, in which case it is either preset during manufacture or mathematically derived from a preset value such as a serial number or a MAC address of another network interface. As with the MAC addresses used for Ethernet devices, an infrastructure BSSID is a combination of a 24-bit organizationally unique identifier (OUI, the manufacturer's identity) and a 24-bit serial number. A BSSID with a value of all 1s is used to indicate the wildcard BSSID, usable only during probe requests or for communications that take place outside the context of a BSS. [ 5 ]
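The 48-bit BSSID structure described above (24-bit OUI plus 24-bit serial, with all 1s reserved as the wildcard) can be sketched as a small parser. The example address is made up for illustration.

```python
def parse_bssid(bssid):
    """Split a colon-separated BSSID into its 24-bit OUI and 24-bit
    manufacturer-assigned portion, and flag the all-1s wildcard BSSID."""
    octets = bytes(int(part, 16) for part in bssid.split(":"))
    assert len(octets) == 6, "a BSSID is 48 bits (6 octets)"
    oui = octets[:3]      # organizationally unique identifier
    serial = octets[3:]   # manufacturer-assigned portion
    is_wildcard = all(o == 0xFF for o in octets)
    return oui.hex(":"), serial.hex(":"), is_wildcard

print(parse_bssid("00:1a:2b:3c:4d:5e"))   # hypothetical infrastructure BSSID
print(parse_bssid("ff:ff:ff:ff:ff:ff"))   # the wildcard BSSID
```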
An independent BSS ( IBSS ), or ad hoc network , is created by peer devices among themselves without network infrastructure. [ 6 ] A temporary network created by a cellular telephone to share its Internet access with other devices is a common example. In contrast to the stations in an infrastructure-mode network, the stations in a wireless ad hoc network communicate directly with one another, i.e. without a dependence on a distribution point to relay traffic between them. [ 7 ] In this form of peer-to-peer wireless networking, the peers form an independent basic service set ( IBSS ). [ 8 ] Some of the responsibilities of a distribution point—such as defining network parameters and other "beaconing" functions—are established by the first station in an ad-hoc network. However, that station does not relay traffic between the other stations; instead, the peers communicate directly with one another. Like an infrastructure BSS, an independent BSS also has a 48-bit MAC-address-like identifier. But unlike infrastructure BSS identifiers, independent BSS identifiers are not necessarily unique: the individual/group bit of the address is always set to 0 (individual), the universal/local bit of the address is always set to 1 (local), and the remaining 46 bits are randomly generated. [ 5 ]
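The IBSS identifier rule above (individual/group bit 0, universal/local bit 1, remaining 46 bits random) can be sketched directly; in a standard MAC address the individual/group bit is the least significant bit of the first octet and the universal/local bit is the next one up.

```python
import random

def random_ibss_bssid(rng=random):
    """Generate a locally administered, individual-address identifier
    of the kind an IBSS uses: 46 random bits plus the two fixed bits."""
    octets = bytearray(rng.getrandbits(8) for _ in range(6))
    octets[0] &= 0b11111110   # clear individual/group bit -> individual
    octets[0] |= 0b00000010   # set universal/local bit -> locally administered
    return bytes(octets)

bssid = random_ibss_bssid()
print(bssid.hex(":"))
```

Because 46 bits are random, two independent ad hoc networks can in principle generate the same identifier, which is why the article notes that IBSS identifiers are not necessarily unique.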
A mesh basic service set ( MBSS ) is a self-contained network of mesh stations that share a mesh profile , defined in 802.11s . [ 9 ] Each node may also be an access point hosting its own basic service set, for example using the mesh BSS to provide Internet access for local users. In such a system, the BSS created by the access point is distinct from the mesh network, and a wireless client of that BSS is not part of the MBSS. The formation of the mesh BSS, as well as wireless traffic management (including path selection and forwarding) is negotiated between the nodes of the mesh infrastructure. The mesh BSS is distinct from the networks (which may also be wireless) used by a mesh's redistribution points to communicate with one another. [ citation needed ]
The service set identifier ( SSID ) defines or extends a service set. Normally it is broadcast in the clear by stations in beacon packets to announce the presence of a network and seen by users as a wireless network name.
Unlike basic service set identifiers, SSIDs are usually customizable. [ 10 ] These SSIDs can be zero to 32 octets long, [ 11 ] and are, for convenience, usually in a natural language , such as English. The 802.11 standards prior to the 2012 edition did not define any particular encoding or representation for SSIDs, which were expected to be treated and handled as an arbitrary sequence of 0–32 octets that are not limited to printable characters . IEEE Std 802.11-2012 defines a flag to express that the SSID is UTF-8 -encoded and could contain any Unicode text. [ 12 ] Wireless network stacks must still be prepared to handle all possible values in the SSID field.
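A minimal sketch of handling SSIDs under the rules above: enforce the 0–32 octet length, treat a zero-length SSID as the wildcard, and decode as UTF-8 only when the (post-2012) flag says so. The fallback decoding choice here is an assumption for display purposes, since unflagged SSIDs are arbitrary octets.

```python
def check_ssid(ssid_octets, utf8_flag=False):
    """Validate and render an SSID per the 0-32 octet rule."""
    if not 0 <= len(ssid_octets) <= 32:
        raise ValueError("SSID must be 0-32 octets")
    if len(ssid_octets) == 0:
        return "<wildcard SSID>"   # zero-length element: hidden/wildcard
    if utf8_flag:
        return ssid_octets.decode("utf-8")
    # Without the flag the octets carry no defined encoding; latin-1
    # maps each octet to a character so nothing is lost on display.
    return ssid_octets.decode("latin-1")

print(check_ssid(b""))                                        # hidden/wildcard
print(check_ssid("Caf\u00e9".encode("utf-8"), utf8_flag=True))
```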
Since the contents of an SSID field are arbitrary, the 802.11 standard permits devices to advertise the presence of a wireless network with beacon packets in which the SSID field is set to null. [ 13 ] [ n 1 ] A null SSID (the SSID element's length field is set to zero [ 11 ] ) is called a wildcard SSID in IEEE 802.11 standards documents, [ 14 ] and a no-broadcast SSID or hidden SSID in the context of beacon announcements, [ 13 ] [ 15 ] and can be used, for example, in enterprise and mesh networks to steer a client to a particular (e.g. less utilized) access point. [ 13 ] A station may likewise transmit packets in which the SSID field is set to null; this prompts an associated access point to send the station a list of supported SSIDs. [ 16 ] Once a device has associated with a basic service set, for efficiency, the SSID is not sent within packet headers; only BSSIDs are used for addressing.
Apple 's location services interpret the SSID of a Wi‑Fi access point ending in _nomap as an opt-out from being included in Apple's crowdsourced location databases. [ 17 ]
An extended service set ( ESS ) is a wireless network, created by multiple access points, which appears to users as a single, seamless network, such as a network covering a home or office that is too large for reliable coverage by a single access point. It is a set of one or more infrastructure basic service sets on a common logical network segment (i.e. same IP subnet and VLAN). [ 18 ] Key to the concept is that the participating basic service sets appear as a single network to the logical link control layer by using the same SSID. [ 18 ] [ 19 ] Thus, from the perspective of the logical link control layer, stations within an ESS may communicate with one another, and mobile stations may move transparently from one participating basic service set to another (within the same ESS). [ 19 ] Extended service sets make possible distribution services such as centralized authentication. From the perspective of the link layer, all stations within an ESS are all on the same link, and transfer from one BSS to another is transparent to logical link control. [ 20 ]
The basic service sets formed in wireless ad hoc networks are, by definition, independent from other BSSs, and an independent BSS cannot therefore be part of an extended infrastructure. [ 21 ] In that formal sense an independent BSS has no extended service set. However, the network packets of both independent BSSs and infrastructure BSSs have a logical network service set identifier, and the logical link control does not distinguish between the use of that field to name an ESS network, and the use of that field to name a peer-to-peer ad hoc network. The two are effectively indistinguishable at the logical link control layer level. [ 20 ] | https://en.wikipedia.org/wiki/Service_set_(802.11_network) |
In civil engineering and structural engineering , serviceability refers to the conditions under which a building is still considered useful. Should these limit states be exceeded, a structure that may still be structurally sound would nevertheless be considered unfit. It refers to conditions other than the building strength that render the buildings unusable. Serviceability limit state design of structures includes factors such as durability, overall stability, fire resistance, deflection, cracking and excessive vibration.
For example, a skyscraper could sway severely and cause the occupants to be sick (much like sea-sickness ), yet be perfectly sound structurally. This building is in no danger of collapsing, yet since it is obviously no longer fit for human occupation, it is considered to have exceeded its serviceability limit state.
A serviceability limit defines the performance criterion for serviceability and corresponds to a condition beyond which specified service requirements resulting from the planned use are no longer met.
In limit state design , a structure fails its serviceability if the criteria of the serviceability limit state are not met during the specified service life and with the required reliability. Hence, the serviceability limit state identifies a civil engineering structure which fails to meet technical requirements for use even though it may be strong enough to remain standing.
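A common serviceability check of this kind is a deflection limit. The sketch below uses a span/360 limit purely as an illustrative assumption; actual limit ratios depend on the code, material and intended use.

```python
def deflection_ok(span_mm, deflection_mm, limit_ratio=360):
    """True if a member's computed deflection is within the assumed
    serviceability limit of span / limit_ratio."""
    return deflection_mm <= span_mm / limit_ratio

print(deflection_ok(7200, 18.0))   # within the 7200/360 = 20 mm limit
print(deflection_ok(7200, 25.0))   # exceeds the limit: unfit for use,
                                   # even if the member is strong enough
```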
A structure that fails serviceability has exceeded a defined limit for one of the following properties:
Serviceability limits are not always defined by building code developers, governments or regulatory agencies. Building codes tend to be restricted to ultimate limits related to public and occupant safety. Global geopolitical variations are likely to exist. | https://en.wikipedia.org/wiki/Serviceability_(structure) |
Sesam is a software suite for structural and hydrodynamic analysis of ships and offshore structures. [ 1 ] It is based on the displacement formulation of the Finite Element Method .
The first version of Sesam was developed at NTH, now Norges Teknisk-Naturvitenskapelige Universitet ( NTNU Trondheim ), in the mid-1960s. [ 2 ] Sesam was bought by Det Norske Veritas, now DNV , in 1968 and commercialized under the name SESAM-69 in 1970. Sesam was thus one of the first major structural analysis tools based on the Finite Element Method, and in its capability for analysing large and complex structures it outclassed all alternatives. [ 2 ] In the beginning it was used for analysis of ships, in particular oil tankers (for which a comparison of analysis results with measurements on the real ship was made to confirm the accuracy of the method and tool [ 3 ] ) and liquefied natural gas ( LNG ) carriers. [ 4 ]
With the development of offshore oil fields in the North Sea in the 1970s the use of Sesam for fixed offshore platforms grew. Examples of such use are the Ekofisk concrete tank of the Ekofisk oil field, the Condeep concrete gravity base structures and the Kvitebjørn jacket in the North Sea.
In the late 1970s development of a completely new version of Sesam started. [ 5 ] This version was released in the mid-1980s under the name SESAM'80 and is the basis for today's Sesam. During the 1990s Sesam was further enhanced with a high-level concept modelling technique together with a design-oriented and unified user interface. Analysis features for mooring systems and flexible risers were also added. The software name was at the same time simplified to merely "Sesam".
Development in recent years, with frequent new releases, has focused on improving Sesam as a tool for all phases of offshore and maritime structures, from design through transportation, installation, operation and modification to life extension, requalification and, finally, decommissioning.
Sesam consists of several modules of which the most important are:
GeniE for modelling, analysis and code checking of beam, plate and shell structures like offshore platforms and ships.
HydroD for hydrodynamic and hydrostatic analysis of fixed and floating structures like offshore platforms and ships.
Sima for simulation of marine operations like lifting and lowering large objects in a marine environment.
DeepC for mooring and riser design as well as marine operations of offshore floating structures.
Sesam is developed in Norway by DNV with focus on solution of structural and hydrodynamic engineering problems within the offshore and maritime industries. It has been used by the offshore and maritime industries world-wide for more than 50 years. | https://en.wikipedia.org/wiki/Sesam_(structural_analysis_software) |
Sesamodil is a calcium channel blocker . | https://en.wikipedia.org/wiki/Sesamodil |
In algebraic geometry , a Seshadri constant is an invariant of an ample line bundle L at a point P on an algebraic variety . It was introduced by Demailly to measure a certain rate of growth of the tensor powers of L , in terms of the jets of the sections of L^k . The objective was the study of the Fujita conjecture .
The name is in honour of the Indian mathematician C. S. Seshadri .
It is known that Nagata's conjecture on algebraic curves is equivalent to the assertion that for more than nine general points, the Seshadri constants of the projective plane are maximal. There is a general conjecture for algebraic surfaces , the Nagata–Biran conjecture .
Let $X$ be a smooth projective variety, $L$ an ample line bundle on it, $x$ a point of $X$, and $\mathcal{C}_x = \{\text{all irreducible curves passing through } x\}$.

$$\epsilon(L,x) := \inf_{C \in \mathcal{C}_x} \frac{L \cdot C}{\operatorname{mult}_x(C)}.$$

Here, $L \cdot C$ denotes the intersection number of $L$ and $C$, and $\operatorname{mult}_x(C)$ measures how many times $C$ passes through $x$.

Definition: One says that $\epsilon(L,x)$ is the Seshadri constant of $L$ at the point $x$, a real number. When $X$ is an abelian variety , it can be shown that $\epsilon(L,x)$ is independent of the point chosen, and it is written simply $\epsilon(L)$. | https://en.wikipedia.org/wiki/Seshadri_constant |
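The infimum in the definition of the Seshadri constant can be illustrated numerically over a finite sample of curves. The (intersection number, multiplicity) pairs below are hypothetical sample curves; a finite minimum only gives an upper bound on the true infimum, not the constant itself.

```python
def seshadri_upper_bound(curves):
    """Minimum of (L . C) / mult_x(C) over a finite list of
    (intersection_number, multiplicity) pairs: an upper bound
    on the Seshadri constant at x."""
    return min(deg / mult for deg, mult in curves)

# e.g. on the projective plane with L = O(1): a line through x gives
# the pair (1, 1); a nodal cubic with a double point at x gives (3, 2).
sample = [(1, 1), (3, 2), (6, 5)]
print(seshadri_upper_bound(sample))
```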