Electroporation, also known as electropermeabilization, is a microbiological and biotechnological technique in which an electric field is applied to cells to briefly increase the permeability of the cell membrane. [ 1 ] The application of a high-voltage electric field induces a temporary destabilization of the lipid bilayer, resulting in the formation of nanoscale pores that permit the entry or exit of macromolecules. [ 2 ]
This method is widely employed to introduce molecules—including small molecules , DNA , RNA , and proteins —into cells. Electroporation can be performed on cells in suspension using electroporation cuvettes , or directly on adherent cells in situ within their culture vessels. [ 3 ]
In microbiology, electroporation is frequently utilized for the transformation of bacteria or yeast cells, [ 4 ] often with plasmid DNA. [ 5 ] It is also used in the transfection of plant protoplasts and mammalian cells. [ 6 ] Notably, electroporation plays a critical role in the ex vivo manipulation of immune cells for the development of cell-based therapies, such as CAR T-cell therapy. [ 7 ] [ 8 ] Moreover, in vivo applications of electroporation have been successfully demonstrated in various tissue types. [ 9 ]
Bulk electroporation confers advantages over other physical delivery methods, including microinjection and gene gun techniques. However, it is limited by reduced cell viability . To address these issues, researchers have developed miniaturized approaches such as micro-electroporation [ 10 ] and nanotransfection . [ 11 ] These techniques utilize nanochannel-mediated electroporation to deliver molecular cargo to cells in a more controlled and less invasive manner.
Alternative methods for intracellular delivery include the use of cell-penetrating peptides , [ 12 ] cell squeezing techniques, [ 13 ] and chemical transformation , [ 14 ] with selection depending on the specific cell type and cargo characteristics.
Electroporation is also employed to induce cell fusion . [ 15 ] A prominent application of cell fusion is hybridoma technology , where antibody-producing B lymphocytes are fused with immortal myeloma cell lines to produce monoclonal antibodies . [ 16 ] [ 17 ]
Electroporation is widely utilized in laboratory settings due to its ability to achieve high transformation efficiencies, particularly for plasmid DNA, with reported yields approaching 10¹⁰ colony-forming units per microgram of DNA. Electroporation is generally more costly than chemical transformation methods due to the specialized equipment required. This includes electroporators (devices designed to generate controlled electric fields in a cell suspension) [ 18 ] and electroporation cuvettes, which are typically constructed from glass or plastic and contain parallel aluminum electrodes. [ 19 ] [ 20 ]
A standard bacterial transformation protocol involves several steps. First, electro-competent cells are prepared by washing to remove ions that could cause arcing . These cells are then mixed with plasmid DNA and transferred into an electroporation cuvette. A high-voltage electric pulse is applied, with specific parameters such as voltage and pulse duration tailored to the particular cell type being used. Following electroporation, recovery medium is added, and the cells are incubated at an appropriate temperature to allow for outgrowth. Finally, the cells are plated onto selective agar plates to assess transformation efficiency. [ 21 ]
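Transformation efficiency from such an experiment is reported as colony-forming units (CFU) per microgram of plasmid DNA. The sketch below illustrates the arithmetic only; all input values are hypothetical rather than taken from any particular protocol:

```python
# Transformation efficiency: CFU per microgram of plasmid DNA.
# All input values below are hypothetical, for illustration only.

colonies_counted = 250    # colonies on the selective plate
volume_plated_ul = 100    # microlitres spread on the plate
total_recovery_ul = 1000  # total outgrowth volume after the pulse
dilution_factor = 100     # a 1:100 dilution of the recovery was plated
dna_used_ug = 0.01        # micrograms of plasmid electroporated (10 ng)

cfu_total = colonies_counted * dilution_factor * (total_recovery_ul / volume_plated_ul)
efficiency = cfu_total / dna_used_ug
print(f"Transformation efficiency: {efficiency:.2e} CFU/ug DNA")  # 2.50e+07
```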
The success of electroporation depends on several factors, including the purity of the plasmid DNA solution, [ 22 ] salt concentration, and electroporation parameters. [ 23 ] High salt concentrations can lead to arcing (electrical discharge), significantly reducing the viability of electroporated cells. Therefore, the electroporation conditions must be optimized for each cell type to achieve an effective balance between cell viability and DNA uptake. [ 24 ]
In addition to in vitro applications, electroporation is employed in vivo to enhance cell membrane permeability during injections and surgical procedures. The effectiveness of in vivo electroporation depends greatly on selected parameters such as voltage, pulse duration, and number of pulses. Developing central nervous systems are particularly suitable for in vivo electroporation, as ventricles provide clear visibility for nucleic acid injections, and dividing cells exhibit increased permeability . Electroporation of embryos injected in utero is performed through the uterine wall , often using forceps -type electrodes to minimize embryo damage. [ 25 ]
Researchers in the 1960s discovered that applying an external electric field would create a large membrane potential at the two poles of a cell. In the 1970s, it was found that when a critical membrane potential is reached, the cellular membrane would break down and subsequently recover. [ 26 ] By the 1980s, this temporary membrane breakdown was exploited to introduce various molecules into cells. [ 27 ]
In vivo gene electroporation was first described in 1991. [ 28 ] This method delivers a large variety of therapeutic genes for the potential treatment of several diseases, including immune disorders , tumors , metabolic disorders , monogenetic diseases, cardiovascular diseases , and analgesia . [ 29 ] [ 30 ] [ 31 ]
Regarding irreversible electroporation, the first successful treatment of malignant cutaneous tumors implanted in mice was accomplished in 2007 by a group of scientists who achieved complete tumor ablation in 12 of 13 mice. They accomplished this by sending 80 pulses of 100 microseconds at 0.3 Hz with an electrical field magnitude of 2500 V/cm to treat the cutaneous tumors. [ 32 ]
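The timing of such a pulse train follows directly from the cited parameters; a minimal sketch of the arithmetic:

```python
# Timing of an IRE pulse train: 80 pulses of 100 us delivered at 0.3 Hz.
n_pulses = 80
pulse_width_s = 100e-6  # 100 microseconds per pulse
repetition_hz = 0.3     # pulses per second

total_treatment_s = n_pulses / repetition_hz        # ~266.7 s (~4.4 min)
total_energized_s = n_pulses * pulse_width_s        # 8 ms of actual field exposure
duty_cycle = total_energized_s / total_treatment_s  # ~3e-5

print(f"Treatment time: {total_treatment_s:.0f} s; field on for {total_energized_s*1e3:.0f} ms")
```

This illustrates a characteristic feature of the protocol: the tissue is exposed to the field for only milliseconds in total, spread over minutes, which limits resistive heating.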
The first group to apply electroporation used a reversible procedure in conjunction with impermeable macromolecules . The first research on how nanosecond pulses might be used on human cells was published in 2003. [ 33 ]
The first medical application of electroporation was used for introducing poorly permeant anti-cancer drugs into tumor nodules. [ 34 ] Gene electro-transfer soon became of interest because of its low cost, ease of implementation, and alleged safety. Viral vectors have since been found to have limitations in terms of immunogenicity and pathogenicity when used for DNA transfer. [ 35 ]
Irreversible electroporation is being used and evaluated as cardiac ablation therapy to kill specific areas of heart muscle. This is done to treat irregularities of heart rhythm . A cardiac catheter delivers trains of high-voltage, ultra-rapid electrical pulses that form irreversible pores in cell membranes, resulting in cell death. [ 36 ]
Non-thermal irreversible electroporation (N-TIRE) is a technique for treating many different types of tumors and other unwanted tissue. The procedure uses small electrodes (about 1 mm in diameter), placed either inside or around the target tissue, to apply short, repetitive bursts of electricity at a predetermined voltage and frequency. These bursts increase the resting transmembrane potential (TMP) so that nanopores form in the plasma membrane. When the electricity applied to the tissue is above the electric field threshold of the target tissue, the cells become permanently permeable through the formation of nanopores. As a result, the cells are unable to repair the damage and die due to a loss of homeostasis. [ 37 ] N-TIRE differs from other tumor ablation techniques in that it does not create thermal damage to the surrounding tissue.
In contrast, reversible electroporation occurs when the electricity applied with the electrodes is below the target tissue's electric field threshold. Because the electricity applied is below the cells' threshold, it allows the cells to repair their phospholipid bilayer and continue with their normal cell functions. Reversible electroporation is typically done with treatments that involve inserting a drug or gene (or other molecule that is not normally permeable to the cell membrane) into the cell. Not all tissues have the same electric field threshold; therefore, to improve safety and efficacy, careful calculations need to be made prior to a treatment. [ 38 ]
N-TIRE, when done correctly, only affects the target tissue. Proteins, the extracellular matrix, and critical structures such as blood vessels and nerves are all unaffected and left healthy by this treatment. This facilitates a more rapid replacement of dead tumor cells and a faster recovery. [ 39 ]
Imaging technologies such as CT and MRI are commonly used to create a 3D image of the tumor. Computed tomography is used to help with the placement of electrodes during the procedure, particularly when the electrodes are being used to treat tumors in the brain. [ 40 ]
The procedure takes about five minutes and has a high success rate. [ 2 ] It may be used for future treatment in humans. One disadvantage of N-TIRE is that the electricity delivered from the electrodes can stimulate muscle cells to contract, which could have lethal consequences depending on the situation. Therefore, a paralytic agent must be used when performing the procedure; however, the paralytic agents used in such research carry their own risks, particularly when combined with anesthetics. [ 41 ]
High-frequency irreversible electroporation (H-FIRE) uses electrodes to apply bipolar bursts of electricity at a high frequency, as opposed to unipolar bursts of electricity at a low frequency. This type of procedure has the same tumor ablation success as N-TIRE, but with one distinct advantage: H-FIRE does not cause muscle contraction in the patient, so there is no need for a paralytic agent. [ 42 ] Furthermore, H-FIRE has been demonstrated to produce more predictable ablations due to the smaller differences in the electrical properties of tissues at higher frequencies. [ 43 ]
Electroporation can also be used to help deliver drugs or genes into the cell by applying short and intense electric pulses that transiently permeabilize the cell membrane, thus allowing the transport of molecules that are otherwise not transported across the cellular membrane. This procedure is referred to as electrochemotherapy when the molecules to be transported are chemotherapeutic agents, or gene electrotransfer when the molecule to be transported is DNA. Scientists from the Karolinska Institute and the University of Oxford have used electroporation of exosomes to deliver siRNAs, antisense oligonucleotides, chemotherapeutic agents, and proteins specifically to neurons after injecting them systemically (into the blood). Because these exosomes can cross the blood-brain barrier, this protocol could solve the problem of poor delivery of medications to the central nervous system, and may potentially treat Alzheimer's disease, Parkinson's disease, and brain cancer, among other conditions. [ 44 ]
Research has shown that shock waves can be used to pre-treat the cell membrane prior to electroporation. [ 45 ] [ 46 ] This synergistic strategy has been shown to reduce the external voltage requirement and to create larger pores. The application of shock waves also allows targeting of a desired membrane site, and the procedure permits control over the size of the pore.
Electroporation allows cellular introduction of large, highly charged molecules, such as DNA, that cannot passively diffuse across the hydrophobic bilayer core. [ 47 ] This phenomenon indicates that the mechanism is the creation of nm-scale water-filled holes in the membrane. [ 48 ] Electropores have been optically imaged in lipid bilayer models such as droplet interface bilayers [ 49 ] and giant unilamellar vesicles, [ 50 ] while the addition of cytoskeletal proteins such as actin networks to the giant unilamellar vesicles seems to prevent the formation of visible electropores. [ 51 ] Experimental evidence for actin networks regulating cell membrane permeability has also emerged. [ 52 ] Although electroporation and dielectric breakdown both result from the application of an electric field, the mechanisms involved are fundamentally different. In dielectric breakdown the barrier material is ionized, creating a conductive pathway; the alteration of the material is thus chemical in nature. In contrast, during electroporation the lipid molecules are not chemically altered but simply shift position, opening up a pore that acts as the conductive pathway through the bilayer once it is filled with water.
Electroporation is a dynamic phenomenon that depends on the local transmembrane voltage at each point on the cell membrane. It is generally accepted that for a given pulse duration and shape, a specific transmembrane voltage threshold exists for the manifestation of the electroporation phenomenon (from 0.5 V to 1 V). This leads to the definition of an electric field magnitude threshold for electroporation (E_th). That is, only the cells within areas where E ≥ E_th are electroporated. If a second, higher threshold (E_ir) is reached or surpassed, electroporation will compromise the viability of the cells, i.e., irreversible electroporation (IRE). [ 53 ]
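As an illustration of these thresholds, the sketch below classifies points in a simulated field map as unaffected, reversibly electroporated, or irreversibly electroporated. The threshold values and the 1/r field falloff are placeholders chosen for illustration, not measured tissue parameters:

```python
import numpy as np

# Hypothetical thresholds (V/cm); real values are tissue- and pulse-dependent.
E_TH = 400.0  # reversible electroporation threshold
E_IR = 800.0  # irreversible electroporation threshold

def classify(field_map):
    """Label each point: 0 = unaffected, 1 = reversible EP, 2 = irreversible EP."""
    labels = np.zeros(field_map.shape, dtype=int)
    labels[field_map >= E_TH] = 1
    labels[field_map >= E_IR] = 2
    return labels

# Example: field decaying with distance from an electrode (idealized 1/r falloff).
r = np.linspace(0.1, 2.0, 10)  # cm from the electrode
E = 1500.0 / r                 # field magnitude in V/cm
for dist, label in zip(r, classify(E)):
    print(f"r = {dist:.2f} cm -> {['none', 'reversible', 'irreversible'][label]}")
```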
Electroporation is a process with several distinct phases. [ 54 ] [ 55 ] First, a short electrical pulse is applied. Typical parameters would be 300–400 mV for < 1 ms across the membrane (note: the voltages used in cell experiments are typically much larger because they are applied across large distances to the bulk solution, so the resulting field across the actual membrane is only a small fraction of the applied bias). Application of this potential causes migration of ions from the surrounding solution to the membrane, which charges like a capacitor. Once the critical level is achieved, rapid localized rearrangements in lipid morphology occur. The resulting structure is believed to be a "pre-pore", since it is not electrically conductive but leads rapidly to the creation of a conductive pore. [ 56 ] Evidence for the existence of such pre-pores comes mostly from the "flickering" of pores, which suggests a transition between conductive and insulating states. [ 57 ] It has been suggested that these pre-pores are small (~3 Å) hydrophobic defects. If this theory is correct, then the transition to a conductive state could be explained by a rearrangement at the pore edge, in which the lipid heads fold over to create a hydrophilic interface. Finally, these conductive pores can either heal, resealing the bilayer, or expand, eventually rupturing it. The resultant fate depends on whether the critical defect size was exceeded, [ 58 ] which in turn depends on the applied field, local mechanical stress, and bilayer edge energy.
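The membrane-charging step can be approximated by the textbook Schwan relation for a spherical cell with first-order charging, ΔΨ(t) = 1.5·E·r·cosθ·(1 − e^(−t/τ)), where E is the applied field, r the cell radius, θ the angle from the field direction, and τ the membrane charging time constant. A minimal sketch under those standard assumptions (the numerical values are illustrative):

```python
import numpy as np

def transmembrane_voltage(E_V_per_m, radius_m, theta_rad, t_s, tau_s=1e-6):
    """Induced transmembrane voltage for a spherical cell (Schwan equation).

    Assumes a non-conductive membrane and a uniform external field; tau is
    the membrane charging time constant (~1 us is a typical order of magnitude).
    """
    return 1.5 * E_V_per_m * radius_m * np.cos(theta_rad) * (1.0 - np.exp(-t_s / tau_s))

# Illustrative numbers: 1 kV/cm field, 10 um cell radius, at the pole of the cell.
E = 1e5    # 1 kV/cm expressed in V/m
r = 10e-6  # 10 micrometre radius
for t in (0.1e-6, 1e-6, 5e-6):
    dV = transmembrane_voltage(E, r, theta_rad=0.0, t_s=t)
    print(f"t = {t*1e6:.1f} us: delta-psi = {dV:.2f} V")
```

With these numbers the steady-state value at the pole is 1.5 V, consistent with the 0.5–1 V poration threshold being reached within about a microsecond.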
Application of electric pulses of sufficient strength to the cell causes an increase in the trans-membrane potential difference, which provokes membrane destabilization. Cell membrane permeability is increased, and otherwise non-permeant molecules enter the cell. [ 59 ] [ 60 ] Although the mechanisms of gene electrotransfer are not yet fully understood, it was shown that the introduction of DNA only occurs in the part of the membrane facing the cathode and that several steps are needed for successful transfection: electrophoretic migration of DNA towards the cell, DNA insertion into the membrane, translocation across the membrane, migration of DNA towards the nucleus, transfer of DNA across the nuclear envelope, and finally gene expression. [ 61 ] There are a number of factors that can influence the efficiency of gene electrotransfer, such as: temperature, parameters of electric pulses, DNA concentration, electroporation buffer used, cell size, and the ability of cells to express transfected genes. [ 62 ] In in vivo gene electrotransfer, DNA diffusion through the extracellular matrix, the properties of the tissue, and overall tissue conductivity may be crucial. [ 63 ]
Gene expression is the process (including its regulation) by which the information from a gene is used in the synthesis of a functional gene product, which ultimately affects a phenotype. These products are often proteins, but for non-protein-coding genes such as those for transfer RNA (tRNA) and small nuclear RNA (snRNA), the product is a functional non-coding RNA.
The process of gene expression is used by all known life— eukaryotes (including multicellular organisms ), prokaryotes ( bacteria and archaea ), and viruses —to generate the macromolecular machinery for life.
In genetics , gene expression is the most fundamental level at which the genotype gives rise to the phenotype , i.e. observable trait. The genetic information stored in DNA represents the genotype, whereas the phenotype results from the "interpretation" of that information. Such phenotypes are often displayed by the synthesis of proteins that control the organism's structure and development, or that act as enzymes catalyzing specific metabolic pathways.
All steps in the gene expression process may be modulated (regulated), including the transcription , RNA splicing , translation , and post-translational modification of a protein. Regulation of gene expression gives control over the timing, location, and amount of a given gene product (protein or ncRNA) present in a cell and can have a profound effect on the cellular structure and function. Regulation of gene expression is the basis for cellular differentiation , development , morphogenesis and the versatility and adaptability of any organism . Gene regulation may therefore serve as a substrate for evolutionary change.
The production of an RNA copy from a DNA strand is called transcription, and is performed by RNA polymerases, which add one ribonucleotide at a time to a growing RNA strand according to the complementarity of the nucleotide bases. This RNA is complementary to the template 3′ → 5′ DNA strand, [ 1 ] with the exception that thymines (T) are replaced with uracils (U) in the RNA, and occasional errors may occur.
In bacteria, transcription is carried out by a single type of RNA polymerase, which needs to bind a DNA sequence called a Pribnow box with the help of the sigma factor protein (σ factor) to start transcription. In eukaryotes, transcription is performed in the nucleus by three types of RNA polymerases, each of which needs a special DNA sequence called the promoter and a set of DNA-binding proteins— transcription factors —to initiate the process (see regulation of transcription below). RNA polymerase I is responsible for transcription of ribosomal RNA (rRNA) genes. RNA polymerase II (Pol II) transcribes all protein-coding genes but also some non-coding RNAs ( e.g. , snRNAs, snoRNAs or long non-coding RNAs ). RNA polymerase III transcribes 5S rRNA , transfer RNA (tRNA) genes, and some small non-coding RNAs ( e.g. , 7SK ). Transcription ends when the polymerase encounters a sequence called the terminator .
While transcription of prokaryotic protein-coding genes creates messenger RNA (mRNA) that is ready for translation into protein, transcription of eukaryotic genes leaves a primary transcript of RNA (pre-RNA), which first has to undergo a series of modifications to become mature RNA. The types and steps involved in the maturation processes vary between coding and non-coding pre-RNAs; even though pre-RNA molecules for both mRNA and tRNA undergo splicing, the steps and machinery involved are different. [ 2 ] The processing of non-coding RNA is described below (non-coding RNA maturation).
The processing of pre-mRNA includes 5′ capping, a set of enzymatic reactions that add 7-methylguanosine (m7G) to the 5′ end of the pre-mRNA and thus protect the RNA from degradation by exonucleases. [ 3 ] The m7G cap is then bound by the cap-binding complex heterodimer (CBP20/CBP80), which aids in mRNA export to the cytoplasm and also protects the RNA from decapping. [ 4 ]
Another modification is 3′ cleavage and polyadenylation. [ 5 ] These occur if a polyadenylation signal sequence (5′-AAUAAA-3′) is present in the pre-mRNA, usually located between the protein-coding sequence and the terminator. [ 6 ] The pre-mRNA is first cleaved, and then a series of ~200 adenines (A) is added to form the poly(A) tail, which protects the RNA from degradation. [ 7 ] The poly(A) tail is bound by multiple poly(A)-binding proteins (PABPs) necessary for mRNA export and translation re-initiation. [ 8 ] In the inverse process of deadenylation, poly(A) tails are shortened by the CCR4-Not 3′-5′ exonuclease, which often leads to full transcript decay. [ 9 ]
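A toy sketch of this step: scan a pre-mRNA for the canonical AAUAAA signal and, if found, cleave downstream and append a poly(A) tail. Real 3′-end processing is performed by a large protein complex and cleaves roughly 10–30 nucleotides past the signal; the fixed offset and tail length here are simplifications:

```python
def polyadenylate(pre_mrna, cleavage_offset=20, tail_length=200):
    """Crude model of 3' cleavage and polyadenylation.

    Finds the canonical AAUAAA signal, cleaves a fixed distance downstream
    (real cleavage sites vary), and appends a poly(A) tail.
    """
    signal_pos = pre_mrna.find("AAUAAA")
    if signal_pos == -1:
        return None  # no polyadenylation signal found
    cleave_at = min(len(pre_mrna), signal_pos + 6 + cleavage_offset)
    return pre_mrna[:cleave_at] + "A" * tail_length

transcript = "AUGGCCAUUGUAAUGGGCCGCUGAAAUAAAGGCUUCUUAACGGAAUUCCG"
print(polyadenylate(transcript, tail_length=10))
```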
A very important modification of eukaryotic pre-mRNA is RNA splicing. The majority of eukaryotic pre-mRNAs consist of alternating segments called exons and introns. [ 10 ] During the process of splicing, an RNA-protein catalytic complex known as the spliceosome catalyzes two transesterification reactions, which remove an intron and release it in the form of a lariat structure, and then splice neighbouring exons together. [ 11 ] In certain cases, some introns or exons can be either removed or retained in the mature mRNA. [ 12 ] This so-called alternative splicing creates a series of different transcripts originating from a single gene. Because these transcripts can potentially be translated into different proteins, splicing extends the complexity of eukaryotic gene expression and the size of a species' proteome. [ 13 ]
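Conceptually, splicing removes intron intervals and concatenates the remaining exons. The sketch below does this given known intron coordinates; it is purely illustrative, since real splice-site selection is governed by the spliceosome recognizing sequence signals (e.g. the GU...AG intron boundaries shown in the example), not by pre-supplied coordinates:

```python
def splice(pre_mrna, introns):
    """Remove intron intervals (0-based, end-exclusive) and join the exons.

    `introns` must be sorted and non-overlapping; coordinates are assumed
    to be known in advance, which is a simplification of real splicing.
    """
    exons, cursor = [], 0
    for start, end in introns:
        exons.append(pre_mrna[cursor:start])
        cursor = end
    exons.append(pre_mrna[cursor:])
    return "".join(exons)

# Two introns, each beginning with GU and ending with AG.
pre = "AUGGCU" + "GUAAGUUUUAG" + "GGCCAA" + "GUCCGAAAG" + "UAA"
mature = splice(pre, introns=[(6, 17), (23, 32)])
print(mature)  # AUGGCUGGCCAAUAA
```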
Extensive RNA processing may be an evolutionary advantage made possible by the nucleus of eukaryotes. In prokaryotes, transcription and translation happen together, whilst in eukaryotes, the nuclear membrane separates the two processes, giving time for RNA processing to occur. [ 14 ]
In most organisms, non-coding RNA genes (ncRNAs) are transcribed as precursors that undergo further processing. In the case of ribosomal RNAs (rRNA), they are often transcribed as a pre-rRNA that contains one or more rRNAs. The pre-rRNA is cleaved and modified (2′-O-methylation and pseudouridine formation) at specific sites by approximately 150 different small nucleolus-restricted RNA species, called snoRNAs. SnoRNAs associate with proteins, forming snoRNPs. While the snoRNA part base-pairs with the target RNA and thus positions the modification at a precise site, the protein part performs the catalytic reaction. In eukaryotes, in particular, a snoRNP called RNase MRP cleaves the 45S pre-rRNA into the 28S, 5.8S, and 18S rRNAs. The rRNA and RNA processing factors form large aggregates called the nucleolus. [ 15 ]
In the case of transfer RNA (tRNA), for example, the 5′ sequence is removed by RNase P, [ 16 ] whereas the 3′ end is removed by the tRNase Z enzyme, [ 17 ] and the non-templated 3′ CCA tail is added by a nucleotidyl transferase. [ 18 ] In the case of microRNA (miRNA), miRNAs are first transcribed as primary transcripts, or pri-miRNA, with a cap and poly-A tail, and processed to short, 70-nucleotide stem-loop structures known as pre-miRNA in the cell nucleus by the enzymes Drosha and Pasha. After export, the pre-miRNA is processed to mature miRNA in the cytoplasm by interaction with the endonuclease Dicer, which also initiates the formation of the RNA-induced silencing complex (RISC), composed of the Argonaute protein.
Even snRNAs and snoRNAs themselves undergo a series of modifications before they become part of a functional RNP complex. [ 19 ] This is done either in the nucleoplasm or in specialized compartments called Cajal bodies. [ 20 ] Their bases are methylated or pseudouridylated by a group of small Cajal body-specific RNAs (scaRNAs), which are structurally similar to snoRNAs. [ 21 ]
In eukaryotes most mature RNA must be exported to the cytoplasm from the nucleus . While some RNAs function in the nucleus, many RNAs are transported through the nuclear pores and into the cytosol . [ 22 ] Export of RNAs requires association with specific proteins known as exportins. Specific exportin molecules are responsible for the export of a given RNA type. mRNA transport also requires the correct association with Exon Junction Complex (EJC), which ensures that correct processing of the mRNA is completed before export. In some cases RNAs are additionally transported to a specific part of the cytoplasm, such as a synapse ; they are then towed by motor proteins that bind through linker proteins to specific sequences (called "zipcodes") on the RNA. [ 23 ]
For some non-coding RNA, the mature RNA is the final gene product. [ 24 ] In the case of messenger RNA (mRNA) the RNA is an information carrier coding for the synthesis of one or more proteins. mRNA carrying a single protein sequence (common in eukaryotes) is monocistronic whilst mRNA carrying multiple protein sequences (common in prokaryotes) is known as polycistronic .
Every mRNA consists of three parts: a 5′ untranslated region (5′UTR), a protein-coding region or open reading frame (ORF), and a 3′ untranslated region (3′UTR). The coding region carries the information for protein synthesis, encoded by the genetic code as nucleotide triplets. Each triplet of nucleotides in the coding region is called a codon and corresponds to a binding site complementary to an anticodon triplet in transfer RNA. Transfer RNAs with the same anticodon sequence always carry an identical type of amino acid. Amino acids are then chained together by the ribosome according to the order of triplets in the coding region. The ribosome helps transfer RNA bind to messenger RNA and links the amino acid carried by each transfer RNA into a growing, as-yet unstructured polypeptide chain. [ 25 ] [ 26 ] Each mRNA molecule is translated into many protein molecules, on average ~2800 in mammals. [ 27 ] [ 28 ]
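Because the codon-to-amino-acid mapping is fixed by the standard genetic code, translation of an ORF can be sketched directly in code. The snippet below builds the standard RNA codon table and translates from the first AUG to the first in-frame stop codon; it deliberately ignores real-world complications such as initiation context and ribosomal frameshifting:

```python
# Build the standard RNA codon table. With bases ordered U, C, A, G, the 64
# codons in nested order map onto this canonical amino-acid string
# ('*' marks stop codons).
BASES = "UCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = dict(
    zip((a + b + c for a in BASES for b in BASES for c in BASES), AMINO_ACIDS)
)

def translate(mrna):
    """Translate from the first AUG to the first in-frame stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return ""
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("GGCAUGGCUUCUGACUAUUGA"))  # -> MASDY
```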
In prokaryotes translation generally occurs at the point of transcription (co-transcriptionally), often using a messenger RNA that is still in the process of being created. In eukaryotes translation can occur in a variety of regions of the cell, depending on where the protein being synthesized is destined to function. Major locations are the cytoplasm for soluble cytoplasmic proteins and the membrane of the endoplasmic reticulum for proteins that are destined for export from the cell or insertion into a cell membrane. Proteins destined for the endoplasmic reticulum are recognised part-way through the translation process. This is governed by the signal recognition particle, a protein that binds to the ribosome and directs it to the endoplasmic reticulum when it finds a signal peptide on the growing (nascent) amino acid chain. [ 29 ]
Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA into a linear chain of amino acids. This polypeptide lacks any developed three-dimensional structure. The polypeptide then folds from a random coil into its characteristic and functional three-dimensional structure. [ 30 ] Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein, known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence (Anfinsen's dogma). [ 31 ]
The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded. [ 32 ] Failure to fold into the intended shape usually produces inactive proteins, but misfolded proteins can sometimes acquire altered, harmful properties, as in toxic prions. Several neurodegenerative and other diseases are believed to result from the accumulation of misfolded proteins. [ 33 ] Many allergies are caused by the incorrect folding of some proteins, because the immune system does not produce antibodies for certain protein structures. [ 34 ]
Proteins called chaperones assist the newly formed protein to attain (fold into) the three-dimensional structure it needs to function. [ 35 ] Similarly, RNA chaperones help RNAs attain their functional shapes. [ 36 ] Assisting protein folding is one of the main roles of the endoplasmic reticulum in eukaryotes.
Secretory proteins of eukaryotes or prokaryotes must be translocated to enter the secretory pathway. Newly synthesized proteins are directed to the eukaryotic Sec61 or prokaryotic SecYEG translocation channel by signal peptides . The efficiency of protein secretion in eukaryotes is very dependent on the signal peptide which has been used. [ 37 ]
Many proteins are destined for parts of the cell other than the cytosol, and a wide range of signalling sequences (signal peptides) are used to direct proteins to where they are supposed to be. [ 38 ] [ 39 ] In prokaryotes this is normally a simple process due to the limited compartmentalisation of the cell. [ 40 ] However, in eukaryotes there is a great variety of different targeting processes to ensure the protein arrives at the correct organelle. [ 41 ]
Not all proteins remain within the cell and many are exported, for example, digestive enzymes , hormones and extracellular matrix proteins. In eukaryotes the export pathway is well developed and the main mechanism for the export of these proteins is translocation to the endoplasmic reticulum, followed by transport via the Golgi apparatus . [ 42 ] [ 43 ]
Protein degradation is a major regulatory mechanism of gene expression [ 44 ] [ 45 ] and contributes substantially to shaping proteomes, especially of tissues and cells that do not grow very fast. [ 46 ] Protein degradation is a highly regulated process, which results in significant and context-dependent variation in degradation rates between proteins, as well as for the same protein across cell types and tissue types. This variation can contribute about 40% of the variance of protein levels across slowly growing tissues, with the remaining 60% likely coming from protein synthesis, including transcription and translation as explained above. [ 46 ]
Regulation of gene expression is the control of the amount and timing of appearance of the functional product of a gene. Control of expression is vital to allow a cell to produce the gene products it needs when it needs them; in turn, this gives cells the flexibility to adapt to a variable environment, external signals, damage to the cell, and other stimuli. More generally, gene regulation gives the cell control over all structure and function, and is the basis for cellular differentiation , morphogenesis and the versatility and adaptability of any organism.
Numerous terms are used to describe types of genes depending on how they are regulated; these include: a constitutive gene, which is transcribed continually; a housekeeping gene, typically a constitutive gene required to maintain basic cellular function and therefore expressed in all cell types of an organism; a facultative gene, which is transcribed only when needed; and an inducible gene, which is expressed either in response to an environmental change or depending on the position in the cell cycle.
Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. The stability of the final gene product, whether it is RNA or protein, also contributes to the expression level of the gene—an unstable product results in a low expression level. In general gene expression is regulated through changes [ 47 ] in the number and type of interactions between molecules [ 48 ] that collectively influence transcription of DNA [ 49 ] and translation of RNA. [ 50 ]
Some simple examples of where gene expression is important are: control of insulin expression so that it gives a signal for blood glucose regulation; X chromosome inactivation in female mammals to prevent an "overdose" of the genes it contains; and cyclin expression levels, which control progression through the eukaryotic cell cycle.
Regulation of transcription can be broken down into three main routes of influence: genetic (direct interaction of a control factor with the gene), modulation (interaction of a control factor with the transcription machinery), and epigenetic (non-sequence changes in DNA structure that influence transcription). [ 51 ] [ 52 ]
Direct interaction with DNA is the simplest and the most direct method by which a protein changes transcription levels. [ 53 ] Genes often have several protein binding sites around the coding region with the specific function of regulating transcription. [ 54 ] There are many classes of regulatory DNA binding sites known as enhancers , insulators and silencers . [ 55 ] The mechanisms for regulating transcription are varied, from blocking key binding sites on the DNA for RNA polymerase to acting as an activator and promoting transcription by assisting RNA polymerase binding. [ 56 ]
The activity of transcription factors is further modulated by intracellular signals causing protein post-translational modification including phosphorylation , acetylation , or glycosylation . [ 57 ] These changes influence a transcription factor's ability to bind, directly or indirectly, to promoter DNA, to recruit RNA polymerase, or to favor elongation of a newly synthesized RNA molecule. [ 58 ]
The nuclear membrane in eukaryotes allows further regulation of transcription factors by the duration of their presence in the nucleus, which is regulated by reversible changes in their structure and by binding of other proteins. [ 59 ] Environmental stimuli or endocrine signals [ 60 ] may cause modification of regulatory proteins [ 61 ] eliciting cascades of intracellular signals, [ 62 ] which result in regulation of gene expression.
It has become apparent that there is a significant influence of non-DNA-sequence specific effects on transcription. [ 63 ] These effects are referred to as epigenetic and involve the higher order structure of DNA, non-sequence specific DNA binding proteins and chemical modification of DNA. [ 64 ] In general epigenetic effects alter the accessibility of DNA to proteins and so modulate transcription. [ 65 ]
In eukaryotes the structure of chromatin , controlled by the histone code , regulates access to DNA with significant impacts on the expression of genes in euchromatin and heterochromatin areas. [ 66 ]
Gene expression in mammals is regulated by many cis-regulatory elements , including core promoters and promoter-proximal elements that are located near the transcription start sites of genes, upstream on the DNA (towards the 5' region of the sense strand ). Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers , silencers , insulators and tethering elements. [ 67 ] Enhancers and their associated transcription factors have a leading role in the regulation of gene expression. [ 68 ]
Enhancers are genome regions that regulate genes. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come into physical proximity with the promoters of their target genes. [ 69 ] Multiple enhancers, each often tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control gene expression. [ 69 ]
An enhancer commonly loops around to come into proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. a dimer of CTCF or YY1 ), with one member of the dimer anchored to its binding motif on the enhancer and the other anchored to its binding motif on the promoter. [ 70 ] Several cell function-specific transcription factors (among the about 1,600 transcription factors in a human cell) [ 71 ] generally bind to specific motifs on an enhancer. [ 72 ] A small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, governs the transcription level of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter. [ 73 ]
Enhancers, when active, are generally transcribed from both strands of DNA, with RNA polymerases acting in two different directions and producing two eRNAs. [ 74 ] An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it, and that activated transcription factor may then activate the enhancer to which it is bound. [ 75 ] An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene. [ 76 ]
DNA methylation is a widespread mechanism of epigenetic influence on gene expression; it is seen in bacteria and eukaryotes and has roles in heritable transcription silencing and transcription regulation. Methylation most often occurs on a cytosine. Methylation of cytosine primarily occurs in dinucleotide sequences where a cytosine is followed by a guanine, a CpG site. The number of CpG sites in the human genome is about 28 million. [ 77 ] Depending on the type of cell, about 70% of the CpG sites carry a methylated cytosine. [ 78 ]
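CpG sites are simple to locate computationally: they are occurrences of the dinucleotide "CG". The sketch below counts them and also computes the observed/expected CpG ratio used in the classical Gardiner-Garden and Frommer definition of CpG islands (the thresholds in the comment are their published criteria; the example sequence is made up):

```python
def cpg_stats(seq):
    """Count CpG dinucleotides and compute the observed/expected CpG ratio."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")
    # Gardiner-Garden & Frommer: obs/exp = (CpG count * length) / (C count * G count)
    obs_exp = (cpg * n) / (c * g) if c and g else 0.0
    gc_content = (c + g) / n
    return cpg, obs_exp, gc_content

seq = "CGGCGCTAGCGATCGCGGTACGCGCGAATTCGCG"  # made-up example sequence
cpg, ratio, gc = cpg_stats(seq)
# CpG island criteria: length > 200 bp, GC content > 50%, obs/exp ratio > 0.6
print(f"{cpg} CpG sites, obs/exp = {ratio:.2f}, GC = {gc:.0%}")
```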
Methylation of cytosine in DNA has a major role in regulating gene expression. Methylation of CpGs in a promoter region of a gene usually represses gene transcription [ 79 ] while methylation of CpGs in the body of a gene increases expression. [ 80 ] TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene. [ 81 ]
In rats, contextual fear conditioning (CFC) is a painful learning experience; just one episode of CFC can result in a life-long fearful memory. [ 82 ] After an episode of CFC, cytosine methylation is altered in the promoter regions of about 9.17% of all genes in the DNA of hippocampal neurons of a rat. [ 83 ] The hippocampus is where new memories are initially stored. After CFC, about 500 genes have increased transcription (often due to demethylation of CpG sites in a promoter region) and about 1,000 genes have decreased transcription (often due to newly formed 5-methylcytosine at CpG sites in a promoter region). The pattern of induced and repressed genes within neurons appears to provide a molecular basis for forming the first transient memory of this training event in the hippocampus of the rat brain. [ 83 ]
Some specific mechanisms guiding new DNA methylations and new DNA demethylations in the hippocampus during memory establishment have been established (see [ 84 ] for summary). One mechanism includes guiding the short isoform of the TET1 DNA demethylation enzyme, TET1s, to about 600 locations on the genome. The guidance is performed by association of TET1s with EGR1 protein, a transcription factor important in memory formation. Bringing TET1s to these locations initiates DNA demethylation at those sites, up-regulating associated genes. A second mechanism involves DNMT3A2, a splice-isoform of DNA methyltransferase DNMT3A, which adds methyl groups to cytosines in DNA. This isoform is induced by synaptic activity, and its location of action appears to be determined by histone post-translational modifications (a histone code ). The resulting new messenger RNAs are then transported by messenger RNP particles (neuronal granules) to synapses of the neurons, where they can be translated into proteins affecting the activities of synapses. [ 84 ]
In particular, the brain-derived neurotrophic factor gene ( BDNF ) is known as a "learning gene". [ 85 ] After CFC there was upregulation of BDNF gene expression, related to decreased CpG methylation of certain internal promoters of the gene, and this was correlated with learning. [ 85 ]
The majority of gene promoters contain a CpG island with numerous CpG sites . [ 86 ] When many of a gene's promoter CpG sites are methylated the gene becomes silenced. [ 87 ] Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. [ 88 ] However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer ). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs . [ 89 ] In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-transcribed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers ).
In eukaryotes, where export of RNA is required before translation is possible, nuclear export is thought to provide additional control over gene expression. All transport in and out of the nucleus is via the nuclear pore and transport is controlled by a wide range of importin and exportin proteins. [ 90 ]
Expression of a gene coding for a protein is only possible if the messenger RNA carrying the code survives long enough to be translated. [ 41 ] In a typical cell, an RNA molecule is only stable if specifically protected from degradation. [ 91 ] RNA degradation has particular importance in regulation of expression in eukaryotic cells where mRNA has to travel significant distances before being translated. [ 92 ] In eukaryotes, RNA is stabilised by certain post-transcriptional modifications, particularly the 5′ cap and poly-adenylated tail . [ 93 ]
Intentional degradation of mRNA is used not just as a defence mechanism from foreign RNA (normally from viruses) but also as a route of mRNA destabilisation . [ 94 ] If an mRNA molecule has a complementary sequence to a small interfering RNA then it is targeted for destruction via the RNA interference pathway. [ 95 ]
Three prime untranslated regions (3′UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3′-UTRs often contain binding sites both for microRNAs (miRNAs) and for regulatory proteins. [ 96 ] By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. [ 97 ] The 3′-UTR may also have silencer regions that bind repressor proteins that inhibit the expression of an mRNA. [ 98 ]
The 3′-UTR often contains microRNA response elements (MREs) . MREs are sequences to which miRNAs bind. These are prevalent motifs within 3′-UTRs. Among all regulatory motifs within the 3′-UTRs (e.g. including silencer regions), MREs make up about half of the motifs. [ 99 ]
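miRNA target prediction at its simplest looks for matches to the miRNA "seed" (nucleotides 2–8 from its 5′ end) in the 3′-UTR: the UTR must carry the reverse complement of the seed. The sketch below implements only this canonical 7mer seed match; real predictors such as TargetScan additionally weigh conservation and sequence context (the UTR sequence here is invented):

```python
def reverse_complement_rna(seq):
    """Reverse complement of an RNA sequence."""
    return seq[::-1].translate(str.maketrans("AUGC", "UACG"))

def seed_match_sites(mirna, utr):
    """Return 0-based positions in the 3'-UTR matching the miRNA seed.

    Uses the canonical 7mer seed: miRNA nucleotides 2-8 (1-based), so the
    UTR site must equal the reverse complement of that 7-mer.
    """
    seed = mirna[1:8]                    # nucleotides 2-8 of the miRNA
    site = reverse_complement_rna(seed)  # sequence we look for in the UTR
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"  # a let-7 family sequence, for illustration
utr = "AAACCUACCUCAGGGAAACUACCUCAUUU"   # invented UTR with two seed sites
print(seed_match_sites(mirna, utr))      # -> [4, 18]
```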
As of 2014, the miRBase web site, [ 100 ] an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biological species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). [ 101 ] Friedman et al. [ 101 ] estimate that >45,000 miRNA target sites within human mRNA 3′UTRs are conserved above background levels, and that >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. [ 102 ] Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold). [ 103 ] [ 104 ]
The effects of miRNA dysregulation of gene expression seem to be important in cancer. [ 105 ] For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down regulating DNA repair enzymes. [ 106 ]
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders. [ 107 ] [ 108 ]
Direct regulation of translation is less prevalent than control of transcription or mRNA stability but is occasionally used. [ 109 ] Inhibition of protein translation is a major target for toxins and antibiotics , so they can kill a cell by overriding its normal gene expression control. [ 110 ] Protein synthesis inhibitors include the antibiotic neomycin and the toxin ricin . [ 111 ]
Post-translational modifications (PTMs) are covalent modifications to proteins. Like RNA splicing, they help to significantly diversify the proteome. These modifications are usually catalyzed by enzymes. Additionally, processes like covalent additions to amino acid side chain residues can often be reversed by other enzymes. However, some, like the proteolytic cleavage of the protein backbone, are irreversible. [ 112 ]
PTMs play many important roles in the cell. [ 113 ] For example, phosphorylation is primarily involved in activating and deactivating proteins and in signaling pathways. [ 114 ] PTMs are involved in transcriptional regulation: an important function of acetylation and methylation is histone tail modification, which alters how accessible DNA is for transcription. [ 112 ] They can also be seen in the immune system, where glycosylation plays a key role. [ 115 ] One type of PTM can initiate another type of PTM, as can be seen in how ubiquitination tags proteins for degradation through proteolysis. [ 112 ] Proteolysis, other than being involved in breaking down proteins, is also important in activating and deactivating them, and in regulating biological processes such as DNA transcription and cell death. [ 116 ]
Measuring gene expression is an important part of many life sciences, as the ability to quantify the level at which a particular gene is expressed within a cell, tissue or organism can provide a lot of valuable information. For example, measuring gene expression can help identify viral infection of a cell (via viral protein expression), determine an individual's susceptibility to cancer (via oncogene expression), or find whether a bacterium is resistant to penicillin (via beta-lactamase expression).
Similarly, the analysis of the location of protein expression is a powerful tool, and this can be done on an organismal or cellular scale. Investigation of localization is particularly important for the study of development in multicellular organisms and as an indicator of protein function in single cells. Ideally, measurement of expression is done by detecting the final gene product (for many genes, this is the protein); however, it is often easier to detect one of the precursors, typically mRNA and to infer gene-expression levels from these measurements.
Levels of mRNA can be quantitatively measured by northern blotting, which provides size and sequence information about the mRNA molecules. [ 117 ] A sample of RNA is separated on an agarose gel and hybridized to a radioactively labeled RNA probe that is complementary to the target sequence. [ 118 ] The radiolabeled RNA is then detected by an autoradiograph. [ 119 ] Because the use of radioactive reagents makes the procedure time-consuming and potentially dangerous, alternative labeling and detection methods, such as digoxigenin and biotin chemistries, have been developed. [ 120 ] Perceived disadvantages of northern blotting are that large quantities of RNA are required and that quantification may not be completely accurate, as it involves measuring band strength in an image of a gel. [ 121 ] On the other hand, the additional mRNA size information from the northern blot allows the discrimination of alternately spliced transcripts. [ 122 ] [ 123 ]
Another approach for measuring mRNA abundance is RT-qPCR. In this technique, reverse transcription is followed by quantitative PCR . Reverse transcription first generates a DNA template from the mRNA; this single-stranded template is called cDNA . The cDNA template is then amplified in the quantitative step, during which the fluorescence emitted by labeled hybridization probes or intercalating dyes changes as the DNA amplification process progresses. [ 124 ] With a carefully constructed standard curve, qPCR can produce an absolute measurement of the number of copies of original mRNA, typically in units of copies per nanolitre of homogenized tissue or copies per cell. [ 125 ] qPCR is very sensitive (detection of a single mRNA molecule is theoretically possible), but can be expensive depending on the type of reporter used; fluorescently labeled oligonucleotide probes are more expensive than non-specific intercalating fluorescent dyes. [ 126 ]
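A qPCR standard curve is a linear fit of quantification cycle (Cq) against log10 of known input copy number; unknowns are then read off the fitted line, and the slope also yields the amplification efficiency (a slope of about −3.32 corresponds to 100% efficiency). A minimal sketch with invented calibration values:

```python
import numpy as np

# Standard curve: known copy numbers vs. measured Cq values (invented data).
copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
cq = np.array([30.1, 26.8, 23.4, 20.1, 16.7])

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0  # ~1.0 means perfect per-cycle doubling

def quantify(cq_unknown):
    """Estimate starting copy number from a measured Cq via the standard curve."""
    return 10 ** ((cq_unknown - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")
print(f"Cq 22.0 -> {quantify(22.0):.2e} copies")
```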
For expression profiling , or high-throughput analysis of many genes within a sample, quantitative PCR may be performed for hundreds of genes simultaneously in the case of low-density arrays. [ 127 ] A second approach is the hybridization microarray . A single array or "chip" may contain probes to determine transcript levels for every known gene in the genome of one or more organisms. [ 128 ] Alternatively, "tag based" technologies like Serial analysis of gene expression (SAGE) and RNA-Seq , which can provide a relative measure of the cellular concentration of different mRNAs, can be used. [ 129 ] An advantage of tag-based methods is the "open architecture", allowing for the exact measurement of any transcript, with a known or unknown sequence. [ 130 ] Next-generation sequencing (NGS) such as RNA-Seq is another approach, producing vast quantities of sequence data that can be matched to a reference genome. Although NGS is comparatively time-consuming, expensive, and resource-intensive, it can identify single-nucleotide polymorphisms , splice-variants, and novel genes, and can also be used to profile expression in organisms for which little or no sequence information is available. [ 131 ]
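When comparing such profiles between conditions, raw read counts are usually normalized for sequencing depth and then compared as log fold changes. The sketch below shows counts-per-million (CPM) normalization and a log2 fold change with a pseudocount, a common first step; the count data are invented, and real analyses rely on statistical frameworks such as DESeq2 or edgeR:

```python
import numpy as np

# Invented read counts for four genes in two conditions.
counts_ctrl = np.array([500, 1200, 30, 8000])
counts_treat = np.array([480, 3600, 10, 8100])

def cpm(counts):
    """Counts per million: normalizes for library size (total mapped reads)."""
    return counts / counts.sum() * 1e6

pseudo = 0.5  # pseudocount avoids taking the log of zero for low counts
log2_fc = np.log2((cpm(counts_treat) + pseudo) / (cpm(counts_ctrl) + pseudo))

for gene, fc in zip(["geneA", "geneB", "geneC", "geneD"], log2_fc):
    print(f"{gene}: log2 fold change = {fc:+.2f}")
```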
Expression profiles of this kind are generated by organizations such as the Genomics Institute of the Novartis Research Foundation and the European Bioinformatics Institute, and additional information can be found by searching their databases (see the citation for an example for the GLUT4 transporter). [ 132 ] These profiles indicate the level of expression of a certain gene (and hence the RNA produced from it) in a certain tissue, and are typically color-coded by tissue.
For genes encoding proteins, the expression level can be directly assessed by a number of methods with some clear analogies to the techniques for mRNA quantification.
One of the most commonly used methods is to perform a Western blot against the protein of interest. [ 133 ] This gives information on the size of the protein in addition to its identity. A sample (often cellular lysate ) is separated on a polyacrylamide gel , transferred to a membrane and then probed with an antibody to the protein of interest. The antibody can either be conjugated to a fluorophore or to horseradish peroxidase for imaging and/or quantification. The gel-based nature of this assay makes quantification less accurate, but it has the advantage of being able to identify later modifications to the protein, for example proteolysis or ubiquitination, from changes in size.
While transcription directly reflects gene expression, the copy number of mRNA molecules does not directly correlate with the number of protein molecules translated from mRNA. Quantification of both protein and mRNA permits a correlation of the two levels. Regulation on each step of gene expression can impact the correlation, as shown for regulation of translation [ 28 ] or protein stability. [ 134 ] Post-translational factors, such as protein transport in highly polar cells, [ 135 ] can influence the measured mRNA-protein correlation as well.
Analysis of expression is not limited to quantification; localization can also be determined. mRNA can be detected with a suitably labelled complementary mRNA strand and protein can be detected via labelled antibodies. The probed sample is then observed by microscopy to identify where the mRNA or protein is.
By replacing the gene with a new version fused to a green fluorescent protein (GFP) marker or similar, expression may be directly quantified in live cells. This is done by imaging using a fluorescence microscope. It is very difficult to clone a GFP-fused protein into its native location in the genome without affecting expression levels, so this method often cannot be used to measure endogenous gene expression. It is, however, widely used to measure the expression of a gene artificially introduced into the cell, for example via an expression vector. Note that fusing a target protein to a fluorescent reporter can significantly change the protein's behavior, including its cellular localization and expression level.
The enzyme-linked immunosorbent assay works by using antibodies immobilised on a microtiter plate to capture proteins of interest from samples added to the well. Using a detection antibody conjugated to an enzyme or fluorophore the quantity of bound protein can be accurately measured by fluorometric or colourimetric detection. The detection process is very similar to that of a Western blot, but by avoiding the gel steps more accurate quantification can be achieved.
An expression system is a system specifically designed for the production of a gene product of choice. This is normally a protein, although it may also be RNA, such as tRNA or a ribozyme. An expression system consists of a gene, normally encoded by DNA, and the molecular machinery required to transcribe the DNA into mRNA and translate the mRNA into protein using the reagents provided. In the broadest sense this includes every living cell, but the term is more normally used to refer to expression as a laboratory tool. An expression system is therefore often artificial in some manner. Expression itself, however, is a fundamentally natural process. Viruses are an excellent example: they replicate by using the host cell as an expression system for the viral proteins and genome.
Doxycycline is used in "Tet-on" and "Tet-off" tetracycline-controlled transcriptional activation to regulate transgene expression in organisms and cell cultures.
In addition to these biological tools, certain naturally observed configurations of DNA (genes, promoters, enhancers, repressors) and the associated machinery itself are referred to as an expression system. This term is normally used in the case where a gene or set of genes is switched on under well defined conditions, for example, the simple repressor switch expression system in Lambda phage and the lac operator system in bacteria. Several natural expression systems are directly used or modified and used for artificial expression systems such as the Tet-on and Tet-off expression system.
Genes have sometimes been regarded as nodes in a network, with inputs being proteins such as transcription factors, and outputs being the level of gene expression. The node itself performs a function, and the operation of these functions has been interpreted as a kind of information processing within cells that determines cellular behavior.
Gene networks can also be constructed without formulating an explicit causal model. This is often the case when assembling networks from large expression data sets. [ 136 ] Covariation and correlation of expression are computed across a large sample of cases and measurements (often transcriptome or proteome data). The source of variation can be either experimental or natural (observational). There are several ways to construct gene expression networks, but one common approach is to compute a matrix of all pair-wise correlations of expression across conditions, time points, or individuals and convert the matrix (after thresholding at some cut-off value) into a graphical representation in which nodes represent genes, transcripts, or proteins and edges connecting these nodes represent the strength of association (see GeneNetwork ). [ 137 ]
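A minimal version of this correlation-network construction: compute all pairwise Pearson correlations across samples, threshold, and emit edges. This sketch uses NumPy only; established tools (e.g. WGCNA) add soft thresholding and module detection, and the expression values here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
genes = ["g1", "g2", "g3", "g4"]
# Rows = genes, columns = samples (random stand-in for real expression data).
expr = rng.normal(size=(4, 20))
expr[1] = expr[0] * 0.9 + rng.normal(scale=0.3, size=20)  # make g2 co-vary with g1

corr = np.corrcoef(expr)  # gene-by-gene Pearson correlation matrix
threshold = 0.7           # arbitrary cut-off for drawing an edge

edges = [
    (genes[i], genes[j], corr[i, j])
    for i in range(len(genes))
    for j in range(i + 1, len(genes))
    if abs(corr[i, j]) >= threshold
]
for a, b, r in edges:
    print(f"{a} -- {b} (r = {r:.2f})")
```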
The following experimental techniques are used to measure gene expression and are listed in roughly chronological order, starting with the older, more established technologies. They are divided into two groups based on their degree of multiplexity.
In the field of molecular biology , gene expression profiling is the measurement of the activity (the expression ) of thousands of genes at once, to create a global picture of cellular function. These profiles can, for example, distinguish between cells that are actively dividing, or show how the cells react to a particular treatment. Many experiments of this sort measure an entire genome simultaneously, that is, every gene present in a particular cell.
Several transcriptomics technologies can be used to generate the necessary data to analyse. DNA microarrays [ 1 ] measure the relative activity of previously identified target genes. Sequence based techniques, like RNA-Seq , provide information on the sequences of genes in addition to their expression level.
Expression profiling is a logical next step after sequencing a genome : the sequence tells us what the cell could possibly do, while the expression profile tells us what it is actually doing at a point in time. Genes contain the instructions for making messenger RNA ( mRNA ), but at any moment each cell makes mRNA from only a fraction of the genes it carries. If a gene is used to produce mRNA, it is considered "on", otherwise "off". Many factors determine whether a gene is on or off, such as the time of day, whether or not the cell is actively dividing, its local environment, and chemical signals from other cells. For instance, skin cells, liver cells and nerve cells turn on (express) somewhat different genes and that is in large part what makes them different. Therefore, an expression profile allows one to deduce a cell's type, state, environment, and so forth.
Expression profiling experiments often involve measuring the relative amount of mRNA expressed in two or more experimental conditions. This is because altered levels of a specific sequence of mRNA suggest a changed need for the protein coded by the mRNA, perhaps indicating a homeostatic response or a pathological condition. For example, higher levels of mRNA coding for alcohol dehydrogenase suggest that the cells or tissues under study are responding to increased levels of ethanol in their environment. Similarly, if breast cancer cells express higher levels of mRNA associated with a particular transmembrane receptor than normal cells do, it might be that this receptor plays a role in breast cancer. A drug that interferes with this receptor may prevent or treat breast cancer. In developing a drug, one may perform gene expression profiling experiments to help assess the drug's toxicity, perhaps by looking for changing levels in the expression of cytochrome P450 genes, which may be a biomarker of drug metabolism. [ 2 ] Gene expression profiling may become an important diagnostic test. [ 3 ] [ 4 ]
The human genome contains on the order of 20,000 genes which work in concert to produce roughly 1,000,000 distinct proteins. This is due to alternative splicing , and also because cells make important changes to proteins through posttranslational modification after they first construct them, so a given gene serves as the basis for many possible versions of a particular protein. In any case, a single mass spectrometry experiment can identify about 2,000 proteins [ 5 ] or 0.2% of the total. While knowledge of the precise proteins a cell makes ( proteomics ) is more relevant than knowing how much messenger RNA is made from each gene, gene expression profiling provides the most global picture possible in a single experiment. However, proteomics methodology is improving. In other species, such as yeast, it is possible to identify over 4,000 proteins in just over one hour. [ 6 ]
Sometimes a scientist already has an idea of what is going on, a hypothesis , and performs an expression profiling experiment with the idea of potentially disproving it. In other words, the scientist is making a specific prediction about levels of expression that could turn out to be false.
More commonly, expression profiling takes place before enough is known about how genes interact with experimental conditions for a testable hypothesis to exist. With no hypothesis, there is nothing to disprove, but expression profiling can help to identify a candidate hypothesis for future experiments. Most early expression profiling experiments, and many current ones, have this form, [ 7 ] which is known as class discovery. A popular approach to class discovery involves grouping similar genes or samples together using one of the many existing clustering methods, such as the traditional k-means or hierarchical clustering , or the more recent MCL . [ 8 ] Apart from selecting a clustering algorithm, the user usually has to choose an appropriate proximity measure (distance or similarity) between data objects. [ 9 ] In a typical two-dimensional cluster display, similar samples (rows) and similar gene probes (columns) are organized so that they lie close together. The simplest form of class discovery would be to list all the genes that changed by more than a certain amount between two experimental conditions.
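As a rough illustration of class discovery, the following sketch clusters simulated expression profiles hierarchically with SciPy. The data, the two-group structure, and the choice of correlation distance are all assumptions made for the example, not a prescription.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy matrix: rows = samples, columns = gene probes.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (5, 50)),    # one group of samples
                  rng.normal(3, 1, (5, 50))])   # a shifted second group

# The choice of proximity measure matters: here, correlation distance.
dist = pdist(data, metric="correlation")

# Agglomerative (hierarchical) clustering, then cut the tree into two classes.
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # samples 1-5 and 6-10 should fall into separate clusters
```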
Class prediction is more difficult than class discovery, but it allows one to answer questions of direct clinical significance such as, given this profile, what is the probability that this patient will respond to this drug? This requires many examples of profiles that responded and did not respond, as well as cross-validation techniques to discriminate between them.
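A minimal class-prediction sketch using scikit-learn's cross-validation follows; the simulated profiles, the responder labels, the signal placed in five genes, and the choice of logistic regression are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated profiles: 40 patients x 100 genes; the first 20 "responders"
# differ from the 20 "non-responders" in a handful of genes.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 100))
X[:20, :5] += 1.5                      # signal in 5 genes
y = np.array([1] * 20 + [0] * 20)      # 1 = responded to the drug

# Cross-validation estimates how well the profile predicts response
# for patients the model has never seen.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```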
In general, expression profiling studies report those genes that showed statistically significant differences under changed experimental conditions. This is typically a small fraction of the genome for several reasons. First, different cells and tissues express a subset of genes as a direct consequence of cellular differentiation so many genes are turned off. Second, many of the genes code for proteins that are required for survival in very specific amounts so many genes do not change. Third, cells use many other mechanisms to regulate proteins in addition to altering the amount of mRNA , so these genes may stay consistently expressed even when protein concentrations are rising and falling. Fourth, financial constraints limit expression profiling experiments to a small number of observations of the same gene under identical conditions, reducing the statistical power of the experiment, making it impossible for the experiment to identify important but subtle changes. Finally, it takes a great amount of effort to discuss the biological significance of each regulated gene, so scientists often limit their discussion to a subset. Newer microarray analysis techniques automate certain aspects of attaching biological significance to expression profiling results, but this remains a very difficult problem.
The relatively short length of gene lists published from expression profiling experiments limits the extent to which experiments performed in different laboratories appear to agree. Placing expression profiling results in a publicly accessible microarray database makes it possible for researchers to assess expression patterns beyond the scope of published results, perhaps identifying similarity with their own work.
Both DNA microarrays and quantitative PCR exploit the preferential binding or " base pairing " of complementary nucleic acid sequences, and both are used in gene expression profiling, often in a serial fashion. While high throughput DNA microarrays lack the quantitative accuracy of qPCR, it takes about the same time to measure the gene expression of a few dozen genes via qPCR as it would to measure an entire genome using DNA microarrays. So it often makes sense to perform semi-quantitative DNA microarray analysis experiments to identify candidate genes, then perform qPCR on some of the most interesting candidate genes to validate the microarray results. Other experiments, such as a Western blot of some of the protein products of differentially expressed genes, make conclusions based on the expression profile more persuasive, since the mRNA levels do not necessarily correlate to the amount of expressed protein.
Data analysis of microarrays has become an area of intense research. [ 10 ] Simply stating that a group of genes were regulated by at least twofold, once a common practice, lacks a solid statistical footing. With five or fewer replicates in each group, typical for microarrays, a single outlier observation can create an apparent difference greater than two-fold. In addition, arbitrarily setting the bar at two-fold is not biologically sound, as it eliminates from consideration many genes with obvious biological significance.
Rather than identify differentially expressed genes using a fold change cutoff, one can use a variety of statistical tests or omnibus tests such as ANOVA , all of which consider both fold change and variability to create a p-value , an estimate of how often we would observe the data by chance alone. Applying p-values to microarrays is complicated by the large number of multiple comparisons (genes) involved. For example, a p-value of 0.05 is typically thought to indicate significance, since it estimates a 5% probability of observing the data by chance. But with 10,000 genes on a microarray, 500 genes would be identified as significant at p < 0.05 even if there were no difference between the experimental groups. One obvious solution is to consider significant only those genes meeting a much more stringent p-value criterion: one could perform a Bonferroni correction on the p-values, or use a false discovery rate calculation to adjust p-values in proportion to the number of parallel tests involved. Unfortunately, these approaches may reduce the number of significant genes to zero, even when genes are in fact differentially expressed. Current statistics such as rank products aim to strike a balance between false discovery of genes due to chance variation and non-discovery of differentially expressed genes. Commonly cited methods include Significance Analysis of Microarrays (SAM); [ 11 ] a wide variety of methods are also available from Bioconductor and from the analysis packages of bioinformatics companies .
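The multiple-comparisons arithmetic above is easy to reproduce. The sketch below simulates 10,000 genes with no true differences, then applies Bonferroni and Benjamini-Hochberg corrections with statsmodels; the data are synthetic, and the per-gene t-test stands in for the many possible tests mentioned.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# 10,000 genes, 5 replicates per group, no true differences at all.
rng = np.random.default_rng(3)
a = rng.normal(size=(10_000, 5))
b = rng.normal(size=(10_000, 5))

# One t-test per gene -> 10,000 raw p-values.
p = stats.ttest_ind(a, b, axis=1).pvalue
print((p < 0.05).sum())  # ~500 "significant" genes by chance alone

# Bonferroni correction and Benjamini-Hochberg false discovery rate.
print(multipletests(p, method="bonferroni")[0].sum())  # ~0 survive
print(multipletests(p, method="fdr_bh")[0].sum())      # ~0 survive
```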
Selecting a different test usually identifies a different list of significant genes, [ 12 ] since each test operates under a specific set of assumptions and places a different emphasis on certain features in the data. Many tests begin with the assumption of a normal distribution in the data, because that seems like a sensible starting point and often produces results that appear more significant. Some tests consider the joint distribution of all gene observations to estimate general variability in measurements, [ 13 ] while others look at each gene in isolation. Many modern microarray analysis techniques involve bootstrapping , machine learning , or Monte Carlo methods . [ 14 ]
As the number of replicate measurements in a microarray experiment increases, various statistical approaches yield increasingly similar results, but lack of concordance between different statistical methods makes array results appear less trustworthy. The MAQC Project [ 15 ] makes recommendations to guide researchers in selecting more standard methods (e.g. using p-value and fold-change together for selecting the differentially expressed genes) so that experiments performed in different laboratories will agree better.
Distinct from the analysis of differentially expressed individual genes, another type of analysis focuses on the differential expression or perturbation of pre-defined gene sets and is called gene set analysis. [ 16 ] [ 17 ] Gene set analysis has demonstrated several major advantages over individual gene differential expression analysis. [ 16 ] [ 17 ] Gene sets are groups of genes that are functionally related according to current knowledge; gene set analysis is therefore considered a knowledge-based analysis approach. [ 16 ] Commonly used gene sets include those derived from KEGG pathways, Gene Ontology terms, and gene groups that share some other functional annotation, such as common transcriptional regulators. Representative gene set analysis methods include Gene Set Enrichment Analysis (GSEA), [ 16 ] which estimates the significance of gene sets based on permutation of sample labels, and Generally Applicable Gene-set Enrichment (GAGE), [ 17 ] which tests the significance of gene sets based on permutation of gene labels or a parametric distribution.
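As a toy version of the sample-label permutation idea used by GSEA, the following sketch scores a hypothetical gene set by its mean between-condition difference and compares it against a permutation null; real methods use more sophisticated statistics, as discussed below.

```python
import numpy as np

rng = np.random.default_rng(4)
expr = rng.normal(size=(1000, 10))     # 1000 genes x 10 samples
groups = np.array([0] * 5 + [1] * 5)   # two conditions
gene_set = np.arange(20)               # indices of a hypothetical gene set

def set_score(expr, groups, gene_set):
    """Mean between-condition difference, averaged over the gene set."""
    diff = expr[:, groups == 1].mean(axis=1) - expr[:, groups == 0].mean(axis=1)
    return diff[gene_set].mean()

observed = set_score(expr, groups, gene_set)

# Permute the sample labels (as GSEA does) to build a null distribution.
null = [set_score(expr, rng.permutation(groups), gene_set)
        for _ in range(1000)]
p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
print(p)
```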
While the statistics may identify which gene products change under experimental conditions, making biological sense of expression profiling rests on knowing which protein each gene product makes and what function this protein performs. Gene annotation provides functional and other information, for example the location of each gene within a particular chromosome. Some functional annotations are more reliable than others; some are absent. Gene annotation databases change regularly, and various databases refer to the same protein by different names, reflecting a changing understanding of protein function. Use of standardized gene nomenclature helps address the naming aspect of the problem, but exact matching of transcripts to genes [ 18 ] [ 19 ] remains an important consideration.
Having identified some set of regulated genes, the next step in expression profiling involves looking for patterns within the regulated set. Do the proteins made from these genes perform similar functions? Are they chemically similar? Do they reside in similar parts of the cell? Gene ontology analysis provides a standard way to define these relationships. Gene ontologies start with very broad categories, e.g., "metabolic process" and break them down into smaller categories, e.g., "carbohydrate metabolic process" and finally into quite restrictive categories like "inositol and derivative phosphorylation".
Genes have other attributes beside biological function, chemical properties and cellular location. One can compose sets of genes based on proximity to other genes, association with a disease, and relationships with drugs or toxins. The Molecular Signatures Database [ 20 ] and the Comparative Toxicogenomics Database [ 21 ] are examples of resources to categorize genes in numerous ways.
Once regulated genes are categorized in terms of what they are and what they do, important relationships between genes may emerge. [ 23 ] For example, we might see evidence that a certain gene creates a protein to make an enzyme that activates a protein to turn on a second gene on our list. This second gene may be a transcription factor that regulates yet another gene from our list. Observing these links, we may begin to suspect that they represent much more than chance associations in the results, and that they are all on our list because of an underlying biological process. On the other hand, it could be that if one selected genes at random, one might find many that seem to have something in common. In this sense, rigorous statistical procedures are needed to test whether the emerging biological themes are significant or not. That is where gene set analysis [ 16 ] [ 17 ] comes in.
Fairly straightforward statistics provide estimates of whether associations between genes on lists are greater than what one would expect by chance. These statistics are interesting, even if they represent a substantial oversimplification of what is really going on. Here is an example. Suppose there are 10,000 genes in an experiment, only 50 (0.5%) of which play a known role in making cholesterol . The experiment identifies 200 regulated genes. Of those, 40 (20%) turn out to be on a list of cholesterol genes as well. Based on the overall prevalence of the cholesterol genes (0.5%) one expects an average of 1 cholesterol gene for every 200 regulated genes, that is, 0.005 times 200. This expectation is an average, so one expects to see more than one some of the time. The question becomes how often we would see 40 instead of 1 due to pure chance.
According to the hypergeometric distribution , one would expect to try about 10^57 times (10 followed by 56 zeroes) before picking 40 or more of the cholesterol genes from a pool of 10,000 by drawing 200 genes at random. Whether or not one dwells on how infinitesimally small the probability of observing this by chance is, one would conclude that the regulated gene list is enriched [ 24 ] in genes with a known cholesterol association.
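The enrichment calculation in this example can be checked directly with SciPy's hypergeometric distribution; the numbers below are simply those of the cholesterol example above. `hypergeom.sf(k - 1, ...)` gives the upper tail P(overlap >= k), the usual one-sided enrichment p-value.

```python
from scipy.stats import hypergeom

# Numbers from the example: 10,000 genes in total, 50 cholesterol genes,
# 200 regulated genes, 40 of which are cholesterol genes.
M, K, n, k = 10_000, 50, 200, 40

expected = n * K / M              # = 1 cholesterol gene expected on average
p = hypergeom.sf(k - 1, M, K, n)  # P(overlap >= 40)
print(expected, p)                # p is astronomically small
```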
One might further hypothesize that the experimental treatment regulates cholesterol, because the treatment seems to selectively regulate genes associated with cholesterol. While this may be true, there are a number of reasons why making this a firm conclusion based on enrichment alone represents an unwarranted leap of faith. One previously mentioned issue has to do with the observation that gene regulation may have no direct impact on protein regulation: even if the proteins coded for by these genes do nothing other than make cholesterol, showing that their mRNA is altered does not directly tell us what is happening at the protein level. It is quite possible that the amount of these cholesterol-related proteins remains constant under the experimental conditions. Second, even if protein levels do change, perhaps there is always enough of them around to make cholesterol as fast as it can possibly be made; that is, another protein, not on our list, is the rate-determining step in the process of making cholesterol. Finally, proteins typically play many roles, so these genes may be regulated not because of their shared association with making cholesterol but because of a shared role in a completely independent process.
Bearing the foregoing caveats in mind, while gene profiles do not in themselves prove causal relationships between treatments and biological effects, they do offer unique biological insights that would often be very difficult to arrive at in other ways.
As described above, one can identify significantly regulated genes first and then find patterns by comparing the list of significant genes to sets of genes known to share certain associations. One can also work the problem in reverse order. Here is a very simple example. Suppose there are 40 genes associated with a known process, for example, a predisposition to diabetes. Looking at two groups of expression profiles, one for mice fed a high carbohydrate diet and one for mice fed a low carbohydrate diet, one observes that all 40 diabetes genes are expressed at a higher level in the high carbohydrate group than in the low carbohydrate group. Regardless of whether any of these genes would have made it to a list of significantly altered genes, observing all 40 up and none down appears unlikely to be the result of pure chance: flipping 40 heads in a row is predicted to occur about once in a trillion attempts with a fair coin.
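This fair-coin arithmetic can be written as a one-sided sign test. The sketch below assumes the 40-genes-up scenario from the text and recovers the roughly one-in-a-trillion figure.

```python
from scipy.stats import binomtest

# 40 genes, all moving up: under the "fair coin" null hypothesis,
# each gene is equally likely to go up or down.
result = binomtest(k=40, n=40, p=0.5, alternative="greater")
print(result.pvalue)  # 0.5**40, roughly 9e-13: about one in a trillion
```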
For a type of cell, the group of genes whose combined expression pattern is uniquely characteristic of a given condition constitutes the gene signature of this condition. Ideally, the gene signature can be used to select a group of patients at a specific state of a disease with an accuracy that facilitates selection of treatments. [ 25 ] [ 26 ] Gene Set Enrichment Analysis (GSEA) [ 16 ] and similar methods [ 17 ] take advantage of this kind of logic but use more sophisticated statistics, because component genes in real processes display more complex behavior than simply moving up or down as a group, and the amount by which the genes move up and down is meaningful, not just the direction. In any case, these statistics measure how different the behavior of some small set of genes is compared to genes not in that small set.
GSEA uses a Kolmogorov-Smirnov-style statistic to see whether any previously defined gene sets exhibit unusual behavior in the current expression profile. This leads to a multiple hypothesis testing challenge, but reasonable methods exist to address it. [ 27 ]
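For intuition, here is a stripped-down, unweighted version of such a running-sum statistic. The published GSEA method additionally weights each step by the gene's correlation with the phenotype, so this sketch is illustrative only, and the gene names are hypothetical.

```python
import numpy as np

def enrichment_score(ranked_genes, gene_set):
    """Kolmogorov-Smirnov-style running sum, in the spirit of GSEA
    (unweighted variant of the published method)."""
    in_set = np.isin(ranked_genes, list(gene_set))
    hit = 1.0 / in_set.sum()          # step up at each set member
    miss = 1.0 / (~in_set).sum()      # step down at each non-member
    running = np.cumsum(np.where(in_set, hit, -miss))
    return running[np.argmax(np.abs(running))]  # maximum deviation from 0

ranked = [f"g{i}" for i in range(100)]  # genes ranked by differential expression
print(enrichment_score(ranked, {"g1", "g2", "g5", "g8"}))  # set near the top
```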
Expression profiling provides new information about what genes do under various conditions. Overall, microarray technology produces reliable expression profiles. [ 28 ] From this information one can generate new hypotheses about biology or test existing ones. However, the size and complexity of these experiments often results in a wide variety of possible interpretations. In many cases, analyzing expression profiling results takes far more effort than performing the initial experiments.
Most researchers use multiple statistical methods and exploratory data analysis before publishing their expression profiling results, coordinating their efforts with a bioinformatician or other expert in DNA microarrays . Good experimental design, adequate biological replication and follow up experiments play key roles in successful expression profiling experiments. | https://en.wikipedia.org/wiki/Gene_expression_profiling |
A gene family is a set of several similar genes, formed by duplication of a single original gene , and generally with similar biochemical functions. One such family is the genes for human hemoglobin subunits; the ten genes are in two clusters on different chromosomes, called the α-globin and β-globin loci. These two gene clusters are thought to have arisen as a result of a precursor gene being duplicated approximately 500 million years ago. [ 1 ]
Genes are categorized into families based on shared nucleotide or protein sequences . Phylogenetic techniques can be used as a more rigorous test. The positions of exons within the coding sequence can be used to infer common ancestry. Knowing the sequence of the protein encoded by a gene can allow researchers to apply methods that find similarities among protein sequences that provide more information than similarities or differences among DNA sequences.
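As a crude illustration of sequence-based grouping, the sketch below computes percent identity between pre-aligned protein fragments. The sequences are invented, and real family assignment relies on alignment tools (such as BLAST) and phylogenetic methods rather than raw identity alone.

```python
def percent_identity(a, b):
    """Fraction of matching positions between two equal-length,
    pre-aligned sequences (real pipelines align the sequences first)."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical aligned protein fragments from three genes.
seqs = {"geneA": "MKTAYIAKQR", "geneB": "MKTAYLAKQR", "geneC": "MQSWFPGNDE"}
for x in seqs:
    for y in seqs:
        if x < y:
            print(x, y, percent_identity(seqs[x], seqs[y]))
# geneA and geneB (90% identical) would be grouped into one family.
```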
If the genes of a gene family encode proteins, the term protein family is often used in an analogous manner to gene family .
The expansion or contraction of gene families along a specific lineage can be due to chance or can be the result of natural selection. [ 2 ] Distinguishing between these two cases is often difficult in practice. Recent work uses a combination of statistical models and algorithmic techniques to detect gene families that are under the effect of natural selection. [ 3 ]
The HUGO Gene Nomenclature Committee (HGNC) creates nomenclature schemes using a "stem" (or "root") symbol for members of a gene family (by homology or function), with a hierarchical numbering system to distinguish the individual members. [ 4 ] [ 5 ] For example, for the peroxiredoxin family, PRDX is the root symbol, and the family members are PRDX1 , PRDX2 , PRDX3 , PRDX4 , PRDX5 , and PRDX6 .
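The stem-plus-number scheme lends itself to simple parsing. The sketch below groups symbols by their alphabetic stem; note that this is a naive illustration, since actual HGNC groups are curated and cannot be derived from symbol text alone (BRCA1 and BRCA2, for instance, are unrelated genes, as noted later in this article).

```python
import re
from collections import defaultdict

symbols = ["PRDX1", "PRDX2", "PRDX3", "PRDX4", "PRDX5", "PRDX6",
           "BRCA1", "BRCA2"]

# Split each symbol into an alphabetic stem and a numeric member suffix.
families = defaultdict(list)
for s in symbols:
    m = re.fullmatch(r"([A-Z]+)(\d+)", s)
    if m:
        families[m.group(1)].append(s)

print(dict(families))  # {'PRDX': [...six members...], 'BRCA': ['BRCA1', 'BRCA2']}
```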
One level of genome organization is the grouping of genes into several gene families. [ 6 ] [ 7 ] Gene families are groups of related genes that share a common ancestor. Members of gene families may be paralogs or orthologs. Gene paralogs are genes with similar sequences from within the same species while gene orthologs are genes with similar sequences in different species. Gene families are highly variable in size, sequence diversity, and arrangement. Depending on the diversity and functions of the genes within the family, families can be classified as multigene families or superfamilies. [ 6 ] [ 8 ]
Multigene families typically consist of members with similar sequences and functions, though a high degree of divergence (at the sequence and/or functional level) does not lead to the removal of a gene from a gene family. Individual genes in the family may be arranged close together on the same chromosome or dispersed throughout the genome on different chromosomes. Due to the similarity of their sequences and their overlapping functions, individual genes in the family often share regulatory control elements. [ 6 ] [ 8 ] In some instances, gene members have identical (or nearly identical) sequences. Such families allow for massive amounts of gene product to be expressed in a short time as needed. Other families allow for similar but specific products to be expressed in different cell types or at different stages of an organism's development. [ 6 ]
Superfamilies are much larger than single multigene families. Superfamilies contain up to hundreds of genes, including multiple multigene families as well as single, individual gene members. The large number of members allows superfamilies to be widely dispersed, with some genes clustered and some spread far apart. The genes are diverse in sequence and function, displaying various levels of expression and separate regulatory controls. [ 6 ] [ 8 ]
Some gene families also contain pseudogenes , sequences of DNA that closely resemble established gene sequences but are non-functional. [ 9 ] Different types of pseudogenes exist. Non-processed pseudogenes are genes that acquired mutations over time, becoming non-functional. Processed pseudogenes are genes that have lost their function after being moved around the genome by retrotransposition. [ 8 ] [ 9 ] Pseudogenes that have become isolated from the gene family in which they originated are referred to as orphans . [ 6 ]
Gene families arose from multiple duplications of an ancestral gene, followed by mutation and divergence. [ 6 ] Duplications can occur within a lineage (e.g., humans might have two copies of a gene that is found only once in chimpanzees) or can result from speciation. For example, a single gene in the ancestor of humans and chimpanzees now occurs in both species and can be thought of as having been 'duplicated' via speciation. As a result of duplication by speciation, a gene family might include 15 genes, one copy in each of 15 different species.
In the formation of gene families, four levels of duplication exist: 1) exon duplication and shuffling , 2) entire gene duplication , 3) multigene family duplication, and 4) whole genome duplication . Exon duplication and shuffling gives rise to variation and new genes. Genes are then duplicated to form multigene families, which duplicate to form superfamilies spanning multiple chromosomes. Whole genome duplication doubles the number of copies of every gene and gene family. [ 6 ] Whole genome duplication, or polyploidization, can be either autopolyploidization or allopolyploidization. Autopolyploidization is the duplication of the same genome; allopolyploidization is the duplication of two closely related genomes, or hybridized genomes, from different species. [ 8 ]
Duplication occurs primarily through unequal crossing over events in the meiosis of germ cells. When two chromosomes misalign, crossing over (the exchange of gene alleles) results in one chromosome expanding, or increasing in gene number, and the other contracting, or decreasing in gene number. The expansion of a gene cluster is the duplication of genes that leads to larger gene families. [ 6 ] [ 8 ]
Gene members of a multigene family or multigene families within superfamilies exist on different chromosomes due to relocation of those genes after duplication of the ancestral gene. Transposable elements play a role in the movement of genes. Transposable elements are recognized by inverted repeats at their 5' and 3' ends. When two transposable elements are close enough in the same region on a chromosome, they can form a composite transposon. The protein transposase recognizes the outermost inverted repeats, cutting the DNA segment. Any genes between the two transposable elements are relocated as the composite transposon jumps to a new area of the genome. [ 6 ]
Reverse transcription is another method of gene movement. An mRNA transcript of a gene is reverse transcribed, or copied, back into DNA. This new DNA copy of the mRNA is integrated into another part of the genome, resulting in gene family members being dispersed. [ 8 ]
A special type of multigene family is implicated in the movement of gene families and gene family members. LINE (Long INterspersed Element) and SINE (Short INterspersed Element) families are highly repetitive DNA sequences spread throughout the genome. LINEs contain a sequence that encodes a reverse transcriptase protein. This protein aids in copying the RNA transcripts of LINEs and SINEs back into DNA and integrates them into different areas of the genome, which self-perpetuates the growth of LINE and SINE families. Due to the highly repetitive nature of these elements, LINEs and SINEs that lie close together also trigger unequal crossing over events, which result in single-gene duplications and the formation of gene families. [ 6 ] [ 8 ]
Non-synonymous mutations, which result in the substitution of amino acids, accumulate in duplicate gene copies. Duplication gives rise to multiple copies of the same gene, providing a level of redundancy within which mutations are tolerated. With one functioning copy of the gene, the other copies are able to acquire mutations without being extremely detrimental to the organism. Mutations allow duplicate genes to acquire new or different functions. [ 8 ]
Some multigene families are extremely homogeneous, with individual gene members sharing identical or nearly identical sequences. The process by which gene families maintain high homogeneity is concerted evolution. Concerted evolution occurs through repeated cycles of unequal crossing over events and repeated cycles of gene transfer and conversion. Unequal crossing over leads to the expansion and contraction of gene families. Gene families have an optimal size range towards which natural selection acts: contraction deletes divergent gene copies and keeps gene families from becoming too large, while expansion replaces lost gene copies and prevents gene families from becoming too small. Repeated cycles of gene transfer and conversion make gene family members increasingly similar. [ 6 ]
In the process of gene transfer, allelic gene conversion is biased. The spread of a mutant allele through a gene family towards homogeneity parallels the spread of an advantageous allele through a population towards fixation. Gene conversion also aids in creating genetic variation in some cases. [ 10 ]
Gene families, part of a hierarchy of information storage in a genome, play a large role in the evolution and diversity of multicellular organisms. Gene families are large units of information and genetic variability. [ 6 ] Over evolutionary time, gene families have expanded and contracted, with genes within a family duplicating and diversifying into new genes, and genes being lost. An entire gene family may also be lost, or gained: through de novo gene birth , through divergence so extensive that a gene is considered part of a new family, or by horizontal gene transfer . When the number of genes per genome remains relatively constant, this implies that genes are gained and lost at roughly equal rates. There are some patterns in which genes are more likely to be lost and which are more likely to duplicate and diversify into multiple copies. [ 11 ]
An adaptive expansion of a single gene into many initially identical copies occurs when natural selection favours additional gene copies, as when an environmental stressor acts on a species. Gene amplification is more common in bacteria and is a reversible process. Contraction of gene families commonly results from the accumulation of loss-of-function mutations: a nonsense mutation that prematurely halts translation becomes fixed in the population, leading to the loss of genes. This process occurs when changes in the environment render a gene redundant. [ 7 ]
In addition to classification by evolution (structural gene family), the HGNC also makes "gene families" by function in their stem nomenclature. [ 12 ] As a result, a stem can also refer to genes that have the same function, often part of the same protein complex . For example, BRCA1 and BRCA2 are unrelated genes that are both named for their role in breast cancer, and RPS2 and RPS3 are unrelated ribosomal proteins found in the same small subunit.
The HGNC also maintains a "gene group" (formerly "gene family") classification. A gene can be a member of multiple groups, and all groups form a hierarchy. As with the stem classification, both structural and functional groups exist. [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Gene_family |
Gene gating is a phenomenon by which transcriptionally active genes are brought next to nuclear pore complexes (NPCs) so that nascent transcripts can quickly form mature mRNA associated with export factors. [ 1 ] [ 2 ] Gene gating was first hypothesised by Günter Blobel in 1985. [ 3 ] It has been shown to occur in Saccharomyces cerevisiae , Caenorhabditis elegans , Drosophila melanogaster as well as mammalian model systems. [ 1 ]
The proteins that constitute the NPCs, known as nucleoporins , have been shown to play a role in DNA binding and mRNA transport, making gene gating possible. In addition, gene gating is orchestrated by two protein complexes , Spt-Ada-Gcn5-acetyltransferase (SAGA) and transcription–export complex 2 (TREX-2 complex). SAGA is a chromatin remodeling complex responsible for activating the transcription of certain inducible genes. The SAGA complex binds to gene promoters and also interacts with the TREX-2 complex. [ 4 ] In turn, the TREX-2 complex interacts with the NPC, thus favouring the relocation of actively transcribed genes to the periphery of the cell nucleus . [ 2 ] [ 5 ] In contrast, the rest of the periphery, i.e. those parts not associated with NPCs, is transcriptionally silent heterochromatin .
Nucleoporins (Nups) are the main constituent proteins of NPCs and have been shown to play multiple roles in mediating several processes involved in gene gating. [ 1 ] While the nuclear periphery has long been known to serve as the primary location for most heterochromatin and for telomeric and centromeric DNA, studies in the yeast Saccharomyces cerevisiae have shown that NPCs containing Nup2p and Prp20p create boundaries of active gene expression near the nuclear envelope and prevent the spread of heterochromatin at the nuclear periphery. These Nup2p and Prp20p proteins also provide a location for the binding of chromatin . [ 6 ]
Some inducible genes in yeast have been shown to relocate to the nuclear periphery by binding NPCs composed of specific Nups. [ 1 ] Several of these inducible genes, including GAL1, INO1, TSA2, and HSP104, contain gene recruitment sequences (GRSs) found in the promoter, which are necessary for the attachment of the gene to the NPC by way of DNA binding to specific Nups. [ 7 ] This initial relocation of genes containing GRSs requires the action of Snf1-p dependent Spt-Ada-Gcn5 acetyltransferase (SAGA), a chromatin remodeling complex, as well as several mRNA export proteins, for their transcriptional activation at the nuclear periphery. [ 4 ]
In the fruit fly Drosophila melanogaster , large stretches of chromatin are bound to the Nups Nup153 and Megator. [ 8 ] These genomic regions are often found on the male X chromosome , which exhibits high levels of transcriptional activity due to dosage compensation ; these regions of chromatin are termed Nup-associated regions (NARs). Depletion of Nup153 causes a drastic decrease in the expression of genes associated with NARs and decreases the affinity of these gene sequences for the nuclear periphery. Other Nups such as Nup50, Nup60 and Nup98 are associated with genes involved in development and the cell cycle . [ 9 ]
In mammalian model systems, genes activated for transcription are shuttled in a Nup-dependent manner, though some experiments in human cell lines show a reversal of movement, from the periphery of the nucleus to the nucleoplasmic center. [ 1 ] mRNP ( messenger ribonucleoprotein ) leaving sites of transcription in the nuclear center follows the same path through the nucleus to the NPC, which suggests that mRNA/protein complexes can move through the nucleus in a directed manner, through interchromatin channels. [ 10 ] In mice and human cell lines a transmembrane Nup, Nup210 , has been shown to be necessary for the proper transcription of several genes involved in neurogenesis and myogenesis . RNAi knockdown of Nup210 prevents myogenesis in mouse stem cells but has no effect on nuclear transport, though it has been speculated that Nup210 or other NPC-associated factors could influence chromatin architecture to mediate routes for mRNP/mRNA to the nuclear membrane. [ 11 ] Movement of transcriptionally active genes from the periphery of the nucleus to the nucleoplasmic region has also been observed in human cell lines. The human Mash1 , GAFB and β-globin loci have all been observed moving away from the nuclear periphery when transcriptionally active. This seems to contradict the gene-gating hypothesis, but the process may still be mediated by Nup98 , a soluble Nup protein that shuttles between the nucleoplasm and NPCs at the nuclear membrane. Nup98 appears to be responsible for transporting many RNAs from the center of the nucleus to the nuclear lamina ; Nup98 antibodies introduced into the nucleus block the export of many RNAs. [ 12 ] [ 13 ] A large body of data supports the role of nucleoporins, both NPC-anchored and soluble, in mediating the transport of mRNA and the proper transcription of active genes, though numerous other protein factors influence these complex processes.
Spt-Ada-Gcn5 acetyltransferase (SAGA) is a histone modifying transcriptional co-activator that is composed of 21 proteins and exhibits histone acetyltransferase (HAT) and deubiquitinating (DUB) activity. In yeast, the SAGA complex serves to activate the transcription of approximately 10% of the genome, and this active gene/SAGA complex is then able to interact with the TREX-2 complex, an NPC-associated mRNA export complex. Numerous proteins involved in the formation of mRNA interact with the NPC, with the majority of these protein-protein interactions occurring between the SAGA complex and the TREX-2 complex at the NPC. [ 4 ] Correct transcription and subsequent export of mRNA is largely dependent on this interaction. A common protein subunit of both the SAGA and TREX-2 complexes, Sus1, binds to the upstream activating sequence via SAGA, which then serves as the attachment point to the TREX-2 complex. The interacting surfaces between Sus1 and the TREX-2 complex are facilitated by the protein subunits Mex67 and Yra1 of the TREX-2 complex, as evidenced by co-immunoprecipitation experiments. [ 4 ] The TREX-2 complex is bound to the NPC by the nucleoporin Nup1. All TREX-2 subunits are necessary for the successful formation and export of an mRNA transcript at the nuclear membrane for genes activated by the SAGA complex, and data suggest that SAGA and TREX-2 act in concert to recruit Sus1 to genes to be transcribed. Other investigations have shown that several SAGA subunits interact with the NPC protein Mlp1, providing another link between the NPC and the SAGA/active gene complex. [ 4 ] | https://en.wikipedia.org/wiki/Gene_gating |
In genetic engineering , a gene gun or biolistic particle delivery system is a device used to deliver exogenous DNA ( transgenes ), RNA , or protein to cells. By coating particles of a heavy metal with a gene of interest and firing these micro-projectiles into cells using mechanical force, the desired genetic information can be integrated into target cells. The technique involved in such micro-projectile delivery of DNA is often referred to as biolistics , short for "biological ballistics". [ 1 ] [ 2 ]
This device is able to transform almost any type of cell and is not limited to the transformation of the nucleus; it can also transform organelles, including plastids and mitochondria . [ 3 ]
The gene gun was originally a Crosman air pistol modified to fire dense tungsten particles. It was invented by John C Sanford , Ed Wolf, and Nelson Allen at Cornell University [ 4 ] [ 5 ] [ 6 ] along with Ted Klein of DuPont between 1983 and 1986. The original target was onions (chosen for their large cell size), and the device was used to deliver particles coated with a marker gene which would relay a signal if proper insertion of the DNA transcript occurred. [ 7 ] Genetic transformation was demonstrated upon observed expression of the marker gene within onion cells.
The earliest custom manufactured gene guns (fabricated by Nelson Allen) used a .22 caliber nail gun cartridge to propel a polyethylene cylinder (bullet) down a .22 caliber Douglas barrel. A droplet of the tungsten powder coated with genetic material was placed onto the bullet and shot down into a Petri dish below. The bullet welded to the disk below the Petri plate, and the genetic material blasted into the sample with a doughnut effect: devastation in the middle of the sample with a ring of good transformation around the periphery. The gun was connected to a vacuum pump and was placed under a vacuum while firing. The early design was put into limited production by Rumsey-Loomis (a local machine shop, then at Mecklenburg Road in Ithaca, NY, USA).
Biolistics, Inc. sold DuPont the rights to manufacture and distribute an updated device, with improvements including the use of helium as a non-explosive propellant and a multi-disk collision delivery mechanism to minimize damage to sample tissues. Other heavy metals such as gold and silver are also used to deliver genetic material, with gold favored due to its lower cytotoxicity compared to tungsten projectile carriers. [ 8 ]
Biolistic transformation involves the integration of a functional fragment of DNA, known as a DNA construct, into target cells. A gene construct is a DNA cassette containing all required regulatory elements for proper expression within the target organism. [ 9 ] While gene constructs may vary in their design depending on the desired outcome of the transformation procedure, all constructs typically contain a combination of a promoter sequence, a terminator sequence, the gene of interest, and a reporter gene .
Gene guns are mostly used with plant cells. However, there is much potential for use in humans and other animals as well.
The target of a gene gun is often a callus of undifferentiated plant cells or a group of immature embryos growing on gel medium in a Petri dish. After the DNA-coated gold particles have been delivered to the cells, the DNA is used as a template for transcription (transient expression), and sometimes it integrates into a plant chromosome ('stable' transformation).
If the delivered DNA construct contains a selectable marker, then stably transformed cells can be selected and cultured using tissue culture methods. For example, if the delivered DNA construct contains a gene that confers resistance to an antibiotic or herbicide, then stably transformed cells may be selected by including that antibiotic or herbicide in the tissue culture media.
Transformed cells can be treated with a series of plant hormones, such as auxins and gibberellins , and each may divide and differentiate into the organized, specialized, tissue cells of an entire plant. This capability of total re-generation is called totipotency . The new plant that originated from a successfully transformed cell may have new traits that are heritable. The use of the gene gun may be contrasted with the use of Agrobacterium tumefaciens and its Ti plasmid to insert DNA into plant cells. See transformation for different methods of transformation in different species.
Gene guns have also been used to deliver DNA vaccines .
The delivery of plasmids into rat neurons, specifically DRG neurons, through the use of a gene gun is also used as a pharmacological precursor in studying the effects of neurodegenerative diseases such as Alzheimer's disease .
The gene gun has become a common tool for labeling subsets of cells in cultured tissue. In addition to being able to transfect cells with DNA plasmids coding for fluorescent proteins, the gene gun can be adapted to deliver a wide variety of vital dyes to cells. [ 16 ]
Gene gun bombardment has also been used to transform Caenorhabditis elegans , as an alternative to microinjection . [ 17 ]
Biolistics has proven to be a versatile method of genetic modification, and it is generally preferred for engineering transformation-resistant crops, such as cereals . Notably, Bt maize is a product of biolistics. [ 9 ] Plastid transformation has also seen great success with particle bombardment when compared to other current techniques, such as Agrobacterium -mediated transformation, which have difficulty targeting the vector to, and stably expressing in, the chloroplast. [ 9 ] [ 18 ] In addition, there are no reports of a chloroplast silencing a transgene inserted with a gene gun. [ 19 ] Additionally, with only one firing of a gene gun, a skilled technician can generate two transformed organisms in certain species. [ 18 ] This technology has even allowed for modification of specific tissues in situ , although this is likely to damage large numbers of cells and transform only some, rather than all, cells of the tissue. [ 20 ]
Biolistics introduces DNA randomly into the target cells. Thus the DNA may be transformed into whatever genomes are present in the cell, be they nuclear, mitochondrial, plasmid, or any others, in any combination, though proper construct design may mitigate this. The delivery and integration of multiple templates of the DNA construct is a distinct possibility, resulting in potentially variable expression levels and copy numbers of the inserted gene. [ 9 ] This is due to the ability of the constructs to exchange genetic material with other constructs, causing some to carry no transgene and others to carry multiple copies; the number of copies inserted depends both on how many copies of the transgene an inserted construct has and on how many constructs were inserted. [ 9 ] Also, because eukaryotic constructs rely on illegitimate recombination , a process by which the transgene is integrated into the genome without similar genetic sequences, and not homologous recombination , they cannot be targeted to specific locations within the genome, [ 9 ] unless the transgene is co-delivered with genome editing reagents. | https://en.wikipedia.org/wiki/Gene_gun |
In molecular cloning and biology , a gene knock-in (abbreviation: KI ) refers to a genetic engineering method that involves the one-for-one substitution of DNA sequence information in a genetic locus or the insertion of sequence information not found within the locus. [ 1 ] Typically, this is done in mice, since the technology for this process is more refined and there is a high degree of sequence similarity between mice and humans. [ 2 ] The difference between knock-in technology and traditional transgenic techniques is that a knock-in involves a gene inserted into a specific locus, and is thus a "targeted" insertion. It is the opposite of gene knockout .
A common use of knock-in technology is for the creation of disease models. It is a technique by which scientific investigators may study the function of the regulatory machinery (e.g. promoters ) that governs the expression of the natural gene being replaced. This is accomplished by observing the new phenotype of the organism in question. BACs and YACs are used in this case so that large fragments can be transferred.
Gene knock-in originated as a slight modification of the original knockout technique developed by Martin Evans , Oliver Smithies , and Mario Capecchi . Traditionally, knock-in techniques have relied on homologous recombination to drive targeted gene replacement, although other methods using a transposon -mediated system to insert the target gene have been developed. [ 3 ] One example is the use of loxP sites flanking the target sequence, which are excised upon expression of Cre recombinase delivered with gene vectors. Embryonic stem cells with the modification of interest are then implanted into a viable blastocyst , which will grow into a mature chimeric mouse with some cells having the original blastocyst cell genetic information and other cells having the modifications introduced to the embryonic stem cells. Subsequent offspring of the chimeric mouse will then carry the gene knock-in. [ 4 ]
Gene knock-in has allowed, for the first time, hypothesis-driven studies on gene modifications and resultant phenotypes. Mutations in the human p53 gene, for example, can be induced by exposure to benzo(a)pyrene (BaP) and the mutated copy of the p53 gene can be inserted into mouse genomes. Lung tumors observed in the knock-in mice offer support for the hypothesis of BaP’s carcinogenicity . [ 5 ] More recent developments in knock-in technique have allowed for pigs to have a gene for green fluorescent protein inserted with a CRISPR/Cas9 system, which allows for much more accurate and successful gene insertions. [ 6 ] The speed of CRISPR/Cas9-mediated gene knock-in also allows for biallelic modifications to some genes to be generated and the phenotype in mice observed in a single generation, an unprecedented timeframe. [ 7 ]
Knock-in technology is different from knockout technology in that knockout technology aims to either delete part of the DNA sequence or insert irrelevant DNA sequence information to disrupt the expression of a specific genetic locus. Gene knock-in technology, on the other hand, alters the genetic locus of interest via a one-for-one substitution of DNA sequence information or by the addition of sequence information that is not found on said genetic locus. A gene knock-in therefore can be seen as a gain-of-function mutation and a gene knockout a loss-of-function mutation , but a gene knock-in may also involve the substitution of a functional gene locus for a mutant phenotype that results in some loss of function. [ 8 ]
Because of the success of gene knock-in methods thus far, many clinical applications can be envisioned. Knock-in of sections of the human immunoglobulin gene into mice has already been shown to allow them to produce humanized antibodies that are therapeutically useful. [ 9 ] It should be possible to modify stem cells in humans to restore targeted gene function in certain tissues, for example possibly correcting the mutant gamma-chain gene of the IL-2 receptor in hematopoietic stem cells to restore lymphocyte development in people with X-linked severe combined immunodeficiency . [ 4 ]
While gene knock-in technology has proven to be a powerful technique for the generation of models of human disease and insight into proteins in vivo , numerous limitations still exist. Many of these are shared with the limitations of knockout technology. First, combinations of knock-in genes lead to growing complexity in the interactions that inserted genes and their products have with other sections of the genome and can therefore lead to more side effects and difficult-to-explain phenotypes . Also, only a few loci, such as the ROSA26 locus, have been characterized well enough that they can be used for conditional gene knock-ins, making combinations of reporters and transgenes in the same locus problematic. The biggest disadvantage of using gene knock-in for human disease model generation is that mouse physiology is not identical to that of humans, and human orthologs of proteins expressed in mice will often not wholly reflect the role of a gene in human pathology. [ 10 ] This can be seen in mice produced with the ΔF508 cystic fibrosis mutation in the CFTR gene , which accounts for more than 70% of the mutations in this gene in the human population and leads to cystic fibrosis . While ΔF508 CF mice do exhibit the processing defects characteristic of the human mutation, they do not display the pulmonary pathophysiological changes seen in humans and carry virtually no lung phenotype. [ 11 ] Such problems could be ameliorated by the use of a variety of animal models, and pig models (pig lungs share many biochemical and physiological similarities with human lungs) have been generated in an attempt to better explain the activity of the ΔF508 mutation. [ 12 ] | https://en.wikipedia.org/wiki/Gene_knock-in |
Gene knockdown is an experimental technique by which the expression of one or more of an organism 's genes is reduced. The reduction can occur either through genetic modification or by treatment with a reagent such as a short DNA or RNA oligonucleotide with a sequence complementary to either a gene or its mRNA transcript. [ 1 ]
If the DNA of an organism is genetically modified, the resulting organism is called a "knockdown organism." If the change in gene expression is caused by an oligonucleotide binding to an mRNA or temporarily binding to a gene , this leads to a temporary change in gene expression that does not modify the chromosomal DNA, and the result is referred to as a "transient knockdown". [ 1 ]
In a transient knockdown, the binding of this oligonucleotide to the active gene or its transcripts causes decreased expression through a variety of processes. Binding can occur either through the blocking of transcription (in the case of gene-binding), the degradation of the mRNA transcript (e.g. by small interfering RNA ( siRNA )) or RNase -H dependent antisense, or through the blocking of either mRNA translation , pre-mRNA splicing sites, or nuclease cleavage sites used for maturation of other functional RNAs, including miRNA (e.g. by morpholino oligos or other RNase-H independent antisense). [ 1 ] [ 2 ]
The most direct use of transient knockdowns is for learning about a gene that has been sequenced , but has an unknown or incompletely known function. This experimental approach is known as reverse genetics . Researchers draw inferences from how the knockdown differs from individuals in which the gene of interest is operational. Transient knockdowns are often used in developmental biology because oligos can be injected into single-celled zygotes and will be present in the daughter cells of the injected cell through embryonic development. [ 3 ] The term gene knockdown first appeared in the literature in 1994. [ 4 ]
RNA interference (RNAi) is a means of silencing genes by way of mRNA degradation. [ 5 ] Gene knockdown by this method is achieved by introducing small double-stranded interfering RNAs (siRNA) into the cytoplasm. Small interfering RNAs can originate from inside the cell or can be exogenously introduced into the cell. Once introduced into the cell, exogenous siRNAs are processed by the RNA-induced silencing complex ( RISC ). [ 6 ] The siRNA is complementary to the target mRNA to be silenced, and the RISC uses the siRNA as a template for locating the target mRNA. After the RISC localizes to the target mRNA, the RNA is cleaved by a ribonuclease.
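Since silencing depends on the guide strand base-pairing with its target, a quick complementarity check captures the core idea. The sketch below assumes a hypothetical transcript and a hypothetical 21-nt guide; real siRNA design also weighs thermodynamics, seed matches, and off-target effects.

```python
COMPLEMENT = str.maketrans("AUGC", "UACG")

def is_guide_for(sirna_guide, mrna):
    """Check whether an siRNA guide strand is complementary (antiparallel)
    to some stretch of the target mRNA."""
    target_site = sirna_guide.translate(COMPLEMENT)[::-1]  # reverse complement
    return target_site in mrna

mrna = "GCAUGCUAGCUAGGCAUCGAUCGGAUCCGAUU"  # hypothetical transcript
guide = "CCGAUCGAUGCCUAGCUAGCA"            # hypothetical 21-nt guide strand
print(is_guide_for(guide, mrna))           # True: the guide matches a site
```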
RNAi is widely used as a laboratory technique for genetic functional analysis. [ 7 ] RNAi in organisms such as C. elegans and Drosophila melanogaster provides a quick and inexpensive means of investigating gene function. In C. elegans research, the availability of tools such as the Ahringer RNAi Library give laboratories a way of testing many genes in a variety of experimental backgrounds. Insights gained from experimental RNAi use may be useful in identifying potential therapeutic targets, drug development , or other applications. [ 8 ] RNA interference is a very useful research tool, allowing investigators to carry out large genetic screens in an effort to identify targets for further research related to a particular pathway, drug, or phenotype. [ 9 ] [ 10 ]
A different means of silencing exogenous DNA that has been discovered in prokaryotes is a mechanism involving loci called 'Clustered Regularly Interspaced Short Palindromic Repeats', or CRISPRs . [ 11 ] CRISPR-associated ( cas ) genes encode cellular machinery that cuts exogenous DNA into small fragments and inserts them into a CRISPR repeat locus. When this CRISPR region of DNA is expressed by the cell, the small RNAs produced from the exogenous DNA inserts serve as a template sequence that other Cas proteins use to silence this same exogenous sequence. The transcripts of the short exogenous sequences are used as a guide to silence this foreign DNA when it is present in the cell. This serves as a kind of acquired immunity, a process akin to a prokaryotic RNA interference mechanism. The CRISPR repeats are conserved amongst many species and have been demonstrated to be usable in human cells, [ 12 ] bacteria, [ 13 ] C. elegans , [ 14 ] zebrafish , [ 15 ] and other organisms for effective genome manipulation. The use of CRISPRs as a versatile research tool is illustrated [ 16 ] by the many studies that have used them to generate organisms with genome alterations.
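Genome-manipulation applications of CRISPR-Cas9 begin by locating a targetable site: a roughly 20-nt protospacer next to a protospacer-adjacent motif (PAM), which for the commonly used S. pyogenes Cas9 is NGG. A minimal one-strand scan, on an invented sequence, might look like the following; real guide design also scores off-targets and scans both strands.

```python
import re

def find_protospacers(dna, spacer_len=20):
    """Scan one strand for spacer_len-nt protospacers followed by an
    NGG PAM, as recognized by the S. pyogenes Cas9 system."""
    pattern = r"(?=([ACGT]{%d})[ACGT]GG)" % spacer_len  # lookahead allows overlaps
    return [(m.start(), m.group(1)) for m in re.finditer(pattern, dna)]

# Hypothetical target region.
dna = "TTGACCTGAAGCTAGCTAGGCTACGATCGATCGATCGGTGGATCTGA"
for pos, spacer in find_protospacers(dna):
    print(pos, spacer)
```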
Another technology made possible by prokaryotic genome manipulation is the use of transcription activator-like effector nucleases ( TALENs ) to target specific genes. [ 17 ] TALENs are nucleases that have two important functional components: a DNA binding domain and a DNA cleaving domain. The DNA binding domain is a sequence-specific transcription activator-like effector sequence while the DNA cleaving domain originates from a bacterial endonuclease and is non-specific. TALENs can be designed to cleave a sequence specified by the sequence of the transcription activator-like effector portion of the construct. Once designed, a TALEN is introduced into a cell as a plasmid or mRNA. The TALEN is expressed, localizes to its target sequence, and cleaves a specific site. After cleavage of the target DNA sequence by the TALEN, the cell uses non-homologous end joining as a DNA repair mechanism to correct the cleavage. The cell's attempt at repairing the cleaved sequence can render the encoded protein non-functional, as this repair mechanism introduces insertion or deletion errors at the repaired site.
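The modular DNA-binding domain described above follows a simple cipher in which each repeat's repeat-variable diresidue (RVD) specifies one base (NI→A, HD→C, NG→T, with NN recognizing G and sometimes A). The sketch below applies that mapping to a hypothetical repeat array; engineering a working TALEN involves far more than this lookup.

```python
# The TALE "code": each repeat-variable diresidue (RVD) recognizes one base.
# NN is degenerate (G, and sometimes A); G is used here for simplicity.
RVD_TO_BASE = {"NI": "A", "HD": "C", "NG": "T", "NN": "G"}

def target_of(rvds):
    """Predict the DNA base sequence a TALE repeat array binds."""
    return "".join(RVD_TO_BASE[r] for r in rvds)

# A hypothetical repeat array designed against 5'-TGCATCA-3'.
rvds = ["NG", "NN", "HD", "NI", "NG", "HD", "NI"]
print(target_of(rvds))  # -> TGCATCA
```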
So far, knockdown organisms with permanent alterations in their DNA have been engineered chiefly for research purposes. Also known simply as knockdowns , these organisms are most commonly used for reverse genetics, especially in species such as mice or rats for which transient knockdown technologies cannot easily be applied. [ 3 ] [ 18 ]
There are several companies that offer commercial services related to gene knockdown treatments. | https://en.wikipedia.org/wiki/Gene_knockdown |
Gene knockouts (also known as gene deletion or gene inactivation ) are a widely used genetic engineering technique that involves the targeted removal or inactivation of a specific gene within an organism's genome. This can be done through a variety of methods, including homologous recombination , CRISPR-Cas9 , and TALENs .
One of the main advantages of gene knockouts is that they allow researchers to study the function of a specific gene in vivo, and to understand the role of the gene in normal development and physiology as well as in the pathology of diseases. By studying the phenotype of the organism with the knocked out gene, researchers can gain insights into the biological processes that the gene is involved in.
There are two main types of gene knockouts: complete and conditional. A complete gene knockout permanently inactivates the gene, while a conditional gene knockout allows for the gene to be turned off and on at specific times or in specific tissues. Conditional knockouts are particularly useful for studying developmental processes and for understanding the role of a gene in specific cell types or tissues.
Gene knockouts have been widely used in many different organisms, including bacteria, yeast, fruit flies, zebrafish, and mice. In mice, gene knockouts are commonly used to study the function of specific genes in development, physiology, and cancer research.
The use of gene knockouts in mouse models has been particularly valuable in the study of human diseases. For example, gene knockouts in mice have been used to study the role of specific genes in cancer, neurological disorders, immune disorders, and metabolic disorders.
However, gene knockouts also have some limitations. For example, the loss of a single gene may not fully mimic the effects of a genetic disorder, and the knockouts may have unintended effects on other genes or pathways. Additionally, gene knockouts are not always a good model for human disease as the mouse genome is not identical to the human genome, and mouse physiology is different from human physiology.
The KO technique is essentially the opposite of a gene knock-in . Knocking out two genes simultaneously in an organism is known as a double knockout ( DKO ). Similarly, the terms triple knockout ( TKO ) and quadruple knockout ( QKO ) are used to describe three or four knocked out genes, respectively. However, one needs to distinguish between heterozygous and homozygous KOs: in the former, only one of the two gene copies ( alleles ) is knocked out; in the latter, both are knocked out.
Knockouts are accomplished through a variety of techniques. Originally, naturally occurring mutations were identified and then gene loss or inactivation had to be established by DNA sequencing or other methods. [ 1 ]
Gene knockout by mutation is commonly carried out in bacteria. An early instance of the use of this technique in Escherichia coli was published in 1989 by Hamilton, et al. [ 2 ] In this experiment, two sequential recombinations were used to delete the gene. This work established the feasibility of removing or replacing a functional gene in bacteria. That method has since been developed for other organisms, particularly research animals, like mice. Knockout mice are commonly used to study genes with human equivalents that may have significance for disease. An example of a study using knockout mice is an investigation of the roles of Xirp proteins in Sudden Unexplained Nocturnal Death Syndrome (SUNDS) and Brugada Syndrome in the Chinese Han Population. [ 3 ]
RNA interference (RNAi), a more recent method also known as gene silencing, has gained popularity for gene knockout investigations. In RNAi, messenger RNA for a particular gene is inactivated using small interfering RNA (siRNA) or short hairpin RNA (shRNA), which effectively stops the gene from being expressed. Cancer-related genes such as Bcl-2 and p53, as well as genes linked to neurological disease, genetic disorders, and viral infections, have all been targeted for gene silencing using RNAi. [ 4 ]
Homologous recombination is the exchange of genes between two DNA strands that include extensive regions of base sequences identical to one another. In eukaryotic species, bacteria, and some viruses, homologous recombination happens spontaneously and is a useful tool in genetic engineering. In eukaryotes, homologous recombination takes place during meiosis, is essential for the repair of double-stranded DNA breaks, and promotes genetic variation by allowing the movement of genetic information during chromosomal crossover. In bacteria, it is a key DNA repair mechanism that enables the insertion of genetic material acquired through horizontal gene transfer and transformation. In viruses, homologous recombination influences the course of viral evolution.
As a form of gene targeting used in genetic engineering, homologous recombination involves introducing an engineered mutation into a particular gene in order to learn more about that gene's function. The method inserts foreign DNA into a cell; the foreign sequence carries the desired alteration and is flanked by sequences identical to those upstream and downstream of the target gene. When the cell recognizes the identical flanking regions as homologues, the target gene's DNA is substituted with the foreign DNA sequence during replication, and the target gene is thereby "knocked out". By using this technique to target particular alleles in embryonic stem cells in mice, it is possible to create knockout mice. With the aid of gene targeting, numerous mouse genes have been shut down, leading to the creation of hundreds of distinct mouse models of various human diseases, such as cancer, diabetes, cardiovascular diseases, and neurological disorders. [ citation needed ] Mario Capecchi, Sir Martin J. Evans, and Oliver Smithies performed groundbreaking research on homologous recombination in mouse stem cells, and they shared the 2007 Nobel Prize in Physiology or Medicine for their findings. [ 5 ]
Traditionally, homologous recombination was the main method for causing a gene knockout. The method involves creating a DNA construct containing the desired mutation; for knockout purposes, this typically means a drug resistance marker in place of the desired knockout gene. [ 6 ] The construct will also contain a minimum of 2 kb of homology to the target sequence. It can be delivered to stem cells either through microinjection or electroporation . The method then relies on the cell's own repair machinery to recombine the DNA construct into the existing DNA. This results in the sequence of the gene being altered, and in most cases the gene will be translated into a nonfunctional protein , if it is translated at all. However, this is an inefficient process, as homologous recombination accounts for only 10^−2 to 10^−3 of DNA integrations. [ 6 ] [ 7 ] Often, the drug selection marker on the construct is used to select for cells in which the recombination event has occurred.
These stem cells, now lacking the gene, can be used in vivo , for instance in mice, by inserting them into early embryos. If the resulting chimeric mouse carries the genetic change in its germline, the change can then be passed on to offspring. [ 6 ]
In diploid organisms, which contain two alleles for most genes and may also contain several related genes that collaborate in the same role, additional rounds of transformation and selection are performed until every targeted gene is knocked out. Selective breeding may be required to produce homozygous knockout animals.
There are currently three methods in use that involve precisely targeting a DNA sequence in order to introduce a double-stranded break. Once this occurs, the cell's repair mechanisms will attempt to repair this double stranded break, often through non-homologous end joining (NHEJ), which involves directly ligating the two cut ends together. [ 7 ] This may be done imperfectly, therefore sometimes causing insertions or deletions of base pairs, which cause frameshift mutations . These mutations can render the gene in which they occur nonfunctional, thus creating a knockout of that gene. This process is more efficient than homologous recombination, and therefore can be more easily used to create biallelic knockouts. [ 7 ]
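To make the frameshift logic concrete, the following minimal Python sketch (toy codon table and an invented coding sequence, not real gene data) translates a sequence before and after small deletions; it illustrates why an indel whose length is not a multiple of three scrambles every downstream codon, while an in-frame deletion removes only a single residue.

```python
# A minimal sketch (not a model of the repair process itself): translating a
# toy coding sequence before and after small deletions shows why an indel
# whose length is not a multiple of three disrupts every downstream codon.

CODON_TABLE = {  # deliberately tiny; a real table has 64 entries
    "ATG": "M", "GAA": "E", "GAT": "D", "TTT": "F", "AAA": "K",
    "AAG": "K", "ATT": "I", "TTA": "L",
    "TAA": "*", "TAG": "*", "TGA": "*",  # '*' marks a stop codon
}

def translate(seq):
    """Translate complete codons until a stop codon or the end of seq."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODON_TABLE.get(seq[i:i + 3], "X")  # 'X' = not in the toy table
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

coding = "ATGGAAGATTTTAAA"                 # translates to MEDFK
del_1bp = coding[:3] + coding[4:]          # 1-bp deletion: frame shifts
del_3bp = coding[:3] + coding[6:]          # 3-bp deletion: frame preserved

print(translate(coding))   # MEDFK
print(translate(del_1bp))  # MKIL  -- every codon after the deletion changes
print(translate(del_3bp))  # MDFK  -- one residue lost, the rest intact
```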
Zinc-finger nucleases consist of DNA binding domains that can precisely target a DNA sequence. [ 7 ] Each zinc finger recognizes a triplet of bases in the desired DNA sequence, so the fingers can be modularly assembled to bind a particular sequence. [ 9 ] These binding domains are coupled with a restriction endonuclease that can cause a double stranded break (DSB) in the DNA. [ 7 ] Repair processes may introduce mutations that destroy functionality of the gene. [ citation needed ]
Transcription activator-like effector nucleases ( TALENs ) also contain a DNA binding domain and a nuclease that can cleave DNA. [ 10 ] The DNA binding region consists of amino acid repeats that each recognize a single base pair of the desired targeted DNA sequence. [ 9 ] If this cleavage is targeted to a gene coding region, and NHEJ-mediated repair introduces insertions and deletions, a frameshift mutation often results, thus disrupting function of the gene. [ 10 ]
CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is a genetic engineering technique that allows for precise editing of the genome. One application of CRISPR is gene knockout, which involves disabling or "knocking out" a specific gene in an organism. [ citation needed ]
The process of gene knockout with CRISPR involves three main steps: designing a guide RNA (gRNA) that targets a specific location in the genome, delivering the gRNA and a Cas9 enzyme (which acts as molecular scissors) to the target cell, and then allowing the cell to repair the cut in the DNA. The repair is often imperfect: insertions or deletions introduced at the cut site can disrupt the gene's function, leaving a non-functional gene.
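As an illustration of the first step, the sketch below scans a hypothetical target region for 20-nucleotide protospacers followed by an "NGG" PAM, the motif required by the commonly used S. pyogenes Cas9. This is only a toy enumeration: practical guide design also scores off-target matches, GC content, and the reverse strand.

```python
import re

# A simplified sketch of one step in guide design: enumerate every 20-mer
# followed by an NGG PAM in a (hypothetical) target sequence. The lookahead
# makes the search tolerate overlapping candidates.

def find_candidate_guides(seq):
    """Yield (position, protospacer, PAM) for every 20-mer followed by NGG."""
    seq = seq.upper()
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        yield m.start(), m.group(1), m.group(2)

target = "TTGACCTGAAAGCTGGCGTACTGATCCAGTTGGCAGTCAAAGGCTACGT"
for pos, guide, pam in find_candidate_guides(target):
    print(f"pos {pos:3d}  guide {guide}  PAM {pam}")
```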
This technique can be used in a variety of organisms, including bacteria, yeast, plants, and animals, and it allows scientists to study the function of specific genes by observing the effects of their absence. CRISPR-based gene knockout is a powerful tool for understanding the genetic basis of disease and for developing new therapies.
It is important to note that CRISPR-based gene knockout, like any genetic engineering technique, has the potential to produce unintended or harmful effects on the organism, so it should be used with caution. [ 9 ] [ 11 ] The targeted Cas9 causes a double stranded break in the DNA. [ 9 ] Following the same principle as zinc-fingers and TALENs, the attempts to repair these double stranded breaks often result in frameshift mutations that produce a nonfunctional gene. [ 9 ] Noninvasive CRISPR-Cas9 technology has successfully knocked out a gene associated with depression and anxiety in mice, the first successful delivery across the blood–brain barrier to enable gene modification. [ 12 ]
Gene knock-in is similar to gene knockout, but it replaces a gene with another instead of deleting it. [ citation needed ]
A conditional gene knockout allows gene deletion in a tissue-specific manner. It is required in place of a complete gene knockout if the null mutation would lead to embryonic death , [ 13 ] or if a specific tissue or cell type is of particular interest. It is done by introducing short sequences called loxP sites around the gene. These sequences are introduced into the germline via the same mechanism as a knockout. This germline can then be crossed to another germline containing Cre recombinase , a viral enzyme that recognizes these sequences, recombines them, and deletes the gene flanked by these sites. [ 14 ] Other recombinases have since been created and employed in conditional knockout experiments. [ 15 ]
Knockouts are primarily used to understand the role of a specific gene or DNA region by comparing the knockout organism to a wildtype with a similar genetic background. [ citation needed ]
Knockout organisms are also used as screening tools in the development of drugs , to target specific biological processes or deficiencies by using a specific knockout, or to understand the mechanism of action of a drug by using a library of knockout organisms spanning the entire genome , such as in Saccharomyces cerevisiae . [ 16 ] | https://en.wikipedia.org/wiki/Gene_knockout |
Gene mapping or genome mapping describes the methods used to identify the location of a gene on a chromosome and the distances between genes. [ 2 ] [ 3 ] Gene mapping can also describe the distances between different sites within a gene.
The essence of all genome mapping is to place a collection of molecular markers onto their respective positions on the genome. Molecular markers come in many forms. Genes can be viewed as one special type of genetic marker in the construction of genome maps, and mapped in the same way as any other marker. In some areas of study, gene mapping contributes to the creation of new recombinants within an organism. [ 4 ]
Gene maps help describe the spatial arrangement of genes on a chromosome . Genes are designated to a specific location on a chromosome known as the locus and can be used as molecular markers to find the distance between other genes on a chromosome. Maps provide researchers with the opportunity to predict the inheritance patterns of specific traits, which can eventually lead to a better understanding of disease-linked traits. [ 5 ]
Gene maps provide an outline that can potentially help researchers carry out DNA sequencing . A gene map points out the relative positions of genes and allows researchers to locate regions of interest in the genome , so that genes can be identified and sequenced quickly. [ 6 ]
Two approaches to generating gene maps ( gene mapping ) include physical mapping and genetic mapping. Physical mapping utilizes molecular biology techniques to inspect chromosomes. These techniques consequently allow researchers to observe chromosomes directly so that a map may be constructed with relative gene positions. Genetic mapping, on the other hand, uses genetic techniques to indirectly find associations between genes. Techniques can include cross-breeding ( hybrid ) experiments and examining pedigrees . These techniques allow maps to be constructed so that relative positions of genes and other important sequences can be analyzed. [ 6 ]
There are two distinctive mapping approaches used in the field of genome mapping: genetic maps (also known as linkage maps) [ 7 ] and physical maps. [ 3 ] While both maps are a collection of genetic markers and gene loci , [ 8 ] genetic maps' distances are based on the genetic linkage information, while physical maps use actual physical distances usually measured in number of base pairs . While the physical map could be a more accurate representation of the genome, genetic maps often offer insights into the nature of different regions of the chromosome, for example the genetic distance to physical distance ratio varies greatly at different genomic regions which reflects different recombination rates, and such rate is often indicative of euchromatic (usually gene-rich) vs heterochromatic (usually gene-poor) regions of the genome. [ citation needed ]
Researchers begin a genetic map by collecting samples of blood, saliva, or tissue from family members that carry a prominent disease or trait and family members that do not. The most common sample used in gene mapping, especially in personal genomic tests, is saliva. Scientists then isolate DNA from the samples and closely examine it, looking for unique patterns in the DNA of the family members who carry the disease that are absent from the DNA of those who do not. These unique molecular patterns in the DNA are referred to as polymorphisms, or markers. [ 9 ]
The first steps of building a genetic map are the development of genetic markers and a mapping population. The closer two markers are on the chromosome, the more likely they are to be passed on to the next generation together. Therefore, the "co-segregation" patterns of all markers can be used to reconstruct their order. With this in mind, the genotypes of each genetic marker are recorded for both parents and each individual in the following generations. The quality of the genetic maps is largely dependent upon these factors: the number of genetic markers on the map and the size of the mapping population. The two factors are interlinked, as a larger mapping population could increase the "resolution" of the map and prevent the map from being "saturated". [ citation needed ]
In gene mapping, any sequence feature that can be faithfully distinguished from the two parents can be used as a genetic marker. Genes, in this regard, are represented by "traits" that can be faithfully distinguished between two parents. Their linkage with other genetic markers is calculated in the same way as if they were common markers, and the actual gene loci are then bracketed in a region between the two nearest neighboring markers. The entire process is then repeated by looking at more markers that target that region to map the gene neighborhood to a higher resolution until a specific causative locus can be identified. This process is often referred to as " positional cloning ", and it is used extensively in the study of plant species. One plant species in which positional cloning is utilized is maize . [ 10 ] The great advantage of genetic mapping is that it can identify the relative position of genes based solely on their phenotypic effect. [ citation needed ]
Genetic mapping is a way to identify exactly which chromosome has which gene and to pinpoint exactly where that gene lies on that particular chromosome. Mapping also acts as a method for determining which genes are most likely to recombine, based on the distance between them. The distance between two genes is measured in units known as centimorgans or map units; the terms are interchangeable. A centimorgan is the distance between genes for which one product of meiosis in one hundred is recombinant. [ 11 ] [ 4 ] The farther apart two genes are, the more likely they are to recombine; the closer they are, the less likely. [ 12 ]
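As an illustrative calculation with invented progeny counts, the map distance follows directly from this definition:

```latex
% Map distance from a test cross (hypothetical counts):
% 80 recombinant progeny observed among 1000 scored.
d \;=\; 100 \times \frac{n_{\mathrm{recombinant}}}{n_{\mathrm{total}}}
  \;=\; 100 \times \frac{80}{1000}
  \;=\; 8~\mathrm{cM}
```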
The basis to linkage analysis is understanding chromosomal location and identifying disease genes. Certain genes that are genetically linked or associated with each other reside close to each other on the same chromosome. During meiosis , these genes are capable of being inherited together and can be used as a genetic marker to help identify the phenotype of diseases. Because linkage analysis can identify inheritance patterns, these studies are usually family based. [ 13 ]
The earliest gene maps were produced by linkage analysis of fruit flies in the research group of Thomas Hunt Morgan . The first was published in 1913. [ 15 ]
Gene association analysis is population based; it is not focused on inheritance patterns, but rather is based on the entire history of a population. Gene association analysis looks at a particular population and tries to identify whether the frequency of an allele in affected individuals is different from that of a control set of unaffected individuals of the same population. This method is particularly useful to identify complex diseases that do not have a Mendelian inheritance pattern. [ 16 ]
Since actual base-pair distances are generally difficult or impossible to measure directly, physical maps are constructed by first shattering the genome into hierarchically smaller pieces. By characterizing each piece and assembling the pieces back together, the overlapping path or "tiling path" of these small fragments allows researchers to infer physical distances between genomic features. [ citation needed ]
Restriction mapping is a method in which structural information regarding a segment of DNA is obtained using restriction enzymes . Restriction enzymes are enzymes that help cut segments of DNA at specific recognition sequences. The basis of restriction mapping involves digesting (or cutting) DNA with restriction enzymes. The digested DNA fragments are then run on an agarose gel using electrophoresis , which provides information regarding the size of these digested fragments. The sizes of these fragments help indicate the distance between restriction enzyme sites on the DNA analyzed, and provide researchers with information regarding the structure of the DNA analyzed. [ 16 ] The resulting pattern of DNA migration – its genetic fingerprint – is used to identify what stretch of DNA is in the clone . By analyzing the fingerprints, contigs are assembled by automated (FPC) or manual means (pathfinders) into overlapping DNA stretches. A good choice of clones can then be made to efficiently sequence the clones to determine the DNA sequence of the organism under study. [ citation needed ]
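The arithmetic underlying a digest can be sketched in a few lines of Python. The sequence and the HindIII-like site below are hypothetical, and the cut is simplified to fall at the start of each recognition site (real enzymes cut at a defined offset within or near the site):

```python
# A minimal sketch of the fragment-size arithmetic behind a restriction
# digest; real mapping compares single and double digests to order sites.

def digest(seq, site):
    """Return fragment lengths produced by cutting seq at every site."""
    cuts, start = [], 0
    while True:
        hit = seq.find(site, start)
        if hit == -1:
            break
        cuts.append(hit)
        start = hit + 1
    bounds = [0] + cuts + [len(seq)]
    return [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]

fragment = "GGAAGCTTACCGGTTAAGCTTCGATCG"   # hypothetical 27-bp sequence
print(digest(fragment, "AAGCTT"))          # [2, 13, 12] -> band sizes on a gel
```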
In physical mapping, there are no direct ways of marking up a specific gene since the mapping does not include any information that concerns traits and functions. Genetic markers can be linked to a physical map by processes like in situ hybridization . By this approach, physical map contigs can be "anchored" onto a genetic map. The clones used in the physical map contigs can then be sequenced on a local scale to help new genetic marker design and identification of the causative loci. [ citation needed ]
Macrorestriction is a type of physical mapping wherein the high molecular weight DNA is digested with a restriction enzyme having a low number of restriction sites. [ citation needed ]
There are alternative ways to determine how DNA in a group of clones overlaps without completely sequencing the clones. Once the map is determined, the clones can be used as a resource to efficiently contain large stretches of the genome. This type of mapping is more accurate than genetic maps. [ citation needed ]
Fluorescence in situ hybridization (FISH) is a method used to detect the presence (or absence) of a DNA sequence within a cell. [ 17 ] DNA probes that are specific for chromosomal regions or genes of interest are labeled with fluorochromes . By attaching fluorochromes to probes, researchers are able to visualize multiple DNA sequences simultaneously. When a probe comes into contact with DNA on a specific chromosome, hybridization will occur. Consequently, information regarding the location of that sequence of DNA will be attained. FISH analyzes single stranded DNA ( ssDNA ). Once the DNA is in its single stranded state, the DNA can bind to its specific probe. [ 6 ]
A sequence-tagged site (STS) is a short sequence of DNA (about 100–500 base pairs in length) that occurs exactly once within an individual's genome. These sites are easily recognizable, usually appearing at least once in the DNA being analyzed. These sites often contain genetic polymorphisms, making them sources of viable genetic markers (as they differ from other sequences). Sequence-tagged sites can be mapped within the genome and require a group of overlapping DNA fragments. PCR is generally used to produce the collection of DNA fragments. After overlapping fragments are created, the map distance between STSs can be analyzed. In order to calculate the map distance between STSs, researchers determine the frequency at which breaks between the two markers occur (see shotgun sequencing ).
In the early 1950s the prevailing view was that the genes in a chromosome are discrete entities, indivisible by genetic recombination and arranged like beads on a string. During 1955 to 1959, Benzer performed genetic recombination experiments using rII mutants of bacteriophage T4 . He found that, on the basis of recombination tests, the sites of mutation could be mapped in a linear order. [ 18 ] [ 19 ] This result provided evidence for the key idea that the gene has a linear structure equivalent to a length of DNA with many sites that can independently mutate. [ citation needed ]
In 1961, Francis Crick, Leslie Barnett, Sydney Brenner and Richard Watts-Tobin performed genetic experiments that demonstrated the basic nature of the genetic code for proteins. [ 20 ] These experiments, involving mapping of mutational sites within the rIIB gene of bacteriophage T4, demonstrated that three sequential nucleobases of the gene's DNA specify each successive amino acid of its encoded protein. Thus the genetic code was shown to be a triplet code, where each triplet (called a codon) specifies a particular amino acid. They also obtained evidence that the codons do not overlap with each other in the DNA sequence encoding a protein, and that such a sequence is read from a fixed starting point. [ citation needed ]
Edgar et al. [ 21 ] performed mapping experiments with r mutants of bacteriophage T4 showing that recombination frequencies between rII mutants are not strictly additive. The recombination frequency from a cross of two rII mutants (a x d) is usually less than the sum of recombination frequencies for adjacent internal sub-intervals (a x b) + (b x c) + (c x d). Although not strictly additive, a systematic relationship was demonstrated [ 22 ] that likely reflects the underlying molecular mechanism of genetic recombination .
Genome sequencing is sometimes mistakenly referred to as "genome mapping" by non-biologists. The process of shotgun sequencing [ 23 ] resembles the process of physical mapping: it shatters the genome into small fragments, characterizes each fragment, then puts them back together (more recent sequencing technologies are drastically different). While the scope, purpose and process are totally different, a genome assembly can be viewed as the "ultimate" form of physical map, in that it provides in a much better way all the information that a traditional physical map can offer. [ citation needed ]
Identification of genes is usually the first step in understanding a genome of a species; mapping of the gene is usually the first step of identification of the gene. Gene mapping is usually the starting point of many important downstream studies.
The process to identify a genetic element that is responsible for a disease is also referred to as "mapping". If the locus in which the search is performed is already considerably constrained, the search is called the fine mapping of a gene. This information is derived from the investigation of disease manifestations in large families ( genetic linkage ) or from population-based genetic association studies.
Using the methods mentioned above, researchers are capable of mapping disease genes. Generating a gene map is the critical first step towards identifying disease genes. Gene maps allow variant alleles to be identified and allow researchers to make predictions about the genes they think are causing the mutant phenotype. An example of a disorder identified by linkage analysis is cystic fibrosis (CF): DNA samples from fifty families affected by CF were analyzed using linkage analysis. Hundreds of markers were analyzed throughout the genome until CF was localized to the long arm of chromosome 7. Researchers then completed linkage analysis on additional DNA markers within chromosome 7 to identify an even more precise location of the CF gene. They found that the CF gene resides around 7q31–q32 (see chromosomal nomenclature ). [ 16 ] | https://en.wikipedia.org/wiki/Gene_mapping |
Gene nomenclature is the scientific naming of genes , the units of heredity in living organisms. It is also closely associated with protein nomenclature, as genes and the proteins they code for usually have similar nomenclature. An international committee published recommendations for genetic symbols and nomenclature in 1957. [ 1 ] The need to develop formal guidelines for human gene names and symbols was recognized in the 1960s and full guidelines were issued in 1979 (Edinburgh Human Genome Meeting). [ 2 ] Several other genus -specific research communities (e.g., Drosophila fruit flies, Mus mice) have adopted nomenclature standards as well, and have published them on the relevant model organism websites and in scientific journals, including the Trends in Genetics Genetic Nomenclature Guide. [ 3 ] [ 4 ] Scientists familiar with a particular gene family may work together to revise the nomenclature for the entire set of genes when new information becomes available. [ 5 ] For many genes and their corresponding proteins, an assortment of alternate names is in use across the scientific literature and public biological databases , posing a challenge to effective organization and exchange of biological information. [ 6 ] Standardization of nomenclature thus tries to achieve the benefits of vocabulary control and bibliographic control , although adherence is voluntary. The advent of the information age has brought gene ontology , which in some ways is a next step of gene nomenclature, because it aims to unify the representation of gene and gene product attributes across all species.
Gene nomenclature and protein nomenclature are not separate endeavors; they are aspects of the same whole. Any name or symbol used for a protein can potentially also be used for the gene that encodes it, and vice versa. [ citation needed ] But owing to the nature of how science has developed (with knowledge being uncovered bit by bit over decades), proteins and their corresponding genes have not always been discovered simultaneously (and not always physiologically understood when discovered), which is the largest reason why protein and gene names do not always match, or why scientists tend to favor one symbol or name for the protein and another for the gene. [ citation needed ] Another reason is that many of the mechanisms of life are the same or very similar across species , genera, orders, and phyla (through homology, analogy, or some of both ), so that a given protein may be produced in many kinds of organisms; and thus scientists naturally often use the same symbol and name for a given protein in one species (for example, mice) as in another species (for example, humans). Regarding the first duality (same symbol and name for gene or protein), the context usually makes the sense clear to scientific readers, and the nomenclatural systems also provide for some specificity by using italic for a symbol when the gene is meant and plain (roman) for when the protein is meant. [ citation needed ] Regarding the second duality (a given protein is endogenous in many kinds of organisms), the nomenclatural systems also provide for at least human-versus-nonhuman specificity by using different capitalization , [ citation needed ] although scientists often ignore this distinction, given that it is often biologically irrelevant. [ citation needed ]
Also owing to the nature of how scientific knowledge has unfolded, proteins and their corresponding genes often have several names and symbols that are synonymous . Some of the earlier ones may be deprecated in favor of newer ones, although such deprecation is voluntary. Some older names and symbols live on simply because they have been widely used in the scientific literature (including before the newer ones were coined) and are well established among users. For example, mentions of HER2 and ERBB2 are synonymous .
Lastly, the correlation between genes and proteins is not always one-to-one (in either direction); in some cases it is several-to-one or one-to-several, and the names and symbols may then be gene-specific or protein-specific to some degree, or overlapping in usage:
The HUGO Gene Nomenclature Committee is responsible for providing human gene naming guidelines and approving new, unique human gene names and symbols (short identifiers typically created by abbreviating). For some nonhuman species, model organism databases serve as central repositories of guidelines and help resources, including advice from curators and nomenclature committees. In addition to species-specific databases, approved gene names and symbols for many species can be located in the National Center for Biotechnology Information's "Entrez Gene" [ 7 ] database.
There are generally accepted rules and conventions used for naming genes in bacteria . Standards were proposed in 1966 by Demerec et al. [ 8 ]
Each bacterial gene is denoted by a mnemonic of three lower case letters which indicate the pathway or process in which the gene-product is involved, followed by a capital letter signifying the actual gene. In some cases, the gene letter may be followed by an allele number. All letters and numbers are underlined or italicised. For example, leuA is one of the genes of the leucine biosynthetic pathway, and leuA273 is a particular allele of this gene.
Where the actual protein coded by the gene is known then it may become part of the basis of the mnemonic, thus:
Some gene designations refer to a known general function:
In a 1998 analysis of the E. coli genome, a large number of genes with unknown function were designated names beginning with the letter y , followed by sequentially generated letters without a mnemonic meaning (e.g., ydiO and ydbK ). [ 9 ] Since being designated, some y-genes have been confirmed to have a function, [ 10 ] and assigned a synonym (alternative) name in recognition of this. However, as y-genes are not always re-named after being further characterised, this designation is not a reliable indicator of a gene's significance. [ 10 ]
Loss of gene activity leads to a nutritional requirement ( auxotrophy ) not exhibited by the wildtype ( prototrophy ).
Amino acids:
Some pathways produce metabolites that are precursors of more than one pathway. Hence, loss of one of these enzymes will lead to a requirement for more than one amino acid. For example:
Nucleotides:
Vitamins:
Loss of gene activity leads to loss of the ability to catabolise (use) the compound.
If the gene in question is the wildtype a superscript '+' sign is used:
If a gene is mutant, it is signified by a superscript '-':
By convention, if neither is used, it is considered to be mutant.
There are additional superscripts and subscripts which provide more information about the mutation:
Other modifiers:
When referring to the genotype (the gene) the mnemonic is italicized and not capitalised. When referring to the gene product or phenotype, the mnemonic is first-letter capitalised and not italicized ( e.g. DnaA – the protein produced by the dnaA gene; LeuA − – the phenotype of a leuA mutant; Amp R – the ampicillin-resistance phenotype of the β-lactamase gene bla ).
Protein names are generally the same as the gene names, but the protein names are not italicized, and the first letter is upper-case. For example, the name of RNA polymerase (β subunit) is RpoB, and this protein is encoded by the rpoB gene. [ 11 ]
The research communities of vertebrate model organisms have adopted guidelines whereby genes in these species are given, whenever possible, the same names as their human orthologs . The use of prefixes on gene symbols to indicate species (e.g., "Z" for zebrafish) is discouraged. The recommended formatting of printed gene and protein symbols varies between species.
Vertebrate genes and proteins have names (typically strings of words) and symbols, which are short identifiers (typically 3 to 8 characters). For example, the gene cytotoxic T-lymphocyte-associated protein 4 has the HGNC symbol CTLA4 . These symbols are usually, but not always, coined by contraction or acronymic abbreviation of the name. They are pseudo-acronyms , however, in the sense that they are complete identifiers by themselves—short names, essentially. They are synonymous with (rather than standing for) the gene/protein name (or any of its aliases), regardless of whether the initial letters "match". For example, the symbol for the gene v-akt murine thymoma viral oncogene homolog 1, which is AKT1 , cannot be said to be an acronym for the name, and neither can any of its various synonyms, which include AKT , PKB , PRKBA , and RAC . Thus, the relationship of a gene symbol to the gene name is functionally the relationship of a nickname to a formal name (both are complete identifiers )—it is not the relationship of an acronym to its expansion. In this sense they are similar to the symbols for units of measurement in the SI system (such as km for the kilometre ), in that they can be viewed as true logograms rather than just abbreviations. Sometimes the distinction is academic, but not always. Although it is not wrong to say that "VEGFA" is an acronym standing for " vascular endothelial growth factor A ", just as it is not wrong that "km" is an abbreviation for "kilometre", there is more to the formality of symbols than those statements capture.
The root portion of the symbols for a gene family (such as the " SERPIN " root in SERPIN1 , SERPIN2 , SERPIN3 , and so on) is called a root symbol. [ 12 ]
The HUGO Gene Nomenclature Committee is responsible for providing human gene naming guidelines and approving new, unique human gene names and symbols (short identifiers typically created by abbreviating). All human gene names and symbols can be searched online at the HGNC [ 13 ] website, and the guidelines for their formation are available there. [ 14 ] The guidelines for humans fit logically into the larger scope of vertebrates in general, and the HGNC's remit has recently expanded to assigning symbols to all vertebrate species without an existing nomenclature committee, to ensure that vertebrate genes are named in line with their human orthologs/paralogs. Human gene symbols generally are italicised, with all letters in uppercase (e.g., SHH , for sonic hedgehog ). Italics are not necessary in gene catalogs. Protein designations are the same as the gene symbol except that they are not italicised. Like the gene symbol, they are in all caps because they are human (human-specific or a human homolog). mRNAs and cDNAs use the same formatting conventions as the gene symbol. [ 5 ] For naming families of genes , the HGNC recommends using a "root symbol" [ 15 ] as the root for the various gene symbols. For example, for the peroxiredoxin family, PRDX is the root symbol, and the family members are PRDX1 , PRDX2 , PRDX3 , PRDX4 , PRDX5 , and PRDX6 .
Gene symbols generally are italicised, with only the first letter in uppercase and the remaining letters in lowercase ( Shh ). Italics are not required on web pages. Protein designations are the same as the gene symbol, but are not italicised and all are upper case (SHH). [ 16 ]
Nomenclature generally follows the conventions of human nomenclature. Gene symbols generally are italicised, with all letters in uppercase (e.g., NLGN1 , for neuroligin1). Protein designations are the same as the gene symbol, but are not italicised; all letters are in uppercase (NLGN1). mRNAs and cDNAs use the same formatting conventions as the gene symbol. [ 17 ]
Gene symbols are italicised and all letters are in lowercase ( shh ). Protein designations are different from their gene symbol; they are not italicised, and all letters are in uppercase (SHH). [ 18 ]
Gene symbols are italicised and all letters are in lowercase ( shh ). Protein designations are the same as the gene symbol, but are not italicised; the first letter is in uppercase and the remaining letters are in lowercase (Shh). [ 19 ]
Gene symbols are italicised, with all letters in lowercase ( shh ). Protein designations are the same as the gene symbol, but are not italicised; the first letter is in uppercase and the remaining letters are in lowercase (Shh). [ 20 ]
A nearly universal rule in copyediting of articles for medical journals and other health science publications is that abbreviations and acronyms must be expanded at first use, to provide a glossing type of explanation. Typically no exceptions are permitted except for small lists of especially well known terms (such as DNA or HIV ). Although readers with high subject-matter expertise do not need most of these expansions, those with intermediate or (especially) low expertise are appropriately served by them.
One complication that gene and protein symbols bring to this general rule is that they are not, accurately speaking, abbreviations or acronyms, despite the fact that many were originally coined via abbreviating or acronymic etymology. They are pseudoacronyms (as SAT and KFC also are) because they do not "stand for" any expansion. Rather, the relationship of a gene symbol to the gene name is functionally the relationship of a nickname to a formal name (both are complete identifiers )—it is not the relationship of an acronym to its expansion. In fact, many official gene symbol–gene name pairs do not even share their initial-letter sequences (although some do). Nevertheless, gene and protein symbols "look just like" abbreviations and acronyms, which presents the problem that "failing" to "expand" them (even though it is not actually a failure and there are no true expansions) creates the appearance of violating the spell-out-all-acronyms rule.
One common way of reconciling these two opposing forces is simply to exempt all gene and protein symbols from the glossing rule. This is certainly fast and easy to do, and in highly specialized journals, it is also justified because the entire target readership has high subject matter expertise. (Experts are not confused by the presence of symbols (whether known or novel) and they know where to look them up online for further details if needed.) But for journals with broader and more general target readerships, this action leaves the readers without any explanatory annotation and can leave them wondering what the apparent-abbreviation stands for and why it was not explained. Therefore, a good alternative solution is simply to put either the official gene name or a suitable short description (gene alias/other designation) in parentheses after the first use of the official gene/protein symbol. This meets both the formal requirement (the presence of a gloss) and the functional requirement (helping the reader to know what the symbol refers to). The same guideline applies to shorthand names for sequence variations; AMA says, "In general medical publications, textual explanations should accompany the shorthand terms at first mention." [ 21 ] Thus "188del11" is glossed as "an 11-bp deletion at nucleotide 188." This corollary rule (which forms an adjunct to the spell-everything-out rule) often also follows the "abbreviation-leading" style of expansion that is becoming more prevalent in recent years. Traditionally, the abbreviation always followed the fully expanded form in parentheses at first use. This is still the general rule. But for certain classes of abbreviations or acronyms (such as clinical trial acronyms [e.g., ECOG ] or standardized polychemotherapy regimens [e.g., CHOP ]), this pattern may be reversed, because the short form is more widely used and the expansion is merely parenthetical to the discussion at hand. The same is true of gene/protein symbols.
The HUGO Gene Nomenclature Committee (HGNC) maintains an official symbol and name for each human gene, as well as a list of synonyms and previous symbols and names. For example, for AFF1 (AF4/FMR2 family, member 1), previous symbols and names are MLLT2 ("myeloid/lymphoid or mixed-lineage leukemia (trithorax (Drosophila) homolog); translocated to, 2") and PBM1 ("pre-B-cell monocytic leukemia partner 1"), and synonyms are AF-4 and AF4 . Authors of journal articles often use the latest official symbol and name, but just as often they use synonyms and previous symbols and names, which are well established by earlier use in the literature. AMA style is that "authors should use the most up-to-date term" [ 22 ] and that "in any discussion of a gene, it is recommended that the approved gene symbol be mentioned at some point, preferably in the title and abstract if relevant." [ 22 ] Because copyeditors are not expected or allowed to rewrite the gene and protein nomenclature throughout a manuscript (except by rare express instructions on particular assignments), the middle ground in manuscripts using synonyms or older symbols is that the copyeditor will add a mention of the current official symbol at least as a parenthetical gloss at the first mention of the gene or protein, and query for confirmation.
Some basic conventions, such as (1) that animal/human homolog (ortholog) pairs differ in letter case ( title case and all caps , respectively) and (2) that the symbol is italicized when referring to the gene but nonitalic when referring to the protein, are often not followed by contributors to medical journals. Many journals have the copyeditors restyle the casing and formatting to the extent feasible, although in complex genetics discussions only subject-matter experts (SMEs) can effortlessly parse them all. One example that illustrates the potential for ambiguity among non-SMEs is that some official gene names have the word "protein" within them, so the phrase "brain protein I3 ( BRI3 )" (referring to the gene) and "brain protein I3 (BRI3)" (referring to the protein) are both valid. The AMA Manual gives another example: both "the TH gene" and "the TH gene" can validly be parsed as correct ("the gene for tyrosine hydroxylase"), because the first mentions the alias (description) and the latter mentions the symbol. This seems confusing on the surface, although it is easier to understand when explained as follows: in this gene's case, as in many others, the alias (description) "happens to use the same letter string" that the symbol uses. (The matching of the letters is of course acronymic in origin and thus the phrase "happens to" implies more coincidence than is actually present; but phrasing it that way helps to make the explanation clearer.) There is no way for a non-SME to know this is the case for any particular letter string without looking up every gene from the manuscript in a database such as NCBI Gene, reviewing its symbol, name, and alias list, and doing some mental cross-referencing and double-checking (plus it helps to have biochemical knowledge). Most medical journals do not (in some cases cannot) pay for that level of fact-checking as part of their copyediting service level; therefore, it remains the author's responsibility. However, as pointed out earlier, many authors make little attempt to follow the letter case or italic guidelines; and regarding protein symbols, they often will not use the official symbol at all. For example, although the guidelines would call p53 protein "TP53" in humans or "Trp53" in mice, most authors call it "p53" in both (and even refuse to call it "TP53" if edits or queries try to), not least because of the biologic principle that many proteins are essentially or exactly the same molecules regardless of mammalian species. Regarding the gene, authors are usually willing to call it by its human-specific symbol and capitalization, TP53 , and may even do so without being prompted by a query. But the end result of all these factors is that the published literature often does not follow the nomenclature guidelines completely. | https://en.wikipedia.org/wiki/Gene_nomenclature |
Gene order is the arrangement (permutation) of genes on a chromosome or genome. A fair amount of research has been done to determine whether gene orders evolve according to a molecular clock ( molecular clock hypothesis ) or in jumps ( punctuated equilibrium ). By comparing gene orders in dissimilar organisms, scientists are able to develop molecular phylogeny trees. [ 1 ] When organisms have similar gene orders, meaning they have likely diverged recently, this is called synteny .
Some research on gene orders in animals' mitochondrial genomes reveals that the mutation rate of gene orders is not a constant in some degrees. [ 2 ]
Methods for genome mapping , determining the gene order, include: [ 3 ] [ 4 ]
All of these methods can lead to a gene sequence or a DNA sequence by which genes can be identified and compared.
| https://en.wikipedia.org/wiki/Gene_orders |
The gene pool is the set of all genes , or genetic information , in any population , usually of a particular species . [ 1 ]
A large gene pool indicates extensive genetic diversity , which is associated with robust populations that can survive bouts of intense selection . Meanwhile, low genetic diversity (see inbreeding and population bottlenecks ) can cause reduced biological fitness and an increased chance of extinction , although, as explained by genetic drift , new genetic variants that may increase the fitness of organisms are more likely to become fixed in a population if that population is rather small.
When all individuals in a population are identical with regard to a particular phenotypic trait, the population is said to be 'monomorphic'. When the individuals show several variants of a particular trait they are said to be polymorphic .
The Russian geneticist Alexander Sergeevich Serebrovsky first formulated the concept in the 1920s as genofond (gene fund), a word that was imported to the United States from the Soviet Union by Theodosius Dobzhansky , who translated it into English as "gene pool." [ 2 ]
Harlan and de Wet (1971) proposed classifying each crop and its related species by gene pools rather than by formal taxonomy. [ 3 ]
Gene pool centres refer to areas of the world where important crop plants and domestic animals originated. They have an extraordinary range of the wild counterparts of cultivated plant species and useful tropical plants.
Gene pool centres also contain various subtropical and temperate region species. | https://en.wikipedia.org/wiki/Gene_pool |
In computational biology , gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes . This includes protein-coding genes as well as RNA genes , but may also include prediction of other functional elements such as regulatory regions . Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced .
In its earliest days, "gene finding" was based on painstaking experimentation on living cells and organisms. Statistical analysis of the rates of homologous recombination of several different genes could determine their order on a certain chromosome , and information from many such experiments could be combined to create a genetic map specifying the rough location of known genes relative to each other. Today, with comprehensive genome sequence and powerful computational resources at the disposal of the research community, gene finding has been redefined as a largely computational problem.
Determining that a sequence is functional should be distinguished from determining the function of the gene or its product. Predicting the function of a gene and confirming that the gene prediction is accurate still demands in vivo experimentation [ 1 ] through gene knockout and other assays, although frontiers of bioinformatics research [ 2 ] are making it increasingly possible to predict the function of a gene based on its sequence alone.
Gene prediction is one of the key steps in genome annotation , following sequence assembly , the filtering of non-coding regions and repeat masking. [ 3 ]
Gene prediction is closely related to the so-called 'target search problem' investigating how DNA-binding proteins ( transcription factors ) locate specific binding sites within the genome . [ 4 ] [ 5 ] Many aspects of structural gene prediction are based on current understanding of underlying biochemical processes in the cell such as gene transcription , translation , protein–protein interactions and regulation processes , which are subject of active research in the various omics fields such as transcriptomics , proteomics , metabolomics , and more generally structural and functional genomics .
In empirical (similarity, homology or evidence-based) gene finding systems, the target genome is searched for sequences that are similar to extrinsic evidence in the form of known expressed sequence tags , messenger RNA (mRNA), protein products, and homologous or orthologous sequences. Given an mRNA sequence, it is trivial to derive a unique genomic DNA sequence from which it had to have been transcribed . Given a protein sequence, a family of possible coding DNA sequences can be derived by reverse translation of the genetic code . Once candidate DNA sequences have been determined, it is a relatively straightforward algorithmic problem to efficiently search a target genome for matches, complete or partial, and exact or inexact. Given a sequence, local alignment algorithms such as BLAST , FASTA and Smith-Waterman look for regions of similarity between the target sequence and possible candidate matches. The success of this approach is limited by the contents and accuracy of the sequence database.
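The reverse-translation step can be illustrated with a minimal Python sketch using a toy codon table (a real table has 61 sense codons). Because the degeneracy of the code makes the candidate family grow exponentially with protein length, practical tools such as TBLASTN align the protein against translated DNA rather than enumerating sequences:

```python
from itertools import product

# A minimal sketch of reverse translation: enumerate the family of DNA
# sequences that could encode a short peptide. The codon table below is
# deliberately tiny and covers only a few amino acids.

CODONS = {
    "M": ["ATG"],
    "W": ["TGG"],
    "F": ["TTT", "TTC"],
    "K": ["AAA", "AAG"],
}

def reverse_translate(peptide):
    """Yield every DNA sequence whose translation is the given peptide."""
    for combo in product(*(CODONS[aa] for aa in peptide)):
        yield "".join(combo)

for dna in reverse_translate("MFK"):
    print(dna)  # 1 * 2 * 2 = 4 candidate coding sequences
```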
A high degree of similarity to a known messenger RNA or protein product is strong evidence that a region of a target genome is a protein-coding gene. However, to apply this approach systemically requires extensive sequencing of mRNA and protein products. Not only is this expensive, but in complex organisms, only a subset of all genes in the organism's genome are expressed at any given time, meaning that extrinsic evidence for many genes is not readily accessible in any single cell culture. Thus, to collect extrinsic evidence for most or all of the genes in a complex organism requires the study of many hundreds or thousands of cell types , which presents further difficulties. For example, some human genes may be expressed only during development as an embryo or fetus, which might be difficult to study for ethical reasons.
Despite these difficulties, extensive transcript and protein sequence databases have been generated for human as well as other important model organisms in biology, such as mice and yeast. For example, the RefSeq database contains transcript and protein sequence from many different species, and the Ensembl system comprehensively maps this evidence to human and several other genomes. It is, however, likely that these databases are both incomplete and contain small but significant amounts of erroneous data.
New high-throughput transcriptome sequencing technologies such as RNA-Seq and ChIP-sequencing open opportunities for incorporating additional extrinsic evidence into gene prediction and validation, and allow structurally rich and more accurate alternative to previous methods of measuring gene expression such as expressed sequence tag or DNA microarray .
Major challenges involved in gene prediction involve dealing with sequencing errors in raw DNA data, dependence on the quality of the sequence assembly , handling short reads, frameshift mutations , overlapping genes and incomplete genes.
In prokaryotes, it is essential to consider horizontal gene transfer when searching for gene sequence homology . An additional important factor underused in current gene detection tools is the existence of gene clusters, or operons (functioning units of DNA containing a cluster of genes under the control of a single promoter ), in both prokaryotes and eukaryotes. Most popular gene detectors treat each gene in isolation, independent of others, which is not biologically accurate.
Ab Initio gene prediction is an intrinsic method based on gene content and signal detection. Because of the inherent expense and difficulty in obtaining extrinsic evidence for many genes, it is also necessary to resort to ab initio gene finding, in which the genomic DNA sequence alone is systematically searched for certain tell-tale signs of protein-coding genes. These signs can be broadly categorized as either signals , specific sequences that indicate the presence of a gene nearby, or content , statistical properties of the protein-coding sequence itself. Ab initio gene finding might be more accurately characterized as gene prediction , since extrinsic evidence is generally required to conclusively establish that a putative gene is functional.
In the genomes of prokaryotes , genes have specific and relatively well-understood promoter sequences (signals), such as the Pribnow box and transcription factor binding sites , which are easy to systematically identify. Also, the sequence coding for a protein occurs as one contiguous open reading frame (ORF), which is typically many hundreds or thousands of base pairs long. The statistics of stop codons are such that even finding an open reading frame of this length is a fairly informative sign. (Since 3 of the 64 possible codons in the genetic code are stop codons, one would expect a stop codon approximately every 20–25 codons, or 60–75 base pairs, in a random sequence .) Furthermore, protein-coding DNA has certain periodicities and other statistical properties that are easy to detect in a sequence of this length. These characteristics make prokaryotic gene finding relatively straightforward, and well-designed systems are able to achieve high levels of accuracy.
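The stop-codon arithmetic above translates directly into a simple ORF scanner. The sketch below searches the forward strand only and ignores alternative start codons, so it is an illustration of the idea rather than a complete prokaryotic gene finder:

```python
# Sketch: report forward-strand ORFs of at least min_codons codons.
# In random sequence a stop codon appears roughly every 21 codons
# (3 of 64), so ORFs hundreds of codons long are informative signals.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_codons: int = 100):
    orfs = []
    for frame in range(3):                       # three forward reading frames
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                        # open a candidate ORF
            elif codon in STOP_CODONS and start is not None:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3))  # half-open genomic interval
                start = None
    return orfs
```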
Ab initio gene finding in eukaryotes , especially complex organisms like humans, is considerably more challenging for several reasons. First, the promoter and other regulatory signals in these genomes are more complex and less well-understood than in prokaryotes, making them more difficult to reliably recognize. Two classic examples of signals identified by eukaryotic gene finders are CpG islands and polyadenylation signals.
Second, splicing mechanisms employed by eukaryotic cells mean that a particular protein-coding sequence in the genome is divided into several parts ( exons ), separated by non-coding sequences ( introns ). (Splice sites are themselves another signal that eukaryotic gene finders are often designed to identify.) A typical protein-coding gene in humans might be divided into a dozen exons, each less than two hundred base pairs in length, and some as short as twenty to thirty. It is therefore much more difficult to detect periodicities and other known content properties of protein-coding DNA in eukaryotes.
Advanced gene finders for both prokaryotic and eukaryotic genomes typically use complex probabilistic models , such as hidden Markov models (HMMs), to combine information from a variety of different signal and content measurements. The GLIMMER system is a widely used and highly accurate gene finder for prokaryotes. GeneMark is another popular approach. Eukaryotic ab initio gene finders, by comparison, have achieved only limited success; notable examples are the GENSCAN and geneid programs. The GeneMark-ES and SNAP gene finders are, like GENSCAN, based on generalized hidden Markov models (GHMMs); they attempt to address problems related to using a gene finder on a genome sequence that it was not trained against. [ 7 ] [ 8 ] A few recent approaches like mSplicer, [ 9 ] CONTRAST, [ 10 ] or mGene [ 11 ] also use machine learning techniques like support vector machines for successful gene prediction. They build a discriminative model using hidden Markov support vector machines or conditional random fields to learn an accurate gene prediction scoring function.
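As a toy illustration of the HMM approach, the sketch below labels each base of a sequence as coding or non-coding with the Viterbi algorithm over a two-state model. The transition and emission probabilities are invented for the example; real systems such as GLIMMER and GENSCAN use far richer, trained models:

```python
import math

# Toy two-state HMM: C = coding, N = non-coding. All probabilities are
# illustrative, not trained values; coding is modelled as slightly GC-rich.
STATES = ("C", "N")
START = {"C": 0.5, "N": 0.5}
TRANS = {"C": {"C": 0.9, "N": 0.1}, "N": {"C": 0.1, "N": 0.9}}
EMIT = {"C": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
        "N": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}

def viterbi(seq: str) -> str:
    """Return the most probable state label for every base of seq."""
    scores = [{s: math.log(START[s]) + math.log(EMIT[s][seq[0]]) for s in STATES}]
    back = []
    for base in seq[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: scores[-1][p] + math.log(TRANS[p][s]))
            col[s] = scores[-1][prev] + math.log(TRANS[prev][s]) + math.log(EMIT[s][base])
            ptr[s] = prev
        scores.append(col)
        back.append(ptr)
    state = max(STATES, key=lambda s: scores[-1][s])
    path = [state]
    for ptr in reversed(back):          # trace the best path backwards
        state = ptr[state]
        path.append(state)
    return "".join(reversed(path))

print(viterbi("ATGCGCGGCTAATT"))
```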
Ab initio methods have been benchmarked, with some approaching 100% sensitivity; [ 3 ] however, as sensitivity increases, specificity suffers as a result of increased false positives .
Among the derived signals used for prediction are sub-sequence statistics such as k-mer frequencies; isochore or compositional-domain GC composition, uniformity and entropy; sequence and frame length; intron, exon, donor-site, acceptor-site, promoter and ribosomal binding site vocabularies; fractal dimension ; the Fourier transform of pseudo-number-coded DNA; Z-curve parameters; and certain run features. [ 12 ]
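Two of the simplest content sensors from this list, GC composition and k-mer frequencies, can be computed in a few lines; the window sequence and the value of k below are arbitrary illustrative choices:

```python
from collections import Counter

# Simple content sensors over a candidate window: GC fraction and k-mer
# counts, both used to discriminate coding from non-coding sequence.
def gc_fraction(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

def kmer_counts(seq: str, k: int = 6) -> Counter:
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

window = "ATGGCGGCGCTGACTGGGTTC"
print(round(gc_fraction(window), 2), kmer_counts(window, 3).most_common(3))
```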
It has been suggested that signals other than those directly detectable in sequences may improve gene prediction. For example, the role of secondary structure in the identification of regulatory motifs has been reported. [ 13 ] In addition, it has been suggested that RNA secondary structure prediction helps splice site prediction. [ 14 ] [ 15 ] [ 16 ] [ 17 ]
Artificial neural networks are computational models that excel at machine learning and pattern recognition . Neural networks must be trained with example data before being able to generalise to experimental data, and are then tested against benchmark data. Neural networks are able to come up with approximate solutions to problems that are hard to solve algorithmically, provided there is sufficient training data. When applied to gene prediction, neural networks can be used alongside other ab initio methods to predict or identify biological features such as splice sites. [ 18 ] One approach [ 19 ] involves using a sliding window, which traverses the sequence data in an overlapping manner. The output at each position is a score based on whether the network thinks the window contains a donor splice site or an acceptor splice site. Larger windows offer more accuracy but also require more computational power. A neural network is an example of a signal sensor, as its goal is to identify a functional site in the genome.
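The sliding-window scheme can be sketched independently of any particular network architecture. In the fragment below, `score_window` is a stand-in for a trained model; for illustration it merely rewards the canonical GT donor dinucleotide at the window centre:

```python
# Schematic sliding-window splice-site scan. score_window is a placeholder
# for a trained neural network; here it only checks for the canonical GT
# donor dinucleotide at the window centre, purely for illustration.
def score_window(window: str) -> float:
    centre = len(window) // 2
    return 1.0 if window[centre:centre + 2] == "GT" else 0.0

def scan(seq: str, width: int = 9, threshold: float = 0.5):
    hits = []
    for i in range(len(seq) - width + 1):        # overlapping windows
        if score_window(seq[i:i + width]) >= threshold:
            hits.append(i + width // 2)          # putative donor position
    return hits

print(scan("AAACAGGTAAGTCCC"))
```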
Programs such as Maker combine extrinsic and ab initio approaches by mapping protein and EST data to the genome to validate ab initio predictions. Augustus, which may be used as part of the Maker pipeline, can also incorporate hints in the form of EST alignments or protein profiles to increase the accuracy of the gene prediction.
As the entire genomes of many different species are sequenced, a promising direction in current research on gene finding is a comparative genomics approach.
This is based on the principle that the forces of natural selection cause genes and other functional elements to undergo mutation at a slower rate than the rest of the genome, since mutations in functional elements are more likely to negatively impact the organism than mutations elsewhere. Genes can thus be detected by comparing the genomes of related species to detect this evolutionary pressure for conservation. This approach was first applied to the mouse and human genomes, using programs such as SLAM, SGP, TWINSCAN/N-SCAN and CONTRAST. [ 20 ]
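A crude version of this conservation signal can be computed from a pairwise alignment by flagging windows of high percent identity; the window width and cutoff below are illustrative choices:

```python
# Sketch: flag conserved windows in a pairwise genome alignment, a crude
# proxy for the cross-species conservation signal used by comparative
# gene finders. Gap characters ('-') count as mismatches.
def conserved_windows(aln_a: str, aln_b: str, width: int = 10, cutoff: float = 0.8):
    assert len(aln_a) == len(aln_b), "inputs must be aligned"
    hits = []
    for i in range(len(aln_a) - width + 1):
        pairs = zip(aln_a[i:i + width], aln_b[i:i + width])
        identity = sum(x == y and x != "-" for x, y in pairs) / width
        if identity >= cutoff:
            hits.append((i, identity))
    return hits
```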
TWINSCAN examined only human-mouse synteny to look for orthologous genes. Programs such as N-SCAN and CONTRAST allowed the incorporation of alignments from multiple organisms or, in the case of N-SCAN, from a single alternate organism other than the target. The use of multiple informants can lead to significant improvements in accuracy. [ 20 ]
CONTRAST is composed of two elements. The first is a smaller classifier that identifies donor splice sites, acceptor splice sites, and start and stop codons. The second element constructs a full model using machine learning. Breaking the problem in two means that smaller, targeted data sets can be used to train the classifier, which can operate independently and be trained with smaller windows. The full model can then use the independent classifier without wasting computational time or model complexity re-classifying intron-exon boundaries. The paper in which CONTRAST was introduced proposes that their method (and those of TWINSCAN, etc.) be classified as de novo gene assembly, which uses alternate 'informant' genomes, identifying it as distinct from ab initio methods, which use only the target genome. [ 20 ]
Comparative gene finding can also be used to project high quality annotations from one genome to another. Notable examples include Projector, GeneWise, GeneMapper and GeMoMa. Such techniques now play a central role in the annotation of all genomes.
Pseudogenes are close relatives of genes, sharing very high sequence homology but unable to code for the same protein product. Whilst once dismissed as byproducts of gene sequencing , increasingly, as regulatory roles are being uncovered, they are becoming predictive targets in their own right. [ 21 ] Pseudogene prediction utilises existing sequence-similarity and ab initio methods, whilst adding additional filtering and methods of identifying pseudogene characteristics.
Sequence similarity methods can be customised for pseudogene prediction using additional filtering to find candidate pseudogenes. This could use disablement detection, which looks for nonsense or frameshift mutations that would truncate or collapse an otherwise functional coding sequence. [ 22 ] Additionally, translating DNA into protein sequences can be more effective than straight DNA homology. [ 21 ]
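A minimal form of disablement detection can be expressed as a check for premature stop codons and frame-breaking length differences; real pipelines align the candidate to its parent gene first, which this sketch omits:

```python
# Sketch of disablement detection for a pseudogene candidate: flag
# premature stop codons and length differences that are not multiples of
# three (suggestive of frameshifts) relative to the parent coding sequence.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def disablements(candidate: str, parent_cds: str):
    flags = []
    if (len(candidate) - len(parent_cds)) % 3 != 0:
        flags.append("length difference not a multiple of 3 (possible frameshift)")
    # Examine every codon except the final one.
    codons = (candidate[i:i + 3] for i in range(0, len(candidate) - 5, 3))
    if any(c in STOP_CODONS for c in codons):
        flags.append("premature stop codon")
    return flags
```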
Content sensors can be filtered according to the differences in statistical properties between pseudogenes and genes, such as a reduced count of CpG islands in pseudogenes, or the differences in G-C content between pseudogenes and their neighbours. Signal sensors also can be honed to pseudogenes, looking for the absence of introns or polyadenine tails. [ 23 ]
Metagenomics is the study of genetic material recovered from the environment, resulting in sequence information from a pool of organisms. Predicting genes is useful for comparative metagenomics .
Metagenomics tools also fall into the basic categories of sequence similarity approaches (MEGAN4) and ab initio techniques (GLIMMER-MG).
Glimmer-MG [ 24 ] is an extension to GLIMMER that relies mostly on an ab initio approach for gene finding, using training sets from related organisms. The prediction strategy is augmented by classifying and clustering gene data sets prior to applying ab initio gene prediction methods; the data are clustered by species. This classification method leverages techniques from metagenomic phylogenetic classification. Examples of software for this purpose are Phymm, which uses interpolated Markov models, and PhymmBL, which integrates BLAST into the classification routines.
MEGAN4 [ 25 ] uses a sequence similarity approach, using local alignment against databases of known sequences, but also attempts to classify using additional information on functional roles, biological pathways and enzymes. As in single organism gene prediction, sequence similarity approaches are limited by the size of the database.
FragGeneScan and MetaGeneAnnotator are popular gene prediction programs based on hidden Markov models . These predictors account for sequencing errors and partial genes, and work with short reads.
Another fast and accurate tool for gene prediction in metagenomes is MetaGeneMark. [ 26 ] This tool is used by the DOE Joint Genome Institute to annotate IMG/M, the largest metagenome collection to date.
A gene product is the biochemical material, either RNA or protein , resulting from the expression of a gene . A measurement of the amount of gene product is sometimes used to infer how active a gene is. Abnormal amounts of gene product can be correlated with disease -causing alleles , such as the overactivity of oncogenes , which can cause cancer . [ 1 ] [ 2 ] A gene is defined as "a hereditary unit of DNA that is required to produce a functional product". [ 3 ] Gene expression also depends on regulatory elements.
These elements work in combination with the open reading frame to create a functional product. This product may be transcribed and function directly as RNA, or be translated from mRNA into a protein that is functional in the cell.
RNA molecules that do not code for any proteins still maintain a function in the cell. The function of an RNA depends on its classification; these roles include aiding protein synthesis and regulating gene expression, as described below.
Protein synthesis is aided by functional RNA molecules such as tRNA , which helps add the correct amino acid to a polypeptide chain during translation ; rRNA , a major component of ribosomes (which guide protein synthesis); and mRNA , which carries the instructions for creating the protein product. [ 4 ]
One type of functional RNA involved in regulation is microRNA ( miRNA ), which works by repressing translation. [ 5 ] These miRNAs work by binding to a complementary target mRNA sequence to prevent translation from occurring. [ 4 ] [ 6 ] Short-interfering RNA ( siRNA ) also works through negative regulation. These siRNA molecules act in the RNA-induced silencing complex ( RISC ) during RNA interference , binding to a complementary target mRNA sequence and promoting its cleavage, thereby preventing expression of the specific mRNA. [ 6 ]
Proteins are the products of genes, formed by translation of a mature mRNA molecule. Protein structure is described at four levels: primary, secondary, tertiary and quaternary. The linear amino acid sequence is known as the primary structure. Hydrogen bonding between the amino acids of the primary structure results in the formation of alpha helices or beta sheets . [ 7 ] These stable foldings are the secondary structure. The particular combination of the primary and secondary structures forms the tertiary structure of a polypeptide. [ 7 ] The quaternary structure refers to the way multiple polypeptide chains fold together. [ 7 ]
Proteins have many different functions in a cell, and a protein's function may vary based on the polypeptides it interacts with and its cellular environment. Chaperone proteins work to stabilize newly synthesized proteins. They ensure the new protein folds into its correct functional conformation, in addition to making sure products do not aggregate in areas where they should not. [ 8 ] Proteins can also function as enzymes , increasing the rate of various biochemical reactions and turning substrates into products. [ 7 ] [ 9 ] Products can be modified by attaching groups such as phosphate , via an enzyme, to specific amino acids in the primary sequence. [ 9 ] Proteins can also be used to move molecules in the cell to where they are needed; these are called motor proteins . [ 9 ] The shape of the cell is supported by proteins: cytoskeletal components such as actin filaments, microtubules and intermediate filaments provide structure to the cell. [ 7 ] Another class of proteins is found in plasma membranes. Membrane proteins can be associated with the plasma membrane in different ways, depending on their structure. [ 9 ] These proteins allow the cell to import or export cell products, nutrients or signals to and from the extracellular space. [ 7 ] [ 9 ] Other proteins help the cell to perform regulatory functions. For example, transcription factors bind to DNA to help transcription of RNA. [ 10 ]
In 1941, American geneticist George Beadle and biochemist Edward Tatum proposed, on the basis of their study of mutants of the fungus Neurospora sitophila , that genes control specific biochemical reactions. [ 11 ] They suggested that the functioning of an organism depends on an integrated system of chemical reactions controlled in some manner by genes. They further noted that "It is entirely tenable to suppose that these genes, which are themselves a part of the system, control or regulate specific reactions in the system either by acting directly as enzymes or by determining the specificity of enzymes." This line of reasoning gave rise to the " one gene–one enzyme hypothesis ".
In a retrospective article, Beadle discussed the status of the one gene-one enzyme hypothesis 10 years after it was proposed. Beadle commented on the Cold Spring Harbor Symposium meeting of biologists in 1951. He noted "I have the impression that the number whose faith in one gene-one enzyme remained steadfast could be counted on the fingers of one hand—with a couple of fingers left over." [ 12 ] However, by the early 1960s, the concept that the DNA base sequence of a gene specifies the amino acid sequence of a protein became well established on the basis of numerous experiments. For example, an experiment by Crick, Brenner, Barnett, and Watts-Tobin in 1961 demonstrated that each amino acid in a protein is encoded by a corresponding sequence of three bases in DNA, called a codon. [ 13 ] Soon after this, the specific codon assignments for each amino acid were determined.
Genetic regulatory circuits (also referred to as transcriptional regulatory circuits ) are a concept that evolved from the operon model discovered by François Jacob and Jacques Monod . [ 1 ] [ 2 ] [ 3 ] They are functional clusters of genes that impact each other's expression through inducible transcription factors and cis-regulatory elements . [ 4 ] [ 5 ]
Genetic regulatory circuits are analogous in many ways to electronic circuits in how they use signal inputs and outputs to determine gene regulation . [ 4 ] [ 5 ] Like electronic circuits, their organization determines their efficiency; circuits working in series, for example, have been shown to provide greater sensitivity of gene regulation. [ 4 ] [ 6 ] They also use inputs, such as trans- and cis-acting sequence regulators of genes, and outputs, such as gene expression level. [ 4 ] [ 5 ] Depending on the type of circuit, they respond constantly to outside signals, such as sugars and hormone levels, that determine how the circuit will return to its fixed-point or periodic equilibrium state. [ 7 ] Genetic regulatory circuits also have an ability to be evolutionarily rewired without the loss of the original transcriptional output level. [ 8 ] [ 9 ] This rewiring is defined by a change in regulatory-target gene interactions, while the regulatory factors and target genes themselves are conserved. [ 8 ] [ 10 ]
These circuits can be modelled in silico to predict the dynamics of a genetic system. [ 8 ] [ 11 ] Having constructed a computational model of the natural circuit of interest, one can use the model to make testable predictions about circuit performance. [ 12 ] [ 13 ] When designing a synthetic circuit for a specific engineering task, a model is useful for identifying necessary connections and parameter operating regimes that give rise to a desired functional output. Similarly, when studying a natural circuit, one can use the model to identify the parts or parameter values necessary for a desired biological outcome. [ 12 ] [ 14 ] In other words, computational modelling and experimental synthetic perturbations can be used to probe biological circuits. [ 12 ] [ 14 ] However, the structure of a circuit has been shown not to be a reliable indicator of the function that the regulatory circuit provides for the larger cellular regulatory network. [ 7 ]
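As a flavour of such in silico modelling, the sketch below integrates a single negatively autoregulated gene, one of the simplest circuit motifs, with a forward Euler step; all parameter values are illustrative rather than measured:

```python
# Toy model of a negatively autoregulated gene:
#   dx/dt = beta / (1 + (x/K)**n) - gamma * x
# where x is the product concentration; parameters are illustrative.
def simulate(beta=10.0, K=1.0, n=2, gamma=1.0, x0=0.0, dt=0.01, steps=2000):
    x, trace = x0, []
    for _ in range(steps):
        dx = beta / (1 + (x / K) ** n) - gamma * x
        x += dt * dx                  # forward Euler step
        trace.append(x)
    return trace

print(f"steady state ~ {simulate()[-1]:.3f}")  # approaches the fixed point x = 2
```

With these particular parameters the fixed point satisfies x^3 + x = 10, i.e. x = 2; the negative feedback makes the approach to that point faster than for an unregulated gene with the same steady state.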
Understanding of genetic regulatory circuits is key in the field of synthetic biology , where disparate genetic elements are combined to produce novel biological functions . [ 1 ] [ 12 ] These biological gene circuits can be used synthetically to act as physical models for studying regulatory function. [ 15 ] [ 16 ]
By engineering genetic regulatory circuits, cells can be modified to take information from their environment, such as nutrient availability and developmental signals, and react in accordance with changes in their surroundings. [ 17 ] [ 18 ] [ 19 ] [ 20 ] In plant synthetic biology, genetic regulatory circuits can be used to program traits that increase crop plant efficiency by increasing robustness to environmental stressors. [ 18 ] [ 21 ] Additionally, they are used to produce biopharmaceuticals for medical intervention. [ 18 ] [ 21 ]
Gene set enrichment analysis (GSEA) (also called functional enrichment analysis or pathway enrichment analysis ) is a method to identify classes of genes or proteins that are over-represented in a large set of genes or proteins, and may have an association with different phenotypes (e.g. different organism growth patterns or diseases). The method uses statistical approaches to identify significantly enriched or depleted groups of genes. Transcriptomics technologies and proteomics results often identify thousands of genes, which are used for the analysis. [ 1 ]
Researchers performing high-throughput experiments that yield sets of genes (for example, genes that are differentially expressed under different conditions) often want to retrieve a functional profile of that gene set, in order to better understand the underlying biological processes. This can be done by comparing the input gene set to each of the bins (terms) in the gene ontology – a statistical test can be performed for each bin to see if it is enriched for the input genes.
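The per-bin test is classically a hypergeometric (one-sided Fisher) test; below is a minimal sketch, with invented example numbers at the bottom:

```python
from math import comb

# Hypergeometric over-representation test for one GO bin: N genes in the
# background, K of them annotated with the term, a study set of n genes of
# which k carry the term. Returns the upper-tail p-value (uncorrected).
def hypergeom_pval(N: int, K: int, n: int, k: int) -> float:
    tail = sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1))
    return tail / comb(N, n)

# Hypothetical numbers: 20,000-gene background, 200 genes with the term,
# a 100-gene input list of which 8 carry the term.
print(hypergeom_pval(20000, 200, 100, 8))
```

In practice the p-values from all bins are then corrected for multiple testing, for example with Bonferroni or false-discovery-rate procedures.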
After the completion of the Human Genome Project , the problem of how to interpret and analyze it remained. In order to seek out genes associated with diseases, DNA microarrays were used to measure the amount of gene expression in different cells. Microarrays were carried out on thousands of different genes, and the results were compared between two cell categories, e.g. normal cells versus cancerous cells. However, this method of comparison is not sensitive enough to detect the subtle differences between the expression of individual genes, because diseases typically involve entire groups of genes. [ 2 ] Multiple genes are linked to a single biological pathway, and so it is the additive change in expression within gene sets that leads to the difference in phenotypic expression. Gene Set Enrichment Analysis was developed [ 2 ] to focus on the changes of expression in groups of a priori defined gene sets. By doing so, this method resolves the problem of the undetectable, small changes in the expression of single genes. [ 3 ]
Gene set enrichment analysis uses a priori gene sets that have been grouped together by their involvement in the same biological pathway, or by proximal location on a chromosome. [ 1 ] A database of these predefined sets can be found at the Molecular signatures database (MSigDB). [ 4 ] [ 5 ] In GSEA, DNA microarrays, or now RNA-Seq , are still performed and compared between two cell categories, but instead of focusing on individual genes in a long list, the focus is put on a gene set. [ 1 ] Researchers analyze whether the majority of genes in the set fall in the extremes of this list: the top and bottom of the list correspond to the largest differences in expression between the two cell types. If the gene set falls at either the top (over-expressed) or bottom (under-expressed), it is thought to be related to the phenotypic differences.
In the method typically referred to as standard GSEA, three steps are involved in the analytical process: calculation of an enrichment score (ES) for each gene set against the ranked gene list, estimation of the statistical significance of the ES by permutation testing, and adjustment for multiple hypothesis testing (for example via the false discovery rate). [ 1 ] [ 2 ] The calculation of the enrichment score can be described as:
$$P_{\mathrm{hit}}(S,i)=\sum_{g_j\in S,\;j\le i}\frac{|r_j|^p}{N_R},\qquad P_{\mathrm{miss}}(S,i)=\sum_{g_j\notin S,\;j\le i}\frac{1}{N-N_H},\qquad N_R=\sum_{g_j\in S}|r_j|^p$$

$$ES(S)=P_{\mathrm{hit}}(S,i^{*})-P_{\mathrm{miss}}(S,i^{*}),\qquad i^{*}=\arg\max_i\,\bigl|P_{\mathrm{hit}}(S,i)-P_{\mathrm{miss}}(S,i)\bigr|$$

where $r_j$ is the ranking-metric value of gene $j$, $p$ is a weighting exponent usually set to 1 (if it were 0, the statistic would be equivalent to the Kolmogorov–Smirnov test), $N$ is the total number of genes in the ranked list, and $N_H$ is the number of genes in the set $S$.
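A direct transcription of this running-sum statistic into Python, for a ranked list of (gene, score) pairs and a gene set S, might look as follows (the example data are invented):

```python
# Running-sum enrichment score for gene set S over a ranked gene list
# (sorted by decreasing ranking metric), with weighting exponent p = 1.
def enrichment_score(ranked, S, p=1.0):
    N = len(ranked)
    NH = sum(gene in S for gene, _ in ranked)            # hits in the list
    NR = sum(abs(r) ** p for gene, r in ranked if gene in S)
    running, best = 0.0, 0.0
    for gene, r in ranked:
        if gene in S:
            running += abs(r) ** p / NR                  # P_hit increment
        else:
            running -= 1.0 / (N - NH)                    # P_miss increment
        if abs(running) > abs(best):                     # max deviation from 0
            best = running
    return best

ranked = [("g1", 2.3), ("g2", 1.9), ("g3", 0.4), ("g4", -0.2), ("g5", -1.8)]
print(enrichment_score(ranked, {"g1", "g2", "g5"}))      # 0.7
```

The score alone is not interpretable without the permutation-based significance estimate and the multiple-testing adjustment described above.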
When GSEA was first proposed in 2003, some immediate concerns were raised regarding its methodology. These criticisms led to the use of the correlation-weighted Kolmogorov–Smirnov test , the normalized ES, and the false discovery rate calculation, all of which are the factors that currently define standard GSEA. [ 6 ] However, GSEA has since been criticized on the grounds that its null distribution is superfluous and too difficult to be worth calculating, and that its Kolmogorov–Smirnov-like statistic is not as sensitive as the original. [ 6 ] As an alternative, a method known as Simpler Enrichment Analysis (SEA) was proposed, which assumes gene independence and uses a simpler t-test-based approach. However, these assumptions are thought to be too simplifying, and gene correlation cannot be disregarded. [ 6 ]
One other limitation to Gene Set Enrichment Analysis is that the results are very dependent on the algorithm that clusters the genes, and the number of clusters being tested. [ 7 ] Spectral Gene Set Enrichment (SGSE) is a proposed, unsupervised test. The method's founders claim that it is a better way to find associations between MSigDB gene sets and microarray data. The general steps include:
1. Calculating the association between principal components and gene sets. [ 7 ]
2. Using the weighted Z-method to calculate the association between the gene sets and the spectral structure of the data (a minimal sketch of this combination step is shown below). [ 7 ]
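The following is a minimal sketch of the weighted Z (Stouffer) combination used in step 2, assuming the per-component p-values and weights have already been computed; the numbers in the example are invented:

```python
from math import sqrt
from statistics import NormalDist

# Weighted Z-method: convert one-sided p-values to Z-scores, combine them
# with weights (e.g. variance explained by each principal component), and
# map the combined Z back to a p-value.
def weighted_z(pvals, weights):
    nd = NormalDist()
    z = sum(w * nd.inv_cdf(1 - p) for p, w in zip(pvals, weights))
    z /= sqrt(sum(w * w for w in weights))
    return 1 - nd.cdf(z)

print(weighted_z([0.01, 0.20, 0.03], [3.0, 1.0, 2.0]))
```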
GSEA uses complicated statistics, so it requires a computer program to run the calculations. GSEA has become standard practice, and there are many websites and downloadable programs that will provide the data sets and run the analysis.
Multi-Ontology Enrichment Tool (MOET) is a web-based ontology analysis tool that provides functionality for multiple ontologies, including Disease, GO, Pathway, Phenotype, and Chemical entities (ChEBI) for multiple species, including rat, mouse, human, bonobo, squirrel, dog, pig, chinchilla, naked mole-rat and vervet (green monkey). [ 8 ] It outputs a downloadable graph and a list of statistically overrepresented terms in the user's list of genes using hypergeometric distribution. MOET also displays the corresponding Bonferroni correction and odds ratio on the results page. It is simple to use, and results are provided with a few clicks in seconds; no software installations or programming skills are required. In addition, MOET is updated weekly, providing the user with the most recent data for analyses.
NASQAR (Nucleic Acid SeQuence Analysis Resource) is an open source, web-based platform for high-throughput sequencing data analysis and visualization. [ 9 ] [ 10 ] GSEA can be run using the R-based clusterProfiler package. [ 11 ] NASQAR currently supports GO Term and KEGG Pathway enrichment with all organisms supported by an Org.Db database. [ 12 ]
Gene ontology (GO) annotations for 165 plant species, together with GO enrichment analysis, are also available. [ 13 ]
The Molecular Signatures Database hosts an extensive collection of annotated gene sets that can be used with most GSEA Software. [ 14 ]
The Broad Institute website is in cooperation with MSigDB and has a downloadable GSEA software, as well a general tutorial. [ 15 ]
WebGestalt [ 16 ] is a web based gene set analysis toolkit. It supports three well-established and complementary methods for enrichment analysis, including Over-Representation Analysis (ORA), Gene Set Enrichment Analysis (GSEA), and Network Topology-based Analysis (NTA). Analysis can be performed against 12 organisms and 321,251 functional categories using 354 gene identifiers from various databases and technology platforms.
Enrichr [ 17 ] [ 18 ] [ 19 ] is a gene set enrichment analysis tool for mammalian gene sets. It contains background libraries for transcription regulation, pathways and protein interactions, ontologies including GO and the human and mouse phenotype ontologies, signatures from cells treated with drugs, gene sets associated with human diseases, and expression of genes in different cells and tissues. The background libraries are from over 200 resources and contain over 450,000 annotated gene sets. The tool can be accessed through API and provides different ways to visualize the results. [ 20 ]
GeneSCF is a real-time based functional enrichment tool with support for multiple organisms, [ 21 ] designed to overcome the problems associated with using outdated resources and databases. [ 22 ] Advantages of using GeneSCF include: real-time analysis, so users do not have to depend on enrichment tools being updated; easy integration with NGS pipelines for computational biologists; support for multiple organisms; enrichment analysis for multiple gene lists using multiple source databases in a single run; and retrieval or download of complete GO terms/pathways/functions with associated genes as a simple table in a plain text file. [ 23 ] [ 24 ]
DAVID is the Database for Annotation, Visualization and Integrated Discovery, a bioinformatics tool that pools together information from most major bioinformatics sources, with the aim of analyzing large gene lists in a high-throughput manner. [ 25 ] DAVID goes beyond standard GSEA with additional functions such as switching between gene and protein identifiers on a genome-wide scale. [ 25 ] However, the annotations used by DAVID were not updated between October 2016 and December 2021, [ 26 ] which can have a considerable impact on the practical interpretation of results; [ 27 ] the most recent update was performed in 2021. [ 26 ]
Metascape is a biologist-oriented gene-list analysis portal. [ 28 ] Metascape integrates pathway enrichment analysis, protein complex analysis, and multi-list meta-analysis into one seamless workflow accessible through a significantly simplified user interface. Metascape maintains analysis accuracy by updating its 40 underlying knowledgebases monthly. Metascape presents results using easy-to-interpret graphics, spreadsheets, and publication quality presentations, and is freely available. [ 29 ]
The Gene Ontology (GO) consortium has also developed their own online GO term enrichment tool, [ 30 ] allowing species-specific enrichment analysis versus the complete database, coarser-grained GO slims, or custom references. [ 31 ]
The Genomic Regions Enrichment of Annotations Tool (GREAT) is a software tool that takes advantage of regulatory domains to better associate gene ontology terms with genes. [ 32 ] [ 33 ] Its primary purpose is to identify pathways and processes that are significantly associated with the activity of a regulatory factor. The method maps genes to regulatory regions through a hypergeometric test over genes, inferring proximal gene regulatory domains. It does this by using the total fraction of the genome associated with a given ontology term as the expected fraction of input regions associated with the term by chance. Enrichment is calculated over all regulatory regions, and several experiments were performed to validate GREAT, one of which was enrichment analysis on 8 ChIP-seq datasets. [ 32 ]
The Functional Enrichment Analysis (FunRich) tool [ 34 ] is mainly used for the functional enrichment and network analysis of Omics data. [ 35 ]
The FuncAssociate tool enables Gene Ontology and custom enrichment analyses. [ 36 ] It allows inputting ordered sets as well as weighted gene space files for background. [ 37 ]
Instances of InterMine automatically provide enrichment analysis [ 38 ] for uploaded sets of genes and other biological entities.
ToppGene is a one-stop portal for gene list enrichment analysis and candidate gene prioritization based on functional annotations and protein interactions network. [ 39 ] Developed and maintained by the Division of Biomedical Informatics at Cincinnati Children's Hospital Medical Center .
Quantitative Set Analysis for Gene Expression (QuSAGE) is a computational method for gene set enrichment analysis. [ 40 ] QuSAGE improves power by accounting for inter-gene correlations and quantifies gene set activity with a complete probability density function (PDF). From this PDF, P values and confidence intervals can be easily extracted. Preserving the PDF also allows for post-hoc analysis (e.g., pair-wise comparisons of gene set activity) while maintaining statistical traceability. The applicability of QuSAGE has been extended to longitudinal studies by adding functionality for general linear mixed models. [ 41 ] QuSAGE was used by the NIH/NIAID to identify baseline transcriptional signatures that were associated with human influenza vaccination responses. [ 42 ] QuSAGE is available as a R/ Bioconductor package. [ 43 ]
Blast2GO is a bioinformatics platform for the functional annotation and analysis of genomic datasets. [ 44 ] Among other functions, this tool allows users to perform gene set enrichment analysis. [ 45 ]
g:Profiler is a toolset for finding biological categories enriched in gene lists, conversions between gene identifiers and mappings to their orthologs. [ 46 ] g:Profiler relies on Ensembl as a primary data source and follows their quarterly release cycle while updating the other data sources simultaneously. g:Profiler supports close to 500 species and strains, including vertebrates, plants, fungi, insects and parasites.
Single-nucleotide polymorphisms , or SNPs, are single base mutations that may be associated with diseases. One base change has the potential to affect the protein that results from that gene being expressed; however, it also has the potential to have no effect at all. Genome-wide association studies (GWAS) are comparisons between healthy and disease genotypes that try to find SNPs that are overrepresented in the disease genomes and might be associated with that condition. Before GSEA, the accuracy of genome-wide SNP association studies was severely limited by a high number of false positives. [ 47 ] The GSEA-SNP method is based on the theory that the SNPs contributing to a disease tend to be grouped in a set of genes that are all involved in the same biological pathway. This application of GSEA not only aids in the discovery of disease-associated SNPs, but helps illuminate the corresponding pathways and mechanisms of the diseases. [ 47 ]
Gene set enrichment methods led to the discovery of new suspect genes and biological pathways related to spontaneous preterm births . [ 48 ] Exome sequences from women who had experienced SPTB were compared to those from females from the 1000 Genomes Project, using a tool that scored possible disease-causing variants. Genes with higher scores were then run through different programs to group them into gene sets based on pathways and ontology groups. This study found that the variants were significantly clustered in sets related to several pathways, all suspects in SPTB. [ 48 ]
Gene set enrichment analysis can be used to understand the changes that cells undergo during carcinogenesis and metastasis . In a study, microarrays were performed on renal cell carcinoma metastases, primary renal tumors, and normal kidney tissue, and the data was analyzed using GSEA. [ 49 ] This analysis showed significant changes of expression in genes involved in pathways that have not been previously associated with the progression of renal cancer. From this study, GSEA has provided potential new targets for renal cell carcinoma therapy.
GSEA can be used to help understand the molecular mechanisms of complex disorders. Schizophrenia is a largely heritable disorder, but is also very complex, and the onset of the disease involves many genes interacting within multiple pathways, as well as the interaction of those genes with environmental factors. For instance, epigenetic changes, like DNA methylation , are affected by the environment, but are also inherently dependent on the DNA itself. DNA methylation is the most well-studied epigenetic change, and was recently analyzed using GSEA in relation to schizophrenia-related intermediate phenotypes. [ 50 ] Researchers ranked genes for their correlation between methylation patterns and each of the phenotypes. They then used GSEA to look for an enrichment of genes that are predicted to be targeted by microRNAs in the progression of the disease. [ 50 ]
GSEA can help provide molecular evidence for the association of biological pathways with diseases. Previous studies have shown that long-term depression symptoms are correlated with changes in immune response and inflammatory pathways. [ 51 ] Genetic and molecular evidence was sought to support this. Researchers took blood samples from sufferers of depression, and used genome-wide expression data, along with GSEA to find expression differences in gene sets related to inflammatory pathways. This study found that those people who rated with the most severe depression symptoms also had significant expression differences in those gene sets, and this result supports the association hypothesis. [ 51 ]
Gene set enrichment analysis has been adapted for microbiome studies through taxon set enrichment analysis (TSEA) [ 52 ] and microbe set enrichment analysis (MSEA). [ 53 ] Instead of analyzing gene sets, these approaches test for enrichment of predefined sets of microbial species or genera, enabling interpretation of microbial community shifts in terms of higher-level taxonomy or functional roles. [ 54 ]
A gene signature or gene expression signature is a single or combined group of genes in a cell with a uniquely characteristic pattern of gene expression [ 1 ] that occurs as a result of an altered or unaltered biological process or pathogenic medical condition. [ 2 ] This is not to be confused with the concept of gene expression profiling . Activating pathways in a regular physiological process or a physiological response to a stimulus results in a cascade of signal transduction and interactions that elicit altered levels of gene expression, which is classified as the gene signature of that physiological process or response. [ 3 ] The clinical applications of gene signatures break down into prognostic, diagnostic [ 4 ] [ 5 ] and predictive signatures. The phenotypes that may theoretically be defined by a gene expression signature range from those that predict the survival or prognosis of an individual with a disease, through those that are used to differentiate between different subtypes of a disease, to those that predict activation of a particular pathway . Ideally, gene signatures can be used to select a group of patients [ 6 ] for whom a particular treatment will be effective. [ 7 ] [ 8 ]
In 1995, two studies identified unique approaches to analyzing the global gene expression of a genome, which collectively promoted the value of identifying and analyzing gene signatures for physiological relevance. The first study reported a technique that improves expressed sequence tag (EST) analysis, known as serial analysis of gene expression (SAGE), which hinged on sequencing and quantifying mRNA transcripts to obtain levels of gene expression that eventually revealed characteristic gene expression patterns. [ 9 ]
The second study identified a technique that is now widely known as the microarray which quantifies complementary DNA (cDNA) hybridization on a glass slide to analyze the expression of many genes in parallel. [ 10 ] These studies drew greater attention to the wealth of information that analysis of gene signatures bear that may or may not be physiologically relevant.
Pressing forward, the latter technique has revolutionized research in genetics and DNA chip technology, [ 11 ] as it is a widely adopted method for profiling gene expression signatures so that these physiological responses can be cataloged [ 12 ] in repositories such as the NCBI Gene Expression Omnibus . This catalogue of prognostic, diagnostic and predictive gene expression signatures allows for prediction of the onset of pathogenic diseases in patients, [ 13 ] tumour and cancer classification, [ 14 ] and enhanced therapeutic strategies that identify the optimal candidate subjects and target genes. [ 15 ]
Today, microarrays and other quantitative methods such as RNA-seq that encompass gene expression profiling , are moving towards promotion of re-analysis and integration of the large, publicly available database of gene expression signatures and profiles to uncover the full threshold of information these expression signatures hold. [ 16 ]
Prognostic refers to predicting the likely outcome or course of a disease. Classifying a biological phenotype or medical condition based on a specific gene signature, or multiple gene signatures, can serve as a prognostic biomarker for the associated phenotype or condition. This concept, termed a prognostic gene signature , serves to offer insight into the overall outcome of the condition regardless of therapeutic intervention. [ 17 ] Several studies have been conducted with a focus on identifying prognostic gene signatures, with the hope of improving the diagnostic methods and therapeutic courses adopted in clinical settings. It is important to note that prognostic gene signatures are not a target of therapy; they offer additional information to consider when discussing details such as duration, dosage or drug sensitivity in therapeutic intervention. The criteria a gene signature must meet to be deemed a prognostic marker include demonstration of its association with the outcomes of the condition, reproducibility and validation of its association in an independent group of patients and, lastly, a prognostic value that is independent of other standard factors in a multivariate analysis. [ 3 ] The applications of these prognostic signatures include prognostic assays for breast cancer , [ 18 ] [ 19 ] hepatocellular carcinoma [ 20 ] and leukaemia , [ 21 ] and they are continually being developed for other types of cancers and disorders as well.
A diagnostic gene signature serves as a biomarker that distinguishes phenotypically similar medical conditions across a spectrum of severity consisting of mild, moderate or severe phenotypes. [ 5 ] Establishing verified methods of diagnosing clinically indolent and clinically significant cases allows practitioners to provide more accurate care and therapeutic options, ranging from no therapy and preventative care to symptomatic relief. These diagnostic signatures also allow for a more accurate representation of test samples used in research. [ 6 ] As with the validation of prognostic gene signatures, criteria exist for classifying a gene signature as a biomarker for a disorder or disease, as outlined by Chau et al. [ 22 ] [ 23 ]
A predictive gene signature is similar to a predictive biomarker in that it predicts the effect of treatment in patients or study participants who exhibit a particular disease phenotype. Unlike a prognostic gene signature, a predictive gene signature can be a target for therapy. [ 17 ] The information predictive signatures provide is more rigorous than that of prognostic signatures, as it is based on treatment groups with therapeutic intervention and the likely benefit from treatment, completely independent of prognosis. [ 24 ] Predictive gene signatures address the paramount need for ways to personalize and tailor therapeutic intervention in diseases. These signatures have implications in facilitating personalized medicine through the identification of novel therapeutic targets and of the subjects most likely to benefit from specific treatments. [ 3 ] [ 25 ] [ 26 ]
Gene silencing is the regulation of gene expression in a cell to prevent the expression of a certain gene . [ 1 ] [ 2 ] Gene silencing can occur during either transcription or translation and is often used in research. [ 1 ] [ 2 ] In particular, methods used to silence genes are being increasingly used to produce therapeutics to combat cancer and other diseases, such as infectious diseases and neurodegenerative disorders .
Gene silencing is often considered the same as gene knockdown . [ 3 ] [ 4 ] When genes are silenced, their expression is reduced. [ 3 ] [ 4 ] In contrast, when genes are knocked out, they are completely erased from the organism's genome and, thus, have no expression. [ 3 ] [ 4 ] Gene silencing is considered a gene knockdown mechanism since the methods used to silence genes, such as RNAi , CRISPR , or siRNA , generally reduce the expression of a gene by at least 70% but do not eliminate it [ citation needed ] . Methods using gene silencing are often considered better than gene knockouts [ 5 ] [ 6 ] since they allow researchers to study essential genes that are required for the animal models to survive and cannot be removed. In addition, they provide a more complete view on the development of diseases since diseases are generally associated with genes that have a reduced expression. [ 3 ]
Antisense oligonucleotides were discovered in 1978 by Paul Zamecnik and Mary Stephenson. [ 7 ] Oligonucleotides , which are short nucleic acid fragments, bind to complementary target mRNA molecules when added to the cell. [ 7 ] [ 8 ] These molecules can be composed of single-stranded DNA or RNA and are generally 13–25 nucleotides long. [ 8 ] [ 9 ] The antisense oligonucleotides can affect gene expression in two ways: by using an RNase H -dependent mechanism or by using a steric blocking mechanism. [ 8 ] [ 9 ] RNase H-dependent oligonucleotides cause the target mRNA molecules to be degraded, while steric-blocker oligonucleotides prevent translation of the mRNA molecule. [ 8 ] [ 9 ] The majority of antisense drugs function through the RNase H-dependent mechanism, in which RNase H hydrolyzes the RNA strand of the DNA/RNA heteroduplex . [ 8 ] [ 9 ]
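Because an antisense oligonucleotide is simply the reverse complement of its target region, candidate sequences can be derived mechanically; the target below is an arbitrary example, and real design must also consider chemistry, target accessibility and off-target effects:

```python
# Derive an antisense RNA oligonucleotide as the reverse complement of a
# target mRNA region (RNA alphabet). The example target sequence is invented.
COMPLEMENT = str.maketrans("ACGU", "UGCA")

def antisense(target_mrna: str) -> str:
    return target_mrna.translate(COMPLEMENT)[::-1]

print(antisense("AUGGCUGACUUCGAAGCUAA"))
```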
Ribozymes are catalytic RNA molecules used to inhibit gene expression . These molecules work by cleaving mRNA molecules, essentially silencing the genes that produced them. Sidney Altman and Thomas Cech first discovered catalytic RNA molecules (RNase P and group I intron ribozymes) in the early 1980s and shared the 1989 Nobel Prize in Chemistry for the discovery. [ 10 ] [ 11 ] Several types of ribozyme motifs exist, including hammerhead , hairpin , hepatitis delta virus , group I , group II , and RNase P ribozymes. Hammerhead, hairpin, and hepatitis delta virus (HDV) ribozyme motifs are generally found in viruses or viroid RNAs. [ 10 ] These motifs are able to self-cleave a specific phosphodiester bond on an mRNA molecule. [ 10 ] Lower eukaryotes and a few bacteria contain group I and group II ribozymes. [ 10 ] These motifs can self-splice by cleaving and joining phosphodiester bonds. [ 10 ] The last ribozyme motif, the RNase P ribozyme, is found in Escherichia coli and is known for its ability to cleave the phosphodiester bonds of several tRNA precursors when joined to a protein cofactor. [ 10 ]
The general catalytic mechanism used by ribozymes is similar to the mechanism used by protein ribonucleases . [ 12 ] These catalytic RNA molecules bind to a specific site and attack the neighboring phosphate in the RNA backbone with their 2' oxygen, which acts as a nucleophile , resulting in the formation of cleaved products with a 2'3'-cyclic phosphate and a 5' hydroxyl terminal end. [ 12 ] This catalytic mechanism has been increasingly used by scientists to perform sequence-specific cleavage of target mRNA molecules. In addition, attempts are being made to use ribozymes to produce gene silencing therapeutics, which would silence genes that are responsible for causing diseases. [ 13 ]
RNA interference ( RNAi ) is a natural process used by cells to regulate gene expression. It was discovered in 1998 by Andrew Fire and Craig Mello , who won the Nobel Prize for their discovery in 2006. [ 14 ] The process to silence genes first begins with the entrance of a double-stranded RNA (dsRNA) molecule into the cell, which triggers the RNAi pathway. [ 14 ] The double-stranded molecule is then cut into small double-stranded fragments by an enzyme called Dicer . [ 14 ] These small fragments, which include small interfering RNAs (siRNA) and microRNA (miRNA) , are approximately 21–23 nucleotides in length. [ 14 ] [ 15 ] The fragments integrate into a multi-subunit protein called the RNA-induced silencing complex , which contains Argonaute proteins that are essential components of the RNAi pathway. [ 14 ] [ 15 ] One strand of the molecule, called the "guide" strand, binds to RISC, while the other strand, known as the "passenger" strand, is degraded. [ 14 ] [ 15 ] The guide or antisense strand of the fragment that remains bound to RISC directs the sequence-specific silencing of the target mRNA molecule. [ 15 ] The genes can be silenced by siRNA molecules that cause the endonucleolytic cleavage of the target mRNA molecules or by miRNA molecules that suppress translation of the mRNA molecule. [ 15 ] With the cleavage or translational repression of the mRNA molecules, the genes that form them are rendered essentially inactive. [ 14 ] RNAi is thought to have evolved as a cellular defense mechanism against invaders, such as RNA viruses , or to combat the proliferation of transposons within a cell's DNA. [ 14 ] Both RNA viruses and transposons can exist as double-stranded RNA and lead to the activation of RNAi. [ 14 ] Currently, siRNAs are being widely used to suppress specific gene expression and to assess the function of genes . Companies utilizing this approach include Alnylam , Sanofi , [ 16 ] Arrowhead, Discerna, [ 17 ] and Persomics , [ 18 ] among others.
The three prime untranslated regions (3'UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally cause gene silencing. Such 3'-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins . By binding to specific sites within the 3'-UTR, a large number of specific miRNAs decrease gene expression of their particular target mRNAs by either inhibiting translation or directly causing degradation of the transcript, using a mechanism similar to RNA interference (see MicroRNA ). The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of an mRNA. [ citation needed ]
The 3'-UTR often contains microRNA response elements (MREs) . MREs are sequences to which miRNAs bind and cause gene silencing. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs. [ citation needed ]
As of 2014, the miRBase web site, [ 19 ] an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to each have an average of about four hundred target mRNAs (causing gene silencing of several hundred genes). [ 20 ] Friedman et al. [ 20 ] estimate that >45,000 miRNA target sites within human mRNA 3'UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs. [ citation needed ]
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. [ 21 ] Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold). [ 22 ] [ 23 ]
The effects of miRNA dysregulation of gene expression seem to be important in cancer. [ 24 ] For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down regulating DNA repair enzymes. [ 25 ]
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders. [ 26 ] [ 27 ] [ 28 ]
Gene silencing techniques have been widely used by researchers to study genes associated with disorders. These disorders include cancer , infectious diseases , respiratory diseases , and neurodegenerative disorders . Gene silencing is also currently being used in drug discovery efforts, such as synthetic lethality , high-throughput screening , and miniaturized RNAi screens. [ citation needed ]
RNA interference has been used to silence genes associated with several cancers. In in vitro studies of chronic myelogenous leukemia (CML) , siRNA was used to cleave the transcript encoding the fusion protein BCR-ABL , which prevents the drug Gleevec ( imatinib ) from binding to the cancer cells. [ 29 ] Cleaving the fusion transcript reduced the amount of transformed hematopoietic cells that spread throughout the body by increasing the sensitivity of the cells to the drug. [ 29 ] RNA interference can also be used to target specific mutants. For instance, siRNAs were able to bind specifically to mutant tumor suppressor p53 transcripts containing a single point mutation and destroy them, while leaving the wild-type suppressor intact. [ 30 ]
Receptors involved in mitogenic pathways that lead to the increased production of cancer cells have also been targeted by siRNA molecules. The chemokine receptor CXCR4 , associated with the proliferation of breast cancer, was silenced by siRNA molecules, which reduced the number of divisions commonly observed in the cancer cells. [ 31 ] Researchers have also used siRNAs to selectively regulate the expression of cancer-related genes. Antiapoptotic proteins, such as clusterin and survivin , are often expressed in cancer cells. [ 32 ] [ 33 ] Clusterin- and survivin-targeting siRNAs were used to reduce the number of antiapoptotic proteins and, thus, increase the sensitivity of the cancer cells to chemotherapy treatments. [ 32 ] [ 33 ] In vivo studies are also being increasingly utilized to study the potential use of siRNA molecules in cancer therapeutics. For instance, mice implanted with colon adenocarcinoma cells were found to survive longer when the cells were pretreated with siRNAs that targeted B-catenin in the cancer cells. [ 34 ]
Viral genes and host genes that are required for viruses to replicate or enter the cell, or that play an important role in the life cycle of the virus, are often targeted by antiviral therapies. RNAi has been used to target genes in several viral diseases, such as the human immunodeficiency virus (HIV) and hepatitis . [ 35 ] [ 36 ] In particular, siRNA was used to silence the HIV co-receptor chemokine receptor 5 (CCR5). [ 37 ] This prevented the virus from entering the human peripheral blood lymphocytes and the primary hematopoietic stem cells. [ 37 ] [ 38 ] A similar technique was used to decrease the amount of the detectable virus in hepatitis B and C infected cells. In hepatitis B, siRNA silencing was used to target the surface antigen on the hepatitis B virus and led to a decrease in the number of viral components. [ 39 ] In addition, siRNA techniques used in hepatitis C were able to lower the amount of the virus in the cell by 98%. [ 40 ] [ 41 ]
RNA interference has been in commercial use to control virus diseases of plants for over 20 years (see Plant disease resistance ). In 1986–1990, multiple examples of "coat protein-mediated resistance" against plant viruses were published, before RNAi had been discovered. [ 42 ] In 1993, work with tobacco etch virus first demonstrated that host organisms can target specific virus or mRNA sequences for degradation, and that this activity is the mechanism behind some examples of virus resistance in transgenic plants. [ 43 ] [ 44 ] The discovery of small interfering RNAs (the specificity determinant in RNA-mediated gene silencing) also utilized virus-induced post-transcriptional gene silencing in plants. [ 45 ] By 1994, transgenic squash varieties had been generated expressing coat protein genes from three different viruses, providing squash hybrids with field-validated multiviral resistance that remain in commercial use at present. Potato lines expressing viral replicase sequences that confer resistance to potato leafroll virus were sold under the trade names NewLeaf Y and NewLeaf Plus, and were widely accepted in commercial production in 1999–2001, until McDonald's Corp. decided not to purchase GM potatoes and Monsanto decided to close their NatureMark potato business. [ 46 ] Another frequently cited example of virus resistance mediated by gene silencing involves papaya, where the Hawaiian papaya industry was rescued by virus-resistant GM papayas produced and licensed by university researchers rather than a large corporation. [ 47 ] These papayas also remain in use at present, although not without significant public protest, [ 48 ] [ 49 ] which is notably less evident in medical uses of gene silencing.
Gene silencing techniques have also been used to target other viruses, such as the human papilloma virus , the West Nile virus , and the Tulane virus. The E6 gene in tumor samples retrieved from patients with the human papilloma virus was targeted and found to cause apoptosis in the infected cells. [ 50 ] Plasmid siRNA expression vectors used to target the West Nile virus were also able to prevent the replication of viruses in cell lines. [ 51 ] In addition, siRNA has been found to be successful in preventing the replication of the Tulane virus, part of the virus family Caliciviridae , by targeting both its structural and non-structural genes. [ 52 ] By targeting the NTPase gene, one dose of siRNA 4 hours pre-infection was shown to control Tulane virus replication for 48 hours post-infection, reducing the viral titer by up to 2.6 logarithms. [ 52 ] Although the Tulane virus is species-specific and does not affect humans, it has been shown to be closely related to the human norovirus , which is the most common cause of acute gastroenteritis and food-borne disease outbreaks in the United States. [ 53 ] Human noroviruses are notorious for being difficult to study in the laboratory, but the Tulane virus offers a model through which to study this family of viruses for the clinical goal of developing therapies that can be used to treat illnesses caused by human norovirus. [ citation needed ]
Unlike viruses, bacteria are not as susceptible to silencing by siRNA, [ 54 ] largely because of how they replicate: bacteria replicate outside the host cell and do not contain the machinery necessary for RNAi to function. [ 54 ] However, bacterial infections can still be suppressed by siRNA that targets host genes involved in the immune response to the infection or host genes that mediate the entry of bacteria into cells. [ 54 ] [ 55 ] For instance, siRNA was used to reduce the amount of pro-inflammatory cytokines expressed in the cells of mice treated with lipopolysaccharide (LPS) . [ 54 ] [ 56 ] The reduced expression of the inflammatory cytokine tumor necrosis factor α (TNFα) , in turn, reduced the septic shock experienced by the LPS-treated mice. [ 56 ] In addition, siRNA was used to prevent the bacterium Pseudomonas aeruginosa from invading murine lung epithelial cells by knocking down the caveolin-2 (CAV2) gene. [ 57 ] Thus, though bacteria cannot be directly targeted by siRNA mechanisms, they can still be affected when the host components involved in the bacterial infection are targeted. [ citation needed ]
Ribozymes, antisense oligonucleotides, and more recently RNAi have been used to target mRNA molecules involved in asthma . [ 55 ] [ 58 ] These experiments have suggested that siRNA may be used to combat other respiratory diseases, such as chronic obstructive pulmonary disease (COPD) and cystic fibrosis . [ 55 ] COPD is characterized by goblet cell hyperplasia and mucus hypersecretion . [ 59 ] Mucus secretion was reduced when transforming growth factor (TGF)-α was targeted by siRNA in NCI-H292 human airway epithelial cells . [ 60 ] In addition to mucus hypersecretion, chronic inflammation and damaged lung tissue are characteristic of COPD and asthma, and the transforming growth factor TGF-β is thought to play a role in these manifestations. [ 61 ] [ 62 ] Accordingly, when interferon (IFN)-γ was used to knock down TGF-β, pulmonary fibrosis, the damage and scarring of lung tissue, improved. [ 63 ] [ 64 ]
Huntington's disease (HD) results from a mutation in the huntingtin gene that produces an excess of CAG repeats. [ 65 ] The gene then encodes a mutated huntingtin protein with polyglutamine repeats near the amino terminus . [ 66 ] The disease is incurable and causes motor, cognitive , and behavioral deficits. [ 67 ] Researchers have been investigating gene silencing as a potential therapeutic for HD. [ citation needed ]
Gene silencing can be used to treat HD by targeting the mutant huntingtin protein, which has been silenced in an allele-specific manner using antisense oligonucleotides . In this method, the antisense oligonucleotides are directed at single nucleotide polymorphisms (SNPs) , single-nucleotide changes in the DNA sequence, since HD patients have been found to share common SNPs associated with the mutated huntingtin allele. Approximately 85% of patients with HD can be covered when three SNPs are targeted. In addition, when antisense oligonucleotides were used to target an HD-associated SNP in mice, there was a 50% decrease in the mutant huntingtin protein. [ 65 ]
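To illustrate how targeting several SNPs raises the fraction of patients covered, here is a minimal sketch assuming independent SNPs with hypothetical per-SNP frequencies chosen only so that three SNPs reproduce roughly the ~85% figure cited above; the real HD haplotype data are more complex, and the function name and numbers are invented for illustration.

```python
# Illustrative only: fraction of patients carrying at least one targetable SNP,
# assuming independence between SNPs. The frequencies below are hypothetical.
def coverage(freqs):
    """Return the fraction of patients covered by at least one targeted SNP."""
    uncovered = 1.0
    for f in freqs:
        uncovered *= (1.0 - f)  # probability the patient lacks this SNP
    return 1.0 - uncovered

# Three hypothetical per-SNP frequencies that together give ~85% coverage.
print(round(coverage([0.48, 0.45, 0.47]), 3))  # -> 0.848
```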
Non-allele-specific gene silencing using siRNA molecules has also been used to silence the mutant huntingtin protein. In this approach, instead of targeting SNPs on the mutated transcript, all normal and mutated huntingtin proteins are targeted. When studied in mice, siRNA reduced normal and mutant huntingtin levels by 75%; at this level, the mice developed improved motor control and survived longer than the controls. [ 65 ] Thus, gene silencing methods may prove beneficial in treating HD.
Amyotrophic lateral sclerosis (ALS) , also called Lou Gehrig's disease, is a motor neuron disease that affects the brain and spinal cord . The disease causes motor neurons to degenerate, eventually leading to neuronal death and muscular degeneration. [ 68 ] Hundreds of mutations in the Cu/Zn superoxide dismutase (SOD1) gene have been found to cause ALS. [ 69 ] Gene silencing has been used to knock down the SOD1 mutant that is characteristic of ALS; [ 69 ] [ 70 ] specifically, siRNA molecules have been successfully used to target the mutant SOD1 gene and reduce its expression through allele-specific gene silencing. [ 69 ] [ 71 ]
There are several challenges associated with gene silencing therapies, including delivery and specificity for targeted cells. For instance, to treat neurodegenerative disorders, the molecules of a prospective gene silencing therapy must be delivered to the brain. The blood–brain barrier makes delivery through the bloodstream difficult by preventing the passage of the majority of molecules that are injected or absorbed into the blood. [ 65 ] [ 66 ] Researchers have therefore found that they must inject the molecules directly or implant pumps that push them into the brain. [ 65 ]
Once inside the brain, however, the molecules must still enter the targeted cells. Viral vectors can be used to deliver siRNA molecules into cells efficiently, [ 65 ] [ 67 ] but this method of delivery can be problematic because it can elicit an immune response against the molecules. In addition to delivery, specificity is also an issue in gene silencing: both antisense oligonucleotides and siRNA molecules can potentially bind to the wrong mRNA molecule. [ 65 ] Researchers are therefore searching for more efficient ways to deliver gene silencing therapeutics and to design them to be more specific while remaining safe and effective. [ citation needed ]
Arctic Apples are a suite of trademarked [ 72 ] apples that contain a nonbrowning trait created by using gene silencing to reduce the expression of polyphenol oxidase (PPO). They are the first approved food product to use this technique. [ 73 ]
Gene targeting is a biotechnological tool used to change the DNA sequence of an organism; it is therefore a form of genome editing . It is based on the natural DNA-repair mechanism of homology-directed repair (HDR), including homologous recombination . Gene targeting can be used to make DNA edits of a range of sizes, from larger edits such as inserting entire new genes into an organism through to much smaller changes to the existing DNA such as a single base-pair change. Gene targeting relies on a repair template to introduce the user-defined edits to the DNA. The user (usually a scientist) designs the repair template to contain the desired edit, flanked by DNA sequence corresponding (homologous) to the region of DNA to be edited; the edit is thus targeted to a particular genomic region. In this way gene targeting is distinct from natural homology-directed repair, during which the 'natural' DNA repair template of the sister chromatid (the second copy of the gene) is used to repair broken DNA. Altering the DNA sequence of an organism can be useful both in research, for example to understand the biological role of a gene, and in biotechnology, for example to alter the traits of an organism (e.g. to improve crop plants).
To create a gene-targeted organism, DNA must be introduced into its cells. This DNA must contain all of the parts necessary to complete the gene targeting. At a minimum this is the homology repair template, containing the desired edit flanked by regions of DNA homologous to (identical in sequence to) the targeted region; these homologous regions are called "homology arms" . Often a reporter gene and/or a selectable marker is also required, to help identify and select for cells (or "events") where GT has actually occurred. It is also common practice to increase GT rates by causing a double-strand break (DSB) in the targeted DNA region, [ 2 ] so the genes encoding the site-specific nuclease of interest may be transformed along with the repair template. These genetic elements required for GT may be assembled through conventional molecular cloning in bacteria.
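As a toy illustration of the construct design described above, the sketch below assembles a repair template in silico from a desired edit and two homology arms; the sequences, coordinates and arm length are invented, and real homology arms are typically hundreds of base pairs long.

```python
# Toy sketch: assemble a gene-targeting repair template, i.e. the desired
# edit flanked by homology arms copied from around the target site.
# All sequences and coordinates are invented for illustration.
GENOME = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # toy target locus

def build_repair_template(genome, cut_site, edit, arm_len):
    """Flank `edit` with homology arms taken from either side of the cut site."""
    left_arm = genome[cut_site - arm_len:cut_site]
    right_arm = genome[cut_site:cut_site + arm_len]
    return left_arm + edit + right_arm

# Insert a short sequence at position 18 with 10-bp arms
# (real arms are usually hundreds of bp).
print(build_repair_template(GENOME, cut_site=18, edit="GATTACA", arm_len=10))
```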
Gene targeting methods are established for several model organisms and may vary depending on the species used. To target genes in mice , the DNA is inserted into mouse embryonic stem cells in culture. Cells carrying the insertion can contribute to a mouse's tissues via embryo injection. Chimeric mice in which the modified cells make up the reproductive organs are then bred ; the offspring of this cross are derived entirely from the selected embryonic stem cell.
To target genes in moss , the DNA is incubated with freshly isolated protoplasts and with polyethylene glycol . As mosses are haploid organisms, [ 3 ] moss filaments ( protonema ) can be screened directly for the targeting event, either by treatment with antibiotics or by PCR . Unique among plants , this procedure for reverse genetics is as efficient as in yeast . [ 4 ] Gene targeting has also been successfully applied to cattle, sheep, swine, and many fungi.
The frequency of gene targeting can be significantly enhanced through the use of site-specific endonucleases such as zinc finger nucleases , [ 5 ] engineered homing endonucleases , [ 6 ] TALENs , or, most commonly, the CRISPR -Cas system. This method has been applied to species including Drosophila melanogaster , [ 5 ] tobacco , [ 7 ] [ 8 ] corn , [ 9 ] human cells, [ 10 ] mice [ 11 ] and rats . [ 11 ]
The relationship between gene targeting, gene editing and genetic modification is outlined in the Venn diagram below, which displays how genetic engineering encompasses all three of these techniques. Genome editing is characterised by making small edits to the genome at a specific location, often following cutting of the target DNA region by a site-specific nuclease such as CRISPR. [ 12 ] Genetic modification usually describes the insertion of a transgene (foreign DNA, i.e. a gene from another species) into a random location within the genome. [ 13 ] [ 14 ] Gene targeting is a specific biotechnological tool that can lead to small changes to the genome at a specific site, [ 2 ] in which case the edits caused by gene targeting would count as genome editing. However, gene targeting is also capable of inserting entire genes (such as transgenes) at the target site if the transgene is incorporated into the homology repair template used during gene targeting. [ 15 ] [ 16 ] In such cases the edits caused by gene targeting would, in some jurisdictions, be considered equivalent to genetic modification, as insertion of foreign DNA has occurred. [ 16 ]
Gene targeting is one specific form of genome editing tool. Other genome editing tools include targeted mutagenesis, base editing and prime editing , all of which create edits to the endogenous DNA (DNA already present in the organism) at a specific genomic location. [ 17 ] [ 18 ] This site-specific or 'targeted' nature is typically what distinguishes genome editing from traditional 'genetic modification', which inserts a transgene at a non-specific location in the organism's genome; in addition, gene editing makes small edits to the DNA already present in the organism, whereas genetic modification inserts 'foreign' DNA from another species. [ 19 ] [ 20 ]
Because gene editing makes smaller changes to endogenous DNA, many mutations created through genome editing could in theory occur through natural mutagenesis or, in the context of plants, through mutation breeding , which is part of conventional breeding (in contrast, the insertion of a transgene to create a genetically modified organism (GMO) could not occur naturally). However, there are exceptions to this general rule: as explained in the introduction, GT can introduce edits of a range of sizes, from very small edits such as changing, inserting or deleting a single base pair, through to inserting much longer DNA sequences, which could in theory include an entire transgene. [ 16 ] In practice, however, GT is more commonly used to insert smaller sequences. The range of edits possible through GT can make it challenging to regulate (see Regulation ).
The two most established forms of gene editing are gene targeting and targeted mutagenesis . While gene targeting relies on the homology-directed repair (HDR) (also called homologous recombination , HR) DNA repair pathway, targeted mutagenesis uses non-homologous end joining (NHEJ) of broken DNA. NHEJ is an error-prone DNA repair pathway: when it repairs broken DNA it can insert or delete DNA bases, creating insertions or deletions (indels). The user cannot specify what these random indels will be, and hence cannot control exactly what edits are made at the target site. However, they can control where these edits occur (i.e. dictate the target site) by using a site-specific nuclease (previously zinc finger nucleases and TALENs , now commonly CRISPR ) to break the DNA at the target site. A summary of gene targeting through HDR and targeted mutagenesis through NHEJ is shown in the figure below.
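The difference between the two pathways can also be sketched in code. The toy model below (all sequences and function names invented, and vastly simplified relative to real repair biology) shows NHEJ producing an unpredictable indel at the cut site, while HDR copies a user-defined edit out of the repair template:

```python
import random

def nhej_repair(seq, cut):
    """Error-prone NHEJ: rejoin the break with a random small indel."""
    indel = random.randint(-3, 3)                # lose or gain up to 3 bp
    if indel < 0:
        return seq[:cut + indel] + seq[cut:]     # small deletion
    return seq[:cut] + "N" * indel + seq[cut:]   # small random insertion

def hdr_repair(seq, cut, template, arm_len):
    """HDR: copy the user-designed edit from the repair template."""
    edit = template[arm_len:-arm_len]            # strip the homology arms
    return seq[:cut] + edit + seq[cut:]

site = "ACGTACGTACGTACGT"
print(nhej_repair(site, 8))                      # unpredictable indel
print(hdr_repair(site, 8, "ACGTACGT" + "TTT" + "ACGTACGT", arm_len=8))  # defined edit
```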
The more newly developed gene editing techniques of prime editing and base editing, [ 18 ] based on CRISPR-Cas methods, are alternatives to gene targeting that can also create user-defined edits at targeted genomic locations. However, each is limited in the length of DNA sequence it can insert: base editing is limited to single base-pair conversions, [ 21 ] while prime editing can only insert sequences of up to ~44 bp. [ 22 ] [ 23 ] Hence GT remains the primary method of targeted (location-specific) insertion of long DNA sequences for genome engineering.
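The size limits cited above can be summarized as a schematic decision rule. This is illustrative only: the function name and thresholds are simplifications, and real method choice also weighs efficiency, cell type and delivery.

```python
# Schematic rule of thumb based on the limits cited above:
# ~1 bp conversions for base editing, insertions up to ~44 bp for prime
# editing, and gene targeting (HDR) for longer, gene-sized insertions.
def suggest_editing_method(insert_len_bp, is_substitution=False):
    if is_substitution and insert_len_bp == 1:
        return "base editing (single base-pair conversion)"
    if insert_len_bp <= 44:
        return "prime editing (insertions up to ~44 bp)"
    return "gene targeting via HDR (long or gene-sized insertions)"

print(suggest_editing_method(1, is_substitution=True))
print(suggest_editing_method(30))
print(suggest_editing_method(2000))  # e.g. an entire transgene
```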
Gene trapping is based on random insertion of a cassette, while gene targeting manipulates a specific gene. Cassettes can be used for many different purposes, whereas the flanking homology regions of gene targeting cassettes must be adapted for each gene. This makes gene trapping more amenable to large-scale projects than targeting. On the other hand, gene targeting can be used for genes with low transcription levels that would go undetected in a trap screen. Also, the probability of trapping increases with intron size, while for gene targeting small genes are just as easily altered.
Gene targeting was developed in mammalian cells in the 1980s, [ 24 ] [ 25 ] [ 26 ] and the ability to make specific sequence changes at a target genomic site enabled diverse applications, such as the study of gene function or human disease, particularly in mouse models. [ 27 ] Indeed, gene targeting has been widely used to study human genetic diseases by removing (" knocking out "), or adding (" knocking in "), specific mutations of interest. [ 28 ] [ 29 ] Previously used to engineer rat cell models, [ 30 ] [ 31 ] gene targeting technologies have advanced to enable a new wave of isogenic human disease models . These models are the most accurate in vitro models available to researchers and facilitate the development of personalized drugs and diagnostics, particularly in oncology . [ 32 ] Gene targeting has also been investigated for gene therapy to correct disease-causing mutations, but the low efficiency of delivering the gene targeting machinery into cells has hindered this, and research into viral vectors for gene targeting has been conducted to address these challenges. [ 33 ]
Gene targeting is relatively efficient in yeast, bacteria and moss, but rare in higher eukaryotes. It has therefore been used in reverse genetics approaches to study gene function in these systems. [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ]
Gene targeting (GT), or homology-directed repair (HDR), is used routinely in plant genome engineering to insert specific sequences, [ 39 ] with the first published example of GT in plants appearing in the 1980s. [ 15 ] Gene targeting is nevertheless particularly challenging in higher plants because of their low rates of homologous recombination (homology-directed repair) and the low rate of transformation (DNA uptake) in many plant species. [ 40 ] There has consequently been much effort over the past decades to increase the frequency of gene targeting in plants, [ 39 ] [ 40 ] [ 41 ] [ 42 ] as it is very useful to be able to introduce specific sequences into the plant genome for plant genome engineering. The most significant improvement to gene targeting frequencies in plants was the induction of double-strand breaks through site-specific nucleases such as CRISPR, as described above. Other strategies include in planta gene targeting, whereby the homology repair template is embedded within the plant genome and then liberated by CRISPR cutting; [ 43 ] upregulation of genes involved in the homologous recombination pathway; downregulation of the competing non-homologous end joining pathway; [ 39 ] increasing the copy number of the homologous repair template; [ 44 ] and engineering Cas variants optimised for plant tissue culture. [ 45 ] Some of these approaches have also been used to improve gene targeting efficiencies in mammalian cells. [ 46 ]
Plants that have been gene-targeted include Arabidopsis thaliana (the most commonly used model plant ), rice, tomato, maize, tobacco and wheat. [ 40 ]
Gene targeting holds enormous promise for making targeted, user-defined sequence changes or sequence insertions in the genome. However, its primary applications, human disease modelling and plant genome engineering, are hindered by the low efficiency of homologous recombination in comparison to the competing non-homologous end joining in mammalian and higher-plant cells. [ 47 ] As described above, there are strategies that can be employed to increase the frequency of gene targeting in plants and mammalian cells. [ 37 ] In addition, robust selection methods that allow the selection or specific enrichment of cells in which gene targeting has occurred can increase the rate of recovery of gene-targeted cells. [ 48 ]
Mario R. Capecchi , Martin J. Evans and Oliver Smithies were awarded the 2007 Nobel Prize in Physiology or Medicine for their work on "principles for introducing specific gene modifications in mice by the use of embryonic stem cells", or gene targeting. [ 49 ]
As explained above, gene targeting is technically capable of creating genetic changes of a range of sizes, from single base-pair mutations through to insertion of longer sequences, potentially including transgenes. This means that products of gene targeting can be indistinguishable from natural mutations, or can be equivalent to GMOs owing to the insertion of a transgene (see Venn diagram above). Regulating products of gene targeting is therefore challenging, and different countries have taken different approaches, or are reviewing how to do so, as part of broader regulatory reviews of the products of gene editing. [ 50 ] [ 51 ] [ 52 ] Broadly adopted classifications split gene-edited organisms into three classes, SDN-1 to SDN-3, named for the site-directed nucleases (such as CRISPR-Cas) used to generate gene-edited organisms. [ 53 ] [ 16 ] These SDN classifications can guide national regulations as to which classes are considered 'GMOs' and are therefore subject to potentially strict regulation.
Historically, the European Union (EU) has broadly been opposed to genetic modification technology, on the grounds of the precautionary principle . In 2018 the European Court of Justice (ECJ) ruled that gene-edited crops (including gene-targeted crops) should be considered genetically modified [ 55 ] and were therefore subject to the GMO Directive, which places significant regulatory burdens on GMO use. This decision was received negatively by the European scientific community. [ 56 ] In 2021 the European Commission deemed that the current EU legislation governing genetic modification and gene editing techniques (New Genomic Techniques, or NGTs) was 'not fit for purpose' and needed adapting to reflect scientific and technological progress. [ 57 ] In July 2023 the European Commission published a proposal to reduce the regulatory requirements for organisms developed with gene editing that contain genetic changes that could have occurred naturally. [ 58 ]
In bioethics and law , gene theft or DNA theft is the act of acquiring the genetic material of another individual, usually from public places , without his or her permission. The DNA may be harvested from a wide variety of common objects such as discarded cigarettes, used condoms, coffee cups, and hairbrushes. A variety of parties may be interested in collecting someone's genetic material, including the police, political parties, historians, professional sports teams, and personal enemies. [ 1 ] DNA contains a substantial amount of information about an individual and can be used for many purposes, such as establishing paternity , proving genealogical connections or even unmasking private medical conditions. [ 2 ]
Currently, there are few laws punishing the acquisition of another person's genetic material without consent. However, under the Health Insurance Portability and Accountability Act (HIPAA) , one's genetic material cannot be given to one's school or employer, as the genome is part of one's personal health data. Law enforcement, by contrast, can access it without consent when a person is either a victim or a suspect in a criminal investigation. [ 3 ]
Great Britain criminalized the acquisition of DNA without consent in 2006 at the urging of the Human Genetics Commission . [ 4 ] [ 5 ] Australia's legislature debated a two-year jail sentence for such theft in 2008. [ 6 ] [ 7 ] In the United States, eight states currently have criminal or civil prohibitions on such non-consensual appropriation of genetic materials. [ 8 ] In Alaska , Florida , New Jersey , New York and Oregon , individuals caught swiping DNA face fines or short jail sentences. [ 8 ] Lawsuits against "gene snatchers" are permitted in Minnesota , New Hampshire and New Mexico . [ 8 ] In jurisdictions where such non-consensual taking of DNA is illegal, exceptions are generally made for law enforcement.
Many bioethicists believe that such conduct is an unethical invasion of human privacy . [ 8 ] Professor Jacob Appel has warned that criminals may acquire the capability to copy DNA of innocent people and deposit it at crime scenes, endangering the blameless and undermining a key tool of forensic investigation. [ 8 ] In addition, ethical concerns have been raised about law enforcement using the DNA of criminals' family members to catch them. This approach was used in the case of the Golden State Killer in California , who was connected to at least 50 rapes and 12 murders between 1976 and 1986. After the case went cold, investigators used a website that compares the genetic information of those who upload their data and found a relative of the killer. [ 9 ]
However, others defend the appropriation of genetic material on the grounds that doing so may further human knowledge in productive ways. [ 2 ] One particularly controversial case, which received widespread attention in the media, was that of Derrell Teat, a wastewater coordinator who sought to acquire without consent the DNA of a man who was allegedly the last male descendant of her great-great-great-grandfather's brother. [ 2 ] [ 10 ] Another prominent case was a United States paternity suit involving film producer Steve Bing and billionaire investor Kirk Kerkorian . [ 11 ]
Gene therapy is medical technology that aims to produce a therapeutic effect through the manipulation of gene expression or through altering the biological properties of living cells. [ 1 ] [ 2 ] [ 3 ]
The first attempt at modifying human DNA was performed in 1980 by Martin Cline , but the first successful nuclear gene transfer in humans, approved by the National Institutes of Health , was performed in May 1989. [ 4 ] The first therapeutic use of gene transfer, as well as the first direct insertion of human DNA into the nuclear genome, was performed by French Anderson in a trial starting in September 1990. Between 1989 and December 2018, over 2,900 clinical trials were conducted, more than half of them in phase I . [ 5 ] In 2003, Gendicine became the first gene therapy to receive regulatory approval. Since then, further gene therapy drugs have been approved, such as alipogene tiparvovec (2012), Strimvelis (2016), tisagenlecleucel (2017), voretigene neparvovec (2017), patisiran (2018), onasemnogene abeparvovec (2019), idecabtagene vicleucel (2021), and nadofaragene firadenovec , valoctocogene roxaparvovec and etranacogene dezaparvovec (all 2022). Most of these approaches utilize adeno-associated viruses (AAVs) and lentiviruses for performing gene insertions, in vivo and ex vivo , respectively. AAVs are characterized by a stable viral capsid , lower immunogenicity, the ability to transduce both dividing and nondividing cells, the potential to integrate site-specifically, and long-term expression with in vivo treatment. [ 6 ] ASO / siRNA approaches, such as those pursued by Alnylam and Ionis Pharmaceuticals , require non-viral delivery systems and use alternative mechanisms for trafficking to liver cells by way of GalNAc conjugates.
Not all medical procedures that introduce alterations to a patient's genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients. [ 7 ]
Gene therapy was first conceptualized in the 1960s, when the feasibility of adding new genetic functions to mammalian cells began to be researched. Several methods to do so were tested, including injecting genes with a micropipette directly into a living mammalian cell, and exposing cells to a precipitate of DNA that contained the desired genes. Scientists theorized that a virus could also be used as a vehicle, or vector, to deliver new genes into cells.
One of the first scientists to report the successful direct incorporation of functional DNA into a mammalian cell was biochemist Dr. Lorraine Marquardt Kraus (6 September 1922 – 1 July 2016) [ 8 ] at the University of Tennessee Health Science Center in Memphis, Tennessee . In 1961, she managed to genetically alter the hemoglobin of cells from bone marrow taken from a patient with sickle cell anaemia . She did this by incubating the patient's cells in tissue culture with DNA extracted from a donor with normal hemoglobin . In 1968, researchers Theodore Friedmann , Jay Seegmiller, and John Subak-Sharpe at the National Institutes of Health (NIH), Bethesda, in the United States successfully corrected genetic defects associated with Lesch-Nyhan syndrome , a debilitating neurological disease , by adding foreign DNA to cultured cells collected from patients suffering from the disease. [ 9 ]
The first attempt, an unsuccessful one, at gene therapy (as well as the first case of medical transfer of foreign genes into humans not counting organ transplantation ) was performed by geneticist Martin Cline of the University of California, Los Angeles in California , United States on 10 July 1980. [ 10 ] [ 11 ] Cline claimed that one of the genes in his patients was active six months later, though he never published this data or had it verified. [ 12 ]
After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on 14 September 1990, when Ashanthi DeSilva was treated for ADA - SCID . [ 13 ]
The first somatic treatment that produced a permanent genetic change was initiated in 1993. [ 14 ] The goal was to cure malignant brain tumors by using recombinant DNA to transfer a gene making the tumor cells sensitive to a drug that in turn would cause the tumor cells to die. [ 15 ]
The introduced nucleic acid polymers are either translated into proteins , interfere with target gene expression , or possibly correct genetic mutations . The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a " vector ", which carries the molecule inside cells. [ medical citation needed ]
Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers' attention, although as of 2014 it was still largely an experimental technique. [ 16 ] These successes include treatment of the retinal diseases Leber's congenital amaurosis [ 17 ] [ 18 ] [ 19 ] [ 20 ] and choroideremia , [ 21 ] X-linked SCID , [ 22 ] ADA-SCID, [ 23 ] [ 24 ] adrenoleukodystrophy , [ 25 ] chronic lymphocytic leukemia (CLL), [ 26 ] acute lymphocytic leukemia (ALL), [ 27 ] multiple myeloma , [ 28 ] haemophilia , [ 24 ] and Parkinson's disease . [ 29 ] Between 2013 and April 2014, US companies invested over $600 million in the field. [ 30 ]
The first commercial gene therapy, Gendicine , was approved in China in 2003, for the treatment of certain cancers. [ 31 ] In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease , including critical limb ischemia . [ 32 ] In 2012, alipogene tiparvovec , a treatment for a rare inherited disorder , lipoprotein lipase deficiency , became the first treatment to be approved for clinical use in either the European Union or the United States after its endorsement by the European Commission . [ 16 ] [ 33 ]
Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered: replacing or disrupting defective genes. [ 34 ] Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis , haemophilia, muscular dystrophy , thalassemia , and sickle cell anemia . Alipogene tiparvovec treats one such disease, caused by a defect in lipoprotein lipase . [ 33 ]
DNA must be administered, reach the damaged cells, enter the cell and either express or disrupt a protein. [ 35 ] Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome . [ 36 ] [ 37 ] Naked DNA approaches have also been explored, especially in the context of vaccine development. [ 38 ]
Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR . The vector incorporates genes into chromosomes. The expressed nucleases then knock out and replace genes in the chromosome. As of 2014 these approaches involve removing cells from patients, editing a chromosome and returning the transformed cells to patients. [ 39 ]
Gene editing is a potential approach to alter the human genome to treat genetic diseases, [ 40 ] viral diseases, [ 41 ] and cancer. [ 42 ] [ 43 ] As of 2020 these approaches are being studied in clinical trials. [ 44 ] [ 45 ]
In 1986, a meeting at the Institute of Medicine defined gene therapy as the addition or replacement of a gene in a targeted cell type. In the same year, the FDA announced that it had jurisdiction over approving "gene therapy" without defining the term. In 1993 the FDA added a very broad definition covering any treatment that would 'modify or manipulate the expression of genetic material or to alter the biological properties of living cells'. In 2018 this was narrowed to 'products that mediate their effects by transcription or translation of transferred genetic material or by specifically altering host (human) genetic sequences'. [ 46 ]
Writing in 2018 in the Journal of Law and the Biosciences, Sherkow et al. argued, in light of new technology, for a narrower definition of gene therapy than the FDA's: any treatment that intentionally and permanently modifies a cell's genome, with 'genome' including episomes outside the nucleus but excluding changes due to episomes that are lost over time. This definition would also exclude the introduction of cells that did not derive from the patient, but would include ex vivo approaches, and would not depend on the vector used. [ 46 ]
During the COVID-19 pandemic, some academics insisted that the mRNA vaccines for COVID-19 were not gene therapy, in order to prevent the spread of the incorrect claim that the vaccines could alter DNA, while other academics maintained that the vaccines were a gene therapy because they introduce genetic material into a cell. [ 47 ] Fact-checkers , such as Full Fact , [ 48 ] Reuters , [ 49 ] PolitiFact , [ 50 ] and FactCheck.org , [ 51 ] said that calling the vaccines a gene therapy was incorrect. Podcast host Joe Rogan was criticized for calling mRNA vaccines gene therapy, as was British politician Andrew Bridgen , with fact checker Full Fact calling for Bridgen to be removed from the Conservative Party for this and other statements. [ 52 ] [ 53 ]
Gene therapy encompasses many forms of adding different nucleic acids to a cell. Gene augmentation adds a new protein-coding gene to a cell. One form of gene augmentation is gene replacement therapy , a treatment for monogenic recessive disorders in which a single gene is not functional and an additional functional copy is added. For diseases caused by multiple genes or a dominant gene, gene silencing or gene editing approaches are more appropriate, but gene addition, a form of gene augmentation in which a new gene is added, may improve a cell's function without modifying the genes that cause a disorder. [ 54 ] : 117
Gene therapy may be classified into two types by the type of cell it affects: somatic cell and germline gene therapy.
In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete , germ cell , gametocyte , or undifferentiated stem cell . Any such modifications affect the individual patient only, and are not inherited by offspring . Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid ) is used to treat disease. [ 55 ] Over 600 clinical trials utilizing SCGT are underway [ when? ] in the US. Most focus on severe genetic disorders, including immunodeficiencies , haemophilia , thalassaemia , and cystic fibrosis . Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages. [ 56 ] [ needs update ]
In germline gene therapy (GGT), germ cells ( sperm or egg cells ) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism's cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland, and the Netherlands [ 57 ] prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations [ 57 ] and higher risks versus SCGT. [ 58 ] The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general). [ 57 ] [ 59 ] [ 60 ] [ 61 ]
In in vivo gene therapy, a vector (typically, a virus) is introduced to the patient, which then achieves the desired biological effect by passing the genetic material (e.g. for a missing protein) into the patient's cells. In ex vivo gene therapies, such as CAR-T therapeutics, the patient's own cells (autologous) or healthy donor cells (allogeneic) are modified outside the body (hence, ex vivo ) using a vector to express a particular protein, such as a chimeric antigen receptor. [ 62 ]
In vivo gene therapy is seen as simpler, since it does not require the harvesting of mitotic cells. However, ex vivo gene therapies are better tolerated and less associated with severe immune responses. [ 63 ] The death of Jesse Gelsinger in a trial of an adenovirus -vectored treatment for ornithine transcarbamylase deficiency due to a systemic inflammatory reaction led to a temporary halt on gene therapy trials across the United States. [ 64 ] As of 2021, in vivo and ex vivo therapeutics are both seen as safe. [ 65 ]
The concept of gene therapy is to fix a genetic problem at its source. If, for instance, a mutation in a certain gene causes the production of a dysfunctional protein resulting (usually recessively) in an inherited disease, gene therapy could be used to deliver a copy of this gene that does not contain the deleterious mutation and thereby produces a functional protein. This strategy is referred to as gene replacement therapy and could be employed to treat inherited retinal diseases. [ 17 ] [ 66 ]
While the concept of gene replacement therapy is mostly suitable for recessive diseases, novel strategies have been suggested that are capable of also treating conditions with a dominant pattern of inheritance.
In vivo, CRISPR-based gene editing systems have been used in studies with mice to treat cancer and have been effective at reducing tumors. [ 72 ] : 18 In vitro, the CRISPR system has been used to treat HPV-associated cancer tumors. Adeno-associated virus and lentivirus -based vectors have been used to introduce the genes encoding the CRISPR system. [ 72 ] : 6
The delivery of DNA into cells can be accomplished by multiple methods . The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods). [ 73 ]
In order to replicate , viruses introduce their genetic material into the host cell, tricking the host's cellular machinery into using it as blueprints for viral proteins. [ 54 ] : 39 Retroviruses go a stage further by having their genetic material copied into the nuclear genome of the host cell. Scientists exploit this by substituting part of a virus's genetic material with therapeutic DNA or RNA. [ 54 ] : 40 [ 74 ] Like the genetic material (DNA or RNA) in viruses, therapeutic genetic material can be designed to simply serve as a temporary blueprint that degrades naturally, as in non-integrative vectors , or to enter the host's nucleus and become a permanent part of the host's nuclear DNA in infected cells. [ 54 ] : 50
A number of viruses have been used for human gene therapy, including viruses such as lentivirus , adenoviruses , herpes simplex , vaccinia , and adeno-associated virus . [ 5 ]
Adenovirus viral vectors (Ad) temporarily modify a cell's genetic expression with genetic material that is not integrated into the host cell's DNA. [ 75 ] : 5 As of 2017, such vectors were used in 20% of trials for gene therapy. [ 74 ] : 10 Adenovirus vectors are mostly used in cancer treatments and novel genetic vaccines such as the Ebola vaccine , vaccines used in clinical trials for HIV and SARS-CoV-2 , or cancer vaccines . [ 75 ] : 5
Lentiviral vectors, based on lentivirus , a retrovirus , can modify a cell's nuclear genome to permanently express a gene, although vectors can be modified to prevent integration. [ 54 ] : 40,50 Retroviruses were used in 18% of trials before 2018. [ 74 ] : 10 Libmeldy is an ex vivo stem cell treatment for metachromatic leukodystrophy which uses a lentiviral vector and was approved by the European Medicines Agency in 2020. [ 76 ]
Adeno-associated virus (AAV) is a virus that is incapable of replicating in a cell unless the cell is also infected by another virus, a helper virus; adenoviruses and the herpes viruses act as helper viruses for AAV. AAV persists within the cell outside of the nuclear genome for an extended period of time through the formation of concatemers , mostly organized as episomes . [ 77 ] : 4 Genetic material from AAV vectors is integrated into the host cell's nuclear genome at a low frequency, likely mediated by the DNA-modifying enzymes of the host cell. [ 78 ] : 2647 Animal models suggest that integration of AAV genetic material into the host cell's nuclear genome may cause hepatocellular carcinoma , a form of liver cancer . [ 78 ] Several investigational AAV agents have been explored for the treatment of wet age-related macular degeneration, by both intravitreal and subretinal approaches, as a potential application of AAV gene therapy for human disease. [ 79 ] [ 80 ]
Non-viral vectors for gene therapy [ 81 ] present certain advantages over viral methods, such as large scale production and low host immunogenicity . However, non-viral methods initially produced lower levels of transfection and gene expression , and thus lower therapeutic efficacy. Newer technologies offer promise of solving these problems, with the advent of increased cell-specific targeting and subcellular trafficking control.
Methods for non-viral gene therapy include the injection of naked DNA, electroporation , the gene gun , sonoporation , magnetofection , the use of oligonucleotides , lipoplexes, dendrimers, and inorganic nanoparticles. These therapeutics can be administered directly or through scaffold enrichment . [ 82 ] [ 83 ]
More recent approaches, such as those pursued by companies like Ligandal , offer the possibility of creating cell-specific targeting technologies for a variety of gene therapy modalities, including RNA, DNA and gene editing tools such as CRISPR. Other companies, such as Arbutus Biopharma and Arcturus Therapeutics , offer non-viral, non-cell-targeted approaches that mainly exhibit liver trophism. In recent years, startups such as Sixfold Bio , GenEdit , and Spotlight Therapeutics have begun to address the non-viral gene delivery problem. Non-viral techniques offer the possibility of repeat dosing and greater tailorability of genetic payloads, and may in the future displace viral delivery systems.
Companies such as Editas Medicine , Intellia Therapeutics , CRISPR Therapeutics , Casebia , Cellectis , Precision Biosciences , bluebird bio , Excision BioTherapeutics , and Sangamo have developed non-viral gene editing techniques, though they frequently still use viruses to deliver gene insertion material following genomic cleavage by guided nucleases . These companies focus on gene editing and still face major delivery hurdles.
BioNTech , Moderna Therapeutics and CureVac focus on delivery of mRNA payloads, which necessarily require non-viral delivery.
Alnylam , Dicerna Pharmaceuticals , and Ionis Pharmaceuticals focus on delivery of siRNA and antisense oligonucleotides for gene suppression, which likewise necessitate non-viral delivery systems.
In academic contexts, a number of laboratories are working on delivery of PEGylated particles, which form serum protein coronas and chiefly exhibit LDL receptor mediated uptake in cells in vivo . [ 84 ]
There have been attempts to treat cancer using gene therapy. As of 2017, 65% of gene therapy trials were for cancer treatment. [ 74 ] : 7
Adenovirus vectors are useful for some cancer gene therapies because adenovirus can transiently insert genetic material into a cell without permanently altering the cell's nuclear genome. These vectors can be used to make cancer cells present antigens that provoke an immune response, or to hinder angiogenesis by expressing certain proteins. [ 85 ] : 5 An adenovirus vector is used in the commercial products Gendicine and Oncorine . [ 85 ] : 10 Another commercial product, Rexin-G , uses a retrovirus-based vector and selectively binds to receptors that are more highly expressed in tumors. [ 85 ] : 10
One approach, suicide gene therapy , works by introducing genes encoding enzymes that will cause a cancer cell to die. Another approach is the use of oncolytic viruses , such as Oncorine, [ 86 ] : 165 which selectively reproduce in cancerous cells while leaving other cells unaffected. [ 87 ] : 6 [ 88 ] : 280
mRNA has been suggested as a non-viral vector for cancer gene therapy that would temporarily change a cancerous cell's function to create antigens or kill the cancerous cells, and several trials have been conducted. [ 89 ]
Afamitresgene autoleucel , sold under the brand name Tecelra, is an autologous T cell immunotherapy used for the treatment of synovial sarcoma . It is a T cell receptor (TCR) gene therapy. [ 90 ] It is the first FDA-approved engineered cell therapy for a solid tumor. [ 91 ] It uses a self-inactivating lentiviral vector to express a T-cell receptor specific for MAGE-A4, a melanoma-associated antigen. [ medical citation needed ]
Gene therapy approaches to replace a faulty gene with a healthy gene have been proposed and are being studied for treating some genetic diseases. As of 2017, 11.1% of gene therapy clinical trials targeted monogenic diseases. [ 74 ] : 9
Diseases caused by autosomal recessive mutations, such as sickle cell disease , in which a person's normal phenotype or cell function may be restored by a normal copy of the mutated gene, may be good candidates for gene therapy treatment. [ 92 ] [ 93 ] The risks and benefits of gene therapy for sickle cell disease are not known. [ 93 ]
Gene therapy has been used in the eye , which is especially suitable for adeno-associated virus vectors. Voretigene neparvovec is an approved gene therapy for the treatment of Leber congenital amaurosis . [ 94 ] : 1354 Alipogene tiparvovec , a treatment for pancreatitis caused by a genetic condition, and Zolgensma , for the treatment of spinal muscular atrophy , both use an adeno-associated virus vector. [ 78 ] : 2647
As of 2017, 7% of genetic therapy trials targeted infectious diseases. 69.2% of trials targeted HIV , 11% hepatitis B or C, and 7.1% malaria . [ 74 ]
Some genetic therapies have been approved by the U.S. Food and Drug Administration (FDA), the European Medicines Agency (EMA), and for use in Russia and China.
Several problems in gene therapy remain unsolved.
Three patients' deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger , who died in 1999, because of immune rejection response. [ 131 ] [ 132 ] One X-SCID patient died of leukemia in 2003. [ 13 ] In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy. [ 133 ]
Regulations covering genetic modification are part of general guidelines about human-involved biomedical research. [ 134 ] There are no international treaties which are legally binding in this area, but there are recommendations for national laws from various bodies. [ 134 ]
The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association 's General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research, initiated by the Human Genome Organization (HUGO) in 2001, provides a legal baseline for all countries. HUGO's document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research. [ 135 ]
In the United States, no federal legislation lays out protocols or restrictions specific to human genetic engineering. The subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services , the FDA and the NIH's Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application (commonly the case for somatic human genetic engineering) must obey international and federal guidelines for the protection of human subjects. [ 136 ]
NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects. [ 137 ]
An NIH advisory committee published a set of guidelines on gene manipulation. [ 138 ] The guidelines discuss laboratory safety as well as human test subjects and various types of experiments that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1, which describes the required review processes and other considerations when seeking approval to begin clinical research involving genetic transfer into a human patient. [ 139 ] The protocol for a gene therapy clinical trial must be approved by the NIH's Recombinant DNA Advisory Committee before any clinical trial begins; this differs from any other kind of clinical trial. [ 138 ]
As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials , must be reviewed and approved by the FDA and an Institutional Review Board . [ 140 ] [ 141 ]
Athletes may adopt gene therapy technologies to improve their performance. [ 142 ] Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports. [ 143 ]
Genetic engineering could be used to cure diseases, but also to change physical appearance, metabolism , and even improve physical capabilities and mental faculties such as memory and intelligence . Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases. [ 144 ] [ 145 ] [ 146 ] For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery. [ 147 ] [ 148 ] Another theorist claims that moral concerns limit but do not prohibit germline engineering. [ 149 ]
A 2020 issue of the journal Bioethics was devoted to moral issues surrounding germline genetic engineering in people. [ 150 ]
Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association 's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics." [ 151 ]
As early as 1990, scientists opposed attempts to modify the human germline using these new tools, [ 152 ] and such concerns have continued as the technology progressed. [ 153 ] [ 154 ] With the advent of new techniques like CRISPR , in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited. [ 155 ] [ 156 ] [ 157 ] [ 158 ] In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR. [ 159 ] [ 160 ] A committee of the American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017 [ 161 ] [ 162 ] once answers have been found to safety and efficiency problems, "but only for serious conditions under stringent oversight." [ 163 ]
In 1972, Friedmann and Roblin authored a paper in Science titled "Gene therapy for human genetic disease?". [ 164 ] Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those with genetic defects. [ 165 ]
In 1984, a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes. [ 166 ]
The first approved gene therapy clinical research in the US took place on 14 September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson . [ 167 ] Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with adenosine deaminase deficiency (ADA-SCID), a severe immune system deficiency. The defective gene in the patient's blood cells was replaced by the functional variant, and Ashanti's immune system was partially restored by the therapy. Production of the missing enzyme was temporarily stimulated, but new cells with functional genes were not generated. She could lead a normal life only with regular injections performed every two months. The effects were successful, but temporary. [ 168 ]
Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993). [ 169 ] The treatment of glioblastoma multiforme, a malignant brain tumor that is invariably fatal, was performed using a vector expressing antisense IGF-I RNA (clinical trial approved by NIH protocol no. 1602 on 24 November 1993, [ 170 ] and by the FDA in 1994). This therapy also marks the beginning of cancer immunogene therapy, a treatment that has proved effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena.
In 1992, Claudio Bordignon , working at the Vita-Salute San Raffaele University , performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases . [ 171 ] In 2002, this work led to the publication of the first successful gene therapy treatment for ADA-SCID. The success of a multi-center trial for treating children with SCID ( severe combined immune deficiency or "bubble boy" disease) from 2000 to 2002 was questioned when two of the ten children treated at the trial's Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy, and Germany. [ 172 ]
In 1993, Andrew Gobea was born with SCID following prenatal genetic screening . Blood was removed from his mother's placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew's blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed. [ 173 ]
In 1996, Luigi Naldini and Didier Trono developed a new class of gene therapy vectors based on HIV that are capable of infecting non-dividing cells and have since been widely used in clinical and research settings, pioneering lentiviral vectors in gene therapy . [ 174 ]
Jesse Gelsinger 's death in 1999 impeded gene therapy research in the US. [ 175 ] [ 176 ] As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices. [ 177 ]
The modified gene therapy strategy of antisense IGF-I RNA (NIH no. 1602) [ 170 ] using an antisense/triple-helix anti-IGF-I approach was registered in 2002 in the Wiley gene therapy clinical trial database (nos. 635 and 636). The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma and cancers of the liver, colon, prostate, uterus, and ovary (Collaborative NATO Science Programme on Gene Therapy, USA, France, Poland, no. LST 980517, conducted by J. Trojan) (Trojan et al., 2012). This anti-gene antisense/triple-helix therapy has proven efficient due to a mechanism that simultaneously stops IGF-I expression at the translation and transcription levels, strengthening anti-tumor immune and apoptotic phenomena.
Sickle cell disease has been successfully treated in mice. [ 178 ] In mice – which have essentially the same defect that causes human cases – a viral vector was used to induce production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means to increase therapeutic HbF production. [ 179 ]
A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia , cystic fibrosis and some cancers. [ 180 ]
Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane . [ 181 ]
In 2003, a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol , which unlike viral vectors, are small enough to cross the blood–brain barrier . [ 182 ]
Short pieces of double-stranded RNA ( short interfering RNAs or siRNAs ) are used by cells to degrade RNA of a particular sequence. If an siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced. [ 183 ]
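The core of this mechanism is simple base-pairing: the antisense guide strand of an siRNA is the reverse complement of the targeted mRNA region. The sketch below is illustrative only; the sequence is hypothetical and not from any cited study:

```python
# Minimal sketch of the sequence-matching principle behind siRNA design:
# the guide (antisense) strand is the reverse complement of the target
# mRNA region. Sequences are hypothetical, for illustration only.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide(target_mrna: str) -> str:
    """Return the antisense guide strand for a target mRNA region."""
    return "".join(COMPLEMENT[base] for base in reversed(target_mrna))

target = "AUGGCUAGCUUAGGCAUCGAA"  # hypothetical 21-nt region of a faulty gene's mRNA
print(sirna_guide(target))       # UUCGAUGCCUAAGCUAGCCAU, pairs base-for-base with the target
```

In the cell, this pairing lets the RNA-induced silencing complex recognize and cleave the matching mRNA, so the faulty gene's protein is never made.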
Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus . In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma . [ 31 ]
In March, researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease , a disease which affects myeloid cells and damages the immune system . The study is the first to show that gene therapy can treat the myeloid system. [ 184 ]
In May, a team reported a way to prevent the immune system from rejecting a newly delivered gene. [ 185 ] Similar to organ transplantation , gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs . This natural function selectively obscured the therapeutic gene in immune system cells and protected it from discovery. Mice given the gene containing an immune-cell microRNA target sequence did not reject the gene.
In August, scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells. [ 186 ]
In November, researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope . In a phase I clinical trial , five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens . This was the first evaluation of a lentiviral vector administered in a US human clinical trial. [ 187 ] [ 188 ]
In May 2007, researchers announced the first gene therapy trial for inherited retinal disease . The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007. [ 189 ]
Leber's congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April. [ 17 ] Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May, two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects. [ 17 ] [ 18 ] [ 19 ] [ 20 ]
In September researchers were able to give trichromatic vision to squirrel monkeys . [ 190 ] In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1 , the gene that is mutated in the disorder. [ 191 ]
An April paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young dogs. The therapy was less efficient in older dogs. [ 192 ]
In September it was announced that an 18-year-old male patient in France with beta thalassemia major had been successfully treated. [ 193 ] Beta thalassemia major is an inherited blood disease in which the beta chain of haemoglobin is missing and patients are dependent on regular lifelong blood transfusions . [ 194 ] The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007. [ 195 ] The patient's haemoglobin levels were stable at 9 to 10 g/dL. About a third of the haemoglobin contained the form introduced by the viral vector, and blood transfusions were no longer needed. [ 195 ] [ 196 ] Further clinical trials were planned. [ 197 ] Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor. [ 196 ]
Cancer immunogene therapy using a modified antigene, antisense/triple helix approach was introduced in South America in 2010/11 at La Sabana University, Bogotá (Ethical Committee 14 December 2010, no. P-004-10). Considering the ethical aspects of gene diagnosis and gene therapy targeting IGF-I, tumors expressing IGF-I – lung and epidermal cancers – were treated (Trojan et al. 2016). [ 198 ] [ 199 ]
In 2007 and 2008, a man ( Timothy Ray Brown ) was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation , allogeneic bone marrow transplantation , allotransplantation ) from a donor homozygous for the delta-32 mutation, which disables the CCR5 receptor. This cure was accepted by the medical community in 2011. [ 200 ] It required complete ablation of existing bone marrow , which is very debilitating. [ 201 ]
In August two of three subjects of a pilot study were confirmed to have been cured of chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein. [ 26 ] In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free. [ 202 ]
Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction . [ 203 ] [ 204 ]
In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease , including critical limb ischemia ; it delivers the gene encoding VEGF . [ 32 ] Neovasculgen is a plasmid carrying the CMV promoter and the 165-amino-acid form of VEGF . [ 205 ] [ 206 ]
In July, the FDA approved a Phase I clinical trial in the US on 10 participants with thalassemia major. [ 207 ] The study was expected to continue until 2015. [ 197 ]
In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used Alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency , which can cause severe pancreatitis . [ 208 ] The recommendation was endorsed by the European Commission in November 2012, [ 16 ] [ 33 ] [ 209 ] [ 210 ] and commercial rollout began in late 2014. [ 211 ] Alipogene tiparvovec was expected to cost around $1.6 million per treatment in 2012, [ 212 ] revised to $1 million in 2015, [ 213 ] making it the most expensive medicine in the world at the time. [ 214 ] As of 2016 [update] , only the patients treated in clinical trials and a patient who paid the full price for treatment have received the drug. [ 215 ]
In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission "or very close to it" three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1 , which exist only on cancerous myeloma cells. [ 28 ]
In March researchers reported that three of five adult subjects with acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells bearing the CD19 protein on their surface, i.e. all B cells , cancerous or not. The researchers believed that the patients' immune systems would make normal T cells and B cells after a couple of months. They were also given bone marrow. One patient relapsed and died and one died of a blood clot unrelated to the disease. [ 27 ]
Following encouraging Phase I trials, in April, researchers announced they were starting Phase II clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients [ 216 ] at several hospitals to combat heart disease . The therapy was designed to increase the levels of SERCA 2, a protein in heart muscles, improving muscle function. [ 217 ] The U.S. Food and Drug Administration (FDA) granted this a breakthrough therapy designation to accelerate the trial and approval process. [ 218 ] In 2016, it was reported that no improvement was found from the CUPID 2 trial. [ 219 ]
In July researchers reported promising results for six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene, with follow-up after 7–32 months. Three of the children had metachromatic leukodystrophy , which causes children to lose cognitive and motor skills. [ 220 ] The other children had Wiskott–Aldrich syndrome , which leaves them open to infection, autoimmune diseases, and cancer. [ 221 ] Follow-up trials with gene therapy on another six children with Wiskott–Aldrich syndrome were also reported as promising. [ 222 ] [ 223 ]
In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress. [ 24 ] In 2014, a further 18 children with ADA-SCID were cured by gene therapy. [ 224 ] ADA-SCID children have no functioning immune system and are sometimes known as "bubble children". [ 24 ]
Also in October researchers reported that they had treated six people with haemophilia in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor . [ 24 ] [ 225 ]
In January researchers reported that six choroideremia patients had been treated with an adeno-associated virus carrying a copy of REP1 . Over a six-month to two-year period all had improved their sight. [ 66 ] [ 226 ] By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting. [ 21 ] Choroideremia is an inherited genetic eye disease that leads to loss of sight and had no approved treatment.
In March researchers reported that 12 HIV patients had been treated since 2009 in a trial with immune cells genetically engineered to carry a rare mutation ( CCR5 deficiency) known to protect against HIV, with promising results. [ 227 ] [ 228 ]
Clinical trials of gene therapy for sickle cell disease were started in 2014. [ 229 ] [ 230 ]
In February LentiGlobin BB305 , a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia , gained FDA "breakthrough" status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease. [ 231 ]
In March researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV ; the monkeys' cells produced the antibody , which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests of antibodies against Ebola, malaria, influenza, and hepatitis were underway. [ 232 ] [ 233 ]
In March, scientists, including an inventor of CRISPR , Jennifer Doudna , urged a worldwide moratorium on germline gene therapy, writing "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications "are discussed among scientific and governmental organizations". [ 155 ] [ 156 ] [ 157 ] [ 158 ]
In December, scientists of major world academies called for a moratorium on inheritable human genome edits, including those related to CRISPR-Cas9 technologies, [ 234 ] while recommending that basic research, including embryo gene editing, should continue. [ 235 ]
Researchers successfully treated a boy with epidermolysis bullosa using skin grafts grown from his own skin cells, genetically altered to repair the mutation that caused his disease. [ 236 ]
In November, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T cells genetically engineered using TALEN to attack cancer cells. One year after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]). [ 237 ] Children with highly aggressive ALL normally have a very poor prognosis and Layla's disease had been regarded as terminal before the treatment. [ 238 ] [ 239 ]
In April the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis , [ 240 ] [ 241 ] and the European Commission approved it in June. [ 242 ] The treatment is for children born with adenosine deaminase deficiency who have no functioning immune system. It was the second gene therapy treatment to be approved in Europe. [ 243 ]
In October, Chinese scientists reported they had started a trial to genetically modify T cells from 10 adult patients with lung cancer and reinject the modified T cells back into their bodies to attack the cancer cells. The T cells had the PD-1 protein (which stops or slows the immune response) removed using CRISPR-Cas9. [ 244 ] [ 245 ]
A 2016 Cochrane systematic review looking at data from four trials on topical cystic fibrosis transmembrane conductance regulator (CFTR) gene therapy does not support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections. One of the four trials did find weak evidence that liposome-based CFTR gene transfer therapy may lead to a small respiratory improvement for people with CF. This weak evidence is not enough to make a clinical recommendation for routine CFTR gene therapy. [ 246 ]
In February Kite Pharma announced results from a clinical trial of CAR-T cells in around a hundred people with advanced non-Hodgkin lymphoma . [ 247 ]
In March, French scientists reported on clinical research of gene therapy to treat sickle cell disease . [ 248 ]
In August, the FDA approved tisagenlecleucel for acute lymphoblastic leukemia . [ 249 ] Tisagenlecleucel is an adoptive cell transfer therapy for B-cell acute lymphoblastic leukemia; T cells from a person with cancer are removed, genetically engineered to make a receptor (a chimeric antigen receptor, or "CAR") that reacts to the cancer, and are administered back to the person. The T cells are engineered to target a protein called CD19 that is common on B cells. This is the first form of gene therapy to be approved in the United States. In October, a similar therapy called axicabtagene ciloleucel was approved for non-Hodgkin lymphoma. [ 250 ]
In October, biophysicist and biohacker Josiah Zayner claimed to have performed the very first in-vivo human genome editing in the form of a self-administered therapy. [ 251 ] [ 252 ]
On 13 November, medical scientists working with Sangamo Therapeutics , headquartered in Richmond, California , announced the first ever in-body human gene editing therapy . [ 253 ] [ 254 ] The treatment, designed to permanently insert a healthy version of the flawed gene that causes Hunter syndrome , was given to 44-year-old Brian Madeux and is part of the world's first study to permanently edit DNA inside the human body. [ 255 ] The success of the gene insertion was later confirmed. [ 256 ] [ 257 ] Clinical trials by Sangamo involving gene editing using zinc finger nuclease (ZFN) are ongoing. [ 258 ]
In December the results of using an adeno-associated virus carrying blood clotting factor VIII to treat nine haemophilia A patients were published. Six of the seven patients on the high-dose regimen increased their levels of blood clotting factor VIII to normal. The low- and medium-dose regimens had no effect on the patients' blood clotting levels. [ 259 ] [ 260 ]
In December, the FDA approved voretigene neparvovec , the first in vivo gene therapy, for the treatment of blindness due to Leber's congenital amaurosis . [ 261 ] The price of this treatment is US$850,000 for both eyes. [ 262 ] [ 263 ]
In May, the FDA approved onasemnogene abeparvovec (Zolgensma) for treating spinal muscular atrophy in children under two years of age. The list price of Zolgensma was set at US$2.125 million per dose, making it the most expensive drug ever. [ 264 ]
In May, the EMA approved betibeglogene autotemcel (Zynteglo) for treating beta thalassemia for people twelve years of age and older. [ 265 ] [ 266 ]
In July, Allergan and Editas Medicine announced phase I/II clinical trial of AGN-151587 for the treatment of Leber congenital amaurosis 10. [ 267 ] This is one of the first studies of a CRISPR -based in vivo human gene editing therapy , where the editing takes place inside the human body. [ 268 ] The first injection of the CRISPR-Cas System was confirmed in March 2020. [ 269 ]
Exagamglogene autotemcel , a CRISPR -based human gene editing therapy , was used for sickle cell and thalassemia in clinical trials. [ 270 ]
In May, onasemnogene abeparvovec (Zolgensma) was approved by the European Union for the treatment of spinal muscular atrophy in people who either have clinical symptoms of SMA type 1 or who have no more than three copies of the SMN2 gene , irrespective of body weight or age. [ 271 ]
In August, Audentes Therapeutics reported that three out of 17 children with X-linked myotubular myopathy participating in the clinical trial of AT132, an AAV8-based gene therapy treatment, had died. It was suggested that the treatment, whose dosage is based on body weight, exerts a disproportionately toxic effect on heavier patients, since the three patients who died were heavier than the others. [ 272 ] [ 273 ] The trial was put on clinical hold. [ 274 ]
On 15 October, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorisation for the medicinal product Libmeldy (autologous CD34+ cell enriched population that contains hematopoietic stem and progenitor cells transduced ex vivo using a lentiviral vector encoding the human arylsulfatase A gene), a gene therapy for the treatment of children with the "late infantile" (LI) or "early juvenile" (EJ) forms of metachromatic leukodystrophy (MLD). [ 275 ] The active substance of Libmeldy consists of the child's own stem cells which have been modified to contain working copies of the ARSA gene. [ 275 ] When the modified cells are injected back into the patient as a one-time infusion, the cells are expected to start producing the ARSA enzyme that breaks down the build-up of sulfatides in the nerve cells and other cells of the patient's body. [ 276 ] Libmeldy was approved for medical use in the EU in December 2020. [ 277 ]
On 15 October, Lysogene, a French biotechnology company, reported the death of a patient who had received LYS-SAF302, an experimental gene therapy treatment for mucopolysaccharidosis type IIIA (Sanfilippo syndrome type A). [ 278 ]
In May, a new method using an altered version of HIV as a lentiviral vector was reported in the treatment of 50 children with ADA-SCID, obtaining positive results in 48 of them. [ 279 ] [ 280 ] [ 281 ] This method is expected to be safer than the retroviral vectors commonly used in earlier SCID studies, in which the development of leukemia was usually observed, [ 282 ] and it had already been used in 2019, though in a smaller group with X-SCID. [ 283 ] [ 284 ] [ 285 ] [ 286 ]
In June, a clinical trial on six patients with transthyretin amyloidosis reported a reduction in the serum concentration of misfolded transthyretin (TTR) protein through CRISPR -based inactivation of the TTR gene in liver cells, observing mean reductions of 52% and 87% in the lower- and higher-dose groups respectively. This was done in vivo, without taking cells out of the patient to edit them and reinfuse them later. [ 287 ] [ 288 ] [ 289 ]
In July, the results of a small phase I gene therapy study were published, reporting restoration of dopamine production in seven patients between 4 and 9 years old affected by aromatic L-amino acid decarboxylase deficiency (AADC deficiency). [ 290 ] [ 291 ] [ 292 ]
In February, the first ever gene therapy for Tay–Sachs disease was announced. It uses an adeno-associated virus to deliver a correct copy of the HEXA gene, whose mutation causes the disease, to brain cells. Only two children were part of a compassionate-use trial; both showed improvements over the natural course of the disease and no vector-related adverse events . [ 293 ] [ 294 ] [ 295 ]
In May, eladocagene exuparvovec was recommended for approval in the European Union. [ 296 ] [ 297 ]
In July, results were announced for a gene therapy candidate for haemophilia B called FLT180. It works by using an adeno-associated virus (AAV) to restore production of the clotting factor IX (FIX) protein; normal levels of the protein were observed with low doses of the therapy, but immunosuppression was needed to decrease the risk of vector-related immune responses. [ 298 ] [ 299 ] [ 300 ]
In December, a 13-year-old girl who had been diagnosed with T-cell acute lymphoblastic leukaemia was successfully treated at Great Ormond Street Hospital (GOSH), in the first documented use of therapeutic gene editing for this purpose, after six months of an experimental treatment when all other treatments had failed. The procedure included reprogramming healthy donor T-cells to destroy the cancerous T-cells, ridding her of leukaemia, and then rebuilding her immune system using healthy immune cells. [ 301 ] The GOSH team used base editing and had previously treated a case of acute lymphoblastic leukaemia in 2015 using TALENs . [ 239 ]
In May 2023, the FDA approved beremagene geperpavec (Vyjuvek) for the treatment of wounds in people with dystrophic epidermolysis bullosa (DEB). It is applied as a topical gel that delivers a herpes-simplex virus type 1 (HSV-1) vector encoding the collagen type VII alpha 1 chain ( COL7A1 ) gene, which is dysfunctional in those affected by DEB . One trial found that 65% of Vyjuvek-treated wounds had completely closed at 24 weeks, compared with only 26% of placebo-treated wounds. [ 97 ] Its use as an eyedrop has also been reported, with good results, in a patient with DEB who had vision loss due to widespread blistering. [ 302 ]
In June 2023, the FDA gave accelerated approval to Elevidys for Duchenne muscular dystrophy (DMD), only for boys 4 to 5 years old, as they are more likely to benefit from the therapy. The treatment consists of a one-time intravenous infusion of a virus (AAV rh74 vector) that delivers a functioning "microdystrophin" gene (138 kDa ) into muscle cells, to act in place of the normal dystrophin (427 kDa), which is mutated in this disease. [ 102 ]
In July 2023, researchers reported the development of a new method for modulating gene expression using direct electric current. [ 303 ]
In December 2023, two gene therapies were approved for sickle cell disease , exagamglogene autotemcel [ 105 ] and lovotibeglogene autotemcel . [ 108 ]
In November 2024, FDA granted accelerated approval for eladocagene exuparvovec -tneq (Kebilidi, PTC Therapeutics ), a direct-to-brain gene therapy for aromatic L -amino acid decarboxylase deficiency . [ 304 ] It uses a recombinant adeno-associated virus serotype 2 (rAAV2) to deliver a functioning DOPA decarboxylase (DDC) gene directly into the putamen , increasing the AADC enzyme and restoring dopamine production. It is administered through a stereotactic surgical procedure. [ 103 ] | https://en.wikipedia.org/wiki/Gene_therapy |
Gene therapy for blood diseases is a novel field of research investigating ways in which components of blood can be genetically modified to treat hematologic diseases . [ 1 ]
CAR T-cell therapy is a type of personalized cancer immunotherapy designed to strengthen the patient's own immune system to better fight cancer. The process begins by extracting T-cells , a type of immune cell, from an individual patient's blood. The surface of cancer cells contains unique markers called antigens . The patient's T-cells are genetically modified in the laboratory to carry chimeric antigen receptors (CARs), which are designed to recognize specific cancer antigens and bind to them, allowing the T-cells to target and attack the cancer cells. The genetically modified T-cells are then administered back to the patient as a treatment.
Leukemia is a group of blood cancers commonly found in children younger than 15 and adults older than 55. [ 3 ] In 2017, tisagenlecleucel (Kymriah) , [ 2 ] the first CAR-T cell therapy approved by the FDA , became available to anyone up to the age of 25 with acute lymphoblastic leukemia (ALL). As of 2022, a total of six CAR-T therapies had been approved by the FDA, all of which target blood cancers. [ 4 ] These CAR-T therapies have shown high efficacy in eradicating leukemia, including in patients with advanced-stage, treatment-resistant ( refractory ) or recurrent ( relapsed ) disease. [ 5 ] They also have a high remission rate in comparison with other traditional cancer treatments . [ 2 ]
Patients in the U.S. suffering from sickle cell disease can now receive targeted gene therapies using hematopoietic stem cells . [ 6 ] Hematopoietic stem cells are stem cells which differentiate to give rise to red blood cells , white blood cells and platelets . [ 7 ] These therapies involve removing hematopoietic stem cells from the patient, making specific edits to their genome to reverse the effects of sickle cell disease, and re-administering the cells to the patient. The edited hematopoietic stem cells are then able to produce red blood cells with the factors that promote proper red blood cell shape, reducing the effects of sickle cell disease.
Beta thalassemia is a heritable disorder characterized by the inability to make beta globin protein , and in turn reduced functioning of hemoglobin (of which beta globin is a component). [ 8 ] In December 2023, the European Medicines Agency recommended approval of a cell-based gene therapy that works through the CRISPR/Cas9 system. The therapy, known as Casgevy, [ 9 ] works by editing the gene for BCL11A , a protein that suppresses production of fetal hemoglobin. People with beta thalassemia do not make enough adult hemoglobin. Casgevy uses precise gene editing of stem cells to reduce the activity of BCL11A ; with BCL11A activity reduced, fetal hemoglobin (HbF) genes are turned back on, allowing the cells to produce enough hemoglobin. Typically, the body stops making fetal hemoglobin around 6 months of age and starts making adult hemoglobin. [ 10 ] The two serve similar functions; fetal hemoglobin has a higher binding affinity for oxygen than adult hemoglobin, but both are functional at transporting oxygen in the body. [ 10 ] Stem cells edited by Casgevy are then transfused back into the body, where they can create more HbF and therefore make more functional red blood cells. With this therapy, patients who would regularly need blood transfusions can produce enough hemoglobin for themselves. [ 11 ]
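The affinity difference between fetal and adult hemoglobin can be made concrete with the classic Hill model of oxygen binding. The sketch below is illustrative only and not from the cited sources; the P50 values (~19 mmHg for HbF, ~27 mmHg for HbA) and Hill coefficient (~2.7) are approximate textbook figures:

```python
# Illustrative Hill-equation comparison of fetal (HbF) vs adult (HbA)
# hemoglobin oxygen saturation: S = pO2^n / (P50^n + pO2^n).
# P50 and n values are approximate textbook figures, not trial data.

def saturation(po2_mmhg: float, p50: float, n: float = 2.7) -> float:
    """Fractional O2 saturation at a given partial pressure (Hill model)."""
    return po2_mmhg**n / (p50**n + po2_mmhg**n)

for po2 in (20, 40, 60):
    hbf = saturation(po2, p50=19.0)   # fetal hemoglobin, higher affinity
    hba = saturation(po2, p50=27.0)   # adult hemoglobin
    print(f"pO2 {po2:>2} mmHg: HbF {hbf:.2f} vs HbA {hba:.2f}")
# HbF is more saturated at every pO2, which is why reactivating HbF can
# compensate for deficient adult hemoglobin while still delivering oxygen.
```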
Human immunodeficiency virus (HIV) is a virus that, once contracted, attacks cells that are necessary to fight off infections . It can be transmitted in many different ways, including through sexual contact, blood contamination, the sharing of needles, or from mother to infant. [ 12 ] If left untreated, HIV can result in acquired immunodeficiency syndrome (AIDS). [ 13 ] HIV weakens an individual's immune system , leading to increased risk of fatal infections and cancers. [ 14 ] In 2023, around 40 million people globally were living with HIV. [ 15 ] Although options are available for the treatment and management of HIV (e.g., highly active antiretroviral therapy ; HAART), they come with limitations, including the need for indefinite daily treatment. [ 16 ] Attempts to generate a long-term HIV-resistant immune system have been promising, as shown by a case report of a patient who developed acute myeloid leukemia after HIV infection. [ 17 ] Researchers had previously found a version of a gene (an allele ) that confers resistance to HIV. They therefore found a donor who carried two copies of this allele ( homozygous ) and extracted their stem cells in an attempt to produce HIV resistance in the patient with acute myeloid leukemia. After stem cell transplantation from this donor, the patient tested and remained HIV-negative at 20 months post-transplantation and was able to discontinue antiviral therapies. [ 17 ] | https://en.wikipedia.org/wiki/Gene_therapy_for_blood_diseases
Gene therapy for color blindness is an experimental gene therapy of the human retina aiming to grant typical trichromatic color vision to individuals with congenital color blindness by introducing typical alleles for opsin genes. Animal testing for gene therapy began in 2007 with a 2009 breakthrough in squirrel monkeys suggesting an imminent gene therapy in humans. While the research into gene therapy for red-green colorblindness has lagged since then, successful human trials are ongoing for achromatopsia . Congenital color vision deficiency affects upwards of 200 million people in the world, which represents a large demand for this gene therapy.
The retina of the human eye contains photoreceptive cells called cones that allow color vision. A normal trichromat possesses three different types of cones to distinguish different colors within the visible spectrum . The three types of cones are designated L, M, and S cones, each containing an opsin sensitive to a different portion of the visible spectrum. More specifically, the L cone absorbs around 560 nm, the M cone absorbs near 530 nm, and the S cone absorbs near 420 nm. [ 1 ] These cones transduce the absorbed light into electrical information to be relayed through other cells along the phototransduction pathway , before reaching the visual cortex in the brain . [ 1 ]
The signals from the three cone types are compared with each other to generate three opponent process channels. The channels are perceived as balances between red-green, blue-yellow and black-white. [ 1 ]
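A toy numerical model makes the opponent coding concrete. This is an illustrative simplification; the actual retinal weighting of cone signals is more complex and is not specified in this article:

```python
# Toy model of opponent-process coding from normalized cone responses
# (0..1). The weightings are simplified for illustration; real retinal
# circuitry combines cone signals in a more complex way.

def opponent_channels(L: float, M: float, S: float) -> dict[str, float]:
    """Combine cone signals into three opponent channels."""
    return {
        "red_green":   L - M,            # positive toward red, negative toward green
        "blue_yellow": S - (L + M) / 2,  # positive toward blue, negative toward yellow
        "black_white": (L + M + S) / 3,  # achromatic brightness channel
    }

# A long-wavelength ("reddish") light excites L cones most:
print(opponent_channels(L=0.9, M=0.4, S=0.1))

# With no functional L (or M) cones, L - M carries no chromatic signal,
# which is why protanopes and deuteranopes confuse reds and greens.
```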
Color vision deficiency (CVD) is the deviation of an individual's color vision from typical human trichromatic vision. Relevant to gene therapy, CVD can be classified into two groups.
Dichromats have partial color vision. The most common form of dichromacy is red-green colorblindness. Dichromacy usually arises when one of the three opsin genes is deleted or otherwise fully nonfunctional. The effects and diagnosis depend on the missing opsin. Protanopes (very common) have no L-opsin, Deuteranopes (very common) have no M-opsin, and Tritanopes (rare) have no S-opsin. Accordingly, a missing cone means one of the opponent channels is inactive: red-green for protanopes/deuteranopes and blue-yellow for tritanopes. They therefore perceive a much reduced color space. Although dichromacy poses few critical problems in daily life, a lack of access to many occupations (where color vision may be safety-critical) is a large disadvantage.
Anomalous Trichromats are not missing an opsin gene, but rather have a mutated (or chimeric ) gene. They have trichromatic vision, but with a smaller color gamut than typical color vision. Regarding gene therapy, they are equivalent to dichromats.
Blue Cone Monochromats are missing both the L- and M-opsin and therefore have no color vision. They are treated as a subset of dichromacy since a combination of gene therapies for protanopia and deuteranopia would be used.
Individuals with congenital achromatopsia tend to have typical opsin genes, but have a mutation in another gene downstream in the phototransduction pathway (e.g. GNAT2 protein) that prevents their cones (and therefore photopic vision ) from functioning. Achromats rely solely on their scotopic vision . The severity of achromatopsia is much greater than that of dichromacy, not only in the lack of color vision, but also in the co-occurring symptoms of photophobia , nystagmus and poor visual acuity .
Gene therapies aim to inject functional copies of missing or mutated genes into affected individuals by the use of viral vectors. Using a replication-defective recombinant adeno-associated virus (rAAV) as a vector, the cDNA of the affected gene can be delivered to the cones at the back of the retina, typically via subretinal injection. Intravitreal injections are much less invasive, but not yet as effective as subretinal injections. Upon gaining the gene, the cone begins to express the new photopigment. The effect is ideally permanent.
The first retinal gene therapy to be approved by the FDA was Voretigene neparvovec in 2017, which treats Leber's congenital amaurosis , a genetic disorder that can lead to blindness. These treatments also use subretinal injections of AAV vector and are therefore foundational to research in gene therapy for color blindness. [ 2 ] [ 3 ]
Human L-cone photopigment has been introduced into mice . Since mice possess only S cones and M cones, they are dichromats. [ 4 ] M-opsin was replaced with a cDNA of L-opsin in the X chromosome of some mice. By breeding these "knock-in" transgenic mice, researchers generated heterozygous females with both an M cone and an L cone. These mice had an improved range of color vision and gained trichromacy, as tested by electroretinogram and behavioral tests. However, this approach is more difficult to apply in the form of gene therapy.
A recombinant AAV vector was used to introduce the green fluorescent protein (GFP) gene into the cones of gerbils . [ 5 ] The genetic insert was designed to be expressed only in S or M cones, and the expression of GFP in vivo was observed over time. Gene expression could stabilize if a sufficiently high dose of the viral vector was given.
In 2009, adult dichromatic squirrel monkeys were converted into trichromats using gene therapy. [ 6 ] New world monkeys are polymorphic in their M-opsin, such that females can be trichromatic, but all males are dichromatic. [ 6 ] Recombinant AAV vector was used to deliver a human L-opsin gene subretinally. A subset of the monkey's M-cones gained the L-opsin genes and began co-expressing the new and old photopigments. [ 6 ] Electroretinograms demonstrated that the cones were expressing the new opsin and after 20 weeks a pseudoisochromatic color vision test demonstrated that the treated monkeys had indeed developed functional trichromatic vision. [ 6 ]
Gene therapy was used to restore some of the sight of mice with achromatopsia . The results were positive for 80% of the mice treated. [ 7 ]
In 2010, gene therapy for a form of achromatopsia was performed in dogs. Cone function and day vision have been restored for at least 33 months in two young dogs with achromatopsia. However, this therapy was less efficient for older dogs. [ 8 ]
In 2022, four young human ACHM2 and ACHM3 achromats were shown, after gene therapy, to have neurological responses (as measured with fMRI ) to photopic vision that matched patterns generated by their scotopic vision. This indicated a photopic, cone-driven system that was at least marginally functional. The methodology did not investigate novel color vision, though one respondent claimed to interpret traffic lights more easily. [ 9 ] This may be considered the first case of a cure for colorblindness in humans.
In July 2023, a study found positive but limited improvements on congenital CNGA3 achromatopsia. [ 10 ] [ 11 ]
While the benefits of gene therapy to achromats typically outweigh the current risks, there are several challenges before large acceptance of gene therapy in dichromats can occur.
The procedure – namely the subretinal injection – is quite invasive, requiring several incisions and punctures in the eyeball. This poses a significant risk of infection and other complications. Subretinal injection methods promise to become less invasive with their application in other retinal gene therapies. They could also be replaced by intravitreal injections, which are significantly less invasive and can in theory be performed by a family doctor, but are less effective. [ 12 ]
The permanence of these therapies is also in question. Mancuso et al. reported that the treated squirrel monkeys maintained their color vision for two years after the treatment. [ 6 ] However, if repeat injections are needed, there is also the concern of the body developing an immune reaction to the virus. If the body develops sensitivity to the viral vector, the success of the therapy could be jeopardized and/or the body may respond unfavorably. An editorial by J. Bennett points to Mancuso et al.'s use of an "unspecified postinjection corticosteroid therapy". [ 13 ] Bennett suggests that the monkeys may have experienced inflammation due to the injection. [ 13 ] However, the AAV commonly used in such studies is non-pathogenic, so the body is less likely to develop an immune reaction. [ 14 ]
According to research by David H. Hubel and Torsten Wiesel , suturing shut one eye of monkeys at an early age resulted in an irreversible loss of vision in that eye, even after the suture was removed. [ 1 ] [ 15 ] The study concluded that the neural circuitry for vision is wired during a "critical period" in childhood, after which the visual circuitry can no longer be rewired to process new sensory input. Contrary to this finding, Mancuso et al.’s success in conferring trichromacy to adult squirrel monkeys suggests that it is possible to adapt the preexisting circuit to allow greater acuity in color vision. The researchers concluded that integrating the stimulus from the new photopigment as an adult was not analogous to vision loss following visual deprivation. [ 6 ]
It is not yet known how animals that gain a new photopigment perceive the new color. While the article by Mancuso et al. states that the monkey did gain trichromacy and the ability to discriminate between red and green, they claim no knowledge of how the animal internally perceives the sensation. [ 6 ]
As a way to introduce new genetic information to change a person's phenotype, a gene therapy for color blindness is open to the same ethical questions and criticisms as gene therapy in general. These include issues around the governance of the therapy, whether treatment should be available only to those who can afford it, and whether the availability of treatment creates a stigma for those with color blindness. Given the large number of people with color blindness, there is also the question of whether color blindness is a disorder. [ 16 ] Furthermore, even if gene therapy succeeds in converting incomplete colorblind individuals to trichromats, the degree of satisfaction among the subjects is unknown. It is uncertain how the quality of life will improve (or worsen) after the therapy.
The gene therapy for converting dichromats to trichromats could also hypothetically be used to "upgrade" typical trichromats to tetrachromats by introducing a new opsin gene. This raises the ethical questions surrounding designer babies that carry genes not naturally available in the human gene pool. In 2022, the lab of Jay Neitz engineered a novel opsin sensitive to wavelengths between those of the typical human S- (420 nm) and M- (530 nm) opsins, peaking at 493 nm. This allowed the opsin's response to be clearly visible in ERGs , but such an opsin could also be used to create tetrachromacy. [ 12 ] | https://en.wikipedia.org/wiki/Gene_therapy_for_color_blindness
Retinal gene therapy holds promise in treating different forms of non-inherited and inherited blindness .
In 2008, three independent research groups reported that patients with the rare genetic retinal disease Leber's congenital amaurosis had been successfully treated using gene therapy with adeno-associated virus (AAV). [ 1 ] [ 2 ] [ 3 ] In all three studies, an AAV vector was used to deliver a functional copy of the RPE65 gene, which restored vision in children suffering from LCA. These results were widely seen as a success in the gene therapy field, and have generated excitement and momentum for AAV-mediated applications in retinal disease.
In retinal gene therapy, the most widely used vectors for ocular gene delivery are based on adeno-associated virus . The great advantage of using adeno-associated virus for gene therapy is that it elicits minimal immune responses and mediates long-term transgene expression in a variety of retinal cell types. For example, the tight junctions that form the blood-retina barrier separate the subretinal space from the blood supply , providing protection from microbes and decreasing most immune-mediated damage. [ 4 ]
Much is still unknown about retinal dystrophies, and detailed characterization is needed to improve this knowledge. To address this issue, registries have been created to group and characterize rare diseases. Registries help to localize and measure the phenotypes of these conditions, enabling easy follow-up and providing a source of information for the scientific community. Registry designs vary from region to region, but localization and characterization of the phenotype are the gold standard.
Examples of Registries are:
RetMxMap (ARVO 2009): a Mexican and Latin American registry created in 2009 by Dr Adda Lízbeth Villanueva Avilés, a clinician-scientist who maps genes of inherited retinal dystrophies in Mexico and other Latin American countries.
Preclinical studies in mouse models of Leber's congenital amaurosis (LCA) were published in 1996, and a study in dogs was published in 2001. In 2008, three groups reported results of clinical trials using adeno-associated virus for LCA. In these studies, an AAV vector encoding the RPE65 gene was delivered via a "subretinal injection", where a small amount of fluid is injected underneath the retina in a short surgical procedure. [ 5 ] Development continued, and in December 2017 the FDA approved Voretigene neparvovec (Luxturna), an adeno-associated virus vector-based gene therapy for children and adults with biallelic RPE65 gene mutations responsible for retinal dystrophy, including Leber congenital amaurosis. People must have viable retinal cells as a prerequisite for the intraocular administration of the drug. [ 6 ]
Following the successful clinical trials in LCA, researchers have been developing similar treatments using adeno-associated virus for age-related macular degeneration (AMD). To date, efforts have focused on long-term delivery of VEGF inhibitors to treat the wet form of macular degeneration. Whereas wet AMD is treated using frequent injections of recombinant protein into the eyeball, the goal of these treatments is long-term disease management following a single administration. One such study is being conducted at the Lions Eye Institute in Australia [ 7 ] in collaboration with Avalanche Biotechnologies, a US-based biotechnology start-up. Another early-stage study is sponsored by Genzyme Corporation . [ 8 ]
Ixoberogene soroparvovec (Ixo-vec) is an investigational intravitreal gene therapy treatment targeting wet age-related macular degeneration (AMD) that aims to reduce the treatment burden by decreasing the frequency of anti-VEGF injections. [ 9 ] Delivered as a single intravitreal injection, Ixo-vec enables sustained release of aflibercept, an anti-VEGF protein that helps control abnormal blood vessel growth and fluid leakage, which are key in AMD progression. [ 10 ] Results from the OPTIC and LUNA trials demonstrate Ixo-vec’s effectiveness in significantly reducing the need for regular injections over extended periods. Patients in these trials experienced a reduction in injection frequency by as much as 90%, with many remaining injection-free for extended periods. Visual acuity remained stable, and anatomical outcomes, like reductions in central subfield thickness (CST), were achieved. [ 11 ] Mild intraocular inflammation was the most common side effect, with steroid prophylaxis proving effective in managing this issue. This treatment approach, if proven in further studies, could offer AMD patients a more convenient, long-lasting alternative to frequent anti-VEGF injections, enhancing quality of life and treatment adherence.
In October 2011, the first clinical trial was announced for the treatment of choroideremia . [ 12 ] Dr. Robert MacLaren of the University of Oxford, who led the trial, co-developed the treatment with Dr. Miguel Seabra of Imperial College, London. This Phase 1/2 trial used subretinal AAV to restore the REP1 gene in affected patients. [ 13 ] Initial results of the trial, reported in January 2014, were promising, as all six patients had better vision. [ 14 ] [ 15 ]
Research has shown that AAV can successfully restore color vision to treat color blindness in adult monkeys. [ 16 ] Although this treatment has not yet entered clinical trials for humans, this work was considered a breakthrough for the ability to target cone photoreceptors. [ 17 ]
Revakinagene taroretcel was approved for medical use in the United States in March 2025 for the treatment of macular telangiectasia type 2. [ 18 ]
The vertebrate neural retina is composed of several layers and distinct cell types (see anatomy of the human retina ). A number of these cell types are implicated in retinal diseases, including the retinal ganglion cells , which degenerate in glaucoma; the rod and cone photoreceptors , which are responsive to light and degenerate in retinitis pigmentosa , macular degeneration , and other retinal diseases; and the retinal pigment epithelium (RPE), which supports the photoreceptors and is also implicated in retinitis pigmentosa and macular degeneration .
In retinal gene therapy , AAV is capable of "transducing" these various cell types by entering the cells and expressing the therapeutic DNA sequence. Since the cells of the retina are non-dividing, AAV continues to persist and provide expression of the therapeutic DNA sequence over a long time period that can last several years. [ 19 ]
AAV is capable of transducing multiple cell types within the retina. AAV serotype 2, the most well-studied type of AAV, is commonly administered in one of two routes: intravitreal or subretinal. Using the intravitreal route, AAV is injected in the vitreous humor of the eye. Using the subretinal route, AAV is injected underneath the retina, taking advantage of the potential space between the photoreceptors and RPE layer, in a short surgical procedure. Although this is more invasive than the intravitreal route, the fluid is absorbed by the RPE and the retina flattens in less than 14 hours without complications. [ 1 ] Intravitreal AAV targets retinal ganglion cells and a few Muller glial cells. Subretinal AAV efficiently targets photoreceptors and RPE cells. [ 20 ] [ 21 ]
The reason that different routes of administration lead to different cell types being transduced (i.e., different tropism ) is that the inner limiting membrane (ILM) and the various retinal layers act as physical barriers for the delivery of drugs and vectors to the deeper retinal layers. [ 22 ] Overall, subretinal AAV is 5-10 times more efficient than delivery via the intravitreal route.
One important factor in gene delivery is the development of altered cell tropisms to narrow or broaden rAAV-mediated gene delivery and to increase its efficiency in tissues. Specific properties such as capsid conformation and cell-targeting strategies can determine which cell types are affected and the efficiency of the gene transfer process. Different kinds of modification can be undertaken, for example chemical, immunological, or genetic changes that enable the AAV2 capsid to interact with specific cell surface molecules . [ 23 ]
Initial studies with AAV in the retina have utilized AAV serotype 2. Researchers are now beginning to develop new variants of AAV, based on naturally-occurring AAV serotypes and engineered AAV variants. [ 24 ]
Several naturally-occurring serotypes of AAV have been isolated that can transduce retinal cells. Following intravitreal injection, only AAV serotypes 2 and 8 were capable of transducing retinal ganglion cells. Occasional Muller cells were transduced by AAV serotypes 2, 8, and 9. Following subretinal injection, serotypes 2, 5, 7, and 8 efficiently transduced photoreceptors, and serotypes 1, 2, 5, 7, 8, and 9 efficiently transduce RPE cells. [ 21 ]
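The route-dependent serotype data above can be summarized in a small lookup structure, which makes the tropism pattern easier to scan. This is a hypothetical organization of the values reported in the text, not a standard reference table:

```python
# Hypothetical lookup structure condensing the serotype tropisms reported
# above: which AAV serotypes transduce which retinal cell types, by route.

RETINAL_TROPISM: dict[tuple[str, str], list[int]] = {
    ("intravitreal", "retinal ganglion cells"): [2, 8],
    ("intravitreal", "Muller cells"):           [2, 8, 9],   # occasional
    ("subretinal",   "photoreceptors"):         [2, 5, 7, 8],
    ("subretinal",   "RPE cells"):              [1, 2, 5, 7, 8, 9],
}

def serotypes_for(route: str, cell_type: str) -> list[int]:
    """Return the AAV serotypes reported to transduce a cell type via a route."""
    return RETINAL_TROPISM.get((route, cell_type), [])

print(serotypes_for("subretinal", "photoreceptors"))  # [2, 5, 7, 8]
```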
One example of an engineered variant has been described that efficiently transduces Muller glia following intravitreal injection, and has been used to rescue an animal model of aggressive, autosomal-dominant retinitis pigmentosa . [ 25 ] [ 26 ]
Importantly, the retina is immune-privileged, and thus does not experience a significant inflammation or immune-response when AAV is injected. [ 27 ] Immune response to gene therapy vectors is what has caused previous attempts at gene therapy to fail, and is considered a key advantage of gene therapy in the eye. Re-administration has been successful in large animals, indicating that no long-lasting immune response is mounted. [ 28 ]
Data indicate that the subretinal route may be subject to a greater degree of immune privilege compared to the intravitreal route. [ 29 ]
Expression in various retinal cell types can be determined by the promoter sequence. In order to restrict expression to a specific cell type, a tissue-specific or cell-type specific promoter can be used.
For example, in rats the murine rhodopsin promoter was used to drive expression in AAV2: the GFP reporter product was found only in rat photoreceptors, not in any other retinal cell type or in the adjacent RPE, after subretinal injection. In contrast, the ubiquitously expressed immediate-early cytomegalovirus (CMV) enhancer-promoter drives expression in a wide variety of transfected cell types. Other ubiquitous promoters, such as the CBA promoter – a fusion of the chicken β-actin promoter and the CMV immediate-early enhancer – allow stable GFP reporter expression in both RPE and photoreceptor cells after subretinal injection. [ 30 ]
Sometimes modulation of transgene expression may be necessary, since strong constitutive expression of a therapeutic gene in retinal tissues could be deleterious for long-term retinal function. Different methods have been utilized for expression modulation. One way is using an exogenously regulatable promoter system in AAV vectors. For example, the tetracycline -inducible expression system uses a silencer/transactivator AAV2 vector co-injected with a separate doxycycline-responsive inducible vector. [ 30 ] [ 31 ] When induced by oral doxycycline , this system shows tight regulation of gene expression in both photoreceptor and RPE cells.
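The tight on/off behavior of such an inducible system can be pictured as a steep dose-response curve. The sketch below is a minimal conceptual model with entirely hypothetical parameter values, chosen only to illustrate low basal leak and sharp induction; it is not a fit to the cited experiments:

```python
# Conceptual dose-response model of a doxycycline-inducible transgene.
# EC50, Hill slope, and leak values are hypothetical, for illustration only.

def transgene_expression(dox_ng_ml: float, ec50: float = 100.0,
                         hill: float = 4.0, leak: float = 0.02) -> float:
    """Relative expression (0..1) of a tet-inducible transgene."""
    induced = dox_ng_ml**hill / (ec50**hill + dox_ng_ml**hill)
    return leak + (1.0 - leak) * induced  # small basal "leak" when uninduced

for dose in (0, 50, 100, 500):
    print(f"{dose:>3} ng/ml doxycycline -> relative expression {transgene_expression(dose):.2f}")
# Output rises from ~0.02 (off, leaky) to ~1.0 (fully induced), mimicking
# the tight regulation described for the tetracycline-inducible system.
```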
One study in the Royal College of Surgeons (RCS) rat model showed that a recessive mutation in a receptor tyrosine kinase gene, mertk , results in a premature stop codon and impaired phagocytosis by RPE cells. This mutation causes the accumulation of outer segment debris in the subretinal space, which causes photoreceptor cell death . The model organisms received a subretinal injection of AAV serotype 2 carrying a mouse Mertk cDNA under the control of either the CMV or RPE65 promoter. This treatment prolonged photoreceptor cell survival for several months, [ 32 ] and the number of photoreceptors was 2.5-fold higher in AAV-Mertk-treated eyes than in controls 9 weeks after injection; the amount of debris in the subretinal space was also decreased.
The protein RPE65 functions in the retinoid cycle, in which the all-trans-retinol within the rod outer segment is isomerized to its 11-cis form and oxidized to 11-cis retinal before it goes back to the photoreceptor and joins with an opsin molecule to form functional rhodopsin . [ 33 ] In the knockout animal model (RPE65-/-), gene transfer experiments showed that early intraocular delivery of a human RPE65 vector on embryonic day 14 achieves efficient transduction of the retinal pigment epithelium in RPE65-/- knockout mice and rescues visual function. This shows that successful gene therapy can depend on early intraocular delivery to the diseased animal.
Juvenile retinoschisis is a disease that affects the nerve tissue in the eye. It is an X-linked recessive degenerative disease of the central macula region, caused by mutations in the RS1 gene encoding the protein retinoschisin. Retinoschisin is produced in photoreceptor and bipolar cells and is critical in maintaining the synaptic integrity of the retina. [ 30 ]
Specifically, an AAV5 vector containing the wild-type human RS1 cDNA driven by a mouse opsin promoter showed long-term retinal functional and structural recovery. The retinal structural integrity also improved greatly after the treatment, characterized by an increase in the thickness of the outer nuclear layer. [ 30 ]
Retinitis pigmentosa is an inherited disease which leads to progressive night blindness and loss of peripheral vision as a result of photoreceptor cell death. [ 30 ] [ 34 ] [ 35 ] Most people with RP are born with rod cells that are either dead or dysfunctional, so they are effectively blind at nighttime, since these are the cells responsible for vision in low levels of light. What often follows is the death of cone cells , responsible for color vision and acuity, at light levels present during the day. Loss of cones leads to full blindness as early as five years old, but may not begin until many years later. There have been multiple hypotheses about how the lack of rod cells can lead to the death of cone cells. Pinpointing a mechanism for RP is difficult because more than 39 genetic loci and genes are correlated with this disease. In an effort to find the cause of RP, different gene therapy techniques have been applied to address each of the hypotheses. [ 36 ]
Different types of inheritance can underlie this disease: autosomal recessive, autosomal dominant, X-linked, etc. The main function of rhodopsin is initiating the phototransduction cascade. The opsin proteins are made in the photoreceptor inner segments, then transported to the outer segment, and eventually phagocytized by the RPE cells. Mutations in rhodopsin disturb this directional protein movement because they can affect protein folding , stability, and intracellular trafficking. One approach is introducing AAV-delivered ribozymes designed to target and destroy a mutant mRNA. [ 30 ]
The operation of this system was demonstrated in an animal model carrying a mutant rhodopsin gene. The injected AAV-ribozymes were optimized in vitro and used in vivo to cleave the mutant mRNA transcript of P23H (the site where most mutations occur). [ 30 ]
A mutation in another photoreceptor structural protein, peripherin 2, a membrane glycoprotein involved in the formation of the photoreceptor outer segment disk, can lead to recessive RP and macular degeneration in humans. [ 34 ] In a mouse experiment, AAV2 carrying a wild-type peripherin 2 gene driven by a rhodopsin promoter was delivered to the mice by subretinal injection. The result showed improvement in photoreceptor structure and function, as detected by electroretinogram (ERG). Peripherin 2 was detected at the outer segment layer of the retina 2 weeks after injection, and therapeutic effects were noted as soon as 3 weeks after injection. A well-defined outer segment containing both peripherin 2 and rhodopsin was present 9 months after injection. [ 30 ]
Since apoptosis can be the cause of photoreceptor death in most retinal dystrophies, survival factors and antiapoptotic reagents can be an alternative treatment when the mutation is unknown and gene replacement therapy is not possible. Some scientists have experimented with treating this issue by injecting substitute trophic factors into the eye. One group of researchers injected the rod-derived cone viability factor (RdCVF) protein (encoded by the Nxnl1 (Txnl6) gene) into the eyes of rat models carrying the most commonly occurring dominant RP mutation. This treatment promoted the survival of cone activity, and even more significantly it prevented progression of the disease by increasing the actual function of the cones. [ 37 ] Experiments were also carried out to study whether supplying AAV2 vectors with cDNA for glial cell line-derived neurotrophic factor (GDNF) can have an anti-apoptosis effect on the rod cells . [ 30 ] [ 38 ] In one animal model, the opsin transgene encodes a truncated protein lacking the last 15 amino acids of the C terminus, which alters rhodopsin transport to the outer segment and leads to retinal degeneration. [ 30 ] When the AAV2-CBA-GDNF vector was administered to the subretinal space, photoreceptors stabilized, rod photoreceptors increased in number, and function improved on ERG analysis. [ 38 ] Successful experiments in animals have also been carried out using ciliary neurotrophic factor (CNTF), and CNTF is being used as a treatment in human clinical trials. [ 39 ]
Ocular neovascularization (NV) is the abnormal formation of new capillaries from already existing blood vessels in the eye, and is a characteristic of ocular diseases such as diabetic retinopathy (DR), retinopathy of prematurity (ROP) and the wet form of age-related macular degeneration (AMD). One of the main players in these diseases is VEGF (vascular endothelial growth factor), which is known to induce vessel leakage and to be angiogenic. [ 30 ] In normal tissues VEGF stimulates endothelial cell proliferation in a dose-dependent manner, but such activity is lost with other angiogenic factors. [ 40 ]
Many angiostatic factors have been shown to counteract the effect of increasing local VEGF. The naturally occurring form of soluble Flt-1 has been shown to reverse neovascularization in rats, mice, and monkeys. [ 41 ] [ 42 ] [ 43 ] [ 44 ]
Pigment epithelium-derived factor ( PEDF ) also acts as an inhibitor of angiogenesis . The secretion of PEDF is noticeably decreased under hypoxic conditions, allowing the endothelial mitogenic activity of VEGF to dominate, which suggests that the loss of PEDF plays a central role in the development of ischemia -driven NV. One clinical finding shows that levels of PEDF in human aqueous humor decrease with increasing age, indicating that the reduction may contribute to the development of AMD. [ 30 ] [ 45 ] In animal models, an AAV with human PEDF cDNA under the control of the CMV promoter prevented choroidal and retinal NV. [ 46 ]
The finding suggests that the AAV-mediated expression of angiostatic factors can be implemented to treat NV. [ 47 ] [ 48 ] This approach could be useful as an alternative to frequent injections of recombinant protein into the eye. In addition, PEDF and sFlt-1 may be able to diffuse through sclera tissue, [ 49 ] potentially making the treatment relatively independent of the intraocular site of administration. | https://en.wikipedia.org/wiki/Gene_therapy_of_the_human_retina
Gene transfer agents ( GTAs ) are DNA-containing virus -like particles that are produced by some bacteria and archaea and mediate horizontal gene transfer . Different GTA types have originated independently from viruses in several bacterial and archaeal lineages. These cells produce GTA particles containing short segments of the DNA present in the cell. After the particles are released from the producer cell, they can attach to related cells and inject their DNA into the cytoplasm. The DNA can then become part of the recipient cells' genome. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
GTAs are classified as viriforms in the ICTV taxonomy . Among the GTAs mentioned by the article, RcGTA and DsGTA are now in the family Rhodogtaviriformidae , BaGTA in Bartogtaviriformidae , and VSH-1 in Brachygtaviriformidae . [ 5 ] Dd1 and VTA do not yet have a classification.
The first GTA system was discovered in 1974, when mixed cultures of Rhodobacter capsulatus strains produced a high frequency of cells with new combinations of genes. [ 6 ] The factor responsible was distinct from known gene-transfer mechanisms in being independent of cell contact, insensitive to deoxyribonuclease, and not associated with phage production. Because of its presumed function it was named gene transfer agent (GTA, now RcGTA). More recently, other gene transfer agent systems have been discovered by incubating filtered (cell-free) culture medium with a genetically distinct strain. [ 3 ]
The genes specifying GTAs are derived from bacteriophage (phage) DNA that has integrated into a host chromosome. Such prophages often acquire mutations that make them defective and unable to produce phage particles. Many bacterial genomes contain one or more defective prophages that have undergone more- or less-extensive mutation and deletion. Gene transfer agents, like defective prophages, arise by mutation of prophages, but they retain functional genes for the head and tail components of the phage particle (structural genes) and the genes for DNA packaging. The phage genes specifying its regulation and DNA replication have typically been deleted, and expression of the cluster of structural genes is under the control of cellular regulatory systems. Additional genes that contribute to GTA production or uptake are usually present at other chromosome locations. Some of these have regulatory functions, and others contribute directly to GTA production ( e.g. the phage-derived lysis genes) or uptake and recombination ( e.g. production of cell-surface capsule and DNA transport proteins). These GTA-associated genes are often under coordinated regulation with the main GTA gene cluster. [ 7 ] Phage-derived cell-lysis proteins (holin and endolysin) then weaken the cell wall and membrane, allowing the cell to burst and release the GTA particles.
Some GTA systems appear to be recent additions to their host genomes, but others have been maintained for many millions of years. Where studies of sequence divergence have been done (dN/dS analysis), they indicate that the genes are being maintained by natural selection for protein function (i.e. defective versions are being eliminated). [ 8 ] [ 9 ]
However, the nature of this selection is not clear. Although the discoverers of GTA assumed that gene transfer was the function of the particles, the presumed benefits of gene transfer come at a substantial cost to the population. Most of this cost arises because GTA-producing cells must lyse (burst open) to release their GTA particles, but there are also genetic costs associated with making new combinations of genes, because most new combinations will be less fit than the original combination. [ 10 ] One alternative explanation is that GTA genes persist because GTAs are genetic parasites that spread infectiously to new cells. However, this is ruled out because GTA particles are typically too small to contain the genes that encode them. For example, the main RcGTA cluster (see below) is 14 kb long, but RcGTA particles can contain only 4–5 kb of DNA.
Most bacteria have not been screened for the presence of GTAs, and many more GTA systems may await discovery. Although DNA-based surveys for GTA-related genes have found homologs in many genomes, interpretation is hindered by the difficulty of distinguishing genes that encode GTAs from ordinary prophage genes. [ 8 ] [ 9 ]
In laboratory cultures, production of GTAs is typically maximized by particular growth conditions that induce transcription of the GTA genes; most GTAs are not induced by the DNA-damaging treatments that induce many prophages. Even under maximally inducing conditions only a small fraction of the culture produces GTAs, typically less than 1%. [ 11 ] [ 12 ]
The steps in GTA production are derived from those of phage infection. The structural genes are first transcribed and translated, and the proteins assembled into empty heads and unattached tails. The DNA packaging machinery then packs DNA into each head, cutting the DNA when the head is full, attaching a tail to the head, and then moving the newly-created DNA end on to a new empty head. Unlike prophage genes, the genes encoding GTAs are not excised from the genome and replicated for packaging in GTA particles. The two best studied GTAs (RcGTA and BaGTA) randomly package all of the DNA in the cell, with no overrepresentation of GTA-encoding genes. [ 11 ] [ 13 ] The number of GTA particles produced by each cell is not known.
Whether release of GTA particles leads to transfer of DNA to new genomes depends on several factors. First, the particles must survive in the environment – little is known about this, although particles are reported to be quite unstable under laboratory conditions. [ 14 ] Second, particles must encounter and attach to suitable recipient cells, usually members of the same or a closely related species. Like phages, GTAs attach to specific protein or carbohydrate structures on the recipient cell surface before injecting their DNA. Unlike phages, the well-studied GTAs appear to inject their DNA only across the first of the two membranes surrounding the recipient cytoplasm, and they use a different system, competence -derived rather than phage-derived, to transport one strand of the double-stranded DNA across the inner membrane into the cytoplasm. [ 15 ] [ 16 ]
If the cell's recombinational repair machinery finds a chromosomal sequence very similar to the incoming DNA, it replaces the former with the latter by homologous recombination, mediated by the cell's RecA protein. If the sequences are not identical this will produce a cell with a new genetic combination. However, if the incoming DNA is not closely related to DNA sequences in the cell it will be degraded, and the cell will reuse its nucleotides for DNA replication.
The GTA produced by the alphaproteobacterium Rhodobacter capsulatus , named R. capsulatus GTA (RcGTA), is currently the best studied GTA. When laboratory cultures of R. capsulatus enter stationary phase, a subset of the bacterial population induces production of RcGTA, and the particles are subsequently released from the cells through cell lysis . [ 12 ] Most of the RcGTA structural genes are encoded in a ~ 15 kb genetic cluster on the bacterial chromosome. However, other genes required for RcGTA function, such as the genes required for cell lysis, are located separately. [ 2 ] [ 17 ] RcGTA particles contain 4.5 kb DNA fragments, with even representation of the whole chromosome except for a 2-fold dip at the site of the RcGTA gene cluster.
Regulation of GTA production and transduction has been best studied in R. capsulatus , where a quorum-sensing system and a CtrA-phosphorelay control expression of not only the main RcGTA gene cluster, but also a holin/endolysin cell lysis system, particle head spikes, an attachment protein (possibly tail fibers), and the capsule and DNA processing genes needed for RcGTA recipient function. An uncharacterized stochastic process further limits expression of the gene cluster to only 0.1–3% of the cells.
RcGTA-like clusters are found in a large subclade of the alphaproteobacteria, although the genes also appear to be frequently lost by deletion. Recently, several members of the order Rhodobacterales have been demonstrated to produce functional RcGTA-like particles. Groups of genes with homology to the RcGTA are present in the chromosomes of various types of alphaproteobacteria. [ 8 ]
D. shibae , like R. capsulatus , is a member of the Order Rhodobacterales, and its GTA shares a common ancestor and many features with RcGTA, including gene organization, packaging of short DNA fragments (4.2 kb) and regulation by quorum sensing and a CtrA phosphorelay. [ 18 ] However, its DNA packaging machinery has much more specificity, with sharp peaks and valleys of coverage suggesting that it may preferentially initiate packaging at specific sites in the genome. The DNA of the major DsGTA gene cluster is packaged very poorly.
Bartonella species are members of the Alphaproteobacteria like R. capsulatus and D. shibae , but BaGTA is not related to RcGTA and DsGTA. [ 19 ] BaGTA particles are larger than RcGTA particles and contain 14 kb DNA fragments. Although this capacity could in principle allow BaGTA to package and transmit its 14 kb GTA cluster, measurements of DNA coverage show reduced coverage of the cluster. An adjacent region of high coverage is thought to be due to local DNA replication. [ 13 ]
Brachyspira is a genus of spirochete; several species have been shown to carry homologous GTA gene clusters. Particles contain 7.5 kb DNA fragments. Production of VSH-1 is stimulated by the DNA-damaging agent mitomycin C and by some antibiotics. It is also associated with detectable cell lysis, indicating that a substantial fraction of the culture may be producing VSH-1. [ 20 ]
D. desulfuricans is a soil bacterium in the deltaproteobacteria; Dd1 packages 13.6 kb DNA fragments. It is unclear which genes encode this GTA: there is one 17.8 kb region with phage-like structural genes in the bacterial genome, but their link to GTA production has not yet been experimentally proven. [ 21 ]
M. voltae is an archaean; its GTA is known to transfer 4.4 kb DNA fragments but has not been otherwise characterized, [ 22 ] although a defective provirus related to Methanococcus head-tailed viruses ( Caudoviricetes ) in M. voltae A3 genome has been suggested to represent the GTA locus. [ 23 ] A possible terL terminase ( D7DSG2 ) was again identified in 2019. [ 24 ] | https://en.wikipedia.org/wiki/Gene_transfer_agent |
The Gene transfer format ( GTF ) is a file format used to hold information about gene structure. It is a tab-delimited text format based on the general feature format (GFF), but contains some additional conventions specific to gene information. A significant feature of GTF is that it can be validated: given a sequence and a GTF file, one can check that the format is correct. This significantly reduces problems with the interchange of data between groups.
GTF is identical to GFF , version 2. [ 1 ]
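As a brief illustration of the convention: each GTF record is a single line of nine tab-separated fields (seqname, source, feature, start, end, score, strand, frame, attributes), with the attributes field holding gene_id/transcript_id pairs. The two sample records and the minimal parser below are a sketch for illustration only; they are not drawn from any real annotation source.

```python
# Minimal GTF reading sketch; the two records are invented examples.
FIELDS = ["seqname", "source", "feature", "start", "end",
          "score", "strand", "frame", "attributes"]

sample = (
    'chr1\tEXAMPLE\texon\t1300\t1500\t.\t+\t.\tgene_id "G1"; transcript_id "T1";\n'
    'chr1\tEXAMPLE\tCDS\t1350\t1500\t.\t+\t0\tgene_id "G1"; transcript_id "T1";\n'
)

def parse_gtf_line(line):
    """Split one GTF record into a dict; attributes become a sub-dict."""
    record = dict(zip(FIELDS, line.rstrip("\n").split("\t")))
    attrs = {}
    for pair in record["attributes"].strip().strip(";").split(";"):
        key, _, value = pair.strip().partition(" ")
        attrs[key] = value.strip('"')
    record["attributes"] = attrs
    return record

for line in sample.splitlines():
    rec = parse_gtf_line(line)
    print(rec["feature"], rec["start"], rec["end"], rec["attributes"]["gene_id"])
```

The validation property described above amounts to layering checks on such a parse: that each line has exactly nine fields, that start and end are integers within the sequence, and that the mandatory gene_id and transcript_id attributes are present.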
| https://en.wikipedia.org/wiki/Gene_transfer_format
Gene trapping is a high-throughput approach that is used to introduce insertional mutations across an organism's genome.
Trapping is performed with gene trap vectors whose principal element is a gene trapping cassette consisting of a promoterless reporter gene and/or selectable genetic marker , flanked by an upstream 3' splice site (splice acceptor; SA) and a downstream transcriptional termination sequence ( polyadenylation sequence; polyA).
When inserted into an intron of an expressed gene, the gene trap cassette is transcribed from the endogenous promoter of that gene in the form of a fusion transcript in which the exon(s) upstream of the insertion site is spliced in frame to the reporter/selectable marker gene. Since transcription is terminated prematurely at the inserted polyadenylation site, the processed fusion transcript encodes a truncated and nonfunctional version of the cellular protein and the reporter/selectable marker. Thus, gene traps simultaneously inactivate and report the expression of the trapped gene at the insertion site, and provide a DNA tag (gene trap sequence tag, GTST) for the rapid identification of the disrupted gene . [ 1 ] [ 2 ]
The International Gene Trap Consortium is centralizing the data and supplies modified cell lines. [ 3 ] | https://en.wikipedia.org/wiki/Gene_trapping |
A genealogical DNA test is a DNA -based genetic test used in genetic genealogy that looks at specific locations of a person's genome in order to find or verify ancestral genealogical relationships, or (with lower reliability) to estimate the ethnic mixture of an individual. Since different testing companies use different ethnic reference groups and different matching algorithms, ethnicity estimates for an individual vary between tests, sometimes dramatically.
Three principal types of genealogical DNA tests are available, with each looking at a different part of the genome and being useful for different types of genealogical research: autosomal (atDNA), mitochondrial (mtDNA), and Y-chromosome (Y-DNA).
Autosomal tests may result in a large number of DNA matches to both males and females who have also tested with the same company. Each match will typically show an estimated degree of relatedness, i.e., a close family match, 1st-2nd cousins, 3rd-4th cousins, etc. The furthest degree of relationship is usually the "6th-cousin or further" level. However, due to the random nature of which, and how much, DNA is inherited by each tested person from their common ancestors, precise relationship conclusions can only be made for close relations. Traditional genealogical research , and the sharing of family trees, is typically required for interpretation of the results. Autosomal tests are also used in estimating ethnic mix.
MtDNA and Y-DNA tests are much more objective. However, they give considerably fewer DNA matches, if any (depending on the company doing the testing), since they are limited to relationships along a strict female line and a strict male line respectively. MtDNA and Y-DNA tests are utilized to identify archeological cultures and migration paths of a person's ancestors along a strict mother's line or a strict father's line. Based on MtDNA and Y-DNA, a person's haplogroup (s) can be identified. The mtDNA test can be taken by both males and females, because everyone inherits their mtDNA from their mother, as the mitochondrial DNA is located in the egg cell. However, a Y-DNA test can only be taken by a male, as only males have a Y-chromosome .
A genealogical DNA test is performed on a DNA sample obtained by cheek-scraping (also known as a buccal swab ), spit-cups, mouthwash , or chewing gum . Typically, the sample collection uses a home test kit supplied by a service provider such as 23andMe , AncestryDNA , Family Tree DNA , or MyHeritage . After following the kit instructions on how to collect the sample, it is returned to the supplier for analysis. The sample is then processed using a technology known as DNA microarray to obtain the genetic information.
There are three major types of genealogical DNA tests: Autosomal (which includes X-DNA), Y-DNA, and mtDNA.
Y-DNA and mtDNA do not produce a direct ethnicity estimate, but they allow one to find one's haplogroup (s). Haplogroups can only provide information on one line of ancestors among many. While they are unevenly distributed across ethnicities, their historical distribution is only speculation. [ 2 ] Direct-to-consumer DNA test companies have often labeled haplogroups by continent or ethnicity (e.g., an "African haplogroup" or a "Viking haplogroup"), but these labels may be speculative or misleading. [ 2 ] [ 3 ] [ 4 ]
Autosomal DNA is contained in the 22 pairs of chromosomes not involved in determining a person's sex. [ 2 ] Autosomal DNA recombines in each generation, and new offspring receive one set of chromosomes from each parent. [ 5 ] These are inherited exactly equally from both parents and roughly equally from grandparents to about 3x great-grandparents. [ 6 ] Therefore, the number of markers (one of two or more known variants in the genome at a particular location – known as Single-nucleotide polymorphisms or SNPs) inherited from a specific ancestor decreases by about half with each successive generation; that is, an individual receives half of their markers from each parent, about a quarter of those markers from each grandparent; about an eighth of those markers from each great-grandparent, etc. Inheritance is more random and unequal from more distant ancestors. [ 7 ] Generally, a genealogical DNA test might test about 700,000 SNPs (specific points in the genome). [ 8 ]
The preparation of a report on the DNA in the sample proceeds in multiple stages:
All major service providers use equipment with microarray chips supplied by Illumina . [ 9 ] The chip determines which SNP locations are tested. Different versions of the chip are used by different service providers. In addition, updated versions of the Illumina chip may test different sets of SNP locations. The list of SNP locations and base pairs at that location is usually available to the customer as "raw data". The raw data can be uploaded to some other genealogical service providers to produce an additional interpretation and matches. For additional genealogical analysis the data can also be uploaded to GEDmatch (a third-party web based set of tools that analyzes raw data from the main service providers). Raw data can also be uploaded to services that provide health risk and trait reports using SNP genotypes. These reports may be free or inexpensive, in contrast to reports provided by DTC testing companies, who charge about double the cost of their genealogy-only services. The implications of individual SNP results can be ascertained from raw data results by referring to SNPedia.com.
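The raw data file itself is typically a small tab-separated text listing, one SNP per line. The exact layout varies by provider; the sketch below assumes a common four-column form (rsid, chromosome, position, genotype) with '#' comment lines, and the rsid in the usage note is a hypothetical placeholder.

```python
# Sketch of reading a genealogical "raw data" export. Column layout
# varies by provider; this assumes tab- or whitespace-separated
# rsid, chromosome, position, genotype, with '#' comment lines.
def read_raw_data(path):
    snps = {}
    with open(path) as fh:
        for line in fh:
            if line.startswith("#") or not line.strip():
                continue  # skip headers, comments and blank lines
            rsid, chrom, pos, genotype = line.split()[:4]
            snps[rsid] = (chrom, int(pos), genotype)
    return snps

# Hypothetical usage: load an export and look up one SNP, whose
# interpretation could then be checked against a resource such as SNPedia.
# snps = read_raw_data("raw_data.txt")
# print(snps.get("rs0000000"))  # placeholder rsid
```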
The major component of an autosomal DNA test is matching other individuals. Where the individual being tested has a number of consecutive SNPs in common with a previously tested individual in the company's database, it can be inferred that they share a segment of DNA at that part of their genomes. [ 10 ] If the segment is longer than a threshold amount set by the testing company, then these two individuals are considered to be a match. Unlike the identification of base pairs, the databases against which the new sample is tested, and the algorithms used to determine a match, are proprietary and specific to each company.
The unit for segments of DNA is the centimorgan (cM). For comparison, a full human genome is about 6500 cM. The shorter the length of a match, the greater the chance that the match is spurious. [ 11 ] An important statistic for subsequent interpretation is the length of the shared DNA (or the percentage of the genome that is shared).
Most companies will show the customers how many cMs they share and across how many segments. From the number of cMs and segments, the relationship between the two individuals can be estimated; however, due to the random nature of DNA inheritance, relationship estimates, especially for distant relatives, are only approximate. Some more distant cousins will not match at all. [ 12 ] Although information about specific SNPs can be used for some purposes (e.g., suggesting likely eye color), the key information is the percentage of DNA shared by two individuals. This can indicate the closeness of the relationship. However, it does not show the roles of the two individuals, e.g., 50% shared suggests a parent/child relationship, but it does not identify which individual is the parent.
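Since the expected share roughly halves with each additional meiosis separating two relatives, rough relationship bands can be read off the shared cM total. The sketch below uses the approximate 6,500 cM whole-genome figure quoted above; the expected-share table and tolerance are illustrative assumptions, not the proprietary thresholds any testing company actually uses.

```python
# Illustrative relationship bands from shared centimorgans. Expected
# sharing halves with each extra meiosis; bands are rough guides only.
FULL_GENOME_CM = 6500  # approximate figure used in this article

EXPECTED_SHARE = {
    "parent/child": 0.50,
    "full siblings": 0.50,
    "grandparent, aunt/uncle or half-sibling": 0.25,
    "first cousins": 0.125,
    "second cousins": 0.03125,
}

def candidate_relationships(shared_cm, tolerance=0.4):
    """Return relationships whose expected share is within
    +/- tolerance (as a fraction of that share) of the observation."""
    observed = shared_cm / FULL_GENOME_CM
    return [label for label, expected in EXPECTED_SHARE.items()
            if abs(observed - expected) <= expected * tolerance]

print(candidate_relationships(3300))  # ~50%: parent/child or full siblings
print(candidate_relationships(850))   # ~13%: first cousins
```

As the text notes, identical percentages cannot distinguish the roles within a pair (a 50% match does not say which person is the parent), and estimates for distant relatives remain approximate.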
Various advanced techniques and analyses can be done on this data. This includes features such as In-common/Shared Matches, [ 13 ] Chromosome Browsers, [ 14 ] and Triangulation. [ 15 ] This analysis is often required if DNA evidence is being used to prove or disprove a specific relationship.
The X-chromosome SNP results are often included in autosomal DNA tests. Both males and females receive an X-chromosome from their mother, but only females receive a second X-chromosome from their father. [ 16 ] The X-chromosome has a distinctive inheritance pattern and can be useful in significantly narrowing down possible ancestor lines compared to autosomal DNA. For example, an X-chromosome match with a male can only have come from his maternal side. [ 17 ] Like autosomal DNA, X-chromosome DNA undergoes random recombination at each generation (except for father-to-daughter X-chromosomes, which are passed down unchanged). There are specialized inheritance charts which describe the possible patterns of X-chromosome DNA inheritance for males and females. [ 18 ]
Some genealogical companies offer autosomal STRs (short tandem repeats). [ 19 ] These are similar to Y-DNA STRs. The number of STRs offered is limited, and results have been used for personal identification, [ 20 ] paternity cases, and inter-population studies. [ 21 ] [ 22 ]
Law enforcement agencies in the US and Europe use autosomal STR data to identify criminals. [ 19 ] [ 23 ]
The mitochondrion is a component of a human cell, and contains its own DNA. Mitochondrial DNA usually has 16,569 base pairs (the number can vary slightly depending on addition or deletion mutations) [ 24 ] and is much smaller than the human genome DNA which has 3.2 billion base pairs. Mitochondrial DNA is transmitted from mother to child, as it is contained in the egg cell. Thus, a direct maternal ancestor can be traced using mtDNA . The transmission occurs with relatively rare mutations compared to autosomal DNA. A perfect match found to another person's mtDNA test results indicates shared ancestry of possibly between 1 and 50 generations ago. [ 2 ] More distant matching to a specific haplogroup or subclade may be linked to a common geographic origin.
The mtDNA, by current conventions, is divided into three regions. They are the coding region (00577-16023) and two Hyper Variable Regions (HVR1 [16024-16569], and HVR2 [00001-00576]). [ 25 ]
The two most common mtDNA tests are a sequence of HVR1 and HVR2 and a full sequence of the mitochondria. Because testing only the HVRs has limited genealogical use, it is increasingly popular and accessible to have a full sequence. The full mtDNA sequence is only offered by Family Tree DNA among the major testing companies [ 26 ] and is somewhat controversial because the coding region DNA may reveal medical information about the test-taker. [ 27 ]
All humans descend in the direct female line from Mitochondrial Eve , a female who lived probably around 150,000 years ago in Africa. [ 28 ] [ 29 ] Different branches of her descendants are different haplogroups. Most mtDNA results include a prediction or exact assertion of one's mtDNA Haplogroup . Mitochondrial haplogroups were greatly popularized by the book The Seven Daughters of Eve , which explores mitochondrial DNA.
Test results are not normally given as a base-by-base list. Instead, results are compared to the Cambridge Reference Sequence (CRS), the mtDNA of a European who was the first person to have their mtDNA published, in 1981 (revised in 1999). [ 30 ] Differences between the CRS and the tester are usually very few, so this is more convenient than listing one's raw results for each base pair.
Note that in HVR1, instead of reporting a position in full, for example 16,111, the leading 16 is often dropped, giving 111 in this example. The letters refer to one of the four bases (A, T, G, C) that make up DNA.
The Y-chromosome is one of the 23rd pair of human chromosomes. Only males have a Y-chromosome, because women have two X chromosomes in their 23rd pair. A man's patrilineal ancestry, or male-line ancestry, can be traced using the DNA on his Y-chromosome (Y-DNA), because the Y-chromosome is transmitted from a father to son nearly unchanged. [ 31 ] A man's test results are compared to another man's results to determine the time frame in which the two individuals shared a most recent common ancestor , or MRCA, in their direct patrilineal lines. If their test results are very close, they are related within a genealogically useful time frame. [ 32 ] A surname project is where many individuals whose Y-chromosomes match collaborate to find their common ancestry.
Women who wish to determine their direct paternal DNA ancestry can ask their father, brother, paternal uncle, paternal grandfather, or a paternal uncle's son (their cousin) to take a test for them.
There are two types of Y-DNA testing: STRs and SNPs. [ 2 ]
Most common is STR (short tandem repeat) testing. A certain section of DNA is examined for a pattern that repeats (e.g. ATCG). The number of times it repeats is the value of the marker. Typical tests examine between 12 and 111 STR markers. STRs mutate fairly frequently. The results of two individuals are then compared to see if there is a match. DNA companies will usually provide an estimate of how closely related two people are, in terms of generations or years, based on the difference between their results. [ 33 ] Because STR values mutate frequently and are not permanent, false matches between two men can occur when they have the same STR values by chance. A true genetic relationship can only be determined using Y-DNA SNP mutations, but sequencing these was historically more time-consuming and expensive.
A person's haplogroup can often be inferred from their STR results, but can be proven only with a Y-chromosome SNP test (Y-SNP test).
A single-nucleotide polymorphism (SNP) is a change to a single nucleotide in a DNA sequence. Typical Y-DNA SNP tests test about 20,000 to 35,000 SNPs. [ 34 ] Getting a SNP test allows a much higher resolution than STRs. It can be used to provide additional information about the relationship between two individuals and to confirm haplogroups. Unique permanent SNP mutations occur in every male line every 83 years on average, [ 35 ] providing excellent time resolution.
All human men descend in the paternal line from a single man dubbed Y-chromosomal Adam , who lived probably between 200,000 and 300,000 years ago. [ 36 ] [ 37 ] A 'family tree' can be drawn showing how men today descend from him. Different branches of this tree are different haplogroups. Most haplogroups can be further subdivided multiple times into sub-clades. Some known sub-clades were founded in the last 1000 years, meaning their timeframe approaches the genealogical era (c.1500 onwards). [ 38 ]
New sub-clades of haplogroups may be discovered when an individual tests, especially if they are non-European. Most significant of these new discoveries was in 2013 when the haplogroup A00 was discovered, which required theories about Y-chromosomal Adam to be significantly revised. The haplogroup was discovered when an African-American man tested STRs at FamilyTreeDNA and his results were found to be unusual. SNP testing confirmed that he does not descend patrilineally from the "old" Y-chromosomal Adam and so a much older man became Y-Chromosomal Adam.
Many companies offer a percentage breakdown by ethnicity or region. Generally the world is divided into about 20–25 regions, and the approximate percentage of DNA inherited from each is stated. This is usually done by comparing the frequency of each autosomal DNA marker tested to many population groups. [ 2 ] The reliability of this type of test is dependent on comparative population size, the number of markers tested, the ancestry informative value of the SNPs tested, and the degree of admixture in the person tested. Earlier ethnicity estimates were often wildly inaccurate, but as companies receive more samples over time, ethnicity estimates have become more accurate. Testing companies such as Ancestry.com will often update their ethnicity estimates, which has caused some controversy among customers when their results change. [ 39 ] [ 40 ] Usually the results at the continental level are accurate, but more specific assertions of the test may turn out to be incorrect. [ citation needed ]
To generate ethnicity estimates from an SNP chip, a testing provider typically phases the tester's genotypes and then assigns chromosome segments to ethnic groups by comparison with a reference dataset.
The dataset used to perform phasing and ethnic group assignment is proprietary to each provider.
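Because both the reference dataset and the assignment algorithm are proprietary, only the arithmetic of the final step can be illustrated generically. The toy sketch below converts per-segment population assignments, however they were obtained, into percentage estimates; every segment length and population label here is invented.

```python
# Toy final step of an ethnicity estimate: turning per-segment
# population assignments into percentages. All values are invented.
from collections import defaultdict

assigned_segments = [            # (population label, segment length in cM)
    ("Northwest Europe", 410.0),
    ("Iberia", 95.0),
    ("Northwest Europe", 620.0),
    ("West Africa", 155.0),
]

totals = defaultdict(float)
for population, length_cm in assigned_segments:
    totals[population] += length_cm

assigned_total = sum(totals.values())
for population, length_cm in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{population}: {100 * length_cm / assigned_total:.1f}%")
```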
The interest in genealogical DNA tests has been linked to both an increase in curiosity about traditional genealogy and to more general personal origins. Those who test for traditional genealogy often utilize a combination of autosomal, mitochondrial, and Y-Chromosome tests. Those with an interest in personal ethnic origins are more likely to use an autosomal test. However, answering specific questions about the ethnic origins of a particular lineage may be best suited to an mtDNA test or a Y-DNA test.
For recent genealogy, exact matching on the mtDNA full sequence is used to confirm a common ancestor on the direct maternal line between two suspected relatives. Because mtDNA mutations are very rare, a nearly perfect match is not usually considered relevant to the most recent 1 to 16 generations. [ 42 ] In cultures lacking matrilineal surnames to pass down, neither relative above is likely to have as many generations of ancestors in their matrilineal information table as in the above patrilineal or Y-DNA case: for further information on this difficulty in traditional genealogy , due to lack of matrilineal surnames (or matrinames), see Matriname . [ 43 ] However, the foundation of testing is still two suspected descendants of one person. This hypothesize and test DNA pattern is the same one used for autosomal DNA and Y-DNA.
As discussed above, autosomal tests usually report the ethnic proportions of the individual. These attempt to measure an individual's mixed geographic heritage by identifying particular markers, called ancestry informative markers or AIM, that are associated with populations of specific geographical areas. Geneticist Adam Rutherford has written that these tests "don’t necessarily show your geographical origins in the past. They show with whom you have common ancestry today." [ 44 ]
The haplogroups determined by Y-DNA and mtDNA tests are often unevenly geographically distributed. Many direct-to-consumer DNA tests described this association to infer the test-taker's ancestral homeland. [ 4 ] Most tests describe haplogroups according to their most frequently associated continent (e.g., a "European haplogroup"). [ 4 ] When Leslie Emery and collaborators performed a trial of mtDNA haplogroups as a predictor of continental origin on individuals in the Human Genetic Diversity Panel (HGDP) and 1000 Genomes (1KGP) datasets, they found that only 14 of 23 haplogroups had a success rate above 50% among the HGDP samples, as did "about half" of the haplogroups in the 1KGP. [ 4 ] The authors concluded that, for most people, "mtDNA-haplogroup membership provides limited information about either continental ancestry or continental region of origin." [ 4 ]
Y-DNA and mtDNA testing may be able to determine with which peoples in present-day Africa a person shares a direct line of part of his or her ancestry, but patterns of historic migration and historical events cloud the tracing of ancestral groups. Due to joint long histories in the US, approximately 30% of African American males have a European Y-Chromosome haplogroup. [ 45 ] Approximately 58% of African Americans have at least the equivalent of one great-grandparent (13%) of European ancestry. Only about 5% have the equivalent of one great-grandparent of Native American ancestry. By the early 19th century, substantial families of Free Persons of Color had been established in the Chesapeake Bay area who were descended from free people during the colonial period; most of those have been documented as descended from white men and African women (servant, slave or free). Over time various groups married more within mixed-race, black or white communities. [ 46 ]
According to authorities like Salas, nearly three-quarters of the ancestors of African Americans taken in slavery came from regions of West Africa. The African-American movement to discover and identify with ancestral tribes has burgeoned since DNA testing became available. African Americans usually cannot easily trace their ancestry during the years of slavery through surname research , census and property records, and other traditional means. Genealogical DNA testing may provide a tie to regional African heritage.
Melungeons are one of numerous multiracial groups in the United States with origins wrapped in myth. The historical research of Paul Heinegg has documented that many of the Melungeon groups in the Upper South were descended from mixed-race people who were free in colonial Virginia and the result of unions between the Europeans and Africans. They moved to the frontiers of Virginia, North Carolina, Kentucky and Tennessee to gain some freedom from the racial barriers of the plantation areas. [ 47 ] Several efforts, including a number of ongoing studies, have examined the genetic makeup of families historically identified as Melungeon. Most results point primarily to a mixture of European and African, which is supported by historical documentation. Some may have Native American heritage as well. Though some companies provide additional Melungeon research materials with Y-DNA and mtDNA tests, any test will allow comparisons with the results of current and past Melungeon DNA studies.
The pre-Columbian indigenous people of the United States are called "Native Americans" in American English. [ 48 ] Autosomal testing, Y-DNA, and mtDNA testing can be conducted to determine the ancestry of Native Americans . A mitochondrial Haplogroup determination test based on mutations in Hypervariable Region 1 and 2 may establish whether a person's direct female line belongs to one of the canonical Native American Haplogroups, A , B , C , D or X . The vast majority of Native American individuals belong to one of the five identified mtDNA Haplogroups . Thus, being in one of those groups provides evidence of potential Native American descent. However, DNA ethnicity results cannot be used as a substitute for legal documentation. [ 49 ] Native American tribes have their own requirements for membership, often based on at least one of a person's ancestors having been included on tribal-specific Native American censuses (or final rolls) prepared during treaty -making, relocation to reservations or apportionment of land in the late 19th century and early 20th century. One example is the Dawes Rolls .
The Cohanim (or Kohanim) is a patrilineal priestly line of descent in Judaism . According to the Bible , the ancestor of the Cohanim is Aaron , brother of Moses . Many believe that descent from Aaron is verifiable with a Y-DNA test: the first published study in genealogical Y-Chromosome DNA testing found that a significant percentage of Cohens had distinctively similar DNA, rather more so than general Jewish or Middle Eastern populations. These Cohens tended to belong to Haplogroup J , with Y-STR values clustered unusually closely around a haplotype known as the Cohen Modal Haplotype (CMH). This could be consistent with a shared common ancestor, or with the hereditary priesthood having originally been founded from members of a single closely related clan.
Nevertheless, the original studies tested only six Y-STR markers, which is considered a low-resolution test. In response to the low resolution of the original 6-marker CMH, the testing company FTDNA released a 12-marker CMH signature that was more specific to the large closely related group of Cohens in Haplogroup J1.
A further academic study published in 2009 examined more STR markers and identified a more sharply defined SNP haplogroup, J1e* (now J1c3, also called J-P58*) for the J1 lineage. The research found "that 46.1% of Kohanim carry Y chromosomes belonging to a single paternal lineage (J-P58*) that likely originated in the Near East well before the dispersal of Jewish groups in the Diaspora. Support for a Near Eastern origin of this lineage comes from its high frequency in our sample of Bedouins , Yemenis (67%), and Jordanians (55%) and its precipitous drop in frequency as one moves away from Saudi Arabia and the Near East (Fig. 4). Moreover, there is a striking contrast between the relatively high frequency of J-58* in Jewish populations (≈20%) and Kohanim (≈46%) and its vanishingly low frequency in our sample of non-Jewish populations that hosted Jewish diaspora communities outside of the Near East." [ 50 ]
Recent phylogenetic research for haplogroup J-M267 placed the "Y-chromosomal Aaron" in a subhaplogroup of J-L862, L147.1 (age estimate 5631–6778 yBP): YSC235>PF4847/CTS11741>YSC234>ZS241>ZS227>Z18271 (age estimate 2731 yBP). [ 51 ]
Genealogical DNA tests have become popular due to the ease of testing at home and their usefulness in supplementing genealogical research . Genealogical DNA tests allow an individual to determine with high accuracy whether he or she is related to another person within a certain time frame, or with certainty that he or she is not related. DNA tests are perceived as more scientific, conclusive and expeditious than searching the civil records. However, they are limited by restrictions on the lines that may be studied. Civil records are only ever as accurate as the individuals who provided or wrote the information.
Y-DNA testing results are normally stated as probabilities: For example, with the same surname a perfect 37/37 marker test match gives a 95% likelihood of the most recent common ancestor (MRCA) being within 8 generations, [ 52 ] while a 111 of 111 marker match gives the same 95% likelihood of the MRCA being within only 5 generations back. [ 53 ]
As presented above in mtDNA testing , if a perfect match is found, the mtDNA test results can be helpful. In some cases, research according to traditional genealogy methods encounters difficulties due to the lack of regularly recorded matrilineal surname information in many cultures (see Matrilineal surname ). [ 43 ]
Autosomal DNA combined with genealogical research has been used by adoptees to find their biological parents, [ 54 ] to find the name and family of unidentified bodies, [ 55 ] [ 56 ] and by law enforcement agencies to apprehend criminals [ 57 ] [ 58 ] (for example, the Contra Costa County District Attorney's office used the "open-source" genetic genealogy site GEDmatch to find relatives of the suspect in the Golden State Killer case. [ 59 ] [ 60 ] ). The Atlantic magazine commented in 2018 that "Now, the floodgates are open. ... a small, volunteer-run website, GEDmatch.com, has become ... the de facto DNA and genealogy database for all of law enforcement." [ 61 ] Family Tree DNA announced in February 2019 it was allowing the FBI to access its DNA data for cases of murder and rape. [ 62 ] However, in May 2019 GEDmatch initiated stricter rules for accessing their autosomal DNA database [ 63 ] and Family Tree DNA shut down their Y-DNA database ysearch.org, making it more difficult for law enforcement agencies to solve cases. [ 64 ]
Common concerns about genealogical DNA testing are cost and privacy issues . [ 65 ] Some testing companies, such as 23andMe and Ancestry , [ 66 ] retain samples and results for their own use without a privacy agreement with subjects. [ 67 ] [ 68 ]
Autosomal DNA tests can identify relationships but they can be misinterpreted. [ 69 ] [ 70 ] [ 71 ] For example, transplants of stem cells or bone marrow will produce matches with the donor. In addition, identical twins (who have identical DNA) can give unexpected results. [ 72 ]
Testing of the Y-DNA lineage from father to son may reveal complications, due to unusual mutations, secret adoptions, and non-paternity events (i.e., that the perceived father in a generation is not the father indicated by written birth records). [ 73 ] According to the Ancestry and Ancestry Testing Task Force of the American Society of Human Genetics , autosomal tests cannot detect "large portions" of DNA from distant ancestors because it has not been inherited. [ 74 ]
With the increasing popularity of the use of DNA tests for ethnicity estimates, uncertainties and errors in those estimates are a drawback for genetic genealogy. While ethnicity estimates at the continental level should be accurate (with the possible exception of East Asia and the Americas), sub-continental estimates, especially in Europe, are often inaccurate. Customers may be misinformed about the uncertainties and errors of the estimates. [ 75 ]
Some have recommended government or other regulation of ancestry testing to ensure its performance to an agreed standard. [ 76 ]
A number of law enforcement agencies took legal action to compel genetic genealogy companies to release genetic information that could match cold case crime victims [ 77 ] or perpetrators. A number of companies fought the requests. [ 78 ]
The popular consciousness of DNA testing and of DNA generally is subject to a number of misconceptions involving the reliability of testing, the nature of the connections with one's ancestors, the connection between DNA and personal traits, etc. [ 79 ]
Though genealogical DNA tests are not designed mainly for medical purposes, autosomal DNA tests can be used to analyze the probability of hundreds of heritable medical conditions, [ 80 ] although the results are complex to understand and may confuse a non-expert. 23andMe provides medical and trait information from their genealogical DNA test [ 81 ] and for a fee the Promethease web site analyses genealogical DNA test data from Family Tree DNA, 23andMe, or AncestryDNA for medical information. [ 82 ] Promethease and its research-paper-crawling database SNPedia have received criticism for technical complexity and a poorly defined "magnitude" scale that causes misconceptions, confusion and panic among users. [ 83 ]
The testing of full mtDNA and Y-DNA sequences is still somewhat controversial as it may reveal even more medical information. For example, a correlation exists between a lack of the Y-DNA marker DYS464 and infertility , and between mtDNA haplogroup H and protection from sepsis . Certain haplogroups have been linked to longevity in some population groups. [ 84 ] [ 85 ] The field of linkage disequilibrium, the unequal association of genetic disorders with certain mitochondrial lineages, is in its infancy, but those mitochondrial mutations that have been linked are searchable in the genome database Mitomap. [ 86 ] Family Tree DNA's MtFull Sequence test analyses the full mtDNA genome [ 26 ] and the National Human Genome Research Institute operates the Genetic And Rare Disease Information Center, [ 87 ] which can assist consumers in identifying an appropriate screening test and help locate a nearby medical center that offers such a test.
The first company to provide direct-to-consumer genealogical DNA tests was the now defunct GeneTree . However, it did not offer multi-generational genealogy tests. In fall 2001, GeneTree sold its assets to Salt Lake City-based Sorenson Molecular Genealogy Foundation (SMGF) which originated in 1999. [ 88 ] While in operation, SMGF provided free Y-chromosome and mitochondrial DNA tests to thousands. [ 89 ] Later, GeneTree returned to genetic testing for genealogy in conjunction with the Sorenson parent company and eventually was part of the assets acquired in the Ancestry.com buyout of SMGF in 2012. [ 90 ] [ 91 ]
In 2000, Family Tree DNA , founded by Bennett Greenspan and Max Blankfeld, was the first company dedicated to direct-to-consumer testing for genealogy research. They initially offered eleven-marker Y-Chromosome STR tests and HVR1 mitochondrial DNA tests. They originally tested in partnership with the University of Arizona. [ 92 ] [ 93 ] [ 94 ] [ 95 ] [ 96 ]
In 2007, 23andMe was the first company to offer saliva -based direct-to-consumer genetic testing . [ 97 ] It was also the first to implement the use of autosomal DNA for ancestry testing, which other major companies (e.g., Ancestry, Family Tree DNA, and MyHeritage ) now use. [ 98 ] [ 99 ]
MyHeritage launched its genetic testing service in 2016, allowing users to use cheek swabs to collect samples. [ 100 ] In 2019, new analysis tools were introduced: autoclusters (grouping all matches visually into clusters) [ 101 ] and family tree theories (suggesting conceivable relations between DNA matches by combining several MyHeritage trees as well as the Geni global family tree). [ 102 ]
Living DNA , founded in 2015, also provides a genetic testing service. Living DNA uses SNP chips to provide reports on autosomal ancestry, Y, and mtDNA ancestry. [ 103 ] [ 104 ] Living DNA provides detailed reports on ancestry from the UK as well as detailed Y chromosome and mtDNA reports. [ 105 ] [ 106 ] [ 107 ]
In 2019 it was estimated that the large genealogical testing companies held about 26 million DNA profiles. [ 108 ] [ 109 ] Many customers transferred their test results for free to multiple testing sites, and also to genealogical services such as Geni.com and GEDmatch . GEDmatch said in 2018 that about half of their one million profiles were from the USA. [ 109 ]
Some genealogy software programs – such as Family Tree Maker, Legacy Family Tree (Deluxe Edition) and the Swedish program Genney – allow recording DNA marker test results. This allows for tracking of both Y-chromosome and mtDNA tests, and recording results for relatives. [ 110 ] | https://en.wikipedia.org/wiki/Genealogical_DNA_test |
Several genealogical numbering systems have been widely adopted for presenting family trees and pedigree charts in text format.
Ahnentafel , also known as the Eytzinger Method , Sosa Method , and Sosa-Stradonitz Method , allows for the numbering of ancestors beginning with a descendant. This system allows one to derive an ancestor's number without compiling the complete list, and allows one to derive an ancestor's relationship based on their number. The number of a person's father is twice their own number, and the number of a person's mother is twice their own, plus one. For instance, if John Smith is 10, his father is 20, his mother is 21, and his daughter is 5.
In order to readily have the generation stated for a certain person, the Ahnentafel numbering may be preceded by the generation. This method's usefulness becomes apparent when applied further back in the generations: e.g., 08-146 is a male preceding the subject by 7 (8−1) generations. This ancestor was the father of a woman (146/2=73) (in the genealogical line of the subject), who was the mother of a man (73/2=36.5), further down the line the father of a man (36/2=18), father of a woman (18/2=9), mother of a man (9/2=4.5), father of the subject's father (4/2=2). Hence, 08-146 is the subject's father's father's mother's father's father's mother's father.
The atree or Binary Ahnentafel method is based on the same numbering of nodes, but first converts the numbers to binary notation and then converts each 0 to M (for Male) and each 1 to F (for Female). The first character of each code (shown as X in the table below) is M if the subject is male and F if the subject is female. For example 5 becomes 101 and then FMF (or MMF if the subject is male). An advantage of this system is easier understanding of the genealogical path.
The first 15 codes in each system, identifying individuals in four generations, are as follows:
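The table itself appears to be missing from this copy, but both numberings are mechanical and can be generated programmatically. The sketch below decodes an Ahnentafel number into its father/mother path and produces the corresponding atree code; it reproduces the 08-146 walkthrough and the FMF example given above.

```python
# Decoding sketch for the Ahnentafel and Binary Ahnentafel (atree)
# systems described above.
def ahnentafel_path(n):
    """Father/mother steps from the subject up to ancestor number n."""
    bits = bin(n)[2:]   # e.g. 146 -> '10010010'
    return ["father" if b == "0" else "mother" for b in bits[1:]]

def atree_code(n, subject_sex="M"):
    """Binary Ahnentafel: 0 -> M, 1 -> F; first char is the subject's sex."""
    bits = bin(n)[2:]
    return subject_sex + "".join("M" if b == "0" else "F" for b in bits[1:])

print("'s ".join(ahnentafel_path(146)))
# -> father's father's mother's father's father's mother's father
print(len(ahnentafel_path(146)))       # -> 7 generations back
print(atree_code(5, subject_sex="F"))  # -> FMF, as in the example above

for n in range(1, 16):  # the first 15 codes in each system
    print(n, atree_code(n), "'s ".join(ahnentafel_path(n)) or "subject")
```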
Genealogical writers sometimes choose to present ancestral lines by carrying back individuals with their spouses or single families generation by generation. The siblings of the individual or individuals studied may or may not be named for each family. This method is most popular in simplified single surname studies, however, allied surnames of major family branches may be carried back as well. In general, numbers are assigned only to the primary individual studied in each generation. [ 1 ]
The Register System uses both common numerals (1, 2, 3, 4) and Roman numerals (i, ii, iii, iv). The system is organized by generation, i.e., generations are grouped separately.
The system was created in 1870 for use in the New England Historical and Genealogical Register published by the New England Historic Genealogical Society based in Boston, Massachusetts . Register Style , of which the numbering system is part, is one of two major styles used in the U.S. for compiling descending genealogies. (The other being the NGSQ System.) [ 2 ]
The NGSQ System gets its name from the National Genealogical Society Quarterly published by the National Genealogical Society headquartered in Falls Church, Virginia , which uses the method in its articles. It is sometimes called the "Record System" or the "Modified Register System" because it derives from the Register System. The most significant difference between the NGSQ and the Register Systems is in the method of numbering for children who are not carried forward into future generations: The NGSQ System assigns a number to every child, whether or not that child is known to have progeny , and the Register System does not. Other differences between the two systems are mostly stylistic. [ 1 ]
The Henry System is a descending system created by Reginald Buchanan Henry for a genealogy of the families of the presidents of the United States that he wrote in 1935. [ 3 ] It can be organized either by generation or not. The system begins with 1. The oldest child becomes 11, the next child is 12, and so on. The oldest child of 11 is 111, the next 112, and so on. The system allows one to derive an ancestor's relationship based on their number. For example, 621 is the first child of 62, who is the second child of 6, who is the sixth child of his parents.
In the Henry System, when there are more than nine children, X is used for the 10th child, A is used for the 11th child, B is used for the 12th child, and so on. In the Modified Henry System, when there are more than nine children, numbers greater than nine are placed in parentheses.
The d'Aboville System is a descending numbering method developed by Jacques d'Aboville in 1940, widely used in France , that is very similar to the Henry System. [ 4 ] It can be organized either by generation or not. It differs from the Henry System in that periods are used to separate the generations and no changes in numbering are needed for families with more than nine children. [ 5 ] For example:
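The example itself appears to have been lost from this copy; an illustration of the scheme with invented individuals would run: 1 for the progenitor; 1.1 and 1.2 for the first and second children; 1.2.1 for the first child of 1.2; and 1.2.10 for a tenth child of 1.2, with no change of notation needed beyond nine children. The generation of any individual is simply the number of period-separated parts in their number.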
The Meurgey de Tupigny System is a simple numbering method used for single surname studies and hereditary nobility line studies developed by Jacques Meurgey de Tupigny [ Wikidata ] of the National Archives of France , published in 1953. [ 6 ]
Each generation is identified by a Roman numeral (I, II, III, ...), and each child and cousin in the same generation carrying the same surname is identified by an Arabic numeral. [ 7 ] The numbering system usually appears on or in conjunction with a pedigree chart. Example:
The de Villiers/Pama System gives letters to generations, and then numbers children in birth order. For example:
In this system, b2.c3 is the third child of the second child, [ 8 ] and is one of the progenitor's grandchildren.
The de Villiers/Pama system is the standard for genealogical works in South Africa . It was developed in the 19th century by Christoffel Coetzee de Villiers and used in his three volume Geslachtregister der Oude Kaapsche Familien ( Genealogies of Old Cape Families ). The system was refined by Dr. Cornelis (Cor) Pama , one of the founding members of the Genealogical Society of South Africa . [ 9 ]
Bibby (2012) [ 10 ] proposed a literal system to trace relationships between members of the same family. This used the following:
f = father
m = mother
so = son
d = daughter
b = brother
si = sister
h = husband
w = wife
c = cousin.
By concatenating these symbols, more distant relationships can be summarised, e.g.:
ff = father’s father
fm = father’s mother
mf = mother’s father.
We interpret “brother” and “sister” to mean “same father, same mother”, i.e.:
b = fso and mso
si = fd and md.
Some cases need careful parsing, e.g. fso means “father’s son”. This could represent
(1) the person himself, or
(2) a brother, or
(3) a half-brother (same father, different mother).
Very often, terms are synonymous. So m (mother) and fw (father’s wife) might refer to the same person. Generally m might be preferred – leaving fw to mean a father’s wife who is not the mother.
Similarly, c (cousin) might mean fbso or fbd or fsiso or fsid, or indeed mbso or mbd or msiso or msid, or several other combinations especially if grandfather married several times. Brother-in-law etc. is similarly ambiguous.
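Because the symbols concatenate directly, such codes are easy to expand mechanically. Below is a small sketch using only the symbol table given above; ambiguous readings (such as fso) are left unresolved, as discussed.

```python
# Expanding Bibby-style relationship codes into English, using the
# symbol table from the text above.
SYMBOLS = {"f": "father", "m": "mother", "so": "son", "d": "daughter",
           "b": "brother", "si": "sister", "h": "husband", "w": "wife",
           "c": "cousin"}

def expand(code):
    """Tokenize a code such as 'fbso' and render it in English."""
    words, i = [], 0
    while i < len(code):
        for length in (2, 1):  # try 'so'/'si' before single letters
            token = code[i:i + length]
            if len(token) == length and token in SYMBOLS:
                words.append(SYMBOLS[token])
                i += length
                break
        else:
            raise ValueError(f"unrecognized symbol in {code!r} at {i}")
    return "'s ".join(words)

print(expand("ff"))    # -> father's father
print(expand("fbso"))  # -> father's brother's son (one kind of cousin)
print(expand("msid"))  # -> mother's sister's daughter
```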
Other genealogical notations have been proposed, of course. This one is not claimed to be optimal, but it has been found convenient. In Bibby's usage , the “home” person is Karl Pearson, and all relationships are relative to him. So f is his father, and m is his mother, etc., while fw is Karl’s father’s second wife (who is not his mother). | https://en.wikipedia.org/wiki/Genealogical_numbering_systems |
Genencor is a biotechnology company based in Palo Alto , CA and a subsidiary of IFF . [ 1 ] Genencor is a producer of industrial enzymes and low-priced bulk protein. The name Genencor originates with Genencor, Inc. , the original joint venture between Genentech and Corning Incorporated , which was founded in 1982. It is considered to have pioneered the field of industrial biotechnology , as distinct from traditional applications of biotechnology to health care and agriculture.
In 2005 Genencor was acquired by Danisco . [ 2 ]
In 2008 Genencor entered a joint venture with DuPont , called DuPont Danisco Cellulosic Ethanol LLC , to develop and commercialize low-cost technology for the production of cellulosic ethanol . In 2008, Genencor and Goodyear announced they were working to develop BioIsoprene.
In 2011, DuPont acquired Danisco for $6.3 billion. [ 3 ]
In 2021, portions of DuPont including the Genencor division were acquired by International Flavors & Fragrances . [ 4 ]
Genencor has received several awards. [ citation needed ]
| https://en.wikipedia.org/wiki/Genencor
Genentech, Inc. is an American biotechnology corporation headquartered in South San Francisco, California . It operates as an independent subsidiary of holding company Roche . Genentech Research and Early Development operates as an independent center within Roche. [ 6 ] Historically, the company is regarded as the world's first biotechnology company. [ 7 ]
As of July 2021, Genentech employed 13,539 people. [ 8 ]
The company was founded in 1976 by venture capitalist Robert A. Swanson and biochemist Herbert Boyer . [ 9 ] [ 10 ] Boyer is considered to be a pioneer in the field of recombinant DNA technology. In 1973, Boyer and his colleague Stanley Norman Cohen demonstrated that restriction enzymes could be used as "scissors" to cut DNA fragments of interest from one source, to be ligated into a similarly cut plasmid vector . [ 11 ] While Cohen returned to the laboratory in academia, Swanson contacted Boyer to found the company. [ 9 ] [ 12 ] Boyer worked with Arthur Riggs and Keiichi Itakura from the Beckman Research Institute , and the group became the first to successfully express a human gene in bacteria when they produced the hormone somatostatin in 1977. [ 13 ] David Goeddel and Dennis Kleid were then added to the group, and contributed to its success with synthetic human insulin in 1978.
In 1990 F. Hoffmann-La Roche AG acquired a majority stake in Genentech. [ 14 ]
In 2006 Genentech acquired Tanox in its first acquisition deal. Tanox had started developing Xolair and development was completed in collaboration with Novartis and Genentech; the acquisition allowed Genentech to keep more of the revenue. [ 15 ]
In March 2009, Roche fully acquired Genentech and made it a wholly-owned subsidiary by buying all remaining shares it did not already control for approximately $46.8 billion. [ 16 ] [ 17 ] [ 18 ]
In July 2014, Genentech/Roche acquired Seragon for its pipeline of small-molecule cancer drug candidates for $725 million cash upfront, with an additional $1 billion of payments dependent on successful development of products in Seragon's pipeline. [ 19 ]
Genentech is a pioneering research-driven biotechnology company [ 14 ] that has continued to conduct R&D internally as well as through collaborations. [ 20 ] [ 21 ]
Genentech's research collaborations include:
Genentech's corporate headquarters are in South San Francisco, California ( 37°39′25″N 122°22′44″W ), with additional manufacturing facilities in Vacaville, California ; Oceanside, California ; and Hillsboro, Oregon . In March 2024, it was announced that the Swiss pharmaceutical company Lonza had acquired the Vacaville site from parent company Roche for $1.2 billion. [ 31 ]
In December 2006, Genentech sold its Porriño , Spain , facility to Lonza and acquired an exclusive right to purchase Lonza's mammalian cell culture manufacturing facility under construction in Singapore . In June 2007, Genentech began the construction and development of an E. coli manufacturing facility, also in Singapore, for the worldwide production of Lucentis ( ranibizumab injection) bulk drug substance. [ citation needed ]
In 2023, the company announced plans to close down its manufacturing facility in South San Francisco, while expanding its manufacturing capabilities in Oceanside. [ 32 ] [ 33 ]
Genentech is a donor to the Center for Health Care Strategies, a non-governmental organization that lobbies the United States Government on issues related to Medicaid . [ 34 ]
Genentech Inc Political Action Committee is a U.S. Federal Political Action Committee (PAC), created to "aggregate contributions from members or employees and their families to donate to candidates for federal office". [ 35 ]
In November 1999, Genentech agreed to pay the University of California, San Francisco $200 million to settle a nine-year-old patent dispute. In 1990, UCSF sued Genentech for $400 million in compensation for alleged theft of technology developed at the university and covered by a 1982 patent. [ citation needed ] Genentech claimed that they developed Protropin (recombinant somatotropin /human growth hormone) independently of UCSF. A jury ruled that the university's patent was valid in July 1999, but was unable to decide whether Protropin was based upon UCSF research or not. Protropin, a drug used to treat dwarfism , was Genentech's first marketed drug, and its $2 billion in sales contributed greatly to the company's position as an industry leader. [ citation needed ] The settlement was to be divided as follows: $30 million to the University of California General Fund , $85 million to the three inventors and two collaborating scientists, $50 million towards a new teaching and research campus for UCSF, and $35 million to support university-wide research. [ 36 ]
In 2009, The New York Times reported that Genentech's talking points on health care reform appeared verbatim in the official statements of several Members of Congress during the national health care reform debate. [ 37 ] Two U.S. Representatives , Joe Wilson and Blaine Luetkemeyer , both issued the same written statements: "One of the reasons I have long supported the U.S. biotechnology industry is that it is a homegrown success story that has been an engine of job creation in this country. Unfortunately, many of the largest companies that would seek to enter the biosimilar market have made their money by outsourcing their research to foreign countries like India." The statement was originally drafted by lobbyists for Genentech. | https://en.wikipedia.org/wiki/Genentech |
Genera is a commercial operating system and integrated development environment for Lisp machines created by Symbolics . It is essentially a fork of an earlier operating system originating on the Massachusetts Institute of Technology (MIT) AI Lab's Lisp machines, which Symbolics had used in common with Lisp Machines , Inc. (LMI), and Texas Instruments (TI). Genera was also sold by Symbolics as Open Genera , which runs Genera on computers based on a Digital Equipment Corporation (DEC) Alpha processor using Tru64 UNIX . In 2021 a new version was released as Portable Genera , which runs on Tru64 UNIX on Alpha, Linux on x86-64 and Arm64, and macOS on x86-64 and Arm64 ( Apple Silicon M series). It is released and licensed as proprietary software .
Genera is an example of an object-oriented operating system based on the programming language Lisp .
Genera supports incremental and interactive development of complex software using a mix of programming styles with extensive support for object-oriented programming .
The Lisp Machine operating system was written in Lisp Machine Lisp . The Lisp machine was a one-user workstation initially targeted at software developers for artificial intelligence (AI) projects. [ 1 ] The system had a large bitmap screen, a mouse, a keyboard, a network interface, a disk drive, and slots for expansion. The operating system supported this hardware and provided (among other features):
This was already a complete one-user Lisp-based operating system and development environment.
The MIT Lisp machine operating system was developed from the mid-1970s to the early 1980s.
In 2006, the source code for this Lisp machine operating system from MIT was released as free and open-source software . [ 2 ]
Symbolics developed new Lisp machines and published the operating system under the name Genera . The latest version is 8.5. Symbolics Genera was developed from the early 1980s to the early 1990s. In the final years, development entailed mostly patches, with very little new functionality.
Symbolics developed Genera based on this foundation of the MIT Lisp machine operating system. It sold the operating system and layered software . Some of the layered software was integrated into Genera in later releases. Symbolics improved the operating system software from the original MIT Lisp machine and expanded it. The Genera operating system was only available for Symbolics Lisp machines and the Open Genera virtual machine .
Symbolics Genera has many features and supports all the versions of various hardware that Symbolics built over its life. Its source code is more than a million lines; the number depends on the release and how much software is installed. Symbolics Genera was published on magnetic tape and CD-ROM . The release of the operating system also provided most of the source code of the operating system and its applications. The user has free access to all parts of the running operating system and can write changes and extensions. The source code of the operating system is divided into systems . These systems bundle sources, binaries and other files. The system construction toolkit (SCT) maintains the dependencies, the components and the versions of all the systems. A system has two numbers: a major and a minor version number. The major version number counts the number of full constructions of a system. The minor version counts the number of patches to that system. A patch is a file that can be loaded to fix problems or provide extensions to a particular version of a system.
Symbolics developed a version named Open Genera , which included a virtual machine that enabled executing Genera on DEC Alpha based workstations, plus several Genera extensions and applications that were sold separately (like the Symbolics S-Graphics suite). They also made a new operating system named Minima , written in Common Lisp , for embedded uses. The latest version is Portable Genera , which has the virtual machine ported to x86-64 , Arm64 and Apple M1 processors, in addition to the DEC Alpha processor. The virtual machine runs under Linux and macOS , in addition to Tru64 UNIX .
The original Lisp machine operating system was developed in Lisp Machine Lisp , using the Flavors object-oriented extension to that Lisp. Symbolics provided a successor to Flavors named New Flavors . Later Symbolics also supported Common Lisp and the Common Lisp Object System (CLOS). Symbolics Common Lisp then became the default Lisp dialect for writing software with Genera. The operating system software was written mostly in Lisp Machine Lisp (named ZetaLisp ) and Symbolics Common Lisp, and both Lisp dialects are provided by Genera. Parts of the software used Flavors, New Flavors, or the Common Lisp Object System. Some of the older parts of the Genera operating system have been rewritten in Symbolics Common Lisp and the Common Lisp Object System, but many parts of the operating system remain written in ZetaLisp and Flavors (or New Flavors).
The early versions of Symbolics Genera were built with the original graphical user interface (GUI) windowing system of the Lisp machine operating system. Symbolics then developed a radically new windowing system named Dynamic Windows with a presentation-based user interface . [ 3 ] This window system was introduced with Genera 7 in 1986. [ 4 ] Many Genera applications subsequently used Dynamic Windows for their user interface . Eventually there was a move to port parts of the window system to other vendors' Common Lisp implementations as the Common Lisp Interface Manager (CLIM). Versions of CLIM have been available (among others) for Allegro Common Lisp , LispWorks , and Macintosh Common Lisp . An open source version is available ( McCLIM ).
Dynamic Windows uses typed objects for all output to the screen. All displayed information keeps its connection to the objects displayed ( output recording ). This works for both textual and graphical output. At runtime the applicable operations to these objects are computed based on the class hierarchy and the available operations ( commands ). Commands are organized in hierarchical command tables with typed parameters. Commands can be entered with the mouse (making extensive use of mouse chording ), keystrokes, and with a command line interface. All applications share one command line interpreter implementation, which adapts to various types of usage. The graphical abilities of the window system are based on the PostScript graphics model.
The user interface is mostly in monochrome ( black-and-white ) since that was what the hardware console typically provided. But extensive support exists for color, using color frame buffers or X Window System (X11) servers with color support. The activities (applications) use the whole screen with several panes, though windows can also be smaller. The layout of these activity windows adapts to different screen sizes. Activities can also switch between different pane layouts.
Genera provides a system menu to control windows, switch applications, and operate the window system. Many features of the user interface (switching between activities, creating activities, stopping and starting processes, and much more) can also be controlled with keyboard commands.
The Dynamic Lisp Listener is an example of a command line interface with full graphics abilities and support for mouse-based interaction. It accepts Lisp expressions and commands as input. The output is mouse sensitive. The Lisp listener can display forms to input data for the various built-in commands.
The user interface provides extensive online help and context sensitive help , completion of choices in various contexts.
Genera supports fully hyperlinked online documentation. The documentation is read with the Document Examiner , an early hypertext browser. The documentation is based on small reusable documentation records that can also be displayed in various contexts with the Editor and the Lisp Listener. The documentation is organized in books and sections. The books were also provided in printed versions with the same contents as the online documentation. The documentation database information is delivered with Genera and can be modified with incremental patches.
The documentation was created with a separate application that was not shipped with Genera: Symbolics Concordia . Concordia provides an extension to the Zmacs editor for editing documentation records, a graphics editor and a page previewer.
The documentation provides user guides, installation guidelines and references of the various Lisp constructs and libraries.
The markup language is based on the Scribe markup language and is also usable by the developer.
Genera supports printing to PostScript printers, provides a printing queue, and also includes a PostScript interpreter (written in Lisp).
Genera also has support for various network protocols and applications using those. It has extensive support for TCP/IP .
Genera supports one-processor machines with several threads (called processes ).
Genera supports several different types of garbage collection (GC): full GC, in-place GC, incremental GC, and ephemeral GC. The ephemeral collector uses only physical memory and uses the memory management unit to get information about changed pages in physical memory. The collector uses generations and the virtual memory is divided into areas. Areas can contain objects of certain types (strings, bitmaps, pathnames, ...), and each area can use different memory management mechanisms.
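Generational collection of this kind survives in modern runtimes. As a loose analogy only (CPython's collector is unrelated to Genera's ephemeral GC, and the numbers shown are illustrative), Python's gc module exposes its three generations directly:

```python
import gc

# CPython tracks container objects in three generations; young objects are
# scanned often, and survivors are promoted to older generations.
print(gc.get_count())      # live allocation counts per generation, e.g. (532, 4, 1)
print(gc.get_threshold())  # collection thresholds, by default (700, 10, 10)

gc.collect(0)  # collect only the youngest generation (cheap, frequent)
gc.collect()   # full collection across all generations (expensive, rare)
```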
Genera implements two file systems : the FEP file system for large files and the Lisp Machine File System (LMFS) optimized for many small files. These systems also maintain different versions of files: if a file is modified, Genera still keeps the old versions. Genera also provides read and write access to other local and remote file systems, including NFS, FTP, HFS, CD-ROMs, and tape drives .
Genera supports netbooting.
Genera provides a client for the Statice object database from Symbolics.
Genera makes extensive use of the condition system (exception handling) to handle all kinds of runtime errors and is able to recover from many of these errors. For example, it allows retrying network operations if a network connection has a failure; the application code will keep running. When errors occur, users are presented a menu of restarts (abort, retry, continue options) that are specific to the error signalled.
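Python has no direct counterpart to Lisp restarts, but the retry behaviour described above can be approximated with an explicit loop. The sketch below is an illustrative analogy only; the function and exception names are invented and this is not Genera code:

```python
import time

class NetworkError(Exception):
    """Stands in for a transient network failure."""

def with_retries(operation, attempts=3, delay=1.0):
    """Retry a failing operation, roughly like choosing a 'retry' restart."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except NetworkError as err:
            if attempt == attempts:
                raise  # out of retries: propagate, akin to an 'abort' restart
            print(f"attempt {attempt} failed ({err}); retrying...")
            time.sleep(delay)
```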
Genera has extensive debugging tools.
Genera can save versions of the running system to worlds . These worlds can be booted and then will contain all the saved data and code.
Symbolics provided several programming languages for use with Genera.
Symbolics Common Lisp provides most of the Common Lisp standard with very many extensions, many of them coming from ZetaLisp.
It is remarkable that these programming language implementations inherited some of the dynamic features of the Lisp system (like garbage collection and checked access to data) and supported incremental software development.
Third-party developers provided more programming languages, such as OPS5 , and development tools, such as the Knowledge Engineering Environment (KEE) from IntelliCorp.
Symbolics Genera comes with several applications, which are called activities .
Symbolics sold several applications that run on Symbolics Genera.
Several companies developed and sold applications for Symbolics Genera.
Genera also has its limits.
A stable version of Open Genera that can run on x86-64 or Arm64 Linux , and on Apple M1 macOS , has been released [ 5 ] and has been renamed Portable Genera.
A hacked version of Open Genera that can run on x86-64 Linux exists. [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Genera_(operating_system) |
The General Coordinates Network ( GCN ), formerly known as the Gamma-ray burst Coordinates Network , is an open-source platform created by NASA to receive and transmit alerts about astronomical transient phenomena. This includes neutrino detections by observatories such as IceCube or Super-Kamiokande , gravitational wave events from the LIGO , Virgo and KAGRA interferometers, and gamma-ray bursts observed by Fermi , Swift or INTEGRAL . [ 1 ] One of the main goals is to allow for follow-up observations of an event by other observatories, in the hope of observing multi-messenger events. [ 2 ] [ 3 ]
GCN has its origins in the BATSE coordinates distribution network (BACODINE). The Burst And Transient Source Experiment ( BATSE ) was a scientific instrument on the Compton Gamma-Ray Observatory (CGRO), and BACODINE monitored the BATSE real-time telemetry from CGRO. The first function of BACODINE was calculating the right ascension (RA) and declination (dec) locations for GRBs that it detected, and distributing those locations to sites around the world in real-time. Since the de-orbiting of the CGRO, this function of BACODINE is no longer operational. The second function of BACODINE was collecting right ascension and declination locations of GRBs detected by spacecraft other than CGRO, and then distributing that information. With this functionality, the original BACODINE name was changed to the more general name GCN. [ 4 ] It later evolved to include alerts from non-GRB observatories and was sometimes referred to as GCN/TAN (for Transient Astronomy Network). [ 5 ]
The GCN relies on two types of alerts: notices and circulars. Notices are machine-readable alerts, which are distributed in real time; they typically include only basic information about the event. Circulars are brief human-readable alerts, which are distributed (typically by e-mail) with a low latency but not in real time; they can also contain predictions, requests for follow-up observations from other observatories, or advertise observing plans. [ 6 ]
The current version of the GCN relies on Kafka to distribute the alerts, improving on previous versions which used three separate protocols. [ 7 ]
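As a rough illustration of what consuming Kafka-distributed notices looks like, the sketch below uses the generic confluent-kafka Python client; the broker address and topic name are placeholders rather than GCN's actual endpoints, and real access is arranged through GCN's own service credentials:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka.example.org:9092",  # placeholder broker
    "group.id": "my-follow-up-telescope",           # placeholder consumer group
})
consumer.subscribe(["example.mission.alerts"])      # placeholder topic

while True:
    msg = consumer.poll(1.0)  # wait up to 1 s for the next notice
    if msg is None:
        continue
    if msg.error():
        print("consumer error:", msg.error())
        continue
    # A notice is a small machine-readable payload (e.g. JSON bytes).
    print(msg.value().decode("utf-8"))
```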
The infrastructure for sending the alerts towards the GCN is managed by the respective observatories. For the historical gamma-ray burst observatories, which are based on spacecraft, this involves sending the information to a ground station; NASA Goddard Space Flight Center was the center in charge of sending the notices from GRB observatories.
As of April 2023, 14 missions were sending alerts to the GCN. [ 1 ]
Past spacecraft and instruments that participated in GCN include the Array of Low Energy X-ray Imaging Sensors ( ALEXIS ), BeppoSAX , the Imaging Compton Telescope (COMPTEL) on CGRO, the X-Ray/Gamma-Ray Spectrometer (XGRS) on NEAR Shoemaker , the High Energy Transient Explorer (WXM and SXC), the Rossi X-ray Timing Explorer (PCA and ASM) and Ulysses . | https://en.wikipedia.org/wiki/General_Coordinates_Network
The General Data Format for Biomedical Signals is a scientific and medical data file format . The aim of GDF is to combine and integrate the best features of all biosignal file formats into a single file format. [ 1 ]
The original GDF specification was introduced in 2005 as a new data format to overcome some of the limitations of the European Data Format for Biosignals (EDF). GDF was also designed to unify a number of file formats which had been designed for very specific applications (for example, in ECG research and EEG analysis). [ 2 ] The original specification included a binary header, and used an event table. [ 3 ] An updated specification (GDF v2) was released in 2011 and added fields for additional subject-specific information (gender, age, etc.) and utilized several standard codes for storing physical units and other properties. [ 2 ] In 2015, the Austrian Standardization Institute made GDF an official Austrian Standard ( https://shop.austrian-standards.at/action/en/public/details/553360/OENORM_K_2204_2015_11_15 ), and the revision number was updated to v3.
The GDF format is often used in brain–computer interface research. [ 4 ] [ 5 ] [ 6 ] However, since GDF provides a superset of features from many different file formats, it could be also used for many other domains.
The free and open source software BioSig library provides implementations for reading and writing of GDF in GNU Octave / MATLAB and C / C++ . [ 7 ] A lightweight C++ library called libGDF is also available and implements version 2 of the GDF format. [ 8 ] | https://en.wikipedia.org/wiki/General_Data_Format_for_Biomedical_Signals |
In the field of mathematical analysis , a general Dirichlet series is an infinite series that takes the form
$$\sum_{n=1}^{\infty} a_n e^{-\lambda_n s},$$
where $a_n$ and $s$ are complex numbers and $\{\lambda_n\}$ is a strictly increasing sequence of nonnegative real numbers that tends to infinity.
A simple observation shows that an 'ordinary' Dirichlet series
$$\sum_{n=1}^{\infty} \frac{a_n}{n^s}$$
is obtained by substituting $\lambda_n = \ln n$, while a power series
$$\sum_{n=1}^{\infty} a_n (e^{-s})^n$$
is obtained when $\lambda_n = n$.
If a Dirichlet series is convergent at $s_0 = \sigma_0 + t_0 i$, then it is uniformly convergent in the domain
$$|\arg(s - s_0)| \leq \theta < \frac{\pi}{2},$$
and convergent for any $s = \sigma + ti$ where $\sigma > \sigma_0$.
There are now three possibilities regarding the convergence of a Dirichlet series: it may converge for all, for none, or for some values of $s$. In the latter case, there exists a $\sigma_c$ such that the series is convergent for $\sigma > \sigma_c$ and divergent for $\sigma < \sigma_c$. By convention, $\sigma_c = \infty$ if the series converges nowhere, and $\sigma_c = -\infty$ if the series converges everywhere on the complex plane .
The abscissa of convergence of a Dirichlet series can be defined as $\sigma_c$ above. Another equivalent definition is
$$\sigma_c = \inf \left\{ \sigma \in \mathbb{R} : \text{the series converges for } \operatorname{Re}(s) > \sigma \right\}.$$
The line $\sigma = \sigma_c$ is called the line of convergence , and the half-plane of convergence is defined as
$$\left\{ s = \sigma + ti \in \mathbb{C} : \sigma > \sigma_c \right\}.$$
The abscissa , line and half-plane of convergence of a Dirichlet series are analogous to the radius , boundary and disk of convergence of a power series .
On the line of convergence, the question of convergence remains open as in the case of power series. However, if a Dirichlet series converges and diverges at different points on the same vertical line, then this line must be the line of convergence. The proof is implicit in the definition of the abscissa of convergence. An example would be the series
$$\sum_{n=1}^{\infty} \frac{1}{n} e^{-ns},$$
which converges at $s = -\pi i$ (where it is the alternating harmonic series ) and diverges at $s = 0$ (where it is the harmonic series ). Thus, $\sigma = 0$ is the line of convergence.
Suppose that a Dirichlet series does not converge at $s = 0$; then it is clear that $\sigma_c \geq 0$ and $\sum a_n$ diverges. On the other hand, if a Dirichlet series converges at $s = 0$, then $\sigma_c \leq 0$ and $\sum a_n$ converges. Thus, there are two formulas to compute $\sigma_c$, depending on the convergence of $\sum a_n$, which can be determined by various convergence tests . These formulas are similar to the Cauchy–Hadamard theorem for the radius of convergence of a power series.
If $\sum a_k$ is divergent, i.e. $\sigma_c \geq 0$, then $\sigma_c$ is given by
$$\sigma_c = \limsup_{n \to \infty} \frac{\log \left| a_1 + a_2 + \cdots + a_n \right|}{\lambda_n}.$$
If $\sum a_k$ is convergent, i.e. $\sigma_c \leq 0$, then $\sigma_c$ is given by
$$\sigma_c = \limsup_{n \to \infty} \frac{\log \left| a_{n+1} + a_{n+2} + \cdots \right|}{\lambda_n}.$$
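As a quick numerical illustration of the divergent case, the sketch below estimates $\sigma_c$ for the example series $\sum_{n \geq 1} (1/n) e^{-ns}$ above, where $\lambda_n = n$ and $\sum a_n$ is the divergent harmonic series; the quantity $\log|a_1 + \cdots + a_N| / \lambda_N$ should drift toward the true abscissa $\sigma_c = 0$ as $N$ grows:

```python
import numpy as np

# a_n = 1/n, lambda_n = n: the example series sum_{n>=1} (1/n) e^{-n s}.
N = 200_000
n = np.arange(1, N + 1)
partial_sums = np.cumsum(1.0 / n)              # A_N ~ ln N, so sum a_n diverges
estimates = np.log(np.abs(partial_sums)) / n   # log|A_N| / lambda_N

print(estimates[-1])  # ~1.3e-5, approaching sigma_c = 0 as N grows
```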
A Dirichlet series is absolutely convergent if the series
$$\sum_{n=1}^{\infty} \left| a_n e^{-\lambda_n s} \right|$$
is convergent. As usual, an absolutely convergent Dirichlet series is convergent, but the converse is not always true.
If a Dirichlet series is absolutely convergent at $s_0$, then it is absolutely convergent for all $s$ where $\operatorname{Re}(s) > \operatorname{Re}(s_0)$. A Dirichlet series may converge absolutely for all, for no, or for some values of $s$. In the latter case, there exists a $\sigma_a$ such that the series converges absolutely for $\sigma > \sigma_a$ and converges non-absolutely for $\sigma < \sigma_a$.
The abscissa of absolute convergence can be defined as $\sigma_a$ above, or equivalently as
$$\sigma_a = \inf \left\{ \sigma \in \mathbb{R} : \text{the series converges absolutely for } \operatorname{Re}(s) > \sigma \right\}.$$
The line and half-plane of absolute convergence can be defined similarly. There are also two formulas to compute $\sigma_a$.
If $\sum |a_k|$ is divergent, then $\sigma_a$ is given by
$$\sigma_a = \limsup_{n \to \infty} \frac{\log \left( |a_1| + |a_2| + \cdots + |a_n| \right)}{\lambda_n}.$$
If $\sum |a_k|$ is convergent, then $\sigma_a$ is given by
$$\sigma_a = \limsup_{n \to \infty} \frac{\log \left( |a_{n+1}| + |a_{n+2}| + \cdots \right)}{\lambda_n}.$$
In general, the abscissa of convergence does not coincide with the abscissa of absolute convergence, so there might be a strip between the line of convergence and the line of absolute convergence where a Dirichlet series is conditionally convergent . The width of this strip is given by
$$0 \leq \sigma_a - \sigma_c \leq L := \limsup_{n \to \infty} \frac{\ln n}{\lambda_n}.$$
In the case where $L = 0$, we have $\sigma_c = \sigma_a$. All the formulas provided so far still hold true for 'ordinary' Dirichlet series by substituting $\lambda_n = \ln n$ (for which $L = 1$).
It is possible to consider other abscissas of convergence for a Dirichlet series. The abscissa of bounded convergence $\sigma_b$ is given by
$$\sigma_b = \inf \left\{ \sigma \in \mathbb{R} : \sum_{n=1}^{\infty} a_n e^{-\lambda_n s} \text{ is bounded in the half-plane } \operatorname{Re}(s) \geq \sigma \right\},$$
while the abscissa of uniform convergence $\sigma_u$ is given by
$$\sigma_u = \inf \left\{ \sigma \in \mathbb{R} : \sum_{n=1}^{\infty} a_n e^{-\lambda_n s} \text{ converges uniformly in the half-plane } \operatorname{Re}(s) \geq \sigma \right\}.$$
These abscissas are related to the abscissa of convergence $\sigma_c$ and of absolute convergence $\sigma_a$ by the formulas
$$\sigma_c \leq \sigma_b \leq \sigma_u \leq \sigma_a,$$
and a remarkable theorem of Bohr in fact shows that for any ordinary Dirichlet series where $\lambda_n = \ln n$ (i.e. a Dirichlet series of the form $\sum_{n=1}^{\infty} a_n n^{-s}$), $\sigma_u = \sigma_b$ and $\sigma_a \leq \sigma_u + 1/2$; [ 1 ] Bohnenblust and Hille subsequently showed that for every number $d \in [0, 0.5]$ there are Dirichlet series $\sum_{n=1}^{\infty} a_n n^{-s}$ for which $\sigma_a - \sigma_u = d$. [ 2 ]
A formula for the abscissa of uniform convergence $\sigma_u$ for the general Dirichlet series $\sum_{n=1}^{\infty} a_n e^{-\lambda_n s}$ is given as follows: for any $N \geq 1$, let
$$U_N = \sup_{t \in \mathbb{R}} \left| \sum_{n=1}^{N} a_n e^{i t \lambda_n} \right|;$$
then
$$\sigma_u = \lim_{N \to \infty} \frac{\log U_N}{\lambda_N}.$$
[ 3 ]
A function represented by a Dirichlet series
$$F(s) = \sum_{n=1}^{\infty} a_n e^{-\lambda_n s}$$
is analytic on the half-plane of convergence. Moreover, for $k = 1, 2, 3, \ldots$, its derivatives are obtained by termwise differentiation:
$$F^{(k)}(s) = (-1)^k \sum_{n=1}^{\infty} a_n \lambda_n^k e^{-\lambda_n s}.$$
A Dirichlet series can be further generalized to the multi-variable case where $\lambda_n \in \mathbb{R}^k$, $k$ = 2, 3, 4,..., or the complex-variable case where $\lambda_n \in \mathbb{C}^m$, $m$ = 1, 2, 3,... | https://en.wikipedia.org/wiki/General_Dirichlet_series
edgelab (typically expressed with a leading lowercase "e") is an applied academic research lab, established in 2000 as a partnership between General Electric and the University of Connecticut . [ 1 ] [ 2 ]
edgelab approached GE businesses three times per year, identifying key strategic initiatives for program execution. Fifteen projects were selected each year (five per semester), and students, faculty, and on-site GE staff worked on these projects full-time for the 13-week session.
The edgelab program was discontinued in Spring 2011. The School of Business and General Electric have announced their plans to continue the relationship by creating a new joint venture. [ 1 ] [ 3 ] [ 4 ]
| https://en.wikipedia.org/wiki/General_Electric_EdgeLab
In calculus , the general Leibniz rule , [ 1 ] named after Gottfried Wilhelm Leibniz , generalizes the product rule for the derivative of the product of two functions (which is also known as "Leibniz's rule"). It states that if $f$ and $g$ are $n$-times differentiable functions , then the product $fg$ is also $n$-times differentiable and its $n$-th derivative is given by
$$(fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(n-k)} g^{(k)},$$
where $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ is the binomial coefficient and $f^{(j)}$ denotes the $j$-th derivative of $f$ (and in particular $f^{(0)} = f$).
The rule can be proven by using the product rule and mathematical induction .
If, for example, $n = 2$, the rule gives an expression for the second derivative of a product of two functions:
$$(fg)''(x) = \sum_{k=0}^{2} \binom{2}{k} f^{(2-k)}(x)\, g^{(k)}(x) = f''(x)g(x) + 2f'(x)g'(x) + f(x)g''(x).$$
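As a sanity check, the rule is easy to verify symbolically. The following sketch uses SymPy with generic undefined functions to confirm the $n = 2$ case; changing n exercises the general formula:

```python
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")(x)
g = sp.Function("g")(x)

n = 2
lhs = sp.diff(f * g, x, n)  # n-th derivative of the product
rhs = sum(sp.binomial(n, k) * sp.diff(f, x, n - k) * sp.diff(g, x, k)
          for k in range(n + 1))

print(sp.simplify(lhs - rhs))  # prints 0: both sides agree
```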
The formula can be generalized to the product of $m$ differentiable functions $f_1, \ldots, f_m$:
$$\left( f_1 f_2 \cdots f_m \right)^{(n)} = \sum_{k_1 + k_2 + \cdots + k_m = n} \binom{n}{k_1, k_2, \ldots, k_m} \prod_{1 \leq t \leq m} f_t^{(k_t)},$$
where the sum extends over all $m$-tuples $(k_1, \ldots, k_m)$ of non-negative integers with $\sum_{t=1}^{m} k_t = n$, and
$$\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!}$$
are the multinomial coefficients . This is akin to the multinomial formula from algebra.
The proof of the general Leibniz rule [ 2 ] : 68–69 proceeds by induction. Let $f$ and $g$ be $n$-times differentiable functions. The base case, when $n = 1$, claims that
$$(fg)' = f'g + fg',$$
which is the usual product rule and is known to be true. Next, assume that the statement holds for a fixed $n \geq 1$, that is, that
$$(fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(n-k)} g^{(k)}.$$
Then,
$$\begin{aligned}
(fg)^{(n+1)} &= \left[ \sum_{k=0}^{n} \binom{n}{k} f^{(n-k)} g^{(k)} \right]' \\
&= \sum_{k=0}^{n} \binom{n}{k} f^{(n+1-k)} g^{(k)} + \sum_{k=0}^{n} \binom{n}{k} f^{(n-k)} g^{(k+1)} \\
&= \sum_{k=0}^{n} \binom{n}{k} f^{(n+1-k)} g^{(k)} + \sum_{k=1}^{n+1} \binom{n}{k-1} f^{(n+1-k)} g^{(k)} \\
&= \binom{n}{0} f^{(n+1)} g^{(0)} + \sum_{k=1}^{n} \binom{n}{k} f^{(n+1-k)} g^{(k)} + \sum_{k=1}^{n} \binom{n}{k-1} f^{(n+1-k)} g^{(k)} + \binom{n}{n} f^{(0)} g^{(n+1)} \\
&= \binom{n+1}{0} f^{(n+1)} g^{(0)} + \sum_{k=1}^{n} \left[ \binom{n}{k-1} + \binom{n}{k} \right] f^{(n+1-k)} g^{(k)} + \binom{n+1}{n+1} f^{(0)} g^{(n+1)} \\
&= \binom{n+1}{0} f^{(n+1)} g^{(0)} + \sum_{k=1}^{n} \binom{n+1}{k} f^{(n+1-k)} g^{(k)} + \binom{n+1}{n+1} f^{(0)} g^{(n+1)} \\
&= \sum_{k=0}^{n+1} \binom{n+1}{k} f^{(n+1-k)} g^{(k)}.
\end{aligned}$$
And so the statement holds for $n + 1$, and the proof is complete.
The Leibniz rule bears a strong resemblance to the binomial theorem , and in fact the binomial theorem can be proven directly from the Leibniz rule by taking $f(x) = e^{ax}$ and $g(x) = e^{bx}$, which gives
$$(a + b)^n e^{(a+b)x} = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^k e^{(a+b)x},$$
and then dividing both sides by $e^{(a+b)x}$. [ 2 ] : 69
With the multi-index notation for partial derivatives of functions of several variables, the Leibniz rule states more generally:
$$\partial^{\alpha}(fg) = \sum_{\beta \,:\, \beta \leq \alpha} \binom{\alpha}{\beta} (\partial^{\beta} f)(\partial^{\alpha - \beta} g).$$
This formula can be used to derive a formula that computes the symbol of the composition of differential operators. In fact, let $P$ and $Q$ be differential operators (with coefficients that are differentiable sufficiently many times) and $R = P \circ Q$. Since $R$ is also a differential operator, the symbol of $R$ is given by
$$R(x, \xi) = e^{-\langle x, \xi \rangle} R(e^{\langle x, \xi \rangle}).$$
A direct computation now gives:
$$R(x, \xi) = \sum_{\alpha} \frac{1}{\alpha!} \left( \frac{\partial}{\partial \xi} \right)^{\alpha} P(x, \xi) \left( \frac{\partial}{\partial x} \right)^{\alpha} Q(x, \xi).$$
This formula is usually known as the Leibniz formula. It is used to define the composition in the space of symbols, thereby inducing the ring structure. | https://en.wikipedia.org/wiki/General_Leibniz_rule |
General Mission Analysis Tool (GMAT) is open-source space mission analysis software developed by NASA and private industry. [ 2 ]
It has been used for several missions, including LCROSS , the Lunar Reconnaissance Orbiter , OSIRIS-REx , the Magnetospheric Multiscale Mission , and the Transiting Exoplanet Survey Satellite (TESS) mission. [ 2 ] [ 3 ]
GMAT is an open-source alternative to software like Systems Tool Kit and FreeFlyer . | https://en.wikipedia.org/wiki/General_Mission_Analysis_Tool |
General Motors Research Laboratories are the part of General Motors that created the first known operating system ( GM-NAA I/O ) in 1955 and contributed to the first mechanical heart , the Dodrill-GMR , successfully used while performing open heart surgery. [ 1 ]
| https://en.wikipedia.org/wiki/General_Motors_Research_Laboratories
The General Toll Switching Plan was a systematic nationwide effort by the American Telephone and Telegraph Company (AT&T) to organize the telephone toll circuits and cable routes of the nation, and to streamline the operating principles and technical infrastructure for connecting long-distance telephone calls in North America. [ 1 ] This involved the design of a hierarchical system of toll-switching centers, a process that had found substantial maturity by 1929. The switching plan was principally operated by the Long Lines division of the Bell System in cooperation with independent telephone companies under the decree of the Kingsbury Commitment , reached with the United States government in 1913. The General Toll Switching Plan was a system manually operated by long-distance telephone operators. It was the forerunner of an automated system called Nationwide Operator Toll Dialing that was begun in 1943, which eventually led to Direct Distance Dialing (DDD) within the framework of the North American Numbering Plan decades later.
In the same manner that early telephone users developed an increasing desire to talk to each other, and expected the service to reach farther out and increase the number of participating private and business customers locally, so did users in different towns and cities desire to call each other. [ 2 ] However, the technology initially limited the distance over which calls could be carried. Telephone companies developed on a local basis, usually one company for each community. As the technology of telephones and of line construction improved, logistical problems stood in the way of rapid expansion. Solutions had to be found for organizational structure, management, business relationships, and collaboration in order to operate telephone services between or across multiple communities.
Theodore Newton Vail , General Manager of the American Bell Telephone Company , created the vision for this endeavor. [ 2 ] The first long-distance experiment was the Boston–New York telephone line. Vail suggested that a company separate from the four or five affected local telephone companies should be responsible for the construction and operation of the line. [ 2 ] Approved in 1880, he directed American Bell Telephone to incorporate a new entity in New York, the Inter-State Telephone Company , for the construction of the first section starting in Boston, under a license granted by American Bell. An additional company was formed in Connecticut to complete the line to New York. The Boston–New York toll line opened in 1884. [ 2 ]
Building additional long-distance lines would require enormous amounts of financing. However, the Massachusetts legislature rejected an application to increase the corporate capitalization of American Bell. With the experience of incorporating Inter-State Telephone in New York, the company formed a new subsidiary in New York City, the American Telephone and Telegraph Company , on March 3, 1885, with Theodore Vail as its first president. [ 2 ] The new company had the mandate to construct and operate long-distance telephone lines, and would negotiate and facilitate inter-connections to local telephones companies under the umbrella of the Bell System. This original purpose of the business would later be delegated to a division called Long Lines . By 1892, the company's long-distance network reached Chicago, [ 3 ] but the New York–Chicago line did not become commercially successful until after 1900, and the invention of the loading coil . [ 4 ]
In 1907, Theodore Vail became president of the American Telephone and Telegraph Company for the second time, having previously left the company in 1887. [ 6 ] Immediately, he steered the company in a new direction. He refined a vision of service, shaped new goals for supporting technological progress, and reorganized the company to facilitate his ideas. He envisioned universal telephone service as a public utility, and the future of the American telephone industry as a unified system of companies under the lead of American Telephone and Telegraph.
Such a nationwide network required technical standards that were understood and accepted by all cooperating participants in the industry. The Western Electric Company , the Bell System's sole and dedicated manufacturing unit, which was previously not permitted by company policy to sell outside the Bell System, was now directed to advertise and sell its products to the general market, so that independent operators could buy compatible apparatus. Vail organized a distinct research division within Western Electric, the later Bell Laboratories , to focus on basic research and development to solve the problems encountered in improving the technology of telephony. These efforts and this vision were communicated to the public by marketing campaigns under the slogan [ 7 ]
One System — One Policy — Universal Service.
In 1913, AT&T settled pending anti-trust challenges in the Kingsbury Commitment . [ 8 ] On December 19, 1913, in a letter by Nathan C. Kingsbury to the U.S. Attorney General, AT&T conceded to restrictions in the acquisition of independent companies, and agreed to the divestiture of Western Union . AT&T's telephone operations thereby essentially became a government-sanctioned natural monopoly , because an essential feature of this commitment was that independent telephony operators were permitted to "secure for their subscribers toll service over the lines of the companies in the Bell System", [ 9 ] [ 10 ] removing the barriers to a nationwide telephone system that would have no competitors.
While the Bell System had a specialized division, Long Lines , to interconnect the local telephone networks of its Associated Companies, [ 11 ] no such unifying driving force existed in the independent telephone industry. Telephone companies negotiated interconnection with neighboring businesses and built localized toll networks that addressed the regional needs of their customers. The interconnection agreements with the Bell System provided access beyond these networks.
Long-distance toll lines for transmission of telephone calls were almost entirely open-wire pair installations early in the 20th century. [ 12 ] By 1911, the Long Lines network had reached from New York as far west as Denver, using loading coil circuits, but this distance was the limit for communication without amplification. The research efforts at Western Electric, committed to by Vail around 1909, into the principles of the electron tube recently invented by Lee de Forest (the Audion ) and into its efficient manufacture made it possible to build signal repeaters that extended the transmission distance of toll lines. In 1914, AT&T completed the first transcontinental transmission line, spanning between the Atlantic Ocean and the Pacific Ocean. This connected a large customer base in the far west beyond the Rocky Mountains to the AT&T Long Lines network.
Open wire installations grew only marginally in the 1920s; they were increasingly supplemented with cable routes, which experienced dramatic growth, and with carrier transmission, a new development that multiplexed multiple communication channels, at times 200 or 300 circuits, onto the same physical cable medium.
By 1925, the extent and quality of transmission lines in the nation were good enough that telephone subscribers could place telephone calls to almost anywhere in the continental United States. [ 13 ] However, set-up times for calls were typically still long, and callers often had to hang up after ordering a call with an operator, who called them back when the circuit was established. In 1925, the average time to establish connections was still over seven minutes, but this improved to about two and one half minutes by 1929. [ 14 ]
The extension of the nationwide interconnections led to a rapid increase in traffic. [ 15 ] In 1915, less than a quarter billion toll messages had been carried in the Bell System. Over the next fifteen years, this more than quadrupled to over one billion. An increasing fraction of this traffic was for the long-haul routes in the network, between the largest cities in the nation and via the transcontinental routes. As a consequence, the build-out of long-haul plant was emphasized in investments, resulting in better quality circuits for long-haul transmissions.
Due to the investments in the plant, the average time to establish connections decreased steadily throughout the 1920s, [ 15 ] and AT&T was able to effect several rate cuts in long-distance service in just a few years. However, the growth also caused major construction problems in the layout of the cable plant. By 1930, the long-haul business was handled by about 2,500 toll centers, out of 6,400 central offices in the United States and eastern Canada.
The long-haul build-out in the United States was paralleled by the construction of the Trans-Canada Telephone System, having been planned immediately after the formation of the Telephone Association of Canada in 1921. [ 16 ]
A systematic approach was needed to limit the number of intermediate toll offices that relayed calls across the country, to further reduce set-up time, and to establish technical parameters for interchange points to assure a certain level of circuit quality. In 1929, the results of this research outlined a new fundamental layout of the toll plant and of the routing of toll calls. This first continental toll switching system became known as the General Toll Switching Plan . [ 1 ]
During the growth of telephone service since the first installations of telephone exchanges and the development of advanced manual and automatic switching systems, approximately 2,500 switching systems with trunks for connecting to other communities had been established in the nation. These systems were designated as toll centers ; all local calls that had to be connected to another community were routed to a toll center, which forwarded them to another toll center closer to the destination of each long-distance call.
The technological improvements since the transcontinental transmission line of 1914 required a new methodology and plan for managing the traffic. The purpose of this plan was to systematically provide a basic layout of the plant compatible with the highest practicable standards of service achievable within given economic goals. The layout of cabling and major toll centers needed to be optimized to minimize the number of switching steps required along a given route to connect any two telephones on the continent.
The General Toll Switching Plan organized these local toll centers into geographical groups, each associated with a region within which all toll centers forwarded calls to destinations outside their territory to a Primary Outlet . The Primary Outlet was responsible for establishing an optimal route either to another Primary Outlet or to a Regional Center . Regional Centers were toll switches, each responsible for a yet larger geographic region. The Regional Centers were strategically located across the nation and each maintained cable routes to all other Regional Centers. In addition, they connected to some Primary Outlets in other regions as well, as traffic demanded, or for alternate routing in case of congestion or technical failures.
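The hierarchy lends itself to a simple model. The sketch below is a toy illustration only (all office names are hypothetical, and real routing allowed alternate paths and direct Primary Outlet links that are ignored here): each toll center forwards to its Primary Outlet, each Primary Outlet to its Regional Center, and the Regional Centers form a fully connected mesh.

```python
# Hypothetical hierarchy: toll center -> Primary Outlet -> Regional Center.
PARENT = {
    "Springfield TC": "Chicago PO",
    "Chicago PO": "Chicago RC",
    "Reno TC": "San Francisco PO",
    "San Francisco PO": "San Francisco RC",
}

def chain_to_regional(office: str) -> list[str]:
    """Climb the hierarchy from an office up to its Regional Center."""
    chain = [office]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def route(origin: str, destination: str) -> list[str]:
    """Route up from the origin, across the regional mesh, then back down."""
    up = chain_to_regional(origin)
    down = chain_to_regional(destination)
    return up + down[::-1]  # all Regional Centers interconnect directly

print(route("Springfield TC", "Reno TC"))
# ['Springfield TC', 'Chicago PO', 'Chicago RC',
#  'San Francisco RC', 'San Francisco PO', 'Reno TC']
```
| https://en.wikipedia.org/wiki/General_Toll_Switching_Plan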
The General Transport, Petroleum and Chemical Workers' Union (GTPCWU) is a trade union representing workers in various industries in Ghana.
The union was founded in 1967 in Accra, to represent workers in the formal section of road transport, in air transport, and in the chemical and petroleum sectors. [ 1 ] By 1985, membership had reached 29,185, but by 2018 this had fallen to 7,500. [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/General_Transport,_Petroleum_and_Chemical_Workers'_Union |
General chemistry (sometimes referred to as "gen chem") is offered by colleges and universities as an introductory-level chemistry course usually taken by students during their first year. [ 1 ] The course is usually run with a concurrent lab section that gives students an opportunity to experience a laboratory environment and carry out experiments with the material learned in the course. These labs can consist of acid-base titrations , kinetics , equilibrium reactions , and electrochemical reactions . Chemistry majors, as well as students across STEM majors such as biology , biochemistry , biomedicine , physics , and engineering , are usually required to complete one year of general chemistry. [ 2 ]
The concepts taught in a typical general chemistry course mirror this laboratory work and include topics such as acid-base chemistry, kinetics , chemical equilibrium , and electrochemistry .
Students in colleges and universities looking to follow the " pre-medical " track are required to pass general chemistry as the Association of American Medical Colleges requires at least one full year of chemistry. [ 3 ] In order for students to apply to medical school, they must pass the Medical College Admission Test, or MCAT , which consists of a section covering the foundations of general chemistry. [ 4 ] General chemistry covers many of the principal foundations that apply to medicine and the human body and that are essential in the current understanding and practice of medicine. [ 5 ]
Students who are enrolled in general chemistry often aspire to become doctors, researchers, and educators. Because of the demands of these fields, professors believe that the level of rigor associated with general chemistry should be elevated above that of a typical introductory course. This has led the course to gain the reputation of a "weed-out course", in which students drop out of their respective majors due to the level of difficulty. [ 6 ] Students can have different perceptions of the course based on their experiences, or lack thereof, in high school chemistry courses. Students who enroll in AP chemistry in high school, a course that mirrors what is covered in college, can be perceived as having an advantage over students who do not come to college with a strong chemistry background. Students who wish to be competitive in applying to medical schools try to achieve success in general chemistry, as the average GPA for medical school matriculants was 3.71 in 2017. [ 7 ] This makes a merely passing grade unacceptable for students with medical school aspirations. General chemistry professors have been known to make tests worth a large portion of the course grade, and to make them more challenging than the material itself suggests. Grade deflation, purposely adjusting the grades of a course to be lower, is also an issue in general chemistry courses at the undergraduate level. | https://en.wikipedia.org/wiki/General_chemistry
A General Content Descriptor (GCD) is a file which describes downloads, such as ringtones and pictures, to wireless devices . GCDs are plain text files. They are required by many wireless carriers to install applications on devices. The name of the file ends with a ".gcd" extension . [ 1 ]
| https://en.wikipedia.org/wiki/General_content_descriptor
A contractor [ 1 ] [ 2 ] ( North American English ) or builder ( British English ), [ 3 ] [ 4 ] is responsible for the day-to-day oversight of a construction site, management of vendors and trades, and the communication of information to all involved parties throughout the course of a building project. [ 5 ]
In the United States , a contractor may be a sole proprietor managing a project and performing labor or carpentry work, have a small staff, or may be a very large company managing billion dollar projects. Some builders build new homes, some are remodelers, some are developers. [ 6 ]
A general contractor is a construction manager employed by a client, usually upon the advice of the project's architect or engineer . [ 7 ] General contractors are mainly responsible for the overall coordination of a project and may also act as building designer and construction foreman (a tradesman in charge of a crew).
A general contractor must first assess the project-specific documents (referred to as a bid, proposal, or tender documents). In the case of renovations, a site visit is required to get a better understanding of the project. Depending on the project delivery method , the general contractor will submit a fixed price proposal or bid, cost-plus price or an estimate. The general contractor considers the cost of home office overhead, general conditions, materials, and equipment, as well as the cost of labor, to provide the owner with a price for the project.
Contract documents may include drawings, project manuals (including general, supplementary, or special conditions and specifications), and addenda or modifications issued prior to proposal/bidding and prepared by a design professional, such as an architect . The general contractor may also assume the role of construction manager, responsible for overseeing the project while assuming financial and legal risks. Such risks can include cost overruns, delays, and liabilities related to safety or contract breaches.
Prior to formal appointment, the selected general contractor to whom a client proposes to award a contract is often referred to as a "preferred contractor". [ 8 ]
A general contractor is responsible for providing all of the material, labor, equipment (such as heavy equipment and tools) and services necessary for the construction of the project. A general contractor often hires specialized subcontractors to perform all or portions of the construction work. When using subcontractors, the general contractor is responsible for overseeing the quality of all work performed by any and all of the workers and subcontractors.
It is a best practice for general contractors to prioritize safety on the job site, and they are generally responsible for ensuring that work takes place following safe practices.
A general contractor's responsibilities may include applying for building permits, advising the person they are hired by, securing the property, providing temporary utilities on site, managing personnel on site, providing site surveying and engineering, disposing or recycling of construction waste , monitoring schedules and cash flows, and maintaining accurate records. [ 9 ]
The general contractor may be responsible for some part of the design, referred to as the "contractor's design portion" ( JCT terminology). [ 10 ]
In the United Kingdom , Australia and some British Commonwealth countries, the term 'general contractor' was gradually superseded by 'builder' during the early twentieth century. [ citation needed ] 'Builder' was the term used by major professional, trade, and consumer organizations when issuing contracts for construction work, and thus 'general contractor' fell out of use except in large organizations, where the main contractor is the top manager and a general contractor shares responsibilities with professional contractors.
General contractors who conduct work for government agencies are often referred to as "builders". This term is also used in contexts where the customer's immediate general contractor is permitted to sub-contract or circumstances are likely to involve sub-contracting to specialist operators, e.g., in various public services.
In the United States and Asia , the terms general contractor (or simply "contractor"), prime contractor and main contractor are often interchangeable when referring to small local companies that perform residential work. These companies are represented by trade organizations such as the NAHB . [ 11 ]
Licensing requirements to work legally on construction projects vary from locale to locale. In the United States, there are no federal licensing requirements to become a general contractor, but most US states require general contractors to obtain a local license to operate. It is the states' responsibility to define these requirements: for example, in the state of California , the requirements are stated as follows:
With a few exceptions, all businesses or individuals who work on any building, highway, road, parking facility, railroad, excavation, or other structure in California must be licensed by the California Contractors State License Board (CSLB) if the total cost of one or more contracts on the project is $500 or more.
In every state that requires a license, a surety bond is required as part of the licensing process, with the exception of Louisiana , where bonding requirements may vary in different parishes. Not all states require general contractor licenses: these include Vermont , New Hampshire , and Maine , among others.
Some general contractors obtain bachelor's degrees in construction science , building science , surveying , construction safety, or other disciplines.
General contractors often learn about different aspects of construction, including masonry , carpentry , framing , and plumbing . Aspiring general contractors communicate with subcontractors and may learn the management skills they need to run their own company.
Experience in the construction industry, as well as references from customers, business partners, or former employers, is typically required. Some jurisdictions require candidates to provide proof of financing to own their own general contracting firm.
General contractors often run their own business. They hire subcontractors to complete specialized construction work and may manage a team of plumbers , electricians , bricklayers , carpenters , iron workers , technicians , handymen , architects and roofers . General contractors build their business by networking with potential clients, buying basic construction tools, and ensuring that their subcontractors complete high-quality work. General contractors do not usually complete much construction work themselves, but they need to be familiar with construction techniques so they can manage workers effectively. Other reasons for relying on subcontractors include access to specialist skills, flexible hiring and firing, and lower costs.
A property owner or real estate developer develops a program of their needs and selects a site (often with an architect). The architect assembles a design team of consulting engineers and other experts to design the building and specify the building systems. Today contractors frequently participate on the design team by providing pre-design services such as estimates of the budget and scheduling requirements, to improve the economy of the project. In other cases, the general contractor is hired at the close of the design phase. The owner, architect, and general contractor work closely together to meet deadlines and budget. The general contractor works with subcontractors to ensure quality standards; subcontractors specialize in areas such as electrical wiring, plumbing, masonry, etc. | https://en.wikipedia.org/wiki/General_contractor |
In economics , general equilibrium theory attempts to explain the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that the interaction of demand and supply will result in an overall general equilibrium . General equilibrium theory contrasts with the theory of partial equilibrium , which analyzes a specific part of an economy while its other factors are held constant. [ 1 ]
General equilibrium theory both studies economies using the model of equilibrium pricing and seeks to determine in which circumstances the assumptions of general equilibrium will hold. The theory dates to the 1870s, particularly the work of French economist Léon Walras in his pioneering 1874 work Elements of Pure Economics . [ 2 ] The theory reached its modern form with the work of Lionel W. McKenzie (Walrasian theory), Kenneth Arrow and Gérard Debreu (Hicksian theory) in the 1950s.
Broadly speaking, general equilibrium tries to give an understanding of the whole economy using a "bottom-up" approach, starting with individual markets and agents. Therefore, general equilibrium theory has traditionally been classified as part of microeconomics . The difference is not as clear as it used to be, since much of modern macroeconomics has emphasized microeconomic foundations , and has constructed general equilibrium models of macroeconomic fluctuations . General equilibrium macroeconomic models usually have a simplified structure that only incorporates a few markets, like a "goods market" and a "financial market". In contrast, general equilibrium models in the microeconomic tradition typically involve a multitude of different goods markets. They are usually complex and require computers to calculate numerical solutions .
In a market system the prices and production of all goods, including the price of money and interest , are interrelated. A change in the price of one good, say bread, may affect another price, such as bakers' wages. If bakers don't differ in tastes from others, the demand for bread might be affected by a change in bakers' wages, with a consequent effect on the price of bread. Calculating the equilibrium price of just one good, in theory, requires an analysis that accounts for all of the millions of different goods that are available. It is often assumed that agents are price takers , and under that assumption two common notions of equilibrium exist: Walrasian, or competitive equilibrium , and its generalization: a price equilibrium with transfers.
The first attempt in neoclassical economics to model prices for a whole economy was made by Léon Walras . Walras' Elements of Pure Economics provides a succession of models, each taking into account more aspects of a real economy (two commodities, many commodities, production, growth, money). Some think Walras was unsuccessful and that the later models in this series are inconsistent. [ 3 ] [ 4 ]
In particular, Walras's model was a long-run model in which prices of capital goods are the same whether they appear as inputs or outputs and in which the same rate of profits is earned in all lines of industry. This is inconsistent with the quantities of capital goods being taken as data. But when Walras introduced capital goods in his later models, he took their quantities as given, in arbitrary ratios. (In contrast, Kenneth Arrow and Gérard Debreu continued to take the initial quantities of capital goods as given, but adopted a short run model in which the prices of capital goods vary with time and the own rate of interest varies across capital goods.)
Walras was the first to lay down a research program widely followed by 20th-century economists. In particular, the Walrasian agenda included the investigation of when equilibria are unique and stable: Walras' Lesson 7 shows that neither uniqueness, nor stability, nor even the existence of an equilibrium is guaranteed. Walras also proposed a dynamic process by which general equilibrium might be reached, that of the tâtonnement or groping process.
The tâtonnement process is a model for investigating stability of equilibria. Prices are announced (perhaps by an "auctioneer"), and agents state how much of each good they would like to offer (supply) or purchase (demand). No transactions and no production take place at disequilibrium prices. Instead, prices are lowered for goods with positive prices and excess supply . Prices are raised for goods with excess demand. The question for the mathematician is under what conditions such a process will terminate in equilibrium where demand equates to supply for goods with positive prices and demand does not exceed supply for goods with a price of zero. Walras was not able to provide a definitive answer to this question (see Unresolved Problems in General Equilibrium below).
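A minimal computational sketch of the process may help, under the assumption of a two-good exchange economy with Cobb-Douglas consumers (all parameters and the step size are illustrative): prices are announced, excess demands are computed, and prices are adjusted upward where demand exceeds supply and downward where supply exceeds demand, with no trade taking place along the way.

```python
import numpy as np

# Tatonnement sketch: two goods, two Cobb-Douglas consumers (illustrative).
alphas = np.array([0.3, 0.7])      # consumer i's expenditure share on good 1
endow = np.array([[1.0, 2.0],      # consumer 0's endowment of goods 1 and 2
                  [2.0, 1.0]])     # consumer 1's endowment

def excess_demand(p):
    """Aggregate excess demand at announced prices p."""
    z = np.zeros(2)
    for a, w in zip(alphas, endow):
        wealth = p @ w
        z += np.array([a * wealth / p[0], (1 - a) * wealth / p[1]]) - w
    return z

p = np.array([0.8, 0.2])           # the "auctioneer" announces initial prices
for _ in range(1000):
    p = np.maximum(p + 0.1 * excess_demand(p), 1e-9)  # raise prices of goods in excess demand
    p /= p.sum()                   # normalize: only relative prices matter

print(p, excess_demand(p))         # excess demand is ~0 at the rest point
```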
In partial equilibrium analysis, the determination of the price of a good is simplified by just looking at the price of one good, and assuming that the prices of all other goods remain constant. The Marshallian theory of supply and demand is an example of partial equilibrium analysis. Partial equilibrium analysis is adequate when the first-order effects of a shift in the demand curve do not shift the supply curve. Anglo-American economists became more interested in general equilibrium in the late 1920s and 1930s after Piero Sraffa 's demonstration that Marshallian economists cannot account for the forces thought to account for the upward-slope of the supply curve for a consumer good.
If an industry uses little of a factor of production, a small increase in the output of that industry will not bid the price of that factor up. To a first-order approximation, firms in the industry will experience constant costs, and the industry supply curves will not slope up. If an industry uses an appreciable amount of that factor of production, an increase in the output of that industry will exhibit increasing costs. But such a factor is likely to be used in substitutes for the industry's product, and an increased price of that factor will have effects on the supply of those substitutes. Consequently, Sraffa argued, the first-order effects of a shift in the demand curve of the original industry under these assumptions includes a shift in the supply curve of substitutes for that industry's product, and consequent shifts in the original industry's supply curve. General equilibrium is designed to investigate such interactions between markets.
Continental European economists made important advances in the 1930s. Walras' arguments for the existence of general equilibrium often were based on the counting of equations and variables. Such arguments are inadequate for non-linear systems of equations and do not imply that equilibrium prices and quantities cannot be negative, a meaningless solution for his models. The replacement of certain equations by inequalities and the use of more rigorous mathematics improved general equilibrium modeling.
The modern conception of general equilibrium is provided by the Arrow–Debreu– McKenzie model, developed jointly by Kenneth Arrow , Gérard Debreu , and Lionel W. McKenzie in the 1950s. [ 5 ] [ 6 ] Debreu presents this model in Theory of Value (1959) as an axiomatic model, following the style of mathematics promoted by Nicolas Bourbaki . In such an approach, the interpretation of the terms in the theory (e.g., goods, prices) are not fixed by the axioms.
Three important interpretations of the terms of the theory have been often cited. First, suppose commodities are distinguished by the location where they are delivered. Then the Arrow-Debreu model is a spatial model of, for example, international trade.
Second, suppose commodities are distinguished by when they are delivered. That is, suppose all markets equilibrate at some initial instant of time. Agents in the model purchase and sell contracts, where a contract specifies, for example, a good to be delivered and the date at which it is to be delivered. The Arrow–Debreu model of intertemporal equilibrium contains forward markets for all goods at all dates. No markets exist at any future dates.
Third, suppose contracts specify states of nature which affect whether a commodity is to be delivered: "A contract for the transfer of a commodity now specifies, in addition to its physical properties, its location and its date, an event on the occurrence of which the transfer is conditional. This new definition of a commodity allows one to obtain a theory of [risk] free from any probability concept..." [ 7 ]
These interpretations can be combined. So the complete Arrow–Debreu model can be said to apply when goods are identified by when they are to be delivered, where they are to be delivered and under what circumstances they are to be delivered, as well as their intrinsic nature. So there would be a complete set of prices for contracts such as "1 ton of Winter red wheat, delivered on 3rd of January in Minneapolis, if there is a hurricane in Florida during December". A general equilibrium model with complete markets of this sort seems to be a long way from describing the workings of real economies, however, its proponents argue that it is still useful as a simplified guide as to how real economies function.
Some of the recent work in general equilibrium has in fact explored the implications of incomplete markets , which is to say an intertemporal economy with uncertainty, where there do not exist sufficiently detailed contracts that would allow agents to fully allocate their consumption and resources through time. While it has been shown that such economies will generally still have an equilibrium, the outcome may no longer be Pareto optimal . The basic intuition for this result is that if consumers lack adequate means to transfer their wealth from one time period to another and the future is risky, there is nothing to necessarily tie any price ratio down to the relevant marginal rate of substitution , which is the standard requirement for Pareto optimality. Under some conditions the economy may still be constrained Pareto optimal , meaning that a central authority limited to the same type and number of contracts as the individual agents may not be able to improve upon the outcome; what would be needed is the introduction of a full set of possible contracts. Hence, one implication of the theory of incomplete markets is that inefficiency may be a result of underdeveloped financial institutions or credit constraints faced by some members of the public. Research still continues in this area.
Basic questions in general equilibrium analysis are concerned with the conditions under which an equilibrium will be efficient, which efficient equilibria can be achieved, when an equilibrium is guaranteed to exist and when the equilibrium will be unique and stable.
The First Fundamental Welfare Theorem asserts that market equilibria are Pareto efficient . In other words, the allocation of goods in the equilibria is such that there is no reallocation which would leave a consumer better off without leaving another consumer worse off. In a pure exchange economy, a sufficient condition for the first welfare theorem to hold is that preferences be locally nonsatiated . The first welfare theorem also holds for economies with production regardless of the properties of the production function. Implicitly, the theorem assumes complete markets and perfect information. In an economy with externalities , for example, it is possible for equilibria to arise that are not efficient.
The first welfare theorem is informative in the sense that it points to the sources of inefficiency in markets. Under the assumptions above, any market equilibrium is tautologically efficient. Therefore, when equilibria arise that are not efficient, the market system itself is not to blame, but rather some sort of market failure .
Even if every equilibrium is efficient, it may not be that every efficient allocation of resources can be part of an equilibrium. However, the second theorem states that every Pareto efficient allocation can be supported as an equilibrium by some set of prices. In other words, all that is required to reach a particular Pareto efficient outcome is a redistribution of initial endowments of the agents after which the market can be left alone to do its work. This suggests that the issues of efficiency and equity can be separated and need not involve a trade-off. The conditions for the second theorem are stronger than those for the first, as consumers' preferences and production sets now need to be convex (convexity roughly corresponds to the idea of diminishing marginal rates of substitution i.e. "the average of two equally good bundles is better than either of the two bundles").
Even though every equilibrium is efficient, neither of the above two theorems say anything about the equilibrium existing in the first place. To guarantee that an equilibrium exists, it suffices that consumer preferences be strictly convex . With enough consumers, the convexity assumption can be relaxed both for existence and the second welfare theorem. Similarly, but less plausibly, convex feasible production sets suffice for existence; convexity excludes economies of scale .
Proofs of the existence of equilibrium traditionally rely on fixed-point theorems such as Brouwer fixed-point theorem for functions (or, more generally, the Kakutani fixed-point theorem for set-valued functions ). See Competitive equilibrium#Existence of a competitive equilibrium . The proof was first due to Lionel McKenzie , [ 8 ] and Kenneth Arrow and Gérard Debreu . [ 9 ] In fact, the converse also holds, according to Uzawa 's derivation of Brouwer's fixed point theorem from Walras's law. [ 10 ] Following Uzawa's theorem, many mathematical economists consider proving existence a deeper result than proving the two Fundamental Theorems.
Another method of proof of existence, global analysis , uses Sard's lemma and the Baire category theorem ; this method was pioneered by Gérard Debreu and Stephen Smale .
Starr (1969) applied the Shapley–Folkman–Starr theorem to prove that even without convex preferences there exists an approximate equilibrium. The Shapley–Folkman–Starr results bound the distance from an "approximate" economic equilibrium to an equilibrium of a "convexified" economy, when the number of agents exceeds the dimension of the goods. [ 11 ] Following Starr's paper, the Shapley–Folkman–Starr results were "much exploited in the theoretical literature", according to Guesnerie, [ 12 ] : 112 who wrote the following:
some key results obtained under the convexity assumption remain (approximately) relevant in circumstances where convexity fails. For example, in economies with a large consumption side, nonconvexities in preferences do not destroy the standard results of, say Debreu's theory of value. In the same way, if indivisibilities in the production sector are small with respect to the size of the economy, [ . . . ] then standard results are affected in only a minor way. [ 12 ] : 99
To this text, Guesnerie appended the following footnote:
The derivation of these results in general form has been one of the major achievements of postwar economic theory. [ 12 ] : 138
In particular, the Shapley-Folkman-Starr results were incorporated in the theory of general economic equilibria [ 13 ] [ 14 ] [ 15 ] and in the theory of market failures [ 16 ] and of public economics . [ 17 ]
Although generally (assuming convexity) an equilibrium will exist and will be efficient, the conditions under which it will be unique are much stronger. The Sonnenschein–Mantel–Debreu theorem , proven in the 1970s, states that the aggregate excess demand function inherits only certain properties of individuals' demand functions, and that these ( continuity , homogeneity of degree zero , Walras' law and boundary behavior when prices are near zero) are the only real restrictions one can expect from an aggregate excess demand function. Any such function can represent the excess demand of an economy populated with rational utility-maximizing individuals.
There has been much research on conditions under which the equilibrium will be unique, or which at least limit the number of equilibria. One result states that under mild assumptions the number of equilibria will be finite (see regular economy ) and odd (see index theorem ). Furthermore, if an economy as a whole, as characterized by an aggregate excess demand function, has the revealed preference property (which is a much stronger condition than revealed preferences for a single individual) or the gross substitute property then likewise the equilibrium will be unique. All methods of establishing uniqueness can be thought of as establishing that each equilibrium has the same positive local index, in which case by the index theorem there can be but one such equilibrium.
Given that equilibria may not be unique, it is of some interest to ask whether any particular equilibrium is at least locally unique. If so, then comparative statics can be applied as long as the shocks to the system are not too large. As stated above, in a regular economy equilibria will be finite, hence locally unique. One reassuring result, due to Debreu, is that "most" economies are regular.
Work by Michael Mandler (1999) has challenged this claim. [ 18 ] The Arrow–Debreu–McKenzie model is neutral between models of production functions as continuously differentiable and as formed from (linear combinations of) fixed coefficient processes. Mandler accepts that, under either model of production, the initial endowments will not be consistent with a continuum of equilibria, except for a set of Lebesgue measure zero. However, endowments change with time in the model and this evolution of endowments is determined by the decisions of agents (e.g., firms) in the model. Agents in the model have an interest in equilibria being indeterminate:
Indeterminacy, moreover, is not just a technical nuisance; it undermines the price-taking assumption of competitive models. Since arbitrary small manipulations of factor supplies can dramatically increase a factor's price, factor owners will not take prices to be parametric. [ 18 ] : 17
When technology is modeled by (linear combinations) of fixed coefficient processes, optimizing agents will drive endowments to be such that a continuum of equilibria exist:
The endowments where indeterminacy occurs systematically arise through time and therefore cannot be dismissed; the Arrow-Debreu-McKenzie model is thus fully subject to the dilemmas of factor price theory. [ 18 ] : 19
Some have questioned the practical applicability of the general equilibrium approach based on the possibility of non-uniqueness of equilibria.
In a typical general equilibrium model the prices that prevail "when the dust settles" are simply those that coordinate the demands of various consumers for various goods. But this raises the question of how these prices and allocations have been arrived at, and whether any (temporary) shock to the economy will cause it to converge back to the same outcome that prevailed before the shock. This is the question of stability of the equilibrium, and it can be readily seen that it is related to the question of uniqueness. If there are multiple equilibria, then some of them will be unstable. Then, if an equilibrium is unstable and there is a shock, the economy will wind up at a different set of allocations and prices once the convergence process terminates. However, stability depends not only on the number of equilibria but also on the type of the process that guides price changes (for a specific type of price adjustment process see Walrasian auction ). Consequently, some researchers have focused on plausible adjustment processes that guarantee system stability, i.e., that guarantee convergence of prices and allocations to some equilibrium. When more than one stable equilibrium exists, where one ends up will depend on where one begins. The theorems that have been most conclusive about the stability of a typical general equilibrium model concern local stability.
Research building on the Arrow–Debreu–McKenzie model has revealed some problems with the model. The Sonnenschein–Mantel–Debreu results show that, essentially, any restrictions on the shape of excess demand functions are stringent. Some think this implies that the Arrow–Debreu model lacks empirical content. [ 19 ] Whether the theory can be given empirical content therefore remains an unsolved problem.
A model organized around the tâtonnement process has been said to be a model of a centrally planned economy , not a decentralized market economy. Some research has tried to develop general equilibrium models with other processes. In particular, some economists have developed models in which agents can trade at out-of-equilibrium prices and such trades can affect the equilibria to which the economy tends. Particularly noteworthy are the Hahn process, the Edgeworth process and the Fisher process.
The data determining Arrow-Debreu equilibria include initial endowments of capital goods. If production and trade occur out of equilibrium, these endowments will be changed, further complicating the picture.
In a real economy, however, trading, as well as production and consumption, goes on out of equilibrium. It follows that, in the course of convergence to equilibrium (assuming that occurs), endowments change. In turn this changes the set of equilibria. Put more succinctly, the set of equilibria is path dependent ... [This path dependence] makes the calculation of equilibria corresponding to the initial state of the system essentially irrelevant. What matters is the equilibrium that the economy will reach from given initial endowments, not the equilibrium that it would have been in, given initial endowments, had prices happened to be just right. – ( Franklin Fisher ). [ 20 ]
The Arrow–Debreu model in which all trade occurs in futures contracts at time zero requires a very large number of markets to exist. It is equivalent under complete markets to a sequential equilibrium concept in which spot markets for goods and assets open at each date-state event (they are not equivalent under incomplete markets); market clearing then requires that the entire sequence of prices clears all markets at all times. A generalization of the sequential market arrangement is the temporary equilibrium structure, where market clearing at a point in time is conditional on expectations of future prices which need not be market clearing ones.
Although the Arrow–Debreu–McKenzie model is set out in terms of some arbitrary numéraire , the model does not encompass money. Frank Hahn , for example, has investigated whether general equilibrium models can be developed in which money enters in some essential way. One of the essential questions he introduces, often referred to as Hahn's problem, is: "Can one construct an equilibrium where money has value?" The goal is to find models in which existence of money can alter the equilibrium solutions, perhaps because the initial position of agents depends on monetary prices.
Some critics of general equilibrium modeling contend that much research in these models constitutes exercises in pure mathematics with no connection to actual economies. In a 1979 article, Nicholas Georgescu-Roegen complains: "There are endeavors that now pass for the most desirable kind of economic contributions although they are just plain mathematical exercises, not only without any economic substance but also without any mathematical value." [ 21 ] He cites as an example a paper that assumes more traders in existence than there are points in the set of real numbers.
Although modern models in general equilibrium theory demonstrate that under certain circumstances prices will indeed converge to equilibria, critics hold that the assumptions necessary for these results are extremely strong. As well as stringent restrictions on excess demand functions, the necessary assumptions include perfect rationality of individuals; complete information about all prices both now and in the future; and the conditions necessary for perfect competition . However, some results from experimental economics suggest that even in circumstances where there are few, imperfectly informed agents, the resulting prices and allocations may wind up resembling those of a perfectly competitive market (although certainly not a stable general equilibrium in all markets). [ citation needed ]
Frank Hahn defends general equilibrium modeling on the grounds that it serves a negative function: general equilibrium models show what the economy would have to be like for an unregulated economy to be Pareto efficient . [ citation needed ]
Until the 1970s general equilibrium analysis remained theoretical. With advances in computing power and the development of input–output tables, it became possible to model national economies, or even the world economy, and attempts were made to solve for general equilibrium prices and quantities empirically.
Applied general equilibrium (AGE) models were pioneered by Herbert Scarf in 1967, and offered a method for solving the Arrow–Debreu general equilibrium system in a numerical fashion. This was first implemented by John Shoven and John Whalley (students of Scarf at Yale) in 1972 and 1973, and AGE models were a popular method through the 1970s. [ 22 ] [ 23 ] In the 1980s, however, AGE models faded from popularity due to their inability to provide a precise solution and their high cost of computation.
Computable general equilibrium (CGE) models surpassed and replaced AGE models in the mid-1980s, as the CGE model was able to provide relatively quick and large computable models for a whole economy, and was the preferred method of governments and the World Bank . CGE models are heavily used today, and while 'AGE' and 'CGE' are used interchangeably in the literature, Scarf-type AGE models have not been constructed since the mid-1980s, and the current CGE literature is not based on Arrow-Debreu and general equilibrium theory as discussed in this article. CGE models, and what is today referred to as AGE models, are based on static, simultaneously solved, macro balancing equations (from the standard Keynesian macro model), giving a precise and explicitly computable result. [ 24 ]
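A toy illustration of the "computable" idea (not Scarf's actual algorithm; the economy is the same illustrative one as in the tâtonnement sketch above): solve the market-clearing condition directly with a root finder, taking good 2 as numeraire. By Walras' law, clearing one of the two markets clears the other as well.

```python
from scipy.optimize import brentq

# Toy computable equilibrium: two-good Cobb-Douglas exchange economy,
# good 2 as numeraire (p2 = 1). Parameters are illustrative assumptions.
alphas = [0.3, 0.7]                  # expenditure shares on good 1
endow = [(1.0, 2.0), (2.0, 1.0)]     # per-consumer endowments of goods 1, 2

def z1(p1):
    """Excess demand for good 1 at price p1."""
    total = 0.0
    for a, (w1, w2) in zip(alphas, endow):
        wealth = p1 * w1 + w2
        total += a * wealth / p1 - w1
    return total

p1 = brentq(z1, 1e-6, 1e6)           # by Walras' law, market 2 then clears too
print(p1, z1(p1))                    # equilibrium relative price, residual ~0
```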
General equilibrium theory is a central point of contention and influence between the neoclassical school and other schools of economic thought , and different schools have varied views on general equilibrium theory. Some, such as the Keynesian and Post-Keynesian schools, strongly reject general equilibrium theory as "misleading" and "useless". Disequilibrium macroeconomics and different non-equilibrium approaches were developed as alternatives. Other schools, such as new classical macroeconomics , developed from general equilibrium theory.
Keynesian and Post-Keynesian economists, and their underconsumptionist predecessors, criticize general equilibrium theory specifically, and neoclassical economics generally. They argue that general equilibrium theory is neither accurate nor useful, that economies are not in equilibrium, that equilibrium may be slow and painful to achieve, and that modeling by equilibrium is "misleading"; the resulting theory, they hold, is not a useful guide, particularly for understanding economic crises . [ 25 ] [ 26 ]
Let us beware of this dangerous theory of equilibrium which is supposed to be automatically established. A certain kind of equilibrium, it is true, is reestablished in the long run, but it is after a frightful amount of suffering.
The long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is past the ocean is flat again.
It is as absurd to assume that, for any long period of time, the variables in the economic organization, or any part of them, will "stay put," in perfect equilibrium, as to assume that the Atlantic Ocean can ever be without a wave.
Robert Clower and others have argued for a reformulation of theory toward disequilibrium analysis to incorporate how monetary exchange fundamentally alters the representation of an economy as though a barter system. [ 27 ]
While general equilibrium theory and neoclassical economics generally were originally microeconomic theories, new classical macroeconomics builds a macroeconomic theory on these bases. In new classical models, the macroeconomy is assumed to be at its unique equilibrium, with full employment and potential output, and this equilibrium is assumed to always have been achieved via price and wage adjustment (market clearing). The best-known such model is real business-cycle theory , in which business cycles are considered to be largely due to changes in the real economy; unemployment is not due to the failure of the market to achieve potential output, but to equilibrium potential output having fallen and equilibrium unemployment having risen.
Within socialist economics , a sustained critique of general equilibrium theory (and neoclassical economics generally) is given in Anti-Equilibrium , [ 28 ] based on the experiences of János Kornai with the failures of Communist central planning , although Michael Albert and Robin Hahnel later based their Parecon model on the same theory. [ 29 ]
The structural equilibrium model is a matrix-form computable general equilibrium model in new structural economics. [ 30 ] [ 31 ] This model is an extension of John von Neumann 's general equilibrium model (see Computable general equilibrium for details). Its computation can be performed using the R package GE. [ 32 ] The structural equilibrium model can be used for intertemporal equilibrium analysis, where time is treated as a label that differentiates between types of commodities and firms, meaning commodities are distinguished by when they are delivered and firms are distinguished by when they produce. The model can include factors such as taxes, money, endogenous production functions, and endogenous institutions. The structural equilibrium model can include excess tax burdens, meaning that the equilibrium in the model may not be Pareto optimal. When production functions and/or economic institutions are treated as endogenous variables, the general equilibrium is referred to as structural equilibrium. | https://en.wikipedia.org/wiki/General_equilibrium_theory |
In bioinformatics , the general feature format ( gene-finding format , generic feature format , GFF ) is a file format used for describing genes and other features of DNA , RNA and protein sequences.
The following versions of GFF exist: the original GFF2, the closely related gene transfer format (GTF), and the current GFF3.
GFF2/GTF had a number of deficiencies, notably that it can only represent two-level feature hierarchies and thus cannot handle the three-level hierarchy of gene → transcript → exon. GFF3 addresses this and other deficiencies. For example, it supports arbitrarily many hierarchical levels, and gives specific meanings to certain tags in the attributes field.
GTF is identical to version 2 of GFF. [ 1 ]
All GFF formats (GFF2, GFF3 and GTF) are tab delimited with 9 fields per line. They all share the same structure for the first 7 fields, while differing in the content and format of the ninth field . Some field names have been changed in GFF3 to avoid confusion. For example, the "seqid" field was formerly referred to as "sequence", which may be confused with a nucleotide or amino acid chain. The general structure is as follows:
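For reference, the nine tab-separated columns of a GFF3 record are, in order: seqid, source, type, start, end, score, strand, phase, and attributes; a short parsing sketch is given after the discussion of meta information below.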
Simply put, CDS means "CoDing Sequence". The exact meaning of the term is defined by Sequence Ontology (SO). According to the GFF3 specification: [ 2 ] [ 3 ]
For features of type "CDS", the phase indicates where the feature begins with reference to the reading frame. The phase is one of the integers 0, 1, or 2, indicating the number of bases that should be removed from the beginning of this feature to reach the first base of the next codon.
In GFF files, additional meta information can be included and follows after the ## directive. This meta information can detail GFF version, sequence region, or species (full list of meta data types can be found at Sequence Ontology specifications ).
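A minimal reading sketch in Python, assuming the standard nine tab-separated columns and GFF3's tag=value attribute syntax (the sample record itself is illustrative):

```python
# Minimal GFF3 reader sketch; the sample record below is illustrative.
sample = ("##gff-version 3\n"
          "ctg123\texample\tgene\t1000\t9000\t.\t+\t.\tID=gene0001;Name=EDEN\n")

def parse_gff3(text):
    records = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue                       # "##" lines carry meta information
        (seqid, source, ftype, start, end,
         score, strand, phase, attrs) = line.split("\t")
        attributes = dict(kv.split("=", 1) for kv in attrs.split(";") if kv)
        records.append({"seqid": seqid, "source": source, "type": ftype,
                        "start": int(start), "end": int(end), "score": score,
                        "strand": strand, "phase": phase,
                        "attributes": attributes})
    return records

print(parse_gff3(sample))
```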
Servers that generate this format:
Clients that use this format:
The modENCODE project hosts an online GFF3 validation tool with generous limits of 286.10 MB and 15 million lines.
The Genome Tools software collection contains a gff3validator tool that can be used offline to validate and possibly tidy GFF3 files. An online validation service is also available. | https://en.wikipedia.org/wiki/General_feature_format |
In logic , general frames (or simply frames ) are Kripke frames with an additional structure, which are used to model modal and intermediate logics. The general frame semantics combines the main virtues of Kripke semantics and algebraic semantics : it shares the transparent geometrical insight of the former, and robust completeness of the latter.
A modal general frame is a triple F = ⟨ F , R , V ⟩ {\displaystyle \mathbf {F} =\langle F,R,V\rangle } , where ⟨ F , R ⟩ {\displaystyle \langle F,R\rangle } is a Kripke frame (i.e., R {\displaystyle R} is a binary relation on the set F {\displaystyle F} ), and V {\displaystyle V} is a set of subsets of F {\displaystyle F} that is closed under the following: the Boolean operations of union, intersection, and complement, and the operation ◻ {\displaystyle \Box } defined by ◻ A = { x ∈ F ∣ ∀ y ( x R y → y ∈ A ) } {\displaystyle \Box A=\{x\in F\mid \forall y\,(x\mathrel {R} y\to y\in A)\}} .
They are thus a special case of fields of sets with additional structure . The purpose of V {\displaystyle V} is to restrict the allowed valuations in the frame: a model ⟨ F , R , ⊩ ⟩ {\displaystyle \langle F,R,\Vdash \rangle } based on the Kripke frame ⟨ F , R ⟩ {\displaystyle \langle F,R\rangle } is admissible in the general frame F {\displaystyle \mathbf {F} } , if { x ∈ F ∣ x ⊩ p } ∈ V {\displaystyle \{x\in F\mid x\Vdash p\}\in V} for every propositional variable p {\displaystyle p} .
The closure conditions on V {\displaystyle V} then ensure that { x ∈ F ∣ x ⊩ A } {\displaystyle \{x\in F\mid x\Vdash A\}} belongs to V {\displaystyle V} for every formula A {\displaystyle A} (not only a variable).
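A small finite sketch may make the definitions concrete; the worlds, the relation, and the choice of admissible sets below are illustrative assumptions:

```python
from itertools import combinations

# Illustrative finite general frame: worlds F, relation R, admissible sets V.
F = {0, 1, 2}
R = {(0, 1), (1, 2)}

def box(A):
    """Box operator: the worlds all of whose R-successors lie in A."""
    return frozenset(x for x in F if all(y in A for (u, y) in R if u == x))

# Here V is the full power set, i.e. the underlying Kripke frame; any
# family closed under union, complement, and box would serve as well.
V = {frozenset(c) for r in range(len(F) + 1) for c in combinations(sorted(F), r)}

def closed(V):
    """Check the closure conditions on the admissible sets."""
    return all(frozenset(F - A) in V and box(A) in V and frozenset(A | B) in V
               for A in V for B in V)

print(closed(V))   # True
```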
A formula A {\displaystyle A} is valid in F {\displaystyle \mathbf {F} } , if x ⊩ A {\displaystyle x\Vdash A} for all admissible valuations ⊩ {\displaystyle \Vdash } , and all points x ∈ F {\displaystyle x\in F} . A normal modal logic L {\displaystyle L} is valid in the frame F {\displaystyle \mathbf {F} } , if all axioms (or equivalently, all theorems ) of L {\displaystyle L} are valid in F {\displaystyle \mathbf {F} } . In this case we call F {\displaystyle \mathbf {F} } an L {\displaystyle L} - frame .
A Kripke frame ⟨ F , R ⟩ {\displaystyle \langle F,R\rangle } may be identified with a general frame in which all valuations are admissible: i.e., ⟨ F , R , P ( F ) ⟩ {\displaystyle \langle F,R,{\mathcal {P}}(F)\rangle } , where P ( F ) {\displaystyle {\mathcal {P}}(F)} denotes the power set of F {\displaystyle F} .
Types of frames
In full generality, general frames are hardly more than a fancy name for Kripke models ; in particular, the correspondence of modal axioms to properties on the accessibility relation is lost. This can be remedied by imposing additional conditions on the set of admissible valuations.
A frame F = ⟨ F , R , V ⟩ {\displaystyle \mathbf {F} =\langle F,R,V\rangle } is called differentiated , if for every pair of distinct points x ≠ y there is an admissible set A ∈ V with x ∈ A and y ∉ A ; tight , if whenever x is not related by R to y there is an admissible set A ∈ V with x ∈ ◻ A and y ∉ A ; compact , if every subset of V with the finite intersection property has a non-empty intersection; atomic , if V contains all singletons; refined , if it is differentiated and tight; and descriptive , if it is refined and compact.
Kripke frames are refined and atomic. However, infinite Kripke frames are never compact. Every finite differentiated or atomic frame is a Kripke frame.
Descriptive frames are the most important class of frames because of the duality theory (see below). Refined frames are useful as a common generalization of descriptive and Kripke frames.
Every Kripke model ⟨ F , R , ⊩ ⟩ {\displaystyle \langle F,R,{\Vdash }\rangle } induces the general frame ⟨ F , R , V ⟩ {\displaystyle \langle F,R,V\rangle } , where V {\displaystyle V} is defined as V = { { x ∈ F ∣ x ⊩ A } ∣ A is a formula } {\displaystyle V={\bigl \{}\{x\in F\mid x\Vdash A\}\mid A{\text{ is a formula}}{\bigr \}}} .
The fundamental truth-preserving operations of generated subframes, p-morphic images , and disjoint unions of Kripke frames have analogues on general frames. A frame G = ⟨ G , S , W ⟩ {\displaystyle \mathbf {G} =\langle G,S,W\rangle } is a generated subframe of a frame F = ⟨ F , R , V ⟩ {\displaystyle \mathbf {F} =\langle F,R,V\rangle } , if the Kripke frame ⟨ G , S ⟩ {\displaystyle \langle G,S\rangle } is a generated subframe of the Kripke frame ⟨ F , R ⟩ {\displaystyle \langle F,R\rangle } (i.e., G {\displaystyle G} is a subset of F {\displaystyle F} closed upwards under R {\displaystyle R} , and S = R ∩ G × G {\displaystyle S=R\cap G\times G} ), and W = { A ∩ G ∣ A ∈ V } {\displaystyle W=\{A\cap G\mid A\in V\}} .
A p-morphism (or bounded morphism ) f : F → G {\displaystyle f\colon \mathbf {F} \to \mathbf {G} } is a function from F {\displaystyle F} to G {\displaystyle G} that is a p-morphism of the Kripke frames ⟨ F , R ⟩ {\displaystyle \langle F,R\rangle } and ⟨ G , S ⟩ {\displaystyle \langle G,S\rangle } , and satisfies the additional constraint f − 1 [ A ] ∈ V {\displaystyle f^{-1}[A]\in V} for every A ∈ W {\displaystyle A\in W} .
The disjoint union of an indexed set of frames F i = ⟨ F i , R i , V i ⟩ {\displaystyle \mathbf {F} _{i}=\langle F_{i},R_{i},V_{i}\rangle } , i ∈ I {\displaystyle i\in I} , is the frame F = ⟨ F , R , V ⟩ {\displaystyle \mathbf {F} =\langle F,R,V\rangle } , where F {\displaystyle F} is the disjoint union of { F i ∣ i ∈ I } {\displaystyle \{F_{i}\mid i\in I\}} , R {\displaystyle R} is the union of { R i ∣ i ∈ I } {\displaystyle \{R_{i}\mid i\in I\}} , and V = { A ⊆ F ∣ ∀ i ∈ I ( A ∩ F i ∈ V i ) } {\displaystyle V=\{A\subseteq F\mid \forall i\in I\,(A\cap F_{i}\in V_{i})\}} .
The refinement of a frame F = ⟨ F , R , V ⟩ {\displaystyle \mathbf {F} =\langle F,R,V\rangle } is a refined frame G = ⟨ G , S , W ⟩ {\displaystyle \mathbf {G} =\langle G,S,W\rangle } defined as follows. We consider the equivalence relation x ∼ y ⟺ ∀ A ∈ V ( x ∈ A ⇔ y ∈ A ) , {\displaystyle x\sim y\iff \forall A\in V\,(x\in A\Leftrightarrow y\in A),}
and let G = F / ∼ {\displaystyle G=F/{\sim }} be the set of equivalence classes of ∼ {\displaystyle \sim } . Then we put ⌊ x ⌋ S ⌊ y ⌋ ⟺ ∀ A ∈ V ( x ∈ ◻ A → y ∈ A ) {\displaystyle \lfloor x\rfloor \mathrel {S} \lfloor y\rfloor \iff \forall A\in V\,(x\in \Box A\to y\in A)} , and W = { A / ∼ ∣ A ∈ V } {\displaystyle W=\{A/{\sim }\mid A\in V\}} .
Unlike Kripke frames, every normal modal logic L {\displaystyle L} is complete with respect to a class of general frames. This is a consequence of the fact that L {\displaystyle L} is complete with respect to a class of Kripke models ⟨ F , R , ⊩ ⟩ {\displaystyle \langle F,R,{\Vdash }\rangle } : as L {\displaystyle L} is closed under substitution, the general frame induced by ⟨ F , R , ⊩ ⟩ {\displaystyle \langle F,R,{\Vdash }\rangle } is an L {\displaystyle L} -frame. Moreover, every logic L {\displaystyle L} is complete with respect to a single descriptive frame. Indeed, L {\displaystyle L} is complete with respect to its canonical model, and the general frame induced by the canonical model (called the canonical frame of L {\displaystyle L} ) is descriptive.
General frames bear close connection to modal algebras . Let F = ⟨ F , R , V ⟩ {\displaystyle \mathbf {F} =\langle F,R,V\rangle } be a general frame. The set V {\displaystyle V} is closed under Boolean operations, therefore it is a subalgebra of the power set Boolean algebra ⟨ P ( F ) , ∩ , ∪ , − ⟩ {\displaystyle \langle {\mathcal {P}}(F),\cap ,\cup ,-\rangle } . It also carries an additional unary operation, ◻ {\displaystyle \Box } . The combined structure ⟨ V , ∩ , ∪ , − , ◻ ⟩ {\displaystyle \langle V,\cap ,\cup ,-,\Box \rangle } is a modal algebra, which is called the dual algebra of F {\displaystyle \mathbf {F} } , and denoted by F + {\displaystyle \mathbf {F} ^{+}} .
In the opposite direction, it is possible to construct the dual frame A + = ⟨ F , R , V ⟩ {\displaystyle \mathbf {A} _{+}=\langle F,R,V\rangle } to any modal algebra A = ⟨ A , ∧ , ∨ , − , ◻ ⟩ {\displaystyle \mathbf {A} =\langle A,\wedge ,\vee ,-,\Box \rangle } . The Boolean algebra ⟨ A , ∧ , ∨ , − ⟩ {\displaystyle \langle A,\wedge ,\vee ,-\rangle } has a Stone space , whose underlying set F {\displaystyle F} is the set of all ultrafilters of A {\displaystyle \mathbf {A} } . The set V {\displaystyle V} of admissible valuations in A + {\displaystyle \mathbf {A} _{+}} consists of the clopen subsets of F {\displaystyle F} , and the accessibility relation R {\displaystyle R} is defined by x R y ⟺ ∀ a ∈ A ( ◻ a ∈ x → a ∈ y ) {\displaystyle x\mathrel {R} y\iff \forall a\in A\,(\Box a\in x\to a\in y)}
for all ultrafilters x {\displaystyle x} and y {\displaystyle y} .
A frame and its dual validate the same formulas; hence the general frame semantics and algebraic semantics are in a sense equivalent. The double dual ( A + ) + {\displaystyle (\mathbf {A} _{+})^{+}} of any modal algebra is isomorphic to A {\displaystyle \mathbf {A} } itself. This is not true in general for double duals of frames, as the dual of every algebra is descriptive. In fact, a frame F {\displaystyle \mathbf {F} } is descriptive if and only if it is isomorphic to its double dual ( F + ) + {\displaystyle (\mathbf {F} ^{+})_{+}} .
It is also possible to define duals of p-morphisms on one hand, and modal algebra homomorphisms on the other hand. In this way the operators ( ⋅ ) + {\displaystyle (\cdot )^{+}} and ( ⋅ ) + {\displaystyle (\cdot )_{+}} become a pair of contravariant functors between the category of general frames, and the category of modal algebras. These functors provide a duality (called Jónsson–Tarski duality after Bjarni Jónsson and Alfred Tarski ) between the categories of descriptive frames, and modal algebras. This is a special case of a more general duality between complex algebras and fields of sets on relational structures .
The frame semantics for intuitionistic and intermediate logics can be developed in parallel to the semantics for modal logics. An intuitionistic general frame is a triple ⟨ F , ≤ , V ⟩ {\displaystyle \langle F,\leq ,V\rangle } , where ≤ {\displaystyle \leq } is a partial order on F {\displaystyle F} , and V {\displaystyle V} is a set of upper subsets ( cones ) of F {\displaystyle F} that contains the empty set, and is closed under intersection, union, and the operation → {\displaystyle \to } defined by A → B = { x ∈ F ∣ ∀ y ≥ x ( y ∈ A → y ∈ B ) } {\displaystyle A\to B=\{x\in F\mid \forall y\geq x\,(y\in A\to y\in B)\}} .
Validity and other concepts are then introduced similarly to modal frames, with a few changes necessary to accommodate for the weaker closure properties of the set of admissible valuations. In particular, an intuitionistic frame F = ⟨ F , ≤ , V ⟩ {\displaystyle \mathbf {F} =\langle F,\leq ,V\rangle } is called
Tight intuitionistic frames are automatically differentiated, hence refined.
The dual of an intuitionistic frame F = ⟨ F , ≤ , V ⟩ {\displaystyle \mathbf {F} =\langle F,\leq ,V\rangle } is the Heyting algebra F + = ⟨ V , ∩ , ∪ , → , ∅ ⟩ {\displaystyle \mathbf {F} ^{+}=\langle V,\cap ,\cup ,\to ,\emptyset \rangle } . The dual of a Heyting algebra A = ⟨ A , ∧ , ∨ , → , 0 ⟩ {\displaystyle \mathbf {A} =\langle A,\wedge ,\vee ,\to ,0\rangle } is the intuitionistic frame A + = ⟨ F , ≤ , V ⟩ {\displaystyle \mathbf {A} _{+}=\langle F,\leq ,V\rangle } , where F {\displaystyle F} is the set of all prime filters of A {\displaystyle \mathbf {A} } , the ordering ≤ {\displaystyle \leq } is inclusion , and V {\displaystyle V} consists of all subsets of F {\displaystyle F} of the form { x ∈ F ∣ a ∈ x } {\displaystyle \{x\in F\mid a\in x\}} ,
where a ∈ A {\displaystyle a\in A} . As in the modal case, ( ⋅ ) + {\displaystyle (\cdot )^{+}} and ( ⋅ ) + {\displaystyle (\cdot )_{+}} are a pair of contravariant functors, which make the category of Heyting algebras dually equivalent to the category of descriptive intuitionistic frames.
It is possible to construct intuitionistic general frames from transitive reflexive modal frames and vice versa, see modal companion . | https://en.wikipedia.org/wiki/General_frame |
In mathematics , the general linear group of degree n {\displaystyle n} is the set of n × n {\displaystyle n\times n} invertible matrices , together with the operation of ordinary matrix multiplication . This forms a group , because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible, with the identity matrix as the identity element of the group. The group is so named because the columns (and also the rows) of an invertible matrix are linearly independent , hence the vectors/points they define are in general linear position , and matrices in the general linear group take points in general linear position to points in general linear position.
To be more precise, it is necessary to specify what kind of objects may appear in the entries of the matrix. For example, the general linear group over R {\displaystyle \mathbb {R} } (the set of real numbers ) is the group of n × n {\displaystyle n\times n} invertible matrices of real numbers, and is denoted by GL n ( R ) {\displaystyle \operatorname {GL} _{n}(\mathbb {R} )} or GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} .
More generally, the general linear group of degree n {\displaystyle n} over any field F {\displaystyle F} (such as the complex numbers ), or a ring R {\displaystyle R} (such as the ring of integers ), is the set of n × n {\displaystyle n\times n} invertible matrices with entries from F {\displaystyle F} (or R {\displaystyle R} ), again with matrix multiplication as the group operation. [ 1 ] Typical notation is GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} or GL n ( F ) {\displaystyle \operatorname {GL} _{n}(F)} , or simply GL ( n ) {\displaystyle \operatorname {GL} (n)} if the field is understood.
More generally still, the general linear group of a vector space GL ( V ) {\displaystyle \operatorname {GL} (V)} is the automorphism group , not necessarily written as matrices.
The special linear group , written SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} or SL n ( F ) {\displaystyle \operatorname {SL} _{n}(F)} , is the subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} consisting of matrices with a determinant of 1.
The group GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} and its subgroups are often called linear groups or matrix groups (the automorphism group GL ( V ) {\displaystyle \operatorname {GL} (V)} is a linear group but not a matrix group). These groups are important in the theory of group representations , and also arise in the study of spatial symmetries and symmetries of vector spaces in general, as well as the study of polynomials . The modular group may be realised as a quotient of the special linear group SL ( 2 , Z ) {\displaystyle \operatorname {SL} (2,\mathbb {Z} )} .
If n ≥ 2 {\displaystyle n\geq 2} , then the group GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} is not abelian .
If V {\displaystyle V} is a vector space over the field F {\displaystyle F} , the general linear group of V {\displaystyle V} , written GL ( V ) {\displaystyle \operatorname {GL} (V)} or Aut ( V ) {\displaystyle \operatorname {Aut} (V)} , is the group of all automorphisms of V {\displaystyle V} , i.e. the set of all bijective linear transformations V → V {\displaystyle V\to V} , together with functional composition as group operation. If V {\displaystyle V} has finite dimension n {\displaystyle n} , then GL ( V ) {\displaystyle \operatorname {GL} (V)} and GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} are isomorphic . The isomorphism is not canonical; it depends on a choice of basis in V {\displaystyle V} . Given a basis { e 1 , … , e n } {\displaystyle \{e_{1},\dots ,e_{n}\}} of V {\displaystyle V} and an automorphism T {\displaystyle T} in GL ( V ) {\displaystyle \operatorname {GL} (V)} , we have then for every basis vector e i that T ( e i ) = ∑ j a i j e j {\displaystyle T(e_{i})=\sum _{j}a_{ij}e_{j}}
for some constants a i j {\displaystyle a_{ij}} in F {\displaystyle F} ; the matrix corresponding to T {\displaystyle T} is then just the matrix with entries given by the a j i {\displaystyle a_{ji}} .
In a similar way, for a commutative ring R {\displaystyle R} the group GL ( n , R ) {\displaystyle \operatorname {GL} (n,R)} may be interpreted as the group of automorphisms of a free R {\displaystyle R} -module M {\displaystyle M} of rank n {\displaystyle n} . One can also define GL( M ) for any R {\displaystyle R} -module, but in general this is not isomorphic to GL ( n , R ) {\displaystyle \operatorname {GL} (n,R)} (for any n {\displaystyle n} ).
Over a field F {\displaystyle F} , a matrix is invertible if and only if its determinant is nonzero. Therefore, an alternative definition of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} is as the group of matrices with nonzero determinant.
Over a commutative ring R {\displaystyle R} , more care is needed: a matrix over R {\displaystyle R} is invertible if and only if its determinant is a unit in R {\displaystyle R} , that is, if its determinant is invertible in R {\displaystyle R} . Therefore, GL ( n , R ) {\displaystyle \operatorname {GL} (n,R)} may be defined as the group of matrices whose determinants are units.
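For instance (a small illustration with made-up matrices), over R = Z the units are +1 and −1, so an integer matrix lies in GL(n, Z) exactly when its determinant is ±1:

```python
# Over the ring Z the units are +1 and -1, so an integer matrix is
# invertible over Z exactly when det = +/-1. Matrices are illustrative.
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

m1 = [[2, 1], [1, 1]]   # det = 1: in GL(2, Z); inverse [[1, -1], [-1, 2]] is integral
m2 = [[2, 0], [0, 1]]   # det = 2: invertible over Q, but 2 is not a unit in Z

print(det2(m1), det2(m2))   # 1 2
```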
Over a non-commutative ring R {\displaystyle R} , determinants are not at all well behaved. In this case, GL ( n , R ) {\displaystyle \operatorname {GL} (n,R)} may be defined as the unit group of the matrix ring M ( n , R ) {\displaystyle M(n,R)} .
The general linear group GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} over the field of real numbers is a real Lie group of dimension n 2 {\displaystyle n^{2}} . To see this, note that the set of all n × n {\displaystyle n\times n} real matrices, M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} , forms a real vector space of dimension n 2 {\displaystyle n^{2}} . The subset GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} consists of those matrices whose determinant is non-zero. The determinant is a polynomial map, and hence GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} is an open affine subvariety of M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} (a non-empty open subset of M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} in the Zariski topology ), and therefore [ 2 ] a smooth manifold of the same dimension.
The Lie algebra of GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} , denoted g l n , {\displaystyle {\mathfrak {gl}}_{n},} consists of all n × n {\displaystyle n\times n} real matrices with the commutator serving as the Lie bracket.
As a manifold, GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} is not connected but rather has two connected components : the matrices with positive determinant and the ones with negative determinant. The identity component , denoted by GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} , consists of the real n × n {\displaystyle n\times n} matrices with positive determinant. This is also a Lie group of dimension n 2 {\displaystyle n^{2}} ; it has the same Lie algebra as GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} .
The polar decomposition , which is unique for invertible matrices, shows that there is a homeomorphism between GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} and the Cartesian product of O ( n ) {\displaystyle \operatorname {O} (n)} with the set of positive-definite symmetric matrices. Similarly, it shows that there is a homeomorphism between GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} and the Cartesian product of SO ( n ) {\displaystyle \operatorname {SO} (n)} with the set of positive-definite symmetric matrices. Because the latter is contractible, the fundamental group of GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} is isomorphic to that of SO ( n ) {\displaystyle \operatorname {SO} (n)} .
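A quick numerical illustration of the decomposition (the matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import polar

# Polar decomposition A = U P of an invertible real matrix: U orthogonal,
# P symmetric positive-definite. The matrix A is an arbitrary example.
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])               # det = 5.5 > 0, so A is in GL+(2, R)

U, P = polar(A)                          # right polar decomposition
print(np.allclose(A, U @ P))             # True
print(np.allclose(U.T @ U, np.eye(2)))   # U is orthogonal
print(np.all(np.linalg.eigvalsh(P) > 0)) # P is positive-definite
```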
The homeomorphism also shows that the group GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} is noncompact . “The” [ 3 ] maximal compact subgroup of GL ( n , R ) {\displaystyle \operatorname {GL} (n,\mathbb {R} )} is the orthogonal group O ( n ) {\displaystyle \operatorname {O} (n)} , while "the" maximal compact subgroup of GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} is the special orthogonal group SO ( n ) {\displaystyle \operatorname {SO} (n)} . As for SO ( n ) {\displaystyle \operatorname {SO} (n)} , the group GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} is not simply connected (except when n = 1 {\displaystyle n=1} ), but rather has a fundamental group isomorphic to Z {\displaystyle \mathbb {Z} } for n = 2 {\displaystyle n=2} or Z 2 {\displaystyle \mathbb {Z} _{2}} for n > 2 {\displaystyle n>2} .
The general linear group over the field of complex numbers , GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} , is a complex Lie group of complex dimension n 2 {\displaystyle n^{2}} . As a real Lie group (through realification) it has dimension 2 n 2 {\displaystyle 2n^{2}} . The set of all real matrices forms a real Lie subgroup. These correspond to the inclusions GL ( n , R ) < GL ( n , C ) < GL ( 2 n , R ) , {\displaystyle \operatorname {GL} (n,\mathbb {R} )<\operatorname {GL} (n,\mathbb {C} )<\operatorname {GL} (2n,\mathbb {R} ),}
which have real dimensions n 2 {\displaystyle n^{2}} , 2 n 2 {\displaystyle 2n^{2}} , and ( 2 n ) 2 = 4 n 2 {\displaystyle (2n)^{2}=4n^{2}} . Complex n {\displaystyle n} -dimensional matrices can be characterized as real 2 n {\displaystyle 2n} -dimensional matrices that preserve a linear complex structure ; that is, matrices that commute with a matrix J {\displaystyle J} such that J 2 = − I {\displaystyle J^{2}=-I} , where J {\displaystyle J} corresponds to multiplying by the imaginary unit i {\displaystyle i} .
The Lie algebra corresponding to GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} consists of all n × n {\displaystyle n\times n} complex matrices with the commutator serving as the Lie bracket.
Unlike the real case, GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} is connected . This follows, in part, since the multiplicative group of complex numbers C × {\displaystyle \mathbb {C} ^{\times }} is connected. The group manifold GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} is not compact; rather its maximal compact subgroup is the unitary group U ( n ) {\displaystyle \operatorname {U} (n)} . As for U ( n ) {\displaystyle \operatorname {U} (n)} , the group manifold GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} is not simply connected but has a fundamental group isomorphic to Z {\displaystyle \mathbb {Z} } .
If F {\displaystyle F} is a finite field with q {\displaystyle q} elements, then we sometimes write GL ( n , q ) {\displaystyle \operatorname {GL} (n,q)} instead of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} . When p is prime, GL ( n , p ) {\displaystyle \operatorname {GL} (n,p)} is the outer automorphism group of the group Z p n {\displaystyle \mathbb {Z} _{p}^{n}} , and also the automorphism group, because Z p n {\displaystyle \mathbb {Z} _{p}^{n}} is abelian, so the inner automorphism group is trivial.
The order of GL ( n , q ) {\displaystyle \operatorname {GL} (n,q)} is: ( q n − 1 ) ( q n − q ) ( q n − q 2 ) ⋯ ( q n − q n − 1 ) . {\displaystyle (q^{n}-1)(q^{n}-q)(q^{n}-q^{2})\cdots (q^{n}-q^{n-1}).}
This can be shown by counting the possible columns of the matrix: the first column can be anything but the zero vector; the second column can be anything but the multiples of the first column; and in general, the k {\displaystyle k} th column can be any vector not in the linear span of the first k − 1 {\displaystyle k-1} columns. In q -analog notation, this is [ n ] q ! ( q − 1 ) n q ( n 2 ) {\displaystyle [n]_{q}!(q-1)^{n}q^{n \choose 2}} .
For example, GL(3, 2) has order (8 − 1)(8 − 2)(8 − 4) = 168 . It is the automorphism group of the Fano plane and of the group Z 2 3 {\displaystyle \mathbb {Z} _{2}^{3}} . This group is also isomorphic to PSL(2, 7) .
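The counting argument is easy to confirm by brute force for small cases. A Python sketch (NumPy assumed; order_gl is an illustrative helper, not a library function):

    import itertools
    import numpy as np

    def order_gl(n, q):
        # Column counting: the k-th column avoids the span of the first k-1.
        result = 1
        for k in range(n):
            result *= q**n - q**k
        return result

    # Brute-force check for GL(3, 2): count invertible 3x3 matrices over F_2.
    count = 0
    for entries in itertools.product([0, 1], repeat=9):
        M = np.array(entries).reshape(3, 3)
        # Over F_2, a matrix is invertible iff its integer determinant is odd.
        if round(np.linalg.det(M)) % 2 == 1:
            count += 1

    print(order_gl(3, 2), count)   # 168 168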
More generally, one can count points of the Grassmannian over F {\displaystyle F} : in other words, the number of subspaces of a given dimension k {\displaystyle k} . This requires only finding the order of the stabilizer subgroup of one such subspace and dividing it into the formula just given, by the orbit-stabilizer theorem .
These formulas are connected to the Schubert decomposition of the Grassmannian, and are q -analogs of the Betti numbers of complex Grassmannians. This was one of the clues leading to the Weil conjectures .
Note that in the limit q → 1 {\displaystyle q\to 1} the order of GL ( n , q ) {\displaystyle \operatorname {GL} (n,q)} goes to 0. But under the correct procedure, dividing by ( q − 1 ) n {\displaystyle (q-1)^{n}} before taking the limit, we see that the result is n ! {\displaystyle n!} , the order of the symmetric group (see Lorscheid's article). In the philosophy of the field with one element , one thus interprets the symmetric group as the general linear group over the field with one element: S n ≅ GL ( n , 1 ) {\displaystyle S_{n}\cong \operatorname {GL} (n,1)} .
The general linear group over a prime field, GL ( ν , p ) {\displaystyle \operatorname {GL} (\nu ,p)} , was constructed and its order computed by Évariste Galois in 1832, in his last letter (to Chevalier) and second (of three) attached manuscripts, which he used in the context of studying the Galois group of the general equation of order p ν {\displaystyle p^{\nu }} . [ 4 ]
The special linear group , SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} , is the group of all matrices with determinant 1. These matrices are special in that they lie on a subvariety : they satisfy a polynomial equation (as the determinant is a polynomial in the entries). Matrices of this type form a group as the determinant of the product of two matrices is the product of the determinants of each matrix.
If we write F × {\displaystyle F^{\times }} for the multiplicative group of F {\displaystyle F} (that is, F {\displaystyle F} excluding 0), then the determinant is a group homomorphism det : GL ( n , F ) → F × {\displaystyle \det \colon \operatorname {GL} (n,F)\to F^{\times }}
that is surjective and whose kernel is the special linear group. Thus, SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} is a normal subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} , and by the first isomorphism theorem , GL ( n , F ) / SL ( n , F ) {\displaystyle \operatorname {GL} (n,F)/\operatorname {SL} (n,F)} is isomorphic to F × {\displaystyle F^{\times }} . In fact, GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} can be written as a semidirect product : GL ( n , F ) = SL ( n , F ) ⋊ F × {\displaystyle \operatorname {GL} (n,F)=\operatorname {SL} (n,F)\rtimes F^{\times }}
The special linear group is also the derived group (also known as commutator subgroup) of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} (for a field or a division ring F {\displaystyle F} ), provided that n ≠ 2 {\displaystyle n\neq 2} or F {\displaystyle F} is not the field with two elements . [ 5 ]
When F {\displaystyle F} is R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } , SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} is a Lie subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} of dimension n 2 − 1 {\displaystyle n^{2}-1} . The Lie algebra of SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} consists of all n × n {\displaystyle n\times n} matrices over F {\displaystyle F} with vanishing trace . The Lie bracket is given by the commutator .
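That the traceless matrices are closed under the commutator follows from tr(AB) = tr(BA); a small numerical sanity check (Python with NumPy, illustrative only):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    A = rng.standard_normal((n, n)); A -= (np.trace(A) / n) * np.eye(n)
    B = rng.standard_normal((n, n)); B -= (np.trace(B) / n) * np.eye(n)

    C = A @ B - B @ A                          # the Lie bracket on sl(n, R)
    print(abs(np.trace(A)), abs(np.trace(B)))  # ~0: A and B are traceless
    print(abs(np.trace(C)))                    # ~0: tr(AB) = tr(BA), so sl(n) is closed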
The special linear group SL ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} can be characterized as the group of volume and orientation-preserving linear transformations of R n {\displaystyle \mathbb {R} ^{n}} .
The group SL ( n , C ) {\displaystyle \operatorname {SL} (n,\mathbb {C} )} is simply connected, while SL ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} is not. SL ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} has the same fundamental group as GL + ( n , R ) {\displaystyle \operatorname {GL} ^{+}(n,\mathbb {R} )} , that is, Z {\displaystyle \mathbb {Z} } for n = 2 {\displaystyle n=2} and Z 2 {\displaystyle \mathbb {Z} _{2}} for n > 2 {\displaystyle n>2} .
The set of all invertible diagonal matrices forms a subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} isomorphic to ( F × ) n {\displaystyle (F^{\times })^{n}} . In fields like R {\displaystyle \mathbb {R} } and C {\displaystyle \mathbb {C} } , these correspond to rescaling the space; the so-called dilations and contractions.
A scalar matrix is a diagonal matrix which is a constant times the identity matrix . The set of all nonzero scalar matrices forms a subgroup of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} isomorphic to F × {\displaystyle F^{\times }} . This group is the center of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} . In particular, it is a normal, abelian subgroup.
The center of SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} is simply the set of all scalar matrices with unit determinant, and is isomorphic to the group of n {\displaystyle n} th roots of unity in the field F {\displaystyle F} .
The so-called classical groups are subgroups of GL ( V ) {\displaystyle \operatorname {GL} (V)} which preserve some sort of bilinear form on a vector space V {\displaystyle V} . These include the orthogonal groups , which preserve a non-degenerate symmetric bilinear form, the symplectic groups , which preserve a non-degenerate alternating form, and the unitary groups , which (over C {\displaystyle \mathbb {C} } ) preserve a non-degenerate Hermitian form.
These groups provide important examples of Lie groups.
The projective linear group PGL ( n , F ) {\displaystyle \operatorname {PGL} (n,F)} and the projective special linear group PSL ( n , F ) {\displaystyle \operatorname {PSL} (n,F)} are the quotients of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} and SL ( n , F ) {\displaystyle \operatorname {SL} (n,F)} by their centers (which consist of the multiples of the identity matrix therein); they are the induced action on the associated projective space .
The affine group Aff ( n , F ) {\displaystyle \operatorname {Aff} (n,F)} is an extension of GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} by the group of translations in F n {\displaystyle F^{n}} . It can be written as a semidirect product : Aff ( n , F ) = GL ( n , F ) ⋉ F n {\displaystyle \operatorname {Aff} (n,F)=\operatorname {GL} (n,F)\ltimes F^{n}}
where GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} acts on F n {\displaystyle F^{n}} in the natural manner. The affine group can be viewed as the group of all affine transformations of the affine space underlying the vector space F n {\displaystyle F^{n}} .
One has analogous constructions for other subgroups of the general linear group: for instance, the special affine group is the subgroup defined by the semidirect product, SL ( n , F ) ⋉ F n {\displaystyle \operatorname {SL} (n,F)\ltimes F^{n}} , and the Poincaré group is the affine group associated to the Lorentz group , O ( 1 , 3 , F ) ⋉ F n {\displaystyle \operatorname {O} (1,3,F)\ltimes F^{n}} .
The general semilinear group Γ L ( n , F ) {\displaystyle \operatorname {\Gamma L} (n,F)} is the group of all invertible semilinear transformations , and contains GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} . A semilinear transformation is a transformation which is linear "up to a twist", meaning "up to a field automorphism under scalar multiplication". It can be written as a semidirect product: Γ L ( n , F ) = Gal ( F ) ⋉ GL ( n , F ) {\displaystyle \operatorname {\Gamma L} (n,F)=\operatorname {Gal} (F)\ltimes \operatorname {GL} (n,F)}
where Gal ( F ) {\displaystyle \operatorname {Gal} (F)} is the Galois group of F {\displaystyle F} (over its prime field ), which acts on GL ( n , F ) {\displaystyle \operatorname {GL} (n,F)} by the Galois action on the entries.
The main interest of Γ L ( n , F ) {\displaystyle \operatorname {\Gamma L} (n,F)} is that the associated projective semilinear group P Γ L ( n , F ) {\displaystyle \operatorname {P\Gamma L} (n,F)} , which contains PGL ( n , F ) {\displaystyle \operatorname {PGL} (n,F)} , is the collineation group of projective space , for n > 2 {\displaystyle n>2} , and thus semilinear maps are of interest in projective geometry .
If one removes the restriction of the determinant being non-zero, the resulting algebraic structure is a monoid , usually called the full linear monoid , [ 6 ] [ 7 ] [ 8 ] but occasionally also full linear semigroup , [ 9 ] general linear monoid [ 10 ] [ 11 ] etc. It is actually a regular semigroup . [ 7 ]
The infinite general linear group or stable general linear group is the direct limit of the inclusions GL ( n , F ) → GL ( n + 1 , F ) {\displaystyle \operatorname {GL} (n,F)\to \operatorname {GL} (n+1,F)} as the upper left block matrix . It is denoted by either GL ( F ) {\displaystyle \operatorname {GL} (F)} or GL ( ∞ , F ) {\displaystyle \operatorname {GL} (\infty ,F)} , and can also be interpreted as invertible infinite matrices which differ from the identity matrix in only finitely many places. [ 12 ]
It is used in algebraic K-theory to define K 1 , and over the reals has a well-understood topology, thanks to Bott periodicity .
It should not be confused with the space of (bounded) invertible operators on a Hilbert space , which is a larger group, and topologically much simpler, namely contractible – see Kuiper's theorem . | https://en.wikipedia.org/wiki/General_linear_group |
A general protection fault ( GPF ) in the x86 instruction set architectures (ISAs) is a fault (a type of interrupt ) initiated by ISA-defined protection mechanisms in response to an access violation caused by some running code, either in the kernel or a user program. The mechanism is first described in Intel manuals and datasheets for the Intel 80286 CPU, which was introduced in 1983; it is also described in section 9.8.13 of the Intel 80386 programmer's reference manual from 1986. A general protection fault is implemented as an interrupt (vector number 13 (0Dh)). Some operating systems may also classify some exceptions not related to access violations, such as illegal opcode exceptions, as general protection faults, even though they have nothing to do with memory protection.

If a CPU detects a protection violation, it stops executing the code and sends a GPF interrupt. In most cases, the operating system removes the failing process from the execution queue, signals the user, and continues executing other processes. If, however, the operating system fails to catch the general protection fault, i.e. another protection violation occurs before the operating system returns from the previous GPF interrupt, the CPU signals a double fault , stopping the operating system. If yet another failure ( triple fault ) occurs, the CPU is unable to recover; since the 80286, the CPU enters a special halt state called "Shutdown", which can only be exited through a hardware reset .

The IBM PC AT , the first PC-compatible system to contain an 80286, has hardware that detects the Shutdown state and automatically resets the CPU when it occurs. All descendants of the PC AT do the same, so in a PC, a triple fault causes an immediate system reset.
In Microsoft Windows , the general protection fault presents with varied language, depending on product version:
Terminating current application.
If the problem persists, contact the program vendor.
An error log is being created.
If you continue to experience problems, try restarting your computer.
If you were in the middle of something, the information you were working on might be lost.
[...]
For more information about this error, click here .
A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available.
1. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting.
2. Create a DWORD, name it DontShowUI and leave its value at 0 (if it already exists, set its value to 0).
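Equivalently, the value can be set programmatically. A hedged sketch using Python's standard winreg module (Windows-only; writing under HKEY_LOCAL_MACHINE requires an elevated process):

    # Windows-only; run from an elevated (administrator) process.
    import winreg

    key_path = r"SOFTWARE\Microsoft\Windows\Windows Error Reporting"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                            winreg.KEY_SET_VALUE) as key:
        # Value 0 matches the manual steps above.
        winreg.SetValueEx(key, "DontShowUI", 0, winreg.REG_DWORD, 0)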
In Windows 95, 98 and Me, there is an alternate error message, used mostly with Windows 3.x programs: "An error has occurred in your program. To keep working anyway, click Ignore and save your work in a new file. To quit this program, click Close. You will lose information you entered since your last save." Clicking "Close" results in one of the error messages above, depending on Windows version. "Ignore" sometimes does this too.
In Linux and other Unices , the errors are reported separately (e.g. segmentation fault for memory errors).
In memory errors, the faulting program accesses memory that it should not access. Examples include:
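The bulleted examples are missing from this extraction, but a typical case is easy to demonstrate. The following Python sketch performs a deliberate write to address 0 via ctypes; running it will normally crash the interpreter, reported as a segmentation fault (SIGSEGV) on Linux or an access violation on Windows:

    import ctypes

    # Write one byte to the null address: an access the program must not make.
    # The operating system reports this through its usual memory-error channel.
    ctypes.memmove(0, b"\x00", 1)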
However, many modern operating systems implement their memory access-control schemes via paging instead of segmentation, so it is often the case that invalid memory references in operating systems such as Windows are reported via page faults instead of general protection faults. Operating systems typically provide an abstraction layer (such as exception handling or signals) that hides whatever internal processor mechanism was used to raise a memory access error from a program, for the purposes of providing a standard interface for handling many different types of processor-generated error conditions.
In terms of the x86 architecture, general protection faults are specific to segmentation-based protection when it comes to memory accesses. However, general protection faults are still used to report other protection violations (aside from memory access violations) when paging is used, such as the use of instructions not accessible from the current privilege level (CPL).
While it is theoretically possible for an operating system to utilize both paging and segmentation, in practice common operating systems rely on paging for the bulk of their memory access-control needs.
There are some things on a computer which are reserved for the exclusive use of the operating system . If a program which is not part of the operating system attempts to use one of these features, it may cause a general protection fault.
Additionally, there are storage locations which are reserved both for the operating system and the processor itself. As a consequence of their reservation, they are read-only and an attempt to write data to them by an unprivileged program produces an error.
General protection faults are raised by the processor when a protected instruction is encountered which exceeds the permission level of the currently executing task, either because a user-mode program is attempting a protected instruction, or because the operating system has issued a request which would put the processor into an undefined state.
General protection faults are caught and handled by modern operating systems. Generally, if the fault originated in a user-mode program, the user-mode program is terminated. If, however, the fault originated in a core system driver or the operating system itself, the operating system usually saves diagnostic information either to a file or to the screen and stops operating. It either restarts the computer or displays an error screen , such as a Blue Screen of Death or kernel panic .
Segment limits can be exceeded:
Segment permissions can be violated by:
This can occur when:
Faults can occur in the task state segment (TSS) structure when:
Other causes of general protection faults are: | https://en.wikipedia.org/wiki/General_protection_fault |
Albert Einstein 's discovery of the gravitational field equations of general relativity and David Hilbert 's almost simultaneous derivation of the theory using an elegant variational principle , [ B 1 ] : 170 during a period when the two corresponded frequently, have led to numerous historical analyses of their interaction. These analyses came to be called a priority dispute . [ B 2 ]
The events of interest to historians of the dispute occurred in late 1915. At that time
Albert Einstein, now perhaps the most famous modern scientist, [ 1 ] had been working on gravitational theory since 1912. He had "developed and published much of the framework of general relativity, including the ideas that gravitational effects require a tensor theory, that these effects determine a non-Euclidean geometry , that this metric role of gravitation results in a redshift and in the bending of light passing near a massive body." [ 2 ] While David Hilbert never became a celebrity, he was seen as a mathematician unequaled in his generation, [ 3 ] with an especially wide impact on mathematics . When he met Einstein in the summer of 1915, Hilbert had started working on an axiomatic system for a unified field theory, combining Gustav Mie's ideas on electromagnetism with Einstein's general relativity. [ 2 ] As the historians referenced below recount, Einstein and Hilbert corresponded extensively throughout the fall of 1915, culminating in lectures by both men in late November that were later published. The historians debate the consequences of this friendly correspondence for the resulting publications.
The following facts are well established and referable:
Historians have discussed Hilbert's view of his interaction with Einstein.
Walter Isaacson points out that Hilbert's publication on his derivation of the equations of general relativity included the text: “The differential equations of gravitation that result are, as it seems to me, in agreement with the magnificent theory of general relativity established by Einstein.” [ 9 ]
Wuensch [ B 3 ] points out that Hilbert refers to the field equations of gravity as "meine Theorie" ("my theory") in his 6 February 1916 letter to Schwarzschild. This, however, is not at issue, since no one disputes that Hilbert had his own "theory", which Einstein criticized as naive and overly ambitious. Hilbert's theory was based on the work of Mie combined with Einstein's principle of general covariance, but applied to matter and electromagnetism as well as gravity.
Mehra [ B 4 ] and Bjerknes [ B 5 ] point out that Hilbert's 1924 version of the article contained the sentence "... und andererseits auch Einstein, obwohl wiederholt von abweichenden und unter sich verschiedenen Ansätzen ausgehend, kehrt schließlich in seinen letzten Publikationen geradenwegs zu den Gleichungen meiner Theorie zurück" ("Einstein [...] in his last publications ultimately returns directly to the equations of my theory"). [ 10 ] These statements of course do not have any particular bearing on the matter at issue. No one disputes that Hilbert had "his" theory, which was a very ambitious attempt to combine gravity with a theory of matter and electromagnetism along the lines of Mie's theory, and that his equations for gravitation agreed with those that Einstein presented beginning in Einstein's 25 November paper (which Hilbert refers to as Einstein's later papers, to distinguish them from Einstein's previous theories). None of this bears on the precise origin of the trace term in the Einstein field equations (a feature of the equations that, while theoretically significant, does not have any effect on the vacuum equations, from which all the empirical tests proposed by Einstein were derived).
Sauer says "the independence of Einstein's discovery was never a point of dispute between Einstein and Hilbert ... Hilbert claimed priority for the introduction of the Riemann scalar into the action principle and the derivation of the field equations from it," [ B 6 ] (Sauer mentions a letter and a draft letter where Hilbert defends his priority for the action functional) "and Einstein admitted publicly that Hilbert (and Lorentz) had succeeded in giving the equations of general relativity a particularly lucid form by deriving them from a single variational principle" [ citation needed ] . Sauer also stated, "And in a draft of a letter to Weyl, dated 22 April 1918, written after he had read the proofs of the first edition of Weyl's 'Raum-Zeit-Materie' Hilbert also objected to being slighted in Weyl's exposition. In this letter again 'in particular the use of the Riemannian curvature [scalar] in the Hamiltonian integral' ('insbesondere die Verwendung der Riemannschen Krümmung unter dem Hamiltonschen Integral') was claimed as one of his original contributions. SUB Cod. Ms. Hilbert 457/17." [ B 6 ]
While Hilbert's paper was submitted five days earlier than Einstein's, it only appeared in 1916, after Einstein's field equations paper had appeared in print. For this reason, there was no good reason to suspect plagiarism on either side. In 1978, an 18 November 1915 letter from Einstein to Hilbert [ citation needed ] resurfaced, in which Einstein thanked Hilbert for sending an explanation of Hilbert's work. This was not unexpected to most scholars, who were well aware of the correspondence between Hilbert and Einstein that November, and who continued to hold the view expressed by Albrecht Fölsing in his Einstein biography:
In November, when Einstein was totally absorbed in his theory of gravitation, he essentially only corresponded with Hilbert, sending Hilbert his publications and, on November 18, thanking him for a draft of his article. Einstein must have received that article immediately before writing this letter. Could Einstein, casting his eye over Hilbert's paper, have discovered the term which was still lacking in his own equations, and thus ' nostrified ' Hilbert? [ B 7 ]
In the very next sentence, after asking the rhetorical question, Fölsing answers it with "This is not really probable...", and then goes on to explain in detail why
[Einstein's] eventual derivation of the equations was a logical development of his earlier arguments—in which, despite all the mathematics, physical principles invariably predominated. His approach was thus quite different from Hilbert's, and Einstein's achievements can, therefore, surely be regarded as authentic.
In their 1997 Science paper, [ B 2 ] Corry, Renn and Stachel quote the above passage and comment that "the arguments by which Einstein is exculpated are rather weak, turning on his slowness in fully grasping Hilbert's mathematics", and so they attempted to find more definitive evidence of the relationship between the work of Hilbert and Einstein, basing their work largely on a recently discovered pre-print of Hilbert's paper. A discussion of the controversy around this paper is given below.
Those who contend that Einstein's paper was motivated by the information obtained from Hilbert have referred to the following sources:
Those who contend that Einstein's work takes priority over Hilbert's, [ B 2 ] or that both authors worked independently [ B 8 ] have used the following arguments:
This section cites notable publications where people have expressed a view on the issues outlined above.
From Fölsing's 1993 (English translation 1998) [ B 7 ] Einstein biography: "Hilbert, like all his other colleagues, acknowledged Einstein as the sole creator of relativity theory."
In 1997, Corry, Renn and Stachel published a three-page article in Science entitled "Belated Decision in the Hilbert-Einstein Priority Dispute" concluding that Hilbert had not anticipated Einstein's equations. [ B 2 ] [ B 11 ]
Friedwardt Winterberg , [ B 12 ] a professor of physics at the University of Nevada, Reno , disputed [2] these conclusions, observing that the galley proofs of Hilbert's articles had been tampered with: part of one page had been cut off. He goes on to argue that the removed part of the article contained the equations that Einstein later published, and he wrote that "the cut off part of the proofs suggests a crude attempt by someone to falsify the historical record". Science declined to publish this; it was printed in revised form in Zeitschrift für Naturforschung , with a dateline of 5 June 2003. Winterberg criticized Corry, Renn and Stachel for having omitted the fact that part of Hilbert's proofs was cut off. Winterberg wrote that the correct field equations are still present on the existing pages of the proofs in various equivalent forms. In this paper, Winterberg asserted that Einstein sought the help of Hilbert and Klein to find the correct field equation , without mentioning the research of Fölsing (1997) and Sauer (1999), according to which Hilbert invited Einstein to Göttingen to give a week of lectures on general relativity in June 1915, which however does not necessarily contradict Winterberg. Hilbert at the time was looking for physics problems to solve.
A short reply to Winterberg's article can be found at [3] Archived 2006-08-06 at the Wayback Machine ; the original long reply can be accessed via the Internet Archive at [4] . In this reply, Winterberg's hypothesis is called " paranoid " and "speculative". Corry et al. offer the following alternative speculation: "it is possible that Hilbert himself cropped off the top of p. 7 to include it with the three sheets he sent Klein, in order that they not end in mid-sentence." [ B 13 ]
As of September 2006, the Max Planck Institute of Berlin has replaced the short reply with a note [5] saying that the Max Planck Society "distances itself from statements published on this website [...] concerning Prof. Friedwart Winterberg" and stating that "the Max Planck Society will not take a position in [this] scientific dispute".
Ivan Todorov, in a paper published on ArXiv, [ B 8 ] says of the debate:
In the paper recommended by Todorov as calm and non-confrontational, Tilman Sauer [ B 6 ] concludes that the printer's proofs show conclusively that Einstein did not plagiarize Hilbert, stating
Max Born's letters to David Hilbert, quoted in Wuensch, are quoted by Todorov as evidence that Einstein's thinking towards general covariance was influenced by the competition with Hilbert.
Todorov ends his paper by stating:
Anatoly Logunov (a former vice president of the Soviet Academy of Sciences [ 11 ] and at the time the scientific advisor of the Institute for High Energy Physics [ 12 ] ) is the author of a book about Poincaré's relativity theory and coauthor, with Mestvirishvili and Petrov, of an article rejecting the conclusions of the Corry/Renn/Stachel paper. They discuss both Einstein's and Hilbert's papers, claiming that Einstein and Hilbert arrived at the correct field equations independently. Specifically, they conclude that:
Daniela Wuensch, [ B 3 ] a historian of science and a Hilbert and Kaluza expert, responded to Bjerknes, Winterberg and Logunov's criticisms of the Corry/Renn/Stachel paper in a book which appeared in 2005, wherein she defends the view that the cut to Hilbert's printer proofs was made in recent times. Moreover, she presents a theory about what might have been on the missing part of the proofs, based upon her knowledge of Hilbert's papers and lectures.
She defends the view that knowledge of Hilbert's 16 November 1915 letter was crucial to Einstein's development of the field equations: Einstein arrived at the correct field equations only with Hilbert's help ("nach großer Anstrengung mit Hilfe Hilberts"), but nevertheless calls Einstein's reaction (his negative comments on Hilbert in the 26 November letter to Zangger) "understandable" ("Einsteins Reaktion ist verständlich") because Einstein had worked on the problem for a long time.
According to her publisher, Klaus Sommer, Wuensch concludes though that:
In 2006, Wuensch was invited to give a talk at the annual meeting of the German Physics Society (Deutsche Physikalische Gesellschaft) about her views about the priority issue for the field equations. [7] Archived 2006-08-28 at the Wayback Machine
Wuensch's publisher, Klaus Sommer, in an article in Physik in unserer Zeit , [ B 15 ] supported Wuensch's view that Einstein obtained some results not independently but from the information obtained from Hilbert's 16 November letter and from the notes of Hilbert's talk. While he does not call Einstein a plagiarist, Sommer speculates that Einstein's conciliatory 20 December letter was motivated by the fear that Hilbert might comment on Einstein's behaviour in the final version of his paper. Sommer claimed that a scandal caused by Hilbert could have done more damage to Einstein than any scandal before ("Ein Skandal Hilberts hätte ihm mehr geschadet als jeder andere zuvor").
The contentions of Wuensch and Sommer have been strongly contested by the historian of mathematics and natural sciences David E. Rowe in a detailed review of Wuensch's book published in Historia Mathematica in 2006. [ 13 ] Rowe argues that Wuensch's book offers nothing but tendentious, unsubstantiated, and in many cases highly implausible speculations.
Wolfgang Pauli 's Encyclopedia entry for the theory of relativity pointed out two reasons physicists did not consider Hilbert's derivation equivalent to Einstein's: 1) it required accepting the stationary-action principle as a physical axiom, and, more importantly, 2) it was based on Mie's unified field theory . [ 7 ] : 134
In his 1999 article for Time magazine, which featured Einstein as Person of the Century, Stephen Hawking wrote:
"Einstein had discussed his ideas with the mathematician David Hilbert during a visit to the University of Gottingen in the summer of 1915, and Hilbert independently found the same equations a few days before Einstein. Nevertheless, as Hilbert admitted, the credit for the new theory belonged to Einstein. It was his idea to relate gravity to the warping of space-time." [ 14 ]
Kip Thorne concludes, in remarks based on Hilbert's 1924 paper, that Hilbert regarded the general theory of relativity as Einstein's:
"Quite naturally, and in accord with Hilbert's view of things, the resulting law of warpage was quickly given the name the Einstein field equation rather than being named after Hilbert. Hilbert had carried out the last few mathematical steps to its discovery independently and almost simultaneously with Einstein, but Einstein was responsible for essentially everything that preceded those steps...". [ B 16 ]
However, Kip Thorne also stated, "Remarkably, Einstein was not the first to discover the correct form of the law of warpage [. . . .] Recognition for the first discovery must go to Hilbert" based on "the things he had learned from Einstein's summer visit to Göttingen." [ B 16 ] This last point is also mentioned by Corry et al. [ B 2 ]
As noted by the historians John Earman and Clark Glymour, "questions about the priority of discoveries are often among the least interesting and least important issues in the history of science." [ 2 ] There was no real controversy between Einstein and Hilbert themselves:
"Of course, there never was any quarrel over priority between Hilbert and Einstein, who admired one another
deeply." [ 7 ] : 117
And:
"Hilbert always remained aware of the
fact that the great principal physical idea was Einstein's, and he expressed it in numerous lectures and memoirs ...." [ 15 ] : 92 | https://en.wikipedia.org/wiki/General_relativity_priority_dispute |
In medicine and anatomy , the general senses are the senses perceived by receptors scattered throughout the body, such as touch, temperature, and hunger, rather than being tied to a specific structure, as the special senses of vision or hearing are. [ 1 ] Often, the general senses are associated with a specific drive ; that is, the sensation will cause a change in behavior meant to reduce the sensation. [ 2 ]
This neuroanatomy article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/General_sense_(anatomy) |
General set theory ( GST ) is George Boolos 's (1998) name for a fragment of the axiomatic set theory Z . GST is sufficient for all mathematics not requiring infinite sets , and is the weakest known set theory whose theorems include the Peano axioms .
The ontology of GST is identical to that of ZFC , and hence is thoroughly canonical. GST features a single primitive ontological notion, that of set , and a single ontological assumption, namely that all individuals in the universe of discourse (hence all mathematical objects ) are sets. There is a single primitive binary relation , set membership ; that set a is a member of set b is written a ∈ b (usually read " a is an element of b ").
The symbolic axioms below are from Boolos (1998: 196), and govern how sets behave and interact.
As with Z , the background logic for GST is first order logic with identity . Indeed, GST is the fragment of Z obtained by omitting the axioms Union , Power Set , Elementary Sets (essentially Pairing ) and Infinity and then taking a theorem of Z, Adjunction, as an axiom.
The natural language versions of the axioms are intended to aid the intuition.
1) Axiom of Extensionality : The sets x and y are the same set if they have the same members. ∀ z ( z ∈ x ↔ z ∈ y ) → x = y {\displaystyle \forall z\,(z\in x\leftrightarrow z\in y)\rightarrow x=y}
The converse of this axiom follows from the substitution property of equality.
2) Axiom Schema of Specification (or Separation or Restricted Comprehension ): If z is a set and ϕ {\displaystyle \phi } is any property which may be satisfied by all, some, or no elements of z , then there exists a subset y of z containing just those elements x in z which satisfy the property ϕ {\displaystyle \phi } . The restriction to z is necessary to avoid Russell's paradox and its variants. More formally, let ϕ ( x ) {\displaystyle \phi (x)} be any formula in the language of GST in which x may occur freely and y does not. Then all instances of the following schema are axioms: ∃ y ∀ x ( x ∈ y ↔ ( x ∈ z ∧ ϕ ( x ) ) ) {\displaystyle \exists y\,\forall x\,(x\in y\leftrightarrow (x\in z\land \phi (x)))}
3) Axiom of Adjunction : If x and y are sets, then there exists a set w , the adjunction of x and y , whose members are just y and the members of x . [ 1 ] ∃ w ∀ u ( u ∈ w ↔ ( u ∈ x ∨ u = y ) ) {\displaystyle \exists w\,\forall u\,(u\in w\leftrightarrow (u\in x\lor u=y))}
Adjunction refers to an elementary operation on two sets, and has no bearing on the use of that term elsewhere in mathematics, including in category theory .
ST is GST with the axiom schema of specification replaced by the axiom of empty set : ∃ y ∀ x ¬ ( x ∈ y ) {\displaystyle \exists y\,\forall x\,\lnot (x\in y)}
Note that Specification is an axiom schema. The theory given by these axioms is not finitely axiomatizable . Montague (1961) showed that ZFC is not finitely axiomatizable, and his argument carries over to GST. Hence any axiomatization of GST must include at least one axiom schema .
With its simple axioms, GST is also immune to the three great antinomies of naïve set theory : Russell's , Burali-Forti's , and Cantor's .
GST is interpretable in relation algebra because no part of any GST axiom lies in the scope of more than three quantifiers . This is the necessary and sufficient condition given in Tarski and Givant (1987).
Setting φ( x ) in Separation to x ≠ x , and assuming that the domain is nonempty, assures the existence of the empty set . Adjunction implies that if x is a set, then so is S ( x ) = x ∪ { x } {\displaystyle S(x)=x\cup \{x\}} . Given Adjunction , the usual construction of the successor ordinals from the empty set can proceed, one in which the natural numbers are defined as ∅ , S ( ∅ ) , S ( S ( ∅ ) ) , … , {\displaystyle \varnothing ,\,S(\varnothing ),\,S(S(\varnothing )),\,\ldots ,} . See Peano's axioms .
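The construction of the first few von Neumann naturals can be mimicked directly with hashable sets. A minimal Python sketch (successor is an illustrative name for the adjunction step):

    def successor(x):
        # Adjunction with y = x: S(x) = x union {x}.
        return x | frozenset([x])

    empty = frozenset()          # exists by Specification with phi(x): x != x
    nats = [empty]
    for _ in range(4):
        nats.append(successor(nats[-1]))

    # Each von Neumann natural n has exactly n elements:
    print([len(n) for n in nats])   # [0, 1, 2, 3, 4]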
GST is mutually interpretable with Peano arithmetic (thus it has the same proof-theoretic strength as PA).
The most remarkable fact about ST (and hence GST) is that these tiny fragments of set theory give rise to such rich metamathematics. While ST is a small fragment of the well-known canonical set theories ZFC and NBG , ST interprets Robinson arithmetic (Q), so that ST inherits the nontrivial metamathematics of Q. For example, ST is essentially undecidable because Q is, and every consistent theory whose theorems include the ST axioms is also essentially undecidable. [ 2 ] [ 3 ] This includes GST and every axiomatic set theory worth thinking about, assuming these are consistent. In fact, the undecidability of ST implies the undecidability of first-order logic with a single binary predicate letter. [ 4 ]
Q is also incomplete in the sense of Gödel's incompleteness theorem . Any axiomatizable theory, such as ST and GST, whose theorems include the Q axioms is likewise incomplete. Moreover, the consistency of GST cannot be proved within GST itself, unless GST is in fact inconsistent.
Given any model M of ZFC, the collection of hereditarily finite sets in M will satisfy the GST axioms. Therefore, GST cannot prove the existence of even a countable infinite set , that is, of a set whose cardinality is ℵ 0 {\displaystyle \aleph _{0}} . Even if GST did afford a countably infinite set, GST could not prove the existence of a set whose cardinality is ℵ 1 {\displaystyle \aleph _{1}} , because GST lacks the axiom of power set . Hence GST cannot ground analysis and geometry , and is too weak to serve as a foundation for mathematics .
Boolos was interested in GST only as a fragment of Z that is just powerful enough to interpret Peano arithmetic . He never lingered over GST, only mentioning it briefly in several papers discussing the systems of Frege 's Grundlagen and Grundgesetze , and how they could be modified to eliminate Russell's paradox . The system Aξ' [δ 0 ] in Tarski and Givant (1987: 223) is essentially GST with an axiom schema of induction replacing Specification , and with the existence of an empty set explicitly assumed.
GST is called STZ in Burgess (2005), p. 223. [ 5 ] Burgess's theory ST [ 6 ] is GST with Empty Set replacing the axiom schema of specification . That the letters "ST" also appear in "GST" is a coincidence. | https://en.wikipedia.org/wiki/General_set_theory |
In structural engineering and mechanical engineering , generalised beam theory (GBT) is a one-dimensional theory used to mathematically model how beams bend and twist under various loads. It is a generalization of classical Euler–Bernoulli beam theory that approximates a beam as an assembly of thin-walled plates that are constrained to deform as a linear combination of specified deformation modes . [ 1 ]
Its origin is due to Richard Schardt (1966). Since then many other authors have extended the initial (first-order elastic) GBT formulations developed by Schardt and his co-workers. [ 2 ] [ 3 ] Many extensions and applications of GBT have been developed by Camotim ( Instituto Superior Técnico , University of Lisbon, Portugal) and collaborators, since the beginning of the 21st century. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ]
The theory can be applied without restrictions to any prismatic thin-walled structural member exhibiting a straight or curved axis (any loading , any cross-section geometry , any boundary conditions). GBT is in some ways analogous to the finite strip method [ 1 ] and can be a more computationally efficient method than modeling a beam with a full 2D or 3D finite element method to predict the member structural behavior.
GBT has been widely recognized as an efficient approach to analyzing thin-walled members and structural systems. The efficiency arises mostly from its modal nature: the displacement field is expressed as a linear combination of cross-section deformation modes whose amplitudes vary continuously along the member length (x axis). Due to the GBT assumptions inherent to a thin-walled member, only three non-null stress components are considered in the formulations.
Membrane displacement field (i.e., in the cross-section mid-surface):
The GBT modal nature makes it possible to (i) acquire in-depth knowledge of the mechanics of thin-walled member behaviour and (ii) judiciously exclude, from subsequent similar GBT analyses, those deformation modes found to play no (or a negligible) role in the particular behaviour under scrutiny. Eliminating such modes reduces the number of degrees of freedom involved in a GBT analysis and increases its computational efficiency. GBT has thus proven valuable both for the structural insight it provides and for its computational efficiency. | https://en.wikipedia.org/wiki/Generalised_beam_theory
A generalized compound is a mixture of chemical compounds of constant composition, despite possible changes in the total amount. [ 1 ] The concept is used in the Dynamic Energy Budget theory, where biomass is partitioned into a limited set of generalised compounds, which contain a high percentage of organic compounds . [ 2 ] The amount of generalized compound can be quantified in terms of weight, but more conveniently in terms of C-moles . The concept of strong homeostasis has an intimate relationship with that of generalised compound. [ 3 ]
This article about chemical compounds is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Generalised_compound |
Generalized likelihood uncertainty estimation ( GLUE ) is a statistical method used in hydrology for quantifying the uncertainty of model predictions. The method was introduced by Keith Beven and Andrew Binley in 1992. [ 1 ] [ 2 ] The basic idea of GLUE is that given our inability to represent exactly in a mathematical model how nature works, there will always be several different models that mimic equally well an observed natural process (such as river discharge ). Such equally acceptable or behavioral models are therefore called equifinal . [ 3 ]
The methodology deals with models whose results are expressed as probability distributions of possible outcomes, often in the form of Monte Carlo simulations , and the problem can be viewed as assessing, and comparing between models, how good these representations of uncertainty are. There is an implicit understanding that the models being used are approximations to what might be obtained from a thorough Bayesian analysis of the problem if a fully adequate model of real-world hydrological processes were available. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
GLUE is equivalent to Approximate Bayesian computation for some choices of summary statistic and threshold. [ 8 ] [ 9 ]
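To make the procedure concrete, here is a minimal Python sketch of a GLUE-style analysis. The model, parameter ranges, likelihood measure, and the 0.8 threshold are all illustrative choices, not prescribed by the method:

    import numpy as np

    rng = np.random.default_rng(42)
    t = np.linspace(0.0, 10.0, 50)

    def model(theta, t):
        # Toy recession curve standing in for a hydrological simulator.
        q0, k = theta
        return q0 * np.exp(-k * t)

    observed = model((2.0, 0.3), t) + rng.normal(0.0, 0.05, t.size)

    # 1. Monte Carlo sampling of parameter sets from broad prior ranges.
    samples = np.column_stack([rng.uniform(0.5, 5.0, 10_000),    # q0
                               rng.uniform(0.05, 1.0, 10_000)])  # k

    # 2. An informal likelihood measure (here Nash-Sutcliffe efficiency).
    def likelihood(theta):
        sim = model(theta, t)
        return 1.0 - np.sum((sim - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

    L = np.array([likelihood(s) for s in samples])

    # 3. Retain only "behavioural" (equifinal) sets above a subjective threshold.
    behavioural = samples[L > 0.8]

    # 4. The behavioural ensemble yields prediction bounds (likelihood weights
    #    would normally be used; plain percentiles are shown for brevity).
    sims = np.array([model(s, t) for s in behavioural])
    print(behavioural.shape[0], np.percentile(sims[:, 0], [5, 95]))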
This hydrology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Generalised_likelihood_uncertainty_estimation |
A generalist species is able to thrive in a wide variety of environmental conditions and can make use of a variety of different resources (for example, a heterotroph with a varied diet ). A specialist species can thrive only in a narrow range of environmental conditions or has a limited diet. Most organisms do not fit neatly into either group, however. Some species are highly specialized (the most extreme case being monophagous, eating one specific type of food ), others less so, and some can tolerate many different environments. In other words, there is a continuum from highly specialized to broadly generalist species.
Omnivores are usually generalists. Herbivores are often specialists, but those that eat a variety of plants may be considered generalists. A well-known example of a specialist animal is the monophagous koala , which subsists almost entirely on eucalyptus leaves. The raccoon is a generalist, because it has a natural range that includes most of North and Central America, and it is omnivorous, eating berries , insects such as butterflies, eggs, and various small animals.
When it comes to insects, particularly native bees and lepidoptera (butterflies and moths), many are specialist species. [ 1 ] [ 2 ] It is estimated that about half of native US bee species are pollen specialists, meaning they collect resources from specific genera . [ 3 ] For instance, the threatened monarch butterfly exclusively lays its eggs on milkweed species . This reliance underscores the critical role of native plants in supporting ecological food chains.
The distinction between generalists and specialists is not limited to animals. For example, some plants require a narrow range of temperatures, soil conditions and precipitation to survive while others can tolerate a broader range of conditions. A cactus could be considered a specialist species. It will die during winters at high latitudes or if it receives too much water.
When body weight is controlled for, specialist feeders such as insectivores and frugivores have larger home ranges than generalists like some folivores (leaf-eaters): the specialists' food sources are less abundant, so they need a bigger area for foraging . [ 4 ] An example comes from the research of Tim Clutton-Brock , who found that the black-and-white colobus , a folivore generalist, needs a home range of only 15 ha . On the other hand, the more specialized red colobus monkey has a home range of 70 ha, which it requires to find patchy shoots, flowers and fruit. [ 5 ]
When environmental conditions change, generalists are able to adapt, but specialists tend to fall victim to extinction much more easily. [ 6 ] For example, if a species of fish were to go extinct, any specialist parasites would also face extinction . On the other hand, a species with a highly specialized ecological niche is more effective at competing with other organisms. [ citation needed ] For example, a fish and its parasites are in an evolutionary arms race , a form of coevolution , in which the fish constantly develops defenses against the parasite, while the parasite in turn evolves adaptations to cope with the specific defenses of its host. This tends to drive the speciation of more specialized species provided conditions remain relatively stable. This involves niche partitioning as new species are formed, and biodiversity is increased.
A benefit of a specialist species is that because the species has a more clearly defined niche, this reduces competition from other species. On the other hand, generalist species, by their nature, cannot realize as much resources from one niche, but instead find resources from many. Because other species can also be generalists, there is more competition between species, reducing the amount of resources for all generalists in an ecosystem. [ 7 ] Specialist herbivores can have morphological differences as compared to generalists that allow them to be more efficient at hunting a certain prey item, or able to eat a plant that generalists would be less tolerant of. [ 8 ]
| https://en.wikipedia.org/wiki/Generalist_and_specialist_species
In the history of mathematics , the generality of algebra was a phrase used by Augustin-Louis Cauchy to describe a method of argument that was used in the 18th century by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange , [ 1 ] particularly in manipulating infinite series . According to Koetsier, [ 2 ] the generality of algebra principle assumed, roughly, that the algebraic rules that hold for a certain class of expressions can be extended to hold more generally on a larger class of objects, even if the rules are no longer obviously valid. As a consequence, 18th century mathematicians believed that they could derive meaningful results by applying the usual rules of algebra and calculus that hold for finite expansions even when manipulating infinite expansions.
In works such as Cours d'Analyse , Cauchy rejected the use of "generality of algebra" methods and sought a more rigorous foundation for mathematical analysis .
An example [ 2 ] is Euler's derivation of the series π − x 2 = ∑ k = 1 ∞ sin ⁡ ( k x ) k ( 1 ) {\displaystyle {\frac {\pi -x}{2}}=\sum _{k=1}^{\infty }{\frac {\sin(kx)}{k}}\qquad (1)}
for 0 < x < π {\displaystyle 0<x<\pi } . He first evaluated the identity 1 − r cos ⁡ x 1 − 2 r cos ⁡ x + r 2 = 1 + ∑ k = 1 ∞ r k cos ⁡ ( k x ) ( 2 ) {\displaystyle {\frac {1-r\cos x}{1-2r\cos x+r^{2}}}=1+\sum _{k=1}^{\infty }r^{k}\cos(kx)\qquad (2)}
at r = 1 {\displaystyle r=1} to obtain 1 2 = 1 + ∑ k = 1 ∞ cos ⁡ ( k x ) ( 3 ) {\displaystyle {\frac {1}{2}}=1+\sum _{k=1}^{\infty }\cos(kx)\qquad (3)}
The infinite series on the right hand side of ( 3 ) diverges for all real x {\displaystyle x} . But nevertheless integrating it term-by-term (from π {\displaystyle \pi } to x {\displaystyle x} ) gives ( 1 ), an identity which is known to be true by Fourier analysis.
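Despite the illegitimate step, the final identity (1) is correct, as a quick numerical check of the partial sums shows (Python sketch):

    import numpy as np

    x = 1.0                             # any value in (0, pi)
    k = np.arange(1, 200_001)
    partial_sum = np.sum(np.sin(k * x) / k)
    print((np.pi - x) / 2, partial_sum)   # agree to several decimal places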
This mathematical analysis –related article is a stub . You can help Wikipedia by expanding it .
This article about the history of mathematics is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Generality_of_algebra |
A generalization is a form of abstraction whereby common properties of specific instances are formulated as general concepts or claims. [ 1 ] Generalizations posit the existence of a domain or set of elements, as well as one or more common characteristics shared by those elements (thus creating a conceptual model ). As such, they are the essential basis of all valid deductive inferences (particularly in logic , mathematics and science), where the process of verification is necessary to determine whether a generalization holds true for any given situation.
Generalization can also be used to refer to the process of identifying the parts of a whole, as belonging to the whole. The parts, which might be unrelated when left on their own, may be brought together as a group, hence belonging to the whole by establishing a common relation between them.
However, the parts cannot be generalized into a whole—until a common relation is established among all parts. This does not mean that the parts are unrelated, only that no common relation has been established yet for the generalization.
The concept of generalization has broad application in many connected disciplines, and might sometimes have a more specific meaning in a specialized context (e.g. generalization in psychology, generalization in learning ). [ 1 ]
In general, given two related concepts A and B, A is a "generalization" of B (equiv., B is a special case of A ) if and only if both of the following hold: every instance of concept B is also an instance of concept A; and there are instances of concept A which are not instances of concept B.
For example, the concept animal is a generalization of the concept bird , since every bird is an animal, but not all animals are birds (dogs, for instance). For more, see Specialisation (biology) .
The connection of generalization to specialization (or particularization ) is reflected in the contrasting words hypernym and hyponym . A hypernym as a generic stands for a class or group of equally ranked items, such as the term tree which stands for equally ranked items such as peach and oak , and the term ship which stands for equally ranked items such as cruiser and steamer . In contrast, a hyponym is one of the items included in the generic, such as peach and oak which are included in tree , and cruiser and steamer which are included in ship . A hypernym is superordinate to a hyponym, and a hyponym is subordinate to a hypernym. [ 2 ]
An animal is a generalization of a mammal , a bird, a fish, an amphibian and a reptile.
Generalization has a long history in cartography as an art of creating maps for different scale and purpose. Cartographic generalization is the process of selecting and representing information of a map in a way that adapts to the scale of the display medium of the map. In this way, every map has, to some extent, been generalized to match the criteria of display. This includes small cartographic scale maps, which cannot convey every detail of the real world. As a result, cartographers must decide and then adjust the content within their maps, to create a suitable and useful map that conveys the geospatial information within their representation of the world. [ 3 ]
Generalization is meant to be context-specific. That is to say, correctly generalized maps are those that emphasize the most important map elements, while still representing the world in the most faithful and recognizable way. The level of detail and importance in what is remaining on the map must outweigh the insignificance of items that were generalized—so as to preserve the distinguishing characteristics of what makes the map useful and important.
In mathematics , one commonly says that a concept or a result B is a generalization of A if A is defined or proved before B (historically or conceptually) and A is a special case of B . | https://en.wikipedia.org/wiki/Generalization |
In mathematics and physics , in particular quantum information , the term generalized Pauli matrices refers to families of matrices which generalize the (linear algebraic) properties of the Pauli matrices . Here, a few classes of such matrices are summarized.
This method of generalizing the Pauli matrices refers to a generalization from a single 2-level system ( qubit ) to multiple such systems. In particular, the generalized Pauli matrices for a group of N {\displaystyle N} qubits is just the set of matrices generated by all possible products of Pauli matrices on any of the qubits. [ 1 ]
The vector space of a single qubit is V 1 = C 2 {\displaystyle V_{1}=\mathbb {C} ^{2}} and the vector space of N {\displaystyle N} qubits is V N = ( C 2 ) ⊗ N ≅ C 2 N {\displaystyle V_{N}=\left(\mathbb {C} ^{2}\right)^{\otimes N}\cong \mathbb {C} ^{2^{N}}} . We use the tensor product notation
to refer to the operator on V N {\displaystyle V_{N}} that acts as a Pauli matrix on the n {\displaystyle n} th qubit and the identity on all other qubits. We can also use a = 0 {\displaystyle a=0} for the identity, i.e., for any n {\displaystyle n} we use σ 0 ( n ) = ⨂ m = 1 N I ( m ) {\textstyle \sigma _{0}^{(n)}=\bigotimes _{m=1}^{N}I^{(m)}} . Then the multi-qubit Pauli matrices are all matrices of the form
i.e., for a → {\displaystyle {\vec {a}}} a vector of integers between 0 and 3. Thus there are 4 N {\displaystyle 4^{N}} such generalized Pauli matrices if we include the identity I = ⨂ m = 1 N I ( m ) {\textstyle I=\bigotimes _{m=1}^{N}I^{(m)}} and 4 N − 1 {\displaystyle 4^{N}-1} if we do not.
In quantum computation, it is conventional to denote the Pauli matrices with single upper case letters: X = σ 1 , Y = σ 2 , Z = σ 3 . {\displaystyle X=\sigma _{1},\quad Y=\sigma _{2},\quad Z=\sigma _{3}.}
This allows subscripts on Pauli matrices to indicate the qubit index. For example, in a system with 3 qubits, X 2 = I ⊗ X ⊗ I . {\displaystyle X_{2}=I\otimes X\otimes I.}
Multi-qubit Pauli matrices can be written as products of single-qubit Paulis on disjoint qubits. Alternatively, when it is clear from context, the tensor product symbol ⊗ {\displaystyle \otimes } can be omitted, i.e. unsubscripted Pauli matrices written consecutively represent a tensor product rather than a matrix product. For example: X 1 Z 3 = X ⊗ I ⊗ Z and X Z Z = X ⊗ Z ⊗ Z . {\displaystyle X_{1}Z_{3}=X\otimes I\otimes Z\quad {\text{and}}\quad XZZ=X\otimes Z\otimes Z.}
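These tensor products are straightforward to build explicitly. A minimal Python/NumPy sketch (pauli_string is an illustrative helper name, not a library function):

    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    def pauli_string(ops):
        # Kronecker (tensor) product of single-qubit operators, left to right.
        out = np.eye(1)
        for op in ops:
            out = np.kron(out, op)
        return out

    M = pauli_string([X, I, Z])             # X_1 Z_3 on three qubits
    print(M.shape)                          # (8, 8)
    print(np.allclose(M @ M, np.eye(8)))    # every Pauli string squares to I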
The traditional Pauli matrices are the matrix representation of the s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} Lie algebra generators J x {\displaystyle J_{x}} , J y {\displaystyle J_{y}} , and J z {\displaystyle J_{z}} in the 2-dimensional irreducible representation of SU(2) , corresponding to a spin-1/2 particle. These generate the Lie group SU(2) .
For a general particle of spin s = 0 , 1 / 2 , 1 , 3 / 2 , 2 , … {\displaystyle s=0,1/2,1,3/2,2,\ldots } , one instead utilizes the 2 s + 1 {\displaystyle 2s+1} -dimensional irreducible representation.
This method of generalizing the Pauli matrices refers to a generalization from 2-level systems (Pauli matrices acting on qubits ) to 3-level systems ( Gell-Mann matrices acting on qutrits ) and generic d {\displaystyle d} -level systems (generalized Gell-Mann matrices acting on qudits ).
Let E j k {\displaystyle E_{jk}} be the matrix with 1 in the jk -th entry and 0 elsewhere. Consider the space of d × d {\displaystyle d\times d} complex matrices, C d × d {\displaystyle \mathbb {C} ^{d\times d}} , for a fixed d {\displaystyle d} .
Define the following matrices: for j < k {\displaystyle j<k} , the symmetric matrices E j k + E k j {\displaystyle E_{jk}+E_{kj}} and the antisymmetric matrices − i ( E j k − E k j ) {\displaystyle -i\,(E_{jk}-E_{kj})} , and, for 1 ≤ l ≤ d − 1 {\displaystyle 1\leq l\leq d-1} , the diagonal (Cartan) matrices 2 l ( l + 1 ) ( E 11 + ⋯ + E l l − l E l + 1 , l + 1 ) . {\displaystyle {\sqrt {\tfrac {2}{l(l+1)}}}\left(E_{11}+\cdots +E_{ll}-l\,E_{l+1,l+1}\right).}
The collection of matrices defined above without the identity matrix are called the generalized Gell-Mann matrices , in dimension d {\displaystyle d} . [ 2 ] [ 3 ]
The generalized Gell-Mann matrices are Hermitian and traceless by construction, just like the Pauli matrices. One can also check that they are orthogonal in the Hilbert–Schmidt inner product on C d × d {\displaystyle \mathbb {C} ^{d\times d}} . By dimension count, one sees that they span the vector space of d × d {\displaystyle d\times d} complex matrices, g l ( d , C ) {\displaystyle {\mathfrak {gl}}(d,\mathbb {C} )} . They then provide a Lie-algebra-generator basis acting on the fundamental representation of s u ( d ) {\displaystyle {\mathfrak {su}}(d)} .
In dimensions d {\displaystyle d} = 2 and 3, the above construction recovers the Pauli and Gell-Mann matrices , respectively.
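The construction is easy to implement and test. A minimal NumPy sketch follows (the function name and the d = 4 test are illustrative choices): it builds the d² − 1 generalized Gell-Mann matrices from the unit matrices E_jk and checks that they are Hermitian, traceless, and Hilbert–Schmidt orthogonal, here with the conventional normalization Tr(A²) = 2.

```python
# Generalized Gell-Mann matrices in dimension d (NumPy sketch).
import numpy as np

def gell_mann(d):
    def E(j, k):                       # unit matrix with 1 in the (j,k) entry
        m = np.zeros((d, d), dtype=complex)
        m[j, k] = 1
        return m

    mats = []
    for j in range(d):
        for k in range(j + 1, d):
            mats.append(E(j, k) + E(k, j))            # symmetric
            mats.append(-1j * (E(j, k) - E(k, j)))    # antisymmetric
    for l in range(1, d):                             # diagonal (Cartan)
        diag = np.zeros(d)
        diag[:l] = 1
        diag[l] = -l
        mats.append(np.sqrt(2.0 / (l * (l + 1))) * np.diag(diag).astype(complex))
    return mats

d = 4
ms = gell_mann(d)
assert len(ms) == d * d - 1
for a in ms:
    assert np.allclose(a, a.conj().T)                 # Hermitian
    assert abs(np.trace(a)) < 1e-12                   # traceless
# pairwise Hilbert-Schmidt orthogonality, with Tr(A^2) = 2 on the diagonal
gram = np.array([[np.trace(a.conj().T @ b).real for b in ms] for a in ms])
assert np.allclose(gram, 2 * np.eye(d * d - 1))
```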
A particularly notable generalization of the Pauli matrices was constructed by James Joseph Sylvester in 1882. [ 4 ] These are known as "Weyl–Heisenberg matrices" as well as "generalized Pauli matrices". [ 5 ] [ 6 ]
The Pauli matrices σ 1 {\displaystyle \sigma _{1}} and σ 3 {\displaystyle \sigma _{3}} satisfy the following:
{\displaystyle \sigma _{1}^{2}=\sigma _{3}^{2}=I,\qquad \sigma _{1}\sigma _{3}=-\sigma _{3}\sigma _{1}.}
The so-called Walsh–Hadamard conjugation matrix is
{\displaystyle W={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}.}
Like the Pauli matrices, W {\displaystyle W} is both Hermitian and unitary . σ 1 , σ 3 {\displaystyle \sigma _{1},\;\sigma _{3}} and W {\displaystyle W} satisfy the relation
{\displaystyle W\sigma _{1}W^{*}=\sigma _{3}.}
The goal now is to extend the above to higher dimensions, d {\displaystyle d} .
Fix the dimension d {\displaystyle d} as before. Let ω = exp ( 2 π i / d ) {\displaystyle \omega =\exp(2\pi i/d)} , a root of unity. Since ω d = 1 {\displaystyle \omega ^{d}=1} and ω ≠ 1 {\displaystyle \omega \neq 1} , the sum of all the roots vanishes:
{\displaystyle 1+\omega +\cdots +\omega ^{d-1}=0.}
Integer indices may then be cyclically identified mod d .
Now define, with Sylvester, the shift matrix
{\displaystyle \Sigma _{1}={\begin{pmatrix}0&0&0&\cdots &0&1\\1&0&0&\cdots &0&0\\0&1&0&\cdots &0&0\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &1&0\end{pmatrix}}}
and the clock matrix ,
{\displaystyle \Sigma _{3}={\begin{pmatrix}1&0&\cdots &0\\0&\omega &\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &\omega ^{d-1}\end{pmatrix}}.}
These matrices generalize σ 1 {\displaystyle \sigma _{1}} and σ 3 {\displaystyle \sigma _{3}} , respectively.
Note that the unitarity and tracelessness of the two Pauli matrices is preserved, but not Hermiticity in dimensions higher than two. Since Pauli matrices describe quaternions , Sylvester dubbed the higher-dimensional analogs "nonions", "sedenions", etc.
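These matrices are straightforward to construct numerically. A minimal NumPy sketch (the dimension d = 5 is an arbitrary choice): it checks unitarity and tracelessness, the failure of Hermiticity for d > 2, and the braiding relation stated just below.

```python
# Sylvester's shift and clock matrices for general d (NumPy sketch).
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)

sigma1 = np.roll(np.eye(d), 1, axis=0)       # shift: |m> -> |m+1 mod d>
sigma3 = np.diag(omega ** np.arange(d))      # clock: diag(1, w, ..., w^(d-1))

for M in (sigma1, sigma3):
    assert np.allclose(M.conj().T @ M, np.eye(d))    # unitary
    assert abs(np.trace(M)) < 1e-12                  # traceless
assert not np.allclose(sigma3, sigma3.conj().T)      # not Hermitian for d > 2
assert np.allclose(sigma3 @ sigma1, omega * sigma1 @ sigma3)  # braiding
```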
These two matrices are also the cornerstone of quantum mechanical dynamics in finite-dimensional vector spaces [ 7 ] [ 8 ] [ 9 ] as formulated by Hermann Weyl , and they find routine applications in numerous areas of mathematical physics. [ 10 ] The clock matrix amounts to the exponential of position in a "clock" of d {\displaystyle d} hours, and the shift matrix is just the translation operator in that cyclic vector space, so the exponential of the momentum. They are (finite-dimensional) representations of the corresponding elements of the Weyl-Heisenberg group on a d {\displaystyle d} -dimensional Hilbert space.
The following relations echo and generalize those of the Pauli matrices:
{\displaystyle \Sigma _{1}^{d}=\Sigma _{3}^{d}=I,}
and the braiding relation,
{\displaystyle \Sigma _{3}\Sigma _{1}=\omega \Sigma _{1}\Sigma _{3},}
the Weyl formulation of the CCR , which can be rewritten as
{\displaystyle \Sigma _{3}\Sigma _{1}\Sigma _{3}^{-1}\Sigma _{1}^{-1}=\omega I.}
On the other hand, to generalize the Walsh–Hadamard matrix W {\displaystyle W} , note that its entries may be written as
{\displaystyle W_{jk}={\frac {\omega ^{jk}}{\sqrt {2}}},\qquad j,k\in \{0,1\},}
with ω = − 1 {\displaystyle \omega =-1} the root of unity for d = 2 {\displaystyle d=2} .
Define, again with Sylvester, the following analog matrix, [ 11 ] still denoted by W {\displaystyle W} in a slight abuse of notation,
{\displaystyle W_{jk}={\frac {\omega ^{jk}}{\sqrt {d}}},\qquad j,k=0,1,\ldots ,d-1.}
It is evident that W {\displaystyle W} is no longer Hermitian, but is still unitary. Direct calculation yields
{\displaystyle W\Sigma _{1}W^{*}=\Sigma _{3},}
which is the desired analog result. Thus, W {\displaystyle W} , a Vandermonde matrix , arrays the eigenvectors of Σ 1 {\displaystyle \Sigma _{1}} , which has the same eigenvalues as Σ 3 {\displaystyle \Sigma _{3}} .
When d = 2 k {\displaystyle d=2^{k}} , W ∗ {\displaystyle W^{*}} is precisely the discrete Fourier transform matrix , converting position coordinates to momentum coordinates and vice versa.
The complete family of d 2 {\displaystyle d^{2}} unitary (but non-Hermitian) independent matrices { σ k , j } k , j = 1 d {\displaystyle \{\sigma _{k,j}\}_{k,j=1}^{d}} is defined as follows:
σ k , j := ( Σ 1 ) k ( Σ 3 ) j = ∑ m = 0 d − 1 | m + k ⟩ ω j m ⟨ m | . {\displaystyle \sigma _{k,j}:=\left(\Sigma _{1}\right)^{k}\left(\Sigma _{3}\right)^{j}=\sum _{m=0}^{d-1}|m+k\rangle \omega ^{jm}\langle m|.}
This provides Sylvester's well-known trace-orthogonal basis for g l ( d , C ) {\displaystyle {\mathfrak {gl}}(d,\mathbb {C} )} , known as "nonions" g l ( 3 , C ) {\displaystyle {\mathfrak {gl}}(3,\mathbb {C} )} , "sedenions" g l ( 4 , C ) {\displaystyle {\mathfrak {gl}}(4,\mathbb {C} )} , etc... [ 12 ] [ 13 ]
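The trace orthogonality of this basis can be verified directly in a few lines of NumPy (d = 3, the "nonion" case, is an arbitrary choice; the quantitative statement is given below):

```python
# Check that the d^2 matrices sigma_{k,j} = Sigma1^k Sigma3^j are
# trace-orthogonal with squared Hilbert-Schmidt norm d (NumPy sketch).
import numpy as np
from numpy.linalg import matrix_power

d = 3
omega = np.exp(2j * np.pi / d)
S1 = np.roll(np.eye(d), 1, axis=0)           # shift matrix
S3 = np.diag(omega ** np.arange(d))          # clock matrix

basis = [matrix_power(S1, k) @ matrix_power(S3, j)
         for k in range(d) for j in range(d)]
gram = np.array([[np.trace(a.conj().T @ b) for b in basis] for a in basis])
assert np.allclose(gram, d * np.eye(d * d))
```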
This basis can be systematically connected to the above Hermitian basis. [ 14 ] (For instance, the powers of Σ 3 {\displaystyle \Sigma _{3}} , the Cartan subalgebra ,
{\displaystyle \Sigma _{3}^{k}=\sum _{m=0}^{d-1}\omega ^{km}|m\rangle \langle m|,\qquad k=1,\ldots ,d-1,}
map to linear combinations of the h k d {\displaystyle h_{k}^{\,\,\,d}} matrices.) It can further be used to identify g l ( d , C ) {\displaystyle {\mathfrak {gl}}(d,\mathbb {C} )} , as d → ∞ {\displaystyle d\to \infty } , with the algebra of Poisson brackets .
With respect to the Hilbert–Schmidt inner product on operators, ⟨ A , B ⟩ HS = Tr ( A ∗ B ) {\displaystyle \langle A,B\rangle _{\text{HS}}=\operatorname {Tr} (A^{*}B)} , Sylvester's generalized Pauli operators are orthogonal and normalized to d {\displaystyle {\sqrt {d}}} :
{\displaystyle \operatorname {Tr} \left(\sigma _{k,j}^{*}\,\sigma _{k',j'}\right)=d\,\delta _{kk'}\,\delta _{jj'}.}
This can be checked directly from the above definition of σ k , j {\displaystyle \sigma _{k,j}} . | https://en.wikipedia.org/wiki/Generalizations_of_Pauli_matrices |
In mathematics , a polynomial sequence { p n ( z ) } {\displaystyle \{p_{n}(z)\}} has a generalized Appell representation if the generating function for the polynomials takes on a certain form:
{\displaystyle K(z,w)=A(w)\Psi (zg(w))=\sum _{n=0}^{\infty }p_{n}(z)w^{n},}
where the generating function or kernel K ( z , w ) {\displaystyle K(z,w)} is composed of the series
{\displaystyle A(w)=\sum _{n=0}^{\infty }a_{n}w^{n},\qquad a_{0}\neq 0,}
and
{\displaystyle \Psi (t)=\sum _{n=0}^{\infty }\Psi _{n}t^{n},\qquad \Psi _{n}\neq 0{\text{ for all }}n,}
and
{\displaystyle g(w)=\sum _{n=1}^{\infty }g_{n}w^{n},\qquad g_{1}\neq 0.}
Given the above, it is not hard to show that p n ( z ) {\displaystyle p_{n}(z)} is a polynomial of degree n {\displaystyle n} .
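As a concrete illustration, the short SymPy sketch below expands a kernel of this form in powers of w and reads off the p_n(z). The choice A(w) = w/(e^w − 1), Ψ(t) = e^t, g(w) = w is the classical Appell kernel that generates the Bernoulli polynomials, so p_n(z) = B_n(z)/n!, each visibly a polynomial of degree n.

```python
# Extract generalized Appell polynomials from a kernel (SymPy sketch).
import sympy as sp

z, w = sp.symbols('z w')
K = w / (sp.exp(w) - 1) * sp.exp(z * w)   # A(w) * Psi(z*g(w)) with g(w) = w

N = 5
expansion = sp.expand(sp.series(K, w, 0, N).removeO())
for n in range(N):
    p_n = sp.expand(expansion.coeff(w, n))
    print(n, p_n)                          # p_n(z) = B_n(z)/n!, degree n
```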
Boas–Buck polynomials are a slightly more general class of polynomials.
The generalized Appell polynomials have the explicit representation
{\displaystyle p_{n}(z)=\sum _{k=0}^{n}z^{k}\Psi _{k}h_{k}.}
The constant is
{\displaystyle h_{k}=\sum _{P}a_{j_{0}}g_{j_{1}}g_{j_{2}}\cdots g_{j_{k}},}
where this sum extends over all compositions of n {\displaystyle n} into k + 1 {\displaystyle k+1} parts; that is, the sum extends over all { j } {\displaystyle \{j\}} such that
{\displaystyle j_{0}+j_{1}+\cdots +j_{k}=n.}
For the Appell polynomials, this becomes the formula
{\displaystyle p_{n}(z)=\sum _{k=0}^{n}{\frac {a_{n-k}\,z^{k}}{k!}}.}
Equivalently, a necessary and sufficient condition that the kernel K ( z , w ) {\displaystyle K(z,w)} can be written as A ( w ) Ψ ( z g ( w ) ) {\displaystyle A(w)\Psi (zg(w))} with g 1 = 1 {\displaystyle g_{1}=1} is that
{\displaystyle {\frac {\partial K}{\partial w}}=c(w)K(z,w)+{\frac {zb(w)}{w}}{\frac {\partial K}{\partial z}},}
where b ( w ) {\displaystyle b(w)} and c ( w ) {\displaystyle c(w)} have the power series
{\displaystyle b(w)={\frac {wg'(w)}{g(w)}}=1+\sum _{n=1}^{\infty }b_{n}w^{n}}
and
{\displaystyle c(w)={\frac {A'(w)}{A(w)}}=\sum _{n=0}^{\infty }c_{n}w^{n}.}
Substituting
{\displaystyle K(z,w)=\sum _{n=0}^{\infty }p_{n}(z)w^{n}}
immediately gives the recursion relation
{\displaystyle np_{n}(z)-zp_{n}'(z)=\sum _{k=0}^{n-1}\left(c_{n-1-k}\,p_{k}(z)+z\,b_{n-k}\,p_{k}'(z)\right).}
For the special case of the Brenke polynomials, one has g ( w ) = w {\displaystyle g(w)=w} and thus all of the b n = 0 {\displaystyle b_{n}=0} , simplifying the recursion relation significantly.
| https://en.wikipedia.org/wiki/Generalized_Appell_polynomials |
Two forms of GAL are available. The first is General Automation Language, for device automation, and the second is Generalized Automation Language ( GAL ), which is a very high level programming language for MVS -based systems such as OS/390 and z/OS .
The first form was developed by iLED to provide a common language for standardising automation and control of devices in residential ( home automation ) and commercial control environments. The language provides a standardised method of communicating to/from controlled/controlling devices. At each device, GAL is converted into the machine-specific protocol and medium. An example is the control of a DVD player: the GAL command follows the template <MyHouse MyArea MyRoom MyDevice MyCommand> , for instance FredsHouse GroundFloor Lounge DVDplayer ON . The GAL device will then convert this to the discrete IR command to switch on the DVD player. [ 1 ]
The second form was developed by Expans Systems to provide features and constructs that enable the programmer to intercept system events and schedule responses, as implemented via their product AutoMan . Somewhat akin to BASIC, GAL enables systems programmers and operators to define logic to apply to system messages as they flow through a multi-system ( sysplex ) environment. GAL also enables the programmer to define events that have occurred in the past, by intercepting Action Message Retention Facility (AMRF) messages. The language has built-in constructs to obtain the age of a retained message and make decisions about its fate depending on age. GAL can be used to write new system commands, by intercepting and interpreting anything that is entered into an Operator Console. GAL uses keywords such as names of days of the week, names of months, etc. to automatically schedule events in the system. Like REXX , GAL is both an interpretive language and a compiled language. GAL statements can be entered to the interpreter on the fly, or entire automation scenarios can be predefined, such as the logic to define unattended operations of a system, and can be compiled offline, using the compiler program GALCOMP.
GAL implements comparison via IF statements, setting of variables via the LET statement, and subroutine calls. GAL allows the programmer to break into REXX, and Assembler, where needed. The very high level nature of GAL is exemplified by the EMAIL statement, which enables the programmer to send an email alert when an event is detected that requires human intervention, for example when a message event requires an alert to be sent to a default recipient.
GAL uses text capture and replacement facilities. In this simple example, the text of the system message is captured into a variable and the text in that variable is then used as the subject of the email. The message in the body of the email is the text in quotes following the subject.
GAL allows cross-systems ( IBM XCF ) queries to be issued by simple IF statements, without regard for the underlying internal processes required to perform the cross-systems communications. It is simply a matter of identifying one or more systems that are to be tested. For instance, a simple IF statement can check whether a job is currently running in a partner system. | https://en.wikipedia.org/wiki/Generalized_Automation_Language |
In algebra , a generalized Cohen–Macaulay ring is a commutative Noetherian local ring ( A , m ) {\displaystyle (A,{\mathfrak {m}})} of Krull dimension d > 0 that satisfies any of the following equivalent conditions: [ 1 ] [ 2 ]
The last condition implies that the localization A p {\displaystyle A_{\mathfrak {p}}} is Cohen–Macaulay for each prime ideal p ≠ m {\displaystyle {\mathfrak {p}}\neq {\mathfrak {m}}} .
A standard example is the local ring at the vertex of an affine cone over a smooth projective variety . Historically, the notion grew out of the study of a Buchsbaum ring , a Noetherian local ring A in which length A ( A / Q ) − e ( Q ) {\displaystyle \operatorname {length} _{A}(A/Q)-e(Q)} is constant for m {\displaystyle {\mathfrak {m}}} -primary ideals Q {\displaystyle Q} ; see the introduction of [ 3 ] .
| https://en.wikipedia.org/wiki/Generalized_Cohen–Macaulay_ring |
Generalized Environmental Modeling System for Surfacewaters or GEMSS is a public domain software [ 1 ] application published by ERM. It has been used for hydrological studies throughout the world. [ 2 ]
GEMSS has been used for ultimate heat sink analyses at Comanche Peak Nuclear Power Plant , and Arkansas Nuclear One . In Pennsylvania it has been applied at PPL Corporation's Brunner Island Steam Electric Station on the lower Susquehanna River , Exelon’s Cromby and Limerick Generating Stations on the Schuylkill River , and at several other electric power facilities. River applications for electric power facilities have been made on the Susquehanna (Brunner Island), the Missouri (Labadie Power Station), the Delaware (Mercer and Gilbert Generating Station), the Connecticut ( Connecticut Yankee Nuclear Power Plant ), and others.
Applications of GEMSS and its individual component modules have been accepted by regulatory agencies in the U.S. and Canada. [ citation needed ] It is the sole hydrodynamic model listed in the model selection tool database [ which? ] for hydrodynamic and chemical fate models that can perform 1-D, 2-D, and 3-D time-variable modeling for most waterbody types, consider all state variables, and include the near- and far-fields. GEMSS also provides GUIs, grid generation, and GIS linkage tools, and has strong documentation. [ 3 ]
GEMSS includes a grid generator and editor, control file generator, 2-D and 3-D post-processing viewers, and an animation tool. It uses a database approach to store and access model results. The database approach is also used for field data; as a result, the GEMSS viewers can be used to display model results, field data, or both, a capability useful for understanding the behavior of the prototype as well as for calibrating the model. The field data analysis features can also be used independently of the GEMSS modeling capability.
A GEMSS application requires two types of data: (1) spatial data (primarily the waterbody shoreline and bathymetry , but also locations, elevations, and configurations of man-made structures) and (2) temporal data (time-varying boundary condition data defining tidal elevation, inflow rate and temperature, inflow constituent concentration, outflow rate, and meteorological data). [ 2 ] All deterministic models , including GEMSS, require uninterrupted time-varying boundary condition data. There can be no long gaps in the datasets and all required datasets must be available during the span of the proposed simulation period.
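As an illustration of the kind of quality-assurance check such datasets need, here is a minimal Python sketch (pandas assumed; the file name, column name, and six-hour tolerance are illustrative assumptions, not part of GEMSS) that scans a timestamped boundary-condition series for gaps:

```python
# Scan a time-stamped boundary-condition file for long gaps (pandas sketch).
import pandas as pd

df = pd.read_csv("inflow_temperature.csv", parse_dates=["timestamp"])
df = df.sort_values("timestamp")

max_gap = pd.Timedelta(hours=6)             # assumed tolerance
gaps = df["timestamp"].diff()               # spacing between consecutive records
bad = df.loc[gaps > max_gap, "timestamp"]
if bad.empty:
    print("no gaps larger than", max_gap)
else:
    print("gaps precede these records:")
    print(bad.to_string(index=False))
```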
For input to the model, the spatial data is encoded primarily in two input files: the control and bathymetry files. These files are geo-referenced . The temporal data is encoded in many files, each file representing a set of time-varying boundary conditions, for example, meteorological data for surface heat exchange and wind shear , or inflow rates for a tributary stream. Each record in the boundary condition files is stamped with a year-month-day-hour-minute address. The data can be subjected to quality assurance procedures by using GEMSS to plot, then to visually inspect individual data points, trends and outliers. The set of input files and the GEMSS executable constitute the model application. | https://en.wikipedia.org/wiki/Generalized_Environmental_Modeling_System_for_Surfacewaters |
The Helmholtz theorem of classical mechanics reads as follows:
Let H ( x , p ; V ) = K ( p ) + φ ( x ; V ) {\displaystyle H(x,p;V)=K(p)+\varphi (x;V)} be the Hamiltonian of a one-dimensional system, where K = p 2 2 m {\displaystyle K={\frac {p^{2}}{2m}}} is the kinetic energy and φ ( x ; V ) {\displaystyle \varphi (x;V)} is a "U-shaped" potential energy profile which depends on a parameter V {\displaystyle V} .
Let ⟨ ⋅ ⟩ t {\displaystyle \left\langle \cdot \right\rangle _{t}} denote the time average. Let
{\displaystyle E=K+\varphi ,\qquad T=2\left\langle K\right\rangle _{t},\qquad P=\left\langle -{\frac {\partial \varphi }{\partial V}}\right\rangle _{t},\qquad S(E,V)=\log \oint dx\,{\sqrt {2m\left(E-\varphi (x,V)\right)}}.}
Then d S = d E + P d V T . {\displaystyle dS={\frac {dE+PdV}{T}}.}
The thesis of this theorem of classical mechanics reads exactly as the heat theorem of thermodynamics . This fact shows that thermodynamic-like relations exist between certain mechanical quantities. This in turn allows one to define the "thermodynamic state" of a one-dimensional mechanical system. In particular the temperature T {\displaystyle T} is given by the time average of the kinetic energy, and the entropy S {\displaystyle S} by the logarithm of the action (i.e., ∮ d x 2 m ( E − φ ( x , V ) ) {\textstyle \oint dx{\sqrt {2m\left(E-\varphi \left(x,V\right)\right)}}} ). The importance of this theorem was recognized by Ludwig Boltzmann , who saw how to apply it to macroscopic systems (i.e. multidimensional systems), in order to provide a mechanical foundation of equilibrium thermodynamics . This research activity was closely related to his formulation of the ergodic hypothesis .
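The one-dimensional theorem can be checked symbolically for a concrete potential. The SymPy sketch below uses the harmonic profile φ(x; V) = ω(V)²x²/2 with m = 1 and the illustrative assumption ω(V) = 1/V; the standard oscillator facts 2⟨K⟩_t = E and ⟨x²⟩_t = E/ω² supply T and P.

```python
# Symbolic check of dS = (dE + P dV)/T for a harmonic potential (SymPy sketch).
import sympy as sp

E, V, x = sp.symbols('E V x', positive=True)
omega = 1 / V                       # assumed parameter dependence, m = 1
phi = omega**2 * x**2 / 2

x_turn = sp.sqrt(2 * E) / omega     # turning points of the motion
action = 2 * sp.integrate(sp.sqrt(2 * (E - phi)), (x, -x_turn, x_turn))
S = sp.log(action)                  # entropy = log of the action (= 2*pi*E*V here)

T = E                               # T = 2<K>_t, and <K>_t = E/2 for the oscillator
P = (-sp.diff(phi, V)).subs(x**2, E / omega**2)   # P = <-dphi/dV>_t via <x^2>_t

assert sp.simplify(sp.diff(S, E) - 1 / T) == 0    # dS/dE = 1/T
assert sp.simplify(sp.diff(S, V) - P / T) == 0    # dS/dV = P/T
```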
A multidimensional version of the Helmholtz theorem, based on the ergodic theorem of George David Birkhoff is known as the generalized Helmholtz theorem.
The generalized Helmholtz theorem is the multi-dimensional generalization of the Helmholtz theorem, and reads as follows.
Let
( x 1 , … , x s ; p 1 , … , p s ) {\displaystyle (x_{1},\ldots ,x_{s};p_{1},\ldots ,p_{s})}
be the canonical coordinates of a s -dimensional Hamiltonian system , and let
H ( x , p ; V ) = K ( p ) + φ ( x ; V ) {\displaystyle H(x,p;V)=K(p)+\varphi (x;V)}
be the Hamiltonian function, where
K ( p ) = ∑ i = 1 s p i 2 2 m {\displaystyle K(p)=\sum _{i=1}^{s}{\frac {p_{i}^{2}}{2m}}}
is the kinetic energy and
φ ( x ; V ) {\displaystyle \varphi (x;V)}
is the potential energy which depends on a parameter V {\displaystyle V} .
Let the hyper-surfaces of constant energy in the 2 s -dimensional phase space of the system be metrically indecomposable and let ⟨ ⋅ ⟩ t {\displaystyle \left\langle \cdot \right\rangle _{t}} denote time average. Define the quantities E {\displaystyle E} , P {\displaystyle P} , T {\displaystyle T} , S {\displaystyle S} , as follows:
{\displaystyle E=K+\varphi ,\qquad T={\frac {2}{s}}\left\langle K\right\rangle _{t},\qquad P=\left\langle -{\frac {\partial \varphi }{\partial V}}\right\rangle _{t},\qquad S(E,V)=\log \int _{H(x,p;V)\leq E}d^{s}x\,d^{s}p.}
Then:
{\displaystyle dS={\frac {dE+P\,dV}{T}}.}
| https://en.wikipedia.org/wiki/Generalized_Helmholtz_theorem |
In continuum mechanics , the generalized Lagrangian mean ( GLM ) is a formalism – developed by D.G. Andrews and M.E. McIntyre ( 1978a , 1978b ) – to unambiguously split a motion into a mean part and an oscillatory part . The method gives a mixed Eulerian–Lagrangian description for the flow field , but expressed in fixed Eulerian coordinates. [ 1 ]
In general, it is difficult to decompose a combined wave–mean motion into a mean and a wave part, especially for flows bounded by a wavy surface: e.g. in the presence of surface gravity waves or near another undulating bounding surface (like atmospheric flow over mountainous or hilly terrain). However, this splitting of the motion into a wave part and a mean part is often required in mathematical models , when the main interest is in the mean motion – slowly varying at scales much larger than those of the individual undulations. From a series of postulates , Andrews & McIntyre (1978a) arrive at the (GLM) formalism to split the flow into a generalised Lagrangian mean flow and an oscillatory-flow part.
The GLM method does not suffer from the strong drawback of the Lagrangian specification of the flow field – following individual fluid parcels – that Lagrangian positions which are initially close gradually drift far apart. In the Lagrangian frame of reference, it therefore becomes often difficult to attribute Lagrangian-mean values to some location in space.
The specification of mean properties for the oscillatory part of the flow, like: Stokes drift , wave action , pseudomomentum and pseudoenergy – and the associated conservation laws – arise naturally when using the GLM method. [ 2 ] [ 3 ]
The GLM concept can also be incorporated into variational principles of fluid flow. [ 4 ] | https://en.wikipedia.org/wiki/Generalized_Lagrangian_mean |
The generalized Lotka–Volterra equations are a set of equations which are more general than either the competitive or predator–prey examples of Lotka–Volterra types. [ 1 ] [ 2 ] They can be used to model direct competition and trophic relationships between an arbitrary number of species. Their dynamics can be analysed analytically to some extent. This makes them useful as a theoretical tool for modeling food webs . However, they lack features of other ecological models such as predator preference and nonlinear functional responses , and they cannot be used to model mutualism without allowing indefinite population growth.
The generalised Lotka-Volterra equations model the dynamics of the populations x 1 , x 2 , … {\displaystyle x_{1},x_{2},\dots } of n {\displaystyle n} biological species. Together, these populations can be considered as a vector x {\displaystyle \mathbf {x} } . They are a set of ordinary differential equations given by
{\displaystyle {\frac {dx_{i}}{dt}}=x_{i}f_{i}(\mathbf {x} ),}
where the vector f {\displaystyle \mathbf {f} } is given by
{\displaystyle \mathbf {f} (\mathbf {x} )=\mathbf {r} +A\mathbf {x} ,}
where r {\displaystyle \mathbf {r} } is a vector and A {\displaystyle A} is a matrix known as the interaction matrix. [ 3 ]
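A minimal numerical sketch of these equations follows (NumPy/SciPy; all parameter values are illustrative, chosen so that the prey grows alone and the predator dies alone):

```python
# Integrate the generalized Lotka-Volterra equations dx_i/dt = x_i (r + A x)_i.
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([1.0, -0.5])          # prey grows alone, predator dies alone
A = np.array([[-0.1, -0.4],        # a_12 < 0: predation hurts the prey
              [ 0.2, -0.05]])      # a_21 > 0: predator gains from prey

def glv(t, x):
    return x * (r + A @ x)

sol = solve_ivp(glv, (0.0, 100.0), [5.0, 2.0], dense_output=True)
print(sol.y[:, -1])                # populations at t = 100
```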
The generalised Lotka-Volterra equations can represent competition and predation, depending on the values of the parameters, as described below. "Generalized" means that all the combinations of pairs of signs for both species (−/−,−/+,+/-, +/+) are possible. They are less suitable for describing mutualism.
The values of r {\displaystyle \mathbf {r} } are the intrinsic birth or death rates of the species. A positive value for r i {\displaystyle r_{i}} means that species i is able to reproduce in the absence of any other species (for instance, because it is a plant that is wind pollinated), whereas a negative value means that its population will decline unless the appropriate other species are present (e.g. a herbivore that cannot survive without plants to eat, or a predator that cannot persist without its prey).
The values of the elements of the interaction matrix A {\displaystyle A} represent the relationships between the species. The value of a i j {\displaystyle a_{ij}} represents the effect that species j has upon species i. The effect is proportional to the populations of both species, as well as to the value of a i j {\displaystyle a_{ij}} . Thus, if both a i j {\displaystyle a_{ij}} and a j i {\displaystyle a_{ji}} are negative then the two species are said to be in direct competition with one another, since they each have a direct negative effect on the other's population. If a i j {\displaystyle a_{ij}} is positive but a j i {\displaystyle a_{ji}} is negative then species i is considered to be a predator (or parasite) on species j, since i's population grows at j's expense.
Positive values for both a i j {\displaystyle a_{ij}} and a j i {\displaystyle a_{ji}} would be considered mutualism. However, this is not often used in practice, because it can make it possible for both species' populations to grow indefinitely.
Indirect negative and positive effects are also possible. For example, if two predators eat the same prey then they compete indirectly, even though they might not have a direct competition term in the community matrix.
The diagonal terms a i i {\displaystyle a_{ii}} are usually taken to be negative (i.e. species i's population has a negative effect on itself). This self-limitation prevents populations from growing indefinitely.
The generalised Lotka-Volterra equations are capable of a wide variety of dynamics, including limit cycles and chaos as well as point attractors (see Hofbauer and Sigmund [ 2 ] ). As with any set of ODEs, fixed points can be found by setting d x i / d t {\displaystyle dx_{i}/dt} to 0 for all i, which gives, if no species is extinct, i.e., if x i ≠ 0 {\displaystyle x_{i}\neq 0} for all i {\displaystyle i} ,
{\displaystyle \mathbf {x} =-A^{-1}\mathbf {r} .}
This may or may not have positive values for all the x i {\displaystyle x_{i}} ; if it does not, then there is no stable attractor for which the populations of all species are positive. If there is a fixed point with all positive populations the Jacobian matrix in a neighbourhood of the fixed point x {\displaystyle \mathbf {x} } is given by diag ( x ) A {\displaystyle \operatorname {diag} (\mathbf {x} )A} . This matrix is known as the community matrix and its eigenvalues determine the stability of the fixed point x {\displaystyle \mathbf {x} } . [ 3 ] The fixed point may or may not be stable.
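Both the interior fixed point and its stability are a few lines of linear algebra (the same illustrative parameters as in the sketch above):

```python
# Interior fixed point x* = -A^{-1} r and the community matrix diag(x*) A;
# eigenvalues with negative real parts indicate a locally stable fixed point.
import numpy as np

r = np.array([1.0, -0.5])
A = np.array([[-0.1, -0.4],
              [ 0.2, -0.05]])

x_star = -np.linalg.solve(A, r)    # both components positive for these values
J = np.diag(x_star) @ A            # Jacobian at the fixed point
eigs = np.linalg.eigvals(J)
print(x_star, eigs.real)
```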
If the fixed point is unstable then there may or may not be a periodic or chaotic attractor for which all the populations remain positive. In either case there can also be attractors for which some of the populations are zero and others are positive.
x = ( 0 , 0 , … 0 ) {\displaystyle \mathbf {x} =(0,0,\dots 0)} is always a fixed point, corresponding to the absence of all species. For n = 2 {\displaystyle n=2} species, a complete classification of this dynamics, for all sign patterns of above coefficients, is available, [ 4 ] which is based upon equivalence to the 3-type replicator equation .
In the case of a single trophic community, the trophic level below the one of the community (e.g. plants for a community of herbivore species), corresponding to the food required for individuals of a species i to thrive, is modeled through a parameter K i known as the carrying capacity . Suppose, e.g., a mixture of crops involving S species. In this case a i j {\displaystyle a_{ij}} can be written in terms of a non-dimensional interaction coefficient a ^ i j {\displaystyle {\hat {a}}_{ij}} : [ 5 ] a ^ i j = a i j K i / r i {\displaystyle {\hat {a}}_{ij}=a_{ij}K_{i}/r_{i}} .
A straightforward procedure to get the set of model parameters { K i , a ^ i j } {\displaystyle \{K_{i},{\hat {a}}_{ij}\}} is to perform, until the equilibrium state is attained: a) the S single-species or monoculture experiments, and from each of them to estimate the carrying capacities as the yield of species i in monoculture, K i = m i e x {\displaystyle K_{i}=m_{i}^{ex}} (the superscript 'ex' is to emphasize that this is an experimentally measured quantity); b) the S ( S − 1)/2 pairwise experiments producing the biculture yields, x i ( j ) e x {\displaystyle x_{i(j)}^{ex}} and x j ( i ) e x {\displaystyle x_{j(i)}^{ex}} (the subscripts i ( j ) and j ( i ) stand for the yield of species i in the presence of species j , and vice versa). We can then obtain a ^ i j {\displaystyle {\hat {a}}_{ij}} and a ^ j i {\displaystyle {\hat {a}}_{ji}} as: [ 6 ] a ^ i j = ( x i ( j ) e x − m i e x ) / x j ( i ) e x , a ^ j i = ( x j ( i ) e x − m j e x ) / x i ( j ) e x . {\displaystyle {\hat {a}}_{ij}=(x_{i(j)}^{ex}-m_{i}^{ex})/x_{j(i)}^{ex},\quad {\hat {a}}_{ji}=(x_{j(i)}^{ex}-m_{j}^{ex})/x_{i(j)}^{ex}.} Using this procedure, it was observed that the generalized Lotka–Volterra equations can predict with reasonable accuracy most of the species yields in mixtures of S > 2 species for the majority of a set of 33 experimental treatments across different taxa (algae, plants, protozoa, etc.). [ 6 ]
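A tiny numerical illustration of this recipe, with made-up monoculture and biculture yields:

```python
# Estimate the non-dimensional interaction coefficients from yields.
m1, m2 = 100.0, 80.0            # monoculture yields -> K_1, K_2
x12, x21 = 60.0, 50.0           # yield of 1 with 2, and of 2 with 1

a12 = (x12 - m1) / x21          # (x_{1(2)} - m_1) / x_{2(1)}
a21 = (x21 - m2) / x12          # (x_{2(1)} - m_2) / x_{1(2)}
print(a12, a21)                 # both negative here: mutual competition
```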
The vulnerability of species richness to several factors, like climate change, habitat fragmentation, resource exploitation, etc., poses a challenge to conservation biologists and agencies working to sustain ecosystem services. Hence, there is a clear need for early warning indicators of species loss generated from empirical data.
A recently proposed early warning indicator of such population crashes uses effective estimation of the Lotka-Volterra interaction coefficients a ^ i j {\displaystyle {\hat {a}}_{ij}} . The idea is that such coefficients can be obtained from spatial distributions of individuals of the different species through Maximum Entropy . This method was tested against the data collected for trees by the Barro Colorado Island Research Station , comprising eight censuses performed every 5 years from 1981 to 2015. The main finding was that, for those tree species that suffered steep population declines (of at least 50%) across the eight tree censuses, the drop of a ^ i i {\displaystyle {\hat {a}}_{ii}} is always steeper and occurs before the drop of the corresponding species abundance N i . [ 7 ] Indeed, such sharp declines in a ^ i i {\displaystyle {\hat {a}}_{ii}} occur between 5 and 15 years earlier than comparable declines in N i , and thus they serve as early warnings of impending population busts.
The generalized Maxwell model , also known as the Maxwell–Wiechert model (after James Clerk Maxwell and Emil Wiechert [ 1 ] [ 2 ] ), is the most general form of the linear model for viscoelasticity . In this model, several Maxwell elements are assembled in parallel. It takes into account that the relaxation does not occur at a single time, but in a set of times. Due to the presence of molecular segments of different lengths, with shorter ones contributing less than longer ones, there is a varying time distribution. The Wiechert model represents this by having as many spring–dashpot Maxwell elements as are necessary to accurately represent the distribution. [ 3 ] [ 4 ]
Given N + 1 {\displaystyle N+1} elements with moduli E i {\displaystyle E_{i}} , viscosities η i {\displaystyle \eta _{i}} , and relaxation times τ i = η i E i {\displaystyle \tau _{i}={\frac {\eta _{i}}{E_{i}}}}
The general form for the model for solids is given by [ citation needed ] :
σ + {\displaystyle \sigma +} ∑ n = 1 N ( ∑ i 1 = 1 N − n + 1 . . . ( ∑ i a = i a − 1 + 1 N − ( n − a ) + 1 . . . ( ∑ i n = i n − 1 + 1 N ( ∏ j ∈ { i 1 , . . . , i n } τ j ) ) . . . ) . . . ) ∂ n σ ∂ t n {\displaystyle \sum _{n=1}^{N}{\left({\sum _{i_{1}=1}^{N-n+1}{...\left({\sum _{i_{a}=i_{a-1}+1}^{N-\left({n-a}\right)+1}{...\left({\sum _{i_{n}=i_{n-1}+1}^{N}{\left({\prod _{j\in \left\{{i_{1},...,i_{n}}\right\}}{\tau _{j}}}\right)}}\right)...}}\right)...}}\right){\frac {\partial ^{n}{\sigma }}{\partial {t}^{n}}}}}
= {\displaystyle =}
E 0 ϵ + {\displaystyle E_{0}\epsilon +} ∑ n = 1 N ( ∑ i 1 = 1 N − n + 1 . . . ( ∑ i a = i a − 1 + 1 N − ( n − a ) + 1 . . . ( ∑ i n = i n − 1 + 1 N ( ( E 0 + ∑ j ∈ { i 1 , . . . , i n } E j ) ( ∏ k ∈ { i 1 , . . . , i n } τ k ) ) ) . . . ) . . . ) ∂ n ϵ ∂ t n {\displaystyle \sum _{n=1}^{N}{\left({\sum _{i_{1}=1}^{N-n+1}{...\left({\sum _{i_{a}=i_{a-1}+1}^{N-\left({n-a}\right)+1}{...\left({\sum _{i_{n}=i_{n-1}+1}^{N}{\left({\left({E_{0}+\sum _{j\in \left\{{i_{1},...,i_{n}}\right\}}{E_{j}}}\right)\left({\prod _{k\in \left\{{i_{1},...,i_{n}}\right\}}{\tau _{k}}}\right)}\right)}}\right)...}}\right)...}}\right){\frac {\partial ^{n}{\epsilon }}{\partial {t}^{n}}}}}
σ + {\displaystyle \sigma +} ( ∑ i = 1 N τ i ) ∂ σ ∂ t + {\displaystyle {\left({\sum _{i=1}^{N}{\tau _{i}}}\right)}{\frac {\partial {\sigma }}{\partial {t}}}+} ( ∑ i = 1 N − 1 ( ∑ j = i + 1 N τ i τ j ) ) ∂ 2 σ ∂ t 2 {\displaystyle {\left({\sum _{i=1}^{N-1}{\left({\sum _{j=i+1}^{N}{\tau _{i}\tau _{j}}}\right)}}\right)}{\frac {\partial ^{2}{\sigma }}{\partial {t}^{2}}}} + . . . + {\displaystyle +...+}
( ∑ i 1 = 1 N − n + 1 . . . ( ∑ i a = i a − 1 + 1 N − ( n − a ) + 1 . . . ( ∑ i n = i n − 1 + 1 N ( ∏ j ∈ { i 1 , . . . , i n } τ j ) ) . . . ) . . . ) ∂ n σ ∂ t n {\displaystyle \left({\sum _{i_{1}=1}^{N-n+1}{...\left({\sum _{i_{a}=i_{a-1}+1}^{N-\left({n-a}\right)+1}{...\left({\sum _{i_{n}=i_{n-1}+1}^{N}{\left({\prod _{j\in \left\{{i_{1},...,i_{n}}\right\}}{\tau _{j}}}\right)}}\right)...}}\right)...}}\right){\frac {\partial ^{n}{\sigma }}{\partial {t}^{n}}}} + . . . + {\displaystyle +...+} ( ∏ i = 1 N τ i ) ∂ N σ ∂ t N {\displaystyle \left({\prod _{i=1}^{N}{\tau _{i}}}\right){\frac {\partial ^{N}{\sigma }}{\partial {t}^{N}}}}
= {\displaystyle =}
E 0 ϵ + {\displaystyle E_{0}\epsilon +} ( ∑ i = 1 N ( E 0 + E i ) τ i ) ∂ ϵ ∂ t + {\displaystyle {\left({\sum _{i=1}^{N}{\left({E_{0}+E_{i}}\right)\tau _{i}}}\right)}{\frac {\partial {\epsilon }}{\partial {t}}}+} ( ∑ i = 1 N − 1 ( ∑ j = i + 1 N ( E 0 + E i + E j ) τ i τ j ) ) ∂ 2 ϵ ∂ t 2 {\displaystyle {\left({\sum _{i=1}^{N-1}{\left({\sum _{j=i+1}^{N}{\left({E_{0}+E_{i}+E_{j}}\right)\tau _{i}\tau _{j}}}\right)}}\right)}{\frac {\partial ^{2}{\epsilon }}{\partial {t}^{2}}}} + . . . + {\displaystyle +...+}
( ∑ i 1 = 1 N − n + 1 . . . ( ∑ i a = i a − 1 + 1 N − ( n − a ) + 1 . . . ( ∑ i n = i n − 1 + 1 N ( ( E 0 + ∑ j ∈ { i 1 , . . . , i n } E j ) ( ∏ k ∈ { i 1 , . . . , i n } τ k ) ) ) . . . ) . . . ) ∂ n ϵ ∂ t n {\displaystyle \left({\sum _{i_{1}=1}^{N-n+1}{...\left({\sum _{i_{a}=i_{a-1}+1}^{N-\left({n-a}\right)+1}{...\left({\sum _{i_{n}=i_{n-1}+1}^{N}{\left({\left({E_{0}+\sum _{j\in \left\{{i_{1},...,i_{n}}\right\}}{E_{j}}}\right)\left({\prod _{k\in \left\{{i_{1},...,i_{n}}\right\}}{\tau _{k}}}\right)}\right)}}\right)...}}\right)...}}\right){\frac {\partial ^{n}{\epsilon }}{\partial {t}^{n}}}} + . . . + {\displaystyle +...+} ( E 0 + ∑ j = 1 N E j ) ( ∏ i = 1 N τ i ) ∂ N ϵ ∂ t N {\displaystyle \left({E_{0}+\sum _{j=1}^{N}E_{j}}\right)\left({\prod _{i=1}^{N}{\tau _{i}}}\right){\frac {\partial ^{N}{\epsilon }}{\partial {t}^{N}}}}
Following the above model with N + 1 = 2 {\displaystyle N+1=2} elements yields the standard linear solid model :
σ + τ 1 ∂ σ ∂ t = E 0 ϵ + τ 1 ( E 0 + E 1 ) ∂ ϵ ∂ t {\displaystyle \sigma +\tau _{1}{\frac {\partial {\sigma }}{\partial {t}}}=E_{0}\epsilon +\tau _{1}\left({E_{0}+E_{1}}\right){\frac {\partial {\epsilon }}{\partial {t}}}}
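Under a constant applied strain, the stress of the Wiechert solid relaxes according to the Prony series E(t) = E₀ + Σᵢ Eᵢ e^(−t/τᵢ), a standard consequence of the parallel arrangement of the elements. A short NumPy sketch with illustrative moduli and relaxation times:

```python
# Relaxation modulus of a generalized Maxwell (Wiechert) solid (NumPy sketch).
import numpy as np

E0 = 1.0                               # long-time (equilibrium) modulus
E = np.array([2.0, 0.5])               # moduli of the Maxwell arms
tau = np.array([0.1, 10.0])            # relaxation times tau_i = eta_i / E_i

def relaxation_modulus(t):
    t = np.asarray(t, dtype=float)
    return E0 + np.sum(E * np.exp(-t[..., None] / tau), axis=-1)

t = np.array([0.0, 0.1, 1.0, 100.0])
print(relaxation_modulus(t))           # decays from E0 + sum(E_i) toward E0
```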
Given N + 1 {\displaystyle N+1} elements with moduli E i {\displaystyle E_{i}} , viscosities η i {\displaystyle \eta _{i}} , and relaxation times τ i = η i E i {\displaystyle \tau _{i}={\frac {\eta _{i}}{E_{i}}}}
The general form for the model for fluids is given by:
σ + {\displaystyle \sigma +} ∑ n = 1 N ( ∑ i 1 = 1 N − n + 1 . . . ( ∑ i a = i a − 1 + 1 N − ( n − a ) + 1 . . . ( ∑ i n = i n − 1 + 1 N ( ∏ j ∈ { i 1 , . . . , i n } τ j ) ) . . . ) . . . ) ∂ n σ ∂ t n {\displaystyle \sum _{n=1}^{N}{\left({\sum _{i_{1}=1}^{N-n+1}{...\left({\sum _{i_{a}=i_{a-1}+1}^{N-\left({n-a}\right)+1}{...\left({\sum _{i_{n}=i_{n-1}+1}^{N}{\left({\prod _{j\in \left\{{i_{1},...,i_{n}}\right\}}{\tau _{j}}}\right)}}\right)...}}\right)...}}\right){\frac {\partial ^{n}{\sigma }}{\partial {t}^{n}}}}}
= {\displaystyle =}
∑ n = 1 N ( η 0 + ∑ i 1 = 1 N − n + 1 . . . ( ∑ i a = i a − 1 + 1 N − ( n − a ) + 1 . . . ( ∑ i n = i n − 1 + 1 N ( ( ∑ j ∈ { i 1 , . . . , i n } E j ) ( ∏ k ∈ { i 1 , . . . , i n } τ k ) ) ) . . . ) . . . ) ∂ n ϵ ∂ t n {\displaystyle \sum _{n=1}^{N}{\left({\eta _{0}+\sum _{i_{1}=1}^{N-n+1}{...\left({\sum _{i_{a}=i_{a-1}+1}^{N-\left({n-a}\right)+1}{...\left({\sum _{i_{n}=i_{n-1}+1}^{N}{\left({\left({\sum _{j\in \left\{{i_{1},...,i_{n}}\right\}}{E_{j}}}\right)\left({\prod _{k\in \left\{{i_{1},...,i_{n}}\right\}}{\tau _{k}}}\right)}\right)}}\right)...}}\right)...}}\right){\frac {\partial ^{n}{\epsilon }}{\partial {t}^{n}}}}}
σ + {\displaystyle \sigma +} ( ∑ i = 1 N τ i ) ∂ σ ∂ t + {\displaystyle {\left({\sum _{i=1}^{N}{\tau _{i}}}\right)}{\frac {\partial {\sigma }}{\partial {t}}}+} ( ∑ i = 1 N − 1 ( ∑ j = i + 1 N τ i τ j ) ) ∂ 2 σ ∂ t 2 {\displaystyle {\left({\sum _{i=1}^{N-1}{\left({\sum _{j=i+1}^{N}{\tau _{i}\tau _{j}}}\right)}}\right)}{\frac {\partial ^{2}{\sigma }}{\partial {t}^{2}}}} + . . . + {\displaystyle +...+}
( ∑ i 1 = 1 N − n + 1 . . . ( ∑ i a = i a − 1 + 1 N − ( n − a ) + 1 . . . ( ∑ i n = i n − 1 + 1 N ( ∏ j ∈ { i 1 , . . . , i n } τ j ) ) . . . ) . . . ) ∂ n σ ∂ t n {\displaystyle \left({\sum _{i_{1}=1}^{N-n+1}{...\left({\sum _{i_{a}=i_{a-1}+1}^{N-\left({n-a}\right)+1}{...\left({\sum _{i_{n}=i_{n-1}+1}^{N}{\left({\prod _{j\in \left\{{i_{1},...,i_{n}}\right\}}{\tau _{j}}}\right)}}\right)...}}\right)...}}\right){\frac {\partial ^{n}{\sigma }}{\partial {t}^{n}}}} + . . . + {\displaystyle +...+} ( ∏ i = 1 N τ i ) ∂ N σ ∂ t N {\displaystyle \left({\prod _{i=1}^{N}{\tau _{i}}}\right){\frac {\partial ^{N}{\sigma }}{\partial {t}^{N}}}}
= {\displaystyle =}
( η 0 + ∑ i = 1 N E i τ i ) ∂ ϵ ∂ t + {\displaystyle {\left({\eta _{0}+\sum _{i=1}^{N}{E_{i}\tau _{i}}}\right)}{\frac {\partial {\epsilon }}{\partial {t}}}+} ( η 0 + ∑ i = 1 N − 1 ( ∑ j = i + 1 N ( E i + E j ) τ i τ j ) ) ∂ 2 ϵ ∂ t 2 {\displaystyle {\left({\eta _{0}+\sum _{i=1}^{N-1}{\left({\sum _{j=i+1}^{N}{\left({E_{i}+E_{j}}\right)\tau _{i}\tau _{j}}}\right)}}\right)}{\frac {\partial ^{2}{\epsilon }}{\partial {t}^{2}}}} + . . . + {\displaystyle +...+}
( η 0 + ∑ i 1 = 1 N − n + 1 . . . ( ∑ i a = i a − 1 + 1 N − ( n − a ) + 1 . . . ( ∑ i n = i n − 1 + 1 N ( ( ∑ j ∈ { i 1 , . . . , i n } E j ) ( ∏ k ∈ { i 1 , . . . , i n } τ k ) ) ) . . . ) . . . ) ∂ n ϵ ∂ t n {\displaystyle \left({\eta _{0}+\sum _{i_{1}=1}^{N-n+1}{...\left({\sum _{i_{a}=i_{a-1}+1}^{N-\left({n-a}\right)+1}{...\left({\sum _{i_{n}=i_{n-1}+1}^{N}{\left({\left({\sum _{j\in \left\{{i_{1},...,i_{n}}\right\}}{E_{j}}}\right)\left({\prod _{k\in \left\{{i_{1},...,i_{n}}\right\}}{\tau _{k}}}\right)}\right)}}\right)...}}\right)...}}\right){\frac {\partial ^{n}{\epsilon }}{\partial {t}^{n}}}} + . . . + {\displaystyle +...+} ( η 0 + ( ∑ j = 1 N E j ) ( ∏ i = 1 N τ i ) ) ∂ N ϵ ∂ t N {\displaystyle \left({\eta _{0}+\left({\sum _{j=1}^{N}E_{j}}\right)\left({\prod _{i=1}^{N}{\tau _{i}}}\right)}\right){\frac {\partial ^{N}{\epsilon }}{\partial {t}^{N}}}}
The analogous model to the standard linear solid model is the three parameter fluid, also known as the Jeffreys model: [ 5 ]
σ + τ 1 ∂ σ ∂ t = ( η 0 + τ 1 E 1 ∂ ∂ t ) ∂ ϵ ∂ t {\displaystyle \sigma +\tau _{1}{\frac {\partial {\sigma }}{\partial {t}}}=\left({\eta _{0}+\tau _{1}E_{1}{\frac {\partial }{\partial t}}}\right){\frac {\partial {\epsilon }}{\partial {t}}}} | https://en.wikipedia.org/wiki/Generalized_Maxwell_model |
In vector calculus and differential geometry the generalized Stokes theorem (sometimes with apostrophe as Stokes' theorem or Stokes's theorem ), also called the Stokes–Cartan theorem , [ 1 ] is a statement about the integration of differential forms on manifolds , which both simplifies and generalizes several theorems from vector calculus . In particular, the fundamental theorem of calculus is the special case where the manifold is a line segment , Green’s theorem and Stokes' theorem are the cases of a surface in R 2 {\displaystyle \mathbb {R} ^{2}} or R 3 , {\displaystyle \mathbb {R} ^{3},} and the divergence theorem is the case of a volume in R 3 . {\displaystyle \mathbb {R} ^{3}.} [ 2 ] Hence, the theorem is sometimes referred to as the fundamental theorem of multivariate calculus . [ 3 ]
Stokes' theorem says that the integral of a differential form ω {\displaystyle \omega } over the boundary ∂ Ω {\displaystyle \partial \Omega } of some orientable manifold Ω {\displaystyle \Omega } is equal to the integral of its exterior derivative d ω {\displaystyle d\omega } over the whole of Ω {\displaystyle \Omega } , i.e., ∫ ∂ Ω ω = ∫ Ω d ω . {\displaystyle \int _{\partial \Omega }\omega =\int _{\Omega }\operatorname {d} \omega \,.}
Stokes' theorem was formulated in its modern form by Élie Cartan in 1945, [ 4 ] following earlier work on the generalization of the theorems of vector calculus by Vito Volterra , Édouard Goursat , and Henri Poincaré . [ 5 ] [ 6 ]
This modern form of Stokes' theorem is a vast generalization of a classical result that Lord Kelvin communicated to George Stokes in a letter dated July 2, 1850. [ 7 ] [ 8 ] [ 9 ] Stokes set the theorem as a question on the 1854 Smith's Prize exam, which led to the result bearing his name. It was first published by Hermann Hankel in 1861. [ 9 ] [ 10 ] This classical case relates the surface integral of the curl of a vector field F {\displaystyle {\textbf {F}}} over a surface (that is, the flux of curl F {\displaystyle {\text{curl}}\,{\textbf {F}}} ) in Euclidean three-space to the line integral of the vector field over the surface boundary.
The second fundamental theorem of calculus states that the integral of a function f {\displaystyle f} over the interval [ a , b ] {\displaystyle [a,b]} can be calculated by finding an antiderivative F {\displaystyle F} of f {\displaystyle f} : ∫ a b f ( x ) d x = F ( b ) − F ( a ) . {\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a)\,.}
Stokes' theorem is a vast generalization of this theorem in the following sense.
In even simpler terms, one can consider the points as boundaries of curves, that is as 0-dimensional boundaries of 1-dimensional manifolds. So, just as one can find the value of an integral ( f d x = d F {\displaystyle f\,dx=dF} ) over a 1-dimensional manifold ( [ a , b ] {\displaystyle [a,b]} ) by considering the anti-derivative ( F {\displaystyle F} ) at the 0-dimensional boundaries ( { a , b } {\displaystyle \{a,b\}} ), one can generalize the fundamental theorem of calculus, with a few additional caveats, to deal with the value of integrals ( d ω {\displaystyle d\omega } ) over n {\displaystyle n} -dimensional manifolds ( Ω {\displaystyle \Omega } ) by considering the antiderivative ( ω {\displaystyle \omega } ) at the ( n − 1 ) {\displaystyle (n-1)} -dimensional boundaries ( ∂ Ω {\displaystyle \partial \Omega } ) of the manifold.
So the fundamental theorem reads: ∫ [ a , b ] f ( x ) d x = ∫ [ a , b ] d F = ∫ ∂ [ a , b ] F = ∫ { a } − ∪ { b } + F = F ( b ) − F ( a ) . {\displaystyle \int _{[a,b]}f(x)\,dx=\int _{[a,b]}\,dF=\int _{\partial [a,b]}\,F=\int _{\{a\}^{-}\cup \{b\}^{+}}F=F(b)-F(a)\,.}
Let Ω {\displaystyle \Omega } be an oriented smooth manifold of dimension n {\displaystyle n} with boundary and let α {\displaystyle \alpha } be a smooth n {\displaystyle n} - differential form that is compactly supported on Ω {\displaystyle \Omega } . First, suppose that α {\displaystyle \alpha } is compactly supported in the domain of a single, oriented coordinate chart { U , φ } {\displaystyle \{U,\varphi \}} . In this case, we define the integral of α {\displaystyle \alpha } over Ω {\displaystyle \Omega } as ∫ Ω α = ∫ φ ( U ) ( φ − 1 ) ∗ α , {\displaystyle \int _{\Omega }\alpha =\int _{\varphi (U)}(\varphi ^{-1})^{*}\alpha \,,} i.e., via the pullback of α {\displaystyle \alpha } to R n {\displaystyle \mathbb {R} ^{n}} .
More generally, the integral of α {\displaystyle \alpha } over Ω {\displaystyle \Omega } is defined as follows: Let { ψ i } {\displaystyle \{\psi _{i}\}} be a partition of unity associated with a locally finite cover { U i , φ i } {\displaystyle \{U_{i},\varphi _{i}\}} of (consistently oriented) coordinate charts, then define the integral ∫ Ω α ≡ ∑ i ∫ U i ψ i α , {\displaystyle \int _{\Omega }\alpha \equiv \sum _{i}\int _{U_{i}}\psi _{i}\alpha \,,} where each term in the sum is evaluated by pulling back to R n {\displaystyle \mathbb {R} ^{n}} as described above. This quantity is well-defined; that is, it does not depend on the choice of the coordinate charts, nor the partition of unity.
The generalized Stokes theorem reads:
Theorem ( Stokes–Cartan ) — Let ω {\displaystyle \omega } be a smooth ( n − 1 ) {\displaystyle (n-1)} - form with compact support on an oriented , n {\displaystyle n} -dimensional manifold-with-boundary Ω {\displaystyle \Omega } , where ∂ Ω {\displaystyle \partial \Omega } is given the induced orientation. Then ∫ Ω d ω = ∫ ∂ Ω ω . {\displaystyle \int _{\Omega }d\omega =\int _{\partial \Omega }\omega .}
Here d {\displaystyle d} is the exterior derivative , which is defined using the manifold structure only. The right-hand side is sometimes written as ∮ ∂ Ω ω {\textstyle \oint _{\partial \Omega }\omega } to stress the fact that the ( n − 1 ) {\displaystyle (n-1)} -manifold ∂ Ω {\displaystyle \partial \Omega } has no boundary. [ note 1 ] (This fact is also an implication of Stokes' theorem, since for a given smooth n {\displaystyle n} -dimensional manifold Ω {\displaystyle \Omega } , application of the theorem twice gives ∫ ∂ ( ∂ Ω ) ω = ∫ Ω d ( d ω ) = 0 {\textstyle \int _{\partial (\partial \Omega )}\omega =\int _{\Omega }d(d\omega )=0} for any ( n − 2 ) {\displaystyle (n-2)} -form ω {\displaystyle \omega } , which implies that ∂ ( ∂ Ω ) = ∅ {\displaystyle \partial (\partial \Omega )=\emptyset } .) The right-hand side of the equation is often used to formulate integral laws; the left-hand side then leads to equivalent differential formulations (see below).
The theorem is often used in situations where Ω {\displaystyle \Omega } is an embedded oriented submanifold of some bigger manifold, often R k {\displaystyle \mathbb {R} ^{k}} , on which the form ω {\displaystyle \omega } is defined.
Let M be a smooth manifold . A (smooth) singular k -simplex in M is defined as a smooth map from the standard simplex in R k to M . The group C k ( M , Z ) of singular k - chains on M is defined to be the free abelian group on the set of singular k -simplices in M . These groups, together with the boundary map, ∂ , define a chain complex . The corresponding homology (resp. cohomology) group is isomorphic to the usual singular homology group H k ( M , Z ) (resp. the singular cohomology group H k ( M , Z ) ), defined using continuous rather than smooth simplices in M .
On the other hand, the differential forms, with exterior derivative, d , as the connecting map, form a cochain complex, which defines the de Rham cohomology groups H d R k ( M , R ) {\displaystyle H_{dR}^{k}(M,\mathbf {R} )} .
Differential k -forms can be integrated over a k -simplex in a natural way, by pulling back to R k . Extending by linearity allows one to integrate over chains. This gives a linear map from the space of k -forms to the k th group of singular cochains, C k ( M , Z ) , the linear functionals on C k ( M , Z ) . In other words, a k -form ω defines a functional I ( ω ) ( c ) = ∮ c ω . {\displaystyle I(\omega )(c)=\oint _{c}\omega .} on the k -chains. Stokes' theorem says that this is a chain map from de Rham cohomology to singular cohomology with real coefficients; the exterior derivative, d , behaves like the dual of ∂ on forms. This gives a homomorphism from de Rham cohomology to singular cohomology. On the level of forms, this means:
1. closed forms, i.e., d ω = 0 {\displaystyle d\omega =0} , have zero integral over boundaries , i.e. over chains that can be written as ∂ c {\displaystyle \partial c} , and
2. exact forms, i.e., ω = d σ {\displaystyle \omega =d\sigma } , have zero integral over cycles , i.e. chains c {\displaystyle c} with ∂ c = 0 {\displaystyle \partial c=0} .
De Rham's theorem shows that this homomorphism is in fact an isomorphism . So the converse to 1 and 2 above hold true. In other words, if { c i } are cycles generating the k th homology group, then for any corresponding real numbers, { a i } , there exists a closed form, ω , such that ∮ c i ω = a i , {\displaystyle \oint _{c_{i}}\omega =a_{i}\,,} and this form is unique up to exact forms.
Stokes' theorem on smooth manifolds can be derived from Stokes' theorem for chains in smooth manifolds, and vice versa. [ 11 ] Formally stated, the latter reads: [ 12 ]
Theorem ( Stokes' theorem for chains ) — If c is a smooth k -chain in a smooth manifold M , and ω is a smooth ( k − 1) -form on M , then ∫ ∂ c ω = ∫ c d ω . {\displaystyle \int _{\partial c}\omega =\int _{c}d\omega .}
To simplify these topological arguments, it is worthwhile to examine the underlying principle by considering an example for d = 2 dimensions. The essential idea is that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise. As a consequence, only the contribution from the boundary remains. It thus suffices to prove Stokes' theorem for sufficiently fine tilings (or, equivalently, simplices ), which usually is not difficult.
Let γ : [ a , b ] → R 2 {\displaystyle \gamma :[a,b]\to \mathbb {R} ^{2}} be a piecewise smooth Jordan plane curve . The Jordan curve theorem implies that γ {\displaystyle \gamma } divides R 2 {\displaystyle \mathbb {R} ^{2}} into two components, a compact one and another that is non-compact. Let D {\displaystyle D} denote the compact part that is bounded by γ {\displaystyle \gamma } and suppose ψ : D → R 3 {\displaystyle \psi :D\to \mathbb {R} ^{3}} is smooth, with S = ψ ( D ) {\displaystyle S=\psi (D)} . If Γ {\displaystyle \Gamma } is the space curve defined by Γ ( t ) = ψ ( γ ( t ) ) {\displaystyle \Gamma (t)=\psi (\gamma (t))} [ note 2 ] and F {\displaystyle {\textbf {F}}} is a smooth vector field on R 3 {\displaystyle \mathbb {R} ^{3}} , then: [ 13 ] [ 14 ] [ 15 ] ∮ Γ F ⋅ d Γ = ∬ S ( ∇ × F ) ⋅ d S {\displaystyle \oint _{\Gamma }\mathbf {F} \,\cdot \,d{\mathbf {\Gamma } }=\iint _{S}\left(\nabla \times \mathbf {F} \right)\cdot \,d\mathbf {S} }
This classical statement is a special case of the general formulation after making an identification of vector field with a 1-form and its curl with a two form through ( F x F y F z ) ⋅ d Γ → F x d x + F y d y + F z d z {\displaystyle {\begin{pmatrix}F_{x}\\F_{y}\\F_{z}\\\end{pmatrix}}\cdot d\Gamma \to F_{x}\,dx+F_{y}\,dy+F_{z}\,dz} ∇ × ( F x F y F z ) ⋅ d S = ( ∂ y F z − ∂ z F y ∂ z F x − ∂ x F z ∂ x F y − ∂ y F x ) ⋅ d S → d ( F x d x + F y d y + F z d z ) = ( ∂ y F z − ∂ z F y ) d y ∧ d z + ( ∂ z F x − ∂ x F z ) d z ∧ d x + ( ∂ x F y − ∂ y F x ) d x ∧ d y . {\displaystyle {\begin{aligned}&\nabla \times {\begin{pmatrix}F_{x}\\F_{y}\\F_{z}\end{pmatrix}}\cdot d\mathbf {S} ={\begin{pmatrix}\partial _{y}F_{z}-\partial _{z}F_{y}\\\partial _{z}F_{x}-\partial _{x}F_{z}\\\partial _{x}F_{y}-\partial _{y}F_{x}\\\end{pmatrix}}\cdot d\mathbf {S} \to \\[1.4ex]&d(F_{x}\,dx+F_{y}\,dy+F_{z}\,dz)=\left(\partial _{y}F_{z}-\partial _{z}F_{y}\right)dy\wedge dz+\left(\partial _{z}F_{x}-\partial _{x}F_{z}\right)dz\wedge dx+\left(\partial _{x}F_{y}-\partial _{y}F_{x}\right)dx\wedge dy.\end{aligned}}}
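This identification can be sanity-checked numerically. For F = (−y, x, 0) on the unit disk, curl F = (0, 0, 2), so both sides of the Kelvin–Stokes identity equal 2π (twice the disk area); a short NumPy sketch:

```python
# Numerical check of the classical Kelvin-Stokes theorem for F = (-y, x, 0).
import numpy as np

n = 100_000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y = np.cos(t), np.sin(t)                 # gamma(t) on the unit circle
dx, dy = -np.sin(t), np.cos(t)              # gamma'(t)

line = np.sum(-y * dx + x * dy) * (2.0 * np.pi / n)   # line integral of F
flux = 2.0 * np.pi                          # (curl F)_z = 2 times the disk area pi
assert np.isclose(line, flux)
print(line)
```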
The formulation above, in which Ω {\displaystyle \Omega } is a smooth manifold with boundary, does not suffice in many applications. For example, if the domain of integration is defined as the plane region between two x {\displaystyle x} -coordinates and the graphs of two functions, it will often happen that the domain has corners. In such a case, the corner points mean that Ω {\displaystyle \Omega } is not a smooth manifold with boundary, and so the statement of Stokes' theorem given above does not apply. Nevertheless, it is possible to check that the conclusion of Stokes' theorem is still true. This is because Ω {\displaystyle \Omega } and its boundary are well-behaved away from a small set of points (a measure zero set).
A version of Stokes' theorem that allows for roughness was proved by Whitney. [ 16 ] Assume that D {\displaystyle D} is a connected bounded open subset of R n {\displaystyle \mathbb {R} ^{n}} . Call D {\displaystyle D} a standard domain if it satisfies the following property: there exists a subset P {\displaystyle P} of ∂ D {\displaystyle \partial D} , open in ∂ D {\displaystyle \partial D} , whose complement in ∂ D {\displaystyle \partial D} has Hausdorff ( n − 1 ) {\displaystyle (n-1)} -measure zero; and such that every point of P {\displaystyle P} has a generalized normal vector . This is a vector v ( x ) {\displaystyle {\textbf {v}}(x)} such that, if a coordinate system is chosen so that v ( x ) {\displaystyle {\textbf {v}}(x)} is the first basis vector, then, in an open neighborhood around x {\displaystyle x} , there exists a smooth function f ( x 2 , … , x n ) {\displaystyle f(x_{2},\dots ,x_{n})} such that P {\displaystyle P} is the graph { x 1 = f ( x 2 , … , x n ) } {\displaystyle \{x_{1}=f(x_{2},\dots ,x_{n})\}} and D {\displaystyle D} is the region { x 1 : x 1 < f ( x 2 , … , x n ) } {\displaystyle \{x_{1}:x_{1}<f(x_{2},\dots ,x_{n})\}} . Whitney remarks that the boundary of a standard domain is the union of a set of zero Hausdorff ( n − 1 ) {\displaystyle (n-1)} -measure and a finite or countable union of smooth ( n − 1 ) {\displaystyle (n-1)} -manifolds, each of which has the domain on only one side. He then proves that if D {\displaystyle D} is a standard domain in R n {\displaystyle \mathbb {R} ^{n}} , ω {\displaystyle \omega } is an ( n − 1 ) {\displaystyle (n-1)} -form which is defined, continuous, and bounded on D ∪ P {\displaystyle D\cup P} , smooth on D {\displaystyle D} , integrable on P {\displaystyle P} , and such that d ω {\displaystyle d\omega } is integrable on D {\displaystyle D} , then Stokes' theorem holds, that is, ∫ P ω = ∫ D d ω . {\displaystyle \int _{P}\omega =\int _{D}d\omega \,.}
The study of measure-theoretic properties of rough sets leads to geometric measure theory . Even more general versions of Stokes' theorem have been proved by Federer and by Harrison. [ 17 ]
The general form of the Stokes theorem using differential forms is more powerful and easier to use than the special cases. The traditional versions can be formulated using Cartesian coordinates without the machinery of differential geometry, and thus are more accessible. Further, they are older and their names are more familiar as a result. The traditional forms are often considered more convenient by practicing scientists and engineers, but the non-naturalness of the traditional formulation becomes apparent when using other coordinate systems, even familiar ones like spherical or cylindrical coordinates. There is potential for confusion in the way names are applied and in the use of dual formulations.
This is a (dualized) (1 + 1)-dimensional case, for a 1-form (dualized because it is a statement about vector fields ). This special case is often just referred to as Stokes' theorem in many introductory university vector calculus courses and is used in physics and engineering. It is also sometimes known as the curl theorem.
The classical Stokes' theorem relates the surface integral of the curl of a vector field over a surface Σ {\displaystyle \Sigma } in Euclidean three-space to the line integral of the vector field over its boundary. It is a special case of the general Stokes theorem (with n = 2 {\displaystyle n=2} ) once we identify a vector field with a 1-form using the metric on Euclidean 3-space. The curve of the line integral, ∂ Σ {\displaystyle \partial \Sigma } , must have positive orientation , meaning that ∂ Σ {\displaystyle \partial \Sigma } points counterclockwise when the surface normal , n {\displaystyle n} , points toward the viewer.
One consequence of this theorem is that the field lines of a vector field with zero curl cannot be closed contours. The formula can be rewritten as:
Theorem — Suppose F = ( P ( x , y , z ) , Q ( x , y , z ) , R ( x , y , z ) ) {\displaystyle {\textbf {F}}={\big (}P(x,y,z),Q(x,y,z),R(x,y,z){\big )}} is defined in a region with smooth surface Σ {\displaystyle \Sigma } and has continuous first-order partial derivatives . Then ∬ Σ ( ( ∂ R ∂ y − ∂ Q ∂ z ) d y d z + ( ∂ P ∂ z − ∂ R ∂ x ) d z d x + ( ∂ Q ∂ x − ∂ P ∂ y ) d x d y ) = ∮ ∂ Σ ( P d x + Q d y + R d z ) , {\displaystyle \iint _{\Sigma }{\Biggl (}\left({\frac {\partial R}{\partial y}}-{\frac {\partial Q}{\partial z}}\right)dy\,dz+\left({\frac {\partial P}{\partial z}}-{\frac {\partial R}{\partial x}}\right)dz\,dx+\left({\frac {\partial Q}{\partial x}}-{\frac {\partial P}{\partial y}}\right)dx\,dy{\Biggr )}=\oint _{\partial \Sigma }{\Big (}P\,dx+Q\,dy+R\,dz{\Big )}\,,} where P , Q {\displaystyle P,Q} and R {\displaystyle R} are the components of F {\displaystyle {\textbf {F}}} , and ∂ Σ {\displaystyle \partial \Sigma } is the boundary of the region Σ {\displaystyle \Sigma } .
Green's theorem is immediately recognizable as the third integrand of both sides in the integral in terms of P , Q , and R cited above.
Two of the four Maxwell equations involve curls of 3-D vector fields, and their differential and integral forms are related by the special 3-dimensional (vector calculus) case of Stokes' theorem . Caution must be taken to avoid cases with moving boundaries: the partial time derivatives are intended to exclude such cases. If moving boundaries are included, interchange of integration and differentiation introduces terms related to boundary motion not included in the results below (see Differentiation under the integral sign ):
(Table omitted: the differential and integral forms of the Maxwell–Faraday equation and of Ampère's law, in each case with the contour C and surface S not necessarily stationary.)
The above listed subset of Maxwell's equations are valid for electromagnetic fields expressed in SI units . In other systems of units, such as CGS or Gaussian units , the scaling factors for the terms differ. For example, in Gaussian units, Faraday's law of induction and Ampère's law take the forms: [ 18 ] [ 19 ] ∇ × E = − 1 c ∂ B ∂ t , ∇ × H = 1 c ∂ D ∂ t + 4 π c J , {\displaystyle {\begin{aligned}\nabla \times \mathbf {E} &=-{\frac {1}{c}}{\frac {\partial \mathbf {B} }{\partial t}}\,,\\\nabla \times \mathbf {H} &={\frac {1}{c}}{\frac {\partial \mathbf {D} }{\partial t}}+{\frac {4\pi }{c}}\mathbf {J} \,,\end{aligned}}} respectively, where c is the speed of light in vacuum.
Likewise, the divergence theorem ∫ V o l ∇ ⋅ F d V o l = ∮ ∂ Vol F ⋅ d Σ {\displaystyle \int _{\mathrm {Vol} }\nabla \cdot \mathbf {F} \,d_{\mathrm {Vol} }=\oint _{\partial \operatorname {Vol} }\mathbf {F} \cdot d{\boldsymbol {\Sigma }}} is a special case if we identify a vector field with the ( n − 1 ) {\displaystyle (n-1)} -form obtained by contracting the vector field with the Euclidean volume form. An application of this is the case F = f c → {\displaystyle {\textbf {F}}=f{\vec {c}}} where c → {\displaystyle {\vec {c}}} is an arbitrary constant vector. Working out the divergence of the product gives c → ⋅ ∫ V o l ∇ f d V o l = c → ⋅ ∮ ∂ V o l f d Σ . {\displaystyle {\vec {c}}\cdot \int _{\mathrm {Vol} }\nabla f\,d_{\mathrm {Vol} }={\vec {c}}\cdot \oint _{\partial \mathrm {Vol} }f\,d{\boldsymbol {\Sigma }}\,.} Since this holds for all c → {\displaystyle {\vec {c}}} we find ∫ V o l ∇ f d V o l = ∮ ∂ V o l f d Σ . {\displaystyle \int _{\mathrm {Vol} }\nabla f\,d_{\mathrm {Vol} }=\oint _{\partial \mathrm {Vol} }f\,d{\boldsymbol {\Sigma }}\,.}
Let f : Ω → R {\displaystyle f:\Omega \to \mathbb {R} } be a scalar field . Then ∫ Ω ∇ → f = ∫ ∂ Ω n → f {\displaystyle \int _{\Omega }{\vec {\nabla }}f=\int _{\partial \Omega }{\vec {n}}f} where n → {\displaystyle {\vec {n}}} is the normal vector to the surface ∂ Ω {\displaystyle \partial \Omega } at a given point.
Proof:
Let c → {\displaystyle {\vec {c}}} be a vector. Then 0 = ∫ Ω ∇ → ⋅ c → f − ∫ ∂ Ω n → ⋅ c → f by the divergence theorem = ∫ Ω c → ⋅ ∇ → f − ∫ ∂ Ω c → ⋅ n → f = c → ⋅ ∫ Ω ∇ → f − c → ⋅ ∫ ∂ Ω n → f = c → ⋅ ( ∫ Ω ∇ → f − ∫ ∂ Ω n → f ) {\displaystyle {\begin{aligned}0&=\int _{\Omega }{\vec {\nabla }}\cdot {\vec {c}}f-\int _{\partial \Omega }{\vec {n}}\cdot {\vec {c}}f&{\text{by the divergence theorem}}\\&=\int _{\Omega }{\vec {c}}\cdot {\vec {\nabla }}f-\int _{\partial \Omega }{\vec {c}}\cdot {\vec {n}}f\\&={\vec {c}}\cdot \int _{\Omega }{\vec {\nabla }}f-{\vec {c}}\cdot \int _{\partial \Omega }{\vec {n}}f\\&={\vec {c}}\cdot \left(\int _{\Omega }{\vec {\nabla }}f-\int _{\partial \Omega }{\vec {n}}f\right)\end{aligned}}} Since this holds for any c → {\displaystyle {\vec {c}}} (in particular, for every basis vector ), the result follows. | https://en.wikipedia.org/wiki/Generalized_Stokes_theorem |
Generalized Timing Formula is a standard by VESA which defines the exact parameters of the component video signal for the analogue VGA display interface.
The video parameters defined by the standard include horizontal blanking (retrace) and vertical blanking intervals , horizontal frequency and vertical frequency (collectively, pixel clock rate or video signal bandwidth ), and horizontal/vertical sync polarity. Unlike the predefined discrete modes of VESA DMT, any mode in a continuous range can be produced from the GTF formula.
A GTF-compliant display is expected to calculate the blanking intervals from the signal frequencies, producing a properly centered image. At the same time, a compliant graphics card is expected to use the calculation to produce a signal that will work on the display — either via the GTF default formula, suitable for then-ordinary CRT displays, or via a custom formula provided through Extended Display Identification Data (EDID) signaling.
These parameters are used by the XFree86 Modeline , for example.
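To give a flavor of the calculation, the following Python sketch derives the blanking, totals, and pixel clock for a requested mode from the commonly quoted GTF default constants (C′ = 30 % duty-cycle offset, M′ = 300 %/kHz gradient, 550 µs minimum vertical sync plus back porch, 8-pixel character cells). It is a simplified illustration, not the full procedure from the specification (which refines the horizontal-period estimate iteratively), and the function name and structure are our own:

# Simplified GTF-style timing sketch; constants and rounding are illustrative.
C_PRIME = 30.0          # blanking duty-cycle offset (%)
M_PRIME = 300.0         # blanking duty-cycle gradient (%/kHz)
MIN_VSYNC_BP = 550e-6   # minimum vertical sync + back porch (seconds)
MIN_PORCH_LINES = 1     # minimum vertical front porch (lines)
CELL = 8                # horizontal timings rounded to 8-pixel cells

def gtf_mode(h_pixels, v_lines, v_refresh_hz):
    # Estimate the horizontal period from the requested vertical refresh.
    h_period = (1.0 / v_refresh_hz - MIN_VSYNC_BP) / (v_lines + MIN_PORCH_LINES)
    v_sync_bp = round(MIN_VSYNC_BP / h_period)      # lines of vsync + back porch
    v_total = v_lines + v_sync_bp + MIN_PORCH_LINES
    h_freq_khz = 1.0 / (h_period * 1e3)

    # Ideal blanking duty cycle (%), then horizontal blanking in pixels,
    # rounded to the nearest double character cell.
    duty = C_PRIME - (M_PRIME / h_freq_khz)
    h_blank = round(h_pixels * duty / (100.0 - duty) / (2 * CELL)) * 2 * CELL
    h_total = h_pixels + h_blank

    pixel_clock_mhz = h_total * h_freq_khz / 1e3
    return h_total, v_total, h_freq_khz, pixel_clock_mhz

print(gtf_mode(1024, 768, 60.0))

For 1024×768 at 60 Hz this sketch yields a horizontal total of 1344 pixels, a vertical total of 795 lines, and a pixel clock of roughly 64 MHz, in line with commonly quoted GTF modelines.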
This video timing standard is available for free. [ 1 ]
The standard was adopted in 1999, and was superseded by the Coordinated Video Timings specification in 2002.
| https://en.wikipedia.org/wiki/Generalized_Timing_Formula
In mathematics , a generalized arithmetic progression (or multiple arithmetic progression ) is a generalization of an arithmetic progression equipped with multiple common differences – whereas an arithmetic progression is generated by a single common difference, a generalized arithmetic progression can be generated by multiple common differences. For example, the sequence 17 , 20 , 22 , 23 , 25 , 26 , 27 , 28 , 29 , … {\displaystyle 17,20,22,23,25,26,27,28,29,\dots } is not an arithmetic progression, but is instead generated by starting with 17 and adding either 3 or 5, thus allowing multiple common differences to generate it.
A semilinear set generalizes this idea to multiple dimensions – it is a set of vectors of integers, rather than a set of integers.
A finite generalized arithmetic progression , or sometimes just generalized arithmetic progression (GAP) , of dimension d is defined to be a set of the form { x 0 + ℓ 1 x 1 + ⋯ + ℓ d x d : 0 ≤ ℓ i < L i } {\displaystyle \{x_{0}+\ell _{1}x_{1}+\cdots +\ell _{d}x_{d}\,:\,0\leq \ell _{i}<L_{i}\}}
where x 0 , x 1 , … , x d , L 1 , … , L d ∈ Z {\displaystyle x_{0},x_{1},\dots ,x_{d},L_{1},\dots ,L_{d}\in \mathbb {Z} } . The product L 1 L 2 ⋯ L d {\displaystyle L_{1}L_{2}\cdots L_{d}} is called the size of the generalized arithmetic progression; the cardinality of the set can differ from the size if some elements of the set have multiple representations. If the cardinality equals the size, the progression is called proper . Generalized arithmetic progressions can be thought of as a projection of a higher dimensional grid into Z {\displaystyle \mathbb {Z} } . This projection is injective if and only if the generalized arithmetic progression is proper.
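As a concrete illustration, the following Python sketch (the start value 17, the differences 3 and 5, and the bounds are illustrative choices) enumerates a dimension-2 generalized arithmetic progression and compares its cardinality with its size to test whether it is proper:

from itertools import product

def gap(x0, diffs, lengths):
    # The set {x0 + sum(l_i * x_i) : 0 <= l_i < L_i} and its size L_1 * ... * L_d.
    size = 1
    for L in lengths:
        size *= L
    elements = {x0 + sum(l * x for l, x in zip(ls, diffs))
                for ls in product(*(range(L) for L in lengths))}
    return elements, size

elements, size = gap(17, diffs=(3, 5), lengths=(6, 4))
print(len(elements), size)   # 23 24: one collision (17 + 3*5 = 17 + 5*3), so not proper
print(sorted(elements))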
Formally, an arithmetic progression of N d {\displaystyle \mathbb {N} ^{d}} is an infinite sequence of the form v , v + v ′ , v + 2 v ′ , v + 3 v ′ , … {\displaystyle \mathbf {v} ,\mathbf {v} +\mathbf {v} ',\mathbf {v} +2\mathbf {v} ',\mathbf {v} +3\mathbf {v} ',\ldots } , where v {\displaystyle \mathbf {v} } and v ′ {\displaystyle \mathbf {v} '} are fixed vectors in N d {\displaystyle \mathbb {N} ^{d}} , called the initial vector and common difference respectively. A subset of N d {\displaystyle \mathbb {N} ^{d}} is said to be linear if it is of the form { v + a 1 v 1 + ⋯ + a m v m : a 1 , … , a m ∈ N } {\displaystyle \{\mathbf {v} +a_{1}\mathbf {v} _{1}+\cdots +a_{m}\mathbf {v} _{m}\,:\,a_{1},\dots ,a_{m}\in \mathbb {N} \}}
where m {\displaystyle m} is some integer and v , v 1 , … , v m {\displaystyle \mathbf {v} ,\mathbf {v} _{1},\dots ,\mathbf {v} _{m}} are fixed vectors in N d {\displaystyle \mathbb {N} ^{d}} . A subset of N d {\displaystyle \mathbb {N} ^{d}} is said to be semilinear if it is a finite union of linear sets.
The semilinear sets are exactly the sets definable in Presburger arithmetic . [ 1 ] | https://en.wikipedia.org/wiki/Generalized_arithmetic_progression |
In analytical mechanics , generalized coordinates are a set of parameters used to represent the state of a system in a configuration space . These parameters must uniquely define the configuration of the system relative to a reference state. [ 1 ] The generalized velocities are the time derivatives of the generalized coordinates of the system. The adjective "generalized" distinguishes these parameters from the traditional use of the term "coordinate" to refer to Cartesian coordinates .
An example of a generalized coordinate would be to describe the position of a pendulum using the angle of the pendulum relative to vertical, rather than by the x and y position of the pendulum.
Although there may be many possible choices for generalized coordinates for a physical system, they are generally selected to simplify calculations, such as the solution of the equations of motion for the system. If the coordinates are independent of one another, the number of independent generalized coordinates is defined by the number of degrees of freedom of the system. [ 2 ] [ 3 ]
Generalized coordinates are paired with generalized momenta to provide canonical coordinates on phase space .
Generalized coordinates are usually selected to provide the minimum number of independent coordinates that define the configuration of a system, which simplifies the formulation of Lagrange's equations of motion. However, it can also occur that a useful set of generalized coordinates may be dependent , which means that they are related by one or more constraint equations.
For a system of N particles in 3D real coordinate space , the position vector of each particle can be written as a 3- tuple in Cartesian coordinates : r k = ( x k , y k , z k ) . {\displaystyle \mathbf {r} _{k}=(x_{k},y_{k},z_{k}).}
Any of the position vectors can be denoted r k where k = 1, 2, …, N labels the particles. A holonomic constraint is a constraint equation of the form, for particle k , [ 4 ] [ a ] f ( r k , t ) = 0 {\displaystyle f(\mathbf {r} _{k},t)=0}
which connects all the 3 spatial coordinates of that particle together, so they are not independent. The constraint may change with time, so time t will appear explicitly in the constraint equations. At any instant of time, any one coordinate will be determined from the other coordinates, e.g. if x k and z k are given, then so is y k . One constraint equation counts as one constraint. If there are C constraints, each has an equation, so there will be C constraint equations. There is not necessarily one constraint equation for each particle, and if there are no constraints on the system then there are no constraint equations.
So far, the configuration of the system is defined by 3 N quantities, but C coordinates can be eliminated, one coordinate from each constraint equation. The number of independent coordinates is n = 3 N − C . (In D dimensions, the original configuration would need ND coordinates, and the reduction by constraints means n = ND − C ). It is ideal to use the minimum number of coordinates needed to define the configuration of the entire system, while taking advantage of the constraints on the system. These quantities are known as generalized coordinates in this context, denoted q j ( t ) . It is convenient to collect them into an n - tuple q ( t ) = ( q 1 ( t ) , q 2 ( t ) , … , q n ( t ) ) {\displaystyle \mathbf {q} (t)=(q_{1}(t),q_{2}(t),\ldots ,q_{n}(t))}
which is a point in the configuration space of the system. They are all independent of one another, and each is a function of time. Geometrically they can be lengths along straight lines, or arc lengths along curves, or angles; not necessarily Cartesian coordinates or other standard orthogonal coordinates . There is one for each degree of freedom , so the number of generalized coordinates equals the number of degrees of freedom, n . A degree of freedom corresponds to one quantity that changes the configuration of the system, for example the angle of a pendulum, or the arc length traversed by a bead along a wire.
If it is possible to find from the constraints as many independent variables as there are degrees of freedom, these can be used as generalized coordinates. [ 5 ] The position vector r k of particle k is a function of all the n generalized coordinates (and, through them, of time), [ 6 ] [ 7 ] [ 8 ] [ 5 ] [ nb 1 ] r k = r k ( q ( t ) , t ) {\displaystyle \mathbf {r} _{k}=\mathbf {r} _{k}(\mathbf {q} (t),t)}
and the generalized coordinates can be thought of as parameters associated with the constraint.
The corresponding time derivatives of q are the generalized velocities, q ˙ = ( q ˙ 1 ( t ) , q ˙ 2 ( t ) , … , q ˙ n ( t ) ) {\displaystyle {\dot {\mathbf {q} }}=({\dot {q}}_{1}(t),{\dot {q}}_{2}(t),\ldots ,{\dot {q}}_{n}(t))}
(each dot over a quantity indicates one time derivative ). The velocity vector v k is the total derivative of r k with respect to time v k = d r k d t = ∑ j = 1 n ∂ r k ∂ q j q ˙ j + ∂ r k ∂ t {\displaystyle \mathbf {v} _{k}={\frac {d\mathbf {r} _{k}}{dt}}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}{\dot {q}}_{j}+{\frac {\partial \mathbf {r} _{k}}{\partial t}}}
and so generally depends on the generalized velocities and coordinates. Since we are free to specify the initial values of the generalized coordinates and velocities separately, the generalized coordinates q j and velocities dq j / dt can be treated as independent variables .
A mechanical system can involve constraints on both the generalized coordinates and their derivatives. Constraints of this type are known as non-holonomic. First-order non-holonomic constraints have the form g ( q , q ˙ , t ) = 0. {\displaystyle g(\mathbf {q} ,{\dot {\mathbf {q} }},t)=0.}
An example of such a constraint is a rolling wheel or knife-edge that constrains the direction of the velocity vector. Non-holonomic constraints can also involve higher-order derivatives such as generalized accelerations.
The total kinetic energy of the system is the energy of the system's motion, defined as [ 9 ] T = 1 2 ∑ k = 1 N m k v k ⋅ v k {\displaystyle T={\frac {1}{2}}\sum _{k=1}^{N}m_{k}\mathbf {v} _{k}\cdot \mathbf {v} _{k}}
in which · is the dot product . The kinetic energy is a function only of the velocities v k , not the coordinates r k themselves. By contrast an important observation is [ 10 ] v k ⋅ v k = ( ∑ i = 1 n ∂ r k ∂ q i q ˙ i + ∂ r k ∂ t ) ⋅ ( ∑ j = 1 n ∂ r k ∂ q j q ˙ j + ∂ r k ∂ t ) {\displaystyle \mathbf {v} _{k}\cdot \mathbf {v} _{k}=\left(\sum _{i=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{i}}}{\dot {q}}_{i}+{\frac {\partial \mathbf {r} _{k}}{\partial t}}\right)\cdot \left(\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}{\dot {q}}_{j}+{\frac {\partial \mathbf {r} _{k}}{\partial t}}\right)}
which illustrates that the kinetic energy is in general a function of the generalized velocities, coordinates, and time if the constraints also vary with time, so T = T ( q , d q / dt , t ) .
In the case the constraints on the particles are time-independent, then all partial derivatives with respect to time are zero, and the kinetic energy is a homogeneous function of degree 2 in the generalized velocities.
Still for the time-independent case, this expression is equivalent to taking the line element squared of the trajectory for particle k , d s k 2 = d r k ⋅ d r k = ∑ i = 1 n ∑ j = 1 n ∂ r k ∂ q i ⋅ ∂ r k ∂ q j d q i d q j , {\displaystyle ds_{k}^{2}=d\mathbf {r} _{k}\cdot d\mathbf {r} _{k}=\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{i}}}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\,dq_{i}\,dq_{j},}
and dividing by the square differential in time, dt 2 , to obtain the velocity squared of particle k . Thus for time-independent constraints it is sufficient to know the line element to quickly obtain the kinetic energy of particles and hence the Lagrangian . [ 11 ]
It is instructive to see the various cases of polar coordinates in 2D and 3D, owing to their frequent appearance. In 2D polar coordinates ( r , θ ) , d s 2 = d r 2 + r 2 d θ 2 , {\displaystyle ds^{2}=dr^{2}+r^{2}d\theta ^{2},}
in 3D cylindrical coordinates ( r , θ , z ) , d s 2 = d r 2 + r 2 d θ 2 + d z 2 , {\displaystyle ds^{2}=dr^{2}+r^{2}d\theta ^{2}+dz^{2},}
in 3D spherical coordinates ( r , θ , φ ) , d s 2 = d r 2 + r 2 d θ 2 + r 2 sin 2 ⁡ θ d φ 2 . {\displaystyle ds^{2}=dr^{2}+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta \,d\varphi ^{2}.}
The generalized momentum " canonically conjugate to" the coordinate q i is defined by p i = ∂ L ∂ q ˙ i . {\displaystyle p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}.}
If the Lagrangian L does not depend on some coordinate q i , then it follows from the Euler–Lagrange equations that the corresponding generalized momentum will be a conserved quantity , because the time derivative is zero implying the momentum is a constant of the motion; d p i d t = ∂ L ∂ q i = 0. {\displaystyle {\frac {dp_{i}}{dt}}={\frac {\partial L}{\partial q_{i}}}=0.}
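This conservation property can be verified symbolically. Below is a short SymPy sketch (assuming SymPy is available; the central-potential Lagrangian in 2D polar coordinates is an illustrative choice of ours) showing that θ is cyclic, so its conjugate momentum is a constant of the motion:

import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)
V = sp.Function('V')                       # arbitrary central potential V(r)

# Lagrangian of a particle in a central potential, in polar coordinates.
L = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * theta.diff(t)**2) - V(r)

p_theta = sp.diff(L, theta.diff(t))        # momentum conjugate to theta
print(p_theta)                             # m*r(t)**2*Derivative(theta(t), t)

# L does not depend on theta itself, so the Euler-Lagrange equation
# d/dt(p_theta) = dL/dtheta forces p_theta to be conserved.
print(sp.diff(L, theta))                   # 0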
For a bead sliding on a frictionless wire subject only to gravity in 2d space, the constraint on the bead can be stated in the form f ( r ) = 0 , where the position of the bead can be written r = ( x ( s ), y ( s )) , in which s is a parameter, the arc length s along the curve from some point on the wire. This is a suitable choice of generalized coordinate for the system. Only one coordinate is needed instead of two, because the position of the bead can be parameterized by one number, s , and the constraint equation connects the two coordinates x and y ; either one is determined from the other. The constraint force is the reaction force the wire exerts on the bead to keep it on the wire, and the non-constraint applied force is gravity acting on the bead.
Suppose the wire changes its shape with time, by flexing. Then the constraint equation and position of the particle are respectively f ( r , t ) = 0 , r = ( x ( s , t ) , y ( s , t ) ) {\displaystyle f(\mathbf {r} ,t)=0,\quad \mathbf {r} =(x(s,t),y(s,t))}
which now both depend on time t due to the changing coordinates as the wire changes its shape. Notice time appears implicitly via the coordinates and explicitly in the constraint equations.
The relationship between the use of generalized coordinates and Cartesian coordinates to characterize the movement of a mechanical system can be illustrated by considering the constrained dynamics of a simple pendulum. [ 12 ] [ 13 ]
A simple pendulum consists of a mass M hanging from a pivot point so that it is constrained to move on a circle of radius L . The position of the mass is defined by the coordinate vector r = ( x , y ) measured in the plane of the circle such that y is in the vertical direction. The coordinates x and y are related by the equation of the circle x 2 + y 2 = L 2 {\displaystyle x^{2}+y^{2}=L^{2}}
that constrains the movement of M . This equation also provides a constraint on the velocity components, x x ˙ + y y ˙ = 0. {\displaystyle x{\dot {x}}+y{\dot {y}}=0.}
Now introduce the parameter θ , that defines the angular position of M from the vertical direction. It can be used to define the coordinates x and y , such that x = L sin ⁡ θ , y = − L cos ⁡ θ . {\displaystyle x=L\sin \theta ,\quad y=-L\cos \theta .}
The use of θ to define the configuration of this system avoids the constraint provided by the equation of the circle.
Notice that the force of gravity acting on the mass M is formulated in the usual Cartesian coordinates, F = ( 0 , − M g ) , {\displaystyle \mathbf {F} =(0,-Mg),}
where g is the acceleration due to gravity .
The virtual work of gravity on the mass M as it follows the trajectory r is given by δ W = F ⋅ δ r . {\displaystyle \delta W=\mathbf {F} \cdot \delta \mathbf {r} .}
The variation δ r can be computed in terms of the coordinates x and y , or in terms of the parameter θ , δ r = ( δ x , δ y ) = ( L cos ⁡ θ , L sin ⁡ θ ) δ θ . {\displaystyle \delta \mathbf {r} =(\delta x,\delta y)=(L\cos \theta ,L\sin \theta )\,\delta \theta .}
Thus, the virtual work is given by δ W = − M g δ y = − M g L sin ⁡ θ δ θ . {\displaystyle \delta W=-Mg\,\delta y=-MgL\sin \theta \,\delta \theta .}
Notice that the coefficient of δ y is the y -component of the applied force. In the same way, the coefficient of δ θ is known as the generalized force along generalized coordinate θ , given by F θ = − M g L sin ⁡ θ . {\displaystyle F_{\theta }=-MgL\sin \theta .}
To complete the analysis consider the kinetic energy T of the mass, using the velocity, v = ( L cos ⁡ θ , L sin ⁡ θ ) θ ˙ , {\displaystyle \mathbf {v} =(L\cos \theta ,L\sin \theta )\,{\dot {\theta }},}
so, T = 1 2 M v ⋅ v = 1 2 M L 2 θ ˙ 2 . {\displaystyle T={\frac {1}{2}}M\mathbf {v} \cdot \mathbf {v} ={\frac {1}{2}}ML^{2}{\dot {\theta }}^{2}.}
D'Alembert's form of the principle of virtual work for the pendulum in terms of the coordinates x and y is given by,
This yields the three equations
in the three unknowns, x , y and λ .
Using the parameter θ , those equations take the form ( − M g L sin ⁡ θ − M L 2 θ ¨ ) δ θ = 0 , {\displaystyle (-MgL\sin \theta -ML^{2}{\ddot {\theta }})\,\delta \theta =0,}
which becomes, − M g L sin ⁡ θ − M L 2 θ ¨ = 0 , {\displaystyle -MgL\sin \theta -ML^{2}{\ddot {\theta }}=0,}
or θ ¨ + g L sin ⁡ θ = 0. {\displaystyle {\ddot {\theta }}+{\frac {g}{L}}\sin \theta =0.}
This formulation yields one equation because there is a single parameter and no constraint equation.
This shows that the parameter θ is a generalized coordinate that can be used in the same way as the Cartesian coordinates x and y to analyze the pendulum.
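The same single-coordinate analysis can be carried out mechanically. The following SymPy sketch (names are illustrative) expresses x and y through θ and derives the one equation of motion from the Euler–Lagrange equation, with no constraint equation or multiplier appearing:

import sympy as sp

t = sp.symbols('t')
M, g, l = sp.symbols('M g L', positive=True)
theta = sp.Function('theta')(t)

x = l * sp.sin(theta)              # Cartesian coordinates through theta
y = -l * sp.cos(theta)

T = sp.Rational(1, 2) * M * (x.diff(t)**2 + y.diff(t)**2)   # kinetic energy
V = M * g * y                                               # potential energy
L = sp.simplify(T - V)                                      # Lagrangian

# Euler-Lagrange equation in the single generalized coordinate theta.
eom = sp.diff(sp.diff(L, theta.diff(t)), t) - sp.diff(L, theta)
print(sp.simplify(eom))   # M*L**2*theta'' + M*g*L*sin(theta), up to ordering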
The benefits of generalized coordinates become apparent with the analysis of a double pendulum .
For the two masses m i ( i = 1, 2) , let r i = ( x i , y i ), i = 1, 2 define their two trajectories. These vectors satisfy the two constraint equations, x 1 2 + y 1 2 = L 1 2 {\displaystyle x_{1}^{2}+y_{1}^{2}=L_{1}^{2}}
and ( x 2 − x 1 ) 2 + ( y 2 − y 1 ) 2 = L 2 2 . {\displaystyle (x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}=L_{2}^{2}.}
The formulation of Lagrange's equations for this system yields six equations in the four Cartesian coordinates x i , y i ( i = 1, 2) and the two Lagrange multipliers λ i ( i = 1, 2) that arise from the two constraint equations.
Now introduce the generalized coordinates θ i ( i = 1, 2) that define the angular position of each mass of the double pendulum from the vertical direction. In this case, we have x 1 = L 1 sin ⁡ θ 1 , y 1 = − L 1 cos ⁡ θ 1 , x 2 = L 1 sin ⁡ θ 1 + L 2 sin ⁡ θ 2 , y 2 = − L 1 cos ⁡ θ 1 − L 2 cos ⁡ θ 2 . {\displaystyle x_{1}=L_{1}\sin \theta _{1},\;y_{1}=-L_{1}\cos \theta _{1},\;x_{2}=L_{1}\sin \theta _{1}+L_{2}\sin \theta _{2},\;y_{2}=-L_{1}\cos \theta _{1}-L_{2}\cos \theta _{2}.}
The force of gravity acting on the masses is given by, F 1 = ( 0 , − m 1 g ) , F 2 = ( 0 , − m 2 g ) , {\displaystyle \mathbf {F} _{1}=(0,-m_{1}g),\quad \mathbf {F} _{2}=(0,-m_{2}g),}
where g is the acceleration due to gravity. Therefore, the virtual work of gravity on the two masses as they follow the trajectories r i ( i = 1, 2) is given by δ W = − m 1 g δ y 1 − m 2 g δ y 2 . {\displaystyle \delta W=-m_{1}g\,\delta y_{1}-m_{2}g\,\delta y_{2}.}
The variations δ r i ( i = 1, 2) can be computed to be
Thus, the virtual work is given by
and the generalized forces are
Compute the kinetic energy of this system to be
Euler–Lagrange equation yield two equations in the unknown generalized coordinates θ i ( i = 1, 2) given by [ 14 ]
and
The use of the generalized coordinates θ i ( i = 1, 2) provides an alternative to the Cartesian formulation of the dynamics of the double pendulum.
For a 3D example, a spherical pendulum with constant length l free to swing in any angular direction subject to gravity, the constraint on the pendulum bob can be stated in the form f ( r ) = x 2 + y 2 + z 2 − l 2 = 0 , {\displaystyle f(\mathbf {r} )=x^{2}+y^{2}+z^{2}-l^{2}=0,}
where the position of the pendulum bob can be written
in which ( θ , φ ) are the spherical polar angles because the bob moves in the surface of a sphere. The position r is measured along the suspension point to the bob, here treated as a point particle . A logical choice of generalized coordinates to describe the motion are the angles ( θ , φ ) . Only two coordinates are needed instead of three, because the position of the bob can be parameterized by two numbers, and the constraint equation connects the three coordinates ( x , y , z ) so any one of them is determined from the other two.
The principle of virtual work states that if a system is in static equilibrium, the virtual work of the applied forces is zero for all virtual movements of the system from this state, that is, δ W = 0 for any variation δ r . [ 15 ] When formulated in terms of generalized coordinates, this is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is F i = 0 .
Let the forces F j ( j = 1, 2, …, m ) be applied to points with Cartesian coordinates r j ( j = 1, 2, …, m ) ; then the virtual work generated by a virtual displacement from the equilibrium position is given by δ W = ∑ j = 1 m F j ⋅ δ r j , {\displaystyle \delta W=\sum _{j=1}^{m}\mathbf {F} _{j}\cdot \delta \mathbf {r} _{j},}
where δ r j ( j = 1, 2, …, m ) denote the virtual displacements of each point in the body.
Now assume that each δ r j depends on the generalized coordinates q i ( i = 1, 2, …, n ) , then δ r j = ∂ r j ∂ q 1 δ q 1 + ⋯ + ∂ r j ∂ q n δ q n , {\displaystyle \delta \mathbf {r} _{j}={\frac {\partial \mathbf {r} _{j}}{\partial q_{1}}}\delta q_{1}+\cdots +{\frac {\partial \mathbf {r} _{j}}{\partial q_{n}}}\delta q_{n},}
and δ W = ∑ i = 1 n ( ∑ j = 1 m F j ⋅ ∂ r j ∂ q i ) δ q i . {\displaystyle \delta W=\sum _{i=1}^{n}\left(\sum _{j=1}^{m}\mathbf {F} _{j}\cdot {\frac {\partial \mathbf {r} _{j}}{\partial q_{i}}}\right)\delta q_{i}.}
The n terms F i = ∑ j = 1 m F j ⋅ ∂ r j ∂ q i , i = 1 , 2 , … , n , {\displaystyle F_{i}=\sum _{j=1}^{m}\mathbf {F} _{j}\cdot {\frac {\partial \mathbf {r} _{j}}{\partial q_{i}}},\quad i=1,2,\ldots ,n,}
are the generalized forces acting on the system. Kane [ 16 ] shows that these generalized forces can also be formulated in terms of the ratio of time derivatives, F i = ∑ j = 1 m F j ⋅ ∂ v j ∂ q ˙ i , {\displaystyle F_{i}=\sum _{j=1}^{m}\mathbf {F} _{j}\cdot {\frac {\partial \mathbf {v} _{j}}{\partial {\dot {q}}_{i}}},}
where v j is the velocity of the point of application of the force F j .
where v j is the velocity of the point of application of the force F j .
In order for the virtual work to be zero for an arbitrary virtual displacement, each of the generalized forces must be zero, that is F i = 0 , i = 1 , 2 , … , n . {\displaystyle F_{i}=0,\quad i=1,2,\ldots ,n.} | https://en.wikipedia.org/wiki/Generalized_coordinates
The generalized distributive law (GDL) is a generalization of the distributive property which gives rise to a general message passing algorithm. [ 1 ] It is a synthesis of the work of many authors in the information theory , digital communications , signal processing , statistics , and artificial intelligence communities. The law and algorithm were introduced in a semi-tutorial by Srinivas M. Aji and Robert J. McEliece with the same title. [ 1 ]
"The distributive law in mathematics is the law relating the operations of multiplication and addition, stated symbolically, a ∗ ( b + c ) = a ∗ b + a ∗ c {\displaystyle a*(b+c)=a*b+a*c} ; that is, the monomial factor a {\displaystyle a} is distributed, or separately applied, to each term of the binomial factor b + c {\displaystyle b+c} , resulting in the product a ∗ b + a ∗ c {\displaystyle a*b+a*c} " - Britannica. [ 2 ]
As can be observed from the definition, applying the distributive law to an arithmetic expression reduces the number of operations in it. In the previous example the total number of operations was reduced from three (two multiplications and one addition in a ∗ b + a ∗ c {\displaystyle a*b+a*c} ) to two (one multiplication and one addition in a ∗ ( b + c ) {\displaystyle a*(b+c)} ). Generalization of the distributive law leads to a large family of fast algorithms , including the fast Fourier transform (FFT) and the Viterbi algorithm .
This is explained in a more formal way in the example below:
α ( a , b ) = d e f ∑ c , d , e ∈ A f ( a , c , b ) g ( a , d , e ) {\displaystyle \alpha (a,\,b){\stackrel {\mathrm {def} }{=}}\displaystyle \sum \limits _{c,d,e\in A}f(a,\,c,\,b)\,g(a,\,d,\,e)} where f ( ⋅ ) {\displaystyle f(\cdot )} and g ( ⋅ ) {\displaystyle g(\cdot )} are real-valued functions, a , b , c , d , e ∈ A {\displaystyle a,b,c,d,e\in A} and | A | = q {\displaystyle |A|=q} (say)
Here we are "marginalizing out" the variables c {\displaystyle c} , d {\displaystyle d} , and e {\displaystyle e} to obtain the result. When calculating the computational complexity, we can see that for each of the q 2 {\displaystyle q^{2}} pairs ( a , b ) {\displaystyle (a,b)} , there are q 3 {\displaystyle q^{3}} terms due to the triplet ( c , d , e ) {\displaystyle (c,d,e)} that take part in the evaluation of α ( a , b ) {\displaystyle \alpha (a,\,b)} , with each step having one addition and one multiplication. Therefore, the total number of computations needed is 2 ⋅ q 2 ⋅ q 3 = 2 q 5 {\displaystyle 2\cdot q^{2}\cdot q^{3}=2q^{5}} . Hence the asymptotic complexity of the above function is O ( q 5 ) {\displaystyle O(q^{5})} .
If we apply the distributive law to the RHS of the equation, we get the following:
This implies that α ( a , b ) {\displaystyle \alpha (a,\,b)} can be described as a product α 1 ( a , b ) ⋅ α 2 ( a ) {\displaystyle \alpha _{1}(a,\,b)\cdot \alpha _{2}(a)} where α 1 ( a , b ) = d e f ∑ c ∈ A f ( a , c , b ) {\displaystyle \alpha _{1}(a,b){\stackrel {\mathrm {def} }{=}}\displaystyle \sum \limits _{c\in A}f(a,\,c,\,b)} and α 2 ( a ) = d e f ∑ d , e ∈ A g ( a , d , e ) {\displaystyle \alpha _{2}(a){\stackrel {\mathrm {def} }{=}}\displaystyle \sum \limits _{d,\,e\in A}g(a,\,d,\,e)}
Now, when we are calculating the computational complexity, we can see that there are q 3 {\displaystyle q^{3}} additions in α 1 ( a , b ) {\displaystyle \alpha _{1}(a,\,b)} and α 2 ( a ) {\displaystyle \alpha _{2}(a)} each, and there are q 2 {\displaystyle q^{2}} multiplications when we are using the product α 1 ( a , b ) ⋅ α 2 ( a ) {\displaystyle \alpha _{1}(a,\,b)\cdot \alpha _{2}(a)} to evaluate α ( a , b ) {\displaystyle \alpha (a,\,b)} . Therefore, the total number of computations needed is q 3 + q 3 + q 2 = 2 q 3 + q 2 {\displaystyle q^{3}+q^{3}+q^{2}=2q^{3}+q^{2}} . Hence the asymptotic complexity of calculating α ( a , b ) {\displaystyle \alpha (a,b)} reduces to O ( q 3 ) {\displaystyle O(q^{3})} from O ( q 5 ) {\displaystyle O(q^{5})} . This shows by example that applying the distributive law reduces the computational complexity, which is the hallmark of a "fast algorithm".
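The operation counts above can be reproduced directly. The following Python sketch (the random tables and all names are illustrative) evaluates α(a, b) both ways over a small alphabet and tallies the additions and multiplications:

import itertools, random

q = 4
A = range(q)
f = {k: random.random() for k in itertools.product(A, A, A)}  # f(a, c, b)
g = {k: random.random() for k in itertools.product(A, A, A)}  # g(a, d, e)

# Direct evaluation: for each (a, b), sum over (c, d, e) with one
# multiplication and one addition per term -> 2 * q^2 * q^3 operations.
ops_direct = 0
alpha = {}
for a, b in itertools.product(A, A):
    s = 0.0
    for c, d, e in itertools.product(A, A, A):
        s += f[a, c, b] * g[a, d, e]
        ops_direct += 2
    alpha[a, b] = s

# Factored evaluation: alpha1(a, b) = sum_c f(a, c, b) and
# alpha2(a) = sum_{d,e} g(a, d, e), then one multiplication per (a, b).
ops_fact = 0
alpha1 = {}
for a, b in itertools.product(A, A):
    alpha1[a, b] = 0.0
    for c in A:
        alpha1[a, b] += f[a, c, b]
        ops_fact += 1
alpha2 = {}
for a in A:
    alpha2[a] = 0.0
    for d, e in itertools.product(A, A):
        alpha2[a] += g[a, d, e]
        ops_fact += 1
for a, b in itertools.product(A, A):
    ops_fact += 1   # one multiplication
    assert abs(alpha[a, b] - alpha1[a, b] * alpha2[a]) < 1e-9

print(ops_direct, ops_fact)   # 2*q^5 = 2048 versus 2*q^3 + q^2 = 144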
Some of the problems that can be solved using the distributive law can be grouped as follows:
MPF or marginalize a product function is a general computational problem which as special case includes many classical problems such as computation of discrete Hadamard transform , maximum likelihood decoding of a linear code over a memory-less channel , and matrix chain multiplication . The power of the GDL lies in the fact that it applies to situations in which additions and multiplications are generalized.
A commutative semiring is a good framework for explaining this behavior. It is defined over a set K {\displaystyle K} with operators " + {\displaystyle +} " and " . {\displaystyle .} " where ( K , + ) {\displaystyle (K,\,+)} and ( K , . ) {\displaystyle (K,\,.)} are commutative monoids and the distributive law holds.
Let p 1 , … , p n {\displaystyle p_{1},\ldots ,p_{n}} be variables such that p 1 ∈ A 1 , … , p n ∈ A n {\displaystyle p_{1}\in A_{1},\ldots ,p_{n}\in A_{n}} , where each A i {\displaystyle A_{i}} is a finite set with | A i | = q i {\displaystyle |A_{i}|=q_{i}} , for i = 1 , … , n {\displaystyle i=1,\ldots ,n} . If S = { i 1 , … , i r } {\displaystyle S=\{i_{1},\ldots ,i_{r}\}} and S ⊂ { 1 , … , n } {\displaystyle S\,\subset \{1,\ldots ,n\}} , let A S = A i 1 × ⋯ × A i r {\displaystyle A_{S}=A_{i_{1}}\times \cdots \times A_{i_{r}}} , p S = ( p i 1 , … , p i r ) {\displaystyle p_{S}=(p_{i_{1}},\ldots ,p_{i_{r}})} , q S = | A S | {\displaystyle q_{S}=|A_{S}|} , A = A 1 × ⋯ × A n {\displaystyle \mathbf {A} =A_{1}\times \cdots \times A_{n}} , and p = { p 1 , … , p n } {\displaystyle \mathbf {p} =\{p_{1},\ldots ,p_{n}\}}
Let S = { S j } j = 1 M {\displaystyle S=\{S_{j}\}_{j=1}^{M}} where S j ⊂ { 1 , . . . , n } {\displaystyle S_{j}\subset \{1,...\,,n\}} . Suppose a function is defined as α i : A S i → R {\displaystyle \alpha _{i}:A_{S_{i}}\rightarrow R} , where R {\displaystyle R} is a commutative semiring . Also, the p S i {\displaystyle p_{S_{i}}} are called the local domains and the α i {\displaystyle \alpha _{i}} the local kernels .
Now the global kernel β : A → R {\displaystyle \beta :\mathbf {A} \rightarrow R} is defined as: β ( p 1 , . . . , p n ) = ∏ i = 1 M α i ( p S i ) {\displaystyle \beta (p_{1},...\,,p_{n})=\prod _{i=1}^{M}\alpha _{i}(p_{S_{i}})}
Definition of MPF problem : For one or more indices i = 1 , . . . , M {\displaystyle i=1,...\,,M} , compute a table of the values of S i {\displaystyle S_{i}} - marginalization of the global kernel β {\displaystyle \beta } , which is the function β i : A S i → R {\displaystyle \beta _{i}:A_{S_{i}}\rightarrow R} defined as β i ( p S i ) = ∑ p S i c ∈ A S i c β ( p ) {\displaystyle \beta _{i}(p_{S_{i}})\,=\displaystyle \sum \limits _{p_{S_{i}^{c}}\in A_{S_{i}^{c}}}\beta (p)}
Here S i c {\displaystyle S_{i}^{c}} is the complement of S i {\displaystyle S_{i}} with respect to { 1 , . . . , n } {\displaystyle \mathbf {\{} 1,...\,,n\}} and the β i ( p S i ) {\displaystyle \beta _{i}(p_{S_{i}})} is called the i t h {\displaystyle i^{th}} objective function , or the objective function at S i {\displaystyle S_{i}} . It can be observed that the computation of the i t h {\displaystyle i^{th}} objective function in the obvious way needs M q 1 q 2 q 3 ⋯ q n {\displaystyle Mq_{1}q_{2}q_{3}\cdots q_{n}} operations. This is because there are q 1 q 2 ⋯ q n {\displaystyle q_{1}q_{2}\cdots q_{n}} additions and ( M − 1 ) q 1 q 2 . . . q n {\displaystyle (M-1)q_{1}q_{2}...q_{n}} multiplications needed in the computation of the i th {\displaystyle i^{\text{th}}} objective function. The GDL algorithm, which is explained in the next section, can reduce this computational complexity.
The following is an example of the MPF problem.
Let p 1 , p 2 , p 3 , p 4 , {\displaystyle p_{1},\,p_{2},\,p_{3},\,p_{4},} and p 5 {\displaystyle p_{5}} be variables such that p 1 ∈ A 1 , p 2 ∈ A 2 , p 3 ∈ A 3 , p 4 ∈ A 4 , {\displaystyle p_{1}\in A_{1},p_{2}\in A_{2},p_{3}\in A_{3},p_{4}\in A_{4},} and p 5 ∈ A 5 {\displaystyle p_{5}\in A_{5}} . Here M = 4 {\displaystyle M=4} and S = { { 1 , 2 , 5 } , { 3 , 4 } , { 1 , 4 } , { 2 } } {\displaystyle S=\{\{1,2,5\},\{3,4\},\{1,4\},\{2\}\}} . The given functions using these variables are f ( p 1 , p 2 , p 5 ) {\displaystyle f(p_{1},p_{2},p_{5})} and g ( p 3 , p 4 ) {\displaystyle g(p_{3},p_{4})} and we need to calculate α ( p 1 , p 4 ) {\displaystyle \alpha (p_{1},\,p_{4})} and β ( p 2 ) {\displaystyle \beta (p_{2})} defined as: α ( p 1 , p 4 ) = ∑ p 2 , p 3 , p 5 f ( p 1 , p 2 , p 5 ) g ( p 3 , p 4 ) , β ( p 2 ) = ∑ p 1 , p 3 , p 4 , p 5 f ( p 1 , p 2 , p 5 ) g ( p 3 , p 4 ) {\displaystyle \alpha (p_{1},p_{4})=\sum _{p_{2},p_{3},p_{5}}f(p_{1},p_{2},p_{5})\,g(p_{3},p_{4}),\qquad \beta (p_{2})=\sum _{p_{1},p_{3},p_{4},p_{5}}f(p_{1},p_{2},p_{5})\,g(p_{3},p_{4})}
Here local domains and local kernels are defined as follows:
where α ( p 1 , p 4 ) {\displaystyle \alpha (p_{1},p_{4})} is the 3 r d {\displaystyle 3^{rd}} objective function and β ( p 2 ) {\displaystyle \beta (p_{2})} is the 4 t h {\displaystyle 4^{th}} objective function.
Consider another example where p 1 , p 2 , p 3 , p 4 , r 1 , r 2 , r 3 , r 4 ∈ { 0 , 1 } {\displaystyle p_{1},p_{2},p_{3},p_{4},r_{1},r_{2},r_{3},r_{4}\in \{0,1\}} and f ( r 1 , r 2 , r 3 , r 4 ) {\displaystyle f(r_{1},r_{2},r_{3},r_{4})} is a real valued function. Now, we shall consider the MPF problem where the commutative semiring is defined as the set of real numbers with ordinary addition and multiplication and the local domains and local kernels are defined as follows:
Now since the global kernel is defined as the product of the local kernels, it is
and the objective function at the local domain p 1 , p 2 , p 3 , p 4 {\displaystyle p_{1},p_{2},p_{3},p_{4}} is
This is the Hadamard transform of the function f ( ⋅ ) {\displaystyle f(\cdot )} . Hence we can see that the computation of the Hadamard transform is a special case of the MPF problem. More examples can be given to show that many classical problems are special cases of the MPF problem; details can be found in [ 1 ]
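To make this special case concrete, the sketch below (assuming NumPy; all names are ours) computes the Hadamard transform of a random function both by the naive marginalization, which costs on the order of 4^n operations, and by factoring out one binary variable at a time, which is the fast Walsh–Hadamard transform with on the order of n·2^n operations:

import numpy as np

n = 4
N = 2**n
rng = np.random.default_rng(0)
f = rng.standard_normal(N)

# Naive marginalization: F(p) = sum_r (-1)^(p . r) f(r).
def popcount(x):
    return bin(x).count("1")

F_naive = np.array([sum((-1.0)**popcount(p & r) * f[r] for r in range(N))
                    for p in range(N)])

# Fast version: apply the 2-point kernel to one bit position at a time.
F_fast = f.copy()
step = 1
while step < N:
    for i in range(0, N, 2 * step):
        for j in range(i, i + step):
            a, b = F_fast[j], F_fast[j + step]
            F_fast[j], F_fast[j + step] = a + b, a - b
    step *= 2

print(np.allclose(F_naive, F_fast))   # True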
If one can find a relationship among the elements of a given set S {\displaystyle S} , then one can solve the MPF problem based on the notion of belief propagation , which is a special use of the "message passing" technique. The required relationship is that the given set of local domains can be organised into a junction tree . In other words, we create a graph-theoretic tree T {\displaystyle T} with the elements of S {\displaystyle S} as its vertices, such that for any two vertices v i {\displaystyle v_{i}} and v j {\displaystyle v_{j}} with i ≠ j {\displaystyle i\neq j} , the intersection of the corresponding labels, S i ∩ S j {\displaystyle S_{i}\cap S_{j}} , is a subset of the label on every vertex on the unique path from v i {\displaystyle v_{i}} to v j {\displaystyle v_{j}} .
For example,
Example 1: Consider the following nine local domains:
For the above given set of local domains, one can organize them into a junction tree as shown below:
Example 2: Consider the following four local domains:
Then constructing a junction tree with only these local domains is not possible, since this set has no common domains that can be placed between any two values of the above set. However, if we add the two dummy domains shown below, then organizing the updated set into a junction tree becomes possible.
5. { p 1 , p 2 , p 4 } {\displaystyle \{p_{1},p_{2},p_{4}\}}
6. { p 2 , p 3 , p 4 } {\displaystyle \{p_{2},p_{3},p_{4}\}}
For this updated set of domains, the junction tree looks as shown below:
Input: A set of local domains. Output: The minimum number of operations required to solve the problem for the given set of domains. If v i {\displaystyle v_{i}} and v j {\displaystyle v_{j}} are connected by an edge in the junction tree, then a message from v i {\displaystyle v_{i}} to v j {\displaystyle v_{j}} is a set/table of values given by a function μ i , j {\displaystyle \mu _{i,j}} : A S i ∩ S j → R {\displaystyle A_{S_{i}\cap S_{j}}\rightarrow R} . To begin with, for all combinations of i {\displaystyle i} and j {\displaystyle j} in the given tree, μ i , j {\displaystyle \mu _{i,j}} is defined to be identically 1 {\displaystyle 1} ; when a particular message is updated, it follows the equation given below.
where v k adj v i {\displaystyle v_{k}\operatorname {adj} v_{i}} means that v k {\displaystyle v_{k}} is an adjacent vertex to v i {\displaystyle v_{i}} in tree.
Similarly, each vertex has a state, defined as a table containing the values of a function σ i : A S i → R {\displaystyle \sigma _{i}:A_{S_{i}}\rightarrow R} . Just as the messages are initialized identically to 1, the state of v i {\displaystyle v_{i}} is initialized to the local kernel α ( p S i ) {\displaystyle \alpha (p_{S_{i}})} ; whenever σ i {\displaystyle \sigma _{i}} is updated, it follows the following equation:
For the given set of local domains as input, we first determine whether a junction tree can be created, either by using the set directly or by adding dummy domains to the set and then creating the junction tree. If constructing a junction tree is not possible, the algorithm reports that there is no way to reduce the number of steps needed to compute the given problem. Once we have a junction tree, the algorithm schedules messages and computes states; by doing this, we can determine where the number of steps can be reduced. This is discussed below.
There are two special cases discussed here, namely the single-vertex problem , in which the objective function is computed at only one vertex v 0 {\displaystyle v_{0}} , and the all-vertices problem , where the goal is to compute the objective function at all vertices.
Let us begin with the single-vertex problem . GDL starts by directing each edge towards the targeted vertex v 0 {\displaystyle v_{0}} , and messages are sent only in the direction of the targeted vertex. Note that all the directed messages are sent only once. The messages start from the leaf nodes (where the degree is 1) and travel up towards the target vertex v 0 {\displaystyle v_{0}} : from the leaves to their parents, from there to their parents, and so on until the target vertex v 0 {\displaystyle v_{0}} is reached. The target vertex v 0 {\displaystyle v_{0}} computes its state only when it has received all messages from all its neighbors. Once we have the state, we have the answer, and the algorithm terminates.
For example, consider a junction tree constructed from the set of local domains given above (the set from Example 1). The scheduling table for these domains, the target vertex being the one with label { p 2 } {\displaystyle \{p_{2}\}} , is:
Round — Message or State Computation
1. μ 8 , 4 ( p 4 ) = α 8 ( p 4 ) {\displaystyle \mu _{8,4}(p_{4})=\alpha _{8}(p_{4})}
2. μ 9 , 4 ( p 4 ) = Σ p 2 α 9 ( p 2 , p 4 ) {\displaystyle \mu _{9,4}(p_{4})=\Sigma _{p_{2}}\alpha _{9}(p_{2},p_{4})}
3. μ 5 , 2 ( p 3 ) = α 5 ( p 3 ) {\displaystyle \mu _{5,2}(p_{3})=\alpha _{5}(p_{3})}
4. μ 6 , 3 ( p 1 ) = Σ p 4 α 6 ( p 1 , p 4 ) {\displaystyle \mu _{6,3}(p_{1})=\Sigma _{p_{4}}\alpha _{6}(p_{1},p_{4})}
5. μ 7 , 3 ( p 1 ) = α 7 ( p 1 ) {\displaystyle \mu _{7,3}(p_{1})=\alpha _{7}(p_{1})}
6. μ 4 , 2 ( p 3 ) = Σ p 4 α 4 ( p 3 , p 4 ) ⋅ μ 8 , 4 ( p 4 ) ⋅ μ 9 , 4 ( p 4 ) {\displaystyle \mu _{4,2}(p_{3})=\Sigma _{p_{4}}\alpha _{4}(p_{3},p_{4})\cdot \mu _{8,4}(p_{4})\cdot \mu _{9,4}(p_{4})}
7. μ 3 , 1 ( p 2 ) = Σ p 1 α 3 ( p 2 , p 1 ) ⋅ μ 6 , 3 ( p 1 ) ⋅ μ 7 , 3 ( p 1 ) {\displaystyle \mu _{3,1}(p_{2})=\Sigma _{p_{1}}\alpha _{3}(p_{2},p_{1})\cdot \mu _{6,3}(p_{1})\cdot \mu _{7,3}(p_{1})}
8. μ 2 , 1 ( p 2 ) = Σ p 3 α 2 ( p 3 , p 2 ) ⋅ μ 4 , 2 ( p 3 ) ⋅ μ 5 , 2 ( p 3 ) {\displaystyle \mu _{2,1}(p_{2})=\Sigma _{p_{3}}\alpha _{2}(p_{3},p_{2})\cdot \mu _{4,2}(p_{3})\cdot \mu _{5,2}(p_{3})}
9. σ 1 ( p 2 ) = α 1 ( p 2 ) ⋅ μ 2 , 1 ( p 2 ) ⋅ μ 3 , 1 ( p 2 ) {\displaystyle \sigma _{1}(p_{2})=\alpha _{1}(p_{2})\cdot \mu _{2,1}(p_{2})\cdot \mu _{3,1}(p_{2})}
Thus the complexity for single-vertex GDL can be shown to be Σ v d ( v ) | A S ( v ) | {\displaystyle \Sigma _{v}d(v)|A_{S_{(v)}}|} arithmetic operations, where S ( v ) {\displaystyle S(v)} is the label of v {\displaystyle v} and d ( v ) {\displaystyle d(v)} is the degree of v {\displaystyle v} (i.e. the number of vertices adjacent to v ). The explanation for this expression is given later in the article.
To solve the all-vertices problem, we can schedule GDL in several ways. One of them is the parallel implementation, where in each round every state is updated and every message is computed and transmitted at the same time. In this type of implementation, the states and messages stabilize after a number of rounds that is at most equal to the diameter of the tree. At that point, the states of all the vertices are equal to the desired objective functions.
Another way to schedule GDL for this problem is the serial implementation, which is similar to the single-vertex problem except that the algorithm does not stop until all the vertices of the required set have received all the messages from all their neighbors and have computed their states. Thus the number of arithmetic operations this implementation requires is at most Σ v ∈ V d ( v ) | A S ( v ) | {\displaystyle \Sigma _{v\in V}d(v)|A_{S_{(v)}}|} arithmetic operations.
The key to constructing a junction tree lies in the local domain graph G L D {\displaystyle G_{LD}} , which is a weighted complete graph with M {\displaystyle M} vertices v 1 , v 2 , v 3 , … , v M {\displaystyle v_{1},v_{2},v_{3},\ldots ,v_{M}} , i.e. one for each local domain, with the weight of the edge e i , j : v i ↔ v j {\displaystyle e_{i,j}:v_{i}\leftrightarrow v_{j}} defined by ω i , j = | S i ∩ S j | {\displaystyle \omega _{i,j}=|S_{i}\cap S_{j}|} . If x k ∈ S i ∩ S j {\displaystyle x_{k}\in S_{i}\cap S_{j}} , then we say x k {\displaystyle x_{k}} is contained in e i , j {\displaystyle e_{i,j}} . The weight of a maximal-weight spanning tree of G L D {\displaystyle G_{LD}} is denoted by ω m a x {\displaystyle \omega _{max}} , which is defined by
where n is the number of elements in that set. For more clarity and details, please refer to these. [ 3 ] [ 4 ]
Let T {\displaystyle T} be a junction tree with vertex set V {\displaystyle V} and edge set E {\displaystyle E} . In this algorithm, messages are sent in both directions on any edge, so we can regard the edge set E as a set of ordered pairs of vertices. For example, from Figure 1 , E {\displaystyle E} can be defined as follows:
NOTE: E {\displaystyle E} above gives all the possible directions that a message can travel in the tree.
The schedule for the GDL is defined as a finite sequence of subsets of E {\displaystyle E} , generally represented by E = {\displaystyle {\mathcal {E}}=} { E 1 , E 2 , E 3 , … , E N {\displaystyle E_{1},E_{2},E_{3},\ldots ,E_{N}} }, where E t {\displaystyle E_{t}} is the set of messages updated during the t {\displaystyle t} -th round of running the algorithm.
Having defined this notation, we can state what the theorem says. Given a schedule E = { E 1 , E 2 , E 3 , … , E N } {\displaystyle {\mathcal {E}}=\{E_{1},E_{2},E_{3},\ldots ,E_{N}\}} , define the corresponding message trellis as the finite directed graph with vertex set V × { 0 , 1 , 2 , 3 , … , N } {\displaystyle V\times \{0,1,2,3,\ldots ,N\}} , in which a typical element is denoted by v i ( t ) {\displaystyle v_{i}(t)} for t ∈ { 0 , 1 , 2 , 3 , … , N } {\displaystyle t\in \{0,1,2,3,\ldots ,N\}} . Then, after completion of the message passing, the state at vertex v j {\displaystyle v_{j}} will be the j th {\displaystyle j^{\text{th}}} objective function defined above if and only if there is a path from v i ( 0 ) {\displaystyle v_{i}(0)} to v j ( N ) {\displaystyle v_{j}(N)} for every vertex v i {\displaystyle v_{i}} .
Here we explain the complexity of solving the MPF problem in terms of the number of mathematical operations required for the calculation. That is, we compare the number of operations required when calculated using the normal method (by normal method we mean methods that do not use message passing or junction trees, in short methods that do not use the concepts of GDL) with the number of operations using the generalized distributive law.
Example: Consider the simplest case where we need to compute the following expression a b + a c {\displaystyle ab+ac} .
To evaluate this expression naively requires two multiplications and one addition. The expression when expressed using the distributive law can be written as a ( b + c ) {\displaystyle a(b+c)} a simple optimization that reduces the number of operations to one addition and one multiplication.
Similar to the above example, we express the equations in different forms to perform as few operations as possible by applying the GDL.
As explained in the previous sections, we solve the problem using the concept of junction trees. The optimization obtained by the use of these trees is comparable to the optimization obtained by solving a semigroup problem on trees. For example, to find the minimum of a group of numbers, observe that if we have a tree with the elements at its bottom, then we can compute the minimum of pairs of items in parallel and write each resultant minimum to the parent. When this process is propagated up the tree, the minimum of the group of elements is found at the root.
The following is the complexity for solving the junction tree using message passing
We rewrite the formula used earlier in the following form. This is the equation for a message to be sent from vertex v to w :
Similarly, we rewrite the equation for calculating the state of vertex v as follows:
We first analyze the single-vertex problem and assume the target vertex is v 0 {\displaystyle v_{0}} , and hence we have an edge from v {\displaystyle v} to v 0 {\displaystyle v_{0}} .
Suppose we have an edge ( v , w ) {\displaystyle (v,w)} and we calculate the message using the message equation. To calculate the message for a single value of p v ∩ w {\displaystyle p_{v\cap w}} requires
additions and
multiplications.
(We represent | A S ( v ) ∖ S ( w ) | {\displaystyle |A_{S(v)\setminus S(w)}|} as q v ∖ w {\displaystyle q_{v\setminus w}} .)
But there will be many possibilities for x v ∩ w {\displaystyle x_{v\cap w}} hence q v ∩ w = d e f | A S ( v ) ∩ S ( w ) | {\displaystyle q_{v\cap w}{\stackrel {\mathrm {def} }{=}}|A_{S(v)\cap S(w)}|} possibilities for p v ∩ w {\displaystyle p_{v\cap w}} .
Thus the entire message will need
additions and
multiplications
The total number of arithmetic operations required to send a message towards v 0 {\displaystyle v_{0}} along the edges of the tree will be
additions and
multiplications.
Once all the messages have been transmitted the algorithm terminates with the computation of state at v 0 {\displaystyle v_{0}} . The state computation requires d ( v 0 ) q 0 {\displaystyle d(v_{0})q_{0}} more multiplications.
Thus the number of calculations required to compute the state is given below:
additions and
multiplications
Thus the grand total of the number of calculations is
where e = ( v , w ) {\displaystyle e=(v,w)} is an edge and its size is defined by q v ∩ w {\displaystyle q_{v\cap w}}
The formula above gives us the upper bound.
If we define the complexity of the edge e = ( v , w ) {\displaystyle e=(v,w)} as
Therefore, ( 1 ) {\displaystyle (1)} can be written as
We now calculate the edge complexity for the problem defined in Figure 1 as follows
The total complexity will be 3 q 2 q 3 + 3 q 3 q 4 + 3 q 1 q 2 + q 2 q 4 + q 1 q 4 − q 1 − q 3 − q 4 {\displaystyle 3q_{2}q_{3}+3q_{3}q_{4}+3q_{1}q_{2}+q_{2}q_{4}+q_{1}q_{4}-q_{1}-q_{3}-q_{4}} , which is considerably low compared to the direct method. (Here by direct method we mean methods that do not use message passing. The time taken by the direct method is equivalent to the time to calculate the message at each node plus the time to calculate the state of each of the nodes.)
Now we consider the all-vertex problem, where messages must be sent in both directions and the state must be computed at both vertices of each edge. This would take O ( ∑ v d ( v ) 2 q v ) {\displaystyle O(\sum _{v}d(v)^{2}q_{v})} operations, but by precomputing we can reduce the number of multiplications at a vertex of degree d {\displaystyle d} to 3 ( d − 2 ) {\displaystyle 3(d-2)} . For example, if there is a set ( a 1 , … , a d ) {\displaystyle (a_{1},\ldots ,a_{d})} of d {\displaystyle d} numbers, it is possible to compute all d products of d − 1 {\displaystyle d-1} of the a i {\displaystyle a_{i}} with at most 3 ( d − 2 ) {\displaystyle 3(d-2)} multiplications rather than the obvious d ( d − 2 ) {\displaystyle d(d-2)} .
We do this by precomputing the quantities b 1 = a 1 , b 2 = b 1 ⋅ a 2 = a 1 ⋅ a 2 , … , b d − 1 = b d − 2 ⋅ a d − 1 = a 1 a 2 ⋯ a d − 1 {\displaystyle b_{1}=a_{1},\;b_{2}=b_{1}\cdot a_{2}=a_{1}\cdot a_{2},\;\ldots ,\;b_{d-1}=b_{d-2}\cdot a_{d-1}=a_{1}a_{2}\cdots a_{d-1}} and c d = a d , c d − 1 = a d − 1 c d = a d − 1 ⋅ a d , … , c 2 = a 2 ⋅ c 3 = a 2 a 3 ⋯ a d {\displaystyle c_{d}=a_{d},\;c_{d-1}=a_{d-1}c_{d}=a_{d-1}\cdot a_{d},\;\ldots ,\;c_{2}=a_{2}\cdot c_{3}=a_{2}a_{3}\cdots a_{d}} ; this takes 2 ( d − 2 ) {\displaystyle 2(d-2)} multiplications. Then if m j {\displaystyle m_{j}} denotes the product of all a i {\displaystyle a_{i}} except for a j {\displaystyle a_{j}} , we have m 1 = c 2 , m 2 = b 1 ⋅ c 3 {\displaystyle m_{1}=c_{2},m_{2}=b_{1}\cdot c_{3}} , and so on; this needs another d − 2 {\displaystyle d-2} multiplications, making the total 3 ( d − 2 ) {\displaystyle 3(d-2)} .
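This trick is easy to implement. The following Python sketch (the function name is ours) computes all d leave-one-out products from the prefix products b_i and suffix products c_i described above:

def leave_one_out_products(a):
    d = len(a)
    b = [a[0]] * d          # b[i] = a[0] * ... * a[i]   (prefix products)
    for i in range(1, d):
        b[i] = b[i - 1] * a[i]
    c = [a[-1]] * d         # c[i] = a[i] * ... * a[d-1] (suffix products)
    for i in range(d - 2, -1, -1):
        c[i] = a[i] * c[i + 1]
    # m[j] = product of all a[i] with i != j
    m = [c[1]] + [b[j - 1] * c[j + 1] for j in range(1, d - 1)] + [b[d - 2]]
    return m

print(leave_one_out_products([2, 3, 5, 7]))   # [105, 70, 42, 30]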
There is not much we can do about the construction of the junction tree, except that we may have many maximal-weight spanning trees, and we should choose the spanning tree with the least χ ( T ) {\displaystyle \chi (T)} ; sometimes this may mean adding a local domain to lower the junction tree complexity.
It may seem that GDL is correct only when the local domains can be expressed as a junction tree. But even in cases where there are cycles, after a number of iterations the messages will be approximately equal to the objective function. Experiments on the Gallager–Tanner–Wiberg algorithm for low-density parity-check codes support this claim. | https://en.wikipedia.org/wiki/Generalized_distributive_law
In linear algebra , a generalized eigenvector of an n × n {\displaystyle n\times n} matrix A {\displaystyle A} is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector . [ 1 ]
Let V {\displaystyle V} be an n {\displaystyle n} -dimensional vector space and let A {\displaystyle A} be the matrix representation of a linear map from V {\displaystyle V} to V {\displaystyle V} with respect to some ordered basis .
There may not always exist a full set of n {\displaystyle n} linearly independent eigenvectors of A {\displaystyle A} that form a complete basis for V {\displaystyle V} . That is, the matrix A {\displaystyle A} may not be diagonalizable . [ 2 ] [ 3 ] This happens when the algebraic multiplicity of at least one eigenvalue λ i {\displaystyle \lambda _{i}} is greater than its geometric multiplicity (the nullity of the matrix ( A − λ i I ) {\displaystyle (A-\lambda _{i}I)} , or the dimension of its nullspace ). In this case, λ i {\displaystyle \lambda _{i}} is called a defective eigenvalue and A {\displaystyle A} is called a defective matrix . [ 4 ]
A generalized eigenvector x i {\displaystyle x_{i}} corresponding to λ i {\displaystyle \lambda _{i}} , together with the matrix ( A − λ i I ) {\displaystyle (A-\lambda _{i}I)} , generates a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of V {\displaystyle V} . [ 5 ] [ 6 ] [ 7 ]
Using generalized eigenvectors, a set of linearly independent eigenvectors of A {\displaystyle A} can be extended, if necessary, to a complete basis for V {\displaystyle V} . [ 8 ] This basis can be used to determine an "almost diagonal matrix" J {\displaystyle J} in Jordan normal form , similar to A {\displaystyle A} , which is useful in computing certain matrix functions of A {\displaystyle A} . [ 9 ] The matrix J {\displaystyle J} is also useful in solving the system of linear differential equations x ′ = A x , {\displaystyle \mathbf {x} '=A\mathbf {x} ,} where A {\displaystyle A} need not be diagonalizable. [ 10 ] [ 11 ]
The dimension of the generalized eigenspace corresponding to a given eigenvalue λ {\displaystyle \lambda } is the algebraic multiplicity of λ {\displaystyle \lambda } . [ 12 ]
There are several equivalent ways to define an ordinary eigenvector . [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] For our purposes, an eigenvector u {\displaystyle \mathbf {u} } associated with an eigenvalue λ {\displaystyle \lambda } of an n {\displaystyle n} × n {\displaystyle n} matrix A {\displaystyle A} is a nonzero vector for which ( A − λ I ) u = 0 {\displaystyle (A-\lambda I)\mathbf {u} =\mathbf {0} } , where I {\displaystyle I} is the n {\displaystyle n} × n {\displaystyle n} identity matrix and 0 {\displaystyle \mathbf {0} } is the zero vector of length n {\displaystyle n} . [ 21 ] That is, u {\displaystyle \mathbf {u} } is in the kernel of the transformation ( A − λ I ) {\displaystyle (A-\lambda I)} . If A {\displaystyle A} has n {\displaystyle n} linearly independent eigenvectors, then A {\displaystyle A} is similar to a diagonal matrix D {\displaystyle D} . That is, there exists an invertible matrix M {\displaystyle M} such that A {\displaystyle A} is diagonalizable through the similarity transformation D = M − 1 A M {\displaystyle D=M^{-1}AM} . [ 22 ] [ 23 ] The matrix D {\displaystyle D} is called a spectral matrix for A {\displaystyle A} . The matrix M {\displaystyle M} is called a modal matrix for A {\displaystyle A} . [ 24 ] Diagonalizable matrices are of particular interest since matrix functions of them can be computed easily. [ 25 ]
On the other hand, if A {\displaystyle A} does not have n {\displaystyle n} linearly independent eigenvectors associated with it, then A {\displaystyle A} is not diagonalizable. [ 26 ] [ 27 ]
Definition: A vector x m {\displaystyle \mathbf {x} _{m}} is a generalized eigenvector of rank m of the matrix A {\displaystyle A} and corresponding to the eigenvalue λ {\displaystyle \lambda } if ( A − λ I ) m x m = 0 {\displaystyle (A-\lambda I)^{m}\mathbf {x} _{m}=\mathbf {0} }
but ( A − λ I ) m − 1 x m ≠ 0 . {\displaystyle (A-\lambda I)^{m-1}\mathbf {x} _{m}\neq \mathbf {0} .}
Clearly, a generalized eigenvector of rank 1 is an ordinary eigenvector. [ 29 ] Every n {\displaystyle n} × n {\displaystyle n} matrix A {\displaystyle A} has n {\displaystyle n} linearly independent generalized eigenvectors associated with it and can be shown to be similar to an "almost diagonal" matrix J {\displaystyle J} in Jordan normal form. [ 30 ] That is, there exists an invertible matrix M {\displaystyle M} such that J = M − 1 A M {\displaystyle J=M^{-1}AM} . [ 31 ] The matrix M {\displaystyle M} in this case is called a generalized modal matrix for A {\displaystyle A} . [ 32 ] If λ {\displaystyle \lambda } is an eigenvalue of algebraic multiplicity μ {\displaystyle \mu } , then A {\displaystyle A} will have μ {\displaystyle \mu } linearly independent generalized eigenvectors corresponding to λ {\displaystyle \lambda } . [ 33 ] These results, in turn, provide a straightforward method for computing certain matrix functions of A {\displaystyle A} . [ 34 ]
Note: For an n × n {\displaystyle n\times n} matrix A {\displaystyle A} over a field F {\displaystyle F} to be expressed in Jordan normal form, all eigenvalues of A {\displaystyle A} must be in F {\displaystyle F} . That is, the characteristic polynomial f ( x ) {\displaystyle f(x)} must factor completely into linear factors; F {\displaystyle F} must be an algebraically closed field. For example, if A {\displaystyle A} has real-valued elements, then it may be necessary for the eigenvalues and the components of the eigenvectors to have complex values . [ 35 ] [ 36 ] [ 37 ]
The set spanned by all generalized eigenvectors for a given λ {\displaystyle \lambda } forms the generalized eigenspace for λ {\displaystyle \lambda } . [ 38 ]
Here are some examples to illustrate the concept of generalized eigenvectors. Some of the details will be described later.
This example is simple but clearly illustrates the point. This type of matrix is used frequently in textbooks. [ 39 ] [ 40 ] [ 41 ] Suppose A = ( 1 1 0 1 ) . {\displaystyle A={\begin{pmatrix}1&1\\0&1\end{pmatrix}}.}
Then there is only one eigenvalue, λ = 1 {\displaystyle \lambda =1} , and its algebraic multiplicity is m = 2 {\displaystyle m=2} .
Notice that this matrix is in Jordan normal form but is not diagonal . Hence, this matrix is not diagonalizable. Since there is one superdiagonal entry, there will be one generalized eigenvector of rank greater than 1 (or one could note that the vector space V {\displaystyle V} is of dimension 2, so there can be at most one generalized eigenvector of rank greater than 1). Alternatively, one could compute the dimension of the nullspace of A − λ I {\displaystyle A-\lambda I} to be p = 1 {\displaystyle p=1} , and thus there are m − p = 1 {\displaystyle m-p=1} generalized eigenvectors of rank greater than 1.
The ordinary eigenvector v 1 = ( 1 0 ) {\displaystyle \mathbf {v} _{1}={\begin{pmatrix}1\\0\end{pmatrix}}} is computed as usual (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector v 2 {\displaystyle \mathbf {v} _{2}} by solving ( A − 1 I ) v 2 = v 1 . {\displaystyle (A-1I)\mathbf {v} _{2}=\mathbf {v} _{1}.}
Writing out the values: ( 0 1 0 0 ) ( v 21 v 22 ) = ( 1 0 ) . {\displaystyle {\begin{pmatrix}0&1\\0&0\end{pmatrix}}{\begin{pmatrix}v_{21}\\v_{22}\end{pmatrix}}={\begin{pmatrix}1\\0\end{pmatrix}}.}
This simplifies to v 22 = 1. {\displaystyle v_{22}=1.}
The element v 21 {\displaystyle v_{21}} has no restrictions. The generalized eigenvector of rank 2 is then v 2 = ( a 1 ) {\displaystyle \mathbf {v} _{2}={\begin{pmatrix}a\\1\end{pmatrix}}} , where a can have any scalar value. The choice of a = 0 is usually the simplest.
Note that ( A − 1 I ) v 2 = ( 0 1 0 0 ) ( a 1 ) = ( 1 0 ) = v 1 , {\displaystyle (A-1I)\mathbf {v} _{2}={\begin{pmatrix}0&1\\0&0\end{pmatrix}}{\begin{pmatrix}a\\1\end{pmatrix}}={\begin{pmatrix}1\\0\end{pmatrix}}=\mathbf {v} _{1},}
so that v 2 {\displaystyle \mathbf {v} _{2}} is a generalized eigenvector, because ( A − 1 I ) v 1 = ( 0 1 0 0 ) ( 1 0 ) = ( 0 0 ) = 0 , {\displaystyle (A-1I)\mathbf {v} _{1}={\begin{pmatrix}0&1\\0&0\end{pmatrix}}{\begin{pmatrix}1\\0\end{pmatrix}}={\begin{pmatrix}0\\0\end{pmatrix}}=\mathbf {0} ,}
so that v 1 {\displaystyle \mathbf {v} _{1}} is an ordinary eigenvector, and that v 1 {\displaystyle \mathbf {v} _{1}} and v 2 {\displaystyle \mathbf {v} _{2}} are linearly independent and hence constitute a basis for the vector space V {\displaystyle V} .
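The chain computed in this example is easy to verify numerically. A quick NumPy check, choosing a = 0 for the free parameter, follows:

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
I = np.eye(2)
v1 = np.array([1.0, 0.0])   # ordinary eigenvector
v2 = np.array([0.0, 1.0])   # generalized eigenvector of rank 2 (a = 0)

print((A - I) @ v2)                            # [1, 0] = v1
print(np.linalg.matrix_power(A - I, 2) @ v2)   # [0, 0], so (A - I)^2 v2 = 0
print((A - I) @ v1)                            # [0, 0], v1 is an ordinary eigenvector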
This example is more complex than Example 1 . Unfortunately, it is a little difficult to construct an interesting example of low order. [ 42 ] The matrix
has eigenvalues λ 1 = 1 {\displaystyle \lambda _{1}=1} and λ 2 = 2 {\displaystyle \lambda _{2}=2} with algebraic multiplicities μ 1 = 2 {\displaystyle \mu _{1}=2} and μ 2 = 3 {\displaystyle \mu _{2}=3} , but geometric multiplicities γ 1 = 1 {\displaystyle \gamma _{1}=1} and γ 2 = 1 {\displaystyle \gamma _{2}=1} .
The generalized eigenspaces of A {\displaystyle A} are calculated below. x 1 {\displaystyle \mathbf {x} _{1}} is the ordinary eigenvector associated with λ 1 {\displaystyle \lambda _{1}} . x 2 {\displaystyle \mathbf {x} _{2}} is a generalized eigenvector associated with λ 1 {\displaystyle \lambda _{1}} . y 1 {\displaystyle \mathbf {y} _{1}} is the ordinary eigenvector associated with λ 2 {\displaystyle \lambda _{2}} . y 2 {\displaystyle \mathbf {y} _{2}} and y 3 {\displaystyle \mathbf {y} _{3}} are generalized eigenvectors associated with λ 2 {\displaystyle \lambda _{2}} .
This results in a basis for each of the generalized eigenspaces of A {\displaystyle A} .
Together the two chains of generalized eigenvectors span the space of all 5-dimensional column vectors.
An "almost diagonal" matrix J {\displaystyle J} in Jordan normal form , similar to A {\displaystyle A} is obtained as follows:
where M {\displaystyle M} is a generalized modal matrix for A {\displaystyle A} , the columns of M {\displaystyle M} are a canonical basis for A {\displaystyle A} , and A M = M J {\displaystyle AM=MJ} . [ 43 ]
Definition: Let x m {\displaystyle \mathbf {x} _{m}} be a generalized eigenvector of rank m corresponding to the matrix A {\displaystyle A} and the eigenvalue λ {\displaystyle \lambda } . The chain generated by x m {\displaystyle \mathbf {x} _{m}} is a set of vectors { x m , x m − 1 , … , x 1 } {\displaystyle \left\{\mathbf {x} _{m},\mathbf {x} _{m-1},\dots ,\mathbf {x} _{1}\right\}} given by
x m − 1 = ( A − λ I ) x m , {\displaystyle \mathbf {x} _{m-1}=(A-\lambda I)\mathbf {x} _{m},} x m − 2 = ( A − λ I ) 2 x m = ( A − λ I ) x m − 1 , {\displaystyle \mathbf {x} _{m-2}=(A-\lambda I)^{2}\mathbf {x} _{m}=(A-\lambda I)\mathbf {x} _{m-1},} x m − 3 = ( A − λ I ) 3 x m = ( A − λ I ) x m − 2 , {\displaystyle \mathbf {x} _{m-3}=(A-\lambda I)^{3}\mathbf {x} _{m}=(A-\lambda I)\mathbf {x} _{m-2},}
x 1 = ( A − λ I ) m − 1 x m = ( A − λ I ) x 2 . {\displaystyle \mathbf {x} _{1}=(A-\lambda I)^{m-1}\mathbf {x} _{m}=(A-\lambda I)\mathbf {x} _{2}.}
where x 1 {\displaystyle \mathbf {x} _{1}} is always an ordinary eigenvector with a given eigenvalue λ {\displaystyle \lambda } . Thus, in general, x j = ( A − λ I ) m − j x m = ( A − λ I ) x j + 1 ( j = 1 , 2 , … , m − 1 ) . {\displaystyle \mathbf {x} _{j}=(A-\lambda I)^{m-j}\mathbf {x} _{m}=(A-\lambda I)\mathbf {x} _{j+1}\qquad (j=1,2,\ldots ,m-1).} ( 2 )
The vector x j {\displaystyle \mathbf {x} _{j}} , given by ( 2 ), is a generalized eigenvector of rank j corresponding to the eigenvalue λ {\displaystyle \lambda } . A chain is a linearly independent set of vectors. [ 44 ]
Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains.
Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors x m − 1 , x m − 2 , … , x 1 {\displaystyle \mathbf {x} _{m-1},\mathbf {x} _{m-2},\ldots ,\mathbf {x} _{1}} that are in the Jordan chain generated by x m {\displaystyle \mathbf {x} _{m}} are also in the canonical basis. [ 45 ]
Let λ i {\displaystyle \lambda _{i}} be an eigenvalue of A {\displaystyle A} of algebraic multiplicity μ i {\displaystyle \mu _{i}} . First, find the ranks (matrix ranks) of the matrices ( A − λ i I ) , ( A − λ i I ) 2 , … , ( A − λ i I ) m i {\displaystyle (A-\lambda _{i}I),(A-\lambda _{i}I)^{2},\ldots ,(A-\lambda _{i}I)^{m_{i}}} . The integer m i {\displaystyle m_{i}} is determined to be the first integer for which ( A − λ i I ) m i {\displaystyle (A-\lambda _{i}I)^{m_{i}}} has rank n − μ i {\displaystyle n-\mu _{i}} ( n being the number of rows or columns of A {\displaystyle A} , that is, A {\displaystyle A} is n × n ).
Now define ρ k = rank ⁡ ( A − λ i I ) k − 1 − rank ⁡ ( A − λ i I ) k . {\displaystyle \rho _{k}=\operatorname {rank} (A-\lambda _{i}I)^{k-1}-\operatorname {rank} (A-\lambda _{i}I)^{k}.}
The variable ρ k {\displaystyle \rho _{k}} designates the number of linearly independent generalized eigenvectors of rank k corresponding to the eigenvalue λ i {\displaystyle \lambda _{i}} that will appear in a canonical basis for A {\displaystyle A} . Note that ρ 1 + ρ 2 + ⋯ + ρ m i = μ i . {\displaystyle \rho _{1}+\rho _{2}+\cdots +\rho _{m_{i}}=\mu _{i}.}
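These rank computations are mechanical. The following NumPy sketch reuses the 2 × 2 matrix from Example 1 (so for λ = 1 we expect one generalized eigenvector each of rank 1 and rank 2) and computes the ranks of (A − λI)^k together with the resulting ρ_k:

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
n = A.shape[0]
B = A - lam * np.eye(n)

# Ranks of B^0, B^1, ... until they stabilize at n - mu.
ranks = [n]
k = 0
while len(ranks) < 2 or ranks[-1] != ranks[-2]:
    k += 1
    ranks.append(np.linalg.matrix_rank(np.linalg.matrix_power(B, k)))

# rho_k = rank(B^(k-1)) - rank(B^k); a trailing zero just marks stabilization.
rho = [ranks[i - 1] - ranks[i] for i in range(1, len(ranks))]
print(ranks)   # [2, 1, 0, 0]
print(rho)     # [1, 1, 0]: one generalized eigenvector each of rank 1 and rank 2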
In the preceding sections we have seen techniques for obtaining the n {\displaystyle n} linearly independent generalized eigenvectors of a canonical basis for the vector space V {\displaystyle V} associated with an n × n {\displaystyle n\times n} matrix A {\displaystyle A} . These techniques can be combined into a procedure:
The matrix
has an eigenvalue λ 1 = 5 {\displaystyle \lambda _{1}=5} of algebraic multiplicity μ 1 = 3 {\displaystyle \mu _{1}=3} and an eigenvalue λ 2 = 4 {\displaystyle \lambda _{2}=4} of algebraic multiplicity μ 2 = 1 {\displaystyle \mu _{2}=1} . We also have n = 4 {\displaystyle n=4} . For λ 1 {\displaystyle \lambda _{1}} we have n − μ 1 = 4 − 3 = 1 {\displaystyle n-\mu _{1}=4-3=1} .
The first integer m 1 {\displaystyle m_{1}} for which ( A − 5 I ) m 1 {\displaystyle (A-5I)^{m_{1}}} has rank n − μ 1 = 1 {\displaystyle n-\mu _{1}=1} is m 1 = 3 {\displaystyle m_{1}=3} .
We now define ρ 3 = rank ⁡ ( A − 5 I ) 2 − rank ⁡ ( A − 5 I ) 3 = 2 − 1 = 1 , {\displaystyle \rho _{3}=\operatorname {rank} (A-5I)^{2}-\operatorname {rank} (A-5I)^{3}=2-1=1,} ρ 2 = rank ⁡ ( A − 5 I ) − rank ⁡ ( A − 5 I ) 2 = 3 − 2 = 1 , {\displaystyle \rho _{2}=\operatorname {rank} (A-5I)-\operatorname {rank} (A-5I)^{2}=3-2=1,} ρ 1 = rank ⁡ ( A − 5 I ) 0 − rank ⁡ ( A − 5 I ) = 4 − 3 = 1. {\displaystyle \rho _{1}=\operatorname {rank} (A-5I)^{0}-\operatorname {rank} (A-5I)=4-3=1.}
Consequently, there will be three linearly independent generalized eigenvectors; one each of ranks 3, 2 and 1. Since λ 1 {\displaystyle \lambda _{1}} corresponds to a single chain of three linearly independent generalized eigenvectors, we know that there is a generalized eigenvector x 3 {\displaystyle \mathbf {x} _{3}} of rank 3 corresponding to λ 1 {\displaystyle \lambda _{1}} such that ( A − 5 I ) 3 x 3 = 0 {\displaystyle (A-5I)^{3}\mathbf {x} _{3}=\mathbf {0} } ( 3 )
but ( A − 5 I ) 2 x 3 ≠ 0 . {\displaystyle (A-5I)^{2}\mathbf {x} _{3}\neq \mathbf {0} .} ( 4 )
Equations ( 3 ) and ( 4 ) represent linear systems that can be solved for x 3 {\displaystyle \mathbf {x} _{3}} . Let x 3 = ( x 31 x 32 x 33 x 34 ) . {\displaystyle \mathbf {x} _{3}={\begin{pmatrix}x_{31}\\x_{32}\\x_{33}\\x_{34}\end{pmatrix}}.}
Computing $(A - 5I)^3\,\mathbf{x}_3$ and $(A - 5I)^2\,\mathbf{x}_3$ componentwise shows that, in order to satisfy the conditions (3) and (4), we must have $x_{34} = 0$ and $x_{33} \neq 0$. No restrictions are placed on $x_{31}$ and $x_{32}$. By choosing $x_{31} = x_{32} = x_{34} = 0$, $x_{33} = 1$, we obtain
$$\mathbf{x}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}$$
as a generalized eigenvector of rank 3 corresponding to $\lambda_1 = 5$. Note that it is possible to obtain infinitely many other generalized eigenvectors of rank 3 by choosing different values of $x_{31}$, $x_{32}$ and $x_{33}$, with $x_{33} \neq 0$. Our first choice, however, is the simplest. [ 47 ]
Now using equations (1), we obtain $\mathbf{x}_2 = (A - 5I)\,\mathbf{x}_3$ and $\mathbf{x}_1 = (A - 5I)\,\mathbf{x}_2$ as generalized eigenvectors of rank 2 and 1, respectively.
The simple eigenvalue $\lambda_2 = 4$ can be dealt with using standard techniques and has an ordinary eigenvector $\mathbf{y}_1$. A canonical basis for $A$ is then $\{\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3, \mathbf{y}_1\}$, where $\mathbf{x}_1, \mathbf{x}_2$ and $\mathbf{x}_3$ are generalized eigenvectors associated with $\lambda_1$, while $\mathbf{y}_1$ is the ordinary eigenvector associated with $\lambda_2$.
This is a fairly simple example. In general, the numbers $\rho_k$ of linearly independent generalized eigenvectors of rank $k$ will not always be equal. That is, there may be several chains of different lengths corresponding to a particular eigenvalue. [ 48 ]
Let $A$ be an n × n matrix. A generalized modal matrix $M$ for $A$ is an n × n matrix whose columns, considered as vectors, form a canonical basis for $A$ and appear in $M$ according to the following rules: all Jordan chains consisting of one vector (that is, of length one) appear in the first columns of $M$; all vectors of one chain appear together in adjacent columns of $M$; and each chain appears in $M$ in order of increasing rank.
Let $V$ be an n-dimensional vector space; let $\phi$ be a linear map in $L(V)$, the set of all linear maps from $V$ into itself; and let $A$ be the matrix representation of $\phi$ with respect to some ordered basis. It can be shown that if the characteristic polynomial $f(\lambda)$ of $A$ factors into linear factors, so that $f(\lambda)$ has the form
$$f(\lambda) = \pm(\lambda - \lambda_1)^{\mu_1}(\lambda - \lambda_2)^{\mu_2}\cdots(\lambda - \lambda_r)^{\mu_r},$$
where $\lambda_1, \lambda_2, \ldots, \lambda_r$ are the distinct eigenvalues of $A$, then each $\mu_i$ is the algebraic multiplicity of its corresponding eigenvalue $\lambda_i$ and $A$ is similar to a matrix $J$ in Jordan normal form, in which each $\lambda_i$ appears $\mu_i$ consecutive times on the diagonal. Each entry directly above the diagonal (that is, on the superdiagonal) is either 0 or 1: it is 1 when the two adjacent diagonal entries belong to the same Jordan chain, and 0 otherwise. All other entries (that is, off the diagonal and superdiagonal) are 0. No ordering is imposed among the eigenvalues, or among the blocks for a given eigenvalue. The matrix $J$ is as close as one can come to a diagonalization of $A$. If $A$ is diagonalizable, then all entries above the diagonal are zero. [ 50 ] Note that some textbooks place the ones on the subdiagonal, that is, immediately below the main diagonal instead of on the superdiagonal; the eigenvalues are still on the main diagonal. [ 51 ] [ 52 ]
Every n × n matrix $A$ is similar to a matrix $J$ in Jordan normal form, obtained through the similarity transformation $J = M^{-1}AM$, where $M$ is a generalized modal matrix for $A$. [ 53 ] (See Note above.)
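As a quick check of this transformation in practice, sympy computes a generalized modal matrix and the Jordan form together; the convention of its jordan_form method is $A = PJP^{-1}$, and the test matrix below is a made-up example.

```python
# sympy's jordan_form returns (P, J) with A = P J P^{-1}; P plays the role
# of the generalized modal matrix M, and the matrix A here is illustrative.
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 2]])
P, J = A.jordan_form()
# J consists of one 2x2 and one 1x1 Jordan block for the eigenvalue 2
assert P.inv() * A * P == J
```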
Example: find a matrix in Jordan normal form that is similar to a given 3 × 3 matrix $A$.
Solution: The characteristic equation of $A$ is $(\lambda - 2)^3 = 0$; hence, $\lambda = 2$ is an eigenvalue of algebraic multiplicity three. Following the procedures of the previous sections, we find that $\operatorname{rank}(A - 2I) = 1$ and $\operatorname{rank}(A - 2I)^2 = 0 = n - \mu_1$, so that $m_1 = 2$.
Thus, $\rho_2 = 1$ and $\rho_1 = 2$, which implies that a canonical basis for $A$ will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors $\{\mathbf{x}_2, \mathbf{x}_1\}$ and one chain of one vector $\{\mathbf{y}_1\}$. Designating $M = \begin{pmatrix}\mathbf{y}_1 & \mathbf{x}_1 & \mathbf{x}_2\end{pmatrix}$, we find that
$$J = M^{-1}AM = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix},$$
where $M$ is a generalized modal matrix for $A$, the columns of $M$ are a canonical basis for $A$, and $AM = MJ$. [ 54 ] Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both $M$ and $J$ may be interchanged, it follows that both $M$ and $J$ are not unique. [ 55 ]
In Example 3, we found a canonical basis of linearly independent generalized eigenvectors for a matrix $A$. Arranging these vectors as the columns of a matrix, following the rules above, yields a generalized modal matrix $M$ for $A$, and $J = M^{-1}AM$ is then a matrix in Jordan normal form similar to $A$, so that $AM = MJ$.
Three of the most fundamental operations which can be performed on square matrices are matrix addition, multiplication by a scalar, and matrix multiplication. [ 56 ] These are exactly the operations necessary for defining a polynomial function of an n × n matrix $A$. [ 57 ] If we recall from basic calculus that many functions can be written as a Maclaurin series, then we can define more general functions of matrices quite easily. [ 58 ] If $A$ is diagonalizable, that is,
$$D = M^{-1}AM,$$
with
$$D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix},$$
then
$$A^k = M D^k M^{-1}$$
and the evaluation of the Maclaurin series for functions of $A$ is greatly simplified. [ 59 ] For example, to obtain any power k of $A$, we need only compute $D^k$, premultiply $D^k$ by $M$, and postmultiply the result by $M^{-1}$. [ 60 ]
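A short numerical sketch of this recipe, using an arbitrary diagonalizable 2 × 2 matrix of our own choosing:

```python
# A numerical sketch of A^k = M D^k M^{-1}; the 2x2 matrix is an arbitrary
# diagonalizable example of ours.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, M = np.linalg.eig(A)       # columns of M are eigenvectors of A
k = 5
Dk = np.diag(eigvals ** k)          # raising D to a power is entrywise
Ak = M @ Dk @ np.linalg.inv(M)      # premultiply by M, postmultiply by M^{-1}

assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```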
Using generalized eigenvectors, we can obtain the Jordan normal form for $A$, and these results can be generalized to a straightforward method for computing functions of nondiagonalizable matrices. [ 61 ] (See Matrix function § Jordan decomposition.)
Consider the problem of solving the system of linear ordinary differential equations
$$\mathbf{x}' = A\mathbf{x}, \qquad (5)$$
where $\mathbf{x} = (x_1(t), \ldots, x_n(t))^{\mathsf T}$ is a vector of unknown functions and $A = (a_{ij})$ is an n × n matrix of constants.
If the matrix $A$ is a diagonal matrix so that $a_{ij} = 0$ for $i \neq j$, then the system (5) reduces to a system of n equations of the form
$$x_1' = a_{11}x_1,$$
$$x_2' = a_{22}x_2,$$
$$\vdots$$
$$x_n' = a_{nn}x_n. \qquad (6)$$
In this case, the general solution is given by
$$x_i = k_i e^{a_{ii}t}, \quad i = 1, \ldots, n.$$
In the general case, we try to diagonalize $A$ and reduce the system (5) to a system like (6) as follows. If $A$ is diagonalizable, we have $D = M^{-1}AM$, where $M$ is a modal matrix for $A$. Substituting $A = MDM^{-1}$, equation (5) takes the form $M^{-1}\mathbf{x}' = D(M^{-1}\mathbf{x})$, or
$$\mathbf{y}' = D\mathbf{y}, \qquad (7)$$
where
$$\mathbf{y} = M^{-1}\mathbf{x}. \qquad (8)$$
The solution of (7) is
$$y_i = k_i e^{\lambda_i t}, \quad i = 1, \ldots, n.$$
The solution $\mathbf{x}$ of (5) is then obtained using the relation (8). [ 62 ]
On the other hand, if $A$ is not diagonalizable, we choose $M$ to be a generalized modal matrix for $A$, such that $J = M^{-1}AM$ is the Jordan normal form of $A$. The system $\mathbf{y}' = J\mathbf{y}$ has the form
$$\begin{aligned} y_1' &= \lambda_1 y_1 + \epsilon_1 y_2 \\ &\ \vdots \\ y_{n-1}' &= \lambda_{n-1} y_{n-1} + \epsilon_{n-1} y_n \\ y_n' &= \lambda_n y_n, \end{aligned} \qquad (9)$$
where the $\lambda_i$ are the eigenvalues from the main diagonal of $J$ and the $\epsilon_i$ are the ones and zeros from the superdiagonal of $J$. The system (9) is often more easily solved than (5). We may solve the last equation in (9) for $y_n$, obtaining $y_n = k_n e^{\lambda_n t}$. We then substitute this solution for $y_n$ into the next-to-last equation in (9) and solve for $y_{n-1}$. Continuing this procedure, we work through (9) from the last equation to the first, solving the entire system for $\mathbf{y}$. The solution $\mathbf{x}$ is then obtained using the relation (8). [ 63 ]
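A minimal symbolic sketch of this back-substitution, assuming a single 2 × 2 Jordan block with eigenvalue 2 as a stand-in for $J$ (the block size, eigenvalue and constant names are our illustrative choices):

```python
# A minimal sketch of the back-substitution described above, for a single
# 2x2 Jordan block with eigenvalue 2 (an illustrative stand-in for J).
import sympy as sp

t, k2 = sp.symbols('t k2')
lam = 2

# Last equation of (9): y2' = lam*y2, so y2 = k2*exp(lam*t)
y2 = k2 * sp.exp(lam * t)

# Substitute y2 into the next equation up, y1' = lam*y1 + y2, and solve it
y1 = sp.Function('y1')
sol = sp.dsolve(sp.Eq(y1(t).diff(t), lam * y1(t) + y2), y1(t))
print(sol)   # Eq(y1(t), (C1 + k2*t)*exp(2*t))
```

The polynomial factor $k_2 t$ multiplying the exponential is exactly the contribution of the generalized eigenvector, as formalized by the lemma below.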
Lemma: Given a Jordan chain of generalized eigenvectors $\mathbf{x}_1, \ldots, \mathbf{x}_r$ of length $r$ associated with the eigenvalue $\lambda$, so that $(A - \lambda I)\mathbf{x}_1 = \mathbf{0}$ and $(A - \lambda I)\mathbf{x}_j = \mathbf{x}_{j-1}$ for $j = 2, \ldots, r$, the functions
$$\mathbf{u}_j(t) = e^{\lambda t}\sum_{k=0}^{j-1}\frac{t^k}{k!}\,\mathbf{x}_{j-k}, \quad j = 1, \ldots, r,$$
solve the system of equations $\mathbf{u}' = A\mathbf{u}$.
Proof: Define
$$\mathbf{v}_j(t) = \sum_{k=0}^{j-1}\frac{t^k}{k!}\,\mathbf{x}_{j-k},$$
so that $\mathbf{u}_j = e^{\lambda t}\mathbf{v}_j$. Then, as $t^0 = 1$ and $1' = 0$,
$$\mathbf{v}_j' = \mathbf{v}_{j-1}, \quad \text{and hence} \quad \mathbf{u}_j' = \lambda e^{\lambda t}\mathbf{v}_j + e^{\lambda t}\mathbf{v}_{j-1}.$$
On the other hand we have $\mathbf{v}_0 = \mathbf{0}$ and $(A - \lambda I)\mathbf{v}_j = \mathbf{v}_{j-1}$, and so
$$A\mathbf{u}_j = e^{\lambda t}\left(\lambda\mathbf{v}_j + \mathbf{v}_{j-1}\right) = \mathbf{u}_j',$$
as required. | https://en.wikipedia.org/wiki/Generalized_eigenvector
The generalized entropy index has been proposed as a measure of income inequality in a population. [ 1 ] It is derived from information theory as a measure of redundancy in data. In information theory a measure of redundancy can be interpreted as non-randomness or data compression ; thus this interpretation also applies to this index. In addition, interpretation of biodiversity as entropy has also been proposed leading to uses of generalized entropy to quantify biodiversity. [ 2 ]
The formula for generalized entropy for real values of $\alpha$ is:
$$GE(\alpha) = \begin{cases} \dfrac{1}{N\alpha(\alpha-1)}\displaystyle\sum_{i=1}^{N}\left[\left(\dfrac{y_i}{\bar y}\right)^{\alpha} - 1\right], & \alpha \neq 0, 1, \\[2ex] \dfrac{1}{N}\displaystyle\sum_{i=1}^{N}\dfrac{y_i}{\bar y}\ln\dfrac{y_i}{\bar y}, & \alpha = 1, \\[2ex] -\dfrac{1}{N}\displaystyle\sum_{i=1}^{N}\ln\dfrac{y_i}{\bar y}, & \alpha = 0, \end{cases}$$
where N is the number of cases (e.g., households or families), $y_i$ is the income for case i, and $\alpha$ is a parameter which regulates the weight given to distances between incomes at different parts of the income distribution. For large $\alpha$ the index is especially sensitive to the existence of large incomes, whereas for small $\alpha$ the index is especially sensitive to the existence of small incomes.
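The case analysis in this formula translates directly into code. The sketch below is ours; the function name and the sample incomes are invented for illustration.

```python
# A direct transcription of the three-case formula above; the function name
# and sample incomes are our own choices.
import numpy as np

def generalized_entropy(y, alpha):
    y = np.asarray(y, dtype=float)
    r = y / y.mean()                               # y_i / y-bar
    if alpha == 0:
        return -np.mean(np.log(r))                 # mean log deviation
    if alpha == 1:
        return np.mean(r * np.log(r))              # Theil T index
    return np.mean(r ** alpha - 1) / (alpha * (alpha - 1))

incomes = [20_000, 30_000, 50_000, 100_000]
print(generalized_entropy(incomes, 2))             # sensitive to large incomes
```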
The GE index satisfies the standard properties required of an inequality measure, including the transfer (Pigou–Dalton) principle, scale invariance, population replication invariance, and additive decomposability by population subgroups.
An Atkinson index for any inequality aversion parameter can be derived from a generalized entropy index under the restriction that $\epsilon = 1 - \alpha$, i.e. an Atkinson index with high inequality aversion is derived from a GE index with small $\alpha$.
The formula for deriving an Atkinson index with inequality aversion parameter $\epsilon$ under the restriction $\epsilon = 1 - \alpha$ is given by:
$$A = 1 - \left[\epsilon(\epsilon - 1)\,GE(\alpha) + 1\right]^{1/(1-\epsilon)}, \qquad \epsilon \neq 1,$$
$$A = 1 - e^{-GE(\alpha)}, \qquad \epsilon = 1.$$
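A sketch of this conversion, reusing the generalized_entropy function from the previous snippet (eps, our name for the inequality aversion parameter $\epsilon$, selects the GE index via $\alpha = 1 - \epsilon$):

```python
# A sketch of the Atkinson conversion above, reusing generalized_entropy
# from the previous snippet; eps is the inequality aversion parameter.
import numpy as np

def atkinson_from_ge(y, eps):
    alpha = 1 - eps                    # the restriction eps = 1 - alpha
    ge = generalized_entropy(y, alpha)
    if eps == 1:
        return 1 - np.exp(-ge)
    return 1 - (eps * (eps - 1) * ge + 1) ** (1 / (1 - eps))
```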
Note that the generalized entropy index has several income inequality metrics as special cases. For example, GE(0) is the mean log deviation a.k.a. Theil L index, GE(1) is the Theil T index , and GE(2) is half the squared coefficient of variation . | https://en.wikipedia.org/wiki/Generalized_entropy_index |
Generalized exchange is a type of social exchange in which a desired outcome that is sought by an individual is not dependent on the resources provided by that individual. [ 1 ] It is assumed to be a fundamental social mechanism that stabilizes relations in society through unilateral resource giving, in which one's giving is not necessarily reciprocated by the recipient, but by a third party. [ 2 ] Thus, in contrast to direct or restricted exchange or reciprocity, [ 3 ] in which parties exchange resources with each other, generalized exchange naturally involves more than two parties. [ 2 ] Examples of generalized exchange include matrilateral cross-cousin marriage and helping a stranded driver on a desolate road. [ 4 ]
All forms of social exchange occur within structures of mutual dependence, that is, structures in which actors are mutually or reciprocally dependent on one another for valued outcomes. A structure of mutual or reciprocal dependence is a defining characteristic of all social relations based on exchange. [ 5 ]
The mutual or reciprocal dependence can be either direct (restricted) or indirect (generalized). Both rest on a norm of reciprocity which provides guidance to both parties: takers are obliged to be givers. [ 6 ] In direct dyadic exchange, the norm of reciprocity insists that takers give gifts to those who gave to them. Generalized exchange also insists that takers give, but to somebody else. [ 6 ] Because the recipient is not defined, generalized exchange creates opportunities for exploitation if actors explicitly reject the guiding norm of reciprocity. The purest form of indirect, generalized exchange is the chain-generalized form, first documented by the classical anthropologists Lévi-Strauss (1969) [ 7 ] and Malinowski (1922). [ 8 ] In chain-generalized exchange, benefits flow in one direction in a circle of giving that eventually returns benefit to the giver. In direct exchange, actors instead engage in individual actions that benefit another. Reciprocal exchanges evolve gradually, as beneficial acts prompt reciprocal benefits, in a series of sequentially contingent, individual acts. [ 5 ]
In indirect structures of reciprocity, each actor depends not on a single other, as in direct forms of exchange, but on all actors who contribute to maintaining the collective system. [ 4 ] Generalized exchange, according to this logic, is a common feature of business organizations, neighborhoods, and the vast and growing network of online communities. [ 9 ] [ 10 ] In indirect exchanges, we observe reduced emotional tension between the partners, a credit mentality, collective orientation, and high levels of solidarity and trust. Indirect reciprocity occurs when an actor who provides benefits to another is subsequently helped by a third party. Indirect reciprocity is deeply rooted in reputation processes, and it requires information about the broader network (e.g., what has actor A done for the others?). When collective organizations are large, this greater informational complexity of indirect reciprocity processes may moderate its effects. [ 11 ] Experimental evidence shows that people respond strategically to the presence of others, cooperating at much higher levels when reputational benefits and possibilities of indirect reciprocity exist. [ 12 ] Individuals have a tendency to reward givers and penalize non-givers, which is often explained from the perspectives of prosocial behavior and norm enforcement. [ 13 ] But another explanation may lie in reputational concerns. In other words, because so much of human behavior is based on reputational advantages and opportunities, evolutionary theorists posit that the foundations of human morality are rooted in indirect reciprocity and reputational processes. [ 14 ] [ 15 ]
In generalized exchange, one actor gives benefits to another, and receives from another, but not from the same actor. In a chain-generalized system of exchange, connected parties A, B, and C give in one direction around the chain. They may also be part of a larger, more diffuse network with no defined structure. According to Takahashi (2000), [ 2 ] this is called "pure generalized" exchange. In this form, there is no fixed structure of giving: A might give to B on one occasion and to C on a different occasion. The structure of indirect reciprocity affects solidarity in comparison with forms of exchange with direct reciprocity.
In forms of exchange with direct reciprocity, two actors exchange resources with one another. That is, A provides value to B, and B to A. B's reciprocation of A's giving is direct, [ 5 ] and each actor's outcomes depend solely on the behavior of another actor or actors. Direct structures of reciprocity produce exchanges which have different consequences for trust and solidarity. [ 16 ] Direct exchanges are characterized by high emotional tension and a lack of trust: quid pro quo relations among self-interested actors who often engage in conflicts over the fairness of exchanges and exhibit low levels of trust.
The American sociologist Richard Marc Emerson (1981) further distinguished between two forms of transactions in direct exchange relations: negotiated and reciprocal. There exists a clear distinction between negotiated and reciprocal forms of direct exchange. [ 17 ] [ 18 ] Along these lines, Yamagishi and Cook [ 19 ] (1993) and Takahashi [ 20 ] (2000) note that emphasis on the collective aspects of generalized exchange neglects elements such as the high risk of the structure, the potential for those who fail to give to disrupt the entire system, and the difficulty of establishing a structure of stable giving without initially high levels of trust or established norms. [ 21 ]
American sociologists: Karen S. Cook , [ 22 ] Richard M. Emerson, Toshio Yamagishi, [ 23 ] Mary R. Gillmore, [ 24 ] Samuel B. Bacharach, [ 25 ] and Edward J. Lawler [ 26 ] all study negotiated transactions.
In such exchange, actors together arrange and negotiate the terms of an agreement that benefits both parties, either equally or unequally. This is a joint decision process of explicit bargaining. [ 5 ] Both sides of the exchange are agreed upon at the same time, and the benefits for both exchange partners are easily identified as paired contributions that form a discrete transaction. These agreements are strictly binding and produce the benefits agreed upon. [ 27 ] Most economic exchanges (excluding fixed-price trades) as well as many other social exchanges fall under this category.
In such exchange, actors engage in actions that benefit one another. Actors' contributions to the exchange are not ex-ante negotiated. Actors initiate exchanges without knowing whether their actions will be reciprocated ex-post. Such contributions are performed separately and are not known to the counterparties. [ 5 ] Behaviors can be advice-giving, assistance, help, and are not subject to negotiation. Moreover, there is no knowledge whether or when or to what extent the other will reciprocate. Reciprocal transactions are distinct from pure economic exchanges and are typical in many interpersonal relationships where norms curtail the extent of explicit bargaining .
Negotiated and reciprocal exchanges create different structural relations between actors' behaviors and between their outcomes. Both forms of transactions alter the risk inherent in relations of mutual dependence, but in different ways.
In theory, all three forms of exchange – indirect, direct negotiated, and direct reciprocal – differ from one another on a set of dimensions that potentially affect the development of social solidarity. These dimensions comprise the structure of reciprocity in social exchange. [ 28 ] Theory argues that while all forms of exchange are characterized by some type of reciprocity, the structure of reciprocity varies on two key dimensions that affect the social solidarity or integrative bonds that develop between actors:
Structure of reciprocity can affect exchange in a more fundamental way, through its implications for actors' incentives. Generalized reciprocity is a way of "organizing" an ongoing process of "interlocked behaviors" where one person's behavior depends on another's, whose behavior in turn depends on another's, the process forming a chain reaction. For generalized exchange to emerge, individuals must overcome the temptation to receive without contributing and instead engage in sharing (cooperative) behavior. [ 29 ] Once the sharing begins, the overall collective good can increase as more individuals contribute more goods (with high jointness of supply). As group size grows in an organization, individual information preferences are more likely to be met through diversity.
Individuals should be encouraged to make altruistic contributions to a collective good for generalized exchange to emerge. Empirical studies show that altruistic behavior is a natural aspect of social interaction. [ 30 ] [ 31 ] Individuals donate blood and organs at some personal cost with no direct benefits. When contributions are also rewarded, contributing and cooperation become more attractive regardless of the decisions of others. There are incentives to motivate sharing knowledge and helping others in organizations, such as formal participation quotas, which make helping and giving an enforceable requirement with guaranteed rewards. Such incentives do not specify who helps whom; that is left discretionary. Individuals are free to choose whom to help, and these choices can vary from helping only those that have helped the individual in the past (direct reciprocity), to helping those that have helped others while withholding help from those that have not. Incentives have been successful as an economic solution to free-riding because they offer additional motivations that make cooperation rational. [ 32 ] [ 33 ]
Individuals may gain some intrinsic satisfaction from the popularity of their own contribution in the form of psychological efficacy, causing them to want to share more in the future. Additionally, individuals may participate in giving social approval by rating the popularity of others' contributions. Giving and receiving social approval thus both influence behavior. Individuals may cooperate (or share) because they care about receiving social approval [ 29 ] and/or because they want to give social approval to others' contributions. Social approval is a combination of these two processes.
Reputation is regarded as an incentive in generalized reciprocity. The evolutionary theorists Nowak and Sigmund (1998) [ 34 ] regard reputation as a person's image. In organizations this is termed a "professional image", namely, others' perceptions of individuals within organizations, with a focus on helpfulness. The same authors also show in their simulation study that a strategy of rewarding reputation produces an evolutionarily stable system of generalized reciprocity. The same idea is echoed by economic experiments in which the rewarding of reputation is shown to yield generalized reciprocity. [ 35 ] Individuals with reputations for helpfulness are more likely to be helped, in contrast to those individuals without such a reputation. Real-life examples show that in situations where reputations for helpfulness are rewarded, individuals are prone to helping others so that they will in return be rewarded and helped in the future. The incentive here is to be helped in the future, which is why individuals engage in building reputation. Experimental research on rewarding reputation also shows that reputations in organizations too are built with such incentives, and through consistent demonstration of "distinctive and salient behaviors on repeated occasions, or over time". [ 36 ] The consequences of such actions are the following: a good reputation results in more autonomy, power, and career success. [ 37 ] [ 38 ]
Rewarding reputation is time contingent. It is taxing for individuals to keep track of what everyone else does and to monitor whose rate of helping is higher. This makes rewarding reputation tied to the recency of helpfulness. Individuals are found to make decisions based on the recent reputation of others rather than their long-term reputation. The reward system of reciprocity is based on "what have you done for us lately?", and the less recent one's deeds are, the less likely it is for these individuals to receive help in return. [ 37 ]
To encourage reciprocity and incentivize individuals to engage in such prosocial behavior, organizations are shown to enforce norms of asking for help, giving help, and reciprocating help by organizing meetings and informal practices. Supervisors are also encouraged to use symbolic or financial rewards to incentivize helping. Google, for example, uses a peer-to-peer bonus system that empowers employees to express gratitude and reward helpful behavior with token payments. [ 39 ] Additionally, they use a paying-it-forward incentive: those individuals that receive such bonuses are given additional funds that may only be paid forward to recognize a third employee. To encourage knowledge exchange, large organizations employ knowledge-sharing communities in which members post and respond to requests for help around work-related problems.
Reputational concerns were found to be the driving force behind the effect of observability. Moreover, this effect was substantially stronger in settings where individuals were more likely to have future interactions with those who observed them and when participation was framed as a public good.
Individuals will conditionally cooperate based on what they believe others are doing in a public goods situation. [ 40 ] From a game-theoretical perspective, there is no strategic advantage to matching one's cooperation level to the rest of the group when others are already cooperating at a relatively high level. [ 29 ] [ 41 ]
Future-oriented behavior deals with the tendency for individuals to modify their behavior based on what they believe will happen in the future. Such behavior shares a similar logic to the game-theoretical approach to conditional cooperation. [ 29 ] Individuals plan strategically their actions in terms of looking forward to future interactions. [ 42 ]
In reactive behavior, individuals tend to orient themselves towards the average behavior of other group members. Such behavior is closely tied to the principle of reciprocity. When individuals can see the overall cooperation level of the participants, they can stimulate a normative response to reciprocate by cooperating as well. [ 29 ] In addition, when contributions are observable, individuals can also signal their commitment by making small contributions without taking too much risk at once. [ 43 ] Observing cooperative behavior also imposes obligation on an individual to also cooperate. Decisions to cooperate become more impersonal. Individuals can experience at least a minimal amount of satisfaction from being a cooperator because they feel like they are part of a larger group and organization. [ 29 ] [ 42 ]
Exchange, generalized or otherwise, is an inherently social construct. Social dynamics set the stage for an exchange to occur, between whom the exchange occurs, and what will happen after the exchange occurs. For example, exchange has been shown to have effects on an individual's reputation and standing.
Some have conceived of indirect reciprocity as being a result of direct reciprocity that is observed , as direct exchanges that are not observed by others cannot possibly increase the standing of an individual to an entire group except through piecemeal methods such as gossip. Through observation, it becomes clearer to a group who gives or reciprocates and who does not; in this way good actions can be rewarded or encouraged, and bad actions can be sanctioned through refusal to give.
Exchange is also a human process in the sense that it is not always carried out or perceived correctly. Individuals in groups can hold faulty perceptions of other actors which will lead them to take sanctioning action; this can also in turn lead to a lowering of the standing of that individual, if the group perceives the receiving actor undeserving of sanction. In a similar way, sometimes individuals may intend to take a certain action and fail to do so, either through human error (e.g. forgetfulness) or due to circumstances that prevent them from doing so. For these reasons, there will always be a certain degree of error in the way that exchange systems work.
One hypothesized outcome of exchange processes is social solidarity. Through continued exchange between many different members of a group, and the continuous attempt to sanction and eliminate self-serving behavior, a group can become tightly-connected to the point that an individual identifies with the group. This identification could then lead an individual to protect or aid the group even at one's own cost or without any promise of benefit in return.
The question of why society needs exchange in the first place has been addressed both anthropologically and sociologically.
Sociologists use the term solidarity to explain exchanges. Emile Durkheim differentiates solidarity into mechanical and organic solidarity according to the type of society.
Mechanical solidarity is associated with pre-modern society, where individuals are homogeneous and cohesion arises mainly from shared values, lifestyles and work. Kinship connects individuals inside the society, hence exchange exists solely for survival purposes because of the low level of role specialization. This makes the solidarity mechanical, as exchange appears only when someone needs others. Such exchange may fall under exchange theory, with reciprocity in the form of status or reputation, as well as under generalized exchange theory, where, out of the expectations of the homogeneous group, reciprocity starts from the recipient helping a third party and ends as the cycle is closed. That is to say, exchange theory argues that in a primitive society, reciprocity may be accompanied by an enhanced status or reputation while the sole intention of such exchange is survival. Generalized exchange theory holds that there is a social consensus, rooted in commonly shared values or lifestyles, that exchange does not require an immediate reciprocity but promises another activity, which, after several iterations, closes the cycle. Another important presumption in mechanical solidarity is the low level of role specialization: an individual may ask a random other, not necessarily an expert, for help, and this random other is capable of providing the expected service.
Organic solidarity in modern society differs from the above-mentioned mechanical one. Modern society steps out of the small, kinship-based town and integrates heterogeneous individuals that vary in their education, social class, religion, nation and race. Individuals stay distant from others psychologically and sociologically but meanwhile depend upon each other for their own well-being. Generalized exchange is hence more complicated as a result of the longer chain in the cycle and perhaps temporal expansion. Different from a primitive society with a low level of role specialization, modern society is endowed with high specialization, which places weight on the process of searching for the correct counterpart when an exchange relation starts. When this search fails to find a legitimate counterpart, the emerging exchange relation may die before birth. Therefore, the mechanism of organic solidarity is more complicated, as the emergence, transmission, driving mechanism and end point need careful review.
Anthropologists, quite different from sociologists, study solidarity from the standpoint of structural functionalism. While sociologists view individuals as engaged in exchange due to social factors, anthropologists, such as Levi-Strauss, believe exchange is more a matter of solidarity maintaining a well-functioning society than of socially constrained individuals' own needs. The society is believed to be an organism and all parts function together for the stability of the organism. Individuals work for the society and, reciprocally, they receive, say, philanthropic, materialistic, and social returns from the society. This is similar to the modern society described by sociologists above, but the point here is that solidarity is the cause of individual activities, which means individuals' activities are dominated by the idea of solidarity, while sociologists' modern society reaches solidarity as a result of individuals' self-oriented activities, where solidarity is observed after selfish individuals focus on their own interests.
However, Malinowski studied the kula ring exchange in the Trobriand Islands and concluded that individuals participate in the ritual or ceremony out of their own needs, in that they feel satisfied as a part of the society. This could also be interpreted religiously: individuals hold the society above their social roles, hence they actively become involved in the ceremony and reciprocally benefit psychologically and socially from being a part of the holiness, which, in a way, agrees with the idea of solidarity as the cause.
The unilateral character of generalized exchange, which lacks one-to-one correspondence between what two parties directly give to and take from one another, distinguishes it from direct or restricted exchange. [ 4 ] [ 2 ] Ekeh (1974), [ 16 ] a pioneering scholar in exchange theory, argues that generalized exchange is more powerful than restricted forms of exchange in generating morality and promoting mutual trust and solidarity among the participants. This view, however, was found to be too optimistic or problematic by later scholars, given that it ignores the social dilemmas created by the exchange structure: generalized exchange paves the way for exploitation by rational self-interested members, creating a free rider problem. This social dilemma needs to be resolved for generalized exchange systems to emerge and survive.
Despite the risk of free riding, early exchange theorists proposed several explanations for why such exchange systems exist. Among others, the altruistic motivation of members and the existence of collective norms and incentives that regulate the behaviour of returning resources to any member are the most discussed ideas. [ 2 ] However, these approaches do not guarantee the maintenance of an exchange system, since compliance is facilitated by monitoring, [ 44 ] which does not exist in most cases. Subsequent social theorists proposed more feasible solutions that prevent the free rider problem in generalized exchange systems. These solutions are described below using the terminology adapted from Takahashi (2000). [ 2 ]
The Tit-for-Tat strategy was originally introduced in game theory in order to provide a solution to the Prisoner's dilemma by promoting mutual cooperation between two actors. This strategy has been adapted to bilateral and network relations, and in both cases the strategy works only in restricted, rather than generalized, exchange, because it involves bilateral resource giving in either situation. In an effort to propose a strategy to solve the social dilemma aspect of generalized exchange, Yamagishi and Cook (1993) [ 4 ] analyzed the effect of network structures on group members' decisions. Relying on Ekeh's (1974) [ 16 ] approach, they distinguish two forms of generalized exchange: "group-generalized" and "network-generalized". In the first type, group members pool their resources and then receive benefits that are generated by the pooling. In the second, each member provides resources to another member in the network who does not return benefits directly to the provider, but the provider receives benefits from some other member in the network. They claim that group-generalized exchange involves a free rider problem, as it is rational for any member to receive resources from the pool without contributing. On the other hand, network-generalized exchange limits the occurrence of this problem, as it is easier to detect a free-riding member and punish them by withholding resources until they start to give. Laboratory experiments supported these predictions and showed that network-generalized exchange promotes a higher level of participation (or cooperation) than the group-generalized exchange structure. They also show that trust is an important factor for the survival of both systems and has a stronger effect on cooperation in the network-generalized structure than in the group-generalized structure.
In another study, the biologists Boyd and Richerson (1989) [ 45 ] presented a model of the evolution of indirect reciprocity and supported the idea that a downward tit-for-tat strategy helps sustain network-generalized exchange structures. They also claim that as group size increases, the positive effect of this strategy on the possibility of cooperation diminishes. In summary, these studies show that for a generalized exchange system to emerge and survive, a fixed form of network that consists of unidirectional paths is required. When this is available, adopting the downward tit-for-tat strategy is profitable for all members and free riding is not possible. However, according to Takahashi (2000), [ 2 ] the requirement of a fixed network structure is a major limitation, since many real-world generalized exchange systems do not represent a simple closed chain of resource giving.
Takahashi and Yamagishi proposed pure-generalized exchange as a situation where there is no fixed structure. It is regarded as more general, flexible and less restricted compared to previous models. In essence, pure-generalized exchange is network-generalized exchange with a choice of recipients, where each actor gives resources to recipient(s) chosen unilaterally. However, this model also comes with a limitation: the necessity of a criterion that represents a collective sense of fairness among the members. By easing the limitations of the models described above, Takahashi (2000) proposed a more general solution to the free rider problem. This new model is summarized below.
The new model proposed by Takahashi (2000) [ 2 ] solved the free rider problem in generalized exchange while imposing as few particular social structures as possible. He adapted the pure-generalized exchange situation with a novel strategy: fairness-based selective giving. In this strategy, actors select recipients whose behaviors satisfy their own criteria of fairness, which makes pure-generalized exchange possible. He showed that this argument can hold in two evolutionary experiments; in particular, pure-generalized exchange can emerge even in a society in which members have different standards of fairness. Thus, altruism and a collective sense of fairness are no longer required in such a setting. Why self-interested actors give resources unilaterally has been interpreted as reflecting the possibility that giving increases profits through participation in exchange.
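A toy simulation gives the flavor of fairness-based selective giving; the sketch below is our own construction, not Takahashi's actual model, and all parameters are invented.

```python
# A toy sketch (our construction, not Takahashi's actual model) of
# fairness-based selective giving: each agent gives one unit per round, but
# only to others whose observed rate of past giving meets the agent's own
# private fairness threshold. In round 1, before any history, all qualify.
import random

N, ROUNDS = 20, 50
threshold = [random.uniform(0.2, 0.8) for _ in range(N)]
gave = [0] * N                                  # count of rounds agent i gave

for t in range(1, ROUNDS + 1):
    rates = [gave[i] / t for i in range(N)]     # observed giving rates so far
    for i in range(N):
        bar = threshold[i] if t > 1 else 0.0    # no history yet in round 1
        fair = [j for j in range(N) if j != i and rates[j] >= bar]
        if fair:
            recipient = random.choice(fair)     # recipient chosen unilaterally
            gave[i] += 1

print(sum(gave) / (N * ROUNDS))                 # overall rate of giving
```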
Exchange processes have been studied in a variety of empirical contexts. Much of the early generalized exchange work revolved around tribal settings. For example, Malinowski's Trobriand Island research serves as a foundational work for the study of exchange. The classic example of the Kula ring showed a system of exchange formed cyclically, where a giver would receive after a given product had gone through a full circle of receivers. Similar tribal research includes studies of the inhabitants of Groote Eylandt and of matrilineal cross-cousin marriage.
These early studies have provoked the study of reciprocity and exchange in modern settings as well. For example, with technology comes exchange through information sharing in large, anonymous online communities of software developers. [ 46 ] Even within academia, exchange has been studied through prosocial behaviors in a group of MBA students. [ 37 ] Takahashi (2000) provided several examples of where generalized exchange can be observed in real life. [ 2 ] Aiding a stranded driver alongside a road speaks to a societal-level feeling of duty to help others based on past experience or the future expectation of needing help. Such a duty may also serve as the motivation for donating blood to unknown or indiscriminate receivers. Academically, the reviewers of journal articles do so without payment in order to contribute to the system of publication, in the knowledge that others will do so, or have already done so, for their papers. In addition to qualitative and ethnographic research, scholars have also studied generalized exchange through targeted lab experiments as well as programmed simulations. Generalized exchange has been further studied through real-life experiences, such as participation in public-good conservation programs when one is recognized for doing so as opposed to when one's name remains anonymous. [ 44 ]
Generalized exchange structures can be statistically represented by blockmodels , which is an effective method for characterizing the pattern of multiple type and asymmetric social interactions in complex networks. [ 47 ] | https://en.wikipedia.org/wiki/Generalized_exchange |
Generalized expected utility is a decision-making metric based on any of a variety of theories that attempt to resolve some discrepancies between expected utility theory and empirical observations , concerning choice under risky (probabilistic) or uncertain circumstances. Given its motivations and approach, generalized expected utility theory may properly be regarded as a subfield of behavioral economics , but it is more frequently located within mainstream economic theory .
The expected utility model developed by John von Neumann and Oskar Morgenstern dominated decision theory from its formulation in 1944 until the late 1970s, not only as a prescriptive , but also as a descriptive model, despite powerful criticism from Maurice Allais and Daniel Ellsberg who showed that, in certain choice problems, decisions were usually inconsistent with the axioms of expected utility theory. These problems are usually referred to as the Allais paradox and Ellsberg paradox .
Beginning in 1979 with the publication of the prospect theory of Daniel Kahneman and Amos Tversky, a range of generalized expected utility models were developed with the aim of resolving the Allais and Ellsberg paradoxes, while maintaining many of the attractive properties of expected utility theory. Important examples were anticipated utility theory, later referred to as rank-dependent utility theory, [ 1 ] weighted utility (Chew 1982), and expected uncertain utility theory. [ 2 ] A general representation, using the concept of the local utility function, was presented by Mark J. Machina. [ 3 ] Since then, generalizations of expected utility theory have proliferated, but probably the most frequently used model nowadays is cumulative prospect theory, a rank-dependent development of prospect theory introduced in 1992 by Daniel Kahneman and Amos Tversky.
| https://en.wikipedia.org/wiki/Generalized_expected_utility
In analytical mechanics (particularly Lagrangian mechanics ), generalized forces are conjugate to generalized coordinates . They are obtained from the applied forces F i , i = 1, …, n , acting on a system that has its configuration defined in terms of generalized coordinates. In the formulation of virtual work , each generalized force is the coefficient of the variation of a generalized coordinate.
Generalized forces can be obtained from the computation of the virtual work , δW , of the applied forces. [ 1 ] : 265
The virtual work of the forces, $\mathbf{F}_i$, acting on the particles $P_i$, $i = 1, \ldots, n$, is given by
$$\delta W = \sum_{i=1}^{n} \mathbf{F}_i \cdot \delta\mathbf{r}_i,$$
where $\delta\mathbf{r}_i$ is the virtual displacement of the particle $P_i$.

Let the position vectors of each of the particles, $\mathbf{r}_i$, be a function of the generalized coordinates, $q_j$, $j = 1, \ldots, m$. Then the virtual displacements $\delta\mathbf{r}_i$ are given by
$$\delta\mathbf{r}_i = \sum_{j=1}^{m} \frac{\partial\mathbf{r}_i}{\partial q_j}\,\delta q_j, \quad i = 1, \ldots, n,$$
where $\delta q_j$ is the virtual displacement of the generalized coordinate $q_j$.

The virtual work for the system of particles becomes
$$\delta W = \mathbf{F}_1 \cdot \sum_{j=1}^{m} \frac{\partial\mathbf{r}_1}{\partial q_j}\,\delta q_j + \dots + \mathbf{F}_n \cdot \sum_{j=1}^{m} \frac{\partial\mathbf{r}_n}{\partial q_j}\,\delta q_j.$$
Collect the coefficients of $\delta q_j$ so that
$$\delta W = \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial\mathbf{r}_i}{\partial q_1}\,\delta q_1 + \dots + \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial\mathbf{r}_i}{\partial q_m}\,\delta q_m.$$

The virtual work of a system of particles can be written in the form
$$\delta W = Q_1\,\delta q_1 + \dots + Q_m\,\delta q_m,$$
where
$$Q_j = \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial\mathbf{r}_i}{\partial q_j}, \quad j = 1, \ldots, m,$$
are called the generalized forces associated with the generalized coordinates $q_j$, $j = 1, \ldots, m$.
In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the velocities of the system. For the n particle system, let the velocity of each particle $P_i$ be $\mathbf{V}_i$; then the virtual displacement $\delta\mathbf{r}_i$ can also be written in the form [ 2 ]
$$\delta\mathbf{r}_i = \sum_{j=1}^{m} \frac{\partial\mathbf{V}_i}{\partial\dot q_j}\,\delta q_j, \quad i = 1, \ldots, n.$$

This means that the generalized force, $Q_j$, can also be determined as
$$Q_j = \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial\mathbf{V}_i}{\partial\dot q_j}, \quad j = 1, \ldots, m.$$
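As a concrete sketch of these formulas, the generalized force for a planar pendulum with the single generalized coordinate θ can be computed symbolically; the setup below (gravity acting on a bob at position r) is our illustrative example, not one from this article.

```python
# A small sympy sketch of Q_j = F . dr/dq_j for a planar pendulum with a
# single generalized coordinate theta; the setup is our illustrative example.
import sympy as sp

m, g, L = sp.symbols('m g L', positive=True)
theta = sp.symbols('theta')

r = sp.Matrix([L * sp.sin(theta), -L * sp.cos(theta)])  # position of the bob
F = sp.Matrix([0, -m * g])                              # applied gravity force

Q_theta = F.dot(r.diff(theta))                          # generalized force
print(sp.simplify(Q_theta))                             # -L*g*m*sin(theta)
```

The result, −mgL sin θ, is the familiar restoring moment of the pendulum about its pivot.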
D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force (apparent force), called D'Alembert's principle. The inertia force of a particle, $P_i$, of mass $m_i$ is
$$\mathbf{F}_i^* = -m_i\mathbf{A}_i, \quad i = 1, \ldots, n,$$
where $\mathbf{A}_i$ is the acceleration of the particle.

If the configuration of the particle system depends on the generalized coordinates $q_j$, $j = 1, \ldots, m$, then the generalized inertia force is given by
$$Q_j^* = \sum_{i=1}^{n} \mathbf{F}_i^* \cdot \frac{\partial\mathbf{V}_i}{\partial\dot q_j}, \quad j = 1, \ldots, m.$$

D'Alembert's form of the principle of virtual work yields
$$\delta W = (Q_1 + Q_1^*)\,\delta q_1 + \dots + (Q_m + Q_m^*)\,\delta q_m.$$ | https://en.wikipedia.org/wiki/Generalized_forces
Generalized game theory is an extension of game theory incorporating social theory concepts such as norm , value, belief, role, social relationship, and institution. The theory was developed by Tom R. Burns , Anna Gomolinska, and Ewa Roszkowska but has not had great influence beyond these immediate associates. The theory seeks to address certain perceived limitations of game theory by formulating a theory of rules and rule complexes and to develop a more robust approach to socio-psychological and sociological phenomena.
In generalized game theory, games are conceptualized as rule complexes; a rule complex is a set containing rules and/or other rule complexes. The rules may be imprecise, inconsistent, and even dynamic. Distinctions in the properties and functions of different types of rules allow the rules themselves to be analyzed in complex ways, and thus the models of the theory more closely represent the relationships and institutions investigated in the social sciences.
The ways in which the rules may be changed is developed within the context of generalized game theory based on the principle of rule revision and game restructuring. These types of games are referred to as open games, that is, games which are open to transformation. Games which have specified, fixed players, fixed preference structures, fixed optimization procedures, and fixed action alternatives and outcomes are called closed games (characteristic of most classical game theory models).
Because its premises derive from social theory, generalized game theory emphasizes and provides cultural and institutional tools for game conceptualization and analysis, [ 1 ] what Granovetter (1985) refers to as the social embeddedness of interaction and social and economic processes. [ 2 ] This is in contrast to conceptualizations of games consisting of actors which are autonomous utility maximizers. Further, the modeling of the actors themselves in generalized game theory is especially open to the use of concepts such as incomplete information and bounded rationality.
Proponents of generalized game theory have advocated the application of the theory to reconceptualizing individual and collective decision-making, resolutions of the prisoners' dilemma game, agent-based modeling , fuzzy games , conflict resolution procedures, challenging and providing robust and normatively grounded alternatives to Nash equilibrium and Pareto optimality , among others.
A key aspect of actors decision making in generalized game theory is based on the concept of judgment. Several types of judgment could be relevant, for instance value judgment, factual judgment, and action judgment. In the case of action judgment, the actor seeks to take the course of action offered by the rules of the game which most closely fit the values held by the actor (where the values are a sub-rule complex of the game).
Predicting how actors will react under these sub-rules is hypothesised to be more accurate than forming traditional game-theoretic complexes. Armstrong (2002) found that when actors hold differing beliefs and roles within a sub-game, formal game-theoretic Nash equilibria become less reliable (generalised game theory has received less scrutiny due to its lack of notoriety). [ 3 ]
Even the method by which the actor calculates closeness of fit can be controlled by the actor's values (for example, an actor might use a speedier algorithm, or a more far-sighted one). Each actor has a judgment operator by which the actor can create a preference order over the perceived qualities of possible outcomes, based on satisfying the condition that the qualities of the outcomes can roughly be said to be sufficiently similar to the qualities of the actor's primary values or norms. Thus, in generalized game theory, each actor's judgment calculus includes the institutional context of the game. [ 4 ]
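A toy reading of such a judgment operator, our construction rather than Burns et al.'s formalism, might rank action outcomes by closeness of fit to the actor's value vector:

```python
# A toy reading of a judgment operator (our construction, not Burns et al.'s
# formalism): rank action outcomes by closeness of fit to the actor's values.
import math

values = {"fairness": 0.9, "gain": 0.4}          # the actor's primary values
outcomes = {
    "cooperate": {"fairness": 0.8, "gain": 0.5},
    "defect":    {"fairness": 0.1, "gain": 0.9},
}

def fit(qualities):
    # negative Euclidean distance: larger means a closer fit to the values
    return -math.dist([qualities[k] for k in values], list(values.values()))

preference_order = sorted(outcomes, key=lambda a: fit(outcomes[a]), reverse=True)
print(preference_order)                          # ['cooperate', 'defect']
```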
A general or common game solution is a strategy or interaction order for the agents which satisfies or realizes the relevant norms and values of the players. This should lead to a state that is acceptable by the game players, and is not necessarily a normative equilibrium, but represents the "best result attainable under the circumstances". [ 4 ]
Solutions may be reached through a sequence of proposed alternatives, and when the actors find the ultimate solution acceptable, the proposed solutions may be said to be convergent. Roszkowska and Burns (2005) showed that not every game has a common solution, and that divergent proposals may arise. [ 5 ] This may result in no equilibrium being found, and stems from dropping the assumptions required for the existence of a Nash equilibrium, namely that the game be finite or that the game have complete information. Another possibility is the existence of a rule which allows a dictator to force an equilibrium. The rules which make up the norms of the game are one way of resolving the problem of choosing between multiple equilibria, such as those arising in the so-called folk theorem.
Generalization in psychological terms is the measure of how a theory holds up when applied in a non-experimental environment. Generalised game theory takes elements from this quality and applies them to game theories. Many traditional Nash equilibria can be applied to social and psychological interactions through generalization. [ 6 ]
When Roszkowska and Burns first discussed the notion of generalised game theory, it stemmed from a need to make game theory, which had been more useful in mathematics and economics than in describing psychological phenomena, more applicable to the real world. Traditional notions of best choice and optimal strategy are replaced by consequentialism and instrumental rationality when applied in less abstract contexts, such as the prisoner's dilemma, dictator game and public goods game.
In open environments, actors can transform game rules to create "open games". [ 7 ] For example, if the actors concur that the consequences of their actions are not ideal, they may introduce cooperation sub-rules when there is no one adjudicating the scenario. Depending on the differing statuses and dispositions of the actors, game transformation can produce an asymmetric set of rules, resulting in a non-optimal outcome. When game theories are generalized, these uncertainty factors are accounted for in the formation of interaction patterns, but role-playing is often required to understand what optimal solutions will result.
Different observable interaction patterns will create different normative equilibria. [ 6 ]
Interaction patterns can combine several such orientations, forming value judgments that support divergent or contradictory outcomes.
In the example of the two-player prisoner's dilemma , for instance, proponents of generalized game theory are critical of the rational Nash equilibrium wherein both actors defect because rational actors, it is argued, would actually be predisposed to work out coordinating mechanisms in order to achieve optimum outcomes. Although these mechanisms are not usually included in the rules of the game, generalized game theorists argue that they do exist in real life situations.
This is because there exists in most interaction situations a social relationship between the players characterized by rules and rule complexes. This relationship may be one of, for instance, solidarity (which results in the Pareto optimal outcome), adversary (which results in the Nash equilibrium), or even hierarchy (by which one actor sacrifices their own benefits for the other's good). Some values, such as pure rivalry, are seen as nonstable because both actors would seek asymmetric gain, and thus would need to either transform the game or seek another value to attempt to satisfy.
If no communication mechanism is given (as is usual in the prisoner's dilemma), the operative social relationship between the actors is based on the actors own beliefs about the other (perhaps as another member of the human race, solidarity will be felt, or perhaps as an adversary). This illustrates the principle of game transformation, which is a key element of the theory. | https://en.wikipedia.org/wiki/Generalized_game_theory |
In coding theory , generalized minimum-distance (GMD) decoding provides an efficient algorithm for decoding concatenated codes , which is based on using an errors -and- erasures decoder for the outer code .
A naive decoding algorithm for concatenated codes cannot be optimal, because it does not take into account the information that maximum likelihood decoding (MLD) gives. In other words, in the naive algorithm, inner received codewords are treated the same regardless of the difference between their Hamming distances. Intuitively, the outer decoder should place higher confidence in symbols whose inner encodings are close to the received word. David Forney in 1966 devised a better algorithm, called generalized minimum distance (GMD) decoding, which makes better use of this information. The method works by measuring the confidence of each received codeword and erasing symbols whose confidence is below a desired value. GMD decoding was one of the first examples of soft-decision decoders. We will present three versions of the GMD decoding algorithm. The first two will be randomized algorithms, while the last one will be a deterministic algorithm.
Consider the received word y = ( y 1 , … , y N ) ∈ [ q n ] N {\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{N})\in [q^{n}]^{N}} which was corrupted by a noisy channel . The following is the algorithm description for the general case. In this algorithm, we can decode y by declaring an erasure at every bad position and running the errors-and-erasures decoding algorithm for C out {\displaystyle C_{\text{out}}} on the resulting vector.
Randomized_Decoder Given : y = ( y 1 , … , y N ) ∈ [ q n ] N {\displaystyle \mathbf {y} =(y_{1},\dots ,y_{N})\in [q^{n}]^{N}} . For every 1 ≤ i ≤ N {\displaystyle 1\leq i\leq N} : (1) compute y i ′ = M L D C in ( y i ) {\displaystyle y_{i}'=MLD_{C_{\text{in}}}(y_{i})} ; (2) compute ω i = min ( Δ ( C in ( y i ′ ) , y i ) , d 2 ) {\displaystyle \omega _{i}=\min \left(\Delta (C_{\text{in}}(y_{i}'),y_{i}),{\tfrac {d}{2}}\right)} ; (3) with probability 2 ω i d {\displaystyle {\tfrac {2\omega _{i}}{d}}} set y i ″ ← ? {\displaystyle y_{i}''\leftarrow ?} (an erasure), and otherwise set y i ″ = y i ′ {\displaystyle y_{i}''=y_{i}'} ; (4) run the errors-and-erasures algorithm for C out {\displaystyle C_{\text{out}}} on y ″ = ( y 1 ″ , … , y N ″ ) {\displaystyle \mathbf {y} ''=(y_{1}'',\ldots ,y_{N}'')} .
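A minimal Python sketch of this randomized decoder may make the flow of steps concrete. It assumes two black boxes that are not specified by the source: mld_inner, a maximum likelihood decoder for the inner code that returns the nearest inner codeword, and eed_outer, an errors-and-erasures decoder for the outer code that accepts None as an erasure; both names are placeholders rather than a real library API.

import random

def gmd_randomized_decode(y, mld_inner, eed_outer, d):
    # y: list of N received inner words; d: minimum distance of the inner code.
    def hamming(a, b):
        return sum(u != v for u, v in zip(a, b))
    y2 = []
    for yi in y:
        ci = mld_inner(yi)                        # step 1: inner MLD
        wi = min(hamming(ci, yi), d / 2)          # step 2: confidence w_i
        if random.random() < 2 * wi / d:          # step 3: erase with prob 2w_i/d
            y2.append(None)
        else:
            y2.append(ci)
    return eed_outer(y2)                          # step 4: outer decoding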
Theorem 1. Let y be a received word such that there exists a codeword c = ( c 1 , ⋯ , c N ) ∈ C out ∘ C in ⊆ [ q n ] N {\displaystyle \mathbf {c} =(c_{1},\cdots ,c_{N})\in C_{\text{out}}\circ {C_{\text{in}}}\subseteq [q^{n}]^{N}} such that Δ ( c , y ) < D d 2 {\displaystyle \Delta (\mathbf {c} ,\mathbf {y} )<{\tfrac {Dd}{2}}} . Then the deterministic GMD algorithm outputs c {\displaystyle \mathbf {c} } .
Note that a naive decoding algorithm for concatenated codes can correct up to D d 4 {\displaystyle {\tfrac {Dd}{4}}} errors.
Remark. If 2 e ′ + s ′ < D {\displaystyle 2e'+s'<D} , then the algorithm in Step 2 will output c {\displaystyle \mathbf {c} } . The lemma above says that in expectation, this is indeed the case. Note that this is not enough to prove Theorem 1 , but can be crucial in developing future variations of the algorithm.
Proof of lemma 1. For every 1 ≤ i ≤ N , {\displaystyle 1\leq i\leq N,} define e i = Δ ( y i , c i ) . {\displaystyle e_{i}=\Delta (y_{i},c_{i}).} This implies that
∑ i = 1 N e i < D d 2 ( 1 ) {\displaystyle \sum _{i=1}^{N}e_{i}<{\frac {Dd}{2}}\qquad \qquad (1)} Next for every 1 ≤ i ≤ N {\displaystyle 1\leq i\leq N} , we define two indicator variables :
X i ? = 1 ⇔ y i ″ = ? X i e = 1 ⇔ C in ( y i ″ ) ≠ c i and y i ″ ≠ ? {\displaystyle {\begin{aligned}X{_{i}^{?}}=1&\Leftrightarrow y_{i}''=?\\X{_{i}^{e}}=1&\Leftrightarrow C_{\text{in}}(y_{i}'')\neq c_{i}\ {\text{and}}\ y_{i}''\neq ?\end{aligned}}} We claim that we are done if we can show that for every 1 ≤ i ≤ N {\displaystyle 1\leq i\leq N} :
E [ 2 X i e + X i ? ] ⩽ 2 e i d ( 2 ) {\displaystyle \mathbb {E} \left[2X{_{i}^{e}+X{_{i}^{?}}}\right]\leqslant {2e_{i} \over d}\qquad \qquad (2)} Clearly, by definition
e ′ = ∑ i X i e and s ′ = ∑ i X i ? . {\displaystyle e'=\sum _{i}X_{i}^{e}\quad {\text{and}}\quad s'=\sum _{i}X_{i}^{?}.} Further, by the linearity of expectation, we get
E [ 2 e ′ + s ′ ] ⩽ 2 d ∑ i e i < D . {\displaystyle \mathbb {E} [2e'+s']\leqslant {\frac {2}{d}}\sum _{i}e_{i}<D.} To prove (2) we consider two cases: i {\displaystyle i} -th block is correctly decoded ( Case 1 ), i {\displaystyle i} -th block is incorrectly decoded ( Case 2 ):
Case 1: ( c i = C in ( y i ′ ) ) {\displaystyle (c_{i}=C_{\text{in}}(y_{i}'))}
Note that if y i ″ = ? {\displaystyle y_{i}''=?} then X i e = 0 {\displaystyle X_{i}^{e}=0} , and Pr [ y i ″ = ? ] = 2 ω i d {\displaystyle \Pr[y_{i}''=?]={\tfrac {2\omega _{i}}{d}}} implies E [ X i ? ] = Pr [ X i ? = 1 ] = 2 ω i d , {\displaystyle \mathbb {E} [X_{i}^{?}]=\Pr[X_{i}^{?}=1]={\tfrac {2\omega _{i}}{d}},} and E [ X i e ] = Pr [ X i e = 1 ] = 0 {\displaystyle \mathbb {E} [X_{i}^{e}]=\Pr[X_{i}^{e}=1]=0} .
Further, by definition we have
ω i = min ( Δ ( C in ( y i ′ ) , y i ) , d 2 ) ⩽ Δ ( C in ( y i ′ ) , y i ) = Δ ( c i , y i ) = e i {\displaystyle \omega _{i}=\min \left(\Delta (C_{\text{in}}(y_{i}'),y_{i}),{\tfrac {d}{2}}\right)\leqslant \Delta (C_{\text{in}}(y_{i}'),y_{i})=\Delta (c_{i},y_{i})=e_{i}} Case 2: ( c i ≠ C in ( y i ′ ) ) {\displaystyle (c_{i}\neq C_{\text{in}}(y_{i}'))}
In this case, E [ X i ? ] = 2 ω i d {\displaystyle \mathbb {E} [X_{i}^{?}]={\tfrac {2\omega _{i}}{d}}} and E [ X i e ] = Pr [ X i e = 1 ] = 1 − 2 ω i d . {\displaystyle \mathbb {E} [X_{i}^{e}]=\Pr[X_{i}^{e}=1]=1-{\tfrac {2\omega _{i}}{d}}.}
Since c i ≠ C in ( y i ′ ) , e i + ω i ⩾ d {\displaystyle c_{i}\neq C_{\text{in}}(y_{i}'),e_{i}+\omega _{i}\geqslant d} . This follows from a case analysis on whether ω i = Δ ( C in ( y i ′ ) , y i ) < d 2 {\displaystyle \omega _{i}=\Delta (C_{\text{in}}(y_{i}'),y_{i})<{\tfrac {d}{2}}} holds or not.
Finally, this implies
E [ 2 X i e + X i ? ] = 2 − 2 ω i d ≤ 2 e i d . {\displaystyle \mathbb {E} [2X_{i}^{e}+X_{i}^{?}]=2-{2\omega _{i} \over d}\leq {2e_{i} \over d}.} In the following sections, we will finally show that the deterministic version of the algorithm above can do unique decoding of C out ∘ C in {\displaystyle C_{\text{out}}\circ C_{\text{in}}} up to half its design distance.
Note that, in the previous version of the GMD algorithm in step 3, we do not really need to use "fresh" randomness for each i {\displaystyle i} . Now we come up with another randomized version of the GMD algorithm that uses the same randomness for every i {\displaystyle i} . This idea leads to the algorithm below.
Modified_Randomized_Decoder Given : y = ( y 1 , … , y N ) ∈ [ q n ] N {\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{N})\in [q^{n}]^{N}} , pick θ ∈ [ 0 , 1 ] {\displaystyle \theta \in [0,1]} at random. Then for every 1 ≤ i ≤ N {\displaystyle 1\leq i\leq N} : set y i ″ ← ? {\displaystyle y_{i}''\leftarrow ?} if θ ∈ [ 0 , 2 ω i d ] {\displaystyle \theta \in \left[0,{\tfrac {2\omega _{i}}{d}}\right]} , and set y i ″ = y i ′ {\displaystyle y_{i}''=y_{i}'} otherwise; then run the errors-and-erasures algorithm for C out {\displaystyle C_{\text{out}}} on y ″ {\displaystyle \mathbf {y} ''} .
For the proof of Lemma 1 , we only use the randomness to show that
Pr [ y i ″ = ? ] = 2 ω i d . {\displaystyle \Pr[y_{i}''=?]={2\omega _{i} \over d}.} In this version of the GMD algorithm, we note that
Pr [ y i ″ = ? ] = Pr [ θ ∈ [ 0 , 2 ω i d ] ] = 2 ω i d . {\displaystyle \Pr[y_{i}''=?]=\Pr \left[\theta \in \left[0,{\tfrac {2\omega _{i}}{d}}\right]\right]={\tfrac {2\omega _{i}}{d}}.} The second equality above follows from the choice of θ {\displaystyle \theta } . The proof of Lemma 1 can also be used to show E [ 2 e ′ + s ′ ] < D {\displaystyle \mathbb {E} [2e'+s']<D} for this second version of the GMD algorithm. In the next section, we will see how to get a deterministic version of the GMD algorithm by choosing θ {\displaystyle \theta } from a polynomially sized set as opposed to the current infinite set [ 0 , 1 ] {\displaystyle [0,1]} .
Let Q = { 0 , 1 } ∪ { 2 ω 1 d , … , 2 ω N d } {\displaystyle Q=\{0,1\}\cup \{{2\omega _{1} \over d},\ldots ,{2\omega _{N} \over d}\}} . Since for each i , ω i = min ( Δ ( y i ′ , y i ) , d 2 ) {\displaystyle i,\omega _{i}=\min(\Delta (\mathbf {y_{i}'} ,\mathbf {y_{i}} ),{d \over 2})} , we have
Q = { 0 , 1 } ∪ { q 1 , … , q m } {\displaystyle Q=\{0,1\}\cup \{q_{1},\ldots ,q_{m}\}} where q 1 < ⋯ < q m {\displaystyle q_{1}<\cdots <q_{m}} for some m ≤ ⌊ d 2 ⌋ {\displaystyle m\leq \left\lfloor {\frac {d}{2}}\right\rfloor } . Note that for every θ ∈ [ q i , q i + 1 ] {\displaystyle \theta \in [q_{i},q_{i+1}]} , step 1 of the second version of the randomized algorithm outputs the same y ″ {\displaystyle \mathbf {y} ''} . Thus, we only need to consider all possible values of θ ∈ Q {\displaystyle \theta \in Q} . This gives the deterministic algorithm below.
Deterministic_Decoder Given : y = ( y 1 , … , y N ) ∈ [ q n ] N {\displaystyle \mathbf {y} =(y_{1},\ldots ,y_{N})\in [q^{n}]^{N}} , for every θ ∈ Q {\displaystyle \theta \in Q} , repeat steps 1–4 of the modified randomized decoder with that value of θ {\displaystyle \theta } , and among the resulting candidate codewords output the one closest to y {\displaystyle \mathbf {y} } .
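Continuing the sketch above under the same assumptions (placeholder mld_inner and eed_outer black boxes), a derandomized version may look as follows; the tie-breaking and failure handling are choices of this sketch, not prescribed by the source. Here eed_outer is assumed to return a sequence of N inner codewords, or None on decoding failure.

def gmd_deterministic_decode(y, mld_inner, eed_outer, d):
    # Enumerate the finite threshold set Q instead of picking theta at random.
    def hamming(a, b):
        return sum(u != v for u, v in zip(a, b))
    inner = [mld_inner(yi) for yi in y]
    w = [min(hamming(ci, yi), d / 2) for ci, yi in zip(inner, y)]
    Q = {0.0, 1.0} | {2 * wi / d for wi in w}
    best = None
    for theta in sorted(Q):
        cand = eed_outer([None if theta <= 2 * wi / d else ci
                          for ci, wi in zip(inner, w)])
        if cand is None:
            continue                              # decoding failed for this theta
        dist = sum(hamming(cb, yb) for cb, yb in zip(cand, y))
        if best is None or dist < best[0]:
            best = (dist, cand)
    return None if best is None else best[1]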
Since every loop over steps 1–4 runs in polynomial time , the algorithm above also runs in polynomial time. Specifically, each call to an errors-and-erasures decoder correcting fewer than d D / 2 {\displaystyle dD/2} errors takes O ( d ) {\displaystyle O(d)} time. Finally, the runtime of the algorithm above is O ( N Q n O ( 1 ) + N T out ) {\displaystyle O(NQn^{O(1)}+NT_{\text{out}})} where T out {\displaystyle T_{\text{out}}} is the running time of the outer errors-and-erasures decoder. | https://en.wikipedia.org/wiki/Generalized_minimum-distance_decoding |
Generalized pencil-of-function method ( GPOF ), also known as matrix pencil method , is a signal processing technique for approximating a signal as a sum of complex exponentials and estimating their parameters. Being similar to the Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency. [ 1 ]
The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems by its transient response, building on Sarkar's past work on the original pencil-of-function method. [ 1 ] [ 2 ] The method has a plethora of applications in electrical engineering , particularly related to problems in computational electromagnetics , microwave engineering and antenna theory . [ 1 ]
A transient electromagnetic signal can be represented as: [ 3 ]
where
The same sequence, sampled with a period of T s {\displaystyle T_{s}} , can be written as follows:
The generalized pencil-of-function method estimates the optimal M {\displaystyle M} and the z i {\displaystyle z_{i}} 's. [ 4 ]
For the noiseless case, two ( N − L ) × L {\displaystyle (N-L)\times L} matrices, Y 1 {\displaystyle Y_{1}} and Y 2 {\displaystyle Y_{2}} , are produced: [ 3 ]
where L {\displaystyle L} is defined as the pencil parameter. Y 1 {\displaystyle Y_{1}} and Y 2 {\displaystyle Y_{2}} can be decomposed into the following matrices: [ 3 ]
where
[ Z 0 ] {\textstyle [Z_{0}]} and [ B ] {\textstyle [B]} are M × M {\textstyle M\times M} diagonal matrices with sequentially-placed z i {\textstyle z_{i}} and R i {\textstyle R_{i}} values, respectively. [ 3 ]
If M ≤ L ≤ N − M {\textstyle M\leq L\leq N-M} , the generalized eigenvalues of the matrix pencil
yield the poles of the system, which are λ = z i {\displaystyle \lambda =z_{i}} . Then, the generalized eigenvectors p i {\displaystyle p_{i}} can be obtained by the following identities: [ 3 ]
where the + {\displaystyle ^{+}} denotes the Moore–Penrose inverse , also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse.
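For the noiseless case, the pole-extraction step can be sketched in a few lines of numpy; the function and variable names here are illustrative, and keeping the M largest-magnitude eigenvalues assumes all true poles are nonzero.

import numpy as np

def matrix_pencil_poles(x, M, L):
    # x: samples of sum_k R_k * z_k**n, n = 0..N-1 (noiseless case);
    # L: pencil parameter with M <= L <= N - M.
    N = len(x)
    Y1 = np.array([x[i:i + L] for i in range(N - L)])          # windows 0..L-1
    Y2 = np.array([x[i + 1:i + L + 1] for i in range(N - L)])  # shifted by one
    lam = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    return lam[np.argsort(-np.abs(lam))[:M]]    # the M non-trivial eigenvalues

# Example with two known poles:
z = np.array([0.9 * np.exp(1j * 0.3), 0.7 * np.exp(-1j * 1.1)])
n = np.arange(40)
x = 2.0 * z[0] ** n + 0.5 * z[1] ** n
print(np.sort_complex(matrix_pencil_poles(x, M=2, L=15)))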
If noise is present in the system, [ Y 1 ] {\textstyle [Y_{1}]} and [ Y 2 ] {\textstyle [Y_{2}]} are combined in a general data matrix, [ Y ] {\textstyle [Y]} : [ 3 ]
where y {\displaystyle y} is the noisy data. For efficient filtering , L is chosen between N 3 {\textstyle {\frac {N}{3}}} and N 2 {\textstyle {\frac {N}{2}}} . A singular value decomposition on [ Y ] {\textstyle [Y]} yields:
In this decomposition, [ U ] {\textstyle [U]} and [ V ] {\textstyle [V]} are unitary matrices with respective eigenvectors [ Y ] [ Y ] H {\textstyle [Y][Y]^{H}} and [ Y ] H [ Y ] {\textstyle [Y]^{H}[Y]} and [ Σ ] {\textstyle [\Sigma ]} is a diagonal matrix with singular values of [ Y ] {\textstyle [Y]} . Superscript H {\textstyle H} denotes the conjugate transpose . [ 3 ] [ 4 ]
Then the parameter M {\textstyle M} is chosen for filtering. Singular values after M {\textstyle M} , which are below the filtering threshold, are set to zero; for an arbitrary singular value σ c {\textstyle \sigma _{c}} , the threshold is denoted by the following formula: [ 1 ]
σ m a x {\textstyle \sigma _{max}} and p are the maximum singular value and the number of significant decimal digits , respectively. For data accurate up to p significant digits, singular values whose ratio to σ m a x {\textstyle \sigma _{max}} is below 10 − p {\textstyle 10^{-p}} are considered noise. [ 4 ]
[ V 1 ′ ] {\textstyle [V_{1}']} and [ V 2 ′ ] {\textstyle [V_{2}']} are obtained through removing the last and first row and column of the filtered matrix [ V ′ ] {\textstyle [V']} , respectively; M {\textstyle M} columns of [ Σ ] {\textstyle [\Sigma ]} represent [ Σ ′ ] {\textstyle [\Sigma ']} . Filtered [ Y 1 ] {\textstyle [Y_{1}]} and [ Y 2 ] {\textstyle [Y_{2}]} matrices are obtained as: [ 4 ]
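A sketch of this filtered variant is given below, assuming the standard reduction in which the poles appear as the eigenvalues of the pencil built from the truncated right-singular-vector matrix; as before, the names are illustrative only.

import numpy as np

def gpof_poles(y, M, L):
    # y: noisy samples; L: pencil parameter (typically between N/3 and N/2);
    # M: model order kept after singular-value filtering.
    N = len(y)
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])   # (N - L) x (L + 1)
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    Vp = Vh.conj().T[:, :M]        # M dominant right singular vectors
    V1 = Vp[:-1, :]                # last row removed
    V2 = Vp[1:, :]                 # first row removed
    return np.linalg.eigvals(np.linalg.pinv(V1) @ V2)      # pole estimates z_i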
Prefiltering can be used to combat noise and enhance signal-to-noise ratio (SNR). [ 1 ] Band-pass matrix pencil (BPMP) method is a modification of the GPOF method via FIR or IIR band-pass filters . [ 1 ] [ 5 ]
GPOF can handle up to 25 dB SNR. For GPOF, as well as for BPMP, variance of the estimates approximately reaches Cramér–Rao bound . [ 3 ] [ 5 ] [ 4 ]
Residues of the complex poles are obtained through the least squares problem: [ 1 ]
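Once pole estimates are available, the least squares problem for the residues is a linear fit against a Vandermonde-type matrix; a brief numpy sketch, again with illustrative names:

import numpy as np

def gpof_residues(y, z):
    # Fit y[n] ~ sum_i R_i * z_i**n in the least squares sense.
    n = np.arange(len(y))
    A = z[np.newaxis, :] ** n[:, np.newaxis]    # A[n, i] = z_i**n
    R, *_ = np.linalg.lstsq(A, np.asarray(y), rcond=None)
    return R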
The method is generally used for the closed-form evaluation of Sommerfeld integrals in discrete complex image method for method of moments applications, where the spectral Green's function is approximated as a sum of complex exponentials. [ 1 ] [ 6 ] Additionally, the method is used in antenna analysis, S-parameter -estimation in microwave integrated circuits , wave propagation analysis, moving target indication , radar signal processing , [ 1 ] [ 7 ] [ 8 ] and series acceleration in electromagnetic problems. [ 9 ] | https://en.wikipedia.org/wiki/Generalized_pencil-of-function_method |
Generalized periodic epileptiform discharges ( GPED s) are very rare abnormal patterns found in EEG . [ 1 ] [ 2 ]
Based on the interval between the discharges they are classified as: [ 1 ]
| https://en.wikipedia.org/wiki/Generalized_periodic_epileptiform_discharges |
A generalized probabilistic theory (GPT) is a general framework to describe the operational features of arbitrary physical theories . A GPT must specify what kind of physical systems one can find in the lab, as well as rules to compute the outcome statistics of any experiment involving labeled preparations, transformations and measurements. The framework of GPTs has been used to define hypothetical non-quantum physical theories which nonetheless possess quantum theory 's most remarkable features, such as entanglement [ 1 ] [ 2 ] or teleportation . [ 3 ] Notably, a small set of physically motivated axioms is enough to single out the GPT representation of quantum theory. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
The mathematical formalism of GPTs has been developed since the 1950s and 1960s by many authors, and rediscovered independently several times. The earliest ideas are due to Segal [ 8 ] and Mackey, [ 9 ] although the first comprehensive and mathematically rigorous treatment can be traced back to the work of Ludwig, Dähn, and Stolz, all three based at the University of Marburg. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] While the formalism in these earlier works is less similar to the modern one, already in the early 1970s the ideas of the Marburg school had matured and the notation had developed towards the modern usage, thanks also to the independent contribution of Davies and Lewis. [ 16 ] [ 17 ] The books by Ludwig and the proceedings of a conference held in Marburg in 1973 offer a comprehensive account of these early developments. [ 18 ] [ 4 ] The term "generalized probabilistic theory" itself was coined by Jonathan Barrett in 2007, [ 19 ] based on the version of the framework introduced by Lucien Hardy. [ 5 ]
Note that some authors use the term operational probabilistic theory (OPT). [ 6 ] [ 20 ] OPTs are an alternative way to define hypothetical non-quantum physical theories, based on the language of category theory , in which one specifies the axioms that should be satisfied by observations.
A GPT is specified by a number of mathematical structures, namely:
It can be argued that if one can prepare a state x {\displaystyle x} and a different state y {\displaystyle y} , then one can also toss a (possibly biased) coin which lands on one side with probability p {\displaystyle p} and on the other with probability 1 − p {\displaystyle 1-p} and prepare either x {\displaystyle x} or y {\displaystyle y} , depending on the side the coin lands on. The resulting state is a statistical mixture of the states x {\displaystyle x} and y {\displaystyle y} and in GPTs such statistical mixtures are described by convex combinations, in this case p x + ( 1 − p ) y {\displaystyle px+(1-p)y} . For this reason all state spaces are assumed to be convex sets . Following a similar reasoning, one can argue that also the set of measurement outcomes and set of physical operations must be convex.
Additionally it is always assumed that measurement outcomes and physical operations are affine maps, i.e. that if Φ {\displaystyle \Phi } is a physical transformation, then we must have Φ ( p x + ( 1 − p ) y ) = p Φ ( x ) + ( 1 − p ) Φ ( y ) {\displaystyle \Phi (px+(1-p)y)=p\Phi (x)+(1-p)\Phi (y)} and similarly for measurement outcomes. This follows from the argument that we should obtain the same outcome if we first prepare a statistical mixture and then apply the physical operation, or if we prepare a statistical mixture of the outcomes of the physical operations.
Note that physical operations are a subset of all affine maps which transform states into states as we must require that a physical operation yields a valid state even when it is applied to a part of a system (the notion of "part" is subtle: it is specified by explaining how different system types compose and how the global parameters of the composite system are affected by local operations).
For practical reasons it is often assumed that a general GPT is embedded in a finite-dimensional vector space, although infinite-dimensional formulations exist. [ 21 ] [ 22 ]
Classical theory is a GPT where states correspond to probability distributions and both measurements and physical operations are stochastic maps. One can see that in this case all state spaces are simplexes .
Standard quantum information theory is a GPT where system types are described by a natural number D {\displaystyle D} which corresponds to the complex Hilbert space dimension. States of the systems of Hilbert space dimension D {\displaystyle D} are described by the normalized positive semidefinite matrices, i.e. by the density matrices . Measurements are identified with Positive Operator valued Measures (POVMs) , and the physical operations are completely positive maps . Systems compose via the tensor product of the underlying complex Hilbert spaces.
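As a concrete illustration of the quantum case, the snippet below checks the defining ingredients numerically: a density matrix as a state, a two-outcome POVM, outcome probabilities via the trace rule, and convexity of the state space. The specific matrices are arbitrary examples.

import numpy as np

rho = np.array([[0.75, 0.25], [0.25, 0.25]])   # normalized, positive semidefinite
E0 = np.array([[0.9, 0.0], [0.0, 0.2]])        # POVM effect
E1 = np.eye(2) - E0                            # effects sum to the identity
p0 = np.trace(E0 @ rho).real                   # outcome probabilities (Born rule)
p1 = np.trace(E1 @ rho).real
assert abs(p0 + p1 - 1.0) < 1e-12

# Convexity of the state space: a mixture of states is again a state
sigma = np.eye(2) / 2
mix = 0.3 * rho + 0.7 * sigma
assert np.all(np.linalg.eigvalsh(mix) >= -1e-12)
assert abs(np.trace(mix) - 1.0) < 1e-12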
Real quantum theory is the GPT which is obtained from standard quantum information theory by restricting the theory to real Hilbert spaces. It does not satisfy the axiom of local tomography . [ 23 ]
The framework of GPTs has provided examples of consistent physical theories which cannot be embedded in quantum theory and indeed exhibit very non-quantum features. One of the first ones was Box-world, the theory with maximal non-local correlations. [ 19 ] Other examples are theories with third-order interference [ 24 ] and the family of GPTs known as generalized bits. [ 25 ]
Many features that were considered purely quantum are actually present in all non-classical GPTs. These include the impossibility of universal broadcasting, i.e., the no-cloning theorem ; [ 26 ] the existence of incompatible measurements; [ 22 ] [ 27 ] and the existence of entangled states or entangled measurements. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Generalized_probabilistic_theory |
In mathematics, a generalized space is a generalization of a topological space . Impetuses for such a generalization come in at least two forms:
Alexander Grothendieck 's dictum says a topos is a generalized space; precisely, he and his followers write in exposé 4 of SGA 4: [ 1 ]
One can therefore say that the notion of topos, a natural derivative of the sheaf-theoretic point of view in topology, constitutes in its turn a substantial enlargement of the notion of topological space, [covering] a great number of situations that formerly were not considered as belonging to topological intuition
However, William Lawvere argues in his 1975 paper [ 2 ] that this dictum should be turned backward; namely, "a topos is the 'algebra of continuous (set-valued) functions' on a generalized space, not the generalized space itself."
A generalized space should not be confused with a geometric object that can substitute the role of spaces. For example, a stack is typically not viewed as a space but as a geometric object with a richer structure.
| https://en.wikipedia.org/wiki/Generalized_space |
In image analysis , the generalized structure tensor (GST) is an extension of the Cartesian structure tensor to curvilinear coordinates . [ 1 ] It is mainly used to detect and to represent the "direction" parameters of curves, just as the Cartesian structure tensor detects and represents the direction in Cartesian coordinates. Curve families generated by pairs of locally orthogonal functions have been the best studied.
It is widely used in applications of image and video processing, including computer vision tasks such as biometric identification by fingerprints [ 2 ] and studies of human tissue sections. [ 3 ] [ 4 ]
Let the term image represent a function f ( ξ ( x , y ) , η ( x , y ) ) {\displaystyle f(\xi (x,y),\eta (x,y))} where x , y {\displaystyle x,y} are real variables and ξ , η {\displaystyle \xi ,\eta } , and f {\displaystyle f} , are real valued functions. GST represents the direction along which the image f {\displaystyle f} can undergo an infinitesimal translation with minimal ( total least squares ) error, along the "lines" fulfilling the following conditions:
1. The "lines" are ordinary lines in the curvilinear coordinate basis ξ , η {\displaystyle \xi ,\eta }
which are curves in Cartesian coordinates as depicted by the equation above. The error is measured in the L 2 {\displaystyle L^{2}} sense and the minimality of the error refers thereby to the L2 norm .
2. The functions ξ ( x , y ) , η ( x , y ) {\displaystyle \xi (x,y),\eta (x,y)} constitute a harmonic pair, i.e. they fulfill Cauchy–Riemann equations ,
Accordingly, such curvilinear coordinates ξ , η {\displaystyle \xi ,\eta } are locally orthogonal.
Then GST consists in
where 0 ≤ λ m i n ≤ λ m a x {\displaystyle 0\leq \lambda _{min}\leq \lambda _{max}} are errors of (infinitesimal) translation in the best direction (designated by the angle θ {\displaystyle \theta } ) and the worst direction (designated by θ + π / 2 {\displaystyle \theta +\pi /2} ). The function w ( ξ , η ) {\displaystyle w(\xi ,\eta )} is the window function defining the "outer scale" wherein the detection of θ {\displaystyle \theta } will be carried out, which can be omitted if it is already included in f {\displaystyle f} or if f {\displaystyle f} is the full image (rather than local). The matrix I {\displaystyle I} is the identity matrix . Using the chain rule , it can be shown that the integration above can be implemented as convolutions in Cartesian coordinates applied to the ordinary structure tensor when ξ , η {\displaystyle \xi ,\eta } pair the real and imaginary parts of an analytic function g ( z ) {\displaystyle g(z)} ,
where z = x + i y {\displaystyle z=x+iy} . [ 5 ] Examples of analytic functions include g ( z ) = log z = log ( x + i y ) {\displaystyle g(z)=\log z=\log(x+iy)} , as well as monomials g ( z ) = z n = ( x + i y ) n {\displaystyle g(z)=z^{n}=(x+iy)^{n}} , g ( z ) = z n / 2 = ( x + i y ) n / 2 {\displaystyle g(z)=z^{n/2}=(x+iy)^{n/2}} , where n {\displaystyle n} is an arbitrary positive or negative integer. The monomials g ( z ) = z n {\displaystyle g(z)=z^{n}} are also referred to as harmonic functions in computer vision, and image processing.
Thereby, the Cartesian structure tensor is a special case of GST where ξ = x {\displaystyle \xi =x} , and η = y {\displaystyle \eta =y} , i.e. the harmonic function is simply g ( z ) = z = ( x + i y ) {\displaystyle g(z)=z=(x+iy)} . Thus by choosing a harmonic function g {\displaystyle g} , one can detect all curves that are linear combinations of its real and imaginary parts by convolutions on (rectangular) image grids only, even if ξ , η {\displaystyle \xi ,\eta } are non-Cartesian. Furthermore, the convolution computations can be done by using complex filters applied to the complex version of the structure tensor. Thus, GST implementations have frequently been done using the complex version of the structure tensor, rather than the (1,1) tensor.
As there is a complex version of the ordinary structure tensor , there is also a complex version of the GST
which is identical to its cousin with the difference that w {\displaystyle w} is a complex filter. It should be recalled that the ordinary structure tensor w {\displaystyle w} is a real filter, usually defined by a sampled and scaled Gaussian to delineate the neighborhood, also known as the outer scale. This simplicity is a reason why GST implementations have predominantly used the complex version above. For curve families ξ , η {\displaystyle \xi ,\eta } defined by analytic functions g {\displaystyle g} , it can be shown [ 1 ] that the neighborhood defining function is complex valued,
a so called symmetry derivative of a Gaussian. Thus, the orientation wise variation of the pattern to be looked for is directly incorporated into the neighborhood defining function, and the detection occurs in the space of the (ordinary) structure tensor.
Efficient detection of θ {\displaystyle \theta } in images is possible by image processing for a pair ξ {\displaystyle \xi } , η {\displaystyle \eta } . Complex convolutions (or the corresponding matrix operations) and point-wise non-linear mappings are the basic computational elements of GST implementations. A total least square error estimation of 2 θ {\displaystyle 2\theta } is then obtained along with the two errors, λ m a x {\displaystyle \lambda _{max}} and λ m i n {\displaystyle \lambda _{min}} . In analogy with the Cartesian structure tensor , the estimated angle is in double angle representation, i.e. 2 θ {\displaystyle 2\theta } is delivered by computations, and can be used as a shape feature whereas λ m a x − λ m i n {\displaystyle \lambda _{max}-\lambda _{min}} alone or in combination with λ m a x + λ m i n {\displaystyle \lambda _{max}+\lambda _{min}} can be used as a quality (confidence, certainty) measure for the angle estimation.
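For the Cartesian special case, the double-angle estimate and its confidence can be computed with a few lines of numpy and scipy; the Gaussian window width is an arbitrary choice here, and the complex-response formulation follows the description above, with the magnitude playing the role of lambda_max - lambda_min.

import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_double_angle(image, outer_sigma=3.0):
    # Cartesian structure tensor in complex form: smoothing (fx + i*fy)**2
    # over the outer-scale window yields 2*theta as the argument and
    # lambda_max - lambda_min as the magnitude of the filtered response.
    fy, fx = np.gradient(image.astype(float))
    h = (fx + 1j * fy) ** 2
    resp = (gaussian_filter(h.real, outer_sigma)
            + 1j * gaussian_filter(h.imag, outer_sigma))
    return np.angle(resp) / 2.0, np.abs(resp)   # theta, confidence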
Logarithmic spirals, including circles, can for instance be detected by (complex) convolutions and non-linear mappings. [ 1 ] The spirals can be in gray (valued) images or in a binary image , i.e. locations of edge elements of the concerned patterns, such as contours of circles or spirals, must not be known or marked otherwise.
Generalized structure tensor can be used as an alternative to Hough transform in image processing and computer vision to detect patterns whose local orientations can be modelled, for example junction points. The main differences comprise:
The curvilinear coordinates of GST can explain physical processes applied to images. A well-known pair of processes consists of rotation and zooming. These are related to the coordinate transformation ξ = log ( x 2 + y 2 ) {\displaystyle \xi =\log({\sqrt {x^{2}+y^{2}}})} and η = tan − 1 ( x , y ) {\displaystyle \eta =\tan ^{-1}(x,y)} .
If an image f {\displaystyle f} consists in iso-curves that can be explained by only ξ {\displaystyle \xi } i.e. its iso-curves consist in circles f ( ξ , η ) = g ( ξ ) {\displaystyle f(\xi ,\eta )=g(\xi )} , where g {\displaystyle g} is any real valued differentiable function defined on 1D, the image is invariant to rotations (around the origin).
The zooming (and unzooming) operation is modeled similarly. If the image has iso-curves that look like a "star" or bicycle spokes, i.e. f ( ξ , η ) = g ( η ) {\displaystyle f(\xi ,\eta )=g(\eta )} for some differentiable 1D function g {\displaystyle g} , then the image f {\displaystyle f} is invariant to scaling (w.r.t. the origin).
In combination,
is invariant to a certain amount of rotation combined with scaling, where the amount is precised by the parameter θ {\displaystyle \theta } .
Analogously, the Cartesian structure tensor is a representation of a translation too. Here the physical process consists in an ordinary translation of a certain amount along x {\displaystyle x} combined with translation along y {\displaystyle y} ,
where the amount is specified by the parameter θ {\displaystyle \theta } . Evidently θ {\displaystyle \theta } here represents the direction of the line.
Generally, the estimated θ {\displaystyle \theta } represents the direction (in ξ , η {\displaystyle \xi ,\eta } coordinates) along which infinitesimal translations leave the image invariant, in practice least variant. With every curvilinear coordinate basis pair, there is thus a pair of infinitesimal translators, a linear combination of which is a Differential operator . The latter are related to Lie algebra .
"Image" in the context of the GST can mean both an ordinary image and an image neighborhood thereof (local image), depending on context. For example, a photograph is an image as is any neighborhood of it. | https://en.wikipedia.org/wiki/Generalized_structure_tensor |
In number theory , the generalized taxicab number Taxicab( k , j , n ) is the smallest number — if it exists — that can be expressed as the sum of j positive k th powers in n different ways. For k = 3 and j = 2 , they coincide with the taxicab numbers .
T a x i c a b ( 1 , 2 , 2 ) = 4 = 1 + 3 = 2 + 2 T a x i c a b ( 2 , 2 , 2 ) = 50 = 1 2 + 7 2 = 5 2 + 5 2 T a x i c a b ( 3 , 2 , 2 ) = 1729 = 1 3 + 12 3 = 9 3 + 10 3 {\displaystyle {\begin{aligned}\mathrm {Taxicab} (1,2,2)&=4=1+3=2+2\\\mathrm {Taxicab} (2,2,2)&=50=1^{2}+7^{2}=5^{2}+5^{2}\\\mathrm {Taxicab} (3,2,2)&=1729=1^{3}+12^{3}=9^{3}+10^{3}\end{aligned}}}
The latter example is 1729 , as first noted by Ramanujan .
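These small cases can be checked by brute force; in the sketch below the search bound on the bases is an arbitrary choice, and the result is only guaranteed to be the true minimum when the bound is large enough to cover all smaller candidates.

from itertools import combinations_with_replacement
from collections import defaultdict

def taxicab(k, j, n, limit):
    # Smallest number expressible as a sum of j positive k-th powers
    # in at least n ways, with bases searched up to `limit`.
    ways = defaultdict(set)
    for combo in combinations_with_replacement(range(1, limit + 1), j):
        ways[sum(b ** k for b in combo)].add(combo)
    hits = [s for s, reps in ways.items() if len(reps) >= n]
    return min(hits) if hits else None

print(taxicab(3, 2, 2, 50))   # 1729 = 1^3 + 12^3 = 9^3 + 10^3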
Euler showed that
T a x i c a b ( 4 , 2 , 2 ) = 635318657 = 59 4 + 158 4 = 133 4 + 134 4 . {\displaystyle \mathrm {Taxicab} (4,2,2)=635318657=59^{4}+158^{4}=133^{4}+134^{4}.}
However, Taxicab(5, 2, n ) is not known for any n ≥ 2 : No positive integer is known that can be written as the sum of two 5th powers in more than one way, and it is not known whether such a number exists. [ 1 ] | https://en.wikipedia.org/wiki/Generalized_taxicab_number |
The Generalized Uncertainty Principle ( GUP ) represents a pivotal extension of the Heisenberg Uncertainty Principle, incorporating the effects of gravitational forces to refine the limits of measurement precision within quantum mechanics. Rooted in advanced theories of quantum gravity, including string theory and loop quantum gravity, the GUP introduces the concept of a minimal measurable length. This fundamental limit challenges the classical notion that positions can be measured with arbitrary precision, hinting at a discrete structure of spacetime at the Planck scale. The mathematical expression of the GUP is often formulated as:
Δ x Δ p ≥ ℏ 2 + β Δ p 2 {\displaystyle \Delta x\Delta p\geq {\frac {\hbar }{2}}+\beta \Delta p^{2}}
In this equation, Δ x {\displaystyle \Delta x} and Δ p {\displaystyle \Delta p} denote the uncertainties in position and momentum, respectively. The term ℏ {\displaystyle \hbar } represents the reduced Planck constant, while β {\displaystyle \beta } is a parameter that embodies the minimal length scale predicted by the GUP. The GUP is more than a theoretical curiosity; it signifies a cornerstone concept in the pursuit of unifying quantum mechanics with general relativity. It posits an absolute minimum uncertainty in the position of particles, approximated by the Planck length, underscoring its significance in the realms of quantum gravity and string theory where such minimal length scales are anticipated. [ 1 ] [ 2 ] Various quantum gravity theories, such as string theory, loop quantum gravity, and quantum geometry, propose a generalized version of the uncertainty principle (GUP), which suggests the presence of a minimum measurable length. In earlier research, multiple forms of the GUP have been introduced [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ]
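Dividing the displayed inequality by Δp gives the bound Δx ≥ ℏ/(2Δp) + βΔp, whose minimum over Δp is the minimal measurable length; a short sympy check of this standard consequence (the symbol names are arbitrary):

import sympy as sp

dp, beta, hbar = sp.symbols('Delta_p beta hbar', positive=True)
dx = hbar / (2 * dp) + beta * dp            # bound on Delta x from the GUP
dp_star = sp.solve(sp.diff(dx, dp), dp)[0]  # minimizing momentum uncertainty
dx_min = sp.simplify(dx.subs(dp, dp_star))
print(dx_min)   # the equivalent of sqrt(2*beta*hbar), the minimal length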
The GUP's phenomenological and experimental implications have been examined across low and high-energy contexts, encompassing atomic systems, [ 13 ] [ 14 ] quantum optical systems, [ 15 ] gravitational bar detectors, [ 16 ] gravitational decoherence, [ 17 ] and macroscopic harmonic oscillators, [ 18 ] further extending to composite particles [ 19 ] and astrophysical systems. [ 20 ] | https://en.wikipedia.org/wiki/Generalized_uncertainty_principle |
The generalized valence bond ( GVB ) is a method in valence bond theory that uses flexible orbitals in the general way used by modern valence bond theory . The method was developed by the group of William A. Goddard, III around 1970. [ 1 ] [ 2 ]
The generalized Coulson–Fischer theory for the hydrogen molecule , discussed in Modern valence bond theory , is used to describe every electron pair in a molecule. The orbitals for each electron pair are expanded in terms of the full basis set and are non-orthogonal. Orbitals from different pairs are forced to be orthogonal - the strong orthogonality condition. This condition simplifies the calculation but can lead to some difficulties.
GVB code in some programs, particularly GAMESS (US) , can also be used to do a variety of restricted open-shell Hartree–Fock calculations, [ 3 ] such as those with one or three electrons in two pi-electron molecular orbitals while retaining the degeneracy of the orbitals. This wave function is essentially a two-determinant function, rather than the one-determinant function of the restricted Hartree–Fock method.
| https://en.wikipedia.org/wiki/Generalized_valence_bond |
Generally Accepted Privacy Principles (GAPP) is a framework intended to assist chartered accountants and certified public accountants in creating an effective privacy program for managing and preventing privacy risks. The framework was developed through joint consultation between the Canadian Institute of Chartered Accountants (CICA) and the American Institute of Certified Public Accountants (AICPA) through the AICPA/CICA Privacy Task Force. It is a component of SOC 2 . [ 1 ]
The GAPP framework was previously known as the AICPA/CICA Privacy Framework, and is founded on a single privacy principle: personally identifiable information must be collected, used, retained and disclosed in compliance with the commitments in the entity's privacy notice and with criteria set out in the GAPP issued by the AICPA/CICA. This privacy objective is supported by ten main principles and over seventy objectives, with associated measurable criteria. The ten principles are:
Privacy is defined in the Generally Accepted Privacy Principles as "the rights and obligations of individuals and organizations with respect to the collection, use, retention, disclosure, and disposal of personal information." [ 2 ]
| https://en.wikipedia.org/wiki/Generally_Accepted_Privacy_Principles |
Generally recognized as safe ( GRAS ) is a United States Food and Drug Administration (FDA) designation that a chemical or substance added to food is considered safe by experts under the conditions of its intended use. [ 1 ] An ingredient with a GRAS designation is exempted from the usual Federal Food, Drug, and Cosmetic Act (FFDCA) food additive tolerance requirements. [ 2 ] The concept of food additives being "generally recognized as safe" was first described in the Food Additives Amendment of 1958 , and all additives introduced after this time had to be evaluated by new standards. [ 1 ] [ 3 ] Some examples of substances recognized as GRAS include ascorbic acid (vitamin C), citric acid, and salt, which are all commonly used in food preservation and flavoring. [ 4 ] The FDA list of GRAS notices is updated approximately each month, as of 2021. [ 5 ]
On 6 September 1958, the Food Additives Amendment of 1958 was signed into law, with a list of 700 food substances that were exempt from the then-new requirement that manufacturers test food additives before putting them on the market. [ 1 ] [ 3 ] On August 31, 1960, William W. Goodrich, assistant general counsel of the FDA, addressed the annual meeting (16 Bus. Law. 107 1960–1961) of the FFDCA. The purpose of the meeting was to address the forthcoming March 6, 1961, effective date of the enforcement provisions of the "Food Additives Amendment of 1958", referred to as GRAS. [ 6 ]
A GRAS determination can be self-affirmed or the FDA can be notified of a determination of GRAS by qualified non-governmental experts:
As of January 2021 (beginning in 1998), 955 ingredient or food substances have been filed with the FDA . [ 5 ] These petitions, submitted by sponsors or manufacturers, are reviewed for the safety evidence contained in the document. FDA posts the status of the review as either without further questions (a position of "no objection") or the petition is withdrawn by the applicant. [ 5 ]
For substances used in food prior to January 1, 1958, a grandfather clause allows experience based on common use in food to be used in asserting an ingredient is safe under the conditions of their intended use. [ 3 ]
The FDA can also explicitly withdraw the GRAS classification, as it did for trans fat in 2015. [ 7 ]
The list of GRAS notices is updated approximately each month by the FDA. [ 5 ]
The Code of Federal Regulations (CFR), revised as of April 1, 2020, [ 8 ] provides in title 21, section 170.30(b) that general recognition of safety through scientific procedures requires the same quantity and quality of scientific evidence as is needed to obtain approval of the substance as a food additive, [ 9 ] and ordinarily is based upon published studies, which may be corroborated by unpublished studies and other data and information. [ 2 ]
The substance must be shown to be "generally recognized" as safe under the conditions of its intended use. [ 2 ] For new proposals, the proponent of the exemption – usually a food manufacturer or ingredient supplier wishing to highlight a food ingredient in its manufactured product – has the burden of providing rigorous scientific evidence that use of the substance in an edible consumer product is safe. [ 2 ] To establish GRAS, the proponent must show that there is a consensus of expert opinion that the substance is safe for its intended use. [ 5 ] For existing GRAS items, new uses should not substantially exceed historical occurrence levels of the substance in the diet. [ 2 ]
When use of a substance does not qualify for the GRAS exemption, it is subject to the premarket approval mandated by the Federal Food, Drug, and Cosmetic Act . In such circumstances, the FDA can take enforcement action to stop distribution of the food substance and foods containing it on the grounds that such foods are not deemed GRAS or contain an unlawfully added ingredient. [ 10 ]
An example of a non-GRAS ingredient requiring enforcement actions in the form of FDA warning letters to 15 companies in 2019 was cannabidiol , [ 11 ] which, as of 2021, had not been established with sufficient scientific evidence of safety as a GRAS ingredient. [ 12 ] | https://en.wikipedia.org/wiki/Generally_recognized_as_safe |
In physics, and more specifically in Hamiltonian mechanics , a generating function is, loosely, a function whose partial derivatives generate the differential equations that determine a system's dynamics. Common examples are the partition function of statistical mechanics, the Hamiltonian, and the function which acts as a bridge between two sets of canonical variables when performing a canonical transformation .
There are four basic generating functions, classified by which mixture of old and new canonical variables each depends on: the first kind F 1 ( q , Q ) {\displaystyle F_{1}(q,Q)} , the second kind F 2 ( q , P ) {\displaystyle F_{2}(q,P)} , the third kind F 3 ( p , Q ) {\displaystyle F_{3}(p,Q)} , and the fourth kind F 4 ( p , P ) {\displaystyle F_{4}(p,P)} ; the remaining variables follow by partial differentiation, for instance q = − ∂ F 3 ∂ p {\displaystyle q=-{\frac {\partial F_{3}}{\partial p}}} and P = − ∂ F 3 ∂ Q {\displaystyle P=-{\frac {\partial F_{3}}{\partial Q}}} for the third kind. [ 1 ]
Sometimes a given Hamiltonian can be turned into one that looks like the harmonic oscillator Hamiltonian, which is
H = a P 2 + b Q 2 . {\displaystyle H=aP^{2}+bQ^{2}.}
For example, with the Hamiltonian
H = 1 2 q 2 + p 2 q 4 2 , {\displaystyle H={\frac {1}{2q^{2}}}+{\frac {p^{2}q^{4}}{2}},}
where p is the generalized momentum and q is the generalized coordinate, a good canonical transformation to choose would be P = p q 2 , Q = − 1 q . ( 1 ) {\displaystyle P=pq^{2},\qquad Q={\frac {-1}{q}}.\qquad \qquad (1)}
This turns the Hamiltonian into
H = Q 2 2 + P 2 2 , {\displaystyle H={\frac {Q^{2}}{2}}+{\frac {P^{2}}{2}},}
which is in the form of the harmonic oscillator Hamiltonian.
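A quick sympy check of the transformation (1), confirming that it turns the original Hamiltonian into the harmonic oscillator form:

import sympy as sp

p, q = sp.symbols('p q')
P, Q = p * q**2, -1 / q                  # the canonical transformation (1)
H_old = 1 / (2 * q**2) + p**2 * q**4 / 2
H_new = Q**2 / 2 + P**2 / 2
print(sp.simplify(H_old - H_new))        # 0: the two Hamiltonians agree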
The generating function F for this transformation is of the third kind,
F = F 3 ( p , Q ) . {\displaystyle F=F_{3}(p,Q).}
To find F explicitly, use the equation for its derivative from the table above,
P = − ∂ F 3 ∂ Q , {\displaystyle P=-{\frac {\partial F_{3}}{\partial Q}},}
and substitute the expression for P from equation ( 1 ), expressed in terms of p and Q :
p Q 2 = − ∂ F 3 ∂ Q {\displaystyle {\frac {p}{Q^{2}}}=-{\frac {\partial F_{3}}{\partial Q}}}
Integrating this with respect to Q results in an equation for the generating function of the transformation given by equation ( 1 ):
F 3 ( p , Q ) = p Q {\displaystyle F_{3}(p,Q)={\frac {p}{Q}}}
To confirm that this is the correct generating function, verify that it matches ( 1 ):
q = − ∂ F 3 ∂ p = − 1 Q {\displaystyle q=-{\frac {\partial F_{3}}{\partial p}}={\frac {-1}{Q}}} | https://en.wikipedia.org/wiki/Generating_function_(physics) |
In mathematics , a generating set Γ of a module M over a ring R is a subset of M such that the smallest submodule of M containing Γ is M itself (the smallest submodule containing a subset is the intersection of all submodules containing the set). The set Γ is then said to generate M . For example, the ring R is generated by the identity element 1 as a left R -module over itself. If there is a finite generating set, then a module is said to be finitely generated .
This applies to ideals , which are the submodules of the ring itself. In particular, a principal ideal is an ideal that has a generating set consisting of a single element.
Explicitly, if Γ is a generating set of a module M , then every element of M is a (finite) R -linear combination of some elements of Γ; i.e., for each x in M , there are r 1 , ..., r m in R and g 1 , ..., g m in Γ such that x = r 1 g 1 + ⋯ + r m g m {\displaystyle x=r_{1}g_{1}+\cdots +r_{m}g_{m}} .
Put in another way, there is a surjection R ⊕ Γ → M , ( r g ) g ∈ Γ ↦ ∑ g r g g {\displaystyle R^{\oplus \Gamma }\to M,\;(r_{g})_{g\in \Gamma }\mapsto \sum _{g}r_{g}g} ,
where we wrote r g for an element in the g -th component of the direct sum. (Coincidentally, since a generating set always exists, e.g. M itself, this shows that a module is a quotient of a free module , a useful fact.)
A generating set of a module is said to be minimal if no proper subset of the set generates the module. If R is a field , then a minimal generating set is the same thing as a basis . Unless the module is finitely generated , there may exist no minimal generating set. [ 1 ]
The cardinality of a minimal generating set need not be an invariant of the module; Z is generated as a principal ideal by 1, but it is also generated by, say, the minimal generating set {2, 3}. What is uniquely determined by a module is the infimum of the numbers of generators of the module.
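The {2, 3} example can be made concrete with a few lines of Python; the witness combination below is one of many choices.

from math import gcd

# {2, 3} generates Z as a Z-module because gcd(2, 3) = 1:
# 1 = (-1)*2 + 1*3, so every integer n equals (-n)*2 + n*3.
assert gcd(2, 3) == 1
n = 17
assert (-n) * 2 + n * 3 == n
# Neither {2} nor {3} alone generates Z (their multiples miss odd numbers
# or non-multiples of 3), so {2, 3} is minimal with cardinality 2, while
# {1} is minimal with cardinality 1.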
Let R be a local ring with maximal ideal m and residue field k and M finitely generated module. Then Nakayama's lemma says that M has a minimal generating set whose cardinality is dim k M / m M = dim k M ⊗ R k {\displaystyle \dim _{k}M/mM=\dim _{k}M\otimes _{R}k} . If M is flat , then this minimal generating set is linearly independent (so M is free). See also: Minimal resolution .
A more refined information is obtained if one considers the relations between the generators; see Free presentation of a module .
| https://en.wikipedia.org/wiki/Generating_set_of_a_module |
Generation loss is the loss of quality between subsequent copies or transcodes of data. Anything that reduces the quality of the representation when copying, and would cause further reduction in quality on making a copy of the copy, can be considered a form of generation loss. File size increases are a common result of generation loss, as the introduction of artifacts may actually increase the entropy of the data through each generation.
In analog systems (including systems that use digital recording but make the copy over an analog connection), generation loss is mostly due to noise and bandwidth issues in cables , amplifiers , mixers , recording equipment and anything else between the source and the destination. Poorly adjusted distribution amplifiers and mismatched impedances can make these problems even worse. Repeated conversion between analog and digital can also cause loss.
Generation loss was a major consideration in complex analog audio and video editing , where multi-layered edits were often created by making intermediate mixes which were then "bounced down" back onto tape. Careful planning was required to minimize generation loss, and the resulting noise and poor frequency response.
One way of minimizing the number of generations needed was to use an audio mixing or video editing suite capable of mixing a large number of channels at once; in the extreme case, for example with a 48-track recording studio, an entire complex mixdown could be done in a single generation, although this was prohibitively expensive for all but the best-funded projects.
The introduction of professional analog noise reduction systems such as Dolby A helped reduce the amount of audible generation loss, but were eventually superseded by digital systems which vastly reduced generation loss. [ 1 ]
According to ATIS , "Generation loss is limited to analog recording because digital recording and reproduction may be performed in a manner that is essentially free from generation loss." [ 1 ]
Used correctly, digital technology can eliminate generation loss. This implies the exclusive use of lossless compression codecs or uncompressed data from recording or creation until the final lossy encode for distribution through internet streaming or optical discs. Copying a digital file gives an exact copy if the equipment is operating properly, which eliminates generation loss caused by copying, while reencoding digital files with lossy compression codecs can cause generation loss. This trait of digital technology has given rise to awareness of the risk of unauthorized copying. Before digital technology was widespread, a record label , for example, could be confident knowing that unauthorized copies of their music tracks were never as good as the originals.
Generation loss can still occur when using lossy video or audio compression codecs as these introduce artifacts into the source material with each encode or reencode. Lossy compression codecs such as Apple ProRes , Advanced Video Coding and mp3 are very widely used as they allow for dramatic reductions on file size while being indistinguishable from the uncompressed or losslessly compressed original for viewing purposes. The only way to avoid generation loss is by using uncompressed or losslessly compressed files, which may be expensive from a storage standpoint as they require larger amounts of storage space in flash memory or hard drives per second of runtime. Uncompressed video requires a high data rate; for example, a 1080p video at 60 frames per second requires approximately 370 megabytes per second. [ 2 ] Lossy codecs make Blu-rays and streaming video over the internet feasible since neither can deliver the amounts of data needed for uncompressed or losslessly compressed video at acceptable frame rates and resolutions. Images can suffer from generation loss in the same way video and audio can.
Processing a lossily compressed file rather than an original usually results in more loss of quality than generating the same output from an uncompressed original. For example, a low-resolution digital image for a web page is better if generated from an uncompressed raw image than from an already-compressed JPEG file of higher quality.
In digital systems , several techniques such as lossy compression codecs and algorithms, used because of other advantages, may introduce generation loss and must be used with caution. However, copying a digital file itself incurs no generation loss—the copied file is identical to the original, provided a perfect copying channel is used.
Some digital transforms are reversible, while some are not. Lossless compression is, by definition, fully reversible, while lossy compression throws away some data which cannot be restored. Similarly, many DSP processes are not reversible.
Thus careful planning of an audio or video signal chain from beginning to end and rearranging to minimize multiple conversions is important to avoid generation loss when using lossy compression codecs. Often, arbitrary choices of numbers of pixels and sampling rates for source, destination, and intermediates can seriously degrade digital signals in spite of the potential of digital technology for eliminating generation loss completely.
Similarly, when using lossy compression, it will ideally only be done once, at the end of the workflow involving the file, after all required changes have been made.
Converting between lossy formats – be it decoding and re-encoding to the same format, between different formats, or between different bitrates or parameters of the same format – causes generation loss.
Repeated applications of lossy compression and decompression can cause generation loss, particularly if the parameters used are not consistent across generations.
Ideally an algorithm will be both idempotent , meaning that if the signal is decoded and then re-encoded with identical settings, there is no loss, and scalable, meaning that if it is re-encoded with lower quality settings, the result will be the same as if it had been encoded from the original signal – see Scalable Video Coding . More generally, transcoding between different parameters of a particular encoding will ideally yield the greatest common shared quality – for instance, converting from an image with 4 bits of red and 8 bits of green to one with 8 bits of red and 4 bits of green would ideally yield simply an image with 4 bits of red color depth and 4 bits of green color depth without further degradation.
Some lossy compression algorithms are much worse than others in this regard, being neither idempotent nor scalable, and introducing further degradation if parameters are changed.
For example, with JPEG , changing the quality setting will cause different quantization constants to be used, causing additional loss. Further, as JPEG is divided into 16×16 blocks (or 16×8, or 8×8, depending on chroma subsampling ), cropping that does not fall on an 8×8 boundary shifts the encoding blocks, causing substantial degradation – similar problems happen on rotation. This can be avoided by the use of jpegtran or similar tools for cropping. Similar degradation occurs if video keyframes do not line up from generation to generation.
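The effect is easy to demonstrate by repeatedly re-encoding an image; a sketch using Pillow and numpy, where the quality setting and generation count are arbitrary choices:

import io
import numpy as np
from PIL import Image

def reencode_generations(img, generations=20, quality=75):
    # Repeatedly decode and re-encode a JPEG, returning the PSNR of each
    # generation measured against the original image.
    ref = np.asarray(img.convert("RGB"), dtype=float)
    psnrs = []
    for _ in range(generations):
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        img = Image.open(buf)
        cur = np.asarray(img.convert("RGB"), dtype=float)
        mse = np.mean((ref - cur) ** 2)
        psnrs.append(10 * np.log10(255**2 / mse) if mse else float("inf"))
    return psnrs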
Digital resampling such as image scaling , and other DSP techniques can also introduce artifacts or degrade signal-to-noise ratio (S/N ratio) each time they are used, even if the underlying storage is lossless. When making a copy of a copy, the quality of the image will deteriorate with every ‘generation’.
The scanning and printing stages of a photocopier rely on sensors and on physical media such as paper and ink , all of which introduce noise, leading to the accumulation of noise over successive iterations. Similarly, lossy image formats, such as JPEG , introduce degradation when files are repeatedly edited and re-saved. While directly copying a JPEG file preserves its quality, opening and saving it in an image editor creates a new, re-encoded version, introducing subtle changes. Social media platforms like Facebook and X, formerly known as Twitter , automatically re-encode uploaded images at low-quality settings to optimize storage and bandwidth, further compounding compression artifacts. Over time, repeated re-encoding or processing can significantly degrade the image's quality.
Resampling causes aliasing , both blurring low-frequency components and adding high-frequency noise, causing jaggies , while rounding off computations to fit in finite precision introduces quantization , causing banding ; if fixed by dither , this instead becomes noise. In both cases, these at best degrade the signal's S/N ratio, and may cause artifacts. Quantization can be reduced by using high precision while editing (notably floating point numbers), only reducing back to fixed precision at the end.
Often, particular implementations fall short of theoretical ideals.
Successive generations of photocopies result in image distortion and degradation. [ 3 ] Repeatedly downloading and then reposting / reuploading content to platforms such as Instagram or YouTube can result in noticeable quality degradation. [ 4 ] [ 5 ] [ 6 ] Similar effects have been documented in copying of VHS tapes. [ 7 ] This is because both services use lossy codecs on all data that is uploaded to them, even if the data being uploaded is a duplicate of data already hosted on the service, while VHS is an analog medium, where effects such as noise from interference can have a much more noticeable impact on recordings. | https://en.wikipedia.org/wiki/Generation_loss |
The Generation of Animals (or On the Generation of Animals ; Greek : Περὶ ζῴων γενέσεως ( Peri Zoion Geneseos ); Latin : De Generatione Animalium ) is one of the biological works of the Corpus Aristotelicum , the collection of texts traditionally attributed to Aristotle (384–322 BC). The work provides an account of animal reproduction , gestation , heredity , and embryology .
Generation of Animals consists of five books, which are themselves split into varying numbers of chapters. Most editions of this work categorise it with Bekker numbers . In general, each book covers a range of related topics, however there is also a significant amount of overlap in the content of the books. For example, while one of the two principal topics covered in book I is the function of semen ( gone , sperma ), this account is not finalised until partway through book II.
Book I (715a – 731b)
Chapter 1 begins with Aristotle claiming to have already addressed the parts of animals, referencing the author's work of the same name . While this and possibly his other biological works , have addressed three of the four causes pertaining to animals, the final , formal , and material , the efficient cause has yet to be spoken of. He argues that the efficient cause, or "that from which the source of movement comes" [ 1 ] can be addressed with an inquiry into the generation of animals. Aristotle then provides a general overview of the processes of reproduction adopted by the various genera, for instance most ' blooded ' animals reproduce by coition of a male and female of the same species , but cases vary for 'bloodless' animals.
The reproductive organs of males and females are also investigated. Through chapters 2–5 Aristotle successively describes the general reproductive features common to each sex, the differences in reproductive parts among blooded animals, the causes of differences of testes in particular, and why some animals do not have external reproductive organs. The latter provides clear examples of Aristotle's teleological approach to causation, as it is applied to biology . He argues that the male hedgehog has its testes near its loin , unlike the majority of vivipara, because due to their spines hedgehogs mate standing upright. The hedgehog's form is that of an animal able to use its spines for self-defence, and so its reproductive organs are situated in such a way as to complement this.
Chapter 6 describes why fish and serpents copulate in a short space of time, and chapter 7 provides an explanation for why serpents intertwine during coition. Chapters 8–11 focus on female reproductive organs, and in particular the differences in viviparous and oviparous production of young, and the differing states of the eggs produced by ovipara. This is continued in chapters 12 and 13, where Aristotle discusses the reasons the uterus is internal and the testes external, and their locations among various species. Concluding this section on the reproductive parts of animals is an overview from chapters 14–16 of the generative faculties of crustacea , cephalopods , and insects . This section contains an admission of an observational uncertainty, with Aristotle stating that observations of insect coition are not yet detailed enough to classify into types. [ 2 ]
The remainder of Book I (chapters 17 – 23) is concerned with providing an account of semen and its contribution to the generative process. The primary conclusions reached in this section are, firstly, that semen is not a bodily waste product but "a residue of useful nutriment", [ 3 ] and, secondly, that because the bodily emissions produced by females during copulation are not of a similar nutritive character, semen must be the efficient cause of offspring.
Book II (731b – 749a)
Chapters 1–3 of Book II continue the discussion of semen from the end of Book I. Aristotle questions potential ways in which the particular parts of animals might come to be formed, such as semen containing small versions of the bodily organs, [ 4 ] before settling on the idea that semen contributes the potential ( dunamis ) for the parts to come into being as they are. This is the basis for the imparting of the soul upon the material substratum present in the egg, as the female reproductive residue itself contains no active principle for the motion required to form an embryo . Aristotle's conception of the soul should not be mistaken for one which takes the soul to be a non-physical substance separate to the body. It instead comprises the ability for some function to be performed, which in the case of bodily development means the ability for organs to perform their bodily functions. Scholar Devin Henry describes Aristotle's view as follows:
"Aristotelian souls are not the sorts of things that are capable of being implanted in bodily organs from without (except perhaps intellectual soul). Soul is not an extra ingredient added to the organ over-and-above its structure. Once there is a properly constructed organ it straightaway possess the corresponding soul-function in virtue of its structure." [ 5 ]
The generative capacity of semen in imparting the soul is its heat, with semen itself being "a compound of breath and water". [ 6 ] It is the component of breath ( pneuma ) that shapes the material provided by the female into the correct form.
The mechanics of the development of the embryo take up much of chapters 4–7, with Aristotle addressing first the different stages of development at which vivipara and ovipara expel their young. In chapter 5 the theory of soul-imparting is amended slightly, as observations of wind-eggs show that the female, unassisted, is able to impart the nutritive aspect of the soul, which Aristotle claims is its lowest portion. Chapter 6 addresses the order in which the parts of an embryo come about, and in chapter 7 Aristotle argues that, contrary to Democritus ' apparent view that "children are nourished in the uterus by sucking some lump of flesh", [ 7 ] unborn vivipara are in actuality nourished by the umbilical cord . Chapter 8 discusses cross-breeding of species, and the sterility of mules .
Book III (749a – 763b)
Book III covers non-viviparous embryonic development. The first four chapters provide a description and explanation of eggs, while in chapters 5–7 Aristotle responds to other ideas about eggs and some observational difficulties in providing an empirical account of all eggs. The final chapters cover the development of hitherto unmentioned animals.
Chapter 1 is on the subject of bird eggs, with Aristotle providing explanations for why different birds produce different amounts of eggs, why some birds produce wind-eggs, and why bird eggs are sometimes of two colours. Following an explication of the formation of eggs and how they provide nutrition for the embryo in chapter 2, in chapter 3 Aristotle compares the eggs of birds against those of fish. The descriptive account of eggs is completed in chapter 4, which describes the growth of some eggs after they have been laid.
Chapters 5 and 6 are a response to what Aristotle takes to be falsely held beliefs of other scientists concerning the process of procreation. For example, Anaxagoras apparently held that weasels give birth from their mouths because "the young of the weasel are very small like those of the other fissipeds, of which we shall speak later, and because they often carry the young about in their mouths." [ 8 ] Aristotle states instead that weasels have the same uteruses as other quadrupeds , and there is nothing to connect the uterus to the mouth, so such a claim as Anaxagoras' must be unfounded.
Chapters 7–10 cover the generative processes of selachians , cephalopods, crustacea, insects and bees , in successive order. Chapter 11 concerns the generation of testacea , which are said to generate spontaneously. While it is possible for some of the Testacea, such as mussels, to emit a liquid slime which can form others of the same kind, they are also formed "in connexion with putrefaction and admixture of rain-water." [ 9 ]
Book IV (763b – 778a)
Book IV is primarily on the topic of biological inheritance. Aristotle is concerned with both the similarities between offspring and parents and the differences that can arise within a particular species as a result of the generative process. Chapter 1 is an account of the origin of the sexes. Aristotle considers the sexes to be "the first principles of all living things". [ 10 ] Given this, the sex of an embryo is determined entirely by the potency of the fertilising semen, which contains the male principle. If this semen lacks the heat needed to fashion the material present in the female, then the male principle cannot take hold, and its opposite principle must take hold instead. In chapter 2 Aristotle provides pieces of observational evidence for this, including the following:
"Again, more males are born if copulation takes place when north than when south winds are blowing; for animals' bodies are more liquid when the wind is in the south, so that they produce more residue – and more residue is harder to concoct; hence the semen of the males is more liquid and so is the discharge of the menstrual fluids in women." [ 11 ]
In chapter 3 Aristotle provides the primary elements of his theory of inheritance and resemblances. Utilising the account of the function of semen from Book II, Aristotle describes how the movement of semen upon the proto-embryonic material gives rise to particular traits inherited from one's ancestors. Semen contains the general male principle, and contains in addition that of the particular male whose semen it is, so Socrates ' semen will contain his particular genetic traits. In fashioning the material, the semen imparts, or does not impart, genetic traits in the same way as the determination of sex: a resemblance to the father will be imparted onto the material if the semen is of a suitable temperature, provided the male principle has established the sex as male. If instead the general male principle was hot enough to be imparted but that of the particular male, Socrates, was not, then the movement may either put forth a resemblance to the mother, or relapse into that of the father's father or some other non-immediate ancestor.
Chapter 4 develops this theory for the cases of deformities , and why different animals produce different numbers of offspring . The former is due to malformed reproductive material present in the female; the latter is a function of the size of the animal, the moisture of the reproductive materials, and the heat of the semen. Chapter 5 presents the causes of superfetation , an inadequate separation of multiple young during gestation. Chapters 6 and 7 focus on the causes of other birth defects, and why males are allegedly more likely to suffer from defects . Chapters 8–10 concern the production of milk , why animals are born headfirst, and the length of gestation being proportional to the length of life, respectively.
Book V (778a – 789b)
Aristotle takes Book V to be an investigation of "the qualities by which the parts of animals differ." [ 12 ] The subjects addressed by this book are a miscellaneous range of animal parts, such as eye colour (chapter 1), body hair (chapter 3) and the pitch of the voice (chapter 7). The apparent lack of a single causal scheme or subject matter for these discrete topics has led to disagreement over how this book relates to the rest of the Generation of Animals . Some scholars [ 13 ] [ 14 ] take the book to be concerned only with material causes of intra-species differences that arise later in development, in contrast with the earlier books' systematic use of teleology. Others [ 15 ] have suggested that Book V does utilise causation other than material to a considerable extent.
| https://en.wikipedia.org/wiki/Generation_of_Animals
In population biology and demography , generation time is the average time between two consecutive generations in the lineages of a population . In human populations, generation time typically has ranged from 20 to 30 years, with wide variation based on gender and society. [ 1 ] [ 2 ] Historians sometimes use this to date events, by converting generations into years to obtain rough estimates of time.
The existing definitions of generation time fall into two categories: those that treat generation time as a renewal time of the population, and those that focus on the distance between individuals of one generation and the next. Below are the three most commonly used definitions: [ 3 ] [ 4 ]
The net reproductive rate $R_0$ is the number of offspring an individual is expected to produce during its lifetime: $R_0 = 1$ means demographic equilibrium. One may then define the generation time $T$ as the time it takes for the population to increase by a factor of $R_0$. For example, in microbiology , a population of cells undergoing exponential growth by mitosis replaces each cell by two daughter cells, so that $R_0 = 2$ and $T$ is the population doubling time .
If the population grows with exponential growth rate $r$, so that the population size at time $t$ is given by $n(t) = n(0)\,e^{rt}$, then the generation time is given by $T = \frac{\ln R_0}{r}$. That is, $T$ is such that $n(t+T) = R_0\,n(t)$, i.e. $e^{rT} = R_0$.
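To make the relation concrete, here is a minimal sketch (an illustration, not from the source) that computes $T$ from an assumed growth rate; the numeric values are invented.

```python
import math

def generation_time(r: float, R0: float) -> float:
    """Generation time T satisfying e^(r*T) = R0, i.e. T = ln(R0) / r."""
    return math.log(R0) / r

# Invented example: cells dividing in two (R0 = 2) at growth rate
# r = 0.35 per hour give T = ln(2) / 0.35, about 1.98 hours,
# which is the population doubling time.
print(f"doubling time: {generation_time(r=0.35, R0=2.0):.2f} hours")
```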
This definition is a measure of the distance between generations rather than a renewal time of the population. Since many demographic models are female-based (that is, they only take females into account), this definition is often expressed as a mother-daughter distance (the "average age of mothers at birth of their daughters"). However, it is also possible to define a father-son distance (average age of fathers at the birth of their sons) or not to take sex into account at all in the definition. In age-structured population models, an expression is given by: [ 3 ] [ 4 ]

$$T = \int_0^{\infty} x\, e^{-rx}\, \ell(x)\, m(x)\, dx,$$

where $r$ is the growth rate of the population, $\ell(x)$ is the survivorship function (probability that an individual survives to age $x$) and $m(x)$ the maternity function (birth function, age-specific fertility). For matrix population models , there is a general formula: [ 5 ]
$$T = \frac{\lambda\, \mathbf{v} \mathbf{w}}{\mathbf{v} \mathbf{F} \mathbf{w}} = \frac{1}{\sum_{i,j} e_{\lambda}(f_{ij})},$$

where $\lambda = e^{r}$ is the discrete-time growth rate of the population, $\mathbf{F} = (f_{ij})$ is its fertility matrix, $\mathbf{v}$ its reproductive value (row-vector) and $\mathbf{w}$ its stable stage distribution (column-vector); the $e_{\lambda}(f_{ij}) = \frac{f_{ij}}{\lambda}\frac{\partial \lambda}{\partial f_{ij}}$ are the elasticities of $\lambda$ to the fertilities.
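As a hedged illustration of the matrix-model formula, the numpy sketch below builds a small made-up Leslie matrix, extracts $\lambda$, $\mathbf{w}$ and $\mathbf{v}$ as its dominant right and left eigenvectors, and evaluates the generation time; all matrix entries are invented for illustration.

```python
import numpy as np

# Hypothetical 3-stage Leslie matrix: the first row holds fertilities,
# the sub-diagonal holds survival probabilities (values invented).
A = np.array([[0.0, 1.5, 2.0],
              [0.6, 0.0, 0.0],
              [0.0, 0.8, 0.0]])
F = np.zeros_like(A)
F[0, :] = A[0, :]                 # fertility matrix: reproduction entries only

# Dominant eigenvalue lambda (= e^r), right eigenvector w (stable stage
# distribution) and left eigenvector v (reproductive values).
vals, right = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals[k].real
w = np.abs(right[:, k].real)
vals_l, left = np.linalg.eig(A.T)
v = np.abs(left[:, np.argmax(vals_l.real)].real)

# T = lambda * (v.w) / (v F w); the normalization of v and w cancels here.
T = lam * (v @ w) / (v @ F @ w)
print(f"lambda = {lam:.3f}, generation time T = {T:.2f} time steps")
```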
This definition is very similar to the previous one but the population need not be at its stable age distribution. Moreover, it can be computed for different cohorts and thus provides more information about the generation time in the population. This measure is given by: [ 3 ] [ 4 ]

$$T_c = \frac{\sum_x x\, \ell(x)\, m(x)}{\sum_x \ell(x)\, m(x)}.$$
Indeed, the numerator is the sum of the ages at which a member of the cohort reproduces, and the denominator is $R_0$, the average number of offspring it produces. | https://en.wikipedia.org/wiki/Generation_time
GDMC (short for Generative Design in Minecraft ) is a programming competition to create procedurally generated settlements in Minecraft . [ 1 ] The competition is organized by academics from New York University , the University of Hertfordshire and the Queen Mary University of London .
| https://en.wikipedia.org/wiki/Generative_Design_in_Minecraft
Developmental biology is the study of the process by which animals and plants grow and develop. Developmental biology also encompasses the biology of regeneration , asexual reproduction , metamorphosis , and the growth and differentiation of stem cells in the adult organism.
The main processes involved in the embryonic development of animals are: tissue patterning (via regional specification and patterned cell differentiation ); tissue growth ; and tissue morphogenesis .
The development of plants involves processes similar to those of animals. However, plant cells are mostly immotile, so morphogenesis is achieved by differential growth, without cell movements. Also, the inductive signals and the genes involved are different from those that control animal development.
Generative biology is the generative science that explores the dynamics guiding the development and evolution of biological form. [ 1 ] [ 2 ] [ 3 ]
Cell differentiation is the process whereby different functional cell types arise in development. For example, neurons, muscle fibers and hepatocytes (liver cells) are well known types of differentiated cells. Differentiated cells usually produce large amounts of a few proteins that are required for their specific function and this gives them the characteristic appearance that enables them to be recognized under the light microscope. The genes encoding these proteins are highly active. Typically their chromatin structure is very open, allowing access for the transcription enzymes, and specific transcription factors bind to regulatory sequences in the DNA in order to activate gene expression. [ 4 ] [ 5 ] For example, NeuroD is a key transcription factor for neuronal differentiation, myogenin for muscle differentiation, and HNF4 for hepatocyte differentiation.
Cell differentiation is usually the final stage of development, preceded by several states of commitment which are not visibly differentiated. A single tissue, formed from a single type of progenitor cell or stem cell, often consists of several differentiated cell types. Control of their formation involves a process of lateral inhibition, [ 6 ] based on the properties of the Notch signaling pathway . [ 7 ] For example, in the neural plate of the embryo this system operates to generate a population of neuronal precursor cells in which NeuroD is highly expressed.
Regeneration indicates the ability to regrow a missing part. [ 8 ] This is very prevalent amongst plants, which show continuous growth, and also among colonial animals such as hydroids and ascidians. But developmental biologists have shown most interest in the regeneration of parts in free-living animals. In particular, four models have been the subject of much investigation. Two of these have the ability to regenerate whole bodies: Hydra , which can regenerate any part of the polyp from a small fragment, [ 9 ] and planarian worms, which can usually regenerate both heads and tails. [ 10 ] Both of these examples have continuous cell turnover fed by stem cells and, at least in planaria, at least some of the stem cells have been shown to be pluripotent . [ 11 ] The other two models show only distal regeneration of appendages. These are the insect appendages, usually the legs of hemimetabolous insects such as the cricket, [ 12 ] and the limbs of urodele amphibians . [ 13 ] Considerable information is now available about amphibian limb regeneration and it is known that each cell type regenerates itself, except for connective tissues, where there is considerable interconversion between cartilage, dermis and tendons. In terms of the pattern of structures, this is controlled by a re-activation of signals active in the embryo.
There is still debate about the old question of whether regeneration is a "pristine" or an "adaptive" property. [ 14 ] If the former is the case, with improved knowledge, we might expect to be able to improve regenerative ability in humans. If the latter, then each instance of regeneration is presumed to have arisen by natural selection in circumstances particular to the species, so no general rules would be expected.
The sperm and egg fuse in the process of fertilization to form a fertilized egg, or zygote . [ 15 ] This undergoes a period of divisions to form a ball or sheet of similar cells called a blastula or blastoderm . These cell divisions are usually rapid with no growth so the daughter cells are half the size of the mother cell and the whole embryo stays about the same size. They are called cleavage divisions.
Mouse epiblast primordial germ cells (see Figure: "The initial stages of human embryogenesis ") undergo extensive epigenetic reprogramming. [ 16 ] This process involves genome -wide DNA demethylation , chromatin reorganization and epigenetic imprint erasure leading to totipotency . [ 16 ] DNA demethylation is carried out by a process that utilizes the DNA base excision repair pathway. [ 17 ]
Morphogenetic movements convert the cell mass into a three layered structure consisting of multicellular sheets called ectoderm , mesoderm and endoderm . These sheets are known as germ layers . This is the process of gastrulation . During cleavage and gastrulation the first regional specification events occur. In addition to the formation of the three germ layers themselves, these often generate extraembryonic structures, such as the mammalian placenta , needed for support and nutrition of the embryo, [ 18 ] and also establish differences of commitment along the anteroposterior axis (head, trunk and tail). [ 19 ]
Regional specification is initiated by the presence of cytoplasmic determinants in one part of the zygote. The cells that contain the determinant become a signaling center and emit an inducing factor. Because the inducing factor is produced in one place, diffuses away, and decays, it forms a concentration gradient, high near the source cells and low further away. [ 20 ] [ 21 ] The remaining cells of the embryo, which do not contain the determinant, are competent to respond to different concentrations by upregulating specific developmental control genes. This results in a series of zones becoming set up, arranged at progressively greater distance from the signaling center. In each zone a different combination of developmental control genes is upregulated. [ 22 ] These genes encode transcription factors which upregulate new combinations of gene activity in each region. Among other functions, these transcription factors control expression of genes conferring specific adhesive and motility properties on the cells in which they are active. Because of these different morphogenetic properties, the cells of each germ layer move to form sheets such that the ectoderm ends up on the outside, mesoderm in the middle, and endoderm on the inside. [ 23 ] [ 24 ]
Morphogenetic movements not only change the shape and structure of the embryo, but by bringing cell sheets into new spatial relationships they also make possible new phases of signaling and response between them. In addition, the first morphogenetic movements of embryogenesis, such as gastrulation, epiboly and twisting , directly activate pathways involved in endomesoderm specification through mechanotransduction processes. [ 25 ] [ 26 ] This property has been suggested to be evolutionarily inherited from endomesoderm specification mechanically stimulated by marine environmental hydrodynamic flow in the first animal organisms (the first metazoans). [ 27 ] Twisting along the body axis by a left-handed chirality is found in all chordates (including vertebrates) and is addressed by the axial twist theory . [ 28 ]
Growth in embryos is mostly autonomous. [ 29 ] For each territory of cells the growth rate is controlled by the combination of genes that are active. Free-living embryos do not grow in mass as they have no external food supply. But embryos fed by a placenta or extraembryonic yolk supply can grow very fast, and changes to relative growth rate between parts in these organisms help to produce the final overall anatomy.
The whole process needs to be coordinated in time, and how this is controlled is not understood. There may be a master clock able to communicate with all parts of the embryo that controls the course of events, or timing may depend simply on local causal sequences of events. [ 30 ]
Developmental processes are very evident during the process of metamorphosis . This occurs in various types of animal such as insects, amphibians, some fish, and many marine invertebrates. [ 31 ] Well-known examples are seen in frogs, which usually hatch as tadpoles and metamorphose into adult frogs, and certain insects, which hatch as larvae and are then remodeled into the adult form during a pupal stage.
All the developmental processes listed above occur during metamorphosis. Examples that have been especially well studied include tail loss and other changes in the tadpole of the frog Xenopus , [ 32 ] [ 33 ] and the biology of the imaginal discs, which generate the adult body parts of the fly Drosophila melanogaster . [ 34 ] [ 35 ]
Plant development is the process by which structures originate and mature as a plant grows. It is studied in plant anatomy and plant physiology as well as plant morphology.
Plants constantly produce new tissues and structures throughout their life from meristems [ 36 ] located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature.
The properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts." [ 37 ]
A vascular plant begins from a single celled zygote , formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis . As this happens, the resulting cells will organize so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" ( cotyledons ). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life.
Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis . New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. [ 38 ] Branching occurs when small clumps of cells left behind by the meristem, which have not yet undergone cellular differentiation to form a specialized tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in widening of a root or shoot from divisions of cells in a cambium . [ 39 ]
In addition to growth by cell division, a plant may grow through cell elongation . [ 40 ] This occurs when individual cells or groups of cells grow longer. Not all plant cells grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem bends to the side of the slower-growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light ( phototropism ), gravity ( gravitropism ), water ( hydrotropism ), and physical contact ( thigmotropism ).
Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). [ 41 ] Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin.
Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility.
Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and its evolution. During the colonization of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants. [ 42 ]
Most land plants share a common ancestor with multicellular algae. An example of the evolution of plant morphology is seen in the charophytes; studies have shown that charophytes have traits that are homologous to land plants. There are two main theories of the evolution of plant morphology: the homologous theory and the antithetic theory. The commonly accepted theory is the antithetic theory, which states that the multiple mitotic divisions that take place before meiosis cause the development of the sporophyte. The sporophyte then develops as an independent organism. [ 43 ]
Much of developmental biology research in recent decades has focused on the use of a small number of model organisms . It has turned out that there is much conservation of developmental mechanisms across the animal kingdom. In early development different vertebrate species all use essentially the same inductive signals and the same genes encoding regional identity. Even invertebrates use a similar repertoire of signals and genes although the body parts formed are significantly different. Model organisms each have some particular experimental advantages which have enabled them to become popular among researchers. In one sense they are "models" for the whole animal kingdom, and in another sense they are "models" for human development, which is difficult to study directly for both ethical and practical reasons. Model organisms have been most useful for elucidating the broad nature of developmental mechanisms. The more detail is sought, the more they differ from each other and from humans.
Also popular for some purposes have been sea urchins [ 52 ] [ 44 ] and ascidians . [ 53 ] For studies of regeneration, urodele amphibians such as the axolotl Ambystoma mexicanum are used, [ 54 ] as are planarian worms such as Schmidtea mediterranea . [ 10 ] Organoids have also been demonstrated to be an efficient model for development. [ 55 ] Research on plant development has focused on the thale cress Arabidopsis thaliana as a model organism. [ 56 ] | https://en.wikipedia.org/wiki/Generative_biology
Generative design is an iterative design process that uses software to generate outputs that fulfill a set of constraints iteratively adjusted by a designer. Whether a human, test program, or artificial intelligence , the designer algorithmically or manually refines the feasible region of the program's inputs and outputs with each iteration to fulfill evolving design requirements. [ 1 ] By employing computing power to evaluate more design permutations than a human alone is capable of, the process is capable of producing an optimal design that mimics nature 's evolutionary approach to design through genetic variation and selection . [ citation needed ] The output can be images, sounds, architectural models , animation , and much more. It is, therefore, a fast method of exploring design possibilities that is used in various design fields such as art , architecture , communication design , and product design . [ 2 ]
Generative design has become more important, largely due to new programming environments or scripting capabilities that have made it relatively easy, even for designers with little programming experience, to implement their ideas. [ 3 ] Additionally, this process can create solutions to substantially complex problems that would otherwise be resource-exhaustive with an alternative approach, making it a more attractive option for problems with a large or unknown solution set. [ 4 ] It is also facilitated by tools in commercially available CAD packages. [ 5 ] Not only are implementation tools more accessible, but also tools leveraging generative design as a foundation. [ 6 ]
Generative design in architecture is an iterative design process that enables architects to explore a wider solution space with more possibility and creativity . [ 7 ] Architectural design has long been regarded as a wicked problem . [ 8 ] Compared with the traditional top-down design approach, generative design can address design problems efficiently by using a bottom-up paradigm in which parametrically defined rules generate complex solutions. The solution itself then evolves toward a good, if not optimal, solution. [ 9 ] The advantage of using generative design as a design tool is that it does not construct fixed geometries, but takes a set of design rules that can generate an infinite set of possible design solutions. The generated design solutions can be more sensitive, responsive, and adaptive to the problem.
Generative design involves rule definition and result analysis, which are integrated with the design process. [ 10 ] By defining parameters and rules, the generative approach is able to provide optimized solutions for both structural stability and aesthetics. Possible design algorithms include cellular automata , shape grammar , genetic algorithms , space syntax , and most recently, artificial neural networks . Due to the high complexity of the generated solutions, rule-based computational tools, such as the finite element method and topology optimisation , are preferred for evaluating and optimising the generated solution. [ 11 ] The iterative process provided by computer software enables a trial-and-error approach in design, and involves architects intervening in the optimisation process.
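As a hedged sketch of the generate-evaluate loop described above, the toy genetic algorithm below evolves a vector of design parameters against an invented fitness function standing in for a real structural or aesthetic evaluator; nothing here comes from a specific CAD tool or named system.

```python
import random

def fitness(design):
    # Invented stand-in objective: prefer parameter vectors summing to 10.
    # In a real pipeline this would call a structural or daylight simulation.
    return -abs(sum(design) - 10.0)

def evolve(pop_size=30, genes=5, generations=50):
    # Random initial population of candidate designs.
    pop = [[random.uniform(0, 5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]             # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]                # crossover
            child[random.randrange(genes)] += random.gauss(0, 0.3)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The design choice mirrors the text: the rules (fitness, crossover, mutation) are fixed, while the geometry itself is never drawn directly; the population explores the space the rules define.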
Historical precedent work includes Antoni Gaudí 's Sagrada Família , which used rule-based geometrical forms for structures, [ 12 ] and Buckminster Fuller 's Montreal Biosphere , where the rules to generate individual components are designed rather than the final product. [ 13 ]
More recent generative design cases include Foster and Partners ' Queen Elizabeth II Great Court , where the tessellated glass roof was designed using a geometric schema to define hierarchical relationships, and the generated solution was then optimized based on geometrical and structural requirements. [ 14 ]
Generative design in sustainable design is an effective approach for addressing energy efficiency and climate change at the early design stage, recognizing that buildings contribute approximately one-third of global greenhouse gas emissions and 30–40% of total energy use. [ 15 ] It integrates environmental principles with algorithms, enabling exploration of countless design alternatives to enhance energy performance, reduce carbon footprints, and minimize waste.
A key feature of generative design in sustainable design is its ability to incorporate Building Performance Simulations (BPS) into the design process. Simulation programs like EnergyPlus , Ladybug Tools , and so on, combined with generative algorithms, can optimize design solutions for cost-effective energy use and zero-carbon building designs. For example, the GENE_ARCH system used a Pareto algorithm with DOE2.1E building energy simulation for the whole building design optimization. [ 16 ] Generative design has improved sustainable facade design, as illustrated by the algorithm of cellular automata and daylight simulations in adaptive facade design. [ 17 ] In addition, genetic algorithms were used with radiation simulations for energy-efficient PV modules on high-rise building facades. [ 18 ] Generative design is also applied to life cycle analysis (LCA), as demonstrated by a framework using grid search algorithms to optimize exterior wall design for minimum environmental embodied impact. [ 19 ]
Multi-objective optimization embraces multiple diverse sustainability goals, such as interactive kinetic louvers using biomimicry and daylight simulations to enhance daylight, visual comfort and energy efficiency. [ 20 ] Studies of PV and shading systems show they can maximize on-site electricity generation while improving visual quality and daylight performance. [ 21 ]
AI and machine learning (ML) further improve computation efficiency in complex climate-responsive sustainable design. One study employed reinforcement learning to identify the relationship between design parameters and energy use for a sustainable campus, [ 22 ] while other studies tried hybrid algorithms, such as using a genetic algorithm and GANs to balance daylight illumination and thermal comfort under different roof conditions. [ 23 ] Other popular AI tools have also been integrated, including deep reinforcement learning (DRL) and computer vision (CV), to generate an urban block according to direct sunlight hours and solar heat gains. [ 24 ] These AI-driven generative design methods enable faster simulations and design decision making, resulting in designs that are environmentally responsible.
Additive manufacturing (AM) is a process that creates physical models directly from 3D data by joining materials layer by layer. It is used in industries to produce a variety of end-use parts , which are final components designed for direct application in products or systems. AM provides design flexibility and enables material reduction in lightweight applications, such as aerospace, automotive, medical, and portable electronic devices, where minimizing weight is critical for performance. Generative design, one of the four key methods for lightweight design in AM, is commonly applied to optimize structures for specific performance requirements. [ 25 ]
Generative design can help create optimized solutions that balance multiple objectives, such as enhancing performance while minimizing cost. [ 26 ] In design for additive manufacturing (DfAM), multi-objective topology optimization is used to generate a set of candidate solutions. Designers then assess these options using their expertise and key performance indicators (KPIs) to select the best option for implementation. [ 25 ]
However, integrating AM constraints (e.g., build speed, materials, build envelope, and accuracy) into generative design remains challenging, as ensuring all solutions are valid is complex. [ 25 ] Balancing multiple design objectives while limiting computational costs adds further challenges for designers. [ 27 ] To overcome these difficulties, researchers have proposed a generative design method with manufacturing validation to improve decision-making efficiency. This method starts with a constructive solid geometry (CSG)-based technique to create smooth topology shapes with precise geometric control. A genetic algorithm is then used to optimize these shapes, and the method offers designers a set of top non-dominated solutions on the Pareto front for further evaluation and final decision-making. [ 27 ] By combining multiple techniques, this method can generate many high-quality solutions with smooth boundaries at lower computational costs, making it a practical approach for designing lightweight structures in AM.
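To illustrate the non-dominated filtering step mentioned above, here is a small sketch (objective values invented, not taken from the cited method) that extracts the Pareto front from candidate designs scored on two minimization objectives, such as cost and mass.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective (lower is better)
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of objective tuples."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Invented (cost, mass) scores for five hypothetical AM design candidates.
scores = [(3.0, 9.0), (4.0, 6.0), (5.0, 5.5), (6.0, 4.0), (7.0, 8.0)]
print(pareto_front(scores))   # (7.0, 8.0) drops out: dominated by (4.0, 6.0)
```

The surviving set is what a designer would then rank with domain expertise and KPIs, as the text describes.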
Building on topology optimization methods, software providers introduced generative design features in their tools, helping designers set criteria and rank solutions. [ 25 ] Industry is driving advancements in generative design for AM, highlighting the need for tools that not only offer a range of solution choices but also streamline workflows for industrial use. [ 26 ] | https://en.wikipedia.org/wiki/Generative_design |
In mathematics and physics , the term generator or generating set may refer to any of a number of related concepts. The underlying concept in each case is that of a smaller set of objects, together with a set of operations that can be applied to it, that result in the creation of a larger collection of objects, called the generated set . The larger set is then said to be generated by the smaller set. It is commonly the case that the generating set has a simpler set of properties than the generated set, thus making it easier to discuss and examine. It is usually the case that properties of the generating set are in some way preserved by the act of generation; likewise, the properties of the generated set are often reflected in the generating set.
A list of examples of generating sets follows.
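As one concrete sketch of the idea (an illustration, not from the source), the snippet below closes a small set of permutations under composition, recovering the full symmetric group S3 from just two generators.

```python
def compose(p, q):
    """Composition of permutations given as image tuples: (p∘q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def generate(generators):
    """Close a set of permutations under composition: the generated group."""
    group = set(generators)
    frontier = set(generators)
    while frontier:
        new = set()
        for a in frontier:
            for b in group:
                new.add(compose(a, b))
                new.add(compose(b, a))
        new -= group          # keep only genuinely new elements
        group |= new
        frontier = new
    return group

# A transposition and a 3-cycle generate all 6 elements of S3.
s3 = generate([(1, 0, 2), (1, 2, 0)])
print(len(s3))  # -> 6
```

Two elements suffice to describe a six-element group, which is exactly the economy the paragraph above attributes to generating sets.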
In the study of differential equations , and commonly those occurring in physics , one has the idea of a set of infinitesimal displacements that can be extended to obtain a manifold , or at least, a local part of it, by means of integration. The general concept is of using the exponential map to take the vectors in the tangent space and extend them, as geodesics , to an open set surrounding the tangent point. In this case, it is not unusual to call the elements of the tangent space the generators of the manifold. When the manifold possesses some sort of symmetry, there is also the related notion of a charge or current , which is sometimes also called the generator, although, strictly speaking, charges are not elements of the tangent space. | https://en.wikipedia.org/wiki/Generator_(mathematics) |
Generic Bootstrapping Architecture ( GBA ) is a technology that enables the authentication of a user. This authentication is possible if the user owns a valid identity on an HLR ( Home Location Register ) or on an HSS ( Home Subscriber Server ).
GBA is standardized by the 3GPP ( http://www.3gpp.org/ftp/Specs/html-info/33220.htm ). User authentication is instantiated by means of a shared secret: one copy in the smartcard , for example a SIM card inside the mobile phone, and the other on the HLR/HSS.
GBA authenticates by having a network component challenge the smartcard and verify that the answer is the one predicted by the HLR/HSS.
Instead of asking the service provider to trust the BSF (Bootstrapping Server Function) and rely on it for every authentication request, the BSF establishes a shared secret between the smartcard and the service provider. This shared secret is limited in time and to a specific domain.
This solution combines some strong points of certificates and shared secrets without some of their weaknesses:
- There is no need for a user enrollment phase or secure deployment of keys, which makes this solution very low-cost compared to PKI .
- Another advantage is the ease with which the authentication method may be integrated into terminals and service providers, as it is based on HTTP 's well-known " Digest access authentication " (see the sketch after this list). Every web server already implements HTTP digest authentication, and the effort to implement GBA on top of digest authentication is minimal. For example, it could be implemented on SimpleSAMLphP http://rnd.feide.no/simplesamlphp Archived 2008-12-19 at the Wayback Machine in about 500 lines of PHP, of which only a few tens of lines are service-provider specific, making it easy to port to another web site.
- On the device side, the following is needed:
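As a hedged sketch of the digest mechanism the list above relies on, the following computes a classic RFC 2617 Digest response (without the qop extension, for brevity); in GBA the bootstrapped key plays the role of the password. All example values are invented.

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, secret, method, uri, nonce):
    """RFC 2617 Digest response without qop: MD5(HA1:nonce:HA2)."""
    ha1 = md5_hex(f"{user}:{realm}:{secret}")  # in GBA, secret = bootstrapped key
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

# Invented example values:
print(digest_response("alice", "naf.example.org", "ks-naf-base64",
                      "GET", "/service", "abc123nonce"))
```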
The contents of this section are drawn from external literature. [ 1 ]
There are two ways to use GAA (Generic Authentication Architecture).
In the shared secret case, the customer and the operator are first mutually authenticated through 3G Authentication and Key Agreement (AKA), and they agree on session keys that can then be used between the client and the services that the customer wants to use.
This is called bootstrapping .
After that, the services can retrieve the session keys from the operator, and these keys can be used in some application-specific protocol between the client and the services.
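The sketch below illustrates only the idea of deriving a time- and domain-limited, NAF-specific key from the bootstrapped master key. It is a simplification, not the actual 3GPP KDF of TS 33.220 (which encodes each input with length fields); the function name and all values are invented.

```python
import hashlib
import hmac

def derive_naf_key(ks: bytes, rand: bytes, impi: str, naf_id: str) -> bytes:
    """Simplified stand-in for GBA key derivation: bind the bootstrapped
    key Ks to one NAF so the shared secret is specific to that domain.
    (The real KDF in TS 33.220 Annex B is more structured than this.)"""
    data = b"gba-me" + rand + impi.encode() + naf_id.encode()
    return hmac.new(ks, data, hashlib.sha256).digest()

# Invented example values:
ks = bytes.fromhex("00112233445566778899aabbccddeeff")
print(derive_naf_key(ks, b"\x01" * 16, "user@operator.example",
                     "naf.example.org").hex())
```

Because both the handset and the BSF hold Ks and the same inputs, each can derive the NAF key independently, which is what lets the service provider obtain a shared secret without ever seeing the long-term subscriber key.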
The figure referred to above (not reproduced here) shows the network GAA entities and the interfaces between them; optional entities are drawn with dotted lines. The User Equipment (UE) is, for example, the user's mobile phone. The UE and the Bootstrapping Server Function ( BSF ) mutually authenticate each other over the Ub [2] interface, using the HTTP Digest AKA protocol. The UE also communicates with the Network Application Functions ( NAF ), which are the application servers, over the Ua [4] interface, which can use any specific application protocol necessary.
The BSF retrieves the subscriber's data from the Home Subscriber Server (HSS) over the Zh [3] interface, which uses the Diameter Base Protocol. If there are several HSSs in the network, the BSF must first know which one to use. This can be done either by configuring a pre-defined HSS into the BSF, or by querying the Subscriber Locator Function (SLF).
NAFs retrieve the session key from the BSF over the Zn [5] interface, which also uses the Diameter Base Protocol. If the NAF is not in the home network, it must use a Zn-proxy to contact the BSF .
However, despite the many advantages and potential uses of GBA, its implementation in handsets has been limited since GBA's standardization in 2006. Most notably, GBA was implemented in Symbian-based handsets. | https://en.wikipedia.org/wiki/Generic_Bootstrapping_Architecture