the trunk of the Y shape. In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily. In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction. Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ. This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ (gamma) heavy chains which characterize the IgG class of antibodies. === Antigen-binding site === The variable domains can also be referred to as the Fv region. It is the subregion of Fab that binds to an antigen. More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody. When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody. These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen. Three CDRs from each of the heavy and light chains together form an antigen-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen. Typically though, only a few residues contribute to most of the binding energy. The existence of two identical antigen-binding sites allows antibody molecules to bind strongly to multivalent antigen (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger
{ "page_id": 2362, "source": null, "title": "Antibody" }
antigen-antibody complexes. The structures of CDRs have been clustered and classified by Chothia et al. and more recently by North et al. and Nikoloudis et al. However, describing an antibody's binding site using a single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities. In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes. === Fc region === The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is in modulating immune cell activity: it is where effector molecules bind, triggering various effects after the antibody Fab region binds to an antigen. Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding the C1q protein complex. IgG or IgM can bind to C1q, but IgA cannot; therefore, IgA does not activate the classical complement pathway. Another role of the Fc region is to selectively distribute different antibody classes across the body. In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport them across the placenta, from the mother to the fetus. In addition, binding to FcRn endows IgG with an exceptionally long half-life of 3-4 weeks relative to other plasma proteins. In most cases (depending on allotype), IgG3 has mutations at the FcRn binding site that lower its affinity for FcRn; these are thought to have evolved
to limit the highly inflammatory effects of this subclass. Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues. These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules. === Protein structure === The N-terminus of each chain is situated at the tip. Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily: it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif. The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond. === Antibody complexes === Secreted antibodies can occur as a single Y-shaped unit, a monomer. However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units). IgG can also form hexamers, though no J chain is required. IgA tetramers and pentamers have also been reported. Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex. Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc. Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies. An extreme example is the clumping, or agglutination, of red blood cells with antibodies in blood typing to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation. === B cell receptors === The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B
cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors. These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences. == Classes == Antibodies come in different varieties known as isotypes or classes. In humans there are five antibody classes, known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1 and IgA2. The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), and μ (mu) give rise to IgA, IgG, IgD, IgE, and IgM, respectively. The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region. The classes differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table. For example, IgE antibodies are responsible for an allergic response consisting of histamine release from mast cells, sometimes the sole contributor to asthma (though other pathways exist, as do conditions with symptoms very similar to, but not technically, asthma). The variable region of these antibodies binds to the allergen, for example house dust mite particles, while the Fc region (in the ε heavy
chains) binds to Fc receptor ε on a mast cell, triggering its degranulation: the release of molecules stored in its granules. The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell surface bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. This requires cytokines from T helper cells, unless antigen cross-links B cell receptors. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Activated B cells that encounter certain signaling molecules undergo immunoglobulin class switching, also known as isotype switching, which causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG. === Light chain types === In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei). === In
non-mammalian animals === In most placental mammals, the structure of antibodies is generally the same. Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier. Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains) which moreover feature longer chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, alpacas) are also notable for producing heavy-chain-only antibodies. == Antibody–antigen interactions == The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants. Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities. 
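Because binding is governed by these weak, reversible forces, antibody occupancy of an antigen follows simple mass-action equilibrium. A minimal sketch of this relationship (the dissociation constants and antibody concentration below are illustrative assumptions, not measured values):

```python
# Fraction of antigen bound at equilibrium for a 1:1 antibody-antigen
# interaction, from the law of mass action:
#   fraction_bound = [Ab] / (Kd + [Ab])
# A lower dissociation constant (Kd) means higher affinity.

def fraction_bound(ab_conc_molar: float, kd_molar: float) -> float:
    """Equilibrium fraction of antigen occupied by antibody."""
    return ab_conc_molar / (kd_molar + ab_conc_molar)

# Illustrative values only:
high_affinity_kd = 1e-9   # 1 nM, plausible for an affinity-matured antibody
low_affinity_kd = 1e-6    # 1 uM, e.g. a weak cross-reactive interaction
ab = 1e-8                 # 10 nM antibody

print(f"high affinity: {fraction_bound(ab, high_affinity_kd):.2f}")  # 0.91
print(f"low affinity:  {fraction_bound(ab, low_affinity_kd):.2f}")   # 0.01
```

With antibody present at tenfold excess over Kd, most antigen is occupied; a low-affinity (high-Kd) interaction leaves it largely free. This is one way to see why an antibody's affinity is described as relative rather than absolute.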
== Function == The main categories of antibody action include the following: neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective; agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis; precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis; and complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to lysis of the foreign cell and encouragement of inflammation
by chemotactically attracting inflammatory cells. More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity. Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures. At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: they prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies will also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens). === Activation of complement === Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade.
Second, some
complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis). === Activation of effector cells === To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region. Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell: phagocytes will phagocytose; mast cells and neutrophils will degranulate; and natural killer cells will release cytokines and cytotoxic molecules. This will ultimately result in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens. === Natural antibodies === Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway leading to lysis of enveloped virus particles long before the adaptive immune response is activated.
Antibodies are produced exclusively by B cells in response to antigens; initially, antibodies are formed as membrane-bound receptors,
but upon activation by antigens and helper T cells, B cells differentiate to produce soluble antibodies. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and generated in response to production of this sugar by bacteria contained in the human gut. These antibodies undergo quality checks in the endoplasmic reticulum (ER), which contains proteins that assist in proper folding and assembly. Rejection of xenotransplanted organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue. == Immunoglobulin diversity == Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies, allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes. === Domain variability === The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light
chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability. This combination is called V(D)J recombination and is discussed below. === V(D)J recombination === Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (i.e. V2 family) for lambda light chain immunoglobulin is coupled with the activation of
microRNA miR-650, which further influences the biology of B cells. RAG proteins play an important role in V(D)J recombination by cutting DNA at particular regions. Without the presence of these proteins, V(D)J recombination would not occur. After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion); thus each B cell can produce antibodies containing only one kind of variable chain. === Somatic hypermutation and affinity maturation === Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains. This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival, allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation.
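The combinatorial arithmetic behind V(D)J recombination can be sketched directly. The figure of about 65 heavy-chain V genes comes from the text above; the remaining segment counts are commonly cited approximations for the human loci and should be treated as illustrative rather than exact:

```python
# Approximate functional gene segment counts at the human Ig loci.
# The heavy-chain V count (~65) is from the text; the D, J, and
# light-chain counts are commonly cited approximations.
heavy = {"V": 65, "D": 27, "J": 6}
kappa = {"V": 40, "J": 5}
lam = {"V": 30, "J": 4}

# One V, one D, and one J are chosen for the heavy chain;
# one V and one J for either a kappa or a lambda light chain.
heavy_combos = heavy["V"] * heavy["D"] * heavy["J"]
light_combos = kappa["V"] * kappa["J"] + lam["V"] * lam["J"]

# Each B cell pairs one heavy-chain rearrangement with one light chain,
# so the combinatorial diversity multiplies:
total = heavy_combos * light_combos
print(f"heavy-chain combinations: {heavy_combos:,}")  # 10,530
print(f"light-chain combinations: {light_combos:,}")  # 320
print(f"paired combinations:      {total:,}")         # 3,369,600
```

Segment choice alone yields only a few million combinations; junctional diversity (imprecise joining and nucleotide addition at segment boundaries) and somatic hypermutation multiply this by several orders of magnitude, which is how the repertoire approaches the roughly 10 billion distinct antibodies mentioned earlier.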
Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent
on help from helper T cells. === Class switching === Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment. Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype. === Specificity designations === An antibody can be called
monospecific if it has specificity for a single antigen or epitope, or bispecific if it has affinity for two different antigens or two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). In contrast, monoclonal antibodies are identical antibodies produced by a single B cell. === Asymmetrical antibodies === Heterodimeric antibodies, which are also asymmetrical antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knobs" part is engineered by replacing a small amino acid with a larger one. It fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between each chain. The "knobs-into-holes" shape facilitates antibody dependent cell mediated cytotoxicity. Single-chain variable fragments (scFv) are connected to the variable domain of the heavy and light chain via a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it specificity. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing for the binding specificities of two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation. To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. 
Artificial antibodies are largely diverse
protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinatorial design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms. Heterodimeric antibodies have a greater range in shapes they can take and the drugs that are attached to the arms do not have to be the same on each arm, allowing for different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change of shape from the natural form should lead to decreased functionality. === Interchromosomal DNA transposition === Antibody diversification typically occurs through somatic hypermutation, class switching, and affinity maturation targeting the BCR gene loci, but on occasion more unconventional forms of diversification have been documented. For example, in the case of malaria caused by Plasmodium falciparum, some antibodies from those who had been infected demonstrated an insertion from chromosome 19 containing a 98-amino acid stretch from leukocyte-associated immunoglobulin-like receptor 1, LAIR1, in the elbow joint. This represents a form of interchromosomal transposition. LAIR1 normally binds collagen, but can recognize repetitive interspersed families of polypeptides (RIFIN) family members that are highly expressed on the surface of P. falciparum-infected red blood cells. In fact, these antibodies underwent affinity maturation that enhanced affinity for RIFIN but abolished affinity for collagen. These "LAIR1-containing" antibodies have been found in 5-10% of donors from Tanzania and Mali, though not in European donors.
However, European donors did show 100-1000 nucleotide insertions inside the elbow joint as well. This particular phenomenon may be specific
to malaria, as infection is known to induce genomic instability. == History == The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody has formal analogy to the word antitoxin and a similar concept to Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw; the antitoxin is something directed against a toxin, while the antibody is a body directed against something. The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. This idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing, a process that he named opsonization. In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that
antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies. Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies. == Medical applications == === Disease diagnosis === Detection of particular antibodies is
a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago and the B cells generating these specific antibodies have naturally decayed. In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of a patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, IgM levels are often elevated in patients with primary biliary cirrhosis, whereas IgA deposition along hepatic sinusoids can suggest alcoholic liver disease. Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune-mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and for antibody screening in antenatal women. In practice, several immunodiagnostic methods based on detection of antigen-antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Over-the-counter home pregnancy tests rely on antibodies directed against human chorionic gonadotropin (hCG). New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer.
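The antibody titer mentioned above is commonly reported as the reciprocal of the greatest serial dilution that still gives a positive assay signal. A minimal sketch of that endpoint calculation, using hypothetical optical-density values and a hypothetical positivity cutoff:

```python
def endpoint_titer(od_values, start_dilution=100, factor=2, cutoff=0.2):
    """Return the endpoint titer: the reciprocal of the last serial dilution
    whose assay signal (e.g. ELISA optical density) stays above the cutoff.

    od_values: signals for dilutions 1:start_dilution, 1:(start_dilution*factor), ...
    Returns None if even the first dilution is below the cutoff.
    """
    titer = None
    dilution = start_dilution
    for od in od_values:
        if od < cutoff:
            break
        titer = dilution
        dilution *= factor
    return titer

# 2-fold series starting at 1:100; the signal drops below the cutoff at the 5th well
print(endpoint_titer([1.9, 1.4, 0.9, 0.5, 0.15, 0.08]))  # 800
```

The cutoff and dilution scheme vary by assay; this only illustrates the reciprocal-dilution convention.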
=== Disease therapy === Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer. Some immune
deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies, in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual.

=== Prenatal therapy ===
Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh−) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, the consequent blood mixing may sensitize an Rh− mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn. Rho(D) immune globulin antibodies are specific for the human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent the sensitization that may occur when an Rh-negative mother has an Rh-positive fetus. Treatment of a mother with anti-RhD antibodies prior to and immediately after trauma and delivery destroys Rh antigen from the fetus in the mother's system. This occurs before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rh antigens of the current or subsequent babies. Rho(D) immune globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself.

== Research applications ==
Specific antibodies are produced by injecting an antigen into
a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse for large quantities of antibody. Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography. In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cells express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques. 
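Quantification by ELISA, mentioned above, works by comparing a sample's signal against a standard curve of known analyte concentrations. A minimal sketch using hypothetical calibration values; real assays typically fit a four-parameter logistic model rather than interpolating between points:

```python
import math

def concentration_from_od(od, standards):
    """Estimate analyte concentration from an ELISA optical density by
    log-linear interpolation between bracketing standard-curve points.

    standards: list of (concentration, od) pairs sorted by ascending od.
    All values here are hypothetical, for illustration only.
    """
    for (c_lo, od_lo), (c_hi, od_hi) in zip(standards, standards[1:]):
        if od_lo <= od <= od_hi:
            # interpolate in log-concentration space
            frac = (od - od_lo) / (od_hi - od_lo)
            return math.exp(
                math.log(c_lo) + frac * (math.log(c_hi) - math.log(c_lo))
            )
    raise ValueError("OD outside the range of the standard curve")

# hypothetical standard curve: (concentration in ng/mL, optical density)
standards = [(1.0, 0.10), (10.0, 0.35), (100.0, 0.80), (1000.0, 1.60)]
print(round(concentration_from_od(0.575, standards), 1))  # 31.6
```

An OD halfway between two standards maps to the geometric mean of their concentrations, which reflects the roughly log-linear response of many immunoassays over their working range.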
Antibodies used in research are some of the most powerful, yet most problematic, reagents: a tremendous number of factors must be controlled in any experiment, including cross-reactivity (the antibody recognizing multiple epitopes) and affinity, which can vary widely depending on experimental conditions such as pH, solvent, and the state of the tissue. Multiple attempts have been made to improve both the way that researchers validate antibodies and the ways in which they report on antibodies. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested and qualified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper is co-published in 4 journals that implemented the RRID Standard for research resource citation, which draws data from antibodyregistry.org as the source of antibody identifiers (see also the group at Force11). Antibody regions can be used to further biomedical research by acting as a guide for drugs to reach their target. Several applications involve using bacterial plasmids, such as the pFUSE-Fc plasmid, to tag proteins with the Fc region of the antibody.

== Regulations ==
=== Production and testing ===
There are several ways to obtain antibodies, including in vivo techniques like animal immunization and various in vitro approaches, such as the phage display method. Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated.
Validation studies should at least include:
* demonstration that the process consistently produces antibody of good quality (the process should be validated)
* the efficiency of antibody purification (all impurities and viruses must be eliminated)
* characterization of the purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, etc.)
* virus clearance studies

=== Before clinical trials ===
Product safety testing: sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, etc. Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the dangerous potential of the product.
Feasibility testing: these are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing).

=== Preclinical studies ===
Testing cross-reactivity of the antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate should be determined with quick-frozen adult tissues) or in vivo (with appropriate animal models).
Preclinical pharmacology and toxicity testing: preclinical safety testing of an antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible.
Animal toxicity studies: acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing.
Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages, antibody activities, and evaluation of the potential clinical effects.

== Structure prediction and computational antibody design ==
The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution.
This information is used for protein engineering, modifying the antigen binding affinity, and identifying an epitope, of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative
to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is a novel antibody FV region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities. The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. A variety of methods can be used to sequence an antibody, including Edman degradation and cDNA-based approaches; however, one of the most common modern methods for peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for the data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases.
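At the core of the database-search approach is comparing computed peptide masses against observed spectrum masses. A minimal sketch of that step, using standard monoisotopic residue masses (abridged table) and a hypothetical candidate list:

```python
# Monoisotopic residue masses (Da) for a few amino acids -- abridged table
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111,
}
WATER = 18.01056  # mass of H2O added to the sum of residue masses

def peptide_mass(seq):
    """Monoisotopic mass of an unmodified peptide."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def match_candidates(observed_mass, candidates, tol=0.01):
    """Database-search step: keep candidate peptides whose computed mass
    falls within the mass tolerance of an observed precursor mass."""
    return [p for p in candidates if abs(peptide_mass(p) - observed_mass) <= tol]

# hypothetical search: which candidates match the observed precursor mass?
print(match_candidates(peptide_mass("GASP"), ["GASP", "VLKR", "AG"]))  # ['GASP']
```

Real search engines additionally score fragment-ion matches against theoretical fragmentation patterns; this only illustrates the precursor-mass filter.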
Many versions of shotgun protein sequencing are able to increase the coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in attempts to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined
top-down and bottom-up approaches. Current technologies can assemble protein sequences with high accuracy by integrating de novo sequencing of peptides, intensity, and positional confidence scores from database and homology searches.

== Antibody mimetic ==
Antibody mimetics are organic compounds that, like antibodies, can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic and therapeutic agents.

== Binding antibody unit ==
BAU (binding antibody unit, often as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity.

== See also ==

== References ==

== External links ==
* Mike's Immunoglobulin Structure/Function Page at University of Cambridge
* Antibodies as the PDB molecule of the month
* Discussion of the structure of antibodies at RCSB Protein Data Bank
* A hundred years of antibody therapy: history and applications of antibodies in the treatment of disease, at University of Oxford
* How Lymphocytes Produce Antibody from Cells Alive!
Quasi-linkage equilibrium (QLE) is a mathematical approximation used in solving population genetics problems. Motoo Kimura introduced the notion to simplify a model of Fisher's fundamental theorem. QLE greatly simplifies population genetic equations whilst making the assumption of weak selection and weak epistasis. Selection under these conditions rapidly changes allele frequencies to a state where they evolve as if in linkage equilibrium. Kimura originally provided the sufficient conditions for QLE in two-locus systems, but more recently several researchers have shown how QLE occurs in general multilocus systems. QLE allows theorists to approximate linkage disequilibria by simple expressions, often simple functions of allele or genotype frequencies, thereby providing solutions to highly complex problems involving selection on multiple loci or polygenic traits. QLE also plays an important role in justifying approximations in the derivation of quantitative genetic equations from Mendelian principles.

== Simple Model ==
Let X, Y, Z and U represent the frequencies of the four possible genotypes in a haploid two-locus-two-allele model. Kimura's original model showed that the ratio R = XU/(YZ) rapidly approaches a stable state R̂ if epistatic effects are small relative to recombination. Deviations from R̂ will be reduced by the recombination fraction every generation.

== References ==
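The two-locus dynamics above can be illustrated with a minimal numerical sketch. Under random mating with recombination fraction r and no selection or epistasis (so the stable state is simply R̂ = 1), the linkage disequilibrium D = XU − YZ shrinks by a factor (1 − r) each generation:

```python
def step(X, Y, Z, U, r):
    """One generation of random mating with recombination fraction r
    (no selection): D = XU - YZ is multiplied by (1 - r) each generation."""
    D = X * U - Y * Z
    return X - r * D, Y + r * D, Z + r * D, U - r * D

# genotype frequencies for AB, Ab, aB, ab -- start far from linkage equilibrium
X, Y, Z, U = 0.4, 0.1, 0.1, 0.4
for _ in range(50):
    X, Y, Z, U = step(X, Y, Z, U, r=0.3)

print(round(X * U / (Y * Z), 6))  # 1.0 -- R approaches its neutral stable value
```

With weak selection added, R instead settles near a stable value R̂ determined by the balance of epistasis and recombination, which is the situation Kimura analyzed.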
{ "page_id": 34343228, "source": null, "title": "Quasi-linkage equilibrium" }
Recurrent neural networks (RNNs) are a class of artificial neural networks designed for processing sequential data, such as text, speech, and time series, where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences. The fundamental building block of RNNs is the recurrent unit, which maintains a hidden state—a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition, speech recognition, natural language processing, and neural machine translation. However, traditional RNNs suffer from the vanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of the long short-term memory (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, Gated Recurrent Units (GRUs) were introduced as a more computationally efficient alternative. In recent years, transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial. == History == === Before modern === One origin of RNN was neuroscience. 
The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex formed
{ "page_id": 1706303, "source": null, "title": "Recurrent neural network" }
by parallel fibers, Purkinje cells, and granule cells. In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" by Golgi's method, and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex. During the 1940s, multiple people proposed the existence of feedback in the brain, which was a contrast to the previous understanding of the neural system as a purely feedforward structure. Hebb considered the "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943), which proposed the McCulloch-Pitts neuron model, considered networks that contain cycles. The current activity of such networks can be affected by activity indefinitely far in the past. They were both interested in closed loops as possible explanations for e.g. epilepsy and causalgia. Recurrent inhibition was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the Macy conferences. See for an extensive review of recurrent neural network models in neuroscience. Frank Rosenblatt in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change by a Hebbian learning rule.: 73–75 Later, in Principles of Neurodynamics (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, and made theoretical and experimental studies for Hebbian learning in these networks,: Chapter 19, 21 and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.: Section 19.11 Similar networks were published by Kaoru Nakano in 1971, Shun'ichi Amari in 1972, and William A. Little in 1974, who was acknowledged by Hopfield in his 1982 paper. Another origin of RNN was statistical mechanics.
The Ising model was developed by Wilhelm Lenz and Ernst Ising in the 1920s as a simple statistical mechanical model of magnets at equilibrium. Glauber in 1963 studied the Ising model evolving in time,
as a process towards equilibrium (Glauber dynamics), adding in the component of time. The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics.

=== Modern ===
Modern RNN networks are mainly based on two architectures: LSTM and BRNN. At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets". Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1995 and set accuracy records in multiple application domains. It became the default choice for RNN architecture. Bidirectional recurrent neural networks (BRNN) use two RNNs that process the same input in opposite directions. These two are often combined, giving the bidirectional LSTM architecture. Around 2006, bidirectional LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications. They also improved large-vocabulary speech recognition and text-to-speech synthesis and were used in Google voice search and dictation on Android devices. They broke records for improved machine translation, language modeling and multilingual language processing.
Also, LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning. The idea of encoder-decoder sequence transduction
had been developed in the early 2010s. The papers most commonly cited as the originators of seq2seq are two papers from 2014. A seq2seq architecture employs two RNNs, typically LSTMs, an "encoder" and a "decoder", for sequence transduction, such as machine translation. They became state of the art in machine translation, and were instrumental in the development of attention mechanisms and transformers.

== Configurations ==
An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.

=== Standard ===
RNNs come in many variants. Abstractly speaking, an RNN is a function f_θ of type (x_t, h_t) ↦ (y_t, h_{t+1}), where
* x_t: input vector;
* h_t: hidden vector;
* y_t: output vector;
* θ: neural network parameters.
In words, it is a neural network that maps an input x_t into an output y_t, with the hidden vector h_t playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms input to an output, and modifies its "memory" to help it to better perform future processing. The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appears to be layers are, in fact, different steps in time, "unfolded" to produce the appearance of layers.

=== Stacked RNN ===
A stacked RNN, or deep RNN,
is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows:
* Layer 1 has hidden vector h_{1,t}, parameters θ_1, and maps f_{θ_1}: (x_{0,t}, h_{1,t}) ↦ (x_{1,t}, h_{1,t+1}).
* Layer 2 has hidden vector h_{2,t}, parameters θ_2, and maps f_{θ_2}: (x_{1,t}, h_{2,t}) ↦ (x_{2,t}, h_{2,t+1}).
* ...
* Layer n has hidden vector h_{n,t}, parameters θ_n, and maps f_{θ_n}: (x_{n-1,t}, h_{n,t}) ↦ (x_{n,t}, h_{n,t+1}).
Each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of a stacked RNN.

=== Bidirectional ===
A bidirectional RNN (biRNN) is composed of two RNNs, one processing the input sequence in one direction, and another in the opposite direction. Abstractly, it is structured as follows:
The forward RNN processes in one direction: f_θ(x_0, h_0) = (y_0, h_1), f_θ(x_1, h_1) = (y_1, h_2), ...
The backward RNN processes in the opposite direction: f′_{θ′}(x_N, h′_N) = (y′_N, h′_{N−1}), f′_{θ′}(x_{N−1}, h′_{N−1}) = (y′_{N−1}, h′_{N−2}), ...
The two output sequences are then concatenated to give the total output: ((y_0, y′_0), (y_1, y′_1), ..., (y_N, y′_N)).
Bidirectional RNNs allow the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can process a token increasingly contextually. The ELMo model (2018) is a stacked bidirectional LSTM which takes character-level inputs and produces word-level embeddings.

=== Encoder-decoder ===
Two RNNs can be run front-to-back in an encoder-decoder configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors to an output sequence, with an optional attention mechanism. This was used to construct state-of-the-art neural machine translators during the 2014–2017 period. This was an instrumental step towards the development of transformers.

=== PixelRNN ===
An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions. For example, the row-by-row direction processes an n × n grid of vectors x_{i,j} in the following order: x_{1,1}, x_{1,2}, ..., x_{1,n}, x_{2,1}, x_{2,2}, ..., x_{2,n}, ..., x_{n,n}. The diagonal BiLSTM uses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processes x_{i,j} depending on its hidden state and cell state above and to the left: h_{i−1,j}, c_{i−1,j} and h_{i,j−1}, c_{i,j−1}. The other processes it from the top-right corner to the bottom-left.

== Architectures ==
=== Fully recurrent ===
Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, it is a fully connected network. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.

=== Hopfield ===
The Hopfield network is an RNN in which all connections across layers are equally sized. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it guarantees that it will converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.

=== Elman networks and Jordan networks ===
An Elman network is a three-layer network (arranged horizontally as x, y, and z in the illustration) with the addition of a set of context units (u in the illustration). The middle (hidden) layer is connected to these context units
fixed with a weight of one. At each time step, the input is fed forward and a learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence-prediction that are beyond the power of a standard multilayer perceptron. Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves. Elman and Jordan networks are also known as "simple recurrent networks" (SRN).

Elman network:
h_t = σ_h(W_h x_t + U_h h_{t−1} + b_h)
y_t = σ_y(W_y h_t + b_y)

Jordan network:
h_t = σ_h(W_h x_t + U_h s_t + b_h)
y_t = σ_y(W_y h_t + b_y)
s_t = σ_s(W_{s,s} s_{t−1} + W_{s,y} y_{t−1} + b_s)

Variables and functions:
* x_t: input vector
* h_t: hidden layer vector
* s_t: "state" vector
* y_t: output vector
* W, U and b: parameter matrices and vector
* σ: activation functions

=== Long short-term memory ===
Long
short-term memory (LSTM) is the most widely used RNN architecture. It was designed to solve the vanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates". LSTM prevents backpropagated errors from vanishing or exploding. Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved. LSTM works even given long delays between significant events and can handle signals that mix low- and high-frequency components. Many applications use stacks of LSTMs, an arrangement called "deep LSTM". LSTM can learn to recognize context-sensitive languages, unlike previous models based on hidden Markov models (HMM) and similar concepts.

=== Gated recurrent unit ===
The gated recurrent unit (GRU), introduced in 2014, was designed as a simplification of LSTM. GRUs are used in the full form and in several further simplified variants. They have fewer parameters than LSTM, as they lack an output gate. Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory. There does not appear to be a particular performance difference between LSTM and GRU.

==== Bidirectional associative memory ====
Introduced by Bart Kosko, a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications. A BAM network has two layers, either of which can be driven as an input to recall an association and produce an
output on the other layer. === Echo state === Echo state networks (ESN) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series. A variant for spiking neurons is known as a liquid state machine. === Recursive === A recursive neural network is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation. They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing. The Recursive Neural Tensor Network uses a tensor-based composition function for all nodes in the tree. === Neural Turing machines === Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they interact. The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology. Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of context free grammars (CFGs). Recurrent neural networks are Turing complete and can run arbitrary programs to process arbitrary sequences of inputs. == Training == === Teacher forcing === An RNN can be trained into
a conditionally generative model of sequences, aka autoregression. Concretely, let us consider the problem of machine translation, that is, given a sequence ( x 1 , x 2 , … , x n ) {\displaystyle (x_{1},x_{2},\dots ,x_{n})} of English words, the model is to produce a sequence ( y 1 , … , y m ) {\displaystyle (y_{1},\dots ,y_{m})} of French words. It is to be solved by a seq2seq model. Now, during training, the encoder half of the model would first ingest ( x 1 , x 2 , … , x n ) {\displaystyle (x_{1},x_{2},\dots ,x_{n})} , then the decoder half would start generating a sequence ( y ^ 1 , y ^ 2 , … , y ^ l ) {\displaystyle ({\hat {y}}_{1},{\hat {y}}_{2},\dots ,{\hat {y}}_{l})} . The problem is that if the model makes a mistake early on, say at y ^ 2 {\displaystyle {\hat {y}}_{2}} , then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shift y ^ 2 {\displaystyle {\hat {y}}_{2}} towards y 2 {\displaystyle y_{2}} , but not the others. Teacher forcing makes it so that the decoder uses the correct output sequence for generating the next entry in the sequence. So for example, it would see ( y 1 , … , y k ) {\displaystyle (y_{1},\dots ,y_{k})} in order to generate y ^ k + 1 {\displaystyle {\hat {y}}_{k+1}} . === Gradient descent === Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation
functions are differentiable. The standard method for training RNNs by gradient descent is the "backpropagation through time" (BPTT) algorithm, which is a special case of the general algorithm of backpropagation. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL, which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space. In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself, such that the update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step, rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space. For recursively computing the partial derivatives, RTRL has a time complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon. An online hybrid between BPTT and RTRL with intermediate complexity exists, along with variants for continuous time. A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events. LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems. This problem is also solved in the independently recurrent neural network (IndRNN) by reducing the context of a neuron to its own past state; the cross-neuron information can then
be explored in the following layers. Memories of different ranges, including long-term memory, can be learned without the gradient vanishing and exploding problem. The on-line algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks. It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback. One approach to computing gradient information in RNNs with arbitrary architectures is based on diagrammatic derivation with signal-flow graphs. It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations. It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza. === Connectionist temporal classification === Connectionist temporal classification (CTC) is a specialized loss function for training RNNs on sequence modeling problems where the timing is variable. === Global optimization methods === Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function. The most common global optimization method for training RNNs is the genetic algorithm, especially for unstructured networks.
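The weight-evolution scheme described in this section can be sketched in a few lines of NumPy. The "network" here is deliberately tiny (a single linear unit y = w*x + b, so the chromosome holds just two genes), and the population size, annealed mutation strength, and stopping threshold are arbitrary example values, not recommended settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each chromosome is the flat weight vector of a tiny "network"
# (one linear unit y = w*x + b); fitness is the reciprocal of the
# mean-squared error on the training set, so minimizing MSE
# maximizes fitness.
xs = np.linspace(-1, 1, 20)
ys = 2.0 * xs - 1.0                       # training data to be fitted

def mse(chrom):
    w, b = chrom
    return np.mean((w * xs + b - ys) ** 2)

pop = rng.normal(0, 1, (30, 2))           # initial population of chromosomes
for gen in range(300):
    errs = np.array([mse(c) for c in pop])
    if errs.min() < 1e-4:                 # stopping criterion: MSE threshold
        break
    parents = pop[np.argsort(errs)[:15]]  # selection: keep the fitter half
    sigma = 0.2 * 0.98 ** gen             # slowly annealed mutation strength
    children = parents + rng.normal(0, sigma, parents.shape)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(c) for c in pop])]
print(best, mse(best))
```

Selection here simply keeps the fitter half of the population as parents; real genetic algorithms usually also apply crossover between parent chromosomes, which is omitted for brevity.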
Initially, the neural network weights are encoded into a chromosome in a predefined manner, where one gene represents
one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows: Each weight encoded in the chromosome is assigned to the respective weight link of the network. The training set is presented to the network, which propagates the input signals forward. The mean-squared error is returned to the fitness function. This function drives the genetic selection process. Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme is: (1) when the neural network has learned a certain percentage of the training data, (2) when the minimum value of the mean-squared error is satisfied, or (3) when the maximum number of training generations has been reached. The fitness function evaluates the stopping criterion as it receives the reciprocal of the mean-squared error from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, thereby reducing the mean-squared error. Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization. == Other architectures == === Independently RNN (IndRNN) === The independently recurrent neural network (IndRNN) addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer), and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as ReLU. Deep networks can be trained using skip connections. === Neural history compressor === The
neural history compressor is an unsupervised stack of RNNs. At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level. The system effectively minimizes the description length or the negative logarithm of the probability of the data. Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events. It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level). Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events. A generative model partially overcame the vanishing gradient problem of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. === Second order RNNs === Second-order RNNs use higher order weights w i j k {\displaystyle w{}_{ijk}} instead
of the standard w i j {\displaystyle w{}_{ij}} weights, and states can be a product. This allows a direct mapping to a finite-state machine both in training, stability, and representation. Long short-term memory is an example of this but has no such formal mappings or proof of stability. === Hierarchical recurrent neural network === Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms. Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models. Hierarchical recurrent neural networks are useful in forecasting, helping to predict disaggregated inflation components of the consumer price index (CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various established inflation prediction methods. === Recurrent multilayer perceptron network === Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections. === Multiple timescales model === A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties. With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. 
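The multiple-timescales idea can be illustrated with leaky-integrator units whose update rates are set by per-unit time constants: "fast" units track the input closely while "slow" units change gradually, yielding the functional hierarchy described above. All sizes, weights, and the input signal below are arbitrary example choices, not the published MTRNN model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two populations of leaky-integrator units with distinct time constants:
# small tau -> fast dynamics, large tau -> slow dynamics.
n_fast, n_slow = 6, 3
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 20.0)])
W = rng.normal(0, 0.5, (n, n))      # recurrent weights (example values)
W_in = rng.normal(0, 0.5, n)        # input weights (example values)

def step(u, inp):
    # Each unit moves toward its synaptic drive at a rate 1/tau.
    drive = W @ np.tanh(u) + W_in * inp
    return (1 - 1 / tau) * u + (1 / tau) * drive

u = np.zeros(n)
trace = []
for t in range(200):                # drive the network with a slow sine
    u = step(u, np.sin(0.3 * t))
    trace.append(u.copy())
trace = np.array(trace)

# Mean per-step change of each population: slow units change far less.
fast_change = np.abs(np.diff(trace[:, :n_fast], axis=0)).mean()
slow_change = np.abs(np.diff(trace[:, n_fast:], axis=0)).mean()
print(fast_change, slow_change)
```

The per-step change of the slow population is markedly smaller than that of the fast population, which is what allows slower units to represent longer-range structure while fast units handle rapid detail.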
The biological plausibility of this type of hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his
book On Intelligence. Such a hierarchy also agrees with theories of memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model. === Memristive networks === Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices. The memristors (memory resistors) are implemented by thin film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems. Memristive networks are a particular type of physical neural network that have very similar properties to (Little-)Hopfield networks, as they have continuous dynamics, a limited memory capacity and natural relaxation via the minimization of a function which is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the advantage, compared to a resistor–capacitor network, of exhibiting more interesting non-linear behavior. From this point of view, analog memristive networks constitute a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring or topology. The evolution of these networks can be studied analytically using variations of the Caravelli–Traversa–Di Ventra equation. === Continuous-time === A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming inputs. They are typically analyzed by dynamical systems theory. Many RNN models in neuroscience are continuous-time. For a neuron i {\displaystyle i} in the network with activation y i {\displaystyle y_{i}} , the rate of change of activation is given by: τ i y ˙ i = − y i + ∑ j = 1 n w
j i σ ( y j − Θ j ) + I i ( t ) {\displaystyle \tau _{i}{\dot {y}}_{i}=-y_{i}+\sum _{j=1}^{n}w_{ji}\sigma (y_{j}-\Theta _{j})+I_{i}(t)} Where: τ i {\displaystyle \tau _{i}} : Time constant of postsynaptic node y i {\displaystyle y_{i}} : Activation of postsynaptic node y ˙ i {\displaystyle {\dot {y}}_{i}} : Rate of change of activation of postsynaptic node w j i {\displaystyle w{}_{ji}} : Weight of connection from pre to postsynaptic node σ ( x ) {\displaystyle \sigma (x)} : Sigmoid of x e.g. σ ( x ) = 1 / ( 1 + e − x ) {\displaystyle \sigma (x)=1/(1+e^{-x})} . y j {\displaystyle y_{j}} : Activation of presynaptic node Θ j {\displaystyle \Theta _{j}} : Bias of presynaptic node I i ( t ) {\displaystyle I_{i}(t)} : Input (if any) to node CTRNNs have been applied to evolutionary robotics where they have been used to address vision, co-operation, and minimal cognitive behaviour. Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have transformed into equivalent difference equations. This transformation can be thought of as occurring after the post-synaptic node activation functions y i ( t ) {\displaystyle y_{i}(t)} have been low-pass filtered but prior to sampling. They are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step. From a time-series perspective, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters and also as a nonlinear autoregressive exogenous model (NARX). RNN
has infinite impulse response whereas convolutional neural networks have finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled. The effect of memory-based learning for the recognition of sequences can also be implemented by a more biologically based model that uses the silencing mechanism exhibited in neurons with relatively high-frequency spiking activity. Additional stored states and the storage under direct control by the network can be added to both infinite-impulse and finite-impulse networks. Another network or graph can also replace the storage if it incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of long short-term memory networks (LSTMs) and gated recurrent units. This is also called Feedback Neural Network (FNN). == Libraries == Modern libraries provide runtime-optimized implementations of the above functionality or allow speeding up the slow loop via just-in-time compilation. Apache Singa Caffe: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers. Chainer: Fully in Python, production support for CPU, GPU, distributed training. Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. Flux: includes interfaces for RNNs, including GRUs and LSTMs, written in Julia. Keras: High-level API, providing a wrapper to many other deep learning libraries. Microsoft Cognitive Toolkit MXNet: an open-source deep learning framework used to train and deploy deep neural networks. PyTorch: Tensors and Dynamic neural networks in Python with GPU acceleration.
TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU and Google's proprietary TPU,
mobile platforms. Theano: A deep-learning library for Python with an API largely compatible with the NumPy library. Torch: A scientific computing framework with support for machine learning algorithms, written in C and Lua. == Applications == Applications of recurrent neural networks include: Machine translation Robot control Time series prediction Speech recognition Speech synthesis Brain–computer interfaces Time series anomaly detection Text-to-Video model Rhythm learning Music composition Grammar learning Handwriting recognition Human action recognition Protein homology detection Predicting subcellular localization of proteins Several prediction tasks in the area of business process management Prediction in medical care pathways Predictions of fusion plasma disruptions in reactors (Fusion Recurrent Neural Network (FRNN) code) == References == == Further reading == Mandic, Danilo P.; Chambers, Jonathon A. (2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley. ISBN 978-0-471-49517-8. Grossberg, Stephen (2013-02-22). "Recurrent Neural Networks". Scholarpedia. 8 (2): 1888. Bibcode:2013SchpJ...8.1888G. doi:10.4249/scholarpedia.1888. ISSN 1941-6016. Recurrent Neural Networks. List of RNN papers by Jürgen Schmidhuber's group at Dalle Molle Institute for Artificial Intelligence Research.
The molecular formula C5H6 (molar mass: 66.10 g/mol, exact mass: 66.04695 u) may refer to: Cyclopentadiene Cyclopropylacetylene [1.1.1]propellane Cyclopentyne
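The molar mass quoted above can be reproduced from standard atomic weights (12.011 g/mol for carbon, 1.008 g/mol for hydrogen):

```python
# Check of the quoted molar mass using standard (rounded) atomic weights.
atomic_weight = {"C": 12.011, "H": 1.008}

def molar_mass(counts):
    # counts: mapping of element symbol -> number of atoms in the formula
    return sum(atomic_weight[el] * k for el, k in counts.items())

print(round(molar_mass({"C": 5, "H": 6}), 2))  # 66.1
```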
{ "page_id": 12388671, "source": null, "title": "C5H6" }
Luminescent bacteria emit light as the result of a chemical reaction during which chemical energy is converted to light energy. Luminescent bacteria exist as symbionts carried within larger organisms, including many deep-sea animals such as the lanternfish, the anglerfish, certain jellyfish, certain clams and the gulper eel. The light is generated by an enzyme-catalyzed chemiluminescence reaction, wherein the pigment luciferin is oxidised by the enzyme luciferase. The expression of genes related to bioluminescence is controlled by an operon called the lux operon. Some species of luminescent bacteria possess quorum sensing, the ability to determine the local population by the concentration of chemical messengers. Species which have quorum sensing can turn certain chemical pathways, commonly luminescence, on and off; in this way, once population levels reach a certain point the bacteria switch on light production. == Characteristics of the phenomenon == Bioluminescence is a form of luminescence, or "cold light" emission; less than 20% of the light generates thermal radiation. It should not be confused with fluorescence, phosphorescence or refraction of light. Most forms of bioluminescence are brighter (or only exist) at night, following a circadian rhythm. == See also == Dinoflagellates Vibrionaceae (e.g. Vibrio fischeri, Vibrio harveyi, Vibrio phosphoreum) == References == == External links == Bioluminescence Lecture Notes Bioluminescence Webpage Isolation of Vibrio phosphoreum Luminescent Bacteria Scripps Institution of Oceanography: Bioluminescence
{ "page_id": 4000065, "source": null, "title": "Luminescent bacteria" }
The molecular formula C6H8 may refer to: Cyclohexadiene (disambiguation) 1,3-Cyclohexadiene 1,4-Cyclohexadiene Methylcyclopentadiene Propellane The standard composition of gasoline (a mixture of different hydrocarbons) is approximately equivalent to C6H8
{ "page_id": 12388674, "source": null, "title": "C6H8" }
Occupational exposure banding, also known as hazard banding, is a process intended to quickly and accurately assign chemicals into specific categories (bands), each corresponding to a range of exposure concentrations designed to protect worker health. These bands are assigned based on a chemical’s toxicological potency and the adverse health effects associated with exposure to the chemical. The output of this process is an occupational exposure band (OEB). Occupational exposure banding has been used by the pharmaceutical sector and by some major chemical companies over the past several decades to establish exposure control limits or ranges for new or existing chemicals that do not have formal OELs. Furthermore, occupational exposure banding has become an important component of the Hierarchy of Occupational Exposure Limits (OELs). The U.S. National Institute for Occupational Safety and Health (NIOSH) has developed a process that could be used to apply occupational exposure banding to a broader spectrum of occupational settings. The NIOSH occupational exposure banding process utilizes available, but often limited, toxicological data to determine a potential range of chemical exposure levels that can be used as targets for exposure controls to reduce risk among workers. An OEB is not meant to replace an OEL, rather it serves as a starting point to inform risk management decisions. Therefore, the OEB process should not be applied to a chemical with an existing OEL. == Purpose == Occupational exposure limits (OELs) play a critical role in protecting workers from exposure to dangerous concentrations of hazardous material. In the absence of an OEL, determining the controls needed to protect workers from chemical exposures can be challenging. According to the U.S. 
Environmental Protection Agency, the Toxic Substances Control Act Chemical Substance Inventory as of 2014 contained over 85,000 chemicals that are commercially available, but a quantitative health-based OEL has been developed
{ "page_id": 53741891, "source": null, "title": "Occupational exposure banding" }
for only about 1,000 of these chemicals. Furthermore, the rate at which new chemicals are being introduced into commerce significantly outpaces OEL development, creating a need for guidance on thousands of chemicals that lack reliable exposure limits. The NIOSH occupational exposure banding process has been created to provide a reliable approximation of a safe exposure level for potentially hazardous and unregulated chemicals in the workplace. Occupational exposure banding uses limited chemical toxicity data to group chemicals into one of five bands. Occupational exposure bands: define a set range of exposures expected to protect worker health; identify potential health effects and target organs with nine toxicological endpoints; provide critical information on chemical potency; inform decisions on control methods, hazard communication, and medical surveillance; identify areas where health-effects data are lacking; and require less time and data than developing an OEL. == Assignment process == The NIOSH occupational exposure banding process utilizes a three-tiered approach. Each tier of the process has different requirements for data sufficiency, which allows stakeholders to use the occupational exposure banding process in many different situations. Selection of the most appropriate tier for a specific banding situation depends on the quantity and quality of the available data and the training and expertise of the user. The process places chemicals into one of five bands, designated A through E. Each band is associated with a specific range of exposure concentrations. Band E represents the lowest range of exposure concentrations, while Band A represents the highest range. Assignment of a chemical to a band is based on both the potency of the chemical and the severity of the health effect. Bands A and B include chemicals with reversible health effects or those that produce adverse effects only at high concentrations. Bands C, D, and E include chemicals with
serious or irreversible effects and those that cause problems at low concentration ranges. The resulting airborne concentration target ranges are shown in the graphic: Tier 1, the qualitative tier, produces an occupational exposure band (OEB) assignment based on qualitative data from the Globally Harmonized System of Classification and Labeling of Chemicals (GHS); it involves assigning the OEB based on criteria aligned with specific GHS hazard codes and categories. These hazard codes are typically pulled from GESTIS, ECHA Annex VI, or safety data sheets. The Tier 1 process can be performed by a health and safety generalist, and takes only minutes to complete with the NIOSH OEB e-tool. The e-tool is free to use and can be accessed through the NIOSH website. Tier 2, the semi-quantitative tier, produces an OEB assignment based on quantitative and qualitative data from secondary sources; it involves assigning the OEB on the basis of key findings from prescribed literature sources, including use of data from specific types of studies. Tier 2 focuses on nine toxicological endpoints. The Tier 2 process can be performed by an occupational hygienist but requires some formal training. Tier 2 banding is also incorporated into the NIOSH OEB e-tool but can take hours instead of minutes to complete for a given chemical. However, the resulting band is considered more robust than a Tier 1 band due to the in-depth retrieval of published data. NIOSH recommends users complete at least the Tier 2 process to produce reliable OEBs. Tier 3, the expert judgement tier, relies on expert judgement to produce a band based on primary and secondary data that is available to the user. This level of OEB would require the advanced knowledge and experience held by a toxicologist or veteran occupational hygienist. The Tier 3 process allows the professional to incorporate their
own raw data in conjunction with the availability of data drawn from published literature. == Reliability == Since unveiling the occupational exposure banding technique in 2017, NIOSH has sought feedback from its users and has evaluated the reliability of this tool. Feedback has been overwhelmingly positive. Users have described Tier 1 as a helpful screening tool, Tier 2 as a basic assessment for a new chemical on the worksite, and Tier 3 as a personalized in-depth analysis. During pilot testing, NIOSH evaluated the Tier 1 and Tier 2 protocols using chemicals with OELs and compared the banding results to the OELs. For more than 90% of these chemicals, the resulting Tier 1 and Tier 2 bands were found to be as stringent as, or more stringent than, the OELs. This demonstrates the confidence health & safety professionals can have in the OEB process when making risk management decisions for chemicals without OELs. == Limitations == Although occupational exposure banding holds a great deal of promise for the occupational hygiene profession, there are potential limitations that should be considered. As with any analysis, the outcome of the NIOSH occupational exposure banding process – the OEB – is dependent upon the quantity and the quality of data used and the expertise of the individual using the process. In order to maximize data quality, NIOSH has compiled a list of recommended sources that provide data suitable for banding. Furthermore, for some chemicals the amount of quality data may not be sufficient to derive an OEB. It is important to note that the lack of data does not indicate that the chemical is safe. Other risk management strategies, such as control banding, can then be applied. == Control banding versus exposure banding == The NIOSH occupational exposure banding process guides a
user through the evaluation and selection of critical health hazard information to select an OEB from among five categories of severity. For OEBs, the process uses only hazard-based data (e.g., studies on human health effects or toxicology studies) to identify an overall level of hazard potential and associated airborne concentration range for chemicals with similar hazard profiles. While the output of this process can be used by informed occupational safety and health professionals to make risk management and exposure control decisions, the process does not supply such recommendations directly. In contrast, control banding is a strategy that groups workplace risks into control categories or bands based on combinations of both hazard and exposure information. Control banding combines hazard banding with exposure risk management to directly link hazards to specific control measures. Various toolkit models for control banding have been developed in the UK, Germany, and the Netherlands. COSHH Essentials was the first widely adopted banding scheme. Other banding schemes are also available, such as Stoffenmanager, EMKG, and International Chemical Control Toolkit of the ILO. Evaluation of these and other control banding systems have yielded varying results. Occupational exposure banding has emerged as a helpful supplementary exposure assessment tool. When conducting a workplace hazard assessment, occupational hygienists may find it useful to start with occupational exposure banding to identify potential hazards and exposure ranges, before moving on to control banding. Together, these tools will aid the health & safety professional in selecting the appropriate risk mitigation strategies. 
== See also ==
- Control banding – Approach to promoting OHS
- Health Hazards Evaluation Program, NIOSH
- Occupational exposure limit – Upper limit on the acceptable concentration of a hazardous substance
- Occupational hygiene
- Recommended exposure limit – Limit for exposure to a chemical substance
- Threshold limit value – Upper limit on the acceptable exposure concentration of a hazardous substance in the workplace
- Hierarchy of hazard controls – System used in industry to eliminate or minimize exposure to hazards
- Occupational hygiene – Management of workplace health hazards

== References ==

== External links ==
- The NIOSH Occupational Exposure Banding Process: Guidance for the Evaluation of Chemical Hazards Current Intelligence Bulletin
- The NIOSH Occupational Exposure Banding Topic Page
- The NIOSH Occupational Exposure Banding e-Tool
- Occupational Exposure Banding – A Conversation with Lauralynn Taylor McKernan, ScD CIH
- The NIOSH Control Banding Topic Page
- Hands-on Activity Demonstration: Identifying Occupational Exposure Bands
- Occupational Exposure Control Banding Pharmaceuticals Control Recommendations by Esco Pharma based on OEB Classification
In classical mechanics, the stretch rule (sometimes referred to as Routh's rule) states that the moment of inertia of a rigid object is unchanged when the object is stretched parallel to an axis of rotation that is a principal axis, provided that the distribution of mass remains unchanged except in the direction parallel to the axis. This operation leaves cylinders oriented parallel to the axis unchanged in radius. This rule can be applied with the parallel axis theorem and the perpendicular axis theorem to find moments of inertia for a variety of shapes. == Derivation == The (scalar) moment of inertia of a rigid body around the z-axis is given by: {\displaystyle I_{z}=\int _{V}d^{3}r\,\rho (\mathbf {r} )\,r^{2}} where {\displaystyle r} is the distance of a point from the z-axis. Since we are dealing with stretching over the z-axis only, we can expand as follows: {\displaystyle I_{z}=\int _{0}^{L}dz\int _{x,y}dx\,dy\,\rho (x,y,z)\,r^{2}} Here, {\displaystyle L} is the body's height. Stretching the object by a factor of {\displaystyle a} along the z-axis is equivalent to dividing the mass density by {\displaystyle a} (meaning {\displaystyle \rho '(x,y,z)=\rho (x,y,z/a)/a} ), as well as integrating over the new limits {\displaystyle 0} and {\displaystyle aL} (the new height of the object), thus leaving the total mass unchanged. This means that, after the substitution {\displaystyle z=az'}, the new moment of inertia will be:
{ "page_id": 592198, "source": null, "title": "Stretch rule" }
{\displaystyle {\begin{aligned}I_{z}'&=\int _{0}^{aL}dz\int _{x,y}dx\,dy\,\rho '(x,y,z)\,r^{2}\\[8pt]&=\int _{0}^{L}a\,dz'\int _{x,y}dx\,dy\,{\frac {\rho (x,y,z')}{a}}\,r^{2}\\[8pt]&=\int _{0}^{L}dz'\int _{x,y}dx\,dy\,\rho (x,y,z')\,r^{2}=I_{z}\end{aligned}}} == References ==
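The invariance derived above can be checked numerically. The sketch below is an illustration, not part of the original article: the function name and the radial discretization are arbitrary choices. It integrates the moment of inertia of a uniform solid cylinder (axis along z) at two different heights while keeping the mass and radius fixed:

```python
import math

def moment_of_inertia_z(mass, radius, height, n_r=20000):
    """Numerically integrate I_z = integral of rho * r^2 dV for a uniform
    solid cylinder of the given mass, radius, and height (axis along z)."""
    density = mass / (math.pi * radius**2 * height)  # uniform rho = M / V
    dr = radius / n_r
    total = 0.0
    for i in range(n_r):
        r = (i + 0.5) * dr                           # midpoint of radial shell
        shell_volume = 2.0 * math.pi * r * dr * height
        total += density * shell_volume * r**2
    return total

M, R = 3.0, 0.5
I_short = moment_of_inertia_z(M, R, height=1.0)
I_long = moment_of_inertia_z(M, R, height=7.0)      # stretched along z by a factor of 7
print(I_short, I_long)
```

Both values agree: stretching along z rescales the density by exactly the factor that the volume element gains, so I_z stays equal to the analytic value M R²/2 for a solid cylinder.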
A winnowing basket or fan is a tool for winnowing, used to separate grain from chaff while also removing dirt and dust. They have been used traditionally in a number of civilizations for centuries, and are still in use today in some countries. == Use == Unprocessed grain, mixed with impurities like dirt or inedible husks, is placed on the basket. The basket is then lifted and shaken, which separates out lighter particles (usually inedible husks) from heavier particles (the grain). The process can benefit from a mild wind, which can carry away the lighter particles. == By region == === Ancient Greece === The λίκνον (liknon) appears in the Iliad (5.499). === India === These have been used in India for centuries and still see widespread contemporary use. They are known as soup in Hindi and dala in Bangla. In West Bengal, Odisha, Assam, and Bihar the tool is also used to welcome the groom during marriage ceremonies. === Japan === They are known as mino or mi (箕). === Korea === These are known as k'i (Korean: 키; Hanja: 簸) in Korea, and were used throughout the region for centuries. There was some regional variation in the materials the k'i were constructed from, with southern regions primarily using bamboo instead of wicker. ==== Traditions ==== There is a folk tradition where children who are unable to adequately control when they urinate (particularly while sleeping) are made to wear the k'i on their head, then sent to knock on the doors of their neighbors and ask for salt. This served to publicly embarrass the child into compliance, as neighbors would recognize why the child was knocking on their door. In South Gyeongsang Province, there was a tradition where people avoided buying the baskets on the first market day of each lunar year, as the
{ "page_id": 74713415, "source": null, "title": "Winnowing basket" }
baskets would allow good luck to escape, just as they allow husks to escape. A tradition on Jeju Island involved a type of divination: on Lunar New Year's Eve, the baskets would be cleaned, washed, and placed facedown. The following morning, the baskets would be inspected. If rice was present under the basket, then the harvest would be good that year. == See also == Winnowing Oar == References ==
Mohammad Ataul Karim (Bengali: মোহাম্মদ আতাউল করিম; born 4 May 1953) is a Bangladeshi American scientist and higher education administrator, with expertise in electro-optical systems, optical computing, and pattern recognition. Ataul Karim is ranked amongst the top 50 researchers who contributed most to the journal Applied Optics in its 50-year history. Ataul Karim served as provost, executive vice chancellor and chief operating officer of the University of Massachusetts Dartmouth from June 2013 to 2020, and was for nine years the first vice president for research of Old Dominion University (ODU) in Norfolk, Virginia. == Biography == Mohammad Ataul Karim was born in Barlekha, a border town in South Sylhet. He attended Shatma Primary for his elementary education and Patharia Chotolekha High for a year, after which he left home to be schooled at Faujdarhat Cadet College (1965–1969), Sylhet MC College (1969–1972), and the University of Dacca (1972–1976), from which he received his bachelor's honors degree in physics. Ataul Karim earned his master's degrees in physics (1978) and electrical engineering (1979), and a doctor of philosophy degree in electrical engineering (1982) from the University of Alabama. Ataul Karim practiced creative writing in high school. Many of his popular science writings in Bengali appeared in Biggyan Shamoeeki and Bangla Academy Biggyan Patrika during 1972–1976. Of these, the most significant were Biborthon Kahinee, a series of articles on cosmic and biological evolution, and Shamproteek, a monthly feature on current affairs in science, both of which appeared in Biggyan Shamoeeki. By the second year of his bachelor's degree in 1974, he had completed his first book manuscript, which he submitted to Bangla Academy for publication. After about two years, the academy informed him that it was not prepared to take a chance on its juvenile author. This episode troubled him deeply, ending his creative writing efforts
{ "page_id": 26020168, "source": null, "title": "Mohammad Ataul Karim" }
in Bengali. All his subsequent books and articles were written in English, and all were published from outside of Bangladesh. A 2004 government of Bangladesh report and a number of books in Bengali, including Bangladesher Shera Bigyani (Hitler A. Halim, Shikor, 2004), Medhabi Manusher Golpo (Mohammad Kaykobad, Annyaprokash, 2005), "Medhabi O Binoyi Manusher Golpo" (Mohammad Kaykobad, "Shore O", 2020), and "Tarae Tarae Khochito" (Fardin Munir and Munir Hasan, "Odomya Prokash", 2022) as well as Star Insight, cite him as an example of the outstanding success of the Bangladeshi diaspora. His efforts to correct illegal practices that otherwise discriminated against international graduate students were featured by the Chronicle of Higher Education, The Wall Street Journal in "Hidden Costs of a Brain Gain" and in turn by David Heenan in his book "Flight Capital: The Alarming Exodus of America's Best And Brightest". == Professional affiliations == Prior to University of Massachusetts Dartmouth (2013–present), he had held academic appointments with the University of Arkansas at Little Rock (1982–1983), Wichita State University (1983–1986), University of Dayton (1986-1998: founding director, Electro-Optics Program, 1990–1998; chair of electrical and computer engineering, 1994–1998), University of Tennessee at Knoxville (1998–2000: head of the Department of Electrical and Computer Engineering), City College of New York of the City University of New York (2000–2004: dean of engineering), and Old Dominion University (2004-2013: vice president for research). Ataul Karim is an elected fellow of seven societies: Optical Society of America (1993), Society of Photo-Optical Instrumentation Engineers (1995), Bangladesh Academy of Sciences (2002), Institute of Physics (2006), Institution of Engineering & Technology (2006), Institute of Electrical and Electronics Engineers (2009), and Asia-Pacific Artificial Intelligence Association (2022). 
== Scholarly research == He supervised MS/PhD research of over 60 graduate students and authored 19 books, 13 book chapters, and over 365 research papers. He guest-edited
36 journal special issues in areas of communication, computing, multimedia, networks, optics, pattern recognition, infrared systems, remote sensing, and software. == Relevance to Bangladesh == He leads the International Conference on Computer and Information Technology, now in its 25th year. Since 2009, with assistance of 5 teams of guest editors, Ataul Karim produced 20 journal special issues that featured works of Bangladesh-based researchers in the fields of communications, computing, multimedia, networks, and software. His edited book on "Technical Challenges and Design Issues in Bangla Language Processing" provides a state-of-the-art platform for information communication technology research and development that is of significance to nearly 260 million Bengali-speaking people who live in Bangladesh, India and in diaspora in the Middle East, Europe, and the US. This milestone work includes 16 chapters coauthored by 41 researchers from Bangladesh, Canada, India, Ireland, Norway, the UK, and the US. Ataul Karim is known for his advocacy and writings to improve ranking of universities in Bangladesh, and for containing questionable journal publication and faculty recruitment practices. He serves on the board of trustees and/or advisory boards of a number of private universities including Ahsanullah University of Science and Technology, North South University, and Metropolitan University, Sylhet and the Board of Regents of the North American Bangladeshi Islamic Community, known for its many projects in education, environment, healthcare, poverty alleviation, and relief and rehabilitation. == References ==
A psammophile ( (P)SAM-oh-fyle) is a plant or animal that prefers or thrives in sandy areas. Plant psammophiles are also known as psammophytes. They thrive in places such as the Arabian Peninsula and the Sahara, as well as the dunes of coastal regions. Because of the unique ecological selective pressures of sand, animals on opposite sides of the planet often convergently evolve similar features, a phenomenon sometimes referred to as ecomorphological convergence. The Crotalus cerastes native to American deserts and the Bitis peringueyi native to Namibian deserts have independently evolved sidewinding behavior to traverse sand. In addition, the African jerboa and the American kangaroo rat have separately evolved a bipedal form with large hind legs that allow them to hop. == Etymology == Psammo- is from Ancient Greek ψάμμος (psámmos, "sand"); -phile is from Ancient Greek φίλος (phílos, "dear, beloved") via Latin -phila. == Popular culture == With the correct spelling of the word psammophile, Florida eighth-grader Dev Shah, one of 231 contestants, won the 95th Scripps National Spelling Bee in June 2023 and was awarded $50,000 in prize money. == References ==
{ "page_id": 46729550, "source": null, "title": "Psammophile" }
Waterproofing is the process of making an object, person or structure waterproof or water-resistant so that it remains relatively unaffected by water or resists the ingress of water under specified conditions. Such items may be used in wet environments or underwater to specified depths. Water-resistant and waterproof often refer to resistance to penetration of water in its liquid state and possibly under pressure, whereas damp proof refers to resistance to humidity or dampness. Permeation of water vapour through a material or structure is reported as a moisture vapor transmission rate (MVTR). The hulls of boats and ships were once waterproofed by applying tar or pitch. Modern items may be waterproofed by applying water-repellent coatings or by sealing seams with gaskets or o-rings. Waterproofing is used in reference to building structures (such as basements, decks, or wet areas), watercraft, canvas, clothing (raincoats or waders), electronic devices and paper packaging (such as cartons for liquids). == In construction == In construction, a building or structure is waterproofed with the use of membranes and coatings to protect contents and structural integrity. The waterproofing of the building envelope in construction specifications is listed under 07 - Thermal and Moisture Protection within MasterFormat 2004, by the Construction Specifications Institute, and includes roofing and waterproofing materials. In building construction, waterproofing is a fundamental aspect of creating a building envelope, which is a controlled environment. The roof covering materials, siding, foundations, and all of the various penetrations through these surfaces must be water-resistant and sometimes waterproof. Roofing materials are generally designed to be water-resistant and shed water from a sloping roof, but in some conditions, such as ice damming and on flat roofs, the roofing must be waterproof. 
Many types of waterproof membrane systems are available, including felt paper or tar paper with asphalt or tar to
{ "page_id": 2099543, "source": null, "title": "Waterproofing" }
make a built-up roof, other bituminous waterproofing, ethylene propylene diene monomer (EPDM) rubber, hypalon, polyvinyl chloride, liquid roofing, and more. Walls are not subjected to standing water, and the water-resistant membranes used as housewraps are designed to be porous enough to let moisture escape. Walls also have vapor barriers or air barriers. Damp proofing is another aspect of waterproofing. Masonry walls are built with a damp-proof course to prevent rising damp, and the concrete in foundations needs to be damp-proofed or waterproofed with a liquid coating, a basement waterproofing membrane (even under the concrete slab floor, where polyethylene sheeting is commonly used), or an additive to the concrete. Within the waterproofing industry, below-ground waterproofing is generally divided into two areas: Tanking: This is waterproofing used where the below-ground structure will be sitting in the water table continuously or periodically. This causes hydrostatic pressure on both the membrane and structure and requires full encapsulation of the basement structure in a tanking membrane, under the slab and walls. Damp proofing: This is waterproofing used where the water table is lower than the structure and there is good free-draining fill. The membrane deals with the shedding of water and the ingress of water vapor only, with no hydrostatic pressure. Generally, this incorporates a damp proof membrane (DPM) on the walls with a polythene DPM under the slab. With a higher-grade DPM, some protection from short-term hydrostatic pressure can be gained by transitioning the higher-quality wall DPM to the slab polythene under the footing rather than at the footing face. In buildings using earth sheltering, too much humidity can be a potential problem, so waterproofing is critical. Water seepage can lead to mold growth, causing significant damage and air quality issues. Properly waterproofing foundation walls is required to prevent deterioration and seepage. Another specialized
area of waterproofing is rooftop decks and balconies. Waterproofing systems have become quite sophisticated and are a very specialized area. Failed waterproof decks, whether made of polymer or tile, are one of the leading causes of water damage to building structures, and of personal injury when they fail. Major problems occur in the construction industry when improper products are used for the wrong application. While the term waterproof is used for many products, each of them has a very specific area of application, and when manufacturer specifications and installation procedures are not followed, the consequences can be severe. Another factor is the impact of expansion and contraction on waterproofing systems for decks. Decks constantly move with changes in temperature, putting stress on their waterproofing systems. One of the leading causes of waterproof deck system failures is the movement of underlying substrates (plywood) that causes too much stress on the membranes, failing the system. While beyond the scope of this reference document, the waterproofing of decks and balconies involves many complementary elements. These include the waterproofing membrane used, adequate slope and drainage, proper flashing details, and proper construction materials. The penetrations through a building envelope must be built in a way such that water does not enter the building, such as by using flashing and special fittings for pipes, vents, wires, etc. Some caulkings are durable, but many are unreliable for waterproofing. Also, many types of geomembranes are available to control water, gases, or pollution. From the late 1990s to the 2010s, the construction industry has seen technological advances in waterproofing materials, including integral waterproofing systems and more advanced membrane materials. Integral systems such as Hycrete work within the matrix of a concrete structure, giving the concrete itself a waterproof quality.
There are two main types of integral waterproofing systems: the hydrophilic and
the hydrophobic systems. A hydrophilic system typically uses a crystallization technology that replaces the water in the concrete with insoluble crystals. Various brands available in the market claim similar properties, but not all can react with a wide range of cement hydration by-products, and thus require caution. Hydrophobic systems use concrete sealers or even fatty acids to block pores within the concrete, preventing water passage. Sometimes, the same materials used to keep water out of buildings are used to keep water in, such as pool or pond liners. New membrane materials seek to overcome shortcomings in older methods like polyvinyl chloride (PVC) and high-density polyethylene (HDPE). Generally, new technology in waterproof membranes relies on polymer-based materials that are very adhesive to create a seamless barrier around the outside of a structure. Waterproofing should not be confused with roofing, since roofing cannot necessarily withstand a hydrostatic head while waterproofing can. The standards for waterproofing bathrooms in domestic construction have improved over the years, due in large part to the general tightening of building codes. == In clothing == Some garments, and tents, are designed to give greater or lesser protection against rain. For urban use, raincoats and jackets are used; for outdoor activities in rough weather, there is a range of hiking apparel. Typical descriptions are "showerproof", "water resistant", and "waterproof". These terms are not precisely defined. A showerproof garment will usually be treated with a water-resisting coating but is not rated to resist a specific hydrostatic head. This is suitable for protection against light rain, but after a short time water will penetrate. A water-resistant garment is similar, perhaps slightly more resistant to water, but is also not rated to resist a specific hydrostatic head. A garment described as waterproof will have a water-repellent coating, with the seams also taped to
prevent water ingress there. Better waterproof garments have a membrane lining designed to keep water out but allow trapped moisture to escape ("breathability"); a totally waterproof garment would retain body sweat and become clammy. Waterproof garments specify their hydrostatic rating, ranging from 1,500 mm of water column for light rain to 20,000 mm for heavy rain. Waterproof garments are intended for use in weather conditions which are often windy as well as wet, and are usually also wind resistant. Footwear can also be made waterproof by using a variety of methods, including, but not limited to, the application of beeswax, waterproofing spray, or mink oil. == In other objects == Waterproofing methods have been implemented in many types of objects, including paper packaging, cosmetics, and, more recently, consumer electronics. Electronic devices used in military and severe commercial environments are routinely conformally coated in accordance with IPC-CC-830 to resist moisture and corrosion, but encapsulation is needed to become truly waterproof. Even though it is possible to find waterproof wrapping or other types of protective cases for electronic devices, a new technology enabled the release of diverse waterproof smartphones and tablets in 2013. This method is based on a special nanotechnology coating a thousand times thinner than a human hair which protects electronic equipment from damage due to the penetration of water. Several manufacturers use the nano coating method on their smartphones, tablets, and digital cameras. A 2013 study found that nanotextured surfaces using cone forms produce highly water-repellent surfaces. These nanocone textures are superhydrophobic (extremely water-hating).
== Standards ==
- ASTM C1127 – Standard Guide for Use of High Solids Content, Cold Liquid-Applied Elastomeric Waterproofing Membrane with an Integral Wearing Surface
- ASTM D779 – Standard Test Method for Determining the Water Vapor Resistance of Sheet Materials in Contact with Liquid Water by the Dry Indicator Method
- ASTM D2099 – Standard Test Method for Dynamic Water Resistance of Shoe Upper Leather by the Maeser Water Penetration Tester
- ASTM D3393 – Standard Specification for Coated Fabrics – Waterproofness
- ASTM D6135 – Standard Practice for Application of Self-Adhering Modified Bituminous Waterproofing
- ASTM D7281 – Standard Test Method for Determining Water Migration Resistance Through Roof Membranes
- British Standards Institution BS 8102:2009 – "Protection of Below Ground Structures against Water from the Ground"
- IEC 60529 – Degrees of protection provided by enclosures (IP Code)
- ISO 2281 – Horology – Water-resistant watches

== See also ==

== References ==

== External links ==
Media related to Waterproofing at Wikimedia Commons
The American Association of Immunologists Lifetime Achievement Award is the highest honor bestowed by the American Association of Immunologists (AAI). It has been awarded annually to a single AAI member since 1994. == Winners == == See also == List of medicine awards == References ==
{ "page_id": 58526041, "source": null, "title": "American Association of Immunologists Lifetime Achievement Award" }
Feedforward refers to the recognition-inference architecture of neural networks. Artificial neural network architectures are based on inputs multiplied by weights to obtain outputs (inputs-to-output): feedforward. Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing. However, at every stage of inference a feedforward multiplication remains the core, essential for backpropagation or backpropagation through time. Thus neural networks cannot contain feedback like negative feedback or positive feedback, where the outputs feed back to the very same inputs and modify them, because this forms an infinite loop which is not possible to rewind in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks. == Mathematical foundations == === Activation function === The two historically common activation functions are both sigmoids, and are described by {\displaystyle y(v_{i})=\tanh(v_{i})~~{\textrm {and}}~~y(v_{i})=(1+e^{-v_{i}})^{-1}} . The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here {\displaystyle y_{i}} is the output of the {\displaystyle i} th node (neuron) and {\displaystyle v_{i}} is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models). In recent developments of deep learning the rectified linear unit (ReLU) is more frequently used as one of the possible
{ "page_id": 1706332, "source": null, "title": "Feedforward neural network" }
ways to overcome the numerical problems related to the sigmoids. === Learning === Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation. We can represent the degree of error in an output node {\displaystyle j} in the {\displaystyle n} th data point (training example) by {\displaystyle e_{j}(n)=d_{j}(n)-y_{j}(n)} , where {\displaystyle d_{j}(n)} is the desired target value for the {\displaystyle n} th data point at node {\displaystyle j} , and {\displaystyle y_{j}(n)} is the value produced at node {\displaystyle j} when the {\displaystyle n} th data point is given as an input. The node weights can then be adjusted based on corrections that minimize the error in the entire output for the {\displaystyle n} th data point, given by {\displaystyle {\mathcal {E}}(n)={\frac {1}{2}}\sum _{{\text{output node }}j}e_{j}^{2}(n)} . Using gradient descent, the change in each weight {\displaystyle w_{ji}} is {\displaystyle \Delta w_{ji}(n)=-\eta {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}y_{i}(n)} where {\displaystyle y_{i}(n)} is the output of the previous neuron {\displaystyle i} , and {\displaystyle \eta } is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the
previous expression, {\displaystyle {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}} denotes the partial derivative of the error {\displaystyle {\mathcal {E}}(n)} with respect to the weighted sum {\displaystyle v_{j}(n)} of the input connections of neuron {\displaystyle j} . The derivative to be calculated depends on the induced local field {\displaystyle v_{j}} , which itself varies. It is easy to prove that for an output node this derivative can be simplified to {\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=e_{j}(n)\phi ^{\prime }(v_{j}(n))} where {\displaystyle \phi ^{\prime }} is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is {\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=\phi ^{\prime }(v_{j}(n))\sum _{k}-{\frac {\partial {\mathcal {E}}(n)}{\partial v_{k}(n)}}w_{kj}(n)} . This depends on the change in weights of the {\displaystyle k} th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function. == History == === Timeline === Circa 1800, Legendre (1805) and Gauss (1795) created the simplest feedforward network which consists of a single weight
layer with linear activation functions. It was trained by the least squares method for minimising mean squared error, also known as linear regression. Legendre and Gauss used it for the prediction of planetary movement from training data. In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks. In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections. R. D. Joseph (1960) mentions an even earlier perceptron-like device: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject." In 1960, Joseph also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962): section 16 cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning. In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published the Group Method of Data Handling, the first working deep learning algorithm, a method to train arbitrarily deep neural networks. It is based on layer-by-layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." It was used to train an eight-layer neural net in 1971. In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, which was able to classify non-linearly separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.
In 1970, Seppo Linnainmaa published the modern form
of backpropagation in his master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work. In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio with co-authors. === Linear regression === === Perceptron === If a threshold is applied to a linear combination of the inputs, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1], despite the limited computational power of a single unit with a linear threshold function. Perceptrons can be trained by a simple learning algorithm usually called the delta rule. It calculates the error between the calculated output and the sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent. === Multilayer perceptron === A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network consisting of fully connected neurons (hence the synonym fully connected network (FCN)), often with a nonlinear activation function, organized in at least three layers, notable for being able to distinguish data that is not linearly separable. == Other feedforward networks == Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function. == See also == Hopfield network Feed-forward Backpropagation Rprop == References == == External links == Feedforward neural networks tutorial
Feedforward Neural Network: Example Feedforward Neural Networks: An Introduction
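The delta rule described in the Perceptron section above can be illustrated with a short program. This is a minimal sketch, not part of the original article: the step activation, the learning rate, and the logical-AND training set are illustrative choices.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Train a single linear threshold unit (perceptron) with the delta rule."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            out = 1 if xi @ w + b > 0 else 0  # threshold (step) activation
            err = target - out                # error between target and output
            w += lr * err * xi                # weight adjustment (delta rule)
            b += lr * err
    return w, b

# Logical AND is linearly separable, so a single perceptron can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # -> [0, 0, 0, 1]
```

A single unit of this kind cannot represent functions that are not linearly separable (such as XOR); that limitation motivates the multilayer perceptron described above.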
A retention agent is a chemical that improves the retention of a functional chemical in a substrate. The result is that fewer chemicals in total are used to achieve the same effect from the functional chemical, and fewer chemicals go to waste. == Applications == Retention agents (retention aids) are used in the papermaking industry. These are added in the wet end of the paper machine to improve the retention of fine particles and fillers during the formation of paper. Retention aids can also be used to improve the retention of other papermaking chemicals, including sizing agents and cationic starches. The improved retention of papermaking furnish components improves the operational efficiency of the paper machine, reduces the solids and organic loading in the process water loop, and can lower overall chemical costs. Typical chemicals used as retention aids are polyacrylamide (PAM), polyethyleneimine (PEI), colloidal silica, and bentonite. Retention agents are often used along with drainage aids on paper machines, because while retention is enhanced, the forming fabrics become clogged, resulting in slower removal of water from the paper web. Research at a manufacturing laboratory in India indicates that overuse of flocculants in this category can also cause problems with machine runnability. == See also == Colour retention agent
{ "page_id": 20973916, "source": null, "title": "Retention agent" }
A teliospore (sometimes called a teleutospore) is the thick-walled resting spore of some fungi (rusts and smuts), from which the basidium arises. == Development == Teliospores develop in telia (sing. telium or teliosorus). The telial host is the primary host in heteroecious rusts; the aecial host is the alternate host (look for pycnia and aecia). These terms apply when two hosts are required by a heteroecious rust fungus to complete its life cycle. == Morphology == Teliospores consist of one, two or more dikaryotic cells. They are often dark-coloured and thick-walled, especially in species where they overwinter (acting as chlamydospores). Two-celled teliospores formerly defined the genus Puccinia. Here the wall is particularly thick at the tip of the terminal cell, which extends into a beak in some species. As the teliospore cells germinate, the nuclei undergo karyogamy and thereafter meiosis, giving rise to a four-celled basidium with haploid basidiospores. == See also == Aeciospore Chlamydospore Pycniospore Rust fungus § Spores Urediniomycetes Urediniospore Ustilaginomycetes == References == C.J. Alexopoulos, Charles W. Mims, M. Blackwell, Introductory Mycology, 4th ed. (John Wiley and Sons, Hoboken NJ, 2004) ISBN 0-471-52229-5
{ "page_id": 5310815, "source": null, "title": "Teliospore" }
In biogeography and paleontology, a relict is a population or taxon of organisms that was more widespread or more diverse in the past. A relictual population is a population currently inhabiting a restricted area whose range was far wider during a previous geologic epoch. Similarly, a relictual taxon is a taxon (e.g. a species or other lineage) which is the sole surviving representative of a formerly diverse group. == Definition == A relict (or relic) plant or animal is a taxon that persists as a remnant of what was once a diverse and widespread population. Relictualism occurs when a widespread habitat or range changes and a small area becomes cut off from the whole. A subset of the population is then confined to the available hospitable area, and survives there while the broader population either shrinks or evolves divergently. This phenomenon differs from endemism in that the range of the population was not always restricted to the local region. In other words, the species or group did not necessarily arise in that small area, but rather was stranded, or insularized, by changes over time. The agent of change could be anything from competition from other organisms to continental drift to climate change such as an ice age. When a relict is representative of taxa found in the fossil record, and yet is still living, such an organism is sometimes referred to as a living fossil. However, a relict need not be currently living. An evolutionary relict is any organism that was characteristic of the flora or fauna of one age and that persisted into a later age, with the later age being characterized by newly evolved flora or fauna significantly different from those that came before. == Examples == A notable example is the thylacine of Tasmania, a relict marsupial carnivore that
{ "page_id": 39324002, "source": null, "title": "Relict (biology)" }
survived into modern times on an island, whereas the rest of the species on mainland Australia had gone extinct between 3000 and 2000 years ago. Another example is Omma, a genus of beetle with a fossil record extending back over 200 million years to the Late Triassic and found worldwide during the Jurassic and Cretaceous, but now confined to a single living species in Australia. Another relict from the Triassic is Pholadomya, a common clam genus during the Mesozoic, now confined to a single rare species in the Caribbean. The tuatara, endemic to New Zealand, is the only living member of the once-diverse reptile order Rhynchocephalia, which has a fossil record stretching back 240 million years and during the Mesozoic era was globally distributed and ecologically diverse. An example from the fossil record would be a specimen of Nimravidae, an extinct branch of carnivores in the mammalian evolutionary tree, if said specimen came from Europe in the Miocene epoch. In that case, the specimen would represent not the main population but a last surviving remnant of the nimravid lineage. These carnivores were common and widespread in the previous epoch, the Oligocene, and disappeared when the climate changed and woodlands were replaced by savanna. They persisted in Europe in the last remaining forests as a relict of the Oligocene: a relict species in a relict habitat. An example of divergent evolution creating relicts is found in the shrews of the islands off the coast of Alaska, namely the Pribilof Island shrew and the St. Lawrence Island shrew. These species are apparently relicts of a time when the islands were connected to the mainland, when they were conspecific with a more widespread species, now the cinereus shrew; the three populations have since diverged through speciation. In botany, an example of
an ice age relict plant population is the Snowdon lily, notable for being precariously rare in Wales. The Welsh population is confined to the north-facing slopes of Snowdonia, where climatic conditions are apparently similar to those of ice age Europe. Some have expressed concern that the warming climate will cause the lily to die out in Great Britain. Other populations of the same plant can be found in the Arctic and in the mountains of Europe and North America, where it is known as the common alplily. While the extirpation of a geographically disjunct population of a relict species may be of regional conservation concern, outright extinction at the species level may occur in this century of rapid climate change if the geographic range occupied by a relict species has already contracted to the degree that it is narrowly endemic. For this reason, the traditional conservation tool of translocation has recently been reframed as assisted migration of narrowly endemic, critically endangered species that are already experiencing (or are soon expected to experience) climate change beyond their levels of tolerance. Two examples of critically endangered relict species for which assisted migration projects are already underway are the western swamp tortoise of Australia and a subcanopy conifer tree in the United States called Florida Torreya. A well-studied botanical example of a relictual taxon is Ginkgo biloba, the last living representative of the Ginkgoales, which is restricted to China in the wild. Ginkgo trees had a diverse and widespread northern distribution during the Mesozoic, but other than G. biloba they are not known from the fossil record after the Pliocene. The Saimaa ringed seal (Phoca hispida saimensis) is an endemic subspecies, a relict of the last ice age, that lives only in Finland in the landlocked and fragmented Saimaa freshwater lake complex. The population now numbers fewer than 400 individuals, which
poses a threat to its survival. Another example is the relict leopard frog, once found throughout Nevada, Arizona, Utah, and Colorado, but now found only at Lake Mead National Recreation Area in Nevada and Arizona. == Relevance == The concept of relictualism is useful in understanding the ecology and conservation status of populations that have become insularized, meaning confined to one small area or to multiple small areas with no chance of movement between populations. Insularization makes a population vulnerable to forces that can lead to extinction, such as disease, inbreeding, habitat destruction, competition from introduced species, and global warming. Consider the case of the white-eyed river martin, a very localized species of bird found only in Southeast Asia, and extremely rare, if not already extinct. Known to science only since 1968, it seems to have disappeared. Its closest and only surviving relative is the African river martin, also very localized, in central Africa. These two species are the only known members of the subfamily Pseudochelidoninae, and their widely disjunct populations suggest they are relict populations of a more common and widespread ancestor. Studies have been done on relict populations in isolated mountain and valley habitats in western North America, where the basin-and-range topography creates areas that are insular in nature, such as forested mountains surrounded by inhospitable desert, called sky islands. Such situations can serve as refuges for certain Pleistocene relicts, such as Townsend's pocket gopher, while at the same time creating barriers to biological dispersal. Studies have shown that such insular habitats tend toward decreasing species richness. This observation has significant implications for conservation biology, because habitat fragmentation can also lead to the insularization of stranded populations.
So-called "relics of cultivation" are plant species that were grown in the past for various purposes (medicinal,
food, dyes, etc.), but are no longer utilized. They are naturalized and can be found at archaeological sites. == See also == Living fossil == References ==
l-Photo-leucine is a synthetic derivative of the amino acid l-leucine that is used in place of its natural analog and is characterized by its photo-reactivity, which makes it suitable for observing and characterizing protein-protein interactions (PPI). When a protein containing this amino acid (A) is exposed to ultraviolet light while interacting with another protein (B), the complex formed from these two proteins (AB) remains attached and can be isolated for study. Photo-leucine, as well as another photo-reactive amino acid derived from methionine, photo-methionine, were first synthesized in 2005 by Monika Suchanek, Anna Radzikowska and Christoph Thiele from the Max Planck Institute of Molecular Cell Biology and Genetics, with the objective of identifying protein-protein interactions through a simple western blot test with high specificity. The resemblance of the photo-reactive amino acids to the natural ones allows the former to avoid the extensive control mechanisms that take place during protein synthesis within the cell. == Structure == As mentioned in the introduction, l-photo-leucine is a synthetic derivative of the amino acid l-leucine. l-Photo-leucine is characterized by the presence of a diazirine ring linked to the R group of the original amino acid. This three-membered ring consists of a carbon atom attached to two nitrogen atoms through single covalent bonds; the two nitrogen atoms are in turn connected to each other by a double covalent bond. The diazirine carbon is located in the position where, in l-leucine, the 2nd carbon atom of the R group would be, linked to the 1st and 3rd carbons of that group. The diazirine ring confers on photo-leucine its photoreactive property: when irradiated with UV light, it splits, releasing nitrogen gas and leaving a highly reactive carbon atom (see Diazirine). In protein-protein interactions (PPI), this atom is
{ "page_id": 44108130, "source": null, "title": "L-Photo-leucine" }
attached to the complex formed by the two proteins under study. The rest of the amino acid has the same structure as the original l-leucine molecule, which includes, as in every amino acid, an amino group and a carboxyl group bonded to an α-carbon, and a radical attached to this carbon atom. The R chain contains, in this case, a diazirine ring and two extra carbon atoms, each connected to the diazirine carbon as previously mentioned. For use in biology experiments, only the l-enantiomer of the photo-leucine amino acid is synthesized, so that it can substitute for natural l-leucine. (Natural proteins consist only of l-amino acids; see homochirality.) == Synthesis == l-Photo-leucine resembles l-leucine in its structure, but the former contains a photo-activatable diazirine ring, which the latter does not; this ring yields a reactive carbene after the light-induced loss of nitrogen, and it is this that confers on l-photo-leucine its properties. The photo-reactive amino acid is synthesized by α-bromination of the azi-carboxylic acid followed by aminolysis of the azi-bromo-carboxylic acid. The classic procedure for synthesizing photo-leucine is based on the following steps: 4,4'-azi-pentanoic acid, CCl4 and thionyl chloride are heated to 65 °C for 30 minutes. Then, N-bromosuccinimide, CCl4 and 48% HBr are added and the mixture is stirred at 55 °C for 4 hours. The solvent and free bromine are removed under reduced pressure and the residue is extracted with 50 mL CCl4. The solvent is removed and the crude product (2-bromo-4,4'-azi-pentanoyl chloride) is dissolved in acetone and hydrolyzed with aqueous NaHCO3. The crude brominated free acid is obtained upon acidification with HCl and extraction with dichloromethane. The solvent is removed and the product filtered through silica gel in isohexane acetate, followed by removal of the solvent. Following this procedure, it is possible to brominate the 4,4'-azi-pentanoic acid and finally obtain dl-2-bromo-4,4'-azi-pentanoic acid. Aminolysis of dl-2-bromo-4,4'-azi-pentanoic acid is performed in ammonia-saturated methanol and 25% aqueous ammonia for 5 days at 55 °C. After evaporation of the ammonia, 20 mL of concentrated HCl are added, followed by evaporation of the water at reduced pressure. The dry residue is extracted with 20 mL hot methanol and the extract neutralized with N,N-dimethylethylamine. Upon standing for 2 days at -32 °C a precipitate forms, which is isolated and re-crystallized twice from 70% ethanol to yield pure dl-2-amino-4,4'-azi-pentanoic acid. dl-2-Amino-4,4'-azi-pentanoic acid is acetylated to obtain dl-2-acetamido-4,4'-azi-pentanoic acid, followed by enzymatic deacetylation to give pure l-2-amino-4,4'-azi-pentanoic acid, also known as l-photo-leucine. Recently, the synthesis of photo-leucine has been improved. This new route requires boc-(S)-photo-leucine, which is prepared via ozonolysis of a commercially available product, followed by formation of the diazirine by the method of Church and Weiss. This route represents a significant improvement over the original six-step synthesis of (S)-photo-leucine, which proceeded in low yield and required enzymatic resolution of a racemic intermediate. == Activation == l-Photo-leucine acquires its function after being exposed to UV light. This causes the diazirine ring of l-photo-leucine to lose its nitrogen atoms in the form of nitrogen gas, leaving its carbon atom as a highly reactive carbene. The bonds established between this carbon, belonging to one protein (A), and atoms belonging to another protein (B) are responsible for the cross-linking properties of l-photo-leucine, which allow it to attach these two peptide chains into a single complex (AB). The appropriate wavelength to activate the l-photo-leucine molecule ranges from 320 to 370 nanometers.
Lamps with higher power are more effective in accomplishing this objective and do so in less time. The ideal wavelength for the activation of the photo-leucine amino
acid is 345 nm. To increase efficiency, a shallow, uncovered plate should be used. Rotation of the samples under the UV light may also be necessary to ensure that they receive even UV irradiation, and thus to improve cross-linking efficiency. If the cross-linking is done in vivo, within living cells, these must be exposed to the UV radiation for a period of 15 minutes or less. == Uses == In the absence of the original amino acid (l-leucine) in an environment, l-photo-leucine is used just as its naturally occurring analog in the protein processing mechanisms of the cell. Therefore, it can be used as a substitute for leucine in the primary structure of the protein. This property of photo-leucine is very useful for studying protein-protein interactions (PPIs), because the photo-leucine molecule, owing to its molecular structure, participates in the covalent cross-linking of proteins in the protein-protein interaction (PPI) domains when it is activated by ultraviolet (UV) light. This makes it possible to determine and describe stable and transient protein interactions within cells without using any additional chemical cross-linkers, which could damage the cell structure being studied. The study of these protein-protein interactions is important because they are crucial in organizing cellular processes in space and time. In fact, interest in protein-protein interactions is not confined to basic research: many of these interactions, involved in viral fusion or in growth-factor signaling, are promising targets for antiviral and anticancer drugs. Photo-affinity labeling is a powerful tool for identifying the protein targets of biologically active small molecules and for probing the structure of ligand binding sites, which is why photo-reactive amino acids, including photo-leucine, are so useful. === Protein labelling === Monika Suchanek, Anna Radzikowska and Christoph Thiele carried out an experiment in
which they successfully labelled proteins from monkey kidney cells (COS7). These cells were grown in a high-glucose medium, from which a 3 cm² sample was removed to proceed with the western blotting. At about 70% confluence, the initial medium was replaced by one lacking the amino acids methionine, leucine, isoleucine and valine, as well as phenol red. Afterwards, photo-amino acids were added to a final concentration of 4 mM photo-leucine and photo-isoleucine and 1.7 mM photo-methionine, and the cells were cultivated for 22 hours. Once the time was over, the cells were washed with PBS and UV-irradiated for 1 to 3 minutes using a 200-W high-pressure mercury lamp with a glass filter that removed wavelengths below 310 nm. This did not affect the viability of the cells (which was only altered after 10 minutes of irradiation). The cells were then lysed and subjected to western blotting to analyse the isolated cross-linked complexes. MacKinnon A. L. et al. used photo-leucine to label proteins in a crude membrane fraction, which allowed them to identify the central part of a translocation channel within the membrane that is the target of the cyclodepsipeptide inhibitor. == Advantages of photo-leucine as a cross-linker == Traditionally, the recognition of protein-protein interactions was carried out through chemical cross-linking, which involved the use of a moderately reactive bifunctional reagent, commonly attached to free amino groups. However, photochemical cross-linking is much more specific due to the short lifetime of the excited intermediates. In addition, photochemical cross-linking does not interfere with antibody recognition, whereas chemical cross-linking does. Beyond these advantages, photo-leucine also lacks notable negative effects: although unnatural amino acids are in general toxic to cells, photo-leucine has been shown not to
have any substantial effect on cell viability. These results have been corroborated by many experiments. For example, an assay with Escherichia coli β-galactosidase showed that the addition of any of the three photo-amino acids, or of a mixture of them, had no effect on enzyme activity. This supports the conclusion that photo-amino acids are nontoxic to cultivated mammalian cells and can, at least partially, functionally replace their natural forms. However, photo-reactive amino acids are currently used in combination with chemical cross-linkers in order to achieve the most reliable results possible in protein-protein interaction studies. == References ==
The thrifty gene hypothesis, or Gianfranco's hypothesis, is an attempt by geneticist James V. Neel to explain why certain populations and subpopulations in the modern day are prone to diabetes mellitus type 2. He proposed the hypothesis in 1962 to resolve a fundamental problem: diabetes is clearly a very harmful medical condition, yet it is quite common, and it was already evident to Neel that it likely had a strong genetic basis. The problem is to understand how a disease with a likely genetic component and with such negative effects may have been favoured by the process of natural selection. Neel suggested that the resolution to this problem is that genes which predispose to diabetes (called 'thrifty genes') were historically advantageous, but they became detrimental in the modern world. In his words, they were "rendered detrimental by 'progress'". Neel's primary interest was in diabetes, but the idea was soon expanded to encompass obesity as well. Thrifty genes are genes which enable individuals to efficiently collect and process food to deposit fat during periods of food abundance in order to provide for periods of food shortage (feast and famine). According to the hypothesis, the 'thrifty' genotype would have been advantageous for hunter-gatherer populations, especially child-bearing women, because it would allow them to fatten more quickly during times of abundance. Fatter individuals carrying the thrifty genes would thus better survive times of food scarcity. However, in modern societies with a constant abundance of food, this genotype effectively prepares individuals for a famine that never comes. The result of this mismatch between the environment in which these genes evolved and the environment of today is widespread chronic obesity and related health problems like diabetes. The hypothesis has received various criticisms and several modified or alternative hypotheses have been proposed. == Hypothesis and research by Neel
{ "page_id": 6687077, "source": null, "title": "Thrifty gene hypothesis" }
== James Neel, a professor of Human Genetics at the University of Michigan Medical School, proposed the "thrifty genotype" hypothesis in 1962 in his paper "Diabetes Mellitus: A 'Thrifty' Genotype Rendered Detrimental by 'Progress'?" Neel intended the paper to provoke further contemplation and research on the possible evolutionary and genetic causes of diabetes among populations that had only recently come into regular contact with Westerners. The genetic paradox Neel sought to address was this: diabetes conferred a significant reproductive (and thus evolutionary) disadvantage to anyone who had it, yet the populations Neel studied had diabetes in such high frequencies that a genetic predisposition to develop diabetes seemed plausible. Neel sought to unravel the mystery of why genes that promote diabetes had not been naturally selected out of the population's gene pool. Neel proposed that a genetic predisposition to develop diabetes was adaptive to the feast-and-famine cycles of Paleolithic human existence, allowing humans to fatten rapidly and profoundly during times of feast so that they might better survive during times of famine. This would have been advantageous then but not in the current environment. The hypothesis was proposed before there was a clear distinction between the different types of diabetes. Neel later stated that the hypothesis applied to non-insulin-dependent diabetes mellitus. In its original form, the theory more specifically stated that diabetes may be due to a rapid insulin response which would prevent loss of glucose in the urine. Furthermore, it made use of a then-popular theory, later disproven, which held that specific insulin antagonists released in response to insulin were the cause of diabetes. In the decades following the publication of his first paper on the "thrifty genotype" hypothesis, Neel researched the frequency of diabetes and (increasingly) obesity in a number of other populations and
sought out observations that might disprove or discount his "thrifty gene" hypothesis. Neel's further investigations cast doubt on the "thrifty genotype" hypothesis. If a propensity to develop diabetes were an evolutionary adaptation, then diabetes would have been a disease of long standing in those populations currently experiencing a high frequency of diabetes. However, Neel found no evidence of diabetes among these populations earlier in the century. And when he tested younger members of these populations for glucose intolerance - which might have indicated a predisposition for diabetes - he found none. In 1989, Neel published a review of his further research based on the "thrifty genotype" hypothesis and in the introduction noted the following: "The data on which that (rather soft) hypothesis was based has now largely collapsed." However, Neel argued that "...the concept of a "thrifty genotype" remains as viable as when first advanced...". He went on to propose that the thrifty genotype concept be thought of in the context of a "compromised" genotype that affects several other metabolically related diseases. In a 1998 review, Neel described an expansion of the original hypothesis, from diabetes being caused by "thrifty genes" adapted specifically for intermittent starvation to a more complex theory in which several related diseases such as diabetes, obesity, and hypertension (see also metabolic syndrome) are caused by physiological systems adapted for an older environment being pushed beyond their limits by environmental changes. Thus, one possible remedy for these diseases is changing diet and exercise activity to more closely reflect the ancestral environment. == Other research == The thrifty genotype hypothesis has been used to explain high, and rapidly escalating, levels of obesity and diabetes among groups newly introduced to Western diets and environments, from South Pacific Islanders, to Sub-Saharan Africans, to Native Americans in the Southwestern
United States, to Inuit. The original "thrifty gene" hypothesis argued that famines were common and severe enough to select for thrifty genes over the 2.5 million years of human Paleolithic history. This assumption is contradicted by some anthropological evidence. Many of the populations that later developed high rates of obesity and diabetes appeared to have no discernible history of famine or starvation (for example, Pacific Islanders whose "tropical-equatorial islands had luxuriant vegetation all year round and were surrounded by lukewarm waters full of fish."). Moreover, the period after humans migrated out of Africa would have provided sufficient time to reverse any pre-existing famine-adapted alleles, for which there is little to no evidence. One criticism of the 'thrifty gene' idea is that it predicts that modern hunter-gatherers should get fat in the periods between famines. Data on the body mass index of hunter-gatherers and subsistence agriculturalists show that between famines they do not deposit large fat stores. However, genes that promote only limited fat deposition in the context of pre-industrialized lifestyles and diets may promote excessive fat deposition and obesity when caloric intake is increased and expenditure is decreased beyond the range of the environments these genes evolved in (a gene x environment interaction). As a response to such criticisms, a modified "thrifty" gene hypothesis holds that the famines and seasonal shortages of food that occurred only during the agricultural period may have exerted enough pressure to select for "thrifty" genes. == Thrifty phenotype hypothesis == The thrifty phenotype hypothesis arose from challenges posed to the thrifty gene hypothesis. It theorizes that instead of arising genetically, the "thrifty factors" developed as a direct result of the environment within the womb during development.
The development of insulin resistance is theorized to be directly related
to the body "predicting" a life of starvation for the developing fetus. Hence, one of the main causes of type 2 diabetes has been attributed to poor fetal and infant growth and the subsequent development of the metabolic syndrome. Since the hypothesis was proposed, many studies worldwide have confirmed the initial epidemiological evidence. Although the relationship with insulin resistance is clear at all ages studied, the relationship with insulin secretion is less clear. The relative contribution of genes and environment to these relationships remains a matter of debate. Other relevant observations come from metabolism researchers, who note that for practically every other species on earth fat metabolism is well regulated, that "most wild animals are in fact very lean" and that they remain lean "even when adequate food is supplied." == Other alternative hypotheses == In response to the criticisms of the original thrifty genotype theory, several new ideas have been proposed for explaining the evolutionary bases of obesity and related diseases. The "thrifty epigenomic hypothesis" is a combination of the thrifty phenotype and thrifty genotype hypotheses. While it posits an ancient, canalized (genetically coded) physiological system for being "thrifty", it argues that an individual's disease risk is primarily determined by epigenetic events. Subtle epigenetic modifications at many genomic loci (gene regulatory networks) alter the shape of the canal in response to environmental influences and thereby establish a predisposition for complex diseases such as metabolic syndrome. There may also be epigenetic inheritance of disease risk. 
Watve and Yajnik suggested that changing insulin resistance mediates two phenotypic transitions: a transition in reproductive strategy from "r" (large number of offspring with smaller investment in each) to "K" (smaller number of offspring with greater investment in each) (see r/K selection theory); and a switch from a lifestyle dependent upon
muscular strength to one dependent on brain power ("soldier to diplomat"). Because the environmental conditions that would facilitate each transition overlap heavily, the scientists surmise, a common switch could have evolved for the two transitions. The main problem with this idea is the timing at which the transition is presumed to have happened, and how this would then translate into the genetic predisposition to type 2 diabetes and obesity. For example, the decline in reproductive investment in human societies (the so-called r to K shift) has occurred far too recently to have been caused by a change in genetics. Sellayah and colleagues have postulated an 'Out of Africa' theory to explain the evolutionary origins of obesity. The theory cites diverse ethnicity-based differences in obesity susceptibility in western civilizations to contend that neither the thrifty nor the drifty gene hypothesis can explain the demographics of the modern obesity crisis, although the arguments against these patterns arising due to 'drift' are unclear. Sellayah et al. argue that ethnic groups whose ancestors were adapted to hot climates have low metabolic rates due to a lack of thermogenic capacity, whereas groups whose ancestors were cold-adapted were endowed with greater thermogenic capacity and higher metabolic rates. Sellayah and colleagues provide evidence of thermogenic capacity, metabolic rates and obesity prevalence in various indigenous populations in support of their argument. Contrasting this analysis, however, a study of the spatial distribution of obesity across the mainland USA showed that once the effects of poverty and race were accounted for, there was no association between ambient temperature and obesity rates. The most highly cited alternative to the thrifty gene hypothesis is the drifty gene hypothesis proposed by the British biologist John Speakman. This idea differs fundamentally from all the other ideas in that it does not propose any
selective advantage for the obese state, either now or in the past. The main feature of this hypothesis is that the current pattern of obesity does not suggest that obesity has been under strong positive selection for a protracted period of time. It is argued instead that obesity comes about because of genetic drift in the genes controlling the upper limit on our body fatness. Such drift may have started because around 2 million years ago ancestral humans effectively removed the risk of predation, which was probably a key factor selecting against fatness. The drifty gene hypothesis was presented as part of a presidential debate at the 2007 Obesity Society meeting in New Orleans, with the counter-arguments favouring the thrifty gene presented by the British nutritionist Andrew Prentice. The main thrust of Prentice's argument against the drifty gene idea is that Speakman's critique of the thrifty gene hypothesis ignores the huge impact that famines have on fertility. Prentice argues that famine may actually have been a force driving the evolution of thrifty genes only for the past 15,000 years or so (since the invention of agriculture), but because famines exert effects on both survival and fertility, the selection pressure may have been sufficient even over such a short timescale to generate a pressure for "thrifty" genes. These alternative arguments were published in two back-to-back papers in the International Journal of Obesity in November 2008. Prentice et al. predicted that the emerging molecular genetics field would ultimately provide a way to test between the adaptive 'thrifty gene' idea and the non-adaptive 'drifty gene' idea, because it would be possible to find signatures of positive selection in the human genome, at genes linked to both obesity and type 2 diabetes, if the 'thrifty gene' hypothesis is correct. Two
comprehensive studies have been performed seeking such signatures of selection. Ayub et al. (2014) searched for signatures of positive selection at 65 genes linked to type 2 diabetes, and Wang and Speakman (2016) searched for signatures of selection at 115 genes linked to obesity. In both cases there was no evidence of such selection signatures at a higher rate than in random genes matched for GC content and recombination rate. These two papers provide strong evidence against the thrifty gene idea, and indeed against any adaptive explanation that relies on selection during our recent evolutionary history, and instead provide strong support for the 'drifty gene' interpretation. == Search for thrifty genes == Many attempts have been made to search for one or more genes contributing to thrift. Modern tools of genome-wide association studies have revealed many genes with small effects associated with obesity or type 2 diabetes, but all of them together explain only between 1.4 and 10% of the population variance. This leaves a large gap between the pre-genomic and emerging genomic estimates of the heritability of obesity and type 2 diabetes, sometimes called the "missing heritability problem." The reasons for this discrepancy are not completely understood. A likely possibility is that the missing heritability is explained by rare variants of large effect that are found only in limited populations. These would be impossible to detect by standard whole-genome sequencing approaches, even with hundreds of thousands of participants. The extreme endpoint of this distribution is the so-called 'monogenic' obesities, where most of the impact on body weight can be tied to a mutation in a single gene that runs in a single family. The classic example of such a genetic effect is the presence of mutations in the leptin gene. An important unanswered question is whether such rare variants
exist because of chance mutations, population founder events and maintenance by processes such as drift, or whether there is any selective advantage involved in their maintenance and spread. An example of such a rare variant effect was recently discovered among Samoan islanders. Among the islanders the variant is extremely common, but in other populations it is extremely rare or absent. The variant predisposes to obesity but strangely is protective against type 2 diabetes. Based on cell studies it was suggested the variant may protect individuals against periods of 'famine' and there is also evidence that it has been under positive selection. The most likely scenario then is that this rare variant was established in the islanders by a founder effect among a small initial colonising population, and was able to spread because of a selective advantage it conferred within that small group. Hence, in small populations under particular environmental conditions it may be feasible that the 'thrifty gene' idea is correct. It remains to be seen if rare variants that fill the gap in the missing heritability estimates are also 'thrifty genes' or if they are rare chance events sustained by drift, as implicated for the common variants currently linked to obesity and type 2 diabetes. == See also == Genetics of obesity New World syndrome == References ==
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern instrumental methods. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses changes in mass or volume to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Qualitative and quantitative analysis can then be performed, often with the same instrument, using interactions with light, heat, electric fields or magnetic fields; often a single instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools, and has broad applications to medicine, science, and engineering. == History == Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. Early significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emission spectrometry, developed by Robert Bunsen and Gustav Kirchhoff, who discovered rubidium (Rb) and caesium (Cs) in 1860. 
Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th
{ "page_id": 2408, "source": null, "title": "Analytical chemistry" }
century and refined in the late 20th century. The separation sciences follow a similar timeline of development and also became increasingly transformed into high-performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to focus either on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer is the kind of discovery an analytical chemist might be involved in; an effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical. 
== Classical methods == Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical
chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs. === Qualitative analysis === Qualitative analysis determines the presence or absence of a particular compound, but not its mass or concentration; by definition, qualitative analyses do not measure quantity. ==== Chemical tests ==== There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. ==== Flame test ==== Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation these tests are rarely used, but they can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient. === Quantitative analysis === Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis). ==== Gravimetric analysis ==== Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water, such that the difference in weight is due to the loss of water. ==== Volumetric analysis ==== Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titration is a
family of techniques used to determine the concentration of an analyte. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the number of moles used, which can then be used to determine the concentration or composition of the analyte. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves by systematically measuring the pH after each addition of titrant in order to understand different properties of the analyte. == Instrumental methods == === Spectroscopy === Spectroscopy measures the interaction of molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on. === Mass spectrometry === Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. In a mass spectrometer, a small amount of sample is ionized and converted to gaseous ions, which are then separated and analyzed according to their mass-to-charge ratios. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Mass spectrometry is also categorized by the type of mass analyzer: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. 
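The separation performed by a time-of-flight analyzer, one of the mass analyzers listed above, follows from simple kinematics: an ion of charge z accelerated through a potential V gains kinetic energy zeV, so its flight time over a field-free drift region scales with the square root of its mass-to-charge ratio. A minimal sketch of this relationship (the 1 m drift length and 20 kV accelerating voltage are hypothetical illustrative values, not parameters of any particular instrument):

```python
import math

E = 1.602176634e-19      # elementary charge (C)
AMU = 1.66053906660e-27  # atomic mass constant (kg)

def tof_flight_time(mass_amu, charge, accel_voltage, drift_length):
    """Flight time (s) of an ion in an idealized linear TOF analyzer.

    An ion of charge z accelerated through voltage V gains kinetic
    energy z*e*V = (1/2) m v^2, giving t = d * sqrt(m / (2 z e V)).
    """
    m = mass_amu * AMU
    return drift_length * math.sqrt(m / (2 * charge * E * accel_voltage))

# Heavier ions arrive later: compare two singly charged ions in a
# hypothetical 1 m drift tube with 20 kV acceleration.
t_light = tof_flight_time(100.0, 1, 20e3, 1.0)
t_heavy = tof_flight_time(400.0, 1, 20e3, 1.0)

# Flight time scales with sqrt(m/z): quadrupling the mass doubles t.
assert math.isclose(t_heavy / t_light, 2.0, rel_tol=1e-12)
```

Because flight time depends only on m/z, ions of different mass-to-charge ratio starting together separate in time along the drift tube, which is what lets a detector at the end resolve them.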
=== Electrochemical analysis === Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials