Immunoediting is a dynamic process that consists of immunosurveillance and tumor progression. It describes the relationship between tumor cells and the immune system. It is made up of three phases: elimination, equilibrium, and escape. [1] Immunoediting is characterized by changes in the immunogenicity of tumors due to the anti-tumor response of the immune system, resulting in the emergence of immune-resistant variants. [2]

The elimination phase, also known as immunosurveillance, includes innate and adaptive immune responses to tumor cells. In the innate immune response, several effector cells such as natural killer cells and T cells are activated by inflammatory cytokines released by the growing tumor cells, macrophages and stromal cells surrounding the tumor. The recruited tumor-infiltrating NK cells and macrophages produce interleukin 12 and interferon gamma, which kill tumor cells by cytotoxic mechanisms such as perforin, TNF-related apoptosis-inducing ligands (TRAILs), and reactive oxygen species. [3] [1] [4] Most of the tumor cells are destroyed in this phase, but some of them survive and are able to reach equilibrium with the immune system.

The elimination phase consists of the following four phases. The first phase involves the initiation of an antitumor immune response. Cells of the innate immune system recognize the presence of a growing tumor which has undergone stromal remodeling, causing local tissue damage. This is followed by the induction of inflammatory signals, which is essential for recruiting cells of the innate immune system (e.g. natural killer cells, natural killer T cells, macrophages and dendritic cells) to the tumor site. During this phase, infiltrating lymphocytes such as the natural killer cells and natural killer T cells are stimulated to produce IFN-gamma.

In the second phase, newly synthesized IFN-gamma induces tumor death (to a limited extent) and promotes production of the chemokines CXCL10, CXCL9 and CXCL11. These chemokines play an important role in promoting tumor death by blocking the formation of new blood vessels. Tumor cell debris produced as a result of tumor death is then ingested by dendritic cells, followed by migration of these dendritic cells to the draining lymph nodes. The recruitment of more immune cells also occurs and is mediated by the chemokines produced during the inflammatory process.

In the third phase, natural killer cells and macrophages transactivate one another via the reciprocal production of IFN-gamma and IL-12. This again promotes more tumor killing by these cells via apoptosis and the production of reactive oxygen and nitrogen intermediates. In the draining lymph nodes, tumor-specific dendritic cells trigger the differentiation of Th1 cells, which in turn facilitates the development of cytotoxic CD8+ T cells, also known as killer T cells. In the final phase, tumor-specific CD4+ and CD8+ T cells home to the tumor site, and the cytotoxic T lymphocytes then destroy the antigen-bearing tumor cells which remain there.

The next step in cancer immunoediting is the equilibrium phase, during which tumor cells that have escaped the elimination phase and have a non-immunogenic phenotype are selected for growth. Lymphocytes and IFN-gamma exert a selection pressure on tumor cells, which are genetically unstable and rapidly mutating. Tumor cell variants which have acquired resistance to elimination then enter the escape phase. Equilibrium is the longest of the three processes in cancer immunoediting and may occur over a period of many years. During this period of Darwinian selection, new tumor cell variants emerge with various mutations that further increase overall resistance to immune attack. [3] In the study of cancer immunoediting, knockout mice have been used for experimentation, since human testing is not possible. Tumor infiltration by lymphocytes is seen as a reflection of a tumor-related immune response. [5] There is increasing evidence that biological vesicles (e.g., exosomes) secreted by tumour cells help to foster an immunosuppressive tumour microenvironment. [6]

During the escape phase, tumor cell variants selected in the equilibrium phase have breached the host organism's immune defenses, with various genetic and epigenetic changes conferring further resistance to immune detection. [1] In this phase, tumor cells continue to grow and expand in an uncontrolled manner and may eventually give rise to malignancies. Several mechanisms allow cancer cells to escape the immune system. One is downregulation or loss of expression of classical MHC class I (HLA-A, HLA-B, HLA-C), [7] [4] which is essential for an effective T cell-mediated immune response and occurs in up to 90% of tumors. [7] Another is the development of a tumor microenvironment that suppresses the immune system [8] and acts as a protective barrier for cancer cells; cells contained in the tumor microenvironment are able to produce cytokines which can cause apoptosis of activated T lymphocytes. [9] A further mechanism by which tumor cells avoid the immune system is upregulation of non-classical MHC class I (HLA-E, HLA-F, HLA-G), which prevents NK cell-mediated immune reactions through interaction with NK cells. [10] [11] [4] The tumor begins to develop and grow after escaping the immune system. Recent studies suggest that cells harboring the HIV reservoir may also undergo a process of immunoediting, contributing to the increased resistance of these cells to elimination by host immune factors. [12]
https://en.wikipedia.org/wiki/Immunoediting
Immunoelectrophoresis is a general name for a number of biochemical methods for separation and characterization of proteins based on electrophoresis and reaction with antibodies. All variants of immunoelectrophoresis require immunoglobulins, also known as antibodies, reacting with the proteins to be separated or characterized. The methods were developed and used extensively during the second half of the 20th century. In somewhat chronological order: immunoelectrophoretic analysis (one-dimensional immunoelectrophoresis ad modum Grabar), crossed immunoelectrophoresis (two-dimensional quantitative immunoelectrophoresis ad modum Clarke and Freeman or ad modum Laurell), rocket immunoelectrophoresis (one-dimensional quantitative immunoelectrophoresis ad modum Laurell), fused rocket immunoelectrophoresis ad modum Svendsen and Harboe, and affinity immunoelectrophoresis ad modum Bøg-Hansen.

Immunoelectrophoresis is a general term describing many combinations of the principles of electrophoresis and the reaction of antibodies, also known as immunodiffusion. [1] Agarose, as 1% gel slabs of about 1 mm thickness buffered at high pH (around 8.6), is traditionally preferred for the electrophoresis and the reaction with antibodies. Agarose was chosen as the gel matrix because it has large pores allowing free passage and separation of proteins, but provides an anchor for the immunoprecipitates of protein and specific antibodies. The high pH was chosen because antibodies are practically immobile at high pH. Electrophoresis equipment with a horizontal cooling plate was normally recommended for the electrophoresis. Immunoprecipitates are visible in the wet agarose gel, but are stained with protein stains like Coomassie brilliant blue in the dried gel. In contrast to SDS-gel electrophoresis, electrophoresis in agarose allows native conditions, preserving the native structure and activities of the proteins under investigation; immunoelectrophoresis therefore allows characterization of enzyme activities, ligand binding and the like, in addition to electrophoretic separation.

Counterimmunoelectrophoresis is the combination of immunodiffusion with electrophoresis. In essence, electrophoresis speeds up the process of moving the reactants together.

The immunoelectrophoretic analysis ad modum Grabar is the classical method of immunoelectrophoresis. Proteins are separated by electrophoresis, then antibodies are applied in a trough next to the separated proteins, and immunoprecipitates are formed after a period of diffusion of the separated proteins and antibodies against each other. The introduction of the immunoelectrophoretic analysis gave a great boost to protein chemistry; some of the first results were the resolution of proteins in biological fluids and biological extracts. Among the important observations made were the great number of different proteins in serum, the existence of several immunoglobulin classes, and their electrophoretic heterogeneity.

Crossed immunoelectrophoresis is also called two-dimensional quantitative immunoelectrophoresis ad modum Clarke and Freeman or ad modum Laurell. In this method the proteins are first separated during the first-dimension electrophoresis; then, instead of diffusing towards the antibodies, the proteins are electrophoresed into an antibody-containing gel in the second dimension.
Immunoprecipitation takes place during the second-dimension electrophoresis, and the immunoprecipitates have a characteristic bell shape, each precipitate representing one antigen. The position of a precipitate depends on the amount of protein as well as on the amount of specific antibody in the gel, so relative quantification can be performed. The sensitivity and resolving power of crossed immunoelectrophoresis are greater than those of the classical immunoelectrophoretic analysis, and there are multiple variations of the technique useful for various purposes. Crossed immunoelectrophoresis has been used for studies of proteins in biological fluids, particularly human serum, and in biological extracts.

Rocket immunoelectrophoresis is one-dimensional quantitative immunoelectrophoresis. The method was used for quantitation of human serum proteins before automated methods became available.

Fused rocket immunoelectrophoresis is a modification of one-dimensional quantitative immunoelectrophoresis used for detailed measurement of proteins in fractions from protein separation experiments.

Affinity immunoelectrophoresis is based on changes in the electrophoretic pattern of proteins through specific interaction or complex formation with other macromolecules or ligands. It has been used for estimation of binding constants, as for instance with lectins, and for characterization of proteins with specific features like glycan content or ligand binding. Some variants of affinity immunoelectrophoresis are similar to affinity chromatography in their use of immobilized ligands.

Binding of ligands. The open structure of the immunoprecipitate in the agarose gel allows additional binding of radioactively labeled antibodies and other ligands to reveal specific proteins. This possibility has been used, for instance, for identification of allergens through reaction with immunoglobulin E (IgE) and for identification of glycoproteins with lectins.

General comments. Two factors explain why immunoelectrophoretic methods are not widely used. First, they are rather work intensive and require some manual expertise. Second, they require rather large amounts of polyclonal antibodies. Today gel electrophoresis followed by electroblotting is the preferred method for protein characterization because of its ease of operation, its high sensitivity, and its low requirement for specific antibodies. In addition, proteins are separated by gel electrophoresis on the basis of their apparent molecular weight, which is not accomplished by immunoelectrophoresis; nevertheless, immunoelectrophoretic methods are still useful when non-reducing conditions are needed.

Counter-immunoelectrophoresis and its modification. In comparison to other conventional methods of diagnosis, e.g. for viral infection testing, counter-immunoelectrophoresis is a highly specific, simple, and speedy method that does not require sophisticated, expensive tools, input materials, or long-term capacity building. Despite the high informativeness of counter-immunoelectrophoresis, the results in practice can be dubious at times. Consequently, counter-immunoelectrophoresis procedures can be improved by using a manufactured amphiphilic fluorescein-containing copolymer to increase the antigen-antibody interaction. According to the findings, the use of the fluorescein copolymer-antigen mixture improved the association with plasma antibodies of animals immunized against hemorrhagic disease and enhanced the protein concentration in the precipitation zone. The capability of the amphiphilic fluorescein copolymer to boost antigen-antibody association, and to make the fluorescent accumulation zone visible, may improve the efficiency of counter-immunoelectrophoresis for the rapid diagnosis of infectious disease. [3]

Immunomethods. The terms immunomethods and immunochemical techniques refer to a variety of immunoelectrophoresis procedures whose results are identified using antibodies and immunological methodologies. [4] The great sensitivity of immunomethods is a benefit to be weighed against the high cost of using antibodies. Many different types of agarose electrophoresis are used to see how proteins travel under diverse circumstances. Proteins are detected after the run by incubating gels with specific antibodies, which are then stained with Coomassie blue. [5]

Radial immunodiffusion. Radial immunodiffusion is an immunoassay technique for determining the concentration of a particular protein in a mixture containing other molecules. Like the other methods, it uses an agarose gel. In this procedure, the samples are placed into round wells in the central part of the gel and diffuse through it, generating a precipitation ring whose diameter is related to the amount of protein that has diffused (a worked example appears at the end of this article). [4]

Identification of nanomaterial interaction with the C3 complement protein by 2D immunoelectrophoresis. 2D immunoelectrophoresis is a promising method that can be used for a range of purposes involving protein migration, such as the detailed examination of protein opsonization, with the first dimension separating as a function of protein molar mass and the second dimension as a function of isoelectric point. Although a sample contains a large number of proteins, each spot on the 2D gel represents a particular protein with a specific molecular mass and characteristics. [5] 2D immunoelectrophoresis is also a valuable tool for examining the activation of signal transduction pathways, an essential factor when researching nanoparticles before in vivo delivery, because such activation will affect nanoparticle longevity, destination, and biodistribution. This method employs two-dimensional horizontal agarose protein electrophoresis to specifically identify the association of nanoparticles with the C3 protein. Proteins are separated in the first dimension according to their molecular mass (the smaller the protein, the farther it migrates), and in the second dimension according to their abundance. [6]

Some limitations of immunoelectrophoresis. Though immunoelectrophoresis has a number of benefits, it also has certain drawbacks: compared with other electrophoretic methods, such as immunofixation, it is slow and less precise; the results can be difficult to interpret; several small monoclonal proteins may be harder to identify; and the accessibility of particular antibodies limits its utility in analytical techniques. Traditional (classical or conventional) immunoelectrophoresis has further drawbacks, including the fact that it is time consuming (the protocol might take up to 3 days to finish), has limited specificity and sensitivity, and yields results that can be difficult to read.
As a result, newer immunoelectrophoresis techniques have largely supplanted the conventional immunoelectrophoresis.
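To make the radial immunodiffusion quantitation described above concrete, here is a minimal Python sketch assuming the common endpoint (Mancini-type) relation, in which the square of the ring diameter varies linearly with antigen concentration. All concentrations, diameters and the fitted line are illustrative assumptions, not values from the text.

```python
# Sketch: quantitation by radial immunodiffusion (endpoint method).
# Assumes ring diameter squared is linearly related to antigen
# concentration; all numbers below are illustrative.
import numpy as np

# Calibration standards: known concentrations (mg/dL) and measured ring diameters (mm)
standards_conc = np.array([25.0, 50.0, 100.0, 200.0])
standards_diam = np.array([4.1, 5.3, 7.0, 9.6])

# Fit d^2 = a + b * c  (np.polyfit returns slope first, then intercept)
b, a = np.polyfit(standards_conc, standards_diam**2, 1)

def concentration_from_diameter(d_mm: float) -> float:
    """Invert the calibration line to estimate antigen concentration."""
    return (d_mm**2 - a) / b

print(f"{concentration_from_diameter(6.2):.1f} mg/dL")  # unknown sample
```

Rocket immunoelectrophoresis quantitation follows the same calibration logic, with rocket height rather than ring diameter as the measured quantity.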
https://en.wikipedia.org/wiki/Immunoelectrophoresis
Immunofixation permits the detection and typing of monoclonal antibodies or immunoglobulins in serum or urine. It is of great importance for the diagnosis and monitoring of certain blood-related diseases such as myeloma. The method detects by precipitation: when a soluble antigen (Ag) is brought into contact with the corresponding antibody, precipitation occurs, which may be visible with the naked eye or microscope. [citation needed] Immunofixation first separates antibodies in a mixture as a function of their specific electrophoretic mobility. For the purpose of identification, antisera are used that are specific for the targeted antibodies. [1] Specifically, immunofixation allows the detection of monoclonal antibodies representative of diseases such as myeloma or Waldenström macroglobulinemia. The technique consists of depositing a serum (or previously concentrated urine) sample on a gel. After application of an electric current that separates the proteins according to their size, antibodies specific for each type of immunoglobulin are laid upon the gel. More or less narrow bands thus appear on the gel, corresponding to the different immunoglobulins. [citation needed] Immunofixation, like immunoelectrophoresis, takes place in two steps. Immunofixation tends to replace protein electrophoresis for several reasons. [citation needed] Immunofixation is, however, only sensitive to immunoglobulins and is more expensive than protein electrophoresis.
https://en.wikipedia.org/wiki/Immunofixation
Immunofluorescence (IF) is a light microscopy-based technique that allows detection and localization of a wide variety of target biomolecules within a cell or tissue at a quantitative level. The technique utilizes the binding specificity of antibodies and antigens. [1] The specific region an antibody recognizes on an antigen is called an epitope. Several antibodies can recognize the same epitope but differ in their binding affinity; the antibody with the higher affinity for a specific epitope will outcompete antibodies with a lower affinity for the same epitope. [2] [3] By conjugating the antibody to a fluorophore, the position of the target biomolecule is visualized by exciting the fluorophore and measuring the emission of light at a specific predefined wavelength using a fluorescence microscope. It is imperative that the binding of the fluorophore to the antibody does not interfere with the immunological specificity of the antibody or the binding capacity of its antigen. [4] [5]

Immunofluorescence is a widely used example of immunostaining (using antibodies to stain proteins) and is a specific example of immunohistochemistry (the use of the antibody-antigen relationship in tissues). This technique primarily utilizes fluorophores to visualize the location of the antibodies, while other techniques provoke a color change in the environment containing the antigen of interest or make use of a radioactive label. Immunofluorescent techniques using labelled antibodies were conceptualized in the 1940s by Albert H. Coons. [2] [6] [7]

Immunofluorescence is employed in foundational scientific investigations and clinical diagnostic endeavors, showcasing its multifaceted utility across diverse substrates, including tissue sections, cultured cell lines, and individual cells. Its usage includes analysis of the distribution of proteins, glycans, and small biological and non-biological molecules, and visualization of structures such as intermediate-sized filaments. [8] If the topology of a cell membrane is undetermined, epitope insertion into proteins can be used in conjunction with immunofluorescence to determine structures within the cell membrane. [9] Immunofluorescence can also be used as a "semi-quantitative" method to gain insight into the levels and localization patterns of DNA methylation. It can additionally be used in combination with other, non-antibody methods of fluorescent staining, e.g., the use of DAPI to label DNA. [10] [11] Examination of immunofluorescence specimens can be conducted using various microscope configurations, including the epifluorescence microscope, confocal microscope, and widefield microscope. [12]

To perform immunofluorescence staining, a fluorophore must be conjugated ("tagged") to an antibody. Staining procedures can be applied both to retained, intracellularly expressed antibodies and to cell surface antigens on living cells. There are two general classes of immunofluorescence techniques: primary (direct) and secondary (indirect). [1] [2] The following descriptions focus primarily on these classes in terms of conjugated antibodies. [12]

Primary (direct) immunofluorescence (DIF) uses a single antibody conjugated to a fluorophore. The antibody recognizes the target molecule (antigen) and binds to a specific region, called the epitope. The attached fluorophore can be detected via fluorescence microscopy and, depending on the type of fluorophore, will emit light of a specific wavelength once excited. [1] [14] The direct attachment of the fluorophore to the antibody reduces the number of steps in the sample preparation procedure, saving time and reducing non-specific background signal during analysis. [12] It also limits the possibility of antibody cross-reactivity and of mistakes throughout the process. One disadvantage of DIF is the limited number of antibodies that can bind to the antigen, which may reduce the sensitivity of the technique. When the target protein is available in only small concentrations, a better approach is secondary (indirect) immunofluorescence, which is considered to be more sensitive than DIF. [1] [2] [12]

Secondary (indirect) immunofluorescence (SIF) is similar to direct immunofluorescence, but the technique utilizes two types of antibodies, only one of which carries a conjugated fluorophore. The antibody with the conjugated fluorophore is referred to as the secondary antibody, while the unconjugated one is referred to as the primary antibody. [1] The principle of this technique is that the primary antibody specifically binds to the epitope on the target molecule, whereas the secondary antibody, with the conjugated fluorophore, recognizes and binds to the primary antibody. [1] This technique is considered to be more sensitive than primary immunofluorescence because multiple secondary antibodies can bind to the same primary antibody. The increased number of fluorophore molecules per antigen increases the amount of emitted light and thus amplifies the signal (see the sketch at the end of this article). [1] There are different methods for attaining a higher fluorophore-antigen ratio, such as the avidin-biotin complex (ABC) method and the labeled streptavidin-biotin (LSAB) method. [15] [16]

When studying structures within the cell, immunofluorescence is limited to fixed (i.e. dead) cells, as antibodies, being large proteins, generally do not penetrate intact cellular or subcellular membranes in living cells. To visualize these structures, antigenic material must be fixed firmly in its natural localization inside the cell. [17] To study structures within living cells in combination with fluorescence, one can utilize recombinant proteins containing fluorescent protein domains, e.g., green fluorescent protein (GFP). The GFP technique involves altering the genetic information of the cells. [18] [19]

A significant problem with immunofluorescence is photobleaching, [12] the fluorophore's permanent loss of the ability to emit light. [1] To mitigate the risk of photobleaching, one can employ different strategies. By reducing or limiting the intensity or timespan of light exposure, the absorption-emission cycle of fluorescent light is decreased, thus preserving the fluorophore's functionality. One can also increase the concentration of fluorophores, or opt for more robust fluorophores that exhibit resilience against photobleaching, such as Alexa Fluors, Seta Fluors, or DyLight Fluors. [2] Other problems that may arise when using immunofluorescence techniques include autofluorescence, spectral overlap and non-specific staining. [1] [2] Autofluorescence is the natural fluorescence emitted by the sample tissue or cell itself. Spectral overlap happens when a fluorophore has a broad emission spectrum that overlaps with the spectrum of another fluorophore, thus giving rise to false signals. Non-specific staining occurs when the antibody carrying the fluorophore binds to unintended proteins because of sufficient similarity in the epitope; this can lead to false positives. [2] [4] [1]

The main improvements to immunofluorescence lie in the development of fluorophores and fluorescence microscopes. Fluorophores can be structurally modified to improve brightness and photostability while preserving spectral properties and cell permeability. [20] Super-resolution fluorescence microscopy methods can produce images with a resolution higher than that imposed by the diffraction limit, enabling the determination of structural details within the cell. [21] Super-resolution in fluorescence refers, more specifically, to the ability of a microscope to prevent the simultaneous fluorescence of adjacent, spectrally identical fluorophores (spectral overlap). Recently developed super-resolution fluorescence microscopy methods include stimulated emission depletion (STED) microscopy, saturated structured-illumination microscopy (SSIM), fluorescence photoactivation localization microscopy (fPALM), and stochastic optical reconstruction microscopy (STORM). [22]
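The sensitivity difference between direct and indirect immunofluorescence described above is essentially multiplicative, as the following back-of-the-envelope Python sketch shows. The labeling ratios are illustrative assumptions, since real values vary with the antibody, the fluorophore and the conjugation chemistry.

```python
# Sketch: why indirect (secondary) immunofluorescence amplifies signal.
# All numbers are illustrative assumptions, not measured values.

dyes_per_antibody = 4        # fluorophores conjugated per antibody (assumed)
secondaries_per_primary = 5  # secondary antibodies binding one primary (assumed)

# Direct IF: one conjugated primary antibody per epitope
dif_signal = dyes_per_antibody

# Indirect IF: one unlabeled primary, several conjugated secondaries
sif_signal = secondaries_per_primary * dyes_per_antibody

print(f"fluorophores per antigen, direct: {dif_signal}")    # 4
print(f"fluorophores per antigen, indirect: {sif_signal}")  # 20
print(f"amplification factor: {sif_signal / dif_signal:.0f}x")
```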
https://en.wikipedia.org/wiki/Immunofluorescence
An immunogen is any substance that generates B-cell (humoral/antibody) and/or T-cell (cellular) adaptive immune responses upon exposure to a host organism. [1] [2] Immunogens that generate antibodies are called antigens ("antibody-generating"). [2] Antigens are directly bound by host antibodies and lead to the selective expansion of antigen-specific B cells. Immunogens that generate T-cell responses are indirectly bound by host T cells after processing and presentation by host antigen-presenting cells. [3] An immunogen can be defined as a complete antigen, composed of a macromolecular carrier and epitopes (determinants), that can induce an immune response. [4] An illustrative example is the hapten. Haptens are low-molecular-weight compounds that may be bound by antibodies but cannot elicit an immune response. Consequently, haptens themselves are nonimmunogenic; they cannot evoke an immune response until they bind to a larger, immunogenic carrier molecule. The hapten-carrier complex, unlike free hapten, can act as an immunogen and can induce an immune response. [5] Until 1959, the terms immunogen and antigen were not distinguished. [6]

An adjuvant (from Latin adiuvare, to help) is any substance, distinct from antigen, which enhances the immune response by various mechanisms: recruiting professional antigen-presenting cells (APCs) to the site of antigen exposure; increasing the delivery of antigens by delayed/slow release (depot generation); immunomodulation by cytokine production (selection of a Th1 or Th2 response); inducing a T-cell response (prolonged exposure of peptide-MHC complexes [signal 1] and stimulation of expression of T-cell-activating co-stimulators [signal 2] on the APC surface); and targeting (e.g. carbohydrate adjuvants which target lectin receptors on APCs). Adjuvants have been used as additives to improve vaccine efficiency since the 1920s. Generally, administration of adjuvants is used both in experimental immunology and in clinical settings to ensure a high-quality/quantity, memory-enhanced antibody response, where antigens must be prepared and delivered in a fashion that maximizes production of a specific immune response. Among commonly used adjuvants are complete and incomplete Freund's adjuvant and solutions of aluminum hydroxide or aluminum phosphate. [11] [12]
https://en.wikipedia.org/wiki/Immunogen
Immunogenetics is the branch of medical immunology and medical genetics that explores the relationship between the immune system and genetics. Autoimmune diseases, such as type 1 diabetes, are complex genetic traits which result from defects in the immune system. Identification of the genes underlying the immune defects may identify new target genes for therapeutic approaches. Alternatively, genetic variations can also help to define the immunological pathway leading to disease. The term immunogenetics is based on the two words immunology and genetics, and is defined as "a sub discipline of genetics which deals with the genetic basis of the immune response (immunity)" according to MeSH. [1]

Genetics (based on Greek γενεά geneá "descent" and γένεσις génesis "origin") [2] is the science researching the transfer of characteristics from one generation to the next. The genes of an organism (strands of DNA), and the transfer of genes from the parent to the child generation within the scope of possible variations, are the basis of genetics. Immunology deals with the biological and biochemical basis for the body's defense against germs (such as bacteria, viruses, and fungi), as well as against foreign agents such as biological toxins and environmental pollutants, and with failures and malfunctions of these defense mechanisms. Apart from these external effects on the organism, there are also defense reactions regarding the body's own cells, e.g. the bodily reactions to cancer and the lacking reaction of a body to healthy cells in immune-mediated disease. Hence, immunology is a sub-category of biology. Its origin is usually attributed to Edward Jenner, who discovered in 1796 that cowpox, or vaccinia, induced protection against human smallpox. The term immunogenetics comprises all processes of an organism which are, on the one hand, controlled and influenced by the genes of the organism and are, on the other hand, significant with regard to the immunological defense reactions of the organism.

The history of immunology and the medical study of the immune system dates back to the 19th century. The first Nobel Prize in the field of immunogenetics was awarded to Baruj Benacerraf, Jean Dausset and George Davis Snell in 1980 for discovering genetically determined cellular surface structures which control immunological reactions. [3] Since 1972, [4] numerous H&I (histocompatibility and immunogenetics) organizations have been founded, specializing in research on a large number of different questions in immunogenetics. Both the acceleration of, and the decreasing costs for, gene sequencing have resulted in more intensive research by both academic and commercial working groups. Current research topics particularly deal with forecasts on the course of diseases and therapy recommendations based on genetic dispositions, and with how these dispositions can be affected by agents (gene therapy). A special focus is often laid on the forecasting and therapy of genetically based autoimmune diseases, which include all diseases caused by an extreme reaction of the immune system against the body's own tissue. By mistake, the immune system recognizes the body's own tissue as a foreign object which is to be fought. This can result in serious inflammatory reactions which may permanently damage the respective organs.

Autoimmune diseases whose onset and/or course can be traced in the individual genome of the organism include multiple sclerosis, type 1 diabetes, rheumatoid arthritis and Crohn's disease. As for multiple sclerosis, an article in the journal Nature dated May 2010 [5] showed that this autoimmune disease is not caused by a genetic variation, but that its course and treatability are considerably influenced by genetic dispositions. This research was based on analyzing three monozygotic (identical) pairs of twins, of which one twin had contracted multiple sclerosis whereas the other had not.
https://en.wikipedia.org/wiki/Immunogenetics
Immunogenic cell death is any type of cell death eliciting an immune response. Both accidental cell death and regulated cell death can result in an immune response. Immunogenic cell death contrasts with forms of cell death (apoptosis, autophagy or others) that do not elicit any response or even mediate immune tolerance. The name 'immunogenic cell death' is also used for one specific type of regulated cell death that initiates an immune response after stress to the endoplasmic reticulum. Immunogenic cell death types are divided according to the molecular mechanisms leading up to, during and following the death event. The immunogenicity of a specific cell death is determined by the antigens and adjuvants released during the process. [1]

Accidental cell death is the result of physical, chemical or mechanical damage to a cell which exceeds its repair capacity. It is an uncontrollable process leading to loss of membrane integrity. The result is the spilling of intracellular components, which may mediate an immune response. [2]

ICD or immunogenic apoptosis is a form of cell death resulting in a regulated activation of the immune response. This cell death is characterized by apoptotic morphology [3] and maintained membrane integrity. Endoplasmic reticulum (ER) stress, together with high production of reactive oxygen species (ROS), is generally recognised as a causative agent of ICD. Two groups of ICD inducers are recognised. Type I inducers cause stress to the ER only as collateral damage, mainly targeting DNA, the chromatin maintenance apparatus or membrane components. Type II inducers target the ER specifically. [3] ICD is induced by some cytostatic agents such as anthracyclines, [4] oxaliplatin and bortezomib, and by radiotherapy and photodynamic therapy (PDT). [5] Some viruses can be listed among the biological causes of ICD. [6] Just as immunogenic death of infected cells induces an immune response to the infectious agent, immunogenic death of cancer cells can induce an effective antitumor immune response through activation of dendritic cells (DCs) and consequent activation of a specific T cell response. [7] [6] This effect is used in antitumor therapy.

ICD is characterized by the secretion of damage-associated molecular patterns (DAMPs). Three particularly important DAMPs are exposed on the cell surface during ICD. Calreticulin (CRT), one of the DAMP molecules, normally resides in the lumen of the endoplasmic reticulum and is translocated after the induction of immunogenic death to the surface of the dying cell, where it functions as an "eat me" signal for professional phagocytes. Other important surface-exposed DAMPs are the heat-shock proteins (HSPs) HSP70 and HSP90, which under stress conditions also translocate to the plasma membrane. On the cell surface they have an immunostimulatory effect, based on their interaction with a number of antigen-presenting cell (APC) surface receptors such as CD91 and CD40, and they also facilitate cross-presentation of antigens derived from tumour cells on MHC class I molecules, which then leads to a CD8+ T cell response. Other important DAMPs characteristic of ICD are secreted HMGB1 and ATP. [2] HMGB1 is considered a marker of late ICD, and its release into the extracellular space seems to be required for the optimal presentation of antigens by dendritic cells. It binds to several pattern recognition receptors (PRRs), such as Toll-like receptors (TLR) 2 and 4, which are expressed on APCs. ATP released during immunogenic cell death functions as a "find-me" signal for phagocytes, inducing their attraction to the site of ICD. Binding of ATP to purinergic receptors on target cells also has an immunostimulatory effect through inflammasome activation. DNA and RNA molecules released during ICD activate TLR3 and cGAS responses, both in the dying cell and in phagocytes.

The concept of using ICD in antitumor therapy started taking shape with the identification of some of the inducers mentioned above, which have potential as antitumor vaccination strategies. The use of ICD inducers alone or in combination with other anticancer therapies (targeted therapies, immunotherapies [8]) has been effective in mouse models of cancer [9] and is being tested in the clinic. [10]

Another type of regulated cell death that induces an immune response is necroptosis. Necroptosis is characterized by necrotic morphology. [2] This type of cell death is induced by extracellular and intracellular microtraumas detected by death or damage receptors. For example, FAS, TNFR1 and pattern recognition receptors may initiate necroptosis. These activation inducers converge on receptor-interacting serine/threonine-protein kinase 3 (RIPK3) and mixed lineage kinase domain-like pseudokinase (MLKL). Sequential activation of these proteins leads to membrane permeabilization. [2] [1]

Pyroptosis is a distinct type of regulated cell death, exhibiting necrotic morphology and spilling of cellular contents. [2] This type of cell death is induced most commonly in response to microbial pathogen infection, such as infection with Salmonella, Francisella, or Legionella. Host factors, such as those produced during myocardial infarction, may also induce pyroptosis. [11] The cytosolic presence of bacterial metabolites or structures, termed pathogen-associated molecular patterns (PAMPs), initiates the pyroptotic response. Detection of such PAMPs by some members of the Nod-like receptor family (NLRs), by absent in melanoma 2 (AIM2) or by pyrin leads to the assembly of an inflammasome structure and caspase 1 activation. So far, the cytosolic PRRs known to induce inflammasome formation are NLRP3, NLRP1, NLRC4, AIM2 and pyrin. These proteins contain oligomerization NACHT domains and CARD domains, and some also contain similar pyrin (PYR) domains. Caspase 1, the central activator protease of pyroptosis, attaches to the inflammasome via the CARD domains or via a CARD/PYR-containing adaptor protein called apoptosis-associated speck-like protein (ASC). [12] Activation of caspase 1 (CASP1) is central to pyroptosis; once activated, it mediates the proteolytic activation of other caspases. In humans the other involved caspases are CASP3, CASP4 and CASP5; in mice, CASP3 and CASP11. [2] Precursors of IL-1β and IL-18 are among the most important CASP1 substrates, and the secretion of the cleavage products induces the potent immune response to pyroptosis. The release of IL-1β and IL-18 occurs before any morphological changes in the cell. [13] The cell dies by spilling its contents, distributing further immunogenic molecules; among these, HMGB1, S100 proteins and IL-1α are important DAMPs. [12] Pyroptosis shares some characteristics with apoptosis, an immunologically inert cell death. Primarily, both processes are caspase-dependent, although each utilizes specific caspases. Chromatin condensation and fragmentation occur during pyroptosis, but the mechanisms and outcome differ from those of apoptosis. In contrast to apoptosis, membrane integrity is not maintained in pyroptosis, [2] [13] while mitochondrial membrane integrity is maintained and no spilling of cytochrome c occurs. [11]

Ferroptosis is also a regulated form of cell death. The process is initiated in response to oxidative stress and lipid peroxidation, and is dependent on iron availability. Necrotic morphology is typical of ferroptotic cells. Peroxidation of lipids is catalyzed mainly by lipoxygenases, but also by cyclooxygenases. Lipid peroxidation can be inhibited in the cell by glutathione peroxidase 4 (GPX4), making the balance of these enzymes a central regulator of ferroptosis. Chelation of iron also inhibits ferroptosis, possibly by depleting iron from lipoxygenases. Spilling of cytoplasmic components during cell death mediates the immunogenicity of this process. [2]

Mitochondrial permeability transition (MPT)-driven cell death is also a form of regulated cell death and manifests a necrotic morphology. Oxidative stress or Ca2+ imbalance are important causes of MPT-driven necrosis. The main event in this process is the loss of inner mitochondrial membrane (IMM) impermeability. The precise mechanisms leading to the formation of permeability-transition pore complexes, which assemble between the inner and outer mitochondrial membranes, are still unknown. Peptidylprolyl isomerase F (CYPD) is the only protein known to be required for MPT-driven necrosis. The loss of IMM impermeability is followed by membrane potential dissipation and disintegration of both mitochondrial membranes. [2]

Parthanatos is also a regulated form of cell demise with necrotic morphology. It is induced under a variety of stress conditions, but most importantly as a result of long-term alkylating DNA damage, oxidative stress, hypoxia, hypoglycemia and an inflammatory environment. This cell death is initiated by DNA damage response components, mainly poly(ADP-ribose) polymerase 1 (PARP1). PARP1 hyperactivation leads to ATP depletion, redox and bioenergetic collapse, and the accumulation of poly(ADP-ribose) polymers and poly(ADP-ribosyl)ated proteins, which bind to apoptosis-inducing factor mitochondria-associated 1 (AIF). The outcome is membrane potential dissipation and mitochondrial outer membrane permeabilization. Chromatin condensation and fragmentation by AIF is characteristic of parthanatos. An interconnection of the parthanatic process with some members of the necroptotic apparatus has been proposed, as RIPK3 stimulates PARP1 activity. [2] This type of cell death has been linked to several pathologies, such as some cardiovascular and renal disorders, diabetes, cerebral ischemia, and neurodegeneration. [2]

Lysosome-dependent cell death is a type of regulated cell death that depends on the permeabilization of lysosomal membranes. The morphology of cells dying by this death is variable, with apoptotic, necrotic or intermediate morphologies observed. It is a form of intracellular pathogen defense, but it is also connected with several pathophysiological processes, such as tissue remodeling and inflammation. Lysosome permeabilization initiates the cell death process, sometimes along with mitochondrial membrane permeabilization. [2]

NETotic cell death is a specific type of cell death typical of neutrophils, but also observed in basophils and eosinophils. The process is characterized by the extrusion of chromatin fibers bound into neutrophil extracellular traps (NETs). NET formation is generally induced in response to microbial infections, but pathologically it also occurs under the sterile conditions of some inflammatory diseases. ROS inside the cell trigger the release of elastase (ELANE) and myeloperoxidase (MPO), their translocation to the nucleus, and cytoskeleton remodeling. Some interaction with the necroptotic apparatus (RIPK and MLKL) has been suggested. [2]
https://en.wikipedia.org/wiki/Immunogenic_cell_death
Immunogenicity is the ability of a foreign substance, such as an antigen, to provoke an immune response in the body of a human or other animal. It may be wanted or unwanted. A challenge in biotherapy is predicting the immunogenic potential of novel protein therapeutics. [3] For example, immunogenicity data from high-income countries are not always transferable to low-income and middle-income countries. [4] Another challenge is considering how the immunogenicity of vaccines changes with age. [5] [6] Therefore, as stated by the World Health Organization, immunogenicity should be investigated in the target population, since animal testing and in vitro models cannot precisely predict the immune response in humans. [7]

Antigenicity is the capacity of a chemical structure (either an antigen or a hapten) to bind specifically with certain products of adaptive immunity: T cell receptors or antibodies (a.k.a. B cell receptors). Antigenicity was more commonly used in the past to refer to what is now known as immunogenicity, and the two terms are still often used interchangeably. Strictly speaking, however, immunogenicity refers to the ability of an antigen to induce an adaptive immune response. Thus an antigen might bind specifically to a T or B cell receptor without inducing an adaptive immune response. If the antigen does induce a response, it is an 'immunogenic antigen', referred to as an immunogen. Many lipids and nucleic acids are relatively small molecules and/or have non-immunogenic properties. Consequently, they may require conjugation with an epitope such as a protein or polysaccharide to increase immunogenic potency, so that they can evoke an immune response. [8] Immunogenicity is influenced by multiple characteristics of an antigen.

T cell epitope content is one of the factors that contributes to antigenicity. Likewise, T cell epitopes can cause unwanted immunogenicity, including the development of anti-drug antibodies (ADAs). A key determinant of T cell epitope immunogenicity is the binding strength of T cell epitopes to major histocompatibility complex (MHC, or HLA) molecules. Epitopes with higher binding affinities are more likely to be displayed on the surface of a cell. Because a T cell receptor recognizes a specific epitope, only certain T cells are able to respond to a certain peptide bound to MHC on a cell surface. [11] When protein drug therapeutics (such as enzymes, monoclonal antibodies or replacement proteins) or vaccines are administered, antigen-presenting cells (APCs), such as B cells or dendritic cells, present these substances as peptides, which T cells may recognize. This may result in unwanted immunogenicity, including ADAs and autoimmune diseases, such as the autoimmune thrombocytopenia (ITP) seen following exposure to recombinant thrombopoietin, and the pure red cell aplasia associated with a particular formulation of erythropoietin (Eprex). [11]

Therapeutic monoclonal antibodies (mAbs) are used for several diseases, including cancer and rheumatoid arthritis. [12] Early mAbs were of murine origin, and their high immunogenicity limited efficacy and was associated with severe infusion reactions. Although the exact mechanism is unclear, it is suspected that such mAbs induce infusion reactions by eliciting antibody-antigen interactions, for example increased formation of immunoglobulin E (IgE) antibodies, which may bind to mast cells and cause degranulation, producing allergy-like symptoms as well as the release of additional cytokines. [13] Several innovations in genetic engineering have resulted in a decrease in the immunogenicity of mAbs (also known as deimmunization). Genetic engineering has led to the generation of humanized and chimeric antibodies, created by exchanging the murine constant and complementary regions of the immunoglobulin chains with their human counterparts. [14] [15] Although this has reduced the sometimes extreme immunogenicity associated with murine mAbs, the anticipation that all fully human mAbs would be free of unwanted immunogenic properties remains unfulfilled. [16] [17]

T cell epitope content, one of the factors contributing to the risk of immunogenicity, can now be measured relatively accurately using in silico tools. Immunoinformatics algorithms for identifying T-cell epitopes are being applied to triage protein therapeutics into higher-risk and lower-risk categories, that is, to assess whether an immunotherapy or vaccine is likely to cause unwanted immunogenicity. [18] One approach is to parse protein sequences into overlapping nonamer (nine amino acid) peptide frames, each of which is then evaluated for binding potential to each of six common class I HLA alleles that "cover" the genetic backgrounds of most humans worldwide. [11] By calculating the density of high-scoring frames within a protein, it is possible to estimate the protein's overall "immunogenicity score". In addition, sub-regions of densely packed high-scoring frames, or "clusters" of potential immunogenicity, can be identified, and cluster scores can be calculated and compiled. Using this approach, the clinical immunogenicity of a novel protein therapeutic can be estimated. Consequently, a number of biotech companies have integrated in silico immunogenicity screening into their pre-clinical process as they develop new protein drugs.
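The nonamer-frame screening approach described above can be sketched in a few lines of Python. The binding predictor below is a toy stand-in (real pipelines use trained MHC-binding models), and the allele panel, threshold and scoring are illustrative assumptions rather than any published tool's parameters.

```python
# Sketch: slide a 9-residue window along a protein sequence, score each
# frame against a panel of HLA alleles, and summarize the density of
# high-scoring frames. All parameters here are illustrative.
from typing import Callable

HLA_PANEL = ["A*01:01", "A*02:01", "A*03:01", "A*24:02", "B*07:02", "B*44:03"]  # example panel

def nonamer_frames(sequence: str) -> list[str]:
    """All overlapping 9-mer frames of a protein sequence."""
    return [sequence[i:i + 9] for i in range(len(sequence) - 8)]

def immunogenicity_score(sequence: str,
                         predict_binding: Callable[[str, str], float],
                         threshold: float = 0.75) -> float:
    """Density of (frame, allele) pairs predicted to bind above threshold."""
    frames = nonamer_frames(sequence)
    hits = sum(1 for f in frames for allele in HLA_PANEL
               if predict_binding(f, allele) >= threshold)
    return hits / (len(frames) * len(HLA_PANEL))

# Toy predictor standing in for a real trained MHC-binding model
toy = lambda peptide, allele: (hash((peptide, allele)) % 100) / 100
print(f"score: {immunogenicity_score('MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ', toy):.3f}")
```

Cluster scores, as described above, would follow the same pattern but aggregate hits over contiguous sub-regions rather than over the whole sequence.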
https://en.wikipedia.org/wiki/Immunogenicity
Immunoglobulin class switching, also known as isotype switching, isotypic commutation or class-switch recombination (CSR), is a biological mechanism that changes a B cell's production of immunoglobulin from one type to another, such as from the isotype IgM to the isotype IgG. [1] During this process, the constant-region portion of the antibody heavy chain is changed, but the variable region of the heavy chain stays the same (the terms variable and constant refer to changes or lack thereof between antibodies that target different epitopes). Since the variable region does not change, class switching does not affect antigen specificity. Instead, the antibody retains affinity for the same antigens but can interact with different effector molecules.

Class switching occurs after activation of a mature B cell via its membrane-bound antibody molecule (or B cell receptor) to generate the different classes of antibody, all with the same variable domains as the original antibody generated in the immature B cell during the process of V(D)J recombination, but possessing distinct constant domains in their heavy chains. [2] Naïve mature B cells produce both IgM and IgD, which are encoded by the first two heavy chain constant-region segments in the immunoglobulin locus. After activation by antigen, these B cells proliferate. If these activated B cells encounter specific signaling molecules via their CD40 and cytokine receptors (both modulated by T helper cells), they undergo antibody class switching to produce IgG, IgA or IgE antibodies. During class switching, the constant region of the immunoglobulin heavy chain changes but the variable regions do not, so antigenic specificity remains the same. This allows different daughter cells from the same activated B cell to produce antibodies of different isotypes or subtypes (e.g. IgG1, IgG2). [3] In humans, the functional heavy chain constant-region genes lie along the locus in the order μ, δ, γ3, γ1, α1, γ2, γ4, ε, α2.

Class switching occurs by a mechanism called class switch recombination (CSR). Class switch recombination is a biological mechanism that allows the class of antibody produced by an activated B cell to change during a process known as isotype or class switching. During CSR, portions of the antibody heavy chain locus are removed from the chromosome, and the gene segments surrounding the deleted portion are rejoined to retain a functional antibody gene that produces antibody of a different isotype. Double-stranded breaks are generated in DNA at conserved nucleotide motifs, called switch (S) regions, which lie upstream of the gene segments that encode the constant regions of the antibody heavy chains; these occur adjacent to all heavy chain constant region genes with the exception of the δ-chain. DNA is nicked and broken at two selected S-regions by the activity of a series of enzymes, including activation-induced (cytidine) deaminase (AID), uracil DNA glycosylase, and apyrimidinic/apurinic (AP) endonucleases. [5] [6] AID begins the process of class switching by deaminating (removing an amino group from) cytosines within the S regions, converting the original C bases into deoxyuridine and allowing the uracil glycosylase to excise the base. This allows AP-endonucleases to cut the newly formed abasic sites, creating initial single-strand breaks (SSBs) that spontaneously form double-strand breaks (DSBs). [7] The intervening DNA between the S-regions is subsequently deleted from the chromosome, removing the unwanted μ or δ heavy chain constant region exons and allowing substitution of a γ, α or ε constant region gene segment. The free ends of the DNA are rejoined by a process called non-homologous end joining (NHEJ) to link the variable domain exon to the desired downstream constant domain exon of the antibody heavy chain. [8] In the absence of non-homologous end joining, free ends of DNA may be rejoined by an alternative pathway biased toward microhomology joins. [9] With the exception of the μ and δ genes, only one antibody class is expressed by a B cell at any point in time. While class switch recombination is mostly a deletional process, rearranging a chromosome in "cis", it can also occur (in 10 to 20% of cases, depending upon the Ig class) as an inter-chromosomal translocation mixing immunoglobulin heavy chain genes from both alleles. [10] [11]

T cell cytokines modulate class switching in mice (Table 1) and humans (Table 2). [12] [13] These cytokines may have a suppressive effect on the production of IgM. In addition to the highly repetitive structure of the target S regions, the process of class switching requires the S regions to first be transcribed and spliced out of the immunoglobulin heavy chain transcripts (they lie within introns). Chromatin remodeling, accessibility to transcription and to AID, and synapsis of the broken S regions are under the control of a large super-enhancer located downstream of the more distal Cα gene, the 3' regulatory region (3'RR). [17] On some occasions, the 3'RR super-enhancer can itself be targeted by AID and undergo DNA breaks and junction with Sμ, which deletes the Ig heavy chain locus and defines locus suicide recombination (LSR). [18]
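The deletional logic of CSR can be illustrated with a small Python sketch that models the heavy chain locus as an ordered list and a switch event as a join between two S regions. The listing below is a simplified subset of the locus (several constant genes and the ψε pseudogene are omitted), so it is an illustration of the mechanism described above, not a complete map.

```python
# Sketch: class switch recombination as a deletion between two switch (S)
# regions. The VDJ exon is retained; intervening constant genes are lost.
# Simplified subset of the human IGH locus, for illustration only.
locus = ["VDJ",
         "S_mu", "C_mu", "C_delta",   # no S region precedes C_delta
         "S_gamma3", "C_gamma3",
         "S_gamma1", "C_gamma1",
         "S_alpha1", "C_alpha1",
         "S_epsilon", "C_epsilon"]

def class_switch(locus: list[str], target_s: str) -> list[str]:
    """Join S_mu to the target S region, deleting everything in between."""
    start = locus.index("S_mu")
    end = locus.index(target_s)
    return locus[:start + 1] + locus[end + 1:]

switched = class_switch(locus, "S_epsilon")
print(switched)  # ['VDJ', 'S_mu', 'C_epsilon'] -> the cell now expresses IgE
```

Because the deletion is irreversible, the model also reflects why a cell that has switched to a downstream isotype cannot switch back to an upstream one.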
https://en.wikipedia.org/wiki/Immunoglobulin_class_switching
The immunoglobulin superfamily (IgSF) is a large protein superfamily of cell surface and soluble proteins that are involved in the recognition, binding, or adhesion processes of cells. Molecules are categorized as members of this superfamily based on shared structural features with immunoglobulins (also known as antibodies); they all possess a domain known as an immunoglobulin domain or fold. Members of the IgSF include cell surface antigen receptors, co-receptors and co-stimulatory molecules of the immune system, molecules involved in antigen presentation to lymphocytes, cell adhesion molecules, certain cytokine receptors and intracellular muscle proteins. They are commonly associated with roles in the immune system. However, the sperm-specific protein IZUMO1, a member of the immunoglobulin superfamily, has also been identified as the only sperm membrane protein essential for sperm-egg fusion.

Proteins of the IgSF possess a structural domain known as an immunoglobulin (Ig) domain. Ig domains are named after the immunoglobulin molecules. They contain about 70-110 amino acids and are categorized according to their size and function. [2] Ig domains possess a characteristic Ig-fold, which has a sandwich-like structure formed by two sheets of antiparallel beta strands. Interactions between hydrophobic amino acids on the inner side of the sandwich, together with highly conserved disulfide bonds formed between cysteine residues in the B and F strands, stabilize the Ig-fold. [citation needed] Ig-like domains can be classified as IgV, IgC1, IgC2, or IgI. [3] Most Ig domains are either variable (IgV) or constant (IgC). The Ig domain has been reported to be the most populous family of proteins in the human genome, with 765 members identified. [5] Members of the family can be found even in animals with a simple physiological structure, such as poriferan sponges. They have also been found in bacteria, where their presence is likely due to divergence from a shared ancestor of eukaryotic immunoglobulin superfamily domains. [6]

Similar to the situation with T cells, B cells have cell surface co-receptors and accessory molecules that assist with cell activation by the B cell receptor (BCR)/immunoglobulin. Two chains, CD79a and CD79b, are used for signaling; both possess a single Ig domain.
https://en.wikipedia.org/wiki/Immunoglobulin_superfamily
Immunogold labeling or immunogold staining (IGS) is a staining technique used in electron microscopy. [2] This staining technique is an equivalent of the indirect immunofluorescence technique for visible light. Colloidal gold particles are most often attached to secondary antibodies, which are in turn attached to primary antibodies designed to bind a specific antigen or other cell component. Gold is used for its high electron density, which increases electron scatter to give high-contrast 'dark spots'. [3] First used in 1971, immunogold labeling has been applied to both transmission electron microscopy and scanning electron microscopy, as well as brightfield microscopy. The labeling technique can be adapted to distinguish multiple objects by using differently sized gold particles. Immunogold labeling can introduce artifacts, as the gold particles reside some distance from the labelled object, and very thin sectioning is required during sample preparation. [3]

Immunogold labeling was first used in 1971 by Faulk and Taylor to identify Salmonella antigens. [2] [4] It was first applied in transmission electron microscopy (TEM) and was especially useful in highlighting proteins found at low densities, such as some cell surface antigens. [5] As the resolution of scanning electron microscopy (SEM) increased, so too did the need for nanoparticle-sized labels such as immunogold. In 1975, Horisberger and coworkers successfully visualised gold nanoparticles with a diameter of less than 30 nm, [6] and this soon became an established SEM technique. [5]

First, a thin section of the sample is cut, often using a microtome. [7] Various other stages of sample preparation may then take place. The prepared sample is then incubated with a specific antibody designed to bind the molecule of interest. [3] Next, a secondary antibody with gold particles attached is added, and it binds to the primary antibody. Gold can also be attached to protein A or protein G instead of a secondary antibody, as these proteins bind mammalian IgG Fc regions in a non-specific way. [6] The electron-dense gold particle can now be seen under an electron microscope as a black dot, indirectly labeling the molecule of interest. [3]

Immunogold labeling can be used to visualize more than one target simultaneously. This can be achieved in electron microscopy by using two different-sized gold particles; [8] an extension of this method used three different sizes of gold particles to track the localisation of regulatory peptides. [9] A more complex method of multi-site labeling involves labeling opposite sides of an antigenic site separately; the immunogold particles attached to both sides can then be viewed simultaneously. [10] (A sketch of a size-based multi-labeling read-out appears at the end of this article.)

Although immunogold labeling is typically used for transmission electron microscopy, when the gold is 'silver-enhanced' it can be seen using brightfield microscopy. [11] The silver enhancement increases the particle size, also making scanning electron microscopy possible. To produce the silver-enhanced gold particles, colloidal gold particles are placed in an acidic enhancing solution containing silver ions. The gold particles then act as nucleation sites, and silver is deposited onto the particles. An example of the application of silver-enhanced immunogold labeling (IGSS) was the identification of the pathogen Erwinia amylovora. [11]
[ 11 ] An inherent limitation of the immunogold technique is that the gold particle sits around 15-30 nm away from the site to which the primary antibody is bound [ 5 ] (when using a primary plus secondary antibody labeling strategy). The precise location of the targeted molecule therefore cannot be accurately determined. Gold particles can be created with a diameter of 1 nm (or lower), but at these sizes another limitation emerges: the gold label becomes hard to distinguish from tissue structure. [ 2 ] [ 5 ] Thin sections are required for immunogold labeling, and these can produce misleading images; a thin slice of a cell component may not give an accurate view of its three-dimensional structure . For example, a microtubule may appear as a 'spike' depending on the plane in which the sectioning occurred. To overcome this limitation, serial sections can be taken and then compiled into a three-dimensional image. [ 3 ] A further limitation is that antibodies and gold particles cannot penetrate the resin used to embed samples for imaging. Thus, only accessible molecules can be targeted and visualized. Labeling prior to embedding the sample can reduce the negative impact of this limitation. [ 3 ]
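Because gold labels of different diameters can be distinguished in the same micrograph, particle detection is often automated. The following is a minimal sketch, assuming a grayscale digitized micrograph, of how differently sized immunogold particles might be detected and size-classified with Laplacian-of-Gaussian blob detection from scikit-image; the file name, pixel calibration and size cut-off are illustrative assumptions, not part of any published protocol.

import numpy as np
from skimage import io
from skimage.feature import blob_log

# Gold particles are electron-dense and appear dark; invert so they are bright.
image = io.imread("micrograph.tif", as_gray=True)  # hypothetical input file
inverted = image.max() - image

# Detect blobs over a range of scales; blob_log returns (row, col, sigma)
# and the blob radius is approximately sigma * sqrt(2).
blobs = blob_log(inverted, min_sigma=1, max_sigma=12, num_sigma=12, threshold=0.1)
radii_px = blobs[:, 2] * np.sqrt(2.0)

NM_PER_PIXEL = 2.0  # assumed calibration of the micrograph
diameters_nm = 2.0 * radii_px * NM_PER_PIXEL
small = diameters_nm < 18.0  # assumed cut-off between e.g. 10 nm and 25 nm labels
print(f"{int(small.sum())} small and {int((~small).sum())} large particles detected")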
https://en.wikipedia.org/wiki/Immunogold_labelling
Immunohematology is a branch of hematology and transfusion medicine which studies antigen - antibody reactions and analogous phenomena as they relate to the pathogenesis and clinical manifestations of blood disorders . A person employed in this field is referred to as an immunohematologist or colloquially as a blood banker. Their day-to-day duties include blood typing , cross-matching and antibody identification. [ 1 ] [ citation needed ] Immunohematology and Transfusion Medicine is a medical postgraduate specialty in many countries. The specialist Immunohematology and Transfusion Physician provides expert opinion for difficult transfusions, massive transfusions, incompatibility work-ups, therapeutic plasmapheresis , cellular therapy , irradiated blood therapy, leukoreduced and washed blood products, stem cell procedures, platelet-rich plasma therapies, HLA and cord blood banking. Other research avenues are in the fields of stem cell research, regenerative medicine and cellular therapy. [ 1 ] Immunohematology is one of the specialized branches of medical science. It deals with the concepts and clinical techniques related to modern transfusion therapy. Efforts to save human lives by transfusing blood have been recorded for several centuries. The era of blood transfusion, however, really began when William Harvey described the circulation of blood in 1616.
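The compatibility logic behind routine ABO/RhD blood typing and cross-matching can be sketched in a few lines of code. This is a toy illustration of the textbook rules for red cell transfusion only; real cross-matching also screens serologically for non-ABO antibodies, and the function below is a hypothetical helper, not a clinical tool.

# Red cell ABO compatibility: the recipient must lack antibodies against
# the donor's A/B antigens; group O red cells carry neither antigen.
RBC_COMPATIBLE_ABO = {
    "O": {"O"},
    "A": {"A", "O"},
    "B": {"B", "O"},
    "AB": {"A", "B", "AB", "O"},  # universal recipient for red cells
}

def rbc_compatible(recipient: str, donor: str) -> bool:
    """True if donor red cells are ABO/RhD-compatible with a recipient like 'A+'."""
    r_abo, r_rh = recipient[:-1], recipient[-1]
    d_abo, d_rh = donor[:-1], donor[-1]
    abo_ok = d_abo in RBC_COMPATIBLE_ABO[r_abo]
    # RhD-negative recipients should receive RhD-negative red cells.
    rh_ok = not (r_rh == "-" and d_rh == "+")
    return abo_ok and rh_ok

assert rbc_compatible("AB+", "O-")       # O- is the universal red cell donor
assert not rbc_compatible("O-", "A-")    # group O recipients have anti-A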
https://en.wikipedia.org/wiki/Immunohaematology
Immunohistochemistry is a form of immunostaining . It involves the process of selectively identifying antigens in cells and tissue by exploiting the principle of antibodies binding specifically to antigens in biological tissues . Albert Hewett Coons , Ernest Berliner , Norman Jones and Hugh J Creech were the first to develop immunofluorescence in 1941. This led to the later development of immunohistochemistry. [ 2 ] [ 3 ] Immunohistochemical staining is widely used in the diagnosis of abnormal cells such as those found in cancerous tumors. Some cancer cells express certain tumor antigens, which makes it possible to detect them. Immunohistochemistry is also widely used in basic research to understand the distribution and localization of biomarkers and differentially expressed proteins in different parts of a biological tissue. [ 4 ] Immunohistochemistry can be performed on tissue that has been fixed and embedded in paraffin , but also on cryopreserved (frozen) tissue. The steps used to prepare the tissue for immunohistochemistry depend on how it was preserved, but the general method includes proper fixation, antigen retrieval, incubation with a primary antibody, and then incubation with a secondary antibody. [ 5 ] [ 6 ] Fixation of the tissue is important to preserve the tissue and maintain cellular morphology. The fixation formula, the ratio of fixative to tissue and the time in the fixative all affect the result. The fixation solution (fixative) is often 10% neutral buffered formalin . Normal fixation time is 24 hours at room temperature. The ratio of fixative to tissue ranges from 1:1 to 1:20. After the tissue is fixed it can be embedded in paraffin wax. [ 5 ] [ 6 ] For frozen sections, fixation is usually performed after sectioning, unless new antibodies are going to be tested; acetone or formalin can then be used. [ 6 ] Sectioning of the tissue sample is done using a microtome. For paraffin-embedded tissue 4 μm is a normal thickness, and for frozen sections 4-6 μm. [ 6 ] The thickness of the sliced sections matters and is an important factor in immunohistochemistry. Comparing a section of brain tissue measuring 4 μm with a section measuring 7 μm, some of what is visible in the 7 μm section may be lacking in the 4 μm section. This shows the importance of detailed methods related to this methodology. [ 7 ] Paraffin-embedded tissues should be deparaffinized, to remove all the paraffin on and around the tissue sample, in xylene or a good substitute, followed by alcohol. [ 8 ] Antigen retrieval is required to make the epitopes accessible for immunohistochemical staining for most formalin-fixed tissue sections. The epitopes are the binding sites for the antibodies used to visualize the targeted antigen, and they may be masked due to the fixation. Fixation of the tissue may cause formation of methylene bridges or crosslinking of amino groups, so that the epitopes are no longer available. Antigen retrieval can restore the masked antigenicity, possibly by breaking down the crosslinks caused by fixation. [ 9 ] The most common way to perform antigen retrieval is by high-temperature heating while soaking the slides in a buffer solution. [ 10 ] This can be done in different ways, for example using a microwave oven, autoclaves, heating plates or water baths. For frozen sections, antigen retrieval is generally not necessary, but for frozen sections that have been fixed in acetone or formalin, antigen retrieval can improve the immunohistochemical signal.
[ 6 ] Non-specific binding of antibodies can cause background staining. Although antibodies bind to specific epitopes, they may also partially or weakly bind to sites on nonspecific proteins that are similar to the binding site on the target protein. By incubating the tissue with normal serum isolated from the species in which the secondary antibody was produced, the background staining can be reduced. It is also possible to use commercially available universal blocking buffers. Other common blocking buffers include normal serum, non-fat dry milk, BSA , or gelatin. [ 5 ] [ 6 ] Endogenous enzyme activity may also cause background staining, but it can be reduced if the tissue is treated with hydrogen peroxide. [ 5 ] After preparing the sample, the target can be visualized by using antibodies labeled with fluorescent compounds, metals or enzymes. There are direct and indirect methods for labeling the sample. [ 6 ] [ 11 ] The antibodies used for detection can be polyclonal or monoclonal. Polyclonal antibodies are made using animals such as guinea pigs, rabbits, mice, rats, or goats. The animal is injected with the antigen of interest, triggering an immune response, and the antibodies can then be isolated from the animal's whole serum. Polyclonal antibody production results in a mixture of different antibodies that recognize multiple epitopes. Monoclonal antibodies are made by injecting the animal with the antigen of interest and then isolating an antibody-producing B cell, typically from the spleen. The antibody-producing cell is then fused with a cancer cell line; the resulting cell line produces antibodies with specificity for a single epitope. [ 12 ] For immunohistochemical detection strategies, antibodies are classified as primary or secondary reagents. Primary antibodies are raised against an antigen of interest and are typically unconjugated (unlabeled). Secondary antibodies are raised against immunoglobulins of the primary antibody species. The secondary antibody is usually conjugated to a linker molecule, such as biotin, that then recruits reporter molecules, or the secondary antibody itself is directly bound to the reporter molecule. [ 11 ] The direct method is a one-step staining method in which a labeled antibody reacts directly with the antigen in tissue sections. While this technique uses only one antibody and is therefore simple and rapid, the sensitivity is lower than that of indirect approaches because of the limited signal amplification. [ 11 ] The indirect method involves an unlabeled primary antibody that binds to the target antigen in the tissue; a secondary antibody, which binds the primary antibody, is then added as a second layer. As mentioned, the secondary antibody must be raised against the IgG of the animal species in which the primary antibody was raised. This method is more sensitive than direct detection strategies because of the signal amplification produced by the binding of several secondary antibodies to each primary antibody. [ 11 ] The indirect method, aside from its greater sensitivity, also has the advantage that only a relatively small number of standard conjugated (labeled) secondary antibodies needs to be generated. For example, a labeled secondary antibody raised against rabbit IgG is useful with any primary antibody raised in rabbit.
This is particularly useful when a researcher is labeling more than one primary antibody, whether due to polyclonal selection producing an array of primary antibodies for a single antigen or when there is interest in multiple antigens. With the direct method, it would be necessary to label each primary antibody for every antigen of interest. [ 11 ] Reporter molecules vary based on the nature of the detection method, the most common being chromogenic and fluorescence detection. In chromogenic immunohistochemistry an antibody is conjugated to an enzyme, such as alkaline phosphatase or horseradish peroxidase, that can catalyze a color-producing reaction in the presence of a chromogenic substrate like diaminobenzidine. [ 5 ] The colored product can be analyzed with an ordinary light microscope. [ 13 ] In immunofluorescence the antibody is tagged with a fluorophore , such as fluorescein isothiocyanate, tetramethylrhodamine isothiocyanate, aminomethylcoumarin acetate or Cyanine5. Synthetic fluorophores such as the Alexa Fluor dyes are also commonly used. [ 13 ] [ 14 ] The fluorochromes can be visualized with a fluorescence or confocal microscope. [ 13 ] For chromogenic and fluorescent detection methods, densitometric analysis of the signal can provide semi- and fully quantitative data, respectively, to correlate the level of reporter signal with the level of protein expression or localization. [ 6 ] After immunohistochemical staining of the target antigen, another stain is often applied. The counterstain provides contrast that helps the primary stain stand out and makes it easier to examine the tissue morphology. It also helps with orientation and visualization of the tissue section. Hematoxylin is commonly used. [ 6 ] [ 15 ] In immunohistochemical techniques, there are several steps prior to the final staining of the tissue that can cause a variety of problems, including strong background staining, weak target antigen staining and the presence of artifacts. It is important that antibody quality and the immunohistochemistry techniques are optimized. [ 16 ] Endogenous biotin, reporter enzymes or primary/secondary antibody cross-reactivity are common causes of strong background staining. [ 11 ] [ 13 ] Weak or absent staining may be caused by inadequate fixation of the tissue or by low antigen levels. These aspects of immunohistochemistry tissue preparation and antibody staining must be systematically addressed to identify and overcome staining issues. [ 5 ] [ 6 ] Methods to eliminate background staining include dilution of the primary or secondary antibodies, changing the time or temperature of incubation, and using a different detection system or a different primary antibody. Quality control should, as a minimum, include a tissue known to express the antigen as a positive control, negative controls of tissue known not to express the antigen, and the test tissue probed in the same way with omission of the primary antibody (or better, absorption of the primary antibody). [ 5 ] [ 18 ] Immunohistochemistry is an excellent detection technique and has the tremendous advantage of being able to show exactly where a given protein is located within the tissue examined. It is also an effective way to examine the tissues. This has made it a widely used technique in neuroscience , enabling researchers to examine protein expression within specific brain structures.
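To make the densitometric read-out described above concrete, here is a minimal sketch, assuming a digitized brightfield image of a hematoxylin-counterstained, DAB-stained section, using the colour-deconvolution routine rgb2hed from scikit-image; the file name and the positivity threshold are illustrative assumptions, not a validated pipeline.

import numpy as np
from skimage import io
from skimage.color import rgb2hed

rgb = io.imread("ihc_section.png")[:, :, :3]  # hypothetical image; drop any alpha channel
hed = rgb2hed(rgb)  # separates hematoxylin, eosin and DAB contributions
dab = hed[:, :, 2]

# Semi-quantitative read-outs: mean DAB optical density and the fraction
# of pixels above an assumed positivity threshold.
THRESHOLD = 0.05
print(f"mean DAB OD: {dab.mean():.4f}")
print(f"DAB-positive area fraction: {np.mean(dab > THRESHOLD):.2%}")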
The major disadvantage of immunohistochemistry is that, unlike immunoblotting techniques where staining is checked against a molecular weight ladder, it is impossible to show in immunohistochemistry that the staining corresponds to the protein of interest. For this reason, primary antibodies must be well-validated in a Western blot or similar procedure. The technique is even more widely used in diagnostic surgical pathology for immunophenotyping tumors (e.g. immunostaining for E-cadherin to differentiate between ductal carcinoma in situ (stains positive) and lobular carcinoma in situ (does not stain positive) [ 19 ] ). More recently, immunohistochemical techniques have been useful in differential diagnoses of multiple forms of salivary gland, head, and neck carcinomas. [ 20 ] The diversity of immunohistochemistry markers used in diagnostic surgical pathology is substantial. Many clinical laboratories in tertiary hospitals will have menus of over 200 antibodies used as diagnostic, prognostic and predictive biomarkers. Examples of commonly used markers include the hormone receptors, KIT, HER2/neu and epidermal growth factor receptor discussed below. A variety of molecular pathways are altered in cancer and some of the alterations can be targeted in cancer therapy. Immunohistochemistry can be used to assess which tumors are likely to respond to therapy, by detecting the presence or elevated levels of the molecular target. [ citation needed ] Tumor biology allows for a number of potential intracellular targets. Many tumors are hormone dependent. The presence of hormone receptors can be used to determine if a tumor is potentially responsive to antihormonal therapy. One of the first such therapies was the antiestrogen tamoxifen , used to treat breast cancer. Such hormone receptors can be detected by immunohistochemistry. [ 23 ] Imatinib , an intracellular tyrosine kinase inhibitor, was developed to treat chronic myelogenous leukemia , a disease characterized by the formation of a specific abnormal tyrosine kinase. Imatinib has also proven effective in tumors that express other tyrosine kinases, most notably KIT. Most gastrointestinal stromal tumors express KIT, which can be detected by immunohistochemistry. [ 24 ] Many proteins shown by immunohistochemistry to be highly upregulated in pathological states are potential targets for therapies utilising monoclonal antibodies . Monoclonal antibodies, due to their size, are utilized against cell surface targets. Among the overexpressed targets are members of the EGFR family , transmembrane proteins with an extracellular receptor domain regulating an intracellular tyrosine kinase. [ 25 ] Of these, HER2/neu (also known as Erb-B2) was the first to be developed. The molecule is highly expressed in a variety of cancer cell types, most notably breast cancer. As such, antibodies against HER2/neu have been FDA-approved for clinical treatment of cancer under the drug name Herceptin . There are commercially available immunohistochemical tests: Dako HercepTest, [ 26 ] Leica Biosystems Oracle [ 27 ] and Ventana Pathway. [ 28 ] Similarly, epidermal growth factor receptor (HER-1) is overexpressed in a variety of cancers including head and neck and colon. Immunohistochemistry is used to determine which patients may benefit from therapeutic antibodies such as Erbitux (cetuximab). [ 29 ] Commercial systems to detect epidermal growth factor receptor by immunohistochemistry include the Dako pharmDx. [ 30 ] Immunohistochemistry can also be used for more general protein profiling, provided that antibodies validated for immunohistochemistry are available.
The Human Protein Atlas displays a map of protein expression in normal human organs and tissues. The combination of immunohistochemistry and tissue microarrays provides protein expression patterns in a large number of different tissue types. Immunohistochemistry is also used for protein profiling in the most common forms of human cancer. [ 31 ] [ 32 ]
https://en.wikipedia.org/wiki/Immunohistochemistry
The immunohistochemistry (IHC) test is a laboratory method that uses antibodies to detect prions (misshapen proteins thought to transmit bovine spongiform encephalopathy , BSE or mad cow disease) by exposing a brain sample to a stain that appears as a specific color under a microscope. The IHC test is used by USDA researchers in their BSE surveillance program because they consider it the gold standard, providing a high level of confidence about the results. However, IHC tests are expensive and time-consuming. More rapid and less expensive testing alternatives (“rapid tests”) have been used in some other countries, but until recently USDA has viewed them as less reliable because they can deliver more false positive and/or false negative results than the IHC. However, in June 2004 USDA embarked on a greatly expanded BSE testing program to test more than 200,000 cattle over a 12-18 month period (compared with 20,000 in each of 2002 and 2003). It is now using rapid test kits at regional laboratories to conduct initial screening; any samples that test “positive” for BSE (which USDA terms “inconclusive”) must be subjected to an IHC test for confirmation.
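The screen-then-confirm design follows directly from the arithmetic of rare diseases: when prevalence is very low, even a quite specific rapid test produces mostly false positives, so positives are worth re-testing with the slower gold-standard IHC. The sketch below works through Bayes' rule with purely illustrative numbers, not USDA figures.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(disease | positive test), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed rapid-test characteristics and a very rare disease (1 in 100,000).
ppv = positive_predictive_value(sensitivity=0.99, specificity=0.995, prevalence=1e-5)
print(f"PPV of a single rapid test: {ppv:.2%}")  # about 0.2%: most positives are false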
https://en.wikipedia.org/wiki/Immunohistochemistry_test
In general, immunoisolation is the process of protecting implanted material such as biopolymers, cells, or drug release carriers from an immune reaction . The most prominent means of accomplishing this is through the use of cell encapsulation . [ 1 ]
https://en.wikipedia.org/wiki/Immunoisolate
Immunolabeling is a biochemical process that enables the detection and localization of an antigen to a particular site within a cell, tissue, or organ. Antigens are organic molecules, usually proteins , capable of binding to an antibody . These antigens can be visualized using a combination of an antigen-specific antibody and a means of detection, called a tag, that is covalently linked to the antibody. [ 1 ] If the immunolabeling process is meant to reveal information about a cell or its substructures, the process is called immunocytochemistry . [ 2 ] Immunolabeling of larger structures is called immunohistochemistry . [ 3 ] There are two complex steps in the manufacture of antibodies for immunolabeling. The first is producing the antibody that binds specifically to the antigen of interest, and the second is fusing the tag to the antibody. Since it is impractical to fuse a tag to every conceivable antigen-specific antibody, most immunolabeling processes use an indirect method of detection. This indirect method employs a primary antibody that is antigen-specific and a secondary antibody fused to a tag that specifically binds the primary antibody. This indirect approach permits mass production of secondary antibodies that can be bought off the shelf. [ 4 ] Pursuant to this indirect method, the primary antibody is added to the test system. The primary antibody seeks out and binds to the target antigen. The tagged secondary antibody, designed to attach exclusively to the primary antibody, is subsequently added. Typical tags include a fluorescent compound, gold beads, a particular epitope tag, [ 5 ] or an enzyme that produces a colored compound. The association of the tags with the target via the antibodies provides for the identification and visualization of the antigen of interest in its native location in the tissue, such as the cell membrane , cytoplasm , or nuclear membrane . Under certain conditions the method can be adapted to provide quantitative information. [ 4 ] Immunolabeling can be used in pharmacology , molecular biology , biochemistry and any other field where it is important to know the precise location of an antibody-bindable molecule. [ 6 ] [ 7 ] [ 8 ] There are two methods of immunolabeling, the direct and the indirect method. In the direct method of immunolabeling, the primary antibody is conjugated directly to the tag. [ 9 ] The direct method is useful in minimizing cross-reaction , a measure of nonspecificity that is inherent in all antibodies and that is multiplied with each additional antibody used to detect an antigen. However, the direct method is far less practical than the indirect method, and is not commonly used in laboratories, since the primary antibodies must be covalently labeled, which requires an abundant supply of purified antibody. Also, the direct method is potentially far less sensitive than the indirect method. [ 10 ] Since several secondary antibodies are capable of binding to different parts, or domains, of a single primary antibody bound to the target antigen, there is more tagged antibody associated with each antigen. More tag per antigen results in more signal per antigen (a toy calculation of this amplification is sketched at the end of this article). [ 11 ] Different indirect methods can be employed to achieve high degrees of specificity and sensitivity. First, two-step protocols are often used to avoid cross-reaction between the immunolabeling of multiple primary and secondary antibody mixtures; here Fab fragments of secondary antibodies are frequently used.
Secondly, haptenylated primary antibodies can be used, where the secondary antibody recognizes the associated hapten . The hapten is covalently linked to the primary antibody by succinimidyl esters or conjugated IgG Fc -specific Fab fragments. Lastly, primary monoclonal antibodies of different Ig isotypes can be detected by specific secondary antibodies directed against the isotype of interest. [ 10 ] Overall, antibodies must bind to the antigens with high specificity and affinity. [ 12 ] The specificity of the binding refers to an antibody's capacity to bind one, and only one, target antigen. Scientists commonly use monoclonal antibodies and polyclonal antibodies, which are often raised against synthetic peptides. During the manufacture of these antibodies, antigen-specific antibodies are sequestered by attaching the antigenic peptide to an affinity column and allowing nonspecific antibody to simply pass through the column. This decreases the likelihood that the antibodies will bind to an unwanted epitope of the antigen not found on the initial peptide. Hence, the specificity of the antibody is established by its specific reaction with the protein or peptide used for immunization, using methods such as immunoblotting or immunoprecipitation . [ 13 ] In establishing the specificity of antibodies, the key factor is the type of synthetic peptides or purified proteins being used. The lower the specificity of the antibody, the greater the chance of visualizing something other than the target antigen. In the case of synthetic peptides, the advantage is that the amino acid sequence is easily accessible, but the peptides do not always resemble the 3-D structure or post-translational modifications found in the native form of the protein. Therefore, antibodies produced against a synthetic peptide may have problems recognizing the native 3-D protein. These types of antibodies would give poor results in immunoprecipitation or immunohistochemistry experiments, yet may be capable of binding to the denatured form of the protein during an immunoblotting run. Conversely, if the antibody works well for purified proteins in their native but not their denatured form, an immunoblot cannot be used as a standardized test to determine the specificity of the antibody binding, particularly in immunohistochemistry. [ 14 ] Light microscopy is the use of a light microscope , an instrument that uses visible light to view an enlarged specimen. In general, a compound light microscope is used, in which two lenses, the eyepiece and the objective, work together to generate the magnification of the specimen. [ 15 ] Light microscopy frequently uses immunolabeling to observe targeted tissues or cells. For instance, a study was conducted to view the morphology and the production of hormones in pituitary adenoma cell cultures via light microscopy and other electron microscopic methods. This type of microscopy confirmed that the primary adenoma cell cultures keep their physiological characteristics in vitro , which matched the histology inspection. Moreover, cell cultures of human pituitary adenomas were viewed by light microscopy and immunocytochemistry, where these cells were fixed and immunolabeled with a monoclonal mouse antibody against human GH and a polyclonal rabbit antibody against PRL.
This is an example of how immunolabeled cell cultures of pituitary adenoma cells, viewed by light microscopy and by other electron microscopy techniques, can assist with the proper diagnosis of tumors. [ 16 ] Electron microscopy (EM) is a focused area of science that uses the electron microscope as a tool for viewing tissues. [ 17 ] Electron microscopy offers magnification of up to 2 million times, whereas light microscopy only reaches about 1000-2000 times. [ 18 ] There are two types of electron microscopes, the transmission electron microscope and the scanning electron microscope . [ 17 ] Electron microscopy is a common method that uses the immunolabeling technique to view tagged tissues or cells. The electron microscope method follows many of the same concepts as immunolabeling for light microscopy, where a particular antibody recognizes the location of the antigen of interest, which is then viewed with the electron microscope. The advantage of electron microscopy over light microscopy is the ability to view the targeted areas at the subcellular level. Generally, an electron-dense heavy metal, which can deflect the incident electrons, is used for EM. Immunolabeling is typically confirmed using the light microscope to assure the presence of the antigen and then followed up with the electron microscope. [ 19 ] Immunolabeling and electron microscopy are often used to view chromosomes . A study was conducted to view possible improvements in the immunolabeling of chromosome structures, such as topoisomerase IIα and condensin, in dissected mitotic chromosomes. In particular, these investigators showed how UV irradiation of separated nuclei or chromosomes can assist in achieving high levels of specific immunolabeling, viewed by electron microscopy. [ 20 ] Transmission electron microscopy (TEM) uses a transmission electron microscope to form a two-dimensional image by shooting electrons through a thin piece of tissue. The brighter certain areas are on the image, the more electrons were able to move through the specimen. [ 17 ] Transmission electron microscopy has been used as a way to view immunolabeled tissues and cells. For instance, bacteria can be viewed by TEM when immunolabeling is applied. A study was conducted to examine the structures of CS3 and CS6 fimbriae in different Escherichia coli strains, which were detected by TEM followed by negative staining and immunolabeling. More specifically, immunolabeling of the fimbriae confirmed the existence of different surface antigens. [ 21 ] Scanning electron microscopy (SEM) uses a scanning electron microscope, which produces large images that are perceived as three-dimensional when, in fact, they are not. This type of microscope concentrates a beam of electrons on a very small area (2-3 nm) of the specimen in order to produce secondary electrons from the specimen. These secondary electrons are detected by a sensor, and the image of the specimen is generated over a certain time period. [ 17 ] Scanning electron microscopy is a frequently used immunolabeling technique. SEM is able to detect the surface of cellular components in high resolution. This immunolabeling technique is very similar to the immunofluorescence method, but a colloidal gold tag is used instead of a fluorophore. Overall, the concepts are very parallel in that an unconjugated primary antibody is used and sequentially followed by a tagged secondary antibody that works against the primary antibody.
[ 22 ] Sometimes SEM in conjunction with gold particle immunolabeling is troublesome with regard to resolving the particles and charging under the electron beam; however, this resolution setback has been addressed by improvements in SEM instrumentation, notably backscattered electron imaging. [ 23 ] This is because electron backscattered diffraction patterns provide a clean surface of the sample to interact with the primary electron beam. [ 24 ] Immunolabeling with gold particles, also known as immunogold staining , is used regularly with scanning electron microscopy and transmission electron microscopy to successfully identify the areas within cells and tissues where antigens are located. [ 23 ] The gold particle labeling technique was first published by Faulk, W. and Taylor, G., who tagged gold particles to anti-Salmonella rabbit gamma globulins in one step in order to identify the location of Salmonella antigens. [ 23 ] [ 25 ] Studies have shown that the size of the gold particle must be enlarged (>40 nm) to view the cells at low magnification, but gold particles that are too large can decrease the efficiency of binding of the gold tag. Scientists have therefore concluded that smaller gold particles (1-5 nm) should be used and then enlarged and enhanced with silver. Although osmium tetroxide staining can scratch the silver, gold particle enhancement was found not to be susceptible to such scratching; therefore, many cell adhesion studies on different substrates can use the immunogold labeling mechanism via the enhancement of the gold particles. [ 26 ] Research has been conducted to test the compatibility of immunolabeling with fingerprints. Sometimes, fingerprints are not clear enough for the ridge pattern to be recognized. Immunolabeling may be a way for forensic personnel to narrow down who left the print. Researchers conducted a study which tested the compatibility of immunolabeling with many development techniques for fingerprints. They found that indanedione-zinc (IND-ZnCl), IND-ZnCl followed by ninhydrin spraying (IND-NIN), physical developer (PD), cyanoacrylate fuming (CA), cyanoacrylate followed by basic yellow staining (CA-BY), lumicyanoacrylate fuming (Lumi-CA) and polycyanoacrylate fuming (Poly-CA) were all compatible with immunolabeling. [ 27 ] Immunolabeling can not only extract donor-profiling information from fingerprints but can also enhance the quality of the fingerprints, both of which would be beneficial in a forensic case.
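As promised in the direct-versus-indirect comparison above, a toy calculation makes the amplification argument concrete; the binding numbers are illustrative assumptions, since the actual stoichiometry varies with the antibodies and tags used.

TAGS_PER_ANTIBODY = 3  # assumed number of tag molecules conjugated per antibody

def direct_signal(antigens: int) -> int:
    # Direct method: one tagged primary antibody binds each antigen.
    return antigens * TAGS_PER_ANTIBODY

def indirect_signal(antigens: int, secondaries_per_primary: int = 4) -> int:
    # Indirect method: an untagged primary binds each antigen, and several
    # tagged secondary antibodies bind domains of each primary.
    return antigens * secondaries_per_primary * TAGS_PER_ANTIBODY

print(direct_signal(1000))    # 3000 tag molecules
print(indirect_signal(1000))  # 12000 tag molecules: fourfold amplification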
https://en.wikipedia.org/wiki/Immunolabeling
In immunology , activation is the transition of leucocytes and other cell types involved in the immune system from a resting state to an active one; deactivation is the transition in the reverse direction. [ 1 ] This balance is tightly regulated, since too low a degree of activation causes susceptibility to infections , while too high a degree of activation causes autoimmune diseases . Activation and deactivation result from a variety of factors, including cytokines , soluble receptors , arachidonic acid metabolites, steroids , receptor antagonists , adhesion molecules , bacterial products and viral products. [ 1 ]
https://en.wikipedia.org/wiki/Immunologic_activation
In immunology , an adjuvant is a substance that increases or modulates the immune response to a vaccine . [ 1 ] The word "adjuvant" comes from the Latin word adiuvare , meaning to help or aid. "An immunologic adjuvant is defined as any substance that acts to accelerate, prolong, or enhance antigen-specific immune responses when used in combination with specific vaccine antigens ." [ 2 ] In the early days of vaccine manufacture, significant variations in the efficacy of different batches of the same vaccine were correctly assumed to be caused by contamination of the reaction vessels. However, it was soon found that more scrupulous cleaning actually seemed to reduce the effectiveness of the vaccines, and some contaminants actually enhanced the immune response. There are many known adjuvants in widespread use, including potassium alum , various plant- and animal-derived oils and virosomes . [ 3 ] Adjuvants in immunology are often used to modify or augment the effects of a vaccine by stimulating the immune system to respond to the vaccine more vigorously, and thus providing increased immunity to a particular disease . Adjuvants accomplish this task by mimicking specific sets of evolutionarily conserved molecules, so-called pathogen-associated molecular patterns , which include liposomes , lipopolysaccharide , molecular cages for antigens , components of bacterial cell walls , and endocytosed nucleic acids such as RNA , double-stranded RNA , single-stranded DNA , and unmethylated CpG dinucleotide-containing DNA. [ 4 ] Because immune systems have evolved to recognize these specific antigenic moieties , the presence of an adjuvant in conjunction with the vaccine can greatly increase the innate immune response to the antigen by augmenting the activities of dendritic cells , lymphocytes , and macrophages by mimicking a natural infection . [ 5 ] [ 6 ] There are many adjuvants, some of which are inorganic , that carry the potential to augment immunogenicity . [ 14 ] [ 15 ] Alum was the first aluminium salt used for this purpose, but it has been almost completely replaced by aluminium hydroxide and aluminium phosphate for commercial vaccines. [ 16 ] Aluminium salts are the most commonly used adjuvants in human vaccines. Their adjuvant activity was described in 1926. [ 17 ] The precise mechanism of aluminium salts remains unclear, but some insights have been gained. It was formerly thought that they function as delivery systems by generating depots that trap antigens at the injection site, providing a slow release that continues to stimulate the immune system. [ 18 ] However, studies have shown that surgical removal of these depots had no impact on the magnitude of the IgG1 response. [ 19 ] Alum can trigger dendritic cells and other immune cells to secrete Interleukin 1 beta (IL‑1β), an immune signal that promotes antibody production. Alum adheres to the cell's plasma membrane and rearranges certain lipids there. Spurred into action, the dendritic cells pick up the antigen and speed to lymph nodes, where they stick tightly to helper T cells and presumably induce an immune response. A second mechanism depends on alum killing immune cells at the injection site, although researchers are not sure exactly how alum kills these cells. It has been speculated that the dying cells release DNA which serves as an immune alarm. Some studies found that DNA from dying cells causes dendritic cells to adhere more tightly to helper T cells, which ultimately leads to an increased release of antibodies by B cells .
Whatever the mechanism, alum is not a perfect adjuvant, because it does not work with all antigens (e.g. malaria and tuberculosis). [ 20 ] However, recent research indicates that alum formulated in nanoparticle form rather than as microparticles can broaden the utility of alum adjuvants and promote stronger adjuvant effects. [ 21 ] Freund's complete adjuvant is a solution of inactivated Mycobacterium tuberculosis in mineral oil developed in 1930. It is not safe enough for human use. A version without the bacteria, i.e. only the oil emulsion, is known as Freund's incomplete adjuvant. It helps vaccines release antigens over a longer time. Despite the side effects, its potential benefit has led to a few clinical trials. [ 17 ] Squalene is a naturally occurring organic compound used in human and animal vaccines. Squalene is an oil, made up of carbon and hydrogen atoms, produced by plants and present in many foods. Squalene is also produced by the human liver as a precursor to cholesterol and is present in human sebum . [ 22 ] MF59 is an oil-in-water emulsion of squalene adjuvant used in some human vaccines. As of 2021, over 22 million doses of one vaccine with squalene, FLUAD, have been administered with no severe adverse effects reported. [ 23 ] AS03 is another squalene-containing adjuvant. [ 24 ] In addition, squalene-based O/W emulsions have also been shown to stably incorporate small-molecule TLR7/8 adjuvants (e.g. PVP-037) and lead to enhanced adjuvanticity via synergism. [ 13 ] The plant extract QS-21 is a liposome loaded with saponins extracted from the tree Quillaja saponaria . [ 25 ] [ 26 ] Monophosphoryl lipid A (MPL), a detoxified version of the lipopolysaccharide toxin from the bacterium Salmonella Minnesota , interacts with the receptor TLR4 to enhance the immune response. [ 27 ] [ 17 ] The combination of QS-21, cholesterol and MPL forms the adjuvant AS01, [ 11 ] which is used in the Shingrix vaccine approved in 2017, [ 27 ] as well as in the approved malaria vaccine Mosquirix . [ 11 ] The adjuvant Matrix-M is an immune stimulating complex (ISCOM) consisting of nanospheres made of QS-21, cholesterol and phospholipids . [ 26 ] It is used in the approved Novavax Covid-19 vaccine and in the malaria vaccine R21/Matrix-M. Several unmethylated cytosine phosphoguanosine (CpG) oligonucleotides activate the TLR9 receptor, which is present in a number of cell types of the immune system. The adjuvant CpG 1018 is used in an approved Hepatitis B vaccine . [ 11 ] In order to understand the links between the innate immune response and the adaptive immune response, and to help substantiate an adjuvant's function in enhancing adaptive immune responses to the specific antigen of a vaccine, several points should be considered. Antigen presentation, carried out by both dendritic cells and macrophages, represents a physical link between the innate and adaptive immune responses. Upon activation, mast cells release heparin and histamine to effectively increase trafficking to, and seal off, the site of infection, allowing immune cells of both systems to clear the area of pathogens. In addition, mast cells also release chemokines which result in the positive chemotaxis of other immune cells of both the innate and adaptive immune responses to the infected area. [ 30 ] [ 31 ] Due to the variety of mechanisms and links between the innate and adaptive immune response, an adjuvant-enhanced innate immune response results in an enhanced adaptive immune response.
Specifically, adjuvants may exert their immune-enhancing effects according to five immune-functional activities. [ 32 ] The ability of the immune system to recognize molecules that are broadly shared by pathogens is, in part, due to the presence of immune receptors called toll-like receptors (TLRs) that are expressed on the membranes of leukocytes including dendritic cells , macrophages , natural killer cells , cells of the adaptive immunity (T and B lymphocytes) and non-immune cells ( epithelial and endothelial cells , and fibroblasts ). [ 33 ] The binding of ligands to TLRs, either in the form of adjuvant used in vaccinations or in the form of invasive moieties during times of natural infection, marks the key molecular events that ultimately lead to innate immune responses and the development of antigen-specific acquired immunity. [ 34 ] [ 35 ] As of 2016, several TLR ligands were in clinical development or being tested in animal models as potential adjuvants. [ 36 ] Aluminium salts used in many human vaccines are regarded as safe by the Food and Drug Administration . [ 37 ] Although there are studies suggesting a role for aluminium, especially injected, highly bioavailable antigen-aluminium complexes, in the development of Alzheimer's disease, [ 38 ] most researchers do not support a causal connection with aluminium. [ 39 ] Adjuvants may make vaccines too reactogenic , which often leads to fever . This is often an expected outcome upon vaccination and is usually controlled by oral paracetamol if necessary. An increased number of narcolepsy (a chronic neurological disorder) cases in children and adolescents was observed in Scandinavian and other European countries after vaccinations to address the H1N1 "swine flu" pandemic in 2009 . Narcolepsy has previously been associated with HLA -subtype DQB1*602, which has led to the prediction that it is an autoimmune process. After a series of epidemiological investigations, researchers found that the higher incidence correlated with the use of the AS03-adjuvanted influenza vaccine ( Pandemrix ). Those vaccinated with Pandemrix had an almost twelve-fold higher risk of developing the disease. [ 40 ] [ 41 ] The adjuvant of the vaccine contained an amount of vitamin E no greater than a normal day's dietary intake. Vitamin E increases hypocretin -specific fragments that bind to DQB1*602 in cell culture experiments, leading to the hypothesis that autoimmunity may arise in genetically susceptible individuals, [ 42 ] but there are no clinical data to support this hypothesis. The third AS03 ingredient is polysorbate 80 . [ 24 ] Polysorbate 80 is also found in both the Oxford–AstraZeneca and Janssen COVID-19 vaccines . [ 43 ] [ 44 ] Aluminium adjuvants have caused motor neuron death in mice [ 45 ] when injected directly onto the spine at the scruff of the neck, and oil-water suspensions have been reported to increase the risk of autoimmune disease in mice. [ 46 ] Squalene has caused rheumatoid arthritis in rats already prone to arthritis. [ 47 ] In cats, vaccine-associated sarcoma (VAS) occurs at a rate of 1-10 per 10,000 injections. In 1993, a causal relationship between VAS and administration of aluminium-adjuvanted rabies and FeLV vaccines was established through epidemiologic methods, and in 1996 the Vaccine-Associated Feline Sarcoma Task Force was formed to address the problem. [ 48 ] However, evidence conflicts on whether particular types of vaccines, manufacturers or other factors are associated with sarcomas.
[ 49 ] As of 2006 [update] , the premise that TLR signaling acts as the key node in antigen-mediated inflammatory responses has been in question, as researchers have observed antigen-mediated inflammatory responses in leukocytes in the absence of TLR signaling. [ 4 ] [ 50 ] One group found that, in the absence of MyD88 and Trif (essential adapter proteins in TLR signaling), they were still able to induce inflammatory responses, increase T cell activation and generate greater B cell abundance using conventional adjuvants ( alum , Freund's complete adjuvant, Freund's incomplete adjuvant, and monophosphoryl-lipid A/trehalose dicorynomycolate ( Ribi's adjuvant )). [ 4 ] These observations suggest that although TLR activation can lead to increases in antibody responses, TLR activation is not required to induce enhanced innate and adaptive responses to antigens. Investigating the mechanisms which underlie TLR signaling has been significant in understanding why adjuvants used during vaccinations are so important in augmenting adaptive immune responses to specific antigens . However, given that TLR activation is not required for the immune-enhancing effects caused by common adjuvants, it is likely that there are other, as yet uncharacterized, receptors besides TLRs, opening the door to future research. Reports after the first Gulf War linked anthrax vaccine adjuvants [ 51 ] to Gulf War syndrome in American and British troops. [ 52 ] The United States Department of Defense strongly denied the claims. Discussing the safety of squalene as an adjuvant in 2006, the World Health Organization stated that "follow-up to detect any vaccine-related adverse events will need to be performed." [ 53 ] No such follow-up has been published by the WHO. Subsequently, the American National Center for Biotechnology Information published an article discussing the comparative safety of vaccine adjuvants which stated that "the biggest remaining challenge in the adjuvant field is to decipher the potential relationship between adjuvants and rare vaccine adverse reactions, such as narcolepsy, macrophagic myofasciitis or Alzheimer's disease." [ 54 ]
https://en.wikipedia.org/wiki/Immunologic_adjuvant
An immune checkpoint regulator is a modulator of the immune system that allows initiation of a productive immune response and prevents the onset of autoimmunity. Examples of such molecules are cytotoxic T-lymphocyte antigen 4 (CTLA-4 or CD152), an inhibitory receptor found on immune cells, and programmed cell death 1 (CD279), which has an important role in down-regulating the immune system by preventing the activation of T-cells. Tumours exploit certain immune-checkpoint pathways as a major mechanism of immune resistance, particularly against T cells that are specific for tumor antigens . [ 1 ] Therefore, the strategy in using immunological checkpoints in cancer therapy is to inhibit inhibitory molecules of the immune system, thus stimulating the immune system. The ability to interfere with the inhibitory function of the checkpoint receptors CD152 and CD279 (programmed death-1) in oncology has proved successful. For metastatic melanoma, the FDA approved the αCD152 monoclonal antibody ipilimumab , which was found to prolong survival. In melanoma , non-small cell lung cancer and renal cell carcinoma, CD279-blocking antibodies that promote antitumor responses offer hope. In hematologic malignancies, a humanized αCD279 IgG1 needs further research. In solid tumors the use of a CD279 IgG4 antibody is promising, as is targeting CD273 (PD-L2) in stage IV disease. In autoimmune rheumatic diseases , impaired tolerance leads to the development of diseases such as rheumatoid arthritis , systemic sclerosis , systemic lupus erythematosus , Sjogren's syndrome, etc. Therefore, in autoimmune diseases the converse strategy of engaging immunological checkpoints may be beneficial: stimulate inhibitory molecules of the immune system, thus inhibiting the immune system (and thereby increasing self-tolerance). Known to work is Abatacept , a CD152-Ig fusion protein used in treating rheumatoid arthritis and juvenile idiopathic arthritis . The therapeutic opportunities of engaging the programmed death-1 pathway have not yet been studied sufficiently. [ 2 ] [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Immunologic_checkpoint
The Immunologic Constant of Rejection (ICR) is a notion introduced by biologists to group a shared set of genes expressed in tissue-destructive pathogenic conditions like cancer and infection , across a diverse set of physiological circumstances of tissue damage or organ failure, including autoimmune disease and allograft rejection. [ 1 ] The identification of mechanisms and phenotypes shared by distinct immune pathologies, marked as hallmarks or biomarkers, aids in the identification of novel treatment options without necessarily assessing each patient's phenomenology individually. The concept of the immunologic constant of rejection is based on a set of propositions. [ 1 ] In the case of autoimmunity and/or allograft rejection, immunity broadens in the target organ through the production of chemokines of the CXCL family, which recruit cytotoxic T cells bearing the receptor CXCR3 . These initiate a signalling cascade, and 20 genes involved in this cascade make up the ICR gene set. [ 2 ] [ 3 ] The disrupted homeostasis of cancer cells is found to initiate processes promoting cell growth. To illustrate, growth factors and chemokines activated in response to injury are recruited by tumour cells, sustaining chronic inflammation , similar to the immune phenotype found in chronic infection, allograft rejection and autoimmune disease. The role of immunity in cancer is demonstrated by the predictive and prognostic role of tumour-infiltrating lymphocytes (TIL) and immune response gene signatures. In several cancers these genes show strong correlation. [ 2 ] A high expression of these genes indicates an active immune engagement, and at least a partial rejection of the cancer tissue. In breast cancer, increased survival is observed in patients displaying a high level of ICR gene expression. [ 3 ] This immune-active phenotype was associated with an increased level of mutations, while the poor immune phenotype was defined by perturbation of the MAPK signalling pathways . [ 4 ] The consensus clustering of tumours based on ICR gene expression provides an assessment of prognosis and response to immunotherapy . To illustrate, classification of breast cancers into four classes (ranking from ICR4 to ICR1) has shown better levels of immune anti-tumour response in ICR4 tumours, as well as prolonged survival in comparison to ICR1-3 tumours. [ 4 ] Another study [ 5 ] has assessed the clinico-biological value of ICR in breast cancer, via the classification of around 8700 breast tumours and assessment of metastasis-free survival and pathological complete response to neoadjuvant chemotherapy . The ICR signature was shown to be associated with metastasis-free survival and pathological response to chemotherapy . The increased enrichment of the immune signature reflects infiltration by cells including T cells, cytotoxic T cells, Th-1 cells, CD8+ T cells, Tγδ cells , and APCs, which defines tumours as immune-active or immune-silent. [ 7 ] Although associated with poor-prognosis features, the infiltration of immune cells in ICR4 tumours resulted in longer metastasis-free survival and a better response to chemotherapy, demonstrating the importance of the immune reaction in breast cancer. It was also shown that ICR classification depends on the intrinsic molecular subtype of breast tumours, being most prevalent in triple-negative and HER2 + tumours. A cohort of fresh-frozen samples from 348 patients affected by primary colon cancer (AC-ICAM) was used for genomic examination.
This examination revealed that TH1-cell/cytotoxic immune activation, as captured by the ICR, together with immunoediting, concurrent expansion of TCR clonotypes and a specific intratumoral microbiome composition, was associated with a favorable clinical outcome. The results also revealed that the ICR was associated with overall survival independently of Consensus Molecular Subtypes (CMS) and microsatellite instability (MSI). [ 6 ] In addition, they identified a microbiome signature with strong prognostic value (MBR risk score). The researchers then combined the ICR with the MBR risk score to obtain a new multi-omics biomarker (mICRoScore) that was able to predict exceptionally long survival in patients with colon cancer. [ 6 ] A pre-existing intratumoral anti-tumor T helper (Th-1) immune response has been linked to favorable outcomes with immunotherapy, but not all immunologically active cancers respond to treatment. In a pan-cancer analysis using The Cancer Genome Atlas (TCGA), including 31 cancer types from 9282 patients, high expression of the ICR signature was associated with significantly prolonged survival in breast invasive carcinoma, skin cutaneous melanoma, sarcoma, and uterine corpus endometrial carcinoma, while this "hot" immune phenotype was associated with reduced overall survival in uveal melanoma, low grade glioma, pancreatic adenocarcinoma and kidney renal clear cell carcinoma. In a systematic analysis, cancer-specific pathways were found to modulate the prognostic value of ICR. In tumors with a high proliferation score, ICR was linked to better survival, while in tumors with low proliferation no association with survival was observed. In tumors dominated by cancer signaling, for example by increased TGF beta signaling, the "hot" immune phenotype did not confer any survival benefit, suggesting that the immune response is heavily suppressed, without protective effect. [ 7 ] The clinical relevance of this finding was demonstrated in the Van Allen dataset of tumor samples from melanoma patients treated with the checkpoint inhibitor anti-CTLA4. Overall, significantly increased expression of ICR was observed in responders compared to non-responders. However, an association of high pretreatment ICR scores with survival was only observed for samples with high proliferation scores. Similarly, ICR was only associated with survival in samples with low TGF beta expression. In soft tissue sarcoma , the ICR was retrospectively applied to a cohort of 1455 non-metastatic samples to discover links between ICR classes and clinicopathological and biological variables. The cohort was divided into four groups, labelled ICR1 to ICR4 and consisting of 34%, 27%, 24%, and 15% of the tumors respectively, taking into account age, pathology, depth, and the enrichment from ICR1 through ICR4 of quantitative and qualitative scores of immune response. When ICR1 was compared to the ICR2-4 classes, there was a 59% increase in metastatic relapse. Multivariate analysis also showed that the ICR classification remained associated with metastasis-free survival (MFS), as did pathological type and CINSARC classification, suggesting an independent prognostic value. The presence of an ICR signature is linked to postoperative MFS in early-stage STS, regardless of other prognostic factors such as CINSARC. A prognostic clinicogenomic model was created which combines ICR, CINSARC, and pathological type to provide a reliable prediction of outcomes.
Additionally, the study proposes that each prognostic group has varying levels of susceptibility to different systemic therapies. [ 8 ] A large systematic analysis of public RNA-seq data (TARGET) for five pediatric tumor types: osteosarcoma (OS), neuroblastoma (NBL), clear cell sarcoma of the kidney (CCSK), Wilms tumor (WLM) and rhabdoid tumor of the kidney (RT), showed a very important role of ICR in pediatric tumors. A lower ICR score was associated with lower survival in WLM, while a higher ICR score was associated with better survival in OS and in high-risk NBL without MYCN amplification. Immune traits were then used to cluster the samples into six immune subtypes (S1-S6), each having distinct survival outcomes. For example, the S2 cluster showed the highest overall survival, distinguished by low enrichment of the wound-healing signature, high Th1 infiltration, and low Th2 infiltration, while the opposite pattern was seen in S4. Upregulation of the WNT/Beta-catenin pathway was associated with unfavorable outcomes and decreased T-cell infiltration in OS. [ 9 ] Molecular pathways, including activation of IFN-stimulated genes, the recruitment of NK cells and T cells through the secretion of CCL5 and CXCL9-10, and the induction of immune effector mechanisms, are found to overlap in conditions like autoimmunity, as a result of the host-against-self reaction, where immune cells initiate tissue-specific destruction. Similarly, allografting results in a strong immune response, which clinically necessitates continued immunosuppression to maintain graft survival. Allografts are found to express conformational epitopes, such as MHC molecules, as nonself antigens, which activates both B and T cells. [ 1 ] A related signature is an 18-gene gene-expression profile that predicted response to pembrolizumab across multiple solid tumors. It can be used with a platform such as the NanoString nCounter and defines tumor type-independent dimensions of the tumor microenvironment relevant to predicting clinical outcome for agents targeting the PD-1/PD-L1 signaling pathway. [ 10 ] [ 11 ] Its gene signature comprises CCL5, CD27, CD274 (PD-L1), CD276 (B7-H3), CD8A, CMKLR1, CXCL9, CXCR6, HLA-DQA1, HLA-DRB1, HLA-E, IDO1, LAG3, NKG7, PDCD1LG2 (PDL2), PSMB10, STAT1, and TIGIT. Another related signature, the cytolytic activity (CYT) score, is a simple two-gene mean expression score of GZMA and PRF1. High CYT within colorectal cancer is associated with improved survival, likely due to increased immunity and cytolytic activity of T cells and M1 macrophages. [ 12 ] The 5-year recurrence-free survival of liver cancer patients with low CYT scores was significantly shorter than that of patients with high CYT scores. [ 13 ] Researchers have also found 20 different 20-lncRNA prognostic signatures that showed a stronger effect on overall survival than the ICR signature in different solid cancers. They also found a 3-lncRNA signature that displayed prognostic significance in 5 solid cancer types with a stronger association with clinical outcome than ICR, and that displayed additional prognostic significance in the uterine corpus endometrial carcinoma cohort and the cervical squamous cell carcinoma and endocervical adenocarcinoma cohort, as compared to ICR. [ 14 ]
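The signature scores mentioned above share a simple recipe: average a (possibly normalized) expression value over a fixed gene set, then compare or cluster samples by the score. The sketch below shows that recipe on random placeholder data, with the two-gene CYT set and a stand-in for the 20-gene ICR set; the mean of z-scored log-expression used here is one common convention and is an assumption, not the exact formula of any single publication.

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genes = ["GZMA", "PRF1", "CXCL9", "CXCL10", "CCL5", "IFNG", "STAT1"]
# Rows = genes, columns = samples; values stand in for log2(TPM + 1).
expr = pd.DataFrame(rng.gamma(2.0, 1.5, size=(len(genes), 8)),
                    index=genes, columns=[f"S{i}" for i in range(8)])

def signature_score(expr, gene_set):
    """Mean of per-gene z-scores over the gene set, one score per sample."""
    sub = expr.loc[expr.index.intersection(gene_set)]
    z = sub.sub(sub.mean(axis=1), axis=0).div(sub.std(axis=1), axis=0)
    return z.mean(axis=0)

cyt = signature_score(expr, ["GZMA", "PRF1"])  # two-gene CYT score
icr_like = signature_score(expr, genes)        # stand-in for the 20-gene ICR set

# Toy stand-in for ICR consensus clustering: split samples into four classes
# (real analyses cluster the full gene-by-sample matrix with consensus resampling).
classes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    icr_like.to_numpy().reshape(-1, 1))
print(dict(zip(icr_like.index, classes)))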
https://en.wikipedia.org/wiki/Immunologic_constant_of_rejection
The Immunological Genome Project (ImmGen) is a collaborative scientific research project that is building a gene-expression database for all characterized immune cells in the mouse. The overarching goal of the project is to computationally reconstruct the gene regulatory network in immune cells. [ 1 ] All data generated as part of ImmGen are made freely and publicly available at the ImmGen portal. The ImmGen project began in 2008 as a collaboration between several immunology and computational biology laboratories across the United States, with its second phase scheduled for completion in 2017. Raw data and specialized data browsers from the first and second phases are available at www.ImmGen.org.

A true understanding of cell differentiation in the immune system will require a general perspective on the transcriptional profile of each cell type of the adaptive and innate immune systems, and how these profiles evolve through cell differentiation or activation by immunogenic or tolerogenic ligands. The ImmGen project aims to establish the roadmap of these transcriptional states. [ citation needed ]

The first aim of ImmGen is to generate a compendium of whole-genome transcriptional profiles (initially by microarray, now mostly by RNA-sequencing) for nearly all characterized cell populations of the adaptive and innate immune systems in the mouse, at major stages of differentiation and activation. This effort is being carried out by a group of collaborating immunology research laboratories across the U.S. Each of the laboratories brings a unique expertise in a particular cell lineage, and all employ standardized procedures for cell sorting. The compendium of microarray data currently includes over 250 immunologically relevant cell types, from all lymphoid organs and other tissues monitored by immune cells. [ citation needed ]

A series of ImmGen reports was published as the compendium accumulated. Lineage-specific reports described hematopoietic stem cells, [ 2 ] natural killer cells, [ 3 ] neutrophils, [ 4 ] B and T cells, [ 5 ] natural killer cells, [ 6 ] macrophages, [ 7 ] dendritic cells, [ 8 ] alpha beta T cells, [ 9 ] gamma delta T cells, [ 10 ] activated CD8 T cells, [ 11 ] innate lymphoid cells, [ 12 ] and lymph node stromal cells. [ 13 ] Though most of the transcriptional profiling was done on B6 mice, the effect of genetic variation was also studied. [ 14 ] The second phase of ImmGen started profiling activated immune cells, with the interferon response used as a test case. [ 15 ] Several groups of collaborating computational biologists (the Regev and Koller groups) used the data to reverse-engineer the genetic regulatory network in immune cells [ 16 ] and to compare it to the human immune system. [ 17 ] An initial survey of differential splicing across immune lineages was carried out using both microarrays and RNA-sequencing. [ 18 ] Project participants from Brown University's Computer Science Department are also exploring novel representation modes for the ImmGen data, developing and curating the public representation. [ citation needed ]

Participating immunology laboratories include: the Brenner (NKT, BWH, Boston), Goldrath (activated CD8 T cells, UCSD, San Diego), Kang (gamma delta T cells, U. Mass, Worcester), Lanier (NK, UCSF, San Francisco), Mathis/Benoist (alpha beta T cells, HMS, Boston), Merad and Randolph (monocytes & macrophages, Mount Sinai, New York and Washington University in St.
Louis), Rossi (HSC, Children's, Boston), Turley (DC, DFCI, Boston), and Wagers (HSC, Joslin, Boston) labs. [ citation needed ] Richard (Randy) Hardy (Fox Chase, Philadelphia), a member of ImmGen since its initiation, passed away in June 2016. As of August 2016, ImmGen had profiled more than 250 naive cell populations in the mouse using microarrays, and several dozen activated cell types using RNA-sequencing. [ citation needed ] The project's status and detailed information can be found on the ImmGen website, which also includes a dedicated data browser with which users can interactively explore the expression profiles of particular genes, networks of co-regulated genes, and genes that best distinguish different cell types. Raw data are available at the NCBI's Gene Expression Omnibus.
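As an illustration of programmatic access to these data, the sketch below uses the third-party GEOparse Python package to download an ImmGen series from the Gene Expression Omnibus. The accession GSE15907 is the one commonly cited for the ImmGen phase 1 microarray compendium, but treat it as an assumption to verify on the GEO site; the code pattern itself is standard GEOparse usage.

```python
# Minimal sketch of programmatic access to ImmGen raw data on NCBI GEO using
# the third-party GEOparse package (pip install GEOparse). GSE15907 is
# assumed to be the ImmGen phase 1 microarray accession; verify before use.
import GEOparse

gse = GEOparse.get_GEO(geo="GSE15907", destdir="./geo_cache")

# Each GSM record corresponds to one sorted immune cell population.
for gsm_name, gsm in list(gse.gsms.items())[:3]:
    print(gsm_name, gsm.metadata.get("title"))

# Expression values for a single sample, as a pandas DataFrame.
first_sample = next(iter(gse.gsms.values()))
print(first_sample.table.head())
```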
https://en.wikipedia.org/wiki/Immunological_Genome_Project
Immunology is a branch of biology and medicine [ 1 ] that covers the study of immune systems [ 2 ] in all organisms. Immunology charts, measures, and contextualizes the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, [ 3 ] immune deficiency, [ 4 ] and transplant rejection [ 5 ]); and the physical, chemical, and physiological characteristics of the components of the immune system in vitro, [ 6 ] in situ, and in vivo. [ 7 ] Immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, rheumatology, virology, bacteriology, parasitology, psychiatry, and dermatology.

The term was coined by Russian biologist Ilya Ilyich Mechnikov, [ 8 ] who advanced studies on immunology and received the Nobel Prize in 1908 with Paul Ehrlich "in recognition of their work on immunity". He pinned small thorns into starfish larvae and noticed unusual cells surrounding the thorns; this was the active response of the body trying to maintain its integrity. It was Mechnikov who first observed the phenomenon of phagocytosis, [ 9 ] in which the body defends itself against a foreign body. Ehrlich habituated mice to the poisons ricin and abrin. After feeding them small but increasing doses of ricin, he ascertained that they had become "ricin-proof". Ehrlich interpreted this as immunization and observed that it began abruptly after a few days and was still in existence after several months.

Prior to the designation of immunity, [ 10 ] from the etymological root immunis, which is Latin for 'exempt', early physicians characterized organs that would later be proven to be essential components of the immune system. The important lymphoid organs of the immune system are the thymus, [ 11 ] bone marrow, and chief lymphatic tissues such as the spleen, tonsils, lymph vessels, lymph nodes, adenoids, and liver. However, many components of the immune system are cellular in nature and not associated with specific organs, but rather embedded or circulating in various tissues located throughout the body.

Classical immunology ties in with the fields of epidemiology and medicine. It studies the relationship between the body systems, pathogens, and immunity. The earliest written mention of immunity can be traced back to the plague of Athens in 430 BCE. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. [ 12 ] Many other ancient societies have references to this phenomenon, but it was not until the 19th and 20th centuries that the concept developed into scientific theory.

The study of the molecular and cellular components that comprise the immune system, including their function and interaction, is the central science of immunology. The immune system has been divided into a more primitive innate immune system and, in vertebrates, an acquired or adaptive immune system. The latter is further divided into humoral (or antibody) and cell-mediated components. [ citation needed ] The immune system has the capability of self and non-self recognition. [ 13 ] An antigen is a substance that triggers the immune response. The cells involved in recognizing the antigen are lymphocytes; once they recognize it, they secrete antibodies.
Antibodies are proteins that neutralize disease-causing microorganisms. Antibodies do not directly kill pathogens; instead, they identify antigens as targets for destruction by other immune cells such as phagocytes or NK cells. The (antibody) response is defined as the interaction between antibodies and antigens. [ 14 ] Antibodies are specific proteins released from a certain class of immune cells known as B lymphocytes, while antigens are defined as anything that elicits the generation of antibodies (antibody generators). Immunology rests on an understanding of the properties of these two biological entities and the cellular response to both.

It is now becoming clear that immune responses contribute to the development of many common disorders not traditionally viewed as immunologic, [ 15 ] including metabolic, cardiovascular, cancer, and neurodegenerative conditions such as Alzheimer's disease. In addition, the immune system is directly implicated in infectious diseases (tuberculosis, malaria, hepatitis, pneumonia, dysentery, and helminth infestations). Hence, research in the field of immunology is of prime importance for advances in modern medicine, biomedical research, and biotechnology.

The specificity of the bond between antibody and antigen has made the antibody an excellent tool for the detection of substances by a variety of diagnostic techniques. Antibodies specific for a desired antigen can be conjugated with an isotopic (radio) or fluorescent label or with a color-forming enzyme in order to detect it. However, the similarity between some antigens can lead to false positives and other errors in such tests, through antibodies cross-reacting with antigens that are not exact matches. [ 16 ]

The use of immune system components or antigens to treat a disease or disorder is known as immunotherapy. Immunotherapy is most commonly used to treat allergies, autoimmune disorders such as Crohn's disease, Hashimoto's thyroiditis and rheumatoid arthritis, and certain cancers. Immunotherapy is also often used for patients who are immunosuppressed (such as those with HIV) and people with other immune deficiencies. This includes regulating factors such as IL-2, IL-10, GM-CSF, and IFN-α.

Clinical immunology is the study of diseases caused by disorders of the immune system (failure, aberrant action, and malignant growth of the cellular elements of the system). It also involves diseases of other systems, where immune reactions play a part in the pathology and clinical features. The diseases caused by disorders of the immune system fall into two broad categories: immunodeficiency, in which parts of the immune system fail to provide an adequate response, and autoimmunity, in which the immune system attacks its host's own body. Other immune system disorders include various hypersensitivities (such as in asthma and other allergies) that respond inappropriately to otherwise harmless compounds. The most well-known disease that affects the immune system itself is AIDS, an immunodeficiency characterized by the suppression of CD4+ ("helper") T cells, dendritic cells and macrophages by the human immunodeficiency virus (HIV). Clinical immunologists also study ways to prevent the immune system's attempts to destroy allografts (transplant rejection). [ 17 ]

Clinical immunology and allergy is usually a subspecialty of internal medicine or pediatrics. Fellows in clinical immunology are typically exposed to many of the different aspects of the specialty and treat allergic conditions, primary immunodeficiencies, and systemic autoimmune and autoinflammatory conditions.
As part of their training, fellows may do additional rotations in rheumatology, pulmonology, otorhinolaryngology, dermatology, and the immunologic lab. [ 18 ] When health conditions worsen to emergency status, portions of immune system organs, including the thymus, spleen, bone marrow, lymph nodes, and other lymphatic tissues, can be surgically excised for examination while patients are still alive.

Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells (more precisely, phagocytes) that were responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch [ 19 ] and Emil von Behring, [ 20 ] among others, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells. [ 21 ] [ 22 ] [ 23 ]

In the mid-1950s, Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, [ 24 ] formulated the clonal selection theory (CST) of immunity. [ 25 ] On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune response. [ 26 ] The theory was later modified to reflect new discoveries regarding histocompatibility and the complex "two-signal" activation of T cells. [ 27 ] The self/nonself theory of immunity and the self/nonself vocabulary have been criticized [ 23 ] [ 28 ] [ 29 ] but remain very influential. [ 30 ] [ 31 ] More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, [ 32 ] "cognitive immune" views, [ 33 ] the "danger model" (or "danger theory"), [ 28 ] and the "discontinuity" theory. [ 34 ] [ 35 ] The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions. [ 36 ] [ 37 ] [ 38 ] [ 39 ]

The body's capability to react to antigens depends on a person's age, antigen type, maternal factors, and the area where the antigen is presented. [ 40 ] Neonates are said to be in a state of physiological immunodeficiency, because both their innate and adaptive immunological responses are greatly suppressed. Once born, a child's immune system responds favorably to protein antigens but not as well to glycoproteins and polysaccharides. In fact, many of the infections acquired by neonates are caused by low-virulence organisms like Staphylococcus and Pseudomonas. In neonates, opsonic activity and the ability to activate the complement cascade are very limited. For example, the mean level of C3 in a newborn is approximately 65% of that found in the adult. Phagocytic activity is also greatly impaired in newborns. This is due to lower opsonic activity, as well as diminished up-regulation of integrin and selectin receptors, which limit the ability of neutrophils to interact with adhesion molecules in the endothelium. Their monocytes are slow and have a reduced ATP production, which also limits the newborn's phagocytic activity.
Although the number of total lymphocytes is significantly higher than in adults, cellular and humoral immunity are also impaired. Antigen-presenting cells in newborns have a reduced capability to activate T cells. Also, T cells of a newborn proliferate poorly and produce very small amounts of cytokines like IL-2, IL-4, IL-5, IL-12, and IFN-γ, which limits their capacity to activate the humoral response as well as the phagocytic activity of macrophages. B cells develop early during gestation but are not fully active. [ 41 ]

Maternal factors also play a role in the body's immune response. At birth, most of the immunoglobulin present is maternal IgG. These antibodies are transferred across the placenta to the fetus via the FcRn (neonatal Fc receptor). [ 42 ] Because IgM, IgD, IgE and IgA do not cross the placenta, they are almost undetectable at birth. Some IgA is provided by breast milk. These passively acquired antibodies can protect the newborn for up to 18 months, but their response is usually short-lived and of low affinity. [ 41 ] These antibodies can also produce a negative response. If a child is exposed to the antibody for a particular antigen before being exposed to the antigen itself, then the child will produce a dampened response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly, the response of T cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses in adults do not readily elicit these same responses in neonates. [ 41 ] Between six and nine months after birth, a child's immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in their response to polysaccharides until they are at least one year old. This may explain the distinct time frames found in vaccination schedules. [ 43 ] [ 44 ]

During adolescence, the human body undergoes various physical, physiological and immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-estradiol (an estrogen) and, in males, testosterone. Estradiol usually begins to act around the age of 10 and testosterone some months later. [ 45 ] There is evidence that these steroids act not only directly on the primary and secondary sexual characteristics but also have an effect on the development and regulation of the immune system, [ 46 ] including an increased risk of developing pubescent and post-pubescent autoimmunity. [ 47 ] There is also some evidence that cell surface receptors on B cells and macrophages may detect sex hormones in the system. [ 48 ] The female sex hormone 17-β-estradiol has been shown to regulate the level of immunological response, [ 49 ] while some male androgens such as testosterone seem to suppress the stress response to infection. Other androgens, however, such as DHEA, increase immune response. [ 50 ] As in females, the male sex hormones seem to have more control of the immune system during puberty and post-puberty than during the rest of a male's adult life. Physical changes during puberty such as thymic involution also affect immunological response. [ 51 ]

Ecoimmunology, or ecological immunology, explores the relationship between the immune system of an organism and its social, biotic and abiotic environment.
More recent ecoimmunological research has focused on host defences against pathogens traditionally considered "non-immunological", such as pathogen avoidance, self-medication, symbiont-mediated defenses, and fecundity trade-offs. [ 52 ] Behavioural immunity, a phrase coined by Mark Schaller, specifically refers to psychological drivers of pathogen avoidance, such as the disgust aroused by stimuli encountered around pathogen-infected individuals (for example, the smell of vomit). [ 53 ] More broadly, "behavioural" ecological immunity has been demonstrated in multiple species. For example, the monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites. These toxins reduce parasite growth in the offspring of the infected monarch. However, when uninfected monarch butterflies are forced to feed only on these toxic plants, they suffer a fitness cost in the form of reduced lifespan relative to other uninfected monarch butterflies. [ 54 ] This indicates that laying eggs on toxic plants is a costly behaviour in monarchs which has probably evolved to reduce the severity of parasite infection. [ 52 ] Symbiont-mediated defenses are also heritable across host generations, even though the transmission has a direct non-genetic basis. Aphids, for example, rely on several different symbionts for defense from key parasites, and can vertically transmit their symbionts from parent to offspring. [ 55 ] Therefore, a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring, allowing coevolution with parasites attacking the host in a way similar to traditional immunity. The preserved immune tissues of extinct species, such as the thylacine (Thylacinus cynocephalus), can also provide insights into their biology. [ 56 ]

The study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer. This branch of immunology is concerned with the physiological reactions characteristic of the immune state in cancer. Inflammation is an immune response that has been observed in many types of cancers. [ 57 ]

Reproductive immunology is devoted to the study of immunological aspects of the reproductive process, including fetal acceptance. The term has also been used by fertility clinics to address fertility problems, recurrent miscarriages, premature deliveries, and dangerous complications such as pre-eclampsia.
https://en.wikipedia.org/wiki/Immunology
Immunomagnetic separation (IMS) is a laboratory tool that can efficiently isolate cells out of body fluid or cultured cells. It can also be used as a method of quantifying the pathogenicity of food, blood or feces. DNA analyses have supported the combined use of this technique and the polymerase chain reaction (PCR). [ 1 ] Another laboratory separation tool is affinity magnetic separation (AMS), which is more suitable for the isolation of prokaryotic cells. [ 2 ]

IMS isolates cells, proteins, and nucleic acids through the specific capture of biomolecules by small magnetized particles (beads) coated with antibodies or lectins. [ 3 ] The coated beads bind to the targeted biomolecules, are gently separated, and go through multiple cycles of washing; the targeted molecules bound to the superparamagnetic beads are then eluted, after which the concentration of the specifically targeted biomolecules can be determined. In this way IMS yields defined concentrations of specific molecules from targeted bacteria. A mixed cell population is placed in a magnetic field; cells attached to superparamagnetic beads (for example, 4.5-μm Dynabeads) bound to the targeted antigen remain once the excess substrate is removed. Dynabeads consist of iron-containing cores covered by a thin polymer shell that allows the adsorption of biomolecules. The beads can be coated with primary antibodies, species-specific antibodies, lectins, enzymes, or streptavidin; [ 3 ] the linkage between the magnetized beads and their coating material can be a cleavable DNA linker, enabling separation of the cells from the beads when subsequent culturing of the cells is desired. [ 4 ]

Many of these beads rely on the same principles of separation; however, different strengths of magnetic field require different sizes of beads, depending on the demands of the cell-population separation. Larger beads (>2 μm), produced by Dynal (Dynal [UK] Ltd., Wirral, Merseyside, UK; Dynal, Inc., Lake Success, NY), are the most commonly used range, whereas smaller beads (<100 nm), produced by Miltenyi Biotech (Miltenyi Biotech Ltd., Bisley, Surrey, UK; Miltenyi Biotech Inc., Auburn, CA), are mostly used in the MACS system. [ 3 ]

Immunomagnetic separation is used in a variety of scientific fields including molecular biology, microbiology, and immunology. [ 3 ] This technique is not limited to the separation of cells from blood; it can also be applied to primary tumors and in metastasis research, by dissociating the tissue into its component parts to create a single-cell suspension and then allowing a suitable antibody to label the cells. In metastasis research, this separation technique may be necessary to isolate tumor cells from a given cell population in tumors, peripheral blood, and bone marrow. [ 5 ] Antibodies coating paramagnetic beads bind to antigens present on the surface of cells, thus capturing the cells and facilitating their concentration. Concentration is achieved by placing a magnet against the side of the test tube, drawing the beads toward it.

MACS systems (magnetic cell separation systems) [ 6 ] [ 3 ] use smaller superparamagnetic beads (<100 nm), which require a stronger magnetic field to separate cells.
Cells are labeled with primary antibodies, and the MACS beads are coated with species-specific secondary antibodies. The labeled cell suspension is then loaded onto a separation column in a strong magnetic field. The labeled cells are retained (magnetized) while in the magnetic field, whereas the unlabeled cells remain in suspension (un-magnetized) and are collected. Once the column is removed from the magnetic field, the positive cells are eluted. The MACS beads can remain associated with the cells, because they do not interfere with cell attachment to the culture surface or with cell-cell interactions. A bead-removal reagent can then be applied to enzymatically release the MACS beads, allowing the cells to be relabeled with another marker and sorted again.
https://en.wikipedia.org/wiki/Immunomagnetic_separation
The immunome is the set of genes that code for proteins which constitute the immune system, excluding those that are widespread in other cell types and not involved in the immune response itself. [ 1 ] [ 2 ] It is further defined as the set of peptides derived from the proteome that interact with the immune system. [ 3 ] There are numerous ongoing efforts to characterize and sequence the immunomes of humans, mice, and non-human primates. Typically, immunomes are studied using immunofluorescence microscopy to determine the presence and activity of immune-related enzymes and pathways. [ 4 ] Practical applications of studying the immunome include vaccines, therapeutic proteins, and the treatment of other diseases. [ 3 ] [ 5 ] The study of the immunome falls under the field of immunomics. The word immunome is a portmanteau of the words "immune" and "chromosome"; see omics for a further discussion.

The exact size of the human immunome has been a topic of study for decades. [ 6 ] The amount of information it encodes is said to exceed the size of the human genome by several orders of magnitude, due at least in part to somatic hypermutation and junctional diversity. [ 7 ] [ 8 ] Several efforts are attempting to characterize the immunomes of humans and other species. [ 9 ] [ 10 ] [ 11 ] [ 12 ]

The Human Immunome Program is a major effort launched in 2016 as a collaborative project between The Human Vaccines Project, Vanderbilt University Medical Center, and Illumina, Inc. [ 9 ] Its goal is to decipher the complete collection of human B and T immune cell receptors. [ 13 ] Thousands of individuals will be studied, representing the range of age, gender, ethnicity, geographical origin, health status, and vaccination status. [ 9 ] The results will be shared as an open-source database. [ 14 ] The sequencing project will continue until unique sequences stop appearing within B and T cell receptors, and is expected to take ten years. [ 15 ]

The Immunological Genome Project's stated goal is to characterize the immunome of the mouse, generating "a complete microarray dissection of gene expression and its regulation in the immune system". This project is intended to function as a primary resource. The project engages more than 20 research labs studying T cells, B cells, and dendritic cells, along with many other cell types. The project began in 2008. [ 10 ] Non-human primate immunomes are studied because of their genetic similarity to humans. [ 11 ] [ 12 ] In 2025, the Mal-ID project first sequenced B cell receptors (BCR) and T cell receptors (TCR) at scale across multiple diagnoses using three machine learning models, achieving an area under the receiver operating characteristic curve value of 0.986. [ 16 ]

In order to gain useful knowledge about the immunome and its characteristics, the cells and components of the immune system must be phenotyped in a quick and pragmatic manner. There are hundreds of known cell types within the immune system, and the possibility of detecting and characterizing them without recent advances in immunophenotyping technology was remote, because large amounts of an individual's blood would have been required. This older approach is called low-dimensional immunophenotyping. However, high-dimensional immunophenotyping is now possible. The types of high-dimensional immunophenotyping can be broadly grouped into two categories: the use of lanthanide isotopes and the use of fluorophores.
These advanced technologies allow up to 100 parameters to be measured at one time. [ 4 ]

There are potentially far-reaching applications for studying the immunome. Some scientists believe that knowledge gained from the immunome could lead to the discovery of differences in the absolute number of T cell epitopes, and could reveal antigenic relationships between different but immunologically similar pathogens, potentially unlocking autoimmune disease therapies and organ transplantation. [ 3 ] Immunome investigation has proven useful in determining the symptoms and potential causes of pulmonary fibrosis on a molecular level. [ 17 ] The development of vaccines is another application of immunome study, as shown by Carlos F. Suárez and his colleagues. They were able to find components of a malaria vaccine that could be readily used in humans as a result of having characterized the cell surface receptor of an immune cell from an owl monkey. These monkeys have been shown to be highly susceptible to human malaria, so they serve as a good model for the disease. [ 18 ] It could also be possible to develop an influenza vaccine that would provide protection from several strains of the virus. [ 19 ] Furthermore, analysis of the immunomes of non-human primates and other species can reflect the evolutionary history of species, as shown by David F. Plaza and his colleagues. This immunome data can also be helpful when testing antibody therapies on non-human primates to ensure they are safe for humans, by interpreting results in the context of the slight differences in ortholog structure between the human and non-human primate immunomes. [ 20 ]

There are a number of databases corresponding to the different facets of the human immunome and the immunomes of other species. [ 21 ] An effort is being made to assemble immunological information into a single database called the Immunome Knowledge Base (IKB). The two scientists behind the effort, Csaba Ortutay and Mauno Vihinen, have integrated data from three separate databases into IKB. These three databases, Immunome, ImmTree, and ImmunomeBase, all hold separate but related information pertaining to the immunome. Immunome contains entries for official gene names according to the HUGO Gene Nomenclature Committee, alternative names, and the locations of genes on the chromosomes. ImmTree contains entries related to the molecular evolution of the immune system, including orthologous genes and phylogenetic trees. Finally, ImmunomeBase is a multi-species database related to immunity. Altogether, as of 2009, IKB has entries for more than 100,000 data items, including 893 entries for genes in the immunome. [ 1 ]

Another database serves as a resource for data on antibody and T cell epitopes studied in humans, non-human primates, and other species as they relate to disease, allergies, autoimmunity, and transplantation; it also has tools to assist in the prediction and analysis of epitopes. [ 22 ] A further database has data for every known marsupial and monotreme immune gene, serving as a resource for immunologists and researchers studying the evolution of mammalian immunity. [ 23 ] Yet another database, developed for the purpose of promoting the re-use of immunological data, is a partnership between researchers at the University of California, San Francisco, Stanford University, the University at Buffalo, the Technion - Israel Institute of Technology, and Northrop Grumman.
It encompasses results from over 400 studies related to immunology. [ 24 ] Finally, the Immunological Genome Project database is a public resource containing data relating to the study of the immune system of the mouse. [ 10 ]
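The Mal-ID performance figure quoted earlier in this article is an AUROC, the area under the receiver operating characteristic curve. The snippet below is purely illustrative, using toy labels and scores rather than anything from the Mal-ID study, and shows how the metric is computed with scikit-learn.

```python
# Illustrative computation of AUROC, the metric reported for classifiers
# such as Mal-ID. Labels and scores are toy values, not data from the study.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 1, 1, 1, 0]  # 1 = sample carries the diagnosis
y_score = [0.10, 0.30, 0.70, 0.80, 0.90, 0.65, 0.75, 0.40]  # model outputs

# AUROC is the probability that a randomly chosen positive sample is ranked
# above a randomly chosen negative one; here 15 of 16 pairs are ordered
# correctly, giving 0.9375.
print(roc_auc_score(y_true, y_score))
```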
https://en.wikipedia.org/wiki/Immunome
Immunometabolism is a branch of biology that studies the interplay between metabolism and immunology in all organisms. In particular, immunometabolism is the study of the molecular and biochemical underpinnings for (i) the metabolic regulation of immune function, and (ii) the regulation of metabolism by molecules and cells of the immune system. [ 1 ] Further categorization distinguishes (i) systemic immunometabolism and (ii) cellular immunometabolism. [ 2 ] Immunometabolism includes metabolic inflammation: a chronic, systemic, low-grade inflammation orchestrated by metabolic deregulation caused by obesity or aging.

Immunometabolism first appeared in the academic literature in 2011, where it was defined as "an emerging field of investigation at the interface between the historically distinct disciplines of immunology and metabolism." [ 3 ] A later article defines immunometabolism as describing "the changes that occur in intracellular metabolic pathways in immune cells during activation". [ 4 ] Broadly, immunometabolic research records the physiological functioning of the immune system in the context of different metabolic conditions in health and disease. These studies can cover molecular and cellular aspects of immune system function in vitro, in situ, and in vivo, under different metabolic conditions. For example, highly proliferative cells such as cancer cells and activating T cells undergo metabolic reprogramming, increasing glucose uptake to shift towards aerobic glycolysis during normoxia. While aerobic glycolysis is an inefficient pathway for ATP production in quiescent cells, this so-called "Warburg effect" supports the bioenergetic and biosynthetic needs of rapidly proliferating cells. [ 5 ]

Many indispensable signalling molecules connected to metabolic processes play an important role both in immune system homeostasis and in the immune response. Among these, the most significant are mammalian target of rapamycin (mTOR), liver kinase B1 (LKB1), 5' AMP-activated protein kinase (AMPK), phosphoinositide 3-kinase (PI3K) and protein kinase B (Akt). Together, these molecules control the most important metabolic pathways in cells, such as glycolysis, the Krebs cycle and oxidative phosphorylation. To fully understand how all of these molecules and pathways affect immune cells, one must first examine their delicate interplay. [ 6 ] [ 4 ]

mTOR is a serine/threonine protein kinase found in two complexes in cells: mTOR complex 1 and 2 (mTORC1 and mTORC2). mTORC1 is activated through engagement of the T cell receptor (TCR) and the costimulatory molecule cluster of differentiation 28 (CD28). However, it can also be activated by growth factors like IL-7 or IL-2 and by metabolites like glucose or amino acids (leucine, arginine or glutamine). [ 7 ] [ 6 ] In contrast, more gaps remain in our understanding of how the mTORC2 pathway functions, but its activation is also achieved through growth factors, as exemplified by IL-2. [ 6 ] When activated, mTORC1 negatively regulates autophagy (through inhibiting the ULK complex), shifts the cell towards aerobic glycolysis and glutaminolysis (through activation of c-Myc), and promotes lipid synthesis and mitochondrial remodelling. [ 7 ] [ 6 ] mTORC2 enhances glycolysis as well, but in contrast to mTORC1, it activates Akt, which in turn promotes glucose transporter 1 (GLUT1) membrane deposition. Through other kinases, it also further promotes cell proliferation and survival.
[ 6 ] PI3K mediates the phosphorylation of phosphatidylinositol-(4,5)-bisphosphate (PIP2) into phosphatidylinositol-(3,4,5)-trisphosphate (PIP3). PIP3 then serves as a scaffold for other proteins that contain a pleckstrin homology (PH) domain. PI3K can be activated, just like mTOR, through the TCR and CD28 and, unlike mTOR, through another costimulatory molecule: the Inducible T-cell COStimulator (ICOS). [ 6 ] The presence of PIP3 on a membrane recruits many proteins, including phosphoinositide-dependent protein kinase 1 (PDK1), which after its phosphorylation, together with mTORC2, activates Akt, a serine/threonine kinase. As a result, Akt promotes GLUT1 membrane deposition and also inhibits the transcription factor forkhead box O (FoxO), whose inactivation acts in synergy with the mTORC2-mediated changes mentioned above. [ 6 ] [ 8 ]

Both LKB1 and AMPK are serine/threonine kinases acting predominantly in opposition to the aforementioned molecules. Of the two, LKB1's activation is less well understood, as it depends mainly on cellular localization and on many posttranslational modifications. For instance, the above-mentioned Akt can promote LKB1 inhibition by favouring its nuclear retention. When activated, LKB1 can activate, among other targets, AMPK, whose activation leads to mTORC1 destabilization. [ 6 ] Furthermore, AMPK activates the ULK complex and phosphorylates p53 and acetyl-CoA carboxylase (ACC), which promote autophagy, cell cycle arrest and fatty acid oxidation, respectively. Since AMPK can also be activated by adenosine monophosphate (AMP) or by glucose insufficiency, it acts as a sensor of starvation and therefore activates many of the catabolic processes already mentioned, in direct contrast with mTOR, which activates a myriad of anabolic processes. [ 6 ] [ 7 ] [ 9 ]

Generally speaking, cells whose primary objective is long-term survival or the control of inflammation tend to rely for energy on the Krebs cycle and lipid oxidation, both coupled with functional oxidative phosphorylation. These cells include naive T cells, memory T cells, regulatory T cells (Tregs), unstimulated innate immune cells such as macrophages, and M2 macrophages. On the contrary, cells whose main function is proliferation, synthesis of different molecules or propagation of inflammation often prefer glycolysis as a source of energy and metabolites; these include, for instance, effector T cells and M1 macrophages. [ 4 ] [ 8 ] [ 10 ]

Naive T cells have to be kept in a permanent state of quiescence until they encounter their cognate antigen. The quiescent state is sustained by tonic TCR signalling and by IL-7. Tonic TCR signalling is necessary to keep the FoxO transcription factor active, which in turn allows for IL-7R transcription. This enables the T cell to survive and proliferate at a low rate. However, during this tonic TCR signalling, the proteins that control metabolism have to be strictly regulated, because their activation could lead to a spontaneous exit from quiescence and differentiation into various T cell subsets, as exemplified by uncontrolled activation of PI3K, which causes the development of Th1 or Th2 cells. [ 11 ] Both of the aforementioned signals should lead to mTOR and Akt activation, but in quiescent T cells the tuberous sclerosis complex (TSC) and phosphatase and tensin homolog (PTEN) act against their activation.
Therefore, a naive T cell depends predominantly on oxidative phosphorylation and has a much lower glucose uptake and ATP production than its activated counterparts (effector T cells). [ 11 ] [ 7 ] [ 6 ] Quiescence exit begins when a T cell encounters its cognate antigen, usually during an infection. The TCR signal, together with the costimulation signal, leads to downregulation of PTEN and TSC. [ 11 ] This allows the phosphorylation cascades of mTOR, Akt and many other kinases to become fully activated. The activity of these cascades results in glucose and glutamine uptake coupled with higher glycolysis and glutaminolysis, which not only supports rapid cell growth but also further promotes mTOR activation. Furthermore, mTOR stimulates lipid synthesis and mitochondrial remodelling, exemplified by increased expression of sterol regulatory element-binding protein (SREBP) and by mitochondria undergoing fission, which causes them to function predominantly as biosynthetic hubs rather than energy production hubs. After their activation and metabolic reprogramming, T cells compete with one another, and consequently it is very likely that during the effector phase T cells reach a point where they suffer from a lack of nutrients. In such cases AMPK is activated to balance mTOR signalling and to prevent apoptosis. [ 11 ] [ 6 ] [ 4 ]

The described scheme of quiescence exit holds true for inflammatory T cell subsets like Th1, Th2, Th17 and cytotoxic T cells. However, mTOR activity can be detrimental in the case of Tregs. This is shown by the fact that in Tregs, high activation of mTORC1 coupled with a higher level of glycolysis leads to failure of Treg lineage commitment. Therefore, in contrast to inflammatory cell subsets, Tregs rely on oxidative phosphorylation fuelled by lipid oxidation. [ 11 ] [ 4 ] It is important to note, though, that complete suppression of glycolysis leads to binding of enolase (a glycolytic enzyme) to a splice variant of Foxp3, which effectively compromises the ability of peripheral Tregs to act as immunosuppressive cells. [ 7 ] [ 4 ]

After the infection is cleared, most of the activated T cells succumb to apoptosis. However, a few of them survive and develop into the memory T cell subsets. For this development, the engagement of costimulatory molecules like CD28 appears to be crucial, as co-stimulation manifests in mitochondrial morphology, allowing for higher oxidative phosphorylation while retaining the potential to quickly revert to glycolysis. [ 12 ] [ 13 ] Moreover, T cell activation causes an overall increase in acetyl-CoA, which is a substrate for histone acetylation. As a result, many genes are acetylated and therefore remain accessible to transcription even after differentiation into memory subsets, allowing memory T cells to rapidly re-express some effector-related genes. [ 12 ] The aforementioned changes allow T cells to become memory cells, but what exactly drives memory cell differentiation is still under debate, even though IL-15 seems to be necessary for T cell memory induction. Recently, asymmetric division of mTORC1 during the first divisions after TCR activation has been shown to drive memory cell differentiation in those cells which receive a lower amount of mTORC1. [ 12 ] [ 13 ]

The immunometabolism of macrophages is mostly studied in the two opposing populations of macrophages: [ 14 ] M1 and M2. M1 macrophages are a pro-inflammatory population induced by LPS or IFNγ.
This activation leads, as in the case of T cells, to an increase in glucose uptake and glycolysis. What is strikingly different is the Krebs cycle: in M1 macrophages the cycle is broken at two places. The first break is at the conversion of isocitrate to α-ketoglutarate, owing to downregulation of isocitrate dehydrogenase. Accumulated citrate is subsequently used for lipid and itaconate synthesis, which are both indispensable for M1 macrophage function. The second break, at the succinate-to-fumarate transition, probably occurs due to itaconate production and causes a build-up of succinate. This triggers ROS production, which stabilizes HIF-1α. This transcription factor further promotes glycolysis and is essential for the activation of inflammatory macrophages. [ 10 ] [ 4 ] M2 macrophages are anti-inflammatory cells which require IL-4 for their induction. M2 macrophage metabolism is markedly distinct from that of M1 macrophages owing to their unbroken Krebs cycle, which after activation is fuelled by upregulated glycolysis, glutaminolysis and fatty acid oxidation. [ 10 ] [ 4 ] How the fully operational Krebs cycle translates into M2 macrophage function is still poorly understood, but the upregulated pathways allow for production of intermediates (mainly acetyl-CoA and S-adenosyl methionine) that are needed for histone modifications of genes targeted by IL-4 signalling. [ 10 ]

Immunometabolism is an area of growing drug discovery research investment [ 15 ] [ 16 ] in numerous areas of medicine, for example in lessening the impact of age-related metabolic dysfunction and obesity on the incidence of type 2 diabetes and cardiovascular disease, in cancer, [ 3 ] [ 17 ] [ 18 ] and in infectious diseases. [ 19 ] In recent years, evidence has suggested that immunometabolism is implicated in autoimmune disorders. [ 20 ] [ 21 ] Metabolic alterations in immune system regulation have provided unique insights into disease pathogenesis and development, as well as potential therapeutic targets. [ 22 ] [ 23 ] [ 24 ]

Sepsis pathophysiology now includes immunometabolic paralysis, a condition marked by severe abnormalities in cellular energy metabolism. This phenomenon affects both the acute and late stages of the disease, playing a critical role in the immune response during sepsis. [ 25 ] Sepsis, a potentially fatal illness, is brought on by the body's overreaction to an infection. Although there is a strong inflammatory response during the early phase of sepsis, [ 25 ] [ 26 ] immunometabolic paralysis may appear later on and is linked to a poor prognosis. Research conducted by Shih Chin Cheng and colleagues has explored the complex interplay between cellular metabolism and the immune response in sepsis, [ 25 ] with three important results. First, a transition from oxidative phosphorylation to aerobic glycolysis, the Warburg effect, occurs during the acute stage of sepsis [ 25 ] [ 27 ] and is one of the key mechanisms in the initial activation of the host defense against infection. [ 25 ] Second, patients experiencing acute sepsis exhibited extensive impairments in cellular energy metabolism, affecting leukocyte glycolysis and oxidative metabolism.
[ 25 ] [ 28 ] This immunometabolic paralysis is associated with a compromised capacity to react to secondary stimuli. [ 29 ] [ 25 ] Third, interferon-gamma (IFN-γ) is being explored as a possible treatment option: IFN-γ therapy partially restored glycolysis in tolerant monocytes in vitro, [ 25 ] [ 30 ] demonstrating its ability to mitigate the metabolic abnormalities linked to immunotolerance. [ 25 ] The work emphasizes how cellular metabolism in sepsis might be targeted therapeutically. Although few medicines possessing metabolic-regulatory properties have been investigated, the study underlines how important it is to understand and treat immunometabolic paralysis in order to improve outcomes for individuals suffering from sepsis. [ 25 ] In sum, the research by Cheng and colleagues provides significant understanding of the intricate relationship between the immune response and cellular metabolism in sepsis, revealing a crucial role for immunometabolic paralysis, a condition marked by impaired energy metabolism, in the development and treatment of the disease. Further investigation and testing of therapeutic approaches aimed at cellular metabolism may help to improve the management of sepsis. [ 25 ]
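The contrast drawn throughout this article between aerobic glycolysis and oxidative phosphorylation rests on simple bioenergetic arithmetic. The sketch below uses approximate textbook yields (about 2 net ATP per glucose from glycolysis alone versus roughly 30 with complete oxidation; exact figures vary by source) to make the "inefficiency" of the Warburg effect explicit.

```python
# Back-of-the-envelope arithmetic behind the "inefficiency" of aerobic
# glycolysis. Yields are approximate textbook values per glucose molecule;
# exact figures vary by source (older texts quote up to ~36-38 for OXPHOS).
ATP_GLYCOLYSIS_NET = 2        # net ATP from glycolysis alone
ATP_COMPLETE_OXIDATION = 30   # approximate yield with oxidative phosphorylation

fold = ATP_COMPLETE_OXIDATION / ATP_GLYCOLYSIS_NET
print(f"Complete oxidation yields roughly {fold:.0f}x more ATP per glucose.")
# Rapidly proliferating cells accept the lower yield because glycolytic
# intermediates feed biosynthesis (the Warburg effect described above).
```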
https://en.wikipedia.org/wiki/Immunometabolism
Immunomics is the study of immune system regulation and response to pathogens using genome-wide approaches. With the rise of genomic and proteomic technologies, scientists have been able to visualize biological networks and infer interrelationships between genes and/or proteins; recently, these technologies have been used to help better understand how the immune system functions and how it is regulated. Two thirds of the genome is active in one or more immune cell types, and less than 1% of genes are uniquely expressed in a given type of cell. Therefore, it is critical that the expression patterns of these immune cell types be deciphered in the context of a network, rather than in isolation, so that their roles can be correctly characterized and related to one another. [ 1 ] Defects of the immune system such as autoimmune diseases, immunodeficiency, and malignancies can benefit from genomic insights into pathological processes. For example, analyzing the systematic variation of gene expression can relate these patterns with specific diseases and with gene networks important for immune functions. [ 2 ]

Traditionally, scientists studying the immune system have had to search for antigens on an individual basis and identify the protein sequences of these antigens ("epitopes") that would stimulate an immune response. This procedure required that antigens be isolated from whole cells, digested into smaller fragments, and tested against T and B cells to observe T- and B-cell responses. These classical approaches could only visualize the system as a static condition and required a large amount of time and labor. Immunomics has made this approach easier through its ability to look at the immune system as a whole and characterize it as a dynamic model. It has revealed that some of the immune system's most distinguishing features are the continuous motility, turnover, and plasticity of its constituent cells. In addition, current genomic technologies, like microarrays, can capture immune system gene expression over time and can trace interactions of microorganisms with cells of the innate immune system. Newer proteomic approaches, including T-cell and B-cell epitope mapping, can also accelerate the pace at which scientists discover antibody-antigen relationships.

A host's immune system responds to pathogen invasion with a set of pathogen-specific responses in which many "players" participate; these include antibodies, T-helper cells, cytotoxic T-cells, and many others. Antigen-presenting cells (APCs) are capable of internalizing pathogens and displaying a fragment of the antigen (the epitope) with major histocompatibility complexes (MHCs) on the cell surface. The T-cell response is initiated when T-cells recognize these displayed epitopes. Only specific peptide sequences from some pathogen-specific antigens are needed to stimulate T- and B-cell responses; that is, the whole pathogenic peptide sequence is not necessary to initiate an immune response. The 'immunome' of a pathogen is described by its set of epitopes, and can be defined by comparing genome sequences and applying immunoinformatic tools. [ 3 ]

Ash Alizadeh et al. were among the first to recognize the potential of cDNA microarrays to define gene expression of immune cells. Their analysis probed the gene expression of human B and T lymphocytes during cellular activation and/or stimulation with cytokines, a type of signaling regulatory molecule.
Many of the activated genes in stimulated T lymphocytes were known to be involved in the G0/G1 cell cycle transition or to encode chemokines, signaling molecules involved in the inflammatory response. This team was also able to visualize temporal patterns of gene expression during T cell mitogenesis. In the concluding paragraphs of their landmark paper, these scientists state that "virtually every corner of immunological research will benefit from cDNA microarray analysis of gene expression," and thus heralded the rise of immunomics. Limited by the available microarrays and a then-incomplete human genome, this same set of researchers was motivated to create a specialized microarray focused on genes preferentially expressed in a given cell type, or known to be functionally important in a given biological process. As a result, Alizadeh and colleagues designed the "Lymphochip" cDNA microarray, which contained 13,000 genes and was enriched for genes of importance to the immune system. [ 4 ]

Iyer et al.'s 1999 article was another to reveal the importance of applying genomic technologies to immunological research. Although not intending to address any aspect of immunity at the start of their experiment, these researchers observed that the expression profiles of serum-stimulated fibroblasts were far richer than anticipated and suggested an important physiological role for fibroblasts in healing wounds. The serum-induced genes were associated with processes relevant to wound healing, including genes directly involved in remodeling the clot and extracellular matrix, as well as genes encoding signal proteins for inflammation, the development of new blood vessels, and regrowth of epithelial tissue. Additionally, one of the most significant results of this expression analysis was the discovery of more than 200 previously unknown genes whose expression was temporally regulated during the response of fibroblasts to serum. These results revealed the importance of viewing the immune response as a collaborative physiological program and called for further study of the immune system as a network, and not just as individual pieces. [ 5 ]

In 2006, Moutaftsi et al. demonstrated that epitope-mapping tools could accurately identify the epitopes responsible for 95% of the murine T-cell response to vaccinia virus. Through their work, these scientists introduced the interdisciplinary realm of informatics and immunology while employing genomic, proteomic, and immunological data. The striking success and ease of this method encouraged researchers both to define the immunomes of other pathogens and to measure the breadth and overlap of pathogen immunomes that give rise to immunity. Additionally, it suggested other applications in which epitope-mapping tools could be used, including autoimmunity, transplantation, and immunogenicity. [ 6 ]

Several types of microarrays have been created specifically to observe immune system responses and interactions. Antibody microarrays use antibodies as probes and antigens as targets. They can be used to directly measure the antigen concentrations for which the antibody probes are specific. Peptide microarrays use antigen peptides as probes and serum antibodies as targets. These can be used for functional immunomic applications such as the understanding of autoimmune diseases and allergies, definition of B-cell epitopes, vaccine studies, detection assays, and analysis of antibody specificity.
MHC microarrays are the most recent development in immunomic arrays; they use peptide-MHC complexes and their co-stimulatory molecules as probes and T-cell populations as targets. Bound T-cells are activated and secrete cytokines, which are captured by specific detection antibodies. This microarray can map MHC-restricted T cell epitopes. [ 7 ]

The Lymphochip is a specialized human cDNA microarray enriched for genes related to immune function, created by Ash Alizadeh at Stanford University. 17,853 cDNA clones were taken from three sources. The first set of clones was selected where identified expressed sequence tags (ESTs) were unique or specifically enriched in lymphoid cDNA libraries; these represent ~80% of the Lymphochip clones. The second set of clones was identified during first-generation microarray analysis of immune responses. Finally, 3,183 genes known or suspected to have roles in immune function, oncogenesis, apoptosis, or cell proliferation, or that are open reading frames from pathogenic human viruses, were included on the Lymphochip. New genes are frequently being added.

Epitope mapping identifies the sites of antibodies to which their target antigens bind. In the past, scientists would have to isolate antigens, digest them into smaller fragments, and determine which of these fragments stimulated T- and B-cell responses in order to define an antibody's epitope. Immunomics harnesses the power of bioinformatics and offers mapping algorithms that accelerate the discovery of epitope sequences. These algorithms are relevant to vaccine design and to characterizing and modifying immune responses in the context of autoimmunity, endocrinology, allergy, transplantation, diagnostics, and the engineering of therapeutic proteins. T-cell and B-cell epitope mapping algorithms can computationally predict epitopes based on the genomic sequence of pathogens, without prior knowledge of a protein's structure or function, through a series of steps.

The guiding principle behind flow cytometry is that cells or subcellular particles tagged with fluorescent probes are passed through a laser beam and sorted by the strength of the fluorescence emitted by cells contained in the droplets. MHC tetramer staining by flow cytometry identifies and isolates specific T cells based on the binding specificity of their cell-surface receptors to fluorescently tagged MHC-peptide complexes. [ 9 ] ELISPOT is a modified version of the ELISA immunoassay and is a common method of monitoring immune responses.

Immunomics has made a considerable impact on the understanding of the immune system by uncovering differences in the gene expression profiles of cell types, characterizing the immune response, illuminating immune cell lineages and relationships, and establishing gene regulatory networks. Although the following list of contributions is not complete, it demonstrates the broad application of immunomic research and its powerful consequences for immunology.

Microarrays have discovered gene expression patterns that correlate with antigen-induced activation or anergy in B lymphocytes. Lymphocyte anergy pathways involve the induction of some, but not all, of the signaling pathways used during lymphocyte activation. For example, the NFAT and MAPK/ERK kinase pathways are expressed in anergic (or "tolerant") cell lines, whereas the NF-κB and c-Jun N-terminal kinase pathways are not.
Of the 300 genes whose expression was altered after antigen stimulation of naïve B cells, only 8 were regulated in tolerant B cells. Understanding these “tolerance” pathways has important implications for designing immunosuppressive drugs. The gene expression signatures of tolerant B cells could be used during drug screens to probe for compounds that mimic the functional effects of natural tolerance. [ 10 ] Gene expression profiling during human lymphocyte differentiation has followed mature, naïve B cells from their resting state through germinal center reactions and into terminal differentiation. These studies have shown that germinal center B cells represent a distinct stage in differentiation, because their gene expression profile differs from that of activated peripheral B cells. Although no in vitro culture system has been able to induce resting peripheral B cells to adopt a full germinal center phenotype, these gene expression profiles can be used to measure how well developing in vitro cultures mimic the germinal center state. [ 11 ] About 9 of every 10 human lymphoid cancers derive from B cells. Distinct immunome-wide expression patterns in a large number of diffuse large cell lymphomas (DLCL), the most common form of non-Hodgkin's lymphoma , have identified at least two different subtypes in what was previously thought to be a single disease. One subset of these DLCLs shows a gene expression pattern similar to that of normal germinal center B cells, implying that the tumor cells originated from a germinal center B cell. Other surveys of B cell malignancies show that follicular lymphomas share expression features with germinal center B cells, whereas chronic lymphocytic leukemia cells resemble resting peripheral blood lymphocytes. Furthermore, heterogeneity within each of these malignancies suggests that different subtypes exist within each type of lymphoma, just as has been shown for DLCL. Such knowledge can be used to direct patients to the most appropriate therapy. [ 12 ] Microarrays have analyzed the global responses of macrophages to different microorganisms and have confirmed that these responses sustain and control inflammatory processes as well as kill microorganisms. These independent studies have better described how macrophages mount attacks against different microorganisms. A “core transcriptional response” was observed that induced 132 genes and repressed 59 genes. Induced genes include pro-inflammatory chemokines and cytokines, and their respective receptors. A “pathogen-specific response” was also observed. [ 13 ] Dendritic cells (DCs) help macrophages sustain inflammatory processes and participate in the innate immune response, but can also prime adaptive immunity . Gene expression analyses have shown that DCs can “multi-task” by temporally segregating their different functions. Soon after recognizing an infectious agent, immature DCs transition to a state of early activation via a core response characterized by rapid downregulation of genes involved in pathogen recognition and phagocytosis , upregulation of cytokine and chemokine genes that recruit other immune cells to the site of inflammation, and expression of genes that control migratory capacity. Early activated DCs are thereby enabled to migrate from non-lymphoid tissues to lymph nodes, where they can prime T-cell responses. These early DC responses are related to innate immunity and constitute the “core transcriptional response” of DCs.
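Set operations make the core-versus-pathogen-specific distinction concrete. In the sketch below the induced gene sets are invented placeholders, not the data of the cited studies: the shared intersection is the “core transcriptional response,” and the genes unique to one stimulus are the “pathogen-specific” response.

```python
# Sketch: separate a shared "core" response from pathogen-specific responses.
# Gene sets are illustrative placeholders.
induced = {
    "E_coli":     {"TNF", "IL1B", "CXCL8", "NFKB1", "MARCO"},
    "M_tb":       {"TNF", "IL1B", "CXCL8", "NFKB1", "IFIT1"},
    "C_albicans": {"TNF", "IL1B", "CXCL8", "NFKB1", "CLEC7A"},
}

# Core response: genes induced by every pathogen tested
core = set.intersection(*induced.values())

# Pathogen-specific response: genes induced by only a single stimulus
specific = {bug: genes - set.union(*(g for b, g in induced.items() if b != bug))
            for bug, genes in induced.items()}

print("core:", sorted(core))
for bug, genes in specific.items():
    print(f"{bug}-specific:", sorted(genes))
```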
Pathogen-specific responses have a stronger influence on the DC's ability to regulate adaptive immunity. Comparing the overall transcriptional programs of immune cells can generate plots that position each cell type so as to best reflect its expression profile relative to all other cells, and can reveal interesting relationships between cell types. For example, the transcriptional profiles of thymic medullary epithelial immune cells mapped closer to lymphocytes than to other epithelia. This may suggest that a functional interaction exists between these two cell types, requiring the sharing of particular transcripts and proteins. When comparing gene expression profiles from cells of the blood system, T-cell and B-cell subsets group tightly with their respective cell types. By looking at the transcriptional profiles of different T cells, scientists have shown that natural killer T-cells are a close variant of conventional CD4+ T cells , rather than an intermediate cell type between T cells and natural killer cells . Additionally, DCs, natural killer cells, and B cells group tightly on the basis of their global expression profiles. It might have been expected that B lymphocytes and T lymphocytes would cluster separately from each other, or that natural killer cells would be more closely related to T cells because they share common precursors, cytolytic activity, and similar activation markers. Thus, immunomics has established relationships between cell lineages that depart from classical views. Additionally, it may better explain the observed plasticity in lymphoid and myeloid cell differentiation, given the considerable overlap between the global expression profiles of these different lineages. [ 14 ] Networks represent the broadest level of genetic interactions and aim to link all genes and transcripts in the immunological genome. Cellular phenotypes and differentiation states are ultimately established by the activity of these networks of co-regulated genes. One of the most complete networks in immunology has deciphered regulatory connections among normal and transformed human B cells. This analysis suggested a hierarchical network in which a small number of highly connected genes (called “hubs”) regulate most interactions. The proto- oncogene MYC emerged as a major hub and highly influential regulator for B cells. Notably, MYC was found to directly control BYSL , a highly conserved but poorly characterized gene that is the largest hub in the whole B cell network. This suggests that BYSL encodes an important cellular molecule and a critical effector of MYC function, and motivates additional studies to elucidate its function. Therefore, using gene expression data to create networks can reveal genes that are highly influential in immune cell differentiation but that pre-genomic technologies had not identified. [ 14 ] As Stefania Bambini and Rino Rappuoli put it, “New powerful genomics technologies have increased the number of diseases that can be addressed by vaccination, and decreased the time for discovery research and vaccine development.” The availability of complete genome sequences of pathogens, in combination with high-throughput genomics technologies, has helped to accelerate vaccine development. Reverse vaccinology uses the genomic sequences of viral, bacterial, or parasitic pathogens to identify genes potentially encoding proteins that promote pathogenesis . [ 15 ] The first application of reverse vaccinology identified vaccine candidates against Neisseria meningitidis serogroup B.
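The hub concept in such networks can be illustrated with a simple degree count over a network's edges. The edge list below is hypothetical, not the actual reconstructed B cell network; it merely shows how highly connected regulators stand out from the rest.

```python
# Toy hub detection: rank genes by how many network edges touch them.
# Edges are invented placeholders, not data from the cited study.
from collections import Counter

edges = [
    ("MYC", "BYSL"), ("MYC", "NPM1"), ("MYC", "RPL3"), ("MYC", "CDK4"),
    ("BYSL", "RPL3"), ("BYSL", "NPM1"), ("BYSL", "EIF4E"), ("BYSL", "CDK4"),
    ("BCL6", "PRDM1"),
]

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The top of this ranking corresponds to the "hubs" described in the text
for gene, k in degree.most_common(5):
    print(gene, k)
```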
Computational tools identified 600 putative surface-exposed or secreted proteins from the complete genome sequence of a pathogenic MenB strain, on the basis of sequence features. These putative proteins were expressed in E. coli, purified, and used to immunize mice. Tests using the mouse immune sera estimated the ability of the elicited antibodies to protect against these proteins. The proteins able to elicit a robust immune response were checked for sequence conservation across a panel of meningococcal strains, allowing further selection of antigens able to elicit an immune response against most strains in the panel. On the basis of these antigen sequences, scientists were able to develop a universal “cocktail” vaccine against Neisseria meningitidis that uses five antigens to promote immunity. [ 16 ] Similar approaches have been used for a variety of other human pathogens, such as Streptococcus pneumoniae , Chlamydia pneumoniae , Bacillus anthracis , Porphyromonas gingivalis , Mycobacterium tuberculosis and Helicobacter pylori , amongst others. Additionally, studies have begun on the development of vaccines against viruses. The inventory of receptors and signal transduction pathways that immune cells use to monitor and defend the body gives rise to signature patterns of altered gene expression in peripheral blood cells that reflect the character of the infection or injury. Therefore, recognizing the characteristic expression profiles of peripheral blood cells may be a powerful diagnostic tool, recruiting these cells as “spies” to detect occult diseases or agents that cannot readily be cultured from the host. For example, cytomegalovirus (CMV) infection of fibroblasts and HTLV-1 infection of T lymphocytes revealed distinct gene expression profiles: CMV infection provoked a unique interferon response, whereas HTLV-1 infection induced NF-kB target genes. White blood cells have likewise been tested against bacterial exposures, and immunome expression varied with the bacterial strain used. Monitoring changes in peripheral blood gene expression can also help determine the course of an infection and help treat patients with a therapy tailored to their disease stage. This approach has already been used against sepsis , a disease that progresses through a predictable series of events. Changes in gene expression signatures may precede the clinical exacerbation of symptoms, as in multiple sclerosis , and allow physicians to nip these “flare-ups” in the bud. [ 1 ] The immune system is a network of genetic and signaling pathways connected by a network of interacting cells. The Immunological Genome Project seeks to generate a complete compendium of protein-coding gene expression for all cell populations in the mouse immune system. It analyzes both steady-state conditions within different cell populations and responses to genetic and/or environmental perturbations created by natural genetic polymorphism, gene knock-out, gene knock-down by RNAi , or drug treatment. Computational tools to reverse-engineer or predict immune cell regulatory networks use these expression profiles. By 2008, the ImmGen project involved seven immunology and three computational biology laboratories across the United States, and over 200 cell populations involved in the immune system had been identified and described. This consortium has created a data browser to explore the expression patterns of particular genes, networks of co-regulated genes, and genes that can reliably distinguish cell types.
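A minimal sketch of the signature-based diagnostics described above might classify a patient's blood expression profile against reference infection signatures with a nearest-centroid rule. Every gene value and signature below is invented for illustration.

```python
# Nearest-centroid classification of an expression profile against reference
# infection signatures. All numbers are invented placeholders.
import math

signatures = {                   # mean log-expression of three marker genes
    "CMV":    [4.0, 0.5, 1.0],   # strong interferon response
    "HTLV-1": [0.8, 3.5, 1.2],   # NF-kB target genes induced
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(profile):
    return min(signatures, key=lambda s: distance(profile, signatures[s]))

patient = [3.6, 0.9, 1.1]        # hypothetical patient profile
print(classify(patient))         # -> "CMV"
```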
Raw data is also accessible from the NCBI's Gene Expression Omnibus. [ 17 ] [ 18 ]
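As a closing illustration of the epitope-mapping algorithms discussed earlier, the sketch below runs a sliding window over a protein sequence and scores each window with a hydrophilicity propensity scale, in the spirit of classic B-cell epitope predictors such as Hopp-Woods. The scale values are rounded and illustrative, and the sequence and cutoff are invented for the example.

```python
# Sliding-window B-cell epitope prediction sketch. Hydrophilicity values are
# rounded/illustrative; high-scoring windows are candidate surface epitopes.
HYDROPHILICITY = {
    "R": 3.0, "K": 3.0, "D": 3.0, "E": 3.0, "S": 0.3, "N": 0.2, "Q": 0.2,
    "G": 0.0, "P": 0.0, "T": -0.4, "A": -0.5, "H": -0.5, "C": -1.0,
    "M": -1.3, "V": -1.5, "I": -1.8, "L": -1.8, "Y": -2.3, "F": -2.5, "W": -3.4,
}

def predict_epitopes(sequence: str, window: int = 7, cutoff: float = 0.5):
    """Return (start, peptide, score) for windows whose mean hydrophilicity
    exceeds the cutoff."""
    hits = []
    for i in range(len(sequence) - window + 1):
        win = sequence[i:i + window]
        score = sum(HYDROPHILICITY.get(aa, 0.0) for aa in win) / window
        if score > cutoff:
            hits.append((i, win, round(score, 2)))
    return hits

# Hypothetical antigen fragment
print(predict_epitopes("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```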
https://en.wikipedia.org/wiki/Immunomics
Immunomodulation is the modulation (regulatory adjustment) of the immune system . It has natural and human-induced forms: the term can refer both to the body's own regulation of immune responses and to deliberate therapeutic interventions that adjust them.
https://en.wikipedia.org/wiki/Immunomodulation
Immunopathology is a branch of medicine that deals with immune responses associated with disease . It includes the study of the pathology of an organism , organ system , or disease with respect to the immune system , immunity, and immune responses. In biology , it refers to damage caused to an organism by its own immune response as a result of an infection. It can be due to a mismatch between pathogen and host species, and often occurs when an animal pathogen infects a human (e.g. avian flu leads to a cytokine storm which contributes to the increased mortality rate). [ 1 ] In all vertebrates there are two kinds of immunity: innate and adaptive. Innate immunity fights off conserved, unchanging antigens and is therefore considered nonspecific. It is a more immediate response than the adaptive immune system, typically acting within minutes to hours. [ 2 ] It is composed of physical barriers such as the skin, but also includes nonspecific immune cells such as dendritic cells, macrophages, and basophils. The second form of immunity is adaptive immunity, which requires recognition of a foreign antigen before a response is produced. Once the antigen is recognized, a specific response is mounted in order to destroy that particular antigen. Because of this tailored response, adaptive immunity is considered specific immunity. A key feature of adaptive immunity that separates it from innate immunity is the use of memory to combat the antigen in the future. When an antigen is first introduced, the organism does not have any receptors for it and must generate them during that first exposure. The immune system then builds a memory of the antigen, which enables it to recognize the antigen more quickly in the future and to combat it more quickly and efficiently. The more the system is exposed to the antigen, the faster its response becomes. [ 2 ] Nested within adaptive immunity are the primary and secondary immune responses. The primary immune response refers to the first exposure and subsequent response of the immune system to a pathogen. During this initial response, the immune system identifies and targets the pathogen through various mechanisms, including the activation of immune cells such as T cells and B cells, the latter producing antibodies that specifically target the pathogen. [ 2 ] The secondary immune response occurs upon subsequent encounters with the same pathogen. During the primary immune response, memory cells are generated that remember the specific pathogen and how to target it. When the same pathogen enters the body again, the memory cells are quickly activated, leading to a faster and more effective response than the primary immune response and, in turn, to more effective elimination of the pathogen. [ 2 ] Vaccines activate a primary immune response by exposing the body to weakened or less dangerous antigens, priming memory cells so that the immune system is better equipped to handle the equivalent full-scale antigen. [ 3 ] When a foreign antigen enters the body, there is either an antigen-specific or a nonspecific response to it: the immune system fighting off the foreign antigen, whether it is deadly or not. Immunopathology can thus be understood as the study of how foreign antigens cause the immune system to respond, and of the problems that can arise from an organism's own immune response against itself.
Certain problems or faults in the immune system can lead to more serious illness or disease. The first is hypersensitivity, in which the immune response is stronger than normal. There are four types (types one through four), varying in mechanism and in the degree of immune response. The resulting problems range from minor allergic reactions to more serious illnesses such as tuberculosis or arthritis. The second kind of complication is autoimmunity, in which the immune system attacks the body itself rather than a foreign antigen. Inflammation is a prime example of autoimmunity, as the immune cells involved are self-reactive. A few examples of autoimmune diseases are type 1 diabetes, Addison's disease and celiac disease. The third and final type of complication is immunodeficiency, in which the immune system lacks the ability to fight off a certain disease; its capacity to combat the disease is either hindered or completely absent. The two types are primary immunodeficiency, in which the immune system is missing a key component or does not function properly, and secondary immunodeficiency, in which immune function is impaired by an outside source, such as radiation or heat. Diseases that can cause immunodeficiency include HIV, AIDS and leukemia. [ 2 ] The immune system plays an important role in protecting the body against cancer. The immune response to cancer falls into the two main categories discussed above: innate immunity and adaptive immunity. Innate immunity is the first line of defense against cancer. It consists of nonspecific immune cells that can recognize and destroy abnormal cells, including cancer cells. Natural killer (NK) cells, dendritic cells, and macrophages are some examples of innate immune cells that can detect and eliminate cancer cells. [ 4 ] Adaptive immunity, on the other hand, is more specific and targeted. It involves the activation of T cells and B cells, which can recognize and attack cancer cells that carry specific antigens on their surface. T cells can directly kill cancer cells or help activate other immune cells to attack them. B cells can produce antibodies that recognize and neutralize cancer cells. [ 5 ] However, cancer cells can evade immune surveillance and escape destruction by the immune system through various mechanisms, including downregulating antigen presentation, producing immunosuppressive molecules, and inhibiting T cell function. This can lead to the development and progression of cancer. [ 5 ] Immunotherapy is a type of cancer treatment that aims to harness and enhance the immune system's ability to recognize and attack cancer cells. Examples of immunotherapies include checkpoint inhibitors, which block molecules that inhibit T cell activation, and CAR-T cell therapy, which involves modifying T cells to recognize and attack cancer cells more efficiently. [ 5 ]
https://en.wikipedia.org/wiki/Immunopathology
Immunoperoxidase is a type of immunostain used in molecular biology , medical research, and clinical diagnostics. In particular, immunoperoxidase reactions refer to a sub-class of immunohistochemical or immunocytochemical procedures in which the antibodies are visualized via a peroxidase-catalyzed reaction. Immunohistochemistry and immunocytochemistry are methods used to determine in which cells, or parts of cells, a particular protein or other macromolecule is located. These stains use antibodies to bind to specific antigens , usually of protein or glycoprotein origin. Since antibodies are normally invisible, special strategies must be employed to detect them once bound. In an immunoperoxidase procedure, an enzyme known as a peroxidase is used to catalyze a chemical reaction that produces a coloured product. Simply put, a very thin slice of tissue is fixed onto glass and incubated with an antibody or a series of antibodies, the last of which is chemically linked to peroxidase. After the stain is developed by adding the chemical substrate , its distribution can be examined by microscopy . Originally, all antibodies produced for immunostaining were polyclonal , i.e. raised by normal antibody reactions in animals such as horses or rabbits. Now many are monoclonal , i.e. produced in tissue culture. Monoclonal antibodies, which consist of only one type of antibody, tend to provide greater antigen specificity and to be more consistent between batches. The first step in immunoperoxidase staining is the binding of the specific (primary) antibody to the cell or tissue sample. The primary antibody can then be detected either directly or indirectly. Optimal staining depends on a number of factors, including the antibody dilution, the staining chemicals, the preparation and/or fixation of the cells or tissue, and the length of incubation with antibody and staining reagents. These are often determined by trial and error rather than by any systematic approach. Other catalytic enzymes, such as alkaline phosphatase , can be used instead of peroxidases for both direct and indirect staining methods. Alternatively, the primary antibody can be detected using a fluorescent label ( immunofluorescence ), or be attached to colloidal gold particles for electron microscopy . Immunoperoxidase staining is used in clinical diagnostics and in laboratory research . In clinical diagnostics, immunostaining can be applied to tissue biopsies for more detailed histopathological study. In the case of cancer, it can aid in sub-classifying tumours. Immunostaining can also help diagnose skin conditions and glomerulonephritis, and sub-classify amyloid deposits. Related techniques are also useful for sub-typing lymphocytes, which all look quite similar on light microscopy. In laboratory research, antibodies against specific markers of cellular differentiation can be used to label individual cell types. This can enable a better understanding of mechanistic changes to specific cell lineages resulting from a particular experimental intervention.
https://en.wikipedia.org/wiki/Immunoperoxidase
Immunophenotyping is a technique used to study the proteins expressed by cells. It is commonly used in basic science research and for laboratory diagnostic purposes. It can be performed on tissue sections (fresh or fixed tissue), cell suspensions , and other preparations. An example is the detection of tumor markers , such as in the diagnosis of leukemia . It involves the labelling of white blood cells with antibodies directed against surface proteins on their membrane. By choosing appropriate antibodies, the differentiation of leukemic cells can be accurately determined. The labelled cells are processed in a flow cytometer , a laser-based instrument capable of analyzing thousands of cells per second. The whole procedure can be performed on cells from the blood , bone marrow or spinal fluid in a matter of a few hours. Immunophenotyping is a very common flow cytometry test in which fluorophore-conjugated antibodies are used as probes for staining target cells with high avidity and affinity. This technique allows rapid and easy phenotyping of each cell in a heterogeneous sample according to the presence or absence of a particular protein combination. [ 1 ]
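The logic of immunophenotyping can be sketched as a gating rule over marker intensities: each measured event is assigned a cell type from the presence or absence of surface markers. The threshold and panel below are illustrative assumptions, not values from a real instrument, though the marker combinations shown (e.g. CD3+CD4+ for helper T cells) are standard.

```python
# Toy immunophenotyping gate: assign a cell type from marker positivity.
THRESHOLD = 1000          # fluorescence intensity above which a marker is "+"

def phenotype(cell):
    """cell: dict of marker -> fluorescence intensity for one event."""
    pos = {m for m, v in cell.items() if v > THRESHOLD}
    if {"CD3", "CD4"} <= pos:
        return "helper T cell"
    if {"CD3", "CD8"} <= pos:
        return "cytotoxic T cell"
    if "CD19" in pos:
        return "B cell"
    if "CD56" in pos and "CD3" not in pos:
        return "NK cell"
    return "other"

event = {"CD3": 4200, "CD4": 3100, "CD8": 150, "CD19": 90, "CD56": 60}
print(phenotype(event))   # -> "helper T cell"
```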
https://en.wikipedia.org/wiki/Immunophenotyping
Immunophysics is a novel interdisciplinary research field using immunological, biological, physical and chemical approaches to elucidate and modify immune-mediated mechanisms and to expand our knowledge of the pathomechanisms of chronic immune-mediated diseases such as arthritis , inflammatory bowel disease , asthma and chronic infections . Immune reactions are tightly regulated and usually self-limited. [ 1 ] [ 2 ] Dysregulation can result in chronic inflammatory diseases (immunochronicity). In addition to biochemical molecular mechanisms, physical factors also influence the immune system . The research field of immunophysics aims to investigate the influence of these physicochemical parameters on the function of the immune system in health and disease. Immunophysical techniques include nuclear magnetic resonance spectroscopy , magnetic resonance imaging (MRI), dual-energy computed tomography, [ 13 ] fluorescence-lifetime imaging microscopy , multispectral optoacoustic tomography (MSOT), high-throughput microfluidic cytometry, [ 14 ] interferometric scattering microscopy (iSCAT) and cryogenic optical localization in 3D (COLD). Immunophysical research is considered to open new perspectives for investigating the pathomechanisms of immune-mediated inflammatory diseases, to help develop novel detection methods and diagnostic tools for these diseases, and to advance their treatment.
https://en.wikipedia.org/wiki/Immunophysics
Immunoprecipitation ( IP ) is the technique of precipitating a protein antigen out of solution using an antibody that specifically binds to that particular protein. This process can be used to isolate and concentrate a particular protein from a sample containing many thousands of different proteins. Immunoprecipitation requires that the antibody be coupled to a solid substrate at some point in the procedure. In its simplest form, IP involves using an antibody specific for a known protein to isolate that protein from a solution containing many different proteins. Such solutions will often be in the form of a crude lysate of a plant or animal tissue; other sample types may be body fluids or other samples of biological origin. Immunoprecipitation of intact protein complexes (i.e. an antigen along with any proteins or ligands bound to it) is known as co-immunoprecipitation (Co-IP). Co-IP works by selecting an antibody that targets a known protein believed to be a member of a larger complex of proteins. By targeting this known member with an antibody, it may become possible to pull the entire protein complex out of solution and thereby identify unknown members of the complex. This works when the proteins involved in the complex bind to each other tightly, making it possible to pull multiple members of the complex out of solution by latching onto one member with an antibody. This concept of pulling protein complexes out of solution is sometimes referred to as a "pull-down". Co-IP is a powerful technique that is used regularly by molecular biologists to analyze protein–protein interactions . Chromatin immunoprecipitation (ChIP) is a method used to determine the locations of DNA binding sites on the genome for a particular protein of interest. This technique gives a picture of the protein–DNA interactions that occur inside the nucleus of living cells or tissues. The in vivo nature of this method is in contrast to other approaches traditionally employed to answer the same questions. The principle underpinning this assay is that DNA-binding proteins (including transcription factors and histones ) in living cells can be cross-linked to the DNA that they are binding. By using an antibody that is specific to a putative DNA-binding protein, one can immunoprecipitate the protein–DNA complex out of cellular lysates. The crosslinking is often accomplished by applying formaldehyde to the cells (or tissue), although it is sometimes advantageous to use a more defined and consistent crosslinker such as dimethyl 3,3′-dithiobispropionimidate·2HCl (DTBP). [ 1 ] Following crosslinking, the cells are lysed and the DNA is broken into pieces 0.2–1.0 kb in length by sonication . At this point the immunoprecipitation is performed, resulting in the purification of protein–DNA complexes. The purified protein–DNA complexes are then heated to reverse the formaldehyde cross-links, allowing the DNA to be separated from the proteins. The identity and quantity of the DNA fragments isolated can then be determined by polymerase chain reaction (PCR). The limitation of performing PCR on the isolated fragments is that one must have an idea of which genomic region is being targeted in order to design the correct PCR primers. Sometimes this limitation is circumvented simply by cloning the isolated genomic DNA into a plasmid vector and then using primers specific to the cloning region of that vector.
Alternatively, when one wants to find where the protein binds on a genome-wide scale, ChIP-sequencing is used; it has emerged as a standard technology that can localize protein binding sites in a high-throughput, cost-effective fashion, allowing also for the characterization of the cistrome . Previously, DNA microarrays were also used ( ChIP-on-chip or ChIP-chip ). RIP and CLIP both purify a specific RNA-binding protein in order to identify its bound RNAs, thereby studying ribonucleoproteins (RNPs). [ 2 ] [ 3 ] In RIP , the co-purified RNAs are extracted and their enrichment is compared to a control, which was originally done by microarray or RT-PCR . In CLIP , cells are UV-crosslinked prior to lysis, followed by additional purification steps beyond standard immunoprecipitation, including partial RNA fragmentation, high-salt washing, SDS-PAGE separation and membrane transfer, and identification of direct RNA binding sites by cDNA sequencing . One of the major technical hurdles with immunoprecipitation is the great difficulty in generating an antibody that specifically targets a single known protein. To get around this obstacle, many groups engineer tags onto either the C- or N-terminal end of the protein of interest. The advantage here is that the same tag can be used time and again on many different proteins, and the researcher can use the same antibody each time. The advantages of using tagged proteins are so great that this technique has become commonplace for all types of immunoprecipitation, including all of the types of IP detailed above. Examples of tags in use are the green fluorescent protein (GFP) tag, the glutathione-S-transferase (GST) tag and the FLAG tag. While the use of a tag to enable pull-downs is convenient, it raises some concerns regarding biological relevance, because the tag itself may either obscure native interactions or introduce new and unnatural ones. The two general methods for immunoprecipitation are the direct capture method and the indirect capture method. In the direct method, antibodies that are specific for a particular protein (or group of proteins) are immobilized on a solid-phase substrate such as superparamagnetic microbeads or microscopic agarose (non-magnetic) beads. The beads with bound antibodies are then added to the protein mixture, and the proteins targeted by the antibodies are captured onto the beads via the antibodies; in other words, they become immunoprecipitated. In the indirect method, antibodies that are specific for a particular protein, or a group of proteins, are added directly to the mixture of protein. The antibodies have not yet been attached to a solid-phase support; they are free to float around the protein mixture and bind their targets. As time passes, beads coated in Protein A/G are added to the mixture of antibody and protein. At this point, the antibodies, which are now bound to their targets, stick to the beads. From this point on, the direct and indirect protocols converge, because the samples now have the same ingredients. Both methods give the same end result, with the protein or protein complexes bound to the antibodies, which are themselves immobilized onto the beads. An indirect approach is sometimes preferred when the concentration of the protein target is low or when the specific affinity of the antibody for the protein is weak. The indirect method is also used when the binding kinetics of the antibody to the protein is slow for a variety of reasons.
In most situations, the direct method is the default, and preferred, choice. Historically, the solid-phase support for immunoprecipitation used by the majority of scientists has been highly porous agarose beads (also known as agarose resins or slurries). The advantages of this technology are a very high potential binding capacity, as virtually the entire sponge-like structure of the agarose particle (50 to 150 μm in size) is available for binding antibodies (which will in turn bind the target proteins), and the use of standard laboratory equipment for all aspects of the IP protocol, without the need for any specialized equipment. The advantage of an extremely high binding capacity must be carefully balanced against the quantity of antibody that the researcher is prepared to use to coat the agarose beads. Because antibodies can be a cost-limiting factor, it is best to calculate backward from the amount of protein that needs to be captured (depending upon the analysis to be performed downstream), to the amount of antibody required to bind that quantity of protein (with a small excess added in order to account for inefficiencies of the system), and back still further to the quantity of agarose needed to bind that particular quantity of antibody. In cases where antibody saturation is not required, this technology is unmatched in its ability to capture extremely large quantities of target protein. The caveat is that the "high capacity advantage" can become a "high capacity disadvantage" when the enormous binding capacity of the sepharose /agarose beads is not completely saturated with antibodies. It often happens that the amount of antibody available to the researcher for an immunoprecipitation experiment is less than sufficient to saturate the agarose beads to be used. In these cases the researcher can end up with agarose particles that are only partially coated with antibodies, and the portion of the binding capacity of the agarose beads not coated with antibody is then free to bind anything that will stick, resulting in an elevated background signal due to non-specific binding of lysate components to the beads, which can make data interpretation difficult. While some may argue that for these reasons it is prudent to match the quantity of agarose (in terms of binding capacity) to the quantity of antibody that one wishes to be bound, a simple way to reduce the issue of non-specific binding to agarose beads and increase specificity is to preclear the lysate, which is highly recommended for any immunoprecipitation. [ 4 ] [ 5 ] Lysates are complex mixtures of proteins, lipids, carbohydrates and nucleic acids, and one must assume that some amount of non-specific binding to the IP antibody, Protein A/G or the beaded support will occur and negatively affect the detection of the immunoprecipitated target(s). In most cases, preclearing the lysate at the start of each immunoprecipitation experiment (see step 2 in the "protocol" section below) [ 6 ] is a way to remove potentially reactive components from the cell lysate prior to the immunoprecipitation, to prevent the non-specific binding of these components to the IP beads or antibody. The basic preclearing procedure is described below, wherein the lysate is incubated with beads alone, which are then removed and discarded prior to the immunoprecipitation.
[ 6 ] This approach, though, does not account for non-specific binding to the IP antibody, which can be considerable. Therefore, an alternative method of preclearing is to incubate the protein mixture with exactly the same components that will be used in the immunoprecipitation, except that a non-target, irrelevant antibody of the same antibody subclass as the IP antibody is used instead of the IP antibody itself. [ 5 ] This approach attempts to use conditions and components as close to those of the actual immunoprecipitation as possible, to remove any non-specific cell constituents without capturing the target protein (unless, of course, the target protein non-specifically binds to some other IP component, which should be properly controlled for by analyzing the discarded beads used to preclear the lysate). The target protein can then be immunoprecipitated with a reduced risk of non-specific binding interfering with data interpretation. While the vast majority of immunoprecipitations are performed with agarose beads, the use of superparamagnetic beads for immunoprecipitation is a newer approach that is gaining popularity as an alternative to agarose beads for IP applications. Unlike agarose, magnetic beads are solid and, depending on the type of bead, can be spherical, and antibody binding is limited to the surface of each bead. While these beads do not have the advantage of a porous center to increase binding capacity, magnetic beads are significantly smaller than agarose beads (1 to 4 μm), and the greater number of magnetic beads per volume collectively gives them an effective surface-area-to-volume ratio for optimal antibody binding. Commercially available magnetic beads can be separated by size uniformity into monodisperse and polydisperse beads. Monodisperse beads, also called microbeads , exhibit exact uniformity, and therefore all beads exhibit identical physical characteristics, including binding capacity and the level of attraction to magnets. Polydisperse beads, while similar in size to monodisperse beads, show a wide range of size variability (1 to 4 μm) that can influence their binding capacity and magnetic capture. Although both types of beads are commercially available for immunoprecipitation applications, the higher-quality monodisperse superparamagnetic beads are better suited for automated protocols because of their consistent size, shape and performance. Monodisperse and polydisperse superparamagnetic beads are offered by many companies, including Invitrogen , Thermo Scientific , and Millipore . Proponents of magnetic beads claim that the beads exhibit a faster rate of protein binding [ 7 ] [ 8 ] [ 9 ] than agarose beads for immunoprecipitation applications, although standard agarose bead-based immunoprecipitations have been performed in 1 hour. [ 5 ] Claims have also been made that magnetic beads are better for immunoprecipitating extremely large protein complexes because of the complete lack of an upper size limit for such complexes, [ 7 ] [ 8 ] [ 10 ] although there is no unbiased evidence supporting this claim. The nature of magnetic bead technology does result in less sample handling [ 8 ] due to the reduced physical stress on samples of magnetic separation versus repeated centrifugation when using agarose, which may contribute greatly to increasing the yield of labile (fragile) protein complexes.
[ 8 ] [ 9 ] [ 10 ] Additional factors, though, such as binding capacity, reagent cost, the requirement for extra equipment and the capability to automate IP processes should be considered in the selection of an immunoprecipitation support. Proponents of both agarose and magnetic beads can argue whether the vast difference in the binding capacities of the two bead types favors one particular kind. In a bead-to-bead comparison, agarose beads have significantly greater surface area, and therefore greater binding capacity, than magnetic beads, owing to their large size and sponge-like structure. But the variable pore size of agarose imposes a potential upper size limit that may affect the binding of extremely large proteins or protein complexes to internal binding sites, so magnetic beads may be better suited than agarose beads for immunoprecipitating large proteins or protein complexes, although independent comparative evidence proving either case is lacking. Some argue that the significantly greater binding capacity of agarose beads may be a disadvantage because of a larger capacity for non-specific binding; others may argue for the use of magnetic beads because of the greater quantity of antibody required to saturate the total binding capacity of agarose beads, which would obviously be an economic disadvantage of using agarose. While these arguments are correct outside the context of their practical use, these lines of reasoning ignore two key aspects of the principle of immunoprecipitation that demonstrate that the decision to use agarose or magnetic beads is not simply determined by binding capacity. First, non-specific binding is not limited to the antibody-binding sites on the immobilized support; any surface of the antibody or any component of the immunoprecipitation reaction can bind non-specific lysate constituents, so non-specific binding will still occur even when completely saturated beads are used. This is why it is important to preclear the sample before the immunoprecipitation is performed. Second, the ability to capture the target protein is directly dependent upon the amount of immobilized antibody used; therefore, in a side-by-side comparison of agarose and magnetic bead immunoprecipitation, the most protein that either support can capture is limited by the amount of antibody added. So the decision to saturate any type of support depends on the amount of protein required, as described in the discussion of agarose above. The price of either type of support is a key factor in choosing between agarose and magnetic beads for immunoprecipitation applications. A typical first-glance calculation of the cost of magnetic beads compared to sepharose beads may make the sepharose beads appear less expensive, but magnetic beads may be competitively priced for analytical-scale immunoprecipitations, depending on the IP method used and the volume of beads required per IP reaction. Using the traditional batch method of immunoprecipitation, as listed below, where all components are added to a tube during the IP reaction, the physical handling characteristics of agarose beads necessitate a minimum quantity of beads for each IP experiment (typically in the range of 25 to 50 μl of beads per IP). This is because sepharose beads must be concentrated at the bottom of the tube by centrifugation and the supernatant removed after each incubation, wash, etc.
This imposes absolute physical limitations on the process, as pellets of agarose beads of less than 25 to 50 μl are difficult, if not impossible, to visually identify at the bottom of the tube. With magnetic beads, there is no minimum quantity of beads required, owing to magnetic handling, and therefore, depending on the target antigen and IP antibody, it is possible to use considerably fewer magnetic beads. Conversely, spin columns may be employed instead of normal microfuge tubes to significantly reduce the amount of agarose beads required per reaction. Spin columns contain a filter that allows all IP components except the beads to flow through under brief centrifugation, and therefore provide a way to use significantly less agarose with minimal loss. As mentioned above, only standard laboratory equipment is required for the use of agarose beads in immunoprecipitation applications, while high-power magnets are required for magnetic bead-based IP reactions. While the magnetic capture equipment may be cost-prohibitive, the rapid completion of immunoprecipitations using magnetic beads may be a financially beneficial approach when grants are due, because a 30-minute protocol with magnetic beads, compared to overnight incubation at 4 °C with agarose beads, may result in more data generated in a shorter length of time. [ 7 ] [ 8 ] [ 9 ] An added benefit of using magnetic beads is that automated immunoprecipitation devices are becoming more readily available. These devices not only reduce the work and time needed to perform an IP, but can also be used for high-throughput applications. While the clear benefits of magnetic beads include increased reaction speed, gentler sample handling and the potential for automation, the choice between agarose and magnetic beads, based on the binding capacity of the support medium and the cost of the product, may depend on the protein of interest and the IP method used. As with all assays, empirical testing is required to determine which method is optimal for a given application. Once the solid-substrate bead technology has been chosen, antibodies are coupled to the beads, and the antibody-coated beads can be added to the heterogeneous protein sample (e.g. homogenized tissue). At this point, antibodies immobilized on the beads will bind the proteins that they specifically recognize. Once this has occurred, the immunoprecipitation portion of the protocol is actually complete, as the specific proteins of interest are bound to the antibodies, which are themselves immobilized on the beads. Separation of the immunocomplexes from the lysate is an extremely important series of steps, because the protein(s) must remain bound to each other (in the case of co-IP) and to the antibody during the wash steps that remove non-bound proteins and reduce background. When working with agarose beads, the beads must be pelleted out of the sample by briefly spinning in a centrifuge at forces between 600 and 3,000 × g (times the standard gravitational acceleration). This step may be performed in a standard microcentrifuge tube, but for faster separation, greater consistency and higher recovery, the process is often performed in small spin columns with a pore size that allows liquid, but not agarose beads, to pass through. After centrifugation, the agarose beads form a very loose, fluffy pellet at the bottom of the tube. The supernatant containing contaminants can be carefully removed so as not to disturb the beads.
The wash buffer can then be added to the beads, and after mixing, the beads are again separated by centrifugation. With superparamagnetic beads, the sample is placed in a magnetic field so that the beads collect on the side of the tube. This procedure is generally complete in approximately 30 seconds, and the remaining (unwanted) liquid is pipetted away. Washes are accomplished by resuspending the beads (off the magnet) in the washing solution and then concentrating the beads back on the tube wall (by placing the tube back on the magnet). The washing is generally repeated several times to ensure adequate removal of contaminants. If the superparamagnetic beads are homogeneous in size and the magnet has been designed properly, the beads will concentrate uniformly on the side of the tube and the washing solution can be easily and completely removed. After washing, the precipitated protein(s) are eluted and analyzed by gel electrophoresis , mass spectrometry , western blotting , or any number of other methods for identifying the constituents of the complex. Protocol times for immunoprecipitation vary greatly due to a variety of factors, increasing with the number of washes necessary or with the slower reaction kinetics of porous agarose beads.
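The back-calculation described earlier, from the protein needed, to the antibody needed, to the bead volume needed, can be sketched as simple arithmetic. The capacities and ratios below are illustrative placeholders only; real numbers come from the resin datasheet and the particular antibody-antigen pair.

```python
# Toy bead-quantity back-calculation; all constants are assumed placeholders.
target_protein_ug = 50.0        # protein required for downstream analysis
capture_efficiency = 0.5        # assume half of theoretical capture is realized
antibody_per_target = 0.2       # ug antibody per ug target protein (assumed)
bead_capacity_ug_per_ul = 10.0  # ug antibody per uL settled resin (assumed)

antibody_ug = (target_protein_ug / capture_efficiency) * antibody_per_target
bead_ul = antibody_ug / bead_capacity_ug_per_ul

print(f"antibody needed: {antibody_ug:.1f} ug")
print(f"settled resin:   {bead_ul:.1f} uL")
# With these toy numbers only ~2 uL of resin would be saturated, far below the
# 25-50 uL handling minimum discussed above -- the unsaturated-bead problem.
```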
https://en.wikipedia.org/wiki/Immunoprecipitation
Immunoproteomics is the study of large sets of proteins ( proteomics ) involved in the immune response . The identification of proteins in immunoproteomics is carried out by gel-based, microarray-based, and DNA-based techniques, with mass spectrometry typically being the ultimate identification method. [ 1 ] Immunoproteomics is and has been used to increase scientific understanding of both autoimmune disease pathology and progression. Using biochemical techniques, gene and ultimately protein expression can be measured with high fidelity. With this information, the biochemical pathways causing pathology in conditions such as multiple sclerosis and Crohn's disease can potentially be elucidated. Serum antibody identification in particular has proven to be a very useful diagnostic tool for a number of diseases in modern medicine, in large part due to the relatively high stability of serum antibodies. [ 2 ] Immunoproteomic techniques are additionally used for the isolation of antibodies. [ 3 ] By identifying and then sequencing antibodies, scientists are able to identify the potential protein targets of those antibodies. [ 4 ] In doing so, it is possible to determine the antigen(s) responsible for a particular immune response. Identification and engineering of antibodies involved in autoimmune disease pathology may offer novel approaches to disease therapy. By identifying the antigens responsible for a particular immune response, it is possible to identify viable targets for novel drugs. [ 5 ] In addition, specific antigens can be further classified based on immunoreactivity for the identification of future potential vaccine preparations. [ 5 ] Beyond the identification of vaccine candidates, immunoproteomic techniques such as western blotting can also be used to measure the efficacy of a given vaccine. [ 5 ] Mass spectrometry can be used to sequence MHC binding motifs, which can subsequently be used to predict T cell epitopes . [ 6 ] The technique of peptide mass fingerprinting (PMF) can be used to check a peptide's mass spectrum against a database of protein digests that have already been documented. [ 7 ] If the mass spectrum of the protein of interest and that of a database protein agree closely, it is likely that the protein of interest is contained within the sample. [ 7 ] Affinity proteomics is a high-throughput method of studying the proteome with antibodies or other affinity reagents (e.g. aptamers). Large numbers (dozens to hundreds) of immune-related cytokines and related markers can be assayed simultaneously in solution, in contrast to on a solid substrate such as a microarray. Two-dimensional gel electrophoresis (2-D gel) techniques in combination with western blotting have been used for many years to gauge the magnitude of immune responses. [ 1 ] This can be accomplished by comparing various samples against molecular-weight size markers for qualitative analysis and against known amounts of protein standards for quantitative analysis. By coupling liquid chromatography with a variety of other immunodetection techniques such as serological proteome analysis (SERPA), it is possible to analyze the hydrophobicity , pI , relative mass , and antibody reactivity of antibodies within a given serum. [ 5 ] Microarray analysis of various sera can be used to identify changes in gene expression before, during, and after a given immune response.
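The core of peptide mass fingerprinting can be sketched in a few lines: digest a protein in silico with trypsin, compute each peptide's mass, and look for matches in an observed peak list within a tolerance. The residue masses below are standard monoisotopic values; the protein sequence and "observed" peaks are invented for the example.

```python
# In-silico tryptic digest and peak matching (PMF sketch).
import re

RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
           "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
           "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
           "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER = 18.01056  # mass of H2O added to the residue sum

def tryptic_peptides(seq):
    # Trypsin cleaves after K or R, but not when followed by P
    return [p for p in re.split(r"(?<=[KR])(?!P)", seq) if p]

def mass(peptide):
    return sum(RESIDUE[aa] for aa in peptide) + WATER

protein = "MKWVTFISLLFLFSSAYSRGVFRRDAHK"   # hypothetical sequence
observed = [277.15, 477.27, 2036.08]       # hypothetical peak list (Da)

for pep in tryptic_peptides(protein):
    m = mass(pep)
    for peak in observed:
        if abs(m - peak) < 0.5:            # 0.5 Da tolerance
            print(f"{pep}  {m:.2f} matches peak {peak}")
```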
https://en.wikipedia.org/wiki/Immunoproteomics
Immunoradiometric assay (IRMA) is an assay that uses radiolabeled antibodies. It differs from conventional radioimmunoassay (RIA) in that the compound to be measured combines immediately with the radiolabeled antibodies, rather than displacing another antigen by degrees over some period. In principle, it is a noncompetitive assay in which the analyte to be measured is sandwiched between two antibodies. Fluorescent and radioactive antibodies have been used to locate or measure solid-phase antigens for many years; however, only more recently has the labeled antibody been applied to the measurement of antigen in a sample. The method converts the unknown antigen into a traceable radioactive product. The immunoradiometric assay was first introduced by Miles and Hales in 1968, who proposed certain theoretical advantages of the method with regard to improving the sensitivity and precision of immunoassays. In IRMA, the antibodies are labeled with radioisotopes (such as I-125 or I-131) and used to bind antigens present in the specimen. When a positive sample is added to the tubes, the radiolabeled antibodies bind to the free epitopes of the antigens and form antigen-antibody complexes. Unbound labeled antibodies are removed by a second reaction with a solid-phase antigen. The amount of radioactivity remaining in the solution is a direct function of the antigen concentration.
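Because the signal rises directly with antigen, an unknown IRMA sample can be read off a standard curve. The sketch below performs piecewise-linear interpolation between standards; all counts and concentrations are invented placeholders.

```python
# Toy IRMA read-out: interpolate an unknown's concentration from standards.
standard_conc = [0.0, 1.0, 5.0, 10.0, 50.0]   # ng/mL antigen standards
standard_cpm = [120, 450, 1800, 3500, 16500]  # measured counts per minute

def conc_from_counts(cpm):
    """Piecewise-linear interpolation along the standard curve."""
    points = list(zip(standard_conc, standard_cpm))
    for (c0, y0), (c1, y1) in zip(points, points[1:]):
        if y0 <= cpm <= y1:
            return c0 + (c1 - c0) * (cpm - y0) / (y1 - y0)
    raise ValueError("counts fall outside the standard curve")

print(f"{conc_from_counts(2650):.1f} ng/mL")  # -> 7.5 ng/mL
```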
https://en.wikipedia.org/wiki/Immunoradiometric_assay
Immunoscreening is a method of biotechnology used to detect a polypeptide produced from a cloned gene . The term encompasses several different techniques designed for protein identification, such as Western blotting , the use of recombinant DNA , and the analysis of antibody-peptide interactions. [ 1 ] Clones are screened for the presence of the gene product: the resulting protein. This strategy requires first that a gene library be implemented in an expression vector, and that antiserum to the protein be available. Radioactivity or an enzyme is generally coupled to the secondary antibody. Radioactivity- or enzyme-linked secondary antibodies can be purchased commercially and can detect different antigens. In commercial diagnostics labs, labelled primary antibodies are also used. [ 2 ] The antigen-antibody interaction is used in the immunoscreening of several diseases. [ 3 ]
https://en.wikipedia.org/wiki/Immunoscreening
Immunosenescence is the gradual deterioration of the immune system brought on by natural age advancement . A 2020 review concluded that the adaptive immune system is affected more than the innate immune system . [ 1 ] Immunosenescence involves both the host's capacity to respond to infections and the development of long-term immune memory. Age-associated immune deficiency is found in both long- and short-lived species as a function of their age relative to life expectancy rather than elapsed time. [ 2 ] It has been studied in animal models including mice, marsupials and monkeys. [ 3 ] [ 4 ] [ 5 ] Immunosenescence is a contributory factor to the increased frequency of morbidity and mortality among the elderly. Along with anergy and T-cell exhaustion , immunosenescence is one of the major dysfunctional states of the immune system. However, while T-cell anergy is a reversible condition, as of 2020 no techniques for reversing immunosenescence had been developed. [ 6 ] [ 7 ] Immunosenescence is not a random deteriorative phenomenon; rather, it appears to inversely recapitulate an evolutionary pattern. Most of the parameters affected by immunosenescence appear to be under genetic control. [ 8 ] Immunosenescence can be envisaged as the result of the continuous challenge of unavoidable exposure to a variety of antigens such as viruses and bacteria . [ 9 ] Aging of the immune system is a controversial phenomenon. Senescence refers to replicative senescence from cell biology , which describes the condition in which the upper limit of cell divisions (the Hayflick limit ) has been exceeded and such cells commit apoptosis or lose their functional properties. Immunosenescence, by contrast, generally means a robust shift in both structural and functional parameters that has a clinically relevant outcome. [ 10 ] Thymus involution is probably the most relevant factor responsible for immunosenescence. Thymic involution is common in most mammals; in humans it begins after puberty , as the immunological defense against most novel antigens is necessary mainly during infancy and childhood. [ 11 ] The major characteristic of the immunosenescent phenotype is a shift in T-cell subpopulation distribution. As the thymus involutes, the number of naive T cells (especially CD8+ ) decreases, and naive T cells homeostatically proliferate into memory T cells as compensation. [ 5 ] It is believed that the conversion to the memory phenotype can be accelerated by restimulation of the immune system by persistent pathogens such as CMV and HSV . By age 40, an estimated 50% to 85% of adults have contracted human cytomegalovirus ( HCMV ). [ 1 ] Recurring infections by latent herpes viruses can exhaust the immune system of elderly persons. [ 12 ] Consistent, repeated stimulation by such pathogens leads to preferential differentiation toward the T-cell memory phenotype, and a 2020 review reported that CD8+ T-cell precursors specific for the rarest and least frequently encountered antigens are lost to the greatest extent. [ 5 ] Such a distribution shift leads to increased susceptibility to non-persistent infections, cancer, autoimmune diseases, cardiovascular conditions and many others. [ 13 ] [ 14 ] T cells are not the only immune cells affected by aging. In addition to these changes in immune responses, the beneficial effects of inflammation devoted to the neutralisation of dangerous and harmful agents early in life and in adulthood become detrimental late in life, in a period largely not foreseen by evolution, according to the antagonistic pleiotropy theory of aging.
[ 25 ] Changes in the lymphoid compartment are not solely responsible for the malfunctioning of the immune system . Although myeloid cell production does not seem to decline with age, macrophages become dysregulated as a consequence of environmental changes. [ 26 ] The functional capacity of T cells is the most influenced by the effects of aging. Age-related alterations are evident in all stages of T-cell development, making them a significant factor in immunosenescence. [ 27 ] The decline of T-cell function begins with the progressive involution of the thymus , the organ essential for T-cell maturation. This decline in turn reduces IL-2 production [ 28 ] [ 29 ] and reduces or exhausts the number of thymocytes (i.e. immature T cells), thus reducing the output of peripheral naïve T cells. [ 30 ] [ 31 ] Once matured and circulating throughout the peripheral system, T cells still undergo deleterious age-dependent changes. This leaves the body practically devoid of virgin T cells, which makes it more prone to a variety of diseases. [ 9 ] The elderly frequently present with non-specific signs and symptoms, and clues of focal infection are often absent or obscured by chronic conditions, [ 2 ] which complicates diagnosis and treatment. The reduced efficacy of vaccination in the elderly stems from their restricted ability to respond to immunization with novel non-persistent pathogens, and correlates with both alterations in the CD4:CD8 ratio and impaired dendritic cell function. [ 48 ] Therefore, vaccination in earlier life stages seems more likely to be effective, although the duration of the effect varies by pathogen. [ 49 ] [ 10 ] Removal of senescent cells with senolytic compounds has been proposed as a method of enhancing immunity during aging. [ 50 ] Immune system aging in mice can be partly restricted by restoring thymus growth, which can be achieved by transplantation of proliferative thymic epithelial cells from young mice. [ 51 ] Metformin has been shown to moderate aging in preclinical studies. [ 52 ] Its protective effect is probably caused primarily by altered mitochondrial metabolism, particularly decreased reactive oxygen production, [ 53 ] an increased AMP:ATP ratio [ 54 ] and a lower NAD/NADH ratio. The coenzyme NAD+ declines in various tissues in an age-dependent manner, and thus redox-potential-associated changes seem to be critical in the aging process; [ 55 ] NAD+ supplements may have protective effects. [ 56 ] Rapamycin , an antitumor agent and immunosuppressant, acts similarly. [ 57 ]
https://en.wikipedia.org/wiki/Immunosenescence
Immunosequencing, sometimes referred to as repertoire sequencing or Rep-Seq, is a method for analyzing the genetic makeup of an individual's immune system. In most areas of biology a single gene codes for one or a few possible proteins. Through V(D)J recombination, however, a number of organisms take a relatively small set of genes coding for antibodies and T-cell receptors (TCRs) and produce a huge diversity of slightly different antibodies and TCRs. This diversity allows for the recognition of a wide array of antigens. As an immune system reacts to infections and other events, the number of different antibodies and TCRs it contains changes. The makeup and quantity of these proteins is sometimes referred to as an immune repertoire. Immunosequencing is a technique utilizing multiplex polymerase chain reaction that allows for the sequencing and quantification of the large diversity of antibody and TCR genes composing an individual's immune repertoire. [ 1 ] [ 2 ] Immunosequencing in its modern context started being discussed in the scientific literature in the early 2010s with the advent of more powerful gene sequencing techniques. [ 3 ]
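To make the notion of repertoire quantification concrete, the following is a minimal sketch, not from the article, of how clonotype counts from an immunosequencing run might be summarized. Shannon entropy and clonality are commonly used repertoire summary statistics; the counts and function names here are invented for illustration.

import math

def repertoire_stats(counts):
    """Return (Shannon entropy in bits, clonality) for a list of clonotype counts."""
    total = sum(counts)
    freqs = [c / total for c in counts if c > 0]
    entropy = -sum(f * math.log2(f) for f in freqs)
    max_entropy = math.log2(len(freqs))  # entropy of a perfectly even repertoire
    clonality = 1 - entropy / max_entropy if len(freqs) > 1 else 1.0
    return entropy, clonality

# Hypothetical TCR clonotype read counts: one expanded clone among 999 rare ones.
counts = [5000] + [10] * 999
entropy, clonality = repertoire_stats(counts)
print(f"entropy = {entropy:.2f} bits, clonality = {clonality:.3f}")

A repertoire dominated by a few expanded clones yields low entropy and a clonality near 1, while a diverse naive repertoire approaches the maximum entropy for its number of clonotypes.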
https://en.wikipedia.org/wiki/Immunosequencing
In biochemistry, immunostaining is any use of an antibody-based method to detect a specific protein in a sample. The term "immunostaining" was originally used to refer to the immunohistochemical staining of tissue sections, as first described by Albert Coons in 1941. [ 1 ] However, immunostaining now encompasses a broad range of techniques used in histology, cell biology, and molecular biology that use antibody-based staining methods. Immunohistochemistry or IHC staining of tissue sections (or immunocytochemistry, which is the staining of cells) is perhaps the most commonly applied immunostaining technique. [ 2 ] While the first cases of IHC staining used fluorescent dyes (see immunofluorescence), other non-fluorescent methods using enzymes such as peroxidase (see immunoperoxidase staining) and alkaline phosphatase are now used. These enzymes are capable of catalysing reactions that give a coloured product that is easily detectable by light microscopy. Alternatively, radioactive elements can be used as labels, and the immunoreaction can be visualized by autoradiography. [ 3 ] Tissue preparation or fixation is essential for the preservation of cell morphology and tissue architecture. Inappropriate or prolonged fixation may significantly diminish the antibody binding capability. Many antigens can be successfully demonstrated in formalin-fixed paraffin-embedded tissue sections. However, some antigens will not survive even moderate amounts of aldehyde fixation. Under these conditions, tissues should be rapidly fresh-frozen in liquid nitrogen and cut with a cryostat. The disadvantages of frozen sections include poor morphology, poor resolution at higher magnifications, greater difficulty in cutting than paraffin sections, and the need for frozen storage. Alternatively, vibratome sections do not require the tissue to be processed through organic solvents or high heat, which can destroy antigenicity, or to be disrupted by freeze-thawing. The disadvantage of vibratome sections is that the sectioning process is slow and difficult with soft and poorly fixed tissues, and that chatter marks or vibratome lines are often apparent in the sections. [ citation needed ] The detection of many antigens can be dramatically improved by antigen retrieval methods that act by breaking some of the protein cross-links formed by fixation to uncover hidden antigenic sites. This can be accomplished by heating for varying lengths of time (heat-induced epitope retrieval or HIER) or by enzyme digestion (proteolytic-induced epitope retrieval or PIER). [ 4 ] One of the main difficulties with IHC staining is overcoming specific or non-specific background. Optimisation of fixation methods and times, pre-treatment with blocking agents, incubating antibodies with high salt, and optimising post-antibody wash buffers and wash times are all important for obtaining high-quality immunostaining. In addition, the presence of both positive and negative controls for staining is essential for determining specificity. [ citation needed ] A flow cytometer can be used for the direct analysis of cells expressing one or more specific proteins. Cells are immunostained in solution using methods similar to those used for immunofluorescence, and then analysed by flow cytometry.
[ citation needed ] Flow cytometry has several advantages over IHC, including: the ability to define distinct cell populations by their size and granularity; the capacity to gate out dead cells; improved sensitivity; and multi-colour analysis to measure several antigens simultaneously. However, flow cytometry can be less effective at detecting extremely rare cell populations, and there is a loss of architectural relationships in the absence of a tissue section. [ 5 ] Flow cytometry also has a high capital cost associated with the purchase of a flow cytometer. [ citation needed ] Western blotting allows the detection of specific proteins from extracts made from cells or tissues, before or after any purification steps. Proteins are generally separated by size using gel electrophoresis before being transferred to a synthetic membrane via dry, semi-dry, or wet blotting methods. The membrane can then be probed with antibodies using methods similar to immunohistochemistry, but without a need for fixation. Detection is typically performed using peroxidase-linked antibodies to catalyse a chemiluminescent reaction. [ citation needed ] Western blotting is a routine molecular biology method that can be used to semi-quantitatively compare protein levels between extracts. The size separation prior to blotting allows the protein's molecular weight to be gauged against known molecular weight markers. [ citation needed ] The enzyme-linked immunosorbent assay or ELISA is a diagnostic method for quantitatively or semi-quantitatively determining protein concentrations from blood plasma, serum or cell/tissue extracts in a multi-well plate format (usually 96 wells per plate). Broadly, proteins in solution are adsorbed onto ELISA plates. Antibodies specific for the protein of interest are used to probe the plate. Background is minimised by optimising blocking and washing methods (as for IHC), and specificity is ensured via the presence of positive and negative controls. Detection methods are usually colorimetric or chemiluminescence-based. [ citation needed ] Electron microscopy or EM can be used to study the detailed microarchitecture of tissues or cells. Immuno-EM allows the detection of specific proteins in ultrathin tissue sections. Antibodies labelled with heavy metal particles (e.g. gold) can be directly visualised using transmission electron microscopy. While powerful in detecting the sub-cellular localisation of a protein, immuno-EM can be technically challenging and expensive, and it requires rigorous optimisation of tissue fixation and processing methods. Protein biotinylation in vivo has been proposed to alleviate the problems caused by the frequent incompatibility of antibody staining with fixation protocols that better preserve cell morphology. [ 6 ] In immunostaining methods, an antibody is used to detect a specific protein epitope. These antibodies can be monoclonal or polyclonal. Detection of this first or primary antibody can be accomplished in multiple ways. As previously described, enzymes such as horseradish peroxidase or alkaline phosphatase are commonly used to catalyse reactions that give a coloured or chemiluminescent product. Fluorescent molecules can be visualised using fluorescence microscopy or confocal microscopy. [ citation needed ] The applications of immunostaining are numerous, but they are most typically found in clinical diagnostics and laboratory research.
[ citation needed ] Clinically, IHC is used in histopathology for the diagnosis of specific types of cancer based on molecular markers. [ citation needed ] In laboratory science, immunostaining can be used for a variety of applications based on investigating the presence or absence of a protein, its tissue distribution, its sub-cellular localisation, and changes in its expression or degradation.
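As a concrete illustration of the quantitative step in the ELISA workflow described above, the following is a minimal sketch, not from the article, that interpolates an unknown sample's concentration from a standard dilution series. Real assays usually fit a four-parameter logistic model to the standards; for a dependency-free illustration, this sketch interpolates absorbance linearly against log10(concentration) between bracketing standards, and all numbers are invented.

import math

# (concentration in ng/mL, measured absorbance) for the standard dilution series
standards = [(0.1, 0.05), (1.0, 0.21), (10.0, 0.80), (100.0, 1.95)]

def interpolate_concentration(absorbance):
    """Estimate concentration by piecewise-linear interpolation in log space."""
    for (c_lo, a_lo), (c_hi, a_hi) in zip(standards, standards[1:]):
        if a_lo <= absorbance <= a_hi:
            frac = (absorbance - a_lo) / (a_hi - a_lo)
            log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_c
    raise ValueError("absorbance outside the range of the standard curve")

print(f"{interpolate_concentration(0.50):.2f} ng/mL")  # falls between the 1 and 10 ng/mL standards

With these invented standards, an absorbance of 0.50 interpolates to roughly 3 ng/mL; a sample falling outside the standards' range would need to be diluted and re-run rather than extrapolated.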
https://en.wikipedia.org/wiki/Immunostaining
Immunostimulants, also known as immunostimulators, are substances (drugs and nutrients) that stimulate the immune system, usually in a non-specific manner, by inducing activation or increasing the activity of any of its components. One notable example is granulocyte macrophage colony-stimulating factor. The goal of this stimulated immune response is usually to help the body mount a stronger immune response in order to improve outcomes in the case of an infection or cancer malignancy. There is also some evidence that immunostimulants may be useful in decreasing severe acute illness related to chronic obstructive pulmonary disease or acute infections in the lungs. [ citation needed ] There are two main categories of immunostimulants: specific immunostimulants, which provide antigenic specificity to the immune response (such as vaccines or antigens), and non-specific immunostimulants, which act irrespective of antigenic specificity. [ 1 ] Many endogenous substances are non-specific immunostimulators. For example, female sex hormones are known to stimulate both adaptive [ 2 ] and innate immune responses. [ 3 ] [ 4 ] [ 5 ] [ 6 ] Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D. [ 7 ] [ 8 ] Some publications point towards an effect of deoxycholic acid (DCA) as an immunostimulant [ 9 ] [ 10 ] [ 11 ] of the non-specific immune system, activating its main actors, the macrophages. According to these publications, a sufficient amount of DCA in the human body corresponds to a good immune reaction of the non-specific immune system. [ citation needed ] Claims made by marketers of various products and alternative health providers, such as chiropractors, homeopaths, and acupuncturists, to be able to stimulate or "boost" the immune system generally lack meaningful explanation and evidence of effectiveness. [ 12 ] Immunostimulants have been recommended to help prevent acute illness related to chronic obstructive pulmonary disease, and they are sometimes used to treat chronic bronchitis. [ 13 ] The evidence from high-quality clinical trials to support their use is weak; however, there is some evidence of benefit, and they appear to be safe. [ 14 ] The most commonly used immunostimulants for this purpose are bacterial-derived; the goal is to stimulate the person's immune system in order to prevent future infections that may result in an acute episode or exacerbation of COPD. [ 14 ]
https://en.wikipedia.org/wiki/Immunostimulant
Immunosuppression is a reduction of the activation or efficacy of the immune system. Some portions of the immune system itself have immunosuppressive effects on other parts of the immune system, and immunosuppression may occur as an adverse reaction to treatment of other conditions. [ 1 ] [ 2 ] In general, deliberately induced immunosuppression is performed to prevent the body from rejecting an organ transplant. [ 3 ] Additionally, it is used for treating graft-versus-host disease after a bone marrow transplant, or for the treatment of autoimmune diseases such as systemic lupus erythematosus, rheumatoid arthritis, Sjögren's syndrome, or Crohn's disease. This is typically done using medications, but may involve surgery (splenectomy), plasmapheresis, or radiation. A person who is undergoing immunosuppression, or whose immune system is weak for some other reason (such as chemotherapy or HIV), is said to be immunocompromised. [ 4 ] Administration of immunosuppressive medications, or immunosuppressants, is the main method for deliberately inducing immunosuppression; in optimal circumstances, immunosuppressive drugs primarily target hyperactive components of the immune system. [ 5 ] People in remission from cancer who require immunosuppression are not more likely to experience a recurrence. [ 6 ] Throughout its history, radiation therapy has been used to decrease the strength of the immune system. [ 7 ] Dr. Joseph Murray of Brigham and Women's Hospital was awarded the Nobel Prize in Physiology or Medicine in 1990 for work on immunosuppression. [ 8 ] Immunosuppressive drugs have the potential to cause immunodeficiency, which can increase susceptibility to opportunistic infection and decrease cancer immunosurveillance. [ 9 ] Immunosuppressants may be prescribed when a normal immune response is undesirable, such as in autoimmune diseases. [ 10 ] Steroids were the first class of immunosuppressant drugs identified, though side-effects of early compounds limited their use. The more specific azathioprine was identified in 1960, but it was the discovery of ciclosporin in 1980 (together with azathioprine) that allowed significant expansion of transplantation to less well-matched donor-recipient pairs as well as broad application to lung transplantation, pancreas transplantation, and heart transplantation. [ 3 ] After an organ transplantation, the body will nearly always reject the new organ(s) due to differences in human leukocyte antigen between the donor and recipient. As a result, the immune system detects the new tissue as "foreign" and attempts to remove it by attacking it with white blood cells, resulting in the death of the donated tissue. Immunosuppressants are administered in order to help prevent rejection; however, the body becomes more vulnerable to infections and malignancy during the course of such treatment. [ 11 ] [ 12 ] [ 13 ] Non-deliberate immunosuppression can occur in, for example, ataxia–telangiectasia, complement deficiencies, many types of cancer, and certain chronic infections such as human immunodeficiency virus (HIV). The unwanted effect in non-deliberate immunosuppression is immunodeficiency, which results in increased susceptibility to pathogens such as bacteria and viruses. [ 1 ] Immunodeficiency is also a potential adverse effect of many immunosuppressant drugs; in this sense, the scope of the term immunosuppression in general includes both the beneficial and the potential adverse effects of decreasing the function of the immune system.
[ 14 ] B cell deficiency and T cell deficiency are immune impairments that individuals can either be born with or acquire, and which in turn can lead to immunodeficiency problems. [ 15 ] Nezelof syndrome is an example of an immunodeficiency of T cells. [ 16 ]
https://en.wikipedia.org/wiki/Immunosuppression
Immunosurgery is a method of selectively removing the external cell layer (trophoblast) of a blastocyst through a cytotoxicity procedure. The protocol for immunosurgery includes preincubation with an antiserum, rinsing with embryonic stem cell derivation media to remove the antibodies, exposure to complement, and then removal of the lysed trophectoderm with a pipette. [ 1 ] This technique is used to isolate the inner cell mass (ICM) of the blastocyst. The trophectoderm's cell junctions and tight epithelium "shield" the ICM from antibody binding by effectively making the structure impermeable to macromolecules. [ 2 ] [ 3 ] Immunosurgery can be used to obtain large quantities of pure inner cell masses in a relatively short period of time. The ICM obtained can then be used for stem cell research and is preferable to adult or fetal stem cells because the ICM has not been affected by external factors, such as manual bisection. [ 4 ] [ 5 ] However, if the structural integrity of the blastocyst is compromised prior to the experiment, the ICM is susceptible to the immunological reaction; thus, the quality of the embryo used is imperative to the experiment's success. In addition, when using complement derived from animals, the source of the animals matters: they should be kept in a specific-pathogen-free environment to increase the likelihood that they have not developed natural antibodies against the bacterial carbohydrates present in the serum (which can be obtained from a different animal). [ 6 ] Solter and Knowles developed the first method of immunosurgery with their 1975 paper "Immunosurgery of Mouse Blastocyst". They primarily used it for studying early embryonic development. [ 4 ] [ 7 ] Though immunosurgery is the most prevalent method of ICM isolation, various experiments have improved the process, such as through the use of lasers (performed by Tanaka et al.) and micromanipulators (performed by Ding et al.). [ 8 ] [ 9 ] These newer methods reduce the risk of contamination with animal materials in the embryonic stem cells derived from the ICM, which can cause complications later on if the embryonic stem cells are transplanted into a human for cell therapy.
https://en.wikipedia.org/wiki/Immunosurgery
Immunotoxicology (sometimes abbreviated as ITOX) is the study of the toxicity of foreign substances called xenobiotics and their effects on the immune system. [ 1 ] Some toxic agents that are known to alter the immune system include industrial chemicals, heavy metals, agrochemicals, pharmaceuticals, drugs, ultraviolet radiation, air pollutants and some biological materials. [ 2 ] [ 1 ] [ 3 ] These immunotoxic substances have been shown to alter both the innate and adaptive parts of the immune system. Xenobiotics typically first affect the organ initially in contact with them (often the lungs or skin). [ 4 ] Some commonly seen problems that arise as a result of contact with immunotoxic substances are immunosuppression, hypersensitivity, and autoimmunity. [ 1 ] Toxin-induced immune dysfunction may also increase susceptibility to cancer. [ 2 ] The study of immunotoxicology began in the 1970s. [ 3 ] However, the idea that some substances have a negative effect on the immune system was not a novel concept, as people have observed immune system alterations resulting from contact with toxins since ancient Egypt. [ 3 ] Immunotoxicology has become increasingly important when considering the safety and effectiveness of commercially sold products. In recent years, guidelines and laws have been created in an effort to regulate and minimize the use of immunotoxic substances in the production of agricultural products, drugs, and consumer products. [ 3 ] One example of these regulations is the set of FDA guidelines mandating that all drugs be tested for toxicity to avoid negative interactions with the immune system, with in-depth investigations required whenever a drug shows signs of affecting the immune system. [ 1 ] Scientists use both in vivo and in vitro techniques when determining the immunotoxic effects of a substance. [ 5 ] Immunotoxic agents can damage the immune system by destroying immune cells and changing signaling pathways. [ 5 ] This has wide-reaching consequences in both the innate and adaptive immune systems. [ 1 ] Changes in the adaptive immune system can be observed by measuring levels of cytokine production, modification of surface markers, activation, and cell differentiation. [ 4 ] There are also changes in macrophage and monocyte activity, indicating changes in the innate immune system. [ 5 ] Some common agents that have been shown to cause immunosuppression are corticosteroids, radiation, heavy metals, halogenated aromatic hydrocarbons, drugs, air pollutants and immunosuppressive drugs. [ 4 ] [ 3 ] These chemicals can result in mutations in regulatory genes of the immune system, which alter the amount of critical cytokines produced and can cause insufficient immune responses when antigens are encountered. [ 4 ] These agents have also been known to kill or damage immune cells and cells in the bone marrow, resulting in difficulty recognizing antigens and creating novel immune responses. This can be measured by decreased IgM and IgG antibody levels, which are an indicator of immune suppression. [ 1 ] T regulatory cells, which are critical to maintaining the correct level of response in the immune system, also appear to be altered by some agents. [ 5 ] In the presence of certain immunotoxic substances, granulocytes of the innate immune system have also been observed to be damaged, causing the rare disease agranulocytosis. [ 5 ] Vaccine effectiveness can also be decreased when the immune system is suppressed by immunotoxic substances.
[ 5 ] In vitro T-lymphocyte activation assays have been useful in determining which substances have immunosuppressive properties (see the sketch at the end of this section). [ 4 ] Hypersensitive or allergic reactions, such as asthma, are commonly associated with immunotoxic agents, and the number of people exhibiting these symptoms is increasing in industrial countries. This is partially due to the increasing number of immunotoxic agents. [ 1 ] [ 5 ] Nanomaterials are commonly absorbed through the skin or inhaled and are known for causing hypersensitive responses by recruiting immune cells. [ 6 ] These nanomaterials are often encountered when a person is in contact with chemicals in an occupational, consumer, or environmental setting. [ 1 ] Agents that are known for creating a hypersensitive response include poison ivy, fragrances, cosmetics, metals, preservatives, and pesticides. [ 1 ] These molecules are so small that they act as haptens, binding to larger molecules to induce an immune response. [ 6 ] An allergic response is induced when T lymphocytes recognize these haptens and recruit professional antigen-presenting cells. [ 4 ] IgE antibodies are important when looking at hypersensitive reactions but cannot be used to definitively determine the effects of an immunotoxic agent. [ 1 ] Because of this, in vivo testing is the most effective way to determine the potential toxicity of nanomaterials and other agents that are believed to cause hypersensitivity. [ 6 ] Immunotoxic agents can increase the occurrence of immune system attacks on self molecules. [ 1 ] Although autoimmunity mostly occurs as a result of genetic factors, immunotoxic agents such as asbestos, sulfadiazine, silica, paraffin and silicone can also increase the chance of an autoimmune attack. [ 1 ] [ 5 ] These agents are known for causing disturbances to the carefully regulated immune system and increasing the development of autoimmunity. [ 4 ] Changes in circulating regulatory and responder T cells are good indicators of an autoimmune response induced by an immunotoxic agent. [ 3 ] The effects of autoimmunity have been examined primarily through studies with animal models. Currently, there is no screen to determine how agents affect human autoimmunity; because of this, much of the current knowledge about autoimmunity in response to immunotoxic agents comes from observations of individuals who have been exposed to suspected immunotoxic agents. [ 1 ] [ 3 ]
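As a sketch of how the in vitro T-lymphocyte activation assays mentioned above are typically scored (not from the article; the readout values are invented), a stimulation index compares the proliferation of mitogen-stimulated cells with that of unstimulated cells; a test substance that lowers the index relative to the vehicle control suggests immunosuppressive activity.

def stimulation_index(stimulated_signal, unstimulated_signal):
    """Ratio of stimulated to unstimulated proliferation readouts."""
    return stimulated_signal / unstimulated_signal

# Hypothetical proliferation readouts (e.g. counts per minute or luminescence).
vehicle_si = stimulation_index(stimulated_signal=42000, unstimulated_signal=1500)
treated_si = stimulation_index(stimulated_signal=9000, unstimulated_signal=1400)

suppression = 100 * (1 - treated_si / vehicle_si)
print(f"vehicle SI = {vehicle_si:.1f}, treated SI = {treated_si:.1f}")
print(f"proliferative response reduced by {suppression:.0f}%")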
https://en.wikipedia.org/wiki/Immunotoxicology
Immunotransplant is a maneuver used to make vaccines more powerful. It refers to the process of infusing vaccine-primed T lymphocytes into lymphodepleted recipients for the purpose of enhancing the proliferation and function of those T cells and increasing the immune protection induced by that vaccine. The concept takes advantage of data from animal and human studies in vaccinology and the homeostasis of T cells, and it has applications in the treatment of infectious disease, immunodeficiency syndromes, and cancer. Historically, the effect of vaccines, particularly those against pathogens, has been assessed by measurement of their induction of a B-cell-mediated (humoral) immune response, i.e. the production of pathogen-specific antibodies. In the study of both infectious diseases and cancer, a majority of potential immune targets are only expressed intracellularly, and are thus inaccessible to antibody-mediated elimination. T-cell-mediated immunity, by contrast, has the potential to recognize targets expressed either extracellularly or intracellularly and has therefore been studied extensively for treatment of these diseases. A number of pre-clinical and clinical studies have demonstrated that vaccines against pathogens, bystander (non-pathogenic) proteins, tumor-associated antigens, or whole tumor cells can induce specific T-cell-mediated immune responses. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] A number of approaches have been considered to amplify T-cell-mediated immune responses (e.g. IL-2, CTLA-4, IL-7, CD137), and some of these have shown clinical efficacy in eliminating particular types of cancer, most notably melanoma and renal cell carcinoma. The use of immunotransplant to enhance T-cell-mediated immune responses derives from studies of T cell homeostasis. The total cohort of T cells in an organism maintains homeostasis: a consistent total number of T cells in the peripheral blood. Transient elevations in peripheral blood T cell counts cause the whole population to diminish, and transient depletions cause the whole population to proliferate, generally maintaining a roughly constant total T cell count. The latter situation, lymphodepletion, has been studied extensively, and the proliferation of mature T cells upon transfer into a lymphopenic host is referred to as "lymphodepletion-induced" or "homeostatic" proliferation. [ 7 ] It has been shown that homeostatic proliferation induces not only quantitative changes in T cell cohorts, but qualitative changes as well, such as increased function and the development of a memory-cell phenotype. [ 8 ] The mechanism of these changes has been shown to be primarily due to the upregulation of a group of cytokines including IL-7 and IL-15 induced by lymphodepletion. Additionally, lymphodepletion is a non-selective method of eliminating several known regulatory, or immunosuppressive, subsets of immune cells, such as regulatory T cells. [ 9 ] These observations have prompted several clinical studies of infusing pathogen- or tumor-specific T cells into lymphodepleted patients. A group at the National Cancer Institute demonstrated remarkable efficacy by infusing melanoma-specific T cells (obtained by growing tumor-infiltrating T cells ex vivo) into melanoma patients treated with lymphodepleting chemotherapy. In a series of studies (to 2005) of this approach, up to 70% of treated patients were shown to have regressions of their tumors, many of which had been considerable in size and refractory to other therapies.
[ 10 ] [ 11 ] These findings compare favorably with standard-of-care therapies for melanoma, which generally lead to tumor regressions in only ~10–12% of patients. [ citation needed ] Because of the logistic difficulty of obtaining tumor-specific T cells via the ex vivo expansion of tumor-infiltrating cells, a number of studies have examined inducing these cells in vivo by vaccination. Levitsky et al., at Johns Hopkins, demonstrated in a series of pre-clinical studies that vaccine-induced T cells could be considerably more effective when re-infused into lymphodepleted recipients. [ 12 ] [ 13 ] Subsequently, a clinical study in patients with multiple myeloma conducted by June et al. demonstrated that a standard vaccination against pneumonia could induce a T-cell-mediated response to the vaccine, and that re-infusing these T cells after an extremely lymphodepletive therapy, autologous stem cell transplant, could significantly enhance that response. [ 14 ] To expand this immunotransplant concept to the amplification of anti-cancer immunity, researchers at Stanford University developed a pre-clinical lymphoma model using an in situ, CpG-based vaccine [ 15 ] to induce anti-tumor immunity and demonstrated that this immunity was enhanced 10- to 40-fold by immunotransplant. [ 16 ] The above studies by Levitsky et al. were an important precedent for this work. In fact, the Hopkins group published preliminary results of a clinical study testing the basic immunotransplant concept in acute myeloid leukemia [ 17 ] demonstrating encouraging signals of enhanced anti-tumor immunity. [ 18 ] To continue the clinical translation of this approach, in August 2009 the Stanford group [ 19 ] initiated a phase I/II clinical trial for patients with newly diagnosed mantle cell lymphoma. [ 20 ] That study uses a whole-cell, CpG-activated, autologous tumor vaccine to induce anti-tumor immunity, followed by leukapheresis and re-infusion of the vaccine-primed cells immediately after standard autologous transplant. Initial results of this study were presented at the ASCO 2011 Annual Meeting, showing successful data towards the primary endpoint: amplification of anti-tumor T-cell responses. [ 21 ]
https://en.wikipedia.org/wiki/Immunotransplant
Immuron is a biotechnology company based in Melbourne, Australia. [ 1 ] In 2008, the company changed its name to Immuron Limited, [ 2 ] having previously operated as Anadis Limited. [ 3 ] [ 4 ] Immuron is focused on antigen-primed and dairy-derived health products. Its proprietary technologies allow for rapid development of polyclonal antibody and other protein-based solutions for a range of diseases. [ citation needed ] The company specialises in nutraceutical, pharmaceutical and therapeutic technology products for conditions such as oral and GI mucositis, avian influenza, E. coli travellers' diarrhoea (TD) and anthrax containment. In 2005, Anadis signed an agreement with Quebec's Baralex Inc. and Valeo Pharma Inc. for the distribution of Travelan, a product made by Anadis, for the Canadian market. [ 5 ]
https://en.wikipedia.org/wiki/Immuron
In mechanics, an impact is when two bodies collide. During this collision, both bodies decelerate. The deceleration causes a high force or shock, applied over a short time period. A high force applied over a short duration usually causes more damage to both bodies than a lower force applied over a proportionally longer duration. At normal speeds, during a perfectly inelastic collision, an object struck by a projectile will deform, and this deformation will absorb most or all of the force of the collision. Viewed from a conservation of energy perspective, the kinetic energy of the projectile is changed into heat and sound energy as a result of the deformations and vibrations induced in the struck object. However, these deformations and vibrations cannot occur instantaneously. A high-velocity collision (an impact) does not provide sufficient time for these deformations and vibrations to occur. Thus, the struck material behaves as if it were more brittle than it would otherwise be, and the majority of the applied force goes into fracturing the material. Put another way, materials actually are more brittle on short time scales than on long ones: this is related to time-temperature superposition. Impact resistance decreases with an increase in the modulus of elasticity, which means that stiffer materials will have less impact resistance, while resilient materials will have better impact resistance. Different materials can behave in quite different ways under impact when compared with static loading conditions. Ductile materials like steel tend to become more brittle at high loading rates, and spalling may occur on the side opposite the impact if penetration does not occur. The way in which the kinetic energy is distributed through the section is also important in determining its response. Projectiles apply a Hertzian contact stress at the point of impact to a solid body, with compression stresses under the point but bending loads a short distance away. Since most materials are weaker in tension than compression, this is the zone where cracks tend to form and grow. A nail is driven by a series of impacts, each from a single hammer blow. These high-velocity impacts overcome the static friction between the nail and the substrate. A pile driver achieves the same end, although on a much larger scale, the method being commonly used during civil construction projects to make building and bridge foundations. An impact wrench is a device designed to impart torque impacts to bolts to tighten or loosen them. At normal speeds, the forces applied to the bolt would be dispersed, via friction, to the mating threads. However, at impact speeds, the forces act on the bolt to move it before they can be dispersed. In ballistics, bullets utilize impact forces to puncture surfaces that could otherwise resist substantial forces. A rubber sheet, for example, behaves more like glass at typical bullet speeds: it fractures, and does not stretch or vibrate. The field of applications of impact theory ranges from the optimization of material processing, impact testing and the dynamics of granular media to medical applications related to the biomechanics of the human body, especially the hip and knee joints. [ 1 ] It also has vast applications in the automotive and military industries. [ 2 ] Road traffic accidents usually involve impact loading, such as when a car hits a traffic bollard, water hydrant or tree, the damage being localized to the impact zone.
When vehicles collide, the damage increases with the relative velocity of the vehicles; the damage increases as the square of the velocity, since it is the impact kinetic energy (1/2 mv²) that is the variable of importance. Much design effort is made to improve the impact resistance of cars so as to minimize user injury. This can be achieved in several ways, for example by enclosing the driver and passengers in a safety cell. The cell is reinforced so that it will survive high-speed crashes and so protect the occupants. Parts of the body shell outside the cell are designed to crumple progressively, absorbing most of the kinetic energy which must be dissipated by the impact. Various impact tests are used to assess the effects of high loading, both on products and on standard slabs of material. The Charpy test and Izod test are two examples of standardized methods that are widely used for testing materials. Ball or projectile drop tests are used for assessing product impacts. The Columbia disaster was caused by impact damage when a chunk of polyurethane foam struck the carbon fibre composite wing of the Space Shuttle. Although tests had been conducted before the disaster, the test chunks were much smaller than the chunk that fell away from the external tank and hit the exposed wing. When fragile items are shipped, impacts and drops can cause product damage. Protective packaging and cushioning help reduce the peak acceleration by extending the duration of the shock or impact. [ 3 ]
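The following minimal sketch (not from the article; the masses, speeds and stopping distances are invented) illustrates the reasoning above: the kinetic energy 1/2 mv² that must be dissipated is fixed by the collision, so by the work-energy theorem the average force falls as the stopping distance, and hence the duration of the impact, grows.

def average_impact_force(mass_kg, speed_m_s, stopping_distance_m):
    """Average force (N) needed to dissipate the kinetic energy over a distance."""
    kinetic_energy = 0.5 * mass_kg * speed_m_s**2  # joules
    return kinetic_energy / stopping_distance_m    # work-energy theorem

car_mass = 1500.0  # kg
speed = 15.0       # m/s, roughly 54 km/h

rigid = average_impact_force(car_mass, speed, stopping_distance_m=0.05)
crumple = average_impact_force(car_mass, speed, stopping_distance_m=0.60)
print(f"rigid barrier: {rigid / 1000:.0f} kN")
print(f"crumple zone:  {crumple / 1000:.0f} kN ({rigid / crumple:.0f}x lower)")

Doubling the speed quadruples the energy, and with it the force for a given stopping distance, which is why crumple zones and cushioned packaging aim to stretch the stopping distance rather than resist rigidly.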
https://en.wikipedia.org/wiki/Impact_(mechanics)
An impact event is a collision between astronomical objects causing measurable effects. [ 1 ] Impact events have been found to regularly occur in planetary systems, though the most frequent involve asteroids, comets or meteoroids and have minimal effect. When large objects impact terrestrial planets such as the Earth, there can be significant physical and biospheric consequences, as the impacting body is usually traveling at several kilometres a second (a minimum of 11.2 km/s (7.0 mi/s) for an Earth-impacting body [ 2 ] ), though atmospheres mitigate many surface impacts through atmospheric entry. Impact craters and structures are dominant landforms on many of the Solar System's solid objects and present the strongest empirical evidence for their frequency and scale. Impact events appear to have played a significant role in the evolution of the Solar System since its formation. Major impact events have significantly shaped Earth's history, and have been implicated in the formation of the Earth–Moon system. Interplanetary impacts have also been proposed to explain the retrograde rotation of Uranus and Venus. [ 3 ] [ 4 ] [ 5 ] Impact events also appear to have played a significant role in the evolutionary history of life. Impacts may have helped deliver the building blocks for life (the panspermia theory relies on this premise). Impacts have been suggested as the origin of water on Earth. They have also been implicated in several mass extinctions. The prehistoric Chicxulub impact, 66 million years ago, is believed to be not only the cause of the Cretaceous–Paleogene extinction event [ 6 ] but also an accelerant of the evolution of mammals, leading to their dominance and, in turn, setting in place conditions for the eventual rise of humans. [ 7 ] Throughout recorded history, hundreds of Earth impacts (and exploding bolides) have been reported, with some occurrences causing deaths, injuries, property damage, or other significant localised consequences. [ 8 ] One of the best-known recorded events in modern times was the Tunguska event, which occurred in Siberia, Russia, in 1908. The 2013 Chelyabinsk meteor event is the only known such incident in modern times to result in numerous injuries, and its meteor is the largest recorded object to have encountered the Earth since the Tunguska event. The Comet Shoemaker–Levy 9 impact provided the first direct observation of an extraterrestrial collision of Solar System objects, when the comet broke apart and collided with Jupiter in July 1994. An extrasolar impact was observed in 2013, when a massive terrestrial planet impact was detected around the star ID8 in the star cluster NGC 2547 by NASA's Spitzer Space Telescope and confirmed by ground observations. [ 9 ] Impact events have been a plot and background element in science fiction. In April 2018, the B612 Foundation reported: "It's 100 percent certain we'll be hit [by a devastating asteroid], but we're not 100 percent certain when." [ 10 ] Also in 2018, physicist Stephen Hawking considered, in his final book Brief Answers to the Big Questions, an asteroid collision to be the biggest threat to the planet. [ 11 ] [ 12 ] In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare.
[ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. [ 18 ] On 26 September 2022, the Double Asteroid Redirection Test demonstrated the deflection of an asteroid. It was the first such experiment to be carried out by humankind and was considered highly successful: the orbital period of the target body was changed by 32 minutes, while the criterion for success was a change of more than 73 seconds. Major impact events have significantly shaped Earth's history, having been implicated in the formation of the Earth–Moon system, the evolutionary history of life, the origin of water on Earth, and several mass extinctions. Impact structures are the result of impact events on solid objects and, as the dominant landforms on many of the Solar System's solid objects, present the most solid evidence of prehistoric impact events. Notable impact events include the hypothesized Late Heavy Bombardment, which would have occurred early in the history of the Earth–Moon system, and the confirmed Chicxulub impact 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event. Small objects frequently collide with Earth, and there is an inverse relationship between the size of the object and the frequency of such events. The lunar cratering record shows that the frequency of impacts decreases as approximately the cube of the resulting crater's diameter, which is on average proportional to the diameter of the impactor. [ 19 ] Asteroids with a 1 km (0.62 mi) diameter strike Earth every 500,000 years on average. [ 20 ] [ 21 ] Large collisions – with 5 km (3 mi) objects – happen approximately once every twenty million years. [ 22 ] The last known impact of an object of 10 km (6 mi) or more in diameter was at the Cretaceous–Paleogene extinction event 66 million years ago. [ 23 ] The energy released by an impactor depends on its diameter, density, velocity, and angle. [ 22 ] The diameter of most near-Earth asteroids that have not been studied by radar or infrared can generally only be estimated to within about a factor of two, based on the asteroid's brightness. The density is generally assumed, because the diameter and mass, from which density could be calculated, are themselves generally estimates. Due to Earth's escape velocity, the minimum impact velocity is 11 km/s, with asteroid impacts averaging around 17 km/s on the Earth. [ 22 ] The most probable impact angle is 45 degrees. [ 22 ] Impact conditions such as asteroid size and speed, as well as density and impact angle, determine the kinetic energy released in an impact event. The more energy is released, the more damage is likely to occur on the ground due to the environmental effects triggered by the impact. Such effects can be shock waves, heat radiation, the formation of craters with associated earthquakes, and tsunamis if bodies of water are hit. Human populations are vulnerable to these effects if they live within the affected zone. [ 1 ] Large seiche waves arising from earthquakes and large-scale deposition of debris can also occur within minutes of impact, thousands of kilometres from the impact site. [ 24 ] Stony asteroids with a diameter of 4 meters (13 ft) enter Earth's atmosphere about once a year.
[ 22 ] Asteroids with a diameter of 7 meters enter the atmosphere about every 5 years with as much kinetic energy as the atomic bomb dropped on Hiroshima (approximately 16 kilotons of TNT), but the air burst is reduced to just 5 kilotons. [ 22 ] These ordinarily explode in the upper atmosphere, and most or all of the solids are vaporized. [ 25 ] However, asteroids with a diameter of 20 m (66 ft), which strike Earth approximately twice every century, produce more powerful airbursts. The 2013 Chelyabinsk meteor was estimated to be about 20 m in diameter with an airburst of around 500 kilotons, an explosion roughly 30 times as powerful as the Hiroshima bomb. Much larger objects may impact the solid earth and create a crater. Objects with a diameter less than 1 m (3.3 ft) are called meteoroids and seldom make it to the ground to become meteorites. An estimated 500 meteorites reach the surface each year, but only 5 or 6 of these typically create a weather radar signature with a strewn field large enough to be recovered and made known to scientists. The late Eugene Shoemaker of the U.S. Geological Survey estimated the rate of Earth impacts, concluding that an event about the size of the nuclear weapon that destroyed Hiroshima occurs about once a year. [ citation needed ] Such events would seem to be spectacularly obvious, but they generally go unnoticed for a number of reasons: the majority of the Earth's surface is covered by water; a good portion of the land surface is uninhabited; and the explosions generally occur at relatively high altitude, resulting in a huge flash and thunderclap but no real damage. [ citation needed ] Although no human is known to have been killed directly by an impact, over 1000 people were injured by the Chelyabinsk meteor airburst event over Russia in 2013. [ 26 ] In 2005 it was estimated that the chance of a single person born today dying of an impact is around 1 in 200,000. [ 27 ] The two- to four-meter-sized asteroids 2008 TC3, 2014 AA, 2018 LA, 2019 MO, 2022 EB5, and the suspected artificial satellite WT1190F are the only known objects to have been detected before impacting the Earth. [ 28 ] [ 29 ] [ 30 ] Impacts have had, during the history of the Earth, a significant geological and climatic influence. [ 31 ] [ 32 ] The Moon's existence is widely attributed to a huge impact early in Earth's history. [ 33 ] Impact events earlier in the history of Earth have been credited with creative as well as destructive events; it has been proposed that impacting comets delivered the Earth's water, and some have suggested that the origins of life may have been influenced by impacting objects bringing organic chemicals or lifeforms to the Earth's surface, a theory known as exogenesis. These modified views of Earth's history did not emerge until relatively recently, chiefly due to a lack of direct observations and the difficulty in recognizing the signs of an Earth impact because of erosion and weathering. Large-scale terrestrial impacts of the sort that produced the Barringer Crater, locally known as Meteor Crater, east of Flagstaff, Arizona, are rare. Instead, it was widely thought that cratering was the result of volcanism: the Barringer Crater, for example, was ascribed to a prehistoric volcanic explosion (not an unreasonable hypothesis, given that the volcanic San Francisco Peaks stand only 48 km or 30 mi to the west). Similarly, the craters on the surface of the Moon were ascribed to volcanism.
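An order-of-magnitude check on the figures quoted above can be made from the parameters the text lists (diameter, density, velocity). The following is a minimal sketch, not from the article; the density and velocity are assumed values for a stony impactor, chosen to roughly match the Chelyabinsk case.

import math

KT_TNT_JOULES = 4.184e12  # energy of one kiloton of TNT

def impact_energy_kt(diameter_m, density_kg_m3=3300.0, velocity_m_s=19000.0):
    """Kinetic energy of a spherical impactor, in kilotons of TNT."""
    radius = diameter_m / 2
    mass = density_kg_m3 * (4 / 3) * math.pi * radius**3
    return 0.5 * mass * velocity_m_s**2 / KT_TNT_JOULES

print(f"4 m stony body:  {impact_energy_kt(4):.1f} kt")
print(f"20 m stony body: {impact_energy_kt(20):.0f} kt")  # Chelyabinsk-scale, ~500 kt quoted above

Because the impactor's mass grows as the cube of its diameter, a fivefold increase in diameter multiplies the released energy by roughly 125, consistent with the steep size-frequency relationship described above.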
It was not until 1903–1905 that the Barringer Crater was correctly identified as an impact crater, and it was not until as recently as 1963 that research by Eugene Merle Shoemaker conclusively proved this hypothesis. The findings of late 20th-century space exploration and the work of scientists such as Shoemaker demonstrated that impact cratering was by far the most widespread geological process at work on the Solar System's solid bodies. Every surveyed solid body in the Solar System was found to be cratered, and there was no reason to believe that the Earth had somehow escaped bombardment from space. In the last few decades of the 20th century, a large number of highly modified impact craters began to be identified. The first direct observation of a major impact event occurred in 1994: the collision of the comet Shoemaker-Levy 9 with Jupiter. Based on crater formation rates determined from the Earth's closest celestial partner, the Moon, astrogeologists have determined that during the last 600 million years, the Earth has been struck by 60 objects of a diameter of 5 km (3 mi) or more. [ 20 ] The smallest of these impactors would leave a crater almost 100 km (60 mi) across. Only three confirmed craters from that time period with that size or greater have been found: Chicxulub, Popigai, and Manicouagan, and all three have been suspected of being linked to extinction events, [ 34 ] [ 35 ] though only Chicxulub, the largest of the three, has been consistently considered a likely cause. The impact that caused Mistastin crater generated temperatures exceeding 2,370 °C, the highest known to have occurred on the surface of the Earth. [ 36 ] Besides the direct effect of asteroid impacts on a planet's surface topography, global climate and life, recent studies have shown that several consecutive impacts might have an effect on the dynamo mechanism at a planet's core responsible for maintaining the magnetic field of the planet, and may have contributed to Mars' lack of a current magnetic field. [ 37 ] An impact event may cause a mantle plume (volcanism) at the antipodal point of the impact. [ 38 ] The Chicxulub impact may have increased volcanism at mid-ocean ridges [ 39 ] and has been proposed to have triggered flood basalt volcanism at the Deccan Traps. [ 40 ] While numerous impact craters have been confirmed on land or in the shallow seas over continental shelves, no impact craters in the deep ocean have been widely accepted by the scientific community. [ 41 ] Projectiles as large as one km in diameter are generally thought to explode before reaching the sea floor, but it is unknown what would happen if a much larger impactor struck the deep ocean. The lack of a crater, however, does not mean that an ocean impact would not have dangerous implications for humanity. Some scholars have argued that an impact event in an ocean or sea may create a megatsunami, which can cause destruction both at sea and on land along the coast, [ 42 ] but this is disputed. [ 43 ] The Eltanin impact into the Pacific Ocean 2.5 Mya is thought to have involved an object about 1 to 4 kilometres (0.62 to 2.49 mi) across but remains craterless. The effect of impact events on the biosphere has been the subject of scientific debate. Several theories of impact-related mass extinction have been developed. In the past 500 million years there have been five generally accepted major mass extinctions that on average extinguished half of all species.
[ 44 ] One of the largest mass extinctions to have affected life on Earth was the Permian-Triassic, which ended the Permian period 250 million years ago and killed off 90 percent of all species; [ 45 ] life on Earth took 30 million years to recover. [ 46 ] The cause of the Permian-Triassic extinction is still a matter of debate; the age and origin of proposed impact craters hypothesized to be associated with it, e.g. the Bedout High structure, are still controversial. [ 47 ] The last such mass extinction led to the demise of the non-avian dinosaurs and coincided with a large meteorite impact; this is the Cretaceous–Paleogene extinction event (also known as the K–T or K–Pg extinction event), which occurred 66 million years ago. There is no definitive evidence of impacts leading to the three other major mass extinctions. In 1980, physicist Luis Alvarez; his son, geologist Walter Alvarez; and nuclear chemists Frank Asaro and Helen V. Michel from the University of California, Berkeley discovered unusually high concentrations of iridium in a specific layer of rock strata in the Earth's crust. Iridium is an element that is rare on Earth but relatively abundant in many meteorites. From the amount and distribution of iridium present in the 65-million-year-old "iridium layer", the Alvarez team later estimated that an asteroid of 10 to 14 km (6 to 9 mi) in diameter must have collided with Earth. This iridium layer at the Cretaceous–Paleogene boundary has been found worldwide at 100 different sites. Multidirectionally shocked quartz (coesite), which is normally associated with large impact events [ 48 ] or atomic bomb explosions, has also been found in the same layer at more than 30 sites. Soot and ash at levels tens of thousands of times normal levels were found with the above. Anomalies in chromium isotopic ratios found within the K-T boundary layer strongly support the impact theory. [ 49 ] Chromium isotopic ratios are homogeneous within the earth, and therefore these isotopic anomalies exclude a volcanic origin, which has also been proposed as a cause for the iridium enrichment. Further, the chromium isotopic ratios measured in the K-T boundary are similar to the chromium isotopic ratios found in carbonaceous chondrites. Thus a probable candidate for the impactor is a carbonaceous asteroid, but a comet is also possible because comets are assumed to consist of material similar to carbonaceous chondrites. Probably the most convincing evidence for a worldwide catastrophe was the discovery of the crater which has since been named Chicxulub Crater. This crater is centered on the Yucatán Peninsula of Mexico and was discovered by Tony Camargo and Glen Penfield while working as geophysicists for the Mexican oil company PEMEX. [ 50 ] What they reported as a circular feature later turned out to be a crater estimated to be 180 km (110 mi) in diameter. This convinced the vast majority of scientists that this extinction resulted from a point event that is most probably an extraterrestrial impact and not from increased volcanism and climate change (which would spread its main effect over a much longer time period). Although there is now general agreement that there was a huge impact at the end of the Cretaceous that led to the iridium enrichment of the K-T boundary layer, remnants have been found of other, smaller impacts, some nearing half the size of the Chicxulub crater, which did not result in any mass extinctions, and there is no clear linkage between an impact and any other incident of mass extinction.
[ 44 ] Paleontologists David M. Raup and Jack Sepkoski have proposed that an excess of extinction events occurs roughly every 26 million years (though many are relatively minor). This led physicist Richard A. Muller to suggest that these extinctions could be due to a hypothetical companion star to the Sun called Nemesis periodically disrupting the orbits of comets in the Oort cloud, leading to a large increase in the number of comets reaching the inner Solar System where they might hit Earth. Physicist Adrian Melott and paleontologist Richard Bambach have more recently verified the Raup and Sepkoski finding, but argue that it is not consistent with the characteristics expected of a Nemesis-style periodicity. [ 51 ] An impact event is commonly seen as a scenario that would bring about the end of civilization. In 2000, Discover magazine published a list of 20 possible sudden doomsday scenarios, with an impact event listed as the most likely to occur. [ 52 ] A joint Pew Research Center / Smithsonian survey from April 21 to 26, 2010 found that 31 percent of Americans believed that an asteroid will collide with Earth by 2050. A majority (61 percent) disagreed. [ 53 ] In the early history of the Earth (about four billion years ago), bolide impacts were almost certainly common, since the Solar System contained far more discrete bodies than at present. Such impacts could have included strikes by asteroids hundreds of kilometers in diameter, with explosions so powerful that they vaporized all the Earth's oceans. It was not until this heavy bombardment slackened that life appears to have begun to evolve on Earth. The leading theory of the Moon's origin is the giant impact theory, which postulates that Earth was once hit by a planetoid the size of Mars; such a theory is able to explain the size and composition of the Moon, something not done by other theories of lunar formation. [ 54 ] According to the theory of the Late Heavy Bombardment, there should have been 22,000 or more impact craters with diameters >20 km (12 mi), about 40 impact basins with diameters of about 1,000 km (620 mi), and several impact basins with diameters of about 5,000 km (3,100 mi). However, hundreds of millions of years of deformation of the Earth's crust pose significant challenges to conclusively identifying impacts from this period. Only two pieces of pristine lithosphere are believed to remain from this era, the Kaapvaal craton (in contemporary South Africa) and the Pilbara Craton (in contemporary Western Australia); searching within them may potentially reveal evidence in the form of physical craters. Other methods may be used to identify impacts from this period, for example indirect gravitational or magnetic analysis of the mantle, but may prove inconclusive. In 2021, evidence for a probable impact 3.46 billion years ago at the Pilbara Craton was found in the form of a 150 kilometres (93 mi) crater created by the impact of a 10 kilometres (6.2 mi) asteroid (named "The Apex Asteroid") into the sea at a depth of 2.5 kilometres (1.6 mi), near the site of Marble Bar, Western Australia. [ 55 ] The event caused global tsunamis. It is also coincident with some of the earliest evidence of life on Earth, fossilized stromatolites. Evidence for at least 4 impact events has been found in spherule layers (dubbed S1 through S8) from the Barberton Greenstone Belt in South Africa, spanning around 3.5–3.2 billion years ago. [ 56 ] The sites of the impacts are thought to have been distant from the location of the belt.
The impactors that generated these events are thought to have been much larger than those that created the largest known still-existing craters/impact structures on Earth: the impactors had estimated diameters of ~20–50 kilometres (12–31 mi), and the craters they generated had estimated diameters of 400–1,000 kilometres (250–620 mi). [ 57 ] The largest impacts, like those represented by the S2 layer, are likely to have had far-reaching effects, such as the boiling of the surface layer of the oceans. [ 58 ] The Maniitsoq structure, dated to around 3 billion years old (3 Ga), was once thought to be the result of an impact; [ 59 ] [ 60 ] however, follow-up studies have not confirmed its nature as an impact structure. [ 60 ] [ 61 ] [ 62 ] [ 63 ] [ 64 ] [ 65 ] The Maniitsoq structure is not recognised as an impact structure by the Earth Impact Database. [ 66 ] In 2020, scientists discovered the world's oldest confirmed impact crater, the Yarrabubba crater, caused by an impact that occurred in the Yilgarn craton (in what is now Western Australia), dated at more than 2.2 billion years ago, with the impactor estimated to be around 7 kilometres (4.3 mi) wide. [ 67 ] [ 68 ] [ 69 ] It is believed that, at this time, the Earth was mostly or completely frozen, a period commonly called the Huronian glaciation. The Vredefort impact event, which occurred around 2 billion years ago in the Kaapvaal craton (in what is now South Africa), caused the largest verified crater, a multi-ringed structure 160–300 km (100–200 mi) across, formed by an impactor approximately 10–15 km (6.2–9.3 mi) in diameter. [ 70 ] [ 71 ] The Sudbury impact event occurred on the Nuna supercontinent (now Canada) and was caused by a bolide approximately 10–15 km (6.2–9.3 mi) in diameter about 1.849 billion years ago. [ 72 ] Debris from the event would have been scattered across the globe. Two 10-kilometre-sized (6.2 mi) asteroids are now believed to have struck Australia between 360 and 300 million years ago at the Western Warburton and East Warburton Basins, creating a 400-kilometre (250 mi) impact zone; according to evidence found in 2015, it is the largest ever recorded. [ 73 ] A third possible impact was also identified in 2015 to the north, on the upper Diamantina River, also believed to have been caused by an asteroid 10 km across about 300 million years ago, but further studies are needed to establish that this crustal anomaly was indeed the result of an impact event. [ 74 ] The prehistoric Chicxulub impact, 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event, was caused by an asteroid estimated to be about 10 kilometres (6.2 mi) wide. [ 6 ] Analysis of the Hiawatha Glacier reveals the presence of a 31 km wide impact crater dated at 58 million years of age, less than 10 million years after the Cretaceous–Paleogene extinction event. Scientists believe that the impactor was a metallic asteroid with a diameter on the order of 1.5 kilometres (0.9 mi), and that the impact would have had global effects. [ 75 ] Artifacts recovered with tektites from the 803,000-year-old Australasian strewnfield event in Asia link a Homo erectus population to a significant meteorite impact and its aftermath. [ 76 ] [ 77 ] [ 78 ] Significant examples of Pleistocene impacts include the Lonar crater lake in India, approximately 52,000 years old (though a study published in 2010 gives a much greater age), which now has a flourishing semi-tropical jungle around it.
The Rio Cuarto craters in Argentina were produced approximately 10,000 years ago, at the beginning of the Holocene. If proved to be impact craters, they would be the first known impact craters of the Holocene. The Campo del Cielo ("Field of Heaven") refers to an area bordering Argentina's Chaco Province where a group of iron meteorites was found, estimated as dating to 4,000–5,000 years ago. The site first came to the attention of Spanish authorities in 1576; in 2015, police arrested four alleged smugglers trying to steal more than a ton of protected meteorites. [ 79 ] The Henbury craters in Australia (~5,000 years old) and Kaali craters in Estonia (~2,700 years old) were apparently produced by objects that broke up before impact. [ 80 ] Whitecourt crater in Alberta, Canada is estimated to be between 1,080 and 1,130 years old. The crater is approximately 36 m (118 ft) in diameter and 9 m (30 ft) deep, is heavily forested, and was discovered in 2007 when a metal detector revealed fragments of meteoric iron scattered around the area. [ 81 ] [ 82 ] A Chinese record states that 10,000 people were killed in the 1490 Qingyang event by a hail of "falling stones"; some astronomers hypothesize that this may describe an actual meteorite fall, although they find the number of deaths implausible. [ 83 ] Kamil Crater in Egypt , 45 m (148 ft) in diameter and 10 m (33 ft) deep, is thought to have been formed less than 3,500 years ago in a then-unpopulated region of western Egypt. It was found on February 19, 2009 by V. de Michelle during a review of Google Earth imagery of the East Uweinat Desert, Egypt. [ 84 ] One of the best-known recorded impacts in modern times was the Tunguska event, which occurred in Siberia , Russia, in 1908. [ 85 ] This incident involved an explosion that was probably caused by the airburst of an asteroid or comet 5 to 10 km (3.1 to 6.2 mi) above the Earth's surface, felling an estimated 80 million trees over 2,150 km² (830 sq mi). [ 86 ] In February 1947, another large bolide impacted the Earth in the Sikhote-Alin Mountains , Primorye , Soviet Union. It occurred during daytime hours and was witnessed by many people, which allowed V. G. Fesenkov , then chairman of the meteorite committee of the USSR Academy of Science, to estimate the meteoroid's orbit before it encountered the Earth. Sikhote-Alin was a massive fall, with the overall mass of the meteoroid estimated at 90,000 kg (200,000 lb); a more recent estimate by Tsvetkov (and others) puts the mass at around 100,000 kg (220,000 lb). [ 87 ] It was an iron meteorite belonging to the chemical group IIAB and with a coarse octahedrite structure. More than 70 tonnes ( metric tons ) of material survived the collision. A case of a human injured by a space rock occurred on November 30, 1954, in Sylacauga, Alabama , [ 88 ] where a 4 kg (8.8 lb) stone chondrite crashed through a roof and hit Ann Hodges in her living room after it bounced off her radio. She was badly bruised by the fragments. Several persons have since claimed to have been struck by "meteorites", but no such claims have been verified. A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point. The first was the Příbram meteorite , which fell in Czechoslovakia (now the Czech Republic) in 1959. [ 89 ] In this case, two cameras used to photograph meteors captured images of the fireball.
The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite. Following the Příbram fall, other nations established automated observing programs aimed at studying infalling meteorites. [ 90 ] One of these was the Prairie Meteorite Network , operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern U.S. This program also observed a meteorite fall, the "Lost City" chondrite, allowing its recovery and a calculation of its orbit. [ 91 ] Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, "Innisfree", in 1977. [ 92 ] Finally, observations by the European Fireball Network, a descendant of the original Czech program that recovered Příbram, led to the discovery and orbit calculations for the Neuschwanstein meteorite in 2002. [ 93 ] On August 10, 1972, a meteor which became known as the 1972 Great Daylight Fireball was witnessed by many people as it moved north over the Rocky Mountains from the U.S. Southwest to Canada. It was filmed by a tourist at the Grand Teton National Park in Wyoming with an 8-millimeter color movie camera. [ 94 ] The object was roughly in the size range between a car and a house, and while it could have ended its life in a Hiroshima-sized blast, there was never any explosion. Analysis of the trajectory indicated that it never came much lower than 58 km (36 mi) off the ground, and the conclusion was that it had grazed Earth's atmosphere for about 100 seconds, then skipped back out of the atmosphere and returned to its orbit around the Sun. Many impact events occur without being observed by anyone on the ground. Between 1975 and 1992, American missile early warning satellites picked up 136 major explosions in the upper atmosphere. [ 95 ] In the November 21, 2002, edition of the journal Nature , Peter Brown of the University of Western Ontario reported on his study of U.S. early warning satellite records for the preceding eight years. He identified 300 flashes caused by 1 to 10 m (3 to 33 ft) meteors in that time period and estimated the rate of Tunguska-sized events as once in 400 years. [ 96 ] Eugene Shoemaker estimated that an event of such magnitude occurs about once every 300 years, though more recent analyses have suggested he may have overestimated by an order of magnitude.
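A hedged back-of-envelope sketch of what these rate estimates imply: if the 300 satellite-detected flashes over eight years are read as a global rate for impactors larger than about 1 m, and Tunguska-class events (taken here as roughly 50 m bodies) occur about once in 400 years, a single power law N(>D) = k·D^(-b) fit through those two points interpolates the rates in between. The 50 m Tunguska size, the global reading of the flash counts, and the single-power-law form are all assumptions for illustration, not claims of the cited studies.

import math

# Two assumed anchor points (see above); rates in events per year.
d1, rate1 = 1.0, 300 / 8      # >~1 m objects: ~37.5 detected flashes/year
d2, rate2 = 50.0, 1 / 400     # Tunguska-class, taken here as ~50 m bodies

# Fit the cumulative power law N(>D) = k * D**(-b) through both points.
b = math.log(rate1 / rate2) / math.log(d2 / d1)
k = rate1 * d1 ** b

for d in (1, 5, 10, 20, 50, 100):
    interval = 1 / (k * d ** (-b))          # mean years between impacts
    print(f">{d:>3} m: roughly one event per {interval:,.2f} years")

Under these assumptions the fitted exponent is about 2.5, giving, for example, a Chelyabinsk-sized (~20 m) impactor every few decades, which is broadly consistent with the observational record.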
In the dark morning hours of January 18, 2000, a fireball exploded over the city of Whitehorse, Yukon Territory at an altitude of about 26 km (16 mi), lighting up the night like day. The meteor that produced the fireball was estimated to be about 4.6 m (15 ft) in diameter, with a mass of about 180 tonnes. This blast was also featured on the Science Channel series Killer Asteroids , with several witness reports from residents in Atlin, British Columbia . On 7 June 2006, a meteor was observed striking a location in the Reisadalen valley in Nordreisa Municipality in Troms County, Norway. Although initial witness reports stated that the resultant fireball was equivalent to the Hiroshima nuclear explosion , scientific analysis places the force of the blast at anywhere from 100 to 500 tonnes TNT equivalent, around three percent of Hiroshima's yield. [ 97 ] On 15 September 2007, a chondritic meteor crashed near the village of Carancas in southeastern Peru near Lake Titicaca , leaving a water-filled hole and spewing gases across the surrounding area. Many residents became ill shortly after the impact, apparently from the noxious gases. On 7 October 2008, an approximately 4-meter asteroid labeled 2008 TC 3 was tracked for 20 hours as it approached Earth and as it fell through the atmosphere and impacted in Sudan. This was the first time an object was detected before it reached the atmosphere, and hundreds of pieces of the meteorite were recovered from the Nubian Desert . [ 98 ] On 15 February 2013, an asteroid entered Earth's atmosphere over Russia as a fireball and exploded above the city of Chelyabinsk during its passage through the Ural Mountains region at 09:13 YEKT (03:13 UTC ). [ 99 ] [ 100 ] The object's air burst occurred at an altitude between 30 and 50 km (19 and 31 mi) above the ground, [ 101 ] and about 1,500 people were injured, mainly by glass from windows shattered by the shock wave. Two were reported in serious condition; however, there were no fatalities. [ 102 ] Initially some 3,000 buildings in six cities across the region were reported damaged due to the explosion's shock wave, a figure which rose to over 7,200 in the following weeks. [ 103 ] [ 104 ] The Chelyabinsk meteor was estimated to have caused over $30 million in damage. [ 105 ] [ 106 ] It is the largest recorded object to have encountered the Earth since the 1908 Tunguska event. [ 107 ] [ 108 ] The meteor is estimated to have had an initial diameter of 17–20 metres and a mass of roughly 10,000 tonnes. On 16 October 2013, a team from Ural Federal University led by Victor Grokhovsky recovered a large fragment of the meteor from the bottom of Russia's Lake Chebarkul, about 80 km west of the city. [ 109 ] On 1 January 2014, a 3-meter (10 foot) asteroid, 2014 AA , was discovered by the Mount Lemmon Survey and observed over the next hour, and was soon found to be on a collision course with Earth. The exact impact location was uncertain, constrained to a line between Panama , the central Atlantic Ocean, The Gambia , and Ethiopia. Around the expected time (2 January 3:06 UTC), an infrasound burst was detected near the center of the impact range, in the middle of the Atlantic Ocean. [ 110 ] [ 111 ] This marked the second time a natural object was identified prior to impacting Earth, after 2008 TC3. Nearly two years later, on October 3, WT1190F was detected orbiting Earth on a highly eccentric orbit, taking it from well within the geocentric satellite ring to nearly twice the orbit of the Moon. It was estimated to be perturbed by the Moon onto a collision course with Earth on November 13. With over a month of observations, as well as precovery observations found dating back to 2009, it was found to be far less dense than a natural asteroid should be, suggesting that it was most likely an unidentified artificial satellite. As predicted, it fell over Sri Lanka at 6:18 UTC (11:48 local time). The sky in the region was very overcast, so only an airborne observation team was able to successfully observe it falling above the clouds. It is now thought to be a remnant of the Lunar Prospector mission in 1998, and was the third time any previously unknown object – natural or artificial – was identified prior to impact. On 22 January 2018, an object, A106fgF , was discovered by the Asteroid Terrestrial-impact Last Alert System (ATLAS) and identified as having a small chance of impacting Earth later that day. [ 112 ] As it was very dim, and only identified hours before its approach, no more than the initial 4 observations covering a 39-minute period were made of the object.
It is unknown whether it impacted Earth, but no fireball was detected in either infrared or infrasound, so if it did, it would have been very small, and likely near the eastern end of its potential impact area, in the western Pacific Ocean. On 2 June 2018, the Mount Lemmon Survey detected 2018 LA (ZLAF9B2), a small 2–5 meter asteroid which further observations soon showed had an 85% chance of impacting Earth. Soon after the impact, a fireball report from Botswana arrived at the American Meteor Society . Further observations with ATLAS extended the observation arc from 1 hour to 4 hours and confirmed that the asteroid had indeed impacted Earth in southern Africa, fully closing the loop with the fireball report and making this the third natural object confirmed to impact Earth, and the second on land after 2008 TC 3 . [ 113 ] [ 114 ] [ 115 ] On 8 March 2019, NASA announced the detection of a large airburst that occurred on 18 December 2018 at 11:48 local time off the eastern coast of the Kamchatka Peninsula . The Kamchatka superbolide is estimated to have had a mass of roughly 1,600 tons and a diameter of 9 to 14 meters depending on its density, making it the third largest asteroid to impact Earth since 1900, after the Chelyabinsk meteor and the Tunguska event. The fireball exploded in an airburst 25.6 kilometres (15.9 mi) above Earth's surface. 2019 MO , an approximately 4 m asteroid, was detected by ATLAS a few hours before it impacted the Caribbean Sea near Puerto Rico in June 2019. [ 116 ] In 2023, a small meteorite is believed to have crashed through the roof of a home in Trenton, New Jersey. The metallic rock was approximately 4 inches by 6 inches and weighed 4 pounds. The item was seized by police and tested for radioactivity. [ 117 ] The object was later confirmed to be a meteorite by scientists at The College of New Jersey, as well as meteorite expert Jerry Delaney, who previously worked at Rutgers University and the American Museum of Natural History. [ 118 ] In the late 20th and early 21st century, scientists put in place measures to detect Near Earth objects and to predict the dates, times, and locations of future asteroid impacts on Earth. The International Astronomical Union Minor Planet Center (MPC) is the global clearing house for information on asteroid orbits. NASA 's Sentry System continually scans the MPC catalog of known asteroids, analyzing their orbits for any possible future impacts. [ 119 ] Currently none are predicted (the single highest probability impact currently listed is the ~7 m asteroid 2010 RF 12 , which is due to pass Earth in September 2095 with only a 5% predicted chance of impacting). [ 120 ] Currently, prediction is mainly based on cataloging asteroids years before they are due to impact. This works well for larger asteroids (>1 km across), as they are easily seen from a long distance; over 95% of them are already known and their orbits have been measured, so any future impacts can be predicted long before they are on their final approach to Earth. Smaller objects are too faint to observe except when they come very close, and so most cannot be observed before their final approach. Current mechanisms for detecting asteroids on final approach rely on wide-field ground-based telescopes , such as the ATLAS system.
However, current telescopes cover only part of the Earth and, more importantly, cannot detect asteroids on the day side of the planet, which is why so few of the smaller asteroids that commonly impact Earth are detected during the few hours that they would be visible. [ 121 ] So far only four impact events have been successfully predicted, all from innocuous 2–5 m diameter asteroids detected a few hours in advance. In April 2018, the B612 Foundation reported "It's 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent certain when." [ 10 ] Also in 2018, physicist Stephen Hawking , in his final book Brief Answers to the Big Questions , considered an asteroid collision to be the biggest threat to the planet. [ 11 ] [ 12 ] In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event , and released the " National Near-Earth Object Preparedness Strategy Action Plan " to better prepare. [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation to launch a mission to intercept an asteroid. [ 18 ] The preferred method is to deflect rather than disrupt an asteroid. [ 122 ] [ 123 ] [ 124 ] Impact craters provide evidence of past impacts on other planets in the Solar System, including possible interplanetary terrestrial impacts. Since carbon dating cannot be applied, other points of reference are used to estimate the timing of these impact events. Mars provides some significant evidence of possible interplanetary collisions. The North Polar Basin on Mars is speculated by some to be evidence of an impact by a planet-sized body on the surface of Mars between 3.8 and 3.9 billion years ago, while Utopia Planitia is the largest confirmed impact basin and Hellas Planitia is the largest visible crater in the Solar System. The Moon provides similar evidence of massive impacts, with the South Pole–Aitken basin being the biggest. Mercury 's Caloris Basin is another example of a crater formed by a massive impact event. Rheasilvia on Vesta is an example of a crater formed by an impact that, judged by the ratio of crater size to body size, severely deformed a planetary-mass object. Impact craters on the moons of Saturn, such as Engelier and Gerin on Iapetus , Mamaldi on Rhea , Odysseus on Tethys , and Herschel on Mimas , form significant surface features. Models developed in 2018 to explain the unusual spin of Uranus support a long-held hypothesis that this was caused by an oblique collision with a massive object twice the size of Earth. [ 125 ] Jupiter is the most massive planet in the Solar System , and because of its large mass it has a vast sphere of gravitational influence, the region of space where an asteroid capture can take place under favorable conditions. [ 126 ] Jupiter is able to capture comets in orbit around the Sun with a certain frequency. In general, these comets complete several revolutions around the planet on unstable, highly elliptical orbits that are easily perturbed by solar gravity. While some of them eventually recover a heliocentric orbit , others crash into the planet or, more rarely, into its satellites. [ 127 ] [ 128 ] In addition to the mass factor, its relative proximity to the inner solar system allows Jupiter to influence the distribution of minor bodies there.
For a long time it was believed that these characteristics led the gas giant to expel most of the wandering objects in its vicinity from the system, or to capture them, and consequently to reduce the number of potentially dangerous objects for the Earth. Subsequent dynamical studies have shown that in reality the situation is more complex: the presence of Jupiter tends to reduce the frequency of impacts on the Earth of objects coming from the Oort cloud , [ 129 ] while it increases the frequency of impacts of asteroids [ 130 ] and short-period comets. [ 131 ] For this reason Jupiter is the planet of the Solar System characterized by the highest frequency of impacts, which justifies its reputation as the "sweeper" or "cosmic vacuum cleaner" of the Solar System. [ 132 ] Studies in 2009 suggest an impact frequency of one every 50–350 years for an object of 0.5–1 km in diameter; impacts with smaller objects would occur more frequently. Another study estimated that comets 0.3 km (0.19 mi) in diameter impact the planet once in approximately 500 years and those 1.6 km (0.99 mi) in diameter do so just once in every 6,000 years. [ 133 ] In July 1994, Comet Shoemaker–Levy 9 broke apart and collided with Jupiter, providing the first direct observation of an extraterrestrial collision of Solar System objects. [ 134 ] The event served as a "wake-up call", and astronomers responded by starting programs such as Lincoln Near-Earth Asteroid Research (LINEAR), Near-Earth Asteroid Tracking (NEAT), Lowell Observatory Near-Earth Object Search (LONEOS) and several others which have drastically increased the rate of asteroid discovery. The 2009 impact event happened on July 19, when a new black spot about the size of Earth was discovered in Jupiter's southern hemisphere by amateur astronomer Anthony Wesley . Thermal infrared analysis showed it was warm, and spectroscopic methods detected ammonia. JPL scientists confirmed that there had been another impact event on Jupiter, probably involving a small undiscovered comet or other icy body. [ 135 ] [ 136 ] [ 137 ] The impactor is estimated to have been about 200–500 meters in diameter. Later minor impacts were observed by amateur astronomers in 2010, 2012, 2016, and 2017; one impact was observed by Juno in 2020. In 1998, two comets were observed plunging toward the Sun in close succession. The first of these was on June 1 and the second the next day. A video of this, followed by a dramatic ejection of solar gas (unrelated to the impacts), can be found at the NASA [ 138 ] website. Both of these comets evaporated before coming into contact with the surface of the Sun. According to a theory by NASA Jet Propulsion Laboratory scientist Zdeněk Sekanina , the latest impactor to actually make contact with the Sun was the "supercomet" Howard-Koomen-Michels , also known as Solwind 1, on August 30, 1979. [ 139 ] (See also sungrazer .) In 2010, between January and May, Hubble 's Wide Field Camera 3 [ 140 ] took images of an unusual X shape that originated in the aftermath of the collision between the asteroid P/2010 A2 and a smaller asteroid . Around March 27, 2012, signs of an impact on Mars were observed. Images from the Mars Reconnaissance Orbiter provide compelling evidence of the largest impact observed to date on Mars in the form of fresh craters, the largest measuring 48.5 by 43.5 meters. It is estimated to have been caused by an impactor 3 to 5 meters across. [ 141 ]
On March 19, 2013, an impact occurred on the Moon that was visible from Earth, when a boulder-sized meteoroid roughly 30 cm across slammed into the lunar surface at 90,000 km/h (25 km/s; 56,000 mph), creating a 20-meter crater. [ 142 ] [ 143 ] NASA has actively monitored lunar impacts since 2005, [ 144 ] tracking hundreds of candidate events. [ 145 ] [ 146 ] On 18 September 2021, an impact event on Mars formed a cluster of craters, the largest being 130 m in diameter. On 24 December 2021, another impact on Mars created a 150 m wide crater; debris was ejected up to 35 km (19 miles) from the impact site. [ 147 ] In recent decades, human-made probes have impacted, either intentionally or unintentionally, several objects. Most of these probes were destroyed with little observable damage to their target. Some such probes on the Moon and Mars have left observable craters and debris. This includes landing sites such as the 1969 Apollo 11 Moon landing site. High-velocity crashes, such as those of the 1972 Apollo 16 S-IVB rocket, [ 148 ] [ 149 ] the 2016 Schiaparelli EDM [ 150 ] [ 151 ] and the 2023 Luna 25 , [ 152 ] have also made physical changes to the landscape in the form of impact craters. Specific missions designed to study impact effects, including ejecta, on target objects include the 2005 Deep Impact mission on Tempel 1 , which caused a crater more than 100 meters in diameter, [ 153 ] the 2019 Hayabusa2 mission on 162173 Ryugu , the 2020 OSIRIS-REx mission on 101955 Bennu [ 154 ] and the 2022 Double Asteroid Redirection Test on Dimorphos . [ 155 ] [ 156 ] Observations show that Dimorphos lost approximately 1 million kilograms of mass and had its orbit changed as a result of the deliberate impact with the human-made probe. [ 157 ] Collisions between galaxies, or galaxy mergers , have been observed directly by space telescopes such as Hubble and Spitzer. However, collisions in planetary systems, including stellar collisions , while long speculated, have only recently begun to be observed directly. In 2013, an impact between minor planets was detected around the star NGC 2547 ID 8 by Spitzer and confirmed by ground observations. Computer modelling suggests that the impact involved large asteroids or protoplanets, similar to the events believed to have led to the formation of terrestrial planets like the Earth. [ 9 ]
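For a sense of scale, the March 2013 lunar impact described earlier in this article lends itself to a simple kinetic-energy estimate. This is a hedged sketch: the 30 cm size and 25 km/s speed are the figures quoted above, while the ~3,000 kg/m³ stony density is an assumption introduced here, not a value from the source.

import math

# Back-of-envelope energy of the 2013 lunar impactor described above.
diameter = 0.30          # m, as quoted above
density = 3000.0         # kg/m^3, assumed stony composition (not from source)
speed = 25_000.0         # m/s (about 90,000 km/h)

radius = diameter / 2
mass = density * (4 / 3) * math.pi * radius ** 3       # roughly 42 kg
energy = 0.5 * mass * speed ** 2                       # joules
print(f"mass ~{mass:.0f} kg, energy ~{energy:.2e} J "
      f"(~{energy / 4.184e9:.1f} tons of TNT)")

Under these assumptions the impact released on the order of 10^10 joules, a few tons of TNT equivalent, which is consistent with a flash bright enough to be seen from Earth and a crater tens of times the impactor's size.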
https://en.wikipedia.org/wiki/Impact_event
Impact ionization is the process in a material by which one energetic charge carrier can lose energy by the creation of other charge carriers. For example, in semiconductors , an electron (or hole ) with enough kinetic energy can knock a bound electron out of its bound state (in the valence band ) and promote it to a state in the conduction band , creating an electron-hole pair . For carriers to have sufficient kinetic energy, a sufficiently large electric field must be applied, [ 1 ] in essence requiring a sufficiently large voltage but not necessarily a large current. If this occurs in a region of high electric field, it can result in avalanche breakdown . This process is exploited in avalanche diodes , in which a small optical signal is amplified before entering an external electronic circuit; in an avalanche photodiode the original charge carrier is created by the absorption of a photon . The impact ionization process is used in modern cosmic dust detectors like the Galileo Dust Detector [ 2 ] and dust analyzers Cassini CDA , [ 3 ] Stardust CIDA and the Surface Dust Analyser [ 4 ] for the identification of dust impacts and the compositional analysis of cosmic dust particles. In some sense, impact ionization is the reverse process to Auger recombination . Avalanche photodiodes (APDs) are used in optical receivers: the photocurrent generated by incoming photons is multiplied before the signal reaches the receiver circuitry, which increases the sensitivity of the receiver, since the signal is amplified before it encounters the thermal noise associated with the receiver circuit.
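As an illustration of how avalanche multiplication behaves near breakdown, the empirical Miller formula M = 1 / (1 − (V/V_br)^n) is often used to model the multiplication gain as a function of reverse bias. The breakdown voltage and exponent below are illustrative placeholder values, not parameters of any particular device described in this article.

def avalanche_gain(v: float, v_br: float = 40.0, n: float = 4.0) -> float:
    """Empirical Miller formula for avalanche multiplication (valid for v < v_br)."""
    if not 0 <= v < v_br:
        raise ValueError("model is only valid below the breakdown voltage")
    return 1.0 / (1.0 - (v / v_br) ** n)

# Gain rises slowly at low bias, then diverges as v approaches breakdown,
# which is why APD bias must be controlled precisely in optical receivers.
for v in (10, 20, 30, 35, 39, 39.9):
    print(f"{v:>5} V -> M = {avalanche_gain(v):8.1f}")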
https://en.wikipedia.org/wiki/Impact_ionization
Impact mills are one of two general classes of milling devices used to reduce the particle size of a material. The other class of mills are "attrition" or grinding mills. Impact mills pulverize the material upon impact. The feasibility of impact mills was greatly enhanced by the mechanization and engineering of the Industrial Revolution . Prior to the Industrial Revolution, milling was primarily done by attrition, grinding the material between two surfaces. [ 1 ] Attrition milling continues to be the dominant milling class, particularly in the milling of agricultural products (i.e. grain into flour). Roller mills and stone mills are two examples of attrition (grinding) mills. Impact mills either pulverize the material by simply employing gravity or they mill dynamically upon impact with a high-speed rotor , hammer or pin . Gravitational impact mills pulverize the material inside a rotating chamber. [ 2 ] This is accomplished by a cascading motion of larger pieces repetitively impacting and compressively grinding the product into finer particles as it rotates in the chamber. These are generally referred to as " autogenous " impact mills. This action can be enhanced by placing steel balls in the chamber; the class of gravitational impact mills that incorporates steel balls in the chamber is appropriately referred to as a " ball mill ". Dynamic impact occurs when material is dropped into a chamber where it receives a pulverizing blow from a hammer , rotor or pin. [ 3 ] Pulverizing can be enhanced by engineering the rotor or hammer [ 4 ] to pass close to a serrated fixed stator. Pin, unifine, and VSI mills are examples of dynamic impact mills.
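For the gravitational (tumbling) mills described above, the cascading action depends on rotation speed: above a critical speed, the charge centrifuges against the chamber wall instead of cascading and falling. A minimal sketch of the standard estimate follows, obtained by balancing gravity against centrifugal force at the wall; the 2 m chamber diameter and the 70% operating fraction are illustrative values, not figures from this article.

import math

G = 9.81  # m/s^2, gravitational acceleration

def critical_speed_rpm(chamber_diameter_m: float) -> float:
    """Rotation rate at which centrifugal force at the wall equals gravity."""
    radius = chamber_diameter_m / 2
    omega = math.sqrt(G / radius)            # rad/s where m*omega^2*r = m*g
    return omega * 60 / (2 * math.pi)        # convert to revolutions per minute

d = 2.0                                       # m, illustrative chamber size
nc = critical_speed_rpm(d)
print(f"critical speed ~{nc:.1f} rpm; cascading operation is often "
      f"quoted near ~70% of critical (~{0.7 * nc:.1f} rpm)")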
https://en.wikipedia.org/wiki/Impact_mill
Agent Orange is a herbicide, classified as a defoliant , that was used most notably by the U.S. military during the Vietnam War . Its primary purpose was strategic deforestation, destroying the forest cover and food resources necessary for the implementation and sustainability of the North Vietnamese style of guerilla warfare . [ 1 ] U.S. Agent Orange usage reached an apex during Operation Ranch Hand , in which the material (with its extremely toxic impurity, dioxin ) was sprayed over 4.5 million acres of land in Vietnam from 1961 to 1971. [ 2 ] The use of Agent Orange has left tangible, long-term impacts upon the Vietnamese people who live in Vietnam as well as those who fled in the mass exodus from 1978 to the early 1990s. Corrective studies conducted in hindsight indicate that previous estimates of Agent Orange exposure were biased by government intervention and underestimation, such that current estimates for dioxin release are almost double those previously predicted. [ 3 ] Census data indicate that the United States military directly sprayed millions of Vietnamese during strategic Agent Orange use. [ 3 ] The effects of Agent Orange on the Vietnamese include a range of health effects, ecological effects, and sociopolitical effects. The most illustrative effects of Agent Orange upon the Vietnamese people are the health effects. [ 4 ] Scientific consensus acknowledges both the importance of accuracy in assessing site-specific cancer risk and the difficulty of identifying Agent Orange as the cause of any specific cancer risk. Previous studies on the subject have not been generalizable: though they demonstrate a statistically significant increase in cancer risk, the populations studied have been "Western" veterans or Korean veterans, or the sample sizes were too small to be considered appropriate. [ 5 ] The U.S. Environmental Protection Agency defines the margin of exposure as "the ratio of the no-observed adverse-effect-level to the estimated exposure dose." [ 6 ] Independent scientific analyses of the epidemiology of Agent Orange suggest that there is little to no margin of exposure for dioxin or dioxin-like compounds on vertebrates, meaning that even passive contact or genetic lineage has devastating repercussions. [ 7 ]
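The EPA definition quoted above is a simple ratio, which a tiny worked example makes concrete. The NOAEL and exposure figures below are hypothetical placeholders for illustration, not measured values from the cited studies.

def margin_of_exposure(noael: float, estimated_dose: float) -> float:
    """EPA margin of exposure: no-observed-adverse-effect level / estimated dose."""
    return noael / estimated_dose

noael = 1.0e-3     # hypothetical NOAEL, mg per kg body weight per day
dose = 5.0e-4      # hypothetical estimated exposure dose, same units

# An MOE near or below 1 means estimated exposure approaches the NOAEL,
# which is the situation the cited analyses describe for dioxin-like compounds.
print(f"MOE = {margin_of_exposure(noael, dose):.1f}")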
Rigorous studies have consequently been conducted to instead measure the levels of dioxin still present in the blood samples of citizens of both North and South Vietnam. These studies indicate that though most Agent Orange studies have focused narrowly on American veterans, Vietnamese citizens experienced exposure of far greater breadth and scope. The pervasion of dioxin in the Vietnamese people living in Vietnam, as described by Schecter et al. (made clear in very high levels of TCDD, or 2,3,7,8-tetrachlorodibenzo-p-dioxin, in human milk, adipose tissue, and blood as measured by gas chromatography and mass spectroscopy), is substantially greater than that of other populations (Schecter et al., 1995). [ 8 ] Dioxin levels were corroborated in subsequent studies, most notably those conducted in areas geographically near bombing sites and spray missions during the course of Operation Ranch Hand, approximately between 1962 and 1970. A 2002 sample study of the dioxin levels in the city of Biên Hòa , a populous city in southern Vietnam located in the proximity of an air base used for spray missions, indicated noticeably elevated blood dioxin levels despite a 20-year period of peace, with Agent Orange specifically being found in the blood samples. [ 9 ] Emigrants to the city, and even children born after the end of the Agent Orange spraying operations, had blood samples indicating a presence of dioxin (Schecter et al., 2001). [ 9 ] Meta-studies have affirmed the dioxin pathway of genetic inheritance, e.g. a statistically significant correlation between paternal exposure to Agent Orange and spina bifida across three case-control studies from 1966 to 2008. [ 10 ] According to the Vietnamese government, the US program resulted in 400,000 deaths from a range of cancers and other ailments, and approximately 4.8 million Vietnamese people were exposed to Agent Orange according to census data. [ 11 ] [ 12 ] Following the end of the Vietnam War, two million refugees from Vietnam as well as Laos and Cambodia fled to other countries. By 1992, upwards of 1 million refugees had settled in the United States, 750,000 in other North American and European countries, and many others remained in refugee camps from the Thai-Cambodian border to Hong Kong, unable to obtain the visas and immigration documents necessary to permanently immigrate. [ 13 ] Scientific reports have concluded that refugees who had reported being exposed to chemical sprays while in South Vietnam continued to experience pain in the eyes and skin as well as gastrointestinal upsets. In one study, ninety-two percent of participants suffered incessant fatigue; others reported miscarriages and births with severe deformities. [ 14 ] Meta-analyses of the most current studies on the association between Agent Orange and birth defects have concluded that there is a statistically significant correlation such that having a parent who was exposed to Agent Orange at any point in their life increases one's likelihood of either possessing or acting as a genetic carrier of birth defects. Vietnamese studies indicated an even greater correlation between parental exposure and birth defects, with scholars concluding that the rate of association varied with the degree and intensity of exposure. [ 15 ] Agent Orange had devastating ecological effects on Vietnam's plant life, which also contributed to the creation of refugees during the war. The ecological effects of Agent Orange have been reported to continue to affect the daily lives of Vietnamese citizens. A study showed dioxin contamination in soil and sediment samples and hypothesized "that a major route of current and past exposures is from the movement of dioxin from soil into river sediment, then into fish, and from fish consumption into people." [ 9 ] Studies in the Aluoi Valley, a village near a now-defunct military base that operated between 1963 and 1966, confirmed this process of biological magnification, as contaminated soil acted as a "reservoir" of TCDD, the Agent Orange toxin, which would later transfer to fish and ducks and finally to humans, all via consumption. [ 16 ] The International Union for Conservation of Nature concluded that "much of the damage can probably never be repaired." [ 17 ] Official US military records have listed figures including the destruction of 20% of the jungles of South Vietnam and 20–36% (with other figures reporting 20–50%) of the mangrove forests. [ 18 ]
An overall reduction in biomass , i.e. plant and animal populations, has been noted, along with loss of soil nutrients and of ecosystem productivity in terms of growth yields. [ 19 ] Forests that were sprayed multiple times (estimates point to about 500,000 ha (1,200,000 acres) of such land) have suffered far greater ecological damage; recovery times are uncertain, and "the plant and animal communities have been totally disrupted" due to "total annihilation of the vegetative cover". [ 19 ] As a long-term effect of this deforestation, mature foliage remains scarce and mangroves have been unable to regrow after even a single spraying, leaving many patches of economically unviable grass colloquially referred to as "American grass". [ 18 ] Farmland that was destroyed in the process of militarization and the creation of battlefields produced an agricultural wasteland, forcing Vietnamese farmers to work with contaminated soil for more than 40 years. [ 20 ] The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme , lawyers, historians and other academics as an ecocide . [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] The use of Agent Orange is considered a "notorious example" of the expropriation of the human environment for warfare, forcing many rural Vietnamese to move to cities as ecological refugees to survive because their crops and livelihood had been destroyed. [ 26 ] Harvard professor Samuel P. Huntington noted that during the Vietnam War the urban population doubled or tripled as people moved from rural areas to escape war. [ 27 ] Jim Glassman argued that ecological destruction and crop destruction, including from Agent Orange, produced rural refugees to cities and helped, as part of counterinsurgency efforts, to control rural areas and isolate the population from the Viet Cong. He further wrote that the millions of war refugees "cannot be seen narrowly as the result of one or another form of warfare". [ 28 ] Various socio-political effects of Agent Orange have also been documented. Difficulty in maintaining judicial and civil transparency persists despite decades passing since the use of Agent Orange by the United States military. [ 29 ] Corporations criticized for the ethics of their chemical use have been described as "antagonistic and focused on technological arguments". [ 30 ] The first legal proceeding on behalf of Vietnamese victims was undertaken in January 2004 in a New York district court. [ 31 ] Ultimately the district court held that "herbicide spraying . . . did not constitute a war crime pre-1975" and that international law prevented the companies that produced Agent Orange from being held liable. [ 32 ] Alternative models for reconciling the harms done by dioxin to the Vietnamese people with reparations have also been proposed. Some have called for the defoliation and destruction to be deemed an "environmental war crime". [ 32 ] Law reviews have even called for a revision of the US litigation process, arguing that the political sway of aggregate private interests has harmful implications for justice, reparations, and accountability. [ 33 ] Citizen-to-citizen dialogue allowing individuals to call for accountability by the United States government was first established in 2006 by the Ford Foundation . Citizens sought a legal avenue by which private citizens and policy makers could work together to form a coherent plan of action in addressing the legacy of Agent Orange.
The US-Vietnam Dialogue Group on Agent Orange/Dioxin, composed of members of the Aspen Institute , Vietnam National University, and the Vietnam Veterans Association, is the most notable example of this civic response. The group runs long-term programs and closely monitors the progress of current plans to address Agent Orange. [ 34 ] Questions of governmental accountability have been raised about who should be responsible for allowing the use of the dioxin-containing chemical despite knowing the risks. Those who said that the use (at the time of the Vietnam War) of Agent Orange was merely a means of defeating the Viet Cong did not believe that the defoliant violated the Geneva Protocol . [ 18 ] During the war, resolutions were introduced to the United Nations charging that the U.S. was violating the 1925 Geneva Protocol , which regulated the use of chemical and biological weapons ; however, the resolutions were defeated. [ 35 ] The extensive environmental damage that resulted from usage of the herbicide prompted the United Nations to pass Resolution 31/72 and ratify the Environmental Modification Convention in 1976. Many states do not regard this as a complete ban on the use of herbicides and defoliants in warfare. [ 36 ] There is reason to believe that sociopolitical context constrains the ability of government bodies to reveal the truth regarding food-behavior research as well as the scientific studies these bodies craft; governments may have an incentive to disrupt or obstruct investigations into the matter. [ 37 ] Additional remedial policies have been proposed by concerned groups of citizens due to a lack of governmental accountability. The US-Vietnam Dialogue Group on Agent Orange/Dioxin of the Aspen Institute established a 10-year Plan of Action on June 16, 2010, to call for governmental participation in addressing the effects of herbicides in Vietnam. This plan calls for the United States and the Vietnamese government to work with other governments and NGOs to invest 30 million dollars over ten years to clean and purify harmed ecosystems and to expand services to families who have been affected medically and physically by Agent Orange. [ 38 ] The current scientific consensus on the effects of Agent Orange is that scientists at the time made erroneous judgments about how devastating the chemical could be. Scientific reviews conducted after the fact have indicated that many of the supposedly objective studies concluding that Agent Orange use was beneficial were based on access to still-classified documents and little else. [ 39 ] According to Koppes's study, scientists repeatedly minimized the harmful effects of the chemical and ignored empirical evidence. [ 39 ]
https://en.wikipedia.org/wiki/Impact_of_Agent_Orange_in_Vietnam
The impact of alcohol on aging is multifaceted. Evidence shows that alcoholism or alcohol abuse can cause both accelerated (or premature) aging – in which symptoms of aging appear earlier than normal – and exaggerated aging, in which the symptoms appear at the appropriate time but in a more exaggerated form. [ 1 ] The effects of alcohol use disorder on the aging process include hypertension , cardiac dysrhythmia , cancers , gastrointestinal disorders , neurocognitive deficits , bone loss , and emotional disturbances, especially depression . [ 2 ] Furthermore, chronic ethanol consumption can contribute to premature aging by depleting cellular NAD+ , a key coenzyme vital for DNA repair and maintaining cellular health through proteins like sirtuins . [ 3 ] [ 4 ] [ 5 ] [ 6 ] Chronic ethanol consumption is increasingly understood to contribute to accelerated aging through various mechanisms, notably via the depletion and altered balance of nicotinamide adenine dinucleotide (NAD+) . [ 7 ] The metabolism of ethanol, primarily in the liver by enzymes such as alcohol dehydrogenase and aldehyde dehydrogenase , requires NAD+ as a cofactor, converting it to its reduced form, NADH. This process significantly increases the NADH/NAD+ ratio, leading to a relative depletion of available NAD+ within cells. [ 8 ] [ 9 ] NAD+ is a crucial coenzyme for sirtuins , a class of proteins vital for maintaining cellular health, promoting DNA repair , regulating metabolism , enhancing mitochondrial function, and improving stress resistance, all processes that counteract aging. [ 7 ] The reduced availability of NAD+ due to chronic alcohol consumption can therefore impair sirtuin activity, diminishing these protective cellular functions. Additionally, alcohol and its toxic metabolite acetaldehyde can cause DNA damage, which triggers the activation of Poly(ADP-ribose) polymerases (PARPs) , enzymes that heavily consume NAD+ during the DNA repair process, further exacerbating NAD+ depletion. [ 9 ] Given that NAD+ levels naturally decline with age, this alcohol-induced disruption of NAD+ homeostasis and the subsequent impairment of critical NAD+-dependent pathways like those mediated by sirtuins and PARPs can contribute to features indicative of accelerated aging and an increased susceptibility to age-related diseases. [ 7 ] [ 9 ] Alcohol is a potent neurotoxin . [ 10 ] The National Institute on Alcohol Abuse and Alcoholism has found, "Alcoholism may accelerate normal aging or cause premature aging of the brain." [ 11 ] Another report by the same agency found, "Chronic alcohol consumption, as well as chronic glucocorticoid exposure, can result in premature and/or exaggerated aging." Specifically, alcohol activates the HPA axis , causing glucocorticoid secretion and thus elevating levels of stress hormones in the body. Chronic exposure to these hormones results in an acceleration of the aging process, which is associated with "gradual, but often dramatic, changes over time in almost every physiological system in the human body. Combined, these changes result in decreased efficiency and resiliency of physiological function." Chronic stress and chronic heavy alcohol use cause a similar premature aging effect, including nerve cell degeneration in the hippocampus . [ 1 ] According to the National Institutes of Health, researchers now understand that drinking moderate amounts of alcohol can protect the hearts of some people from the risks of coronary artery disease .
[ 12 ] However, it is not possible to predict in which people alcoholism will become a problem. Given these and other risks, the American Heart Association cautions people not to start drinking. [ 6 ] A study published in August 2010 in the journal Alcoholism: Clinical and Experimental Research followed 1,824 participants between the ages of 55 and 65 and found that, even after adjusting for all suspected covariates, abstainers and heavy drinkers continued to show increased mortality risks of 51% and 45%, respectively, compared to moderate drinkers. [ 13 ] A follow-up study lists several cautions in interpreting the findings. For example, the results do not address nor endorse initiation of drinking among nondrinkers, and persons who have medical conditions that would be worsened by alcohol consumption should not drink alcohol. [ 14 ] Additional research suggests that the reasons for alcohol abstinence may be a determining factor in the outcomes for abstainers: those who do not drink because of existing medical conditions or because of previous substance use disorder issues have the highest rates of early death among the abstainers. Other groups of abstainers, such as those who do not drink because of family upbringing or moral/religious reasons, have mortality risks as low as those who drink in moderation. [ 15 ] Excessive alcohol consumption, especially of distilled alcohol , is responsible for higher mortality rates and lower life expectancy for men in Eastern Europe , especially the former Soviet Union . [ 16 ] [ 17 ]
https://en.wikipedia.org/wiki/Impact_of_alcohol_on_aging
The impact of nanotechnology extends from its medical , ethical , mental , legal and environmental applications, to fields such as engineering, biology, chemistry, computing, materials science, and communications. Major benefits of nanotechnology include improved manufacturing methods, water purification systems, energy systems, physical enhancement , nanomedicine , better food production methods, nutrition and large-scale infrastructure auto-fabrication. [ 1 ] Nanotechnology's reduced size may allow for automation of tasks which were previously inaccessible due to physical restrictions, which in turn may reduce labor, land, or maintenance requirements placed on humans. Potential risks include environmental, health, and safety issues, as well as transitional effects such as the displacement of traditional industries as the products of nanotechnology become dominant; some of these developments are also of concern to privacy rights advocates. These risks may be particularly important if the potential negative effects of nanoparticles are overlooked. Whether nanotechnology merits special government regulation is a controversial issue. Regulatory bodies such as the United States Environmental Protection Agency and the Health and Consumer Protection Directorate of the European Commission have started dealing with the potential risks of nanoparticles. The organic food sector has been the first to act, with the regulated exclusion of engineered nanoparticles from certified organic produce, first in Australia and the UK , [ 2 ] and more recently in Canada , as well as for all food certified to Demeter International standards. [ 3 ] The presence of nanomaterials (materials that contain nanoparticles ) is not in itself a threat; only certain aspects can make them risky, in particular their mobility and their increased reactivity. Only if certain properties of certain nanoparticles are harmful to living beings or the environment are we faced with a genuine hazard; in that case the result can be called nanopollution. In addressing the health and environmental impact of nanomaterials, we need to differentiate between two types of nanostructures: (1) nanocomposites, nanostructured surfaces and nanocomponents (electronic, optical, sensors etc.), where nanoscale particles are incorporated into a substance, material or device ("fixed" nanoparticles); and (2) "free" nanoparticles, where at some stage in production or use individual nanoparticles of a substance are present. These free nanoparticles could be nanoscale species of elements or simple compounds, but also complex compounds where, for instance, a nanoparticle of a particular element is coated with another substance ("coated" or "core-shell" nanoparticles). There seems to be consensus that, although one should be aware of materials containing fixed nanoparticles, the immediate concern is with free nanoparticles. Nanoparticles are very different from their everyday counterparts, so their adverse effects cannot be derived from the known toxicity of the macro-sized material. This poses significant issues for addressing the health and environmental impact of free nanoparticles. To complicate things further, a powder or liquid containing nanoparticles is almost never monodisperse, but instead contains a range of particle sizes. This complicates the experimental analysis, as larger nanoparticles might have different properties from smaller ones.
Also, nanoparticles show a tendency to aggregate, and such aggregates often behave differently from individual nanoparticles. The health impacts of nanotechnology are the possible effects that the use of nanotechnological materials and devices will have on human health . As nanotechnology is an emerging field, there is great debate regarding the extent to which nanotechnology will benefit or pose risks for human health. Nanotechnology's health impacts can be split into two aspects: the potential for nanotechnological innovations to have medical applications to cure disease, and the potential health hazards posed by exposure to nanomaterials . During the COVID-19 pandemic, researchers, engineers and medical professionals drew on a highly developed collection of nanoscience and nanotechnology approaches to explore ways the field could help the medical, technical, and scientific communities fight the pandemic. [ 4 ] Nanomedicine is the medical application of nanotechnology . [ 5 ] The approaches to nanomedicine range from the medical use of nanomaterials , to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology . Nanomedicine seeks to deliver a valuable set of research tools and clinically helpful devices in the near future. [ 6 ] [ 7 ] The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging. [ 8 ] Neuro-electronic interfaces and other nanoelectronics -based sensors are another active goal of research. Further down the line, the speculative field of molecular nanotechnology holds that cell repair machines could revolutionize medicine and the medical field. Nanomedicine research is directly funded, with the US National Institutes of Health in 2005 funding a five-year plan to set up four nanomedicine centers. In April 2006, the journal Nature Materials estimated that 130 nanotech-based drugs and delivery systems were being developed worldwide. [ 9 ] Nanomedicine is a large industry, with nanomedicine sales reaching $6.8 billion in 2004. With over 200 companies and 38 products worldwide, a minimum of $3.8 billion in nanotechnology R&D is being invested every year. [ 10 ] As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy. Nanotoxicology is the field which studies potential health risks of nanomaterials. The extremely small size of nanomaterials means that they are much more readily taken up by the human body than larger-sized particles. How these nanoparticles behave inside the organism is one of the significant issues that needs to be resolved. The behavior of nanoparticles is a function of their size, shape and surface reactivity with the surrounding tissue. For example, they could cause overload on phagocytes , cells that ingest and destroy foreign matter, thereby triggering stress reactions that lead to inflammation and weaken the body's defense against other pathogens. Apart from what happens if non-degradable or slowly degradable nanoparticles accumulate in organs, another concern is their potential interaction with biological processes inside the body: because of their large surface area, nanoparticles on exposure to tissue and fluids will immediately adsorb onto their surface some of the macromolecules they encounter. This may, for instance, affect the regulatory mechanisms of enzymes and other proteins.
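The surface-area argument above can be made quantitative: for a fixed mass of material divided into spheres, the total surface area scales inversely with particle radius, so nanoscale particles expose vastly more surface per unit mass. A short numerical sketch follows; all input values are illustrative, not measurements from the cited studies.

import math

total_mass = 1.0e-3     # kg of material, illustrative
density = 2000.0        # kg/m^3, illustrative

for radius in (1e-3, 1e-6, 50e-9):                     # 1 mm, 1 um, 50 nm
    particle_mass = density * (4 / 3) * math.pi * radius ** 3
    n_particles = total_mass / particle_mass
    total_area = n_particles * 4 * math.pi * radius ** 2
    print(f"r = {radius:.0e} m: total surface area ~{total_area:.3g} m^2")

Under these assumptions, the same gram of material presents about 0.0015 m² of surface as millimetre grains but roughly 30 m² as 50 nm particles, a factor of 20,000, which is why adsorption and reactivity dominate nanoparticle behaviour.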
Health and environmental issues combine in the workplaces of companies engaged in producing or using nanomaterials and in the laboratories engaged in nanoscience and nanotechnology research. It is safe to say that current workplace exposure standards for dusts cannot be applied directly to nanoparticle dusts. The National Institute for Occupational Safety and Health (NIOSH) has conducted initial research on how nanoparticles interact with the body's systems and how workers might be exposed to nano-sized particles in the manufacturing or industrial use of nanomaterials. NIOSH currently offers interim guidelines for working with nanomaterials consistent with the best scientific knowledge. [ 11 ] At the National Personal Protective Technology Laboratory of NIOSH, studies investigating the filter penetration of nanoparticles through NIOSH-certified and EU-marked respirators , as well as non-certified dust masks, have been conducted. [ 12 ] These studies found that the most penetrating particle size range was between 30 and 100 nanometers, and that leak size was the largest factor in the number of nanoparticles found inside the respirators of the test dummies. [ 13 ] [ 14 ] Other properties of nanomaterials that influence toxicity include chemical composition, shape, surface structure, surface charge, aggregation and solubility, [ 15 ] and the presence or absence of functional groups of other chemicals. [ 16 ] The large number of variables influencing toxicity means that it is difficult to generalise about health risks associated with exposure to nanomaterials – each new nanomaterial must be assessed individually and all material properties must be taken into account. Literature reviews have shown that the release of engineered nanoparticles, and the resulting personal exposure, can occur during various work activities. [ 17 ] [ 18 ] [ 19 ] This has alerted regulatory bodies to the need for prevention strategies and regulations at nanotechnology workplaces. The environmental impact of nanotechnology is the possible effects that the use of nanotechnological materials and devices will have on the environment . [ 20 ] As nanotechnology is an emerging field, there is debate regarding the extent to which industrial and commercial use of nanomaterials will affect organisms and ecosystems. Nanotechnology's environmental impact can be split into two aspects: the potential for nanotechnological innovations to help improve the environment, and the possibly novel type of pollution that nanotechnological materials might cause if released into the environment. Green nanotechnology refers to the use of nanotechnology to enhance the environmental sustainability of processes producing negative externalities . It also refers to the use of the products of nanotechnology to enhance sustainability . It includes making green nano-products and using nano-products in support of sustainability. Green nanotechnology has been described as the development of clean technologies , "to minimize potential environmental and human health risks associated with the manufacture and use of nanotechnology products, and to encourage replacement of existing products with new nano-products that are more environmentally friendly throughout their lifecycle ."
[ 21 ] Green nanotechnology has two goals: producing nanomaterials and products without harming the environment or human health, and producing nano-products that provide solutions to environmental problems. It uses existing principles of green chemistry and green engineering [ 22 ] to make nanomaterials and nano-products without toxic ingredients, at low temperatures using less energy and renewable inputs wherever possible, and using lifecycle thinking in all design and engineering stages. Nanopollution is a generic name for all waste generated by nanodevices or during the nanomaterials manufacturing process. Nanowaste is mainly the group of particles that are released into the environment, or the particles that are thrown away while still on their products. Beyond the toxicity risks to human health and the environment which are associated with first-generation nanomaterials, nanotechnology has broader societal impact and poses broader social challenges. Social scientists have suggested that nanotechnology's social issues should be understood and assessed not simply as "downstream" risks or impacts. Rather, the challenges should be factored into "upstream" research and decision-making in order to ensure technology development that meets social objectives. [ 23 ] Many social scientists and organizations in civil society suggest that technology assessment and governance should also involve public participation. The exploration of stakeholders' perceptions is also an essential component in assessing the large amount of risk associated with nanotechnology and nano-related products. [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] Over 800 nano-related patents were granted in 2003, with numbers increasing to nearly 19,000 internationally by 2012. [ 29 ] Corporations are already taking out broad-ranging patents on nanoscale discoveries and inventions. For example, two corporations, NEC and IBM , hold the basic patents on carbon nanotubes , one of the current cornerstones of nanotechnology. Carbon nanotubes have a wide range of uses and look set to become crucial to several industries, from electronics and computers, to strengthened materials, to drug delivery and diagnostics. Nanotechnologies may provide new solutions for the millions of people in developing countries who lack access to basic services, such as safe water, reliable energy, health care, and education. The 2004 UN Task Force on Science, Technology and Innovation noted that some of the advantages of nanotechnology include production using little labor, land, or maintenance, high productivity, low cost, and modest requirements for materials and energy. However, concerns are frequently raised that the claimed benefits of nanotechnology will not be evenly distributed, and that any benefits (including technical and/or economic) associated with nanotechnology will only reach affluent nations. [ 30 ] Longer-term concerns center on the impact that new technologies will have on society at large, and whether these could possibly lead to either a post-scarcity economy or, alternatively, an exacerbation of the wealth gap between developed and developing nations. The effects of nanotechnology on society as a whole, on human health and the environment, on trade, on security, on food systems and even on the definition of "human" have not been characterized or politicized. Significant debate exists relating to the question of whether nanotechnology or nanotechnology-based products merit special government regulation .
This debate is related to the circumstances in which it is necessary and appropriate to assess new substances prior to their release into the market, community, and environment. Regulatory bodies such as the United States Environmental Protection Agency and the Food and Drug Administration in the U.S., or the Health & Consumer Protection Directorate of the European Commission, have started dealing with the potential risks posed by nanoparticles. So far, neither engineered nanoparticles nor the products and materials that contain them are subject to any special regulation regarding production, handling, or labelling. The Material Safety Data Sheet that must be issued for some materials often does not differentiate between bulk and nanoscale forms of the material in question, and even when it does, these MSDSs are advisory only. The new advances and rapid growth within the field of nanotechnology have large implications for the world's traditional food and agriculture sectors, in particular through the invention of smart and active packaging, nanosensors, nanopesticides, and nanofertilizers, and these will in turn lead to regulations. [ 31 ] Limited nanotechnology labeling and regulation may exacerbate potential human and environmental health and safety issues associated with nanotechnology. [ 32 ] It has been argued that the development of comprehensive regulation of nanotechnology will be vital to ensure that the potential risks associated with the research and commercial application of nanotechnology do not overshadow its potential benefits. [ 33 ] Regulation may also be required to meet community expectations about responsible development of nanotechnology, as well as to ensure that public interests are included in shaping the development of nanotechnology. [ 34 ] In 2008, E. Marla Felcher, in "The Consumer Product Safety Commission and Nanotechnology," suggested that the Consumer Product Safety Commission, which is charged with protecting the public against unreasonable risks of injury or death associated with consumer products, is ill-equipped to oversee the safety of complex, high-tech products made using nanotechnology. [ 35 ]
https://en.wikipedia.org/wiki/Impact_of_nanotechnology
The impact of self-driving cars is anticipated to be wide-ranging in many areas of daily life. Self-driving cars (also known as autonomous vehicles or AVs ) have been the subject of significant research on their environmental, practical, and lifestyle consequences, and their impacts remain debated. [ 1 ] [ 2 ] Some experts claim substantial reductions in traffic collisions and the resulting severe injuries or deaths. United States government estimates suggest 94% of traffic collisions have humans as the final critical element in the crash, [ 3 ] with one study estimating that converting 90% of cars on US roads to AVs would save 25,000 lives per year. [ 4 ] Other experts claim that the number of human-error collisions is overestimated and that self-driving cars may actually increase collisions. [ 1 ] [ 5 ] Self-driving cars are speculated to worsen air pollution , noise pollution , and sedentary lifestyles, [ 4 ] to increase productivity and housing affordability and reclaim land used for parking, [ 6 ] and to cause greater energy use, traffic congestion, and sprawl. [ 6 ] The impact of self-driving cars on absolute levels of individual car use is not yet clear; other forms of self-driving vehicles, such as self-driving buses, may actually decrease car use and congestion. [ 7 ] AVs are anticipated to affect the healthcare, insurance, travel, and logistics fields. Auto insurance costs are expected to decrease, and the burden of cars on the healthcare system to be reduced. Self-driving cars are predicted to cause significant job losses in the transportation industry. A McKinsey report has forecast that AVs could reach $300 to $400 billion in revenue by 2035. [ 8 ] The industry has attracted multiple car manufacturers, most notably General Motors' subsidiary Cruise [ 9 ] and Tesla . [ 10 ] Ford and Volkswagen invested billions in Argo AI but withdrew from the market by 2022, instead focusing on semi-autonomous driving (L2+, L3 under SAE classification). [ 11 ] Notably, non-car manufacturers have also investigated and speculated about self-driving cars, including Google subsidiary Waymo , among others. [ 10 ] To help reduce the possibility of safety issues, some companies have begun to open-source parts of their driverless systems. Udacity, for instance, is developing an open-source software stack , [ 12 ] and other companies are taking similar approaches. [ 13 ] [ 14 ] Estimates of the number of crashes prevented by AVs vary widely. An NHTSA report in 2018 found that 94% of crashes had humans as the final causal step in a chain of events. [ 3 ] One study claimed that if 90% of cars in the US became self-driving, an estimated 25,000 lives would be saved annually. The lives saved by averting automobile crashes in the US have been valued at more than $200 billion annually. [ 4 ] Other studies claim self-driving cars would have the potential to save 10 million lives worldwide per decade. [ 4 ] [ 15 ] Opponents argue that the number of human-driven crashes is taken out of context and that estimates of lives saved may not be accurate. [ 5 ] Driving safety experts predict that once driverless technology has been fully developed, traffic collisions (and the resulting deaths, injuries, and costs) caused by human error , such as delayed reaction time , tailgating , rubbernecking , and other forms of distracted or aggressive driving, would be substantially reduced. [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] Some experts advocate the idea of a "smart city" and claim that data-sharing infrastructure with AVs could further reduce crashes.
[ 21 ] Lack of data remains a key challenge in comparisons of fatalities per million miles between AVs and humans. [ 22 ] [ 23 ] One limited early study claimed a rate of 9.1 crashes per million miles by AVs, nearly double the rate of human driving, though the crashes were less serious than those of human drivers. [ 22 ] Ars Technica calculated 102 crashes over 6 million miles, but argued the crashes were low-impact and that the vehicles were still safer than human driving. [ 23 ] Waymo claimed only 3 crashes with injuries over 7.1 million miles, a rate nearly twice as safe as human drivers. [ 24 ] As more cities give permission for AVs to operate, incidents and complaints have increased. [ 1 ] Opponents of AVs have argued that current self-driving technology fails to take into account "edge cases", [ 10 ] which may make the technology more dangerous than human driving. [ 1 ] [ 5 ] In 2017, driving experts were contacted by TheDrive.com, operated by Time magazine, to rank autopilot systems. [ 25 ] None ranked any of the autopilot systems at the time as safer than human driving. [ 25 ] Factors that reduce safety may include unexpected interactions between humans and vehicle systems; complications due to technical limitations of technologies; the effects of bugs that inevitably occur in complex interdependent software systems; sensor or data shortcomings; and compromise by malicious actors. Security problems include what an autonomous car might do if summoned to pick up the owner but another person attempts entry, what happens if someone tries to break into the car, and what happens if someone attacks the occupants, for example by exchanging gunfire. [ 26 ] One ethicist argued that autonomous vehicles requiring any human supervision would create complacency and would be immoral to deploy. [ 27 ] Specifically, they argued humans are unlikely to take over effectively during a sudden software failure if an impending decision is required immediately. [ 27 ] Research shows that drivers in automated cars react later when they have to intervene in a critical situation than when they are driving manually. [ 28 ] According to a 2020 Annual Review of Public Health review of the literature, self-driving cars "could increase some health risks (such as air pollution, noise, and sedentarism); however, if properly regulated, AVs will likely reduce morbidity and mortality from motor vehicle crashes and may help reshape cities to promote healthy urban environments." [ 4 ] An unexpected disadvantage of the widespread acceptance of autonomous vehicles would be a reduction in the supply of organs for donation . [ 29 ] In the US, for example, 13% of the organ donation supply comes from car crash victims. [ 4 ] According to a 2020 study, self-driving cars will increase productivity and housing affordability, as well as reclaim land used for parking. [ 6 ] However, self-driving cars will cause greater energy use, traffic congestion, and sprawl. [ 6 ] Automated cars could reduce labor costs ; [ 30 ] [ 31 ] relieve travelers from driving and navigation chores, thereby replacing behind-the-wheel commuting hours with more time for leisure or work; [ 17 ] [ 20 ] and lift constraints on occupants who are unable to drive, are distracted or texting while driving , are intoxicated , are prone to seizures , or are otherwise impaired. [ 32 ] [ 33 ] For the young, the elderly , people with disabilities , and low-income citizens, automated cars could provide enhanced mobility .
[ 34 ] [ 35 ] The removal of the steering wheel—along with the remaining driver interface and the requirement for any occupant to assume a forward-facing position—would give the interior of the cabin greater ergonomic flexibility. Large vehicles, such as motorhomes, would attain appreciably enhanced ease of use. [ 36 ] The elderly and persons with disabilities (such as persons who are hearing-impaired , vision-impaired , mobility-impaired , or cognitively impaired) are potential beneficiaries of the adoption of autonomous vehicles; however, the extent to which such populations gain greater mobility from the adoption of AV technology depends on the specific designs and regulations adopted. [ 37 ] [ 38 ] Children and teens, who are not able to drive a vehicle themselves in the case of student transport , would also benefit from the introduction of autonomous cars. [ 39 ] Daycares and schools could adopt automated pick-up and drop-off systems by car, in addition to walking , cycling , and busing, decreasing reliance on parents and childcare workers. The extent to which human action is necessary for driving is expected to diminish until it eventually vanishes. Since current vehicles require human action to some extent, the driving school industry will not be disrupted until the majority of autonomous transportation has switched to the emerging dominant design. It is plausible that in the distant future driving a vehicle will be considered a luxury, which implies that the structure of the industry will be based on new entrants and a new market. [ 40 ] Self-driving cars would also exacerbate existing mobility inequalities driven by the interests of car companies and technology companies, while taking investment away from more equitable and sustainable mobility initiatives such as public transportation. [ 41 ] According to a Wonkblog reporter, if fully automated cars become commercially available, they have the potential to be a disruptive innovation with major implications for society. The likelihood of widespread adoption is still unclear, but if they are used on a wide scale, policymakers face a number of unresolved questions about their effects. [ 42 ] One fundamental question is about their effect on travel behavior. Some people believe that they will increase car ownership and car use because it will become easier to use them and they will ultimately be more useful. [ 42 ] This may, in turn, encourage urban sprawl and ultimately increase total private vehicle use. Others argue that it will be easier to share cars and that this will thus discourage outright ownership, decrease total usage, and make cars more efficient forms of transportation relative to the present situation. [ 43 ] [ 44 ] Policymakers will have to take a new look at how infrastructure is to be built and how money will be allotted to build for automated vehicles. The need for traffic signals could potentially be reduced with the adoption of smart highways . [ 45 ] With smart highways and other smart technological advances implemented through policy change, dependence on oil imports may be reduced because individual cars would spend less time on the road, which could have an effect on energy policy. [ 46 ] On the other hand, automated vehicles could increase the overall number of cars on the road, which could lead to a greater dependence on oil imports if smart systems are not enough to curtail the impact of more vehicles.
[ 47 ] However, due to the uncertainty of the future of automated vehicles, policymakers may want to plan effectively by implementing infrastructure improvements that can benefit both human drivers and automated vehicles. [ 48 ] Caution is also needed with respect to public transportation, whose use may be greatly reduced if infrastructure policy caters to automated vehicles, resulting in job losses and increased unemployment . [ 49 ] Other disruptive effects will come from the use of automated vehicles to carry goods. Self-driving vans have the potential to make home deliveries significantly cheaper, transforming retail commerce and possibly making hypermarkets and supermarkets redundant. As of 2019, the US Department of Transportation defines automation in six levels, starting at level zero, where the human driver does everything, and ending with level five, where the automated system performs all the driving tasks. Also, under current law, manufacturers bear all the responsibility to self-certify vehicles for use on public roads. This means that, as long as the vehicle is compliant within the regulatory framework, there are currently no specific federal legal barriers in the US to a highly automated vehicle being offered for sale. Iyad Rahwan , an associate professor in the MIT Media Lab, said, "Most people want to live in a world where cars will minimize casualties, but everyone wants their own car to protect them at all costs." Furthermore, industry standards and best practices are still needed before such systems can be considered reasonably safe under real-world conditions. [ 50 ] Additional advantages could include higher speed limits ; [ 51 ] smoother rides; [ 52 ] increased roadway capacity; and minimized traffic congestion , due to decreased need for safety gaps and higher speeds. [ 53 ] [ 54 ] Currently, maximum controlled-access highway throughput or capacity according to the US Highway Capacity Manual is about 2,200 passenger vehicles per hour per lane, with about 5% of the available road space taken up by cars. One study estimated that automated cars could increase capacity by 273% (≈8,200 cars per hour per lane). The study also estimated that with 100% connected vehicles using vehicle-to-vehicle communication, capacity could reach 12,000 passenger vehicles per hour (up 445% from 2,200 pc/h per lane) traveling safely at 120 km/h (75 mph) with a following gap of about 6 m (20 ft). Human drivers at highway speeds keep between 40 and 50 m (130 and 160 ft) away from the vehicle in front; a rough sanity check of these capacity figures is sketched below. These increases in highway capacity could have a significant impact on traffic congestion, particularly in urban areas, and could even effectively end highway congestion in some places. [ 55 ] The ability of authorities to manage traffic flow would increase, given the extra data and driving-behavior predictability, [ 56 ] combined with less need for traffic police and even road signage . Safer driving is expected to reduce the costs of vehicle insurance . [ 30 ] [ 57 ] [ failed verification ] The automobile insurance industry might suffer as the technology makes certain aspects of these occupations obsolete. [ 35 ] As fewer collisions imply less money spent on repairs, the role of the insurance industry is likely to be altered as well.
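As a rough sanity check of the capacity figures above, the following sketch derives vehicles per hour per lane from speed and per-vehicle spacing. The 4.5 m vehicle length is an assumed round figure chosen for illustration, not a value taken from the Highway Capacity Manual.

```python
# Back-of-envelope check of lane capacity from speed and per-vehicle spacing:
#   capacity (veh/h/lane) = distance travelled per hour / space per vehicle,
# where space per vehicle = following gap + vehicle length.
# The 4.5 m vehicle length is an assumed figure for illustration.

def lane_capacity(speed_kmh: float, gap_m: float, vehicle_length_m: float = 4.5) -> float:
    metres_per_hour = speed_kmh * 1000.0
    return metres_per_hour / (gap_m + vehicle_length_m)

# Connected AVs: 120 km/h with ~6 m following gaps.
print(round(lane_capacity(120, 6)))    # ~11429, on the order of the ~12,000 veh/h cited

# Human drivers: 120 km/h with ~45 m following gaps.
print(round(lane_capacity(120, 45)))   # ~2424, close to the ~2,200 veh/h baseline
```

Under these assumptions the simple spacing model lands within a few percent of both cited figures, which suggests the study's estimates follow directly from the shorter safety gaps that vehicle-to-vehicle communication is expected to allow.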
It can be expected that the increased safety of transport due to autonomous vehicles will lead to a decrease in payouts for insurers, which is positive for the industry, but fewer payouts may imply a drop in demand for insurance in general. [ citation needed ] In order to accommodate such changes, the Automated and Electric Vehicles Act 2018 was introduced in the United Kingdom. While Part 2 deals with electric vehicles, Part 1 covers insurance provisions for automated vehicles. A direct impact of widespread adoption of automated vehicles is the loss of driving-related jobs in the road transport industry. [ 16 ] [ 30 ] [ 31 ] [ 58 ] There could be resistance from professional drivers and unions who are threatened by job losses. [ 59 ] In addition, there could be job losses in public transit services and crash repair shops. A frequently cited paper by Michael Osborne and Carl Benedikt Frey found that automated cars would make many jobs redundant. [ 60 ] The industry has, however, created thousands of jobs in low-income countries for workers who train autonomous systems. [ 61 ] With the aforementioned ambiguous user preferences regarding the personal ownership of autonomous vehicles, it is possible that the current mobility-provider trend will continue as it rises in popularity. Established providers such as Uber and Lyft are already significantly present within the industry, and it is likely that new entrants will appear when business opportunities arise. [ 62 ] A review found that private autonomous vehicles may increase total travel, whereas autonomous buses may lead to reduced car use. [ 7 ] Vehicle automation can improve a car's fuel economy by optimizing the drive cycle, as well as by increasing congested traffic speeds by an estimated 8%–13%. [ 63 ] [ 64 ] Reduced traffic congestion and the improvements in traffic flow due to widespread use of automated cars would translate into higher fuel efficiency, ranging from a 23%–39% increase, with the potential to increase further. [ 63 ] [ 65 ] Additionally, self-driving cars will be able to accelerate and brake more efficiently, meaning higher fuel economy from reducing the wasted energy typically associated with inefficient changes of speed. However, the improvement in vehicle energy efficiency does not necessarily translate into a net reduction in energy consumption and positive environmental outcomes. It is expected that the convenience of automated vehicles will encourage consumers to travel more, and this induced demand may partially or fully offset the fuel-efficiency improvement brought by automation. [ 64 ] Alongside the induced demand, there may also be a reduction in the use of more sustainable modes, such as public or active transport. [ 66 ] Overall, the consequences of vehicle automation for global energy demand and emissions are highly uncertain, and depend heavily on the combined effect of changes in consumer behavior, policy intervention, technological progress, and vehicle technology. [ 64 ] By reducing the labor and other costs of mobility as a service , automated cars could reduce the number of cars that are individually owned, replaced by taxi/pooling and other car-sharing services. [ 67 ] [ 68 ] This would also dramatically reduce the size of the automotive production industry, with corresponding environmental and economic effects.
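The induced-demand offset described above can be made concrete with a minimal sketch; the percentage values below are hypothetical inputs chosen only to illustrate how a per-kilometre efficiency gain can be cancelled out, not estimates taken from the cited studies.

```python
# Sketch of how induced demand can offset a per-km fuel-efficiency gain.
# Relative fleet energy use after automation (1.0 = unchanged):
#   net = (1 - efficiency_gain) * (1 + travel_increase)

def net_energy_factor(efficiency_gain: float, travel_increase: float) -> float:
    return (1.0 - efficiency_gain) * (1.0 + travel_increase)

# A 30% efficiency gain with 20% more travel still saves energy overall...
print(net_energy_factor(0.30, 0.20))  # ~0.84, i.e. a 16% net reduction

# ...but with ~45% more travel the gain is fully offset.
print(net_energy_factor(0.30, 0.45))  # ~1.015, i.e. a slight net increase
```

The crossover point is simply where the two factors cancel, which is why studies that agree on the per-kilometre efficiency figures can still disagree on the net environmental outcome.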
The lack of stressful driving, more productive time during the trip, and the potential savings in travel time and cost could become an incentive to live far away from cities, where housing is cheaper, and work in the city's core, thus increasing travel distances and inducing more urban sprawl , raising energy consumption and enlarging the carbon footprint of urban travel. [ 64 ] [ 69 ] [ 70 ] There is also the risk that traffic congestion might increase, rather than decrease. [ 64 ] [ 35 ] Appropriate public policies and regulations, such as zoning, pricing, and urban design, are required to avoid the negative impacts of increased suburbanization and longer-distance travel. [ 35 ] [ 70 ] Since many autonomous vehicles are going to rely on electricity to operate, the demand for lithium batteries will increase. Similarly, radar, sensors, lidar , and high-speed internet connectivity require higher auxiliary power from vehicles, which manifests as greater power draw from batteries. [ 64 ] The larger battery requirement necessitates an increase in the supply of these types of batteries from the chemical industry. On the other hand, with the expected increase of battery-powered (autonomous) vehicles, the petroleum industry is expected to undergo a decline in demand. As this implication depends on the adoption rate of autonomous vehicles, it is unclear to what extent it will disrupt this particular industry. This transition from oil to electricity allows companies to explore whether there are business opportunities for them in the new energy ecosystem. In 2020, Mohan, Sripad, Vaishnav & Viswanathan at Carnegie Mellon University [ 71 ] found that the electricity consumption of all the automation technology (sensors, computation, and internet access), together with the increased drag from sensors, reduces the range of an automated electric vehicle by up to 15%, implying that the larger battery requirement might not be as large as previously assumed. A study conducted by the AAA Foundation for Traffic Safety found that drivers did not trust self-parking technology, even though the systems outperformed drivers with a backup camera. The study tested self-parking systems in a variety of vehicles (Lincoln MKC, Mercedes-Benz ML400 4Matic, Cadillac CTS-V Sport, BMW i3, and Jeep Cherokee Limited) and found that self-parking cars hit the curb 81% fewer times, used 47% fewer manoeuvres, and parked 10% faster than human drivers. Yet only 25% of those surveyed said they would trust this technology. [ 72 ] Manually driven vehicles are reported to be used only 4–5% of the time, and to be parked and unused for the remaining 95–96% of the time. [ 73 ] [ 74 ] Autonomous taxis could, on the other hand, be used continuously after they have reached their destination. This could dramatically reduce the need for parking space . For example, a 2015 study found that in Los Angeles 14% of the land is used for parking alone, equivalent to some 1,702 hectares (4,210 acres). [ 75 ] [ 76 ] This, combined with the potential reduced need for road space due to improved traffic flow, could free up large amounts of land in urban areas, which could then be used for parks, recreational areas, buildings, and other uses, making cities more livable. Besides this, privately owned self-driving cars capable of self-parking would provide another advantage: the ability to drop off and pick up passengers even in places where parking is prohibited. This would benefit park and ride facilities.
[ 77 ] The vehicles' increased awareness could aid the police by reporting on illegal passenger behaviour, while possibly enabling other crimes, such as deliberately crashing into another vehicle or a pedestrian. [ 78 ] However, this may also lead to much-expanded mass surveillance if wide access is granted to third parties to the large data sets generated. [ citation needed ] Privacy could be an issue when the vehicle's location and position are integrated into an interface that other people have access to. [ 16 ] [ 79 ] Moreover, autonomous vehicles require a sensor-based infrastructure that would constitute an all-encompassing surveillance apparatus. [ 80 ] This gives car manufacturers and other companies the data needed to understand the user's lifestyle and personal preferences. [ 81 ] There is the risk of terrorist attacks by automotive hacking through the sharing of information through V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) protocols. [ 82 ] [ 83 ] [ 84 ] Self-driving cars could potentially be loaded with explosives and used as bombs . [ 85 ] According to legislation proposed by US lawmakers, autonomous and self-driving vehicles should be equipped with defences against hacking . [ 86 ] As collisions become less likely and the risk of human error is reduced significantly, the repair industry will face an enormous reduction in the work that has to be done on the repair of car frames. Meanwhile, as the data generated by autonomous vehicles is likely to predict when certain replaceable parts are in need of maintenance, car owners and the repair industry will be able to proactively replace a part that will soon fail. This "Asset Efficiency Service" would imply a productivity gain for the automotive repair industry. [ citation needed ] The technology used in autonomous driving also saves lives in other industries. The implementation of autonomous vehicles in rescue, emergency response, and military applications has already led to a decrease in deaths. [ citation needed ] Military personnel use autonomous vehicles to reach dangerous and remote places on earth to deliver fuel, food, and general supplies, and even to rescue people. In addition, a future implication of adopting autonomous vehicles could be a reduction in deployed personnel, which will lead to a decrease in injuries, as technological development allows the vehicles to operate with less and less human involvement. Another future implication is the reduction of emergency drivers when autonomous vehicles are deployed as fire trucks or ambulances. An advantage could be the use of real-time traffic information and other generated data to determine and execute routes more efficiently than human drivers. The time savings can be invaluable in these situations. [ 87 ] With the driver decreasingly focused on operating the vehicle, the interior design and media-entertainment industries will have to reconsider what passengers of autonomous vehicles do while on the road. Vehicles will need to be redesigned, and possibly even prepared for multipurpose use. In practice, travellers will have more time for business and/or leisure. In both cases, this gives the media-entertainment industry increasing opportunities to demand attention. Moreover, the advertising business would be able to provide location-based ads without risking driver safety. [ 88 ] All cars can benefit from information and connections, but autonomous cars "will be fully capable of operating without C-V2X."
[ 89 ] In addition, the entertainment industry mentioned earlier is also highly dependent on this network to be active in this market segment. This implies higher revenues for the telecommunications industry. Driver interaction with the vehicle will become less common in the near future, and in the more distant future the responsibility will lie entirely with the vehicle. As indicated above, this will have implications for the entertainment and interior design industries. For roadside restaurants, the implication will be that the need for customers to stop driving and enter the restaurant will vanish, as the autonomous vehicle takes on a double function. Moreover, accompanied by the rise of disruptive platforms such as Airbnb that have shaken up the hotel industry, the rapid developments within the autonomous vehicle industry might have further implications for these customer bases. In the more distant future, motels might see a decrease in guests, since autonomous vehicles could be redesigned as fully equipped bedrooms. The improvements to vehicle interiors might additionally have implications for the airline industry. In the case of relatively short-haul flights, waiting times at customs or the gate mean lost time and hassle for customers. With the improved convenience of future car travel, it is possible that customers might choose this option, causing a loss in customer base for the airline industry. [ 90 ]
https://en.wikipedia.org/wiki/Impact_of_self-driving_cars
The COVID-19 pandemic has affected innumerable scientific and technical institutions globally, resulting in lower productivity in a number of fields and programs. However, the impact of the pandemic has also led to the opening of several new research funding lines for government agencies around the world. [ 1 ] [ 2 ] [ 3 ] As a result of the COVID-19 pandemic, new and improved forms of scientific communication have evolved. One example is the amount of data being published on preprint servers and the way it has been reviewed on social media platforms before being formally peer reviewed . Scientists are reviewing, editing, analyzing, and publishing manuscripts and data at unusual speed. [ 4 ] This intense communication may have enabled an unusual level of collaboration and efficiency among scientists. [ 5 ] Francis Collins notes that, while he has never seen research move faster, the pace of research "can still feel slow" during a pandemic . The typical research model was considered too slow for the "urgency of the coronavirus threat". [ 6 ] On May 4, 2020, the World Health Organization (WHO) organized a telethon to raise US$8 billion from forty countries to support the rapid development of COVID-19 vaccines . [ 7 ] WHO also announced the implementation of an international " solidarity trial " to simultaneously evaluate multiple vaccine candidates reaching phase II–III clinical trials . [ 8 ] The " solidarity trial for treatments" is a multinational phase III–IV clinical trial , organized by WHO and its partners, to compare four untested treatments for hospitalized people with severe cases of COVID-19 disease. [ 9 ] [ 10 ] The trial was announced on March 18, 2020, [ 9 ] and by April 21, 2020, over 100 countries were participating in it. [ 11 ] In addition, WHO is coordinating an international multisite randomized controlled trial —the "solidarity trial for vaccines" [ 8 ] [ 12 ]—that will allow simultaneous assessment of the benefits and risks of different vaccine candidates being clinically tested in countries with high rates of COVID-19 disease. [ 8 ] The WHO Vaccine Coalition prioritizes which vaccines to include in phase II and III clinical trials, and establishes harmonized phase III protocols for all vaccines that reach the pivotal testing phase. [ 8 ] The Coalition for Epidemic Preparedness Innovations (CEPI), which has established a US$2 billion global fund for rapid investment and development of vaccine candidates, [ 13 ] indicated in April 2020 that a vaccine could be available under emergency-use protocols in less than 12 months, or by early 2021. [ 14 ] The seventh edition of the UNESCO Science Report , which monitors science policy and governance around the world, was in preparation as the COVID-19 pandemic began. As a result, the report documents some of the ways in which scientists, inventors, and governments used science to meet society's needs during the early stages of the pandemic. In the paper What the COVID-19 Pandemic Reveals About the Evolving Landscape of Scientific Advice , the authors present case studies of five countries ( Uruguay , Sri Lanka , Jamaica , Ghana , and New Zealand ). The authors conclude, "Effective and trusted scientific advice is not simply a function of linkages with the policy-maker. It also involves an effective conversation with stakeholders and the public."
According to the World Health Organization, during the COVID-19 pandemic Africa contributed 13% of the world's new or adapted technologies, such as robotics, 3D printing , and mobile phone apps. Many countries accelerated their approval processes for research project proposals. For example, the innovation agencies of Argentina , Brazil , and Uruguay issued calls for research proposals with an expedited approval process through early April 2020. Peru's two innovation agencies reduced their own response time to two weeks, as documented in the UNESCO Science Report (2021). The UNESCO study of publication trends in 193 countries on the topic of new or re-emerging viruses that can infect humans covered the period from 2011 to 2019 and now provides an overview of the state of research prior to the COVID-19 pandemic. Global output on this broad topic increased by only 2% per year between 2011 and 2019, slower than overall global scientific publishing. Growth was much higher in individual countries that had to use science to address other viral outbreaks during this period, such as Liberia to combat Ebola or Brazil to combat Zika fever . It remains to be seen whether or not the scientific landscape will shift toward a more proactive approach to the health sciences after COVID-19. The United States Department of Energy's federal scientific laboratories, such as the Oak Ridge National Laboratory , closed to all visitors and many employees; non-essential employees and scientists became remote workers . Contractors were also strongly advised to isolate their facilities and employees unless necessary. Overall, ORNL operations remained reasonably unaffected. [ 15 ] Lawrence Livermore National Laboratory was tasked by the White House Coronavirus Task Force to use most of its supercomputing capacity to continue research on the virus strain, possible mutations , and other factors, while other projects were temporarily scaled back or indefinitely postponed. [ 16 ] The European Molecular Biology Laboratory (EMBL) closed all of its six sites in Europe (Barcelona, Grenoble, Hamburg, Heidelberg, Hinxton, and Rome). The governments of all EMBL sites implemented strict controls in response to the coronavirus. EMBL staff were instructed to follow the advice of local authorities. Several staff members were given permission to work at the sites to provide essential services such as animal facility maintenance or data services. All other staff were instructed to stay at home. EMBL also cancelled all visits to the sites by groups outside the staff. This includes physical attendance at the Heidelberg course and conference program, EMBL-EBI training courses, and all other seminars, courses, and public visits at all sites. Meanwhile, the European Bioinformatics Institute established a European COVID-19 platform for data and information exchange. The goal is to collect and share readily available research data to enable synergy, cross-fertilization, and use of different data sets with varying degrees of aggregation, validation, and/or completeness. The platform is envisioned to consist of two interconnected components: the SARS-CoV-2 data hubs, to organize the flow of SARS-CoV-2 outbreak sequence data and enable comprehensive open data exchange for the European and global research community, and a more comprehensive COVID-19 portal. [ 17 ] [ 18 ] [ 19 ] The World Meteorological Organization (WMO) has expressed concern about the effects of the pandemic on its monitoring system.
Observations from the Aircraft Meteorological Data Relay program, which uses in-flight measurements from the fleets of 43 airlines, have been reduced by 50 to 80 percent depending on the region. Data from other automated systems have been virtually unaffected, although WMO has expressed concern that repairs and maintenance may eventually be affected. Manual observations, mainly from developing countries, have also seen a significant decrease. [ 20 ] The need to accelerate open scientific research prompted several civil society organizations to create an Open COVID-19 Pledge [ 21 ] [ 22 ] asking different industries to release their intellectual property rights during the pandemic to help find a cure for the disease. Several tech giants have joined the pledge, [ 23 ] which includes the release of an Open COVID license. [ 24 ] Long-time open access advocates such as Creative Commons have launched numerous calls and actions to promote open access in science as a key component in combating the disease. [ 25 ] [ 26 ] These include a public call for open access policies [ 27 ] and a call for scientists to adopt zero embargo periods for their publications, applying a CC BY license to their articles and a CC0 waiver to their research data. [ 28 ] Other organizations have challenged the current scientific culture, calling for more open and public science. [ 29 ] For studies and information on coronavirus that can contribute to citizen science through open science, many other online resources are available on open science and open access websites, including an e-book chapter hosted by the medical collective EMCrit [ 30 ] and portals run by Cambridge University Press , [ 31 ] the Europe branch of the Scholarly Publishing and Academic Resources Coalition , [ 32 ] The Lancet , [ 33 ] John Wiley and Sons , [ 34 ] and Springer Nature . [ 35 ] A JAMA Network Open study examined trends in oncology clinical trials initiated before and during the COVID-19 pandemic . It noted that pandemic-related declines in clinical trials raised concerns about the potential negative impact on the development of new cancer therapies and about the extent to which these findings could apply to other diseases. [ 36 ] In March 2020, the United States Department of Energy , National Science Foundation , NASA , industry, and nine universities pooled resources to access supercomputers from IBM , combined with cloud computing resources from Hewlett Packard Enterprise , Amazon , Microsoft , and Google , for drug discovery. [ 37 ] [ 38 ] The COVID-19 High-Performance Computing Consortium also aims to predict the spread of disease, model possible vaccines, and study thousands of chemical compounds to develop a COVID-19 vaccine or therapy. [ 37 ] [ 38 ] As of May 2020, the Consortium's pooled resources amounted to 437 petaFLOPS of computing power. The C3.ai Digital Transformation Institute, another consortium of Microsoft , six universities (including the Massachusetts Institute of Technology , a member of the first consortium), and the National Center for Supercomputing Applications in Illinois, operating under the auspices of C3.ai, founded by Thomas Siebel , is pooling supercomputing resources for drug discovery, developing medical protocols, and improving public health strategies, and awarded large grants through May 2020 to researchers proposing to use AI for similar tasks. [ 39 ] [ 40 ] In March 2020, the Folding@home distributed computing project launched a program to support medical researchers around the world.
The first wave of the project will simulate potential target proteins of SARS-CoV-2 and the related SARS-CoV virus, which has already been studied. [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ needs update ] In March, the Rosetta@home distributed computing project also joined the effort. The project uses volunteers' computers to model the proteins of the SARS-CoV-2 virus to discover potential drug targets or develop new proteins to neutralize the virus. The researchers announced that, using Rosetta@home, they were able to "accurately predict the atomic-scale structure of an important coronavirus protein weeks before it could be measured in the lab." [ 45 ] In May 2020, the Open Pandemics—COVID-19 partnership was launched between Scripps Research and IBM's World Community Grid . The partnership is a distributed computing project that "will automatically run a simulated experiment in the background [of connected home PCs] that will help predict the efficacy of a particular chemical compound as a potential treatment for COVID-19 ." [ 46 ] Resources for informatics and scientific crowdsourcing projects on COVID-19 can be found on the internet or as apps. [ 47 ] [ 48 ] [ 49 ] The scientific community has also held several machine learning competitions to identify false information related to the COVID-19 pandemic. NASA announced the temporary closure of all visitor complexes at its field centers until further notice and asked all non-critical personnel to work from home if possible. Production and manufacturing of the Space Launch System at the Michoud Assembly Facility was halted, [ 66 ] [ 67 ] and further delays occurred for the James Webb Space Telescope , [ 68 ] although work resumed on June 3, 2020. [ 69 ] The majority of Johnson Space Center personnel transitioned to telecommuting, and mission-critical personnel supporting the International Space Station were ordered to reside in the mission control room until further notice. Station operations were relatively unaffected, but astronauts on new expeditions are subject to longer, more stringent pre-flight quarantines. [ 70 ] NASA's emergency response framework varied based on local virus cases around the agency's field centers. As of March 24, 2020, several space centers had moved to Stage 4. [ 71 ] Two facilities were maintained at Stage 4 after reporting new cases of coronavirus: the Michoud Assembly Facility reported its first employee to test positive for COVID-19, and Stennis Space Center recorded the second case of a NASA community member with the virus. Kennedy Space Center remained at Stage 3 after a workforce member tested positive. Due to the mandatory remote-work policy already in place, the individual had not been on-site for more than a week before the onset of symptoms. [ 72 ] On May 18, the Michoud facility began resuming work on the SLS, but so far remains at Level 3 status. [ 73 ] At Level 4, mandatory remote work is in effect for all personnel except for the limited personnel required for mission-critical work and to ensure and maintain the safety and security of the facility. [ 74 ] The European Space Agency (ESA) directed many of its science and technology facility personnel to telework whenever possible.
[ 75 ] Developments, including increased restrictions by national, regional, and local authorities across Europe and the first positive COVID-19 test result among European Space Operations Centre personnel, led the agency to further restrict on-site personnel at its mission control centres. ESA Director of Operations Rolf Densing strongly advised mission personnel to reduce activity on science missions, especially on interplanetary spacecraft. The affected spacecraft had stable orbits and long-duration missions, so turning off their science instruments and placing them into a largely unattended safety configuration for a certain period of time would have a negligible impact on their overall mission performance; several missions were affected in this way. [ 76 ] ESA Science Director Günther Hasinger said: "It was a difficult decision, but the right one to take. Our greatest responsibility is the safety of people, and I know all of us in the science community understand why this is necessary." The temporary reduction in on-site personnel also allowed the ESOC teams to focus on maintaining spacecraft safety for all other missions involved, especially the Mercury explorer BepiColombo , which is en route to the solar system's innermost planet and needed on-site support during its planned April 10, 2020 flyby of Earth. The difficult manoeuvre, which uses Earth's gravity to adjust BepiColombo's trajectory as it cruises towards Mercury, was performed by a very small number of engineers, with due regard to social distancing and the other health and hygiene measures required by the situation. Commissioning and initial checkout operations of the recently launched Solar Orbiter were temporarily suspended. ESA planned to resume these operations in the near future, depending on the development of the coronavirus situation. In the meantime, Solar Orbiter continued its journey towards the Sun, with the first Venus flyby to take place in December. [ 77 ] The space and science operations of the Japan Aerospace Exploration Agency ( JAXA ) were virtually unaffected. However, all visits to its many field centers were suspended until April 30, 2020, to reduce contamination. [ 78 ] [ 79 ] Bigelow Aerospace announced on March 23, 2020, that it was laying off all of its 88 employees. It said it would rehire the workers when pandemic restrictions were lifted. [ 80 ] Tucson, Arizona-based World View announced on April 17, 2020, that it had terminated new business initiatives and laid off an unspecified number of employees to reduce cash outflows. The company also received rent deferrals from Pima County, Arizona . [ 81 ] OneWeb filed for bankruptcy on March 27, 2020, following a cash crunch caused by difficulties in raising capital to complete construction and deployment of the remaining 90 percent of its network. The company had already laid off approximately 85 percent of its 531 employees, but said it would maintain operational satellite capabilities while the court restructured it and new owners for the constellation were sought. [ 82 ] [ 83 ] Rocket Lab temporarily closed its launch site in New Zealand, but operations continued at its Wallops Flight Facility launch complex. [ 84 ] Major companies such as SpaceX and Boeing were not economically affected, except that they took extra precautions and security measures for their employees to limit the spread of the virus in their workplaces. As of April 16, 2020, Blue Origin said that it was continuing to hire staff, with about 20 more people added each week.
[ 85 ] ULA implemented an internal pandemic plan. Although some aspects of launch-related outreach were scaled back, the company made clear its intention to maintain its launch schedule. [ 86 ] From 2019 to 2020, the proportion of EU enterprises employing advanced digital technology in their operations expanded dramatically. From 2020 to 2021, this percentage remained relatively stable, reaching 61% in 2021, compared to 63% in 2020 and 58% in 2019. [ 88 ] [ 89 ] The pandemic has placed a huge strain on internet traffic, with BT Group and Vodafone seeing 60 and 50 percent increases in broadband usage, respectively. At the same time, Netflix , Disney+ , Google , Amazon , and YouTube have considered reducing the quality of their videos to avoid overload. In addition, Sony has begun to slow down PlayStation game downloads in Europe and the United States to keep traffic at manageable levels. [ 90 ] [ 91 ] Cellular service providers in mainland China reported significant declines in subscribers, partially due to the inability of migrant workers to return to work as a result of the quarantine lockdowns ; China Mobile saw a reduction of 8 million subscribers, while China Unicom had 7.8 million fewer subscribers, and China Telecom lost 5.6 million users. [ 92 ] Teleconferencing has been used to replace cancelled events as well as daily business meetings and social contact. Teleconference companies such as Zoom Video Communications have seen a sharp increase in usage, accompanied by technical issues such as bandwidth overcrowding and social problems such as Zoombombing . [ 94 ] [ 95 ] [ 96 ] However, teleconferencing has also contributed to the development of distance education . [ 97 ] Thanks to this technology, virtual happy hours for "quarantinis" (mixed drinks) [ 98 ] and even virtual dance parties have been organised. [ 99 ] A survey conducted in 2021 found that while the coronavirus outbreak boosted overall digitization, it also widened the digital divide, specifically across firms. Leading businesses advanced digitization more frequently, but some enterprises fell behind and were less likely to digitize during the pandemic. [ 100 ] 53% of surveyed firms in the European Union had previously implemented advanced digital technology and invested more in other digital technologies. 34% of non-digital EU firms viewed the pandemic as a chance to begin investing in their digital transformation . [ 101 ] [ 102 ] According to the survey, 16% of EU enterprises regard access to digital infrastructure as a substantial barrier to investment. [ 103 ] [ 104 ] [ 105 ] A growing digital divide is also emerging: in the United States, despite non-digital enterprises being more dynamic than in the European Union, 48% of enterprises that were non-digital before the pandemic used the crisis to begin investing in digital technologies, compared to 64% of firms that had previously implemented advanced digital technology. [ 101 ] [ 106 ] Digital infrastructure is essential for digital transformation. Many EU areas have the potential to enable investment in the digital transformation of firms by expanding access to faster internet, and this influences organizations' decisions to go digital. [ 107 ] [ 108 ] Across Europe, access to digital infrastructure is already increasing, with the great majority of homes now having access to broadband, but more has to be done to promote the spread of fast connections.
Across nations and regions, a large proportion of enterprises cite digital infrastructure as a key barrier to investment and development. [ 109 ] [ 104 ] [ 105 ] One out of every five businesses in the Europe and Central Asia region launched or grew its online business or distribution of products and services, while one out of every four started or increased remote operations. [ 110 ] [ 111 ] [ 112 ] [ 113 ] The pandemic has also hastened corporate transformation, with over 30% of companies altering or transforming their output as a result of it. Chemical manufacturers and wholesalers were the first to respond, with one in three expanding online business activity, beginning or boosting delivery of products and services, increasing remote employment, and changing manufacturing. [ 110 ] [ 114 ] Across sub-regions, Russian companies reported the highest rate of digital transformation , with more than half of them beginning or growing online activity, product delivery, and remote work. [ 110 ] Within Central, Eastern and Southeastern Europe, enterprises in Slovenia (48%) and Poland (44%) were the most innovative in 2022, while firms in Slovakia (14%) were the least innovative. 67% of enterprises in these regions deployed at least one sophisticated digital technology, close to the current EU average (69%). [ 115 ]
https://en.wikipedia.org/wiki/Impact_of_the_COVID-19_pandemic_on_science_and_technology
In physics , the impact parameter b is defined as the perpendicular distance between the path of a projectile and the center of a potential field U ( r ) created by an object that the projectile is approaching (see diagram). It is often referred to in nuclear physics (see Rutherford scattering ) and in classical mechanics . The impact parameter is related to the scattering angle θ by [ 1 ]

\[ \theta = \pi - 2b \int_{r_\mathrm{min}}^{\infty} \frac{dr}{r^{2}\sqrt{1 - \left(b/r\right)^{2} - 2U(r)/\left(m v_{\infty}^{2}\right)}} \]

where v_∞ is the velocity of the projectile when it is far from the center, m is its mass, and r_min is its closest distance from the center. [ 2 ] The simplest example illustrating the use of the impact parameter is the case of scattering from a sphere. Here, the object that the projectile is approaching is a hard sphere with radius R. In the case of a hard sphere, U(r) = 0 when r > R, and U(r) = ∞ for r ≤ R. When b > R, the projectile misses the hard sphere, and we immediately see that θ = 0. When b ≤ R, we find that b = R cos(θ/2). [ 3 ] In high-energy nuclear physics — specifically, in colliding-beam experiments — collisions may be classified according to their impact parameter. Central collisions have b ≈ 0, peripheral collisions have 0 < b < 2R, and ultraperipheral collisions (UPCs) [ 4 ] have b > 2R, where the colliding nuclei are viewed as hard spheres with radius R. [ citation needed ] Because the color force has an extremely short range, it cannot couple quarks that are separated by much more than one nucleon 's radius; hence, strong interactions are suppressed in peripheral and ultraperipheral collisions. This means that final-state particle multiplicity (the total number of particles resulting from the collision) is typically greatest in the most central collisions, because the partons involved have the greatest probability of interacting in some way. This has led to charged-particle multiplicity being used as a common measure of collision centrality, as charged particles are much easier to detect than uncharged particles. [ 5 ] Because strong interactions are effectively impossible in ultraperipheral collisions, they may be used to study electromagnetic interactions — i.e. photon–photon , photon–nucleon, or photon–nucleus interactions — with low background contamination. Because UPCs typically produce only two to four final-state particles, they are also relatively "clean" when compared to central collisions, which may produce hundreds of particles per event .
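As a numerical illustration of the hard-sphere relation and the centrality classification above, here is a minimal Python sketch. The 0.1R threshold used to approximate "b ≈ 0" is an arbitrary choice made for illustration, not a standard cutoff.

```python
import math

# Hard-sphere scattering: inverting b = R*cos(theta/2) gives
# theta = 2*arccos(b/R) for b <= R, and theta = 0 (a miss) for b > R.

def scattering_angle(b: float, R: float) -> float:
    if b > R:
        return 0.0  # projectile misses the sphere entirely
    return 2.0 * math.acos(b / R)

print(scattering_angle(0.0, 1.0))  # head-on: ~3.1416 rad (straight backscatter)
print(scattering_angle(0.5, 1.0))  # ~2.0944 rad (120 degrees)

# Collision classification by impact parameter, treating nuclei as hard spheres.
def classify_collision(b: float, R: float) -> str:
    if b < 0.1 * R:       # arbitrary threshold standing in for "b ≈ 0"
        return "central"
    elif b < 2.0 * R:
        return "peripheral"
    return "ultraperipheral"

print(classify_collision(0.05, 1.0))  # central
print(classify_collision(1.0, 1.0))   # peripheral
print(classify_collision(2.5, 1.0))   # ultraperipheral
```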
https://en.wikipedia.org/wiki/Impact_parameter
In compressible fluid dynamics, impact pressure ( dynamic pressure ) is the difference between total pressure (also known as pitot pressure or stagnation pressure ) and static pressure . [ 1 ] [ 2 ] In aerodynamics notation, this quantity is denoted as q_c or Q_c. When input to an airspeed indicator, impact pressure is used to provide a calibrated airspeed reading. An air data computer with inputs of pitot and static pressures is able to provide a Mach number and, if static temperature is known, true airspeed . [ citation needed ] Some authors in the field of compressible flows use the term dynamic pressure or compressible dynamic pressure instead of impact pressure . [ 3 ] [ 4 ] In isentropic flow the ratio of total pressure to static pressure is given by: [ 3 ]

\[ \frac{P_t}{P} = \left(1 + \frac{\gamma - 1}{2} M^{2}\right)^{\frac{\gamma}{\gamma - 1}} \]

where P_t is total pressure, P is static pressure, γ is the ratio of specific heats, and M is the freestream Mach number. Taking γ to be 1.4, and since P_t = P + q_c,

\[ q_c = P\left[\left(1 + 0.2 M^{2}\right)^{\frac{7}{2}} - 1\right] \]

Expressing the incompressible dynamic pressure as \( q = \tfrac{1}{2}\gamma P M^{2} \) and expanding by the binomial series gives:

\[ q_c = q\left(1 + \frac{M^{2}}{4} + \frac{M^{4}}{40} + \frac{M^{6}}{1600} + \cdots\right) \]

where q is dynamic pressure.
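A minimal numeric sketch of the relations above, assuming γ = 1.4; it also checks the binomial-series approximation against the exact expression. The sea-level pressure value is a standard reference figure used here only as an example input.

```python
# Impact pressure q_c from static pressure P and Mach number M, for gamma = 1.4:
#   q_c = P * ((1 + 0.2*M**2)**3.5 - 1)
# compared against the binomial-series form with q = 0.5*gamma*P*M**2.

def impact_pressure(P: float, M: float) -> float:
    return P * ((1.0 + 0.2 * M**2) ** 3.5 - 1.0)

def impact_pressure_series(P: float, M: float, gamma: float = 1.4) -> float:
    q = 0.5 * gamma * P * M**2  # incompressible dynamic pressure
    return q * (1.0 + M**2 / 4.0 + M**4 / 40.0 + M**6 / 1600.0)

P0 = 101325.0  # standard sea-level static pressure, Pa
for M in (0.2, 0.5, 0.8):
    print(f"M={M}: exact={impact_pressure(P0, M):.1f} Pa, "
          f"series={impact_pressure_series(P0, M):.1f} Pa")
```

At M = 0.5, for example, both forms give roughly 18.9 kPa, and the two agree to within a fraction of a percent across the subsonic range, which is why the truncated series is a practical approximation at low Mach numbers.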
https://en.wikipedia.org/wiki/Impact_pressure
In addition to the direct reduction in travel times the HSR project will produce, there are also economic and environmental impacts of the high-speed rail system. These were also specifically noted in Proposition 1A at the time the project sought authorization from the voters of the state in 2008. The anticipated benefits apply both generally to the state overall, as well as to the regions the train will pass through, and to the areas immediately around the train stations. On January 18, 2024, Derek Boughton of the Authority presented the latest financial impact analysis report through June 2023. [ 1 ] The Central Valley Training Center (located in Selma, California ) is an organization supported by the Authority and local non-profit and governmental organizations. Since 2020 it has provided hands-on, free, 12-week pre-apprenticeship programs in 11 trades to prepare Central Valley veterans, at-risk young adults, minority, and low-income populations for construction jobs on the CAHSR project. As of December 2023 it has graduated 11 cohorts, totaling over 176 students, and further assisted them by providing job placement as well as other support services. [ 2 ] CAHSR is designed to be an entirely environmentally sustainable system. Each year since 2018 the Authority has produced a Sustainability Report. [ 3 ] Highlights of the 2022 report are: The 2021 Economic Impact Factsheet estimated that as of June 2021, the statewide economic benefits of the project included 64,400–70,500 job-years of employment, $4.8–$5.2 billion in labor employment, and $12.7–13.7 billion in economic output, and that as of February 2022, 699 small businesses were involved in the project. [ 5 ] The Authority's economic impact analysis is updated annually. The 2021 Economic Analysis Report contains data as of June 2021. [ 6 ] In its 67-page ruling in May 2015, the federal Surface Transportation Board noted: "The current transportation system in the San Joaquin Valley region has not kept pace with the increase in population, economic activity, and tourism. ... The interstate highway system, commercial airports, and conventional passenger rail systems serving the intercity market are operating at or near capacity and would require large public investments for maintenance and expansion to meet existing demand and future growth over the next 25 years or beyond." [ 7 ] Thus, the Board sees the HSR system as providing valuable benefits to the region's transportation needs. The San Joaquin Valley is also one of the poorest areas of the state. For example, the unemployment rate near the end of 2014 in Fresno County was 2.2% higher than the statewide average. [ 8 ] And, of the five poorest metro areas in the country, three are in the Central Valley. [ 9 ] The HSR system has the potential to significantly improve this region and its economy. A large January 2015 report to the CHSRA examined this issue. [ 10 ] In addition to jobs and income levels in general, the presence of HSR is expected to benefit the growth in the cities around the HSR stations. It is anticipated that this will help increase population density in those cities and reduce "development sprawl" out into surrounding farmlands. [ 11 ] There have also been some reported negative impacts from the project's land acquisitions and constructions. As of Oct. 
2021, Phase 1 construction had displaced or adversely affected immigrants (Mexican, Cambodian, and Japanese), homeless outreach organizations, homeless shelters, firefighters, nonprofits working with welfare recipients, thrift stores, and disadvantaged communities such as Wasco . [ 12 ] [ 13 ] "What Is the Value of Electrified High-Speed Rail Between Merced and Bakersfield?" in the 2022 Business Plan [ 14 ] (p. 25) listed these estimated benefits which will come from the Interim Initial Operating Segment: The HSR tracks will pose some serious problems for moving and migrating wildlife. Thus, the Interim Initial Operating Segment will have over 300 wildlife crossings to provide safe ways for wildlife to cross the tracks. To support this effort, the Authority has submitted a $2 million grant application to the Federal Highway Administration Wildlife Crossings Pilot Program for the proposed Central Valley 119-Mile Wildlife Crossing Monitoring Plan (total cost to be $2.5 million). This pilot project will study alternative crossing designs, research and monitor wildlife/vehicle collisions, and review the San Joaquin kit fox migration corridors. [ 15 ] The Authority's Carbon Footprint Calculator [ 16 ] shows the benefits for 5 different portions of the HSR route, including all of Phase 1 as well as the Interim Initial Operating Segment. It gives estimates of the greenhouse gas emissions of planes, autos, and HSR trains, as well as the savings that using the train would create. The HSR savings estimates (per round trip) are: In the 2022 Business Plan the Authority estimates that by 2040, the system could carry 50 million riders per year, and that at full operation, the reduction of greenhouse gas emissions will be equivalent to removing 400,000 vehicles from the road. [ 17 ]
https://en.wikipedia.org/wiki/Impacts_of_California_High-Speed_Rail
Impalefection is a method of gene delivery using nanomaterials such as carbon nanofibers , carbon nanotubes , and nanowires . [ 1 ] Needle-like nanostructures are synthesized perpendicular to the surface of a substrate . Plasmid DNA containing the gene intended for intracellular delivery is attached to the nanostructure surface. A chip with arrays of these needles is then pressed against cells or tissue. Cells that are impaled by the nanostructures can express the delivered gene(s). The term, denoting a type of transfection , is derived from two words – impalement and infection . One of the features of impalefection is spatially resolved gene delivery, which holds potential for tissue engineering approaches in wound healing such as gene-activated matrix technology. [ 2 ] Though impalefection is an efficient approach in vitro , it has not yet been used effectively in vivo on live organisms and tissues. [ 3 ] Vertically aligned carbon nanofiber arrays prepared by photolithography and plasma-enhanced chemical vapor deposition are one suitable type of material. [ 4 ] Silicon nanowires are another choice of nanoneedle that has been utilized for impalefection.
https://en.wikipedia.org/wiki/Impalefection
An impedance analyzer is a type of electronic test equipment used to measure complex electrical impedance as a function of test frequency. Impedance is an important parameter used to characterize electronic components , electronic circuits , and the materials used to make components. Impedance analysis can also be used to characterize materials exhibiting dielectric behavior, such as biological tissue, foodstuffs or geological samples. Impedance analyzers come in three distinct hardware implementations; together these three implementations can probe from ultra-low to ultra-high frequency and can measure impedances from μΩ to TΩ. Impedance analyzers are a class of instruments which measure complex electrical impedance as a function of frequency. This involves the phase-sensitive measurement of current and voltage applied to a device under test while the measurement frequency is varied over the course of the measurement. Key specifications of an impedance analyzer are the frequency range, impedance range, absolute impedance accuracy and phase angle accuracy. Further specifications include the ability to apply voltage bias and current bias while measuring, and the measurement speed. [ 1 ] Impedance analyzers typically offer highly accurate impedance measurements, e.g. with a basic accuracy of up to 0.05%, [ 2 ] and a frequency measurement range from μHz to GHz. Impedance values can range over many decades from μΩ to TΩ, whereas the phase angle accuracy is in the range of 10 millidegrees. Measured impedance values include the absolute impedance, the real and imaginary parts of the measured impedance, and the phase between the voltage and current. Model-derived impedance parameters such as conductance, inductance and capacitance are calculated based on an equivalent (replacement) circuit model and subsequently displayed. LCR meters also provide impedance measurement functionality, typically with similar accuracy but a lower frequency range. The measurement frequency of LCR meters is generally fixed rather than swept, and their results cannot be displayed graphically as a function of frequency. A fourth implementation, the vector network analyzer (VNA) , can be considered a distinct class of instrument: VNAs also measure impedance, but usually at much higher frequencies and with much lower accuracy than dedicated impedance analyzers. [ 4 ] Most impedance analyzers come with a reactance chart [ 5 ] which shows the reactance values for capacitive reactance X C and inductive reactance X L at a given frequency. The accuracy of the instrument is transposed onto the chart to allow the user to quickly see what accuracy they can expect for a given frequency and reactance.
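The chart values follow from the standard formulas X C = 1/(2πfC) and X L = 2πfL. A short Python sketch (our own illustration, not any instrument's API; the component values are arbitrary):

```python
import math

def capacitive_reactance(freq_hz: float, capacitance_f: float) -> float:
    """X_C = 1 / (2*pi*f*C), in ohms."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

def inductive_reactance(freq_hz: float, inductance_h: float) -> float:
    """X_L = 2*pi*f*L, in ohms."""
    return 2 * math.pi * freq_hz * inductance_h

# Example: a 100 nF capacitor and a 10 mH inductor at 1 kHz.
f = 1e3
print(capacitive_reactance(f, 100e-9))  # ~1591.5 ohm
print(inductive_reactance(f, 10e-3))    # ~62.8 ohm
```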
https://en.wikipedia.org/wiki/Impedance_analyzer
Impedance cardiography (ICG) is a non-invasive technology that measures the total electrical conductivity of the thorax and its changes over time in order to continuously compute a number of cardiodynamic parameters, such as stroke volume (SV), heart rate (HR), cardiac output (CO), ventricular ejection time (VET), and pre-ejection period. It detects the impedance changes caused by a high-frequency, low-magnitude current flowing through the thorax between an additional two pairs of electrodes located outside of the measured segment. The sensing electrodes also detect the ECG signal, which is used as a timing clock of the system. [ 1 ] [ 2 ] Impedance cardiography (ICG), also referred to as electrical impedance plethysmography (EIP) or thoracic electrical bioimpedance (TEB), has been researched since the 1940s. NASA helped develop the technology in the 1960s. [ 3 ] [ 4 ] The use of impedance cardiography in psychophysiological research was pioneered by the publication of an article by Miller and Horvath in 1978. [ 5 ] Subsequently, the recommendations of Miller and Horvath were confirmed by a standards group in 1990. [ 6 ] A comprehensive list of references is available at ICG Publications . With ICG, four dual disposable sensors placed on the neck and chest are used to transmit and detect electrical and impedance changes in the thorax, which are used to measure and calculate cardiodynamic parameters. [ citation needed ] Hemodynamics is a subchapter of cardiovascular physiology, which is concerned with the forces generated by the heart and the resulting motion of blood through the cardiovascular system. [ 7 ] These forces demonstrate themselves to the clinician as paired values of blood flow and blood pressure measured simultaneously at the output node of the left heart. Hemodynamics is a fluidic counterpart to Ohm's law in electronics: pressure is equivalent to voltage, flow to current, vascular resistance to electrical resistance, and myocardial work to power. The relationship between the instantaneous values of aortic blood pressure and blood flow through the aortic valve over one heartbeat interval and their mean values is depicted in Fig. 1. Their instantaneous values may be used in research; in clinical practice, their mean values, MAP and SV, are adequate. [ citation needed ] Systemic (global) blood flow parameters are (a) the blood flow per heartbeat, the stroke volume, SV [ml/beat], and (b) the blood flow per minute, the cardiac output, CO [l/min]. There is a clear relationship between these blood flow parameters:

$$CO = \frac{SV \times HR}{1000} \qquad (1)$$

where HR is the heart rate (beats per minute, bpm). Since the normal value of CO is proportional to the body mass it has to perfuse, one "normal" value of SV and CO for all adults cannot exist. All blood flow parameters have to be indexed. The accepted convention is to index them by the body surface area, BSA [m 2 ], given by the DuBois & DuBois formula, a function of height and weight:

$$BSA = 0.007184 \times W^{0.425} \times H^{0.725}$$

where W is weight in kilograms and H is height in centimeters. The resulting indexed parameters are the stroke index, SI (ml/beat/m 2 ), defined as $SI = SV / BSA$, and the cardiac index, CI (l/min/m 2 ), defined as $CI = CO / BSA$. These indexed blood flow parameters exhibit typical ranges: for the stroke index, 35 < SI < 65 ml/beat/m 2 ; for the cardiac index, 2.8 < CI < 4.2 l/min/m 2 . Eq. 1 for the indexed parameters then becomes

$$CI = \frac{SI \times HR}{1000}.$$
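These definitional relations are easy to compute. A minimal Python sketch (our own illustration; the function names are assumptions, not from the ICG literature) derives CO, SI, and CI for one set of example values:

```python
def bsa_dubois(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2) by the DuBois & DuBois formula."""
    return 0.007184 * weight_kg**0.425 * height_cm**0.725

def cardiac_output_l_min(sv_ml: float, hr_bpm: float) -> float:
    """Cardiac output (l/min) from stroke volume (ml/beat) and heart rate (bpm)."""
    return sv_ml * hr_bpm / 1000.0

# Example: SV = 80 ml/beat, HR = 70 bpm, for a 75 kg, 180 cm adult.
bsa = bsa_dubois(75, 180)              # ~1.94 m^2
co = cardiac_output_l_min(80, 70)      # 5.6 l/min
si = 80 / bsa                          # stroke index, ~41 ml/beat/m^2
ci = co / bsa                          # cardiac index, ~2.9 l/min/m^2
print(f"BSA={bsa:.2f} m^2  CO={co:.1f} l/min  SI={si:.1f}  CI={ci:.2f}")
```

Both indexed values fall inside the typical ranges quoted above.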
The primary function of the cardiovascular system is transport of oxygen: blood is the vehicle, oxygen is the cargo. The task of the healthy cardiovascular system is to provide adequate perfusion to all organs and to maintain a dynamic equilibrium between oxygen demand and oxygen delivery. In a healthy person, the cardiovascular system always increases blood flow in response to increased oxygen demand. In a hemodynamically compromised person, when the system is unable to satisfy increased oxygen demand, the blood flow to organs lower on the oxygen delivery priority list is reduced, and these organs may eventually fail. Digestive disorders, male impotence, tiredness, sleepwalking, and environmental temperature intolerance are classic examples of the consequences of such a low-flow state. [ citation needed ] SI variability and MAP variability are accomplished through the activity of hemodynamic modulators . The conventional cardiovascular physiology terms for the hemodynamic modulators are preload, contractility and afterload . They deal with (a) the inertial filling forces of blood return into the atrium ( preload ), which stretch the myocardial fibers, thus storing energy in them, (b) the force by which the heart muscle fibers shorten, thus releasing the energy stored in them in order to expel part of the blood in the ventricle into the vasculature ( contractility ), and (c) the forces the pump has to overcome in order to deliver a bolus of blood into the aorta with each contraction ( afterload ). The level of preload is currently assessed either from the PAOP (pulmonary artery occluded pressure) in a catheterized patient, or from the EDI (end-diastolic index) by use of ultrasound. Contractility is not routinely assessed; quite often inotropy and contractility are interchanged as equal terms. Afterload is assessed from the SVRI value. Rather than using the terms preload, contractility and afterload, the preferred terminology and methodology in per-beat hemodynamics is to use the terms for the actual hemodynamic modulating tools, which either the body utilizes or the clinician has available to control the hemodynamic state: The preload and the Frank-Starling (mechanically) induced level of contractility are modulated by variation of intravascular volume (volume expansion or volume reduction/diuresis). Pharmacological modulation of contractility is performed with cardioactive inotropic agents (positive or negative inotropes) being present in the blood stream and affecting the rate of contraction of myocardial fibers. The afterload is modulated by varying the caliber of sphincters at the input and output of each organ, and thus the vascular resistance , with vasoactive pharmacological agents (vasoconstrictors or vasodilators and/or ACE inhibitors and/or ARBs) (ACE = angiotensin-converting enzyme; ARB = angiotensin-receptor blocker). Afterload also increases with increasing blood viscosity ; however, with the exception of extremely hemodiluted or hemoconcentrated patients, this parameter is not routinely considered in clinical practice. With the exception of volume expansion, which can be accomplished only by physical means (intravenous or oral intake of fluids), all other hemodynamic modulating tools are pharmacological, cardioactive or vasoactive agents. The measurement of CI and its derivatives allows clinicians to make timely patient assessment, diagnosis, prognosis, and treatment decisions. It has been well established that trained and untrained physicians alike are unable to estimate cardiac output through physical assessment alone.
Clinical measurement of cardiac output has been available since the 1970s. However, this blood flow measurement is highly invasive, utilizing a flow-directed thermodilution catheter (also known as the Swan-Ganz catheter), which represents significant risks to the patient. In addition, this technique is costly (several hundred dollars per procedure) and requires a skilled physician and a sterile environment for catheter insertion. As a result, it has been used only in a very narrow stratum (less than 2%) of critically ill and high-risk patients in whom the knowledge of blood flow and oxygen transport outweighed the risks of the method. In the United States, it is estimated that at least two million pulmonary artery catheter monitoring procedures are performed annually, most often in peri-operative cardiac and vascular surgical patients and in decompensated heart failure, multi-organ failure, and trauma. [ citation needed ] In theory, a noninvasive way to monitor hemodynamics would provide exceptional clinical value, because data similar to invasive hemodynamic monitoring methods could be obtained with much lower cost and no risk. While noninvasive hemodynamic monitoring can be used in patients who previously required an invasive procedure, the largest impact can be made in patients and care environments where invasive hemodynamic monitoring was neither possible nor worth the risk or cost. Because of its safety and low cost, the applicability of vital hemodynamic measurements could be extended to significantly more patients, including outpatients with chronic diseases. ICG has even been used in extreme conditions such as outer space and a Mt. Everest expedition. [ 8 ] Patients with heart failure, hypertension, pacemakers, and dyspnea are four groups in which outpatient noninvasive hemodynamic monitoring can play an important role in assessment, diagnosis, prognosis, and treatment. Some studies have shown ICG cardiac output is accurate, [ 9 ] [ 10 ] while other studies have shown it is inaccurate. [ 11 ] Use of ICG has been shown to improve blood pressure control in resistant hypertension when used by both specialists [ 12 ] and general practitioners. [ 13 ] ICG has also been shown to predict worsening status in heart failure. [ 14 ] The electrical and impedance signals are processed to determine fiducial points, which are then utilized to measure and calculate hemodynamic parameters, such as cardiac output, stroke volume, systemic vascular resistance, thoracic fluid content, acceleration index, and systolic time ratio.
https://en.wikipedia.org/wiki/Impedance_cardiography
Impedance control is an approach to dynamic control relating force and position. It is often used in applications where a manipulator interacts with its environment and the force-position relation is of concern. Examples of such applications include humans interacting with robots, where the force produced by the human relates to how fast the robot should move or stop. Simpler control methods, such as position control or torque control, perform poorly when the manipulator experiences contacts; impedance control is therefore commonly used in these settings. Mechanical impedance is the ratio of force output to velocity input. This is analogous to electrical impedance , which is the ratio of voltage output to current input (e.g. resistance is voltage divided by current). A " spring constant " defines the force output for a displacement (extension or compression) of the spring. A " damping constant " defines the force output for a velocity input. If we control the impedance of a mechanism, we are controlling the force of resistance to external motions that are imposed by the environment. Mechanical admittance is the inverse of impedance – it defines the motions that result from a force input. If a mechanism applies a force to the environment, the environment will move, or not move, depending on its properties and the force applied. For example, a marble sitting on a table will react very differently to a given force than will a log floating in a lake. The key theory behind the method is to treat the environment as an admittance and the manipulator as an impedance. It assumes the postulate that "no controller can make the manipulator appear to the environment as anything other than a physical system." This rule of thumb can also be stated as: "in the most common case in which the environment is an admittance (e.g. a mass, possibly kinematically constrained) that relation should be an impedance, a function, possibly nonlinear, dynamic, or even discontinuous, specifying the force produced in response to a motion imposed by the environment." [ 1 ] Impedance control does not simply regulate the force or position of a mechanism. Instead it regulates the relationship between force on the one hand and position, velocity and acceleration on the other, i.e. the impedance of the mechanism. It requires a position (velocity or acceleration) as input and produces a force as output; admittance control is the inverse, taking a force as input and imposing a position. In effect the controller imposes a spring-mass-damper behavior on the mechanism by maintaining a dynamic relationship between force $F$ and position, velocity and acceleration $(x, v, a)$ :

$$F = M a + C v + K x + f + s,$$

with $f$ being friction and $s$ being static force. Masses ( $M$ ) and springs (with stiffness $K$ ) are energy-storing elements, whereas a damper (with damping $C$ ) is an energy-dissipating device. If we can control impedance, we are able to control the energy exchange during interaction, i.e. the work being done. So impedance control is interaction control.
[ 2 ] Note that mechanical systems are inherently multi-dimensional – a typical robot arm can place an object in three dimensions ( $x, y, z$ coordinates) and in three orientations (e.g. roll, pitch, yaw). In theory, an impedance controller can cause the mechanism to exhibit a multi-dimensional mechanical impedance. For example, the mechanism might act very stiff along one axis and very compliant along another. By compensating for the kinematics and inertias of the mechanism, we can orient those axes arbitrarily and in various coordinate systems. For example, we might cause a robotic part holder to be very stiff tangentially to a grinding wheel, while being very compliant (controlling force with little concern for position) in the radial axis of the wheel. An uncontrolled robot can be expressed in Lagrangian formulation as

$$\tau = M(q) \ddot{q} + c(q, \dot{q}) + g(q) + h(q, \dot{q}) + \tau_{\mathrm{ext}}, \qquad (1)$$

where $q$ denotes joint angular position, $M$ is the symmetric and positive-definite inertia matrix, $c$ the Coriolis and centrifugal torque, $g$ the gravitational torque, $h$ includes further torques from, e.g., inherent stiffness, friction, etc., and $\tau_{\mathrm{ext}}$ summarizes all the external forces from the environment. The actuation torque $\tau$ on the left side is the input variable to the robot. One may propose a control law of the following form:

$$\tau = K(q_d - q) + D(\dot{q}_d - \dot{q}) + \hat{M}(q) \ddot{q}_d + \hat{c}(q, \dot{q}) + \hat{g}(q) + \hat{h}(q, \dot{q}), \qquad (2)$$

where $q_d$ denotes the desired joint angular position, $K$ and $D$ are the control parameters, and $\hat{M}$ , $\hat{c}$ , $\hat{g}$ , and $\hat{h}$ are the internal model of the corresponding mechanical terms. Inserting (2) into (1) gives the equation of the closed-loop system (controlled robot):

$$K(q_d - q) + D(\dot{q}_d - \dot{q}) + M(q)(\ddot{q}_d - \ddot{q}) = \tau_{\mathrm{ext}}.$$

Letting $e = q_d - q$ , one obtains

$$K e + D \dot{e} + M \ddot{e} = \tau_{\mathrm{ext}}.$$

Since the matrices $K$ and $D$ have the dimensions of stiffness and damping, they are commonly referred to as the stiffness and damping matrix, respectively. Clearly, the controlled robot is essentially a multi-dimensional mechanical impedance (mass-spring-damper) to the environment, which is addressed by $\tau_{\mathrm{ext}}$ . The same principle also applies to task space. An uncontrolled robot has the following task-space representation in Lagrangian formulation:

$$\mathcal{F} = \Lambda(q) \ddot{x} + \mu(x, \dot{x}) + \gamma(q) + \eta(q, \dot{q}) + \mathcal{F}_{\mathrm{ext}},$$

where $q$ denotes joint angular position, $x$ task-space position, and $\Lambda$ the symmetric and positive-definite task-space inertia matrix. The terms $\mu$ , $\gamma$ , $\eta$ , and $\mathcal{F}_{\mathrm{ext}}$ are the generalized forces of the Coriolis and centrifugal term, the gravitation, further nonlinear terms, and environmental contacts. Note that this representation only applies to robots with redundant kinematics . The generalized force $\mathcal{F}$ on the left side corresponds to the input torque of the robot. Analogously, one may propose the following control law:

$$\mathcal{F} = K_x(x_d - x) + D_x(\dot{x}_d - \dot{x}) + \hat{\Lambda}(q) \ddot{x}_d + \hat{\mu}(q, \dot{q}) + \hat{\gamma}(q) + \hat{\eta}(q, \dot{q}),$$

where $x_d$ denotes the desired task-space position, $K_x$ and $D_x$ are the task-space stiffness and damping matrices, and $\hat{\Lambda}$ , $\hat{\mu}$ , $\hat{\gamma}$ , and $\hat{\eta}$ are the internal model of the corresponding mechanical terms. Similarly, with $e_x = x_d - x$ , one obtains

$$K_x e_x + D_x \dot{e}_x + \Lambda \ddot{e}_x = \mathcal{F}_{\mathrm{ext}} \qquad (3)$$

as the closed-loop system, which is essentially a multi-dimensional mechanical impedance to the environment ( $\mathcal{F}_{\mathrm{ext}}$ ) as well. Thus, one can choose the desired impedance (mainly stiffness) in the task space. For example, one may want to make the controlled robot act very stiff along one direction while relatively compliant along others by setting

$$K_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1000 \end{pmatrix} \mathrm{N/m},$$

assuming the task space is a three-dimensional Euclidean space. The damping matrix $D_x$ is usually chosen such that the closed-loop system (3) is stable . [ 3 ] Impedance control is used in applications such as robotics as a general strategy to send commands to a robotic arm and end effector that takes into account the non-linear kinematics and dynamics of the object being manipulated. [ 4 ]
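To make the closed-loop behavior concrete, here is a minimal one-dimensional numerical sketch (our own illustrative example, not from the cited literature; the mass, gains, and wall stiffness are arbitrary assumptions). The controller makes a unit mass behave like a spring-damper about a desired position while pressing against a stiff environment:

```python
# 1-D impedance control: impose F = K*(x_d - x) - D*v on a unit mass.
m = 1.0                 # mass of the mechanism (kg)
K, D = 100.0, 20.0      # desired stiffness (N/m) and damping (N*s/m)
x_d = 0.5               # desired position (m)

x, v = 0.0, 0.0         # state: position (m) and velocity (m/s)
dt = 1e-3
for _ in range(3000):   # simulate 3 s with semi-implicit Euler
    f_ext = -300.0 * max(x - 0.45, 0.0)   # environment: stiff wall at 0.45 m
    f_ctrl = K * (x_d - x) - D * v        # impedance control law
    a = (f_ctrl + f_ext) / m
    v += a * dt
    x += v * dt

# Equilibrium where the virtual spring balances the wall reaction:
# K*(x_d - x) = 300*(x - 0.45)  =>  x = (100*0.5 + 300*0.45)/400 = 0.4625 m
print(f"steady-state position = {x:.4f} m")
```

Raising the stiffness K drives the rest position closer to x_d at the cost of a larger contact force, which is exactly the trade-off the stiffness matrix K x expresses per axis in the multi-dimensional case.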
https://en.wikipedia.org/wiki/Impedance_control
Impedance microbiology is a microbiological technique used to measure the microbial number density (mainly bacteria but also yeasts ) of a sample by monitoring the electrical parameters of the growth medium . The ability of microbial metabolism to change the electrical conductivity of the growth medium was discovered by Stewart [ 1 ] and further studied by other scientists such as Oker-Blom, [ 2 ] Parson [ 3 ] and Allison [ 4 ] in the first half of the 20th century. However, it was only in the late 1970s that, thanks to computer -controlled systems used to monitor impedance , the technique showed its full potential, as discussed in the works of Fistenberg-Eden & Eden, [ 5 ] Ur & Brown [ 6 ] and Cady. [ 7 ] When a pair of electrodes is immersed in the growth medium, the system composed of electrodes and electrolyte can be modeled with the electrical circuit of Fig. 1, where R m and C m are the resistance and capacitance of the bulk medium, while R i and C i are the resistance and capacitance of the electrode-electrolyte interface. [ 8 ] However, when the frequency of the sinusoidal test signal applied to the electrodes is relatively low (below 1 MHz), the bulk capacitance C m can be neglected and the system can be modeled with a simpler circuit consisting only of a resistance R s and a capacitance C s in series. The resistance R s accounts for the electrical conductivity of the bulk medium, while the capacitance C s is due to the capacitive double layer at the electrode-electrolyte interface. [ 9 ] During the growth phase, bacterial metabolism transforms uncharged or weakly charged compounds of the bulk medium into highly charged compounds that change the electrical properties of the medium. This results in a decrease of the resistance R s and an increase of the capacitance C s .
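As a simple illustration of the series model just described, the measured impedance at frequency f is Z = R s + 1/(j·2πf·C s ). A short Python sketch (the component values are arbitrary illustrative assumptions):

```python
import cmath
import math

def series_rc_impedance(r_s: float, c_s: float, freq_hz: float) -> complex:
    """Impedance of the series R_s-C_s model: Z = R_s + 1/(j*2*pi*f*C_s)."""
    omega = 2 * math.pi * freq_hz
    return r_s + 1 / (1j * omega * c_s)

# Example: R_s = 500 ohm (bulk medium), C_s = 1 uF (double layer), f = 1 kHz.
z = series_rc_impedance(500.0, 1e-6, 1000.0)
print(f"|Z| = {abs(z):.1f} ohm, phase = {math.degrees(cmath.phase(z)):.1f} deg")
```

A drop in R s or a rise in C s during growth shows up directly as a change in this measured impedance.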
The measurement technique works as follows. The sample with the initial unknown bacterial concentration (C 0 ) is placed at a temperature favoring bacterial growth (in the range 37 to 42 °C if a mesophilic microbial population is the target) and the electrical parameters R s and C s are measured at regular time intervals of a few minutes by means of a pair of electrodes in direct contact with the sample. [ citation needed ] As long as the bacterial concentration remains below a critical threshold C TH , the electrical parameters R s and C s remain essentially constant (at their baseline values). C TH depends on various parameters such as electrode geometry, bacterial strain, chemical composition of the growth medium etc., but it is always in the range 10 6 to 10 7 cfu/ml. When the bacterial concentration increases over C TH , the electrical parameters deviate from their baseline values (generally, in the case of bacteria, there is a decrease of R s and an increase of C s ; the opposite happens in the case of yeasts). The time needed for the electrical parameters R s and C s to deviate from their baseline values is referred to as the detect time (DT) and is the parameter used to estimate the initial unknown bacterial concentration C 0 . In Fig. 2 a typical curve for R s as well as the corresponding bacterial concentration are plotted vs. time. Fig. 3 shows typical R s curves vs. time for samples characterized by different bacterial concentrations. Since DT is the time needed for the bacterial concentration to grow from the initial value C 0 to C TH , highly contaminated samples are characterized by lower values of DT than samples with low bacterial concentration. Given C 1 , C 2 and C 3 , the bacterial concentrations of three samples with C 1 > C 2 > C 3 , it follows that DT 1 < DT 2 < DT 3 . Data from the literature show that DT is a linear function of the logarithm of C 0 : [ 10 ] [ 11 ]

$$DT = A - B \log_{10} C_0$$

where the parameters A and B (with B > 0) depend on the particular type of sample under test, the bacterial strains, the type of enriching medium used and so on. These parameters can be calculated by calibrating the system using a set of samples whose bacterial concentration is known and computing the linear regression line, which is then used to estimate the bacterial concentration from the measured DT. Impedance microbiology has several advantages over the standard plate count technique for measuring bacterial concentration. It is characterized by a faster response time. In the case of mesophilic bacteria, the response time ranges from 2–3 hours for highly contaminated samples (10 5 – 10 6 cfu/ml) to over 10 hours for samples with very low bacterial concentration (less than 10 cfu/ml). As a comparison, for the same bacterial strains the plate count technique is characterized by response times from 48 to 72 hours. [ citation needed ] Impedance microbiology is a method that can be easily automated and implemented as part of an industrial machine or realized as an embedded portable sensor, while plate count is a manual method that needs to be carried out in a laboratory by trained personnel. Over the past decades different instruments (either laboratory-built or commercially available) to measure bacterial concentration using impedance microbiology have been built. One of the best-selling and well-accepted instruments in the industry is the Bactometer [ 12 ] by Biomerieux. The original instrument of 1984 features a multi-incubator system capable of monitoring up to 512 samples simultaneously with the ability to set 8 different incubation temperatures. Other instruments with performance comparable to the Bactometer are Malthus by Malthus Instruments Ltd (Bury, UK), [ 13 ] RABIT by Don Whitley Scientific (Shipley, UK) [ 14 ] and Bac Trac by Sy-Lab (Purkensdorf, Austria). [ 15 ] A portable embedded system for microbial concentration measurement in liquid and semi-liquid media using impedance microbiology has recently been proposed. [ 16 ] [ 17 ] The system is composed of a thermoregulated incubation chamber where the sample under test is stored and a controller for thermoregulation and impedance measurements. Impedance microbiology has been extensively used in the past decades to measure the concentration of bacteria and yeasts in different types of samples, mainly for quality assurance in the food industry. Some applications are the determination of the shelf life of pasteurized milk [ 18 ] and the measurement of total bacterial concentration in raw milk, [ 19 ] [ 20 ] frozen vegetables, [ 21 ] grain products, [ 22 ] meat products [ 23 ] and beer. [ 24 ] [ 25 ] The technique has also been used in environmental monitoring to detect the coliform concentration in water samples as well as other bacterial pathogens like E. coli present in water bodies, [ 26 ] [ 27 ] [ 28 ] in the pharmaceutical industry to test the efficiency of novel antibacterial agents, [ 29 ] and in the testing of final products.
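Returning to the calibration relation DT = A − B·log 10 (C 0 ) above, the fit and its inversion take only a few lines. A minimal Python sketch using synthetic numbers (illustrative only, not real measurements):

```python
import numpy as np

# Synthetic calibration set: known log10 concentrations (cfu/ml)
# and the corresponding measured detect times (hours).
log_c0 = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
dt_hours = np.array([9.8, 8.1, 6.0, 4.1, 2.2])

# Least-squares fit of DT = A - B * log10(C0).
slope, a = np.polyfit(log_c0, dt_hours, 1)
b = -slope
print(f"A = {a:.2f} h, B = {b:.2f} h per decade")

# Invert the line to estimate an unknown sample from its detect time.
dt_unknown = 5.0
log_c_est = (a - dt_unknown) / b
print(f"estimated C0 ~ 10^{log_c_est:.2f} cfu/ml")
```

In practice A and B must be re-fitted for each combination of sample type, strain, and medium, as noted above.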
https://en.wikipedia.org/wiki/Impedance_microbiology
An impeller , or impellor , [ 1 ] is a driven rotor used to increase the pressure and flow of a fluid. It is the opposite of a turbine , which extracts energy from, and reduces the pressure of, a flowing fluid. Strictly speaking, propellers are a sub-class of impellers where the flow both enters and leaves axially, but in many contexts the term "impeller" is reserved for non-propeller rotors where the flow enters axially and leaves radially, especially when creating suction in a pump or compressor . An impeller is a rotating component of a centrifugal pump that accelerates fluid outward from the center of rotation, thus transferring energy from the motor that drives the pump to the fluid being pumped. [ 2 ] [ 3 ] The acceleration generates output pressure when the outward movement of the fluid is confined by the pump casing. An impeller is usually a short cylinder with an open inlet (called an eye) to accept incoming fluid, vanes to push the fluid radially, and a splined , keyed , or threaded bore to accept a drive shaft. It can be cheaper to cast an impeller and its spindle as one piece, rather than separately. This combination is sometimes referred to simply as the "rotor." An open impeller has a hub with attached vanes and is mounted on a shaft. The vanes do not have a wall, making open impellers slightly weaker than closed or semi-closed impellers. However, as the side plate is not fixed to the inlet side of the vane, the blade stresses are significantly lower. [ 4 ] In pumps, the fluid enters the impeller's eye, where the vanes add energy and direct it to the discharge nozzle. A close clearance between the vanes and the pump volute or back plate prevents most of the fluid from flowing back. Wear on the bowl and the edges of the vanes can be compensated for by adjusting the clearance to maintain efficiency over time. [ 5 ] Because the internal parts are visible, open impellers are easier to inspect for damage and maintain than closed impellers. They can also be more easily modified to change flow properties. Open impellers operate over a narrow range of specific speed . Open impellers are usually faster and easier to maintain. For small pumps and those dealing with suspended solids, open impellers are generally used. [ 6 ] Sand locking does not occur as easily as with the closed type. A semi-closed impeller has an additional back wall, giving it more strength. These impellers can pass mixed solid-liquid mixtures at the cost of reduced efficiency. The construction of closed impellers includes additional back and front walls on both sides of the vanes, which enhances their strength. This also reduces the thrust load on the shaft, increasing bearing life and reliability and reducing shafting cost. However, this more complicated design, including the use of additional wear rings, makes closed impellers more difficult to manufacture and more expensive than open impellers. A closed impeller's efficiency decreases as the wear ring clearance increases with use. However, adjustment of the impeller bowl clearance does not affect the wear on the vanes as critically as with an open impeller. [ 4 ] Closed impellers can be used over a wider range of specific speeds than open impellers. [ 5 ] They are generally used in large pumps and clear water applications. These impellers cannot perform effectively with solids and become difficult to clean if clogged. [ 6 ] The screw impeller design uses a progressive axial channel that allows solids to be handled freely as it rotates. [ 7 ] [ 8 ] The main part of a centrifugal compressor is the impeller.
An open impeller has no cover and can therefore work at higher speeds. A compressor with a covered impeller can have more stages than one with an open impeller. Some impellers are similar to small propellers but without the large blades. Among other uses, they are used in water jets to power high-speed boats. Because impellers do not have large blades to turn, they can spin at much higher speeds than propellers. The water forced through the impeller is channeled by the housing, creating a water jet that propels the vessel forward. The housing is normally tapered into a nozzle to increase the speed of the water, which also creates a Venturi effect in which low pressure behind the impeller pulls more water towards the blades, tending to increase the speed. To work efficiently, there must be a close fit between the impeller and the housing. The housing is normally fitted with a replaceable wear ring, which tends to wear as sand or other particles are thrown against the housing side by the impeller. Vessels using impellers are normally steered by changing the direction of the water jet. Compare to propeller and jet aircraft engines . Impellers in agitated tanks are used to mix fluids or slurry in the tank. This can be used to combine materials in the form of solids, liquids and gases. Mixing the fluids in a tank is very important if there are gradients in conditions such as temperature or concentration. There are two types of impellers, depending on the flow regime created: Radial flow impellers impose essentially shear stress on the fluid, and are used, for example, to mix immiscible liquids or, in general, when there is a deformable interface to break. Another application of radial flow impellers is the mixing of very viscous fluids. Axial flow impellers impose essentially bulk motion and are used in homogenization processes, in which an increased fluid volumetric flow rate is important. Impellers can be further classified principally into three sub-types. Propellers are axial thrust-giving elements. These elements give a very high degree of swirling in the vessel. The flow pattern generated in the fluid resembles a helix. Some top-loading washing machine designs use impellers to agitate the laundry during washing. Fire services in the United Kingdom and many countries of the Commonwealth use a stylized depiction of an impeller as a rank badge. Officers wear one or more on their epaulettes or the collar of their firefighting uniform as an equivalent to the "pips" worn by the army and police . Air pumps, such as the Roots blower , use meshing impellers to move air through a system. Applications include blast furnaces, ventilation systems, and superchargers for internal combustion engines. Impellers are an integral part of axial-flow pumps , used in ventricular assist devices to augment or fully replace cardiac function. [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/Impeller
A chip log , also called common log , [ 1 ] ship log , or just log , is a navigation tool mariners use to estimate the speed of a vessel through water. The word knot , to mean nautical mile per hour , derives from this measurement method. All nautical instruments that measure the speed of a ship through water are known as logs. [ 2 ] This nomenclature dates back to the days of sail, when sailors attached a piece of lumber (a "log" of wood) to a rope knotted at regular intervals off the stern of a ship. Sailors counted the number of knots that passed through their hands in a given time to determine the ship's speed. Today, sailors and aircraft pilots still express speed in knots. A chip log consists of a wooden board attached to a line (the log-line ). The log-line has a number of knots at uniform intervals. The log-line is wound on a reel so the user can easily pay it out . Over time, log construction became standardized. The shape is a quarter circle , or quadrant , with a radius of 5 inches (130 mm) or 6 inches (150 mm), [ 1 ] and 0.5 inches (13 mm) thick. [ 1 ] The log-line attaches to the board with a bridle of three lines that connect to the quadrant's vertex and the two ends of its arc. To ensure the log submerges and orients correctly in the water, the bottom of the log is weighted with lead . [ 1 ] This provides more resistance in the water, and a more accurate and repeatable reading. The bridle attaches in such a way that a strong tug on the log-line makes one or two of the bridle's lines release, enabling a sailor to retrieve the log. A navigator who needed to know the speed of the vessel had a sailor drop the log over the ship's stern. The log acted as a drogue , remaining roughly in place while the vessel moved away. The sailor let the log-line run out for a fixed time while counting the knots that passed over. The length of log-line passing (the number of knots) determined the reading. The first known device that measured speed is often claimed to be the Dutchman's log. This invention is attributed to the Portuguese Bartolomeu Crescêncio , who designed it at the end of the 15th century or the beginning of the 16th century. [ 3 ] A sailor threw a floating object overboard and used a sandglass to measure the time it took to pass between two points on deck. The first reference to a Dutchman's log is in 1623—later than the ship log. [ 4 ] The Dutchman's log could be used with a brass tobacco box, rectangular with rounded ends. This box had tables on it to convert log timing to speed. [ 5 ] [ 6 ] Mariners have used the log for a long time. The first known description of the device in print is in A Regiment for the Sea by William Bourne , in 1574. Bourne devised a half-minute sandglass for timing. [ 7 ] At the time, a mile was reckoned as 5,000 feet, so in 30 seconds at one mile per hour, a ship would travel about 42 feet: 5,000 ft × 30 s / 3,600 s ≈ 41.7 ft. Initially, the log-line was not knotted and sailors measured the length directly on the line. With the introduction of the nautical mile as a standard unit of measure at sea in the 15th century, they began to mark the line at equal intervals proportional to the nautical mile and to the time interval used for measurement. Initially, the markings were simply knots in the line. Later, sailors worked knotted cords into the log-line. Many ships used knots spaced 8 fathoms (48 feet or 14.6 meters ) apart, while other ships used a 7-fathom spacing. [ 8 ] The time interval needs to be adjusted according to the distance between knots.
Substituting 6,000 feet for 1 mile, the above formula yields 28.8 seconds for a distance of 8 fathoms. In fact, 28-second and 14-second glasses used to be common among navigation equipment. [ 9 ] Use of a log did not give an exact speed measure. The sailor had to incorporate a number of considerations: Frequent measurements helped mitigate some of these inaccuracies by averaging out individual errors, and experienced navigators could determine their speed through the water with a fair degree of accuracy. Because a log measures the speed through the water, some errors—especially the effect of currents, the movement of the water itself—could not be corrected. Navigators relied on position fixes to correct for these errors. Modern navigation tools, such as GPS, report speed over ground, and in general do not give the same result as a log when currents are present.
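The 28.8-second figure above follows from simple proportion: the knot spacing stands to the mile as the glass time stands to the hour. A quick Python check (our own illustration, using the figures quoted in this article):

```python
# Chip log proportion: spacing_ft / mile_ft = glass_s / 3600 s,
# so counting knots per glass gives speed directly in miles per hour.

def glass_time_seconds(spacing_ft: float, mile_ft: float) -> float:
    """Sandglass duration that matches a given knot spacing."""
    return 3600.0 * spacing_ft / mile_ft

print(glass_time_seconds(48, 6000))  # 8-fathom spacing, 6,000 ft mile -> 28.8 s
print(glass_time_seconds(42, 5000))  # Bourne's reckoning -> ~30 s
```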
https://en.wikipedia.org/wiki/Impeller_log
Imperfect induction is the process of inferring from a sample of a group what is characteristic of the whole group. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Imperfect_induction
Impervious surfaces are mainly artificial structures—such as pavements ( roads , sidewalks , driveways and parking lots , as well as industrial areas such as airports , ports and logistics and distribution centres , all of which use considerable paved areas) that are covered by water-resistant materials such as asphalt , concrete , brick , stone —and rooftops . Soils compacted by urban development are also highly impervious. Impervious surfaces are an environmental concern because their construction initiates a chain of events that modifies urban air and water resources: The total coverage by impervious surfaces in an area, such as a municipality or a watershed , is usually expressed as a percentage of the total land area. The coverage increases with rising urbanization . In rural areas, impervious cover may only be one or two percent. In residential areas, coverage increases from about 10 percent in low-density subdivisions to over 50 percent in multifamily communities. In industrial and commercial areas, coverage rises above 70 percent. In regional shopping centers and dense urban areas, it is over 90 percent. In the contiguous 48 states of the US, urban impervious cover adds up to 43,000 square miles (110,000 km 2 ). Development adds 390 square miles (1,000 km 2 ) annually. Typically, two-thirds of the cover is pavements and one-third is building roofs. [ 2 ] Impervious surface coverage can be limited by restricting land use density (such as the number of homes per acre in a subdivision), but this approach causes land elsewhere (outside the subdivision) to be developed to accommodate the growing population. (See urban sprawl .) Alternatively, urban structures can be built differently to make them function more like naturally pervious soils; examples of such alternative structures are porous pavements , green roofs and infiltration basins . Rainwater from impervious surfaces can be collected in rainwater tanks and used in place of mains water. The island of Catalina , located west of the Port of Long Beach, has put extensive effort into capturing rainfall to minimize the cost of transporting water from the mainland. Partly in response to recent criticism by municipalities , a number of concrete manufacturers such as CEMEX and Quikrete have begun producing permeable materials which partly mitigate the environmental impact of conventional impervious concrete. These new materials are composed of various combinations of naturally derived solids including fine to coarse-grained rocks and minerals , organic matter (including living organisms ), ice , weathered rock and precipitates , liquids (primarily water solutions ), and gases . [ 3 ] The COVID-19 pandemic gave rise to proposals for radical change in the organisation of the city, [ 4 ] with the drastic reduction of impermeable surfaces and the recovery of soil permeability being one of the elements of the Manifesto for the Reorganisation of the City, published in Barcelona by architecture and urban theorist Massimo Paolini and signed by 160 academics and 350 architects. The percentage imperviousness, commonly referred to as PIMP in calculations, is an important factor when considering the drainage of water. It is calculated as the percentage of a catchment area which is made up of impervious surfaces such as roads, roofs and other paved surfaces. An estimate of PIMP is given by PIMP = 6.4 J^0.5, where J is the number of dwellings per hectare (Butler and Davies 2000).
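The formula is a one-liner; it is shown here as a small Python sketch for concreteness (our own illustration; Butler and Davies give only the formula itself):

```python
def pimp_from_dwelling_density(j_per_hectare: float) -> float:
    """Butler & Davies (2000) estimate: PIMP = 6.4 * J^0.5, in percent."""
    return 6.4 * j_per_hectare ** 0.5

# Example: a residential catchment with 25 dwellings per hectare.
print(pimp_from_dwelling_density(25))  # 32.0 -> about 32% impervious
```

Being an empirical fit for residential densities, the estimate should not be extrapolated to densities where it would exceed 100 percent (J above roughly 244 dwellings per hectare).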
For example, woodland has a PIMP value of 10%, whereas dense commercial areas have a PIMP value of 100%. This variable is used in the Flood Estimation Handbook . Homer and others (2007) indicate that about 76 percent of the conterminous United States is classified as having less than 1 percent impervious cover, 11 percent with impervious cover of 1 to 10 percent, 4 percent with an estimated impervious cover of 11 to 20 percent, 4.4 percent with an estimated impervious cover of 21 to 40 percent, and about 4.4 percent with an estimated impervious cover greater than 40 percent. [ 5 ] [ 6 ] The total impervious area (TIA), commonly referred to as impervious cover (IC) in calculations, can be expressed as a fraction (from zero to one) or a percentage. There are many methods for estimating TIA, including the use of the National Land Cover Data Set (NLCD) [ 7 ] with a Geographic information system (GIS), land-use categories with categorical TIA estimates, a generalized percent developed area, and relations between population density and TIA. [ 6 ] The U.S. NLCD impervious surface data set may provide a high-quality, nationally consistent land cover data set in a GIS-ready format that can be used to estimate TIA values. [ 6 ] The NLCD consistently quantifies the percent anthropogenic TIA at a 30-meter (900 m 2 ) pixel resolution throughout the nation. Within the data set, each pixel is quantified as having a TIA value that ranges from 0 to 100 percent. TIA estimates made with the NLCD impervious surface data set represent an aggregated TIA value for each pixel rather than a TIA value for an individual impervious feature. For example, a two-lane road in a grassy field has a TIA value of 100 percent, but the pixel containing the road would have a TIA value of 26 percent. If the road (equally) straddles the boundary of two pixels, each pixel would have a TIA value of 13 percent. Data-quality analysis of the NLCD 2001 data set with manually delimited TIA sample areas indicates that the average error of predicted versus actual TIA may range from 8.8 to 11.4 percent. [ 5 ] TIA estimates from land use are made by identifying land use categories for large blocks of land, summing the total area of each category, and multiplying each area by a characteristic TIA coefficient. [ 6 ] Land use categories commonly are used to estimate TIA because areas with a common land use can be identified from field studies, from maps, from planning and zoning information, and from remote imagery. Land use coefficient methods commonly are used because planning and zoning maps that identify similar areas are increasingly available in GIS formats. Also, land use methods are selected to estimate potential effects of future development on TIA with planning maps that quantify projected changes in land use. [ 8 ] There are substantial differences in actual and estimated TIA values from different studies in the literature. Terms like low density and high density may differ in different areas. [ 9 ] A residential density of one-half acre per house may be classified as high density in a rural area, medium density in a suburban area, and low density in an urban area. Granato (2010) [ 6 ] provides a table with TIA values for different land-use categories from 30 studies in the literature. The percent developed area (PDA) is commonly used to estimate TIA manually by using maps.
[ 6 ] The Multi-Resolution Land Characteristics Consortium (MRLCC) defines a developed area as being covered by at least 30 percent constructed materials. [ 10 ] Southard (1986) [ 11 ] defined non-developed areas as natural, agricultural , or scattered residential development . He developed a regression equation to predict TIA using percent developed area (table 6-1). He developed his equation using a logarithmic power function with data from 23 basins in Missouri . He noted that this method was advantageous because large basins could quickly be delineated and TIA estimated manually from available maps. Granato (2010) [ 6 ] developed a regression equation by using data from 262 stream basins in 10 metropolitan areas of the conterminous United States with drainage areas ranging from 0.35 to 216 square miles and PDA values ranging from 0.16 to 99.06 percent. TIA also is estimated from population density data by estimating the population in an area of interest and using regression equations to calculate the associated TIA. [ 6 ] Population-density data are used because nationally consistent census-block data are available in GIS formats for the entire United States. Population-density methods also can be used for predicting potential effects of future development. Although there may be substantial variation in the relations between population density and TIA, the accuracy of such estimates tends to improve with increasing drainage area as local variations are averaged out. [ 12 ] Granato (2010) [ 6 ] provides a table with 8 population-density relations from the literature and a new equation developed by using data from 6,255 stream basins in the USGS GAGESII dataset. [ 13 ] Granato (2010) [ 6 ] also provides four equations to estimate TIA from housing density, which is related to population density. TIA is also estimated from impervious maps extracted through remote sensing . Remote sensing has been extensively utilized to detect impervious surfaces. [ 14 ] [ 15 ] Detection of impervious areas using deep learning in conjunction with satellite images has emerged as a transformative method in remote sensing and environmental monitoring . Deep learning algorithms, particularly convolutional neural networks (CNNs), have revolutionized our capacity to identify and quantify impervious surfaces from high-resolution satellite imagery. These models can automatically extract intricate spatial and spectral features, enabling them to discriminate between impervious and non-impervious surfaces with exceptional accuracy. [ 16 ] [ 17 ] [ 18 ] Natural impervious areas are defined here as land covers that can contribute a substantial amount of surface runoff during small and large storms, but commonly are classified as pervious areas. [ 6 ] These areas are not commonly considered an important source of stormflow in most highway and urban runoff -quality studies, but may produce a substantial amount of stormflow. These natural impervious areas may include open water, wetlands , rock outcrops, barren ground (natural soils with low imperviousness), and areas of compacted soils . Natural impervious areas, depending on their nature and antecedent conditions, may produce stormflow from infiltration excess overland flow, saturation overland flow, or direct precipitation. The effects of natural impervious areas on runoff generation are expected to be more important in areas with low TIA than in highly developed areas.
The NLCD [ 19 ] provides land-cover statistics that can be used as a qualitative measure of the prevalence of different land covers that may act as natural impervious areas. Open water may act as a natural impervious area if direct precipitation is routed through the channel network and arrives as stormflow at the site of interest. Wetlands may act as a natural impervious area during storms when groundwater discharge and saturation overland flow are a substantial proportion of stormflow. Barren ground in riparian areas may act as a natural impervious area during storms because these areas are a source of infiltration excess overland flows. Seemingly pervious areas that have been affected by development activities may act as impervious areas and generate infiltration excess overland flows. These stormflows may occur even during storms that do not meet the precipitation volume or intensity criteria to produce runoff based on nominal infiltration rates. Developed pervious areas may behave like impervious areas because development and subsequent use tend to compact soils and reduce infiltration rates. For example, Felton and Lull (1963) [ 20 ] measured infiltration rates for forest soils and lawns and found a potential 80 percent reduction in infiltration as a result of development activities. Similarly, Taylor (1982) [ 21 ] performed infiltrometer tests in areas before and after suburban development and noted that topsoil alteration and compaction by construction activities reduced infiltration rates by more than 77 percent. This article incorporates public domain material from websites or documents of the United States Geological Survey and the Federal Highway Administration .
https://en.wikipedia.org/wiki/Impervious_surface
An impingement filter can be used to purify a polluted solution, whether gas or liquid. The impingement filter acts by inducing the solution to change direction, causing entrained particles to adhere to the filter medium. In many cases this filter medium is designed to contain apertures of a specific size which trap the impurities in the solution. The gas or liquid, minus its impurities, is permitted free passage through the medium. Common examples of impingement filters are the air filters, fuel filters and oil filters used in cars, trucks, etc.
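Whether a particle impacts the medium or follows the turning stream is commonly characterized in aerosol mechanics by the dimensionless Stokes number; this is general background to the mechanism described above rather than a property of any particular filter, and every numerical value in the sketch below is an illustrative assumption.

# Stokes number for inertial impaction: Stk = rho_p * d_p**2 * U / (18 * mu * L).
# Large Stk: the particle cannot follow the turning stream and impacts the medium.
rho_p = 1000.0   # particle density, kg/m^3 (assumed, roughly water-like)
d_p = 5e-6       # particle diameter, m (5 micrometres, assumed)
U = 1.0          # stream velocity, m/s (assumed)
mu = 1.8e-5      # dynamic viscosity of air, Pa*s
L = 1e-3         # characteristic size of the flow obstacle, m (assumed)

stk = rho_p * d_p ** 2 * U / (18 * mu * L)
print(f"Stokes number: {stk:.3f}")  # values near or above 1 favour impaction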
https://en.wikipedia.org/wiki/Impingement_filter
An implant is a medical device manufactured to replace a missing biological structure, support a damaged biological structure, or enhance an existing biological structure. For example, an implant may be a rod, used to strengthen weak bones. Medical implants are human-made devices, in contrast to a transplant, which is transplanted biomedical tissue. The surface of implants that contacts the body might be made of a biomedical material such as titanium, silicone, or apatite, depending on what is the most functional. [ 1 ] In 2018, for example, American Elements developed a nickel alloy powder for 3D printing robust, long-lasting, and biocompatible medical implants. [ 2 ] In some cases implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents. [ 3 ] Implants can roughly be categorized into groups by application: Sensory and neurological implants are used for disorders affecting the major senses and the brain, as well as other neurological disorders. They are predominantly used in the treatment of conditions such as cataract, glaucoma, keratoconus, and other visual impairments; otosclerosis and other hearing loss issues, as well as middle ear diseases such as otitis media; and neurological diseases such as epilepsy, Parkinson's disease, and treatment-resistant depression. Examples include the intraocular lens, intrastromal corneal ring segment, cochlear implant, tympanostomy tube, and neurostimulator. [ 1 ] [ 3 ] [ 4 ] Cardiovascular medical devices are implanted in cases where the heart, its valves, or the rest of the circulatory system is in disorder. They are used to treat conditions such as heart failure, cardiac arrhythmia, ventricular tachycardia, valvular heart disease, angina pectoris, and atherosclerosis. Examples include the artificial heart, artificial heart valve, implantable cardioverter-defibrillator, artificial cardiac pacemaker, and coronary stent. [ 1 ] [ 3 ] [ 4 ] Orthopaedic implants help alleviate issues with the bones and joints of the body. [ 5 ] They are used to treat bone fractures, osteoarthritis, scoliosis, spinal stenosis, and chronic pain, as well as in knee and hip replacements. Examples include a wide variety of pins, rods, screws, and plates used to anchor fractured bones while they heal. [ 1 ] [ 3 ] [ 4 ] Metallic glasses based on magnesium with zinc and calcium additions are being tested as potential metallic biomaterials for biodegradable medical implants. [ 6 ] [ 7 ] Patients with orthopaedic implants sometimes need to be placed in a magnetic resonance imaging (MRI) machine for detailed musculoskeletal study. Concerns have therefore been raised regarding loosening and migration of the implant, heating of the implant metal (which could cause thermal damage to surrounding tissues), and distortion of the MRI scan that affects the imaging results. A 2005 study of orthopaedic implants showed that the majority of orthopaedic implants do not react with the magnetic fields of a 1.0 tesla MRI scanning machine, with the exception of external fixator clamps. [ 8 ] However, at 7.0 tesla, several orthopaedic implants, such as heel and fibular implants, show significant interaction with the MRI magnetic fields. [ 9 ] Electrical implants are being used to relieve pain from rheumatoid arthritis.
[ 10 ] The electric implant is embedded in the neck of patients with rheumatoid arthritis; the implant sends electrical signals to electrodes in the vagus nerve. [ 11 ] [ 12 ] The application of this device is being tested as an alternative to medicating people with rheumatoid arthritis for their lifetime. [ 13 ] Contraceptive implants are primarily used to prevent unintended pregnancy and treat conditions such as non-pathological forms of menorrhagia. Examples include copper- and hormone-based intrauterine devices. [ 3 ] [ 4 ] [ 14 ] Cosmetic implants (often prosthetics) attempt to bring some portion of the body back to an acceptable aesthetic norm. They are used as a follow-up to mastectomy due to breast cancer, for correcting some forms of disfigurement, and for modifying aspects of the body (as in buttock augmentation and chin augmentation). Examples include the breast implant, nose prosthesis, ocular prosthesis, and injectable filler. [ 1 ] [ 3 ] [ 4 ] Other types of organ dysfunction can occur in the systems of the body, including the gastrointestinal, respiratory, and urological systems. Implants are used in those and other locations to treat conditions such as gastroesophageal reflux disease, gastroparesis, respiratory failure, sleep apnea, urinary and fecal incontinence, and erectile dysfunction. Examples include the LINX, implantable gastric stimulator, diaphragmatic/phrenic nerve stimulator, neurostimulator, surgical mesh, artificial urinary sphincter and penile implant. [ 3 ] [ 4 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] Drug-eluting implants combine the structural benefits of traditional devices with advanced drug delivery systems, achieving controlled release through the use of specialized materials. These implants often use biodegradable polymers (such as PLA, PGA, and PLGA) to control drug release via mechanisms such as diffusion, polymer degradation, and osmotic pressure, producing high local drug concentrations while minimizing systemic side effects. This approach to controlled release tailors therapy to the specific needs of cardiovascular, ocular, and orthopedic applications and draws on ongoing work in materials science and nanotechnology toward next-generation, personalized implantable systems. [ 20 ] [ 21 ] Medical devices are classified by the US Food and Drug Administration (FDA) into three classes depending on the risks the device may pose to the user. According to 21 CFR 860.3, Class I devices are considered to pose the least risk to the user and require the least control. Class I devices include simple devices such as arm slings and hand-held surgical instruments. Class II devices are considered to need more regulation than Class I devices and must meet specific requirements before FDA approval. Class II devices include X-ray systems and physiological monitors. Class III devices require the most regulatory control, since such a device supports or sustains human life or may not be well tested. Class III devices include replacement heart valves and implanted cerebellar stimulators. Many implants typically fall under Class II and Class III. [ 22 ] [ 23 ] A variety of minimally bioreactive metals are routinely implanted. The most commonly implanted form of stainless steel is 316L. Cobalt-chromium and titanium-based implant alloys are also permanently implanted. All of these are made passive by a thin layer of oxide on their surface.
A consideration, however, is that metal ions diffuse outward through the oxide and end up in the surrounding tissue. Bioreaction to metal implants includes the formation of a small envelope of fibrous tissue. The thickness of this layer is determined by the products being dissolved and by the extent to which the implant moves around within the enclosing tissue. Pure titanium may have only a minimal fibrous encapsulation. Stainless steel, on the other hand, may elicit encapsulation of as much as 2 mm. [ 24 ] Porous implants are characterized by the presence of voids in the metallic or ceramic matrix. Voids can be regular, such as in additively manufactured (AM) lattices, [ 25 ] or stochastic, such as in gas-infiltrated production processes. [ 26 ] The reduction in the modulus of the implant follows a complex nonlinear relationship dependent on the volume fraction of base material and the morphology of the pores. [ 27 ] Experimental models exist to predict the range of modulus that a stochastic porous material may take. [ 28 ] Above about 10 percent volume-fraction porosity, models begin to deviate significantly, and different models, such as the rule of mixtures for low-porosity two-material matrices, have been developed to describe mechanical properties. [ 29 ] AM lattices have more predictable mechanical properties than stochastic porous materials and can be tuned so that they have favorable directional mechanical properties. Variables such as strut diameter, strut shape, and the number of cross-beams can have a dramatic effect on the loading characteristics of the lattice. [ 30 ] AM can fine-tune the lattice spacing to within a much smaller range than stochastically porous structures, enabling the future development of specific cell cultures in tissue engineering. [ 31 ] Porosity provides two main benefits. 1) The elastic modulus of the implant is decreased, allowing the implant to better match the elastic modulus of the bone. The elastic modulus of cortical bone (~18 GPa) is significantly lower than that of typical solid titanium or steel implants (110 GPa and 210 GPa, respectively), causing the implant to take up a disproportionate amount of the load applied to the appendage and leading to an effect called stress shielding. 2) Porosity enables osteoblastic cells to grow into the pores of implants. Cells can span gaps smaller than 75 microns and grow into pores larger than 200 microns. [ 26 ] Bone ingrowth is a favorable effect, as it anchors the cells into the implant, increasing the strength of the bone-implant interface. [ 32 ] More load is transferred from the implant to the bone, reducing stress shielding effects. The density of the bone around the implant is likely to be higher due to the increased load applied to the bone. Bone ingrowth reduces the likelihood of the implant loosening over time because stress shielding and the corresponding bone resorption over extended timescales are avoided. [ 33 ] Porosity of greater than 40% is favorable for facilitating sufficient anchoring of the osteoblastic cells. [ 34 ] Under ideal conditions, implants should initiate the desired host response. Ideally, the implant should not cause any undesired reaction from neighboring or distant tissues. However, the interaction between the implant and the tissue surrounding the implant can lead to complications. [ 1 ] The process of implantation of medical devices is subject to the same complications that other invasive medical procedures can have during or after surgery. Common complications include infection, inflammation, and pain.
Other complications that can occur include risk of rejection from implant-induced coagulation and allergic foreign body response. Depending on the type of implant, the complications may vary. [ 1 ] When the site of an implant becomes infected during or after surgery, the surrounding tissue becomes infected by microorganisms. Three main categories of infection can occur after an operation. Superficial immediate infections are caused by organisms that commonly grow near or on skin. The infection usually occurs at the surgical opening. Deep immediate infection, the second type, occurs immediately after surgery at the site of the implant. Skin-dwelling and airborne bacteria cause deep immediate infection. These bacteria enter the body by attaching to the implant's surface prior to implantation. Though not common, deep immediate infections can also occur when dormant bacteria from previous infections of the tissue at the implantation site are activated by being disturbed during the surgery. The last type, late infection, occurs months to years after the implantation of the implant. Late infections are caused by dormant blood-borne bacteria attached to the implant prior to implantation. The blood-borne bacteria colonize the implant and eventually are released from it. Depending on the type of material used to make the implant, it may be infused with antibiotics to lower the risk of infections during surgery. However, only certain types of materials can be infused with antibiotics; the use of antibiotic-infused implants also runs the risk of rejection by the patient, since the patient may develop a sensitivity to the antibiotic, and the antibiotic may not work on the bacteria. [ 35 ] Inflammation, a common occurrence after any surgical procedure, is the body's response to tissue damage as a result of trauma, infection, intrusion of foreign materials, or local cell death, or as a part of an immune response. Inflammation starts with the rapid dilation of local capillaries to supply the local tissue with blood. The inflow of blood causes the tissue to become swollen and may cause cell death. The excess fluid, or edema, can activate pain receptors in the tissue. The site of the inflammation becomes warm from local disturbances of fluid flow and the increased cellular activity to repair the tissue or remove debris from the site. [ 35 ] Implant-induced coagulation is similar to the coagulation process carried out within the body to prevent blood loss from damaged blood vessels. However, the coagulation process is triggered by proteins that become attached to the implant surface and lose their shapes. When this occurs, the protein changes conformation and different activation sites become exposed, which may trigger an immune system response in which the body attempts to attack the implant to remove the foreign material. The trigger of the immune system response can be accompanied by inflammation. The immune system response may lead to chronic inflammation where the implant is rejected and has to be removed from the body. The immune system may encapsulate the implant as an attempt to remove the foreign material from the site of the tissue by encapsulating the implant in fibrinogen and platelets. The encapsulation of the implant can lead to further complications, since the thick layers of fibrous encapsulation may prevent the implant from performing the desired functions. Bacteria may attack the fibrous encapsulation and become embedded into the fibers.
Since the layers of fibers are thick, antibiotics may not be able to reach the bacteria, and the bacteria may grow and infect the surrounding tissue. In order to remove the bacteria, the implant would have to be removed. Lastly, the immune system may accept the presence of the implant and repair and remodel the surrounding tissue. Similar responses occur when the body initiates an allergic foreign body response; in that case, too, the implant would have to be removed. [ 36 ] The many examples of implant failure include rupture of silicone breast implants, hip replacement joints, and artificial heart valves, such as the Bjork-Shiley valve, all of which have caused FDA intervention. The consequences of implant failure depend on the nature of the implant and its position in the body. Thus, heart valve failure is likely to threaten the life of the individual, while breast implant or hip joint failure is less likely to be life-threatening. [ 1 ] [ 36 ] [ 37 ] Devices implanted directly in the grey matter of the brain produce the highest quality signals, but are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign object in the brain. [ 38 ] In 2018, the Implant Files, an investigation by the ICIJ, revealed that unsafe and inadequately tested medical devices had been implanted in patients' bodies. In the United Kingdom, Prof Derek Alderson, president of the Royal College of Surgeons, concluded: "All implantable devices should be registered and tracked to monitor efficacy and patient safety in the long-term." [ 39 ]
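As a closing numerical illustration of the porosity and stress-shielding discussion above, the sketch below compares the stiffness of a porous titanium implant with that of cortical bone, using the moduli quoted earlier. The quadratic (Gibson-Ashby-style) scaling of modulus with solid fraction is an assumed simple model for a stochastic porous material, not a property of any particular implant; real lattices need the morphology-dependent models cited above.

# Effective elastic modulus of a porous implant versus cortical bone (~18 GPa).
E_BONE = 18.0   # GPa, cortical bone (from the text)
E_TI = 110.0    # GPa, solid titanium (from the text)

def effective_modulus(e_solid, porosity):
    """Assumed Gibson-Ashby-type estimate: E = E_s * (1 - porosity)**2."""
    return e_solid * (1.0 - porosity) ** 2

for p in (0.0, 0.2, 0.4, 0.6):
    e = effective_modulus(E_TI, p)
    print(f"porosity {p:.0%}: E ~ {e:6.1f} GPa (cortical bone: {E_BONE} GPa)")

Under this assumed scaling, a porosity around 60 percent brings the titanium implant's stiffness close to that of cortical bone, consistent with the text's point that substantial porosity mitigates stress shielding.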
https://en.wikipedia.org/wiki/Implant_(medicine)
Implementation is the realization of an application, or the execution of a plan, idea, model, design, specification, standard, algorithm, policy, or the administration or management of a process or objective. In the information technology industry, implementation refers to the post-sales process of guiding a client from purchase to use of the software or hardware that was purchased. This includes requirements analysis, scope analysis, customizations, systems integrations, user policies, user training and delivery. These steps are often overseen by a project manager using project management methodologies. Software implementations involve several professional roles that are relatively new to the knowledge-based economy, such as business analysts, software implementation specialists, solutions architects, and project managers. To implement a system successfully, many inter-related tasks need to be carried out in an appropriate sequence. Utilising a well-proven implementation methodology and enlisting professional advice can help, but often it is the number of tasks, poor planning and inadequate resourcing that cause problems with an implementation project, rather than any of the tasks being particularly difficult. Similarly, with cultural issues it is often the lack of adequate consultation and two-way communication that inhibits achievement of the desired results. Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions. [ 1 ] According to this definition, implementation processes are purposeful and are described in sufficient detail such that independent observers can detect the presence and strength of the "specific set of activities" related to implementation. In addition, the activity or program being implemented is described in sufficient detail so that independent observers can detect its presence and strength. In computer science, implementation results in software, while in the social and health sciences, implementation science studies how software can be put into practice or routine use. [ 2 ] System implementation generally benefits from high levels of user involvement and management support. User participation in the design and operation of information systems has several positive results. First, if users are heavily involved in systems design, they have more opportunities to mold the system according to their priorities and business requirements, and more opportunities to control the outcome. Second, they are more likely to react positively to the change process. Incorporating user knowledge and expertise leads to better solutions. The relationship between users and information systems specialists has traditionally been a problem area for information systems implementation efforts. Users and information systems specialists tend to have different backgrounds, interests, and priorities; this is referred to as the user-designer communications gap. These differences lead to divergent organizational loyalties, approaches to problem solving, and vocabularies. [ 3 ] Social scientific research on implementation also takes a step away from the project oriented at implementing a plan, and turns the project into an object of study.
Lucy Suchman's work has been key in that respect, showing how the engineering model of plans and their implementation cannot account for the situated action and cognition involved in real-world practices of users relating to plans: [ 4 ] that work shows that a plan cannot be specific enough to detail everything that successful implementation requires. Instead, implementation draws upon implicit and tacit resources and characteristics of users and of the plan's components.
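Returning to the earlier point that successful implementation depends on carrying out many inter-related tasks in an appropriate sequence, the ordering problem itself is easy to make concrete. The sketch below builds a toy dependency graph from the delivery steps named earlier in this article (the particular dependencies are invented for illustration) and computes one valid ordering with Python's standard-library topological sorter.

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical implementation tasks mapped to their prerequisites.
tasks = {
    "requirements analysis": [],
    "scope analysis": ["requirements analysis"],
    "customizations": ["scope analysis"],
    "systems integrations": ["scope analysis"],
    "user policies": ["requirements analysis"],
    "user training": ["user policies", "customizations"],
    "delivery": ["user training", "systems integrations"],
}

# static_order() yields the tasks so that every prerequisite comes first.
print(list(TopologicalSorter(tasks).static_order()))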
https://en.wikipedia.org/wiki/Implementation
The Implementation Rules are regulations of the People's Republic of China which set the framework for the applicable product standards. For each product group there is a specific Implementation Rule, which is set by the Chinese authorities. The Implementation Rules comprise 12 or 13 chapters, which determine the scope of the product certification. In 2014 the Implementation Rules were updated, and some changes came into effect from 2015. For example, some product groups which were previously grouped under one Implementation Rule are now divided among several different Implementation Rules. Furthermore, new products have been added which are now subject to mandatory certification. Additional factory levels were introduced, so that companies that carry out a product certification will in future be assigned a level (A-D). Companies that receive particularly positive results on their product certification receive level A. [ 2 ]
https://en.wikipedia.org/wiki/Implementation_Rule
This article examines the implementation of mathematical concepts in set theory. The implementation of a number of basic mathematical concepts is carried out in parallel in ZFC (the dominant set theory) and in NFU, the version of Quine's New Foundations shown to be consistent by R. B. Jensen in 1969 (here understood to include at least axioms of Infinity and Choice). What is said here applies also to two families of set theories: on the one hand, a range of theories including Zermelo set theory near the lower end of the scale and going up to ZFC extended with large cardinal hypotheses such as "there is a measurable cardinal"; and on the other hand a hierarchy of extensions of NFU which is surveyed in the New Foundations article. These correspond to different general views of what the set-theoretical universe is like, and it is the approaches to implementation of mathematical concepts under these two general views that are being compared and contrasted. It is not the primary aim of this article to say anything about the relative merits of these theories as foundations for mathematics. The reason for the use of two different set theories is to illustrate that multiple approaches to the implementation of mathematics are feasible. Precisely because of this approach, this article is not a source of "official" definitions for any mathematical concept. The following sections carry out certain constructions in the two theories ZFC and NFU and compare the resulting implementations of certain mathematical structures (such as the natural numbers). Mathematical theories prove theorems (and nothing else). So saying that a theory allows the construction of a certain object means that it is a theorem of that theory that that object exists. This is a statement about a definition of the form "the x such that ϕ exists", where ϕ is a formula of our language: the theory proves the existence of "the x such that ϕ" just in case it is a theorem that "there is one and only one x such that ϕ". (See Bertrand Russell's theory of descriptions.) Loosely, the theory "defines" or "constructs" this object in this case. If the statement is not a theorem, the theory cannot show that the object exists; if the statement is provably false in the theory, it proves that the object cannot exist; loosely, the object cannot be constructed. ZFC and NFU share the language of set theory, so the same formal definitions "the x such that ϕ" can be contemplated in the two theories. A specific form of definition in the language of set theory is set-builder notation: { x ∣ ϕ } means "the set A such that for all x, x ∈ A ↔ ϕ" (A cannot be free in ϕ). This notation admits certain conventional extensions: { x ∈ B ∣ ϕ } is synonymous with { x ∣ x ∈ B ∧ ϕ }; { f(x₁, …, xₙ) ∣ ϕ } is defined as { z ∣ ∃x₁, …, xₙ (z = f(x₁, …, xₙ) ∧ ϕ) }, where f(x₁, …, xₙ) is an expression already defined.
Expressions definable in set-builder notation make sense in both ZFC and NFU: it may be that both theories prove that a given definition succeeds, or that neither does (the expression { x ∣ x ∉ x } fails to refer to anything in any set theory with classical logic; in class theories like NBG this notation does refer to a class, but it is defined differently), or that one does and the other doesn't. Further, an object defined in the same way in ZFC and NFU may turn out to have different properties in the two theories (or there may be a difference in what can be proved where there is no provable difference between their properties). Further, set theory imports concepts from other branches of mathematics (in intention, all branches of mathematics). In some cases, there are different ways to import the concepts into ZFC and NFU. For example, the usual definition of the first infinite ordinal ω in ZFC is not suitable for NFU because the object (defined in purely set theoretical language as the set of all finite von Neumann ordinals) cannot be shown to exist in NFU. The usual definition of ω in NFU is (in purely set theoretical language) the set of all infinite well-orderings all of whose proper initial segments are finite, an object which can be shown not to exist in ZFC. In the case of such imported objects, there may be different definitions, one for use in ZFC and related theories, and one for use in NFU and related theories. For such "implementations" of imported mathematical concepts to make sense, it is necessary to be able to show that the two parallel interpretations have the expected properties: for example, the implementations of the natural numbers in ZFC and NFU are different, but both are implementations of the same mathematical structure, because both include definitions for all the primitives of Peano arithmetic and satisfy (the translations of) the Peano axioms. It is then possible to compare what happens in the two theories when only set theoretical language is in use, as long as the definitions appropriate to ZFC are understood to be used in the ZFC context and the definitions appropriate to NFU are understood to be used in the NFU context. Whatever is proven to exist in a theory clearly provably exists in any extension of that theory; moreover, analysis of the proof that an object exists in a given theory may show that it exists in weaker versions of that theory (one may consider Zermelo set theory instead of ZFC for much of what is done in this article, for example). The constructions that follow appear first because they are the simplest constructions in set theory, not because they are the first constructions that come to mind in mathematics (though the notion of finite set is certainly fundamental).
Even though NFU also allows urelements (objects which have no members but are not sets), the empty set is the unique set with no members: ∀x (x ∉ ∅). For each object x, there is a set {x} with x as its only element: {x} = { y ∣ y = x }. For objects x and y, there is a set {x, y} containing x and y as its only elements: {x, y} = { z ∣ z = x ∨ z = y }. The union of two sets is defined in the usual way: x ∪ y = { z ∣ z ∈ x ∨ z ∈ y }. Iterating these operations gives a recursive definition of unordered n-tuples for any concrete n (finite sets given as lists of their elements): {x₁, …, xₙ} = {x₁} ∪ {x₂, …, xₙ}. In NFU, all the set definitions given work by stratified comprehension; in ZFC, the existence of the unordered pair is given by the Axiom of Pairing, the existence of the empty set follows by Separation from the existence of any set, and the binary union of two sets exists by the axioms of Pairing and Union (x ∪ y = ⋃{x, y}). First, consider the ordered pair. The reason that this comes first is technical: ordered pairs are needed to implement relations and functions, which are needed to implement other concepts which may seem to be prior. The first definition of the ordered pair was the definition (x, y) =def {{{x}, ∅}, {{y}}} proposed by Norbert Wiener in 1914 in the context of the type theory of Principia Mathematica. Wiener observed that this allowed the elimination of types of n-ary relations for n > 1 from the system of that work. It is more usual now to use the definition (x, y) =def {{x}, {x, y}}, due to Kuratowski. Either of these definitions works in either ZFC or NFU. In NFU, these two definitions have a technical disadvantage: the Kuratowski ordered pair is two types higher than its projections, while the Wiener ordered pair is three types higher. It is common to postulate the existence of a type-level ordered pair (a pair (x, y) which is of the same type as its projections) in NFU. It is convenient to use the Kuratowski pair in both systems until the use of type-level pairs can be formally justified. The internal details of these definitions have nothing to do with their actual mathematical function. For any notion (x, y) of ordered pair, the thing that matters is that it satisfies the defining condition (x, y) = (z, w) ↔ x = z ∧ y = w, and that it be reasonably easy to collect ordered pairs into sets. Relations are sets whose members are all ordered pairs. Where possible, a relation R (understood as a binary predicate) is implemented as { (x, y) ∣ x R y } (which may be written as { z ∣ π₁(z) R π₂(z) }). When R is a relation, the notation x R y means (x, y) ∈ R. In ZFC, some relations (such as the general equality relation or subset relation on sets) are 'too large' to be sets (but may be harmlessly reified as proper classes).
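The point above, that only the defining condition of the ordered pair matters, can be illustrated executably. The following Python sketch models the Kuratowski pair with frozensets and brute-force checks the defining condition (x, y) = (z, w) ↔ x = z ∧ y = w on a small domain; this is an informal illustration outside either formal theory.

from itertools import product

def kpair(x, y):
    """Kuratowski ordered pair (x, y) = {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

# Check the defining condition over a small test domain.
domain = range(4)
assert all(
    (kpair(x, y) == kpair(z, w)) == (x == z and y == w)
    for x, y, z, w in product(domain, repeat=4)
)
print("defining condition holds on the test domain")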
In NFU, some relations (such as the membership relation) are not sets because their definitions are not stratified: in { (x, y) ∣ x ∈ y }, x and y would need to have the same type (because they appear as projections of the same pair), but also successive types (because x is considered as an element of y). Let R and S be given binary relations. Then the following concepts are useful: The converse of R is the relation { (y, x) : x R y }. The domain of R is the set { x : ∃y (x R y) }. The range of R is the domain of the converse of R, that is, the set { y : ∃x (x R y) }. The field of R is the union of the domain and range of R. The preimage of a member x of the field of R is the set { y : y R x } (used in the definition of 'well-founded' below). The downward closure of a member x of the field of R is the smallest set D containing x and containing, for each y ∈ D, every z such that z R y (i.e., including the preimage of each of its elements with respect to R as a subset). The relative product R;S of R and S is the relation { (x, z) : ∃y (x R y ∧ y S z) }. Notice that with our formal definition of a binary relation, the range and codomain of a relation are not distinguished. This could be done by representing a relation R with codomain B as (R, B), but our development will not require this. In ZFC, any relation whose domain is a subset of a set A and whose range is a subset of a set B will be a set, since the Cartesian product A × B = { (a, b) : a ∈ A ∧ b ∈ B } is a set (being, with the Kuratowski pair, a subclass of P(P(A ∪ B))), and Separation provides for the existence of { (x, y) ∈ A × B : x R y }. In NFU, some relations with global scope (such as equality and subset) can be implemented as sets. In NFU, bear in mind that x and y are three types lower than R in x R y (one type lower if a type-level ordered pair is used). A binary relation R may have familiar properties such as reflexivity, symmetry and transitivity, and relations having certain combinations of such properties have standard names (for example, equivalence relation, partial order, well-ordering). A functional relation is a binary predicate F such that ∀x, y, z (x F y ∧ x F z → y = z). Such a relation (predicate) is implemented as a relation (set) exactly as described in the previous section.
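On finite examples, the operations on relations just listed translate directly into set manipulations. Here is a small Python sketch with relations represented as sets of pairs; the function names are invented for the illustration.

def converse(R):
    """The converse {(y, x) : x R y}."""
    return {(y, x) for (x, y) in R}

def domain(R):
    return {x for (x, _) in R}

def rng(R):
    """The range: the domain of the converse."""
    return {y for (_, y) in R}

def relative_product(R, S):
    """R;S = {(x, z) : there is y with x R y and y S z}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

R = {(1, 2), (2, 3)}
print(relative_product(R, R))  # {(1, 3)}
print(domain(R) | rng(R))      # the field of R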
So the predicate F is implemented by the set { (x, y) : x F y }. A relation F is a function if and only if ∀x, y, z ((x, y) ∈ F ∧ (x, z) ∈ F → y = z). It is therefore possible to define the value function F(x) as the unique object y such that x F y (i.e., the unique y such that the relation F holds between x and y), or as the unique object y such that (x, y) ∈ F. The presence in both theories of functional predicates which are not sets makes it useful to allow the notation F(x) both for sets F and for important functional predicates. As long as one does not quantify over functions in the latter sense, all such uses are in principle eliminable. Outside of formal set theory, we usually specify a function in terms of its domain and codomain, as in the phrase "Let f : A → B be a function". The domain of a function is just its domain as a relation, but we have not yet defined the codomain of a function. To do this we introduce the terminology that a function is from A to B if its domain equals A and its range is contained in B. In this way, every function is a function from its domain to its range, and a function f from A to B is also a function from A to C for any set C containing B. Indeed, no matter which set we consider to be the codomain of a function, the function does not change as a set, since by definition it is just a set of ordered pairs. That is, a function does not determine its codomain by our definition. If one finds this unappealing then one can instead define a function as the ordered pair (f, B), where f is a functional relation and B is its codomain, but we do not take this approach in this article (more elegantly, if one first defines ordered triples, for example as (x, y, z) = (x, (y, z)), then one could define a function as the ordered triple (f, A, B) so as to also include the domain). Note that the same issue exists for relations: outside of formal set theory we usually say "Let R ⊆ A × B be a binary relation", but formally R is a set of ordered pairs such that dom R ⊆ A and ran R ⊆ B. In NFU, x has the same type as F(x), and F is three types higher than F(x) (one type higher, if a type-level ordered pair is used).
To solve this problem, one could define F[A] as { y : ∃x (x ∈ A ∧ y = F(x)) } for any set A, but this is more conveniently written as { F(x) : x ∈ A }. Then, if A is a set and F is any functional relation, the Axiom of Replacement assures that F[A] is a set in ZFC. In NFU, F[A] and A now have the same type, and F is two types higher than F[A] (the same type, if a type-level ordered pair is used). The function I such that I(x) = x is not a set in ZFC because it is "too large". I is, however, a set in NFU. The function (predicate) S such that S(x) = {x} is neither a function nor a set in either theory; in ZFC, this is true because such a set would be too large, and, in NFU, this is true because its definition would not be stratified. Moreover, S can be proved not to exist in NFU (see the resolution of Cantor's paradox in New Foundations). Let f and g be arbitrary functions. The composition of f and g, g ∘ f, is defined as the relative product f|g, but only if this results in a function; g ∘ f is a function, with (g ∘ f)(x) = g(f(x)), if the range of f is a subset of the domain of g. The inverse of f, f⁻¹, is defined as the converse of f if this is a function. Given any set A, the identity function i_A is the set { (x, x) ∣ x ∈ A }, and this is a set in both ZFC and NFU, for different reasons. A function f from A to B is an injection if distinct elements of A have distinct images under f, a surjection onto B if its range is all of B, and a bijection if it is both. Defining functions as ordered pairs (f, B) or ordered triples (f, A, B) has the advantages that we do not have to introduce the terminology of being a function "from A to B", and that we can speak of "being surjective" outright as opposed to only being able to speak of "being surjective onto B". In both ZFC and NFU, two sets A and B are the same size (or are equinumerous) if and only if there is a bijection f from A to B. This can be written as |A| = |B|, but note that (for the moment) this expresses a relation between A and B rather than a relation between yet-undefined objects |A| and |B|. Denote this relation by A ∼ B in contexts such as the actual definition of the cardinals, where even the appearance of presupposing abstract cardinals should be avoided. Similarly, define |A| ≤ |B| as holding if and only if there is an injection from A to B.
It is straightforward to show that the relation of equinumerousness is an equivalence relation: equinumerousness of A with A is witnessed by i_A; if f witnesses |A| = |B|, then f⁻¹ witnesses |B| = |A|; and if f witnesses |A| = |B| and g witnesses |B| = |C|, then g ∘ f witnesses |A| = |C|. It can be shown that |A| ≤ |B| is a linear order on abstract cardinals, but not on sets. Reflexivity is obvious and transitivity is proven just as for equinumerousness. The Schröder–Bernstein theorem, provable in ZFC and NFU in an entirely standard way, establishes that |A| ≤ |B| ∧ |B| ≤ |A| → |A| = |B| (this establishes antisymmetry on cardinals), and |A| ≤ |B| ∨ |B| ≤ |A| follows in a standard way in either theory from the axiom of choice. Natural numbers can be considered either as finite ordinals or finite cardinals. Here consider them as finite cardinal numbers. This is the first place where a major difference between the implementations in ZFC and NFU becomes evident. The Axiom of Infinity of ZFC tells us that there is a set A which contains ∅ and contains y ∪ {y} for each y ∈ A. This set A is not uniquely determined (it can be made larger while preserving this closure property): the set N of natural numbers is ⋂ { A ∣ ∅ ∈ A ∧ ∀y (y ∈ A → y ∪ {y} ∈ A) }, the intersection of all sets which contain the empty set and are closed under the "successor" operation y ↦ y ∪ {y}. In ZFC, a set A is finite if and only if there is n ∈ N such that |n| = |A|; further, define |A| as this n for finite A. (It can be proved that no two distinct natural numbers are the same size.) The usual operations of arithmetic can be defined recursively and in a style very similar to that in which the set of natural numbers itself is defined. For example, + (the addition operation on natural numbers) can be defined as the smallest set which contains ((x, ∅), x) for each natural number x and contains ((x, y ∪ {y}), z ∪ {z}) whenever it contains ((x, y), z). In NFU, it is not obvious that this approach can be used, since the successor operation y ∪ {y} is unstratified and so the set N as defined above cannot be shown to exist in NFU (it is consistent for the set of finite von Neumann ordinals to exist in NFU, but this strengthens the theory, as the existence of this set implies the Axiom of Counting (for which see below or the New Foundations article)). The standard definition of the natural numbers, which is actually the oldest set-theoretic definition of natural numbers, is as equivalence classes of finite sets under equinumerousness. Essentially the same definition is appropriate to NFU (this is not the usual definition, but the results are the same): define Fin, the set of finite sets, as ⋂ { F ∣ ∅ ∈ F ∧ ∀A ∀x (A ∈ F → A ∪ {x} ∈ F) }. For any set A ∈ Fin, define |A| as { B ∣ A ∼ B }. Define N as the set { |A| ∣ A ∈ Fin }.
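The definition of |A| as the equivalence class of A under equinumerousness can be imitated concretely over a small finite universe, anticipating the set-theoretic definition of addition given in the next paragraph. The Python sketch below is only an analogy (in NFU the universe is certainly not a four-element set), but it shows the Frege-style cardinals and their addition behaving as expected.

from itertools import combinations

UNIVERSE = frozenset(range(4))

def card(n):
    """The 'cardinal' n: the set of all subsets of UNIVERSE of size n."""
    return frozenset(frozenset(c) for c in combinations(UNIVERSE, n))

def add(m, n):
    """m + n as the set of disjoint unions A ∪ B with A in m and B in n."""
    return frozenset(A | B for A in m for B in n if not (A & B))

assert add(card(1), card(2)) == card(3)
print("1 + 2 = 3 under the equivalence-class definition (on this tiny universe)")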
The Axiom of Infinity of NFU can be expressed as V ∉ Fin: this is enough to establish that each natural number has a nonempty successor (the successor of |A| being |A ∪ {x}| for any x ∉ A), which is the hard part of showing that the Peano axioms of arithmetic are satisfied. The operations of arithmetic can be defined in a style similar to the style given above (using the definition of successor just given). They can also be defined in a natural set theoretical way: if A and B are disjoint finite sets, define |A| + |B| as |A ∪ B|. More formally, define m + n for m and n in N as { A ∪ B ∣ A ∈ m ∧ B ∈ n ∧ A ∩ B = ∅ }. (This style of definition is feasible for the ZFC numerals as well, but is more circuitous: the form of the NFU definition facilitates set manipulations while the form of the ZFC definition facilitates recursive definitions, but either theory supports either style of definition.) The two implementations are quite different. In ZFC, choose a representative of each finite cardinality (the equivalence classes themselves are too large to be sets); in NFU the equivalence classes themselves are sets, and are thus an obvious choice for objects to stand in for the cardinalities. However, the arithmetic of the two theories is identical: the same abstraction is implemented by these two superficially different approaches. A general technique for implementing abstractions in set theory is the use of equivalence classes. If an equivalence relation R tells us that elements of its field A are alike in some particular respect, then for any set x, regard the set [x]_R = { y ∈ A ∣ x R y } as representing an abstraction from the set x respecting just those features (identify elements of A up to R). For any set A, a set P is a partition of A if all elements of P are nonempty, any two distinct elements of P are disjoint, and A = ⋃P. For every equivalence relation R with field A, { [x]_R ∣ x ∈ A } is a partition of A. Moreover, each partition P of A determines an equivalence relation { (x, y) ∣ ∃A ∈ P (x ∈ A ∧ y ∈ A) }. This technique has limitations in both ZFC and NFU. In ZFC, since the universe is not a set, it seems possible to abstract features only from elements of small domains. This can be circumvented using a trick due to Dana Scott: if R is an equivalence relation on the universe, define [x]_R as the set of all y such that y R x and the rank of y is less than or equal to the rank of any z R x. This works because the ranks are sets. Of course, there still may be a proper class of [x]_R's. In NFU, the main difficulty is that [x]_R is one type higher than x, so for example the "map" x ↦ [x]_R is not in general a (set) function (though {x} ↦ [x]_R is a set).
This can be circumvented by the use of the Axiom of Choice to select a representative from each equivalence class to replace [x]_R, which will be of the same type as x, or by choosing a canonical representative if there is a way to do this without invoking Choice (the use of representatives is hardly unknown in ZFC, either). In NFU, the use of equivalence class constructions to abstract properties of general sets is more common, as for example in the definitions of cardinal and ordinal number below. Two well-orderings W₁ and W₂ are similar, written W₁ ∼ W₂, just in case there is a bijection f from the field of W₁ to the field of W₂ such that x W₁ y ↔ f(x) W₂ f(y) for all x and y. Similarity is shown to be an equivalence relation in much the same way that equinumerousness was shown to be an equivalence relation above. In NFU, the order type of a well-ordering W is the set of all well-orderings which are similar to W. The set of ordinal numbers is the set of all order types of well-orderings. This does not work in ZFC, because the equivalence classes are too large. It would be formally possible to use Scott's trick to define the ordinals in essentially the same way, but a device of von Neumann is more commonly used. For any partial order ≤, the corresponding strict partial order < is defined as { (x, y) ∣ x ≤ y ∧ x ≠ y }. Strict linear orders and strict well-orderings are defined similarly. A set A is said to be transitive if ⋃A ⊆ A: each element of an element of A is also an element of A. A (von Neumann) ordinal is a transitive set on which membership is a strict well-ordering. In ZFC, the order type of a well-ordering W is then defined as the unique von Neumann ordinal which is equinumerous with the field of W and membership on which is isomorphic to the strict well-ordering associated with W. (The equinumerousness condition distinguishes between well-orderings with fields of size 0 and 1, whose associated strict well-orderings are indistinguishable.) In ZFC there cannot be a set of all ordinals. In fact, the von Neumann ordinals are an inconsistent totality in any set theory: it can be shown with modest set theoretical assumptions that every element of a von Neumann ordinal is a von Neumann ordinal and the von Neumann ordinals are strictly well-ordered by membership. It follows that the class of von Neumann ordinals would be a von Neumann ordinal if it were a set: but it would then be an element of itself, which contradicts the fact that membership is a strict well-ordering of the von Neumann ordinals. The existence of order types for all well-orderings is not a theorem of Zermelo set theory: it requires the Axiom of Replacement. Even Scott's trick cannot be used in Zermelo set theory without an additional assumption (such as the assumption that every set belongs to a rank which is a set, which does not essentially strengthen Zermelo set theory but is not a theorem of that theory). In NFU, the collection of all ordinals is a set by stratified comprehension. The Burali-Forti paradox is evaded in an unexpected way.
There is a natural order on the ordinals, defined by α ≤ β if and only if some (and so any) W₁ ∈ α is similar to an initial segment of some (and so any) W₂ ∈ β. Further, it can be shown that this natural order is a well-ordering of the ordinals and so must have an order type Ω. It would seem that the order type of the ordinals less than Ω with the natural order would be Ω, contradicting the fact that Ω is the order type of the entire natural order on the ordinals (and so not of any of its proper initial segments). But this relies on one's intuition (correct in ZFC) that the order type of the natural order on the ordinals less than α is α for any ordinal α. This assertion is unstratified, because the type of the second α is four higher than the type of the first (two higher if a type-level pair is used). The assertion which is true and provable in NFU is that the order type of the natural order on the ordinals less than α is T⁴(α) for any ordinal α, where T(α) is the order type of W^ι = { ({x}, {y}) ∣ x W y } for any W ∈ α (it is easy to show that this does not depend on the choice of W; note that T raises type by one). Thus the order type of the ordinals less than Ω with the natural order is T⁴(Ω), and T⁴(Ω) < Ω. All uses of T⁴ here can be replaced with T² if a type-level pair is used. This shows that the T operation is nontrivial, which has a number of consequences. It follows immediately that the singleton map x ↦ {x} is not a set, as otherwise restrictions of this map would establish the similarity of W and W^ι for any well-ordering W. T is (externally) bijective and order-preserving. Because of this, the fact T⁴(Ω) < Ω establishes that Ω > T(Ω) > T²(Ω) > … is a "descending sequence" in the ordinals which cannot be a set. Ordinals fixed by T are called cantorian ordinals, and ordinals which dominate only cantorian ordinals (which are easily shown to be cantorian themselves) are said to be strongly cantorian. There can be no set of cantorian ordinals or set of strongly cantorian ordinals. It is possible to reason about von Neumann ordinals in NFU. Recall that a von Neumann ordinal is a transitive set A such that the restriction of membership to A is a strict well-ordering. This is quite a strong condition in the NFU context, since the membership relation involves a difference of type. A von Neumann ordinal A is not an ordinal in the sense of NFU, but ∈⌈A belongs to an ordinal α which may be termed the order type of (membership on) A.
It is easy to show that the order type of a von Neumann ordinal A is cantorian: for any well-ordering W of order type α, the induced well-ordering of initial segments of W by inclusion has order type T(α) (it is one type higher, thus the application of T); but the order types of the well-ordering of a von Neumann ordinal A by membership and the well-ordering of its initial segments by inclusion are clearly the same, because the two well-orderings are actually the same relation, so the order type of A is fixed under T. Moreover, the same argument applies to any smaller ordinal (which will be the order type of an initial segment of A, also a von Neumann ordinal), so the order type of any von Neumann ordinal is strongly cantorian. The only von Neumann ordinals which can be shown to exist in NFU without additional assumptions are the concrete finite ones. However, the application of a permutation method can convert any model of NFU to a model in which every strongly cantorian ordinal is the order type of a von Neumann ordinal. This suggests that the concept "strongly cantorian ordinal of NFU" might be a better analogue to "ordinal of ZFC" than is the apparent analogue "ordinal of NFU". Cardinal numbers are defined in NFU in a way which generalizes the definition of natural number: for any set A, |A| =def { B ∣ B ∼ A }. In ZFC, these equivalence classes are too large as usual. Scott's trick could be used (and indeed is used in ZF), but |A| is usually defined as the smallest order type (here a von Neumann ordinal) of a well-ordering of A (that every set can be well-ordered follows from the Axiom of Choice in the usual way in both theories). The natural order on cardinal numbers is seen to be a well-ordering: that it is reflexive, antisymmetric (on abstract cardinals, which are now available) and transitive has been shown above. That it is a linear order follows from the Axiom of Choice: well-order two sets, and an initial segment of one well-ordering will be isomorphic to the other, so one set will have cardinality smaller than that of the other. That it is a well-ordering follows from the Axiom of Choice in a similar way. With each infinite cardinal, many order types are associated, for the usual reasons (in either set theory). Cantor's theorem shows (in both theories) that there are nontrivial distinctions between infinite cardinal numbers. In ZFC, one proves |A| < |P(A)|. In NFU, the usual form of Cantor's theorem is false (consider the case A = V), but Cantor's theorem is an ill-typed statement. The correct form of the theorem in NFU is |P₁(A)| < |P(A)|, where P₁(A) is the set of one-element subsets of A. |P₁(V)| < |P(V)| shows that there are "fewer" singletons than sets (the obvious bijection x ↦ {x} from P₁(V) to V has already been seen not to be a set). It is actually provable in NFU + Choice that |P₁(V)| < |P(V)| ≪ |V| (where ≪ signals the existence of many intervening cardinals; there are many, many urelements!).
Define a type-raising T operation on cardinals analogous to the T operation on ordinals: $T(|A|) = |P_1(A)|$; this is an external endomorphism of the cardinals, just as the T operation on ordinals is an external endomorphism of the ordinals. A set A is said to be cantorian just in case $|A| = |P_1(A)| = T(|A|)$; the cardinal $|A|$ is also said to be a cantorian cardinal. A set A is said to be strongly cantorian (and its cardinal to be strongly cantorian as well) just in case the restriction of the singleton map to A ($(x \mapsto \{x\}) \lceil A$) is a set. Well-orderings of strongly cantorian sets are always strongly cantorian ordinals; this is not always true of well-orderings of cantorian sets (though the shortest well-ordering of a cantorian set will be cantorian). A cantorian set is a set which satisfies the usual form of Cantor's theorem. The operations of cardinal arithmetic are defined in a set-theoretically motivated way in both theories. $|A| + |B| = \{C \cup D \mid C \sim A \wedge D \sim B \wedge C \cap D = \emptyset\}$. One would like to define $|A| \cdot |B|$ as $|A \times B|$, and one does this in ZFC, but there is an obstruction in NFU when using the Kuratowski pair: one defines $|A| \cdot |B|$ as $T^{-2}(|A \times B|)$ because of the type displacement of 2 between the pair and its projections, which implies a type displacement of two between a cartesian product and its factors. It is straightforward to prove that the product always exists (but this requires attention, because the inverse of T is not total). Defining the exponential operation on cardinals requires T in an essential way: if $B^A$ is defined as the collection of functions from A to B, it is three types higher than A or B, so it is reasonable to define $|B|^{|A|}$ as $T^{-3}(|B^A|)$ so that it is of the same type as A or B ($T^{-1}$ replaces $T^{-3}$ with type-level pairs). An effect of this is that the exponential operation is partial: for example, $2^{|V|}$ is undefined. In ZFC one defines $|B|^{|A|}$ as $|B^A|$ without difficulty. The exponential operation is total and behaves exactly as expected on cantorian cardinals, since T fixes such cardinals and it is easy to show that a function space between cantorian sets is cantorian (as are power sets, cartesian products, and other usual type constructors). This offers further encouragement to the view that the "standard" cardinalities in NFU are the cantorian (indeed, the strongly cantorian) cardinalities, just as the "standard" ordinals seem to be the strongly cantorian ordinals. Now the usual theorems of cardinal arithmetic with the axiom of choice can be proved, including $\kappa \cdot \kappa = \kappa$.
From the case $|V| \cdot |V| = |V|$ the existence of a type-level ordered pair can be derived: $|V| \cdot |V| = T^{-2}(|V \times V|)$ is equal to $|V|$ just in case $|V \times V| = T^2(|V|) = |P_1^2(V)|$, which would be witnessed by a one-to-one correspondence between Kuratowski pairs $(a,b)$ and double singletons $\{\{c\}\}$: redefine $(a,b)$ as the c such that $\{\{c\}\}$ is associated with the Kuratowski $(a,b)$; this is a type-level notion of ordered pair. So there are two different implementations of the natural numbers in NFU (though they are the same in ZFC): finite ordinals and finite cardinals. Each of these supports a T operation in NFU (basically the same operation). It is easy to prove that $T(n)$ is a natural number if n is a natural number in NFU + Infinity + Choice (and so $|N|$ and the first infinite ordinal $\omega$ are cantorian), but it is not possible to prove in this theory that $T(n) = n$. However, common sense indicates that this should be true, and so it can be adopted as an axiom, the Axiom of Counting: $T(n) = n$ for each natural number n. One natural consequence of this axiom (and indeed its original formulation) is $|\{1,\ldots,n\}| = n$ for each natural number n. All that can be proved in NFU without Counting is $|\{1,\ldots,n\}| = T^2(n)$. A consequence of Counting is that N is a strongly cantorian set (again, this is an equivalent assertion). The type of any variable restricted to a strongly cantorian set A can be raised or lowered as desired by replacing references to $a \in A$ with references to $\bigcup f(a)$ (type of a raised; this presupposes that it is known that a is a set; otherwise one must say "the element of $f(a)$" to get this effect) or $f^{-1}(\{a\})$ (type of a lowered), where $f(a) = \{a\}$ for all $a \in A$, so it is not necessary to assign types to such variables for purposes of stratification. Any subset of a strongly cantorian set is strongly cantorian. The power set of a strongly cantorian set is strongly cantorian. The cartesian product of two strongly cantorian sets is strongly cantorian. Introducing the Axiom of Counting means that types need not be assigned to variables restricted to N or to P(N), R (the set of reals) or indeed any set ever considered in classical mathematics outside of set theory. There are no analogous phenomena in ZFC. See the main New Foundations article for stronger axioms that can be adjoined to NFU to enforce "standard" behavior of familiar mathematical objects. Represent positive fractions as pairs of positive natural numbers (0 is excluded): $\frac{p}{q}$ is represented by the pair $(p,q)$. To make $\frac{p}{q} = \frac{r}{s} \leftrightarrow ps = qr$, introduce the relation $\sim$ defined by $(p,q) \sim (r,s) \leftrightarrow ps = qr$. It is provable that this is an equivalence relation: define positive rational numbers as equivalence classes of pairs of positive natural numbers under this relation.
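This last construction is concrete enough to write out directly (a sketch only; the equivalence-class machinery of NFU is not modeled, and the helper names are hypothetical): the defining relation on pairs, and a canonical representative for each class.

```python
from math import gcd

# (p, q) represents the positive fraction p/q; p and q are positive naturals.
def equivalent(pq, rs):
    """(p, q) ~ (r, s) iff p*s == q*r."""
    (p, q), (r, s) = pq, rs
    return p * s == q * r

def representative(pq):
    """A canonical member of the equivalence class of (p, q): lowest terms."""
    p, q = pq
    g = gcd(p, q)
    return (p // g, q // g)

assert equivalent((2, 4), (3, 6))                               # 2/4 = 3/6
assert representative((2, 4)) == representative((3, 6)) == (1, 2)

# Spot-check that ~ behaves like an equivalence relation (transitivity):
a, b, c = (2, 4), (3, 6), (5, 10)
assert equivalent(a, b) and equivalent(b, c) and equivalent(a, c)
```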
Arithmetic operations on positive rational numbers and the order relation on positive rationals are defined just as in elementary school and proved (with some effort) to have the expected properties. Represent magnitudes (positive reals) as nonempty proper initial segments of the positive rationals with no largest element. The operations of addition and multiplication on magnitudes are implemented by elementwise addition and multiplication of the positive rational elements of the magnitudes. Order is implemented as set inclusion. Represent real numbers as differences $m - n$ of magnitudes: formally speaking, a real number is an equivalence class of pairs $(m,n)$ of magnitudes under the equivalence relation $\sim$ defined by $(m,n) \sim (r,s) \leftrightarrow m + s = n + r$. The operations of addition and multiplication on real numbers are defined just as one would expect from the algebraic rules for adding and multiplying differences. The treatment of order is also as in elementary algebra. This is the briefest sketch of the constructions. Note that the constructions are exactly the same in ZFC and in NFU, except for the difference in the constructions of the natural numbers: since all variables are restricted to strongly cantorian sets, there is no need to worry about stratification restrictions. Without the Axiom of Counting, it might be necessary to introduce some applications of T in a full discussion of these constructions. In this class of constructions it appears that ZFC has an advantage over NFU: though the constructions are clearly feasible in NFU, they are more complicated than in ZFC for reasons having to do with stratification. Throughout this section assume a type-level ordered pair. Define $(x_1, x_2, \ldots, x_n)$ as $(x_1, (x_2, \ldots, x_n))$. The definition of the general n-tuple using the Kuratowski pair is trickier, as one needs to keep the types of all the projections the same, and the type displacement between the n-tuple and its projections increases as n increases. Here, the n-tuple has the same type as each of its projections. General cartesian products are defined similarly: $A_1 \times A_2 \times \ldots \times A_n = A_1 \times (A_2 \times \ldots \times A_n)$. The definitions are the same in ZFC but without any worries about stratification (the grouping given here is opposite to that more usually used, but this is easily corrected for). Now consider the infinite cartesian product $\Pi_{i \in I} A_i$. In ZFC, this is defined as the set of all functions f with domain I such that $f(i) \in A_i$ (where A is implicitly understood as a function taking each i to $A_i$). In NFU, this requires attention to type. Given a set I and a set-valued function A whose value at $\{i\}$ in $P_1(I)$ is written $A_i$, define $\Pi_{i \in I} A_i$ as the set of all functions f with domain I such that $f(i) \in A_i$: notice that $f(i) \in A_i = A(\{i\})$ is stratified because of our convention that A is a function with values at singletons of the indices.
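The right-nested n-tuples defined above are easy to realize concretely (a sketch; Python tuples stand in for the type-level pairs of NFU, so none of the stratification bookkeeping is visible here, and both helper names are hypothetical):

```python
from functools import reduce

def ntuple(*xs):
    """Build (x1, (x2, (..., xn))) from x1, ..., xn by right-nesting pairs."""
    return reduce(lambda acc, x: (x, acc), reversed(xs[:-1]), xs[-1])

def projections(t, n):
    """Recover x1, ..., xn from a right-nested n-tuple."""
    out = []
    for _ in range(n - 1):
        head, t = t
        out.append(head)
    out.append(t)
    return out

t = ntuple(1, 2, 3, 4)
assert t == (1, (2, (3, 4)))
assert projections(t, 4) == [1, 2, 3, 4]
```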
Note that the very largest families of sets (which cannot be indexed by sets of singletons) will not have cartesian products under this definition. Note further that the sets $A_i$ are at the same type as the index set I (since both are one type higher than their elements); the product, as a set of functions with domain I (so at the same type as I), is one type higher (assuming a type-level ordered pair). Now consider the product $\Pi_{i \in I} |A_i|$ of the cardinals of these sets. The cardinality $|\Pi_{i \in I} A_i|$ is one type higher than the cardinals $|A_i|$, so the correct definition of the infinite product of cardinals is $T^{-1}(|\Pi_{i \in I} A_i|)$ (because the inverse of T is not total, it is possible that this may not exist). Repeat this for disjoint unions of families of sets and sums of families of cardinals. Again, let A be a set-valued function with domain $P_1(I)$: write $A_i$ for $A(\{i\})$. The disjoint union $\Sigma_{i \in I} A_i$ is the set $\{(i,a) \mid a \in A_i\}$. This set is at the same type as the sets $A_i$. The correct definition of the sum $\Sigma_{i \in I} |A_i|$ is thus $|\Sigma_{i \in I} A_i|$, since there is no type displacement. It is possible to extend these definitions to handle index sets which are not sets of singletons, but this introduces an additional type level and is not needed for most purposes. In ZFC, define the disjoint union $\Sigma_{i \in I} A_i$ as $\{(i,a) \mid a \in A_i\}$, where $A_i$ abbreviates $A(i)$. Permutation methods can be used to show relative consistency with NFU of the assertion that for every strongly cantorian set A there is a set I of the same size whose elements are self-singletons: $i = \{i\}$ for each i in I. In ZFC, define the cumulative hierarchy as the ordinal-indexed sequence of sets satisfying the following conditions: $V_0 = \emptyset$; $V_{\alpha+1} = P(V_\alpha)$; $V_\lambda = \bigcup \{V_\beta \mid \beta < \lambda\}$ for limit ordinals $\lambda$. This is an example of a construction by transfinite recursion. The rank of a set A is said to be $\alpha$ if and only if $A \in V_{\alpha+1} - V_\alpha$. The existence of the ranks as sets depends on the axiom of replacement at each limit step (the hierarchy cannot be constructed in Zermelo set theory); by the axiom of foundation, every set belongs to some rank. The cardinal $|P(V_{\omega+\alpha})|$ is called $\beth_\alpha$. This construction cannot be carried out in NFU because the power set operation is not a set function in NFU ($P(A)$ is one type higher than A for purposes of stratification). The sequence of cardinals $\beth_\alpha$ can nonetheless be implemented in NFU.
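The finite stages of the ZFC-side cumulative hierarchy can be computed directly (an illustrative sketch in ordinary untyped Python; `next_rank` is a hypothetical helper, and of course only the finite stages $V_1, V_2, \ldots$ are reachable this way):

```python
from itertools import combinations

def next_rank(V):
    """V_{alpha+1} = P(V_alpha): the set of all subsets, as frozensets."""
    elems = list(V)
    return {frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

V = set()                  # V_0 is the empty set
for stage in range(1, 5):  # |V_1|, ..., |V_4| = 1, 2, 4, 16
    V = next_rank(V)
    print(f"|V_{stage}| = {len(V)}")
```

In NFU this recursion is blocked at the first step, since $P$ raises type and so is not a set function; the $\beth$ sequence is instead obtained by the closure construction described next.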
Recall that $2^{|A|}$ is defined as $T^{-1}(|\{0,1\}^A|)$, where $\{0,1\}$ is a convenient set of size 2, and $|\{0,1\}^A| = |P(A)|$. Let $\beth$ be the smallest set of cardinals which contains $|N|$ (the cardinality of the set of natural numbers), contains the cardinal $2^{|A|}$ whenever it contains $|A|$, and which is closed under suprema of sets of cardinals. A convention for ordinal indexing of any well-ordering W: define $W_\alpha$ as the element x of the field of W such that the order type of the restriction of W to $\{y \mid yWx\}$ is $\alpha$; then define $\beth_\alpha$ as the element with index $\alpha$ in the natural order on the elements of $\beth$. The cardinal $\aleph_\alpha$ is the element with index $\alpha$ in the natural order on all infinite cardinals (which is a well-ordering, see above). Note that $\aleph_0 = |N|$ follows immediately from this definition. In all these constructions, notice that the type of the index $\alpha$ is two higher (with type-level ordered pair) than the type of $W_\alpha$. Each set A of ZFC has a transitive closure $TC(A)$ (the intersection of all transitive sets which contain A). By the axiom of foundation, the restriction of the membership relation to the transitive closure of A is a well-founded relation. The relation $\in \lceil TC(A)$ is either empty or has A as its top element, so this relation is a set picture. It can be proved in ZFC that every set picture is isomorphic to some $\in \lceil TC(A)$. This suggests that (an initial segment of) the cumulative hierarchy can be studied by considering the isomorphism classes of set pictures. These isomorphism classes are sets and make up a set in NFU. There is a natural set relation analogous to membership on isomorphism classes of set pictures: if $x$ is a set picture, write $[x]$ for its isomorphism class and define $[x] E [y]$ as holding if $[x]$ is the isomorphism class of the restriction of y to the downward closure of one of the elements of the preimage under y of the top element of y. The relation E is a set relation, and it is straightforward to prove that it is well-founded and extensional. If the definition of E is confusing, it can be deduced from the observation that it is induced by precisely the relationship which holds between the set picture associated with A and the set picture associated with B when $A \in B$ in the usual set theory. There is a T operation on isomorphism classes of set pictures analogous to the T operation on ordinals: if x is a set picture, so is $x^\iota = \{(\{a\},\{b\}) \mid (a,b) \in x\}$. Define $T([x])$ as $[x^\iota]$. It is easy to see that $[x] E [y] \leftrightarrow T([x]) \, E \, T([y])$.
An axiom of extensionality for this simulated set theory follows from E's extensionality. From its well-foundedness follows an axiom of foundation. There remains the question of what comprehension axiom E may have. Consider any collection of set pictures $\{x^\iota \mid x \in S\}$ (a collection of set pictures whose fields are made up entirely of singletons). Since each $x^\iota$ is one type higher than x (using a type-level ordered pair), replacing each element $\{a\}$ of the field of each $x^\iota$ in the collection with $(x, \{a\})$ results in a collection of set pictures isomorphic to the original collection but with their fields disjoint. The union of these set pictures with a new top element yields a set picture whose isomorphism type will have as its preimages under E exactly the elements of the original collection. That is, for any collection of isomorphism types $[x^\iota] = T([x])$, there is an isomorphism type $[y]$ whose preimage under E is exactly this collection. In particular, there will be an isomorphism type [v] whose preimage under E is the collection of all T[x]'s (including T[v]). Since $T[v] \, E \, [v]$ and E is well-founded, $T[v] \neq [v]$. This resembles the resolution of the Burali–Forti paradox discussed above and in the New Foundations article, and is in fact the local resolution of Mirimanoff's paradox of the set of all well-founded sets. There are ranks of isomorphism classes of set pictures just as there are ranks of sets in the usual set theory. For any collection of set pictures A, define S(A) as the set of all isomorphism classes of set pictures whose preimage under E is a subset of A; call A a "complete" set if every subset of A is a preimage under E. The collection of "ranks" is the smallest collection containing the empty set and closed under the S operation (which is a kind of power set construction) and under unions of its subcollections. It is straightforward to prove (much as in the usual set theory) that the ranks are well-ordered by inclusion, and so the ranks have an index in this well-order: refer to the rank with index $\alpha$ as $R_\alpha$. It is provable that $|R_\alpha| = \beth_\alpha$ for complete ranks $R_\alpha$. The union of the complete ranks (which will be the first incomplete rank) with the relation E looks like an initial segment of the universe of Zermelo-style set theory (not necessarily like the full universe of ZFC, because it may not be large enough). It is provable that if $R_\alpha$ is the first incomplete rank, then $R_{T(\alpha)}$ is a complete rank and thus $T(\alpha) < \alpha$. So there is a "rank of the cumulative hierarchy" with an "external automorphism" T moving the rank downward, exactly the condition on a nonstandard model of a rank in the cumulative hierarchy under which a model of NFU is constructed in the New Foundations article.
There are technical details to verify, but there is an interpretation not only of a fragment of ZFC but of NFU itself in this structure, with $[x] \in_{NFU} [y]$ defined as $T([x]) \, E \, [y] \wedge [y] \in R_{T(\alpha)+1}$: this "relation" $\in_{NFU}$ is not a set relation but has the same type displacement between its arguments as the usual membership relation $\in$. So there is a natural construction inside NFU of the cumulative hierarchy of sets which internalizes the natural construction of a model of NFU in Zermelo-style set theory. Under the Axiom of Cantorian Sets described in the New Foundations article, the strongly cantorian part of the set of isomorphism classes of set pictures with the E relation as membership becomes a (proper class) model of ZFC (in which there are n-Mahlo cardinals for each n; this extension of NFU is strictly stronger than ZFC). This is a proper class model because the strongly cantorian isomorphism classes do not make up a set. Permutation methods can be used to create from any model of NFU a model in which every strongly cantorian isomorphism type of set pictures is actually realized as the restriction of the true membership relation to the transitive closure of a set.
https://en.wikipedia.org/wiki/Implementation_of_mathematics_in_set_theory
Implementation theory is an area of research in game theory concerned with whether a class of mechanisms (or institutions) can be designed whose equilibrium outcomes implement a given set of normative goals or welfare criteria. [1] There are two general types of implementation problems: the economic problem of producing and allocating public and private goods, and the problem of choosing over a finite set of alternatives. [2] In the case of producing and allocating public/private goods, solution concepts are focused on finding dominant strategies. In his paper "Counterspeculation, Auctions, and Competitive Sealed Tenders", William Vickrey showed that if preferences are restricted to the case of quasi-linear utility functions, then the social choice function is dominant-strategy implementable. [3] "A social choice rule is dominant strategy incentive compatible, or strategy-proof, if the associated revelation mechanism has the property that honestly reporting the truth is always a dominant strategy for each agent." [2] However, the payments to agents become large, sacrificing budget neutrality for incentive compatibility. In a game where multiple agents are to report their preferences (or their type), it may be in the best interest of some agents to lie about their preferences. This may improve their payoff, but it may not be seen as a fair outcome to other agents. [4] Although largely theoretical, implementation theory may have profound implications on policy creation, because some social choice rules may be impossible to implement under specific game conditions. [1] In mechanism design, implementability is a property of a social choice function: it means that there is an incentive-compatible mechanism that attains ("implements") this function. There are several degrees of implementability, corresponding to the different degrees of incentive-compatibility. In some textbooks, the entire field of mechanism design is called implementation theory. [5]
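Vickrey's second-price auction is the canonical dominant-strategy implementable mechanism under quasi-linear utility. The sketch below (illustrative; the function names are hypothetical, not from any library) checks numerically that, over a grid of opponents' bid profiles, truthful bidding never does worse than any deviation:

```python
import itertools

def second_price_outcome(bids):
    """Highest bid wins (ties go to the lowest index); winner pays the
    highest competing bid."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def utility(value, my_bid, other_bids):
    """Quasi-linear utility of bidder 0: value minus price if they win."""
    winner, price = second_price_outcome([my_bid] + list(other_bids))
    return value - price if winner == 0 else 0.0

value = 7.0
grid = [0.0, 2.0, 5.0, 7.0, 9.0, 12.0]   # candidate bids / opponent bids
for others in itertools.product(grid, repeat=2):
    truthful = utility(value, value, others)
    # Dominant strategy: no deviating bid beats bidding one's true value.
    assert all(utility(value, b, others) <= truthful + 1e-9 for b in grid)
print("truth-telling was dominant on every sampled profile")
```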
https://en.wikipedia.org/wiki/Implementation_theory
In the study of graph algorithms, an implicit graph representation (or more simply implicit graph) is a graph whose vertices or edges are not represented as explicit objects in a computer's memory, but rather are determined algorithmically from some other input, for example a computable function. The notion of an implicit graph is common in various search algorithms which are described in terms of graphs. In this context, an implicit graph may be defined as a set of rules to define all neighbors for any specified vertex. [1] This type of implicit graph representation is analogous to an adjacency list, in that it provides easy access to the neighbors of each vertex. For instance, in searching for a solution to a puzzle such as Rubik's Cube, one may define an implicit graph in which each vertex represents one of the possible states of the cube, and each edge represents a move from one state to another. It is straightforward to generate the neighbors of any vertex by trying all possible moves in the puzzle and determining the states reached by each of these moves; however, an implicit representation is necessary, as the state space of Rubik's Cube is too large to allow an algorithm to list all of its states. [2] In computational complexity theory, several complexity classes have been defined in connection with implicit graphs, defined as above by a rule or algorithm for listing the neighbors of a vertex. For instance, PPA is the class of problems in which one is given as input an undirected implicit graph (in which vertices are n-bit binary strings, with a polynomial time algorithm for listing the neighbors of any vertex) and a vertex of odd degree in the graph, and must find a second vertex of odd degree. By the handshaking lemma, such a vertex exists; finding one is a problem in NP, but the problems that can be defined in this way may not necessarily be NP-complete, as it is unknown whether PPA = NP. PPAD is an analogous class defined on implicit directed graphs that has attracted attention in algorithmic game theory because it contains the problem of computing a Nash equilibrium. [3] The problem of testing reachability of one vertex to another in an implicit graph may also be used to characterize space-bounded nondeterministic complexity classes, including NL (the class of problems that may be characterized by reachability in implicit directed graphs whose vertices are O(log n)-bit bitstrings), SL (the analogous class for undirected graphs), and PSPACE (the class of problems that may be characterized by reachability in implicit graphs with polynomial-length bitstrings). In this complexity-theoretic context, the vertices of an implicit graph may represent the states of a nondeterministic Turing machine, and the edges may represent possible state transitions, but implicit graphs may also be used to represent many other types of combinatorial structure. [4] PLS, another complexity class, captures the complexity of finding local optima in an implicit graph. [5] Implicit graph models have also been used as a form of relativization in order to prove separations between complexity classes that are stronger than the known separations for non-relativized models. For instance, Childs et al. used neighborhood representations of implicit graphs to define a graph traversal problem that can be solved in polynomial time on a quantum computer but that requires exponential time to solve on any classical computer. [6]
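The basic neighbor-listing definition can be made concrete with a small state-space search (a sketch with hypothetical names; the "puzzle" here is just bit-flipping on 20-bit strings, standing in for something like the Rubik's Cube state graph, and the graph is never materialized):

```python
from collections import deque

def neighbors(state):
    """Implicit graph rule: states are bit tuples; each move flips one bit."""
    return [state[:i] + (1 - state[i],) + state[i + 1:]
            for i in range(len(state))]

def bfs_distance(start, goal):
    """Breadth-first search driven entirely by the neighbor rule."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, d = frontier.popleft()
        if state == goal:
            return d
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))

# 2^20 states exist, but only those near the start are ever generated.
print(bfs_distance((0,) * 20, tuple([1] * 3 + [0] * 17)))  # -> 3
```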
In the context of efficient representations of graphs, J. H. Muller defined a local structure or adjacency labeling scheme for a graph G in a given family F of graphs to be an assignment of an O(log n)-bit identifier to each vertex of G, together with an algorithm (that may depend on F, but is independent of the individual graph G) that takes as input two vertex identifiers and determines whether or not they are the endpoints of an edge in G. That is, this type of implicit representation is analogous to an adjacency matrix: it is straightforward to check whether two vertices are adjacent, but finding the neighbors of any vertex may involve looping through all vertices and testing which ones are neighbors. [7] Many natural graph families have adjacency labeling schemes; however, not all graph families have local structures. For some families, a simple counting argument proves that adjacency labeling schemes do not exist: only $O(n \log n)$ bits may be used to represent an entire graph, so a representation of this type can only exist when the number of n-vertex graphs in the given family F is at most $2^{O(n \log n)}$. Graph families that have larger numbers of graphs than this, such as the bipartite graphs or the triangle-free graphs, do not have adjacency labeling schemes. [8][10] However, even families of graphs in which the number of graphs in the family is small might not have an adjacency labeling scheme; for instance, the family of graphs with fewer edges than vertices has $2^{O(n \log n)}$ n-vertex graphs but does not have an adjacency labeling scheme, because one could transform any given graph into a larger graph in this family by adding a new isolated vertex for each edge, without changing its labelability. [7][10] Kannan et al. asked whether having a forbidden subgraph characterization and having at most $2^{O(n \log n)}$ n-vertex graphs are together enough to guarantee the existence of an adjacency labeling scheme; Spinrad restated this question as a conjecture. Recent work has refuted this conjecture by providing a family of graphs with a forbidden subgraph characterization and a slow-enough growth rate but with no adjacency labeling scheme. [14] Among the families of graphs which satisfy the conditions of the conjecture and for which there is no known adjacency labeling scheme are the family of disk graphs and line segment intersection graphs. If a graph family F has an adjacency labeling scheme, then the n-vertex graphs in F may be represented as induced subgraphs of a common induced universal graph of polynomial size, the graph consisting of all possible vertex identifiers. Conversely, if an induced universal graph of this type can be constructed, then the identities of its vertices may be used as labels in an adjacency labeling scheme. [8] For this application of implicit graph representations, it is important that the labels use as few bits as possible, because the number of bits in the labels translates directly into the number of vertices in the induced universal graph. Alstrup and Rauhe showed that any tree has an adjacency labeling scheme with $\log_2 n + O(\log^* n)$ bits per label, from which it follows that any graph with arboricity k has a scheme with $k \log_2 n + O(\log^* n)$ bits per label and a universal graph with $n^k 2^{O(\log^* n)}$ vertices. In particular, planar graphs have arboricity at most three, so they have universal graphs with a nearly-cubic number of vertices. [15]
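Interval graphs give one of the simplest illustrations of an adjacency labeling scheme (a sketch, not Muller's construction verbatim: each vertex's label is the pair of endpoints of its interval, which is O(log n) bits when endpoints are polynomial in n, and adjacency is decided from two labels alone):

```python
def label(interval):
    """The label of a vertex is just its interval's endpoints (lo, hi)."""
    return interval

def adjacent(label_u, label_v):
    """Two intervals overlap iff neither ends before the other begins."""
    (lo_u, hi_u), (lo_v, hi_v) = label_u, label_v
    return lo_u <= hi_v and lo_v <= hi_u

# Vertices of an interval graph, given by their intervals:
intervals = {"a": (1, 4), "b": (3, 6), "c": (5, 8), "d": (9, 10)}
labels = {v: label(iv) for v, iv in intervals.items()}

assert adjacent(labels["a"], labels["b"])       # 1..4 overlaps 3..6
assert adjacent(labels["b"], labels["c"])       # 3..6 overlaps 5..8
assert not adjacent(labels["a"], labels["c"])   # 1..4 vs 5..8: disjoint
assert not adjacent(labels["c"], labels["d"])   # 5..8 vs 9..10: disjoint
```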
This bound was improved by Gavoille and Labourel, who showed that planar graphs and minor-closed graph families have a labeling scheme with $2\log_2 n + O(\log \log n)$ bits per label, and that graphs of bounded treewidth have a labeling scheme with $\log_2 n + O(\log \log n)$ bits per label. [16] The bound for planar graphs was improved again by Bonamy, Gavoille, and Pilipczuk, who showed that planar graphs have a labelling scheme with $(4/3 + o(1))\log_2 n$ bits per label. [17] Finally, Dujmović et al. showed that planar graphs have a labelling scheme with $(1 + o(1))\log_2 n$ bits per label, giving a universal graph with $n^{1+o(1)}$ vertices. [18] The Aanderaa–Karp–Rosenberg conjecture concerns implicit graphs given as a set of labeled vertices with a black-box rule for determining whether any two vertices are adjacent. This definition differs from an adjacency labeling scheme in that the rule may be specific to a particular graph rather than being a generic rule that applies to all graphs in a family. Because of this difference, every graph has an implicit representation. For instance, the rule could be to look up the pair of vertices in a separate adjacency matrix. However, an algorithm that is given as input an implicit graph of this type must operate on it only through the implicit adjacency test, without reference to how the test is implemented. A graph property is the question of whether a graph belongs to a given family of graphs; the answer must remain invariant under any relabeling of the vertices. In this context, the question to be determined is how many pairs of vertices must be tested for adjacency, in the worst case, before the property of interest can be determined to be true or false for a given implicit graph. Rivest and Vuillemin proved that any deterministic algorithm for any nontrivial graph property must test a quadratic number of pairs of vertices. [19] The full Aanderaa–Karp–Rosenberg conjecture is that any deterministic algorithm for a monotonic graph property (one that remains true if more edges are added to a graph with the property) must in some cases test every possible pair of vertices. Several cases of the conjecture have been proven to be true (for instance, it is known to be true for graphs with a prime number of vertices [20]), but the full conjecture remains open. Variants of the problem for randomized algorithms and quantum algorithms have also been studied. Bender and Ron have shown that, in the same model used for the evasiveness conjecture, it is possible in only constant time to distinguish directed acyclic graphs from graphs that are very far from being acyclic. In contrast, such a fast time is not possible in neighborhood-based implicit graph models. [21]
https://en.wikipedia.org/wiki/Implicit_graph
Implicit solvation (sometimes termed continuum solvation) is a method to represent solvent as a continuous medium instead of individual "explicit" solvent molecules, most often used in molecular dynamics simulations and in other applications of molecular mechanics. The method is often applied to estimate the free energy of solute-solvent interactions in structural and chemical processes, such as folding or conformational transitions of proteins, DNA, RNA, and polysaccharides, association of biological macromolecules with ligands, or transport of drugs across biological membranes. The implicit solvation model is justified in liquids, where the potential of mean force can be applied to approximate the averaged behavior of many highly dynamic solvent molecules. However, the interfaces and the interiors of biological membranes or proteins can also be considered as media with specific solvation or dielectric properties. These media are not necessarily uniform, since their properties can be described by different analytical functions, such as "polarity profiles" of lipid bilayers. [1] There are two basic types of implicit solvent methods: models based on accessible surface areas (ASA), which were historically the first, and more recent continuum electrostatics models, although various modifications and combinations of the different methods are possible. The accessible surface area (ASA) method is based on experimental linear relations between the Gibbs free energy of transfer and the surface area of a solute molecule. [2] This method operates directly with the free energy of solvation, unlike molecular mechanics or electrostatic methods that include only the enthalpic component of the free energy. The continuum representation of solvent also significantly improves the computational speed and reduces errors in statistical averaging that arise from incomplete sampling of solvent conformations, [3] so that the energy landscapes obtained with implicit and explicit solvent are different. [4] Although the implicit solvent model is useful for simulations of biomolecules, it is an approximate method with certain limitations and problems related to parameterization and treatment of ionization effects. In the simplest ASA-based method, the free energy of solvation of a solute molecule is given by $\Delta G_{solv} = \sum_i \sigma_i \, ASA_i$, where $ASA_i$ is the accessible surface area of atom i, and $\sigma_i$ is the solvation parameter of atom i, i.e., its contribution to the free energy of solvation per unit of surface area. The needed solvation parameters for different types of atoms (carbon (C), nitrogen (N), oxygen (O), sulfur (S), etc.) are usually determined by a least squares fit of the calculated and experimental transfer free energies for a series of organic compounds. The experimental energies are determined from partition coefficients of these compounds between different solutions or media, using standard mole concentrations of the solutes. [5][6] Notably, solvation energy is the free energy needed to transfer a solute molecule from a solvent to vacuum (gas phase). This energy can supplement the intramolecular energy in vacuum calculated in molecular mechanics. Thus, the needed atomic solvation parameters were initially derived from water-gas partition data. [7] However, the dielectric properties of proteins and lipid bilayers are much more similar to those of nonpolar solvents than to vacuum.
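The ASA bookkeeping itself is a one-line sum (a sketch; the σ values below are placeholders, not a published parameter set, since real parameters come from the least-squares fits described above):

```python
# Hypothetical per-atom-type solvation parameters, cal/(mol * Angstrom^2).
SIGMA = {"C": 18.0, "N": -9.0, "O": -9.0, "S": -5.0}  # placeholder values

def solvation_free_energy(atoms):
    """Delta G_solv = sum_i sigma_i * ASA_i over (atom_type, ASA) pairs."""
    return sum(SIGMA[atom_type] * asa for atom_type, asa in atoms)

# (atom type, solvent-accessible surface area in Angstrom^2) for a toy solute:
solute = [("C", 30.2), ("C", 12.5), ("O", 22.1), ("N", 8.0)]
print(f"Delta G_solv = {solvation_free_energy(solute):.1f} cal/mol")
```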
Newer parameters have thus been derived from octanol-water partition coefficients [8] or other similar data. Such parameters actually describe the transfer energy between two condensed media, or the difference of two solvation energies. The Poisson-Boltzmann (PB) equation describes the electrostatic environment of a solute in a solvent containing ions. It can be written in cgs units as $\nabla \cdot \left[\epsilon(\vec r)\,\nabla\Psi(\vec r)\right] = -4\pi\rho^f(\vec r) - 4\pi\sum_i c_i^\infty z_i q\,\lambda(\vec r)\,e^{-z_i q \Psi(\vec r)/kT}$, or (in mks units) as $\nabla \cdot \left[\epsilon(\vec r)\,\nabla\Psi(\vec r)\right] = -\rho^f(\vec r) - \sum_i c_i^\infty z_i q\,\lambda(\vec r)\,e^{-z_i q \Psi(\vec r)/kT}$, where $\epsilon(\vec r)$ represents the position-dependent dielectric, $\Psi(\vec r)$ represents the electrostatic potential, $\rho^f(\vec r)$ represents the charge density of the solute, $c_i^\infty$ represents the concentration of the ion i at a distance of infinity from the solute, $z_i$ is the valence of the ion, q is the charge of a proton, k is the Boltzmann constant, T is the temperature, and $\lambda(\vec r)$ is a factor for the position-dependent accessibility of position r to the ions in solution (often set uniformly to 1). If the potential is not large, the equation can be linearized to be solved more efficiently. [9] Although this equation has solid theoretical justification, it is computationally expensive to calculate without approximations. A number of numerical Poisson-Boltzmann equation solvers of varying generality and efficiency have been developed, [10][11][12] including one application with a specialized computer hardware platform. [13] However, performance from PB solvers does not yet equal that from the more commonly used generalized Born approximation. [14] The Generalized Born (GB) model is an approximation to the exact (linearized) Poisson-Boltzmann equation. It is based on modeling the solute as a set of spheres whose internal dielectric constant differs from the external solvent. The model has the following functional form: $G_s = -\frac{1}{8\pi\epsilon_0}\left(1 - \frac{1}{\epsilon}\right)\sum_{i,j}^N \frac{q_i q_j}{f_{GB}}$, where $f_{GB} = \sqrt{r_{ij}^2 + a_{ij}^2 e^{-D}}$ and $D = \left(\frac{r_{ij}}{2a_{ij}}\right)^2$, $a_{ij} = \sqrt{a_i a_j}$, where $\epsilon_0$ is the permittivity of free space, $\epsilon$ is the dielectric constant of the solvent being modeled, $q_i$ is the electrostatic charge on particle i, $r_{ij}$ is the distance between particles i and j, and $a_i$ is a quantity (with the dimension of length) termed the effective Born radius. [15] The effective Born radius of an atom characterizes its degree of burial inside the solute; qualitatively, it can be thought of as the distance from the atom to the molecular surface. Accurate estimation of the effective Born radii is critical for the GB model. [16] The Generalized Born (GB) model augmented with the hydrophobic solvent-accessible surface area (SA) term is GBSA. It is among the most commonly used implicit solvent model combinations. The use of this model in the context of molecular mechanics is termed MM/GBSA. Although this formulation has been shown to successfully identify the native states of short peptides with well-defined tertiary structure, [17] the conformational ensembles produced by GBSA models in other studies differ significantly from those produced by explicit solvent and do not identify the protein's native state. [4] In particular, salt bridges are overstabilized, possibly due to insufficient electrostatic screening, and a higher-than-native alpha helix population was observed.
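A direct transcription of the GB expression above (a sketch in reduced units with made-up charges and Born radii; the $1/(8\pi\epsilon_0)$ prefactor is folded into the charges, and real implementations also compute the Born radii from the molecular geometry):

```python
import math

def f_gb(r_ij, a_i, a_j):
    """Smooth interpolation between the Coulomb and Born limits."""
    a_ij = math.sqrt(a_i * a_j)
    D = (r_ij / (2.0 * a_ij)) ** 2
    return math.sqrt(r_ij ** 2 + a_ij ** 2 * math.exp(-D))

def gb_energy(charges, radii, dist, eps_solvent=80.0):
    """G_s = -(1/2) * (1 - 1/eps) * sum_ij q_i q_j / f_GB, reduced units.
    The i == j terms reduce to Born self-energies q_i^2 / a_i."""
    n = len(charges)
    total = sum(charges[i] * charges[j] / f_gb(dist[i][j], radii[i], radii[j])
                for i in range(n) for j in range(n))
    return -0.5 * (1.0 - 1.0 / eps_solvent) * total

charges = [0.4, -0.4]               # hypothetical partial charges
radii = [1.5, 1.7]                  # hypothetical effective Born radii
dist = [[0.0, 3.0], [3.0, 0.0]]     # pairwise distances; zeros on the diagonal
print(gb_energy(charges, radii, dist))
```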
Variants of the GB model have also been developed to approximate the electrostatic environment of membranes, which have had some success in folding the transmembrane helices of integral membrane proteins. [18] Another possibility is to use ad hoc quick strategies to estimate the solvation free energy. A first generation of fast implicit solvents is based on the calculation of a per-atom solvent-accessible surface area. For each group of atom types, a different parameter scales its contribution to solvation (the "ASA-based model" described above). [19] Another strategy is implemented for the CHARMM 19 force field and is called EEF1. [20] EEF1 is based on a Gaussian-shaped solvent exclusion. The solvation free energy is $\Delta G_i^{solv} = \Delta G_i^{ref} - \sum_j \int_{V_j} f_i(r)\, dr$. The reference solvation free energy of i corresponds to a suitably chosen small molecule in which group i is essentially fully solvent-exposed. The integral is over the volume $V_j$ of group j, and the summation is over all groups j around i. EEF1 additionally uses a distance-dependent (non-constant) dielectric, and ionic side-chains of proteins are simply neutralized. It is only 50% slower than a vacuum simulation. This model was later augmented with the hydrophobic effect and called Charmm19/SASA. [21] It is possible to include a layer or sphere of water molecules around the solute, and model the bulk with an implicit solvent. Such an approach is proposed by M. J. Frisch and coworkers [22] and by other authors. [23][24] For instance, in Ref. [23] the bulk solvent is modeled with a Generalized Born approach and the multi-grid method is used for Coulombic pairwise particle interactions. It is reported to be faster than a full explicit solvent simulation with the particle mesh Ewald summation (PME) method of electrostatic calculation. There are a range of hybrid methods available capable of accessing and acquiring information on solvation. [25] Models like PB and GB allow estimation of the mean electrostatic free energy but do not account for the (mostly) entropic effects arising from solute-imposed constraints on the organization of the water or solvent molecules. This is termed the hydrophobic effect and is a major factor in the folding process of globular proteins with hydrophobic cores. Implicit solvation models may be augmented with a term that accounts for the hydrophobic effect. The most popular way to do this is by taking the solvent-accessible surface area (SASA) as a proxy of the extent of the hydrophobic effect. Most authors place the extent of this effect between 5 and 45 cal/(Å² mol). [26] Note that this surface area pertains to the solute, while the hydrophobic effect is mostly entropic in nature at physiological temperatures and occurs on the side of the solvent. Implicit solvent models such as PB, GB, and SASA lack the viscosity that water molecules impart by randomly colliding and impeding the motion of solutes through their van der Waals repulsion. In many cases, this is desirable because it makes sampling of configurations and phase space much faster. This acceleration means that more configurations are visited per simulated time unit, on top of whatever CPU acceleration is achieved in comparison to explicit solvent. It can, however, lead to misleading results when kinetics are of interest. Viscosity may be added back by using Langevin dynamics instead of Hamiltonian mechanics and choosing an appropriate damping constant for the particular solvent. [27]
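A minimal Langevin integrator of the kind alluded to here (a sketch: a single particle in a harmonic well, with a crude Euler-Maruyama discretization in reduced units; production codes use more careful splittings such as BAOAB):

```python
import math, random

def langevin_step(x, v, dt, gamma, kT, mass=1.0, k_spring=1.0):
    """One Euler-Maruyama step of Langevin dynamics in a harmonic well.
    gamma is the damping (collision) constant modeling solvent viscosity."""
    force = -k_spring * x                       # deterministic force
    noise = math.sqrt(2.0 * gamma * kT * dt / mass) * random.gauss(0.0, 1.0)
    v += (force / mass - gamma * v) * dt + noise
    x += v * dt
    return x, v

random.seed(0)
x, v = 1.0, 0.0
for _ in range(10000):
    x, v = langevin_step(x, v, dt=0.01, gamma=1.0, kT=1.0)
# Lowering gamma speeds conformational exploration, as described below.
print(f"final x = {x:.3f}")
```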
In practical biomolecular simulations, one can often speed up conformational search significantly (up to 100 times in some cases) by using a much lower collision frequency $\gamma$. [28] Recent work has also been done developing thermostats based on fluctuating hydrodynamics to account for momentum transfer through the solvent and related thermal fluctuations. [29] One should keep in mind, though, that the folding rate of proteins does not depend linearly on viscosity for all regimes. [30] Solute-solvent hydrogen bonds in the first solvation shell are important for the solubility of organic molecules and especially ions. Their average energetic contribution can be reproduced with an implicit solvent model. [31][32] All implicit solvation models rest on the simple idea that nonpolar atoms of a solute tend to cluster together or occupy nonpolar media, whereas polar and charged groups of the solute tend to remain in water. However, it is important to properly balance the opposite energy contributions from different types of atoms. Several important points have been discussed and investigated over the years. It has been noted that wet 1-octanol solution is a poor approximation of proteins or biological membranes because it contains ~2 M of water, and that cyclohexane would be a much better approximation. [33] Investigation of passive permeability barriers for different compounds across lipid bilayers led to the conclusion that 1,9-decadiene can serve as a good approximation of the bilayer interior, [34] whereas 1-octanol was a very poor approximation. [35] A set of solvation parameters derived for the protein interior from protein engineering data was also different from the octanol scale: it was close to the cyclohexane scale for nonpolar atoms but intermediate between the cyclohexane and octanol scales for polar atoms. [36] Thus, different atomic solvation parameters should be applied for modeling of protein folding and protein-membrane binding. This issue remains controversial. The original idea of the method was to derive all solvation parameters directly from experimental partition coefficients of organic molecules, which allows calculation of the solvation free energy. However, some of the recently developed electrostatic models use ad hoc values of 20 or 40 cal/(Å² mol) for all types of atoms. The non-existent "hydrophobic" interactions of polar atoms are overridden by large electrostatic energy penalties in such models. Strictly speaking, ASA-based models should only be applied to describe solvation, i.e., the energetics of transfer between liquid or uniform media. It is possible to express van der Waals interaction energies in the solid state in surface energy units. This was sometimes done for interpreting protein engineering and ligand binding energetics, [37] and it leads to a "solvation" parameter for aliphatic carbon of ~40 cal/(Å² mol), [38] which is 2 times bigger than the ~20 cal/(Å² mol) obtained for transfer from water to liquid hydrocarbons, because the parameters derived by such fitting represent the sum of the hydrophobic energy (i.e., 20 cal/(Å² mol)) and the energy of van der Waals attractions of aliphatic groups in the solid state, which corresponds to the fusion enthalpy of alkanes. [36] Unfortunately, the simplified ASA-based model cannot capture the "specific" distance-dependent interactions between different types of atoms in the solid state which are responsible for clustering of atoms with similar polarities in protein structures and molecular crystals.
Parameters of such interatomic interactions, together with atomic solvation parameters for the protein interior, have been approximately derived from protein engineering data. [36] The implicit solvation model breaks down when solvent molecules associate strongly with binding cavities in a protein, so that the protein and the solvent molecules form a continuous solid body. [39] On the other hand, this model can be successfully applied for describing transfer from water to the fluid lipid bilayer. [40] More testing is needed to evaluate the performance of different implicit solvation models and parameter sets. They are often tested only for a small set of molecules with very simple structure, such as hydrophobic and amphiphilic alpha helices. This method was rarely tested for hundreds of protein structures. [40] Ionization of charged groups has been neglected in continuum electrostatic models of implicit solvation, as well as in standard molecular mechanics and molecular dynamics. The transfer of an ion from water to a nonpolar medium with a dielectric constant of ~3 (lipid bilayer) or 4 to 10 (interior of proteins) costs significant energy, as follows from the Born equation and from experiments. However, since the charged protein residues are ionizable, they simply lose their charges in the nonpolar environment, which costs relatively little at neutral pH: ~4 to 7 kcal/mol for Asp, Glu, Lys, and Arg amino acid residues, according to the Henderson-Hasselbalch equation, $\Delta G = 2.3RT(\mathrm{pH} - \mathrm{p}K)$. The low energetic costs of such ionization effects have indeed been observed for protein mutants with buried ionizable residues [41] and for hydrophobic α-helical peptides in membranes with a single ionizable residue in the middle. [42] However, all electrostatic methods, such as PB, GB, or GBSA, assume that ionizable groups remain charged in the nonpolar environments, which leads to grossly overestimated electrostatic energy. In the simplest accessible surface area-based models, this problem was treated using different solvation parameters for charged atoms or the Henderson-Hasselbalch equation with some modifications. [40] However, even the latter approach does not solve the problem: charged residues can remain charged even in the nonpolar environment if they are involved in intramolecular ion pairs and H-bonds. Thus, the energetic penalties can be overestimated even using the Henderson-Hasselbalch equation. More rigorous theoretical methods describing such ionization effects have been developed, [43] and there are ongoing efforts to incorporate such methods into the implicit solvation models. [44]
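The cited ionization penalty is simple arithmetic (a sketch; the pK values below are textbook approximations for the free amino acids, and R is the gas constant in kcal/(mol·K)):

```python
R = 1.987e-3          # gas constant, kcal/(mol*K)
T = 298.15            # temperature, K
PK = {"Asp": 3.9, "Glu": 4.2, "Lys": 10.5, "Arg": 12.5}  # approximate pK values

def neutralization_cost(residue, pH=7.0):
    """Delta G = 2.3 R T |pH - pK|: cost of losing the charge at this pH."""
    return 2.3 * R * T * abs(pH - PK[residue])

for res in PK:
    print(f"{res}: {neutralization_cost(res):.1f} kcal/mol")
# Prints values of roughly 4 to 7.5 kcal/mol, matching the range quoted above.
```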
https://en.wikipedia.org/wiki/Implicit_solvation
In the United States, implied powers are powers that, although not directly stated in the Constitution, are indirectly granted on the basis of expressed powers. When George Washington asked Alexander Hamilton to defend the constitutionality of the First Bank of the United States against the protests [1] of Thomas Jefferson, James Madison, and Attorney General Edmund Randolph, Hamilton produced what has now become the doctrine of implied powers. [2] Hamilton argued that the sovereign duties of a government implied the right to use means adequate to its ends. Although the United States government was sovereign only as to certain objects, it was impossible to define all the means it should use, because it was impossible for the founders to anticipate all future exigencies. Hamilton noted that the "general welfare clause" and the "necessary and proper clause" gave elasticity to the Constitution. Hamilton won the argument and Washington signed the bank bill into law. Another instance of the usage of implied powers was the Louisiana Purchase: in 1803, the United States was offered the opportunity to purchase French territory in continental North America. James Monroe was sent by Thomas Jefferson to France to negotiate, with permission to spend up to $10 million on the port of New Orleans and parts of Florida. However, an agreement to purchase the entirety of continental French territory for $15 million was reached instead, even though this far exceeded the authorized $10 million spending cap. Although Jefferson's decision to purchase the Louisiana territory would ultimately be widely popular, it was not known to constitutional lawyers, nor even to Jefferson himself, whether he had had the legal authority to negotiate the price of the territory (ultimately violating his stipulated budget) without the approval of Congress. In the end, the notion of implied powers was offered and accepted as justification for finishing the deal. [3] Later, directly borrowing from Hamilton, Chief Justice John Marshall invoked the implied powers of government in the United States Supreme Court case McCulloch v. Maryland. [4] In 1816, the United States Congress passed legislation creating the Second Bank of the United States. The state of Maryland attempted to tax the bank, arguing that the United States Constitution did not explicitly grant Congress the power to establish banks. In 1819, the Court decided against the state of Maryland. Chief Justice Marshall argued that Congress had the right to establish the bank, as the Constitution grants to Congress certain implied powers beyond those explicitly stated. In the case of the United States Government, implied powers are powers Congress exercises that the Constitution does not explicitly define, but which are necessary and proper to execute its expressed powers. The legitimacy of these Congressional powers is derived from the Taxing and Spending Clause, the Necessary and Proper Clause, and the Commerce Clause. Implied powers are those that can reasonably be assumed to flow from express powers, [5] though not explicitly mentioned. This theory has flowed from domestic constitutional law [6] into international law, [7] and European Union institutions have accepted the basics of the implied powers theory. [8]
https://en.wikipedia.org/wiki/Implied_powers
Implied weighting describes a group of methods used in phylogenetic analysis to assign the greatest importance to characters that are most likely to be homologous. These are a posteriori methods, which also include dynamic weighting, as opposed to a priori methods, which include adaptive, independent, and chemical categories (see Weighting at the American Museum of Natural History's website). The first attempt to implement such a technique was by Farris (1969), [1] who called it successive approximations weighting: a tree was constructed with equal weights, and characters that appeared as homoplasies on this tree were downweighted based on the CI (consistency index) or RCI (rescaled consistency index), which are measures of homology. The analysis was repeated with these new weights, and characters were again re-weighted; iteration continued until a stable state was reached. Farris suggested that each character could be considered independently with respect to a weight implied by its frequency of change. However, the final tree depended strongly on the starting weights and the finishing criteria. [2] The most widely used and implemented method, called implied weighting, follows from Goloboff (1993). [2] The first time a character changes state on a tree, this state change is given the weight '1'; subsequent changes are less 'expensive' and are given smaller weights as the character's tendency for homoplasy becomes more apparent. The trees which maximize the concave function of homoplasy resolve character conflict in favour of the characters which have more homology (less homoplasy) and imply that the average weight for the characters is as high as possible. Goloboff recognizes that trees with the heaviest average weights give the most 'respect' to the data: a low average weight implies that most characters are being 'ignored' by the tree-building algorithms. [2] Though originally proposed with a severe weighting of k = 3, Goloboff now prefers more 'gentle' concavities (e.g. k = 12), [3] which have been shown to be more effective in simulated and real-world cases. [4]
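The concave weighting function used in Goloboff-style implied weighting is commonly written f(h) = k/(k + h), with h the number of extra (homoplastic) steps a character requires on a tree and k the concavity constant. The sketch below is illustrative only (real programs such as TNT obtain h by parsimony optimization on candidate trees); it shows how the per-character fit, and hence the tree score to be maximized, responds to the choice of k:

```python
def fit(extra_steps, k=12.0):
    """Goloboff's concave fit k/(k + h): 1 for no homoplasy, decaying with h."""
    return k / (k + extra_steps)

def tree_score(extra_steps_per_character, k=12.0):
    """Score to maximize: total fit over all characters on a given tree."""
    return sum(fit(h, k) for h in extra_steps_per_character)

homoplasy = [0, 0, 1, 3, 7]          # extra steps implied by some tree
for k in (3.0, 12.0):                # severe vs. gentle concavity
    print(f"k = {k:>4}: score = {tree_score(homoplasy, k):.3f}")
# A severe k punishes homoplastic characters much harder than a gentle k.
```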
https://en.wikipedia.org/wiki/Implied_weighting
Implosion is the collapse of an object into itself from a pressure differential or gravitational force. The opposite of explosion (which expands the volume), implosion reduces the volume occupied and concentrates matter and energy. Implosion involves a difference between internal (lower) and external (higher) pressure, or inward and outward forces, that is so large that the structure collapses inward into itself, or into the space it occupied if it is not a completely solid object. Examples of implosion include a submarine being crushed by hydrostatic pressure [1] and the collapse of a star under its own gravitational pressure. In some but not all cases, an implosion propels material outward, for example due to the force of inward-falling material rebounding, or peripheral material being ejected as the inner parts collapse. If the object was previously solid, then implosion usually requires it to take on a more dense form, in effect to be more concentrated, compressed, or converted into a denser material. In an implosion-type nuclear weapon design, a sphere of plutonium, uranium, or other fissile material is imploded by a spherical arrangement of explosive charges. This decreases the material's volume and thus increases its density by a factor of two to three, causing it to reach critical mass and create a nuclear explosion. In some forms of thermonuclear weapons, the energy from this explosion is then used to implode a capsule of fusion fuel before igniting it, causing a fusion reaction (see Teller–Ulam design). In general, the use of radiation to implode something, as in a hydrogen bomb or in laser-driven inertial confinement fusion, is known as radiation implosion. Cavitation (bubble formation and collapse in a fluid) involves an implosion process. When a cavitation bubble forms in a liquid (for example, by a high-speed water propeller), this bubble is typically rapidly collapsed (imploded) by the surrounding liquid. Implosion is a key part of the gravitational collapse of large stars, which can lead to the creation of supernovas, neutron stars and black holes. In the most common case, the innermost part of a large star (called the core) stops burning and, without this source of heat, the forces holding electrons and protons apart are no longer strong enough to do so. The core collapses in on itself exceedingly quickly and becomes a neutron star or black hole; the outer layers of the original star fall inwards and may rebound off the newly created neutron star (if one was created), creating a supernova. A high vacuum exists within all cathode-ray tubes. If the outer glass envelope is damaged, a dangerous implosion may occur, scattering glass pieces at dangerous speeds. While modern CRTs used in televisions and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs removed from equipment must be handled carefully to avoid injury. [2] The demolition of large buildings using precisely placed and timed explosions, so that the structure collapses on itself, is often erroneously described as implosion.
https://en.wikipedia.org/wiki/Implosion_(mechanical_process)
The Importance Value Index (IVI) in ecology is a quantitative measure of how dominant a species is in a given ecosystem . It combines multiple parameters to reflect a species' overall dominance, helping to describe the structure and composition of ecosystems. [ 1 ] The IVI is calculated by summing three relative measures for each species in a given area: IVI = Relative Density + Relative Frequency + Relative Dominance. Each of these components is expressed as a percentage, so the IVI ranges from 0 to 300. [ 2 ] IVI is commonly used in vegetation analysis and forest ecology; it offers insight into species' ecological roles beyond simple abundance by incorporating spatial and distributional data. [ 3 ] In a forest plot where three tree species are sampled, if Species A has high abundance, occurs frequently across plots, and occupies a large basal area, its IVI would be significantly higher than that of a rare, spatially restricted, or small-canopy species. Researchers often present IVI rankings to show the ecological dominance hierarchy within a study area. [ 4 ] Although useful, the IVI also has limitations.
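As a worked sketch of the formula above, assuming the common definitions of the three components (relative density from stem counts, relative frequency from plot occurrence, relative dominance from basal area); the function and the sample numbers are illustrative:

```python
def ivi(counts, plots_present, total_plots, basal_areas):
    """Importance Value Index per species (each value lies in 0-300).

    counts: {species: number of individuals across all plots}
    plots_present: {species: number of plots in which it occurs}
    total_plots: total number of sampled plots
    basal_areas: {species: summed basal area, any consistent unit}
    """
    total_count = sum(counts.values())
    total_freq = sum(p / total_plots for p in plots_present.values())
    total_ba = sum(basal_areas.values())
    result = {}
    for sp in counts:
        rel_density = 100 * counts[sp] / total_count
        rel_frequency = 100 * (plots_present[sp] / total_plots) / total_freq
        rel_dominance = 100 * basal_areas[sp] / total_ba
        result[sp] = rel_density + rel_frequency + rel_dominance
    return result

# Species A is abundant, widespread, and large-canopied, as in the example:
print(ivi({"A": 120, "B": 30, "C": 10},
          {"A": 10, "B": 4, "C": 2}, 10,
          {"A": 5.2, "B": 1.1, "C": 0.3}))
```

The three components of each species sum across species to 100 each, so the IVI values of all species together sum to 300.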
https://en.wikipedia.org/wiki/Importance_Value_Index
Important ecological areas (IEAs) are habitat areas which, either by themselves or in a network, contribute significantly to an ecosystem 's productivity, biodiversity , and resilience. Appropriate management of key ecological features delineates the management boundaries of an IEA. The identification and protection of IEAs is an element of an ecosystem-based management approach. Important ecological areas may have varying levels of management of extractive activities, from monitoring up to and including marine reserve status. IEAs have management measures tailored to the ecological features within the area, with consideration of socioeconomic factors, whereas marine reserves generally have a fixed management policy of no extraction, or 'no-take'. Nonetheless, a marine reserve may be the appropriate management policy for an IEA. The identification and management of IEAs is a form of ocean zoning . In the event that there is a series of linked IEAs within a large marine ecosystem, a collective action to manage the network, such as a marine sanctuary or national monument , may be warranted. Examples of ecosystems in which IEAs occur include tropical rainforests , oceans , and forests .
https://en.wikipedia.org/wiki/Important_ecological_areas
In propositional logic , import-export is a name given to the propositional form of Exportation : (( P ∧ Q ) → R ) ↔ ( P → ( Q → R )). This already holds in minimal logic , and thus also in classical logic , where the conditional operator "→" is taken as material implication . In the Curry-Howard correspondence for intuitionistic logics, it can be realized through currying and uncurrying. Import-export expresses a deductive argument form . In natural language terms, the formula states that English sentences of the forms "If P and Q , then R " and "If P , then if Q , then R " are logically equivalent . [ 1 ] [ 2 ] [ 3 ] There are logics where it does not hold, and its status as a true principle of logic is a matter of debate. Controversy over the principle arises from the fact that any conditional operator that satisfies it will collapse to material implication when combined with certain other principles. This conclusion would be problematic given the paradoxes of material implication , which are commonly taken to show that natural language conditionals are not material implication. [ 2 ] [ 3 ] [ 4 ] This problematic conclusion can be avoided within the framework of dynamic semantics , whose expressive power allows one to define a non-material conditional operator which nonetheless satisfies import-export along with the other principles. [ 3 ] [ 5 ] However, other approaches reject import-export as a general principle, motivated by pairs of conditionals uttered in a context where it is most likely that the match will be lit by throwing it into a campfire, but where it is possible that it could be lit by striking it. In such a context, the first sentence of the pair is intuitively true but the second is intuitively false. [ 5 ] [ 6 ] [ 7 ]
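Under the Curry-Howard reading mentioned above, import-export corresponds to the inter-derivability of curried and uncurried function types. A minimal Python sketch of the two directions (the helper names are illustrative, not from any library):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def curry(f: Callable[[A, B], C]) -> Callable[[A], Callable[[B], C]]:
    # 'Export': a proof of (A and B) -> C yields a proof of A -> (B -> C).
    return lambda a: lambda b: f(a, b)

def uncurry(g: Callable[[A], Callable[[B], C]]) -> Callable[[A, B], C]:
    # 'Import': the converse direction.
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
assert curry(add)(2)(3) == 5
assert uncurry(curry(add))(2, 3) == 5   # round trip recovers the original
```

The round trip between the two forms is what makes the biconditional, rather than a one-way implication, hold intuitionistically.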
https://en.wikipedia.org/wiki/Import–export_(logic)
Imposex is a disorder in sea snails caused by the toxic effects of certain marine pollutants . These pollutants cause female sea snails ( marine gastropod molluscs ) to develop male sex organs such as a penis and a vas deferens . It was long believed that the only inducer of imposex was tributyltin (TBT), [ 1 ] which can be active in extremely low concentrations, but recent studies have reported other substances as inducers, such as triphenyltin [ 2 ] and ethanol . [ 3 ] Tributyltin is used as an anti-fouling agent for boats and affects females of the species Nucella lapillus (the dog whelk ), Voluta ebraea (the Hebrew volute), [ 4 ] Olivancillaria vesica , [ 5 ] Stramonita haemastoma [ 6 ] (the red-mouthed rock shell) and more than 200 other marine gastropods. In the dog whelk , the growth of a penis in imposex females gradually blocks the oviduct , although ovule production continues. An imposex female dog whelk passes through several stages of penis growth before it becomes unable to maintain a constant production of ovules. Later stages of imposex lead to sterility and the premature death of females of reproductive age, which can adversely affect the entire population . [ 4 ] In 1993, scientists from the Plymouth Marine Laboratory found a thriving dog-whelk population at Dumpton Gap, near Ramsgate in the UK, despite high levels of TBT in the water. [ 7 ] In the Dumpton Gap population, only 25% of females showed any significant signs of imposex, while 10% of males were characterized by the absence of a penis or an undersized penis, with incomplete development of the vas deferens and prostate. After further experiments, scientists concluded that "Dumpton Syndrome" was the result of genetic selection driven by high TBT levels: TBT resistance was improved at the cost of lower reproductive fitness. The imposex stages of female dog whelks and other molluscs (including Nucella lima ) are used in the United Kingdom and worldwide to monitor levels of tributyltin. The RPSI (Relative Penis Size Index) of females to males and the VDSI (Vas Deferens Sequence Index) are used to monitor levels of tributyltin in marine environments . A ban on tributyltin was implemented in Canada in 2003; however, in 2006, dog whelks with imposex could still be found on the shores of Halifax Harbour in Nova Scotia. [ 8 ]
https://en.wikipedia.org/wiki/Imposex
In mathematics , logic and philosophy of mathematics , something that is impredicative is a self-referencing definition . Roughly speaking, a definition is impredicative if it invokes (mentions or quantifies over) the set being defined, or (more commonly) another set that contains the thing being defined. There is no generally accepted precise definition of what it means to be predicative or impredicative; authors have given different but related definitions. The opposite of impredicativity is predicativity, which essentially entails building stratified (or ramified) theories where quantification over a type at one 'level' results in types at a new, higher, level. A prototypical example is intuitionistic type theory , which retains ramification (without the explicit levels) so as to discard impredicativity. The 'levels' here correspond to the number of layers of dependency in a term definition. Russell's paradox is a famous example of an impredicative construction—namely the set of all sets that do not contain themselves. The paradox is that such a set cannot exist: if it existed, the question could be asked whether it contains itself or not—if it does, then by definition it should not, and if it does not, then by definition it should. The greatest lower bound of a set X , glb( X ) , also has an impredicative definition: y = glb( X ) if and only if for all elements x of X , y is less than or equal to x , and any z less than or equal to all elements of X is less than or equal to y . This definition quantifies over the set (potentially infinite , depending on the order in question) whose members are the lower bounds of X , one of which is the glb itself. Hence predicativism would reject this definition. [ 1 ] Norms (containing one variable) which do not define classes I propose to call non-predicative ; those which do define classes I shall call predicative . The terms "predicative" and "impredicative" were introduced by Bertrand Russell , though the meaning has changed a little since then. Solomon Feferman provides a historical review of predicativity, connecting it to current outstanding research problems. [ 2 ] The vicious circle principle was suggested by Henri Poincaré (1905–6, 1908) [ 3 ] and Bertrand Russell in the wake of the paradoxes as a requirement on legitimate set specifications. Sets that do not meet the requirement are called impredicative . The first modern paradox appeared with Cesare Burali-Forti 's 1897 A question on transfinite numbers [ 4 ] and would become known as the Burali-Forti paradox . Georg Cantor had apparently discovered the same paradox in his "naive" set theory , and this became known as Cantor's paradox . Russell's awareness of the problem originated in June 1901 [ 5 ] with his reading of Frege 's treatise of mathematical logic, his 1879 Begriffsschrift ; the offending sentence in Frege is the following: On the other hand, it may also be that the argument is determinate and the function indeterminate. [ 6 ] In other words, given f ( a ), the function f is the variable and a is the invariant part. So why not substitute the value f ( a ) for f itself? Russell promptly wrote Frege a letter pointing out that: You state ... that a function, too, can act as the indeterminate element. This I formerly believed, but now this view seems doubtful to me because of the following contradiction. Let w be the predicate: to be a predicate that cannot be predicated of itself. Can w be predicated of itself? From each answer its opposite follows. 
Therefore we must conclude that w is not a predicate. Likewise there is no class (as a totality) of those classes which, each taken as a totality, do not belong to themselves. From this I conclude that under certain circumstances a definable collection does not form a totality. [ 7 ] Frege promptly wrote back to Russell acknowledging the problem: Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic. [ 8 ] While the problem had adverse personal consequences for both men (both had works at the printers that had to be emended), van Heijenoort observes that "The paradox shook the logicians' world, and the rumbles are still felt today. ... Russell's paradox, which uses the bare notions of set and element, falls squarely in the field of logic. The paradox was first published by Russell in The principles of mathematics (1903) and is discussed there in great detail ...". [ 9 ] Russell, after six years of false starts, would eventually answer the matter with his 1908 theory of types by "propounding his axiom of reducibility . It says that any function is coextensive with what he calls a predicative function: a function in which the types of apparent variables run no higher than the types of the arguments". [ 10 ] But this "axiom" was met with resistance from all quarters. The rejection of impredicatively defined mathematical objects (while accepting the natural numbers as classically understood) leads to the position in the philosophy of mathematics known as predicativism, advocated by Henri Poincaré and Hermann Weyl in his Das Kontinuum . Poincaré and Weyl argued that impredicative definitions are problematic only when one or more underlying sets are infinite. Ernst Zermelo in his 1908 "A new proof of the possibility of a well-ordering" [ 11 ] presents an entire section "b. Objection concerning nonpredicative definition " where he argued against "Poincaré (1906, p. 307) [who states that] a definition is 'predicative' and logically admissible only if it excludes all objects that are dependent upon the notion defined, that is, that can in any way be determined by it". [ 12 ] He gives two examples of impredicative definitions – (i) the notion of Dedekind chains and (ii) "in analysis wherever the maximum or minimum of a previously defined "completed" set of numbers Z is used for further inferences. This happens, for example, in the well-known Cauchy proof...". [ 13 ] He ends his section with the following observation: "A definition may very well rely upon notions that are equivalent to the one being defined; indeed, in every definition definiens and definiendum are equivalent notions, and the strict observance of Poincaré's demand would make every definition, hence all of science, impossible". [ 14 ] Zermelo's example of minimum and maximum of a previously defined "completed" set of numbers reappears in Kleene 1952:42-42, where Kleene uses the example of least upper bound in his discussion of impredicative definitions; Kleene does not resolve this problem. In the next paragraphs he discusses Weyl's attempt in his 1918 Das Kontinuum ( The Continuum ) to eliminate impredicative definitions and his failure to retain the "theorem that an arbitrary non-empty set M of real numbers having an upper bound has a least upper bound (cf. also Weyl 1919)". 
[ 15 ] Ramsey argued that "impredicative" definitions can be harmless: for instance, the definition of "tallest person in the room" is impredicative, since it depends on a set of things of which it is an element, namely the set of all persons in the room. Concerning mathematics, an example of an impredicative definition is the smallest number in a set, which is formally defined as: y = min( X ) if and only if for all elements x of X , y is less than or equal to x , and y is in X . Burgess (2005) discusses predicative and impredicative theories at some length, in the context of Frege 's logic, Peano arithmetic , second-order arithmetic , and axiomatic set theory .
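To make the circularity explicit, the two order-theoretic definitions given in this article can be written out formally; in each case the defined object lies inside the very collection being quantified over (a sketch in standard notation):

```latex
% Greatest lower bound: quantifies over all lower bounds z of X,
% a collection that contains glb(X) itself.
y = \operatorname{glb}(X) \iff
  (\forall x \in X :\ y \le x) \;\wedge\;
  (\forall z :\ (\forall x \in X :\ z \le x) \rightarrow z \le y)

% Smallest element: quantifies over the set X, of which min(X) is a member.
y = \min(X) \iff y \in X \,\wedge\, (\forall x \in X :\ y \le x)
```

Predicativists object to the first because glb(X) is itself one of the z being quantified over; Ramsey's point is that the second kind of quantification, as in "tallest person in the room", is harmless.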
https://en.wikipedia.org/wiki/Impredicativity
Imprelis is a selective herbicide created by DuPont . The active ingredient is aminocyclopyrachlor , a synthetic auxin . [ 1 ] [ 2 ] Imprelis was registered with the United States Environmental Protection Agency [ 3 ] for sale in October 2010. [ 4 ] Sale of Imprelis was voluntarily suspended [ 5 ] a week before the EPA required sales to stop. [ 6 ] [ 4 ] DuPont acknowledged it was killing or damaging evergreen trees, including white pine and Norway spruce . [ 7 ] DuPont allegedly knew Imprelis would damage evergreens before seeking EPA approval. [ 8 ] DuPont offered to compensate customers whose trees were affected. It asked that a claim be submitted and said it would send a claim resolution agreement specifying the amount DuPont would pay to settle the claim. By late May 2012, the end of the planting season, many tree owners had not heard from DuPont, and other tree owners who had accepted payment agreements with DuPont had not been paid. [ 7 ] Several hundred tree owners have filed lawsuits against DuPont; as of May 2012, they were seeking class action status. [ 7 ]
https://en.wikipedia.org/wiki/Imprelis
In mathematical analysis , an improper integral is an extension of the notion of a definite integral to cases that violate the usual assumptions for that kind of integral. [ 1 ] In the context of Riemann integrals (or, equivalently, Darboux integrals ), this typically involves unboundedness, either of the set over which the integral is taken or of the integrand (the function being integrated), or both. It may also involve bounded but not closed sets or bounded but not continuous functions . While an improper integral is typically written symbolically just like a standard definite integral, it actually represents a limit of a definite integral or a sum of such limits; thus improper integrals are said to converge or diverge. [ 2 ] [ 1 ] If a regular definite integral (which may retronymically be called a proper integral ) is worked out as if it is improper, the same answer will result. In the simplest case of a real-valued function of a single variable integrated in the sense of Riemann (or Darboux) over a single interval, improper integrals may be in any of the following forms:

\int_a^\infty f(x)\,dx, \qquad \int_{-\infty}^b f(x)\,dx, \qquad \int_{-\infty}^\infty f(x)\,dx, \qquad \int_a^b f(x)\,dx \ \text{(with } f \text{ unbounded or undefined at one or more points of } [a,b]\text{)}.

The first three forms are improper because the integrals are taken over an unbounded interval. (They may be improper for other reasons, as well, as explained below.) Such an integral is sometimes described as being of the "first" type or kind if the integrand otherwise satisfies the assumptions of integration. [ 2 ] Integrals in the fourth form that are improper because f(x) has a vertical asymptote somewhere on the interval [a, b] may be described as being of the "second" type or kind. [ 2 ] Integrals that combine aspects of both types are sometimes described as being of the "third" type or kind. [ 2 ] In each case above, the improper integral must be rewritten using one or more limits, depending on what is causing the integral to be improper. For example, in case 1, if f(x) is continuous on the entire interval [a, ∞), then

\int_a^\infty f(x)\,dx = \lim_{b\to\infty} \int_a^b f(x)\,dx.

The limit on the right is taken to be the definition of the integral notation on the left. If f(x) is only continuous on (a, ∞) and not at a itself, then typically this is rewritten as

\int_a^\infty f(x)\,dx = \lim_{t\to a^+} \int_t^c f(x)\,dx + \lim_{b\to\infty} \int_c^b f(x)\,dx

for any choice of c > a. Here both limits must converge to a finite value for the improper integral to be said to converge. This requirement avoids the ambiguous case of adding positive and negative infinities (i.e., the "∞ − ∞" indeterminate form ). Alternatively, an iterated limit could be used or a single limit based on the Cauchy principal value . If f(x) is continuous on [a, d) and (d, ∞), with a discontinuity of any kind at d, then

\int_a^\infty f(x)\,dx = \lim_{t\to d^-} \int_a^t f(x)\,dx + \lim_{u\to d^+} \int_u^c f(x)\,dx + \lim_{b\to\infty} \int_c^b f(x)\,dx

for any choice of c > d. The previous remarks about indeterminate forms, iterated limits, and the Cauchy principal value also apply here. The function f(x) can have more discontinuities, in which case even more limits would be required (or a more complicated principal value expression). Cases 2–4 are handled similarly. See the examples below. Improper integrals can also be evaluated in the context of complex numbers, in higher dimensions, and in other theoretical frameworks such as Lebesgue integration or Henstock–Kurzweil integration . 
Integrals that are considered improper in one framework may not be in others. The original definition of the Riemann integral does not apply to a function such as 1/x² on the interval [1, ∞), because in this case the domain of integration is unbounded . However, the Riemann integral can often be extended by continuity , by defining the improper integral instead as a limit:

\int_1^\infty \frac{dx}{x^2} = \lim_{b\to\infty} \int_1^b \frac{dx}{x^2} = \lim_{b\to\infty}\left(1 - \frac{1}{b}\right) = 1.

The narrow definition of the Riemann integral also does not cover the function 1/√x on the interval [0, 1]. The problem here is that the integrand is unbounded in the domain of integration. In other words, the definition of the Riemann integral requires that both the domain of integration and the integrand be bounded . However, the improper integral does exist if understood as the limit

\int_0^1 \frac{dx}{\sqrt{x}} = \lim_{s\to 0^+} \int_s^1 \frac{dx}{\sqrt{x}} = \lim_{s\to 0^+}\left(2 - 2\sqrt{s}\right) = 2.

Sometimes integrals may have two singularities where they are improper. Consider, for example, the function 1/((x + 1)√x) integrated from 0 to ∞. At the lower bound of the integration domain, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞, though the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible. However, any finite upper bound, say t (with t > 1 ), gives a well-defined result, 2 arctan(√t) − π/2 . This has a finite limit as t goes to infinity, namely π/2. Similarly, the integral from 1/3 to 1 allows a Riemann sum as well, coincidentally again producing π/6. Replacing 1/3 by an arbitrary positive value s (with s < 1 ) is equally safe, giving π/2 − 2 arctan(√s) . This, too, has a finite limit as s goes to zero, namely π/2. Combining the limits of the two fragments, the result of this improper integral is

\int_0^\infty \frac{dx}{(x+1)\sqrt{x}} = \pi.

This process does not guarantee success; a limit might fail to exist, or might be infinite. For example, over the bounded interval from 0 to 1 the integral of 1/x does not converge; and over the unbounded interval from 1 to ∞ the integral of 1/√x does not converge. It might also happen that an integrand is unbounded near an interior point, in which case the integral must be split at that point. For the integral as a whole to converge, the limit integrals on both sides must exist and must be bounded. For example:

\int_{-1}^1 \frac{dx}{\sqrt[3]{x^2}} = \lim_{s\to 0^-}\int_{-1}^s x^{-2/3}\,dx + \lim_{t\to 0^+}\int_t^1 x^{-2/3}\,dx = 3 + 3 = 6.

But the similar integral

\int_{-1}^1 \frac{dx}{x}

cannot be assigned a value in this way, as the integrals above and below zero in the integral domain do not independently converge. (However, see Cauchy principal value .) An improper integral converges if the limit defining it exists. Thus for example one says that the improper integral

\lim_{t\to\infty} \int_a^t f(x)\,dx

exists and is equal to L if the integrals under the limit exist for all sufficiently large t , and the value of the limit is equal to L . It is also possible for an improper integral to diverge to infinity. In that case, one may assign the value of ∞ (or −∞) to the integral. For instance

\lim_{b\to\infty} \int_1^b \frac{dx}{x} = \infty.

However, other improper integrals may simply diverge in no particular direction, such as

\lim_{b\to\infty} \int_1^b x\sin x\,dx,

which does not exist, even as an extended real number . This is called divergence by oscillation. A limitation of the technique of improper integration is that the limit must be taken with respect to one endpoint at a time. Thus, for instance, an improper integral of the form

\int_{-\infty}^\infty f(x)\,dx

can be defined by taking two separate limits; to wit

\int_{-\infty}^\infty f(x)\,dx = \lim_{a\to-\infty}\,\lim_{b\to\infty} \int_a^b f(x)\,dx,

provided the double limit is finite. 
It can also be defined as a pair of distinct improper integrals of the first kind:

\int_{-\infty}^\infty f(x)\,dx = \lim_{a\to-\infty}\int_a^c f(x)\,dx + \lim_{b\to\infty}\int_c^b f(x)\,dx,

where c is any convenient point at which to start the integration. This definition also applies when one of these integrals is infinite, or both if they have the same sign. An example of an improper integral where both endpoints are infinite is the Gaussian integral \int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi}. An example which evaluates to infinity is \int_{-\infty}^\infty e^x\,dx. But one cannot even define other integrals of this kind unambiguously, such as \int_{-\infty}^\infty x\,dx, since the double limit is infinite and the two-integral method yields the indeterminate form ∞ − ∞. In this case, one can however define an improper integral in the sense of Cauchy principal value :

\operatorname{p.v.}\int_{-\infty}^\infty x\,dx = \lim_{b\to\infty}\int_{-b}^b x\,dx = 0.

The questions one must address in determining an improper integral are whether the limit exists, and whether it can be computed. The first question is an issue of mathematical analysis . The second one can be addressed by calculus techniques, but also in some cases by contour integration , Fourier transforms and other more advanced methods. There is more than one theory of integration . From the point of view of calculus, the Riemann integral theory is usually assumed as the default theory. In using improper integrals, it can matter which integration theory is in play. In some cases, the integral can be defined as an integral (a Lebesgue integral , for instance) without reference to the limit but cannot otherwise be conveniently computed. This often happens when the function f being integrated from a to c has a vertical asymptote at c , or if c = ∞. In such cases, the improper Riemann integral allows one to calculate the Lebesgue integral of the function. Specifically, the following theorem holds ( Apostol 1974 , Theorem 10.33): if f is Riemann integrable on [a, b] for every b ≥ a , and the partial integrals \int_a^b |f|\,dx are bounded as b → ∞, then f is Lebesgue integrable on [a, ∞), and its Lebesgue integral equals the improper Riemann integral. For example, the integral

\int_0^\infty \frac{dx}{1+x^2}

can be interpreted alternatively as the improper integral

\lim_{b\to\infty}\int_0^b \frac{dx}{1+x^2} = \lim_{b\to\infty}\arctan b = \frac{\pi}{2},

or it may be interpreted instead as a Lebesgue integral over the set (0, ∞). Since both of these kinds of integral agree, one is free to choose the first method to calculate the value of the integral, even if one ultimately wishes to regard it as a Lebesgue integral. Thus improper integrals are clearly useful tools for obtaining the actual values of integrals. In other cases, however, a Lebesgue integral between finite endpoints may not even be defined, because the integrals of the positive and negative parts of f are both infinite, but the improper Riemann integral may still exist. Such cases are "properly improper" integrals, i.e. their values cannot be defined except as such limits. For example,

\int_0^\infty \frac{\sin x}{x}\,dx

cannot be interpreted as a Lebesgue integral, since

\int_0^\infty \left|\frac{\sin x}{x}\right| dx = \infty.

But f(x) = sin(x)/x is nevertheless integrable between any two finite endpoints, and its integral between 0 and ∞ is usually understood as the limit of the integral:

\int_0^\infty \frac{\sin x}{x}\,dx = \lim_{b\to\infty}\int_0^b \frac{\sin x}{x}\,dx = \frac{\pi}{2}.

One can speak of the singularities of an improper integral, meaning those points of the extended real number line at which limits are used. Consider the difference in values of two limits:

\lim_{a\to 0^+}\left(\int_{-1}^{-a}\frac{dx}{x} + \int_a^1 \frac{dx}{x}\right) = 0,
\qquad
\lim_{a\to 0^+}\left(\int_{-1}^{-a}\frac{dx}{x} + \int_{2a}^1 \frac{dx}{x}\right) = -\ln 2.

The former is the Cauchy principal value of the otherwise ill-defined expression

\int_{-1}^1 \frac{dx}{x}.

Similarly, we have

\lim_{a\to\infty}\int_{-a}^a \frac{2x\,dx}{x^2+1} = 0,

but

\lim_{a\to\infty}\int_{-2a}^a \frac{2x\,dx}{x^2+1} = -\ln 4.

The former is the principal value of the otherwise ill-defined expression

\int_{-\infty}^\infty \frac{2x\,dx}{x^2+1}.

All of the above limits are cases of the indeterminate form ∞ − ∞. 
These pathologies do not affect "Lebesgue-integrable" functions, that is, functions the integrals of whose absolute values are finite. An improper integral may diverge in the sense that the limit defining it may not exist. In this case, there are more sophisticated definitions of the limit which can produce a convergent value for the improper integral. These are called summability methods. One summability method, popular in Fourier analysis , is that of Cesàro summation . The integral

\int_0^\infty f(x)\,dx

is Cesàro summable (C, α) if

\lim_{\lambda\to\infty}\int_0^\lambda \left(1 - \frac{x}{\lambda}\right)^{\alpha} f(x)\,dx

exists and is finite ( Titchmarsh 1948 , §1.15). The value of this limit, should it exist, is the (C, α) sum of the integral. An integral is (C, 0) summable precisely when it exists as an improper integral. However, there are integrals which are (C, α) summable for α > 0 which fail to converge as improper integrals (in the sense of Riemann or Lebesgue). One example is the integral

\int_0^\infty \sin x\,dx,

which fails to exist as an improper integral, but is (C, α ) summable for every α > 0. This is an integral version of Grandi's series . The improper integral can also be defined for functions of several variables. The definition is slightly different, depending on whether one requires integrating over an unbounded domain, such as ℝ², or is integrating a function with singularities, like f(x, y) = log(x² + y²). If f : ℝⁿ → ℝ is a non-negative function that is Riemann integrable over every compact cube of the form [−a, a]ⁿ, for a > 0 , then the improper integral of f over ℝⁿ is defined to be the limit

\int_{\mathbb{R}^n} f = \lim_{a\to\infty} \int_{[-a,a]^n} f,

provided it exists. A function on an arbitrary domain A in ℝⁿ is extended to a function f̃ on ℝⁿ by zero outside of A :

\tilde{f}(x) = \begin{cases} f(x) & x \in A, \\ 0 & x \notin A. \end{cases}

The Riemann integral of a function over a bounded domain A is then defined as the integral of the extended function f̃ over a cube [−a, a]ⁿ containing A :

\int_A f = \int_{[-a,a]^n} \tilde{f}.

More generally, if A is unbounded, then the improper Riemann integral over an arbitrary domain in ℝⁿ is defined as the limit:

\int_A f = \lim_{a\to\infty}\int_{[-a,a]^n} \tilde{f}.

If f is a non-negative function which is unbounded in a domain A , then the improper integral of f is defined by truncating f at some cutoff M , integrating the resulting function, and then taking the limit as M tends to infinity. That is, for M > 0 , set f_M = min{f, M}. Then define

\int_A f = \lim_{M\to\infty}\int_A f_M,

provided this limit exists. These definitions apply for functions that are non-negative. A more general function f can be decomposed as a difference of its positive part f₊ = max{f, 0} and negative part f₋ = max{−f, 0}, so

f = f_+ - f_-,

with f₊ and f₋ both non-negative functions. The function f has an improper Riemann integral if each of f₊ and f₋ has one, in which case the value of that improper integral is defined by

\int_A f = \int_A f_+ - \int_A f_-.

In order to exist in this sense, the improper integral necessarily converges absolutely, since

\int_A |f| = \int_A f_+ + \int_A f_-.
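Numerically, the limit definitions above can be checked directly. The sketch below uses SciPy's quad, which is part of SciPy's public API and accepts infinite endpoints; the integrands are the examples from the text, and the truncation loop reproduces the defining limit by hand:

```python
import numpy as np
from scipy.integrate import quad

# Improper integral of the first kind: quad accepts an infinite endpoint.
value, err = quad(lambda x: 1.0 / x**2, 1.0, np.inf)
print(value)  # ~1.0

# The same integral via its defining limit, truncating at growing b:
for b in (10.0, 1e3, 1e6):
    partial, _ = quad(lambda x: 1.0 / x**2, 1.0, b)
    print(b, partial)  # approaches 1 as b grows

# Improper integral of the second kind: integrand unbounded at x = 0.
value2, err2 = quad(lambda x: 1.0 / np.sqrt(x), 0.0, 1.0)
print(value2)  # ~2.0
```

The quadrature routine converges here because both integrals converge; for a divergent case such as 1/x on (0, 1], the truncated partial integrals grow without bound, matching the discussion of divergence above.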
https://en.wikipedia.org/wiki/Improper_integral
Improvisation , often shortened to improv , is the activity of making or doing something not planned beforehand, using whatever can be found. [ 1 ] The word derives from the Latin improvisus , which literally means "unforeseen". Improvisation in the performing arts is a spontaneous performance without specific or scripted preparation. The skills of improvisation can apply to many different faculties and forms of communication and expression across all artistic, scientific, physical, cognitive, academic, and non-academic disciplines; see Applied improvisation . For example, improvisation can make a significant contribution in music, dance, cooking, presenting a speech, sales, personal or romantic relationships, sports, flower arranging, martial arts, psychotherapy, and much more. Techniques of improvisation are widely used in training for performing arts or entertainment; for example, music, theatre and dance. To " extemporize " or "ad lib" is basically the same as improvising. Colloquial terms such as "playing by ear", "take it as it comes", and "making it up as [one] goes along" are all used to describe improvisation. The simple act of speaking requires a good deal of improvisation, because the mind is addressing its own thought and creating its unrehearsed delivery in words, sounds and gestures, forming unpredictable statements that further feed the thought process (the performer as the listener), creating an enriched process that is not unlike instantaneous composition with a given set or repertoire of elements. [ 2 ] Where the improvisation is intended to solve a problem on a temporary basis, the "proper" solution being unavailable at the time, it may be known as a " stop-gap ". This applies to the field of engineering. Another improvisational, group problem-solving technique used in organizations of all kinds is brainstorming , in which any and all ideas that a group member may have are permitted and encouraged to be expressed, regardless of actual practicality. As in all improvisation, the process of brainstorming opens up the minds of the people involved to new, unexpected and possibly useful ideas. The colloquial term for this is " thinking outside the box ". Improvisation can be thought of as an "on the spot" or " off the cuff " spontaneous moment of sudden inventiveness that can just come to mind, body and spirit as an inspiration. Viola Spolin created theater games as a method of training improvisational acting. [ 3 ] Her son, Paul Sills , popularized improvisational theater, or improv, by using Spolin's techniques to train The Second City in Chicago, the first totally improvisational theater company in the US. [ 4 ] Musical improvisation is usually defined as the spontaneous performance of music without previous preparation or any written notes. [ 5 ] In other words, the art of improvisation can be understood as composing music "on the fly". There have been experiments by Charles Limb, using functional magnetic resonance imaging , that show the brain activity during musical improvisation. [ 6 ] Limb showed increased activity in the medial prefrontal cortex, which is an area associated with an increase in self-expression. Further, there was decreased activity in the lateral prefrontal cortex , which is an area associated with self-monitoring. 
This change in activity is thought to reduce the inhibitions that normally prevent individuals from taking risks and improvising. Notable improvisational musicians from the modern era include Keith Jarrett , an improvisational jazz pianist and multi-instrumentalist who has performed many improvised concerts all over the world; [ 7 ] W. A. Mathieu , a.k.a. William Allaudin Mathieu, the musical director for The Second City in Chicago, the first ongoing improvisational theatre troupe in the United States, and later musical director for another improv theatre, The Committee , an offshoot of The Second City in San Francisco; Derek Bailey , an improvisational guitarist and writer of Improvisation: Its Nature and Practice; [ 8 ] Evan Parker , a British saxophone player; the iconic pianists Fred van Hove (Belgium) and Misha Mengelberg (Netherlands); and, more recently, the Belgian Seppe Gebruers , who improvises with two pianos tuned a quartertone apart. [ 9 ] Improvised freestyle rap is commonly practiced as a part of rappers ' creative processes, as a "finished product" for release on recordings (when the improvisation is judged good enough), as a spiritual event, as a means of verbal combat in battle rap , and, simply, for fun. As mentioned above, studies have suggested that improvisation allows a musician to relax the control filters in their mind during this exercise. [ 10 ] It often incorporates insults similar to those in the African-American game The Dozens , and complex rhythmic and sometimes melodic forms comparable to those heard in jazz improvisation. Improvisation, in theatre, is the playing of dramatic scenes without written dialogue and with minimal or no predetermined dramatic activity. The method has been used for different purposes in theatrical history. [ 11 ] Dance improvisation is used as a choreographic tool in dance composition . Experimenting with the concepts of shape, space, time, and energy while moving without inhibition or cognitive thinking can create unique and innovative movement designs, spatial configurations, dynamics, and unpredictable rhythms. Improvisation without inhibition allows the choreographer to connect to their deepest creative self, which in turn clears the way for pure invention. This cognitive inhibition is similar to the inhibition described by Limb for musical improvisation, discussed in the music section above. Contact improvisation is a form developed in 1973 that is now practiced around the world. Contact improvisation originated from the movement studies of Steve Paxton in the 1970s and developed through the continued exploration of the Judson Dance Theater . It is a dance form based on weight sharing, partnering, playing with weight, exploring negative space and unpredictable outcomes. Sculpture often relies on the enlargement of a small model or maquette to create the final work in a chosen material. Where the material is plastic, such as clay , a working structure or armature often needs to be built to allow the pre-determined design to be realized. Alan Thornhill 's method for working with clay abandons the maquette, [ 12 ] seeing it as ultimately deadening to creativity . [ 13 ] Without the restrictions of the armature, a clay matrix of elements allows that, when recognizable forms start to emerge, they can be essentially disregarded by turning the work, allowing for infinite possibility and the chance for the unforeseen to emerge more powerfully at a later stage. 
Moving from adding and taking away to purely reductive working, the architectural considerations of turning the work are eased considerably, but continued removal of material through the rejection of forms deemed too obvious can mean one ends up with nothing. Former pupil Jon Edgar uses Thornhill's method as a creative extension to direct carving in stone and wood. The director Mike Leigh uses lengthy improvisations, developed over a period of weeks, to build characters and story lines for his films. [ 14 ] He starts with some sketch ideas of how he thinks things might develop but does not reveal all his intentions to the cast, who discover their fate and act out their responses as their destinies are gradually revealed, including significant aspects of their lives which will not subsequently be shown onscreen. The final filming draws on dialogue and actions that have been recorded during the improvisation period. Improvisational writing is an exercise that imposes limitations on a writer such as a time limit, word limit, a specific topic, or rules on what can be written. This forces the writer to work within stream of consciousness and write without judgment of the work they produce. This technique is used for a variety of reasons, such as to bypass writer's block , improve creativity, strengthen one's writing instinct and enhance one's flexibility in writing. Some improvisational writing is collaborative, focusing on an almost dadaist form of collaborative fiction . This can take a variety of forms, from as basic as passing a notebook around a circle of writers with each writing a sentence, to coded environments that focus on collaborative novel-writing, [ 15 ] like OtherSpace . [ 16 ] Improvisation in engineering is to solve a problem with the tools and materials immediately at hand. [ 17 ] Examples of such improvisation were the re-engineering of carbon dioxide scrubbers with the materials on hand during the Apollo 13 space mission, [ 18 ] and the use of a knife in place of a screwdriver to turn a screw. Engineering improvisations may be needed because of emergencies, embargo , obsolescence of a product and the loss of manufacturer support, or just a lack of funding appropriate for a better solution. Users of motor vehicles in parts of Africa develop improvised solutions [ 19 ] where it is not feasible to obtain manufacturer-approved spare parts. [ 20 ] The popular television program MacGyver used as its gimmick a hero who could solve almost any problem with jury-rigged devices made from everyday materials, a Swiss Army knife and some duct tape .
https://en.wikipedia.org/wiki/Improvisation
Improvision is a software developer based in Coventry , England . The company develops confocal, live cell imaging and image analysis software for 2D, 3D and 4D imaging. Improvision was founded in 1990 by Ken Salisbury, Andrew Waterfall and John Zeidler, and was acquired by PerkinElmer on 2 April 2007 in a cash transaction. [ 1 ] [ 2 ] The company develops and sells scientific imaging equipment and software, including confocal microscopy systems and image analysis software, for the life sciences industry. [ 3 ] In April 2000, Improvision received a Queen's Award for Enterprise , the highest honour which can be given to a UK company, in recognition of outstanding achievement in export sales. [ 5 ] In 2002, it was a winner in the annual Lord Stafford Awards for Innovation. [ 4 ]
https://en.wikipedia.org/wiki/Improvision
In classical mechanics , impulse (symbolized by J or Imp ) is the change in momentum of an object. If the initial momentum of an object is p₁ , and a subsequent momentum is p₂ , the object has received an impulse J :

\mathbf{J} = \mathbf{p}_2 - \mathbf{p}_1.

Momentum is a vector quantity, so impulse is also a vector quantity:

\sum \mathbf{F}\,\Delta t = \Delta\mathbf{p}. [ 1 ]

Newton's second law of motion states that the rate of change of momentum of an object is equal to the resultant force F acting on the object:

\mathbf{F} = \frac{\mathbf{p}_2 - \mathbf{p}_1}{\Delta t},

so the impulse J delivered by a steady force F acting for time Δt is:

\mathbf{J} = \mathbf{F}\,\Delta t.

The impulse delivered by a varying force acting from time a to b is the integral of the force F with respect to time:

\mathbf{J} = \int_a^b \mathbf{F}\,\mathrm{d}t.

The SI unit of impulse is the newton second (N⋅s), and the dimensionally equivalent unit of momentum is the kilogram metre per second (kg⋅m/s). The corresponding English engineering unit is the pound -second (lbf⋅s), and in the British Gravitational System , the unit is the slug -foot per second (slug⋅ft/s). Impulse J produced from time t₁ to t₂ is defined to be [ 3 ]

\mathbf{J} = \int_{t_1}^{t_2} \mathbf{F}\,\mathrm{d}t,

where F is the resultant force applied from t₁ to t₂ . From Newton's second law , force is related to momentum p by

\mathbf{F} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}.

Therefore,

\mathbf{J} = \int_{t_1}^{t_2} \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}\,\mathrm{d}t = \int_{\mathbf{p}_1}^{\mathbf{p}_2} \mathrm{d}\mathbf{p} = \mathbf{p}_2 - \mathbf{p}_1 = \Delta\mathbf{p},

where Δp is the change in linear momentum from time t₁ to t₂ . This is often called the impulse-momentum theorem (analogous to the work-energy theorem ). As a result, an impulse may also be regarded as the change in momentum of an object to which a resultant force is applied. The impulse may be expressed in a simpler form when the mass is constant:

\mathbf{J} = \int_{t_1}^{t_2} \mathbf{F}\,\mathrm{d}t = \Delta\mathbf{p} = m\mathbf{v}_2 - m\mathbf{v}_1,

where m is the mass of the object and v₁ and v₂ are its velocities at times t₁ and t₂ , respectively. Impulse has the same units and dimensions (MLT⁻¹) as momentum. In the International System of Units , these are kg ⋅ m/s = N ⋅ s . In English engineering units , they are slug ⋅ ft/s = lbf ⋅ s . The term "impulse" is also used to refer to a fast-acting force or impact . This type of impulse is often idealized so that the change in momentum produced by the force happens with no change in time. This sort of change is a step change , and is not physically possible. However, this is a useful model for computing the effects of ideal collisions (such as in videogame physics engines ). Additionally, in rocketry, the term "total impulse" is commonly used and is considered synonymous with the term "impulse". The application of Newton's second law for variable mass allows impulse and momentum to be used as analysis tools for jet - or rocket -propelled vehicles. 
In the case of rockets, the impulse imparted can be normalized per unit of propellant expended to create a performance parameter, specific impulse . This fact can be used to derive the Tsiolkovsky rocket equation , which relates the vehicle's propulsive change in velocity to the engine's specific impulse (or nozzle exhaust velocity) and the vehicle's propellant- mass ratio .
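As a quick numerical illustration of J = ∫F dt, a sampled force history can be integrated with the trapezoidal rule; the half-sine force profile below is invented for the example:

```python
import numpy as np

# Hypothetical half-sine force pulse: F(t) = 100 N * sin(pi * t / T), T = 10 ms.
T = 0.01                                   # pulse duration, seconds
t = np.linspace(0.0, T, 1001)
F = 100.0 * np.sin(np.pi * t / T)          # newtons

J = np.trapz(F, t)                         # impulse, N*s (np.trapezoid in NumPy >= 2.0)
print(J)                                   # analytic value: 2*100*T/pi ~ 0.6366 N*s

# Impulse-momentum theorem for a constant mass m = 0.5 kg initially at rest:
m = 0.5
dv = J / m                                 # change in speed, m/s
print(dv)                                  # ~1.27 m/s
```

The numerical result matches the closed-form integral of the half-sine, and dividing by the mass applies the constant-mass form J = m v₂ − m v₁ given above.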
https://en.wikipedia.org/wiki/Impulse_(physics)
The impulse excitation technique ( IET ) is a non-destructive material characterization technique to determine the elastic properties and internal friction of a material of interest. [ 1 ] It measures the resonant frequencies in order to calculate the Young's modulus , shear modulus , Poisson's ratio and internal friction of predefined shapes like rectangular bars, cylindrical rods and disc-shaped samples. The measurements can be performed at room temperature or at elevated temperatures (up to 1700 °C) under different atmospheres. [ 2 ] The measurement principle is based on tapping the sample with a small projectile and recording the induced vibration signal with a piezoelectric sensor , microphone , laser vibrometer or accelerometer . To optimize the results, a microphone or a laser vibrometer can be used, as there is then no contact between the test-piece and the sensor. Laser vibrometers are preferred for measuring signals in vacuum. Afterwards, the acquired vibration signal in the time domain is converted to the frequency domain by a fast Fourier transformation . Dedicated software determines the resonant frequency with high accuracy to calculate the elastic properties based on the classical beam theory . [ 3 ] Different resonant frequencies can be excited depending on the position of the support wires, the mechanical impulse and the microphone. The two most important resonant frequencies are the flexural one, which is controlled by the Young's modulus of the sample, and the torsional one, which is controlled by the shear modulus for isotropic materials. For predefined shapes like rectangular bars, discs, rods and grinding wheels, dedicated software calculates the sample's elastic properties using the sample dimensions, weight and resonant frequency (ASTM E1876-15). The first figure gives an example of a test-piece vibrating in the flexure mode. This induced vibration is also referred to as the out-of-plane vibration mode. The in-plane vibration is excited by turning the sample 90° on the axis parallel to its length. The natural frequency of this flexural vibration mode is characteristic of the dynamic Young's modulus . To minimize the damping of the test-piece, it has to be supported at the nodes where the vibration amplitude is zero. The test-piece is mechanically excited at one of the anti-nodes to cause maximum vibration. The second figure gives an example of a test-piece vibrating in the torsion mode. The natural frequency of this vibration is characteristic of the shear modulus . To minimize the damping of the test-piece, it has to be supported at the center of both axes. The mechanical excitation has to be performed in one corner in order to twist the beam rather than flexing it. The Poisson's ratio is a measure of the degree to which a material tends to expand in directions perpendicular to the direction of compression. After measuring the Young's modulus and the shear modulus, dedicated software determines the Poisson's ratio using Hooke's law , which can only be applied to isotropic materials according to the different standards. Material damping or internal friction is characterized by the decay of the vibration amplitude of the sample in free vibration, expressed as the logarithmic decrement. The damping behaviour originates from anelastic processes occurring in a strained solid, i.e. thermoelastic damping, magnetic damping, viscous damping, defect damping, etc. For example, different material defects ( dislocations , vacancies, ...) 
can contribute to an increase in the internal friction between the vibrating defects and the neighboring regions. Considering the importance of elastic properties for design and engineering applications, a number of experimental techniques have been developed, and these can be classified into two groups: static and dynamic methods. Static methods (like the four-point bending test and nanoindentation ) are based on direct measurements of stresses and strains during mechanical tests. Dynamic methods (like ultrasound spectroscopy and the impulse excitation technique) provide an advantage over static methods because the measurements are relatively quick and simple and involve small elastic strains. Therefore, IET is very suitable for porous and brittle materials like ceramics and refractories . The technique can also be easily modified for high temperature experiments, and only a small amount of material needs to be available. The most important parameters in defining the measurement uncertainty are the mass and dimensions of the sample. Therefore, each parameter has to be measured (and prepared) to a level of accuracy of 0.1%. The sample thickness is the most critical parameter, since it enters the equation for the Young's modulus to the third power. With this care, an overall accuracy of 1% can be obtained practically in most applications. The impulse excitation technique can be used in a wide range of applications. Nowadays, IET equipment can perform measurements between −50 °C and 1700 °C in different atmospheres (air, inert, vacuum). IET is mostly used in research and as a quality control tool to study transitions as a function of time and temperature. A detailed insight into the material crystal structure can be obtained by studying the elastic and damping properties. For example, the interaction of dislocations and point defects in carbon steels has been studied. [ 4 ] Also the material damage accumulated during a thermal shock treatment can be determined for refractory materials. [ 5 ] This can be an advantage in understanding the physical properties of certain materials. Finally, the technique can be used to check the quality of systems. In this case, a reference piece is required to obtain a reference frequency spectrum. Engine blocks, for example, can be tested by tapping them and comparing the recorded signal with a pre-recorded signal of a reference engine block. By using simple cluster analysis algorithms or principal component analysis, pattern recognition of samples is also achievable with a set of pre-recorded signals. [ 6 ] For a rectangular bar of mass m, length L, width b and thickness t, the dynamic Young's modulus follows from the flexural resonant frequency f_f as

E = 0.9465\,\left(\frac{m f_f^2}{b}\right)\left(\frac{L^3}{t^3}\right) T_1,

with T_1 a correction factor that depends on the thickness-to-length ratio and the Poisson's ratio (ASTM E1876-15); the shear modulus G follows analogously from the torsional resonant frequency f_t together with a shape-dependent correction factor. If the Young's modulus and shear modulus are known, the Poisson's ratio can be calculated according to:

\nu = \frac{E}{2G} - 1.

The induced vibration signal (in the time domain) is fitted as a sum of exponentially damped sinusoidal functions according to:

x(t) = \sum_i A_i\, e^{-k_i t} \sin(2\pi f_i t + \varphi_i),

with A_i the amplitude, k_i the damping constant, f_i the natural frequency, and φ_i the phase of each component. Isotropic elastic properties can be found by IET using the above described empirical formulas for the Young's modulus E, the shear modulus G and Poisson's ratio ν. For isotropic materials the relation between strains and stresses in any point of flat sheets is given by the flexibility matrix [S] in the following expression:

\begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \gamma_{12} \end{pmatrix} = \begin{pmatrix} 1/E & -\nu/E & 0 \\ -\nu/E & 1/E & 0 \\ 0 & 0 & 1/G \end{pmatrix} \begin{pmatrix} \sigma_1 \\ \sigma_2 \\ \tau_{12} \end{pmatrix}

In this expression, ε₁ and ε₂ are normal strains in the 1- and 2-direction and γ₁₂ is the shear strain. σ₁ and σ₂ are the normal stresses and τ₁₂ is the shear stress . The orientation of the axes 1 and 2 is arbitrary. This means that the values for E, G and ν are the same in any material direction. 
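The frequency extraction step described above is straightforward to sketch: excite, record, transform with an FFT, and pick the dominant peak; the logarithmic decrement then follows from the decay constant. A minimal, self-contained Python illustration on a synthetic signal (all signal parameters are invented for the demo):

```python
import numpy as np

fs = 50_000                          # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)        # 0.5 s record

# Synthetic 'tap' response: one damped flexural mode at 1200 Hz.
f0, k = 1200.0, 8.0                  # natural frequency (Hz), damping constant (1/s)
x = np.exp(-k * t) * np.sin(2 * np.pi * f0 * t)

# Resonant frequency from the FFT peak:
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_res = freqs[np.argmax(spec)]
print(f_res)                         # ~1200 Hz

# Logarithmic decrement (decay per period) and the common Q^-1 measure:
delta = k / f_res                    # delta = k * T
Q_inv = delta / np.pi                # internal friction for light damping
print(delta, Q_inv)
```

A real instrument would fit the damped-sinusoid model from the text to the recorded signal rather than read k from a known synthetic input, but the frequency-domain peak picking is the same.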
More complex material behaviour like orthotropic material behaviour can be identified by extended IET procedures. A material is called orthotropic when the elastic properties are symmetric with respect to a rectangular Cartesian system of axes. In the case of a two-dimensional state of stress, as in thin sheets, the stress-strain relations for an orthotropic material become:

\begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \gamma_{12} \end{pmatrix} = \begin{pmatrix} 1/E_1 & -\nu_{21}/E_2 & 0 \\ -\nu_{12}/E_1 & 1/E_2 & 0 \\ 0 & 0 & 1/G_{12} \end{pmatrix} \begin{pmatrix} \sigma_1 \\ \sigma_2 \\ \tau_{12} \end{pmatrix}

E₁ and E₂ are the Young's moduli in the 1- and 2-direction and G₁₂ is the in-plane shear modulus . ν₁₂ is the major Poisson's ratio and ν₂₁ is the minor Poisson's ratio. The flexibility matrix [S] is symmetric, so that ν₂₁/E₂ = ν₁₂/E₁. The minor Poisson's ratio can hence be found if E₁ , E₂ and ν₁₂ are known. Examples of common orthotropic materials are layered uni-directionally reinforced composites with fiber directions parallel to the plate edges, layered bi-directionally reinforced composites, short fiber reinforced composites with preference directions (like wooden particle boards), plastics with preference orientation, rolled metal sheets, and more. Standard methods for the identification of the two Young's moduli E₁ and E₂ require two tensile, bending or IET tests, one on a beam cut along the 1-direction and one on a beam cut along the 2-direction. Major and minor Poisson's ratios can be identified if the transverse strains are also measured during the tensile tests. The identification of the in-plane shear modulus requires an additional in-plane shearing test. The " Resonalyser procedure " [ 7 ] [ 8 ] [ 9 ] [ 10 ] is an extension of the IET using an inverse method (also called a "mixed numerical-experimental method"). The non-destructive Resonalyser procedure allows a fast and accurate simultaneous identification of the four engineering constants E₁, E₂, G₁₂ and ν₁₂ for orthotropic materials. For the identification of the four orthotropic material constants, the first three natural frequencies of a rectangular test plate with constant thickness and the first natural frequency of two test beams with rectangular cross section must be measured. One test beam is cut along the longitudinal direction 1, the other one along the transversal direction 2. The Young's modulus of the test beams can be found using the bending IET formula for test beams with a rectangular cross section. The test plate must be cut such that the ratio of its edge lengths equals the fourth root of the ratio of the corresponding Young's moduli:

\frac{L_1}{L_2} = \left(\frac{E_1}{E_2}\right)^{1/4}.

This ratio yields a so-called "Poisson plate". The interesting property of a freely suspended Poisson plate is that the modal shapes associated with the first three resonance frequencies are fixed: the first resonance frequency is associated with a torsional modal shape, the second with a saddle modal shape and the third with a breathing modal shape. So, without the need to investigate the nature of the modal shapes, IET on a Poisson plate reveals its vibrational behaviour. The question is now how to extract the orthotropic engineering constants from the frequencies measured with IET on the beams and the Poisson plate. This problem can be solved by an inverse method (also called a "mixed numerical/experimental method" [ 11 ] ) based on a finite element (FE) computer model of the Poisson plate. 
An FE model allows computing the resonance frequencies for a given set of material properties. In an inverse method, the material properties in the finite element model are updated in such a way that the computed resonance frequencies match the measured resonance frequencies. Problems with inverse methods are the need for good starting values for the material properties, the question of whether the parameters converge to the correct physical solution, and the question of whether that solution is unique. If the Young's moduli (obtained by IET on the test beams) are fixed as non-variable parameters in the inverse method procedure and only the Poisson's ratio ν₁₂ and the in-plane shear modulus G₁₂ are taken as variable parameters in the FE model, the Resonalyser procedure satisfies these requirements.
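Conceptually, the updating loop of such an inverse method is a small least-squares problem. The sketch below uses SciPy's least_squares together with a stand-in model_frequencies function; a real implementation would call an FE modal analysis of the Poisson plate here, and every number below is invented for the illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def model_frequencies(params):
    # Stand-in for an FE modal analysis of the Poisson plate:
    # maps (nu12, G12) to the first three resonance frequencies.
    nu12, G12 = params
    # Purely illustrative smooth dependence, NOT a real plate model:
    return np.array([120.0 * np.sqrt(G12 / 5.0),
                     210.0 * (1.0 + 0.3 * nu12),
                     260.0 * (1.0 + 0.1 * nu12) * np.sqrt(G12 / 5.0)])

measured = np.array([118.0, 229.0, 270.0])   # Hz, from IET on the plate

def residuals(params):
    return model_frequencies(params) - measured

# Good starting values matter, as noted above; bounds keep the
# parameters physically plausible (nu12 in [0, 0.5], G12 in GPa).
fit = least_squares(residuals, x0=[0.25, 5.0],
                    bounds=([0.0, 0.1], [0.5, 50.0]))
nu12, G12 = fit.x
print(nu12, G12)
```

With the two Young's moduli held fixed from the beam tests, only two parameters remain free against three measured frequencies, which is what makes the update well posed in the Resonalyser setting.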
https://en.wikipedia.org/wiki/Impulse_excitation_technique
An impulse facility is a testing facility that relies on the rapid release of stored energy to generate a short period of high- enthalpy test conditions for testing of aerodynamic flow, aerodynamic heating and atmospheric reentry , combustion , chemical kinetics , ballistics , and other effects. The rapid release of energy can result in very high instantaneous energy release rates even though the total energy released is modest. The use of an impulse facility can therefore allow testing of violently energetic phenomena generating temperatures and pressures that no known materials could withstand in steady state. [ 1 ] This effect also produces short test times, however, with some types of tests in these facilities lasting less than 100 microseconds . Impulse facilities are a special case of blow-down facilities, in which an energy storage mechanism is charged over a period of time and then released to initiate a test, and must be charged again before the next test. This contrasts with continuous facilities such as wind tunnels that may run continuously. Examples of impulse facilities are the shock tube , the shock tunnel , the expansion tube , the expansion tunnel , and the Ludwieg tube .
https://en.wikipedia.org/wiki/Impulse_facility
An impulse generator is an electrical apparatus which produces very short high- voltage or high- current surges. Such devices can be classified into two types: impulse voltage generators and impulse current generators. High impulse voltages are used to test the strength of electric power equipment against lightning and switching surges. Also, steep-front impulse voltages are sometimes used in nuclear physics experiments. High impulse currents are needed not only for tests on equipment such as lightning arresters and fuses but also for many other technical applications such as lasers , thermonuclear fusion , and plasma devices. [ 1 ] In 1863 the Hungarian physicist Ányos Jedlik discovered the possibility of voltage multiplication, and in 1868 he demonstrated it with a "tubular voltage generator", which was successfully displayed at the Vienna World Exposition in 1873. [ 2 ] It was an early form of the impulse generators now applied in nuclear research. [ 3 ] The jury of the 1873 World Exhibition in Vienna awarded his cascade-connected voltage-multiplying condenser the prize "For Development". With this condenser, Jedlik established the principle of the cascade-connected surge generator. (The cascade connection was another important invention of Ányos Jedlik.) [ 4 ] [ 5 ] One form is the Marx generator , named after Erwin Otto Marx , who first proposed it in 1923. This consists of multiple capacitors that are first charged in parallel through charging resistors by a high-voltage, direct-current source and then connected in series and discharged through a test object by a simultaneous spark-over of the spark gaps . The impulse current generator comprises many capacitors that are also charged in parallel by a high-voltage, low-current, direct-current source, but it is discharged in parallel through resistances, inductances , and a test object by a spark gap. [ 6 ]
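The voltage-multiplying idea behind the Marx topology can be captured with a little arithmetic: n capacitors charged in parallel to V each erect, when switched into series, to roughly n·V. The toy calculation below ignores losses in the gaps and stray impedances, and all component values are invented:

```python
# Ideal Marx generator arithmetic (losses and stray effects ignored).
n = 12            # number of stages
V_charge = 50e3   # charging voltage per stage, volts
C_stage = 0.5e-6  # capacitance per stage, farads

V_out = n * V_charge                          # erected output: 600 kV
C_erected = C_stage / n                       # series capacitance of the bank
E_stored = 0.5 * n * C_stage * V_charge**2    # total stored energy, joules

print(V_out, C_erected, E_stored)             # 600000.0 V, ~4.2e-08 F, 7500.0 J
```

The design choice is exactly the one the text describes: charging in parallel keeps the supply voltage modest, while the simultaneous spark-over re-wires the bank in series only for the brief discharge.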
https://en.wikipedia.org/wiki/Impulse_generator
Imre Bárány (born 7 December 1947 in Mátyásföld, Budapest ) is a Hungarian mathematician , working in combinatorics and discrete geometry . He works at the Rényi Mathematical Institute of the Hungarian Academy of Sciences , and has a part-time appointment at University College London . Bárány received the Mathematical Prize (now the Paul Erdős Prize ) of the Hungarian Academy of Sciences in 1985. He was an invited speaker at the Combinatorics session of the International Congress of Mathematicians in Beijing in 2002. [ 4 ] He was an Erdős Lecturer at the Hebrew University of Jerusalem in 2004. He was elected a corresponding member (2010) and a full member (2016) of the Hungarian Academy of Sciences. [ 5 ] In 2012 he became a fellow of the American Mathematical Society . [ 6 ] Since 2021 he has been a member of the Academia Europaea . [ 7 ] He is an editor-in-chief of the journal Combinatorica , [ 8 ] an editorial board member of Mathematika [ 9 ] and the Online Journal of Analytic Combinatorics , [ 10 ] and an area editor of the journal Mathematics of Operations Research . [ 11 ]
https://en.wikipedia.org/wiki/Imre_Bárány
Imre Csiszár ( Hungarian: [ˈimrɛ ˈt͡ʃisaːr] ) is a Hungarian mathematician with contributions to information theory and probability theory . In 1996 he won the Claude E. Shannon Award , the highest annual award given in the field of information theory. He was born on 7 February 1938 in Miskolc , Hungary , and became interested in mathematics in middle school. He was inspired by his father, a forest engineer who was among the first to use mathematical techniques in his field. He studied mathematics at the Eötvös Loránd University , Budapest , receiving his diploma in 1961, his PhD in 1967 and the scientific degree Doctor of Mathematical Science in 1977. He was later influenced by Alfréd Rényi , who was very active in the area of probability theory. In 1990 he was elected a Corresponding Member of the Hungarian Academy of Sciences , and in 1995 he became a Full Member. Csiszár has been with the Mathematical Institute of the Hungarian Academy of Sciences since 1961. He has been Head of the Information Theory Group there since 1968, and is presently Head of the Stochastics Department. He is also Professor of Mathematics at the L. Eotvos University, Budapest. He has held visiting professorships at various universities, including Bielefeld University , Germany (1981), the University of Maryland, College Park (several times, most recently in 1992), Stanford University (1982), and the University of Virginia (1985–86). He was a visiting researcher at the University of Tokyo in 1988 and at NTT , Japan, in 1994. He is married and has four children. He is a Fellow of the IEEE and a member of several other learned societies, including the Bernoulli Society for Mathematical Statistics and Probability . He has received several academic awards, including the Book Excellence Award of the Hungarian Academy of Sciences for his 1981 Information Theory monograph, the 1988 Paper Award of the IEEE Information Theory Society, the 2015 IEEE Richard Hamming Medal , and the Academy Award for Interdisciplinary Research of the Hungarian Academy of Sciences in 1989.
https://en.wikipedia.org/wiki/Imre_Csiszár
Imre Lakatos ( UK : / ˈ l æ k ə t ɒ s / , [ 6 ] US : /- t oʊ s / ; Hungarian : Lakatos Imre [ˈlɒkɒtoʃ ˈimrɛ] ; 9 November 1922 – 2 February 1974) was a Hungarian philosopher of mathematics and science , known for his thesis of the fallibility of mathematics and its "methodology of proofs and refutations" in its pre-axiomatic stages of development, and also for introducing the concept of the " research programme " in his methodology of scientific research programmes. Lakatos was born Imre (Avrum) Lipsitz to a Jewish family in Debrecen , Hungary , in 1922. He received a degree in mathematics, physics , and philosophy from the University of Debrecen in 1944. In March 1944 the Germans invaded Hungary , and soon after that event Lakatos, along with Éva Révész, his then-girlfriend and subsequent wife, formed a Marxist resistance group. In May of that year, the group was joined by Éva Izsák, a 19-year-old Jewish antifascist activist. Lakatos, considering that there was a risk that she would be captured and forced to betray them, decided that her duty to the group was to commit suicide. Subsequently, a member of the group took her to Debrecen and gave her cyanide . [ 7 ] During the occupation, Lakatos avoided Nazi persecution of Jews by changing his surname to Molnár. [ 8 ] His mother and grandmother were murdered in Auschwitz . He changed his surname once again to Lakatos (Locksmith) in honor of Géza Lakatos . After the war, from 1947, he worked as a senior official in the Hungarian ministry of education. He also continued his education with a PhD at Debrecen University, awarded in 1948, and attended György Lukács 's weekly Wednesday afternoon private seminars. He also studied at Moscow State University under the supervision of Sofya Yanovskaya in 1949. When he returned, however, he found himself on the losing side of internal arguments within the Hungarian communist party and was imprisoned on charges of revisionism from 1950 to 1953. More of Lakatos's activities in Hungary after World War II have recently become known. In fact, Lakatos was a hardline Stalinist and, despite his young age, played an important role between 1945 and 1950 (the year of his own arrest and jailing) in building up Communist rule in Hungary, especially in cultural life and academia. [ 9 ] After his release, Lakatos returned to academic life, doing mathematical research and translating George Pólya 's How to Solve It into Hungarian. Still nominally a communist, his political views had shifted markedly, and he was involved with at least one dissident student group in the lead-up to the 1956 Hungarian Revolution . After the Soviet Union invaded Hungary in November 1956, Lakatos fled to Vienna and later reached England, where he lived for the rest of his life, although he never obtained British citizenship. [ 10 ] He received a PhD in philosophy in 1961 from the University of Cambridge ; his doctoral thesis was entitled Essays in the Logic of Mathematical Discovery , and his doctoral advisor was R. B. Braithwaite . The book Proofs and Refutations: The Logic of Mathematical Discovery , published after his death, is based on this work. In 1960, he was appointed to a position in the London School of Economics (LSE), where he wrote on the philosophy of mathematics and the philosophy of science . The LSE philosophy of science department at that time included Karl Popper , Joseph Agassi and J. O. Wisdom .
[ 11 ] It was Agassi who first introduced Lakatos to Popper, under the rubric of Lakatos's applying a fallibilist methodology of conjectures and refutations to mathematics in his Cambridge PhD thesis. With co-editor Alan Musgrave , he edited the often-cited Criticism and the Growth of Knowledge , the proceedings of the International Colloquium in the Philosophy of Science, London, 1965. Published in 1970, the proceedings include well-known speakers delivering papers in response to Thomas Kuhn's The Structure of Scientific Revolutions . In January 1971, he became editor of the British Journal for the Philosophy of Science , which J. O. Wisdom had built up before departing in 1965, and he continued as editor until his death in 1974, [ 12 ] after which it was edited jointly for many years by his LSE colleagues John W. N. Watkins and John Worrall , Lakatos's ex-research assistant. Lakatos and his colleague Spiro Latsis organized an international conference in Greece in 1975, which went ahead despite Lakatos's death. It was devoted entirely to historical case studies in Lakatos's methodology of research programmes in the physical sciences and economics. These case studies, of such subjects as Einstein's relativity programme, Fresnel 's wave theory of light and neoclassical economics , were published by Cambridge University Press in two separate volumes in 1976, one devoted to the physical sciences and Lakatos's general programme for rewriting the history of science, with a concluding critique by his great friend Paul Feyerabend , and the other devoted to economics. [ 13 ] Lakatos remained at LSE until his sudden death in 1974 of a heart attack [ 14 ] at the age of 51. The Lakatos Award was set up by the school in his memory. His last lectures, along with parts of his correspondence with Paul Feyerabend, have been published in For and Against Method . [ 15 ] Lakatos's philosophy of mathematics was inspired by both Hegel 's and Marx 's dialectic , by Karl Popper 's theory of knowledge, and by the work of the mathematician George Pólya . The 1976 book Proofs and Refutations is based on the first three chapters of his 1961 four-chapter doctoral thesis Essays in the Logic of Mathematical Discovery . Its first chapter, however, is Lakatos's own revision of the thesis's chapter 1, first published as Proofs and Refutations in four parts in 1963–64 in the British Journal for the Philosophy of Science . The book is largely taken up by a fictional dialogue set in a mathematics class. The students are attempting to prove the formula for the Euler characteristic in algebraic topology , which is a theorem about the properties of polyhedra , namely that for all polyhedra the number of their vertices V minus the number of their edges E plus the number of their faces F is 2 ( V − E + F = 2 ). The dialogue is meant to represent the actual series of attempted proofs that mathematicians historically offered for the conjecture , only to be repeatedly refuted by counterexamples . Often the students paraphrase famous mathematicians such as Cauchy , as noted in Lakatos's extensive footnotes. Lakatos termed the polyhedral counterexamples to Euler's formula monsters and distinguished three ways of handling these objects: Firstly, monster-barring , by which means the theorem in question could not be applied to such objects. Secondly, monster-adjustment , whereby the monster is re-appraised so that it obeys the proposed theorem.
Thirdly, exception handling , a further distinct process, in which the counterexample is admitted as a genuine exception and the theorem's domain is restricted accordingly. These distinct strategies have been taken up in qualitative physics, where the terminology of monsters has been applied to apparent counterexamples, and the techniques of monster-barring and monster-adjustment recognized as approaches to the refinement of the analysis of a physical issue. [ 16 ] What Lakatos tried to establish was that no theorem of informal mathematics is final or perfect. This means that we should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample is found, we adjust the theorem, possibly extending the domain of its validity. In this way our knowledge accumulates continuously, through the logic and process of proofs and refutations. (If axioms are given for a branch of mathematics, however, Lakatos claimed that proofs from those axioms were tautological , i.e. logically true .) [ 17 ] Lakatos proposed an account of mathematical knowledge based on the idea of heuristics . In Proofs and Refutations the concept of "heuristic" was not well developed, although Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical " thought experiments " are a valid way to discover mathematical conjectures and proofs, and sometimes called his philosophy "quasi- empiricism ". However, he also conceived of the mathematical community as carrying on a kind of dialectic to decide which mathematical proofs are valid and which are not. Therefore, he fundamentally disagreed with the " formalist " conception of proof that prevailed in Frege 's and Russell 's logicism , which defines proof simply in terms of formal validity. On its first publication as an article in the British Journal for the Philosophy of Science in 1963–64, Proofs and Refutations became highly influential on new work in the philosophy of mathematics, although few agreed with Lakatos's strong disapproval of formal proof. Before his death he had been planning to return to the philosophy of mathematics and apply his theory of research programmes to it. Lakatos, Worrall and Zahar use Poincaré (1893) [ 18 ] to answer one of the major problems perceived by critics, namely that the pattern of mathematical research depicted in Proofs and Refutations does not faithfully represent most of the actual activity of contemporary mathematicians. [ 19 ] In a 1966 text, Cauchy and the continuum , Lakatos re-examines the history of the calculus, with special regard to Augustin-Louis Cauchy and the concept of uniform convergence, in the light of non-standard analysis . Lakatos is concerned that historians of mathematics should not judge the evolution of mathematics in terms of currently fashionable theories. As an illustration, he examines Cauchy's proof that the sum of a series of continuous functions is itself continuous. Lakatos is critical of those who would see Cauchy's proof, with its failure to make explicit a suitable convergence hypothesis, merely as an inadequate approach to Weierstrassian analysis. Lakatos sees in such an approach a failure to realize that Cauchy's concept of the continuum differed from currently dominant views. Lakatos's second major contribution to the philosophy of science was his model of the "research programme", [ 20 ] which he formulated in an attempt to resolve the perceived conflict between Popper's falsificationism and the revolutionary structure of science described by Kuhn .
Popper's standard of falsificationism was widely taken to imply that a theory should be abandoned as soon as any evidence appears to challenge it, while Kuhn's descriptions of scientific activity were taken to imply that science is most fruitful during periods in which popular, or "normal", theories are supported despite known anomalies. Lakatos's model of the research programme aims to combine Popper's adherence to empirical validity with Kuhn's appreciation for conventional consistency. A Lakatosian research programme [ 21 ] is based on a hard core of theoretical assumptions that cannot be abandoned or altered without abandoning the programme altogether. More modest and specific theories that are formulated in order to explain evidence that threatens the "hard core" are termed auxiliary hypotheses . Auxiliary hypotheses are considered expendable by the adherents of the research programme—they may be altered or abandoned as empirical discoveries require in order to "protect" the "hard core". Whereas Popper was generally read as hostile toward such ad hoc theoretical amendments, Lakatos argued that they can be progressive , i.e. productive, when they enhance the programme's explanatory and/or predictive power, and that they are at least permissible until some better system of theories is devised and the research programme is replaced entirely. The difference between a progressive and a degenerative research programme lies, for Lakatos, in whether the recent changes to its auxiliary hypotheses have achieved this greater explanatory/predictive power or whether they have been made simply out of the necessity of offering some response in the face of new and troublesome evidence. A degenerative research programme indicates that a new and more progressive system of theories should be sought to replace the currently prevailing one, but until such a system of theories can be conceived of and agreed upon, abandonment of the current one would only further weaken our explanatory power and was therefore unacceptable for Lakatos. Lakatos's primary example of a research programme that had been successful in its time and then progressively replaced is that founded by Isaac Newton , with his three laws of motion forming the "hard core". The Lakatosian research programme deliberately provides a framework within which research can be conducted on the basis of "first principles" (the "hard core"), which are shared by those involved in the research programme and accepted for the purpose of that research without further proof or debate. In this regard, it is similar to Kuhn's notion of a paradigm. Lakatos sought to replace Kuhn's paradigm, guided by an irrational "psychology of discovery", with a research programme no less coherent or consistent, yet guided by Popper's objectively valid logic of discovery . Lakatos was following Pierre Duhem 's idea that one can always protect a cherished theory (or part of one) from hostile evidence by redirecting the criticism toward other theories or parts thereof. (See Confirmation holism and Duhem–Quine thesis ). This aspect of falsification had been acknowledged by Popper. Popper 's theory, falsificationism, proposed that scientists put forward theories and that nature "shouts NO" in the form of an inconsistent observation. According to Popper, it is irrational for scientists to maintain their theories in the face of nature's rejection, as Kuhn had described them doing. 
For Lakatos, however, "It is not that we propose a theory and Nature may shout NO; rather, we propose a maze of theories, and nature may shout INCONSISTENT". [ 22 ] The continued adherence to a programme's "hard core", augmented with adaptable auxiliary hypotheses, reflects Lakatos's less strict standard of falsificationism. Lakatos saw himself as merely extending Popper's ideas, which changed over time and were interpreted by many in conflicting ways. In his 1968 article "Criticism and the Methodology of Scientific Research Programmes", [ 23 ] Lakatos contrasted Popper₀ , the "naive falsificationist" who demanded unconditional rejection of any theory in the face of any anomaly (an interpretation Lakatos saw as erroneous but that he nevertheless referred to often); Popper₁ , the more nuanced and conservatively interpreted philosopher; and Popper₂ , the "sophisticated methodological falsificationist" that Lakatos claims is the logical extension of the correctly interpreted ideas of Popper₁ (and who is therefore essentially Lakatos himself). It is, therefore, very difficult to determine which ideas and arguments concerning the research programme should be credited to whom. While Lakatos dubbed his theory "sophisticated methodological falsificationism", it is not "methodological" in the strict sense of asserting universal methodological rules by which all scientific research must abide. Rather, it is methodological only in that theories are only abandoned according to a methodical progression from worse theories to better theories, a stipulation overlooked by what Lakatos terms "dogmatic falsificationism". Methodological assertions in the strict sense, pertaining to which methods are valid and which are invalid, are themselves contained within the research programmes that choose to adhere to them, and should be judged according to whether the research programmes that adhere to them prove progressive or degenerative. Lakatos divided these "methodological rules" within a research programme into its "negative heuristics", i.e., what research methods and approaches to avoid, and its "positive heuristics", i.e., what research methods and approaches to prefer. While the "negative heuristic" protects the hard core, the "positive heuristic" directs the modification of the auxiliary hypotheses in a general direction. [ 24 ] Lakatos claimed that not all changes of the auxiliary hypotheses of a research programme (which he calls "problem shifts") are equally productive or acceptable. He took the view that these "problem shifts" should be evaluated not just by their ability to defend the "hard core" by explaining apparent anomalies, but also by their ability to produce new facts, in the form of predictions or additional explanations. [ 25 ] Adjustments that accomplish nothing more than the maintenance of the "hard core" mark the research programme as degenerative. Lakatos's model provides for the possibility of a research programme that is not only continued in the presence of troublesome anomalies but that remains progressive despite them. For Lakatos, it is essentially necessary to continue on with a theory that we basically know cannot be completely true, and it is even possible to make scientific progress in doing so, as long as we remain receptive to a better research programme that may eventually be conceived of.
In this sense, it is, for Lakatos, an acknowledged misnomer to refer to "falsification" or "refutation", when it is not the truth or falsity of a theory that is solely determining whether we consider it "falsified", but also the availability of a less false theory. A theory cannot be rightfully "falsified", according to Lakatos, until it is superseded by a better (i.e. more progressive) research programme. This is what he says is happening in the historical periods Kuhn describes as revolutions and what makes them rational as opposed to mere leaps of faith or periods of deranged social psychology, as Kuhn argued. According to the demarcation criterion of pseudoscience proposed by Lakatos, a theory is pseudoscientific if it fails to make any novel predictions of previously unknown phenomena or its predictions were mostly falsified, in contrast with scientific theories, which predict novel fact(s). [ 26 ] Progressive scientific theories are those that have their novel facts confirmed, and degenerate scientific theories, which can degenerate so much that they become pseudo-science, are those whose predictions of novel facts are refuted. As he put it: Lakatos's own key examples of pseudoscience were Ptolemaic astronomy, Immanuel Velikovsky 's planetary cosmogony, Freudian psychoanalysis , 20th-century Soviet Marxism , [ 27 ] Lysenko's biology , Niels Bohr 's quantum mechanics post-1924, astrology , psychiatry , and neoclassical economics . In his 1973 Scientific Method Lecture 1 [ 28 ] at the London School of Economics, he also claimed that "nobody to date has yet found a demarcation criterion according to which Darwin can be described as scientific". Almost 20 years after Lakatos's 1973 challenge to the scientificity of Darwin , in her 1991 The Ant and the Peacock , LSE lecturer and ex-colleague of Lakatos, Helena Cronin , attempted to establish that Darwinian theory was empirically scientific in respect of at least being supported by evidence of likeness in the diversity of life forms in the world, explained by descent with modification. She wrote that our usual idea of corroboration as requiring the successful prediction of novel facts ... Darwinian theory was not strong on temporally novel predictions. ... however familiar the evidence and whatever role it played in the construction of the theory, it still confirms the theory. [ 29 ] In his 1970 article "History of Science and Its Rational Reconstructions" [ 4 ] Lakatos proposed a dialectical historiographical meta-method for evaluating different theories of scientific method, namely by means of their comparative success in explaining the actual history of science and scientific revolutions on the one hand, whilst on the other providing a historiographical framework for rationally reconstructing the history of science as anything more than merely inconsequential rambling. The article started with his now renowned dictum "Philosophy of science without history of science is empty; history of science without philosophy of science is blind". However, neither Lakatos himself nor his collaborators ever completed the first part of this dictum by showing that in any scientific revolution the great majority of the relevant scientific community converted just when Lakatos's criterion – one programme successfully predicting some novel facts whilst its competitor degenerated – was satisfied. 
Indeed, for the historical case studies in his 1968 article "Criticism and the Methodology of Scientific Research Programmes" [ 23 ] he had openly admitted as much, commenting: "In this paper it is not my purpose to go on seriously to the second stage of comparing rational reconstructions with actual history for any lack of historicity." Paul Feyerabend argued that Lakatos's methodology was not a methodology at all, but merely "words that sound like the elements of a methodology". [ 30 ] He argued that Lakatos's methodology was no different in practice from epistemological anarchism , Feyerabend's own position. He wrote in Science in a Free Society (after Lakatos's death) that: Lakatos realized and admitted that the existing standards of rationality, standards of logic included, were too restrictive and would have hindered science had they been applied with determination. He therefore permitted the scientist to violate them (he admits that science is not "rational" in the sense of these standards). However, he demanded that research programmes show certain features in the long run — they must be progressive... I have argued that this demand no longer restricts scientific practice. Any development agrees with it. [ 31 ] Lakatos and Feyerabend planned to produce a joint work in which Lakatos would develop a rationalist description of science, and Feyerabend would attack it. The correspondence between Lakatos and Feyerabend, where the two discussed the project, has since been reproduced, with commentary, by Matteo Motterlini. [ 32 ]
https://en.wikipedia.org/wiki/Imre_Lakatos
Trimethylindium , often abbreviated to TMI or TMIn , is the organoindium compound with the formula In(CH₃)₃ . It is a colorless, pyrophoric solid. [ 2 ] Unlike trimethylaluminium , but akin to trimethylgallium , TMI is monomeric. [ 3 ] TMI is prepared by the reaction of indium trichloride with methyllithium . [ 2 ] [ 4 ] Compared to trimethylaluminium and trimethylgallium , InMe₃ is a weaker Lewis acid . It forms adducts with secondary amines and phosphines . [ 5 ] With the heterocyclic triazine ligand (PrⁱNCH₂)₃ it forms a complex with six-coordinate In, where the C–In–C angles are 114°–117° and there are three long In–N bonds of 278 pm to the tridentate ligand, with N–In–N angles of 48.6°. [ 6 ] In the gaseous state InMe₃ is monomeric, with a trigonal planar structure, and in benzene solution it is tetrameric. [ 5 ] In the solid state there are two polymorphs: a tetragonal phase, obtained for example by sublimation, and a lower-density rhombohedral phase, discovered in 2005 when InMe₃ was re-crystallised from hexane solution. [ 7 ] In the tetragonal form InMe₃ is tetrameric, as in benzene solution, and there is bridging between tetramers to give an infinite network. Each indium atom is five-coordinate, in a distorted trigonal bipyramidal configuration; the three shortest bonds (ca. 216 pm) are those in the equatorial plane, with longer axial bonds of 308 pm for the In–C bonds joining the InMe₃ units to form the tetramers and 356 pm for the In–C bonds linking the tetramers into an infinite network. [ 8 ] The solid-state structures of GaMe₃ and TlMe₃ are similar. [ 8 ] The association in the solid state accounts for the high melting point of 89–89.8 °C, compared to triethylindium , which melts at −32 °C. [ 5 ] The rhombohedral form of InMe₃ consists of cyclic hexamers with 12-membered (InC)₆ rings in an extended chair conformation . The hexamers are interlinked into an infinite network. Indium atoms are five-coordinate; the equatorial In–C distances average 216.7 pm, almost identical to the average for the tetragonal form, and the axial bonds are 302.8 pm joining the InMe₃ units into hexamers and 313.4 pm linking the hexamers to form the infinite network. [ 7 ] Indium is a component of several compound semiconductors , including InP, InAs, InN , InSb , GaInAs , InGaN , AlGaInP , AlInP, and AlInGaNP. These materials are prepared by metalorganic vapour phase epitaxy ( MOVPE ), and TMI is the preferred source for the indium component. High-purity TMI (99.9999% or greater) is essential for many of these applications. For some materials, electron mobilities as high as 287,000 cm²/Vs at 77 K and 5400 cm²/Vs at 300 K are observed, and background carrier concentrations as low as 6×10¹³ cm⁻³ . [ 9 ] [ 10 ] The vapor pressure equation log P (Torr) = 10.98 − 3204/T (K) describes TMI within a wide range of MOVPE growth conditions. [ 11 ] TMI is pyrophoric . [ 12 ]
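The quoted vapor pressure correlation is straightforward to evaluate; a short sketch (the temperatures are chosen only to illustrate near-room-temperature bubbler conditions):

# Evaluate the TMI vapor-pressure correlation log10(P/Torr) = 10.98 - 3204/T.
def tmi_vapor_pressure_torr(t_kelvin):
    return 10.0 ** (10.98 - 3204.0 / t_kelvin)

for t in (290.0, 300.0, 310.0):        # illustrative temperatures, in K
    print(f"T = {t:.0f} K: P = {tmi_vapor_pressure_torr(t):.2f} Torr")

At 300 K this gives roughly 2 Torr, consistent with TMI being delivered from a solid source held near room temperature.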
https://en.wikipedia.org/wiki/In(CH3)3
An in-building cellular enhancement system , commonly implemented in conjunction with a distributed antenna system (DAS), is a telecommunications solution used to extend and distribute the cellular signal of a given mobile network operator (hereafter abbreviated MNO) within a building. In the United States, operators commonly supported by such solutions include AT&T Mobility , Verizon Wireless , Sprint Corporation and T-Mobile US , in addition to smaller regional carriers as required. Spaces below ground level, large buildings and high-rises are examples of environments where mobile phones are unable to reach the carrier's macro (outdoor) network properly. In these environments, the in-building cellular enhancement system connects to the carrier's signal source, typically a bi-directional amplifier or a base transceiver station . This signal source transmits (and receives) the mobile network operator's licensed radio frequency . This frequency is then transported within the building using coaxial cable , optical fiber or Category 5e / Category 6 twisted pair cable . In-building coverage antennas are strategically placed to provide the best overall coverage for users. A cellular enhancement system does not read or modify the information carried within the radio frequency (RF) signal that passes through the system; rather, it reinforces the penetration of voice and data frequencies in low-signal areas and in dead spots within structures. As the industry evolves, most MNO networks are now made up of 3G-based services and are migrating towards 4G-based services. In-building cellular enhancement systems designed for 2G or primarily voice-based services may not be sufficient to support 4G services, since signal strength and signal quality specifications become more stringent as applications move from a voice-centric paradigm to a high-speed data-centric paradigm. Therefore, a system designed to provide good-quality 2G services may be insufficient or unable to provide quality 4G services. Traditionally, MNO services have been delivered within two frequency ranges: the 800 MHz band and the 1900 MHz band. Additional frequency bands have been auctioned by the FCC, resulting in increased capacity for the MNOs, which are starting to implement 4G services in the 700 MHz and 2100 MHz frequency ranges. [ citation needed ] A coaxial cable-only system is typically referred to as a passive system when all system components (other than the signal source) are coaxial cable, coverage antennas and other components that do not require AC or DC power to function. A passive system is less expensive to install and is best suited for smaller buildings where one or possibly two MNOs need to be enhanced; passive systems are not usually installed in spaces over 100,000 square feet (9,300 m 2 ). Passive systems require the RF power to be balanced among all the coverage antennas so that there is uniform signal strength throughout the building. Expanding a passive system after the initial deployment could require re-engineering of the entire system to ensure proper operation throughout the building. The number of in-building antennas and the coverage area depend on the output power of the signal source. Systems that require conversion of the radio frequency into other forms, such as optical signals, use products that require AC or DC power to perform the conversion near the signal source.
Additional products located throughout the building are then used to convert the signals back into native radio frequency format, which are then transmitted through the coverage antennas. Since the equipment at both ends of the cable require AC or DC power to operate, the system is considered to be active. [ citation needed ] An active system can be deployed in large buildings and/or within a campus of buildings by converting and transporting the radio frequency over optical fiber. Many active systems have been deployed covering areas of 1,000,000 square feet (93,000 m 2 ) and larger. Active systems are best suited when there is a need to support multiple MNOs or large single buildings or campuses with multiple buildings. Expansion of an active system is usually in the form of adding more active equipment to increase the number of coverage antennas within the building, to increase the number of MNOs, or to increase the service offerings of an MNO such as adding 3G or 4G services. In a properly designed active system, no reengineering or rebalancing of the original system is required when the system is expanded. Optical fiber systems can provide coverage in areas up to 2 km from the signal source making them ideal for campus environments. An active system will always be more expensive than a passive system. [ 1 ]
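Balancing RF power among the coverage antennas of a passive system is a link-budget exercise: the power reaching each antenna is the source power minus the cable and splitter losses along its path. A minimal sketch with hypothetical loss figures (real values depend on cable type, frequency and components):

# Passive DAS link budget: subtract coax and splitter losses (dB)
# from the signal-source output power (dBm) along each antenna path.
source_dbm = 30.0                  # signal source output (hypothetical)
cable_loss_db_per_m = 0.06         # coax loss near 1900 MHz (illustrative)
cable_run_m = 80.0                 # cable length to this antenna
splitter_losses_db = [3.5, 3.5]    # two 2-way splits en route

antenna_dbm = (source_dbm
               - cable_loss_db_per_m * cable_run_m
               - sum(splitter_losses_db))
print(f"power at antenna: {antenna_dbm:.1f} dBm")   # 30 - 4.8 - 7.0 = 18.2 dBm

Repeating this calculation for every antenna path, and adjusting splits and cable runs until the results are roughly equal, is what "balancing" the system means in practice.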
https://en.wikipedia.org/wiki/In-Building_Cellular_Enhancement_System
In-methylcyclophanes are organic compounds and members of the larger family of cyclophanes . These compounds are used to study how chemical bonds in molecules adapt to strain . In-methylcyclophanes in particular have a methyl group held in close proximity to a benzene ring, which is only possible when both the methyl group and the ring are attached to the same rigid scaffold. In one in-methylcyclophane molecule this is accomplished with a triptycene frame. [ 1 ] This particular compound is synthesized starting from an anthracene with a methyl group added to each arene ring (1,8,9-trimethylanthracene). A triptycene compound is formed by reaction of this anthracene with an aryne in a Diels–Alder reaction, the aryne being generated from 2-amino-6-methylbenzoic acid with isoamyl nitrite. Next, the methyl substituents are functionalized with bromine by photochemical reaction with N -bromosuccinimide (NBS). The final cyclophane is assembled by reaction with 1,3,5-tris(mercaptomethyl)benzene, whose nucleophilic sulfhydryl groups displace the electrophilic alkyl bromides in a nucleophilic aliphatic substitution . X-ray crystallography of the tri- sulfone derivative of this cyclophane shows the methyl group located 289.6 picometers from the center of the benzene ring. The carbon-to-carbon bond linking the methyl group to the triptycene frame is actually shortened, measuring 147.5 to 149.5 pm; the corresponding bond in the triptycene precursor is 154 pm. Proton NMR analysis shows a chemical shift of 2.52 ppm for the methyl protons, compared to 3.16 to 3.85 ppm in the anthracene compound. The reason for this anomaly is that the methyl protons lie within the aromatic ring current of the benzene ring and are therefore severely shielded , an effect similar to that probed by the nucleus-independent chemical shift method of analyzing aromaticity.
https://en.wikipedia.org/wiki/In-Methylcyclophane
In-band control is a characteristic of network protocols in which control information is carried over the same connection as the main data. Protocols that use in-band control include HTTP and SMTP . This contrasts with out-of-band control, used by protocols such as FTP , where control and data travel over separate connections. SMTP is in-band because the control messages, such as "HELO" and "MAIL FROM", are sent in the same stream as the actual message content, as the example exchange below shows.
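A typical SMTP exchange, reconstructed here as an illustration (the hostnames and addresses are made up), shows control commands, server replies and message content all travelling over the same TCP connection:

S: 220 mail.example.org ESMTP ready
C: HELO client.example.com
S: 250 mail.example.org
C: MAIL FROM:<alice@example.com>
S: 250 OK
C: RCPT TO:<bob@example.org>
S: 250 OK
C: DATA
S: 354 End data with <CRLF>.<CRLF>
C: Subject: greetings
C:
C: The message body travels in the same stream as the commands above.
C: .
S: 250 OK: queued
C: QUIT
S: 221 Bye

By contrast, FTP is out-of-band: commands travel over a control connection (conventionally port 21) while file contents travel over a separate data connection.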
https://en.wikipedia.org/wiki/In-band_control
In-house software is computer software developed for business use within an organization. It can be developed by the organization itself, developed by someone else on its behalf, or acquired. [ 1 ] In-house software may later be made available for commercial use at the sole discretion of the developing organization. The need to develop such software can arise from many circumstances, such as the non-availability of suitable software on the market, the organization's capacity to develop or customize software, or the need to tailor software to the organization's specific requirements.
https://en.wikipedia.org/wiki/In-house_software
In-session phishing is a form of phishing attack which relies on one web browsing session being able to detect the presence of another session (such as a visit to an online banking website) on the same web browser , and then launching a pop-up window that pretends to have been opened from the targeted session. [ 1 ] This pop-up window, which the user now believes to be part of the targeted session, is then used to steal user data in the same way as other phishing attacks. [ 2 ] The advantage of in-session phishing to the attacker is that it does not require the targeted website to be compromised in any way, relying instead on a combination of data leakage within the web browser, the capacity of web browsers to run active content, the ability of modern web browsers to support more than one session at a time, and social engineering of the user. [ 3 ] The technique, which exploited a vulnerability in the JavaScript handling of major browsers, was found by Amit Klein, CTO of the security vendor Trusteer , Ltd. [ 4 ] [ 5 ] Subsequent security updates to browsers may have made the technique impossible.
https://en.wikipedia.org/wiki/In-session_phishing
In-situ conservation is on-site conservation: the conservation of genetic resources in natural populations of plant or animal species, such as forest genetic resources in natural populations of tree species. [ 1 ] This process protects the inhabitants and ensures the sustainability of the environment and ecosystem. Its converse is ex situ conservation , in which threatened species are moved to another location, including facilities such as seed libraries and gene banks, where they are protected through human intervention. [ 2 ] Nature reserves (or biosphere reserves) cover very large areas, often more than 5,000 km 2 (1,900 sq mi). They are used to protect species for a long time. There are three different classifications for these reserves. Strict natural areas are created to protect the state of nature in a given region; they are not made for the purpose of protecting any particular species within their limits. Managed natural areas, by contrast, are made specifically to protect a certain species or community that might be at risk in a strict natural area; this is a more controlled environment, designed to provide the most favourable habitat for the species concerned to thrive. Finally, a wilderness area serves the dual purpose of protecting the natural region and providing recreational opportunities for visitors (excluding motorized transport). [ 3 ] A national park is an area dedicated to the conservation of wildlife along with its environment, and to the preservation of scenery and of natural and historical objects. It is usually a small reserve covering an area of about 100 to 500 km 2 (40 to 200 sq mi). Within biosphere reserves, one or more national parks may also exist. Wildlife sanctuaries can provide a higher quality of life for the animals moved there. These animals are placed in specialized habitats that allow more species-specific behaviors to take place. Wildlife sanctuaries are often used for animals that have spent long periods in zoos, circuses, laboratories and the like, and that then live the rest of their lives with greater autonomy in these habitats. [ 4 ] Several international organizations focus their conservation work on areas designated as biodiversity hotspots . According to Conservation International , to qualify as a biodiversity hotspot a region must meet two strict criteria: it must contain at least 1,500 species of vascular plants as endemics, and it must have lost at least 70% of its original habitat. Biodiversity hotspots make up 1.4% of the earth's land area, yet they contain more than half of our planet's species. [ 5 ] A gene sanctuary is an area where plants are conserved; it includes both biosphere reserves and national parks. Biosphere reserves are developed to be both a place for biodiversity conservation and a place for sustainable development. The concept was first developed in the 1970s and includes core, buffer and transition zones, which act together to harmonize the conservation and development aspects of the biosphere. [ 6 ] As of 2004, some 30 years after the invention of the biosphere reserve concept, about 459 such conservation areas had been developed in 97 countries. [ 7 ] One benefit of in situ conservation is that it maintains recovering populations in the environment where they have developed their distinctive properties. Another benefit is that this strategy helps ensure the ongoing processes of evolution and adaptation within their environments.
As a last resort, ex situ conservation may be used on some or all of the population when in situ conservation is too difficult or impossible. Species also remain adapted to natural disturbances such as droughts, floods and forest fires, and the method is comparatively cheap and convenient. Wildlife and livestock conservation involves the protection of wildlife habitats. Sufficiently large reserves must be maintained to enable the target species to exist in large numbers. The population size must be sufficient to preserve the genetic diversity needed for the species to continue adapting and evolving over time. The required reserve size can be calculated for a target species by examining its population density in naturally occurring situations. The reserves must then be protected from intrusion or destruction by man, and against other catastrophes. In agriculture , in situ conservation techniques are an effective way to improve, maintain and use traditional or native varieties of agricultural crops. Such methodologies link the positive output of scientific research with farmers' experience and field work. First, the accessions of a variety stored at a germplasm bank and those of the same variety multiplied by farmers are jointly tested in the producers' fields and in the laboratory, under different situations and stresses. Thus the scientific knowledge about the production characteristics of the native varieties is enhanced. Later, the best-tested accessions are crossed, mixed, and multiplied under replicable situations. Finally, these improved accessions are supplied to the producers. Thus farmers are enabled to crop improved selections of their own varieties, instead of being lured into substituting commercial varieties for their own, or abandoning their crop. This technique of conserving agricultural biodiversity is more successful in marginal areas, where commercial varieties are not expedient due to climate and soil-fertility constraints, or where the taste and cooking characteristics of traditional varieties compensate for their lower yields. [ 8 ] About 4% of the total geographical area of India is used for in situ conservation. There are 18 biosphere reserves in India , including Nanda Devi in Uttarakhand, Nokrek in Meghalaya, Manas National Park in Assam and Sundarban in West Bengal. There are 106 national parks in India , including Kaziranga National Park , which conserves the one-horned rhino ; Periyar National Park , conserving the tiger and elephant; and Ranthambore National Park , conserving the tiger. There are 551 wildlife sanctuaries in India . Biodiversity hotspots include the Himalayas , the Western Ghats , the Indo-Burma region [ 9 ] and the Sundaland . India has set up its first gene sanctuary in the Garo Hills of Meghalaya for wild relatives of citrus. Efforts are also being made to set up gene sanctuaries for banana, sugarcane, rice and mango. Community reserves were established as a type of protected area in India in the Wildlife Protection Amendment Act 2002, to provide legal support to community or privately owned reserves which cannot be designated as a national park or wildlife sanctuary. Sacred groves are tracts of forest set aside where all the trees and wildlife within are venerated and given total protection. China has some 2,538 nature reserves, covering 15% of the country. The majority of in situ conservation areas are concentrated in the regions of Tibet , Qinghai , and Xinjiang .
These provinces, all in western China, account for about 56% of the country's nature reserves. Eastern and southern China contain 90% of the country's population, and there are few nature reserves in these areas. In these regions, nature reserves actively compete with human development projects to support a growing demand for infrastructure. One consequence of this competing development has been the movement of the South China tiger out of its natural habitat. In eastern and southern China, many undeveloped natural landscapes are fragmented; however, nature reserves may provide crucial refuge for key species and ecosystem services. [ 10 ]
https://en.wikipedia.org/wiki/In-situ_conservation
Indium(III) oxide ( In₂O₃ ) is a chemical compound , an amphoteric oxide of indium . Amorphous indium oxide is insoluble in water but soluble in acids, whereas crystalline indium oxide is insoluble in both water and acids. The crystalline form exists in two phases, the cubic ( bixbyite type) [ 1 ] and the rhombohedral ( corundum type). Both phases have a band gap of about 3 eV. [ 3 ] [ 4 ] The parameters of the cubic phase are listed in the infobox. The rhombohedral phase is produced at high temperatures and pressures or when using non-equilibrium growth methods. [ 5 ] It has space group R-3c (No. 167), Pearson symbol hR30, a = 0.5487 nm, b = 0.5487 nm, c = 1.4510 nm, Z = 6 and a calculated density of 7.31 g/cm³. [ 6 ] Thin films of chromium - doped indium oxide (In₂₋ₓCrₓO₃) are a magnetic semiconductor displaying high-temperature ferromagnetism , single- phase crystal structure, and semiconductor behavior with a high concentration of charge carriers . It has possible applications in spintronics as a material for spin injectors. [ 7 ] Thin polycrystalline films of indium oxide doped with Zn²⁺ are highly conductive (conductivity ~10⁵ S/m) and even superconductive at liquid-helium temperatures. The superconducting transition temperature T c depends on the doping and film structure and is below 3.3 K. [ 8 ] Bulk samples can be prepared by heating indium(III) hydroxide or the nitrate, carbonate or sulfate. [ 9 ] Thin films of indium oxide can be prepared by sputtering of indium targets in an argon / oxygen atmosphere. They can be used as diffusion barriers (" barrier metals ") in semiconductors , e.g. to inhibit diffusion between aluminium and silicon . [ 10 ] Monocrystalline nanowires can be synthesized from indium oxide by laser ablation, allowing precise diameter control down to 10 nm; field-effect transistors have been fabricated from them. [ 11 ] Indium oxide nanowires can also serve as sensitive and specific redox protein sensors . [ 12 ] The sol–gel method is another way to prepare nanowires. [ citation needed ] Indium oxide can serve as a semiconductor material , forming heterojunctions with p - InP , n - GaAs , n- Si , and other materials. A layer of indium oxide on a silicon substrate can be deposited from an indium trichloride solution, a method useful for the manufacture of solar cells . [ 13 ] When heated to 700 °C, indium(III) oxide forms In₂O (called indium(I) oxide or indium suboxide); at 2000 °C it decomposes. [ 9 ] It is soluble in acids but not in alkali. [ 9 ] With ammonia at high temperature, indium nitride is formed: [ 14 ] With K₂O and indium metal, the compound K₅InO₄, containing tetrahedral InO₄⁵⁻ ions, was prepared. [ 15 ] Reacting with a range of metal trioxides produces perovskites , [ 16 ] for example: Indium oxide is used in some types of batteries, in thin-film infrared reflectors that are transparent to visible light ( hot mirrors ), in some optical coatings , and in some antistatic coatings . In combination with tin dioxide , indium oxide forms indium tin oxide (also called tin-doped indium oxide or ITO), a material used for transparent conductive coatings. In semiconductors, indium oxide can be used as an n-type semiconductor , serving as a resistive element in integrated circuits . [ 17 ] In histology , indium oxide is used as a component of some stain formulations.
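The ~3 eV band gap is what lets thin indium oxide films reflect infrared while remaining transparent to visible light: visible photons carry less energy than the gap and so are not absorbed. A back-of-the-envelope check of the corresponding absorption-edge wavelength (constants rounded; a sketch, not a measured value):

# Absorption edge from the band gap: lambda = h*c / E_gap.
h_ev_s = 4.1357e-15     # Planck constant in eV*s
c_m_s = 2.9979e8        # speed of light in m/s
e_gap_ev = 3.0          # approximate band gap of In2O3

wavelength_nm = h_ev_s * c_m_s / e_gap_ev * 1e9
print(f"absorption edge ~ {wavelength_nm:.0f} nm")  # ~413 nm, at the violet
# end of the spectrum, so most of the visible range passes through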
https://en.wikipedia.org/wiki/In2O3