Latent growth modeling is a statistical technique used in the structural equation modeling (SEM) framework to estimate growth trajectories. It is a longitudinal analysis technique used to estimate growth over a period of time. It is widely used in the social sciences, including psychology and education. It is also called latent growth curve analysis. The latent growth model was derived from theories of SEM. General purpose SEM software, such as OpenMx, lavaan (both open source packages based in R), AMOS, Mplus, LISREL, or EQS, among others, may be used to estimate growth trajectories. == Background == Latent growth models represent repeated measures of dependent variables as a function of time and other measures. Such longitudinal data share the features that the same subjects are observed repeatedly over time, on the same tests (or parallel versions), and at known times. In latent growth modeling, the relative standing of an individual at each time is modeled as a function of an underlying growth process, with the best parameter values for that growth process being fitted to each individual. These models have grown in use in social and behavioral research since it was shown that they can be fitted as a restricted common factor model in the structural equation modeling framework. The methodology can be used to investigate systematic change, or growth, and inter-individual variability in this change. A special topic of interest is the correlation of the growth parameters, the so-called initial status and growth rate, as well as their relation with time-varying and time-invariant covariates. (See McArdle and Nesselroade (2003) for a comprehensive review.) Although many applications of latent growth curve models estimate only initial level and slope components, more complex models can be estimated. Models with higher-order components, e.g., quadratic or cubic, do not predict ever-increasing variance, but require more than two occasions of measurement. It is also possible to fit models based on growth curves with particular functional forms, often versions of generalised logistic growth such as the logistic, exponential or Gompertz functions. Though straightforward to fit with versatile software such as OpenMx, these more complex models cannot be fitted with SEM packages in which path coefficients must be simple constants or free parameters and cannot be functions of free parameters and data. Discontinuous models, where the growth pattern changes around a time point (for example, is different before and after an event), can also be fit in SEM software. Similar questions can also be answered using a multilevel model approach. == References == == Further reading == McArdle, J.J. (1989). Kanfer, R.; Ackerman, P. L.; Cudeck, R. (eds.). A structural modeling experiment with multiple growth functions. Abilities, motivation, and methodology: The Minnesota Symposium on Learning and Individual Differences. Lawrence Erlbaum Associates, Inc. pp. 71–117. ISBN 978-0-203-76290-5. Willett, J.B.; Sayer, A.G. (1994). "Using covariance structure analysis to detect correlates and predictors of individual change over time" (PDF). Psychological Bulletin. 116 (2): 363–381. doi:10.1037/0033-2909.116.2.363. Archived (PDF) from the original on 5 July 2023. Curran, P.J.; Stice, E.; Chassin, L. (1997). "The relation between adolescent alcohol use and peer alcohol use: A longitudinal random coefficients model" (PDF). Journal of Consulting and Clinical Psychology. 65 (1): 130–140. doi:10.1037/0022-006X.65.1.130.
Archived (PDF) from the original on 8 June 2024. Muthén, B.O.; Curran, P.J. (1997). "General longitudinal modeling of individual differences in experimental designs: A latent variable framework for analysis and power estimation" (PDF). Psychological Methods. 2 (4): 371–402. doi:10.1037/1082-989X.2.4.371. Archived (PDF) from the original on 11 April 2024. Su & Testa 2005 Bollen, K. A.; Curran, P. J. (2006). Latent curve models: A structural equation perspective. Hoboken, NJ: Wiley-Interscience. doi:10.1002/0471746096. ISBN 9780471455929. Singer, J. D.; Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press. doi:10.1093/acprof:oso/9780195152968.001.0001. ISBN 9780195152968. Fitzmaurice, G. M.; Laird, N. M.; Ware, J. W. (2004). Applied longitudinal analysis. Hoboken, NJ: Wiley. doi:10.1002/9781119513469. ISBN 9781119513469.
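As a concrete illustration of the growth process described above, here is a minimal Python sketch that simulates a linear latent growth model (each subject has a latent initial status and growth rate) and recovers the growth factors with a crude two-stage approach. All parameter values are invented for illustration, and the two-stage fit is only a stand-in for a proper SEM estimation in lavaan, OpenMx, Mplus, or similar software.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a linear latent growth process (all numbers invented for illustration):
# each subject draws a latent intercept (initial status) and slope (growth rate).
n_subjects, occasions = 200, 5
times = np.arange(occasions)                      # known, equally spaced measurement times
intercepts = rng.normal(10.0, 2.0, n_subjects)    # latent initial status
slopes = rng.normal(1.5, 0.5, n_subjects)         # latent growth rate
noise = rng.normal(0.0, 1.0, (n_subjects, occasions))
y = intercepts[:, None] + slopes[:, None] * times + noise

# Crude two-stage recovery: fit a straight line per subject, then summarise the
# individual estimates. A latent growth model would estimate the growth factor
# means, variances, and covariance simultaneously within the SEM framework.
coefs = np.array([np.polyfit(times, y_i, deg=1) for y_i in y])  # rows: [slope, intercept]
est_slopes, est_intercepts = coefs[:, 0], coefs[:, 1]

print("mean initial status :", round(est_intercepts.mean(), 2))
print("mean growth rate    :", round(est_slopes.mean(), 2))
print("intercept-slope corr:", round(np.corrcoef(est_intercepts, est_slopes)[0, 1], 2))
```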
Wikipedia/Latent_growth_modeling
In statistics, a latent class model (LCM) is a model for clustering multivariate discrete data. It assumes that the data arise from a mixture of discrete distributions, within each of which the variables are independent. It is called a latent class model because the class to which each data point belongs is unobserved, or latent. Latent class analysis (LCA) is a subset of structural equation modeling, used to find groups or subtypes of cases in multivariate categorical data. These subtypes are called "latent classes". A researcher confronted with the following situation might choose to use LCA to understand the data: imagine that symptoms a-d have been measured in a range of patients with diseases X, Y, and Z, and that disease X is associated with the presence of symptoms a, b, and c, disease Y with symptoms b, c, d, and disease Z with symptoms a, c and d. The LCA will attempt to detect the presence of the latent classes (the disease entities), which create the patterns of association among the symptoms. As in factor analysis, the LCA can also be used to classify cases according to their maximum likelihood class membership. Because the criterion for solving the LCA is to achieve latent classes within which there is no longer any association of one symptom with another (because the class is the disease which causes their association), and because the set of diseases a patient has (or the class a case is a member of) causes the symptom association, the symptoms will be "conditionally independent", i.e., conditional on class membership, they are no longer related. == Model == Within each latent class, the observed variables are statistically independent. This is an important aspect. Usually the observed variables are statistically dependent. By introducing the latent variable, independence is restored in the sense that within classes variables are independent (local independence). We then say that the association between the observed variables is explained by the classes of the latent variable (McCutcheon, 1987). In one form, the latent class model is written as $p_{i_1,i_2,\ldots,i_N} \approx \sum_{t}^{T} p_t \prod_{n}^{N} p_{i_n,t}^{n}$, where $T$ is the number of latent classes, $p_t$ are the so-called recruitment or unconditional probabilities (which should sum to one), and $p_{i_n,t}^{n}$ are the marginal or conditional probabilities. For a two-way latent class model, the form is $p_{ij} \approx \sum_{t}^{T} p_t\, p_{it}\, p_{jt}$. This two-way model is related to probabilistic latent semantic analysis and non-negative matrix factorization. The probability model used in LCA is closely related to the Naive Bayes classifier. The main difference is that in LCA, the class membership of an individual is a latent variable, whereas in Naive Bayes classifiers the class membership is an observed label. == Related methods == There are a number of methods with distinct names and uses that share a common relationship. Cluster analysis is, like LCA, used to discover taxon-like groups of cases in data. Multivariate mixture estimation (MME) is applicable to continuous data, and assumes that such data arise from a mixture of distributions: imagine a set of heights arising from a mixture of men and women.
If a multivariate mixture estimation is constrained so that measures must be uncorrelated within each distribution, it is termed latent profile analysis. Modified to handle discrete data, this constrained analysis is known as LCA. Discrete latent trait models further constrain the classes to form from segments of a single dimension: essentially allocating members to classes on that dimension; an example would be assigning cases to social classes on a dimension of ability or merit. As a practical instance, the variables could be multiple choice items of a political questionnaire. The data in this case consist of an N-way contingency table with answers to the items for a number of respondents. In this example, the latent variable refers to political opinion and the latent classes to political groups. Given group membership, the conditional probabilities specify the chance certain answers are chosen. == Application == LCA may be used in many fields, such as collaborative filtering, behavior genetics, and the evaluation of diagnostic tests. == References == Linda M. Collins; Stephanie T. Lanza (2010). Latent class and latent transition analysis for the social, behavioral, and health sciences. New York: Wiley. ISBN 978-0-470-22839-5. Allan L. McCutcheon (1987). Latent class analysis. Quantitative Applications in the Social Sciences Series No. 64. Thousand Oaks, California: SAGE Publications. ISBN 978-0-521-59451-6. Leo A. Goodman (1974). "Exploratory latent structure analysis using both identifiable and unidentifiable models". Biometrika. 61 (2): 215–231. doi:10.1093/biomet/61.2.215. Paul F. Lazarsfeld, Neil W. Henry (1968). Latent Structure Analysis. == External links == Statistical Innovations, Home Page, 2016. Website with latent class software (Latent GOLD 5.1), free demonstrations, tutorials, user guides, and publications for download. Also included: online courses, FAQs, and other related software. The Methodology Center, Latent Class Analysis, a research center at Penn State, free software, FAQ John Uebersax, Latent Class Analysis, 2006. A web-site with bibliography, software, links and FAQ for latent class analysis
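To make the local independence assumption and the class probabilities in the model above concrete, here is a minimal Python sketch of an expectation-maximization fit for a two-class latent class model with binary items. The class sizes and conditional item probabilities are invented purely for illustration; real analyses would typically use dedicated LCA software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate binary symptom/item data from a hypothetical two-class model.
# Class sizes and item probabilities below are invented for illustration only.
n, items, classes = 500, 4, 2
true_pi = np.array([0.6, 0.4])                    # latent class ("recruitment") probabilities
true_p = np.array([[0.9, 0.8, 0.7, 0.2],          # P(item = 1 | class 0)
                   [0.2, 0.3, 0.4, 0.9]])         # P(item = 1 | class 1)
z = rng.choice(classes, size=n, p=true_pi)
X = (rng.random((n, items)) < true_p[z]).astype(int)

# EM fit: within each class the items are treated as locally independent,
# which is exactly the LCA assumption described in the article.
pi = np.full(classes, 1.0 / classes)
p = rng.uniform(0.3, 0.7, size=(classes, items))
for _ in range(200):
    # E-step: posterior probability of each class for every respondent
    like = (p[None] ** X[:, None] * (1 - p[None]) ** (1 - X[:, None])).prod(axis=2)
    post = like * pi
    post /= post.sum(axis=1, keepdims=True)
    # M-step: update class sizes and conditional item probabilities
    pi = post.mean(axis=0)
    p = (post.T @ X) / post.sum(axis=0)[:, None]

print("estimated class probabilities:", np.round(pi, 2))
print("estimated conditional item probabilities:\n", np.round(p, 2))
```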
Wikipedia/Latent_class_models
Quality Control Music, LLC (also known as Quality Control or QC) is an American hip hop record label founded by Kevin "Coach K" Lee (COO) and Pierre "P" Thomas (CEO) in March 2013. The label's releases were distributed through Universal Music Group imprints Capitol Records from 2017 until 2020, and through Motown Records. Tamika Howard and Simone Mitchell are executives of the label, with Howard serving as its general manager. The label has many acts signed, including Migos, City Girls, Lil Yachty and Lil Baby. The label also has Cardi B signed under a management deal. == History == Kevin Lee and Pierre Thomas initially established Quality Control Music by hiring radio and promotion staff, while they personally ventured into publishing and management. They invested $1 million and one year into building a headquarters in the western part of Atlanta, which holds four recording studios and office spaces. In 2020, Migos brought a lawsuit against the company. On February 8, 2023, Quality Control's parent company, QC Media Holdings, was acquired by Hybe America for $300 million, with the founders maintaining control and reporting to the CEO, Scooter Braun. == Roster == === Notable current acts === Updated according to QC Music Artists. ==== Artists ==== JT Quavo Lil Baby Lil Yachty Lakeyah Layton Greene Bankroll Freddie Baby Money DC2Trill Camo! Draft Day ==== In-house producers ==== OG Parker Quay Global === Notable former acts === City Girls Offset Rich the Kid OG Maco Young Greatness Takeoff Renni Rucci Stefflon Don Duke Deuce Icewear Vezzo Gloss Up Yung Miami == Discography == === Compilation albums === === Mixtapes === === Singles === == Quality Control Sports == In 2019, Kevin Lee and Pierre Thomas established Quality Control Sports, a sports management company. Quality Control Sports manages professional basketball, football and baseball players including Deebo Samuel, Alvin Kamara, Diontae Johnson (Pittsburgh Steelers) and Jarrett Allen. == References ==
Wikipedia/Quality_Control_Music
Analytical quality control (AQC) refers to all those processes and procedures designed to ensure that the results of laboratory analysis are consistent, comparable, accurate and within specified limits of precision. Constituents submitted to the analytical laboratory must be accurately described to avoid faulty interpretations, approximations, or incorrect results. The qualitative and quantitative data generated from the laboratory can then be used for decision making. In the chemical sense, quantitative analysis refers to the measurement of the amount or concentration of an element or chemical compound in a matrix that differs from the element or compound. Fields such as industry, medicine, and law enforcement can make use of AQC. == In the laboratory == AQC processes are of particular importance in laboratories analysing environmental samples, where the concentration of chemical species present may be extremely low and close to the detection limit of the analytical method. In well-managed laboratories, AQC processes are built into the routine operations of the laboratory, often by the random introduction of known standards into the sample stream or by the use of spiked samples. Quality control begins with sample collection and ends with the reporting of data. AQC is achieved through laboratory control of analytical performance. Initial control of the complete system can be achieved through specification of laboratory services, instrumentation, glassware, reagents, solvents, and gases. However, evaluation of daily performance must be documented to ensure continual production of valid data. A check should first be done to ensure that the data produced are precise and accurate. Next, systematic daily checks such as analysing blanks, calibration standards, quality control check samples, and references must be performed to establish the reproducibility of the data. The checks help certify that the methodology is measuring what is in the sample. The quality of individual AQC efforts can be variable depending on the training, professional pride, and importance of a particular project to a particular analyst. The burden of an individual analyst originating AQC efforts can be lessened through the implementation of quality assurance programs. Through the implementation of established and routine quality assurance programs, two primary functions are fulfilled: the determination of quality, and the control of quality. By monitoring the accuracy and precision of results, the quality assurance program should increase confidence in the reliability of the reported analytical results, thereby achieving adequate AQC. == Pharmaceutical industry == Validation of analytical procedures is imperative in demonstrating that a drug substance is suitable for a particular purpose. Common validation characteristics include: accuracy, precision (repeatability and intermediate precision), specificity, detection limit, quantitation limit, linearity, range, and robustness. In cases such as changes in synthesis of the drug substance, changes in composition of the finished product, and changes in the analytical procedure, revalidation is necessary to ensure quality control. All analytical procedures should be validated. Identification tests are conducted to ensure the identity of an analyte in a sample through comparison of the sample to a reference standard using methods such as spectra, chromatographic behavior, and chemical reactivity. Impurity testing can either be a quantitative test or a limit test.
Both tests should accurately measure the purity of the sample. Quantitative tests of either the active moiety or other components of a sample can be conducted through assay procedures. Other analytical procedures such as dissolution testing or particle size determination may also need to be validated and are equally important. == Statistics == Because of the complex inter-relationship between analytical method, sample concentration, limits of detection and method precision, the management of Analytical Quality Control is undertaken using a statistical approach to determine whether the results obtained lie within an acceptable statistical envelope. == Inter-laboratory calibration == In circumstances where more than one laboratory is analysing samples and feeding data into a large programme of work such as the Harmonised monitoring scheme in the UK, AQC can also be applied to validate one laboratory against another. In such cases the work may be referred to as inter-laboratory calibration. == References ==
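As an illustration of the statistical approach mentioned in the Statistics section above, the following Python sketch checks whether new quality-control sample results fall inside a simple statistical envelope (here, the mean plus or minus three standard deviations of historical control results). The measurement values and the 3-sigma rule are illustrative assumptions, not a prescribed AQC procedure.

```python
import numpy as np

# Historical results for a control standard of known concentration (made-up values, mg/L).
historical = np.array([5.02, 4.97, 5.05, 4.99, 5.01, 4.95, 5.03, 5.00, 4.98, 5.04])
mean, sd = historical.mean(), historical.std(ddof=1)
lower, upper = mean - 3 * sd, mean + 3 * sd   # illustrative 3-sigma acceptance envelope

# New control results from today's analytical run (also invented).
todays_controls = np.array([5.01, 5.08, 4.76])
for value in todays_controls:
    status = "within limits" if lower <= value <= upper else "OUT OF CONTROL - investigate"
    print(f"control result {value:.2f} mg/L: {status} (envelope {lower:.2f} to {upper:.2f})")
```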
Wikipedia/Analytical_quality_control
A test method is a method for a test in science or engineering, such as a physical test, chemical test, or statistical test. It is a specified procedure that produces a test result. To ensure accurate and relevant results, a test method should be "explicit, unambiguous, and experimentally feasible", as well as effective and reproducible. A test is an observation or experiment that determines one or more characteristics of a given sample, product, process, or service, with the purpose of comparing the test result to expected or desired results. The results can be qualitative (yes/no), quantitative (a measured value), or categorical and can be derived from personal observation or the output of a precision measuring instrument. Usually the test result is the dependent variable, the measured response based on the particular conditions of the test defined by the value of the independent variable. Some tests may involve changing the independent variable to determine the level at which a certain response occurs: in this case, the test result is the independent variable. == Importance == In software development, engineering, science, manufacturing, and business, developers, researchers, manufacturers, and related personnel must understand and agree upon methods of obtaining data and making measurements. It is common for a physical property to be strongly affected by the precise method of testing or measuring that property. As such, fully documenting experiments and measurements while providing needed documentation and descriptions of specifications, contracts, and test methods is vital. Using a standardized test method, perhaps published by a respected standards organization, is a good place to start. Sometimes it is more useful to modify an existing test method or to develop a new one, though such home-grown test methods should be validated and, in certain cases, demonstrate technical equivalency to primary, standardized methods. Again, documentation and full disclosure are necessary. A well-written test method is important. However, even more important is choosing a method of measuring the correct property or characteristic. Not all tests and measurements are equally useful: usually a test result is used to predict or imply suitability for a certain purpose. For example, if a manufactured item has several components, test methods may have several levels of connections: test results of a raw material should connect with tests of a component made from that material; test results of a component should connect with performance testing of a complete item; and results of laboratory performance testing should connect with field performance. These connections or correlations may be based on published literature, engineering studies, or formal programs such as quality function deployment. Validation of the suitability of the test method is often required. == Content == Quality management systems usually require full documentation of the procedures used in a test. The document for a test method might include: descriptive title; scope over which class(es) of items, policies, etc.
may be evaluated; date of last effective revision and revision designation; reference to the most recent test method validation; person, office, or agency responsible for questions on the test method, updates, and deviations; significance or importance of the test method and its intended use; terminology and definitions to clarify the meanings of the test method; types of apparatus and measuring instrument (sometimes the specific device) required to conduct the test; sampling procedures (how samples are to be obtained and prepared, as well as the sample size); safety precautions; required calibrations and metrology systems; natural environment concerns and considerations; testing environment concerns and considerations; detailed procedures for conducting the test; calculation and analysis of data; interpretation of data and test method output; and report format, content, data, etc. == Validation == Test methods are often scrutinized for their validity, applicability, and accuracy. It is very important that the scope of the test method be clearly defined, and that any aspect included in the scope be shown to be accurate and repeatable through validation. Test method validations often encompass the following considerations: accuracy and precision (demonstration of accuracy may require the creation of a reference value if none is yet available); repeatability and reproducibility, sometimes in the form of a Gauge R&R; range, or a continuum scale over which the test method would be considered accurate (e.g., 10 N to 100 N force test); measurement resolution, be it spatial, temporal, or otherwise; curve fitting, typically for linearity, which justifies interpolation between calibrated reference points; robustness, or the insensitivity to potentially subtle variables in the test environment or setup which may be difficult to control; usefulness to predict end-use characteristics and performance; measurement uncertainty; interlaboratory or round robin tests; and other types of measurement systems analysis. == See also == == References == === General references, books === Pyzdek, T, "Quality Engineering Handbook", 2003, ISBN 0-8247-4614-7 Godfrey, A. B., "Juran's Quality Handbook", 1999, ISBN 007034003X Kimothi, S. K., "The Uncertainty of Measurements: Physical and Chemical Metrology: Impact and Analysis", 2002, ISBN 0-87389-535-5 === Related standards === ASTM E177 Standard Practice for Use of the Terms Precision and Bias in ASTM Test Methods ASTM E691 Standard Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method ASTM E1488 Standard Guide for Statistical Procedures to Use in Developing and Applying Test Methods ASTM E2282 Standard Guide for Defining the Test Result of a Test Method ASTM E2655 Standard Guide for Reporting Uncertainty of Test Results and Use of the Term Measurement Uncertainty in ASTM Test Methods
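As one small example of the validation characteristics listed above, the following Python sketch checks linearity over a calibration range by fitting a straight line to reference standards and reporting the coefficient of determination. The reference values, instrument readings, and acceptance threshold are hypothetical and serve only to illustrate the kind of check a validation might document.

```python
import numpy as np

# Instrument responses measured against certified reference standards (invented data).
reference = np.array([10.0, 25.0, 50.0, 75.0, 100.0])   # e.g., known force in newtons
response = np.array([10.2, 24.8, 50.5, 74.6, 100.3])    # corresponding instrument readings

# Fit a calibration line and compute R^2 as a simple linearity check.
slope, intercept = np.polyfit(reference, response, deg=1)
predicted = slope * reference + intercept
ss_res = np.sum((response - predicted) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"calibration line: response = {slope:.3f} * reference + {intercept:.3f}")
print(f"R^2 = {r_squared:.5f}")
# Hypothetical acceptance criterion for this illustration only.
print("linearity acceptable" if r_squared >= 0.999 else "linearity questionable")
```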
Wikipedia/Test_method
Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical methods to monitor and control the quality of a production process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste and scrap. SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. An example of a process to which SPC is applied is a manufacturing line. SPC must be practiced in two phases: the first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, a decision on the period to be examined must be made, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and the wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures). An advantage of SPC over other methods of quality control, such as "inspection", is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred. In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped. == History == Statistical process control was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson also in 1924 in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig, he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George D. Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II. W. Edwards Deming invited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture and served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939), which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, the American Society for Quality Control, which elected Edwards as its first president. Deming travelled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry. === 'Common' and 'special' sources of variation === Shewhart read the new statistical theories coming out of Britain, especially the work of William Sealy Gosset, Karl Pearson, and Ronald Fisher. However, he understood that data from physical processes seldom produced a normal distribution curve (that is, a Gaussian distribution or 'bell curve').
He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example, Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as being in (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described as not in control. === Application to non-manufacturing processes === Statistical process control is appropriate to support any repetitive process, and has been implemented in many settings where for example ISO 9000 quality management systems are used, including financial auditing and accounting, IT operations, health care processes, and clerical processes such as loan arrangement and administration, customer billing etc. Despite criticism of its use in design and development, it is well-placed to manage semi-automated data governance of high-volume data processing operations, for example in an enterprise data warehouse, or an enterprise data quality management system. In the 1988 Capability Maturity Model (CMM), the Software Engineering Institute suggested that SPC could be applied to software engineering processes. The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept. The application of SPC to non-repetitive, knowledge-intensive processes, such as research and development or systems engineering, has encountered skepticism and remains controversial. In No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software result in inherent and essential variation that cannot be removed. This implies that SPC is less effective in software development than in, e.g., manufacturing. == Variation in manufacturing == In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass-manufacturing, traditionally, the quality of a finished article is ensured by post-manufacturing inspection of the product: each article (or a sample of articles from a production lot) may be accepted or rejected according to how well it meets its design specifications. By contrast, SPC uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article. Any source of variation at any point of time in a process will fall into one of two classes. (1) Common causes: 'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. Causes of this type collectively produce a statistically stable and repeatable distribution over time. (2) Special causes: 'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable. Most processes have many sources of variation; most of them are minor and may be ignored.
If the dominant assignable sources of variation are detected, they can potentially be identified and removed. When they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits. That is, at least, until another assignable source of variation occurs. For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights. If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced). From an SPC perspective, if the weight of each cereal box varies randomly, some higher and some lower, always within an acceptable range, then the process is considered stable. If the cams and pulleys of the machinery start to wear out, the weights of the cereal boxes might not be random. The degraded functionality of the cams and pulleys may lead to a non-random linear pattern of increasing cereal box weights. We call this common cause variation. If, however, all the cereal boxes suddenly weighed much more than average because of an unexpected malfunction of the cams and pulleys, this would be considered a special cause variation. == Application == The application of SPC involves three main phases of activity: understanding the process and the specification limits; eliminating assignable (special) sources of variation, so that the process is stable; and monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation. The proper implementation of SPC has been limited, in part due to a lack of statistical expertise at many organizations. === Control charts === The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time. ==== Stable process ==== When the process does not trigger any of the control chart "detection rules", it is said to be "stable". A process capability analysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future. A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index. ==== Excessive variations ==== When the process triggers any of the control chart "detection rules" (or, alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation.
The tools used in these extra activities include: Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminating a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs. ==== Process stability metrics ==== When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described in Ramirez and Runger. They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups. == Mathematics of control charts == Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example, derived value = last value + average absolute difference between the last N numbers. == See also == ANOVA Gauge R&R Distribution-free control chart Electronic design automation Industrial engineering Process Window Index Process capability index Quality assurance Reliability engineering Six sigma Stochastic control Total quality management == References == == Bibliography == == External links == MIT Course - Control of Manufacturing Processes Guthrie, William F. (2012). "NIST/SEMATECH e-Handbook of Statistical Methods". National Institute of Standards and Technology. doi:10.18434/M32189.
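To illustrate the control-chart logic described in this article, here is a minimal Python sketch that establishes X-bar chart limits from subgroups collected while the process is believed stable and then flags later subgroup means that fall outside the three-sigma limits (the simplest detection rule). The fill weights, subgroup sizes, and the way the spread of the subgroup means is estimated are simplifications invented for the cereal-box example, not a full chart-construction procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Phase 1: establish the chart from subgroups taken while the process is believed stable.
# Target fill weight is 500 g; all numbers are simulated for illustration.
phase1 = rng.normal(500.0, 2.0, size=(25, 5))          # 25 subgroups of 5 boxes each
subgroup_means = phase1.mean(axis=1)
center = subgroup_means.mean()
sigma_xbar = subgroup_means.std(ddof=1)                 # simplified estimate of the mean's spread
ucl, lcl = center + 3 * sigma_xbar, center - 3 * sigma_xbar

# Phase 2: monitor new production, including a simulated upward shift (special cause) at the end.
phase2 = np.vstack([rng.normal(500.0, 2.0, size=(8, 5)),
                    rng.normal(506.0, 2.0, size=(2, 5))])
for i, mean in enumerate(phase2.mean(axis=1), start=1):
    flag = "OK" if lcl <= mean <= ucl else "signal: investigate assignable cause"
    print(f"subgroup {i:2d}: mean = {mean:6.2f} g  [{flag}]")
print(f"center = {center:.2f} g, LCL = {lcl:.2f} g, UCL = {ucl:.2f} g")
```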
Wikipedia/Statistical_Quality_Control
The Delphi method or Delphi technique ( DEL-fy; also known as Estimate-Talk-Estimate or ETE) is a structured communication technique or method, originally developed as a systematic, interactive forecasting method that relies on a panel of experts. Delphi has been widely used for business forecasting and has certain advantages over another structured forecasting approach, prediction markets. Delphi can also be used to help reach expert consensus and develop professional guidelines. It is used for such purposes in many health-related fields, including clinical medicine, public health, and research. Delphi is based on the principle that forecasts (or decisions) from a structured group of individuals are more accurate than those from unstructured groups. The experts answer questionnaires in two or more rounds. After each round, a facilitator or change agent provides an anonymised summary of the experts' forecasts from the previous round as well as the reasons they provided for their judgments. Thus, experts are encouraged to revise their earlier answers in light of the replies of other members of their panel. It is believed that during this process the range of the answers will decrease and the group will converge towards the "correct" answer. Finally, the process is stopped after a predefined stopping criterion (e.g., number of rounds, achievement of consensus, stability of results), and the mean or median scores of the final rounds determine the results. Special attention has to be paid to the formulation of the Delphi theses and the definition and selection of the experts in order to avoid methodological weaknesses that severely threaten the validity and reliability of the results. Care must be taken to ensure that the participants have the requisite expertise and that more domineering participants do not overwhelm weaker-willed participants; the first group tends to be less inclined to change their minds, while the second group is more motivated to fit in, and either tendency can be a barrier to reaching true consensus. == History == The name Delphi derives from the Oracle of Delphi, although the authors of the method were unhappy with the oracular connotation of the name, "smacking a little of the occult". The Delphi method assumes that group judgments are more valid than individual judgments. The Delphi method was developed at the beginning of the Cold War to forecast the impact of technology on warfare. In 1944, General Henry H. Arnold ordered the creation of a report for the U.S. Army Air Corps on the future technological capabilities that might be used by the military. Different approaches were tried, but the shortcomings of traditional forecasting methods, such as the theoretical approach, quantitative models, or trend extrapolation, quickly became apparent in areas where precise scientific laws had not been established yet. To combat these shortcomings, the Delphi method was developed by Project RAND during the 1950-1960s (1959) by Olaf Helmer, Norman Dalkey, and Nicholas Rescher. It has been used ever since, together with various modifications and reformulations, such as the Imen-Delphi procedure. Experts were asked to give their opinion on the probability, frequency, and intensity of possible enemy attacks. Other experts could anonymously give feedback. This process was repeated several times until a consensus emerged. In 2021, a cross-disciplinary study by Beiderbeck et al. focused on new directions and advancements of the Delphi method, including Real-time Delphi formats.
The authors provide a methodological toolbox for designing Delphi surveys, including, among other things, sentiment analyses drawn from the field of psychology. == Key characteristics == The following key characteristics of the Delphi method help the participants to focus on the issues at hand and separate Delphi from other methodologies: a panel of experts is drawn from both inside and outside the organisation; the panel consists of experts having knowledge of the area requiring decision making; and each expert is asked to make anonymous predictions. === Anonymity of the participants === Usually all participants remain anonymous. Their identity is not revealed, even after the completion of the final report. This prevents the authority, personality, or reputation of some participants from dominating others in the process. Arguably, it also frees participants (to some extent) from their personal biases, minimizes the "bandwagon effect" or "halo effect", allows free expression of opinions, encourages open critique, and facilitates admission of errors when revising earlier judgments. === Structuring of information flow === The initial contributions from the experts are collected in the form of answers to questionnaires and their comments to these answers. The panel director controls the interactions among the participants by processing the information and filtering out irrelevant content. This avoids the negative effects of face-to-face panel discussions and solves the usual problems of group dynamics. === Regular feedback === The Delphi method allows participants to comment on the responses of others and on the progress of the panel as a whole, and to revise their own forecasts and opinions in real time. === Role of the facilitator === The person coordinating the Delphi method is usually known as a facilitator or leader, and facilitates the responses of their panel of experts, who are selected for a reason, usually that they hold relevant knowledge of the topic or an informed view. The facilitator sends out questionnaires, surveys, etc., and if the panel of experts accepts, they follow the instructions and present their views. Responses are collected and analyzed, then common and conflicting viewpoints are identified. If consensus is not reached, the process continues through thesis and antithesis, gradually working towards synthesis and building consensus. During the past decades, facilitators have used many different measures and thresholds to measure the degree of consensus or dissent. A comprehensive literature review and summary is compiled in an article by von der Gracht. == Applications == === Use in forecasting === The first applications of the Delphi method were in the field of science and technology forecasting. The objective of the method was to combine expert opinions on the likelihood and expected development time of a particular technology into a single indicator. One of the first such reports, prepared in 1964 by Gordon and Helmer, assessed the direction of long-term trends in science and technology development, covering such topics as scientific breakthroughs, population control, automation, space progress, war prevention and weapon systems. Other forecasts of technology dealt with vehicle-highway systems, industrial robots, intelligent internet, broadband connections, and technology in education. Later the Delphi method was applied in other areas, especially those related to public policy issues, such as economic trends, health and education.
It was also applied successfully and with high accuracy in business forecasting. For example, in one case reported by Basu and Schroeder (1977), the Delphi method predicted the sales of a new product during the first two years with an inaccuracy of 3–4% compared with actual sales. Quantitative methods produced errors of 10–15%, and traditional unstructured forecast methods had errors of about 20%. (This is only one example; the overall accuracy of the technique is mixed.) The Delphi method has also been used as a tool to implement multi-stakeholder approaches for participative policy-making in developing countries. The governments of Latin America and the Caribbean have successfully used the Delphi method as an open-ended public-private sector approach to identify the most urgent challenges for their regional ICT-for-development eLAC Action Plans. As a result, governments have widely acknowledged the value of collective intelligence from civil society, academic and private sector participants of the Delphi, especially in a field of rapid change, such as technology policies. === Use in patent participation identification === In the early 1980s, Jackie Awerman of Jackie Awerman Associates, Inc. designed a modified Delphi method for identifying the roles of various contributors to the creation of a patent-eligible product. (Epsilon Corporation, Chemical Vapor Deposition Reactor) The results were then used by patent attorneys to determine bonus distribution percentages to the general satisfaction of all team members. === Use in policy-making === From the 1970s, the use of the Delphi technique in public policy-making introduced a number of methodological innovations. In particular: the need to examine several types of items (not only forecasting items but, typically, issue items, goal items, and option items) leads to the introduction of different evaluation scales which are not used in the standard Delphi. These often include desirability, feasibility (technical and political) and probability, which the analysts can use to outline different scenarios: the desired scenario (from desirability), the potential scenario (from feasibility) and the expected scenario (from probability); the complexity of issues posed in public policy-making tends to increase the weighting of panelists' arguments, such as soliciting pros and cons for each item along with new items for panel consideration; likewise, methods measuring panel evaluations tend to increase in sophistication, such as multi-dimensional scaling. Further innovations come from the use of computer-based (and later web-based) Delphi conferences. According to Turoff and Hiltz, in computer-based Delphis: the iteration structure used in the paper Delphis, which is divided into three or more discrete rounds, can be replaced by a process of continuous (roundless) interaction, enabling panelists to change their evaluations at any time; the statistical group response can be updated in real time and shown whenever a panelist provides a new evaluation. According to Bolognini, web-based Delphis offer two further possibilities, relevant in the context of interactive policy-making and e-democracy. These are: the involvement of a large number of participants, and the use of two or more panels representing different groups (such as policy-makers, experts, citizens), to which the administrator can give tasks reflecting their diverse roles and expertise and which can be made to interact within ad hoc communication structures.
For example, the policy community members (policy-makers and experts) may interact as part of the main conference panel, while they receive inputs from a virtual community (citizens, associations etc.) involved in a side conference. These web-based variable communication structures, which he calls Hyperdelphi (HD), are designed to make Delphi conferences "more fluid and adapted to the hypertextual and interactive nature of digital communication". One successful example of a (partially) web-based policy Delphi is the five-round Delphi exercise (with 1,454 contributions) for the creation of the eLAC Action Plans in Latin America. It is believed to be the most extensive online participatory policy-making foresight exercise in the history of intergovernmental processes in the developing world at this time. In addition to the specific policy guidance provided, the authors list the following lessons learned: "(1) the potential of Policy Delphi methods to introduce transparency and accountability into public decision-making, especially in developing countries; (2) the utility of foresight exercises to foster multi-agency networking in the development community; (3) the usefulness of embedding foresight exercises into established mechanisms of representative democracy and international multilateralism, such as the United Nations; (4) the potential of online tools to facilitate participation in resource-scarce developing countries; and (5) the resource-efficiency stemming from the scale of international foresight exercises, and therefore its adequacy for resource-scarce regions." === Use in health settings === The Delphi technique is widely used to help reach expert consensus in health-related settings. For example, it is frequently employed in the development of medical guidelines and protocols. ==== Public health ==== Some examples of its application in public health contexts include non-alcoholic fatty liver disease, iodine deficiency disorders, building responsive health systems for communities affected by migration, the role of health systems in advancing well-being for those living with HIV, on policies and interventions to reduce harmful gambling, on the regulation of electronic cigarettes and on recommendations to end the COVID-19 pandemic. ==== Reporting guidelines ==== Use of the Delphi method in the development of guidelines for the reporting of health research is recommended, especially for experienced developers. Since this advice was made in 2010, two systematic reviews have found that fewer than 30% of published reporting guidelines incorporated Delphi methods into the development process. === Online Delphi systems === A number of Delphi forecasts are conducted using web sites that allow the process to be conducted in real-time. For instance, the TechCast Project uses a panel of 100 experts worldwide to forecast breakthroughs in all fields of science and technology. Another example is the Horizon Project, where educational futurists collaborate online using the Delphi method to come up with the technological advancements to look out for in education for the next few years. == Variations == Traditionally the Delphi method has aimed at a consensus of the most probable future by iteration. Other versions, such as the Policy Delphi, offer decision support methods aiming at structuring and discussing the diverse views of the preferred future. 
In Europe, more recent web-based experiments have used the Delphi method as a communication technique for interactive decision-making and e-democracy. The Argument Delphi, developed by Osmo Kuusi, focuses on ongoing discussion and finding relevant arguments rather than focusing on the output. The Disaggregative Policy Delphi, developed by Petri Tapio, uses cluster analysis as a systematic tool to construct various scenarios of the future in the latest Delphi round. The respondents' views on the probable and the preferable future are dealt with as separate cases. The computerization of the Argument Delphi is relatively difficult because of several problems, such as argument resolution, argument aggregation and argument evaluation. A computerization of the Argument Delphi, developed by Sadi Evren Seker, proposes solutions to such problems. A fast-track Delphi was developed to provide consensual expert opinion on the state of scientific knowledge in public health crises. It can provide results within three weeks, while the conventional Delphi can take several months (sometimes years). == Accuracy == Today the Delphi method is a widely accepted forecasting tool and has been used successfully in thousands of studies in areas ranging from technology forecasting to drug abuse. Overall the track record of the Delphi method is mixed. There have been many cases when the method produced poor results. Still, some authors attribute this to poor application of the method and not to the weaknesses of the method itself. The RAND Methodological Guidance for Conducting and Critically Appraising Delphi Panels is a manual for conducting Delphi research that provides guidance on best practices and offers an appraisal tool; it helps researchers avoid or mitigate potential drawbacks of Delphi method research and to understand the confidence that can be placed in study results. It must also be realized that in areas such as science and technology forecasting, the degree of uncertainty is so great that exact and always correct predictions are impossible, so a high degree of error is to be expected. An important challenge for the method is ensuring that panelists are sufficiently knowledgeable. If panelists are misinformed about a topic, the use of Delphi may only add confidence to their ignorance. One of the initial problems of the method was its inability to make complex forecasts with multiple factors. Potential future outcomes were usually considered as if they had no effect on each other. Later on, several extensions to the Delphi method were developed to address this problem, such as cross impact analysis, which takes into consideration the possibility that the occurrence of one event may change the probabilities of other events covered in the survey. Still, the Delphi method can be used most successfully in forecasting single scalar indicators. === Delphi vs. prediction markets === Delphi has characteristics similar to prediction markets as both are structured approaches that aggregate diverse opinions from groups. Yet, there are differences that may be decisive for their relative applicability for different problems. Some advantages of prediction markets derive from the possibility of providing incentives for participation. They can motivate people to participate over a long period of time and to reveal their true beliefs. They aggregate information automatically and instantly incorporate new information in the forecast.
Participants do not have to be selected and recruited manually by a facilitator. They themselves decide whether to participate if they think their private information is not yet incorporated in the forecast. Delphi seems to have these advantages over prediction markets: participants reveal their reasoning; it is easier to maintain confidentiality; forecasts are potentially quicker if experts are readily available; and Delphi is applicable in situations where the bets involved might affect the value of the currency used in bets (e.g. a bet on the collapse of the dollar made in dollars might have distorted odds). More recent research has also focused on combining both the Delphi technique and prediction markets. More specifically, in a research study at Deutsche Börse, elements of the Delphi method were integrated into a prediction market. == See also == Computer supported brainstorming DARPA's Policy Analysis Market Horizon scanning Nominal group technique Planning poker Reference class forecasting Wideband delphi The Wisdom of Crowds == References == == Further reading == == External links == RAND publications on the Delphi Method Downloadable documents from RAND concerning applications of the Delphi Technique.
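The iterative estimate-feedback-revise cycle described in this article can be mocked up in a few lines of code. The following Python sketch simulates experts revising numeric forecasts toward the panel median over several rounds and stops when the interquartile range falls below a threshold; the panel values, the revision behaviour, and the stopping rule are invented purely to illustrate the mechanics, not to model how real experts respond.

```python
import numpy as np

rng = np.random.default_rng(3)

# Round 1: anonymous numeric forecasts from a hypothetical 12-expert panel.
estimates = np.array([12.0, 15.0, 9.0, 20.0, 14.0, 11.0, 13.0, 25.0, 10.0, 16.0, 14.0, 18.0])
max_rounds, iqr_threshold = 5, 2.0   # illustrative stopping criteria

for round_no in range(1, max_rounds + 1):
    median = np.median(estimates)
    q1, q3 = np.percentile(estimates, [25, 75])
    print(f"round {round_no}: median = {median:.1f}, IQR = {q3 - q1:.1f}")
    if q3 - q1 < iqr_threshold:       # consensus/stability criterion reached
        break
    # Facilitator feeds back the anonymised summary; each expert moves partway
    # toward the group median, with some residual disagreement (a toy model of revision).
    estimates = estimates + 0.5 * (median - estimates) + rng.normal(0, 0.5, estimates.size)

print(f"final group estimate (median of last round): {np.median(estimates):.1f}")
```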
Wikipedia/Delphi_method
Survey Methodology (or Techniques d'enquête in the French version) is a peer-reviewed open access scientific journal that publishes papers related to the development and application of survey techniques. It is published by Statistics Canada, the national statistical office of Canada, in English and French. The journal started publishing in 1975, publishes two issues each year, and is available open access online in HTML and PDF formats. The print version has been discontinued. As of 2021, the editor-in-chief is Jean-François Beaumont, senior statistical advisor at Statistics Canada. == Abstracting and indexing == Survey Methodology is indexed in the following services: Current Index to Statistics, Science Citation Index Expanded, and Social Sciences Citation Index. == References == == External links == Official website
Wikipedia/Survey_Methodology
Repeated measures design is a research design that involves multiple measures of the same variable taken on the same or matched subjects either under different conditions or over two or more time periods. For instance, repeated measurements are collected in a longitudinal study in which change over time is assessed. == Crossover studies == A popular repeated-measures design is the crossover study. A crossover study is a longitudinal study in which subjects receive a sequence of different treatments (or exposures). While crossover studies can be observational studies, many important crossover studies are controlled experiments. Crossover designs are common for experiments in many scientific disciplines, for example psychology, education, pharmaceutical science, and health care, especially medicine. Randomized, controlled, crossover experiments are especially important in health care. In a randomized clinical trial, the subjects are randomly assigned treatments. When such a trial is a repeated measures design, the subjects are randomly assigned to a sequence of treatments. A crossover clinical trial is a repeated-measures design in which each patient is randomly assigned to a sequence of treatments, including at least two treatments (of which one may be a standard treatment or a placebo); thus each patient crosses over from one treatment to another. Nearly all crossover designs have "balance", which means that all subjects should receive the same number of treatments and that all subjects participate for the same number of periods. In most crossover trials, each subject receives all treatments. However, many repeated-measures designs are not crossovers: the longitudinal study of the sequential effects of repeated treatments need not use any "crossover", for example (Vonesh & Chinchilli; Jones & Kenward). == Uses == Limited number of participants—The repeated measures design reduces the variance of estimates of treatment effects, allowing statistical inference to be made with fewer subjects. Efficiency—Repeated measures designs allow many experiments to be completed more quickly, as fewer groups need to be trained to complete an entire experiment. For example, consider experiments in which each condition takes only a few minutes, whereas the training required to complete the tasks takes as much time, if not more. Longitudinal analysis—Repeated measures designs allow researchers to monitor how participants change over time, in both long- and short-term situations. == Order effects == Order effects may occur when a participant in an experiment performs a task and then performs it again. Examples of order effects include improvement or decline in performance, which may be due to learning effects, boredom or fatigue. The impact of order effects may be reduced in long-term longitudinal studies or by counterbalancing using a crossover design. == Counterbalancing == In this technique, two groups each perform the same tasks or experience the same conditions, but in reverse order. With two tasks or conditions, four groups are formed. Counterbalancing attempts to take account of two important sources of systematic variation in this type of design: practice and boredom effects. Both might otherwise lead to different performance of participants due to familiarity with, or tiredness from, the treatments. == Limitations == It may not be possible for each participant to be in all conditions of the experiment (e.g. due to time constraints, the location of the experiment, etc.).
Severely diseased subjects tend to drop out of longitudinal studies, potentially biasing the results. In these cases mixed effects models would be preferable as they can deal with missing values. Regression toward the mean may affect conditions with many repetitions. Maturation may affect studies that extend over time. Events outside the experiment may change the response between repetitions. == Repeated measures ANOVA == Repeated measures analysis of variance (rANOVA) is a commonly used statistical approach to repeated measures designs. With such designs, the repeated-measures factor (the qualitative independent variable) is the within-subjects factor, while the quantitative variable on which each participant is measured is the dependent variable. === Partitioning of error === One of the greatest advantages of rANOVA, as is the case with repeated measures designs in general, is the ability to partition out variability due to individual differences. Consider the general structure of the F-statistic: F = MSTreatment / MSError = (SSTreatment/dfTreatment)/(SSError/dfError) In a between-subjects design there is an element of variance due to individual differences that is combined with the treatment and error terms: SSTotal = SSTreatment + SSError dfTotal = n − 1 In a repeated measures design it is possible to partition subject variability from the treatment and error terms. In such a case, variability can be broken down into between-treatments variability (or within-subjects effects, excluding individual differences) and within-treatments variability. The within-treatments variability can be further partitioned into between-subjects variability (individual differences) and error (excluding the individual differences): SSTotal = SSTreatment (excluding individual differences) + SSSubjects + SSError dfTotal = dfTreatment (within subjects) + dfbetween subjects + dferror = (k − 1) + (s − 1) + ((k − 1)(s − 1)) = ks − 1 = n − 1, where k is the number of time levels and s is the number of subjects. In reference to the general structure of the F-statistic, it is clear that by partitioning out the between-subjects variability, the F-value will increase because the sum of squares error term will be smaller, resulting in a smaller MSError. It is noteworthy that partitioning variability reduces the degrees of freedom of the F-test; therefore, the between-subjects variability must be significant enough to offset the loss in degrees of freedom. If between-subjects variability is small this process may actually reduce the F-value. (A small numerical sketch of this partitioning is given at the end of this article.) === Assumptions === As with all statistical analyses, specific assumptions should be met to justify the use of this test. Violations can moderately to severely affect results and often lead to an inflation of Type I error. With the rANOVA, standard univariate and multivariate assumptions apply. The univariate assumptions are: Normality—For each level of the within-subjects factor, the dependent variable must have a normal distribution. Sphericity—Difference scores computed between two levels of a within-subjects factor must have the same variance for the comparison of any two levels. (This assumption only applies if there are more than 2 levels of the independent variable.) Randomness—Cases should be derived from a random sample, and scores from different participants should be independent of each other. The rANOVA also requires that certain multivariate assumptions be met, because a multivariate test is conducted on difference scores.
These assumptions include: Multivariate normality—The difference scores are multivariately normally distributed in the population. Randomness—Individual cases should be derived from a random sample, and the difference scores for each participant are independent of those of other participants. === F test === As with other analysis of variance tests, the rANOVA makes use of an F statistic to determine significance. Depending on the number of within-subjects factors and assumption violations, it is necessary to select the most appropriate of three tests: Standard Univariate ANOVA F test—This test is commonly used given only two levels of the within-subjects factor (i.e. time point 1 and time point 2). This test is not recommended given more than 2 levels of the within-subjects factor because the assumption of sphericity is commonly violated in such cases. Alternative Univariate test—These tests account for violations of the assumption of sphericity, and can be used when the within-subjects factor exceeds 2 levels. The F statistic is the same as in the Standard Univariate ANOVA F test, but is associated with a more accurate p-value. This correction is done by adjusting the degrees of freedom downward for determining the critical F value. Two corrections are commonly used: the Greenhouse–Geisser correction and the Huynh–Feldt correction. The Greenhouse–Geisser correction is more conservative, but addresses a common issue of increasing variability over time in a repeated-measures design. The Huynh–Feldt correction is less conservative, but does not address issues of increasing variability. It has been suggested that the Huynh–Feldt correction be used with smaller departures from sphericity, while the Greenhouse–Geisser correction be used when the departures are large. Multivariate Test—This test does not assume sphericity, but is also highly conservative. === Effect size === One of the most commonly reported effect size statistics for rANOVA is partial eta-squared (ηp2). It is also common to use the multivariate η2 when the assumption of sphericity has been violated, and the multivariate test statistic is reported. A third effect size statistic that is reported is the generalized η2, which is comparable to ηp2 in a one-way repeated measures ANOVA. It has been shown to be a better estimate of effect size with other within-subjects tests. === Cautions === rANOVA is not always the best statistical analysis for repeated measures designs. The rANOVA is vulnerable to effects from missing values, imputation, nonequivalent time points between subjects and violations of sphericity. These issues can result in sampling bias and inflated rates of Type I error. In such cases it may be better to consider use of a linear mixed model. == See also == == Notes == == References == === Design and analysis of experiments === Jones, Byron; Kenward, Michael G. (2003). Design and Analysis of Cross-Over Trials (Second ed.). London: Chapman and Hall. Vonesh, Edward F. & Chinchilli, Vernon G. (1997). Linear and Nonlinear Models for the Analysis of Repeated Measurements. London: Chapman and Hall. === Exploration of longitudinal data === Davidian, Marie; David M. Giltinan (1995). Nonlinear Models for Repeated Measurement Data. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. ISBN 978-0-412-98341-2. Fitzmaurice, Garrett; Davidian, Marie; Verbeke, Geert; Molenberghs, Geert, eds. (2008). Longitudinal Data Analysis.
Boca Raton, Florida: Chapman and Hall/CRC. ISBN 978-1-58488-658-7. Jones, Byron; Kenward, Michael G. (2003). Design and Analysis of Cross-Over Trials (Second ed.). London: Chapman and Hall. Kim, Kevin & Timm, Neil (2007). "Restricted MGLM and growth curve model" (Chapter 7). Univariate and multivariate general linear models: Theory and applications with SAS (with 1 CD-ROM for Windows and UNIX). Statistics: Textbooks and Monographs (Second ed.). Boca Raton, Florida: Chapman & Hall/CRC. ISBN 978-1-58488-634-1. Kollo, Tõnu & von Rosen, Dietrich (2005). "Multivariate linear models" (Chapter 4), especially "The Growth curve model and extensions" (Chapter 4.1). Advanced multivariate statistics with matrices. Mathematics and its applications. Vol. 579. New York: Springer. ISBN 978-1-4020-3418-3. Kshirsagar, Anant M. & Smith, William Boyce (1995). Growth curves. Statistics: Textbooks and Monographs. Vol. 145. New York: Marcel Dekker, Inc. ISBN 0-8247-9341-2. Pan, Jian-Xin & Fang, Kai-Tai (2002). Growth curve models and statistical diagnostics. Springer Series in Statistics. New York: Springer-Verlag. ISBN 0-387-95053-2. Seber, G. A. F. & Wild, C. J. (1989). "Growth models" (Chapter 7). Nonlinear regression. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. New York: John Wiley & Sons, Inc. pp. 325–367. ISBN 0-471-61760-1. Timm, Neil H. (2002). "The general MANOVA model (GMANOVA)" (Chapter 3.6.d). Applied multivariate analysis. Springer Texts in Statistics. New York: Springer-Verlag. ISBN 0-387-95347-7. Vonesh, Edward F. & Chinchilli, Vernon G. (1997). Linear and Nonlinear Models for the Analysis of Repeated Measurements. London: Chapman and Hall. (Comprehensive treatment of theory and practice) Conaway, M. (1999, October 11). Repeated Measures Design. Retrieved February 18, 2008, from http://biostat.mc.vanderbilt.edu/twiki/pub/Main/ClinStat/repmeas.PDF Minke, A. (1997, January). Conducting Repeated Measures Analyses: Experimental Design Considerations. Retrieved February 18, 2008, from Ericae.net: http://ericae.net/ft/tamu/Rm.htm Shaughnessy, J. J. (2006). Research Methods in Psychology. New York: McGraw-Hill. == External links == Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares, and their analysis in R (University of Southampton)
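To make the variance partitioning described in the rANOVA section above concrete, here is a minimal numerical sketch (the data matrix and variable names are invented for illustration; this is not from the original article):

```python
import numpy as np

# Hypothetical scores: rows are subjects (s of them), columns are time levels (k of them).
scores = np.array([
    [45.0, 50.0, 58.0],
    [42.0, 49.0, 55.0],
    [36.0, 41.0, 48.0],
    [43.0, 47.0, 54.0],
    [51.0, 57.0, 66.0],
])
s, k = scores.shape
grand_mean = scores.mean()

# Partition SS_total = SS_treatment + SS_subjects + SS_error, as in the text.
ss_total = ((scores - grand_mean) ** 2).sum()
ss_treatment = s * ((scores.mean(axis=0) - grand_mean) ** 2).sum()  # within-subjects (time) effect
ss_subjects = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()   # individual differences
ss_error = ss_total - ss_treatment - ss_subjects

df_treatment, df_error = k - 1, (k - 1) * (s - 1)
f_value = (ss_treatment / df_treatment) / (ss_error / df_error)
print(round(f_value, 2))
```

Removing the subject sum of squares from the error term is what makes the repeated-measures F-value larger than its between-subjects counterpart whenever individual differences are substantial.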
Wikipedia/Repeated_measures_design
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model. All statistical hypothesis tests and all statistical estimators are derived via statistical models. More generally, statistical models are part of the foundation of statistical inference. A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman Adèr quoting Kenneth Bollen). == Introduction == Informally, a statistical model can be thought of as a statistical assumption (or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of any event. As an example, consider a pair of ordinary six-sided dice. We will study two different statistical assumptions about the dice. The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is ⁠1/6⁠. From that assumption, we can calculate the probability of both dice coming up 5:  ⁠1/6⁠ × ⁠1/6⁠ = ⁠1/36⁠.  More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6). The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is ⁠1/8⁠ (because the dice are weighted). From that assumption, we can calculate the probability of both dice coming up 5:  ⁠1/8⁠ × ⁠1/8⁠ = ⁠1/64⁠.  We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown. The first statistical assumption constitutes a statistical model: because with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption does not constitute a statistical model: because with the assumption alone, we cannot calculate the probability of every event. In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation). For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible. == Formal definition == In mathematical terms, a statistical model is a pair ( S , P {\displaystyle S,{\mathcal {P}}} ), where S {\displaystyle S} is the set of possible observations, i.e. the sample space, and P {\displaystyle {\mathcal {P}}} is a set of probability distributions on S {\displaystyle S} . The set P {\displaystyle {\mathcal {P}}} represents all of the models that are considered possible. This set is typically parameterized: P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . The set Θ {\displaystyle \Theta } defines the parameters of the model. If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e. F θ 1 = F θ 2 ⇒ θ 1 = θ 2 {\displaystyle F_{\theta _{1}}=F_{\theta _{2}}\Rightarrow \theta _{1}=\theta _{2}} (in other words, the mapping is injective), it is said to be identifiable. 
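To make the distinction concrete, the following short sketch (the encoding of events as sets of outcomes is ours, added purely for illustration) shows how the first assumption lets us compute the probability of any event for the pair of fair dice:

```python
from fractions import Fraction
from itertools import product

# First assumption: each face of each die has probability 1/6, so each of the
# 36 outcomes (a, b) of the pair has probability 1/36.
outcomes = list(product(range(1, 7), repeat=2))
prob = {o: Fraction(1, 36) for o in outcomes}

def event_probability(event):
    """Probability of an arbitrary event, given as a set of outcomes."""
    return sum(prob[o] for o in event)

print(event_probability({(5, 5)}))                              # 1/36
print(event_probability({o for o in outcomes if sum(o) == 7}))  # 1/6
```

Under the alternative assumption only the probability of face 5 is known, so the dictionary of outcome probabilities cannot be filled in and most events have no computable probability, which is why that assumption does not constitute a statistical model.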
In some cases, the model can be more complex. In Bayesian statistics, the model is extended by adding a probability distribution over the parameter space Θ {\displaystyle \Theta } . A statistical model can sometimes distinguish two sets of probability distributions. The first set Q = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {Q}}=\{F_{\theta }:\theta \in \Theta \}} is the set of models considered for inference. The second set P = { F λ : λ ∈ Λ } {\displaystyle {\mathcal {P}}=\{F_{\lambda }:\lambda \in \Lambda \}} is the set of models that could have generated the data which is much larger than Q {\displaystyle {\mathcal {Q}}} . Such statistical models are key in checking that a given procedure is robust, i.e. that it does not produce catastrophic errors when its assumptions about the data are incorrect. == An example == Suppose that we have a population of children, with the ages of the children distributed uniformly, in the population. The height of a child will be stochastically related to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in a linear regression model, like this: heighti = b0 + b1agei + εi, where b0 is the intercept, b1 is a parameter that age is multiplied by to obtain a prediction of height, εi is the error term, and i identifies the child. This implies that height is predicted by age, with some error. An admissible model must be consistent with all the data points. Thus, a straight line (heighti = b0 + b1agei) cannot be admissible for a model of the data—unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, εi, must be included in the equation, so that the model is consistent with all the data points. To do statistical inference, we would first need to assume some probability distributions for the εi. For instance, we might assume that the εi distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: b0, b1, and the variance of the Gaussian distribution. We can formally specify the model in the form ( S , P {\displaystyle S,{\mathcal {P}}} ) as follows. The sample space, S {\displaystyle S} , of our model comprises the set of all possible pairs (age, height). Each possible value of θ {\displaystyle \theta } = (b0, b1, σ2) determines a distribution on S {\displaystyle S} ; denote that distribution by F θ {\displaystyle F_{\theta }} . If Θ {\displaystyle \Theta } is the set of all possible values of θ {\displaystyle \theta } , then P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . (The parameterization is identifiable, and this is easy to check.) In this example, the model is determined by (1) specifying S {\displaystyle S} and (2) making some assumptions relevant to P {\displaystyle {\mathcal {P}}} . There are two assumptions: that height can be approximated by a linear function of age; that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specify P {\displaystyle {\mathcal {P}}} —as they are required to do. == General remarks == A statistical model is a special class of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. 
some of the variables are stochastic. In the above example with children's heights, ε is a stochastic variable; without that stochastic variable, the model would be deterministic. Statistical models are often used even when the data-generating process being modeled is deterministic. For instance, coin tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. Relatedly, the statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". There are three purposes for a statistical model, according to Konishi & Kitagawa: Predictions Extraction of information Description of stochastic structures Those three purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description. == Dimension of a model == Suppose that we have a statistical model ( S , P {\displaystyle S,{\mathcal {P}}} ) with P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . In notation, we write that Θ ⊆ R k {\displaystyle \Theta \subseteq \mathbb {R} ^{k}} where k is a positive integer ( R {\displaystyle \mathbb {R} } denotes the real numbers; other sets can be used, in principle). Here, k is called the dimension of the model. The model is said to be parametric if Θ {\displaystyle \Theta } has finite dimension. As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that P = { F μ , σ ( x ) ≡ 1 2 π σ exp ⁡ ( − ( x − μ ) 2 2 σ 2 ) : μ ∈ R , σ > 0 } {\displaystyle {\mathcal {P}}=\left\{F_{\mu ,\sigma }(x)\equiv {\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right):\mu \in \mathbb {R} ,\sigma >0\right\}} . In this example, the dimension, k, equals 2. As another example, suppose that the data consists of points (x, y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note the set of all possible lines has dimension 2, even though geometrically, a line has dimension 1.) Although formally θ ∈ Θ {\displaystyle \theta \in \Theta } is a single parameter that has dimension k, it is sometimes regarded as comprising k separate parameters. For example, with the univariate Gaussian distribution, θ {\displaystyle \theta } is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters—the mean and the standard deviation. A statistical model is nonparametric if the parameter set Θ {\displaystyle \Theta } is infinite dimensional. A statistical model is semiparametric if it has both finite-dimensional and infinite-dimensional parameters. Formally, if k is the dimension of Θ {\displaystyle \Theta } and n is the number of samples, both semiparametric and nonparametric models have k → ∞ {\displaystyle k\rightarrow \infty } as n → ∞ {\displaystyle n\rightarrow \infty } . 
If k / n → 0 {\displaystyle k/n\rightarrow 0} as n → ∞ {\displaystyle n\rightarrow \infty } , then the model is semiparametric; otherwise, the model is nonparametric. Parametric models are by far the most commonly used statistical models. Regarding semiparametric and nonparametric models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies". == Nested models == Two statistical models are nested if the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model y = b0 + b1x + b2x2 + ε, ε ~ 𝒩(0, σ2) has, nested within it, the linear model y = b0 + b1x + ε, ε ~ 𝒩(0, σ2) —we constrain the parameter b2 to equal 0. In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case. As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2. == Comparing models == Comparing statistical models is fundamental for much of statistical inference. Konishi & Kitagawa (2008, p. 75) state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following: R2, Bayes factor, Akaike information criterion, and the likelihood-ratio test together with its generalization, the relative likelihood. Another way of comparing two statistical models is through the notion of deficiency introduced by Lucien Le Cam. == See also == == Notes == == References == == Further reading == Davison, A. C. (2008), Statistical Models, Cambridge University Press Drton, M.; Sullivant, S. (2007), "Algebraic statistical models" (PDF), Statistica Sinica, 17: 1273–1297 Freedman, D. A. (2009), Statistical Models, Cambridge University Press Helland, I. S. (2010), Steps Towards a Unified Basis for Scientific Models and Methods, World Scientific Kroese, D. P.; Chan, J. C. C. (2014), Statistical Modeling and Computation, Springer Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, S2CID 15900983
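A brief sketch of such a comparison for the nested linear and quadratic models above (the data are simulated; the chi-squared reference for the likelihood-ratio statistic assumes i.i.d. Gaussian errors, and the helper names are ours):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)  # data generated from the linear model

def rss(degree):
    """Residual sum of squares of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    return ((y - np.polyval(coeffs, x)) ** 2).sum()

# The linear model is the quadratic model with b2 constrained to 0, so the two are nested.
n = x.size
lr = n * np.log(rss(1) / rss(2))     # likelihood-ratio statistic under Gaussian errors
p_value = stats.chi2.sf(lr, df=1)    # one constrained parameter
print(round(lr, 3), round(p_value, 3))
```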
Wikipedia/Statistical_models
In mathematics, a binary function (also called bivariate function, or function of two variables) is a function that takes two inputs. Precisely stated, a function f {\displaystyle f} is binary if there exists sets X , Y , Z {\displaystyle X,Y,Z} such that f : X × Y → Z {\displaystyle \,f\colon X\times Y\rightarrow Z} where X × Y {\displaystyle X\times Y} is the Cartesian product of X {\displaystyle X} and Y . {\displaystyle Y.} == Alternative definitions == Set-theoretically, a binary function can be represented as a subset of the Cartesian product X × Y × Z {\displaystyle X\times Y\times Z} , where ( x , y , z ) {\displaystyle (x,y,z)} belongs to the subset if and only if f ( x , y ) = z {\displaystyle f(x,y)=z} . Conversely, a subset R {\displaystyle R} defines a binary function if and only if for any x ∈ X {\displaystyle x\in X} and y ∈ Y {\displaystyle y\in Y} , there exists a unique z ∈ Z {\displaystyle z\in Z} such that ( x , y , z ) {\displaystyle (x,y,z)} belongs to R {\displaystyle R} . f ( x , y ) {\displaystyle f(x,y)} is then defined to be this z {\displaystyle z} . Alternatively, a binary function may be interpreted as simply a function from X × Y {\displaystyle X\times Y} to Z {\displaystyle Z} . Even when thought of this way, however, one generally writes f ( x , y ) {\displaystyle f(x,y)} instead of f ( ( x , y ) ) {\displaystyle f((x,y))} . (That is, the same pair of parentheses is used to indicate both function application and the formation of an ordered pair.) == Examples == Division of whole numbers can be thought of as a function. If Z {\displaystyle \mathbb {Z} } is the set of integers, N + {\displaystyle \mathbb {N} ^{+}} is the set of natural numbers (except for zero), and Q {\displaystyle \mathbb {Q} } is the set of rational numbers, then division is a binary function f : Z × N + → Q {\displaystyle f:\mathbb {Z} \times \mathbb {N} ^{+}\to \mathbb {Q} } . In a vector space V over a field F, scalar multiplication is a binary function. A scalar a ∈ F is combined with a vector v ∈ V to produce a new vector av ∈ V. Another example is that of inner products, or more generally functions of the form ( x , y ) ↦ x T M y {\displaystyle (x,y)\mapsto x^{\mathrm {T} }My} , where x, y are real-valued vectors of appropriate size and M is a matrix. If M is a positive definite matrix, this yields an inner product. == Functions of two real variables == Functions whose domain is a subset of R 2 {\displaystyle \mathbb {R} ^{2}} are often also called functions of two variables even if their domain does not form a rectangle and thus the cartesian product of two sets. == Restrictions to ordinary functions == In turn, one can also derive ordinary functions of one variable from a binary function. Given any element x ∈ X {\displaystyle x\in X} , there is a function f x {\displaystyle f^{x}} , or f ( x , ⋅ ) {\displaystyle f(x,\cdot )} , from Y {\displaystyle Y} to Z {\displaystyle Z} , given by f x ( y ) = f ( x , y ) {\displaystyle f^{x}(y)=f(x,y)} . Similarly, given any element y ∈ Y {\displaystyle y\in Y} , there is a function f y {\displaystyle f_{y}} , or f ( ⋅ , y ) {\displaystyle f(\cdot ,y)} , from X {\displaystyle X} to Z {\displaystyle Z} , given by f y ( x ) = f ( x , y ) {\displaystyle f_{y}(x)=f(x,y)} . 
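These restrictions, and the currying construction discussed next, are easy to sketch in code (a hypothetical illustration using the division example above; the helper names are ours):

```python
from fractions import Fraction

def f(x: int, y: int) -> Fraction:
    """The division example: a binary function f : Z x N+ -> Q."""
    return Fraction(x, y)

def restrict_first(x):
    """The ordinary function f^x = f(x, .) from Y to Z."""
    return lambda y: f(x, y)

def curry(g):
    """Identify a binary function on X x Y with a function from X to the set of functions Y -> Z."""
    return lambda x: (lambda y: g(x, y))

print(f(3, 4))               # 3/4
print(restrict_first(3)(4))  # 3/4, the restriction f^3 evaluated at y = 4
print(curry(f)(3)(4))        # 3/4, the curried form
```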
In computer science, this identification between a function from X × Y {\displaystyle X\times Y} to Z {\displaystyle Z} and a function from X {\displaystyle X} to Z Y {\displaystyle Z^{Y}} , where Z Y {\displaystyle Z^{Y}} is the set of all functions from Y {\displaystyle Y} to Z {\displaystyle Z} , is called currying. == Generalisations == The various concepts relating to functions can also be generalised to binary functions. For example, the division example above is surjective (or onto) because every rational number may be expressed as a quotient of an integer and a natural number. This example is injective in each input separately, because the functions f x and f y are always injective. However, it's not injective in both variables simultaneously, because (for example) f (2,4) = f (1,2). One can also consider partial binary functions, which may be defined only for certain values of the inputs. For example, the division example above may also be interpreted as a partial binary function from Z and N to Q, where N is the set of all natural numbers, including zero. But this function is undefined when the second input is zero. A binary operation is a binary function where the sets X, Y, and Z are all equal; binary operations are often used to define algebraic structures. In linear algebra, a bilinear transformation is a binary function where the sets X, Y, and Z are all vector spaces and the derived functions f x and fy are all linear transformations. A bilinear transformation, like any binary function, can be interpreted as a function from X × Y to Z, but this function in general won't be linear. However, the bilinear transformation can also be interpreted as a single linear transformation from the tensor product X ⊗ Y {\displaystyle X\otimes Y} to Z. == Generalisations to ternary and other functions == The concept of binary function generalises to ternary (or 3-ary) function, quaternary (or 4-ary) function, or more generally to n-ary function for any natural number n. A 0-ary function to Z is simply given by an element of Z. One can also define an A-ary function where A is any set; there is one input for each element of A. == Category theory == In category theory, n-ary functions generalise to n-ary morphisms in a multicategory. The interpretation of an n-ary morphism as an ordinary morphisms whose domain is some sort of product of the domains of the original n-ary morphism will work in a monoidal category. The construction of the derived morphisms of one variable will work in a closed monoidal category. The category of sets is closed monoidal, but so is the category of vector spaces, giving the notion of bilinear transformation above. == See also == Arity Unary operation Unary function Binary operation Iterated binary operation Ternary operation == References ==
Wikipedia/Binary_function
Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes, before some event occurs, to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. The hazard rate at time t {\displaystyle t} is the probability per short time dt that an event will occur between t {\displaystyle t} and t + d t {\displaystyle t+dt} given that up to time t {\displaystyle t} no event has occurred yet. For example, taking a drug may halve one's hazard rate for a stroke occurring, or, changing the material from which a manufactured component is constructed, may double its hazard rate for failure. Other types of survival models such as accelerated failure time models do not exhibit proportional hazards. The accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated (or decelerated). == Background == Survival models can be viewed as consisting of two parts: the underlying baseline hazard function, often denoted λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , describing how the risk of event per time unit changes over time at baseline levels of covariates; and the effect parameters, describing how the hazard varies in response to explanatory covariates. A typical medical example would include covariates such as treatment assignment, as well as patient characteristics such as age at start of study, gender, and the presence of other diseases at start of study, in order to reduce variability and/or control for confounding. The proportional hazards condition states that covariates are multiplicatively related to the hazard. In the simplest case of stationary coefficients, for example, a treatment with a drug may, say, halve a subject's hazard at any given time t {\displaystyle t} , while the baseline hazard may vary. Note however, that this does not double the lifetime of the subject; the precise effect of the covariates on the lifetime depends on the type of λ 0 ( t ) {\displaystyle \lambda _{0}(t)} . The covariate is not restricted to binary predictors; in the case of a continuous covariate x {\displaystyle x} , it is typically assumed that the hazard responds exponentially; each unit increase in x {\displaystyle x} results in proportional scaling of the hazard. == The Cox model == === Introduction === Sir David Cox observed that if the proportional hazards assumption holds (or, is assumed to hold) then it is possible to estimate the effect parameter(s), denoted β i {\displaystyle \beta _{i}} below, without any consideration of the full hazard function. This approach to survival data is called application of the Cox proportional hazards model, sometimes abbreviated to Cox model or to proportional hazards model. However, Cox also noted that biological interpretation of the proportional hazards assumption can be quite tricky. Let Xi = (Xi1, … , Xip) be the realized values of the p covariates for subject i. The hazard function for the Cox proportional hazards model has the form λ ( t | X i ) = λ 0 ( t ) exp ⁡ ( β 1 X i 1 + ⋯ + β p X i p ) = λ 0 ( t ) exp ⁡ ( X i ⋅ β ) {\displaystyle {\begin{aligned}\lambda (t|X_{i})&=\lambda _{0}(t)\exp(\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip})\\&=\lambda _{0}(t)\exp(X_{i}\cdot \beta )\end{aligned}}} This expression gives the hazard function at time t for subject i with covariate vector (explanatory variables) Xi. 
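A minimal sketch of this hazard function (the baseline hazard, coefficients and covariate values below are invented purely for illustration) also shows the "proportional" property discussed next: the ratio of two subjects' hazards does not depend on t.

```python
import numpy as np

def cox_hazard(t, x, beta, baseline):
    """lambda(t | X) = lambda_0(t) * exp(X . beta)."""
    return baseline(t) * np.exp(x @ beta)

baseline = lambda t: 0.01 + 0.002 * t      # an arbitrary baseline hazard lambda_0(t)
beta = np.array([0.7, -0.2])
x_i = np.array([1.0, 5.0])                 # covariates of subject i
x_j = np.array([0.0, 3.0])                 # covariates of subject j

for t in (1.0, 5.0, 20.0):
    ratio = cox_hazard(t, x_i, beta, baseline) / cox_hazard(t, x_j, beta, baseline)
    print(round(float(ratio), 4))          # the same value at every t

print(round(float(np.exp((x_i - x_j) @ beta)), 4))  # exp((X_i - X_j) . beta), the same constant
```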
Note that between subjects, the baseline hazard λ 0 ( t ) {\displaystyle \lambda _{0}(t)} is identical (has no dependency on i). The only difference between subjects' hazards comes from the baseline scaling factor exp ⁡ ( X i ⋅ β ) {\displaystyle \exp(X_{i}\cdot \beta )} . === Why it is called "proportional" === To start, suppose we only have a single covariate, x {\displaystyle x} , and therefore a single coefficient, β 1 {\displaystyle \beta _{1}} . Our model looks like: λ ( t | x ) = λ 0 ( t ) exp ⁡ ( β 1 x ) {\displaystyle \lambda (t|x)=\lambda _{0}(t)\exp(\beta _{1}x)} Consider the effect of increasing x {\displaystyle x} by 1: λ ( t | x + 1 ) = λ 0 ( t ) exp ⁡ ( β 1 ( x + 1 ) ) = λ 0 ( t ) exp ⁡ ( β 1 x + β 1 ) = ( λ 0 ( t ) exp ⁡ ( β 1 x ) ) exp ⁡ ( β 1 ) = λ ( t | x ) exp ⁡ ( β 1 ) {\displaystyle {\begin{aligned}\lambda (t|x+1)&=\lambda _{0}(t)\exp(\beta _{1}(x+1))\\&=\lambda _{0}(t)\exp(\beta _{1}x+\beta _{1})\\&={\Bigl (}\lambda _{0}(t)\exp(\beta _{1}x){\Bigr )}\exp(\beta _{1})\\&=\lambda (t|x)\exp(\beta _{1})\end{aligned}}} We can see that increasing a covariate by 1 scales the original hazard by the constant exp ⁡ ( β 1 ) {\displaystyle \exp(\beta _{1})} . Rearranging things slightly, we see that: λ ( t | x + 1 ) λ ( t | x ) = exp ⁡ ( β 1 ) {\displaystyle {\frac {\lambda (t|x+1)}{\lambda (t|x)}}=\exp(\beta _{1})} The right-hand-side is constant over time (no term has a t {\displaystyle t} in it). This relationship, x / y = constant {\displaystyle x/y={\text{constant}}} , is called a proportional relationship. More generally, consider two subjects, i and j, with covariates X i {\displaystyle X_{i}} and X j {\displaystyle X_{j}} respectively. Consider the ratio of their hazards: λ ( t | X i ) λ ( t | X j ) = λ 0 ( t ) exp ⁡ ( X i ⋅ β ) λ 0 ( t ) exp ⁡ ( X j ⋅ β ) = λ 0 ( t ) exp ⁡ ( X i ⋅ β ) λ 0 ( t ) exp ⁡ ( X j ⋅ β ) = exp ⁡ ( ( X i − X j ) ⋅ β ) {\displaystyle {\begin{aligned}{\frac {\lambda (t|X_{i})}{\lambda (t|X_{j})}}&={\frac {\lambda _{0}(t)\exp(X_{i}\cdot \beta )}{\lambda _{0}(t)\exp(X_{j}\cdot \beta )}}\\&={\frac {{\cancel {\lambda _{0}(t)}}\exp(X_{i}\cdot \beta )}{{\cancel {\lambda _{0}(t)}}\exp(X_{j}\cdot \beta )}}\\&=\exp((X_{i}-X_{j})\cdot \beta )\end{aligned}}} The right-hand-side isn't dependent on time, as the only time-dependent factor, λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , was cancelled out. Thus the ratio of hazards of two subjects is a constant, i.e. the hazards are proportional. === Absence of an intercept term === Often there is an intercept term (also called a constant term or bias term) used in regression models. The Cox model lacks one because the baseline hazard, λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , takes the place of it. Let's see what would happen if we did include an intercept term anyways, denoted β 0 {\displaystyle \beta _{0}} : λ ( t | X i ) = λ 0 ( t ) exp ⁡ ( β 1 X i 1 + ⋯ + β p X i p + β 0 ) = λ 0 ( t ) exp ⁡ ( X i ⋅ β ) exp ⁡ ( β 0 ) = ( exp ⁡ ( β 0 ) λ 0 ( t ) ) exp ⁡ ( X i ⋅ β ) = λ 0 ∗ ( t ) exp ⁡ ( X i ⋅ β ) {\displaystyle {\begin{aligned}\lambda (t|X_{i})&=\lambda _{0}(t)\exp(\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip}+\beta _{0})\\&=\lambda _{0}(t)\exp(X_{i}\cdot \beta )\exp(\beta _{0})\\&=\left(\exp(\beta _{0})\lambda _{0}(t)\right)\exp(X_{i}\cdot \beta )\\&=\lambda _{0}^{*}(t)\exp(X_{i}\cdot \beta )\end{aligned}}} where we've redefined exp ⁡ ( β 0 ) λ 0 ( t ) {\displaystyle \exp(\beta _{0})\lambda _{0}(t)} to be a new baseline hazard, λ 0 ∗ ( t ) {\displaystyle \lambda _{0}^{*}(t)} . 
Thus, the baseline hazard incorporates all parts of the hazard that are not dependent on the subjects' covariates, which includes any intercept term (which is constant for all subjects, by definition). In other words, adding an intercept term would make the model unidentifiable. === Likelihood for unique times === The Cox partial likelihood, shown below, is obtained by using Breslow's estimate of the baseline hazard function, plugging it into the full likelihood and then observing that the result is a product of two factors. The first factor is the partial likelihood shown below, in which the baseline hazard has "canceled out". It is simply the probability for subjects to have experienced events in the order that they actually have occurred, given the set of times of occurrences and given the subjects' covariates. The second factor is free of the regression coefficients and depends on the data only through the censoring pattern. The effect of covariates estimated by any proportional hazards model can thus be reported as hazard ratios. To calculate the partial likelihood, the probability for the order of events, let us index the M samples for which events have already occurred by increasing time of occurrence, Y1 < Y2 < ... < YM. Covariates of all other subjects for which no event has occurred get indices M+1,.., N. The partial likelihood can be factorized into one factor for each event that has occurred. The i 'th factor is the probability that out of all subjects (i,i+1,..., N) for which no event has occurred before time Yi, the one that actually occurred at time Yi is the event for subject i: L i ( β ) = λ ( Y i ∣ X i ) ∑ j = i N λ ( Y i ∣ X j ) = λ 0 ( Y i ) θ i ∑ j = i N λ 0 ( Y i ) θ j = θ i ∑ j = i N θ j , {\displaystyle L_{i}(\beta )={\frac {\lambda (Y_{i}\mid X_{i})}{\sum _{j=i}^{N}\lambda (Y_{i}\mid X_{j})}}={\frac {\lambda _{0}(Y_{i})\theta _{i}}{\sum _{j=i}^{N}\lambda _{0}(Y_{i})\theta _{j}}}={\frac {\theta _{i}}{\sum _{j=i}^{N}\theta _{j}}},} where θj = exp(Xj ⋅ β) and the summation is over the set of subjects j where the event has not occurred before time Yi (including subject i itself). Obviously 0 < Li(β) ≤ 1. Treating the subjects as statistically independent of each other, the partial likelihood for the order of events is L ( β ) = ∏ i = 1 M L i ( β ) = ∏ i : C i = 1 L i ( β ) , {\displaystyle L(\beta )=\prod _{i=1}^{M}L_{i}(\beta )=\prod _{i:C_{i}=1}L_{i}(\beta ),} where the subjects for which an event has occurred are indicated by Ci = 1 and all others by Ci = 0. The corresponding log partial likelihood is ℓ ( β ) = ∑ i : C i = 1 ( X i ⋅ β − log ⁡ ∑ j : Y j ≥ Y i θ j ) , {\displaystyle \ell (\beta )=\sum _{i:C_{i}=1}\left(X_{i}\cdot \beta -\log \sum _{j:Y_{j}\geq Y_{i}}\theta _{j}\right),} where we have written ∑ j = i N {\displaystyle \sum _{j=i}^{N}} using the indexing introduced above in a more general way, as ∑ j : Y j ≥ Y i {\displaystyle \sum _{j:Y_{j}\geq Y_{i}}} . Crucially, the effect of the covariates can be estimated without the need to specify the hazard function λ 0 ( t ) {\displaystyle \lambda _{0}(t)} over time. The partial likelihood can be maximized over β to produce maximum partial likelihood estimates of the model parameters. 
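The log partial likelihood above is straightforward to evaluate directly. The sketch below (invented data with a single covariate, assuming no tied event times) implements exactly that expression; maximizing it over β, for example by Newton–Raphson with the score and Hessian given next, yields the maximum partial likelihood estimate.

```python
import numpy as np

def cox_log_partial_likelihood(beta, times, events, X):
    """l(beta) = sum over events i of [X_i . beta - log(sum_{j: Y_j >= Y_i} exp(X_j . beta))]."""
    theta = np.exp(X @ beta)
    ll = 0.0
    for i in range(len(times)):
        if events[i] == 1:
            at_risk = times >= times[i]   # the risk set {j : Y_j >= Y_i}
            ll += X[i] @ beta - np.log(theta[at_risk].sum())
    return ll

# Invented data: observation times, event indicators (1 = event, 0 = censored),
# and one covariate per subject.
times = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
events = np.array([1, 1, 0, 1, 0])
X = np.array([[0.5], [1.2], [0.3], [2.0], [0.7]])

for b in (-0.5, 0.0, 0.5):
    print(b, round(float(cox_log_partial_likelihood(np.array([b]), times, events, X)), 4))
```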
The partial score function is ℓ ′ ( β ) = ∑ i : C i = 1 ( X i − ∑ j : Y j ≥ Y i θ j X j ∑ j : Y j ≥ Y i θ j ) , {\displaystyle \ell ^{\prime }(\beta )=\sum _{i:C_{i}=1}\left(X_{i}-{\frac {\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}}{\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}}}\right),} and the Hessian matrix of the partial log likelihood is ℓ ′ ′ ( β ) = − ∑ i : C i = 1 ( ∑ j : Y j ≥ Y i θ j X j X j ′ ∑ j : Y j ≥ Y i θ j − [ ∑ j : Y j ≥ Y i θ j X j ] [ ∑ j : Y j ≥ Y i θ j X j ′ ] [ ∑ j : Y j ≥ Y i θ j ] 2 ) . {\displaystyle \ell ^{\prime \prime }(\beta )=-\sum _{i:C_{i}=1}\left({\frac {\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}X_{j}^{\prime }}{\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}}}-{\frac {\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}\right]\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}^{\prime }\right]}{\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}\right]^{2}}}\right).} Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients. === Likelihood when there exist tied times === Several approaches have been proposed to handle situations in which there are ties in the time data. Breslow's method describes the approach in which the procedure described above is used unmodified, even when ties are present. An alternative approach that is considered to give better results is Efron's method. Let tj denote the unique times, let Hj denote the set of indices i such that Yi = tj and Ci = 1, and let mj = |Hj|. Efron's approach maximizes the following partial likelihood. L ( β ) = ∏ j ∏ i ∈ H j θ i ∏ ℓ = 0 m j − 1 [ ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i ] . 
{\displaystyle L(\beta )=\prod _{j}{\frac {\prod _{i\in H_{j}}\theta _{i}}{\prod _{\ell =0}^{m_{j}-1}\left[\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right]}}.} The corresponding log partial likelihood is ℓ ( β ) = ∑ j ( ∑ i ∈ H j X i ⋅ β − ∑ ℓ = 0 m j − 1 log ⁡ ( ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i ) ) , {\displaystyle \ell (\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}\cdot \beta -\sum _{\ell =0}^{m_{j}-1}\log \left(\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right)\right),} the score function is ℓ ′ ( β ) = ∑ j ( ∑ i ∈ H j X i − ∑ ℓ = 0 m j − 1 ∑ i : Y i ≥ t j θ i X i − ℓ m j ∑ i ∈ H j θ i X i ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i ) , {\displaystyle \ell ^{\prime }(\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}-\sum _{\ell =0}^{m_{j}-1}{\frac {\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}}{\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}}}\right),} and the Hessian matrix is ℓ ′ ′ ( β ) = − ∑ j ∑ ℓ = 0 m j − 1 ( ∑ i : Y i ≥ t j θ i X i X i ′ − ℓ m j ∑ i ∈ H j θ i X i X i ′ ϕ j , ℓ , m j − Z j , ℓ , m j Z j , ℓ , m j ′ ϕ j , ℓ , m j 2 ) , {\displaystyle \ell ^{\prime \prime }(\beta )=-\sum _{j}\sum _{\ell =0}^{m_{j}-1}\left({\frac {\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}X_{i}^{\prime }-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}X_{i}^{\prime }}{\phi _{j,\ell ,m_{j}}}}-{\frac {Z_{j,\ell ,m_{j}}Z_{j,\ell ,m_{j}}^{\prime }}{\phi _{j,\ell ,m_{j}}^{2}}}\right),} where ϕ j , ℓ , m j = ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i {\displaystyle \phi _{j,\ell ,m_{j}}=\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}} Z j , ℓ , m j = ∑ i : Y i ≥ t j θ i X i − ℓ m j ∑ i ∈ H j θ i X i . {\displaystyle Z_{j,\ell ,m_{j}}=\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}.} Note that when Hj is empty (all observations with time tj are censored), the summands in these expressions are treated as zero. === Examples === Below are some worked examples of the Cox model in practice. ==== A single binary covariate ==== Suppose the endpoint we are interested in is patient survival during a 5-year observation period after a surgery. Patients can die within the 5-year period, and we record when they died, or patients can live past 5 years, and we only record that they lived past 5 years. The surgery was performed at one of two hospitals, A or B, and we would like to know if the hospital location is associated with 5-year survival. Specifically, we would like to know the relative increase (or decrease) in hazard from a surgery performed at hospital A compared to hospital B. Provided is some (fake) data, where each row represents a patient: T is how long the patient was observed for before death or 5 years (measured in months), and C denotes if the patient died in the 5-year period. We have encoded the hospital as a binary variable denoted X: 1 if from hospital A, 0 from hospital B. 
Our single-covariate Cox proportional model looks like the following, with β 1 {\displaystyle \beta _{1}} representing the hospital's effect, and i indexing each patient: λ ( t | X i ) ⏞ hazard for i = λ 0 ( t ) ⏟ baseline hazard ⋅ exp ⁡ ( β 1 X i ) ⏞ scaling factor for i {\displaystyle \overbrace {\lambda (t|X_{i})} ^{\text{hazard for i}}=\underbrace {\lambda _{0}(t)} _{{\text{baseline}} \atop {\text{hazard}}}\cdot \overbrace {\exp(\beta _{1}X_{i})} ^{\text{scaling factor for i}}} Using statistical software, we can estimate β 1 {\displaystyle \beta _{1}} to be 2.12. The hazard ratio is the exponential of this value, exp ⁡ ( β 1 ) = exp ⁡ ( 2.12 ) {\displaystyle \exp(\beta _{1})=\exp(2.12)} . To see why, consider the ratio of hazards, specifically: λ ( t | X = 1 ) λ ( t | X = 0 ) = λ 0 ( t ) exp ⁡ ( β 1 ⋅ 1 ) λ 0 ( t ) exp ⁡ ( β 1 ⋅ 0 ) = exp ⁡ ( β 1 ) {\displaystyle {\frac {\lambda (t|X=1)}{\lambda (t|X=0)}}={\frac {{\cancel {\lambda _{0}(t)}}\exp(\beta _{1}\cdot 1)}{{\cancel {\lambda _{0}(t)}}\exp(\beta _{1}\cdot 0)}}=\exp(\beta _{1})} Thus, the hazard ratio of hospital A to hospital B is exp ⁡ ( 2.12 ) = 8.32 {\displaystyle \exp(2.12)=8.32} . Putting aside statistical significance for a moment, we can make a statement saying that patients in hospital A are associated with a 8.3x higher risk of death occurring in any short period of time compared to hospital B. There are important caveats to mention about the interpretation: a 8.3x higher risk of death does not mean that 8.3x more patients will die in hospital A: survival analysis examines how quickly events occur, not simply whether they occur. More specifically, "risk of death" is a measure of a rate. A rate has units, like meters per second. However, a relative rate does not: a bicycle can go two times faster than another bicycle (the reference bicycle), without specifying any units. Likewise, the risk of death (comparable to the speed of a bike) in hospital A is 8.3 times higher (faster) than the risk of death in hospital B (the reference group). the inverse quantity, 1 / 8.32 = 1 exp ⁡ ( 2.12 ) = exp ⁡ ( − 2.12 ) = 0.12 {\displaystyle 1/8.32={\frac {1}{\exp(2.12)}}=\exp(-2.12)=0.12} is the hazard ratio of hospital B relative to hospital A. We haven't made any inferences about probabilities of survival between the hospitals. This is because we would need an estimate of the baseline hazard rate, λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , as well as our β 1 {\displaystyle \beta _{1}} estimate. However, standard estimation of the Cox proportional hazard model does not directly estimate the baseline hazard rate. Because we have ignored the only time varying component of the model, the baseline hazard rate, our estimate is timescale-invariant. For example, if we had measured time in years instead of months, we would get the same estimate. It is tempting to say that the hospital caused the difference in hazards between the two groups, but since our study is not causal (that is, we do not know how the data was generated), we stick with terminology like "associated". ==== A single continuous covariate ==== To demonstrate a less traditional use case of survival analysis, the next example will be an economics question: what is the relationship between a company's price-to-earnings ratio (P/E) on their first IPO anniversary and their future survival? More specifically, if we consider a company's "birth event" to be their first IPO anniversary, and any bankruptcy, sale, going private, etc. 
as a "death" event the company, we'd like to know the influence of the companies' P/E ratio at their "birth" (first IPO anniversary) on their survival. Provided is a (fake) dataset with survival data from 12 companies: T represents the number of days between first IPO anniversary and death (or an end date of 2022-01-01, if did not die). C represents if the company died before 2022-01-01 or not. P/E represents the company's price-to-earnings ratio at its 1st IPO anniversary. Unlike the previous example where there was a binary variable, this dataset has a continuous variable, P/E; however, the model looks similar: λ ( t | P i ) = λ 0 ( t ) ⋅ exp ⁡ ( β 1 P i ) {\displaystyle \lambda (t|P_{i})=\lambda _{0}(t)\cdot \exp(\beta _{1}P_{i})} where P i {\displaystyle P_{i}} represents a company's P/E ratio. Running this dataset through a Cox model produces an estimate of the value of the unknown β 1 {\displaystyle \beta _{1}} , which is -0.34. Therefore, an estimate of the entire hazard is: λ ( t | P i ) = λ 0 ( t ) ⋅ exp ⁡ ( − 0.34 P i ) {\displaystyle \lambda (t|P_{i})=\lambda _{0}(t)\cdot \exp(-0.34P_{i})} Since the baseline hazard, λ 0 ( t ) {\displaystyle \lambda _{0}(t)} , was not estimated, the entire hazard is not able to be calculated. However, consider the ratio of the companies i and j's hazards: λ ( t | P i ) λ ( t | P j ) = λ 0 ( t ) ⋅ exp ⁡ ( − 0.34 P i ) λ 0 ( t ) ⋅ exp ⁡ ( − 0.34 P j ) = exp ⁡ ( − 0.34 ( P i − P j ) ) {\displaystyle {\begin{aligned}{\frac {\lambda (t|P_{i})}{\lambda (t|P_{j})}}&={\frac {{\cancel {\lambda _{0}(t)}}\cdot \exp(-0.34P_{i})}{{\cancel {\lambda _{0}(t)}}\cdot \exp(-0.34P_{j})}}\\&=\exp(-0.34(P_{i}-P_{j}))\end{aligned}}} All terms on the right are known, so calculating the ratio of hazards between companies is possible. Since there is no time-dependent term on the right (all terms are constant), the hazards are proportional to each other. For example, the hazard ratio of company 5 to company 2 is exp ⁡ ( − 0.34 ( 6.3 − 3.0 ) ) = 0.33 {\displaystyle \exp(-0.34(6.3-3.0))=0.33} . This means that, within the interval of study, company 5's risk of "death" is 0.33 ≈ 1/3 as large as company 2's risk of death. There are important caveats to mention about the interpretation: The hazard ratio is the quantity exp ⁡ ( β 1 ) {\displaystyle \exp(\beta _{1})} , which is exp ⁡ ( − 0.34 ) = 0.71 {\displaystyle \exp(-0.34)=0.71} in the above example. From the last calculation above, an interpretation of this is as the ratio of hazards between two "subjects" that have their variables differ by one unit: if P i = P j + 1 {\displaystyle P_{i}=P_{j}+1} , then exp ⁡ ( β 1 ( P i − P j ) = exp ⁡ ( β 1 ( 1 ) ) {\displaystyle \exp(\beta _{1}(P_{i}-P_{j})=\exp(\beta _{1}(1))} . The choice of "differ by one unit" is convenience, as it communicates precisely the value of β 1 {\displaystyle \beta _{1}} . The baseline hazard can be represented when the scaling factor is 1, i.e. P = 0 {\displaystyle P=0} . λ ( t | P i = 0 ) = λ 0 ( t ) ⋅ exp ⁡ ( − 0.34 ⋅ 0 ) = λ 0 ( t ) {\displaystyle \lambda (t|P_{i}=0)=\lambda _{0}(t)\cdot \exp(-0.34\cdot 0)=\lambda _{0}(t)} Can we interpret the baseline hazard as the hazard of a "baseline" company whose P/E happens to be 0? This interpretation of the baseline hazard as "hazard of a baseline subject" is imperfect, as the covariate being 0 is impossible in this application: a P/E of 0 is meaningless (it means the company's stock price is 0, i.e., they are "dead"). A more appropriate interpretation would be "the hazard when all variables are nil". 
It is tempting to want to understand and interpret a value like exp ⁡ ( β 1 P i ) {\displaystyle \exp(\beta _{1}P_{i})} to represent the hazard of a company. However, consider what this is actually representing: exp ⁡ ( β 1 P i ) = exp ⁡ ( β 1 ( P i − 0 ) ) = exp ⁡ ( β 1 P i ) exp ⁡ ( β 1 0 ) = λ ( t | P i ) λ ( t | 0 ) {\displaystyle \exp(\beta _{1}P_{i})=\exp(\beta _{1}(P_{i}-0))={\frac {\exp(\beta _{1}P_{i})}{\exp(\beta _{1}0)}}={\frac {\lambda (t|P_{i})}{\lambda (t|0)}}} . There is implicitly a ratio of hazards here, comparing company i's hazard to an imaginary baseline company with 0 P/E. However, as explained above, a P/E of 0 is impossible in this application, so exp ⁡ ( β 1 P i ) {\displaystyle \exp(\beta _{1}P_{i})} is meaningless in this example. Ratios between plausible hazards are meaningful, however. == Time-varying predictors and coefficients == Extensions to time dependent variables, time dependent strata, and multiple events per subject, can be incorporated by the counting process formulation of Andersen and Gill. One example of the use of hazard models with time-varying regressors is estimating the effect of unemployment insurance on unemployment spells. In addition to allowing time-varying covariates (i.e., predictors), the Cox model may be generalized to time-varying coefficients as well. That is, the proportional effect of a treatment may vary with time; e.g. a drug may be very effective if administered within one month of morbidity, and become less effective as time goes on. The hypothesis of no change with time (stationarity) of the coefficient may then be tested. Details and software (R package) are available in Martinussen and Scheike (2006). In this context, it could also be mentioned that it is theoretically possible to specify the effect of covariates by using additive hazards, i.e. specifying λ ( t | X i ) = λ 0 ( t ) + β 1 X i 1 + ⋯ + β p X i p = λ 0 ( t ) + X i ⋅ β . {\displaystyle \lambda (t|X_{i})=\lambda _{0}(t)+\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip}=\lambda _{0}(t)+X_{i}\cdot \beta .} If such additive hazards models are used in situations where (log-)likelihood maximization is the objective, care must be taken to restrict λ ( t ∣ X i ) {\displaystyle \lambda (t\mid X_{i})} to non-negative values. Perhaps as a result of this complication, such models are seldom seen. If the objective is instead least squares the non-negativity restriction is not strictly required. == Specifying the baseline hazard function == The Cox model may be specialized if a reason exists to assume that the baseline hazard follows a particular form. In this case, the baseline hazard λ 0 ( t ) {\displaystyle \lambda _{0}(t)} is replaced by a given function. For example, assuming the hazard function to be the Weibull hazard function gives the Weibull proportional hazards model. Incidentally, using the Weibull baseline hazard is the only circumstance under which the model satisfies both the proportional hazards, and accelerated failure time models. The generic term parametric proportional hazards models can be used to describe proportional hazards models in which the hazard function is specified. The Cox proportional hazards model is sometimes called a semiparametric model by contrast. Some authors use the term Cox proportional hazards model even when specifying the underlying hazard function, to acknowledge the debt of the entire field to David Cox. 
The term Cox regression model (omitting proportional hazards) is sometimes used to describe the extension of the Cox model to include time-dependent factors. However, this usage is potentially ambiguous since the Cox proportional hazards model can itself be described as a regression model. == Relationship to Poisson models == There is a relationship between proportional hazards models and Poisson regression models which is sometimes used to fit approximate proportional hazards models in software for Poisson regression. The usual reason for doing this is that calculation is much quicker. This was more important in the days of slower computers but can still be useful for particularly large data sets or complex problems. Laird and Olivier (1981) provide the mathematical details. They note, "we do not assume [the Poisson model] is true, but simply use it as a device for deriving the likelihood." McCullagh and Nelder's book on generalized linear models has a chapter on converting proportional hazards models to generalized linear models. == Under high-dimensional setup == In high-dimension, when number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) has proposed a Lasso procedure for the proportional hazard regression parameter. The Lasso estimator of the regression parameter β is defined as the minimizer of the opposite of the Cox partial log-likelihood under an L1-norm type constraint. ℓ ( β ) = ∑ j ( ∑ i ∈ H j X i ⋅ β − ∑ ℓ = 0 m j − 1 log ⁡ ( ∑ i : Y i ≥ t j θ i − ℓ m j ∑ i ∈ H j θ i ) ) + λ ‖ β ‖ 1 , {\displaystyle \ell (\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}\cdot \beta -\sum _{\ell =0}^{m_{j}-1}\log \left(\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right)\right)+\lambda \|\beta \|_{1},} There has been theoretical progress on this topic recently. == Software implementations == Mathematica: CoxModelFit function. R: coxph() function, located in the survival package. SAS: phreg procedure Stata: stcox command Python: CoxPHFitter located in the lifelines library. phreg in the statsmodels library. SPSS: Available under Cox Regression. MATLAB: fitcox or coxphfit function Julia: Available in the Survival.jl library. JMP: Available in Fit Proportional Hazards platform. Prism: Available in Survival Analyses and Multiple Variable Analyses == See also == Accelerated failure time model One in ten rule Weibull distribution Hypertabastic distribution == Notes == == References ==
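As a usage sketch for one of the implementations listed above, a fit with the Python lifelines package might look like the following (the data frame is invented and merely mirrors the T, C, X layout of the worked hospital example, whose actual table is not reproduced in this text):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical stand-in data: T = observed time, C = event indicator (1 = died),
# X = binary covariate (1 = hospital A, 0 = hospital B).
df = pd.DataFrame({
    "T": [6.0, 13.0, 21.0, 60.0, 9.0, 30.0, 42.0, 60.0],
    "C": [1, 1, 0, 0, 1, 1, 0, 0],
    "X": [1, 1, 1, 1, 0, 0, 0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="C")
cph.print_summary()   # reports the coefficient for X and the hazard ratio exp(coef)
```

The exp(coef) column reported by the summary is the hazard ratio discussed in the examples; with the article's own data it would be the value exp(2.12) ≈ 8.3.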
== Software implementations ==

Mathematica: CoxModelFit function.
R: coxph() function, located in the survival package.
SAS: phreg procedure.
Stata: stcox command.
Python: CoxPHFitter located in the lifelines library; phreg in the statsmodels library.
SPSS: Available under Cox Regression.
MATLAB: fitcox or coxphfit function.
Julia: Available in the Survival.jl library.
JMP: Available in the Fit Proportional Hazards platform.
Prism: Available in Survival Analyses and Multiple Variable Analyses.

== See also ==

Accelerated failure time model
One in ten rule
Weibull distribution
Hypertabastic distribution

== Notes ==

== References ==
Wikipedia/Cox_model
Algorithmic inference gathers new developments in the statistical inference methods made feasible by the powerful computing devices widely available to any data analyst. Cornerstones in this field are computational learning theory, granular computing, bioinformatics, and, long ago, structural probability (Fraser 1966). The main focus is on the algorithms which compute statistics rooting the study of a random phenomenon, along with the amount of data they must feed on to produce reliable results. This shifts the interest of mathematicians from the study of the distribution laws to the functional properties of the statistics, and the interest of computer scientists from the algorithms for processing data to the information they process.

== The Fisher parametric inference problem ==

Concerning the identification of the parameters of a distribution law, the mature reader may recall lengthy disputes in the mid 20th century about the interpretation of their variability in terms of fiducial distribution (Fisher 1956), structural probabilities (Fraser 1966), priors/posteriors (Ramsey 1925), and so on. From an epistemological viewpoint, this entailed a companion dispute as to the nature of probability: is it a physical feature of phenomena to be described through random variables, or a way of synthesizing data about a phenomenon? Opting for the latter, Fisher defines a fiducial distribution law of parameters of a given random variable that he deduces from a sample of its specifications. With this law he computes, for instance, "the probability that μ (mean of a Gaussian variable – our note) is less than any assigned value, or the probability that it lies between any assigned values, or, in short, its probability distribution, in the light of the sample observed".

== The classic solution ==

Fisher fought hard to defend the difference and superiority of his notion of parameter distribution in comparison to analogous notions, such as Bayes' posterior distribution, Fraser's constructive probability and Neyman's confidence intervals. For half a century, Neyman's confidence intervals won out for all practical purposes, crediting the phenomenological nature of probability. With this perspective, when you deal with a Gaussian variable, its mean μ is fixed by the physical features of the phenomenon you are observing, where the observations are random operators, hence the observed values are specifications of a random sample. Because of their randomness, you may compute from the sample specific intervals containing the fixed μ with a given probability that you denote confidence.

=== Example ===

Let X be a Gaussian variable with parameters μ and σ² and {X₁, …, Xₘ} a sample drawn from it. Working with the statistics

S_\mu = \sum_{i=1}^m X_i \qquad \text{and} \qquad S_{\sigma^2} = \sum_{i=1}^m (X_i - \bar X)^2, \quad \text{where } \bar X = \frac{S_\mu}{m} \text{ is the sample mean,}

we recognize that

T = \frac{S_\mu - m\mu}{\sqrt{S_{\sigma^2}}}\,\sqrt{\frac{m-1}{m}} = \frac{\bar X - \mu}{\sqrt{S_{\sigma^2}/(m(m-1))}}

follows a Student's t distribution (Wilks 1962) with parameter (degrees of freedom) m − 1, so that

f_T(t) = \frac{\Gamma(m/2)}{\Gamma((m-1)/2)}\,\frac{1}{\sqrt{\pi(m-1)}}\left(1 + \frac{t^2}{m-1}\right)^{-m/2}.
Gauging T between two quantiles and inverting its expression as a function of μ, you obtain confidence intervals for μ. With the sample specification

x = {7.14, 6.3, 3.9, 6.46, 0.2, 2.94, 4.14, 4.69, 6.02, 1.58}

having size m = 10, you compute the statistics s_μ = 43.37 and s_σ² = 46.07, and obtain a 0.90 confidence interval for μ with extremes (3.03, 5.65).
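A minimal sketch (not part of the original text) that reproduces these numbers with NumPy and SciPy; variable names are chosen for illustration:

```python
# Reproducing the 0.90 confidence interval for the mean from the sample above.
import numpy as np
from scipy import stats

x = np.array([7.14, 6.3, 3.9, 6.46, 0.2, 2.94, 4.14, 4.69, 6.02, 1.58])
m = x.size
s_mu = x.sum()                            # 43.37
s_sigma2 = ((x - x.mean()) ** 2).sum()    # ~46.07

# Gauge T between the 0.05 and 0.95 quantiles of Student's t with m-1 d.o.f.
t_crit = stats.t.ppf(0.95, df=m - 1)
half_width = t_crit * np.sqrt(s_sigma2 / (m * (m - 1)))
print(x.mean() - half_width, x.mean() + half_width)   # ~ (3.03, 5.65)
```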
== Inferring functions with the help of a computer ==

From a modeling perspective the entire dispute looks like a chicken-and-egg dilemma: either fixed data first and the probability distribution of their properties as a consequence, or fixed properties first and the probability distribution of the observed data as a corollary. The classic solution has one benefit and one drawback. The former was appreciated particularly back when people still did computations with paper and pencil. Per se, the task of computing a Neyman confidence interval for the fixed parameter θ is hard: you do not know θ, but you seek to place around it an interval with a possibly very low probability of failing. An analytical solution is available only for a very limited number of theoretical cases. Vice versa, a large variety of instances may be quickly solved in an approximate way via the central limit theorem in terms of a confidence interval around a Gaussian distribution – that's the benefit. The drawback is that the central limit theorem is applicable when the sample size is sufficiently large. Therefore, it is less and less applicable with the samples involved in modern inference instances. The fault is not in the sample size itself. Rather, this size is not sufficiently large because of the complexity of the inference problem.

With the availability of large computing facilities, scientists refocused from isolated parameter inference to complex function inference, i.e., to sets of highly nested parameters identifying functions. In these cases we speak about learning of functions (in terms for instance of regression, neuro-fuzzy systems or computational learning) on the basis of highly informative samples. A first effect of having a complex structure linking data is the reduction of the number of sample degrees of freedom, i.e. the burning of a part of the sample points, so that the effective sample size to be considered in the central limit theorem is too small. Focusing on the sample size ensuring a limited learning error with a given confidence level, the consequence is that the lower bound on this size grows with complexity indices such as the VC dimension or detail of the class to which the function we want to learn belongs.

=== Example ===

A sample of 1,000 independent bits is enough to ensure an absolute error of at most 0.081 on the estimation of the parameter p of the underlying Bernoulli variable with a confidence of at least 0.99. The same size cannot guarantee a threshold less than 0.088 with the same confidence 0.99 when the error is identified with the probability that a 20-year-old man living in New York does not fit the ranges of height, weight and waistline observed on 1,000 Big Apple inhabitants. The accuracy shortage occurs because both the VC dimension and the detail of the class of parallelepipeds, among which the one observed from the 1,000 inhabitants' ranges falls, are equal to 6.

== The general inversion problem solving the Fisher question ==

With insufficiently large samples, the approach (fixed sample – random properties) suggests inference procedures in three steps:

=== Definition ===

For a random variable and a sample drawn from it, a compatible distribution is a distribution having the same sampling mechanism M_X = (Z, g_θ) of X with a value θ of the random parameter Θ derived from a master equation rooted on a well-behaved statistic s.

=== Example ===

You may find the distribution law of the Pareto parameters A and K as an implementation example of the population bootstrap method as in the figure on the left. Implementing the twisting argument method, you get the distribution law F_M(μ) of the mean M of a Gaussian variable X on the basis of the statistic s_M = ∑ᵢ xᵢ when Σ² is known to be equal to σ² (Apolloni, Malchiodi & Gaito 2006). Its expression is

F_M(\mu) = \Phi\!\left(\frac{m\mu - s_M}{\sigma\sqrt{m}}\right),

shown in the figure on the right, where Φ is the cumulative distribution function of a standard normal distribution. Computing a confidence interval for M given its distribution function is straightforward: we need only find two quantiles (for instance the δ/2 and 1 − δ/2 quantiles, in case we are interested in a confidence interval of level δ symmetric in the tails' probabilities) as indicated on the left in the diagram showing the behavior of the two bounds for different values of the statistic s_M.

The Achilles heel of Fisher's approach lies in the joint distribution of more than one parameter, say the mean and variance of a Gaussian distribution. On the contrary, with the last approach (and the above-mentioned methods: population bootstrap and twisting argument) we may learn the joint distribution of many parameters. For instance, focusing on the distribution of two or many more parameters, in the figures below we report two confidence regions where the function to be learnt falls with a confidence of 90%. The former concerns the probability with which an extended support vector machine attributes a binary label 1 to the points of the (x, y) plane. The two surfaces are drawn on the basis of a set of sample points in turn labelled according to a specific distribution law (Apolloni et al. 2008). The latter concerns the confidence region of the hazard rate of breast cancer recurrence computed from a censored sample (Apolloni, Malchiodi & Gaito 2006).
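Returning to the single-parameter Gaussian example: since F_M is monotone in μ, the two quantile bounds follow by direct inversion of F_M(μ) = q, giving μ_q = (s_M + σ√m Φ⁻¹(q))/m. A minimal sketch of that computation (the value of σ is assumed purely for illustration; it is not given in the text):

```python
# Hedged sketch: inverting F_M(mu) = Phi((m*mu - s_M) / (sigma*sqrt(m))) to get
# a two-sided confidence interval for the mean M when sigma is known.
import numpy as np
from scipy import stats

m, sigma = 10, 2.0           # sample size and an *assumed* known std. deviation
s_M = 43.37                  # observed value of the statistic sum(x_i)
delta = 0.10                 # total tail probability, split between the two sides

# Solve F_M(mu) = q for q = delta/2 and q = 1 - delta/2.
z = stats.norm.ppf([delta / 2, 1 - delta / 2])
bounds = (s_M + sigma * np.sqrt(m) * z) / m
print(bounds)                # lower and upper bounds for the mean M
```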
== Notes ==

== References ==
Fraser, D. A. S. (1966), "Structural probability and generalization", Biometrika, 53 (1/2): 1–9, doi:10.2307/2334048, JSTOR 2334048.
Fisher, R. A. (1956), Statistical Methods and Scientific Inference, Edinburgh and London: Oliver and Boyd.
Apolloni, B.; Malchiodi, D.; Gaito, S. (2006), Algorithmic Inference in Machine Learning, International Series on Advanced Intelligence, vol. 5 (2nd ed.), Adelaide: Magill, Advanced Knowledge International.
Apolloni, B.; Bassis, S.; Malchiodi, D.; Pedrycz, W. (2008), The Puzzle of Granular Computing, Studies in Computational Intelligence, vol. 138, Berlin: Springer, ISBN 9783540798637.
Ramsey, F. P. (1925), "The Foundations of Mathematics", Proceedings of the London Mathematical Society: 338–384, doi:10.1112/plms/s2-25.1.338.
Wilks, S.S. (1962), Mathematical Statistics, Wiley Publications in Statistics, New York: John Wiley.
Wikipedia/Algorithmic_inference
Multimethodology or multimethod research includes the use of more than one method of data collection or research in a research study or set of related studies. Mixed methods research is more specific in that it includes the mixing of qualitative and quantitative data, methods, methodologies, and/or paradigms in a research study or set of related studies. One could argue that mixed methods research is a special case of multimethod research. Another applicable, but less often used label, for multi or mixed research is methodological pluralism. All of these approaches to professional and academic research emphasize that monomethod research can be improved through the use of multiple data sources, methods, research methodologies, perspectives, standpoints, and paradigms. The term multimethodology was used starting in the 1980s and in the 1989 book Multimethod Research: A Synthesis of Styles by John Brewer and Albert Hunter. During the 1990s and currently, the term mixed methods research has become more popular for this research movement in the behavioral, social, business, and health sciences. This pluralistic research approach has been gaining in popularity since the 1980s. == Multi and mixed methods research designs == There are four broad classes of research studies that are currently being labeled "mixed methods research": Quantitatively driven approaches/designs in which the research study is, at its core, a quantitative study with qualitative data/method added to supplement and improve the quantitative study by providing an added value and deeper, wider, and fuller or more complex answers to research questions; quantitative quality criteria are emphasized but high quality qualitative data also must be collected and analyzed; Qualitatively driven approaches/designs in which the research study is, at its core, a qualitative study with quantitative data/method added to supplement and improve the qualitative study by providing an added value and deeper, wider, and fuller or more complex answers to research questions; qualitative quality criteria are emphasized but high quality quantitative data also must be collected and analyzed; Interactive or equal status designs in which the research study equally emphasizes (interactively and through integration) quantitative and qualitative data, methods, methodologies, and paradigms. This third design is often done through the use of a team composed of an expert in quantitative research, an expert in qualitative research, and an expert in mixed methods research to help with dialogue and continual integration. In this type of mixed study, quantitative and qualitative and mixed methods quality criteria are emphasized. This use of multiple quality criteria is seen in the concept of multiple validities legitimation. Here is a definition of this important type of validity or legitimation: Multiple validities legitimation "refers to the extent to which the mixed methods researcher successfully addresses and resolves all relevant validity types, including the quantitative and qualitative validity types discussed earlier in this chapter as well as the mixed validity dimensions. In other words, the researcher must identify and address all of the relevant validity issues facing a particular research study. Successfully addressing the pertinent validity issues will help researchers produce the kinds of inferences and meta-inferences that should be made in mixed research"(Johnson & Johnson, 2014; page 311). 
Mixed priority designs in which the principal study results derive from the integration of qualitative and quantitative data during analysis.

== Desirability ==

The case for multimethodology or mixed methods research as a strategy for intervention and/or research is based on four observations: Narrow views of the world are often misleading, so approaching a subject from different perspectives or paradigms may help to gain a holistic or more truthful worldview. There are different levels of social research (e.g., biological, cognitive, social), and different methodologies may have particular strengths with respect to one of these levels. Using more than one should help to get a clearer picture of the social world and make for more adequate explanations. Many existing practices already combine methodologies to solve particular problems, yet they have not been theorised sufficiently. Multimethodology fits well with pragmatism.

== Feasibility ==

There are also some hazards to multimethodological or mixed methods research approaches. Some of these problems include: Many paradigms are at odds with each other. However, once the understanding of the difference is present, it can be an advantage to see many sides, and possible solutions may present themselves. Multimethod and mixed method research can be undertaken from many paradigmatic perspectives, including pragmatism, dialectical pluralism, critical realism, and constructivism. Cultural issues affect world views and analyzability. Knowledge of a new paradigm is not enough to overcome potential biases; it must be learned through practice and experience. People have cognitive abilities that predispose them to particular paradigms. Quantitative research requires skills of data analysis and several techniques of statistical reasoning, while qualitative research is rooted in in-depth observation, comparative thinking, interpretative skills and interpersonal ability. Neither of the approaches is easier to master than the other, and both require specific expertise, ability and skills.

== Pragmatism and mixed methods ==

Pragmatism allows for the integration of qualitative and quantitative methods as loosely coupled systems to support mixed methods research. On the one hand, quantitative research is characterized by randomized controlled trials, research questions inspired by a literature review gap, generalizability, validity, and reliability. On the other, qualitative research is characterized by socially constructed realities and lived experiences. Pragmatism reconciles these differences and integrates quantitative and qualitative research as loosely coupled systems, where "open systems interact with each other at the point of their boundaries.": 281

=== History of Pragmatism in Multi/Mixed Methods Research ===

Developed as a philosophical method to solve problems towards the end of the nineteenth century, pragmatism is attributed to the work of philosopher Charles Sanders Peirce. For Peirce, research is conducted and interpreted from the eye of the beholder, as a practical approach to investigating social affairs. He sees science as a communal affair leading to single truths that are arrived at from multiple perspectives. For Peirce, the research conclusions are not as important as how these conclusions are reached. Focus is on answering the research question while allowing the methods to emerge in the process. Peirce's pragmatism and its approach to research support qualitatively driven mixed methods studies.
John Dewey extends both "Peirce pragmatic method and (William) James' radical empiricism (and approach to experience) by application to social and political problems.": 70 His philosophical pragmatism takes an interdisciplinary approach, where the divide between quantitative and qualitative research represents an obstacle to solving a problem. In Dewey's pragmatism, success is measured by the outcome, where the outcome is the reason to engage in research. Lived experiences constitute reality, where individual lived experiences form a continuum through the interaction of subjective (internal) and objective (external) conditions. In Dewey's continuum of experiences, no experience lives on its own; it is influenced by the experiences that preceded it, and influences those that will follow it. His approach to knowledge is open-minded, and inquiry is central to his epistemology. Following Dewey, quantitatively driven research methods dominated until 1979, when Richard Rorty revived pragmatism. Rorty introduces his own ideas into pragmatism, which include the importance of culture, beliefs, and context. He shifts from understanding how things are to how they could be, and introduces the idea that "justification is audience dependent, and pretty much any justification finds a receptive audience".: 76 As Rorty explains, research success is peer dependent, not peer group neutral. From his perspective, MMR is not simply the merging of quantitative and qualitative research, but a third camp with its own peers and supporters.

=== Pragmatic philosophical positions ===

Multiple pragmatic philosophical stances may be used to justify pragmatism as a paradigm when conducting mixed methods research (MMR). A research paradigm provides a framework based on what constitutes knowledge and how it is formed. Pragmatism as a philosophy may aid researchers in positioning themselves somewhere in the spectrum between qualitatively driven and quantitatively driven methods. The following philosophical stances can help address the debate between the use of qualitative and quantitative methods, and ground quantitatively, qualitatively, or equal-status driven MMR.

==== Radical empiricism ====

Radical empiricism, as articulated by William James, takes reality as a function of our ongoing experiences, constantly changing at the individual level. James emphasizes that reality is not predetermined, and individual free will and chance matter. These ideas fit well with qualitative research emphasizing lived experiences. James also finds the truth in empirical and objective facts, bridging the divide between qualitative and quantitative research. However, James points out that no truth is independent of the thinker. James' brand of pragmatism may be used by researchers conducting qualitatively and equal-status driven MMR.

==== Dialectical Pluralism ====

Dialectical pluralism is a form of pragmatism that emphasizes intentionally drawing from multiple approaches to conducting research and developing knowledge. The multiple approaches being taken need not agree or converge with one another. Instead, the researcher using dialectical pluralism in the conduct of a mixed-method study may tack back and forth between models and perspectives in order to develop insight.

==== Realism and Critical Realism ====

Realists and critical realists take the perspective that the world exists independently of our observation and interpretation of it; critical realism goes beyond this to assert that multiple interpretations of the world are likely.
Like dialectical pluralism, realist paradigms in the context of pragmatic multi/mixed-methods research emphasize the idea that multiple approaches to knowledge are expected and can be treated as complementary. In contrast to a stricter positivist approach, critical realism sees causality as embedded in the details of a situation and the social processes that surround an event.

==== Transformative-Emancipatory ====

Transformative and emancipatory paradigms emphasize a commitment on the part of the researcher to social justice, as in critical race theory. Researchers conducting multi-method or mixed-methods research within this paradigm tend to orient to issues of "power, privilege, and inequity.": 49

== In contrast to quantitative and qualitative methodologies ==

One major similarity between mixed methodologies and qualitative and quantitative methodologies taken separately is that researchers need to maintain focus on the original purpose behind their methodological choices. A major difference between the two, however, is the way some authors differentiate the two, proposing that there is a logic inherent in one that is different from the other. Creswell (2009) points out that in a quantitative study the researcher starts with a problem statement, moving on to the hypothesis and null hypothesis, through the instrumentation into a discussion of data collection, population, and data analysis. Creswell proposes that for a qualitative study the flow of logic begins with the purpose for the study, moves through the research questions, discussed as data collected from a smaller group, and then voices how they will be analysed.

A research strategy is a procedure for achieving a particular intermediary research objective — such as sampling, data collection, or data analysis. We may therefore speak of sampling strategies or data analysis strategies. The use of multiple strategies to enhance construct validity (a form of methodological triangulation) is now routinely advocated by methodologists. In short, mixing or integrating research strategies (qualitative and/or quantitative) in any and all research undertakings is now considered a common feature of good research.

A research approach refers to an integrated set of research principles and general procedural guidelines. Approaches are broad, holistic (but general) methodological guides or roadmaps that are associated with particular research motives or analytic interests. Two examples of analytic interests are population frequency distributions and prediction. Examples of research approaches include experiments, surveys, correlational studies, ethnographic research, and phenomenological inquiry. Each approach is ideally suited to addressing a particular analytic interest. For instance, experiments are ideally suited to addressing nomothetic explanations or probable cause; surveys — population frequency descriptions; correlational studies — predictions; ethnography — descriptions and interpretations of cultural processes; and phenomenology — descriptions of the essence of phenomena or lived experiences. In a single approach design (SAD) (also called a "monomethod design") only one analytic interest is pursued. In a mixed or multiple approach design (MAD) two or more analytic interests are pursued.
Note: a multiple approach design may include entirely "quantitative" approaches such as combining a survey and an experiment, or entirely "qualitative" approaches such as combining an ethnographic and a phenomenological inquiry; a mixed approach design includes a mixture of the above (e.g., a mixture of quantitative and qualitative data, methods, methodologies, and/or paradigms).

A word of caution about the term "multimethodology". It has become quite commonplace to use the terms "method" and "methodology" as synonyms (as is the case with the above entry). However, there are convincing philosophical reasons for distinguishing the two. "Method" connotes a way of doing something — a procedure (such as a method of data collection). "Methodology" connotes a discourse about methods — i.e., a discourse about the adequacy and appropriateness of a particular combination of research principles and procedures. The terms methodology and biology share the common suffix "-logy". Just as bio-logy is a discourse about life — all kinds of life — so too, methodo-logy is a discourse about methods — all kinds of methods. It seems unproductive, therefore, to speak of multi-biologies or of multi-methodologies. It is very productive, however, to speak of multiple biological perspectives or of multiple methodological perspectives.

== See also ==

Perestroika Movement (political science)
Post-autistic economics
Computer-assisted qualitative data analysis software

== References ==

== Further reading ==

== External links ==

Mixed Methods Network for Behavioral, Social, and Health Sciences
Official website of Mixed Methods International Research Association
Wikipedia/Multimethodology
Constructivism is a view in the philosophy of science that maintains that scientific knowledge is constructed by the scientific community, which seeks to measure and construct models of the natural world. According to constructivists, natural science consists of mental constructs that aim to explain sensory experiences and measurements, and that there is no single valid methodology in science but rather a diversity of useful methods. They also hold that the world is independent of human minds, but knowledge of the world is always a human and social construction. Constructivism opposes the philosophy of objectivism, embracing the belief that human beings can come to know the truth about the natural world not mediated by scientific approximations with different degrees of validity and accuracy. == Constructivism and sciences == === Social constructivism in sociology === One version of social constructivism contends that categories of knowledge and reality are actively created by social relationships and interactions. These interactions also alter the way in which scientific episteme is organized. Social activity presupposes human interaction, and in the case of social construction, utilizing semiotic resources (meaning-making and signifying) with reference to social structures and institutions. Several traditions use the term Social Constructivism: psychology (after Lev Vygotsky), sociology (after Peter Berger and Thomas Luckmann, themselves influenced by Alfred Schütz), sociology of knowledge (David Bloor), sociology of mathematics (Sal Restivo), philosophy of mathematics (Paul Ernest). Ludwig Wittgenstein's later philosophy can be seen as a foundation for social constructivism, with its key theoretical concepts of language games embedded in forms of life. === Constructivism in philosophy of science === Thomas Kuhn argued that changes in scientists' views of reality not only contain subjective elements but result from group dynamics, "revolutions" in scientific practice, and changes in "paradigms". As an example, Kuhn suggested that the Sun-centric Copernican "revolution" replaced the Earth-centric views of Ptolemy not because of empirical failures but because of a new "paradigm" that exerted control over what scientists felt to be the more fruitful way to pursue their goals. But paradigm debates are not really about relative problem-solving ability, though for good reasons they are usually couched in those terms. Instead, the issue is which paradigm should in future guide research on problems many of which neither competitor can yet claim to resolve completely. A decision between alternate ways of practicing science is called for, and in the circumstances that decision must be based less on past achievement than on future promise. ... A decision of that kind can only be made on faith. The view of reality as accessible only through models was called model-dependent realism by Stephen Hawking and Leonard Mlodinow. While not rejecting an independent reality, model-dependent realism says that we can know only an approximation of it provided by the intermediary of models. These models evolve over time as guided by scientific inspiration and experiments. In the field of the social sciences, constructivism as an epistemology urges that researchers reflect upon the paradigms that may be underpinning their research, and in the light of this that they become more open to considering other ways of interpreting any results of the research. 
Furthermore, the focus is on presenting results as negotiable constructs rather than as models that aim to "represent" social realities more or less accurately. Norma Romm, in her book Accountability in Social Research (2001), argues that social researchers can earn trust from participants and wider audiences insofar as they adopt this orientation and invite inputs from others regarding their inquiry practices and the results thereof. === Constructivism and psychology === In psychology, constructivism refers to many schools of thought that, though extraordinarily different in their techniques (applied in fields such as education and psychotherapy), are all connected by a common critique of previous standard objectivist approaches. Constructivist psychology schools share assumptions about the active constructive nature of human knowledge. In particular, the critique is aimed at the "associationist" postulate of empiricism, "by which the mind is conceived as a passive system that gathers its contents from its environment and, through the act of knowing, produces a copy of the order of reality.": 16  In contrast, "constructivism is an epistemological premise grounded on the assertion that, in the act of knowing, it is the human mind that actively gives meaning and order to that reality to which it is responding".: 16  The constructivist psychologies theorize about and investigate how human beings create systems for meaningfully understanding their worlds and experiences. === Constructivism and education === Joe L. Kincheloe has published numerous social and educational books on critical constructivism (2001, 2005, 2008), a version of constructivist epistemology that places emphasis on the exaggerated influence of political and cultural power in the construction of knowledge, consciousness, and views of reality. In the contemporary mediated electronic era, Kincheloe argues, dominant modes of power have never exerted such influence on human affairs. Coming from a critical pedagogical perspective, Kincheloe argues that understanding a critical constructivist epistemology is central to becoming an educated person and to the institution of just social change. 
Kincheloe's characteristics of critical constructivism:

Knowledge is socially constructed: World and information co-construct one another
Consciousness is a social construction
Political struggles: Power plays an exaggerated role in the production of knowledge and consciousness
The necessity of understanding consciousness—even though it does not lend itself to traditional reductionistic modes of measurability
The importance of uniting logic and emotion in the process of knowledge and producing knowledge
The inseparability of the knower and the known
The centrality of the perspectives of oppressed peoples—the value of the insights of those who have suffered as the result of existing social arrangements
The existence of multiple realities: Making sense of a world far more complex than we originally imagined
Becoming humble knowledge workers: Understanding our location in the tangled web of reality
Standpoint epistemology: Locating ourselves in the web of reality, we are better equipped to produce our own knowledge
Constructing practical knowledge for critical social action
Complexity: Overcoming reductionism
Knowledge is always entrenched in a larger process
The centrality of interpretation: Critical hermeneutics
The new frontier of classroom knowledge: Personal experiences intersecting with pluriversal information
Constructing new ways of being human: Critical ontology

== Constructivist approaches ==

=== Critical constructivism ===

A series of articles published in the journal Critical Inquiry (1991) served as a manifesto for the movement of critical constructivism in various disciplines, including the natural sciences. Not only truth and reality, but also "evidence", "document", "experience", "fact", "proof", and other central categories of empirical research (in physics, biology, statistics, history, law, etc.) reveal their contingent character as a social and ideological construction. Thus, a "realist" or "rationalist" interpretation is subjected to criticism. Kincheloe's political and pedagogical notion (above) has emerged as a central articulation of the concept.

=== Cultural constructivism ===

Cultural constructivism asserts that knowledge and reality are a product of their cultural context, meaning that two independent cultures will likely form different observational methodologies.

=== Genetic epistemology ===

James Mark Baldwin invented this expression, which was later popularized by Jean Piaget. From 1955 to 1980, Piaget was Director of the International Centre for Genetic Epistemology in Geneva.

=== Radical constructivism ===

Ernst von Glasersfeld was a prominent proponent of radical constructivism. This claims that knowledge is not a commodity that is transported from one mind into another. Rather, it is up to the individual to "link up" specific interpretations of experiences and ideas with their own reference of what is possible and viable. That is, the process of constructing knowledge, of understanding, is dependent on the individual's subjective interpretation of their active experience, not what "actually" occurs. Understanding and acting are seen by radical constructivists not as dualistic processes but "circularly conjoined". Radical constructivism is closely related to second-order cybernetics. Constructivist Foundations is a free online journal publishing peer-reviewed articles on radical constructivism by researchers from multiple domains.
=== Relational constructivism ===

Relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads. It maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception (i.e., self-referentially operating cognition). Therefore, humans are not able to come to objective conclusions about the world. In spite of the subjectivity of human constructions of reality, relational constructivism focuses on the relational conditions applying to human perceptual processes. Björn Kraus puts it in a nutshell: It is substantial for relational constructivism that it basically originates from an epistemological point of view, thus from the subject and its construction processes. Coming from this perspective it then focusses on the (not only social, but also material) relations under which these cognitive construction processes are performed. Consequently, it's not only about social construction processes, but about cognitive construction processes performed under certain relational conditions.

=== Social constructivism ===

== Criticisms ==

Numerous criticisms have been levelled at constructivism. The most common one is that it either explicitly advocates or implicitly reduces to relativism. Another criticism of constructivism is that it holds that the concepts of two different social formations are entirely different and incommensurable. This being the case, it is impossible to make comparative judgments about statements made according to each worldview. This is because the criteria of judgment will themselves have to be based on some worldview or other. If this is the case, then it brings into question how communication between them about the truth or falsity of any given statement could be established. The Wittgensteinian philosopher Gavin Kitching argues that constructivists usually implicitly presuppose a deterministic view of language, which severely constrains the minds and use of words by members of societies: they are not just "constructed" by language on this view but are literally "determined" by it. Kitching notes the contradiction here: somehow, the advocate of constructivism is not similarly constrained. While other individuals are controlled by the dominant concepts of society, the advocate of constructivism can transcend these concepts and see through them.

== See also ==

== References ==

== Further reading ==

Devitt, M. 1997. Realism and Truth, Princeton University Press. Gillett, E. 1998. "Relativism and the Social-constructivist Paradigm", Philosophy, Psychiatry, & Psychology, Vol.5, No.1, pp. 37–48 Ernst von Glasersfeld 1987. The construction of knowledge, Contributions to conceptual semantics. Ernst von Glasersfeld 1995. Radical constructivism: A way of knowing and learning. Joe L. Kincheloe 2001. Getting beyond the Facts: Teaching Social Studies/Social Science in the Twenty-First Century, NY: Peter Lang. Joe L. Kincheloe 2005. Critical Constructivism Primer, NY: Peter Lang. Joe L. Kincheloe 2008. Knowledge and Critical Pedagogy, Dordrecht, The Netherlands: Springer. Kitching, G. 2008. The Trouble with Theory: The Educational Costs of Postmodernism, Penn State University Press. Björn Kraus 2014: Introducing a model for analyzing the possibilities of power, help and control. In: Social Work and Society. International Online Journal.
Retrieved 3 April 2019.(http://www.socwork.net/sws/article/view/393) Björn Kraus 2015: The Life We Live and the Life We Experience: Introducing the Epistemological Difference between "Lifeworld" (Lebenswelt) and "Life Conditions" (Lebenslage). In: Social Work and Society. International Online Journal. Retrieved 27 August 2018.(http://www.socwork.net/sws/article/view/438). Björn Kraus 2019: Relational constructivism and relational social work. In: Webb, Stephen, A. (edt.) The Routledge Handbook of Critical Social Work. Routledge international Handbooks. London and New York: Taylor & Francis Ltd. Friedrich Kratochwil: Constructivism: what it is (not) and how it matters, in Donatella della Porta & Michael Keating (eds.) 2008, Approaches and Methodologies in the Social Sciences: A Pluralist Perspective, Cambridge University Press, 80–98. Mariyani-Squire, E. 1999. "Social Constructivism: A flawed Debate over Conceptual Foundations", Capitalism, Nature, Socialism, vol.10, no.4, pp. 97–125 Matthews, M.R. (ed.) 1998. Constructivism in Science Education: A Philosophical Examination, Kluwer Academic Publishers. Edgar Morin 1986, La Méthode, Tome 3, La Connaissance de la connaissance. Nola, R. 1997. "Constructivism in Science and in Science Education: A Philosophical Critique", Science & Education, Vol.6, no.1-2, pp. 55–83. Jean Piaget (ed.) 1967. Logique et connaissance scientifique, Encyclopédie de la Pléiade, vol. 22. Editions Gallimard. Herbert A. Simon 1969. The Sciences of the Artificial (3rd Edition MIT Press 1996). Slezak, P. 2000. "A Critique of Radical Social Constructivism", in D.C. Philips, (ed.) 2000, Constructivism in Education: Opinions and Second Opinions on Controversial Issues, The University of Chicago Press. Suchting, W.A. 1992. "Constructivism Deconstructed", Science & Education, vol.1, no.3, pp. 223–254 Paul Watzlawick 1984. The Invented Reality: How Do We Know What We Believe We Know? (Contributions to Constructivism), W W. Norton. Tom Rockmore 2008. On Constructivist Epistemology. Romm, N.R.A. 2001. Accountability in Social Research, Dordrecht, The Netherlands: Springer. https://www.springer.com/social+sciences/book/978-0-306-46564-2 == External links == Journal of Constructivist Psychology Radical Constructivism Constructivist Foundations
Wikipedia/Constructivism_(philosophy_of_science)
Historiography is the study of the methods used by historians in developing history as an academic discipline. By extension, the term "historiography" is any body of historical work on a particular subject. The historiography of a specific topic covers how historians have studied that topic by using particular sources, techniques of research, and theoretical approaches to the interpretation of documentary sources. Scholars discuss historiography by topic—such as the historiography of the United Kingdom, of WWII, of the pre-Columbian Americas, of early Islam, and of China—and different approaches to the work and the genres of history, such as political history and social history. Beginning in the nineteenth century, the development of academic history produced a great corpus of historiographic literature. The extent to which historians are influenced by their own groups and loyalties—such as to their nation state—remains a debated question. In Europe, the academic discipline of historiography was established in the 5th century BC with the Histories, by Herodotus, who thus established Greek historiography. In the 2nd century BC, the Roman statesman Cato the Elder produced the Origines, which is the first Roman historiography. In Asia, the father and son intellectuals Sima Tan and Sima Qian established Chinese historiography with the book Shiji (Records of the Grand Historian), in the time of the Han Empire in Ancient China. During the Middle Ages, medieval historiography included the works of chronicles in medieval Europe, the Ethiopian Empire in the Horn of Africa, Islamic histories by Muslim historians, and the Korean and Japanese historical writings based on the existing Chinese model. During the 18th-century Age of Enlightenment, historiography in the Western world was shaped and developed by figures such as Voltaire, David Hume, and Edward Gibbon, who among others set the foundations for the modern discipline. In the 19th century, historical studies became professionalized at universities and research centers along with a belief that history was like a science. In the 20th century, historians incorporated social science dimensions like politics, economy, and culture in their historiography. The research interests of historians change over time, and there has been a shift away from traditional diplomatic, economic, and political history toward newer approaches, especially social and cultural studies. From 1975 to 1995 the proportion of professors of history in American universities identifying with social history increased from 31 to 41 percent, while the proportion of political historians decreased from 40 to 30 percent. In 2007, of 5,723 faculty members in the departments of history at British universities, 1,644 (29 percent) identified themselves with social history and 1,425 (25 percent) identified themselves with political history. Since the 1980s there has been a special interest in the memories and commemoration of past events—the histories as remembered and presented for popular celebration. == Terminology == In the early modern period, the term historiography meant "the writing of history", and historiographer meant "historian". In that sense certain official historians were given the title "Historiographer Royal" in Sweden (from 1618), England (from 1660), and Scotland (from 1681). The Scottish post is still in existence. 
Historiography was more recently defined as "the study of the way history has been and is written—the history of historical writing", which means that, "When you study 'historiography' you do not study the events of the past directly, but the changing interpretations of those events in the works of individual historians." == History == === Antiquity === Understanding the past appears to be a universal human need, and the "telling of history" has emerged independently in civilizations around the world. What constitutes history is a philosophical question (see philosophy of history). The earliest chronologies date back to ancient Egypt and Sumerian/Akkadian Mesopotamia, in the form of chronicles and annals. However, most historical writers in these early civilizations were not known by name, and their works usually did not contain narrative structures or detailed analysis. By contrast, the term "historiography" is taken to refer to written history recorded in a narrative format for the purpose of informing future generations about events. In this limited sense, "ancient history" begins with the written history of early historiography in Classical Antiquity, established in 5th century BC Classical Greece. ==== Europe ==== ===== Greece ===== The earliest known systematic historical thought and methodologies emerged in ancient Greece and the wider Greek world, a development which would be an important influence on the writing of history elsewhere around the Mediterranean region. The tradition of logography in Archaic Greece preceded the full narrative form of historiography, in which logographers such as Hecataeus of Miletus provided prose compilations about places in geography and peoples in an early form of cultural anthropology, as well as speeches used in courts of law. The earliest known fully narrative critical historical works were The Histories, composed by Herodotus of Halicarnassus (484–425 BC) who became known as the "father of history". Herodotus attempted to distinguish between more and less reliable accounts, and personally conducted research by travelling extensively, giving written accounts of various Mediterranean cultures. Although Herodotus' overall emphasis lay on the actions and characters of men, he also attributed an important role to divinity in the determination of historical events. The generation following Herodotus witnessed a spate of local histories of the individual city-states (poleis), written by the first of the local historians who employed the written archives of city and sanctuary. Dionysius of Halicarnassus characterized these historians as the forerunners of Thucydides, and these local histories continued to be written into Late Antiquity, as long as the city-states survived. Two early figures stand out: Hippias of Elis, who produced the lists of winners in the Olympic Games that provided the basic chronological framework as long as the pagan classical tradition lasted, and Hellanicus of Lesbos, who compiled more than two dozen histories from civic records, all of them now lost. Thucydides largely eliminated divine causality in his account of the war between Athens and Sparta, establishing a rationalistic element which set a precedent for subsequent Western historical writings. He was also the first to distinguish between cause and immediate origins of an event, while his successor Xenophon (c. 431 – 355 BC) introduced autobiographical elements and biographical character studies in his Anabasis. 
The proverbial Philippic attacks of the Athenian orator Demosthenes (384–322 BC) on Philip II of Macedon marked the height of ancient political agitation. The now lost history of Alexander's campaigns by the diadoch Ptolemy I (367–283 BC) may represent the first historical work composed by a ruler. Polybius (c. 203 – 120 BC) wrote on the rise of the Roman Republic to world prominence, and attempted to harmonize the Greek and Roman points of view. Diodorus Siculus composed a universal history, the Bibliotheca historica, that sought to explain various known civilizations from their origins up until his own day in the 1st century BC. The Chaldean priest Berossus (fl. 3rd century BC) composed a Greek-language History of Babylonia for the Seleucid king Antiochus I, combining Hellenistic methods of historiography and Mesopotamian accounts to form a unique composite. Reports exist of other near-eastern histories, such as that of the Phoenician historian Sanchuniathon; but he is considered semi-legendary and writings attributed to him are fragmentary, known only through the later historians Philo of Byblos and Eusebius, who asserted that he wrote before even the Trojan War. The native Egyptian priest and historian Manetho composed a history of Egypt in Greek for the Ptolemaic royal court during the 3rd century BC.

===== Rome =====

The Romans adopted the Greek tradition, writing at first in Greek, but eventually chronicling their history in their own language, Latin. Early Roman works were still written in Greek, such as the annals of Quintus Fabius Pictor. However, the Origines, composed by the Roman statesman Cato the Elder (234–149 BC), was written in Latin, in a conscious effort to counteract Greek cultural influence. It marked the beginning of Latin historical writings. Hailed for its lucid style, Julius Caesar's (100–44 BC) de Bello Gallico exemplifies autobiographical war coverage. The politician and orator Cicero (106–43 BC) introduced rhetorical elements in his political writings. Strabo (63 BC – c. 24 AD) was an important exponent of the Greco-Roman tradition of combining geography with history, presenting a descriptive history of peoples and places known to his era. The Roman historian Sallust (86–35 BC) sought to analyze and document what he viewed as the decline of the Republican Roman state and its virtues, highlighted in his respective narrative accounts of the Catilinarian conspiracy and the Jugurthine War. Livy (59 BC – 17 AD) records the rise of Rome from city-state to empire. His speculation about what would have happened if Alexander the Great had marched against Rome represents the first known instance of alternate history. Biography, although popular throughout antiquity, was introduced as a branch of history by the works of Plutarch (c. 45 – 125 AD) and Suetonius (c. 69 – after 130 AD), who described the deeds and characters of ancient personalities, stressing their human side. Tacitus (c. 56 – c. 117 AD) denounces Roman immorality by praising German virtues, elaborating on the topos of the Noble savage. Tacitus' focus on personal character can also be viewed as pioneering work in psychohistory. Although rooted in Greek historiography, in some ways Roman historiography shared traits with Chinese historiography, lacking speculative theories and instead relying on annalistic forms, revering ancestors, and imparting moral lessons for their audiences, laying the groundwork for medieval Christian historiography.
==== Biblical ====

Biblical historiography is the study of the writing of history in the context of the Hebrew Bible, and covers the period from the 12th century BC to the mid-4th century BC, with a revival during the Hasmonean period in the second century BC. It encompasses two main trends: a quasi-secular approach focusing on political history, and a kerygmatic approach emphasizing divine action and moral lessons.

==== East Asia ====

===== China =====

The Han dynasty eunuch Sima Qian (145–86 BC) was the first in China to lay the groundwork for professional historical writing. His work superseded the older style of the Spring and Autumn Annals, compiled in the 5th century BC, the Bamboo Annals, the Classic of History, and other court and dynastic annals that recorded history in a chronological form that abstained from analysis and focused on moralistic teaching. In 281 AD the tomb of King Xiang of Wei (d. 296 BC) was opened, inside of which was found a historical text called the Bamboo Annals, after the writing material. It is similar in style to the Spring and Autumn Annals and covers events from the mythical Yellow Emperor to 299 BC. Opinions on the authenticity of the text have varied throughout the centuries, and it was rediscovered too late to gain the same status as the Spring and Autumn Annals.

Sima's Shiji (Records of the Grand Historian), initiated by his father the court astronomer Sima Tan (165–110 BC), pioneered the "Annals-biography" format, which would become the standard for prestige history writing in China. In this genre a history opens with a chronological outline of court affairs, and then continues with detailed biographies of prominent people who lived during the period in question. The scope of his work extended as far back as the 16th century BC with the founding of the Shang dynasty. It included many treatises on specific subjects and individual biographies of prominent people. He also explored the lives and deeds of commoners, both contemporary and those of previous eras. Whereas Sima's had been a universal history from the beginning of time down to the time of writing, his successor Ban Gu wrote an annals-biography history limiting its coverage to only the Western Han dynasty, the Book of Han (96 AD). This established the notion of using dynastic boundaries as start- and end-points, and most later Chinese histories would focus on a single dynasty or group of dynasties. The Records of the Grand Historian and Book of Han were eventually joined by the Book of the Later Han (AD 488) (replacing the earlier, and now only partially extant, Han Records from the Eastern Pavilion) and the Records of the Three Kingdoms (AD 297) to form the "Four Histories". These became mandatory reading for the Imperial Examinations and have therefore exerted an influence on Chinese culture comparable to the Confucian Classics. More annals-biography histories were written in subsequent dynasties, eventually bringing the number to between twenty-four and twenty-six, but none ever reached the popularity and impact of the first four. Traditional Chinese historiography describes history in terms of dynastic cycles. In this view, each new dynasty is founded by a morally righteous founder. Over time, the dynasty becomes morally corrupt and dissolute. Eventually, the dynasty becomes so weak as to allow its replacement by a new dynasty.
=== Middle Ages to Renaissance === ==== Christendom ==== Christian historical writing arguably begins with the narrative sections of the New Testament, particularly Luke-Acts, which is the primary source for the Apostolic Age, though its historical reliability is disputed. The first tentative beginnings of a specifically Christian historiography can be seen in Clement of Alexandria in the second century. The growth of Christianity and its enhanced status in the Roman Empire after Constantine I (see State church of the Roman Empire) led to the development of a distinct Christian historiography, influenced by both Christian theology and the nature of the Christian Bible, encompassing new areas of study and views of history. The central role of the Bible in Christianity is reflected in the preference of Christian historians for written sources, compared to the classical historians' preference for oral sources and is also reflected in the inclusion of politically unimportant people. Christian historians also focused on development of religion and society. This can be seen in the extensive inclusion of written sources in the Ecclesiastical History of Eusebius of Caesarea around 324 and in the subjects it covers. Christian theology considered time as linear, progressing according to divine plan. As God's plan encompassed everyone, Christian histories in this period had a universal approach. For example, Christian writers often included summaries of important historical events prior to the period covered by the work. Writing history was popular among Christian monks and clergy in the Middle Ages. They wrote about the history of Jesus Christ, that of the Church and that of their patrons, the dynastic history of the local rulers. In the Early Middle Ages historical writing often took the form of annals or chronicles recording events year by year, but this style tended to hamper the analysis of events and causes. An example of this type of writing is the Anglo-Saxon Chronicle, which was the work of several different writers: it was started during the reign of Alfred the Great in the late 9th century, but one copy was still being updated in 1154. Some writers in the period did construct a more narrative form of history. These included Gregory of Tours and more successfully Bede, who wrote both secular and ecclesiastical history and who is known for writing the Ecclesiastical History of the English People. Outside of Europe and West Asia, Christian historiography also existed in Africa. For instance, Augustine of Hippo, the Berber theologian and bishop of Hippo Regius in Numidia (Roman North Africa), wrote a multiple volume autobiography called Confessions between 397 and 400 AD. While earlier pagan rulers of the Kingdom of Aksum produced autobiographical style epigraphic texts in locations spanning Ethiopia, Eritrea, and Sudan and in either Greek or the native Ge'ez script, the 4th century AD Ezana Stone commemorating Ezana of Axum's conquest of the Kingdom of Kush in Nubia also emphasized his conversion to Christianity (the first indigenous African head of state to do so). Aksumite manuscripts from the 5th to 7th centuries AD chronicling the dioceses and episcopal sees of the Coptic Orthodox Church demonstrate not only an adherence to Christian chronology but also influences from the non-Christian Kingdom of Kush, the Ptolemaic dynasty of Hellenistic Egypt, and the Yemenite Jews of the Himyarite Kingdom. 
The tradition of Ethiopian historiography evolved into a matured form during the Solomonic dynasty. Though works such as the 13th-century Kebra Nagast blended Christian mythology with historical events in their narratives, the first proper biographical chronicle on an Emperor of Ethiopia was made for Amda Seyon I (r. 1314–1344), depicted as a Christian savior of his nation in conflicts with the Islamic Ifat Sultanate. The 16th-century monk Bahrey was the first in Ethiopia to produce a historical ethnography, focusing on the migrating Oromo people who came into military conflict with the Ethiopian Empire. While royal biographies of individual Ethiopian emperors existed, authored by court historians who were also clerical scholars within the Ethiopian Orthodox Church, the reigns of Iyasu II (r. 1730–1755) and Iyoas I (r. 1755–1769) were the first to be included in larger general dynastic histories. During the Renaissance, history was written about states or nations. The study of history changed during the Enlightenment and Romanticism. Voltaire described the history of certain ages that he considered important, rather than describing events in chronological order. History became an independent discipline. It was not called philosophia historiae anymore, but merely history (historia). ==== Islamic world ==== Muslim historical writings first began to develop in the 7th century, with the reconstruction of the Prophet Muhammad's life in the centuries following his death. With numerous conflicting narratives regarding Muhammad and his companions from various sources, it was necessary to verify which sources were more reliable. In order to evaluate these sources, various methodologies were developed, such as the "science of biography", "science of hadith" and "Isnad" (chain of transmission). These methodologies were later applied to other historical figures in the Islamic civilization. Famous historians in this tradition include Urwah (d. 712), Wahb ibn Munabbih (d. 728), Ibn Ishaq (d. 761), al-Waqidi (745–822), Ibn Hisham (d. 834), Muhammad al-Bukhari (810–870) and Ibn Hajar (1372–1449). Historians of the medieval Islamic world also developed an interest in world history. Islamic historical writing eventually culminated in the works of the Arab Muslim historian Ibn Khaldun (1332–1406), who published his historiographical studies in the Muqaddimah (translated as Prolegomena) and Kitab al-I'bar (Book of Advice). His work was forgotten until it was rediscovered in the late 19th century. ==== Jewish ==== Jewish historiography built on biblical and medieval historiography, with significant periods in the 16th and 19th centuries, drawing on works such as the chains of tradition of the oral law, Christian and Hellenistic historiography, and the Josippon. ==== East Asia ==== ===== Japan ===== The earliest works of history produced in Japan were the Rikkokushi (Six National Histories), a corpus of six national histories covering the history of Japan from its mythological beginnings until the 9th century. The first of these works was the Nihon Shoki, compiled by Prince Toneri in 720. ===== Korea ===== The tradition of Korean historiography was established with the Samguk sagi, a history of Korea from its allegedly earliest times. It was compiled by Goryeo court historian Kim Pusik after its commission by King Injong of Goryeo (r. 1122–1146). 
It was completed in 1145 and relied not only on earlier Chinese histories for source material, but also on the Hwarang Segi written by the Silla historian Kim Taemun in the 8th century. The latter work is now lost. ===== China ===== The Shitong, published around 710 by the Tang Chinese historian Liu Zhiji (661–721), was the first work to provide an outline of the entire tradition of Chinese historiography up to that point, and the first comprehensive work on historical criticism, arguing that historians should be skeptical of primary sources, should rely on systematically gathered evidence, and should not treat previous scholars with undue deference. In 1084 the Song dynasty official Sima Guang completed the Zizhi Tongjian (Comprehensive Mirror to Aid in Government), which laid out the entire history of China from the beginning of the Warring States period (403 BC) to the end of the Five Dynasties period (959) in chronological annals form, rather than in the traditional annals-biography form. This work is considered much more accessible than the "Official Histories" for the Six dynasties, Tang dynasty, and Five Dynasties, and in practice superseded those works in the mind of the general reader. The great Song Neo-Confucian Zhu Xi found the Mirror to be overly long for the average reader, as well as too morally nihilistic, and therefore prepared a didactic summary of it called the Zizhi Tongjian Gangmu (Digest of the Comprehensive Mirror to Aid in Government), posthumously published in 1219. It reduced the original's 249 chapters to just 59, and for the rest of imperial Chinese history would be the first history book most people ever read. ==== South East Asia ==== ===== Philippines ===== Historiography of the Philippines refers to the studies, sources, critical methods and interpretations used by scholars to study the history of the Philippines. It includes historical and archival research and writing on the history of the Philippine archipelago including the islands of Luzon, Visayas, and Mindanao. The Philippine archipelago was part of many empires before the Spanish Empire arrived in the 16th century. Southeast Asia is classified as part of the Indosphere and the Sinosphere. The archipelago had direct contact with China during the Song dynasty (960–1279), and was a part of the Srivijaya and Majapahit empires. The pre-colonial Philippines widely used the abugida system in writing and in seals on documents, though this writing was used for communication rather than for recording early literature or history. Ancient Filipinos usually wrote documents on bamboo, bark, and leaves, which did not survive, unlike inscriptions on clay, metal, and ivory, such as the Laguna Copperplate Inscription and the Butuan Ivory Seal. The discovery of the Butuan Ivory Seal also proves the use of paper documents in the ancient Philippines. After the Spanish conquest, pre-colonial Filipino manuscripts and documents were gathered and burned to eliminate pagan beliefs. This loss has burdened historians in the accumulation of data and the development of theories, leaving many aspects of Philippine history unexplained. The interplay of pre-colonial events and the use of secondary sources written by historians to evaluate the primary sources do not provide a critical examination of the methodology of early Philippine historical study. === Enlightenment === During the Age of Enlightenment, the modern development of historiography through the application of scrupulous methods began. 
Among the many Italians who contributed to this were Leonardo Bruni (c. 1370–1444), Francesco Guicciardini (1483–1540), and Cesare Baronio (1538–1607). ==== Voltaire ==== French philosophe Voltaire (1694–1778) had an enormous influence on the development of historiography during the Age of Enlightenment through his demonstration of fresh new ways to look at the past. Guillaume de Syon argues: Voltaire recast historiography in both factual and analytical terms. Not only did he reject traditional biographies and accounts that claim the work of supernatural forces, but he went so far as to suggest that earlier historiography was rife with falsified evidence and required new investigations at the source. Such an outlook was not unique in that the scientific spirit that 18th-century intellectuals perceived themselves as invested with. A rationalistic approach was key to rewriting history. Voltaire's best-known histories are The Age of Louis XIV (1751), and his Essay on the Customs and the Spirit of the Nations (1756). He broke from the tradition of narrating diplomatic and military events, and emphasized customs, social history and achievements in the arts and sciences. He was the first scholar to make a serious attempt to write the history of the world, eliminating theological frameworks, and emphasizing economics, culture and political history. Although he repeatedly warned against political bias on the part of the historian, he did not miss many opportunities to expose the intolerance and frauds of the church over the ages. Voltaire advised scholars that anything contradicting the normal course of nature was not to be believed. Although he found evil in the historical record, he fervently believed reason and educating the illiterate masses would lead to progress. Voltaire's History of Charles XII (1731) about the Swedish warrior king (Swedish: Karl XII) is also one of his most famous works. It is not least known as one of Napoleon's absolute favorite books. Voltaire explains his view of historiography in his article on "History" in Diderot's Encyclopédie: "One demands of modern historians more details, better ascertained facts, precise dates, more attention to customs, laws, mores, commerce, finance, agriculture, population." Already in 1739 he had written: "My chief object is not political or military history, it is the history of the arts, of commerce, of civilization—in a word—of the human mind." Voltaire's histories used the values of the Enlightenment to evaluate the past. He helped free historiography from antiquarianism, Eurocentrism, religious intolerance and a concentration on great men, diplomacy, and warfare. Peter Gay says Voltaire wrote "very good history", citing his "scrupulous concern for truths", "careful sifting of evidence", "intelligent selection of what is important", "keen sense of drama", and "grasp of the fact that a whole civilization is a unit of study". ==== David Hume ==== At the same time, philosopher David Hume was having a similar effect on the study of history in Great Britain. In 1754 he published The History of England, a 6-volume work which extended "From the Invasion of Julius Caesar to the Revolution in 1688". Hume adopted a similar scope to Voltaire in his history; as well as the history of Kings, Parliaments, and armies, he examined the history of culture, including literature and science, as well. 
His short biographies of leading scientists explored the process of scientific change and he developed new ways of seeing scientists in the context of their times by looking at how they interacted with society and each other—he paid special attention to Francis Bacon, Robert Boyle, Isaac Newton and William Harvey. He also argued that the quest for liberty was the highest standard for judging the past, and concluded that after considerable fluctuation, England at the time of his writing had achieved "the most entire system of liberty, that was ever known amongst mankind". ==== Edward Gibbon ==== The apex of Enlightenment history was reached with Edward Gibbon's monumental six-volume work, The History of the Decline and Fall of the Roman Empire, published on 17 February 1776. Because of its relative objectivity and heavy use of primary sources, its methodology became a model for later historians. This has led to Gibbon being called the first "modern historian". The book sold impressively, earning its author a total of about £9000. Biographer Leslie Stephen wrote that thereafter, "His fame was as rapid as it has been lasting." Gibbon's work has been praised for its style, its piquant epigrams and its effective irony. Winston Churchill memorably noted, "I set out upon ... Gibbon's Decline and Fall of the Roman Empire [and] was immediately dominated both by the story and the style. ... I devoured Gibbon. I rode triumphantly through it from end to end and enjoyed it all." Gibbon was pivotal in the secularizing and 'desanctifying' of history, remarking, for example, on the "want of truth and common sense" of biographies composed by Saint Jerome. Unusually for an 18th-century historian, Gibbon was never content with secondhand accounts when the primary sources were accessible (though most of these were drawn from well-known printed editions). He said, "I have always endeavoured to draw from the fountain-head; that my curiosity, as well as a sense of duty, has always urged me to study the originals; and that, if they have sometimes eluded my search, I have carefully marked the secondary evidence, on whose faith a passage or a fact were reduced to depend." In this insistence upon the importance of primary sources, Gibbon broke new ground in the methodical study of history: In accuracy, thoroughness, lucidity, and comprehensive grasp of a vast subject, the 'History' is unsurpassable. It is the one English history which may be regarded as definitive. ... Whatever its shortcomings the book is artistically imposing as well as historically unimpeachable as a vast panorama of a great period. === 19th century === The tumultuous events surrounding the French Revolution inspired much of the historiography and analysis of the early 19th century. Interest in the 1688 Glorious Revolution was also rekindled by the Reform Act of 1832 in England. Nineteenth-century historiography, especially among American historians, featured conflicting viewpoints that represented the times. According to 20th-century historian Richard Hofstadter: The historians of the nineteenth century worked under the pressure of two internal tensions: on one side there was the constant demand of society—whether through the nation-state, the church, or some special group or class interest—for memory mixed with myth, for the historical tale that would strengthen group loyalties or confirm national pride; and against this there were the demands of critical method, and even, after a time, the goal of writing "scientific" history. 
==== Thomas Carlyle ==== Thomas Carlyle published his three-volume The French Revolution: A History in 1837. The first volume was accidentally burned by John Stuart Mill's maid. Carlyle rewrote it from scratch. Carlyle's style of historical writing stressed the immediacy of action, often using the present tense. He emphasised the role of forces of the spirit in history and thought that chaotic events demanded what he called 'heroes' to take control over the competing forces erupting within society. He considered the dynamic forces of history as being the hopes and aspirations of people that took the form of ideas, and were often ossified into ideologies. Carlyle's The French Revolution was written in a highly unorthodox style, far removed from the neutral and detached tone of the tradition of Gibbon. Carlyle presented the history as dramatic events unfolding in the present as though he and the reader were participants on the streets of Paris at the famous events. Carlyle's invented style was epic poetry combined with philosophical treatise. It has rarely been read or cited in the last century. ==== French historians: Michelet and Taine ==== In his main work Histoire de France (1855), French historian Jules Michelet (1798–1874) coined the term Renaissance (meaning "rebirth" in French), as a period in Europe's cultural history that represented a break from the Middle Ages, creating a modern understanding of humanity and its place in the world. The 19-volume work covered French history from Charlemagne to the outbreak of the French Revolution. His inquiry into manuscript and printed authorities was most laborious, but his lively imagination, and his strong religious and political prejudices, made him regard all things from a singularly personal point of view. Michelet was one of the first historians to shift the emphasis of history to the common people, rather than the leaders and institutions of the country. He had a decisive impact on scholars. Gayana Jurkevich argues that, led by Michelet, 19th-century French historians no longer saw history as the chronicling of royal dynasties, armies, treaties, and great men of state, but as the history of ordinary French people and the landscape of France. Hippolyte Taine (1828–1893), although unable to secure an academic position, was the chief theoretical influence of French naturalism, a major proponent of sociological positivism, and one of the first practitioners of historicist criticism. He pioneered the idea of "the milieu" as an active historical force which amalgamated geographical, psychological, and social factors. Historical writing for him was a search for general laws. His brilliant style kept his writing in circulation long after his theoretical approaches were passé. ==== Cultural and constitutional history ==== One of the major progenitors of the history of culture and art was the Swiss historian Jacob Burckhardt. Siegfried Giedion described Burckhardt's achievement in the following terms: "The great discoverer of the age of the Renaissance, he first showed how a period should be treated in its entirety, with regard not only for its painting, sculpture and architecture, but for the social institutions of its daily life as well." His most famous work was The Civilization of the Renaissance in Italy, published in 1860; it was the most influential interpretation of the Italian Renaissance in the nineteenth century and is still widely read. 
According to John Lukacs, he was the first master of cultural history, which seeks to describe the spirit and the forms of expression of a particular age, a particular people, or a particular place. His innovative approach to historical research stressed the importance of art and its inestimable value as a primary source for the study of history. He was one of the first historians to rise above the narrow nineteenth-century notion that "history is past politics and politics current history." By the mid-19th century, scholars were beginning to analyse the history of institutional change, particularly the development of constitutional government. William Stubbs's Constitutional History of England (3 vols., 1874–1878) was an important influence on this developing field. The work traced the development of the English constitution from the Teutonic invasions of Britain until 1485, and marked a distinct step in the advance of English historical learning. He argued that the theory of the unity and continuity of history should not remove distinctions between ancient and modern history. He believed that, though work on ancient history is a useful preparation for the study of modern history, either may advantageously be studied apart. He was a good palaeographer, and excelled in textual criticism, in examination of authorship, and other such matters, while his vast erudition and retentive memory made him second to none in interpretation and exposition. ==== Von Ranke and professionalization in Germany ==== The modern academic study of history and methods of historiography were pioneered in 19th-century German universities, especially the University of Göttingen. Leopold von Ranke (1795–1886) at Berlin was a pivotal influence in this regard, and was the founder of modern source-based history. According to Caroline Hoefferle, "Ranke was probably the most important historian to shape historical profession as it emerged in Europe and the United States in the late 19th century." Specifically, he implemented the seminar teaching method in his classroom, and focused on archival research and analysis of historical documents. Beginning with his first book in 1824, the History of the Latin and Teutonic Peoples from 1494 to 1514, Ranke used an unusually wide variety of sources for a historian of the age, including "memoirs, diaries, personal and formal missives, government documents, diplomatic dispatches and first-hand accounts of eye-witnesses". Over a career that spanned much of the century, Ranke set the standards for much of later historical writing, introducing such ideas as reliance on primary sources, an emphasis on narrative history and especially international politics (Aussenpolitik). Sources had to be solid, not speculations and rationalizations. His credo was to write history the way it was. He insisted on primary sources with proven authenticity. Ranke also rejected the 'teleological approach' to history, which traditionally viewed each period as inferior to the period which follows. In Ranke's view, the historian had to understand a period on its own terms, and seek to find only the general ideas which animated every period of history. In 1831 and at the behest of the Prussian government, Ranke founded and edited the first historical journal in the world, called Historisch-Politische Zeitschrift. Another important German thinker was Georg Wilhelm Friedrich Hegel, whose theory of historical progress ran counter to Ranke's approach. 
In Hegel's own words, his philosophical theory of "World history ... represents the development of the spirit's consciousness of its own freedom and of the consequent realization of this freedom." This realization is seen by studying the various cultures that have developed over the millennia, and trying to understand the way that freedom has worked itself out through them: World history is the record of the spirit's efforts to attain knowledge of what it is in itself. The Orientals do not know that the spirit or man as such are free in themselves. And because they do not know that, they are not themselves free. They only know that One is free. ... The consciousness of freedom first awoke among the Greeks, and they were accordingly free; but, like the Romans, they only knew that Some, and not all men as such, are free. ... The Germanic nations, with the rise of Christianity, were the first to realize that All men are by nature free, and that freedom of spirit is his very essence. Karl Marx introduced the concept of historical materialism into the study of world historical development. In his conception, the economic conditions and dominant modes of production determined the structure of society at that point. In his view, five successive stages in the development of material conditions would occur in Western Europe. The first stage was primitive communism where property was shared and there was no concept of "leadership". This progressed to a slave society where the idea of class emerged and the State developed. Feudalism was characterized by an aristocracy working in partnership with a theocracy and the emergence of the nation-state. Capitalism appeared after the bourgeois revolution when the capitalists (or their merchant predecessors) overthrew the feudal system and established a market economy, with private property and parliamentary democracy. Marx then predicted the eventual proletarian revolution that would result in the attainment of socialism, followed by communism, where property would be communally owned. Previous historians had focused on cyclical events of the rise and decline of rulers and nations. The process of nationalizing history, as part of the national revivals of the 19th century, resulted in the separation of "one's own" history from common universal history through a way of perceiving, understanding and treating the past that constructed history as the history of a nation. A new discipline, sociology, emerged in the late 19th century and analyzed and compared these perspectives on a larger scale. ==== Macaulay and Whig history ==== The term "Whig history", coined by Herbert Butterfield in his short book The Whig Interpretation of History in 1931, means the approach to historiography which presents the past as an inevitable progression towards ever greater liberty and enlightenment, culminating in modern forms of liberal democracy and constitutional monarchy. In general, Whig historians emphasized the rise of constitutional government, personal freedoms and scientific progress. The term has also been applied widely in historical disciplines outside of British history (the history of science, for example) to criticize any teleological (or goal-directed), hero-based, and transhistorical narrative. Paul Rapin de Thoyras's history of England, published in 1723, became "the classic Whig history" for the first half of the 18th century. It was later supplanted by the immensely popular The History of England by David Hume. 
Whig historians emphasized the achievements of the Glorious Revolution of 1688. This included James Mackintosh's History of the Revolution in England in 1688, William Blackstone's Commentaries on the Laws of England, and Henry Hallam's Constitutional History of England. The most famous exponent of 'Whiggery' was Thomas Babington Macaulay. His writings are famous for their ringing prose and for their confident, sometimes dogmatic, emphasis on a progressive model of British history, according to which the country threw off superstition, autocracy and confusion to create a balanced constitution and a forward-looking culture combined with freedom of belief and expression. This model of human progress has been called the Whig interpretation of history. He published the first volumes of his most famous work of history, The History of England from the Accession of James II, in 1848. It proved an immediate success and replaced Hume's history to become the new orthodoxy. His 'Whiggish convictions' are spelled out in his first chapter: I shall relate how the new settlement was ... successfully defended against foreign and domestic enemies; how ... the authority of law and the security of property were found to be compatible with a liberty of discussion and of individual action never before known; how, from the auspicious union of order and freedom, sprang a prosperity of which the annals of human affairs had furnished no example; how our country, from a state of ignominious vassalage, rapidly rose to the place of umpire among European powers; how her opulence and her martial glory grew together; ... how a gigantic commerce gave birth to a maritime power, compared with which every other maritime power, ancient or modern, sinks into insignificance ... the history of our country during the last hundred and sixty years is eminently the history of physical, of moral, and of intellectual improvement. His legacy continues to be controversial; Gertrude Himmelfarb wrote that "most professional historians have long since given up reading Macaulay, as they have given up writing the kind of history he wrote and thinking about history as he did." However, J. R. Western wrote that: "Despite its age and blemishes, Macaulay's History of England has still to be superseded by a full-scale modern history of the period". The Whig consensus was steadily undermined during the post-World War I re-evaluation of European history, and Butterfield's critique exemplified this trend. Intellectuals no longer believed the world was automatically getting better and better. Subsequent generations of academic historians have similarly rejected Whig history because of its presentist and teleological assumption that history is driving toward some sort of goal. Other criticized 'Whig' assumptions included viewing the British system as the apex of human political development, assuming that political figures in the past held current political beliefs (anachronism), considering British history as a march of progress with inevitable outcomes and presenting political figures of the past as heroes, who advanced the cause of this political progress, or villains, who sought to hinder its inevitable triumph. J. Hart says "a Whig interpretation requires human heroes and villains in the story." === 20th century === 20th-century historiography in major countries is characterized by a move to universities and academic research centers. 
Popular history continued to be written by self-educated amateurs, but scholarly history increasingly became the province of PhDs trained in research seminars at a university. The training emphasized working with primary sources in archives. Seminars taught graduate students how to review the historiography of their topics, so that they could understand the conceptual frameworks currently in use, and the criticisms regarding their strengths and weaknesses. Western Europe and the United States took leading roles in this development. The emergence of area studies of other regions also shaped historiographical practices. ==== France: Annales school ==== The French Annales school radically changed the focus of historical research in France during the 20th century by stressing long-term social history, rather than political or diplomatic themes. The school emphasized the use of quantification and paid special attention to geography. The Annales d'histoire économique et sociale journal was founded in 1929 in Strasbourg by Marc Bloch and Lucien Febvre. These authors, the former a medieval historian and the latter an early modernist, quickly became associated with the distinctive Annales approach, which combined geography, history, and the sociological approaches of the Année Sociologique (many members of which were their colleagues at Strasbourg) to produce an approach which rejected the predominant emphasis on politics, diplomacy and war of many 19th- and early 20th-century historians as spearheaded by historians whom Febvre called Les Sorbonnistes. Instead, they pioneered an approach to a study of long-term historical structures (la longue durée) over events and political transformations. Geography, material culture, and what later Annalistes called mentalités, or the psychology of the epoch, are also characteristic areas of study. The goal of the Annales was to undo the work of the Sorbonnistes, to turn French historians away from the narrowly political and diplomatic toward the new vistas in social and economic history. For early modern Mexican history, the work of Marc Bloch's student François Chevalier on the formation of landed estates (haciendas) from the sixteenth century to the seventeenth had a major impact on Mexican history and historiography, setting off an important debate about whether landed estates were basically feudal or capitalistic. An eminent member of this school, Georges Duby, described his approach to history as one that relegated the sensational to the sidelines and was reluctant to give a simple accounting of events, but strove on the contrary to pose and solve problems and, neglecting surface disturbances, to observe the long and medium-term evolution of economy, society and civilisation. The Annalistes, especially Lucien Febvre, advocated a histoire totale, or histoire tout court, a complete study of a historical problem. The second era of the school was led by Fernand Braudel and was very influential throughout the 1960s and 1970s, especially for his work on the Mediterranean region in the era of Philip II of Spain. Braudel developed the idea, often associated with Annalistes, of different modes of historical time: l'histoire quasi immobile (motionless history) of historical geography, the history of social, political and economic structures (la longue durée), and the history of men and events, in the context of their structures. 
His 'longue durée' approach stressed the slow and often imperceptible effects of space, climate and technology on the actions of human beings in the past. The Annales historians, after living through two world wars and major political upheavals in France, were deeply uncomfortable with the notion that multiple ruptures and discontinuities created history. They preferred to stress slow change and the longue durée. They paid special attention to geography, climate, and demography as long-term factors. They considered that the continuities of the deepest structures were central to history, beside which upheavals in institutions or the superstructure of social life were of little significance, for history lies beyond the reach of conscious actors, especially the will of revolutionaries. Noting the political upheavals in Europe and especially in France in 1968, Eric Hobsbawm argued that "in France the virtual hegemony of Braudelian history and the Annales came to an end after 1968, and the international influence of the journal dropped steeply." Multiple responses were attempted by the school. Scholars moved in multiple directions, covering in disconnected fashion the social, economic, and cultural history of different eras and different parts of the globe. By the time of the crisis the school was building a vast publishing and research network reaching across France, Europe, and the rest of the world. Influence indeed spread out from Paris, but few new ideas came in. Much emphasis was given to quantitative data, seen as the key to unlocking all of social history. However, the Annales ignored the developments in quantitative studies underway in the U.S. and Britain, which reshaped economic, political and demographic research. ==== Marxist historiography ==== Marxist historiography developed as a school of historiography influenced by the chief tenets of Marxism, including the centrality of social class and economic constraints in determining historical outcomes (historical materialism). Friedrich Engels wrote The Peasant War in Germany, which analysed social warfare in early Protestant Germany in terms of emerging capitalist classes. Although it lacked a rigorous engagement with archival sources, it indicated an early interest in history from below and class analysis, and it attempted a dialectical analysis. Another treatise of Engels, The Condition of the Working Class in England in 1844, was salient in creating the socialist impetus in British politics from then on, influencing, for example, the Fabian Society. R. H. Tawney was an early historian working in this tradition. The Agrarian Problem in the Sixteenth Century (1912) and Religion and the Rise of Capitalism (1926) reflected his ethical concerns and preoccupations in economic history. He was profoundly interested in the issue of the enclosure of land in the English countryside in the sixteenth and seventeenth centuries and in Max Weber's thesis on the connection between the appearance of Protestantism and the rise of capitalism. His belief in the rise of the gentry in the century before the outbreak of the Civil War in England provoked the 'Storm over the Gentry' in which his methods were subjected to severe criticisms by Hugh Trevor-Roper and John Cooper. Historiography in the Soviet Union was greatly influenced by Marxist historiography, as historical materialism was extended into the Soviet version of dialectical materialism. 
A circle of historians inside the Communist Party of Great Britain (CPGB) formed in 1946 and became a highly influential cluster of British Marxist historians, who contributed to history from below and class structure in early capitalist society. While some members of the group (most notably Christopher Hill and E. P. Thompson) left the CPGB after the 1956 Hungarian Revolution, the common points of British Marxist historiography continued in their works. They placed a great emphasis on the subjective determination of history. Christopher Hill's studies on 17th-century English history were widely acknowledged and recognised as representative of this school. His books include Puritanism and Revolution (1958), Intellectual Origins of the English Revolution (1965 and revised in 1996), The Century of Revolution (1961), AntiChrist in 17th-century England (1971), The World Turned Upside Down (1972) and many others. E. P. Thompson pioneered the study of history from below in his work, The Making of the English Working Class, published in 1963. It focused on the forgotten history of the first working-class political left in the world in the late-18th and early-19th centuries. In his preface to this book, Thompson set out his approach to writing history from below: I am seeking to rescue the poor stockinger, the Luddite cropper, the "obsolete" hand-loom weaver, the "Utopian" artisan, and even the deluded follower of Joanna Southcott, from the enormous condescension of posterity. Their crafts and traditions may have been dying. Their hostility to the new industrialism may have been backward-looking. Their communitarian ideals may have been fantasies. Their insurrectionary conspiracies may have been foolhardy. But they lived through these times of acute social disturbance, and we did not. Their aspirations were valid in terms of their own experience; and, if they were casualties of history, they remain, condemned in their own lives, as casualties. Thompson's work was also significant because of the way he defined "class". He argued that class was not a structure, but a relationship that changed over time. He opened the gates for a generation of labor historians, such as David Montgomery and Herbert Gutman, who made similar studies of the American working classes. Other important Marxist historians included Eric Hobsbawm, C. L. R. James, Raphael Samuel, A. L. Morton and Brian Pearce. ==== Biography ==== Biography has been a major form of historiography since the days when Plutarch wrote the parallel lives of great Roman and Greek leaders. It is a field especially attractive to nonacademic historians, and often to the spouses or children of famous people, who have access to the trove of letters and documents. Academic historians tend to downplay biography because it pays too little attention to broad social, cultural, political and economic forces, and perhaps too much attention to popular psychology. The "Great Man" tradition in Britain originated in the multi-volume Dictionary of National Biography (which originated in 1882 and issued updates into the 1970s); it continues to this day in the new Oxford Dictionary of National Biography. In the United States, the Dictionary of American Biography was planned in the late 1920s and appeared with numerous supplements into the 1980s. It has now been displaced by the American National Biography as well as numerous smaller historical encyclopedias that give thorough coverage to Great Persons. 
Bookstores do a thriving business in biographies, which sell far more copies than the esoteric monographs based on post-structuralism, cultural, racial or gender history. Michael Holroyd says the last forty years "may be seen as a golden age of biography", but nevertheless calls it the "shallow end of history". Nicolas Barker argues that "more and more biographies command an ever larger readership", as he speculates that biography has come "to express the spirit of our age". Daniel R. Meister argues that: Biography Studies is emerging as an independent discipline, especially in the Netherlands. This Dutch School of biography is moving biography studies away from the less scholarly life writing tradition and towards history by encouraging its practitioners to utilize an approach adapted from microhistory. ==== British debates ==== Marxist historian E. H. Carr developed a controversial theory of history in his 1961 book What Is History?, which proved to be one of the most influential books ever written on the subject. He presented a middle-of-the-road position between the empirical (or Rankean) view of history and R. G. Collingwood's idealism, and rejected as nonsense the empirical view of the historian's work as an accretion of "facts" that they have at their disposal. He maintained that there is such a vast quantity of information that the historian always chooses the "facts" they decide to make use of. In Carr's famous example, he claimed that millions had crossed the Rubicon, but only Julius Caesar's crossing in 49 BC is declared noteworthy by historians. For this reason, Carr argued that Leopold von Ranke's famous dictum wie es eigentlich gewesen (show what actually happened) was wrong because it presumed that the "facts" influenced what the historian wrote, rather than the historian choosing what "facts of the past" they intended to turn into "historical facts". At the same time, Carr argued that the study of the facts may lead the historian to change his or her views. In this way, Carr argued that history was "an unending dialogue between the past and present". Carr is held by some critics to have had a deterministic outlook in history. Others have modified or rejected this use of the label "determinist". He took a hostile view of those historians who stress the workings of chance and contingency in the workings of history. In Carr's view, no individual is truly free of the social environment in which they live, but he contended that within those limitations there was room, albeit very narrow room, for people to make decisions that affect history. Carr emphatically contended that history was a social science, not an art, because historians, like scientists, seek generalizations that help to broaden the understanding of one's subject. One of Carr's most forthright critics was Hugh Trevor-Roper, who argued that Carr's dismissal of the "might-have-beens of history" reflected a fundamental lack of interest in examining historical causation. Trevor-Roper asserted that examining possible alternative outcomes of history, far from being a "parlour-game", was rather an essential part of the historians' work, as only by considering all possible outcomes of a given situation could a historian properly understand the period. The controversy inspired Sir Geoffrey Elton to write his 1967 book The Practice of History. 
Elton criticized Carr for his "whimsical" distinction between the "historical facts" and the "facts of the past", arguing that it reflected "...an extraordinarily arrogant attitude both to the past and to the place of the historian studying it". Elton, instead, strongly defended the traditional methods of history and was also appalled by the inroads made by postmodernism. Elton saw the duty of historians as empirically gathering evidence and objectively analyzing what the evidence has to say. As a traditionalist, he placed great emphasis on the role of individuals in history instead of abstract, impersonal forces. Elton saw political history as the highest kind of history. Elton had no use for those who use history to make myths, to create laws to explain the past, or to produce theories such as Marxism. ==== U.S. approaches ==== Classical and European history was part of the 19th-century grammar curriculum. American history became a topic later in the 19th century. In the historiography of the United States, there was a series of major approaches in the 20th century. In 2009–2012, an average of 16,000 new academic history books were published in the U.S. every year. ===== Progressive historians ===== The Progressive historians were a group of 20th-century historians of the United States associated with a historiographical tradition that embraced an economic interpretation of American history. Most prominent among these was Charles A. Beard, who was influential in academia and with the general public. ===== Consensus history ===== Consensus history emphasizes the basic unity of American values and downplays conflict as superficial. It was especially attractive in the 1950s and 1960s. Prominent leaders included Richard Hofstadter, Louis Hartz, Daniel Boorstin, Allan Nevins, Clinton Rossiter, Edmund Morgan, and David M. Potter. In 1948 Hofstadter made a compelling statement of the consensus model of the U.S. political tradition: The fierceness of the political struggles has often been misleading: for the range of vision embraced by the primary contestants in the major parties has always been bounded by the horizons of property and enterprise. However much at odds on specific issues, the major political traditions have shared a belief in the rights of property, the philosophy of economic individualism, the value of competition; they have accepted the economic virtues of capitalist culture as necessary qualities of man. ===== New Left history ===== Consensus history was rejected by New Left viewpoints that attracted a younger generation of radical historians in the 1960s. These viewpoints stress conflict and emphasize the central roles of class, race and gender. The history of dissent, and the experiences of racial minorities and disadvantaged classes, were central to the narratives produced by New Left historians. ===== Quantification and new approaches to history ===== Social history, sometimes called the "new social history", is a broad branch that studies the experiences of ordinary people in the past. It had major growth as a field in the 1960s and 1970s, and is still well represented in history departments. However, after 1980 the "cultural turn" directed the next generation to new topics. In the two decades from 1975 to 1995, the proportion of professors of history in U.S. universities identifying with social history rose from 31 to 41 percent, while the proportion of political historians fell from 40 to 30 percent. 
The growth was enabled by the social sciences, computers, statistics, new data sources such as individual census information, and summer training programs at the Newberry Library and the University of Michigan. The New Political History saw the application of social history methods to politics, as the focus shifted from politicians and legislation to voters and elections. The Social Science History Association was formed in 1976 as an interdisciplinary group with a journal, Social Science History, and an annual convention. The goal was to incorporate in historical studies perspectives from all the social sciences, especially political science, sociology and economics. The pioneers shared a commitment to quantification. However, by the 1980s the first blush of quantification had worn off, as traditional historians counterattacked. Harvey J. Graff says: The case against the new mixed and confused a lengthy list of ingredients, including the following: history's supposed loss of identity and humanity in the stain of social science, the fear of subordinating quality to quantity, conceptual and technical fallacies, violation of the literary character and biographical base of "good" history (rhetorical and aesthetic concern), loss of audiences, derogation of history rooted in "great men" and "great events", trivialization in general, a hodgepodge of ideological objections from all directions, and a fear that new historians were reaping research funds that might otherwise come to their detractors. To defenders of history as they knew it, the discipline was in crisis, and the pursuit of the new was a major cause. Meanwhile, "new" economic history became well-established. However, cliometrics has never been considered a historical field by the vast majority of historians, and as a consequence cliometric articles have not been cited by historians. Economists mostly employed economic theories and econometric applications similar to those in typical economics papers. As a result, quantification remained central to demographic studies, but slipped behind in political and social history as traditional narrative approaches made a comeback. Recently, the "new history of capitalism" has appeared as the newest approach in economic history. In the first article of the related journal, Marc Flandreau defined their purpose as "crossing border" to create a truly interdisciplinary field. ==== Latin America ==== Latin America is the former Spanish American empire in the Western Hemisphere plus Portuguese Brazil. Professional historians pioneered the creation of this field, starting in the late nineteenth century. The term "Latin America" did not come into general usage until the twentieth century and in some cases it was rejected. The historiography of the field has been more fragmented than unified, with historians of Spanish America and Brazil generally remaining in separate spheres. Another standard division within the historiography is the temporal factor, with works falling into either the early modern period (or "colonial era") or the post-independence (or "national") period, from the early nineteenth century onward. Relatively few works span the two eras and few works except textbooks unite Spanish America and Brazil. There is a tendency to focus on histories of particular countries or regions (the Andes, the Southern Cone, the Caribbean) with relatively little comparative work. 
Historians of Latin America have contributed to various types of historical writing, but one major, innovative development in Spanish American history is the emergence of ethnohistory, the history of indigenous peoples, especially in Mexico, based on alphabetic sources in Spanish or in indigenous languages. For the early modern period, the emergence of Atlantic history, based on comparisons and linkages of Europe, the Americas, and Africa from 1450 to 1850, which developed as a field in its own right, has integrated early modern Latin American history into a larger framework. For all periods, global or world history has focused on the connections between areas, likewise integrating Latin America into a larger perspective. Latin America's importance to world history is notable but often overlooked. "Latin America's central, and sometimes pioneering, role in the development of globalization and modernity did not cease with the end of colonial rule and the early modern period. Indeed, the region's political independence places it at the forefront of two trends that are regularly considered thresholds of the modern world. The first is the so-called liberal revolution, the shift from monarchies of the ancien régime, where inheritance legitimated political power, to constitutional republics... The second, and related, trend consistently considered a threshold of modern history that saw Latin America in the forefront is the development of nation-states." Historical research appears in a number of specialized journals. These include Hispanic American Historical Review (est. 1918), published by the Conference on Latin American History; The Americas (est. 1944); Journal of Latin American Studies (est. 1969); Canadian Journal of Latin American and Caribbean Studies (est. 1976); Bulletin of Latin American Research (est. 1981); Colonial Latin American Review (est. 1992); and Colonial Latin American Historical Review (est. 1992). Latin American Research Review (est. 1969), published by the Latin American Studies Association, does not focus primarily on history, but it has often published historiographical essays on particular topics. General works on Latin American history have appeared since the 1950s, when the teaching of Latin American history expanded in U.S. universities and colleges. Most attempt full coverage of Spanish America and Brazil from the conquest to the modern era, focusing on institutional, political, social and economic history. An important eleven-volume treatment of Latin American history is The Cambridge History of Latin America, with separate volumes on the colonial era, nineteenth century, and the twentieth century. There is a small number of general works that have gone through multiple editions. Major trade publishers have also issued edited volumes on Latin American history and historiography. Reference works include the Handbook of Latin American Studies, which publishes articles by area experts, with annotated bibliographic entries, and the Encyclopedia of Latin American History and Culture. ==== Africa ==== Since most African societies recorded their history orally, written records largely focussed on the actions of outsiders. Historiography in the colonial period was undertaken by European academics and historians from a European perspective, under the pretence of Western superiority supported by scientific racism. Oral sources were deprecated and dismissed by unfamiliar historians, giving them the impression that Africa had neither history nor the desire to create it. 
African historiography became organised at the academic level in the mid-20th century. Kenneth Dike, among others, pioneered a new methodology of reconstructing African history using the oral traditions, alongside evidence from European-style histories and other historical sciences. This movement towards utilising oral sources in a multi-disciplinary approach culminated in UNESCO commissioning the General History of Africa, edited by specialists drawn from across the African continent, and published from 1981 to 2024. Contemporary historians are still tasked with building the institutional frameworks incorporating African epistemologies and representing an African perspective. ==== World history ==== World history, as a distinct field of historical study, emerged as an independent academic discipline in the 1980s. It focused on the examination of history from a global perspective and looked for common patterns that emerged across all cultures. The basic thematic approach of this field was to analyse two major focal points: integration—how processes of world history have drawn people of the world together, and difference—how patterns of world history reveal the diversity of the human experience. Arnold J. Toynbee's ten-volume A Study of History took an approach that was widely discussed in the 1930s and 1940s. By the 1960s his work was virtually ignored by scholars and the general public. He compared 26 independent civilizations and argued that they displayed striking parallels in their origin, growth, and decay. He proposed a universal model for each of these civilizations, detailing the stages through which they all pass: genesis, growth, time of troubles, universal state, and disintegration. The later volumes placed too much emphasis on spirituality to satisfy critics. Chicago historian William H. McNeill wrote The Rise of the West (1965) to show how the separate civilizations of Eurasia interacted from the very beginning of their history, borrowing critical skills from one another, and thus precipitating still further change as adjustment between traditional old and borrowed new knowledge and practice became necessary. He then discusses the dramatic effect of Western civilization on others in the past 500 years of history. McNeill took a broad approach organized around the interactions of peoples across the globe. Such interactions have become both more numerous and more continual and substantial in recent times. Before about 1500, the network of communication between cultures was that of Eurasia. The terms for these areas of interaction differ from one world historian to another and include world-system and ecumene. His emphasis on cultural fusions influenced historical theory significantly. ==== The cultural turn ==== The "cultural turn" of the 1980s and 1990s affected scholars in most areas of history. Inspired largely by anthropology, it turned away from leaders, ordinary people and famous events to look at the use of language and cultural symbols to represent the changing values of society. The British historian Peter Burke finds that cultural studies has numerous spinoffs, or topical themes it has strongly influenced. The most important include gender studies and postcolonial studies, as well as memory studies, and film studies. Diplomatic historian Melvyn P. 
Leffler finds that the problem with the "cultural turn" is that the culture concept is imprecise, and may produce excessively broad interpretations, because it: seems infinitely malleable and capable of giving shape to totally divergent policies; for example, to internationalism or isolationism in the United States, and to cooperative internationalism or race hatred in Japan. The malleability of culture suggests to me that in order to understand its effect on policy, one needs also to study the dynamics of political economy, the evolution of the international system, and the roles of technology and communication, among many other variables. ==== Memory studies ==== Memory studies is a new field, focused on how nations and groups (and historians) construct and select their memories of the past in order to celebrate (or denounce) key features, thus making a statement of their current values and beliefs. Historians have played a central role in shaping the memories of the past as their work is diffused through popular history books and school textbooks. French sociologist Maurice Halbwachs opened the field with La mémoire collective (Paris, 1950). Many historians examine how the memory of the past has been constructed, memorialized or distorted. Historians examine how legends are invented. For example, there are numerous studies of the memory of atrocities from World War II, notably the Holocaust in Europe and Japanese war crimes in Asia. British historian Heather Jones argues that the historiography of the First World War in recent years has been reinvigorated by the cultural turn. Scholars have raised entirely new questions regarding military occupation, radicalization of politics, race, and the male body. Representative of recent scholarship is a collection of studies on the "Dynamics of Memory and Identity in Contemporary Europe". Sage has published the scholarly journal Memory Studies since 2008, and the book series "Memory Studies" was launched by Palgrave Macmillan in 2010 with 5–10 titles a year. == Scholarly journals == The historical journal, a forum where academic historians could exchange ideas and publish newly discovered information, came into being in the 19th century. The early journals were similar to those for the physical sciences, and were seen as a means for history to become more professional. Journals also helped historians to establish various historiographical approaches, the most notable example of which was Annales. Économies, sociétés, civilisations, a publication of the Annales school in France. Journals now typically have one or more editors and associate editors, an editorial board, and a pool of scholars to whom articles that are submitted are sent for confidential evaluation. The editors will send out new books to recognized scholars for reviews that usually run 500 to 1000 words. The vetting and publication process often takes months or longer. Publication in a prestigious journal (which accepts 10 percent or fewer of the articles submitted) is an asset in the academic hiring and promotion process. Publication demonstrates that the author is conversant with the scholarly field. Page charges and fees for publication are uncommon in history. Journals are subsidized by universities or historical societies, scholarly associations, and subscription fees from libraries and scholars. Increasingly they are available through library pools that allow many academic institutions to share subscriptions to online versions. 
Most libraries have a system for obtaining specific articles through inter-library loan. === Some major historical journals === 1839 Revista do Instituto Histórico e Geográfico Brasileiro (Brazil) 1840 Historisk tidsskrift (Denmark) 1859 Historische Zeitschrift (Germany) 1866 Archivum historicum, later Historiallinen arkisto (Finland, published in Finnish) 1867 Századok (Hungary) 1869 Časopis Matice moravské (Czech republic – then part of Austria-Hungary) 1871 Historisk tidsskrift (Norway) 1876 Revue Historique (France) 1880 Historisk tidskrift (Sweden) 1886 English Historical Review (England) 1887 Kwartalnik Historyczny (Poland – then part of Austria-Hungary) 1892 William and Mary Quarterly (US) 1894 Ons Hémecht (Luxembourg) 1895 American Historical Review (US) 1895 Český časopis historický (Czech republic – then part of Austria-Hungary) 1914 Mississippi Valley Historical Review (renamed in 1964 the Journal of American History) (US) 1915 The Catholic Historical Review (US) 1916 The Journal of Negro History (renamed in 2001 The Journal of African American History) (US) 1916 Historisk Tidskrift för Finland (Finland, published in Swedish) 1918 Hispanic American Historical Review (US) 1920 Canadian Historical Review (Canada) 1922 Slavonic and East European Review (SEER), (England) 1928 Scandia (Sweden) 1929 Annales d'histoire économique et sociale (France) 1935 Journal of Southern History (US) 1941 The Journal of Economic History (US) 1944 The Americas (US) 1951 Historia Mexicana (Mexico) 1952 Past & present: a journal of historical studies (England) 1953 Vierteljahrshefte für Zeitgeschichte (Germany) 1954 Ethnohistory (US) 1956 Journal of the Historical Society of Nigeria (Nigeria) 1957 Victorian Studies (US) 1960 Journal of African History (England) 1960 Technology and culture: the international quarterly of the Society for the History of Technology (US) 1960 History and Theory (US) 1967 Indian Church History Review (India) (earlier published as the Bulletin of Church History Association of India) 1967 The Journal of Social History (US) 1969 Journal of Interdisciplinary History (US) 1969 Journal of Latin American Studies (UK) 1975 Geschichte und Gesellschaft. Zeitschrift für historische Sozialwissenschaft (Germany) 1975 Signs (US) 1976 Journal of Family History (US) 1978 The Public Historian (US) 1981 Bulletin of Latin American Research (UK) 1982 Storia della Storiografia – History of Historiography – Histoire de l'Historiographie – Geschichte der Geschichtsschreibung 1982 Subaltern Studies (Oxford University Press) 1986 Zeitschrift für Sozialgeschichte des 20. und 21. Jahrhunderts, new title since 2003: Sozial.Geschichte. Zeitschrift für historische Analyse des 20. und 21. Jahrhunderts (Germany) 1990 Gender and History (US) 1990 Journal of World History (US) 1990 L'Homme. Zeitschrift für feministische Geschichtswissenschaft (Austria) 1990 Österreichische Zeitschrift für Geschichtswissenschaften (ÖZG) 1992 Women's History Review 1992 Colonial Latin American Historical Review (US) 1992 Colonial Latin American Review 1996 Environmental History (US) 2011 International Journal for the Historiography of Education == Narrative == According to Lawrence Stone, narrative has traditionally been the main rhetorical device used by historians. In 1979, at a time when the new Social History was demanding a social-science model of analysis, Stone detected a move back toward the narrative. 
Stone defined narrative as follows: it is organized chronologically; it is focused on a single coherent story; it is descriptive rather than analytical; it is concerned with people not abstract circumstances; and it deals with the particular and specific rather than the collective and statistical. He reported that, "More and more of the 'new historians' are now trying to discover what was going on inside people's heads in the past, and what it was like to live in the past, questions which inevitably lead back to the use of narrative." Historians committed to a social science approach, however, have criticized the narrowness of narrative and its preference for anecdote over analysis, and its use of clever examples rather than statistically verified empirical regularities. == Topics studied == Some of the common topics in historiography are: Reliability of the sources used, in terms of authorship, credibility of the author, and the authenticity or corruption of the text. (See also source criticism.) Historiographical tradition or framework. Every historian uses one (or more) historiographical traditions, for example Marxist, Annales school, "total history", or political history. Moral issues, guilt assignment, and praise assignment Revisionism versus orthodox interpretations Historical metanarratives and metahistory. == Approaches == How a historian approaches historical events is one of the most important decisions within historiography. Historians commonly recognise that individual historical facts—dealing with names, dates and places—are not particularly meaningful in themselves. Such facts only become useful/informative when assembled with other historical evidence, and the process of assembling this evidence is understood as a particular historiographical approach. 
Some influential historiographical approaches include: Big History Business history, History of institutions and Official history Black history Chronology Comparative history Cultural history Diplomatic history Decolonization of knowledge Economic history (history of capitalism), (Business history), (financial history) Environmental history, a relatively new field Ethnohistory Gender history including women's history, family history, feminist history Global history, or World History Global studies Great man theory and Heroism History of medicine History of religion and church history; the history of theology is usually handled under theology Indigenous history Industrial history and the history of technology Intellectual history and the history of ideas Labor history Legendary history – important in pre-modern contexts Local history and microhistory Marxist historiography and historical materialism Migration studies Military history, including naval and air history Mythistory – history incorporating elements of myth National history – comforting myths of individual peoples Oral history and Traditional knowledge Political history Public history, especially museums and historic preservation Quantitative history (prosopography using statistics to study biographies) Historiography of science Social history and people's history; along with the French version the Annales school and the German Bielefeld School Subaltern Studies, regarding post-colonial India Urban history American urban history Whig history, history interpreted as the story of continuous progress World history Zeitgeist === Related fields === Important related fields include: Antiquarianism Genealogy Historical archaeology Intellectual history Numismatics Paleography Philosophy of history Pseudohistory == See also == List of historians by area of study Historical significance National memory === Methods === Archival research Auxiliary sciences of history Historical method Humanistic historiography List of historians, inclusive of most major historians List of historians by area of study List of history journals Philosophy of history Popular history Primary source – documents, correspondence, diaries Secondary source – interpretations, written history Tertiary source – textbooks and encyclopedias Periodization Public history, including museums and historical preservation Historical revisionism Shared historical authority Historiography at Wikiversity, where it is part of the School of History === Topics === African historiography Historiography of Argentina Atlantic history Historiography of Canada Chinese historiography Historiography of the Cold War Historiography of early Christianity Ethiopian historiography Historiography of the French Revolution Annales school, in France Historiography of Germany Bielefeld School, in Germany Greek historiography Historiography of Alexander the Great Classics History of India#Historiography Historiography of the fall of the Mughal Empire Historiography of Islam Historiography of early Islam Historiography of Japan Historiography of Korea Korean nationalist historiography Latin American History Middle Ages Historiography of feudalism Dark Ages (historiography) Historiography of the Crusades Historiography and nationalism Roman historiography Historiography of the fall of the Western Roman Empire Historiography of Switzerland Historiography in the Soviet Union Historiography of the United Kingdom Historiography of Scotland Historiography of the British Empire Historiography of the United States 
Frontier thesis World history Historiography of the causes of World War I Historiography of World War II Historiography of the Battle of France, 1940 == References == == Bibliography == === Theory === Appleby, Joyce, Lynn Hunt & Margaret Jacob, Telling the Truth About History. New York: W. W. Norton & Company, 1994. Bentley, Michael. Modern Historiography: An Introduction, 1999 ISBN 0-415-20267-1 Marc Bloch, The Historian's Craft (1940) Burke, Peter. History and Social Theory, Polity Press, Oxford, 1992 David Cannadine (editor), What is History Now, Palgrave Macmillan, 2002 E. H. Carr, What is History? 1961, ISBN 0-394-70391-X R. G. Collingwood, The Idea of History, 1936, ISBN 0-19-285306-6 Deluermoz, Quentin, and Singaravélou, Pierre: A Past of Possibilities: A History of What Could Have Been ISBN 978-0300227543 ; Yale University Press, 2021 Doran, Robert. ed. Philosophy of History After Hayden White. London: Bloomsbury, 2013. Geoffrey Elton, The Practice of History, 1969, ISBN 0-631-22980-9 Richard J. Evans In Defence of History, 1997, ISBN 1-86207-104-7 Fischer, David Hackett. Historians' Fallacies: Towards a Logic of Historical Thought, Harper & Row, 1970 Gardiner, Juliet (ed) What is History Today...? London: MacMillan Education Ltd., 1988. Harlaftis, Gelina, ed. The New Ways of History: Developments in Historiography (I.B. Tauris, 2010) 260 pp; trends in historiography since 1990 Hewitson, Mark, History and Causality, Palgrave Macmillan, 2014 Jenkins, Keith ed. The Postmodern History Reader (2006) Jenkins, Keith. Rethinking History, 1991, ISBN 0-415-30443-1 Arthur Marwick, The New Nature of History: knowledge, evidence, language, Basingstoke: Palgrave, 2001, ISBN 0-333-96447-0 Munslow, Alan. The Routledge Companion to Historical Studies (2000), an encyclopedia of concepts, methods and historians Olstein, Diego. Thinking History Globally (2025), summary Spalding, Roger & Christopher Parker, Historiography: An Introduction, 2008, ISBN 0-7190-7285-9 Sreedharan, E. (2004). A Textbook of Historiography, 500 B.C. to A.D. 2000. Orient Blackswan. ISBN 978-8125026570. Archived from the original on 13 January 2023. Retrieved 13 February 2016. Sreedharan (2007). A Manual of Historical Research Methodology. South Indian Studies. ISBN 978-8190592802. Tosh, John. The Pursuit of History, 2002, ISBN 0-582-77254-0 Tucker, Aviezer, ed. A Companion to the Philosophy of History and Historiography Malden: Blackwell, 2009 White, Hayden. The Fiction of Narrative: Essays on History, Literature, and Theory, 1957–2007, Johns Hopkins, 2010. Ed. Robert Doran === Guides to scholarship === The American Historical Association's Guide to Historical Literature, ed. by Mary Beth Norton and Pamela Gerardi (3rd ed. 2 vol, Oxford U.P. 1995) 2064 pages; annotated guide to 27,000 of the most important English language history books in all fields and topics vol 1 online, vol 2 online Allison, William Henry et al. eds. A guide to historical literature (1931) comprehensive bibliography for scholarship to 1930 as selected by scholars from the American Historical Association online edition, free; Backhouse, Roger E. and Philippe Fontaine, eds. A Historiography of the Modern Social Sciences (Cambridge University Press, 2014) pp. ix, 248; essays on the ways in which the histories of psychology, anthropology, sociology, economics, history, and political science have been written since 1945 Black, Jeremy. Clio's Battles: Historiography in Practice (Indiana University Press, 2015.) xvi, 323 pp. Boyd, Kelly, ed. 
Encyclopedia of Historians and Historical Writers (2 Vol 1999), 1600 pp covering major historians and themes Cline, Howard F. ed. Guide to Ethnohistorical Sources, Handbook of Middle American Indians (4 vols U of Texas Press 1973. Gray, Wood. Historian's Handbook, 2nd ed. (Houghton-Mifflin Co., cop. 1964), vii, 88 pp; a primer Elton, G.R. Modern Historians on British History 1485–1945: A Critical Bibliography 1945–1969 (1969), annotated guide to 1000 history books on every major topic, plus book reviews and major scholarly articles. online Loades, David, ed. Reader's Guide to British History (Routledge; 2 vol 2003) 1760 pp; highly detailed guide to British historiography excerpt and text search Charles Oman (1906), Inaugural Lecture on The Study of History: delivered on Wednesday, February 7, 1906, Oxford: Oxford University Press, Wikidata Q26157365 Parish, Peter, ed. Reader's Guide to American History (Routledge, 1997), 880 pp; detailed guide to historiography of American topics excerpt and text search Popkin, Jeremy D. From Herodotus to H-Net: The Story of Historiography (Oxford UP, 2015). Woolf, Daniel et al. The Oxford History of Historical Writing (5 vol 2011–r12), covers all major historians since AD 600 The Oxford History of Historical Writing: Volume 1: Beginnings to AD 600 online at doi:10.1093/acprof:osobl/9780199218158.001.0001 The Oxford History of Historical Writing: Volume 3: 1400–1800 online at doi:10.1093/acprof:osobl/9780199219179.001.0001 The Oxford History of Historical Writing: Volume 4: 1800–1945 online at doi:10.1093/acprof:osobl/9780199533091.001.0001 === Histories of historical writing === Arnold, John H. History: A Very Short Introduction (2000). New York: Oxford University Press. ISBN 978-0192853523 Barnes, Harry Elmer. A history of historical writing (1962) Barraclough, Geoffrey. History: Main Trends of Research in the Social and Human Sciences, (1978) Bauer, Stefan. The Invention of Papal History: Onofrio Panvinio between Renaissance and Catholic Reform (Oxford University Press, 2020). Bentley, Michael. ed., Companion to Historiography, Routledge, 1997, ISBN 0415285577, 39 chapters by experts Boyd, Kelly, ed. Encyclopedia of historians and historical writing (2 vol. Taylor & Francis, 1999), 1562 pp Breisach, Ernst. Historiography: Ancient, Medieval and Modern, 3rd ed., 2007, ISBN 0-226-07278-9 Budd, Adam, ed. The Modern Historiography Reader: Western Sources. (Routledge, 2009). Cline, Howard F., ed.Latin American History: Essays on Its Study and Teaching, 1898–1965. 2 vols. Austin: University of Texas Press 1965. Cohen, H. Floris The Scientific Revolution: A Historiographical Inquiry, (1994), ISBN 0-226-11280-2 Conrad, Sebastian. The Quest for the Lost Nation: Writing History in Germany and Japan in the American Century (2010) Crymble, Adam. Technology and the Historian: Transformations in the Digital Age (University of Illinois, 2021), 241 pp Fitzsimons, M.A. et al. eds. The development of historiography (1954) 471 pages; comprehensive global coverage; online free Gilderhus, Mark T. History and Historians: A Historiographical Introduction, 2002, ISBN 0-13-044824-9 Iggers, Georg G. Historiography in the 20th Century: From Scientific Objectivity to the Postmodern Challenge (2005) Kramer, Lloyd, and Sarah Maza, eds. A Companion to Western Historical Thought Blackwell 2006. 520 pp; ISBN 978-1-4051-4961-7. Momigliano, Arnaldo. 
The Classical Foundation of Modern Historiography, 1990, ISBN 978-0-226-07283-8 The Oxford History of Historical Writing (5 vol 2011), Volume 1: Beginnings to AD 600; Volume 2: 600–1400; Volume 3: 1400–1800; Volume 4: 1800–1945; Volume 5: Historical Writing since 1945 catalog Rahman, M. M. ed. Encyclopaedia of Historiography (2006) Excerpt and text search Soffer, Reba. History, Historians, and Conservatism in Britain and America: From the Great War to Thatcher and Reagan (2009) excerpt and text search Thompson, James Westfall. A History of Historical Writing. vol 1: From the earliest Times to the End of the 17th Century (1942); A History of Historical Writing. vol 2: The 18th and 19th Centuries (1942) Woolf, Daniel, ed. A Global Encyclopedia of Historical Writing (2 vol. 1998) Woolf, Daniel. "Historiography", in New Dictionary of the History of Ideas, ed. M.C. Horowitz, (2005), vol. I. Woolf, Daniel. A Global History of History (Cambridge University Press, 2011) Woolf, Daniel, ed. The Oxford History of Historical Writing. 5 vols. (Oxford University Press, 2011–12) Woolf, Daniel, A Concise History Of History (Cambridge University Press, 2019) === Feminist historiography === Bonnie G. Smith, The Gender of History: Men, Women, and Historical Practice, Harvard University Press 2000 Gerda Lerner, The Majority Finds its Past: Placing Women in History, New York: Oxford University Press 1979 Judith M. Bennett, History Matters: Patriarchy and the Challenge of Feminism, University of Pennsylvania Press, 2006 Julie Des Jardins, Women and the Historical Enterprise in America, University of North Carolina Press, 2002 Donna Guy, "Gender and Sexuality in Latin America" in The Oxford Handbook of Latin American History, José C. Moya, ed. New York: Oxford University Press 2011, pp. 367–381. Asunción Lavrin, "Sexuality in Colonial Spanish America" in The Oxford Handbook of Latin American History, José C. Moya, ed. New York: Oxford University Press 2011, pp. 132–154. Mary Ritter Beard, Woman as force in history: A study in traditions and realities Mary Spongberg, Writing women's history since the Renaissance, Palgrave Macmillan, 2002 Clare Hemmings, "Why Stories Matter: The Political Grammar of Feminist Theory", Duke University Press 2011 === National and regional studies === Berger, Stefan et al., eds. Writing National Histories: Western Europe Since 1800 (1999) excerpt and text search; how history has been used in Germany, France & Italy to legitimize the nation-state against socialist, communist and Catholic internationalism Iggers, Georg G. A new Directions and European Historiography (1975) LaCapra, Dominic, and Stephen L. Kaplan, eds. Modern European Intellectual History: Reappraisals and New Perspective (1982) ==== Asia and Africa ==== Cohen, Paul (1984). Preview of Discovering history in China: American historical writing on the recent Chinese past [WorldCat.org]. Columbia University Press - Studies of the East Asian Institute. ISBN 023152546X. OCLC 456728837. R.C. Majumdar, Historiography in Modem India (Bombay, 1970) ISBN 978-2102227356 Marcinkowski, M. Ismail. Persian Historiography and Geography: Bertold Spuler on Major Works Produced in Iran, the Caucasus, Central Asia, India and Early Ottoman Turkey (Singapore: Pustaka Nasional, 2003) Martin, Thomas R. Herodotus and Sima Qian: The First Great Historians of Greece and China: A Brief History with Documents (2009) E. Sreedharan, A Textbook of Historiography, 500 B.C. to A.D. 
2000 (2004) Arvind Sharma, Hinduism and Its Sense of History (Oxford University Press, 2003) ISBN 978-0-19-566531-4 Shourie, Arun (2014). Eminent historians: Their technology, their line, their fraud. Noida, Uttar Pradesh, India : HarperCollins Publishers. ISBN 978-9351365914 Yerxa, Donald A. Recent Themes in the History of Africa and the Atlantic World: Historians in Conversation (2008) excerpt and text search ==== Britain ==== Bann, Stephen. Romanticism and the Rise of History (Twayne Publishers, 1995) Bentley, Michael. Modernizing England's Past: English Historiography in the Age of Modernism, 1870–1970 (2006) excerpt and text search Cannadine, David. In Churchill's Shadow: Confronting the Passed in Modern Britain (2003) Furber, Elizabeth, ed. Changing Views on British History; Essays on Historical Writing Since 1939 (1966); 418pp; essays by scholars Goldstein, Doris S. (1986). "The origins and early years of the English Historical Review". English Historical Review. 101 (398): 6–19. doi:10.1093/ehr/ci.cccxcviii.6. Goldstein, Doris S. (1982). "The Organizational Development of the British Historical Profession, 1884–1921". Historical Research. 55 (132): 180–193. doi:10.1111/j.1468-2281.1982.tb01157.x. Hale, John Rigby, ed. The evolution of British historiography: from Bacon to Namier (1967). Hexter, J. H. On Historians: Reappraisals of some of the makers of modern history (1979); covers Carl Becker, Wallace Ferguson, Fernan Braudel, Lawrence Stone, Christopher Hill, and J.G.A. Pocock Howsam, Leslie. "Academic Discipline or Literary Genre?: The Establishment of Boundaries in Historical Writing". Victorian Literature and Culture 32.02 (2004): 525–545. online Jann, Rosemary. The Art and Science of Victorian History (1985) Jann, Rosemary. "From Amateur to Professional: The Case of the Oxbridge Historians". Journal of British Studies (1983) 22#2 pp: 122–147. Kenyon, John. The History Men: The Historical Profession in England since the Renaissance (1983) Loades, David. Reader's Guide to British History (2 vol. 2003) 1700pp; 1600-word-long historiographical essays on about 1000 topics Mitchell, Rosemary. Picturing the Past: English History in Text and Image 1830–1870 (Oxford: Clarendon Press, 2000) Philips, Mark Salber. Society and Sentiment: Genres of Historical Writing in Britain, 1740–1820 (Princeton University Press, 2000). Richardson, Roger Charles, ed. The debate on the English Revolution (2nd ed. Manchester University Press, 1998) Schlatter, Richard, ed. Recent Views on British History: Essays on Historical Writing Since 1966 (1984) 525 pp; 13 topics essays by scholars ===== British Empire ===== Berger, Carl. Writing Canadian History: Aspects of English Canadian Historical Writing since 1900, (2nd ed. 1986) Bhattacharjee, J. B. Historians and Historiography of North East India (2012) Davison, Graeme. The Use and Abuse of Australian History (2000) Farrell, Frank. Themes in Australian History: Questions, Issues and Interpretation in an Evolving Historiography (1990) Gare, Deborah. "Britishness in Recent Australian Historiography", The Historical Journal, Vol. 43, No. 4 (Dec., 2000), pp. 1145–1155 in JSTOR Guha, Ranajiit. Dominance Without Hegemony: History and Power in Colonial India (Harvard UP, 1998) Granatstein, J. L. Who Killed Canadian History? (1998) Mittal, S. C India distorted: A study of British historians on India (1995), on 19th century writers Saunders, Christopher. The making of the South African past: major historians on race and class, (1988) Winks, Robin, ed. 
The Oxford History of the British Empire: Volume V: Historiography (2001) ==== France ==== Burke, Peter. The French Historical Revolution: The Annales School 1929–2014 (John Wiley & Sons, 2015). Clark, Stuart (1983). "French historians and early modern popular culture". Past & Present (100): 62–99. doi:10.1093/past/100.1.62. Daileader, Philip and Philip Whalen, eds. French Historians 1900–2000: New Historical Writing in Twentieth-Century France (2010) 40 long essays by experts. excerpt Revel, Jacques, and Lynn Hunt, eds. Histories: French Constructions of the Past, (1995). 654pp; 65 essays by French historians Stoianovich, Traian. French Historical Method: The Annales Paradigm (1976) ==== Germany ==== Fletcher, Roger. "Recent developments in West German Historiography: the Bielefeld School and its critics". German Studies Review (1984): 451–480. in JSTOR Hagemann, Karen, and Jean H. Quataert, eds. Gendering Modern German History: Rewriting Historiography (2008) Iggers, Georg G. The German Conception of History: The National Tradition of Historical Thought from Herder to the Present (2nd ed. 1983) Rüger, Jan, and Nikolaus Wachsmann, eds. Rewriting German history: new perspectives on modern Germany (Palgrave Macmillan, 2015). excerpt Sheehan, James J. "What is German history? Reflections on the role of the nation in German history and historiography". Journal of Modern History (1981): 2–23. in JSTOR Sperber, Jonathan. "Master Narratives of Nineteenth-century German History". Central European History (1991) 24#1: 69–91. online Stuchtey, Benedikt, and Peter Wende, eds. British and German historiography, 1750–1950: traditions, perceptions, and transfers (2000). ==== Latin America ==== Adelman, Jeremy, ed. Colonial Legacies. New York: Routledge 1999. Coatsworth, John. "Cliometrics and Mexican History", Historical Methods18:1 (Winter 1985)31–37. Gootenberg, Paul (2004). "Between a Rock and a Softer Place: Reflections on Some Recent Economic History of Latin America". Latin American Research Review. 39 (2): 239–257. doi:10.1353/lar.2004.0031. S2CID 144339079. Kuzensof; Oppenheimer, Robert (1985). "The Family and Society in Nineteenth-Century Latin America: An Historiographical Introduction". Journal of Family History. 10 (3): 215–234. doi:10.1177/036319908501000301. S2CID 145607701. Lockhart, James. "The Social History of Early Latin America". Latin American Research Review 1972. Moya, José C. The Oxford Handbook of Latin American History. New York: Oxford University Press 2011. Russell-Wood, A. J. R. (2001). "Archives and the Recent Historiography on Colonial Brazil". Latin American Research Review. 36: 175–103. doi:10.1017/S0023879100018847. S2CID 252750152. Van Young, Eric (1999). "The New Cultural History Comes to Old Mexico". The Hispanic American Historical Review. 79 (2): 211–248. doi:10.1215/00182168-79.2.211. ==== United States ==== Hofstadter, Richard. The Progressive Historians: Turner, Beard, Parrington (1968) Novick, Peter. That Noble Dream: The "Objectivity Question" and the American Historical Profession (1988), ISBN 0-521-34328-3 Palmer, William W. "All Coherence Gone? A Cultural History of Leading History Departments in the United States, 1970–2010", Journal of The Historical Society (2012), 12: 111–153. doi:10.1111/j.1540-5923.2012.00360.x Palmer, William. Engagement with the Past: The Lives and Works of the World War II Generation of Historians (2001) Parish, Peter J., ed. Reader's Guide to American History (1997), historiographical overview of 600 topics Wish, Harvey. 
The American Historian (1960), covers pre-1920 === Themes, organizations, and teaching === Carlebach, Elishiva, et al. eds. Jewish History and Jewish Memory: Essays in Honor of Yosef Hayim Yerushalmi (1998) excerpt and text search Charlton, Thomas L. History of Oral History: Foundations and Methodology (2007) Darcy, R. and Richard C. Rohrs, A Guide to Quantitative History (1995) Dawidowicz, Lucy S. The Holocaust and Historians. (1981). Ernest, John. Liberation Historiography: African American Writers and the Challenge of History, 1794–1861. (2004) Evans, Ronald W. The Hope for American School Reform: The Cold War Pursuit of Inquiry Learning in Social Studies(Palgrave Macmillan; 2011) 265 pages Ferro, Marc, Cinema and History (1988) Green, Anna, and Kathleen Troup. The Houses of History: A Critical Reader in Twentieth Century History and Theory. 2 ed. Manchester University Press, 2016. Hudson, Pat. History by Numbers: An Introduction to Quantitative Approaches (2002) Jarzombek, Mark, A Prolegomenon to Critical Historiography, Journal of Architectural Education 52/4 (May 1999): 197-206 [2] Keita, Maghan. Race and the Writing of History. Oxford UP (2000) Leavy, Patricia. Oral History: Understanding Qualitative Research (2011) excerpt and text search Loewen, James W. Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong, (1996) Manning, Patrick, ed. World History: Global And Local Interactions (2006) Maza, Sarah. Thinking About History. Chicago: University of Chicago Press, 2017. doi:10.7208/chicago/9780226109473.001.0001 Meister, Daniel R. "The biographical turn and the case for historical biography" History Compass (Dec. 2017) doi:10.1111/hic3.12436 abstract Morris-Suzuki, Tessa. The Past Within Us: Media, Memory, History (2005), ISBN 1-85984-513-4 Ritchie, Donald A. The Oxford Handbook of Oral History (2010) excerpt and text search Tröhler, Daniel "History and Historiography. Approaches to Historical Research in Education" T. Fitzgerald (ed.), Handbook of Historical Studies in Education (2019); [3] == External links == International Commission for the History and Theory of Historiography [4] short guide to Historiographical terms Basic guide to historiography research for undergraduates Cromohs – cyber review of modern historiography open-access electronic scholarly journal Archived 2019-10-23 at the Wayback Machine History of Historiography scholarly journal in several languages
Wikipedia/Historiography
Critical theory is a social, historical, and political school of thought and philosophical perspective which centers on analyzing and challenging systemic power relations in society, arguing that knowledge, truth, and social structures are fundamentally shaped by power dynamics between dominant and oppressed groups. Beyond just understanding and critiquing these dynamics, it explicitly aims to transform society through praxis and collective action with an explicit sociopolitical purpose. Critical theory's main tenets center on analyzing systemic power relations in society, focusing on the dynamics between groups with different levels of social, economic, and institutional power. Unlike traditional social theories that aim primarily to describe and understand society, critical theory explicitly seeks to critique and transform it. Thus, it positions itself as both an analytical framework and a movement for social change. Critical theory examines how dominant groups and structures influence what society considers objective truth, challenging the very notion of pure objectivity and rationality by arguing that knowledge is shaped by power relations and social context. Key principles of critical theory include examining intersecting forms of oppression, emphasizing historical contexts in social analysis, and critiquing capitalist structures. The framework emphasizes praxis (combining theory with action) and highlights how lived experience, collective action, ideology, and educational systems play crucial roles in maintaining or challenging existing power structures. The historical evolution of critical theory traces back to the first generation of the Frankfurt School in the 1920s. Figures like Max Horkheimer, Theodor Adorno, Herbert Marcuse, and others sought to expand traditional Marxist analysis by incorporating insights from psychology, culture, and philosophy, moving beyond pure economic determinism. Their work was significantly influenced by Freud's psychoanalytic theories, particularly how subjective experience shaped human consciousness, behavior, and social reality. Freud's concept that an individual's lived experience could differ dramatically from objective reality aligned with critical theory's critique of positivism, science, and pure rationality. Critical theory continued to evolve beyond the first generation of the Frankfurt School. Jürgen Habermas, often identified with the second generation, shifted the focus toward communication and the role of language in social emancipation. Around the same time, post-structuralist and postmodern thinkers, including Michel Foucault and Jacques Derrida, were reshaping academic discourse with critiques of knowledge, meaning, power, institutions, and social control with deconstructive approaches that further challenged assumptions about objectivity and truth. Though neither Foucault nor Derrida belonged formally to the Frankfurt School tradition, their works profoundly influenced later formulations of critical theory. Collectively, the post-structuralist and postmodern insights expanded the scope of critical theory, weaving cultural and linguistic critiques into its Marxian roots. 
With the emigration of Herbert Marcuse, contemporary critical theory expanded to the United States, and today it covers a wide range of social critique within economics, ethics, history, law, politics, psychology, and sociology, with a diverse list of subjects including critical animal studies, critical criminology, dependency theory and imperialism studies, critical environmental justice, feminist theory and gender studies, critical historiography, intersectionality, critical legal studies, critical pedagogy, postcolonialism, critical race theory, queer theory, and critical terrorism studies. Modern critical theory represents a movement away from Marxism's purely economic analysis to a broader examination of social and cultural power structures, incorporating and transforming Freudian concepts and postmodernism. At the same time, it retains Marxism's emphasis on analyzing how dominant groups and systems shape and control society through exploitation and oppression, its commitment to social and political praxis, its adaptation and reformulation of multiple Marxian conceptual frameworks (including alienation, reification, ideology, emancipation, and base and superstructure), and a general skepticism towards and critique of capitalism. Criticism of critical theory has come from various intellectual perspectives. Critics have raised concerns about critical theory's reliance on Marxist revisionism and its frequent emphasis on subjective narratives, which can sometimes be at odds with empirical methodologies. They also point to issues of circular reasoning and a lack of falsifiability in some critical theory arguments, as well as an epistemological and methodological stance that challenges or conflicts with traditional scientific methods and ideals of rationality and objectivity. == History == Max Horkheimer first defined critical theory (German: kritische Theorie) in his 1937 essay "Traditional and Critical Theory" as a social theory oriented toward critiquing and changing society as a whole, in contrast to traditional theory oriented only toward understanding or explaining it. Wanting to distinguish critical theory as a radical, emancipatory form of Marxist philosophy, Horkheimer critiqued both the model of science put forward by logical positivism and what he and his colleagues saw as the covert positivism and authoritarianism of orthodox Marxism and Communism. He described a theory as critical insofar as it seeks "to liberate human beings from the circumstances that enslave them". Critical theory involves a normative dimension, either by criticizing society in terms of some general theory of values or norms (oughts), or by criticizing society in terms of its own espoused values (i.e. immanent critique). Significantly, critical theory not only conceptualizes and critiques societal power structures, but also establishes an empirically grounded model to link society to the human subject. It defends the universalist ambitions of the tradition, but does so within a specific context of social-scientific and historical research. The core concepts of critical theory are that it should be directed at the totality of society in its historical specificity (i.e., how it came to be configured at a specific point in time), and that it should improve understanding of society by integrating all the major social sciences, including geography, economics, sociology, history, political science, anthropology, and psychology. Postmodern critical theory is another major product of critical theory. 
It analyzes the fragmentation of cultural identities in order to challenge modernist-era constructs such as metanarratives, rationality, and universal truths, while politicizing social problems "by situating them in historical and cultural contexts, to implicate themselves in the process of collecting and analyzing data, and to relativize their findings". === Marx === Marx explicitly developed the notion of critique into the critique of ideology, linking it with the practice of social revolution, as stated in the 11th section of his Theses on Feuerbach: "The philosophers have only interpreted the world, in various ways; the point is to change it." In early works, including The German Ideology, Marx developed his concepts of false consciousness and of ideology as the interests of one section of society masquerading as the interests of society as a whole. === Adorno and Horkheimer === One of the distinguishing characteristics of critical theory, as Theodor W. Adorno and Max Horkheimer elaborated in their Dialectic of Enlightenment (1947), is an ambivalence about the ultimate source or foundation of social domination, an ambivalence that gave rise to the "pessimism" of the new critical theory about the possibility of human emancipation and freedom. This ambivalence was rooted in the historical circumstances in which the work was originally produced, particularly the rise of Nazism, state capitalism, and the culture industry as entirely new forms of social domination that could not be adequately explained in the terms of traditional Marxist sociology. For Adorno and Horkheimer, state intervention in the economy had effectively abolished the traditional tension between Marxism's "relations of production" and "material productive forces" of society. The market (as an "unconscious" mechanism for the distribution of goods) had been replaced by centralized planning. Contrary to Marx's prediction in the Preface to a Contribution to the Critique of Political Economy, this shift did not lead to "an era of social revolution" but to fascism and totalitarianism. As a result, critical theory was left, in Habermas's words, without "anything in reserve to which it might appeal, and when the forces of production enter into a baneful symbiosis with the relations of production that they were supposed to blow wide open, there is no longer any dynamism upon which critique could base its hope". For Adorno and Horkheimer, this posed the problem of how to account for the apparent persistence of domination in the absence of the very contradiction that, according to traditional critical theory, was the source of domination itself. === Habermas === In the 1960s, Habermas, a proponent of critical social theory, raised the epistemological discussion to a new level in his Knowledge and Human Interests (1968) by identifying critical knowledge as based on principles that differentiated it from both the natural sciences and the humanities, through its orientation to self-reflection and emancipation. Although unsatisfied with Adorno and Horkheimer's thought in Dialectic of Enlightenment, Habermas shares the view that, in the form of instrumental rationality, the era of modernity marks a move away from the liberation of enlightenment and toward a new form of enslavement. In Habermas's work, critical theory transcended its theoretical roots in German idealism and progressed closer to American pragmatism. 
Habermas's ideas about the relationship between modernity and rationalization are in this sense strongly influenced by Max Weber. He further dissolved the elements of critical theory derived from Hegelian German idealism, though his epistemology remains broadly Marxist. Perhaps his two most influential ideas are the concepts of the public sphere and communicative action, the latter arriving partly as a reaction to new post-structural or so-called "postmodern" challenges to the discourse of modernity. Habermas engaged in regular correspondence with Richard Rorty, and a strong sense of philosophical pragmatism may be felt in his thought, which frequently traverses the boundaries between sociology and philosophy. === Modern critical theorists === Contemporary philosophers and researchers who have focused on understanding and critiquing critical theory include Nancy Fraser, Axel Honneth, Judith Butler, and Rahel Jaeggi. Honneth is known for his works Pathology of Reason and The Legacy of Critical Theory, in which he attempts to explain critical theory's purpose in a modern context. Jaeggi focuses on both critical theory's original intent and a more modern understanding that some argue has created a new foundation for modern usage of critical theory. Butler contextualizes critical theory as a way to rhetorically challenge oppression and inequality, specifically concepts of gender. Honneth established a theory that many use to understand critical theory, the theory of recognition. In this theory, he asserts that in order for someone to be responsible for themselves and their own identity they must be also recognized by those around them: without recognition in this sense from peers and society, individuals can never become wholly responsible for themselves and others, nor experience true freedom and emancipation—i.e., without recognition, the individual cannot achieve self-actualization. Like many others who put stock in critical theory, Jaeggi is vocal about capitalism's cost to society. Throughout her writings, she has remained doubtful about the necessity and use of capitalism in regard to critical theory. Most of Jaeggi's interpretations of critical theory seem to work against the foundations of Habermas and follow more along the lines of Honneth in terms of how to look at the economy through the theory's lens. She shares many of Honneth's beliefs, and many of her works try to defend them against criticism Honneth has received. To provide a dialectical opposite to Jaeggi's conception of alienation as 'a relation of relationlessness', Hartmut Rosa has proposed the concept of resonance. Rosa uses this term to refer to moments when late modern subjects experience momentary feelings of self-efficacy in society, bringing them into a temporary moment of relatedness with some aspect of the world. Rosa describes himself as working within the critical theory tradition of the Frankfurt School, providing an extensive critique of late modernity through his concept of social acceleration. However his resonance theory has been questioned for moving too far beyond the Adornoian tradition of "looking coldly at society". == Fields == === Postmodern critical social theory === Focusing on language, symbolism, communication, and social construction, critical theory has been applied in the social sciences as a critique of social construction and postmodern society. 
While modernist critical theory (as described above) concerns itself with "forms of authority and injustice that accompanied the evolution of industrial and corporate capitalism as a political-economic system", postmodern critical theory politicizes social problems "by situating them in historical and cultural contexts, to implicate themselves in the process of collecting and analyzing data, and to relativize their findings". Meaning itself is seen as unstable due to social structures' rapid transformation. As a result, research focuses on local manifestations rather than broad generalizations. Postmodern critical research is also characterized by the crisis of representation, which rejects the idea that a researcher's work is an "objective depiction of a stable other". Instead, many postmodern scholars have adopted "alternatives that encourage reflection about the 'politics and poetics' of their work. In these accounts, the embodied, collaborative, dialogic, and improvisational aspects of qualitative research are clarified." The term critical theory is often appropriated when an author works in sociological terms, yet attacks the social or human sciences, thus attempting to remain "outside" those frames of inquiry. Michel Foucault has been described as one such author. Jean Baudrillard has also been described as a critical theorist to the extent that he was an unconventional and critical sociologist; this appropriation is similarly casual, holding little or no relation to the Frankfurt School. In contrast, Habermas is one of the key critics of postmodernism. ==== Communication studies ==== When, in the 1970s and 1980s, Habermas redefined critical social theory as a study of communication, with communicative competence and communicative rationality on the one hand, and distorted communication on the other, the two versions of critical theory began to overlap to a much greater degree than before. === Critical disability theory === === Critical legal studies === ==== Immigration studies ==== Critical theory can be used to interpret the right of asylum and immigration law. === Critical finance studies === Critical finance studies apply critical theory to financial markets and central banks. === Critical management studies === === Critical international relations theory === === Critical race theory === === Critical pedagogy === Critical theorists have widely credited Paulo Freire for the first applications of critical theory to education/pedagogy, considering his best-known work to be Pedagogy of the Oppressed, a seminal text in what is now known as the philosophy and social movement of critical pedagogy. Dedicated to the oppressed and based on his own experience helping Brazilian adults learn to read and write, Freire includes a detailed class analysis in his exploration of the relationship between the colonizer and the colonized. In the book, he calls traditional pedagogy the "banking model of education", because it treats the student as an empty vessel to be filled with knowledge. He argues that pedagogy should instead treat the learner as a co-creator of knowledge. In contrast to the banking model, the teacher in the critical-theory model is not the dispenser of all knowledge, but a participant who learns with and from the students—in conversation with them, even as they learn from the teacher. The goal is to liberate the learner from an oppressive construct of teacher versus student, a dichotomy analogous to colonizer and colonized. 
It is not enough for the student to analyze societal power structures and hierarchies, to merely recognize imbalance and inequity; critical theory pedagogy must also empower the learner to reflect and act on that reflection to challenge an oppressive status quo. ==== Critical consciousness ==== ==== Critical university studies ==== === Critical psychology === === Critical criminology === === Critical animal studies === === Critical social work === === Critical ethnography === === Critical data studies === === Critical environmental justice === Critical environmental justice applies critical theory to environmental justice. == Criticism == While critical theorists have often been called Marxist intellectuals, their tendency to denounce some Marxist concepts and to combine Marxian analysis with other sociological and philosophical traditions has resulted in accusations of revisionism by orthodox Marxist and Marxist–Leninist philosophers. Martin Jay has said that the first generation of critical theory is best understood not as promoting a specific philosophical agenda or ideology, but as "a gadfly of other systems". Critical theory has been criticized for not offering any clear road map to political action (praxis), often explicitly repudiating any solutions. Those objections mostly apply to the first-generation Frankfurt School, while the issue of politics is addressed in a much more assertive way in contemporary theory. Another criticism of critical theory "is that it fails to provide rational standards by which it can show that it is superior to other theories of knowledge, science, or practice." Rex Gibson argues that critical theory suffers from being cliquish, conformist, elitist, immodest, anti-individualist, naive, too critical, and contradictory. Hughes and Hughes argue that Habermas' theory of ideal public discourse "says much about rational talkers talking, but very little about actors acting: Felt, perceptive, imaginative, bodily experience does not fit these theories". Some feminists argue that critical theory "can be as narrow and oppressive as the rationalization, bureaucratization, and cultures they seek to unmask and change." Critical theory's language has been criticized as being too dense to understand, although "Counter arguments to these issues of language include claims that a call for clearer and more accessible language is anti-intellectual, a new 'language of possibility' is needed, and oppressed peoples can understand and contribute to new languages." Bruce Pardy, writing for the National Post, argued that any challenges to the "legitimacy [of critical theory] can be interpreted as a demonstration of their [critical theory's proponents'] thesis: the assertion of reason, logic and evidence is a manifestation of privilege and power. Thus, any challenger risks the stigma of a bigoted oppressor." Robert Danisch, writing for The Conversation, argued that critical theory, and the modern humanities more broadly, focuses too much on criticizing the current world rather than trying to make a better world. 
== See also == Modernism Antipositivism Critical military studies Cultural studies Information criticism Marxist cultural analysis Outline of critical theory Popular culture studies Outline of organizational theory Postcritique Quare theory === Lists === List of critical theorists List of works in critical theory === Journals === Constellations Representations Critical Inquiry Telos Law and Critique == References == === Footnotes === === Works cited === === Bibliography === "Problematizing Global Knowledge." Theory, Culture & Society 23(2–3). 2006. ISSN 0263-2764. Calhoun, Craig. 1995. Critical Social Theory: Culture, History, and the Challenge of Difference. Blackwell. ISBN 1557862885 – A survey of and introduction to the current state of critical social theory. Charmaz, K. 1995. "Between positivism and postmodernism: Implications for methods." Studies in Symbolic Interaction 17:43–72. Conquergood, D. 1991. "Rethinking ethnography: Towards a critical cultural politics." Communication Monographs 58(2):179–94. doi:10.1080/03637759109376222. Corchia, Luca. 2010. La logica dei processi culturali. Jürgen Habermas tra filosofia e sociologia. Genova: Edizioni ECIG. ISBN 978-8875441951. Dahms, Harry, ed. 2008. No Social Science Without Critical Theory, (Current Perspectives in Social Theory 25). Emerald/JAI. Gandler, Stefan. 2009. Fragmentos de Frankfurt. Ensayos sobre la Teoría crítica. México: 21st Century Publishers/Universidad Autónoma de Querétaro. ISBN 978-6070300707. Geuss, Raymond. 1981. The Idea of a Critical Theory. Habermas and the Frankfurt School. Cambridge University Press. ISBN 0521284228. Honneth, Axel. 2006. La société du mépris. Vers une nouvelle Théorie critique, La Découverte. ISBN 978-2707147721. Horkheimer, Max. 1982. Critical Theory Selected Essays. New York: Continuum Publishing. Morgan, Marcia. 2012. Kierkegaard and Critical Theory. New York: Lexington Books. Rolling, James H. 2008. "Secular blasphemy: Utter(ed) transgressions against names and fathers in the postmodern era." Qualitative Inquiry 14(6):926–48. – An example of critical postmodern work. Sim, Stuart, and Borin Van Loon. 2001. Introducing Critical Theory. ISBN 1840462647. – A short introductory volume with illustrations. Thomas, Jim. 1993. Doing Critical Ethnography. London: Sage. pp. 1–5 & 17–25. Tracy, S. J. 2000. "Becoming a character for commerce: Emotion labor, self subordination and discursive construction of identity in a total institution." Management Communication Quarterly 14(1):90–128. – An example of critical qualitative research. Willard, Charles Arthur. 1982. Argumentation and the Social Grounds of Knowledge. University of Alabama Press. — 1989. A Theory of Argumentation. University of Alabama Press. — 1996. Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy. Chicago: University of Chicago Press. Chapter 9. Critical Theory Solomon, Robert C., ed. (2007). The Blackwell Guide to Continental Philosophy. David L. Sherman. Oxford: John Wiley & Sons. ISBN 978-1405143042. OCLC 437147422. == External links == "The Frankfurt School and Critical Theory". Internet Encyclopedia of Philosophy. Gerhardt, Christina. "Frankfurt School". The International Encyclopedia of Revolution and Protest. Ness, Immanuel (ed). Blackwell Publishing, 2009. Blackwell Reference Online. "Theory: Death Is Not the End" N+1 magazine's short history of academic Critical Theory. Critical Legal Thinking A Critical Legal Studies website which uses Critical Theory in an analysis of law and politics. L. 
Corchia, Jürgen Habermas. A Bibliography: works and studies (1952–2013), Pisa, Edizioni Il Campano – Arnus University Books, 2013, 606 pages. Sim, S.; Van Loon, B. (2009). Introducing Critical Theory: A Graphic Guide. Icon Books Ltd. === Archival collections === Guide to the Critical Theory Offprint Collection. Special Collections and Archives, The UC Irvine Libraries, Irvine, California. Guide to the Critical Theory Institute Audio and Video Recordings, University of California, Irvine. Special Collections and Archives, The UC Irvine Libraries, Irvine, California. University of California, Irvine, Critical Theory Institute Manuscript Materials. Special Collections and Archives, The UC Irvine Libraries, Irvine, California.
Wikipedia/Critical_theory
Phenomenography is a qualitative research methodology, within the interpretivist paradigm, that investigates the qualitatively different ways in which people experience or think about something. It is an approach to educational research which appeared in publications in the early 1980s. It initially emerged from an empirical rather than a theoretical or philosophical basis. Although it has been an established methodological approach in education for several decades, phenomenography has now been applied extensively in a range of diverse disciplines such as environmental management, computer programming, workplace competence, and internationalization practices. == Overview == Phenomenography's ontological assumptions are subjectivist: the world exists, and different people construct it in different ways. The viewpoint is nonetheless non-dualist: there is only one world, one that is ours, and one that people experience in many different ways. Phenomenography's research object has the character of knowledge; therefore its ontological assumptions are also epistemological assumptions. Its emphasis is on description. Its data collection methods typically include semi-structured interviews with a small, purposive sample of subjects, with the researcher "working toward an articulation of the interviewee’s reflections on experience that is as complete as possible". Description is important because our knowledge of the world is a matter of meaning and of the qualitative similarities and differences in meaning as it is experienced by different people. A phenomenographic data analysis sorts the qualitatively distinct perceptions which emerge from the data collected into specific "categories of description." The set of these categories is sometimes referred to as an "outcome space." These categories (and the underlying structure) become the phenomenographic essence of the phenomenon; they are the primary outcomes and the most important result of phenomenographic research. Phenomenographic categories are logically related to one another, typically by way of hierarchically inclusive relationships, although linear and branched relationships can also occur. What varies between different categories of description is known as the "dimensions of variation." The process of phenomenographic analysis is strongly iterative and comparative. It involves continual sorting and resorting of data and ongoing comparisons between the data and the developing categories of description, as well as between the categories themselves. A phenomenographic analysis seeks a "description, analysis, and understanding of . . . experiences". The focus is on variation: variation both in the perceptions of the phenomenon, as experienced by the actor, and in the "ways of seeing something", as experienced and described by the researcher. This is described as phenomenography's "theory of variation." Phenomenography allows researchers to use their own experiences as data for phenomenographic analysis; it aims for a collective analysis of individual experiences. == Emphasis on description == Phenomenographic studies usually involve contextual groups of people, and data collection involves individual descriptions of understanding, often through interviews. Analysis is whole-group orientated, since all data is analysed together with the aim of identifying possible conceptions of experience related to the phenomenon under investigation, rather than individual experiences. 
There is emphasis on detailed analysis of description which follows from an assumption that conceptions are formed from both the results of human action and from the conditions for it. Clarification of understanding and experience depends upon the meaning of the conceptions themselves. The object of phenomenographic study is not the phenomenon per se but the relationship between the actors and the phenomenon. == Distinguished from phenomenology == Phenomenography is not phenomenology. Phenomenographers adopt an empirical orientation and they investigate the experiences of others. The focus of interpretive phenomenology is upon the essence of the phenomenon, whereas the focus of phenomenography is upon the essence of the experiences and the subsequent perceptions of the phenomenon. == See also == Ference Marton Antipositivism == References ==
Wikipedia/Phenomenography
British Journal for the Philosophy of Science is a peer-reviewed, academic journal of philosophy, owned by the British Society for the Philosophy of Science and published by University of Chicago Press. The journal publishes work that uses philosophical methods in addressing issues raised in the natural and human sciences. == Overview == The leading international journal in the field, BJPS publishes outstanding new work on a variety of traditional and 'cutting edge' topics, from issues of explanation and realism to the applicability of mathematics, from the metaphysics of science to the nature of models and simulations, as well as foundational issues in the physical, life, and social sciences. Recent topics covered in the journal include the epistemology of measurement, mathematical non-causal explanations, signalling games, the nature of biochemical kinds, and approaches to human cognitive development, among many others. The journal seeks to advance the field by publishing innovative and thought-provoking papers, discussion notes and book reviews that open up new directions or shed new light on well-known issues. The British Journal for the Philosophy of Science operates a triple-anonymized peer review process and receives over 600 submissions a year. It is fully compliant with the RCUK open access policy, and is a member of the Committee on Publication Ethics (COPE). In 2016, book reviews were moved to online-only publication in the BJPS Review of Books. In 2021, the journal launched BJPS Short Reads, essays and a podcast featuring introductions to their published articles. The journal also runs a blog, Auxiliary Hypotheses. == Editors-in-chief == The current editors-in-chief are Tim Lewens (University of Cambridge) and Robert D. Rupert (University of Colorado Boulder). The following persons have been editors-in-chief: == The BJPS Popper Prize == The "Sir Karl Popper Essay Prize" was originally established at the wish of the late Laurence B. Briskman (University of Edinburgh), who died on 8 May 2002, having endowed a fund to encourage work in any area falling under the general description of the critical rationalist philosophy of Karl Popper. Briskman was greatly influenced by Popper, who remained the dominant intellectual influence on his philosophical outlook throughout his career. While originally open for submissions, since 2011 the prize is only awarded to papers having appeared in the journal. The endowment ended in 2017, at which point the journal took over funding the prize. The decision was also taken to widen the prize's remit, to include all papers published in the journal and not just those concerned with Popper's work. At the same time, the prize's name was changed to the "BJPS Popper Prize". == Abstracting and indexing == According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.2. == References == == External links == Official website
Wikipedia/British_Journal_for_the_Philosophy_of_Science
Grounded theory is a systematic methodology that has been largely applied to qualitative research conducted by social scientists. The methodology involves the construction of hypotheses and theories through the collecting and analysis of data. Grounded theory involves the application of inductive reasoning. The methodology contrasts with the hypothetico-deductive model used in traditional scientific research. A study based on grounded theory is likely to begin with a question, or even just with the collection of qualitative data. As researchers review the data collected, ideas or concepts become apparent to the researchers. These ideas/concepts are said to "emerge" from the data. The researchers tag those ideas/concepts with codes that succinctly summarize the ideas/concepts. As more data are collected and re-reviewed, codes can be grouped into higher-level concepts and then into categories. These categories become the basis of a hypothesis or a new theory. Thus, grounded theory is quite different from the traditional scientific model of research, where the researcher chooses an existing theoretical framework, develops one or more hypotheses derived from that framework, and only then collects data for the purpose of assessing the validity of the hypotheses. == Background == Grounded theory is a general research methodology, a way of thinking about and conceptualizing data. It is used in studies of diverse populations from areas like remarriage after divorce and professional socialization. Grounded theory methods were developed by two sociologists, Barney Glaser and Anselm Strauss. While collaborating on research on dying hospital patients, Glaser and Strauss developed the constant comparative method which later became known as the grounded theory method. They summarized their research in the book Awareness of Dying, which was published in 1965. Glaser and Strauss went on to describe their method in more detail in their 1967 book, The Discovery of Grounded Theory. The three aims of the book were to: Provide a rationale to justify the idea that the gap between a social science theory and empirical data should be narrowed by firmly grounding a theory in empirical research; Provide a logic for grounded theory; Legitimize careful qualitative research, the most important goal, because, by the 1960s, quantitative research methods had gained so much prestige that qualitative research had come to be seen as inadequate. A turning point in the acceptance of the theory came after the publication of Awareness of Dying. Their work on dying helped establish the influence of grounded theory in medical sociology, psychology, and psychiatry. From its beginnings, grounded theory methods have become more prominent in fields as diverse as drama, management, manufacturing, and education. == Philosophical underpinnings == Grounded theory combines traditions in positivist philosophy, general sociology, and, particularly, the symbolic interactionist branch of sociology. According to Ralph, Birks and Chapman, grounded theory is "methodologically dynamic" in the sense that, rather than being a complete methodology, grounded theory provides a means of constructing methods to better understand situations humans find themselves in. Glaser had a background in positivism, which helped him develop a system of labeling for the purpose of coding study participants' qualitative responses. He recognized the importance of systematic analysis for qualitative research. 
He thus helped ensure that grounded theory requires the generation of codes, categories, and properties. Strauss had a background in symbolic interactionism, a theory that aims to understand how people interact with each other in creating symbolic worlds and how an individual's symbolic world helps to shape a person's behavior. He viewed individuals as "active" participants in forming their own understanding of the world. Strauss underlined the richness of qualitative research in shedding light on social processes and the complexity of social life. According to Glaser, the strategy of grounded theory is to interpret personal meaning in the context of social interaction. The grounded theory system studies "the interrelationship between meaning in the perception of the subjects and their action". Grounded theory constructs symbolic codes based on categories emerging from recorded qualitative data. The idea is to allow grounded theory methods to help us better understand the phenomenal world of individuals. According to Milliken and Schreiber, another of the grounded theorist's tasks is to understand the socially shared meanings that underlie individuals' behaviors and the reality of the participants being studied. == Premise == Grounded theory provides methods for generating hypotheses from qualitative data. After hypotheses are generated, it is up to other researchers to attempt to sustain or reject those hypotheses. Questions asked by the qualitative researcher employing grounded theory include "What is going on?" and "What is the main problem of the participants, and how are they trying to solve it?" Researchers using grounded theory methods do not aim for the "truth." Rather, they try to conceptualize what has been taking place in the lives of study participants. When applying grounded theory methods, the researcher does not formulate hypotheses in advance of data collection, as is often the case in traditional research; otherwise the hypotheses would be ungrounded in the data. Hypotheses are supposed to emerge from the data. A goal of the researcher employing grounded theory methods is to generate concepts that explain the way people resolve their central concerns regardless of time and place. These concepts organize the ground-level data. The concepts become the building blocks of hypotheses, and the hypotheses become the constituents of a theory. In most behavioral research endeavors, persons or patients are the units of analysis, whereas in grounded theory the unit of analysis is the incident. Typically several hundred incidents are analyzed in a grounded theory study because every participant usually reports many incidents. When comparing many incidents in a certain area of study, the emerging concepts and their inter-relationships are paramount. Consequently, grounded theory is a general method that can use any kind of data, although it is most commonly applied to qualitative data. Most researchers oriented toward grounded theory do not apply statistical methods to the qualitative data they collect. The results of grounded theory research are not reported in terms of statistically significant findings, although there may be probability statements about the relationships between concepts. Internal validity in its traditional research sense is not an issue in grounded theory. Rather, questions of fit, relevance, workability, and modifiability are more important in grounded theory.
In addition, adherents of grounded theory emphasize a theoretical validity rather than traditional ideas of internal validity or measurement-related validity. Grounded theory adherents are "less charitable when discussing [psychometric] reliability, calling a single method of observation continually yielding an unvarying measurement a quixotic reliability." A theory that is fitting has concepts that are closely connected to the incidents the theory purports to represent; fit depends on how thoroughly the constant comparison of incidents to concepts has been conducted. A qualitative study driven by grounded theory examines the genuine concerns of study participants; those concerns are not only of academic interest. Grounded theory works when it explains how study participants address the problem at hand and related problems. A theory is modifiable and can be altered when new relevant data are compared to existing data. == Methodology == Once the data are collected, grounded theory analysis involves the following basic steps: Coding text and theorizing: In grounded theory research, the search for a theory starts with the first line of the first interview that one codes. Small chunks of the text are coded line-by-line. Useful concepts are identified where key phrases are marked. The concepts are named. Another chunk of text is then taken and the above-mentioned steps are continued. According to Strauss and Corbin, this process is called open coding. The process involves analyzing data such that conceptual components emerge. The next step involves theorizing, which partly includes pulling concepts together and thinking through how each concept can be related to a larger more inclusive concept. The constant comparative method plays an important role here. Memoing and theorizing: Memoing is the process by which a researcher writes running notes bearing on each of the concepts being identified. The running notes constitute an intermediate step between coding and the first draft of the completed analysis. Memos are field notes about the concepts and insights that emerge from the observations. Memoing starts with the first concept identified and continues right through the processing of all the concepts. Memoing contributes to theory building. Integrating, refining and writing up theories: Once coding categories emerge, the next step is to link them together in a theoretical model constructed around a central category that holds the concepts together. The constant comparative method comes into play, along with negative case analysis. Negative case analysis refers to the researcher looking for cases that are inconsistent with the theoretical model. Theorizing is involved in all these steps. One is required to build and test theory all the way through till the end of a project. The idea that all is data is a fundamental property of grounded theory. The idea means that everything that the researcher encounters when studying a certain area is data, including not only interviews or observations but anything that helps the researcher generate concepts for the emerging theory. According to Ralph, Birks, and Chapman field notes can come from informal interviews, lectures, seminars, expert group meetings, newspaper articles, Internet mail lists, even television shows, conversations with friends etc. === Coding === Coding places incidents into categories and then creates one or more hierarchies out of these categories in terms of categories and subcategories or properties of a categories. 
A property might lie on a continuum, such as from low to high; this may be referred to as a dimension. Constant comparison, in which categories are continually compared to one another, is used to create both subcategories and properties. There is some variation in the meanings of the terms code, concept, and category, with some authors viewing a code as identical to a category, while others consider a concept to be more abstract than a code, with a code being more like a substantive code. Different researchers have identified different types of codes and encourage different methods of coding, with Strauss and Glaser both going on to extend their work with different forms of coding. The core variable explains most of the participants' main concern with as much variation as possible. It has the most powerful properties to picture what's going on, but uses as few properties as possible to do so. A popular type of core variable can be theoretically modeled as a basic social process that accounts for most of the variation in change over time, context, and behavior in the studied area. "grounded theory is multivariate. It happens sequentially, subsequently, simultaneously, serendipitously, and scheduled" (Glaser, 1998). Open coding or substantive coding is conceptualizing on the first level of abstraction. Written data from field notes or transcripts are conceptualized line by line. In the beginning of a study everything is coded in order to find out about the problem and how it is being resolved. The coding is often done in the margin of the field notes. This phase is often tedious since it involves conceptualizing all the incidents in the data, which yields many concepts. These are compared as more data are coded, merged into new concepts, and eventually renamed and modified. The grounded theory researcher goes back and forth while comparing data, constantly modifying and sharpening the growing theory, at the same time as they follow the build-up schedule of grounded theory's different steps. Strauss and Corbin proposed axial coding and defined it in 1990 as "a set of procedures whereby data are put back together in new ways after open coding, by making connections between categories." Glaser proposed a similar concept called theoretical coding. Theoretical codes help to develop an integrated theory by weaving fractured concepts into hypotheses that work together. The theory, of which the just-mentioned hypotheses are constituents, explains the main concern of the participants. It is, however, important that the theory is not forced on the data beforehand but is allowed to emerge during the comparative process of grounded theory. Theoretical codes, like substantive codes, should emerge from the process of constantly comparing the data in field notes and memos. Selective coding is conducted after the researcher has found the core variable or what is thought to be the tentative core. The core explains the behavior of the participants in addressing their main concern. The tentative core is never wrong. It just more or less fits with the data. After the core variable is chosen, researchers selectively code data with the core guiding their coding, not bothering about concepts of little relevance to the core and its sub-cores. In addition, the researcher now selectively samples new data with the core in mind, a process that is called theoretical sampling – a deductive component of grounded theory. Selective coding delimits the scope of the study (Glaser, 1998).
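Although grounded theory coding is an interpretive activity carried out by the analyst rather than a computation, the bookkeeping it involves (tagging incidents with substantive codes, grouping codes into candidate categories, and comparing new incidents against the growing scheme) can be illustrated with a small data-structure sketch of the kind that qualitative analysis software supports. The following Python fragment is purely illustrative: the incidents, codes, and categories are invented for the example, and the counting function only stands in for the analyst's constant comparison rather than replacing it.

```python
# Hypothetical sketch of the bookkeeping behind open coding and constant comparison.
# The incidents, codes, and categories below are invented for illustration only.
from collections import defaultdict

# Open coding: each incident (a chunk of field-note or interview text) is tagged
# with one or more substantive codes.
coded_incidents = [
    {"text": "I double-check the handover sheet before every shift",
     "codes": ["verifying information"]},
    {"text": "I ask the night nurse what she actually observed",
     "codes": ["verifying information", "seeking firsthand accounts"]},
    {"text": "I keep my own notes rather than trusting the printout",
     "codes": ["keeping personal records"]},
]

# Constant comparison (crudely): codes that the analyst judges to express the same
# underlying concept are grouped into higher-level categories.
categories = {
    "guarding against unreliable information": ["verifying information",
                                                "seeking firsthand accounts"],
    "building a personal safety net": ["keeping personal records"],
}

def incidents_per_category(incidents, category_map):
    """Count how many incidents support each category -- a rough indicator of how
    well grounded a category is becoming as more data are coded."""
    counts = defaultdict(int)
    for incident in incidents:
        for category, codes in category_map.items():
            if any(code in incident["codes"] for code in codes):
                counts[category] += 1
    return dict(counts)

print(incidents_per_category(coded_incidents, categories))
# {'guarding against unreliable information': 2, 'building a personal safety net': 1}
```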
Grounded theory is less concerned with data accuracy than with generating concepts that are abstract and general. Selective coding could be conducted by reviewing old field notes and/or memos that have already been coded once at an earlier stage or by coding newly gathered data. Strauss and Corbin proposed a "coding paradigm" that involved "conditions, context, action/interactional strategies and consequences." === Memoing === Theoretical memoing is "the core stage of grounded theory methodology" (Glaser 1998). "Memos are the theorizing write-up of ideas about substantive codes and their theoretically coded relationships as they emerge during coding, collecting and analyzing data, and during memoing" (Glaser 1998). Memoing is also important in the early phase of a grounded theory study (e.g., during open coding). In memoing, the researcher conceptualizes incidents, helping the process along. Theoretical memos can be anything written or drawn in the context of the constant comparative method, an important component of grounded theory. Memos are important tools to both refine and keep track of ideas that develop when researchers compare incidents to incidents and then concepts to concepts in the evolving theory. In memos, investigators develop ideas about naming concepts and relating them to each other. They examine relationships between concepts with the help of fourfold tables, diagrams, figures, or other means generating comparative power. Without memoing, the theory is superficial and the concepts generated are not very original. Memoing works as an accumulation of written ideas into a bank of ideas about concepts and how they relate to each other. This bank contains rich parts of what will later be the written theory. Memoing is total creative freedom without rules of writing, grammar or style (Glaser 1998). The writing must be an instrument for outflow of ideas, and nothing else. When people write memos, the ideas become more realistic, being converted from thoughts into words, and thus ideas communicable to the afterworld. In grounded theory the preconscious processing that occurs when coding and comparing is recognized. The researcher is encouraged to register ideas about the ongoing study that eventually pop up in everyday situations, and awareness of the serendipity of the method is also necessary to achieve good results. === Serendipity pattern === Building on the work of sociologist Robert K. Merton, his idea of serendipity patterns has come to be applied in grounded theory research. Serendipity patterns refer to fairly common experiences when observing the world. Serendipity patterns include unanticipated and anomalous events. These patterns can become the impetus for the development of a new theory or the extension of an existing theory. Merton also coauthored (with Elinor Barber) The Travels and Adventures of Serendipity, which traces the origins and uses of the word "serendipity" since it was coined. The book is "a study in sociological semantics and the sociology of science," as the subtitle declares. Merton and Barber further develop the idea of serendipity as scientific "method," as contrasted with purposeful discovery by experiment or retrospective prophecy. === Sorting === In the next step memos are sorted, which is the key to formulating a theory that could be clearly presented to others. Sorting puts fractured data back together. During sorting new ideas can emerge. The new ideas can, in turn, be recorded in new memos, giving rise to the memo-on-memos phenomenon. 
Sorting memos can help generate theory that explains the main action in the studied area. A theory written from unsorted memos may be rich in ideas but the connections among concepts are likely to be weak. === Writing === Writing up the sorted memos follows the sorting process. At this stage, a written theory takes shape. The different categories are now related to each other and the core variable. The theory should encompass the important emergent concepts and their careful description. The researcher may also construct tables and/or figures to optimize readability. In a later rewriting stage, the relevant scholarly literature is woven into the theory. Finally, the theory is edited for style and language. Eventually, the researcher submits the resulting scholarly paper for publication. Most books on grounded theory do not explain what methodological details should be included in a scholarly article; however, some guidelines have been suggested. === No pre-research literature review and no talk === Grounded theory gives the researcher freedom to generate new concepts in explaining human behavior. Research based on grounded theory, however, follows a number of rules. These rules make grounded theory different from most other methods employed in qualitative research. No pre-research literature review. Reviewing the literature of the area under study is thought to generate preconceptions about what to find. The researcher is said to become sensitized to concepts in the extant literature. According to grounded theory, theoretical concepts should emerge from the data unsullied by what has come before. The literature should only be read at the sorting stage and be treated as more data to code and compared with what has already been coded and generated. No talk. Talking about the theory before it is written up drains the researcher of motivational energy. Talking can either render praise or criticism. Both can diminish the motivational drive to write memos that develop and refine the concepts and the theory. Positive feedback, according to Glaser, can make researchers content with what they have and negative feedback hampers their self-confidence. Talking about the grounded theory should be restricted to persons capable of helping the researcher without influencing their final judgments. == Use of preexisting theory == Different approaches to grounded theory reflect different views on how preexisting theory should be used in research. In The Discovery of Grounded Theory, Glaser and Strauss advanced the view that, prior to conducting research, investigators should come to an area of study without any preconceived ideas regarding relevant concepts and hypotheses. In this way, the investigator, according to Glaser and Strauss, avoids imposing preconceived categories upon the research endeavor. Glaser later attempted to address the tension between not reading and reading the literature before a qualitative study begins. Glaser raised the issue of the use of a literature review to enhance the researchers' "theoretical sensitivity," i.e., their ability to identify a grounded theory that is a good fit to the data. He suggested that novice researchers might delay reading the literature to avoid undue influence on their handling of the qualitative data they collect. Glaser believed that reading the relevant research literature (substantive literature) could lead investigators to apply preexisting concepts to the data, rather than interpret concepts emerging from the data. 
He, however, encouraged a broad reading of the literature to develop theoretical sensitivity. Strauss felt that reading relevant material could enhance the researcher's theoretical sensitivity. == Split in methodology and methods == There has been some divergence in the methodology of grounded theory. Over time, Glaser and Strauss came to disagree about methodology and other qualitative researchers have also modified ideas linked to grounded theory. This divergence occurred most obviously after Strauss published Qualitative Analysis for Social Scientists (1987). In 1990, Strauss, together with Juliet Corbin, published Basics of Qualitative Research: Grounded Theory Procedures and Techniques. The publication of the book was followed by a rebuke by Glaser (1992), who set out, chapter by chapter, to highlight the differences in what he argued was the original grounded theory and why what Strauss and Corbin had written was not grounded theory in its "intended form." This divergence in methodology is a subject of much academic debate, which Glaser (1998) calls a "rhetorical wrestle". Glaser continues to write about and teach the original grounded theory method. Grounded theory methods, according to Glaser, emphasize induction or emergence, and the individual researcher's creativity within a clear stagelike framework. By contrast, Strauss has been more interested in validation criteria and a systematic approach. According to Kelle (2005), "the controversy between Glaser and Strauss boils down to the question of whether the researcher uses a well-defined "coding paradigm" and always looks systematically for "causal conditions," "phenomena/context, intervening conditions, action strategies," and "consequences" in the data (Straussian), or whether theoretical codes are employed as they emerge in the same way as substantive codes emerge, but drawing on a huge fund of "coding families" (Glaserian). === Constructivist grounded theory === A later version of grounded theory called constructivist grounded theory, which is rooted in pragmatism and constructivist epistemology, assumes that neither data nor theories are discovered, but are constructed by researchers as a result of their interactions with the field and study participants. Proponents of this approach include Kathy Charmaz and Antony Bryant. In an interview, Charmaz justified her approach as follows: "Grounded theory methodology had been under attack. The postmodern critique of qualitative research had weakened its legitimacy and narrative analysts criticized grounded theory methodology for fragmenting participants' stories. Hence, grounded theory methodology was beginning to be seen as a dated methodology and some researchers advocated abandoning it. I agreed with much of the epistemological critique of the early versions of grounded theory methodology by people like Kenneth Gergen. However, I had long thought that the strategies of grounded theory methodology, including coding, memo writing, and theoretical sampling were excellent methodological tools. I saw no reason to discard these tools and every reason to shift the epistemological grounds on which researchers used them." Data are co-constructed by the researcher and study participants, and colored by the researcher's perspectives, values, privileges, positions, interactions, and geographical locations. This position takes a middle ground between the realist and postmodernist positions by assuming an "obdurate reality" at the same time as it assumes multiple perspectives on that reality. 
Within the framework of this approach, a literature review prior to data collection is used in a productive and data-sensitive way without forcing the conclusions contained in the review on the collected data. === Critical realist === More recently, a critical realist version of grounded theory has been developed and applied in research devoted to developing mechanism-based explanations for social phenomena. Critical realism (CR) is a philosophical approach associated with Roy Bhaskar, who argued for a structured and differentiated account of reality in which difference, stratification, and change are central. A critical realist grounded theory produces an explanation through an examination of the three domains of social reality: the "real," as the domain of structures and mechanisms; the "actual," as the domain of events; and the "empirical," as the domain of experiences and perceptions. == Use in various disciplines == Grounded theory has been "shaped by the desire to discover social and psychological processes." Grounded theory, however, is not restricted to these two areas of study. As Gibbs points out, the process of grounded theory can be and has been applied to a number of different disciplines, including medicine, law, and economics. The reach of grounded theory has extended to nursing, business, and education. Grounded theory focuses more on procedures than on the discipline to which it is applied. Rather than being limited to a particular discipline or form of data collection, grounded theory has been found useful across multiple research areas. Here are some examples: In psychology, grounded theory is used to understand the role of therapeutic distance for adult clients with attachment anxiety. In sociology, grounded theory is used to discover the meaning of spirituality in cancer patients, and how their beliefs influence their attitude towards cancer treatments. Public health researchers have used grounded theory to examine nursing home preparedness needs in relation to Hurricane Katrina refugees sheltered in nursing homes. In business, grounded theory is used by managers to explain the ways in which organizational characteristics account for co-worker support. In software engineering, grounded theory has been used to study daily stand-up meetings. Grounded theory has also helped researchers in the field of information technology to study the use of computer technology by older adults. In nursing, grounded theory has been used to examine how change-of-shift reports can be used to keep patients safe. It was further developed in relation to students' learning and working by Kath M. Melia. == Benefits == The benefits of using grounded theory include ecological validity, the discovery of novel phenomena, and parsimony. Ecological validity refers to the extent to which research findings accurately represent real-world settings. Research based on grounded theory is often thought to be ecologically valid because the research is especially close to the real-world participants. Although the constructs in a grounded theory are appropriately abstract (since their goal is to explain other, similar phenomena), they are context-specific, detailed, and tightly connected to the data. Because grounded theories are not tied to any preexisting theory, they are often fresh and new and have the potential for novel discoveries in science and other areas.
Parsimony refers to a heuristic often used in science that suggests that when there are competing hypotheses that make the same prediction, the hypothesis that relies on the fewest assumptions is preferable. Grounded theories aim to provide practical and simple explanations of complex phenomena by attempting to link those phenomena to abstract constructs and hypothesizing relationships among those constructs. Grounded theory has further significance because it provides explicit, sequential guidelines for conducting qualitative research, offers specific strategies for handling the analytic phases of inquiry, provides ways to streamline and integrate data collection and analysis, and legitimizes qualitative research as scientific inquiry. Grounded theory methods have earned their place as a standard social research methodology and have influenced researchers from varied disciplines and professions. == Criticisms == Grounded theory has been criticized based on the scientific idea of what a theory is. Thomas and James, for example, distinguish the ideas of generalization, overgeneralization, and theory, noting that some scientific theories explain a broad range of phenomena succinctly, which grounded theory does not. Thomas and James observed that "The problems come when too much is claimed [for a theory], simply because it is empirical; problems come in distinguishing generalization from over-generalization, narrative from induction." They also write that grounded theory advocates sometimes claim to find causal implications when in truth they only find an association. There has been criticism of grounded theory on the grounds that it opens the door to letting too much researcher subjectivity enter. The authors just cited suggest that it is impossible to free oneself of preconceptions in the collection and analysis of data in the way that Glaser and Strauss assert is necessary. Popper's work also undermines grounded theory's idea that hypotheses arise from data unaffected by prior expectations. Popper wrote that "objects can be classified and can become similar or dissimilar, only in this way--by being related to needs and interests." On this view, observation is always selective, shaped by past research and the investigators' goals and motives, and preconception-free research is impossible. Critics also note that grounded theory fails to mitigate participant reactivity and has the potential for an investigator steeped in grounded theory to over-identify with one or more study participants. Although they suggest that one element of grounded theory worth keeping is the constant comparative method, Thomas and James point to the formulaic nature of grounded theory methods and the lack of congruence of those methods with open and creative interpretation, which ought to be the hallmark of qualitative inquiry. The grounded theory approach can be criticized as being too empiricist, i.e., as relying too heavily on the empirical data. Grounded theory considers fieldwork data as the source of theory. Thus the theories that emerge from new fieldwork are set against the theories that preceded that fieldwork. Strauss's version of grounded theory has been criticized in several other ways: Grounded theory researchers sometimes have a quasi-objective focus, emphasizing hypotheses, variables, reliability, and replicability. This multi-faceted focus leads to contradictory findings. It is inappropriate to ignore the existing theories by not paying attention to the literature.
Grounded theory offers a complex methodology and confusing terminology rather than providing a practical orientation to research and data analysis. Also see Tolhurst. Some grounded theory researchers have produced poorly explained theories; concept generation rather than the generation of formal theory may be a more practical goal for grounded theory researchers. Grounded theory was developed during an era when qualitative methods were often considered unscientific. But as the academic rigor of qualitative research became known, this type of research approach achieved wide acceptance. In American academia, qualitative research is often equated with grounded theory methods. Such equating of most qualitative methods with grounded theory has sometimes been criticized by qualitative researchers who take different approaches to methodology (for example, in traditional ethnography, narratology, and storytelling). One alternative to grounded theory is engaged theory. Engaged theory equally emphasizes the conducting of on-the-ground empirical research but linking that research to analytical processes of empirical generalization. Unlike grounded theory, engaged theory derives from the tradition of critical theory. Engaged theory locates analytical processes within a larger theoretical framework that specifies different levels of abstraction, allowing investigators to make claims about the wider world. Braun and Clarke regard thematic analysis as having fewer theoretical assumptions than grounded theory, and can be used within several theoretical frameworks. They write that in comparison to grounded theory, thematic analysis is freer because it is not linked to any preexisting framework for making sense of qualitative data. Braun and Clarke, however, concede that there is a degree of similarity between grounded theory and thematic analysis but prefer thematic analysis. == See also == Antipositivism Engaged theory Formal concept analysis Grounded practical theory Qualitative research Postpositivism Social research Content analysis == Notes == == References == Aldmouz, R. s. (2009). Grounded theory as a methodology for theory generation in information systems research. European journal of economics, finance and administrative sciences (15). Grbich, C. (2007). Qualitative data analysis and introduction. Thousand Oaks, CA: Sage Publications. Charmaz K. (2000) 'Grounded Theory: Objectivist and Constructivist Methods', in Denzin N.K. and Y. S. Lincoln (eds) Handbook of Qualitative Research, second edition, London, Sage Publications. Strauss A. and J. Corbin (1998) Basics of Qualitative Research – Techniques and Procedures for Developing Grounded Theory, second edition, London, Sage Publications Groves, P. S., Manges, K. A., & Scott-Cawiezell, J. (2016). Handing Off Safety at the Bedside. Clinical nursing research, 1054773816630535. == Further reading == Bryant, A. & Charmaz, K. (Eds.) (2007) The SAGE Handbook of Grounded Theory. Los Angeles: Sage. Birks, M. & Mills, J. (2015) Grounded Theory: A practical Guide. London: SAGE Publications. Charmaz, K. (2000). Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. Thousand Oaks, CA: Sage Publications. Chun Tie, Ylona, Birks, Melanie, and Francis, Karen (2019) Grounded theory research: a design framework for novice researchers. SAGE Open Medicine, 7. pp. 1–8. Clarke, A. (2005). Situational Analysis: Grounded Theory After the Postmodern Turn. Thousand Oaks, CA: Sage Publications. Glaser, B. (1992). Basics of grounded theory analysis. 
Mill Valley, CA: Sociology Press. Goulding, C. (2002). Grounded Theory: A Practical Guide for Management, Business and Market Researchers. London: Sage. Kelle, Udo (2005). "Emergence" vs. "Forcing" of Empirical Data? A Crucial Problem of "Grounded Theory" Reconsidered. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [On-line Journal], 6(2), Art. 27, paragraphs 49 & 50. [2] Morse, J. M., Stern, P. N., Corbin, J., Bowers, B., Charmaz, K. & Clarke, A. E. (Eds.) (2009). Developing Grounded Theory: The Second Generation. Walnut Creek: Left Coast Press. Mey, G. & Mruck, K. (Eds.) (2007). Grounded Theory Reader. Historical Social Research, Suppl. 19. 337 pages. Oktay, J. S. (2012) Grounded Theory. New York, NY: Oxford University Press. Stebbins, Robert A. (2001) Exploratory Research in the Social Sciences. Thousand Oaks, CA: Sage. Strauss, A. (1987). Qualitative analysis for social scientists. Cambridge, United Kingdom: Cambridge University Press. Thomas, G.; James, D. (2006). "Re-inventing grounded theory: some questions about theory, ground and discovery" (PDF). British Educational Research Journal. 32 (6): 767–795. doi:10.1080/01411920600989412. S2CID 44250223. Glaser Glaser BG, The Constant Comparative Method of Qualitative Analysis. Social Problems, 12(4), 445, 1965. Glaser BG, Strauss A. Discovery of Grounded Theory. Strategies for Qualitative Research. Sociology Press, 1967 Glaser BG. Theoretical Sensitivity: Advances in the methodology of Grounded Theory. Sociology Press, 1978. Glaser BG (ed). More Grounded Theory Methodology: A Reader. Sociology Press, 1994. Glaser BG (ed). Grounded Theory 1984–1994. A Reader (two volumes). Sociology Press, 1995. Glaser BG (ed). Gerund Grounded Theory: The Basic Social Process Dissertation. Sociology Press, 1996. Glaser BG. Doing Grounded Theory – Issues and Discussions. Sociology Press, 1998. Glaser BG. The Grounded Theory Perspective I: Conceptualization Contrasted with Description. Sociology Press, 2001. Glaser BG. The Grounded Theory Perspective II: Description's Remodeling of Grounded Theory. Sociology Press, 2003. Glaser BG. The Grounded Theory Perspective III: Theoretical coding. Sociology Press, 2005. Strauss and Corbin Anselm L. Strauss; Leonard Schatzman; Rue Bucher; Danuta Ehrlich & Melvin Sabshin: Psychiatric ideologies and institutions (1964) Barney G. Glaser; Anselm L. Strauss: The Discovery of Grounded Theory. Strategies for Qualitative Research (1967) Anselm L. Strauss: Qualitative Analysis for Social Scientists (1987) Anselm L. Strauss; Juliet Corbin: Basics of Qualitative Research: Grounded Theory Procedures and Techniques, Sage (1990) Anselm L. Strauss; Juliet Corbin: "Grounded Theory Research: Procedures, Canons and Evaluative Criteria", in: Zeitschrift für Soziologie, 19. Jg, S. 418 ff. (1990) Anselm L. Strauss: Continual Permutations of Action (1993) Anselm L. Strauss; Juliet Corbin: "Grounded Theory in Practice", Sage (1997) Anselm L. Strauss; Juliet Corbin: "Basics of Qualitative Research: Grounded Theory Procedures and Techniques". 2nd edition. Sage, 1998. Juliet Corbin; Anselm L. Strauss: "Basics of Qualitative Research: Grounded Theory Procedures and Techniques". 3rd edition. Sage, 2008. Constructivist grounded theory Bryant, Antony (2002) 'Re-grounding grounded theory', Journal of Information Technology Theory and Application, 4(1): 25–42. Bryant, Antony and Charmaz, Kathy (2007) 'Grounded theory in historical perspective: An epistemological account', in Bryant, A. and Charmaz, K. 
(eds.), The SAGE Handbook of Grounded Theory. Los Angeles: Sage. pp. 31–57. Charmaz, Kathy (2000) 'Grounded theory: Objectivist and constructivist methods', in Denzin, N.K. and Lincoln, Y.S. (eds.), Handbook of Qualitative Research. 2nd edn. Thousand Oaks, CA: Sage. pp. 509–535. Charmaz, Kathy (2003) 'Grounded theory', in Smith, J.A. (ed.), Qualitative Psychology: A Practical Guide to Research Methods. London: Sage. pp. 81–110. Charmaz, Kathy (2006) Constructing Grounded Theory. London: Sage. Charmaz, Kathy (2008) 'Constructionism and the grounded theory method', in Holstein, J.A. and Gubrium, J.F. (eds.), Handbook of Constructionist Research. New York: The Guilford Press. pp. 397–412. Charmaz, Kathy (2009) 'Shifting the grounds: Constructivist grounded theory methods', in J. M. Morse, P. N. Stern, J. Corbin, B. Bowers, K. Charmaz and A. E. Clarke (eds.), Developing Grounded Theory: The Second Generation. Walnut Creek: Left Coast Press. pp. 127–154. Charmaz, Kathy (forthcoming) Constructing Grounded Theory 2nd ed. London: Sage. Mills, Jane, Bonner, Ann, & Francis, Karen (2006) 'Adopting a constructivist approach to grounded theory: Implications for research design' International Journal of Nursing Practice, 12(1): 8–13. Mills, Jane, Bonner, Ann, & Francis, Karen (2006) 'The development of constructivist grounded theory', International Journal of Qualitative Methods, 5 (1): 25–35. Thornberg, Robert (2012) 'Informed grounded theory', Scandinavian Journal of Educational Research, 56: 243–259. Thornberg, Robert and Charmaz, Kathy (2011) 'Grounded theory', in Lapan, S.D., Quartaroli M.T. and Reimer F.J. (eds.), Qualitative Research: An Introduction to Methods and Designs. San Francisco, CA: John Wiley/Jossey–Bass. pp. 41–67. Thornberg, Robert & Charmaz, K. (forthcoming) 'Grounded theory and theoretical coding', in Flick, U. (ed.), The SAGE handbook of qualitative analysis. London: Sage. == External links == The Grounded Theory Institute Archived 2004-12-02 at the Wayback Machine (Glaser tradition) Grounded Theory Online (Supporting grounded theory researchers) Grounded Theory Review Sociology Press Grounded Theory Research Tutorial
Wikipedia/Grounded_theory
In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time. == Definition of entropy and differential entropy == If X {\displaystyle X} is a continuous random variable with probability density p ( x ) {\displaystyle p(x)} , then the differential entropy of X {\displaystyle X} is defined as H ( X ) = − ∫ − ∞ ∞ p ( x ) log ⁡ p ( x ) d x . {\displaystyle H(X)=-\int _{-\infty }^{\infty }p(x)\log p(x)\,dx~.} If X {\displaystyle X} is a discrete random variable with distribution given by Pr ( X = x k ) = p k for k = 1 , 2 , … {\displaystyle \Pr(X{=}x_{k})=p_{k}\qquad {\text{ for }}\quad k=1,2,\ldots } then the entropy of X {\displaystyle X} is defined as H ( X ) = − ∑ k ≥ 1 p k log ⁡ p k . {\displaystyle H(X)=-\sum _{k\geq 1}p_{k}\log p_{k}\,.} The seemingly divergent term p ( x ) log ⁡ p ( x ) {\displaystyle p(x)\log p(x)} is replaced by zero, whenever p ( x ) = 0 . {\displaystyle p(x)=0\,.} This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing H ( X ) {\displaystyle H(X)} will also maximize the more general forms. The base of the logarithm is not important, as long as the same one is used consistently: Change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits; mathematicians and physicists often prefer the natural logarithm, resulting in a unit of "nat"s for the entropy. However, the chosen measure d x {\displaystyle dx} is crucial, even though the typical use of the Lebesgue measure is often defended as a "natural" choice: Which measure is chosen determines the entropy and the consequent maximum entropy distribution. == Distributions with measured constants == Many statistical distributions of applicable interest are those for which the moments or other measurable quantities are constrained to be constants. The following theorem by Ludwig Boltzmann gives the form of the probability density under these constraints. === Continuous case === Suppose S {\displaystyle S} is a continuous, closed subset of the real numbers R {\displaystyle \mathbb {R} } and we choose to specify n {\displaystyle n} measurable functions f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}} and n {\displaystyle n} numbers a 1 , … , a n . {\displaystyle a_{1},\ldots ,a_{n}.} We consider the class C {\displaystyle C} of all real-valued random variables which are supported on S {\displaystyle S} (i.e. 
whose density function is zero outside of S {\displaystyle S} ) and which satisfy the n {\displaystyle n} moment conditions: E ⁡ [ f j ( X ) ] ≥ a j for j = 1 , … , n {\displaystyle \operatorname {E} [f_{j}(X)]\geq a_{j}\qquad {\text{for }}\quad j=1,\ldots ,n} If there is a member in C {\displaystyle C} whose density function is positive everywhere in S , {\displaystyle S,} and if there exists a maximal entropy distribution for C , {\displaystyle C,} then its probability density p ( x ) {\displaystyle p(x)} has the following form: p ( x ) = exp ⁡ ( ∑ j = 0 n λ j f j ( x ) ) for all x ∈ S {\displaystyle p(x)=\exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x)\right)\qquad {\text{ for all }}~x\in S} where we assume that f 0 ( x ) = 1 . {\displaystyle f_{0}(x)=1\,.} The constant λ 0 {\displaystyle \lambda _{0}} and the n {\displaystyle n} Lagrange multipliers λ = ( λ 1 , … , λ n ) {\displaystyle {\boldsymbol {\lambda }}=(\lambda _{1},\ldots ,\lambda _{n})} solve the constrained optimization problem with a 0 = 1 {\displaystyle a_{0}=1} (which ensures that p {\displaystyle p} integrates to unity): max λ 0 ; λ { ∑ j = 0 n λ j a j − ∫ exp ⁡ ( ∑ j = 0 n λ j f j ( x ) ) d x } subject to λ ≥ 0 {\displaystyle \max _{\lambda _{0};\,{\boldsymbol {\lambda }}}\left\{\sum _{j=0}^{n}\lambda _{j}a_{j}-\int \exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x)\right)dx\right\}\qquad ~{\text{ subject to }}~{\boldsymbol {\lambda }}\geq \mathbf {0} } Using the Karush–Kuhn–Tucker conditions, it can be shown that the optimization problem has a unique solution because the objective function in the optimization is concave in λ . {\displaystyle {\boldsymbol {\lambda }}\,.} Note that when the moment constraints are equalities (instead of inequalities), that is, E ⁡ [ f j ( X ) ] = a j for j = 1 , … , n , {\displaystyle \operatorname {E} [f_{j}(X)]=a_{j}\qquad {\text{ for }}~j=1,\ldots ,n\,,} then the constraint condition λ ≥ 0 {\displaystyle {\boldsymbol {\lambda }}\geq \mathbf {0} } can be dropped, which makes optimization over the Lagrange multipliers unconstrained. === Discrete case === Suppose S = { x 1 , x 2 , … } {\displaystyle S=\{x_{1},x_{2},\ldots \}} is a (finite or infinite) discrete subset of the reals, and that we choose to specify n {\displaystyle n} functions f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}} and n {\displaystyle n} numbers a 1 , … , a n . 
{\displaystyle a_{1},\ldots ,a_{n}\,.} We consider the class C {\displaystyle C} of all discrete random variables X {\displaystyle X} which are supported on S {\displaystyle S} and which satisfy the n {\displaystyle n} moment conditions E ⁡ [ f j ( X ) ] ≥ a j for j = 1 , … , n {\displaystyle \operatorname {E} [f_{j}(X)]\geq a_{j}\qquad ~{\text{ for }}~j=1,\ldots ,n} If there exists a member of class C {\displaystyle C} which assigns positive probability to all members of S {\displaystyle S} and if there exists a maximum entropy distribution for C , {\displaystyle C,} then this distribution has the following shape: Pr ( X = x k ) = exp ⁡ ( ∑ j = 0 n λ j f j ( x k ) ) for k = 1 , 2 , … {\displaystyle \Pr(X{=}x_{k})=\exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x_{k})\right)\qquad {\text{ for }}~k=1,2,\ldots } where we assume that f 0 = 1 {\displaystyle f_{0}=1} and the constants λ 0 , λ ≡ ( λ 1 , … , λ n ) {\displaystyle \lambda _{0},\,{\boldsymbol {\lambda }}\equiv (\lambda _{1},\ldots ,\lambda _{n})} solve the constrained optimization problem with a 0 = 1 {\displaystyle a_{0}=1} : max λ 0 ; λ { ∑ j = 0 n λ j a j − ∑ k ≥ 1 exp ⁡ ( ∑ j = 0 n λ j f j ( x k ) ) } for which λ ≥ 0 {\displaystyle \max _{\lambda _{0};\,{\boldsymbol {\lambda }}}\left\{\sum _{j=0}^{n}\lambda _{j}a_{j}-\sum _{k\geq 1}\exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x_{k})\right)\right\}\qquad {\text{ for which }}~{\boldsymbol {\lambda }}\geq \mathbf {0} } Again as above, if the moment conditions are equalities (instead of inequalities), then the constraint condition λ ≥ 0 {\displaystyle {\boldsymbol {\lambda }}\geq \mathbf {0} } is not present in the optimization. === Proof in the case of equality constraints === In the case of equality constraints, this theorem is proved with the calculus of variations and Lagrange multipliers. The constraints can be written as ∫ − ∞ ∞ f j ( x ) p ( x ) d x = a j {\displaystyle \int _{-\infty }^{\infty }f_{j}(x)p(x)\,dx=a_{j}} We consider the functional J ( p ) = ∫ − ∞ ∞ p ( x ) ln ⁡ p ( x ) d x − η 0 ( ∫ − ∞ ∞ p ( x ) d x − 1 ) − ∑ j = 1 n λ j ( ∫ − ∞ ∞ f j ( x ) p ( x ) d x − a j ) {\displaystyle J(p)=\int _{-\infty }^{\infty }p(x)\ln {p(x)}\,dx-\eta _{0}\left(\int _{-\infty }^{\infty }p(x)\,dx-1\right)-\sum _{j=1}^{n}\lambda _{j}\left(\int _{-\infty }^{\infty }f_{j}(x)p(x)\,dx-a_{j}\right)} where η 0 {\displaystyle \eta _{0}} and λ j , j ≥ 1 {\displaystyle \lambda _{j},j\geq 1} are the Lagrange multipliers. The zeroth constraint ensures the second axiom of probability. The other constraints are that the measurements of the function are given constants up to order n {\displaystyle n} . The entropy attains an extremum when the functional derivative is equal to zero: δ J ( p ) δ p = ln ⁡ p ( x ) + 1 − η 0 − ∑ j = 1 n λ j f j ( x ) = 0 {\displaystyle {\frac {\delta J(p)}{\delta p}}=\ln {p(x)}+1-\eta _{0}-\sum _{j=1}^{n}\lambda _{j}f_{j}(x)=0} Therefore, the extremal entropy probability distribution in this case must be of the form ( λ 0 := η 0 − 1 {\displaystyle \lambda _{0}:=\eta _{0}-1} ), p ( x ) = e − 1 + η 0 e ∑ j = 1 n λ j f j ( x ) = exp ⁡ ( ∑ j = 0 n λ j f j ( x ) ) , {\displaystyle p(x)=e^{-1+\eta _{0}}\,e^{\sum _{j=1}^{n}\lambda _{j}f_{j}(x)}=\exp \left(\sum _{j=0}^{n}\lambda _{j}f_{j}(x)\right),} remembering that f 0 ( x ) = 1 {\displaystyle f_{0}(x)=1} . It can be verified that this is the maximal solution by checking that the variation around this solution is always negative. 
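To make the discrete theorem concrete, the following sketch (an illustration added here, not part of the original exposition) solves a one-constraint problem numerically: on the support {1, ..., 6} with a specified mean, the maximum entropy distribution has the form Pr(X = x_k) ∝ exp(λ x_k), and the single multiplier λ can be found by matching the mean. The use of NumPy and SciPy's brentq root-finder is an implementation convenience for the sketch, not something the theorem prescribes; the same setup reappears in the dice example under "Discrete distributions with specified mean" below.

```python
# Numerical sketch: maximum entropy distribution on {1,...,6} with E[X] = mu.
# By the theorem above, p_k is proportional to exp(lambda * x_k); we find lambda
# by matching the specified mean with a bracketing root-finder.
import numpy as np
from scipy.optimize import brentq  # assumes SciPy is available

x = np.arange(1, 7)   # support: the faces of a die
mu = 4.5              # specified mean

def mean_given_lambda(lam):
    w = np.exp(lam * x)
    p = w / w.sum()
    return p @ x

# E[X] is increasing in lambda, so a wide bracket guarantees a sign change.
lam = brentq(lambda l: mean_given_lambda(l) - mu, -10.0, 10.0)

w = np.exp(lam * x)
p = w / w.sum()
entropy_nats = -(p * np.log(p)).sum()

print("lambda  =", round(lam, 4))
print("p       =", np.round(p, 4))          # probabilities rise geometrically in x
print("mean    =", round(float(p @ x), 4))  # matches mu
print("entropy =", round(float(entropy_nats), 4), "nats")
```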
=== Uniqueness of the maximum === Suppose p {\displaystyle p} and p ′ {\displaystyle p'} are distributions satisfying the expectation-constraints. Letting α ∈ ( 0 , 1 ) {\displaystyle \alpha \in (0,1)} and considering the distribution q = α p + ( 1 − α ) p ′ {\displaystyle q=\alpha \,p+(1-\alpha )\,p'} it is clear that this distribution satisfies the expectation-constraints and furthermore has as support supp ⁡ ( q ) = supp ⁡ ( p ) ∪ supp ⁡ ( p ′ ) . {\displaystyle \operatorname {supp} (q)=\operatorname {supp} (p)\cup \operatorname {supp} (p')\,.} From basic facts about entropy, it holds that H ( q ) ≥ α H ( p ) + ( 1 − α ) H ( p ′ ) . {\displaystyle {\mathcal {H}}(q)\geq \alpha \,{\mathcal {H}}(p)+(1-\alpha )\,{\mathcal {H}}(p').} Taking limits α → 1 {\displaystyle \alpha \to 1} and α → 0 , {\displaystyle \alpha \to 0\,,} respectively, yields H ( q ) ≥ H ( p ) , H ( p ′ ) . {\displaystyle {\mathcal {H}}(q)\geq {\mathcal {H}}(p),{\mathcal {H}}(p')\,.} It follows that a distribution satisfying the expectation-constraints and maximising entropy must necessarily have full support — i. e. the distribution is almost everywhere strictly positive. It follows that the maximising distribution must be an internal point in the space of distributions satisfying the expectation-constraints, that is, it must be a local extreme. Thus it suffices to show that the local extreme is unique, in order to show both that the entropy-maximising distribution is unique (and this also shows that the local extreme is the global maximum). Suppose p {\displaystyle p} and p ′ {\displaystyle p'} are local extremes. Reformulating the above computations these are characterised by parameters λ , λ ′ ∈ R n {\displaystyle {\boldsymbol {\lambda }},\,{\boldsymbol {\lambda }}'\in \mathbb {R} ^{n}} via p ( x ) = exp ⁡ ⟨ λ , f ( x ) ⟩ / C ( λ ) {\displaystyle p(x)={\exp \left\langle {\boldsymbol {\lambda }},\mathbf {f} (x)\right\rangle }/{C({\boldsymbol {\lambda }})}} and similarly for p ′ , {\displaystyle p',} where C ( λ ) = ∫ R exp ⁡ ⟨ λ , f ( x ) ⟩ d x . {\textstyle C({\boldsymbol {\lambda }})=\int _{\mathbb {R} }\exp \left\langle {\boldsymbol {\lambda }},\mathbf {f} (x)\right\rangle \,dx\,.} We now note a series of identities: Via the satisfaction of the expectation-constraints and utilising gradients / directional derivatives, one has D log ⁡ C ( ⋅ ) | λ = D C ( ⋅ ) C ( ⋅ ) | λ = E p ⁡ [ f ( X ) ] = a {\displaystyle {\left.D\log C(\cdot )\right|}_{\boldsymbol {\lambda }}={\left.{\tfrac {DC(\cdot )}{C(\cdot )}}\right|}_{\boldsymbol {\lambda }}=\operatorname {E} _{p}\left[\mathbf {f} (X)\right]=\mathbf {a} } and similarly for λ ′ . {\displaystyle {\boldsymbol {\lambda }}'~.} Letting u = λ ′ − λ ∈ R n {\displaystyle u={\boldsymbol {\lambda }}'-{\boldsymbol {\lambda }}\in \mathbb {R} ^{n}} one obtains: 0 = ⟨ u , a − a ⟩ = D u log ⁡ C ( ⋅ ) | λ ′ − D u log ⁡ C ( ⋅ ) | λ = D u 2 log ⁡ C ( ⋅ ) | γ {\displaystyle 0=\left\langle u,\mathbf {a} -\mathbf {a} \right\rangle ={\left.D_{u}\log C(\cdot )\right|}_{{\boldsymbol {\lambda }}'}-{\left.D_{u}\log C(\cdot )\right|}_{\boldsymbol {\lambda }}={\left.D_{u}^{2}\log C(\cdot )\right|}_{\boldsymbol {\gamma }}} where γ = θ λ + ( 1 − θ ) λ ′ {\displaystyle {\boldsymbol {\gamma }}=\theta {\boldsymbol {\lambda }}+(1-\theta ){\boldsymbol {\lambda }}'} for some θ ∈ ( 0 , 1 ) . 
{\displaystyle \theta \in (0,1).} Computing further, one has 0 = D u 2 log ⁡ C ( ⋅ ) | γ = D u ( D u C ( ⋅ ) C ( ⋅ ) ) | γ = D u 2 C ( ⋅ ) C ( ⋅ ) | γ − ( D u C ( ⋅ ) ) 2 C ( ⋅ ) 2 | γ = E q ⁡ [ ⟨ u , f ( X ) ⟩ 2 ] − ( E q ⁡ [ ⟨ u , f ( X ) ⟩ ] ) 2 = Var q ⁡ [ ⟨ u , f ( X ) ⟩ ] {\displaystyle {\begin{aligned}0&={\left.D_{u}^{2}\log C(\cdot )\right|}_{\boldsymbol {\gamma }}\\[1ex]&={\left.D_{u}\left({\frac {D_{u}C(\cdot )}{C(\cdot )}}\right)\right|}_{\boldsymbol {\gamma }}={\left.{\frac {D_{u}^{2}C(\cdot )}{C(\cdot )}}\right|}_{\boldsymbol {\gamma }}-{\left.{\frac {{\left(D_{u}C(\cdot )\right)}^{2}}{C(\cdot )^{2}}}\right|}_{\boldsymbol {\gamma }}\\[1ex]&=\operatorname {E} _{q}\left[{\left\langle u,\mathbf {f} (X)\right\rangle }^{2}\right]-{\left(\operatorname {E} _{q}\left[\left\langle u,\mathbf {f} (X)\right\rangle \right]\right)}^{2}\\[2ex]&=\operatorname {Var} _{q}\left[\left\langle u,\mathbf {f} (X)\right\rangle \right]\end{aligned}}} where q {\displaystyle q} is similar to the distribution above, only parameterised by γ , {\displaystyle {\boldsymbol {\gamma }}~,} Assuming that no non-trivial linear combination of the observables is almost everywhere (a.e.) constant, (which e.g. holds if the observables are independent and not a.e. constant), it holds that ⟨ u , f ( X ) ⟩ {\displaystyle \langle u,\mathbf {f} (X)\rangle } has non-zero variance, unless u = 0 . {\displaystyle u=0~.} By the above equation it is thus clear, that the latter must be the case. Hence λ ′ − λ = u = 0 , {\displaystyle {\boldsymbol {\lambda }}'-{\boldsymbol {\lambda }}=u=0\,,} so the parameters characterising the local extrema p , p ′ {\displaystyle p,\,p'} are identical, which means that the distributions themselves are identical. Thus, the local extreme is unique and by the above discussion, the maximum is unique – provided a local extreme actually exists. === Caveats === Note that not all classes of distributions contain a maximum entropy distribution. It is possible that a class contain distributions of arbitrarily large entropy (e.g. the class of all continuous distributions on R with mean 0 but arbitrary standard deviation), or that the entropies are bounded above but there is no distribution which attains the maximal entropy. It is also possible that the expected value restrictions for the class C force the probability distribution to be zero in certain subsets of S. In that case our theorem doesn't apply, but one can work around this by shrinking the set S. == Examples == Every probability distribution is trivially a maximum entropy probability distribution under the constraint that the distribution has its own entropy. To see this, rewrite the density as p ( x ) = exp ⁡ ( ln ⁡ p ( x ) ) {\displaystyle p(x)=\exp {(\ln {p(x)})}} and compare to the expression of the theorem above. By choosing ln ⁡ p ( x ) → f ( x ) {\displaystyle \ln {p(x)}\rightarrow f(x)} to be the measurable function and ∫ exp ⁡ ( f ( x ) ) f ( x ) d x = − H {\displaystyle \int \exp {(f(x))}f(x)dx=-H} to be the constant, p ( x ) {\displaystyle p(x)} is the maximum entropy probability distribution under the constraint ∫ p ( x ) f ( x ) d x = − H . {\displaystyle \int p(x)f(x)\,dx=-H.} Nontrivial examples are distributions that are subject to multiple constraints that are different from the assignment of the entropy. These are often found by starting with the same procedure ln ⁡ p ( x ) → f ( x ) {\displaystyle \ln {p(x)}\to f(x)} and finding that f ( x ) {\displaystyle f(x)} can be separated into parts. 
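As a brief numerical illustration (added here; it is not part of the original text), the sketch below evaluates the differential entropy H(X) = −∫ p(x) log p(x) dx of a normal density by quadrature and compares it with the known closed form 0.5 ln(2πeσ^2). Because the value grows without bound as σ increases, it also makes the first caveat above concrete: the class of all continuous distributions on the real line with mean 0 and unrestricted standard deviation contains members of arbitrarily large entropy, so no maximum entropy distribution exists for that class. SciPy's quad routine is used purely for convenience.

```python
# Differential entropy of a normal density: numerical quadrature vs. closed form.
# Entropy grows without bound in sigma, illustrating the caveat that some classes
# of distributions have no maximum entropy member.
import numpy as np
from scipy.integrate import quad  # assumes SciPy is available

def normal_pdf(x, sigma):
    return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def differential_entropy(sigma):
    integrand = lambda t: -normal_pdf(t, sigma) * np.log(normal_pdf(t, sigma))
    # +/- 12 sigma captures essentially all of the integrand while avoiding underflow.
    value, _ = quad(integrand, -12.0 * sigma, 12.0 * sigma)
    return value

for sigma in [0.5, 1.0, 10.0, 100.0]:
    numeric = differential_entropy(sigma)
    closed_form = 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)
    print(f"sigma = {sigma:>6}: quadrature = {numeric:.4f}, closed form = {closed_form:.4f}")
```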
A table of examples of maximum entropy distributions is given in Lisman (1972) and Park & Bera (2009). === Uniform and piecewise uniform distributions === The uniform distribution on the interval [a,b] is the maximum entropy distribution among all continuous distributions which are supported in the interval [a, b], and thus the probability density is 0 outside of the interval. This uniform density can be related to Laplace's principle of indifference, sometimes called the principle of insufficient reason. More generally, if we are given a subdivision a=a0 < a1 < ... < ak = b of the interval [a,b] and probabilities p1,...,pk that add up to one, then we can consider the class of all continuous distributions such that Pr ( a j − 1 ≤ X < a j ) = p j for j = 1 , … , k {\displaystyle \Pr(a_{j-1}\leq X<a_{j})=p_{j}\quad {\text{ for }}j=1,\ldots ,k} The density of the maximum entropy distribution for this class is constant on each of the intervals [aj−1,aj). The uniform distribution on the finite set {x1,...,xn} (which assigns a probability of 1/n to each of these values) is the maximum entropy distribution among all discrete distributions supported on this set. === Positive and specified mean: the exponential distribution === The exponential distribution, for which the density function is p ( x | λ ) = { λ e − λ x x ≥ 0 , 0 x < 0 , {\displaystyle p(x|\lambda )={\begin{cases}\lambda e^{-\lambda x}&x\geq 0,\\0&x<0,\end{cases}}} is the maximum entropy distribution among all continuous distributions supported in [0,∞) that have a specified mean of 1/λ. In the case of distributions supported on [0,∞), the maximum entropy distribution depends on relationships between the first and second moments. In specific cases, it may be the exponential distribution, or may be another distribution, or may be undefinable. === Specified mean and variance: the normal distribution === The normal distribution N(μ,σ2), for which the density function is p ( x | μ , σ ) = 1 σ 2 π e − ( x − μ ) 2 2 σ 2 , {\displaystyle p(x|\mu ,\sigma )={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}},} has maximum entropy among all real-valued distributions supported on (−∞,∞) with a specified variance σ2 (a particular moment). The same is true when the mean μ and the variance σ2 is specified (the first two moments), since entropy is translation invariant on (−∞,∞). Therefore, the assumption of normality imposes the minimal prior structural constraint beyond these moments. (See the differential entropy article for a derivation.) === Discrete distributions with specified mean === Among all the discrete distributions supported on the set {x1,...,xn} with a specified mean μ, the maximum entropy distribution has the following shape: Pr ( X = x k ) = C r x k for k = 1 , … , n {\displaystyle \Pr(X{=}x_{k})=Cr^{x_{k}}\quad {\text{ for }}k=1,\ldots ,n} where the positive constants C and r can be determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ. For example, if a large number N of dice are thrown, and you are told that the sum of all the shown numbers is S. Based on this information alone, what would be a reasonable assumption for the number of dice showing 1, 2, ..., 6? This is an instance of the situation considered above, with {x1,...,x6} = {1,...,6} and μ = S/N. Finally, among all the discrete distributions supported on the infinite set { x 1 , x 2 , . . . 
} {\displaystyle \{x_{1},x_{2},...\}} with mean μ, the maximum entropy distribution has the shape: Pr ( X = x k ) = C r x k for k = 1 , 2 , … , {\displaystyle \Pr(X{=}x_{k})=Cr^{x_{k}}\quad {\text{ for }}k=1,2,\ldots ,} where again the constants C and r were determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ. For example, in the case that xk = k, this gives C = 1 μ − 1 , r = μ − 1 μ , {\displaystyle C={\frac {1}{\mu -1}},\quad \quad r={\frac {\mu -1}{\mu }},} such that respective maximum entropy distribution is the geometric distribution. === Circular random variables === For a continuous random variable θ i {\displaystyle \theta _{i}} distributed about the unit circle, the Von Mises distribution maximizes the entropy when the real and imaginary parts of the first circular moment are specified or, equivalently, the circular mean and circular variance are specified. When the mean and variance of the angles θ i {\displaystyle \theta _{i}} modulo 2 π {\displaystyle 2\pi } are specified, the wrapped normal distribution maximizes the entropy. === Maximizer for specified mean, variance and skew === There exists an upper bound on the entropy of continuous random variables on R {\displaystyle \mathbb {R} } with a specified mean, variance, and skew. However, there is no distribution which achieves this upper bound, because p ( x ) = c exp ⁡ ( λ 1 x + λ 2 x 2 + λ 3 x 3 ) {\displaystyle p(x)=c\exp {(\lambda _{1}x+\lambda _{2}x^{2}+\lambda _{3}x^{3})}} is unbounded when λ 3 ≠ 0 {\displaystyle \lambda _{3}\neq 0} (see Cover & Thomas (2006: chapter 12)). However, the maximum entropy is ε-achievable: a distribution's entropy can be arbitrarily close to the upper bound. Start with a normal distribution of the specified mean and variance. To introduce a positive skew, perturb the normal distribution upward by a small amount at a value many σ larger than the mean. The skewness, being proportional to the third moment, will be affected more than the lower order moments. This is a special case of the general case in which the exponential of any odd-order polynomial in x will be unbounded on R {\displaystyle \mathbb {R} } . For example, c e λ x {\displaystyle ce^{\lambda x}} will likewise be unbounded on R {\displaystyle \mathbb {R} } , but when the support is limited to a bounded or semi-bounded interval the upper entropy bound may be achieved (e.g. if x lies in the interval [0,∞] and λ< 0, the exponential distribution will result). === Maximizer for specified mean and deviation risk measure === Every distribution with log-concave density is a maximal entropy distribution with specified mean μ and deviation risk measure D . 
In particular, the maximal entropy distribution with specified mean E ( X ) ≡ μ {\displaystyle E(X)\equiv \mu } and deviation D ( X ) ≡ d {\displaystyle D(X)\equiv d} is: The normal distribution N ( m , d 2 ) , {\displaystyle {\mathcal {N}}(m,d^{2}),} if D ( X ) = E ⁡ [ ( X − μ ) 2 ] {\textstyle D(X)={\sqrt {\operatorname {E} \left[{\left(X-\mu \right)}^{2}\right]}}} is the standard deviation; The Laplace distribution, if D ( X ) = E ⁡ { | X − μ | } {\displaystyle D(X)=\operatorname {E} \left\{\left|X-\mu \right|\right\}} is the average absolute deviation; The distribution with density of the form f ( x ) = c exp ⁡ ( a x + b ⇂ x − μ ⇃ − 2 ) {\displaystyle f(x)=c\exp \left(ax+b{\downharpoonright x-\mu \downharpoonleft }_{-}^{2}\right)} if D ( X ) = E ⁡ { ⇂ X − μ ⇃ − 2 } {\textstyle D(X)={\sqrt {\operatorname {E} \left\{{\downharpoonright X-\mu \downharpoonleft }_{-}^{2}\right\}}}} is the standard lower semi-deviation, where a , b , c {\displaystyle a,b,c} are constants and the function ⇂ y ⇃ − ≡ min { 0 , y } for any y ∈ R , {\displaystyle \downharpoonright y\downharpoonleft _{-}\equiv \min \left\{0,y\right\}~{\text{ for any }}y\in \mathbb {R} \,,} returns only the negative values of its argument, otherwise zero. === Other examples === In the table below, each listed distribution maximizes the entropy for a particular set of functional constraints listed in the third column, and the constraint that x {\displaystyle x} be included in the support of the probability density, which is listed in the fourth column. Several listed examples (Bernoulli, geometric, exponential, Laplace, Pareto) are trivially true, because their associated constraints are equivalent to the assignment of their entropy. They are included anyway because their constraint is related to a common or easily measured quantity. For reference, Γ ( x ) = ∫ 0 ∞ e − t t x − 1 d t {\displaystyle \Gamma (x)=\int _{0}^{\infty }e^{-t}t^{x-1}\,dt} is the gamma function, ψ ( x ) = d d x ln ⁡ Γ ( x ) = Γ ′ ( x ) Γ ( x ) {\displaystyle \psi (x)={\frac {d}{dx}}\ln \Gamma (x)={\frac {\Gamma '(x)}{\Gamma (x)}}} is the digamma function, B ( p , q ) = Γ ( p ) Γ ( q ) Γ ( p + q ) {\displaystyle B(p,q)={\frac {\Gamma (p)\,\Gamma (q)}{\Gamma (p+q)}}} is the beta function, and γ E {\displaystyle \gamma _{\mathsf {E}}} is the Euler-Mascheroni constant. The maximum entropy principle can be used to upper bound the entropy of statistical mixtures. == See also == Exponential family Gibbs measure Partition function (mathematics) Maximal entropy random walk - maximizing entropy rate for a graph == Notes == == Citations == == References ==
Wikipedia/Maximum_entropy_probability_distribution
Art methodology refers to a studied and constantly reassessed, questioned method within the arts, as opposed to a method merely applied (without thought). This process of studying the method and reassessing its effectiveness allows art to move on and change. It is not the thing itself but it is an essential part of the process. An artist drawing, for instance, may choose to draw from what he or she observes in front of them, or from what they imagine, or from what they already know about the subject. These three methods will, very probably, produce three very different pictures. A careful methodology would include examination of the materials and tools used and how a different type of canvas/brush/paper/pencil/rag/camera/chisel etc. would produce a different effect. The artist may also look at various effects achieved by starting in one part of a canvas first, or by working over the whole surface equally. An author may experiment with stream of consciousness writing, as opposed to naturalistic narrative, or a combination of styles. == Fine Art compared with Traditional Crafts == In stark contrast to fine art practice is the traditional craft form. With traditional crafts, the method is handed down from generation to generation with often very little change in techniques. It is usually fair to say that folk crafts employ a method but not an art methodology, since that would involve rigorous questioning and criticising of the tradition. == Art Methodology compared with Science Methodology == An art methodology differs from a science methodology, perhaps mainly insofar as the artist is not always after the same goal as the scientist. In art it is not necessarily all about establishing the exact truth so much as making the most effective form (painting, drawing, poem, novel, performance, sculpture, video, etc.) through which ideas, feelings, and perceptions can be communicated to a public. With this purpose in mind, some artists will exhibit preliminary sketches and notes which were part of the process leading to the creation of a work. Sometimes, in Conceptual art, the preliminary process is the only part of the work which is exhibited, with no visible result displayed. In such a case the "journey" is being presented as more important than the destination. Conceptual artist Robert Barry once put on an exhibition where the door of the gallery remained shut and a sign on the door informed visitors that the gallery would be closed for the exhibition. These kinds of works question accepted concepts, such as that of having a tangible work of art as the result. == Some Art Methodology statements == === Global Responsibility === The Peace Through Art methodology developed by the International Child Art Foundation (ICAF) was recognized as a Stockholm Challenge Finalist. ICAF's statement on the methodology of the programme says: "the Peace Through Art methodology draws upon the creativity and imagination of young people, and teaches them the ethics of responsibility in this interdependent global village that has come to be our world. The methodology incorporates best practices from the fields of psychology, conflict resolution and peace education, while employing the power of the arts for self-expression, healing and communication." Pauline Mottram, SRAsTh, gave a short paper entitled "Towards developing a methodology to evaluate outcomes of Art Therapy in Adult Mental Illness" for the October 2000 TAoAT Conference. 
Here's an excerpt: "In addressing the effectiveness challenge and in seeking a means of measuring outcomes of the art therapy service that I deliver, I have sought a methodology that can provide a valid and reliable quantitative outcome, whilst still respecting the aesthetic and humanistic nature of art therapy practice. This endeavour is made all the more complex by the fact that art therapy lacks a fully developed theory and it has a fragile research base. Generally it is argued that art therapy is more compatible with qualitative research designs that encompass subjectivity, rather than quantitative objective methods. Kaplan (1998, p95) states that 'Qualitative is exploratory and theory building. Quantitative tests hypothesis in order to refine and validate theory.' She holds that art therapy cannot afford to reject either form of inquiry." === Generative Art === In "The Methodology of Generative Art" by Tjark Ihmels and Julia Riedel, an online article at Media Art Net, Mozart's "musical game of dice" is cited as a precedent for the methodology of Generative art, which, say the authors, has "established itself in nearly every area of artistic practice (music, literature, the fine arts)": "Mozart composed 176 bars of music, from which sixteen were chosen from a list using dice, which then produced a new piece when performed on a piano. Sixteen bars, each with eleven possibilities, can result in 11^16 unique pieces of music. Using this historical example, the methodology of generative art can be appropriately described as the rigorous application of predefined principles of action for the intentional exclusion of, or substitution for, individual aesthetical decisions that sets in motion the generation of new artistic content out of material provided for that purpose." == External links == Art methodology quotations == References ==
Wikipedia/Art_methodology
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model. All statistical hypothesis tests and all statistical estimators are derived via statistical models. More generally, statistical models are part of the foundation of statistical inference. A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman Adèr quoting Kenneth Bollen). == Introduction == Informally, a statistical model can be thought of as a statistical assumption (or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of any event. As an example, consider a pair of ordinary six-sided dice. We will study two different statistical assumptions about the dice. The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is ⁠1/6⁠. From that assumption, we can calculate the probability of both dice coming up 5:  ⁠1/6⁠ × ⁠1/6⁠ = ⁠1/36⁠.  More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6). The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is ⁠1/8⁠ (because the dice are weighted). From that assumption, we can calculate the probability of both dice coming up 5:  ⁠1/8⁠ × ⁠1/8⁠ = ⁠1/64⁠.  We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown. The first statistical assumption constitutes a statistical model: because with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption does not constitute a statistical model: because with the assumption alone, we cannot calculate the probability of every event. In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation). For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible. == Formal definition == In mathematical terms, a statistical model is a pair ( S , P {\displaystyle S,{\mathcal {P}}} ), where S {\displaystyle S} is the set of possible observations, i.e. the sample space, and P {\displaystyle {\mathcal {P}}} is a set of probability distributions on S {\displaystyle S} . The set P {\displaystyle {\mathcal {P}}} represents all of the models that are considered possible. This set is typically parameterized: P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . The set Θ {\displaystyle \Theta } defines the parameters of the model. If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e. F θ 1 = F θ 2 ⇒ θ 1 = θ 2 {\displaystyle F_{\theta _{1}}=F_{\theta _{2}}\Rightarrow \theta _{1}=\theta _{2}} (in other words, the mapping is injective), it is said to be identifiable. 
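As a small illustration of the formal definition, the following is a hedged Python sketch (the function and variable names are the editor's, not from the article): it encodes the fair-dice assumption from the introduction as one member F_θ of a parameterized family over the sample space S, and shows that, once a parameter value is fixed, the probability of any event can be computed.

```python
from itertools import product
from fractions import Fraction

# Sample space S: all ordered outcomes of two six-sided dice.
S = list(product(range(1, 7), repeat=2))

def F(theta):
    """One member of the parameterized family P = {F_theta}.

    theta is a length-6 tuple of face probabilities for a single die;
    the two dice are assumed independent and identically distributed.
    """
    return {(a, b): theta[a - 1] * theta[b - 1] for (a, b) in S}

def prob(event, dist):
    """Probability of an arbitrary event (a subset of S) under a fixed distribution."""
    return sum(dist[s] for s in event)

fair = tuple(Fraction(1, 6) for _ in range(6))   # the first statistical assumption
dist = F(fair)

both_fives = [(5, 5)]
print(prob(both_fives, dist))                    # 1/36

one_and_two = [s for s in S if sorted(s) == [1, 2]]
print(prob(one_and_two, dist))                   # 2/36 = 1/18
```

Under the alternative assumption in the introduction, only the probability of a single face is specified, so F_θ is not fully determined and a call like prob(one_and_two, dist) could not be evaluated; this is exactly why that assumption does not constitute a statistical model.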
In some cases, the model can be more complex. In Bayesian statistics, the model is extended by adding a probability distribution over the parameter space Θ {\displaystyle \Theta } . A statistical model can sometimes distinguish two sets of probability distributions. The first set Q = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {Q}}=\{F_{\theta }:\theta \in \Theta \}} is the set of models considered for inference. The second set P = { F λ : λ ∈ Λ } {\displaystyle {\mathcal {P}}=\{F_{\lambda }:\lambda \in \Lambda \}} is the set of models that could have generated the data which is much larger than Q {\displaystyle {\mathcal {Q}}} . Such statistical models are key in checking that a given procedure is robust, i.e. that it does not produce catastrophic errors when its assumptions about the data are incorrect. == An example == Suppose that we have a population of children, with the ages of the children distributed uniformly, in the population. The height of a child will be stochastically related to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in a linear regression model, like this: heighti = b0 + b1agei + εi, where b0 is the intercept, b1 is a parameter that age is multiplied by to obtain a prediction of height, εi is the error term, and i identifies the child. This implies that height is predicted by age, with some error. An admissible model must be consistent with all the data points. Thus, a straight line (heighti = b0 + b1agei) cannot be admissible for a model of the data—unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, εi, must be included in the equation, so that the model is consistent with all the data points. To do statistical inference, we would first need to assume some probability distributions for the εi. For instance, we might assume that the εi distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: b0, b1, and the variance of the Gaussian distribution. We can formally specify the model in the form ( S , P {\displaystyle S,{\mathcal {P}}} ) as follows. The sample space, S {\displaystyle S} , of our model comprises the set of all possible pairs (age, height). Each possible value of θ {\displaystyle \theta } = (b0, b1, σ2) determines a distribution on S {\displaystyle S} ; denote that distribution by F θ {\displaystyle F_{\theta }} . If Θ {\displaystyle \Theta } is the set of all possible values of θ {\displaystyle \theta } , then P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . (The parameterization is identifiable, and this is easy to check.) In this example, the model is determined by (1) specifying S {\displaystyle S} and (2) making some assumptions relevant to P {\displaystyle {\mathcal {P}}} . There are two assumptions: that height can be approximated by a linear function of age; that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specify P {\displaystyle {\mathcal {P}}} —as they are required to do. == General remarks == A statistical model is a special class of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. 
some of the variables are stochastic. In the above example with children's heights, ε is a stochastic variable; without that stochastic variable, the model would be deterministic. Statistical models are often used even when the data-generating process being modeled is deterministic. For instance, coin tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. Relatedly, the statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". There are three purposes for a statistical model, according to Konishi & Kitagawa: Predictions Extraction of information Description of stochastic structures Those three purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description. == Dimension of a model == Suppose that we have a statistical model ( S , P {\displaystyle S,{\mathcal {P}}} ) with P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . In notation, we write that Θ ⊆ R k {\displaystyle \Theta \subseteq \mathbb {R} ^{k}} where k is a positive integer ( R {\displaystyle \mathbb {R} } denotes the real numbers; other sets can be used, in principle). Here, k is called the dimension of the model. The model is said to be parametric if Θ {\displaystyle \Theta } has finite dimension. As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that P = { F μ , σ ( x ) ≡ 1 2 π σ exp ⁡ ( − ( x − μ ) 2 2 σ 2 ) : μ ∈ R , σ > 0 } {\displaystyle {\mathcal {P}}=\left\{F_{\mu ,\sigma }(x)\equiv {\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right):\mu \in \mathbb {R} ,\sigma >0\right\}} . In this example, the dimension, k, equals 2. As another example, suppose that the data consists of points (x, y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note the set of all possible lines has dimension 2, even though geometrically, a line has dimension 1.) Although formally θ ∈ Θ {\displaystyle \theta \in \Theta } is a single parameter that has dimension k, it is sometimes regarded as comprising k separate parameters. For example, with the univariate Gaussian distribution, θ {\displaystyle \theta } is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters—the mean and the standard deviation. A statistical model is nonparametric if the parameter set Θ {\displaystyle \Theta } is infinite dimensional. A statistical model is semiparametric if it has both finite-dimensional and infinite-dimensional parameters. Formally, if k is the dimension of Θ {\displaystyle \Theta } and n is the number of samples, both semiparametric and nonparametric models have k → ∞ {\displaystyle k\rightarrow \infty } as n → ∞ {\displaystyle n\rightarrow \infty } . 
If k / n → 0 {\displaystyle k/n\rightarrow 0} as n → ∞ {\displaystyle n\rightarrow \infty } , then the model is semiparametric; otherwise, the model is nonparametric. Parametric models are by far the most commonly used statistical models. Regarding semiparametric and nonparametric models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies". == Nested models == Two statistical models are nested if the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model y = b0 + b1x + b2x2 + ε, ε ~ 𝒩(0, σ2) has, nested within it, the linear model y = b0 + b1x + ε, ε ~ 𝒩(0, σ2) —we constrain the parameter b2 to equal 0. In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case. As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2. == Comparing models == Comparing statistical models is fundamental for much of statistical inference. Konishi & Kitagawa (2008, p. 75) state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following: R2, Bayes factor, Akaike information criterion, and the likelihood-ratio test together with its generalization, the relative likelihood. Another way of comparing two statistical models is through the notion of deficiency introduced by Lucien Le Cam. == See also == == Notes == == References == == Further reading == Davison, A. C. (2008), Statistical Models, Cambridge University Press Drton, M.; Sullivant, S. (2007), "Algebraic statistical models" (PDF), Statistica Sinica, 17: 1273–1297 Freedman, D. A. (2009), Statistical Models, Cambridge University Press Helland, I. S. (2010), Steps Towards a Unified Basis for Scientific Models and Methods, World Scientific Kroese, D. P.; Chan, J. C. C. (2014), Statistical Modeling and Computation, Springer Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, S2CID 15900983
Wikipedia/Probability_models
Popular Science (also known as PopSci) is an American popular science website, covering science and technology topics geared toward general readers. Popular Science has won over 58 awards, including the American Society of Magazine Editors awards for its journalistic excellence in 2003 (for General Excellence), 2004 (for Best Magazine Section), and 2019 (for Single-Topic Issue). Its print magazine, which ran from 1872 to 2020, was translated into over 30 languages and distributed to at least 45 countries. In 2021, Popular Science switched to an all-digital format and abandoned the magazine format in 2023. == Early history == The Popular Science Monthly, as the publication was originally called, was founded in May 1872 by Edward L. Youmans to disseminate scientific knowledge to the educated layman. Youmans had previously worked as an editor for the weekly Appleton's Journal and persuaded them to publish his new journal. Early issues were mostly reprints of English periodicals. The journal became an outlet for writings and ideas of Charles Darwin, Thomas Henry Huxley, Louis Pasteur, Henry Ward Beecher, Charles Sanders Peirce, William James, Thomas Edison, John Dewey and James McKeen Cattell. William Jay Youmans, Edward's brother, helped found Popular Science Monthly in 1872 and was an editor as well. He became editor-in-chief on Edward's death in 1887. The publisher, D. Appleton & Company, was forced to sell the journal for economic reasons in 1900. James McKeen Cattell became the editor in 1900 and the publisher in 1901. Cattell had a background in academics and continued publishing articles for educated readers. By 1915, the readership was declining and publishing a science journal was a financial challenge. In a September 1915 editorial, Cattell related these difficulties to his readers and announced that the Popular Science Monthly name had been transferred to the Modern Publishing Company to start a new publication for general audiences. The existing academic journal would continue publishing under the name The Scientific Monthly, retaining existing subscribers. Scientific Monthly was published until 1958 when it was absorbed into Science. After acquiring the Electrician and Mechanic magazine in 1914, the Modern Publishing Company had merged it with Modern Electrics to become Modern Electrics & Mechanics. Later that year, they merged the publication with Popular Electricity and World's Advance to form Popular Electricity and Modern Mechanics. After further name changes that caused confusion among librarians, the Modern Publishing Company had purchased the Popular Science Monthly name to provide a clear signifier of the publication's focus on popular science. The October 1915 issue was titled Popular Science Monthly and World's Advance. The volume number (Vol. 87, No. 4) was that of Popular Science but the content was that of World's Advance. The new editor was Waldemar Kaempffert, a former editor of Scientific American. The change in Popular Science Monthly was dramatic. The old version was a scholarly journal that had eight to ten articles in a 100-page issue. There would be ten to twenty photographs or illustrations. The new version had hundreds of short, easy to read articles with hundreds of illustrations. Editor Kaempffert was writing for "the home craftsman and hobbyist who wanted to know something about the world of science." The circulation doubled in the first year. 
From the mid-1930s to the 1960s, the magazine featured fictional stories of Gus Wilson's Model Garage, centered on car problems. An annual review of changes to the new model year cars ran in 1940 and 1941, but did not return after the war until 1954. It continued until the mid-1970s when the magazine reverted to publishing the new models over multiple issues as information became available. From 1935 to 1949, the magazine sponsored a series of short films, produced by Jerry Fairbanks and released by Paramount Pictures. From July 1952 to December 1989, Popular Science carried Roy Doty's Wordless Workshop as a regular feature. From July 1969 to May 1989, the cover and table of contents carried the subtitle, "The What's New Magazine." The cover removed the subtitle the following month and the contents page removed it in February 1990. In 1983, the magazine introduced a new logo using the ITC Avant Garde font, which it used until late 1995. Within the next 11 years, its font changed four times (in 1995, 1997, 2001, and 2002, respectively). In 2009, the magazine used a new font for its logo, which was used until the January 2014 issue. In 2014, the magazine underwent a major redesign; its February 2014 issue introduced a new logo, and a new format featuring greater use of graphics and imagery, aiming to broaden its content to appeal to wider attention to the environment, science, and technology among a mass audience. The revamp concluded in November 2014 with a redesign of the Popular Science website. == Recent history == The Popular Science Publishing Company was acquired in 1967 by the Los Angeles–based Times Mirror Company. In 2000, Times Mirror merged with the Chicago-based Tribune Company, which then sold the Times Mirror magazines to Time Inc. (then a subsidiary of Time Warner) the following year. On January 25, 2007, Time Warner sold this magazine, along with 17 other special interest magazines, to Bonnier Magazine Group. In January 2016, Popular Science switched to bi-monthly publication after 144 years of monthly publication. In April 2016 it was announced that editor-in-chief Cliff Ransom would be leaving the magazine. In August 2016, Joe Brown was named Popular Science's new editor-in-chief. In September 2018, it was announced that Popular Science would become a quarterly publication. During his tenure, Popular Science diversified its readership base, was nominated for several National Magazine Awards, winning for The Tiny Issue in 2019, and named to AdWeek's Hot List in 2019. Brown stepped down in February 2020. In March 2020, executive editor Corinne Iozzio was named editor-in-chief. During her tenure, the brand moved from a print to a digital-only publication, produced extensive coverage of the COVID-19 pandemic, celebrated its 150-year anniversary, and relaunched its "Brilliant 10" franchise. Iozzio and her team won a 2022 National Magazine Award for its "Heat" issue. The issue, an in-depth look at the stark realities and ingenuity of a warming world, was the second win in the Single-Topic Issue category but the first in its new digital-only format. In August 2022, after more than a decade at Popular Science and two-and-a-half years leading the brand, Iozzio announced that she would step down as editor-in-chief in October of that year. On October 6, 2020, the Bonnier Group sold Popular Science and six other special interest magazines, including the well-known titles Popular Photography, Outdoor Life, and Field & Stream, to North Equity LLC. 
While North Equity is a venture equity firm that primarily invests in digital media brands, David Ritchie, CEO of the Bonnier Corp, said Bonnier believes, "North Equity is best-positioned to continue to invest in and grow these iconic legacy brands." In June 2021, North Equity introduced Recurrent Ventures as the new parent company to its digital media portfolio. From April 27, 2021, the Popular Science publication was changed to a fully digital format and is no longer in physical print. Its digital subscription offering, PopSci+ is inclusive of exclusive digital content and the magazine. In January 2023, Annie Colbert was named the new editor-in-chief. She joined the brand after spending more than 10 years at Mashable. === Radio === Popular Science Radio was a partnership between Popular Science and Entertainment Radio Network which ran through 2016. === Tablet === On March 27, 2011, Popular Science magazine sold the 10,000th subscription to its iPad edition, nearly six weeks after accepting Apple's terms for selling subs on its tablet. === Podcasts === In 2018, Popular Science launched two podcasts, Last Week in Tech and The Weirdest Thing I Learned This Week, Last Week in Tech was later replaced by Techathlon. Weirdest Thing proved to be the brand's breakout hit. After just one episode, Apple Podcasts included "Weirdest Thing" on their weekly "New & Noteworthy" list, and over the years it has hosted a number of live events. === Popular Science+ === In early 2010, Bonnier partnered with London-based design firm BERG to create Mag+, a magazine publishing platform for tablets. In April 2010, Popular Science+, the first title on the Mag+ platform, launched in the iTunes Store the same day the iPad launched. The app contains all the content in the print version as well as added content and digital-only extras. Bonnier has since launched several more titles on the Mag+ platform, including Popular Photography+ and Transworld Snowboarding+. === Australian Popular Science === On September 24, 2008, Australian publishing company Australian Media Properties (part of the WW Media Group) launched a local version of Popular Science. It is a monthly magazine, like its American counterpart, and uses content from the American version of the magazine as well as local material. Australian Media Properties also launched www.popsci.com.au at the same time, a localised version of the Popular Science website. === Popular Science Predictions Exchange === In July 2007, Popular Science launched the Popular Science Predictions EXchange (PPX). People were able to place virtual bets on what the next innovations in technology, the environment, and science would be. Bets have included whether Facebook would have an initial public offering by 2008, when a touchscreen iPod would be launched, and whether Dongtan, China's eco-city, would be inhabited by 2010. The PPX shut down in 2009. === Television: Future Of... === Popular Science's Future Of... show premiered on August 10, 2009, on the Science Channel. The show was concerned with the future of technology and science in a particular topic area that varies from week to week. As of December 2009, a new episode was premiering every Monday. === Books === Popular Science has published a number of books, including the bestselling Big Book of Hacks and Big Book of Maker Skills. The brand has also published The Total Inventor's Manual and The Future Then, which was published in conjunction with the brand's 145th anniversary. 
=== Other languages === In June 2014, Popular Science Italia was launched in Italy by Kekoa Publishing. Directed by Francesco Maria Avitto, the magazine is available in print and digital versions. In April 2017, Popular Science was launched in Arabic by United Arab Emirates-based publisher Haykal Media. The magazine is available in print bimonthly and through a daily updated portal. == Publishers == Sources: American Mass-Market Magazines, The Wall Street Journal, and New York Post. == Gallery == == See also == Popular Mechanics == References == == External links == Popular Science Popular Science+ in iTunes Popular Science Australia Popular Science magazine: 1872–2008 Online, readable back issues.
Wikipedia/Popular_Science_Monthly
The expected utility hypothesis is a foundational assumption in mathematical economics concerning decision making under uncertainty. It postulates that rational agents maximize utility, meaning the subjective desirability of their actions. Rational choice theory, a cornerstone of microeconomics, builds on this postulate to model aggregate social behaviour. The expected utility hypothesis states that an agent chooses between risky prospects by comparing expected utility values (i.e., the sum of the utility values of the payoffs, each weighted by its probability). The summarised formula for expected utility is U ( p ) = ∑ u ( x k ) p k {\displaystyle U(p)=\sum u(x_{k})p_{k}} where p k {\displaystyle p_{k}} is the probability that the outcome indexed by k {\displaystyle k} with payoff x k {\displaystyle x_{k}} is realized, and the function u expresses the utility of each respective payoff. Graphically, the curvature of the u function captures the agent's risk attitude. For example, imagine you’re offered a choice between receiving $50 for sure, or flipping a coin to win $100 if heads, and nothing if tails. Although both options have the same average payoff ($50), many people choose the guaranteed $50 because they value the certainty of the smaller reward more than the possibility of a larger one, reflecting risk-averse preferences. Standard utility functions represent ordinal preferences. The expected utility hypothesis imposes limitations on the utility function and makes utility cardinal (though still not comparable across individuals). Although the expected utility hypothesis is a commonly accepted assumption in theories underlying economic modeling, it has frequently been found to be inconsistent with the empirical results of experimental psychology. Psychologists and economists have been developing new theories to explain these inconsistencies for many years. These include prospect theory, rank-dependent expected utility and cumulative prospect theory, and bounded rationality. == Justification == === Bernoulli's formulation === Nicolaus Bernoulli described the St. Petersburg paradox (involving infinite expected values) in 1713, prompting two Swiss mathematicians to develop expected utility theory as a solution. Bernoulli's paper was the first formalization of marginal utility, which has broad application in economics in addition to expected utility theory. He used this concept to formalize the idea that the same amount of additional money was less useful to an already wealthy person than it would be to a poor person. The theory can also more accurately describe more realistic scenarios (where expected values are finite) than expected value alone. He proposed that a nonlinear function of the utility of an outcome should be used instead of the expected value of an outcome, accounting for risk aversion, where the risk premium is higher for low-probability events than the difference between the payout level of a particular outcome and its expected value. Bernoulli further proposed that it was not the goal of the gambler to maximize his expected gain but to maximize the logarithm of his gain instead. The concept of expected utility was further developed by William Playfair, an eighteenth-century political writer who frequently addressed economic issues. In his 1785 pamphlet The Increase of Manufactures, Commerce and Finance, a criticism of Britain's usury laws, Playfair presented what he argued was the calculus investors made prior to committing funds to a project. 
Playfair said investors estimated the potential gains and potential losses, and then assessed the probability of each. This was, in effect, a verbal rendition of an expected utility equation. Playfair argued that, if government limited the potential gains of a successful project, it would discourage investment in general, causing the national economy to under-perform. Daniel Bernoulli drew attention to psychological and behavioral components behind the individual's decision-making process and proposed that the utility of wealth has a diminishing marginal utility. For example, an extra dollar or an additional good is perceived as less valuable as someone gets wealthier. In other words, desirability related to a financial gain depends on the gain itself and the person's wealth. Bernoulli suggested that people maximize "moral expectation" rather than expected monetary value. Bernoulli made a clear distinction between expected value and expected utility. Instead of using the weighted outcomes, he used the weighted utility multiplied by probabilities. He proved that the utility function used in real life is finite, even when its expected value is infinite. === Ramsey-theoretic approach to subjective probability === In 1926, Frank Ramsey introduced Ramsey's Representation Theorem. This representation theorem for expected utility assumes that preferences are defined over a set of bets where each option has a different yield. Ramsey believed that we should always make decisions to receive the best-expected outcome according to our personal preferences. This implies that if we can understand an individual's priorities and preferences, we can anticipate their choices. In this model, he defined numerical utilities for each option to exploit the richness of the space of prices. The outcome of each preference is exclusive of each other. For example, if you study, you can not see your friends. However, you will get a good grade in your course. In this scenario, we analyze personal preferences and beliefs and will be able to predict which option a person might choose (e.g., if someone prioritizes their social life over academic results, they will go out with their friends). Assuming that the decisions of a person are rational, according to this theorem, we should be able to know the beliefs and utilities of a person just by looking at the choices they make (which is wrong). Ramsey defines a proposition as "ethically neutral" when two possible outcomes have an equal value. In other words, if the probability can be defined as a preference, each proposition should have ⁠1/2⁠ to be indifferent between both options. Ramsey shows that P ( E ) = ( 1 − U ( m ) ) ( U ( b ) − U ( w ) ) {\displaystyle P(E)=(1-U(m))(U(b)-U(w))} === Savage's subjective expected utility representation === In the 1950s, Leonard Jimmie Savage, an American statistician, derived a framework for comprehending expected utility. Savage's framework involved proving that expected utility could be used to make an optimal choice among several acts through seven axioms. In his book, The Foundations of Statistics, Savage integrated a normative account of decision making under risk (when probabilities are known) and under uncertainty (when probabilities are not objectively known). Savage concluded that people have neutral attitudes towards uncertainty and that observation is enough to predict the probabilities of uncertain events. 
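To make Savage's framework of choosing among acts under subjective probabilities concrete, here is a minimal Python sketch (the scenario, numbers, and names are the editor's illustration, not Savage's notation): acts map states of the world to consequences, a subjective probability is assigned to the states, and the act with the highest subjective expected utility is chosen.

```python
# States of the world (outside the decision maker's control) and their
# subjective probabilities -- illustrative numbers only.
subjective_p = {"rain": 0.3, "dry": 0.7}

# Acts map each state to a consequence (here a simple label).
acts = {
    "take umbrella": {"rain": "dry but encumbered", "dry": "encumbered"},
    "leave umbrella": {"rain": "soaked", "dry": "unencumbered"},
}

# Personal utilities attached to consequences (again illustrative).
utility = {
    "dry but encumbered": 8.0,
    "encumbered": 6.0,
    "soaked": 0.0,
    "unencumbered": 10.0,
}

def subjective_expected_utility(act):
    return sum(subjective_p[s] * utility[act[s]] for s in subjective_p)

for name, act in acts.items():
    print(name, subjective_expected_utility(act))
best = max(acts, key=lambda name: subjective_expected_utility(acts[name]))
print("chosen act:", best)
```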
A crucial methodological aspect of Savage's framework is its focus on observable choices—cognitive processes and other psychological aspects of decision-making matter only to the extent that they directly impact choice. The theory of subjective expected utility combines two concepts: first, a personal utility function, and second, a personal probability distribution (usually based on Bayesian probability theory). This theoretical model has been known for its clear and elegant structure and is considered by some researchers to be "the most brilliant axiomatic theory of utility ever developed." Instead of assuming the probability of an event, Savage defines it in terms of preferences over acts. Savage used the states (something a person doesn't control) to calculate the probability of an event. On the other hand, he used utility and intrinsic preferences to predict the event's outcome. Savage assumed that each act and state were sufficient to determine an outcome uniquely. However, this assumption breaks in cases where an individual does not have enough information about the event. Additionally, he believed that outcomes must have the same utility regardless of state. Therefore, it is essential to identify correctly which statement counts as an outcome. For example, if someone says, "I got the job," this affirmation is not considered an outcome since the utility of the statement will be different for each person depending on intrinsic factors such as financial necessity or judgment about the company. Therefore, no state can rule out the performance of an act. Only when the state and the act are evaluated simultaneously is it possible to determine an outcome with certainty. ==== Savage's representation theorem ==== Savage's representation theorem (Savage, 1954): A preference relation ≽ satisfies P1–P7 if and only if there is a finitely additive probability measure P and a function u : C → R such that, for every pair of acts f and g, f ≽ g ⟺ ∫Ω u(f(ω)) dP ≥ ∫Ω u(g(ω)) dP. If and only if all the axioms are satisfied, one can use the information to reduce the uncertainty about the events that are out of their control. Additionally, the theorem ranks the outcome according to a utility function that reflects personal preferences. The key ingredients in Savage's theory are: States: The specification of every aspect of the decision problem at hand or "A description of the world leaving no relevant aspect undescribed." Events: A set of states identified by someone Consequences: A consequence describes everything relevant to the decision maker's utility (e.g., monetary rewards, psychological factors, etc.) Acts: An act is a finite-valued function that maps states to consequences. === Von Neumann–Morgenstern utility theorem === ==== The von Neumann–Morgenstern axioms ==== There are four axioms of the expected utility theory that define a rational decision maker: completeness; transitivity; independence of irrelevant alternatives; and continuity. Completeness assumes that an individual has well-defined preferences and can always decide between any two alternatives. Axiom (Completeness): For every A {\displaystyle A} and B {\displaystyle B} either A ⪰ B {\displaystyle A\succeq B} or A ⪯ B {\displaystyle A\preceq B} or both. This means that the individual prefers A {\displaystyle A} to B {\displaystyle B} , B {\displaystyle B} to A {\displaystyle A} , or is indifferent between A {\displaystyle A} and B {\displaystyle B} . 
Transitivity assumes that, as an individual decides according to the completeness axiom, the individual also decides consistently. Axiom (Transitivity): For every A , B {\displaystyle A,B} and C {\displaystyle C} with A ⪰ B {\displaystyle A\succeq B} and B ⪰ C {\displaystyle B\succeq C} we must have A ⪰ C {\displaystyle A\succeq C} . Independence of irrelevant alternatives pertains to well-defined preferences as well. It assumes that two gambles mixed with an irrelevant third one will maintain the same order of preference as when the two are presented independently of the third one. The independence axiom is the most controversial.. Axiom (Independence of irrelevant alternatives): For every A , B {\displaystyle A,B} such that A ⪰ B {\displaystyle A\succeq B} , the preference t A + ( 1 − t ) C ⪰ t B + ( 1 − t ) C , {\displaystyle tA+(1-t)C\succeq tB+(1-t)C,} must hold for every lottery C {\displaystyle C} and real t ∈ [ 0 , 1 ] {\displaystyle t\in [0,1]} . Continuity assumes that when there are three lotteries ( A , B {\displaystyle A,B} and C {\displaystyle C} ) and the individual prefers A {\displaystyle A} to B {\displaystyle B} and B {\displaystyle B} to C {\displaystyle C} . There should be a possible combination of A {\displaystyle A} and C {\displaystyle C} in which the individual is then indifferent between this mix and the lottery B {\displaystyle B} . Axiom (Continuity): Let A , B {\displaystyle A,B} and C {\displaystyle C} be lotteries with A ⪰ B ⪰ C {\displaystyle A\succeq B\succeq C} . Then B {\displaystyle B} is equally preferred to p A + ( 1 − p ) C {\displaystyle pA+(1-p)C} for some p ∈ [ 0 , 1 ] {\displaystyle p\in [0,1]} . If all these axioms are satisfied, then the individual is rational. A utility function can represent the preferences, i.e., one can assign numbers (utilities) to each outcome of the lottery such that choosing the best lottery according to the preference ⪰ {\displaystyle \succeq } amounts to choosing the lottery with the highest expected utility. This result is the von Neumann–Morgenstern utility representation theorem. In other words, if an individual's behavior always satisfies the above axioms, then there is a utility function such that the individual will choose one gamble over another if and only if the expected utility of one exceeds that of the other. The expected utility of any gamble may be expressed as a linear combination of the utilities of the outcomes, with the weights being the respective probabilities. Utility functions are also normally continuous functions. Such utility functions are also called von Neumann–Morgenstern (vNM). This is a central theme of the expected utility hypothesis in which an individual chooses not the highest expected value but rather the highest expected utility. The expected utility-maximizing individual makes decisions rationally based on the theory's axioms. The von Neumann–Morgenstern formulation is important in the application of set theory to economics because it was developed shortly after the Hicks–Allen "ordinal revolution" of the 1930s, and it revived the idea of cardinal utility in economic theory. However, while in this context the utility function is cardinal, in that implied behavior would be altered by a nonlinear monotonic transformation of utility, the expected utility function is ordinal because any monotonic increasing transformation of expected utility gives the same behavior. 
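The following short Python sketch (the lotteries and the particular utility functions are the editor's illustrative choices, not part of the theorem) shows how a vNM utility function ranks the sure-$50 versus coin-flip example from the introduction, and illustrates the cardinality point just made: a positive affine transformation of u leaves the ranking of lotteries unchanged, while a nonlinear monotonic transformation of u generally does not.

```python
import math

# Two lotteries over money: a sure $50, and a fair coin flip between $100 and $0.
sure_50 = [(1.0, 50.0)]
coin_flip = [(0.5, 100.0), (0.5, 0.0)]

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

# A concave (risk-averse) vNM utility; log(1 + w) keeps u(0) finite.
u = lambda w: math.log(1.0 + w)

# Positive affine transformation of u: represents the same preferences.
v = lambda w: 3.0 * u(w) + 7.0

# Nonlinear monotonic transformation of u: here exp(2u(w)) = (1 + w)**2,
# a convex (risk-seeking) valuation, so the ranking can reverse.
g = lambda w: math.exp(2.0 * u(w))

for name, util in [("u", u), ("3u + 7", v), ("exp(2u)", g)]:
    eu_sure = expected_utility(sure_50, util)
    eu_flip = expected_utility(coin_flip, util)
    choice = "prefers sure $50" if eu_sure > eu_flip else "prefers coin flip"
    print(name, round(eu_sure, 3), round(eu_flip, 3), choice)
```

Under u and its affine transform the agent prefers the sure $50; under the convex transform the coin flip is preferred, which is why the vNM utility function is cardinal rather than merely ordinal.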
==== Examples of von Neumann–Morgenstern utility functions ==== The utility function u ( w ) = log ⁡ ( w ) {\displaystyle u(w)=\log(w)} was originally suggested by Bernoulli (see above). It has relative risk aversion constant and equal to one and is still sometimes assumed in economic analyses. The utility function u ( w ) = − e − a w {\displaystyle u(w)=-e^{-aw}} It exhibits constant absolute risk aversion and, for this reason, is often avoided, although it has the advantage of offering substantial mathematical tractability when asset returns are normally distributed. Note that, as per the affine transformation property alluded to above, the utility function K − e − a w {\displaystyle K-e^{-aw}} gives the same preferences orderings as does − e − a w {\displaystyle -e^{-aw}} ; thus it is irrelevant that the values of − e − a w {\displaystyle -e^{-aw}} and its expected value are always negative: what matters for preference ordering is which of two gambles gives the higher expected utility, not the numerical values of those expected utilities. The class of constant relative risk aversion utility functions contains three categories. Bernoulli's utility function u ( w ) = log ⁡ ( w ) {\displaystyle u(w)=\log(w)} Has relative risk aversion equal to 1. The functions u ( w ) = w α {\displaystyle u(w)=w^{\alpha }} for α ∈ ( 0 , 1 ) {\displaystyle \alpha \in (0,1)} have relative risk aversion equal to 1 − α ∈ ( 0 , 1 ) {\displaystyle 1-\alpha \in (0,1)} . And the functions u ( w ) = − w α {\displaystyle u(w)=-w^{\alpha }} for α < 0 {\displaystyle \alpha <0} have relative risk aversion equal to 1 − α > 1. {\displaystyle 1-\alpha >1.} See also the discussion of utility functions having hyperbolic absolute risk aversion (HARA). == Formula for expected utility == When the entity x {\displaystyle x} whose value x i {\displaystyle x_{i}} affects a person's utility takes on one of a set of discrete values, the formula for expected utility, which is assumed to be maximized, is E ⁡ [ u ( x ) ] = p 1 ⋅ u ( x 1 ) + p 2 ⋅ u ( x 2 ) + ⋯ {\displaystyle \operatorname {E} [u(x)]=p_{1}\cdot u(x_{1})+p_{2}\cdot u(x_{2})+\cdots } where the left side is the subjective valuation of the gamble as a whole, x i {\displaystyle x_{i}} is the ith possible outcome, u ( x i ) {\displaystyle u(x_{i})} is its valuation, and p i {\displaystyle p_{i}} is its probability. There could be either a finite set of possible values x i , {\displaystyle x_{i},} , in which case the right side of this equation has a finite number of terms, or there could be an infinite set of discrete values, in which case the right side has an infinite number of terms. When x {\displaystyle x} can take on any of a continuous range of values, the expected utility is given by E ⁡ [ u ( x ) ] = ∫ − ∞ ∞ u ( x ) f ( x ) d x , {\displaystyle \operatorname {E} [u(x)]=\int _{-\infty }^{\infty }u(x)f(x)\,dx,} where f ( x ) {\displaystyle f(x)} is the probability density function of x . {\displaystyle x.} The certainty equivalent, which is the fixed amount that would make a person indifferent to it versus the outcome distribution, is given by C E = u − 1 ( E ⁡ [ u ( x ) ] ) . {\displaystyle \mathrm {CE} =u^{-1}(\operatorname {E} [u(x)])\,.} === Measuring risk in the expected utility context === Often, people refer to "risk" as a potentially quantifiable entity. 
In the context of mean-variance analysis, variance is used as a risk measure for portfolio return; however, this is only valid if returns are normally distributed or otherwise jointly elliptically distributed, or in the unlikely case in which the utility function has a quadratic form—however, David E. Bell proposed a measure of risk that follows naturally from a certain class of von Neumann–Morgenstern utility functions. Let utility of wealth be given by u ( w ) = w − b e − a w {\displaystyle u(w)=w-be^{-aw}} for individual-specific positive parameters a and b. Then, the expected utility is given by E ⁡ [ u ( w ) ] = E ⁡ [ w ] − b E ⁡ [ e − a w ] = E ⁡ [ w ] − b E ⁡ [ e − a E ⁡ [ w ] − a ( w − E ⁡ [ w ] ) ] = E ⁡ [ w ] − b e − a E ⁡ [ w ] E ⁡ [ e − a ( w − E ⁡ [ w ] ) ] = expected wealth − b ⋅ e − a ⋅ expected wealth ⋅ risk . {\displaystyle {\begin{aligned}\operatorname {E} [u(w)]&=\operatorname {E} [w]-b\operatorname {E} [e^{-aw}]\\&=\operatorname {E} [w]-b\operatorname {E} [e^{-a\operatorname {E} [w]-a(w-\operatorname {E} [w])}]\\&=\operatorname {E} [w]-be^{-a\operatorname {E} [w]}\operatorname {E} [e^{-a(w-\operatorname {E} [w])}]\\&={\text{expected wealth}}-b\cdot e^{-a\cdot {\text{expected wealth}}}\cdot {\text{risk}}.\end{aligned}}} Thus the risk measure is E ⁡ ( e − a ( w − E ⁡ w ) ) {\displaystyle \operatorname {E} (e^{-a(w-\operatorname {E} w)})} , which differs between two individuals if they have different values of the parameter a , {\displaystyle a,} allowing other people to disagree about the degree of risk associated with any given portfolio. Individuals sharing a given risk measure (based on a given value of a) may choose different portfolios because they may have different values of b. See also Entropic risk measure. For general utility functions, however, expected utility analysis does not permit the expression of preferences to be separated into two parameters, one representing the expected value of the variable in question and the other representing its risk. == Risk aversion == The expected utility theory takes into account that individuals may be risk-averse, meaning that the individual would refuse a fair gamble (a fair gamble has an expected value of zero). Risk aversion implies that their utility functions are concave and show diminishing marginal wealth utility. The risk attitude is directly related to the curvature of the utility function: risk-neutral individuals have linear utility functions, risk-seeking individuals have convex utility functions, and risk-averse individuals have concave utility functions. The curvature of the utility function can measure the degree of risk aversion. Since the risk attitudes are unchanged under affine transformations of u, the second derivative u'' is not an adequate measure of the risk aversion of a utility function. Instead, it needs to be normalized. This leads to the definition of the Arrow–Pratt measure of absolute risk aversion: A R A ( w ) = − u ″ ( w ) u ′ ( w ) , {\displaystyle {\mathit {ARA}}(w)=-{\frac {u''(w)}{u'(w)}},} where w {\displaystyle w} is wealth. The Arrow–Pratt measure of relative risk aversion is: R R A ( w ) = − w u ″ ( w ) u ′ ( w ) {\displaystyle {\mathit {RRA}}(w)=-{\frac {wu''(w)}{u'(w)}}} Special classes of utility functions are the CRRA (constant relative risk aversion) functions, where RRA(w) is constant, and the CARA (constant absolute risk aversion) functions, where ARA(w) is constant. These functions are often used in economics to simplify. 
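As a worked illustration of the risk-aversion measures above, the following Python sketch (parameter values and the particular gamble are illustrative assumptions by the editor, using NumPy) computes certainty equivalents and risk premia for a CRRA (logarithmic) and a CARA (exponential) utility, using CE = u⁻¹(E[u(x)]) from the formula section.

```python
import numpy as np

# A simple gamble: 50/50 between wealth of 50 and 150 (expected wealth 100).
outcomes = np.array([50.0, 150.0])
probs = np.array([0.5, 0.5])

# CRRA utility u(w) = log(w): ARA(w) = 1/w, RRA(w) = 1 (constant relative risk aversion).
u_crra, u_crra_inv = np.log, np.exp

# CARA utility u(w) = -exp(-a w): ARA(w) = a (constant absolute risk aversion), RRA(w) = a w.
a = 0.01
u_cara = lambda w: -np.exp(-a * w)
u_cara_inv = lambda y: -np.log(-y) / a

def certainty_equivalent(u, u_inv):
    return u_inv(np.dot(probs, u(outcomes)))

ew = np.dot(probs, outcomes)
for name, u, u_inv in [("CRRA log", u_crra, u_crra_inv), ("CARA a=0.01", u_cara, u_cara_inv)]:
    ce = certainty_equivalent(u, u_inv)
    print(f"{name}: certainty equivalent = {ce:.2f}, risk premium = {ew - ce:.2f}")
```

Both utilities are concave, so the certainty equivalent falls below the expected wealth of 100 and the risk premium is positive, which is the defining feature of risk aversion described above.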
A decision that maximizes expected utility also maximizes the probability of the decision's consequences being preferable to some uncertain threshold. In the absence of uncertainty about the threshold, expected utility maximization simplifies to maximizing the probability of achieving some fixed target. If the uncertainty is uniformly distributed, then expected utility maximization becomes expected value maximization. Intermediate cases lead to increasing risk aversion above some fixed threshold and increasing risk seeking below a fixed threshold. == The St. Petersburg paradox == The St. Petersburg paradox, presented by Nicolas Bernoulli, illustrates that decision-making based on the expected value of monetary payoffs leads to absurd conclusions. When a probability distribution has an infinite expected value, a person who only cares about the expected value of a gamble would pay an arbitrarily large finite amount to take the gamble. The thought experiment places no upper bound on the potential rewards from very-low-probability events. In the hypothetical setup, a person flips a coin repeatedly. The number of consecutive times the coin lands on heads determines the participant's prize. The prize is doubled every time the coin comes up heads (probability 1/2); the game ends when the coin comes up tails. A player who only cares about the expected payoff should be willing to pay any finite amount of money to play, because this entry cost will always be less than the expected, infinite value of the game. However, in reality, people do not do this. Only a few participants were willing to pay a maximum of $25 to enter the game, because many were risk averse and unwilling to bet on a very small possibility at a very high price. == Criticism == In the early days of the calculus of probability, classic utilitarians believed that the option with the greatest utility would produce more pleasure or happiness for the agent and, therefore, must be chosen. The main problem with the expected value theory is that there might not be a unique correct way to quantify utility or to identify the best trade-offs. For example, some of the trade-offs may be intangible or qualitative. Rather than monetary incentives, other desirable ends can also be included in utility, such as pleasure, knowledge, friendship, etc. Originally, the consumer's total utility was the sum of independent utilities of the goods. However, the expected value theory was dropped as it was considered too static and deterministic. The classic counterexample to the expected value theory (where everyone makes the same "correct" choice) is the St. Petersburg paradox. In empirical applications, several violations of expected utility theory are systematic, and these falsifications have deepened our understanding of how people decide. Daniel Kahneman and Amos Tversky in 1979 presented their prospect theory, which showed empirically how the preferences of individuals are inconsistent among the same choices, depending on the framing of the choices, i.e., how they are presented. Like any mathematical model, expected utility theory simplifies reality. The mathematical correctness of expected utility theory and the salience of its primitive concepts do not guarantee that expected utility theory is a reliable guide to human behavior or optimal practice.
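The St. Petersburg game mentioned above as the classic counterexample can also be checked numerically. The short simulation below is only a sketch: it assumes a starting prize of $2 and a log-utility decision maker, and contrasts the unstable sample mean of the monetary payoff with the modest certainty equivalent implied by Bernoulli's log utility.

```python
import math
import random

random.seed(0)

def play_once(base_prize=2.0):
    """One round: the prize doubles for every head; the game ends on the first tail."""
    prize = base_prize
    while random.random() < 0.5:   # heads with probability 1/2
        prize *= 2.0
    return prize

n = 200_000
payoffs = [play_once() for _ in range(n)]

mean_payoff = sum(payoffs) / n                       # does not stabilize: the true expectation is infinite
mean_log_utility = sum(math.log(x) for x in payoffs) / n
certainty_equivalent = math.exp(mean_log_utility)    # CE under Bernoulli's u(w) = log(w)

print(f"sample mean payoff         : {mean_payoff:.2f}")
print(f"mean log-utility           : {mean_log_utility:.4f}")
print(f"certainty equivalent (log) : {certainty_equivalent:.2f}")
```

Under these assumptions the certainty equivalent settles near $4, even though the sample mean of the raw payoff keeps creeping upward as more rounds are simulated.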
The mathematical clarity of expected utility theory has helped scientists design experiments to test its adequacy and to distinguish systematic departures from its predictions. This has led to the field of behavioral finance, which has developed models of these deviations from expected utility theory to account for the empirical facts. Other critics argue that applying expected utility to economic and policy decisions has engendered inappropriate valuations, particularly when monetary units are used to scale the utility of nonmonetary outcomes, such as deaths. === Conservatism in updating beliefs === Psychologists have documented systematic violations of probability calculations in human behavior. This has been evidenced by examples such as the Monty Hall problem, where it was demonstrated that people do not revise their degrees of belief in line with observed probabilities and that probabilities cannot be applied to single cases. On the other hand, when updating probability distributions using evidence, a standard method uses conditional probability, namely the rule of Bayes. An experiment on belief revision has suggested that humans change their beliefs faster when using Bayesian methods than when using informal judgment. According to the empirical results, there has been almost no recognition in decision theory of the distinction between the problem of justifying its theoretical claims regarding the properties of rational belief and desire and the problem of describing how people actually form beliefs and make decisions. One of the main reasons is that people's basic tastes and preferences for losses cannot be represented with utility, as they change under different scenarios. === Irrational deviations === Behavioral finance has produced several generalized expected utility theories to account for instances where people's choices deviate from those predicted by expected utility theory. These deviations are described as "irrational" because they can depend on the way the problem is presented, not on the actual costs, rewards, or probabilities involved. Particular theories, including prospect theory, rank-dependent expected utility, and cumulative prospect theory, are considered insufficient to predict preferences and expected utility. Additionally, experiments have shown systematic violations of, and generalizations based on, the results of Savage and von Neumann–Morgenstern. This is because preferences and utility functions constructed under different contexts differ significantly. This is demonstrated in the contrast of individual preferences under the insurance and lottery contexts, which shows the degree of indeterminacy of expected utility theory. In practice, there will be many situations where the probabilities are unknown, and one operates under uncertainty. In economics, Knightian uncertainty or ambiguity may occur. Thus, one must make assumptions about the probabilities, but the expected values of various decisions can be very sensitive to the assumptions. This is particularly problematic when the expectation is dominated by rare extreme events, as in a long-tailed distribution. Alternative decision techniques are robust to uncertainty about the probability of outcomes, either not depending on probabilities of outcomes and only requiring scenario analysis (as in minimax or minimax regret), or being less sensitive to assumptions. Bayesian approaches to probability treat it as a degree of belief.
Thus, they do not distinguish between risk and a wider concept of uncertainty: they deny the existence of Knightian uncertainty. They would model uncertain probabilities with hierarchical models, i.e., as distributions whose parameters are drawn from a higher-level distribution (hyperpriors). === Preference reversals over uncertain outcomes === Starting with studies such as Lichtenstein & Slovic (1971), it was discovered that subjects sometimes exhibit signs of preference reversals about their certainty equivalents of different lotteries. Specifically, when eliciting certainty equivalents, subjects tend to value "p bets" (lotteries with a high chance of winning a low prize) lower than "$ bets" (lotteries with a small chance of winning a large prize). When subjects are asked which lotteries they prefer in direct comparison, however, they frequently prefer the "p bets" over "$ bets". Many studies have examined this "preference reversal", from both an experimental (e.g., Plott & Grether, 1979) and theoretical (e.g., Holt, 1986) standpoint, indicating that this behavior can be brought into accordance with neoclassical economic theory under specific assumptions. == Recommendations == Three components in the psychology field are seen as crucial to developing a more accurate descriptive theory of decision under risks. Theory of decision framing effect (psychology) Better understanding of the psychologically relevant outcome space A psychologically richer theory of the determinants == See also == == References == == Further reading ==
Wikipedia/Von_Neumann–Morgenstern_utility_function
Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning. Reinforcement learning differs from supervised learning in not needing labelled input-output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge) with the goal of maximizing the cumulative reward (the feedback of which might be incomplete or delayed). The search for this balance is known as the exploration–exploitation dilemma. The environment is typically stated in the form of a Markov decision process (MDP), as many reinforcement learning algorithms use dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process, and they target large MDPs where exact methods become infeasible. == Principles == Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, RL is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in RL have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation (particularly in the absence of a mathematical model of the environment). Basic reinforcement learning is modeled as a Markov decision process: A set of environment and agent states (the state space), S {\displaystyle {\mathcal {S}}} ; A set of actions (the action space), A {\displaystyle {\mathcal {A}}} , of the agent; P a ( s , s ′ ) = Pr ( S t + 1 = s ′ ∣ S t = s , A t = a ) {\displaystyle P_{a}(s,s')=\Pr(S_{t+1}=s'\mid S_{t}=s,A_{t}=a)} , the transition probability (at time t {\displaystyle t} ) from state s {\displaystyle s} to state s ′ {\displaystyle s'} under action a {\displaystyle a} . R a ( s , s ′ ) {\displaystyle R_{a}(s,s')} , the immediate reward after transition from s {\displaystyle s} to s ′ {\displaystyle s'} under action a {\displaystyle a} . The purpose of reinforcement learning is for the agent to learn an optimal (or near-optimal) policy that maximizes the reward function or other user-provided reinforcement signal that accumulates from immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals learn to adopt behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning. A basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step t, the agent receives the current state S t {\displaystyle S_{t}} and reward R t {\displaystyle R_{t}} . 
It then chooses an action A t {\displaystyle A_{t}} from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state S t + 1 {\displaystyle S_{t+1}} and the reward R t + 1 {\displaystyle R_{t+1}} associated with the transition ( S t , A t , S t + 1 ) {\displaystyle (S_{t},A_{t},S_{t+1})} is determined. The goal of a reinforcement learning agent is to learn a policy: π : S × A → [ 0 , 1 ] {\displaystyle \pi :{\mathcal {S}}\times {\mathcal {A}}\rightarrow [0,1]} , π ( s , a ) = Pr ( A t = a ∣ S t = s ) {\displaystyle \pi (s,a)=\Pr(A_{t}=a\mid S_{t}=s)} that maximizes the expected cumulative reward. Formulating the problem as a Markov decision process assumes the agent directly observes the current environmental state; in this case, the problem is said to have full observability. If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have partial observability, and formally the problem must be formulated as a partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed. When the agent's performance is compared to that of an agent that acts optimally, the difference in performance yields the notion of regret. In order to act near optimally, the agent must reason about long-term consequences of its actions (i.e., maximize future rewards), although the immediate reward associated with this might be negative. Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including energy storage, robot control, photovoltaic generators, backgammon, checkers, Go (AlphaGo), and autonomous driving systems. Two elements make reinforcement learning powerful: the use of samples to optimize performance, and the use of function approximation to deal with large environments. Thanks to these two key components, RL can be used in large environments in the following situations: A model of the environment is known, but an analytic solution is not available; Only a simulation model of the environment is given (the subject of simulation-based optimization); The only way to collect information about the environment is to interact with it. The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems. == Exploration == The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space Markov decision processes in Burnetas and Katehakis (1997). Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical. 
One such method is ε {\displaystyle \varepsilon } -greedy, where 0 < ε < 1 {\displaystyle 0<\varepsilon <1} is a parameter controlling the amount of exploration vs. exploitation. With probability 1 − ε {\displaystyle 1-\varepsilon } , exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability ε {\displaystyle \varepsilon } , exploration is chosen, and the action is chosen uniformly at random. ε {\displaystyle \varepsilon } is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics. == Algorithms for control learning == Even if the issue of exploration is disregarded and even if the state was observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards. === Criterion of optimality === ==== Policy ==== The agent's action selection is modeled as a map called policy: π : A × S → [ 0 , 1 ] {\displaystyle \pi :{\mathcal {A}}\times {\mathcal {S}}\rightarrow [0,1]} π ( a , s ) = Pr ( A t = a ∣ S t = s ) {\displaystyle \pi (a,s)=\Pr(A_{t}=a\mid S_{t}=s)} The policy map gives the probability of taking action a {\displaystyle a} when in state s {\displaystyle s} .: 61  There are also deterministic policies π {\displaystyle \pi } for which π ( s ) {\displaystyle \pi (s)} denotes the action that should be played at state s {\displaystyle s} . ==== State-value function ==== The state-value function V π ( s ) {\displaystyle V_{\pi }(s)} is defined as, expected discounted return starting with state s {\displaystyle s} , i.e. S 0 = s {\displaystyle S_{0}=s} , and successively following policy π {\displaystyle \pi } . Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.: 60  V π ( s ) = E ⁡ [ G ∣ S 0 = s ] = E ⁡ [ ∑ t = 0 ∞ γ t R t + 1 ∣ S 0 = s ] , {\displaystyle V_{\pi }(s)=\operatorname {\mathbb {E} } [G\mid S_{0}=s]=\operatorname {\mathbb {E} } \left[\sum _{t=0}^{\infty }\gamma ^{t}R_{t+1}\mid S_{0}=s\right],} where the random variable G {\displaystyle G} denotes the discounted return, and is defined as the sum of future discounted rewards: G = ∑ t = 0 ∞ γ t R t + 1 = R 1 + γ R 2 + γ 2 R 3 + … , {\displaystyle G=\sum _{t=0}^{\infty }\gamma ^{t}R_{t+1}=R_{1}+\gamma R_{2}+\gamma ^{2}R_{3}+\dots ,} where R t + 1 {\displaystyle R_{t+1}} is the reward for transitioning from state S t {\displaystyle S_{t}} to S t + 1 {\displaystyle S_{t+1}} , 0 ≤ γ < 1 {\displaystyle 0\leq \gamma <1} is the discount rate. γ {\displaystyle \gamma } is less than 1, so rewards in the distant future are weighted less than rewards in the immediate future. The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies. A policy is stationary if the action-distribution returned by it depends only on the last state visited (from the observation agent's history). The search can be further restricted to deterministic stationary policies. A deterministic stationary policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality. 
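To make the discounted return and state-value definitions above concrete, the following sketch estimates V_pi(s) by averaging sampled returns under a fixed uniform-random policy in a tiny, made-up two-state environment; the dynamics, reward scheme, discount rate, and episode length are assumptions chosen purely for illustration.

```python
import random

random.seed(0)
GAMMA = 0.9  # discount rate

def rollout(start_state, horizon=100):
    """Follow a fixed uniform-random policy pi and record the reward sequence."""
    rewards, s = [], start_state
    for _ in range(horizon):
        a = random.choice([0, 1])
        if s == 1 and a == 1:
            rewards.append(1.0)                      # the only rewarding transition; the state is kept
        else:
            rewards.append(0.0)
            s = 1 if random.random() < 0.5 else 0    # otherwise the next state is a coin flip
    return rewards

def discounted_return(rewards):
    """G = R_1 + gamma * R_2 + gamma^2 * R_3 + ..."""
    return sum(GAMMA ** t * r for t, r in enumerate(rewards))

def estimate_value(start_state, episodes=2000):
    """Monte Carlo estimate of V_pi(s): the average of sampled discounted returns from s."""
    total = sum(discounted_return(rollout(start_state)) for _ in range(episodes))
    return total / episodes

print("V_pi(0) ~", round(estimate_value(0), 3))
print("V_pi(1) ~", round(estimate_value(1), 3))
```

Sampling returns while following a fixed policy, as done here, is exactly the evaluation step that the approaches below build on.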
=== Brute force === The brute force approach entails two steps: For each possible policy, sample returns while following it Choose the policy with the largest expected discounted return One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy. These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search. === Value function === Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns E ⁡ [ G ] {\displaystyle \operatorname {\mathbb {E} } [G]} for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one). These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: A policy is optimal if it achieves the best-expected discounted return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies. To define optimality in a formal manner, define the state-value of a policy π {\displaystyle \pi } by V π ( s ) = E ⁡ [ G ∣ s , π ] , {\displaystyle V^{\pi }(s)=\operatorname {\mathbb {E} } [G\mid s,\pi ],} where G {\displaystyle G} stands for the discounted return associated with following π {\displaystyle \pi } from the initial state s {\displaystyle s} . Defining V ∗ ( s ) {\displaystyle V^{*}(s)} as the maximum possible state-value of V π ( s ) {\displaystyle V^{\pi }(s)} , where π {\displaystyle \pi } is allowed to change, V ∗ ( s ) = max π V π ( s ) . {\displaystyle V^{*}(s)=\max _{\pi }V^{\pi }(s).} A policy that achieves these optimal state-values in each state is called optimal. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since V ∗ ( s ) = max π E [ G ∣ s , π ] {\displaystyle V^{*}(s)=\max _{\pi }\mathbb {E} [G\mid s,\pi ]} , where s {\displaystyle s} is a state randomly sampled from the distribution μ {\displaystyle \mu } of initial states (so μ ( s ) = Pr ( S 0 = s ) {\displaystyle \mu (s)=\Pr(S_{0}=s)} ). Although state-values suffice to define optimality, it is useful to define action-values. Given a state s {\displaystyle s} , an action a {\displaystyle a} and a policy π {\displaystyle \pi } , the action-value of the pair ( s , a ) {\displaystyle (s,a)} under π {\displaystyle \pi } is defined by Q π ( s , a ) = E ⁡ [ G ∣ s , a , π ] , {\displaystyle Q^{\pi }(s,a)=\operatorname {\mathbb {E} } [G\mid s,a,\pi ],\,} where G {\displaystyle G} now stands for the random discounted return associated with first taking action a {\displaystyle a} in state s {\displaystyle s} and following π {\displaystyle \pi } , thereafter. The theory of Markov decision processes states that if π ∗ {\displaystyle \pi ^{*}} is an optimal policy, we act optimally (take the optimal action) by choosing the action from Q π ∗ ( s , ⋅ ) {\displaystyle Q^{\pi ^{*}}(s,\cdot )} with the highest action-value at each state, s {\displaystyle s} . 
The action-value function of such an optimal policy ( Q π ∗ {\displaystyle Q^{\pi ^{*}}} ) is called the optimal action-value function and is commonly denoted by Q ∗ {\displaystyle Q^{*}} . In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally. Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions Q k {\displaystyle Q_{k}} ( k = 0 , 1 , 2 , … {\displaystyle k=0,1,2,\ldots } ) that converge to Q ∗ {\displaystyle Q^{*}} . Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces. ==== Monte Carlo methods ==== Monte Carlo methods are used to solve reinforcement learning problems by averaging sample returns. Unlike methods that require full knowledge of the environment's dynamics, Monte Carlo methods rely solely on actual or simulated experience—sequences of states, actions, and rewards obtained from interaction with an environment. This makes them applicable in situations where the complete dynamics are unknown. Learning from actual experience does not require prior knowledge of the environment and can still lead to optimal behavior. When using simulated experience, only a model capable of generating sample transitions is required, rather than a full specification of transition probabilities, which is necessary for dynamic programming methods. Monte Carlo methods apply to episodic tasks, where experience is divided into episodes that eventually terminate. Policy and value function updates occur only after the completion of an episode, making these methods incremental on an episode-by-episode basis, though not on a step-by-step (online) basis. The term "Monte Carlo" generally refers to any method involving random sampling; however, in this context, it specifically refers to methods that compute averages from complete returns, rather than partial returns. These methods function similarly to the bandit algorithms, in which returns are averaged for each state-action pair. The key difference is that actions taken in one state affect the returns of subsequent states within the same episode, making the problem non-stationary. To address this non-stationarity, Monte Carlo methods use the framework of general policy iteration (GPI). While dynamic programming computes value functions using full knowledge of the Markov decision process (MDP), Monte Carlo methods learn these functions through sample returns. The value functions and policies interact similarly to dynamic programming to achieve optimality, first addressing the prediction problem and then extending to policy improvement and control, all based on sampled experience. ==== Temporal difference methods ==== The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms. Many actor-critic methods belong to this category. 
The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods that are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue. Another problem specific to TD comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called λ {\displaystyle \lambda } parameter ( 0 ≤ λ ≤ 1 ) {\displaystyle (0\leq \lambda \leq 1)} that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in palliating this issue. ==== Function approximation methods ==== In order to address the fifth issue, function approximation methods are used. Linear function approximation starts with a mapping ϕ {\displaystyle \phi } that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair ( s , a ) {\displaystyle (s,a)} are obtained by linearly combining the components of ϕ ( s , a ) {\displaystyle \phi (s,a)} with some weights θ {\displaystyle \theta } : Q ( s , a ) = ∑ i = 1 d θ i ϕ i ( s , a ) . {\displaystyle Q(s,a)=\sum _{i=1}^{d}\theta _{i}\phi _{i}(s,a).} The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored. Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants. Including Deep Q-learning methods when a neural network is used to represent Q, with various applications in stochastic search problems. The problem with using action-values is that they may need highly precise estimates of the competing action values that can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency. === Direct policy search === An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods. Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector θ {\displaystyle \theta } , let π θ {\displaystyle \pi _{\theta }} denote the policy associated to θ {\displaystyle \theta } . 
Defining the performance function by ρ ( θ ) = ρ π θ {\displaystyle \rho (\theta )=\rho ^{\pi _{\theta }}} under mild conditions this function will be differentiable as a function of the parameter vector θ {\displaystyle \theta } . If the gradient of ρ {\displaystyle \rho } was known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams's REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature). A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search or methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum. Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, actor–critic methods have been proposed and performed well on various problems. Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search). === Model-based algorithms === Finally, all of the above methods can be combined with algorithms that first learn a model of the Markov decision process, the probability of each next state given an action taken from an existing state. For instance, the Dyna algorithm learns a model from experience, and uses that to provide more modelled transitions for a value function, in addition to the real transitions. Such methods can sometimes be extended to use of non-parametric models, such as when the transitions are simply stored and "replayed" to the learning algorithm. Model-based methods can be more computationally intensive than model-free approaches, and their utility can be limited by the extent to which the Markov decision process can be learnt. There are other ways to use models than to update a value function. For instance, in model predictive control the model is used to update the behavior directly. == Theory == Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known. Efficient exploration of Markov decision processes is given in Burnetas and Katehakis (1997). Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations. For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation). 
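As a concrete instance of the value-function and temporal-difference ideas discussed in the preceding sections, the sketch below runs tabular Q-learning with epsilon-greedy exploration on a tiny, made-up chain environment; the environment, reward scheme, and hyperparameters are illustrative assumptions rather than anything prescribed by the theory above.

```python
import random

random.seed(1)

# A tiny, made-up deterministic chain MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 gives reward 1 and ends the episode; every other transition gives 0.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def greedy(q_row):
    best = max(q_row)
    return random.choice([a for a, q in enumerate(q_row) if q == best])  # ties broken uniformly

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1    # step size, discount rate, exploration rate

for _ in range(2000):                     # episodes
    s, done = 0, False
    while not done:
        a = random.randrange(N_ACTIONS) if random.random() < epsilon else greedy(Q[s])
        s2, r, done = step(s, a)
        # Temporal-difference (Q-learning) update toward r + gamma * max_a' Q(s2, a')
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

print([round(max(row), 3) for row in Q])  # values implied by Q rise toward the goal state
```

Because the update bootstraps from max over the next state's action values, this is an off-policy method: the behaviour policy keeps exploring, while the learned values correspond to the greedy policy.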
== Research == Research topics include: actor-critic architecture actor-critic-scenery architecture adaptive methods that work with fewer (or no) parameters under a large number of conditions bug detection in software projects continuous learning combinations with logic-based frameworks exploration in large Markov decision processes entity-based reinforcement learning human feedback interaction between implicit and explicit learning in skill acquisition intrinsic motivation which differentiates information-seeking, curiosity-type behaviours from task-dependent goal-directed behaviours large-scale empirical evaluations large (or continuous) action spaces modular and hierarchical reinforcement learning multiagent/distributed reinforcement learning is a topic of interest. Applications are expanding. occupant-centric control optimization of computing resources partial information (e.g., using predictive state representation) reward function based on maximising novel information sample-based planning (e.g., based on Monte Carlo tree search). securities trading transfer learning TD learning modeling dopamine-based learning in the brain. Dopaminergic projections from the substantia nigra to the basal ganglia function are the prediction error. value-function and policy search methods == Comparison of key algorithms == The following table lists the key algorithms for learning a policy depending on several criteria: The algorithm can be on-policy (it performs policy updates using trajectories sampled via the current policy) or off-policy. The action space may be discrete (e.g. the action space could be "going up", "going left", "going right", "going down", "stay") or continuous (e.g. moving the arm with a given angle). The state space may be discrete (e.g. the agent could be in a cell in a grid) or continuous (e.g. the agent could be located at a given position in the plane). === Associative reinforcement learning === Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment. === Deep reinforcement learning === This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning. === Adversarial deep reinforcement learning === Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, in the most recent studies it has been shown that these proposed solutions are far from providing an accurate representation of current vulnerabilities of deep reinforcement learning policies. === Fuzzy reinforcement learning === By introducing fuzzy inference in reinforcement learning, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF - THEN form of fuzzy rules make this approach suitable for expressing the results in a form close to natural language. 
Extending FRL with Fuzzy Rule Interpolation allows the use of reduced size sparse fuzzy rule-bases to emphasize cardinal rules (most important state-action values). === Inverse reinforcement learning === In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal. One popular IRL paradigm is named maximum entropy inverse reinforcement learning (MaxEnt IRL). MaxEnt IRL estimates the parameters of a linear model of the reward function by maximizing the entropy of the probability distribution of observed trajectories subject to constraints related to matching expected feature counts. Recently it has been shown that MaxEnt IRL is a particular case of a more general framework named random utility inverse reinforcement learning (RU-IRL). RU-IRL is based on random utility theory and Markov decision processes. While prior IRL approaches assume that the apparent random behavior of an observed agent is due to it following a random policy, RU-IRL assumes that the observed agent follows a deterministic policy but randomness in observed behavior is due to the fact that an observer only has partial access to the features the observed agent uses in decision making. The utility function is modeled as a random variable to account for the ignorance of the observer regarding the features the observed agent actually considers in its utility function. === Multi-objective reinforcement learning === Multi-objective reinforcement learning (MORL) is a form of reinforcement learning concerned with conflicting alternatives. It is distinct from multi-objective optimization in that it is concerned with agents acting in environments. === Safe reinforcement learning === Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. An alternative approach is risk-averse reinforcement learning, where instead of the expected return, a risk-measure of the return is optimized, such as the conditional value at risk (CVaR). In addition to mitigating risk, the CVaR objective increases robustness to model uncertainties. However, CVaR optimization in risk-averse RL requires special care, to prevent gradient bias and blindness to success. === Self-reinforcement learning === Self-reinforcement learning (or self-learning), is a learning paradigm which does not use the concept of immediate reward R a ( s , s ′ ) {\displaystyle R_{a}(s,s')} after transition from s {\displaystyle s} to s ′ {\displaystyle s'} with action a {\displaystyle a} . It does not use an external reinforcement, it only uses the agent internal self-reinforcement. The internal self-reinforcement is provided by mechanism of feelings and emotions. In the learning process emotions are backpropagated by a mechanism of secondary reinforcement. The learning equation does not include the immediate reward, it only includes the state evaluation. The self-reinforcement algorithm updates a memory matrix W = | | w ( a , s ) | | {\displaystyle W=||w(a,s)||} such that in each iteration executes the following machine learning routine: In situation s {\displaystyle s} perform action a {\displaystyle a} . Receive a consequence situation s ′ {\displaystyle s'} . 
Compute state evaluation v ( s ′ ) {\displaystyle v(s')} of how good it is to be in the consequence situation s ′ {\displaystyle s'} . Update crossbar memory w ′ ( a , s ) = w ( a , s ) + v ( s ′ ) {\displaystyle w'(a,s)=w(a,s)+v(s')} . Initial conditions of the memory are received as input from the genetic environment. It is a system with only one input (situation) and only one output (action, or behavior). Self-reinforcement (self-learning) was introduced in 1982 along with a neural network capable of self-reinforcement learning, named Crossbar Adaptive Array (CAA). The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence states. The system is driven by the interaction between cognition and emotion. === Reinforcement Learning in Natural Language Processing === In recent years, reinforcement learning has become a significant concept in natural language processing (NLP), where tasks are often sequential decision-making problems rather than static classification. In this setting, an agent takes actions in an environment to maximize the accumulation of rewards. This framework is well suited to many NLP tasks, including dialogue generation, text summarization, and machine translation, where the quality of the output depends on optimizing long-term or human-centered goals rather than the prediction of a single correct label. Early applications of RL in NLP emerged in dialogue systems, where conversation was framed as a series of actions optimized for fluency and coherence. These early attempts, including policy gradient and sequence-level training techniques, laid a foundation for the broader application of reinforcement learning to other areas of NLP. A major breakthrough happened with the introduction of Reinforcement Learning from Human Feedback (RLHF), a method in which human feedback is used to train a reward model that guides the RL agent. Unlike traditional rule-based or supervised systems, RLHF allows models to align their behavior with human judgments on complex and subjective tasks. This technique was initially used in the development of InstructGPT, an effective language model trained to follow human instructions, and later in ChatGPT, which incorporates RLHF for improving output responses and ensuring safety. More recently, researchers have explored the use of offline RL in NLP to improve dialogue systems without the need for live human interaction. These methods optimize for user engagement, coherence, and diversity based on past conversation logs and pre-trained reward models. == Statistical comparison of reinforcement learning algorithms == Efficient comparison of RL algorithms is essential for research, deployment and monitoring of RL systems. To compare different algorithms on a given environment, an agent can be trained for each algorithm. Since the performance is sensitive to implementation details, all algorithms should be implemented as closely as possible to each other. After the training is finished, the agents can be run on a sample of test episodes, and their scores (returns) can be compared. Since episodes are typically assumed to be i.i.d., standard statistical tools can be used for hypothesis testing, such as the t-test and the permutation test. This requires accumulating all the rewards within an episode into a single number, the episodic return. However, this causes a loss of information, as different time-steps are averaged together, possibly with different levels of noise.
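A minimal sketch of such a comparison between two trained agents uses Welch's t-test on their episodic returns; the return samples below are synthetic placeholders rather than the output of any particular algorithm.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(0)

# Synthetic episodic returns for two trained agents evaluated on the same kind of test episodes.
returns_a = [random.gauss(10.0, 2.0) for _ in range(100)]
returns_b = [random.gauss(11.0, 2.5) for _ in range(100)]

def welch_t(x, y):
    """Welch's t statistic for two independent samples with possibly unequal variances."""
    var_x, var_y = stdev(x) ** 2 / len(x), stdev(y) ** 2 / len(y)
    return (mean(x) - mean(y)) / sqrt(var_x + var_y)

print(f"mean return A = {mean(returns_a):.2f}")
print(f"mean return B = {mean(returns_b):.2f}")
print(f"Welch t statistic = {welch_t(returns_a, returns_b):.2f}")
```

If SciPy is available, scipy.stats.ttest_ind(returns_a, returns_b, equal_var=False) computes the same statistic together with a p-value.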
Whenever the noise level varies across the episode, the statistical power can be improved significantly by weighting the rewards according to their estimated noise. == Challenges and Limitations == Despite significant advancements, reinforcement learning (RL) continues to face several challenges and limitations that hinder its widespread application in real-world scenarios. === Sample Inefficiency === RL algorithms often require a large number of interactions with the environment to learn effective policies, leading to high computational costs and long training times. For instance, OpenAI's Dota-playing bot utilized thousands of years of simulated gameplay to achieve human-level performance. Techniques like experience replay and curriculum learning have been proposed to mitigate sample inefficiency, but these techniques add complexity and are not always sufficient for real-world applications. === Stability and Convergence Issues === Training RL models, particularly deep neural network-based models, can be unstable and prone to divergence. A small change in the policy or environment can lead to extreme fluctuations in performance, making it difficult to achieve consistent results. This instability is exacerbated for continuous or high-dimensional action spaces, where the learning step becomes more complex and less predictable. === Generalization and Transferability === RL agents trained in specific environments often struggle to generalize their learned policies to new, unseen scenarios. This is a major obstacle to applying RL in dynamic real-world environments where adaptability is crucial. The challenge is to develop algorithms that can transfer knowledge across tasks and environments without extensive retraining. === Bias and Reward Function Issues === Designing appropriate reward functions is critical in RL because poorly designed reward functions can lead to unintended behaviors. In addition, RL systems trained on biased data may perpetuate existing biases and lead to discriminatory or unfair outcomes. Both of these issues require careful consideration of reward structures and data sources to ensure fairness and desired behaviors. == See also == == References == == Further reading == Annaswamy, Anuradha M. (3 May 2023). "Adaptive Control and Intersections with Reinforcement Learning". Annual Review of Control, Robotics, and Autonomous Systems. 6 (1): 65–93. doi:10.1146/annurev-control-062922-090153. ISSN 2573-5144. S2CID 255702873. Auer, Peter; Jaksch, Thomas; Ortner, Ronald (2010). "Near-optimal regret bounds for reinforcement learning". Journal of Machine Learning Research. 11: 1563–1600. Bertsekas, Dimitri P. (2023) [2019]. Reinforcement Learning and Optimal Control (1st ed.). Athena Scientific. ISBN 978-1-886-52939-7. Busoniu, Lucian; Babuska, Robert; De Schutter, Bart; Ernst, Damien (2010). Reinforcement Learning and Dynamic Programming using Function Approximators. Taylor & Francis CRC Press. ISBN 978-1-4398-2108-4. François-Lavet, Vincent; Henderson, Peter; Islam, Riashat; Bellemare, Marc G.; Pineau, Joelle (2018). "An Introduction to Deep Reinforcement Learning". Foundations and Trends in Machine Learning. 11 (3–4): 219–354. arXiv:1811.12560. Bibcode:2018arXiv181112560F. doi:10.1561/2200000071. S2CID 54434537. Li, Shengbo Eben (2023). Reinforcement Learning for Sequential Decision and Optimal Control (1st ed.). Springer Verlag, Singapore. doi:10.1007/978-981-19-7784-8. ISBN 978-9-811-97783-1.
Powell, Warren (2011). Approximate dynamic programming: solving the curses of dimensionality. Wiley-Interscience. Archived from the original on 2016-07-31. Retrieved 2010-09-08. Sutton, Richard S. (1988). "Learning to predict by the method of temporal differences". Machine Learning. 3: 9–44. doi:10.1007/BF00115009. Sutton, Richard S.; Barto, Andrew G. (2018) [1998]. Reinforcement Learning: An Introduction (2nd ed.). MIT Press. ISBN 978-0-262-03924-6. Szita, Istvan; Szepesvari, Csaba (2010). "Model-based Reinforcement Learning with Nearly Tight Exploration Complexity Bounds" (PDF). ICML 2010. Omnipress. pp. 1031–1038. Archived from the original (PDF) on 2010-07-14. == External links == Dissecting Reinforcement Learning Series of blog post on reinforcement learning with Python code A (Long) Peek into Reinforcement Learning
Wikipedia/Reward_function
In mathematics, the support (sometimes topological support or spectrum) of a measure μ {\displaystyle \mu } on a measurable topological space ( X , Borel ⁡ ( X ) ) {\displaystyle (X,\operatorname {Borel} (X))} is a precise notion of where in the space X {\displaystyle X} the measure "lives". It is defined to be the largest (closed) subset of X {\displaystyle X} for which every open neighbourhood of every point of the set has positive measure. == Motivation == A (non-negative) measure μ {\displaystyle \mu } on a measurable space ( X , Σ ) {\displaystyle (X,\Sigma )} is really a function μ : Σ → [ 0 , + ∞ ] . {\displaystyle \mu :\Sigma \to [0,+\infty ].} Therefore, in terms of the usual definition of support, the support of μ {\displaystyle \mu } is a subset of the σ-algebra Σ : {\displaystyle \Sigma :} supp ⁡ ( μ ) := { A ∈ Σ | μ ( A ) ≠ 0 } ¯ , {\displaystyle \operatorname {supp} (\mu ):={\overline {\{A\in \Sigma \,\vert \,\mu (A)\neq 0\}}},} where the overbar denotes set closure. However, this definition is somewhat unsatisfactory: we use the notion of closure, but we do not even have a topology on Σ . {\displaystyle \Sigma .} What we really want to know is where in the space X {\displaystyle X} the measure μ {\displaystyle \mu } is non-zero. Consider two examples: Lebesgue measure λ {\displaystyle \lambda } on the real line R . {\displaystyle \mathbb {R} .} It seems clear that λ {\displaystyle \lambda } "lives on" the whole of the real line. A Dirac measure δ p {\displaystyle \delta _{p}} at some point p ∈ R . {\displaystyle p\in \mathbb {R} .} Again, intuition suggests that the measure δ p {\displaystyle \delta _{p}} "lives at" the point p , {\displaystyle p,} and nowhere else. In light of these two examples, we can reject the following candidate definitions in favour of the one in the next section: We could remove the points where μ {\displaystyle \mu } is zero, and take the support to be the remainder X ∖ { x ∈ X ∣ μ ( { x } ) = 0 } . {\displaystyle X\setminus \{x\in X\mid \mu (\{x\})=0\}.} This might work for the Dirac measure δ p , {\displaystyle \delta _{p},} but it would definitely not work for λ : {\displaystyle \lambda :} since the Lebesgue measure of any singleton is zero, this definition would give λ {\displaystyle \lambda } empty support. By comparison with the notion of strict positivity of measures, we could take the support to be the set of all points with a neighbourhood of positive measure: { x ∈ X ∣ ∃ N x open such that ( x ∈ N x and μ ( N x ) > 0 ) } {\displaystyle \{x\in X\mid \exists N_{x}{\text{ open}}{\text{ such that }}(x\in N_{x}{\text{ and }}\mu (N_{x})>0)\}} (or the closure of this). It is also too simplistic: by taking N x = X {\displaystyle N_{x}=X} for all points x ∈ X , {\displaystyle x\in X,} this would make the support of every measure except the zero measure the whole of X . {\displaystyle X.} However, the idea of "local strict positivity" is not too far from a workable definition. == Definition == Let ( X , T ) {\displaystyle (X,T)} be a topological space; let B ( T ) {\displaystyle B(T)} denote the Borel σ-algebra on X , {\displaystyle X,} i.e. the smallest sigma algebra on X {\displaystyle X} that contains all open sets U ∈ T . 
{\displaystyle U\in T.} Let μ {\displaystyle \mu } be a measure on ( X , B ( T ) ) {\displaystyle (X,B(T))} Then the support (or spectrum) of μ {\displaystyle \mu } is defined as the set of all points x {\displaystyle x} in X {\displaystyle X} for which every open neighbourhood N x {\displaystyle N_{x}} of x {\displaystyle x} has positive measure: supp ⁡ ( μ ) := { x ∈ X ∣ ∀ N x ∈ T : ( x ∈ N x ⇒ μ ( N x ) > 0 ) } . {\displaystyle \operatorname {supp} (\mu ):=\{x\in X\mid \forall N_{x}\in T\colon (x\in N_{x}\Rightarrow \mu (N_{x})>0)\}.} Some authors prefer to take the closure of the above set. However, this is not necessary: see "Properties" below. An equivalent definition of support is as the largest C ∈ B ( T ) {\displaystyle C\in B(T)} (with respect to inclusion) such that every open set which has non-empty intersection with C {\displaystyle C} has positive measure, i.e. the largest C {\displaystyle C} such that: ( ∀ U ∈ T ) ( U ∩ C ≠ ∅ ⟹ μ ( U ∩ C ) > 0 ) . {\displaystyle (\forall U\in T)(U\cap C\neq \varnothing \implies \mu (U\cap C)>0).} === Signed and complex measures === This definition can be extended to signed and complex measures. Suppose that μ : Σ → [ − ∞ , + ∞ ] {\displaystyle \mu :\Sigma \to [-\infty ,+\infty ]} is a signed measure. Use the Hahn decomposition theorem to write μ = μ + − μ − , {\displaystyle \mu =\mu ^{+}-\mu ^{-},} where μ ± {\displaystyle \mu ^{\pm }} are both non-negative measures. Then the support of μ {\displaystyle \mu } is defined to be supp ⁡ ( μ ) := supp ⁡ ( μ + ) ∪ supp ⁡ ( μ − ) . {\displaystyle \operatorname {supp} (\mu ):=\operatorname {supp} (\mu ^{+})\cup \operatorname {supp} (\mu ^{-}).} Similarly, if μ : Σ → C {\displaystyle \mu :\Sigma \to \mathbb {C} } is a complex measure, the support of μ {\displaystyle \mu } is defined to be the union of the supports of its real and imaginary parts. == Properties == supp ⁡ ( μ 1 + μ 2 ) = supp ⁡ ( μ 1 ) ∪ supp ⁡ ( μ 2 ) {\displaystyle \operatorname {supp} (\mu _{1}+\mu _{2})=\operatorname {supp} (\mu _{1})\cup \operatorname {supp} (\mu _{2})} holds. A measure μ {\displaystyle \mu } on X {\displaystyle X} is strictly positive if and only if it has support supp ⁡ ( μ ) = X . {\displaystyle \operatorname {supp} (\mu )=X.} If μ {\displaystyle \mu } is strictly positive and x ∈ X {\displaystyle x\in X} is arbitrary, then any open neighbourhood of x , {\displaystyle x,} since it is an open set, has positive measure; hence, x ∈ supp ⁡ ( μ ) , {\displaystyle x\in \operatorname {supp} (\mu ),} so supp ⁡ ( μ ) = X . {\displaystyle \operatorname {supp} (\mu )=X.} Conversely, if supp ⁡ ( μ ) = X , {\displaystyle \operatorname {supp} (\mu )=X,} then every non-empty open set (being an open neighbourhood of some point in its interior, which is also a point of the support) has positive measure; hence, μ {\displaystyle \mu } is strictly positive. The support of a measure is closed in X , {\displaystyle X,} as its complement is the union of the open sets of measure 0. {\displaystyle 0.} In general the support of a nonzero measure may be empty: see the examples below. However, if X {\displaystyle X} is a Hausdorff topological space and μ {\displaystyle \mu } is a Radon measure, a Borel set A {\displaystyle A} outside the support has measure zero: A ⊆ X ∖ supp ⁡ ( μ ) ⟹ μ ( A ) = 0. 
{\displaystyle A\subseteq X\setminus \operatorname {supp} (\mu )\implies \mu (A)=0.} The converse is true if A {\displaystyle A} is open, but it is not true in general: it fails if there exists a point x ∈ supp ⁡ ( μ ) {\displaystyle x\in \operatorname {supp} (\mu )} such that μ ( { x } ) = 0 {\displaystyle \mu (\{x\})=0} (e.g. Lebesgue measure). Thus, one does not need to "integrate outside the support": for any measurable function f : X → R {\displaystyle f:X\to \mathbb {R} } or C , {\displaystyle \mathbb {C} ,} ∫ X f ( x ) d μ ( x ) = ∫ supp ⁡ ( μ ) f ( x ) d μ ( x ) . {\displaystyle \int _{X}f(x)\,\mathrm {d} \mu (x)=\int _{\operatorname {supp} (\mu )}f(x)\,\mathrm {d} \mu (x).} The concept of support of a measure and that of spectrum of a self-adjoint linear operator on a Hilbert space are closely related. Indeed, if μ {\displaystyle \mu } is a regular Borel measure on the line R , {\displaystyle \mathbb {R} ,} then the multiplication operator ( A f ) ( x ) = x f ( x ) {\displaystyle (Af)(x)=xf(x)} is self-adjoint on its natural domain D ( A ) = { f ∈ L 2 ( R , d μ ) ∣ x f ( x ) ∈ L 2 ( R , d μ ) } {\displaystyle D(A)=\{f\in L^{2}(\mathbb {R} ,d\mu )\mid xf(x)\in L^{2}(\mathbb {R} ,d\mu )\}} and its spectrum coincides with the essential range of the identity function x ↦ x , {\displaystyle x\mapsto x,} which is precisely the support of μ . {\displaystyle \mu .} == Examples == === Lebesgue measure === In the case of Lebesgue measure λ {\displaystyle \lambda } on the real line R , {\displaystyle \mathbb {R} ,} consider an arbitrary point x ∈ R . {\displaystyle x\in \mathbb {R} .} Then any open neighbourhood N x {\displaystyle N_{x}} of x {\displaystyle x} must contain some open interval ( x − ϵ , x + ϵ ) {\displaystyle (x-\epsilon ,x+\epsilon )} for some ϵ > 0. {\displaystyle \epsilon >0.} This interval has Lebesgue measure 2 ϵ > 0 , {\displaystyle 2\epsilon >0,} so λ ( N x ) ≥ 2 ϵ > 0. {\displaystyle \lambda (N_{x})\geq 2\epsilon >0.} Since x ∈ R {\displaystyle x\in \mathbb {R} } was arbitrary, supp ⁡ ( λ ) = R . {\displaystyle \operatorname {supp} (\lambda )=\mathbb {R} .} === Dirac measure === In the case of Dirac measure δ p , {\displaystyle \delta _{p},} let x ∈ R {\displaystyle x\in \mathbb {R} } and consider two cases: if x = p , {\displaystyle x=p,} then every open neighbourhood N x {\displaystyle N_{x}} of x {\displaystyle x} contains p , {\displaystyle p,} so δ p ( N x ) = 1 > 0. {\displaystyle \delta _{p}(N_{x})=1>0.} on the other hand, if x ≠ p , {\displaystyle x\neq p,} then there exists a sufficiently small open ball B {\displaystyle B} around x {\displaystyle x} that does not contain p , {\displaystyle p,} so δ p ( B ) = 0. {\displaystyle \delta _{p}(B)=0.} We conclude that supp ⁡ ( δ p ) {\displaystyle \operatorname {supp} (\delta _{p})} is the closure of the singleton set { p } , {\displaystyle \{p\},} which is { p } {\displaystyle \{p\}} itself. In fact, a measure μ {\displaystyle \mu } on the real line is a Dirac measure δ p {\displaystyle \delta _{p}} for some point p {\displaystyle p} if and only if the support of μ {\displaystyle \mu } is the singleton set { p } . {\displaystyle \{p\}.} Consequently, Dirac measure on the real line is the unique measure with zero variance (provided that the measure has variance at all). === A uniform distribution === Consider the measure μ {\displaystyle \mu } on the real line R {\displaystyle \mathbb {R} } defined by μ ( A ) := λ ( A ∩ ( 0 , 1 ) ) {\displaystyle \mu (A):=\lambda (A\cap (0,1))} i.e. 
a uniform measure on the open interval ( 0 , 1 ) . {\displaystyle (0,1).} A similar argument to the Dirac measure example shows that supp ⁡ ( μ ) = [ 0 , 1 ] . {\displaystyle \operatorname {supp} (\mu )=[0,1].} Note that the boundary points 0 and 1 lie in the support: any open set containing 0 (or 1) contains an open interval about 0 (or 1), which must intersect ( 0 , 1 ) , {\displaystyle (0,1),} and so must have positive μ {\displaystyle \mu } -measure. === A nontrivial measure whose support is empty === The space of all countable ordinals with the topology generated by "open intervals" is a locally compact Hausdorff space. The measure ("Dieudonné measure") that assigns measure 1 to Borel sets containing an unbounded closed subset and assigns 0 to other Borel sets is a Borel probability measure whose support is empty. === A nontrivial measure whose support has measure zero === On a compact Hausdorff space the support of a non-zero measure is always non-empty, but may have measure 0. {\displaystyle 0.} An example of this is given by adding the first uncountable ordinal Ω {\displaystyle \Omega } to the previous example: the support of the measure is the single point Ω , {\displaystyle \Omega ,} which has measure 0. {\displaystyle 0.} == References == Ambrosio, L., Gigli, N. & Savaré, G. (2005). Gradient Flows in Metric Spaces and in the Space of Probability Measures. ETH Zürich, Birkhäuser Verlag, Basel. ISBN 3-7643-2428-7. Bogachev, V. I. (2007). Measure theory. Vol. 2. Springer Berlin Heidelberg. ISBN 978-3-540-34514-5. Parthasarathy, K. R. (2005). Probability measures on metric spaces. AMS Chelsea Publishing, Providence, RI. p. xii+276. ISBN 0-8218-3889-X. MR2169627 (See chapter 2, section 2) Teschl, Gerald (2009). Mathematical methods in Quantum Mechanics with applications to Schrödinger Operators. AMS. (See chapter 3, section 2)
Wikipedia/Support_(measure_theory)
In statistics, the mean integrated squared error (MISE) is used in density estimation. The MISE of an estimate of an unknown probability density is given by E ⁡ ‖ f n − f ‖ 2 2 = E ⁡ ∫ ( f n ( x ) − f ( x ) ) 2 d x {\displaystyle \operatorname {E} \|f_{n}-f\|_{2}^{2}=\operatorname {E} \int (f_{n}(x)-f(x))^{2}\,dx} where ƒ is the unknown density and ƒn is its estimate based on a sample of n independent and identically distributed random variables. Here, E denotes the expected value with respect to that sample. The MISE is also known as the L2 risk function. == See also == Minimum distance estimation Mean squared error == References ==
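As a rough numerical illustration of the definition above, the following Python sketch approximates the MISE of a kernel density estimator by Monte Carlo, assuming a standard normal target density and SciPy's Gaussian KDE; the sample sizes, grid and number of replications are arbitrary choices made for the example, not part of the definition.

import numpy as np
from scipy.stats import norm, gaussian_kde

def integrated_squared_error(f_hat, f, grid):
    # integral of (f_n(x) - f(x))^2 dx, approximated by a Riemann sum on a fixed grid
    step = grid[1] - grid[0]
    return np.sum((f_hat(grid) - f(grid)) ** 2) * step

def approximate_mise(n, replications=200, seed=0):
    # average the integrated squared error over many independent samples of size n,
    # approximating the expectation E ||f_n - f||_2^2
    rng = np.random.default_rng(seed)
    grid = np.linspace(-5.0, 5.0, 1001)
    errors = []
    for _ in range(replications):
        sample = rng.standard_normal(n)      # i.i.d. draws from the true density f
        f_n = gaussian_kde(sample)           # density estimate based on that sample
        errors.append(integrated_squared_error(f_n, norm.pdf, grid))
    return float(np.mean(errors))

print(approximate_mise(50), approximate_mise(500))   # the MISE shrinks as n grows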
Wikipedia/Mean_integrated_squared_error
A fitness function is a particular type of objective or cost function that is used to summarize, as a single figure of merit, how close a given candidate solution is to achieving the set aims. It is an important component of evolutionary algorithms (EA), such as genetic programming, evolution strategies or genetic algorithms. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. For this purpose, many candidate solutions are generated, which are evaluated using a fitness function in order to guide the evolutionary development towards the desired goal. Similar quality functions are also used in other metaheuristics, such as ant colony optimization or particle swarm optimization. In the field of EAs, each candidate solution, also called an individual, is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing or simulation, the idea is to delete the n worst individuals and to breed n new ones from the best solutions. Each individual must therefore be assigned a quality number indicating how close it has come to the overall specification, and this is generated by applying the fitness function to the test or simulation results obtained from that candidate solution. Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases. Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome. In the following, it is assumed that the fitness is determined based on an evaluation that remains unchanged during an optimization run. A fitness function does not necessarily have to be able to calculate an absolute value, as it is sometimes sufficient to compare candidates in order to select the better one. A relative indication of fitness (candidate a is better than b) is sufficient in some cases, such as tournament selection or Pareto optimization. == Requirements of evaluation and fitness function == The quality of the evaluation and calculation of a fitness function is fundamental to the success of an EA optimisation. It implements Darwin's principle of "survival of the fittest". Without fitness-based selection mechanisms for mate selection and offspring acceptance, EA search would be blind and hardly distinguishable from the Monte Carlo method. When setting up a fitness function, one must always be aware that it is about more than just describing the desired target state. Rather, the evolutionary search on the way to the optimum should also be supported as much as possible (see also the section on auxiliary objectives), if and insofar as this is not already done by the fitness function alone. If the fitness function is designed badly, the algorithm will either converge on an inappropriate solution or will have difficulty converging at all. Definition of the fitness function is not straightforward in many cases and often is performed iteratively if the fittest solutions produced by an EA are not what is desired. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents, which are normally humans.
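As a rough, self-contained illustration of the role just described, the sketch below (Python) evaluates a population of bit-string chromosomes with a simple fitness function and, after each round, deletes the worst individuals and breeds new ones from the best. The counting-ones objective, population size and mutation rate are illustrative assumptions rather than anything prescribed by the article.

import random

def fitness(chromosome):
    # single figure of merit for one candidate solution; here simply the number
    # of 1-bits, standing in for "how close the candidate comes to the set aims"
    return sum(chromosome)

def mutate(chromosome, rate=0.02):
    # flip each bit with a small probability
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

def next_generation(population, n_replace=10, n_parents=10):
    # rank candidates by fitness, delete the n worst, and breed n new ones
    # from (mutated copies of) the best solutions
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:-n_replace]
    offspring = [mutate(random.choice(ranked[:n_parents])) for _ in range(n_replace)]
    return survivors + offspring

population = [[random.randint(0, 1) for _ in range(64)] for _ in range(50)]
for _ in range(100):
    population = next_generation(population)
print(max(fitness(c) for c in population))   # best fitness found after 100 generations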
== Computational efficiency == The fitness function should not only closely align with the designer's goal, but also be computationally efficient. Execution speed is crucial, as a typical evolutionary algorithm must be iterated many times in order to produce a usable result for a non-trivial problem. Fitness approximation may be appropriate, especially in the following cases: the fitness computation time of a single solution is extremely high; a precise model for fitness computation is missing; or the fitness function is uncertain or noisy. Alternatively, or in addition to fitness approximation, the fitness calculations can be distributed to a parallel computer in order to reduce the execution times. Depending on the population model of the EA used, both the EA itself and the fitness calculations of all offspring of one generation can be executed in parallel. == Multi-objective optimization == Practical applications usually aim at optimizing multiple and at least partially conflicting objectives. Two fundamentally different approaches are often used for this purpose: Pareto optimization and optimization based on fitness calculated using the weighted sum. === Weighted sum and penalty functions === When optimizing with the weighted sum, the single values of the O {\displaystyle O} objectives are first normalized so that they can be compared. This can be done with the help of costs or by specifying target values and determining the current value as the degree of fulfillment. Costs or degrees of fulfillment can then be compared with each other and, if required, can also be mapped to a uniform fitness scale. Without loss of generality, fitness is assumed to represent a value to be maximized. Each objective o i {\displaystyle o_{i}} is assigned a weight w i {\displaystyle w_{i}} in the form of a percentage value so that the overall raw fitness f r a w {\displaystyle f_{raw}} can be calculated as a weighted sum: f r a w = ∑ i = 1 O o i ⋅ w i w i t h ∑ i = 1 O w i = 1 {\displaystyle f_{raw}=\sum _{i=1}^{O}{o_{i}\cdot w_{i}}\quad {\mathsf {with}}\quad \sum _{i=1}^{O}{w_{i}}=1} A violation of R {\displaystyle R} restrictions r j {\displaystyle r_{j}} can be included in the fitness determined in this way in the form of penalty functions. For this purpose, a function p f j ( r j ) {\displaystyle pf_{j}(r_{j})} can be defined for each restriction which returns a value between 0 {\displaystyle 0} and 1 {\displaystyle 1} depending on the degree of violation, with the result being 1 {\displaystyle 1} if there is no violation. The previously determined raw fitness is multiplied by the penalty function(s) and the result is then the final fitness f f i n a l {\displaystyle f_{final}} : f f i n a l = f r a w ⋅ ∏ j = 1 R p f j ( r j ) = ∑ i = 1 O ( o i ⋅ w i ) ⋅ ∏ j = 1 R p f j ( r j ) {\displaystyle f_{final}=f_{raw}\cdot \prod _{j=1}^{R}{pf_{j}(r_{j})}=\sum _{i=1}^{O}{(o_{i}\cdot w_{i})}\cdot \prod _{j=1}^{R}{pf_{j}(r_{j})}} This approach is simple and has the advantage of being able to combine any number of objectives and restrictions. The disadvantage is that different objectives can compensate for each other and that the weights have to be defined before the optimization. This means that the compromise lines must be defined before optimization, which is why optimization with the weighted sum is also referred to as the a priori method. In addition, certain solutions may not be obtained; see the section on the comparison of both types of optimization.
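The weighted-sum-with-penalties calculation above can be written down directly; the short sketch below assumes the objective values have already been normalized to a common, to-be-maximized scale and that each penalty function has already been evaluated to a value in [0, 1] (1 meaning no violation). The example numbers are purely illustrative.

import math

def weighted_sum_fitness(objectives, weights, penalties=()):
    # objectives: normalized objective values o_i (to be maximized)
    # weights:    weights w_i, assumed to sum to 1
    # penalties:  penalty values pf_j(r_j) in [0, 1], where 1 means "no violation"
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    f_raw = sum(o * w for o, w in zip(objectives, weights))
    return f_raw * math.prod(penalties)   # math.prod(()) == 1, i.e. no penalty applied

# two objectives weighted 70/30, one restriction violated to a moderate degree
print(weighted_sum_fitness([0.8, 0.5], [0.7, 0.3], penalties=(0.6,)))   # 0.426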
=== Pareto optimization === A solution is called Pareto-optimal if the improvement of one objective is only possible with a deterioration of at least one other objective. The set of all Pareto-optimal solutions, also called Pareto set, represents the set of all optimal compromises between the objectives. The figure below on the right shows an example of the Pareto set of two objectives f 1 {\displaystyle f_{1}} and f 2 {\displaystyle f_{2}} to be maximized. The elements of the set form the Pareto front (green line). From this set, a human decision maker must subsequently select the desired compromise solution. Constraints are included in Pareto optimization in that solutions without constraint violations are per se better than those with violations. If two solutions to be compared each have constraint violations, the respective extent of the violations decides. It was recognized early on that EAs with their simultaneously considered solution set are well suited to finding solutions in one run that cover the Pareto front sufficiently well. They are therefore well suited as a-posteriori methods for multi-objective optimization, in which the final decision is made by a human decision maker after optimization and determination of the Pareto front. Besides the SPEA2, the NSGA-II and NSGA-III have established themselves as standard methods. The advantage of Pareto optimization is that, in contrast to the weighted sum, it provides all alternatives that are equivalent in terms of the objectives as an overall solution. The disadvantage is that a visualization of the alternatives becomes problematic or even impossible from four objectives on. Furthermore, the effort increases exponentially with the number of objectives. If there are more than three or four objectives, some have to be combined using the weighted sum or other aggregation methods. === Comparison of both types of assessment === With the help of the weighted sum, the total Pareto front can be obtained by a suitable choice of weights, provided that it is convex. This is illustrated by the adjacent picture on the left. The point P {\displaystyle {\mathsf {P}}} on the green Pareto front is reached by the weights w 1 {\displaystyle w_{1}} and w 2 {\displaystyle w_{2}} , provided that the EA converges to the optimum. The direction with the largest fitness gain in the solution set Z {\displaystyle Z} is shown by the drawn arrows. In case of a non-convex front, however, non-convex front sections are not reachable by the weighted sum. In the adjacent image on the right, this is the section between points A {\displaystyle {\mathsf {A}}} and B {\displaystyle {\mathsf {B}}} . This can be remedied to a limited extent by using an extension of the weighted sum, the cascaded weighted sum. Comparing both assessment approaches, the use of Pareto optimization is certainly advantageous when little is known about the possible solutions of a task and when the number of optimization objectives can be narrowed down to three, at most four. However, in the case of repeated optimization of variations of one and the same task, the desired lines of compromise are usually known and the effort to determine the entire Pareto front is no longer justified. This is also true when no human decision is desired or possible after optimization, such as in automated decision processes. 
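For a finite set of already-evaluated candidates, the Pareto front described above can be extracted with a naive dominance test, as in the following sketch; it assumes all objectives are to be maximized and makes no attempt at the efficiency of SPEA2 or NSGA-II, so it is only meant to make the dominance definition concrete.

def dominates(a, b):
    # a dominates b if a is at least as good in every objective (all maximized)
    # and strictly better in at least one
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    # keep every candidate that is dominated by no other candidate (O(n^2) filter)
    return [p for p in points if not any(dominates(q, p) for q in points)]

candidates = [(1.0, 5.0), (2.0, 4.0), (3.0, 1.0), (2.0, 2.0), (0.5, 4.5)]
print(pareto_front(candidates))   # [(1.0, 5.0), (2.0, 4.0), (3.0, 1.0)]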
== Auxiliary objectives == In addition to the primary objectives resulting from the task itself, it may be necessary to include auxiliary objectives in the assessment to support the achievement of one or more primary objectives. An example of a scheduling task is used for illustration purposes. The optimization goals include not only a general fast processing of all orders but also the compliance with a latest completion time. The latter is especially necessary for the scheduling of rush orders. The second goal is not achieved by the exemplary initial schedule, as shown in the adjacent figure. A following mutation does not change this, but schedules the work step d earlier, which is a necessary intermediate step for an earlier start of the last work step e of the order. As long as only the latest completion time is evaluated, however, the fitness of the mutated schedule remains unchanged, even though it represents a relevant step towards the objective of a timely completion of the order. This can be remedied, for example, by an additional evaluation of the delay of work steps. The new objective is an auxiliary one, since it was introduced in addition to the actual optimization objectives to support their achievement. A more detailed description of this approach and another example can be found in. == See also == Evolutionary computation Inferential programming Test functions for optimization Loss function == External links == A Nice Introduction to Adaptive Fuzzy Fitness Granulation (AFFG) (PDF), A promising approach to accelerate the convergence rate of EAs. The cyber shack of Adaptive Fuzzy Fitness Granulation (AFFG) That is designed to accelerate the convergence rate of EAs. Fitness functions in evolutionary robotics: A survey and analysis (AFFG) (PDF), A review of fitness functions used in evolutionary robotics. Ford, Neal; Richards, Mark, Sadalage, Pramod; Dehghani, Zhamak. (2021) Software Architecture: The Hard Parts O'Reilly Media, Inc. ISBN 9781492086895. == References ==
Wikipedia/Fitness_function
In economics, profit maximization is the short run or long run process by which a firm may determine the price, input and output levels that will lead to the highest possible total profit (or just profit in short). In neoclassical economics, which is currently the mainstream approach to microeconomics, the firm is assumed to be a "rational agent" (whether operating in a perfectly competitive market or otherwise) which wants to maximize its total profit, which is the difference between its total revenue and its total cost. Measuring the total cost and total revenue is often impractical, as the firms do not have the necessary reliable information to determine costs at all levels of production. Instead, they take more practical approach by examining how small changes in production influence revenues and costs. When a firm produces an extra unit of product, the additional revenue gained from selling it is called the marginal revenue ( MR {\displaystyle {\text{MR}}} ), and the additional cost to produce that unit is called the marginal cost ( MC {\displaystyle {\text{MC}}} ). When the level of output is such that the marginal revenue is equal to the marginal cost ( MR = MC {\displaystyle {\text{MR}}={\text{MC}}} ), then the firm's total profit is said to be maximized. If the marginal revenue is greater than the marginal cost ( MR > MC {\displaystyle {\text{MR}}>{\text{MC}}} ), then its total profit is not maximized, because the firm can produce additional units to earn additional profit. In other words, in this case, it is in the "rational" interest of the firm to increase its output level until its total profit is maximized. On the other hand, if the marginal revenue is less than the marginal cost ( MR < MC {\displaystyle {\text{MR}}<{\text{MC}}} ), then too its total profit is not maximized, because producing one unit less will reduce total cost more than total revenue gained, thus giving the firm more total profit. In this case, a "rational" firm has an incentive to reduce its output level until its total profit is maximized. There are several perspectives one can take on profit maximization. First, since profit equals revenue minus cost, one can plot graphically each of the variables revenue and cost as functions of the level of output and find the output level that maximizes the difference (or this can be done with a table of values instead of a graph). Second, if specific functional forms are known for revenue and cost in terms of output, one can use calculus to maximize profit with respect to the output level. Third, since the first order condition for the optimization equates marginal revenue and marginal cost, if marginal revenue ( MR {\displaystyle {\text{MR}}} ) and marginal cost ( MC {\displaystyle {\text{MC}}} ) functions in terms of output are directly available one can equate these, using either equations or a graph. Fourth, rather than a function giving the cost of producing each potential output level, the firm may have input cost functions giving the cost of acquiring any amount of each input, along with a production function showing how much output results from using any combination of input quantities. In this case one can use calculus to maximize profit with respect to input usage levels, subject to the input cost functions and the production function. 
The first order condition for each input equates the marginal revenue product of the input (the increment to revenue from selling the product caused by an increment to the amount of the input used) to the marginal cost of the input. For a firm in a perfectly competitive market for its output, the revenue function will simply equal the market price times the quantity produced and sold. A monopolist, by contrast, chooses its level of output simultaneously with its selling price; its revenue function takes into account the fact that higher levels of output require a lower price in order to be sold, so to earn the most profit it sets a higher price and a lower quantity than would prevail in a competitive market. An analogous feature holds for the input markets: in a perfectly competitive input market the firm's cost of the input is simply the amount purchased for use in production times the market-determined unit input cost, whereas a monopsonist's input price per unit is higher for higher amounts of the input purchased. The principal difference between short run and long run profit maximization is that in the long run the quantities of all inputs, including physical capital, are choice variables, while in the short run the amount of capital is predetermined by past investment decisions. In either case, there are inputs of labor and raw materials. == Basic definitions == Any costs incurred by a firm may be classified into two groups: fixed costs and variable costs. Fixed costs, which occur only in the short run, are incurred by the business at any level of output, including zero output. These may include equipment maintenance, rent, wages of employees whose numbers cannot be increased or decreased in the short run, and general upkeep. Variable costs change with the level of output, increasing as more product is generated. Materials consumed during production often have the largest impact on this category, which also includes the wages of employees who can be hired and laid off in the short run span of time under consideration. Fixed cost and variable cost, combined, equal total cost. Revenue is the amount of money that a company receives from its normal business activities, usually from the sale of goods and services (as opposed to monies from security sales such as equity shares or debt issuances). The "five ways" formula is to increase leads, conversion rates, the average dollar sale, the average number of sales, and average product profit. Profits can reportedly be increased by up to 1,000 percent in this way; this matters for sole traders and small businesses as much as for big businesses, but in every case profit maximization depends on the stage of the business, and greater returns make room for profit sharing and thus higher wages and motivation. Marginal cost and marginal revenue, depending on whether the calculus approach is taken or not, are defined as either the change in cost or revenue as each additional unit is produced or the derivative of cost or revenue with respect to the quantity of output. For instance, taking the first definition, if it costs a firm $400 to produce 5 units and $480 to produce 6, the marginal cost of the sixth unit is 80 dollars. Similarly, the marginal revenue from the sixth unit is the revenue from the production of 6 units minus the revenue from the production of 5 units.
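Taking the first (non-calculus) definition above, marginal cost and marginal revenue can be read off tabulated totals as first differences. The sketch below reproduces the $400/$480 example for the sixth unit and then applies the MR-versus-MC comparison from the introduction; every other number in the tables is an invented, illustrative figure.

# total cost and total revenue at output levels 0, 1, 2, ...; only the
# $400-for-5-units and $480-for-6-units cost figures come from the text
total_cost    = [100, 180, 245, 305, 355, 400, 480, 710]
total_revenue = [  0, 120, 240, 360, 480, 600, 720, 840]   # a price taker at $120 per unit

def marginal(series, q):
    # change in a total when output moves from q - 1 units to q units
    return series[q] - series[q - 1]

print(marginal(total_cost, 6))   # 80: the marginal cost of the sixth unit

# expand output as long as the next unit adds at least as much revenue as cost
q = 0
while q + 1 < len(total_cost) and marginal(total_revenue, q + 1) >= marginal(total_cost, q + 1):
    q += 1
print(q, total_revenue[q] - total_cost[q])   # profit-maximizing quantity and its profit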
== Total revenue – total cost perspective == To obtain the profit maximizing output quantity, we start by recognizing that profit is equal to total revenue ( TR {\displaystyle {\text{TR}}} ) minus total cost ( TC {\displaystyle {\text{TC}}} ). Given a table of costs and revenues at each quantity, we can either compute equations or plot the data directly on a graph. The profit-maximizing output is the one at which this difference reaches its maximum. In the accompanying diagram, the linear total revenue curve represents the case in which the firm is a perfect competitor in the goods market, and thus cannot set its own selling price. The profit-maximizing output level is represented as the one at which total revenue is the height of C {\displaystyle {\text{C}}} and total cost is the height of B {\displaystyle {\text{B}}} ; the maximal profit is measured as the length of the segment CB ¯ {\displaystyle {\overline {\text{CB}}}} . This output level is also the one at which the total profit curve is at its maximum. If, contrary to what is assumed in the graph, the firm is not a perfect competitor in the output market, the price to sell the product at can be read off the demand curve at the firm's optimal quantity of output. This optimal quantity of output is the quantity at which marginal revenue equals marginal cost. == Marginal revenue – marginal cost perspective == An equivalent perspective relies on the relationship that, for each unit sold, marginal profit ( M π {\displaystyle {\text{M}}\pi } ) equals marginal revenue ( MR {\displaystyle {\text{MR}}} ) minus marginal cost ( MC {\displaystyle {\text{MC}}} ). Then, if marginal revenue is greater than marginal cost at some level of output, marginal profit is positive and thus a greater quantity should be produced, and if marginal revenue is less than marginal cost, marginal profit is negative and a lesser quantity should be produced. At the output level at which marginal revenue equals marginal cost, marginal profit is zero and this quantity is the one that maximizes profit. Since total profit increases when marginal profit is positive and total profit decreases when marginal profit is negative, it must reach a maximum where marginal profit is zero—where marginal cost equals marginal revenue—and where lower or higher output levels give lower profit levels. In calculus terms, the requirement that the optimal output have higher profit than adjacent output levels is that: d 2 ⁡ R d ⁡ Q 2 < d 2 ⁡ C d ⁡ Q 2 . {\displaystyle {\frac {\operatorname {d} ^{2}R}{{\operatorname {d} Q}^{2}}}<{\frac {\operatorname {d} ^{2}C}{{\operatorname {d} Q}^{2}}}.} The intersection of MR {\displaystyle {\text{MR}}} and MC {\displaystyle {\text{MC}}} is shown in the next diagram as point A {\displaystyle {\text{A}}} . If the industry is perfectly competitive (as is assumed in the diagram), the firm faces a demand curve ( D {\displaystyle {\text{D}}} ) that is identical to its marginal revenue curve ( MR {\displaystyle {\text{MR}}} ), and this is a horizontal line at a price determined by industry supply and demand. Average total costs are represented by curve ATC {\displaystyle {\text{ATC}}} . Total economic profit is represented by the area of the rectangle PABC ¯ {\displaystyle {\overline {\text{PABC}}}} . The optimum quantity ( Q {\displaystyle Q} ) is the same as the optimum quantity in the first diagram. 
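To make the two perspectives concrete, the sketch below assumes a price-taking firm, a fixed market price, and an illustrative quadratic cost function (none of these numbers come from the article), then finds the optimum both from the first-order condition MR = MC and by a brute-force search over output levels.

# assume a price-taking firm: TR(Q) = P * Q, and an illustrative cost function
# TC(Q) = 50 + 2*Q + 0.5*Q**2, whose marginal cost is MC(Q) = 2 + Q
P = 20.0

def profit(q):
    return P * q - (50 + 2 * q + 0.5 * q ** 2)

# first-order condition MR = MC: P = 2 + Q, hence Q* = P - 2; the second-order
# condition holds because TC is convex while TR is linear
q_star = P - 2
print(q_star, profit(q_star))               # 18.0 112.0

# a brute-force search over a fine grid of quantities agrees with the calculus answer
grid = [i / 100 for i in range(0, 5001)]
q_best = max(grid, key=profit)
print(q_best, round(profit(q_best), 2))     # 18.0 112.0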
If the firm is a monopolist, the marginal revenue curve would have a negative slope as shown in the next graph, because it would be based on the downward-sloping market demand curve. The optimal output, shown in the graph as Q m {\displaystyle Q_{m}} , is the level of output at which marginal cost equals marginal revenue. The price that induces that quantity of output is the height of the demand curve at that quantity (denoted P m {\displaystyle P_{m}} ). A generic derivation of the profit maximisation level of output is given by the following steps. Firstly, suppose a representative firm i {\displaystyle i} has perfect information about its profit, given by: π i = TR i − TC i {\displaystyle \pi _{i}={\text{TR}}_{i}-{\text{TC}}_{i}} where TR {\displaystyle {\text{TR}}} denotes total revenue and TC {\displaystyle {\text{TC}}} denotes total costs. The above expression can be re-written as: π i = p i ⋅ q i − c i ⋅ q i {\displaystyle \pi _{i}=p_{i}\cdot q_{i}-c_{i}\cdot q_{i}} where p {\displaystyle p} denotes price (marginal revenue), q {\displaystyle q} quantity, and c {\displaystyle c} marginal cost, here treated as a constant cost per unit. The firm maximises its profit with respect to quantity to yield the profit maximisation level of output: ( ∂ π i ) ( ∂ q i ) = p i − c i = 0 {\displaystyle {\frac {\left(\partial \pi _{i}\right)}{\left(\partial q_{i}\right)}}=p_{i}-c_{i}=0} As such, profit is maximised at the level of output where marginal revenue p i {\displaystyle p_{i}} equals marginal cost c i {\displaystyle c_{i}} . In an environment that is competitive but not perfectly so, more complicated profit maximization solutions involve the use of game theory. == Case in which maximizing revenue is equivalent == In some cases a firm's demand and cost conditions are such that marginal profits are greater than zero for all levels of production up to a certain maximum. In this case marginal profit plunges to zero immediately after that maximum is reached; hence the M π = 0 {\displaystyle {\text{M}}\pi =0} rule implies that output should be produced at the maximum level, which also happens to be the level that maximizes revenue. In other words, the profit-maximizing quantity and price can be determined by setting marginal revenue equal to zero, which occurs at the maximal level of output. Marginal revenue equals zero when the total revenue curve has reached its maximum value. An example would be a scheduled airline flight. The marginal costs of flying one more passenger on the flight are negligible until all the seats are filled. The airline would maximize profit by filling all the seats. == Maximizing profits in the real world == In the real world, it is not easy to achieve profit maximization. Applying the MR = MC rule requires the company to know accurately both the marginal revenue and the marginal cost of the last unit sold. The price elasticity of demand for goods depends on the response of other companies: when a company is the only one raising prices, demand will be elastic, but if one firm raises prices and others follow, demand may be inelastic. Companies can seek to maximize profits through estimation. When a price increase leads to only a small decline in demand, the company can keep raising the price until demand becomes elastic. Generally, it is difficult to gauge the impact of a price change on demand, because demand depends on many other factors besides the price. The company may also have other goals and considerations.
For example, companies may choose to earn less than the maximum profit in pursuit of higher market share. Because price increases maximize profits in the short term, they will attract more companies to enter the market. Many companies try to minimize costs by shifting production to foreign locations with cheap labor (e.g. Nike, Inc.). However, moving the production line to a foreign location may cause unnecessary transportation costs. Producing close to the market where products are sold can make it easier to match supply to demand, but it is not a good choice when the production cost there is much higher. === Tools === Profit analysis: habitually recording and analyzing the business costs of all products/services sold. The cost of any goods or services sold includes many miscellaneous items, such as labor, materials, transportation, advertising and storage, all of which become expenses. Business intelligence tools may be needed to integrate all of this financial information into expense reports so that the business can clearly and accurately understand all costs related to operations. Planning and actual execution: when implementing a "what if" solution to help in the sales and operations planning process, familiarity with the company's operations, including the supply chain, inventory management and the sales process, is useful. Constraints are required to prevent corporate plans from becoming unfeasible. == Changes in total costs and profit maximization == A firm maximizes profit by operating where marginal revenue equals marginal cost. This is stipulated by neoclassical theory, in which a firm chooses the level of output and inputs that maximizes profit, which, under perfect competition, yields the condition that price equals marginal cost. In the short run, a change in fixed costs has no effect on the profit maximizing output or price. The firm merely treats short term fixed costs as sunk costs and continues to operate as before. This can be confirmed graphically. Using the diagram illustrating the total cost–total revenue perspective, the firm maximizes profit at the point where the slopes of the total cost line and total revenue line are equal. An increase in fixed cost would cause the total cost curve to shift up rigidly by the amount of the change. There would be no effect on the total revenue curve or the shape of the total cost curve. Consequently, the profit maximizing output would remain the same. This point can also be illustrated using the diagram for the marginal revenue–marginal cost perspective. A change in fixed cost would have no effect on the position or shape of these curves. In simple terms, although profit depends on total cost through Profit = TR − TC {\displaystyle {\text{Profit}}={\text{TR}}-{\text{TC}}} , the enterprise maximizes profit by producing the output at which the difference TR − TC {\displaystyle {\text{TR}}-{\text{TC}}} is largest. An increase in total cost does not by itself change the profit-maximizing output, because the increase does not necessarily change the marginal cost. If the marginal cost remains the same, the enterprise can still produce up to the unit at which MR = MC = Price {\displaystyle {\text{MR}}={\text{MC}}={\text{Price}}} to maximize profit. In the long run, a firm will theoretically have zero expected profits under the competitive equilibrium. The market should adjust to clear any profits if there is perfect competition.
In situations where there are non-zero profits, we should expect to see either some form of long run disequilibrium or non-competitive conditions, such as barriers to entry, where there is not perfect competition between firms. == Markup pricing == In addition to using methods to determine a firm's optimal level of output, a firm that is not perfectly competitive can equivalently set price to maximize profit (since setting price along a given demand curve involves picking a preferred point on that curve, which is equivalent to picking a preferred quantity to produce and sell). The profit maximization conditions can be expressed in a "more easily applicable" form or rule of thumb than the above perspectives use. The first step is to rewrite the expression for marginal revenue as MR = Δ TR Δ Q = P Δ Q + Q Δ P Δ Q = P + Q Δ P Δ Q {\displaystyle {\begin{aligned}{\text{MR}}=&{\frac {\Delta {\text{TR}}}{\Delta Q}}\\=&{\frac {P\Delta Q+Q\Delta P}{\Delta Q}}\\=&P+{\frac {Q\Delta P}{\Delta Q}}\\\end{aligned}}} , where P {\displaystyle P} and Q {\displaystyle Q} refer to the midpoints between the old and new values of price and quantity respectively. The marginal revenue from an incremental unit of output has two parts: first, the revenue the firm gains from selling the additional units or, giving the term P Δ Q {\displaystyle P\Delta Q} . The additional units are called the marginal units. Producing one extra unit and selling it at price P {\displaystyle P} brings in revenue of P {\displaystyle P} . Moreover, one must consider "the revenue the firm loses on the units it could have sold at the higher price"—that is, if the price of all units had not been pulled down by the effort to sell more units. These units that have lost revenue are called the infra-marginal units. That is, selling the extra unit results in a small drop in price which reduces the revenue for all units sold by the amount Q ⋅ ( Δ P Δ Q ) {\displaystyle Q\cdot \left({\frac {\Delta P}{\Delta Q}}\right)} . Thus, MR = P + Q ⋅ Δ P Δ Q = P + P ⋅ Q P ⋅ Δ P Δ Q = P + P PED {\displaystyle {\text{MR}}=P+Q\cdot {\frac {\Delta P}{\Delta Q}}=P+P\cdot {\frac {Q}{P}}\cdot {\frac {\Delta P}{\Delta Q}}=P+{\frac {P}{\text{PED}}}} , where PED {\displaystyle {\text{PED}}} is the price elasticity of demand characterizing the demand curve of the firms' customers, which is negative. Then setting MC = MR {\displaystyle {\text{MC}}={\text{MR}}} gives MC = P + P PED {\displaystyle {\text{MC}}=P+{\frac {P}{\text{PED}}}} so P − MC P = − 1 PED {\displaystyle {\frac {P-{\text{MC}}}{P}}={\frac {-1}{\text{PED}}}} and P = M C 1 + ( 1 PED ) {\displaystyle P={\frac {MC}{1+\left({\frac {1}{\text{PED}}}\right)}}} . Thus, the optimal markup rule is: ( P − MC ) P = 1 ( − PED ) {\displaystyle {\frac {\left(P-{\text{MC}}\right)}{P}}={\frac {1}{\left(-{\text{PED}}\right)}}} or equivalently P = PED 1 + PED ⋅ MC {\displaystyle P={\frac {\text{PED}}{1+{\text{PED}}}}\cdot {\text{MC}}} . In other words, the rule is that the size of the markup of price over the marginal cost is inversely related to the absolute value of the price elasticity of demand for the good. The optimal markup rule also implies that a non-competitive firm will produce on the elastic region of its market demand curve. Marginal cost is positive. 
The term P E D 1 + PED {\displaystyle {\frac {PED}{1+{\text{PED}}}}} would be positive so P > 0 {\displaystyle P>0} only if PED {\displaystyle {\text{PED}}} is between − 1 {\displaystyle -1} and − ∞ {\displaystyle -\infty } (that is, if demand is elastic at that level of output). The intuition behind this result is that, if demand is inelastic at some value Q 1 {\displaystyle Q_{1}} , then a decrease in Q {\displaystyle Q} would increase P {\displaystyle P} more than proportionately, thereby increasing revenue P ⋅ Q {\displaystyle P\cdot Q} ; since lower Q {\displaystyle Q} would also lead to lower total cost, profit would go up due to the combination of increased revenue and decreased cost. Thus, Q 1 {\displaystyle Q_{1}} does not give the highest possible profit. == Marginal product of labor, marginal revenue product of labor, and profit maximization == The general rule is that the firm maximizes profit by producing that quantity of output where marginal revenue equals marginal cost. The profit maximization issue can also be approached from the input side. That is, what is the profit maximizing usage of the variable input? To maximize profit the firm should increase usage of the input "up to the point where the input's marginal revenue product equals its marginal costs". Mathematically, the profit-maximizing rule is MRP L = MC L {\displaystyle {\text{MRP}}_{L}={\text{MC}}_{L}} , where the subscript L {\displaystyle _{L}} refers to the commonly assumed variable input, labor. The marginal revenue product is the change in total revenue per unit change in the variable input, that is, MRP L = Δ TR Δ L {\displaystyle {\text{MRP}}_{L}={\frac {\Delta {\text{TR}}}{\Delta L}}} . MRP L {\displaystyle {\text{MRP}}_{L}} is the product of marginal revenue and the marginal product of labor or MRP L = MR ⋅ MP L {\displaystyle {\text{MRP}}_{L}={\text{MR}}\cdot {\text{MP}}_{L}} . == Criticism == The maximization of producer surplus can in some cases reduce consumer surplus. Some forms of producer profit maximization are considered anti-competitive practices and are regulated by competition law. Maximization of short-term producer profit can reduce long-term producer profit, which can be exploited by predatory pricing such as dumping. == Government regulation == Market shares reflect the power of a firm in the market; a firm dominating a market is very common, and too much power often becomes the motive for anti-competitive behavior. Predatory pricing, tying, price gouging and other such behaviors reflect the excessive power of monopolists in the market. In an attempt to prevent businesses from abusing their power to maximize their own profits, governments often intervene to stop them in their tracks. A major example of this is anti-trust regulation, which effectively outlaws most industry monopolies. Through this regulation, consumers enjoy a better relationship with the companies that serve them, even though the company itself may suffer, financially speaking. == See also == Utility maximization problem Welfare maximization Business organization Corporation Duality (optimization) Market structure Microeconomics Pricing Outline of industrial organization Rational choice theory Supply and demand Marginal revenue Total revenue Marginal cost == Notes == == References == Landsburg, S. (2002). Price Theory and Applications (fifth ed.). South-Western. Landsburg, S. (2013). Price Theory and Applications (PDF) (ninth ed.). South-Western. ISBN 978-1-285-42352-4. Lipsey, Richard G. (1975).
An introduction to positive economics (fourth ed.). Weidenfeld and Nicolson. pp. 214–7. ISBN 0-297-76899-9. Samuelson, W.; Marks, S. (2003). Managerial Economics (Fourth ed.). Wiley. ISBN 0470000449. == External links == Profit Maximization in Perfect Competition by Fiona Maclachlan, Wolfram Demonstrations Project. Profit Maximization: The Comprehensive Guide by Richard Gulle, Techfunnel Project. Profit Maximisation by Tejvan Pettinger. Three Steps to Mastering Prescriptive Profit Maximization by Riverlogic.
Wikipedia/Profit_function
Planetary science (or more rarely, planetology) is the scientific study of planets (including Earth), celestial bodies (such as moons, asteroids, comets) and planetary systems (in particular those of the Solar System) and the processes of their formation. It studies objects ranging in size from micrometeoroids to gas giants, with the aim of determining their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, which originally grew from astronomy and Earth science, and now incorporates many disciplines, including planetary geology, cosmochemistry, atmospheric science, physics, oceanography, hydrology, theoretical planetary science, glaciology, and exoplanetology. Allied disciplines include space physics, when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology. There are interrelated observational and theoretical branches of planetary science. Observational research can involve combinations of space exploration, predominantly with robotic spacecraft missions using remote sensing, and comparative, experimental work in Earth-based laboratories. The theoretical component involves considerable computer simulation and mathematical modelling. Planetary scientists are generally located in the astronomy and physics or Earth sciences departments of universities or research centres, though there are several purely planetary science institutes worldwide. Generally, planetary scientists study one of the Earth sciences, astronomy, astrophysics, geophysics, or physics at the graduate level and concentrate their research in planetary science disciplines. There are several major conferences each year, and a wide range of peer reviewed journals. Some planetary scientists work at private research centres and often initiate partnership research tasks. == History == The history of planetary science may be said to have begun with the Ancient Greek philosopher Democritus, who is reported by Hippolytus as saying The ordered worlds are boundless and differ in size, and that in some there is neither sun nor moon, but that in others, both are greater than with us, and yet with others more in number. And that the intervals between the ordered worlds are unequal, here more and there less, and that some increase, others flourish and others decay, and here they come into being and there they are eclipsed. But that they are destroyed by colliding with one another. And that some ordered worlds are bare of animals and plants and all water. In more modern times, planetary science began in astronomy, from studies of the unresolved planets. In this sense, the original planetary astronomer would be Galileo, who discovered the four largest moons of Jupiter, the mountains on the Moon, and first observed the rings of Saturn, all objects of intense later study. Galileo's study of the lunar mountains in 1609 also began the study of extraterrestrial landscapes: his observation "that the Moon certainly does not possess a smooth and polished surface" suggested that it and other worlds might appear "just like the face of the Earth itself". Advances in telescope construction and instrumental resolution gradually allowed increased identification of the atmospheric as well as surface details of the planets. The Moon was initially the most heavily studied, due to its proximity to the Earth, as it always exhibited elaborate features on its surface, and the technological improvements gradually produced more detailed lunar geological knowledge. 
In this scientific process, the main instruments were astronomical optical telescopes (and later radio telescopes) and finally robotic exploratory spacecraft, such as space probes. The Solar System has now been relatively well-studied, and a good overall understanding of the formation and evolution of this planetary system exists. However, there are large numbers of unsolved questions, and the rate of new discoveries is very high, partly due to the large number of interplanetary spacecraft currently exploring the Solar System. == Disciplines == Planetary science studies observational and theoretical astronomy, geology (astrogeology), atmospheric science, and an emerging subspecialty in planetary oceans, called planetary oceanography. === Planetary astronomy === This is both an observational and a theoretical science. Observational researchers are predominantly concerned with the study of the small bodies of the Solar System: those that are observed by telescopes, both optical and radio, so that characteristics of these bodies such as shape, spin, surface materials and weathering are determined, and the history of their formation and evolution can be understood. Theoretical planetary astronomy is concerned with dynamics: the application of the principles of celestial mechanics to the Solar System and extrasolar planetary systems. Observing exoplanets and determining their physical properties, exoplanetology, is a major area of research besides Solar System studies. Every planet has its own branch. === Planetary geology === In planetary science, the term geology is used in its broadest sense, to mean the study of the surface and interior parts of planets and moons, from their core to their magnetosphere. The best-known research topics of planetary geology deal with the planetary bodies in the near vicinity of the Earth: the Moon, and the two neighboring planets: Venus and Mars. Of these, the Moon was studied first, using methods developed earlier on the Earth. Planetary geology focuses on celestial objects that exhibit a solid surface or have significant solid physical states as part of their structure. Planetary geology applies geology, geophysics and geochemistry to planetary bodies. ==== Planetary geomorphology ==== Geomorphology studies the features on planetary surfaces and reconstructs the history of their formation, inferring the physical processes that acted on the surface. Planetary geomorphology includes the study of several classes of surface features: Impact features (multi-ringed basins, craters) Volcanic and tectonic features (lava flows, fissures, rilles) Glacial features Aeolian features Space weathering – erosional effects generated by the harsh environment of space (continuous micrometeorite bombardment, high-energy particle rain, impact gardening). For example, the thin dust cover on the surface of the lunar regolith is a result of micrometeorite bombardment. Hydrological features: the liquid involved can range from water to hydrocarbon and ammonia, depending on the location within the Solar System. This category includes the study of paleohydrological features (paleochannels, paleolakes). The history of a planetary surface can be deciphered by mapping features from top to bottom according to their deposition sequence, as first determined on terrestrial strata by Nicolas Steno. For example, stratigraphic mapping prepared the Apollo astronauts for the field geology they would encounter on their lunar missions. 
Overlapping sequences were identified on images taken by the Lunar Orbiter program, and these were used to prepare a lunar stratigraphic column and geological map of the Moon. ==== Cosmochemistry, geochemistry and petrology ==== One of the main problems when generating hypotheses on the formation and evolution of objects in the Solar System is the lack of samples that can be analyzed in the laboratory, where a large suite of tools are available, and the full body of knowledge derived from terrestrial geology can be brought to bear. Direct samples from the Moon, asteroids and Mars are present on Earth, removed from their parent bodies, and delivered as meteorites. Some of these have suffered contamination from the oxidising effect of Earth's atmosphere and the infiltration of the biosphere, but those meteorites collected in the last few decades from Antarctica are almost entirely pristine. The different types of meteorites that originate from the asteroid belt cover almost all parts of the structure of differentiated bodies: meteorites even exist that come from the core-mantle boundary (pallasites). The combination of geochemistry and observational astronomy has also made it possible to trace the HED meteorites back to a specific asteroid in the main belt, 4 Vesta. The comparatively few known Martian meteorites have provided insight into the geochemical composition of the Martian crust, although the unavoidable lack of information about their points of origin on the diverse Martian surface has meant that they do not provide more detailed constraints on theories of the evolution of the Martian lithosphere. As of July 24, 2013, 65 samples of Martian meteorites have been discovered on Earth. Many were found in either Antarctica or the Sahara Desert. During the Apollo era, in the Apollo program, 384 kilograms of lunar samples were collected and transported to the Earth, and three Soviet Luna robots also delivered regolith samples from the Moon. These samples provide the most comprehensive record of the composition of any Solar System body besides the Earth. The numbers of lunar meteorites are growing quickly in the last few years – as of April 2008 there are 54 meteorites that have been officially classified as lunar. Eleven of these are from the US Antarctic meteorite collection, 6 are from the Japanese Antarctic meteorite collection and the other 37 are from hot desert localities in Africa, Australia, and the Middle East. The total mass of recognized lunar meteorites is close to 50 kg. ==== Planetary geophysics and space physics ==== Space probes made it possible to collect data in not only the visible light region but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics. Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins. If a planet's magnetic field is sufficiently strong, its interaction with the solar wind forms a magnetosphere around a planet. Early space probes discovered the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. 
The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles, the Van Allen radiation belts. Planetary geophysics includes, but is not limited to, seismology and tectonophysics, geophysical fluid dynamics, mineral physics, geodynamics, mathematical geophysics, and geophysical surveying. ==== Planetary geodesy ==== Planetary geodesy (also known as planetary geodetics) deals with the measurement and representation of the planets of the Solar System, their gravitational fields and geodynamic phenomena (polar motion in three-dimensional, time-varying space). The science of geodesy has elements of both astrophysics and planetary sciences. The shape of the Earth is to a large extent the result of its rotation, which causes its equatorial bulge, and the competition of geologic processes such as the collision of plates and of vulcanism, resisted by the Earth's gravity field. These principles can be applied to the solid surface of Earth (for example, in orogeny): few mountains are higher than 10 km (6 mi) and few deep sea trenches are deeper than that, because quite simply a mountain as tall as, for example, 15 km (9 mi) would develop so much pressure at its base, due to gravity, that the rock there would become plastic, and the mountain would slump back to a height of roughly 10 km (6 mi) in a geologically insignificant time. Some or all of these geologic principles can be applied to other planets besides Earth. For instance on Mars, whose surface gravity is much less, the largest volcano, Olympus Mons, is 27 km (17 mi) high at its peak, a height that could not be maintained on Earth. The Earth geoid is essentially the figure of the Earth abstracted from its topographic features. Therefore, the Mars geoid (areoid) is essentially the figure of Mars abstracted from its topographic features. Surveying and mapping are two important fields of application of geodesy. === Planetary atmospheric science === An atmosphere is an important transitional zone between the solid planetary surface and the higher rarefied ionizing and radiation belts. Not all planets have atmospheres: their existence depends on the mass of the planet, and the planet's distance from the Sun – too distant and frozen atmospheres occur. Besides the four giant planets, three of the four terrestrial planets (Earth, Venus, and Mars) have significant atmospheres. Two moons have significant atmospheres: Saturn's moon Titan and Neptune's moon Triton. A tenuous atmosphere exists around Mercury. The effects of the rotation rate of a planet about its axis can be seen in atmospheric streams and currents. Seen from space, these features show as bands and eddies in the cloud system and are particularly visible on Jupiter and Saturn. === Planetary oceanography === === Exoplanetology === Exoplanetology studies exoplanets, the planets existing outside our Solar System. Until recently, the means of studying exoplanets have been extremely limited, but with the current rate of innovation in research technology, exoplanetology has become a rapidly developing subfield of astronomy. == Comparative planetary science == Planetary science frequently makes use of the method of comparison to give a greater understanding of the object of study.
This can involve comparing the dense atmospheres of Earth and Saturn's moon Titan, the evolution of outer Solar System objects at different distances from the Sun, or the geomorphology of the surfaces of the terrestrial planets, to give only a few examples. The main comparison that can be made is to features on the Earth, as it is much more accessible and allows a much greater range of measurements to be made. Earth analog studies are particularly common in planetary geology, geomorphology, and also in atmospheric science. The use of terrestrial analogs was first described by Gilbert (1886). == In fiction == In Frank Herbert's 1965 science fiction novel Dune, the major secondary character Liet-Kynes serves as the "Imperial Planetologist" for the fictional planet Arrakis, a position he inherited from his father Pardot Kynes. In this role, a planetologist is described as having skills of an ecologist, geologist, meteorologist, and biologist, as well as basic understandings of human sociology. The planetologists apply this expertise to the study of entire planets.In the Dune series, planetologists are employed to understand planetary resources and to plan terraforming or other planetary-scale engineering projects. This fictional position in Dune has had an impact on the discourse surrounding planetary science itself and is referred to by one author as a "touchstone" within the related disciplines. In one example, a publication by Sybil P. Seitzinger in the journal Nature opens with a brief introduction on the fictional role in Dune, and suggests we should consider appointing individuals with similar skills to Liet-Kynes to help with managing human activity on Earth. == Professional activity == === Journals === === Professional bodies === This non-exhaustive list includes those institutions and universities with major groups of people working in planetary science. Alphabetical order is used. Division for Planetary Sciences (DPS) of the American Astronomical Society American Geophysical Union Meteoritical Society Europlanet ==== Government space agencies ==== Canadian Space Agency (CSA) China National Space Administration (CNSA, People's Republic of China). Centre national d'études spatiales French National Centre of Space Research Deutsches Zentrum für Luft- und Raumfahrt e.V., (German: abbreviated DLR), the German Aerospace Center European Space Agency (ESA) Indian Space Research Organisation (ISRO) Israel Space Agency (ISA) Italian Space Agency Japan Aerospace Exploration Agency (JAXA) NASA (National Aeronautics and Space Administration, United States of America) JPL GSFC Ames National Space Organization (Taiwan). Russian Federal Space Agency UK Space Agency (UKSA). === Major conferences === Lunar and Planetary Science Conference (LPSC), organized by the Lunar and Planetary Institute in Houston. Held annually since 1970, occurs in March. Division for Planetary Sciences (DPS) meeting held annually since 1970 at a different location each year, predominantly within the mainland US. Occurs around October. American Geophysical Union (AGU) annual Fall meeting in December in San Francisco. American Geophysical Union (AGU) Joint Assembly (co-sponsored with other societies) in April–May, in various locations around the world. Meteoritical Society annual meeting, held during the Northern Hemisphere summer, generally alternating between North America and Europe. European Planetary Science Congress (EPSC), held annually around September at a location within Europe. 
Smaller workshops and conferences on particular fields occur worldwide throughout the year. == See also == Areography (geography of Mars) Planetary cartography Planetary coordinate system Selenography – study of the surface and physical features of the Moon Theoretical planetology Timeline of Solar System exploration == References == == Further reading == Carr, Michael H., Saunders, R. S., Strom, R. G., Wilhelms, D. E. 1984. The Geology of the Terrestrial Planets. NASA. Morrison, David. 1994. Exploring Planetary Worlds. W. H. Freeman. ISBN 0-7167-5043-0 Hargitai H et al. (2015) Classification and Characterization of Planetary Landforms. In: Hargitai H (ed) Encyclopedia of Planetary Landforms. Springer. doi:10.1007/978-1-4614-3134-3 https://link.springer.com/content/pdf/bbm%3A978-1-4614-3134-3%2F1.pdf Hauber E et al. (2019) Planetary geologic mapping. In: Hargitai H (ed) Planetary Cartography and GIS. Springer. Page D (2015) The Geology of Planetary Landforms. In: Hargitai H (ed) Encyclopedia of Planetary Landforms. Springer. Rossi, A.P., van Gasselt S (eds) (2018) Planetary Geology. Springer == External links == Planetary Science Research Discoveries (articles) The Planetary Society (world's largest space-interest group: see also their active news blog) Planetary Exploration Newsletter (PSI-published professional newsletter, weekly distribution) Women in Planetary Science (professional networking and news)
Wikipedia/Planetary_sciences
A terrestrial planet, tellurian planet, telluric planet, or rocky planet is a planet composed primarily of silicate rocks or metals. Within the Solar System, the terrestrial planets accepted by the IAU are the inner planets closest to the Sun: Mercury, Venus, Earth and Mars. Among astronomers who use the geophysical definition of a planet, two or three planetary-mass satellites – Earth's Moon, Io, and sometimes Europa – may also be considered terrestrial planets. The large rocky asteroids Pallas and Vesta are sometimes included as well, albeit rarely. The terms "terrestrial planet" and "telluric planet" are derived from Latin words for Earth (Terra and Tellus), as these planets are, in terms of structure, Earth-like. Terrestrial planets are generally studied by geologists, astronomers, and geophysicists. Terrestrial planets have a solid planetary surface, making them substantially different from larger gaseous planets, which are composed mostly of some combination of hydrogen, helium, and water existing in various physical states. == Structure == All terrestrial planets in the Solar System share the same basic structure: a central metallic core (mostly iron) surrounded by a silicate mantle. The large rocky asteroid 4 Vesta has a similar structure; possibly so does the smaller asteroid 21 Lutetia. Another rocky asteroid, 2 Pallas, is about the same size as Vesta but significantly less dense; it appears never to have differentiated into a core and mantle. The Earth's Moon and Jupiter's moon Io have similar structures to terrestrial planets, but Earth's Moon has a much smaller iron core. Another Jovian moon, Europa, has a similar density but has a significant ice layer on the surface: for this reason, it is sometimes considered an icy planet instead. Terrestrial planets can have surface structures such as canyons, craters, mountains, volcanoes, and others, depending on the presence at any time of an erosive liquid or tectonic activity or both. Terrestrial planets have secondary atmospheres, generated by volcanic out-gassing or from comet impact debris. This contrasts with the outer, giant planets, whose atmospheres are primary; primary atmospheres were captured directly from the original solar nebula. == Terrestrial planets within the Solar System == The Solar System has four terrestrial planets under the dynamical definition: Mercury, Venus, Earth and Mars. The Earth's Moon as well as Jupiter's moons Io and Europa would also count geophysically, as would perhaps the large protoplanet-asteroids Pallas and Vesta (though those are borderline cases). Among these bodies, only the Earth has an active surface hydrosphere. Europa is believed to have an active hydrosphere under its ice layer. During the formation of the Solar System, there were many terrestrial planetesimals and proto-planets, but most merged with or were ejected by the four terrestrial planets, leaving only Pallas and Vesta to survive more or less intact. These two were likely both dwarf planets in the past, but have been battered out of equilibrium shapes by impacts. Some other protoplanets began to accrete and differentiate but suffered catastrophic collisions that left only a metallic or rocky core, like 16 Psyche or 8 Flora respectively. Many S-type and M-type asteroids may be such fragments. The other round bodies from the asteroid belt outward are geophysically icy planets. They are similar to terrestrial planets in that they have a solid surface, but are composed of ice and rock rather than of rock and metal.
These include the dwarf planets, such as Ceres, Pluto and Eris, which are found today only in the regions beyond the formation snow line where water ice was stable under direct sunlight in the early Solar System. It also includes the other round moons, which are ice-rock (e.g. Ganymede, Callisto, Titan, and Triton) or even almost pure (at least 99%) ice (Tethys and Iapetus). Some of these bodies are known to have subsurface hydrospheres (Ganymede, Callisto, Enceladus, and Titan), like Europa, and it is also possible for some others (e.g. Ceres, Mimas, Dione, Miranda, Ariel, Triton, and Pluto). Titan even has surface bodies of liquid, albeit liquid methane rather than water. Jupiter's Ganymede, though icy, does have a metallic core like the Moon, Io, Europa, and the terrestrial planets. The name Terran world has been suggested to define all solid worlds (bodies assuming a rounded shape), without regard to their composition. It would thus include both terrestrial and icy planets. === Density trends === The uncompressed density of a terrestrial planet is the average density its materials would have at zero pressure. A greater uncompressed density indicates a greater metal content. Uncompressed density differs from the true average density (also often called "bulk" density) because compression within planet cores increases their density; the average density depends on planet size, temperature distribution, and material stiffness as well as composition. Calculations to estimate uncompressed density inherently require a model of the planet's structure. Where there have been landers or multiple orbiting spacecraft, these models are constrained by seismological data and also moment of inertia data derived from the spacecraft's orbits. Where such data is not available, uncertainties are inevitably higher. The uncompressed densities of the rounded terrestrial bodies directly orbiting the Sun trend towards lower values as the distance from the Sun increases, consistent with the temperature gradient that would have existed within the primordial solar nebula. The Galilean satellites show a similar trend going outwards from Jupiter; however, no such trend is observable for the icy satellites of Saturn or Uranus. The icy worlds typically have densities less than 2 g·cm−3. Eris is significantly denser (2.43±0.05 g·cm−3), and may be mostly rocky with some surface ice, like Europa. It is unknown whether extrasolar terrestrial planets in general will follow such a trend. The data in the tables below are mostly taken from a list of gravitationally rounded objects of the Solar System and planetary-mass moon. All distances from the Sun are averages. == Extrasolar terrestrial planets == Most of the planets discovered outside the Solar System are giant planets, because they are more easily detectable. But since 2005, hundreds of potentially terrestrial extrasolar planets have also been found, with several being confirmed as terrestrial. Most of these are super-Earths, i.e. planets with masses between Earth's and Neptune's; super-Earths may be gas planets or terrestrial, depending on their mass and other parameters. During the early 1990s, the first extrasolar planets were discovered orbiting the pulsar PSR B1257+12, with masses of 0.02, 4.3, and 3.9 times that of Earth, by pulsar timing. 
When 51 Pegasi b, the first planet found around a star still undergoing fusion, was discovered, many astronomers assumed it to be a gigantic terrestrial planet, because it was assumed no gas giant could exist as close to its star (0.052 AU) as 51 Pegasi b did. It was later found to be a gas giant. In 2005, the first planets orbiting a main-sequence star that showed signs of being terrestrial planets were found: Gliese 876 d and OGLE-2005-BLG-390Lb. Gliese 876 d orbits the red dwarf Gliese 876, 15 light years from Earth, and has a mass seven to nine times that of Earth and an orbital period of just two Earth days. OGLE-2005-BLG-390Lb has about 5.5 times the mass of Earth and orbits a star about 21,000 light-years away in the constellation Scorpius. From 2007 to 2010, three (possibly four) potential terrestrial planets were found orbiting within the Gliese 581 planetary system. The smallest, Gliese 581e, is only about 1.9 Earth masses, but orbits very close to the star. Two others, Gliese 581c and the disputed Gliese 581d, are more-massive super-Earths orbiting in or close to the habitable zone of the star, so they could potentially be habitable, with Earth-like temperatures. Another possibly terrestrial planet, HD 85512 b, was discovered in 2011; it has at least 3.6 times the mass of Earth. The radius and composition of all these planets are unknown. The first confirmed terrestrial exoplanet, Kepler-10b, was found in 2011 by the Kepler space telescope, specifically designed to discover Earth-size planets around other stars using the transit method. In the same year, the Kepler space telescope mission team released a list of 1235 extrasolar planet candidates, including six that are "Earth-size" or "super-Earth-size" (i.e. they have a radius less than twice that of the Earth) and in the habitable zone of their star. Since then, Kepler has discovered hundreds of planets ranging from Moon-sized to super-Earths, with many more candidates in this size range. In 2016, statistical modeling of the relationship between a planet's mass and radius using a broken power law appeared to suggest that the transition point between rocky, terrestrial worlds and mini-Neptunes without a defined surface was in fact very close to the sizes of Earth and Venus, suggesting that rocky worlds much larger than our own are in fact quite rare. This resulted in some advocating for the retirement of the term "super-Earth" as being scientifically misleading. Since 2016 the catalog of known exoplanets has increased significantly, and there have been several published refinements of the mass-radius model. As of 2024, the expected transition point between rocky and intermediate-mass planets sits at roughly 4.4 Earth masses and roughly 1.6 Earth radii. In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet (named OGLE-2016-BLG-1928) not bound to any star, free-floating in the Milky Way galaxy. === List of terrestrial exoplanets === The following exoplanets have a density of at least 5 g/cm3 and a mass below Neptune's and are thus very likely terrestrial: Kepler-10b, Kepler-20b, Kepler-36b, Kepler-48d, Kepler-68c, Kepler-78b, Kepler-89b, Kepler-93b, Kepler-97b, Kepler-99b, Kepler-100b, Kepler-101c, Kepler-102b, Kepler-102d, Kepler-113b, Kepler-131b, Kepler-131c, Kepler-138c, Kepler-406b, Kepler-406c, Kepler-409b.
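The density criterion above can be made concrete with a short calculation: given a planet's mass and radius in Earth units, the bulk density follows directly, and the 5 g/cm3 threshold and sub-Neptune mass cut can be applied. The sketch below uses Earth's mean density (about 5.51 g/cm3) as the reference; the sample mass–radius pairs are hypothetical placeholders, not measured values for the planets listed above.

```python
# Bulk density from mass and radius in Earth units, plus the rough
# "likely terrestrial" screen described above: density >= 5 g/cm^3 and
# mass below Neptune's. Sample values are illustrative placeholders.

EARTH_DENSITY_G_CM3 = 5.51      # mean density of Earth
NEPTUNE_MASS_EARTHS = 17.15     # approximate mass of Neptune in Earth masses

def bulk_density(mass_earths: float, radius_earths: float) -> float:
    """Mean density in g/cm^3 for a planet given in Earth masses and Earth radii."""
    return EARTH_DENSITY_G_CM3 * mass_earths / radius_earths ** 3

def likely_terrestrial(mass_earths: float, radius_earths: float) -> bool:
    """Apply the screen from the text: density >= 5 g/cm^3 and sub-Neptune mass."""
    return (bulk_density(mass_earths, radius_earths) >= 5.0
            and mass_earths < NEPTUNE_MASS_EARTHS)

samples = {
    "rocky super-Earth (4 M_E, 1.4 R_E)": (4.0, 1.4),
    "volatile-rich sub-Neptune (6 M_E, 2.4 R_E)": (6.0, 2.4),
    "Earth twin (1 M_E, 1 R_E)": (1.0, 1.0),
}

for name, (mass, radius) in samples.items():
    rho = bulk_density(mass, radius)
    print(f"{name}: {rho:.1f} g/cm^3 -> likely terrestrial? {likely_terrestrial(mass, radius)}")
```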
=== Frequency === In 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth- and super-Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way. Eleven billion of these estimated planets may be orbiting Sun-like stars. The nearest such planet may be 12 light-years away, according to the scientists. However, this does not give estimates for the number of extrasolar terrestrial planets, because there are planets as small as Earth that have been shown to be gas planets (see Kepler-138d). Estimates show that about 80% of potentially habitable worlds are covered by land, and about 20% are ocean planets. Planets with ratios more like that of Earth, which is about 30% land and 70% ocean, make up only 1% of these worlds. == Types == Several possible classifications for solid planets have been proposed. Silicate planet : A solid planet like Venus, Earth, or Mars, made primarily of a silicon-based rocky mantle with a metallic (iron) core. Carbon planet (also called "diamond planet") : A theoretical class of planets, composed of a metal core surrounded by primarily carbon-based minerals. They may be considered a type of terrestrial planet if the metal content dominates. The Solar System contains no carbon planets but does have carbonaceous asteroids, such as Ceres and Hygiea. It is unknown if Ceres has a rocky or metallic core. Iron planet : A theoretical type of solid planet that consists almost entirely of iron and therefore has a greater density and a smaller radius than other solid planets of comparable mass. Mercury in the Solar System has a metallic core equal to 60–70% of its planetary mass, and is sometimes called an iron planet, though its surface is made of silicates and is iron-poor. Iron planets are thought to form in the high-temperature regions close to a star, as Mercury did, and where the protoplanetary disk is rich in iron. Icy planet : A type of solid planet with an icy surface of volatiles. In the Solar System, most planetary-mass moons (such as Titan, Triton, and Enceladus) and many dwarf planets (such as Pluto and Eris) have such a composition. Europa is sometimes considered an icy planet due to its surface ice, but its higher density indicates that its interior is mostly rocky. Such planets can have internal saltwater oceans and cryovolcanoes erupting liquid water (i.e. an internal hydrosphere, like Europa or Enceladus); they can have an atmosphere and hydrosphere made from methane or nitrogen (like Titan). A metallic core is possible, as exists on Ganymede. Coreless planet : A theoretical type of solid planet that consists of silicate rock but has no metallic core, i.e. the opposite of an iron planet. Although the Solar System contains no coreless planets, chondrite asteroids and meteorites are common in the Solar System. Ceres and Pallas have mineral compositions similar to carbonaceous chondrites, though Pallas is significantly less hydrated. Coreless planets are thought to form farther from the star, where volatile oxidizing material is more common. == See also == Chthonian planet Earth analog List of potentially habitable exoplanets Planetary habitability Venus zone List of gravitationally rounded objects of the Solar System == References ==
Wikipedia/Terrestrial_planets
Daedalus is a prominent crater located near the center of the far side of the Moon. The inner wall is terraced, and there is a cluster of central peaks on the relatively flat floor. Because of its location (shielded from radio emissions from the Earth), it has been proposed as the site of a future giant radio telescope, which would be scooped out of the crater itself, much like the Arecibo radio telescope, but on a vastly larger scale. The crater is named after Daedalus of Greek myth. It is pictured in famous photographs taken by the Apollo 11 astronauts. In contemporary sources it was called Crater 308 (this was a temporary IAU designation that preceded the establishment of far-side lunar nomenclature). Nearby craters of note include Icarus to the east and Racah to the south. Less than a crater diameter to the north-northeast is Lipskiy. == Satellite craters == By convention these features are identified on lunar maps by placing the letter on the side of the crater midpoint that is closest to Daedalus. == See also == 1864 Daedalus, near-Earth asteroid Lunar Crater Radio Telescope - proposed radio telescope by NIAC for the far side of the moon == References == Graham-Rowe, Duncan (2002-01-03). "Astronomers plan telescope on Moon". New Scientist. Retrieved 2006-10-25. == External links == Lunar Orbiter photographic atlas, Photo Number II-033-M
Wikipedia/Daedalus_(crater)
Scientific laws or laws of science are statements, based on repeated experiments or observations, that describe or predict a range of natural phenomena. The term law has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science (physics, chemistry, astronomy, geoscience, biology). Laws are developed from data and can be further developed through mathematics; in all cases they are directly or indirectly based on empirical evidence. It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented. Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, scientific laws do not express absolute certainty, as mathematical laws do. A scientific law may be contradicted, restricted, or extended by future observations. A law can often be formulated as one or several statements or equations, so that it can predict the outcome of an experiment. Laws differ from hypotheses and postulates, which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws, since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories, which may entail one or several laws. Science distinguishes a law or theory from facts. Calling a law a fact is ambiguous, an overstatement, or an equivocation. The nature of scientific laws has been much discussed in philosophy, but in essence scientific laws are simply empirical conclusions reached by the scientific method; they are intended to be neither laden with ontological commitments nor statements of logical absolutes. Social sciences such as economics have also attempted to formulate scientific laws, though these generally have much less predictive power. == Overview == A scientific law always applies to a physical system under repeated conditions, and it implies that there is a causal relationship involving the elements of the system. Factual and well-confirmed statements like "Mercury is liquid at standard temperature and pressure" are considered too specific to qualify as scientific laws. A central problem in the philosophy of science, going back to David Hume, is that of distinguishing causal relationships (such as those implied by laws) from principles that arise due to constant conjunction. Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation. As such, the applicability of a law is limited to circumstances resembling those already observed, and the law may be found to be false when extrapolated. 
Ohm's law only applies to linear networks; Newton's law of universal gravitation only applies in weak gravitational fields; the early laws of aerodynamics, such as Bernoulli's principle, do not apply in the case of compressible flow such as occurs in transonic and supersonic flight; Hooke's law only applies to strain below the elastic limit; Boyle's law applies with perfect accuracy only to the ideal gas, etc. These laws remain useful, but only under the specified conditions where they apply. Many laws take mathematical forms, and thus can be stated as an equation; for example, the law of conservation of energy can be written as $\Delta E = 0$, where $E$ is the total amount of energy in the universe. Similarly, the first law of thermodynamics can be written as $\mathrm{d}U = \delta Q - \delta W$, and Newton's second law can be written as $F = \frac{dp}{dt}$. While these scientific laws explain what our senses perceive, they are still empirical (acquired by observation or scientific experiment) and so are not like mathematical theorems, which can be proved purely by mathematics. Like theories and hypotheses, laws make predictions; specifically, they predict that new observations will conform to the given law. Laws can be falsified if they are found in contradiction with new data. Some laws are only approximations of other more general laws, and are good approximations with a restricted domain of applicability. For example, Newtonian dynamics (which is based on Galilean transformations) is the low-speed limit of special relativity (since the Galilean transformation is the low-speed approximation to the Lorentz transformation). Similarly, the Newtonian gravitation law is a low-mass approximation of general relativity, and Coulomb's law is an approximation to quantum electrodynamics at large distances (compared to the range of weak interactions). In such cases it is common to use the simpler, approximate versions of the laws, instead of the more accurate general laws. Laws are constantly being tested experimentally to increasing degrees of precision, which is one of the main goals of science. The fact that laws have never been observed to be violated does not preclude testing them at increased accuracy or in new kinds of conditions to confirm whether they continue to hold, or whether they break, and what can be discovered in the process. It is always possible for laws to be invalidated or proven to have limitations, by repeatable experimental evidence, should any be observed. Well-established laws have indeed been invalidated in some special cases, but the new formulations created to explain the discrepancies generalize upon, rather than overthrow, the originals. That is, the invalidated laws have been found to be only close approximations, to which other terms or factors must be added to cover previously unaccounted-for conditions, e.g. very large or very small scales of time or space, enormous speeds or masses, etc. Thus, rather than unchanging knowledge, physical laws are better viewed as a series of improving and increasingly precise generalizations. == Properties == Scientific laws are typically conclusions, based on repeated scientific experiments and observations over many years, which have become accepted universally within the scientific community.
A scientific law is "inferred from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present". The production of a summary description of our environment in the form of such laws is a fundamental aim of science. Several general properties of scientific laws, particularly when referring to laws in physics, have been identified. Scientific laws are: True, at least within their regime of validity. By definition, there have never been repeatable contradicting observations. Universal. They appear to apply everywhere in the universe. Simple. They are typically expressed in terms of a single mathematical equation. Absolute. Nothing in the universe appears to affect them. Stable. Unchanged since first discovered (although they may have been shown to be approximations of more accurate laws). All-encompassing. Everything in the universe apparently must comply with them (according to observations). Generally conservative of quantity. Often expressions of existing homogeneities (symmetries) of space and time. Typically theoretically reversible in time (if non-quantum), although time itself is irreversible. Broad. In physics, laws exclusively refer to the broad domain of matter, motion, energy, and force itself, rather than more specific systems in the universe, such as living systems, e.g. the mechanics of the human body. The term "scientific law" is traditionally associated with the natural sciences, though the social sciences also contain laws. For example, Zipf's law is a law in the social sciences which is based on mathematical statistics. In these cases, laws may describe general trends or expected behaviors rather than being absolutes. In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proved to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be absolutely proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined. Some examples of widely accepted impossibilities in physics are perpetual motion machines, which violate the law of conservation of energy; exceeding the speed of light, which violates the implications of special relativity; the uncertainty principle of quantum mechanics, which asserts the impossibility of simultaneously knowing both the position and the momentum of a particle; and Bell's theorem: no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. == Laws as consequences of mathematical symmetries == Some laws reflect mathematical symmetries found in nature (e.g. the Pauli exclusion principle reflects identity of electrons, conservation laws reflect homogeneity of space and time, and Lorentz transformations reflect rotational symmetry of spacetime). Many fundamental physical laws are mathematical consequences of various symmetries of space, time, or other aspects of nature. Specifically, Noether's theorem connects some conservation laws to certain symmetries.
For example, conservation of energy is a consequence of the shift symmetry of time (no moment of time is different from any other), while conservation of momentum is a consequence of the symmetry (homogeneity) of space (no place in space is special, or different from any other). The indistinguishability of all particles of each fundamental type (say, electrons, or photons) results in the Fermi–Dirac and Bose–Einstein quantum statistics, which in turn result in the Pauli exclusion principle for fermions and in Bose–Einstein condensation for bosons. Special relativity uses rapidity to express motion according to the symmetries of hyperbolic rotation, a transformation mixing space and time. Symmetry between inertial and gravitational mass results in general relativity. The inverse square law of interactions mediated by massless bosons is the mathematical consequence of the 3-dimensionality of space. One strategy in the search for the most fundamental laws of nature is to search for the most general mathematical symmetry group that can be applied to the fundamental interactions. == Laws of physics == === Conservation laws === ==== Conservation and symmetry ==== Conservation laws are fundamental laws that follow from the homogeneity of space, time and phase, in other words symmetry. Noether's theorem: Any quantity with a continuously differentiable symmetry in the action has an associated conservation law. Conservation of mass was the first law to be understood, since most macroscopic physical processes involving masses, for example collisions of massive particles or fluid flow, give the appearance that mass is conserved. Mass conservation was observed to be true for all chemical reactions. In general, this is only approximate: with the advent of relativity and experiments in nuclear and particle physics, it became clear that mass can be transformed into energy and vice versa, so mass is not always conserved but is part of the more general conservation of mass–energy. Conservation of energy, momentum and angular momentum for isolated systems correspond to symmetries under time translation, spatial translation, and rotation, respectively. Conservation of charge was also realized, since charge has never been observed to be created or destroyed and has only been found to move from place to place. ==== Continuity and transfer ==== Conservation laws can be expressed using the general continuity equation for a conserved quantity, which can be written in differential form as: $\frac{\partial \rho}{\partial t} = -\nabla \cdot \mathbf{J}$ where ρ is the amount of some quantity per unit volume and J is the flux of that quantity (the amount crossing a unit area per unit time). Intuitively, the divergence (denoted ∇⋅) of a vector field is a measure of flux diverging radially outwards from a point, so its negative is the amount piling up at a point; hence the rate of change of density in a region of space must be the amount of flux leaving or collecting in that region (see the main article for details). The fluxes of various physical quantities in transport, and their associated continuity equations, can be collected and compared. More general equations are the convection–diffusion equation and Boltzmann transport equation, which have their roots in the continuity equation.
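As a simple numerical illustration of the continuity equation, its one-dimensional form ∂ρ/∂t = −∂J/∂x can be integrated on a periodic grid; a conservative (flux-difference) update keeps the total amount of the quantity constant up to round-off, which is the discrete counterpart of the conservation law. This is only a toy sketch, assuming pure advection at a constant speed, not a general transport solver.

```python
# Toy 1-D continuity equation on a periodic grid:
#   d(rho)/dt = -d(J)/dx   with J = rho * u  (constant advection speed u).
# The flux-difference update conserves the discrete integral of rho.
import numpy as np

nx, length, u = 200, 1.0, 0.5        # grid points, domain length, assumed advection speed
dx = length / nx
dt = 0.4 * dx / abs(u)               # time step satisfying a CFL-like stability condition

x = np.arange(nx) * dx
rho = np.exp(-((x - 0.5) / 0.1) ** 2)    # initial density bump

total_before = rho.sum() * dx
for _ in range(500):
    flux = rho * u                                     # flux of the conserved quantity
    rho = rho - dt / dx * (flux - np.roll(flux, 1))    # upwind flux difference
total_after = rho.sum() * dx

print(f"total before: {total_before:.12f}")
print(f"total after : {total_after:.12f}")   # equal apart from round-off
```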
=== Laws of classical mechanics === ==== Principle of least action ==== Classical mechanics, including Newton's laws, Lagrange's equations, Hamilton's equations, etc., can be derived from the following principle: $\delta \mathcal{S} = \delta \int_{t_1}^{t_2} L(\mathbf{q}, \dot{\mathbf{q}}, t)\, dt = 0$ where $\mathcal{S}$ is the action, the integral of the Lagrangian $L(\mathbf{q}, \dot{\mathbf{q}}, t) = T(\dot{\mathbf{q}}, t) - V(\mathbf{q}, \dot{\mathbf{q}}, t)$ of the physical system between two times t1 and t2. The kinetic energy of the system is T (a function of the rate of change of the configuration of the system), and the potential energy is V (a function of the configuration and its rate of change). The configuration of a system which has N degrees of freedom is defined by generalized coordinates q = (q1, q2, ..., qN). There are generalized momenta conjugate to these coordinates, p = (p1, p2, ..., pN), where: $p_i = \frac{\partial L}{\partial \dot{q}_i}$ The action and Lagrangian both contain the dynamics of the system for all times. The term "path" simply refers to a curve traced out by the system in terms of the generalized coordinates in the configuration space, i.e. the curve q(t), parameterized by time (see also parametric equation for this concept). The action is a functional rather than a function, since it depends on the Lagrangian, and the Lagrangian depends on the path q(t), so the action depends on the entire "shape" of the path for all times (in the time interval from t1 to t2). Between two instants of time, there are infinitely many paths, but the one for which the action is stationary (to first order) is the true path. The stationary value is required for the entire continuum of Lagrangian values corresponding to some path, not just one value of the Lagrangian (in other words, it is not as simple as "differentiating a function and setting it to zero, then solving the equations to find the points of maxima and minima"; rather, this idea is applied to the entire "shape" of the function; see calculus of variations for more details on this procedure). Note that L is not the total energy E of the system, because it involves the difference of T and V rather than their sum: $E = T + V$. The following general approaches to classical mechanics are summarized below in the order of establishment. They are equivalent formulations. Newton's is commonly used due to its simplicity, but Hamilton's and Lagrange's equations are more general, and their range can extend into other branches of physics with suitable modifications. From the above, any equation of motion in classical mechanics can be derived. Corollaries in mechanics : Euler's laws of motion Euler's equations (rigid body dynamics) Corollaries in fluid mechanics : Equations describing fluid flow in various situations can be derived, using the above classical equations of motion and often conservation of mass, energy and momentum. Some elementary examples follow. Archimedes' principle Bernoulli's principle Poiseuille's law Stokes' law Navier–Stokes equations Faxén's law === Laws of gravitation and relativity === Some of the more famous laws of nature are found in Isaac Newton's theories of (now) classical mechanics, presented in his Philosophiae Naturalis Principia Mathematica, and in Albert Einstein's theory of relativity.
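The stationarity of the action described above can be checked numerically for a simple system. The sketch below discretizes the action for one-dimensional motion in uniform gravity, compares the classical parabolic path with perturbed paths sharing the same endpoints, and shows that the classical path gives the stationary (here minimal) action. The parameter values and discretization are illustrative assumptions.

```python
# Numerical check that the classical path makes the action stationary.
# System: a particle of mass m in uniform gravity g, with the path q(t)
# fixed at both endpoints. Action S = integral of (1/2) m qdot^2 - m g q.
import numpy as np

m, g = 1.0, 9.81                      # illustrative values
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def action(q):
    qdot = np.gradient(q, dt)
    lagrangian = 0.5 * m * qdot ** 2 - m * g * q
    return np.trapz(lagrangian, dx=dt)

# Classical path with q(0) = q(1) = 0 under uniform gravity.
q_true = 0.5 * g * t * (1.0 - t)

for eps in (-0.10, -0.05, 0.0, 0.05, 0.10):
    q = q_true + eps * np.sin(np.pi * t)      # perturbation vanishing at both endpoints
    print(f"eps = {eps:+.2f}  ->  S = {action(q):.6f}")

# The unperturbed (eps = 0) path yields the smallest action; perturbations in
# either direction increase it, illustrating stationarity to first order.
```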
==== Modern laws ==== Special relativity : The two postulates of special relativity are not "laws" in themselves, but assumptions about the nature of relative motion. They can be stated as "the laws of physics are the same in all inertial frames" and "the speed of light is constant and has the same value in all inertial frames". The said postulates lead to the Lorentz transformations – the transformation law between two frames of reference moving relative to each other. For any 4-vector, $A' = \Lambda A$; this replaces the Galilean transformation law of classical mechanics. The Lorentz transformations reduce to the Galilean transformations for velocities much less than the speed of light c. The magnitudes of 4-vectors are invariants – not "conserved", but the same for all inertial frames (i.e. every observer in an inertial frame will agree on the same value); in particular, if A is the four-momentum, its magnitude yields the famous invariant equation relating mass, energy, and momentum (see invariant mass): $E^2 = (pc)^2 + (mc^2)^2$ in which the (more famous) mass–energy equivalence $E = mc^2$ is a special case. General relativity : General relativity is governed by the Einstein field equations, which describe the curvature of space-time due to mass–energy, equivalent to the gravitational field. Solving the equation for the geometry of space warped due to the mass distribution gives the metric tensor. Using the geodesic equation, the motion of masses falling along the geodesics can be calculated. Gravitoelectromagnetism : In a relatively flat spacetime due to weak gravitational fields, gravitational analogues of Maxwell's equations can be found; the GEM equations describe an analogous gravitomagnetic field. They are well established by the theory, and experimental tests form ongoing research. ==== Classical laws ==== Kepler's laws, though originally discovered from planetary observations (also due to Tycho Brahe), hold for motion under any inverse-square central force. === Thermodynamics === Newton's law of cooling Fourier's law Ideal gas law, combines a number of separately developed gas laws; Boyle's law Charles's law Gay-Lussac's law Avogadro's law, into one now improved by other equations of state Dalton's law (of partial pressures) Boltzmann equation Carnot's theorem Kopp's law === Electromagnetism === Maxwell's equations give the time-evolution of the electric and magnetic fields due to electric charge and current distributions. Given the fields, the Lorentz force law is the equation of motion for charges in the fields. These equations can be modified to include magnetic monopoles, and are consistent with our observations of monopoles either existing or not existing; if they do not exist, the generalized equations reduce to the ones above; if they do, the equations become fully symmetric in electric and magnetic charges and currents. Indeed, there is a duality transformation where electric and magnetic charges can be "rotated into one another" and still satisfy Maxwell's equations. Pre-Maxwell laws : These laws were found before the formulation of Maxwell's equations. They are not fundamental, since they can be derived from Maxwell's equations. Coulomb's law can be found from Gauss's law (electrostatic form) and the Biot–Savart law can be deduced from Ampere's law (magnetostatic form). Lenz's law and Faraday's law can be incorporated into the Maxwell–Faraday equation.
Nonetheless, they are still very effective for simple calculations. Lenz's law Coulomb's law Biot–Savart law Other laws : Ohm's law Kirchhoff's laws Joule's law === Photonics === Classically, optics is based on a variational principle: light travels from one point in space to another in the shortest time. Fermat's principle In geometric optics laws are based on approximations in Euclidean geometry (such as the paraxial approximation). Law of reflection Law of refraction, Snell's law In physical optics, laws are based on physical properties of materials. Brewster's angle Malus's law Beer–Lambert law In actuality, optical properties of matter are significantly more complex and require quantum mechanics. === Laws of quantum mechanics === Quantum mechanics has its roots in postulates. This leads to results which are not usually called "laws", but hold the same status, in that all of quantum mechanics follows from them. These postulates can be summarized as follows: The state of a physical system, be it a particle or a system of many particles, is described by a wavefunction. Every physical quantity is described by an operator acting on the system; the measured quantity has a probabilistic nature. The wavefunction obeys the Schrödinger equation. Solving this wave equation predicts the time-evolution of the system's behavior, analogous to solving Newton's laws in classical mechanics. Two identical particles, such as two electrons, cannot be distinguished from one another by any means. Physical systems are classified by their symmetry properties. These postulates in turn imply many other phenomena, e.g., uncertainty principles and the Pauli exclusion principle. === Radiation laws === Applying electromagnetism, thermodynamics, and quantum mechanics, to atoms and molecules, some laws of electromagnetic radiation and light are as follows. Stefan–Boltzmann law Planck's law of black-body radiation Wien's displacement law Radioactive decay law == Laws of chemistry == Chemical laws are those laws of nature relevant to chemistry. Historically, observations led to many empirical laws, though now it is known that chemistry has its foundations in quantum mechanics. Quantitative analysis : The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics. Additional laws of chemistry elaborate on the law of conservation of mass. Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation; we now know that the structural arrangement of these elements is also important. Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers; although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction. The law of definite composition and the law of multiple proportions are the first two of the three laws of stoichiometry, the proportions by which the chemical elements combine to form chemical compounds. 
The third law of stoichiometry is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element. More modern laws of chemistry define the relationship between energy and its transformations. Reaction kinetics and equilibria : In equilibrium, molecules exist in a mixture defined by the transformations possible on the timescale of the equilibrium, and are in a ratio defined by the intrinsic energy of the molecules: the lower the intrinsic energy, the more abundant the molecule. Le Chatelier's principle states that the system opposes changes in conditions from equilibrium states, i.e. the system resists changes to the state of an equilibrium reaction. Transforming one structure to another requires the input of energy to cross an energy barrier; this can come from the intrinsic energy of the molecules themselves, or from an external source, which will generally accelerate transformations. The higher the energy barrier, the slower the transformation occurs. There is a hypothetical intermediate, or transition structure, that corresponds to the structure at the top of the energy barrier. The Hammond–Leffler postulate states that this structure looks most similar to the product or starting material whose intrinsic energy is closest to that of the energy barrier. Stabilizing this hypothetical intermediate through chemical interaction is one way to achieve catalysis. All chemical processes are reversible (law of microscopic reversibility), although some processes have such an energy bias that they are essentially irreversible. The reaction rate is characterized by a mathematical parameter known as the rate constant. The Arrhenius equation, an empirical law, gives the dependence of the rate constant on temperature and activation energy. Thermochemistry : Dulong–Petit law Gibbs–Helmholtz equation Hess's law Gas laws : Raoult's law Henry's law Chemical transport : Fick's laws of diffusion Graham's law Lamm equation == Laws of biology == === Ecology === Competitive exclusion principle or Gause's law === Genetics === Mendelian laws (Dominance and Uniformity, segregation of genes, and Independent Assortment) Hardy–Weinberg principle === Natural selection === Whether or not Natural Selection is a "law of nature" is controversial among biologists. Henry Byerly, an American philosopher known for his work on evolutionary theory, discussed the problem of interpreting a principle of natural selection as a law. He suggested a formulation of natural selection as a framework principle that can contribute to a better understanding of evolutionary theory. His approach was to express relative fitness, the propensity of a genotype to increase in proportionate representation in a competitive environment, as a function of adaptedness (adaptive design) of the organism. == Laws of Earth sciences == === Geography === Arbia's law of geography Tobler's first law of geography Tobler's second law of geography === Geology === Archie's law Buys Ballot's law Birch's law Byerlee's law Principle of original horizontality Law of superposition Principle of lateral continuity Principle of cross-cutting relationships Principle of faunal succession Principle of inclusions and components Walther's law == Other fields == Some mathematical theorems and axioms are referred to as laws because they provide a logical foundation for empirical laws.
Examples of other observed phenomena sometimes described as laws include the Titius–Bode law of planetary positions, Zipf's law of linguistics, and Moore's law of technological growth. Many of these laws fall within the scope of uncomfortable science. Other laws are pragmatic and observational, such as the law of unintended consequences. By analogy, principles in other fields of study are sometimes loosely referred to as "laws". These include Occam's razor as a principle of philosophy and the Pareto principle of economics. == History == The observation and detection of underlying regularities in nature date from prehistoric times – the recognition of cause-and-effect relationships implicitly recognises the existence of laws of nature. The recognition of such regularities as independent scientific laws per se, though, was limited by their entanglement in animism, and by the attribution of many effects that do not have readily obvious causes—such as physical phenomena—to the actions of gods, spirits, supernatural beings, etc. Observation and speculation about nature were intimately bound up with metaphysics and morality. In Europe, systematic theorizing about nature (physis) began with the early Greek philosophers and scientists and continued into the Hellenistic and Roman imperial periods, during which times the intellectual influence of Roman law increasingly became paramount.The formula "law of nature" first appears as "a live metaphor" favored by Latin poets Lucretius, Virgil, Ovid, Manilius, in time gaining a firm theoretical presence in the prose treatises of Seneca and Pliny. Why this Roman origin? According to [historian and classicist Daryn] Lehoux's persuasive narrative, the idea was made possible by the pivotal role of codified law and forensic argument in Roman life and culture. For the Romans ... the place par excellence where ethics, law, nature, religion and politics overlap is the law court. When we read Seneca's Natural Questions, and watch again and again just how he applies standards of evidence, witness evaluation, argument and proof, we can recognize that we are reading one of the great Roman rhetoricians of the age, thoroughly immersed in forensic method. And not Seneca alone. Legal models of scientific judgment turn up all over the place, and for example prove equally integral to Ptolemy's approach to verification, where the mind is assigned the role of magistrate, the senses that of disclosure of evidence, and dialectical reason that of the law itself. The precise formulation of what are now recognized as modern and valid statements of the laws of nature dates from the 17th century in Europe, with the beginning of accurate experimentation and the development of advanced forms of mathematics. During this period, natural philosophers such as Isaac Newton (1642–1727) were influenced by a religious view – stemming from medieval concepts of divine law – which held that God had instituted absolute, universal and immutable physical laws. In chapter 7 of The World, René Descartes (1596–1650) described "nature" as matter itself, unchanging as created by God, thus changes in parts "are to be attributed to nature. The rules according to which these changes take place I call the 'laws of nature'." The modern scientific method which took shape at this time (with Francis Bacon (1561–1626) and Galileo (1564–1642)) contributed to a trend of separating science from theology, with minimal speculation about metaphysics and ethics. 
(Natural law in the political sense, conceived as universal (i.e., divorced from sectarian religion and accidents of place), was also elaborated in this period by scholars such as Grotius (1583–1645), Spinoza (1632–1677), and Hobbes (1588–1679).) The distinction between natural law in the political-legal sense and law of nature or physical law in the scientific sense is a modern one, both concepts being equally derived from physis, the Greek word (translated into Latin as natura) for nature. == See also == == References == == Further reading == == External links == Physics Formulary, a useful book in different formats containing many of the physical laws and formulae. Eformulae.com, website containing most of the formulae in different disciplines. Stanford Encyclopedia of Philosophy: "Laws of Nature" by John W. Carroll. Baaquie, Belal E. "Laws of Physics : A Primer". Core Curriculum, National University of Singapore. Francis, Erik Max. "The laws list". Physics. Alcyone Systems Pazameta, Zoran. "The laws of nature". Archived 2014-02-26 at the Wayback Machine. Committee for the Scientific Investigation of Claims of the Paranormal. The Internet Encyclopedia of Philosophy. "Laws of Nature" – by Norman Swartz Mark Buchanan; Frank Close; Nancy Cartwright; Melvyn Bragg (host) (Oct 19, 2000). "Laws of Nature". In Our Time. BBC Radio 4.
Wikipedia/Laws_of_science
Atmospheric science is the study of the Earth's atmosphere and its various inner-working physical processes. Meteorology includes atmospheric chemistry and atmospheric physics with a major focus on weather forecasting. Climatology is the study of atmospheric conditions over timescales longer than those of weather, focusing on average climate conditions and their variability over time. Aeronomy is the study of the upper layers of the atmosphere, where dissociation and ionization are important. Atmospheric science has been extended to the field of planetary science and the study of the atmospheres of the planets and natural satellites of the Solar System. Experimental instruments used in atmospheric science include satellites, rocketsondes, radiosondes, weather balloons, radars, and lasers. The term aerology (from Greek ἀήρ, aēr, "air"; and -λογία, -logia) is sometimes used as an alternative term for the study of Earth's atmosphere; in other definitions, aerology is restricted to the free atmosphere, the region above the planetary boundary layer. Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann. == Atmospheric chemistry == Atmospheric chemistry is a branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology, volcanology and other disciplines. Research is increasingly connected with other areas of study such as climatology. The composition and chemistry of the atmosphere is of importance for several reasons, but primarily because of the interactions between the atmosphere and living organisms. The composition of the Earth's atmosphere has been changed by human activity, and some of these changes are harmful to human health, crops and ecosystems. Examples of problems which have been addressed by atmospheric chemistry include acid rain, photochemical smog and global warming. Atmospheric chemistry seeks to understand the causes of these problems and, by obtaining a theoretical understanding of them, allow possible solutions to be tested and the effects of changes in government policy evaluated. Atmospheric chemistry plays a major role in understanding the concentration of gases in our atmosphere that contribute to climate change. More specifically, when combined with atmospheric physics and biogeochemistry, it is useful for studying the influence of greenhouse gases such as CO2, N2O, and CH4 on Earth's radiative balance. According to UNEP, global greenhouse gas emissions increased to a new record of 57.1 GtCO2e, up 1.3% from the previous year. GHG emissions growth from 2010 to 2019 averaged only +0.8% yearly, illustrating the dramatic increase in global emissions. The Global Nitrous Oxide Budget reported that atmospheric N2O has increased by roughly 25% between 1750 and 2022, with the fastest annual growth rates in 2020 and 2021. Atmospheric chemistry is critical in understanding what contributes to our changing climate. Warming continues as long as net CO2 emissions, constrained by biogeochemistry, remain above zero; to stop temperatures from rising, net CO2 emissions must be brought to zero. By understanding the chemical composition and emission rates in our atmosphere alongside economic factors, researchers are able to trace emissions back to their sources.
About 26% of 2023 emissions (in GtCO2e) came from power generation, 15% from transportation, 11% from industry, and 11% from agriculture, among other sectors. In order to successfully reverse the human-driven damage contributing to global climate change, cuts of nearly 42% are needed by 2030, to be implemented largely through government intervention. This is one example of how atmospheric chemistry goes hand-in-hand with social and political policy, biogeochemistry, and economic factors. == Atmospheric dynamics == Atmospheric dynamics is the study of motion systems of meteorological importance, integrating observations at multiple locations and times with theory. Common topics studied include diverse phenomena such as thunderstorms, tornadoes, gravity waves, tropical cyclones, extratropical cyclones, jet streams, and global-scale circulations. The goal of dynamical studies is to explain the observed circulations on the basis of fundamental principles from physics. The objectives of such studies include improving weather forecasting, developing methods for predicting seasonal and interannual climate fluctuations, and understanding the implications of human-induced perturbations (e.g., increased carbon dioxide concentrations or depletion of the ozone layer) on the global climate. == Atmospheric physics == Atmospheric physics is the application of physics to the study of the atmosphere. Atmospheric physicists attempt to model Earth's atmosphere and the atmospheres of the other planets using fluid flow equations, chemical models, radiation balancing, and energy transfer processes in the atmosphere and underlying oceans and land. In order to model weather systems, atmospheric physicists employ elements of scattering theory, wave propagation models, cloud physics, statistical mechanics and spatial statistics, each of which incorporates high levels of mathematics and physics. Atmospheric physics has close links to meteorology and climatology and also covers the design and construction of instruments for studying the atmosphere and the interpretation of the data they provide, including remote sensing instruments. In the United Kingdom, atmospheric studies are underpinned by the Meteorological Office. Divisions of the U.S. National Oceanic and Atmospheric Administration (NOAA) oversee research projects and weather modeling involving atmospheric physics. The U.S. National Astronomy and Ionosphere Center also carries out studies of the high atmosphere. The Earth's magnetic field and the solar wind interact with the atmosphere, creating the ionosphere, Van Allen radiation belts, telluric currents, and radiant energy. Recent studies in atmospheric physics often rely on satellite-based observation. One example is CALIPSO, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation. The CALIPSO mission, engineered by NASA and the Centre National d'Etudes Spatiales (CNES), studies how clouds and airborne particles play a role in regulating the weather, climate, and quality of Earth's atmosphere. According to NASA, this mission uses methods such as retrieval algorithm development, climatology development, spectroscopy, weather and climate model evaluation, and cloudy radiative transfer models, in addition to atmospheric physics concepts, to understand the physics involved in Earth's atmospheric regulation. == Climatology == Climatology is a science that derives knowledge and practices from the more specialized disciplines of meteorology, oceanography, geology, biology, and astronomy to study climate.
In contrast to meteorology, which studies short-term weather systems lasting up to a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over timescales ranging from years to millennia, as well as changes in long-term average weather patterns. Climatologists, those who practice climatology, study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climate variability and current ongoing global warming. Additionally, the occurrence of past climates on Earth, such as those arising from glacial periods and interglacials, can be used to predict future changes in climate. Oftentimes, climatology is studied in conjunction with another specialized discipline. One recent scientific study that utilizes topics in climatology, oceanology, and even economics is entitled "Concerns about El Nino-Southern Oscillation and the Atlantic Meridional Overturning Circulation with an Increasingly Warm Ocean." Scientists under New Insights in Climate Science found that Earth is at risk of more extreme El Nino events and overall climate instability, given new information regarding the El Nino Southern Oscillation (ENSO) and the Atlantic Meridional Overturning Circulation (AMOC). ENSO describes a recurring climate pattern in which the temperature of waters in the central and eastern tropical Pacific Ocean changes periodically. AMOC is best described by NOAA as "a system of ocean currents that circulates water within the Atlantic Ocean, bringing warm water north and cold water south". This research has revealed that the collapse of the AMOC appears to be occurring sooner than earlier models had predicted. It also expands on the fact that our economic and social systems are more vulnerable to El Nino impacts than previously thought. The study of climatology is vital in understanding current climate risks. Research is necessary to monitor and guide mitigation efforts directed at our ever-evolving climate. Strengthening our knowledge within the realm of climatology allows us to better prepare for the impacts of extreme El Nino events, such as amplified droughts, floods, and heat extremes. Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere, the oceans and land surface (particularly vegetation, land use and topography), as well as the chemical and physical composition of the atmosphere. Related disciplines include astrophysics, atmospheric physics, chemistry, ecology, physical geography, geology, geophysics, glaciology, hydrology, oceanography, and volcanology. == Aeronomy == Aeronomy is the scientific study of the upper atmosphere of the Earth (the atmospheric layers above the stratopause) and the corresponding regions of the atmospheres of other planets, where the entire atmosphere may correspond to the Earth's upper atmosphere or a portion of it. A branch of both atmospheric chemistry and atmospheric physics, aeronomy contrasts with meteorology, which focuses on the layers of the atmosphere below the stratopause. In atmospheric regions studied by aeronomers, chemical dissociation and ionization are important phenomena. == Atmospheres on other celestial bodies == All of the Solar System's planets have atmospheres. This is because their gravity is strong enough to keep gaseous particles close to the surface.
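This retention argument can be made semi-quantitative with the classic rule of thumb that a body keeps a gas over geological time roughly when its escape velocity exceeds the mean thermal speed of the gas molecules by a factor of about six. The sketch below applies that rough criterion to hydrogen and nitrogen on Earth and on the Moon; the factor of six and the assumed temperatures are conventional illustrative values, not precise results.

```python
# Rough gas-retention criterion: retained if escape velocity > ~6x the mean
# thermal speed of the gas molecules (a conventional rule of thumb, used here
# only for illustration).
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
G_CONST = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
AMU = 1.660539e-27      # atomic mass unit, kg

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    return math.sqrt(2.0 * G_CONST * mass_kg / radius_m)

def mean_thermal_speed(molar_mass_amu: float, temperature_k: float) -> float:
    """Mean speed of a Maxwell-Boltzmann gas: sqrt(8 k T / (pi m))."""
    m = molar_mass_amu * AMU
    return math.sqrt(8.0 * K_B * temperature_k / (math.pi * m))

bodies = {                  # mass (kg), radius (m), assumed upper-atmosphere temperature (K)
    "Earth": (5.972e24, 6.371e6, 1000.0),
    "Moon":  (7.342e22, 1.737e6, 400.0),
}
gases = {"H2": 2.0, "N2": 28.0}     # molecular masses in atomic mass units

for body, (mass, radius, temp) in bodies.items():
    v_esc = escape_velocity(mass, radius)
    for gas, mu in gases.items():
        v_th = mean_thermal_speed(mu, temp)
        print(f"{body:5s} {gas}: v_esc = {v_esc / 1e3:4.1f} km/s, "
              f"v_thermal = {v_th / 1e3:4.2f} km/s, retained? {v_esc > 6.0 * v_th}")
```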
Larger gas giants are massive enough to keep large amounts of the light gases hydrogen and helium close by, while the smaller planets lose these gases into space. The composition of the Earth's atmosphere is different from the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen. Much of Mercury's atmosphere has been blasted away by the solar wind. The only moon that has retained a dense atmosphere is Titan. There is a thin atmosphere on Triton, and a trace of an atmosphere on the Moon. Planetary atmospheres are affected by the varying degrees of energy received from either the Sun or their interiors, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), an Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). At least one extrasolar planet, HD 189733 b, has been claimed to possess such a weather system, similar to the Great Red Spot but twice as large. Hot Jupiters have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides which produce supersonic winds, although the day and night sides of HD 189733b appear to have very similar temperatures, indicating that planet's atmosphere effectively redistributes the star's energy around the planet. == See also == Air pollution == References == == External links == Atmospheric fluid dynamics applied to weather maps – Principles such as Advection, Deformation and Vorticity National Center for Atmospheric Research (NCAR) Archives, documents the history of the atmospheric sciences
Wikipedia/Atmospheric_sciences
The germ theory of disease is the currently accepted scientific theory for many diseases. It states that microorganisms known as pathogens or "germs" can cause disease. These small organisms, which are too small to be seen without magnification, invade animals, plants, and even bacteria. Their growth and reproduction within their hosts can cause disease. "Germ" refers not just to bacteria but to any type of microorganism, such as protists or fungi, or other pathogens, including parasites, viruses, prions, or viroids. Diseases caused by pathogens are called infectious diseases. Even when a pathogen is the principal cause of a disease, environmental and hereditary factors often influence the severity of the disease, and whether a potential host individual becomes infected when exposed to the pathogen. Pathogens are disease-causing agents that can pass from one individual to another, across multiple domains of life. Basic forms of germ theory were proposed by Girolamo Fracastoro in 1546, and expanded upon by Marcus von Plenciz in 1762. However, such views were held in disdain in Europe, where Galen's miasma theory remained dominant among scientists and doctors. By the early 19th century, the first vaccine, smallpox vaccination, was commonplace in Europe, though doctors were unaware of how it worked or how to extend the principle to other diseases. A transitional period began in the late 1850s with the work of Louis Pasteur. This work was later extended by Robert Koch in the 1880s. By the end of that decade, the miasma theory was struggling to compete with the germ theory of disease. Viruses were initially discovered in the 1890s. Eventually, a "golden era" of bacteriology ensued, during which the germ theory quickly led to the identification of the actual organisms that cause many diseases. == Miasma theory == The miasma theory was the predominant theory of disease transmission before the germ theory took hold towards the end of the 19th century; it is no longer accepted as a correct explanation for disease by the scientific community. It held that diseases such as cholera, chlamydia infection, or the Black Death were caused by a miasma (μίασμα, Ancient Greek: "pollution"), a noxious form of "bad air" emanating from rotting organic matter. Miasma was considered to be a poisonous vapor or mist filled with particles from decomposed matter (miasmata) that was identifiable by its foul smell. The theory posited that diseases were the product of environmental factors such as contaminated water, foul air, and poor hygienic conditions. Such infections, according to the theory, were not passed between individuals but would affect those within a locale that gave rise to such vapors. == Development of germ theory == === Greece and Rome === In Antiquity, the Greek historian Thucydides (c. 460 – c. 400 BC) was the first person to write, in his account of the plague of Athens, that diseases could spread from an infected person to others. One theory of the spread of contagious diseases that were not spread by direct contact was that they were spread by spore-like "seeds" (Latin: semina) that were present in and dispersible through the air. In his poem, De rerum natura (On the Nature of Things, c. 56 BC), the Roman poet Lucretius (c. 99 BC – c. 55 BC) stated that the world contained various "seeds", some of which could sicken a person if they were inhaled or ingested. 
The Roman statesman Marcus Terentius Varro (116–27 BC) wrote, in his Rerum rusticarum libri III (Three Books on Agriculture, 36 BC): "Precautions must also be taken in the neighborhood of swamps... because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases." The Greek physician Galen (AD 129 – c. 200/216) speculated in his On Initial Causes (c. 175 AD) that some patients might have "seeds of fever". In his On the Different Types of Fever (c. 175 AD), Galen speculated that plagues were spread by "certain seeds of plague", which were present in the air. And in his Epidemics (c. 176–178 AD), Galen explained that patients might relapse during recovery from fever because some "seed of the disease" lurked in their bodies, which would cause a recurrence of the disease if the patients did not follow a physician's therapeutic regimen. === The Middle Ages === A hybrid form of miasma and contagion theory was proposed by Persian physician Ibn Sina (known as Avicenna in Europe) in The Canon of Medicine (1025). He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt. During the early Middle Ages, Isidore of Seville (c. 560–636) mentioned "plague-bearing seeds" (pestifera semina) in his On the Nature of Things (c. AD 613). Later in 1345, Tommaso del Garbo (c. 1305–1370) of Bologna, Italy mentioned Galen's "seeds of plague" in his work Commentaria non-parum utilia in libros Galeni (Helpful commentaries on the books of Galen). The 16th century Reformer Martin Luther appears to have had some idea of the contagion theory, commenting, "I have survived three plagues and visited several people who had two plague spots which I touched. But it did not hurt me, thank God. Afterwards when I returned home, I took up Margaret," (born 1534), "who was then a baby, and put my unwashed hands on her face, because I had forgotten; otherwise I should not have done it, which would have been tempting God." In 1546, Italian physician Girolamo Fracastoro published De Contagione et Contagiosis Morbis (On Contagion and Contagious Diseases), a set of three books covering the nature of contagious diseases, categorization of major pathogens, and theories on preventing and treating these conditions. Fracastoro blamed "seeds of disease" that propagate through direct contact with an infected host, indirect contact with fomites, or through particles in the air. === The Early Modern Period === In 1668, Italian physician Francesco Redi published experimental evidence rejecting spontaneous generation, the theory that living creatures arise from nonliving matter. He observed that maggots only arose from rotting meat that was uncovered. When meat was left in jars covered by gauze, the maggots would instead appear on the gauze's surface, later understood as rotting meat's smell passing through the mesh to attract flies that laid eggs. Microorganisms are said to have been first directly observed in the 1670s by Anton van Leeuwenhoek, an early pioneer in microbiology, considered "the Father of Microbiology". Leeuwenhoek is said to be the first to see and describe bacteria in 1674, yeast cells, the teeming life in a drop of water (such as algae), and the circulation of blood corpuscles in capillaries.
The word "bacteria" didn't exist yet, so he called these microscopic living organisms "animalcules", meaning "little animals". Those "very little animalcules" he was able to isolate from different sources, such as rainwater, pond and well water, and the human mouth and intestine. Yet German Jesuit priest and scholar Athanasius Kircher (or "Kirchner", as it is often spelled) may have observed such microorganisms prior to this. One of his books written in 1646 contains a chapter in Latin, which reads in translation: "Concerning the wonderful structure of things in nature, investigated by microscope...who would believe that vinegar and milk abound with an innumerable multitude of worms." Kircher defined the invisible organisms found in decaying bodies, meat, milk, and secretions as "worms." His studies with the microscope led him to the belief, which he was possibly the first to hold, that disease and putrefaction, or decay were caused by the presence of invisible living bodies, writing that "a number of things might be discovered in the blood of fever patients." When Rome was struck by the bubonic plague in 1656, Kircher investigated the blood of plague victims under the microscope. He noted the presence of "little worms" or "animalcules" in the blood and concluded that the disease was caused by microorganisms. Kircher was the first to attribute infectious disease to a microscopic pathogen, inventing the germ theory of disease, which he outlined in his Scrutinium Physico-Medicum, published in Rome in 1658. Kircher's conclusion that disease was caused by microorganisms was correct, although it is likely that what he saw under the microscope were in fact red or white blood cells and not the plague agent itself. Kircher also proposed hygienic measures to prevent the spread of disease, such as isolation, quarantine, burning clothes worn by the infected, and wearing facemasks to prevent the inhalation of germs. It was Kircher who first proposed that living beings enter and exist in the blood. In the 18th century, more proposals were made, but struggled to catch on. In 1700, physician Nicolas Andry argued that microorganisms he called "worms" were responsible for smallpox and other diseases. In 1720, Richard Bradley theorised that the plague and "all pestilential distempers" were caused by "poisonous insects", living creatures viewable only with the help of microscopes. In 1762, the Austrian physician Marcus Antonius von Plenciz (1705–1786) published a book titled Opera medico-physica. It outlined a theory of contagion stating that specific animalcules in the soil and the air were responsible for causing specific diseases. Von Plenciz noted the distinction between diseases which are both epidemic and contagious (like measles and dysentery), and diseases which are contagious but not epidemic (like rabies and leprosy). The book cites Anton van Leeuwenhoek to show how ubiquitous such animalcules are and was unique for describing the presence of germs in ulcerating wounds. Ultimately, the theory espoused by von Plenciz was not accepted by the scientific community. === 19th and 20th centuries === ==== Agostino Bassi, Italy ==== During the early 19th century, driven by economic concerns over collapsing silk production, Italian entomologist Agostino Bassi researched a silkworm disease known as "muscardine" in French and "calcinaccio" or "mal del segno" in Italian, causing white fungal spots along the caterpillar. 
From 1835 to 1836, Bassi published his findings that fungal spores transmitted the disease between individuals. In recommending the rapid removal of diseased caterpillars and disinfection of their surfaces, Bassi outlined methods used in modern preventative healthcare. Italian naturalist Giuseppe Gabriel Balsamo-Crivelli named the causative fungal species after Bassi, currently classified as Beauveria bassiana. ==== Louis-Daniel Beauperthuy, France ==== In 1838 French specialist in tropical medicine Louis-Daniel Beauperthuy pioneered using microscopy in relation to diseases and independently developed a theory that all infectious diseases were due to parasitic infection with "animalcules" (microorganisms). With the help of his friend M. Adele de Rosseville, he presented his theory in a formal presentation before the French Academy of Sciences in Paris. By 1853, he was convinced that malaria and yellow fever were spread by mosquitos. He even identified the particular group of mosquitos that transmit yellow fever as the "domestic species" of "striped-legged mosquito", which can be recognised as Aedes aegypti, the actual vector. He published his theory in 1854 in the Gaceta Oficial de Cumana ("Official Gazette of Cumana"). His reports were assessed by an official commission, which discarded his mosquito theory. ==== Ignaz Semmelweis, Austria ==== Ignaz Semmelweis, a Hungarian obstetrician working at the Vienna General Hospital (Allgemeines Krankenhaus) in 1847, noticed the dramatically high maternal mortality from puerperal fever following births assisted by doctors and medical students. However, those attended by midwives were relatively safe. Investigating further, Semmelweis made the connection between puerperal fever and examinations of delivering women by doctors, and further realized that these physicians had usually come directly from autopsies. Asserting that puerperal fever was a contagious disease and that matter from autopsies was implicated in its spread, Semmelweis made doctors wash their hands with chlorinated lime water before examining pregnant women. He then documented a sudden reduction in the mortality rate from 18% to 2.2% over a period of a year. Despite this evidence, he and his theories were rejected by most of the contemporary medical establishment. ==== Gideon Mantell, UK ==== Gideon Mantell, the Sussex doctor more famous for discovering dinosaur fossils, spent time with his microscope, and speculated in his Thoughts on Animalcules (1850) that perhaps "many of the most serious maladies which afflict humanity, are produced by peculiar states of invisible animalcular life". ==== John Snow, UK ==== British physician John Snow is credited as a founder of modern epidemiology for studying the 1854 Broad Street cholera outbreak. Snow criticized the Italian anatomist Giovanni Maria Lancisi for his early 18th century writings that claimed swamp miasma spread malaria, rebutting that bad air from decomposing organisms was not present in all cases. In his 1849 pamphlet On the Mode of Communication of Cholera, Snow proposed that cholera spread through the fecal–oral route, replicating in human lower intestines. In the book's second edition, published in 1855, Snow theorized that cholera was caused by cells smaller than human epithelial cells, leading to Robert Koch's 1884 confirmation of the bacterial species Vibrio cholerae as the causative agent. 
In recognizing a biological origin, Snow recommended boiling and filtering water, setting the precedent for modern boil-water advisory directives. Through a statistical analysis tying cholera cases to specific water pumps associated with the Southwark and Vauxhall Waterworks Company, which supplied sewage-polluted water from the River Thames, Snow showed that areas supplied by this company experienced fourteen times as many deaths as residents using Lambeth Waterworks Company pumps that obtained water from the upriver, cleaner Seething Wells. While Snow received praise for convincing the Board of Guardians of St James's Parish to remove the handles of contaminated pumps, he noted that the outbreak's cases were already declining as scared residents fled the region. ==== Louis Pasteur, France ==== During the mid-19th century, French microbiologist Louis Pasteur showed that treating the female genital tract with boric acid killed the microorganisms causing postpartum infections while avoiding damage to mucous membranes. Building on Redi's work, Pasteur disproved spontaneous generation by constructing swan neck flasks containing nutrient broth. Since the flask contents were only fermented when in direct contact with the external environment's air by removing the curved tubing, Pasteur demonstrated that bacteria must travel between sites of infection to colonize environments. Similar to Bassi, Pasteur extended his research on germ theory by studying pébrine, a disease that causes brown spots on silkworms. While Swiss botanist Carl Nägeli discovered the fungal species Nosema bombycis in 1857, Pasteur applied the findings to recommend improved ventilation and screening of silkworm eggs, an early form of disease surveillance. ==== Robert Koch, Germany ==== In 1884, German bacteriologist Robert Koch published four criteria for establishing causality between specific microorganisms and diseases, now known as Koch's postulates: The microorganism must be found in abundance in all organisms with the disease, but should not be found in healthy organisms. The microorganism must be isolated from a diseased organism and grown in pure culture. The cultured microorganism should cause disease when introduced into a healthy organism. The microorganism must be re-isolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent. During his lifetime, Koch recognized that the postulates were not universally applicable, such as asymptomatic carriers of cholera violating the first postulate. For this same reason, the third postulate specifies "should", rather than "must", because not all host organisms exposed to an infectious agent will acquire the infection, potentially due to differences in prior exposure to the pathogen. Limiting the second postulate, it was later discovered that viruses cannot be grown in pure cultures because they are obligate intracellular parasites, making it impossible to fulfill the second postulate. Similarly, pathogenic misfolded proteins, known as prions, only spread by transmitting their structure to other proteins, rather than self-replicating. While Koch's postulates retain historical importance for emphasizing that correlation does not imply causation, many pathogens are accepted as causative agents of specific diseases without fulfilling all of the criteria. 
In 1988, American microbiologist Stanley Falkow published a molecular version of Koch's postulates to establish correlation between microbial genes and virulence factors. ==== Joseph Lister, UK ==== After reading Pasteur's papers on bacterial fermentation, British surgeon Joseph Lister recognized that compound fractures, involving bones breaking through the skin, were more likely to become infected due to exposure to environmental microorganisms. He recognized that carbolic acid could be applied to the site of injury as an effective antiseptic. == See also == Alexander Fleming Cell theory Cooties Epidemiology Germ theory denialism History of emerging infectious diseases History of public health in the United Kingdom Robert Hooke Rudolf Virchow Zymotic disease == References == == Further reading == Baldwin, Peter. Contagion and the State in Europe, 1830-1930 (Cambridge UP, 1999), focus on cholera, smallpox and syphilis in Britain, France, Germany and Sweden. Brock, Thomas D. Robert Koch. A Life in Medicine and Bacteriology (1988). Dubos, René. Louis Pasteur: Free Lance of Science (1986) Gaynes, Robert P. Germ Theory (ASM Press, 2023), pp.143-205 online Geison, Gerald L. The Private Science of Louis Pasteur (Princeton University Press, 1995) online Hudson, Robert P. Disease and Its Control: The Shaping of Modern Thought (1983) Lawrence, Christopher, and Richard Dixey. "Practising on Principle: Joseph Lister and the Germ Theories of Disease," in Medical Theory, Surgical Practice: Studies in the History of Surgery ed. by Christopher Lawrence (Routledge, 1992), pp. 153-215. Magner, Lois N. A history of infectious diseases and the microbial world (2008) online Magner, Lois N. A History of Medicine (1992) pp. 305–334. online Nutton, Vivian. "The seeds of disease: an explanation of contagion and infection from the Greeks to the Renaissance." Medical history 27.1 (1983): 1-34. online Porter, Roy. Blood and Guts: A Short History of Medicine (2004) online Tomes, Nancy. 'The gospel of germs: Men, women, and the microbe in American life (Harvard University Press, 1999) online. Tomes, Nancy. "Moralizing the microbe: the germ theory and the moral construction of behavior in the late-nineteenth-century antituberculosis movement." in Morality and health (Routledge, 2013) pp. 271-294. Tomes, Nancy J. "American attitudes toward the germ theory of disease: Phyllis Allen Richmond revisited." Journal of the History of Medicine and Allied Sciences 52.1 (1997): 17-50. online Winslow, Charles-Edward Amory. The Conquest of Epidemic Disease. A Chapter in the History of Ideas (1943) online. == External links == John Horgan, "Germ Theory" (2023) Stephen T. Abedon Germ Theory of Disease Supplemental Lecture (98/03/28 update), www.mansfield.ohio-state.edu William C. Campbell The Germ Theory Timeline, germtheorytimeline.info Science's war on infectious diseases, www.creatingtechnology.org
Wikipedia/Germ_theory_of_disease
Some chemical authorities define an organic compound as a chemical compound that contains a carbon–hydrogen or carbon–carbon bond; others consider an organic compound to be any chemical compound that contains carbon. For example, carbon-containing compounds such as alkanes (e.g. methane CH4) and their derivatives are universally considered organic, but many others are sometimes considered inorganic, such as certain compounds of carbon with nitrogen and oxygen (e.g. cyanide ion CN−, hydrogen cyanide HCN, chloroformic acid ClCO2H, carbon dioxide CO2, and carbonate ion CO32−). Due to carbon's ability to catenate (form chains with other carbon atoms), millions of organic compounds are known. The study of the properties, reactions, and syntheses of organic compounds comprises the discipline known as organic chemistry. For historical reasons, a few classes of carbon-containing compounds (e.g., carbonate salts and cyanide salts), along with a few other exceptions (e.g., carbon dioxide, and even hydrogen cyanide despite the fact that it contains a carbon–hydrogen bond), are generally considered inorganic. Other than those just named, little consensus exists among chemists on precisely which carbon-containing compounds are excluded, making any rigorous definition of an organic compound elusive. Although organic compounds make up only a small percentage of Earth's crust, they are of central importance because all known life is based on organic compounds. Living things incorporate inorganic carbon compounds into organic compounds through a network of processes (the carbon cycle) that begins with the conversion of carbon dioxide and a hydrogen source like water into simple sugars and other organic molecules by autotrophic organisms using light (photosynthesis) or other sources of energy. Most synthetically-produced organic compounds are ultimately derived from petrochemicals consisting mainly of hydrocarbons, which are themselves formed from the high pressure and temperature degradation of organic matter underground over geological timescales. This ultimate derivation notwithstanding, organic compounds are no longer defined as compounds originating in living things, as they were historically. In chemical nomenclature, an organyl group, frequently represented by the letter R, refers to any monovalent substituent whose open valence is on a carbon atom. == Definition == For historical reasons discussed below, a few types of carbon-containing compounds, such as carbides, carbonates (excluding carbonate esters), simple oxides of carbon (for example, CO and CO2) and cyanides are generally considered inorganic compounds. Different forms (allotropes) of pure carbon, such as diamond, graphite, fullerenes and carbon nanotubes are also excluded because they are simple substances composed of a single element and so not generally considered chemical compounds. The word "organic" in this context does not mean "natural". == History == === Vitalism === Vitalism was a widespread conception that substances found in organic nature are formed from the chemical elements by the action of a "vital force" or "life-force" (vis vitalis) that only living organisms possess. In the 1810s, Jöns Jacob Berzelius argued that a regulative force must exist within living bodies. Berzelius also contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds).
Vitalism taught that the formation of these "organic" compounds was fundamentally different from that of the "inorganic" compounds that could be obtained from the elements by chemical manipulations in laboratories. Vitalism survived for a short period after the formulation of modern ideas about the atomic theory and chemical elements. It first came under question in 1824, when Friedrich Wöhler synthesized oxalic acid, a compound known to occur only in living organisms, from cyanogen. A further experiment was Wöhler's 1828 synthesis of urea from the inorganic salts potassium cyanate and ammonium sulfate. Urea had long been considered an "organic" compound, as it was known to occur only in the urine of living organisms. Wöhler's experiments were followed by many others, in which increasingly complex "organic" substances were produced from "inorganic" ones without the involvement of any living organism, thus disproving vitalism. === Modern classification and ambiguities === Although vitalism has been discredited, scientific nomenclature retains the distinction between organic and inorganic compounds. The modern meaning of organic compound is any compound that contains a significant amount of carbon—even though many of the organic compounds known today have no connection to any substance found in living organisms. The term carbogenic has been proposed by E. J. Corey as a modern alternative to organic, but this neologism remains relatively obscure. The organic compound L-isoleucine molecule presents some features typical of organic compounds: carbon–carbon bonds, carbon–hydrogen bonds, as well as covalent bonds from carbon to oxygen and to nitrogen. As described in detail below, any definition of organic compound that uses simple, broadly-applicable criteria turns out to be unsatisfactory, to varying degrees. The modern, commonly accepted definition of organic compound essentially amounts to any carbon-containing compound, excluding several classes of substances traditionally considered "inorganic". The list of substances so excluded varies from author to author. Still, it is generally agreed upon that there are (at least) a few carbon-containing compounds that should not be considered organic. For instance, almost all authorities would require the exclusion of alloys that contain carbon, including steel (which contains cementite, Fe3C), as well as other metal and semimetal carbides (including "ionic" carbides, e.g., Al4C3 and CaC2, and "covalent" carbides, e.g. B4C and SiC, and graphite intercalation compounds, e.g. KC8). Other compounds and materials that are considered 'inorganic' by most authorities include: metal carbonates, simple oxides of carbon (CO, CO2, and arguably, C3O2), the allotropes of carbon, cyanide derivatives not containing an organic residue (e.g., KCN, (CN)2, BrCN, cyanate anion OCN−, etc.), and heavier analogs thereof (e.g., cyaphide anion CP−, CSe2, COS; although carbon disulfide CS2 is often classed as an organic solvent). Halides of carbon without hydrogen (e.g., CF4 and CClF3), phosgene (COCl2), carboranes, metal carbonyls (e.g., nickel tetracarbonyl), mellitic anhydride (C12O9), and other exotic oxocarbons are also considered inorganic by some authorities. Nickel tetracarbonyl (Ni(CO)4) and other metal carbonyls are often volatile liquids, like many organic compounds, yet they contain only carbon bonded to a transition metal and to oxygen, and are often prepared directly from metal and carbon monoxide.
Nickel tetracarbonyl is typically classified as an organometallic compound as it satisfies the broad definition that organometallic chemistry covers all compounds that contain at least one carbon to metal covalent bond; it is unknown whether organometallic compounds form a subset of organic compounds. For example, the evidence of covalent Fe-C bonding in cementite, a major component of steel, places it within this broad definition of organometallic, yet steel and other carbon-containing alloys are seldom regarded as organic compounds. Thus, it is unclear whether the definition of organometallic should be narrowed, whether these considerations imply that organometallic compounds are not necessarily organic, or both. Metal complexes with organic ligands but no carbon-metal bonds (e.g., (CH3CO2)2Cu) are not considered organometallic; instead, they are called metal-organic compounds (and might be considered organic). The relatively narrow definition of organic compounds as those containing C–H bonds excludes compounds that are (historically and practically) considered organic. Neither urea CO(NH2)2 nor oxalic acid (COOH)2 are organic by this definition, yet they were two key compounds in the vitalism debate. However, the IUPAC Blue Book on organic nomenclature specifically mentions urea and oxalic acid as organic compounds. Other compounds lacking C–H bonds but traditionally considered organic include benzenehexol, mesoxalic acid, and carbon tetrachloride. Mellitic acid, which contains no C–H bonds, is considered a possible organic compound in Martian soil. Terrestrially, it, and its anhydride, mellitic anhydride, are associated with the mineral mellite (Al2C6(COO)6·16H2O). A slightly broader definition of the organic compound includes all compounds bearing C–H or C–C bonds. This would still exclude urea. Moreover, this definition still leads to somewhat arbitrary divisions in sets of carbon-halogen compounds. For example, CF4 and CCl4 would be considered by this rule to be "inorganic", whereas CHF3, CHCl3, and C2Cl6 would be organic, though these compounds share many physical and chemical properties. == Classification == Organic compounds may be classified in a variety of ways. One major distinction is between natural and synthetic compounds. Organic compounds can also be classified or subdivided by the presence of heteroatoms, e.g., organometallic compounds, which feature bonds between carbon and a metal, and organophosphorus compounds, which feature bonds between carbon and a phosphorus. Another distinction, based on the size of organic compounds, distinguishes between small molecules and polymers. === Natural compounds === Natural compounds refer to those that are produced by plants or animals. Many of these are still extracted from natural sources because they would be more expensive to produce artificially. Examples include most sugars, some alkaloids and terpenoids, certain nutrients such as vitamin B12, and, in general, those natural products with large or stereoisometrically complicated molecules present in reasonable concentrations in living organisms. Further compounds of prime importance in biochemistry are antigens, carbohydrates, enzymes, hormones, lipids and fatty acids, neurotransmitters, nucleic acids, proteins, peptides and amino acids, lectins, vitamins, and fats and oils. === Synthetic compounds === Compounds that are prepared by reaction of other compounds are known as "synthetic". 
They may be either compounds that are already found in plants/animals or those artificial compounds that do not occur naturally. Most polymers (a category that includes all plastics and rubbers) are organic synthetic or semi-synthetic compounds. === Biotechnology === Many organic compounds—two examples are ethanol and insulin—are manufactured industrially using organisms such as bacteria and yeast. Typically, the DNA of an organism is altered to express compounds not ordinarily produced by the organism. Many such biotechnology-engineered compounds did not previously exist in nature. == Databases == The CAS database is the most comprehensive repository for data on organic compounds. The search tool SciFinder is offered. The Beilstein database contains information on 9.8 million substances, covers the scientific literature from 1771 to the present, and is today accessible via Reaxys. Structures and a large diversity of physical and chemical properties are available for each substance, with reference to original literature. PubChem contains 18.4 million entries on compounds and especially covers the field of medicinal chemistry. A great number of more specialized databases exist for diverse branches of organic chemistry. == Structure determination == The main tools are proton and carbon-13 NMR spectroscopy, IR spectroscopy, mass spectrometry, UV-Vis spectroscopy and X-ray crystallography. == See also == Inorganic compound – Chemical compound without any carbon-hydrogen bonds List of chemical compounds List of organic compounds Organometallic chemistry – Study of organic compounds containing metal(s) == References == == External links == Organic Compounds Database Organic Materials Database
Wikipedia/Organic_molecule
Nanotechnology is the manipulation of matter with at least one dimension sized from 1 to 100 nanometers (nm). At this scale, commonly known as the nanoscale, surface area and quantum mechanical effects become important in describing properties of matter. This definition of nanotechnology includes all types of research and technologies that deal with these special properties. It is common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to research and applications whose common trait is scale. An earlier understanding of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabricating macroscale products, now referred to as molecular nanotechnology. Nanotechnology defined by scale includes fields of science such as surface science, organic chemistry, molecular biology, semiconductor physics, energy storage, engineering, microfabrication, and molecular engineering. The associated research and applications range from extensions of conventional device physics to molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale. Nanotechnology may be able to create new materials and devices with diverse applications, such as in nanomedicine, nanoelectronics, agricultural sectors, biomaterials, energy production, and consumer products. However, nanotechnology raises issues, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted. == Origins == The concepts that seeded nanotechnology were first discussed in 1959 by physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which achieved popular success and helped thrust nanotechnology into the public sphere. In it he proposed the idea of a nanoscale "assembler" that would be able to build a copy of itself and of other items of arbitrary complexity with atom-level control. Also in 1986, Drexler co-founded The Foresight Institute to increase public awareness and understanding of nanotechnology concepts and implications. The emergence of nanotechnology as a field in the 1980s occurred through the convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework, and experimental advances that drew additional attention to the prospects. In the 1980s, two breakthroughs helped to spark the growth of nanotechnology. First, the invention of the scanning tunneling microscope in 1981 enabled visualization of individual atoms and bonds, and was successfully used to manipulate individual atoms in 1989. The microscope's developers, Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory, received a Nobel Prize in Physics in 1986. Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.
Second, fullerenes (buckyballs) were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry. C60 was not initially described as nanotechnology; the term was used regarding subsequent work with related carbon nanotubes (sometimes called graphene tubes or Bucky tubes) which suggested potential applications for nanoscale electronics and devices. The discovery of carbon nanotubes is attributed to Sumio Iijima of NEC in 1991, for which Iijima won the inaugural 2008 Kavli Prize in Nanoscience. In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology. Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003. Meanwhile, commercial products based on advancements in nanoscale technologies began emerging. These products were limited to bulk applications of nanomaterials and did not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based sunscreens, carbon fiber strengthening using silica nanoparticles, and carbon nanotubes for stain-resistant textiles. Governments moved to promote and fund research into nanotechnology, such as the American National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established research funding, and in Europe via the European Framework Programmes for Research and Technological Development. By the mid-2000s, scientific attention began to flourish. Nanotechnology roadmaps centered on atomically precise manipulation of matter and discussed existing and projected capabilities, goals, and applications. == Fundamental concepts == Nanotechnology is the science and engineering of functional systems at the molecular scale. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up to make complete, high-performance products. One nanometer (nm) is one billionth, or 10−9, of a meter. By comparison, typical carbon–carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and DNA's diameter is around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken as the scale range 1 to 100 nm, following the definition used by the American National Nanotechnology Initiative. The lower limit is set by the size of atoms (hydrogen has the smallest atoms, which have an approximately 0.25 nm kinetic diameter). The upper limit is more or less arbitrary, but is around the size below which phenomena not observed in larger structures start to become apparent and can be made use of. These phenomena make nanotechnology distinct from devices that are merely miniaturized versions of an equivalent macroscopic device; such devices are on a larger scale and come under the description of microtechnology. To put that scale in another context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the earth (a rough numerical check of this comparison is sketched below). Two main approaches are used in nanotechnology.
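A quick back-of-the-envelope check of the marble-to-Earth comparison above, written as a short Python sketch; the marble and Earth diameters are rounded illustrative values rather than figures from the article.

# Is a nanometer to a meter roughly what a marble is to the Earth?
nanometer_to_meter = 1e-9      # 1 nm expressed in meters
marble_diameter_m = 0.013      # a typical marble, about 1.3 cm (assumed value)
earth_diameter_m = 1.27e7      # Earth's mean diameter, about 12,700 km (rounded)

marble_to_earth = marble_diameter_m / earth_diameter_m
print(f"nm/m ratio:         {nanometer_to_meter:.1e}")   # 1.0e-09
print(f"marble/Earth ratio: {marble_to_earth:.1e}")      # about 1.0e-09, the same order of magnitude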
In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control. Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved to provide nanotechnology's scientific foundation. === Larger to smaller: a materials perspective === Several phenomena become pronounced as system size. These include statistical mechanical effects, as well as quantum mechanical effects, for example, the "quantum size effect" in which the electronic properties of solids alter along with reductions in particle size. Such effects do not apply at macro or micro dimensions. However, quantum effects can become significant when nanometer scales. Additionally, physical (mechanical, electrical, optical, etc.) properties change versus macroscopic systems. One example is the increase in surface area to volume ratio altering mechanical, thermal, and catalytic properties of materials. Diffusion and reactions can be different as well. Systems with fast ion transport are referred to as nanoionics. The mechanical properties of nanosystems are of interest in research. === Simple to complex: a molecular perspective === Modern synthetic chemistry can prepare small molecules of almost any structure. These methods are used to manufacture a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner. These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry to automatically arrange themselves into a useful conformation through a bottom-up approach. The concept of molecular recognition is important: molecules can be designed so that a specific configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick basepairing rules are a direct result of this, as is the specificity of an enzyme targeting a single substrate, or the specific folding of a protein. Thus, components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole. Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, many examples of self-assembly based on molecular recognition in exist in biology, most notably Watson–Crick basepairing and enzyme-substrate interactions. === Molecular nanotechnology: a long-term view === Molecular nanotechnology, sometimes called molecular manufacturing, concerns engineered nanosystems (nanoscale machines) operating on the molecular scale. Molecular nanotechnology is especially associated with molecular assemblers, machines that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles. 
When Drexler independently coined and popularized the term "nanotechnology", he envisioned manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated molecular machines were possible: biology was full of examples of sophisticated, stochastically optimized biological machines. Drexler and other researchers have proposed that advanced nanotechnology ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification. The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems: Molecular Machinery, Manufacturing, and Computation. In general, assembling devices on the atomic scale requires positioning atoms on other atoms of comparable size and stickiness. Carlo Montemagno's view is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard Smalley argued that mechanosynthesis was impossible due to difficulties in mechanically manipulating individual molecules. This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003. Though biology clearly demonstrates that molecular machines are possible, non-biological molecular machines remained in their infancy. Alex Zettl and colleagues at Lawrence Berkeley Laboratories and UC Berkeley constructed at least three molecular devices whose motion is controlled via changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator. Ho and Lee at Cornell University in 1999 used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal and chemically bound the CO to the Fe by applying a voltage. == Research == === Nanomaterials === Many areas of science develop or study materials having unique properties arising from their nanoscale dimensions. Interface and colloid science produced many materials that may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are related to nanoionics and nanoelectronics. Nanoscale materials can be used for bulk applications; most commercial applications of nanotechnology are of this flavor. Progress has been made in using these materials for medical applications, including tissue engineering, drug delivery, antibacterials and biosensors. Nanoscale materials such as nanopillars are used in solar cells. Applications incorporating semiconductor nanoparticles in products such as display technology, lighting, solar cells and biological imaging; see quantum dots. === Bottom-up approaches === The bottom-up approach seeks to arrange smaller components into more complex assemblies. DNA nanotechnology utilizes Watson–Crick basepairing to construct well-defined structures out of DNA and other nucleic acids. Approaches from the field of "classical" chemical synthesis (inorganic and organic synthesis) aim at designing molecules with well-defined shape (e.g. bis-peptides). 
More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation. Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip-pen nanolithography. This technique fits into the larger subfield of nanolithography. Molecular-beam epitaxy allows for bottom-up assemblies of materials, most notably semiconductor materials commonly used in chip and computing applications, stacks, gating, and nanowire lasers. === Top-down approaches === These seek to create smaller devices by using larger ones to direct their assembly. Many technologies that descended from conventional solid-state silicon methods for fabricating microprocessors are capable of creating features smaller than 100 nm. Giant magnetoresistance-based hard drives already on the market fit this description, as do atomic layer deposition (ALD) techniques. Peter Grünberg and Albert Fert received the Nobel Prize in Physics in 2007 for their discovery of giant magnetoresistance and contributions to the field of spintronics. Solid-state techniques can be used to create nanoelectromechanical systems or NEMS, which are related to microelectromechanical systems or MEMS. Focused ion beams can directly remove material, or even deposit material when suitable precursor gasses are applied at the same time. For example, this technique is used routinely to create sub-100 nm sections of material for analysis in transmission electron microscopy. Atomic force microscope tips can be used as a nanoscale "write head" to deposit a resist, which is then followed by an etching process to remove material in a top-down method. === Functional approaches === Functional approaches seek to develop useful components without regard to how they might be assembled. Magnetic assembly for the synthesis of anisotropic superparamagnetic materials such as magnetic nano chains. Molecular scale electronics seeks to develop molecules with useful electronic properties. These could be used as single-molecule components in a nanoelectronic device, such as rotaxane. Synthetic chemical methods can be used to create synthetic molecular motors, such as in a so-called nanocar. === Biomimetic approaches === Bionics or biomimicry seeks to apply biological methods and systems found in nature to the study and design of engineering systems and modern technology. Biomineralization is one example of the systems studied. Bionanotechnology is the use of biomolecules for applications in nanotechnology, including the use of viruses and lipid assemblies. Nanocellulose, a nanopolymer often used for bulk-scale applications, has gained interest owing to its useful properties such as abundance, high aspect ratio, good mechanical properties, renewability, and biocompatibility. === Speculative === These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry could progress. These often take a big-picture view, with more emphasis on societal implications than engineering details. Molecular nanotechnology is a proposed approach that involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields, and many of its proposed techniques are beyond current capabilities. 
Nanorobotics considers self-sufficient machines operating at the nanoscale. There are hopes for applying nanorobots in medicine. Nevertheless, progress on innovative materials and patented methodologies has been demonstrated. Productive nanosystems are "systems of nanosystems" that could produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete (i.e. atomic) nature of matter and the possibility of exponential growth, this stage could form the basis of another industrial revolution. Mihail Roco proposed four stages of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, progressing from passive nanostructures to active nanodevices to complex nanomachines and ultimately to productive nanosystems. Programmable matter seeks to design materials whose properties can be easily, reversibly and externally controlled through a fusion of information science and materials science. Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are used only informally. === Dimensionality in nanomaterials === Nanomaterials can be classified as 0D, 1D, 2D and 3D nanomaterials. Dimensionality plays a major role in determining the characteristics of nanomaterials, including their physical, chemical, and biological properties. With the decrease in dimensionality, an increase in surface-to-volume ratio is observed. This indicates that lower-dimensional nanomaterials have a higher surface area than 3D nanomaterials (a short calculation illustrating this scaling appears below). Two dimensional (2D) nanomaterials have been extensively investigated for electronic, biomedical, drug delivery and biosensor applications. == Tools and techniques == === Scanning microscopes === The atomic force microscope (AFM) and the Scanning Tunneling Microscope (STM) are two versions of scanning probes that are used for nano-scale observation. Scanning probe microscopes have much higher resolution than optical microscopes, since they are not limited by the wavelengths of sound or light. The tip of a scanning probe can also be used to manipulate nanostructures (positional assembly). Feature-oriented scanning may be a promising way to implement these nano-scale manipulations via an automatic algorithm. However, this is still a slow process because of the low scanning velocity of the microscope. The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and synthesis. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, the feature-oriented scanning approach, atoms or molecules can be moved around on a surface with scanning probe microscopy techniques. === Lithography === Various techniques of lithography, such as optical lithography, X-ray lithography, dip pen lithography, electron beam lithography or nanoimprint lithography, offer top-down fabrication techniques where a bulk material is reduced to a nano-scale pattern.
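The surface-to-volume scaling noted in the dimensionality discussion above can be made concrete with a short calculation. This is an illustrative sketch only: the particle radii are arbitrary example values, and it simply uses the fact that for a sphere the surface-to-volume ratio equals 3/r.

import math

def surface_to_volume_ratio(radius_m: float) -> float:
    """Surface-to-volume ratio of a sphere, equal to 3 / r, in units of 1/m."""
    surface = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface / volume

# Arbitrary example radii: a 1 mm bead, a 1 micrometer particle, a 10 nm nanoparticle.
for radius in (1e-3, 1e-6, 1e-8):
    print(f"radius = {radius:.0e} m  ->  surface/volume = {surface_to_volume_ratio(radius):.1e} 1/m")

Every hundredfold reduction in radius raises the ratio by the same factor, which is one reason catalytic, thermal, and mechanical behavior can change so markedly at the nanoscale.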
Another group of nano-technological techniques includes those used for fabrication of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further including molecular self-assembly techniques such as those employing di-block copolymers. ==== Bottom-up ==== In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Dual-polarization interferometry is one tool suitable for characterization of self-assembled thin films. Another variation of the bottom-up approach is molecular-beam epitaxy or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE lays down atomically precise layers of atoms and, in the process, builds up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics. Therapeutic products based on responsive nanomaterials, such as the highly deformable, stress-sensitive Transfersome vesicles, are approved for human use in some countries. == Applications == As of August 21, 2008, the Project on Emerging Nanotechnologies estimated that over 800 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3–4 per week. Most applications are "first generation" passive nanomaterials that include titanium dioxide in sunscreen, cosmetics, surface coatings, and some food products; carbon allotropes used to produce gecko tape; silver in food packaging, clothing, disinfectants, and household appliances; zinc oxide in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide as a fuel catalyst. In the electric car industry, single wall carbon nanotubes (SWCNTs) address key lithium-ion battery challenges, including energy density, charge rate, service life, and cost. SWCNTs connect electrode particles during the charge/discharge process, preventing premature battery degradation. Their exceptional ability to wrap active material particles enhances electrical conductivity and physical properties, setting them apart from multi-walled carbon nanotubes and carbon black. Further applications allow tennis balls to last longer, golf balls to fly straighter, and bowling balls to become more durable. Trousers and socks have been infused with nanotechnology to last longer and keep the wearer cooler in the summer. Bandages are infused with silver nanoparticles to heal cuts faster. Video game consoles and personal computers may become cheaper, faster, and contain more memory thanks to nanotechnology. Nanotechnology is also being used to build structures for on-chip computing with light, for example on-chip optical quantum information processing and picosecond transmission of information. Nanotechnology may have the ability to make existing medical applications cheaper and easier to use in places like doctors' offices and homes. Cars are being manufactured with nanomaterials so that car parts may require fewer metals during manufacturing and less fuel to operate in the future.
Nanoencapsulation involves the enclosure of active substances within carriers. Typically, these carriers offer advantages such as enhanced bioavailability, controlled release, targeted delivery, and protection of the encapsulated substances. In the medical field, nanoencapsulation plays a significant role in drug delivery. It facilitates more efficient drug administration, reduces side effects, and increases treatment effectiveness. Nanoencapsulation is particularly useful for improving the bioavailability of poorly water-soluble drugs, enabling controlled and sustained drug release, and supporting the development of targeted therapies. These features collectively contribute to advancements in medical treatments and patient care. Nanotechnology may play a role in tissue engineering. When designing scaffolds, researchers attempt to mimic the nanoscale features of a cell's microenvironment to direct its differentiation down a suitable lineage. For example, when creating scaffolds to support bone growth, researchers may mimic osteoclast resorption pits. Researchers used DNA origami-based nanobots capable of carrying out logic functions to target drug delivery in cockroaches. A nano bible (a 0.5 mm² silicon chip) was created by the Technion in order to increase youth interest in nanotechnology. == Implications == One concern is the effect that industrial-scale manufacturing and use of nanomaterials will have on human health and the environment, as suggested by nanotoxicology research. For these reasons, some groups advocate that nanotechnology be regulated. However, regulation might stifle scientific research and the development of beneficial innovations. Public health research agencies, such as the National Institute for Occupational Safety and Health, research potential health effects stemming from exposures to nanoparticles. Nanoparticle products may have unintended consequences. Researchers have discovered that bacteriostatic silver nanoparticles used in socks to reduce foot odor are released in the wash. These particles are then flushed into the wastewater stream and may destroy bacteria that are critical components of natural ecosystems, farms, and waste treatment processes. Public deliberations on risk perception in the US and UK carried out by the Center for Nanotechnology in Society found that participants were more positive about nanotechnologies for energy applications than for health applications, with health applications raising moral and ethical dilemmas such as cost and availability. Experts, including the director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies, David Rejeski, testified that commercialization depends on adequate oversight, risk research strategy, and public engagement. As of 2008, Berkeley, California was the only US city to regulate nanotechnology. === Health and environmental concerns === Inhaling airborne nanoparticles and nanofibers may contribute to pulmonary diseases, e.g. fibrosis. Researchers found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, which led to significant increases in biomarkers for inflammation and stress response. Other work found that nanoparticles can induce skin aging through oxidative stress in hairless mice. A two-year study at UCLA's School of Public Health found lab mice consuming nano-titanium dioxide showed DNA and chromosome damage to a degree "linked to all the big killers of man, namely cancer, heart disease, neurological disease and aging".
A Nature Nanotechnology study suggested that some forms of carbon nanotubes could be as harmful as asbestos if inhaled in sufficient quantities. Anthony Seaton of the Institute of Occupational Medicine in Edinburgh, Scotland, who contributed to the article on carbon nanotubes, said "We know that some of them probably have the potential to cause mesothelioma. So those sorts of materials need to be handled very carefully." In the absence of specific regulation forthcoming from governments, Paull and Lyons (2008) have called for an exclusion of engineered nanoparticles from food. A newspaper article reports that workers in a paint factory developed serious lung disease and that nanoparticles were found in their lungs. == Regulation == Calls for tighter regulation of nanotechnology have accompanied a debate related to human health and safety risks. Some regulatory agencies cover some nanotechnology products and processes – by "bolting on" nanotechnology to existing regulations – leaving clear gaps. Davies proposed a road map describing steps to deal with these shortcomings. Andrew Maynard, chief science advisor to the Woodrow Wilson Center's Project on Emerging Nanotechnologies, reported insufficient funding for human health and safety research, and as a result an inadequate understanding of human health and safety risks. Some academics called for stricter application of the precautionary principle, with delayed marketing approval, enhanced labelling and additional safety data requirements. A Royal Society report identified a risk of nanoparticles or nanotubes being released during disposal, destruction and recycling, and recommended that "manufacturers of products that fall under extended producer responsibility regimes such as end-of-life regulations publish procedures outlining how these materials will be managed to minimize possible human and environmental exposure". == See also == == References == == External links == What is Nanotechnology? (A Vega/BBC/OU Video Discussion).
Wikipedia/Nanoscience
The Natural Sciences Tripos is the framework within which most of the science at the University of Cambridge is taught. The tripos includes a wide range of Natural Sciences from physics, astronomy, and geoscience, to chemistry and biology, which are taught alongside the history and philosophy of science. The tripos comprises several courses within the University of Cambridge Tripos system. It is known for its broad range of study in the first year, in which students cannot study just one discipline, but instead must choose three courses in different areas of the natural sciences and one in mathematics. As is traditional at Cambridge, the degree awarded after Part II (three years of study) is a Bachelor of Arts (BA). A Master of Natural Sciences degree (MSci) is available to those who take the optional Part III (one further year). The tripos was started in the 19th century. == Teaching == Teaching is carried out by 16 different departments. Subjects offered in Part IA in 2019 were Biology of Cells, Chemistry, Computer Science, Evolution and Behaviour, Earth Sciences, Materials Science, Mathematics, Physics, Physiology of Organisms and Mathematical Biology; students must take three experimental subjects and one mathematics course. There are three options for the compulsory mathematics element in IA: "Mathematics A", "Mathematics B" and "Mathematical Biology". From 2020, Computer Science will no longer be an option in the natural sciences course. Students specialize further in the second year (Part IB) of their Tripos, taking three subjects from a choice of twenty, and completely in their third year (Part II) in, for example, genetics or astrophysics, although general third year courses do exist – Biomedical and Biological Sciences for biologists and Physical Sciences for chemists, physicists, etc. Fourth year options (Part III) are available in a number of subjects, and usually have an entry requirement of obtaining a 2:1 or a First in second year Tripos Examinations, and are applied for before the commencement of the third year. As of 2008, subjects with an available Part III option are: Astrophysics; Biochemistry; Chemistry; Earth Sciences; Materials Science and Metallurgy; and Experimental and Theoretical Physics. As of 2018 the tripos is delivered by sixteen different departments, including: the Department of Chemistry; the Department of Chemical Engineering and Biotechnology; the Department of Genetics; the Department of Physics; the Institute of Astronomy; the Department of Biochemistry; the Department of Pharmacology; the Department of Pathology; the Department of Plant Sciences; the Department of Physiology, Development and Neuroscience; the Department of Zoology; the Department of Psychology; the Department of Computer Science and Technology; the Department of Earth Sciences; the Department of Materials Science and Metallurgy; and the Department of History and Philosophy of Science. == Motivation == The University of Cambridge believes that the course's generalisation, rather than specialisation, gives its students an advantage. First, it allows students to experience subjects at university level before specialising. Second, many modern sciences exist at the boundaries of traditional disciplines, for example, applying methods from a different discipline.
Third, this structure allows other scientific subjects, such as Mathematics (traditionally a very strong subject at Cambridge), Medicine and the History and Philosophy of Science (and previously Computer Science, before its removal for 2020 entry), to link with the Natural Sciences Tripos so that once, say, the two-year Part I of the Medical Sciences tripos has been completed, one can specialise in another biological science in Part II during one's third year, and still come out with a science degree specialised enough to move into postgraduate studies, such as a PhD. == Student enrolment == As a result of this structure, the Natural Sciences Tripos has by far the greatest number of students of any Tripos. Undergraduates who are reading for the Natural Sciences Tripos in order to gain their degrees are colloquially known in University slang as 'NatScis' (pronounced "Nat-Skis"), being broadly nicknamed physical science ('phys') or biological science ('bio') NatScis, according to their course choices. (Of course, many students choose both physical and biological options in first year.) The split tends to be about 50:50 between the physical and biological sciences. In 2018, 2594 students applied and 577 were admitted to the Natural Sciences Tripos. In order to be accepted to study on the Natural Sciences course, students must sit the ESAT (Engineering and Science Admissions Test) exam in the year of their application. This is a test required by Cambridge to assess its candidates. == References ==
Wikipedia/Natural_Sciences_(Cambridge)
The historiography of science or the historiography of the history of science is the study of the history and methodology of the sub-discipline of history, known as the history of science, including its disciplinary aspects and practices (methods, theories, schools) and the study of its own historical development ("History of History of Science", i.e., the history of the discipline called History of Science). Historiographical debates regarding the proper method for the study of the history of science are sometimes difficult to demarcate from historical controversies regarding the course of science. Early controversies of the latter kind are considered by some to be the inception of the sub-discipline. == Amateur historiography of science == Histories of science were originally written by practicing and retired scientists, a notable early example being William Whewell's History of the Inductive Sciences (1837). Biographies of natural philosophers (early scientists) were also popular in the nineteenth century, helping to create the image of Isaac Newton as a scientific genius and national hero in Great Britain. H.G. Wells began a trend for histories of science on the grand scale, a kind of epic of civilisation and progress, with his Outline of History (1919/1920). Popular accounts of science's past were often linked to speculations about its future, with science fiction authors such as Isaac Asimov and L. Sprague de Camp dabbling in both. == Professional historiography of science == === Internalism and externalism === In the early 1930s, a paper given by the Soviet historian Boris Hessen prompted many historians to look at the ways in which scientific practices were allied with the needs and motivations of their context. Hessen's work focused on the socio-political factors that shape what science is done, and how. This method of doing the history of science, which became known as externalism, looks at the manner in which science and scientists are affected by, and guided by, their context and the world in which they exist. It is an approach which eschews the notion that the history of science is the development of pure thought over time, one idea leading to another in a contextual bubble which could exist at any place, at any time, if only given the right geniuses. The method of doing history of science which preceded externalism became known as internalism. Internalist histories of science often focus on the rational reconstruction of scientific ideas and consider the development of these ideas wholly within the scientific world. Although internalist histories of modern science tend to emphasize the norms of modern science, internalist histories can also consider the different systems of thought underlying the development of Babylonian astronomy or Medieval impetus theory. In practice, the line between internalism and externalism can be incredibly fuzzy. Few historians then, or now, would insist that either of these approaches in its extreme paints a wholly complete picture, nor would it necessarily be possible to practice one fully over the other. However, at their heart they contain a basic question about the nature of science: what is the relationship between the producers and consumers of scientific knowledge? The answer to this question must, in some form, inform the method in which the history of science and technology is conducted; conversely, how the history of science and technology is conducted, and what it concludes, can inform the answer to the question.
The question itself contains an entire host of philosophical questions: what is the nature of scientific truth? What does objectivity mean in a scientific context? How does change in scientific theories occur? The historian/sociologist of science Robert K. Merton produced many works following Hessen's thesis, which can be seen as reactions to and refinements of Hessen's argument. In his work on science, technology, and society in 17th-century England, Merton sought to introduce an additional category, Puritanism, to explain the growth of science in this period. Merton split Hessen's category of economics into smaller subcategories of influence, including transportation, mining, and military technique. Merton also tried to develop empirical, quantitative approaches to showing the influence of external factors on science. Even with his emphasis on external factors, Merton differed from Hessen in his interpretation: Merton maintained that while researchers may be inspired and interested by problems which were suggested by extra-scientific factors, ultimately the researcher's interests were driven by "the internal history of the science in question". Merton attempted to delineate externalism and internalism along disciplinary boundaries, with context studied by the sociologist of science, and content by the historian. === Historiographical approaches to theory change in science === A major subject of concern and controversy in the philosophy of science has been the nature of paradigm shift or theory change in science. Karl Popper argued that scientific knowledge is progressive and cumulative; Thomas Kuhn, that scientific knowledge moves through "paradigm shifts" and is not necessarily progressive; and Paul Feyerabend, that scientific knowledge is not cumulative or progressive and that there can be no demarcation in terms of method between science and any other form of investigation. ==== Thought collectives ==== In 1935, Ludwik Fleck, a Polish medical microbiologist, published Genesis and Development of a Scientific Fact. Fleck's book focused on the epistemological and linguistic factors that affect scientific discovery, innovation and progress or development. It used a case study in the field of medicine (of the development of the disease concept of syphilis) to present a thesis about the social nature of knowledge, and in particular science and scientific "thought styles" (Denkstil), which are the epistemological, conceptual and linguistic styles of scientific (but also non-scientific) 'thought collectives' (Denkkollektiv). Fleck's book suggests that epistemologically, there is nothing stable or realistically true or false about any scientific fact. A fact has a "genesis" which is grounded in certain theoretical assumptions and often in other obscure and fuzzy notions, and it "develops" as it is subject to dispute and additional research by other scientists. Fleck's monograph was published at almost the same time as Karl Popper's Logik der Forschung, but unlike Popper's work, the book received no review notice in Isis. However, Thomas S. Kuhn acknowledged the influence it had upon The Structure of Scientific Revolutions. Kuhn also wrote the foreword to the English translation of Fleck's book. ==== Falsifiability ==== Popper coined the term "critical rationalism" to describe his philosophy. He distinguished between verification and falsifiability and said that a theory should be considered scientific if, and only if, it is falsifiable.
Popper sought to explain the apparent progress of scientific knowledge in All Life is Problem Solving. Popper suggested that our understanding of the universe seems to improve over time because of an evolutionary process. He proposed that the process of "error elimination" in the field of science is like that of natural selection for biological evolution, whereby theories that better survive the process of refutation are not necessarily more "true" but more "fit" or applicable to the problem situation at hand. Popper suggested that the evolution of theories through the scientific method could reflect a certain type of progress: toward more and more interesting problems. Popper helped to establish the philosophy of science as an autonomous discipline within philosophy, through his own prolific and influential works, and also through his influence on his own contemporaries and students. ==== Revolutions ==== The mid-20th century saw a series of studies investigating the role of science in a social context. The sociology of science focused on the ways in which scientists work, looking closely at the ways in which they "produce" and "construct" scientific knowledge. Thomas Kuhn's The Structure of Scientific Revolutions (1962) is considered particularly influential. It opened the study of science to new disciplines by suggesting that the evolution of science was in part sociologically determined and that positivism did not explain the actual interactions and strategies of the human participants in science. As Kuhn put it, the history of science may be seen in more nuanced terms, such as that of competing paradigms or conceptual systems in a wider matrix that includes intellectual, cultural, economic and political themes outside of science. "Partly by selection and partly by distortion, the scientists of earlier ages are implicitly presented as having worked upon the same set of fixed problems and in accordance with the same set of fixed canons that the most recent revolution in scientific theory and method made seem scientific." In 1965, Gerd Buchdahl wrote "A Revolution in Historiography of Science", referring to the studies of Thomas Kuhn and Joseph Agassi. He suggested that these two writers had inaugurated the sub-discipline by distinguishing clearly between the history and the historiography of science, as they argued that historiographical views greatly influence the writing of the history of science. Further studies, such as Jerome Ravetz's Scientific Knowledge and its Social Problems (1971), referred to the role of the scientific community, as a social construct, in accepting or rejecting (objective) scientific knowledge. Since the 1960s, a common trend in science studies (the study of the sociology and history of science) has been to emphasize the "human component" of scientific knowledge, and to de-emphasize the view that scientific data are self-evident, value-free, and context-free. The field of Science and Technology Studies, an area that overlaps and often informs historical studies of science, focuses on the social context of science in both contemporary and historical periods. Corresponding with the rise of the environmental movement and a general loss of optimism in the power of unfettered science and technology to solve the problems of the world, this new history encouraged many critics to pronounce the preeminence of science to be overthrown.
==== Science wars ==== The Science wars of the 1990s concerned the influence of French philosophers in particular, who denied the objectivity of science in general or seemed to do so. They also described differences between the idealized model of a pure science and actual scientific practice, while scientism, a revival of the positivist approach, saw in precise measurement and rigorous calculation the basis for finally settling enduring metaphysical and moral controversies. == History of science in the 21st Century == The discipline today encompasses a wide variety of fields of academic study, ranging from the traditional ones of history, sociology, and philosophy to a variety of others such as law, architecture, and literature. There is a tendency towards integrating with global history, as well as employing new methodological concepts such as cross-cultural exchange. Historians of science also work closely with scholars from related disciplines such as the history of medicine and science and technology studies. === Questioning postmodernism === Some critical theorists later argued that their postmodern deconstructions had at times been counter-productive, and had provided intellectual ammunition for reactionary interests. Bruno Latour noted that "dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives. Was I wrong to participate in the invention of this field known as science studies? Is it enough to say that we did not really mean what we meant?" === Eurocentrism in the historiography of science === Eurocentrism in scientific history refers to historical accounts written about the development of modern science that attribute all scholarly, technological, and philosophical gains to Europe and marginalize outside contributions. Until Joseph Needham's book series Science and Civilisation in China began in 1954, many historians would write about modern science solely as a European achievement with no significant contributions from civilizations other than the Greeks. Recent historical writings have argued that there was significant influence and contribution from Egyptian, Mesopotamian, Arabic, Indian, and Chinese astronomy and mathematics. The employment of notions of cross-cultural exchange in the study of history of science helps in putting the discipline on the path towards being a non-Eurocentric and non-linear field of study. == See also == Conflict thesis Logology (science) Metascience History of military technology Sociology of the history of science == References == == Bibliography == Agassi, Joseph. Towards an Historiography of Science. Wesleyan University Press, 1963. Bennett, J. A. (1997). "Museums and the Establishment of the History of Science at Oxford and Cambridge". British Journal for the History of Science. 30 (104 Pt 1): 29–46. doi:10.1017/s0007087496002889. PMID 11618881. S2CID 5697866. Buchdahl, Gerd (1965). "A Revolution in Historiography of Science". History of Science. 4: 55–69. Bibcode:1965HisSc...4...55B. doi:10.1177/007327536500400103. S2CID 142838889. Dennis, Michael Aaron. "Historiography of Science: An American Perspective," in John Krige and Dominique Pestre, eds., Science in the Twentieth Century, Amsterdam: Harwood, 1997, pp. 1–26. von Engelhardt, Dietrich. Historisches Bewußtsein in der Naturwissenschaft : von der Aufklärung bis zum Positivismus, Freiburg [u.a.] : Alber, 1979. Graham, Loren R.
(1985), "The socio-political Roots of Boris Hessen: Soviet Marxism and he History of Science", Social Studies of Science, 15 (4), London: SAGE: 705–722, doi:10.1177/030631285015004005, S2CID 143937146. Fleck, Ludwik, Genesis and Development of a Scientific Fact, Chicago and London: The University of Chicago Press, 1979. Graham, Loren R. "Soviet attitudes towards the social and historical study of science," in Science in Russia and the Soviet Union: A Short History, Cambridge, England: Cambridge University Press, 1993, pp. 137–155. Kragh, Helge. An Introduction to the Historiography of Science, Cambridge University Press 1990 Kuhn, Thomas. The Structure of Scientific Revolutions, Chicago: University of Chicago, 1962 (third edn, 1996). Gavroglu, Kostas. O Passado das Ciências como História, Porto: Porto Editora, 2007. Golinski, Jan. Making Natural Knowledge: Constructivism and the History of Science, 2nd ed. with a new Preface. Princeton: University Press, 2005. Lakatos, Imre. "History of Science and its Rational Reconstructions" in Y.Elkana (ed.) The Interaction between Science and Philosophy, pp. 195–241, Atlantic Highlands, New Jersey: Humanities Press and also published in Mathematics Science and Epistemology: Volume 2 of the Philosophical and Scientific Papers of Imre Lakatos Papers Imre Lakatos, Worrall & Currie (eds), Cambridge University Press, 1980 Mayer, Anna K (2000). "Setting up a Discipline: Conflicting Agendas of the Cambridge History of Science Committee, 1936–1950". Studies in History and Philosophy of Science. 31 (4): 665–89. Bibcode:2000SHPSA..31..665M. doi:10.1016/s0039-3681(00)00026-1. PMID 11640235. Mayer. "End of Ideology".'". Studies in History and Philosophy of Science. 35: 2004. doi:10.1016/j.shpsa.2003.12.010. Pestre, Dominique (1995). "Pour une histoire sociale et culturelle des sciences. Nouvelles définitions, nouveaux objets, nouvelles pratiques". Annales. Histoire, Sciences Sociales. 50 (3): 487–522. doi:10.3406/ahess.1995.279379. S2CID 162390064. Popper, Karl R. (1962). Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Basic Books. Retrieved 31 May 2023. Raina, Dhruv. Images and Contexts Critical Essays on the Historiography of Science in India, Oxford University Press 2003 Rossi, Paolo, I ragni e le formiche: un'apologia della storia della scienza, Bologna, 1986. Swerdlow, Noel M. (1993), "Montucla's Legacy: The History of the Exact Sciences", Journal of the History of Ideas, 54 (2): 299–328, doi:10.2307/2709984, JSTOR 2709984. Schaffer, Simon (1984), "Newton at the crossroads", Radical Philosophy, 37: 23–38. Transversal: International Journal for the Historiography of Science == External links == Media related to Historiography of science at Wikimedia Commons
Wikipedia/Historiography_of_science
The phlogiston theory, a superseded scientific theory, postulated the existence of a fire-like element dubbed phlogiston contained within combustible bodies and released during combustion. The name comes from the Ancient Greek φλογιστόν phlogistón (burning up), from φλόξ phlóx (flame). The idea of a phlogistic substance was first proposed in 1667 by Johann Joachim Becher and later put together more formally in 1697 by Georg Ernst Stahl. Phlogiston theory attempted to explain chemical processes such as combustion and rusting, now collectively known as oxidation. The theory was challenged by the weight increase that accompanies these processes and was abandoned before the end of the 18th century following experiments by Antoine Lavoisier in the 1770s and by other scientists. Phlogiston theory led to experiments that ultimately resulted in the identification (c. 1771), and naming (1777), of oxygen by Joseph Priestley and Antoine Lavoisier, respectively. == Theory == Phlogiston theory states that phlogisticated substances contain phlogiston and that they dephlogisticate when burned, releasing stored phlogiston, which is absorbed by the air. Growing plants then absorb this phlogiston, which is why air does not spontaneously combust and also why plant matter burns. This method of accounting for combustion was the inverse of the oxygen theory of Antoine Lavoisier. In general, substances that burned in the air were said to be rich in phlogiston; the fact that combustion soon ceased in an enclosed space was taken as clear-cut evidence that air had the capacity to absorb only a finite amount of phlogiston. When the air had become completely phlogisticated it would no longer serve to support the combustion of any material, nor would a metal heated in it yield a calx; nor could phlogisticated air support life. Breathing was thought to take phlogiston out of the body. Joseph Black's Scottish student Daniel Rutherford discovered nitrogen in 1772, and the pair used the theory to explain his results. The residue of air left after burning, in fact a mixture of nitrogen and carbon dioxide, was sometimes referred to as phlogisticated air, having taken up all of the phlogiston. Conversely, when Joseph Priestley discovered oxygen, he believed it to be dephlogisticated air, capable of combining with more phlogiston and thus supporting combustion for longer than ordinary air. == History == Empedocles had formulated the classical theory that there were four elements (water, earth, fire, and air), and Aristotle reinforced this idea by characterising them as moist, dry, hot, and cold. Fire was thus thought of as a substance, and burning was seen as a process of decomposition that applied only to compounds. Experience had shown that burning was not always accompanied by a loss of material, and a better theory was needed to account for this. === Terra pinguis === In 1667, Johann Joachim Becher published his book Physica subterranea, which contained the first instance of what would become the phlogiston theory. In his book, Becher eliminated fire and air from the classical element model and replaced them with three forms of the earth: terra lapidea, terra fluida, and terra pinguis. Terra pinguis was the element that imparted oily, sulphurous, or combustible properties. Becher believed that terra pinguis was a key feature of combustion and was released when combustible substances were burned. Becher did not have much to do with phlogiston theory as we know it now, but he had a large influence on his student Stahl.
Becher's main contribution was the start of the theory itself; however, much of it was changed after him. Becher's idea was that combustible substances contain an ignitable matter, the terra pinguis. === Georg Ernst Stahl === In 1703, Georg Ernst Stahl, a professor of medicine and chemistry at Halle, proposed a variant of the theory in which he renamed Becher's terra pinguis to phlogiston, and it was in this form that the theory probably had its greatest influence. The term 'phlogiston' itself was not something that Stahl invented. There is evidence that the word was used as early as 1606, and in a way that was very similar to what Stahl was using it for. The term was derived from a Greek word meaning to inflame. The following paragraph describes Stahl's view of phlogiston: To Stahl, metals were compounds containing phlogiston in combination with metallic oxides (calces); when ignited, the phlogiston was freed from the metal leaving the oxide behind. When the oxide was heated with a substance rich in phlogiston, such as charcoal, the calx again took up phlogiston and regenerated the metal. Phlogiston was a definite substance, the same in all its combinations. Stahl's first definition of phlogiston appeared in his Zymotechnia fundamentalis, published in 1697. His most quoted definition was found in the treatise on chemistry entitled Fundamenta chymiae in 1723. According to Stahl, phlogiston was a substance that was not able to be put into a bottle but could be transferred nonetheless. To him, wood was just a combination of ash and phlogiston, and making a metal was as simple as getting a metal calx and adding phlogiston. Soot was almost pure phlogiston, which is why heating it with a metallic calx transforms the calx into the metal. Stahl attempted to prove that the phlogiston in soot and in sulphur was identical by converting sulphates to liver of sulphur using charcoal. He did not account for the increase in weight on combustion of tin and lead that was known at the time. === J. H. Pott === Johann Heinrich Pott, a student of one of Stahl's students, expanded the theory and attempted to make it much more understandable to a general audience. He compared phlogiston to light or fire, saying that all three were substances whose natures were widely understood but not easily defined. He thought that phlogiston should not be considered as a particle but as an essence that permeates substances, arguing that in a pound of any substance, one could not simply pick out the particles of phlogiston. Pott also observed the fact that when certain substances are burned they increase in mass instead of losing the mass of the phlogiston as it escapes; according to him, phlogiston was the basic fire principle and could not be obtained by itself. Flames were considered to be a mix of phlogiston and water, while a phlogiston-and-earthy mixture could not burn properly. Phlogiston permeates everything in the universe and could be released as heat when combined with an acid. Pott proposed the following properties: The form of phlogiston consists of a circular movement around its axis. When homogeneous it cannot be consumed or dissipated in a fire. The reason it causes expansion in most bodies is unknown, but not accidental. It is proportional to the compactness of the texture of the bodies or to the intimacy of their constitution.
The increase of weight during calcination is evident only after a long time, and is due either to the fact that the particles of the body become more compact, decrease in volume and hence increase in density, as in the case of lead, or to the fact that little heavy particles of air become lodged in the substance, as in the case of powdered zinc oxide. Air attracts the phlogiston of bodies. When set in motion, phlogiston is the chief active principle in nature of all inanimate bodies. It is the basis of colours. It is the principal agent in fermentation. Pott's formulations proposed little new theory; he merely supplied further details and rendered existing theory more approachable to the common man. === Others === Johann Juncker also created a very complete picture of phlogiston. When reading Stahl's work, he assumed that phlogiston was in fact very material. He therefore came to the conclusion that phlogiston has the property of levity, or that it makes the compound that it is in much lighter than it would be without the phlogiston. He also showed that air was needed for combustion by putting substances in a sealed flask and trying to burn them. Guillaume-François Rouelle brought the theory of phlogiston to France, where he was a very influential scientist and teacher, popularizing the theory very quickly. Many of his students became very influential scientists in their own right, Lavoisier included. The French viewed phlogiston as a very subtle principle that vanishes in all analysis, yet it is in all bodies. Essentially they followed straight from Stahl's theory. Giovanni Antonio Giobert introduced Lavoisier's work in Italy. Giobert won a prize competition from the Academy of Letters and Sciences of Mantua in 1792 for his work refuting phlogiston theory. He presented a paper at the Académie royale des Sciences of Turin on 18 March 1792, entitled Examen chimique de la doctrine du phlogistique et de la doctrine des pneumatistes par rapport à la nature de l'eau ("Chemical examination of the doctrine of phlogiston and the doctrine of pneumatists in relation to the nature of water"), which is considered the most original defence of Lavoisier's theory of water composition to appear in Italy. == Challenge and demise == Eventually, quantitative experiments revealed problems, including the fact that some metals gained weight after they burned, even though they were supposed to have lost phlogiston. Some phlogiston proponents, like Robert Boyle, explained this by concluding that phlogiston has negative mass; others, such as Louis-Bernard Guyton de Morveau, gave the more conventional argument that it is lighter than air. However, a more detailed analysis based on Archimedes' principle and the densities of magnesium and its combustion product showed that just being lighter than air could not account for the increase in weight. Stahl himself did not address the problem of the metals that burn gaining weight, but those who followed his school of thought were the ones that worked on this problem. During the eighteenth century, as it became clear that metals gained weight after they were oxidized, phlogiston was increasingly regarded as a principle rather than a material substance. By the end of the eighteenth century, for the few chemists who still used the term phlogiston, the concept was linked to hydrogen.
Joseph Priestley, for example, in referring to the reaction of steam on iron, while fully acknowledging that the iron gains weight as it binds with oxygen to form a calx (iron oxide), maintained that the iron also loses "the basis of inflammable air (hydrogen), and this is the substance or principle, to which we give the name phlogiston". Following Lavoisier's description of oxygen as the oxidizing principle (hence its name, from Ancient Greek: oksús, "sharp"; génos, "birth", referring to oxygen's supposed role in the formation of acids), Priestley described phlogiston as the alkaline principle. Phlogiston remained the dominant theory until the 1770s when Antoine-Laurent de Lavoisier showed that combustion requires a gas that has weight (specifically, oxygen) and could be measured by means of weighing closed vessels. The use of closed vessels by Lavoisier and earlier by the Russian scientist Mikhail Lomonosov also negated the buoyancy that had disguised the weight of the gases of combustion, and culminated in the principle of mass conservation. These observations solved the mass paradox and set the stage for the new oxygen theory of combustion. The British chemist Elizabeth Fulhame demonstrated through experiment that many oxidation reactions occur only in the presence of water, that they directly involve water, and that water is regenerated and is detectable at the end of the reaction. Based on her experiments, she disagreed with some of the conclusions of Lavoisier as well as with the phlogiston theorists that he critiqued. Her book on the subject appeared in print soon after Lavoisier's execution, for his membership of the Farm-General, during the French Revolution. Experienced chemists who supported Stahl's phlogiston theory attempted to respond to the challenges suggested by Lavoisier and the newer chemists. In doing so, the theory became more complicated and assumed too much, contributing to its overall demise. Many people tried to remodel their theories on phlogiston to have the theory work with what Lavoisier was doing in his experiments. Pierre Macquer reworded his theory many times, and even though he is said to have thought the theory of phlogiston was doomed, he stood by phlogiston and tried to make the theory work. == See also == Caloric theory – Obsolete scientific theory of heat flow Pneumatic chemistry – Very first studies of the role of gases in the air in combustion reactions Electronegativity – Tendency of an atom to attract a shared pair of electrons Energeticism – View that energy is the fundamental element in all physical change Antiphlogistine – Topical pain relief medicine == References == == External links == Quotations related to Phlogiston theory at Wikiquote
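As a rough numerical sketch of the buoyancy argument mentioned above (Archimedes' principle applied to the calcination of magnesium), the following Python example uses standard modern values for molar masses and densities; it is an illustrative back-of-the-envelope calculation, not a reconstruction of any historical measurement.

```python
# Why "lighter than air" phlogiston cannot explain the weight gained
# when magnesium burns to magnesium oxide.
M_MG, M_MGO = 24.3, 40.3        # molar masses in g/mol
RHO_MG, RHO_MGO = 1.74, 3.58    # densities in g/cm^3
RHO_AIR = 0.0012                # density of air in g/cm^3

mass_mg = 1.0                              # start with 1 g of magnesium
mass_mgo = mass_mg * M_MGO / M_MG          # about 1.66 g of MgO after burning
weight_gain = mass_mgo - mass_mg           # about 0.66 g

volume_change = mass_mgo / RHO_MGO - mass_mg / RHO_MG  # about -0.11 cm^3
buoyancy_effect = abs(volume_change) * RHO_AIR         # about 0.00013 g

print(f"observed weight gain: ~{weight_gain:.2f} g")
print(f"change in air buoyancy: ~{buoyancy_effect:.5f} g")
# The buoyancy change is thousands of times too small to account for the gain,
# so escaping phlogiston that is merely lighter than air cannot explain it.
```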
Wikipedia/Phlogiston_theory
The term cosmography has two distinct meanings: traditionally it has been the protoscience of mapping the general features of the cosmos, heaven and Earth; more recently, it has been used to describe the ongoing effort to determine the large-scale features of the observable universe. Premodern views of cosmography largely followed the tradition of ancient Near Eastern cosmology, which was dominant in the Ancient Near East and in early Greece. == Traditional usage == The 13th-century work 'Aja'ib al-makhluqat wa-ghara'ib al-mawjudat by Persian physician Zakariya al-Qazwini is considered to be an early work of cosmography. Traditional Hindu, Buddhist and Jain cosmographies schematize a universe centered on Mount Meru surrounded by rivers, continents and seas. These cosmographies posit a universe being repeatedly created and destroyed over time cycles of immense lengths. In 1551, Martín Cortés de Albacar, from Zaragoza, Spain, published Breve compendio de la esfera y del arte de navegar. Translated into English and reprinted several times, the work was of great influence in Britain for many years. He proposed spherical charts and mentioned magnetic deviation and the existence of magnetic poles. Peter Heylin's 1652 book Cosmographie (enlarged from his Microcosmos of 1621) was one of the earliest attempts to describe the entire world in English, and is the first known description of Australia, and among the first of California. The book has four sections, examining the geography, politics, and cultures of Europe, Asia, Africa, and America, with an addendum on Terra Incognita, including Australia, and extending to Utopia, Fairyland, and the "Land of Chivalrie". In 1659, Thomas Porter published a smaller, but extensive Compendious Description of the Whole World, which also included a chronology of world events from Creation forward. These were all part of a major trend in the European Renaissance to explore (and perhaps comprehend) the known world. == Modern usage == In astrophysics, the term "cosmography" is beginning to be used to describe attempts to determine the large-scale matter distribution and kinematics of the observable universe, assuming the Friedmann–Lemaître–Robertson–Walker metric but remaining independent of how the time evolution of the scale factor depends on the matter/energy composition of the Universe. The word was also commonly used by Buckminster Fuller in his lectures. Using the Tully-Fisher relation on a catalog of 10,000 galaxies has allowed the construction of 3D images of the local structure of the cosmos. This led to the identification of a local supercluster named the Laniakea Supercluster. == See also == == References ==
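For reference, the Friedmann–Lemaître–Robertson–Walker metric mentioned under Modern usage is commonly written, in one standard convention, as:

```latex
ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)\right]
```

Here a(t) is the scale factor and k is the spatial curvature constant. Cosmographic analyses constrain the expansion history a(t) directly from distance and redshift data, without assuming how a(t) follows from the matter/energy content via the Friedmann equations.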
Wikipedia/Cosmography
Method engineering in the "field of information systems is the discipline to construct new methods from existing methods". It focuses on "the design, construction and evaluation of methods, techniques and support tools for information systems development". Furthermore, method engineering "wants to improve the usefulness of systems development methods by creating an adaptation framework whereby methods are created to match specific organisational situations". == Types == === Computer aided method engineering === The meta-process modeling process is often supported through software tools, called computer aided method engineering (CAME) tools, or MetaCASE tools (Meta-level Computer Assisted Software Engineering tools). Often the instantiation technique "has been utilised to build the repository of Computer Aided Method Engineering environments". There are many tools for meta-process modeling. === Method tailoring === In the literature, different terms refer to the notion of method adaptation, including 'method tailoring', 'method fragment adaptation' and 'situational method engineering'. Method tailoring is defined as: A process or capability in which human agents through responsive changes in, and dynamic interplays between contexts, intentions, and method fragments determine a system development approach for a specific project situation. Potentially, almost all agile methods are suitable for method tailoring. Even the DSDM method is being used for this purpose and has been successfully tailored in a CMM context. Situation-appropriateness can be considered as a distinguishing characteristic between agile methods and traditional software development methods, with the latter being relatively much more rigid and prescriptive. The practical implication is that agile methods allow project teams to adapt working practices according to the needs of individual projects. Practices are concrete activities and products that are part of a method framework. At a more extreme level, the philosophy behind the method, consisting of a number of principles, could be adapted. === Situational method engineering === Situational method engineering is the construction of methods which are tuned to specific situations of development projects. It can be described as the creation of a new method by selecting appropriate method components from a repository of reusable method components, tailoring these method components as appropriate, and integrating these tailored method components to form the new situation-specific method. This enables the creation of development methods suitable for any development situation. Each system development then starts with a method definition phase where the development method is constructed on the spot. In the case of mobile business development, there are methods available for specific parts of the business model design process and ICT development. Situational method engineering can be used to combine these methods into one unified method that adopts the characteristics of mobile ICT services. == Method engineering process == The developers of the IDEF modeling languages, Richard J. Mayer et al. (1995), have developed an early approach to method engineering from studying common method engineering practice and experience in developing other analysis and design methods. The following figure provides a process-oriented view of this approach.
This image uses the IDEF3 Process Description Capture method to describe this process, where boxes with verb phrases represent activities, arrows represent precedence relationships, and "exclusive or" conditions among possible paths are represented by the junction boxes labeled with an "X". According to this approach there are three basic strategies in method engineering: Reuse: one of the basic strategies of methods engineering is reuse. Whenever possible, existing methods are adopted. Tailormade: find methods that can satisfy the identified needs with minor modification. This option is an attractive one if the modification does not require a fundamental change in the basic concepts or design goals of the method. New development: Only when neither of these options is viable should method designers seek to develop a new method. These basic strategies can be developed in a process similar to concept development. === Knowledge engineering approach === A knowledge engineering approach is the predominant mechanism for method enhancement and new method development. In other words, with very few exceptions, method development involves isolating, documenting, and packaging existing practice for a given task in a form that promotes reliable success among practitioners. Expert attunements are first characterized in the form of basic intuitions and method concepts. These are often initially identified through analysis of the techniques, diagrams, and expressions used by experts. These discoveries aid in the search for existing methods that can be leveraged to support novice practitioners in acquiring the same attunements and skills. New method development is accomplished by establishing the scope of the method, refining characterizations of the method concepts and intuitions, designing a procedure that provides both task accomplishment and basic apprenticeship support to novice practitioners, and developing a language (or languages) of expression. Method application techniques are then developed outlining guidelines for use in a stand-alone mode and in concert with other methods. Each element of the method then undergoes iterative refinement through both laboratory and field testing. === Method language design process === The method language design process is highly iterative and experimental in nature. Unlike procedure development, where a set of heuristics and techniques from existing practice can be identified, merged, and refined, language designers rarely encounter well-developed graphical display or textual information capture mechanisms. When potentially reusable language structures can be found, they are often poorly defined or only partially suited to the needs of the method. A critical factor in the design of a method language is clearly establishing the purpose and scope of the method. The purpose of the method establishes the needs the method must address. This is used to determine the expressive power required of the supporting language. The scope of the method establishes the range and depth of coverage, which must also be established before one can design an appropriate language design strategy. Scope determination also involves deciding what cognitive activities will be supported through method application. For example, language design can be confined to only display the final results of method application (as in providing IDEF9 with graphical and textual language facilities that capture the logic and structure of constraints).
Alternatively, there may be a need for in-process language support facilitating information collection and analysis. In those situations, specific language constructs may be designed to help method practitioners organize, classify, and represent information that will later be synthesized into additional representation structures intended for display. With this foundation, language designers begin the process of deciding what needs to be expressed in the language and how it should be expressed. Language design can begin by developing a textual language capable of representing the full range of information to be addressed. Graphical language structures designed to display select portions of the textual language can then be developed. Alternatively, graphical language structures may evolve prior to, or in parallel with, the development of the textual language. The sequence of these activities largely depends on the degree of understanding of the language requirements held among language developers. These may become clear only after several iterations of both graphical and textual language design. === Graphical language design === Graphical language design begins by identifying a preliminary set of schematics and the purpose or goals of each in terms of where and how they will support the method application process. The central item of focus is determined for each schematic. For example, in experimenting with alternative graphical language designs for IDEF9, a Context Schematic was envisioned as a mechanism to classify the varying environmental contexts in which constraints may apply. The central focus of this schematic was the context. After deciding on the central focus for the schematic, additional information (concepts and relations) that should be captured or conveyed is identified. Up to this point in the language design process, the primary focus has been on the information that should be displayed in a given schematic to achieve the goals of the schematic. This is where the language designer must determine which items identified for possible inclusion in the schematic are amenable to graphical representation and will serve to keep the user focused on the desired information content. With this general understanding, previously developed graphical language structures are explored to identify potential reuse opportunities. While exploring candidate graphical language designs for emerging IDEF methods, a wide range of diagrams were identified and explored. Quite often, even some of the central concepts of a method will have no graphical language element in the method. For example, the IDEF1 Information Modeling method includes the notion of an entity but has no syntactic element for an entity in the graphical language. When the language designer decides that a syntactic element should be included for a method concept, candidate symbols are designed and evaluated. Throughout the graphical language design process, the language designer applies a number of guiding principles to assist in developing high quality designs. Among these, the language designer avoids overlapping concept classes or poorly defined ones. They also seek to establish intuitive mechanisms to convey the direction for reading the schematics. For example, schematics may be designed to be read from left to right, in a bottom-up fashion, or center-out.
The potential for clutter or overwhelmingly large amounts of information on a single schematic is also considered as either condition makes reading and understanding the schematic extremely difficult. === Method testing === Each candidate design is then tested by developing a wide range of examples to explore the utility of the designs relative to the purpose for each schematic. Initial attempts at method development, and the development of supporting language structures in particular, are usually complicated. With successive iterations on the design, unnecessary and complex language structures are eliminated. As the graphical language design approaches a level of maturity, attention turns to the textual language. The purposes served by textual languages range from providing a mechanism for expressing information that has explicitly been left out of the graphical language to providing a mechanism for standard data exchange and automated model interpretation. Thus, the textual language supporting the method may be simple and unstructured (in terms of computer interpretability), or it may emerge as a highly structured, and complex language. The purpose of the method largely determines what level of structure will be required of the textual language. === Formalization and application techniques === As the method language begins to approach maturity, mathematical formalization techniques are employed so the emerging language has clear syntax and semantics. The method formalization process often helps uncover ambiguities, identify awkward language structures, and streamline the language. These general activities culminate in a language that helps focus user attention on the information that needs to be discovered, analyzed, transformed, or communicated in the course of accomplishing the task for which the method was designed. Both the procedure and language components of the method also help users develop the necessary skills and attunements required to achieve consistently high quality results for the targeted task. Once the method has been developed, application techniques will be designed to successfully apply the method in stand-alone mode as well as together with other methods. Application techniques constitute the "use" component of the method which continues to evolve and grow throughout the life of the method. The method procedure, language constructs, and application techniques are reviewed and tested to iteratively refine the method. == See also == == References == Attribution This article incorporates text from US Air Force, Information Integration for Concurrent Engineering (IICE) Compendium of methods report by Richard J. Mayer et al., 1995, a publication now in the public domain. == Further reading == Sjaak Brinkkemper, Kalle Lyytinen, Richard J. Welke (1996). Method engineering: principles of method construction and tool support: proceedings of the IFIP TC8, WG8.1/8.2 Working Conference on Method Engineering 26–28 August 1996, Atlanta, USA. Springer. ISBN 041279750X doi:10.1007/978-0-387-35080-6 Sjaak Brinkkemper, Saeki and Harmsen (1998). Assembly techniques for method engineering. Advanced Information Systems Engineering, Proceedings of CaiSE'98. New York: Springer. doi:10.1007/BFb0054236 Ajantha Dahanayake (2001). Computer-aided method engineering: designing CASE repositories for the 21st century. Hershey, PA: Idea Group Inc (IGI), 2001. ISBN 1878289942 Brian Henderson-Sellers, Jolita Ralyté, Pär J. Ågerfalk and Matti Rossi (2014). Situational method engineering. 
Berlin: Springer. ISBN 9783642414664 doi:10.1007/978-3-642-41467-1 Brian Henderson-Sellers, Jolita Ralyté and Sjaak Brinkkemper, eds. (2008). Situational method engineering: fundamentals and experiences: proceedings of the IFIP WG 8.1 Working Conference, 12–14 September 2007, Geneva, Switzerland. New York: Springer. ISBN 0387739467 doi:10.1007/978-0-387-73947-2 Brian Henderson-Sellers, C. Gonzalez-Perez and Donald Firesmith (2004) Method engineering and COTS evaluation in: ACM SIGSOFT Software Engineering Notes archive. Vol 30, Issue 4 (July 2005). Manfred A. Jeusfeld, Matthias Jarke and John Mylopoulos, eds. (2009). Metamodeling for method engineering. Cambridge, MA: MIT Press. ISBN 0262101084 == External links == Metamodeling and method engineering presentation by Minna Koskinen, 2000.
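As a minimal, hypothetical sketch of the situational method engineering idea described above (assembling a project-specific method from a repository of reusable method fragments), the following Python example uses invented fragment names and selection criteria purely for illustration; it does not reproduce any particular tool, method base, or published assembly technique.

```python
from dataclasses import dataclass

@dataclass
class MethodFragment:
    """A reusable piece of a development method: an activity plus its guidance."""
    name: str
    suited_to: set  # project characteristics the fragment is suited to

# Hypothetical repository of reusable method fragments.
REPOSITORY = [
    MethodFragment("lightweight requirements workshop", {"small-team", "unstable-requirements"}),
    MethodFragment("formal requirements specification", {"safety-critical"}),
    MethodFragment("timeboxed iteration planning", {"unstable-requirements"}),
    MethodFragment("independent code audit", {"safety-critical", "regulated"}),
]

def assemble_method(project_traits):
    """Select the fragments whose situations overlap the project's characteristics."""
    return [f.name for f in REPOSITORY if f.suited_to & set(project_traits)]

# Example: a small team facing volatile requirements.
print(assemble_method({"small-team", "unstable-requirements"}))
```

In a real method engineering environment the fragments would also carry ordering constraints and tailoring guidance, and the assembled method would still be reviewed and adapted by the method engineer.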
Wikipedia/Method_engineering
The Carnegie Institution for Science, also known as Carnegie Science and the Carnegie Institution of Washington, is an organization established to fund and perform scientific research in the United States. This institution is headquartered in Washington, D.C. As of June 30, 2020, the Institution's endowment was valued at $926.9 million. In 2018, the expenses for scientific programs and administration were $96.6 million. American astrophysicist John Mulchaey is the current president of the institution. == Name == More than 20 independent organizations were established through the philanthropy of Andrew Carnegie and feature his surname. In 2024, the "Carnegie Institution for Science" officially adopted the name "Carnegie Science", a name that had been used informally since 2007, when the institution first changed its name from "Carnegie Institution of Washington" to "Carnegie Institution for Science". == History == It is proposed to found in the city of Washington, an institution which ... shall in the broadest and most liberal manner encourage investigation, research, and discovery [and] show the application of knowledge to the improvement of mankind. —Andrew Carnegie, January 28, 1902 When the United States joined World War II, Vannevar Bush was president of the Carnegie Institution of Washington. Several months prior to June 12, 1940, Bush persuaded President Franklin Roosevelt to create the National Defense Research Committee (later superseded by the Office of Scientific Research and Development) to coordinate the nation's scientific war effort. Bush housed the new agency in the Carnegie Institution's administrative headquarters at 16th and P Streets, Northwest, in Washington, D.C., converting its rotunda and auditorium into office cubicles. From this location, Bush supervised multiple projects, including the Manhattan Project. Carnegie scientists assisted with the development of the proximity fuze and mass production of penicillin.: 77–79  == Research == John Mulchaey, an American astrophysicist, is the institution's 12th president. Carnegie Science is composed of three scientific divisions on the East and West Coasts that center on life and environmental science, Earth and planetary science, and astronomy and astrophysics. Additionally, Carnegie Science manages the Las Campanas Observatory in Chile. === Life and Environmental Sciences === Among its notable staff members are Nobel laureates Andrew Fire, Alfred Hershey, and Barbara McClintock. In addition to the Department of Embryology, BioEYES is located at the University of Pennsylvania in Philadelphia, Pennsylvania; Monash University in Melbourne, Australia; the University of Utah in Salt Lake City, Utah; and the University of Notre Dame in South Bend, Indiana. The Department of Global Ecology was established in 2002. The Department of Plant Biology began as a desert laboratory in 1903 to study plants in their natural habitats. Over time, the research evolved to the study of photosynthesis. The department also develops bioinformatics tools. === Space studies === The Observatories were founded in 1904 as the Mount Wilson Observatory. Carnegie astronomers operate from offices in Pasadena and from the Las Campanas Observatory, established in 1969. As Los Angeles encroached more on Mount Wilson, day-to-day operations there were transferred to the Mount Wilson Institute in 1986. The newest additions at Las Campanas are twin 6.5-meter reflectors.
In 2020, the Department of Terrestrial Magnetism and Geophysical Lab merged to become the Earth and Planets Laboratory, located on the organization's Broad Branch Road campus in Washington. The Laboratory is a member of the NASA Astrobiology Institute. The Department of Terrestrial Magnetism was founded in 1904 and used two ships for magnetic observations around the world: the Galilee was chartered in 1905, but it was unsuitable; later, Carnegie was built in 1909 and completed seven cruises to measure the Earth's magnetic field before it suffered an explosion and burned.: 133–136  == History == In 1920, the Eugenics Record Office, founded by Charles Davenport in 1910 in Cold Spring Harbor, New York, was merged with the Station for Experimental Evolution to become the Carnegie Institution's Department of Genetics. The Institution funded that laboratory until 1939; it employed Morris Steggerda, an American anthropologist who has collaborated with Davenport. The Carnegie Institution closed the department in 1944. The department's records were retained in a university library. == Carnegie Academy for Science Education and First Light == In 1989, Carnegie President Maxine Singer founded Carnegie Academy for Science Education and First Light (CASE), a free Saturday science program for middle school students. The program teaches hands-on learning in science. == Administration == The Carnegie Institution's administrative offices were located at 1530 P St., Northwest, Washington, D.C., at the corner of 16th and P Streets until 2020. The building housed the offices of the president, administration and finance, publications, and advancement. In 2020, the administrative building was sold to the government of Qatar to be used as its embassy. == Partnerships == Carnegie Science and Caltech formalized a partnership in Pasadena. The Carnegie Institution partnered with several other organizations in constructing the Giant Magellan Telescope. == Presidents == The following persons had served as president of the Carnegie Institution for Science: == See also == Kurt Adelberger == References == == External links == Official website 20th century publications of the Carnegie Institution for Science, from HathiTrust Historic American Engineering Record (HAER) No. DC-52-A, "Carnegie Institute of Washington, Department of Terrestrial Magnetism, Standardizing Magnetic Observatory" HAER No. DC-52-B, "Carnegie Institute of Washington, Department of Terrestrial Magnetism, Brass Foundry" HAER No. DC-52-C, "Carnegie Institute of Washington, Department of Terrestrial Magnetism, Atomic Physics Observatory"
Wikipedia/Carnegie_Institution_for_Science
The Structure of Evolutionary Theory (2002) is Harvard paleontologist Stephen Jay Gould's technical book on macroevolution and the historical development of evolutionary theory. The book was twenty years in the making, published just two months before Gould's death. Aimed primarily at professionals, the volume is divided into two parts. The first is a historical study of classical evolutionary thought, drawing extensively upon primary documents; the second is a constructive critique of the modern synthesis, and presents a case for an interpretation of biological evolution based largely on hierarchical selection, and the theory of punctuated equilibrium (developed by Niles Eldredge and Gould in 1972). == Summary == According to Gould, classical Darwinism encompasses three essential core commitments: Agency, the unit of selection (which for Charles Darwin was the organism) upon which natural selection acts; efficacy, which encompasses the dominance of natural selection over all other forces—such as genetic drift, and biological constraints—in shaping the historical, ecological, and structural influences on evolution; and scope, the degree to which natural selection can be extrapolated to explain biodiversity at the macroevolutionary level, including the evolution of higher taxonomic groups. Gould described these three propositions as the "tripod" of Darwinian central logic, each being so essential to the structure that if any branch were cut it would either kill, revise, or superficially refurbish the whole structure—depending on the severity of the cut. According to Gould "substantial changes, introduced during the last half of the 20th century, have built a structure so expanded beyond the original Darwinian core, and so enlarged by new principles of macroevolutionary explanation, that the full exposition, while remaining within the domain of Darwinian logic, must be construed as basically different from the canonical theory of natural selection, rather than simply extended." In the arena of agency, Gould explores the concept of "hierarchy" in the action of evolution (the idea that evolution may act on more than one unit simultaneously, as opposed to only acting upon individual organisms). In the arena of efficacy he explores the forces beside natural selection that have been considered in evolutionary theory. In the arena of scope he considers the relevance of natural selection to the larger scale patterns of life. Gould was motivated to write the book by contrasting the opinions of Darwin and Hugh Falconer about the future of Darwinism. Part I of the book focuses on the early history of evolutionary thought (pre-1859). Chapter one introduces and outlines the Structure of Evolutionary Theory, with chapter two covering the structure of The Origin of Species, chapter three focusing on issues surrounding agency, chapters four and five covering efficacy, and chapters six and seven covering scope. Part II—comprising the bulk of the text—focuses on the modern discussion and debate (post-1959). Chapters eight and nine cover agency, while chapters ten and eleven cover efficacy, and twelve covers scope. Sections of the book dealing with punctuated equilibrium, primarily chapter nine, have been posthumously reprinted as a separate volume by Belknap Harvard. == References == == External links == Harvard's promotional page Charlie Rose, March 1, 1994 - Gould discusses the purpose of the book Of Beauty and Consolation - Gould on writing Structure
Wikipedia/The_Structure_of_Evolutionary_Theory
Daedalus; or, Science and the Future is a book by the British scientist J. B. S. Haldane, published in England in 1924. It was the text of a lecture read to the Heretics Society (an intellectual club at the University of Cambridge) on 4 February 1923. Haldane uses the Greek myth of Daedalus as a symbol for the revolutionary nature of science, with particular regard to his own discipline of biology. The chemical or physical inventor is always a Prometheus. There is no great invention, from fire to flying, which has not been hailed as an insult to some god. But if every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer from any nation which had not previously heard of their existence, would not appear to him as indecent and unnatural. He also expressed skepticism over the human benefits of some scientific advances, arguing that scientific advance would bring grief rather than progress to mankind unless it was accompanied by a similar advance in ethics. The book is an early vision of transhumanism, and its picture of a future in which humans control their own evolution through directed mutation and the use of in vitro fertilisation ("ectogenesis") was a major influence on Aldous Huxley's Brave New World. The book ends with the image of a biologist, much like Haldane himself, in a laboratory: "just a poor little scrubby underpaid man groping blindly amid the mazes of the ultramicroscope... conscious of his ghastly mission and proud of it." The book has been discussed at length by other writers, including Freeman Dyson in his book Imagined Worlds and Sal Restivo in Science, Society, and Values, and the concept has been used in contemporary science lectures. == References == == External links == Daedalus; or, Science and the Future at Project Gutenberg book at Hathi Trust
Wikipedia/Daedalus;_or,_Science_and_the_Future
Law for the Prevention of Genetically Diseased Offspring (German: Gesetz zur Verhütung erbkranken Nachwuchses) or "Sterilisation Law" was a statute in Nazi Germany enacted on July 14, 1933, (and made active in January 1934) which allowed the compulsory sterilisation of any citizen who in the opinion of a "Genetic Health Court" (Erbgesundheitsgericht) suffered from a list of alleged genetic disorders – many of which were not, in fact, genetic. The elaborate interpretive commentary on the law was written by three dominant figures in the racial hygiene movement: Ernst Rüdin, Arthur Gütt and the lawyer Falk Ruttke. While it has close resemblances with the American Model Eugenical Sterilization Law developed by Harry H. Laughlin, the law itself was initially drafted in 1932, at the end of the Weimar Republic period, by a committee led by the Prussian health board. == Operation of the law == The basic provisions of the 1933 law stated that: (1) Any person suffering from a hereditary disease may be rendered incapable of procreation by means of a surgical operation (sterilization), if the experience of medical science shows that it is highly probable that his descendants would suffer from some serious physical or mental hereditary defect. (2) For the purposes of this law, any person will be considered as hereditarily diseased who is suffering from any one of the following diseases:– (1) Congenital Mental Deficiency, (2) Schizophrenia, (3) Manic-Depressive Insanity, (4) Hereditary Epilepsy, (5) Hereditary Chorea (Huntington's), (6) Hereditary Blindness, (7) Hereditary Deafness, (8) Any severe hereditary deformity. (3) Any person suffering from severe alcoholism may be also rendered incapable of procreation. The law applied to anyone in the general population, making its scope significantly larger than the compulsory sterilisation laws in the United States, which generally were only applicable on people in psychiatric hospitals or prisons. The 1933 law created a large number of "Genetic Health Courts" (German: Erbgesundheitsgericht, EGG), consisting of a judge, a medical officer, and medical practitioner, which "shall decide at its own discretion after considering the results of the whole proceedings and the evidence tendered". If the court decided that the person in question was to be sterilised, the decision could be appealed to the "Higher Genetic Health Court" (German: Erbgesundheitsobergericht, EGOG). If the appeal failed, the sterilization was to be carried out, with the law specifying that "the use of force is permissible". The law also required that people seeking voluntary sterilizations also go through the courts. There were three amendments by 1935, most making minor adjustments to how the statute operated or clarifying bureaucratic aspects (such as who paid for the operations). The most significant changes allowed the Higher Court to renounce a patient's right to appeal, and to fine physicians who did not report patients who they knew would qualify for sterilisation under the law. The law also enforced sterilization on the so-called "Rhineland bastards," the mixed-race children of German civilians and French African soldiers who helped occupy the Rhineland. At the time of its enaction, the German government pointed to the success of sterilisation laws elsewhere, especially the work in California documented by the American eugenicists E. S. Gosney and Paul Popenoe, as evidence of the humaneness and efficacy of such laws. 
Eugenicists abroad admired the German law for its legal and ideological clarity. Popenoe himself wrote that "the German law is well drawn and, in form, may be considered better than the sterilization laws of most American states", and trusted in the German government's "conservative, sympathetic, and intelligent administration" of the law, praising the "scientific leadership" of the Nazis. The German mathematician Otfrid Mittmann defended the law against "unfavorable judgements". In the first year of the law's operation, 1934, 84,600 cases were brought to Genetic Health Courts, with 62,400 forced sterilisations. Nearly 4,000 people appealed against the decisions of sterilisation authorities; 3,559 of the appeals failed. In 1935, there were 88,100 trials and 71,700 sterilisations. By the end of the Nazi regime, over 200 "Genetic Health Courts" were created, and under their rulings over 400,000 people were sterilized against their will. Alongside the law, Adolf Hitler personally decriminalised abortion for doctors in cases where the fetus had racial or hereditary defects, while the abortion of healthy, "pure" German "Aryan" fetuses remained strictly forbidden. == See also == Life unworthy of life Aktion T4 Nazi eugenics Eugenics in the United States Rhineland Bastard == Notes == == External links == "Eugenics in Germany : 'The Law for the Prevention of Hereditarily Diseased Offspring'" article from Facing History and Ourselves United States Holocaust Memorial Museum – The Biological State: Nazi Racial Hygiene, 1933–1939
Wikipedia/Law_for_the_Prevention_of_Hereditarily_Diseased_Offspring
David Anwyll Coleman (born 1946) is a demographer and anthropologist who served as Professor of Demography at the Department of Social Policy and Intervention, University of Oxford, from October 2002 until 2013, having lectured there since 1980. == Early life == Coleman was born in 1946 in London, England. He was educated at St Benedict's School, Ealing. == University education == In 1967, Coleman graduated from Oxford University with a Bachelor of Arts in Zoology. In 1978, Coleman graduated from the London School of Economics with a Ph.D. in Demography. == Career == Between 1985 and 1987 he worked for the British Government as Special Adviser to Home Secretary Douglas Hurd and then to the Ministers of Housing and of the Environment. He is a former fellow of St John's College, Oxford. Coleman has published over 90 papers and eight books and was the joint editor of the European Journal of Population from 1992 to 2000. In 1997 he was elected to the Council of the International Union for the Scientific Study of Population. He is also an advisor to Migration Watch UK, which he helped to found, and is a member of the Galton Institute, formerly known as the Eugenics Society; however, the institute at the time did not conduct "research into eugenics". In 2013, Coleman's analysis projected that White British people would become a minority in the UK around 2066 if the immigration trends of the time continued. == References ==
Wikipedia/David_Coleman_(demographer)
United States of America v. Karl Brandt, et al., commonly known as the Doctors' Trial, was the first of the twelve "Subsequent Nuremberg trials" for war crimes and crimes against humanity after the end of World War II between 1946 and 1947. The accused were 20 physicians and 3 SS officials charged for their involvement in the Aktion T4 programme and Nazi human experimentation. The Doctors' Trial was held by United States authorities at the Palace of Justice in Nuremberg in the American occupation zone before US military courts, not before the International Military Tribunal. Seven of the accused were sentenced to death by hanging, five were sentenced to life imprisonment, four were given prison sentences from 10 to 20 years, and seven were acquitted. The judges, heard before Military Tribunal I, were Walter B. Beals (presiding judge) from Washington, Harold L. Sebring from Florida, and Johnson T. Crawford from Oklahoma, with Victor C. Swearingen, a former special assistant to the Attorney General of the United States, as an alternate judge. The Chief of Counsel for the Prosecution was Telford Taylor and the chief prosecutor was James M. McHaney. The indictment was filed on 25 October 1946; the trial lasted from 9 December that year until 20 August 1947. == Case == Twenty of the defendants were physicians and three were SS officials (Viktor Brack, Rudolf Brandt, and Wolfram Sievers), all of whom were accused of being involved in Nazi human experimentation and the Aktion T4 programme of involuntary euthanasia. The physicians came from a variety of civilian and military backgrounds, and some were members of the SS. Other Nazi physicians such as Philipp Bouhler, Ernst-Robert Grawitz, Leonardo Conti, and Enno Lolling had died by suicide, while Josef Mengele, one of the leading Nazi doctors, had evaded capture. In his opening statement, Taylor summarized the crimes of the defendants."The defendants in this case are charged with murders, tortures, and other atrocities committed in the name of medical science. The victims of these crimes are numbered in the hundreds of thousands. A handful only are still alive; a few of the survivors will appear in this courtroom. But most of these miserable victims were slaughtered outright or died in the course of the tortures to which they were subjected. For the most part they are nameless dead. To their murderers, these wretched people were not individuals at all. They came in wholesale lots and were treated worse than animals." == Indictment == The accused faced four charges, including: Conspiracy to commit war crimes and crimes against humanity as described in counts 2 and 3; War crimes: performing medical experiments, without the subjects' consent, on prisoners of war and civilians of occupied countries, in the course of which experiments the defendants committed murders, brutalities, cruelties, tortures, atrocities, and other inhuman acts. Also planning and performing the mass murder of prisoners of war and civilians of occupied countries, stigmatized as aged, insane, incurably ill, deformed, and so on, by gas, lethal injections, and diverse other means in nursing homes, hospitals, and asylums during the Euthanasia Program and participating in the mass murder of concentration camp inmates. Crimes against humanity: committing crimes described under count 2 also on German nationals. Membership in a criminal organization, the SS. The tribunal largely dropped count 1, stating that the charge was beyond its jurisdiction. 
I — Indicted G — Indicted and found guilty All of the criminals sentenced to death were hanged on 2 June 1948 at Landsberg Prison. For some, the difference between receiving a prison term and the death sentence was membership in the SS, "an organization declared criminal by the judgement of the International Military Tribunal". However, some SS medical personnel received prison sentences. The degree of personal involvement and/or presiding over groups involved was a factor in others. == See also == Command responsibility Declaration of Geneva Declaration of Helsinki Euthanasia trials Medical ethics Medical torture Nazi eugenics Nuremberg Code Nuremberg principles Nuremberg trials Bruno Beger Hans Conrad Julius Reiter Claus Schilling Hermann Stieve List of medical ethics cases == References == == Further reading == Hanauske-Abel, H. (1996). "Not a slippery slope or sudden subversion: German medicine and National Socialism in 1933". British Medical Journal. 313 (7070): 1453–1463. doi:10.1136/bmj.313.7070.1453. ISSN 0959-8138. PMC 2352969. PMID 8973235.(subscription required) Heller, Kevin Jon (2011). The Nuremberg Military Tribunals and the Origins of International Criminal Law. Oxford University Press. ISBN 978-0-19-955431-7. Lifton-Robert, Robert J. (2000) [1st. Pub. 1986 London:Macmillan]. The Nazi Doctors: Medical Killing and the Psychology of Genocide. Basic Books. ISBN 978-0-465-04905-9. Pellegrino, E. (15 August 1997). "The Nazi Doctors and Nuremberg: Some Moral Lessons Revisited". Annals of Internal Medicine. 127 (4): 307–308. CiteSeerX 10.1.1.694.9894. doi:10.7326/0003-4819-127-4-199708150-00010. PMID 9265432. S2CID 30547329.(subscription required) Seidelman, W. (1996). "Nuremberg lamentation: for the forgotten victims of medical science". British Medical Journal. 313 (7070): 1463–1467. doi:10.1136/bmj.313.7070.1463. ISSN 0959-8138. PMC 2352986. PMID 8973236.(subscription required) Spitz, Vivien (2005). Doctors from Hell. Sentient Publications. ISBN 978-1-59181-032-2. Weindling, P.J. (2005). Nazi Medicine and the Nuremberg Trials: From Medical War Crimes to Informed Consent. Palgrave Macmillan. ISBN 978-1-4039-3911-1. == External links == Media related to Doctors' Trial at Wikimedia Commons "Transcripts". The Nuremberg Trials Project. Harvard Law School Library. Archived from the original on 2011-04-15. – Partial transcript from the trial Cohen, Baruch C. "The Ethics Of Using Medical Data From Nazi Experiments". Jewish Law. Biddiss, M (June 1997). "Disease and dictatorship: the case of Hitler's Reich" (pdf). Journal of the Royal Society of Medicine. 90 (6): 342–346. doi:10.1177/014107689709000616. PMC 1296317. PMID 9227388.
Wikipedia/Doctors'_Trial
Demographic engineering is deliberate effort to shift the ethnic balance of an area, especially when undertaken to create ethnically homogeneous populations. Demographic engineering ranges from falsification of census results, redrawing borders, differential natalism to change birth rates of certain population groups, targeting disfavored groups with voluntary or coerced emigration, and population transfer and resettlement with members of the favored group. At an extreme, demographic engineering is undertaken through genocide. It is a common feature of conflicts around the world. == Definition == The term "demographic engineering" is related to population transfers (forced migrations), ethnic cleansing, and in extreme cases genocide. It denotes a state policy (such as population transfer) to deliberately effect population compositions or distributions. John McGarry states that during a territorial dispute—and especially before negotiations—the disputants often try "to create 'demographic facts' on the ground which undercut the claims of competitors, strengthens one’s own claims, and present fait accomplis at negotiations". He cites many examples of demographic engineering, including the former Yugoslavia, Cyprus dispute, Germans in Poland, Arab-Israeli conflict and Ossetians in Georgia. Although he restricts demographic engineering to state policies, McGarry also notes the existence of "a grey area where state representatives use surrogates to inflict violence on minorities" or fail to prevent mobs, as occurred with the anti-Jewish pogrom Kristallnacht and anti-German violence in interwar Poland. == Goals == The aim of demographic engineering does not have to be ethnic homogeneity. Before the rise of nation states demographic engineering was used to secure the newly conquered territories of empires, or to increase population levels in sparsely populated areas, often having strategic importance for imperial trade routes and increasing the political and economic power of a privileged ethnic group. Demographic engineering in the era of nation states, that is, after the decline of empires, has been used in support of the rise of nationalism (usually ethnic nationalism, but also religious nationalism). == Examples == === Ottoman Empire and Turkey === There are three phases of demographic engineering as a state policy of the Ottoman Empire. Between the 16th and 18th centuries the policy of population transfer was commonly practiced to achieve demographic engineering of the populations of newly conquered regions. (This type of demographic engineering is sometimes called "ethnic restructuring".) The second phase between the 1850s and 1913 saw thousands of Muslims displaced in the aftermath of significant Ottoman military defeats in the Balkans. This was also the start of demographic engineering policies in Anatolia that eventually escalated to genocide in the Armenian Genocide. According to Dutch Turkologist Erik-Jan Zürcher, the era from 1850 to 1950 was "Europe’s age of demographic engineering", citing the large number of forced population movements and genocides that occurred. He states that for much of this period, the Ottoman Empire was "the laboratory of demographic engineering in Europe". Swiss historian Hans-Lukas Kieser states that the Ottoman Committee of Union and Progress "was far ahead of German elites" when it came to ethnic nationalism and demographic engineering. 
Kerem Öktem connects demographic engineering to the state-led efforts to change toponyms derived from the language of the undesired population group during or after state efforts to effect its reduction or elimination (see geographical name changes in Turkey). Dilek Güven states that the 1955 Istanbul pogrom was demographic engineering because it was provoked by the state in order to cause ethnic minority citizens (Armenians, Greeks, Jews) to leave. McGarry states that tens of millions of Europeans were uprooted by demographic engineering projects in the twentieth century. === Eastern Europe after WWII === In the wake of the Second World War, most ethnic Germans fled or were expelled from the countries of Eastern Europe. === Kuwait === In recent decades, numerous policies of the Kuwaiti government have been characterized as demographic engineering, especially in relation to Kuwait's stateless Bedoon crisis and the history of naturalization in Kuwait. The State of Kuwait formally has an official Nationality Law that grants non-nationals a legal pathway to obtaining citizenship. However, as access to citizenship in Kuwait is autocratically controlled by the Al Sabah ruling family it is not subject to any external regulatory supervision. The naturalization provisions within the Nationality Law are arbitrarily implemented and lack transparency. The lack of transparency prevents non-nationals from receiving a fair opportunity to obtain citizenship. Consequently, the Al Sabah ruling family have been able to manipulate naturalization for politically motivated reasons. In the three decades after independence in 1961, the Al Sabah ruling family naturalized hundreds of thousands of foreign Bedouin immigrants predominantly from Saudi Arabia. By 1980, as many as 200,000 immigrants were naturalized in Kuwait. Throughout the 1980s, the Al Sabah's politically motivated naturalization policy continued. The naturalizations were not regulated nor sanctioned by Kuwaiti law. The exact number of naturalizations is unknown but it is estimated that up to 400,000 immigrants were unlawfully naturalized in Kuwait. The foreign Bedouin immigrants were mainly naturalized to alter the demographic makeup of the citizen population in a way that made the power of the Al Sabah ruling family more secure. As a result of the politically motivated naturalizations, the number of naturalized citizens exceeds the number of Bedoon in Kuwait. The Al Sabah ruling family actively encouraged foreign Bedouin immigrants to migrate to Kuwait. The Al Sabah ruling family favored naturalizing Bedouin immigrants because they were considered loyal to the ruling family, unlike the politically active Palestinian, Lebanese, and Syrian expats in Kuwait. The naturalized citizens were predominantly Sunni Saudi immigrants from southern tribes. Accordingly, none of the stateless Bedoon in Kuwait belong to the Ajman tribe. The Kuwaiti judicial system's lack of authority to rule on citizenship further complicates the Bedoon crisis, leaving Bedoon no access to the judiciary to present evidence and plead their case for citizenship. Although non-nationals constitute 70% of Kuwait's total population the Al Sabah ruling family persistently denies citizenship to most non-nationals, including those who fully satisfy the requirements for naturalization as stipulated in the state's official Nationality Law. 
The Kuwaiti authorities permit the forgery of hundreds of thousands of politically motivated naturalizations whilst simultaneously denying citizenship to the Bedoon. The politically motivated naturalizations were noted by the United Nations, political activists, scholars, researchers and even members of the Al Sabah family. It is widely considered a form of deliberate demographic engineering and has been likened to Bahrain's politically motivated naturalization policy. Within the GCC countries, politically motivated naturalization policies are referred to as "political naturalization" (التجنيس السياسي). === Israel === Numerous policies of the Israeli government have been characterized by scholars and human rights organizations as demographic engineering. A Human Rights Watch report charging Israel with committing the crime of apartheid cites its policies that fragment the Palestinian population in the occupied territories as facilitating "the demographic engineering that is key to preserving political control by Jewish Israelis". Israel's efforts to ensure a Jewish majority have influenced its policies towards the Israeli-occupied territories over time. David Ben-Gurion had initially been in favor of withdrawal due to the much higher birth rates of the Palestinian population in the newly occupied territories and "to insure survival a Jewish state must at all times maintain within her own borders an unassailable Jewish majority". Yigal Allon was in favor of holding the Jordan Valley, which was sparsely populated, while allowing autonomy for the rest of the more heavily populated West Bank so that "The result would be the Whole Land strategically and a Jewish state demographically". The Israeli right, which favored retaining the territories, hoped that large-scale Russian Jewish immigration to Israel would provide enough of a buffer to allow both the absorption of those territories and the maintenance of a Jewish majority. The West Bank barrier follows a route to maximize the inclusion of Jewish settlers in the West Bank and minimize the Palestinian population, with Ariel Sharon telling Arnon Soffer "For the world it is a security fence but for you and me, Arnon, it is a demography fence." Israel's efforts to establish a Jewish majority that would ensure control over the Palestinian population extended to Israel proper. Following an attack by Jewish forces on Lod that saw the fleeing or expulsion of 20,000 Palestinians from the city, the Palestinian population attempted to return to their homes. The Israeli response was both to rebuff them with military attacks and to settle a massive number of Jewish immigrants in the now seized properties that had been abandoned. While 1,030 Arabs were allowed to remain in Lod, in the years immediately following the 1948 war over 10,000 Jewish immigrants were settled in the city. A new master plan for the city saw massive construction of housing and other infrastructure for Jewish residents, unlike the intensive demolition carried out in the Arab core of the city. A 2017 report by Richard A. Falk, professor emeritus of international law at Princeton University, and Virginia Tilley, a political scientist from Southern Illinois University Carbondale, stated that "The first general policy of Israel has been one of demographic engineering, in order to establish and maintain an overwhelming Jewish majority." === Syria === During the colonial period, the French used demographic engineering, among other measures, to contain Arab nationalism.
For example, the "loyal" refugees were resettled in strategically important areas. The Syrian government's actions in Homs during the Syrian Civil War were described as demographic engineering seeking "to permanently manipulate the population along sectarian lines in order to consolidate the government’s power base." == Forms == Forms of demographic engineering in recent decades include: Population measurement Pronatalist policies Assimilation Boundary changes Economic pressures (both direct and indirect) Population transfers (ethnic dilution, ethnic consolidation and ethnic cleansing) == See also == Cultural assimilation Deportation Ethnic cleansing Ethnic nationalism Genocide Immigration Racial segregation Settler colonialism Forced sterilisation Forced adoption Social cleansing Redlining Internal colonialism Illegal immigration to the United States White genocide conspiracy theory Great Replacement conspiracy theory Eurabia conspiracy theory == References == == Sources == Bookman, Milica Zarkovic (2013) [1997]. The Demographic Struggle for Power: The Political Economy of Demographic Engineering in the Modern World. Routledge. ISBN 978-1-135-24829-1. Bookman, Milica Zarkovic (2002). "Demographic Engineering and The Struggle for Power". Journal of International Affairs. 56 (1): 25–51. JSTOR 24357882. Kieser, Hans-Lukas (2018). Talaat Pasha: Father of Modern Turkey, Architect of Genocide. Princeton University Press. ISBN 978-1-4008-8963-1. Lay summary in: Kieser, Hans-Lukas. "Pasha, Talat". 1914-1918-online. International Encyclopedia of the First World War. McGarry, John (1998). "'Demographic engineering': the state-directed movement of ethnic groups as a technique of conflict regulation". Ethnic and Racial Studies. 21 (4): 613–638. doi:10.1080/014198798329793. Morland, Paul (2016). Demographic Engineering: Population Strategies in Ethnic Conflict. Routledge. ISBN 978-1-317-15292-7. Öktem, Kerem (2008). "The Nation's Imprint: Demographic Engineering and the Change of Toponymes in Republican Turkey". European Journal of Turkish Studies. Social Sciences on Contemporary Turkey (7). doi:10.4000/ejts.2243. ISSN 1773-0546. Schad, Thomas (2016). "From Muslims into Turks? Consensual demographic engineering between interwar Yugoslavia and Turkey". Journal of Genocide Research. 18 (4): 427–446. doi:10.1080/14623528.2016.1228634. S2CID 151533035. Tirtosudarmo, Riwanto (2019). "Demographic Engineering and Displacement". The Politics of Migration in Indonesia and Beyond. Springer. pp. 49–68. ISBN 978-981-10-9032-5. Tzfadia, Erez; Yacobi, Haim (2007). "Identity, Migration, and the City: Russian Immigrants in Contested Urban Space in Israel". Urban Geography. 28 (5). Tayloy & Francis: 436–455. doi:10.2747/0272-3638.28.5.436. ISSN 0272-3638. S2CID 145502001. Üngör, Uğur Ümit (2011). The Making of Modern Turkey: Nation and State in Eastern Anatolia, 1913–1950. Oxford University Press. ISBN 978-0-19-965522-9. Weiner, Myron; Teitelbaum, Michael S. (2001). Political Demography, Demographic Engineering. Berghahn Books. ISBN 978-1-57181-254-4. Zürcher, Erik-Jan (2009). "The Late Ottoman Empire as Laboratory of Demographic Engineering". Il Mestiere di Storico (1): 1000–1012. doi:10.1400/148038.
Wikipedia/Demographic_engineering
Fisher's geometric model (FGM) is an evolutionary model of the effect sizes and effect on fitness of spontaneous mutations proposed by Ronald Fisher to explain the distribution of effects of mutations that could contribute to adaptive evolution. == Conceptualization == Sometimes referred to as the Fisher–Orr model, Fisher's model addresses the problem of adaptation (and, to some extent, complexity), and continues to be a point of reference in contemporary research on the genetic and evolutionary consequences of pleiotropy. The model has two forms: a geometric formalism and a microscope analogy. A microscope which has many knobs to adjust the lenses to obtain a sharp image has little chance of producing an optimally functioning image by randomly turning the knobs. The chances of a clear image are not so bad if the number of knobs is low, but the chances decrease dramatically if the number of adjustable parameters (knobs) is larger than two or three. Fisher introduced a geometric metaphor, which eventually became known as Fisher's geometric model. In his model, Fisher argues that the functioning of the microscope is analogous to the fitness of an organism. The performance of the microscope depends on the state of various knobs that can be manipulated, corresponding to distances and orientations of various lenses, whereas the fitness of an organism depends on the state of various phenotypic characters such as body size and beak length and depth. The increase in the fitness of an organism by random changes is then analogous to the attempt to improve the performance of a microscope through randomly changing the positions of the knobs on the microscope. The analogy between the microscope and an evolving organism can be formalized by representing the phenotype of an organism as a point in a high-dimensional space, where the dimensions of that space correspond to the traits of the organism. The more independent dimensions of variation the phenotype has, the more difficult it is for random changes to produce an improvement. If there are many different ways to change a phenotype, it becomes very unlikely that a random change affects the right combination of traits in the right way to improve fitness. Fisher noted that the smaller the effect, the higher the chance that a change is beneficial. At one extreme, changes with infinitesimally small effect have a 50% chance of improving fitness. This argument led to the widely held position that evolution proceeds by small mutations. Furthermore, Orr discovered that both the fixation probability of a beneficial mutation and the fitness gain conferred by the fixation of the beneficial mutation decrease with organismal complexity. Thus, the predicted rate of adaptation decreases quickly with the rise in organismal complexity, a theoretical finding known as the ‘cost of complexity’. == References ==
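The core geometric argument above (a change of very small effect has close to a 50% chance of being beneficial, and that chance falls as the effect size and the number of trait dimensions grow) can be checked with a short Monte Carlo sketch. This is hypothetical illustration code rather than anything from the model's literature: NumPy is assumed to be available, and the starting distance d, mutation sizes r and dimensionalities n are arbitrary choices.

```python
import numpy as np

def prob_beneficial(n, d=1.0, r=0.1, trials=100_000, seed=0):
    """Estimate the probability that a random mutation of size r is beneficial
    for a phenotype at distance d from the optimum in an n-dimensional trait space."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n)
    z[0] = d                                  # current phenotype; the optimum sits at the origin
    steps = rng.normal(size=(trials, n))      # random mutation directions...
    steps *= r / np.linalg.norm(steps, axis=1, keepdims=True)  # ...rescaled to length r
    new_dist = np.linalg.norm(z + steps, axis=1)
    return float(np.mean(new_dist < d))       # beneficial = strictly closer to the optimum

for n in (2, 10, 50):
    for r in (0.01, 0.2, 0.5):
        print(f"n={n:3d}  r={r:4.2f}  P(beneficial) ~ {prob_beneficial(n, r=r):.3f}")
```

As r shrinks the estimate approaches 0.5, while larger mutations and higher-dimensional phenotypes are progressively less likely to move the phenotype closer to the optimum.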
Wikipedia/Fisher's_geometric_model
Discriminative models, also referred to as conditional models, are a class of models frequently used for classification. They are typically used to solve binary classification problems, i.e. assign labels, such as pass/fail, win/lose, alive/dead or healthy/sick, to existing datapoints. Types of discriminative models include logistic regression (LR), conditional random fields (CRFs), decision trees among many others. Generative model approaches which uses a joint probability distribution instead, include naive Bayes classifiers, Gaussian mixture models, variational autoencoders, generative adversarial networks and others. == Definition == Unlike generative modelling, which studies the joint probability P ( x , y ) {\displaystyle P(x,y)} , discriminative modeling studies the P ( y | x ) {\displaystyle P(y|x)} or maps the given unobserved variable (target) x {\displaystyle x} to a class label y {\displaystyle y} dependent on the observed variables (training samples). For example, in object recognition, x {\displaystyle x} is likely to be a vector of raw pixels (or features extracted from the raw pixels of the image). Within a probabilistic framework, this is done by modeling the conditional probability distribution P ( y | x ) {\displaystyle P(y|x)} , which can be used for predicting y {\displaystyle y} from x {\displaystyle x} . Note that there is still distinction between the conditional model and the discriminative model, though more often they are simply categorised as discriminative model. === Pure discriminative model vs. conditional model === A conditional model models the conditional probability distribution, while the traditional discriminative model aims to optimize on mapping the input around the most similar trained samples. == Typical discriminative modelling approaches == The following approach is based on the assumption that it is given the training data-set D = { ( x i ; y i ) | i ≤ N ∈ Z } {\displaystyle D=\{(x_{i};y_{i})|i\leq N\in \mathbb {Z} \}} , where y i {\displaystyle y_{i}} is the corresponding output for the input x i {\displaystyle x_{i}} . === Linear classifier === We intend to use the function f ( x ) {\displaystyle f(x)} to simulate the behavior of what we observed from the training data-set by the linear classifier method. Using the joint feature vector ϕ ( x , y ) {\displaystyle \phi (x,y)} , the decision function is defined as: f ( x ; w ) = arg ⁡ max y w T ϕ ( x , y ) {\displaystyle f(x;w)=\arg \max _{y}w^{T}\phi (x,y)} According to Memisevic's interpretation, w T ϕ ( x , y ) {\displaystyle w^{T}\phi (x,y)} , which is also c ( x , y ; w ) {\displaystyle c(x,y;w)} , computes a score which measures the compatibility of the input x {\displaystyle x} with the potential output y {\displaystyle y} . Then the arg ⁡ max {\displaystyle \arg \max } determines the class with the highest score. === Logistic regression (LR) === Since the 0-1 loss function is a commonly used one in the decision theory, the conditional probability distribution P ( y | x ; w ) {\displaystyle P(y|x;w)} , where w {\displaystyle w} is a parameter vector for optimizing the training data, could be reconsidered as following for the logistics regression model: P ( y | x ; w ) = 1 Z ( x ; w ) exp ⁡ ( w T ϕ ( x , y ) ) {\displaystyle P(y|x;w)={\frac {1}{Z(x;w)}}\exp(w^{T}\phi (x,y))} , with Z ( x ; w ) = ∑ y exp ⁡ ( w T ϕ ( x , y ) ) {\displaystyle Z(x;w)=\textstyle \sum _{y}\displaystyle \exp(w^{T}\phi (x,y))} The equation above represents logistic regression. 
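As a minimal sketch of the decision function and conditional distribution defined above: the block-structured joint feature vector phi(x, y) used here is only one common choice, NumPy is assumed to be available, and the weights and input are placeholder values rather than anything estimated from data.

```python
import numpy as np

def joint_features(x, y, n_classes):
    """One common choice of joint feature vector phi(x, y):
    copy the input x into the block that belongs to class y."""
    phi = np.zeros(n_classes * x.size)
    phi[y * x.size:(y + 1) * x.size] = x
    return phi

def scores(x, w, n_classes):
    # w^T phi(x, y) for every candidate label y
    return np.array([w @ joint_features(x, y, n_classes) for y in range(n_classes)])

def predict(x, w, n_classes):
    # decision function f(x; w) = argmax_y  w^T phi(x, y)
    return int(np.argmax(scores(x, w, n_classes)))

def conditional(x, w, n_classes):
    # logistic regression: P(y | x; w) = exp(w^T phi(x, y)) / Z(x; w)
    s = scores(x, w, n_classes)
    s -= s.max()                      # subtract the maximum for numerical stability
    p = np.exp(s)
    return p / p.sum()

x = np.array([1.0, -2.0, 0.5])                           # placeholder input
w = np.random.default_rng(0).normal(size=3 * x.size)     # placeholder weights for 3 classes
print(predict(x, w, 3), conditional(x, w, 3))
```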
Notice that a major distinction between models is their way of introducing posterior probability. Posterior probability is inferred from the parametric model. We then can maximize the parameter by following equation: L ( w ) = ∑ i log ⁡ p ( y i | x i ; w ) {\displaystyle L(w)=\textstyle \sum _{i}\displaystyle \log p(y^{i}|x^{i};w)} It could also be replaced by the log-loss equation below: l log ( x i , y i , c ( x i ; w ) ) = − log ⁡ p ( y i | x i ; w ) = log ⁡ Z ( x i ; w ) − w T ϕ ( x i , y i ) {\displaystyle l^{\log }(x^{i},y^{i},c(x^{i};w))=-\log p(y^{i}|x^{i};w)=\log Z(x^{i};w)-w^{T}\phi (x^{i},y^{i})} Since the log-loss is differentiable, a gradient-based method can be used to optimize the model. A global optimum is guaranteed because the objective function is convex. The gradient of log likelihood is represented by: ∂ L ( w ) ∂ w = ∑ i ϕ ( x i , y i ) − E p ( y | x i ; w ) ϕ ( x i , y ) {\displaystyle {\frac {\partial L(w)}{\partial w}}=\textstyle \sum _{i}\displaystyle \phi (x^{i},y^{i})-E_{p(y|x^{i};w)}\phi (x^{i},y)} where E p ( y | x i ; w ) {\displaystyle E_{p(y|x^{i};w)}} is the expectation of p ( y | x i ; w ) {\displaystyle p(y|x^{i};w)} . The above method will provide efficient computation for the relative small number of classification. == Contrast with generative model == === Contrast in approaches === Let's say we are given the m {\displaystyle m} class labels (classification) and n {\displaystyle n} feature variables, Y : { y 1 , y 2 , … , y m } , X : { x 1 , x 2 , … , x n } {\displaystyle Y:\{y_{1},y_{2},\ldots ,y_{m}\},X:\{x_{1},x_{2},\ldots ,x_{n}\}} , as the training samples. A generative model takes the joint probability P ( x , y ) {\displaystyle P(x,y)} , where x {\displaystyle x} is the input and y {\displaystyle y} is the label, and predicts the most possible known label y ~ ∈ Y {\displaystyle {\widetilde {y}}\in Y} for the unknown variable x ~ {\displaystyle {\widetilde {x}}} using Bayes' theorem. Discriminative models, as opposed to generative models, do not allow one to generate samples from the joint distribution of observed and target variables. However, for tasks such as classification and regression that do not require the joint distribution, discriminative models can yield superior performance (in part because they have fewer variables to compute). On the other hand, generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks. In addition, most discriminative models are inherently supervised and cannot easily support unsupervised learning. Application-specific details ultimately dictate the suitability of selecting a discriminative versus generative model. Discriminative models and generative models also differ in introducing the posterior possibility. To maintain the least expected loss, the minimization of result's misclassification should be acquired. In the discriminative model, the posterior probabilities, P ( y | x ) {\displaystyle P(y|x)} , is inferred from a parametric model, where the parameters come from the training data. Points of estimation of the parameters are obtained from the maximization of likelihood or distribution computation over the parameters. 
On the other hand, considering that the generative models focus on the joint probability, the class posterior possibility P ( k ) {\displaystyle P(k)} is considered in Bayes' theorem, which is P ( y | x ) = p ( x | y ) p ( y ) ∑ i p ( x | i ) p ( i ) = p ( x | y ) p ( y ) p ( x ) {\displaystyle P(y|x)={\frac {p(x|y)p(y)}{\textstyle \sum _{i}p(x|i)p(i)\displaystyle }}={\frac {p(x|y)p(y)}{p(x)}}} . === Advantages and disadvantages in application === In the repeated experiments, logistic regression and naive Bayes are applied here for different models on binary classification task, discriminative learning results in lower asymptotic errors, while generative one results in higher asymptotic errors faster. However, in Ulusoy and Bishop's joint work, Comparison of Generative and Discriminative Techniques for Object Detection and Classification, they state that the above statement is true only when the model is the appropriate one for data (i.e.the data distribution is correctly modeled by the generative model). ==== Advantages ==== Significant advantages of using discriminative modeling are: Higher accuracy, which mostly leads to better learning result. Allows simplification of the input and provides a direct approach to P ( y | x ) {\displaystyle P(y|x)} Saves calculation resource Generates lower asymptotic errors Compared with the advantages of using generative modeling: Takes all data into consideration, which could result in slower processing as a disadvantage Requires fewer training samples A flexible framework that could easily cooperate with other needs of the application ==== Disadvantages ==== Training method usually requires multiple numerical optimization techniques Similarly by the definition, the discriminative model will need the combination of multiple subtasks for solving a complex real-world problem == Optimizations in applications == Since both advantages and disadvantages present on the two way of modeling, combining both approaches will be a good modeling in practice. For example, in Marras' article A Joint Discriminative Generative Model for Deformable Model Construction and Classification, he and his coauthors apply the combination of two modelings on face classification of the models, and receive a higher accuracy than the traditional approach. Similarly, Kelm also proposed the combination of two modelings for pixel classification in his article Combining Generative and Discriminative Methods for Pixel Classification with Multi-Conditional Learning. During the process of extracting the discriminative features prior to the clustering, Principal component analysis (PCA), though commonly used, is not a necessarily discriminative approach. In contrast, LDA is a discriminative one. Linear discriminant analysis (LDA), provides an efficient way of eliminating the disadvantage we list above. As we know, the discriminative model needs a combination of multiple subtasks before classification, and LDA provides appropriate solution towards this problem by reducing dimension. == Types == Examples of discriminative models include: Logistic regression, a type of generalized linear regression used for predicting binary or categorical outputs (also known as maximum entropy classifiers) Boosting (meta-algorithm) Conditional random fields Linear regression Random forests == See also == Generative model == References ==
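Continuing the logistic-regression formulation above, the gradient of the log-likelihood can be turned into a plain gradient-ascent training loop. This is a hypothetical sketch only: the block feature map, the synthetic two-class data, the learning rate and the iteration count are illustrative choices, and NumPy is assumed.

```python
import numpy as np

def phi(x, y, n_classes):
    """Block joint feature vector: x copied into the slot of class y (one common choice)."""
    out = np.zeros(n_classes * x.size)
    out[y * x.size:(y + 1) * x.size] = x
    return out

def posterior(x, w, n_classes):
    s = np.array([w @ phi(x, y, n_classes) for y in range(n_classes)])
    s -= s.max()                          # numerical stability
    p = np.exp(s)
    return p / p.sum()                    # P(y | x; w)

def grad_log_likelihood(X, Y, w, n_classes):
    # dL/dw = sum_i [ phi(x_i, y_i) - E_{P(y | x_i; w)} phi(x_i, y) ]
    g = np.zeros_like(w)
    for x, y in zip(X, Y):
        p = posterior(x, w, n_classes)
        g += phi(x, y, n_classes)
        g -= sum(p[k] * phi(x, k, n_classes) for k in range(n_classes))
    return g

# Synthetic two-class data: class 0 centred near (-1, -1), class 1 near (+1, +1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(+1.0, 1.0, (50, 2))])
Y = np.array([0] * 50 + [1] * 50)

n_classes = 2
w = np.zeros(n_classes * X.shape[1])
for _ in range(200):                      # plain gradient ascent on L(w)
    w += 0.1 * grad_log_likelihood(X, Y, w, n_classes) / len(X)

preds = np.array([np.argmax(posterior(x, w, n_classes)) for x in X])
print("training accuracy:", (preds == Y).mean())
```

A generative counterpart (for example a naive Bayes classifier fitted to the same data) would model the joint distribution instead and obtain the posterior through Bayes' theorem, as described in the contrast above.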
Wikipedia/Discriminative_model
Dependency networks (DNs) are graphical models, similar to Markov networks, wherein each vertex (node) corresponds to a random variable and each edge captures dependencies among variables. Unlike Bayesian networks, DNs may contain cycles. Each node is associated to a conditional probability table, which determines the realization of the random variable given its parents. == Markov blanket == In a Bayesian network, the Markov blanket of a node is the set of parents and children of that node, together with the children's parents. The values of the parents and children of a node evidently give information about that node. However, its children's parents also have to be included in the Markov blanket, because they can be used to explain away the node in question. In a Markov random field, the Markov blanket for a node is simply its adjacent (or neighboring) nodes. In a dependency network, the Markov blanket for a node is simply the set of its parents. == Dependency network versus Bayesian networks == Dependency networks have advantages and disadvantages with respect to Bayesian networks. In particular, they are easier to parameterize from data, as there are efficient algorithms for learning both the structure and probabilities of a dependency network from data. Such algorithms are not available for Bayesian networks, for which the problem of determining the optimal structure is NP-hard. Nonetheless, a dependency network may be more difficult to construct using a knowledge-based approach driven by expert-knowledge. == Dependency networks versus Markov networks == Consistent dependency networks and Markov networks have the same representational power. Nonetheless, it is possible to construct non-consistent dependency networks, i.e., dependency networks for which there is no compatible valid joint probability distribution. Markov networks, in contrast, are always consistent. == Definition == A consistent dependency network for a set of random variables X = ( X 1 , … , X n ) {\textstyle \mathbf {X} =(X_{1},\ldots ,X_{n})} with joint distribution p ( x ) {\displaystyle p(\mathbf {x} )} is a pair ( G , P ) {\displaystyle (G,P)} where G {\displaystyle G} is a cyclic directed graph, where each of its nodes corresponds to a variable in X {\displaystyle \mathbf {X} } , and P {\displaystyle P} is a set of conditional probability distributions. The parents of node X i {\displaystyle X_{i}} , denoted P a i {\displaystyle \mathbf {Pa_{i}} } , correspond to those variables P a i ⊆ ( X 1 , … , X i − 1 , X i + 1 , … , X n ) {\displaystyle \mathbf {Pa_{i}} \subseteq (X_{1},\ldots ,X_{i-1},X_{i+1},\ldots ,X_{n})} that satisfy the following independence relationships p ( x i ∣ p a i ) = p ( x i ∣ x 1 , … , x i − 1 , x i + 1 , … , x n ) = p ( x i ∣ x − x i ) . {\displaystyle p(x_{i}\mid \mathbf {pa_{i}} )=p(x_{i}\mid x_{1},\ldots ,x_{i-1},x_{i+1},\ldots ,x_{n})=p(x_{i}\mid \mathbf {x} -{x_{i}}).} The dependency network is consistent in the sense that each local distribution can be obtained from the joint distribution p ( x ) {\displaystyle p(\mathbf {x} )} . Dependency networks learned using large data sets with large sample sizes will almost always be consistent. A non-consistent network is a network for which there is no joint probability distribution compatible with the pair ( G , P ) {\displaystyle (G,P)} . In that case, there is no joint probability distribution that satisfies the independence relationships subsumed by that pair. 
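A small illustration of the consistency condition above: when every local distribution is read off from a single joint distribution, the pair (G, P) is consistent by construction. This is hypothetical NumPy code, and the 2×2×2 toy joint is arbitrary.

```python
import numpy as np

# An arbitrary joint distribution over three binary variables X1, X2, X3.
rng = np.random.default_rng(1)
joint = rng.random((2, 2, 2))
joint /= joint.sum()

def local_conditional(i):
    """p(x_i | x_-i): normalise the joint along axis i.
    Because every local distribution comes from the same joint, the network is consistent."""
    return joint / joint.sum(axis=i, keepdims=True)

# The parents of X_i are simply the other variables its local distribution depends on;
# cycles such as X1 -> X2 together with X2 -> X1 are allowed, unlike in a Bayesian network.
for i in range(3):
    print(f"p(x{i + 1} | x_-{i + 1}):")
    print(np.round(local_conditional(i), 3))
```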
== Structure and parameters learning == Two important tasks in a dependency network are to learn its structure and probabilities from data. Essentially, the learning algorithm consists of independently performing a probabilistic regression or classification for each variable in the domain. It comes from observation that the local distribution for variable X i {\displaystyle X_{i}} in a dependency network is the conditional distribution p ( x i | x − x i ) {\displaystyle p(x_{i}|\mathbf {x} -{x_{i}})} , which can be estimated by any number of classification or regression techniques, such as methods using a probabilistic decision tree, a neural network or a probabilistic support-vector machine. Hence, for each variable X i {\displaystyle X_{i}} in domain X {\displaystyle X} , we independently estimate its local distribution from data using a classification algorithm, even though it is a distinct method for each variable. Here, we will briefly show how probabilistic decision trees are used to estimate the local distributions. For each variable X i {\displaystyle X_{i}} in X {\displaystyle \mathbf {X} } , a probabilistic decision tree is learned where X i {\displaystyle X_{i}} is the target variable and X − X i {\displaystyle \mathbf {X} -X_{i}} are the input variables. To learn a decision tree structure for X i {\displaystyle X_{i}} , the search algorithm begins with a singleton root node without children. Then, each leaf node in the tree is replaced with a binary split on some variable X j {\displaystyle X_{j}} in X − X i {\displaystyle \mathbf {X} -X_{i}} , until no more replacements increase the score of the tree. == Probabilistic Inference == A probabilistic inference is the task in which we wish to answer probabilistic queries of the form p ( y ∣ z ) {\displaystyle p(\mathbf {y\mid z} )} , given a graphical model for X {\displaystyle \mathbf {X} } , where Y {\displaystyle \mathbf {Y} } (the 'target' variables) Z {\displaystyle \mathbf {Z} } (the 'input' variables) are disjoint subsets of X {\displaystyle \mathbf {X} } . One of the alternatives for performing probabilistic inference is using Gibbs sampling. A naive approach for this uses an ordered Gibbs sampler, an important difficulty of which is that if either p ( y ∣ z ) {\displaystyle p(\mathbf {y\mid z} )} or p ( z ) {\displaystyle p(\mathbf {z} )} is small, then many iterations are required for an accurate probability estimate. Another approach for estimating p ( y ∣ z ) {\displaystyle p(\mathbf {y\mid z} )} when p ( z ) {\displaystyle p(\mathbf {z} )} is small is to use modified ordered Gibbs sampler, where Z = z {\displaystyle \mathbf {Z=z} } is fixed during Gibbs sampling. It may also happen that y {\displaystyle \mathbf {y} } is rare, e.g. when Y {\displaystyle \mathbf {Y} } has many variables. So, the law of total probability along with the independencies encoded in a dependency network can be used to decompose the inference task into a set of inference tasks on single variables. This approach comes with the advantage that some terms may be obtained by direct lookup, thereby avoiding some Gibbs sampling. You can see below an algorithm that can be used for obtain p ( y | z ) {\displaystyle p(\mathbf {y|z} )} for a particular instance of y ∈ Y {\displaystyle \mathbf {y} \in \mathbf {Y} } and z ∈ Z {\displaystyle \mathbf {z} \in \mathbf {Z} } , where Y {\displaystyle \mathbf {Y} } and Z {\displaystyle \mathbf {Z} } are disjoint subsets. 
Algorithm 1:
U := Y (* the unprocessed variables *)
P := Z (* the processed and conditioning variables *)
p := z (* the values for P *)
While U ≠ ∅:
  Choose X_i ∈ U such that X_i has no more parents in U than any other variable in U
  If all the parents of X_i are in P:
    p(x_i | p) := p(x_i | pa_i)
  Else:
    Use a modified ordered Gibbs sampler to determine p(x_i | p)
  U := U − X_i
  P := P + X_i
  p := p + x_i
Return the product of the conditionals p(x_i | p)
== Applications == In addition to the applications to probabilistic inference, the following applications are in the category of collaborative filtering (CF), which is the task of predicting preferences. Dependency networks are a natural model class on which to base CF predictions, since an algorithm for this task only needs estimates of p ( x i = 1 | x − x i = 0 ) {\displaystyle p(x_{i}=1|\mathbf {x} -{x_{i}}=0)} to produce recommendations. In particular, these estimates may be obtained by a direct lookup in a dependency network. Examples include: predicting what movies a person will like based on his or her ratings of movies seen; predicting what web pages a person will access based on his or her history on the site; predicting what news stories a person is interested in based on other stories he or she has read; and predicting what product a person will buy based on products he or she has already purchased and/or dropped into his or her shopping basket. Another class of useful applications for dependency networks is related to data visualization, that is, visualization of predictive relationships. == See also == Relational dependency network == References ==
Wikipedia/Dependency_network_(graphical_model)
Graphical Models is an academic journal in computer graphics and geometry processing published by Elsevier. As of 2021, its editor-in-chief is Bedrich Benes of Purdue University. == History == This journal has gone through multiple names. Founded in 1972 as Computer Graphics and Image Processing by Azriel Rosenfeld, it became the first journal to focus on computer image analysis. Its first change of name came in 1983, when it became Computer Vision, Graphics, and Image Processing. In 1991, it split into two journals, CVGIP: Graphical Models and Image Processing, and CVGIP: Image Understanding, which later became Computer Vision and Image Understanding. Meanwhile, in 1995, CVGIP: Graphical Models and Image Processing dropped the "CVGIP" prefix from its name, and finally took its current title, Graphical Models, in 2002. == Ranking == Although initially ranked by SCImago Journal Rank as a top-quartile journal in 1999 in its main topic areas, computer graphics and computer-aided design, and then for many years ranked as second-quartile, by 2020 it had fallen to the third quartile. == References ==
Wikipedia/Graphical_Models
Graphical models have become powerful frameworks for protein structure prediction, protein–protein interaction, and free energy calculations for protein structures. Using a graphical model to represent the protein structure allows the solution of many problems including secondary structure prediction, protein-protein interactions, protein-drug interaction, and free energy calculations. There are two main approaches to using graphical models in protein structure modeling. The first approach uses discrete variables for representing the coordinates or the dihedral angles of the protein structure. The variables are originally all continuous values and, to transform them into discrete values, a discretization process is typically applied. The second approach uses continuous variables for the coordinates or dihedral angles. == Discrete graphical models for protein structure == Markov random fields, also known as undirected graphical models are common representations for this problem. Given an undirected graph G = (V, E), a set of random variables X = (Xv)v ∈ V indexed by V, form a Markov random field with respect to G if they satisfy the pairwise Markov property: any two non-adjacent variables are conditionally independent given all other variables: X u ⊥ ⊥ X v | X V ∖ { u , v } if { u , v } ∉ E . {\displaystyle X_{u}\perp \!\!\!\perp X_{v}|X_{V\setminus \{u,v\}}\quad {\text{if }}\{u,v\}\notin E.} In the discrete model, the continuous variables are discretized into a set of favorable discrete values. If the variables of choice are dihedral angles, the discretization is typically done by mapping each value to the corresponding rotamer conformation. === Model === Let X = {Xb, Xs} be the random variables representing the entire protein structure. Xb can be represented by a set of 3-d coordinates of the backbone atoms, or equivalently, by a sequence of bond lengths and dihedral angles. The probability of a particular conformation x can then be written as: p ( X = x | Θ ) = p ( X b = x b ) p ( X s = x s | X b , Θ ) , {\displaystyle p(X=x|\Theta )=p(X_{b}=x_{b})p(X_{s}=x_{s}|X_{b},\Theta ),\,} where Θ {\displaystyle \Theta } represents any parameters used to describe this model, including sequence information, temperature etc. Frequently the backbone is assumed to be rigid with a known conformation, and the problem is then transformed to a side-chain placement problem. The structure of the graph is also encoded in Θ {\displaystyle \Theta } . This structure shows which two variables are conditionally independent. As an example, side chain angles of two residues far apart can be independent given all other angles in the protein. To extract this structure, researchers use a distance threshold, and only a pair of residues which are within that threshold are considered connected (i.e. have an edge between them). Given this representation, the probability of a particular side chain conformation xs given the backbone conformation xb can be expressed as p ( X s = x s | X b = x b ) = 1 Z ∏ c ∈ C ( G ) Φ c ( x s c , x b c ) {\displaystyle p(X_{s}=x_{s}|X_{b}=x_{b})={\frac {1}{Z}}\prod _{c\in C(G)}\Phi _{c}(x_{s}^{c},x_{b}^{c})} where C(G) is the set of all cliques in G, Φ {\displaystyle \Phi } is a potential function defined over the variables, and Z is the partition function. To completely characterize the MRF, it is necessary to define the potential function Φ {\displaystyle \Phi } . 
To simplify, the cliques of a graph are usually restricted to only the cliques of size 2, which means the potential function is only defined over pairs of variables. In the Goblin system, these pairwise functions are defined as Φ ( x s i p , x b j q ) = exp ⁡ ( − E ( x s i p , x b j q ) / k B T ) {\displaystyle \Phi (x_{s}^{i_{p}},x_{b}^{j_{q}})=\exp(-E(x_{s}^{i_{p}},x_{b}^{j_{q}})/k_{B}T)} where E ( x s i p , x b j q ) {\displaystyle E(x_{s}^{i_{p}},x_{b}^{j_{q}})} is the energy of interaction between rotamer state p of residue X i s {\displaystyle X_{i}^{s}} and rotamer state q of residue X j s {\displaystyle X_{j}^{s}} and k B {\displaystyle k_{B}} is the Boltzmann constant. Using a PDB file, this model can be built over the protein structure. From this model, free energy can be calculated. === Free energy calculation: belief propagation === It has been shown that the free energy of a system is calculated as G = E − T S {\displaystyle G=E-TS} where E is the enthalpy of the system, T the temperature and S, the entropy. Now if we associate a probability with each state of the system (p(x) for each conformation value x), G can be rewritten as G = ∑ x p ( x ) E ( x ) + T ∑ x p ( x ) ln ⁡ ( p ( x ) ) {\displaystyle G=\sum _{x}p(x)E(x)+T\sum _{x}p(x)\ln(p(x))\,} since the entropy is S = −∑x p(x) ln p(x), with the Boltzmann constant absorbed into T. Calculating p(x) on discrete graphs is done by the generalized belief propagation algorithm. This algorithm calculates an approximation to the probabilities, and it is not guaranteed to converge to a final value set. However, in practice, it has been shown to converge successfully in many cases. == Continuous graphical models for protein structures == Graphical models can still be used when the variables of choice are continuous. In these cases, the probability distribution is represented as a multivariate probability distribution over continuous variables. Each family of distributions will then impose certain properties on the graphical model. The multivariate Gaussian distribution is one of the most convenient distributions for this problem. The simple form of the probability and the direct relation with the corresponding graphical model make it a popular choice among researchers. === Gaussian graphical models of protein structures === Gaussian graphical models are multivariate probability distributions encoding a network of dependencies among variables. Let Θ = [ θ 1 , θ 2 , … , θ n ] {\displaystyle \Theta =[\theta _{1},\theta _{2},\dots ,\theta _{n}]} be a set of n {\displaystyle n} variables, such as n {\displaystyle n} dihedral angles, and let f ( Θ = D ) {\displaystyle f(\Theta =D)} be the value of the probability density function at a particular value D. A multivariate Gaussian graphical model defines this probability as follows: f ( Θ = D ) = 1 Z exp ⁡ { − 1 2 ( D − μ ) T Σ − 1 ( D − μ ) } {\displaystyle f(\Theta =D)={\frac {1}{Z}}\exp \left\{-{\frac {1}{2}}(D-\mu )^{T}\Sigma ^{-1}(D-\mu )\right\}} where Z = ( 2 π ) n / 2 | Σ | 1 / 2 {\displaystyle Z=(2\pi )^{n/2}|\Sigma |^{1/2}} is the closed form for the partition function. The parameters of this distribution are μ {\displaystyle \mu } and Σ {\displaystyle \Sigma } . μ {\displaystyle \mu } is the vector of mean values of each variable, and Σ − 1 {\displaystyle \Sigma ^{-1}} , the inverse of the covariance matrix, is also known as the precision matrix. The precision matrix contains the pairwise dependencies between the variables.
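To make the Gaussian formulation concrete, here is a toy sketch (the mean vector, covariance matrix, and angle values are invented; in practice they would be estimated from a training set of PDB structures) that evaluates the density with its closed-form partition function and forms the precision matrix discussed next:

```python
import numpy as np

# A hypothetical Gaussian graphical model over n = 3 dihedral angles (degrees).
mu = np.array([-60.0, -45.0, 120.0])
Sigma = np.array([[90.0, 30.0,  0.0],
                  [30.0, 80.0, 20.0],
                  [ 0.0, 20.0, 70.0]])

precision = np.linalg.inv(Sigma)       # the precision matrix Sigma^{-1}
n = len(mu)
Z = (2 * np.pi) ** (n / 2) * np.linalg.det(Sigma) ** 0.5   # closed-form partition function

def density(D):
    """Evaluate f(Theta = D) using the multivariate Gaussian formula above."""
    diff = D - mu
    return np.exp(-0.5 * diff @ precision @ diff) / Z

print(density(np.array([-55.0, -40.0, 115.0])))
print(np.round(precision, 4))   # near-zero entries correspond to conditional independence
```

Because Z is available in closed form, no message passing is needed here; the sketch simply evaluates the density directly.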
A zero value in Σ − 1 {\displaystyle \Sigma ^{-1}} means that, conditioned on the values of the other variables, the two corresponding variables are independent of each other. To learn the graph structure as a multivariate Gaussian graphical model, we can use either L1 regularization or neighborhood selection algorithms. These algorithms simultaneously learn a graph structure and the edge strength of the connected nodes. An edge strength corresponds to the potential function defined on the corresponding two-node clique. We use a training set of PDB structures to learn μ {\displaystyle \mu } and Σ − 1 {\displaystyle \Sigma ^{-1}} . Once the model is learned, we can repeat the same step as in the discrete case to get the density functions at each node, and use the analytical form to calculate the free energy. Here, the partition function already has a closed form, so inference, at least for Gaussian graphical models, is trivial. If the analytical form of the partition function is not available, particle filtering or expectation propagation can be used to approximate Z, and then perform the inference and calculate the free energy. == References == Time Varying Undirected Graphs, Shuheng Zhou, John D. Lafferty and Larry A. Wasserman, COLT 2008 Free Energy Estimates of All-atom Protein Structures Using Generalized Belief Propagation, Hetunandan Kamisetty, Eric P. Xing, Christopher J. Langmead, RECOMB 2008 == External links == http://www.liebertonline.com/doi/pdf/10.1089/cmb.2007.0131 https://web.archive.org/web/20110724225908/http://www.learningtheory.org/colt2008/81-Zhou.pdf Liu Y; Carbonell J; Gopalakrishnan V (2009). "Conditional graphical models for protein structural motif recognition". J. Comput. Biol. 16 (5): 639–57. doi:10.1089/cmb.2008.0176. hdl:1721.1/62177. PMID 19432536. S2CID 7035106. Predicting Protein Folds with Structural Repeats Using a Chain Graph Model
Wikipedia/Graphical_models_for_protein_structure
In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well known Markov chain models. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization. This realization sequence is often called the context; therefore the VOM models are also called context trees. VOM models are nicely rendered by colorized probabilistic suffix trees (PST). The flexibility in the number of conditioning random variables turns out to be of real advantage for many applications, such as statistical analysis, classification and prediction. == Example == Consider for example a sequence of random variables, each of which takes a value from the ternary alphabet {a, b, c}. Specifically, consider the string constructed from infinite concatenations of the sub-string aaabc: aaabcaaabcaaabcaaabc…aaabc. The VOM model of maximal order 2 can approximate the above string using only the following five conditional probability components: Pr(a | aa) = 0.5, Pr(b | aa) = 0.5, Pr(c | b) = 1.0, Pr(a | c)= 1.0, Pr(a | ca) = 1.0. In this example, Pr(c | ab) = Pr(c | b) = 1.0; therefore, the shorter context b is sufficient to determine the next character. Similarly, the VOM model of maximal order 3 can generate the string exactly using only five conditional probability components, which are all equal to 1.0. To construct the Markov chain of order 1 for the next character in that string, one must estimate the following 9 conditional probability components: Pr(a | a), Pr(a | b), Pr(a | c), Pr(b | a), Pr(b | b), Pr(b | c), Pr(c | a), Pr(c | b), Pr(c | c). To construct the Markov chain of order 2 for the next character, one must estimate 27 conditional probability components: Pr(a | aa), Pr(a | ab), …, Pr(c | cc). And to construct the Markov chain of order three for the next character one must estimate the following 81 conditional probability components: Pr(a | aaa), Pr(a | aab), …, Pr(c | ccc). In practical settings there is seldom sufficient data to accurately estimate the exponentially increasing number of conditional probability components as the order of the Markov chain increases. The variable-order Markov model assumes that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent from the future states; accordingly, "a great reduction in the number of model parameters can be achieved." == Definition == Let A be a state space (finite alphabet) of size | A | {\displaystyle |A|} . Consider a sequence with the Markov property x 1 n = x 1 x 2 … x n {\displaystyle x_{1}^{n}=x_{1}x_{2}\dots x_{n}} of n realizations of random variables, where x i ∈ A {\displaystyle x_{i}\in A} is the state (symbol) at position i ( 1 ≤ i ≤ n ) {\displaystyle \scriptstyle (1\leq i\leq n)} , and the concatenation of states x i {\displaystyle x_{i}} and x i + 1 {\displaystyle x_{i+1}} is denoted by x i x i + 1 {\displaystyle x_{i}x_{i+1}} . Given a training set of observed states, x 1 n {\displaystyle x_{1}^{n}} , the construction algorithm of the VOM models learns a model P that provides a probability assignment for each state in the sequence given its past (previously observed symbols) or future states. 
Specifically, the learner generates a conditional probability distribution P ( x i ∣ s ) {\displaystyle P(x_{i}\mid s)} for a symbol x i ∈ A {\displaystyle x_{i}\in A} given a context s ∈ A ∗ {\displaystyle s\in A^{*}} , where the * sign represents a sequence of states of any length, including the empty context. VOM models attempt to estimate conditional distributions of the form P ( x i ∣ s ) {\displaystyle P(x_{i}\mid s)} where the context length | s | ≤ D {\displaystyle |s|\leq D} varies depending on the available statistics. In contrast, conventional Markov models attempt to estimate these conditional distributions by assuming a fixed context length | s | = D {\displaystyle |s|=D} and, hence, can be considered as special cases of the VOM models. Effectively, for a given training sequence, the VOM models are found to obtain better model parameterization than the fixed-order Markov models, which leads to a better bias-variance tradeoff of the learned models. == Application areas == Various efficient algorithms have been devised for estimating the parameters of the VOM model. VOM models have been successfully applied to areas such as machine learning, information theory and bioinformatics, including specific applications such as coding and data compression, document compression, classification and identification of DNA and protein sequences, statistical process control, spam filtering, haplotyping, speech recognition, sequence analysis in social sciences, and others. == See also == Stochastic chains with memory of variable length Examples of Markov chains Variable order Bayesian network Markov process Markov chain Monte Carlo Semi-Markov process Artificial intelligence == References ==
Wikipedia/Variable-order_Markov_model
In statistics and Markov modeling, an ancestral graph is a type of mixed graph used to provide a graphical representation of the result of marginalizing out one or more vertices in a graphical model that takes the form of a directed acyclic graph. == Definition == Ancestral graphs are mixed graphs with three kinds of edges: directed edges, drawn as an arrow from one vertex to another; bidirected edges, which have an arrowhead at both ends; and undirected edges, which have no arrowheads. An ancestral graph is required to satisfy some additional constraints: If there is an edge from a vertex u to another vertex v, with an arrowhead at v (that is, either an edge directed from u to v or a bidirected edge), then there does not exist a path from v to u consisting of undirected edges and/or directed edges oriented consistently with the path. If a vertex v is an endpoint of an undirected edge, then it is not also the endpoint of an edge with an arrowhead at v. == Applications == Ancestral graphs are used to depict conditional independence relations between variables in Markov models. == References ==
Wikipedia/Ancestral_graph
A factor graph is a bipartite graph representing the factorization of a function. In probability theory and its applications, factor graphs are used to represent factorization of a probability distribution function, enabling efficient computations, such as the computation of marginal distributions through the sum–product algorithm. One of the important success stories of factor graphs and the sum–product algorithm is the decoding of capacity-approaching error-correcting codes, such as LDPC and turbo codes. Factor graphs generalize constraint graphs. A factor whose value is either 0 or 1 is called a constraint. A constraint graph is a factor graph where all factors are constraints. The max-product algorithm for factor graphs can be viewed as a generalization of the arc-consistency algorithm for constraint processing. == Definition == A factor graph is a bipartite graph representing the factorization of a function. Given a factorization of a function g ( X 1 , X 2 , … , X n ) {\displaystyle g(X_{1},X_{2},\dots ,X_{n})} , g ( X 1 , X 2 , … , X n ) = ∏ j = 1 m f j ( S j ) , {\displaystyle g(X_{1},X_{2},\dots ,X_{n})=\prod _{j=1}^{m}f_{j}(S_{j}),} where S j ⊆ { X 1 , X 2 , … , X n } {\displaystyle S_{j}\subseteq \{X_{1},X_{2},\dots ,X_{n}\}} , the corresponding factor graph G = ( X , F , E ) {\displaystyle G=(X,F,E)} consists of variable vertices X = { X 1 , X 2 , … , X n } {\displaystyle X=\{X_{1},X_{2},\dots ,X_{n}\}} , factor vertices F = { f 1 , f 2 , … , f m } {\displaystyle F=\{f_{1},f_{2},\dots ,f_{m}\}} , and edges E {\displaystyle E} . The edges depend on the factorization as follows: there is an undirected edge between factor vertex f j {\displaystyle f_{j}} and variable vertex X k {\displaystyle X_{k}} if X k ∈ S j {\displaystyle X_{k}\in S_{j}} . The function is tacitly assumed to be real-valued: g ( X 1 , X 2 , … , X n ) ∈ R {\displaystyle g(X_{1},X_{2},\dots ,X_{n})\in \mathbb {R} } . Factor graphs can be combined with message passing algorithms to efficiently compute certain characteristics of the function g ( X 1 , X 2 , … , X n ) {\displaystyle g(X_{1},X_{2},\dots ,X_{n})} , such as the marginal distributions. == Examples == Consider a function that factorizes as follows: g ( X 1 , X 2 , X 3 ) = f 1 ( X 1 ) f 2 ( X 1 , X 2 ) f 3 ( X 1 , X 2 ) f 4 ( X 2 , X 3 ) {\displaystyle g(X_{1},X_{2},X_{3})=f_{1}(X_{1})f_{2}(X_{1},X_{2})f_{3}(X_{1},X_{2})f_{4}(X_{2},X_{3})} , with a corresponding factor graph shown on the right. Observe that the factor graph has a cycle. If we merge f 2 ( X 1 , X 2 ) f 3 ( X 1 , X 2 ) {\displaystyle f_{2}(X_{1},X_{2})f_{3}(X_{1},X_{2})} into a single factor, the resulting factor graph will be a tree. This is an important distinction, as message passing algorithms are usually exact for trees, but only approximate for graphs with cycles. == Message passing on factor graphs == A popular message passing algorithm on factor graphs is the sum–product algorithm, which efficiently computes all the marginals of the individual variables of the function. In particular, the marginal of variable X k {\displaystyle X_{k}} is defined as g k ( X k ) = ∑ X k ¯ g ( X 1 , X 2 , … , X n ) {\displaystyle g_{k}(X_{k})=\sum _{X_{\bar {k}}}g(X_{1},X_{2},\dots ,X_{n})} where the notation X k ¯ {\displaystyle X_{\bar {k}}} means that the summation goes over all the variables, except X k {\displaystyle X_{k}} . The messages of the sum–product algorithm are conceptually computed in the vertices and passed along the edges. 
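As a sanity check on these definitions, the following sketch evaluates the example factorization above by brute-force enumeration (the factor values are invented; only the structure matches the example). The sum–product algorithm computes the same marginals by message passing, exactly on trees and approximately on cyclic graphs such as this one:

```python
import numpy as np
from itertools import product

# Factors of the example g = f1(X1) f2(X1,X2) f3(X1,X2) f4(X2,X3),
# with all variables binary; the numerical values are made up for illustration.
f1 = np.array([0.6, 0.4])
f2 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
f3 = np.array([[0.5, 0.5],
               [0.3, 0.7]])
f4 = np.array([[0.7, 0.3],
               [0.4, 0.6]])

def g(x1, x2, x3):
    return f1[x1] * f2[x1, x2] * f3[x1, x2] * f4[x2, x3]

# Marginal g_3(X3): sum over all variables except X3.
g3 = np.zeros(2)
for x1, x2, x3 in product(range(2), repeat=3):
    g3[x3] += g(x1, x2, x3)

print(g3)                # unnormalized marginal of X3
print(g3 / g3.sum())     # normalized marginal, as used in statistical inference
```

Brute force costs time exponential in the number of variables, which is exactly what message passing on the factor graph avoids by exploiting the factorization.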
A message from or to a variable vertex is always a function of that particular variable. For instance, when a variable is binary, the messages over the edges incident to the corresponding vertex can be represented as vectors of length 2: the first entry is the message evaluated in 0, the second entry is the message evaluated in 1. When a variable belongs to the field of real numbers, messages can be arbitrary functions, and special care needs to be taken in their representation. In practice, the sum–product algorithm is used for statistical inference, whereby g ( X 1 , X 2 , … , X n ) {\displaystyle g(X_{1},X_{2},\dots ,X_{n})} is a joint distribution or a joint likelihood function, and the factorization depends on the conditional independencies among the variables. The Hammersley–Clifford theorem shows that other probabilistic models such as Bayesian networks and Markov networks can be represented as factor graphs; the latter representation is frequently used when performing inference over such networks using belief propagation. On the other hand, Bayesian networks are more naturally suited for generative models, as they can directly represent the causalities of the model. == See also == Belief propagation Bayesian inference Bayesian programming Conditional probability Markov network Bayesian network Hammersley–Clifford theorem == External links == Loeliger, Hans-Andrea (January 2004), "An Introduction to Factor Graphs]" (PDF), IEEE Signal Processing Magazine, 21 (1): 28–41, Bibcode:2004ISPM...21...28L, doi:10.1109/MSP.2004.1267047, S2CID 7722934 dimple Archived 2016-01-06 at the Wayback Machine an open-source tool for building and solving factor graphs in MATLAB. Loeliger, Hans-Andrea (2008), An introduction to Factor Graphs (PDF) == References == Clifford (1990), "Markov random fields in statistics", in Grimmett, G.R.; Welsh, D.J.A. (eds.), Disorder in Physical Systems, J.M. Hammersley Festschrift (postscript), Oxford University Press, pp. 19–32, ISBN 9780198532156 Frey, Brendan J. (2003), "Extending Factor Graphs so as to Unify Directed and Undirected Graphical Models", in Jain, Nitin (ed.), UAI'03, Proceedings of the 19th Conference in Uncertainty in Artificial Intelligence, Morgan Kaufmann, pp. 257–264, arXiv:1212.2486, ISBN 0127056645 Kschischang, Frank R.; Frey, Brendan J.; Loeliger, Hans-Andrea (2001), "Factor Graphs and the Sum-Product Algorithm", IEEE Transactions on Information Theory, 47 (2): 498–519, CiteSeerX 10.1.1.54.1570, doi:10.1109/18.910572. Wymeersch, Henk (2007), Iterative Receiver Design, Cambridge University Press, ISBN 978-0-521-87315-4
Wikipedia/Factor_graph
A gene (or genetic) regulatory network (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins which, in turn, determine the function of the cell. GRN also play a central role in morphogenesis, the creation of body structures, which in turn is central to evolutionary developmental biology (evo-devo). The regulator can be DNA, RNA, protein or any combination of two or more of these three that form a complex, such as a specific sequence of DNA and a transcription factor to activate that sequence. The interaction can be direct or indirect (through transcribed RNA or translated protein). In general, each mRNA molecule goes on to make a specific protein (or set of proteins). In some cases this protein will be structural, and will accumulate at the cell membrane or within the cell to give it particular structural properties. In other cases the protein will be an enzyme, i.e., a micro-machine that catalyses a certain reaction, such as the breakdown of a food source or toxin. Some proteins though serve only to activate other genes, and these are the transcription factors that are the main players in regulatory networks or cascades. By binding to the promoter region at the start of other genes they turn them on, initiating the production of another protein, and so on. Some transcription factors are inhibitory. In single-celled organisms, regulatory networks respond to the external environment, optimising the cell at a given time for survival in this environment. Thus a yeast cell, finding itself in a sugar solution, will turn on genes to make enzymes that process the sugar to alcohol. This process, which we associate with wine-making, is how the yeast cell makes its living, gaining energy to multiply, which under normal circumstances would enhance its survival prospects. In multicellular animals the same principle has been put in the service of gene cascades that control body-shape. Each time a cell divides, two cells result which, although they contain the same genome in full, can differ in which genes are turned on and making proteins. Sometimes a 'self-sustaining feedback loop' ensures that a cell maintains its identity and passes it on. Less understood is the mechanism of epigenetics by which chromatin modification may provide cellular memory by blocking or allowing transcription. A major feature of multicellular animals is the use of morphogen gradients, which in effect provide a positioning system that tells a cell where in the body it is, and hence what sort of cell to become. A gene that is turned on in one cell may make a product that leaves the cell and diffuses through adjacent cells, entering them and turning on genes only when it is present above a certain threshold level. These cells are thus induced into a new fate, and may even generate other morphogens that signal back to the original cell. Over longer distances morphogens may use the active process of signal transduction. Such signalling controls embryogenesis, the building of a body plan from scratch through a series of sequential steps. They also control and maintain adult bodies through feedback processes, and the loss of such feedback because of a mutation can be responsible for the cell proliferation that is seen in cancer. In parallel with this process of building structure, the gene cascade turns on genes that make structural proteins that give each cell the physical properties it needs. 
== Overview == At one level, biological cells can be thought of as "partially mixed bags" of biological chemicals – in the discussion of gene regulatory networks, these chemicals are mostly the messenger RNAs (mRNAs) and proteins that arise from gene expression. These mRNA and proteins interact with each other with various degrees of specificity. Some diffuse around the cell. Others are bound to cell membranes, interacting with molecules in the environment. Still others pass through cell membranes and mediate long range signals to other cells in a multi-cellular organism. These molecules and their interactions comprise a gene regulatory network. The nodes of this network can represent genes, proteins, mRNAs, protein/protein complexes or cellular processes. Nodes that are depicted as lying along vertical lines are associated with the cell/environment interfaces, while the others are free-floating and can diffuse. Edges between nodes represent interactions between the nodes, that can correspond to individual molecular reactions between DNA, mRNA, miRNA, proteins or molecular processes through which the products of one gene affect those of another, though the lack of experimentally obtained information often implies that some reactions are not modeled at such a fine level of detail. These interactions can be inductive (usually represented by arrowheads or the + sign), with an increase in the concentration of one leading to an increase in the other, inhibitory (represented with filled circles, blunt arrows or the minus sign), with an increase in one leading to a decrease in the other, or dual, when depending on the circumstances the regulator can activate or inhibit the target node. The nodes can regulate themselves directly or indirectly, creating feedback loops, which form cyclic chains of dependencies in the topological network. The network structure is an abstraction of the system's molecular or chemical dynamics, describing the manifold ways in which one substance affects all the others to which it is connected. In practice, such GRNs are inferred from the biological literature on a given system and represent a distillation of the collective knowledge about a set of related biochemical reactions. To speed up the manual curation of GRNs, some recent efforts try to use text mining, curated databases, network inference from massive data, model checking and other information extraction technologies for this purpose. Genes can be viewed as nodes in the network, with input being proteins such as transcription factors, and outputs being the level of gene expression. The value of the node depends on a function which depends on the value of its regulators in previous time steps (in the Boolean network described below these are Boolean functions, typically AND, OR, and NOT). These functions have been interpreted as performing a kind of information processing within the cell, which determines cellular behavior. The basic drivers within cells are concentrations of some proteins, which determine both spatial (location within the cell or tissue) and temporal (cell cycle or developmental stage) coordinates of the cell, as a kind of "cellular memory". The gene networks are only beginning to be understood, and it is a next step for biology to attempt to deduce the functions for each gene "node", to help understand the behavior of the system in increasing levels of complexity, from gene to signaling pathway, cell or tissue level. 
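As a minimal illustration of this node-and-edge view, the sketch below encodes a toy network with activating and inhibiting inputs and applies one possible Boolean rule at each node (the gene names, signs, and rule are hypothetical, chosen only to show how regulator states can be combined into an input-output function):

```python
# A toy gene regulatory network encoded as signed edges:
# +1 for an activating input, -1 for an inhibitory input.
regulators = {
    "geneB": [("geneA", +1)],                   # A activates B
    "geneC": [("geneA", +1), ("geneB", -1)],    # A activates C, B represses C
}

def update(state):
    """One synchronous update: a gene is ON iff at least one activator is ON
    and no repressor is ON (one possible Boolean rule among many)."""
    new_state = dict(state)
    for gene, inputs in regulators.items():
        activated = any(state[g] for g, sign in inputs if sign > 0)
        repressed = any(state[g] for g, sign in inputs if sign < 0)
        new_state[gene] = activated and not repressed
    return new_state

state = {"geneA": True, "geneB": False, "geneC": False}
for step in range(3):
    state = update(state)
    print(step, state)
```

Iterating such rules over discrete time steps is the essence of the Boolean-network models discussed in the Modelling section below.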
Mathematical models of GRNs have been developed to capture the behavior of the system being modeled, and in some cases generate predictions corresponding with experimental observations. In some other cases, models have proven to make accurate novel predictions, which can be tested experimentally, thus suggesting experimental approaches that would not otherwise have been considered when designing the laboratory protocol. Modeling techniques include ordinary differential equations (ODEs), Boolean networks, Petri nets, Bayesian networks, graphical Gaussian network models, stochastic models, and process calculi. Conversely, techniques have been proposed for generating models of GRNs that best explain a set of time series observations. Recently it has been shown that ChIP-seq signals of histone modifications are more strongly correlated with transcription factor motifs at promoters than RNA levels are. Hence it is proposed that time-series histone modification ChIP-seq could provide more reliable inference of gene-regulatory networks than methods based on expression levels. == Structure and evolution == === Global feature === Gene regulatory networks are generally thought to be made up of a few highly connected nodes (hubs) and many poorly connected nodes nested within a hierarchical regulatory regime. Thus gene regulatory networks approximate a hierarchical scale-free network topology. This is consistent with the view that most genes have limited pleiotropy and operate within regulatory modules. This structure is thought to evolve due to the preferential attachment of duplicated genes to more highly connected genes. Recent work has also shown that natural selection tends to favor networks with sparse connectivity. There are primarily two ways that networks can evolve, both of which can occur simultaneously. The first is that network topology can be changed by the addition or subtraction of nodes (genes), or parts of the network (modules) may be expressed in different contexts. The Drosophila Hippo signaling pathway provides a good example. The Hippo signaling pathway controls both mitotic growth and post-mitotic cellular differentiation. Recently it was found that the network the Hippo signaling pathway operates in differs between these two functions, which in turn changes the behavior of the Hippo signaling pathway. This suggests that the Hippo signaling pathway operates as a conserved regulatory module that can be used for multiple functions depending on context. Thus, changing network topology can allow a conserved module to serve multiple functions and alter the final output of the network. The second way networks can evolve is by changing the strength of interactions between nodes, such as how strongly a transcription factor may bind to a cis-regulatory element. Such variation in the strength of network edges has been shown to underlie between-species variation in vulva cell fate patterning of Caenorhabditis worms. === Local feature === Another widely cited characteristic of gene regulatory networks is their abundance of certain repetitive sub-networks known as network motifs. Network motifs can be regarded as repetitive topological patterns when dividing a big network into small blocks. Previous analysis found several types of motifs that appeared more often in gene regulatory networks than in randomly generated networks. As an example, one such motif is the feed-forward loop, which consists of three nodes.
This motif is the most abundant among all possible motifs made up of three nodes, as is shown in the gene regulatory networks of fly, nematode, and human. The enriched motifs have been proposed to follow convergent evolution, suggesting they are "optimal designs" for certain regulatory purposes. For example, modeling shows that feed-forward loops are able to coordinate the change in node A (in terms of concentration and activity) and the expression dynamics of node C, creating different input-output behaviors. The galactose utilization system of E. coli contains a feed-forward loop which accelerates the activation of the galactose utilization operon galETK, potentially facilitating the metabolic transition to galactose when glucose is depleted. The feed-forward loop in the arabinose utilization system of E. coli delays the activation of the arabinose catabolism operon and transporters, potentially avoiding an unnecessary metabolic transition due to temporary fluctuations in upstream signaling pathways. Similarly, in the Wnt signaling pathway of Xenopus, the feed-forward loop acts as a fold-change detector that responds to the fold change, rather than the absolute change, in the level of β-catenin, potentially increasing the resistance to fluctuations in β-catenin levels. Following the convergent evolution hypothesis, the enrichment of feed-forward loops would be an adaptation for fast response and noise resistance. A recent study found that yeast grown in an environment of constant glucose developed mutations in glucose signaling pathways and the growth regulation pathway, suggesting that regulatory components responding to environmental changes are dispensable under a constant environment. On the other hand, some researchers hypothesize that the enrichment of network motifs is non-adaptive. In other words, gene regulatory networks can evolve to a similar structure without specific selection on the proposed input-output behavior. Support for this hypothesis often comes from computational simulations. For example, fluctuations in the abundance of feed-forward loops in a model that simulates the evolution of gene regulatory networks by randomly rewiring nodes may suggest that the enrichment of feed-forward loops is a side-effect of evolution. In another model of gene regulatory network evolution, the ratio of the frequencies of gene duplication and gene deletion shows a great influence on network topology: certain ratios lead to the enrichment of feed-forward loops and create networks that show features of hierarchical scale-free networks. De novo evolution of coherent type 1 feed-forward loops has been demonstrated computationally in response to selection for their hypothesized function of filtering out a short spurious signal, supporting adaptive evolution, but for non-idealized noise, a dynamics-based system of feed-forward regulation with a different topology was instead favored.
One example of such stress is when the environment suddenly becomes poor in nutrients. This triggers a complex adaptation process in bacteria, such as E. coli. After this environmental change, thousands of genes change expression level. However, these changes are predictable from the topology and logic of the gene network that is reported in RegulonDB. Specifically, on average, the response strength of a gene was predictable from the difference between the numbers of activating and repressing input transcription factors of that gene. == Modelling == === Coupled ordinary differential equations === It is common to model such a network with a set of coupled ordinary differential equations (ODEs) or SDEs, describing the reaction kinetics of the constituent parts. Suppose that our regulatory network has N {\displaystyle N} nodes, and let S 1 ( t ) , S 2 ( t ) , … , S N ( t ) {\displaystyle S_{1}(t),S_{2}(t),\ldots ,S_{N}(t)} represent the concentrations of the N {\displaystyle N} corresponding substances at time t {\displaystyle t} . Then the temporal evolution of the system can be described approximately by d S j d t = f j ( S 1 , S 2 , … , S N ) {\displaystyle {\frac {dS_{j}}{dt}}=f_{j}\left(S_{1},S_{2},\ldots ,S_{N}\right)} where the functions f j {\displaystyle f_{j}} express the dependence of S j {\displaystyle S_{j}} on the concentrations of other substances present in the cell. The functions f j {\displaystyle f_{j}} are ultimately derived from basic principles of chemical kinetics or simple expressions derived from these, e.g. Michaelis–Menten enzymatic kinetics. Hence, the functional forms of the f j {\displaystyle f_{j}} are usually chosen as low-order polynomials or Hill functions that serve as an ansatz for the real molecular dynamics. Such models are then studied using the mathematics of nonlinear dynamics. System-specific information, like reaction rate constants and sensitivities, is encoded as constant parameters. By solving for the fixed point of the system: d S j d t = 0 {\displaystyle {\frac {dS_{j}}{dt}}=0} for all j {\displaystyle j} , one obtains (possibly several) concentration profiles of proteins and mRNAs that are theoretically sustainable (though not necessarily stable). Steady states of kinetic equations thus correspond to potential cell types, and oscillatory solutions to the above equation to naturally cyclic cell types. Mathematical stability of these attractors can usually be characterized by the sign of higher derivatives at critical points, and then corresponds to biochemical stability of the concentration profile. Critical points and bifurcations in the equations correspond to critical cell states in which small state or parameter perturbations could switch the system between one of several stable differentiation fates. Trajectories correspond to the unfolding of biological pathways and transients of the equations to short-term biological events. For a more mathematical discussion, see the articles on nonlinearity, dynamical systems, bifurcation theory, and chaos theory. === Boolean network === The following example illustrates how a Boolean network can model a GRN together with its gene products (the outputs) and the substances from the environment that affect it (the inputs). Stuart Kauffman was amongst the first biologists to use the metaphor of Boolean networks to model genetic regulatory networks.
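Before the Boolean-network description continues, here is a concrete sketch of the coupled-ODE formulation above: a hypothetical two-gene mutual-repression network with Hill-function synthesis terms, linear decay, and invented parameter values, integrated numerically with SciPy (a toy illustration, not any published model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-gene mutual repression: each protein represses the other's
# synthesis through a Hill function; parameters are invented for illustration.
alpha, n, gamma = 4.0, 2.0, 1.0

def f(t, S):
    S1, S2 = S
    dS1 = alpha / (1.0 + S2**n) - gamma * S1   # synthesis repressed by S2, linear decay
    dS2 = alpha / (1.0 + S1**n) - gamma * S2   # synthesis repressed by S1, linear decay
    return [dS1, dS2]

sol = solve_ivp(f, (0.0, 20.0), [2.0, 0.1])
S1_end, S2_end = sol.y[:, -1]
print(f"approximate steady state: S1 = {S1_end:.2f}, S2 = {S2_end:.2f}")
```

Starting from different initial conditions, the trajectory settles into one of two stable fixed points (dS/dt = 0), illustrating how steady states of the kinetic equations correspond to potential cell types.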
Each gene, each input, and each output is represented by a node in a directed graph in which there is an arrow from one node to another if and only if there is a causal link between the two nodes. Each node in the graph can be in one of two states: on or off. For a gene, "on" corresponds to the gene being expressed; for inputs and outputs, "on" corresponds to the substance being present. Time is viewed as proceeding in discrete steps. At each step, the new state of a node is a Boolean function of the prior states of the nodes with arrows pointing towards it. The validity of the model can be tested by comparing simulation results with time series observations. A partial validation of a Boolean network model can also come from testing the predicted existence of a yet unknown regulatory connection between two particular transcription factors that are each nodes of the model. === Continuous networks === Continuous network models of GRNs are an extension of the Boolean networks described above. Nodes still represent genes, and connections between them represent regulatory influences on gene expression. Genes in biological systems display a continuous range of activity levels, and it has been argued that using a continuous representation captures several properties of gene regulatory networks not present in the Boolean model. Formally, most of these approaches are similar to an artificial neural network, as inputs to a node are summed up and the result serves as input to a sigmoid function; however, proteins often control gene expression in a synergistic, i.e. non-linear, way. There is now a continuous network model that allows grouping of inputs to a node, thus realizing another level of regulation. This model is formally closer to a higher-order recurrent neural network. The same model has also been used to mimic the evolution of cellular differentiation and even multicellular morphogenesis. === Stochastic gene networks === Experimental results have demonstrated that gene expression is a stochastic process. Thus, many authors are now using the stochastic formalism, after the work by Arkin et al. Works on single gene expression and small synthetic genetic networks, such as the genetic toggle switch of Tim Gardner and Jim Collins, provided additional experimental data on the phenotypic variability and the stochastic nature of gene expression. The first versions of stochastic models of gene expression involved only instantaneous reactions and were driven by the Gillespie algorithm. Since some processes, such as gene transcription, involve many reactions and could not be correctly modeled as an instantaneous reaction in a single step, it was proposed to model these reactions as single-step multiple delayed reactions in order to account for the time it takes for the entire process to be complete. From here, a set of reactions were proposed that allow generating GRNs. These are then simulated using a modified version of the Gillespie algorithm that can simulate multiple time-delayed reactions (chemical reactions where each of the products is assigned a time delay that determines when it will be released into the system as a "finished product").
For example, basic transcription of a gene can be represented by the following single-step reaction (RNAP is the RNA polymerase, RBS is the RNA ribosome binding site, and Pro i is the promoter region of gene i): RNAP + Pro i ⟶ k i , b a s Pro i ( τ i 1 ) + RBS i ( τ i 1 ) + RNAP ( τ i 2 ) {\displaystyle {\text{RNAP}}+{\text{Pro}}_{i}{\overset {k_{i,bas}}{\longrightarrow }}{\text{Pro}}_{i}(\tau _{i}^{1})+{\text{RBS}}_{i}(\tau _{i}^{1})+{\text{RNAP}}(\tau _{i}^{2})} Furthermore, there seems to be a trade-off between the noise in gene expression, the speed with which genes can switch, and the metabolic cost associated with their functioning. More specifically, for any given level of metabolic cost, there is an optimal trade-off between noise and processing speed, and increasing the metabolic cost leads to better speed-noise trade-offs. A recent work proposed a simulator (SGNSim, Stochastic Gene Networks Simulator) that can model GRNs in which transcription and translation are modeled as multiple time-delayed events, and whose dynamics are driven by a stochastic simulation algorithm (SSA) able to deal with multiple time-delayed events. The time delays can be drawn from several distributions and the reaction rates from complex functions or from physical parameters. SGNSim can generate ensembles of GRNs within a set of user-defined parameters, such as topology. It can also be used to model specific GRNs and systems of chemical reactions. Genetic perturbations such as gene deletions, gene over-expression, insertions, and frame-shift mutations can also be modeled. The GRN is created from a graph with the desired topology, imposing in-degree and out-degree distributions. Gene promoter activities are affected by other genes' expression products that act as inputs, in the form of monomers or combined into multimers, and set as direct or indirect. Next, each direct input is assigned to an operator site, and different transcription factors can be allowed, or not, to compete for the same operator site, while indirect inputs are given a target. Finally, a function is assigned to each gene, defining the gene's response to a combination of transcription factors (promoter state). The transfer functions (that is, how genes respond to a combination of inputs) can be assigned to each combination of promoter states as desired. In other recent work, multiscale models of gene regulatory networks have been developed that focus on synthetic biology applications. Simulations have been used that model all biomolecular interactions in transcription, translation, regulation, and induction of gene regulatory networks, guiding the design of synthetic systems. == Prediction == Other work has focused on predicting the gene expression levels in a gene regulatory network. The approaches used to model gene regulatory networks have been constrained to be interpretable and, as a result, are generally simplified versions of the network. For example, Boolean networks have been used due to their simplicity and ability to handle noisy data, but lose information by having a binary representation of the genes. Also, artificial neural networks omit the hidden layer so that they can be interpreted, losing the ability to model higher-order correlations in the data. Using a model that is not constrained to be interpretable, a more accurate model can be produced. Being able to predict gene expression levels more accurately provides a way to explore how drugs affect a system of genes as well as to find which genes are interrelated in a process.
This has been encouraged by the DREAM project, which runs competitions for the best prediction algorithms. Some other recent work has used artificial neural networks with a hidden layer. == Applications == === Multiple sclerosis === There are three classes of multiple sclerosis: relapsing-remitting (RRMS), primary progressive (PPMS) and secondary progressive (SPMS). Gene regulatory networks (GRNs) play a vital role in understanding the disease mechanism across these three multiple sclerosis classes. == See also == Body plan Cis-regulatory module Genenetwork (database) Morphogen Operon Synexpression Systems biology Weighted gene co-expression network analysis == References == == Further reading == == External links == Plant Transcription Factor Database and Plant Transcriptional Regulation Data and Analysis Platform Open source web service for GRN analysis BIB: Yeast Biological Interaction Browser Graphical Gaussian models for genome data – Inference of gene association networks with GGMs A bibliography on learning causal networks of gene interactions – regularly updated, contains hundreds of links to papers from bioinformatics, statistics, machine learning. https://web.archive.org/web/20060907074456/http://mips.gsf.de/proj/biorel/ BIOREL is a web-based resource for quantitative estimation of the gene network bias in relation to available database information about gene activity/function/properties/associations/interactions. Evolving Biological Clocks using Genetic Regulatory Networks – Information page with model source code and Java applet. Engineered Gene Networks Tutorial: Genetic Algorithms and their Application to the Artificial Evolution of Genetic Regulatory Networks BEN: a web-based resource for exploring the connections between genes, diseases, and other biomedical entities Global protein-protein interaction and gene regulation network of Arabidopsis thaliana Archived 16 March 2016 at the Wayback Machine
Wikipedia/Gene_regulatory_network
Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. The main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed. The study of why things occur is called etiology, and can be described using the language of scientific causal notation. Causal inference is said to provide the evidence of causality theorized by causal reasoning. Causal inference is widely studied across all sciences. Several innovations in the development and implementation of methodology designed to determine causality have proliferated in recent decades. Causal inference remains especially difficult where experimentation is difficult or impossible, which is common throughout most sciences. The approaches to causal inference are broadly applicable across all types of scientific disciplines, and many methods of causal inference that were designed for certain disciplines have found use in other disciplines. This article outlines the basic process behind causal inference and details some of the more conventional tests used across different disciplines; however, this should not be mistaken as a suggestion that these methods apply only to those disciplines, merely that they are the most commonly used in that discipline. Causal inference is difficult to perform and there is significant debate amongst scientists about the proper way to determine causality. Despite other innovations, there remain concerns of misattribution by scientists of correlative results as causal, of the usage of incorrect methodologies by scientists, and of deliberate manipulation by scientists of analytical results in order to obtain statistically significant estimates. Particular concern is raised in the use of regression models, especially linear regression models. == Definition == Inferring the cause of something has been described as: "...reason[ing] to the conclusion that something is, or is likely to be, the cause of something else". "Identification of the cause or causes of a phenomenon, by establishing covariation of cause and effect, a time-order relationship with the cause preceding the effect, and the elimination of plausible alternative causes." == Methodology == === General === Causal inference is conducted via the study of systems where the measure of one variable is suspected to affect the measure of another. Causal inference is conducted with regard to the scientific method. The first step of causal inference is to formulate a falsifiable null hypothesis, which is subsequently tested with statistical methods. Frequentist statistical inference is the use of statistical methods to determine the probability that the data occur under the null hypothesis by chance; Bayesian inference is used to determine the effect of an independent variable. Statistical inference is generally used to determine the difference between variations in the original data that are random variation or the effect of a well-specified causal mechanism. Notably, correlation does not imply causation, so the study of causality is as concerned with the study of potential causal mechanisms as it is with variation amongst the data. A frequently sought after standard of causal inference is an experiment wherein treatment is randomly assigned but all other confounding factors are held constant. 
Most of the efforts in causal inference are in the attempt to replicate experimental conditions. Epidemiological studies employ different epidemiological methods of collecting and measuring evidence of risk factors and effect and different ways of measuring association between the two. Results of a 2020 review of methods for causal inference found that using existing literature for clinical training programs can be challenging. This is because published articles often assume an advanced technical background, they may be written from multiple statistical, epidemiological, computer science, or philosophical perspectives, methodological approaches continue to expand rapidly, and many aspects of causal inference receive limited coverage. Common frameworks for causal inference include the causal pie model (component-cause), Pearl's structural causal model (causal diagram + do-calculus), structural equation modeling, and Rubin causal model (potential-outcome), which are often used in areas such as social sciences and epidemiology. === Experimental === Experimental verification of causal mechanisms is possible using experimental methods. The main motivation behind an experiment is to hold other experimental variables constant while purposefully manipulating the variable of interest. If the experiment produces statistically significant effects as a result of only the treatment variable being manipulated, there is grounds to believe that a causal effect can be assigned to the treatment variable, assuming that other standards for experimental design have been met. ==== Quasi-experimental ==== Quasi-experimental verification of causal mechanisms is conducted when traditional experimental methods are unavailable. This may be the result of prohibitive costs of conducting an experiment, or the inherent infeasibility of conducting an experiment, especially experiments that are concerned with large systems such as economies of electoral systems, or for treatments that are considered to present a danger to the well-being of test subjects. Quasi-experiments may also occur where information is withheld for legal reasons. == Approaches in epidemiology == Epidemiology studies patterns of health and disease in defined populations of living beings in order to infer causes and effects. An association between an exposure to a putative risk factor and a disease may be suggestive of, but is not equivalent to causality because correlation does not imply causation. Historically, Koch's postulates have been used since the 19th century to decide if a microorganism was the cause of a disease. In the 20th century the Bradford Hill criteria, described in 1965 have been used to assess causality of variables outside microbiology, although even these criteria are not exclusive ways to determine causality. In molecular epidemiology the phenomena studied are on a molecular biology level, including genetics, where biomarkers are evidence of cause or effects. A recent trend is to identify evidence for influence of the exposure on molecular pathology within diseased tissue or cells, in the emerging interdisciplinary field of molecular pathological epidemiology (MPE). Linking the exposure to molecular pathologic signatures of the disease can help to assess causality. Considering the inherent nature of heterogeneity of a given disease, the unique disease principle, disease phenotyping and subtyping are trends in biomedical and public health sciences, exemplified as personalized medicine and precision medicine. 
Causal inference has also been used for treatment effect estimation. Assume a set of observable patient symptoms (X) caused by a set of hidden causes (Z); a treatment t may or may not be given, and the outcome of giving or withholding the treatment is the effect y to be estimated. If the treatment is not guaranteed to have a positive effect, then the decision of whether to apply it depends first on expert knowledge that encompasses the causal connections. For novel diseases, this expert knowledge may not be available. As a result, we rely solely on past treatment outcomes to make decisions. A modified variational autoencoder can be used to model the causal graph described above. While the above scenario could be modelled without the hidden confounder (Z), doing so would lose the insight that a patient's symptoms, together with other factors, affect both the treatment assignment and the outcome. == Approaches in computer science == Causal inference is an important concept in the field of causal artificial intelligence. Determination of cause and effect from joint observational data for two time-independent variables, say X and Y, has been tackled using the asymmetry between the evidence for a model in the direction X → Y and the evidence for a model in the direction Y → X. The primary approaches are based on algorithmic information theory models and noise models. === Noise models === These models incorporate an independent noise term in order to compare the evidence for the two directions. Here are some of the noise models for the hypothesis X → Y with the noise E: Additive noise: Y = F ( X ) + E {\displaystyle Y=F(X)+E} Linear noise: Y = p X + q E {\displaystyle Y=pX+qE} Post-nonlinear: Y = G ( F ( X ) + E ) {\displaystyle Y=G(F(X)+E)} Heteroskedastic noise: Y = F ( X ) + E ⋅ G ( X ) {\displaystyle Y=F(X)+E\cdot G(X)} Functional noise: Y = F ( X , E ) {\displaystyle Y=F(X,E)} The common assumptions in these models are: There are no other causes of Y. X and E have no common causes. The distribution of the cause is independent of the causal mechanism. On an intuitive level, the idea is that the factorization of the joint distribution P(Cause, Effect) into P(Cause)*P(Effect | Cause) typically yields models of lower total complexity than the factorization into P(Effect)*P(Cause | Effect). Although the notion of "complexity" is intuitively appealing, it is not obvious how it should be precisely defined. A different family of methods attempts to discover causal "footprints" from large amounts of labeled data, and to allow the prediction of more flexible causal relations. == Approaches in social sciences == === Social science === The social sciences in general have moved increasingly toward including quantitative frameworks for assessing causality. Much of this has been described as a means of providing greater rigor to social science methodology. Political science was significantly influenced by the publication of Designing Social Inquiry, by Gary King, Robert Keohane, and Sidney Verba, in 1994. King, Keohane, and Verba recommend that researchers apply both quantitative and qualitative methods and adopt the language of statistical inference to be clearer about their subjects of interest and units of analysis. Proponents of quantitative methods have also increasingly adopted the potential outcomes framework, developed by Donald Rubin, as a standard for inferring causality.
While much of the emphasis remains on statistical inference in the potential outcomes framework, social science methodologists have developed new tools to conduct causal inference with both qualitative and quantitative methods, sometimes called a "mixed methods" approach. Advocates of diverse methodological approaches argue that different methodologies are better suited to different subjects of study. Sociologist Herbert Smith and political scientists James Mahoney and Gary Goertz have cited the observation of Paul W. Holland, a statistician and author of the 1986 article "Statistics and Causal Inference", that statistical inference is most appropriate for assessing the "effects of causes" rather than the "causes of effects". Qualitative methodologists have argued that formalized models of causation, including process tracing and fuzzy set theory, provide opportunities to infer causation through the identification of critical factors within case studies or through a process of comparison among several case studies. These methodologies are also valuable for subjects in which a limited number of potential observations or the presence of confounding variables would limit the applicability of statistical inference. On longer timescales, persistence studies use causal inference to link historical events to later political, economic and social outcomes. === Economics and political science === In the economic and political sciences, causal inference is often difficult, owing to the real-world complexity of economic and political realities and the inability to recreate many large-scale phenomena within controlled experiments. Causal inference in the economic and political sciences continues to see improvement in methodology and rigor, due to the increased level of technology available to social scientists, the growth in the number of social scientists and of research, and improvements to causal inference methodologies throughout the social sciences. Despite the difficulties inherent in determining causality in economic systems, several widely employed methods exist throughout those fields. ==== Theoretical methods ==== Economists and political scientists can use theory (often studied in theory-driven econometrics) to estimate the magnitude of supposedly causal relationships in cases where they believe a causal relationship exists. Theorists can presuppose a mechanism believed to be causal and describe the effects using data analysis to justify their proposed theory. For example, theorists can use logic to construct a model, such as theorizing that rain causes fluctuations in economic productivity but that the converse is not true. However, using purely theoretical claims that do not offer any predictive insights has been called "pre-scientific" because there is no ability to predict the impact of the supposed causal properties. It is worth reiterating that regression analysis in the social sciences does not inherently imply causality, as many phenomena may correlate in the short run or in particular datasets but demonstrate no correlation in other time periods or other datasets. Thus, the attribution of causality to correlative properties is premature absent a well-defined and reasoned causal mechanism. ==== Instrumental variables ==== The instrumental variables (IV) technique is a method of determining causality that involves the elimination of a correlation between one of a model's explanatory variables and the model's error term.
This method presumes that if a model's error term moves similarly with the variation of another variable, then the model's error term is probably an effect of variation in that explanatory variable. The elimination of this correlation through the introduction of a new instrumental variable thus reduces the error present in the model as a whole. ==== Model specification ==== Model specification is the act of selecting a model to be used in data analysis. Social scientists (and, indeed, all scientists) must determine the correct model to use because different models are good at estimating different relationships. Model specification can be useful in determining causality that is slow to emerge, where the effects of an action in one period are only felt in a later period. It is worth remembering that correlations only measure whether two variables vary together; they do not establish that one affects the other, nor in which direction, so one cannot determine the direction of a causal relation from correlations alone. Because causal acts are believed to precede causal effects, social scientists can use a model that looks specifically for the effect of one variable on another over a period of time. This leads to treating the variables that represent earlier phenomena as treatments, and using econometric tests to look for later changes in the data that can be attributed to those treatments; a meaningful difference in outcomes following a meaningful difference in treatments may indicate causality between the treatments and the measured effects (e.g., Granger-causality tests). Such studies are examples of time-series analysis. ==== Sensitivity analysis ==== Other variables, or regressors in regression analysis, are either included or not included across various implementations of the same model to ensure that different sources of variation can be studied more separately from one another. This is a form of sensitivity analysis: it is the study of how sensitive an implementation of a model is to the addition of one or more new variables. A chief motivating concern in the use of sensitivity analysis is the pursuit of discovering confounding variables. Confounding variables are variables that have a large impact on the results of a statistical test but are not the variable that causal inference is trying to study. Confounding variables may cause a regressor to appear to be significant in one implementation, but not in another. ===== Multicollinearity ===== Another reason for the use of sensitivity analysis is to detect multicollinearity. Multicollinearity is the phenomenon in which the correlation between two explanatory variables is very high. A high level of correlation between two such variables can dramatically affect the outcome of a statistical analysis, where small variations in highly correlated data can flip the effect of a variable from a positive direction to a negative direction, or vice versa. This is an inherent property of variance testing. Determining multicollinearity is useful in sensitivity analysis because the elimination of highly correlated variables in different model implementations can prevent the dramatic changes in results that follow from the inclusion of such variables. However, there are limits to the ability of sensitivity analysis to prevent the deleterious effects of multicollinearity, especially in the social sciences, where systems are complex.
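As a rough illustration of the multicollinearity problem just described, the following sketch computes the variance inflation factor (VIF), a standard diagnostic, for a hypothetical set of regressors; the data and variable names are assumptions made for the example, not material from the article.

import numpy as np

# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing regressor j on the
# remaining regressors; very large values signal near-collinearity.
rng = np.random.default_rng(1)
n = 1_000
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)               # unrelated regressor
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """Variance inflation factor of column j of the regressor matrix X."""
    y = X[:, j]
    others = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])  # add intercept
    beta, *_ = np.linalg.lstsq(others, y, rcond=None)
    resid = y - others @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

for j in range(X.shape[1]):
    print(f"VIF of x{j + 1}: {vif(X, j):.1f}")   # x1 and x2 should be very large

Regressors with a very large VIF (x1 and x2 here) are nearly collinear, which is exactly the situation in which small changes to the data or the model specification can flip the sign of an estimated effect.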
Because it is theoretically impossible to include or even measure all of the confounding factors in a sufficiently complex system, econometric models are susceptible to the common-cause fallacy, where causal effects are incorrectly attributed to the wrong variable because the correct variable was not captured in the original data. This is an example of the failure to account for a lurking variable. ==== Design-based econometrics ==== Recently, improved methodology in design-based econometrics has popularized the use of both natural experiments and quasi-experimental research designs to study the causal mechanisms that such experiments are believed to identify. == Malpractice in causal inference == Despite the advancements in the development of methodologies used to determine causality, significant weaknesses in determining causality remain. These weaknesses can be attributed both to the inherent difficulty of determining causal relations in complex systems and to cases of scientific malpractice. Separate from the inherent difficulties of causal inference, the perception exists among some large groups of social scientists that many scholars in the social sciences engage in non-scientific methodology. Criticisms of economists and social scientists for passing off descriptive studies as causal studies are rife within those fields. === Scientific malpractice and flawed methodology === In the sciences, especially in the social sciences, there is concern among scholars that scientific malpractice is widespread. As scientific study is a broad topic, there are theoretically limitless ways to have a causal inference undermined through no fault of a researcher. Nonetheless, there remain concerns among scientists that large numbers of researchers do not perform basic duties or practice sufficiently diverse methods in causal inference. One prominent example of common non-causal methodology is the erroneous assumption of correlative properties as causal properties. There is no inherent causality in phenomena that correlate. Regression models are designed to measure variance within data relative to a theoretical model: there is nothing to suggest that data presenting high levels of covariance have any meaningful relationship (absent a proposed causal mechanism with predictive properties or a random assignment of treatment). The use of flawed methodology has been claimed to be widespread, with common examples of such malpractice being the overuse of correlative models, especially the overuse of regression models and particularly linear regression models. The presupposition that two correlated phenomena are inherently related is a logical fallacy known as spurious correlation. Some social scientists claim that the widespread use of methodology that attributes causality to spurious correlations has been detrimental to the integrity of the social sciences, although improvements stemming from better methodologies have been noted. A potential effect of scientific studies that erroneously conflate correlation with causality is an increase in the number of scientific findings whose results are not reproducible by third parties. Such non-reproducibility is a logical consequence of findings that correlate only temporarily being overgeneralized into mechanisms that have no inherent relationship, so that new data do not contain the previous, idiosyncratic correlations of the original data.
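A small simulation (purely illustrative, not from the article) of the spurious-correlation and non-reproducibility problem described above: two independent random walks frequently display a sizeable sample correlation even though no mechanism links them, and fresh data from the same unrelated processes need not show the same pattern.

import numpy as np

rng = np.random.default_rng(42)

def sample_correlation() -> float:
    """Correlation between two independent random walks of length 200."""
    a = np.cumsum(rng.normal(size=200))
    b = np.cumsum(rng.normal(size=200))
    return np.corrcoef(a, b)[0, 1]

original = sample_correlation()          # can easily be large in magnitude by chance
replications = [sample_correlation() for _ in range(5)]
print(f"original sample: r = {original:.2f}")
print("new samples:    ", ", ".join(f"{r:.2f}" for r in replications))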
Debates over the effect of malpractice versus the effect of the inherent difficulties of searching for causality are ongoing. Critics of widely practiced methodologies argue that researchers have engaged in statistical manipulation in order to publish articles that supposedly demonstrate evidence of causality but are actually examples of spurious correlation being touted as evidence of causality: such endeavors may be referred to as p-hacking. To prevent this, some have advocated that researchers preregister their research designs prior to conducting their studies, so that they do not inadvertently overemphasize a nonreproducible finding that was not the initial subject of inquiry but was found to be statistically significant during data analysis. == See also == Causal analysis Causal model Granger causality Multivariate statistics Partial least squares regression Pathogenesis Pathology Probabilistic causation Probabilistic argumentation Probabilistic logic Regression analysis Transfer entropy == References == == Bibliography == Hernán, MA; Robins, JM (21 January 2020). Causal Inference: What If. Boca Raton: Chapman & Hall/CRC. == External links == NIPS 2013 Workshop on Causality Causal inference at the Max Planck Institute for Intelligent Systems Tübingen
Wikipedia/Causal_inference
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model. All statistical hypothesis tests and all statistical estimators are derived via statistical models. More generally, statistical models are part of the foundation of statistical inference. A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman Adèr quoting Kenneth Bollen). == Introduction == Informally, a statistical model can be thought of as a statistical assumption (or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of any event. As an example, consider a pair of ordinary six-sided dice. We will study two different statistical assumptions about the dice. The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is ⁠1/6⁠. From that assumption, we can calculate the probability of both dice coming up 5:  ⁠1/6⁠ × ⁠1/6⁠ = ⁠1/36⁠.  More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6). The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is ⁠1/8⁠ (because the dice are weighted). From that assumption, we can calculate the probability of both dice coming up 5:  ⁠1/8⁠ × ⁠1/8⁠ = ⁠1/64⁠.  We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown. The first statistical assumption constitutes a statistical model: because with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption does not constitute a statistical model: because with the assumption alone, we cannot calculate the probability of every event. In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation). For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible. == Formal definition == In mathematical terms, a statistical model is a pair ( S , P {\displaystyle S,{\mathcal {P}}} ), where S {\displaystyle S} is the set of possible observations, i.e. the sample space, and P {\displaystyle {\mathcal {P}}} is a set of probability distributions on S {\displaystyle S} . The set P {\displaystyle {\mathcal {P}}} represents all of the models that are considered possible. This set is typically parameterized: P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . The set Θ {\displaystyle \Theta } defines the parameters of the model. If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e. F θ 1 = F θ 2 ⇒ θ 1 = θ 2 {\displaystyle F_{\theta _{1}}=F_{\theta _{2}}\Rightarrow \theta _{1}=\theta _{2}} (in other words, the mapping is injective), it is said to be identifiable. 
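To connect the formal definition back to the dice example in the introduction, the following minimal sketch (illustrative code, not part of the article) writes the fair-dice assumption as an explicit statistical model: a sample space together with a probability for every outcome, from which the probability of any event can be calculated.

from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))             # sample space: all 36 ordered pairs
prob = {outcome: Fraction(1, 36) for outcome in S}   # the fair-dice assumption

def probability_of(event) -> Fraction:
    """Probability of an arbitrary event, given as a set of outcomes."""
    return sum(prob[o] for o in event)

both_fives = {(5, 5)}
print(probability_of(both_fives))                    # 1/36

# Any other event is handled the same way, e.g. "the two faces sum to 7":
sum_is_seven = {o for o in S if sum(o) == 7}
print(probability_of(sum_is_seven))                  # 1/6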
In some cases, the model can be more complex. In Bayesian statistics, the model is extended by adding a probability distribution over the parameter space Θ {\displaystyle \Theta } . A statistical model can sometimes distinguish two sets of probability distributions. The first set Q = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {Q}}=\{F_{\theta }:\theta \in \Theta \}} is the set of models considered for inference. The second set P = { F λ : λ ∈ Λ } {\displaystyle {\mathcal {P}}=\{F_{\lambda }:\lambda \in \Lambda \}} is the set of models that could have generated the data which is much larger than Q {\displaystyle {\mathcal {Q}}} . Such statistical models are key in checking that a given procedure is robust, i.e. that it does not produce catastrophic errors when its assumptions about the data are incorrect. == An example == Suppose that we have a population of children, with the ages of the children distributed uniformly, in the population. The height of a child will be stochastically related to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in a linear regression model, like this: heighti = b0 + b1agei + εi, where b0 is the intercept, b1 is a parameter that age is multiplied by to obtain a prediction of height, εi is the error term, and i identifies the child. This implies that height is predicted by age, with some error. An admissible model must be consistent with all the data points. Thus, a straight line (heighti = b0 + b1agei) cannot be admissible for a model of the data—unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, εi, must be included in the equation, so that the model is consistent with all the data points. To do statistical inference, we would first need to assume some probability distributions for the εi. For instance, we might assume that the εi distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: b0, b1, and the variance of the Gaussian distribution. We can formally specify the model in the form ( S , P {\displaystyle S,{\mathcal {P}}} ) as follows. The sample space, S {\displaystyle S} , of our model comprises the set of all possible pairs (age, height). Each possible value of θ {\displaystyle \theta } = (b0, b1, σ2) determines a distribution on S {\displaystyle S} ; denote that distribution by F θ {\displaystyle F_{\theta }} . If Θ {\displaystyle \Theta } is the set of all possible values of θ {\displaystyle \theta } , then P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . (The parameterization is identifiable, and this is easy to check.) In this example, the model is determined by (1) specifying S {\displaystyle S} and (2) making some assumptions relevant to P {\displaystyle {\mathcal {P}}} . There are two assumptions: that height can be approximated by a linear function of age; that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specify P {\displaystyle {\mathcal {P}}} —as they are required to do. == General remarks == A statistical model is a special class of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. 
some of the variables are stochastic. In the above example with children's heights, ε is a stochastic variable; without that stochastic variable, the model would be deterministic. Statistical models are often used even when the data-generating process being modeled is deterministic. For instance, coin tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. Relatedly, the statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". There are three purposes for a statistical model, according to Konishi & Kitagawa: Predictions Extraction of information Description of stochastic structures Those three purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description. == Dimension of a model == Suppose that we have a statistical model ( S , P {\displaystyle S,{\mathcal {P}}} ) with P = { F θ : θ ∈ Θ } {\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}} . In notation, we write that Θ ⊆ R k {\displaystyle \Theta \subseteq \mathbb {R} ^{k}} where k is a positive integer ( R {\displaystyle \mathbb {R} } denotes the real numbers; other sets can be used, in principle). Here, k is called the dimension of the model. The model is said to be parametric if Θ {\displaystyle \Theta } has finite dimension. As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that P = { F μ , σ ( x ) ≡ 1 2 π σ exp ⁡ ( − ( x − μ ) 2 2 σ 2 ) : μ ∈ R , σ > 0 } {\displaystyle {\mathcal {P}}=\left\{F_{\mu ,\sigma }(x)\equiv {\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right):\mu \in \mathbb {R} ,\sigma >0\right\}} . In this example, the dimension, k, equals 2. As another example, suppose that the data consists of points (x, y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note the set of all possible lines has dimension 2, even though geometrically, a line has dimension 1.) Although formally θ ∈ Θ {\displaystyle \theta \in \Theta } is a single parameter that has dimension k, it is sometimes regarded as comprising k separate parameters. For example, with the univariate Gaussian distribution, θ {\displaystyle \theta } is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters—the mean and the standard deviation. A statistical model is nonparametric if the parameter set Θ {\displaystyle \Theta } is infinite dimensional. A statistical model is semiparametric if it has both finite-dimensional and infinite-dimensional parameters. Formally, if k is the dimension of Θ {\displaystyle \Theta } and n is the number of samples, both semiparametric and nonparametric models have k → ∞ {\displaystyle k\rightarrow \infty } as n → ∞ {\displaystyle n\rightarrow \infty } . 
If k / n → 0 {\displaystyle k/n\rightarrow 0} as n → ∞ {\displaystyle n\rightarrow \infty } , then the model is semiparametric; otherwise, the model is nonparametric. Parametric models are by far the most commonly used statistical models. Regarding semiparametric and nonparametric models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies". == Nested models == Two statistical models are nested if the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model y = b0 + b1x + b2x2 + ε, ε ~ 𝒩(0, σ2) has, nested within it, the linear model y = b0 + b1x + ε, ε ~ 𝒩(0, σ2) —we constrain the parameter b2 to equal 0. In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case. As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2. == Comparing models == Comparing statistical models is fundamental for much of statistical inference. Konishi & Kitagawa (2008, p. 75) state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following: R2, Bayes factor, Akaike information criterion, and the likelihood-ratio test together with its generalization, the relative likelihood. Another way of comparing two statistical models is through the notion of deficiency introduced by Lucien Le Cam. == See also == == Notes == == References == == Further reading == Davison, A. C. (2008), Statistical Models, Cambridge University Press Drton, M.; Sullivant, S. (2007), "Algebraic statistical models" (PDF), Statistica Sinica, 17: 1273–1297 Freedman, D. A. (2009), Statistical Models, Cambridge University Press Helland, I. S. (2010), Steps Towards a Unified Basis for Scientific Models and Methods, World Scientific Kroese, D. P.; Chan, J. C. C. (2014), Statistical Modeling and Computation, Springer Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, S2CID 15900983
Wikipedia/Probabilistic_model
In discrete mathematics, particularly in graph theory, a graph is a structure consisting of a set of objects where some pairs of the objects are in some sense "related". The objects are represented by abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called link or line). Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges. The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any person A can shake hands with a person B only if B also shakes hands with A. In contrast, if an edge from a person A to a person B means that A owes money to B, then this graph is directed, because owing money is not necessarily reciprocated. Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense by J. J. Sylvester in 1878 due to a direct relation between mathematics and chemical structure (what he called a chemico-graphical image). == Definitions == Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures. === Graph === A graph (sometimes called an undirected graph to distinguish it from a directed graph, or a simple graph to distinguish it from a multigraph) is a pair G = (V, E), where V is a set whose elements are called vertices (singular: vertex), and E is a set of unordered pairs { v 1 , v 2 } {\displaystyle \{v_{1},v_{2}\}} of vertices, whose elements are called edges (sometimes links or lines). The vertices u and v of an edge {u, v} are called the edge's endpoints. The edge is said to join u and v and to be incident on them. A vertex may belong to no edge, in which case it is not joined to any other vertex and is called isolated. When an edge { u , v } {\displaystyle \{u,v\}} exists, the vertices u and v are called adjacent. A multigraph is a generalization that allows multiple edges to have the same pair of endpoints. In some texts, multigraphs are simply called graphs. Sometimes, graphs are allowed to contain loops, which are edges that join a vertex to itself. To allow loops, the pairs of vertices in E must be allowed to have the same node twice. Such generalized graphs are called graphs with loops or simply graphs when it is clear from the context that loops are allowed. Generally, the vertex set V is taken to be finite (which implies that the edge set E is also finite). Sometimes infinite graphs are considered, but they are usually viewed as a special kind of binary relation, because most results on finite graphs either do not extend to the infinite case or need a rather different proof. An empty graph is a graph that has an empty set of vertices (and thus an empty set of edges). The order of a graph is its number |V| of vertices, usually denoted by n. The size of a graph is its number |E| of edges, typically denoted by m. However, in some contexts, such as for expressing the computational complexity of algorithms, the term size is used for the quantity |V| + |E| (otherwise, a non-empty graph could have size 0). The degree or valency of a vertex is the number of edges that are incident to it; for graphs with loops, a loop is counted twice. 
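As an illustration of these definitions, the following minimal sketch (using the example graph that appears later in the Examples section; the helper functions themselves are illustrative assumptions) represents a graph as a pair G = (V, E), with E a set of unordered pairs of vertices, and derives adjacency and vertex degree from it.

# A simple undirected graph G = (V, E); edges are unordered pairs of vertices.
V = {1, 2, 3, 4, 5, 6}
E = {frozenset(e) for e in [(1, 2), (1, 5), (2, 3), (2, 5), (3, 4), (4, 5), (4, 6)]}

def adjacent(u, v) -> bool:
    """Two vertices are adjacent when {u, v} is an edge."""
    return frozenset((u, v)) in E

def degree(v) -> int:
    """Number of edges incident to v (no loops in this simple graph)."""
    return sum(1 for e in E if v in e)

print(adjacent(1, 2), adjacent(1, 3))    # True False
print({v: degree(v) for v in V})         # e.g. vertex 2 has degree 3
print([v for v in V if degree(v) == 0])  # isolated vertices (none in this graph)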
In a graph of order n, the maximum degree of each vertex is n − 1 (or n + 1 if loops are allowed, because a loop contributes 2 to the degree), and the maximum number of edges is n(n − 1)/2 (or n(n + 1)/2 if loops are allowed). The edges of a graph define a symmetric relation on the vertices, called the adjacency relation. Specifically, two vertices x and y are adjacent if {x, y} is an edge. A graph is fully determined by its adjacency matrix A, which is an n × n square matrix, with Aij specifying the number of connections from vertex i to vertex j. For a simple graph, Aij is either 0, indicating disconnection, or 1, indicating connection; moreover Aii = 0 because an edge in a simple graph cannot start and end at the same vertex. Graphs with self-loops will be characterized by some or all Aii being equal to a positive integer, and multigraphs (with multiple edges between vertices) will be characterized by some or all Aij being equal to a positive integer. Undirected graphs will have a symmetric adjacency matrix (meaning Aij = Aji). === Directed graph === A directed graph or digraph is a graph in which edges have orientations. In one restricted but very common sense of the term, a directed graph is a pair G = (V, E) comprising: V, a set of vertices (also called nodes or points); E, a set of edges (also called directed edges, directed links, directed lines, arrows, or arcs), which are ordered pairs of distinct vertices: E ⊆ { ( x , y ) ∣ ( x , y ) ∈ V 2 and x ≠ y } {\displaystyle E\subseteq \{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\}} . To avoid ambiguity, this type of object may be called precisely a directed simple graph. In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. The edge (y, x) is called the inverted edge of (x, y). Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head. In one more general sense of the term allowing multiple edges, a directed graph is sometimes defined to be an ordered triple G = (V, E, ϕ) comprising: V, a set of vertices (also called nodes or points); E, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs); ϕ, an incidence function mapping every edge to an ordered pair of vertices (that is, an edge is associated with two distinct vertices): ϕ : E → { ( x , y ) ∣ ( x , y ) ∈ V 2 and x ≠ y } {\displaystyle \phi :E\to \{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\}} . To avoid ambiguity, this type of object may be called precisely a directed multigraph. A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x {\displaystyle x} to itself is the edge (for a directed simple graph) or is incident on (for a directed multigraph) ( x , x ) {\displaystyle (x,x)} which is not in { ( x , y ) ∣ ( x , y ) ∈ V 2 and x ≠ y } {\displaystyle \{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\}} . So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E {\displaystyle E} should be modified to E ⊆ V 2 {\displaystyle E\subseteq V^{2}} . For directed multigraphs, the definition of ϕ {\displaystyle \phi } should be modified to ϕ : E → V 2 {\displaystyle \phi :E\to V^{2}} . 
To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver) respectively. The edges of a directed simple graph permitting loops G is a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y. === Mixed graph === A mixed graph is a graph in which some edges may be directed and some may be undirected. It is an ordered triple G = (V, E, A) for a mixed simple graph and G = (V, E, A, ϕE, ϕA) for a mixed multigraph with V, E (the undirected edges), A (the directed edges), ϕE and ϕA defined as above. Directed and undirected graphs are special cases. === Weighted graph === A weighted graph or a network is a graph in which a number (the weight) is assigned to each edge. Such weights might represent for example costs, lengths or capacities, depending on the problem at hand. Such graphs arise in many contexts, for example in shortest path problems such as the traveling salesman problem. == Types of graphs == === Oriented graph === One definition of an oriented graph is that it is a directed graph in which at most one of (x, y) and (y, x) may be edges of the graph. That is, it is a directed graph that can be formed as an orientation of an undirected (simple) graph. Some authors use "oriented graph" to mean the same as "directed graph". Some authors use "oriented graph" to mean any orientation of a given undirected graph or multigraph. === Regular graph === A regular graph is a graph in which each vertex has the same number of neighbours, i.e., every vertex has the same degree. A regular graph with vertices of degree k is called a k‑regular graph or regular graph of degree k. === Complete graph === A complete graph is a graph in which each pair of vertices is joined by an edge. A complete graph contains all possible edges. === Finite graph === A finite graph is a graph in which the vertex set and the edge set are finite sets. Otherwise, it is called an infinite graph. Most commonly in graph theory it is implied that the graphs discussed are finite. If the graphs are infinite, that is usually specifically stated. === Connected graph === In an undirected graph, an unordered pair of vertices {x, y} is called connected if a path leads from x to y. Otherwise, the unordered pair is called disconnected. A connected graph is an undirected graph in which every unordered pair of vertices in the graph is connected. Otherwise, it is called a disconnected graph. In a directed graph, an ordered pair of vertices (x, y) is called strongly connected if a directed path leads from x to y. Otherwise, the ordered pair is called weakly connected if an undirected path leads from x to y after replacing all of its directed edges with undirected edges. Otherwise, the ordered pair is called disconnected. A strongly connected graph is a directed graph in which every ordered pair of vertices in the graph is strongly connected. Otherwise, it is called a weakly connected graph if every ordered pair of vertices in the graph is weakly connected. Otherwise it is called a disconnected graph. A k-vertex-connected graph or k-edge-connected graph is a graph in which no set of k − 1 vertices (respectively, edges) exists that, when removed, disconnects the graph. A k-vertex-connected graph is often called simply a k-connected graph. 
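A minimal sketch (illustrative, not from the article) of the connectivity definition above: a breadth-first search from an arbitrary starting vertex reaches every vertex exactly when the undirected graph is connected.

from collections import deque

def is_connected(vertices, edges) -> bool:
    """True if every vertex is reachable from an arbitrary starting vertex."""
    if not vertices:
        return True                       # the empty graph is treated as connected here
    adjacency = {v: set() for v in vertices}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    start = next(iter(vertices))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in adjacency[u] - seen:     # unvisited neighbours of u
            seen.add(w)
            queue.append(w)
    return seen == set(vertices)

V = {1, 2, 3, 4, 5, 6}
E = [(1, 2), (1, 5), (2, 3), (2, 5), (3, 4), (4, 5), (4, 6)]
print(is_connected(V, E))         # True
print(is_connected(V, [(1, 2)]))  # False: vertices 3..6 are unreachable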
=== Bipartite graph === A bipartite graph is a simple graph in which the vertex set can be partitioned into two sets, W and X, so that no two vertices in W share a common edge and no two vertices in X share a common edge. Alternatively, it is a graph with a chromatic number of 2. In a complete bipartite graph, the vertex set is the union of two disjoint sets, W and X, so that every vertex in W is adjacent to every vertex in X but there are no edges within W or X. === Path graph === A path graph or linear graph of order n ≥ 2 is a graph in which the vertices can be listed in an order v1, v2, …, vn such that the edges are the {vi, vi+1} where i = 1, 2, …, n − 1. Path graphs can be characterized as connected graphs in which the degree of all but two vertices is 2 and the degree of the two remaining vertices is 1. If a path graph occurs as a subgraph of another graph, it is a path in that graph. === Planar graph === A planar graph is a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect. === Cycle graph === A cycle graph or circular graph of order n ≥ 3 is a graph in which the vertices can be listed in an order v1, v2, …, vn such that the edges are the {vi, vi+1} where i = 1, 2, …, n − 1, plus the edge {vn, v1}. Cycle graphs can be characterized as connected graphs in which the degree of all vertices is 2. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph. === Tree === A tree is an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph. A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees. === Polytree === A polytree (or directed tree or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest. === Advanced classes === More advanced kinds of graphs are: Petersen graph and its generalizations; perfect graphs; cographs; chordal graphs; other graphs with large automorphism groups: vertex-transitive, arc-transitive, and distance-transitive graphs; strongly regular graphs and their generalizations distance-regular graphs. == Properties of graphs == Two edges of a graph are called adjacent if they share a common vertex. Two edges of a directed graph are called consecutive if the head of the first one is the tail of the second one. Similarly, two vertices are called adjacent if they share a common edge (consecutive if the first one is the tail and the second one is the head of an edge), in which case the common edge is said to join the two vertices. An edge and a vertex on that edge are called incident. The graph with only one vertex and no edges is called the trivial graph. A graph with only vertices and no edges is known as an edgeless graph. The graph with no vertices and no edges is sometimes called the null graph or empty graph, but the terminology is not consistent and not all mathematicians allow this object. Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be called vertex-labeled. However, for many questions it is better to treat vertices as indistinguishable. 
(Of course, the vertices may be still distinguishable by the properties of the graph itself, e.g., by the numbers of incident edges.) The same remarks apply to edges, so graphs with labeled edges are called edge-labeled. Graphs with labels attached to edges or vertices are more generally designated as labeled. Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are called unlabeled. (In the literature, the term labeled may apply to other kinds of labeling, besides that which serves only to distinguish different vertices or edges.) The category of directed multigraphs permitting loops is the comma category Set ↓ D where D: Set → Set is the functor taking a set s to s × s. == Examples == The diagram is a schematic representation of the graph with vertices V = { 1 , 2 , 3 , 4 , 5 , 6 } {\displaystyle V=\{1,2,3,4,5,6\}} and edges E = { { 1 , 2 } , { 1 , 5 } , { 2 , 3 } , { 2 , 5 } , { 3 , 4 } , { 4 , 5 } , { 4 , 6 } } . {\displaystyle E=\{\{1,2\},\{1,5\},\{2,3\},\{2,5\},\{3,4\},\{4,5\},\{4,6\}\}.} In computer science, directed graphs are used to represent knowledge (e.g., conceptual graph), finite-state machines, and many other discrete structures. A binary relation R on a set X defines a directed graph. An element x of X is a direct predecessor of an element y of X if and only if xRy. A directed graph can model information networks such as Twitter, with one user following another. Particularly regular examples of directed graphs are given by the Cayley graphs of finitely-generated groups, as well as Schreier coset graphs In category theory, every small category has an underlying directed multigraph whose vertices are the objects of the category, and whose edges are the arrows of the category. In the language of category theory, one says that there is a forgetful functor from the category of small categories to the category of quivers. == Graph operations == There are several operations that produce new graphs from initial ones, which might be classified into the following categories: unary operations, which create a new graph from an initial one, such as: edge contraction, line graph, dual graph, complement graph, graph rewriting; binary operations, which create a new graph from two initial ones, such as: disjoint union of graphs, cartesian product of graphs, tensor product of graphs, strong product of graphs, lexicographic product of graphs, series–parallel graphs. == Generalizations == In a hypergraph, an edge can join any positive number of vertices. An undirected graph can be seen as a simplicial complex consisting of 1-simplices (the edges) and 0-simplices (the vertices). As such, complexes are generalizations of graphs since they allow for higher-dimensional simplices. Every graph gives rise to a matroid. In model theory, a graph is just a structure. But in that case, there is no limitation on the number of edges: it can be any cardinal number, see continuous graph. In computational biology, power graph analysis introduces power graphs as an alternative representation of undirected graphs. In geographic information systems, geometric networks are closely modeled after graphs, and borrow many concepts from graph theory to perform spatial analysis on road networks or utility grids. == See also == Conceptual graph Graph (abstract data type) Graph database Graph drawing List of graph theory topics List of publications in graph theory Network theory == Notes == == References == Balakrishnan, V. K. (1997). Graph Theory (1st ed.). McGraw-Hill. 
ISBN 978-0-07-005489-9. Bang-Jensen, J.; Gutin, G. (2000). Digraphs: Theory, Algorithms and Applications. Springer. Bender, Edward A.; Williamson, S. Gill (2010). Lists, Decisions and Graphs. With an Introduction to Probability. Berge, Claude (1958). Théorie des graphes et ses applications (in French). Paris: Dunod. Biggs, Norman (1993). Algebraic Graph Theory (2nd ed.). Cambridge University Press. ISBN 978-0-521-45897-9. Bollobás, Béla (2002). Modern Graph Theory (1st ed.). Springer. ISBN 978-0-387-98488-9. Diestel, Reinhard (2005). Graph Theory (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-540-26183-4. Graham, R.L.; Grötschel, M.; Lovász, L. (1995). Handbook of Combinatorics. MIT Press. ISBN 978-0-262-07169-7. Gross, Jonathan L.; Yellen, Jay (1998). Graph Theory and Its Applications. CRC Press. ISBN 978-0-8493-3982-0. Gross, Jonathan L.; Yellen, Jay (2003). Handbook of Graph Theory. CRC. ISBN 978-1-58488-090-5. Harary, Frank (1995). Graph Theory. Addison Wesley Publishing Company. ISBN 978-0-201-41033-4. Iyanaga, Shôkichi; Kawada, Yukiyosi (1977). Encyclopedic Dictionary of Mathematics. MIT Press. ISBN 978-0-262-09016-2. Zwillinger, Daniel (2002). CRC Standard Mathematical Tables and Formulae (31st ed.). Chapman & Hall/CRC. ISBN 978-1-58488-291-6. == Further reading == Trudeau, Richard J. (1993). Introduction to Graph Theory (Corrected, enlarged republication. ed.). New York: Dover Publications. ISBN 978-0-486-67870-2. Retrieved 8 August 2012. == External links == Media related to Graph (discrete mathematics) at Wikimedia Commons Weisstein, Eric W. "Graph". MathWorld.
Wikipedia/Undirected_graph
A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or hidden) Markov process (referred to as X {\displaystyle X} ). An HMM requires that there be an observable process Y {\displaystyle Y} whose outcomes depend on the outcomes of X {\displaystyle X} in a known way. Since X {\displaystyle X} cannot be observed directly, the goal is to learn about state of X {\displaystyle X} by observing Y {\displaystyle Y} . By definition of being a Markov model, an HMM has an additional requirement that the outcome of Y {\displaystyle Y} at time t = t 0 {\displaystyle t=t_{0}} must be "influenced" exclusively by the outcome of X {\displaystyle X} at t = t 0 {\displaystyle t=t_{0}} and that the outcomes of X {\displaystyle X} and Y {\displaystyle Y} at t < t 0 {\displaystyle t<t_{0}} must be conditionally independent of Y {\displaystyle Y} at t = t 0 {\displaystyle t=t_{0}} given X {\displaystyle X} at time t = t 0 {\displaystyle t=t_{0}} . Estimation of the parameters in an HMM can be performed using maximum likelihood estimation. For linear chain HMMs, the Baum–Welch algorithm can be used to estimate parameters. Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, pattern recognition—such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics. == Definition == Let X n {\displaystyle X_{n}} and Y n {\displaystyle Y_{n}} be discrete-time stochastic processes and n ≥ 1 {\displaystyle n\geq 1} . The pair ( X n , Y n ) {\displaystyle (X_{n},Y_{n})} is a hidden Markov model if X n {\displaystyle X_{n}} is a Markov process whose behavior is not directly observable ("hidden"); P ⁡ ( Y n ∈ A | X 1 = x 1 , … , X n = x n ) = P ⁡ ( Y n ∈ A | X n = x n ) {\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\ {\bigl |}\ X_{1}=x_{1},\ldots ,X_{n}=x_{n}{\bigr )}=\operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\ {\bigl |}\ X_{n}=x_{n}{\bigr )}} , for every n ≥ 1 {\displaystyle n\geq 1} , x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} , and every Borel set A {\displaystyle A} . Let X t {\displaystyle X_{t}} and Y t {\displaystyle Y_{t}} be continuous-time stochastic processes. The pair ( X t , Y t ) {\displaystyle (X_{t},Y_{t})} is a hidden Markov model if X t {\displaystyle X_{t}} is a Markov process whose behavior is not directly observable ("hidden"); P ⁡ ( Y t 0 ∈ A ∣ { X t ∈ B t } t ≤ t 0 ) = P ⁡ ( Y t 0 ∈ A ∣ X t 0 ∈ B t 0 ) {\displaystyle \operatorname {\mathbf {P} } (Y_{t_{0}}\in A\mid \{X_{t}\in B_{t}\}_{t\leq t_{0}})=\operatorname {\mathbf {P} } (Y_{t_{0}}\in A\mid X_{t_{0}}\in B_{t_{0}})} , for every t 0 {\displaystyle t_{0}} , every Borel set A {\displaystyle A} , and every family of Borel sets { B t } t ≤ t 0 {\displaystyle \{B_{t}\}_{t\leq t_{0}}} . === Terminology === The states of the process X n {\displaystyle X_{n}} (resp. X t ) {\displaystyle X_{t})} are called hidden states, and P ⁡ ( Y n ∈ A ∣ X n = x n ) {\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\mid X_{n}=x_{n}{\bigr )}} (resp. P ⁡ ( Y t ∈ A ∣ X t ∈ B t ) ) {\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{t}\in A\mid X_{t}\in B_{t}{\bigr )})} is called emission probability or output probability. 
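Before the examples below, a minimal generative sketch (with illustrative parameters that are assumptions, not values from the article) of the process the definition describes: a hidden Markov chain X that is never observed directly, and observations Y whose distribution depends only on the current hidden state.

import numpy as np

rng = np.random.default_rng(0)

states = ["A", "B"]
observations = ["x", "y", "z"]
start = np.array([0.5, 0.5])                      # P(X_1)
transition = np.array([[0.9, 0.1],                # P(X_{n+1} | X_n), one row per state
                       [0.2, 0.8]])
emission = np.array([[0.7, 0.2, 0.1],             # P(Y_n | X_n), one row per state
                     [0.1, 0.3, 0.6]])

def sample(length: int):
    """Draw a hidden state sequence and the observations it emits."""
    x = rng.choice(len(states), p=start)
    hidden, visible = [], []
    for _ in range(length):
        hidden.append(states[x])
        visible.append(observations[rng.choice(len(observations), p=emission[x])])
        x = rng.choice(len(states), p=transition[x])
    return hidden, visible

hidden, visible = sample(5)
print("hidden:  ", hidden)    # not available to an observer
print("observed:", visible)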
== Examples == === Drawing balls from hidden urns === In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement (where each item from the urn is returned to the original urn before the next step). Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, ... each of which contains a known mix of balls, with each ball having a unique label y1, y2, y3, ... . The genie chooses an urn in that room and randomly draws a ball from that urn. It then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure to choose urns; the choice of the urn for the n-th ball depends only upon a random number and the choice of the urn for the (n − 1)-th ball. The choice of urn does not directly depend on the urns chosen before this single previous urn; therefore, this is called a Markov process. It can be described by the upper part of Figure 1. The Markov process itself cannot be observed, only the sequence of labeled balls; thus this arrangement is called a hidden Markov process. This is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if the observer knows the composition of the urns and has just observed a sequence of three balls, e.g. y1, y2 and y3 on the conveyor belt, the observer still cannot be sure from which urn (i.e., at which state) the genie has drawn the third ball. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns. === Weather guessing game === Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like. Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are hidden from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM). Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be represented in Python as in the sketch given after this passage. In that code, start_probability represents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately {'Rainy': 0.57, 'Sunny': 0.43}. The transition_probability represents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy.
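The Python code block referred to above is not present in this copy of the text. The following reconstruction is consistent with the probabilities described in the surrounding paragraphs (a rainy-leaning initial belief whose equilibrium counterpart is roughly {'Rainy': 0.57, 'Sunny': 0.43}, a 30% rainy-to-sunny transition, a 50% chance of cleaning on rainy days, and a 60% chance of walking on sunny days), but the exact numbers should be read as a reconstruction rather than a quotation.

# Reconstruction of the omitted parameter definitions; the specific numbers are
# inferred from the surrounding description and should be treated as assumed.
states = ("Rainy", "Sunny")
observations = ("walk", "shop", "clean")

# Alice's belief about the weather when Bob first calls her.
start_probability = {"Rainy": 0.6, "Sunny": 0.4}

# Weather dynamics of the underlying Markov chain.
transition_probability = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}

# How likely Bob is to perform each activity in each kind of weather.
emission_probability = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}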
The emission_probability represents how likely Bob is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk. A similar example is further elaborated in the Viterbi algorithm page. == Structural architecture == The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable x(t) is the hidden state at time t (with the model from the above diagram, x(t) ∈ { x1, x2, x3 }). The random variable y(t) is the observation at time t (with y(t) ∈ { y1, y2, y3, y4 }). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies. From the diagram, it is clear that the conditional probability distribution of the hidden variable x(t) at time t, given the values of the hidden variable x at all times, depends only on the value of the hidden variable x(t − 1); the values at time t − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable y(t) depends on only the value of the hidden variable x(t) (both at time t). In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types, transition probabilities and emission probabilities (also known as output probabilities). The transition probabilities control the way the hidden state at time t is chosen given the hidden state at time t − 1 {\displaystyle t-1} . The hidden state space is assumed to consist of one of N possible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the N possible states that a hidden variable at time t can be in, there is a transition probability from this state to each of the N possible states of the hidden variable at time t + 1 {\displaystyle t+1} , for a total of N 2 {\displaystyle N^{2}} transition probabilities. The set of transition probabilities for transitions from any given state must sum to 1. Thus, the N × N {\displaystyle N\times N} matrix of transition probabilities is a Markov matrix. Because any transition probability can be determined once the others are known, there are a total of N ( N − 1 ) {\displaystyle N(N-1)} transition parameters. In addition, for each of the N possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with M possible values, governed by a categorical distribution, there will be M − 1 {\displaystyle M-1} separate parameters, for a total of N ( M − 1 ) {\displaystyle N(M-1)} emission parameters over all hidden states. 
On the other hand, if the observed variable is an M-dimensional vector distributed according to an arbitrary multivariate Gaussian distribution, there will be M parameters controlling the means and M(M + 1)/2 parameters controlling the covariance matrix, for a total of N(M + M(M + 1)/2) = NM(M + 3)/2 = O(NM^2) emission parameters. (In such a case, unless the value of M is small, it may be more practical to restrict the nature of the covariances between individual elements of the observation vector, e.g. by assuming that the elements are independent of each other, or less restrictively, are independent of all but a fixed number of adjacent elements.) == Inference == Several inference problems are associated with hidden Markov models, as outlined below. === Probability of an observed sequence === The task is to compute, given the parameters of the model, the probability of a particular output sequence. This requires summation over all possible state sequences: The probability of observing a sequence Y = y(0), y(1), …, y(L − 1) of length L is given by P(Y) = ∑_X P(Y ∣ X) P(X), where the sum runs over all possible hidden-node sequences X = x(0), x(1), …, x(L − 1). Applying the principle of dynamic programming, this problem, too, can be handled efficiently using the forward algorithm. === Probability of the latent variables === A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations y(1), …, y(t). ==== Filtering ==== The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute P(x(t) ∣ y(1), …, y(t)). This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. Then, it is natural to ask about the state of the process at the end. This problem can be handled efficiently using the forward algorithm. An example is when the algorithm is applied to a Hidden Markov Network to determine P(h_t ∣ v_{1:t}). ==== Smoothing ==== This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to compute P(x(k) ∣ y(1), …, y(t)) for some k < t. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in time k in the past, relative to time t. The forward-backward algorithm is a good method for computing the smoothed values for all hidden state variables. ==== Most likely explanation ==== The task, unlike the previous two, asks about the joint probability of the entire sequence of hidden states that generated a particular sequence of observations (see illustration on the right).
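For the sequence-probability and filtering tasks described above, a minimal sketch of the forward algorithm (using the parameter dictionaries assumed in the weather example earlier) might look as follows; replacing the sum over previous states with a maximum, and keeping back-pointers, gives the Viterbi recursion used for the most likely explanation discussed next.

def forward(observed, states, start_p, trans_p, emit_p):
    # alpha[s] is the joint probability P(y(1..k), x(k) = s), updated per observation.
    alpha = {s: start_p[s] * emit_p[s][observed[0]] for s in states}
    for y in observed[1:]:
        alpha = {
            s: emit_p[s][y] * sum(alpha[r] * trans_p[r][s] for r in states)
            for s in states
        }
    sequence_probability = sum(alpha.values())  # P(Y), summed over all state sequences
    filtering = {s: alpha[s] / sequence_probability for s in states}  # P(x(t) | y(1..t))
    return sequence_probability, filtering

# Example call with the assumed weather parameters:
# p, posterior = forward(("walk", "shop", "clean"), states,
#                        start_probability, transition_probability, emission_probability)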
This task is generally applicable when HMM's are applied to different sorts of problems from those for which the tasks of filtering and smoothing are applicable. An example is part-of-speech tagging, where the hidden states represent the underlying parts of speech corresponding to an observed sequence of words. In this case, what is of interest is the entire sequence of parts of speech, rather than simply the part of speech for a single word, as filtering or smoothing would compute. This task requires finding a maximum over all possible state sequences, and can be solved efficiently by the Viterbi algorithm. === Statistical significance === For some of the above problems, it may also be interesting to ask about statistical significance. What is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence? When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the false positive rate associated with failing to reject the hypothesis for the output sequence. == Learning == The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and emission probabilities. The task is usually to derive the maximum likelihood estimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum–Welch algorithm or the Baldi–Chauvin algorithm. The Baum–Welch algorithm is a special case of the expectation-maximization algorithm. If the HMMs are used for time series prediction, more sophisticated Bayesian inference methods, like Markov chain Monte Carlo (MCMC) sampling are proven to be favorable over finding a single maximum likelihood model both in terms of accuracy and stability. Since MCMC imposes significant computational burden, in cases where computational scalability is also of interest, one may alternatively resort to variational approximations to Bayesian inference, e.g. Indeed, approximate variational inference offers computational efficiency comparable to expectation-maximization, while yielding an accuracy profile only slightly inferior to exact MCMC-type Bayesian inference. == Applications == HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depend on the sequence are). Applications include: Computational finance Single-molecule kinetic analysis Neuroscience Cryptanalysis Speech recognition, including Siri Speech synthesis Part-of-speech tagging Document separation in scanning solutions Machine translation Partial discharge Gene prediction Handwriting recognition Alignment of bio-sequences Time series analysis Activity recognition Protein folding Sequence classification Metamorphic virus detection Sequence motif discovery (DNA and proteins) DNA hybridization kinetics Chromatin state discovery Transportation forecasting Solar irradiance variability == History == Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s. 
From the linguistics point of view, hidden Markov models are equivalent to stochastic regular grammar. In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics. == Extensions == === General state spaces === In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a linear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow a Gaussian distribution. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the Kalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the extended Kalman filter or the particle filter. Nowadays, inference in hidden Markov models is performed in nonparametric settings, where the dependency structure enables identifiability of the model and the learnability limits are still under exploration. === Bayesian modeling of the transitions probabilities === Hidden Markov models are generative models, in which the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the transition probabilities) and conditional distribution of observations given states (the emission probabilities), is modeled. The above algorithms implicitly assume a uniform prior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the Dirichlet distribution, which is the conjugate prior distribution of the categorical distribution. Typically, a symmetric Dirichlet distribution is chosen, reflecting ignorance about which states are inherently more likely than others. The single parameter of this distribution (termed the concentration parameter) controls the relative density or sparseness of the resulting transition matrix. A choice of 1 yields a uniform distribution. Values greater than 1 produce a dense matrix, in which the transition probabilities between pairs of states are likely to be nearly equal. Values less than 1 result in a sparse matrix in which, for each given source state, only a small number of destination states have non-negligible transition probabilities. It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. 
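As a small illustration of the concentration parameter described above, the sketch below (assuming NumPy; the number of states and the parameter values are arbitrary) draws each row of a transition matrix from a symmetric Dirichlet distribution: a concentration well above 1 gives dense, nearly uniform rows, while a value well below 1 gives sparse rows with only a few non-negligible destination states.

import numpy as np

rng = np.random.default_rng(0)
n_states = 5

def sample_transition_matrix(concentration):
    # One symmetric Dirichlet draw per source state; each row sums to 1.
    return rng.dirichlet([concentration] * n_states, size=n_states)

dense_matrix = sample_transition_matrix(10.0)   # transition probabilities nearly equal
sparse_matrix = sample_transition_matrix(0.1)   # most mass on a few destination states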
Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example in unsupervised part-of-speech tagging, where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned using Gibbs sampling or extended versions of the expectation-maximization algorithm. An extension of the previously described hidden Markov models with Dirichlet priors uses a Dirichlet process in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a hierarchical Dirichlet process hidden Markov model, or HDP-HMM for short. It was originally described under the name "Infinite Hidden Markov Model" and was further formalized in "Hierarchical Dirichlet Processes". === Discriminative approach === A different type of extension uses a discriminative model in place of the generative model of standard HMMs. This type of model directly models the conditional distribution of the hidden states given the observations, rather than modeling the joint distribution. An example of this model is the so-called maximum entropy Markov model (MEMM), which models the conditional distribution of the states using logistic regression (also known as a "maximum entropy model"). The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model. Models of this sort are not limited to modeling direct dependencies between a hidden state and its associated observation; rather, features of nearby observations, of combinations of the associated observation and nearby observations, or in fact of arbitrary observations at any distance from a given hidden state can be included in the process used to determine the value of a hidden state. Furthermore, there is no need for these features to be statistically independent of each other, as would be the case if such features were used in a generative model. Finally, arbitrary features over pairs of adjacent hidden states can be used rather than simple transition probabilities. The disadvantages of such models are: (1) The types of prior distributions that can be placed on hidden states are severely limited; (2) It is not possible to predict the probability of seeing an arbitrary observation. This second limitation is often not an issue in practice, since many common usages of HMM's do not require such predictive probabilities. A variant of the previously described discriminative model is the linear-chain conditional random field. This uses an undirected graphical model (aka Markov random field) rather than the directed graphical models of MEMM's and similar models. The advantage of this type of model is that it does not suffer from the so-called label bias problem of MEMM's, and thus may make more accurate predictions. The disadvantage is that training can be slower than for MEMM's. 
=== Other extensions === Yet another variant is the factorial hidden Markov model, which allows for a single observation to be conditioned on the corresponding hidden variables of a set of K independent Markov chains, rather than a single Markov chain. It is equivalent to a single HMM, with N^K states (assuming there are N states for each chain), and therefore, learning in such a model is difficult: for a sequence of length T, a straightforward Viterbi algorithm has complexity O(N^{2K} T). To find an exact solution, a junction tree algorithm could be used, but it results in an O(N^{K+1} K T) complexity. In practice, approximate techniques, such as variational approaches, could be used. All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing for a given state to be dependent on the previous two or three states rather than a single previous state; i.e. the transition probabilities are extended to encompass sets of three or four adjacent states (or in general K adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have an O(N^K T) running time, for K adjacent states and T total observations (i.e. a length-T Markov chain). This extension has been widely used in bioinformatics, in the modeling of DNA sequences. Another recent extension is the triplet Markov model, in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. An interesting link has also been established between the theory of evidence and triplet Markov models, which allows data to be fused in a Markovian context and nonstationary data to be modelled. Alternative multi-stream data fusion strategies have also been proposed in the recent literature. Finally, a different rationale towards addressing the problem of modeling nonstationary data by means of hidden Markov models was suggested in 2012. It consists in employing a small recurrent neural network (RNN), specifically a reservoir network, to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. Under such a setup, a nonstationary HMM is eventually obtained, whose transition probabilities evolve over time in a manner that is inferred from the data, in contrast to some unrealistic ad-hoc model of temporal evolution. In 2023, two new algorithms were introduced for the hidden Markov model. These algorithms enable the computation of the posterior distribution of the HMM without the necessity of explicitly modeling the joint distribution, utilizing only the conditional distributions. Unlike traditional methods such as the Forward-Backward and Viterbi algorithms, which require knowledge of the joint law of the HMM and can be computationally intensive to learn, the Discriminative Forward-Backward and Discriminative Viterbi algorithms circumvent the need for the observation's law. This allows the HMM to be applied as a discriminative model, offering a more efficient and versatile approach to leveraging hidden Markov models in various applications.
The model suitable in the context of longitudinal data is named the latent Markov model. The basic version of this model has been extended to include individual covariates and random effects and to model more complex data structures such as multilevel data. A complete overview of latent Markov models, with special attention to the model assumptions and to their practical use, is available in the literature. == Measure theory == Given a Markov transition matrix and an invariant distribution on the states, a probability measure can be imposed on the set of subshifts. For example, consider the Markov chain given on the left on the states A, B1, B2, with invariant distribution π = (2/7, 4/7, 1/7). By ignoring the distinction between B1 and B2, this space of subshifts on A, B1, B2 is projected into another space of subshifts on A, B, and this projection also projects the probability measure down to a probability measure on the subshifts on A, B. The curious thing is that the probability measure on the subshifts on A, B is not created by a Markov chain on A, B, not even one of multiple orders. Intuitively, this is because if one observes a long run B^n of consecutive B's, then one becomes increasingly sure that Pr(A ∣ B^n) → 2/3, meaning that the observable part of the system can be affected by something infinitely far in the past. Conversely, there exists a space of subshifts on 6 symbols, projected to subshifts on 2 symbols, such that any Markov measure on the smaller subshift has a preimage measure that is not Markov of any order (example 2.6). == See also == == References == == External links == === Concepts === Teif, V. B.; Rippe, K. (2010). "Statistical–mechanical lattice models for protein–DNA binding in chromatin". J. Phys.: Condens. Matter. 22 (41): 414105. arXiv:1004.5514. Bibcode:2010JPCM...22O4105T. doi:10.1088/0953-8984/22/41/414105. PMID 21386588. S2CID 103345. A Revealing Introduction to Hidden Markov Models by Mark Stamp, San Jose State University. Fitting HMM's with expectation-maximization – complete derivation A step-by-step tutorial on HMMs Archived 2017-08-13 at the Wayback Machine (University of Leeds) Hidden Markov Models (an exposition using basic mathematics) Hidden Markov Models (by Narada Warakagoda) Hidden Markov Models: Fundamentals and Applications Part 1, Part 2 (by V. Petrushin) Lecture on a Spreadsheet by Jason Eisner, Video and interactive spreadsheet
Wikipedia/Hidden_Markov_models
In graph theory, a mixed graph G = (V, E, A) is a graph consisting of a set of vertices V, a set of (undirected) edges E, and a set of directed edges (or arcs) A. == Definitions and notation == Consider adjacent vertices u, v ∈ V. A directed edge, called an arc, is an edge with an orientation and can be denoted as u→v or (u, v) (note that u is the tail and v is the head of the arc). Also, an undirected edge, or edge, is an edge with no orientation and can be denoted as uv or [u, v]. For the purpose of our example we will not be considering loops or multiple edges of mixed graphs. A walk in a mixed graph is a sequence v_0, c_1, v_1, c_2, v_2, …, c_k, v_k of vertices and edges/arcs such that for every index i, either c_i = v_{i−1}v_i is an edge of the graph or c_i = v_{i−1}→v_i is an arc of the graph. This walk is a path if it does not repeat any edges, arcs, or vertices, except possibly the first and last vertices. A walk is closed if its first and last vertices are the same, and a closed path is a cycle. A mixed graph is acyclic if it does not contain a cycle. == Coloring == Mixed graph coloring can be thought of as a labeling or an assignment of k different colors (where k is a positive integer) to the vertices of a mixed graph. Different colors must be assigned to vertices that are connected by an edge. The colors may be represented by the numbers from 1 to k, and for a directed arc, the tail of the arc must be colored by a smaller number than the head of the arc. === Example === For example, consider the figure to the right. Our available k colors to color our mixed graph are {1, 2, 3}. Since u and v are connected by an edge, they must receive different colors or labelings (u and v are labelled 1 and 2, respectively). We also have an arc from v to w. Since orientation assigns an ordering, we must label the tail (v) with a smaller color (or integer from our set) than the head (w) of our arc. === Strong and weak coloring === A (strong) proper k-coloring of a mixed graph is a function c : V → [k], where [k] := {1, 2, …, k}, such that c(u) ≠ c(v) if uv ∈ E and c(u) < c(v) if (u, v) ∈ A. A weaker condition on our arcs can be applied, and we can consider a weak proper k-coloring of a mixed graph to be a function c : V → [k], where [k] := {1, 2, …, k}, such that c(u) ≠ c(v) if uv ∈ E and c(u) ≤ c(v) if (u, v) ∈ A. Referring back to our example, this means that we can label both the head and tail of (v, w) with the positive integer 2. === Counting === A coloring may or may not exist for a mixed graph. In order for a mixed graph to have a k-coloring, the graph cannot contain any directed cycles. If such a k-coloring exists, then we refer to the smallest k needed in order to properly color our graph as the chromatic number, denoted by χ(G). The number of proper k-colorings is a polynomial function of k called the chromatic polynomial of our graph G (by analogy with the chromatic polynomial of undirected graphs) and can be denoted by χ_G(k).
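A minimal sketch of the strong and weak proper k-coloring conditions defined above, together with a brute-force evaluation of the chromatic polynomial for small graphs (the graph, colorings, and function names are illustrative, not from any particular library):

from itertools import product

def is_proper_coloring(colors, edges, arcs, weak=False):
    # colors: vertex -> integer in {1, ..., k}; edges: undirected pairs; arcs: (tail, head) pairs.
    for u, v in edges:
        if colors[u] == colors[v]:               # edge endpoints must get different colors
            return False
    for u, v in arcs:
        if colors[u] > colors[v]:                # the tail may never exceed the head
            return False
        if not weak and colors[u] == colors[v]:  # strong coloring forbids equality on arcs
            return False
    return True

def chromatic_polynomial_value(vertices, edges, arcs, k, weak=False):
    # Counts proper k-colorings by enumerating all k^|V| assignments (small graphs only).
    return sum(
        is_proper_coloring(dict(zip(vertices, assignment)), edges, arcs, weak)
        for assignment in product(range(1, k + 1), repeat=len(vertices))
    )

# The example from the text: edge uv and arc (v, w), colors drawn from {1, 2, 3}.
edges, arcs = [("u", "v")], [("v", "w")]
print(is_proper_coloring({"u": 1, "v": 2, "w": 3}, edges, arcs))             # strong: True
print(is_proper_coloring({"u": 1, "v": 2, "w": 2}, edges, arcs, weak=True))  # weak: True
print(chromatic_polynomial_value(("u", "v", "w"), edges, arcs, 3, weak=True))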
=== Computing weak chromatic polynomials === The deletion–contraction method can be used to compute weak chromatic polynomials of mixed graphs. This method involves deleting (i.e., removing) an edge or arc and possibly joining the remaining vertices incident to that edge or arc to form one vertex. After deleting an edge e from a mixed graph G = (V, E, A) we obtain the mixed graph (V, E – e, A). We denote this deletion of the edge e by G – e. Similarly, by deleting an arc a from a mixed graph, we obtain (V, E, A – a), where we denote the deletion of a by G – a. Also, we denote the contraction of e and a by G/e and G/a, respectively. From propositions given in Beck et al. we obtain the following equations to compute the chromatic polynomial of a mixed graph: χ_G(k) = χ_{G−e}(k) − χ_{G/e}(k), and χ_G(k) = χ_{G−a}(k) + χ_{G/a}(k) − χ_{G_a}(k). == Applications == === Scheduling problem === Mixed graphs may be used to model job shop scheduling problems in which a collection of tasks is to be performed, subject to certain timing constraints. In this sort of problem, undirected edges may be used to model a constraint that two tasks are incompatible (they cannot be performed simultaneously). Directed edges may be used to model precedence constraints, in which one task must be performed before another. A graph defined in this way from a scheduling problem is called a disjunctive graph. The mixed graph coloring problem can be used to find a schedule of minimum length for performing all the tasks. === Bayesian inference === Mixed graphs are also used as graphical models for Bayesian inference. In this context, an acyclic mixed graph (one with no cycles of directed edges) is also called a chain graph. The directed edges of these graphs are used to indicate a causal connection between two events, in which the outcome of the first event influences the probability of the second event. Undirected edges, instead, indicate a non-causal correlation between two events. A connected component of the undirected subgraph of a chain graph is called a chain. A chain graph may be transformed into an undirected graph by constructing its moral graph, an undirected graph formed from the chain graph by adding undirected edges between pairs of vertices that have outgoing edges to the same chain, and then forgetting the orientations of the directed edges. == Notes == == References == Beck, M.; Blado, D.; Crawford, J.; Jean-Louis, T.; Young, M. (2013), "On weak chromatic polynomials of mixed graphs", Graphs and Combinatorics, 31: 91–98, arXiv:1210.4634, doi:10.1007/s00373-013-1381-1. Cowell, Robert G.; Dawid, A. Philip; Lauritzen, Steffen L.; Spiegelhalter, David J. (1999), Probabilistic Networks and Expert Systems: Exact Computational Methods for Bayesian Networks, Springer-Verlag New York, p. 27, doi:10.1007/0-387-22630-3, ISBN 0-387-98767-3. Hansen, Pierre; Kuplinsky, Julio; de Werra, Dominique (1997), "Mixed graph colorings", Mathematical Methods of Operations Research, 45 (1): 145–160, doi:10.1007/BF01194253, MR 1435900. Ries, B. (2007), "Coloring some classes of mixed graphs", Discrete Applied Mathematics, 155 (1): 1–6, doi:10.1016/j.dam.2006.05.004, MR 2281351. == External links == Weisstein, Eric W. "Mixed Graph". MathWorld.
Wikipedia/Chain_graph
The junction tree algorithm (also known as 'Clique Tree') is a method used in machine learning to perform exact marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction tree. The structure is called a tree because the clusters of variables that form its nodes are connected without cycles. The basic premise is to eliminate cycles by clustering them into single nodes. Multiple classes of queries can be compiled at the same time into larger data structures. There are different algorithms to meet specific needs, depending on what needs to be calculated. Inference algorithms incorporate new developments in the data and update the calculations based on the new information provided. == Junction tree algorithm == === Hugin algorithm === (1) If the graph is directed, then moralize it to make it undirected. (2) Introduce the evidence. (3) Triangulate the graph to make it chordal. (4) Construct a junction tree from the triangulated graph (we will call the vertices of the junction tree "supernodes"). (5) Propagate the probabilities along the junction tree (via belief propagation). Note that this last step is inefficient for graphs of large treewidth. Computing the messages to pass between supernodes involves doing exact marginalization over the variables in both supernodes. Performing this algorithm for a graph with treewidth k will thus have at least one computation which takes time exponential in k. It is a message passing algorithm. The Hugin algorithm takes fewer computations to find a solution compared to Shafer-Shenoy. === Shafer-Shenoy algorithm === The Shafer-Shenoy algorithm is computed recursively; multiple recursions of the Shafer-Shenoy algorithm result in the Hugin algorithm. It is found by the message passing equation, and separator potentials are not stored. The Shafer-Shenoy algorithm is the sum-product algorithm of a junction tree. It is used because it runs programs and queries more efficiently than the Hugin algorithm. The algorithm makes calculations of conditionals for belief functions possible. Joint distributions are needed to make local computations happen. === Underlying theory === The first step concerns only Bayesian networks, and is a procedure to turn a directed graph into an undirected one. We do this because it allows for the universal applicability of the algorithm, regardless of direction. The second step is setting variables to their observed value. This is usually needed when we want to calculate conditional probabilities, so we fix the value of the random variables we condition on. Those variables are also said to be clamped to their particular value. The third step is to ensure that graphs are made chordal if they aren't already chordal. This is the first essential step of the algorithm. It makes use of the following theorem: Theorem: For an undirected graph, G, the following properties are equivalent: Graph G is triangulated. The clique graph of G has a junction tree. There is an elimination ordering for G that does not lead to any added edges. Thus, by triangulating a graph, we make sure that the corresponding junction tree exists. A usual way to do this is to decide an elimination order for its nodes, and then run the variable elimination algorithm. The variable elimination algorithm must be run anew each time there is a different query. This will result in adding more edges to the initial graph, in such a way that the output will be a chordal graph. All chordal graphs have a junction tree. The next step is to construct the junction tree.
To do so, we use the graph from the previous step, and form its corresponding clique graph. Now the next theorem gives us a way to find a junction tree: Theorem: Given a triangulated graph, weight the edges of the clique graph by their cardinality, |A∩B|, of the intersection of the adjacent cliques A and B. Then any maximum-weight spanning tree of the clique graph is a junction tree. So, to construct a junction tree we just have to extract a maximum weight spanning tree out of the clique graph. This can be efficiently done by, for example, modifying Kruskal's algorithm. The last step is to apply belief propagation to the obtained junction tree. Usage: A junction tree graph is used to visualize the probabilities of the problem. The tree can become a binary tree to form the actual building of the tree. A specific use could be found in auto encoders, which combine the graph and a passing network on a large scale automatically. === Inference Algorithms === Loopy belief propagation: A different method of interpreting complex graphs. The loopy belief propagation is used when an approximate solution is needed instead of the exact solution. It is an approximate inference. Cutset conditioning: Used with smaller sets of variables. Cutset conditioning allows for simpler graphs that are easier to read but are not exact. == References == == Further reading == Lauritzen, Steffen L.; Spiegelhalter, David J. (1988). "Local Computations with Probabilities on Graphical Structures and their Application to Expert Systems". Journal of the Royal Statistical Society. Series B (Methodological). 50 (2): 157–224. doi:10.1111/j.2517-6161.1988.tb01721.x. JSTOR 2345762. MR 0964177. Dawid, A. P. (1992). "Applications of a general propagation algorithm for probabilistic expert systems". Statistics and Computing. 2 (1): 25–26. doi:10.1007/BF01890546. S2CID 61247712. Huang, Cecil; Darwiche, Adnan (1996). "Inference in Belief Networks: A Procedural Guide". International Journal of Approximate Reasoning. 15 (3): 225–263. CiteSeerX 10.1.1.47.3279. doi:10.1016/S0888-613X(96)00069-2. Lepar, V., Shenoy, P. (1998). "A Comparison of Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer Architectures for Computing Marginals of Probability Distributions." https://arxiv.org/ftp/arxiv/papers/1301/1301.7394.pdf
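As a concrete illustration of the construction just described (maximal cliques of a triangulated graph, a clique graph whose edges are weighted by the size of the clique intersections, and a maximum-weight spanning tree), the following sketch assumes the networkx library and uses a small hypothetical chordal graph:

import itertools
import networkx as nx

# A small chordal (triangulated) graph; its maximal cliques are {A, B, C} and {B, C, D}.
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")])
assert nx.is_chordal(G)

# Clique graph: one supernode per maximal clique, edges weighted by |A ∩ B|.
cliques = [frozenset(c) for c in nx.find_cliques(G)]
clique_graph = nx.Graph()
clique_graph.add_nodes_from(cliques)
for c1, c2 in itertools.combinations(cliques, 2):
    weight = len(c1 & c2)
    if weight > 0:
        clique_graph.add_edge(c1, c2, weight=weight)

# Any maximum-weight spanning tree of the clique graph is a junction tree.
junction_tree = nx.maximum_spanning_tree(clique_graph, weight="weight")
print([tuple(sorted(c)) for c in junction_tree.nodes])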
Wikipedia/Junction_tree_algorithm
Vaccine storage relates to the proper vaccine storage and handling practices from manufacture to administration in people. The general standard is the 2–8 °C cold chain for vaccine storage and transportation. This is used for all current US Food and Drug Administration (FDA)-licensed human vaccines and in low and middle-income countries. Exceptions include some vaccines for smallpox, chickenpox, shingles and one of the measles, mumps, and rubella II vaccines, which are transported between −25 °C and −15 °C. Some vaccines, such as the COVID-19 vaccine, require a cooler temperature between −80 °C and −60 °C for storage. In 1996, the World Health Organization (WHO) decided to spread vaccines worldwide. This urged researchers to design storage for vaccines that preserves their potency. Since then, the production of vaccines has spiked, and various kinds of vaccines have their own handling practices. WHO has set standards to ensure the cold chain and uses different types of storage, including refrigerators, freezers, cold boxes, and vaccine carriers. Different types of thermometers are also used because a slight temperature change could result in loss of potency. The storage equipment is necessary to improve vaccine shelf life and to transport vaccines worldwide. == History == Vaccine storage was first developed in the early 1960s, during outbreaks of the infectious disease smallpox. During this time, vaccine technology was available and offered protection. Since smallpox has been one of the deadliest diseases known, the World Health Organization (WHO) prepared to launch a campaign to spread the vaccines and end smallpox in 1966. It was not until 1974 that WHO first introduced the Expanded Programme on Immunization (EPI). The main goal was to make immunization available to every child worldwide by 1990. Vaccines against six illnesses were transported: tuberculosis, diphtheria, pertussis, tetanus, measles, and polio. Dr. Rafe Henderson, the first director of EPI, designed a plan to deliver temperature-sensitive vaccines safely across dozens of countries. It was an important step in ensuring that the vaccines were maintained in their required conditions, and it guided the development of the cold chain. The WHO supported countries worldwide to ensure the vaccine cold chain is maintained. The cold chain has been implemented for years. After EPI was initiated, over 700,000 measles deaths were prevented, and millions of cases of the target diseases have been prevented. There have been huge milestones in the vaccine industry as scientists create more vaccines for new types of disease. This has a direct impact on the cost of transportation and on the different kinds of refrigerated storage required, either at +2° to +8 °C or +20° to +25 °C. This urged EPI to create a strategy encompassing both vaccines and medicines, so that products can sustain their components without the need for cold storage. The term 'cold chain' has now been replaced with 'supply chain'. The current vaccine cold chain system still continues to be used for delivering particular vaccines. WHO has made improvements by introducing the "controlled temperature chain" (CTC), which is an innovative approach allowing the vaccine to be taken out of the cold chain for a limited period of time, but CTC is still in the development process and will not be available for all vaccines for many years. Nowadays, engineers are still looking for a way to eliminate refrigeration at +2° to +8 °C from the entire supply chain for all vaccines.
With initiatives to reduce the temperature sensitivity of vaccines and with regulatory permits, the need for refrigeration in the supply chain could be eliminated. This would be suitable for less developed countries, as less handling of vaccines would need to be done. == Recommended storage temperature == The cold chain has been one of the most reliable supply chains for transporting vaccines around the globe. Since vaccines are sensitive biological products, proper storage and handling of vaccines are important to ensure the potency of vaccines is not lost. Vaccines must be continuously monitored as each has different reactivity to low temperature, high temperature, and light. The majority of vaccines require a storage temperature of +35° to +46 °F (+2° to +8 °C) and must not be exposed to freezing temperatures. Temperatures that are too cold can result in an irreversible reaction that reduces vaccine potency and a loss of adjuvant effect. Certain vaccines contain adjuvants (aluminum salts) that will precipitate when exposed to freezing temperatures. Temperatures that are too hot can also result in the wanted viruses permanently degrading and losing potency. However, the effects are usually smaller, more gradual, and more predictable than those from freezing temperatures. A decrease in vaccine potency after exposure to undesirable temperatures is not necessarily accompanied by visible signs of physical change. == Vaccine storage and handling requirements == Health facilities use storage called purpose-built units (also referred to as pharmaceutical-grade units). These refrigerators or freezers are specifically designed for the storage of biologics, including vaccines. These units differ from standard household-grade units since they have microprocessor-based temperature control with a digital temperature sensor (thermistor, thermocouple, or resistance temperature detector), and fan-forced air circulation to promote uniform temperature around the unit. These units are usually stand-alone refrigerators or freezers because they perform better at keeping the temperature constant. A household-grade refrigerator can also be an acceptable alternative to purpose-built units. However, the freezer compartment of this type is not recommended for storing vaccines, and vaccines should be stored centrally inside the refrigerator. Many combination units cool the refrigerator using air from the freezer, resulting in different temperature zones inside the fridge. Placing vaccines near the cold-air output from the freezer could expose them to temperatures that are too low, and placing them at the very bottom could expose them to temperatures that are too high. It is important not to place vaccines near the storage unit doors because doing so affects the temperature and exposes vaccines to light, reducing potency for some vaccines. == Types of vaccine storage == WHO has set standards to ensure that cold chain equipment can sustain different vaccines in health facilities. === Refrigerators === Refrigerators are the most common type of storage in health facilities as they can hold many vaccines in one single unit. This storage helps temperature-sensitive vaccines retain their components, as the surrounding area will always remain between +2° and +8 °C. In developed countries, electric refrigerators (compression units) are widely used, as there is an electricity supply for at least 8 hours per day. If a country does not have sufficient electricity, solar energy refrigerators (photovoltaic units) or bottled gas/kerosene refrigerators (absorption units) are also reliable.
It is important to keep the desired temperature in any of the models under all circumstances, and it should not be changed. === Freezers === Freezers act in the same way as refrigerators but at extreme temperatures. Their minimum temperature depends on the manufacturer. Typically this storage is used to store frozen vaccines and maintains a temperature between −80 °C and −15 °C. Health facilities use purpose-built or pharmaceutical-grade units, which vary in size. Dippin' Dots, a manufacturer of frozen desserts, had previously created equipment to preserve its products. This equipment was subsequently utilized by developers of the COVID-19 vaccine for transportation and storage of the vaccine. === Cold boxes === Cold boxes are typically used to carry vaccines around an area. A cold box is a self-supporting container with insulation and ice packs surrounding the interior to keep vaccines at low temperatures. Unlike the refrigerator, the cold box can maintain temperatures below +10 °C only for a limited time, normally 48–96 hours. It comes in many different types and shapes, and this storage is very useful for the transportation of vaccines into or out of a health facility. === Vaccine carriers === Vaccine carriers are similar to cold boxes, but they are smaller and easier to carry around. This small carrier is also packed with ice packs to keep the vaccine at a low temperature. However, they do not stay cold for as long as cold boxes, at most 36–48 hours. They are generally used for transporting vaccines from a health facility to outreach sites. === Water packs === Water packs are flat, leak-proof plastic containers used in the interiors of cold boxes and vaccine carriers. These containers are set to the appropriate temperature depending on the type of vaccine being transported. The temperature can range from −10° to +24 °C and does not last long before returning to the same temperature as the surroundings. === Foam pads === Foam pads are used to cover the lid of cold boxes and vaccine carriers, protecting the vaccine vials from damage during transportation and from external heat. A foam pad is a soft sponge that ensures the vials stay in place and prolongs the desired temperature inside the container. == Temperature monitoring == Temperature plays a crucial part in maintaining the potency of vaccines. Although the risk of storage cooler malfunction is low, it is better to check than to have to replace vaccines wasted due to loss of potency. Temperature monitoring needs to take place in both storage units and transport units. The refrigerator should maintain a temperature between 2° and 8 °C (36° and 46 °F). Freezers should maintain a temperature between −50° and −15 °C (−58° and +5 °F). Thermometers are used to monitor the temperature and are placed at the storage unit's central location, adjacent to the vaccines. Every vaccine storage unit must have a temperature monitoring device. There are many different thermometers, including standard fluid-filled, min-max, and continuous temperature monitoring devices. Each type of thermometer has its advantages and disadvantages. Health facilities use the digital data logger (DDL) as their temperature monitoring device. This continuous temperature monitoring device uses a buffered temperature probe, the most accurate way to measure actual vaccine temperature. The DDL also includes details on how long a unit has been operating outside the temperature range and records all temperatures at preset intervals.
Temperature probes are also designed to prevent false readings by protecting the thermometer from sudden temperature changes when a refrigerator door is opened. == Applications == Breakthroughs in vaccines have changed the health industry, and numerous vaccines are still being developed nowadays. Each type of vaccine has its own standard for keeping the wanted components intact. Due to the abundant number of vaccines, pharmaceutical manufacturers combine two or more vaccines to save time. The recommended storage temperature of these combination vaccines might change, reflecting the stability of each component vaccine. == See also == Vaccine Medicine Cold chain Supply chain Immunization Refrigerators Temperature Thermometer Vaccine cooler == References ==
Wikipedia/Vaccine_storage
A vaccine dose contains many ingredients (such as stabilizers, adjuvants, residual inactivating ingredients, residual cell culture materials, residual antibiotics and preservatives) very little of which is the active ingredient, the immunogen. A single dose may have merely nanograms of virus particles, or micrograms of bacterial polysaccharides. A vaccine injection, oral drops or nasal spray is mostly water. Other ingredients are added to boost the immune response, to ensure safety or help with storage, and a tiny amount of material is left-over from the manufacturing process. Very rarely, these materials can cause an allergic reaction in people who are very sensitive to them. == Volume == The volume of a vaccine dose is influenced by the route of administration. While some vaccines are given orally or nasally, most require an injection. Vaccines are not injected intravenously into the bloodstream. Most injections deposit a small dose into a muscle, but some are given superficially just under the skin surface or deeper beneath the skin. Fluenz Tetra, a live flu vaccine for children, is administered nasally with 0.1ml of liquid sprayed into each nostril. The live typhoid vaccine, Vivotif, and a live adenovirus vaccine, licensed only for military use, both come as hard gastro-resistant tablets. The Sabin oral live polio vaccine is taken as two 0.05ml drops of a bitter salty liquid that was historically added to sugar cubes when given to young children. Rotarix, a live rotavirus vaccine, has about 1.5ml of liquid containing 1g of sugar to make it taste better. The Dukoral cholera vaccine comes as a 3ml suspension along with 5.6g of effervescent granules, which are mixed and added to around 150ml water to make a sweet raspberry flavoured drink. At the other end of the volume scale, the smallpox vaccine is a minuscule 0.0025ml droplet that is picked up when a bifurcated needle is dipped into a vial containing around 100 doses. This needle is pricked 15 times into a small area of skin, just firmly enough to produce a drop of blood. A little larger is the BCG tuberculosis vaccine, which is 0.05ml for babies and children under 12, and 0.1ml for others. This tiny dose is inserted a couple of millimetres under the skin, producing a small blanched blister. Many vaccines for intramuscular injection have 0.5ml liquid, though a few have 1ml. Some vaccines come with the active ingredients already suspended in solution and the syringe pre-filled (e.g., Bexsero meningococcal Group B vaccine). Others are supplied as a vial of freeze-dried powder, which is reconstituted prior to administration using a dilutant from a separate vial or pre-filled syringe (e.g., MMR vaccine). Infanrix hexa, the 6-in-1 vaccine that protects against six diseases, uses a combination approach: the Hib vaccine in the powder and DTPa-HBV-IPV in suspension. Alternatively two separate vaccine solutions are mixed just before administration (ViATIM hepatitis A and typhoid vaccine). == Immunogens == Many vaccines developed in the 20th century contain whole bacteria or viruses, which are either inactivated (killed), attenuated (weakened) or a strain chosen to be harmless in humans. Since these are so small, even a tiny amount of them contains a huge number of individuals. With bacterial vaccines, we can enumerate this with an approximate number of bacteria cells. The live typhoid vaccine contains two billion viable cells of Salmonella enterica subsp. enterica serovar Typhi, which have been attenuated and cannot cause disease. 
The cholera vaccine has over thirty billion of each of four strains of Vibrio cholerae, which are inactivated by heat or formalin. The BCG vaccine, infant dose, contains between 100,000 and 400,000 colony-forming units of live attenuated Mycobacterium bovis. One way to count viruses is to observe their impact on host cells in tissue cultures. The two tablets of adenovirus vaccine, one with adenovirus type 4 and the other with type 7, each contain 32,000 tissue-culture infective doses (10^4.5 TCID50). The current live polio vaccine contains two serotypes of poliovirus: over 1 million tissue-culture infective doses (10^6 TCID50) of type 1 and over 630,000 (10^5.8 TCID50) of type 3. The smallpox vaccine contains between 250,000 and 1,250,000 plaque forming units of live vaccinia virus per dose. The MMR vaccine contains 1,000 TCID50 measles, 12,500 TCID50 mumps and 1,000 TCID50 rubella live attenuated viruses. Many modern vaccines are made of only the parts of the pathogen necessary to invoke an immune response (a subunit vaccine) – for example just the surface proteins of the virus, or only the polysaccharide coating of a bacterium. Some vaccines invoke an immune response against the toxin produced by bacteria, rather than the bacteria itself. These toxoid vaccines are used against tetanus, diphtheria and pertussis (whooping cough). If the bacterial polysaccharide coating produces only a weak immune response on its own, it may be combined with (carried on) a protein that does provoke a strong response, which in turn improves the response to the weaker component. Such conjugate vaccines may make use of a toxoid as the carrier protein. For all these, the quantity of immunogen is given by weight and sometimes expressed as international units (IU). The HPV vaccine contains 120 micrograms of the L1 capsid proteins from four types of human papillomavirus. The pneumococcal conjugate vaccine contains 32 micrograms of pneumococcal polysaccharide conjugated with CRM197 (a diphtheria toxin). Another variant is the RNA vaccine, which contains mRNA embedded in lipid (fat) nanoparticles. The mRNA instructs the body's own cell machinery to produce the proteins that stimulate the immune response. Comirnaty, the Pfizer-BioNTech COVID-19 vaccine, contains thirty micrograms of BNT162b2 RNA. == Excipients == Excipients are substances present in the vaccine that are not the principal immunological agents. These may be present to enhance the vaccine's potency, ensure safety, aid with storage or are left over from the manufacturing process. === Adjuvants === Live vaccines produce a strong immune response that lasts a long time, but they are not suitable for people with weakened immune systems. Other kinds of vaccine, where the pathogen has been inactivated or that contain only part of the pathogen, often alone produce a weaker response and require booster doses. In these vaccines, a substance called an adjuvant is added to make the immune response stronger and longer lasting. The most commonly used adjuvants are aluminium salts such as aluminium hydroxide, aluminium phosphate or potassium aluminium sulphate (also simply called alum). These aluminium salts can be responsible for soreness and redness at the vaccination site but do not cause any long-term harm to human health. The amount of aluminium in these vaccines ranges from 0.125 milligrams in the pneumococcal conjugate vaccine to 0.82 milligrams in the 6-in-1 vaccine.
The Meningococcal Group B vaccine contains 0.5 milligrams and in the UK Immunisation Schedule is given at the same time as the 6-in-1 vaccine at eight and sixteen weeks, giving a combined dose of 1.32 milligrams of aluminium. Aluminium salts are commonly and naturally consumed in small quantities, and the quantity in this combined vaccine dose is lower than the weekly safe intake level. Vaccines containing aluminium adjuvants cannot be frozen or allowed to freeze accidentally in a refrigerator, as this causes the particles to coagulate and damages the antigen. Another adjuvant used in some flu vaccines is an oil-in-water emulsion. The oil, squalene, is found in all plant and animal cells, and is commercially extracted and purified from shark liver. The flu vaccine for older adults, Fluad, uses an adjuvant branded MF59, which has squalene (9.75 milligrams), citric acid (0.04 milligrams) and three emulsifiers: polysorbate 80, sorbitan trioleate, sodium citrate (1.175, 1.175 and 0.66 milligrams respectively). The H1N1 swine-flu vaccine, Pandemrix, used the adjuvant branded AS03, which has squalene (10.69 milligrams), DL-α-tocopherol (11.86 milligrams) and polysorbate 80 (4.86 milligrams) === Preservatives === Preservatives prevent the growth of bacteria and fungi, and are more commonly used in vaccines produced as multi-dose vials. They must also be non-toxic in the dose used and not adversely affect the immunogenicity of the vaccine. Thiomersal is the best known and most controversial preservative. It was phased out of UK vaccines between 2003 and 2005 and is not used in any routine vaccines in the UK. As a precaution, the US and Europe have also removed thiomersal from vaccines, despite there being no evidence of harm. The US-licensed vaccines in the routine paediatric schedule generally have no thiomersal at all; a few have only a trace amount as a residual from manufacturing (less than one microgram). This is also the case for influenza vaccines in the US that come in single-dose vials or prefilled syringes. Some influenza vaccines are also available as a multi-dose vial, and in that form contain thiomersal (24.5 micrograms of mercury). Phenol 0.25% v/v is used in Pneumovax 23, a pneumococcal polysaccharide vaccine, and in the smallpox vaccine. However, phenol reduces the potency of diphtheria and tetanus toxoid-containing vaccines. Similarly, thiomersal weakens the immunogenicity of the inactivated poliovirus vaccine, so the IPOL vaccine contains 2–3 microlitres of 2-phenoxyethanol instead. === Stabilisers === Stabilisers protect the vaccine from the effects of temperature and ensure it does not degrade in storage. For vaccines that are freeze-dried, they provide a necessary bulk. Without them, the vaccine powder would be invisibly tiny (ranging from nanograms to a few tens of micrograms) and stick to the vial glass. Stabilisers used for vaccines include sugars (sucrose, lactose), sorbitol, amino acids (glycine, monosodium glutamate) and proteins (hydrolysed gelatin). There have very rarely (one in two million vaccinations) been cases of allergic reaction to the proteins in gelatin. The source of gelatin, pork, is of religious concern to Jewish and Muslim communities, though some leaders have ruled this is not a cause to reject vaccines that are injected or inhaled rather than ingested. There are alternatives for some vaccines that contain gelatine. Acidity regulators such as phosphate salts keep the pH within a required range during manufacture and in the final product. 
Other salts help ensure the vaccine is isotonic with body fluids. === Manufacturing residuals === There are materials that serve no function in the final vaccine but are left over from the manufacturing process. Bacteria and viruses may be inactivated using formaldehyde. The quantity remaining in diphtheria or tetanus toxoid vaccines licensed in the US is required to be less than 0.1 milligrams (0.02%). Although formaldehyde has potentially toxic and carcinogenic properties in large doses, it is present in the blood (due to natural biochemical processes) at much higher concentrations than permitted in vaccines. Alternatives used in some vaccines include glutaraldehyde and β-propiolactone. Antibiotics may be used to prevent bacteria from growing during vaccine manufacture, and traces of these may remain. Antibiotics that some people are allergic to (such as cephalosporins, penicillins and sulphonamides) are not used. Those that are used include kanamycin, gentamicin, neomycin, polymyxin B, and streptomycin. Small amounts of protein may remain from the material used to grow viruses, and some people may be hypersensitive to these. Some influenza and yellow fever vaccines are grown in chicken eggs, and measles or mumps vaccines may be grown in chick embryo cell culture. Engerix-B, a recombinant DNA vaccine for hepatitis B, is produced in yeast and may contain up to five percent yeast protein. Cervarix, an HPV vaccine, is grown in a cell line from the cabbage looper moth. The amount of insect protein remaining is less than forty nanograms. Some components of the vaccine vial or syringe may contain latex rubber. This is a problem for those with a severe allergic reaction to latex, but not for those who get contact dermatitis after wearing latex gloves. == Notes == == References == === Works cited === == External links == Vaccine ingredients from the Oxford Vaccine Group. Vaccine Excipient Summary from the Centers for Disease Control and Prevention (CDC). Vaccine ingredients from Full Fact.
Wikipedia/Vaccine_ingredients
Malaria vaccines are vaccines that prevent malaria, a mosquito-borne infectious disease which affected an estimated 249 million people globally in 85 malaria-endemic countries and areas and caused 608,000 deaths in 2022. The first approved vaccine for malaria is RTS,S, known by the brand name Mosquirix. As of April 2023, the vaccine has been given to 1.5 million children living in areas with moderate-to-high malaria transmission. It requires at least three doses in infants by age 2, and a fourth dose extends the protection for another 1–2 years. The vaccine reduces hospital admissions from severe malaria by around 30%. Research continues with other malaria vaccines. The most effective malaria vaccine is the R21/Matrix-M, with a 77% efficacy rate shown in initial trials and significantly higher antibody levels than with the RTS,S vaccine. It is the first vaccine that meets the World Health Organization's (WHO) goal of a malaria vaccine with at least 75% efficacy, and only the second malaria vaccine to be recommended by the WHO. In April 2023, Ghana's Food and Drugs Authority approved the use of the R21 vaccine for use in children aged between five months and three years old. Following Ghana's decision, Nigeria provisionally approved the R21 vaccine. == Approved vaccines == === RTS,S === RTS,S/AS01 (brand name Mosquirix) is the first malaria vaccine approved for public use. It requires at least three doses in infants by age 2, with a fourth dose extending the protection for another 1–2 years. The vaccine reduces hospital admissions from severe malaria by around 30%. RTS,S was developed by PATH Malaria Vaccine Initiative (MVI) and GlaxoSmithKline (GSK) with support from the Bill and Melinda Gates Foundation. It is a recombinant vaccine, consisting of the Plasmodium falciparum circumsporozoite protein (CSP) from the pre-erythrocytic stage. The CSP antigen causes the production of antibodies capable of preventing the invasion of hepatocytes and also elicits a cellular response enabling the destruction of infected hepatocytes. The CSP vaccine presented problems in the trial stage due to its poor immunogenicity. RTS,S attempted to avoid these by fusing the protein with a surface antigen from hepatitis B virus, creating a more potent and immunogenic vaccine. When tested in trials as an emulsion of oil in water and with the added adjuvants of monophosphoryl A and QS21 (SBAS2), the vaccine gave protective immunity to 7 out of 8 volunteers when challenged with P. falciparum. RTS,S was engineered using genes from the outer protein of P. falciparum malaria parasite and a portion of a hepatitis B virus plus a chemical adjuvant to boost the immune response. Infection is prevented by inducing high antibody titers that block the parasite from infecting the liver. In November 2012, a Phase III trial of RTS,S found that it provided modest protection against both clinical and severe malaria in young infants. In October 2013, preliminary results of a phase III clinical trial indicated that RTS,S/AS01 reduced the number of cases among young children by almost 50 percent and among infants by around 25 percent. The study ended in 2014. The effects of a booster dose were positive, even though overall efficacy seems to wane with time. After four years, reductions were 36 percent for children who received three shots and a booster dose. Missing the booster dose reduced the efficacy against severe malaria to a negligible effect. The vaccine was shown to be less effective for infants. 
Three doses of vaccine plus a booster reduced the risk of clinical episodes by 26 percent over three years but offered no significant protection against severe malaria. In a bid to accommodate a larger group and guarantee sustained availability for the general public, GSK applied for a marketing license with the European Medicines Agency (EMA) in July 2014. GSK treated the project as a non-profit initiative, with most funding coming from the Gates Foundation, a major contributor to malaria eradication. In July 2015, Mosquirix received a positive scientific opinion from the EMA on the proposal for the vaccine to be used to vaccinate children aged 6 weeks to 17 months outside the European Union. A pilot project for vaccination was launched on 23 April 2019 in Malawi, on 30 April 2019 in Ghana, and on 13 September 2019 in Kenya. In October 2021, the vaccine was endorsed by the World Health Organization for "broad use" in children, making it the first malaria vaccine to receive this recommendation. The vaccine was prequalified by WHO in July 2022. In August 2022, UNICEF awarded a contract to GSK to supply 18 million doses of the RTS,S vaccine over three years. More than 30 countries have areas with moderate to high malaria transmission where the vaccine is expected to be useful. As of April 2023, 1.5 million children in Ghana, Kenya, and Malawi had received at least one injection of the vaccine, with more than 4.5 million doses of the vaccine administered through the countries' routine immunization programs. The next 9 countries to receive the vaccine over the next 2 years are Benin, Burkina Faso, Burundi, Cameroon, the Democratic Republic of the Congo, Liberia, Niger, Sierra Leone, and Uganda. === R21/Matrix-M === The most effective malaria vaccine is R21/Matrix-M, with 77% efficacy shown in initial trials. It is the first vaccine that meets the World Health Organization's goal of a malaria vaccine with at least 75% efficacy. It was developed through a collaboration involving the Jenner Institute at the University of Oxford, the Kenya Medical Research Institute, the London School of Hygiene and Tropical Medicine, Novavax, and the Serum Institute of India. The trials took place at the Institut de Recherche en Sciences de la Santé in Nanoro, Burkina Faso, with Halidou Tinto as the principal investigator. The R21 vaccine uses a circumsporozoite protein (CSP) antigen, at a higher proportion than the RTS,S vaccine. It uses the same HBsAg-linked recombinant structure but contains no excess HBsAg. It includes the Matrix-M adjuvant that is also utilized in the Novavax COVID-19 vaccine. A phase II trial was reported in April 2021, with a vaccine efficacy of 77% and antibody levels significantly higher than with the RTS,S vaccine. A booster shot of R21/Matrix-M that is given 12 months after the primary three-dose regimen maintains a high efficacy against malaria, providing high protection against symptomatic malaria for at least 2 years. A phase III trial with 4,800 children across four African countries was reported in November 2022, demonstrating vaccine efficacy of 74% against a severe malaria episode. Further data from multiple studies is being collected. As of April 2023, data from the phase III study had not been formally published, but late-stage data from the study was shared with regulatory authorities. Ghana's Food and Drugs Authority approved the R21 vaccine in April 2023 for use in children aged between five months and three years.
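The efficacy figures quoted above for RTS,S and R21 follow the standard epidemiological definition of vaccine efficacy, namely the proportional reduction in the attack rate (AR) among vaccinated participants relative to unvaccinated controls: VE = (AR_control − AR_vaccinated) / AR_control = 1 − RR, where RR is the relative risk. As a purely hypothetical illustration (the numbers below are not taken from the trial reports), if 10% of control children and 2.3% of vaccinated children developed clinical malaria over the same follow-up period, then VE = 1 − 0.023/0.10 = 0.77, i.e. 77% efficacy.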
The Serum Institute of India is preparing to produce between 100 and 200 million doses of the vaccine per year, and is constructing a vaccine factory in Accra, Ghana. Following Ghana's decision, Nigeria provisionally approved the R21 vaccine. In October 2023, the WHO endorsed the R21 vaccine against malaria, and at the end of December 2023 it was added to the WHO list of prequalified vaccines. A further vaccine candidate that targets the erythrocytic stage of the malaria parasite, provisionally named RH5.1/Matrix-M, is under development; it is hoped that it will be combined with the R21/Matrix-M pre-erythrocytic vaccine to create an even more efficacious second-generation malaria vaccine. == Agents under development == A completely effective vaccine is not available for malaria, although several vaccines are under development. Multiple vaccine candidates targeting the blood-stage of the parasite's lifecycle have been insufficient on their own. Several potential vaccines targeting the pre-erythrocytic stage are being developed, with RTS,S and R21/Matrix-M being the two approved options so far. === Nanoparticle enhancement of RTS,S === In 2015, researchers used a repetitive antigen display technology to engineer a nanoparticle that displayed malaria-specific B cell and T cell epitopes. The particle exhibited icosahedral symmetry and carried on its surface up to 60 copies of the RTS,S protein. The researchers claimed that the density of the protein was much higher than the 14% of the GSK vaccine. === PfSPZ vaccine === The PfSPZ vaccine is a candidate malaria vaccine developed by Sanaria using radiation-attenuated sporozoites to elicit an immune response. Clinical trials have been promising, with trials in Africa, Europe, and the US protecting over 80% of volunteers. It has been subject to some criticism regarding the ultimate feasibility of large-scale production and delivery in Africa, since it must be stored in liquid nitrogen. The PfSPZ vaccine candidate was granted fast track designation by the U.S. Food and Drug Administration in September 2016. In April 2019, a phase III trial in Bioko was announced, scheduled to start in early 2020. === Other developments === SPf66 is a synthetic peptide-based vaccine developed by the Manuel Elkin Patarroyo team in Colombia and was tested extensively in endemic areas in the 1990s. Clinical trials showed it to be insufficiently effective, with 28% efficacy in South America and minimal or no efficacy in Africa. This vaccine had no protective effect in the largest placebo-controlled randomized trial in South East Asia and was abandoned. The CSP (circumsporozoite protein) vaccine was another candidate that initially appeared promising enough to undergo trials. It is also based on the circumsporozoite protein but additionally has the recombinant (Asn-Ala-Pro15Asn-Val-Asp-Pro)2-Leu-Arg(R32LR) protein covalently bound to a purified Pseudomonas aeruginosa toxin (A9). However, at an early stage, a complete lack of protective immunity was demonstrated in those inoculated. The study group used in Kenya had an 82% incidence of parasitaemia, while the control group only had an 89% incidence. The vaccine was intended to cause an increased T-lymphocyte response in those exposed; this was also not observed. The NYVAC-Pf7 multi-stage vaccine attempted to use different technology, incorporating seven P. falciparum antigenic genes. These came from a variety of stages during the lifecycle. CSP and sporozoite surface protein 2 (called PfSSP2) were derived from the sporozoite phase.
The liver stage antigen 1 (LSA1), three antigens from the erythrocytic stage (merozoite surface protein 1, serine repeat antigen, and AMA-1), and one sexual stage antigen (the 25-kDa Pfs25) were included. This was first investigated using rhesus monkeys and produced encouraging results: 4 out of the 7 antigens produced specific antibody responses (CSP, PfSSP2, MSP1 and Pfs25). Later trials in humans, despite demonstrating cellular immune responses in over 90% of the subjects, had very poor antibody responses. Despite this, following administration of the vaccine, some candidates had complete protection when challenged with P. falciparum. This result has warranted ongoing trials. In 1995, a field trial involving [NANP]19-5.1 proved to be very successful. Out of 194 children vaccinated, none developed symptomatic malaria in the 12-week follow-up period, and only 8 failed to have higher levels of antibody present. The vaccine consists of the schizont export protein (5.1) and 19 repeats of the sporozoite surface protein [NANP]. Limitations of the technology exist as it contains only 20% peptide and has low levels of immunogenicity. It also does not contain any immunodominant T-cell epitopes. Chemical compounds undergoing trials for the treatment of tuberculosis and cancer, the JmJc inhibitor ML324 and the antitubercular clinical candidate SQ109, are potentially a new line of drugs to treat malaria and kill the parasite in its infectious stage. More tests still need to be carried out before the compounds can be approved as a viable treatment. == Considerations == The task of developing a preventive vaccine for malaria is a complex process. There are a number of considerations to be made concerning what strategy a potential vaccine should adopt. === Parasite diversity === P. falciparum has demonstrated the capability, through the development of multiple drug-resistant parasites, for evolutionary change. The Plasmodium species has a very high rate of replication, much higher than that needed to ensure transmission in the parasite's lifecycle. This enables pharmaceutical treatments that are effective at reducing the reproduction rate, but not halting it, to exert a high selection pressure, thus favoring the development of resistance. The process of evolutionary change is one of the key considerations necessary when considering potential vaccine candidates. The development of resistance could cause a significant reduction in the efficacy of any potential vaccine, thus rendering useless a carefully developed and effective treatment. === Choosing to address the symptom or the source === The parasite induces two main response types from the human immune system. These are anti-parasitic immunity and anti-toxic immunity. "Anti-parasitic immunity" addresses the source; it consists of an antibody response (humoral immunity) and a cell-mediated immune response. Ideally, a vaccine would enable the development of anti-plasmodial antibodies in addition to generating an elevated cell-mediated response. Potential antigens against which a vaccine could be targeted will be discussed in greater depth later. Antibodies are part of the specific immune response. They exert their effect by activating the complement cascade and stimulating phagocytic cells into endocytosis through adhesion to the external surface of antigenic substances, thus 'marking' them as offensive.
Humoral or cell-mediated immunity consists of many interlinking mechanisms that essentially aim to prevent infection from entering the body (through external barriers or hostile internal environments) and then kill any microorganisms or foreign particles that succeed in penetration. The cell-mediated component consists of many white blood cells (such as monocytes, neutrophils, macrophages, lymphocytes, basophils, mast cells, natural killer cells, and eosinophils) that target foreign bodies by a variety of different mechanisms. In the case of malaria, both systems would be targeted to attempt to increase the potential response generated, thus ensuring the maximum chance of preventing disease. "Anti-toxic immunity" addresses the symptoms; it refers to the suppression of the immune response associated with the production of factors that either induce symptoms or reduce the effect that any toxic by-products (of micro-organism presence) have on the development of disease. For example, it has been shown that tumor necrosis factor-alpha has a central role in generating the symptoms experienced in severe P. falciparum malaria. Thus a therapeutic vaccine could target the production of TNF-a, preventing respiratory distress and cerebral symptoms. This approach has serious limitations as it would not reduce the parasitic load; rather, it only reduces the associated pathology. As a result, there are substantial difficulties in evaluating efficacy in human trials. Taking this information into consideration an ideal vaccine candidate would attempt to generate a more substantial cell-mediated and antibody response on parasite presentation. This would have the benefit of increasing the rate of parasite clearance, thus reducing the experienced symptoms and providing a level of consistent future immunity against the parasite. === Potential targets === By their very nature, protozoa are more complex organisms than bacteria and viruses, with more complicated structures and lifecycles. This presents problems in vaccine development but also increases the number of potential targets for a vaccine. These have been summarised into the lifecycle stage and the antibodies that could potentially elicit an immune response. The epidemiology of malaria varies enormously across the globe and has led to the belief that it may be necessary to adopt very different vaccine development strategies to target different populations. A Type 1 vaccine is suggested for those exposed mostly to P. falciparum malaria in sub-Saharan Africa, with the primary objective to reduce the number of severe malaria cases and deaths in infants and children exposed to high transmission rates. The Type 2 vaccine could be thought of as a 'travelers' vaccine,' aiming to prevent all clinical symptoms in individuals with no previous exposure. This is another major public health problem, with malaria presenting as one of the most substantial threats to travelers' health. Problems with the available pharmaceutical therapies include costs, availability, adverse effects, contraindications, inconvenience, and compliance, many of which would be reduced or eliminated if an effective (greater than 85–90%) vaccine was developed. The lifecycle of the malaria parasite is particularly complex, presenting initial developmental problems. Despite the huge number of vaccines available, none target parasitic infections. 
The distinct developmental stages involved in the lifecycle present numerous opportunities for targeting antigens, thus potentially eliciting an immune response. Theoretically, each developmental stage could have a vaccine developed specifically to target the parasite. Moreover, any vaccine produced would ideally have the ability to be of therapeutic value as well as preventing further transmission and is likely to consist of a combination of antigens from different phases of the parasite's development. More than 30 of these antigens are being researched by teams all over the world in the hope of identifying a combination that can elicit immunity in the inoculated individual. Some of the approaches involve surface expression of the antigen, inhibitory effects of specific antibodies on the lifecycle, and the protective effects through immunization or passive transfer of antibodies between an immune and a non-immune host. The majority of research into malarial vaccines has focused on the Plasmodium falciparum strain due to the high mortality caused by the parasite and the ease of carrying out in vitro/in vivo studies. The earliest vaccines attempted to use the parasitic circumsporozoite protein (CSP). This is the most dominant surface antigen of the initial pre-erythrocytic phase. However, problems were encountered due to low efficacy, reactogenicity and low immunogenicity. The initial stage in the lifecycle, following inoculation, is a relatively short "pre-erythrocytic" or "hepatic" phase. A vaccine at this stage must have the ability to protect against sporozoites invading and possibly inhibiting the development of parasites in the hepatocytes (through inducing cytotoxic T-lymphocytes that can destroy the infected liver cells). However, if any sporozoites evaded the immune system they would then have the potential to be symptomatic and cause the clinical disease. The second phase of the lifecycle is the "erythrocytic" or blood phase. A vaccine here could prevent merozoite multiplication or the invasion of red blood cells. This approach is complicated by the lack of MHC molecule expression on the surface of erythrocytes. Instead, malarial antigens are expressed, and it is this towards which the antibodies could potentially be directed. Another approach would be to attempt to block the process of erythrocyte adherence to blood vessel walls. It is thought that this process is accountable for much of the clinical syndrome associated with malarial infection; therefore, a vaccine given during this stage would be therapeutic and hence administered during clinical episodes to prevent further deterioration. The last phase of the lifecycle that has the potential to be targeted by a vaccine is the "sexual stage". This would not give any protective benefits to the individual inoculated but would prevent further transmission of the parasite by preventing the gametocytes from producing multiple sporozoites in the gut wall of the mosquito. It therefore would be used as part of a policy directed at eliminating the parasite from areas of low prevalence or to prevent the development and spread of vaccine-resistant parasites. This type of transmission-blocking vaccine is potentially very important. The evolution of resistance in the malaria parasite occurs very quickly, potentially making any vaccine redundant within a few generations. This approach to the prevention of spread is therefore essential. 
Another approach is to target the protein kinases, which are present during the entire lifecycle of the malaria parasite. Research is underway on this, yet production of an actual vaccine targeting these protein kinases may still take a long time. Report of a vaccine candidate capable of neutralizing all tested strains of Plasmodium falciparum, the most deadly form of the parasite causing malaria, was published in Nature Communications by a team of scientists from the University of Oxford in 2011. The viral vector vaccine, targeting a full-length P. falciparum reticulocyte-binding protein homologue 5 (PfRH5) was found to induce an antibody response in an animal model. The results of this new vaccine confirmed the utility of a key discovery reported by scientists at the Wellcome Trust Sanger Institute, published in Nature. The earlier publication reported P. falciparum relies on a red blood cell surface receptor, known as 'basigin', to invade the cells by binding a protein PfRH5 to the receptor. Unlike other antigens of the malaria parasite which are often genetically diverse, the PfRH5 antigen appears to have little genetic diversity. It was found to induce a very low antibody response in people naturally exposed to the parasite. The high susceptibility of PfRH5 to the cross-strain neutralizing vaccine-induced antibody demonstrated a significant promise for preventing malaria in the long and often difficult road of vaccine development. According to Professor Adrian Hill, a Wellcome Trust Senior Investigator at the University of Oxford, the next step would be the safety tests of this vaccine. At the time (2011) it was projected that if these proved successful, the clinical trials in patients could begin within two to three years. PfEMP1, one of the proteins known as variant surface antigens (VSAs) produced by Plasmodium falciparum, was found to be a key target of the immune system's response against the parasite. Studies of blood samples from 296 mostly Kenyan children by researchers of Burnet Institute and their cooperators showed that antibodies against PfEMP1 provide protective immunity, while antibodies developed against other surface antigens do not. Their results demonstrated that PfEMP1 could be a target for developing an effective vaccine that will reduce the risk of developing malaria. Plasmodium vivax is the common malaria species found in India, Southeast Asia, and South America. It can stay dormant in the liver and reemerge years later to elicit new infections. Two key proteins involved in the invasion of the red blood cells (RBC) by P. vivax are potential targets for drug or vaccine development. When the Duffy binding protein (DBP) of P. vivax binds the Duffy antigen (DARC) on the surface of the RBC, the process for the parasite to enter the RBC is initiated. Structures of the core region of DARC and the receptor binding pocket of DBP have been mapped by scientists at the Washington University in St. Louis. The researchers found that the binding is a two-step process that involves two copies of the parasite protein acting together like a pair of tongs that "clamp" two copies of DARC. Antibodies that interfere with the binding by either targeting the key region of the DARC or the DBP will prevent the infection. Antibodies against the Schizont Egress Antigen-1 (PfSEA-1) were found to disable the parasite's ability to rupture from the infected red blood cells (RBCs), thus preventing it from continuing with its lifecycle. 
Researchers from Rhode Island Hospital identified Plasmodium falciparum PfSEA-1, a 244 kd malaria antigen expressed in the schizont-infected RBCs. Mice vaccinated with the recombinant PfSEA-1 produced antibodies that interrupted the schizont rupture from the RBCs and decreased the parasite replication. The vaccine protected the mice from the lethal challenge of the parasite. Tanzanian and Kenyan children who have antibodies to PfSEA-1 were found to have fewer parasites in their bloodstream and a milder case of malaria. By blocking the schizont outlet, the PfSEA-1 vaccine may work synergistically with vaccines targeting the other stages of the malaria lifecycle such as hepatocyte and RBC invasion. === Mix of antigenic components === Increasing the potential immunity generated against Plasmodia can be achieved by attempting to target multiple phases in the lifecycle. This is additionally beneficial in reducing the possibility of resistant parasites developing. The use of multiple-parasite antigens can therefore have a synergistic or additive effect. One of the most successful vaccine candidates in clinical trials consists of recombinant antigenic proteins to the circumsporozoite protein. == History == Individuals who are exposed to the parasite in endemic countries develop acquired immunity against disease and death. Such immunity does not, however, prevent malarial infection; immune individuals often harbour asymptomatic parasites in their blood. This does, however, imply that it is possible to create an immune response that protects against the harmful effects of the parasite. Research shows that if immunoglobulin is taken from immune adults, purified, and then given to individuals who have no protective immunity, some protection can be gained. === Irradiated mosquitoes === In 1967, it was reported that a level of immunity to the Plasmodium berghei parasite could be given to mice by exposing them to sporozoites that had been irradiated by x-rays. Subsequent human studies in the 1970s showed that humans could be immunized against Plasmodium vivax and Plasmodium falciparum by exposing them to the bites of significant numbers of irradiated mosquitos. From 1989 to 1999, eleven volunteers recruited from the United States Public Health Service, United States Army, and United States Navy were immunized against Plasmodium falciparum by the bites of 1001–2927 mosquitoes that had been irradiated with 15,000 rads of gamma rays from a Co-60 or Cs-137 source. This level of radiation is sufficient to attenuate the malaria parasites so that, while they can still enter hepatic cells, they cannot develop into schizonts nor infect red blood cells. Over 42 weeks, 24 of 26 tests on the volunteers showed that they were protected from malaria. == References == == Further reading == == External links == "Malaria Vaccines". PubChem. U.S. National Library of Medicine. Malaria Vaccine Initiative Malaria vaccines UK Gates Foundation Global Health: Malaria
Wikipedia/Malaria_vaccine
An mRNA vaccine is a type of vaccine that uses a copy of a molecule called messenger RNA (mRNA) to produce an immune response. The vaccine delivers molecules of antigen-encoding mRNA into cells, which use the designed mRNA as a blueprint to build foreign protein that would normally be produced by a pathogen (such as a virus) or by a cancer cell. These protein molecules stimulate an adaptive immune response that teaches the body to identify and destroy the corresponding pathogen or cancer cells. The mRNA is delivered by a co-formulation of the RNA encapsulated in lipid nanoparticles that protect the RNA strands and help their absorption into the cells. Reactogenicity, the tendency of a vaccine to produce adverse reactions, is similar to that of conventional non-RNA vaccines. People susceptible to an autoimmune response may have an adverse reaction to messenger RNA vaccines. The advantages of mRNA vaccines over traditional vaccines are ease of design, speed and lower cost of production, the induction of both cellular and humoral immunity, and lack of interaction with the genomic DNA. While some messenger RNA vaccines, such as the Pfizer–BioNTech COVID-19 vaccine, have the disadvantage of requiring ultracold storage before distribution, other mRNA vaccines, such as the Moderna vaccine, do not have such requirements. In RNA therapeutics, messenger RNA vaccines have attracted considerable interest as COVID-19 vaccines. In December 2020, Pfizer–BioNTech and Moderna obtained authorization for their mRNA-based COVID-19 vaccines. On 2 December, the UK Medicines and Healthcare products Regulatory Agency (MHRA) became the first medicines regulator to approve an mRNA vaccine, authorizing the Pfizer–BioNTech vaccine for widespread use. On 11 December, the US Food and Drug Administration (FDA) issued an emergency use authorization for the Pfizer–BioNTech vaccine and a week later similarly authorized the Moderna vaccine. In 2023 the Nobel Prize in Physiology or Medicine was awarded to Katalin Karikó and Drew Weissman for their discoveries concerning modified nucleosides that enabled the development of effective mRNA vaccines against COVID-19. == History == === Early research === The first successful transfection of designed mRNA packaged within a liposomal nanoparticle into a cell was published in 1989. "Naked" (or unprotected) lab-made mRNA was injected a year later into the muscle of mice. These studies were the first evidence that in vitro transcribed mRNA with a chosen gene was able to deliver the genetic information to produce a desired protein within living cell tissue and led to the concept proposal of messenger RNA vaccines. Liposome-encapsulated mRNA encoding a viral antigen was shown in 1993 to stimulate T cells in mice. The following year self-amplifying mRNA was developed by including both a viral antigen and replicase encoding gene. The method was used in mice to elicit both a humoral and cellular immune response against a viral pathogen. The next year mRNA encoding a tumor antigen was shown to elicit a similar immune response against cancer cells in mice. === Development === The first human clinical trial using ex vivo dendritic cells transfected with mRNA encoding tumor antigens (therapeutic cancer mRNA vaccine) was started in 2001. Four years later, the successful use of modified nucleosides as a method to transport mRNA inside cells without setting off the body's defense system was reported. 
Clinical trial results of an mRNA vaccine directly injected into the body against cancer cells were reported in 2008. BioNTech in 2008, and Moderna in 2010, were founded to develop mRNA biotechnologies. The US research agency DARPA launched at this time the biotechnology research program ADEPT to develop emerging technologies for the US military. The agency recognized the potential of nucleic acid technology for defense against pandemics and began to invest in the field. DARPA grants were seen as a vote of confidence that in turn encouraged other government agencies and private investors to invest in mRNA technology. DARPA awarded at the time a $25 million grant to Moderna. The first human clinical trials using an mRNA vaccine against an infectious agent (rabies) began in 2013. Over the next few years, clinical trials of mRNA vaccines for a number of other viruses were started. mRNA vaccines for human use were studied for infectious agents such as influenza, Zika virus, cytomegalovirus, and Chikungunya virus. === Acceleration === The COVID-19 pandemic, and sequencing of the causative virus SARS-CoV-2 at the beginning of 2020, led to the rapid development of the first approved mRNA vaccines. BioNTech and Moderna in December of the same year obtained approval for their mRNA-based COVID-19 vaccines. On 2 December, seven days after its final eight-week trial, the UK Medicines and Healthcare products Regulatory Agency (MHRA) became the first global medicines regulator in history to approve an mRNA vaccine, granting emergency authorization for Pfizer–BioNTech's BNT162b2 COVID-19 vaccine for widespread use. On 11 December, the FDA gave emergency use authorization for the Pfizer–BioNTech COVID-19 vaccine and a week later similar approval for the Moderna COVID-19 vaccine. == Mechanism == The goal of a vaccine is to stimulate the adaptive immune system to create antibodies that precisely target that particular pathogen. The markers on the pathogen that the antibodies target are called antigens. Traditional vaccines stimulate an antibody response by injecting either antigens, an attenuated (weakened) virus, an inactivated (dead) virus, or a recombinant antigen-encoding viral vector (harmless carrier virus with an antigen transgene) into the body. These antigens and viruses are prepared and grown outside the body. In contrast, mRNA vaccines introduce a short-lived synthetically created fragment of the RNA sequence of a virus into the individual being vaccinated. These mRNA fragments are taken up by dendritic cells through phagocytosis. The dendritic cells use their internal machinery (ribosomes) to read the mRNA and produce the viral antigens that the mRNA encodes. The body degrades the mRNA fragments within a few days of introduction. Although non-immune cells can potentially also absorb vaccine mRNA, produce antigens, and display the antigens on their surfaces, dendritic cells absorb the mRNA globules much more readily. The mRNA fragments are translated in the cytoplasm and do not affect the body's genomic DNA, located separately in the cell nucleus. Once the viral antigens are produced by the host cell, the normal adaptive immune system processes are followed. Antigens are broken down by proteasomes. Class I and class II MHC molecules then attach to the antigen and transport it to the cellular membrane, "activating" the dendritic cell. Once activated, dendritic cells migrate to lymph nodes, where they present the antigen to T cells and B cells. 
This triggers the production of antibodies specifically targeted to the antigen, ultimately resulting in immunity. == mRNA == The central component of an mRNA vaccine is its mRNA construct. The in vitro transcribed mRNA is generated from an engineered plasmid DNA, which contains an RNA polymerase promoter and a sequence corresponding to the mRNA construct. By combining T7 phage RNA polymerase and the plasmid DNA, the mRNA can be transcribed in the lab. Efficacy of the vaccine is dependent on the stability and structure of the designed mRNA. The in vitro transcribed mRNA has the same structural components as natural mRNA in eukaryotic cells. It has a 5' cap, a 5'-untranslated region (UTR) and 3'-UTR, an open reading frame (ORF), which encodes the relevant antigen, and a 3'-poly(A) tail. By modifying these different components of the synthetic mRNA, the stability and translational ability of the mRNA can be enhanced, and in turn, the efficacy of the vaccine improved. The mRNA can be improved by using synthetic 5'-cap analogues, which enhance stability and increase protein translation. Similarly, regulatory elements in the 5'-untranslated region and the 3'-untranslated region can be altered, and the length of the poly(A) tail optimized, to stabilize the mRNA and increase protein production. The mRNA nucleotides can be modified to both decrease innate immune activation and increase the mRNA's half-life in the host cell. The nucleic acid sequence and codon usage impact protein translation. Enriching the sequence with guanine-cytosine content improves mRNA stability and half-life and, in turn, protein production. Replacing rare codons with synonymous codons frequently used by the host cell also enhances protein production. == Delivery == For a vaccine to be successful, sufficient mRNA must enter the host cell cytoplasm to stimulate production of the specific antigens. Entry of mRNA molecules, however, faces a number of difficulties. Not only are mRNA molecules too large to cross the cell membrane by simple diffusion, but they are also negatively charged like the cell membrane, which causes a mutual electrostatic repulsion. Additionally, mRNA is easily degraded by RNases in skin and blood. Various methods have been developed to overcome these delivery hurdles. The method of vaccine delivery can be broadly classified by whether mRNA transfer into cells occurs within (in vivo) or outside (ex vivo) the organism. === Ex vivo === Dendritic cells display antigens on their surfaces, leading to interactions with T cells to initiate an immune response. Dendritic cells can be collected from patients and programmed with the desired mRNA, then administered back into patients to create an immune response. The simplest way that ex vivo dendritic cells take up mRNA molecules is through endocytosis, a fairly inefficient pathway in the laboratory setting that can be significantly improved through electroporation. === In vivo === Since the discovery that the direct administration of in vitro transcribed mRNA leads to the expression of antigens in the body, in vivo approaches have been investigated. They offer some advantages over ex vivo methods, particularly by avoiding the cost of harvesting and adapting dendritic cells from patients and by imitating a regular infection. Different routes of injection, such as into the skin, blood, or muscles, result in varying levels of mRNA uptake, making the choice of administration route a critical aspect of in vivo delivery.
One study showed, in comparing different routes, that lymph node injection leads to the largest T-cell response. ==== Naked mRNA injection ==== Naked mRNA injection means that the delivery of the vaccine is only done in a buffer solution. This mode of mRNA uptake has been known since the 1990s. The first worldwide clinical studies used intradermal injections of naked mRNA for vaccination. A variety of methods have been used to deliver naked mRNA, such as subcutaneous, intravenous, and intratumoral injections. Although naked mRNA delivery causes an immune response, the effect is relatively weak, and after injection the mRNA is often rapidly degraded. ==== Polymer and peptide vectors ==== Cationic polymers can be mixed with mRNA to generate protective coatings called polyplexes. These protect the recombinant mRNA from ribonucleases and assist its penetration in cells. Protamine is a natural cationic peptide and has been used to encapsulate mRNA for vaccination. ==== Lipid nanoparticle vector ==== The first time the FDA approved the use of lipid nanoparticles as a drug delivery system was in 2018, when the agency approved the first siRNA drug, Onpattro. Encapsulating the mRNA molecule in lipid nanoparticles was a critical breakthrough for producing viable mRNA vaccines, solving a number of key technical barriers in delivering the mRNA molecule into the host cell. Research into using lipids to deliver siRNA to cells became a foundation for similar research into using lipids to deliver mRNA. However, new lipids had to be invented to encapsulate mRNA strands, which are much longer than siRNA strands. Principally, the lipid provides a layer of protection against degradation, allowing more robust translational output. In addition, the customization of the lipid's outer layer allows the targeting of desired cell types through ligand interactions. However, many studies have also highlighted the difficulty of studying this type of delivery, demonstrating that there is an inconsistency between in vivo and in vitro applications of nanoparticles in terms of cellular intake. The nanoparticles can be administered to the body and transported via multiple routes, such as intravenously or through the lymphatic system. One issue with lipid nanoparticles is that several of the breakthroughs leading to the practical use of that technology involve the use of microfluidics. Microfluidic reaction chambers are difficult to scale up, since the entire point of microfluidics is to exploit the microscale behaviors of liquids. The only way around this obstacle is to run an extensive number of microfluidic reaction chambers in parallel, a novel task requiring custom-built equipment. For COVID-19 mRNA vaccines, this was the main manufacturing bottleneck. Pfizer used such a parallel approach to solve the scaling problem. After verifying that impingement jet mixers could not be directly scaled up, Pfizer made about 100 of the little mixers (each about the size of a U.S. half-dollar coin), connected them together with pumps and filters with a "maze of piping," and set up a computer system to regulate flow and pressure through the mixers. Another issue, with the large-scale use of this delivery method, is the availability of the novel lipids used to create lipid nanoparticles, especially ionizable cationic lipids. Before 2020, such lipids were manufactured in small quantities measured in grams or kilograms, and they were used for medical research and a handful of drugs for rare conditions. 
As the safety and efficacy of mRNA vaccines became clear in 2020, the few companies able to manufacture the requisite lipids were confronted with the challenge of scaling up production to respond to orders for several tons of lipids. ==== Viral vector ==== In addition to non-viral delivery methods, RNA viruses have been engineered to achieve similar immunological responses. Typical RNA viruses used as vectors include retroviruses, lentiviruses, alphaviruses and rhabdoviruses, each of which can differ in structure and function. Clinical studies have utilized such viruses on a range of diseases in model animals such as mice, chicken and primates. == Advantages == === Traditional vaccines === mRNA vaccines offer specific advantages over traditional vaccines. Because mRNA vaccines are not constructed from an active pathogen (or even an inactivated pathogen), they are non-infectious. In contrast, traditional vaccines require the production of pathogens, which, if done at high volumes, could increase the risks of localized outbreaks of the virus at the production facility. Another biological advantage of mRNA vaccines is that since the antigens are produced inside the cell, they stimulate cellular immunity, as well as humoral immunity. mRNA vaccines have the production advantage that they can be designed swiftly. Moderna designed their mRNA-1273 vaccine for COVID-19 in 2 days. They can also be manufactured faster, more cheaply, and in a more standardized fashion (with fewer error rates in production), which can improve responsiveness to serious outbreaks. The Pfizer–BioNTech vaccine originally required 110 days to mass-produce (before Pfizer began to optimize the manufacturing process to only 60 days), which was substantially faster than traditional flu and polio vaccines. Within that larger timeframe, the actual production time is only about 22 days: two weeks for molecular cloning of DNA plasmids and purification of DNA, four days for DNA-to-RNA transcription and purification of mRNA, and four days to encapsulate mRNA in lipid nanoparticles followed by fill and finish. The majority of the days needed for each production run are allocated to rigorous quality control at each stage. === DNA vaccines === In addition to sharing the advantages of theoretical DNA vaccines over established traditional vaccines, mRNA vaccines also have additional advantages over DNA vaccines. The mRNA is translated in the cytosol, so there is no need for the RNA to enter the cell nucleus, and the risk of being integrated into the host genome is averted. Modified nucleosides (for example, pseudouridines, 2'-O-methylated nucleosides) can be incorporated to mRNA to suppress immune response stimulation to avoid immediate degradation and produce a more persistent effect through enhanced translation capacity. The open reading frame (ORF) and untranslated regions (UTR) of mRNA can be optimized for different purposes (a process called sequence engineering of mRNA), for example through enriching the guanine-cytosine content or choosing specific UTRs known to increase translation. An additional ORF coding for a replication mechanism can be added to amplify antigen translation and therefore immune response, decreasing the amount of starting material needed. == Disadvantages == === Storage === Because mRNA is fragile, some vaccines must be kept at very low temperatures to avoid degrading and thus giving little effective immunity to the recipient. 
Pfizer–BioNTech's BNT162b2 mRNA vaccine has to be kept between −80 and −60 °C (−112 and −76 °F). Moderna says their mRNA-1273 vaccine can be stored between −25 and −15 °C (−13 and 5 °F), which is comparable to a home freezer, and that it remains stable between 2 and 8 °C (36 and 46 °F) for up to 30 days. In November 2020, Nature reported, "While it's possible that differences in LNP formulations or mRNA secondary structures could account for the thermostability differences [between Moderna and BioNtech], many experts suspect both vaccine products will ultimately prove to have similar storage requirements and shelf lives under various temperature conditions." Several platforms are being studied that may allow storage at higher temperatures. === Recent === Before 2020, no mRNA technology platform (drug or vaccine) had been authorized for use in humans, so there was a risk of unknown effects. The COVID-19 pandemic that began in 2020 required mRNA vaccines to be produced rapidly, made them attractive to national health organisations, and led to debate about the type of initial authorization mRNA vaccines should receive (including emergency use authorization or expanded access authorization) after the eight-week period of post-final human trials. === Side effects === Reactogenicity is similar to that of conventional, non-RNA vaccines. However, those susceptible to an autoimmune response may have an adverse reaction to mRNA vaccines. The mRNA strands in the vaccine may elicit an unintended immune reaction – this entails the body believing itself to be sick, and the person feeling as if they are ill as a result. To minimize this, mRNA sequences in mRNA vaccines are designed to mimic those produced by host cells. Strong but transient reactogenic effects were reported in trials of novel COVID-19 mRNA vaccines; most people do not experience severe side effects, which include fever and fatigue. Severe side effects are defined as those that prevent daily activity. == Efficacy == The COVID-19 mRNA vaccines from Moderna and Pfizer–BioNTech had short-term efficacy rates of over 90 percent against the original SARS-CoV-2 virus. Earlier mRNA drug trials on pathogens other than COVID-19 were not effective and had to be abandoned in the early phases of trials. The reason for the efficacy of the new mRNA vaccines is not clear. Physician-scientist Margaret Liu stated that the efficacy of the new COVID-19 mRNA vaccines could be due to the "sheer volume of resources" that went into development, or that the vaccines might be "triggering a nonspecific inflammatory response to the mRNA that could be heightening its specific immune response, given that the modified nucleoside technique reduced inflammation but hasn't eliminated it completely", and that "this may also explain the intense reactions such as aches and fevers reported in some recipients of the mRNA SARS-CoV-2 vaccines". These reactions, though severe, were transient; another view is that they were a reaction to the lipid drug delivery molecules. == Hesitancy == There is misinformation implying that mRNA vaccines could alter DNA in the nucleus. mRNA in the cytosol is very rapidly degraded before it would have time to gain entry into the cell nucleus. In fact, mRNA vaccines must be stored at very low temperature and free from RNases to prevent mRNA degradation.
A retrovirus has a single-stranded RNA genome (just as many SARS-CoV-2 vaccines contain single-stranded RNA), which enters the cell nucleus and uses reverse transcriptase to make DNA from the RNA in the cell nucleus. A retrovirus has mechanisms to be imported into the nucleus, but other mRNAs (such as those in a vaccine) lack these mechanisms. Once inside the nucleus, creation of DNA from RNA cannot occur without a reverse transcriptase and appropriate primers, which both accompany a retrovirus, but which would not be present for other exogenous mRNA (such as a vaccine) even if it could enter the nucleus. == Amplification == mRNA vaccines use either non-amplifying (conventional) mRNA or self-amplifying mRNA. Pfizer–BioNTech and Moderna vaccines use non-amplifying mRNA. Both mRNA types continue to be investigated as vaccine methods against other potential pathogens and cancer. === Non-amplifying === The initial mRNA vaccines use a non-amplifying mRNA construct. Non-amplifying mRNA has only one open reading frame that codes for the antigen of interest. The total amount of mRNA available to the cell is equal to the amount delivered by the vaccine. Dosage strength is limited by the amount of mRNA that can be delivered by the vaccine. Non-amplifying vaccines replace uridine with N1-methylpseudouridine in an attempt to reduce toxicity. === Self-amplifying === Self-amplifying mRNA (saRNA) vaccines replicate their mRNA after transfection. Self-amplifying mRNA has two open reading frames. The first frame, like conventional mRNA, codes for the antigen of interest. The second frame codes for an RNA-dependent RNA polymerase (and its helper proteins), which replicates the mRNA construct in the cell. This allows smaller vaccine doses. The mechanisms and consequently the evaluation of self-amplifying mRNA may be different, as self-amplifying mRNA is a much bigger molecule. Among the saRNA vaccines being researched is a malaria vaccine. The first saRNA COVID-19 vaccine authorised was Gemcovac, in India in June 2022. The second was ARCT-154, developed by Arcturus Therapeutics. A version manufactured by Meiji Seika Pharma was authorised in Japan in November 2023. GSK began a phase 1 trial of an saRNA COVID-19 vaccine in 2021. Gritstone bio also started a phase 1 trial of an saRNA COVID-19 vaccine in 2021, used as a booster vaccine, with interim results published in 2023. The vaccine is designed to target both the spike protein of the SARS‑CoV‑2 virus and viral proteins that may be less prone to genetic variation, to provide greater protection against SARS‑CoV‑2 variants. saRNA vaccines must use uridine, which is required for reproduction to occur. == See also == DNA vaccine Nucleoside-modified messenger RNA RNA therapeutics Timeline of human vaccines == References == == Further reading == Dolgin E (September 2021). "The tangled history of mRNA vaccines" (PDF). Nature. 597 (9): 318–24. Bibcode:2021Natur.597..318D. doi:10.1038/d41586-021-02483-w. PMID 34522017. S2CID 237515383. Sahin U, Karikó K, Türeci Ö (October 2014). "mRNA-based therapeutics – developing a new class of drugs". Nat Rev Drug Discov. 13 (10): 759–80. doi:10.1038/nrd4278. PMID 25233993. == External links == "Five things you need to know about: mRNA vaccines". Horizon. Archived from the original on 4 April 2020. Retrieved 17 November 2020. "RNA vaccines: an introduction". PHG Foundation. University of Cambridge. "Understanding mRNA COVID-19 Vaccines". Centers for Disease Control and Prevention. 4 January 2022. Kolata, Gina; Mueller, Benjamin (15 January 2022).
"Halting Progress and Happy Accidents: How mRNA Vaccines Were Made". The New York Times. M.I.T. Lecture 10: Kizzmekia Corbett, Vaccines" on YouTube
Wikipedia/MRNA_vaccine
Hookworm vaccine is a vaccine against hookworm. No effective vaccine for the disease in humans has yet been developed. Hookworms, parasitic nematodes transmitted in soil, infect approximately 700 million humans, particularly in tropical regions of the world where endemic hookworms include Ancylostoma duodenale and Necator americanus. Hookworms feed on blood, and those infected with hookworms may develop chronic anaemia and malnutrition. Helminth infection can be effectively treated with benzimidazole drugs (such as mebendazole or albendazole), and efforts led by the World Health Organization have focused on one to three yearly de-worming doses in schools because hookworm infections with the heaviest intensities are most common in school-age children. However, these drugs only eliminate existing adult parasites, and re-infection can occur soon after treatment. School-based de-worming efforts do not treat adults or pre-school children, and concerns exist about drug resistance developing in hookworms against the commonly used treatments; thus, a vaccine against hookworm disease is sought to provide more permanent resistance to infection. Hookworm infection is considered a neglected disease as it disproportionately affects poorer localities and has received little attention from pharmaceutical companies. == Vaccine targets == Hookworm infections in humans can last for several years, and re-infection can occur very shortly after treatment, suggesting that hookworms effectively evade, and may interrupt or modulate, the host immune system. Successful hookworm vaccines have been developed for several animal species. On the basis of prior work, human vaccine development has targeted antigens from both the larval and adult stages of the hookworm life cycle, with the aim of a combined vaccine for humans that would provide more complete protection. Current vaccine targets among larval proteins are intended to attenuate larval migration through host tissue; targets among adult proteins have been demonstrated to block enzymes vital to hookworm feeding. The "ASP" (ancylostoma secreted protein) proteins are cysteine-rich secretory proteins. They are promising vaccine candidates based on previous vaccine studies in sheep, guinea pigs, cattle, and mice, which have demonstrated inhibition of hookworm larval migration. Furthermore, epidemiologic studies determined that high titers of circulating antibodies against ASPs are associated with lower hookworm burdens in residents of Hainan Province, China, and Minas Gerais, Brazil. The function of Na-ASP-2 (Q7Z1H1) is not currently known (though it may function as a chemotaxin mimic), but it is known to be released during parasite entry into the host. It may have some function in the transition from the environmental larval stage of the hookworm life cycle to an adult parasitic existence and tissue migration. The "APR" proteins are aspartic proteases. Ac-APR-1 and Na-APR-1 specifically participate in the hookworm's digestion of hemoglobin from its blood meal and are present in the adult stage of the hookworm life cycle. Animals immunized against Ac-APR-1 exhibited a reduction in worm burden, a reduction in hemoglobin loss, and a dramatic reduction in worm fecundity. The "GST" proteins are glutathione S-transferases. Na-GST-1 (D3U1A5) plays a role in the worm's digestion of hemoglobin; specifically, it serves to protect the worm from heme molecules released by digestion. == Research == Examples of antigenic targets of hookworm vaccines currently in clinical trials include Na-ASP-2, Ac-APR-1, Na-APR-1, and Na-GST-1.
In a clinical trial a vaccine containing recombinant Na-ASP-2 with Aluminium hydroxide (Alhydrogel) as an adjuvant was found to increase Th2 helper cells and IgE. Both the Th2 helper cells and IgE antibody are important players in recognition and immunoregulation against parasites. The vaccine containing recombinant Na-ASP-2 resulted in significantly decreased risk of a hookworm infection. In 2014, Na-GST-1 with Alhydrogel adjuvant completed a successful phase 1 clinical trial in Brazil. In 2017, it completed a successful phase 1 trial in the US. === Funding === Research funding to develop hookworm vaccines has come from the Human Hookworm Vaccine Initiative, a program of the Sabin Vaccine Institute and collaborations with George Washington University, the Oswaldo Cruz Foundation, the Chinese Institute of Parasitic Diseases, the Queensland Institute of Medical Research, and the London School of Hygiene and Tropical Medicine. Funding for hookworm vaccine research efforts also includes funds from the Bill & Melinda Gates Foundation totaling in excess of $53 million, and additional support from the Rockefeller Foundation, Doctors Without Borders, National Institute of Allergy and Infectious Diseases, and the March of Dimes Birth Defects Foundation. The government of Brazil, where hookworm is still endemic in some poorer areas, has promised to manufacture a vaccine if one can be proven effective. == References == == External links == Study of Na-ASP-2 Human Hookworm Vaccine in Healthy Adults Without Evidence of Hookworm Infection, ClinicalTrials.gov Phase 1 Trial of Na-ASP-2 Hookworm Vaccine in Previously Infected Brazilian Adults, ClinicalTrials.gov
Wikipedia/Hookworm_vaccine
Alzheimer's disease (AD) in the Hispanic/Latino population is becoming a topic of interest in AD research as Hispanics and Latinos are disproportionately affected by Alzheimer's disease and underrepresented in clinical research. AD is a neurodegenerative disease, characterized by the presence of amyloid-beta plaques and neurofibrillary tangles, that causes memory loss and cognitive decline in its patients. However, pathology and symptoms have been shown to manifest differently in Hispanics/Latinos, as different neuroinflammatory markers are expressed and cognitive decline is more pronounced. Additionally, there is a large genetic component of AD, with mutations in the amyloid precursor protein (APP), apolipoprotein E (APOE), presenilin 1 (PSEN1), bridging integrator 1 (BIN1), SORL1, and clusterin (CLU) genes increasing one's risk of developing the condition. However, research has shown these high-risk genes have a different effect on Hispanics and Latinos than they do in other racial and ethnic groups. Additionally, this population experiences higher rates of comorbidities that increase their risk of developing AD. Hispanics and Latinos also face socioeconomic and cultural factors, such as low income and a language barrier, that affect their ability to engage in clinical trials and receive proper care. == Alzheimer's disease == Alzheimer's disease is the most common form of dementia, accounting for 60% of all cases, and is the sixth leading cause of death in the elderly. The disease typically presents itself with intracellular aggregation of hyper-phosphorylated tau, forming neurofibrillary tangles (NFTs), and the extracellular aggregation of amyloid beta (Aβ), forming neuritic plaques. As of 2020, 5.4 million Americans have been diagnosed with Alzheimer's disease, and this number is projected to reach 15–22 million by 2050. Hispanics and Latinos account for 55 million of the U.S. population, and this population is projected to rise to 97 million, accounting for 25% of the U.S. population, in 2050. === Hispanic/Latino population === Hispanics and Latinos make up 18% of the U.S. population; however, they are underrepresented in clinical research. The National Institutes of Health (NIH) reported that Hispanics and Latinos accounted for less than 8% of reported clinical trial participants. One proposed reason for the lack of representation is that the Hispanic/Latino population includes many people from different countries, and with this diverse background come different characteristics and comorbidities associated with AD. Hispanics and Latinos have a higher prevalence of AD compared to White non-Hispanics. Studies estimate that 12% of older Hispanic and Latino adults were diagnosed with AD, the highest proportion compared to all other ethnic groups. Hispanics and Latinos are the fastest-growing population in the U.S., and Hispanic/Latino seniors, those who are 65 and older, are expected to have the largest rise in AD and dementia cases compared to other populations, reaching 3.5 million by 2060. In 2011, the National Alzheimer's Project Act was signed into law. The goal of this act was for the National Institute on Aging to accelerate research and clinical care of patients with AD; this act also seeks to improve the inclusion of underrepresented populations in AD research and clinical trials. It is important to note that this neurodegenerative disease varies across all ethnic groups.
Studies show that Latinos are up to one-and-a-half times more likely to develop Alzheimer's than non-Latino whites. === Types === There exist two types of Alzheimer's disease: familial Alzheimer's disease, also called early-onset Alzheimer's disease (EOAD), and sporadic Alzheimer's disease, also called late-onset Alzheimer's disease (LOAD). EOAD is the less common of the two, accounting for 5–10% of cases, and patients with EOAD are typically diagnosed with familial AD before they turn 65 years old. LOAD is more common, accounts for 90% of AD cases, and its patients experience onset after they turn 65. EOAD has been shown to have two types of inheritance patterns: Mendelian (mEOAD) and non-Mendelian (nmEOAD, sporadic). The three main genes implicated in familial AD are amyloid precursor protein (APP), presenilin-1 (PSEN1), and presenilin-2 (PSEN2). Onset of sporadic AD has both genetic and environmental risk factors. Some genes of interest in LOAD are the APOE gene, specifically the ApoE4 allele, bridging integrator 1 (BIN1), SORL1, and clusterin (CLU). === Pathology and symptoms === ==== Amyloid beta plaques and neurofibrillary tangles ==== Neuritic plaques and neurofibrillary tangles (NFTs) are the main pathological components of AD. Neuritic plaques are composed mainly of the peptide Aβ, but also include other components. APP is processed through either of two pathways: the first is termed the amyloidogenic pathway and produces Aβ, while the second is the non-amyloidogenic pathway and does not produce Aβ. In the amyloidogenic pathway, APP (the parent protein) is trafficked to endosomes and cleaved by beta-secretase (BACE), after which it moves back to the cell surface to be cleaved by gamma-secretase, thus releasing the Aβ peptide. This form of APP processing produces multiple Aβ peptides, of which the Aβ40 and Aβ42 peptides are most abundantly produced. APP can also be cleaved by alpha- and gamma-secretase and undergo non-amyloidogenic processing. NFTs form when the tau protein, which is involved in microtubule stability, is hyperphosphorylated, dissociates from microtubules, and then aggregates with other p-tau monomers, forming tau oligomers, fibrils, and eventually NFTs. These types of pathologies do not differ across ethnic groups. ==== Chronic inflammation ==== Neuroinflammation is another component of AD pathology and is associated with an increase in levels of inflammatory markers. Researchers have also identified many genes linked to immune function that are risk factors for AD, such as TREM2. TREM2 is highly expressed in microglia, the immune cells of the brain, tying the progression of AD to dysfunction in microglial activity. Neuroinflammation and the Aβ and tau pathologies interact in Alzheimer's disease, and their progress can be monitored with biomarkers. For example, studies have found the neuroinflammatory marker YKL-40 in the cerebrospinal fluid (CSF) of AD patients. This protein increases years before the onset of AD symptoms and correlates with other neurodegenerative biomarkers, which suggests a potential to predict disease progression. While YKL-40 has not been shown to associate with the APOE ε4 allele, there is research linking this biomarker to AD in the Hispanic population. ==== Neurodegeneration ==== Along with the plaques and inflammation, patients with AD also suffer from neurodegeneration, characterized by loss of neurons and synapses (the connections through which neurons communicate). 
As a result, the entire brain shrinks in volume (termed brain atrophy), the ventricles grow larger, and the hippocampus and cortex shrink in size. As all these pathologies progress, patients begin to experience a decline in memory, cognitive abilities, and independence. The accumulation of neurofibrillary tangles and loss of synaptic fields correlate most closely with cognitive loss. ==== Mild cognitive impairment ==== Mild cognitive impairment (MCI) precedes the overt diagnosis of AD. To be diagnosed with MCI, a patient must show memory impairment and a progressive decline in cognitive abilities without presenting symptoms of Parkinson's disease, cerebrovascular disease, or behavioral or language disorders. Studies show that older Hispanics/Latinos exhibit a higher prevalence of dementia than Caucasians. Demographic and linguistic factors can prevent a proper MCI diagnosis. For instance, subjects for whom English might not be the primary language require use of a translator to perform the cognitive testing, which could affect testing results, but such experimental weaknesses are frequently not described. === Brain imaging === Positron emission tomography (PET) is a brain imaging technique that uses a small amount of radioactive substance, called a tracer, to measure energy use or a specific molecule in different brain regions. In AD research, tracers can be used to detect neuritic plaques (containing Aβ) and tau. Depending on the tracer used, the signal can be used to determine the presence of neuritic plaques and NFTs. === Biomarkers === Establishing biomarkers for the early detection of AD is an ongoing area of research because pathological changes occur years before symptoms become apparent. Common biomarkers for AD include amyloid beta and hyperphosphorylated tau, both of which can be found in brain tissue and cerebrospinal fluid (CSF). Aβ provides a biomarker of neuritic plaques, which form when APP cleavage results in an increase in the Aβ42/Aβ40 ratio. Hyperphosphorylated tau serves as a biomarker of neurofibrillary tangles (NFTs), with the presence of NFTs indicating a disruption in microtubule stability and neuronal injury. Researchers have begun investigating other markers in the CSF, such as markers of neuronal injury, like visinin-like protein 1 (VILIP-1), YKL-40, and neurogranin (NGRN). Due to the low use of CSF sampling in most countries, efforts have been made to study biomarkers in blood. Through work on blood biomarkers, researchers have been able to find proteins that are elevated in AD patients and have predictive potential. The predictive potential of these plasma markers increases further when coupled with the presence of genetic risk factors, such as the APOE ε4 allele. Examples of plasma biomarkers include interleukins, TGF, and micro-RNAs. Genetic risk factors being considered for integration with biomarkers include ApoE, PSEN1, BIN1, CLU, and SORL1. ==== APOE gene ==== Apolipoprotein E (APOE) is an apolipoprotein, composed of 299 amino acids, that is expressed throughout the body but is particularly important in the brain, where it is involved in cholesterol metabolism, specifically the intracellular and extracellular transport, delivery, and distribution of cholesterol. APOE associates with high-density lipoproteins (HDL). The gene for APOE is located on chromosome 19 and exhibits polymorphisms in the population, defined by three alleles of APOE: 𝜀2, 𝜀3, and 𝜀4. 
These alleles differ by only two single nucleotide polymorphisms in exon 4, rs429358 and rs7412 (amino-acid positions 112 and 158); an illustrative encoding of these allele definitions is sketched below, following the SORL1 subsection. APOE 𝜀3 is the most common of the three and is present in 50–90% of the general population. APOE 𝜀4 is the most common risk factor for LOAD, increasing genetic risk up to 33-fold, depending on the population. Over 50% of LOAD patients have the 𝜀4 allele, and having even one copy of the 𝜀4 allele increases the risk of developing Alzheimer's by a factor of 4. APOE 𝜀2 has been shown to have a neuroprotective effect, with carriers of the allele showing a lower prevalence of AD. The neuropathological effects of APOE ε4 are pleiotropic; APOE ε4 impairs uptake of cholesterol by neurons, promotes microglial dysfunction, promotes beta-amyloid aggregation, and increases cerebral amyloid angiopathy (CAA). The increased incidence of AD associated with the APOE ε4 allele has been proposed to be directly linked to Aβ because it modulates Aβ aggregation and clearance, although increasing evidence points to a multitude of actions. While extensive research has been done on the APOE4 gene, more information is available for Caucasians than for other ethnic and racial groups. The research focusing on the Hispanic and Latino population suggests that race is a key variable in assessing the risk that carrying the APOE ε4 allele confers for developing AD. Genetic studies of Hispanic and Latino populations show a lower risk of developing AD for individuals with the APOE ε4 allele than observed in Caucasians. For instance, the prevalence of AD seen in Caribbean Hispanics, compared to White Non-Hispanics, appears to be independent of APOE genotype. Our understanding of ApoE ε4 continues to evolve, though, as Hispanic and Latino representation in research increases, leading to larger sample sizes and improved population stratification. ==== PSEN1 gene ==== Presenilin 1 (PSEN1) is the catalytic subunit of the transmembrane protein complex gamma-secretase. This protein cleaves amyloid precursor protein (APP), after beta-secretase cleavage, to produce amyloid beta. The gene for PSEN1 is located on chromosome 14, q24.2, and consists of 12 exons that encode a 467-amino-acid protein that is predicted to traverse the membrane 9 times. Over 200 pathogenic mutations of this protein exist, which together account for 18% to 50% of autosomal dominant EOAD cases. PSEN1 mutations increase the risk of developing AD by increasing the production of amyloid beta-42 relative to amyloid beta-40. Unlike APOE4, PSEN1 mutations tend to have a clear pathological effect in Hispanics and Latinos, who tend to express region-specific mutations. Puerto Ricans have a G206A mutation that causes familial AD, Cuban families diagnosed with EOAD commonly exhibit an L174M mutation, Mexican families exhibit L171P and A431E mutations, and Colombian families exhibit an E280A mutation (particularly in the municipality of Yarumal). ==== BIN1 gene ==== Bridging Integrator 1 (BIN1) is a widely expressed Bin-Amphiphysin-Rvs (BAR) adaptor protein that is located on chromosome 2, q14.3, and contains 20 exons. BIN1 mainly regulates clathrin-mediated endocytosis. Due to its large number of exons, this gene is subject to alternative splicing, with the major forms varying in the splicing of exons 6a, 10, 12, and 13. Genetic studies indicate that dysfunction in BIN1 is the second most important risk factor for AD. The mechanism through which BIN1 contributes to AD, though, is unclear. 
Studies suggest a range of possible reasons that BIN1 is associated with AD, including interactions of BIN1 with microtubule-associated proteins (e.g., CLIP170), endosomal trafficking of APP and APOE by BIN1, and BIN1-mediated regulation of inflammation through the expression of indoleamine 2,3-dioxygenase (IDO1). The interaction of BIN1 and tau might also be important because elevated levels of BIN1 are associated with increased AD risk through tau load, a result suggested by studies of the functional rs59335482 variant. Additionally, endocytic trafficking of APP by BIN1 could be important because trafficking determines whether APP will undergo the non-amyloidogenic or the amyloidogenic pathway. For example, if APP is transported to the endosome, it will likely be cleaved by beta-secretase and undergo amyloidogenic processing, but if it accumulates on the cell surface it will likely be cleaved by alpha-secretase and undergo non-amyloidogenic processing. Lastly, BIN1 facilitates the kynurenine pathway of tryptophan metabolism by regulating the expression of the rate-limiting enzyme indoleamine 2,3-dioxygenase (IDO1). In the IDO1 mechanism, increased BIN1 expression could raise IDO1 levels and thereby increase levels of toxic tryptophan derivatives. These metabolites have been suggested to be involved in AD pathology and cognitive decline, as they co-localize with the plaques and NFTs in patient brains. Genome-wide association studies (GWAS) in the Hispanic/Latino population indicate that polymorphisms in the BIN1, ABCA7, and CD2AP genes are more significant in Caribbean Hispanics. For instance, one BIN1 mutation that has been explored in Caribbean Hispanics is rs13426725. Other variants that increase the risk of AD include rs6733839. ==== SORL1 gene ==== Sortilin-related receptor 1 (SORL1) is a 250-kDa membrane protein with seven distinct domains that make it a member of two receptor families: the low-density lipoprotein receptor (LDLR) family of ApoE receptors and the vacuolar protein sorting 10 (VPS10) domain receptor family. In humans, the SORL1 gene is located on chromosome 11, specifically q23.2–q24.2. As a protein, SORL1 is highly expressed in the brain. SORL1 is an intracellular protein that is expressed in early endosomes and the trans-Golgi network. SORL1 plays an important role in the intracellular trafficking of APP. The protein can bind to APP expressed in endosomes and allow APP to be transported back to the cell surface, which prevents amyloidogenic processing and the production of cytotoxic Aβ40 and Aβ42. Moreover, SORL1 has been shown to facilitate cholesterol transport through its tendency to bind to APOE-lipoprotein complexes. Strong disease-linked polymorphisms in SORL1, combined with its role in APP trafficking, render SORL1 a biomarker of strong interest for LOAD. Decreased levels of SORL1 transcript and protein have been observed in the brains of AD patients. Analysis of SORL1 polymorphisms in the Hispanic/Latino population using GWAS and whole-genome sequencing (WGS) has shown that several SORL1 mutations are seen in Caribbean Hispanics. Two rare polymorphisms in SORL1 associated with AD were observed in this population, rs117260922-E270K and rs143571823-T947M, as well as a common variant (rs2298813-A528T). These polymorphisms are not specific to Hispanics/Latinos, as they are also observed in non-Hispanic white individuals. 
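The APOE risk alleles discussed above are defined by just two coding positions (amino acids 112 and 158), which makes the ε2/ε3/ε4 nomenclature easy to express as a simple lookup. The sketch below is purely illustrative and is not taken from any of the studies described here; it encodes only the three common isoforms and omits rare haplotypes as well as the mapping from the underlying SNPs rs429358 and rs7412 to these residues.

```python
# Illustrative sketch only: encoding the conventional APOE epsilon-allele
# definitions from the two amino-acid positions described above (112 and 158).
# Rare haplotypes (e.g. Arg112/Cys158) are deliberately left out, so unknown
# combinations raise a KeyError rather than being misclassified.

APOE_ALLELES = {
    ("Cys", "Cys"): "e2",  # Cys112 / Cys158 -> epsilon-2 (reported as protective)
    ("Cys", "Arg"): "e3",  # Cys112 / Arg158 -> epsilon-3 (most common allele)
    ("Arg", "Arg"): "e4",  # Arg112 / Arg158 -> epsilon-4 (LOAD risk allele)
}

def apoe_genotype(haplotype_1, haplotype_2):
    """Return an APOE genotype label (e.g. 'e3/e4') from two (pos112, pos158) haplotypes."""
    alleles = sorted(APOE_ALLELES[h] for h in (haplotype_1, haplotype_2))
    return "/".join(alleles)

# Example: one epsilon-3 haplotype and one epsilon-4 haplotype
print(apoe_genotype(("Cys", "Arg"), ("Arg", "Arg")))  # prints "e3/e4"
```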
==== ATP binding cassette transporters ==== Adenosine triphosphate (ATP)-binding cassette (ABC) transporters are a large family of transporters that regulate the efflux of cholesterol in neuronal cells. Two members of the ATP-binding cassette transporter family have been implicated in LOAD: ABC subfamily A member 1 (ABCA1) and ABC subfamily A member 7 (ABCA7). ABCA1 is a 220–240 kDa protein whose gene is located on chromosome 9q31.1. Its putative role in AD is tied to its role in stabilizing ApoE lipidation and degrading amyloid beta expressed in the brain. ABCA7 exhibits strong genetic linkage to AD; it is involved in lipid and cholesterol processing, as well as immune system function. ABCA7 exhibits a tighter association with amyloid deposition than ABCA1. The research on ABCA7 mutations in the Hispanic/Latino population is somewhat controversial. Some studies propose that mutations in ABCA7 are more common in Caucasian patients, whereas Hispanics/Latinos are more likely to express a BIN1 mutation. But researchers have still found polymorphisms specific to this population, one example being a 44-base-pair frameshift deletion in ABCA7 that increases AD risk in African Americans and Caribbean Hispanics. ==== Clusterin ==== Clusterin (CLU), also known as apolipoprotein J, is an 82 kDa glycoprotein whose gene is located on chromosome 8. It has multiple physiological functions, some examples being lipid transport, immune modulation, and cell death. CLU is known to have the ability to clear Aβ peptides and prevent their aggregation, which suggests that CLU has a neuroprotective effect. While one might expect AD to be associated with lower levels of a neuroprotective protein, biomarker studies indicate that CLU is upregulated in the plasma, CSF, hippocampus, and cortex of AD patients. One theory explaining this apparent contradiction is that CLU is actually reduced early in life, increasing the risk of developing AD. In this scenario, CLU might increase in subjects with AD as a compensatory response to the disease; further research is clearly necessary to clarify this area. However, genetic studies show that CLU is the third most significant risk factor for LOAD, which has propelled extensive genetic research related to CLU and AD. Two CLU variants of interest, which are associated with reduced AD frequency, are rs11136000 and rs9331896. However, little is known about the relationship between CLU and AD risk in the Hispanic/Latino community. === Comorbidities === Studies have shown that metabolic disorders, such as diabetes, cardiovascular disease, hypertension, obesity, and depression, can increase one's risk of developing AD as well as increase the rate at which it progresses. In the United States, 33% of the total population suffers from a metabolic disorder (e.g., diabetes or hypertension). Hispanic and Latino adults are at an increased risk of developing these conditions compared to Caucasians. As of 2017, it has been reported that 35% of all Latinos suffer from an AD comorbidity, with 17% suffering from diabetes and 25.4% suffering from hypertension. The high prevalence of cardiovascular and metabolic conditions is thought to contribute to the higher risk of Alzheimer's disease seen in Hispanics and Latinos. ==== Hypertension ==== Hypertension is a condition characterized by persistently high blood pressure. Two measurements are taken to quantify blood pressure: systolic and diastolic blood pressure. 
A patient is diagnosed as hypertensive when their systolic blood pressure is greater than 140 mmHg or their diastolic blood pressure is greater than 90 mmHg. Many people with hypertension are unaware that they have the condition. In 2010, 31.1% of adults worldwide had hypertension, but only 45.6% of them were aware they had the condition. Hypertension is more prevalent in the Hispanic/Latino population: in 2008, the rate of hypertension for Hispanic adults aged 45–84 was 65.7%, compared to 56.8% for non-Hispanic whites of the same age. Hispanics/Latinos are more likely than non-Hispanics to be unaware of their condition and less likely to seek treatment, which increases the risk of developing cardiovascular disease and AD. Hypertension can be reduced by lifestyle changes (e.g., weight loss and exercise) and pharmacological intervention. Multiple studies suggest that use of antihypertensive drugs might reduce the risk of dementia and AD. ==== Diabetes ==== Type II diabetes is a metabolic condition characterized by high blood sugar and defective insulin secretion by pancreatic β-cells. A patient is diagnosed with diabetes when their fasting blood glucose level is above 7.0 mmol/L (126 mg/dL) or above 11.1 mmol/L (200 mg/dL) two hours after an oral dose of glucose. As with hypertension, diabetes is more common in the Hispanic/Latino population than in the Caucasian population. In the U.S., Hispanics/Latinos are reported to have a diabetes prevalence of 22.6%, compared to 11.3% in non-Hispanic whites. This increased risk is associated with genetic factors, environmental factors (e.g., diet), and socioeconomic status. Recently, AD has also been characterized as a metabolic disease of glucose regulation, as patients experience alterations in brain insulin responsiveness that lead to oxidative stress and inflammation. Changes in cerebral glucose signaling associated with AD have led to AD being proposed as "type 3 diabetes", although this has not been adopted as an official designation. Older diabetic Mexican Americans are twice as likely to develop dementia as those without diabetes. Additionally, the longer a patient is diabetic, the faster the rate of cognitive decline within the same racial and age group. ==== Lifestyle ==== Alzheimer's disease can change a person's life rapidly, progressively stripping away many of an individual's abilities. The neurons that are damaged first are those involved in memory, language, and thinking. Individuals living with this disease may develop changes in mood, personality, or behavior, and eventually neuronal damage extends to parts of the brain that enable basic bodily functions. Many of the comorbidities for AD can be reduced through lifestyle changes. Hypertension can be treated by reducing sodium intake, increasing potassium intake, reducing alcohol consumption, and engaging in at least 150 min of moderate-intensity or 75 min of vigorous-intensity physical activity per week. Similarly, diabetes can also be treated by increasing physical activity and changing diet. The same lifestyle changes are recommended for AD patients. == Hispanics and Latinos in clinical research == The terms Latino and Hispanic are used interchangeably in formal and informal settings. The term "Hispanic" specifically denotes those who can trace their ancestry to a Spanish-speaking country, such as Spain and most of Latin America, the exceptions being Brazil, Guyana, Suriname, and French Guiana. 
The term "Latino" denotes those who can trace their ancestry to Latin America and the Caribbean. The diversity among Hispanic and Latino groups has led to proposals that the Latino/Hispanic population in the U.S. should not be treated as a homogenous group, and also to define subgroups based on their country of origin. The Latino/Hispanic community has recently been open to wanting to learn more about Alzheimer's Disease but because of they limited resources they aren't able to participate in clinical research. === Socioeconomic status === The socioeconomic status of the Hispanic/Latino population is thought to contribute to their low participation in clinical trials. In the U.S., the income and socioeconomic status of Hispanics and Latinos are lower when compared to White Non-Hispanics. In 2015, a study reported that 25% of the Hispanic/Latino population lives in poverty and their median family income was $17,800 lower than their white counterparts. The Hispanic/Latino community have grown the understand the affect Alzheimer has had on their community. A need for care was found to have been associated with education level and be a leading contributor to disability among older people. This shows significant implications for the economy and overall burden caregivers have to take on. Studies from 2017 showed that the difference in income is present across many income brackets, with only 38.6% of Latinos reporting a household income between $50,000 and $149,999, in comparison to 45.6% of Non-Hispanic whites. A similar trend was seen with levels of higher education and medical literacy attained by Hispanics and Latinos. As of 2013, it was reported that 22% of Latino adults (25 years and over) had earned an associate degree or higher, compared to the 46% seen in White Non-Hispanics. This trend is also seen with advanced degrees, as Latinos only account for 7% of Master's degrees and 1% Doctorate degrees awarded in the U.S. The lower measures of socioeconomic achievement and education level have been proposed to be associated with high mortality and dementia rates. In contrast, high educational levels are correlated with lower rates of dementia. Hispanic and Latinos that are 65 years and older make up one of the largest groups of uninsured individuals in the U.S. This makes it difficult for these seniors to obtain the clinical care necessary for management of AD. The cost of care for insured patients 65 years and older was estimated to be $25,213 per person, compared to the estimated $7750 for senior patients without AD. Increasing efforts are being made to include minorities in clinical research, for instance by providing travel support for participants. === Language barrier === Putative language barriers can also interfere with healthcare for the Hispanic and Latino community. Clinical trials can require participants to be fluent in English, which can potentially exclude older Hispanic and Latino subjects, who might not be fluent in English. Only 40% of the older generation, Latinos over the age of 69, are fluent. Proficiency in English is less of an issue for younger Latinos, with 90% of the population between the ages of 5 and 17 speaking the language fluently. Weak communication between physician and patient can also impair medical care. For instance, this issue occurs with clinical notes, which are typically translated from the English version of the same document for Spanish-speaking individuals. 
The medical community is attempting to address this issue by recruiting staff and physicians able to communicate with participants in their native language, as well as providing training on cultural sensitivity. == See also == Race and health in the United States == References == == External links == National Institute on Aging Summary of its Studies of AD in the Hispanic/Latino Population
Wikipedia/Alzheimer's_disease_in_the_Hispanic/Latino_population
DTaP-IPV/Hib vaccine is a 5-in-1 combination vaccine that protects against diphtheria, tetanus, whooping cough, polio, and Haemophilus influenzae type B. Its generic name is "diphtheria and tetanus toxoids and acellular pertussis adsorbed, inactivated poliovirus and haemophilus B conjugate vaccine", and it is also known as DTaP-IPV-Hib. == Uses == DTaP-IPV/Hib vaccine is administered to young children to immunise against diphtheria, tetanus, pertussis, poliomyelitis, and diseases caused by Haemophilus influenzae type B. == Formulations == A branded formulation marketed in the United States is Pentacel, manufactured by Sanofi Pasteur. Pentacel is known in the UK and Canada as Pediacel. An equivalent vaccine marketed in the UK and Canada by GlaxoSmithKline is Infanrix IPV + Hib. This is a two-part vaccine. The DTaP-IPV component is supplied as a sterile liquid, which is used to reconstitute lyophilized (freeze-dried) Hib vaccine. Pentaxim is a liquid formulation marketed by Sanofi Pasteur. == Availability == It is only licensed for young children. The United States Food and Drug Administration has approved the vaccine for children from age 6 weeks up to age 5 years. It was used in the UK until 2017, following which a 6-in-1 vaccine became available containing the additional protection against Hepatitis B. == References ==
Wikipedia/DTaP-IPV/Hib_vaccine
Preregistration is the practice of registering the hypotheses, methods, or analyses of a scientific study before it is conducted. Clinical trial registration is similar, although it may not require the registration of a study's analysis protocol. Finally, registered reports include the peer review and in-principle acceptance of a study protocol prior to data collection. Preregistration can have a number of different goals, including (a) facilitating and documenting research plans, (b) identifying and reducing questionable research practices and researcher biases, (c) distinguishing between confirmatory and exploratory analyses, (d) transparently evaluating the severity of hypothesis tests, and, in the case of Registered Reports, (e) facilitating results-blind peer review, and (f) reducing publication bias. A number of research practices, such as p-hacking, publication bias, data dredging, inappropriate forms of post hoc analysis, and HARKing, may increase the probability of incorrect claims. Although the idea of preregistration is old, the practice of preregistering studies has gained prominence as a way to mitigate some of the issues that are thought to underlie the replication crisis. == Types == === Standard preregistration === In the standard preregistration format, researchers prepare a research protocol document prior to conducting their research. Ideally, this document indicates the research hypotheses, sampling procedure, sample size, research design, testing conditions, stimuli, measures, data coding and aggregation method, criteria for data exclusions, and statistical analyses, including potential variations on those analyses. This preregistration document is then posted on a publicly available website such as the Open Science Framework or AsPredicted. The preregistered study is then conducted, and a report of the study and its results is submitted for publication together with access to the preregistration document. This preregistration approach allows peer reviewers and subsequent readers to cross-reference the preregistration document with the published research article in order to identify the presence of any undisclosed deviations from the preregistration. Deviations from the preregistration are possible and common in practice, but they should be transparently reported, and the consequences for the severity of the test should be evaluated. === Registered reports === The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the theoretical introduction, method, and analysis plan have been peer reviewed (Stage 1 peer review), publication of the findings is provisionally guaranteed (in-principle acceptance). The proposed study is then performed, and the research report is submitted for Stage 2 peer review. Stage 2 peer review confirms that the actual research methods are consistent with the preregistered protocol, that quality thresholds are met (e.g., manipulation checks confirm the validity of the experimental manipulation), and that the conclusions follow from the data. Because studies are accepted for publication regardless of whether the results are statistically significant, Registered Reports prevent publication bias. Meta-scientific research has shown that the percentage of non-significant results in Registered Reports is substantially higher than in standard publications. 
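To make concrete why undisclosed analytic flexibility is a problem that preregistration tries to address, consider a researcher who privately tries several analysis variants (different exclusion rules, covariates, or outcome measures) on the same data and reports only the variant that reaches p < .05. Under the simplifying assumption that each variant is an independent test of a true null hypothesis at a nominal alpha of .05, the chance of obtaining at least one spurious "significant" result grows quickly with the number of variants. The short sketch below is purely illustrative and is not drawn from any study cited in this article.

```python
# Illustration of how undisclosed analytic flexibility inflates the
# familywise (Type 1) error rate. Simplifying assumption: k independent
# tests of true null hypotheses, each at the nominal alpha level; analysis
# variants run on the same data are usually correlated, so real inflation
# will differ, but the qualitative point holds.

alpha = 0.05

for k in (1, 3, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k  # P(at least one false positive among k tests)
    print(f"{k:>2} analysis variants -> chance of at least one false positive: {fwer:.2f}")

# With 10 undisclosed variants the chance of a spurious finding is already
# about 0.40, which is why preregistration asks for the planned analysis
# (or all intended variants) to be declared before the data are seen.
```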
=== Specialised preregistration === Preregistration can be used in relation to a variety of different research designs and methods, including: Quantitative research in psychology Qualitative research Preexisting data Single case designs Electroencephalogram research Experience sampling Exploratory research Animal Research == Clinical trial registration == Clinical trial registration is the practice of documenting clinical trials before they are performed in a clinical trials registry so as to combat publication bias and selective reporting. Registration of clinical trials is required in some countries and is increasingly being standardized. Some top medical journals will only publish the results of trials that have been pre-registered. A clinical trials registry is a platform which catalogs registered clinical trials. ClinicalTrials.gov, run by the United States National Library of Medicine (NLM) was the first online registry for clinical trials, and remains the largest and most widely used. In addition to combating bias, clinical trial registries serve to increase transparency and access to clinical trials for the public. Clinical trials registries are often searchable (e.g. by disease/indication, drug, location, etc.). Trials are registered by the pharmaceutical, biotech or medical device company (Sponsor) or by the hospital or foundation which is sponsoring the study, or by another organization, such as a contract research organization (CRO) which is running the study. There has been a push from governments and international organizations, especially since 2005, to make clinical trial information more widely available and to standardize registries and processes of registering. The World Health Organization is working toward "achieving consensus on both the minimal and the optimal operating standards for trial registration". === Creation and development === For many years, scientists and others have worried about reporting biases such that negative or null results from initiated clinical trials may be less likely to be published than positive results, thus skewing the literature and our understanding of how well interventions work. This worry has been international and written about for over 50 years. One of the proposals to address this potential bias was a comprehensive register of initiated clinical trials that would inform the public which trials had been started. Ethical issues were those that seemed to interest the public most, as trialists (including those with potential commercial gain) benefited from those who enrolled in trials, but were not required to “give back,” telling the public what they had learned. Those who were particularly concerned by the double standard were systematic reviewers, those who summarize what is known from clinical trials. If the literature is skewed, then the results of a systematic review are also likely to be skewed, possibly favoring the test intervention when in fact the accumulated data do not show this, if all data were made public. ClinicalTrials.gov was originally developed largely as a result of breast cancer consumer lobbying, which led to authorizing language in the FDA Modernization Act of 1997 (Food and Drug Administration Modernization Act of 1997. Pub L No. 105-115, §113 Stat 2296), but the law provided neither funding nor a mechanism of enforcement. In addition, the law required that ClinicalTrials.gov only include trials of serious and life-threatening diseases. 
Then, two events occurred in 2004 that increased public awareness of the problems of reporting bias. First, the then-New York State Attorney General Eliot Spitzer sued GlaxoSmithKline (GSK) because they had failed to reveal results from trials showing that certain antidepressants might be harmful. Shortly thereafter, the International Committee of Medical Journal Editors (ICMJE) announced that their journals would not publish reports of trials unless they had been registered. The ICMJE action was probably the most important motivator for trial registration, as investigators wanted to reserve the possibility that they could publish their results in prestigious journals, should they want to. In 2007, the Food and Drug Administration Amendments Act of 2007 (FDAAA, Public Law 110-85) clarified the requirements for registration and also set penalties for non-compliance. === International participation === The International Committee of Medical Journal Editors (ICMJE) decided that from July 1, 2005, no trials would be considered for publication unless they were included on a clinical trials registry. The World Health Organization has begun the push for clinical trial registration with the initiation of the International Clinical Trials Registry Platform. There has also been action from the pharmaceutical industry, which released plans to make clinical trial data more transparent and publicly available. The revised Declaration of Helsinki, released in October 2008, states that "Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject." The World Health Organization maintains an international registry portal at http://apps.who.int/trialsearch/. WHO states that the international registry's mission is "to ensure that a complete view of research is accessible to all those involved in health care decision making. This will improve research transparency and will ultimately strengthen the validity and value of the scientific evidence base." Since 2007, the International Committee of Medical Journal Editors (ICMJE) has accepted all primary registries in the WHO network in addition to ClinicalTrials.gov. Clinical trial registration in registries other than ClinicalTrials.gov has increased since 2014, irrespective of study design. === Reporting compliance === Various studies have measured the extent to which trials comply with the reporting standards of their registry. === Overview of clinical trial registries === Worldwide, there is a growing number of registries. A 2013 study identified the top five registries (numbers updated as of August 2013). === Overview of preclinical study registries === Similar to clinical research, preregistration can help to improve the transparency and quality of research data in preclinical research. In contrast to clinical research, where preregistration is mandatory in large part, it is still new in preclinical research. A large part of preclinical and basic biomedical research relies on animal experiments. The non-publication of results gained from animal experiments not only distorts the state of research by reinforcing publication bias, it also represents an ethical issue. Preregistration is discussed as a measure that could counteract this problem. A number of registries are suited for the preregistration of preclinical studies. 
== Journal support == Over 200 journals offer a registered reports option (Centre for Open Science, 2019), and the number of journals that are adopting registered reports is approximately doubling each year (Chambers et al., 2019). Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. The editor-in-chief also noted that the editorial staff will be asking for replication of studies with surprising findings based on small sample sizes before allowing the manuscripts to be published. Nature Human Behaviour has adopted the registered report format, as it “shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them”. European Journal of Personality defines this format: “In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes.” Note that only a very small proportion of academic journals in psychology and the neurosciences explicitly state that they welcome submissions of replication studies in their aims and scope or instructions to authors. This does not encourage the reporting of, or even attempts at, replication studies. Overall, the number of participating journals is increasing, as indicated by the Center for Open Science, which maintains a list of journals encouraging the submission of registered reports. == Benefits == Several articles have outlined the rationale for preregistration (e.g., Lakens, 2019; Nosek et al., 2018; Wagenmakers et al., 2012). The primary goal of preregistration is to improve the transparency of reported hypothesis tests, which allows readers to evaluate the extent to which decisions during the data analysis were pre-planned (maintaining statistical error control) or data-driven (increasing the Type 1 or Type 2 error rate). Meta-scientific research has revealed additional benefits. Researchers indicate that preregistering a study leads to a more carefully thought-through research hypothesis, experimental design, and statistical analysis. In addition, preregistration has been shown to encourage better learning of open science concepts; students felt that preregistration helped them understand their dissertation, improved the clarity of their manuscript writing, and promoted rigour, and they were more likely to avoid questionable research practices. It also becomes a tool that supervisors can use to help students avoid questionable research practices. A 2024 study in the Journal of Political Economy: Microeconomics on preregistration in economics journals found that preregistration did not reduce p-hacking and publication bias unless the preregistration was accompanied by a preanalysis plan. == Criticisms == Proponents of preregistration have argued that it is "a method to increase the credibility of published results" (Nosek & Lakens, 2014), that it "makes your science better by increasing the credibility of your results" (Centre for Open Science), and that it "improves the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). 
This argument assumes that non-preregistered exploratory analyses are less "credible" and/or "interpretable" than preregistered confirmatory analyses because they may involve "circular reasoning" in which post hoc hypotheses are based on the observed data (Nosek et al., 2018, p. 2600). However, critics have argued that preregistration is not necessary to identify circular reasoning during exploratory analyses (Rubin, 2020). Circular reasoning can be identified by analysing the reasoning per se without needing to know whether that reasoning was preregistered. Critics have also noted that the idea that preregistration improves research credibility may deter researchers from undertaking non-preregistered exploratory analyses (Coffman & Niederle, 2015; see also Collins et al., 2021, Study 1). In response, preregistration advocates have stressed that exploratory analyses are permitted in preregistered studies, and that the results of these analyses retain some value vis-à-vis hypothesis generation rather than hypothesis testing. Preregistration merely makes the distinction between confirmatory and exploratory research clearer (Nosek et al., 2018; Nosek & Lakens, 2014; Wagenmakers et al., 2012). Hence, although preregistration is supposed to reduce researcher degrees of freedom during the data analysis stage, it is also supposed to be “a plan, not a prison” (Dehaven, 2017). However, critics counterargue that, if preregistration is only supposed to be a plan, and not a prison, then researchers should feel free to deviate from that plan and undertake exploratory analyses without fearing accusations of low research credibility due to circular reasoning and inappropriate research practices such as p-hacking and unreported multiple testing that leads to inflated familywise error rates (e.g., Navarro, 2020). Again, they have pointed out that preregistration is not necessary to address such concerns. For example, concerns about p-hacking and unreported multiple testing can be addressed if researchers engage in other open science practices, such as (a) open data and research materials and (b) robustness or multiverse analyses (Rubin, 2020; Steegen et al., 2016; for several other approaches, see Srivastava, 2018). Finally, and more fundamentally, critics have argued that the distinction between confirmatory and exploratory analyses is unclear and/or irrelevant (Devezer et al., 2020; Rubin, 2020; Szollosi & Donkin, 2019), and that concerns about inflated familywise error rates are unjustified when those error rates refer to abstract, atheoretical studywise hypotheses that are not being tested (Rubin, 2020, 2021; Szollosi et al., 2020). There are also concerns about the practical implementation of preregistration. Many preregistered protocols leave plenty of room for p-hacking (Bakker et al., 2020; Heirene et al., 2021; Ikeda et al., 2019; Singh et al., 2021; Van den Akker et al., 2023), and researchers rarely follow the exact research methods and analyses that they preregister (Abrams et al., 2020; Claesen et al., 2019; Heirene et al., 2021; Clayson et al., 2025; see also Boghdadly et al., 2018; Singh et al., 2021; Sun et al., 2019). 
For example, pre-registered studies are only of higher quality than non-pre-registered studies if the former include a power analysis and a larger sample size than the latter; beyond that, they do not seem to prevent p-hacking and HARKing, as both the proportion of positive results and effect sizes are similar between preregistered and non-preregistered studies (Van den Akker et al., 2023). In addition, a survey of 27 preregistered studies found that researchers deviated from their preregistered plans in all cases (Claesen et al., 2019). The most frequent deviations were with regard to the planned sample size, exclusion criteria, and statistical model. Hence, what were intended as preregistered confirmatory tests ended up as unplanned exploratory tests. Again, preregistration advocates argue that deviations from preregistered plans are acceptable as long as they are reported transparently and justified. They also point out that even vague preregistrations help to reduce researcher degrees of freedom and make any residual flexibility transparent (Simmons et al., 2021, p. 180). A larger study of 92 EEG/ERP studies showed that only 60% of studies adhered to their preregistrations or disclosed all deviations. Notably, registered reports had higher adherence rates (92%) than unreviewed preregistrations (60%). However, critics have argued that it is not useful to identify or justify deviations from preregistered plans when those plans do not reflect high-quality theory and research practice. As Rubin (2020) explained, “we should be more interested in the rationale for the current method and analyses than in the rationale for historical changes that have led up to the current method and analyses” (pp. 378–379). In addition, pre-registering a study requires careful deliberation about the study's hypotheses, research design, and statistical analyses. This depends on the use of pre-registration templates that provide detailed guidance on what to include and why (Bowman et al., 2016; Haven & Van Grootel, 2019; Van den Akker et al., 2021). Many pre-registration templates stress the importance of a power analysis but not the importance of explaining why the chosen methodology was used. In addition to the concerns raised about its practical implementation in quantitative research, critics have also argued that preregistration is less applicable, or even unsuitable, for qualitative research. Pre-registration imposes rigidity, limiting researchers' ability to adapt to emerging data and evolving contexts, which are essential to capturing the richness of participants' lived experiences (Souza-Neto & Moyle, 2025). Additionally, it conflicts with the inductive and flexible nature of theory-building in qualitative research, constraining the exploratory approach that is central to this methodology (Souza-Neto & Moyle, 2025). Finally, some commentators have argued that, under some circumstances, preregistration may actually harm science by providing a false sense of credibility to research studies and analyses (Devezer et al., 2020; McPhetres, 2020; Pham & Oh, 2020; Szollosi et al., 2020). 
Consistent with this view, there is some evidence that researchers view registered reports as being more credible than standard reports on a range of dimensions (Soderberg et al., 2020; see also Field et al., 2020 for inconclusive evidence), although it is unclear whether this represents a "false" sense of credibility due to pre-existing positive community attitudes about preregistration or a genuine causal effect of registered reports on quality of research. == See also == AllTrials Clinical trial registration Metascience Open science == References == == External links == Preregistration resources from the Centre for Open Science Guidelines for creating registered reports by the Center for Open Science As Predicted
Wikipedia/Preregistration_(science)
A hepatitis C vaccine, a vaccine capable of protecting against the hepatitis C virus (HCV), is not yet available. Although vaccines exist for hepatitis A and hepatitis B, development of an HCV vaccine has presented challenges; several candidate vaccines are under development. Most vaccines work through inducing an antibody response that targets the outer surfaces of viruses. However, HCV is highly variable among strains and rapidly mutating, making an effective vaccine very difficult to develop. Another strategy, different from a conventional vaccine, is to use viral vectors, namely adenoviral vectors that contain large parts of the HCV genome itself, to induce the T cell arm of the immune response against HCV. Most of the work to develop a T cell vaccine has been done against a particular genotype. There are six different genotypes, which reflect differences in the structure of the virus. The first approved vaccine will likely only target genotypes 1a and 1b, which account for over 60% of chronic HCV infections worldwide. Vaccines following the first approved vaccine will likely address other genotypes in order of prevalence. VLP-based HCV vaccines are also the subject of intensive research. Since 2014, well-tolerated and extremely effective direct-acting antiviral agents (DAAs) have been available, which allow eradication of the disease in 8–12 weeks in most patients. While this has changed treatment options drastically for patients with HCV, it does not replace a vaccine that would prevent people from ever getting infected with the virus and will likely not be sufficient to eradicate HCV completely. == Specific vaccines == As of 2020, Inovio Pharmaceuticals is developing a synthetic multi-antigen DNA vaccine covering HCV genotypes 1a and 1b and targeting the HCV antigens nonstructural protein 3 (NS3) and 4A (NS4A), as well as the NS4B and NS5A proteins. Following immunization, rhesus macaques mounted strong HCV-specific T cell immune responses strikingly similar to those reported in patients who have cleared the virus on their own. The responses included strong HCV antigen-specific interferon-γ (IFN-γ), tumor necrosis factor-α (TNF-α), and interleukin-2 (IL-2) induction, robust CD4 and CD8 T cell proliferation, and induction of polyfunctional T cells. This vaccine is in a Phase I clinical trial. The major histocompatibility complex class II-associated invariant chain (CD74)—fused with a viral vector to a conserved region of the HCV genome—has been tested as an adjuvant for an HCV vaccine in a cohort of 17 healthy human volunteers. This experimental vaccine was well-tolerated, and those who received the adjuvanted vaccine had stronger anti-HCV immune responses (enhanced magnitude, breadth, and proliferative capacity of anti-HCV-specific T helper cells) compared with volunteers who received the vaccine that lacked this adjuvant. == References ==
Wikipedia/Hepatitis_C_vaccine
Dengue vaccine is a vaccine used to prevent dengue fever in humans. Development of dengue vaccines began in the 1920s but was hindered by the need to create immunity against all four dengue serotypes. As of 2023, there are two commercially available vaccines, sold under the brand names Dengvaxia and Qdenga. Dengvaxia is only recommended in those who have previously had dengue fever or populations in which most people have been previously infected due to a phenomenon known as antibody-dependent enhancement. The value of Dengvaxia is limited by the fact that it may increase the risk of severe dengue in those who have not previously been infected. In 2017, more than 733,000 children and more than 50,000 adult volunteers were vaccinated with Dengvaxia regardless of serostatus, which led to a controversy. Qdenga is designated for people not previously infected. There are other vaccine candidates in development including live attenuated, inactivated, DNA and subunit vaccines. == History == In December 2018, Dengvaxia was approved in the European Union. In May 2019, Dengvaxia was approved in the United States as the first vaccine approved for the prevention of dengue disease caused by all dengue virus serotypes (1, 2, 3, and 4) in people ages nine through 16 who have laboratory-confirmed previous dengue infection and who live in endemic areas. Dengue is endemic in the US territories of American Samoa, Guam, Puerto Rico, and the US Virgin Islands. The safety and effectiveness of the vaccine were determined in three randomized, placebo-controlled studies involving approximately 35,000 individuals in dengue-endemic areas, including Puerto Rico, Latin America, and the Asia Pacific region. The vaccine was determined to be approximately 76 percent effective in preventing symptomatic, laboratory-confirmed dengue disease in individuals 9 through 16 years of age who previously had laboratory-confirmed dengue disease. In March 2021, the European Medicines Agency accepted the filing package for TAK-003 (Qdenga) intended for markets outside of the EU. In August 2022, the Indonesian FDA approved Qdenga for use in individuals six years to 45 years of age and became the first authority in the world to approve Qdenga. Qdenga was approved in the European Union in December 2022. == CYD-TDV (Dengvaxia) == CYD-TDV, sold under the brand name Dengvaxia and made by Sanofi Pasteur, is a live attenuated tetravalent vaccine that is administered as three separate injections, with the initial dose followed by two additional shots given six and twelve months later. The US Food and Drug Administration (FDA) granted the application for Dengvaxia priority review designation and a tropical disease priority review voucher. The approval of Dengvaxia was granted to Sanofi Pasteur. The vaccine has been approved in 19 countries and the European Union, but it is not approved in the US for use in individuals not previously infected by any dengue virus serotype or for whom this information is unknown. Dengvaxia is a chimeric vaccine made using recombinant DNA technology by replacing the PrM (pre-membrane) and E (envelope) structural genes of the yellow fever attenuated 17D strain vaccine with those from the four dengue serotypes. Evidence indicates that CYD-TDV is partially effective in preventing infection, but may lead to a higher risk of severe disease in those who have not been previously infected and then do go on to contract the disease. It is not clear why the vaccinated seronegative population had more serious adverse outcomes. 
A plausible hypothesis is the phenomenon of antibody-dependent enhancement (ADE). American virologist Scott Halstead was one of the first researchers to identify the ADE phenomenon. Dr. Halstead and his colleague Dr. Phillip Russell proposed that the vaccine only be used after antibody testing, to check for prior dengue exposure and avoid vaccination of seronegative individuals. Common side effects include headache, pain at the site of injection, and general muscle pains. Severe side effects may include anaphylaxis. Use is not recommended in people with poor immune function. Safety of use during pregnancy is unclear. Dengvaxia is a weakened but live vaccine and works by triggering an immune response against the four types of dengue virus. Dengvaxia became commercially available in 2016 in 11 countries: Mexico, the Philippines, Indonesia, Brazil, El Salvador, Costa Rica, Paraguay, Guatemala, Peru, Thailand, and Singapore. In 2019, it was approved for medical use in the United States. It is on the World Health Organization's List of Essential Medicines. In 2017, the manufacturer recommended that the vaccine only be used in people who have previously had a dengue infection, as outcomes may be worsened in those who have not been previously infected due to antibody-dependent enhancement. This led to a controversy in the Philippines, where more than 733,000 children and more than 50,000 adult volunteers were vaccinated regardless of serostatus. The World Health Organization (WHO) recommends that countries should consider vaccination with the dengue vaccine CYD-TDV only if the risk of severe dengue in seronegative individuals can be minimized either through pre-vaccination screening or recent documentation of high seroprevalence rates in the area (at least 80% by age nine years). The WHO updated its recommendations regarding the use of Dengvaxia in 2018, based on long-term safety data, stratified by serostatus, released on 29 November 2017. Seronegative vaccine recipients have an excess risk of severe dengue compared to unvaccinated seronegative individuals. For every 13 hospitalizations prevented in seropositive vaccinees, there would be 1 excess hospitalization in seronegative vaccinees per 1,000 vaccinees. WHO recommends serological testing for past dengue infection. In 2017, the manufacturer recommended that the vaccine only be used in people who have previously had a dengue infection, as otherwise there was evidence it may worsen subsequent infections. The initial protocol did not require baseline blood samples before vaccination to establish an understanding of increased risk of severe dengue in participants who had not been previously exposed. In November 2017, Sanofi acknowledged that some participants were put at risk of severe dengue if they had no prior exposure to the infection; subsequently, the Philippine government suspended the mass immunization program with the backing of the WHO, which began a review of the safety data. Phase III trials in Latin America and Asia involved over 31,000 children between the ages of two and 14 years. In the first reports from the trials, vaccine efficacy was 56.5% in the Asian study and 64.7% in the Latin American study in patients who received at least one injection of the vaccine. Efficacy varied by serotype. In both trials, the vaccine reduced the number of severe dengue cases by about 80%. 
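For readers unfamiliar with how efficacy percentages like these are derived, vaccine efficacy in trials of this kind is conventionally computed from the attack rates in the vaccinated and control groups, VE = 1 − (attack rate in vaccinated / attack rate in controls). The short sketch below illustrates the arithmetic with invented numbers; it is not a reconstruction of the CYD-TDV trial data.

```python
# Illustrative calculation of vaccine efficacy (VE) from attack rates:
# VE = 1 - (attack rate among vaccinated) / (attack rate among controls).
# The counts below are invented for illustration only and are NOT the
# CYD-TDV (Dengvaxia) trial data.

def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_control, n_control):
    """Return vaccine efficacy as a fraction, based on relative attack rates."""
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_control = cases_control / n_control
    return 1 - attack_rate_vaccinated / attack_rate_control

# Hypothetical example: 150 symptomatic cases among 10,000 vaccinees versus
# 400 cases among 10,000 controls gives VE = 1 - (0.015 / 0.040) = 62.5%.
print(f"Vaccine efficacy: {vaccine_efficacy(150, 10_000, 400, 10_000):.1%}")
```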
An analysis of both the Latin American and Asian studies at the 3rd year of follow-up showed that the efficacy of the vaccine was 65.6% in preventing hospitalization in children older than nine years of age, but considerably greater (81.9%) for children who were seropositive (indicating previous dengue infection) at baseline. The vaccination series consists of three injections at 0, 6, and 12 months. The vaccine was approved in Mexico, the Philippines, and Brazil in December 2015, and in El Salvador, Costa Rica, Paraguay, Guatemala, Peru, Indonesia, Thailand, and Singapore in 2016. Under the brand name Dengvaxia, it is approved for use in those aged nine years and older and can prevent all four serotypes. In February 2025, Sanofi announced that Dengvaxia had been "definitely discontinued" in Brazil due to low demand, which may have been caused by Qdenga being the first choice locally, as it is safer for individuals with unknown dengue serostatus. == TAK-003 (Qdenga) == TAK-003 or DENVax, sold under the brand name Qdenga and made by Takeda, is a recombinant chimeric attenuated vaccine with DENV1, DENV3, and DENV4 components on a dengue virus type 2 (DENV2) backbone, originally developed at Mahidol University in Bangkok and now funded by Inviragen (as DENVax) and Takeda (as TAK-003). Phase I and II trials were conducted in the United States, Colombia, Puerto Rico, Singapore, and Thailand. The 18-month data, published in the journal Lancet Infectious Diseases, indicate that TAK-003 produced sustained antibody responses against all four virus strains, regardless of previous dengue exposure and dosing schedule. Data from the phase III trial, which began in September 2016, show that TAK-003 was efficacious against symptomatic dengue. Unlike CYD-TDV, TAK-003 does not appear to lack efficacy in seronegative people or to potentially cause them harm. The data appear to show only moderate efficacy against dengue serotypes other than DENV2. Qdenga received approval for use in the European Union in 2022 for people aged 4 and above, and is also approved in the United Kingdom, Brazil, Argentina, Indonesia, and Thailand. Takeda voluntarily withdrew its application for the vaccine's approval in the United States in July 2023 after the FDA sought further data from the firm, which the company stated could not be provided during the current review cycle. == In development == === TV-003/005 === TV-003/005 is a tetravalent admixture of monovalent vaccines, developed by NIAID, that were tested separately for safety and immunogenicity. The vaccine passed phase I trials and phase II studies in the US, Thailand, Bangladesh, India, and Brazil. The National Institutes of Health has conducted phase I and phase II studies in over 1,000 participants in the US. It has also conducted human challenge studies as well as successful studies in non-human primate (NHP) models. NIH has licensed its technology for further development and commercial-scale manufacturing to Panacea Biotec, Serum Institute of India, Instituto Butantan, Vabiotech, Merck, and Medigen. In Brazil, phase III studies are being conducted by Instituto Butantan in collaboration with NIH. Panacea Biotec has conducted phase II clinical studies in India. A company in Vietnam (Vabiotech) is conducting safety tests and developing a clinical trial plan. All four companies are involved in studies of a TetraVax-DV vaccine in conjunction with the US NIH. 
=== TDENV PIV === TDENV PIV (tetravalent dengue virus purified inactivated vaccine) is undergoing phase I trials as part of a collaboration between GlaxoSmithKline (GSK) and the Walter Reed Army Institute of Research (WRAIR). A synergistic formulation with another live attenuated candidate vaccine (prime-boost strategy) is also being evaluated in a phase II study. In prime-boosting, one type of vaccine is followed by a boost with another type in an attempt to improve immunogenicity. === V180 === Merck is studying recombinant subunit vaccines expressed in Drosophila cells. As of 2019, it had completed the phase I stage and found V180 formulations to be generally well tolerated. === DNA vaccines === In 2011, the Naval Medical Research Center attempted to develop a monovalent DNA plasmid vaccine, but early results showed it to be only moderately immunogenic. == Society and culture == === Legal status === On 13 October 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Qdenga, intended for prophylaxis against dengue disease. The applicant for this medicinal product is Takeda GmbH. The active substance of Qdenga is dengue tetravalent vaccine (live, attenuated), a viral vaccine containing live attenuated dengue viruses which replicate locally and elicit humoral and cellular immune responses against the four dengue virus serotypes. Qdenga was approved for medical use in the European Union in December 2022. In February 2023, Qdenga was approved by the UK Medicines and Healthcare products Regulatory Agency (MHRA) for people aged four years and older. In April 2023, Argentina's National Administration of Drugs, Food and Medical Technology (ANMAT) approved the tetravalent vaccine TAK-003, known as Qdenga and developed by the Japanese company Takeda Pharmaceutical Company, making it the only vaccine approved to date to combat dengue in Argentina. It has been used in the 2024 dengue epidemic. In July 2023, Takeda withdrew its application for Qdenga before the FDA, citing the FDA's requirement for additional data not captured in the phase III studies. === Economics === In Indonesia, Dengvaxia costs about US$207 for the recommended three doses as of 2016. Indonesia was the first country to approve Qdenga, in late 2022. === Controversies === ==== Philippines ==== The 2017 dengue vaccine controversy in the Philippines involved a vaccination program run by the Philippines Department of Health (DOH). The DOH vaccinated schoolchildren with Sanofi Pasteur's CYD-TDV (Dengvaxia) dengue vaccine. Some of the children who received the vaccine had never been infected by the dengue virus before. The program was stopped when Sanofi Pasteur advised the government that the vaccine could put previously uninfected people at a somewhat higher risk of a severe case of dengue fever. A political controversy erupted over whether the program was run with sufficient care and who should be held responsible for the alleged harm to the vaccinated children. == References == == External links == "Dengue Vaccine Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). December 2021. Dengue Vaccines at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Dengue_vaccine
The phases of clinical research are the stages in which scientists conduct experiments with a health intervention to obtain sufficient evidence for a process considered effective as a medical treatment. For drug development, the clinical phases start with testing for drug safety in a few human subjects, then expand to many study participants (potentially tens of thousands) to determine if the treatment is effective. Clinical research is conducted on drug candidates, vaccine candidates, new medical devices, and new diagnostic assays. == Description == Clinical trials testing potential medical products are commonly classified into four phases. The drug development process will normally proceed through all four phases over many years. When expressed specifically, a clinical trial phase is capitalized both in name and Roman numeral, such as "Phase I" clinical trial. If the drug successfully passes through Phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population. Phase IV trials are 'post-marketing' or 'surveillance' studies conducted to monitor safety over several years. == Preclinical studies == Before clinical trials are undertaken for a candidate drug, vaccine, medical device, or diagnostic assay, the product candidate is tested extensively in preclinical studies. Such studies involve in vitro (test tube or cell culture) and in vivo (animal model) experiments using wide-ranging doses of the study agent to obtain preliminary efficacy, toxicity and pharmacokinetic information. Such tests assist the developer to decide whether a drug candidate has scientific merit for further development as an investigational new drug. == Phase 0 == Phase 0 is a designation for optional exploratory trials, originally introduced by the United States Food and Drug Administration's (FDA) 2006 Guidance on Exploratory Investigational New Drug (IND) Studies, but now generally adopted as standard practice. Phase 0 trials are also known as human microdosing studies and are designed to speed up the development of promising drugs or imaging agents by establishing very early on whether the drug or agent behaves in human subjects as was expected from preclinical studies. Distinctive features of Phase 0 trials include the administration of single subtherapeutic doses of the study drug to a small number of subjects (10 to 15) to gather preliminary data on the agent's pharmacokinetics (what the body does to the drugs). A Phase 0 study gives no data on safety or efficacy, being by definition a dose too low to cause any therapeutic effect. Drug development companies carry out Phase 0 studies to rank drug candidates to decide which has the best pharmacokinetic parameters in humans to take forward into further development. They enable go/no-go decisions to be based on relevant human models instead of relying on sometimes inconsistent animal data. == Phase I == Phase I trials were formerly referred to as "first-in-man studies" but the field generally moved to the gender-neutral language phrase "first-in-humans" in the 1990s; these trials are the first stage of testing in human subjects. They are designed to test the safety, side effects, best dose, and formulation method for the drug. Phase I trials are not randomized, and thus are vulnerable to selection bias. Normally, a small group of 20–100 healthy volunteers will be recruited. These trials are often conducted in a clinical trial clinic, where the subject can be observed by full-time staff. 
These clinical trial clinics are often run by contract research organizations (CROs) who conduct these studies on behalf of pharmaceutical companies or other research investigators. The subject who receives the drug is usually observed until several half-lives of the drug have passed. This phase is designed to assess the safety (pharmacovigilance), tolerability, pharmacokinetics, and pharmacodynamics of a drug. Phase I trials normally include dose-ranging, also called dose escalation studies, so that the best and safest dose can be found and to discover the point at which a compound is too poisonous to administer. The tested range of doses will usually be a fraction of the dose that caused harm in animal testing. Phase I trials most often include healthy volunteers. However, there are some circumstances when clinical patients are used, such as patients who have terminal cancer or HIV and the treatment is likely to make healthy individuals ill. These studies are usually conducted in tightly controlled clinics called Central Pharmacological Units, where participants receive 24-hour medical attention and oversight. In addition to the previously mentioned unhealthy individuals, "patients who have typically already tried and failed to improve on the existing standard therapies" may also participate in Phase I trials. Volunteers are paid a variable inconvenience fee for their time spent in the volunteer center. Before beginning a Phase I trial, the sponsor must submit an Investigational New Drug application to the FDA detailing the preliminary data on the drug gathered from cellular models and animal studies. Phase I trials can be further divided: === Phase Ia === Single ascending dose (Phase Ia): In single ascending dose studies, small groups of subjects are given a single dose of the drug while they are observed and tested for a period of time to confirm safety. Typically, a small number of participants, usually three, are entered sequentially at a particular dose. If they do not exhibit any adverse side effects, and the pharmacokinetic data are roughly in line with predicted safe values, the dose is escalated, and a new group of subjects is then given a higher dose. If unacceptable toxicity is observed in any of the three participants, an additional number of participants, usually three, are treated at the same dose. This is continued until pre-calculated pharmacokinetic safety levels are reached, or intolerable side effects start showing up (at which point the drug is said to have reached the maximum tolerated dose (MTD)). If an additional unacceptable toxicity is observed, then the dose escalation is terminated and that dose, or perhaps the previous dose, is declared to be the maximally tolerated dose. This particular design assumes that the maximally tolerated dose occurs when approximately one-third of the participants experience unacceptable toxicity. Variations of this design exist, but most are similar (see the simulation sketch below). === Phase Ib === Multiple ascending dose (Phase Ib): Multiple ascending dose studies investigate the pharmacokinetics and pharmacodynamics of multiple doses of the drug, looking at safety and tolerability. In these studies, a group of patients receives multiple low doses of the drug, while samples (of blood and other fluids) are collected at various time points and analyzed to acquire information on how the drug is processed within the body. The dose is subsequently escalated for further groups, up to a predetermined level. 
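The single-ascending-dose scheme described under Phase Ia above is essentially the classic "3+3" dose-escalation rule. Below is a minimal Python sketch of one common variant of that rule; the dose levels and toxicity probabilities are hypothetical values chosen purely for illustration, and real trials add pharmacokinetic stopping criteria and other refinements not modelled here.

```python
import random

def simulate_3_plus_3(dose_levels, true_toxicity, seed=0):
    """Simulate one common variant of the 3+3 dose-escalation scheme.

    dose_levels   -- dose labels, lowest first (hypothetical)
    true_toxicity -- assumed probability of dose-limiting toxicity at each dose
    Returns the dose declared as the maximum tolerated dose (MTD),
    or None if even the lowest dose proves too toxic.
    """
    rng = random.Random(seed)
    mtd = None
    for dose, p_tox in zip(dose_levels, true_toxicity):
        # First cohort: three participants at this dose.
        toxicities = sum(rng.random() < p_tox for _ in range(3))
        if toxicities == 1:
            # One toxicity: expand with three more participants at the same dose.
            toxicities += sum(rng.random() < p_tox for _ in range(3))
        if toxicities <= 1:
            mtd = dose          # dose tolerated; escalate to the next level
            continue
        break                   # two or more toxicities: stop escalation
    return mtd

# Hypothetical dose levels and toxicity probabilities, for illustration only.
print(simulate_3_plus_3(["10 mg", "20 mg", "40 mg", "80 mg"],
                        [0.05, 0.10, 0.30, 0.60]))
```

Escalation stops at the first dose where two or more of up to six participants (roughly one-third) experience unacceptable toxicity, and the previous tolerated dose is reported, consistent with the assumption quoted above.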
=== Food effect === A short trial designed to investigate any differences in absorption of the drug by the body caused by eating before the drug is given. These studies are usually run as a crossover study, with volunteers being given two identical doses of the drug while fasted, and after being fed. == Phase II == Once a dose or range of doses is determined, the next goal is to evaluate whether the drug has any biological activity or effect. Phase II trials are performed on larger groups (50–300 individuals) and are designed to assess how well the drug works, as well as to continue Phase I safety assessments in a larger group of volunteers and patients. Genetic testing is common, particularly when there is evidence of variation in metabolic rate. When the development process for a new drug fails, this usually occurs during Phase II trials when the drug is discovered not to work as planned, or to have toxic effects. Phase II studies are sometimes divided into Phase IIa and Phase IIb. There is no formal definition for these two sub-categories, but generally: Phase IIa studies are usually pilot studies designed to demonstrate clinical efficacy or biological activity ('proof of concept' studies). Phase IIb studies aim to find the optimal dose at which the drug shows biological activity with minimal side effects ('definite dose-finding' studies). === Trial design === Some Phase II trials are designed as case series, demonstrating a drug's safety and activity in a selected group of participants. Other Phase II trials are designed as randomized controlled trials, where some patients receive the drug/device and others receive placebo/standard treatment. Randomized Phase II trials have far fewer patients than randomized Phase III trials. ==== Example: cancer design ==== In the first stage, the investigator attempts to rule out drugs that have no or little biologic activity. For example, the researcher may specify that a drug must have some minimal level of activity, say, in 20% of participants. If the estimated activity level is less than 20%, the researcher chooses not to consider this drug further, at least not at that maximally tolerated dose. If the estimated activity level exceeds 20%, the researcher will add more participants to get a better estimate of the response rate. A typical study for ruling out a 20% or lower response rate enters 14 participants. If no response is observed in the first 14 participants, the drug is considered not likely to have a 20% or higher activity level. The number of additional participants added depends on the degree of precision desired, but ranges from 10 to 20. Thus, a typical cancer phase II study might include fewer than 30 people to estimate the response rate (see the short numerical check below). ==== Efficacy vs effectiveness ==== When a study assesses efficacy, it is looking at whether the drug given in the specific manner described in the study is able to influence an outcome of interest (e.g. tumor size) in the chosen population (e.g. cancer patients with no other ongoing diseases). When a study is assessing effectiveness, it is determining whether a treatment will influence the disease. In an effectiveness study, it is essential that participants are treated as they would be when the treatment is prescribed in actual practice. That would mean that there should be no aspects of the study designed to increase compliance above those that would occur in routine clinical practice. 
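As a short numerical check of the 14-participant rule in the cancer design example above (a sketch using only the 20% response rate and the binomial reasoning implied by the text; the helper function name is illustrative):

```python
from math import comb

def prob_at_most_k_responses(n, k, p):
    """Binomial probability of observing k or fewer responses among n
    participants when the true response rate is p."""
    return sum(comb(n, r) * p**r * (1 - p)**(n - r) for r in range(k + 1))

# If the drug truly produced responses in 20% of participants, the chance of
# seeing no responses at all in the first 14 participants is only about 4.4%.
print(round(prob_at_most_k_responses(14, 0, 0.20), 3))  # ~0.044
```

Because that outcome would be so unlikely under a true 20% response rate, observing zero responses in 14 participants is taken as reasonable grounds to stop developing the drug at that dose.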
The outcomes in effectiveness studies are also more generally applicable than in most efficacy studies (for example does the patient feel better, come to the hospital less or live longer in effectiveness studies as opposed to better test scores or lower cell counts in efficacy studies). There is usually less rigid control of the type of participant to be included in effectiveness studies than in efficacy studies, as the researchers are interested in whether the drug will have a broad effect in the population of patients with the disease. === Success rate === Phase II clinical programs historically have experienced the lowest success rate of the four development phases. In 2010, the percentage of Phase II trials that proceeded to Phase III was 18%, and only 31% of developmental candidates advanced from Phase II to Phase III in a study of trials over 2006–2015. == Phase III == This phase is designed to assess the effectiveness of the new intervention and, thereby, its value in clinical practice. Phase III studies are randomized controlled multicenter trials on large patient groups (300–3,000 or more depending upon the disease/medical condition studied) and are aimed at being the definitive assessment of how effective the drug is, in comparison with current 'gold standard' treatment. Because of their size and comparatively long duration, Phase III trials are the most expensive, time-consuming and difficult trials to design and run, especially in therapies for chronic medical conditions. Phase III trials of chronic conditions or diseases often have a short follow-up period for evaluation, relative to the period of time the intervention might be used in practice. This is sometimes called the "pre-marketing phase" because it actually measures consumer response to the drug. It is common practice that certain Phase III trials will continue while the regulatory submission is pending at the appropriate regulatory agency. This allows patients to continue to receive possibly lifesaving drugs until the drug can be obtained by purchase. Other reasons for performing trials at this stage include attempts by the sponsor at "label expansion" (to show the drug works for additional types of patients/diseases beyond the original use for which the drug was approved for marketing), to obtain additional safety data, or to support marketing claims for the drug. Studies in this phase are by some companies categorized as "Phase IIIB studies." While not required in all cases, it is typically expected that there be at least two successful Phase III trials, demonstrating a drug's safety and efficacy, to obtain approval from the appropriate regulatory agencies such as FDA (US), or the EMA (European Union). Once a drug has proved satisfactory after Phase III trials, the trial results are usually combined into a large document containing a comprehensive description of the methods and results of human and animal studies, manufacturing procedures, formulation details, and shelf life. This collection of information makes up the "regulatory submission" that is provided for review to the appropriate regulatory authorities in different countries. They will review the submission, and if it is acceptable, give the sponsor approval to market the drug. Most drugs undergoing Phase III clinical trials can be marketed under FDA norms with proper recommendations and guidelines through a New Drug Application (NDA) containing all manufacturing, preclinical, and clinical data. 
If any harmful effects are reported anywhere, such drugs need to be recalled immediately from the market. While most pharmaceutical companies refrain from this practice, it is not unusual to see drugs that are still undergoing Phase III clinical trials on the market. === Adaptive design === The design of individual trials may be altered during a trial – usually during Phase II or III – to accommodate interim results for the benefit of the treatment, adjust statistical analysis, or reach early termination of an unsuccessful design, a process called an "adaptive design". Examples are the 2020 World Health Organization Solidarity trial, European Discovery trial, and UK RECOVERY Trial of hospitalized people with severe COVID-19 infection, each of which applies adaptive designs to rapidly alter trial parameters as results from the experimental therapeutic strategies emerge. Adaptive designs within ongoing Phase II–III clinical trials on candidate therapeutics may shorten trial durations and use fewer subjects, possibly expediting decisions for early termination or success, and coordinating design changes for a specific trial across its international locations. === Success rate === For vaccines, the probability of success ranges from 7% for non-industry-sponsored candidates to 40% for industry-sponsored candidates. A 2019 review of average success rates of clinical trials at different phases and diseases over the years 2005–15 found a success range of 5–14%. Separated by diseases studied, cancer drug trials were on average only 3% successful, whereas ophthalmology drugs and vaccines for infectious diseases were 33% successful. Trials using disease biomarkers, especially in cancer studies, were more successful than those not using biomarkers. A 2010 review found about 50% of drug candidates either fail during the Phase III trial or are rejected by the national regulatory agency. == Cost of trials by phases == In the early 21st century, a typical Phase I trial conducted at a single clinic in the United States ranged in cost from $1.4 million for pain or anesthesia studies to $6.6 million for immunomodulation studies. The main expense drivers were the operating and clinical monitoring costs of the Phase I site. The amount of money spent on Phase II or III trials depends on numerous factors, with the therapeutic area being studied and the types of clinical procedures as key drivers. Phase II studies may cost as little as $7 million for cardiovascular projects and as much as $20 million for hematology trials. Phase III trials for dermatology may cost as little as $11 million, whereas a pain or anesthesia Phase III trial may cost as much as $53 million. An analysis of Phase III pivotal trials leading to 59 drug approvals by the US Food and Drug Administration over 2015–16 showed that the median cost was $19 million, but some trials involving thousands of subjects may cost 100 times more. Across all trial phases, the main expenses for clinical trials were administrative staff (about 20% of the total), clinical procedures (about 19%), and clinical monitoring of the subjects (about 11%). == Phase IV == A Phase IV trial is also known as a postmarketing surveillance trial or drug monitoring trial to assure the long-term safety and effectiveness of the drug, vaccine, device or diagnostic test. Phase IV trials involve the safety surveillance (pharmacovigilance) and ongoing technical support of a drug after it receives regulatory approval to be sold. 
Phase IV studies may be required by regulatory authorities or may be undertaken by the sponsoring company for competitive (finding a new market for the drug) or other reasons (for example, the drug may not have been tested for interactions with other drugs, or on certain population groups such as pregnant women, who are unlikely to subject themselves to trials). The safety surveillance is designed to detect any rare or long-term adverse effects over a much larger patient population and longer time period than was possible during the Phase I-III clinical trials. Harmful effects discovered by Phase IV trials may result in a drug being withdrawn from the market or restricted to certain uses; examples include cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx). == Overall cost == The entire process of developing a drug from preclinical research to marketing can take approximately 12 to 18 years and often costs well over $1 billion. == References ==
Wikipedia/Phases_of_clinical_research
The DPT vaccine or DTP vaccine is a class of combination vaccines to protect against three infectious diseases in humans: diphtheria, pertussis (whooping cough), and tetanus (lockjaw). The vaccine components include diphtheria and tetanus toxoids, and either killed whole cells of the bacterium that causes pertussis or pertussis antigens. The term toxoid refers to a vaccine that uses an inactivated toxin produced by the pathogen it targets to generate an immune response. A toxoid vaccine therefore generates an immune response directed against the disease-causing toxin produced by the pathogen, rather than against the pathogen itself. The whole cells or antigens are denoted as either "DTwP" or "DTaP", where the lower-case "w" indicates whole-cell inactivated pertussis and the lower-case "a" stands for "acellular". In comparison to alternative vaccine types, such as live attenuated vaccines, the DTP vaccine does not contain any live pathogen, but rather uses inactivated toxoids (and, for pertussis, either killed whole cells or purified antigens) to generate an immune response; it can therefore be used in immunocompromised populations, since there is no known risk of the vaccine causing the disease itself. As a result, the DTP vaccine is considered safe to use in anyone, and it generates a much more targeted immune response specific to the pathogens of interest. In the United States, the DPT (whole-cell) vaccine was administered as part of the childhood vaccines recommended by the Centers for Disease Control and Prevention (CDC) until 1996, when the acellular DTaP vaccine was licensed for use. == History == Diphtheria and tetanus toxoids and whole-cell pertussis (DTP; now also "DTwP" to differentiate it from the broader class of triple-combination vaccines) vaccination was licensed in 1949. Since the introduction of the combination vaccine, there has been an extensive decline in the incidence of pertussis, or whooping cough, the disease against which the vaccine protects. Additionally, the rates of disease have continued to decline as more extensive immunization strategies have been implemented, including booster doses and a greater emphasis on health literacy. In the 20th century, advancements in vaccination helped to reduce the incidence of childhood pertussis and had a dramatically positive effect on the health of populations in the United States. However, in the early 21st century, reported instances of the disease increased 20-fold due to a downturn in the number of immunizations received, resulting in numerous fatalities. During the 21st century, many parents declined to vaccinate their children against pertussis for fear of perceived side effects, despite scientific evidence showing vaccines to be highly effective and safe. In 2009, a study in the journal Pediatrics concluded that the largest risk among unvaccinated children was not side effects of the vaccine, but rather contraction of the disease that the vaccination aims to protect against. DTP vaccines with acellular pertussis (DTaP; see below) were introduced in the 1990s. The reduced range of antigens causes fewer side effects, but results in a more expensive, shorter-lasting, and possibly less protective vaccine compared to DTwP. High-income countries have mostly switched to DTaP. As of 2023, global production of aP remains limited. 
=== Vaccination rates === In 2016, the CDC reported that 80.4% of children in the US had received four or more DTaP vaccinations by 2 years of life. The vaccination rate for children aged 13–17 with one or more Tdap shots was 90.2% in 2019. Only 43.6% of adults (older than 18) have received a Tdap shot in the last 10 years. The CDC aimed to increase the vaccination rate among 2-year-olds from 80.4% to 90.0%. The World Health Organization (WHO) estimated that 89% of people globally had received at least one dose of DTP vaccine and 84% had received three doses of the vaccine, completing the WHO-recommended primary series (DTP3). The WHO also tracks the DTP3 completion rate among one-year-olds on a yearly basis. The yearly DTP3 completion rate is considered a good proxy for the completeness of childhood vaccination in general. == Combination vaccines with acellular pertussis == DTaP and Tdap are both combination vaccines against diphtheria, tetanus, and pertussis. The "a" indicates that the pertussis toxoids are acellular, while the lower-case "d" and "p" in "Tdap" indicate smaller concentrations of diphtheria toxoids and pertussis antigens. === DTaP === DTaP (also DTP and TDaP) is a combination vaccine against diphtheria, tetanus, and pertussis, in which the pertussis component is acellular. This is in contrast to whole-cell, inactivated DTP (or DTwP). The acellular vaccine uses selected antigens of the pertussis pathogen to induce immunity. Because it uses fewer antigens than the whole-cell vaccines, it is considered to cause fewer side effects, but it is also more expensive. Research suggests that the DTwP vaccine is more effective than DTaP in conferring immunity, because DTaP's narrower antigen base is less effective against current pathogen strains. === Tdap === Tdap (also TDP) is a tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis vaccine. It was licensed in the United States for use in adults and adolescents on 10 June 2005. Two Tdap vaccines are available in the US. In January 2011, the US Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP) recommended the use of Tdap in adults of all ages, including those aged 65 and above. In October 2011, in an effort to reduce the burden of pertussis in infants, the ACIP recommended that unvaccinated pregnant women receive a dose of Tdap. On 24 October 2012, the ACIP voted to recommend the use of Tdap during every pregnancy. The ACIP and Canada's National Advisory Committee on Immunization (NACI) recommended that both adolescents and adults receive Tdap in place of their next Td booster (recommended to be given every ten years). Tdap and Td can be used as prophylaxis for tetanus in wound management. People who will be in contact with young infants are encouraged to get Tdap even if it has been less than five years since Td or TT to reduce the risk of infants being exposed to pertussis. NACI suggests intervals shorter than five years can be used for catch-up programs and other instances where programmatic concerns make five-year intervals difficult. The WHO recommends a pentavalent vaccine, combining the DTP vaccine with vaccines against Haemophilus influenzae type B and hepatitis B. Evidence comparing the effectiveness of this pentavalent vaccine with that of the individual vaccines is not yet available. A 2019 study found that state requirements mandating the use of the Tdap vaccine "increased Tdap vaccine take-up and reduced pertussis (whooping cough) incidence by about 32%." 
== Related combination vaccines == === Excluding pertussis === DT and Td vaccines lack the pertussis component. The Td vaccine is administered to children over the age of seven as well as to adults. It is most commonly administered as a booster shot every 10 years. The Td booster shot may also be administered as protection after a severe burn or dirty wound. The DT vaccine is given to children under the age of seven who are unable to receive the pertussis antigen in the DTaP vaccine due to a contraindication. === Additional targets === In the United States, a combined DTaP-IPV-HepB vaccine, covering DTaP, inactivated polio (IPV), and hepatitis B, is available for children. In the UK, all babies born on or after 1 August 2017 are offered a hexavalent vaccine covering DTaP, IPV, Haemophilus influenzae type b, and hepatitis B (DTaP-Hib-HepB-IPV in short). As of 2023, most of the DTP vaccine procured by UNICEF is of the DTwP-HepB-Hib (pentavalent whole-cell) type. UNICEF plans to procure the DTwP-HepB-Hib-IPV (hexavalent whole-cell) vaccine starting in 2024. == Contraindications == The DPT vaccine should be avoided in persons who experienced a severe allergic reaction, such as anaphylaxis, to a past vaccine containing tetanus, diphtheria, or pertussis. It should also be avoided in persons with a known severe allergy to an ingredient in the vaccine. If the reaction was caused by tetanus toxoids, the CDC recommends considering passive immunization with tetanus immune globulin (TIG) if a person has a large or unclean wound. The DPT vaccine should also be avoided if a person developed encephalopathy (seizures, coma, declined consciousness) within seven days of receiving any pertussis-containing vaccine and the encephalopathy cannot be traced to another cause. A DT vaccine is available for children under the age of seven who have contraindications or precautions to pertussis-containing vaccines. == Side effects == === DTaP === Common side effects include soreness where the shot was given, fever, irritability, tenderness, loss of appetite, and vomiting. Most side effects are mild to moderate and may last from one to three days. More serious but rare reactions after a DTaP vaccination may include seizures, lowered consciousness, or a high fever over 105 °F (41 °C). Allergic reactions are uncommon, but are medical emergencies. Signs of an allergic reaction include hives, dyspnea, wheezing, swelling of the face and throat, syncope, and tachycardia; the child should be rushed to the nearest hospital. === Tdap === Common side effects include pain or swelling where the shot was given, mild fever, headache, tiredness, nausea, vomiting, diarrhea, and stomach ache. Allergic reactions are possible and have the same presentation and indications as described above for allergic reactions to DTaP. Any individual who has experienced a life-threatening allergic reaction after receiving a previous dose of a diphtheria, tetanus, or pertussis containing vaccine should not receive the Tdap vaccination. In pregnant women, research suggests that Tdap administration may be associated with an increased risk of chorioamnionitis, a placental infection. An increased incidence of fever is also noted in pregnant women. Despite the observed increase in incidence of chorioamnionitis in pregnant women following Tdap administration, there has been no observed increase in the incidence of preterm birth, for which chorioamnionitis is a risk factor. 
Research has not discerned an association between Tdap administration during pregnancy and other serious pregnancy complications such as neonatal death and stillbirth. An association between Tdap administration during pregnancy and pregnancy-related hypertensive disorders (such as pre-eclampsia) has not been identified. == Immunization schedules and requirements == === Australia === In Australia, the DTP vaccine is part of the National Immunisation Program (NIP). The vaccine is administered to infants in a series of doses: the first three doses are given at 2, 4, and 6 months of age, followed by a fourth dose at 18 months and a fifth dose at 4 years. Adolescents receive a single booster dose at 12–13 years. Adults are recommended to receive a dTpa booster every 10 years, especially those in close contact with infants. Pregnant women are advised to receive a dTpa booster during each pregnancy, ideally between 20 and 32 weeks of gestation, to protect newborns from pertussis. === France === In France, children are given DTaP-Hib-HepB-IPV vaccines at 2 months (first dose) and 4 months (second dose) with a booster at 11 months of age. A tetravalent booster for diphtheria, pertussis, tetanus and poliomyelitis is given at 6 years, at 11–13 years, then at 25, 45, and 65 years of age, then every 10 years. === Netherlands === In the Netherlands, pertussis is known as kinkhoest and DKTP refers to the DTaP-IPV combination vaccine against diphtheria, kinkhoest, tetanus, and polio. DTaP is given as part of the National Immunisation Programme. === United Kingdom === In the United Kingdom, Td/IPV is called the "3-in-1 teenage booster" and protects against tetanus, diphtheria and polio. It is given by the NHS to all teenagers aged 14 (the hexavalent vaccine is given to infants and provides the first stage of protection against diphtheria, tetanus, and polio, as well as pertussis, Haemophilus influenzae type B and hepatitis B). Subsequent boosters are recommended for foreign travellers where more than 10 years has passed since their last booster. This is provided on the NHS free of charge due to the significant risk that an imported case of polio could pose to public health in Britain. === United States === The standard immunization regimen for children within the United States is five doses of DTaP between the ages of two months and six years. According to the Centers for Disease Control and Prevention (CDC), five doses of DTaP are typically required for a child to be considered fully vaccinated. The CDC recommends that children receive their first dose at two months, the second dose at four months, the third dose at six months, the fourth dose between 15 and 18 months, and the fifth dose between 4–6 years (a small data-structure sketch of this schedule appears at the end of this section). If the fourth dose of the DTaP immunization regimen falls on or after the recipient's fourth birthday, the CDC states that only four doses are required to be fully vaccinated. If an individual under 18 has not received the DTaP vaccine, they should be vaccinated in accordance with the "catch-up schedule" provided by the CDC. Infants younger than twelve months of age, and specifically those less than three months of age, are at highest risk of acquiring pertussis. In the U.S., there is currently no tetanus-diphtheria-pertussis (whooping cough) vaccination recommended or licensed for newborn infants. As a result, in their first few months of life, unprotected infants are at highest risk of life-threatening complications and infections from pertussis. 
Infants should not receive pertussis vaccination before six weeks of age. Ideally, infants should receive DTaP (the whooping cough vaccine for children from 2 months through 6 years of age) at 2, 4, and 6 months of age; they are not protected until the full series is completed. To protect infants younger than twelve months of age who are not yet protected by vaccination against pertussis, ACIP also recommends that adults (e.g., parents, siblings, grandparents, childcare providers, and healthcare personnel) and children receive Tdap at least two weeks before being in contact with the infant. The CDC recommends that adults who have received their childhood DTP series receive a Td or Tdap booster every ten years. For adults who have not received the DTP series, the CDC recommends a three-part vaccine series followed by a Td or Tdap booster every ten years. ==== In pregnancy ==== According to the CDC's Advisory Committee on Immunization Practices (ACIP) guidelines, one dose of Tdap is recommended during each pregnancy to ensure protection against pertussis in newborn infants. The optimal time to administer a dose of Tdap during each pregnancy is between 27 and 36 weeks' gestation. If Tdap is administered early in pregnancy, it should not be administered again during the 27-to-36-week window, as only one dose is recommended during pregnancy. In October 2022, Boostrix (Tetanus Toxoid, Reduced Diphtheria Toxoid and Acellular Pertussis Vaccine, Adsorbed [Tdap]) was approved for immunization during the third trimester of pregnancy to prevent pertussis, commonly known as whooping cough, in infants younger than two months of age. Pregnant women who have not previously been vaccinated with Tdap (i.e., have never received DTP, DTaP, or DT as a child, or Td or TT as an adult) are recommended to receive a series of three Td vaccinations starting during pregnancy to ensure protection against maternal and neonatal tetanus. In such cases, administration of Tdap is recommended after 20 weeks' gestation; in earlier pregnancy, a single dose of Tdap can be substituted for one dose of Td, and the series then completed with Td. For pregnant women not previously vaccinated with Tdap, if Tdap is not administered during pregnancy, it should be administered immediately postpartum. Postpartum administration of Tdap is not equivalent to administration of the vaccination during pregnancy. Because the vaccine is administered postpartum, the mother is unable to develop antibodies that can be transferred to the infant in utero, consequently leaving the infant vulnerable to the diseases preventable by the Tdap vaccine. Postpartum administration of the Tdap vaccine to the mother seeks to reduce the likelihood that the mother will contract a disease that could subsequently be passed on to the infant, although there will still be a roughly two-week period before the protective effects of the vaccine set in. Postpartum administration is an extension of the concept of "cocooning", a term that refers to the full vaccination of all individuals who may come into direct contact with the infant. Cocooning, like postpartum Tdap administration, is not recommended by the CDC. Cocooning depends on ensuring full vaccination of all individuals that the infant may come into contact with, and there may be financial, administrative or personal barriers that preclude full and timely vaccination of all individuals within the "cocoon". 
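As a small illustration of the routine US DTaP ages quoted above, the following is a minimal, hypothetical Python sketch (not CDC tooling; the data structure and function names are illustrative) that records the five-dose schedule and reports the recommended age window for the next dose.

```python
# Routine US DTaP dose ages from the text, in months; 4-6 years = 48-72 months.
DTAP_SCHEDULE_MONTHS = [(2, 2), (4, 4), (6, 6), (15, 18), (48, 72)]

def next_dose_window(doses_received: int):
    """Return the recommended age window (in months) for the next DTaP dose,
    or None if the five-dose series described in the text is complete."""
    if doses_received >= len(DTAP_SCHEDULE_MONTHS):
        return None
    return DTAP_SCHEDULE_MONTHS[doses_received]

print(next_dose_window(3))  # three doses given -> fourth dose due at (15, 18) months
```

A real scheduling tool would also need to encode the catch-up rules and the exception for a fourth dose given on or after the fourth birthday, both of which are summarized above.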
== Brand names == === Australia === === United Kingdom === Brand names in the United Kingdom include Revaxis (Sanofi Pasteur). === United States === As of January 2020, there are six DTaP vaccines and two Tdap vaccines licensed and available for use in the United States. All of them are indicated as childhood vaccinations, each with its own age-based schedule. == References == == Further reading == === Diphtheria === World Health Organization (2009). The immunological basis for immunization : module 2: diphtheria — update 2009. World Health Organization (WHO). hdl:10665/44094. ISBN 9789241597869. Ramsay M, ed. (2013). "Chapter 15: Diphtheria". Immunisation against infectious disease. Public Health England. Roush SW, Baldy LM, Hall MA, eds. (March 2019). Manual for the surveillance of vaccine-preventable diseases. Atlanta GA: U.S. Centers for Disease Control and Prevention (CDC). === Pertussis === World Health Organization (2017). The immunological basis for immunization series: module 4: pertussis, update 2017. World Health Organization (WHO). hdl:10665/259388. ISBN 9789241513173. Ramsay M, ed. (2013). "Chapter 24: Pertussis". Immunisation against infectious disease. Public Health England. Hamborsky J, Kroger A, Wolfe S, eds. (2015). "Chapter 16: Pertussis". Epidemiology and Prevention of Vaccine-Preventable Diseases (13th ed.). Washington D.C.: U.S. Centers for Disease Control and Prevention (CDC). ISBN 978-0990449119. Roush SW, Baldy LM, Hall MA, eds. (March 2019). "Chapter 10: Pertussis". Manual for the surveillance of vaccine-preventable diseases. Atlanta GA: U.S. Centers for Disease Control and Prevention (CDC). === Tetanus === World Health Organization (2018). The immunological basis for immunization series: module 3: tetanus: update 2018. World Health Organization (WHO). hdl:10665/275340. ISBN 9789241513616. Ramsay M, ed. (2013). "Chapter 30: Tetanus". Immunisation against infectious disease. Public Health England. Hamborsky J, Kroger A, Wolfe S, eds. (2015). "Chapter 21: Tetanus". Epidemiology and Prevention of Vaccine-Preventable Diseases (13th ed.). Washington D.C.: U.S. Centers for Disease Control and Prevention (CDC). ISBN 978-0990449119. Roush SW, Baldy LM, Hall MA, eds. (March 2019). "Chapter 16: Tetanus". Manual for the surveillance of vaccine-preventable diseases. Atlanta GA: U.S. Centers for Disease Control and Prevention (CDC). == External links == "Tdap (Tetanus, Diphtheria, Pertussis) Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). 19 May 2023. "DTaP (Diphtheria, Tetanus, Pertussis) Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). 21 July 2023. "DTaP/Tdap/Td ACIP Vaccine Recommendations". U.S. Centers for Disease Control and Prevention (CDC). 24 September 2024. Tetanus Toxoid at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Diphtheria-Tetanus Vaccine at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Diphtheria-Tetanus-Pertussis Vaccine at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Diphtheria-Tetanus-acellular Pertussis Vaccines at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/DPT_vaccine
The smallpox vaccine is used to prevent smallpox infection caused by the variola virus. It is the first vaccine to have been developed against a contagious disease. In 1796, British physician Edward Jenner demonstrated that an infection with the relatively mild cowpox virus conferred immunity against the deadly smallpox virus. Cowpox served as a natural vaccine until the modern smallpox vaccine emerged in the 20th century. From 1958 to 1977, the World Health Organization (WHO) conducted a global vaccination campaign that eradicated smallpox, making it the only human disease to be eradicated. Although routine smallpox vaccination is no longer performed on the general public, the vaccine is still being produced for research, and to guard against bioterrorism, biological warfare, and mpox. The term vaccine derives from vacca, the Latin word for cow, reflecting the origins of smallpox vaccination. Edward Jenner referred to cowpox as variolae vaccinae (smallpox of the cow). The origins of the smallpox vaccine became murky over time, especially after Louis Pasteur developed laboratory techniques for creating vaccines in the 19th century. Allan Watt Downie demonstrated in 1939 that the modern smallpox vaccine was serologically distinct from cowpox, and vaccinia was subsequently recognized as a separate viral species. Whole-genome sequencing has revealed that vaccinia is most closely related to horsepox, and the cowpox strains found in Great Britain are the least closely related to vaccinia. == Types == As the oldest vaccine, the smallpox vaccine has gone through several generations of medical technology. From 1796 to the 1880s, the vaccine was transmitted from one person to another through arm-to-arm vaccination. Smallpox vaccine was successfully maintained in cattle starting in the 1840s, and calf lymph vaccine became the leading smallpox vaccine in the 1880s. First-generation vaccines grown on the skin of live animals were widely distributed in the 1950s–1970s to eradicate smallpox. Second-generation vaccines were grown in chorioallantoic membrane or cell cultures for greater purity, and they were used in some areas during the smallpox eradication campaign. Third-generation vaccines are based on attenuated strains of vaccinia and saw limited use prior to the eradication of smallpox. All three generations of vaccine are available in stockpiles. First and second-generation vaccines contain live unattenuated vaccinia virus and can cause serious side effects in a small percentage of recipients, including death in 1–10 people per million vaccinations. Third-generation vaccines are much safer due to the milder side effects of the attenuated vaccinia strains. Second and third-generation vaccines are still being produced, with manufacturing capacity being built up in the 2000s due to fears of bioterrorism and biological warfare. === First-generation === The first-generation vaccines are manufactured by growing live vaccinia virus in the skin of live animals. Most first-generation vaccines are calf lymph vaccines that were grown on the skin of cows, but other animals were also used, including sheep. The development of freeze-dried vaccine in the 1950s made it possible to preserve vaccinia virus for long periods of time without refrigeration, leading to the availability of freeze-dried vaccines such as Dryvax.: 115  The vaccine is administered by multiple puncture of the skin (scarification) with a bifurcated needle that holds vaccine solution in the fork. 
The skin should be cleaned with water rather than alcohol, as the alcohol could inactivate the vaccinia virus.: 292  If alcohol is used, it must be allowed to evaporate completely before the vaccine is administered.: 292  Vaccination results in a skin lesion that fills with pus and eventually crusts over. This manifestation of localized vaccinia infection is known as a vaccine "take" and demonstrates immunity to smallpox. After 2–3 weeks, the scab will fall off and leave behind a vaccine scar. First generation vaccines consist of live, unattenuated vaccinia virus. One-third of first-time vaccinees develop side effects significant enough to miss school, work, or other activities, or have difficulty sleeping. 15–20% of children receiving the vaccine for the first time develop fevers of over 102 °F (39 °C). The vaccinia lesion can transmit the virus to other people. Rare side effects include postvaccinal encephalitis and myopericarditis. Many countries have stockpiled first generation smallpox vaccines. A 2006 predictive analysis estimated that, if the populations of Germany and the Netherlands underwent mass vaccination with the New York City Board of Health strain, a total of 9.8 people in the Netherlands and 46.2 people in Germany would die from uncontrolled vaccinia infection. More deaths were predicted for vaccines based on other strains: Lister (55.1 Netherlands, 268.5 Germany) and Bern (303.5 Netherlands, 1,381 Germany). === Second-generation === The second-generation vaccines consist of live vaccinia virus grown in the chorioallantoic membrane or cell culture. The second-generation vaccines are also administered through scarification with a bifurcated needle, and they carry the same side effects as the first-generation vaccinia strain that was cloned. However, the use of eggs or cell culture allows for vaccine production in a sterile environment, while first-generation vaccine contains skin bacteria from the animal that the vaccine was grown on. Ernest William Goodpasture, Alice Miles Woodruff, and G. John Buddingh grew vaccinia virus on the chorioallantoic membrane of chicken embryos in 1932. The Texas Department of Health began producing egg-based vaccine in 1939 and started using it in vaccination campaigns in 1948.: 588  Lederle Laboratories began selling its Avianized smallpox vaccine in the United States in 1959. Egg-based vaccine was also used widely in Brazil, New Zealand, and Sweden, and on a smaller scale in many other countries. Concerns about temperature stability and avian sarcoma leukosis virus prevented it from being used more widely during the eradication campaign, although no increase in leukemia was seen in Brazil and Sweden despite the presence of ASLV in the chickens.: 588  Vaccinia was first grown in cell culture in 1931 by Thomas Milton Rivers. The WHO funded work in the 1960s at the Dutch National Institute for Public Health and the Environment (RIVM) on growing the Lister/Elstree strain in rabbit kidney cells and tested it in 45,443 Indonesian children in 1973, with results comparable to the same strain of calf lymph vaccine.: 588–589  Two other cell culture vaccines were developed from the Lister strain in the 2000s: Elstree-BN (Bavarian Nordic) and VV Lister CEP (Chicken Embryo Primary, Sanofi Pasteur). Lister/Elstree-RIVM was stockpiled in the Netherlands, and Elstree-BN was sold to some European countries for stockpiles. However, Sanofi dropped its own vaccine after it acquired Acambis in 2008. 
ACAM2000 is a vaccine developed by Acambis, which was acquired by Sanofi Pasteur in 2008, before selling the smallpox vaccine to Emergent Biosolutions in 2017. Six strains of vaccinia were isolated from 3,000 doses of Dryvax and found to exhibit significant variation in virulence. The strain with the most similar virulence to the overall Dryvax mixture was selected and grown in MRC-5 cells to make the ACAM1000 vaccine. After a successful phase I trial of ACAM1000, the virus was passaged three times in Vero cells to develop ACAM2000, which entered mass production at Baxter. The United States ordered over 200 million doses of ACAM2000 in 1999–2001 for its stockpile, and production is ongoing to replace expired vaccine. ACAM2000 was approved for mpox prevention in the United States in August 2024. === Third-generation === The third-generation vaccines are based on attenuated vaccinia viruses that are much less virulent and carry lesser side effects. The attenuated viruses may be replicating or non-replicating. ==== MVA ==== Modified vaccinia Ankara (MVA, German: Modifiziertes Vakziniavirus Ankara) is a replication-incompetent variant of vaccinia that was developed in West Germany through serial passage. The original Ankara strain of vaccinia was maintained at the vaccine institute in Ankara, Turkey on donkeys and cows. The Ankara strain was taken to West Germany in 1953, where Herrlich and Mayr grew it on chorioallantoic membrane at the University of Munich. After 572 serial passages, the vaccinia virus had lost over 14% of its genome and could no longer replicate in human cells. MVA was used in West Germany in 1977–1980, but the eradication of smallpox ended the vaccination campaign after only 120,000 doses. MVA stimulates the production of fewer antibodies than replicating vaccines. During the smallpox eradication campaign, MVA was considered to be a pre-vaccine that would be administered before a replicating vaccine to reduce the side effects, or an alternative vaccine that could be safely given to people at high risk from a replicating vaccine.: 585  Japan evaluated MVA and rejected it due to its low immunogenicity, deciding to develop its own attenuated vaccine instead. In the 2000s, MVA was tested in animal models at much higher dosages. When MVA is given to monkeys at 40 times the dosage of Dryvax, it stimulates a more rapid immune response while still causing lesser side effects. ==== MVA-BN ==== MVA-BN (also known as: Imvanex in the European Union; Imvamune in Canada; and Jynneos) is a vaccine manufactured by Bavarian Nordic by growing MVA in cell culture. Unlike replicating vaccines, MVA-BN is administered by injection via the subcutaneous route and does not result in a vaccine "take." A "take" or "major cutaneous reaction" is a pustular lesion or an area of definite induration or congestion surrounding a central lesion, which can be a scab or an ulcer. MVA-BN can also be administered intradermally to increase the number of available doses. It is safer for immunocompromised patients and those who are at risk from a vaccinia infection. MVA-BN has been approved in the European Union, Canada, and the United States. Clinical trials have found that MVA-BN is safer and just as immunogenic as ACAM2000. This vaccine has also been approved for use against mpox. ==== LC16m8 ==== LC16m8 is a replicating attenuated strain of vaccinia that is manufactured by Kaketsuken in Japan. 
Working at the Chiba Serum Institute in Japan, So Hashizume passaged the Lister strain 45 times in primary rabbit kidney cells, interrupting the process after passages 36, 42, and 45 to grow clones on chorioallantoic membrane and select for pock size. The resulting variant was designated LC16m8 (Lister clone 16, medium pocks, clone 8). Unlike the severely-damaged MVA, LC16m8 contains every gene that is present in the ancestral vaccinia. However, a single-nucleotide deletion truncates membrane protein B5R from a residue length of 317 to 92. Although the truncated protein decreases production of extracellular enveloped virus, animal models have shown that antibodies against other membrane proteins are sufficient for immunity. LC16m8 was approved in Japan in 1975 after testing in over 50,000 children. Vaccination with LC16m8 results in a vaccine "take", but safety is similar to MVA. == Safety == Vaccinia is infectious, which improves its effectiveness, but causes serious complications for people with impaired immune systems (for example chemotherapy and AIDS patients) or history of eczema, and pregnant women. It is also not recommended for anyone who lives with someone who belongs to any of the aforementioned groups. According to the US Centers for Disease Control and Prevention (CDC), "within 3 days of being exposed to the virus, the vaccine might protect you from getting the disease. If you still get the disease, you might get much less sick than an unvaccinated person would. Within 4 to 7 days of being exposed to the virus, the vaccine likely gives you some protection from the disease. If you still get the disease, you might not get as sick as an unvaccinated person would." In May 2007, the Vaccines and Related Biological Products Advisory Committee (VRBPAC) of the US Food and Drug Administration (FDA) voted unanimously that a new live virus vaccine produced by Acambis, ACAM2000, is both safe and effective for use in persons at high risk of exposure to smallpox virus. However, due to the high rate of serious adverse effects, the vaccine will only be made available to the CDC for the Strategic National Stockpile. ACAM2000 was approved for medical use in the United States in August 2007. == Stockpiles == Since smallpox has been eradicated, the public is not routinely vaccinated against the disease. The World Health Organization maintained a stockpile of 200 million doses in 1980, to guard against reemergence of the disease, but 99% of the stockpile was destroyed in the late 1980s when smallpox failed to return. After the September 11 attacks in 2001, many governments began building up vaccine stockpiles again for fear of bioterrorism. Several companies sold off their stockpiles of vaccines manufactured in the 1970s, and production of smallpox vaccines resumed. Aventis Pasteur discovered a stockpile from the 1950s and donated it to the US government. Stockpiles of newer vaccines must be repurchased periodically since they carry expiration dates. The United States had received 269 million doses of ACAM2000 and 28 million doses of MVA-BN by 2019, but only 100 million doses of ACAM2000 and 65,000 doses of MVA-BN were still available from the stockpile at the start of the 2022–2023 mpox outbreak. First-generation vaccines have no specified expiration date and remain viable indefinitely in deep freeze. The U.S. stockpile of WetVax was manufactured in 1956–1957 and maintained since then at −4 °F (−20 °C), and it was still effective when tested in 2004. 
Replicating vaccines also remain effective even at 1:10 dilution, so a limited number of doses can be stretched to cover a much larger population. == History == === Variolation === The mortality of the severe form of smallpox – variola major – was very high without vaccination, up to 35% in some outbreaks. A method of inducing immunity known as inoculation, insufflation or "variolation" was practiced before the development of a modern vaccine and likely occurred in Africa and China well before the practice arrived in Europe. It may also have occurred in India, but this is disputed; other investigators contend the ancient Sanskrit medical texts of India do not describe these techniques. The first clear reference to smallpox inoculation was made by the Chinese author Wan Quan (1499–1582) in his Douzhen xinfa (痘疹心法) published in 1549. Inoculation for smallpox does not appear to have been widespread in China until the reign era of the Longqing Emperor (r. 1567–1572) during the Ming Dynasty. In China, powdered smallpox scabs were blown up the noses of the healthy. The patients would then develop a mild case of the disease and from then on were immune to it. The technique did have a 0.5–2.0% mortality rate, but that was considerably less than the 20–30% mortality rate of the disease itself. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700: one by Dr. Martin Lister, who received a report from an employee of the East India Company stationed in China, and another by Clopton Havers. According to Voltaire (1742), the Turks derived their use of inoculation from neighbouring Circassia. Voltaire does not speculate on where the Circassians derived their technique from, though he reports that the Chinese have practiced it "these hundred years". Variolation was also practiced throughout the latter half of the 17th century by physicians in Turkey, Persia, and Africa. In 1714 and 1716, two reports of the Ottoman Turkish method of inoculation were made to the Royal Society in England, by Emmanuel Timoni, a doctor affiliated with the British Embassy in Constantinople, and Giacomo Pylarini. Source material on Lady Mary Wortley Montagu tells us: "When Lady Mary was in the Ottoman Empire, she discovered the local practice of inoculation against smallpox called variolation." In 1718 she had her son, aged five, variolated. He recovered quickly. She returned to London and had her daughter variolated in 1721 by Charles Maitland, during an epidemic of smallpox. This encouraged the British Royal Family to take an interest, and a trial of variolation was carried out on prisoners in Newgate Prison. This was successful, and in 1722 Caroline of Ansbach, the Princess of Wales, allowed Maitland to variolate her children. The success of these variolations assured the British people that the procedure was safe. Stimulated by a severe epidemic, variolation was first employed in North America in 1721. The procedure had been known in Boston since 1706, when preacher Cotton Mather learned it from Onesimus, a man he held as a slave, who – like many of his peers – had been inoculated in Africa before being kidnapped. This practice was widely criticized at first. However, a limited trial showed that six deaths occurred out of 244 who were variolated (2.5%), while 844 out of 5980 died of natural disease (14%), and the process was widely adopted throughout the colonies. The inoculation technique was documented as having a mortality rate of only one in a thousand. 
Two years after Kennedy's description appeared, March 1718, Dr. Charles Maitland successfully inoculated the five-year-old son of the British ambassador to the Turkish court under orders from the ambassador's wife Lady Mary Wortley Montagu, who four years later introduced the practice to England. An account from letter by Lady Mary Wortley Montagu to Sarah Chiswell, dated 1 April 1717, from the Turkish Embassy describes this treatment: The small-pox so fatal and so general amongst us is here entirely harmless by the invention of ingrafting (which is the term they give it). There is a set of old women who make it their business to perform the operation. Every autumn in the month of September, when the great heat is abated, people send to one another to know if any of their family has a mind to have the small-pox. They make parties for this purpose, and when they are met (commonly fifteen or sixteen together) the old woman comes with a nutshell full of the matter of the best sort of small-pox and asks what veins you please to have opened. She immediately rips open that you offer to her with a large needle (which gives you no more pain than a common scratch) and puts into the vein as much venom as can lye upon the head of her needle, and after binds up the little wound with a hollow bit of shell, and in this manner opens four or five veins. ... The children or young patients play together all the rest of the day and are in perfect health till the eighth. Then the fever begins to seize them and they keep their beds two days, very seldom three. They have very rarely above twenty or thirty in their faces, which never mark, and in eight days time they are as well as before the illness. ... There is no example of any one that has died in it, and you may believe I am very well satisfied of the safety of the experiment since I intend to try it on my dear little son. I am patriot enough to take pains to bring this useful invention into fashion in England, and I should not fail to write to some of our doctors very particularly about it if I knew any one of them that I thought had virtue enough to destroy such a considerable branch of their revenue for the good of mankind, but that distemper is too beneficial to them not to expose to all their resentment the hardy wight that should undertake to put an end to it. Perhaps if I live to return I may, however, have courage to war with them. === Early vaccination === In the early empirical days of vaccination, before Louis Pasteur's work on establishing the germ theory and Joseph Lister's on antisepsis and asepsis, there was considerable cross-infection. William Woodville, one of the early vaccinators and director of the London Smallpox Hospital is thought to have contaminated the cowpox matter – the vaccine – with smallpox matter and this essentially produced variolation. Other vaccine material was not reliably derived from cowpox, but from other skin eruptions of cattle. During the earlier days of empirical experimentation in 1758, American Calvinist Jonathan Edwards died from a smallpox inoculation. Some of the earliest statistical and epidemiological studies were performed by James Jurin in 1727 and Daniel Bernoulli in 1766. In 1768, Dr John Fewster reported that variolation induced no reaction in persons who had had cowpox. Edward Jenner was born in Berkeley, England. As a young child, Jenner was variolated with the other schoolboys through parish funds, but nearly died due to the seriousness of his infection. 
Fed purgative medicine and going through the bloodletting process, Jenner was put in one of the variolation stables until he recovered. At the age of 13, he was apprenticed to apothecary Daniel Ludlow and later surgeon George Hardwick in nearby Sodbury. He observed that people who caught cowpox while working with cattle were known not to catch smallpox. Jenner assumed a causal connection but the idea was not taken up at that time. From 1770 to 1772 Jenner received advanced training in London at St. George's Hospital and as the private pupil of John Hunter, then returned to set up practice in Berkeley. Perhaps there was already an informal public understanding of some connection between disease resistance and working with cattle. The "beautiful milkmaid" seems to have been a frequent image in the art and literature of this period. But it is known for certain that in the years following 1770, at least six people in England and Germany (Sevel, Jensen, Jesty 1774, Rendall, Plett 1791) tested successfully the possibility of using the cowpox vaccine as an immunization for smallpox in humans. Jenner sent a paper reporting his observations to the Royal Society in April 1797. It was not submitted formally and there is no mention of it in the Society's records. Jenner had sent the paper informally to Sir Joseph Banks, the Society's president, who asked Everard Home for his views. Reviews of his rejected report, published for the first time in 1999, were skeptical and called for further vaccinations. Additional vaccinations were performed and in 1798 Jenner published his work entitled An Inquiry into the Causes and Effects of the Variolae Vaccinae, a disease discovered in some of the western counties of England, particularly Gloucestershire and Known by the Name of Cow Pox. It was an analysis of 23 cases including several individuals who had resisted natural exposure after previous cowpox. It is not known how many Jenner vaccinated or challenged by inoculation with smallpox virus; e.g. Case 21 included 'several children and adults'. Crucially all of at least four whom Jenner deliberately inoculated with smallpox virus resisted it. These included the first and last patients in a series of arm-to-arm transfers. He concluded that cowpox inoculation was a safe alternative to smallpox inoculation, but rashly claimed that the protective effect was lifelong. This last proved to be incorrect. Jenner also tried to distinguish between 'True' cowpox which produced the desired result and 'Spurious' cowpox which was ineffective and/or produced severe reaction. Modern research suggests Jenner was trying to distinguish between effects caused by what would be recognised as a non-infectious vaccine, a different virus (e.g. paravaccinia/milker's nodes), or contaminating bacterial pathogens. This caused confusion at the time, but would become important criteria in vaccine development. A further source of confusion was Jenner's belief that fully effective vaccine obtained from cows originated in an equine disease, which he mistakenly referred to as grease. This was criticised at the time but vaccines derived from horsepox were soon introduced and later contributed to the complicated problem of the origin of vaccinia virus, the virus in present-day vaccine.: 165–78  The introduction of the vaccine to the New World took place in Trinity, Newfoundland, in 1798 by Dr. John Clinch, boyhood friend and medical colleague of Jenner. The first smallpox vaccine in the United States was administered in 1799. 
The physician Valentine Seaman gave his children a smallpox vaccination using a serum acquired from Jenner. By 1800, Jenner's work had been published in all the major European languages and had reached Benjamin Waterhouse in the United States – an indication of rapid spread and deep interest.: 262–67  Despite some concern about the safety of vaccination the mortality using carefully selected vaccine was close to zero, and it was soon in use all over Europe and the United States. In 1804 the Balmis Expedition, an official Spanish mission commanded by Francisco Javier de Balmis, sailed to spread the vaccine throughout the Spanish Empire, first to the Canary Islands and on to Spanish Central America. While his deputy, José Salvany, took vaccine to the west and east coasts of Spanish South America, Balmis sailed to Manila in the Philippines and on to Canton and Macao on the Chinese coast. He returned to Spain in 1806. The vaccine was not carried in the form of flasks, but in the form of 22 orphaned boys, who were 'carriers' of the live cowpox virus. After arrival, "other Spanish governors and doctors used enslaved girls to move the virus between islands, using lymph fluid harvested from them to inoculate their local populations". Napoleon was an early proponent of smallpox vaccination and ordered that army recruits be given the vaccine. Additionally a vaccination program was created for the French Army and his Imperial Guard. In 1811 he had his son, Napoleon II, vaccinated after his birth. By 1815 about half of French children were vaccinated and by the end of the Napoleonic Empire smallpox deaths accounted for 1.8% of deaths, as opposed to the 4.8% of deaths it accounted for at the time of the French Revolution. On March 26, 1806, the Swiss canton Thurgau became the first state in the world to introduce compulsory smallpox vaccinations, by order of the cantonal councillor Jakob Christoph Scherb. Half a year later, Elisa Bonaparte issued a corresponding order for her Principality of Lucca and Piombino on 25 December 1806. On 26 August 1807, Bavaria introduced a similar measure. Baden followed in 1809, Prussia in 1815, Württemberg in 1818, Sweden in 1816, England in 1867 and the German Empire in 1874 through the Reichs Vaccination Act. In Lutheran Sweden, the Protestant clergy played a pioneering role in voluntary smallpox vaccination as early as 1800. The first vaccination was carried out in Liechtenstein in 1801, and from 1812 it was mandatory to vaccinate. The question of who first tried cowpox inoculation/vaccination cannot be answered with certainty. Most, but still limited, information is available for Benjamin Jesty, Peter Plett and John Fewster. In 1774 Jesty, a farmer of Yetminster in Dorset, observing that the two milkmaids living with his family were immune to smallpox, inoculated his family with cowpox to protect them from smallpox. He attracted a certain amount of local criticism and ridicule at the time then interest waned. Attention was later drawn to Jesty, and he was brought to London in 1802 by critics jealous of Jenner's prominence at a time when he was applying to Parliament for financial reward. During 1790–92 Peter Plett, a teacher from Holstein, reported limited results of cowpox inoculation to the Medical Faculty of the University of Kiel. However, the Faculty favoured variolation and took no action. John Fewster, a surgeon friend of Jenner's from nearby Thornbury, discussed the possibility of cowpox inoculation at meetings as early as 1765. 
He may have done some cowpox inoculations in 1796 at about the same time that Jenner vaccinated Phipps. However, Fewster, who had a flourishing variolation practice, may have considered this option but used smallpox instead. He thought vaccination offered no advantage over variolation, but maintained friendly contact with Jenner and certainly made no claim of priority for vaccination when critics attacked Jenner's reputation. It seems clear that the idea of using cowpox instead of smallpox for inoculation was considered, and actually tried in the late 18th century, and not just by the medical profession. Therefore, Jenner was not the first to try cowpox inoculation. However, he was the first to publish his evidence and distribute vaccine freely, provide information on selection of suitable material, and maintain it by arm-to-arm transfer. The authors of the official World Health Organization (WHO) account Smallpox and its Eradication assessing Jenner's role wrote:: 264  Publication of the Inquiry and the subsequent energetic promulgation by Jenner of the idea of vaccination with a virus other than variola virus constituted a watershed in the control of smallpox for which he, more than anyone else deserves the credit. As vaccination spread, some European countries made it compulsory. Concern about its safety led to opposition and then repeal of legislation in some instances.: 236–40  Compulsory infant vaccination was introduced in England by the Vaccination Act 1853 (16 & 17 Vict. c. 100). By 1871, parents could be fined for non-compliance, and then imprisoned for non-payment.: 202–13  This intensified opposition, and the Vaccination Act 1898 (61 & 62 Vict. c. 49) introduced a conscience clause. This allowed exemption on production of a certificate of conscientious objection signed by two magistrates. Such certificates were not always easily obtained and a further act in 1907 allowed exemption by a statutory declaration which could not be refused. Although theoretically still compulsory, the Vaccination Act 1907 (7 Edw. 7. c. 31) effectively marked the end of compulsory infant vaccination in England.: 233–38  In the United States vaccination was regulated by individual states, the first to impose compulsory vaccination being Massachusetts in 1809. There then followed sequences of compulsion, opposition and repeal in various states. By 1930 Arizona, Utah, North Dakota and Minnesota prohibited compulsory vaccination, 35 states allowed regulation by local authorities, or had no legislation affecting vaccination, whilst in ten states, including Washington, D.C. and Massachusetts, infant vaccination was compulsory.: 292–93  Compulsory infant vaccination was regulated by only allowing access to school for those who had been vaccinated. Those seeking to enforce compulsory vaccination argued that the public good overrode personal freedom, a view supported by the U.S. Supreme Court in Jacobson v. Massachusetts in 1905, a landmark ruling which set a precedent for cases dealing with personal freedom and the public good. Louis T. Wright, an African-American Harvard Medical School graduate (1915), introduced, while serving in the Army during World War I, intradermal, smallpox vaccination for the soldiers. 
=== Developments in production === Until the end of the 19th century, vaccination was performed either directly with vaccine produced on the skin of calves or, particularly in England, with vaccine obtained from the calf but then maintained by arm-to-arm transfer; initially in both cases vaccine could be dried on ivory points for short-term storage or transport but increasing use was made of glass capillary tubes for this purpose towards the end of the century. During this period there were no adequate methods for assessing the safety of the vaccine and there were instances of contaminated vaccine transmitting infections such as erysipelas, tetanus, septicaemia and tuberculosis. In the case of arm-to-arm transfer there was also the risk of transmitting syphilis. Although this did occur occasionally, estimated as 750 cases in 100 million vaccinations,: 122  some critics of vaccination e.g. Charles Creighton believed that uncontaminated vaccine itself was a cause of syphilis. Smallpox vaccine was the only vaccine available during this period, and so the determined opposition to it initiated a number of vaccine controversies that spread to other vaccines and into the 21st century. Sydney Arthur Monckton Copeman, an English Government bacteriologist interested in smallpox vaccine, investigated the effects on the bacteria in it of various treatments, including glycerine. Glycerine was sometimes used simply as a diluent by some continental vaccine producers. However, Copeman found that vaccine suspended in 50% chemically pure glycerine and stored under controlled conditions contained very few "extraneous" bacteria and produced satisfactory vaccinations. He later reported that glycerine killed the causative organisms of erysipelas and tuberculosis when they were added to the vaccine in "considerable quantity", and that his method was widely used on the continent. In 1896, Copeman was asked to supply "extra good calf vaccine" to vaccinate the future Edward VIII. Vaccine produced by Copeman's method was the only type issued free to public vaccinators by the British Government Vaccine Establishment from 1899. At the same time the Vaccination Act 1898 (61 & 62 Vict. c. 49) banned arm-to-arm vaccination, thus preventing transmission of syphilis by this vaccine. However, private practitioners had to purchase vaccine from commercial producers. Although proper use of glycerine reduced bacterial contamination considerably the crude starting material, scraped from the skin of infected calves, was always heavily contaminated and no vaccine was totally free from bacteria. A survey of vaccines in 1900 found wide variations in bacterial contamination. Vaccine issued by the Government Vaccine Establishment contained 5,000 bacteria per gram, while commercial vaccines contained up to 100,000 per gram. The level of bacterial contamination remained unregulated until the Therapeutic Substances Act 1925 (15 & 16 Geo. 5. c. 60) set an upper limit of 5,000 per gram, and rejected any batch of vaccine found to contain the causative organisms of erysipelas or wound infections. Unfortunately glycerolated vaccine lost its potency quickly at ambient temperatures which restricted its use in tropical climates. However, it remained in use into the 1970s when a satisfactory cold chain was available. Animals continued to be widely used by vaccine producers during the smallpox eradication campaign. 
A WHO survey of 59 producers, some of whom used more than one source of vaccine, found that 39 used calves, 12 used sheep and 6 used water buffalo, whilst only 3 made vaccine in cell culture and 3 in embryonated hens' eggs.: 543–45  English vaccine was occasionally made in sheep during World War I but from 1946 only sheep were used. In the late 1940s and early 1950s, Leslie Collier, an English microbiologist working at the Lister Institute of Preventive Medicine, developed a method for producing a heat-stable freeze-dried vaccine in powdered form. Collier added 0.5% phenol to the vaccine to reduce the number of bacterial contaminants but the key stage was to add 5% peptone to the liquid vaccine before it was dispensed into ampoules. This protected the virus during the freeze drying process. After drying, the ampoules were sealed under nitrogen. Like other vaccines, once reconstituted it became ineffective after 1–2 days at ambient temperatures. However, the dried vaccine was 100% effective when reconstituted after 6 months storage at 37 °C (99 °F) allowing it to be transported to, and stored in, remote tropical areas. Collier's method was increasingly used and, with minor modifications, became the standard for vaccine production adopted by the WHO Smallpox Eradication Unit when it initiated its global smallpox eradication campaign in 1967, at which time 23 of 59 manufacturers were using the Lister strain.: 545, 550  In a letter about landmarks in the history of smallpox vaccine, written to and quoted from by Derrick Baxby, Donald Henderson, chief of the Smallpox Eradication Unit from 1967 to 1977 wrote; "Copeman and Collier made an enormous contribution for which neither, in my opinion ever received due credit". Smallpox vaccine was inoculated by scratches into the superficial layers of the skin, with a wide variety of instruments used to achieve this. They ranged from simple needles to multi-pointed and multi-bladed spring-operated instruments specifically designed for the purpose. A major contribution to smallpox vaccination was made in the 1960s by Benjamin Rubin, an American microbiologist working for Wyeth Laboratories. Based on initial tests with textile needles with the eyes cut off transversely half-way he developed the bifurcated needle. This was a sharpened two-prong fork designed to hold one dose of reconstituted freeze-dried vaccine by capillarity. Easy to use with minimum training, cheap to produce ($5 per 1000), using one quarter as much vaccine as other methods, and repeatedly re-usable after flame sterilization, it was used globally in the WHO Smallpox Eradication Campaign from 1968.: 472–73, 568–72  Rubin estimated that it was used to do 200 million vaccinations per year during the last years of the campaign. Those closely involved in the campaign were awarded the "Order of the Bifurcated Needle". This, a personal initiative by Donald Henderson, was a lapel badge, designed and made by his daughter, formed from the needle shaped to form an "O". This represented "Target Zero", the objective of the campaign. === Eradication of smallpox === Smallpox was eradicated by a massive international search for outbreaks, backed up with a vaccination program, starting in 1967. It was organised and co-ordinated by a World Health Organization (WHO) unit, set up and headed by Donald Henderson. The last case in the Americas occurred in 1971 (Brazil), south-east Asia (Indonesia) in 1972, and on the Indian subcontinent in 1975 (Bangladesh). 
After two years of intensive searches, what proved to be the last endemic case anywhere in the world occurred in Somalia, in October 1977.: 526–37  A Global Commission for the Certification of Smallpox Eradication chaired by Frank Fenner examined the evidence from, and visited where necessary, all countries where smallpox had been endemic. In December 1979 they concluded that smallpox had been eradicated; a conclusion endorsed by the WHO General Assembly in May 1980.: 1261–62  However, even as the disease was being eradicated there still remained stocks of smallpox virus in many laboratories. Accelerated by two cases of smallpox in 1978, one fatal (Janet Parker), caused by an accidental and unexplained containment breach at a laboratory at the University of Birmingham Medical School, the WHO ensured that known stocks of smallpox virus were either destroyed or moved to safer laboratories. By 1979, only four laboratories were known to have smallpox virus. All English stocks held at St Mary's Hospital, London were transferred to more secure facilities at Porton Down and then to the US at the Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia in 1982, and all South African stocks were destroyed in 1983. By 1984, the only known stocks were kept at the CDC in the U.S. and the State Research Center of Virology and Biotechnology (VECTOR) in Koltsovo, Russia.: 1273–76  These states report that their repositories are for possible anti-bioweaponry research and insurance if some obscure reservoir of natural smallpox is discovered in the future. === Anti-terrorism preparation === Among more than 270,000 US military service members vaccinated with smallpox vaccine between December 2002, and March 2003, eighteen cases of probable myopericarditis were reported (all in first-time vaccinees who received the NYCBOH strain of vaccinia virus), an incidence of 7.8 per 100,000 during the 30 days they were observed. All cases were in young, otherwise healthy adult white men and all survived. In 2002, the United States government started a program to vaccinate 500,000 volunteer health care professionals throughout the country. Recipients were healthcare workers who would be first-line responders in the event of a bioterrorist attack. Many healthcare workers refused or did not pursue vaccination, worried about vaccine side effects, compensation and liability. Most did not see an immediate need for the vaccine. Some healthcare systems refused to participate, worried about becoming a destination for smallpox patients in the event of an epidemic. Fewer than 40,000 actually received the vaccine. On 21 April 2022, Public Services and Procurement Canada published a notice of tender seeking to stockpile 500,000 doses of smallpox vaccine in order to protect against a potential accidental or intentional release of the eradicated virus. On 6 May, the contract was awarded to Bavarian Nordic for their Imvamune vaccine. These were deployed by the Public Health Agency of Canada for targeted vaccination in response to the 2022 mpox outbreak. == Origin == The origin of the modern smallpox vaccine has long been unclear, but horsepox was identified in the 2010s as the most likely ancestor.: 9  Edward Jenner had obtained his vaccine from a cow, so he named the virus vaccinia, after the Latin word for cow. Jenner believed that both cowpox and smallpox were viruses that originated in the horse and passed to the cow,: 52–53  and some doctors followed his reasoning by inoculating their patients directly with horsepox. 
The situation was further muddied when Louis Pasteur developed techniques for creating vaccines in the laboratory in the late 19th century. As medical researchers subjected viruses to serial passage, inadequate recordkeeping resulted in the creation of laboratory strains with unclear origins.: 4  By the late 19th century, it was unknown whether the vaccine originated from cowpox, horsepox, or an attenuated strain of smallpox. In 1939, Allan Watt Downie showed that the vaccinia virus was serologically distinct from the "spontaneous" cowpox virus. This work established vaccinia and cowpox as two separate viral species. The term vaccinia now refers only to the smallpox vaccine, while cowpox no longer has a Latin name. The development of whole genome sequencing in the 1990s made it possible to compare orthopoxvirus genomes and identify their relationships with each other. The horsepox virus was sequenced in 2006 and found to be most closely related to vaccinia. In a phylogenetic tree of the orthopoxviruses, horsepox forms a clade with vaccinia strains, and cowpox strains form a different clade. Horsepox is extinct in the wild, and the only known sample was collected in 1976. Because the sample was collected at the end of the smallpox eradication campaign, scientists considered the possibility that horsepox is a strain of vaccinia that had escaped into the wild. However, as more smallpox vaccines were sequenced, older vaccines were found to be more similar to horsepox than modern vaccinia strains. A smallpox vaccine manufactured by Mulford in 1902 is 99.7% similar to horsepox, closer than any previously known strain of vaccinia. Modern Brazilian vaccines with a documented introduction date of 1887, made from material collected in an 1866 outbreak of "cowpox" in France, are more similar to horsepox than other strains of vaccinia. Five smallpox vaccines manufactured in the United States in 1859–1873 are most similar to each other and horsepox, as well as the 1902 Mulford vaccine. One of the 1859–1873 vaccines was identified as a novel strain of horsepox, containing a complete gene from the 1976 horsepox sample that has deletions in vaccinia. == Terminology == The word "vaccine" is derived from Variolae vaccinae (i.e. smallpox of the cow), the term devised by Jenner to denote cowpox and used in the long title of his An enquiry into the causes and effects of Variolae vaccinae, known by the name of cow pox. Vaccination, the term which soon replaced cowpox inoculation and vaccine inoculation, was first used in print by Jenner's friend, Richard Dunning in 1800. Initially, the terms vaccine/vaccination referred only to smallpox, but in 1881 Louis Pasteur proposed at the 7th International Congress of Medicine that to honour Jenner the terms be widened to cover the new protective inoculations being introduced. According to some sources the term was first introduced by Jenner's friend Richard Dunning in 1800. == References == == Further reading == Ramsay M, ed. (September 2022) [March 2013]. "Smallpox and mpox (monkeypox): the green book, chapter 29". Immunisation against infectious disease. Public Health England. == External links == Smallpox U.S. Centers for Disease Control and Prevention Smallpox Center for Infectious Disease Research and Policy "Smallpox/Monkeypox Vaccine Information Statement (VIS)". U.S. Centers for Disease Control and Prevention (CDC). 19 May 2023. "Interim Clinical Considerations for Use of JYNNEOS Vaccine for Mpox Prevention in the United States". U.S. 
Centers for Disease Control and Prevention (CDC). 26 August 2024. "Medication Guide Smallpox (Vaccinia) Vaccine, Live ACAM2000" (PDF). Emergent Biosolutions.
Wikipedia/Smallpox_vaccine
Diphtheria vaccine is a toxoid vaccine against diphtheria, an illness caused by Corynebacterium diphtheriae. Its use has resulted in a more than 90% decrease in the number of cases globally between 1980 and 2000. The first dose is recommended at six weeks of age with two additional doses four weeks apart, after which it is about 95% effective during childhood. Three further doses are recommended during childhood. It is unclear if further doses later in life are needed. The diphtheria vaccine is very safe. Significant side effects are rare. Pain may occur at the injection site. A bump may form at the site of injection that lasts a few weeks. The vaccine is safe both during pregnancy and for those who have poor immune function. The diphtheria vaccine is delivered in several combinations. Some combinations (Td and DT vaccines) include tetanus vaccine, others (known as DPT vaccine or DTaP vaccine, depending on the pertussis antigen used) combine it with the tetanus and pertussis vaccines, and still others include additional vaccines such as Hib vaccine, hepatitis B vaccine, or inactivated polio vaccine. The World Health Organization (WHO) has recommended its use since 1974. About 84% of the world population is vaccinated. It is given as an intramuscular injection. The vaccine needs to be kept cold but not frozen. The diphtheria vaccine was developed in 1923. It is on the World Health Organization's List of Essential Medicines. == History == In 1890, Kitasato Shibasaburō and Emil von Behring at the University of Berlin reported the development of 'antitoxins' against diphtheria and tetanus. Their method involved injecting the respective toxins into animals and then purifying antibodies from their blood. Behring called this method 'serum therapy'. Although effective against the pathogen, the initial tests on humans were unsuccessful. By 1894, the production of antibodies had been optimised with help from Paul Ehrlich, and the treatment started to show success in humans. The serum therapy reduced mortality to 1–5%, although there were also reports of severe adverse reactions, including at least one death. Behring won the very first Nobel Prize in Physiology or Medicine for this discovery; Kitasato, however, did not share the award. By 1913, Behring had created antitoxin-toxin (antibody-antigen) complexes to produce the diphtheria AT vaccine. In the 1920s, Gaston Ramon developed a cheaper version by using formaldehyde-inactivated toxins. As the use of these vaccines spread across the world, the number of diphtheria cases was greatly reduced. In the United States alone, the number of cases fell from between 100,000 and 200,000 per year in the 1920s to 19,000 in 1945, and to a total of 14 in the period 1996–2018. == Effectiveness == About 95% of people vaccinated develop immunity, and vaccination against diphtheria has resulted in a more than 90% decrease in the number of cases globally between 1980 and 2000. About 86% of the world population was vaccinated as of 2016. == Side effects == Severe side effects from diphtheria toxoid are rare. Pain may occur at the injection site. A bump may form at the site of injection that lasts a few weeks. The vaccine is safe during pregnancy and among those who have a poor immune function. DTP vaccines may cause additional adverse effects such as fever, irritability, drowsiness, loss of appetite, and, in 6–13% of vaccine recipients, vomiting.
Severe adverse effects of DTP vaccines include fever over 40.5 °C/104.9 °F (1 in 333 doses), febrile seizures (1 in 12,500 doses), and hypotonic-hyporesponsive episodes (1 in 1,750 doses). Side effects of DTaP vaccines are similar but less frequent. Tetanus toxoid-containing vaccines (Td, DT, DTP and DTaP) may cause brachial neuritis at a rate of 0.5 to 1 case per 100,000 toxoid recipients. == Recommendations == The World Health Organization has recommended vaccination against diphtheria since 1974. The first dose is recommended at six weeks of age with two additional doses four weeks apart; after receiving these three doses, about 95% of people are immune. Three further doses are recommended during childhood. Booster doses every ten years are no longer recommended if this vaccination scheme of 3 doses + 3 booster doses is followed. Injection of 3 doses + 1 booster dose provides immunity for 25 years after the last dose. If only three initial doses are given, booster doses are needed to ensure continuing protection. == See also == Bundaberg tragedy DTP-HepB vaccine == References == == Further reading == == External links == "Infanrix". U.S. Food and Drug Administration (FDA). 6 November 2019. Archived from the original on 12 May 2019. "Daptacel". U.S. Food and Drug Administration (FDA). 22 July 2017. Diphtheria Toxoid at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Diphtheria_vaccine
Parkinson's disease (PD), or simply Parkinson's, is a neurodegenerative disease primarily of the central nervous system, affecting both motor and non-motor systems. Symptoms typically develop gradually and non-motor issues become more prevalent as the disease progresses. The motor symptoms are collectively called parkinsonism and include tremors, bradykinesia, rigidity as well as postural instability (i.e., difficulty maintaining balance). Non-motor symptoms develop later in the disease and include behavioral changes or neuropsychiatric problems such as sleep abnormalities, psychosis, and mood swings. Most Parkinson's disease cases are idiopathic, though contributing factors have been identified. Pathophysiology involves progressive degeneration of nerve cells in the substantia nigra, a midbrain region that provides dopamine to the basal ganglia, a system involved in voluntary motor control. The cause of this cell death is poorly understood but involves the aggregation of alpha-synuclein into Lewy bodies within neurons. Other potential factors involve genetic and environmental influences, medications, lifestyle, and prior health conditions. Diagnosis is primarily based on signs and symptoms, typically motor-related, identified through neurological examination. Medical imaging techniques like positron emission tomography can support the diagnosis. Parkinson's typically manifests in individuals over 60, with about one percent affected. In those younger than 50, it is termed "early-onset PD". No cure for Parkinson's is known, and treatment focuses on alleviating symptoms. Initial treatment typically includes levodopa, MAO-B inhibitors, or dopamine agonists. As the disease progresses, these medications become less effective and may cause involuntary muscle movements. Diet and rehabilitation therapies can help improve symptoms. Deep brain stimulation is used to manage severe motor symptoms when drugs are ineffective. There is little evidence for treatments addressing non-motor symptoms, such as sleep disturbances and mood instability. Life expectancy for those with PD is near-normal but is decreased for early-onset. == Classification and terminology == Parkinson's disease (PD) is a neurodegenerative disease affecting both the central and peripheral nervous systems, characterized by the loss of dopamine-producing neurons in the substantia nigra region of the brain. It is classified as a synucleinopathy due to the abnormal accumulation of the protein alpha-synuclein, which aggregates into Lewy bodies within affected neurons. The loss of dopamine-producing neurons in the substantia nigra causes movement abnormalities, leading to Parkinson's further categorization as a movement disorder. In 30% of cases, disease progression leads to the cognitive decline, resulting in Parkinson's disease dementia (PDD). Alongside dementia with Lewy bodies, PDD is one of the two subtypes of Lewy body dementia. The four cardinal motor symptoms of Parkinson's—bradykinesia (slowed movements), postural instability, rigidity, and tremor—are called parkinsonism. These four symptoms are not exclusive to Parkinson's and can occur in many other conditions, including HIV infection and recreational drug use. Neurodegenerative diseases that feature parkinsonism but have distinct differences are grouped under the umbrella of Parkinson-plus syndromes or, alternatively, atypical parkinsonian disorders. Parkinson's disease can be attributed to genetic factors, but most cases are idiopathic, with no clearly identifiable cause. 
== Signs and symptoms == === Motor === A wide spectrum of motor and non-motor symptoms appear in Parkinson's; the cardinal features are tremor, bradykinesia, rigidity, and postural instability, collectively termed parkinsonism. Appearing in 70–75 percent of those with PD, tremor is often the predominant motor symptom. Resting tremor is the most common, but kinetic tremors—occurring during voluntary movements—and postural tremor—preventing upright, stable posture—also occur. Tremor largely affects the hands and feet: a classic parkinsonian tremor is "pill-rolling", a resting tremor in which the thumb and index finger make contact in a circular motion at 4–6 Hz frequency. Bradykinesia describes difficulties in motor planning, beginning, and executing, resulting in overall slowed movement with reduced amplitude that affects sequential and simultaneous tasks. Bradykinesia can also lead to hypomimia, reduced facial expressions. Rigidity, also called rigor, refers to a feeling of stiffness and resistance to passive stretching of muscles. Postural instability typically appears in later stages, leading to impaired balance and falls. Postural instability also leads to a forward stooping posture. Beyond the cardinal four, other motor deficits, termed secondary motor symptoms, commonly occur. Notably, gait disturbances result in the Parkinsonian gait, which includes shuffling and paroxysmal deficits, where a normal gait is interrupted by rapid footsteps—known as festination—or sudden stops, impairing balance and causing falls. Most people with PD experience speech problems, including stuttering, hypophonic, "soft" speech, slurring, and festinating speech (rapid and poorly intelligible). Handwriting is commonly altered in Parkinson's, decreasing in size—known as micrographia—and becoming jagged and sharply fluctuating. Grip and dexterity are also impaired. === Neuropsychiatric and cognitive === Neuropsychiatric symptoms like anxiety, apathy, depression, hallucination, and impulse control disorders occur in up to 60% of those with Parkinson's. They often precede motor symptoms and vary with disease progression. Non-motor fluctuations, including dysphoria, fatigue, and slowness of thought, are also common. Some neuropsychiatric symptoms are not directly caused by neurodegeneration but rather by its pharmacological management. Cognitive impairments rank among the most prevalent and debilitating non-motor symptoms. These deficits may emerge in the early stages or before diagnosis, and their prevalence and severity tend to increase with disease progression. Ranging from mild cognitive impairment to severe Parkinson's disease dementia, these impairments include executive dysfunction, slowed cognitive processing speed, and disruptions in time perception and estimation. === Autonomic === Autonomic nervous system failures, known as dysautonomia, can appear at any stage of Parkinson's. They are among the most debilitating symptoms and greatly reduce quality of life. Although almost all individuals with PD have cardiovascular autonomic dysfunction, only some are symptomatic. Chiefly, orthostatic hypotension—a sustained blood pressure drop of at least 20 mmHg systolic or 10 mmHg diastolic after standing—occurs in 30–50 percent of cases. This can result in lightheadedness or fainting: subsequent falls are associated with higher morbidity and mortality. 
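The orthostatic hypotension threshold described above (a sustained drop of at least 20 mmHg systolic or 10 mmHg diastolic on standing) reduces to a simple numeric rule. The sketch below is a minimal illustration of that rule under hypothetical paired readings; it ignores the requirement that the drop be sustained and is not a clinical tool.

```python
# Minimal sketch of the orthostatic hypotension threshold described above:
# a drop of >= 20 mmHg systolic or >= 10 mmHg diastolic after standing.
# Function name and readings are hypothetical; the "sustained" requirement
# (repeated readings over several minutes) is not modeled here.
def meets_orthostatic_drop(supine_sys, supine_dia, standing_sys, standing_dia):
    systolic_drop = supine_sys - standing_sys
    diastolic_drop = supine_dia - standing_dia
    return systolic_drop >= 20 or diastolic_drop >= 10

# Example: supine 130/80 mmHg, standing 105/74 mmHg -> 25 mmHg systolic drop.
print(meets_orthostatic_drop(130, 80, 105, 74))  # True
```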
Other autonomic failures include gastrointestinal issues like chronic constipation, impaired stomach emptying and subsequent nausea, excessive salivation, and dysphagia (difficulty swallowing): all greatly reduce quality of life. Dysphagia, for instance, can prevent pill swallowing and lead to aspiration pneumonia. Urinary incontinence, sexual dysfunction, and thermoregulatory dysfunction—including heat and cold intolerance and excessive sweating—also frequently occur. === Other === Sensory deficits appear in up to 90 percent of people with PD and are usually present at early stages. Nociceptive and neuropathic pain are common, with peripheral neuropathy affecting up to 55 percent of individuals. Visual impairments are also frequently observed, including deficits in visual acuity, color vision, eye coordination, and visual hallucinations. An impaired sense of smell is also prevalent. Individuals often struggle with spatial awareness, recognizing faces and emotions, and may experience challenges with reading and double vision. Sleep disorders are highly prevalent in PD, affecting up to 98%. These disorders include insomnia, excessive daytime sleepiness, restless legs syndrome, REM sleep behavior disorder (RBD), and sleep-disordered breathing, many of which can be worsened by medication. RBD may begin years before the initial motor symptoms. Individual presentation of symptoms varies, although most people affected by PD show an altered circadian rhythm at some point of disease progression. PD is also associated with a variety of skin disorders that include melanoma, seborrheic dermatitis, bullous pemphigoid, and rosacea. Seborrheic dermatitis is recognized as a premotor feature that indicates dysautonomia and demonstrates that PD can be detected not only by changes of nervous tissue, but tissue abnormalities outside the nervous system as well. == Causes == As of 2024, the cause of neurodegeneration in Parkinson's is unclear, though it is believed to result from the interplay of genetic and environmental factors. The majority of cases are idiopathic with no clearly identifiable cause, while approximately 5–10 percent are familial. Around a third of familial cases can be attributed to a single monogenic cause. Molecularly, abnormal aggregation of alpha-synuclein is considered a key contributor to PD pathogenesis, although the trigger for this aggregation is debated and some forms of PD do not include these aggregations. Also, the vulnerability of substantia nigra pars compacta (SNc) dopaminergic neurons to oxidative stress, caused in part by intracellular dopamine being toxic, has been proposed as a major contributor to the disease. Proteostasis disruption and the dysfunction of cell organelles, including endosomes, lysosomes, and mitochondria, are implicated in pathogenesis. Additionally, maladaptive immune and inflammatory responses are potential contributors. The substantial heterogeneity in PD presentation and progression suggests the involvement of multiple interacting triggers and pathogenic pathways. === Genetic === Parkinson's can be narrowly defined as a genetic disease, as rare inherited gene variants have been firmly linked to monogenic PD, and most cases carry variants that increase PD risk. PD heritability is estimated to range from 22 to 40 percent. Around 15 percent of diagnosed individuals have a family history, of which 5–10 percent can be attributed to a causative risk gene mutation. Carrying one of these mutations may not lead to disease. 
Rates of familial PD vary by ethnicity: monogenic PD occurs in up to 40% of Arab-Berber and 20% of Ashkenazi Jewish people with PD. As of 2024, around 90 genetic risk variants across 78 genomic loci have been identified. Notable risk variants include SNCA (which encodes alpha-synuclein), LRRK2, and VPS35 for autosomal dominant inheritance, and PRKN, PINK1, and DJ1 for autosomal recessive inheritance. LRRK2 is the most common autosomal dominant variant, responsible for 1–2 percent of all PD cases and 40 percent of familial cases. Parkin variants are associated with nearly half of recessive, early-onset monogenic PD. Mutations in the GBA1 gene, linked to Gaucher's disease, can cause monogenic PD, and are associated with cognitive decline. === Environmental === The limited heritability of Parkinson's strongly suggests environmental factors are involved, though identifying these risk factors and establishing causality is challenging due to PD's decade-long prodromal period. Environmental toxicants such as air pollution, pesticides, and industrial solvents like trichloroethylene are strongly linked to Parkinson's. Certain pesticides—like paraquat, glyphosate, and rotenone—are the most established environmental toxicants for Parkinson's and are likely causal. PD prevalence is strongly associated with local pesticide use, and many pesticides are mitochondrial toxins. Paraquat, for instance, structurally resembles metabolized MPTP, which selectively kills dopaminergic neurons by inhibiting mitochondrial complex 1 and is widely used to model PD. Pesticide exposure after diagnosis may also accelerate disease progression. Without high pesticide exposure, an estimated 20 percent of all PD cases would be prevented. === Hypotheses === ==== Prionic ==== The hallmark of Parkinson's is the formation of protein aggregates, beginning with alpha-synuclein fibrils and followed by Lewy bodies and Lewy neurites. The prion hypothesis suggests that alpha-synuclein aggregates are pathogenic and can spread to neighboring, healthy neurons and seed new aggregates. Some propose that the heterogeneity of PD may stem from different "strains" of alpha-synuclein aggregates and varying anatomical sites of origin. Alpha-synuclein propagation has been demonstrated in cell and animal models and is the most popular explanation for the progressive spread through specific neuronal systems. However, therapeutic efforts to clear alpha-synuclein have failed. Additionally, postmortem brain tissue analysis shows that alpha-synuclein pathology does not clearly progress through the nearest neural connections. ==== Braak's ==== In 2002, Heiko Braak and colleagues proposed that Parkinson's disease begins outside the brain and is triggered by a "neuroinvasion" of some unknown pathogen. The pathogen enters through the nasal cavity and is swallowed into the digestive tract, initiating Lewy pathology in both areas. This alpha-synuclein pathology may then travel from the gut to the central nervous system through the vagus nerve. This theory could explain the presence of Lewy pathology in both the enteric nervous system and olfactory tract neurons, as well as clinical symptoms like loss of smell and gastrointestinal problems. It has also been suggested that environmental toxicants might be ingested in a similar manner to trigger PD. === Risk factors === As 90 percent of Parkinson's cases are idiopathic, the identification of the risk factors that may influence disease progression or severity is critical. 
The most significant risk factor in developing PD is age, with a prevalence of 1 percent in those aged over 65 and approximately 4.3 percent in those over 85. Traumatic brain injury significantly increases PD risk, especially if recent. Dairy consumption correlates with a higher risk, possibly due to contaminants like heptachlor epoxide. Although the connection is unclear, melanoma diagnosis is associated with an approximately 45 percent risk increase. There is also an association between methamphetamine use and PD risk. === Protective factors === Although no compounds or activities have been mechanistically established as neuroprotective for Parkinson's, several factors have been found to be associated with a decreased risk. Tobacco use and smoking are strongly associated with a decreased risk, reducing the chance of developing PD by up to 70%. Various tobacco and smoke components have been hypothesized to be neuroprotective, including nicotine, carbon monoxide, and monoamine oxidase B inhibitors. Consumption of caffeine as an ingredient of coffee or tea is also strongly associated with neuroprotection. Prescribed adrenergic antagonists like terazosin may reduce risk. Although findings have varied, usage of nonsteroidal anti-inflammatory drugs (NSAIDs) like ibuprofen may be neuroprotective. Calcium channel blockers may also have a protective effect, with a 22% risk reduction reported. Higher blood concentrations of urate—a potent antioxidant—have been proposed to be neuroprotective. Although longitudinal studies observe a slight decrease in PD risk among those who consume alcohol—possibly due to alcohol's urate-increasing effect—alcohol abuse may increase risk. == Pathophysiology == Parkinson's disease has two hallmark pathophysiological processes: the abnormal aggregation of alpha-synuclein that leads to Lewy pathology, and the degeneration of dopaminergic neurons in the substantia nigra pars compacta. The death of these neurons reduces available dopamine in the striatum, which in turn affects circuits controlling movement in the basal ganglia. By the time motor symptoms appear, 50–80 percent of all dopaminergic neurons in the substantia nigra have degenerated. However, cell death and Lewy pathology are not limited to the substantia nigra. The six-stage Braak system holds that alpha-synuclein pathology begins in the olfactory bulb or outside the central nervous system in the enteric nervous system before ascending the brain stem. In the third Braak stage, Lewy body pathology appears in the substantia nigra, and, by the sixth stage, Lewy pathology has spread to the limbic and neocortical regions. Although Braak staging offers a strong basis for PD progression, around 50 percent of individuals do not adhere to the predicted model. Lewy pathology is highly variable and may be entirely absent in some persons with PD. === Alpha-synuclein pathology === Alpha-synuclein is an intracellular protein typically localized to presynaptic terminals and involved in synaptic vesicle trafficking, intracellular transport, and neurotransmitter release. When misfolded, it can aggregate into oligomers and proto-fibrils that in turn lead to Lewy body formation. Due to their lower molecular weight, oligomers and proto-fibrils may disseminate and be transmitted to other cells more rapidly. Lewy bodies consist of a fibrillar exterior and granular core.
Although alpha-synuclein is the dominant proteinaceous component, the core contains mitochondrial and autophagosomal membrane components, suggesting a link with organelle dysfunction. It is unclear whether Lewy bodies themselves contribute to or are simply the result of PD pathogenesis: alpha-synuclein oligomers can independently mediate cell damage, and neurodegeneration can precede Lewy body formation. === Pathways involved in neurodegeneration === Three major pathways—vesicular trafficking, lysosomal degradation, and mitochondrial maintenance—are known to be affected by and contribute to Parkinson's pathogenesis, with all three linked to alpha-synuclein. High-risk gene variants also impair all three of these processes. All steps of vesicular trafficking are impaired by alpha-synuclein. It blocks endoplasmic reticulum (ER) vesicles from reaching the Golgi—leading to ER stress—and Golgi vesicles from reaching the lysosome, preventing alpha-synuclein degradation and leading to its build-up. Risk gene variants, chiefly GBA, further compromise lysosomal function. Although the mechanism is not well established, alpha-synuclein can impair mitochondrial function and cause subsequent oxidative stress. Mitochondrial dysfunction can in turn lead to further alpha-synuclein accumulation in a positive feedback loop. Microglial activation, possibly caused by alpha-synuclein, is also strongly implicated. ==== Mitochondrial dysfunction ==== Mitochondrial dysfunction is well established in Parkinson's. Increased oxidative stress and reduced calcium buffering may contribute to neurodegeneration. The finding that MPP+—a respiratory complex I inhibitor and MPTP metabolite—caused parkinsonian symptoms strongly implied that mitochondria contributed to PD pathogenesis. Additionally, faulty gene variants involved in familial Parkinson's—including PINK1 and Parkin—prevent the elimination of dysfunctional mitochondria through mitophagy. ==== Neuroinflammation ==== Some hypothesize that neurodegeneration arises from a chronic neuroinflammatory state created by locally activated microglia and infiltrating immune cells. Mitochondrial dysfunction may also drive immune activation, particularly in monogenic PD. Some autoimmune disorders increase the risk of developing PD, supporting an autoimmune contribution. Additionally, influenza and herpes simplex virus infections increase the risk of PD, possibly due to a viral protein resembling alpha-synuclein. Parkinson's risk is also decreased with immunosuppressants. == Diagnosis == Diagnosis of Parkinson's disease is largely clinical, relying on medical history and examination of symptoms, with an emphasis on symptoms that appear in later stages. Although early-stage diagnosis is not reliable, prodromal diagnosis may consider previous family history of Parkinson's and possible early symptoms like rapid eye movement sleep behavior disorder (RBD), reduced sense of smell, and gastrointestinal issues. Isolated RBD is a particularly significant sign, as 90% of those affected will develop some form of neurodegenerative parkinsonism. Diagnosis in later stages requires the manifestation of parkinsonism, specifically bradykinesia and rigidity or tremor. Further support includes other motor and non-motor symptoms and genetic profiling. A PD diagnosis is typically confirmed by two of the following criteria: responsiveness to levodopa, resting tremor, levodopa-induced dyskinesia, or dopamine transporter single-photon emission computed tomography imaging.
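The confirmation rule just described (two of the listed supportive features) can be expressed as a small check. The sketch below is purely illustrative, assuming boolean inputs named after the listed features; it is not a diagnostic instrument.

```python
# Illustrative sketch of the "two of the following criteria" rule described
# above; inputs are booleans for each supportive feature. Not a diagnostic tool.
def pd_supportive_criteria_met(levodopa_response, resting_tremor,
                               levodopa_induced_dyskinesia, dat_spect_abnormal):
    features = [levodopa_response, resting_tremor,
                levodopa_induced_dyskinesia, dat_spect_abnormal]
    return sum(features) >= 2

# Example: responsiveness to levodopa plus a resting tremor satisfies the rule.
print(pd_supportive_criteria_met(True, True, False, False))  # True
```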
If these criteria are not met, atypical parkinsonism is considered. Definitive diagnoses can only be made post-mortem through pathological analysis. Misdiagnosis is common, with a reported error rate of nearly 25 percent, and diagnoses often change during follow-ups. Diagnosis can be further complicated by multiple overlapping conditions. === Imaging === Diagnosis can be aided by imaging techniques such as magnetic resonance imaging (MRI), positron emission tomography (PET), and single-photon emission computed tomography (SPECT). As both conventional MRI and computed tomography (CT) scans are usually normal in early PD, they can be used to exclude other pathologies that cause parkinsonism. Diffusion MRI can differentiate PD from multiple system atrophy (MSA). Emerging MRI techniques of at least 3.0 T field strength—including neuromelanin-MRI, 1H-MRSI, and resting state fMRI—may detect abnormalities in the substantia nigra, nigrostriatal pathway, and elsewhere. Unlike MRI, PET and SPECT use radioisotopes for imaging. Both techniques can aid diagnosis by characterizing PD-associated alterations in the metabolism and transport of dopamine in the basal ganglia. Largely used outside the United States, iodine-123-meta-iodobenzylguanidine myocardial scintigraphy can assess heart muscle denervation to support a PD diagnosis. === Differential diagnosis === Differential diagnosis of Parkinson's is among the most difficult in neurology. Differentiating early PD from atypical parkinsonian disorders is a major difficulty. In its initial stages, PD can be difficult to distinguish from the atypical neurodegenerative parkinsonisms, including MSA, dementia with Lewy bodies, and the tauopathies progressive supranuclear palsy and corticobasal degeneration. Other conditions that may present similarly to PD include vascular parkinsonism, Alzheimer's disease, and frontotemporal dementia. The International Parkinson and Movement Disorder Society has proposed a set of criteria that, unlike the standard Queen's Square Brain Bank Criteria, includes non-exclusionary "red-flag" clinical features that argue against a diagnosis of Parkinson's. A large number of "red flags" have been proposed and adopted for various conditions that might mimic the symptoms of PD. Diagnostic tests, including gene sequencing, molecular imaging techniques, and assessment of smell, may also help distinguish PD. MRI is particularly useful because several imaging features are distinctive for the atypical parkinsonisms. == Management == As of 2024, no disease-modifying therapies exist that reverse or slow neurodegeneration. Management typically combines lifestyle modifications with physical therapy. Current pharmacological interventions target only symptoms, either by increasing endogenous dopamine levels or by directly mimicking dopamine's effects in the brain. These include dopamine agonists, MAO-B inhibitors, and levodopa, the most widely used and effective drug. The optimal time to initiate pharmacological treatment is debated, but starting with a dopamine agonist or MAO-B inhibitor and adding levodopa later is common. Invasive procedures such as deep brain stimulation may be used when medication is ineffective. === Medications === ==== Levodopa ==== Levodopa is the most widely used and the most effective therapy—the gold standard—for Parkinson's treatment. The compound occurs naturally and is the immediate precursor for dopamine synthesis in the dopaminergic neurons of the substantia nigra.
Levodopa administration reduces the dopamine deficiency in parkinsonism. Despite its efficacy, levodopa poses several challenges and its administration has been called the "pharmacologist's nightmare". Its metabolism outside the brain by aromatic L-amino acid decarboxylase (AAAD) and catechol-O-methyltransferase (COMT) can cause nausea and vomiting; inhibitors like carbidopa, entacapone, and benserazide are usually taken with levodopa to mitigate these effects. Long-term levodopa use may also induce dyskinesia and motor fluctuations. Although this often causes levodopa use to be delayed to later stages, earlier administration leads to improved motor function and quality of life. ==== Dopamine agonists ==== Dopamine agonists are an alternative or complement to levodopa therapy. They activate dopamine receptors in the striatum, with reduced risk of motor fluctuations and dyskinesia, and are efficacious in both early- and late-stage Parkinson's. The agonist apomorphine is often used for drug-resistant OFF time in later-stage PD. After five years of use, impulse control disorders may occur in over 40 percent of those taking dopamine agonists. A problematic, narcotic-like withdrawal effect may occur when agonist use is reduced or stopped. Compared to levodopa, dopamine agonists are more likely to cause fatigue, daytime sleepiness, and hallucinations. ==== MAO-B inhibitors ==== MAO-B inhibitors—such as safinamide, selegiline, and rasagiline—increase the amount of dopamine in the basal ganglia by inhibiting the activity of monoamine oxidase B, an enzyme that breaks down dopamine. These compounds mildly alleviate motor symptoms when used as monotherapy but can also be used with levodopa and can be used at any disease stage. Common side effects are nausea, dizziness, insomnia, sleepiness, and orthostatic hypotension. MAO-B inhibitors are known to increase serotonin and can cause a potentially dangerous condition known as serotonin syndrome. ==== Other drugs ==== Treatments for non-motor symptoms of PD have not been well studied and many medications are used off-label. A diverse range of symptoms beyond those related to motor function can be treated pharmaceutically. Examples include cholinesterase inhibitors for cognitive impairment and modafinil for excessive daytime sleepiness. Fludrocortisone, midodrine, and droxidopa are commonly used off-label for orthostatic hypotension related to autonomic dysfunction. Sublingual atropine or botulinum toxin injections may be used off-label for drooling. SSRIs and SNRIs are often used for depression related to PD, but there is a risk of serotonin syndrome with these antidepressants. Doxepin and rasagiline may reduce physical fatigue in PD. === Invasive interventions === Surgery for Parkinson's first appeared in the 19th century and by the 1960s had evolved into ablative brain surgery that lesioned the basal ganglia, thalamus, or globus pallidus (a pallidotomy). The discovery of levodopa for PD treatment caused ablative therapies to largely disappear. Ablative surgeries experienced a resurgence in the 1990s but were quickly superseded by newly developed deep brain stimulation (DBS). Although gamma knife and high-intensity focused ultrasound surgeries have been developed for pallidotomies and thalamotomies, their use is rare as of 2025. Deep brain stimulation involves the implantation of electrodes connected to a device called a neurostimulator, which sends electrical impulses to specific parts of the brain. 
DBS of the subthalamic nucleus and globus pallidus interna has high efficacy for up to 2 years, but long-term efficacy is unclear and likely decreases with time. DBS typically targets rigidity and tremor, and is recommended for PD patients who are intolerant of or do not respond to medication. Cognitive impairment is the most common exclusion criterion. === Rehabilitation === Although pharmacological therapies can improve symptoms, autonomy and the ability to perform everyday tasks are still reduced by PD. Rehabilitation is often useful, but the scientific support for any single rehabilitation treatment is limited. Exercise programs are often recommended, with preliminary evidence of efficacy. Regular physical exercise with or without physical therapy can be beneficial to maintain and improve mobility, flexibility, strength, gait speed, and quality of life. Aerobic, mind-body, and resistance training may be beneficial in alleviating PD-associated depression and anxiety. Strength training may increase manual dexterity and strength, facilitating daily tasks that require grasping objects. Aerobic exercise, resistance training, and balance and task-specific training have been found to improve strength, VO2 max, and balance. Flexibility training is commonly used but has a lower strength of recommendation than aerobic and resistance training. To improve flexibility and range of motion in people experiencing rigidity, generalized relaxation techniques such as gentle rocking have been found to decrease excessive muscle tension. Other effective techniques to promote relaxation include slow rotational movements of the extremities and trunk, rhythmic initiation, diaphragmatic breathing, and meditation. Deep diaphragmatic breathing may also improve chest-wall mobility and vital capacity decreased by the stooped posture and respiratory dysfunctions of advanced Parkinson's. Rehabilitation techniques targeting gait and the challenges posed by bradykinesia, shuffling, and decreased arm swing include pole walking, treadmill walking, and marching exercises. Long-term physiotherapy (greater than six months) reduces the need for antiparkinsonian medication; multidisciplinary rehabilitation programs combined with physiotherapy can result in a reduction in the levodopa-equivalent dose. Speech therapies such as the Lee Silverman voice treatment may reduce the effect of speech disorders associated with PD. Occupational therapy is a rehabilitation strategy that can improve quality of life by enabling people with PD to find engaging activities and communal roles, adapt to their living environment, and improve domestic and work abilities. === Diet === Parkinson's causes digestive problems such as constipation and prolonged emptying of stomach contents, and a balanced diet with periodic nutritional assessments is recommended to avoid weight loss or gain and to minimize the consequences of gastrointestinal dysfunction. In particular, a Mediterranean diet is advised and may slow disease progression. As it can compete for uptake with amino acids derived from protein, levodopa should be taken 30 minutes before meals to minimize such competition. Low-protein diets may also be needed in later stages. As the disease advances, swallowing difficulties often arise. Using thickening agents for liquid intake and an upright posture when eating may be useful; both measures reduce the risk of choking. Gastrostomy can be used to deliver food directly into the stomach. Increased water and fiber intake is used to treat constipation. 
=== Palliative care === As Parkinson's is incurable, palliative care aims to improve the quality of life for both the patient and family by alleviating the symptoms and stress associated with illness. Early integration of palliative care into the disease course is recommended, rather than delaying until later stages. Palliative care specialists can help with physical symptoms, emotional factors such as loss of function and jobs, depression, and fear, as well as existential concerns. Palliative care team members also help guide difficult decisions caused by disease progression, such as wishes for a feeding tube, noninvasive ventilator or tracheostomy, use of cardiopulmonary resuscitation, and entering hospice care. == Prognosis == As Parkinson's is a heterogeneous condition with multiple etiologies, prognostication can be difficult and prognoses can be highly variable. On average, life expectancy is reduced in those with Parkinson's, with younger age of onset resulting in greater life expectancy decreases. Although PD subtype categorization is controversial, the 2017 Parkinson's Progression Markers Initiative study identified three broad scorable subtypes of increasing severity and more rapid progression: mild-motor predominant, intermediate, and diffuse malignant. Mean years of survival post-diagnosis were 20.2, 13.1, and 8.1, respectively. Around 30% of individuals with Parkinson's develop dementia, which is 12 times more likely to occur in the elderly with severe PD. Dementia is less likely to arise in tremor-dominant PD. Parkinson's disease dementia is associated with a reduced quality of life in people with PD and their caregivers, increased mortality, and a higher probability of needing nursing home care. The incidence of falls is approximately 45 to 68%, around three times that of healthy individuals, and half of such falls result in serious secondary injuries. Falls increase morbidity and mortality. Around 90% of those with PD develop hypokinetic dysarthria, which worsens with disease progression and can hinder communication. Over 80% develop dysphagia; consequent inhalation of gastric and oropharyngeal secretions can lead to aspiration pneumonia. == Epidemiology == As of 2024, Parkinson's is the second most common neurodegenerative disease and the fastest-growing in total cases. As of 2023, global prevalence was estimated to be 1.51 per 1000. Although it is around 40% more common in men, age is the dominant risk factor for Parkinson's. Consequently, as global life expectancy has increased, Parkinson's disease prevalence has also risen, with an estimated increase in cases of 74% from 1990 to 2016. The number is predicted to rise to over 12 million by 2040. This increase may be due to a number of global factors, including prolonged life expectancy, increased industrialization, and decreased smoking. Although genetics is the sole factor in a minority of cases, most cases of Parkinson's are likely a result of gene-environment interactions: concordance studies with twins have found Parkinson's heritability to be just 30%. The influence of multiple genetic and environmental factors complicates epidemiological efforts. Relative to Europe and North America, disease prevalence is lower in Africa but similar in Latin America. Although China is predicted to have nearly half of the global Parkinson's population by 2030, estimates of prevalence in Asia vary. Potential explanations for these geographic differences include genetic variation, environmental factors, health care access, and life expectancy. 
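As a rough illustration of how a prevalence quoted per 1,000 people translates into an expected case count, the short sketch below multiplies the 1.51 per 1,000 estimate by an assumed population size; the population figure, and therefore the resulting count, is hypothetical and chosen purely for illustration rather than taken from the studies cited above.

# Illustrative arithmetic only; the population size is an assumption, not a figure from the text.
prevalence_per_1000 = 1.51      # estimated global Parkinson's prevalence (2023), per 1,000 people
population = 10_000_000         # hypothetical population of 10 million people

expected_cases = prevalence_per_1000 / 1000 * population
print(f"Expected cases in a population of 10 million: about {expected_cases:,.0f}")  # ~15,100

Because published prevalence figures are often age-standardized, a naive multiplication of this kind gives only an order-of-magnitude sense of disease burden and should not be read as a forecast comparable to the projections above.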
Although PD incidence and prevalence may vary by race and ethnicity, significant disparities in care, diagnosis, and study participation limit generalizability and lead to conflicting results. Within the United States, high rates of PD have been identified in the Midwest, the South, and agricultural regions of other states: collectively termed the "PD belt". The association between rural residence and Parkinson's has been hypothesized to be caused by environmental factors like herbicides, pesticides, and industrial waste. == History == In 1817, English physician James Parkinson published the first full medical description of the disease as a neurological syndrome in his monograph An Essay on the Shaking Palsy. He presented six clinical cases, including three he had observed on the streets near Hoxton Square in London. Parkinson described three cardinal symptoms: tremor, postural instability and "paralysis" (undistinguished from rigidity or bradykinesia), and speculated that the disease was caused by trauma to the spinal cord. There was little discussion or investigation of the "shaking palsy" until 1861, when Frenchman Jean-Martin Charcot—regarded as the father of neurology—began expanding Parkinson's description, adding bradykinesia as one of the four cardinal symptoms. In 1877, Charcot renamed the disease after Parkinson, as the tremor suggested by "shaking palsy" is not present in all. Subsequent neurologists who made early advances to the understanding of Parkinson's include Armand Trousseau, William Gowers, Samuel Kinnier Wilson, and Wilhelm Erb. Although Parkinson is typically credited with the first detailed description of PD, many previous texts reference some of the disease's clinical signs. In his essay, Parkinson himself acknowledged partial descriptions by Galen, William Cullen, Johann Juncker, and others. Possible earlier but incomplete descriptions include a Nineteenth Dynasty Egyptian papyrus, the ayurvedic text Charaka Samhita, Ecclesiastes 12:3, and a discussion of tremors by Leonardo da Vinci. Multiple traditional Chinese medicine texts may include references to PD, including a discussion in the Yellow Emperor's Internal Classic (c. 425–221 BC) of a disease with symptoms of tremor, stiffness, staring, and stooped posture. In 2009, a systematic description of PD was found in the Hungarian medical text Pax corporis written by Ferenc Pápai Páriz in 1690, some 120 years before Parkinson. Although Páriz correctly described all four cardinal signs, it was only published in Hungarian and was not widely distributed. In 1912, Frederic Lewy described microscopic particles in affected brains, later named Lewy bodies. In 1919, Konstantin Tretiakoff reported that the substantia nigra was the main brain structure affected, corroborated by Rolf Hassler in 1938. The underlying changes in dopamine signaling were identified in the 1950s, largely by Arvid Carlsson and Oleh Hornykiewicz. In 1997, Polymeropoulos and colleagues at the NIH discovered the first gene for PD, SNCA, which encodes alpha-synuclein. Alpha-synuclein was in turn found to be the main component of Lewy bodies by Spillantini, Trojanowski, Goedert, and others. Anticholinergics and surgery were the only treatments until the use of levodopa, which, although first synthesized by Casimir Funk in 1911, did not enter clinical use until 1967. By the late 1980s, deep brain stimulation introduced by Alim Louis Benabid and colleagues at Grenoble, France, emerged as an additional treatment. 
== Society and culture == === Social impact === For some people with PD, masked facial expressions and difficulty moderating facial expressions of emotion or recognizing other people's facial expressions can impact social well-being. As the condition progresses, tremor, other motor symptoms, difficulty communicating, or mobility issues may interfere with social engagement, causing individuals with PD to feel isolated. Public perception and awareness of PD symptoms such as shaking, hallucinating, slurring speech, and being off balance are lacking in some countries and can lead to stigma. === Cost === The economic cost of Parkinson's to both individuals and society is high. In many low- and middle-income countries, public health systems may not fully cover Parkinson's disease therapies, leading to disparities in access to treatment. In contrast, high-income countries with universal healthcare typically cover standard treatments such as levodopa and specialist care. Indirect costs include lifetime earnings losses due to premature death, productivity losses, and caregiver burdens. The duration and progressive nature of PD can place a heavy burden on caregivers: family members like spouses dedicate around 22 hours per week to care. In 2010, the total economic burden of Parkinson's across Europe, including indirect and direct medical costs, was estimated to be €13.9 billion (US$14.9 billion). The total burden in the United States was estimated to be $51.9 billion in 2017, and is projected to surpass $79 billion by 2037. As of 2022, no rigorous economic surveys had been performed for low- or middle-income nations. Preventative care has been identified as crucial to prevent the rapidly increasing incidence of Parkinson's from overwhelming national health systems. === Advocacy === The birthday of James Parkinson, 11 April, has been designated as World Parkinson's Day. A red tulip was chosen by international organizations as the symbol of the disease in 2005; it represents the 'James Parkinson' tulip cultivar, registered in 1981 by a Dutch horticulturalist. Advocacy organizations include the National Parkinson Foundation, which has provided more than $180 million in care, research, and support services since 1982; the Parkinson's Disease Foundation, which has distributed more than $115 million for research and nearly $50 million for education and advocacy programs since its founding in 1957 by William Black; the American Parkinson Disease Association, founded in 1961; and the European Parkinson's Disease Association, founded in 1992. === Notable cases === In the 21st century, the diagnosis of Parkinson's among notable figures has increased the public's understanding of the disorder. Actor Michael J. Fox was diagnosed with PD at 29 years old, and has used his diagnosis to increase awareness of the disease. To illustrate the effects of the disease, Fox has appeared without medication in television roles and before the United States Congress. The Michael J. Fox Foundation, which he founded in 2000, has raised over $2 billion for Parkinson's research. Boxer Muhammad Ali showed signs of PD when he was 38, but was not diagnosed until he was 42; he has been called the "world's most famous Parkinson's patient". Whether he had PD or parkinsonism related to boxing is unresolved. Cyclist and Olympic medalist Davis Phinney, diagnosed with Parkinson's at 40, started the Davis Phinney Foundation in 2004 to support PD research. 
Adolf Hitler is believed to have had Parkinson's, and the condition may have influenced his decision making. == Clinical research == As of 2024, no disease-modifying therapies exist that reverse or slow the progression of Parkinson's. Active research directions include the search for new animal models of the disease and development and trial of gene therapy, stem cell transplants, and neuroprotective agents. Improved treatments will likely combine therapeutic strategies to manage symptoms and enhance outcomes. Reliable biomarkers are needed for early diagnosis, and research criteria for their identification have been established. === Neuroprotective treatments === Anti-alpha-synuclein drugs that prevent alpha-synuclein oligomerization and aggregation or promote the clearance of these aggregates are under active investigation, and potential therapeutic strategies include small molecules and immunotherapies like vaccines and monoclonal antibodies. While immunotherapies show promise, their efficacy is often inconsistent. Anti-inflammatory drugs that target NLRP3 and the JAK-STAT signaling pathway offer another potential therapeutic approach. As the gut microbiome in PD is often disrupted and produces toxic compounds, fecal microbiota transplants might restore a healthy microbiome and alleviate various motor and non-motor symptoms. Neurotrophic factors—peptides that enhance the growth, maturation, and survival of neurons—show modest results but require invasive surgical administration. Viral vectors may represent a more feasible delivery platform. Calcium channel blockers may restore the calcium imbalance present in Parkinson's and are being investigated as a neuroprotective treatment. Other therapies, like deferiprone, may reduce the abnormal accumulation of iron in PD. === Cell-based therapies === In contrast to other neurodegenerative disorders, many Parkinson's symptoms can be attributed to the loss of a single cell type. Consequently, dopaminergic neuron regeneration is a promising therapeutic approach. Although most initial research sought to generate dopaminergic neuron precursor cells from fetal brain tissue, pluripotent stem cells—particularly induced pluripotent stem cells (iPSCs)—have become an increasingly popular tissue source. Both fetal and iPSC-derived DA neurons have been transplanted into patients in clinical trials. Although some individuals see improvement, the results are highly variable. Adverse effects, such as dyskinesia arising from excess dopamine release by the transplanted tissues, have also been observed. === Gene therapy === Gene therapy for Parkinson's seeks to restore the healthy function of dopaminergic neurons in the substantia nigra by delivering genetic material—typically through a viral vector—to these diseased cells. This material may deliver a functional, wild-type version of a gene, or knock down a pathological variant. Experimental gene therapies for PD have aimed to increase the expression of growth factors or enzymes involved in dopamine synthesis, like tyrosine hydroxylase. The one-time delivery of genes circumvents the recurrent invasive administration required to deliver some peptides and proteins to the brain. MicroRNAs are an emerging PD gene therapy platform that may serve as an alternative to viral vectors. == Notes and references == === Notes === === Citations === === Works cited === ==== Books ==== ==== Journal articles ==== ==== Web sources ==== ==== News publications ==== == External links ==
Wikipedia/Parkinson's_disease
The Janssen COVID‑19 vaccine (Ad26.COV2.S), sold under the brand name Jcovden, is a COVID‑19 vaccine that was developed by Janssen Vaccines in Leiden, Netherlands, and its Belgian parent company Janssen Pharmaceuticals, a subsidiary of American company Johnson & Johnson. It is a viral vector vaccine based on a human adenovirus that has been modified to contain the gene for making the spike protein of the SARS-CoV-2 virus that causes COVID‑19. The body's immune system responds to this spike protein to produce antibodies. The vaccine requires only one dose and does not need to be stored frozen. Clinical trials for the vaccine were started in June 2020, with phase III involving around 43,000 people. In January 2021, Janssen announced that 28 days after a completed vaccination, the vaccine was 66% effective in a one-dose regimen in preventing symptomatic COVID‑19, with an 85% efficacy in preventing severe COVID‑19 and 100% efficacy in preventing hospitalization or death caused by the disease. The vaccine has been granted an emergency use authorization (EUA) by the US Food and Drug Administration (FDA) and a conditional marketing authorization by the European Medicines Agency (EMA) and the UK Medicines and Healthcare products Regulatory Agency. In June 2023, the FDA revoked the emergency use authorization for the Janssen COVID-19 vaccine at the request of its manufacturer. Because cases of thrombosis with thrombocytopenia syndrome and Guillain-Barré syndrome have been reported after receipt of the Janssen COVID‑19 vaccine, the US Centers for Disease Control and Prevention (CDC) recommends "preferential use of mRNA COVID‑19 vaccines over the Janssen COVID‑19 vaccine, including both primary and booster doses administered to prevent COVID‑19, for all persons aged 18 years of age and older. The Janssen COVID‑19 vaccine may be considered in some situations, including for persons with a contraindication to receipt of mRNA COVID‑19 vaccines." In February 2022, Johnson & Johnson announced that it had temporarily suspended production of the vaccine, though it noted that production would likely resume in the future and that it would honor existing supply contracts using the millions of vaccine doses already in its inventory. == Medical uses == The Janssen COVID‑19 vaccine is used to provide protection against infection by the SARS-CoV-2 virus in order to prevent COVID‑19 in people aged eighteen years and older. The vaccine is given by intramuscular injection into the deltoid muscle. The initial course consists of a single dose. There is no evidence that a second (booster) dose is needed to prevent severe disease in healthy adults. In October 2021, the CDC began recommending a booster dose. === Efficacy === A vaccine is generally considered effective if the estimate is ≥50% with a >30% lower limit of the 95% confidence interval. Efficacy is closely related to effectiveness, which is generally expected to slowly decrease over time. In October 2021, Janssen reported at a meeting of the US Food and Drug Administration Vaccines and Related Biological Products Advisory Committee (VRBPAC) that a single dose produced durable protection against severe disease and hospitalization for at least 6 months in the United States, even when Delta emerged, but also a global decrease in protection against moderate disease attributed to emerging variants outside the US. 
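The efficacy percentages quoted above are derived by comparing attack rates in the vaccinated and placebo arms of a trial, with vaccine efficacy defined as one minus the ratio of the two attack rates. The sketch below illustrates that arithmetic with made-up case counts chosen only to land near the reported one-dose figure; it is not the actual Janssen trial data or the licensure analysis, and it omits the confidence-interval calculation that the ≥50% threshold above refers to.

# Illustrative only: hypothetical case counts, not data from the Janssen trials.
def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    # Point estimate: VE = 1 - (attack rate in the vaccine arm / attack rate in the placebo arm)
    attack_rate_vaccine = cases_vaccinated / n_vaccinated
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccine / attack_rate_placebo

# Hypothetical equal-sized arms with roughly one third of the cases in the vaccine arm.
ve = vaccine_efficacy(cases_vaccinated=116, n_vaccinated=20_000,
                      cases_placebo=348, n_placebo=20_000)
print(f"Vaccine efficacy ≈ {ve:.0%}")  # prints "Vaccine efficacy ≈ 67%"

In a real trial the same ratio is computed from the observed case counts and follow-up, and the uncertainty around it determines whether the lower bound of the confidence interval clears the regulatory threshold.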
Janssen also reported that a booster dose given 2 months after the primary dose increased efficacy against symptomatic disease to 75% (95% CI, 55–87%) globally and to 94% (59–100%) in the US and that it also increased efficacy against severe disease to nearly 100% (33–100%) globally. == Pharmacology == The vaccine consists of a replication-incompetent recombinant adenovirus type 26 (Ad26) viral vector expressing the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike (S) protein in a stabilized conformation. The PER.C6 cell line derived from human embryonic retinal cells is used in the production (replication) of the Ad26 adenovirus vector. It is similar to the approach used by the Oxford–AstraZeneca COVID-19 vaccine and the Russian Sputnik V COVID-19 vaccine which use human embryonic kidney (HEK) 293 cells for adenovirus vector replication. The Ad26 viral vector lacks the E1 gene required for replication. Therefore, it cannot replicate in the human organism. == Chemistry == The vaccine contains the following excipients: citric acid monohydrate, trisodium citrate dihydrate, ethanol (alcohol), 2-hydroxypropyl-β-cyclodextrin (HBCD) (hydroxypropyl betadex), polysorbate 80, sodium chloride, sodium hydroxide, and hydrochloric acid. == Manufacturing == Unpunctured vials may be stored between 9 and 25 °C (48 and 77 °F) for up to twelve hours, and the vaccine can remain viable for months in a standard refrigerator. It is not shipped or stored frozen. In April 2020, Johnson & Johnson entered a partnership with Catalent to provide large-scale manufacturing of the Johnson & Johnson vaccine at Catalent's Bloomington, Indiana facility. In July 2020, the partnership was expanded to include Catalent's facility in Anagni, Italy. In September 2020, Grand River Aseptic Manufacturing agreed with Johnson & Johnson to support the manufacture of the vaccine, including technology transfer and fill and finish manufacture, at its Grand Rapids, Michigan facility. In December 2020, Johnson & Johnson and Reig Jofre, a Spanish pharmaceutical company, entered into an agreement to manufacture the vaccine at Reig Jofre's Barcelona facility. In February 2021, Sanofi and Johnson & Johnson struck a deal for Sanofi to provide support and infrastructure at Sanofi's Marcy-l'Étoile, France facility to manufacture approximately twelve million doses of the Johnson & Johnson vaccine per month once authorized. In March 2021, Johnson & Johnson and Aspen Pharmacare made a deal to manufacture 220 million vaccines at Aspen's Gqeberha facility in Eastern Cape, South Africa. They plan to distribute the vaccine to other countries, mainly in Africa, and also through the COVID-19 Vaccines Global Access (COVAX) program. In March 2021, Merck & Co and Johnson & Johnson struck a deal for Merck to manufacture the Johnson & Johnson vaccine at two facilities in the United States to help expand the manufacturing capacity of the vaccine using provisions of the Defense Production Act. That same month, human error at a plant run by Emergent BioSolutions in Baltimore resulted in the spoilage of up to fifteen million doses of the Johnson & Johnson vaccine. The error, which was caught before the doses left the plant, delayed expected shipments of the Johnson & Johnson vaccine within the United States. 
As the error had involved combining ingredients of the Johnson & Johnson vaccine with the AstraZeneca vaccine, the Biden administration gave control of the plant to Johnson & Johnson and said the plant should produce only the Johnson & Johnson vaccine to avoid further mix-ups. In July 2021, the FDA authorized Emergent to resume production (but not distribution) of the Janssen vaccine. A total of 400 million doses were destroyed. == Adverse effects == Review of Vaccine Adverse Event Reporting System (VAERS) safety monitoring data by the US Centers for Disease Control and Prevention (CDC) through 21 April 2021 (by which time 7.98 million doses of the Janssen COVID‑19 vaccine had been administered) showed that "97% of reported reactions after vaccine receipt were nonserious, consistent with preauthorization clinical trials data." The most common side effects of the vaccine in the trials were usually mild or moderate, occurred within two days after vaccination, and got better within 1 or 2 days. The most common side effects are pain at the injection site, headache, tiredness, muscle pain, and nausea, affecting more than 1 in 10 people. Coughing, joint pain, fever, chills, redness, and swelling at the injection site occurred in less than 1 in 10 people. Sneezing, tremor, throat pain, rash, sweating, muscle weakness, pain in the arms and legs, backache, weakness, and feeling generally unwell occurred in less than 1 in 100 people. Rare side effects (that occurred in less than 1 in 1,000 people) are hypersensitivity (allergy) and itchy rash. An increased risk of the rare and potentially fatal thrombosis with thrombocytopenia syndrome (TTS) has been associated mainly with younger female recipients of the vaccine. This syndrome, marked by formation of blood clots in the blood vessels in combination with low levels of blood platelets 4–28 days after the vaccine's administration, occurred at a rate of about 7 per 1 million vaccinated women aged 18–49 years and occurs more rarely in other populations (i.e., women 50 years and older and men of all ages). Allergic reactions, including anaphylaxis, can occur in rare cases within a few minutes to one hour after receiving a dose. In May 2021, with 7.98 million doses administered, the CDC reported four cases of anaphylaxis after vaccination (none of which resulted in death) and 28 cases of cerebral venous sinus thrombosis (of which three resulted in death). In July 2021, the US fact sheet for the vaccine was updated to indicate that there may be an increased risk of Guillain-Barré syndrome during the 42 days following vaccination. The European Medicines Agency (EMA) listed Guillain-Barré syndrome (GBS) as a very rare side effect of COVID‑19 Vaccine Janssen and added a warning in the product information. In August 2021, the Pharmacovigilance Risk Assessment Committee (PRAC) recommended updating the product information to state that "cases of dizziness and tinnitus (ringing or other noises in one or both ears) are linked to the administration of COVID‑19 vaccine Janssen." Tinnitus was later labeled as "very rare" in a final safety study by the manufacturer. In December 2021, the CDC accepted the recommendation from a panel of experts to prefer the Pfizer–BioNTech and Moderna vaccines over the Janssen vaccine due to rare but serious blood clotting events. 
In May 2022, the FDA limited the use of the Janssen vaccine to adults aged eighteen and older who are unable to access other vaccines or who are otherwise "medically ineligible" for other vaccine options. == History == The stabilized version of the spike protein – which includes two mutations in which the regular amino acids are replaced with prolines – was developed by researchers at the National Institute of Allergy and Infectious Diseases' Vaccine Research Center and the University of Texas at Austin. During the COVID‑19 pandemic, Johnson & Johnson committed over US$1 billion toward development of a not-for-profit vaccine in partnership with the Biomedical Advanced Research and Development Authority (BARDA) Office of the Assistant Secretary for Preparedness and Response (ASPR) at the U.S. Department of Health and Human Services (HHS). Johnson & Johnson said its vaccine project would be "at a not-for-profit level" as the company viewed it as "the fastest and the best way to find all the collaborations in the world to make this happen". In November, Johnson & Johnson announced that Janssen would commit about $604 million and BARDA would commit $454 million to fund the ENSEMBLE trial. Johnson & Johnson subsidiary Janssen Vaccines, in partnership with Beth Israel Deaconess Medical Center (BIDMC), was responsible for developing the vaccine candidate, based on the same technology used to make its Ebola vaccine. === Clinical trials === Preclinical trials indicated that the vaccine effectively protected hamsters and rhesus macaques from SARS-CoV-2. ==== Phase I–II ==== In June 2020, Johnson & Johnson and the National Institute of Allergy and Infectious Diseases (NIAID) confirmed that they planned to start clinical trials of the Ad26.COV2.S vaccine in September 2020, with the possibility of phase I–IIa human clinical trials starting at an accelerated pace in the second half of July. A phase I–IIa clinical trial started with the recruitment of the first subject in July 2020 and enrolled study participants in Belgium and the US. Interim results from the phase I–IIa trial established the safety, reactogenicity, and immunogenicity of Ad26.COV2.S. With one dose, after 29 days, ninety percent of participants had developed sufficient antibodies to neutralize the virus. After 57 days, that number reached one hundred percent. A dose of 1×10¹¹ viral particles (high dose) produced higher neutralizing-antibody titers than a dose of 5×10¹⁰ viral particles (low dose). After a second dose given 56 days after the first among participants between the ages of 18 and 55 years, the incidence of grade 3 solicited systemic adverse events was much lower than that after the first immunization in both the low-dose and high-dose groups, a finding that contrasts with observations with respect to messenger RNA–based vaccines, for which the second dose has been associated with increased reactogenicity. A substudy with 20 participants found that humoral and cell-mediated immune responses, including cytotoxic T cells, lasted for at least 8 months. ==== Phase III ==== A phase III clinical trial called ENSEMBLE started enrollment in September 2020 and completed enrollment in December 2020. It was designed as a randomized, double-blind, placebo-controlled clinical trial intended to evaluate the safety and efficacy of a single-dose vaccine versus placebo in adults aged 18 years and older. Study participants received a single intramuscular injection of Ad26.COV2.S at a dose level of 5×10¹⁰ virus particles on day one. 
The trial was paused in October 2020, because a volunteer became ill, but the company said it found no evidence that the vaccine had caused the illness and announced in October 2020 that it would resume the trial. In January 2021, Janssen announced safety and efficacy data from an interim analysis of ENSEMBLE trial data, which demonstrated the vaccine was 66% effective at preventing the combined endpoints of moderate and severe COVID‑19 at 28 days post-vaccination among all volunteers. The interim analysis was based on 468 cases of symptomatic COVID‑19 among 43,783 adult volunteers in Argentina, Brazil, Chile, Colombia, Mexico, Peru, South Africa, and the United States. No deaths related to COVID‑19 were reported in the vaccine group, while five deaths in the placebo group were related to COVID‑19. During the trial, no anaphylaxis was observed in participants. A second phase III clinical trial called ENSEMBLE 2 started enrollment in November 2020. ENSEMBLE 2 differed from ENSEMBLE in that its study participants received two intramuscular (IM) injections of Ad26.COV2.S, one on day 1 and the next on day 57. Early results indicated 85% efficacy against severe/critical disease. Plasma from 8 participants showed greater neutralization activity against the Delta variant than against Beta. === Authorizations === ==== European Union ==== Beginning in December 2020, clinical trial of the vaccine candidate has been undergoing a "rolling review" process by the Committee for Medicinal Products for Human Use of the European Medicines Agency (EMA), a step to expedite EMA consideration of an expected conditional marketing authorization. In February 2021, Janssen applied to the EMA for conditional marketing authorization of the vaccine. The European Commission approved the COVID‑19 Vaccine Janssen in March 2021. In Finland, the Janssen vaccine is only offered for those aged 65 and over. ==== United States ==== In February 2021, Janssen Biotech applied to the US Food and Drug Administration (FDA) for an emergency use authorization (EUA), and the FDA announced that its Vaccines and Related Biological Products Advisory Committee (VRBPAC) would meet in February to consider the application. In February, ahead of the VRBPAC meeting, briefing documents from Janssen and the FDA were issued; the FDA document recommends granting the EUA, concluding that the results of the clinical trials and the safety data are consistent with FDA EUA guidance for COVID‑19 vaccines. At the 26 February meeting, VRBPAC voted unanimously (22–0) to recommend that an EUA for the vaccine be issued. The FDA granted the EUA for the vaccine the following day. In February, the Advisory Committee on Immunization Practices (ACIP) of the Centers for Disease Control and Prevention (CDC) recommended the use of the vaccine for those aged 18 and older. In April 2021, the CDC and the FDA issued a joint statement recommending that use of the Janssen vaccine be suspended, due to reports of six cases of cerebral venous sinus thrombosis—a "rare and severe" blood clot—in combination with low levels of blood platelets (thrombocytopenia), in six women between the ages of 18 and 48 who had received the vaccine. The symptoms occurred 6–13 days after they had received the vaccination, and it was reported that one woman had died and a second woman had been hospitalized in critical condition. 
In April, the FDA and the CDC determined that the recommended pause regarding the use of the Janssen COVID‑19 Vaccine in the US should be lifted and use of the vaccine should resume. The EUA and the fact sheets were updated to reflect the risks of thrombosis-thrombocytopenia syndrome (TTS). The FDA granted an emergency use authorization and the CDC issued a standing order for the use of the vaccine. In June 2023, the FDA revoked the emergency use authorization for the Janssen COVID-19 vaccine at the request of its manufacturer. ==== Elsewhere ==== In February 2021, Saint Vincent and the Grenadines issued an emergency authorization for the Janssen COVID‑19 vaccine, as well as the Moderna COVID‑19 vaccine, the Pfizer–BioNTech vaccine, the Gam-COVID-Vac vaccine (Sputnik V), and the Oxford–AstraZeneca vaccine. In December 2020, Johnson & Johnson entered into an agreement in principle with the GAVI vaccine alliance to support the COVAX Facility. In February 2021, Johnson & Johnson submitted its formal request and data package to the World Health Organization for an Emergency Use Listing (EUL); an EUL is a requirement for participation in COVAX. Johnson & Johnson anticipated providing up to five hundred million doses through 2022 for COVAX. The World Health Organization issued an EUL for the Janssen COVID‑19 vaccine Ad26.COV2.S vaccine in March 2021. In February 2021, the vaccine received emergency authorization in South Africa. In April 2021, South Africa suspended its rollout of the vaccine. The program resumed in April 2021. In February 2021, Bahrain authorized the vaccine for emergency use. In February 2021, the South Korean Ministry of Food and Drug Safety began a review of Johnson & Johnson's application for approval of its vaccine. In late November 2020, Johnson & Johnson submitted a rolling review application to Health Canada for approval of its vaccine. In March 2021, the vaccine received emergency authorization in Colombia. In March 2021, the vaccine was authorized under interim order in Canada. In April 2021, the Australian government stated that it would not be purchasing the Janssen vaccine, as it "does not intend to purchase any further adenovirus vaccines at this time". The Therapeutic Goods Administration granted provisional approval for use of the Janssen vaccine in Australia in June 2021. In April 2021, the vaccine received emergency use authorization in the Philippines. In May 2021, the vaccine received conditional marketing authorization in the United Kingdom. In June 2021, the vaccine received emergency use authorization in Chile. The vaccine will be provided via COVAX. In June 2021, Malaysia's National Pharmaceutical Regulatory Agency (NPRA) issued conditional registration for emergency use of the vaccine. In June 2021, COVID‑19 Janssen Ad26.COV2.S was granted provisional approval in Australia. In July 2021, the vaccine received provisional approval for use for people aged 18 and above in New Zealand. In August 2021, Health and Family Welfare Minister of India announced that Johnson and Johnson single-dose vaccine was approved for emergency use in India through a supply agreement with homegrown vaccine maker Biological E. Limited. In September 2021, National Agency of Drug and Food Control (BPOM) issued emergency use authorization in Indonesia. In November 2021, the vaccine's authorization under interim order in Canada was transitioned to approval for use under the country's Food and Drug Regulations. 
In August 2023, the COVID-19 Vaccine Janssen was removed from the Australian Register of Therapeutic Goods at the request of Janssen-Cilag Pty Ltd. The vaccine was never supplied in Australia. === Further development === ==== Booster vaccination ==== In October 2021, the US Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) authorized the use of either homologous or heterologous vaccine booster doses. The authorization was expanded to include all adults in November 2021. == Society and culture == About 19.4 million doses of the Janssen COVID-19 vaccine were administered in the EU/EEA from authorization to 26 June 2022. === Economics === Given that the Janssen vaccine requires only a single dose and costs less, it was expected to play an important role in low- and middle-income countries. Since it is a single-dose vaccine, it has been a popular choice for vaccinating the homeless, the incarcerated, and refugee populations, who can be difficult to reach for a second dose. With lower costs and less demanding storage and distribution requirements than the COVID‑19 vaccines from Pfizer and Moderna, the Janssen vaccine is more easily transported, stored, and administered. South African health minister Zweli Mkhize announced on 9 February 2021 that the country would sell or swap its one million doses of AstraZeneca vaccine. Once it did so, South Africa began vaccination using the Janssen vaccine in February 2021, marking the vaccine's first use outside of a clinical trial. In July 2020, Johnson & Johnson pledged to deliver up to three hundred million doses of its vaccine to the US, with one hundred million upfront and an option for twenty million more. The deal, worth more than $1 billion, was funded by the Biomedical Advanced Research and Development Authority (BARDA) and the U.S. Department of Defense. The deal was confirmed on 5 August. In August 2020, Johnson & Johnson signed a contract with the US federal government for $1 billion, agreeing to deliver one hundred million doses of the vaccine to the US following the Food and Drug Administration (FDA) grant of approval or emergency use authorization (EUA) for the vaccine. Under its agreement with the US government, Johnson & Johnson was targeted to produce twelve million doses by the end of February 2021, more than sixty million doses by the end of April 2021, and more than one hundred million doses by the end of June 2021. However, in January 2021, Johnson & Johnson acknowledged that manufacturing delays would likely prevent it from meeting its contract of twelve million doses delivered to the US by the end of February. In February 2021, through congressional testimony by a company executive, Johnson & Johnson indicated that the company could deliver twenty million doses to the US government by the end of March and one hundred million doses in the first half of 2021. In February 2021, Johnson & Johnson announced that it planned to ship the vaccine immediately following authorization. In March 2021, the Canadian government placed an order with Johnson & Johnson for ten million doses, with an option to purchase up to twenty-eight million more; on 5 March, the vaccine became the fourth to receive Health Canada approval. 
Shipments of the vaccine were scheduled to start in the second half of April 2021, with a commitment to deliver at least two hundred million doses to the EU in 2021. The European distribution of the vaccine was slightly delayed until the EMA decided that the risk of rare vaccine-induced blood clots did not outweigh the benefits of helping to fight the COVID‑19 pandemic. === Controversies === The United States Conference of Catholic Bishops expressed concern about the vaccine because the cell line PER.C6, which is used in development and production, was originally derived from the retinal tissue of an 18-week-old fetus electively aborted in 1985. Although the use of fetal tissue in vaccine development has been common since the 1930s, especially with cell-based vaccines, there are currently alternatives that do not carry the same potential ethical concerns as the Janssen vaccine. Some bioethicists dismiss the idea that ethical concerns about using cells derived from ethically compromised sources need to be addressed or that alternatives need to be sought. Others advance the view that the cells used for COVID‑19 vaccines are thousands of generations removed from their source material and do not contain any fetal tissue. In December 2020, the Vatican published a note approved by Pope Francis, stating that "... all [COVID-19] vaccinations recognized as clinically safe and effective can be used in good conscience ..." However, the key objection to using these vaccines still remains. In September 2021, after criticism that doses of its single-shot COVID‑19 vaccine produced in Aspen Pharmacare's facility in South Africa were being exported to Europe, it was agreed that millions of doses that had been shipped to Europe and stored in warehouses would be returned to Africa, and that newly manufactured doses would be shipped to African countries. ==== Misinformation ==== Videos on video-sharing platforms circulated around May 2021 showing people having magnets stick to their arms after receiving the vaccine, purportedly demonstrating the conspiracy theory that vaccines contain microchips, but these videos have been debunked. == Notes == == References == == External links == Corum J, Zimmer C (18 December 2020). "How the Johnson & Johnson Vaccine Works". The New York Times. "The Story of One Dose". New York. 5 April 2021. "Jcovden Safety Updates". European Medicines Agency (EMA). December 2023. Australian Public Assessment Report for Ad26.COV2.S (PDF) (Report). Therapeutic Goods Administration (TGA). June 2021. M.I.T. Lecture 12: Dan Barouch, Covid-19 Vaccine Development on YouTube
Wikipedia/Janssen_COVID-19_vaccine
Pertussis vaccine is a vaccine that protects against whooping cough (pertussis). There are two main types: whole-cell vaccines and acellular vaccines. The whole-cell vaccine is about 78% effective while the acellular vaccine is 71–85% effective. The effectiveness of the vaccines appears to decrease by between 2 and 10% per year after vaccination, with a more rapid decrease with the acellular vaccines. The vaccine is only available in combination with tetanus and diphtheria vaccines (DPT vaccine). Pertussis vaccine is estimated to have saved over 500,000 lives in 2002. Vaccinating the mother during pregnancy may protect the baby. The World Health Organization and the US Centers for Disease Control and Prevention recommend all children be vaccinated for pertussis and that it be included in routine vaccinations. Three doses starting at six weeks of age are typically recommended in young children. Additional doses may be given to older children and adults. This recommendation includes people who have HIV/AIDS. The acellular vaccines are more commonly used in the developed world due to fewer adverse effects. Between 10 and 50% of people given the whole-cell vaccines develop redness at the injection site or fever. Febrile seizures and long periods of crying occur in less than 1% of people. With the acellular vaccines a brief period of non-serious swelling of the arm may occur. Side effects with both types of vaccines, but especially the whole-cell vaccine, are less common the younger the child. The whole-cell vaccines should not be used after seven years of age. Serious long-term neurological problems are not associated with either type. The pertussis vaccine was developed in 1926. It is on the World Health Organization's List of Essential Medicines. == Medical uses == === Effectiveness === Acellular pertussis vaccine (aP) with three or more antigens prevents around 85% of typical whooping cough cases in children. Compared to the whole cell pertussis vaccine (wP) used previously, the efficacy of aP declines faster. Multi-antigen aP has higher efficacy than old low-efficacy wP, but is possibly less effective than the highest-efficacy wP vaccines. Acellular vaccines also cause fewer side effects than whole-cell vaccines. Despite widespread vaccination, pertussis has persisted in vaccinated populations and is one of the most common vaccine-preventable diseases. The recent resurgence in pertussis infections is attributed to a combination of waning immunity and new mutations in the pathogen that existing vaccines are unable to effectively control. It is debated whether the switch from wP to aP has played a role in this resurgence, with two 2019 articles disagreeing with one another. Some studies have suggested that while acellular pertussis vaccines are effective at preventing the disease, they have a limited impact on infection and transmission, meaning that vaccinated people could spread the disease even though they may have only mild symptoms or none at all. === Children === For children, immunizations are commonly given in combination with immunizations against tetanus, diphtheria, polio, and haemophilus influenzae type B at two, four, six, and 15–18 months of age. === Adults === In 2006, the US Centers for Disease Control and Prevention (CDC) recommended adults receive pertussis vaccination along with the tetanus and diphtheria toxoid booster. In 2011, they began recommending boosters during each pregnancy. The UK commenced routine vaccination of pregnant women in 2012. 
The programme initially aimed to vaccinate women between 28 and 32 weeks (but up to 38 weeks) of pregnancy; later advice allowed maternal pertussis immunisation from week 16 of pregnancy. Since its introduction, the maternal pertussis immunisation programme has been very effective in protecting infants until they can have their first vaccinations at two months of age. During the first year of the maternal immunisation programme in Britain, the average vaccine coverage in England was 64% and vaccine effectiveness was estimated to be 91%. During 2012, fourteen infants died from pertussis in England and Wales; all were born before the introduction of the programme. Up to 31 October 2014, 10 deaths were reported in infants with confirmed whooping cough who were born after the introduction of the maternal programme. Nine of them were born to unvaccinated mothers and all 10 were too young to have received a dose of pertussis-containing vaccine. The pertussis booster for adults is combined with a tetanus vaccine and diphtheria vaccine booster; this combination is abbreviated "Tdap" (Tetanus, diphtheria, acellular pertussis). It is similar to the childhood vaccine called "DTaP" (Diphtheria, Tetanus, acellular Pertussis), with the main difference that the adult version contains smaller amounts of diphtheria and pertussis components—this is indicated in the name by the use of lower-case "d" and "p" for the adult vaccine. The lower-case "a" in each vaccine indicates that the pertussis component is acellular, or cell-free, which reduces the incidence of side effects. The pertussis component of the original DPT vaccine accounted for most of the minor local and systemic side effects in many vaccinated infants (such as mild fever or soreness at the injection site). The newer acellular vaccine, known as DTaP, has greatly reduced the incidence of adverse effects compared to the earlier "whole-cell" pertussis vaccine; however, immunity wanes faster after the acellular vaccine than after the whole-cell vaccine. 
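The waning described above is often summarized as a decline of roughly 2 to 10% per year after vaccination. One simple way to picture what that range implies is to subtract that many percentage points from an initial effectiveness estimate for each year since vaccination, as in the sketch below; the linear percentage-point decline and the 85% starting value are simplifying assumptions made for illustration, not a model taken from the sources above.

# Illustrative sketch only: assumes a constant percentage-point decline per year.
def waned_effectiveness(initial, decline_per_year, years):
    # Effectiveness after `years`, floored at zero.
    return max(initial - decline_per_year * years, 0.0)

initial = 0.85  # assumed starting effectiveness, near the upper bound reported for acellular vaccines
for decline in (0.02, 0.10):  # the reported 2-10% per-year range
    print(f"Decline of {decline:.0%} per year: about {waned_effectiveness(initial, decline, years=5):.0%} after 5 years")
# Prints roughly 75% (slow waning) and 35% (fast waning) after five years.

Because the sources do not specify whether the decline is in percentage points or relative terms, the exact numbers should not be over-read; the point is simply that protection several years after the last dose can be substantially lower than the headline effectiveness, which is one motivation for the booster doses discussed above.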
== Modern formulations == Whole-cell pertussis vaccines contain the entire inactivated organism while acellular pertussis vaccines contain parts (subunits) including the pertussis toxin alone or with components such as filamentous haemagglutinin, fimbrial antigens and pertactin. Whole-cell (wP) remains the vaccine of choice in low and middle-income countries, as it is cheaper and easier to produce. As of 2018, there are four acellular DTaP/Tdap vaccines licensed for use in the United States: Infanrix and Daptacel for children, Boostrix and Adacel for adolescents and adults. As of April 2016, the United Kingdom authorized five multivalent vaccines that include pertussis components: Pediacel, Infanrix-IPV+Hib, Repevax, Infanrix-IPV, and Boostrix-IPV. == History == Pearl Kendrick, Loney Gordon and Grace Eldering studied pertussis in the 1930s. They developed and ran the first large-scale study of a successful vaccine for the disease. The pertussis vaccine is usually administered as a component of the diphtheria-tetanus-pertussis (DTP/DTwP, DTaP, and Tdap) vaccines. There are several types of diphtheria-tetanus-pertussis vaccines. The first vaccine against pertussis was developed in the 1930s by pediatrician Leila Denmark. It included whole-cell killed Bordetella pertussis bacteria. Until the beginning of the 1990s, it was used as a part of the DTwP vaccine for the immunization of children. It, however, contained pertussis endotoxin (surface lipooligosaccharide) and produced side effects. New acellular pertussis vaccines were developed in the 1980s, which included only a few selected pertussis antigens (toxins and adhesins). Acellular vaccines are less likely to provoke side effects. They became a part of DTaP vaccines for children. In 2005, two new vaccine products were licensed for use in adolescents and adults that combine the tetanus and diphtheria toxoids with acellular pertussis vaccine. These (Tdap) vaccines contain reduced amounts of pertussis antigens compared to DTaP vaccines. === Controversy in the 1970s–1980s === During the 1970s and 1980s, a controversy erupted related to the question of whether the whole-cell pertussis component caused permanent brain injury in rare cases, called pertussis vaccine encephalopathy. Despite this allegation, doctors recommended the vaccine due to the overwhelming public health benefit, because the claimed rate was very low (one case per 310,000 immunizations, or about 50 cases out of the 15 million immunizations each year in the United States), and the risk of death from the disease was high (pertussis killed thousands of Americans each year before the vaccine was introduced). No studies showed a causal connection, and later studies showed no connection of any type between the DPT vaccine and permanent brain injury. The alleged vaccine-induced brain damage proved to be an unrelated condition, infantile epilepsy. In 1990, the Journal of the American Medical Association called the connection a "myth" and "nonsense". However, negative publicity and fearmongering caused the immunization rate to fall in several countries, including the UK, Sweden, and Japan. A dramatic increase in the incidence of pertussis followed. For example, in England and Wales before the introduction of pertussis immunisation in the 1950s, the average annual number of notifications exceeded 120,000. By 1972, when vaccine coverage was around 80%, there were only 2,069 notifications of pertussis. 
The professional and public anxiety about the safety and efficacy of the whole-cell vaccine caused coverage to fall to about 60% in 1975 and around 30% by 1978. Major epidemics occurred in 1977–79 and 1981–83. In 1978, there were over 65,000 notifications and 12 deaths. These two major epidemics illustrate the impact of a fall in coverage of an effective vaccine. The actual number of deaths due to these pertussis outbreaks was higher than the number reported, since not all cases in infants are recognised. In the United States, low profit margins and an increase in vaccine-related lawsuits led many manufacturers to stop producing the DPT vaccine by the early 1980s. In 1982, the television documentary DPT: Vaccine Roulette by reporter Lea Thompson of Washington, D.C., station WRC-TV depicted the lives of children whose severe disabilities were incorrectly blamed on the DPT vaccine. The ensuing negative publicity led to many lawsuits against vaccine manufacturers. By 1985, vaccine manufacturers had difficulty obtaining liability insurance. The price of the DPT vaccine skyrocketed, leading providers to curtail purchases and limiting availability. Only one manufacturer remained in the US by the end of 1985. In response, Congress passed the National Childhood Vaccine Injury Act (NCVIA) in 1986, establishing a federal no-fault system to compensate victims of injury caused by recommended vaccines. Concerns about side effects led Sato to introduce an even safer acellular vaccine for Japan in 1981, which was approved in the US in 1992 for use in the combination DTaP vaccine. The acellular vaccine has a rate of adverse events similar to that of a Td vaccine (a tetanus-diphtheria vaccine containing no pertussis vaccine). == References == == External links == Pertussis Vaccine at the U.S. National Library of Medicine Medical Subject Headings (MeSH) "Tetanus, Diphtheria, and Pertussis Vaccines". MedlinePlus. U.S. National Library of Medicine. "Tdap (Tetanus, Diphtheria, Pertussis) Vaccine Information Statement". Centers for Disease Control and Prevention (CDC). 11 July 2018. "DTaP (Diphtheria, Tetanus, Pertussis) Vaccine Information Statement". Centers for Disease Control and Prevention (CDC). 24 August 2018.
Wikipedia/Pertussis_vaccine
The rabies vaccine is a vaccine used to prevent rabies. There are several rabies vaccines available that are both safe and effective. Vaccinations must be administered prior to rabies virus exposure or within the latent period after exposure to prevent the disease. Transmission of rabies virus to humans typically occurs through a bite or scratch from an infectious animal, but exposure can occur through indirect contact with the saliva from an infectious individual. Doses are usually given by injection into the skin or muscle. After exposure, the vaccination is typically used along with rabies immunoglobulin. It is recommended that those who are at high risk of exposure be vaccinated before potential exposure. Rabies vaccines are effective in humans and other animals, and vaccinating dogs is very effective in preventing the spread of rabies to humans. A long-lasting immunity to the virus develops after a full course of treatment. Rabies vaccines may be used safely by all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. After exposure to rabies, there is no contraindication to its use, because the untreated virus is virtually 100% fatal. The first rabies vaccine was introduced in 1885 and was followed by an improved version in 1908. Over 29 million people worldwide receive human rabies vaccine annually. It is on the World Health Organization's List of Essential Medicines. == Medical uses == === Before exposure === The World Health Organization (WHO) recommends vaccinating those who are at high risk of the disease, such as children who live in areas where it is common. Other groups may include veterinarians, researchers, or people planning to travel to regions where rabies is common. Three doses of the vaccine are given over a one-month period on days zero, seven, and either twenty-one or twenty-eight. === After exposure === For individuals who have been potentially exposed to the virus, four doses over two weeks are recommended, as well as an injection of rabies immunoglobulin with the first dose. This is known as post-exposure vaccination. For people who have previously been vaccinated, only a single dose of the rabies vaccine is required. However, vaccination after exposure is neither a treatment nor a cure for rabies; it can only prevent the development of rabies in a person if given before the virus reaches the brain. Because the rabies virus has a relatively long incubation period, post-exposure vaccinations are typically highly effective. === Additional doses === Immunity following a course of doses is typically long lasting, and additional doses are usually not needed unless the person has a high risk of contracting the virus. Those at risk may have tests done to measure the amount of rabies antibodies in the blood, and then get rabies boosters as needed. Following administration of a booster dose, one study found 97% of immunocompetent individuals demonstrated protective levels of neutralizing antibodies after ten years. == Safety == Rabies vaccines are safe in all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. Because of the certain fatality of the virus, receiving the vaccine is always advisable. 
Vaccines made from nerve tissue are used in a few countries, mainly in Asia and Latin America, but are less effective and have greater side effects. Their use is thus not recommended by the World Health Organization. == Types == Development of the human diploid cell rabies vaccine (HDCV) started in 1967. Human diploid cell rabies vaccines are inactivated vaccines made using the attenuated Pitman-Moore L503 strain of the virus. In addition to these developments, newer and less expensive purified chicken embryo cell vaccines (CCEEV) and purified Vero cell rabies vaccines are now available and are recommended for use by the WHO. The purified Vero cell rabies vaccine uses the attenuated Wistar strain of the rabies virus, and uses the Vero cell line as its host. CCEEVs can be used in both pre- and post-exposure vaccinations. CCEEVs use inactivated rabies virus grown from either embryonated eggs or in cell cultures and are safe for use in humans and animals. The vaccine was attenuated and prepared in the human diploid cell strain WI-38, which was gifted to Hilary Koprowski at the Wistar Institute by Leonard Hayflick, an Associate Member, who developed this normal human diploid cell strain. Verorab, developed by Sanofi-Aventis, and Speeda, developed by Liaoning Chengda, are purified Vero cell rabies vaccines (PVRV). The first is approved by the World Health Organization. Verorab is approved for medical use in Australia and the European Union and is indicated for both pre-exposure and post-exposure prophylaxis against rabies. == History == Virtually all infections with rabies resulted in death until two French scientists, Louis Pasteur and Émile Roux, developed the first rabies vaccination in 1885. Nine-year-old Joseph Meister (1876–1940), who had been mauled by a rabid dog, was the first human to receive this vaccine. The treatment started with a subcutaneous injection on 6 July 1885, at 8:00 pm, which was followed by 12 additional doses administered over the following 10 days. The first injection was derived from the spinal cord of an inoculated rabbit which had died of rabies 15 days earlier. All the doses were obtained by attenuation, but later ones were progressively more virulent. The Pasteur-Roux vaccine attenuated the harvested virus samples by allowing them to dry for five to ten days. Similar nerve tissue-derived vaccines are still used in some countries, and while they are much cheaper than modern cell culture vaccines, they are not as effective. Neural tissue vaccines also carry a certain risk of neurological complications. == Society and culture == === Economics === When the modern cell-culture rabies vaccine was first introduced in the early 1980s, it cost $45 per dose and was considered to be too expensive. The cost of the rabies vaccine continues to be a limitation to acquiring pre-exposure rabies immunization for travelers from developed countries. In 2015, in the United States, a course of three doses could cost over US$1,000, while in Europe a course costs around €100. It is possible and more cost-effective to split one intramuscular dose of the vaccine into several intradermal doses. This method is recommended by the World Health Organization (WHO) in areas that are constrained by cost or with supply issues. According to the WHO, the intradermal route is as safe and effective as the intramuscular route. == Veterinary use == Pre-exposure immunization has been used on domesticated and wild populations. In many jurisdictions, domestic dogs, cats, ferrets, and rabbits are required to be vaccinated. 
There are two main types of vaccines used for domesticated animals and pets (including pets from wildlife species): inactivated rabies virus (similar technology to that given to humans), administered by injection; and modified live viruses administered orally (by mouth), that is, live rabies virus from attenuated strains. Attenuated means strains that have developed mutations that weaken them so that they do not cause disease. Imrab is an example of a veterinary rabies vaccine containing the Pasteur strain of killed rabies virus. Several different types of Imrab exist, including Imrab, Imrab 3, and Imrab Large Animal. Imrab 3 has been approved for ferrets and, in some areas, pet skunks. === Dogs === Aside from vaccinating humans, another approach to preventing the spread of the virus is vaccinating dogs. In 1979, the Van Houweling Research Laboratory of the Silliman University Medical Center in Dumaguete in the Philippines developed and produced a dog vaccine that gave a three-year immunity from rabies. The development of the vaccine resulted in the elimination of rabies in many parts of the Visayas and Mindanao Islands. The successful program in the Philippines was later used as a model by other countries, such as Ecuador and the Mexican state of Yucatán, in their fight against rabies conducted in collaboration with the World Health Organization. In Tunisia, a government-sponsored rabies control program was initiated to give dog owners free vaccination and so promote mass vaccination. The vaccine used countrywide is Rabisin (Mérial), a cell-based rabies vaccine. Vaccinations are often administered when owners take their dogs to the vet for check-ups and visits. Oral rabies vaccines (see below for details) have been trialled on feral/stray dogs in some areas with high rabies incidence, as this could potentially be more efficient than catching and injecting them. However, these vaccines have not yet been deployed for dogs at large scale. === Wild animals === Wildlife species, primarily bats, raccoons, skunks, and foxes, act as reservoir species for different variants of the rabies virus in distinct geographic regions of the United States. This results in the general occurrence of rabies as well as outbreaks in animal populations. Approximately 90% of all reported rabies cases in the US are from wildlife. ==== Oral rabies vaccine ==== Oral rabies vaccines are distributed across the landscape, targeting reservoir species, in an effort to produce a herd immunity effect. The idea of wildlife vaccination was conceived during the 1960s, and modified-live rabies viruses were used for the experimental oral vaccination of carnivores by the 1970s. Development of an oral immunization for wildlife began in the United States with laboratory trials using the live, attenuated Evelyn-Rokitnicki-Abelseth (ERA) vaccine, derived from the Street Alabama Dufferin (SAD) strain. The first ORV field trial using the live attenuated vaccine to immunize foxes occurred in Switzerland during 1978. There are currently three different types of oral wildlife rabies vaccine in use. The first is modified live virus: attenuated vaccine strains of rabies virus such as SAG2 and SAD B19. The second is recombinant vaccinia virus expressing the rabies glycoprotein (V-RG): a strain of the vaccinia virus (originally a smallpox vaccine) that has been engineered to encode the gene for the rabies glycoprotein. V-RG has been proven safe in over 60 animal species, including cats and dogs. 
The third is ONRAB, an experimental live recombinant adenovirus vaccine. Other oral rabies experimental vaccines in development include recombinant adenovirus vaccines. Oral rabies vaccination (ORV) programs have been used in many countries in an effort to control the spread of rabies and limit the risk of human contact with the rabies virus. ORV programs were initiated in Europe in the 1980s, in Canada in 1985, and in the United States in 1990. ORV is a preventive measure to eliminate rabies in wild animal vectors of disease, mainly foxes, raccoons, raccoon dogs, coyotes and jackals, but it can also be used for dogs in developing countries. ORV programs typically use attractive baits to deliver the vaccine to targeted animals. In the United States, RABORAL V-RG (Boehringer Ingelheim, Duluth, GA, USA) has been the only licensed ORV for rabies virus management since 1997. However, ONRAB "Ultralite" (Artemis Technologies Inc., Guelph, Ontario, Canada) baits have been distributed by the United States Department of Agriculture (USDA) in select areas of the eastern United States under an experimental permit to target raccoons since 2011. RABORAL V-RG baits consist of a small packet containing the oral vaccine, which is then either coated in a fishmeal paste or encased in a fishmeal-polymer block. ONRAB "Ultralite" baits consist of a blister pack with a coating matrix of vanilla flavor, green food coloring, vegetable oil and hydrogenated vegetable fat. When an animal bites into the bait, the packet bursts and the vaccine is administered. Current research suggests that if an adequate amount of the vaccine is ingested, immunity to the virus should last for upwards of one year. By immunizing wild or stray animals, ORV programs work to create a buffer zone between the rabies virus and potential contact with humans, pets, or livestock. Landscape features such as large bodies of water and mountains are often used to enhance the effectiveness of the buffer. The effectiveness of ORV campaigns in specific areas is determined through trap-and-release methods. Titer tests are performed on the blood drawn from the sampled animals in order to measure rabies antibody levels in the blood. Baits are usually distributed by aircraft to more efficiently cover large, rural regions. In order to place baits more precisely and to minimize human and pet contact with baits, they are distributed by hand in suburban or urban regions. The standard bait distribution density is 75 baits/km2 in rural areas and 150 baits/km2 in urban and developed areas. Implementation of ORV programs in the United States led to the elimination of the coyote rabies virus variant in 2003 and of the gray fox variant in 2013. Furthermore, ORV has been successful in preventing the westward expansion of the raccoon rabies enzootic front beyond Alabama. == References == == External links == "Imovax". U.S. Food and Drug Administration (FDA). 16 December 2019. STN: 103931. Archived from the original on 18 September 2020. "RabAvert - Rabies Vaccine". U.S. Food and Drug Administration (FDA). 19 December 2019. STN: BL 103334. Archived from the original on 30 September 2019. Rabies Vaccines at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Rabies_vaccine
A respiratory syncytial virus vaccine, or RSV vaccine, is a vaccine that protects against respiratory syncytial virus. RSV affects an estimated 64 million people and causes 160,000 deaths worldwide each year. The RSV vaccines Arexvy (GSK), Abrysvo (Pfizer), and Mresvia (Moderna) are approved for medical use in the United States. Arexvy is approved for medical use in the United States, in the European Union, and in Canada for people 60 years of age and older. Arexvy is also approved in the US for people 50–59 years of age who are at increased risk. In June 2024, the US Centers for Disease Control and Prevention (CDC) updated its recommendation for the use of respiratory syncytial virus vaccine in people 60 years of age and older. The CDC recommends that people aged 75 years and older who have not yet received the RSV vaccine receive it, and that unvaccinated people aged 60–74 years who are at increased risk of severe RSV disease (meaning they have certain chronic medical conditions, such as lung or heart disease, or they live in nursing homes) also receive it. A 2013 study of the structure of the RSV fusion protein paved the way for the approval of RSV vaccines. Work on RSV vaccines also supported the rapid development of COVID-19 vaccines. == Medical uses == Respiratory syncytial virus vaccine is indicated for active immunization for the prevention of lower respiratory tract disease caused by respiratory syncytial virus in people 60 years of age and older. Abrysvo is indicated for active immunization for the prevention of lower respiratory tract disease caused by RSV in people 60 years of age and older, in high-risk individuals aged 18 through 59, and in pregnant individuals at 32 through 36 weeks gestational age to prevent severe disease in their infants from birth through six months of age. Abrysvo is approved in the European Union for use in pregnant women at 24 through 36 weeks of gestation and in older adults, and in the United Kingdom for pregnant women at 28 through 36 weeks of gestation and in older adults. Infant-specific issues include the immature infant immune system and the presence of maternal antibodies, which make infantile immunization difficult. == History == === Development === Attempts to develop an RSV vaccine began in the 1960s with an unsuccessful inactivated vaccine developed by exposing RSV to formalin (formalin-inactivated RSV, or FI-RSV). This vaccine induced vaccine-associated enhanced respiratory disease, in which children who had not previously been exposed to RSV and were subsequently vaccinated would develop severe RSV disease if exposed to the virus itself, including fever, wheezing, and bronchopneumonia. Some eighty percent of such children (vs. 5% of virus-exposed controls) were hospitalized, and two children died of lethal lung inflammation during the first natural RSV infection after vaccination of RSV-naive infants. This disaster slowed vaccine development for many years. A 1998 paper reported that research had advanced greatly over the previous ten years. A 2019 paper similarly claimed that research toward developing a vaccine had advanced greatly over the prior 10 years, with more than 30 candidates in some stage of development. The same study predicted that a vaccine would be available within ten years. Candidates included particle-based vaccines, attenuated vaccines, mRNA vaccines, protein subunit vaccines, and vector-based vaccines. 
A 2013 study detailed the crystal structure of the RSV fusion (F) protein and how its stability could be improved. This provided the basis for finding the most effective F protein constructs, which are used in RSV vaccines. To develop its vaccine, Pfizer engineered 400 different F protein constructs to identify the most immunogenic, and constructed a bivalent RSV prefusion F investigational vaccine. In February 2023, results of a phase III study of around 25,000 participants age 60+ were published. One dose of the Arexvy vaccine provided 94% efficacy against severe RSV pneumonia and 72% efficacy against RSV acute respiratory infection. An advisory panel to the FDA recommended approval of the vaccine in February 2023. In April 2023, the Committee for Medicinal Products for Human Use of the European Medicines Agency (EMA) recommended to grant a marketing authorization for Arexvy for the prevention of RSV lower respiratory tract disease in people 60 years of age or older after review under the EMA's accelerated assessment program. In May 2023, Arexvy was approved for people aged 60 years of age and older, making it the first FDA-approved RSV vaccine. In May 2023, the FDA's expert panel unanimously recommended Abrysvo for approval in pregnant women. The panel was split on the safety of the vaccine in respect of preterm births. In June 2023, Arexvy was authorized for medical use in the European Union. The mRNA vaccine Mresvia was approved for medical use in the United States in May 2024. In June 2024, the FDA approved Arexvy for use in people aged 50 to 59 years of age who are at an increased risk of RSV-caused lower respiratory tract disease. The approval is based on data from a phase III study (NCT05590403), which showed that immune responses were non-inferior in people aged 50–59 years of age at increased risk for RSV disease compared to people aged 60 years of age and older. In October 2024, the FDA approved Abrysvo for the prevention of lower respiratory tract disease caused by respiratory syncytial virus (RSV) in individuals 18 through 59 years of age who are at increased risk for lower respiratory tract disease caused by RSV. Since 2023, Abrysvo has been approved for the prevention of lower respiratory tract disease caused by RSV in individuals 60 years of age and older and for use in pregnant individuals at 32 through 36 weeks gestational age for the prevention of lower respiratory tract disease and severe lower respiratory tract disease caused by RSV in infants from birth through six months of age. Abrysvo is manufactured by Pfizer. Mresvia was authorized for medical use in the European Union in August 2024. === Clinical trials === As of October 2022, phase III trials by multiple companies are ongoing to test RSV vaccines for people aged 60 years of age and older. These include vaccines by GSK, Pfizer, Johnson & Johnson, Moderna, and Bavarian Nordic. As of April 2023, other vaccines were in development, including vaccines for pregnant women to immunize their fetuses by passing maternal antibodies to them, and vaccines for children. ==== GSK ==== In November 2020, GSK's vaccine, GSK3888550A, entered phase III trials for pregnant women. The vaccine's antigen is a stabilized version of the RSV F protein, which was developed using structure-based vaccine design. This trial was terminated in February 2022, on the advice of an external Data Monitoring Committee, because of an excess of premature births in the trial arm. 
The FDA analyzed data from an ongoing, randomized, placebo-controlled clinical study conducted in the US and internationally in individuals 60 years of age and older. The main clinical study was designed to assess the safety and effectiveness of a single dose administered to individuals 60 years of age and older. Participants agreed to remain in the study through three RSV seasons to assess the duration of effectiveness and the safety and effectiveness of repeat vaccination. Data from the first RSV season of the study were available for the FDA's analysis. In this study, approximately 12,500 participants received the vaccine and 12,500 participants received a placebo. The vaccine reduced the risk of developing RSV-associated lower respiratory tract disease by 82.6% and reduced the risk of developing severe RSV-associated lower respiratory tract disease by 94.1%. The FDA granted the application priority review designation and granted approval of Arexvy to GlaxoSmithKline Biologicals. In October 2022, GSK started a phase III, observer-blind, randomized, placebo-controlled study to evaluate the safety of the vaccine in people 50–59 years of age compared to people 60 years of age and older. The vaccine elicited an immune response in people 50 to 59 years of age at increased risk for RSV disease due to select underlying medical conditions that was non-inferior to that observed in people 60 years of age and older, meeting the trial's primary co-endpoint. ==== Pfizer ==== RSVpreF (Abrysvo) is a bivalent recombinant protein subunit vaccine which consists of equal amounts of stabilized prefusion F antigens from the two major RSV subgroups: RSV A and RSV B. In April 2023, Pfizer published interim results of their phase III study of an RSV vaccine for adults aged 60 and older in over 34,000 participants. One dose of the vaccine was 67% efficacious in preventing infections with at least two symptoms and 86% effective against more severe disease (illness with three or more related symptoms). The vaccine's protection was consistent across different subgroups, and was 62% effective in preventing acute respiratory illness caused by RSV infection. In April 2023, Pfizer published interim results of their double-blind phase III study in about 3,600 pregnant women, with another 3,600 women receiving a placebo. One dose of the vaccine provided 81% efficacy in preventing severe infection within three months after birth and 69% within six months after birth. The most common side effects were pain at the injection site, headache, muscle pain and nausea. In a subgroup of pregnant individuals who were 32 through 36 weeks gestational age, of whom about 1,500 received Abrysvo and 1,500 received placebo, Abrysvo reduced the risk of lower respiratory tract disease by 34.7%, and reduced the risk of severe lower respiratory tract disease by 91.1%, within 90 days after birth when compared to placebo. Within 180 days after birth, Abrysvo reduced the risk of lower respiratory tract disease by 57.3% and the risk of severe lower respiratory tract disease by 76.5%, when compared to placebo. In a second study, about 100 pregnant individuals received Abrysvo and approximately 100 pregnant women received placebo. ==== Moderna ==== Mresvia is an mRNA vaccine that was studied in clinical trial NCT05127434. It was approved for medical use in the United States in May 2024. 
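The efficacy and effectiveness figures quoted in these trial reports follow the standard epidemiological definition of vaccine efficacy: the relative reduction in the rate of disease among vaccinated participants compared with the placebo group. As a hedged illustration with invented round numbers (not data from any of the trials above), if 2.0% of placebo recipients and 0.3% of vaccine recipients developed RSV-associated lower respiratory tract disease during follow-up, the estimated efficacy would be

\[
VE \;=\; \frac{AR_{\text{placebo}} - AR_{\text{vaccine}}}{AR_{\text{placebo}}} \times 100\%
\;=\; \frac{2.0\% - 0.3\%}{2.0\%} \times 100\% \;=\; 85\%.
\]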
== Society and culture == === Legal status === In June 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Mresvia, intended for the prevention of lower respiratory tract disease caused by respiratory syncytial virus. The applicant for this medicinal product is Moderna Biotech Spain S.L. Mresvia was approved for medical use in the European Union in August 2024. == References == == External links == "RSV Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). 15 October 2024. Clinical trial number NCT04886596 for "Efficacy Study of GSK's Investigational Respiratory Syncytial Virus (RSV) Vaccine in Adults Aged 60 Years and Above" at ClinicalTrials.gov Clinical trial number NCT04732871 for "Immunogenicity, Safety, Reactogenicity and Persistence of an Investigational Respiratory Syncytial Virus (RSV) Vaccine in Adults Aged 60 Years and Above" at ClinicalTrials.gov Clinical trial number NCT04841577 for "A Study on the Immune Response and Safety Elicited by a Vaccine Against Respiratory Syncytial Virus (RSV) When Given Alone and Together With a Vaccine Against Influenza in Adults Aged 60 Years and Above" at ClinicalTrials.gov Clinical trial number NCT05035212 for "Study to Evaluate the Efficacy, Immunogenicity, and Safety of RSVpreF in Adults. (RENOIR)" at ClinicalTrials.gov Clinical trial number NCT04424316 for "A Trial to Evaluate the Efficacy and Safety of RSVpreF in Infants Born to Women Vaccinated During Pregnancy" at ClinicalTrials.gov
Wikipedia/Respiratory_syncytial_virus_vaccine
Radiation therapy or radiotherapy (RT, RTx, or XRT) is a treatment using ionizing radiation, generally provided as part of cancer therapy to either kill or control the growth of malignant cells. It is normally delivered by a linear particle accelerator. Radiation therapy may be curative in a number of types of cancer if they are localized to one area of the body, and have not spread to other parts. It may also be used as part of adjuvant therapy, to prevent tumor recurrence after surgery to remove a primary malignant tumor (for example, early stages of breast cancer). Radiation therapy is synergistic with chemotherapy, and has been used before, during, and after chemotherapy in susceptible cancers. The subspecialty of oncology concerned with radiotherapy is called radiation oncology. A physician who practices in this subspecialty is a radiation oncologist. Radiation therapy is commonly applied to the cancerous tumor because of its ability to control cell growth. Ionizing radiation works by damaging the DNA of cancerous tissue leading to cellular death. To spare normal tissues (such as skin or organs which radiation must pass through to treat the tumor), shaped radiation beams are aimed from several angles of exposure to intersect at the tumor, providing a much larger absorbed dose there than in the surrounding healthy tissue. Besides the tumor itself, the radiation fields may also include the draining lymph nodes if they are clinically or radiologically involved with the tumor, or if there is thought to be a risk of subclinical malignant spread. It is necessary to include a margin of normal tissue around the tumor to allow for uncertainties in daily set-up and internal tumor motion. These uncertainties can be caused by internal movement (for example, respiration and bladder filling) and movement of external skin marks relative to the tumor position. Radiation oncology is the medical specialty concerned with prescribing radiation, and is distinct from radiology, the use of radiation in medical imaging and diagnosis. Radiation may be prescribed by a radiation oncologist with intent to cure or for adjuvant therapy. It may also be used as palliative treatment (where cure is not possible and the aim is for local disease control or symptomatic relief) or as therapeutic treatment (where the therapy has survival benefit and can be curative). It is also common to combine radiation therapy with surgery, chemotherapy, hormone therapy, immunotherapy or some mixture of the four. Most common cancer types can be treated with radiation therapy in some way. The precise treatment intent (curative, adjuvant, neoadjuvant therapeutic, or palliative) will depend on the tumor type, location, and stage, as well as the general health of the patient. Total body irradiation (TBI) is a radiation therapy technique used to prepare the body to receive a bone marrow transplant. Brachytherapy, in which a radioactive source is placed inside or next to the area requiring treatment, is another form of radiation therapy that minimizes exposure to healthy tissue during procedures to treat cancers of the breast, prostate, and other organs. Radiation therapy has several applications in non-malignant conditions, such as the treatment of trigeminal neuralgia, acoustic neuromas, severe thyroid eye disease, pterygium, pigmented villonodular synovitis, and prevention of keloid scar growth, vascular restenosis, and heterotopic ossification. 
The use of radiation therapy in non-malignant conditions is limited partly by worries about the risk of radiation-induced cancers. == Medical uses == It is estimated that half of the US' 1.2M invasive cancer cases diagnosed in 2022 received radiation therapy in their treatment program. Different cancers respond to radiation therapy in different ways. The response of a cancer to radiation is described by its radiosensitivity. Highly radiosensitive cancer cells are rapidly killed by modest doses of radiation. These include leukemias, most lymphomas, and germ cell tumors. The majority of epithelial cancers are only moderately radiosensitive, and require a significantly higher dose of radiation (60–70 Gy) to achieve a radical cure. Some types of cancer are notably radioresistant, that is, much higher doses are required to produce a radical cure than may be safe in clinical practice. Renal cell cancer and melanoma are generally considered to be radioresistant but radiation therapy is still a palliative option for many patients with metastatic melanoma. Combining radiation therapy with immunotherapy is an active area of investigation and has shown some promise for melanoma and other cancers. It is important to distinguish the radiosensitivity of a particular tumor, which to some extent is a laboratory measure, from the radiation "curability" of a cancer in actual clinical practice. For example, leukemias are not generally curable with radiation therapy, because they are disseminated through the body. Lymphoma may be radically curable if it is localized to one area of the body. Similarly, many of the common, moderately radioresponsive tumors are routinely treated with curative doses of radiation therapy if they are at an early stage. For example, non-melanoma skin cancer, head and neck cancer, breast cancer, non-small cell lung cancer, cervical cancer, anal cancer, and prostate cancer. With the exception of oligometastatic disease, metastatic cancers are incurable with radiation therapy because it is not possible to treat the whole body. Modern radiation therapy relies on a CT scan to identify the tumor and surrounding normal structures and to perform dose calculations for the creation of a complex radiation treatment plan. The patient receives small skin marks to guide the placement of treatment fields. Patient positioning is crucial at this stage as the patient will have to be placed in an identical position during each treatment. Many patient positioning devices have been developed for this purpose, including masks and cushions which can be molded to the patient. Image-guided radiation therapy is a method that uses imaging to correct for positional errors of each treatment session. Building on the principles of Image-guided radiation therapy, Daily MR-guided ART (MRgART) offers many dosimetric advantages over the traditional single-plan RT workflow, including the ability to conform the high-dose region to the tumor as the anatomy changes throughout the course of RT. The response of a tumor to radiation therapy is also related to its size. Due to complex radiobiology, very large tumors are affected less by radiation compared to smaller tumors or microscopic disease. Various strategies are used to overcome this effect. The most common technique is surgical resection prior to radiation therapy. This is most commonly seen in the treatment of breast cancer with wide local excision or mastectomy followed by adjuvant radiation therapy. 
Another method is to shrink the tumor with neoadjuvant chemotherapy prior to radical radiation therapy. A third technique is to enhance the radiosensitivity of the cancer by giving certain drugs during a course of radiation therapy. Examples of radiosensitizing drugs include cisplatin, nimorazole, and cetuximab. The impact of radiotherapy varies between different types of cancer and different groups. For example, for breast cancer after breast-conserving surgery, radiotherapy has been found to halve the rate at which the disease recurs. In pancreatic cancer, radiotherapy has increased survival times for inoperable tumors. == Side effects == Radiation therapy (RT) is in itself painless, but has iatrogenic side effect risks. Many low-dose palliative treatments (for example, radiation therapy to bony metastases) cause minimal or no side effects, although short-term pain flare-up can be experienced in the days following treatment due to oedema compressing nerves in the treated area. Higher doses can cause varying side effects during treatment (acute side effects), in the months or years following treatment (long-term side effects), or after re-treatment (cumulative side effects). The nature, severity, and longevity of side effects depend on the organs that receive the radiation, the treatment itself (type of radiation, dose, fractionation, concurrent chemotherapy), and the patient. Serious radiation complications may occur in 5% of RT cases. Acute (near immediate) or sub-acute (2 to 3 months post RT) radiation side effects may develop after 50 Gy RT dosing. Late or delayed radiation injury (6 months to decades) may develop after 65 Gy. Side effects from radiation are usually limited to the area of the patient's body that is under treatment. Side effects are dose-dependent; for example, higher doses of head and neck radiation can be associated with cardiovascular complications, thyroid dysfunction, and pituitary axis dysfunction. Modern radiation therapy aims to reduce side effects to a minimum and to help the patient understand and deal with side effects that are unavoidable. The main side effects reported are fatigue and skin irritation similar to a mild to moderate sunburn. The fatigue often sets in during the middle of a course of treatment and can last for weeks after treatment ends. The irritated skin will heal, but may not be as elastic as it was before. === Acute side effects === Nausea and vomiting This is not a general side effect of radiation therapy, and mechanistically is associated only with treatment of the stomach or abdomen (which commonly react a few hours after treatment), or with radiation therapy to certain nausea-producing structures in the head during treatment of certain head and neck tumors, most commonly the vestibules of the inner ears. As with any distressing treatment, some patients vomit immediately during radiotherapy, or even in anticipation of it, but this is considered a psychological response. Nausea for any reason can be treated with antiemetics. Damage to the epithelial surfaces Epithelial surfaces may sustain damage from radiation therapy. Depending on the area being treated, this may include the skin, oral mucosa, pharyngeal mucosa, bowel mucosa, and ureter. The rates of onset of damage and recovery from it depend upon the turnover rate of epithelial cells. Typically, the skin starts to become pink and sore several weeks into treatment. 
The reaction may become more severe during the treatment and for up to about one week following the end of radiation therapy, and the skin may break down. Although this moist desquamation is uncomfortable, recovery is usually quick. Skin reactions tend to be worse in areas where there are natural folds in the skin, such as underneath the female breast, behind the ear, and in the groin. Mouth, throat and stomach sores If the head and neck area is treated, temporary soreness and ulceration commonly occur in the mouth and throat. If severe, this can affect swallowing, and the patient may need painkillers and nutritional support/food supplements. The esophagus can also become sore if it is treated directly, or if, as commonly occurs, it receives a dose of collateral radiation during treatment of lung cancer. When treating liver malignancies and metastases, it is possible for collateral radiation to cause gastric (stomach) or duodenal ulcers. This collateral radiation is commonly caused by non-targeted delivery (reflux) of the radioactive agents being infused. Methods, techniques and devices are available to lower the occurrence of this type of adverse side effect. Intestinal discomfort The lower bowel may be treated directly with radiation (treatment of rectal or anal cancer) or be exposed during radiation therapy directed at other pelvic structures (prostate, bladder, female genital tract). Typical symptoms are soreness, diarrhoea, and nausea. Nutritional interventions may be able to help with diarrhoea associated with radiotherapy. Studies in people having pelvic radiotherapy as part of anticancer treatment for a primary pelvic cancer found that changes in dietary fat, fibre and lactose during radiotherapy reduced diarrhoea at the end of treatment. Swelling As part of the general inflammation that occurs, swelling of soft tissues may cause problems during radiation therapy. This is a concern during treatment of brain tumors and brain metastases, especially where there is pre-existing raised intracranial pressure or where the tumor is causing near-total obstruction of a lumen (e.g., trachea or main bronchus). Surgical intervention may be considered prior to treatment with radiation. If surgery is deemed unnecessary or inappropriate, the patient may receive steroids during radiation therapy to reduce swelling. Infertility The gonads (ovaries and testicles) are very sensitive to radiation. They may be unable to produce gametes following direct exposure to most normal treatment doses of radiation. Treatment planning for all body sites is designed to minimize, if not completely exclude, dose to the gonads if they are not the primary area of treatment. === Late side effects === Late side effects occur months to years after treatment and are generally limited to the area that has been treated. They are often due to damage to blood vessels and connective tissue cells. Many late effects are reduced by fractionating treatment into smaller parts. Fibrosis Tissues which have been irradiated tend to become less elastic over time due to a diffuse scarring process. Epilation Epilation (hair loss) may occur on any hair-bearing skin with doses above 1 Gy. It only occurs within the radiation field(s). Hair loss may be permanent with a single dose of 10 Gy, but if the dose is fractionated, permanent hair loss may not occur until the dose exceeds 45 Gy. Dryness The salivary glands and tear glands have a radiation tolerance of about 30 Gy in 2 Gy fractions, a dose which is exceeded by most radical head and neck cancer treatments. 
Dry mouth (xerostomia) and dry eyes (xerophthalmia) can become irritating long-term problems and severely reduce the patient's quality of life. Similarly, sweat glands in treated skin (such as the armpit) tend to stop working, and the naturally moist vaginal mucosa is often dry following pelvic irradiation. Chronic sinus drainage Radiation therapy treatments to the head and neck regions for soft tissue, palate or bone cancer can cause chronic sinus tract draining and fistulae from the bone. Lymphedema Lymphedema, a condition of localized fluid retention and tissue swelling, can result from damage to the lymphatic system sustained during radiation therapy. It is the most commonly reported complication in breast radiation therapy patients who receive adjuvant axillary radiotherapy following surgery to clear the axillary lymph nodes. Cancer Radiation is a potential cause of cancer, and secondary malignancies are seen in some patients. Cancer survivors are already more likely than the general population to develop malignancies due to a number of factors including lifestyle choices, genetics, and previous radiation treatment. It is difficult to directly quantify the rates of these secondary cancers from any single cause. Studies have found radiation therapy as the cause of secondary malignancies for only a small minority of patients; for example, exposure to ionizing radiation is an identified risk factor for subsequent glioma; see main topic Glioma#Causes. The combined risk of a radiation-induced glioblastoma or astrocytoma within 15 years of the initial radiotherapy is 0.5–2.7%. New techniques such as proton beam therapy and carbon ion radiotherapy, which aim to reduce dose to healthy tissues, will lower these risks. Radiation-induced secondary cancer typically starts to occur 4–6 years following treatment, although some haematological malignancies may develop within 3 years. In the vast majority of cases, this risk is greatly outweighed by the reduction in risk conferred by treating the primary cancer, even in pediatric malignancies, which carry a higher burden of secondary malignancies. Cardiovascular disease Radiation can increase the risk of heart disease and death, as observed in previous breast cancer RT regimens. Therapeutic radiation increases the risk of a subsequent cardiovascular event (i.e., heart attack or stroke) to 1.5 to 4 times a person's normal rate, depending on aggravating factors. The increase is dose dependent, related to the RT's dose strength, volume and location. Use of concomitant chemotherapy, e.g., anthracyclines, is an aggravating risk factor. The occurrence rate of RT-induced cardiovascular disease is estimated between 10 and 30%. Cardiovascular late side effects have been termed radiation-induced heart disease (RIHD) and radiation-induced cardiovascular disease (RIVD). Symptoms are dose dependent and include cardiomyopathy, myocardial fibrosis, valvular heart disease, coronary artery disease, heart arrhythmia and peripheral artery disease. Radiation-induced fibrosis, vascular cell damage and oxidative stress can lead to these and other late side effect symptoms. Most radiation-induced cardiovascular diseases occur 10 or more years post treatment, making causality determinations more difficult. Cognitive decline In cases where radiation is applied to the head, radiation therapy may cause cognitive decline. Cognitive decline was especially apparent in young children, between the ages of 5 and 11. Studies found, for example, that the IQ of 5-year-old children declined each year after treatment by several IQ points. 
Radiation enteropathy The gastrointestinal tract can be damaged following abdominal and pelvic radiotherapy. Atrophy, fibrosis and vascular changes produce malabsorption, diarrhea, steatorrhea and bleeding with bile acid diarrhea and vitamin B12 malabsorption commonly found due to ileal involvement. Pelvic radiation disease includes radiation proctitis, producing bleeding, diarrhoea and urgency, and can also cause radiation cystitis when the bladder is affected. Lung injury Radiation-induced lung injury (RILI) encompasses radiation pneumonitis and pulmonary fibrosis. Lung tissue is sensitive to ionizing radiation, tolerating only 18–20 Gy, a fraction of typical therapeutic dosage levels. The lung's terminal airways and associated alveoli can become damaged, preventing effective respiratory gas exchange. The adverse effects of radiation are often asymptomatic with clinically significant RILI occurrence rates varying widely in literature, affecting 5–25% of those treated for thoracic and mediastinal malignancies and 1–5% of those treated for breast cancer. Neurogenic lower urinary tract dysfunction Pelvic radiation therapy has been associated with acquired neurogenic bladder dysfunction, radiation impacting the nerves or structures involved with urinary continence and voiding. The voluntary micturition process is controlled by the central nervous system, synchronizing the bladder's detrusor muscle and the internal and external urethral sphincters. Trauma to the process's components, e.g., spinal cord, peripheral motor and sensor nerves, and the bladder and urethra sphincters, can cause dysfunction, e.g., dysuria, frequency and incontinence. Radiation-induced polyneuropathy Radiation treatments may damage nerves near the target area or within the delivery path as nerve tissue is also radiosensitive. Nerve damage from ionizing radiation occurs in phases, the initial phase from microvascular injury, capillary damage and nerve demyelination. Subsequent damage occurs from vascular constriction and nerve compression due to uncontrolled fibrous tissue growth caused by radiation. Radiation-induced polyneuropathy, ICD-10-CM Code G62.82, occurs in approximately 1–5% of those receiving radiation therapy. Depending upon the irradiated zone, late effect neuropathy may occur in either the central nervous system (CNS) or the peripheral nervous system (PNS). In the CNS for example, cranial nerve injury typically presents as a visual acuity loss 1–14 years post treatment. In the PNS, injury to the plexus nerves presents as radiation-induced brachial plexopathy or radiation-induced lumbosacral plexopathy appearing up to 3 decades post treatment. Myokymia (muscle cramping, spasms or twitching) may develop. Radiation-induced nerve injury, chronic compressive neuropathies and polyradiculopathies are the most common cause of myokymic discharges. Clinically, the majority of patients receiving radiation therapy have measurable myokymic discharges within their field of radiation which present as focal or segmental myokymia. Common areas affected include the arms, legs or face depending upon the location of nerve injury. Myokymia is more frequent when radiation doses exceed 10 gray (Gy). Radiation necrosis Radiation necrosis is the death of healthy tissue near the irradiated site. 
It is a type of coagulative necrosis that occurs because the radiation directly or indirectly damages blood vessels in the area, which reduces the blood supply to the remaining healthy tissue, causing it to die by ischemia, similar to what happens in an ischemic stroke. Because it is an indirect effect of the treatment, it occurs months to decades after radiation exposure. Radiation necrosis most commonly presents as osteoradionecrosis, vaginal radionecrosis, soft tissue radionecrosis, or laryngeal radionecrosis. === Cumulative side effects === Cumulative effects of repeated radiation treatment should not be confused with long-term effects – even when short-term effects have disappeared and long-term effects are subclinical, reirradiation can still be problematic. These doses are calculated by the radiation oncologist, and many factors are taken into account before the subsequent radiation takes place. === Effects on reproduction === During the first two weeks after fertilization, radiation therapy is lethal but not teratogenic. High doses of radiation during pregnancy induce anomalies, impaired growth and intellectual disability, and there may be an increased risk of childhood leukemia and other tumors in the offspring. In males who have previously undergone radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. However, the use of assisted reproductive technologies and micromanipulation techniques might increase this risk. === Effects on pituitary system === Hypopituitarism commonly develops after radiation therapy for sellar and parasellar neoplasms, extrasellar brain tumors, and head and neck tumors, and following whole body irradiation for systemic malignancies. Some 40–50% of children treated for childhood cancer develop some endocrine side effect. Radiation-induced hypopituitarism mainly affects growth hormone and gonadal hormones. In contrast, adrenocorticotrophic hormone (ACTH) and thyroid stimulating hormone (TSH) deficiencies are the least common among people with radiation-induced hypopituitarism. Changes in prolactin secretion are usually mild, and vasopressin deficiency appears to be very rare as a consequence of radiation. === Effects on subsequent surgery === Delayed tissue injury with impaired wound healing capability often develops after receiving doses in excess of 65 Gy. A diffuse injury pattern occurs due to the external beam radiotherapy's holographic isodosing. While the targeted tumor receives the majority of radiation, healthy tissue at incremental distances from the center of the tumor is also irradiated in a diffuse pattern due to beam divergence. These wounds demonstrate progressive, proliferative endarteritis, inflamed arterial linings that disrupt the tissue's blood supply. Such tissue ends up chronically hypoxic, fibrotic, and without an adequate nutrient and oxygen supply. Surgery of previously irradiated tissue has a very high failure rate; for example, women who have received radiation for breast cancer develop late effect chest wall tissue fibrosis and hypovascularity, making successful reconstruction and healing difficult, if not impossible. === Radiation therapy accidents === There are rigorous procedures in place to minimise the risk of accidental overexposure of radiation therapy to patients. 
However, mistakes do occasionally occur; for example, the radiation therapy machine Therac-25 was responsible for at least six accidents between 1985 and 1987, where patients were given up to one hundred times the intended dose; two people were killed directly by the radiation overdoses. From 2005 to 2010, a hospital in Missouri overexposed 76 patients (most with brain cancer) during a five-year period because new radiation equipment had been set up incorrectly. Although medical errors are exceptionally rare, radiation oncologists, medical physicists and other members of the radiation therapy treatment team are working to eliminate them. In 2010 the American Society for Radiation Oncology (ASTRO) launched a safety initiative called Target Safely that, among other things, aimed to record errors nationwide so that doctors can learn from each and every mistake and prevent them from recurring. ASTRO also publishes a list of questions for patients to ask their doctors about radiation safety to ensure every treatment is as safe as possible. == Use in non-cancerous diseases == Radiation therapy is used to treat early stage Dupuytren's disease and Ledderhose disease. When Dupuytren's disease is at the nodules and cords stage or fingers are at a minimal deformation stage of less than 10 degrees, then radiation therapy is used to prevent further progress of the disease. Radiation therapy is also used post surgery in some cases to prevent the disease continuing to progress. Low doses of radiation are used typically three gray of radiation for five days, with a break of three months followed by another phase of three gray of radiation for five days. == Technique == === Mechanism of action === Radiation therapy works by damaging the DNA of cancer cells and can cause them to undergo mitotic catastrophe. This DNA damage is caused by one of two types of energy, photon or charged particle. This damage is either direct or indirect ionization of the atoms which make up the DNA chain. Indirect ionization happens as a result of the ionization of water, forming free radicals, notably hydroxyl radicals, which then damage the DNA. In photon therapy, most of the radiation effect is through free radicals. Cells have mechanisms for repairing single-strand DNA damage and double-stranded DNA damage. However, double-stranded DNA breaks are much more difficult to repair, and can lead to dramatic chromosomal abnormalities and genetic deletions. Targeting double-stranded breaks increases the probability that cells will undergo cell death. Cancer cells are generally less differentiated and more stem cell-like; they reproduce more than most healthy differentiated cells, and have a diminished ability to repair sub-lethal damage. Single-strand DNA damage is then passed on through cell division; damage to the cancer cells' DNA accumulates, causing them to die or reproduce more slowly. One of the major limitations of photon radiation therapy is that the cells of solid tumors become deficient in oxygen. Solid tumors can outgrow their blood supply, causing a low-oxygen state known as hypoxia. Oxygen is a potent radiosensitizer, increasing the effectiveness of a given dose of radiation by forming DNA-damaging free radicals. Tumor cells in a hypoxic environment may be as much as 2 to 3 times more resistant to radiation damage than those in a normal oxygen environment. 
Much research has been devoted to overcoming hypoxia including the use of high pressure oxygen tanks, hyperthermia therapy (heat therapy which dilates blood vessels to the tumor site), blood substitutes that carry increased oxygen, hypoxic cell radiosensitizer drugs such as misonidazole and metronidazole, and hypoxic cytotoxins (tissue poisons), such as tirapazamine. Newer research approaches are currently being studied, including preclinical and clinical investigations into the use of an oxygen diffusion-enhancing compound such as trans sodium crocetinate as a radiosensitizer. Charged particles such as protons and boron, carbon, and neon ions can cause direct damage to cancer cell DNA through high-LET (linear energy transfer) and have an antitumor effect independent of tumor oxygen supply because these particles act mostly via direct energy transfer usually causing double-stranded DNA breaks. Due to their relatively large mass, protons and other charged particles have little lateral side scatter in the tissue – the beam does not broaden much, stays focused on the tumor shape, and delivers small dose side-effects to surrounding tissue. They also more precisely target the tumor using the Bragg peak effect. See proton therapy for a good example of the different effects of intensity-modulated radiation therapy (IMRT) vs. charged particle therapy. This procedure reduces damage to healthy tissue between the charged particle radiation source and the tumor and sets a finite range for tissue damage after the tumor has been reached. In contrast, IMRT's use of uncharged particles causes its energy to damage healthy cells when it exits the body. This exiting damage is not therapeutic, can increase treatment side effects, and increases the probability of secondary cancer induction. This difference is very important in cases where the close proximity of other organs makes any stray ionization very damaging (example: head and neck cancers). This X-ray exposure is especially bad for children, due to their growing bodies, and while depending on a multitude of factors, they are around 10 times more sensitive to developing secondary malignancies after radiotherapy as compared to adults. === Dose === The amount of radiation used in photon radiation therapy is measured in grays (Gy), and varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers.) Many other factors are considered by radiation oncologists when selecting a dose, including whether the patient is receiving chemotherapy, patient comorbidities, whether radiation therapy is being administered before or after surgery, and the degree of success of surgery. Delivery parameters of a prescribed dose are determined during treatment planning (part of dosimetry). Treatment planning is generally performed on dedicated computers using specialized treatment planning software. Depending on the radiation delivery method, several angles or sources may be used to sum to the total necessary dose. The planner will try to design a plan that delivers a uniform prescription dose to the tumor and minimizes dose to surrounding healthy tissues. In radiation therapy, three-dimensional dose distributions may be evaluated using the dosimetry technique known as gel dosimetry. 
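As a rough worked illustration of how the typical figures quoted above combine in practice (using round numbers rather than the parameters of any specific protocol), a curative prescription of 70 Gy delivered in conventional 2 Gy fractions, with five fractions per week (a common schedule, discussed under Fractionation below), corresponds to

\[
N \;=\; \frac{D_{\text{total}}}{d} \;=\; \frac{70\ \text{Gy}}{2\ \text{Gy per fraction}} \;=\; 35\ \text{fractions},
\qquad
T \;\approx\; \frac{35\ \text{fractions}}{5\ \text{fractions per week}} \;=\; 7\ \text{weeks}.
\]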
==== Fractionation ==== The total dose is fractionated (spread out over time) for several important reasons. Fractionation allows normal cells time to recover, while tumor cells are generally less efficient in repair between fractions. Fractionation also allows tumor cells that were in a relatively radio-resistant phase of the cell cycle during one treatment to cycle into a sensitive phase of the cycle before the next fraction is given. Similarly, tumor cells that were chronically or acutely hypoxic (and therefore more radioresistant) may reoxygenate between fractions, improving the tumor cell kill. Fractionation regimens are individualised between different radiation therapy centers and even between individual doctors. In North America, Australia, and Europe, the typical fractionation schedule for adults is 1.8 to 2 Gy per day, five days a week. In some cancer types, prolonging the fractionation schedule for too long can allow the tumor to begin repopulating, and for these tumor types, including head-and-neck and cervical squamous cell cancers, radiation treatment is preferably completed within a certain amount of time. For children, a typical fraction size may be 1.5 to 1.8 Gy per day, as smaller fraction sizes are associated with reduced incidence and severity of late-onset side effects in normal tissues. In some cases, two fractions per day are used near the end of a course of treatment. This schedule, known as a concomitant boost regimen or hyperfractionation, is used on tumors that regenerate more quickly when they are smaller. In particular, tumors in the head-and-neck demonstrate this behavior. Patients receiving palliative radiation to treat uncomplicated painful bone metastasis should not receive more than a single fraction of radiation. A single treatment gives comparable pain relief and morbidity outcomes to multiple-fraction treatments, and for patients with limited life expectancy, a single treatment is best to improve patient comfort. ==== Schedules for fractionation ==== One fractionation schedule that is increasingly being used and continues to be studied is hypofractionation. This is a radiation treatment in which the total dose of radiation is divided into a smaller number of large doses. Typical doses vary significantly by cancer type, from 2.2 Gy/fraction to 20 Gy/fraction, the latter being typical of stereotactic treatments (stereotactic ablative body radiotherapy, or SABR – also known as SBRT, or stereotactic body radiotherapy) for subcranial lesions, or SRS (stereotactic radiosurgery) for intracranial lesions. The rationale of hypofractionation is to reduce the probability of local recurrence by denying clonogenic cells the time they require to reproduce and also to exploit the radiosensitivity of some tumors. In particular, stereotactic treatments are intended to destroy clonogenic cells by a process of ablation, i.e., the delivery of a dose intended to destroy clonogenic cells directly, rather than to interrupt the process of clonogenic cell division repeatedly (apoptosis), as in routine radiotherapy. ==== Estimation of dose based on target sensitivity ==== Different cancer types have different radiation sensitivity. While predicting the sensitivity based on genomic or proteomic analyses of biopsy samples has proven challenging, the predictions of radiation effect on individual patients from genomic signatures of intrinsic cellular radiosensitivity have been shown to associate with clinical outcome.
An alternative approach to genomics and proteomics was offered by the discovery that radiation protection in microbes is offered by non-enzymatic complexes of manganese and small organic metabolites. The content and variation of manganese (measurable by electron paramagnetic resonance) were found to be good predictors of radiosensitivity, and this finding extends also to human cells. An association was confirmed between total cellular manganese contents and their variation, and clinically inferred radioresponsiveness in different tumor cells, a finding that may be useful for more precise radiodosages and improved treatment of cancer patients. == Types == Historically, the three main divisions of radiation therapy are: external beam radiation therapy (EBRT or XRT) or teletherapy; brachytherapy or sealed source radiation therapy; and systemic radioisotope therapy or unsealed source radiotherapy. The differences relate to the position of the radiation source; external is outside the body, brachytherapy uses sealed radioactive sources placed precisely in the area under treatment, and systemic radioisotopes are given by infusion or oral ingestion. Brachytherapy can use temporary or permanent placement of radioactive sources. The temporary sources are usually placed by a technique called afterloading. In afterloading a hollow tube or applicator is placed surgically in the organ to be treated, and the sources are loaded into the applicator after the applicator is implanted. This minimizes radiation exposure to health care personnel. Particle therapy is a special case of external beam radiation therapy where the particles are protons or heavier ions. A review of radiation therapy randomised clinical trials from 2018 to 2021 found many practice-changing data and new concepts that emerge from RCTs, identifying techniques that improve the therapeutic ratio, techniques that lead to more tailored treatments, stressing the importance of patient satisfaction, and identifying areas that require further study. === External beam radiation therapy === The following three sections refer to treatment using X-rays. ==== Conventional external beam radiation therapy ==== Historically conventional external beam radiation therapy (2DXRT) was delivered via two-dimensional beams using kilovoltage therapy X-ray units, medical linear accelerators that generate high-energy X-rays, or with machines that were similar to a linear accelerator in appearance, but used a sealed radioactive source like the one shown above. 2DXRT mainly consists of a single beam of radiation delivered to the patient from several directions: often front or back, and both sides. Conventional refers to the way the treatment is planned or simulated on a specially calibrated diagnostic X-ray machine known as a simulator because it recreates the linear accelerator actions (or sometimes by eye), and to the usually well-established arrangements of the radiation beams to achieve a desired plan. The aim of simulation is to accurately target or localize the volume which is to be treated. This technique is well established and is generally quick and reliable. The worry is that some high-dose treatments may be limited by the radiation toxicity capacity of healthy tissues which lie close to the target tumor volume. 
An example of this problem is seen in radiation of the prostate gland, where the sensitivity of the adjacent rectum limited the dose which could be safely prescribed using 2DXRT planning to such an extent that tumor control may not be easily achievable. Prior to the invention of CT, physicians and physicists had limited knowledge about the true radiation dosage delivered to both cancerous and healthy tissue. For this reason, 3-dimensional conformal radiation therapy has become the standard treatment for almost all tumor sites. More recently, other forms of imaging are used, including MRI, PET, SPECT and ultrasound. ==== Stereotactic radiation ==== Stereotactic radiation is a specialized type of external beam radiation therapy. It uses focused radiation beams targeting a well-defined tumor using extremely detailed imaging scans. Radiation oncologists perform stereotactic treatments, often with the help of a neurosurgeon for tumors in the brain or spine. There are two types of stereotactic radiation. Stereotactic radiosurgery (SRS) refers to a single or several stereotactic radiation treatments of the brain or spine. Stereotactic body radiation therapy (SBRT) refers to one or several stereotactic radiation treatments within the body, such as the lungs. Some doctors say an advantage to stereotactic treatments is that they deliver the right amount of radiation to the cancer in a shorter amount of time than traditional treatments, which can often take 6 to 11 weeks. In addition, treatments are given with extreme accuracy, which should limit the effect of the radiation on healthy tissues. One problem with stereotactic treatments is that they are only suitable for certain small tumors. Stereotactic treatments can be confusing because many hospitals call the treatments by the name of the manufacturer rather than calling them SRS or SBRT. Brand names for these treatments include Axesse, Cyberknife, Gamma Knife, Novalis, Primatom, Synergy, X-Knife, TomoTherapy, Trilogy and Truebeam. This list changes as equipment manufacturers continue to develop new, specialized technologies to treat cancers. ==== Virtual simulation, and 3-dimensional conformal radiation therapy ==== The planning of radiation therapy treatment has been revolutionized by the ability to delineate tumors and adjacent normal structures in three dimensions using specialized CT and/or MRI scanners and planning software. Virtual simulation, the most basic form of planning, allows more accurate placement of radiation beams than is possible using conventional X-rays, where soft-tissue structures are often difficult to assess and normal tissues difficult to protect. An enhancement of virtual simulation is 3-dimensional conformal radiation therapy (3DCRT), in which the profile of each radiation beam is shaped to fit the profile of the target from a beam's eye view (BEV) using a multileaf collimator (MLC) and a variable number of beams. When the treatment volume conforms to the shape of the tumor, the relative toxicity of radiation to the surrounding normal tissues is reduced, allowing a higher dose of radiation to be delivered to the tumor than conventional techniques would allow. ==== Intensity-modulated radiation therapy (IMRT) ==== Intensity-modulated radiation therapy (IMRT) is an advanced type of high-precision radiation that is the next generation of 3DCRT.
IMRT also improves the ability to conform the treatment volume to concave tumor shapes, for example when the tumor is wrapped around a vulnerable structure such as the spinal cord or a major organ or blood vessel. Computer-controlled X-ray accelerators distribute precise radiation doses to malignant tumors or specific areas within the tumor. The pattern of radiation delivery is determined using highly tailored computing applications to perform optimization and treatment simulation (Treatment Planning). The radiation dose is consistent with the 3-D shape of the tumor by controlling, or modulating, the radiation beam's intensity. The radiation dose intensity is elevated near the gross tumor volume while radiation among the neighboring normal tissues is decreased or avoided completely. This results in better tumor targeting, lessened side effects, and improved treatment outcomes than even 3DCRT. 3DCRT is still used extensively for many body sites but the use of IMRT is growing in more complicated body sites such as CNS, head and neck, prostate, breast, and lung. Unfortunately, IMRT is limited by its need for additional time from experienced medical personnel. This is because physicians must manually delineate the tumors one CT image at a time through the entire disease site which can take much longer than 3DCRT preparation. Then, medical physicists and dosimetrists must be engaged to create a viable treatment plan. Also, the IMRT technology has only been used commercially since the late 1990s even at the most advanced cancer centers, so radiation oncologists who did not learn it as part of their residency programs must find additional sources of education before implementing IMRT. Proof of improved survival benefit from either of these two techniques over conventional radiation therapy (2DXRT) is growing for many tumor sites, but the ability to reduce toxicity is generally accepted. This is particularly the case for head and neck cancers in a series of pivotal trials performed by Professor Christopher Nutting of the Royal Marsden Hospital. Both techniques enable dose escalation, potentially increasing usefulness. There has been some concern, particularly with IMRT, about increased exposure of normal tissue to radiation and the consequent potential for secondary malignancy. Overconfidence in the accuracy of imaging may increase the chance of missing lesions that are invisible on the planning scans (and therefore not included in the treatment plan) or that move between or during a treatment (for example, due to respiration or inadequate patient immobilization). New techniques are being developed to better control this uncertainty – for example, real-time imaging combined with real-time adjustment of the therapeutic beams. This new technology is called image-guided radiation therapy or four-dimensional radiation therapy. Another technique is the real-time tracking and localization of one or more small implantable electric devices implanted inside or close to the tumor. There are various types of medical implantable devices that are used for this purpose. It can be a magnetic transponder which senses the magnetic field generated by several transmitting coils, and then transmits the measurements back to the positioning system to determine the location. The implantable device can also be a small wireless transmitter sending out an RF signal which then will be received by a sensor array and used for localization and real-time tracking of the tumor position. 
A well-studied issue with IMRT is the "tongue and groove effect" which results in unwanted underdosing, due to irradiating through extended tongues and grooves of overlapping MLC (multileaf collimator) leaves. While solutions to this issue have been developed, which either reduce the TG effect to negligible amounts or remove it completely, they depend upon the method of IMRT being used and some of them carry costs of their own. Some texts distinguish "tongue and groove error" from "tongue or groove error", according to whether both sides or only one side of the aperture is occluded. ==== Volumetric modulated arc therapy (VMAT) ==== Volumetric modulated arc therapy (VMAT) is a radiation technique introduced in 2007 which can achieve highly conformal dose distributions with good target volume coverage and sparing of normal tissues. The specificity of this technique is to modify three parameters during the treatment. VMAT delivers radiation by rotating the gantry (usually 360° rotating fields with one or more arcs), changing the speed and shape of the beam with a multileaf collimator (MLC) (a "sliding window" system of movement), and changing the fluence output rate (dose rate) of the medical linear accelerator. VMAT has an advantage in patient treatment, compared with conventional static field intensity modulated radiotherapy (IMRT), of reduced radiation delivery times. Comparisons between VMAT and conventional IMRT for their sparing of healthy tissues and Organs at Risk (OAR) depend upon the cancer type. In the treatment of nasopharyngeal, oropharyngeal and hypopharyngeal carcinomas VMAT provides equivalent or better protection of the organ at risk (OAR). In the treatment of prostate cancer the OAR protection result is mixed with some studies favoring VMAT, others favoring IMRT. ==== Temporally feathered radiation therapy (TFRT) ==== Temporally feathered radiation therapy (TFRT) is a radiation technique introduced in 2018 which aims to use the inherent non-linearities in normal tissue repair to allow for sparing of these tissues without affecting the dose delivered to the tumor. The application of this technique, which has yet to be automated, has been described carefully to enhance the ability of departments to perform it, and in 2021 it was reported as feasible in a small clinical trial, though its efficacy has yet to be formally studied. ==== Automated planning ==== Automated treatment planning has become an integrated part of radiotherapy treatment planning. There are, in general, two approaches to automated planning. 1) Knowledge-based planning, where the treatment planning system has a library of high-quality plans, from which it can predict the dose-volume histograms of the target and the organs at risk. 2) The other approach is commonly called protocol-based planning, where the treatment planning system tries to mimic an experienced treatment planner and, through an iterative process, evaluates the plan quality on the basis of the protocol. ==== Particle therapy ==== In particle therapy (proton therapy being one example), energetic ionizing particles (protons or carbon ions) are directed at the target tumor. The dose increases while the particle penetrates the tissue, up to a maximum (the Bragg peak) that occurs near the end of the particle's range, and it then drops to (almost) zero. The advantage of this energy deposition profile is that less energy is deposited into the healthy tissue surrounding the target tissue.
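The contrast between a photon beam's gradual falloff and a charged particle's Bragg peak can be sketched with a toy model. The snippet below is purely illustrative and is not real dosimetry: the photon curve is a simple exponential attenuation after a short build-up region, the proton curve is a hand-shaped peak placed at an assumed 15 cm range, and every coefficient is an example value chosen only to show the qualitative shapes described above.

```python
import math

# Toy depth-dose curves: photon falloff vs. an idealized proton Bragg peak.
# All coefficients are made-up illustrative values, not measured dosimetry data.

def photon_relative_dose(depth_cm: float, mu_per_cm: float = 0.05) -> float:
    """Linear build-up to ~1.5 cm, then roughly exponential attenuation."""
    build_up_cm = 1.5
    if depth_cm < build_up_cm:
        return depth_cm / build_up_cm
    return math.exp(-mu_per_cm * (depth_cm - build_up_cm))

def proton_relative_dose(depth_cm: float, range_cm: float = 15.0) -> float:
    """Modest entrance dose, sharp rise near the end of range, ~zero beyond it."""
    if depth_cm > range_cm:
        return 0.0
    entrance = 0.3
    return entrance + (1.0 - entrance) * (depth_cm / range_cm) ** 8

for d in (0.0, 5.0, 10.0, 14.5, 16.0):
    print(f"{d:5.1f} cm   photon {photon_relative_dose(d):.2f}   proton {proton_relative_dose(d):.2f}")
```

The printout captures the qualitative difference discussed here: past the assumed 15 cm range the proton dose drops to zero, while the photon curve still deposits roughly half of its maximum dose, which is the exit dose that charged particle therapy avoids.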
==== Auger therapy ==== Auger therapy (AT) makes use of a very high dose of ionizing radiation in situ that provides molecular modifications at an atomic scale. AT differs from conventional radiation therapy in several aspects; it neither relies upon radioactive nuclei to cause cellular radiation damage at a cellular dimension, nor engages multiple external pencil-beams from different directions to zero-in to deliver a dose to the targeted area with reduced dose outside the targeted tissue/organ locations. Instead, the in situ delivery of a very high dose at the molecular level using AT aims for in situ molecular modifications involving molecular breakages and molecular re-arrangements such as a change of stacking structures as well as cellular metabolic functions related to the said molecule structures. ==== Motion compensation ==== In many types of external beam radiotherapy, motion can negatively impact the treatment delivery by moving target tissue out of, or other healthy tissue into, the intended beam path. Some form of patient immobilisation is common, to prevent the large movements of the body during treatment; however, this cannot prevent all motion, for example as a result of breathing. Several techniques have been developed to account for motion like this. Deep inspiration breath-hold (DIBH) is commonly used for breast treatments where it is important to avoid irradiating the heart. In DIBH the patient holds their breath after breathing in to provide a stable position for the treatment beam to be turned on. This can be done automatically using an external monitoring system such as a spirometer or a camera and markers. The same monitoring techniques, as well as 4DCT imaging, can also be used for respiratory gated treatment, where the patient breathes freely and the beam is only engaged at certain points in the breathing cycle. Other techniques include using 4DCT imaging to plan treatments with margins that account for motion, and active movement of the treatment couch, or beam, to follow motion. === Contact X-ray brachytherapy === Contact X-ray brachytherapy (also called "CXB", "electronic brachytherapy" or the "Papillon Technique") is a type of radiation therapy using low energy (50 kVp) kilovoltage X-rays applied directly to the tumor to treat rectal cancer. The process involves endoscopic examination first to identify the tumor in the rectum, then inserting the treatment applicator through the anus into the rectum and placing it against the cancerous tissue. Finally, the treatment tube is inserted into the applicator to deliver high doses of X-rays (30 Gy) directly onto the tumor, given three times at two-week intervals over a four-week period. It is typically used for treating early rectal cancer in patients who may not be candidates for surgery. A 2015 NICE review found the main side effect to be bleeding, which occurred in about 38% of cases, and radiation-induced ulcers, which occurred in 27% of cases. === Brachytherapy (sealed source radiotherapy) === Brachytherapy is delivered by placing radiation source(s) inside or next to the area requiring treatment. Brachytherapy is commonly used as an effective treatment for cervical, prostate, breast, and skin cancer and can also be used to treat tumors in many other body sites. In brachytherapy, radiation sources are precisely placed directly at the site of the cancerous tumor. This means that the irradiation only affects a very localized area – exposure to radiation of healthy tissues further away from the sources is reduced.
These characteristics of brachytherapy provide advantages over external beam radiation therapy – the tumor can be treated with very high doses of localized radiation, whilst reducing the probability of unnecessary damage to surrounding healthy tissues. A course of brachytherapy can often be completed in less time than other radiation therapy techniques. This can help reduce the chance of surviving cancer cells dividing and growing in the intervals between each radiation therapy dose. As one example of the localized nature of breast brachytherapy, the SAVI device delivers the radiation dose through multiple catheters, each of which can be individually controlled. This approach decreases the exposure of healthy tissue and resulting side effects, compared both to external beam radiation therapy and older methods of breast brachytherapy. === Radionuclide therapy === Radionuclide therapy (also known as systemic radioisotope therapy, radiopharmaceutical therapy, or molecular radiotherapy), is a form of targeted therapy. Targeting can be due to the chemical properties of the isotope such as radioiodine which is specifically absorbed by the thyroid gland a thousandfold better than other bodily organs. Targeting can also be achieved by attaching the radioisotope to another molecule or antibody to guide it to the target tissue. The radioisotopes are delivered through infusion (into the bloodstream) or ingestion. Examples are the infusion of metaiodobenzylguanidine (MIBG) to treat neuroblastoma, of oral iodine-131 to treat thyroid cancer or thyrotoxicosis, and of hormone-bound lutetium-177 and yttrium-90 to treat neuroendocrine tumors (peptide receptor radionuclide therapy). Another example is the injection of radioactive yttrium-90 or holmium-166 microspheres into the hepatic artery to radioembolize liver tumors or liver metastases. These microspheres are used for the treatment approach known as selective internal radiation therapy. The microspheres are approximately 30 μm in diameter (about one-third of a human hair) and are delivered directly into the artery supplying blood to the tumors. These treatments begin by guiding a catheter up through the femoral artery in the leg, navigating to the desired target site and administering treatment. The blood feeding the tumor will carry the microspheres directly to the tumor enabling a more selective approach than traditional systemic chemotherapy. There are currently three different kinds of microspheres: SIR-Spheres, TheraSphere and QuiremSpheres. A major use of systemic radioisotope therapy is in the treatment of bone metastasis from cancer. The radioisotopes travel selectively to areas of damaged bone, and spare normal undamaged bone. Isotopes commonly used in the treatment of bone metastasis are radium-223, strontium-89 and samarium (153Sm) lexidronam. In 2002, the United States Food and Drug Administration (FDA) approved ibritumomab tiuxetan (Zevalin), which is an anti-CD20 monoclonal antibody conjugated to yttrium-90. In 2003, the FDA approved the tositumomab/iodine (131I) tositumomab regimen (Bexxar), which is a combination of an iodine-131 labelled and an unlabelled anti-CD20 monoclonal antibody. These medications were the first agents of what is known as radioimmunotherapy, and they were approved for the treatment of refractory non-Hodgkin's lymphoma. 
=== Intraoperative radiotherapy === Intraoperative radiation therapy (IORT) is applying therapeutic levels of radiation to a target area, such as a cancer tumor, while the area is exposed during surgery. ==== Rationale ==== The rationale for IORT is to deliver a high dose of radiation precisely to the targeted area with minimal exposure of surrounding tissues which are displaced or shielded during the IORT. Conventional radiation techniques such as external beam radiotherapy (EBRT) following surgical removal of the tumor have several drawbacks: The tumor bed where the highest dose should be applied is frequently missed due to the complex localization of the wound cavity even when modern radiotherapy planning is used. Additionally, the usual delay between the surgical removal of the tumor and EBRT may allow a repopulation of the tumor cells. These potentially harmful effects can be avoided by delivering the radiation more precisely to the targeted tissues leading to immediate sterilization of residual tumor cells. Another aspect is that wound fluid has a stimulating effect on tumor cells. IORT was found to inhibit the stimulating effects of wound fluid. == History == Medicine has used radiation therapy as a treatment for cancer for more than 100 years, with its earliest roots traced from the discovery of X-rays in 1895 by Wilhelm Röntgen. Emil Grubbe of Chicago was possibly the first American physician to use X-rays to treat cancer, beginning in 1896. The field of radiation therapy began to grow in the early 1900s largely due to the groundbreaking work of Nobel Prize–winning scientist Marie Curie (1867–1934), who discovered the radioactive elements polonium and radium in 1898. This began a new era in medical treatment and research. Through the 1920s the hazards of radiation exposure were not understood, and little protection was used. Radium was believed to have wide curative powers and radiotherapy was applied to many diseases. Prior to World War 2, the only practical sources of radiation for radiotherapy were radium, its "emanation", radon gas, and the X-ray tube. External beam radiotherapy (teletherapy) began at the turn of the century with relatively low voltage (<150 kV) X-ray machines. It was found that while superficial tumors could be treated with low voltage X-rays, more penetrating, higher energy beams were required to reach tumors inside the body, requiring higher voltages. Orthovoltage X-rays, which used tube voltages of 200-500 kV, began to be used during the 1920s. To reach the most deeply buried tumors without exposing intervening skin and tissue to dangerous radiation doses required rays with energies of 1 MV or above, called "megavolt" radiation. Producing megavolt X-rays required voltages on the X-ray tube of 3 to 5 million volts, which required huge expensive installations. Megavoltage X-ray units were first built in the late 1930s but because of cost were limited to a few institutions. One of the first, installed at St. Bartholomew's hospital, London in 1937 and used until 1960, used a 30 foot long X-ray tube and weighed 10 tons. Radium produced megavolt gamma rays, but was extremely rare and expensive due to its low occurrence in ores. In 1937 the entire world supply of radium for radiotherapy was 50 grams, valued at £800,000, or $50 million in 2005 dollars. The invention of the nuclear reactor in the Manhattan Project during World War 2 made possible the production of artificial radioisotopes for radiotherapy. 
Cobalt therapy, teletherapy machines using megavolt gamma rays emitted by cobalt-60, a radioisotope produced by irradiating ordinary cobalt metal in a reactor, revolutionized the field between the 1950s and the early 1980s. Cobalt machines were relatively cheap, robust and simple to use, although due to its 5.27 year half-life the cobalt had to be replaced about every 5 years. Medical linear particle accelerators, developed since the 1940s, began replacing X-ray and cobalt units in the 1980s and these older therapies are now declining. The first medical linear accelerator was used at the Hammersmith Hospital in London in 1953. Linear accelerators can produce higher energies, have more collimated beams, and do not produce radioactive waste with its attendant disposal problems like radioisotope therapies. With Godfrey Hounsfield's invention of computed tomography (CT) in 1971, three-dimensional planning became a possibility and created a shift from 2-D to 3-D radiation delivery. CT-based planning allows physicians to more accurately determine the dose distribution using axial tomographic images of the patient's anatomy. The advent of new imaging technologies, including magnetic resonance imaging (MRI) in the 1970s and positron emission tomography (PET) in the 1980s, has moved radiation therapy from 3-D conformal to intensity-modulated radiation therapy (IMRT) and to image-guided radiation therapy tomotherapy. These advances allowed radiation oncologists to better see and target tumors, which have resulted in better treatment outcomes, more organ preservation and fewer side effects. While access to radiotherapy is improving globally, more than half of patients in low and middle income countries still do not have available access to the therapy as of 2017. == See also == == References == == Further reading == == External links == Information Human Health Campus The official website of the International Atomic Energy Agency dedicated to Professionals in Radiation Medicine. This site is managed by the Division of Human Health, Department of Nuclear Sciences and Applications RT Answers – ASTRO: patient information site The Radiation Therapy Oncology Group: an organisation for radiation oncology research RadiologyInfo -The radiology information resource for patients: Radiation Therapy Source of cancer stem cells' resistance to radiation explained on YouTube. Biologically equivalent dose calculator Radiobiology Treatment Gap Compensator Calculator About the profession PROS (Paediatric Radiation Oncology Society) American Society for Radiation Oncology European Society for Radiotherapy and Oncology Who does what in Radiation Oncology? – Responsibilities of the various personnel within Radiation Oncology in the United States Accidents and QA Verification of dose calculations in radiation therapy Radiation Safety in External Beam Radiotherapy (IAEA)
Wikipedia/Radiation_therapy
Mumps vaccines are vaccines which prevent mumps. When given to a majority of the population, they decrease complications at the population level. Effectiveness when 90% of a population is vaccinated is estimated at 85%. Two doses are required for long term prevention. The initial dose is recommended between 12 and 18 months of age. The second dose is then typically given between two years and six years of age. Usage after exposure in those not already immune may be useful. Side effects are usually mild. The vaccine may cause "slight soreness and swelling" at the site of injection, parotitis, and mild fever. More significant side effects are rare. Evidence is insufficient to link the vaccine to complications such as neurological effects (beyond "occasional orchitis and sensorineural deafness"). The vaccine should not be given to people who are pregnant or have very poor immune system function. Poor outcomes among children of mothers who received the vaccine during pregnancy, however, have not been documented. Even though the vaccine is developed in chicken cells, it is generally safe to give to those with egg allergies. Most of the developed world and many countries in the developing world include it in their immunization programs, often in combination with the measles and rubella vaccines, known as MMR. A formulation with the previous three and the varicella (chickenpox) vaccine known as MMRV is also available. As of 2005, 110 countries provided the vaccine as part of their immunization programs. In areas where widespread vaccination is carried out, it has resulted in a more than 90% decline in rates of disease. Almost half a billion doses of one variety of the vaccine have been given. == History == In the mid-twentieth century, mumps infections among children were not viewed as a serious public health issue, but adult men may develop debilitating testicular inflammation, which posed particular difficulty among close-quartered soldiers during wartime. As a result, during World War II (1939-1945), the United States government targeted mumps for scientific research. The first experimental mumps vaccine was licensed in 1948; developed from inactivated virus, it only had short-term effectiveness. Improved vaccines became commercially available in the 1960s. In 1963, Maurice Hilleman of Merck & Co. took samples of the mumps virus from his daughter, who had contracted the disease; she became the namesake for the resulting Jeryl Lynn strain. Building on then-recent advances that had led to vaccines for polio and measles, the mumps virus strains were developed in embryonic hens' eggs and chick embryo cell cultures. The resulting strains of virus were less well-suited for human cells, and are thus said to be attenuated. They are sometimes referred to as neuroattenuated in the sense that these strains are less virulent to human neurons than the wild strains. Hilleman's work led to the first effective mumps vaccine, called Mumpsvax. Licensed in 1967, its four-year development set a record for fastest development of a new vaccine, a record later surpassed by the COVID-19 vaccine, which was developed in less than a year. Vaccination against mumps did not become routine until Mumpsvax was included in Merck's combined MMR vaccine, which targeted measles and rubella along with mumps. MMR was licensed in 1971, and 40 percent of American children had received the combined vaccine by 1974. In 1977, the U.S.
Centers for Disease Control and Prevention (CDC) recommended mumps immunization (as part of MMR) for all children over 12 months of age, and in 1998, CDC began recommending a two-dose immunization of MMR. == Types == While the initial vaccine in the 1940s was based on inactivated virus, subsequent preparations since the 1960s consist of live virus that has been weakened. Mumps vaccine is on the World Health Organization's List of Essential Medicines. There are a number of different types in use as of 2007. Mumpsvax is Merck's brand of Jeryl Lynn strain vaccines. It is a component of Merck's three-virus MMR vaccine, and is the mumps vaccine standard in the United States. Mumpsvax is given by a subcutaneous injection of live virus reconstituted from freeze-dried (lyophilized) vaccine. Production of Mumpsvax as a stand-alone product ceased in 2009. The cells used in culture, virus stocks used, and animal fluids are all screened for extraneous material as part of the vaccine production. They are grown in Medium 199 (a solution containing buffered salt, vitamins, amino acids, fetal bovine serum) with SPGA (sucrose, phosphate, glutamate, human serum albumin) and neomycin. The human albumin processing uses the Cohn cold ethanol fractionation method. === Other types === RIT 4385 is a newer strain derived from the Jeryl Lynn strain by Maurice Hilleman, Jeryl Lynn's father. The Leningrad-3 strain was developed by Smorodintsev and Klyachko in guinea pig kidney cell culture and has been used since 1950 in former Soviet countries. This vaccine is routinely used in Russia. The L-Zagreb strain, used in Croatia and India, was derived from the Leningrad-3 strain by further passaging. The Urabe strain was introduced in Japan, and later licensed in Belgium, France and Italy. It has been associated with a higher incidence of meningitis (1/143 000 versus 1/227 000 for J-L), and was abandoned in several countries. It was formulated as MMR in the UK. The Rubini strain, used mainly in Switzerland, was attenuated by a higher number of passes through chicken embryos, and later proved to have low potency. It was introduced in 1985. == Illegal importation of ineffective version into the UK == Monovalent mumps vaccine (Mumpsvax) remained available in the US when MMR was introduced in the UK, replacing the MR (measles and rubella) mixed vaccine. No UK-licensed monovalent preparation was ever available. Monovalent mumps vaccines were available before MMR, but only used on a limited scale. This became the subject of considerable argument at the end of the 20th century, since some parents preferred to obtain the components of the MMR mixture individually. One unlicensed mumps vaccine preparation imported into the United Kingdom proved to be essentially ineffective. Immunisation against mumps in the UK became routine in 1988, commencing with MMR. The Aventis-Pasteur "MMR-2" brand is usual in the UK in 2006. == Storage and stability == The cold chain is a major consideration in vaccination, particularly in less-developed countries. Mumps vaccines are normally refrigerated, but have a long half-life of 65 days at 23 degrees Celsius. == References == == Further reading == == External links == Mumps Vaccine at the U.S. National Library of Medicine Medical Subject Headings (MeSH) "Mumps Vaccine". Drug Information Portal. U.S. National Library of Medicine. Mumps (The History of Vaccines) Mumps Immunization. WHO
Wikipedia/Mumps_vaccine
Tetanus vaccine, also known as tetanus toxoid (TT), is a toxoid vaccine used to prevent tetanus. During childhood, five doses are recommended, with a sixth given during adolescence. After three doses, almost everyone is initially immune, but additional doses every ten years are recommended to maintain immunity. A booster shot should be given within 48 hours of an injury to people whose immunization is out of date. Confirming that pregnant women are up to date on tetanus immunization during each pregnancy can prevent both maternal and neonatal tetanus. The vaccine is very safe, including during pregnancy and in those with HIV/AIDS. Redness and pain at the site of injection occur in between 25% and 85% of people. Fever, feeling tired, and minor muscle pain occurs in less than 10% of people. Severe allergic reactions occur in fewer than one in 100,000 people. A number of vaccine combinations include the tetanus vaccine, such as DTaP and Tdap, which contain diphtheria, tetanus, and pertussis vaccines, and DT and Td, which contain diphtheria and tetanus vaccines. DTaP and DT are given to children less than seven years old, while Tdap and Td are given to those seven years old and older. The lowercase d and p denote lower strengths of diphtheria and pertussis vaccines. Tetanus antiserum was developed in 1890, with its protective effects lasting a few weeks. The tetanus toxoid vaccine was developed in 1924, and came into common use for soldiers in World War II. Its use resulted in a 95% decrease in the rate of tetanus. It is on the World Health Organization's List of Essential Medicines. == Medical uses == === Effectiveness === Vaccination confers near-complete protection from tetanus, provided the individual has received their recommended booster shots. Globally, deaths from tetanus in newborns decreased from 787,000 in 1988 to 58,000 in 2010, and 34,000 deaths in 2015 (a 96% decrease from 1988). In the 1940s, before the vaccine, there were about 550 cases of tetanus per year in the United States, which has decreased to about 30 cases per year in the 2000s. Nearly all cases are among those who have never received a vaccine, or adults who have not stayed up to date on their 10-year booster shots. === Pregnancy === Guidelines on prenatal care in the United States specify that women should receive a dose of the Tdap vaccine during each pregnancy, preferably between weeks 27 and 36, to allow antibody transfer to the fetus. All postpartum women who have not previously received the Tdap vaccine are recommended to get it prior to discharge after delivery. It is recommended for pregnant women who have never received the tetanus vaccine (i.e., neither DTP or DTaP, nor DT as a child or Td or TT as an adult) to receive a series of three Td vaccinations starting during pregnancy to ensure protection against maternal and neonatal tetanus. In such cases, Tdap is recommended to be substituted for one dose of Td, again preferably between 27 and 36 weeks of gestation, and then the series completed with Td. === Specific types === The first vaccine is given in infancy. The baby is injected with the DTaP vaccine, which is three inactive toxins in one injection. DTaP protects against diphtheria, pertussis, and tetanus. This acellular vaccine is safer than the previously used DTP with whole inactivated pertussis (now retroactively notated DTwP). Another option is DT, which is a combination of diphtheria and tetanus vaccines. This is given as an alternative to infants who have conflicts with the DTaP vaccine. 
Quadrivalent, pentavalent, and hexavalent formulations contain DTaP together with one or more additional vaccines (inactivated polio virus vaccine (IPV), Haemophilus influenzae type b conjugate, or hepatitis B), with availability varying between countries. For the every-ten-year booster, Td or Tdap may be used, though Tdap is more expensive. === Schedule === Because DTaP and DT are administered to children less than a year old, the recommended location for injection is the anterolateral thigh muscle. However, these vaccines can be injected into the deltoid muscle if necessary. The World Health Organization (WHO) recommends six doses in childhood starting at six weeks of age. Four doses of DTaP are to be given in early childhood. The first dose should be around two months of age, the second at four months, the third at six, and the fourth from fifteen to eighteen months of age. There is a recommended fifth dose to be administered to four- to six-year-olds. Td and Tdap are for older children, adolescents, and adults and can be injected into the deltoid muscle. These are boosters and are recommended every ten years. It is safe to have shorter intervals between a single dose of Tdap and a dose of the Td booster. ==== Additional doses ==== Booster shots are important because antibody-producing lymphocytes do not remain active at a constantly high rate: after the vaccine is introduced and lymphocyte production peaks, the activity of these white blood cells begins to decline. The decline in activity of the T-helper cells means that there must be a booster to help keep the white blood cells active. Td and Tdap are the booster shots given every ten years to maintain immunity for adults from nineteen to sixty-five years of age. Tdap is given as a one-time, first-time-only dose that includes the tetanus, diphtheria, and acellular pertussis vaccinations. This should not be administered to those who are under the age of eleven or over the age of sixty-five. Td is the booster shot given to people over the age of seven and includes the tetanus and diphtheria toxoids. However, Td has less of the diphtheria toxoid, which is why the "d" is lowercase and the "T" is capitalized. In 2020, the US Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP) recommended that either the tetanus and diphtheria toxoids (Td) vaccine or Tdap be used for the decennial Td booster, tetanus prevention during wound management, and for additional required doses in the catch-up immunization schedule if a person has received at least one Tdap dose. == Side effects == Common side effects of the tetanus vaccine include fever, redness, and swelling with soreness or tenderness around the injection site (one in five people have redness or swelling). Body aches and tiredness have been reported following Tdap. Td / Tdap can cause painful swelling of the entire arm in one in 500 people. Tetanus toxoid-containing vaccines (DTaP, DTP, Tdap, Td, DT) may cause brachial neuritis at a rate of one out of every 100,000 to 200,000 doses. == Mechanism of action == The type of vaccination for this disease is called artificial active immunity. This type of immunity is generated when a dead or weakened version of the disease enters the body, causing an immune response which includes the production of antibodies.
This is beneficial because it means that if the disease is ever introduced into the body, the immune system will recognize the antigen and produce antibodies more rapidly. == History == The first vaccine for passive immunology was discovered by a group of German scientists under the leadership of Emil von Behring in 1890. The first inactive tetanus toxoid was discovered and produced in 1924. A more effective adsorbed version of the vaccine, created in 1938, was proven to be successful when it was used to prevent tetanus in the military during World War II. DTP/DTwP (which is the combined vaccine for diphtheria, tetanus, and pertussis) was first used in 1948, and was continued until 1991, when it was replaced with an acellular form of the pertussis vaccine due to safety concerns. Half of those who received the DT(w)P vaccine had redness, swelling, and pain around the injection site, which convinced researchers to find a replacement vaccine. Two new vaccines were launched in 1992. These combined tetanus and diphtheria with acellular pertussis (TDaP or DTaP), which could be given to adolescents and adults (as opposed to previously when the vaccine was only given to children). == References == == Further reading == == External links == "Td (Tetanus, Diphtheria) Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). August 2021. "DTaP (Diphtheria, Tetanus, Pertussis) Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). August 2021. "DTaP/Tdap/Td ACIP Vaccine Recommendations". Centers for Disease Control and Prevention (CDC). 28 January 2020. Tetanus Toxoid at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Diphtheria-Tetanus Vaccine at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Diphtheria-Tetanus-Pertussis Vaccine at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Diphtheria-Tetanus-acellular Pertussis Vaccines at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Tetanus_vaccine
Peptide-based synthetic vaccines (epitope vaccines) are subunit vaccines made from peptides. The peptides mimic the epitopes of the antigen that triggers direct or potent immune responses. Peptide vaccines can not only induce protection against infectious pathogens and non-infectious diseases but also be utilized as therapeutic cancer vaccines, where peptides from tumor-associated antigens are used to induce an effective anti-tumor T-cell response. == History == The traditional vaccines are the whole live or fixed pathogens. The second generation of vaccines is mainly the protein purified from the pathogen. The third generation of vaccines is the DNA or plasmid that can express the proteins of the pathogen. Peptide vaccines are the latest step in the evolution of vaccines. == Advantages and disadvantages == Compared with traditional vaccines such as whole fixed pathogens or protein molecules, peptide vaccines have several advantages and disadvantages. Advantages: The vaccines are fully synthesized by chemical synthesis and can be treated as a chemical entity. With more advanced solid-phase peptide synthesis (SPPS) using automation and microwave techniques, the production of peptides becomes more efficient. The vaccines do not have any biological contamination since they are chemically synthesized. The vaccines are water-soluble and can be kept stable under simple conditions. The peptides can be specially designed for specificity. A single peptide vaccine can be designed to have multiple epitopes to generate immune responses for several diseases. The vaccines only contain a short peptide chain, so they are less likely to lead to allergic or auto-immune responses. Disadvantages: Poor immunogenicity. Unstable in cells. Lack of native conformation. Only effective for a limited population. == Epitope design == The whole purpose of a peptide vaccine is to mimic the epitope of an antigen, so epitope design is the most important stage of vaccine development and requires an accurate understanding of the amino acid sequence of the immunogenic protein of interest. The designed epitope is expected to generate a strong and long-lasting immune response against the pathogen. The following are points to consider when designing the epitope: The non-dominant epitope could generate a stronger immune response than the dominant epitope. Ex. The antibodies from people infected by hookworm can recognize the dominant epitope of the antigen called Necator americanus APR-1 protein, but the antibodies cannot induce protection against hookworm. However, other non-dominant epitopes on APR-1 protein show the ability to induce the production of neutralizing antibodies against hookworm. Therefore, the non-dominant epitopes are better candidates for peptide vaccines against hookworm infection. Take hypersensitivity into consideration. Ex. Some IgE-inducing epitopes cause hypersensitivity reactions after vaccination in humans due to the overlap with IgG epitopes in the Na-ASP-2 protein, which is an antigen from hookworm. Some short peptide epitopes need elongating to maintain the native conformation. The elongated sequences can include proper secondary structure. Also, some short peptides can be stapled or cyclized together to maintain the proper conformation. Ex. B-cell epitopes can be as short as 5 amino acids. To induce an immune response, a sequence from the yeast GCN4 protein is used to improve the conformation of the peptide vaccines by forming an alpha-helix.
Use adjuvants associated with the epitope to induce the immune response. == Applications == === Cancer === The gp100 peptide vaccine is being studied as a treatment for melanoma. To generate a greater in vitro CTL response, the peptide, gp100:209-217(210M), is modified and binds to HLA-A2*0201. After vaccination, more circulating T cells can recognize and kill melanoma cancer cells in vitro. Rindopepimut is an epidermal growth factor receptor (EGFR)-derived peptide vaccine to treat glioblastoma multiforme (GBM). The 14-mer peptide is coupled with keyhole limpet hemocyanin (KLH), which can reduce the risk of cancer. E75, GP2, and AE37 are three different HER2/neu-derived single-peptide vaccines to treat breast cancer. HER2/neu usually has low expression in healthy tissues. E75, consisting of 9 amino acids, is the immunodominant epitope of the HER2 protein. GP2, consisting of 9 amino acids, is the subdominant epitope. Both E75 and GP2 stimulate CD8+ lymphocytes, but GP2 has a lower affinity than E75. AE37 stimulates CD4+ lymphocytes. === Other common diseases === EpiVacCorona is a peptide-based vaccine against COVID-19. IC41 is a peptide vaccine candidate against the Hepatitis C virus. It consists of five synthetic peptides along with the synthetic adjuvant called poly-l-arginine. Multimeric-001 is the most efficient peptide vaccine candidate against influenza. It contains B- and T-cell epitopes from hemagglutinin, matrix 1, and nucleoprotein, combined into a single recombinantly-expressed polypeptide. Alzheimer peptide vaccines include CAD106, UB311, Lu AF20513, ABvac40, ACI-35, and AADvac-1. == References ==
Wikipedia/Peptide_vaccine
A prescription drug (also prescription medication, prescription medicine or prescription-only medication) is a pharmaceutical drug that is permitted to be dispensed only to those with a medical prescription. In contrast, over-the-counter drugs can be obtained without a prescription. The reason for this difference in substance control is the potential scope of misuse, from drug abuse to practising medicine without a license and without sufficient education. Different jurisdictions have different definitions of what constitutes a prescription drug. In North America, ℞, usually printed as "Rx", is used as an abbreviation of the word "prescription". It is a contraction of the Latin word "recipe" (an imperative form of "recipere") meaning "take". Prescription drugs are often dispensed together with a monograph (in Europe, a Patient Information Leaflet or PIL) that gives detailed information about the drug. The use of prescription drugs has been increasing since the 1960s. == Regulation == === Australia === In Australia, the Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP) governs the manufacture and supply of drugs with several categories: Schedule 1 – Defunct Drug. Schedule 2 – Pharmacy Medicine Schedule 3 – Pharmacist-Only Medicine Schedule 4 – Prescription-Only Medicine/Prescription Animal Remedy Schedule 5 – Caution/Poison. Schedule 6 – Poison Schedule 7 – Dangerous Poison Schedule 8 – Controlled Drug (Possession without authority illegal) Schedule 9 – Prohibited Substance (Possession illegal without a license; legal only for research purposes) Schedule 10 – Controlled Poison. Unscheduled Substances. As in other developed countries, the person requiring a prescription drug attends the clinic of a qualified health practitioner, such as a physician, who may write the prescription for the required drug. Many prescriptions issued by health practitioners in Australia are covered by the Pharmaceutical Benefits Scheme, a scheme that provides subsidised prescription drugs to residents of Australia to ensure that all Australians have affordable and reliable access to a wide range of necessary medicines. When purchasing a drug under the PBS, the consumer pays no more than the patient co-payment contribution, which, as of January 1, 2022, is A$42.50 for general patients. Those covered by government entitlements (low-income earners, welfare recipients, Health Care Card holders, etc.) and/or under the Repatriation Pharmaceutical Benefits Scheme (RPBS) have a reduced co-payment, which is A$6.80 in 2022. The co-payments are compulsory and can be discounted by pharmacies up to a maximum of A$1.00 at cost to the pharmacy. === United Kingdom === In the United Kingdom, the Medicines Act 1968 and the Prescription Only Medicines (Human Use) Order 1997 contain regulations that cover the supply, sale, use, prescribing and production of medicines. There are three categories of medicine: Prescription-only medicines (POM), which may be dispensed (sold in the case of a private prescription) by a pharmacist only to those to whom they have been prescribed Pharmacy medicines (P), which may be sold by a pharmacist without a prescription General sales list (GSL) medicines, which may be sold without a prescription in any shop The simple possession of a prescription-only medicine without a prescription is legal unless it is covered by the Misuse of Drugs Act 1971.
A patient visits a medical practitioner or dentist, who may prescribe drugs and certain other medical items, such as blood glucose-testing equipment for diabetics. Also, qualified and experienced nurses, paramedics and pharmacists may be independent prescribers. These independent prescribers may prescribe all POMs (including controlled drugs), but may not prescribe Schedule 1 controlled drugs or the three controlled drugs listed for the treatment of addiction; this is similar to doctors, who require a special licence from the Home Office to prescribe Schedule 1 drugs. Schedule 1 drugs have little or no medical benefit, hence the limitations on prescribing them. District nurses and health visitors have had limited prescribing rights since the mid-1990s; until then, prescriptions for dressings and simple medicines had to be signed by a doctor. Once issued, a prescription is taken by the patient to a pharmacy, which dispenses the medicine. Most prescriptions are NHS prescriptions, subject to a standard charge that is unrelated to what is dispensed. The NHS prescription fee was increased to £9.90 for each item in England in May 2024; prescriptions are free of charge if prescribed and dispensed in Scotland, Wales and Northern Ireland, and for some patients in England, such as inpatients, children, those over 60 or with certain medical conditions, and claimants of certain benefits. The pharmacy charges the NHS the actual cost of the medicine, which may vary from a few pence to hundreds of pounds. A patient can consolidate prescription charges by using a prescription payment certificate (informally a "season ticket"), effectively capping costs at £31.25 a quarter or £111.60 for a year. Outside the NHS, private prescriptions are issued by private medical practitioners, and sometimes under the NHS, for medicines that are not covered by the NHS. A patient pays the pharmacy the normal price for medicine prescribed outside the NHS. Survey results published by Ipsos MORI in 2008 found that around 800,000 people in England were not collecting prescriptions or getting them dispensed because of the cost, the same as in 2001. === United States === In the United States, the Federal Food, Drug, and Cosmetic Act defines which substances, known as legend drugs, require a prescription to be dispensed by a pharmacy. The federal government authorizes physicians (of any specialty), physician assistants, nurse practitioners and other advanced practice nurses, veterinarians, dentists, and optometrists to prescribe any controlled substance. They are issued unique DEA numbers. Many other mental and physical health technicians, including basic-level registered nurses, medical assistants, emergency medical technicians, most psychologists, and social workers, are not authorized to prescribe legend drugs. The federal Controlled Substances Act (CSA) was enacted in 1970. It regulates manufacture, importation, possession, use, and distribution of controlled substances, which are drugs with potential for abuse or addiction. The legislation classifies these drugs into five schedules, with varying qualifications for each schedule. The schedules are designated schedule I, schedule II, schedule III, schedule IV, and schedule V. Many drugs other than controlled substances require a prescription. The safety and effectiveness of prescription drugs in the US are regulated by the 1987 Prescription Drug Marketing Act (PDMA). The Food and Drug Administration (FDA) is charged with implementing the law.
As a general rule, over-the-counter (OTC) drugs are used to treat a condition that does not need care from a healthcare professional, if they have been proven to meet higher safety standards for self-medication by patients. Often, a lower strength of a drug will be approved for OTC use, but higher strengths require a prescription to be obtained; a notable case is ibuprofen, which has been widely available as an OTC pain killer since the mid-1980s but is available by prescription in doses up to four times the OTC dose for severe pain that is not adequately controlled by the OTC strength. Herbal preparations, amino acids, vitamins, minerals, and other food supplements are regulated by the FDA as dietary supplements. Because specific health claims cannot be made, the consumer must make informed decisions when purchasing such products. By law, American pharmacies operated by "membership clubs" such as Costco and Sam's Club must allow non-members to use their pharmacy services and may not charge more for these services than they charge their members. Physicians may legally prescribe drugs for uses other than those specified in the FDA approval, known as off-label use. Drug companies, however, are prohibited from marketing their drugs for off-label uses. Some prescription drugs are commonly abused, particularly those marketed as analgesics, including fentanyl (Duragesic), hydrocodone (Vicodin), oxycodone (OxyContin), oxymorphone (Opana), propoxyphene (Darvon), hydromorphone (Dilaudid), meperidine (Demerol), and diphenoxylate (Lomotil). Some prescription painkillers have been found to be addictive, and unintentional poisoning deaths in the United States have skyrocketed since the 1990s, according to the National Safety Council. Prescriber education guidelines, patient education, prescription drug monitoring programs and the regulation of pain clinics are regulatory tactics that have been used to curtail opioid use and misuse. ==== Expiration date ==== The expiration date, required in several countries, specifies the date up to which the manufacturer guarantees the full potency and safety of a drug. In the United States, expiration dates are determined by regulations established by the FDA. The FDA advises consumers not to use products after their expiration dates. A study conducted by the U.S. Food and Drug Administration covered over 100 drugs, prescription and over-the-counter. The results showed that about 90% of them were safe and effective far past their original expiration date. At least one drug worked 15 years after its expiration date. Joel Davis, a former FDA expiration-date compliance chief, said that with a handful of exceptions—notably nitroglycerin, insulin, and some liquid antibiotics (outdated tetracyclines can cause Fanconi syndrome)—most expired drugs are probably effective. The American Medical Association issued a report and statement on Pharmaceutical Expiration Dates. The Harvard Medical School Family Health Guide notes that, with rare exceptions, "it's true the effectiveness of a drug may decrease over time, but much of the original potency still remains even a decade after the expiration date". Drug expiration dates exist on most medication labels, including prescription, over-the-counter and dietary supplements. U.S. pharmaceutical manufacturers are required by law to place expiration dates on prescription products prior to marketing.
For legal and liability reasons, manufacturers will not make recommendations about the stability of drugs past the original expiration date. == Cost == Prices of prescription drugs vary widely around the world. Prescription costs for biosimilar and generic drugs are usually lower than those for brand-name drugs, and the cost differs from one pharmacy to another. As of 2022, some U.S. states have sought federal approval to buy drugs from Canada in order to lower prescription drug costs. Generics undergo strict scrutiny to demonstrate efficacy, safety, dosage, strength, stability, and quality equal to those of brand-name drugs. Generics are developed after the brand name has already been established, so generic drug approval in many respects follows a shortened approval process because it replicates the brand-name drug. Brand-name drugs cost more because of the time, money, and resources that drug companies invest in development, including the clinical trials that the FDA requires for the drug to be marketed; because drug companies have to invest more in research to do this, brand-name drug prices are much higher when sold to consumers. When the patent expires for a brand-name drug, generic versions of that drug are produced by other companies and are sold at a lower price. By switching to generic prescription drugs, patients can save significant amounts of money: for example, one FDA study found savings of more than 52% in a consumer's overall prescription drug costs. == Strategies to limit drug prices in the United States == In the United States there are many resources available to patients to lower the costs of medication. These include copayments, coinsurance, and deductibles. The Medicaid Drug Rebate Program is another example. Generic drug programs lower the amount of money patients have to pay when picking up their prescription at the pharmacy. As their name implies, they only cover generic drugs. Co-pay assistance programs are programs that help patients lower the costs of specialty medications; i.e., medications that are on restricted formularies, have limited distribution, and/or have no generic version available. These medications can include drugs for HIV, hepatitis C, and multiple sclerosis. Patient Assistance Program Center (RxAssist) has a list of foundations that provide co-pay assistance programs. Co-pay assistance programs are for under-insured patients. Patients without insurance are not eligible for this resource; however, they may be eligible for patient assistance programs. Patient assistance programs are funded by the manufacturer of the medication. Patients can often apply to these programs through the manufacturer's website. This type of assistance program is one of the few options available to uninsured patients. The out-of-pocket cost for patients enrolled in co-pay assistance or patient assistance programs is $0. These programs are a major resource for lowering the costs of medications; however, many providers and patients are not aware of them. == Environment == Traces of prescription drugs—including antibiotics, anti-convulsants, mood stabilizers and sex hormones—have been detected in drinking water. Pharmaceutically active compounds (PhACs) discarded from human therapy and their metabolites may not be eliminated entirely by sewage treatment plants and have been detected at low concentrations in surface waters downstream from those plants.
The continuous discharge of incompletely treated wastewater may interact with other environmental chemicals and lead to uncertain ecological effects. Because most pharmaceuticals are highly water-soluble, fish and other aquatic organisms are susceptible to their effects. The long-term effects of pharmaceuticals in the environment may affect the survival and reproduction of such organisms. Levels of medical drug waste in the water are low enough that they are not a direct concern to human health, although processes such as biomagnification are potential human health concerns. There is, however, clear evidence of harm to aquatic animals and fauna. Recent advancements in technology have allowed scientists to detect smaller, trace quantities of pharmaceuticals, in the ng/mL range. Despite being found at low concentrations, female hormonal contraceptives may cause feminizing effects on male vertebrate species, such as fish, frogs and crocodiles. The FDA established guidelines in 2007 to inform consumers how they should dispose of prescription drugs. When medications do not include specific disposal instructions, patients should not flush medications down the toilet, but should instead use medication take-back programs to reduce the amount of pharmaceutical waste in sewage and landfills. If no take-back programs are available, prescription drugs can be discarded in household trash after they are crushed or dissolved and then mixed in a separate container or sealable bag with undesirable substances like cat litter or other unappealing material (to discourage consumption). == See also == == References ==
Wikipedia/Prescription_drug
The measles vaccine protects against infection with measles. Nearly all of those who do not develop immunity after a single dose develop it after a second dose. When the rate of vaccination within a population is greater than 92%, outbreaks of measles typically no longer occur; however, they may occur again if the rate of vaccination decreases. The vaccine's effectiveness lasts many years. It is unclear if it becomes less effective over time. The vaccine may also protect against measles if given within a couple of days after exposure to measles. The vaccine is generally safe, even for those infected by HIV. Most children do not experience any side effects; those that do occur are usually mild and short-lived, such as fever, rash, pain at the site of injection, and joint stiffness. Anaphylaxis has been documented in about 3.5–10 cases per million doses. Rates of Guillain–Barré syndrome, autism and inflammatory bowel disease do not appear to be increased by measles vaccination. The vaccine is available both by itself and in combinations such as the MMR vaccine (a combination with the rubella vaccine and mumps vaccine) or the MMRV vaccine (a combination of MMR with the chickenpox vaccine). The measles vaccine is equally effective for preventing measles in all formulations, but side effects vary for different combinations. The World Health Organization (WHO) recommends the measles vaccine be given at nine months of age in areas of the world where the disease is common, or at twelve months where the disease is not common. The measles vaccine is based on a live but weakened strain of measles. It comes as a dried powder that is mixed with a specific liquid before being injected either just under the skin or into a muscle. Whether the vaccine was effective can be verified by blood tests. The measles vaccine was first introduced in 1963. In that year, the Edmonston-B strain of measles virus was turned into a vaccine by John Enders and colleagues and licensed in the United States. In 1968, an improved, further weakened measles vaccine was developed by Maurice Hilleman and colleagues; it began to be distributed and has been the only measles vaccine used in the United States since 1968. About 86% of children globally had received at least one dose of the vaccine as of 2018. In 2021, at least 183 countries provided two doses in their routine immunization schedule. It is on the World Health Organization's List of Essential Medicines. As outbreaks easily occur in under-vaccinated populations, the absence of the disease is seen as a test of sufficient vaccination within a population. == Effectiveness == One dose is about 93% effective while two doses of the vaccine are about 97% effective at preventing measles. Before the widespread use of the vaccine, measles was so common that infection was considered "as inevitable as death and taxes." In the United States, reported cases of measles fell from 3 to 4 million per year, with 400 to 500 deaths, to tens of thousands per year following the introduction of two measles vaccines in 1963 (both an inactivated and a live attenuated vaccine, the Edmonston B strain, were licensed for use). Increasing uptake of the vaccine following outbreaks in 1971 and 1977 brought this down to thousands of cases per year in the 1980s. An outbreak of almost 30,000 cases in 1990 led to a renewed push for vaccination and the addition of a second vaccine to the recommended schedule.
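Both the roughly 92% population threshold mentioned in the lead and the one- and two-dose effectiveness figures above can be related through standard, if simplified, formulas. The following is a back-of-envelope illustration rather than anything stated in this article; it assumes homogeneous mixing and a basic reproduction number for measles in the commonly cited range of about 12 to 18.

```latex
% Illustrative only: herd-immunity threshold and required coverage under simple assumptions.
% p_c : fraction of the population that must be immune
% R_0 : basic reproduction number (commonly cited as roughly 12-18 for measles)
% VE  : vaccine effectiveness (about 0.97 for two doses, per the text above)
p_c = 1 - \frac{1}{R_0} \approx 0.92\text{--}0.94, \qquad
\text{required coverage} \approx \frac{p_c}{VE} \approx \frac{0.92}{0.97} \approx 0.95 .
```

In other words, because not every vaccinated person becomes immune, the vaccination coverage needed to reach the immunity threshold is slightly higher than the threshold itself.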
No more than 220 cases were reported in any year from 1997 to 2013, and the disease was believed to be no longer endemic in the United States. In 2014, 667 cases were reported. The benefits of measles vaccination in preventing illness, disability, and death have been well documented. Within the first 20 years of being licensed in the U.S., measles vaccination prevented an estimated 52 million cases of the disease, 17,400 cases of intellectual disability, and 5,200 deaths. From 1999 to 2004, a strategy led by the WHO and UNICEF improved measles vaccination coverage, averting an estimated 1.4 million measles deaths worldwide. The vaccine for measles led to the near-complete elimination of the disease in the United States and other developed countries. While the vaccine is made with a live virus which can cause side effects, these are far fewer and less serious than the sickness and death caused by measles itself; side effects ranging from rashes to, rarely, convulsions occur in a small percentage of recipients. Measles vaccination averted 57 million deaths between 2000 and 2022, according to a World Health Organization report. Measles is common worldwide. Although it was declared eliminated from the U.S. in 2000, high rates of vaccination and excellent communication with those who refuse vaccination are needed to prevent outbreaks and sustain the elimination of measles. Of the 66 cases of measles reported in the U.S. in 2005, slightly over half were attributable to one unvaccinated teenager who became infected during a visit to Romania. This individual returned to a community with many unvaccinated children. The resulting outbreak infected 34 people, mostly children and virtually all unvaccinated; three of them were hospitalized. The public health response required making almost 5,000 phone calls as part of contact tracing, arranging and performing testing as needed, and arranging emergency vaccination for at-risk people who had had contact with this person. Taxpayers and local healthcare organizations likely paid more than US$167,000 in direct costs to contain this one outbreak. A major epidemic was averted due to high rates of vaccination in the surrounding communities. When addressing the major U.S. measles outbreak in 2019, the Centers for Disease Control and Prevention stated that outbreaks are more likely in areas with pockets of unvaccinated residents. However, during the U.S. outbreak beginning in February 2025, the agency declined to publicize its updated expert assessment and forecasting model supporting this conclusion, thereby choosing not to alert clinicians and the public to the specific risk in areas with low immunization rates. The vaccine has nonspecific effects, such as preventing respiratory infections, that may be greater than those of measles prevention alone. These benefits are greater when the vaccine is given before one year of age. A high-titre vaccine resulted in worse outcomes in girls, and consequently is not recommended by the World Health Organization. The immune response to the measles vaccine can be impaired by the presence of parasitic infections such as helminthiasis. === Schedule === The World Health Organization (WHO) recommends two doses of vaccine for all children. In countries with a high risk of disease, the first dose should be given around nine months of age; otherwise it can be given at twelve months of age. The second dose should be given at least one month after the first dose. This is often done at age 15 to 18 months.
After one dose at the age of nine months, 85% are immune, while a dose at twelve months results in 95% immunity. In the United States, the Centers for Disease Control and Prevention (CDC) recommends that children aged six to eleven months traveling outside the United States receive their first dose of MMR vaccine before departure and then receive two more doses: one at 12–15 months (12 months for children in high-risk areas) and the second as early as four weeks later. Otherwise the first dose is typically given at 12–15 months and the second at 4–6 years. In the UK, the National Health Service (NHS) recommendation is for a first dose at around 13 months of age and the second at three years and four months old. In Canada, Health Canada recommends that children traveling outside North America receive an MMR vaccine if they are aged six to 12 months; however, after the child is 12 months old they should receive two additional doses to ensure long-lasting protection. == Adverse effects == Adverse effects associated with the MMR vaccine include fever, rash, injection site pain, and, in rare cases, red or purple discolorations on the skin known as thrombocytopenic purpura, or seizures related to fever (febrile seizure). === False claims about autism === In 1998, Andrew Wakefield et al. published a now-retracted and fraudulent paper in The Lancet linking the MMR vaccine to autism, leading to a decline in vaccination rates. Wakefield was later found to have been "dishonest" by the General Medical Council and barred from practicing medicine in the UK. Numerous subsequent studies and reviews by organizations such as the US Centers for Disease Control and Prevention, the Institute of Medicine, the NHS, and the Cochrane Library have found no evidence of a link between the MMR vaccine and autism. The controversy surrounding Wakefield's publication led to decreased MMR vaccination rates and a subsequent increase in measles cases in the UK. In Japan, where the combined MMR vaccine is not used, autism rates have remained unaffected, further disproving Wakefield's hypothesis. According to a 2019 Los Angeles Times article, concerns were raised about unvaccinated students contributing to the large number of measles outbreaks. Robert F. Kennedy Jr., the United States Secretary of Health and Human Services, had publicly supported Wakefield's disproven theory that vaccines cause autism and founded the anti-vaccine group Children's Health Defense; nevertheless, on 28 February, during the 2025 Southwest US measles outbreak, he announced that he would send 2,000 doses of the MMR vaccine to Texas along with other resources. A New York Times article reporting on the death of a child in Texas from measles, the first such death in the United States in ten years, said that vaccine hesitancy had been rising for many years. === Contraindications === Some people should not receive the measles or MMR vaccine, including in cases of: pregnancy (the MMR vaccine and its components should not be given to pregnant women, and women of childbearing age should check with their doctor about getting vaccinated before becoming pregnant); HIV infection in children (who may receive measles vaccines if their CD4+ lymphocyte count is greater than 15%); a weakened immune system due to HIV/AIDS or certain medical treatments; having a parent or sibling with a history of immune problems; a condition that makes a patient bruise or bleed easily; a recent transfusion of blood or blood products; tuberculosis; receipt of other vaccines in the past 4 weeks; and moderate or severe illness.
However, mild illness (e.g., the common cold) is usually not a contraindication. == History == John Franklin Enders, who had shared the 1954 Nobel Prize in Medicine for work on the polio virus, sent Thomas C. Peebles to Fay School in Massachusetts, where an outbreak of measles was underway; Peebles was able to isolate the virus from blood samples and throat swabs, and was later able to cultivate the virus and show that the disease could be passed on to monkeys inoculated with the material he had collected. Enders was able to use the cultivated virus to develop a measles vaccine in 1963 by attenuation, through cultured chicken embryo fibroblasts, of the material isolated by Peebles. In the late 1950s and early 1960s, nearly twice as many children died from measles as from polio. The vaccine Enders developed was based on the Edmonston strain of attenuated live measles virus, which was named for 11-year-old David Edmonston, the Fay student from whom Peebles had taken the culture that led to the virus's cultivation. In the mid-20th century, measles was particularly devastating in West Africa, where the child mortality rate was 50 percent before age five, and children were struck with the type of rash and other symptoms common prior to 1900 in England and other countries. The first trial of a live attenuated measles vaccine was undertaken in 1960 by the British paediatrician David Morley in a village near Ilesha, Nigeria; so that he could not be accused of exploiting the Nigerian population, Morley included his own four children in the study. The encouraging results led to a second study of about 450 children in the village and at the Wesley Guild Hospital in Ilesha. Following another epidemic, a larger trial was undertaken in September and October 1962, in New York City with the assistance of the WHO: 131 children received the live Enders-attenuated Edmonston B strain plus gamma globulin, 130 children received a "further attenuated" vaccine without gamma globulin, and 173 children acted as control subjects for both groups. As also shown in the Nigerian trial, the trial confirmed that the "further attenuated" vaccine was superior to the Edmonston B vaccine and caused significantly fewer instances of fever and diarrhea. Some 2,000 children in the area were then vaccinated with the further-attenuated vaccine. Maurice Hilleman at Merck & Co., a pioneer in the development of vaccinations, developed an improved version of the measles vaccine in 1968 and subsequently the MMR vaccine in 1971, which vaccinates against measles, mumps and rubella in a single shot followed by a booster. One form is called "Attenuvax". The measles component of the MMR vaccine uses Attenuvax, which is grown in a chick embryo cell culture using Enders' attenuated Edmonston strain. Following ACIP recommendations, Merck decided on 21 October 2009 not to resume production of Attenuvax as a standalone vaccine. A 2022 study in the American Economic Journal found that measles vaccine uptake led to income increases of 1.1 percent and positive effects on employment, due to greater productivity among those who were vaccinated. == Types == The measles vaccine is seldom given as an individual vaccine and is often given in combination with rubella, mumps, or varicella (chickenpox) vaccines.
Below is the list of measles-containing vaccines: the measles vaccine (standalone vaccine); the measles and rubella combined vaccine (MR vaccine); the mumps, measles and rubella combined vaccine (MMR vaccine); and the mumps, measles, rubella and varicella combined vaccine (MMRV vaccine). == Society and culture == Most health insurance plans in the United States cover the cost of vaccines, and the Vaccines for Children Program may be able to help those who do not have coverage. State laws require vaccinations for school children but offer exemptions for medical reasons and sometimes for religious or philosophical reasons. All fifty states require two doses of the MMR vaccine at the appropriate age. Differences in vaccine distribution within a single territory, by age or social class, may shape general perceptions of vaccination efficacy. == References == == Further reading == == External links == "MMR (Measles, Mumps, & Rubella) Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). 22 October 2019. "MMRV (Measles, Mumps, Rubella & Varicella) Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). 22 October 2019. Measles Vaccine at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Measles_vaccine
Meningococcal vaccine refers to any vaccine used to prevent infection by Neisseria meningitidis. Different versions are effective against some or all of the following types of meningococcus: A, B, C, W-135, and Y. The vaccines are between 85 and 100% effective for at least two years. They result in a decrease in meningitis and sepsis among populations where they are widely used. They are given either by injection into a muscle or just under the skin. The World Health Organization recommends that countries with a moderate or high rate of disease or with frequent outbreaks should routinely vaccinate. In countries with a low risk of disease, it recommends that high-risk groups should be immunized. In the African meningitis belt, efforts to immunize all people between the ages of one and thirty with the meningococcal A conjugate vaccine are ongoing. In Canada and the United States, vaccines effective against four types of meningococcus (A, C, W, and Y) are recommended routinely for teenagers and others who are at high risk. Saudi Arabia requires vaccination with the quadrivalent vaccine for international travellers to Mecca for Hajj. Meningococcal vaccines are generally safe. Some people develop pain and redness at the injection site. Use in pregnancy appears to be safe. Severe allergic reactions occur in less than one in a million doses. The first meningococcal vaccine became available in the 1970s. It is on the World Health Organization's List of Essential Medicines. Inspired by the response to the 1997 outbreak in Nigeria, the WHO, Médecins Sans Frontières, and other groups created the International Coordinating Group on Vaccine Provision for Epidemic Meningitis Control, which manages global response strategy. ICGs have since been created for other epidemic diseases. == Types == Neisseria meningitidis has 13 clinically significant serogroups, classified according to the antigenic structure of their polysaccharide capsule. Six serogroups, A, B, C, Y, W-135, and X, are responsible for virtually all cases in humans. Serogroup B is a major cause of meningococcal disease in younger children and adolescents. === Pentavalent (serogroups A, B, C, W, and Y) === There are two pentavalent vaccines available in the United States targeting serogroups A, B, C, W, and Y. Penbraya was approved for use in the United States in October 2023; it combines the vaccines Trumenba and Nimenrix and is approved for use in individuals 10 through 25 years of age. Penmenvy was approved for use in the United States in February 2025 and is likewise approved for use in people aged 10 through 25 years. === Pentavalent (serogroups A, C, W, X, and Y) (MenFive) === There is one pentavalent vaccine available targeting serogroups A, C, W, X, and Y: MenFive, which is approved in several countries and WHO-prequalified. It is approved for use in individuals aged 9 months to 85 years against invasive meningococcal disease caused by Neisseria meningitidis groups A, C, Y, W, and X. It is a freeze-dried conjugate vaccine, recommended as a single intramuscular dose, and is available in 1-dose and 5-dose vials. === Quadrivalent (serogroups A, C, W-135, and Y) === There are three quadrivalent conjugate vaccines (MCV-4) available in the United States targeting serogroups A, C, W-135, and Y: Menactra, Menveo, and Menquadfi. The pure polysaccharide vaccine Menomune (MPSV4) was discontinued in the United States in 2017. Menveo and Menquadfi are approved for medical use in the European Union.
==== Menactra and Menveo ==== The first meningococcal conjugate vaccine (MCV-4), Menactra, was licensed in the US in 2005 by Sanofi Pasteur; Menveo was licensed in 2010 by Novartis. Both MCV-4 vaccines are approved by the Food and Drug Administration (FDA) for people 2 through 55 years of age. Menactra received FDA approval for use in children as young as 9 months in April 2011, while Menveo received FDA approval for use in children as young as two months in August 2013. The Centers for Disease Control and Prevention (CDC) has not made recommendations for or against its use in children less than two years. In November 2024, the European Commission (EC) approved Menveo to protect individuals aged two years and older against invasive meningococcal disease. ==== Menquadfi ==== Menquadfi, manufactured by Sanofi Pasteur, was approved by the US Food and Drug Administration in April 2020 for use in individuals two years of age and older. ==== Menomune ==== Meningococcal polysaccharide vaccine (MPSV-4), Menomune, has been available since the 1970s. It may be used if MCV-4 is not available, and is the only meningococcal vaccine licensed for people older than 55. Information about who should receive the meningococcal vaccine is available from the CDC. ==== Nimenrix ==== Nimenrix (developed by GlaxoSmithKline and later acquired by Pfizer) is a quadrivalent conjugate vaccine against serogroups A, C, W-135, and Y. In April 2012, Nimenrix was approved by the European Medicines Agency as the first quadrivalent vaccine against invasive meningococcal disease to be administered as a single dose in those over the age of one year. In 2016, the agency approved the vaccine for infants six weeks of age and older, and it has been approved in other countries including Canada and Australia, among others. It is not licensed in the United States. ==== Mencevax ==== Mencevax (GlaxoSmithKline) and NmVac4-A/C/Y/W-135 (JN-International Medical Corporation) are used worldwide but have not been licensed in the United States. ==== Limitations ==== The duration of immunity mediated by Menomune (MPSV-4) is three years or less in children aged under five because it does not generate memory T cells. Attempting to overcome this problem by repeated immunization results in a diminished, not increased, antibody response, so boosters are not recommended with this vaccine. As with all polysaccharide vaccines, Menomune does not produce mucosal immunity, so people can still become colonised with virulent strains of meningococcus, and no herd immunity can develop. For this reason, Menomune is suitable for travellers requiring short-term protection, but not for national public health prevention programs. Menveo and Menactra contain the same antigens as Menomune, but the antigens are conjugated to a diphtheria toxoid polysaccharide-protein complex, resulting in anticipated enhanced duration of protection, increased immunity with booster vaccinations, and effective herd immunity. ==== Endurance ==== A study published in March 2006 comparing the two kinds of vaccines found that 76% of subjects still had passive protection three years after receiving MCV-4 (63% protective compared with controls), but only 49% had passive protection after receiving MPSV-4 (31% protective compared with controls).
As of 2010, there remained limited evidence that any of the current conjugate vaccines offer continued protection beyond three years; studies are ongoing to determine the actual duration of immunity and the subsequent requirement of booster vaccinations. The CDC offers recommendations regarding who they feel should get booster vaccinations. === Bivalent (serogroups C and Y) === In June 2012, the FDA approved a combination vaccine against two types of meningococcal disease and Hib disease for infants and children 6 weeks to 18 months old. The vaccine, Menhibrix, is indicated for active immunization to prevent invasive disease caused by Neisseria meningitidis serogroups C and Y and Haemophilus influenzae type b in children 6 weeks through 18 months of age; it was the first meningococcal vaccine that could be given to infants as young as six weeks old. === Serogroup A === A vaccine called MenAfriVac has been developed through a program called the Meningitis Vaccine Project and has the potential to prevent outbreaks of group A meningitis, which is common in sub-Saharan Africa. === Serogroup B === Vaccines against serotype B meningococcal disease have proved difficult to produce, and require a different approach from vaccines against other serotypes. Whereas effective polysaccharide vaccines have been produced against types A, C, W-135, and Y, the capsular polysaccharide on the type B bacterium is too similar to human neural adhesion molecules to be a useful target. Several "serogroup B" vaccines have been produced. Strictly speaking, these are not "serogroup B" vaccines, as they do not aim to produce antibodies to the group B antigen: it would be more accurate to describe them as serogroup-independent vaccines, as they employ different antigenic components of the organism; indeed, some of the antigens are common to different Neisseria species. A vaccine for serogroup B was developed in Cuba in response to a large outbreak of meningitis B during the 1980s. This vaccine was based on artificially produced outer membrane vesicles of the bacterium. The VA-MENGOC-BC vaccine proved safe and effective in randomized double-blind studies, but it was granted a licence only for research purposes in the United States, as political differences limited cooperation between the two countries. Due to a similarly high prevalence of B-serotype meningitis in Norway between 1974 and 1988, Norwegian health authorities developed a vaccine specifically designed for Norwegian children and adolescents. Clinical trials were discontinued after the vaccine was shown to cover only slightly more than 50% of all cases. Furthermore, lawsuits for damages were filed against the State of Norway by persons affected by serious adverse reactions. Information that the health authorities obtained during the vaccine development was subsequently passed on to Chiron (now GlaxoSmithKline), which developed a similar vaccine, MeNZB, for New Zealand. A MenB vaccine was approved for use in Europe in January 2013. Following a positive recommendation from the European Union's Committee for Medicinal Products for Human Use, Bexsero, produced by Novartis, received a licence from the European Commission. However, deployment in individual EU member countries still depends on decisions by national governments.
In July 2013, the United Kingdom's Joint Committee on Vaccination and Immunisation (JCVI) issued an interim position statement recommending against the adoption of Bexsero as part of a routine meningococcal B immunisation program, on the grounds of cost-effectiveness. This decision was reversed in favor of Bexsero vaccination in March 2014. In March 2015 the UK government announced that it had reached agreement with GlaxoSmithKline, which had taken over Novartis's vaccines business, and that Bexsero would be introduced into the UK routine immunization schedule later in 2015. In November 2013, in response to an outbreak of B-serotype meningitis on the campus of Princeton University, the acting head of the Centers for Disease Control and Prevention (CDC) meningitis and vaccine-preventable diseases branch told NBC News that the agency had authorized emergency importation of Bexsero to stop the outbreak. Bexsero was subsequently approved by the FDA in February 2015 for use in individuals 10 through 25 years of age. In October 2014, Trumenba, a serogroup B vaccine produced by Pfizer, was approved by the FDA for use in individuals 10 through 25 years of age. === Serogroup X === The occurrence of serogroup X has been reported in North America, Europe, Australia, and West Africa. Until recently, there was no vaccine to protect against serogroup X N. meningitidis disease. However, a new pentavalent vaccine that protects against serogroups A, C, W, X and Y (MenFive; Serum Institute of India) has become available and was recommended by the WHO in 2023 for use in endemic countries in Africa. == Side effects == Common side effects include pain and redness around the site of injection (up to 50% of recipients). A small percentage of people develop a mild fever. A small proportion of people develop a severe allergic reaction. In 2016, Health Canada warned of an increased risk of anemia or hemolysis in people treated with eculizumab (Soliris); the highest risk was when individuals "received a dose of Soliris within 2 weeks after being vaccinated with Bexsero". Despite initial concerns about Guillain-Barré syndrome, subsequent studies in 2012 showed no increased risk of GBS after meningococcal conjugate vaccination. == Travel requirements == Travellers who wish to enter or leave certain countries or territories must be vaccinated against meningococcal meningitis, preferably 10–14 days before crossing the border, and be able to present a vaccination record/certificate at the border checks. Countries with required meningococcal vaccination for travellers include The Gambia, Indonesia, Lebanon, Libya, the Philippines and, most importantly and extensively, Saudi Arabia for Muslims visiting or working in Mecca during the Hajj or Umrah pilgrimages. For some countries in the African meningitis belt, vaccination prior to entry is not required but is highly recommended. == References == == Further reading == == External links == "Meningococcal ACWY Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). "Meningococcal B Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). Meningococcal Vaccines at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Meningococcal_vaccine
Placebo-controlled studies are a way of testing a medical therapy in which, in addition to a group of subjects that receives the treatment to be evaluated, a separate control group receives a sham "placebo" treatment which is specifically designed to have no real effect. Placebos are most commonly used in blinded trials, where subjects do not know whether they are receiving real or placebo treatment. Often, there is also a further "natural history" group that does not receive any treatment at all. The purpose of the placebo group is to account for the placebo effect, that is, effects from treatment that do not depend on the treatment itself. Such factors include knowing one is receiving a treatment, attention from health care professionals, and the expectations of a treatment's effectiveness by those running the research study. Without a placebo group to compare against, it is not possible to know whether the treatment itself had any effect. Patients frequently show improvement even when given a sham or "fake" treatment. Such intentionally inert placebo treatments can take many forms, such as a pill containing only sugar, or a medical device (such as an ultrasound machine) that is not actually turned on. Also, due to the body's natural healing ability and statistical effects such as regression to the mean, many patients will get better even when given no treatment at all. Thus, the relevant question when assessing a treatment is not "does the treatment work?" but "does the treatment work better than a placebo treatment, or no treatment at all?" More broadly, the aim of a clinical trial is to determine what treatments, delivered in what circumstances, to which patients, in what conditions, are the most effective. Therefore, the use of placebos is a standard control component of most clinical trials, which attempt to make some sort of quantitative assessment of the efficacy of medicinal drugs or treatments. Such a test or clinical trial is called a placebo-controlled study, and its control is of the negative type. A study whose control is a previously tested treatment, rather than no treatment, is called a positive-control study, because its control is of the positive type. This close association of placebo effects with randomized controlled trials (RCTs) has a profound impact on how placebo effects are understood and valued in the scientific community. == Methodology == === Blinding === Blinding is the withholding of information from participants which may influence them in some way until after the experiment is complete. Good blinding may reduce or eliminate experimental biases such as confirmation bias, the placebo effect, the observer effect, and others. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. For example, it is not possible to blind a patient to their treatment in a physical therapy intervention. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints. During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments, and must be measured and reported.
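As a concrete illustration of how blinding is typically preserved in a placebo-controlled drug trial, the following minimal sketch randomly assigns participants to the active or placebo arm and exposes only neutral kit codes to subjects and investigators, while the arm assignments are held by an independent party. It is hypothetical and not taken from this article; the participant IDs, kit labels and 1:1 allocation scheme are illustrative assumptions.

```python
# Minimal sketch of concealed random allocation for a double-blind trial.
# Hypothetical example: IDs, kit codes and the 1:1 allocation are illustrative.
import random

def allocate(participant_ids, seed=2024):
    """Randomly assign each participant to 'active' or 'placebo'."""
    rng = random.Random(seed)  # a fixed seed keeps the allocation auditable
    return {pid: rng.choice(["active", "placebo"]) for pid in participant_ids}

participants = [f"P{i:03d}" for i in range(1, 7)]

# Held by an independent statistician; unsealed only at analysis or in an emergency.
sealed_assignments = allocate(participants)

# What subjects and investigators see: neutral kit labels carrying no arm information.
kit_labels = {pid: f"KIT-{i:03d}" for i, pid in enumerate(participants, start=1)}
print(kit_labels)
```

In practice, permuted-block randomization is often used instead of independent per-participant draws so that the two arms stay close in size, but the concealment idea is the same.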
=== Natural history groups === The practice of using an additional natural history group as the trial's so-called "third arm" has emerged, and trials are conducted using three randomly selected, equally matched trial groups. Reilly wrote: "... it is necessary to remember the adjective 'random' [in the term 'random sample'] should apply to the method of drawing the sample and not to the sample itself." The active drug group (A) receives the active test drug. The placebo drug group (P) receives a placebo drug that simulates the active drug. The natural history group (NH) receives no treatment of any kind (and its condition, therefore, is allowed to run its natural course). The outcomes within each group are observed and compared with each other, allowing us to measure: the efficacy of the active drug's treatment, the difference between A and NH (i.e., A-NH); the efficacy of the active drug's active ingredient, the difference between A and P (i.e., A-P); and the magnitude of the placebo response, the difference between P and NH (i.e., P-NH). A small computational sketch of these three contrasts is given at the end of this passage. It is a matter of interpretation whether the value of P-NH indicates the efficacy of the entire treatment process or the magnitude of the "placebo response". The results of these comparisons then determine whether or not a particular drug is considered efficacious. Natural history groups yield useful information when separate groups of subjects are used in a parallel or longitudinal study design. In crossover studies, however, where each subject undergoes both treatments in succession, the natural history of the chronic condition under investigation (e.g., progression) is well understood, with the study's duration being chosen such that the condition's intensity will be more or less stable over that duration. (Wang et al. provide the example of late-phase diabetes, whose natural history is long enough that even a crossover study lasting one year is acceptable.) In these circumstances, a natural history group is not expected to yield useful information. === Indexing === In certain clinical trials of particular drugs, it may happen that the level of the "placebo responses" manifested by the trial's subjects is either considerably higher or lower (in relation to the "active" drug's effects) than one would expect from other trials of similar drugs. In these cases, with all other things being equal, it is reasonable to conclude that: the degree to which there is a considerably higher level of "placebo response" than one would expect is an index of the degree to which the drug's active ingredient is not efficacious; and the degree to which there is a considerably lower level of "placebo response" than one would expect is an index of the degree to which, in some particular way, the placebo is not simulating the active drug in an appropriate way. However, in particular cases, such as the use of cimetidine to treat ulcers, a significant level of placebo response can also prove to be an index of how much the treatment has been directed at a wrong target. == Implementation issues == === Adherence === The Coronary Drug Project was intended to study the safety and effectiveness of drugs for long-term treatment of coronary heart disease in men. Those in the placebo group who adhered to the placebo treatment (took the placebo regularly as instructed) showed nearly half the mortality rate of those who were not adherent. A similar study of women found that survival was nearly 2.5 times greater for those who adhered to their placebo.
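Returning to the three-group comparison defined above, the following is a minimal sketch of the A-NH, A-P and P-NH contrasts. The group-mean improvements used here are hypothetical and do not come from any study cited in this article.

```python
# Minimal sketch of the three contrasts in a three-arm placebo-controlled trial.
# The group means below (symptom-score improvements) are hypothetical.

def three_arm_contrasts(a: float, p: float, nh: float) -> dict:
    """Return the A-NH, A-P and P-NH differences for group mean improvements."""
    return {
        "treatment effect (A-NH)": a - nh,   # whole treatment package vs. no treatment
        "ingredient effect (A-P)": a - p,    # active ingredient vs. simulated treatment
        "placebo response (P-NH)": p - nh,   # simulated treatment vs. no treatment
    }

# Hypothetical example: mean improvements of 12, 8 and 3 points in A, P and NH.
print(three_arm_contrasts(a=12.0, p=8.0, nh=3.0))
# {'treatment effect (A-NH)': 9.0, 'ingredient effect (A-P)': 4.0, 'placebo response (P-NH)': 5.0}
```

In a real analysis these differences would be reported with uncertainty estimates rather than as bare point values, but the decomposition into A-NH, A-P and P-NH is the same.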
This apparent placebo effect among adherent placebo-takers may have occurred because: adhering to the protocol had a psychological effect, i.e. a genuine placebo effect; people who were already healthier were more able or more inclined to follow the protocol; or compliant people were more diligent and health-conscious in all aspects of their lives. === Unblinding === In some cases, a study participant may deduce or otherwise obtain information that has been blinded to them. For example, a patient taking a psychoactive drug may recognize that they are taking a drug. When this occurs, it is called unblinding. This kind of unblinding can be reduced with the use of an active placebo, which is a drug that produces effects similar to the active drug, making it more difficult for patients to determine which group they are in. An active placebo was used in the Marsh Chapel Experiment, a blinded study in which the experimental group received the psychedelic substance psilocybin while the control group received a large dose of niacin, a substance that produces noticeable physical effects intended to lead the control subjects to believe they had received the psychoactive drug. == History == === James Lind and scurvy === In 1747, James Lind (1716–1794), the ship's doctor on HMS Salisbury, conducted the first clinical trial when he investigated the efficacy of citrus fruit in cases of scurvy. He randomly divided twelve scurvy patients, whose "cases were as similar as I could have them", into six pairs. Each pair was given a different remedy. According to Lind's 1753 Treatise on the Scurvy in Three Parts Containing an Inquiry into the Nature, Causes, and Cure of the Disease, Together with a Critical and Chronological View of what has been Published of the Subject, the remedies were: one quart of cider per day, twenty-five drops of elixir vitriol (sulfuric acid) three times a day, two spoonfuls of vinegar three times a day, a course of sea-water (half a pint every day), two oranges and one lemon each day, and an electuary (a mixture containing garlic, mustard, balsam of Peru, and myrrh). He noted that the pair who had been given the oranges and lemons were so restored to health within six days of treatment that one of them returned to duty, and the other was well enough to attend the rest of the sick. === Animal magnetism === In 1784, the French Royal Commission investigated the existence of animal magnetism, comparing the effects of allegedly "magnetized" water with that of plain water. It did not examine the practices of Franz Mesmer, but examined the significantly different practices of his associate Charles d'Eslon (1739–1786). === Perkins tractors === In 1799, John Haygarth investigated the efficacy of medical instruments called "Perkins tractors" by comparing the results from dummy wooden tractors with a set of allegedly "active" metal tractors, and published his findings in a book, On the Imagination as a Cause & as a Cure of Disorders of the Body. === Flint and placebo active treatment comparison === In 1863, Austin Flint (1812–1886) conducted the first-ever trial that directly compared the efficacy of a dummy simulator with that of an active treatment, although Flint's examination did not compare the two against each other in the same trial.
Even so, this was a significant departure from the (then) customary practice of contrasting the consequences of an active treatment with what Flint described as "the natural history of [an untreated] disease". Flint's paper is the first time that the terms "placebo" or "placeboic remedy" were used to refer to a dummy simulator in a clinical trial: "... to secure the moral effect of a remedy given specially for the disease, the patients were placed on the use of a placebo which consisted, in nearly all of the cases, of the tincture of quassia, very largely diluted. This was given regularly, and became well known in my wards as the placeboic remedy for rheumatism." Flint treated 13 hospital inmates who had rheumatic fever; 11 were "acute", and 2 were "sub-acute". He then compared the results of his dummy "placeboic remedy" with the active treatment's already well-understood results. (Flint had previously tested, and reported on, the active treatment's efficacy.) There was no significant difference between the results of the active treatment and his "placeboic remedy" in 12 of the cases in terms of disease duration, duration of convalescence, number of joints affected, and emergence of complications. In the thirteenth case, Flint expressed some doubt whether the particular complications that had emerged (namely, pericarditis, endocarditis, and pneumonia) would have been prevented if that subject had been immediately given the "active treatment". === Jellinek and headache remedy ingredients === In 1946, Jellinek was asked to test whether a headache drug's overall efficacy would be reduced if certain ingredients were removed. In the post-World War II period, pharmaceutical chemicals were restricted, and one U.S. headache remedy manufacturer sold a drug composed of three ingredients, a, b, and c, of which chemical b was in particularly short supply. Jellinek set up a complex trial involving 199 subjects, all of whom had "frequent headaches". The subjects were randomly divided into four test groups. He prepared four test drugs, involving various permutations of the three drug constituents, with a placebo as a scientific control. The structure of this trial is significant because, in those days, the only time placebos were ever used was to express the efficacy or non-efficacy of a drug in terms of "how much better" the drug was than the placebo. (Note that the trial conducted by Austin Flint is an example of such a drug-efficacy-versus-placebo-efficacy trial.) The four test drugs were identical in shape, size, colour and taste: Drug A contained a, b, and c; Drug B contained a and c; Drug C contained a and b; and Drug D, a "simulator", contained "ordinary lactate". Each time a subject had a headache, they took their group's designated test drug and recorded whether their headache had been relieved (or not). Although "some subjects had only three headaches in the course of a two-week period while others had up to ten attacks in the same period", the data showed a "great consistency" across all subjects. Every two weeks the groups' drugs were changed, so that by the end of eight weeks all groups had tested all the drugs. The stipulated drug (i.e., A, B, C, or D) was taken as often as necessary over each two-week period, and the two-week sequences for the four groups were: A, B, C, D; B, A, D, C; C, D, A, B; and D, C, B, A.
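The rotation just listed is a Latin square: each group takes every drug exactly once, and in every two-week period all four drugs are in use across the groups. The following minimal sketch, which is not from Jellinek's paper and uses illustrative group labels, simply checks those two properties for the quoted sequences.

```python
# Sanity-check the four crossover sequences quoted above: every group receives
# each test drug exactly once, and every two-week period uses all four drugs
# across the groups (a Latin square). Group labels are illustrative only.
drugs = ["A", "B", "C", "D"]

sequences = {
    "Group 1": ["A", "B", "C", "D"],
    "Group 2": ["B", "A", "D", "C"],
    "Group 3": ["C", "D", "A", "B"],
    "Group 4": ["D", "C", "B", "A"],
}

for group, seq in sequences.items():
    assert sorted(seq) == drugs, f"{group} does not take every drug exactly once"

for period in range(4):
    period_drugs = sorted(seq[period] for seq in sequences.values())
    assert period_drugs == drugs, f"period {period + 1} does not use all four drugs"

print("Each drug appears once per group and once per two-week period.")
```

This balance helps keep comparisons among the drugs from being confounded with the order of administration or with changes over the eight weeks.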
Over the entire population of 199 subjects, there were 120 "subjects reacting to placebo" and 79 "subjects not reacting to placebo". On initial analysis, there was no difference between the self-reported "success rates" of Drugs A, B, and C (84%, 80%, and 80% respectively; the "success rate" of the simulating placebo Drug D was 52%), and from this it appeared that ingredient b was completely unnecessary. However, further analysis of the trial data demonstrated that ingredient b made a significant contribution to the remedy's efficacy. Examining his data, Jellinek discovered that there was a very significant difference in responses between the 120 placebo-responders and the 79 non-responders. The 79 non-responders' reports showed that, if they were considered as an entirely separate group, there was a significant difference in the "success rates" of Drugs A, B, and C: viz., 88%, 67%, and 77%, respectively. And because this significant difference in relief from the test drugs could only be attributed to the presence or absence of ingredient b, he concluded that ingredient b was essential. Two conclusions came from this trial: Jellinek, having identified 120 "placebo reactors", went on to suppose that all of them may have had either "psychological headaches" (with or without attendant "hypochondriasis") or "true physiological headaches [which were] accessible to suggestion"; thus, according to this view, the degree to which a "placebo response" is present tends to be an index of the psychogenic origins of the condition in question. It also indicated that, whilst any given placebo was inert, a responder to that particular placebo may be responding for a wide range of reasons unconnected with the drug's active ingredients; and, from this, it could be important to pre-screen potential test populations and treat those manifesting a placebo response as a special group, or remove them altogether from the test population. === MRC and randomized trials === It used to be thought that the first-ever randomized clinical trial was the trial conducted by the Medical Research Council (MRC) in 1948 into the efficacy of streptomycin in the treatment of pulmonary tuberculosis. In this trial, there were two test groups: those "treated by streptomycin and bed-rest", and those "[treated] by bed-rest alone" (the control group). What made this trial novel was that the subjects were randomly allocated to their test groups. The up-to-that-time practice was to allocate subjects alternately to each group, based on the order in which they presented for treatment. This practice could be biased, because those admitting each patient knew to which group that patient would be allocated (and so the decision to admit or not admit a specific patient might be influenced by the experimenter's knowledge of the nature of their illness, and by their knowledge of the group to which that patient would be allocated). Recently, an earlier MRC trial on the antibiotic patulin for the course of common colds has been suggested to have been the first randomized trial. Another early and until recently overlooked randomized trial was published on strophanthin in a local Finnish journal in 1946. == Declaration of Helsinki == From the time of the Hippocratic Oath, questions of the ethics of medical practice have been widely discussed, and codes of practice have been gradually developed as a response to advances in scientific medicine.
The Nuremberg Code, which was issued in August 1947 as a consequence of the so-called Doctors' Trial, which examined the human experimentation conducted by Nazi doctors during World War II, offers ten principles for legitimate medical research, including informed consent, absence of coercion, and beneficence towards experiment participants. In 1964, the World Medical Association issued the Declaration of Helsinki, which specifically limited its directives to health research by physicians and emphasized a number of additional conditions in circumstances where "medical research is combined with medical care". The significant difference between the 1947 Nuremberg Code and the 1964 Declaration of Helsinki is that the first was a set of principles suggested to the medical profession by the "Doctors' Trial" judges, whilst the second was imposed by the medical profession upon itself. Paragraph 29 of the Declaration makes specific mention of placebos: 29. The benefits, risks, burdens and effectiveness of a new method should be tested against those of the best current prophylactic, diagnostic, and therapeutic methods. This does not exclude the use of placebo, or no treatment, in studies where no proven prophylactic, diagnostic or therapeutic method exists. In 2002, the World Medical Association issued the following elaborative announcement: Note of clarification on paragraph 29 of the WMA Declaration of Helsinki: The WMA hereby reaffirms its position that extreme care must be taken in making use of a placebo-controlled trial and that in general this methodology should only be used in the absence of existing proven therapy. However, a placebo-controlled trial may be ethically acceptable, even if proven therapy is available, under the following circumstances: — Where for compelling and scientifically sound methodological reasons its use is necessary to determine the efficacy or safety of a prophylactic, diagnostic or therapeutic method; or — Where a prophylactic, diagnostic or therapeutic method is being investigated for a minor condition and the patients who receive placebo will not be subject to any additional risk of serious or irreversible harm. All other provisions of the Declaration of Helsinki must be adhered to, especially the need for appropriate ethical and scientific review. In addition to the requirement for informed consent from all drug-trial participants, it is also standard practice to inform all test subjects that they may receive the drug being tested or that they may receive the placebo. == Non-drug treatments == "Talking therapies" (such as hypnotherapy, psychotherapy, counseling, and non-drug psychiatry) are now required to have scientific validation by clinical trial. However, there is controversy over what might or might not be an appropriate placebo for such therapeutic treatments. Furthermore, there are methodological challenges, such as blinding the person providing the psychological non-drug intervention. In 2005, the Journal of Clinical Psychology devoted an issue to "The Placebo Concept in Psychotherapy", which contained a range of contributions to this question. As the abstract of one paper noted: "Unlike within the domain of medicine, in which the logic of placebos is relatively straightforward, the concept of placebo as applied to psychotherapy is fraught with both conceptual and practical problems." == See also == == References == == External links == James Lind Library, a source of historical texts on fair tests of treatments in health care.
Wikipedia/Placebo-controlled_studies
A vaccine adverse event (VAE), sometimes referred to as a vaccine injury, is an adverse event believed to have been caused by vaccination. The World Health Organization (WHO) refers to Adverse Events Following Immunization (AEFI). AEFIs can be related to the vaccine itself (product or quality defects), to the vaccination process (administration error or stress related reactions) or can occur independently from vaccination (coincidental). Most vaccine adverse events are mild. Serious injuries and deaths caused by vaccines are very rare, and the idea that severe events are common has been classed as a "common misconception about immunization" by the WHO. Some claimed vaccine injuries are not, in fact, caused by vaccines; for example, there is a subculture of advocates who attribute their children's autism to vaccine injury, despite the fact that vaccines do not cause autism. Claims of vaccine injuries appeared in litigation in the United States in the latter part of the 20th century. Some families have won substantial awards from sympathetic juries, even though many public health officials have said that the claims of injuries are unfounded. In response, several vaccine makers stopped production, threatening public health, resulting in laws being passed at several points to shield makers from liabilities stemming from vaccine injury claims. == Adverse events == According to the U.S. Centers for Disease Control and Prevention, while "any vaccine can cause side effects", most side effects are minor, primarily including sore arms or a mild fever. Unlike most medical interventions vaccines are given to healthy people, where the risk of side effects is not as easily outweighed by the benefit of treating existing disease. As such, the safety of immunization interventions is taken very seriously by the scientific community, with constant monitoring of a number of data sources looking for patterns of adverse events. As the success of immunization programs increases and the incidence of disease decreases, public attention shifts away from the risks of disease to the risk of vaccination. Concerns about immunization safety often follow a pattern. First, some investigators suggest that a medical condition of increasing prevalence or unknown cause is due to an adverse effect of vaccination. The initial study, and subsequent studies by the same investigators, have inadequate methodology, typically a poorly controlled or uncontrolled case series. A premature announcement is made of the alleged adverse effect, which resonates with individuals who have the condition and which underestimates the potential harm of not being vaccinated. The initial study is not reproduced by other investigators. Finally, it takes several years before the public regains confidence in the vaccine. Controversies in this area revolve around the question of whether the risks of adverse events following immunization outweigh the benefits of preventing infectious disease. In rare cases immunizations can cause serious adverse effects, such as gelatin measles-mumps-rubella vaccine (MMR) causing anaphylaxis, a severe allergic reaction. Allegations particularly focus on disorders claimed to be caused by the MMR vaccine and thiomersal, a preservative used in vaccines routinely given to U.S. infants prior to 2001. Current scientific evidence does not support claims of vaccines causing various disorders. The debate is complicated by misconceptions around the recording and reporting of adverse events by anti-vaccination activists. 
According to authorities, anti-vaccination websites greatly exaggerate the risk of serious adverse effects from vaccines and falsely describe conditions such as autism and shaken baby syndrome as vaccine injuries, leading to misconceptions about the safety and effectiveness of vaccines. This has had the result of stigmatizing autistic people and the parents who had them immunized. Many countries, including Canada, Germany, Japan, and the United States, have specific requirements for reporting vaccine-related adverse effects, while other countries, including Australia, France, and the United Kingdom, include vaccines under their general requirements for reporting injuries associated with medical treatments. A number of countries have programs for the compensation of injuries alleged to have been caused by a vaccination. === Febrile seizures === Febrile seizures may occur after the administration of certain vaccines, including the MMR vaccine and influenza vaccines. According to the Centers for Disease Control and Prevention, febrile seizures do not cause any harm or have any permanent effects. === Allergic reactions === It is thought that certain vaccines can, very rarely, cause anaphylaxis in yeast-sensitive individuals and children allergic to vaccine ingredients. The rate of anaphylaxis is estimated to be around one per million vaccine doses. == United States == === Vaccine Injury Compensation Program === In 1988, the National Vaccine Injury Compensation Program (VICP) went into effect to compensate individuals and families of individuals who have been injured by specified childhood vaccines. The VICP was adopted in response to an earlier scare over the pertussis portion of the DPT vaccine. These claims were later generally discredited, but some U.S. lawsuits against vaccine makers won substantial awards; most makers ceased production, and the last remaining major manufacturer threatened to do so. As of October 2019, $4.2 billion in compensation (not including attorneys' fees and costs) has been awarded. VICP uses a streamlined system for litigating vaccine injury claims under which the claimant must show that the vaccine caused the injury but, just as in litigation for injury by any other product, they are not required to establish that it was anyone's fault (i.e. negligence need not be proven). Claims that are denied can be pursued through civil lawsuits, though this is rare, and the statute creating the VICP also imposes substantial limitations on the ability to pursue such lawsuits. The VICP covers all vaccines listed on the Vaccine Injury Table, which is maintained by the Secretary of Health and Human Services. To win an award, a claimant is required to show a causal connection between an injury and one of the vaccines listed in the Vaccine Injury Table. Compensation is payable for "table" injuries, those listed in the Vaccine Injury Table, as well as "non-table" injuries, those not listed in the table. In addition, an award may only be given if the claimant's injury lasted for more than 6 months after the vaccine was given, resulted in a hospital stay and surgery, or resulted in death. Awards are based on medical expenses, lost earnings and pain and suffering (capped at $250,000). From 1988 until March 3, 2011, 5,636 claims relating to autism, and 8,119 non-autism claims, were made to the VICP. 2,620 of these claims, one autism-related, were compensated, with 4,463 non-autism and 814 autism claims dismissed; awards (including attorney's fees) totaled over $2 billion.
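As a rough arithmetic check on the VICP figures quoted above, the short sketch below recomputes the total number of claims filed and the share of resolved claims that were compensated. It is an illustration only: the grouping of compensated plus dismissed claims as "resolved" is an assumption made for the example, since the source does not say how the remaining claims were classified.

```python
# Illustrative arithmetic on the VICP claim figures quoted above (1988 - March 3, 2011).
autism_claims = 5_636
non_autism_claims = 8_119
total_claims = autism_claims + non_autism_claims   # 13,755 claims filed

compensated = 2_620            # of which one was autism-related
dismissed = 4_463 + 814        # non-autism + autism claims dismissed
resolved = compensated + dismissed                 # assumed "resolved" total: 7,897

print(f"Total claims filed: {total_claims:,}")
print(f"Compensated share of resolved claims: {compensated / resolved:.0%}")
# The remaining claims (total_claims - resolved) are presumably those still pending
# at that date; the source does not state this explicitly.
```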
The VICP also applies to claims for injuries sustained before 1988; there were 4,264 of these claims, of which 1,189 were compensated, with awards totaling $903 million. As of October 2019, $4.2 billion in compensation (not including attorneys' fees and costs) has been awarded over the life of the program. As part of the NVICP, a table has been created which lists various vaccines, side effects that might plausibly be caused by them, and the time within which the symptoms must present in order to be eligible to apply for compensation. For example, for vaccines containing tetanus toxoid (e.g., DTaP, DTP, DT, Td, or TT), anaphylaxis within four hours or brachial neuritis between two and twenty-eight days after administration may be compensated. === Countermeasures Injury Compensation Program === Established by the PREP Act, the CICP provides compensation to eligible individuals for serious physical injuries or death in the case of a pandemic, epidemic, or other major security threat requiring medical countermeasures, such as vaccines and medications. COVID-19 vaccines are covered under the program. === Vaccine Adverse Event Reporting System === The Vaccine Adverse Event Reporting System (VAERS) is a passive surveillance program administered jointly by the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC). VAERS is intended to track adverse events associated with vaccines. VAERS collects and analyzes information from reports of adverse events that occur after the administration of US licensed vaccines. VAERS has several limitations, including underreporting, unverified reports, inconsistent data quality, and inadequate data about the number of people vaccinated. Due to the program's open and accessible design and its allowance of unverified reports, incomplete VAERS data is often used in false claims regarding vaccine safety. === Vaccine Safety Datalink === The Vaccine Safety Datalink (VSD), funded by the Centers for Disease Control, is composed of databases from several organizations containing information regarding health outcomes for millions of US citizens, and is intended to enhance the assessment of vaccine injuries. It was designed to allow for such things as comparisons between vaccinated and non-vaccinated populations, and for the identification of possible groups at risk for adverse events. == United Kingdom == The Vaccine Damage Payment Act 1979 governs AEFIs in the UK, and sets up the Vaccine Damage Payment Scheme (VDPS). === Vaccine Damage Payment Scheme === Under the VDPS, it is thought that thousands of unsuccessful claims have been made. The maximum payment per claim is currently £120,000. The 'disability threshold' before payments are granted is 60%. The scheme covers vaccinations for illnesses such as tetanus, measles, tuberculosis, and meningitis C. As of 2005, the British government had paid out £3.5 million to vaccine injury patients since 1997. Until the advent of COVID-19, disabled vaccine injury patients were allowed to file a claim up to the age of 21. On 2 December 2020, the government added COVID-19 to the list of covered vaccines by regulation secondary to the 1979 Act, so that the statutory £120,000 blanket payout applies to any person provably damaged by the vaccine; by the same addition, government-approved Covax manufacturers were exempted from legal pursuit. Individuals who provide the vaccine (and thus are permitted by the government to do so) are also protected.
== Canada == Quebec has a legal process to compensate certain forms of vaccination injuries; the program was set up by statute in 1985, and its first awards were made in 1988. On 10 December 2020, the nation was made aware, via an op-ed published in the Globe and Mail by, inter alia, the Honourable Dr Jane Philpott, former cabinet member and Dean of the Faculty of Health Sciences of Queen's University, that "Canada needs to prepare for rare but serious health problems resulting from [Covid-19] vaccination". The authors observed that, outside of Quebec, "People suffering severe AEFIs are left to assume the costs of legal fees, lost wages, uninsured medical services and rehabilitation supports", and argued for a no-fault system, in which "compensation is needs-based and not punitive." They go on to write: In the context of the COVID-19 pandemic, we are concerned that, given the anticipated scale of the COVID-19 immunization campaign and new vaccine technologies employed, mass immunization may result in a small number of Canadians experiencing serious AEFIs, despite adherence to best practices. While AEFIs are possible with routine immunizations, pandemic situations are unique with respect to the speed and scale with which vaccine technologies are developed and distributed. Rare serious AEFIs may not be captured during phases of clinical trials because it may require very large numbers of the population to be immunized for AEFIs to manifest. The anticipated incidence of serious AEFIs can be estimated at 1 in one million immunizations... the potential health consequences of adverse events following immunization borne by the few will be for our collective benefit in stopping the deadly spread of the virus. Operating under this estimate, we anticipate 25 Canadians may suffer a serious health outcome following COVID-19 vaccination, or 0.1 per 100,000 doses. The authors conclude that an "equitable and fair compensation system with a transparent accountability process for monitoring potential AEFIs associated with COVID-19 immunization could increase public confidence in vaccines and promote uptake." == Germany == Germany has established a treatment and research centre for VAEs at Marburg University Hospital (UKGM). == See also == Bundaberg tragedy Vaccine hesitancy COVID-19 vaccine#Adverse events == References ==
Wikipedia/Vaccine_adverse_event
Anthrax vaccines are vaccines to prevent the livestock and human disease anthrax, caused by the bacterium Bacillus anthracis. They have had a prominent place in the history of medicine, from Pasteur's pioneering 19th-century work with cattle (the first effective bacterial vaccine and the second effective vaccine ever) to the controversial late 20th century use of a modern product to protect American troops against the use of anthrax in biological warfare. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current vaccine approved by the U.S. Food and Drug Administration (FDA) was formulated in the 1960s. Currently administered human anthrax vaccines include acellular (USA, UK) and live spore (Russia) varieties. All currently used anthrax vaccines show considerable local and general reactogenicity (erythema, induration, soreness, fever) and serious adverse reactions occur in about 1% of recipients. New third-generation vaccines being researched include recombinant live vaccines and recombinant sub-unit vaccines. == Pasteur's vaccine == In the 1870s, the French chemist Louis Pasteur (1822–1895) applied his previous method of immunising chickens against chicken cholera to anthrax, which affected cattle, and thereby aroused widespread interest in combating other diseases with the same approach. In May 1881, Pasteur performed a famous public experiment at Pouilly-le-Fort to demonstrate his concept of vaccination. He prepared two groups of 25 sheep, one goat and several cows. The animals of one group were twice injected, with an interval of 15 days, with an anthrax vaccine prepared by Pasteur; a control group was left unvaccinated. Thirty days after the first injection, both groups were injected with a culture of live anthrax bacteria. All the animals in the non-vaccinated group died, while all of the animals in the vaccinated group survived. The public reception was sensational. Pasteur publicly claimed he had made the anthrax vaccine by exposing the bacilli to oxygen. His laboratory notebooks, now in the Bibliothèque Nationale in Paris, in fact show Pasteur used the method of rival Jean-Joseph-Henri Toussaint (1847–1890), a Toulouse veterinary surgeon, to create the anthrax vaccine. This method used the oxidizing agent potassium dichromate. Pasteur's oxygen method did eventually produce a vaccine but only after he had been awarded a patent on the production of an anthrax vaccine. The notion of a weak form of a disease causing immunity to the virulent version was not new; this had been known for a long time for smallpox. Inoculation with smallpox (variolation) was known to result in far less scarring, and greatly reduced mortality, in comparison with the naturally acquired disease. The English physician Edward Jenner (1749–1823) had also discovered (1796) the process of vaccination by using cowpox to give cross-immunity to smallpox and by Pasteur's time this had generally replaced the use of actual smallpox material in inoculation. The difference between smallpox vaccination and anthrax or chicken cholera vaccination was that the weakened form of the latter two disease organisms had been "generated artificially", so a naturally weak form of the disease organism did not need to be found. This discovery revolutionized work in infectious diseases and Pasteur gave these artificially weakened diseases the generic name "vaccines", in honor of Jenner's groundbreaking discovery. 
In 1885, Pasteur produced his celebrated first vaccine for rabies by growing the virus in rabbits and then weakening it by drying the affected nerve tissue. In 1995, the centennial of Pasteur's death, The New York Times ran an article titled "Pasteur's Deception". After having thoroughly read Pasteur's lab notes, the science historian Gerald L. Geison declared Pasteur had given a misleading account of the preparation of the anthrax vaccine used in the experiment at Pouilly-le-Fort. The same year, Max Perutz published a vigorous defense of Pasteur in The New York Review of Books. == Sterne's vaccine == The Austrian-South African immunologist Max Sterne (1905–1997) developed an attenuated live animal vaccine in 1935 that is still employed, and derivatives of his strain account for almost all veterinary anthrax vaccines used in the world today. Beginning in 1934 at the Onderstepoort Veterinary Research Institute, north of Pretoria, he prepared an attenuated anthrax vaccine, using the method developed by Pasteur. A persistent problem with Pasteur's vaccine was achieving the correct balance between virulence and immunogenicity during preparation. This notoriously difficult procedure regularly produced casualties among vaccinated animals. With little help from colleagues, Sterne performed small-scale experiments which isolated the "Sterne strain" (34F2) of anthrax, which became, and remains today, the basis of most of the improved livestock anthrax vaccines throughout the world. As Sterne's vaccine is a live vaccine, vaccination during antibiotic treatment produces much reduced results and should be avoided. There is a withholding period after vaccination when animals cannot be slaughtered. No such period is defined for milk, and there are no reports of humans being infected by products from vaccinated animals. There have been a few cases in which humans have accidentally self-injected the vaccine while trying to administer it to a struggling animal. In one such case the person developed fever and meningitis, but it is unclear whether the illness was caused by the vaccine. Livestock anthrax vaccines are made in many countries around the world, most of which use 34F2 with saponin adjuvant. == Soviet/Russian anthrax vaccines == Anthrax vaccines were developed in the Soviet Union in the 1930s and were available for use in humans by 1940. A live attenuated, unencapsulated spore vaccine became widely used for humans. It was given either by scarification or by subcutaneous injection (the latter only in emergencies), and its developers claimed that it was reasonably well tolerated and showed some degree of protective efficacy against cutaneous anthrax in clinical field trials. The efficacy of the live Russian vaccine was reported to have been greater than that of either of the killed British or US anthrax vaccines (AVP and AVA, respectively) during the 1970s and '80s. The STI-1 vaccine, consisting only of freeze-dried spores, is given in a two-dose schedule, but serious side-effects restricted its use to healthy adults. It was reportedly manufactured at the George Eliava Institute of Bacteriophage, Microbiology and Virology in Tbilisi, Georgia, until 1991. As of 2008, the STI-1 vaccine remains available, and is the only human anthrax vaccine "nominally available outside national borders". China uses a different live attenuated strain for its human vaccines, designated "A16R". The A16R vaccine is given as a suspension in 50% glycerol and distilled water. A single dose is given by scarification, followed by a booster in 6 or 12 months, then annual boosters.
== British anthrax vaccines == British biochemist Harry Smith (1921–2011), working for the UK bio-weapons program at Porton Down, discovered the three anthrax toxins in 1948. This discovery was the basis of the next generation of antigenic anthrax vaccines and for modern antitoxins to anthrax. The widely used British anthrax vaccine—sometimes called Anthrax Vaccine Precipitated (AVP) to distinguish it from the similar AVA (see below)—became available for human use in 1954. This was a cell-free vaccine in distinction to the live-cell Pasteur-style vaccine previously used for veterinary purposes. It is now manufactured by Porton Biopharma Ltd, a Company owned by the UK Department of Health. AVP is administered at primovaccination in three doses with a booster dose after six months. The active ingredient is a sterile filtrate of an alum-precipitated anthrax antigen from the Sterne strain in a solution for injection. The other ingredients are aluminium potassium sulphate, sodium chloride and purified water. The preservative is thiomersal (0.005%). The vaccine is given by intramuscular injection and the primary course of four single injections (3 injections 3 weeks apart, followed by a 6-month dose) is followed by a single booster dose given once a year. During the Gulf War (1990–1991), UK military personnel were given AVP concomitantly with the pertussis vaccine as an adjuvant to improve overall immune response and efficacy. == American anthrax vaccines == The United States undertook basic research directed at producing a new anthrax vaccine during the 1950s and '60s. Research under the auspices of the US Army at Fort Detrick in Frederick, MD was led by George G. Wright and Milton Puziss. The Wright/Puziss vaccine, known as Anthrax Vaccine Adsorbed (AVA)—trade name BioThrax—was licensed in 1970 by the U.S. National Institutes of Health (NIH) and in 1972 the Food and Drug Administration (FDA) took over responsibility for vaccine licensure and oversight. AVA is produced from culture filtrates of an avirulent, nonencapsulated mutant of the B. anthracis Vollum strain known as V770-NP1-R. No living organisms are present in the vaccine which results in protective immunity after 3 to 6 doses. AVA remains the only FDA-licensed human anthrax vaccine in the United States and is produced by Emergent BioSolutions, formerly known as BioPort Corporation in Lansing, Michigan. The principal purchasers of the vaccine in the United States are the Department of Defense and Department of Health and Human Services. Ten million doses of AVA have been purchased for the U.S. Strategic National Stockpile for use in the event of a mass bioterrorist anthrax attack. In 1997, the Clinton administration initiated the Anthrax Vaccine Immunization Program (AVIP), under which active U.S. service personnel were to be immunized with the vaccine. Controversy ensued since vaccination was mandatory and GAO published reports that questioned the safety and efficacy of AVA, causing sometimes serious side effects. A Congressional report also questioned the safety and efficacy of the vaccine and challenged the legality of mandatory inoculations. Mandatory vaccinations were halted in 2004 by a formal legal injunction which made numerous substantive challenges regarding the vaccine and its safety. After reviewing extensive scientific evidence, the FDA determined in 2005 that AVA is safe and effective as licensed for the prevention of anthrax, regardless of the route of exposure. 
In 2006, the Defense Department announced the reinstatement of mandatory anthrax vaccinations for more than 200,000 troops and defense contractors. The vaccinations are required for most U.S. military units and civilian contractors assigned to homeland bioterrorism defense or deployed in Iraq, Afghanistan or South Korea. == Investigational anthrax vaccines == A number of experimental anthrax vaccines are undergoing pre-clinical testing, notably the Bacillus anthracis protective antigen—known as PA (see Anthrax toxin)—combined with various adjuvants such as aluminum hydroxide (Alhydrogel), saponin QS-21, and monophosphoryl lipid A (MPL) in squalene/lecithin/Tween 80 emulsion (SLT). One dose of each formulation has provided significant protection (> 90%) against inhalational anthrax in rhesus macaques. Omer-2 trial: Beginning in 1998 and running for eight years, a secret Israeli project known as Omer-2 tested an Israeli investigational anthrax vaccine on 716 volunteers of the Israel Defense Forces. The vaccine—given under a seven-dose schedule—was developed by the Nes Tziona Biological Institute. A group of study volunteers complained of multi-symptom illnesses allegedly associated with the vaccine and petitioned the Defense Ministry for disability benefits, but were denied. In February 2009, a petition from the volunteers to disclose a report about Omer-2 was filed with Israel's High Court against the Defense Ministry, the Israel Institute for Biological Research at Nes Tziona, its director, Avigdor Shafferman, and the IDF Medical Corps. Release of the information was requested to support further action to provide disability compensation for the volunteers. In 2012, B. anthracis isolate H9401 was obtained from a Korean patient with gastrointestinal anthrax. The goal of the Republic of Korea is to use this strain as a challenge strain to develop a recombinant vaccine against anthrax. == References == == Further reading == == External links == "Anthrax Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). January 2020.
Wikipedia/Anthrax_vaccine
Research ethics is a discipline within the study of applied ethics. Its scope ranges from general scientific integrity and misconduct to the treatment of human and animal subjects. The social responsibilities of scientists and researchers are not traditionally included and are less well defined. The discipline is most developed in medical research. Beyond the issues of falsification, fabrication, and plagiarism that arise in every scientific field, research design in human subject research and animal testing are the areas that raise ethical questions most often. The list of historic cases includes many large-scale violations and crimes against humanity such as Nazi human experimentation and the Tuskegee syphilis experiment which led to international codes of research ethics. No approach has been universally accepted, but typically cited codes are the 1947 Nuremberg Code, the 1964 Declaration of Helsinki, and the 1978 Belmont Report. Today, research ethics committees, such as those of the US, UK, and EU, govern and oversee the responsible conduct of research. One major goal is to reduce questionable research practices. Research in other fields, such as the social sciences, information technology, biotechnology, or engineering, may generate ethical concerns. == History == The list of historic cases includes many large-scale violations and crimes against humanity such as Nazi human experimentation and the Tuskegee syphilis experiment which led to international codes of research ethics. Medical ethics developed out of centuries of general malpractice and science motivated only by results. Medical ethics in turn led to today's broader understanding of bioethics. == Scientific conduct == === Scientific integrity === === Scientific misconduct === == Discipline specific ethics == Research ethics for human subject research and animal testing derives, historically, from medical ethics and, in modern times, from the much broader field of bioethics. === Medical ethics === === Bioethics === === Clinical research ethics === ==== Study participant rights ==== Participants in a clinical trial have rights which they expect to be honored, including: informed consent, shared decision-making, privacy for research participants, return of results, and the right to withdraw. ==== Vulnerable populations ==== Study participants are entitled to some degree of autonomy in deciding their participation. One measure for safeguarding this right is the use of informed consent for clinical research. Researchers refer to populations with limited autonomy as "vulnerable populations"; these are subjects who may not be able to fairly decide for themselves whether to participate. Examples of vulnerable populations include incarcerated persons, children, prisoners, soldiers, people under detention, migrants, persons exhibiting insanity or any other condition that precludes their autonomy, and, to a lesser extent, any population for which there is reason to believe that the research study could seem particularly or unfairly persuasive or misleading. Ethical problems particularly complicate the use of children in clinical trials. == Society == Consequences for the environment, for society and for future generations must be considered. == Governance == In the United Kingdom, the National Research Ethics Service is the responsible quango that forms Research Ethics Committees. In the United States, the institutional review board is the relevant ethics committee. In Canada, there are different committees for different agencies.
The committees are the Research Ethics Board (REB) as well as two panels that split their duties between responsible conduct of research (PRCR) and research ethics (PRE). The European Union only sets the guidelines for its members' ethics committees. Large international organizations like the WHO have their own ethics committees. In Canada, research ethics training is mandatory for students, professors and others who work in research. The US first legislated institutional review board procedures in the 1974 National Research Act. == Criticism == In a 2009 article published in Social Science & Medicine, several authors suggested that research ethics in a medical context is dominated by principlism. == See also == List of medical ethics cases Children in clinical research Unethical human experimentation Self-experimentation in medicine Clinical trial Academic freedom Scientific literature § Ethics Psychology § Ethics Information ethics Regulation of genetic engineering Engineering ethics Ethics of technology Philosophy of engineering Philosophy of science == References == == Sources == Laine, Heidi (31 December 2018). "Open science and codes of conduct on research integrity". Informaatiotutkimus. 37 (4). doi:10.23978/inf.77414. ISSN 1797-9129. Retrieved 2021-11-11. == Further reading == Speid, Lorna (2010). Clinical Trials: What Patients and Healthy Volunteers Need to Know. Oxford: Oxford University Press. ISBN 978-0-19-973416-0. The Oxford Textbook of Clinical Research Ethics, Ezekiel Emanuel, Christine Grady, Robert Crouch, Reidar Lie, Franklin Miller, David Wendler, Oxford University Press, 2008 == External links == List of research participant rights from Harvard School of Public Health
Wikipedia/Clinical_research_ethics
DTaP-IPV-HepB vaccine is a combination vaccine whose generic name is diphtheria and tetanus toxoids and acellular pertussis adsorbed, hepatitis B (recombinant) and inactivated polio vaccine or DTaP-IPV-Hep B. It protects against the infectious diseases diphtheria, tetanus, pertussis, poliomyelitis, and hepatitis B. A branded formulation is marketed in the U.S. as Pediarix by GlaxoSmithKline. == DTaP == The DTaP portion of the vaccine protects against three bacterial infections: diphtheria, tetanus, and pertussis (whooping cough). Diphtheria is a bacterial infection that causes problems with breathing, heart failure, paralysis, and in some cases death. It is spread via human-to-human contact. Tetanus is spread via open cuts or wounds in the body. It can lead to stiffening of the muscles, which can result in difficulty breathing. Pertussis, also known as whooping cough, is the "aP" portion of the DTaP vaccine. Like diphtheria, it is spread via human-to-human contact. With the vaccine, children can build up a supply of antibodies that prevent infection. In general, the DTaP vaccine is only administered to children ages 7 and younger. == IPV == The IPV portion of the DTaP-IPV-HepB vaccine protects against poliomyelitis, otherwise known as polio. IPV stands for inactivated poliovirus vaccine, which means that it does not use a live strain of the polio virus and cannot result in polio. Polio is a life-threatening disease that can cause paralysis, poor muscle function that weakens the ability to breathe, and brain problems. Since 2016, the United States has required all polio vaccines administered to be IPV and not OPV, to eliminate the use of live polio virus. == HepB == The HepB portion of the vaccine protects against hepatitis B. Hepatitis B is a virus that can be spread from mother to child if the mother is infected with hepatitis B, so most doctors recommend that infants be vaccinated. Most individuals infected with hepatitis B are asymptomatic. When symptoms do occur, they include flu-like symptoms, diarrhea, and jaundice. Hepatitis B can be either acute or chronic and can ultimately lead to damage of the liver. == Uses == The main reason for the use of combination vaccines is that they require fewer shots. Instead of having a child receive separate shots for each disease they need protection from, scientists were able to create vaccines, like MMR and DTaP-IPV-HepB, that protect against several diseases at a time. Another reason is that with the IPV (inactivated poliovirus vaccine) portion of the DTaP-IPV-HepB vaccine, children no longer have to take the oral vaccine (OPV) that was administered starting in the 1950s. Although the oral vaccine helped eliminate polio in several countries and is still used in some countries today, OPV contains live polio virus and can still result in individuals getting polio. Combination vaccines are also more cost-effective and make it more likely for children to receive vaccinations. On its own, the DTaP vaccine is administered in five doses. However, when the DTaP vaccine is administered through a DTaP-IPV-HepB combination vaccine like Pediarix, it only has to be administered in three doses. == Formulations == In general, the DTaP-IPV-HepB vaccine is recommended to be administered in three doses around 8, 12, and 16 weeks old. A doctor can advise on the vaccine schedule that is best for a particular child. There are several common DTaP combination vaccines: Pediarix, Kinrix, and Pentacel.
Pediarix combines DTaP-IPV-HepB and Pentacel combines DTaP-IPV-Hib (Haemophilus influenzae type b); however, Kinrix only combines DTaP-IPV, which leaves out HepB and Hib. Therefore, Pediarix and Pentacel are more commonly used because they protect against five rather than four diseases in each dose. For protection against diphtheria, tetanus, pertussis, polio, and hepatitis B, Pediarix is the recommended formulation. === Pediarix === Pediarix is a vaccine that protects against diphtheria, tetanus, pertussis, hepatitis B, and polio. This vaccine is FDA-approved to be administered to infants in three doses between ages six weeks and six years. Pediarix should not be injected into any child seven years old or older. However, it is recommended that the immunizations be done at months two, four, and six. The wide age range between six weeks and six years allows children who fall behind in their vaccinations to still have the opportunity to be vaccinated. From the moment of birth, babies can become infected with these life-threatening diseases, which is why this vaccine is recommended to be given so early on. The three-dose Pediarix series has been given to over 8,088 infants. Each dose is 0.5 mL and is given via intramuscular injection. For children aged one and younger, the vaccine is injected into the thigh, while for children older than one, it is injected into the deltoid muscle of the arm. Because the Pediarix vaccine includes HepB, it is important to note the mother's HBsAg status. Pediarix is recommended for infants whose mothers are HBsAg-negative; however, in 2003 it was approved that children whose mothers are HBsAg-positive can also receive the Pediarix immunization. Looking at overall completed vaccine records, Pediarix completes the number of HepB doses that an individual needs to be protected. However, boosters are still needed for DTaP and IPV vaccines after the three doses of Pediarix. == DTaP-IPV-HepB virus activity == As of 2021, there were 1,609 cases of pertussis in the United States. The majority of cases were found among 6–11-month-old children. == References ==
Wikipedia/DTaP-IPV-HepB_vaccine
The Bacillus Calmette–Guérin (BCG) vaccine is a vaccine primarily used against tuberculosis (TB). It is named after its inventors Albert Calmette and Camille Guérin. In countries where tuberculosis or leprosy is common, one dose is recommended in healthy babies as soon after birth as possible. In areas where tuberculosis is not common, only children at high risk are typically immunized, while suspected cases of tuberculosis are individually tested for and treated. Adults who do not have tuberculosis and have not been previously immunized, but are frequently exposed, may be immunized, as well. BCG also has some effectiveness against Buruli ulcer infection and other nontuberculous mycobacterial infections. Additionally, it is sometimes used as part of the treatment of bladder cancer. Rates of protection against tuberculosis infection vary widely and protection lasts up to 20 years. Among children, it prevents about 20% from getting infected and among those who do get infected, it protects half from developing disease. The vaccine is injected into the skin. No evidence shows that additional doses are beneficial. Serious side effects are rare. Redness, swelling, and mild pain often occur at the injection site. A small ulcer may also form with some scarring after healing. Side effects are more common and potentially more severe in those with immunosuppression. Although no harmful effects on the fetus have been observed, there is insufficient evidence about the safety of BCG vaccination during pregnancy. Therefore, the vaccine is not recommended for use during pregnancy. The vaccine was originally developed from Mycobacterium bovis, which is commonly found in cattle. While it has been weakened, it is still live. The BCG vaccine was first used medically in 1921. It is on the World Health Organization's List of Essential Medicines. As of 2004, the vaccine is given to about 100 million children per year globally. However, it is not commonly administered in the United States. == Medical uses == === Tuberculosis === The main use of BCG is for vaccination against tuberculosis. BCG vaccine can be administered after birth intradermally. BCG vaccination can cause a false positive Mantoux test. The most controversial aspect of BCG is the variable efficacy found in different clinical trials, which appears to depend on geography. Trials in the UK consistently show a 60 to 80% protective effect. Still, those trials conducted elsewhere have shown no protective effect, and efficacy appears to fall the closer one gets to the equator. A 1994 systematic review found that BCG reduces the risk of getting tuberculosis by about 50%. Differences in effectiveness depend on region, due to factors such as genetic differences in the populations, changes in environment, exposure to other bacterial infections, and conditions in the laboratory where the vaccine is grown, including genetic differences between the strains being cultured and the choice of growth medium. A systematic review and meta-analysis conducted in 2014 demonstrated that the BCG vaccine reduced infections by 19–27% and reduced progression to active tuberculosis by 71%. The studies included in this review were limited to those that used interferon gamma release assay. The duration of protection of BCG is not clearly known. In those studies showing a protective effect, the data are inconsistent. 
The MRC study showed protection waned to 59% after 15 years and to zero after 20 years; however, a study looking at Native Americans immunized in the 1930s found evidence of protection even 60 years after immunization, with only slight waning in efficacy. BCG seems to have its greatest effect in preventing miliary tuberculosis or tuberculous meningitis, so it is still extensively used even in countries where efficacy against pulmonary tuberculosis is negligible. The 100th anniversary of the BCG vaccine was in 2021. It remains the only vaccine licensed against tuberculosis, which is an ongoing pandemic. Tuberculosis elimination is a goal of the World Health Organization (WHO). The development of new vaccines with greater efficacy against adult pulmonary tuberculosis may be needed to make substantial progress. ==== Efficacy ==== Several possible reasons for the variable efficacy of BCG in different countries have been proposed. None has been proven, some have been disproved, and none can explain the lack of efficacy in both low tuberculosis-burden countries (e.g., the US) and high tuberculosis-burden countries (e.g., India). The reasons for variable efficacy have been discussed at length in a WHO document on BCG. Genetic variation in BCG strains: Genetic variation in the BCG strains may explain the variable efficacy reported in different trials. Genetic variation in populations: Differences in the genetic makeup of different populations may explain the difference in efficacy. The Birmingham BCG trial was published in 1988. The trial, based in Birmingham, United Kingdom, examined children born to families who originated from the Indian subcontinent (where vaccine efficacy had previously been shown to be zero). The trial showed a 64% protective effect, similar to the figure from other UK trials, thus arguing against the genetic variation hypothesis. Interference by nontuberculous mycobacteria: Exposure to environmental mycobacteria (especially Mycobacterium avium, Mycobacterium marinum and Mycobacterium intracellulare) results in a nonspecific immune response against mycobacteria. Administering BCG to someone with a nonspecific immune response against mycobacteria does not augment the response. BCG will, therefore, appear not to be efficacious because that person already has a level of immunity and BCG is not adding to that immunity. This effect is called masking because the effect of BCG is masked by environmental mycobacteria. Clinical evidence for this effect was found in a series of studies performed in parallel in adolescent school children in the UK and Malawi. In these studies, the UK school children had a low baseline cellular immunity to mycobacteria, which was increased by BCG; in contrast, the Malawi school children had a high baseline cellular immunity to mycobacteria, and this was not significantly increased by BCG. Whether this natural immune response is protective is not known. An alternative explanation is suggested by mouse studies; immunity against mycobacteria stops BCG from replicating and so stops it from producing an immune response. This is called the block hypothesis. Interference by concurrent parasitic infection: In another hypothesis, simultaneous parasitic infection, such as helminthiasis, changes the immune response to BCG, making it less effective. As a Th1 response is required for an effective immune response to tuberculous infection, concurrent infection with various parasites produces a simultaneous Th2 response, which blunts the effect of BCG.
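The protective-effect percentages quoted in this section (for example, the 64% figure from the Birmingham trial) are conventionally expressed as one minus the relative risk of disease in vaccinated versus unvaccinated groups. The sketch below shows that calculation; the cohort sizes and case counts are invented for illustration and are not taken from any of the trials discussed here.

```python
# Minimal sketch: protective efficacy as 1 - relative risk.
# The numbers below are hypothetical, chosen only so the result comes out
# near the 64% figure mentioned above.

def protective_efficacy(cases_vaccinated, n_vaccinated, cases_unvaccinated, n_unvaccinated):
    """Efficacy = 1 - (attack rate in vaccinated / attack rate in unvaccinated)."""
    rate_vaccinated = cases_vaccinated / n_vaccinated
    rate_unvaccinated = cases_unvaccinated / n_unvaccinated
    return 1 - rate_vaccinated / rate_unvaccinated

efficacy = protective_efficacy(cases_vaccinated=9, n_vaccinated=5_000,
                               cases_unvaccinated=25, n_unvaccinated=5_000)
print(f"Protective effect: {efficacy:.0%}")   # -> 64%
```

On this scale, 0% means the attack rates in the two groups are equal, which is how the null results reported from trials near the equator are expressed.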
=== Mycobacteria === BCG has protective effects against some nontuberculous mycobacteria. Leprosy: BCG has a protective effect against leprosy in the range of 20 to 80%. Buruli ulcer: BCG may protect against or delay the onset of Buruli ulcer. === Cancer === BCG has been one of the most successful immunotherapies. BCG vaccine has been the "standard of care for patients with bladder cancer (NMIBC)" since 1977. By 2014, more than eight different agents or strains considered biosimilar were in use to treat nonmuscle-invasive bladder cancer. Several cancer vaccines use BCG as an additive to provide an initial stimulation of the person's immune system. BCG is used in the treatment of superficial forms of bladder cancer. Since the late 1970s, evidence has become available that the instillation of BCG into the bladder is an effective form of immunotherapy in this disease. While the mechanism is unclear, it appears a local immune reaction is mounted against the tumor. Immunotherapy with BCG prevents recurrence in up to 67% of cases of superficial bladder cancer. BCG has been evaluated in many studies as a therapy for colorectal cancer. The US biotech company Vaccinogen is evaluating BCG as an adjuvant to autologous tumour cells used as a cancer vaccine in stage II colon cancer. == Method of administration == A tuberculin skin test is usually carried out before administering the BCG vaccine. A reactive tuberculin skin test is a contraindication to BCG due to the risk of severe local inflammation and scarring; it does not indicate immunity. BCG is also contraindicated in certain people who have IL-12 receptor pathway defects. BCG is given as a single intradermal injection at the insertion of the deltoid. If BCG is accidentally given subcutaneously, a local abscess (a "BCG-oma") may form that can sometimes ulcerate and may require immediate treatment with antibiotics; without treatment, the infection could spread and cause severe damage to vital organs. An abscess is not always associated with incorrect administration, and it is one of the more common complications that can occur with the vaccination. Numerous medical studies on the treatment of these abscesses with antibiotics have been done with varying results, but the consensus is that once pus is aspirated and analysed, provided no unusual bacilli are present, the abscess will generally heal on its own in a matter of weeks. The characteristic raised scar that BCG immunization leaves is often used as proof of prior immunization. This scar must be distinguished from that of smallpox vaccination, which it may resemble. When given for bladder cancer, the vaccine is not injected through the skin but is instilled into the bladder through the urethra using a soft catheter. == Adverse effects == BCG immunization generally causes some pain and scarring at the site of injection. The main adverse effects are keloids—large, raised scars. The insertion of the deltoid muscle is the site most frequently used because the local complication rate is smallest when that site is used. Nonetheless, the buttock is an alternative site of administration because it provides better cosmetic outcomes. BCG vaccine should be given intradermally. If given subcutaneously, it may induce local infection and spread to the regional lymph nodes, causing either suppurative (production of pus) or nonsuppurative lymphadenitis. Conservative management is usually adequate for nonsuppurative lymphadenitis. If suppuration occurs, it may need needle aspiration.
For unresolved suppuration, surgical excision may be required. Evidence for the treatment of these complications is scarce. Uncommonly, breast and gluteal abscesses can occur due to haematogenous (carried by the blood) and lymphangiomatous spread. Regional bone infection (BCG osteomyelitis or osteitis) and disseminated BCG infection are rare complications of BCG vaccination, but potentially life-threatening. Systemic antituberculous therapy may be helpful in severe complications. When BCG is used for bladder cancer, around 2.9% of treated patients discontinue immunotherapy due to a genitourinary or systemic BCG-related infection, however while symptomatic bladder BCG infection is frequent, the involvement of other organs is very uncommon. When systemic involvement occurs, liver and lungs are the first organs to be affected (1 week [median] after the last BCG instillation). If BCG is accidentally given to an immunocompromised patient (e.g., an infant with severe combined immune deficiency), it can cause disseminated or life-threatening infection. The documented incidence of this happening is less than one per million immunizations given. In 2007, the WHO stopped recommending BCG for infants with HIV, even if the risk of exposure to tuberculosis is high, because of the risk of disseminated BCG infection (which is roughly 400 per 100,000 in that higher risk context). == Usage == The person's age and the frequency with which BCG is given have always varied from country to country. The WHO recommends childhood BCG for all countries with a high incidence of tuberculosis and/or high leprosy burden. This is a partial list of historic and active BCG practices around the globe. A complete atlas of past and present practice has been generated. As of 2022, 155 countries offer the BCG vaccine in their schedule. === Americas === Brazil introduced universal BCG immunization in 1967–1968, and the practice continues until now. According to Brazilian law, BCG is given again to professionals in the health sector and people close to patients with tuberculosis or leprosy. Canadian Indigenous communities receive the BCG vaccine, and in the province of Quebec the vaccine was offered to children until the mid-70s. Most countries in Central and South America have universal BCG immunizations, as does Mexico. The United States has never used mass immunization of BCG due to the rarity of tuberculosis in the US, relying instead on the detection and treatment of latent tuberculosis. === Europe === === Asia === China: Introduced in the 1930s. Increasingly, it became more widespread after 1949. The majority were inoculated by 1979. South Korea, Singapore, Taiwan, and Malaysia. In these countries, BCG was given at birth and again at age 12. In Malaysia and Singapore from 2001, this policy was changed to once only at birth. South Korea stopped re-vaccination in 2008. Hong Kong: BCG is given to all newborns. Japan: In Japan, BCG was introduced in 1951, given typically at age 6. From 2005 it is administered between five and eight months after birth, and no later than a child's first birthday. BCG was administered no later than the fourth birthday until 2005, and no later than six months from birth from 2005 to 2012; the schedule was changed in 2012 due to reports of osteitis side effects from vaccinations at 3–4 months. Some municipalities recommend an earlier immunization schedule. Thailand: In Thailand, the BCG vaccine is given routinely at birth. 
India and Pakistan: India and Pakistan introduced BCG mass immunization in 1948, the first countries outside Europe to do so. In 2015, millions of infants in Pakistan were denied the BCG vaccine for the first time due to a global shortage. Mongolia: All newborns are vaccinated with BCG. Previously, the vaccine was also given at ages 8 and 15, although this is no longer common practice. Philippines: BCG vaccination started in the Philippines in 1979 with the Expanded Program on Immunization. Sri Lanka: In Sri Lanka, the national policy is to give BCG vaccination to all newborn babies immediately after birth. BCG vaccination is carried out under the Expanded Programme of Immunisation (EPI). === Middle East === Israel: BCG was given to all newborns between 1955 and 1982. Iran: Iran's vaccination policy was implemented in 1984. Vaccination with Bacillus Calmette–Guérin (BCG) is among the most important tuberculosis control strategies in Iran. According to Iranian neonatal vaccination policy, BCG has been given as a single dose to children aged under 6 years, shortly after birth or at first contact with the health services. === Africa === South Africa: In South Africa, the BCG vaccine is given routinely at birth to all newborns, except those with clinically symptomatic AIDS. The vaccination site is in the right arm. Morocco: In Morocco, BCG was introduced in 1949. The policy is BCG vaccination at birth for all newborns. Kenya: In Kenya, the BCG vaccine is given routinely at birth to all newborns. === South Pacific === Australia: BCG vaccination was used between the 1950s and mid-1980s. BCG has not been part of routine vaccination since the mid-1980s. New Zealand: BCG immunisation was first introduced for 13-year-olds in 1948. Vaccination was phased out from 1963 to 1990. == Manufacture == BCG is prepared from a strain of the attenuated (virulence-reduced) live bovine tuberculosis bacillus, Mycobacterium bovis, that has lost its ability to cause disease in humans. It is specially subcultured in a culture medium, usually Middlebrook 7H9. Because the living bacilli evolve to make the best use of available nutrients, they become less well-adapted to human blood and can no longer induce disease when introduced into a human host. Still, they are similar enough to their wild ancestors to provide some immunity against human tuberculosis. The BCG vaccine can be anywhere from 0 to 80% effective in preventing tuberculosis for 15 years; however, its protective effect appears to vary according to geography and the lab in which the vaccine strain was grown. Several companies make BCG, sometimes using different genetic strains of the bacterium. This may result in different product characteristics. OncoTICE, used for bladder instillation for bladder cancer, was developed by Organon Laboratories (since acquired by Schering-Plough, and in turn acquired by Merck & Co.). A similar application is the product Onko BCG of the Polish company Biomed-Lublin, which owns the Brazilian substrain M. bovis BCG Moreau; this substrain is less reactogenic than vaccines containing other BCG strains. Pacis BCG, made from the Montréal (Institut Armand-Frappier) strain, was first marketed by Urocor in about 2002. Urocor has since been acquired by Dianon Systems. Evans Vaccines (a subsidiary of PowderJect Pharmaceuticals). Statens Serum Institut in Denmark has marketed a BCG vaccine prepared using Danish strain 1331.
The production of BCG Danish strain 1331 and its distribution was later undertaken by AJVaccines company since the ownership transfer of SSI's vaccine production business to AJ Vaccines Holding A/S which took place on 16 January 2017. Japan BCG Laboratory markets its vaccine, based on the Tokyo 172 substrain of Pasteur BCG, in 50 countries worldwide. According to a UNICEF report published in December 2015, on BCG vaccine supply security, global demand increased in 2015 from 123 to 152.2 million doses. To improve security and to [diversify] sources of affordable and flexible supply," UNICEF awarded seven new manufacturers contracts to produce BCG. Along with supply availability from existing manufacturers, and a "new WHO prequalified vaccine" the total supply will be "sufficient to meet both suppressed 2015 demand carried over to 2016, as well as total forecast demand through 2016–2018." === Supply shortage === In 2011, the Sanofi Pasteur plant flooded, causing problems with mold. The facility, located in Toronto, Ontario, Canada, produced BCG vaccine products made with substrain Connaught such as a tuberculosis vaccine and ImmuCYST, a BCG immunotherapeutic and bladder cancer drug. By April 2012 the FDA had found dozens of documented problems with sterility at the plant including mold, nesting birds and rusted electrical conduits. The resulting closure of the plant for over two years caused shortages of bladder cancer and tuberculosis vaccines. On 29 October 2014 Health Canada gave the permission for Sanofi to resume production of BCG. A 2018 analysis of the global supply concluded that the supplies are adequate to meet forecast BCG vaccine demand, but that risks of shortages remain, mainly due to dependence of 75 percent of WHO pre-qualified supply on just two suppliers. === Dried === Some BCG vaccines are freeze dried and become fine powder. Sometimes the powder is sealed with vacuum in a glass ampoule. Such a glass ampoule has to be opened slowly to prevent the airflow from blowing out the powder. Then the powder has to be diluted with saline water before injecting. == History == The history of BCG is tied to that of smallpox. By 1865 Jean Antoine Villemin had demonstrated that rabbits could be infected with tuberculosis from humans; by 1868 he had found that rabbits could be infected with tuberculosis from cows and that rabbits could be infected with tuberculosis from other rabbits. Thus, he concluded that tuberculosis was transmitted via some unidentified microorganism (or "virus", as he called it). In 1882 Robert Koch regarded human and bovine tuberculosis as identical. But in 1895, Theobald Smith presented differences between human and bovine tuberculosis, which he reported to Koch. By 1901 Koch distinguished Mycobacterium bovis from Mycobacterium tuberculosis. Following the success of vaccination in preventing smallpox, established during the 18th century, scientists thought to find a corollary in tuberculosis by drawing a parallel between bovine tuberculosis and cowpox: it was hypothesized that infection with bovine tuberculosis might protect against infection with human tuberculosis. In the late 19th century, clinical trials using M. bovis were conducted in Italy with disastrous results, because M. bovis was found to be just as virulent as M. tuberculosis. Albert Calmette, a French physician and bacteriologist, and his assistant and later colleague, Camille Guérin, a veterinarian, were working at the Institut Pasteur de Lille (Lille, France) in 1908. 
Their work included subculturing virulent strains of the tuberculosis bacillus and testing different culture media. They noted that a glycerin-bile-potato mixture grew bacilli that seemed less virulent, and changed the course of their research to see if repeated subculturing would produce a strain attenuated enough to be considered for use as a vaccine. The BCG strain was isolated after 239 subculturings over 13 years from a virulent strain on glycerine potato medium. The research continued throughout World War I until 1919, when the now-avirulent bacilli were unable to cause tuberculosis disease in research animals. Calmette and Guérin transferred to the Paris Pasteur Institute in 1919. The BCG vaccine was first used in humans in 1921. Public acceptance was slow, and the Lübeck disaster, in particular, did much to harm it. Between 1929 and 1933 in Lübeck, 251 infants were vaccinated in the first 10 days of life; 173 developed tuberculosis and 72 died. It was subsequently discovered that the BCG administered there had been contaminated with a virulent strain that was being stored in the same incubator, which led to legal action against the vaccine's manufacturers. Dr. R. G. Ferguson, working at the Fort Qu'Appelle Sanatorium in Saskatchewan, was among the pioneers in developing the practice of vaccination against tuberculosis. In Canada, more than 600 children from residential schools were used as involuntary participants in BCG vaccine trials between 1933 and 1945. In 1928, the BCG vaccine was adopted by the Health Committee of the League of Nations (predecessor to the World Health Organization (WHO)). Because of opposition, however, it only became widely used after World War II. From 1945 to 1948, relief organizations (International Tuberculosis Campaign or Joint Enterprises) vaccinated over eight million babies in Eastern Europe and prevented the predicted typical increase of tuberculosis after a major war. The BCG vaccine is very efficacious against tuberculous meningitis in the pediatric age group, but its efficacy against pulmonary tuberculosis appears variable. Some countries have removed the BCG vaccine from routine vaccination. Two countries that have never used it routinely are the United States and the Netherlands (in both countries, it is felt that having a reliable Mantoux test, and therefore being able to accurately detect active disease, is more beneficial to society than vaccinating against a relatively rare condition). Other names include "Vaccin Bilié de Calmette et Guérin" and "Bacille de Calmette et Guérin". == Research == Tentative evidence exists for a beneficial non-specific effect of BCG vaccination on overall mortality in low-income countries, and for a reduction in other health problems, including sepsis and respiratory infections, when it is given early, with greater benefit the earlier it is used. In rhesus macaques, BCG shows improved rates of protection when given intravenously, but the safety risks of this route must be evaluated before it can be translated to humans. The University of Oxford Jenner Institute is conducting a study comparing the efficacy of injected versus inhaled BCG vaccine in already-vaccinated adults. === Type 1 diabetes === As of 2017, BCG vaccine is in the early stages of being studied in type 1 diabetes (T1D). === COVID-19 === Use of the BCG vaccine may provide protection against COVID-19. However, epidemiologic observations in this respect are ambiguous. The WHO does not recommend its use for prevention as of 12 January 2021.
As of January 2021, 20 BCG trials were in various clinical stages. As of October 2022, the results were mixed. A 15-month trial of previously BCG-naive people with type 1 diabetes, who had received three BCG doses over the two years before the pandemic, showed positive results in preventing infection. On the other hand, a 5-month trial showed that re-vaccinating healthcare workers with BCG did not help prevent infection. Both of these trials were double-blind randomized controlled trials. == References == == External links == BCG Vaccine at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/BCG_vaccine
The Novavax COVID-19 vaccine, sold under the brand names Nuvaxovid and Covovax, among others, is a subunit COVID-19 vaccine developed by Novavax and the Coalition for Epidemic Preparedness Innovations. Updated versions of the vaccine have been developed to provide coverage against the Omicron variant, with different formulas for 2023–2024 (containing a recombinant spike protein from lineage XBB.1.5) and 2024–2025 (containing a recombinant spike protein from lineage JN.1). == Medical uses == The Novavax COVID‑19 vaccine is indicated for active immunization to prevent COVID‑19 caused by SARS-CoV-2. In the US, Nuvaxovid is indicated for active immunization to prevent COVID‑19 caused by SARS-CoV-2 in adults 65 years of age and older. Nuvaxovid is also indicated for individuals 12 through 64 years of age who have at least one underlying condition that puts them at high risk for severe outcomes from COVID-19. === Efficacy === In December 2021, Novavax reported that its phase III trial showed the vaccine achieved its primary endpoint of preventing infection at least seven days after the second dose. Overall efficacy against different SARS-CoV-2 variants was 90.4%, and efficacy against moderate-to-severe disease was 100%. A vaccine is generally considered effective if the estimate is ≥50% with a >30% lower limit of the 95% confidence interval. Efficacy is closely related to effectiveness, which is generally expected to slowly decrease over time. == Side effects == The most common side effects include fever, headache, nausea, muscle and joint pain, tenderness and pain at the injection site, tiredness, and feeling unwell. Additional possible side effects include anaphylaxis (severe allergic reaction), paresthesia (unusual feeling in the skin, such as tingling or a crawling sensation), hypoesthesia (decreased feeling or sensitivity, especially in the skin), and pericarditis (inflammation of lining around the heart). Myocarditis (inflammation of heart muscle cells) has also been reported after receipt of the NVX-CoV2373 vaccine. Tinnitus is a possible side effect. == Handling and administration == The vaccine requires two doses and is stable at refrigerated temperatures of 2 to 8 °C (36 to 46 °F). The second dose can be administered three to eight weeks after the first dose. The vaccine injection is administered intramuscularly. No drug interactions affecting the vaccine's efficacy have been identified when it is administered at the same time as other vaccines. == Pregnancy and breastfeeding == The vaccine is a recombinant protein, not a live virus, so it will not replicate and spread to the infant. No studies have been conducted on the efficacy and safety of the Novavax vaccine in people who are breastfeeding. == Technology == NVX-CoV2373 has been described as both a protein subunit vaccine and a virus-like particle vaccine, although the producers call it a "recombinant nanoparticle vaccine". The vaccine is produced by creating an engineered baculovirus containing a gene for a modified SARS-CoV-2 spike protein. The spike protein was modified by incorporating two proline amino acids in order to stabilize the pre-fusion form of the protein; this same 2P modification is being used in several other COVID‑19 vaccines. The baculovirus is made to infect a culture of Sf9 moth cells, which then create the spike protein and display it on their cell membranes.
The spike proteins are harvested and assembled onto a synthetic lipid nanoparticle about 50 nanometers across, each displaying up to 14 spike proteins. The formulation includes a saponin-based adjuvant named Matrix-M, which is purified from the soapbark tree, Quillaja saponaria Molina. The Matrix-M adjuvant is combined with the recombinant SARS-CoV-2 spike protein antigen to induce an immune response in the body upon vaccination. The adjuvant primarily enhances antibodies and immunity at the local site of injection and in the draining lymph nodes, and contributes to protection by rapidly activating the innate immune system. At the site of injection, the adjuvant recruits antigen-presenting cells and attracts more T cells, such as CD4+ and CD8+ T cells. The recombinant spike protein displayed on the vaccine nanoparticle binds the ACE2 (angiotensin-converting enzyme 2) receptor and is taken up by endocytosis; because the nanoparticle carries no viral genome, it cannot replicate. Upon endocytosis, the particles are digested by lysosomes and their peptides are presented on MHC class molecules, which attracts CD4+ and CD8+ T cells; this activity is further enhanced by the adjuvant component. The cascade of immune activation leads to an immediate immune response that targets the virus, as well as the creation of memory B cells specific to the viral antigen. These memory B cells enable a faster immune response upon subsequent exposure to the same antigen compared to the initial exposure. == Manufacturing == In February 2021, Novavax partnered with Takeda to manufacture the vaccine in Japan, where its COVID‑19 vaccine candidate is known as TAK-019. Novavax signed an agreement with the Serum Institute of India for mass-scale production for developing and low-income countries. In 2020 it was reported that the vaccine would be manufactured in Spain, and in November 2021 it was reported to be produced in Poland by Mabion SA, a contract development and manufacturing organisation (CDMO). As of 2021, antigens were made at Novavax's factory Novavax CZ in the Czech Republic; Novavax CZ was also the marketing authorisation holder for its EU authorization. In May 2021, the Serum Institute of India said that it had started production of the Novavax COVID‑19 vaccine candidate, branded as Covovax, in India after receiving permission from the Indian government. == History == In January 2020, Novavax announced development of a vaccine candidate, codenamed NVX-CoV2373, to establish immunity to SARS-CoV-2. Novavax's work is in competition with vaccine development efforts at dozens of other companies. In March 2020, Novavax announced a collaboration with Emergent BioSolutions for preclinical and early-stage human research on the vaccine candidate. Under the partnership, Emergent BioSolutions was supposed to manufacture the vaccine at large scale at their Baltimore facility. However, following production issues with the Johnson & Johnson and Oxford–AstraZeneca vaccines at its Baltimore plant, and to decrease the burden on the plant, Novavax subsequently partnered with a different manufacturer in a new agreement overseen by the US government. Trials have also taken place in the United Kingdom. The first human safety studies of the candidate started in May 2020 in Australia.
In July 2020, the company announced it might receive US$1.6 billion from Operation Warp Speed to expedite development of its coronavirus vaccine candidate by 2021 – if clinical trials showed the vaccine to be effective. A spokesperson for Novavax stated that the $1.6 billion was coming from a "collaboration" between the Department of Health and Human Services and the Department of Defense. In 2024, Novavax announced a deal with Sanofi in which Sanofi would take over commercialization responsibilities for NVX-CoV2373 in most countries starting in 2025. The deal also allows Sanofi to use the vaccine and Matrix-M to develop new vaccine products. An adapted version of the vaccine with the brand name Nuvaxovid XBB.1.5 is available. It contains a version of the spike protein from the Omicron XBB.1.5 subvariant of SARS-CoV-2. In October 2024, the CHMP gave a positive opinion to update the composition of Nuvaxovid to target the SARS-CoV-2 JN.1 variant of the virus that causes COVID-19, following the recommendations issued by EMA's Emergency Task Force to update COVID-19 vaccines for the 2024/2025 vaccination campaign. === Clinical trials === ==== Phase I and II ==== In May 2020, Australia's first human trials of a candidate COVID‑19 vaccine, Novavax's NVX-CoV2373, began in Melbourne. They involved about 130 volunteers aged between 18 and 59. ==== Phase III ==== In September 2020, Novavax started a phase III trial with 15,000 participants in the UK. In December 2020, Novavax started the PREVENT-19 (NCT04611802) phase III trial in the US and Mexico, funded by NIAID and BARDA. In May 2021, Novavax initiated a pediatric expansion of the phase III clinical trial, with 3,000 adolescents 12–17 years of age in up to 75 sites in the United States. ==== UK trial ==== In January 2021, Novavax reported that preliminary results from the United Kingdom trial showed that its vaccine candidate was more than 89% effective. The Thackray Museum of Medicine in Leeds hosted the largest cohort of volunteers; trials of the Novavax vaccine were conducted on 5,000 people there during 2021. In June 2021, a primary Novavax-funded study found that the vaccine had an overall efficacy of 83.4% two weeks after the first dose and 89.7% one week after the second dose. A post hoc analysis showed an efficacy of 86.3% against the B.1.1.7 (Alpha) variant and 96.4% against "non-B.1.1.7 strains", the majority of which were the "prototype strains" (original strain). ==== South Africa trial ==== In January 2021, Novavax reported that interim results from a trial in South Africa showed a lower effectiveness rate against the Beta variant (lineage B.1.351), at around 50–60%. In results reported in March and May 2021, the efficacy of the Novavax vaccine (NVX-CoV2373) was tested in a preliminary randomized, placebo-controlled study involving 2,684 participants who were negative for COVID at baseline testing. The Beta variant was the predominant variant to occur, with post-hoc analysis indicating a cross-protective vaccine efficacy of Novavax against Beta of 51.0% for HIV-negative participants. ==== US and Mexico trial ==== In June 2021, Novavax announced overall 90.4% efficacy in the phase III US and Mexico trial, which involved nearly 30,000 people 18 years of age and older. Of the total 77 COVID-19 cases found in the trial participants, 14 occurred in the vaccine group, while 63 occurred in the placebo group.
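The headline percentages in these trial readouts follow the standard vaccine-efficacy formula, VE = 1 − (attack rate in the vaccinated group ÷ attack rate in the placebo group). The minimal sketch below redoes that arithmetic for the PREVENT-19 case counts quoted above; the per-group sizes are an assumption for illustration (about 30,000 participants split roughly 2:1 between vaccine and placebo), not figures reported in this article.

```python
def vaccine_efficacy(cases_vax: int, n_vax: int, cases_placebo: int, n_placebo: int) -> float:
    """Vaccine efficacy estimated as 1 minus the attack-rate ratio (relative risk)."""
    attack_rate_vax = cases_vax / n_vax
    attack_rate_placebo = cases_placebo / n_placebo
    return 1.0 - attack_rate_vax / attack_rate_placebo

# Case counts from the PREVENT-19 readout quoted above: 14 cases in the
# vaccine group, 63 in the placebo group. The group sizes below are
# hypothetical, assuming ~30,000 participants randomized roughly 2:1.
ve = vaccine_efficacy(cases_vax=14, n_vax=20_000, cases_placebo=63, n_placebo=10_000)
print(f"Estimated efficacy: {ve:.1%}")  # ≈ 88.9%, in the ballpark of the reported 90.4%
```

The exact published figure differs slightly because it is computed from the actual per-group enrolment and follow-up time rather than these rounded assumptions.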
=== Administration === About 216,000 doses of the Novavax COVID-19 vaccine were administered in the EU/EEA from authorization to 26 June 2022. == Legal status == In February 2021, the European Medicines Agency (EMA) started a rolling review of the Novavax COVID-19 vaccine (NVX‑CoV2373). In November 2021, the EMA received an application for conditional marketing authorization. In December 2021, the European Commission granted a conditional marketing authorization across the EU, following a recommendation from the EMA, for it to be sold under the brand name Nuvaxovid. It had been authorized for use in Indonesia and the Philippines as of November 2021, in India as of December 2021, in South Korea and Australia as of January 2022, and in the United Kingdom, Canada, Taiwan, and Singapore as of February 2022. As of December 2021, it had been validated by the World Health Organization. In June 2022, a US Food and Drug Administration (FDA) advisory committee voted 21–0, with one abstention, to recommend authorization of Novavax's vaccine for use in adults. In July 2022, the FDA authorized NVX-CoV2373, under the name Novavax COVID-19 Vaccine, Adjuvanted, for emergency use as a primary immunization (not a booster) in adults, making it the fourth COVID‑19 vaccine authorized in the US. In July 2022, the US Centers for Disease Control and Prevention (CDC) recommended the Novavax COVID‑19 vaccine as a two-dose primary series for adults age 18 and older, thus endorsing the recommendation from the Advisory Committee on Immunization Practices (ACIP) regarding this vaccine. In August 2022, the FDA granted Emergency Use Authorization for the Novavax COVID‑19 vaccine in people aged 12–17 years. In August 2022, the CDC recommended the Novavax COVID‑19 vaccine for adolescents aged 12–17 years. In October 2023, the FDA amended the emergency use authorization of the Novavax COVID-19 Vaccine, Adjuvanted for use in individuals 12 years of age and older to include the Novavax COVID-19 Vaccine, Adjuvanted (2023–2024 Formula) and removed the authorization for the Novavax COVID-19 Vaccine, Adjuvanted (Original monovalent). In January 2024, it was authorized in Brazil. In August 2024, the FDA granted emergency use authorization for an updated version of the Novavax COVID-19 vaccine that includes a monovalent (single) component that corresponds to the Omicron variant JN.1 strain of SARS-CoV-2. The Novavax COVID-19 Vaccine, Adjuvanted (2023–2024 Formula) is no longer authorized for use. The FDA granted the emergency use authorization of the Novavax COVID-19 Vaccine, Adjuvanted (2024–2025 Formula) to Novavax Inc. of Gaithersburg, Maryland. In October 2024, the CHMP gave a positive opinion to update the composition of Nuvaxovid to target the SARS-CoV-2 JN.1 variant of the virus that causes COVID-19, following the recommendations issued by EMA's Emergency Task Force to update COVID-19 vaccines for the 2024/2025 vaccination campaign. == Notes == == References == == Further reading == Corum J, Zimmer C (30 December 2020). "How the Novavax Vaccine Works". The New York Times. == External links == "Nuvaxovid Safety Updates". European Medicines Agency (EMA). December 2023.
Wikipedia/Novavax_COVID-19_vaccine
Medication (also called medicament, medicine, pharmaceutical drug, medicinal product, medicinal drug or simply drug) is a drug used to diagnose, cure, treat, or prevent disease. Drug therapy (pharmacotherapy) is an important part of the medical field and relies on the science of pharmacology for continual advancement and on pharmacy for appropriate management. Drugs are classified in many ways. One of the key divisions is by level of control, which distinguishes prescription drugs (those that a pharmacist dispenses only on a medical prescription) from over-the-counter drugs (those that consumers can order for themselves). Medicines may be classified by mode of action, route of administration, biological system affected, or therapeutic effects. The World Health Organization keeps a list of essential medicines. Drug discovery and drug development are complex and expensive endeavors undertaken by pharmaceutical companies, academic scientists, and governments. As a result of this complex path from discovery to commercialization, partnering has become a standard practice for advancing drug candidates through development pipelines. Governments generally regulate what drugs can be marketed, how drugs are marketed, and in some jurisdictions, drug pricing. Controversies have arisen over drug pricing and disposal of used medications. == Definition == Medication is a medicine or a chemical compound used to treat or cure illness. According to Encyclopædia Britannica, medication is "a substance used in treating a disease or relieving pain". As defined by the National Cancer Institute, dosage forms of medication can include tablets, capsules, liquids, creams, and patches. Medications can be administered in different ways, such as by mouth, by infusion into a vein, or by drops put into the ear or eye. A medication that does not contain an active ingredient and is used in research studies is called a placebo. In Europe, the term is "medicinal product", and it is defined by EU law as: "Any substance or combination of substances presented as having properties for treating or preventing disease in human beings; or" "Any substance or combination of substances which may be used in or administered to human beings either with a view to restoring, correcting, or modifying physiological functions by exerting a pharmacological, immunological or metabolic action or to making a medical diagnosis." In the US, a "drug" is: A substance (other than food) intended to affect the structure or any function of the body. A substance intended for use as a component of a medicine but not a device or a component, part, or accessory of a device. A substance intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease. A substance recognized by an official pharmacopeia or formulary. Biological products are included within this definition and are generally covered by the same laws and regulations, but differences exist regarding their manufacturing processes (chemical process versus biological process). == Usage == Drug use among elderly Americans has been studied: in a group of 2,377 people with an average age of 71 surveyed between 2005 and 2006, 84% took at least one prescription drug, 44% took at least one over-the-counter (OTC) drug, and 52% took at least one dietary supplement; in a group of 2,245 elderly Americans (average age of 71) surveyed over the period 2010–2011, those percentages were 88%, 38%, and 64%.
== Classification == One of the key classifications is between traditional small molecule drugs, usually derived from chemical synthesis, and biological medical products, which include recombinant proteins, vaccines, blood products used therapeutically (such as IVIG), gene therapy, and cell therapy (for instance, stem cell therapies). Pharmaceuticals, drugs, or medicines are classified into various other groups besides their origin, on the basis of pharmacological properties such as mode of action, chemical properties, mode or route of administration, biological system affected, or therapeutic effects. An elaborate and widely used classification system is the Anatomical Therapeutic Chemical Classification System (ATC system). The World Health Organization keeps a list of essential medicines. A sampling of classes of medicine includes: Antipyretics: reducing fever (pyrexia/pyresis) Analgesics: reducing pain (painkillers) Antimalarial drugs: treating malaria Antibiotics: inhibiting germ growth Antiseptics: prevention of germ growth near burns, cuts, and wounds Mood stabilizers: lithium and valproate Hormone replacements: Premarin Oral contraceptives: Enovid, "biphasic" pill, and "triphasic" pill Stimulants: methylphenidate, amphetamine Tranquilizers: meprobamate, chlorpromazine, reserpine, chlordiazepoxide, diazepam, and alprazolam Statins: lovastatin, pravastatin, and simvastatin Pharmaceuticals may also be described as "specialty", independent of other classifications, which is an ill-defined class of drugs that might be difficult to administer, require special handling during administration, require patient monitoring during and immediately after administration, have particular regulatory requirements restricting their use, and are generally expensive relative to other drugs. == Types of medicines == === For the digestive system === Lower digestive tract: laxatives, antispasmodics, antidiarrhoeals, bile acid sequestrants, opioids. Upper digestive tract: antacids, reflux suppressants, antiflatulents, antidopaminergics, proton pump inhibitors (PPIs), H2-receptor antagonists, cytoprotectants, prostaglandin analogues. === For the cardiovascular system === Affecting blood pressure (antihypertensive drugs): ACE inhibitors, angiotensin receptor blockers, beta-blockers, α blockers, calcium channel blockers, thiazide diuretics, loop diuretics, aldosterone inhibitors. Coagulation: anticoagulants, heparin, antiplatelet drugs, fibrinolytics, anti-hemophilic factors, haemostatic drugs. General: β-receptor blockers ("beta blockers"), calcium channel blockers, diuretics, cardiac glycosides, antiarrhythmics, nitrates, antianginals, vasoconstrictors, vasodilators. HMG-CoA reductase inhibitors (statins) for lowering LDL cholesterol: hypolipidaemic agents. === For the central nervous system === Drugs affecting the central nervous system include psychedelics, hypnotics, anaesthetics, antipsychotics, eugeroics, antidepressants (including tricyclic antidepressants, monoamine oxidase inhibitors, lithium salts, and selective serotonin reuptake inhibitors (SSRIs)), antiemetics, anticonvulsants/antiepileptics, anxiolytics, barbiturates, movement disorder (e.g., Parkinson's disease) drugs, nootropics, stimulants (including amphetamines), benzodiazepines, cyclopyrrolones, dopamine antagonists, antihistamines, cholinergics, anticholinergics, emetics, cannabinoids, and 5-HT (serotonin) antagonists.
=== For pain === The main classes of painkillers are NSAIDs, opioids, and local anesthetics. === For consciousness (anesthetic drugs) === Some anesthetics include benzodiazepines and barbiturates. === For musculoskeletal disorders === The main categories of drugs for musculoskeletal disorders are: NSAIDs (including COX-2 selective inhibitors), muscle relaxants, neuromuscular drugs, and anticholinesterases. === For the eye === Anti-allergy: mast cell inhibitors. Anti-fungal: imidazoles, polyenes. Anti-glaucoma: adrenergic agonists, beta-blockers, carbonic anhydrase inhibitors/hyperosmotics, cholinergics, miotics, parasympathomimetics, prostaglandin agonists/prostaglandin inhibitors, nitroglycerin. Anti-inflammatory: NSAIDs, corticosteroids. Antibacterial: antibiotics, topical antibiotics, sulfa drugs, aminoglycosides, fluoroquinolones. Antiviral drugs. Diagnostic: topical anesthetics, sympathomimetics, parasympatholytics, mydriatics, cycloplegics. General: adrenergic neurone blocker, astringent. === For the ear, nose, and oropharynx === Antibiotics, sympathomimetics, antihistamines, anticholinergics, NSAIDs, corticosteroids, antiseptics, local anesthetics, antifungals, and cerumenolytics. === For the respiratory system === Bronchodilators, antitussives, mucolytics, decongestants, inhaled and systemic corticosteroids, beta2-adrenergic agonists, anticholinergics, mast cell stabilizers, leukotriene antagonists. === For endocrine problems === Androgens, antiandrogens, estrogens, gonadotropin, corticosteroids, human growth hormone, insulin, antidiabetics (sulfonylureas, biguanides/metformin, thiazolidinediones, insulin), thyroid hormones, antithyroid drugs, calcitonin, diphosphonate, vasopressin analogues. === For the reproductive system or urinary system === Antifungal, alkalinizing agents, quinolones, antibiotics, cholinergics, anticholinergics, antispasmodics, 5-alpha reductase inhibitor, selective alpha-1 blockers, sildenafils, fertility medications. === For contraception === Hormonal contraception. Ormeloxifene. Spermicide. === For obstetrics and gynecology === NSAIDs, anticholinergics, haemostatic drugs, antifibrinolytics, Hormone Replacement Therapy (HRT), bone regulators, beta-receptor agonists, follicle stimulating hormone, luteinising hormone, LHRH, gamolenic acid, gonadotropin release inhibitor, progestogen, dopamine agonists, oestrogen, prostaglandins, gonadorelin, clomiphene, tamoxifen, diethylstilbestrol. === For the skin === Emollients, anti-pruritics, antifungals, antiseptics, scabicides, pediculicides, tar products, vitamin A derivatives, vitamin D analogues, keratolytics, abrasives, systemic antibiotics, topical antibiotics, hormones, desloughing agents, exudate absorbents, fibrinolytics, proteolytics, sunscreens, antiperspirants, corticosteroids, immune modulators. === For infections and infestations === Antibiotics, antifungals, antileprotics, antituberculous drugs, antimalarials, anthelmintics, amoebicides, antivirals, antiprotozoals, probiotics, prebiotics, antitoxins, and antivenoms. === For the immune system === Vaccines, immunoglobulins, immunosuppressants, interferons, and monoclonal antibodies. === For allergic disorders === Anti-allergics, antihistamines, NSAIDs, corticosteroids. === For nutrition === Tonics, electrolytes and mineral preparations (including iron preparations and magnesium preparations), parenteral nutrition, vitamins, anti-obesity drugs, anabolic drugs, haematopoietic drugs, food product drugs.
=== For neoplastic disorders === Cytotoxic drugs, therapeutic antibodies, sex hormones, aromatase inhibitors, somatostatin inhibitors, recombinant interleukins, G-CSF, erythropoietin. === For diagnostics === Contrast media. === For euthanasia === A euthanaticum is used for euthanasia and physician-assisted suicide. Euthanasia is not permitted by law in many countries, and consequently, medicines will not be licensed for this use in those countries. == Administration == A single drug product may contain one or multiple active ingredients. Administration is the process by which a patient takes a medicine. There are three major categories of drug administration: enteral (via the human gastrointestinal tract), injection into the body, and by other routes (dermal, nasal, ophthalmic, otologic, and urogenital). Oral administration, the most common form of enteral administration, can be performed using various dosage forms, including tablets or capsules and liquids such as syrups or suspensions. Other ways to take medication include buccally (placed inside the cheek), sublingually (placed underneath the tongue), as eye or ear drops (dropped into the eye or ear), and transdermally (applied to the skin). Medicines can be administered in one dose, as a bolus. Administration frequencies are often abbreviated from Latin, such as "every 8 hours" being written Q8H, from Quaque VIII Hora. The frequency is often expressed as the number of times a drug is used per day (e.g., four times a day). It may include event-related information (e.g., 1 hour before meals, in the morning, at bedtime) or be complementary to an interval, although equivalent expressions may have different implications (e.g., every 8 hours versus 3 times a day). == Drug discovery == In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new drugs are discovered. Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery. Later, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that have a desirable therapeutic effect, in a process known as classical pharmacology. Since the sequencing of the human genome, which allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high-throughput screening of large compound libraries against isolated biological targets that are hypothesized to be disease-modifying, in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy. Even more recently, scientists have been able to understand the shape of biological molecules at the atomic level and to use that knowledge to design (see drug design) drug candidates. Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, it will begin the process of drug development prior to clinical trials. One or more of these steps may, but need not, involve computer-aided drug design.
Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with a low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity (NME) was approximately US$1.8 billion. Drug discovery is done by pharmaceutical companies, sometimes with research assistance from universities. The "final product" of drug discovery is a patent on the potential drug. The drug requires very expensive Phase I, II, and III clinical trials, and most of them fail. Small companies have a critical role, often then selling the rights to larger companies that have the resources to run the clinical trials. Drug discovery is different from drug development. Drug discovery is often considered the process of identifying a new medicine, while drug development is the process of delivering a new drug molecule into clinical practice. In its broad definition, drug development encompasses all steps from the basic research process of finding a suitable molecular target to supporting the drug's commercial launch. == Development == Drug development is the process of bringing a new drug to the market once a lead compound has been identified through the process of drug discovery. It includes pre-clinical research (microorganisms/animals) and clinical trials (on humans) and may include the step of obtaining regulatory approval to market the drug. The drug development process proceeds through the following stages. Discovery: the process starts with discovery, the identification of a new medicine. Development: chemicals extracted from natural products are used to make pills, capsules, or syrups for oral use, injections for direct infusion into the blood, or drops for the eyes or ears. Preclinical research: drugs undergo laboratory or animal testing to ensure that they can be used in humans. Clinical testing: the drug is used in people to confirm that it is safe to use. FDA review: the drug is submitted to the FDA before it is launched onto the market. FDA post-market review: the drug is reviewed and monitored by the FDA for safety once it is available to the public.
These do not require a prescription, but must be kept in the dispensary, not visible to the public, and be sold only by a pharmacist or pharmacy technician. Doctors may also prescribe prescription drugs for off-label use – purposes which the drugs were not originally approved for by the regulatory agency. The Classification of Pharmaco-Therapeutic Referrals helps guide the referral process between pharmacists and doctors. The International Narcotics Control Board of the United Nations oversees the international prohibition of certain drugs under the UN drug control treaties. It publishes a lengthy list of chemicals and plants whose trade and consumption (where applicable) are forbidden. OTC drugs are sold without restriction as they are considered safe enough that most people will not hurt themselves accidentally by taking them as instructed. Many countries, such as the United Kingdom, have a third category of "pharmacy medicines", which can be sold only in registered pharmacies by or under the supervision of a pharmacist. Medical errors include over-prescription and polypharmacy, mis-prescription, contraindication, and lack of detail in dosage and administration instructions. In 2000, the definition of a prescription error was studied using a Delphi-method conference; the conference was motivated by ambiguity about what constitutes a prescription error and the need for a uniform definition in studies. == Drug pricing == In many jurisdictions, drug prices are regulated. === United Kingdom === In the UK, the Pharmaceutical Price Regulation Scheme is intended to ensure that the National Health Service is able to purchase drugs at reasonable prices. The prices are negotiated between the Department of Health, acting with the authority of Northern Ireland and the UK Government, and the representative body of the pharmaceutical industry, the Association of the British Pharmaceutical Industry (ABPI). For 2017, the payment percentage set by the PPRS was 4.75%. === Canada === In Canada, the Patented Medicine Prices Review Board examines drug pricing and determines if a price is excessive or not. In these circumstances, drug manufacturers must submit a proposed price to the appropriate regulatory agency. Furthermore, "the International Therapeutic Class Comparison Test is responsible for comparing the National Average Transaction Price of the patented drug product under review" with prices in other countries; the countries that prices are compared with are France, Germany, Italy, Sweden, Switzerland, the United Kingdom, and the United States. === Brazil === In Brazil, prices have been regulated since 1999 through legislation under the name of Medicamento Genérico (generic drugs). === India === In India, drug prices are regulated by the National Pharmaceutical Pricing Authority. === United States === In the United States, drug costs are partially unregulated, but instead are the result of negotiations between drug companies and insurance companies. High prices have been attributed to monopolies given to manufacturers by the government. New drug development costs continue to rise as well. Despite the enormous advances in science and technology, the number of new blockbuster drugs approved by the government per billion dollars spent has halved every 9 years since 1950. == Blockbuster drug == A blockbuster drug is a drug that generates more than $1 billion in revenue for a pharmaceutical company in a single year. Cimetidine was the first drug ever to reach more than $1 billion a year in sales, thus making it the first blockbuster drug.
In the pharmaceutical industry, a blockbuster drug is one that achieves acceptance by prescribing physicians as a therapeutic standard for, most commonly, a highly prevalent chronic (rather than acute) condition. Patients often take the medicines for long periods. == History == === Prescription drug history === Antibiotics first arrived on the medical scene in 1932, thanks to Gerhard Domagk, and were dubbed the "wonder drugs". The introduction of the sulfa drugs led the mortality rate from pneumonia in the U.S. to drop from 0.2% each year to 0.05% (i.e., 1⁄4 as much) by 1939. Antibiotics inhibit the growth or the metabolic activities of bacteria and other microorganisms by a chemical substance of microbial origin. Penicillin, introduced a few years later, provided a broader spectrum of activity compared to sulfa drugs and reduced side effects. Streptomycin, found in 1942, proved to be the first drug effective against the cause of tuberculosis and also came to be the best known of a long series of important antibiotics. A second generation of antibiotics was introduced in the 1940s: aureomycin and chloramphenicol. Aureomycin was the best known of the second generation. Lithium was discovered in the 19th century for nervous disorders and its possible mood-stabilizing or prophylactic effect; it was cheap and easily produced. As lithium fell out of favor in France, valpromide came into play. This anticonvulsant was the origin of the drug that eventually created the mood stabilizer category. Valpromide had distinct psychotropic effects that were of benefit in both the treatment of acute manic states and in the maintenance treatment of manic depression illness. Psychotropics can either be sedative or stimulant; sedatives aim at damping down the extremes of behavior. Stimulants aim at restoring normality by increasing tone. Soon arose the notion of a tranquilizer, which was quite different from any sedative or stimulant. The term tranquilizer took over the notions of sedatives and became the dominant term in the West through the 1980s. In Japan, during this time, the term tranquilizer produced the notion of a psyche-stabilizer, and the term mood stabilizer vanished. Premarin (conjugated estrogens, introduced in 1942) and Prempro (a combination estrogen-progestin pill, introduced in 1995) dominated hormone replacement therapy (HRT) regimens during the 1990s. Though not designed to cure any disease, HRT is prescribed to improve quality of life and as a preventive measure, such as in treating post-menopausal symptoms. In the 1960s and early 1970s, more physicians began to prescribe estrogen for their female patients. Between 1991 and 1999, Premarin was listed as the most popular prescription and best-selling drug in America. The first oral contraceptive, Enovid, was approved by the FDA in 1960. Oral contraceptives inhibit ovulation and so prevent conception. Enovid was known to be much more effective than alternatives including the condom and the diaphragm. As early as 1960, oral contraceptives were available in several different strengths from every manufacturer. In the 1980s and 1990s, an increasing number of options arose including, most recently, a new delivery system for the oral contraceptive via a transdermal patch. In 1982, a new version of "the pill" was introduced, known as the biphasic pill. By 1985, a new triphasic pill was approved. Physicians began to think of "the pill" as an excellent means of birth control for young women.
Stimulants such as Ritalin (methylphenidate) came to be pervasive tools for behavior management and modification in young children. Ritalin was first marketed in 1955 for narcolepsy; its potential users were middle-aged people and the elderly. It was not until the 1980s, when it became associated with treating hyperactivity in children, that Ritalin came into widespread use. Medical use of methylphenidate is predominantly for symptoms of attention deficit hyperactivity disorder (ADHD). Consumption of methylphenidate in the U.S. outpaced that of all other countries between 1991 and 1999. Significant growth in consumption was also evident in Canada, New Zealand, Australia, and Norway. Currently, 85% of the world's methylphenidate is consumed in America. The first minor tranquilizer was meprobamate. Only fourteen months after it was made available, meprobamate had become the country's largest-selling prescription drug. By 1957, meprobamate had become the fastest-growing drug in history. The popularity of meprobamate paved the way for Librium and Valium, two minor tranquilizers that belonged to a new chemical class of drugs called the benzodiazepines. These were drugs that worked chiefly as anti-anxiety agents and muscle relaxants. The first benzodiazepine was Librium. Three months after it was approved, Librium had become the most prescribed tranquilizer in the nation. Three years later, Valium hit the shelves and was ten times more effective as a muscle relaxant and anticonvulsant. Valium was the most versatile of the minor tranquilizers. Later came the widespread adoption of major tranquilizers such as chlorpromazine and the drug reserpine. In 1970, sales began to decline for Valium and Librium, but sales of new and improved tranquilizers, such as Xanax, introduced in 1981 for the newly created diagnosis of panic disorder, soared. Mevacor (lovastatin) was the first and most influential statin in the American market. The 1991 launch of Pravachol (pravastatin), the second available in the United States, and the release of Zocor (simvastatin) made Mevacor no longer the only statin on the market. In 1998, Viagra was released as a treatment for erectile dysfunction. === Ancient pharmacology === Using plants and plant substances to treat all kinds of diseases and medical conditions is believed to date back to prehistoric medicine. The Kahun Gynaecological Papyrus, the oldest known medical text of any kind, dates to about 1800 BC and represents the first documented use of any kind of drug. It and other medical papyri describe Ancient Egyptian medical practices, such as using honey to treat infections and the legs of bee-eaters to treat neck pains. Ancient Babylonian medicine demonstrated the use of medication in the first half of the 2nd millennium BC. Medicinal creams and pills were employed as treatments. On the Indian subcontinent, the Atharvaveda, a sacred text of Hinduism whose core dates from the second millennium BC, although the hymns recorded in it are believed to be older, is the first Indic text dealing with medicine. It describes plant-based drugs to counter diseases. The earliest foundations of ayurveda were built on a synthesis of selected ancient herbal practices, together with a massive addition of theoretical conceptualizations, new nosologies and new therapies dating from about 400 BC onwards.
The student of Āyurveda was expected to know ten arts that were indispensable in the preparation and application of his medicines: distillation, operative skills, cooking, horticulture, metallurgy, sugar manufacture, pharmacy, analysis and separation of minerals, compounding of metals, and preparation of alkalis. The Hippocratic Oath for physicians, attributed to fifth century BC Greece, refers to the existence of "deadly drugs", and ancient Greek physicians imported drugs from Egypt and elsewhere. The pharmacopoeia De materia medica, written between 50 and 70 CE by the Greek physician Pedanius Dioscorides, was widely read for more than 1,500 years. === Medieval pharmacology === Al-Kindi's ninth century AD book De Gradibus and Ibn Sina (Avicenna)'s The Canon of Medicine cover a range of drugs known to the practice of medicine in the medieval Islamic world. Medieval medicine of Western Europe saw advances in surgery compared to earlier periods, but few truly effective drugs existed, beyond opium (found in such extremely popular drugs as the "Great Rest" of the Antidotarium Nicolai at the time) and quinine. Folklore cures and potentially poisonous metal-based compounds were popular treatments. Theodoric Borgognoni (1205–1296), one of the most significant surgeons of the medieval period, was responsible for introducing and promoting important surgical advances, including basic antiseptic practice and the use of anaesthetics. Garcia de Orta described some herbal treatments that were used. === Modern pharmacology === For most of the 19th century, drugs were not highly effective, leading Oliver Wendell Holmes Sr. to famously comment in 1842 that "if all medicines in the world were thrown into the sea, it would be all the better for mankind and all the worse for the fishes". During the First World War, Alexis Carrel and Henry Dakin developed the Carrel-Dakin method of treating wounds with irrigation using Dakin's solution, a germicide which helped prevent gangrene. In the inter-war period, the first anti-bacterial agents such as the sulpha antibiotics were developed. The Second World War saw the introduction of widespread and effective antimicrobial therapy with the development and mass production of penicillin antibiotics, made possible by the pressures of the war and the collaboration of British scientists with the American pharmaceutical industry. Medicines commonly used by the late 1920s included aspirin, codeine, and morphine for pain; digitalis, nitroglycerin, and quinine for heart disorders; and insulin for diabetes. Other drugs included antitoxins, a few biological vaccines, and a few synthetic drugs. In the 1930s, antibiotics emerged: first sulfa drugs, then penicillin and other antibiotics. Drugs increasingly became "the center of medical practice". In the 1950s, other drugs emerged, including corticosteroids for inflammation, rauvolfia alkaloids as tranquilizers and antihypertensives, antihistamines for nasal allergies, xanthines for asthma, and typical antipsychotics for psychosis. As of 2007, thousands of approved drugs have been developed. Increasingly, biotechnology is used to discover biopharmaceuticals. Recently, multi-disciplinary approaches have yielded a wealth of new data on the development of novel antibiotics and antibacterials and on the use of biological agents for antibacterial therapy. In the 1950s, new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use.
Although these drugs were often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. Governments have been heavily involved in the regulation of drug development and drug sales. In the U.S., the Elixir Sulfanilamide disaster led to the passage of the 1938 Federal Food, Drug, and Cosmetic Act, which required manufacturers to file new drugs with the FDA. The 1951 Durham-Humphrey Amendment required certain drugs to be sold by prescription. In 1962, a subsequent amendment required new drugs to be tested for efficacy and safety in clinical trials. Until the 1970s, drug prices were not a major concern for doctors and patients. As more drugs became prescribed for chronic illnesses, however, costs became burdensome, and by the 1970s nearly every U.S. state required or encouraged the substitution of generic drugs for higher-priced brand names. This also led to Medicare Part D, the U.S. drug benefit that took effect in 2006 and offers Medicare coverage for drugs. As of 2008, the United States is the leader in medical research, including pharmaceutical development. U.S. drug prices are among the highest in the world, and drug innovation is correspondingly high. In 2000, U.S.-based firms developed 29 of the 75 top-selling drugs; firms from the second-largest market, Japan, developed eight, and the United Kingdom contributed ten. France, which imposes price controls, developed three. Throughout the 1990s, outcomes were similar. == Controversies == Controversies concerning pharmaceutical drugs include patient access to drugs under development and not yet approved, pricing, and environmental issues. === Access to unapproved drugs === Governments worldwide have created provisions for granting access to drugs prior to approval for patients who have exhausted all alternative treatment options and do not match clinical trial entry criteria. Often grouped under the labels of compassionate use, expanded access, or named patient supply, these programs are governed by rules which vary by country defining access criteria, data collection, promotion, and control of drug distribution. Within the United States, pre-approval demand is generally met through treatment IND (investigational new drug) applications (INDs), or single-patient INDs. These mechanisms, which fall under the label of expanded access programs, provide access to drugs for groups of patients or individuals residing in the US. Outside the US, Named Patient Programs provide controlled, pre-approval access to drugs in response to requests by physicians on behalf of specific, or "named", patients before those medicines are licensed in the patient's home country. Through these programs, patients are able to access drugs in late-stage clinical trials or approved in other countries for a genuine, unmet medical need, before those drugs have been licensed in the patient's home country. Patients who have not been able to get access to drugs in development have organized and advocated for greater access. In the United States, ACT UP formed in the 1980s, and eventually formed its Treatment Action Group in part to pressure the US government to put more resources into discovering treatments for AIDS and then to speed release of drugs that were under development.
The Abigail Alliance was established in November 2001 by Frank Burroughs in memory of his daughter, Abigail. The Alliance seeks broader availability of investigational drugs on behalf of terminally ill patients. In 2013, BioMarin Pharmaceutical was at the center of a high-profile debate regarding expanded access of cancer patients to experimental drugs. === Access to medicines and drug pricing === Essential medicines, as defined by the World Health Organization (WHO), are "those drugs that satisfy the health care needs of the majority of the population; they should therefore be available at all times in adequate amounts and in appropriate dosage forms, at a price the community can afford." Recent studies have found that most of the medicines on the WHO essential medicines list, outside of the field of HIV drugs, are not patented in the developing world, and that lack of widespread access to these medicines arises from issues fundamental to economic development – lack of infrastructure and poverty. Médecins Sans Frontières also runs the Campaign for Access to Essential Medicines, which includes advocacy for greater resources to be devoted to currently untreatable diseases that primarily occur in the developing world. The Access to Medicine Index tracks how well pharmaceutical companies make their products available in the developing world. World Trade Organization negotiations in the 1990s, including the TRIPS Agreement and the Doha Declaration, have centered on issues at the intersection of international trade in pharmaceuticals and intellectual property rights, with developed world nations seeking strong intellectual property rights to protect investments made to develop new drugs, and developing world nations seeking to promote their generic pharmaceuticals industries and their ability to make medicine available to their people via compulsory licenses. Some have raised ethical objections specifically with respect to pharmaceutical patents and the high prices for drugs that they enable their proprietors to charge, which poor people around the world cannot afford. Critics also question the rationale that exclusive patent rights and the resulting high prices are required for pharmaceutical companies to recoup the large investments needed for research and development. One study concluded that marketing expenditures for new drugs often doubled the amount that was allocated for research and development. Other critics claim that patent settlements would be costly for consumers, the health care system, and state and federal governments because they would result in delayed access to lower-cost generic medicines. Novartis fought a protracted battle with the government of India over the patenting of its drug, Gleevec, in India, which ended up in the Supreme Court of India in a case known as Novartis v. Union of India & Others. The Supreme Court ruled narrowly against Novartis, but opponents of patenting drugs claimed it as a major victory. === Environmental issues === Pharmaceutical medications are commonly described as "ubiquitous" in nearly every type of environmental medium (e.g., lakes, rivers, streams, estuaries, seawater, and soil) worldwide. Their chemical components are typically present at relatively low concentrations in the ng/L to μg/L ranges. The primary avenue for medications reaching the environment is through the effluent of wastewater treatment plants, both from industrial plants during production and from municipal plants after consumption.
Agricultural pollution is another significant source, driven by the prevalence of antibiotic use in livestock. Scientists generally divide environmental impacts of a chemical into three primary categories: persistence, bioaccumulation, and toxicity. Since medications are inherently bio-active, most are naturally degradable in the environment; however, they are classified as "pseudopersistent" because they are constantly being replenished from their sources. These Environmentally Persistent Pharmaceutical Pollutants (EPPPs) rarely reach toxic concentrations in the environment; however, they have been known to bioaccumulate in some species. Their effects have been observed to compound gradually across food webs, rather than becoming acute, leading to their classification by the US Geological Survey as "Ecological Disrupting Compounds." == See also == == References == == External links == Drug Reference Site Directory – OpenMD Drugs & Medications Directory – Curlie European Medicines Agency NHS Medicines A–Z U.S. Food & Drug Administration: Drugs WHO Model List of Essential Medicines
Wikipedia/Pharmaceutical_drug
An adenovirus vaccine is a vaccine against adenovirus infection. According to the American CDC, "There is currently no adenovirus vaccine available to the general public." It should not be confused with the strategy of using adenovirus as a viral vector to develop vaccines for other pathogens, or as a general gene carrier. == US military == It was used by the United States military from 1971 to 1999, but was discontinued when the only manufacturer stopped production. This vaccine elicited immunity to adenovirus serotypes 4 and 7, the serotypes most often associated with acute respiratory disease. On 16 March 2011, the U.S. Food and Drug Administration approved an adenovirus vaccine manufactured by Teva Pharmaceuticals under contract to the U.S. Army. This vaccine is essentially the same as the one used from 1971 to 1999. On 24 October 2011, the military services began administering the new adenovirus vaccine to recruits during basic training. The vaccine is orally administered and consists of live (not attenuated) virus. The tablets are coated so that the virus passes through the stomach and infects the intestines, where the immune response is raised. == References == == External links == "Adenovirus Vaccine Information Statement". U.S. Centers for Disease Control and Prevention (CDC). January 2020. Adenovirus Vaccines at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Wikipedia/Adenovirus_vaccine