Mortality rate, or death rate, is a measure of the number of deaths (in general, or due to a specific cause) in a particular population, scaled to the size of that population, per unit of time. Mortality rate is typically expressed in units of deaths per 1,000 individuals per year; thus, a mortality rate of 9.5 (out of 1,000) in a population of 1,000 would mean 9.5 deaths per year in that entire population, or 0.95% of the total. It is distinct from "morbidity", which is either the prevalence or incidence of a disease, and also from the incidence rate (the number of newly appearing cases of the disease per unit of time). An important specific mortality rate measure is the crude death rate, which looks at mortality from all causes in a given time interval for a given population. As of 2020, for instance, the CIA estimated that the global crude death rate would be 7.7 deaths per 1,000 people per year. In generic form, a mortality rate is calculated as (d / p) · 10^n, where d represents the deaths from the specified cause of interest that occur within a given time period, p represents the size of the population in which the deaths occur (however this population is defined or limited), and 10^n is the conversion factor from the resulting fraction to another unit (e.g., multiplying by 10^3 to get the mortality rate per 1,000 individuals).

== Crude death rate, globally ==

The crude death rate is defined as "the mortality rate from all causes of death for a population," calculated as the "total number of deaths during a given time interval" divided by the "mid-interval population", per 1,000 or 100,000; for instance, the population of the United States was around 290,810,000 in 2003, and in that year, approximately 2,419,900 deaths occurred in total, giving a crude death (mortality) rate of 832 deaths per 100,000.
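As a minimal sketch of the generic formula just given (the function name is illustrative; the figures are the U.S. 2003 numbers cited above, not output of any statistics library):

```python
def mortality_rate(deaths, population, per=100_000):
    """Generic mortality rate (d / p) * 10^n, expressed per `per` individuals."""
    return deaths / population * per

# U.S. figures for 2003 cited above: ~2,419,900 deaths in a
# mid-interval population of ~290,810,000 gives ~832 per 100,000.
print(round(mortality_rate(2_419_900, 290_810_000)))  # 832
```

Changing `per` to 1_000 gives the same rate per thousand, matching the 10^n conversion factor in the formula.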
As of 2020, the CIA estimates the U.S. crude death rate will be 8.3 per 1,000, while it estimates that the global rate will be 7.7 per 1,000. According to the World Health Organization, the ten leading causes of death globally in 2016, for both sexes and all ages, were as follows (crude death rate per 100,000 population):

Ischaemic heart disease, 126
Stroke, 77
Chronic obstructive pulmonary disease, 41
Lower respiratory infections, 40
Alzheimer's disease and other dementias, 27
Trachea, bronchus, and lung cancers, 23
Diabetes mellitus, 21
Road injury, 19
Diarrhoeal diseases, 19
Tuberculosis, 17

Mortality rate may also be measured per thousand within an age group, that is, by how many people of a certain age die per thousand people of that age. A decrease in the mortality rate is one of the reasons for population increase. The development of medical science and other technologies has reduced mortality rates in all countries of the world for some decades. In 1990, the mortality rate of children under five years of age was 144 per thousand, but by 2015 the under-five mortality rate had fallen to 38 per thousand.

== Related measures of mortality ==

Other specific measures of mortality include: For any of these, a "sex-specific mortality rate" refers to "a mortality rate among either males or females", where the calculation involves both "numerator and denominator... limited to the one sex".

== Use in epidemiology ==

In most cases there are few if any ways to obtain exact mortality rates, so epidemiologists use estimation to predict correct mortality rates. Mortality rates are usually difficult to predict due to language barriers, health infrastructure-related issues, conflict, and other reasons. Maternal mortality poses additional challenges, especially as they pertain to stillbirths, abortions, and multiple births.
In some countries, during the 1920s, a stillbirth was defined as "a birth of at least twenty weeks' gestation in which the child shows no evidence of life after complete birth". In most countries, however, a stillbirth was defined as "the birth of a fetus, after 28 weeks of pregnancy, in which pulmonary respiration does not occur".

=== Census data and vital statistics ===

Ideally, all mortality estimation would be done using vital statistics and census data. Census data give detailed information about the population at risk of death, while vital statistics provide information about live births and deaths in the population. Often, however, either census data or vital statistics data are not available. This is common in developing countries, countries that are in conflict, areas where natural disasters have caused mass displacement, and other areas where there is a humanitarian crisis.

=== Household surveys ===

Household surveys or interviews are another way in which mortality rates are often assessed. There are several methods to estimate mortality in different segments of the population. One such example is the sisterhood method, in which researchers estimate maternal mortality by contacting women in populations of interest, asking whether they have a sister of child-bearing age (usually 15 or older), and conducting an interview or written questionnaire about possible deaths among sisters. The sisterhood method, however, does not work in cases where sisters may have died before the sister being interviewed was born. Orphanhood surveys estimate mortality by asking children about the mortality of their parents. The resulting adult mortality estimates have often been criticized as heavily biased, for several reasons. The adoption effect is one such instance, in which orphans often do not realize that they are adopted. Additionally, interviewers may not realize that an adoptive or foster parent is not the child's biological parent.
There is also the issue of parents being reported on by multiple children, while some adults have no children and thus are not counted in mortality estimates. Widowhood surveys estimate adult mortality by asking respondents questions about a deceased husband or wife. One limitation of the widowhood survey concerns divorce: people may be more likely to report that they are widowed in places where there is great social stigma around being a divorcee. Another limitation is that multiple marriages introduce biased estimates, so individuals are often asked only about their first marriage. Biases will also be significant where deaths are associated between spouses, as in countries with large AIDS epidemics.

=== Sampling ===

Sampling refers to the selection of a subset of the population of interest to efficiently gain information about the entire population. Samples should be representative of the population of interest. Cluster sampling is an approach in which each member of the population is assigned to a group (cluster), clusters are then randomly selected, and all members of the selected clusters are included in the sample. Often combined with stratification techniques (in which case it is called multistage sampling), cluster sampling is the approach most often used by epidemiologists. In areas of forced migration, however, sampling error is greater, so cluster sampling is not the ideal choice there.

== Mortality statistics ==

Causes of death vary greatly between developed and less developed countries; see also list of causes of death by rate for worldwide statistics. According to Jean Ziegler (the United Nations Special Rapporteur on the Right to Food from 2000 to March 2008), mortality due to malnutrition accounted for 58% of total mortality in 2006: "In the world, approximately 62 million people, all causes of death combined, die each year.
In 2006, more than 36 million died of hunger or diseases due to deficiencies in micronutrients". Of the roughly 150,000 people who die each day across the globe, about two thirds (100,000 per day) die of age-related causes. In industrialized nations the proportion is much higher, reaching 90%.

== Economics ==

Scholars have found a significant relationship between the low standard of living that results from low income and increased mortality rates. A low standard of living is more likely to result in malnutrition, which can make people more susceptible to disease and more likely to die from these diseases. A lower standard of living may lead to a lack of hygiene and sanitation, increased exposure to and spread of disease, and a lack of access to proper medical care and facilities. Poor health can in turn contribute to low or reduced incomes, which can create a loop known as the health-poverty trap. Indian economist and philosopher Amartya Sen has stated that mortality rates can serve as an indicator of economic success and failure. Historically, mortality rates have been adversely affected by short-term price increases: studies have shown that mortality rates increase at a rate concurrent with increases in food prices, and these effects have a greater impact on vulnerable, lower-income populations than on populations with a higher standard of living. In more recent times, higher mortality rates have been less tied to socio-economic levels within a given society and have differed more between low- and high-income countries. National income, which is directly tied to standard of living within a country, is now found to be the largest factor in mortality rates being higher in low-income countries.

=== Preventable mortality ===

These rates are especially pronounced for children under 5 years old, particularly in lower-income, developing countries.
These children have a much greater chance of dying of diseases that have become mostly preventable in higher-income parts of the world: more children die of malaria, respiratory infections, diarrhea, perinatal conditions, and measles in developing nations. Data show that after the age of 5 these preventable causes level out between high- and low-income countries.

== See also ==

== References ==

=== Sources ===

== External links ==

DeathRiskRankings: Calculates risk of dying in the next year using MicroMorts and displays risk rankings for up to 66 causes of death
Data regarding death rates by age and cause in the United States (from Data360)
Complex Emergency Database (CE-DAT): Mortality data from conflict-affected populations (Archived 2008-12-26 at the Wayback Machine)
Human Mortality Database: Historic mortality data from developed nations (Archived 2011-02-28 at the Wayback Machine)
Deaths this year
OUR WORLD IN DATA: Number of deaths per year, World
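The cluster sampling procedure described in the sampling section above (assign every member to a cluster, randomly select clusters, include all members of the selected clusters) can be sketched as follows; the population data and cluster labels are hypothetical, purely for illustration:

```python
import random

def cluster_sample(members, n_clusters, seed=None):
    """Group (member, cluster_label) pairs by label, randomly select
    n_clusters clusters, and return every member of those clusters."""
    rng = random.Random(seed)
    clusters = {}
    for member, label in members:
        clusters.setdefault(label, []).append(member)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return [m for label in chosen for m in clusters[label]]

# Hypothetical population: 100 people spread evenly over 5 villages.
people = [(i, f"village_{i % 5}") for i in range(100)]
sample = cluster_sample(people, n_clusters=2, seed=0)
print(len(sample))  # 40: all 20 members of each of the 2 selected villages
```

Note that randomness enters only at the cluster level; once a cluster is chosen, every member is included, which is what distinguishes this design from simple random sampling.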
Wikipedia/Mortality_rate
The Wittgenstein Centre for Demography and Global Human Capital (IIASA, VID/ÖAW, WU) is a research collaboration between the International Institute for Applied Systems Analysis in Laxenburg, the Vienna Institute of Demography of the Austrian Academy of Sciences, and the University of Vienna, the latter two located in Vienna. From 2011 to 2019, the Vienna University of Economics and Business (WU) was the Centre's university pillar. The Centre was founded in 2010 by demographer Wolfgang Lutz, who had won the Wittgenstein Award in the same year. The Wittgenstein-Preis, the highest Austrian science award, is given out by the Austrian Science Fund, and Lutz (the first social scientist to win it) used the 1.5 million euro prize money to establish the Centre by teaming up several existing demographic research institutions in and around Vienna, which had cooperated before but not under the umbrella of a common concern. These three pillar institutions – the World Population Program of the International Institute for Applied Systems Analysis (IIASA), the Vienna Institute of Demography of the Austrian Academy of Sciences (VID/ÖAW), and the Demography Group and the Research Institute on Human Capital and Development at the Vienna University of Economics and Business (WU) – each put a different emphasis on the field and can therefore combine their strengths in demography, human capital formation, and the analysis of the returns to healthcare and education. The Centre's objective is to provide a sound scientific basis for decision-making at various levels by better understanding the implications of changing population structures and human capital investments for human well-being from a global perspective. The Wittgenstein Centre is governed by founding director Wolfgang Lutz, Jesús Crespo Cuaresma (Director of Economic Analysis), Alexia Fürnkranz-Prskawetz (Director of Research Training) and Sergei Scherbov (Director of Demographic Analysis).
Scientific advice and guidance is ensured by an International Scientific Advisory Board chaired by Sir Partha Dasgupta. Some 60 researchers and 10 administrative staff members work at the Wittgenstein Centre in one of the three pillar institutions. Two of these have been housed under a common roof since August 2015, when VID moved from its old premises in Vienna's 4th district to a new location on the WU campus in the 2nd district, adjacent to the Vienna Prater: an additional campus building (D5) at Welthandelsplatz 2 now houses, on two levels, both the new Vienna Institute of Demography and the two relevant WU research groups next to each other, linked by the Demographenstiege (demographers' staircase). On 9 September 2015, the Centre celebrated its first five years, together with the 40th anniversaries of IIASA and VID, with a symposium on "Demography that Matters".

== Areas of research ==

The Wittgenstein Centre applies multidisciplinary research to the analysis of human capital and population dynamics, assessing the effects of these forces on long-term human well-being and focusing on the following research themes:

Human reproduction
Education policy and planning
Migration and education
Health and mortality
Cognitive ageing
Modelling human capital formation
Human capital data lab
Population dynamics and ageing
Differential disaster vulnerability
Economics of ageing and labour markets

Recent research results of the Centre's scientists, in particular on educational attainment by age and sex in 195 countries, but also on trends in fertility, mortality, migration, and educational level for the world's regions, are summarized in a 2014 Oxford University Press publication edited by Wolfgang Lutz, William P. Butz and Samir KC: World Population and Human Capital in the Twenty-First Century.
The data on which this study is based are freely available via the Wittgenstein Centre Data Explorer, which allows users to select and download global population projections broken down by country, region, sex, age, time period, and a number of other indicators (see link below).

== References ==

== External links ==

Wittgenstein Centre Website
Wittgenstein Centre Data Explorer
Wikipedia/Wittgenstein_Centre_for_Demography_and_Global_Human_Capital
In population ecology and demography, the net reproduction rate, R0, is the average number of offspring (often specifically daughters) that would be born to a female if she passed through her lifetime conforming to the age-specific fertility and mortality rates of a given year. This rate is similar to the gross reproduction rate but takes into account that some females will die before completing their childbearing years. An R0 of one means that each generation of mothers is having exactly enough daughters to replace themselves in the population. If R0 is less than one, the reproductive performance of the population is below replacement level. R0 is particularly relevant where sex ratios at birth are significantly affected by the use of reproductive technologies, or where life expectancy is low. The current (2015–20) estimate for R0 worldwide under the UN's medium variant model is 1.09 daughters per woman.

== See also ==

List of countries by net reproduction rate
Sub-replacement fertility
Total fertility rate

== References ==

== External links ==

Net reproduction rate (daughters per woman), UNdata.
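Following the definition above, R0 can be computed by summing, over ages, the probability that a woman survives to each age times the rate of daughter births at that age. The age schedule below is entirely hypothetical and serves only to illustrate the arithmetic:

```python
def net_reproduction_rate(survival, daughter_births):
    """R0: sum over age groups of P(surviving to the group) times
    the daughters born per woman in that group."""
    return sum(l * m for l, m in zip(survival, daughter_births))

# Hypothetical five-year age groups 15-19 through 45-49:
survival = [0.98, 0.97, 0.96, 0.95, 0.94, 0.93, 0.92]   # P(alive at group)
daughters = [0.10, 0.30, 0.30, 0.20, 0.10, 0.04, 0.01]  # daughters per woman
r0 = net_reproduction_rate(survival, daughters)
# R0 is below the gross reproduction rate (sum of daughter births alone)
# because mortality removes some potential mothers before childbearing ends.
print(round(r0, 3))
```

With survival probabilities all below one, R0 is necessarily smaller than the gross reproduction rate, which is exactly the distinction drawn in the text.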
Wikipedia/Net_reproduction_rate
Research design refers to the overall strategy utilized to answer research questions. A research design typically outlines the theories and models underlying a project; the research question(s) of a project; a strategy for gathering data and information; and a strategy for producing answers from the data. A strong research design yields valid answers to research questions, while a weak design yields unreliable, imprecise or irrelevant answers. What is incorporated in the design of a research study depends on the researcher's standpoint regarding the nature of knowledge (see epistemology) and reality (see ontology), often shaped by the disciplinary areas to which the researcher belongs. The design of a study defines the study type (descriptive, correlational, semi-experimental, experimental, review, meta-analytic) and sub-type (e.g., descriptive-longitudinal case study), research problem, hypotheses, independent and dependent variables, experimental design, and, if applicable, data collection methods and a statistical analysis plan. In short, a research design is a framework created to find answers to research questions.

== Design types and sub-types ==

There are many ways to classify research designs. Nonetheless, the list below offers a number of useful distinctions between possible research designs, each an arrangement or collection of conditions:

Descriptive (e.g., case-study, naturalistic observation, survey)
Correlational (e.g., case-control study, observational study)
Experimental (e.g., field experiment, controlled experiment, quasi-experiment)
Review (literature review, systematic review)
Meta-analytic (meta-analysis)

Sometimes a distinction is made between "fixed" and "flexible" designs. In some cases, these types coincide with quantitative and qualitative research designs respectively, though this need not be the case. In fixed designs, the design of the study is fixed before the main stage of data collection takes place.
Fixed designs are normally theory-driven; otherwise, it is impossible to know in advance which variables need to be controlled and measured. Often, these variables are measured quantitatively. Flexible designs allow more freedom during the data collection process. One reason for using a flexible research design can be that the variable of interest is not quantitatively measurable, such as culture. In other cases, theory might not be available before one starts the research.

=== Grouping ===

The choice of how to group participants depends on the research hypothesis and on how the participants are sampled. In a typical experimental study, there will be at least one "experimental" condition (e.g., "treatment") and one "control" condition ("no treatment"), but the appropriate method of grouping may depend on factors such as the duration of the measurement phase and participant characteristics:

Cohort study
Cross-sectional study
Cross-sequential study
Longitudinal study

== Confirmatory versus exploratory research ==

Confirmatory research tests a priori hypotheses, i.e., outcome predictions that are made before the measurement phase begins. Such a priori hypotheses are usually derived from a theory or the results of previous studies. The advantage of confirmatory research is that the result is more meaningful, in the sense that it is much harder to dismiss the result as a mere coincidence of the particular data set. The reason for this is that in confirmatory research, one ideally strives to reduce the probability of falsely reporting a coincidental result as meaningful. This probability is known as the α-level, or the probability of a type I error. Exploratory research, on the other hand, seeks to generate a posteriori hypotheses by examining a data set and looking for potential relations between variables. It is also possible to have an idea about a relation between variables but to lack knowledge of the direction and strength of the relation.
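The two error probabilities just described (α for type I, β for type II) trade off against sample size, which is why fixed designs are often planned with a power analysis. As an illustrative sketch, the standard normal-approximation formula for comparing two group means gives the per-group sample size from the desired α, power (1 − β), and expected standardized effect size; the chosen numbers are conventional defaults, not prescriptions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_means(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means:
    n = 2 * ((z_{1 - alpha/2} + z_{power}) / d)^2, where power = 1 - beta
    and d is the standardized effect size (Cohen's d)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# A medium effect (d = 0.5) at alpha = 0.05 and 80% power needs
# about 63 participants per group; smaller effects need far more.
print(sample_size_two_means(0.5))  # 63
print(sample_size_two_means(0.2))  # 393
```

Lowering α or β (i.e., demanding stricter control of either error) drives the required sample size up, which is the trade-off the text describes.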
If the researcher does not have any specific hypotheses beforehand, the study is exploratory with respect to the variables in question (although it might be confirmatory for others). The advantage of exploratory research is that it is easier to make new discoveries thanks to the less stringent methodological restrictions. Here, the researcher does not want to miss a potentially interesting relation and therefore aims to minimize the probability of rejecting a real effect or relation; this probability is sometimes referred to as β, and the associated error is of type II. In other words, if the researcher simply wants to see whether some measured variables could be related, he would want to increase the chances of finding a significant result by lowering the threshold of what is deemed significant. Sometimes, a researcher may conduct exploratory research but report it as if it had been confirmatory ('Hypothesizing After the Results are Known', HARKing; see Hypotheses suggested by the data); this is a questionable research practice bordering on fraud.

== State problems versus process problems ==

A distinction can be made between state problems and process problems. State problems aim to answer what the state of a phenomenon is at a given time, while process problems deal with the change of phenomena over time. Examples of state problems are the level of mathematical skills of sixteen-year-old children, the computer skills of the elderly, and the depression level of a person. Examples of process problems are the development of mathematical skills from puberty to adulthood, the change in computer skills as people get older, and how depression symptoms change during therapy. State problems are easier to measure than process problems: state problems require just one measurement of the phenomena of interest, while process problems always require multiple measurements.
Research designs such as repeated measurements and longitudinal studies are needed to address process problems.

== Examples of fixed designs ==

=== Experimental research designs ===

In an experimental design, the researcher actively tries to change the situation, circumstances, or experience of participants (manipulation), which may lead to a change in behavior or outcomes for the participants of the study. The researcher randomly assigns participants to different conditions, measures the variables of interest, and tries to control for confounding variables. Therefore, experiments are often highly fixed even before data collection starts. In a good experimental design, a few things are of great importance. First of all, it is necessary to think of the best way to operationalize the variables that will be measured, as well as which statistical methods would be most appropriate to answer the research question. Thus, the researcher should consider what the expectations of the study are as well as how to analyze any potential results. Finally, the researcher must think of the practical limitations, including the availability of participants as well as how representative the participants are of the target population. It is important to consider each of these factors before beginning the experiment. Additionally, many researchers employ power analysis before they conduct an experiment, in order to determine how large the sample must be to find an effect of a given size with a given design at the desired probability of making a Type I or Type II error. Experimental research designs also give the researcher the advantage of minimizing resources.

=== Non-experimental research designs ===

Non-experimental research designs do not involve a manipulation of the situation, circumstances or experience of the participants. Non-experimental research designs can be broadly classified into three categories.
First, in relational designs, a range of variables is measured. These designs are also called correlation studies because correlation data are most often used in the analysis. Since correlation does not imply causation, such studies simply identify co-movements of variables. Correlational designs are helpful in identifying the relation of one variable to another and in seeing the frequency of co-occurrence in two natural groups (see Correlation and dependence). The second type is comparative research. These designs compare two or more groups on one or more variables, such as the effect of gender on grades. The third type of non-experimental research is a longitudinal design. A longitudinal design examines variables such as performance exhibited by a group or groups over time (see Longitudinal study).

== Examples of flexible research designs ==

=== Case study ===

Famous case studies are, for example, Freud's descriptions of his patients, who were thoroughly analysed and described. Bell (1999) states that "a case study approach is particularly appropriate for individual researchers because it gives an opportunity for one aspect of a problem to be studied in some depth within a limited time scale".

=== Grounded theory study ===

Grounded theory research is a systematic research process that works to develop "a process, an action or an interaction about a substantive topic".

== See also ==

Bold hypothesis
Clinical study design
Design of experiments
Grey box completion and validation
Research proposal
Royal Commission on Animal Magnetism

== References ==
Wikipedia/Research_design
Demography is a peer-reviewed academic journal covering issues related to population and demography. It is the flagship journal of the Population Association of America and has been published by Duke University Press since 2021; it was formerly published by Springer. The editor is Sara R. Curran (University of Washington).

== History ==

The journal was established in 1964. Publication has become more frequent over the years:

1964–1965: published once a year
1966–1968: published twice a year
1969–2012: published four times a year (with the exception of 2010, which had five issues, one of which was a special supplement)
2013–present: published six times a year

== Publication model ==

Older issues of the journal are available via JSTOR and Project MUSE. While published by Springer, Demography was a hybrid open access journal, charging subscription fees for access while offering authors the option of making their work available open access by paying an article processing charge. The journal fully converted to diamond open access in 2021, when Duke University Press became its publisher. It relies on a community partnership model, in which libraries, research centers, academic departments, and other entities voluntarily contribute funds to cover publication costs. Demography no longer assesses article processing charges. Articles are published under a Creative Commons license (BY-NC-ND), and authors retain copyright over their works.

== Impact and reception ==

Demography is a leading journal on issues related to population and demographic trends, and research published in Demography has been cited in The New York Times. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.984.

== References ==

== External links ==

Official website
Wikipedia/Demography_(journal)
Mortality rate, or death rate,: 189, 69  is a measure of the number of deaths (in general, or due to a specific cause) in a particular population, scaled to the size of that population, per unit of time. Mortality rate is typically expressed in units of deaths per 1,000 individuals per year; thus, a mortality rate of 9.5 (out of 1,000) in a population of 1,000 would mean 9.5 deaths per year in that entire population, or 0.95% out of the total. It is distinct from "morbidity", which is either the prevalence or incidence of a disease, and also from the incidence rate (the number of newly appearing cases of the disease per unit of time).: 189  An important specific mortality rate measure is the crude death rate, which looks at mortality from all causes in a given time interval for a given population. As of 2020, for instance, the CIA estimates that the crude death rate globally will be 7.7 deaths per 1,000 people in a population per year. In a generic form,: 189  mortality rates can be seen as calculated using ( d / p ) ⋅ 10 n {\displaystyle (d/p)\cdot 10^{n}} , where d represents the deaths from whatever cause of interest is specified that occur within a given time period, p represents the size of the population in which the deaths occur (however this population is defined or limited), and 10 n {\displaystyle 10^{n}} is the conversion factor from the resulting fraction to another unit (e.g., multiplying by 10 3 {\displaystyle 10^{3}} to get mortality rate per 1,000 individuals).: 189  == Crude death rate, globally == The crude death rate is defined as "the mortality rate from all causes of death for a population," calculated as the "total number of deaths during a given time interval" divided by the "mid-interval population", per 1,000 or 100,000; for instance, the population of the United States was around 290,810,000 in 2003, and in that year, approximately 2,419,900 deaths occurred in total, giving a crude death (mortality) rate of 832 deaths per 100,000.: 3–20f  
As of 2020, the CIA estimates the U.S. crude death rate will be 8.3 per 1,000, while it estimates that the global rate will be 7.7 per 1,000. According to the World Health Organization, the ten leading causes of death, globally, in 2016, for both sexes and all ages, were as presented in the table below. Crude death rate, per 100,000 population Ischaemic heart disease, 126 Stroke, 77 Chronic obstructive pulmonary disease, 41 Lower respiratory infections, 40 Alzheimer's disease and other dementias, 27 Trachea, bronchus, and lung cancers, 23 Diabetes mellitus, 21 Road injury, 19 Diarrhoeal diseases, 19 Tuberculosis, 17 Mortality rate is also measured per thousand. It is determined by how many people of a certain age die per thousand people. Decrease of mortality rate is one of the reasons for increase of population. Development of medical science and other technologies has resulted in the decrease of mortality rate in all the countries of the world for some decades. In 1990, the mortality rate of children under five years of age was 144 per thousand, but in 2015 the child mortality rate was 38 per thousand. == Related measures of mortality == Other specific measures of mortality include: For any of these, a "sex-specific mortality rate" refers to "a mortality rate among either males or females", where the calculation involves both "numerator and denominator... limited to the one sex".: 3–23  == Use in epidemiology == In most cases there are few if any ways to obtain exact mortality rates, so epidemiologists use estimation to predict correct mortality rates. Mortality rates are usually difficult to predict due to language barriers, health infrastructure related issues, conflict, and other reasons. Maternal mortality has additional challenges, especially as they pertain to stillbirths, abortions, and multiple births. 
In some countries, during the 1920s, a stillbirth was defined as "a birth of at least twenty weeks' gestation in which the child shows no evidence of life after complete birth". In most countries, however, a stillbirth was defined as "the birth of a fetus, after 28 weeks of pregnancy, in which pulmonary respiration does not occur". === Census data and vital statistics === Ideally, all mortality estimation would be done using vital statistics and census data. Census data will give detailed information about the population at risk of death. The vital statistics provide information about live births and deaths in the population. Often, either census data and vital statistics data is not available. This is common in developing countries, countries that are in conflict, areas where natural disasters have caused mass displacement, and other areas where there is a humanitarian crisis === Household surveys === Household surveys or interviews are another way in which mortality rates are often assessed. There are several methods to estimate mortality in different segments of the population. One such example is the sisterhood method, which involves researchers estimating maternal mortality by contacting women in populations of interest and asking whether or not they have a sister, if the sister is of child-bearing age (usually 15) and conducting an interview or written questions about possible deaths among sisters. The sisterhood method, however, does not work in cases where sisters may have died before the sister being interviewed was born. Orphanhood surveys estimate mortality by questioning children are asked about the mortality of their parents. It has often been criticized as an adult mortality rate that is very biased for several reasons. The adoption effect is one such instance in which orphans often do not realize that they are adopted. Additionally, interviewers may not realize that an adoptive or foster parent is not the child's biological parent. 
There is also the issue that some parents are reported on by multiple children, while adults with no children are not counted in mortality estimates at all. Widowhood surveys estimate adult mortality by asking respondents about a deceased husband or wife. One limitation of the widowhood survey concerns divorce: people may be more likely to report that they are widowed in places where there is great social stigma around being a divorcee. Another limitation is that multiple marriages introduce biased estimates, so individuals are often asked only about their first marriage. Biases will be significant if spouses' deaths are associated, as in countries with large AIDS epidemics. === Sampling === Sampling refers to the selection of a subset of the population of interest to efficiently gain information about the entire population. Samples should be representative of the population of interest. Cluster sampling is an approach in which each member of the population is assigned to a group (cluster); clusters are then randomly selected, and all members of the selected clusters are included in the sample. Often combined with stratification techniques (in which case it is called multistage sampling), cluster sampling is the approach most often used by epidemiologists. In areas of forced migration, however, sampling error is larger, so cluster sampling is not the ideal choice there. == Mortality statistics == Causes of death vary greatly between developed and less developed countries; see also list of causes of death by rate for worldwide statistics. According to Jean Ziegler (the United Nations Special Rapporteur on the Right to Food from 2000 to March 2008), mortality due to malnutrition accounted for 58% of the total mortality in 2006: "In the world, approximately 62 million people, all causes of death combined, die each year. 
In 2006, more than 36 million died of hunger or diseases due to deficiencies in micronutrients". Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes. In industrialized nations, the proportion is much higher, reaching 90%. == Economics == Scholars have found a significant relationship between the low standard of living that results from low income and increased mortality rates. A low standard of living is more likely to result in malnutrition, which can make people more susceptible to disease and more likely to die from these diseases. A lower standard of living may also lead to a lack of hygiene and sanitation, increased exposure to and spread of disease, and a lack of access to proper medical care and facilities. Poor health can in turn contribute to low and reduced incomes, creating a loop known as the health-poverty trap. Indian economist and philosopher Amartya Sen has stated that mortality rates can serve as an indicator of economic success and failure.: 27, 32  Historically, mortality rates have been adversely affected by short-term price increases. Studies have shown that mortality rates increase at a rate concurrent with increases in food prices. These effects have a greater impact on vulnerable, lower-income populations than on populations with a higher standard of living.: 35–36, 70  In more recent times, higher mortality rates have been less tied to socio-economic levels within a given society and have instead differed more between low- and high-income countries. National income, which is directly tied to standard of living within a country, is now considered the largest factor in mortality rates being higher in low-income countries. === Preventable mortality === These differences are especially pronounced for children under 5 years old, particularly in lower-income, developing countries. 
These children have a much greater chance of dying of diseases that have become mostly preventable in higher-income parts of the world. More children die of malaria, respiratory infections, diarrhea, perinatal conditions, and measles in developing nations. Data shows that after the age of 5 these preventable causes level out between high and low-income countries. == See also == == References == === Sources === == External links == DeathRiskRankings: Calculates risk of dying in the next year using MicroMorts and displays risk rankings for up to 66 causes of death Data regarding death rates by age and cause in the United States (from Data360) Complex Emergency Database (CE-DAT): Mortality data from conflict-affected populations Archived 2008-12-26 at the Wayback Machine Human Mortality Database: Historic mortality data from developed nations Archived 2011-02-28 at the Wayback Machine Deaths this year OUR WORLD IN DATA: Number of deaths per year, World
Wikipedia/Death_rate
Sullivan's index, also known as disability-free life expectancy (DFLE), is a method to compute life expectancy free of disability. It is calculated by the formula: health expectancy = life expectancy − duration of disability. Health expectancy calculated by Sullivan's method is the number of remaining years, at a particular age, that an individual can expect to live in a healthy state. It is computed by subtracting the probable duration of bed disability and inability to perform major activities from the life expectancy. The data for the calculation are obtained from population surveys and period life tables. Sullivan's index collects mortality and disability data separately, and these data are almost always readily available. The Sullivan health expectancy reflects the current health of a real population, adjusted for mortality levels and independent of age structure. == See also == Disability-adjusted life year (DALY) Quality-adjusted life year (QALY) Healthy Life Years == References == == External links == Definition of Sullivan's index
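In practice the subtraction described above is carried out age group by age group from a period life table: the person-years lived in each age group are weighted by the proportion free of disability, summed, and divided by the survivors at the starting age. A minimal sketch of that calculation follows (the function name and all numbers are hypothetical, not taken from any real life table):

```python
def sullivan_dfle(person_years, prop_disabled, survivors):
    """Disability-free life expectancy (Sullivan's method): sum the
    life-table person-years Lx weighted by the healthy proportion,
    then divide by the survivors lx at the starting age."""
    healthy = sum(L * (1.0 - d) for L, d in zip(person_years, prop_disabled))
    return healthy / survivors

# Hypothetical abridged life table for illustration only
Lx = [480_000, 470_000, 450_000, 400_000]   # person-years lived per age group
disabled = [0.02, 0.05, 0.10, 0.25]         # proportion disabled per age group
print(sullivan_dfle(Lx, disabled, survivors=100_000))  # about 16.2 years
```

Setting every disability proportion to zero recovers ordinary life expectancy, which makes the "life expectancy minus duration of disability" reading of the index concrete.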
Wikipedia/Sullivan's_method
Biodemography is the science dealing with the integration of biological theory and demography. == Overview == Biodemography is a new branch of human (classical) demography concerned with understanding the complementary biological and demographic determinants of and interactions between the birth and death processes that shape individuals, cohorts and populations. The biological component brings human demography under the unifying theoretical umbrella of evolution, and the demographic component provides an analytical foundation for many of the principles upon which evolutionary theory rests, including fitness, selection, structure, and change. Biodemographers are concerned with birth and death processes as they relate to populations in general and to humans in particular, whereas population biologists specializing in life history theory are interested in these processes only insofar as they relate to fitness and evolution. Traditionally, evolutionary biologists seldom focused on older, post-reproductive individuals because these individuals (it is typically argued) do not contribute to fitness. In contrast, biodemographers have embraced research programs expressly designed to study individuals at ages beyond their reproductive years, because information on these age classes sheds important light on longevity and aging. The biological and demographic components of biodemography are not hierarchical but reciprocal: both are primary windows on the world and are thus synergistic, complementary and mutually informing. However, there has been much more synthesis between the approaches to demographic research in recent years, such that collaboration between evolutionary, ecological and demographic researchers is increasingly common. An example of this is the Evolutionary Demography Society, formed in 2012/2013 to increase opportunities for inter- and multidisciplinary approaches to understanding how life history and ageing are related and lead to different population demographics. 
Biodemography is one of a small number of key subdisciplines arising from the social sciences that have embraced biology, such as evolutionary psychology and neuroeconomics. However, unlike the others, which focus more narrowly on biological sub-areas (neurology) or concepts (evolution), biodemography has no explicit biological boundaries. As a consequence, it is an interdisciplinary concept, but maintains biological roots. The hierarchical organizations inherent to both biology (cell, organ, individual) and demography (individual, cohort, population) form a chain in which the individual serves as the link between the lower mechanistic levels and the higher functional levels. Biodemography serves to inform research on human aging through theory building using mathematical and statistical modeling, hypothesis testing using experimental methods, and coherence-seeking using genetics and evolutionary concepts. == See also == == References == == Further reading == Gavrilov L.A., Gavrilova N.S. 2012. "Biodemography of Exceptional Longevity: Early-life and mid-life predictors of human longevity". Biodemography and Social Biology, 58(1):14–39, PMID 22582891 Curtsinger J.W., Gavrilova N.S., Gavrilov L.A. 2006. "Biodemography of Aging and Age-Specific Mortality in Drosophila melanogaster". In: Masoro E.J. & Austad S.N. (eds.): Handbook of the Biology of Aging, Sixth Edition. Academic Press. San Diego, CA. 261–288. Carey, J. R., and J. W. Vaupel. 2005. "Biodemography". In: D. Poston and M. Micklin (eds.): Handbook of Population. Kluwer Academic/Plenum Publishers, New York. 625–658. Carnes, B.A., S.J. Olshansky, and D. Grahn. 2003. "Biological evidence for limits to the duration of life". Biogerontology 4: 31–45. Gavrilov L.A., Gavrilova N.S., Olshansky S.J., Carnes B.A. 2002. "Genealogical data and biodemography of human longevity". Social Biology, 49(3-4): 160–173. Gavrilov, L.A., Gavrilova, N.S. 2001. 
"Biodemographic study of familial determinants of human longevity". Population: An English Selection, 13(1): 197–222. Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher, ISBN 3-7186-4983-7 National Research Council (US) Panel for the Workshop on the Biodemography of Fertility and Family Behavior; Wachter KW, Bulatao RA, editors. (2003). Offspring: Human Fertility Behavior in Biodemographic Perspective. Washington (DC): National Academies Press (US). doi:10.17226/10654 == External links == Biodemography of Exceptional Longevity Laboratory of Survival and Longevity Biodemography and Paleodemography Max Planck Institute for Demographic Research National Institute on Aging Biodemography and Social Biology Academic journal.
Wikipedia/Biodemography
The Comparative Study of Electoral Systems (CSES) is a collaborative research project among national election studies around the world. Participating countries and polities include a common module of survey questions in their national post-election studies. The resulting data are collated together along with voting, demographic, district and macro variables into one dataset allowing comparative analysis of voting behavior from a multilevel perspective. The CSES is published as a free, public dataset. The project is administered by the CSES Secretariat, a joint effort between the Institute for Social Research at the University of Michigan and the GESIS – Leibniz Institute for the Social Sciences in Germany. == Aims and content of the study == The CSES project was founded in 1994 with two major aims. The first was to promote international collaboration between national election studies. The second was to allow researchers to study variations in political institutions, especially electoral systems, and their effects on individual attitudes and behaviors, especially turnout and vote choice. CSES datasets contain variables at three levels. The first is micro-level variables which are answered by respondents during post-election surveys in each included country. The second is district-level variables that contain election results from the electoral districts that survey respondents are situated in. The third is macro-level variables containing information about the country context and electoral system, as well as aggregate data such as economic indicators and democracy indices. This nested data structure, as depicted in Figure 1, allows for multilevel analysis. A new thematic module is devised by the CSES Planning Committee every five years. Between the final releases of the complete modules, CSES also disseminates advance releases of datasets periodically, which include partial data for modules that have not been fully released yet. 
Survey data collection for module 1 was conducted between 1996 and 2001 and focuses on system performance. The module allows investigation of the impact of electoral institutions on citizens’ political cognition and behavior as well as of the nature of political and social cleavages and alignment. Furthermore, it enables research about citizens’ evaluation of democratic institutions and processes. Module 1 includes 39 election studies conducted in 33 countries. Survey data collection for module 2 was conducted between 2001 and 2006 and focuses on accountability and representation. It addresses the contrast between the view that elections are a mechanism to hold government accountable and the view that they are a means to ensure that citizens’ views and interests are properly represented in the democratic process. Module 2 includes 41 election studies conducted in 38 countries. Survey data collection for module 3 was conducted between 2006 and 2011. The module allows investigating the meaningfulness of electoral choices and, accordingly, focuses on a major aspect of electoral research: the contingency in choice of available options. Module 3 includes 50 election studies conducted in 41 countries. Survey data collection for module 4 was conducted between 2011 and 2016 and focuses on distributional politics and social protection. The main topics investigated are voters’ preferences for public policy and the mediating factors of political institutions and voting behavior. Module 4 includes 45 election studies conducted in 39 countries. Survey data collection for module 5 was conducted between 2016 and 2021 and focuses on the electorate's attitudes towards political elites, on the one hand, and towards "out groups", on the other hand. It thus enables research on attitudes and voting behavior in the context of a rise of parties campaigning on anti-establishment messages and in opposition to "out groups". Module 5 includes 56 election studies conducted in 45 countries. 
Survey data collection for module 6 is ongoing, with the survey to be administered between 2021 and 2026. It focuses on the theme “Representative Democracy Under Pressure” with questions tapping citizens’ views on the democratic system and perceptions of system outputs, gender representation, preferences for government and the impact of the COVID-19 pandemic. The CSES Module 6 First Advance Release published in December 2024 includes 7 election studies conducted in 7 countries. A complete table of all variables available across modules can be found on the CSES website. CSES also has an Integrated Module Dataset (IMD) which brings together the existing Standalone CSES Modules (CSES Modules 1–5 inclusive) into one longitudinal and harmonized dataset. Variables that appear in at least three Standalone CSES Modules, up to and including CSES Module 5, are eligible for inclusion in IMD, with all polities participating in CSES included in the dataset. CSES IMD includes over 395,000 individual-level observations across 230 elections in 59 polities, with voter evaluations of over 800 political parties. Highlights of the IMD file are party and coalition numerical codes synchronized across CSES Modules and the incorporation of data bridging variables allowing CSES data to be easily merged with other common datasets in the social sciences. CSES IMD launched in December 2018 and is being rolled out on a phased basis with the latest release, Phase 4 released in February 2024. == Countries in the study == A frequently updated election study table across all modules can be found on the CSES website. == Data access == CSES data are available publicly and are free of charge. Data releases are non-proprietary – in other words the data are made available to the public without preferential or advance access to anyone. Data is available in multiple formats including for common statistical packages like STATA, SPSS, SAS and R. 
The data can be downloaded from the CSES website as well as via the GESIS data catalogue. The GESIS online analysis tool ZACAT can furthermore be used to browse and explore the dataset. == Organizational structure and funding == === The CSES Secretariat === In conjunction with national election study collaborators, the CSES Secretariat administers the CSES project. It consists of staff from the GESIS – Leibniz Institute for the Social Sciences in Germany and the University of Michigan, Ann Arbor, in the United States. The Secretariat is responsible for compiling the final CSES dataset by harmonizing the single country studies into a cross-national dataset. It is also responsible for collecting the district and macro data, for data documentation, and for ensuring data quality. The Secretariat, furthermore, maintains the CSES website, promotes the project, provides support to the user community, and organizes conferences and project meetings. === The Planning Committee, collaborators and the CSES Plenary === The CSES research agenda, study design, and questionnaires are developed by an international committee of leading scholars in political science, sociology, and survey methodology. This committee is known as the CSES Planning Committee. At the beginning of each new module, a new Planning Committee is established. Nominations for the Planning Committee come from the user community, with membership of the Committee then being approved by the CSES Plenary Meeting. The Plenary Meeting is made up of national collaborators from each national election study involved in the CSES. Ideas for new modules can be submitted by anyone. More information on the current planning committee, its members, and subcommittee reports, as well as on past Planning Committees, can be found on the CSES website. A list of country collaborators who participate in CSES can also be found on the CSES website. 
=== Funding and support === The work of the CSES Secretariat is funded by the American National Science Foundation, the GESIS – Leibniz Institute for the Social Sciences and the University of Michigan’s Center for Political Studies along with in-kind support from participating election studies, additional organizations that sponsor planning meetings and conferences, and the many organizations that fund election studies by CSES collaborators. == Klingemann Prize == Each year, the CSES awards the GESIS Klingemann Prize for the best CSES scholarship (paper, book, dissertation, or other scholarly work, broadly defined). The award is sponsored by the GESIS – Leibniz Institute for the Social Sciences and is named in honor of Professor Dr. Hans-Dieter Klingemann, co-founder of the CSES, an internationally renowned political scientist who made significant contributions to cross-national electoral research. Nominated works must make extensive use of CSES and have a publication date in the calendar year prior to the award, either in print or online. === Winners of the Klingemann Prize === 2024: Andres Reiljan (University of Tartu), Diego Garzia (University of Lausanne), Frederico Ferreira da Silva (University of Lausanne) and Alexander H. Trechsel (University of Lucerne) (2024): Patterns of Affective Polarization toward Parties and Leaders across the Democratic World. American Political Science Review, 118(2), 654–670. 2023: James Adams (University of California, Davis), David Bracken (University of California, Davis), Noam Gidron (Hebrew University of Jerusalem), Will Horne (Georgia State University), Diana Z. O’Brien (Washington University in St. Louis) and Kaitlin Senk (Exeter University) (2022): Can’t We All Just Get Along? How Women MPs Can Ameliorate Affective Polarization in Western Publics. American Political Science Review, 117(1), 318–324. 
2022: Vicente Valentim (University of Oxford) (2021): Parliamentary representation and the normalization of radical right support. Comparative Political Studies, 54(14), 2475–2511. 2021: Enrique Hernández (Universitat Autònoma de Barcelona), Eva Anduiza (Universitat Autònoma de Barcelona) and Guillem Rico (Universitat Autònoma de Barcelona) (2021): Affective polarization and the salience of elections. Electoral Studies, 69(1), 1–9. 2020: Eelco Harteveld (University of Amsterdam), Stefan Dahlberg (University of Gothenburg), Andrej Kokkonen (Aarhus University) and Wouter Van Der Brug (University of Amsterdam) (2019). "Gender Differences in Vote Choice: Social Cues and Social Harmony as Heuristics". British Journal of Political Science, 49(3), 1141–1161. 2019: Ruth Dassonneville (University of Montreal) and Ian McAllister (Australian National University) (2018). "Gender, Political Knowledge, and Descriptive Representation: The Impact of Long-Term Socialization". American Journal of Political Science, 62(2), 249–265. 2018: André Blais (University of Montreal), Eric Guntermann (University of Montreal) and Marc-André Bodet (University of Laval) (2017). "Linking Party Preferences and the Composition of Government: A New Standard for Evaluating the Performance of Electoral Democracy". Political Science Research and Methods, 5(2), 315–331. 2017: Dani Marinova (Autonomous University of Barcelona) (2016). "Coping with Complexity: How Voters Adapt to Unstable Parties". ECPR Press. 2016: Kimuli Kasara (Columbia University) and Pavithra Suryanarayan (Johns Hopkins University) (2015). "When Do the Rich Vote Less Than the Poor and Why? Explaining Turnout Inequality across the World". American Journal of Political Science, 59(3), 613–627. 2015: Noam Lupu (University of Wisconsin-Madison) (2015). "Party Polarization and Mass Partisanship: A Comparative Perspective". Political Behavior, 37(2), 331–356. 2014: Richard R. 
Lau (Rutgers University), Parina Patel (Georgetown University), Dalia F. Fahmy (Long Island University) and Robert R. Kaufman (Rutgers University) (2014). "Correct Voting Across Thirty-Three Democracies: A Preliminary Analysis". British Journal of Political Science, 44(02), 239–259. 2013: Mark Andreas Kayser (Hertie School of Governance) and Michael Peress (University of Rochester) (2012). "Benchmarking across Borders: Electoral Accountability and the Necessity of Comparison". American Political Science Review, 106(03), 661–684. 2012: Russell J. Dalton (University of California, Irvine) David M. Farrell (University College Dublin) and Ian McAllister (Australian National University) (2011). "Political Parties and Democratic Linkage. How Parties Organize Democracy". Oxford University Press. 2011: Matt Golder (Florida State University) and Jacek Stramski (Florida State University) (2011). "Ideological Congruence and Electoral Institutions". American Journal of Political Science, 54(1), 90–106. == Notes == == References == == External links == Comparative Study of Electoral Systems "CSES at GESIS – Leibniz-Institute for the Social Sciences". Archived from the original on 2016-04-27. CSES Blog
Wikipedia/Comparative_Study_of_Electoral_Systems
Demography > The Basement Tapes is a compilation album by 16volt, released on November 14, 2000, by Cleopatra Records. The album comprises a collection of old, unfinished tracks by the band. Specifically, it contains the Imitation cassette produced in 1991, which helped 16volt secure a place on Re-Constriction Records, and "Out of Time", which was cut from their first album, Wisdom, due to time constraints. == Reception == Fabryka awarded the album three out of four and said "Demography sounds a way hermetic with lots of electronics, less of raw guitars, more of bass sound." == Track listing == == Personnel == Adapted from the Demography > The Basement Tapes liner notes. 16volt Eric Powell – lead vocals, arrangements, production, engineering, design Production and design Judson Leach – mastering == Release history == == References == == External links == Official website Demography > The Basement Tapes at Discogs (list of releases)
Wikipedia/Demography:_The_Basement_Tapes
Science is the peer-reviewed academic journal of the American Association for the Advancement of Science (AAAS) and one of the world's top academic journals. It was first published in 1880, is currently circulated weekly and has a subscriber base of around 130,000. Because institutional subscriptions and online access serve a larger audience, its estimated readership is over 400,000 people. Science is based in Washington, D.C., United States, with a second office in Cambridge, UK. == Contents == The major focus of the journal is publishing important original scientific research and research reviews, but Science also publishes science-related news, opinions on science policy and other matters of interest to scientists and others who are concerned with the wide implications of science and technology. Unlike most scientific journals, which focus on a specific field, Science and its rival Nature cover the full range of scientific disciplines. According to the Journal Citation Reports, Science's 2023 impact factor was 44.7. Studies of methodological quality and reliability have found that some high-prestige journals including Science "publish significantly substandard structures", and overall "reliability of published research works in several fields may be decreasing with increasing journal rank". Although it is the journal of the AAAS, membership in the AAAS is not required to publish in Science. Papers are accepted from authors around the world. Competition to publish in Science is very intense, as an article published in such a highly cited journal can lead to attention and career advancement for the authors. Fewer than 7% of articles submitted are accepted for publication. == History == Science was founded by New York journalist John Michels in 1880 with financial support from Thomas Edison and later from Alexander Graham Bell. 
(Edison received favorable editorial treatment in return, without disclosure of the financial relationship, at a time when his reputation was suffering due to delays producing the promised commercially viable light bulb.) However, the journal never gained enough subscribers to succeed and ended publication in March 1882. Alexander Graham Bell and Gardiner Greene Hubbard bought the magazine rights and hired young entomologist Samuel H. Scudder to resurrect the journal one year later. They had some success while covering the meetings of prominent American scientific societies, including the AAAS. However, by 1894, Science was again in financial difficulty and was sold to psychologist James McKeen Cattell for $500 (equivalent to $18,170 in 2024). In an agreement worked out by Cattell and AAAS secretary Leland O. Howard, Science became the journal of the American Association for the Advancement of Science in 1900. During the early part of the 20th century, important articles published in Science included papers on fruit fly genetics by Thomas Hunt Morgan, gravitational lensing by Albert Einstein, and spiral nebulae by Edwin Hubble. After Cattell died in 1944, ownership of the journal was transferred to the AAAS, but the journal lacked a consistent editorial presence until Graham DuShane became editor in 1956. In 1958, under DuShane's leadership, Science absorbed The Scientific Monthly, thus increasing the journal's circulation by over 62% from 38,000 to more than 61,000. Physicist Philip Abelson, a co-discoverer of neptunium, served as editor from 1962 to 1984. Under Abelson the efficiency of the review process was improved and publication practices were brought up to date. During this time, papers on the Apollo program missions and some of the earliest reports on AIDS were published. Biochemist Daniel E. Koshland Jr. served as editor from 1985 until 1995. From 1995 until 2000, neuroscientist Floyd E. Bloom held that position. 
Biologist Donald Kennedy became the editor of Science in 2000. Biochemist Bruce Alberts took his place in March 2008. Geophysicist Marcia McNutt became editor-in-chief in June 2013. During her tenure the family of journals expanded to include Science Robotics and Science Immunology, and open access publishing with Science Advances. Jeremy M. Berg became editor-in-chief on July 1, 2016. Former Washington University in St. Louis Provost Holden Thorp was named editor-in-chief on Monday, August 19, 2019. In February 2001, draft results of the human genome were simultaneously published by Nature and Science with Science publishing the Celera Genomics paper and Nature publishing the publicly funded Human Genome Project. In 2007, Science (together with Nature) received the Prince of Asturias Award for Communications and Humanity. In 2015, Rush D. Holt Jr., chief executive officer of the AAAS and executive publisher of Science, stated that the journal was becoming increasingly international: "[I]nternationally co-authored papers are now the norm—they represent almost 60 percent of the papers. In 1992, it was slightly less than 20 percent." == Availability == The latest editions of the journal are available online, through the main journal website, only to subscribers, AAAS members, and for delivery to IP addresses at institutions that subscribe; students, K–12 teachers, and some others can subscribe at a reduced fee. However, research articles published after 1997 are available free (with online registration) one year after they are published i.e. delayed open access. Significant public-health related articles are also available free, sometimes immediately after publication. AAAS members may also access the pre-1997 Science archives at the Science website, where it is called "Science Classic". The journal also participates in initiatives that provide free or low-cost access to readers in developing countries, including HINARI, OARE, AGORA, and Scidev.net. 
Other features of the Science website include the free "ScienceNow" section with "up to the minute news from science", and "ScienceCareers", which provides free career resources for scientists and engineers. Science Express (Sciencexpress) provides advance electronic publication of selected Science papers. == Affiliations == Science received funding for COVID-19-related coverage from the Pulitzer Center and the Heising-Simons Foundation. == See also == AAAS publications Breakthrough of the Year List of scientific journals == References == === AAAS references === == External links == Official website
Wikipedia/Science_(magazine)
The English suffix -graphy means a "field of study" or related to "writing" a book, and is an anglicization of the French -graphie inherited from the Latin -graphia, which is a transliterated direct borrowing from Greek. == Arts == Cartography – the art and field of making maps. Choreography – the art of creating and arranging dances or ballets. Cinematography – the art of making lighting and camera choices when recording photographic images for the cinema. Collagraphy – in printmaking, a fine art technique in which collage materials are used as ink-carrying imagery on a printing plate. Iconography – the study and interpretation of the content of images and icons. Klecksography – the art of making images from inkblots. Lexicography – the study of lexicons and the art of compiling dictionaries. Lithography – a planographic printing technique. Photography – the art, practice or occupation of taking and printing photographs. Photolithography – a method of microfabrication used in electronics manufacturing. Pornography – the practice, occupation and result of producing sexually arousing imagery or words. Pyrography – the art of decorating wood or other materials with burn marks. Serigraphy – a printmaking technique that uses a stencil made of fine synthetic material through which ink is forced. Tasseography – the art of reading tea leaves. Thermography – thermal imaging. Tomography – three-dimensional imaging. Typography – the art and techniques of type design. Videography – the art and techniques of filming video. Vitreography – in printmaking, a fine art technique that uses glass printing matrices. Xerography – a means of copying documents. === Writing === Cacography – bad handwriting or spelling. Calligraphy – the art of fine handwriting. Ideography – the use of symbols to represent a concept or idea. Orthography – the rules of correct writing. Palaeography – the study of historical handwriting. Pictography – the use of pictographs. Steganography – the art of writing hidden messages. 
Stenography – the art of writing in shorthand. == Types of works == Bibliography – a list of writings, typically those used or considered by an author in preparing a particular work or research. Metabibliography – bibliography of bibliographies. Biography – an account of a person's life. Autobiography – biography of a person written by themselves. Discography – a list of recorded music, or other sound recordings/auditory media. Filmography – a list of films, documentaries, or other visual media. Ludography (or gameography) – a list of games, specifically video games. Webography (or webliography or arachniography) – a list of websites, or URLs. == Fields of study == Areography – the geography of Mars (studies the physical features of the planet). Cartography – the study and making of maps. Cosmography – the study and making of maps of the universe or cosmos. Cryptography – the study of securing information. Crystallography – the study of crystals. Demography – the study of the characteristics of human populations, such as size, growth, density, distribution, and vital statistics. Encephalography – recording of voltages from the brain. Epigraphy – the study of written inscriptions on hard surfaces. Ethnography – the study of cultures and cultural phenomena. Floriography – the language of flowers. Geography – study of the lands, features, inhabitants, and phenomena of the Earth. Anthropogeography – study of human society's interactions and relationships with the environment. Orography – the study of mountains. Physiography – study of the processes and patterns in the Earth's environment. Hagiography – the study of saints. Historiography – the study of the methods of historians. Holography – the study and making of holograms. Hydrography – measurement and description of any waters. Monography – the study of a single specialized subject or an aspect of a subject. 
Oceanography – exploration and scientific study of the ocean and its phenomena. Pathography – study of the history of an individual or community with regard to the influence of a physical or mental condition. Radiography – use of X-rays to produce medical images. Reprography – reproduction of graphics through mechanical or electrical means. Selenography – the study and mapping of the physical features of the Moon. Topography – the study of Earth's surface shape and features or those of planets, moons and asteroids. Uranography – the study and mapping of stars and space objects. Zoography – the study of animal description and their habits; descriptive zoology. == Medical == Mammography – an X-ray method used to examine the breast for detection of early-stage cancer and other diseases. Venography – a test that uses X-ray moving pictures to show blood flow in the veins of the legs and pelvis. Ultrasonography – a test that uses high-energy sound waves to observe tissues and organs. Urography – an examination with an X-ray to evaluate the kidneys, ureters and bladder. == See also == -ism -ology -logy List of words ending in ology == References == Black, Richard Harrison (1874). The student's manual complete; an etymological vocabulary of words derived from the Greek and Latin. Oxford University. pp. 10–12. Retrieved 2009-07-28. -graphy. The Oxford Pocket Dictionary of Current English. 2009. Retrieved 2009-07-28.
Wikipedia/-graphy
Regional science is a field of economics concerned with analytical approaches to problems that are related specifically to regional and international issues. Topics in regional science include, but are not limited to, location theory or spatial economics, location modeling, transportation, trade and migration flows, economic geography, land use and urban development, inter-industry analysis such as input-output analysis, environmental and ecological analysis, resource management, urban and regional policy analysis, and spatial data analysis. In the broadest sense, any social science analysis that has a spatial dimension is embraced by regional scientists. == Origins == Regional science was founded in the late 1940s when some economists began to become dissatisfied with the low level of regional economic analysis and felt an urge to upgrade it. But even in this early era, the founders of regional science expected to catch the interest of people from a wide variety of disciplines. Regional science's formal roots date to the aggressive campaigns by Walter Isard and his supporters to promote the "objective" and "scientific" analysis of settlement, industrial location, and urban development. Isard targeted key universities and campaigned tirelessly. Accordingly, the Regional Science Association was founded in 1954, when the core group of scholars and practitioners held its first meetings independent from those initially held as sessions of the annual meetings of the American Economic Association. A reason for meeting independently undoubtedly was the group's desire to extend the new science beyond the rather restrictive world of economists and have natural scientists, psychologists, anthropologists, lawyers, sociologists, political scientists, planners, and geographers join the club. 
Now called the Regional Science Association International (RSAI), it maintains subnational and international associations, journals, and a conference circuit (notably in North America, continental Europe, Japan, and South Korea). Membership in the RSAI continues to grow. == Seminal publications == Topically speaking, regional science took off in the wake of Walter Christaller's book Die Zentralen Orte in Süddeutschland (Verlag von Gustav Fischer, Jena, 1933; transl. Central Places in Southern Germany, 1966), soon followed by Tord Palander's (1935) Beiträge zur Standortstheorie; August Lösch's Die räumliche Ordnung der Wirtschaft (Verlag von Gustav Fischer, Jena, 1940; 2nd rev. edit., 1944; transl. The Economics of Location, 1954); and Edgar M. Hoover's two books, Location Theory and the Shoe and Leather Industry (1938) and The Location of Economic Activity (1948). Other important early publications include: Edward H. Chamberlin's (1950) The Theory of Monopolistic Competition; François Perroux's (1950) Economic Spaces: Theory and Application; Torsten Hägerstrand's (1953) Innovationsförloppet ur Korologisk Synpunkt; Edgar S. Dunn's (1954) The Location of Agricultural Production; Martin J. Beckmann, C. B. McGuire, and Christopher B. Winsten's (1956) Studies in the Economics of Transportation; Melvin L. Greenhut's (1956) Plant Location in Theory and Practice; Gunnar Myrdal's (1957) Economic Theory and Underdeveloped Regions; Albert O. Hirschman's (1958) The Strategy of Economic Development; and Claude Ponsard's (1958) Histoire des Théories Économiques Spatiales. Nonetheless, Walter Isard's first book in 1956, Location and Space Economy, apparently captured the imagination of many, and his third, Methods of Regional Analysis, published in 1960, only sealed his position as the father of the field. As is typically the case, the above works were built on the shoulders of giants. 
Much of this predecessor work is documented well in Walter Isard's Location and Space Economy as well as Claude Ponsard's Histoire des Théories Économiques Spatiales. Particularly important was the contribution by 19th-century German economists to location theory. The early German hegemony more or less starts with Johann Heinrich von Thünen and runs through both Wilhelm Launhardt and Alfred Weber to Walter Christaller and August Lösch. == Core journals == If an academic discipline is identified by its journals, then technically regional science began in 1955 with the publication of the first volume of the Papers and Proceedings, Regional Science Association (now Papers in Regional Science, published by Springer). In 1958, the Journal of Regional Science followed. Since the 1970s, the number of journals serving the field has exploded. The RSAI website displays most of them. Most recently, the journal Spatial Economic Analysis has been published by the RSAI British and Irish Section with the Regional Studies Association. The latter is a separate and growing organisation involving economists, planners, geographers, political scientists, management academics, policymakers, and practitioners. == Academic programs == Walter Isard's efforts culminated in the creation of a few academic departments and several university-wide programs in regional science. At Walter Isard's suggestion, the University of Pennsylvania started the Regional Science Department in 1956. It featured as its first graduate William Alonso and was looked upon by many to be the international academic leader for the field. Another important graduate and faculty member of the department is Masahisa Fujita. The core curriculum of this department was microeconomics, input-output analysis, location theory, and statistics. 
Faculty also taught courses in mathematical programming, transportation economics, labor economics, energy and ecological policy modeling, spatial statistics, spatial interaction theory and models, benefit/cost analysis, urban and regional analysis, and economic development theory, among others. But the department's unusual multidisciplinary orientation undoubtedly encouraged its demise, and it lost its department status in 1993. With a few exceptions, such as Cornell University, which awards graduate degrees in Regional Science and where Walter Isard spent the rest of his career after leaving Penn, most practitioners hold positions in departments such as economics, geography, civil engineering, agricultural economics, rural sociology, urban planning, public policy, or demography. The diversity of disciplines participating in regional science has helped make it one of the most interesting and fruitful fields of academic specialization, but it has also made it difficult to fit the many perspectives into a curriculum for an academic major. It is even difficult for authors to write regional science textbooks, since what is elementary knowledge for one discipline might be entirely novel for another. == Public policy impact == Part of the movement was, and continues to be, associated with the political and economic realities of the role of the local community. On any occasion where public policy is directed at the sub-national level, such as a city or group of counties, the methods of regional science can prove useful. 
Traditionally, regional science has provided policymakers with guidance on the following issues: determinants of industrial location (both within the nation and region); regional economic impact of the arrival or departure of a firm; determinants and patterns of intra-national and inter-national trade (commodity) and migration (people) flows; regional specialization and exchange; environmental impacts of social and economic change; and geographic association of economic and social conditions. By targeting federal resources to specific geographic areas, the Kennedy administration realized that political favors could be bought. This is also evident in Europe and other places where local economic areas do not coincide with political boundaries. In the more recent era of devolution, knowledge about "local solutions to local problems" has driven much of the interest in regional science. Thus, there has been much political impetus to the growth of the discipline. == Developments after 1980 == Regional science has enjoyed mixed fortunes since the 1980s. While it has gained a larger following among economists and public policy practitioners, the discipline has fallen out of favor among more radical and post-modernist geographers. In an apparent effort to secure a larger share of research funds, geographers had the National Science Foundation's Geography and Regional Science Program renamed "Geography and Spatial Sciences". === New economic geography === In 1991, Paul Krugman, as a highly regarded international trade theorist, put out a call for economists to pay more attention to economic geography in a book entitled Geography and Trade, focusing largely on the core regional science concept of agglomeration economies. Krugman's call renewed interest by economists in regional science and, perhaps more importantly, founded what some term the "new economic geography", which enjoys much common ground with regional science. 
Broadly trained "new economic geographers" combine quantitative work with other research techniques, for example at the London School of Economics. The unification of Europe and the increased internationalization of the world's economic, social, and political realms have further induced interest in the study of regional, as opposed to national, phenomena. The new economic geography appears to have garnered more interest in Europe than in America, where amenities, notably climate, have been found to better predict human location and relocation patterns, as emphasized in recent work by Mark Partridge. In 2008, Krugman won the Nobel Memorial Prize in Economic Sciences, and his Prize Lecture references both regional science's location theory and economics' trade theory. === Criticisms === Today there are dwindling numbers of regional scientists from academic planning programs and mainstream geography departments. Attacks on regional science's practitioners by radical critics began as early as the 1970s, notably by David Harvey, who believed it lacked social and political commitment. Regional science's founder, Walter Isard, never envisioned that regional scientists would be political or planning activists. In fact, he suggested that they would seek to be sitting in front of a computer, surrounded by research assistants. Trevor J. Barnes suggests the decline of regional science practice among planners and geographers in North America could have been avoided. He says "It is unreflective, and consequently inured to change, because of a commitment to a God’s eye view. It is so convinced of its own rightness, of its Archimedean position, that it remained aloof and invariant, rather than being sensitive to its changing local context." However, such critics have failed to provide empirical evidence for their claims and ended up criticizing for the sake of criticizing. == See also == == References == == Further reading == Boyce, David. (2004). 
A Short History of the Field of Regional Science. Papers in Regional Science, 83, pp. 31–57. (PDF). Retrieved 2011-06-04. Durlauf, Steven N., and Lawrence E. Blume, eds. (2008). The New Palgrave Dictionary of Economics, 2nd Edition: "new economic geography" by Anthony J. Venables. Abstract. "regional development, geography of" by Jeffrey D. Sachs and Gordon McCord. Abstract. "spatial economics" by Gilles Duranton. Abstract. "urban agglomeration" by William C. Strange. Abstract. Fujita, Masahisa, Paul Krugman, and Anthony Venables. (1999). The Spatial Economy: Cities, Regions and International Trade (Cambridge, Massachusetts: MIT Press). (ISBN 0-262-06204-6) Fujita, Masahisa. (1989). Urban Economic Theory: Land Use and City Size (Cambridge, UK: Cambridge University Press). (ISBN 0-521-34662-2) Fritsch, Michael and Mueller, Pamela (2006), The Effect of New Business Formation on Regional Development over Time. The Case of Germany, Discussion Papers on Entrepreneurship, Growth and Public Policy, Jena. Krumm, Ronald J.; Tolley, George S. (1987). "Regional economics". The New Palgrave: A Dictionary of Economics. 4: 116–20. Scott, A. J. (2000). "Economic Geography: The Great Half-Century". Cambridge Journal of Economics. 24: 504. Web Book of Regional Science
Wikipedia/Regional_science
The Cahiers québécois de démographie (English: Quebec Notebooks of Demography) is a peer-reviewed academic journal publishing original research in areas of demography, demographic analysis, and the demographics of Quebec and other populations. The journal was established in 1971 and is published biannually by the Association des démographes du Québec (Quebec Association of Demographers), with support from the Demography Department at the Université de Montréal. Articles are published in French, with abstracts in French and English. The journal is indexed in Revue des revues démographiques, Repère, Sociological Abstracts, and MEDLINE. Articles are freely available online through the Érudit publishing consortium. == Scope == The Cahiers québécois de démographie publishes articles on topics of mortality, fertility, migration, demographic theory, demographic measures, and related issues. Articles may focus on Quebec, Canada, or have an international perspective. The journal occasionally publishes special volumes of interdisciplinary research on themes such as health, population ageing, urbanization, education, linguistic demography, historical demography, population policy, and the demographics of indigenous peoples, Francophone Africa, or other population groups. == History == The journal was originally entitled Bulletin de l'Association des démographes du Québec (Bulletin of the Quebec Association of Demographers). The name was changed to its current title in 1976. == References == == External links == Official website Association des démographes du Québec
Wikipedia/Cahiers_québécois_de_démographie
The Panel Study of Income Dynamics (PSID) is a longitudinal panel survey of American families, conducted by the Survey Research Center at the University of Michigan. The PSID measures economic, social, and health factors over the life course of families over multiple generations. Data have been collected from the same families and their descendants since 1968. It has been claimed that it is the world’s longest-running household panel survey, and more than 7,600 peer-reviewed publications have been based on PSID data. As of 2025, Thomas Crossley of the University of Michigan's Institute for Social Research is the director of PSID. == Background == The PSID gathers data about the circumstances of the family as a whole and about each individual in the family. The greatest level of detail is gathered for the primary adult(s) heading the family. The PSID has achieved high and consistent response rates, and because of low attrition and the success in following young adults as they form their own families, the sample size has grown from 4,800 families in 1968, to 7,000 families in 2001, to 7,400 by 2005, and to more than 9,000 as of 2013. By 2003, the PSID had collected information on more than 65,000 individuals. As of 2013, the PSID had information on over 75,000 individuals, spanning as many as four decades of their lives. == Framework == The structure of the PSID started with two distinct samples. A nationally representative sample designed by the Survey Research Center became known as the SRC sample. A second sample of individuals was drawn from lower income levels, and this became known as the Survey of Economic Opportunity (SEO) sample. This second sample, though not nationally representative, allowed for more studies to investigate poverty in the United States. After this initial 1968 interview, families were interviewed each year until 1997; since then, the survey has been conducted biennially. 
Over time, as individuals leave their household, they are followed as they form their new residence. As time passed, the representativeness of the original sample drifted further and further from the overall US demographic profile. To ameliorate the potential bias, two additional samples were added to the PSID. A third sample consisting of Latinos was added. In 1997, a new fourth Immigrant sample was added, and the other three were reorganized. All three continued to be collected, but with a reduced number of households. The two "core" samples (SRC and SEO) were reduced to include 6,168 families, and the Latino sample was reduced to 2,000 families. To these, a new set of 441 families from the Immigrant sample was added, creating a study group capable of tracking the current demographics of the US. Until 1972, interviews were conducted in person on paper; since 1973, interviews have been conducted by telephone. Starting in 1993, interviews were conducted using computer-assisted telephone interviewing (CATI) technology. === Child Development Supplement === The Child Development Supplement (CDS) is a research component of the PSID. The CDS provides researchers with extensive data on children and their extended families with which to study the dynamic process of early human and social capital formation. The first CDS study included up to two children per household who were 0 to 12 years old in 1997, and followed those children over three waves, ending in 2007–08. The CDS 2014 includes all eligible children in PSID households born since 1997. === Transition into Adulthood Supplement === When children in the CDS cohort reach 18 years of age, information is obtained about their circumstances through a telephone interview completed shortly after the Main Interview. This study, called the Transition into Adulthood Supplement, was first implemented in 2005 and has been conducted biennially thereafter. 
Information includes measures of time use, psychological functioning, marriage, family, responsibilities, employment and income, education and career goals, health, social environment, religiosity, and outlook on life. === File structure of the PSID === The PSID's information is held in many files. The main head and wife responses are held in a series of "Family Files" that are uniquely identified by a Family ID number. A smaller subset of information pertaining to individuals (whether they are a head, wife, or other family unit members) is contained in the cross-year individual file, and each record is uniquely identified by a 1968 Family ID and Person Number pair. Many additional supplemental files are available with supplemental information that may have been collected for only one or a few years. === Topical information === The PSID collects a wide array of social, demographic, health, economic, geospatial and psychological data. As of 2009, the 75-minute interview collected data on: employment; earnings; income from all sources; expenditures covering 100% of total household spending; transfers; housing; education; geospatial data; health status; health behaviors; health insurance; early childhood and adult health conditions and their timing; emotional well-being; life satisfaction; mortality and cause of death; marriage and fertility; participation in government programs; financial distress, including problems paying debt such as mortgages and foreclosure; vehicle ownership; wealth; pensions; and philanthropy. Many of these areas have been included in the instrument since 1968. Hundreds of additional variables that fall into other domains have been collected in various waves throughout the history of the PSID. No identifying information is distributed to data users, and the identity of all respondents is held in confidence. 
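As a hypothetical sketch of the keying scheme described above (all IDs and values below are invented for illustration, not actual PSID records), the two-file design can be modeled as a pair of lookup tables: family-year records keyed by a per-wave Family ID, and individual records keyed by the (1968 Family ID, Person Number) pair.

```python
# Hypothetical illustration of the PSID file structure described above.
# All identifiers and values are invented for the example.

# A "Family File" for one wave, keyed by that wave's Family ID.
family_file_2005 = {
    4001: {"head_age": 52, "total_family_income": 61000},
}

# The cross-year individual file, keyed by the (1968 Family ID,
# Person Number) pair. Each record points to the family record
# for a given wave.
individual_file = {
    (123, 1): {"fam_id_2005": 4001, "relation": "head"},
    (123, 2): {"fam_id_2005": 4001, "relation": "wife"},
}

def family_record_for(person_key, year_file):
    """Join an individual's record to the matching family-year record."""
    indiv = individual_file[person_key]
    return year_file[indiv["fam_id_2005"]]

# Both members of 1968 family 123 resolve to the same 2005 family record.
print(family_record_for((123, 2), family_file_2005)["total_family_income"])  # 61000
```

The stable (1968 Family ID, Person Number) key is what lets individuals be followed across waves even as the per-wave Family IDs change when they form new households.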
Approximately 7,600 peer-reviewed publications are based on PSID data published in the fields of economics, sociology, demography, psychology, child development, public health, medicine, education, communications, and others. The PSID was named one of the National Science Foundation's "Sensational Sixty" NSF-funded inventions, innovations and discoveries that have become commonplace in American lives. == Researchers and funding == The main source of support for the study comes from the National Science Foundation, the National Institute on Aging and the National Institute of Child Health and Human Development. There are other important sponsors of the study as well including the Office of the Assistant Secretary for Planning and Evaluation of the United States Department of Health and Human Services, the Economic Research Service of the United States Department of Agriculture, the United States Department of Housing and Urban Development and the Center on Philanthropy at Indiana University. == See also == List of household surveys in the United States The PSID has sister surveys conducted in other countries, including: German Socio-Economic Panel (SOEP), housed at the German Institute for Economic Research (DIW) Berlin British Household Panel Survey (BHPS) conducted by investigators at the University of Essex and now merging with the UK households: a longitudinal study Household, Income and Labour Dynamics in Australia Survey (HILDA) Italian Survey on Household Income and Wealth (SHIW) == References == == External links == Panel Study of Income Dynamics Cross National Equivalent File (CNEF) (URL accessed 2016-01-28)
Wikipedia/Panel_Study_of_Income_Dynamics
The Demographic and Health Surveys (DHS) Program was responsible for collecting and disseminating accurate, nationally representative data on health and population in developing countries. The project was implemented by ICF International and funded by the United States Agency for International Development (USAID), with contributions from other donors such as UNICEF, UNFPA, WHO, and UNAIDS. The DHS is highly comparable to the Multiple Indicator Cluster Surveys, and the technical teams developing and supporting the two surveys collaborate closely. Since September 2013, ICF International has been partnering with seven internationally experienced organizations to expand access to and use of the DHS data: Johns Hopkins Bloomberg School of Public Health Center for Communication Programs; Program for Appropriate Technology in Health (PATH); Avenir Health; Vysnova; Blue Raster; Kimetrica; and EnCompass. == Overview == Since 1984, The Demographic and Health Surveys (DHS) Program has provided technical assistance to more than 300 demographic and health surveys in over 90 countries. DHS surveys collect information on fertility and total fertility rate (TFR), reproductive health, maternal health, child health, immunization and survival, HIV/AIDS, maternal mortality, child mortality, malaria, and nutrition among women and children (including stunting). The strategic objective of The DHS Program is to improve and institutionalize the collection and use of data by host countries for program monitoring and evaluation and for policy development decisions. == Surveys == The DHS Program supports the following data collection options: Demographic and Health Surveys (DHS): provide data for monitoring and impact evaluation indicators in the areas of population, health, and nutrition. AIDS Indicator Surveys (AIS): provide countries with a standardized tool to obtain indicators for the effective monitoring of national HIV/AIDS programs. 
Service Provision Assessment (SPA) Surveys: provide information about the characteristics of health and family planning services available in a country. Malaria Indicator Surveys (MIS): provide data on bednet ownership and use, prevention of malaria during pregnancy, and prompt and effective treatment of fever in young children. In some cases, biomarker testing for malaria and anemia is also included. Key Indicators Survey (KIS): provides monitoring and evaluation data for population and health activities in small areas—regions, districts, catchment areas—that may be targeted by an individual project, although it can be used in nationally representative surveys as well. Other Quantitative Data: include Geographic Data Collection and Benchmarking Surveys. Biomarker Collection: in conjunction with surveys, more than 2 million tests have been conducted for HIV, anemia, malaria, and more than 25 other biomarkers. Qualitative Research: provides information outside the purview of standard quantitative approaches. == Data == The DHS Program works to provide survey data for program managers, health care providers, policymakers, country leaders, researchers, members of the media, and others who can act to improve public health. The DHS Program distributes unrestricted survey data files for legitimate academic research at no cost. Online databases include: STATcompiler, STATmapper, HIV/AIDS Survey Indicators Database, HIV Spatial Data Repository, HIVmapper, and Country QuickStats. == Publications == The DHS Program produces publications that provide country-specific and comparative data on population, health, and nutrition in developing countries. Most publications are available online for download; if an electronic version of a publication is not available, a hard copy may be. == Countries == The DHS Program has been active in over 90 countries in Africa, Asia (including Central, West, and Southeast Asia), Latin America, and the Caribbean. 
A list of the publications for each country is available online at The DHS Program web site. == Special Focus Topics == === HIV/AIDS === Since 2001, The DHS Program has worked in over 15 countries in Africa, Asia, Latin America, and the Caribbean conducting population-based HIV testing. By collecting blood for HIV testing from representative samples of the population of men and women in a country, The DHS Program provides nationally representative estimates of HIV rates. The testing protocol provides for anonymous, informed, and voluntary testing of women and men. The program also collects data on internationally recognized AIDS indicators. Currently, the main sources of HIV/AIDS indicators in the database are the Demographic and Health Surveys (DHS), the Multiple Indicator Cluster Surveys (MICS), the Reproductive Health Surveys (RHS), the Sexual Behavior Surveys (SBS), and Behavioral Surveillance Surveys (BSS). Eventually it will cover all countries for which indicators are available. The project also collects data on the capacity of health care facilities to deliver HIV prevention and treatment services. === Malaria === Since 2000, DHS (and some AIS) surveys have collected data on ownership and use of mosquito nets, treatment of fever in children, and intermittent preventive treatment of pregnant women. In recent years, additional questions on indoor residual spraying have been added, and biomarker testing for anemia and malaria has been conducted. These measures have not, however, changed the trend in malaria infections, prompting calls for further intervention by researchers and scientists. === Gender === The DHS Program conducts research and training on integrating gender into population, health and nutrition programs and HIV/AIDS-related activities in the developing world. Questions on gender roles and empowerment are integrated into most DHS questionnaires. 
For countries interested in more in-depth data on gender, modules of questions are available on specific topics such as status of women, domestic violence, and female genital mutilation. === Youth === The DHS Program has interviewed thousands of young people and gathered information about their education, employment, media exposure, nutrition, sexual activity, fertility, unions, and general reproductive health, including HIV prevalence. The Youth Corner on the DHS website presents findings about youth and features profiles of young adults ages 15–24 from more than 30 countries worldwide. The Youth Corner is part of the broader effort by the Interagency Youth Working Group (IYWG) to help program managers, donors, national and local governments, teachers, religious leaders, and nongovernmental organizations (NGOs) plan and implement programs to improve the reproductive health of young adults. === Geographic information === The DHS Program now analyzes the impact of geographic location using DHS data and geographic information systems (GIS). The DHS Program routinely collects geographic information in all surveyed countries. Using GIS, researchers can link DHS data with routine health data, health facility locations, local infrastructure such as roads and rivers, and environmental conditions. === Biomarkers === Using field-friendly technologies, the DHS Program is able to collect biomarker data relating to conditions and infections. DHS surveys have tested for anemia (by measuring hemoglobin), HIV infection, sexually transmitted diseases such as syphilis and the herpes simplex virus, serum retinol (Vitamin A), lead exposure, high blood pressure, and immunity from vaccine-preventable diseases like measles and tetanus. Traditionally, much of the data gathered in DHS surveys is self-reported. Biomarkers complement this information by providing an objective profile of a specific disease or health condition in a population. 
Biomarker data contributes to the understanding of behavioral risk factors and determinants of different illnesses.
== See also ==
National Survey of Family Growth
PMA2020 Family Planning and WASH surveys
== References ==
== External links ==
Official website
STATcompiler
Wikipedia/Demographic_and_Health_Surveys
Structural equation modeling (SEM) is a diverse set of methods used by scientists for both observational and experimental research. SEM is used mostly in the social and behavioral science fields, but it is also used in epidemiology, business, and other fields. A common definition of SEM is "a class of methodologies that seeks to represent hypotheses about the means, variances, and covariances of observed data in terms of a smaller number of 'structural' parameters defined by a hypothesized underlying conceptual or theoretical model". SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed, such as an attitude, intelligence, or mental illness). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using equations but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures. The boundary between what is and is not a structural equation model is not always clear, but SE models typically combine such postulated causal connections among latent variables with causal connections linking the latent variables to observable variables whose values are available in some data set. 
Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis (CFA), confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling. SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods point to: disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases. A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately. == History == Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables. 
The equations were estimated like ordinary regression equations but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book, and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989). Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopmans and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation and closed-form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. LISREL, one of several programs Karl Jöreskog developed at Educational Testing Service, embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors, which permitted measurement-error-adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables. Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models; and as continuing disagreements over model testing, and whether measurement should precede or accompany structural estimates. 
Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between factor analytic and path analytic traditions continues to surface in the literature. Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain. Disciplinary differences in approaches can be seen in SEMNET discussions of endogeneity, and in discussions on causality via directed acyclic graphs (DAGs). Discussions comparing and contrasting various SEM approaches are available, highlighting disciplinary differences in data structures and the concerns motivating economic models. Judea Pearl extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms. SEM analyses are popular in the social sciences because these analytic techniques help researchers break down complex concepts and understand causal processes, but the complexity of the models can introduce substantial variability in the results depending on the presence or absence of conventional control variables, the sample size, and the variables of interest. The use of experimental designs may address some of these doubts. Some authors also present SEM as foundational to machine learning and (interpretable) neural networks. 
Exploratory and confirmatory factor analyses in classical statistics mirror unsupervised and supervised machine learning. == General steps and considerations == The following considerations apply to the construction and assessment of many structural equation models. === Model specification === Building or specifying a model requires attending to: the set of variables to be employed, what is known about the variables, what is theorized or hypothesized about the variables' causal connections and disconnections, what the researcher seeks to learn from the modeling, and the instances of missing values and/or the need for imputation. Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies: which effects and/or correlations/covariances are to be included and estimated, which effects and other coefficients are forbidden or presumed unnecessary, and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2). The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. 
Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables. The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations. Texts and programs "simplifying" model specification via diagrams or by using equations permitting user-selected variable names re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components that leave unrecognized issues lurking within the model's structure and underlying matrices. Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. 
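In the LISREL-style notation just described, the structural and measurement portions are conventionally written as one structural equation and a pair of measurement equations (a standard textbook presentation):

```latex
% Structural portion: endogenous latents \eta, exogenous latents \xi,
% effect matrices B and \Gamma, structural residuals \zeta
\eta = B\eta + \Gamma\xi + \zeta
% Measurement portion: indicators y and x, loading matrices \Lambda_y
% and \Lambda_x, measurement errors \varepsilon and \delta
y = \Lambda_y \eta + \varepsilon \qquad x = \Lambda_x \xi + \delta
```

Here B and Γ hold the effects among and onto the endogenous latent variables, the Λ matrices hold the loadings connecting latents to their indicators, and ζ, ε, and δ are the structural and measurement residuals.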
Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEM's latent structural connections. Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections such as the assertion of no-direct-effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used. The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure. There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects and other causal loops may also interfere with estimation. === Estimation of free model coefficients === Model coefficients fixed at zero, 1.0, or other values, do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data relative to what the data's features would be if the free model coefficients took on the estimated values. 
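As a concrete sketch of "minimizing difference from the data": the maximum-likelihood discrepancy function can be minimized directly with a general-purpose optimizer. This is illustrative only – the simulated data, the single x → y path, and all numbers are assumptions, not any SEM package's method.

```python
import numpy as np
from scipy.optimize import minimize

# Fit the saturated path model  y = beta*x + error  by minimizing the
# maximum-likelihood discrepancy
#   F = ln|Sigma(theta)| + tr(S Sigma(theta)^-1) - ln|S| - p
# between the observed covariance matrix S and the model-implied Sigma.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(scale=0.8, size=500)   # hypothetical population
S = np.cov(np.vstack([x, y]))                   # observed 2x2 covariances
p = 2                                           # number of observed variables

def implied_cov(theta):
    phi, beta, psi = theta                      # Var(x), x->y effect, residual Var(y)
    return np.array([[phi, beta * phi],
                     [beta * phi, beta**2 * phi + psi]])

def f_ml(theta):
    sigma = implied_cov(theta)
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:                               # reject non-positive-definite candidates
        return np.inf
    return (logdet + np.trace(S @ np.linalg.inv(sigma))
            - np.log(np.linalg.det(S)) - p)

est = minimize(f_ml, x0=[1.0, 0.0, 1.0], method="Nelder-Mead").x
print(round(est[1], 2))   # estimated beta; matches S[0,1]/S[0,0] for this saturated model
```

Because this tiny model is saturated (three free coefficients, three distinct data moments), the minimized discrepancy is essentially zero and the estimated effect equals the ordinary regression slope; non-saturated models leave a positive minimized discrepancy that feeds the model test.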
The model's implications for what the data should look like for a specific set of coefficient values depends on: a) the coefficients' locations in the model (e.g. which variables are connected/disconnected), b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear), c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables), and d) the measurement scales appropriate for the variables (interval level measurement is often assumed). A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model was correctly specified, namely if all the model's estimated features correspond to real worldly features. The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. 
endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates of the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two stage least squares. One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other effect, or the second effect being stronger than the first, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to, the other effect estimate, but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification. 
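The third-variable logic can be illustrated with a small simulation (all coefficient values here are hypothetical). This is the instrumental-variable reasoning behind two stage least squares, not a full SEM estimation:

```python
import numpy as np

# z directly causes only x, so the ratio cov(z, y) / cov(z, x) recovers
# the x -> y effect even though x and y cause each other.
rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)
e1 = rng.normal(size=n)
e2 = rng.normal(size=n)
# Reciprocal system  x = 0.5*y + z + e1  and  y = 0.4*x + e2 ,
# solved at equilibrium by substituting one equation into the other:
x = (z + e1 + 0.5 * e2) / (1 - 0.5 * 0.4)
y = 0.4 * x + e2
b_hat = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(round(b_hat, 2))   # close to the true effect 0.4
```

An ordinary regression of y on x would be biased here because x is correlated with e2 through the reciprocal loop; the third variable z breaks that symmetry exactly as the text describes.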
Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal which it impacts only indirectly. Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables. Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent. === Model assessment === Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider: whether the data contain reasonable measurements of appropriate variables, whether the modeled cases are causally homogeneous (it makes no sense to estimate one model if the data cases reflect two or more different causal networks), whether the model appropriately represents the theory or features of interest (models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that theory), whether the estimates are statistically justifiable (substantive assessments may be devastated by violating assumptions, by using an inappropriate estimator, and/or by encountering non-convergence of iterative estimators), the substantive reasonableness of the estimates (negative variances, and correlations exceeding 1.0 or -1.0, are impossible; statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding), and the remaining consistency, or inconsistency, between the model and data. 
(The estimation process minimizes the differences between the model and data but important and informative differences may remain.) Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a χ2 (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small χ2 probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model χ2 test). Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification. Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. 
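The χ2 test described above can be sketched numerically. The model's degrees of freedom follow the counting rule mentioned under identification; the discrepancy value, sample size, and parameter counts below are hypothetical:

```python
from scipy.stats import chi2

def model_df(n_observed: int, n_free_params: int) -> int:
    # Counting rule: the data supply p(p+1)/2 distinct variances and
    # covariances; each free coefficient consumes one of them.
    return n_observed * (n_observed + 1) // 2 - n_free_params

def chi_square_test(f_ml_min: float, n_cases: int, df: int):
    # T = (N - 1) * F_ML is referred to a chi-squared distribution with
    # the model's degrees of freedom; a small p-value signals
    # beyond-chance model-data inconsistency.
    t = (n_cases - 1) * f_ml_min
    return t, chi2.sf(t, df)

# Hypothetical fitted model: 8 observed variables, 26 free coefficients,
# minimized ML discrepancy 0.08, N = 400 cases.
df = model_df(8, 26)                  # 8*9/2 - 26 = 10
t, p = chi_square_test(0.08, 400, df)
print(df, round(t, 2), p < 0.05)      # 10 31.92 True

# Holding the population discrepancy fixed, T grows with N, so the
# p-value shrinks; a correct model (F_ML -> 0) keeps T's expectation
# near df regardless of N.
for n in (100, 400, 1600):
    print(n, chi_square_test(0.08, n, df)[1])
```

Note that T = (N − 1)·F_ML makes explicit why sample size matters only through the discrepancy: when F_ML is genuinely zero in the population, increasing N does not inflate the statistic.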
The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence. Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data. A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. 
Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution." Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables. "Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Andersen, and Glaser who addressed the mathematics behind why the χ2 test can have (though it does not always have) considerable power to detect model misspecification. The probability accompanying a χ2 test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small χ2 probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, MacCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to χ2. 
The fallaciousness of their claim that close-fit should be treated as good enough was demonstrated by Hayduk, Pazderka-Robinson, Cummings, Levers and Beres, who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of χ2 testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking, that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification. Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that χ2 increases (and hence χ2 probability decreases) with increasing sample size (N). There are two mistakes in discounting χ2 on this basis. First, for proper models, χ2 does not increase with increasing N, so if χ2 increases with N that itself is a sign that something is detectably problematic. And second, for models that are detectably misspecified, the increase of χ2 with N provides the good news of increasing statistical power to detect the misspecification (reducing the risk of a Type II error). Some kinds of important misspecifications cannot be detected by χ2, so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration. The χ2 model test, possibly adjusted, is the strongest available structural equation model test. Numerous fit indices quantify how closely a model fits the data but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency. Models with different causal structures which fit the data identically well have been called equivalent models. 
Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. like a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications. These models constitute a greater research impediment. This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data, but several forces continue to propagate fit-index use. For example, Dag Sörbom reported that when someone asked Karl Jöreskog, the developer of the first structural equation modeling program, "Why have you then added GFI to your LISREL program?", Jöreskog replied "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose." 
The χ2 evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career-profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models intentionally burying evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems no general justification for why a researcher should "accept" a causally wrong model, rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of "satisfying" an index value) suffers from an intensified version of the criticism applied to "acceptance" of a null-hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model. Whether or not researchers are committed to seeking the world’s structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable-fit introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline’s substance. The discipline ends up paying a real cost for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more-precise, indicators of similar yet importantly-different latent variables. 
The considerations relevant to using fit indices include checking: whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency); whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criteria based on factor-structured models are only appropriate if the researcher's model actually is factor structured); whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criteria are based (e.g. criteria based on simulation of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables); whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based (if the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two); whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time); whether satisfying criterion values on pairs of indices are required (e.g. Hu and Bentler report that some common indices function inappropriately unless they are assessed together); whether a model test is, or is not, available (a χ2 value, degrees of freedom, and probability will be available for models reporting indices based on χ2); and whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (e.g. if the model is significantly data-inconsistent, the "tolerable" amount of inconsistency is likely to differ across medical, business, social, and psychological contexts). 
Some of the more commonly used fit statistics include:

Chi-square: A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.
Akaike information criterion (AIC): An index of relative model fit: the preferred model is the one with the lowest AIC value. A I C = 2 k − 2 ln ⁡ ( L ) {\displaystyle {\mathit {AIC}}=2k-2\ln(L)\,} where k is the number of parameters in the statistical model, and L is the maximized value of the likelihood of the model.
Root Mean Square Error of Approximation (RMSEA): Fit index where a value of zero indicates the best fit. Guidelines for determining a "close fit" using RMSEA are highly contested.
Standardized Root Mean Squared Residual (SRMR): A popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.
Comparative Fit Index (CFI): In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.

The following table provides references documenting these, and other, features for some common indices: the RMSEA (Root Mean Square Error of Approximation), the SRMR (Standardized Root Mean Squared Residual), the CFI (Comparative Fit Index), and the TLI (Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions.
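As a minimal sketch of the AIC comparison (the parameter counts and log-likelihoods below are invented purely for illustration), the index follows directly from the formula above:

```python
def aic(k, log_likelihood):
    """Akaike information criterion: AIC = 2k - 2*ln(L),
    where ln(L) is the maximized log-likelihood."""
    return 2 * k - 2 * log_likelihood

# Hypothetical candidate models (values invented for illustration):
# model A uses 8 free parameters, model B uses 12.
aic_a = aic(k=8, log_likelihood=-1234.5)   # 2*8 + 2*1234.5 = 2485.0
aic_b = aic(k=12, log_likelihood=-1231.0)  # 2*12 + 2*1231.0 = 2486.0

# Relative fit: the model with the lower AIC is preferred, so here the
# extra 4 parameters of model B do not pay for themselves.
preferred = "A" if aic_a < aic_b else "B"
```

Because AIC is a relative measure, only differences between candidate models fitted to the same data are meaningful, not the absolute values.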
For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit. === Sample size, power, and estimation === Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients. Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances. Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators. The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant, simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. 
If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N, to provide sufficient power to detect structural coefficients of interest, and wanting to avoid the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data. === Interpretation === Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.
Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores. SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. 
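The product rule for indirect effects can be made concrete with a hypothetical numeric sketch (all coefficient values invented): for a chain X → M → Y, the indirect effect of X on Y through M is the product of the direct effects along the path.

```python
# Hypothetical chain X -> M -> Y (coefficients invented for illustration).
b_mx = 0.6  # direct effect of X on M: a one-unit increase in X raises M by 0.6
b_ym = 0.5  # direct effect of M on Y: a one-unit increase in M raises Y by 0.5

# Indirect effect of X on Y carried through M: the product of the
# direct effects comprising that path.
indirect_xy = b_mx * b_ym  # 0.3
```

A longer path simply extends the product (e.g. X → M1 → M2 → Y multiplies three direct effects), and the total effect sums the direct effect and all such path products.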
Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes. The meaning and interpretation of specific estimates should be contextualized in the full model. SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable’s values, but the causal details of precisely what makes this happen remain unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives, each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models. Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two affected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause. (A correlation is the covariance between two variables that have both been standardized to have variance 1.0).
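The common-cause point can be illustrated with a small simulation (all effect sizes invented for illustration): two variables that do not affect each other become correlated simply because both depend on a shared cause.

```python
import random

random.seed(0)
n = 10_000

# z is a common cause of x and y (both effects positive);
# the gauss terms are independent error variables.
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.7 * zi + random.gauss(0, 1) for zi in z]
y = [0.5 * zi + random.gauss(0, 1) for zi in z]

def corr(a, b):
    """Pearson correlation: covariance of a and b after standardization."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    va = sum((ai - ma) ** 2 for ai in a) / len(a)
    vb = sum((bi - mb) ** 2 for bi in b) / len(b)
    return cov / (va * vb) ** 0.5

# x and y are correlated even though neither causes the other; the
# model-implied value is 0.7*0.5 / sqrt((0.7**2+1)*(0.5**2+1)) ≈ 0.26.
r = corr(x, y)
```

Controlling for z (e.g. regressing y on both x and z) would drive the x–y association back toward zero, which is the variance/covariance face of the "controlling" discussion below.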
Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance. Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled. As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect. The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes is provided by R2, though the Blocked-Error R2 should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable. The caution appearing in the Model Assessment section warrants repeating. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features.
Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency. The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation. Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables. Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations. Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients. Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible. The multiple ways of conceptualizing PLS models complicate interpretation of PLS models. 
Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on R2 or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation. Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term causal model must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions (maybe it does, maybe it does not). Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even randomized experiments cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures. === Controversies and movements === Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process with the initial measurement step providing scales or factor-scores which are to be used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies.
The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether the latent factors appropriately coordinate the indicators, it also checks whether each latent simultaneously appropriately coordinates that latent’s indicators with the indicators of theorized causes and/or consequences of that latent. If a latent is unable to do both these styles of coordination, the validity of that latent is questioned, and a scale or factor-scores purporting to measure that latent are questioned. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser followed by several comments and a rejoinder, all made freely available, thanks to the efforts of George Marcoulides. These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussions. Scholars having path-modeling histories tended to defend careful model testing while those with factor-histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett who said: “In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model “acceptability” or “degree of misfit”.” (page 821). Barrett’s article was also accompanied by commentary from both perspectives. The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory.
Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports. The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing “endogeneity” – a style of model mis-specification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor-models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models. The comments by Bollen and Pearl regarding myths about causality in the context of SEM reinforced the centrality of causal thinking in the context of SEM. A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007), for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016) remain disturbingly weak in their presentation of model testing. Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available. An additional controversy that touched the fringes of the previous controversies awaits ignition. Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. 
Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012) discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time, but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective. Though declining, traces of these controversies are scattered throughout the SEM literature, and you can easily incite disagreement by asking: What should be done with models that are significantly inconsistent with the data? Or by asking: Does model simplicity override respect for evidence of data inconsistency? Or, what weight should be given to indexes which show close or not-so-close data fit for some models? Or, should we be especially lenient toward, and “reward”, parsimonious models that are inconsistent with the data? Or, given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn’t that mean that people testing models with null-hypotheses of non-zero RMSEA are doing deficient model testing? Considerable variation in statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence. 
== Extensions, modeling alternatives, and statistical kin ==

Categorical dependent variables
Categorical intervening variables
Copulas
Deep Path Modelling
Exploratory Structural Equation Modeling
Fusion validity models
Item response theory models
Latent class models
Latent growth modeling
Link functions
Longitudinal models
Measurement invariance models
Mixture model
Multilevel models, hierarchical models (e.g. people nested in groups)
Multiple group modelling with or without constraints between groups (genders, cultures, test forms, languages, etc.)
Multi-method multi-trait models
Random intercepts models
Structural Equation Model Trees
Structural Equation Multidimensional scaling

== Software ==

Structural equation modeling programs differ widely in their capabilities and user requirements. Below is a table of available software.

== See also ==

Causal model – Conceptual model in philosophy of science
Graphical model – Probabilistic model
Judea Pearl
Multivariate statistics – Simultaneous observation and analysis of more than one outcome variable
Partial least squares path modeling – Method for structural equation modeling
Partial least squares regression – Statistical method
Simultaneous equations model – Type of statistical model
Causal map – A network consisting of links or arcs between nodes or factors
Bayesian Network – Statistical model

== References ==

== Bibliography ==

Hu, Li-tze; Bentler, Peter M (1999). "Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives". Structural Equation Modeling. 6: 1–55. doi:10.1080/10705519909540118. hdl:2027.42/139911.
Kaplan, D. (2008). Structural Equation Modeling: Foundations and Extensions (2nd ed.). SAGE. ISBN 978-1412916240.
Kline, Rex (2011). Principles and Practice of Structural Equation Modeling (Third ed.). Guilford. ISBN 978-1-60623-876-9.
MacCallum, Robert; Austin, James (2000).
"Applications of Structural Equation Modeling in Psychological Research" (PDF). Annual Review of Psychology. 51: 201–226. doi:10.1146/annurev.psych.51.1.201. PMID 10751970. Archived from the original (PDF) on 28 January 2015. Retrieved 25 January 2015.
Quintana, Stephen M.; Maxwell, Scott E. (1999). "Implications of Recent Developments in Structural Equation Modeling for Counseling Psychology". The Counseling Psychologist. 27 (4): 485–527. doi:10.1177/0011000099274002. S2CID 145586057.

== Further reading ==

Bagozzi, Richard P; Yi, Youjae (2011). "Specification, evaluation, and interpretation of structural equation models". Journal of the Academy of Marketing Science. 40 (1): 8–34. doi:10.1007/s11747-011-0278-x. S2CID 167896719.
Bartholomew, D. J.; Knott, M. (1999). Latent Variable Models and Factor Analysis. Kendall's Library of Statistics, vol. 7. Edward Arnold Publishers. ISBN 0-340-69243-X.
Bentler, P.M.; Bonett, D.G. (1980). "Significance tests and goodness of fit in the analysis of covariance structures". Psychological Bulletin. 88: 588–606.
Bollen, K. A. (1989). Structural Equations with Latent Variables. Wiley. ISBN 0-471-01171-1.
Byrne, B. M. (2001). Structural Equation Modeling with AMOS - Basic Concepts, Applications, and Programming. LEA. ISBN 0-8058-4104-0.
Goldberger, A. S. (1972). "Structural equation models in the social sciences". Econometrica. 40: 979–1001.
Haavelmo, Trygve (January 1943). "The Statistical Implications of a System of Simultaneous Equations". Econometrica. 11 (1): 1–12. doi:10.2307/1905714. JSTOR 1905714.
Hoyle, R. H. (ed.) (1995). Structural Equation Modeling: Concepts, Issues, and Applications. SAGE. ISBN 0-8039-5318-6.
Jöreskog, Karl G.; Yang, Fan (1996). "Non-linear structural equation models: The Kenny-Judd model with interaction effects". In Marcoulides, George A.; Schumacker, Randall E. (eds.). Advanced structural equation modeling: Concepts, issues, and applications. Thousand Oaks, CA: Sage Publications. pp. 57–88.
ISBN 978-1-317-84380-1.
Lewis-Beck, Michael; Bryman, Alan E.; Bryman, Emeritus Professor Alan; Liao, Tim Futing (2004). "Structural Equation Modeling". The SAGE Encyclopedia of Social Science Research Methods. doi:10.4135/9781412950589.n979. hdl:2022/21973. ISBN 978-0-7619-2363-3.
Schermelleh-Engel, K.; Moosbrugger, H.; Müller, H. (2003). "Evaluating the fit of structural equation models" (PDF). Methods of Psychological Research. 8 (2): 23–74.

== External links ==

Structural equation modeling page under David Garson's StatNotes, NCSU
Issues and Opinion on Structural Equation Modeling, SEM in IS Research
The causal interpretation of structural equations (or SEM survival kit) by Judea Pearl 2000.
Structural Equation Modeling Reference List by Jason Newsom: journal articles and book chapters on structural equation models
Handbook of Management Scales, a collection of previously used multi-item scales to measure constructs for SEM
Wikipedia/Structural_equation_modeling
In mathematics, the Jacobian conjecture is a famous unsolved problem concerning polynomials in several variables. It states that if a polynomial function from an n-dimensional space to itself has a Jacobian determinant which is a non-zero constant, then the function has a polynomial inverse. It was first conjectured in 1939 by Ott-Heinrich Keller, and widely publicized by Shreeram Abhyankar, as an example of a difficult question in algebraic geometry that can be understood using little beyond a knowledge of calculus. The Jacobian conjecture is notorious for the large number of attempted proofs that turned out to contain subtle errors. As of 2018, there are no plausible claims to have proved it. Even the two-variable case has resisted all efforts. There are currently no known compelling reasons for believing the conjecture to be true, and according to van den Essen there are some suspicions that the conjecture is in fact false for large numbers of variables (equally, though, there is no compelling evidence to support these suspicions). The Jacobian conjecture is number 16 in Stephen Smale's 1998 list of Mathematical Problems for the Next Century. == The Jacobian determinant == Let N > 1 be a fixed integer and consider polynomials f1, ..., fN in variables X1, ..., XN with coefficients in a field k. Then we define a vector-valued function F: kN → kN by setting: F(X1, ..., XN) = (f1(X1, ...,XN),..., fN(X1,...,XN)). Any map F: kN → kN arising in this way is called a polynomial mapping.
The Jacobian determinant of F, denoted by JF, is defined as the determinant of the N × N Jacobian matrix consisting of the partial derivatives of fi with respect to Xj: J F = | ∂ f 1 ∂ X 1 ⋯ ∂ f 1 ∂ X N ⋮ ⋱ ⋮ ∂ f N ∂ X 1 ⋯ ∂ f N ∂ X N | , {\displaystyle J_{F}=\left|{\begin{matrix}{\frac {\partial f_{1}}{\partial X_{1}}}&\cdots &{\frac {\partial f_{1}}{\partial X_{N}}}\\\vdots &\ddots &\vdots \\{\frac {\partial f_{N}}{\partial X_{1}}}&\cdots &{\frac {\partial f_{N}}{\partial X_{N}}}\end{matrix}}\right|,} then JF is itself a polynomial function of the N variables X1, ..., XN. == Formulation of the conjecture == It follows from the multivariable chain rule that if F has a polynomial inverse function G: kN → kN, then JF has a polynomial reciprocal, so is a nonzero constant. The Jacobian conjecture is the following partial converse: Jacobian conjecture: Let k have characteristic 0. If JF is a non-zero constant, then F has an inverse function G: kN → kN which is regular, meaning its components are polynomials. According to van den Essen, the problem was first conjectured by Keller in 1939 for the limited case of two variables and integer coefficients. The obvious analogue of the Jacobian conjecture fails if k has characteristic p > 0 even for one variable. The characteristic of a field, if it is not zero, must be prime, so at least 2. The polynomial x − xp has derivative 1 − p xp−1 which is 1 (because px is 0) but it has no inverse function. However, Kossivi Adjamagbo suggested extending the Jacobian conjecture to characteristic p > 0 by adding the hypothesis that p does not divide the degree of the field extension k(X) / k(F). The existence of a polynomial inverse is obvious if F is simply a set of functions linear in the variables, because then the inverse will also be a set of linear functions. 
A simple non-linear example is given by u = x 2 + y + x {\displaystyle u=x^{2}+y+x} v = x 2 + y {\displaystyle v=x^{2}+y} so that the Jacobian determinant is J F = | 1 + 2 x 1 2 x 1 | = ( 1 + 2 x ) ( 1 ) − ( 1 ) 2 x = 1. {\displaystyle J_{F}=\left|{\begin{matrix}1+2x&1\\2x&1\end{matrix}}\right|=(1+2x)(1)-(1)2x=1.} In this case the inverse exists as the polynomials x = u − v {\displaystyle x=u-v} y = v − ( u − v ) 2 . {\displaystyle y=v-(u-v)^{2}.} But if we modify F slightly, to u = 2 x 2 + y {\displaystyle u=2x^{2}+y} v = x 2 + y {\displaystyle v=x^{2}+y} then the determinant is J F = | 4 x 1 2 x 1 | = ( 4 x ) ( 1 ) − 2 x ( 1 ) = 2 x , {\displaystyle J_{F}=\left|{\begin{matrix}4x&1\\2x&1\end{matrix}}\right|=(4x)(1)-2x(1)=2x,} which is not constant, and the Jacobian conjecture does not apply. The function still has an inverse: x = u − v {\displaystyle x={\sqrt {u-v}}} y = 2 v − u , {\displaystyle y=2v-u,} but the expression for x is not a polynomial. The condition JF ≠ 0 is related to the inverse function theorem in multivariable calculus. In fact for smooth functions (and so in particular for polynomials) a smooth local inverse function to F exists at every point where JF is non-zero. For example, the map x → x + x3 has a smooth global inverse, but the inverse is not polynomial. == Results == Stuart Sui-Sheng Wang proved the Jacobian conjecture for polynomials of degree 2. Hyman Bass, Edwin Connell, and David Wright showed that the general case follows from the special case where the polynomials are of degree 3, or even more specifically, of cubic homogeneous type, meaning of the form F = (X1 + H1, ..., Xn + Hn), where each Hi is either zero or a homogeneous cubic. Ludwik Drużkowski showed that one may further assume that the map is of cubic linear type, meaning that the nonzero Hi are cubes of homogeneous linear polynomials. Drużkowski's reduction seems to be one of the most promising ways forward.
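The first worked example above can be checked with a short Python sketch: the claimed polynomial inverse really undoes F, and the Jacobian determinant evaluates to the constant 1 at every test point.

```python
# Forward map F(x, y) = (u, v) with u = x^2 + y + x, v = x^2 + y.
def F(x, y):
    return (x**2 + y + x, x**2 + y)

# Claimed polynomial inverse G(u, v) = (u - v, v - (u - v)^2).
def G(u, v):
    x = u - v
    return (x, v - x**2)

# Jacobian determinant of F: (1 + 2x)*1 - 1*(2x), which simplifies to 1.
def jacobian_det(x, y):
    return (1 + 2 * x) * 1 - 1 * (2 * x)

# Check on a grid of integer points that G inverts F exactly
# and that the determinant is the constant 1.
for x in range(-5, 6):
    for y in range(-5, 6):
        assert G(*F(x, y)) == (x, y)
        assert jacobian_det(x, y) == 1
```

Such spot checks cannot prove the conjecture for any map, of course; here they merely confirm the algebra of the example.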
These reductions introduce additional variables and so are not available for fixed N. Edwin Connell and Lou van den Dries proved that if the Jacobian conjecture is false, then it has a counterexample with integer coefficients and Jacobian determinant 1. In consequence, the Jacobian conjecture is true either for all fields of characteristic 0 or for none. For fixed dimension N, it is true if it holds for at least one algebraically closed field of characteristic 0. Let k[X] denote the polynomial ring k[X1, ..., Xn] and k[F] denote the k-subalgebra generated by f1, ..., fn. For a given F, the Jacobian conjecture is true if, and only if, k[X] = k[F]. Keller (1939) proved the birational case, that is, where the two fields k(X) and k(F) are equal. The case where k(X) is a Galois extension of k(F) was proved by Andrew Campbell for complex maps and in general by Michael Razar and, independently, by David Wright. Tzuong-Tsieng Moh checked the conjecture for polynomials of degree at most 100 in two variables. Michiel de Bondt and Arno van den Essen and Ludwik Drużkowski independently showed that it is enough to prove the Jacobian Conjecture for complex maps of cubic homogeneous type with a symmetric Jacobian matrix, and further showed that the conjecture holds for maps of cubic linear type with a symmetric Jacobian matrix, over any field of characteristic 0. The strong real Jacobian conjecture was that a real polynomial map with a nowhere vanishing Jacobian determinant has a smooth global inverse. That is equivalent to asking whether such a map is topologically a proper map, in which case it is a covering map of a simply connected manifold, hence invertible. Sergey Pinchuk constructed two variable counterexamples of total degree 35 and higher. It is well known that the Dixmier conjecture implies the Jacobian conjecture. 
Conversely, it is shown by Yoshifumi Tsuchimoto and independently by Alexei Belov-Kanel and Maxim Kontsevich that the Jacobian conjecture for 2N variables implies the Dixmier conjecture in N dimensions. A self-contained and purely algebraic proof of the last implication is also given by Kossivi Adjamagbo and Arno van den Essen, who also proved in the same paper that these two conjectures are equivalent to the Poisson conjecture. == See also == List of unsolved problems in mathematics == References == == External links == Web page of Tzuong-Tsieng Moh on the conjecture
Wikipedia/Jacobian_conjecture
In mathematics, particularly in operator theory and C*-algebra theory, the continuous functional calculus is a functional calculus which allows the application of a continuous function to normal elements of a C*-algebra. In advanced theory, the applications of this functional calculus are so natural that they are often not even mentioned. It is no overstatement to say that the continuous functional calculus makes the difference between C*-algebras and general Banach algebras, in which only a holomorphic functional calculus exists. == Motivation == If one wants to extend the natural functional calculus for polynomials on the spectrum σ ( a ) {\displaystyle \sigma (a)} of an element a {\displaystyle a} of a Banach algebra A {\displaystyle {\mathcal {A}}} to a functional calculus for continuous functions C ( σ ( a ) ) {\displaystyle C(\sigma (a))} on the spectrum, it seems obvious to approximate a continuous function by polynomials according to the Stone-Weierstrass theorem, to insert the element into these polynomials and to show that this sequence of elements converges to an element of A {\displaystyle {\mathcal {A}}} . The continuous functions on σ ( a ) ⊂ C {\displaystyle \sigma (a)\subset \mathbb {C} } are approximated by polynomials in z {\displaystyle z} and z ¯ {\displaystyle {\overline {z}}} , i.e. by polynomials of the form p ( z , z ¯ ) = ∑ k , l = 0 N c k , l z k z ¯ l ( c k , l ∈ C ) {\textstyle p(z,{\overline {z}})=\sum _{k,l=0}^{N}c_{k,l}z^{k}{\overline {z}}^{l}\;\left(c_{k,l}\in \mathbb {C} \right)} . Here, z ¯ {\displaystyle {\overline {z}}} denotes the complex conjugation, which is an involution on the complex numbers. To be able to insert a {\displaystyle a} in place of z {\displaystyle z} in this kind of polynomial, Banach *-algebras are considered, i.e. Banach algebras that also have an involution *, and a ∗ {\displaystyle a^{*}} is inserted in place of z ¯ {\displaystyle {\overline {z}}} .
In order to obtain a homomorphism C [ z , z ¯ ] → A {\displaystyle {\mathbb {C} }[z,{\overline {z}}]\rightarrow {\mathcal {A}}} , a restriction to normal elements, i.e. elements with a ∗ a = a a ∗ {\displaystyle a^{*}a=aa^{*}} , is necessary, as the polynomial ring C [ z , z ¯ ] {\displaystyle \mathbb {C} [z,{\overline {z}}]} is commutative. If ( p n ( z , z ¯ ) ) n {\displaystyle (p_{n}(z,{\overline {z}}))_{n}} is a sequence of polynomials that converges uniformly on σ ( a ) {\displaystyle \sigma (a)} to a continuous function f {\displaystyle f} , the convergence of the sequence ( p n ( a , a ∗ ) ) n {\displaystyle (p_{n}(a,a^{*}))_{n}} in A {\displaystyle {\mathcal {A}}} to an element f ( a ) {\displaystyle f(a)} must be ensured. A detailed analysis of this convergence problem shows that it is necessary to resort to C*-algebras. These considerations lead to the so-called continuous functional calculus. == Theorem == Due to the *-homomorphism property, the following calculation rules apply to all functions f , g ∈ C ( σ ( a ) ) {\displaystyle f,g\in C(\sigma (a))} and scalars λ , μ ∈ C {\displaystyle \lambda ,\mu \in \mathbb {C} } : One can therefore imagine actually inserting the normal elements into continuous functions; the obvious algebraic operations behave as expected. The requirement for a unit element is not a significant restriction. If necessary, a unit element can be adjoined, yielding the enlarged C*-algebra A 1 {\displaystyle {\mathcal {A}}_{1}} . Then if a ∈ A {\displaystyle a\in {\mathcal {A}}} and f ∈ C ( σ ( a ) ) {\displaystyle f\in C(\sigma (a))} with f ( 0 ) = 0 {\displaystyle f(0)=0} , it follows that 0 ∈ σ ( a ) {\displaystyle 0\in \sigma (a)} and f ( a ) ∈ A ⊂ A 1 {\displaystyle f(a)\in {\mathcal {A}}\subset {\mathcal {A}}_{1}} . 
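A toy model makes the calculation rules concrete: if a normal element is represented by a diagonal matrix, the continuous functional calculus is just entrywise application of f to the diagonal entries, which form the spectrum. A minimal sketch under that simplifying assumption (the diagonal model illustrates the rules; it is not the general construction):

```python
# Diagonal toy model: a normal element is represented by the list of its
# spectral values; applying a function means applying it entrywise.
spec = [1 + 2j, -1j, 0.5 + 0j]          # a hypothetical finite spectrum

def Phi(f):
    return [f(z) for z in spec]

f = lambda z: z * z + 1                  # a polynomial in z
g = lambda z: z.conjugate()              # plays the role of z-bar

# *-homomorphism rules: Phi is additive, multiplicative and *-compatible
assert Phi(lambda z: f(z) + g(z)) == [u + v for u, v in zip(Phi(f), Phi(g))]
assert Phi(lambda z: f(z) * g(z)) == [u * v for u, v in zip(Phi(f), Phi(g))]
assert Phi(lambda z: f(z).conjugate()) == [u.conjugate() for u in Phi(f)]
```

In the general C*-algebra the same rules hold, but their proof requires the Gelfand representation rather than this entrywise picture.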
The existence and uniqueness of the continuous functional calculus are proven separately: Existence: Since the spectrum of a {\displaystyle a} in the C*-subalgebra C ∗ ( a , e ) {\displaystyle C^{*}(a,e)} generated by a {\displaystyle a} and e {\displaystyle e} is the same as it is in A {\displaystyle {\mathcal {A}}} , it suffices to show the statement for A = C ∗ ( a , e ) {\displaystyle {\mathcal {A}}=C^{*}(a,e)} . The actual construction is almost immediate from the Gelfand representation: it suffices to assume A {\displaystyle {\mathcal {A}}} is the C*-algebra of continuous functions on some compact space X {\displaystyle X} and define Φ a ( f ) = f ∘ x {\displaystyle \Phi _{a}(f)=f\circ x} . Uniqueness: Since Φ a ( 1 ) {\displaystyle \Phi _{a}({\boldsymbol {1}})} and Φ a ( Id σ ( a ) ) {\displaystyle \Phi _{a}(\operatorname {Id} _{\sigma (a)})} are fixed, Φ a {\displaystyle \Phi _{a}} is already uniquely defined for all polynomials p ( z , z ¯ ) = ∑ k , l = 0 N c k , l z k z ¯ l ( c k , l ∈ C ) {\textstyle p(z,{\overline {z}})=\sum _{k,l=0}^{N}c_{k,l}z^{k}{\overline {z}}^{l}\;\left(c_{k,l}\in \mathbb {C} \right)} , since Φ a {\displaystyle \Phi _{a}} is a *-homomorphism. These form a dense subalgebra of C ( σ ( a ) ) {\displaystyle C(\sigma (a))} by the Stone-Weierstrass theorem. Thus Φ a {\displaystyle \Phi _{a}} is unique. In functional analysis, the continuous functional calculus for a normal operator T {\displaystyle T} is often of interest, i.e. the case where A {\displaystyle {\mathcal {A}}} is the C*-algebra B ( H ) {\displaystyle {\mathcal {B}}(H)} of bounded operators on a Hilbert space H {\displaystyle H} . In the literature, the continuous functional calculus is often only proved for self-adjoint operators in this setting. In this case, the proof does not need the Gelfand representation. 
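The density of polynomials invoked in the uniqueness argument can be made tangible with a classical example: the recursively defined polynomials q_0 = 0, q_{n+1}(t) = q_n(t) + (t − q_n(t)^2)/2 converge uniformly to the square root on [0, 1]. A quick numerical check (illustrative only; the standard error bound is sup|√t − q_n(t)| ≤ 2/(n+1)):

```python
import math

# q_0 = 0, q_{n+1}(t) = q_n(t) + (t - q_n(t)^2) / 2: polynomials in t that
# converge uniformly to sqrt(t) on [0, 1].
def q(n, t):
    v = 0.0
    for _ in range(n):
        v += (t - v * v) / 2
    return v

pts = [i / 100 for i in range(101)]
errs = [max(abs(q(n, t) - math.sqrt(t)) for t in pts) for n in (5, 50, 500)]
assert errs[0] > errs[1] > errs[2]   # the uniform error decreases with n
assert errs[2] < 0.01                # and becomes small
```

Uniform convergence of such polynomial approximants is exactly what lets the calculus be pinned down on all of C(σ(a)) from its values on polynomials.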
== Further properties of the continuous functional calculus == The continuous functional calculus Φ a {\displaystyle \Phi _{a}} is an isometric isomorphism into the C*-subalgebra C ∗ ( a , e ) {\displaystyle C^{*}(a,e)} generated by a {\displaystyle a} and e {\displaystyle e} , that is: ‖ Φ a ( f ) ‖ = ‖ f ‖ σ ( a ) {\displaystyle \left\|\Phi _{a}(f)\right\|=\left\|f\right\|_{\sigma (a)}} for all f ∈ C ( σ ( a ) ) {\displaystyle f\in C(\sigma (a))} ; Φ a {\displaystyle \Phi _{a}} is therefore continuous. Φ a ( C ( σ ( a ) ) ) = C ∗ ( a , e ) ⊆ A {\displaystyle \Phi _{a}\left(C(\sigma (a))\right)=C^{*}(a,e)\subseteq {\mathcal {A}}} Since a {\displaystyle a} is a normal element of A {\displaystyle {\mathcal {A}}} , the C*-subalgebra generated by a {\displaystyle a} and e {\displaystyle e} is commutative. In particular, f ( a ) {\displaystyle f(a)} is normal and all elements of a functional calculus commute. The continuous functional calculus extends the holomorphic functional calculus in an unambiguous way. Therefore, for polynomials p ( z , z ¯ ) {\displaystyle p(z,{\overline {z}})} the continuous functional calculus corresponds to the natural functional calculus for polynomials: Φ a ( p ( z , z ¯ ) ) = p ( a , a ∗ ) = ∑ k , l = 0 N c k , l a k ( a ∗ ) l {\textstyle \Phi _{a}(p(z,{\overline {z}}))=p(a,a^{*})=\sum _{k,l=0}^{N}c_{k,l}a^{k}(a^{*})^{l}} for all p ( z , z ¯ ) = ∑ k , l = 0 N c k , l z k z ¯ l {\textstyle p(z,{\overline {z}})=\sum _{k,l=0}^{N}c_{k,l}z^{k}{\overline {z}}^{l}} with c k , l ∈ C {\displaystyle c_{k,l}\in \mathbb {C} } . For a sequence of functions f n ∈ C ( σ ( a ) ) {\displaystyle f_{n}\in C(\sigma (a))} that converges uniformly on σ ( a ) {\displaystyle \sigma (a)} to a function f ∈ C ( σ ( a ) ) {\displaystyle f\in C(\sigma (a))} , f n ( a ) {\displaystyle f_{n}(a)} converges to f ( a ) {\displaystyle f(a)} .
For a power series f ( z ) = ∑ n = 0 ∞ c n z n {\textstyle f(z)=\sum _{n=0}^{\infty }c_{n}z^{n}} that converges absolutely uniformly on σ ( a ) {\displaystyle \sigma (a)} , f ( a ) = ∑ n = 0 ∞ c n a n {\textstyle f(a)=\sum _{n=0}^{\infty }c_{n}a^{n}} therefore holds. If f ∈ C ( σ ( a ) ) {\displaystyle f\in {\mathcal {C}}(\sigma (a))} and g ∈ C ( σ ( f ( a ) ) ) {\displaystyle g\in {\mathcal {C}}(\sigma (f(a)))} , then ( g ∘ f ) ( a ) = g ( f ( a ) ) {\displaystyle (g\circ f)(a)=g(f(a))} holds for their composition. If a , b ∈ A N {\displaystyle a,b\in {\mathcal {A}}_{N}} are two normal elements with f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} and g {\displaystyle g} is the inverse function of f {\displaystyle f} on both σ ( a ) {\displaystyle \sigma (a)} and σ ( b ) {\displaystyle \sigma (b)} , then a = b {\displaystyle a=b} , since a = ( g ∘ f ) ( a ) = g ( f ( a ) ) = g ( f ( b ) ) = ( g ∘ f ) ( b ) = b {\displaystyle a=(g\circ f)(a)=g(f(a))=g(f(b))=(g\circ f)(b)=b} . The spectral mapping theorem applies: σ ( f ( a ) ) = f ( σ ( a ) ) {\displaystyle \sigma (f(a))=f(\sigma (a))} for all f ∈ C ( σ ( a ) ) {\displaystyle f\in C(\sigma (a))} . If a b = b a {\displaystyle ab=ba} holds for b ∈ A {\displaystyle b\in {\mathcal {A}}} , then f ( a ) b = b f ( a ) {\displaystyle f(a)b=bf(a)} also holds for all f ∈ C ( σ ( a ) ) {\displaystyle f\in C(\sigma (a))} , i.e. if b {\displaystyle b} commutes with a {\displaystyle a} , then it also commutes with the corresponding elements of the continuous functional calculus f ( a ) {\displaystyle f(a)} . Let Ψ : A → B {\displaystyle \Psi \colon {\mathcal {A}}\rightarrow {\mathcal {B}}} be a unital *-homomorphism between C*-algebras A {\displaystyle {\mathcal {A}}} and B {\displaystyle {\mathcal {B}}} . Then Ψ {\displaystyle \Psi } commutes with the continuous functional calculus. The following holds: Ψ ( f ( a ) ) = f ( Ψ ( a ) ) {\displaystyle \Psi (f(a))=f(\Psi (a))} for all f ∈ C ( σ ( a ) ) {\displaystyle f\in C(\sigma (a))} .
In particular, the continuous functional calculus commutes with the Gelfand representation. With the spectral mapping theorem, functions with certain properties can be directly related to certain properties of elements of C*-algebras: f ( a ) {\displaystyle f(a)} is invertible if and only if f {\displaystyle f} has no zero on σ ( a ) {\displaystyle \sigma (a)} . Then f ( a ) − 1 = 1 f ( a ) {\textstyle f(a)^{-1}={\tfrac {1}{f}}(a)} holds. f ( a ) {\displaystyle f(a)} is self-adjoint if and only if f {\displaystyle f} is real-valued, i.e. f ( σ ( a ) ) ⊆ R {\displaystyle f(\sigma (a))\subseteq \mathbb {R} } . f ( a ) {\displaystyle f(a)} is positive ( f ( a ) ≥ 0 {\displaystyle f(a)\geq 0} ) if and only if f ≥ 0 {\displaystyle f\geq 0} , i.e. f ( σ ( a ) ) ⊆ [ 0 , ∞ ) {\displaystyle f(\sigma (a))\subseteq [0,\infty )} . f ( a ) {\displaystyle f(a)} is unitary if and only if all values of f {\displaystyle f} lie in the circle group, i.e. f ( σ ( a ) ) ⊆ T = { λ ∈ C ∣ ‖ λ ‖ = 1 } {\displaystyle f(\sigma (a))\subseteq \mathbb {T} =\{\lambda \in \mathbb {C} \mid \left\|\lambda \right\|=1\}} . f ( a ) {\displaystyle f(a)} is a projection if and only if f {\displaystyle f} only takes on the values 0 {\displaystyle 0} and 1 {\displaystyle 1} , i.e. f ( σ ( a ) ) ⊆ { 0 , 1 } {\displaystyle f(\sigma (a))\subseteq \{0,1\}} . These are based on statements about the spectrum of certain elements, which are shown in the Applications section. In the special case that A {\displaystyle {\mathcal {A}}} is the C*-algebra of bounded operators B ( H ) {\displaystyle {\mathcal {B}}(H)} for a Hilbert space H {\displaystyle H} , eigenvectors v ∈ H {\displaystyle v\in H} for the eigenvalue λ ∈ σ ( T ) {\displaystyle \lambda \in \sigma (T)} of a normal operator T ∈ B ( H ) {\displaystyle T\in {\mathcal {B}}(H)} are also eigenvectors for the eigenvalue f ( λ ) ∈ σ ( f ( T ) ) {\displaystyle f(\lambda )\in \sigma (f(T))} of the operator f ( T ) {\displaystyle f(T)} .
If T v = λ v {\displaystyle Tv=\lambda v} , then f ( T ) v = f ( λ ) v {\displaystyle f(T)v=f(\lambda )v} also holds for all f ∈ C ( σ ( T ) ) {\displaystyle f\in C(\sigma (T))} . == Applications == The following applications are typical and very simple examples of the numerous applications of the continuous functional calculus: === Spectrum === Let A {\displaystyle {\mathcal {A}}} be a C*-algebra and a ∈ A N {\displaystyle a\in {\mathcal {A}}_{N}} a normal element. Then the following applies to the spectrum σ ( a ) {\displaystyle \sigma (a)} : a {\displaystyle a} is self-adjoint if and only if σ ( a ) ⊆ R {\displaystyle \sigma (a)\subseteq \mathbb {R} } . a {\displaystyle a} is unitary if and only if σ ( a ) ⊆ T = { λ ∈ C ∣ ‖ λ ‖ = 1 } {\displaystyle \sigma (a)\subseteq \mathbb {T} =\{\lambda \in \mathbb {C} \mid \left\|\lambda \right\|=1\}} . a {\displaystyle a} is a projection if and only if σ ( a ) ⊆ { 0 , 1 } {\displaystyle \sigma (a)\subseteq \{0,1\}} . Proof. The continuous functional calculus Φ a {\displaystyle \Phi _{a}} for the normal element a ∈ A {\displaystyle a\in {\mathcal {A}}} is a *-homomorphism with Φ a ( Id ) = a {\displaystyle \Phi _{a}(\operatorname {Id} )=a} and thus a {\displaystyle a} is self-adjoint/unitary/a projection if and only if Id ∈ C ( σ ( a ) ) {\displaystyle \operatorname {Id} \in C(\sigma (a))} is also self-adjoint/unitary/a projection. Id {\displaystyle \operatorname {Id} } is self-adjoint if and only if z = Id ( z ) = Id ¯ ( z ) = z ¯ {\displaystyle z={\text{Id}}(z)={\overline {\text{Id}}}(z)={\overline {z}}} holds for all z ∈ σ ( a ) {\displaystyle z\in \sigma (a)} , i.e. if σ ( a ) {\displaystyle \sigma (a)} is real.
Id {\displaystyle {\text{Id}}} is unitary if and only if 1 = Id ( z ) Id ¯ ( z ) = z z ¯ = | z | 2 {\displaystyle 1={\text{Id}}(z){\overline {\operatorname {Id} }}(z)=z{\overline {z}}=|z|^{2}} holds for all z ∈ σ ( a ) {\displaystyle z\in \sigma (a)} , therefore σ ( a ) ⊆ { λ ∈ C | ‖ λ ‖ = 1 } {\displaystyle \sigma (a)\subseteq \{\lambda \in \mathbb {C} \ |\ \left\|\lambda \right\|=1\}} . Id {\displaystyle {\text{Id}}} is a projection if and only if ( Id ⁡ ( z ) ) 2 = Id ⁡ ( z ) = Id ⁡ ( z ) ¯ {\displaystyle (\operatorname {Id} (z))^{2}=\operatorname {Id} (z)={\overline {\operatorname {Id} (z)}}} , that is z 2 = z = z ¯ {\displaystyle z^{2}=z={\overline {z}}} for all z ∈ σ ( a ) {\displaystyle z\in \sigma (a)} , i.e. σ ( a ) ⊆ { 0 , 1 } {\displaystyle \sigma (a)\subseteq \{0,1\}} . === Roots === Let a {\displaystyle a} be a positive element of a C*-algebra A {\displaystyle {\mathcal {A}}} . Then for every n ∈ N {\displaystyle n\in \mathbb {N} } there exists a uniquely determined positive element b ∈ A + {\displaystyle b\in {\mathcal {A}}_{+}} with b n = a {\displaystyle b^{n}=a} , i.e. a unique n {\displaystyle n} -th root. Proof. For each n ∈ N {\displaystyle n\in \mathbb {N} } , the root function f n : R 0 + → R 0 + , x ↦ x n {\displaystyle f_{n}\colon \mathbb {R} _{0}^{+}\to \mathbb {R} _{0}^{+},x\mapsto {\sqrt[{n}]{x}}} is a continuous function on σ ( a ) ⊆ R 0 + {\displaystyle \sigma (a)\subseteq \mathbb {R} _{0}^{+}} . If b : = f n ( a ) {\displaystyle b\;\colon =f_{n}(a)} is defined using the continuous functional calculus, then b n = ( f n ( a ) ) n = ( f n n ) ( a ) = Id σ ( a ) ⁡ ( a ) = a {\displaystyle b^{n}=(f_{n}(a))^{n}=(f_{n}^{n})(a)=\operatorname {Id} _{\sigma (a)}(a)=a} follows from the properties of the calculus. From the spectral mapping theorem follows σ ( b ) = σ ( f n ( a ) ) = f n ( σ ( a ) ) ⊆ [ 0 , ∞ ) {\displaystyle \sigma (b)=\sigma (f_{n}(a))=f_{n}(\sigma (a))\subseteq [0,\infty )} , i.e. b {\displaystyle b} is positive.
If c ∈ A + {\displaystyle c\in {\mathcal {A}}_{+}} is another positive element with c n = a = b n {\displaystyle c^{n}=a=b^{n}} , then c = f n ( c n ) = f n ( b n ) = b {\displaystyle c=f_{n}(c^{n})=f_{n}(b^{n})=b} holds, as the root function on the positive real numbers is an inverse function to the function z ↦ z n {\displaystyle z\mapsto z^{n}} . If a ∈ A s a {\displaystyle a\in {\mathcal {A}}_{sa}} is a self-adjoint element, then at least for every odd n ∈ N {\displaystyle n\in \mathbb {N} } there is a uniquely determined self-adjoint element b ∈ A s a {\displaystyle b\in {\mathcal {A}}_{sa}} with b n = a {\displaystyle b^{n}=a} . Similarly, for a positive element a {\displaystyle a} of a C*-algebra A {\displaystyle {\mathcal {A}}} , each α ≥ 0 {\displaystyle \alpha \geq 0} defines a uniquely determined positive element a α {\displaystyle a^{\alpha }} of C ∗ ( a ) {\displaystyle C^{*}(a)} , such that a α a β = a α + β {\displaystyle a^{\alpha }a^{\beta }=a^{\alpha +\beta }} holds for all α , β ≥ 0 {\displaystyle \alpha ,\beta \geq 0} . If a {\displaystyle a} is invertible, this can also be extended to negative values of α {\displaystyle \alpha } . === Absolute value === If a ∈ A {\displaystyle a\in {\mathcal {A}}} , then the element a ∗ a {\displaystyle a^{*}a} is positive, so that the absolute value can be defined by the continuous functional calculus | a | = a ∗ a {\displaystyle |a|={\sqrt {a^{*}a}}} , since it is continuous on the positive real numbers. Let a {\displaystyle a} be a self-adjoint element of a C*-algebra A {\displaystyle {\mathcal {A}}} , then there exist positive elements a + , a − ∈ A + {\displaystyle a_{+},a_{-}\in {\mathcal {A}}_{+}} , such that a = a + − a − {\displaystyle a=a_{+}-a_{-}} with a + a − = a − a + = 0 {\displaystyle a_{+}a_{-}=a_{-}a_{+}=0} holds. The elements a + {\displaystyle a_{+}} and a − {\displaystyle a_{-}} are also referred to as the positive and negative parts. 
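In a diagonal toy model (a self-adjoint element represented by a list of its real spectral values, with functions acting entrywise; an illustration, not the general proof), the positive and negative parts can be computed directly:

```python
# Diagonal toy model: represent a self-adjoint element by its spectrum,
# a list of real numbers; continuous functions act entrywise.
spec = [-2.0, -0.5, 1.0, 3.0]               # hypothetical spectrum of a

a_plus  = [max(z, 0.0) for z in spec]       # f_+(z) = max(z, 0)
a_minus = [-min(z, 0.0) for z in spec]      # f_-(z) = -min(z, 0)

assert all(p - m == z for z, p, m in zip(spec, a_plus, a_minus))  # a = a+ - a-
assert all(p * m == 0.0 for p, m in zip(a_plus, a_minus))         # a+ a- = 0
```

The general statement replaces the entrywise picture by the continuous functional calculus, exactly as in the proof that follows.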
In addition, | a | = a + + a − {\displaystyle |a|=a_{+}+a_{-}} holds. Proof. The functions f + ( z ) = max ( z , 0 ) {\displaystyle f_{+}(z)=\max(z,0)} and f − ( z ) = − min ( z , 0 ) {\displaystyle f_{-}(z)=-\min(z,0)} are continuous functions on σ ( a ) ⊆ R {\displaystyle \sigma (a)\subseteq \mathbb {R} } with Id ⁡ ( z ) = z = f + ( z ) − f − ( z ) {\displaystyle \operatorname {Id} (z)=z=f_{+}(z)-f_{-}(z)} and f + ( z ) f − ( z ) = f − ( z ) f + ( z ) = 0 {\displaystyle f_{+}(z)f_{-}(z)=f_{-}(z)f_{+}(z)=0} . Put a + = f + ( a ) {\displaystyle a_{+}=f_{+}(a)} and a − = f − ( a ) {\displaystyle a_{-}=f_{-}(a)} . According to the spectral mapping theorem, a + {\displaystyle a_{+}} and a − {\displaystyle a_{-}} are positive elements for which a = Id ⁡ ( a ) = ( f + − f − ) ( a ) = f + ( a ) − f − ( a ) = a + − a − {\displaystyle a=\operatorname {Id} (a)=(f_{+}-f_{-})(a)=f_{+}(a)-f_{-}(a)=a_{+}-a_{-}} and a + a − = f + ( a ) f − ( a ) = ( f + f − ) ( a ) = 0 = ( f − f + ) ( a ) = f − ( a ) f + ( a ) = a − a + {\displaystyle a_{+}a_{-}=f_{+}(a)f_{-}(a)=(f_{+}f_{-})(a)=0=(f_{-}f_{+})(a)=f_{-}(a)f_{+}(a)=a_{-}a_{+}} holds. Furthermore, f + ( z ) + f − ( z ) = | z | = z ∗ z = z 2 {\textstyle f_{+}(z)+f_{-}(z)=|z|={\sqrt {z^{*}z}}={\sqrt {z^{2}}}} , such that a + + a − = f + ( a ) + f − ( a ) = | a | = a ∗ a = a 2 {\textstyle a_{+}+a_{-}=f_{+}(a)+f_{-}(a)=|a|={\sqrt {a^{*}a}}={\sqrt {a^{2}}}} holds. === Unitary elements === If a {\displaystyle a} is a self-adjoint element of a C*-algebra A {\displaystyle {\mathcal {A}}} with unit element e {\displaystyle e} , then u = e i a {\displaystyle u=\mathrm {e} ^{\mathrm {i} a}} is unitary, where i {\displaystyle \mathrm {i} } denotes the imaginary unit. Conversely, if u ∈ A U {\displaystyle u\in {\mathcal {A}}_{U}} is a unitary element, with the restriction that the spectrum is a proper subset of the unit circle, i.e.
σ ( u ) ⊊ T {\displaystyle \sigma (u)\subsetneq \mathbb {T} } , there exists a self-adjoint element a ∈ A s a {\displaystyle a\in {\mathcal {A}}_{sa}} with u = e i a {\displaystyle u=\mathrm {e} ^{\mathrm {i} a}} . Proof. It is u = f ( a ) {\displaystyle u=f(a)} with f : R → C , x ↦ e i x {\displaystyle f\colon \mathbb {R} \to \mathbb {C} ,\ x\mapsto \mathrm {e} ^{\mathrm {i} x}} . Since a {\displaystyle a} is self-adjoint, it follows that σ ( a ) ⊂ R {\displaystyle \sigma (a)\subset \mathbb {R} } , i.e. f {\displaystyle f} is a function on the spectrum of a {\displaystyle a} . Since f ⋅ f ¯ = f ¯ ⋅ f = 1 {\displaystyle f\cdot {\overline {f}}={\overline {f}}\cdot f=1} , using the functional calculus u u ∗ = u ∗ u = e {\displaystyle uu^{*}=u^{*}u=e} follows, i.e. u {\displaystyle u} is unitary. For the other statement, there is a z 0 ∈ R {\displaystyle z_{0}\in \mathbb {R} } such that σ ( u ) ⊆ { e i z ∣ z 0 ≤ z ≤ z 0 + 2 π } {\displaystyle \sigma (u)\subseteq \{\mathrm {e} ^{\mathrm {i} z}\mid z_{0}\leq z\leq z_{0}+2\pi \}} , so the function f ( e i z ) = z {\displaystyle f(\mathrm {e} ^{\mathrm {i} z})=z} is a real-valued continuous function on the spectrum σ ( u ) {\displaystyle \sigma (u)} for z 0 ≤ z ≤ z 0 + 2 π {\displaystyle z_{0}\leq z\leq z_{0}+2\pi } , such that a = f ( u ) {\displaystyle a=f(u)} is a self-adjoint element that satisfies e i a = e i f ( u ) = u {\displaystyle \mathrm {e} ^{\mathrm {i} a}=\mathrm {e} ^{\mathrm {i} f(u)}=u} . === Spectral decomposition theorem === Let A {\displaystyle {\mathcal {A}}} be a unital C*-algebra and a ∈ A N {\displaystyle a\in {\mathcal {A}}_{N}} a normal element. Let the spectrum consist of n {\displaystyle n} pairwise disjoint closed subsets σ k ⊂ C {\displaystyle \sigma _{k}\subset \mathbb {C} } for all 1 ≤ k ≤ n {\displaystyle 1\leq k\leq n} , i.e. σ ( a ) = σ 1 ⊔ ⋯ ⊔ σ n {\displaystyle \sigma (a)=\sigma _{1}\sqcup \cdots \sqcup \sigma _{n}} .
Then there exist projections p 1 , … , p n ∈ A {\displaystyle p_{1},\ldots ,p_{n}\in {\mathcal {A}}} that have the following properties for all 1 ≤ j , k ≤ n {\displaystyle 1\leq j,k\leq n} : The projections commute with a {\displaystyle a} , i.e. p k a = a p k {\displaystyle p_{k}a=ap_{k}} . The projections are orthogonal, i.e. p j p k = δ j k p k {\displaystyle p_{j}p_{k}=\delta _{jk}p_{k}} . The sum of the projections is the unit element, i.e. ∑ k = 1 n p k = e {\textstyle \sum _{k=1}^{n}p_{k}=e} . In particular, there is a decomposition a = ∑ k = 1 n a k {\textstyle a=\sum _{k=1}^{n}a_{k}} for which σ ( a k ) = σ k {\displaystyle \sigma (a_{k})=\sigma _{k}} holds for all 1 ≤ k ≤ n {\displaystyle 1\leq k\leq n} . Proof. Since all σ k {\displaystyle \sigma _{k}} are closed and finitely many of them cover σ ( a ) {\displaystyle \sigma (a)} , each σ k {\displaystyle \sigma _{k}} is also open in σ ( a ) {\displaystyle \sigma (a)} , so the characteristic functions χ σ k {\displaystyle \chi _{\sigma _{k}}} are continuous on σ ( a ) {\displaystyle \sigma (a)} . Now let p k := χ σ k ( a ) {\displaystyle p_{k}:=\chi _{\sigma _{k}}(a)} be defined using the continuous functional calculus. As the σ k {\displaystyle \sigma _{k}} are pairwise disjoint, χ σ j χ σ k = δ j k χ σ k {\displaystyle \chi _{\sigma _{j}}\chi _{\sigma _{k}}=\delta _{jk}\chi _{\sigma _{k}}} and ∑ k = 1 n χ σ k = χ ∪ k = 1 n σ k = χ σ ( a ) = 1 {\textstyle \sum _{k=1}^{n}\chi _{\sigma _{k}}=\chi _{\cup _{k=1}^{n}\sigma _{k}}=\chi _{\sigma (a)}={\textbf {1}}} holds and thus the p k {\displaystyle p_{k}} satisfy the claimed properties, as can be seen from the properties of the continuous functional calculus. For the last statement, let a k = a p k = Id ⁡ ( a ) ⋅ χ σ k ( a ) = ( Id ⋅ χ σ k ) ( a ) {\displaystyle a_{k}=ap_{k}=\operatorname {Id} (a)\cdot \chi _{\sigma _{k}}(a)=(\operatorname {Id} \cdot \chi _{\sigma _{k}})(a)} . == Notes == == References == Blackadar, Bruce (2006). Operator Algebras. Theory of C*-Algebras and von Neumann Algebras. Berlin/Heidelberg: Springer.
ISBN 3-540-28486-9. Deitmar, Anton; Echterhoff, Siegfried (2014). Principles of Harmonic Analysis. Second Edition. Springer. ISBN 978-3-319-05791-0. Dixmier, Jacques (1969). Les C*-algèbres et leurs représentations (in French). Gauthier-Villars. Dixmier, Jacques (1977). C*-algebras. Translated by Jellett, Francis. Amsterdam/New York/Oxford: North-Holland. ISBN 0-7204-0762-1. English translation of Les C*-algèbres et leurs représentations (in French). Gauthier-Villars. 1969. Kaballo, Winfried (2014). Aufbaukurs Funktionalanalysis und Operatortheorie (in German). Berlin/Heidelberg: Springer. ISBN 978-3-642-37794-5. Kadison, Richard V.; Ringrose, John R. (1983). Fundamentals of the Theory of Operator Algebras. Volume 1 Elementary Theory. New York/London: Academic Press. ISBN 0-12-393301-3. Kaniuth, Eberhard (2009). A Course in Commutative Banach Algebras. Springer. ISBN 978-0-387-72475-1. Schmüdgen, Konrad (2012). Unbounded Self-adjoint Operators on Hilbert Space. Springer. ISBN 978-94-007-4752-4. Reed, Michael; Simon, Barry (1980). Methods of modern mathematical physics. vol. 1. Functional analysis. San Diego, CA: Academic Press. ISBN 0-12-585050-6. Takesaki, Masamichi (1979). Theory of Operator Algebras I. Heidelberg/Berlin: Springer. ISBN 3-540-90391-7. == External links == Continuous functional calculus on PlanetMath
Wikipedia/Continuous_functional_calculus
Strong measurability has a number of different meanings, some of which are explained below. == Values in Banach spaces == For a function f with values in a Banach space (or Fréchet space), strong measurability usually means Bochner measurability. However, if the values of f lie in the space L ( X , Y ) {\displaystyle {\mathcal {L}}(X,Y)} of continuous linear operators from X to Y, then often strong measurability means that the operator f(x) is Bochner measurable for each fixed x in the domain of f, whereas the Bochner measurability of f is called uniform measurability (cf. "uniformly continuous" vs. "strongly continuous"). == Bounded operators == A family of bounded linear operators combined with the direct integral is strongly measurable when each of the individual operators is strongly measurable. == Semigroups == A semigroup of linear operators can be strongly measurable yet not strongly continuous. It is uniformly measurable if and only if it is uniformly continuous, i.e., if and only if its generator is bounded. == References ==
Wikipedia/Strongly_measurable_function
Let X be a set of sets none of which are empty. Then a choice function (selector, selection) on X is a function f, defined on X, that maps each element of X to one of its elements. == An example == Let X = { {1,4,7}, {9}, {2,7} }. Then the function f defined by f({1, 4, 7}) = 7, f({9}) = 9 and f({2, 7}) = 2 is a choice function on X. == History and importance == Ernst Zermelo (1904) introduced choice functions as well as the axiom of choice (AC) and proved the well-ordering theorem, which states that every set can be well-ordered. AC states that every set of nonempty sets has a choice function. A weaker form of AC, the axiom of countable choice (ACω) states that every countable set of nonempty sets has a choice function. However, in the absence of either AC or ACω, some sets can still be shown to have a choice function. If X {\displaystyle X} is a finite set of nonempty sets, then one can construct a choice function for X {\displaystyle X} by picking one element from each member of X . {\displaystyle X.} This requires only finitely many choices, so neither AC nor ACω is needed. If every member of X {\displaystyle X} is a nonempty set, and the union ⋃ X {\displaystyle \bigcup X} is well-ordered, then one may choose the least element of each member of X {\displaystyle X} . In this case, it was possible to simultaneously well-order every member of X {\displaystyle X} by making just one choice of a well-order of the union, so neither AC nor ACω was needed. (This example shows that the well-ordering theorem implies AC. The converse is also true, but less trivial.) == Choice function of a multivalued map == Given two sets X {\displaystyle X} and Y {\displaystyle Y} , let F {\displaystyle F} be a multivalued map from X {\displaystyle X} to Y {\displaystyle Y} (equivalently, F : X → P ( Y ) {\displaystyle F:X\rightarrow {\mathcal {P}}(Y)} is a function from X {\displaystyle X} to the power set of Y {\displaystyle Y} ).
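A minimal Python sketch of the ideas above (the "least element" rule from the well-ordered case, applied to the finite family from the example section; note it yields a different choice function than the one given there):

```python
# The finite family from the example section, with frozensets as dict keys.
X = [frozenset({1, 4, 7}), frozenset({9}), frozenset({2, 7})]

# One explicit choice rule: take the least element of each member. This works
# because the union of X is a set of integers, hence well-ordered.
f = {s: min(s) for s in X}

assert all(f[s] in s for s in X)   # f assigns to each member one of its elements
assert f[frozenset({9})] == 9
```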
A function f : X → Y {\displaystyle f:X\rightarrow Y} is said to be a selection of F {\displaystyle F} , if: ∀ x ∈ X ( f ( x ) ∈ F ( x ) ) . {\displaystyle \forall x\in X\,(f(x)\in F(x))\,.} The existence of more regular choice functions, namely continuous or measurable selections, is important in the theory of differential inclusions, optimal control, and mathematical economics. See Selection theorem. == Bourbaki tau function == Nicolas Bourbaki used epsilon calculus for their foundations, which had a τ {\displaystyle \tau } symbol that could be interpreted as choosing an object (if one existed) that satisfies a given proposition. So if P ( x ) {\displaystyle P(x)} is a predicate, then τ x ( P ) {\displaystyle \tau _{x}(P)} is one particular object that satisfies P {\displaystyle P} (if one exists, otherwise it returns an arbitrary object). Hence we may obtain quantifiers from the choice function, for example P ( τ x ( P ) ) {\displaystyle P(\tau _{x}(P))} was equivalent to ( ∃ x ) ( P ( x ) ) {\displaystyle (\exists x)(P(x))} . However, Bourbaki's choice operator is stronger than usual: it is a global choice operator. That is, it implies the axiom of global choice. Hilbert realized this when introducing epsilon calculus. == See also == Axiom of countable choice Axiom of dependent choice Hausdorff paradox Hemicontinuity == Notes == == References == This article incorporates material from Choice function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Choice_function
In mathematics – specifically, in functional analysis – a Bochner-measurable function taking values in a Banach space is a function that equals almost everywhere the limit of a sequence of measurable countably-valued functions, i.e., f ( t ) = lim n → ∞ f n ( t ) for almost every t , {\displaystyle f(t)=\lim _{n\rightarrow \infty }f_{n}(t){\text{ for almost every }}t,\,} where the functions f n {\displaystyle f_{n}} each have a countable range and for which the pre-image f n − 1 ( { x } ) {\displaystyle f_{n}^{-1}(\{x\})} is measurable for each element x. The concept is named after Salomon Bochner. Bochner-measurable functions are sometimes called strongly measurable, μ {\displaystyle \mu } -measurable or just measurable (or uniformly measurable in case that the Banach space is the space of continuous linear operators between Banach spaces). == Properties == The relationship between measurability and weak measurability is given by the following result, known as Pettis' theorem or Pettis measurability theorem. Function f is almost surely separably valued (or essentially separably valued) if there exists a subset N ⊆ X with μ(N) = 0 such that f(X \ N) ⊆ B is separable. A function f : X → B defined on a measure space (X, Σ, μ) and taking values in a Banach space B is (strongly) measurable (with respect to Σ and the Borel algebra on B) if and only if it is both weakly measurable and almost surely separably valued. In the case that B is separable, since any subset of a separable Banach space is itself separable, one can take N above to be empty, and it follows that the notions of weak and strong measurability agree when B is separable. == See also == Bochner integral – Concept in mathematics Bochner space – Type of topological space Measurable function – Kind of mathematical function Measurable space – Basic object in measure theory; set and a sigma-algebra Pettis integral Vector measure Weakly measurable function == References == Showalter, Ralph E. (1997). 
"Theorem III.1.1". Monotone operators in Banach space and nonlinear partial differential equations. Mathematical Surveys and Monographs 49. Providence, RI: American Mathematical Society. p. 103. ISBN 0-8218-0500-2. MR 1422252..
Wikipedia/Bochner_measurable_function
In the calculus of variations, a field of mathematical analysis, the functional derivative (or variational derivative) relates a change in a functional (a functional in this sense is a function that acts on functions) to a change in a function on which the functional depends. In the calculus of variations, functionals are usually expressed in terms of an integral of functions, their arguments, and their derivatives. In an integrand L of a functional, if a function f is varied by adding to it another function δf that is arbitrarily small, and the resulting integrand is expanded in powers of δf, the coefficient of δf in the first order term is called the functional derivative. For example, consider the functional J [ f ] = ∫ a b L ( x , f ( x ) , f ′ ( x ) ) d x , {\displaystyle J[f]=\int _{a}^{b}L(\,x,f(x),f'{(x)}\,)\,dx\,,} where f ′(x) ≡ df/dx. If f is varied by adding to it a function δf, and the resulting integrand L(x, f +δf, f ′+δf ′) is expanded in powers of δf, then the change in the value of J to first order in δf can be expressed as follows: δ J = ∫ a b ( ∂ L ∂ f δ f ( x ) + ∂ L ∂ f ′ d d x δ f ( x ) ) d x = ∫ a b ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) δ f ( x ) d x + ∂ L ∂ f ′ ( b ) δ f ( b ) − ∂ L ∂ f ′ ( a ) δ f ( a ) {\displaystyle {\begin{aligned}\delta J&=\int _{a}^{b}\left({\frac {\partial L}{\partial f}}\delta f(x)+{\frac {\partial L}{\partial f'}}{\frac {d}{dx}}\delta f(x)\right)\,dx\,\\[1ex]&=\int _{a}^{b}\left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\delta f(x)\,dx\,+\,{\frac {\partial L}{\partial f'}}(b)\delta f(b)\,-\,{\frac {\partial L}{\partial f'}}(a)\delta f(a)\end{aligned}}} where the variation in the derivative, δf ′ was rewritten as the derivative of the variation (δf) ′, and integration by parts was used in these derivatives. == Definition == In this section, the functional differential (or variation or first variation) is defined. 
Then the functional derivative is defined in terms of the functional differential. === Functional differential === Suppose B {\displaystyle B} is a Banach space and F {\displaystyle F} is a functional defined on B {\displaystyle B} . The differential of F {\displaystyle F} at a point ρ ∈ B {\displaystyle \rho \in B} is the linear functional δ F [ ρ , ⋅ ] {\displaystyle \delta F[\rho ,\cdot ]} on B {\displaystyle B} defined by the condition that, for all ϕ ∈ B {\displaystyle \phi \in B} , F [ ρ + ϕ ] − F [ ρ ] = δ F [ ρ ; ϕ ] + ε ‖ ϕ ‖ {\displaystyle F[\rho +\phi ]-F[\rho ]=\delta F[\rho ;\phi ]+\varepsilon \left\|\phi \right\|} where ε {\displaystyle \varepsilon } is a real number that depends on ‖ ϕ ‖ {\displaystyle \|\phi \|} in such a way that ε → 0 {\displaystyle \varepsilon \to 0} as ‖ ϕ ‖ → 0 {\displaystyle \|\phi \|\to 0} . This means that δ F [ ρ , ⋅ ] {\displaystyle \delta F[\rho ,\cdot ]} is the Fréchet derivative of F {\displaystyle F} at ρ {\displaystyle \rho } . However, this notion of functional differential is so strong it may not exist, and in those cases a weaker notion, like the Gateaux derivative is preferred. In many practical cases, the functional differential is defined as the directional derivative δ F [ ρ , ϕ ] = lim ε → 0 F [ ρ + ε ϕ ] − F [ ρ ] ε = [ d d ε F [ ρ + ε ϕ ] ] ε = 0 . {\displaystyle {\begin{aligned}\delta F[\rho ,\phi ]&=\lim _{\varepsilon \to 0}{\frac {F[\rho +\varepsilon \phi ]-F[\rho ]}{\varepsilon }}\\[1ex]&=\left[{\frac {d}{d\varepsilon }}F[\rho +\varepsilon \phi ]\right]_{\varepsilon =0}.\end{aligned}}} Note that this notion of the functional differential can even be defined without a norm. 
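The directional-derivative form lends itself to a numerical check. The following sketch is my own illustration, not part of the article: for the functional F[ρ] = ∫₀¹ ρ(x)² dx, the differential should be δF[ρ, ϕ] = ∫₀¹ 2ρ(x)ϕ(x) dx, and the particular choices of ρ and ϕ below are arbitrary.

```python
import numpy as np

# Numerical sketch (not from the article): check the directional-derivative
# definition of the functional differential for F[rho] = \int_0^1 rho(x)^2 dx,
# whose differential is delta F[rho, phi] = \int_0^1 2 rho(x) phi(x) dx.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
rho = np.exp(-x)         # an arbitrary choice of rho
phi = np.sin(np.pi * x)  # an arbitrary direction of variation

def F(f):
    # crude grid quadrature of \int_0^1 f(x)^2 dx
    return np.sum(f**2) * dx

eps = 1e-6
# central difference for [d/d(eps) F[rho + eps*phi]] at eps = 0
dF_numeric = (F(rho + eps * phi) - F(rho - eps * phi)) / (2 * eps)
dF_exact = np.sum(2 * rho * phi) * dx

assert abs(dF_numeric - dF_exact) < 1e-8
```

Because this F is quadratic in ε, the central difference is exact up to floating-point rounding; for a general functional the agreement is only up to O(ε²).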
=== Functional derivative === In many applications, the domain of the functional F {\displaystyle F} is a space of differentiable functions ρ {\displaystyle \rho } defined on some space Ω {\displaystyle \Omega } and F {\displaystyle F} is of the form F [ ρ ] = ∫ Ω L ( x , ρ ( x ) , D ρ ( x ) ) d x {\displaystyle F[\rho ]=\int _{\Omega }L(x,\rho (x),D\rho (x))\,dx} for some function L ( x , ρ ( x ) , D ρ ( x ) ) {\displaystyle L(x,\rho (x),D\rho (x))} that may depend on x {\displaystyle x} , the value ρ ( x ) {\displaystyle \rho (x)} and the derivative D ρ ( x ) {\displaystyle D\rho (x)} . If this is the case and, moreover, δ F [ ρ , ϕ ] {\displaystyle \delta F[\rho ,\phi ]} can be written as the integral of ϕ {\displaystyle \phi } times another function (denoted δF/δρ) δ F [ ρ , ϕ ] = ∫ Ω δ F δ ρ ( x ) ϕ ( x ) d x {\displaystyle \delta F[\rho ,\phi ]=\int _{\Omega }{\frac {\delta F}{\delta \rho }}(x)\ \phi (x)\ dx} then this function δF/δρ is called the functional derivative of F at ρ. If F {\displaystyle F} is restricted to only certain functions ρ {\displaystyle \rho } (for example, if there are some boundary conditions imposed) then ϕ {\displaystyle \phi } is restricted to functions such that ρ + ε ϕ {\displaystyle \rho +\varepsilon \phi } continues to satisfy these conditions. Heuristically, ϕ {\displaystyle \phi } is the change in ρ {\displaystyle \rho } , so we 'formally' have ϕ = δ ρ {\displaystyle \phi =\delta \rho } , and then this is similar in form to the total differential of a function F ( ρ 1 , ρ 2 , … , ρ n ) {\displaystyle F(\rho _{1},\rho _{2},\dots ,\rho _{n})} , d F = ∑ i = 1 n ∂ F ∂ ρ i d ρ i , {\displaystyle dF=\sum _{i=1}^{n}{\frac {\partial F}{\partial \rho _{i}}}\ d\rho _{i},} where ρ 1 , ρ 2 , … , ρ n {\displaystyle \rho _{1},\rho _{2},\dots ,\rho _{n}} are independent variables. 
Comparing the last two equations, the functional derivative δ F / δ ρ ( x ) {\displaystyle \delta F/\delta \rho (x)} has a role similar to that of the partial derivative ∂ F / ∂ ρ i {\displaystyle \partial F/\partial \rho _{i}} , where the variable of integration x {\displaystyle x} is like a continuous version of the summation index i {\displaystyle i} . One thinks of δF/δρ as the gradient of F at the point ρ, so the value δF/δρ(x) measures how much the functional F will change if the function ρ is changed at the point x. Hence the formula ∫ δ F δ ρ ( x ) ϕ ( x ) d x {\displaystyle \int {\frac {\delta F}{\delta \rho }}(x)\phi (x)\;dx} is regarded as the directional derivative at point ρ {\displaystyle \rho } in the direction of ϕ {\displaystyle \phi } . This is analogous to vector calculus, where the inner product of a vector v {\displaystyle v} with the gradient gives the directional derivative in the direction of v {\displaystyle v} . == Properties == Like the derivative of a function, the functional derivative satisfies the following properties, where F[ρ] and G[ρ] are functionals: Linearity: δ ( λ F + μ G ) [ ρ ] δ ρ ( x ) = λ δ F [ ρ ] δ ρ ( x ) + μ δ G [ ρ ] δ ρ ( x ) , {\displaystyle {\frac {\delta (\lambda F+\mu G)[\rho ]}{\delta \rho (x)}}=\lambda {\frac {\delta F[\rho ]}{\delta \rho (x)}}+\mu {\frac {\delta G[\rho ]}{\delta \rho (x)}},} where λ, μ are constants. Product rule: δ ( F G ) [ ρ ] δ ρ ( x ) = δ F [ ρ ] δ ρ ( x ) G [ ρ ] + F [ ρ ] δ G [ ρ ] δ ρ ( x ) , {\displaystyle {\frac {\delta (FG)[\rho ]}{\delta \rho (x)}}={\frac {\delta F[\rho ]}{\delta \rho (x)}}G[\rho ]+F[\rho ]{\frac {\delta G[\rho ]}{\delta \rho (x)}}\,,} Chain rules: If F is a functional and G another functional, then δ F [ G [ ρ ] ] δ ρ ( y ) = ∫ d x δ F [ G ] δ G ( x ) G = G [ ρ ] ⋅ δ G [ ρ ] ( x ) δ ρ ( y ) . 
{\displaystyle {\frac {\delta F[G[\rho ]]}{\delta \rho (y)}}=\int dx{\frac {\delta F[G]}{\delta G(x)}}_{G=G[\rho ]}\cdot {\frac {\delta G[\rho ](x)}{\delta \rho (y)}}\ .} If G is an ordinary differentiable function (local functional) g, then this reduces to δ F [ g ( ρ ) ] δ ρ ( y ) = δ F [ g ( ρ ) ] δ g [ ρ ( y ) ] d g ( ρ ) d ρ ( y ) . {\displaystyle {\frac {\delta F[g(\rho )]}{\delta \rho (y)}}={\frac {\delta F[g(\rho )]}{\delta g[\rho (y)]}}\ {\frac {dg(\rho )}{d\rho (y)}}\ .} == Determining functional derivatives == A formula to determine functional derivatives for a common class of functionals can be written as the integral of a function and its derivatives. This is a generalization of the Euler–Lagrange equation: indeed, the functional derivative was introduced in physics within the derivation of the Lagrange equation of the second kind from the principle of least action in Lagrangian mechanics (18th century). The first three examples below are taken from density functional theory (20th century), the fourth from statistical mechanics (19th century). === Formula === Given a functional F [ ρ ] = ∫ f ( r , ρ ( r ) , ∇ ρ ( r ) ) d r , {\displaystyle F[\rho ]=\int f({\boldsymbol {r}},\rho ({\boldsymbol {r}}),\nabla \rho ({\boldsymbol {r}}))\,d{\boldsymbol {r}},} and a function ϕ ( r ) {\displaystyle \phi ({\boldsymbol {r}})} that vanishes on the boundary of the region of integration, from a previous section Definition, ∫ δ F δ ρ ( r ) ϕ ( r ) d r = [ d d ε ∫ f ( r , ρ + ε ϕ , ∇ ρ + ε ∇ ϕ ) d r ] ε = 0 = ∫ ( ∂ f ∂ ρ ϕ + ∂ f ∂ ∇ ρ ⋅ ∇ ϕ ) d r = ∫ [ ∂ f ∂ ρ ϕ + ∇ ⋅ ( ∂ f ∂ ∇ ρ ϕ ) − ( ∇ ⋅ ∂ f ∂ ∇ ρ ) ϕ ] d r = ∫ [ ∂ f ∂ ρ ϕ − ( ∇ ⋅ ∂ f ∂ ∇ ρ ) ϕ ] d r = ∫ ( ∂ f ∂ ρ − ∇ ⋅ ∂ f ∂ ∇ ρ ) ϕ ( r ) d r . 
{\displaystyle {\begin{aligned}\int {\frac {\delta F}{\delta \rho ({\boldsymbol {r}})}}\,\phi ({\boldsymbol {r}})\,d{\boldsymbol {r}}&=\left[{\frac {d}{d\varepsilon }}\int f({\boldsymbol {r}},\rho +\varepsilon \phi ,\nabla \rho +\varepsilon \nabla \phi )\,d{\boldsymbol {r}}\right]_{\varepsilon =0}\\&=\int \left({\frac {\partial f}{\partial \rho }}\,\phi +{\frac {\partial f}{\partial \nabla \rho }}\cdot \nabla \phi \right)d{\boldsymbol {r}}\\&=\int \left[{\frac {\partial f}{\partial \rho }}\,\phi +\nabla \cdot \left({\frac {\partial f}{\partial \nabla \rho }}\,\phi \right)-\left(\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi \right]d{\boldsymbol {r}}\\&=\int \left[{\frac {\partial f}{\partial \rho }}\,\phi -\left(\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi \right]d{\boldsymbol {r}}\\&=\int \left({\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi ({\boldsymbol {r}})\ d{\boldsymbol {r}}\,.\end{aligned}}} The second line is obtained using the total derivative, where ∂f /∂∇ρ is a derivative of a scalar with respect to a vector. The third line was obtained by use of a product rule for divergence. The fourth line was obtained using the divergence theorem and the condition that ϕ = 0 {\displaystyle \phi =0} on the boundary of the region of integration. Since ϕ {\displaystyle \phi } is also an arbitrary function, applying the fundamental lemma of calculus of variations to the last line, the functional derivative is δ F δ ρ ( r ) = ∂ f ∂ ρ − ∇ ⋅ ∂ f ∂ ∇ ρ {\displaystyle {\frac {\delta F}{\delta \rho ({\boldsymbol {r}})}}={\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}} where ρ = ρ(r) and f = f (r, ρ, ∇ρ). This formula is for the case of the functional form given by F[ρ] at the beginning of this section. For other functional forms, the definition of the functional derivative can be used as the starting point for its determination. 
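The formula just derived can also be verified symbolically in one dimension. This sketch is my own check, with arbitrary choices of integrand, ρ, and variation ϕ: for F[ρ] = ∫₀¹ (ρ³ + ρ′²/2) dx the formula gives δF/δρ = 3ρ² − ρ″, and the directional derivative should reproduce ∫ (3ρ² − ρ″)ϕ dx when ϕ vanishes at both endpoints.

```python
import sympy as sp

# Symbolic sketch (my verification, not from the article): compare the
# directional derivative of F[rho] = \int_0^1 (rho^3 + rho'^2 / 2) dx with
# the Euler-Lagrange form deltaF/deltarho = 3 rho^2 - rho''.
x, eps = sp.symbols('x epsilon')
rho = x**2            # arbitrary smooth rho
phi = x * (1 - x)     # variation vanishing at x = 0 and x = 1

def f(r):
    return r**3 + sp.diff(r, x)**2 / 2

# directional derivative: [d/d(eps) F[rho + eps*phi]] at eps = 0
lhs = sp.diff(sp.integrate(f(rho + eps * phi), (x, 0, 1)), eps).subs(eps, 0)

# Euler-Lagrange form: \int_0^1 (3 rho^2 - rho'') phi dx
el = 3 * rho**2 - sp.diff(rho, x, 2)
rhs = sp.integrate(el * phi, (x, 0, 1))

assert sp.simplify(lhs - rhs) == 0
```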
(See the example Coulomb potential energy functional.) The above equation for the functional derivative can be generalized to the case that includes higher dimensions and higher order derivatives. The functional would be, F [ ρ ( r ) ] = ∫ f ( r , ρ ( r ) , ∇ ρ ( r ) , ∇ ( 2 ) ρ ( r ) , … , ∇ ( N ) ρ ( r ) ) d r , {\displaystyle F[\rho ({\boldsymbol {r}})]=\int f({\boldsymbol {r}},\rho ({\boldsymbol {r}}),\nabla \rho ({\boldsymbol {r}}),\nabla ^{(2)}\rho ({\boldsymbol {r}}),\dots ,\nabla ^{(N)}\rho ({\boldsymbol {r}}))\,d{\boldsymbol {r}},} where the vector r ∈ Rn, and ∇(i) is a tensor whose ni components are partial derivative operators of order i, [ ∇ ( i ) ] α 1 α 2 ⋯ α i = ∂ i ∂ r α 1 ∂ r α 2 ⋯ ∂ r α i where α 1 , α 2 , … , α i = 1 , 2 , … , n . {\displaystyle \left[\nabla ^{(i)}\right]_{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}={\frac {\partial ^{\,i}}{\partial r_{\alpha _{1}}\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}\qquad \qquad {\text{where}}\quad \alpha _{1},\alpha _{2},\dots ,\alpha _{i}=1,2,\dots ,n\ .} An analogous application of the definition of the functional derivative yields δ F [ ρ ] δ ρ = ∂ f ∂ ρ − ∇ ⋅ ∂ f ∂ ( ∇ ρ ) + ∇ ( 2 ) ⋅ ∂ f ∂ ( ∇ ( 2 ) ρ ) + ⋯ + ( − 1 ) N ∇ ( N ) ⋅ ∂ f ∂ ( ∇ ( N ) ρ ) = ∂ f ∂ ρ + ∑ i = 1 N ( − 1 ) i ∇ ( i ) ⋅ ∂ f ∂ ( ∇ ( i ) ρ ) . 
{\displaystyle {\begin{aligned}{\frac {\delta F[\rho ]}{\delta \rho }}&{}={\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial (\nabla \rho )}}+\nabla ^{(2)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(2)}\rho \right)}}+\dots +(-1)^{N}\nabla ^{(N)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(N)}\rho \right)}}\\&{}={\frac {\partial f}{\partial \rho }}+\sum _{i=1}^{N}(-1)^{i}\nabla ^{(i)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}\ .\end{aligned}}} In the last two equations, the ni components of the tensor ∂ f ∂ ( ∇ ( i ) ρ ) {\displaystyle {\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}} are partial derivatives of f with respect to partial derivatives of ρ, [ ∂ f ∂ ( ∇ ( i ) ρ ) ] α 1 α 2 ⋯ α i = ∂ f ∂ ρ α 1 α 2 ⋯ α i {\displaystyle \left[{\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}\right]_{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}={\frac {\partial f}{\partial \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}}}} where ρ α 1 α 2 ⋯ α i ≡ ∂ i ρ ∂ r α 1 ∂ r α 2 ⋯ ∂ r α i {\displaystyle \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}\equiv {\frac {\partial ^{\,i}\rho }{\partial r_{\alpha _{1}}\,\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}} , and the tensor scalar product is, ∇ ( i ) ⋅ ∂ f ∂ ( ∇ ( i ) ρ ) = ∑ α 1 , α 2 , ⋯ , α i = 1 n ∂ i ∂ r α 1 ∂ r α 2 ⋯ ∂ r α i ∂ f ∂ ρ α 1 α 2 ⋯ α i . 
{\displaystyle \nabla ^{(i)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}=\sum _{\alpha _{1},\alpha _{2},\cdots ,\alpha _{i}=1}^{n}\ {\frac {\partial ^{\,i}}{\partial r_{\alpha _{1}}\,\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}\ {\frac {\partial f}{\partial \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}}}\ .} === Examples === ==== Thomas–Fermi kinetic energy functional ==== The Thomas–Fermi model of 1927 used a kinetic energy functional for a noninteracting uniform electron gas in a first attempt of density-functional theory of electronic structure: T T F [ ρ ] = C F ∫ ρ 5 / 3 ( r ) d r . {\displaystyle T_{\mathrm {TF} }[\rho ]=C_{\mathrm {F} }\int \rho ^{5/3}(\mathbf {r} )\,d\mathbf {r} \,.} Since the integrand of TTF[ρ] does not involve derivatives of ρ(r), the functional derivative of TTF[ρ] is, δ T T F δ ρ ( r ) = C F ∂ ρ 5 / 3 ( r ) ∂ ρ ( r ) = 5 3 C F ρ 2 / 3 ( r ) . {\displaystyle {\frac {\delta T_{\mathrm {TF} }}{\delta \rho ({\boldsymbol {r}})}}=C_{\mathrm {F} }{\frac {\partial \rho ^{5/3}(\mathbf {r} )}{\partial \rho (\mathbf {r} )}}={\frac {5}{3}}C_{\mathrm {F} }\rho ^{2/3}(\mathbf {r} )\,.} ==== Coulomb potential energy functional ==== The electron-nucleus potential energy is V [ ρ ] = ∫ ρ ( r ) | r | d r . {\displaystyle V[\rho ]=\int {\frac {\rho ({\boldsymbol {r}})}{|{\boldsymbol {r}}|}}\ d{\boldsymbol {r}}.} Applying the definition of functional derivative, ∫ δ V δ ρ ( r ) ϕ ( r ) d r = [ d d ε ∫ ρ ( r ) + ε ϕ ( r ) | r | d r ] ε = 0 = ∫ ϕ ( r ) | r | d r . {\displaystyle {\begin{aligned}\int {\frac {\delta V}{\delta \rho ({\boldsymbol {r}})}}\ \phi ({\boldsymbol {r}})\ d{\boldsymbol {r}}&{}=\left[{\frac {d}{d\varepsilon }}\int {\frac {\rho ({\boldsymbol {r}})+\varepsilon \phi ({\boldsymbol {r}})}{|{\boldsymbol {r}}|}}\ d{\boldsymbol {r}}\right]_{\varepsilon =0}\\[1ex]&{}=\int {\frac {\phi ({\boldsymbol {r}})}{|{\boldsymbol {r}}|}}\ d{\boldsymbol {r}}\,.\end{aligned}}} So, δ V δ ρ ( r ) = 1 | r | . 
{\displaystyle {\frac {\delta V}{\delta \rho ({\boldsymbol {r}})}}={\frac {1}{|{\boldsymbol {r}}|}}\ .} The classical part of the electron-electron interaction energy (often called the Hartree energy) is J [ ρ ] = 1 2 ∬ ρ ( r ) ρ ( r ′ ) | r − r ′ | d r d r ′ . {\displaystyle J[\rho ]={\frac {1}{2}}\iint {\frac {\rho (\mathbf {r} )\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,d\mathbf {r} d\mathbf {r} '\,.} From the definition of the functional derivative, ∫ δ J δ ρ ( r ) ϕ ( r ) d r = [ d d ε J [ ρ + ε ϕ ] ] ε = 0 = [ d d ε ( 1 2 ∬ [ ρ ( r ) + ε ϕ ( r ) ] [ ρ ( r ′ ) + ε ϕ ( r ′ ) ] | r − r ′ | d r d r ′ ) ] ε = 0 = 1 2 ∬ ρ ( r ′ ) ϕ ( r ) | r − r ′ | d r d r ′ + 1 2 ∬ ρ ( r ) ϕ ( r ′ ) | r − r ′ | d r d r ′ {\displaystyle {\begin{aligned}\int {\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}\phi ({\boldsymbol {r}})d{\boldsymbol {r}}&{}=\left[{\frac {d\ }{d\varepsilon }}\,J[\rho +\varepsilon \phi ]\right]_{\varepsilon =0}\\&{}=\left[{\frac {d\ }{d\varepsilon }}\,\left({\frac {1}{2}}\iint {\frac {[\rho ({\boldsymbol {r}})+\varepsilon \phi ({\boldsymbol {r}})]\,[\rho ({\boldsymbol {r}}')+\varepsilon \phi ({\boldsymbol {r}}')]}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'\right)\right]_{\varepsilon =0}\\&{}={\frac {1}{2}}\iint {\frac {\rho ({\boldsymbol {r}}')\phi ({\boldsymbol {r}})}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'+{\frac {1}{2}}\iint {\frac {\rho ({\boldsymbol {r}})\phi ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'\\\end{aligned}}} The first and second terms on the right-hand side of the last equation are equal, since r and r′ in the second term can be interchanged without changing the value of the integral. 
Therefore, ∫ δ J δ ρ ( r ) ϕ ( r ) d r = ∫ ( ∫ ρ ( r ′ ) | r − r ′ | d r ′ ) ϕ ( r ) d r {\displaystyle \int {\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}\phi ({\boldsymbol {r}})d{\boldsymbol {r}}=\int \left(\int {\frac {\rho ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}d{\boldsymbol {r}}'\right)\phi ({\boldsymbol {r}})d{\boldsymbol {r}}} and the functional derivative of the electron-electron Coulomb potential energy functional J[ρ] is, δ J δ ρ ( r ) = ∫ ρ ( r ′ ) | r − r ′ | d r ′ . {\displaystyle {\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}=\int {\frac {\rho ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}d{\boldsymbol {r}}'\,.} The second functional derivative is δ 2 J [ ρ ] δ ρ ( r ′ ) δ ρ ( r ) = ∂ ∂ ρ ( r ′ ) ( ρ ( r ′ ) | r − r ′ | ) = 1 | r − r ′ | . {\displaystyle {\frac {\delta ^{2}J[\rho ]}{\delta \rho (\mathbf {r} ')\delta \rho (\mathbf {r} )}}={\frac {\partial }{\partial \rho (\mathbf {r} ')}}\left({\frac {\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\right)={\frac {1}{|\mathbf {r} -\mathbf {r} '|}}.} ==== von Weizsäcker kinetic energy functional ==== In 1935 von Weizsäcker proposed to add a gradient correction to the Thomas-Fermi kinetic energy functional to make it better suit a molecular electron cloud: T W [ ρ ] = 1 8 ∫ ∇ ρ ( r ) ⋅ ∇ ρ ( r ) ρ ( r ) d r = ∫ t W ( r ) d r , {\displaystyle T_{\mathrm {W} }[\rho ]={\frac {1}{8}}\int {\frac {\nabla \rho (\mathbf {r} )\cdot \nabla \rho (\mathbf {r} )}{\rho (\mathbf {r} )}}d\mathbf {r} =\int t_{\mathrm {W} }(\mathbf {r} )\ d\mathbf {r} \,,} where t W ≡ 1 8 ∇ ρ ⋅ ∇ ρ ρ and ρ = ρ ( r ) . 
{\displaystyle t_{\mathrm {W} }\equiv {\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho }}\qquad {\text{and}}\ \ \rho =\rho ({\boldsymbol {r}})\ .} Using a previously derived formula for the functional derivative, δ T W δ ρ = ∂ t W ∂ ρ − ∇ ⋅ ∂ t W ∂ ∇ ρ = − 1 8 ∇ ρ ⋅ ∇ ρ ρ 2 − ( 1 4 ∇ 2 ρ ρ − 1 4 ∇ ρ ⋅ ∇ ρ ρ 2 ) where ∇ 2 = ∇ ⋅ ∇ , {\displaystyle {\begin{aligned}{\frac {\delta T_{\mathrm {W} }}{\delta \rho }}&={\frac {\partial t_{\mathrm {W} }}{\partial \rho }}-\nabla \cdot {\frac {\partial t_{\mathrm {W} }}{\partial \nabla \rho }}\\&=-{\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}-\left({\frac {1}{4}}{\frac {\nabla ^{2}\rho }{\rho }}-{\frac {1}{4}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}\right)\qquad {\text{where}}\ \ \nabla ^{2}=\nabla \cdot \nabla \ ,\end{aligned}}} and the result is, δ T W δ ρ = 1 8 ∇ ρ ⋅ ∇ ρ ρ 2 − 1 4 ∇ 2 ρ ρ . {\displaystyle {\frac {\delta T_{\mathrm {W} }}{\delta \rho }}=\ \ \,{\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}-{\frac {1}{4}}{\frac {\nabla ^{2}\rho }{\rho }}\ .} ==== Entropy ==== The entropy of a discrete random variable is a functional of the probability mass function. H [ p ( x ) ] = − ∑ x p ( x ) log ⁡ p ( x ) {\displaystyle H[p(x)]=-\sum _{x}p(x)\log p(x)} Thus, ∑ x δ H δ p ( x ) ϕ ( x ) = [ d d ε H [ p ( x ) + ε ϕ ( x ) ] ] ε = 0 = [ − d d ε ∑ x [ p ( x ) + ε ϕ ( x ) ] log ⁡ [ p ( x ) + ε ϕ ( x ) ] ] ε = 0 = − ∑ x [ 1 + log ⁡ p ( x ) ] ϕ ( x ) . {\displaystyle {\begin{aligned}\sum _{x}{\frac {\delta H}{\delta p(x)}}\,\phi (x)&{}=\left[{\frac {d}{d\varepsilon }}H[p(x)+\varepsilon \phi (x)]\right]_{\varepsilon =0}\\&{}=\left[-\,{\frac {d}{d\varepsilon }}\sum _{x}\,[p(x)+\varepsilon \phi (x)]\ \log[p(x)+\varepsilon \phi (x)]\right]_{\varepsilon =0}\\&{}=-\sum _{x}\,[1+\log p(x)]\ \phi (x)\,.\end{aligned}}} Thus, δ H δ p ( x ) = − 1 − log ⁡ p ( x ) . 
{\displaystyle {\frac {\delta H}{\delta p(x)}}=-1-\log p(x).} ==== Exponential ==== Let F [ φ ( x ) ] = e ∫ φ ( x ) g ( x ) d x . {\displaystyle F[\varphi (x)]=e^{\int \varphi (x)g(x)dx}.} Using the delta function as a test function, δ F [ φ ( x ) ] δ φ ( y ) = lim ε → 0 F [ φ ( x ) + ε δ ( x − y ) ] − F [ φ ( x ) ] ε = lim ε → 0 e ∫ ( φ ( x ) + ε δ ( x − y ) ) g ( x ) d x − e ∫ φ ( x ) g ( x ) d x ε = e ∫ φ ( x ) g ( x ) d x lim ε → 0 e ε ∫ δ ( x − y ) g ( x ) d x − 1 ε = e ∫ φ ( x ) g ( x ) d x lim ε → 0 e ε g ( y ) − 1 ε = e ∫ φ ( x ) g ( x ) d x g ( y ) . {\displaystyle {\begin{aligned}{\frac {\delta F[\varphi (x)]}{\delta \varphi (y)}}&{}=\lim _{\varepsilon \to 0}{\frac {F[\varphi (x)+\varepsilon \delta (x-y)]-F[\varphi (x)]}{\varepsilon }}\\&{}=\lim _{\varepsilon \to 0}{\frac {e^{\int (\varphi (x)+\varepsilon \delta (x-y))g(x)dx}-e^{\int \varphi (x)g(x)dx}}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}\lim _{\varepsilon \to 0}{\frac {e^{\varepsilon \int \delta (x-y)g(x)dx}-1}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}\lim _{\varepsilon \to 0}{\frac {e^{\varepsilon g(y)}-1}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}g(y).\end{aligned}}} Thus, δ F [ φ ( x ) ] δ φ ( y ) = g ( y ) F [ φ ( x ) ] . {\displaystyle {\frac {\delta F[\varphi (x)]}{\delta \varphi (y)}}=g(y)F[\varphi (x)].} This is particularly useful in calculating the correlation functions from the partition function in quantum field theory. ==== Functional derivative of a function ==== A function can be written in the form of an integral like a functional. For example, ρ ( r ) = F [ ρ ] = ∫ ρ ( r ′ ) δ ( r − r ′ ) d r ′ . {\displaystyle \rho ({\boldsymbol {r}})=F[\rho ]=\int \rho ({\boldsymbol {r}}')\delta ({\boldsymbol {r}}-{\boldsymbol {r}}')\,d{\boldsymbol {r}}'.} Since the integrand does not depend on derivatives of ρ, the functional derivative of ρ(r) is, δ ρ ( r ) δ ρ ( r ′ ) ≡ δ F δ ρ ( r ′ ) = ∂ ∂ ρ ( r ′ ) [ ρ ( r ′ ) δ ( r − r ′ ) ] = δ ( r − r ′ ) . 
{\displaystyle {\frac {\delta \rho ({\boldsymbol {r}})}{\delta \rho ({\boldsymbol {r}}')}}\equiv {\frac {\delta F}{\delta \rho ({\boldsymbol {r}}')}}={\frac {\partial \ \ }{\partial \rho ({\boldsymbol {r}}')}}\,[\rho ({\boldsymbol {r}}')\delta ({\boldsymbol {r}}-{\boldsymbol {r}}')]=\delta ({\boldsymbol {r}}-{\boldsymbol {r}}').} ==== Functional derivative of iterated function ==== The functional derivative of the iterated function f ( f ( x ) ) {\displaystyle f(f(x))} is given by: δ f ( f ( x ) ) δ f ( y ) = f ′ ( f ( x ) ) δ ( x − y ) + δ ( f ( x ) − y ) {\displaystyle {\frac {\delta f(f(x))}{\delta f(y)}}=f'(f(x))\delta (x-y)+\delta (f(x)-y)} and δ f ( f ( f ( x ) ) ) δ f ( y ) = f ′ ( f ( f ( x ) ) ) ( f ′ ( f ( x ) ) δ ( x − y ) + δ ( f ( x ) − y ) ) + δ ( f ( f ( x ) ) − y ) {\displaystyle {\frac {\delta f(f(f(x)))}{\delta f(y)}}=f'(f(f(x)))(f'(f(x))\delta (x-y)+\delta (f(x)-y))+\delta (f(f(x))-y)} In general: δ f N ( x ) δ f ( y ) = f ′ ( f N − 1 ( x ) ) δ f N − 1 ( x ) δ f ( y ) + δ ( f N − 1 ( x ) − y ) {\displaystyle {\frac {\delta f^{N}(x)}{\delta f(y)}}=f'(f^{N-1}(x)){\frac {\delta f^{N-1}(x)}{\delta f(y)}}+\delta (f^{N-1}(x)-y)} Putting in N = 0, and using f 0 ( x ) = x {\displaystyle f^{0}(x)=x} (whose functional derivative with respect to f vanishes), gives: δ f − 1 ( x ) δ f ( y ) = − δ ( f − 1 ( x ) − y ) f ′ ( f − 1 ( x ) ) {\displaystyle {\frac {\delta f^{-1}(x)}{\delta f(y)}}=-{\frac {\delta (f^{-1}(x)-y)}{f'(f^{-1}(x))}}} == Using the delta function as a test function == In physics, it is common to use the Dirac delta function δ ( x − y ) {\displaystyle \delta (x-y)} in place of a generic test function ϕ ( x ) {\displaystyle \phi (x)} to yield the functional derivative at the point y {\displaystyle y} (this gives one point of the whole functional derivative, just as a partial derivative gives one component of the gradient): δ F [ ρ ( x ) ] δ ρ ( y ) = lim ε → 0 F [ ρ ( x ) + ε δ ( x − y ) ] − F [ ρ ( x ) ] ε . 
{\displaystyle {\frac {\delta F[\rho (x)]}{\delta \rho (y)}}=\lim _{\varepsilon \to 0}{\frac {F[\rho (x)+\varepsilon \delta (x-y)]-F[\rho (x)]}{\varepsilon }}.} This works in cases when F [ ρ ( x ) + ε ϕ ( x ) ] {\displaystyle F[\rho (x)+\varepsilon \phi (x)]} can formally be expanded as a series (or at least up to first order) in ε {\displaystyle \varepsilon } . However, the formula is not mathematically rigorous, since F [ ρ ( x ) + ε δ ( x − y ) ] {\displaystyle F[\rho (x)+\varepsilon \delta (x-y)]} is usually not even defined. The definition given in a previous section is based on a relationship that holds for all test functions ϕ ( x ) {\displaystyle \phi (x)} , so one might think that it should hold also when ϕ ( x ) {\displaystyle \phi (x)} is chosen to be a specific function such as the delta function. However, the latter is not a valid test function (it is not even a proper function). In the definition, the functional derivative describes how the functional F [ ρ ( x ) ] {\displaystyle F[\rho (x)]} changes as a result of a small change in the entire function ρ ( x ) {\displaystyle \rho (x)} . The particular form of the change in ρ ( x ) {\displaystyle \rho (x)} is not specified, but it should stretch over the whole interval on which x {\displaystyle x} is defined. Employing the particular form of the perturbation given by the delta function means that ρ ( x ) {\displaystyle \rho (x)} is varied only at the point y {\displaystyle y} . Except for this point, there is no variation in ρ ( x ) {\displaystyle \rho (x)} . == Notes == == Footnotes == == References == Courant, Richard; Hilbert, David (1953). "Chapter IV. The Calculus of Variations". Methods of Mathematical Physics. Vol. I (First English ed.). New York, New York: Interscience Publishers, Inc. pp. 164–274. ISBN 978-0471504474. MR 0065391. Zbl 0001.00501. Frigyik, Béla A.; Srivastava, Santosh; Gupta, Maya R. 
(January 2008), Introduction to Functional Derivatives (PDF), UWEE Tech Report, vol. UWEETR-2008-0001, Seattle, WA: Department of Electrical Engineering at the University of Washington, p. 7, archived from the original (PDF) on 2017-02-17, retrieved 2013-10-23. Gelfand, I. M.; Fomin, S. V. (2000) [1963], Calculus of variations, translated and edited by Richard A. Silverman (Revised English ed.), Mineola, N.Y.: Dover Publications, ISBN 978-0486414485, MR 0160139, Zbl 0127.05402. Giaquinta, Mariano; Hildebrandt, Stefan (1996), Calculus of Variations 1. The Lagrangian Formalism, Grundlehren der Mathematischen Wissenschaften, vol. 310 (1st ed.), Berlin: Springer-Verlag, ISBN 3-540-50625-X, MR 1368401, Zbl 0853.49001. Greiner, Walter; Reinhardt, Joachim (1996), "Section 2.3 – Functional derivatives", Field quantization, With a foreword by D. A. Bromley, Berlin–Heidelberg–New York: Springer-Verlag, pp. 36–38, ISBN 3-540-59179-6, MR 1383589, Zbl 0844.00006. Parr, R. G.; Yang, W. (1989). "Appendix A, Functionals". Density-Functional Theory of Atoms and Molecules. New York: Oxford University Press. pp. 246–254. ISBN 978-0195042795. == External links == "Functional derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Functional_derivative
In functional analysis, a branch of mathematics, the Borel functional calculus is a functional calculus (that is, an assignment of operators from commutative algebras to functions defined on their spectra), which has particularly broad scope. Thus, for instance, if T is an operator, applying the squaring function s → s2 to T yields the operator T2. Using the functional calculus for larger classes of functions, we can, for example, rigorously define the "square root" of the (negative) Laplacian operator −Δ or the exponential e i t Δ . {\displaystyle e^{it\Delta }.} The 'scope' here refers to the kind of function of an operator that is allowed. The Borel functional calculus is more general than the continuous functional calculus, and its focus differs from that of the holomorphic functional calculus. More precisely, the Borel functional calculus allows for applying an arbitrary Borel function to a self-adjoint operator, in a way that generalizes applying a polynomial function. == Motivation == If T is a self-adjoint operator on a finite-dimensional inner product space H, then H has an orthonormal basis {e1, ..., eℓ} consisting of eigenvectors of T, that is T e k = λ k e k , 1 ≤ k ≤ ℓ . {\displaystyle Te_{k}=\lambda _{k}e_{k},\qquad 1\leq k\leq \ell .} Thus, for any positive integer n, T n e k = λ k n e k . {\displaystyle T^{n}e_{k}=\lambda _{k}^{n}e_{k}.} If only polynomials in T are considered, then one gets the holomorphic functional calculus. The relation also holds for more general functions of T. Given a Borel function h, one can define an operator h(T) by specifying its behavior on the basis: h ( T ) e k = h ( λ k ) e k . {\displaystyle h(T)e_{k}=h(\lambda _{k})e_{k}.} Generally, any self-adjoint operator T is unitarily equivalent to a multiplication operator; this means that for many purposes, T can be considered as an operator [ T ψ ] ( x ) = f ( x ) ψ ( x ) {\displaystyle [T\psi ](x)=f(x)\psi (x)} acting on L2 of some measure space. 
The domain of T consists of those functions for which the above expression is in L2. In such a case, one can define analogously [ h ( T ) ψ ] ( x ) = [ h ∘ f ] ( x ) ψ ( x ) . {\displaystyle [h(T)\psi ](x)=[h\circ f](x)\psi (x).} For many technical purposes, the previous formulation is good enough. However, it is desirable to formulate the functional calculus in a way that does not depend on the particular representation of T as a multiplication operator. This is done in the next section. == The bounded functional calculus == Formally, the bounded Borel functional calculus of a self-adjoint operator T on a Hilbert space H is a mapping defined on the space of bounded complex-valued Borel functions f on the real line, { π T : L ∞ ( R , C ) → B ( H ) f ↦ f ( T ) {\displaystyle {\begin{cases}\pi _{T}:L^{\infty }(\mathbb {R} ,\mathbb {C} )\to {\mathcal {B}}({\mathcal {H}})\\f\mapsto f(T)\end{cases}}} such that the following conditions hold: πT is an involution-preserving and unit-preserving homomorphism from the ring of complex-valued bounded measurable functions on R. If ξ is an element of H, then ν ξ : E ↦ ⟨ π T ( 1 E ) ξ , ξ ⟩ {\displaystyle \nu _{\xi }:E\mapsto \langle \pi _{T}(\mathbf {1} _{E})\xi ,\xi \rangle } is a countably additive measure on the Borel sets E of R. In the above formula, 1E denotes the indicator function of E. These measures νξ are called the spectral measures of T. If η denotes the mapping z → z on C, then: π T ( [ η + i ] − 1 ) = [ T + i ] − 1 . {\displaystyle \pi _{T}\left([\eta +i]^{-1}\right)=[T+i]^{-1}.} This defines the functional calculus for bounded functions applied to possibly unbounded self-adjoint operators. Using the bounded functional calculus, one can prove part of Stone's theorem on one-parameter unitary groups: As an application, we consider the Schrödinger equation, or equivalently, the dynamics of a quantum mechanical system. 
In non-relativistic quantum mechanics, the Hamiltonian operator H models the total energy observable of a quantum mechanical system S. The unitary group generated by iH corresponds to the time evolution of S. We can also use the Borel functional calculus to abstractly solve some linear initial value problems such as the heat equation, or Maxwell's equations. === Existence of a functional calculus === The existence of a mapping with the properties of a functional calculus requires proof. For the case of a bounded self-adjoint operator T, the existence of a Borel functional calculus can be shown in an elementary way as follows: First pass from polynomial to continuous functional calculus by using the Stone–Weierstrass theorem. The crucial fact here is that, for a bounded self adjoint operator T and a polynomial p, ‖ p ( T ) ‖ = sup λ ∈ σ ( T ) | p ( λ ) | . {\displaystyle \|p(T)\|=\sup _{\lambda \in \sigma (T)}|p(\lambda )|.} Consequently, the mapping p ↦ p ( T ) {\displaystyle p\mapsto p(T)} is an isometry and a densely defined homomorphism on the ring of polynomial functions. Extending by continuity defines f(T) for a continuous function f on the spectrum of T. The Riesz-Markov theorem then allows us to pass from integration on continuous functions to spectral measures, and this is the Borel functional calculus. Alternatively, the continuous calculus can be obtained via the Gelfand transform, in the context of commutative Banach algebras. Extending to measurable functions is achieved by applying Riesz-Markov, as above. In this formulation, T can be a normal operator. Given an operator T, the range of the continuous functional calculus h → h(T) is the (abelian) C*-algebra C(T) generated by T. The Borel functional calculus has a larger range, that is the closure of C(T) in the weak operator topology, a (still abelian) von Neumann algebra. 
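Both the finite-dimensional motivation, h(T)e_k = h(λ_k)e_k, and the norm identity ‖p(T)‖ = sup_{λ∈σ(T)} |p(λ)| used in the existence argument can be checked numerically. The sketch below is my own illustration; the matrices and the polynomial are arbitrary examples.

```python
import numpy as np

# Sketch (my illustration, not from the article).
# 1) Finite-dimensional functional calculus: h(T) e_k = h(lambda_k) e_k,
#    realized numerically through an eigendecomposition T = U diag(lam) U*.
def apply_function(h, T):
    lam, U = np.linalg.eigh(T)          # T assumed self-adjoint
    return U @ np.diag(h(lam)) @ U.T.conj()

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric, eigenvalues 1 and 3

# the squaring function recovers T @ T, and a "square root" squares back to T
assert np.allclose(apply_function(np.square, T), T @ T)
S = apply_function(np.sqrt, T)
assert np.allclose(S @ S, T)

# 2) Norm identity ||p(T)|| = sup over sigma(T) of |p(lambda)|,
#    here for p(z) = z^2 - 3z + 1 and a random symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = (A + A.T) / 2                       # self-adjoint (real symmetric)
pB = B @ B - 3 * B + np.eye(5)          # p(B)
lam = np.linalg.eigvalsh(B)             # sigma(B)
assert np.isclose(np.linalg.norm(pB, 2), np.max(np.abs(lam**2 - 3 * lam + 1)))
```

The second check works because p(B) is again symmetric, so its operator (spectral) norm equals the largest |p(λ)| over the eigenvalues of B.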
== The general functional calculus == We can also define the functional calculus for not necessarily bounded Borel functions h; the result is an operator which in general fails to be bounded. Using the multiplication by a function f model of a self-adjoint operator given by the spectral theorem, this is multiplication by the composition of h with f. The resulting operator is denoted h(T). More generally, a Borel functional calculus also exists for (bounded) normal operators. == Resolution of the identity == Let T {\displaystyle T} be a self-adjoint operator. If E {\displaystyle E} is a Borel subset of R, and 1 E {\displaystyle \mathbf {1} _{E}} is the indicator function of E, then 1 E ( T ) {\displaystyle \mathbf {1} _{E}(T)} is a self-adjoint projection on H. Then the mapping Ω T : E ↦ 1 E ( T ) {\displaystyle \Omega _{T}:E\mapsto \mathbf {1} _{E}(T)} is a projection-valued measure. The measure of R with respect to Ω T {\textstyle \Omega _{T}} is the identity operator on H. In other words, the identity operator can be expressed as the spectral integral I = Ω T ( [ − ∞ , ∞ ] ) = ∫ − ∞ ∞ d Ω T {\displaystyle I=\Omega _{T}([-\infty ,\infty ])=\int _{-\infty }^{\infty }d\Omega _{T}} . Stone's formula expresses the spectral measure Ω T {\displaystyle \Omega _{T}} in terms of the resolvent R T ( λ ) ≡ ( T − λ I ) − 1 {\displaystyle R_{T}(\lambda )\equiv \left(T-\lambda I\right)^{-1}} : 1 2 π i lim ϵ → 0 + ∫ a b [ R T ( λ + i ϵ ) − R T ( λ − i ϵ ) ] d λ = Ω T ( ( a , b ) ) + 1 2 [ Ω T ( { a } ) + Ω T ( { b } ) ] . 
{\displaystyle {\frac {1}{2\pi i}}\lim _{\epsilon \to 0^{+}}\int _{a}^{b}\left[R_{T}(\lambda +i\epsilon )-R_{T}(\lambda -i\epsilon )\right]\,d\lambda =\Omega _{T}((a,b))+{\frac {1}{2}}\left[\Omega _{T}(\{a\})+\Omega _{T}(\{b\})\right].} Depending on the source, the resolution of the identity is defined either as a projection-valued measure Ω T {\displaystyle \Omega _{T}} , or as a one-parameter family of projection-valued measures { Σ λ } {\displaystyle \{\Sigma _{\lambda }\}} with − ∞ < λ < ∞ {\displaystyle -\infty <\lambda <\infty } . In the case of a discrete measure (in particular, when H is finite-dimensional), I = ∫ 1 d Ω T {\textstyle I=\int 1\,d\Omega _{T}} can be written as I = ∑ i | i ⟩ ⟨ i | {\displaystyle I=\sum _{i}\left|i\right\rangle \left\langle i\right|} in the Dirac notation, where each | i ⟩ {\displaystyle |i\rangle } is a normalized eigenvector of T. The set { | i ⟩ } {\displaystyle \{|i\rangle \}} is an orthonormal basis of H. In the physics literature, using the above as a heuristic, one passes to the case when the spectral measure is no longer discrete, writes the resolution of the identity as I = ∫ d i | i ⟩ ⟨ i | {\displaystyle I=\int \!\!di~|i\rangle \langle i|} , and speaks of a "continuous basis", or "continuum of basis states", { | i ⟩ } {\displaystyle \{|i\rangle \}} . Mathematically, unless rigorous justifications are given, this expression is purely formal. == References ==
Wikipedia/Borel_functional_calculus
In mathematics, holomorphic functional calculus is functional calculus with holomorphic functions. That is to say, given a holomorphic function f of a complex argument z and an operator T, the aim is to construct an operator, f(T), which naturally extends the function f from complex argument to operator argument. More precisely, the functional calculus defines a continuous algebra homomorphism from the holomorphic functions on a neighbourhood of the spectrum of T to the bounded operators. This article will discuss the case where T is a bounded linear operator on some Banach space. In particular, T can be a square matrix with complex entries, a case which will be used to illustrate functional calculus and provide some heuristic insights for the assumptions involved in the general construction. == Motivation == === Need for a general functional calculus === In this section T will be assumed to be an n × n matrix with complex entries. If a given function f is of a certain special type, there are natural ways of defining f(T). For instance, if p ( z ) = ∑ i = 0 m a i z i {\displaystyle p(z)=\sum _{i=0}^{m}a_{i}z^{i}} is a complex polynomial, one can simply substitute T for z and define p ( T ) = ∑ i = 0 m a i T i {\displaystyle p(T)=\sum _{i=0}^{m}a_{i}T^{i}} where T0 = I, the identity matrix. This is the polynomial functional calculus. It is a homomorphism from the ring of polynomials to the ring of n × n matrices. Extending slightly from the polynomials, if f : C → C is holomorphic everywhere, i.e. an entire function, with Maclaurin series f ( z ) = ∑ i = 0 ∞ a i z i , {\displaystyle f(z)=\sum _{i=0}^{\infty }a_{i}z^{i},} mimicking the polynomial case suggests we define f ( T ) = ∑ i = 0 ∞ a i T i . {\displaystyle f(T)=\sum _{i=0}^{\infty }a_{i}T^{i}.} Since the Maclaurin series converges everywhere, the above series will converge in the operator norm. An example of this is the exponential of a matrix. 
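For an entire function this series definition is directly computable. As a quick numerical sketch (an arbitrary 2 × 2 rotation generator serves as T), partial sums of the exponential series converge to the familiar rotation matrix:

```python
import numpy as np

# An arbitrary illustrative matrix: the generator of plane rotations.
T = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def exp_series(A, terms=50):
    """Partial sums of the Maclaurin series of e^z at a matrix argument."""
    result = np.eye(len(A))
    term = np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n       # accumulates A^n / n!
        result = result + term
    return result

expT = exp_series(T)

# For this T, e^T is rotation by 1 radian.
expected = np.array([[np.cos(1.0), np.sin(1.0)],
                     [-np.sin(1.0), np.cos(1.0)]])
assert np.allclose(expT, expected)
```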
Replacing z by T in the Maclaurin series of f(z) = ez gives f ( T ) = e T = I + T + T 2 2 ! + T 3 3 ! + ⋯ . {\displaystyle f(T)=e^{T}=I+T+{\frac {T^{2}}{2!}}+{\frac {T^{3}}{3!}}+\cdots .} The requirement that the Maclaurin series of f converges everywhere can be relaxed somewhat. From above it is evident that all that is really needed is that the radius of convergence of the Maclaurin series be greater than ǁTǁ, the operator norm of T. This enlarges somewhat the family of f for which f(T) can be defined using the above approach. However it is not quite satisfactory. For instance, it is a fact from matrix theory that every non-singular T has a logarithm S in the sense that eS = T. It is desirable to have a functional calculus that allows one to define, for a non-singular T, ln(T) such that it coincides with S. This cannot be done via power series; for example, the logarithmic series ln ⁡ ( z + 1 ) = z − z 2 2 + z 3 3 − ⋯ , {\displaystyle \ln(z+1)=z-{\frac {z^{2}}{2}}+{\frac {z^{3}}{3}}-\cdots ,} converges only on the open unit disk. Substituting T for z in the series fails to give a well-defined expression for ln(T + I) for invertible T + I with ǁTǁ ≥ 1. Thus a more general functional calculus is needed. === Functional calculus and the spectrum === It is expected that a necessary condition for f(T) to make sense is that f be defined on the spectrum of T. For example, the spectral theorem for normal matrices states that every normal matrix is unitarily diagonalizable. This leads to a definition of f(T) when T is normal. One encounters difficulties if f(λ) is not defined for some eigenvalue λ of T. Other indications also reinforce the idea that f(T) can be defined only if f is defined on the spectrum of T. If T is not invertible, then (recalling that T is an n × n matrix) 0 is an eigenvalue. Since the natural logarithm is undefined at 0, one would expect that ln(T) cannot be defined naturally. This is indeed the case. 
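When T is normal (here, symmetric), f(T) can be defined by applying f to the eigenvalues, provided f is defined on σ(T). A sketch with an arbitrary positive-definite matrix, for which the log(1 + z) power series is useless but the spectral definition of ln(T) works:

```python
import numpy as np

# Arbitrary symmetric positive-definite matrix; its eigenvalues 3 and 5
# lie well outside the unit disk, so the log(1+z) series cannot be used.
T = np.array([[4.0, 1.0],
              [1.0, 4.0]])

# Define ln(T) on the spectrum: diagonalize and take logs of eigenvalues.
lam, V = np.linalg.eigh(T)
S = V @ np.diag(np.log(lam)) @ V.T

# Check e^S = T by summing the (everywhere-convergent) exponential series.
E = np.eye(2)
term = np.eye(2)
for n in range(1, 40):
    term = term @ S / n
    E = E + term
assert np.allclose(E, T)   # S is a logarithm of T in the sense e^S = T
```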
As another example, for f ( z ) = 1 ( z − 2 ) ( z − 5 ) {\displaystyle f(z)={\frac {1}{(z-2)(z-5)}}} the reasonable way of calculating f(T) would seem to be f ( T ) = ( T − 2 I ) − 1 ( T − 5 I ) − 1 . {\displaystyle f(T)=(T-2I)^{-1}(T-5I)^{-1}.\,} However, this expression is not defined if the inverses on the right-hand side do not exist, that is, if either 2 or 5 is an eigenvalue of T. For a given matrix T, the eigenvalues of T dictate to what extent f(T) can be defined; i.e., f(λ) must be defined for all eigenvalues λ of T. For a general bounded operator this condition translates to "f must be defined on the spectrum of T". This assumption turns out to be an enabling condition such that the functional calculus map, f → f(T), has certain desirable properties. == Functional calculus for a bounded operator == Let X be a complex Banach space, and L(X) denote the family of bounded operators on X. Recall the Cauchy integral formula from classical function theory. Let f : C → C be holomorphic on some open set D ⊂ C, and Γ be a rectifiable Jordan curve in D, that is, a closed curve of finite length without self-intersections. Assume that the set U of points lying in the inside of Γ, i.e. such that the winding number of Γ about z is 1, is contained in D. The Cauchy integral formula states f ( z ) = 1 2 π i ∫ Γ f ( ζ ) ζ − z d ζ {\displaystyle f(z)={\frac {1}{2\pi i}}\int \nolimits _{\Gamma }{\frac {f(\zeta )}{\zeta -z}}\,d\zeta } for any z in U. The idea is to extend this formula to functions taking values in the Banach space L(X). Cauchy's integral formula suggests the following definition (purely formal, for now): f ( T ) = 1 2 π i ∫ Γ f ( ζ ) ζ − T d ζ , {\displaystyle f(T)={\frac {1}{2\pi i}}\int _{\Gamma }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta ,} where (ζ−T)−1 is the resolvent of T at ζ. 
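For the rational function f(z) = 1/((z − 2)(z − 5)) discussed above, the agreement between the "substitute and invert" recipe and applying f on the spectrum is easy to confirm for a small symmetric matrix (an arbitrary choice whose eigenvalues avoid 2 and 5):

```python
import numpy as np

# Arbitrary symmetric matrix; its eigenvalues (4 ± sqrt(5))/2 avoid 2 and 5.
T = np.array([[1.0, 0.5],
              [0.5, 3.0]])
I = np.eye(2)
lam, V = np.linalg.eigh(T)

# f(T) computed via the two inverses...
fT = np.linalg.inv(T - 2 * I) @ np.linalg.inv(T - 5 * I)

# ...agrees with applying f(z) = 1/((z-2)(z-5)) to the spectrum.
f = lambda z: 1.0 / ((z - 2.0) * (z - 5.0))
assert np.allclose(fT, V @ np.diag(f(lam)) @ V.T)
```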
Assuming this Banach space-valued integral is appropriately defined, this proposed functional calculus implies the following necessary conditions: As the scalar version of Cauchy's integral formula applies to holomorphic f, we anticipate that the same holds in the Banach space case, where there should be a suitable notion of holomorphy for functions taking values in the Banach space L(X). As the resolvent mapping ζ → (ζ−T)−1 is undefined on the spectrum of T, σ(T), the Jordan curve Γ should not intersect σ(T). Now, the resolvent mapping will be holomorphic on the complement of σ(T). So to obtain a non-trivial functional calculus, Γ must enclose (at least part of) σ(T). The functional calculus should be well-defined in the sense that f(T) has to be independent of Γ. The full definition of the functional calculus is as follows: For T ∈ L(X), define f ( T ) = 1 2 π i ∫ Γ f ( ζ ) ζ − T d ζ , {\displaystyle f(T)={\frac {1}{2\pi i}}\int \nolimits _{\Gamma }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta ,} where f is a holomorphic function defined on an open set D ⊂ C which contains σ(T), and Γ = {γ1, ..., γm} is a collection of disjoint Jordan curves in D bounding an "inside" set U, such that σ(T) lies in U, and each γi is oriented in the boundary sense. The open set D may vary with f and need not be connected or simply connected, as shown by the figures on the right. The following subsections make precise the notions invoked in the definition and show that f(T) is indeed well defined under the given assumptions. === Banach space-valued integral === Cf. Bochner integral For a continuous function g defined in an open neighborhood of Γ and taking values in L(X), the contour integral ∫Γg is defined in the same way as for the scalar case. One can parametrize each γi ∈ Γ by a real interval [a, b], and the integral is the limit of the Riemann sums obtained from ever-finer partitions of [a, b]. The Riemann sums converge in the uniform operator topology. We define ∫ Γ g = ∑ i ∫ γ i g . 
{\displaystyle \int _{\Gamma }g=\sum \nolimits _{i}\int _{\gamma _{i}}g.} In the definition of the functional calculus, f is assumed to be holomorphic in an open neighborhood of Γ. It will be shown below that the resolvent mapping is holomorphic on the resolvent set. Therefore, the integral 1 2 π i ∫ Γ f ( ζ ) ζ − T d ζ {\displaystyle {\frac {1}{2\pi i}}\int _{\Gamma }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta } makes sense. === The resolvent mapping === The mapping ζ → (ζ−T)−1 is called the resolvent mapping of T. It is defined on the complement of σ(T), called the resolvent set of T, which will be denoted by ρ(T). Much of classical function theory depends on the properties of the integral 1 2 π i ∫ Γ d ζ ζ − z . {\displaystyle {\frac {1}{2\pi i}}\int _{\Gamma }{\frac {d\zeta }{\zeta -z}}.} The holomorphic functional calculus is similar in that the resolvent mapping plays a crucial role in obtaining the properties one requires from a nice functional calculus. This subsection outlines the properties of the resolvent map that are essential in this context. ==== The 1st resolvent formula ==== Direct calculation shows, for z1, z2 ∈ ρ(T), ( z 1 − T ) − 1 − ( z 2 − T ) − 1 = ( z 1 − T ) − 1 ( z 2 − z 1 ) ( z 2 − T ) − 1 . {\displaystyle (z_{1}-T)^{-1}-(z_{2}-T)^{-1}=(z_{1}-T)^{-1}(z_{2}-z_{1})(z_{2}-T)^{-1}.\,} Therefore, ( z 1 − T ) − 1 ( z 2 − T ) − 1 = ( z 1 − T ) − 1 − ( z 2 − T ) − 1 ( z 2 − z 1 ) . {\displaystyle (z_{1}-T)^{-1}(z_{2}-T)^{-1}={\frac {(z_{1}-T)^{-1}-(z_{2}-T)^{-1}}{(z_{2}-z_{1})}}.} This equation is called the first resolvent formula. The formula shows that (z1−T)−1 and (z2−T)−1 commute, which hints at the fact that the image of the functional calculus will be a commutative algebra. Letting z2 → z1 shows that the resolvent map is (complex-) differentiable at each z1 ∈ ρ(T); so the integral in the expression of the functional calculus converges in L(X). ==== Analyticity ==== A stronger statement than differentiability can be made regarding the resolvent map. 
The resolvent set ρ(T) is actually an open set on which the resolvent map is analytic. This property will be used in subsequent arguments for the functional calculus. To verify this claim, let z1 ∈ ρ(T) and notice the formal expression 1 z 2 − T = 1 z 1 − T ⋅ 1 1 − z 1 − z 2 z 1 − T {\displaystyle {\frac {1}{z_{2}-T}}={\frac {1}{z_{1}-T}}\cdot {\frac {1}{1-{\frac {z_{1}-z_{2}}{z_{1}-T}}}}} suggests we consider ( z 1 − T ) − 1 ∑ n ≥ 0 ( ( z 1 − z 2 ) ( z 1 − T ) − 1 ) n {\displaystyle (z_{1}-T)^{-1}\sum _{n\geq 0}\left((z_{1}-z_{2})(z_{1}-T)^{-1}\right)^{n}} for (z2−T)−1. The above series converges in L(X), which implies the existence of (z2−T)−1, if | z 1 − z 2 | < 1 ‖ ( z 1 − T ) − 1 ‖ . {\displaystyle |z_{1}-z_{2}|<{\frac {1}{\left\|(z_{1}-T)^{-1}\right\|}}.} Therefore, the resolvent set ρ(T) is open and the power series expression on an open disk centered at z1 ∈ ρ(T) shows the resolvent map is analytic on ρ(T). ==== Neumann series ==== Another expression for (z−T)−1 will also be useful. The formal expression 1 z − T = 1 z ⋅ 1 1 − T z {\displaystyle {\frac {1}{z-T}}={\frac {1}{z}}\cdot {\frac {1}{1-{\frac {T}{z}}}}} leads one to consider 1 z ∑ n ≥ 0 ( T z ) n . {\displaystyle {\frac {1}{z}}\sum _{n\geq 0}\left({\frac {T}{z}}\right)^{n}.} This series, the Neumann series, converges to (z−T)−1 if ‖ T z ‖ < 1 , i.e. | z | > ‖ T ‖ . {\displaystyle \left\|{\frac {T}{z}}\right\|<1,\;{\text{i.e.}}\;|z|>\|T\|.} ==== Compactness of σ(T) ==== From the last two properties of the resolvent we can deduce that the spectrum σ(T) of a bounded operator T is a compact subset of C. Therefore, for any open set D such that σ(T) ⊂ D, there exists a positively oriented and smooth system of Jordan curves Γ = {γ1, ..., γm} such that σ(T) is in the inside of Γ and the complement of D is contained in the outside of Γ. Hence, for the definition of the functional calculus, indeed a suitable family of Jordan curves can be found for each f that is holomorphic on some D. 
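Both the first resolvent formula and the Neumann series can be confirmed numerically for a matrix; in the sketch below the 2 × 2 matrix and the sample points z1, z2 are arbitrary:

```python
import numpy as np

T = np.array([[0.5, 0.2],
              [0.1, 0.3]])
I = np.eye(2)
R = lambda z: np.linalg.inv(z * I - T)    # resolvent (z - T)^{-1}

# First resolvent formula: R(z1)R(z2) = (R(z1) - R(z2)) / (z2 - z1).
z1, z2 = 1.0 + 2.0j, -3.0 + 0.5j
assert np.allclose(R(z1) @ R(z2), (R(z1) - R(z2)) / (z2 - z1))
assert np.allclose(R(z1) @ R(z2), R(z2) @ R(z1))   # resolvents commute

# Neumann series: (1/z) * sum_n (T/z)^n converges to R(z) when |z| > ||T||.
z = 2.0                 # ||T|| is about 0.6, so |z| > ||T|| here
partial = np.zeros((2, 2))
term = I / z
for n in range(100):
    partial = partial + term
    term = term @ (T / z)
assert np.allclose(partial, R(z))
```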
=== Well-definedness === The previous discussion has shown that the integral makes sense, i.e. a suitable collection Γ of Jordan curves does exist for each f and the integral does converge in the appropriate sense. What has not been shown is that the definition of the functional calculus is unambiguous, i.e. does not depend on the choice of Γ. We now resolve this issue. ==== A preliminary fact ==== For a collection of Jordan curves Γ = {γ1, ..., γm} and a point a ∈ C, the winding number of Γ with respect to a is the sum of the winding numbers of its elements. If we define: n ( Γ , a ) = ∑ i n ( γ i , a ) , {\displaystyle n(\Gamma ,a)=\sum \nolimits _{i}n(\gamma _{i},a),} the following theorem is due to Cauchy: Theorem. Let G ⊂ C be an open set and Γ ⊂ G. If g : G → C is holomorphic, and for all a in the complement of G, n(Γ, a) = 0, then the contour integral of g on Γ is zero. We will need the vector-valued analog of this result when g takes values in L(X). To this end, let g : G → L(X) be holomorphic, with the same assumptions on Γ. The idea is to use the dual space L(X)* of L(X), and pass to Cauchy's theorem for the scalar case. Consider the integral ∫ Γ g ∈ L ( X ) . {\displaystyle \int _{\Gamma }g\in L(X).} If we can show that all φ ∈ L(X)* vanish on this integral, then the integral itself has to be zero. Since φ is bounded and the integral converges in norm, we have: ϕ ( ∫ Γ g ) = ∫ Γ ϕ ( g ) . {\displaystyle \phi \left(\int _{\Gamma }g\right)=\int _{\Gamma }\phi (g).} But g is holomorphic, hence the composition φ(g): G ⊂ C → C is holomorphic and therefore by Cauchy's theorem ∫ Γ ϕ ( g ) = 0. {\displaystyle \int _{\Gamma }\phi (g)=0.} ==== Main argument ==== The well-definedness of the functional calculus now follows as an easy consequence. Let D be an open set containing σ(T). Suppose Γ = {γi} and Ω = {ωj} are two (finite) collections of Jordan curves satisfying the assumption given for the functional calculus. 
We wish to show ∫ Γ f ( ζ ) ζ − T d ζ = ∫ Ω f ( ζ ) ζ − T d ζ . {\displaystyle \int _{\Gamma }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta =\int _{\Omega }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta .} Let Ω′ be obtained from Ω by reversing the orientation of each ωj; then ∫ Ω f ( ζ ) ζ − T d ζ = − ∫ Ω ′ f ( ζ ) ζ − T d ζ . {\displaystyle \int _{\Omega }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta =-\int _{\Omega '}{\frac {f(\zeta )}{\zeta -T}}\,d\zeta .} Consider the union of the two collections Γ ∪ Ω′. Both Γ ∪ Ω′ and σ(T) are compact. So there is some open set U containing Γ ∪ Ω′ such that σ(T) lies in the complement of U. Any a in the complement of U has winding number n(Γ ∪ Ω′, a) = 0 and the function ζ → f ( ζ ) ζ − T {\displaystyle \zeta \rightarrow {\frac {f(\zeta )}{\zeta -T}}} is holomorphic on U. So the vector-valued version of Cauchy's theorem gives ∫ Γ ∪ Ω ′ f ( ζ ) ζ − T d ζ = 0 {\displaystyle \int _{\Gamma \cup \Omega '}{\frac {f(\zeta )}{\zeta -T}}\,d\zeta =0} i.e. ∫ Γ f ( ζ ) ζ − T d ζ + ∫ Ω ′ f ( ζ ) ζ − T d ζ = ∫ Γ f ( ζ ) ζ − T d ζ − ∫ Ω f ( ζ ) ζ − T d ζ = 0. {\displaystyle \int _{\Gamma }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta +\int _{\Omega '}{\frac {f(\zeta )}{\zeta -T}}\,d\zeta =\int _{\Gamma }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta -\int _{\Omega }{\frac {f(\zeta )}{\zeta -T}}\,d\zeta =0.} Hence the functional calculus is well-defined. Consequently, if f1 and f2 are two holomorphic functions defined on neighborhoods D1 and D2 of σ(T) and they are equal on an open set containing σ(T), then f1(T) = f2(T). Moreover, even though D1 may differ from D2, the operator (f1 + f2)(T) is well-defined. The same holds for the definition of (f1·f2)(T). === On the assumption that f be holomorphic over an open neighborhood of σ(T) === So far the full strength of this assumption has not been used. For convergence of the integral, only continuity was used. For well-definedness, we only needed f to be holomorphic on an open set U containing the contours Γ ∪ Ω′ but not necessarily σ(T). 
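In finite dimensions, the independence of the contour can be observed directly. The following sketch (the matrix, the radii, and the choice f(ζ) = ζ3 are all arbitrary) evaluates the Cauchy-integral definition of f(T) on two different circles enclosing σ(T), discretized by the trapezoid rule, and checks that both agree with T3:

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [0.0, -1.0]])   # eigenvalues +1 and -1
I = np.eye(2)

def cauchy_f_of_T(f, radius, N=400):
    """(1/2*pi*i) * contour integral of f(z)(z - T)^{-1} over a circle."""
    total = np.zeros_like(T, dtype=complex)
    for j in range(N):
        zeta = radius * np.exp(2j * np.pi * j / N)
        dzeta = 2j * np.pi * zeta / N          # zeta'(theta) d(theta)
        total += f(zeta) * np.linalg.inv(zeta * I - T) * dzeta
    return total / (2j * np.pi)

f = lambda z: z**3
A = cauchy_f_of_T(f, radius=4.0)
B = cauchy_f_of_T(f, radius=9.0)

# The two contours give the same operator, and it agrees with T^3.
assert np.allclose(A, B)
assert np.allclose(A, np.linalg.matrix_power(T, 3))
```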
The assumption will be applied in its entirety in showing the homomorphism property of the functional calculus. == Properties == === Polynomial case === The linearity of the map f ↦ f(T) follows from the convergence of the integral and that linear operations on a Banach space are continuous. We recover the polynomial functional calculus when f(z) = Σ0 ≤ i ≤ m ai zi is a polynomial. To prove this, it is sufficient to show, for k ≥ 0 and f(z) = zk, it is true that f(T) = Tk, i.e. 1 2 π i ∫ Γ ζ k ζ − T d ζ = T k {\displaystyle {\frac {1}{2\pi i}}\int _{\Gamma }{\frac {\zeta ^{k}}{\zeta -T}}\,d\zeta =T^{k}} for any suitable Γ enclosing σ(T). Choose Γ to be a circle of radius greater than the operator norm of T. As stated above, on such Γ, the resolvent map admits a power series representation ( z − T ) − 1 = 1 z ∑ n ≥ 0 ( T z ) n . {\displaystyle (z-T)^{-1}={\frac {1}{z}}\sum _{n\geq 0}\left({\frac {T}{z}}\right)^{n}.} Substituting gives f ( T ) = 1 2 π i ∫ Γ ( ∑ n ≥ 0 T n ζ n + 1 − k ) d ζ {\displaystyle f(T)={\frac {1}{2\pi i}}\int _{\Gamma }\left(\sum _{n\geq 0}{\frac {T^{n}}{\zeta ^{n+1-k}}}\right)\,d\zeta } which is ∑ n ≥ 0 T n ⋅ 1 2 π i ( ∫ Γ d ζ ζ n + 1 − k ) = ∑ n ≥ 0 T n ⋅ δ n k = T k . {\displaystyle \sum _{n\geq 0}T^{n}\cdot {\frac {1}{2\pi i}}\left(\int _{\Gamma }{\frac {d\zeta }{\zeta ^{n+1-k}}}\right)=\sum _{n\geq 0}T^{n}\cdot \delta _{nk}=T^{k}.} The δ is the Kronecker delta symbol. === The homomorphism property === For any f1 and f2 satisfying the appropriate assumptions, the homomorphism property states f 1 ( T ) f 2 ( T ) = ( f 1 ⋅ f 2 ) ( T ) . {\displaystyle f_{1}(T)f_{2}(T)=(f_{1}\cdot f_{2})(T).\,} We sketch an argument which invokes the first resolvent formula and the assumptions placed on f. First we choose the Jordan curves such that Γ1 lies in the inside of Γ2. The reason for this will become clear below. 
Start by calculating directly f 1 ( T ) f 2 ( T ) = ( 1 2 π i ∫ Γ 1 f 1 ( ζ ) ζ − T d ζ ) ( 1 2 π i ∫ Γ 2 f 2 ( ω ) ω − T d ω ) = 1 ( 2 π i ) 2 ∫ Γ 1 ∫ Γ 2 f 1 ( ζ ) f 2 ( ω ) ( ζ − T ) ( ω − T ) d ω d ζ = 1 ( 2 π i ) 2 ∫ Γ 1 ∫ Γ 2 f 1 ( ζ ) f 2 ( ω ) ( ( ζ − T ) − 1 − ( ω − T ) − 1 ω − ζ ) d ω d ζ First Resolvent Formula = 1 ( 2 π i ) 2 { ( ∫ Γ 1 f 1 ( ζ ) ζ − T [ ∫ Γ 2 f 2 ( ω ) ω − ζ d ω ] d ζ ) − ( ∫ Γ 2 f 2 ( ω ) ω − T [ ∫ Γ 1 f 1 ( ζ ) ω − ζ d ζ ] d ω ) } = 1 ( 2 π i ) 2 ∫ Γ 1 f 1 ( ζ ) ζ − T [ ∫ Γ 2 f 2 ( ω ) ω − ζ d ω ] d ζ {\displaystyle {\begin{aligned}f_{1}(T)f_{2}(T)&=\left({\frac {1}{2\pi i}}\int _{\Gamma _{1}}{\frac {f_{1}(\zeta )}{\zeta -T}}d\zeta \right)\left({\frac {1}{2\pi i}}\int _{\Gamma _{2}}{\frac {f_{2}(\omega )}{\omega -T}}\,d\omega \right)\\&={\frac {1}{(2\pi i)^{2}}}\int _{\Gamma _{1}}\int _{\Gamma _{2}}{\frac {f_{1}(\zeta )f_{2}(\omega )}{(\zeta -T)(\omega -T)}}\;d\omega \,d\zeta \\&={\frac {1}{(2\pi i)^{2}}}\int _{\Gamma _{1}}\int _{\Gamma _{2}}f_{1}(\zeta )f_{2}(\omega )\left({\frac {(\zeta -T)^{-1}-(\omega -T)^{-1}}{\omega -\zeta }}\right)d\omega \,d\zeta &&{\text{First Resolvent Formula}}\\&={\frac {1}{(2\pi i)^{2}}}\left\{\left(\int _{\Gamma _{1}}{\frac {f_{1}(\zeta )}{\zeta -T}}\left[\int _{\Gamma _{2}}{\frac {f_{2}(\omega )}{\omega -\zeta }}d\omega \right]d\zeta \right)-\left(\int _{\Gamma _{2}}{\frac {f_{2}(\omega )}{\omega -T}}\left[\int _{\Gamma _{1}}{\frac {f_{1}(\zeta )}{\omega -\zeta }}d\zeta \right]d\omega \right)\right\}\\&={\frac {1}{(2\pi i)^{2}}}\int _{\Gamma _{1}}{\frac {f_{1}(\zeta )}{\zeta -T}}\left[\int _{\Gamma _{2}}{\frac {f_{2}(\omega )}{\omega -\zeta }}d\omega \right]d\zeta \end{aligned}}} The last line follows from the fact that ω ∈ Γ2 lies outside of Γ1 and f1 is holomorphic on some open neighborhood of σ(T) and therefore the second term vanishes. 
Therefore, we have: f 1 ( T ) f 2 ( T ) = 1 2 π i ∫ Γ 1 f 1 ( ζ ) ζ − T [ 1 2 π i ∫ Γ 2 f 2 ( ω ) ω − ζ d ω ] d ζ = 1 2 π i ∫ Γ 1 f 1 ( ζ ) ζ − T [ f 2 ( ζ ) ] d ζ Cauchy's Integral Formula = 1 2 π i ∫ Γ 1 f 1 ( ζ ) f 2 ( ζ ) ζ − T d ζ = ( f 1 ⋅ f 2 ) ( T ) {\displaystyle {\begin{aligned}f_{1}(T)f_{2}(T)&={\frac {1}{2\pi i}}\int _{\Gamma _{1}}{\frac {f_{1}(\zeta )}{\zeta -T}}\left[{\frac {1}{2\pi i}}\int _{\Gamma _{2}}{\frac {f_{2}(\omega )}{\omega -\zeta }}d\omega \right]d\zeta \\&={\frac {1}{2\pi i}}\int _{\Gamma _{1}}{\frac {f_{1}(\zeta )}{\zeta -T}}\left[f_{2}(\zeta )\right]d\zeta &&{\text{Cauchy's Integral Formula}}\\&={\frac {1}{2\pi i}}\int _{\Gamma _{1}}{\frac {f_{1}(\zeta )f_{2}(\zeta )}{\zeta -T}}d\zeta \\&=(f_{1}\cdot f_{2})(T)\end{aligned}}} === Continuity with respect to compact convergence === Let G ⊂ C be open with σ(T) ⊂ G. Suppose a sequence {fk} of holomorphic functions on G converges uniformly on compact subsets of G (this is sometimes called compact convergence). Then {fk(T)} is convergent in L(X): Assume for simplicity that Γ consists of only one Jordan curve. We estimate ‖ f k ( T ) − f l ( T ) ‖ = 1 2 π ‖ ∫ Γ ( f k − f l ) ( ζ ) ζ − T d ζ ‖ ≤ 1 2 π ∫ Γ | ( f k − f l ) ( ζ ) | ⋅ ‖ ( ζ − T ) − 1 ‖ d ζ {\displaystyle {\begin{aligned}\left\|f_{k}(T)-f_{l}(T)\right\|&={\frac {1}{2\pi }}\left\|\int _{\Gamma }{\frac {(f_{k}-f_{l})(\zeta )}{\zeta -T}}d\zeta \right\|\\&\leq {\frac {1}{2\pi }}\int _{\Gamma }\left|(f_{k}-f_{l})(\zeta )\right|\cdot \left\|(\zeta -T)^{-1}\right\|d\zeta \end{aligned}}} By combining the uniform convergence assumption and various continuity considerations, we see that the above tends to 0 as k, l → ∞. So {fk(T)} is Cauchy, therefore convergent. === Uniqueness === To summarize, we have shown the holomorphic functional calculus, f → f(T), has the following properties: It extends the polynomial functional calculus. 
It is an algebra homomorphism from the algebra of holomorphic functions defined on a neighborhood of σ(T) to L(X). It preserves uniform convergence on compact sets. It can be proved that a calculus satisfying the above properties is unique. We note that everything discussed so far holds verbatim if the family of bounded operators L(X) is replaced by a Banach algebra A. The functional calculus can be defined in exactly the same way for an element in A. == Spectral considerations == === Spectral mapping theorem === It is known that the spectral mapping theorem holds for the polynomial functional calculus: for any polynomial p, σ(p(T)) = p(σ(T)). This can be extended to the holomorphic calculus. To show f(σ(T)) ⊂ σ(f(T)), let μ be any complex number. By a result from complex analysis, there exists a function g holomorphic on a neighborhood of σ(T) such that f ( z ) − f ( μ ) = ( z − μ ) g ( z ) . {\displaystyle f(z)-f(\mu )=(z-\mu )g(z).\,} According to the homomorphism property, f(T) − f(μ) = (T − μ)g(T). Therefore, μ ∈ σ(T) implies f(μ) ∈ σ(f(T)). For the other inclusion, if μ is not in f(σ(T)), then the functional calculus is applicable to g ( z ) = 1 f ( z ) − μ . {\displaystyle g(z)={\frac {1}{f(z)-\mu }}.} So g(T)(f(T) − μ) = I. Therefore, μ does not lie in σ(f(T)). === Spectral projections === The underlying idea is as follows. Suppose that K is a subset of σ(T) and U, V are disjoint neighbourhoods of K and σ(T) \ K respectively. Define e(z) = 1 if z ∈ U and e(z) = 0 if z ∈ V. Then e is a holomorphic function with [e(z)]2 = e(z) and so, for a suitable contour Γ which lies in U ∪ V and which encloses σ(T), the linear operator e ( T ) = 1 2 π i ∫ Γ e ( z ) z − T d z {\displaystyle e(T)={\frac {1}{2\pi i}}\int _{\Gamma }{\frac {e(z)}{z-T}}\,dz} will be a bounded projection that commutes with T and provides a great deal of useful information. It transpires that this scenario is possible if and only if K is both open and closed in the subspace topology on σ(T). 
Moreover, the set V can be safely ignored since e is zero on it and therefore makes no contribution to the integral. The projection e(T) is called the spectral projection of T at K and is denoted by P(K;T). Thus every subset K of σ(T) that is both open and closed in the subspace topology has an associated spectral projection given by P ( K ; T ) = 1 2 π i ∫ Γ d z z − T {\displaystyle P(K;T)={\frac {1}{2\pi i}}\int \nolimits _{\Gamma }{\frac {dz}{z-T}}} where Γ is a contour that encloses K but no other points of σ(T). Since P = P(K;T) is bounded and commutes with T, it enables T to be expressed in the form U ⊕ V where U = T|PX and V = T|(1−P)X. Both PX and (1 − P)X are invariant subspaces of T; moreover, σ(U) = K and σ(V) = σ(T) \ K. A key property is mutual orthogonality. If L is another open and closed set in the subspace topology on σ(T), then P(K;T)P(L;T) = P(L;T)P(K;T) = P(K ∩ L;T), which is zero whenever K and L are disjoint. Spectral projections have numerous applications. Any isolated point of σ(T) is both open and closed in the subspace topology and therefore has an associated spectral projection. When X has finite dimension, σ(T) consists of isolated points and the resultant spectral projections lead to a variant of the Jordan normal form wherein all the Jordan blocks corresponding to the same eigenvalue are consolidated. In other words, there is precisely one block per distinct eigenvalue. The next section considers this decomposition in more detail. Sometimes spectral projections inherit properties from their parent operators. For example, if T is a positive matrix with spectral radius r, then the Perron–Frobenius theorem asserts that r ∈ σ(T). The associated spectral projection P = P(r;T) is also positive, and by mutual orthogonality no other spectral projection can have a positive row or column. 
In fact, TP = rP and (T/r)n → P as n → ∞, so this projection P (which is called the Perron projection) approximates (T/r)n as n increases, and each of its columns is an eigenvector of T. More generally, if T is a compact operator, then all non-zero points in σ(T) are isolated, and so any finite subset of them can be used to decompose T. The associated spectral projection always has finite rank. Those operators in L(X) with similar spectral characteristics are known as Riesz operators. Many classes of Riesz operators (including the compact operators) are ideals in L(X) and provide a rich field for research. However, if X is a Hilbert space, there is exactly one closed ideal sandwiched between the Riesz operators and those of finite rank. Much of the foregoing discussion can be set in the more general context of a complex Banach algebra. Here spectral projections are referred to as spectral idempotents since there may no longer be a space for them to project onto. === Invariant subspace decomposition === If the spectrum σ(T) is not connected, X can be decomposed into invariant subspaces of T using the functional calculus. Let σ(T) be a disjoint union σ ( T ) = ⋃ i = 1 m F i . {\displaystyle \sigma (T)=\bigcup _{i=1}^{m}F_{i}.} Define ei to be 1 on some neighborhood that contains only the component Fi and 0 elsewhere. By the homomorphism property, ei(T) is a projection for all i. In fact it is just the spectral projection P(Fi;T) described above. The relation ei(T) T = T ei(T) means the range of each ei(T), denoted by Xi, is an invariant subspace of T. Since ∑ i e i ( T ) = I , {\displaystyle \sum _{i}e_{i}(T)=I,\,} X can be expressed in terms of these complementary subspaces: X = ∑ i X i . {\displaystyle X=\sum _{i}X_{i}.\,} Similarly, if Ti is T restricted to Xi, then T = ∑ i T i . {\displaystyle T=\sum _{i}T_{i}.\,} Consider the direct sum X ′ = ⨁ i X i . 
{\displaystyle X'=\bigoplus _{i}X_{i}.} With the norm ‖ ⨁ i x i ‖ = ∑ i ‖ x i ‖ , {\displaystyle \left\|\bigoplus _{i}x_{i}\right\|=\sum _{i}\|x_{i}\|,} X' is a Banach space. The mapping R: X' → X defined by R ( ⨁ i x i ) = ∑ i x i {\displaystyle R\left(\bigoplus _{i}x_{i}\right)=\sum _{i}x_{i}} is a Banach space isomorphism, and we see that R T R − 1 = ⨁ i T i . {\displaystyle RTR^{-1}=\bigoplus _{i}T_{i}.} This can be viewed as a block diagonalization of T. When X is finite-dimensional, σ(T) = {λi} is a finite set of points in the complex plane. Choose ei to be 1 on an open disc containing only λi from the spectrum. The corresponding block-diagonal matrix ⨁ i T i {\displaystyle \bigoplus _{i}T_{i}} is the Jordan canonical form of T. == Related results == With stronger assumptions, when T is a normal operator acting on a Hilbert space, the domain of the functional calculus can be broadened. When comparing the two results, a rough analogy can be made with the relationship between the spectral theorem for normal matrices and the Jordan canonical form. When T is a normal operator, a continuous functional calculus can be obtained, that is, one can evaluate f(T) with f being a continuous function defined on σ(T). Using the machinery of measure theory, this can be extended to functions which are only measurable (see Borel functional calculus). In that context, if E ⊂ σ(T) is a Borel set and 1E is the characteristic function of E, the projection operator 1E(T) is a refinement of ei(T) discussed above. The Borel functional calculus extends to unbounded self-adjoint operators on a Hilbert space. In slightly more abstract language, the holomorphic functional calculus can be extended to any element of a Banach algebra, using essentially the same arguments as above. Similarly, the continuous functional calculus holds for normal elements in any C*-algebra and the measurable functional calculus for normal elements in any von Neumann algebra. 
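The spectral projections behind this decomposition are easy to observe numerically in finite dimensions. In this sketch the upper-triangular matrix, with spectrum {1, 5, 6}, is an arbitrary example; the projection for K = {5, 6} is computed from the contour-integral formula over a circle Γ around K and checked to be an idempotent of rank 2 that commutes with T:

```python
import numpy as np

T = np.array([[1.0, 1.0, 0.0],
              [0.0, 5.0, 1.0],
              [0.0, 0.0, 6.0]])   # spectrum {1, 5, 6}
I = np.eye(3)

# Gamma: circle of radius 1.5 around 5.5, enclosing {5, 6} but not 1,
# discretized by the trapezoid rule.
c, r, N = 5.5, 1.5, 400
P = np.zeros((3, 3), dtype=complex)
for j in range(N):
    w = np.exp(2j * np.pi * j / N)
    z = c + r * w
    dz = 2j * np.pi * r * w / N
    P += np.linalg.inv(z * I - T) * dz
P = P / (2j * np.pi)

assert np.allclose(P @ P, P)                 # idempotent
assert np.allclose(P @ T, T @ P)             # commutes with T
assert np.isclose(np.trace(P).real, 2.0)     # rank = # enclosed eigenvalues
```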
=== Unbounded operators === A holomorphic functional calculus can be defined in a similar fashion for unbounded closed operators with non-empty resolvent set. == See also == Helffer–Sjöstrand formula Resolvent formalism Jordan canonical form, where the finite-dimensional case is discussed in some detail. == References == N. Dunford and J.T. Schwartz, Linear Operators, Part I: General Theory, Interscience, 1958. Steven G Krantz. Dictionary of Algebra, Arithmetic, and Trigonometry. CRC Press, 2000. ISBN 1-58488-052-X. Israel Gohberg, Seymour Goldberg and Marinus A. Kaashoek, Classes of Linear Operators: Volume 1. Birkhauser, 1991. ISBN 978-0817625313.
Wikipedia/Holomorphic_functional_calculus
In metalogic, mathematical logic, and computability theory, an effective method or effective procedure is a finite-time, deterministic procedure for solving a problem from a specific class. An effective method is sometimes also called a mechanical method or procedure. == Definition == Formally, a method is called effective for a specific class of problems when it satisfies the following criteria: It consists of a finite number of exact, finite instructions. When it is applied to a problem from its class: It always finishes (terminates) after a finite number of steps. It always produces a correct answer. In principle, it can be done by a human without any aids except writing materials. Its instructions need only be followed rigorously to succeed. In other words, it requires no ingenuity to succeed. Optionally, it may also be required that the method never return a result as if it were an answer when it is applied to a problem from outside its class. Adding this requirement reduces the set of classes for which there is an effective method. == Algorithms == An effective method for calculating the values of a function is an algorithm. Functions for which an effective method exists are sometimes called effectively calculable. == Computable functions == Several independent efforts to give a formal characterization of effective calculability led to a variety of proposed definitions (general recursive functions, Turing machines, λ-calculus) that were later shown to be equivalent. The notion captured by these definitions is known as recursive or effective computability. The Church–Turing thesis states that the two notions coincide: any number-theoretic function that is effectively calculable is recursively computable. As this is not a mathematical statement, it cannot be proven by a mathematical proof. 
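A standard concrete illustration (not taken from the text above) is Euclid's algorithm for the greatest common divisor, which satisfies every criterion listed. A minimal Python sketch:

```python
import math

# Euclid's algorithm: a finite list of exact instructions that terminates
# on every pair of non-negative integers (b strictly decreases at each
# step) and always produces the correct answer; no ingenuity is required.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(252, 105) == 21
assert gcd(252, 105) == math.gcd(252, 105)   # matches the stdlib result
```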
== See also == Decidability (logic) Decision problem Effective results in number theory Function problem Model of computation Recursive set Undecidable problem == References == S. C. Kleene (1967), Mathematical logic. Reprinted, Dover, 2002, ISBN 0-486-42533-9, pp. 233 ff., esp. p. 231.
Wikipedia/Effective_method
An infinite-dimensional vector function is a function whose values lie in an infinite-dimensional topological vector space, such as a Hilbert space or a Banach space. Such functions are applied in most sciences including physics. == Example == Set f k ( t ) = t / k 2 {\displaystyle f_{k}(t)=t/k^{2}} for every positive integer k {\displaystyle k} and every real number t . {\displaystyle t.} Then the function f {\displaystyle f} defined by the formula f ( t ) = ( f 1 ( t ) , f 2 ( t ) , f 3 ( t ) , … ) , {\displaystyle f(t)=(f_{1}(t),f_{2}(t),f_{3}(t),\ldots )\,,} takes values that lie in the infinite-dimensional vector space X {\displaystyle X} (or R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} ) of real-valued sequences. For example, f ( 2 ) = ( 2 , 2 4 , 2 9 , 2 16 , 2 25 , … ) . {\displaystyle f(2)=\left(2,{\frac {2}{4}},{\frac {2}{9}},{\frac {2}{16}},{\frac {2}{25}},\ldots \right).} As a number of different topologies can be defined on the space X , {\displaystyle X,} to talk about the derivative of f , {\displaystyle f,} it is first necessary to specify a topology on X {\displaystyle X} or the concept of a limit in X . {\displaystyle X.} Moreover, for any set A , {\displaystyle A,} there exist infinite-dimensional vector spaces having the (Hamel) dimension of the cardinality of A {\displaystyle A} (for example, the space of functions A → K {\displaystyle A\to K} with finitely-many nonzero elements, where K {\displaystyle K} is the desired field of scalars). Furthermore, the argument t {\displaystyle t} could lie in any set instead of the set of real numbers. == Integral and derivative == Most theorems on integration and differentiation of scalar functions can be generalized to vector-valued functions, often using essentially the same proofs. Perhaps the most important exception is that absolutely continuous functions need not equal the integrals of their (a.e.) 
derivatives (unless, for example, X {\displaystyle X} is a Hilbert space); see the Radon–Nikodym theorem. A curve is a continuous map of the unit interval (or more generally, of a non−degenerate closed interval of real numbers) into a topological space. An arc is a curve that is also a topological embedding. A curve valued in a Hausdorff space is an arc if and only if it is injective. === Derivatives === If f : [ 0 , 1 ] → X , {\displaystyle f:[0,1]\to X,} where X {\displaystyle X} is a Banach space or another topological vector space then the derivative of f {\displaystyle f} can be defined in the usual way: f ′ ( t ) = lim h → 0 f ( t + h ) − f ( t ) h . {\displaystyle f'(t)=\lim _{h\to 0}{\frac {f(t+h)-f(t)}{h}}.} ==== Functions with values in a Hilbert space ==== If f {\displaystyle f} is a function of real numbers with values in a Hilbert space X , {\displaystyle X,} then the derivative of f {\displaystyle f} at a point t {\displaystyle t} can be defined as in the finite-dimensional case: f ′ ( t ) = lim h → 0 f ( t + h ) − f ( t ) h . {\displaystyle f'(t)=\lim _{h\to 0}{\frac {f(t+h)-f(t)}{h}}.} Most results of the finite-dimensional case also hold in the infinite-dimensional case, with some modifications. Differentiation can also be defined for functions of several variables (for example, t ∈ R n {\displaystyle t\in R^{n}} or even t ∈ Y , {\displaystyle t\in Y,} where Y {\displaystyle Y} is an infinite-dimensional vector space). If X {\displaystyle X} is a Hilbert space then any derivative (and any other limit) can be computed componentwise: if f = ( f 1 , f 2 , f 3 , … ) {\displaystyle f=(f_{1},f_{2},f_{3},\ldots )} (that is, f = f 1 e 1 + f 2 e 2 + f 3 e 3 + ⋯ , {\displaystyle f=f_{1}e_{1}+f_{2}e_{2}+f_{3}e_{3}+\cdots ,} where e 1 , e 2 , e 3 , … {\displaystyle e_{1},e_{2},e_{3},\ldots } is an orthonormal basis of the space X {\displaystyle X} ), and f ′ ( t ) {\displaystyle f'(t)} exists, then f ′ ( t ) = ( f 1 ′ ( t ) , f 2 ′ ( t ) , f 3 ′ ( t ) , … ) . 
{\displaystyle f'(t)=(f_{1}'(t),f_{2}'(t),f_{3}'(t),\ldots ).} However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space. Most of the above also holds for other topological vector spaces X {\displaystyle X} . However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, most Banach spaces have no orthonormal basis. === Crinkled arcs === If [ a , b ] {\displaystyle [a,b]} is an interval contained in the domain of a curve f {\displaystyle f} that is valued in a topological vector space then the vector f ( b ) − f ( a ) {\displaystyle f(b)-f(a)} is called the chord of f {\displaystyle f} determined by [ a , b ] {\displaystyle [a,b]} . If [ c , d ] {\displaystyle [c,d]} is another interval in its domain then the two chords are said to be non−overlapping chords if [ a , b ] {\displaystyle [a,b]} and [ c , d ] {\displaystyle [c,d]} have at most one end−point in common. Intuitively, two non−overlapping chords of a curve valued in an inner product space are orthogonal vectors if the curve makes a right angle turn somewhere along its path between its starting point and its ending point. If every pair of non−overlapping chords are orthogonal then such a right turn happens at every point of the curve; such a curve cannot be differentiable at any point. A crinkled arc is an injective continuous curve with the property that any two non−overlapping chords are orthogonal vectors. 
An example of a crinkled arc in the Hilbert L 2 {\displaystyle L^{2}} space L 2 ( 0 , 1 ) {\displaystyle L^{2}(0,1)} is: f : [ 0 , 1 ] → L 2 ( 0 , 1 ) t ↦ 1 [ 0 , t ] {\displaystyle {\begin{alignedat}{4}f:\;&&[0,1]&&\;\to \;&L^{2}(0,1)\\[0.3ex]&&t&&\;\mapsto \;&\mathbb {1} _{[0,t]}\\\end{alignedat}}} where 1 [ 0 , t ] : ( 0 , 1 ) → { 0 , 1 } {\displaystyle \mathbb {1} _{[0,\,t]}:(0,1)\to \{0,1\}} is the indicator function defined by x ↦ { 1 if x ∈ [ 0 , t ] 0 otherwise {\displaystyle x\;\mapsto \;{\begin{cases}1&{\text{ if }}x\in [0,t]\\0&{\text{ otherwise }}\end{cases}}} A crinkled arc can be found in every infinite−dimensional Hilbert space because any such space contains a closed vector subspace that is isomorphic to L 2 ( 0 , 1 ) . {\displaystyle L^{2}(0,1).} A crinkled arc f : [ 0 , 1 ] → X {\displaystyle f:[0,1]\to X} is said to be normalized if f ( 0 ) = 0 , {\displaystyle f(0)=0,} ‖ f ( 1 ) ‖ = 1 , {\displaystyle \|f(1)\|=1,} and the span of its image f ( [ 0 , 1 ] ) {\displaystyle f([0,1])} is a dense subset of X . {\displaystyle X.} If h : [ 0 , 1 ] → [ 0 , 1 ] {\displaystyle h:[0,1]\to [0,1]} is an increasing homeomorphism then f ∘ h {\displaystyle f\circ h} is called a reparameterization of the curve f : [ 0 , 1 ] → X . {\displaystyle f:[0,1]\to X.} Two curves f {\displaystyle f} and g {\displaystyle g} in an inner product space X {\displaystyle X} are unitarily equivalent if there exists a unitary operator L : X → X {\displaystyle L:X\to X} (which is an isometric linear bijection) such that g = L ∘ f {\displaystyle g=L\circ f} (or equivalently, f = L − 1 ∘ g {\displaystyle f=L^{-1}\circ g} ). === Measurability === The measurability of f {\displaystyle f} can be defined by a number of ways, most important of which are Bochner measurability and weak measurability. 
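Returning to the crinkled arc t ↦ 1_{[0,t]} above: its defining orthogonality property can be checked numerically. The chord over [a, b] is f(b) − f(a) = 1_{(a,b]}, so non-overlapping intervals give indicator functions with disjoint supports. The sketch below (Python; discretizing L²(0,1) on a uniform midpoint grid is an approximation made for illustration) verifies this for the chords over [0.1, 0.4] and [0.4, 0.9]:

```python
n = 10_000                                # grid points discretizing (0, 1)
xs = [(i + 0.5) / n for i in range(n)]    # midpoints of the grid cells

def f(t):
    # f(t) = indicator function of [0, t], sampled on the grid
    return [1.0 if x <= t else 0.0 for x in xs]

def inner(u, v):
    # discretized L^2(0,1) inner product: the integral becomes a grid average
    return sum(a * b for a, b in zip(u, v)) / n

def chord(a, b):
    # the chord f(b) - f(a); for this curve it equals the indicator of (a, b]
    return [p - q for p, q in zip(f(b), f(a))]

c1 = chord(0.1, 0.4)
c2 = chord(0.4, 0.9)   # non-overlapping with c1: shares only the endpoint 0.4

print(inner(c1, c2))   # 0.0 -- disjoint supports, so the chords are orthogonal
print(inner(c1, c1))   # 0.3 -- the length of the interval (0.1, 0.4]
```

Because every pair of non-overlapping chords behaves this way, the curve turns at a right angle everywhere, which is what prevents it from being differentiable at any point.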
=== Integrals === The most important integrals of f {\displaystyle f} are called Bochner integral (when X {\displaystyle X} is a Banach space) and Pettis integral (when X {\displaystyle X} is a topological vector space). Both these integrals commute with linear functionals. Also L p {\displaystyle L^{p}} spaces have been defined for such functions. == See also == Differentiation in Fréchet spaces Differentiable vector–valued functions from Euclidean space – Differentiable function in functional analysis == References == Einar Hille & Ralph Phillips: "Functional Analysis and Semi Groups", Amer. Math. Soc. Colloq. Publ. Vol. 31, Providence, R.I., 1957. Halmos, Paul R. (8 November 1982). A Hilbert Space Problem Book. Graduate Texts in Mathematics. Vol. 19 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90685-0. OCLC 8169781.
Wikipedia/Infinite-dimensional_vector_function
In the mathematical discipline of functional analysis, a differentiable vector-valued function from Euclidean space is a differentiable function valued in a topological vector space (TVS) whose domain is a subset of some finite-dimensional Euclidean space. It is possible to generalize the notion of derivative to functions whose domain and codomain are subsets of arbitrary topological vector spaces (TVSs) in multiple ways. But when the domain of a TVS-valued function is a subset of a finite-dimensional Euclidean space, many of these notions become logically equivalent, resulting in a much more limited number of generalizations of the derivative; additionally, differentiability is better behaved than in the general case. This article presents the theory of k {\displaystyle k} -times continuously differentiable functions on an open subset Ω {\displaystyle \Omega } of Euclidean space R n {\displaystyle \mathbb {R} ^{n}} ( 1 ≤ n < ∞ {\displaystyle 1\leq n<\infty } ), which is an important special case of differentiation between arbitrary TVSs. This importance stems partially from the fact that every finite-dimensional vector subspace of a Hausdorff topological vector space is TVS isomorphic to Euclidean space R n {\displaystyle \mathbb {R} ^{n}} so that, for example, this special case can be applied to any function whose domain is an arbitrary Hausdorff TVS by restricting it to finite-dimensional vector subspaces. All vector spaces will be assumed to be over the field F , {\displaystyle \mathbb {F} ,} where F {\displaystyle \mathbb {F} } is either the real numbers R {\displaystyle \mathbb {R} } or the complex numbers C . {\displaystyle \mathbb {C} .} == Continuously differentiable vector-valued functions == A map f , {\displaystyle f,} which may also be denoted by f ( 0 ) , {\displaystyle f^{(0)},} between two topological spaces is said to be 0 {\displaystyle 0} -times continuously differentiable or C 0 {\displaystyle C^{0}} if it is continuous. 
A topological embedding may also be called a C 0 {\displaystyle C^{0}} -embedding. === Curves === Differentiable curves are an important special case of differentiable vector-valued (i.e. TVS-valued) functions which, in particular, are used in the definition of the Gateaux derivative. They are fundamental to the analysis of maps between two arbitrary topological vector spaces X → Y {\displaystyle X\to Y} and so also to the analysis of TVS-valued maps from Euclidean spaces, which is the focus of this article. A continuous map f : I → X {\displaystyle f:I\to X} from a subset I ⊆ R {\displaystyle I\subseteq \mathbb {R} } that is valued in a topological vector space X {\displaystyle X} is said to be (once or 1 {\displaystyle 1} -time) differentiable if for all t ∈ I , {\displaystyle t\in I,} it is differentiable at t , {\displaystyle t,} which by definition means the following limit in X {\displaystyle X} exists: f ′ ( t ) := f ( 1 ) ( t ) := lim t ≠ r ∈ I r → t f ( r ) − f ( t ) r − t = lim t ≠ t + h ∈ I h → 0 f ( t + h ) − f ( t ) h {\displaystyle f^{\prime }(t):=f^{(1)}(t):=\lim _{\stackrel {r\to t}{t\neq r\in I}}{\frac {f(r)-f(t)}{r-t}}=\lim _{\stackrel {h\to 0}{t\neq t+h\in I}}{\frac {f(t+h)-f(t)}{h}}} where in order for this limit to even be well-defined, t {\displaystyle t} must be an accumulation point of I . {\displaystyle I.} If f : I → X {\displaystyle f:I\to X} is differentiable then it is said to be continuously differentiable or C 1 {\displaystyle C^{1}} if its derivative, which is the induced map f ′ = f ( 1 ) : I → X , {\displaystyle f^{\prime }=f^{(1)}:I\to X,} is continuous. 
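The limit defining f′(t) can be probed numerically for a concrete curve. The sketch below (Python; the curve t ↦ (cos t, sin t, t²) into R³ is a hypothetical example, chosen finite-dimensional for simplicity) evaluates the difference quotient from the definition above for shrinking h and compares it with the exact derivative (−sin t, cos t, 2t):

```python
import math

def f(t):
    # a smooth curve R -> R^3 (a simple stand-in for a TVS-valued curve)
    return (math.cos(t), math.sin(t), t * t)

def f_prime(t):
    # exact derivative, for comparison
    return (-math.sin(t), math.cos(t), 2 * t)

def diff_quotient(t, h):
    # the quotient (f(t + h) - f(t)) / h from the definition of f'(t)
    return tuple((a - b) / h for a, b in zip(f(t + h), f(t)))

t = 0.5
for h in (1e-2, 1e-4, 1e-6):
    err = max(abs(a - b) for a, b in zip(diff_quotient(t, h), f_prime(t)))
    print(h, err)   # the error shrinks as h -> 0
```

Here the limit is taken in the norm topology of R³; for a general topological vector space the same quotient is used, but "convergence" means convergence in the topology of X.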
Using induction on 1 < k ∈ N , {\displaystyle 1<k\in \mathbb {N} ,} the map f : I → X {\displaystyle f:I\to X} is k {\displaystyle k} -times continuously differentiable or C k {\displaystyle C^{k}} if its k − 1 th {\displaystyle k-1^{\text{th}}} derivative f ( k − 1 ) : I → X {\displaystyle f^{(k-1)}:I\to X} is continuously differentiable, in which case the k th {\displaystyle k^{\text{th}}} -derivative of f {\displaystyle f} is the map f ( k ) := ( f ( k − 1 ) ) ′ : I → X . {\displaystyle f^{(k)}:=\left(f^{(k-1)}\right)^{\prime }:I\to X.} It is called smooth, C ∞ , {\displaystyle C^{\infty },} or infinitely differentiable if it is k {\displaystyle k} -times continuously differentiable for every integer k ∈ N . {\displaystyle k\in \mathbb {N} .} For k ∈ N , {\displaystyle k\in \mathbb {N} ,} it is called k {\displaystyle k} -times differentiable if it is k − 1 {\displaystyle k-1} -times continuously differentiable and f ( k − 1 ) : I → X {\displaystyle f^{(k-1)}:I\to X} is differentiable. A continuous function f : I → X {\displaystyle f:I\to X} from a non-empty and non-degenerate interval I ⊆ R {\displaystyle I\subseteq \mathbb {R} } into a topological space X {\displaystyle X} is called a curve or a C 0 {\displaystyle C^{0}} curve in X . {\displaystyle X.} A path in X {\displaystyle X} is a curve in X {\displaystyle X} whose domain is compact while an arc or C0-arc in X {\displaystyle X} is a path in X {\displaystyle X} that is also a topological embedding. 
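The inductive rule f^(k) := (f^(k−1))′ translates directly into code. The sketch below (Python; polynomial curves in R², stored as coefficient lists so that each differentiation is exact, are a hypothetical illustration) iterates the rule k times. Since every component is a polynomial, the curve is C^k for every k, i.e. smooth:

```python
def deriv(coeffs):
    # derivative of the polynomial c0 + c1*t + c2*t^2 + ... ; the
    # "or [0.0]" keeps a zero polynomial when nothing remains
    return [i * c for i, c in enumerate(coeffs)][1:] or [0.0]

def kth_derivative(components, k):
    # f^(k) := (f^(k-1))', applied k times; componentwise differentiation
    # is valid here because R^2 is finite-dimensional
    for _ in range(k):
        components = [deriv(c) for c in components]
    return components

# curve t |-> (t^3, 2t) in R^2: components [0, 0, 0, 1] and [0, 2]
curve = [[0.0, 0.0, 0.0, 1.0], [0.0, 2.0]]
print(kth_derivative(curve, 2))   # [[0.0, 6.0], [0.0]], i.e. t |-> (6t, 0)
```

For a general TVS-valued curve no such exact representation is available, and each derivative must be obtained as the limit in the definition above.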
For any k ∈ { 1 , 2 , … , ∞ } , {\displaystyle k\in \{1,2,\ldots ,\infty \},} a curve f : I → X {\displaystyle f:I\to X} valued in a topological vector space X {\displaystyle X} is called a C k {\displaystyle C^{k}} -embedding if it is a topological embedding and a C k {\displaystyle C^{k}} curve such that f ′ ( t ) ≠ 0 {\displaystyle f^{\prime }(t)\neq 0} for every t ∈ I , {\displaystyle t\in I,} where it is called a C k {\displaystyle C^{k}} -arc if it is also a path (or equivalently, also a C 0 {\displaystyle C^{0}} -arc) in addition to being a C k {\displaystyle C^{k}} -embedding. === Differentiability on Euclidean space === The definitions given above for curves are now extended from functions defined on subsets of R {\displaystyle \mathbb {R} } to functions defined on open subsets of finite-dimensional Euclidean spaces. Throughout, let Ω {\displaystyle \Omega } be an open subset of R n , {\displaystyle \mathbb {R} ^{n},} where n ≥ 1 {\displaystyle n\geq 1} is an integer. Suppose t = ( t 1 , … , t n ) ∈ Ω {\displaystyle t=\left(t_{1},\ldots ,t_{n}\right)\in \Omega } and f : domain ⁡ f → Y {\displaystyle f:\operatorname {domain} f\to Y} is a function such that t ∈ domain ⁡ f {\displaystyle t\in \operatorname {domain} f} with t {\displaystyle t} an accumulation point of domain ⁡ f . {\displaystyle \operatorname {domain} f.} Then f {\displaystyle f} is differentiable at t {\displaystyle t} if there exist n {\displaystyle n} vectors e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} in Y , {\displaystyle Y,} called the partial derivatives of f {\displaystyle f} at t {\displaystyle t} , such that lim t ≠ p ∈ domain ⁡ f p → t f ( p ) − f ( t ) − ∑ i = 1 n ( p i − t i ) e i ‖ p − t ‖ 2 = 0 in Y {\displaystyle \lim _{\stackrel {p\to t}{t\neq p\in \operatorname {domain} f}}{\frac {f(p)-f(t)-\sum _{i=1}^{n}\left(p_{i}-t_{i}\right)e_{i}}{\|p-t\|_{2}}}=0{\text{ in }}Y} where p = ( p 1 , … , p n ) . 
{\displaystyle p=\left(p_{1},\ldots ,p_{n}\right).} If f {\displaystyle f} is differentiable at a point then it is continuous at that point. If f {\displaystyle f} is differentiable at every point in some subset S {\displaystyle S} of its domain then f {\displaystyle f} is said to be (once or 1 {\displaystyle 1} -time) differentiable in S {\displaystyle S} , where if the subset S {\displaystyle S} is not mentioned then this means that it is differentiable at every point in its domain. If f {\displaystyle f} is differentiable and if each of its partial derivatives is a continuous function then f {\displaystyle f} is said to be (once or 1 {\displaystyle 1} -time) continuously differentiable or C 1 . {\displaystyle C^{1}.} For k ∈ N , {\displaystyle k\in \mathbb {N} ,} having defined what it means for a function f {\displaystyle f} to be C k {\displaystyle C^{k}} (or k {\displaystyle k} times continuously differentiable), say that f {\displaystyle f} is k + 1 {\displaystyle k+1} times continuously differentiable or that f {\displaystyle f} is C k + 1 {\displaystyle C^{k+1}} if f {\displaystyle f} is continuously differentiable and each of its partial derivatives is C k . {\displaystyle C^{k}.} Say that f {\displaystyle f} is smooth, C ∞ , {\displaystyle C^{\infty },} or infinitely differentiable if f {\displaystyle f} is C k {\displaystyle C^{k}} for all k = 0 , 1 , … . {\displaystyle k=0,1,\ldots .} The support of a function f {\displaystyle f} is the closure (taken in its domain domain ⁡ f {\displaystyle \operatorname {domain} f} ) of the set { x ∈ domain ⁡ f : f ( x ) ≠ 0 } . {\displaystyle \{x\in \operatorname {domain} f:f(x)\neq 0\}.} == Spaces of Ck vector-valued functions == In this section, the space of smooth test functions and its canonical LF-topology are generalized to functions valued in general complete Hausdorff locally convex topological vector spaces (TVSs). 
After this task is completed, it is revealed that the topological vector space C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} that was constructed could (up to TVS-isomorphism) have instead been defined simply as the completed injective tensor product C k ( Ω ) ⊗ ^ ϵ Y {\displaystyle C^{k}(\Omega ){\widehat {\otimes }}_{\epsilon }Y} of the usual space of smooth test functions C k ( Ω ) {\displaystyle C^{k}(\Omega )} with Y . {\displaystyle Y.} Throughout, let Y {\displaystyle Y} be a Hausdorff topological vector space (TVS), let k ∈ { 0 , 1 , … , ∞ } , {\displaystyle k\in \{0,1,\ldots ,\infty \},} and let Ω {\displaystyle \Omega } be either: an open subset of R n , {\displaystyle \mathbb {R} ^{n},} where n ≥ 1 {\displaystyle n\geq 1} is an integer, or else a locally compact topological space, in which case k {\displaystyle k} can only be 0. {\displaystyle 0.} === Space of Ck functions === For any k = 0 , 1 , … , ∞ , {\displaystyle k=0,1,\ldots ,\infty ,} let C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} denote the vector space of all C k {\displaystyle C^{k}} Y {\displaystyle Y} -valued maps defined on Ω {\displaystyle \Omega } and let C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} denote the vector subspace of C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} consisting of all maps in C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} that have compact support. Let C k ( Ω ) {\displaystyle C^{k}(\Omega )} denote C k ( Ω ; F ) {\displaystyle C^{k}(\Omega ;\mathbb {F} )} and C c k ( Ω ) {\displaystyle C_{c}^{k}(\Omega )} denote C c k ( Ω ; F ) . {\displaystyle C_{c}^{k}(\Omega ;\mathbb {F} ).} Give C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} the topology of uniform convergence of the functions together with their derivatives of order < k + 1 {\displaystyle <k+1} on the compact subsets of Ω . 
{\displaystyle \Omega .} Suppose Ω 1 ⊆ Ω 2 ⊆ ⋯ {\displaystyle \Omega _{1}\subseteq \Omega _{2}\subseteq \cdots } is a sequence of relatively compact open subsets of Ω {\displaystyle \Omega } whose union is Ω {\displaystyle \Omega } and that satisfy Ω i ¯ ⊆ Ω i + 1 {\displaystyle {\overline {\Omega _{i}}}\subseteq \Omega _{i+1}} for all i . {\displaystyle i.} Suppose that ( U α ) α ∈ A {\displaystyle \left(U_{\alpha }\right)_{\alpha \in A}} is a basis of neighborhoods of the origin in Y . {\displaystyle Y.} Then for any integer ℓ < k + 1 , {\displaystyle \ell <k+1,} the sets: U i , ℓ , α := { f ∈ C k ( Ω ; Y ) : ( ∂ / ∂ p ) q f ( p ) ∈ U α for all p ∈ Ω i and all q ∈ N n , | q | ≤ ℓ } {\displaystyle {\mathcal {U}}_{i,\ell ,\alpha }:=\left\{f\in C^{k}(\Omega ;Y):\left(\partial /\partial p\right)^{q}f(p)\in U_{\alpha }{\text{ for all }}p\in \Omega _{i}{\text{ and all }}q\in \mathbb {N} ^{n},|q|\leq \ell \right\}} form a basis of neighborhoods of the origin for C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} as i , {\displaystyle i,} ℓ , {\displaystyle \ell ,} and α ∈ A {\displaystyle \alpha \in A} vary in all possible ways. If Ω {\displaystyle \Omega } is a countable union of compact subsets and Y {\displaystyle Y} is a Fréchet space, then so is C k ( Ω ; Y ) . {\displaystyle C^{k}(\Omega ;Y).} Note that U i , l , α {\displaystyle {\mathcal {U}}_{i,l,\alpha }} is convex whenever U α {\displaystyle U_{\alpha }} is convex. If Y {\displaystyle Y} is metrizable (resp. complete, locally convex, Hausdorff) then so is C k ( Ω ; Y ) . 
{\displaystyle C^{k}(\Omega ;Y).} If ( p α ) α ∈ A {\displaystyle (p_{\alpha })_{\alpha \in A}} is a basis of continuous seminorms for Y {\displaystyle Y} then a basis of continuous seminorms on C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} is: μ i , l , α ( f ) := sup p ∈ Ω i ( ∑ | q | ≤ l p α ( ( ∂ / ∂ p ) q f ( p ) ) ) {\displaystyle \mu _{i,l,\alpha }(f):=\sup _{p\in \Omega _{i}}\left(\sum _{|q|\leq l}p_{\alpha }\left(\left(\partial /\partial p\right)^{q}f(p)\right)\right)} as i , {\displaystyle i,} ℓ , {\displaystyle \ell ,} and α ∈ A {\displaystyle \alpha \in A} vary in all possible ways. === Space of Ck functions with support in a compact subset === The definition of the topology of the space of test functions is now duplicated and generalized. For any compact subset K ⊆ Ω , {\displaystyle K\subseteq \Omega ,} let C k ( K ; Y ) {\displaystyle C^{k}(K;Y)} denote the set of all f {\displaystyle f} in C k ( Ω ; Y ) {\displaystyle C^{k}(\Omega ;Y)} whose support lies in K {\displaystyle K} (in particular, if f ∈ C k ( K ; Y ) {\displaystyle f\in C^{k}(K;Y)} then the domain of f {\displaystyle f} is Ω {\displaystyle \Omega } rather than K {\displaystyle K} ) and give it the subspace topology induced by C k ( Ω ; Y ) . {\displaystyle C^{k}(\Omega ;Y).} If K {\displaystyle K} is a compact space and Y {\displaystyle Y} is a Banach space, then C 0 ( K ; Y ) {\displaystyle C^{0}(K;Y)} becomes a Banach space normed by ‖ f ‖ := sup ω ∈ Ω ‖ f ( ω ) ‖ . {\displaystyle \|f\|:=\sup _{\omega \in \Omega }\|f(\omega )\|.} Let C k ( K ) {\displaystyle C^{k}(K)} denote C k ( K ; F ) . 
{\displaystyle C^{k}(K;\mathbb {F} ).} For any two compact subsets K ⊆ L ⊆ Ω , {\displaystyle K\subseteq L\subseteq \Omega ,} the inclusion In K L : C k ( K ; Y ) → C k ( L ; Y ) {\displaystyle \operatorname {In} _{K}^{L}:C^{k}(K;Y)\to C^{k}(L;Y)} is an embedding of TVSs and the union of all C k ( K ; Y ) , {\displaystyle C^{k}(K;Y),} as K {\displaystyle K} varies over the compact subsets of Ω , {\displaystyle \Omega ,} is C c k ( Ω ; Y ) . {\displaystyle C_{c}^{k}(\Omega ;Y).} === Space of compactly supported Ck functions === For any compact subset K ⊆ Ω , {\displaystyle K\subseteq \Omega ,} let In K : C k ( K ; Y ) → C c k ( Ω ; Y ) {\displaystyle \operatorname {In} _{K}:C^{k}(K;Y)\to C_{c}^{k}(\Omega ;Y)} denote the inclusion map and endow C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} with the strongest topology making all In K {\displaystyle \operatorname {In} _{K}} continuous, which is known as the final topology induced by these maps. The spaces C k ( K ; Y ) {\displaystyle C^{k}(K;Y)} and maps In K 1 K 2 {\displaystyle \operatorname {In} _{K_{1}}^{K_{2}}} form a direct system (directed by the compact subsets of Ω {\displaystyle \Omega } ) whose limit in the category of TVSs is C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} together with the injections In K . {\displaystyle \operatorname {In} _{K}.} The spaces C k ( Ω i ¯ ; Y ) {\displaystyle C^{k}\left({\overline {\Omega _{i}}};Y\right)} and maps In Ω i ¯ Ω j ¯ {\displaystyle \operatorname {In} _{\overline {\Omega _{i}}}^{\overline {\Omega _{j}}}} also form a direct system (directed by the total order N {\displaystyle \mathbb {N} } ) whose limit in the category of TVSs is C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} together with the injections In Ω i ¯ . {\displaystyle \operatorname {In} _{\overline {\Omega _{i}}}.} Each embedding In K {\displaystyle \operatorname {In} _{K}} is an embedding of TVSs. 
A subset S {\displaystyle S} of C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} is a neighborhood of the origin in C c k ( Ω ; Y ) {\displaystyle C_{c}^{k}(\Omega ;Y)} if and only if S ∩ C k ( K ; Y ) {\displaystyle S\cap C^{k}(K;Y)} is a neighborhood of the origin in C k ( K ; Y ) {\displaystyle C^{k}(K;Y)} for every compact K ⊆ Ω . {\displaystyle K\subseteq \Omega .} This direct limit topology (i.e. the final topology) on C c ∞ ( Ω ) {\displaystyle C_{c}^{\infty }(\Omega )} is known as the canonical LF topology. If Y {\displaystyle Y} is a Hausdorff locally convex space, T {\displaystyle T} is a TVS, and u : C c k ( Ω ; Y ) → T {\displaystyle u:C_{c}^{k}(\Omega ;Y)\to T} is a linear map, then u {\displaystyle u} is continuous if and only if for all compact K ⊆ Ω , {\displaystyle K\subseteq \Omega ,} the restriction of u {\displaystyle u} to C k ( K ; Y ) {\displaystyle C^{k}(K;Y)} is continuous. The statement remains true if "all compact K ⊆ Ω {\displaystyle K\subseteq \Omega } " is replaced with "all K := Ω ¯ i {\displaystyle K:={\overline {\Omega }}_{i}} ". === Properties === === Identification as a tensor product === Suppose henceforth that Y {\displaystyle Y} is Hausdorff. Given a function f ∈ C k ( Ω ) {\displaystyle f\in C^{k}(\Omega )} and a vector y ∈ Y , {\displaystyle y\in Y,} let f ⊗ y {\displaystyle f\otimes y} denote the map f ⊗ y : Ω → Y {\displaystyle f\otimes y:\Omega \to Y} defined by ( f ⊗ y ) ( p ) = f ( p ) y . {\displaystyle (f\otimes y)(p)=f(p)y.} This defines a bilinear map ⊗ : C k ( Ω ) × Y → C k ( Ω ; Y ) {\displaystyle \otimes :C^{k}(\Omega )\times Y\to C^{k}(\Omega ;Y)} into the space of functions whose image is contained in a finite-dimensional vector subspace of Y ; {\displaystyle Y;} this bilinear map turns this subspace into a tensor product of C k ( Ω ) {\displaystyle C^{k}(\Omega )} and Y , {\displaystyle Y,} which we will denote by C k ( Ω ) ⊗ Y . 
{\displaystyle C^{k}(\Omega )\otimes Y.} Furthermore, if C c k ( Ω ) ⊗ Y {\displaystyle C_{c}^{k}(\Omega )\otimes Y} denotes the vector subspace of C k ( Ω ) ⊗ Y {\displaystyle C^{k}(\Omega )\otimes Y} consisting of all functions with compact support, then C c k ( Ω ) ⊗ Y {\displaystyle C_{c}^{k}(\Omega )\otimes Y} is a tensor product of C c k ( Ω ) {\displaystyle C_{c}^{k}(\Omega )} and Y . {\displaystyle Y.} If Ω {\displaystyle \Omega } is locally compact then C c 0 ( Ω ) ⊗ Y {\displaystyle C_{c}^{0}(\Omega )\otimes Y} is dense in C 0 ( Ω ; Y ) {\displaystyle C^{0}(\Omega ;Y)} while if Ω {\displaystyle \Omega } is an open subset of R n {\displaystyle \mathbb {R} ^{n}} then C c ∞ ( Ω ) ⊗ Y {\displaystyle C_{c}^{\infty }(\Omega )\otimes Y} is dense in C k ( Ω ; Y ) . {\displaystyle C^{k}(\Omega ;Y).} == See also == Convenient vector space – locally convex vector spaces satisfying a very mild completeness condition Crinkled arc Differentiation in Fréchet spaces Fréchet derivative – Derivative defined on normed spaces Gateaux derivative – Generalization of the concept of directional derivative Infinite-dimensional vector function – function whose values lie in an infinite-dimensional vector space Injective tensor product == Notes == == Citations == == References == Diestel, Joe (2008). The Metric Theory of Tensor Products: Grothendieck's Résumé Revisited. Vol. 16. Providence, R.I.: American Mathematical Society. ISBN 9781470424831. OCLC 185095773. Dubinsky, Ed (1979). The Structure of Nuclear Fréchet Spaces. Lecture Notes in Mathematics. Vol. 720. Berlin New York: Springer-Verlag. ISBN 978-3-540-09504-0. OCLC 5126156. Grothendieck, Alexander (1955). "Produits Tensoriels Topologiques et Espaces Nucléaires" [Topological Tensor Products and Nuclear Spaces]. Memoirs of the American Mathematical Society Series (in French). 16. Providence: American Mathematical Society. MR 0075539. 
OCLC 9308061. Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098. Hogbe-Nlend, Henri; Moscatelli, V. B. (1981). Nuclear and Conuclear Spaces: Introductory Course on Nuclear and Conuclear Spaces in the Light of the Duality "topology-bornology". North-Holland Mathematics Studies. Vol. 52. Amsterdam New York New York: North Holland. ISBN 978-0-08-087163-9. OCLC 316564345. Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370. Pietsch, Albrecht (1979). Nuclear Locally Convex Spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 66 (Second ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-05644-9. OCLC 539541. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Ryan, Raymond A. (2002). Introduction to Tensor Products of Banach Spaces. Springer Monographs in Mathematics. London New York: Springer. ISBN 978-1-85233-437-6. OCLC 48092184. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Wong, Yau-Chuen (1979). Schwartz Spaces, Nuclear Spaces, and Tensor Products. Lecture Notes in Mathematics. Vol. 726. Berlin New York: Springer-Verlag. ISBN 978-3-540-09513-2. OCLC 5126158.
Wikipedia/Differentiable_vector-valued_functions_from_Euclidean_space
In measure theory, a radonifying function (ultimately named after Johann Radon) between measurable spaces is one that takes a cylinder set measure (CSM) on the first space to a true measure on the second space. It acquired its name because the pushforward measure on the second space was historically thought of as a Radon measure. == Definition == Given two separable Banach spaces E {\displaystyle E} and G {\displaystyle G} , a CSM { μ T | T ∈ A ( E ) } {\displaystyle \{\mu _{T}|T\in {\mathcal {A}}(E)\}} on E {\displaystyle E} and a continuous linear map θ ∈ L i n ( E ; G ) {\displaystyle \theta \in \mathrm {Lin} (E;G)} , we say that θ {\displaystyle \theta } is radonifying if the push forward CSM (see below) { ( θ ∗ ( μ ⋅ ) ) S | S ∈ A ( G ) } {\displaystyle \left\{\left.\left(\theta _{*}(\mu _{\cdot })\right)_{S}\right|S\in {\mathcal {A}}(G)\right\}} on G {\displaystyle G} "is" a measure, i.e. there is a measure ν {\displaystyle \nu } on G {\displaystyle G} such that ( θ ∗ ( μ ⋅ ) ) S = S ∗ ( ν ) {\displaystyle \left(\theta _{*}(\mu _{\cdot })\right)_{S}=S_{*}(\nu )} for each S ∈ A ( G ) {\displaystyle S\in {\mathcal {A}}(G)} , where S ∗ ( ν ) {\displaystyle S_{*}(\nu )} is the usual push forward of the measure ν {\displaystyle \nu } by the linear map S : G → F S {\displaystyle S:G\to F_{S}} . == Push forward of a CSM == Because the definition of a CSM on G {\displaystyle G} requires that the maps in A ( G ) {\displaystyle {\mathcal {A}}(G)} be surjective, the definition of the push forward for a CSM requires careful attention. The CSM { ( θ ∗ ( μ ⋅ ) ) S | S ∈ A ( G ) } {\displaystyle \left\{\left.\left(\theta _{*}(\mu _{\cdot })\right)_{S}\right|S\in {\mathcal {A}}(G)\right\}} is defined by ( θ ∗ ( μ ⋅ ) ) S = μ S ∘ θ {\displaystyle \left(\theta _{*}(\mu _{\cdot })\right)_{S}=\mu _{S\circ \theta }} if the composition S ∘ θ : E → F S {\displaystyle S\circ \theta :E\to F_{S}} is surjective. 
If S ∘ θ {\displaystyle S\circ \theta } is not surjective, let F ~ {\displaystyle {\tilde {F}}} be the image of S ∘ θ {\displaystyle S\circ \theta } , let i : F ~ → F S {\displaystyle i:{\tilde {F}}\to F_{S}} be the inclusion map, and define ( θ ∗ ( μ ⋅ ) ) S = i ∗ ( μ Σ ) {\displaystyle \left(\theta _{*}(\mu _{\cdot })\right)_{S}=i_{*}\left(\mu _{\Sigma }\right)} , where Σ : E → F ~ {\displaystyle \Sigma :E\to {\tilde {F}}} (so Σ ∈ A ( E ) {\displaystyle \Sigma \in {\mathcal {A}}(E)} ) is such that i ∘ Σ = S ∘ θ {\displaystyle i\circ \Sigma =S\circ \theta } . == See also == Abstract Wiener space – Mathematical construction relating to infinite-dimensional spaces Classical Wiener space – Space of stochastic processes Sazonov's theorem == References ==
Wikipedia/Radonifying_function
In mathematics—specifically, in functional analysis—a weakly measurable function taking values in a Banach space is a function whose composition with any element of the dual space is a measurable function in the usual (strong) sense. For separable spaces, the notions of weak and strong measurability agree. == Definition == If ( X , Σ ) {\displaystyle (X,\Sigma )} is a measurable space and B {\displaystyle B} is a Banach space over a field K {\displaystyle \mathbb {K} } (which is the real numbers R {\displaystyle \mathbb {R} } or complex numbers C {\displaystyle \mathbb {C} } ), then f : X → B {\displaystyle f:X\to B} is said to be weakly measurable if, for every continuous linear functional g : B → K , {\displaystyle g:B\to \mathbb {K} ,} the function g ∘ f : X → K defined by x ↦ g ( f ( x ) ) {\displaystyle g\circ f\colon X\to \mathbb {K} \quad {\text{ defined by }}\quad x\mapsto g(f(x))} is a measurable function with respect to Σ {\displaystyle \Sigma } and the usual Borel σ {\displaystyle \sigma } -algebra on K . {\displaystyle \mathbb {K} .} A measurable function on a probability space is usually referred to as a random variable (or random vector if it takes values in a vector space such as the Banach space B {\displaystyle B} ). Thus, as a special case of the above definition, if ( Ω , P ) {\displaystyle (\Omega ,{\mathcal {P}})} is a probability space, then a function Z : Ω → B {\displaystyle Z:\Omega \to B} is called a ( B {\displaystyle B} -valued) weak random variable (or weak random vector) if, for every continuous linear functional g : B → K , {\displaystyle g:B\to \mathbb {K} ,} the function g ∘ Z : Ω → K defined by ω ↦ g ( Z ( ω ) ) {\displaystyle g\circ Z\colon \Omega \to \mathbb {K} \quad {\text{ defined by }}\quad \omega \mapsto g(Z(\omega ))} is a K {\displaystyle \mathbb {K} } -valued random variable (i.e. 
measurable function) in the usual sense, with respect to Σ {\displaystyle \Sigma } and the usual Borel σ {\displaystyle \sigma } -algebra on K . {\displaystyle \mathbb {K} .} == Properties == The relationship between measurability and weak measurability is given by the following result, known as Pettis' theorem or Pettis measurability theorem: for a measure space ( X , Σ , μ ) {\displaystyle (X,\Sigma ,\mu )} , a function f : X → B {\displaystyle f:X\to B} is (strongly) measurable if and only if it is weakly measurable and almost surely separably valued. A function f {\displaystyle f} is said to be almost surely separably valued (or essentially separably valued) if there exists a subset N ⊆ X {\displaystyle N\subseteq X} with μ ( N ) = 0 {\displaystyle \mu (N)=0} such that f ( X ∖ N ) ⊆ B {\displaystyle f(X\setminus N)\subseteq B} is separable. In the case that B {\displaystyle B} is separable, since any subset of a separable Banach space is itself separable, one can take N {\displaystyle N} above to be empty, and it follows that the notions of weak and strong measurability agree when B {\displaystyle B} is separable. == See also == Bochner measurable function Bochner integral – Concept in mathematics Bochner space – Type of topological space Pettis integral Vector measure == References == Pettis, B. J. (1938). "On integration in vector spaces". Trans. Amer. Math. Soc. 44 (2): 277–304. doi:10.2307/1989973. ISSN 0002-9947. MR 1501970. Showalter, Ralph E. (1997). "Theorem III.1.1". Monotone operators in Banach space and nonlinear partial differential equations. Mathematical Surveys and Monographs 49. Providence, RI: American Mathematical Society. p. 103. ISBN 0-8218-0500-2. MR 1422252.
Wikipedia/Weakly_measurable_function
In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables. However, not all random variables have moment-generating functions. As its name implies, the moment-generating function can be used to compute a distribution’s moments: the n-th moment about 0 is the n-th derivative of the moment-generating function, evaluated at 0. In addition to univariate real-valued distributions, moment-generating functions can also be defined for vector- or matrix-valued random variables, and can even be extended to more general cases. The moment-generating function of a real-valued distribution does not always exist, unlike the characteristic function. There are relations between the behavior of the moment-generating function of a distribution and properties of the distribution, such as the existence of moments. == Definition == Let X {\displaystyle X} be a random variable with CDF F X {\displaystyle F_{X}} . The moment generating function (mgf) of X {\displaystyle X} (or F X {\displaystyle F_{X}} ), denoted by M X ( t ) {\displaystyle M_{X}(t)} , is M X ( t ) = E ⁡ [ e t X ] {\displaystyle M_{X}(t)=\operatorname {E} \left[e^{tX}\right]} provided this expectation exists for t {\displaystyle t} in some open neighborhood of 0. That is, there is an h > 0 {\displaystyle h>0} such that for all t {\displaystyle t} in − h < t < h {\displaystyle -h<t<h} , E ⁡ [ e t X ] {\displaystyle \operatorname {E} \left[e^{tX}\right]} exists. If the expectation does not exist in an open neighborhood of 0, we say that the moment generating function does not exist.
In other words, the moment-generating function of X is the expectation of the random variable e t X {\displaystyle e^{tX}} . More generally, when X = ( X 1 , … , X n ) T {\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{n})^{\mathrm {T} }} , an n {\displaystyle n} -dimensional random vector, and t {\displaystyle \mathbf {t} } is a fixed vector, one uses t ⋅ X = t T X {\displaystyle \mathbf {t} \cdot \mathbf {X} =\mathbf {t} ^{\mathrm {T} }\mathbf {X} } instead of t X {\displaystyle tX} : M X ( t ) := E ⁡ [ e t T X ] . {\displaystyle M_{\mathbf {X} }(\mathbf {t} ):=\operatorname {E} \left[e^{\mathbf {t} ^{\mathrm {T} }\mathbf {X} }\right].} M X ( 0 ) {\displaystyle M_{X}(0)} always exists and is equal to 1. However, a key problem with moment-generating functions is that moments and the moment-generating function may not exist, as the integrals need not converge absolutely. By contrast, the characteristic function or Fourier transform always exists (because it is the integral of a bounded function on a space of finite measure), and for some purposes may be used instead. The moment-generating function is so named because it can be used to find the moments of the distribution. The series expansion of e t X {\displaystyle e^{tX}} is e t X = 1 + t X + t 2 X 2 2 ! + t 3 X 3 3 ! + ⋯ + t n X n n ! + ⋯ . {\displaystyle e^{tX}=1+tX+{\frac {t^{2}X^{2}}{2!}}+{\frac {t^{3}X^{3}}{3!}}+\cdots +{\frac {t^{n}X^{n}}{n!}}+\cdots .} Hence, M X ( t ) = E ⁡ [ e t X ] = 1 + t E ⁡ [ X ] + t 2 E ⁡ [ X 2 ] 2 ! + t 3 E ⁡ [ X 3 ] 3 ! + ⋯ + t n E ⁡ [ X n ] n ! + ⋯ = 1 + t m 1 + t 2 m 2 2 ! + t 3 m 3 3 ! + ⋯ + t n m n n ! 
+ ⋯ , {\displaystyle {\begin{aligned}M_{X}(t)&=\operatorname {E} [e^{tX}]\\[1ex]&=1+t\operatorname {E} [X]+{\frac {t^{2}\operatorname {E} [X^{2}]}{2!}}+{\frac {t^{3}\operatorname {E} [X^{3}]}{3!}}+\cdots +{\frac {t^{n}\operatorname {E} [X^{n}]}{n!}}+\cdots \\[1ex]&=1+tm_{1}+{\frac {t^{2}m_{2}}{2!}}+{\frac {t^{3}m_{3}}{3!}}+\cdots +{\frac {t^{n}m_{n}}{n!}}+\cdots ,\end{aligned}}} where m n {\displaystyle m_{n}} is the n {\displaystyle n} -th moment. Differentiating M X ( t ) {\displaystyle M_{X}(t)} i {\displaystyle i} times with respect to t {\displaystyle t} and setting t = 0 {\displaystyle t=0} , we obtain the i {\displaystyle i} -th moment about the origin, m i {\displaystyle m_{i}} ; see § Calculations of moments below. If X {\displaystyle X} is a continuous random variable, the following relation between its moment-generating function M X ( t ) {\displaystyle M_{X}(t)} and the two-sided Laplace transform of its probability density function f X ( x ) {\displaystyle f_{X}(x)} holds: M X ( t ) = L { f X } ( − t ) , {\displaystyle M_{X}(t)={\mathcal {L}}\{f_{X}\}(-t),} since the PDF's two-sided Laplace transform is given as L { f X } ( s ) = ∫ − ∞ ∞ e − s x f X ( x ) d x , {\displaystyle {\mathcal {L}}\{f_{X}\}(s)=\int _{-\infty }^{\infty }e^{-sx}f_{X}(x)\,dx,} and the moment-generating function's definition expands (by the law of the unconscious statistician) to M X ( t ) = E ⁡ [ e t X ] = ∫ − ∞ ∞ e t x f X ( x ) d x . 
{\displaystyle M_{X}(t)=\operatorname {E} \left[e^{tX}\right]=\int _{-\infty }^{\infty }e^{tx}f_{X}(x)\,dx.} This is consistent with the characteristic function of X {\displaystyle X} being a Wick rotation of M X ( t ) {\displaystyle M_{X}(t)} when the moment generating function exists, as the characteristic function of a continuous random variable X {\displaystyle X} is the Fourier transform of its probability density function f X ( x ) {\displaystyle f_{X}(x)} , and in general when a function f ( x ) {\displaystyle f(x)} is of exponential order, the Fourier transform of f {\displaystyle f} is a Wick rotation of its two-sided Laplace transform in the region of convergence. See the relation of the Fourier and Laplace transforms for further information. == Examples == Here are some examples of the moment-generating function and the characteristic function for comparison. It can be seen that the characteristic function is a Wick rotation of the moment-generating function M X ( t ) {\displaystyle M_{X}(t)} when the latter exists. == Calculation == Since the moment-generating function is the expectation of a function of the random variable, it can be written as: For a discrete probability mass function, M X ( t ) = ∑ i = 0 ∞ e t x i p i {\displaystyle M_{X}(t)=\sum _{i=0}^{\infty }e^{tx_{i}}\,p_{i}} For a continuous probability density function, M X ( t ) = ∫ − ∞ ∞ e t x f ( x ) d x {\displaystyle M_{X}(t)=\int _{-\infty }^{\infty }e^{tx}f(x)\,dx} In the general case: M X ( t ) = ∫ − ∞ ∞ e t x d F ( x ) {\displaystyle M_{X}(t)=\int _{-\infty }^{\infty }e^{tx}\,dF(x)} , using the Riemann–Stieltjes integral, and where F {\displaystyle F} is the cumulative distribution function. This is simply the Laplace-Stieltjes transform of F {\displaystyle F} , but with the sign of the argument reversed.
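The moment-extraction property described above (the i-th derivative of the MGF at 0 gives the i-th moment about the origin) can be checked symbolically. A minimal SymPy sketch for the standard normal distribution, whose MGF is e^{t²/2}:

```python
import sympy as sp

t = sp.symbols("t")
M = sp.exp(t**2 / 2)  # MGF of the standard normal distribution

# The n-th moment about the origin is the n-th derivative of M at t = 0.
moments = [sp.diff(M, t, n).subs(t, 0) for n in range(5)]
print(moments)  # [1, 0, 1, 0, 3]
```

The output reproduces the standard normal moments E[X^n]: odd moments vanish and E[X^4] = 3.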
Note that for the case where X {\displaystyle X} has a continuous probability density function f ( x ) {\displaystyle f(x)} , M X ( − t ) {\displaystyle M_{X}(-t)} is the two-sided Laplace transform of f ( x ) {\displaystyle f(x)} . M X ( t ) = ∫ − ∞ ∞ e t x f ( x ) d x = ∫ − ∞ ∞ ( 1 + t x + t 2 x 2 2 ! + ⋯ + t n x n n ! + ⋯ ) f ( x ) d x = 1 + t m 1 + t 2 m 2 2 ! + ⋯ + t n m n n ! + ⋯ , {\displaystyle {\begin{aligned}M_{X}(t)&=\int _{-\infty }^{\infty }e^{tx}f(x)\,dx\\[1ex]&=\int _{-\infty }^{\infty }\left(1+tx+{\frac {t^{2}x^{2}}{2!}}+\cdots +{\frac {t^{n}x^{n}}{n!}}+\cdots \right)f(x)\,dx\\[1ex]&=1+tm_{1}+{\frac {t^{2}m_{2}}{2!}}+\cdots +{\frac {t^{n}m_{n}}{n!}}+\cdots ,\end{aligned}}} where m n {\displaystyle m_{n}} is the n {\displaystyle n} th moment. === Linear transformations of random variables === If random variable X {\displaystyle X} has moment generating function M X ( t ) {\displaystyle M_{X}(t)} , then α X + β {\displaystyle \alpha X+\beta } has moment generating function M α X + β ( t ) = e β t M X ( α t ) {\displaystyle M_{\alpha X+\beta }(t)=e^{\beta t}M_{X}(\alpha t)} M α X + β ( t ) = E ⁡ [ e ( α X + β ) t ] = e β t E ⁡ [ e α X t ] = e β t M X ( α t ) {\displaystyle M_{\alpha X+\beta }(t)=\operatorname {E} \left[e^{(\alpha X+\beta )t}\right]=e^{\beta t}\operatorname {E} \left[e^{\alpha Xt}\right]=e^{\beta t}M_{X}(\alpha t)} === Linear combination of independent random variables === If S n = ∑ i = 1 n a i X i {\textstyle S_{n}=\sum _{i=1}^{n}a_{i}X_{i}} , where the Xi are independent random variables and the ai are constants, then the probability density function for Sn is the convolution of the probability density functions of each of the Xi, and the moment-generating function for Sn is given by M S n ( t ) = M X 1 ( a 1 t ) M X 2 ( a 2 t ) ⋯ M X n ( a n t ) . 
{\displaystyle M_{S_{n}}(t)=M_{X_{1}}(a_{1}t)M_{X_{2}}(a_{2}t)\cdots M_{X_{n}}(a_{n}t)\,.} === Vector-valued random variables === For vector-valued random variables X {\displaystyle \mathbf {X} } with real components, the moment-generating function is given by M X ( t ) = E ⁡ [ e ⟨ t , X ⟩ ] {\displaystyle M_{X}(\mathbf {t} )=\operatorname {E} \left[e^{\langle \mathbf {t} ,\mathbf {X} \rangle }\right]} where t {\displaystyle \mathbf {t} } is a vector and ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the dot product. == Important properties == Moment generating functions are positive and log-convex, with M(0) = 1. An important property of the moment-generating function is that it uniquely determines the distribution. In other words, if X {\displaystyle X} and Y {\displaystyle Y} are two random variables and for all values of t, M X ( t ) = M Y ( t ) , {\displaystyle M_{X}(t)=M_{Y}(t),} then F X ( x ) = F Y ( x ) {\displaystyle F_{X}(x)=F_{Y}(x)} for all values of x (or equivalently X and Y have the same distribution). This statement is not equivalent to the statement "if two distributions have the same moments, then they are identical at all points." This is because in some cases, the moments exist and yet the moment-generating function does not, because the limit lim n → ∞ ∑ i = 0 n t i m i i ! {\displaystyle \lim _{n\to \infty }\sum _{i=0}^{n}{\frac {t^{i}m_{i}}{i!}}} may not exist. The log-normal distribution is an example of when this occurs. === Calculations of moments === The moment-generating function is so called because if it exists on an open interval around t = 0, then it is the exponential generating function of the moments of the probability distribution: m n = E ⁡ [ X n ] = M X ( n ) ( 0 ) = d n M X d t n | t = 0 . 
{\displaystyle m_{n}=\operatorname {E} \left[X^{n}\right]=M_{X}^{(n)}(0)=\left.{\frac {d^{n}M_{X}}{dt^{n}}}\right|_{t=0}.} That is, with n being a nonnegative integer, the n-th moment about 0 is the n-th derivative of the moment generating function, evaluated at t = 0. == Other properties == Jensen's inequality provides a simple lower bound on the moment-generating function: M X ( t ) ≥ e μ t , {\displaystyle M_{X}(t)\geq e^{\mu t},} where μ {\displaystyle \mu } is the mean of X. The moment-generating function can be used in conjunction with Markov's inequality to bound the upper tail of a real random variable X. This statement is also called the Chernoff bound. Since x ↦ e x t {\displaystyle x\mapsto e^{xt}} is monotonically increasing for t > 0 {\displaystyle t>0} , we have Pr ( X ≥ a ) = Pr ( e t X ≥ e t a ) ≤ e − a t E ⁡ [ e t X ] = e − a t M X ( t ) {\displaystyle \Pr(X\geq a)=\Pr(e^{tX}\geq e^{ta})\leq e^{-at}\operatorname {E} \left[e^{tX}\right]=e^{-at}M_{X}(t)} for any t > 0 {\displaystyle t>0} and any a, provided M X ( t ) {\displaystyle M_{X}(t)} exists. For example, when X is a standard normal distribution and a > 0 {\displaystyle a>0} , we can choose t = a {\displaystyle t=a} and recall that M X ( t ) = e t 2 / 2 {\displaystyle M_{X}(t)=e^{t^{2}/2}} . This gives Pr ( X ≥ a ) ≤ e − a 2 / 2 {\displaystyle \Pr(X\geq a)\leq e^{-a^{2}/2}} , which exceeds the exact tail probability only by a factor of order a (about a 2 π {\displaystyle a{\sqrt {2\pi }}} for large a). Various lemmas, such as Hoeffding's lemma or Bennett's inequality, provide bounds on the moment-generating function in the case of a zero-mean, bounded random variable. When X {\displaystyle X} is non-negative, the moment generating function gives a simple, useful bound on the moments: E ⁡ [ X m ] ≤ ( m t e ) m M X ( t ) , {\displaystyle \operatorname {E} [X^{m}]\leq \left({\frac {m}{te}}\right)^{m}M_{X}(t),} for any X , m ≥ 0 {\displaystyle X,m\geq 0} and t > 0 {\displaystyle t>0} .
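A small numeric comparison (assuming SciPy) of the Chernoff bound e^{−a²/2} for the standard normal against the exact upper-tail probability:

```python
import math
from scipy.stats import norm

# Chernoff bound for the standard normal with the choice t = a:
# Pr(X >= a) <= exp(-a*t) * M_X(t) = exp(-a**2/2).
for a in [1.0, 2.0, 3.0]:
    bound = math.exp(-a**2 / 2)
    exact = norm.sf(a)  # exact upper-tail probability Pr(X >= a)
    print(a, exact, bound)
```

The bound always dominates the exact tail, by a factor that grows with a.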
This follows from the inequality 1 + x ≤ e x {\displaystyle 1+x\leq e^{x}} : substituting x ′ = t x / m − 1 {\displaystyle x'=tx/m-1} gives t x / m ≤ e t x / m − 1 {\displaystyle tx/m\leq e^{tx/m-1}} for any x , t , m ∈ R {\displaystyle x,t,m\in \mathbb {R} } . Now, if t > 0 {\displaystyle t>0} and x , m ≥ 0 {\displaystyle x,m\geq 0} , this can be rearranged to x m ≤ ( m / ( t e ) ) m e t x {\displaystyle x^{m}\leq (m/(te))^{m}e^{tx}} . Taking the expectation on both sides gives the bound on E ⁡ [ X m ] {\displaystyle \operatorname {E} [X^{m}]} in terms of E ⁡ [ e t X ] {\displaystyle \operatorname {E} [e^{tX}]} . As an example, consider X ∼ Chi-Squared {\displaystyle X\sim {\text{Chi-Squared}}} with k {\displaystyle k} degrees of freedom. Then from the examples M X ( t ) = ( 1 − 2 t ) − k / 2 {\displaystyle M_{X}(t)=(1-2t)^{-k/2}} . Picking t = m / ( 2 m + k ) {\displaystyle t=m/(2m+k)} and substituting into the bound: E ⁡ [ X m ] ≤ ( 1 + 2 m / k ) k / 2 e − m ( k + 2 m ) m . {\displaystyle \operatorname {E} [X^{m}]\leq {\left(1+2m/k\right)}^{k/2}e^{-m}{\left(k+2m\right)}^{m}.} We know that in this case the correct bound is E ⁡ [ X m ] ≤ 2 m Γ ( m + k / 2 ) / Γ ( k / 2 ) {\displaystyle \operatorname {E} [X^{m}]\leq 2^{m}\Gamma (m+k/2)/\Gamma (k/2)} . To compare the bounds, we can consider the asymptotics for large k {\displaystyle k} . Here the moment-generating function bound is k m ( 1 + m 2 / k + O ( 1 / k 2 ) ) {\displaystyle k^{m}(1+m^{2}/k+O(1/k^{2}))} , while the exact bound is k m ( 1 + ( m 2 − m ) / k + O ( 1 / k 2 ) ) {\displaystyle k^{m}(1+(m^{2}-m)/k+O(1/k^{2}))} . The moment-generating function bound is thus very strong in this case.
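The chi-squared moment bound above can be checked numerically; the sketch below (standard library only) compares it with the exact moment 2^m Γ(m + k/2)/Γ(k/2) for m = 2:

```python
import math

def mgf_moment_bound(m, k):
    # E[X^m] <= (1 + 2m/k)^(k/2) * e^(-m) * (k + 2m)^m for X ~ chi^2_k,
    # from E[X^m] <= (m/(t*e))^m * M_X(t) with the choice t = m/(2m + k).
    return (1 + 2 * m / k) ** (k / 2) * math.exp(-m) * (k + 2 * m) ** m

def exact_moment(m, k):
    # E[X^m] = 2^m * Gamma(m + k/2) / Gamma(k/2), via log-gamma for stability
    return math.exp(m * math.log(2) + math.lgamma(m + k / 2) - math.lgamma(k / 2))

m = 2
for k in [4, 16, 64]:
    print(k, exact_moment(m, k), mgf_moment_bound(m, k))  # exact <= bound
```

As k grows, the ratio of the bound to the exact moment approaches 1, in line with the asymptotics quoted above.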
== Relation to other functions == Related to the moment-generating function are a number of other transforms that are common in probability theory: Characteristic function The characteristic function φ X ( t ) {\displaystyle \varphi _{X}(t)} is related to the moment-generating function via φ X ( t ) = M i X ( t ) = M X ( i t ) : {\displaystyle \varphi _{X}(t)=M_{iX}(t)=M_{X}(it):} the characteristic function is the moment-generating function of iX or the moment generating function of X evaluated on the imaginary axis. This function can also be viewed as the Fourier transform of the probability density function, which can therefore be deduced from it by inverse Fourier transform. Cumulant-generating function The cumulant-generating function is defined as the logarithm of the moment-generating function; some instead define the cumulant-generating function as the logarithm of the characteristic function, while others call this latter the second cumulant-generating function. Probability-generating function The probability-generating function is defined as G ( z ) = E ⁡ [ z X ] . {\displaystyle G(z)=\operatorname {E} \left[z^{X}\right].} This immediately implies that G ( e t ) = E ⁡ [ e t X ] = M X ( t ) . {\displaystyle G(e^{t})=\operatorname {E} \left[e^{tX}\right]=M_{X}(t).} == See also == Characteristic function (probability theory) Factorial moment generating function Rate function Hamburger moment problem == References == === Citations === === Sources ===
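The stated relation G(e^t) = M_X(t) between the probability-generating function and the MGF can be verified symbolically, e.g. for a Poisson variable with the illustrative rate λ = 3 (a SymPy sketch, not from the article):

```python
import sympy as sp

t, z = sp.symbols("t z")
lam = sp.Integer(3)  # illustrative Poisson rate

G = sp.exp(lam * (z - 1))          # probability-generating function G(z) = E[z^X]
M = sp.exp(lam * (sp.exp(t) - 1))  # moment-generating function M(t) = E[e^{tX}]

# Substituting z = e^t into G recovers M, i.e. G(e^t) = M(t).
diff = sp.simplify(G.subs(z, sp.exp(t)) - M)
print(diff)  # 0
```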
Wikipedia/Moment_generating_function
In mathematics, the two-sided Laplace transform or bilateral Laplace transform is an integral transform equivalent to probability's moment-generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, the Z-transform and the ordinary or one-sided Laplace transform. If f(t) is a real- or complex-valued function of the real variable t defined for all real numbers, then the two-sided Laplace transform is defined by the integral B { f } ( s ) = F ( s ) = ∫ − ∞ ∞ e − s t f ( t ) d t . {\displaystyle {\mathcal {B}}\{f\}(s)=F(s)=\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.} The integral is most commonly understood as an improper integral, which converges if and only if both integrals ∫ 0 ∞ e − s t f ( t ) d t , ∫ − ∞ 0 e − s t f ( t ) d t {\displaystyle \int _{0}^{\infty }e^{-st}f(t)\,dt,\quad \int _{-\infty }^{0}e^{-st}f(t)\,dt} exist. There seems to be no generally accepted notation for the two-sided transform; the B {\displaystyle {\mathcal {B}}} used here recalls "bilateral". The two-sided transform used by some authors is T { f } ( s ) = s B { f } ( s ) = s F ( s ) = s ∫ − ∞ ∞ e − s t f ( t ) d t . {\displaystyle {\mathcal {T}}\{f\}(s)=s{\mathcal {B}}\{f\}(s)=sF(s)=s\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.} In pure mathematics the argument t can be any variable, and Laplace transforms are used to study how differential operators transform the function. In science and engineering applications, the argument t often represents time (in seconds), and the function f(t) often represents a signal or waveform that varies with time. In these cases, the signals are transformed by filters, that work like a mathematical operator, but with a restriction. They have to be causal, which means that the output in a given time t cannot depend on an output which is a higher value of t. In population ecology, the argument t often represents spatial displacement in a dispersal kernel. 
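As an illustrative sketch (assuming SymPy), the defining integral can be evaluated for the two-sided exponential f(t) = e^{−|t|} by splitting it at t = 0, exactly as in the convergence condition above:

```python
import sympy as sp

t, s = sp.symbols("t s")

# Two-sided Laplace transform of f(t) = exp(-|t|), split at t = 0; the right
# half converges for Re(s) > -1 and the left half for Re(s) < 1, so the full
# transform exists on the strip -1 < Re(s) < 1.
right = sp.integrate(sp.exp(-s * t) * sp.exp(-t), (t, 0, sp.oo), conds="none")
left = sp.integrate(sp.exp(-s * t) * sp.exp(t), (t, -sp.oo, 0), conds="none")
F = sp.simplify(right + left)
print(F)  # equals 2/(1 - s**2) on the strip
```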
When working with functions of time, f(t) is called the time domain representation of the signal, while F(s) is called the s-domain (or Laplace domain) representation. The inverse transformation then represents a synthesis of the signal as the sum of its frequency components taken over all frequencies, whereas the forward transformation represents the analysis of the signal into its frequency components. == Relationship to the Fourier transform == The Fourier transform can be defined in terms of the two-sided Laplace transform: F { f ( t ) } = F ( s = i ω ) = F ( ω ) . {\displaystyle {\mathcal {F}}\{f(t)\}=F(s=i\omega )=F(\omega ).} Note that definitions of the Fourier transform differ, and in particular F { f ( t ) } = F ( s = i ω ) = 1 2 π B { f ( t ) } ( s ) {\displaystyle {\mathcal {F}}\{f(t)\}=F(s=i\omega )={\frac {1}{\sqrt {2\pi }}}{\mathcal {B}}\{f(t)\}(s)} is often used instead. In terms of the Fourier transform, we may also obtain the two-sided Laplace transform, as B { f ( t ) } ( s ) = F { f ( t ) } ( − i s ) . {\displaystyle {\mathcal {B}}\{f(t)\}(s)={\mathcal {F}}\{f(t)\}(-is).} The Fourier transform is normally defined so that it exists for real values; the above definition defines the image in a strip a < ℑ ( s ) < b {\displaystyle a<\Im (s)<b} which may not include the real axis where the Fourier transform is supposed to converge. This is then why Laplace transforms retain their value in control theory and signal processing: the convergence of a Fourier transform integral within its domain only means that a linear, shift-invariant system described by it is stable or critical. The Laplace transform, on the other hand, converges somewhere for every impulse response that grows at most exponentially, because it involves an extra term that can be taken as an exponential regulator.
Since there are no superexponentially growing linear feedback networks, Laplace-transform-based analysis and solution of linear, shift-invariant systems takes its most general form in the context of Laplace, not Fourier, transforms. At the same time, nowadays Laplace transform theory falls within the ambit of more general integral transforms, or even general harmonic analysis. In that framework and nomenclature, Laplace transforms are simply another form of Fourier analysis, even if more general in hindsight. == Relationship to other integral transforms == If u is the Heaviside step function, equal to zero when its argument is less than zero, to one-half when its argument equals zero, and to one when its argument is greater than zero, then the Laplace transform L {\displaystyle {\mathcal {L}}} may be defined in terms of the two-sided Laplace transform by L { f } = B { f u } . {\displaystyle {\mathcal {L}}\{f\}={\mathcal {B}}\{fu\}.} On the other hand, we also have B { f } = L { f } + L { f ∘ m } ∘ m , {\displaystyle {\mathcal {B}}\{f\}={\mathcal {L}}\{f\}+{\mathcal {L}}\{f\circ m\}\circ m,} where m : R → R {\displaystyle m:\mathbb {R} \to \mathbb {R} } is the function that multiplies by minus one ( m ( x ) = − x {\displaystyle m(x)=-x} ), so either version of the Laplace transform can be defined in terms of the other. The Mellin transform may be defined in terms of the two-sided Laplace transform by M { f } = B { f ∘ exp ∘ m } , {\displaystyle {\mathcal {M}}\{f\}={\mathcal {B}}\{f\circ {\exp }\circ m\},} with m {\displaystyle m} as above, and conversely we can get the two-sided transform from the Mellin transform by B { f } = M { f ∘ m ∘ log } . {\displaystyle {\mathcal {B}}\{f\}={\mathcal {M}}\{f\circ m\circ \log \}.} The moment-generating function of a continuous probability density function ƒ(x) can be expressed as B { f } ( − s ) {\displaystyle {\mathcal {B}}\{f\}(-s)} .
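The last relation can be checked symbolically for the standard normal density, whose two-sided Laplace transform is e^{s²/2} (a SymPy sketch, not from the article):

```python
import sympy as sp

s, t = sp.symbols("s t", real=True)
pdf = sp.exp(-t**2 / 2) / sp.sqrt(2 * sp.pi)  # standard normal density

# B{f}(s) = integral of exp(-s*t) * f(t) dt over the real line
B = sp.integrate(sp.exp(-s * t) * pdf, (t, -sp.oo, sp.oo), conds="none")
# simplifies to exp(s**2/2), so the MGF is B{f}(-t) = exp(t**2/2)
print(sp.simplify(B))
```

Since e^{s²/2} is even in s, the sign reversal is invisible here; for an asymmetric density the distinction between B{f}(s) and B{f}(−s) matters.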
== Properties == The following properties can be found in Bracewell (2000) and Oppenheim & Willsky (1997). Most properties of the bilateral Laplace transform are very similar to properties of the unilateral Laplace transform, but there are some important differences: === Parseval's theorem and Plancherel's theorem === Let f 1 ( t ) {\displaystyle f_{1}(t)} and f 2 ( t ) {\displaystyle f_{2}(t)} be functions with bilateral Laplace transforms F 1 ( s ) {\displaystyle F_{1}(s)} and F 2 ( s ) {\displaystyle F_{2}(s)} in the strips of convergence α 1 , 2 < ℜ s < β 1 , 2 {\displaystyle \alpha _{1,2}<\Re s<\beta _{1,2}} . Let c ∈ R {\displaystyle c\in \mathbb {R} } with max ( − β 1 , α 2 ) < c < min ( − α 1 , β 2 ) {\displaystyle \max(-\beta _{1},\alpha _{2})<c<\min(-\alpha _{1},\beta _{2})} . Then Parseval's theorem holds: ∫ − ∞ ∞ f 1 ( t ) ¯ f 2 ( t ) d t = 1 2 π i ∫ c − i ∞ c + i ∞ F 1 ( − s ¯ ) ¯ F 2 ( s ) d s {\displaystyle \int _{-\infty }^{\infty }{\overline {f_{1}(t)}}\,f_{2}(t)\,dt={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }{\overline {F_{1}(-{\overline {s}})}}\,F_{2}(s)\,ds} This theorem is proved by applying the inverse Laplace transform on the convolution theorem in form of the cross-correlation. Let f ( t ) {\displaystyle f(t)} be a function with bilateral Laplace transform F ( s ) {\displaystyle F(s)} in the strip of convergence α < ℜ s < β {\displaystyle \alpha <\Re s<\beta } . Let c ∈ R {\displaystyle c\in \mathbb {R} } with α < c < β {\displaystyle \alpha <c<\beta } .
Then the Plancherel theorem holds: ∫ − ∞ ∞ e − 2 c t | f ( t ) | 2 d t = 1 2 π ∫ − ∞ ∞ | F ( c + i r ) | 2 d r {\displaystyle \int _{-\infty }^{\infty }e^{-2c\,t}\,|f(t)|^{2}\,dt={\frac {1}{2\pi }}\int _{-\infty }^{\infty }|F(c+ir)|^{2}\,dr} === Uniqueness === For any two functions f , g {\textstyle f,g} for which the two-sided Laplace transforms T { f } , T { g } {\textstyle {\mathcal {T}}\{f\},{\mathcal {T}}\{g\}} exist, if T { f } = T { g } , {\textstyle {\mathcal {T}}\{f\}={\mathcal {T}}\{g\},} i.e. T { f } ( s ) = T { g } ( s ) {\textstyle {\mathcal {T}}\{f\}(s)={\mathcal {T}}\{g\}(s)} for every value of s ∈ R , {\textstyle s\in \mathbb {R} ,} then f = g {\textstyle f=g} almost everywhere. == Region of convergence == Convergence requirements for the bilateral transform are more stringent than for unilateral transforms, and the region of convergence will normally be smaller. If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit lim R → ∞ ∫ 0 R f ( t ) e − s t d t {\displaystyle \lim _{R\to \infty }\int _{0}^{R}f(t)e^{-st}\,dt} exists. The Laplace transform converges absolutely if the integral ∫ 0 ∞ | f ( t ) e − s t | d t {\displaystyle \int _{0}^{\infty }\left|f(t)e^{-st}\right|\,dt} exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense. The set of values for which F(s) converges absolutely is either of the form Re(s) > a or else Re(s) ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t). Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, and possibly including the lines Re(s) = a or Re(s) = b.
The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence. Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s0, then it automatically converges for all s with Re(s) > Re(s0). Therefore, the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a. In the region of convergence Re(s) > Re(s0), the Laplace transform of f can be expressed by integrating by parts as the integral F ( s ) = ( s − s 0 ) ∫ 0 ∞ e − ( s − s 0 ) t β ( t ) d t , β ( u ) = ∫ 0 u e − s 0 t f ( t ) d t . {\displaystyle F(s)=(s-s_{0})\int _{0}^{\infty }e^{-(s-s_{0})t}\beta (t)\,dt,\quad \beta (u)=\int _{0}^{u}e^{-s_{0}t}f(t)\,dt.} That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic. There are several Paley–Wiener theorems concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence. In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. == Causality == Bilateral transforms do not respect causality. They make sense when applied over generic functions but when working with functions of time (signals) unilateral transforms are preferred. 
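The half-plane behavior of the region of convergence can be seen numerically (assuming SciPy) by truncating the one-sided integral for f(t) = e^{2t}, whose abscissa of absolute convergence is a = 2:

```python
import math
from scipy.integrate import quad

# One-sided Laplace transform of f(t) = e^{2t}: converges only for Re(s) > 2,
# where it equals 1/(s - 2).
def F_truncated(s, R):
    # truncated integral over [0, R]
    val, _ = quad(lambda u: math.exp(-s * u) * math.exp(2 * u), 0, R)
    return val

for R in [10, 20, 40]:
    print(R, F_truncated(3.0, R), F_truncated(1.5, R))
```

At s = 3 (inside the region of convergence) the truncated integrals settle at 1/(3 − 2) = 1, while at s = 1.5 (outside it) they grow without bound as R increases.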
== Table of selected bilateral Laplace transforms == The following list of interesting examples for the bilateral Laplace transform can be deduced from the corresponding Fourier or unilateral Laplace transformations (see also Bracewell (2000)): == See also == Causal filter Acausal system Causal system Sinc filter – ideal sinc filter (aka rectangular filter) is acausal and has an infinite delay. == References == LePage, Wilbur R. (1980). Complex Variables and the Laplace Transform for Engineers. Dover Publications. Van der Pol, Balthasar; Bremmer, H. (1987). Operational Calculus Based on the Two-Sided Laplace Integral (3rd ed.). Chelsea Pub. Co. Widder, David Vernon (1941). The Laplace Transform. Princeton Mathematical Series, v. 6. Princeton University Press. MR 0005923. Bracewell, Ronald N. (2000). The Fourier Transform and Its Applications (3rd ed.). Oppenheim, Alan V.; Willsky, Alan S. (1997). Signals & Systems (2nd ed.).
Wikipedia/Two-sided_Laplace_transform
In physics, a partition function describes the statistical properties of a system in thermodynamic equilibrium. Partition functions are functions of the thermodynamic state variables, such as the temperature and volume. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives. The partition function is dimensionless. Each partition function is constructed to represent a particular statistical ensemble (which, in turn, corresponds to a particular free energy). The most common statistical ensembles have named partition functions. The canonical partition function applies to a canonical ensemble, in which the system is allowed to exchange heat with the environment at fixed temperature, volume, and number of particles. The grand canonical partition function applies to a grand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, and chemical potential. Other types of partition functions can be defined for different circumstances; see partition function (mathematics) for generalizations. The partition function has many physical meanings, as discussed in Meaning and significance. == Canonical partition function == === Definition === Initially, let us assume that a thermodynamically large system is in thermal contact with the environment, with a temperature T, and both the volume of the system and the number of constituent particles are fixed. A collection of this kind of system comprises an ensemble called a canonical ensemble. The appropriate mathematical expression for the canonical partition function depends on the degrees of freedom of the system, whether the context is classical mechanics or quantum mechanics, and whether the spectrum of states is discrete or continuous. 
==== Classical discrete system ==== For a canonical ensemble that is classical and discrete, the canonical partition function is defined as Z = ∑ i e − β E i , {\displaystyle Z=\sum _{i}e^{-\beta E_{i}},} where i {\displaystyle i} is the index for the microstates of the system; e {\displaystyle e} is Euler's number; β {\displaystyle \beta } is the thermodynamic beta, defined as 1 k B T {\displaystyle {\tfrac {1}{k_{\text{B}}T}}} where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant; E i {\displaystyle E_{i}} is the total energy of the system in the respective microstate. The exponential factor e − β E i {\displaystyle e^{-\beta E_{i}}} is otherwise known as the Boltzmann factor. ==== Classical continuous system ==== In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In classical statistical mechanics, it is rather inaccurate to express the partition function as a sum of discrete terms. In this case we must describe the partition function using an integral rather than a sum. For a canonical ensemble that is classical and continuous, the canonical partition function is defined as Z = 1 h 3 ∫ e − β H ( q , p ) d 3 q d 3 p , {\displaystyle Z={\frac {1}{h^{3}}}\int e^{-\beta H(q,p)}\,d^{3}q\,d^{3}p,} where h {\displaystyle h} is the Planck constant; β {\displaystyle \beta } is the thermodynamic beta, defined as 1 k B T {\displaystyle {\tfrac {1}{k_{\text{B}}T}}} ; H ( q , p ) {\displaystyle H(q,p)} is the Hamiltonian of the system; q {\displaystyle q} is the canonical position; p {\displaystyle p} is the canonical momentum. To make it into a dimensionless quantity, we must divide it by h, which is some quantity with units of action (usually taken to be the Planck constant). 
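The discrete canonical sum defined above can be made concrete with a short numeric sketch. The two-level energies, the temperature, and the unit choice k_B = 1 below are illustrative assumptions, not values from the text:

```python
import math

# Illustrative two-level system, in units where k_B = 1.
k_B = 1.0
T = 2.0
beta = 1.0 / (k_B * T)
energies = [0.0, 1.0]  # hypothetical microstate energies E_0, E_1

# Canonical partition function: Z = sum_i exp(-beta * E_i)
Z = sum(math.exp(-beta * E) for E in energies)

# Boltzmann weight of each microstate, normalized by Z
probs = [math.exp(-beta * E) / Z for E in energies]
```

Because Z normalizes the Boltzmann factors, the probabilities sum to one, and the lower-energy state receives the larger weight.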
For generalized cases, the partition function of N {\displaystyle N} particles in d {\displaystyle d} -dimensions is given by Z = 1 h N d ∫ ∏ i = 1 N e − β H ( q i , p i ) d d q i d d p i , {\displaystyle Z={\frac {1}{h^{Nd}}}\int \prod _{i=1}^{N}e^{-\beta {\mathcal {H}}({\textbf {q}}_{i},{\textbf {p}}_{i})}\,d^{d}{\textbf {q}}_{i}\,d^{d}{\textbf {p}}_{i},} ==== Classical continuous system (multiple identical particles) ==== For a gas of N {\displaystyle N} identical classical non-interacting particles in three dimensions, the partition function is Z = 1 N ! h 3 N ∫ exp ⁡ ( − β ∑ i = 1 N H ( q i , p i ) ) d 3 q 1 ⋯ d 3 q N d 3 p 1 ⋯ d 3 p N = Z single N N ! {\displaystyle Z={\frac {1}{N!h^{3N}}}\int \,\exp \left(-\beta \sum _{i=1}^{N}H({\textbf {q}}_{i},{\textbf {p}}_{i})\right)\;d^{3}q_{1}\cdots d^{3}q_{N}\,d^{3}p_{1}\cdots d^{3}p_{N}={\frac {Z_{\text{single}}^{N}}{N!}}} where h {\displaystyle h} is the Planck constant; β {\displaystyle \beta } is the thermodynamic beta, defined as 1 k B T {\displaystyle {\tfrac {1}{k_{\text{B}}T}}} ; i {\displaystyle i} is the index for the particles of the system; H {\displaystyle H} is the Hamiltonian of a respective particle; q i {\displaystyle q_{i}} is the canonical position of the respective particle; p i {\displaystyle p_{i}} is the canonical momentum of the respective particle; d 3 {\displaystyle d^{3}} is shorthand notation to indicate that q i {\displaystyle q_{i}} and p i {\displaystyle p_{i}} are vectors in three-dimensional space. Z single {\displaystyle Z_{\text{single}}} is the classical continuous partition function of a single particle as given in the previous section. The reason for the factorial factor N! is discussed below. The extra constant factor in the denominator was introduced because, unlike the discrete form, the continuous form shown above is not dimensionless.
As stated in the previous section, to make it into a dimensionless quantity, we must divide it by h3N (where h is usually taken to be the Planck constant). ==== Quantum mechanical discrete system ==== For a canonical ensemble that is quantum mechanical and discrete, the canonical partition function is defined as the trace of the Boltzmann factor: Z = tr ⁡ ( e − β H ^ ) , {\displaystyle Z=\operatorname {tr} (e^{-\beta {\hat {H}}}),} where: tr ⁡ ( ∘ ) {\displaystyle \operatorname {tr} (\circ )} is the trace of a matrix; β {\displaystyle \beta } is the thermodynamic beta, defined as 1 k B T {\displaystyle {\tfrac {1}{k_{\text{B}}T}}} ; H ^ {\displaystyle {\hat {H}}} is the Hamiltonian operator. The dimension of e − β H ^ {\displaystyle e^{-\beta {\hat {H}}}} is the number of energy eigenstates of the system. ==== Quantum mechanical continuous system ==== For a canonical ensemble that is quantum mechanical and continuous, the canonical partition function is defined as Z = 1 h ∫ ⟨ q , p | e − β H ^ | q , p ⟩ d q d p , {\displaystyle Z={\frac {1}{h}}\int \left\langle q,p\right\vert e^{-\beta {\hat {H}}}\left\vert q,p\right\rangle \,dq\,dp,} where: h {\displaystyle h} is the Planck constant; β {\displaystyle \beta } is the thermodynamic beta, defined as 1 k B T {\displaystyle {\tfrac {1}{k_{\text{B}}T}}} ; H ^ {\displaystyle {\hat {H}}} is the Hamiltonian operator; q {\displaystyle q} is the canonical position; p {\displaystyle p} is the canonical momentum. In systems with multiple quantum states s sharing the same energy Es, it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by j) as follows: Z = ∑ j g j e − β E j , {\displaystyle Z=\sum _{j}g_{j}\,e^{-\beta E_{j}},} where gj is the degeneracy factor, or number of quantum states s that have the same energy level defined by Ej = Es. 
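The trace formula Z = tr(e^{−βĤ}) can be spot-checked for a hypothetical 2×2 real symmetric Hamiltonian; since the trace is basis-independent, it reduces to a sum of e^{−βε} over the energy eigenvalues, which for a 2×2 matrix are available in closed form (all numbers below are illustrative):

```python
import math

beta = 1.0

# Hypothetical Hamiltonian H = [[a, c], [c, b]] (real symmetric, hence Hermitian).
a, b, c = 0.0, 1.0, 0.5

# Closed-form eigenvalues of a 2x2 real symmetric matrix.
mean = (a + b) / 2.0
disc = math.sqrt(((a - b) / 2.0) ** 2 + c ** 2)
eigs = (mean - disc, mean + disc)

# Consistency checks: the eigenvalues must reproduce tr(H) and det(H).
assert math.isclose(eigs[0] + eigs[1], a + b)
assert math.isclose(eigs[0] * eigs[1], a * b - c * c)

# Z = tr(exp(-beta H)), evaluated in the energy eigenbasis.
Z = sum(math.exp(-beta * e) for e in eigs)
```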
The above treatment applies to quantum statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states s above. In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis): Z = tr ⁡ ( e − β H ^ ) , {\displaystyle Z=\operatorname {tr} (e^{-\beta {\hat {H}}}),} where Ĥ is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series. The classical form of Z is recovered when the trace is expressed in terms of coherent states and when quantum-mechanical uncertainties in the position and momentum of a particle are regarded as negligible. Formally, using bra–ket notation, one inserts under the trace for each degree of freedom the identity: 1 = ∫ | x , p ⟩ ⟨ x , p | d x d p h , {\displaystyle {\boldsymbol {1}}=\int |x,p\rangle \langle x,p|{\frac {dx\,dp}{h}},} where |x, p⟩ is a normalised Gaussian wavepacket centered at position x and momentum p. Thus Z = ∫ tr ⁡ ( e − β H ^ | x , p ⟩ ⟨ x , p | ) d x d p h = ∫ ⟨ x , p | e − β H ^ | x , p ⟩ d x d p h . {\displaystyle Z=\int \operatorname {tr} \left(e^{-\beta {\hat {H}}}|x,p\rangle \langle x,p|\right){\frac {dx\,dp}{h}}=\int \langle x,p|e^{-\beta {\hat {H}}}|x,p\rangle {\frac {dx\,dp}{h}}.} A coherent state is an approximate eigenstate of both operators x ^ {\displaystyle {\hat {x}}} and p ^ {\displaystyle {\hat {p}}} , hence also of the Hamiltonian Ĥ, with errors of the size of the uncertainties. If Δx and Δp can be regarded as zero, the action of Ĥ reduces to multiplication by the classical Hamiltonian, and Z reduces to the classical configuration integral. === Connection to probability theory === For simplicity, we will use the discrete form of the partition function in this section. Our results will apply equally well to the continuous form. 
Consider a system S embedded into a heat bath B. Let the total energy of both systems be E. Let pi denote the probability that the system S is in a particular microstate, i, with energy Ei. According to the fundamental postulate of statistical mechanics (which states that all attainable microstates of a system are equally probable), the probability pi will be inversely proportional to the number of microstates of the total closed system (S, B) in which S is in microstate i with energy Ei. Equivalently, pi will be proportional to the number of microstates of the heat bath B with energy E − Ei: p i = Ω B ( E − E i ) Ω ( S , B ) ( E ) . {\displaystyle p_{i}={\frac {\Omega _{B}(E-E_{i})}{\Omega _{(S,B)}(E)}}.} Assuming that the heat bath's internal energy is much larger than the energy of S (E ≫ Ei), we can Taylor-expand Ω B {\displaystyle \Omega _{B}} to first order in Ei and use the thermodynamic relation ∂ S B / ∂ E = 1 / T {\displaystyle \partial S_{B}/\partial E=1/T} , where here S B {\displaystyle S_{B}} , T {\displaystyle T} are the entropy and temperature of the bath respectively: k ln ⁡ p i = k ln ⁡ Ω B ( E − E i ) − k ln ⁡ Ω ( S , B ) ( E ) ≈ − ∂ ( k ln ⁡ Ω B ( E ) ) ∂ E E i + k ln ⁡ Ω B ( E ) − k ln ⁡ Ω ( S , B ) ( E ) ≈ − ∂ S B ∂ E E i + k ln ⁡ Ω B ( E ) Ω ( S , B ) ( E ) ≈ − E i T + k ln ⁡ Ω B ( E ) Ω ( S , B ) ( E ) {\displaystyle {\begin{aligned}k\ln p_{i}&=k\ln \Omega _{B}(E-E_{i})-k\ln \Omega _{(S,B)}(E)\\[5pt]&\approx -{\frac {\partial {\big (}k\ln \Omega _{B}(E){\big )}}{\partial E}}E_{i}+k\ln \Omega _{B}(E)-k\ln \Omega _{(S,B)}(E)\\[5pt]&\approx -{\frac {\partial S_{B}}{\partial E}}E_{i}+k\ln {\frac {\Omega _{B}(E)}{\Omega _{(S,B)}(E)}}\\[5pt]&\approx -{\frac {E_{i}}{T}}+k\ln {\frac {\Omega _{B}(E)}{\Omega _{(S,B)}(E)}}\end{aligned}}} Thus p i ∝ e − E i / ( k T ) = e − β E i . 
{\displaystyle p_{i}\propto e^{-E_{i}/(kT)}=e^{-\beta E_{i}}.} Since the total probability to find the system in some microstate (the sum of all pi) must be equal to 1, we know that the constant of proportionality must be the normalization constant, and so, we can define the partition function to be this constant: Z = ∑ i e − β E i = Ω ( S , B ) ( E ) Ω B ( E ) . {\displaystyle Z=\sum _{i}e^{-\beta E_{i}}={\frac {\Omega _{(S,B)}(E)}{\Omega _{B}(E)}}.} === Calculating the thermodynamic total energy === In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities: ⟨ E ⟩ = ∑ s E s P s = 1 Z ∑ s E s e − β E s = − 1 Z ∂ ∂ β Z ( β , E 1 , E 2 , … ) = − ∂ ln ⁡ Z ∂ β {\displaystyle {\begin{aligned}\langle E\rangle =\sum _{s}E_{s}P_{s}&={\frac {1}{Z}}\sum _{s}E_{s}e^{-\beta E_{s}}\\[1ex]&=-{\frac {1}{Z}}{\frac {\partial }{\partial \beta }}Z(\beta ,E_{1},E_{2},\dots )\\[1ex]&=-{\frac {\partial \ln Z}{\partial \beta }}\end{aligned}}} or, equivalently, ⟨ E ⟩ = k B T 2 ∂ ln ⁡ Z ∂ T . {\displaystyle \langle E\rangle =k_{\text{B}}T^{2}{\frac {\partial \ln Z}{\partial T}}.} Incidentally, one should note that if the microstate energies depend on a parameter λ in the manner E s = E s ( 0 ) + λ A s for all s {\displaystyle E_{s}=E_{s}^{(0)}+\lambda A_{s}\qquad {\text{for all}}\;s} then the expected value of A is ⟨ A ⟩ = ∑ s A s P s = − 1 β ∂ ∂ λ ln ⁡ Z ( β , λ ) . {\displaystyle \langle A\rangle =\sum _{s}A_{s}P_{s}=-{\frac {1}{\beta }}{\frac {\partial }{\partial \lambda }}\ln Z(\beta ,\lambda ).} This provides us with a method for calculating the expected values of many microscopic quantities. 
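The relation ⟨E⟩ = −∂ ln Z/∂β lends itself to a quick numeric verification. The three-level spectrum and the inverse temperature below are illustrative; the derivative is approximated by a central finite difference:

```python
import math

energies = [0.0, 1.0, 2.5]  # hypothetical microstate energies

def lnZ(beta):
    return math.log(sum(math.exp(-beta * E) for E in energies))

beta = 0.7

# Direct ensemble average: <E> = (1/Z) * sum_s E_s * exp(-beta * E_s)
Z = sum(math.exp(-beta * E) for E in energies)
E_direct = sum(E * math.exp(-beta * E) for E in energies) / Z

# Same quantity from the partition function: <E> = -d(ln Z)/d(beta)
h = 1e-6
E_from_Z = -(lnZ(beta + h) - lnZ(beta - h)) / (2.0 * h)
```

The two routes agree to within the finite-difference error.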
We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set λ to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory. === Relation to thermodynamic variables === In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations. As we have already seen, the thermodynamic energy is ⟨ E ⟩ = − ∂ ln ⁡ Z ∂ β . {\displaystyle \langle E\rangle =-{\frac {\partial \ln Z}{\partial \beta }}.} The variance in the energy (or "energy fluctuation") is ⟨ ( Δ E ) 2 ⟩ ≡ ⟨ ( E − ⟨ E ⟩ ) 2 ⟩ = ⟨ E 2 ⟩ − ⟨ E ⟩ 2 = ∂ 2 ln ⁡ Z ∂ β 2 . {\displaystyle \left\langle (\Delta E)^{2}\right\rangle \equiv \left\langle (E-\langle E\rangle )^{2}\right\rangle =\left\langle E^{2}\right\rangle -{\left\langle E\right\rangle }^{2}={\frac {\partial ^{2}\ln Z}{\partial \beta ^{2}}}.} The heat capacity is C v = ∂ ⟨ E ⟩ ∂ T = 1 k B T 2 ⟨ ( Δ E ) 2 ⟩ . {\displaystyle C_{v}={\frac {\partial \langle E\rangle }{\partial T}}={\frac {1}{k_{\text{B}}T^{2}}}\left\langle (\Delta E)^{2}\right\rangle .} In general, consider the extensive variable X and intensive variable Y where X and Y form a pair of conjugate variables. In ensembles where Y is fixed (and X is allowed to fluctuate), then the average value of X will be: ⟨ X ⟩ = ± ∂ ln ⁡ Z ∂ β Y . {\displaystyle \langle X\rangle =\pm {\frac {\partial \ln Z}{\partial \beta Y}}.} The sign will depend on the specific definitions of the variables X and Y. An example would be X = volume and Y = pressure. Additionally, the variance in X will be ⟨ ( Δ X ) 2 ⟩ ≡ ⟨ ( X − ⟨ X ⟩ ) 2 ⟩ = ∂ ⟨ X ⟩ ∂ β Y = ∂ 2 ln ⁡ Z ∂ ( β Y ) 2 . 
{\displaystyle \left\langle (\Delta X)^{2}\right\rangle \equiv \left\langle (X-\langle X\rangle )^{2}\right\rangle ={\frac {\partial \langle X\rangle }{\partial \beta Y}}={\frac {\partial ^{2}\ln Z}{\partial (\beta Y)^{2}}}.} In the special case of entropy, entropy is given by S ≡ − k B ∑ s P s ln ⁡ P s = k B ( ln ⁡ Z + β ⟨ E ⟩ ) = ∂ ∂ T ( k B T ln ⁡ Z ) = − ∂ A ∂ T {\displaystyle S\equiv -k_{\text{B}}\sum _{s}P_{s}\ln P_{s}=k_{\text{B}}(\ln Z+\beta \langle E\rangle )={\frac {\partial }{\partial T}}(k_{\text{B}}T\ln Z)=-{\frac {\partial A}{\partial T}}} where A is the Helmholtz free energy defined as A = U − TS, where U = ⟨E⟩ is the total energy and S is the entropy, so that A = ⟨ E ⟩ − T S = − k B T ln ⁡ Z . {\displaystyle A=\langle E\rangle -TS=-k_{\text{B}}T\ln Z.} Furthermore, the heat capacity can be expressed as C v = T ∂ S ∂ T = − T ∂ 2 A ∂ T 2 . {\displaystyle C_{\text{v}}=T{\frac {\partial S}{\partial T}}=-T{\frac {\partial ^{2}A}{\partial T^{2}}}.} === Partition functions of subsystems === Suppose a system is subdivided into N sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are ζ1, ζ2, ..., ζN, then the partition function of the entire system is the product of the individual partition functions: Z = ∏ j = 1 N ζ j . {\displaystyle Z=\prod _{j=1}^{N}\zeta _{j}.} If the sub-systems have the same physical properties, then their partition functions are equal, ζ1 = ζ2 = ... = ζ, in which case Z = ζ N . {\displaystyle Z=\zeta ^{N}.} However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by a N! (N factorial): Z = ζ N N ! . {\displaystyle Z={\frac {\zeta ^{N}}{N!}}.} This is to ensure that we do not "over-count" the number of microstates. 
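The product rule for non-interacting subsystems, and the 1/N! correction for identical particles, can be illustrated by brute-force enumeration. The single-particle energy levels below are illustrative:

```python
import math
from itertools import product

beta = 1.0
levels = (0.0, 1.0, 2.0)  # hypothetical single-particle energies
zeta = sum(math.exp(-beta * E) for E in levels)

N = 3  # number of non-interacting particles

# Brute-force sum over every joint microstate of N distinguishable particles...
Z_brute = sum(math.exp(-beta * sum(micro)) for micro in product(levels, repeat=N))

# ...which factorizes into the product of single-particle partition functions.
assert math.isclose(Z_brute, zeta ** N)

# For indistinguishable particles, divide by N! so that microstates differing
# only by a relabeling of the particles are not over-counted.
Z_indist = zeta ** N / math.factorial(N)
```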
While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox. === Meaning and significance === It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, consider what goes into it. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system. The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability Ps that the system occupies microstate s is P s = 1 Z e − β E s . {\displaystyle P_{s}={\frac {1}{Z}}e^{-\beta E_{s}}.} Thus, as shown above, the partition function plays the role of a normalizing constant (note that it does not depend on s), ensuring that the probabilities sum up to one: ∑ s P s = 1 Z ∑ s e − β E s = 1 Z Z = 1. {\displaystyle \sum _{s}P_{s}={\frac {1}{Z}}\sum _{s}e^{-\beta E_{s}}={\frac {1}{Z}}Z=1.} This is the reason for calling Z the "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. Other partition functions for different ensembles divide up the probabilities based on other macrostate variables. 
As an example: the partition function for the isothermal-isobaric ensemble, the generalized Boltzmann distribution, divides up probabilities based on particle number, pressure, and temperature. The energy is replaced by the characteristic potential of that ensemble, the Gibbs Free Energy. The letter Z stands for the German word Zustandssumme, "sum over states". The usefulness of the partition function stems from the fact that the macroscopic thermodynamic quantities of a system can be related to its microscopic details through the derivatives of its partition function. Finding the partition function is also equivalent to performing a Laplace transform of the density of states function from the energy domain to the β domain, and the inverse Laplace transform of the partition function reclaims the state density function of energies. == Grand canonical partition function == We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature T, and a chemical potential μ. The grand canonical partition function, denoted by Z {\displaystyle {\mathcal {Z}}} , is the following sum over microstates Z ( μ , V , T ) = ∑ i exp ⁡ ( N i μ − E i k B T ) . {\displaystyle {\mathcal {Z}}(\mu ,V,T)=\sum _{i}\exp \left({\frac {N_{i}\mu -E_{i}}{k_{B}T}}\right).} Here, each microstate is labelled by i {\displaystyle i} , and has total particle number N i {\displaystyle N_{i}} and total energy E i {\displaystyle E_{i}} . This partition function is closely related to the grand potential, Φ G {\displaystyle \Phi _{\rm {G}}} , by the relation − k B T ln ⁡ Z = Φ G = ⟨ E ⟩ − T S − μ ⟨ N ⟩ . {\displaystyle -k_{\text{B}}T\ln {\mathcal {Z}}=\Phi _{\rm {G}}=\langle E\rangle -TS-\mu \langle N\rangle .} This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy. 
It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state i {\displaystyle i} : p i = 1 Z exp ⁡ ( N i μ − E i k B T ) . {\displaystyle p_{i}={\frac {1}{\mathcal {Z}}}\exp \left({\frac {N_{i}\mu -E_{i}}{k_{B}T}}\right).} An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi–Dirac statistics for fermions, Bose–Einstein statistics for bosons), however it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases. The grand partition function is sometimes written (equivalently) in terms of alternate variables as Z ( z , V , T ) = ∑ N i z N i Z ( N i , V , T ) , {\displaystyle {\mathcal {Z}}(z,V,T)=\sum _{N_{i}}z^{N_{i}}Z(N_{i},V,T),} where z ≡ exp ⁡ ( μ / k B T ) {\displaystyle z\equiv \exp(\mu /k_{\text{B}}T)} is known as the absolute activity (or fugacity) and Z ( N i , V , T ) {\displaystyle Z(N_{i},V,T)} is the canonical partition function. == See also == Partition function (mathematics) Partition function (quantum field theory) Virial theorem Widom insertion method == References ==
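As a minimal worked case of the grand canonical sum, consider a single fermionic orbital of energy ε, whose only microstates are occupation N = 0 and N = 1; the mean occupation computed from the grand partition function reproduces the Fermi–Dirac distribution. The values of T, μ, and ε are illustrative:

```python
import math

k_B = 1.0  # units where k_B = 1
T = 1.0
beta = 1.0 / (k_B * T)
mu = 0.3   # chemical potential (illustrative)
eps = 1.0  # orbital energy (illustrative)

# Grand canonical partition function: sum over the two microstates
# (N, E) = (0, 0) and (1, eps).
Z_grand = sum(math.exp(beta * (N * mu - N * eps)) for N in (0, 1))

# Mean occupation from the grand canonical probabilities...
N_avg = sum(N * math.exp(beta * (N * mu - N * eps)) for N in (0, 1)) / Z_grand

# ...equals the Fermi-Dirac occupation 1 / (exp(beta*(eps - mu)) + 1).
fermi_dirac = 1.0 / (math.exp(beta * (eps - mu)) + 1.0)
```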
Wikipedia/Partition_function_(statistical_mechanics)
The ramp function is a unary real function, whose graph is shaped like a ramp. It can be expressed by numerous definitions, for example "0 for negative inputs, output equals input for non-negative inputs". The term "ramp" can also be used for other functions obtained by scaling and shifting, and the function in this article is the unit ramp function (slope 1, starting at 0). In mathematics, the ramp function is also known as the positive part. In machine learning, it is commonly known as a ReLU activation function or a rectifier in analogy to half-wave rectification in electrical engineering. In statistics (when used as a likelihood function) it is known as a tobit model. This function has numerous applications in mathematics and engineering, and goes by various names, depending on the context. There are differentiable variants of the ramp function. == Definitions == The ramp function (R(x) : R → R0+) may be defined analytically in several ways. Possible definitions are: A piecewise function: R ( x ) := { x , x ≥ 0 ; 0 , x < 0 {\displaystyle R(x):={\begin{cases}x,&x\geq 0;\\0,&x<0\end{cases}}} Using the Iverson bracket notation: R ( x ) := x ⋅ [ x ≥ 0 ] {\displaystyle R(x):=x\cdot [x\geq 0]} or R ( x ) := x ⋅ [ x > 0 ] {\displaystyle R(x):=x\cdot [x>0]} The max function: R ( x ) := max ( x , 0 ) {\displaystyle R(x):=\max(x,0)} The mean of an independent variable and its absolute value (a straight line with unity gradient and its modulus): R ( x ) := x + | x | 2 {\displaystyle R(x):={\frac {x+|x|}{2}}} this can be derived by noting the following definition of max(a, b), max ( a , b ) = a + b + | a − b | 2 {\displaystyle \max(a,b)={\frac {a+b+|a-b|}{2}}} for which a = x and b = 0 The Heaviside step function multiplied by a straight line with unity gradient: R ( x ) := x H ( x ) {\displaystyle R\left(x\right):=xH(x)} The convolution of the Heaviside step function with itself: R ( x ) := H ( x ) ∗ H ( x ) {\displaystyle R\left(x\right):=H(x)*H(x)} The integral of the 
Heaviside step function: R ( x ) := ∫ − ∞ x H ( ξ ) d ξ {\displaystyle R(x):=\int _{-\infty }^{x}H(\xi )\,d\xi } Macaulay brackets: R ( x ) := ⟨ x ⟩ {\displaystyle R(x):=\langle x\rangle } The positive part of the identity function: R := id + {\displaystyle R:=\operatorname {id} ^{+}} As a limit function: R ( x ) := lim a → ∞ { 1 a , x = 0 x 1 − e − a x , x ≠ 0 {\displaystyle R\left(x\right):=\lim _{a\to \infty }{\begin{cases}{\frac {1}{a}},\quad x=0\\{\dfrac {x}{1-e^{-ax}}},\quad x\neq 0\end{cases}}} It can be approximated as closely as desired by choosing a sufficiently large positive value a > 0 {\displaystyle a>0} . == Applications == The ramp function has numerous applications in engineering, such as in the theory of digital signal processing. In finance, the payoff of a call option is a ramp (shifted by strike price). Horizontally flipping a ramp yields a put option, while vertically flipping (taking the negative) corresponds to selling or being "short" an option. The shape is widely called a "hockey stick", due to its similarity to an ice hockey stick. In statistics, hinge functions of multivariate adaptive regression splines (MARS) are ramps, and are used to build regression models. == Analytic properties == === Non-negativity === In the whole domain the function is non-negative, so its absolute value is itself, i.e. ∀ x ∈ R : R ( x ) ≥ 0 {\displaystyle \forall x\in \mathbb {R} :R(x)\geq 0} and | R ( x ) | = R ( x ) {\displaystyle \left|R(x)\right|=R(x)} === Derivative === Its derivative is the Heaviside step function: R ′ ( x ) = H ( x ) for x ≠ 0. {\displaystyle R'(x)=H(x)\quad {\mbox{for }}x\neq 0.} === Second derivative === The ramp function satisfies the differential equation: d 2 d x 2 R ( x − x 0 ) = δ ( x − x 0 ) , {\displaystyle {\frac {d^{2}}{dx^{2}}}R(x-x_{0})=\delta (x-x_{0}),} where δ(x) is the Dirac delta. This means that R(x) is a Green's function for the second derivative operator.
Thus, any function, f(x), with an integrable second derivative, f″(x), will satisfy the equation: f ( x ) = f ( a ) + ( x − a ) f ′ ( a ) + ∫ a b R ( x − s ) f ″ ( s ) d s for a < x < b . {\displaystyle f(x)=f(a)+(x-a)f'(a)+\int _{a}^{b}R(x-s)f''(s)\,ds\quad {\mbox{for }}a<x<b.} === Fourier transform === F { R ( x ) } ( f ) = ∫ − ∞ ∞ R ( x ) e − 2 π i f x d x = i δ ′ ( f ) 4 π − 1 4 π 2 f 2 , {\displaystyle {\mathcal {F}}{\big \{}R(x){\big \}}(f)=\int _{-\infty }^{\infty }R(x)e^{-2\pi ifx}\,dx={\frac {i\delta '(f)}{4\pi }}-{\frac {1}{4\pi ^{2}f^{2}}},} where δ(x) is the Dirac delta (in this formula, its derivative appears). === Laplace transform === The single-sided Laplace transform of R(x) is given as follows, L { R ( x ) } ( s ) = ∫ 0 ∞ e − s x R ( x ) d x = 1 s 2 . {\displaystyle {\mathcal {L}}{\big \{}R(x){\big \}}(s)=\int _{0}^{\infty }e^{-sx}R(x)dx={\frac {1}{s^{2}}}.} == Algebraic properties == === Iteration invariance === Every iterated function of the ramp mapping is itself, as R ( R ( x ) ) = R ( x ) . {\displaystyle R{\big (}R(x){\big )}=R(x).} == See also == Tobit model Rectifier (neural networks) == References ==
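Several of the equivalent definitions listed above, together with the iteration invariance R(R(x)) = R(x), can be spot-checked numerically; the test points are arbitrary:

```python
def ramp(x):
    """Piecewise definition: x for x >= 0, otherwise 0."""
    return x if x >= 0 else 0.0

def heaviside(x):
    # Half-maximum convention at x = 0; the product x*H(x) is 0 there anyway.
    return 1.0 if x > 0 else (0.5 if x == 0 else 0.0)

for x in (-2.5, -1.0, 0.0, 0.5, 3.0):
    assert ramp(x) == max(x, 0)             # max definition
    assert ramp(x) == (x + abs(x)) / 2      # mean of x and |x|
    assert ramp(x) == x * heaviside(x)      # x * H(x)
    assert ramp(ramp(x)) == ramp(x)         # iteration invariance
```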
Wikipedia/Ramp_function
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples". A sample is a value of the signal at a point in time and/or space; this definition differs from the term's usage in statistics, which refers to a set of such values. A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points. The original signal can be reconstructed from a sequence of samples, up to the Nyquist limit, by passing the sequence of samples through a reconstruction filter. == Theory == Functions of space, time, or any other dimension can be sampled, and similarly in two or more dimensions. For functions that vary with time, let s ( t ) {\displaystyle s(t)} be a continuous function (or "signal") to be sampled, and let sampling be performed by measuring the value of the continuous function every T {\displaystyle T} seconds, which is called the sampling interval or sampling period. Then the sampled function is given by the sequence: s ( n T ) {\displaystyle s(nT)} , for integer values of n {\displaystyle n} . The sampling frequency or sampling rate, f s {\displaystyle f_{s}} , is the average number of samples obtained in one second, thus f s = 1 / T {\displaystyle f_{s}=1/T} , with the unit samples per second, sometimes referred to as hertz, for example 48 kHz is 48,000 samples per second. Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal low-pass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant ( T ) {\displaystyle (T)} , the sequence of delta functions is called a Dirac comb. 
Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s ( t ) {\displaystyle s(t)} . That mathematical abstraction is sometimes referred to as impulse sampling. Most sampled signals are not simply stored and reconstructed. The fidelity of a theoretical reconstruction is a common measure of the effectiveness of sampling. That fidelity is reduced when s ( t ) {\displaystyle s(t)} contains frequency components whose cycle length (period) is less than 2 sample intervals (see Aliasing). The corresponding frequency limit, in cycles per second (hertz), is 0.5 {\displaystyle 0.5} cycle/sample × f s {\displaystyle f_{s}} samples/second = f s / 2 {\displaystyle f_{s}/2} , known as the Nyquist frequency of the sampler. Therefore, s ( t ) {\displaystyle s(t)} is usually the output of a low-pass filter, functionally known as an anti-aliasing filter. Without an anti-aliasing filter, frequencies higher than the Nyquist frequency will influence the samples in a way that is misinterpreted by the interpolation process. == Practical considerations == In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a device with various physical limitations. This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion. Various types of distortion can occur, including: Aliasing. Some amount of aliasing is inevitable because only theoretical, infinitely long, functions can have no frequency content above the Nyquist frequency. Aliasing can be made arbitrarily small by using a sufficiently large order of the anti-aliasing filter. Aperture error results from the fact that the sample is obtained as a time average within a sampling region, rather than just being equal to the signal value at the sampling instant. In a capacitor-based sample and hold circuit, aperture errors are introduced by multiple mechanisms. 
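The aliasing effect described above can be demonstrated directly. With an illustrative sampling rate of 10 kHz (Nyquist frequency 5 kHz), a 7 kHz cosine yields exactly the same sample sequence as its 3 kHz alias, so the two tones are indistinguishable after sampling:

```python
import math

fs = 10_000.0           # sampling rate in Hz (illustrative)
T = 1.0 / fs            # sampling interval
f_high = 7_000.0        # above the Nyquist frequency fs/2 = 5 kHz
f_alias = fs - f_high   # folds down to 3 kHz

# cos(2*pi*f_high*n*T) and cos(2*pi*f_alias*n*T) differ by a multiple of
# 2*pi at every sampling instant t = n*T, so the samples coincide.
for n in range(50):
    assert abs(math.cos(2 * math.pi * f_high * n * T)
               - math.cos(2 * math.pi * f_alias * n * T)) < 1e-9
```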
For example, the capacitor cannot instantly track the input signal and the capacitor cannot instantly be isolated from the input signal. Jitter, or deviation from the precise sample timing intervals. Noise, including thermal sensor noise, analog circuit noise, etc. Slew rate limit error, caused by the inability of the ADC input value to change sufficiently rapidly. Quantization as a consequence of the finite precision of words that represent the converted values. Error due to other non-linear effects of the mapping of input voltage to converted output value (in addition to the effects of quantization). Although the use of oversampling can completely eliminate aperture error and aliasing by shifting them out of the passband, this technique cannot be practically used above a few GHz, and may be prohibitively expensive at much lower frequencies. Furthermore, while oversampling can reduce quantization error and non-linearity, it cannot eliminate these entirely. Consequently, practical ADCs at audio frequencies typically do not exhibit aliasing or aperture error and are not limited by quantization error. Instead, analog noise dominates. At RF and microwave frequencies where oversampling is impractical and filters are expensive, aperture error, quantization error and aliasing can be significant limitations. Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear function mapping with a proposed nonlinear function. == Applications == === Audio sampling === Digital audio uses pulse-code modulation (PCM) and digital signals for sound reproduction. This includes analog-to-digital conversion (ADC), digital-to-analog conversion (DAC), storage, and transmission.
In effect, the system commonly referred to as digital is in fact a discrete-time, discrete-level analog of a previous electrical analog. While modern systems can be quite subtle in their methods, the primary usefulness of a digital system is the ability to store, retrieve and transmit signals without any loss of quality. When it is necessary to capture audio covering the entire 20–20,000 Hz range of human hearing, such as when recording music or many types of acoustic events, audio waveforms are typically sampled at 44.1 kHz (CD), 48 kHz, 88.2 kHz, or 96 kHz. The approximately double-rate requirement is a consequence of the Nyquist theorem. Sampling rates higher than about 50 kHz to 60 kHz cannot supply more usable information for human listeners. Early professional audio equipment manufacturers chose sampling rates in the region of 40 to 50 kHz for this reason. There has been an industry trend towards sampling rates well beyond the basic requirements, such as 96 kHz and even 192 kHz. Even though ultrasonic frequencies are inaudible to humans, recording and mixing at higher sampling rates is effective in eliminating the distortion that can be caused by foldback aliasing. Conversely, ultrasonic sounds may interact with and modulate the audible part of the frequency spectrum (intermodulation distortion), degrading the fidelity. One advantage of higher sampling rates is that they can relax the low-pass filter design requirements for ADCs and DACs, but with modern oversampling delta-sigma-converters this advantage is less important. The Audio Engineering Society recommends 48 kHz sampling rate for most applications but gives recognition to 44.1 kHz for CD and other consumer uses, 32 kHz for transmission-related applications, and 96 kHz for higher bandwidth or relaxed anti-aliasing filtering. Both Lavry Engineering and J.
Robert Stuart state that the ideal sampling rate would be about 60 kHz, but since this is not a standard frequency, recommend 88.2 or 96 kHz for recording purposes. ==== Bit depth ==== Audio is typically recorded at 8-, 16-, and 24-bit depth, which yield a theoretical maximum signal-to-quantization-noise ratio (SQNR) for a pure sine wave of approximately 49.93 dB, 98.09 dB, and 146.26 dB respectively. CD quality audio uses 16-bit samples. Thermal noise limits the true number of bits that can be used in quantization. Few analog systems have signal-to-noise ratios (SNR) exceeding 120 dB. However, digital signal processing operations can have very high dynamic range; consequently, it is common to perform mixing and mastering operations at 32-bit precision and then convert to 16- or 24-bit for distribution. ==== Speech sampling ==== Speech signals, i.e., signals intended to carry only human speech, can usually be sampled at a much lower rate. For most phonemes, almost all of the energy is contained in the 100 Hz – 4 kHz range, allowing a sampling rate of 8 kHz. This is the sampling rate used by nearly all telephony systems, which use the G.711 sampling and quantization specifications. === Video sampling === Standard-definition television (SDTV) uses either 720 by 480 pixels (US NTSC 525-line) or 720 by 576 pixels (UK PAL 625-line) for the visible picture area. High-definition television (HDTV) uses 720p (progressive), 1080i (interlaced), and 1080p (progressive, also known as Full-HD). In digital video, the temporal sampling rate is defined as the frame rate – or rather the field rate – rather than the notional pixel clock. The image sampling frequency is the repetition rate of the sensor integration period. 
Since the integration period may be significantly shorter than the time between repetitions, the sampling frequency can be different from the inverse of the sample time: 50 Hz – PAL video; 60/1.001 Hz ≈ 59.94 Hz – NTSC video. Video digital-to-analog converters operate in the megahertz range (from ~3 MHz for low quality composite video scalers in early game consoles, to 250 MHz or more for the highest-resolution VGA output). When analog video is converted to digital video, a different sampling process occurs, this time at the pixel frequency, corresponding to a spatial sampling rate along scan lines. A common pixel sampling rate is 13.5 MHz – CCIR 601, D1 video. Spatial sampling in the other direction is determined by the spacing of scan lines in the raster. The sampling rates and resolutions in both spatial directions can be measured in units of lines per picture height. Spatial aliasing of high-frequency luma or chroma video components shows up as a moiré pattern. === 3D sampling === The process of volume rendering samples a 3D grid of voxels to produce 3D renderings of sliced (tomographic) data. The 3D grid is assumed to represent a continuous region of 3D space. Volume rendering is common in medical imaging; X-ray computed tomography (CT/CAT), magnetic resonance imaging (MRI), and positron emission tomography (PET) are some examples. It is also used for seismic tomography and other applications. == Undersampling == When a bandpass signal is sampled slower than its Nyquist rate, the samples are indistinguishable from samples of a low-frequency alias of the high-frequency signal. That is often done purposefully in such a way that the lowest-frequency alias satisfies the Nyquist criterion, because the bandpass signal is still uniquely represented and recoverable. Such undersampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF to digital conversion. 
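The folding arithmetic behind undersampling can be sketched in a few lines: a tone at frequency f, sampled at rate fs, appears at its image in the first Nyquist zone [0, fs/2]. The 70 MHz / 25 MHz figures below are illustrative values, not taken from the text.

```python
def alias_frequency(f, fs):
    """Frequency in [0, fs/2] at which a tone at `f` appears when sampled at `fs`."""
    f = f % fs              # sampled spectra repeat with period fs
    return min(f, fs - f)   # fold into the first Nyquist zone

# e.g. a 70 MHz IF tone deliberately undersampled at 25 MHz appears at 5 MHz
print(alias_frequency(70e6, 25e6))
```

Deliberate IF sampling chooses fs so that this folded frequency lands where the band can still be uniquely recovered.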
== Oversampling == Oversampling is used in most modern analog-to-digital converters to reduce the distortion introduced by practical digital-to-analog converters, such as a zero-order hold instead of idealizations like the Whittaker–Shannon interpolation formula. == Complex sampling == Complex sampling (or I/Q sampling) is the simultaneous sampling of two different, but related, waveforms, resulting in pairs of samples that are subsequently treated as complex numbers. When one waveform, s ^ ( t ) {\displaystyle {\hat {s}}(t)} , is the Hilbert transform of the other waveform, s ( t ) {\displaystyle s(t)} , the complex-valued function, s a ( t ) ≜ s ( t ) + i ⋅ s ^ ( t ) {\displaystyle s_{a}(t)\triangleq s(t)+i\cdot {\hat {s}}(t)} , is called an analytic signal, whose Fourier transform is zero for all negative values of frequency. In that case, the Nyquist rate for a waveform with no frequencies ≥ B can be reduced to just B (complex samples/sec), instead of 2 B {\displaystyle 2B} (real samples/sec). More apparently, the equivalent baseband waveform, s a ( t ) ⋅ e − i 2 π B 2 t {\displaystyle s_{a}(t)\cdot e^{-i2\pi {\frac {B}{2}}t}} , also has a Nyquist rate of B {\displaystyle B} , because all of its non-zero frequency content is shifted into the interval [ − B / 2 , B / 2 ] {\displaystyle [-B/2,B/2]} . Although complex-valued samples can be obtained as described above, they are also created by manipulating samples of a real-valued waveform. For instance, the equivalent baseband waveform can be created without explicitly computing s ^ ( t ) {\displaystyle {\hat {s}}(t)} , by processing the product sequence, [ s ( n T ) ⋅ e − i 2 π B 2 T n ] {\displaystyle \left[s(nT)\cdot e^{-i2\pi {\frac {B}{2}}Tn}\right]} , through a digital low-pass filter whose cutoff frequency is B / 2 {\displaystyle B/2} . Computing only every other sample of the output sequence reduces the sample rate commensurate with the reduced Nyquist rate. 
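One way to sketch the analytic-signal construction described above is via the discrete Fourier transform: zero the negative-frequency half of the spectrum, double the positive half, and invert. The implementation below is a deliberately naive O(N²) DFT in plain Python for clarity (a practical implementation would use an FFT); for a cosine input, the imaginary part of the result approximates its Hilbert transform, a sine.

```python
import cmath, math

def analytic_signal(x):
    """Discrete analytic signal of a real sequence via a naive DFT:
    keep DC and fs/2 bins, double positive frequencies, zero negative ones."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    for k in range(N):
        if 0 < k < N // 2:
            X[k] *= 2       # positive frequencies, doubled
        elif k > N // 2:
            X[k] = 0        # negative frequencies discarded
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# For a cosine, the analytic signal is the complex exponential:
N = 64
x = [math.cos(2 * math.pi * 5 * n / N) for n in range(N)]
sa = analytic_signal(x)
# sa[n].real ≈ cos(2π·5n/N), sa[n].imag ≈ sin(2π·5n/N)
```

The complex sequence `sa` carries the same information as the real input in half as many (complex) samples per second, which is the point made in the text about the reduced Nyquist rate B for analytic signals.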
The result is half as many complex-valued samples as the original number of real samples. No information is lost, and the original s ( t ) {\displaystyle s(t)} waveform can be recovered, if necessary. == See also == Crystal oscillator frequencies Downsampling Upsampling Multidimensional sampling In-phase and quadrature components and I/Q data Sample rate conversion Digitizing Sample and hold Beta encoder Kell factor Bit rate Normalized frequency == Notes == == References == == Further reading == Matt Pharr, Wenzel Jakob and Greg Humphreys, Physically Based Rendering: From Theory to Implementation, 3rd ed., Morgan Kaufmann, November 2016. ISBN 978-0128006450. The chapter on sampling (available online) is nicely written with diagrams, core theory and code sample. == External links == Journal devoted to Sampling Theory I/Q Data for Dummies – a page trying to answer the question Why I/Q Data? Sampling of analog signals – an interactive presentation in a web-demo at the Institute of Telecommunications, University of Stuttgart
Wikipedia/Sampling_rate
A signal is both the process and the result of transmitting data over some medium, accomplished by embedding some variation. Signals are important in multiple subject fields including signal processing, information theory and biology. In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers. The IEEE Transactions on Signal Processing includes audio, video, speech, image, sonar, and radar as examples of signals. A signal may also be defined as any observable change in a quantity over space or time (a time series), even if it does not carry information. In nature, signals can be actions done by an organism to alert other organisms, ranging from the release of plant chemicals to warn nearby plants of a predator, to sounds or motions made by animals to alert other animals of food. Signaling occurs in all organisms even at cellular levels, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are typically provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse. Another important property of a signal is its entropy or information content. Information theory serves as the formal study of signals and their content. The information of a signal is often accompanied by noise, which primarily refers to unwanted modifications of signals, but is often extended to include unwanted signals conflicting with desired signals (crosstalk). The reduction of noise is covered in part under the heading of signal integrity. 
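As a rough illustration of the entropy mentioned above, the Shannon entropy of a signal's empirical value distribution can be computed directly. This is only a sketch: a real-valued signal would first have to be quantized into a finite set of symbols.

```python
import math
from collections import Counter

def entropy_bits(samples):
    """Shannon entropy, in bits per sample, of the empirical value distribution."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy_bits([0, 1, 0, 1]))   # fair binary signal: 1 bit per sample
print(entropy_bits([7, 7, 7, 7]))   # constant signal carries no information
```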
The separation of desired signals from background noise is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances. Engineering disciplines such as electrical engineering have advanced the design, study, and implementation of systems involving transmission, storage, and manipulation of information. In the latter half of the 20th century, electrical engineering itself separated into several disciplines: electronic engineering and computer engineering developed to specialize in the design and analysis of systems that manipulate physical signals, while design engineering developed to address the functional design of signals in user–machine interfaces. == Definitions == Definitions specific to sub-fields are common: In electronics and telecommunications, signal refers to any time-varying voltage, current, or electromagnetic wave that carries information. In signal processing, signals are analog and digital representations of analog physical quantities. In information theory, a signal is a codified message, that is, the sequence of states in a communication channel that encodes a message. In a communication system, a transmitter encodes a message to create a signal, which is carried to a receiver by the communication channel. For example, the words "Mary had a little lamb" might be the message spoken into a telephone. The telephone transmitter converts the sounds into an electrical signal. The signal is transmitted to the receiving telephone by wires; at the receiver it is reconverted into sounds. In telephone networks, signaling, for example common-channel signaling, refers to phone number and other digital control information rather than the actual voice signal. == Classification == Signals can be categorized in various ways. The most common distinction is between discrete and continuous spaces that the functions are defined over, for example, discrete and continuous-time domains. 
Discrete-time signals are often referred to as time series in other fields. Continuous-time signals are often referred to as continuous signals. A second important distinction is between discrete-valued and continuous-valued. Particularly in digital signal processing, a digital signal may be defined as a sequence of discrete values, typically associated with an underlying continuous-valued physical process. In digital electronics, digital signals are the continuous-time waveform signals in a digital system, representing a bit-stream. Signals may also be categorized by their spatial distributions as either point source signals (PSSs) or distributed source signals (DSSs). In Signals and Systems, signals can be classified according to several criteria: by the nature of their values, into analog signals and digital signals; by their determinacy, into deterministic signals and random signals; and by their strength, into energy signals and power signals. === Analog and digital signals === Two main types of signals encountered in practice are analog and digital. The figure shows a digital signal that results from approximating an analog signal by its values at particular time instants. Digital signals are quantized, while analog signals are continuous. ==== Analog signal ==== An analog signal is any continuous signal for which the time-varying feature of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the sound pressure. It differs from a digital signal, in which the continuous quantity is a representation of a sequence of discrete values which can only take on one of a finite number of values. 
The term analog signal usually refers to electrical signals; however, analog signals may use other media such as mechanical, pneumatic or hydraulic. An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information. Any information may be conveyed by an analog signal; often such a signal is a measured response to changes in physical phenomena, such as sound, light, temperature, position, or pressure. The physical variable is converted to an analog signal by a transducer. For example, in sound recording, fluctuations in air pressure (that is to say, sound) strike the diaphragm of a microphone which induces corresponding electrical fluctuations. The voltage or the current is said to be an analog of the sound. ==== Digital signal ==== A digital signal is a signal that is constructed from a discrete set of waveforms of a physical quantity so as to represent a sequence of discrete values. A logic signal is a digital signal with only two possible values, and describes an arbitrary bit stream. Other types of digital signals can represent three-valued logic or higher valued logics. Alternatively, a digital signal may be considered to be the sequence of codes represented by such a physical quantity. The physical quantity may be a variable electric current or voltage, the intensity, phase or polarization of an optical or other electromagnetic field, acoustic pressure, the magnetization of a magnetic storage medium, etc. Digital signals are present in all digital electronics, notably computing equipment and data transmission. With digital signals, system noise, provided it is not too great, will not affect system operation, whereas noise always degrades the operation of analog signals to some degree. 
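The noise immunity of digital signals described above can be sketched directly: as long as the added noise stays within the decision margin, thresholding regenerates the original bit stream exactly. The ±0.4 noise bound and 0.5 threshold are illustrative choices, not values from the text.

```python
import random

random.seed(42)  # reproducible illustration

def regenerate(noisy, threshold=0.5):
    """Restore a two-level logic signal by thresholding each sample."""
    return [1 if v > threshold else 0 for v in noisy]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = [b + random.uniform(-0.4, 0.4) for b in bits]  # noise smaller than the margin
assert regenerate(noisy) == bits                       # bit stream recovered exactly
```

An analog signal subjected to the same additive noise would stay permanently degraded; there is no corresponding regeneration step.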
Digital signals often arise via sampling of analog signals, for example, a continually fluctuating voltage on a line that can be digitized by an analog-to-digital converter circuit, wherein the circuit will read the voltage level on the line, say, every 50 microseconds and represent each reading with a fixed number of bits. The resulting stream of numbers is stored as digital data, a discrete-time and quantized-amplitude signal. Computers and other digital devices are restricted to discrete time. === Energy and power === According to the strengths of signals, practical signals can be classified into two categories: energy signals and power signals. Energy signals: signals whose total energy is a finite positive value but whose average power is zero; 0 < E = ∫ − ∞ ∞ s 2 ( t ) d t < ∞ {\displaystyle 0<E=\int _{-\infty }^{\infty }s^{2}(t)dt<\infty } Power signals: signals whose average power is a finite positive value but whose total energy is infinite. P = lim T → ∞ 1 T ∫ − T / 2 T / 2 s 2 ( t ) d t {\displaystyle P=\lim _{T\rightarrow \infty }{\frac {1}{T}}\int _{-T/2}^{T/2}s^{2}(t)dt} === Deterministic and random === Deterministic signals are those whose values at any time are predictable and can be calculated by a mathematical equation. Random signals are signals that take on random values at any given time instant and must be modeled stochastically. === Even and odd === An even signal satisfies the condition x ( t ) = x ( − t ) {\displaystyle x(t)=x(-t)} or equivalently if the following equation holds for all t {\displaystyle t} and − t {\displaystyle -t} in the domain of x {\displaystyle x} : x ( t ) − x ( − t ) = 0. {\displaystyle x(t)-x(-t)=0.} An odd signal satisfies the condition x ( t ) = − x ( − t ) {\displaystyle x(t)=-x(-t)} or equivalently if the following equation holds for all t {\displaystyle t} and − t {\displaystyle -t} in the domain of x {\displaystyle x} : x ( t ) + x ( − t ) = 0. 
{\displaystyle x(t)+x(-t)=0.} === Periodic === A signal is said to be periodic if it satisfies the condition: x ( t ) = x ( t + T ) ∀ t ∈ [ t 0 , t m a x ] {\displaystyle x(t)=x(t+T)\quad \forall t\in [t_{0},t_{max}]} or x ( n ) = x ( n + N ) ∀ n ∈ [ n 0 , n m a x ] {\displaystyle x(n)=x(n+N)\quad \forall n\in [n_{0},n_{max}]} Where: T {\displaystyle T} = fundamental time period, 1 / T = f {\displaystyle 1/T=f} = fundamental frequency. The same can be applied to N {\displaystyle N} . A periodic signal will repeat for every period. ==== Time discretization ==== Signals can be classified as continuous or discrete time. In the mathematical abstraction, the domain of a continuous-time signal is the set of real numbers (or some interval thereof), whereas the domain of a discrete-time (DT) signal is the set of integers (or other subsets of real numbers). What these integers represent depends on the nature of the signal; most often it is time. A continuous-time signal is any function which is defined at every time t in an interval, most commonly an infinite interval. A simple source for a discrete-time signal is the sampling of a continuous signal, approximating the signal by a sequence of its values at particular time instants. === Amplitude quantization === If a signal is to be represented as a sequence of digital data, it is impossible to maintain exact precision – each number in the sequence must have a finite number of digits. As a result, the values of such a signal must be quantized into a finite set for practical representation. Quantization is the process of converting a continuous analog audio signal to a digital signal with discrete numerical values of integers. == Examples of signals == Naturally occurring signals can be converted to electronic signals by various sensors. Examples include: Motion. The motion of an object can be considered to be a signal and can be monitored by various sensors to provide electrical signals. 
For example, radar can provide an electromagnetic signal for following aircraft motion. A motion signal is one-dimensional (time), and the range is generally three-dimensional. Position is thus a 3-vector signal; position and orientation of a rigid body is a 6-vector signal. Orientation signals can be generated using a gyroscope. Sound. Since a sound is a vibration of a medium (such as air), a sound signal associates a pressure value to every value of time and possibly three space coordinates indicating the direction of travel. A sound signal is converted to an electrical signal by a microphone, generating a voltage signal as an analog of the sound signal. Sound signals can be sampled at a discrete set of time points; for example, compact discs (CDs) contain discrete signals representing sound, recorded at 44,100 Hz; since CDs are recorded in stereo, each sample contains data for a left and right channel, which may be considered to be a 2-vector signal. The CD encoding is converted to an electrical signal by reading the information with a laser, converting the sound signal to an optical signal. Images. A picture or image consists of a brightness or color signal, a function of a two-dimensional location. The object's appearance is presented as emitted or reflected light, an electromagnetic signal. It can be converted to voltage or current waveforms using devices such as the charge-coupled device. A 2D image can have a continuous spatial domain, as in a traditional photograph or painting; or the image can be discretized in space, as in a digital image. Color images are typically represented as a combination of monochrome images in three primary colors. Videos. A video signal is a sequence of images. A point in a video is identified by its two-dimensional position in the image and by the time at which it occurs, so a video signal has a three-dimensional domain. 
Analog video has one continuous domain dimension (across a scan line) and two discrete dimensions (frame and line). Biological membrane potentials. The value of the signal is an electric potential (voltage). The domain is more difficult to establish. Some cells or organelles have the same membrane potential throughout; neurons generally have different potentials at different points. These signals have very low energies, but are enough to make nervous systems work; they can be measured in aggregate by electrophysiology techniques. The output of a thermocouple, which conveys temperature information. The output of a pH meter which conveys acidity information. == Signal processing == Signal processing is the manipulation of signals. A common example is signal transmission between different locations. The embodiment of a signal in electrical form is made by a transducer that converts the signal from its original form to a waveform expressed as a current or a voltage, or electromagnetic radiation, for example, an optical signal or radio transmission. Once expressed as an electronic signal, the signal is available for further processing by electrical devices such as electronic amplifiers and filters, and can be transmitted to a remote location by a transmitter and received using radio receivers. == Signals and systems == In electrical engineering (EE) programs, signals are covered in a class and field of study known as signals and systems. Depending on the school, undergraduate EE students generally take the class as juniors or seniors, normally depending on the number and level of previous linear algebra and differential equation classes they have taken. The field studies input and output signals, and the mathematical representations between them known as systems, in four domains: time, frequency, s and z. Since signals and systems are both studied in these four domains, there are 8 major divisions of study. 
As an example, when working with continuous-time signals (t), one might transform from the time domain to a frequency or s domain; or from discrete time (n) to frequency or z domains. Systems also can be transformed between these domains like signals, with continuous to s and discrete to z. Signals and systems is a subset of the field of mathematical modeling. It involves circuit analysis and design via mathematical modeling and some numerical methods, and was updated several decades ago with dynamical systems tools including differential equations, and recently, Lagrangians. Students are expected to understand the modeling tools as well as the mathematics, physics, circuit analysis, and transformations between the 8 domains. Because mechanical engineering (ME) topics like friction, damping, etc. have very close analogies in signal science (inductance, resistance, voltage, etc.), many of the tools originally used in ME transformations (Laplace and Fourier transforms, Lagrangians, sampling theory, probability, difference equations, etc.) have now been applied to signals, circuits, systems and their components, analysis and design in EE. Dynamical systems that involve noise, filtering and other random or chaotic attractors and repellers have now placed stochastic sciences and statistics between the more deterministic discrete and continuous functions in the field. (Deterministic as used here means signals that are completely determined as functions of time). EE taxonomists are still not decided where signals and systems falls within the whole field of signal processing vs. circuit analysis and mathematical modeling, but the common link of the topics that are covered in the course of study has brightened boundaries with dozens of books, journals, etc. called "Signals and Systems", and used as text and test prep for the EE, as well as, recently, computer engineering exams. 
== Gallery == == See also == A Mathematical Theory of Communication – 1948 scholarly article by Claude Shannon Beacon Current loop – a signaling system in widespread use for process control Signal-to-noise ratio == Notes == == References == == Further reading == Hsu, P. H. (1995). Schaum's Theory and Problems: Signals and Systems. McGraw-Hill. ISBN 0-07-030641-9. Lathi, B.P. (1998). Signal Processing & Linear Systems. Berkeley-Cambridge Press. ISBN 0-941413-35-7.
Wikipedia/Signal_(information_theory)
In mathematics, the Laplace–Carson transform, named after Pierre Simon Laplace and John Renshaw Carson, is an integral transform with significant applications in the field of physics and engineering, particularly in the field of railway engineering. == Definition == Let V ( j , t ) {\displaystyle V(j,t)} be a function and p {\displaystyle p} a complex variable. The Laplace–Carson transform is defined as: V ∗ ( j , p ) = p ∫ 0 ∞ V ( j , t ) e − p t d t {\displaystyle V^{\ast }(j,p)=p\int _{0}^{\infty }V(j,t)e^{-pt}\,dt} The inverse Laplace–Carson transform is: V ( j , t ) = 1 2 π i ∫ a 0 − i ∞ a 0 + i ∞ e t p V ∗ ( j , p ) p d p {\displaystyle V(j,t)={\frac {1}{2\pi i}}\int _{a_{0}-i\infty }^{a_{0}+i\infty }e^{tp}{\frac {V^{\ast }(j,p)}{p}}\,dp} where a 0 {\displaystyle a_{0}} is a real-valued constant and i ∞ {\displaystyle i\infty } refers to the imaginary axis; the integral is carried out along a straight line parallel to the imaginary axis lying to the right of all the singularities of the integrand: e t p V ∗ ( j , p ) p {\displaystyle e^{tp}{\frac {V^{\ast }(j,p)}{p}}} == See also == Laplace transform == References ==
Wikipedia/Laplace–Carson_transform
In electrical engineering and electronics, a network is a collection of interconnected components. Network analysis is the process of finding the voltages across, and the currents through, all network components. There are many techniques for calculating these values; however, for the most part, the techniques assume linear components. Except where stated, the methods described in this article are applicable only to linear network analysis. == Definitions == == Equivalent circuits == A useful procedure in network analysis is to simplify the network by reducing the number of components. This can be done by replacing physical components with other notional components that have the same effect. A particular technique might directly reduce the number of components, for instance by combining impedances in series. On the other hand, it might merely change the form into one in which the components can be reduced in a later operation. For instance, one might transform a voltage generator into a current generator using Norton's theorem in order to be able to later combine the internal resistance of the generator with a parallel impedance load. A resistive circuit is a circuit containing only resistors, ideal current sources, and ideal voltage sources. If the sources are constant (DC) sources, the result is a DC circuit. Analysis of a circuit consists of solving for the voltages and currents present in the circuit. The solution principles outlined here also apply to phasor analysis of AC circuits. Two circuits are said to be equivalent with respect to a pair of terminals if the voltage across the terminals and current through the terminals for one network have the same relationship as the voltage and current at the terminals of the other network. If V 2 = V 1 {\displaystyle V_{2}=V_{1}} implies I 2 = I 1 {\displaystyle I_{2}=I_{1}} for all (real) values of V1, then with respect to terminals ab and xy, circuit 1 and circuit 2 are equivalent. 
The above is a sufficient definition for a one-port network. For more than one port, it must be defined that the currents and voltages between all pairs of corresponding ports must bear the same relationship. For instance, star and delta networks are effectively three-port networks and hence require three simultaneous equations to fully specify their equivalence. === Impedances in series and in parallel === Some two-terminal networks of impedances can eventually be reduced to a single impedance by successive applications of impedances in series or impedances in parallel. Impedances in series: Z e q = Z 1 + Z 2 + ⋯ + Z n . {\displaystyle Z_{\mathrm {eq} }=Z_{1}+Z_{2}+\,\cdots \,+Z_{n}.} Impedances in parallel: 1 Z e q = 1 Z 1 + 1 Z 2 + ⋯ + 1 Z n . {\displaystyle {\frac {1}{Z_{\mathrm {eq} }}}={\frac {1}{Z_{1}}}+{\frac {1}{Z_{2}}}+\,\cdots \,+{\frac {1}{Z_{n}}}.} The above simplified for only two impedances in parallel: Z e q = Z 1 Z 2 Z 1 + Z 2 . {\displaystyle Z_{\mathrm {eq} }={\frac {Z_{1}Z_{2}}{Z_{1}+Z_{2}}}.} === Delta-wye transformation === A network of impedances with more than two terminals cannot be reduced to a single impedance equivalent circuit. An n-terminal network can, at best, be reduced to n impedances (at worst ( n 2 ) {\displaystyle {\tbinom {n}{2}}} ). For a three-terminal network, the three impedances can be expressed as a three-node delta (Δ) network or a four-node star (Y) network. These two networks are equivalent and the transformations between them are given below. A general network with an arbitrary number of nodes cannot be reduced to the minimum number of impedances using only series and parallel combinations. In general, Y-Δ and Δ-Y transformations must also be used. For some networks the extension of Y-Δ to star-polygon transformations may also be required. For equivalence, the impedances between any pair of terminals must be the same for both networks, resulting in a set of three simultaneous equations. 
The equations below are expressed as resistances but apply equally to the general case with impedances. ==== Delta-to-star transformation equations ==== R a = R a c R a b R a c + R a b + R b c R b = R a b R b c R a c + R a b + R b c R c = R b c R a c R a c + R a b + R b c {\displaystyle {\begin{aligned}R_{a}&={\frac {R_{\mathrm {ac} }R_{\mathrm {ab} }}{R_{\mathrm {ac} }+R_{\mathrm {ab} }+R_{\mathrm {bc} }}}\\R_{b}&={\frac {R_{\mathrm {ab} }R_{\mathrm {bc} }}{R_{\mathrm {ac} }+R_{\mathrm {ab} }+R_{\mathrm {bc} }}}\\R_{c}&={\frac {R_{\mathrm {bc} }R_{\mathrm {ac} }}{R_{\mathrm {ac} }+R_{\mathrm {ab} }+R_{\mathrm {bc} }}}\end{aligned}}} ==== Star-to-delta transformation equations ==== R a c = R a R b + R b R c + R c R a R b R a b = R a R b + R b R c + R c R a R c R b c = R a R b + R b R c + R c R a R a {\displaystyle {\begin{aligned}R_{\mathrm {ac} }&={\frac {R_{a}R_{b}+R_{b}R_{c}+R_{c}R_{a}}{R_{b}}}\\R_{\mathrm {ab} }&={\frac {R_{a}R_{b}+R_{b}R_{c}+R_{c}R_{a}}{R_{c}}}\\R_{\mathrm {bc} }&={\frac {R_{a}R_{b}+R_{b}R_{c}+R_{c}R_{a}}{R_{a}}}\end{aligned}}} === General form of network node elimination === The star-to-delta and series-resistor transformations are special cases of the general resistor network node elimination algorithm. Any node connected by N resistors (R1 … RN) to nodes 1 … N can be replaced by resistors interconnecting the remaining N nodes. 
The resistance between any two nodes x, y is given by: R x y = R x R y ∑ i = 1 N 1 R i {\displaystyle R_{\mathrm {xy} }=R_{x}R_{y}\sum _{i=1}^{N}{\frac {1}{R_{i}}}} For a star-to-delta (N = 3) this reduces to: R a b = R a R b ( 1 R a + 1 R b + 1 R c ) = R a R b ( R a R b + R a R c + R b R c ) R a R b R c = R a R b + R b R c + R c R a R c {\displaystyle {\begin{aligned}R_{\mathrm {ab} }&=R_{a}R_{b}\left({\frac {1}{R}}_{a}+{\frac {1}{R}}_{b}+{\frac {1}{R}}_{c}\right)={\frac {R_{a}R_{b}(R_{a}R_{b}+R_{a}R_{c}+R_{b}R_{c})}{R_{a}R_{b}R_{c}}}\\&={\frac {R_{a}R_{b}+R_{b}R_{c}+R_{c}R_{a}}{R_{c}}}\end{aligned}}} For a series reduction (N = 2) this reduces to: R a b = R a R b ( 1 R a + 1 R b ) = R a R b ( R a + R b ) R a R b = R a + R b {\displaystyle R_{\mathrm {ab} }=R_{a}R_{b}\left({\frac {1}{R}}_{a}+{\frac {1}{R}}_{b}\right)={\frac {R_{a}R_{b}(R_{a}+R_{b})}{R_{a}R_{b}}}=R_{a}+R_{b}} For a dangling resistor (N = 1) it results in the elimination of the resistor because ( 1 2 ) = 0 {\displaystyle {\tbinom {1}{2}}=0} . === Source transformation === A generator with an internal impedance (i.e. non-ideal generator) can be represented as either an ideal voltage generator or an ideal current generator plus the impedance. These two forms are equivalent and the transformations are given below. If the two networks are equivalent with respect to terminals ab, then V and I must be identical for both networks. Thus, V s = R I s {\displaystyle V_{\mathrm {s} }=RI_{\mathrm {s} }\,\!} or I s = V s R {\displaystyle I_{\mathrm {s} }={\frac {V_{\mathrm {s} }}{R}}} Norton's theorem states that any two-terminal linear network can be reduced to an ideal current generator and a parallel impedance. Thévenin's theorem states that any two-terminal linear network can be reduced to an ideal voltage generator plus a series impedance. == Simple networks == Some very simple networks can be analysed without the need to apply the more systematic approaches. 
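The delta-to-star and star-to-delta equations above translate directly into a small sketch; converting a delta network to a star and back recovers the original resistances. The 10/20/30 Ω values are illustrative.

```python
def delta_to_star(r_ab, r_bc, r_ac):
    """Delta (Δ) to star (Y) conversion for a three-terminal resistor network."""
    s = r_ac + r_ab + r_bc
    return (r_ac * r_ab / s,   # R_a
            r_ab * r_bc / s,   # R_b
            r_bc * r_ac / s)   # R_c

def star_to_delta(r_a, r_b, r_c):
    """Star (Y) to delta (Δ) conversion."""
    n = r_a * r_b + r_b * r_c + r_c * r_a
    return (n / r_c,           # R_ab
            n / r_a,           # R_bc
            n / r_b)           # R_ac

# Round trip: Δ → Y → Δ recovers the original resistances.
delta = (10.0, 20.0, 30.0)
star = delta_to_star(*delta)
print(star_to_delta(*star))
```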
=== Voltage division of series components === Consider n impedances that are connected in series. The voltage V i {\displaystyle V_{i}} across any impedance Z i {\displaystyle Z_{i}} is V i = Z i I = ( Z i Z 1 + Z 2 + ⋯ + Z n ) V {\displaystyle V_{i}=Z_{i}I=\left({\frac {Z_{i}}{Z_{1}+Z_{2}+\cdots +Z_{n}}}\right)V} for i = 1 , 2 , . . . , n . {\displaystyle i=1,2,...,n.} === Current division of parallel components === Consider n admittances that are connected in parallel. The current I i {\displaystyle I_{i}} through any admittance Y i {\displaystyle Y_{i}} is I i = Y i V = ( Y i Y 1 + Y 2 + ⋯ + Y n ) I {\displaystyle I_{i}=Y_{i}V=\left({\frac {Y_{i}}{Y_{1}+Y_{2}+\cdots +Y_{n}}}\right)I} for i = 1 , 2 , . . . , n . {\displaystyle i=1,2,...,n.} ==== Special case: Current division of two parallel components ==== I 1 = ( Z 2 Z 1 + Z 2 ) I {\displaystyle I_{1}=\left({\frac {Z_{2}}{Z_{1}+Z_{2}}}\right)I} I 2 = ( Z 1 Z 1 + Z 2 ) I {\displaystyle I_{2}=\left({\frac {Z_{1}}{Z_{1}+Z_{2}}}\right)I} == Nodal analysis == Nodal analysis uses the concept of a node voltage and considers the node voltages to be the unknown variables.: 2-8 - 2-9  For all nodes, except a chosen reference node, the node voltage is defined as the voltage drop from the node to the reference node. Therefore, there are N-1 node voltages for a circuit with N nodes.: 2-10  In principle, nodal analysis uses Kirchhoff's current law (KCL) at N-1 nodes to get N-1 independent equations. Since equations generated with KCL are in terms of currents going in and out of nodes, these currents, if their values are not known, need to be represented by the unknown variables (node voltages). For some elements (such as resistors and capacitors) getting the element currents in terms of node voltages is trivial. For some common elements where this is not possible, specialized methods are developed. For example, a concept called supernode is used for circuits with independent voltage sources.: 2-12 - 2-13  Label all nodes in the circuit. Arbitrarily select any node as reference. Define a voltage variable from every remaining node to the reference. 
These voltage variables must be defined as voltage rises with respect to the reference node. Write a KCL equation for every node except the reference. Solve the resulting system of equations. == Mesh analysis == A mesh is a loop that does not contain an inner loop. Count the number of “window panes” in the circuit. Assign a mesh current to each window pane. Write a KVL equation for every mesh whose current is unknown. Solve the resulting equations. == Superposition == In this method, the effect of each generator in turn is calculated. All the generators other than the one being considered are removed and either short-circuited in the case of voltage generators or open-circuited in the case of current generators. The total current through or the total voltage across a particular branch is then calculated by summing all the individual currents or voltages. There is an underlying assumption to this method that the total current or voltage is a linear superposition of its parts. Therefore, the method cannot be used if non-linear components are present. : 6–14  Superposition of powers cannot be used to find total power consumed by elements even in linear circuits. Power varies according to the square of total voltage or current and the square of the sum is not generally equal to the sum of the squares. Total power in an element can be found by applying superposition to the voltages and currents independently and then calculating power from the total voltage and current. == Choice of method == Choice of method: 112–113  is to some extent a matter of taste. If the network is particularly simple or only a specific current or voltage is required then ad-hoc application of some simple equivalent circuits may yield the answer without recourse to the more systematic methods. Nodal analysis: The number of voltage variables, and hence simultaneous equations to solve, equals the number of nodes minus one. 
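The nodal procedure listed above can be sketched on a two-node example: a 1 A current source feeding node 1, with 1 Ω resistors from node 1 to ground, between nodes 1 and 2, and from node 2 to ground (component values are arbitrary illustrations):

```python
def solve2(a, b):
    # Cramer's rule for a 2x2 system a @ v = b (illustrative helper)
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return ((b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det)

R1 = R2 = R3 = 1.0   # node1-ground, node1-node2, node2-ground
# KCL at each non-reference node, written in terms of the node voltages:
G = [[1/R1 + 1/R2, -1/R2],
     [-1/R2, 1/R2 + 1/R3]]
V1, V2 = solve2(G, [1.0, 0.0])   # 1 A injected at node 1, none at node 2
```

Solving gives V1 = 2/3 V and V2 = 1/3 V, which can be confirmed by checking KCL at both nodes.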
Every voltage source connected to the reference node reduces the number of unknowns and equations by one. Mesh analysis: The number of current variables, and hence simultaneous equations to solve, equals the number of meshes. Every current source in a mesh reduces the number of unknowns by one. Mesh analysis can only be used with networks which can be drawn as a planar network, that is, with no crossing components.: 94  Superposition is possibly the most conceptually simple method but rapidly leads to a large number of equations and messy impedance combinations as the network becomes larger. Effective medium approximations: For a network consisting of a high density of random resistors, an exact solution for each individual element may be impractical or impossible. Instead, the effective resistance and current distribution properties can be modelled in terms of graph measures and geometrical properties of networks. == Transfer function == A transfer function expresses the relationship between an input and an output of a network. For resistive networks, this will always be a simple real number or an expression which boils down to a real number. Resistive networks are represented by a system of simultaneous algebraic equations. However, in the general case of linear networks, the network is represented by a system of simultaneous linear differential equations. In network analysis, rather than use the differential equations directly, it is usual practice to carry out a Laplace transform on them first and then express the result in terms of the Laplace parameter s, which in general is complex. This is described as working in the s-domain. Working with the equations directly would be described as working in the time (or t) domain because the results would be expressed as time varying quantities. The Laplace transform is the mathematical method of transforming between the s-domain and the t-domain. 
This approach is standard in control theory and is useful for determining stability of a system, for instance, in an amplifier with feedback. === Two terminal component transfer functions === For two terminal components the transfer function, or more generally for non-linear elements, the constitutive equation, is the relationship between the current input to the device and the resulting voltage across it. The transfer function, Z(s), will thus have units of impedance, ohms. For the three passive components found in electrical networks, the transfer functions are Z ( s ) = R {\displaystyle Z(s)=R} for a resistor, Z ( s ) = s L {\displaystyle Z(s)=sL} for an inductor, and Z ( s ) = 1 s C {\displaystyle Z(s)={\frac {1}{sC}}} for a capacitor. For a network to which only steady ac signals are applied, s is replaced with jω and the more familiar values from ac network theory result. Finally, for a network to which only steady dc is applied, s is replaced with zero and dc network theory applies. === Two port network transfer function === Transfer functions, in general, in control theory are given the symbol H(s). Most commonly in electronics, transfer function is defined as the ratio of output voltage to input voltage and given the symbol A(s), or more commonly (because analysis is invariably done in terms of sine wave response), A(jω), so that; A ( j ω ) = V o V i {\displaystyle A(j\omega )={\frac {V_{o}}{V_{i}}}} The A stands for attenuation, or amplification, depending on context. In general, this will be a complex function of jω, which can be derived from an analysis of the impedances in the network and their individual transfer functions. Sometimes the analyst is only interested in the magnitude of the gain and not the phase angle. In this case the complex numbers can be eliminated from the transfer function and it might then be written as; A ( ω ) = | V o V i | {\displaystyle A(\omega )=\left|{\frac {V_{o}}{V_{i}}}\right|} ==== Two port parameters ==== The concept of a two-port network can be useful in network analysis as a black box approach to analysis. 
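The transfer functions of the three passive components mentioned above, together with the substitutions s = jω for steady ac and s = 0 for dc, can be sketched with Python's built-in complex numbers (the function is illustrative, not from any library):

```python
import math

def impedance(kind, value, s):
    # Z(s) = R for a resistor, sL for an inductor, 1/(sC) for a capacitor
    if kind == "R":
        return complex(value)
    if kind == "L":
        return s * value
    if kind == "C":
        return 1.0 / (s * value)
    raise ValueError("unknown element: " + kind)

# steady ac analysis: substitute s = j*omega
w = 2 * math.pi * 50.0                 # 50 Hz
Zc = impedance("C", 1e-6, 1j * w)      # a 1 uF capacitor (illustrative value)
# steady dc analysis: substitute s = 0 (an inductor becomes a short, Z = 0)
Zl_dc = impedance("L", 0.1, 0.0)
```

The capacitor's impedance comes out purely reactive with magnitude 1/(ωC), as ac network theory predicts.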
The behaviour of the two-port network in a larger network can be entirely characterised without necessarily stating anything about the internal structure. However, to do this it is necessary to have more information than just the A(jω) described above. It can be shown that four such parameters are required to fully characterise the two-port network. These could be the forward transfer function, the input impedance, the reverse transfer function (i.e., the voltage appearing at the input when a voltage is applied to the output) and the output impedance. There are many others (see the main article for a full listing), one of these expresses all four parameters as impedances. It is usual to express the four parameters as a matrix; [ V 1 V 0 ] = [ z ( j ω ) 11 z ( j ω ) 12 z ( j ω ) 21 z ( j ω ) 22 ] [ I 1 I 0 ] {\displaystyle {\begin{bmatrix}V_{1}\\V_{0}\end{bmatrix}}={\begin{bmatrix}z(j\omega )_{11}&z(j\omega )_{12}\\z(j\omega )_{21}&z(j\omega )_{22}\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{0}\end{bmatrix}}} The matrix may be abbreviated to a representative element; [ z ( j ω ) ] {\displaystyle \left[z(j\omega )\right]} or just [ z ] {\displaystyle \left[z\right]} These concepts are capable of being extended to networks of more than two ports. However, this is rarely done in reality because, in many practical cases, ports are considered either purely input or purely output. If reverse direction transfer functions are ignored, a multi-port network can always be decomposed into a number of two-port networks. ==== Distributed components ==== Where a network is composed of discrete components, analysis using two-port networks is a matter of choice, not essential. The network can always alternatively be analysed in terms of its individual component transfer functions. However, if a network contains distributed components, such as in the case of a transmission line, then it is not possible to analyse in terms of individual components since they do not exist. 
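As a concrete instance of the impedance-parameter matrix above, consider a simple T network (an assumed example topology: series arm Za, shunt arm Zb, series arm Zc). With the output port open, V1/I1 = Za + Zb, and the shunt arm is the only shared branch, so z12 = z21 = Zb:

```python
def t_network_z(Za, Zb, Zc):
    # z-parameter matrix of a T network: [[z11, z12], [z21, z22]]
    return [[Za + Zb, Zb],
            [Zb, Zc + Zb]]

z = t_network_z(1.0, 2.0, 3.0)   # illustrative impedance values
# port voltages from port currents: V = z @ I
I1, I2 = 0.5, -0.25
V1 = z[0][0] * I1 + z[0][1] * I2
V2 = z[1][0] * I1 + z[1][1] * I2
```

Note that z12 = z21, as expected for a reciprocal network of passive impedances.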
The most common approach to this is to model the line as a two-port network and characterise it using two-port parameters (or something equivalent to them). Another example of this technique is modelling the carriers crossing the base region in a high frequency transistor. The base region has to be modelled as distributed resistance and capacitance rather than lumped components. ==== Image analysis ==== Transmission lines and certain types of filter design use the image method to determine their transfer parameters. In this method, the behaviour of an infinitely long cascade connected chain of identical networks is considered. The input and output impedances and the forward and reverse transmission functions are then calculated for this infinitely long chain. Although the theoretical values so obtained can never be exactly realised in practice, in many cases they serve as a very good approximation for the behaviour of a finite chain as long as it is not too short. == Time-based network analysis with simulation == Most analysis methods calculate the voltage and current values for static networks, which are circuits consisting of memoryless components only, but they have difficulties with complex dynamic networks. In general, the equations that describe the behaviour of a dynamic circuit are in the form of a differential-algebraic system of equations (DAEs). DAEs are challenging to solve and the methods for doing so are not yet fully understood and developed (as of 2010). Also, there is no general theorem that guarantees solutions to DAEs will exist and be unique. : 204–205  In special cases, the equations of the dynamic circuit will be in the form of ordinary differential equations (ODEs), which are easier to solve, since numerical methods for solving ODEs have a rich history, dating back to the late 1800s. One strategy for adapting ODE solution methods to DAEs is called direct discretization and is the method of choice in circuit simulation. 
: 204–205  Simulation-based methods for time-based network analysis solve a circuit that is posed as an initial value problem (IVP). That is, the values of the components with memories (for example, the voltages on capacitors and currents through inductors) are given at an initial point of time t0, and the analysis is done for the time t 0 ≤ t ≤ t f {\displaystyle t_{0}\leq t\leq t_{f}} . : 206–207  Since finding numerical results for the infinite number of time points from t0 to tf is not possible, this time period is divided into discrete time instances, and the numerical solution is found for every instance. The time between the time instances is called the time step and can be fixed throughout the whole simulation or may be adaptive. In an IVP, when finding a solution for time tn+1, the solution for time tn is already known. Then, temporal discretization is used to replace the derivatives with differences, such as x ′ ( t n + 1 ) ≈ x n + 1 − x n h n + 1 {\displaystyle x'(t_{n+1})\approx {\frac {x_{n+1}-x_{n}}{h_{n+1}}}} for the backward Euler method, where hn+1 is the time step. : 266  If all circuit components were linear or the circuit was linearized beforehand, the equation system at this point is a system of linear equations and is solved with numerical linear algebra methods. Otherwise, it is a nonlinear algebraic equation system and is solved with nonlinear numerical methods such as root-finding algorithms. === Comparison to other methods === Simulation methods are much more applicable than Laplace transform based methods, such as transfer functions, which only work for simple dynamic networks with capacitors and inductors. Also, the input signals to the network cannot be arbitrarily defined for Laplace transform based methods. == Non-linear networks == Most electronic designs are, in reality, non-linear. There are very few that do not include some semiconductor devices. 
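The backward-Euler time stepping described above can be sketched on the simplest dynamic circuit, an RC charging network obeying C dv/dt = (Vs − v)/R (component values below are arbitrary illustrations):

```python
def simulate_rc(Vs, R, C, t_end, h):
    # backward Euler: v_{n+1} = v_n + (h / (R*C)) * (Vs - v_{n+1}),
    # solved algebraically for v_{n+1} at each fixed time step h
    tau = R * C
    v = 0.0                      # initial value: capacitor discharged at t0
    trace = [v]
    for _ in range(round(t_end / h)):
        v = (v + h * Vs / tau) / (1.0 + h / tau)
        trace.append(v)
    return trace

trace = simulate_rc(Vs=1.0, R=1e3, C=1e-6, t_end=5e-3, h=1e-5)
```

Because the update is implicit, it remains stable even for time steps comparable to the time constant; after five time constants the simulated capacitor voltage is within about 1% of the supply.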
These are invariably non-linear; the transfer function of an ideal semiconductor p-n junction is given by the very non-linear relationship i = I o ( e v / V T − 1 ) {\displaystyle i=I_{o}\left(e^{v/V_{T}}-1\right)} where i and v are the instantaneous current and voltage, Io is a parameter called the reverse leakage current whose value depends on the construction of the device, and VT is a parameter proportional to temperature called the thermal voltage, equal to about 25 mV at room temperature. There are many other ways that non-linearity can appear in a network. All methods utilising linear superposition will fail when non-linear components are present. There are several options for dealing with non-linearity depending on the type of circuit and the information the analyst wishes to obtain. === Constitutive equations === The diode equation above is an example of an element constitutive equation of the general form, f ( v , i ) = 0 {\displaystyle f(v,i)=0} This can be thought of as a non-linear resistor. The corresponding constitutive equations for non-linear inductors and capacitors are respectively; f ( φ , i ) = 0 {\displaystyle f(\varphi ,i)=0} f ( v , q ) = 0 {\displaystyle f(v,q)=0} where f is any arbitrary function, φ is the stored magnetic flux and q is the stored charge. === Existence, uniqueness and stability === An important consideration in non-linear analysis is the question of uniqueness. For a network composed of linear components there will always be one, and only one, unique solution for a given set of boundary conditions. This is not always the case in non-linear circuits. For instance, a linear resistor with a fixed current applied to it has only one solution for the voltage across it. On the other hand, the non-linear tunnel diode has up to three solutions for the voltage for a given current. That is, a particular solution for the current through the diode is not unique; there may be others, equally valid. 
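Combining the ideal p-n junction law described above, i = Io(e^(v/VT) − 1), with a linear source network gives a single non-linear equation in the diode voltage. A sketch solving a source-resistor-diode loop with Newton's method (the solver choice and the component values are illustrative assumptions, not a prescribed procedure):

```python
from math import exp

IO, VT = 1e-12, 0.025   # reverse leakage current and thermal voltage (assumed)

def diode_i(v):
    # ideal p-n junction law: i = Io*(exp(v/VT) - 1)
    return IO * (exp(v / VT) - 1.0)

def solve_loop(Vs, R, v=0.6, iters=100):
    # Newton's method on f(v) = R*i(v) + v - Vs = 0 (KVL around the loop)
    for _ in range(iters):
        f = R * diode_i(v) + v - Vs
        df = R * (IO / VT) * exp(v / VT) + 1.0
        v -= f / df
    return v

v_d = solve_loop(Vs=5.0, R=1000.0)   # diode voltage, roughly 0.55 V here
```

Because f is convex and increasing, the iteration started just above the root converges monotonically; the resulting voltage and current satisfy both the diode law and Kirchhoff's voltage law around the loop.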
In some cases there may not be a solution at all: the question of existence of solutions must be considered. Another important consideration is the question of stability. A particular solution may exist, but it may not be stable, rapidly departing from that point at the slightest stimulation. It can be shown that a network that is absolutely stable for all conditions must have one, and only one, solution for each set of conditions. === Methods === ==== Boolean analysis of switching networks ==== A switching device is one where the non-linearity is utilised to produce two opposite states. CMOS devices in digital circuits, for instance, have their output connected to either the positive or the negative supply rail and are never found at anything in between except during a transient period when the device is switching. Here the non-linearity is designed to be extreme, and the analyst can take advantage of that fact. These kinds of networks can be analysed using Boolean algebra by assigning the two states ("on"/"off", "positive"/"negative" or whatever states are being used) to the Boolean constants "0" and "1". The transients are ignored in this analysis, along with any slight discrepancy between the state of the device and the nominal state assigned to a Boolean value. For instance, Boolean "1" may be assigned to the state of +5V. The output of the device may be +4.5V but the analyst still considers this to be Boolean "1". Device manufacturers will usually specify a range of values in their data sheets that are to be considered undefined (i.e. the result will be unpredictable). The transients are not entirely uninteresting to the analyst. The maximum rate of switching is determined by the speed of transition from one state to the other. Happily for the analyst, for many devices most of the transition occurs in the linear portion of the device's transfer function and linear analysis can be applied to obtain at least an approximate answer. 
It is mathematically possible to derive Boolean algebras that have more than two states. Not much use has been found for these in electronics, although three-state devices are passingly common. ==== Separation of bias and signal analyses ==== This technique is used where the operation of the circuit is to be essentially linear, but the devices used to implement it are non-linear. A transistor amplifier is an example of this kind of network. The essence of this technique is to separate the analysis into two parts. Firstly, the dc biases are analysed using some non-linear method. This establishes the quiescent operating point of the circuit. Secondly, the small signal characteristics of the circuit are analysed using linear network analysis. Examples of methods that can be used for both these stages are given below. ==== Graphical method of dc analysis ==== In a great many circuit designs, the dc bias is fed to a non-linear component via a resistor (or possibly a network of resistors). Since resistors are linear components, it is particularly easy to determine the quiescent operating point of the non-linear device from a graph of its transfer function. The method is as follows: from linear network analysis the output transfer function (that is output voltage against output current) is calculated for the network of resistor(s) and the generator driving them. This will be a straight line (called the load line) and can readily be superimposed on the transfer function plot of the non-linear device. The point where the lines cross is the quiescent operating point. Perhaps the easiest practical method is to calculate the (linear) network open circuit voltage and short circuit current and plot these on the transfer function of the non-linear device. The straight line joining these two points is the transfer function of the network. In reality, the designer of the circuit would proceed in the reverse direction to that described. 
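The load-line construction described above can be done numerically instead of graphically: compute the linear network's open-circuit voltage and short-circuit current, then find where the load line (Vs − v)/R crosses the device curve. A sketch using an ideal-diode curve and simple bisection (all values and the choice of root-finder are illustrative assumptions):

```python
from math import exp

def diode_current(v, Io=1e-12, VT=0.025):
    # ideal p-n junction curve used as the non-linear device
    return Io * (exp(v / VT) - 1.0)

Vs, R = 5.0, 1000.0
v_oc = Vs          # open-circuit voltage: one end of the load line
i_sc = Vs / R      # short-circuit current: the other end

# bisect on the difference between load-line current and device current,
# which changes sign exactly at the crossing point
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if (Vs - mid) / R > diode_current(mid):
        lo = mid      # load line still above the device curve
    else:
        hi = mid
v_q = 0.5 * (lo + hi)    # quiescent (operating point) voltage
i_q = (Vs - v_q) / R     # quiescent current
```

The crossing point (v_q, i_q) is the quiescent operating point that the graphical superposition of the two curves would give.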
Starting from a plot provided in the manufacturer's data sheet for the non-linear device, the designer would choose the desired operating point and then calculate the linear component values required to achieve it. It is still possible to use this method if the device being biased has its bias fed through another device which is itself non-linear, a diode for instance. In this case however, the plot of the network transfer function onto the device being biased would no longer be a straight line and is consequently more tedious to do. ==== Small signal equivalent circuit ==== This method can be used where the deviation of the input and output signals in a network stay within a substantially linear portion of the non-linear device's transfer function, or else are so small that the curve of the transfer function can be considered linear. Under a set of these specific conditions, the non-linear device can be represented by an equivalent linear network. It must be remembered that this equivalent circuit is entirely notional and only valid for the small signal deviations. It is entirely inapplicable to the dc biasing of the device. For a simple two-terminal device, the small signal equivalent circuit may be no more than two components: a resistance equal to the slope of the v/i curve at the operating point (called the dynamic resistance) and tangent to the curve, and a generator, because this tangent will not, in general, pass through the origin. With more terminals, more complicated equivalent circuits are required. A popular form of specifying the small signal equivalent circuit amongst transistor manufacturers is to use the two-port network parameters known as [h] parameters. These are a matrix of four parameters as with the [z] parameters but in the case of the [h] parameters they are a hybrid mixture of impedances, admittances, current gains and voltage gains. 
In this model the three terminal transistor is considered to be a two port network, one of its terminals being common to both ports. The [h] parameters are quite different depending on which terminal is chosen as the common one. The most important parameter for transistors is usually the forward current gain, h21, in the common emitter configuration. This is designated hfe on data sheets. The small signal equivalent circuit in terms of two-port parameters leads to the concept of dependent generators. That is, the value of a voltage or current generator depends linearly on a voltage or current elsewhere in the circuit. For instance the [z] parameter model leads to dependent voltage generators as shown in this diagram; There will always be dependent generators in a two-port parameter equivalent circuit. This applies to the [h] parameters as well as to the [z] and any other kind. These dependencies must be preserved when developing the equations in a larger linear network analysis. ==== Piecewise linear method ==== In this method, the transfer function of the non-linear device is broken up into regions. Each of these regions is approximated by a straight line. Thus, the transfer function will be linear up to a particular point where there will be a discontinuity. Past this point the transfer function will again be linear but with a different slope. A well known application of this method is the approximation of the transfer function of a pn junction diode. The transfer function of an ideal diode has been given at the top of this (non-linear) section. However, this formula is rarely used in network analysis, a piecewise approximation being used instead. It can be seen that the diode current rapidly diminishes to -Io as the voltage falls. This current, for most purposes, is so small it can be ignored. With increasing voltage, the current increases exponentially. 
The diode is modelled as an open circuit up to the knee of the exponential curve, then past this point as a resistor equal to the bulk resistance of the semiconducting material. The commonly accepted values for the transition point voltage are 0.7V for silicon devices and 0.3V for germanium devices. An even simpler model of the diode, sometimes used in switching applications, is short circuit for forward voltages and open circuit for reverse voltages. The model of a forward biased pn junction having an approximately constant 0.7V is also a much used approximation for transistor base-emitter junction voltage in amplifier design. The piecewise method is similar to the small signal method in that linear network analysis techniques can only be applied if the signal stays within certain bounds. If the signal crosses a discontinuity point then the model is no longer valid for linear analysis purposes. The model does have the advantage over small signal however, in that it is equally applicable to signal and dc bias. These can therefore both be analysed in the same operations and will be linearly superimposable. === Time-varying components === In linear analysis, the components of the network are assumed to be unchanging, but in some circuits this does not apply, such as sweep oscillators, voltage controlled amplifiers, and variable equalisers. In many circumstances the change in component value is periodic. A non-linear component excited with a periodic signal, for instance, can be represented as a periodically varying linear component. Sidney Darlington disclosed a method of analysing such periodic time varying circuits. He developed canonical circuit forms which are analogous to the canonical forms of Ronald M. Foster and Wilhelm Cauer used for analysing linear circuits. === Vector circuit theory === Generalization of circuit theory based on scalar quantities to vectorial currents is a necessity for newly evolving circuits such as spin circuits. 
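The piecewise-linear diode model described above reduces to a few lines of code (the knee voltage and bulk resistance below are the usual illustrative choices, not values for any particular device):

```python
def diode_piecewise(v, v_knee=0.7, r_bulk=0.1):
    # open circuit below the knee; above it, a resistor r_bulk in series
    # with a constant v_knee drop (silicon: ~0.7 V; germanium: ~0.3 V)
    return 0.0 if v <= v_knee else (v - v_knee) / r_bulk

i_off = diode_piecewise(0.5)    # below the knee: no conduction
i_on = diode_piecewise(0.8)     # above the knee: (0.8 - 0.7) / 0.1 = 1 A
```

Unlike the small-signal model, this approximation can be applied to the bias and the signal together, provided the signal does not cross the knee, which is exactly the discontinuity the text warns about.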
Generalized circuit variables consist of four components: scalar current and vector spin current in x, y, and z directions. The voltages and currents each become vector quantities with conductance described as a 4x4 spin conductance matrix. == See also == Bartlett's bisection theorem Kirchhoff's circuit laws Millman's theorem Modified nodal analysis Ohm's law Reciprocity (electrical networks) Tellegen's theorem Symbolic circuit analysis == References == == External links == The Feynman Lectures on Physics Vol. II Ch. 22: AC Circuits
Wikipedia/Network_analysis_(electrical_circuits)
In mathematics, the Mellin transform is an integral transform that may be regarded as the multiplicative version of the two-sided Laplace transform. This integral transform is closely connected to the theory of Dirichlet series, and is often used in number theory, mathematical statistics, and the theory of asymptotic expansions; it is closely related to the Laplace transform and the Fourier transform, and the theory of the gamma function and allied special functions. The Mellin transform of a complex-valued function f defined on R + × = ( 0 , ∞ ) {\displaystyle \mathbf {R} _{+}^{\times }=(0,\infty )} is the function M f {\displaystyle {\mathcal {M}}f} of the complex variable s {\displaystyle s} given (where it exists, see Fundamental strip below) by M { f } ( s ) = φ ( s ) = ∫ 0 ∞ x s − 1 f ( x ) d x = ∫ R + × f ( x ) x s d x x . {\displaystyle {\mathcal {M}}\left\{f\right\}(s)=\varphi (s)=\int _{0}^{\infty }x^{s-1}f(x)\,dx=\int _{\mathbf {R} _{+}^{\times }}f(x)x^{s}{\frac {dx}{x}}.} Notice that d x / x {\displaystyle dx/x} is a Haar measure on the multiplicative group R + × {\displaystyle \mathbf {R} _{+}^{\times }} and x ↦ x s {\displaystyle x\mapsto x^{s}} is a (in general non-unitary) multiplicative character. The inverse transform is M − 1 { φ } ( x ) = f ( x ) = 1 2 π i ∫ c − i ∞ c + i ∞ x − s φ ( s ) d s . {\displaystyle {\mathcal {M}}^{-1}\left\{\varphi \right\}(x)=f(x)={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }x^{-s}\varphi (s)\,ds.} The notation implies this is a line integral taken over a vertical line in the complex plane, whose real part c need only satisfy a mild lower bound. Conditions under which this inversion is valid are given in the Mellin inversion theorem. The transform is named after the Finnish mathematician Hjalmar Mellin, who introduced it in a paper published in 1897 in Acta Societatis Scientiarum Fennicae. 
== Relationship to other transforms == The two-sided Laplace transform may be defined in terms of the Mellin transform by B { f } ( s ) = M { f ( − ln ⁡ x ) } ( s ) {\displaystyle {\mathcal {B}}\left\{f\right\}(s)={\mathcal {M}}\left\{f(-\ln x)\right\}(s)} and conversely we can get the Mellin transform from the two-sided Laplace transform by M { f } ( s ) = B { f ( e − x ) } ( s ) . {\displaystyle {\mathcal {M}}\left\{f\right\}(s)={\mathcal {B}}\left\{f(e^{-x})\right\}(s).} The Mellin transform may be thought of as integrating using a kernel xs with respect to the multiplicative Haar measure, d x x {\textstyle {\frac {dx}{x}}} , which is invariant under dilation x ↦ a x {\displaystyle x\mapsto ax} , so that d ( a x ) a x = d x x ; {\textstyle {\frac {d(ax)}{ax}}={\frac {dx}{x}};} the two-sided Laplace transform integrates with respect to the additive Haar measure d x {\displaystyle dx} , which is translation invariant, so that d ( x + a ) = d x . {\displaystyle d(x+a)=dx\,.} We also may define the Fourier transform in terms of the Mellin transform and vice versa; in terms of the Mellin transform and of the two-sided Laplace transform defined above { F f } ( − s ) = { B f } ( − i s ) = { M f ( − ln ⁡ x ) } ( − i s ) . {\displaystyle \left\{{\mathcal {F}}f\right\}(-s)=\left\{{\mathcal {B}}f\right\}(-is)=\left\{{\mathcal {M}}f(-\ln x)\right\}(-is)\ .} We may also reverse the process and obtain { M f } ( s ) = { B f ( e − x ) } ( s ) = { F f ( e − x ) } ( − i s ) . {\displaystyle \left\{{\mathcal {M}}f\right\}(s)=\left\{{\mathcal {B}}f(e^{-x})\right\}(s)=\left\{{\mathcal {F}}f(e^{-x})\right\}(-is)\ .} The Mellin transform also connects the Newton series or binomial transform together with the Poisson generating function, by means of the Poisson–Mellin–Newton cycle. The Mellin transform may also be viewed as the Gelfand transform for the convolution algebra of the locally compact abelian group of positive real numbers with multiplication. 
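The definition can be checked numerically: with the substitution x = e^u used above to relate the Mellin and two-sided Laplace transforms, the Mellin integral becomes an ordinary integral over the whole real line, which a plain trapezoidal rule handles well because the transformed integrand decays doubly exponentially at both ends (the limits and step count below are ad-hoc choices):

```python
from math import exp

def mellin_numeric(f, s, lo=-30.0, hi=10.0, n=4000):
    # M{f}(s) = integral over u of exp(s*u) * f(exp(u)) du, via x = exp(u)
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        u = lo + k * h
        w = 0.5 if k in (0, n) else 1.0   # trapezoidal end weights
        total += w * exp(s * u) * f(exp(u))
    return h * total

# M{e^{-x}}(s) = Gamma(s): check at s = 3, where Gamma(3) = 2! = 2
approx = mellin_numeric(lambda x: exp(-x), 3.0)
```

This reproduces the gamma-function values to high accuracy for real s > 0, anticipating the Cahen-Mellin example in the next section.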
== Examples == === Cahen–Mellin integral === The Mellin transform of the function f ( x ) = e − x {\displaystyle f(x)=e^{-x}} is Γ ( s ) = ∫ 0 ∞ x s − 1 e − x d x {\displaystyle \Gamma (s)=\int _{0}^{\infty }x^{s-1}e^{-x}dx} where Γ ( s ) {\displaystyle \Gamma (s)} is the gamma function. Γ ( s ) {\displaystyle \Gamma (s)} is a meromorphic function with simple poles at s = 0 , − 1 , − 2 , … {\displaystyle s=0,-1,-2,\dots } . Therefore, Γ ( s ) {\displaystyle \Gamma (s)} is analytic for ℜ ( s ) > 0 {\displaystyle \Re (s)>0} . Thus, letting c > 0 {\displaystyle c>0} and z − s {\displaystyle z^{-s}} on the principal branch, the inverse transform gives e − z = 1 2 π i ∫ c − i ∞ c + i ∞ Γ ( s ) z − s d s . {\displaystyle e^{-z}={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }\Gamma (s)z^{-s}\;ds.} This integral is known as the Cahen–Mellin integral. === Polynomial functions === Since ∫ 0 ∞ x a d x {\textstyle \int _{0}^{\infty }x^{a}dx} is not convergent for any value of a ∈ R {\displaystyle a\in \mathbb {R} } , the Mellin transform is not defined for polynomial functions defined on the whole positive real axis. However, by defining it to be zero on different sections of the real axis, it is possible to take the Mellin transform. For example, if f ( x ) = { x a x < 1 , 0 x > 1 , {\displaystyle f(x)={\begin{cases}x^{a}&x<1,\\0&x>1,\end{cases}}} then M f ( s ) = ∫ 0 1 x s − 1 x a d x = ∫ 0 1 x s + a − 1 d x = 1 s + a . {\displaystyle {\mathcal {M}}f(s)=\int _{0}^{1}x^{s-1}x^{a}dx=\int _{0}^{1}x^{s+a-1}dx={\frac {1}{s+a}}.} Thus M f ( s ) {\displaystyle {\mathcal {M}}f(s)} has a simple pole at s = − a {\displaystyle s=-a} and is thus defined for ℜ ( s ) > − a {\displaystyle \Re (s)>-a} . Similarly, if f ( x ) = { 0 x < 1 , x b x > 1 , {\displaystyle f(x)={\begin{cases}0&x<1,\\x^{b}&x>1,\end{cases}}} then M f ( s ) = ∫ 1 ∞ x s − 1 x b d x = ∫ 1 ∞ x s + b − 1 d x = − 1 s + b . 
{\displaystyle {\mathcal {M}}f(s)=\int _{1}^{\infty }x^{s-1}x^{b}dx=\int _{1}^{\infty }x^{s+b-1}dx=-{\frac {1}{s+b}}.} Thus M f ( s ) {\displaystyle {\mathcal {M}}f(s)} has a simple pole at s = − b {\displaystyle s=-b} and is thus defined for ℜ ( s ) < − b {\displaystyle \Re (s)<-b} . === Exponential functions === For p > 0 {\displaystyle p>0} , let f ( x ) = e − p x {\displaystyle f(x)=e^{-px}} . Then M f ( s ) = ∫ 0 ∞ x s e − p x d x x = ∫ 0 ∞ ( u p ) s e − u d u u = 1 p s ∫ 0 ∞ u s e − u d u u = 1 p s Γ ( s ) . {\displaystyle {\mathcal {M}}f(s)=\int _{0}^{\infty }x^{s}e^{-px}{\frac {dx}{x}}=\int _{0}^{\infty }\left({\frac {u}{p}}\right)^{s}e^{-u}{\frac {du}{u}}={\frac {1}{p^{s}}}\int _{0}^{\infty }u^{s}e^{-u}{\frac {du}{u}}={\frac {1}{p^{s}}}\Gamma (s).} === Zeta function === It is possible to use the Mellin transform to produce one of the fundamental formulas for the Riemann zeta function, ζ ( s ) {\displaystyle \zeta (s)} . Let f ( x ) = 1 e x − 1 {\textstyle f(x)={\frac {1}{e^{x}-1}}} . Then M f ( s ) = ∫ 0 ∞ x s − 1 1 e x − 1 d x = ∫ 0 ∞ x s − 1 e − x 1 − e − x d x = ∫ 0 ∞ x s − 1 ∑ n = 1 ∞ e − n x d x = ∑ n = 1 ∞ ∫ 0 ∞ x s e − n x d x x = ∑ n = 1 ∞ 1 n s Γ ( s ) = Γ ( s ) ζ ( s ) . {\displaystyle {\begin{alignedat}{3}{\mathcal {M}}f(s)&=\int _{0}^{\infty }x^{s-1}{\frac {1}{e^{x}-1}}dx&&=\int _{0}^{\infty }x^{s-1}{\frac {e^{-x}}{1-e^{-x}}}dx\\&=\int _{0}^{\infty }x^{s-1}\sum _{n=1}^{\infty }e^{-nx}dx&&=\sum _{n=1}^{\infty }\int _{0}^{\infty }x^{s}e^{-nx}{\frac {dx}{x}}\\&=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}\Gamma (s)=\Gamma (s)\zeta (s).\end{alignedat}}} Thus, ζ ( s ) = 1 Γ ( s ) ∫ 0 ∞ x s − 1 1 e x − 1 d x . {\displaystyle \zeta (s)={\frac {1}{\Gamma (s)}}\int _{0}^{\infty }x^{s-1}{\frac {1}{e^{x}-1}}dx.} === Generalized Gaussian === For p > 0 {\displaystyle p>0} , let f ( x ) = e − x p {\displaystyle f(x)=e^{-x^{p}}} (i.e. f {\displaystyle f} is a generalized Gaussian distribution without the scaling factor.) 
Then M f ( s ) = ∫ 0 ∞ x s − 1 e − x p d x = ∫ 0 ∞ x p − 1 x s − p e − x p d x = ∫ 0 ∞ x p − 1 ( x p ) s / p − 1 e − x p d x = 1 p ∫ 0 ∞ u s / p − 1 e − u d u = Γ ( s / p ) p . {\displaystyle {\begin{alignedat}{3}{\mathcal {M}}f(s)&=\int _{0}^{\infty }x^{s-1}e^{-x^{p}}dx&&=\int _{0}^{\infty }x^{p-1}x^{s-p}e^{-x^{p}}dx\\&=\int _{0}^{\infty }x^{p-1}(x^{p})^{s/p-1}e^{-x^{p}}dx&&={\frac {1}{p}}\int _{0}^{\infty }u^{s/p-1}e^{-u}du\\&={\frac {\Gamma (s/p)}{p}}.\end{alignedat}}} In particular, setting s = 1 {\displaystyle s=1} recovers the following form of the gamma function Γ ( 1 + 1 p ) = ∫ 0 ∞ e − x p d x . {\displaystyle \Gamma \left(1+{\frac {1}{p}}\right)=\int _{0}^{\infty }e^{-x^{p}}dx.} === Power series and Dirichlet series === Generally, assuming the necessary convergence, we can connect Dirichlet series and power series F ( s ) = ∑ n = 1 ∞ a n n s , f ( z ) = ∑ n = 1 ∞ a n z n {\displaystyle F(s)=\sum \limits _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}},\quad f(z)=\sum \limits _{n=1}^{\infty }a_{n}z^{n}} by this formal identity involving the Mellin transform: Γ ( s ) F ( s ) = ∫ 0 ∞ x s − 1 f ( e − x ) d x {\displaystyle \Gamma (s)F(s)=\int _{0}^{\infty }x^{s-1}f(e^{-x})dx} == Fundamental strip == For α , β ∈ R {\displaystyle \alpha ,\beta \in \mathbb {R} } , let the open strip ⟨ α , β ⟩ {\displaystyle \langle \alpha ,\beta \rangle } be defined to be all s ∈ C {\displaystyle s\in \mathbb {C} } such that s = σ + i t {\displaystyle s=\sigma +it} with α < σ < β . {\displaystyle \alpha <\sigma <\beta .} The fundamental strip of M f ( s ) {\displaystyle {\mathcal {M}}f(s)} is defined to be the largest open strip on which it is defined. For example, for a > b {\displaystyle a>b} the fundamental strip of f ( x ) = { x a x < 1 , x b x > 1 , {\displaystyle f(x)={\begin{cases}x^{a}&x<1,\\x^{b}&x>1,\end{cases}}} is ⟨ − a , − b ⟩ . 
{\displaystyle \langle -a,-b\rangle .} As seen by this example, the asymptotics of the function as x → 0 + {\displaystyle x\to 0^{+}} define the left endpoint of its fundamental strip, and the asymptotics of the function as x → + ∞ {\displaystyle x\to +\infty } define its right endpoint. To summarize using Big O notation, if f {\displaystyle f} is O ( x a ) {\displaystyle O(x^{a})} as x → 0 + {\displaystyle x\to 0^{+}} and O ( x b ) {\displaystyle O(x^{b})} as x → + ∞ , {\displaystyle x\to +\infty ,} then M f ( s ) {\displaystyle {\mathcal {M}}f(s)} is defined in the strip ⟨ − a , − b ⟩ . {\displaystyle \langle -a,-b\rangle .} An application of this can be seen in the gamma function, Γ ( s ) . {\displaystyle \Gamma (s).} Since f ( x ) = e − x {\displaystyle f(x)=e^{-x}} is O ( x 0 ) {\displaystyle O(x^{0})} as x → 0 + {\displaystyle x\to 0^{+}} and O ( x k ) {\displaystyle O(x^{k})} for all k , {\displaystyle k,} then Γ ( s ) = M f ( s ) {\displaystyle \Gamma (s)={\mathcal {M}}f(s)} should be defined in the strip ⟨ 0 , + ∞ ⟩ , {\displaystyle \langle 0,+\infty \rangle ,} which confirms that Γ ( s ) {\displaystyle \Gamma (s)} is analytic for ℜ ( s ) > 0. {\displaystyle \Re (s)>0.} == Properties == The properties in this table may be found in Bracewell (2000) and Erdélyi (1954). === Parseval's theorem and Plancherel's theorem === Let f 1 ( x ) {\displaystyle f_{1}(x)} and f 2 ( x ) {\displaystyle f_{2}(x)} be functions with well-defined Mellin transforms f ~ 1 , 2 ( s ) = M { f 1 , 2 } ( s ) {\displaystyle {\tilde {f}}_{1,2}(s)={\mathcal {M}}\{f_{1,2}\}(s)} in the fundamental strips α 1 , 2 < ℜ s < β 1 , 2 {\displaystyle \alpha _{1,2}<\Re s<\beta _{1,2}} . Let c ∈ R {\displaystyle c\in \mathbb {R} } with max ( α 1 , 1 − β 2 ) < c < min ( β 1 , 1 − α 2 ) {\displaystyle \max(\alpha _{1},1-\beta _{2})<c<\min(\beta _{1},1-\alpha _{2})} . 
If the functions x c − 1 / 2 f 1 ( x ) {\displaystyle x^{c-1/2}\,f_{1}(x)} and x 1 / 2 − c f 2 ( x ) {\displaystyle x^{1/2-c}\,f_{2}(x)} are also square-integrable over the interval ( 0 , ∞ ) {\displaystyle (0,\infty )} , then Parseval's formula holds: ∫ 0 ∞ f 1 ( x ) f 2 ( x ) d x = 1 2 π i ∫ c − i ∞ c + i ∞ f 1 ~ ( s ) f 2 ~ ( 1 − s ) d s {\displaystyle \int _{0}^{\infty }f_{1}(x)\,f_{2}(x)\,dx={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }{\tilde {f_{1}}}(s)\,{\tilde {f_{2}}}(1-s)\,ds} The integration on the right hand side is done along the vertical line ℜ r = c {\displaystyle \Re r=c} that lies entirely within the overlap of the (suitable transformed) fundamental strips. We can replace f 2 ( x ) {\displaystyle f_{2}(x)} by f 2 ( x ) x s 0 − 1 {\displaystyle f_{2}(x)\,x^{s_{0}-1}} . This gives following alternative form of the theorem: Let f 1 ( x ) {\displaystyle f_{1}(x)} and f 2 ( x ) {\displaystyle f_{2}(x)} be functions with well-defined Mellin transforms f ~ 1 , 2 ( s ) = M { f 1 , 2 } ( s ) {\displaystyle {\tilde {f}}_{1,2}(s)={\mathcal {M}}\{f_{1,2}\}(s)} in the fundamental strips α 1 , 2 < ℜ s < β 1 , 2 {\displaystyle \alpha _{1,2}<\Re s<\beta _{1,2}} . Let c ∈ R {\displaystyle c\in \mathbb {R} } with α 1 < c < β 1 {\displaystyle \alpha _{1}<c<\beta _{1}} and choose s 0 ∈ C {\displaystyle s_{0}\in \mathbb {C} } with α 2 < ℜ s 0 − c < β 2 {\displaystyle \alpha _{2}<\Re s_{0}-c<\beta _{2}} . 
If the functions x c − 1 / 2 f 1 ( x ) {\displaystyle x^{c-1/2}\,f_{1}(x)} and x s 0 − c − 1 / 2 f 2 ( x ) {\displaystyle x^{s_{0}-c-1/2}\,f_{2}(x)} are also square-integrable over the interval ( 0 , ∞ ) {\displaystyle (0,\infty )} , then we have ∫ 0 ∞ f 1 ( x ) f 2 ( x ) x s 0 − 1 d x = 1 2 π i ∫ c − i ∞ c + i ∞ f 1 ~ ( s ) f 2 ~ ( s 0 − s ) d s {\displaystyle \int _{0}^{\infty }f_{1}(x)\,f_{2}(x)\,x^{s_{0}-1}\,dx={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }{\tilde {f_{1}}}(s)\,{\tilde {f_{2}}}(s_{0}-s)\,ds} We can replace f 2 ( x ) {\displaystyle f_{2}(x)} by f 1 ( x ) ¯ {\displaystyle {\overline {f_{1}(x)}}} . This gives following theorem: Let f ( x ) {\displaystyle f(x)} be a function with well-defined Mellin transform f ~ ( s ) = M { f } ( s ) {\displaystyle {\tilde {f}}(s)={\mathcal {M}}\{f\}(s)} in the fundamental strip α < ℜ s < β {\displaystyle \alpha <\Re s<\beta } . Let c ∈ R {\displaystyle c\in \mathbb {R} } with α < c < β {\displaystyle \alpha <c<\beta } . If the function x c − 1 / 2 f ( x ) {\displaystyle x^{c-1/2}\,f(x)} is also square-integrable over the interval ( 0 , ∞ ) {\displaystyle (0,\infty )} , then Plancherel's theorem holds: ∫ 0 ∞ | f ( x ) | 2 x 2 c − 1 d x = 1 2 π ∫ − ∞ ∞ | f ~ ( c + i t ) | 2 d t {\displaystyle \int _{0}^{\infty }|f(x)|^{2}\,x^{2c-1}dx={\frac {1}{2\pi }}\int _{-\infty }^{\infty }|{\tilde {f}}(c+it)|^{2}\,dt} == As an isometry on L2 spaces == In the study of Hilbert spaces, the Mellin transform is often posed in a slightly different way. For functions in L 2 ( 0 , ∞ ) {\displaystyle L^{2}(0,\infty )} (see Lp space) the fundamental strip always includes 1 2 + i R {\displaystyle {\tfrac {1}{2}}+i\mathbb {R} } , so we may define a linear operator M ~ {\displaystyle {\tilde {\mathcal {M}}}} as M ~ : L 2 ( 0 , ∞ ) → L 2 ( − ∞ , ∞ ) , {\displaystyle {\tilde {\mathcal {M}}}\colon L^{2}(0,\infty )\to L^{2}(-\infty ,\infty ),} { M ~ f } ( s ) := 1 2 π ∫ 0 ∞ x − 1 2 + i s f ( x ) d x . 
{\displaystyle \{{\tilde {\mathcal {M}}}f\}(s):={\frac {1}{\sqrt {2\pi }}}\int _{0}^{\infty }x^{-{\frac {1}{2}}+is}f(x)\,dx.} In other words, we have set { M ~ f } ( s ) := 1 2 π { M f } ( 1 2 + i s ) . {\displaystyle \{{\tilde {\mathcal {M}}}f\}(s):={\tfrac {1}{\sqrt {2\pi }}}\{{\mathcal {M}}f\}({\tfrac {1}{2}}+is).} This operator is usually denoted by just plain M {\displaystyle {\mathcal {M}}} and called the "Mellin transform", but M ~ {\displaystyle {\tilde {\mathcal {M}}}} is used here to distinguish from the definition used elsewhere in this article. The Mellin inversion theorem then shows that M ~ {\displaystyle {\tilde {\mathcal {M}}}} is invertible with inverse M ~ − 1 : L 2 ( − ∞ , ∞ ) → L 2 ( 0 , ∞ ) , {\displaystyle {\tilde {\mathcal {M}}}^{-1}\colon L^{2}(-\infty ,\infty )\to L^{2}(0,\infty ),} { M ~ − 1 φ } ( x ) = 1 2 π ∫ − ∞ ∞ x − 1 2 − i s φ ( s ) d s . {\displaystyle \{{\tilde {\mathcal {M}}}^{-1}\varphi \}(x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }x^{-{\frac {1}{2}}-is}\varphi (s)\,ds.} Furthermore, this operator is an isometry, that is to say ‖ M ~ f ‖ L 2 ( − ∞ , ∞ ) = ‖ f ‖ L 2 ( 0 , ∞ ) {\displaystyle \|{\tilde {\mathcal {M}}}f\|_{L^{2}(-\infty ,\infty )}=\|f\|_{L^{2}(0,\infty )}} for all f ∈ L 2 ( 0 , ∞ ) {\displaystyle f\in L^{2}(0,\infty )} (this explains why the factor of 1 / 2 π {\displaystyle 1/{\sqrt {2\pi }}} was used). == In probability theory == In probability theory, the Mellin transform is an essential tool in studying the distributions of products of random variables. If X is a random variable, and X+ = max{X,0} denotes its positive part, while X − = max{−X,0} is its negative part, then the Mellin transform of X is defined as M X ( s ) = ∫ 0 ∞ x s d F X + ( x ) + γ ∫ 0 ∞ x s d F X − ( x ) , {\displaystyle {\mathcal {M}}_{X}(s)=\int _{0}^{\infty }x^{s}dF_{X^{+}}(x)+\gamma \int _{0}^{\infty }x^{s}dF_{X^{-}}(x),} where γ is a formal indeterminate with γ2 = 1. 
This transform exists for all s in some complex strip D = {s : a ≤ Re(s) ≤ b} , where a ≤ 0 ≤ b. The Mellin transform M X ( i t ) {\displaystyle {\mathcal {M}}_{X}(it)} of a random variable X uniquely determines its distribution function FX. The importance of the Mellin transform in probability theory lies in the fact that if X and Y are two independent random variables, then the Mellin transform of their product is equal to the product of the Mellin transforms of X and Y: M X Y ( s ) = M X ( s ) M Y ( s ) {\displaystyle {\mathcal {M}}_{XY}(s)={\mathcal {M}}_{X}(s){\mathcal {M}}_{Y}(s)} == Problems with Laplacian in cylindrical coordinate system == In the Laplacian in cylindrical coordinates in a generic dimension (orthogonal coordinates with one angle and one radius, and the remaining lengths) there is always a term: 1 r ∂ ∂ r ( r ∂ f ∂ r ) = f r r + f r r {\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)=f_{rr}+{\frac {f_{r}}{r}}} For example, in 2-D polar coordinates the Laplacian is: ∇ 2 f = 1 r ∂ ∂ r ( r ∂ f ∂ r ) + 1 r 2 ∂ 2 f ∂ θ 2 {\displaystyle \nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}}} and in 3-D cylindrical coordinates the Laplacian is, ∇ 2 f = 1 r ∂ ∂ r ( r ∂ f ∂ r ) + 1 r 2 ∂ 2 f ∂ φ 2 + ∂ 2 f ∂ z 2 . 
{\displaystyle \nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.} This term can be treated with the Mellin transform, since: M ( r 2 f r r + r f r , r → s ) = s 2 M ( f , r → s ) = s 2 F {\displaystyle {\mathcal {M}}\left(r^{2}f_{rr}+rf_{r},r\to s\right)=s^{2}{\mathcal {M}}\left(f,r\to s\right)=s^{2}F} For example, the 2-D Laplace equation in polar coordinates is the PDE in two variables: r 2 f r r + r f r + f θ θ = 0 {\displaystyle r^{2}f_{rr}+rf_{r}+f_{\theta \theta }=0} and by multiplication: 1 r ∂ ∂ r ( r ∂ f ∂ r ) + 1 r 2 ∂ 2 f ∂ θ 2 = 0 {\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}}=0} with a Mellin transform on radius becomes the simple harmonic oscillator: F θ θ + s 2 F = 0 {\displaystyle F_{\theta \theta }+s^{2}F=0} with general solution: F ( s , θ ) = C 1 ( s ) cos ⁡ ( s θ ) + C 2 ( s ) sin ⁡ ( s θ ) {\displaystyle F(s,\theta )=C_{1}(s)\cos(s\theta )+C_{2}(s)\sin(s\theta )} Now let's impose for example some simple wedge boundary conditions to the original Laplace equation: f ( r , − θ 0 ) = a ( r ) , f ( r , θ 0 ) = b ( r ) {\displaystyle f(r,-\theta _{0})=a(r),\quad f(r,\theta _{0})=b(r)} these are particularly simple for Mellin transform, becoming: F ( s , − θ 0 ) = A ( s ) , F ( s , θ 0 ) = B ( s ) {\displaystyle F(s,-\theta _{0})=A(s),\quad F(s,\theta _{0})=B(s)} These conditions imposed to the solution particularize it to: F ( s , θ ) = A ( s ) sin ⁡ ( s ( θ 0 − θ ) ) sin ⁡ ( 2 θ 0 s ) + B ( s ) sin ⁡ ( s ( θ 0 + θ ) ) sin ⁡ ( 2 θ 0 s ) {\displaystyle F(s,\theta )=A(s){\frac {\sin(s(\theta _{0}-\theta ))}{\sin(2\theta _{0}s)}}+B(s){\frac {\sin(s(\theta _{0}+\theta ))}{\sin(2\theta _{0}s)}}} Now by the convolution theorem for Mellin transform, the solution in the 
Mellin domain can be inverted: f ( r , θ ) = r m cos ⁡ ( m θ ) 2 θ 0 ∫ 0 ∞ ( a ( x ) x 2 m + 2 r m x m sin ⁡ ( m θ ) + r 2 m + b ( x ) x 2 m − 2 r m x m sin ⁡ ( m θ ) + r 2 m ) x m − 1 d x {\displaystyle f(r,\theta )={\frac {r^{m}\cos(m\theta )}{2\theta _{0}}}\int _{0}^{\infty }\left({\frac {a(x)}{x^{2m}+2r^{m}x^{m}\sin(m\theta )+r^{2m}}}+{\frac {b(x)}{x^{2m}-2r^{m}x^{m}\sin(m\theta )+r^{2m}}}\right)x^{m-1}\,dx} where the following inverse transform relation was employed: M − 1 ( sin ⁡ ( s φ ) sin ⁡ ( 2 θ 0 s ) ; s → r ) = 1 2 θ 0 r m sin ⁡ ( m φ ) 1 + 2 r m cos ⁡ ( m φ ) + r 2 m {\displaystyle {\mathcal {M}}^{-1}\left({\frac {\sin(s\varphi )}{\sin(2\theta _{0}s)}};s\to r\right)={\frac {1}{2\theta _{0}}}{\frac {r^{m}\sin(m\varphi )}{1+2r^{m}\cos(m\varphi )+r^{2m}}}} where m = π 2 θ 0 {\displaystyle m={\frac {\pi }{2\theta _{0}}}} . == Applications == The Mellin transform is widely used in computer science for the analysis of algorithms because of its scale invariance property. The magnitude of the Mellin Transform of a scaled function is identical to the magnitude of the original function for purely imaginary inputs. This scale invariance property is analogous to the Fourier Transform's shift invariance property. The magnitude of a Fourier transform of a time-shifted function is identical to the magnitude of the Fourier transform of the original function. This property is useful in image recognition. An image of an object is easily scaled when the object is moved towards or away from the camera. In quantum mechanics and especially quantum field theory, Fourier space is enormously useful and used extensively because momentum and position are Fourier transforms of each other (for instance, Feynman diagrams are much more easily computed in momentum space). In 2011, A. Liam Fitzpatrick, Jared Kaplan, João Penedones, Suvrat Raju, and Balt C. van Rees showed that Mellin space serves an analogous role in the context of the AdS/CFT correspondence. 
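The scale-invariance property just described can be sanity-checked numerically. The sketch below is our own construction (the function names and quadrature parameters are illustrative, not from any library): it approximates the Mellin transform after the substitution x = e^u, under which M f(s) = ∫ e^{su} f(e^u) du over the real line, and confirms that rescaling f(x) to f(ax) multiplies the transform by a^{−s}, so that along a vertical line Re(s) = c the magnitude changes only by the constant factor a^{−c}, independently of Im(s).

```python
import math

def mellin(f, s_re, s_im, u_min=-40.0, u_max=6.0, n=20000):
    """Midpoint-rule approximation of the Mellin transform at
    s = s_re + i*s_im, via the substitution x = e^u, which turns
    the transform into the integral of e^{s*u} * f(e^u) du."""
    h = (u_max - u_min) / n
    re = im = 0.0
    for k in range(n):
        u = u_min + (k + 0.5) * h
        w = math.exp(s_re * u) * f(math.exp(u))
        re += w * math.cos(s_im * u)
        im += w * math.sin(s_im * u)
    return re * h, im * h

def modulus(f, s_re, s_im):
    re, im = mellin(f, s_re, s_im)
    return math.hypot(re, im)

a, c = 2.0, 0.5   # scaling factor and the vertical line Re(s) = c
for t in (1.0, 3.0):
    base = modulus(lambda x: math.exp(-x), c, t)        # |Gamma(c + it)|
    scaled = modulus(lambda x: math.exp(-a * x), c, t)  # |a^(-s) Gamma(s)|
    print(t, scaled / base)   # ratio ~ a**-c = 0.7071, independent of t
```

For f(x) = e^{−x} the transform is Γ(s), so the unscaled modulus can also be checked against the classical identity |Γ(1/2 + it)|² = π/cosh(πt).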
== Examples == Perron's formula describes the inverse Mellin transform applied to a Dirichlet series. The Mellin transform is used in analysis of the prime-counting function and occurs in discussions of the Riemann zeta function. Inverse Mellin transforms commonly occur in Riesz means. The Mellin transform can be used in audio timescale-pitch modification. == Table of selected Mellin transforms == Among the transform pairs derived above: f ( x ) = e − x {\displaystyle f(x)=e^{-x}} has Mellin transform Γ ( s ) {\displaystyle \Gamma (s)} on the fundamental strip ⟨ 0 , + ∞ ⟩ {\displaystyle \langle 0,+\infty \rangle } ; f ( x ) = e − p x {\displaystyle f(x)=e^{-px}} with p > 0 {\displaystyle p>0} has transform p − s Γ ( s ) {\displaystyle p^{-s}\Gamma (s)} ; f ( x ) = 1 / ( e x − 1 ) {\displaystyle f(x)=1/(e^{x}-1)} has transform Γ ( s ) ζ ( s ) {\displaystyle \Gamma (s)\zeta (s)} ; and f ( x ) = e − x p {\displaystyle f(x)=e^{-x^{p}}} has transform Γ ( s / p ) / p {\displaystyle \Gamma (s/p)/p} . == See also == Mellin inversion theorem Perron's formula Ramanujan's master theorem == Notes == == References == == External links == Philippe Flajolet, Xavier Gourdon, Philippe Dumas, Mellin Transforms and Asymptotics: Harmonic sums. Antonio Gonzáles, Marko Riedel Celebrando un clásico, newsgroup es.ciencia.matematicas Juan Sacerdoti, Funciones Eulerianas (in Spanish). Mellin Transform Methods, Digital Library of Mathematical Functions, 2011-08-29, National Institute of Standards and Technology Antonio De Sena and Davide Rocchesso, A Fast Mellin Transform with Applications in DAFX
Wikipedia/Mellin_transform
The diffusion equation is a parabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles in Brownian motion, resulting from the random movements and collisions of the particles (see Fick's laws of diffusion). In mathematics, it is related to Markov processes, such as random walks, and applied in many other fields, such as materials science, information theory, and biophysics. The diffusion equation is a special case of the convection–diffusion equation when bulk velocity is zero. It is equivalent to the heat equation under some circumstances. == Statement == The equation is usually written as: ∂ ϕ ( r , t ) ∂ t = ∇ ⋅ [ D ( ϕ , r ) ∇ ϕ ( r , t ) ] , {\displaystyle {\frac {\partial \phi (\mathbf {r} ,t)}{\partial t}}=\nabla \cdot {\big [}D(\phi ,\mathbf {r} )\ \nabla \phi (\mathbf {r} ,t){\big ]},} where ϕ(r, t) is the density of the diffusing material at location r and time t and D(ϕ, r) is the collective diffusion coefficient for density ϕ at location r; and ∇ represents the vector differential operator del. If the diffusion coefficient depends on the density then the equation is nonlinear, otherwise it is linear. The equation above applies when the diffusion coefficient is isotropic; in the case of anisotropic diffusion, D is a symmetric positive definite matrix, and the equation is written (for three dimensional diffusion) as: ∂ ϕ ( r , t ) ∂ t = ∑ i = 1 3 ∑ j = 1 3 ∂ ∂ x i [ D i j ( ϕ , r ) ∂ ϕ ( r , t ) ∂ x j ] {\displaystyle {\frac {\partial \phi (\mathbf {r} ,t)}{\partial t}}=\sum _{i=1}^{3}\sum _{j=1}^{3}{\frac {\partial }{\partial x_{i}}}\left[D_{ij}(\phi ,\mathbf {r} ){\frac {\partial \phi (\mathbf {r} ,t)}{\partial x_{j}}}\right]} The diffusion equation has numerous analytic solutions. 
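One such analytic solution, in the one-dimensional constant-coefficient case, is the spreading Gaussian φ(x, t) = (4πDt)^(−1/2) exp(−x²/(4Dt)). The snippet below is a sketch of our own (parameter choices are arbitrary): it checks that this φ satisfies ∂φ/∂t = D ∂²φ/∂x² by comparing centered finite differences of the two sides.

```python
import math

D = 0.7  # an arbitrary constant diffusion coefficient

def phi(x, t):
    # 1-D heat kernel: (4*pi*D*t)^(-1/2) * exp(-x^2 / (4*D*t))
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

def residual(x, t, h=1e-3):
    # centered finite differences for d(phi)/dt and d2(phi)/dx2
    dphi_dt = (phi(x, t + h) - phi(x, t - h)) / (2 * h)
    d2phi_dx2 = (phi(x + h, t) - 2 * phi(x, t) + phi(x - h, t)) / (h * h)
    return dphi_dt - D * d2phi_dx2

worst = max(abs(residual(x, 1.0)) for x in (-2.0, -0.5, 0.0, 1.0, 3.0))
print(worst)  # vanishes up to finite-difference truncation error
```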
If D is constant, then the equation reduces to the following linear differential equation: ∂ ϕ ( r , t ) ∂ t = D ∇ 2 ϕ ( r , t ) , {\displaystyle {\frac {\partial \phi (\mathbf {r} ,t)}{\partial t}}=D\nabla ^{2}\phi (\mathbf {r} ,t),} which is identical to the heat equation. == Historical origin == The particle diffusion equation was originally derived by Adolf Fick in 1855. == Derivation == The diffusion equation can be trivially derived from the continuity equation, which states that a change in density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Effectively, no material is created or destroyed: ∂ ϕ ∂ t + ∇ ⋅ j = 0 , {\displaystyle {\frac {\partial \phi }{\partial t}}+\nabla \cdot \mathbf {j} =0,} where j is the flux of the diffusing material. The diffusion equation can be obtained easily from this when combined with the phenomenological Fick's first law, which states that the flux of the diffusing material in any part of the system is proportional to the local density gradient: j = − D ( ϕ , r ) ∇ ϕ ( r , t ) . {\displaystyle \mathbf {j} =-D(\phi ,\mathbf {r} )\,\nabla \phi (\mathbf {r} ,t).} If drift must be taken into account, the Fokker–Planck equation provides an appropriate generalization. == Discretization == The diffusion equation is continuous in both space and time. One may discretize space, time, or both space and time, which arise in application. Discretizing time alone just corresponds to taking time slices of the continuous system, and no new phenomena arise. In discretizing space alone, the Green's function becomes the discrete Gaussian kernel, rather than the continuous Gaussian kernel. In discretizing both time and space, one obtains the random walk. 
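A minimal explicit (forward-time, centered-space) scheme for the constant-D equation in one dimension makes the space-and-time discretization concrete. The sketch is our own; the stability bound r = DΔt/Δx² ≤ 1/2 and the variance growth law σ²(t) = σ₀² + 2Dt used to check it are standard heat-equation facts rather than statements from this article.

```python
import math

D, dx, dt = 1.0, 0.1, 0.004      # r = D*dt/dx**2 = 0.4 <= 0.5 (stable)
r = D * dt / dx ** 2
L, sigma0 = 20.0, 0.5            # domain [-L/2, L/2], initial Gaussian width
n = int(round(L / dx)) + 1
xs = [-L / 2 + i * dx for i in range(n)]
phi = [math.exp(-x * x / (2 * sigma0 ** 2)) for x in xs]

steps = 500                      # evolve to t = steps * dt = 2.0
for _ in range(steps):
    nxt = phi[:]
    for i in range(1, n - 1):    # endpoints held fixed (tails are negligible)
        nxt[i] = phi[i] + r * (phi[i + 1] - 2.0 * phi[i] + phi[i - 1])
    phi = nxt

t = steps * dt
mass = sum(phi) * dx
var = sum(p * x * x for p, x in zip(phi, xs)) * dx / mass
print(var, sigma0 ** 2 + 2.0 * D * t)   # both close to 4.25
```

Holding the endpoints fixed is an acceptable boundary treatment here only because the Gaussian tails stay negligible on the chosen domain.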
== Discretization in image processing == The product rule is used to rewrite the anisotropic tensor diffusion equation, in standard discretization schemes, because direct discretization of the diffusion equation with only first order spatial central differences leads to checkerboard artifacts. The rewritten diffusion equation used in image filtering: ∂ ϕ ( r , t ) ∂ t = ∇ ⋅ [ D ( ϕ , r ) ] ∇ ϕ ( r , t ) + t r [ D ( ϕ , r ) ( ∇ ∇ T ϕ ( r , t ) ) ] {\displaystyle {\frac {\partial \phi (\mathbf {r} ,t)}{\partial t}}=\nabla \cdot \left[D(\phi ,\mathbf {r} )\right]\nabla \phi (\mathbf {r} ,t)+{\rm {tr}}{\Big [}D(\phi ,\mathbf {r} ){\big (}\nabla \nabla ^{\text{T}}\phi (\mathbf {r} ,t){\big )}{\Big ]}} where "tr" denotes the trace of the 2nd rank tensor, and superscript "T" denotes transpose, in which in image filtering D(ϕ, r) are symmetric matrices constructed from the eigenvectors of the image structure tensors. The spatial derivatives can then be approximated by two first order and a second order central finite differences. The resulting diffusion algorithm can be written as an image convolution with a varying kernel (stencil) of size 3 × 3 in 2D and 3 × 3 × 3 in 3D. == See also == Continuity equation Heat equation Self-similar solutions Reaction-diffusion equation Fokker–Planck equation Fick's laws of diffusion Maxwell–Stefan equation Radiative transfer equation and diffusion theory for photon transport in biological tissue Streamline diffusion Numerical solution of the convection–diffusion equation == References == == Further reading == Mehrer, H.; Stolwijk, A (2009). "Heroes and Highlights in the History of Diffusion". Diffusion Fundamentals. 11: 1–32. doi:10.62721/diffusion-fundamentals.11.453. Carslaw, H. S. and Jaeger, J. C. (1959). Conduction of Heat in Solids Oxford: Clarendon Press Jacobs, M.H. (1935). Diffusion Processes Berlin/Heidelberg: Springer Crank, J. (1956). The Mathematics of Diffusion Oxford: Clarendon Press Mathews, Jon; Walker, Robert L. (1970). 
Mathematical methods of physics (2nd ed.), New York: W. A. Benjamin, ISBN 0-8053-7002-1 Thambynayagam, R. K. M. (2011). The Diffusion Handbook: Applied Solutions for Engineers. McGraw-Hill Ghez, R. (1988). A Primer of Diffusion Problems, Wiley Ghez, R. (2001). Diffusion Phenomena. Long Island, NY, USA: Dover Publications Pekalski, A. (1994). Diffusion Processes: Experiment, Theory, Simulations, Springer Bennett, T. D. (2013). Transport by Advection and Diffusion. John Wiley & Sons Vogel, G. (2019). Adventure Diffusion, Springer Gillespie, D. T.; Seitaridou, E. (2013). Simple Brownian Diffusion, Oxford University Press Nakicenovic, N.; Grübler, A. (1991). Diffusion of Technologies and Social Behavior, Springer Michaud, G.; Alecian, G.; Richer, G. (2013). Atomic Diffusion in Stars, Springer Stroock, D. W.; Varadhan, S. R. S. (2006). Multidimensional Diffusion Processes, Springer Wu, Z.; Yin, J.; Li, H.; Zhao, J. (2001). Nonlinear Diffusion Equations, World Scientific Shewmon, P. (1989). Diffusion in Solids, Wiley Banks, R. B. (2010). Growth and Diffusion Phenomena, Springer Roque-Malherbe, R. M. A. (2007). Adsorption and Diffusion in Nanoporous Materials, CRC Press Cunningham, R. (1980). Diffusion in Gases and Porous Media, Plenum Pasquill, F.; Smith, F. B. (1983). Atmospheric Diffusion, Horwood Ikeda, N.; Watanabe, S. (1981). Stochastic Differential Equations and Diffusion Processes, Elsevier/Academic Press Philibert, J.; Laskar, A. L.; Bocquet, J. L.; Brebec, G.; Monty, C. (1990). Diffusion in Materials, Springer Netherlands Freedman, D. (1983). Brownian Motion and Diffusion, Springer-Verlag New York Nagasawa, M. (1993). Schrödinger Equations and Diffusion Theory, Birkhäuser Burgers, J. M. (1974). The Nonlinear Diffusion Equation: Asymptotic Solutions and Statistical Problems, Springer Netherlands Ito, S. (1992). Diffusion Equations, American Mathematical Society Krylov, N. V. (1994). 
Introduction to the Theory of Diffusion Processes, American Mathematical Society Knight, F.B., (1981). Essentials of Brownian Motion and Diffusion, American Mathematical Society Ibe, O.C., (2013). Elements of random walk and diffusion processes, Wiley Dattagupta, S. (2013). Diffusion: Formalism and Applications, CRC Press == External links == Diffusion Calculator for Impurities & Dopants in Silicon Archived 2009-05-02 at the Wayback Machine A tutorial on the theory behind and solution of the Diffusion Equation. Classical and nanoscale diffusion (with figures and animations)
Wikipedia/Diffusion_equation
Operational calculus, also known as operational analysis, is a technique by which problems in analysis, in particular differential equations, are transformed into algebraic problems, usually the problem of solving a polynomial equation. == History == The idea of representing the processes of calculus, differentiation and integration, as operators has a long history that goes back to Gottfried Wilhelm Leibniz. The mathematician Louis François Antoine Arbogast was one of the first to manipulate these symbols independently of the function to which they were applied. This approach was further developed by Francois-Joseph Servois who developed convenient notations. Servois was followed by a school of British and Irish mathematicians including Charles James Hargreave, George Boole, Bownin, Carmichael, Doukin, Graves, Murphy, William Spottiswoode and Sylvester. Treatises describing the application of operator methods to ordinary and partial differential equations were written by Robert Bell Carmichael in 1855 and by Boole in 1859. This technique was fully developed by the physicist Oliver Heaviside in 1893, in connection with his work in telegraphy. Guided greatly by intuition and his wealth of knowledge on the physics behind his circuit studies, [Heaviside] developed the operational calculus now ascribed to his name. At the time, Heaviside's methods were not rigorous, and his work was not further developed by mathematicians. Operational calculus first found applications in electrical engineering problems, for the calculation of transients in linear circuits after 1910, under the impulse of Ernst Julius Berg, John Renshaw Carson and Vannevar Bush. A rigorous mathematical justification of Heaviside's operational methods came only after the work of Bromwich that related operational calculus with Laplace transformation methods (see the books by Jeffreys, by Carslaw or by MacLachlan for a detailed exposition). 
Other ways of justifying the operational methods of Heaviside were introduced in the mid-1920s using integral equation techniques (as done by Carson) or Fourier transformation (as done by Norbert Wiener). A different approach to operational calculus was developed in the 1930s by Polish mathematician Jan Mikusiński, using algebraic reasoning. Norbert Wiener laid the foundations for operator theory in his review of the existential status of the operational calculus in 1926: The brilliant work of Heaviside is purely heuristic, devoid of even the pretense to mathematical rigor. Its operators apply to electric voltages and currents, which may be discontinuous and certainly need not be analytic. For example, the favorite corpus vile on which he tries out his operators is a function which vanishes to the left of the origin and is 1 to the right. This excludes any direct application of the methods of Pincherle… Although Heaviside’s developments have not been justified by the present state of the purely mathematical theory of operators, there is a great deal of what we may call experimental evidence of their validity, and they are very valuable to the electrical engineers. There are cases, however, where they lead to ambiguous or contradictory results. == Principle == The key element of the operational calculus is to consider differentiation as an operator p = ⁠d/dt⁠ acting on functions. Linear differential equations can then be recast in the form of "functions" F(p) of the operator p acting on the unknown function equaling the known function. Here, F is defining something that takes in an operator p and returns another operator F(p). Solutions are then obtained by making the inverse operator of F act on the known function. The operational calculus generally is typified by two symbols: the operator p, and the unit function 1. The operator in its use probably is more mathematical than physical, the unit function more physical than mathematical. 
The operator p in the Heaviside calculus initially is to represent the time differentiator ⁠d/dt⁠. Further, it is desired for this operator to bear the reciprocal relation such that p−1 denotes the operation of integration. In electrical circuit theory, one is trying to determine the response of an electrical circuit to an impulse. Due to linearity, it is enough to consider a unit step function H(t), such that H(t) = 0 if t < 0, and H(t) = 1 if t > 0. The simplest example of application of the operational calculus is to solve: p y = H(t), which gives y = p − 1 ⁡ H = ∫ 0 t H ( u ) d u = t H ( t ) . {\displaystyle y=\operatorname {p} ^{-1}H=\int _{0}^{t}H(u)\,\mathrm {d} u=t\,H(t).} From this example, one sees that p − 1 {\displaystyle \operatorname {p} ^{-1}} represents integration. Furthermore n iterated integrations is represented by p − n , {\displaystyle \operatorname {p} ^{-n},} so that p − n ⁡ H ( t ) = t n n ! H ( t ) . {\displaystyle \operatorname {p} ^{-n}H(t)={\frac {t^{n}}{n!}}H(t).} Continuing to treat p as if it were a variable, p p − a H ( t ) = 1 1 − a p H ( t ) , {\displaystyle {\frac {\operatorname {p} }{\operatorname {p} -a}}H(t)={\frac {1}{1-{\frac {a}{\operatorname {p} }}}}\,H(t),} which can be rewritten by using a geometric series expansion: 1 1 − a p H ( t ) = ∑ n = 0 ∞ a n p − n ⁡ H ( t ) = ∑ n = 0 ∞ a n t n n ! H ( t ) = e a t H ( t ) . {\displaystyle {\frac {1}{1-{\frac {a}{\operatorname {p} }}}}H(t)=\sum _{n=0}^{\infty }a^{n}\operatorname {p} ^{-n}H(t)=\sum _{n=0}^{\infty }{\frac {a^{n}t^{n}}{n!}}H(t)=e^{at}H(t).} Using partial fraction decomposition, one can define any fraction in the operator p and compute its action on H(t). Moreover, if the function 1/F(p) has a series expansion of the form 1 F ( p ) = ∑ n = 0 ∞ a n p − n , {\displaystyle {\frac {1}{F(\operatorname {p} )}}=\sum _{n=0}^{\infty }a_{n}\operatorname {p} ^{-n},} it is straightforward to find 1 F ( p ) H ( t ) = ∑ n = 0 ∞ a n t n n ! H ( t ) . 
{\displaystyle {\frac {1}{F(\operatorname {p} )}}H(t)=\sum _{n=0}^{\infty }a_{n}{\frac {t^{n}}{n!}}H(t).} Applying this rule, solving any linear differential equation is reduced to a purely algebraic problem. Heaviside went further and defined fractional power of p, thus establishing a connection between operational calculus and fractional calculus. Using the Taylor expansion, one can also verify the Lagrange–Boole translation formula, eap f(t) = f(t + a), so the operational calculus is also applicable to finite-difference equations and to electrical engineering problems with delayed signals. == See also == Calculus of finite differences Umbral calculus == References == == Further sources == == External links == IV Lindell HEAVISIDE OPERATIONAL RULES APPLICABLE TO ELECTROMAGNETIC PROBLEMS Ron Doerfler Heaviside's Calculus Jack Crenshaw essay showing use of operators More On the Rosetta Stone
Wikipedia/Operational_calculus
In mathematics, particularly in the area of functional analysis and topological vector spaces, the vague topology is an example of the weak-* topology which arises in the study of measures on locally compact Hausdorff spaces. Let X {\displaystyle X} be a locally compact Hausdorff space. Let M ( X ) {\displaystyle M(X)} be the space of complex Radon measures on X , {\displaystyle X,} and C 0 ( X ) ∗ {\displaystyle C_{0}(X)^{*}} denote the dual of C 0 ( X ) , {\displaystyle C_{0}(X),} the Banach space of complex continuous functions on X {\displaystyle X} vanishing at infinity equipped with the uniform norm. By the Riesz representation theorem M ( X ) {\displaystyle M(X)} is isometric to C 0 ( X ) ∗ . {\displaystyle C_{0}(X)^{*}.} The isometry maps a measure μ {\displaystyle \mu } to a linear functional I μ ( f ) := ∫ X f d μ . {\displaystyle I_{\mu }(f):=\int _{X}f\,d\mu .} The vague topology is the weak-* topology on C 0 ( X ) ∗ . {\displaystyle C_{0}(X)^{*}.} The corresponding topology on M ( X ) {\displaystyle M(X)} induced by the isometry from C 0 ( X ) ∗ {\displaystyle C_{0}(X)^{*}} is also called the vague topology on M ( X ) . {\displaystyle M(X).} Thus in particular, a sequence of measures ( μ n ) n ∈ N {\displaystyle \left(\mu _{n}\right)_{n\in \mathbb {N} }} converges vaguely to a measure μ {\displaystyle \mu } whenever for all test functions f ∈ C 0 ( X ) , {\displaystyle f\in C_{0}(X),} ∫ X f d μ n → ∫ X f d μ . {\displaystyle \int _{X}fd\mu _{n}\to \int _{X}fd\mu .} It is also not uncommon to define the vague topology by duality with continuous functions having compact support C c ( X ) , {\displaystyle C_{c}(X),} that is, a sequence of measures ( μ n ) n ∈ N {\displaystyle \left(\mu _{n}\right)_{n\in \mathbb {N} }} converges vaguely to a measure μ {\displaystyle \mu } whenever the above convergence holds for all test functions f ∈ C c ( X ) . {\displaystyle f\in C_{c}(X).} This construction gives rise to a different topology. 
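A concrete illustration of vague convergence (our own example, taking X = ℝ): the discrete probability measures μ_n = (1/n) Σ_{k=1}^{n} δ_{k/n} converge vaguely to Lebesgue measure restricted to [0, 1], because for every continuous test function f the pairing ∫ f dμ_n is a Riemann sum.

```python
import math

def pair(f, n):
    # the pairing  integral of f d(mu_n)  for
    # mu_n = (1/n) * (sum of point masses at k/n, k = 1..n)
    return sum(f(k / n) for k in range(1, n + 1)) / n

f = lambda x: math.exp(-x) * math.cos(x)   # a continuous test function
# closed form for the limit: integral of e^(-x) cos(x) over [0, 1]
exact = (math.exp(-1) * (math.sin(1) - math.cos(1)) + 1) / 2

errors = [abs(pair(f, n) - exact) for n in (10, 100, 1000)]
print(errors)   # decreasing toward 0 as n grows
```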
In particular, the topology defined by duality with C c ( X ) {\displaystyle C_{c}(X)} can be metrizable whereas the topology defined by duality with C 0 ( X ) {\displaystyle C_{0}(X)} is not. One application of this is to probability theory: for example, the central limit theorem is essentially a statement that if μ n {\displaystyle \mu _{n}} are the probability measures for certain sums of independent random variables, then μ n {\displaystyle \mu _{n}} converge weakly (and then vaguely) to a normal distribution, that is, the measure μ n {\displaystyle \mu _{n}} is "approximately normal" for large n . {\displaystyle n.} == See also == List of topologies – List of concrete topologies and topological spaces == References == Dieudonné, Jean (1970), "§13.4. The vague topology", Treatise on analysis, vol. II, Academic Press. G. B. Folland, Real Analysis: Modern Techniques and Their Applications, 2nd ed, John Wiley & Sons, Inc., 1999. This article incorporates material from Weak-* topology of the space of Radon measures on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Vague_topology
Angular resolution describes the ability of any image-forming device such as an optical or radio telescope, a microscope, a camera, or an eye, to distinguish small details of an object, thereby making it a major determinant of image resolution. It is used in optics applied to light waves, in antenna theory applied to radio waves, and in acoustics applied to sound waves. The colloquial use of the term "resolution" sometimes causes confusion; when an optical system is said to have a high resolution or high angular resolution, it means that the perceived distance, or actual angular distance, between resolved neighboring objects is small. The value that quantifies this property, θ, which is given by the Rayleigh criterion, is low for a system with a high resolution. The closely related term spatial resolution refers to the precision of a measurement with respect to space, which is directly connected to angular resolution in imaging instruments. The Rayleigh criterion shows that the minimum angular spread that can be resolved by an image-forming system is limited by diffraction to the ratio of the wavelength of the waves to the aperture width. For this reason, high-resolution imaging systems such as astronomical telescopes, long-distance telephoto camera lenses and radio telescopes have large apertures. == Definition of terms == Resolving power is the ability of an imaging device to separate (i.e., to see as distinct) points of an object that are located at a small angular distance, or the power of an optical instrument to separate faraway objects that are close together into individual images. The term resolution or minimum resolvable distance is the minimum distance between distinguishable objects in an image, although the term is loosely used by many users of microscopes and telescopes to describe resolving power. 
As explained below, diffraction-limited resolution is defined by the Rayleigh criterion as the angular separation of two point sources when the maximum of each source lies in the first minimum of the diffraction pattern (Airy disk) of the other. In scientific analysis, in general, the term "resolution" is used to describe the precision with which any instrument measures and records (in an image or spectrum) any variable in the specimen or sample under study. == The Rayleigh criterion == The imaging system's resolution can be limited either by aberration or by diffraction causing blurring of the image. These two phenomena have different origins and are unrelated. Aberrations can be explained by geometrical optics and can in principle be solved by increasing the optical quality of the system. On the other hand, diffraction comes from the wave nature of light and is determined by the finite aperture of the optical elements. The lens' circular aperture is analogous to a two-dimensional version of the single-slit experiment. Light passing through the lens interferes with itself, creating a ring-shaped diffraction pattern, known as the Airy pattern, if the wavefront of the transmitted light is taken to be spherical or plane over the exit aperture. The interplay between diffraction and aberration can be characterised by the point spread function (PSF). The narrower the aperture of a lens, the more likely the PSF is dominated by diffraction. In that case, the angular resolution of an optical system can be estimated (from the diameter of the aperture and the wavelength of the light) by the Rayleigh criterion defined by Lord Rayleigh: two point sources are regarded as just resolved when the principal diffraction maximum (center) of the Airy disk of one image coincides with the first minimum of the Airy disk of the other, as shown in the accompanying photos. 
(In the bottom photo on the right that shows the Rayleigh criterion limit, the central maximum of one point source might look as though it lies outside the first minimum of the other, but examination with a ruler verifies that the two do intersect.) If the distance is greater, the two points are well resolved; if it is smaller, they are regarded as not resolved. Rayleigh defended this criterion for sources of equal strength. Considering diffraction through a circular aperture, this translates into: θ ≈ 1.22 λ D ( considering that sin ⁡ θ ≈ θ ) {\displaystyle \theta \approx 1.22{\frac {\lambda }{D}}\quad ({\text{considering that}}\,\sin \theta \approx \theta )} where θ is the angular resolution (radians), λ is the wavelength of light, and D is the diameter of the lens' aperture. The factor 1.22 is derived from a calculation of the position of the first dark circular ring surrounding the central Airy disc of the diffraction pattern. This number is more precisely 1.21966989... (OEIS: A245461), the first zero of the order-one Bessel function of the first kind J 1 ( x ) {\displaystyle J_{1}(x)} divided by π. The formal Rayleigh criterion is close to the empirical resolution limit found earlier by the English astronomer W. R. Dawes, who tested human observers on close binary stars of equal brightness. The result, θ = 4.56/D, with D in inches and θ in arcseconds, is slightly narrower than calculated with the Rayleigh criterion. A calculation using Airy discs as point spread function shows that at Dawes' limit there is a 5% dip between the two maxima, whereas at Rayleigh's criterion there is a 26.3% dip. Modern image processing techniques, including deconvolution of the point spread function, allow resolution of binaries with even less angular separation. Using a small-angle approximation, the angular resolution may be converted into a spatial resolution, Δℓ, by multiplication of the angle (in radians) with the distance to the object. 
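As a quick numerical check of θ ≈ 1.22 λ/D, here is a short Python sketch; the 2.4 m aperture and 550 nm wavelength are illustrative values of our choosing (roughly a Hubble-class mirror observing green light), not figures taken from the text.

```python
import math

RAD_TO_ARCSEC = 180 * 3600 / math.pi   # about 206265 arcseconds per radian

def rayleigh_angle(wavelength, aperture):
    """Diffraction-limited angular resolution in radians: 1.22 * lambda / D."""
    return 1.22 * wavelength / aperture

theta = rayleigh_angle(550e-9, 2.4)    # 550 nm light, 2.4 m aperture
print(theta * RAD_TO_ARCSEC)           # about 0.058 arcseconds
```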
For a microscope, that distance is close to the focal length f of the objective. For this case, the Rayleigh criterion reads: Δ ℓ ≈ 1.22 f λ D {\displaystyle \Delta \ell \approx 1.22{\frac {f\lambda }{D}}} . This is the radius, in the imaging plane, of the smallest spot to which a collimated beam of light can be focused, which also corresponds to the size of the smallest object that the lens can resolve. The size is proportional to wavelength, λ, and thus, for example, blue light can be focused to a smaller spot than red light. If the lens is focusing a beam of light with a finite extent (e.g., a laser beam), the value of D corresponds to the diameter of the light beam, not the lens. Since the spatial resolution is inversely proportional to D, this leads to the slightly surprising result that a wide beam of light may be focused on a smaller spot than a narrow one. This result is related to the Fourier properties of a lens. A similar result holds for a small sensor imaging a subject at infinity: The angular resolution can be converted to a spatial resolution on the sensor by using f as the distance to the image sensor; this relates the spatial resolution of the image to the f-number, f/#: Δ ℓ ≈ 1.22 f λ D = 1.22 λ ⋅ ( f / # ) {\displaystyle \Delta \ell \approx 1.22{\frac {f\lambda }{D}}=1.22\lambda \cdot (f/\#)} . Since this is the radius of the Airy disk, the resolution is better estimated by the diameter, 2.44 λ ⋅ ( f / # ) {\displaystyle 2.44\lambda \cdot (f/\#)} == Specific cases == === Single telescope === Point-like sources separated by an angle smaller than the angular resolution cannot be resolved. A single optical telescope may have an angular resolution less than one arcsecond, but astronomical seeing and other atmospheric effects make attaining this very hard. 
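The focal-plane relation Δℓ ≈ 1.22 λ·(f/#) is easy to evaluate directly; this sketch uses illustrative values of our choosing (green light focused at f/8) rather than numbers from the text.

```python
def airy_spot_radius(wavelength, f_number):
    """Radius of the diffraction-limited focal spot: 1.22 * lambda * (f/#)."""
    return 1.22 * wavelength * f_number

r = airy_spot_radius(550e-9, 8)   # 550 nm light focused at f/8
print(r * 1e6)                    # spot radius, about 5.4 micrometres
print(2 * r * 1e6)                # Airy-disk diameter, about 10.7 micrometres
```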
The angular resolution R of a telescope can usually be approximated by R = λ D {\displaystyle R={\frac {\lambda }{D}}} where λ is the wavelength of the observed radiation, and D is the diameter of the telescope's objective. The resulting R is in radians. For example, in the case of yellow light with a wavelength of 580 nm, for a resolution of 0.1 arc second, we need D=1.2 m. Sources larger than the angular resolution are called extended sources or diffuse sources, and smaller sources are called point sources. This formula, for light with a wavelength of about 562 nm, is also called the Dawes' limit. === Telescope array === The highest angular resolutions for telescopes can be achieved by arrays of telescopes called astronomical interferometers: These instruments can achieve angular resolutions of 0.001 arcsecond at optical wavelengths, and much higher resolutions at x-ray wavelengths. In order to perform aperture synthesis imaging, a large number of telescopes are required laid out in a 2-dimensional arrangement with a dimensional precision better than a fraction (0.25x) of the required image resolution. The angular resolution R of an interferometer array can usually be approximated by R = λ B {\displaystyle R={\frac {\lambda }{B}}} where λ is the wavelength of the observed radiation, and B is the length of the maximum physical separation of the telescopes in the array, called the baseline. The resulting R is in radians. Sources larger than the angular resolution are called extended sources or diffuse sources, and smaller sources are called point sources. For example, in order to form an image in yellow light with a wavelength of 580 nm, for a resolution of 1 milli-arcsecond, we need telescopes laid out in an array that is 120 m × 120 m with a dimensional precision better than 145 nm. 
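The two worked numbers in this section (D ≈ 1.2 m for 0.1 arcsecond resolution at 580 nm, and a ≈ 120 m baseline for 1 milliarcsecond) follow directly from R = λ/D and R = λ/B, as this short sketch confirms:

```python
import math

ARCSEC = math.pi / (180 * 3600)   # one arcsecond in radians

wavelength = 580e-9               # yellow light, as in the text

# Single telescope: R = lambda / D, so D = lambda / R for R = 0.1 arcsec.
D = wavelength / (0.1 * ARCSEC)
print(D)                          # about 1.2 m

# Interferometer array: R = lambda / B, so B = lambda / R for R = 1 mas.
B = wavelength / (0.001 * ARCSEC)
print(B)                          # about 120 m
```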
=== Microscope === The resolution R (here measured as a distance, not to be confused with the angular resolution of a previous subsection) depends on the angular aperture α {\displaystyle \alpha } : R = 1.22 λ N A condenser + N A objective {\displaystyle R={\frac {1.22\lambda }{\mathrm {NA} _{\text{condenser}}+\mathrm {NA} _{\text{objective}}}}} where N A = n sin ⁡ θ {\displaystyle \mathrm {NA} =n\sin \theta } . Here NA is the numerical aperture, θ {\displaystyle \theta } is half the included angle α {\displaystyle \alpha } of the lens, which depends on the diameter of the lens and its focal length, n {\displaystyle n} is the refractive index of the medium between the lens and the specimen, and λ {\displaystyle \lambda } is the wavelength of light illuminating or emanating from (in the case of fluorescence microscopy) the sample. It follows that the NAs of both the objective and the condenser should be as high as possible for maximum resolution. In the case that both NAs are the same, the equation may be reduced to: R = 0.61 λ N A ≈ λ 2 N A {\displaystyle R={\frac {0.61\lambda }{\mathrm {NA} }}\approx {\frac {\lambda }{2\mathrm {NA} }}} The practical limit for θ {\displaystyle \theta } is about 70°. In a dry objective or condenser, this gives a maximum NA of 0.95. In a high-resolution oil immersion lens, the maximum NA is typically 1.45, when using immersion oil with a refractive index of 1.52. Due to these limitations, the resolution limit of a light microscope using visible light is about 200 nm. Given that the shortest wavelength of visible light is violet ( λ ≈ 400 n m {\displaystyle \lambda \approx 400\,\mathrm {nm} } ), R = 1.22 × 400 nm 1.45 + 0.95 = 203 nm {\displaystyle R={\frac {1.22\times 400\,{\mbox{nm}}}{1.45\ +\ 0.95}}=203\,{\mbox{nm}}} which is near 200 nm. 
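The microscope figure above can be reproduced directly from R = 1.22 λ/(NA_condenser + NA_objective); a minimal sketch:

```python
def microscope_resolution(wavelength, na_condenser, na_objective):
    """Rayleigh resolution R = 1.22 * lambda / (NA_cond + NA_obj)."""
    return 1.22 * wavelength / (na_condenser + na_objective)

# Violet light (400 nm), oil-immersion objective NA 1.45, dry condenser NA 0.95,
# reproducing the worked example in the text (wavelength in nm, so R is in nm):
R = microscope_resolution(400, 1.45, 0.95)
print(R)   # about 203 nm
```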
Oil immersion objectives can have practical difficulties due to their shallow depth of field and extremely short working distance, which calls for the use of very thin (0.17 mm) cover slips, or, in an inverted microscope, thin glass-bottomed Petri dishes. However, resolution below this theoretical limit can be achieved using super-resolution microscopy. These include optical near-fields (Near-field scanning optical microscope) or a diffraction technique called 4Pi STED microscopy. Objects as small as 30 nm have been resolved with both techniques. In addition to this Photoactivated localization microscopy can resolve structures of that size, but is also able to give information in z-direction (3D). == List of telescopes and arrays by angular resolution == == See also == Angular diameter Beam diameter Dawes' limit Diffraction-limited system Ground sample distance Image resolution Optical resolution Sparrow's resolution limit Visual acuity == Notes == == References == == External links == "Concepts and Formulas in Microscopy: Resolution" by Michael W. Davidson, Nikon MicroscopyU (website).
Wikipedia/Angular_resolution
In real analysis, a branch of mathematics, Bernstein's theorem states that every real-valued function on the half-line [0, ∞) that is totally monotone is a mixture of exponential functions. In one important special case the mixture is a weighted average, or expected value. Total monotonicity (sometimes also complete monotonicity) of a function f means that f is continuous on [0, ∞), infinitely differentiable on (0, ∞), and satisfies ( − 1 ) n d n d t n f ( t ) ≥ 0 {\displaystyle (-1)^{n}{\frac {d^{n}}{dt^{n}}}f(t)\geq 0} for all nonnegative integers n and for all t > 0. Another convention puts the opposite inequality in the above definition. The "weighted average" statement can be characterized thus: there is a non-negative finite Borel measure on [0, ∞) with cumulative distribution function g such that f ( t ) = ∫ 0 ∞ e − t x d g ( x ) , {\displaystyle f(t)=\int _{0}^{\infty }e^{-tx}\,dg(x),} the integral being a Riemann–Stieltjes integral. In more abstract language, the theorem characterises Laplace transforms of positive Borel measures on [0, ∞). In this form it is known as the Bernstein–Widder theorem, or Hausdorff–Bernstein–Widder theorem. Felix Hausdorff had earlier characterised completely monotone sequences. These are the sequences occurring in the Hausdorff moment problem. == Bernstein functions == Nonnegative functions whose derivative is completely monotone are called Bernstein functions. Every Bernstein function has the Lévy–Khintchine representation: f ( t ) = a + b t + ∫ 0 ∞ ( 1 − e − t x ) μ ( d x ) , {\displaystyle f(t)=a+bt+\int _{0}^{\infty }\left(1-e^{-tx}\right)\mu (dx),} where a , b ≥ 0 {\displaystyle a,b\geq 0} and μ {\displaystyle \mu } is a measure on the positive real half-line such that ∫ 0 ∞ ( 1 ∧ x ) μ ( d x ) < ∞ . {\displaystyle \int _{0}^{\infty }\left(1\wedge x\right)\mu (dx)<\infty .} == See also == Absolutely and completely monotonic functions and sequences == References == S. N. Bernstein (1928). 
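The theorem can be checked symbolically on a small example, assuming SymPy is available. Here f(t) = 1/(1 + t) is the mixture of exponentials with weight e^(−x) dx, and its derivatives alternate in sign exactly as total monotonicity requires:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)

# f(t) = 1/(1+t) arises as the Laplace transform of the measure e^(-x) dx:
f = sp.integrate(sp.exp(-t * x) * sp.exp(-x), (x, 0, sp.oo))
assert sp.simplify(f - 1 / (1 + t)) == 0

# Total monotonicity: (-1)^n f^(n)(t) = n!/(1+t)^(n+1) >= 0 for t > 0.
for n in range(6):
    signed = sp.simplify((-1) ** n * sp.diff(1 / (1 + t), t, n))
    print(n, signed)
```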
"Sur les fonctions absolument monotones". Acta Mathematica. 52: 1–66. doi:10.1007/BF02592679. D. Widder (1941). The Laplace Transform. Princeton University Press. Rene Schilling, Renming Song and Zoran Vondraček (2010). Bernstein functions. De Gruyter.
Wikipedia/Bernstein's_theorem_on_monotone_functions
The Laplace–Stieltjes transform, named for Pierre-Simon Laplace and Thomas Joannes Stieltjes, is an integral transform similar to the Laplace transform. For real-valued functions, it is the Laplace transform of a Stieltjes measure; however, it is often defined for functions with values in a Banach space. It is useful in a number of areas of mathematics, including functional analysis, and certain areas of theoretical and applied probability. == Real-valued functions == The Laplace–Stieltjes transform of a real-valued function g is given by a Lebesgue–Stieltjes integral of the form ∫ e − s x d g ( x ) {\displaystyle \int e^{-sx}\,dg(x)} for s a complex number. As with the usual Laplace transform, one gets a slightly different transform depending on the domain of integration, and for the integral to be defined, one also needs to require that g be of bounded variation on the region of integration. The most common are: The bilateral (or two-sided) Laplace–Stieltjes transform is given by { L ∗ g } ( s ) = ∫ − ∞ ∞ e − s x d g ( x ) . {\displaystyle \{{\mathcal {L}}^{*}g\}(s)=\int _{-\infty }^{\infty }e^{-sx}\,dg(x).} The unilateral (one-sided) Laplace–Stieltjes transform is given by { L ∗ g } ( s ) = lim ε → 0 + ∫ − ε ∞ e − s x d g ( x ) . {\displaystyle \{{\mathcal {L}}^{*}g\}(s)=\lim _{\varepsilon \to 0^{+}}\int _{-\varepsilon }^{\infty }e^{-sx}\,dg(x).} The limit is necessary to ensure the transform captures a possible jump in g(x) at x = 0, as is needed to make sense of the Laplace transform of the Dirac delta function. More general transforms can be considered by integrating over a contour in the complex plane; see Zhavrid 2001. The Laplace–Stieltjes transform in the case of a scalar-valued function is thus seen to be a special case of the Laplace transform of a Stieltjes measure. To wit, L ∗ g = L ( d g ) . {\displaystyle {\mathcal {L}}^{*}g={\mathcal {L}}(dg).} In particular, it shares many properties with the usual Laplace transform. 
For instance, the convolution theorem holds: { L ∗ ( g ∗ h ) } ( s ) = { L ∗ g } ( s ) { L ∗ h } ( s ) . {\displaystyle \{{\mathcal {L}}^{*}(g*h)\}(s)=\{{\mathcal {L}}^{*}g\}(s)\{{\mathcal {L}}^{*}h\}(s).} Often only real values of the variable s are considered, although if the integral exists as a proper Lebesgue integral for a given real value s = σ, then it also exists for all complex s with re(s) ≥ σ. The Laplace–Stieltjes transform appears naturally in the following context. If X is a random variable with cumulative distribution function F, then the Laplace–Stieltjes transform is given by the expectation: { L ∗ F } ( s ) = E [ e − s X ] . {\displaystyle \{{\mathcal {L}}^{*}F\}(s)=\mathrm {E} \left[e^{-sX}\right].} The Laplace-Stieltjes transform of a real random variable's cumulative distribution function is therefore equal to the random variable's moment-generating function, but with the sign of the argument reversed. == Vector measures == Whereas the Laplace–Stieltjes transform of a real-valued function is a special case of the Laplace transform of a measure applied to the associated Stieltjes measure, the conventional Laplace transform cannot handle vector measures: measures with values in a Banach space. These are, however, important in connection with the study of semigroups that arise in partial differential equations, harmonic analysis, and probability theory. The most important semigroups are, respectively, the heat semigroup, Riemann-Liouville semigroup, and Brownian motion and other infinitely divisible processes. Let g be a function from [0,∞) to a Banach space X of strongly bounded variation over every finite interval. This means that, for every fixed subinterval [0,T] one has sup ∑ i ‖ g ( t i ) − g ( t i + 1 ) ‖ X < ∞ {\displaystyle \sup \sum _{i}\left\|g(t_{i})-g(t_{i+1})\right\|_{X}<\infty } where the supremum is taken over all partitions of [0,T] 0 = t 0 < t 1 < ⋯ < t n = T . 
{\displaystyle 0=t_{0}<t_{1}<\cdots <t_{n}=T.} The Stieltjes integral with respect to the vector measure dg ∫ 0 T e − s t d g ( t ) {\displaystyle \int _{0}^{T}e^{-st}dg(t)} is defined as a Riemann–Stieltjes integral. Indeed, if π is the tagged partition of the interval [0,T] with subdivision 0 = t0 ≤ t1 ≤ ... ≤ tn = T, distinguished points τ i ∈ [ t i , t i + 1 ] {\displaystyle \tau _{i}\in [t_{i},t_{i+1}]} and mesh size | π | = max | t i − t i + 1 | , {\displaystyle |\pi |=\max \left|t_{i}-t_{i+1}\right|,} the Riemann–Stieltjes integral is defined as the value of the limit lim | π | → 0 ∑ i = 0 n − 1 e − s τ i [ g ( t i + 1 ) − g ( t i ) ] {\displaystyle \lim _{|\pi |\to 0}\sum _{i=0}^{n-1}e^{-s\tau _{i}}\left[g(t_{i+1})-g(t_{i})\right]} taken in the topology on X. The hypothesis of strong bounded variation guarantees convergence. If in the topology of X the limit lim T → ∞ ∫ 0 T e − s t d g ( t ) {\displaystyle \lim _{T\to \infty }\int _{0}^{T}e^{-st}dg(t)} exists, then the value of this limit is the Laplace–Stieltjes transform of g. == Related transforms == The Laplace–Stieltjes transform is closely related to other integral transforms, including the Fourier transform and the Laplace transform. In particular, note the following: If g has derivative g' then the Laplace–Stieltjes transform of g is the Laplace transform of g′. { L ∗ g } ( s ) = { L g ′ } ( s ) , {\displaystyle \{{\mathcal {L}}^{*}g\}(s)=\{{\mathcal {L}}g'\}(s),} We can obtain the Fourier–Stieltjes transform of g (and, by the above note, the Fourier transform of g′) by { F ∗ g } ( s ) = { L ∗ g } ( i s ) , s ∈ R . {\displaystyle \{{\mathcal {F}}^{*}g\}(s)=\{{\mathcal {L}}^{*}g\}(is),\qquad s\in \mathbb {R} .} == Probability distributions == If X is a continuous random variable with cumulative distribution function F(t) then moments of X can be computed using E ⁡ [ X n ] = ( − 1 ) n d n { L ∗ F } ( s ) d s n | s = 0 . 
{\displaystyle \operatorname {E} [X^{n}]=(-1)^{n}\left.{\frac {d^{n}\{{\mathcal {L}}^{*}F\}(s)}{ds^{n}}}\right|_{s=0}.} === Exponential distribution === For an exponentially distributed random variable Y with rate parameter λ the LST is, Y ~ ( s ) = { L ∗ F Y } ( s ) = ∫ 0 ∞ e − s t λ e − λ t d t = λ λ + s {\displaystyle {\widetilde {Y}}(s)=\{{\mathcal {L}}^{*}F_{Y}\}(s)=\int _{0}^{\infty }e^{-st}\lambda e^{-\lambda t}dt={\frac {\lambda }{\lambda +s}}} from which the first three moments can be computed as 1/λ, 2/λ2 and 6/λ3. === Erlang distribution === For Z with Erlang distribution (which is the sum of n exponential distributions) we use the fact that the probability distribution of the sum of independent random variables is equal to the convolution of their probability distributions. So if Z = Y 1 + ⋯ + Y n {\displaystyle Z=Y_{1}+\cdots +Y_{n}} with the Yi independent then Z ~ ( s ) = Y ~ 1 ( s ) ⋯ Y ~ n ( s ) {\displaystyle {\widetilde {Z}}(s)={\widetilde {Y}}_{1}(s)\cdots {\widetilde {Y}}_{n}(s)} therefore in the case where Z has an Erlang distribution, Z ~ ( s ) = ( λ λ + s ) n . {\displaystyle {\widetilde {Z}}(s)=\left({\frac {\lambda }{\lambda +s}}\right)^{n}.} === Uniform distribution === For U with uniform distribution on the interval (a,b), the transform is given by U ~ ( s ) = ∫ a b e − s t 1 b − a d t = e − s a − e − s b s ( b − a ) . {\displaystyle {\widetilde {U}}(s)=\int _{a}^{b}e^{-st}{\frac {1}{b-a}}dt={\frac {e^{-sa}-e^{-sb}}{s(b-a)}}.} == References == Apostol, T.M. (1957), Mathematical Analysis (1st ed.), Reading, MA: Addison-Wesley; 2nd ed (1974) ISBN 0-201-00288-4. Apostol, T.M. (1997), Modular Functions and Dirichlet Series in Number Theory (2nd ed.), New York: Springer-Verlag, ISBN 0-387-97127-0. Grimmett, G.R.; Stirzaker, D.R. (2001), Probability and Random Processes (3rd ed.), Oxford: Oxford University Press, ISBN 0-19-857222-0. Hille, Einar; Phillips, Ralph S. 
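The exponential-distribution transform and its first three moments can be verified symbolically, assuming SymPy is available:

```python
import sympy as sp

s, lam, t = sp.symbols('s lambda t', positive=True)

# LST of the exponential distribution: integral of e^(-st) * lambda*e^(-lambda*t).
lst = sp.integrate(sp.exp(-s * t) * lam * sp.exp(-lam * t), (t, 0, sp.oo))
assert sp.simplify(lst - lam / (lam + s)) == 0

# Moments via E[X^n] = (-1)^n * d^n/ds^n LST, evaluated at s = 0:
moments = [sp.simplify((-1) ** n * sp.diff(lam / (lam + s), s, n).subs(s, 0))
           for n in (1, 2, 3)]
print(moments)   # [1/lambda, 2/lambda**2, 6/lambda**3]
```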
(1974), Functional analysis and semi-groups, Providence, R.I.: American Mathematical Society, MR 0423094. Zhavrid, N.S. (2001) [1994], "Laplace transform", Encyclopedia of Mathematics, EMS Press.
Wikipedia/Laplace–Stieltjes_transform
In mathematics, a ring is an algebraic structure consisting of a set with two binary operations called addition and multiplication, which obey the same basic laws as addition and multiplication of integers, except that multiplication in a ring does not need to be commutative. Ring elements may be numbers such as integers or complex numbers, but they may also be non-numerical objects such as polynomials, square matrices, functions, and power series. A ring may be defined as a set that is endowed with two binary operations called addition and multiplication such that the ring is an abelian group with respect to the addition operator, and the multiplication operator is associative, is distributive over the addition operation, and has a multiplicative identity element. (Some authors apply the term ring to a further generalization, often called a rng, that omits the requirement for a multiplicative identity, and instead call the structure defined above a ring with identity. See § Variations on terminology.) Whether a ring is commutative (that is, its multiplication is a commutative operation) has profound implications on its properties. Commutative algebra, the theory of commutative rings, is a major branch of ring theory. Its development has been greatly influenced by problems and ideas of algebraic number theory and algebraic geometry. Examples of commutative rings include every field, the integers, the polynomials in one or several variables with coefficients in another ring, the coordinate ring of an affine algebraic variety, and the ring of integers of a number field. Examples of noncommutative rings include the ring of n × n real square matrices with n ≥ 2, group rings in representation theory, operator algebras in functional analysis, rings of differential operators, and cohomology rings in topology. The conceptualization of rings spanned the 1870s to the 1920s, with key contributions by Dedekind, Hilbert, Fraenkel, and Noether. 
Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. They later proved useful in other branches of mathematics such as geometry and analysis. Rings appear in the following chain of class inclusions: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields == Definition == A ring is a set R equipped with two binary operations + (addition) and ⋅ (multiplication) satisfying the following three sets of axioms, called the ring axioms: R is an abelian group under addition, meaning that: (a + b) + c = a + (b + c) for all a, b, c in R (that is, + is associative). a + b = b + a for all a, b in R (that is, + is commutative). There is an element 0 in R such that a + 0 = a for all a in R (that is, 0 is the additive identity). For each a in R there exists −a in R such that a + (−a) = 0 (that is, −a is the additive inverse of a). R is a monoid under multiplication, meaning that: (a · b) · c = a · (b · c) for all a, b, c in R (that is, ⋅ is associative). There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R (that is, 1 is the multiplicative identity). Multiplication is distributive with respect to addition, meaning that: a · (b + c) = (a · b) + (a · c) for all a, b, c in R (left distributivity). (b + c) · a = (b · a) + (c · a) for all a, b, c in R (right distributivity). In notation, the multiplication symbol · is often omitted, in which case a · b is written as ab. === Variations on terminology === In the terminology of this article, a ring is defined to have a multiplicative identity, while a structure with the same axiomatic definition but without the requirement for a multiplicative identity is instead called a "rng" (IPA: ) with a missing "i". 
For example, the set of even integers with the usual + and ⋅ is a rng, but not a ring. As explained in § History below, many authors apply the term "ring" without requiring a multiplicative identity. Although ring addition is commutative, ring multiplication is not required to be commutative: ab need not necessarily equal ba. Rings that also satisfy commutativity for multiplication (such as the ring of integers) are called commutative rings. Books on commutative algebra or algebraic geometry often adopt the convention that ring means commutative ring, to simplify terminology. In a ring, multiplicative inverses are not required to exist. A nonzero commutative ring in which every nonzero element has a multiplicative inverse is called a field. The additive group of a ring is the underlying set equipped with only the operation of addition. Although the definition requires that the additive group be abelian, this can be inferred from the other ring axioms. The proof makes use of the "1", and does not work in a rng. (For a rng, omitting the axiom of commutativity of addition leaves it inferable from the remaining rng assumptions only for elements that are products: ab + cd = cd + ab.) There are a few authors who use the term "ring" to refer to structures in which there is no requirement for multiplication to be associative. For these authors, every algebra is a "ring". == Illustration == The most familiar example of a ring is the set of all integers ⁠ Z , {\displaystyle \mathbb {Z} ,} ⁠ consisting of the numbers … , − 5 , − 4 , − 3 , − 2 , − 1 , 0 , 1 , 2 , 3 , 4 , 5 , … {\displaystyle \dots ,-5,-4,-3,-2,-1,0,1,2,3,4,5,\dots } The axioms of a ring were elaborated as a generalization of familiar properties of addition and multiplication of integers. === Some properties === Some basic properties of a ring follow immediately from the axioms: The additive identity is unique. The additive inverse of each element is unique. The multiplicative identity is unique. 
For any element x in a ring R, one has x0 = 0 = 0x (zero is an absorbing element with respect to multiplication) and (–1)x = –x. If 0 = 1 in a ring R (or more generally, 0 is a unit element), then R has only one element, and is called the zero ring. If a ring R contains the zero ring as a subring, then R itself is the zero ring. The binomial formula holds for any x and y satisfying xy = yx. === Example: Integers modulo 4 === Equip the set Z / 4 Z = { 0 ¯ , 1 ¯ , 2 ¯ , 3 ¯ } {\displaystyle \mathbb {Z} /4\mathbb {Z} =\left\{{\overline {0}},{\overline {1}},{\overline {2}},{\overline {3}}\right\}} with the following operations: The sum x ¯ + y ¯ {\displaystyle {\overline {x}}+{\overline {y}}} in ⁠ Z / 4 Z {\displaystyle \mathbb {Z} /4\mathbb {Z} } ⁠ is the remainder when the integer x + y is divided by 4 (as x + y is always smaller than 8, this remainder is either x + y or x + y − 4). For example, 2 ¯ + 3 ¯ = 1 ¯ {\displaystyle {\overline {2}}+{\overline {3}}={\overline {1}}} and 3 ¯ + 3 ¯ = 2 ¯ . {\displaystyle {\overline {3}}+{\overline {3}}={\overline {2}}.} The product x ¯ ⋅ y ¯ {\displaystyle {\overline {x}}\cdot {\overline {y}}} in ⁠ Z / 4 Z {\displaystyle \mathbb {Z} /4\mathbb {Z} } ⁠ is the remainder when the integer xy is divided by 4. For example, 2 ¯ ⋅ 3 ¯ = 2 ¯ {\displaystyle {\overline {2}}\cdot {\overline {3}}={\overline {2}}} and 3 ¯ ⋅ 3 ¯ = 1 ¯ . {\displaystyle {\overline {3}}\cdot {\overline {3}}={\overline {1}}.} Then ⁠ Z / 4 Z {\displaystyle \mathbb {Z} /4\mathbb {Z} } ⁠ is a ring: each axiom follows from the corresponding axiom for ⁠ Z . {\displaystyle \mathbb {Z} .} ⁠ If x is an integer, the remainder of x when divided by 4 may be considered as an element of ⁠ Z / 4 Z , {\displaystyle \mathbb {Z} /4\mathbb {Z} ,} ⁠ and this element is often denoted by "x mod 4" or x ¯ , {\displaystyle {\overline {x}},} which is consistent with the notation for 0, 1, 2, 3. 
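The ⁠Z/4Z⁠ example above can be checked exhaustively in a few lines of Python; the class below is a minimal sketch of the ring, and the loop brute-forces every ring axiom over all element triples:

```python
from itertools import product

class ZMod4:
    """Integers modulo 4, with the operations defined in the text."""
    def __init__(self, v):
        self.v = v % 4
    def __add__(self, other):
        return ZMod4(self.v + other.v)
    def __neg__(self):
        return ZMod4(-self.v)
    def __mul__(self, other):
        return ZMod4(self.v * other.v)
    def __eq__(self, other):
        return self.v == other.v
    def __repr__(self):
        return f"{self.v} (mod 4)"

elements = [ZMod4(i) for i in range(4)]
zero, one = ZMod4(0), ZMod4(1)

# Brute-force check of the ring axioms over all element triples:
for a, b, c in product(elements, repeat=3):
    assert (a + b) + c == a + (b + c) and a + b == b + a   # abelian group under +
    assert (a * b) * c == a * (b * c)                      # associative multiplication
    assert a * (b + c) == a * b + a * c                    # left distributivity
    assert (b + c) * a == b * a + c * a                    # right distributivity
for a in elements:
    assert a + zero == a and a + (-a) == zero              # additive identity, inverses
    assert a * one == a and one * a == a                   # multiplicative identity

print(ZMod4(2) + ZMod4(3))   # matching 2 + 3 = 1 (mod 4) from the text
print(ZMod4(3) * ZMod4(3))   # matching 3 * 3 = 1 (mod 4) from the text
```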
The additive inverse of any x ¯ {\displaystyle {\overline {x}}} in ⁠ Z / 4 Z {\displaystyle \mathbb {Z} /4\mathbb {Z} } ⁠ is − x ¯ = − x ¯ . {\displaystyle -{\overline {x}}={\overline {-x}}.} For example, − 3 ¯ = − 3 ¯ = 1 ¯ . {\displaystyle -{\overline {3}}={\overline {-3}}={\overline {1}}.} The subset { 0 ¯ , 2 ¯ } {\displaystyle \left\{{\overline {0}},{\overline {2}}\right\}} is closed under addition and multiplication, but it is not a subring of ⁠ Z / 4 Z {\displaystyle \mathbb {Z} /4\mathbb {Z} } ⁠, since a subring must contain the multiplicative identity 1 ¯ . {\displaystyle {\overline {1}}.} === Example: 2-by-2 matrices === The set of 2-by-2 square matrices with entries in a field F is M 2 ⁡ ( F ) = { ( a b c d ) | a , b , c , d ∈ F } . {\displaystyle \operatorname {M} _{2}(F)=\left\{\left.{\begin{pmatrix}a&b\\c&d\end{pmatrix}}\right|\ a,b,c,d\in F\right\}.} With the operations of matrix addition and matrix multiplication, M 2 ⁡ ( F ) {\displaystyle \operatorname {M} _{2}(F)} satisfies the above ring axioms. The element ( 1 0 0 1 ) {\displaystyle \left({\begin{smallmatrix}1&0\\0&1\end{smallmatrix}}\right)} is the multiplicative identity of the ring. If A = ( 0 1 1 0 ) {\displaystyle A=\left({\begin{smallmatrix}0&1\\1&0\end{smallmatrix}}\right)} and B = ( 0 1 0 0 ) , {\displaystyle B=\left({\begin{smallmatrix}0&1\\0&0\end{smallmatrix}}\right),} then A B = ( 0 0 0 1 ) {\displaystyle AB=\left({\begin{smallmatrix}0&0\\0&1\end{smallmatrix}}\right)} while B A = ( 1 0 0 0 ) ; {\displaystyle BA=\left({\begin{smallmatrix}1&0\\0&0\end{smallmatrix}}\right);} this example shows that the ring is noncommutative. More generally, for any ring R, commutative or not, and any nonnegative integer n, the square n × n matrices with entries in R form a ring; see Matrix ring. == History == === Dedekind === The study of rings originated from the theory of polynomial rings and the theory of algebraic integers. In 1871, Richard Dedekind defined the concept of the ring of integers of a number field. 
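The noncommutativity computation with the matrices A and B can be reproduced with plain Python lists (no external library assumed):

```python
def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
             X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
             X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

A = [[0, 1], [1, 0]]
B = [[0, 1], [0, 0]]
print(mat_mul(A, B))   # [[0, 0], [0, 1]]
print(mat_mul(B, A))   # [[1, 0], [0, 0]]  -- AB != BA, so the ring is noncommutative
```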
In this context, he introduced the terms "ideal" (inspired by Ernst Kummer's notion of ideal number) and "module" and studied their properties. Dedekind did not use the term "ring" and did not define the concept of a ring in a general setting. === Hilbert === The term "Zahlring" (number ring) was coined by David Hilbert in 1892 and published in 1897. In 19th century German, the word "Ring" could mean "association", which is still used today in English in a limited sense (for example, spy ring), so if that were the etymology then it would be similar to the way "group" entered mathematics by being a non-technical word for "collection of related things". According to Harvey Cohn, Hilbert used the term for a ring that had the property of "circling directly back" to an element of itself (in the sense of an equivalence). Specifically, in a ring of algebraic integers, all high powers of an algebraic integer can be written as an integral combination of a fixed set of lower powers, and thus the powers "cycle back". For instance, if a3 − 4a + 1 = 0 then: a 3 = 4 a − 1 , a 4 = 4 a 2 − a , a 5 = − a 2 + 16 a − 4 , a 6 = 16 a 2 − 8 a + 1 , a 7 = − 8 a 2 + 65 a − 16 , ⋮ ⋮ {\displaystyle {\begin{aligned}a^{3}&=4a-1,\\a^{4}&=4a^{2}-a,\\a^{5}&=-a^{2}+16a-4,\\a^{6}&=16a^{2}-8a+1,\\a^{7}&=-8a^{2}+65a-16,\\\vdots \ &\qquad \vdots \end{aligned}}} and so on; in general, an is going to be an integral linear combination of 1, a, and a2. === Fraenkel and Noether === The first axiomatic definition of a ring was given by Adolf Fraenkel in 1915, but his axioms were stricter than those in the modern definition. For instance, he required every non-zero-divisor to have a multiplicative inverse. In 1921, Emmy Noether gave a modern axiomatic definition of commutative rings (with and without 1) and developed the foundations of commutative ring theory in her paper Idealtheorie in Ringbereichen. 
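Cohn's illustration above — powers of a with a³ − 4a + 1 = 0 folding back onto 1, a, a² — can be reproduced by direct computation. A small Python sketch (the coefficient encoding is ours, purely illustrative):

```python
# Powers of a, where a^3 - 4a + 1 = 0, written on the basis (1, a, a^2).
# A triple (c0, c1, c2) stands for c0 + c1*a + c2*a^2.
def times_a(c):
    c0, c1, c2 = c
    # a * (c0 + c1*a + c2*a^2) = c0*a + c1*a^2 + c2*a^3,
    # and a^3 = 4a - 1, so the a^3 term folds back into lower powers.
    return (-c2, c0 + 4 * c2, c1)

powers = {1: (0, 1, 0)}          # a itself
for n in range(2, 8):
    powers[n] = times_a(powers[n - 1])

print(powers[3])  # (-1, 4, 0):   a^3 = 4a - 1
print(powers[7])  # (-16, 65, -8): a^7 = -8a^2 + 65a - 16
```

Every power of a stays an integral combination of 1, a, and a², which is exactly the "circling back" behavior the term is said to allude to.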
=== Multiplicative identity and the term "ring" === Fraenkel applied the term "ring" to structures with axioms that included a multiplicative identity, whereas Noether applied it to structures that did not. Most or all books on algebra up to around 1960 followed Noether's convention of not requiring a 1 for a "ring". Starting in the 1960s, it became increasingly common to see books including the existence of 1 in the definition of "ring", especially in advanced books by notable authors such as Artin, Bourbaki, Eisenbud, and Lang. There are also books published as late as 2022 that use the term without the requirement for a 1. Likewise, the Encyclopedia of Mathematics does not require unit elements in rings. In a research article, the authors often specify which definition of ring they use in the beginning of that article. Gardner and Wiegandt assert that, when dealing with several objects in the category of rings (as opposed to working with a fixed ring), if one requires all rings to have a 1, then some consequences include the lack of existence of infinite direct sums of rings, and that proper direct summands of rings are not subrings. They conclude that "in many, maybe most, branches of ring theory the requirement of the existence of a unity element is not sensible, and therefore unacceptable." Poonen makes the counterargument that the natural notion for rings would be the direct product rather than the direct sum. However, his main argument is that rings without a multiplicative identity are not totally associative, in the sense that they do not contain the product of any finite sequence of ring elements, including the empty sequence. 
Authors who follow either convention for the use of the term "ring" may use one of the following terms to refer to objects satisfying the other convention: to include a requirement for a multiplicative identity: "unital ring", "unitary ring", "unit ring", "ring with unity", "ring with identity", "ring with a unit", or "ring with 1". to omit a requirement for a multiplicative identity: "rng" or "pseudo-ring", although the latter may be confusing because it also has other meanings. == Basic examples == === Commutative rings === The prototypical example is the ring of integers with the two operations of addition and multiplication. The rational, real and complex numbers are commutative rings of a type called fields. A unital associative algebra over a commutative ring R is itself a ring as well as an R-module. Some examples: The algebra R[X] of polynomials with coefficients in R. The algebra R [ [ X 1 , … , X n ] ] {\displaystyle R[[X_{1},\dots ,X_{n}]]} of formal power series with coefficients in R. The set of all continuous real-valued functions defined on the real line forms a commutative ⁠ R {\displaystyle \mathbb {R} } ⁠-algebra. The operations are pointwise addition and multiplication of functions. Let X be a set, and let R be a ring. Then the set of all functions from X to R forms a ring, which is commutative if R is commutative. The ring of quadratic integers, the integral closure of ⁠ Z {\displaystyle \mathbb {Z} } ⁠ in a quadratic extension of ⁠ Q . {\displaystyle \mathbb {Q} .} ⁠ It is a subring of the ring of all algebraic integers. The ring of profinite integers ⁠ Z ^ , {\displaystyle {\widehat {\mathbb {Z} }},} ⁠ the (infinite) product of the rings of p-adic integers ⁠ Z p {\displaystyle \mathbb {Z} _{p}} ⁠ over all prime numbers p. The Hecke ring, the ring generated by Hecke operators. If S is a set, then the power set of S becomes a ring if we define addition to be the symmetric difference of sets and multiplication to be intersection. 
This is an example of a Boolean ring. === Noncommutative rings === For any ring R and any natural number n, the set of all square n-by-n matrices with entries from R, forms a ring with matrix addition and matrix multiplication as operations. For n = 1, this matrix ring is isomorphic to R itself. For n > 1 (and R not the zero ring), this matrix ring is noncommutative. If G is an abelian group, then the endomorphisms of G form a ring, the endomorphism ring End(G) of G. The operations in this ring are addition and composition of endomorphisms. More generally, if V is a left module over a ring R, then the set of all R-linear maps forms a ring, also called the endomorphism ring and denoted by EndR(V). The endomorphism ring of an elliptic curve. It is a commutative ring if the elliptic curve is defined over a field of characteristic zero. If G is a group and R is a ring, the group ring of G over R is a free module over R having G as basis. Multiplication is defined by the rules that the elements of G commute with the elements of R and multiply together as they do in the group G. The ring of differential operators (depending on the context). In fact, many rings that appear in analysis are noncommutative. For example, most Banach algebras are noncommutative. === Non-rings === The set of natural numbers ⁠ N {\displaystyle \mathbb {N} } ⁠ with the usual operations is not a ring, since ⁠ ( N , + ) {\displaystyle (\mathbb {N} ,+)} ⁠ is not even a group (not all the elements are invertible with respect to addition – for instance, there is no natural number which can be added to 3 to get 0 as a result). There is a natural way to enlarge it to a ring, by including negative numbers to produce the ring of integers ⁠ Z . {\displaystyle \mathbb {Z} .} ⁠ The natural numbers (including 0) form an algebraic structure known as a semiring (which has all of the axioms of a ring excluding that of an additive inverse). 
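The power-set ring described above (addition as symmetric difference, multiplication as intersection) can be exercised directly; a brief sketch using Python's built-in set operators:

```python
# Power set of a set as a ring: addition is symmetric difference,
# multiplication is intersection.
A = frozenset({1, 2})
B = frozenset({2, 3})

add = lambda X, Y: X ^ Y   # symmetric difference
mul = lambda X, Y: X & Y   # intersection

print(sorted(add(A, B)))   # [1, 3]
print(sorted(mul(A, B)))   # [2]

# The empty set is the additive identity, and every element is its
# own additive inverse: A ^ A is empty.
assert add(A, A) == frozenset()
# Idempotence, the defining property of a Boolean ring: A * A = A.
assert mul(A, A) == A
```

The idempotence check in the last line is what makes this a Boolean ring rather than merely a commutative ring.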
Let R be the set of all continuous functions on the real line that vanish outside a bounded interval that depends on the function, with addition as usual but with multiplication defined as convolution: ( f ∗ g ) ( x ) = ∫ − ∞ ∞ f ( y ) g ( x − y ) d y . {\displaystyle (f*g)(x)=\int _{-\infty }^{\infty }f(y)g(x-y)\,dy.} Then R is a rng, but not a ring: the Dirac delta function has the property of a multiplicative identity, but it is not a function and hence is not an element of R. == Basic concepts == === Products and powers === For each nonnegative integer n, given a sequence ⁠ ( a 1 , … , a n ) {\displaystyle (a_{1},\dots ,a_{n})} ⁠ of n elements of R, one can define the product ⁠ P n = ∏ i = 1 n a i {\displaystyle \textstyle P_{n}=\prod _{i=1}^{n}a_{i}} ⁠ recursively: let P0 = 1 and let Pm = Pm−1am for 1 ≤ m ≤ n. As a special case, one can define nonnegative integer powers of an element a of a ring: a0 = 1 and an = an−1a for n ≥ 1. Then am+n = aman for all m, n ≥ 0. === Elements in a ring === A left zero divisor of a ring R is an element a in the ring such that there exists a nonzero element b of R such that ab = 0. A right zero divisor is defined similarly. A nilpotent element is an element a such that an = 0 for some n > 0. One example of a nilpotent element is a nilpotent matrix. A nilpotent element in a nonzero ring is necessarily a zero divisor. An idempotent e {\displaystyle e} is an element such that e2 = e. One example of an idempotent element is a projection in linear algebra. A unit is an element a having a multiplicative inverse; in this case the inverse is unique, and is denoted by a–1. The set of units of a ring is a group under ring multiplication; this group is denoted by R× or R* or U(R). For example, if R is the ring of all square matrices of size n over a field, then R× consists of the set of all invertible matrices of size n, and is called the general linear group. 
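For a finite ring such as Z/12Z, the unit group R× can be enumerated directly; a minimal sketch (the helper names are ours):

```python
from math import gcd

# Units of Z/n: residues with a multiplicative inverse modulo n,
# i.e. residues coprime to n.
def units(n):
    return [a for a in range(n) if gcd(a, n) == 1]

print(units(12))  # [1, 5, 7, 11]

# Each unit has a unique inverse, found here by brute force:
def inverse(a, n):
    return next(b for b in range(n) if (a * b) % n == 1)

assert inverse(5, 12) == 5   # 5 * 5 = 25 = 1 (mod 12)
assert inverse(7, 12) == 7   # 7 * 7 = 49 = 1 (mod 12)
```

Note that in U(Z/12Z) every element is its own inverse, so this unit group is isomorphic to the Klein four-group.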
=== Subring === A subset S of R is called a subring if any one of the following equivalent conditions holds: the addition and multiplication of R restrict to give operations S × S → S making S a ring with the same multiplicative identity as R. 1 ∈ S; and for all x, y in S, the elements xy, x + y, and −x are in S. S can be equipped with operations making it a ring such that the inclusion map S → R is a ring homomorphism. For example, the ring ⁠ Z {\displaystyle \mathbb {Z} } ⁠ of integers is a subring of the field of real numbers and also a subring of the ring of polynomials ⁠ Z [ X ] {\displaystyle \mathbb {Z} [X]} ⁠ (in both cases, ⁠ Z {\displaystyle \mathbb {Z} } ⁠ contains 1, which is the multiplicative identity of the larger rings). On the other hand, the subset of even integers ⁠ 2 Z {\displaystyle 2\mathbb {Z} } ⁠ does not contain the identity element 1 and thus does not qualify as a subring of ⁠ Z ; {\displaystyle \mathbb {Z} ;} ⁠ one could call ⁠ 2 Z {\displaystyle 2\mathbb {Z} } ⁠ a subrng, however. An intersection of subrings is a subring. Given a subset E of R, the smallest subring of R containing E is the intersection of all subrings of R containing E, and it is called the subring generated by E. For a ring R, the smallest subring of R is called the characteristic subring of R. It can be generated through addition of copies of 1 and −1. It is possible that n · 1 = 1 + 1 + ... + 1 (n times) can be zero. If n is the smallest positive integer such that this occurs, then n is called the characteristic of R. In some rings, n · 1 is never zero for any positive integer n, and those rings are said to have characteristic zero. Given a ring R, let Z(R) denote the set of all elements x in R such that x commutes with every element in R: xy = yx for any y in R. Then Z(R) is a subring of R, called the center of R. More generally, given a subset X of R, let S be the set of all elements in R that commute with every element in X. 
Then S is a subring of R, called the centralizer (or commutant) of X. The center is the centralizer of the entire ring R. Elements or subsets of the center are said to be central in R; they (each individually) generate a subring of the center. === Ideal === Let R be a ring. A left ideal of R is a nonempty subset I of R such that for any x, y in I and r in R, the elements x + y and rx are in I. If R I denotes the R-span of I, that is, the set of finite sums r 1 x 1 + ⋯ + r n x n such that r i ∈ R and x i ∈ I , {\displaystyle r_{1}x_{1}+\cdots +r_{n}x_{n}\quad {\textrm {such}}\;{\textrm {that}}\;r_{i}\in R\;{\textrm {and}}\;x_{i}\in I,} then I is a left ideal if RI ⊆ I. Similarly, a right ideal is a subset I such that IR ⊆ I. A subset I is said to be a two-sided ideal or simply ideal if it is both a left ideal and right ideal. A one-sided or two-sided ideal is then an additive subgroup of R. If E is a subset of R, then RE is a left ideal, called the left ideal generated by E; it is the smallest left ideal containing E. Similarly, one can consider the right ideal or the two-sided ideal generated by a subset of R. If x is in R, then Rx and xR are left ideals and right ideals, respectively; they are called the principal left ideals and right ideals generated by x. The principal ideal RxR is written as (x). For example, the set of all positive and negative multiples of 2 along with 0 form an ideal of the integers, and this ideal is generated by the integer 2. In fact, every ideal of the ring of integers is principal. Like a group, a ring is said to be simple if it is nonzero and it has no proper nonzero two-sided ideals. A commutative simple ring is precisely a field. Rings are often studied with special conditions set upon their ideals. For example, a ring in which there is no strictly increasing infinite chain of left ideals is called a left Noetherian ring. A ring in which there is no strictly decreasing infinite chain of left ideals is called a left Artinian ring. 
It is a somewhat surprising fact that a left Artinian ring is left Noetherian (the Hopkins–Levitzki theorem). The integers, however, form a Noetherian ring which is not Artinian. For commutative rings, the ideals generalize the classical notion of divisibility and decomposition of an integer into prime numbers in algebra. A proper ideal P of R is called a prime ideal if for any elements x , y ∈ R {\displaystyle x,y\in R} we have that x y ∈ P {\displaystyle xy\in P} implies either x ∈ P {\displaystyle x\in P} or y ∈ P . {\displaystyle y\in P.} Equivalently, P is prime if for any ideals I, J we have that IJ ⊆ P implies either I ⊆ P or J ⊆ P. This latter formulation illustrates the idea of ideals as generalizations of elements. === Homomorphism === A homomorphism from a ring (R, +, ⋅) to a ring (S, ‡, ∗) is a function f from R to S that preserves the ring operations; namely, such that, for all a, b in R the following identities hold: f ( a + b ) = f ( a ) ‡ f ( b ) f ( a ⋅ b ) = f ( a ) ∗ f ( b ) f ( 1 R ) = 1 S {\displaystyle {\begin{aligned}&f(a+b)=f(a)\ddagger f(b)\\&f(a\cdot b)=f(a)*f(b)\\&f(1_{R})=1_{S}\end{aligned}}} If one is working with rngs, then the third condition is dropped. A ring homomorphism f is said to be an isomorphism if there exists an inverse homomorphism to f (that is, a ring homomorphism that is an inverse function), or equivalently if it is bijective. Examples: The function that maps each integer x to its remainder modulo 4 (a number in {0, 1, 2, 3}) is a homomorphism from the ring ⁠ Z {\displaystyle \mathbb {Z} } ⁠ to the quotient ring ⁠ Z / 4 Z {\displaystyle \mathbb {Z} /4\mathbb {Z} } ⁠ ("quotient ring" is defined below). If u is a unit element in a ring R, then R → R , x ↦ u x u − 1 {\displaystyle R\to R,x\mapsto uxu^{-1}} is a ring homomorphism, called an inner automorphism of R. Let R be a commutative ring of prime characteristic p. Then x ↦ xp is a ring endomorphism of R called the Frobenius homomorphism. 
The Galois group of a field extension L / K is the set of all automorphisms of L whose restrictions to K are the identity. For any ring R, there are a unique ring homomorphism ⁠ Z ↦ R {\displaystyle \mathbb {Z} \mapsto R} ⁠ and a unique ring homomorphism R → 0. An epimorphism (that is, right-cancelable morphism) of rings need not be surjective. For example, the unique map ⁠ Z → Q {\displaystyle \mathbb {Z} \to \mathbb {Q} } ⁠ is an epimorphism. An algebra homomorphism from a k-algebra to the endomorphism algebra of a vector space over k is called a representation of the algebra. Given a ring homomorphism f : R → S, the set of all elements mapped to 0 by f is called the kernel of f. The kernel is a two-sided ideal of R. The image of f, on the other hand, is not always an ideal, but it is always a subring of S. To give a ring homomorphism from a commutative ring R to a ring A with image contained in the center of A is the same as to give a structure of an algebra over R to A (which in particular gives a structure of an A-module). === Quotient ring === The notion of quotient ring is analogous to the notion of a quotient group. Given a ring (R, +, ⋅) and a two-sided ideal I of (R, +, ⋅), view I as subgroup of (R, +); then the quotient ring R / I is the set of cosets of I together with the operations ( a + I ) + ( b + I ) = ( a + b ) + I , ( a + I ) ( b + I ) = ( a b ) + I . {\displaystyle {\begin{aligned}&(a+I)+(b+I)=(a+b)+I,\\&(a+I)(b+I)=(ab)+I.\end{aligned}}} for all a, b in R. The ring R / I is also called a factor ring. As with a quotient group, there is a canonical homomorphism p : R → R / I, given by x ↦ x + I. It is surjective and satisfies the following universal property: If f : R → S is a ring homomorphism such that f(I) = 0, then there is a unique homomorphism f ¯ : R / I → S {\displaystyle {\overline {f}}:R/I\to S} such that f = f ¯ ∘ p . 
{\displaystyle f={\overline {f}}\circ p.} For any ring homomorphism f : R → S, invoking the universal property with I = ker f produces a homomorphism f ¯ : R / ker ⁡ f → S {\displaystyle {\overline {f}}:R/\ker f\to S} that gives an isomorphism from R / ker f to the image of f. == Modules == The concept of a module over a ring generalizes the concept of a vector space (over a field) by generalizing from multiplication of vectors with elements of a field (scalar multiplication) to multiplication with elements of a ring. More precisely, given a ring R, an R-module M is an abelian group equipped with an operation R × M → M (associating an element of M to every pair of an element of R and an element of M) that satisfies certain axioms. This operation is commonly denoted by juxtaposition and called multiplication. The axioms of modules are the following: for all a, b in R and all x, y in M, M is an abelian group under addition. a ( x + y ) = a x + a y ( a + b ) x = a x + b x 1 x = x ( a b ) x = a ( b x ) {\displaystyle {\begin{aligned}&a(x+y)=ax+ay\\&(a+b)x=ax+bx\\&1x=x\\&(ab)x=a(bx)\end{aligned}}} When the ring is noncommutative these axioms define left modules; right modules are defined similarly by writing xa instead of ax. This is not only a change of notation, as the last axiom of right modules (that is x(ab) = (xa)b) becomes (ab)x = b(ax), if left multiplication (by ring elements) is used for a right module. Basic examples of modules are ideals, including the ring itself. Although similarly defined, the theory of modules is much more complicated than that of vector space, mainly, because, unlike vector spaces, modules are not characterized (up to an isomorphism) by a single invariant (the dimension of a vector space). In particular, not all modules have a basis. The axioms of modules imply that (−1)x = −x, where the first minus denotes the additive inverse in the ring and the second minus the additive inverse in the module. 
Using this and denoting repeated addition by a multiplication by a positive integer allows identifying abelian groups with modules over the ring of integers. Any ring homomorphism induces a structure of a module: if f : R → S is a ring homomorphism, then S is a left module over R by the multiplication: rs = f(r)s. If R is commutative or if f(R) is contained in the center of S, the ring S is called an R-algebra. In particular, every ring is an algebra over the integers.

== Constructions ==
=== Direct product ===
Let R and S be rings. Then the product R × S can be equipped with the following natural ring structure: {\displaystyle {\begin{aligned}&(r_{1},s_{1})+(r_{2},s_{2})=(r_{1}+r_{2},s_{1}+s_{2})\\&(r_{1},s_{1})\cdot (r_{2},s_{2})=(r_{1}\cdot r_{2},s_{1}\cdot s_{2})\end{aligned}}} for all r1, r2 in R and s1, s2 in S. The ring R × S with the above operations of addition and multiplication and the multiplicative identity (1, 1) is called the direct product of R with S. The same construction also works for an arbitrary family of rings: if Ri are rings indexed by a set I, then {\textstyle \prod _{i\in I}R_{i}} is a ring with componentwise addition and multiplication. Let R be a commutative ring and {\displaystyle {\mathfrak {a}}_{1},\cdots ,{\mathfrak {a}}_{n}} be ideals such that {\displaystyle {\mathfrak {a}}_{i}+{\mathfrak {a}}_{j}=(1)} whenever i ≠ j. Then the Chinese remainder theorem says there is a canonical ring isomorphism: {\displaystyle R/{\textstyle \bigcap _{i=1}^{n}{{\mathfrak {a}}_{i}}}\simeq \prod _{i=1}^{n}{R/{\mathfrak {a}}_{i}},\qquad x{\bmod {\textstyle \bigcap _{i=1}^{n}{\mathfrak {a}}_{i}}}\mapsto (x{\bmod {\mathfrak {a}}}_{1},\ldots ,x{\bmod {\mathfrak {a}}}_{n}).}

A "finite" direct product may also be viewed as a direct sum of ideals. Namely, let {\displaystyle R_{i},1\leq i\leq n} be rings, {\textstyle R_{i}\to R=\prod R_{i}} the inclusions with the images {\displaystyle {\mathfrak {a}}_{i}} (in particular {\displaystyle {\mathfrak {a}}_{i}} are rings though not subrings). Then {\displaystyle {\mathfrak {a}}_{i}} are ideals of R and {\displaystyle R={\mathfrak {a}}_{1}\oplus \cdots \oplus {\mathfrak {a}}_{n},\quad {\mathfrak {a}}_{i}{\mathfrak {a}}_{j}=0,i\neq j,\quad {\mathfrak {a}}_{i}^{2}\subseteq {\mathfrak {a}}_{i}} as a direct sum of abelian groups (because for abelian groups finite products are the same as direct sums). Clearly the direct sum of such ideals also defines a product of rings that is isomorphic to R. Equivalently, the above can be done through central idempotents. Assume that R has the above decomposition. Then we can write {\displaystyle 1=e_{1}+\cdots +e_{n},\quad e_{i}\in {\mathfrak {a}}_{i}.} By the conditions on {\displaystyle {\mathfrak {a}}_{i},} one has that the ei are central idempotents and eiej = 0, i ≠ j (orthogonal). Again, one can reverse the construction. Namely, if one is given a partition of 1 in orthogonal central idempotents, then let {\displaystyle {\mathfrak {a}}_{i}=Re_{i},} which are two-sided ideals. If each ei is not a sum of orthogonal central idempotents, then their direct sum is isomorphic to R. An important application of an infinite direct product is the construction of a projective limit of rings (see below).
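The Chinese remainder theorem can be checked exhaustively in the smallest interesting case, R = Z with the ideals (2) and (3); a brief sketch:

```python
# Chinese remainder theorem for R = Z, ideals (2) and (3):
# Z/6Z is isomorphic to Z/2Z x Z/3Z via x -> (x mod 2, x mod 3).
iso = {x: (x % 2, x % 3) for x in range(6)}

# The map is a bijection (6 residues hit 6 distinct pairs) ...
assert len(set(iso.values())) == 6

# ... and a ring homomorphism: it respects + and * componentwise.
for x in range(6):
    for y in range(6):
        sx, tx = iso[x]
        sy, ty = iso[y]
        assert iso[(x + y) % 6] == ((sx + sy) % 2, (tx + ty) % 3)
        assert iso[(x * y) % 6] == ((sx * sy) % 2, (tx * ty) % 3)

print("Z/6Z ~ Z/2Z x Z/3Z verified")
```

Here (2) + (3) = (1) in Z because 2 and 3 are coprime, which is exactly the comaximality hypothesis of the theorem.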
Another application is a restricted product of a family of rings (cf. adele ring). === Polynomial ring === Given a symbol t (called a variable) and a commutative ring R, the set of polynomials R [ t ] = { a n t n + a n − 1 t n − 1 + ⋯ + a 1 t + a 0 ∣ n ≥ 0 , a j ∈ R } {\displaystyle R[t]=\left\{a_{n}t^{n}+a_{n-1}t^{n-1}+\dots +a_{1}t+a_{0}\mid n\geq 0,a_{j}\in R\right\}} forms a commutative ring with the usual addition and multiplication, containing R as a subring. It is called the polynomial ring over R. More generally, the set R [ t 1 , … , t n ] {\displaystyle R\left[t_{1},\ldots ,t_{n}\right]} of all polynomials in variables t 1 , … , t n {\displaystyle t_{1},\ldots ,t_{n}} forms a commutative ring, containing R [ t i ] {\displaystyle R\left[t_{i}\right]} as subrings. If R is an integral domain, then R[t] is also an integral domain; its field of fractions is the field of rational functions. If R is a Noetherian ring, then R[t] is a Noetherian ring. If R is a unique factorization domain, then R[t] is a unique factorization domain. Finally, R is a field if and only if R[t] is a principal ideal domain. Let R ⊆ S {\displaystyle R\subseteq S} be commutative rings. Given an element x of S, one can consider the ring homomorphism R [ t ] → S , f ↦ f ( x ) {\displaystyle R[t]\to S,\quad f\mapsto f(x)} (that is, the substitution). If S = R[t] and x = t, then f(t) = f. Because of this, the polynomial f is often also denoted by f(t). The image of the map ⁠ f ↦ f ( x ) {\displaystyle f\mapsto f(x)} ⁠ is denoted by R[x]; it is the same thing as the subring of S generated by R and x. Example: k [ t 2 , t 3 ] {\displaystyle k\left[t^{2},t^{3}\right]} denotes the image of the homomorphism k [ x , y ] → k [ t ] , f ↦ f ( t 2 , t 3 ) . {\displaystyle k[x,y]\to k[t],\,f\mapsto f\left(t^{2},t^{3}\right).} In other words, it is the subalgebra of k[t] generated by t2 and t3. Example: let f be a polynomial in one variable, that is, an element in a polynomial ring R. 
Then f(x + h) is an element in R[h] and f(x + h) − f(x) is divisible by h in that ring. The result of substituting zero for h in (f(x + h) − f(x))/h is f′(x), the derivative of f at x. The substitution is a special case of the universal property of a polynomial ring. The property states: given a ring homomorphism {\displaystyle \phi :R\to S} and an element x in S, there exists a unique ring homomorphism {\displaystyle {\overline {\phi }}:R[t]\to S} such that {\displaystyle {\overline {\phi }}(t)=x} and {\displaystyle {\overline {\phi }}} restricts to ϕ. For example, choosing a basis, a symmetric algebra satisfies the universal property and so is a polynomial ring. To give an example, let S be the ring of all functions from R to itself; the addition and the multiplication are those of functions. Let x be the identity function. Each r in R defines a constant function, giving rise to the homomorphism R → S. The universal property says that this map extends uniquely to {\displaystyle R[t]\to S,\quad f\mapsto {\overline {f}}} (t maps to x), where {\displaystyle {\overline {f}}} is the polynomial function defined by f. The resulting map is injective if and only if R is infinite. Given a non-constant monic polynomial f in R[t], there exists a ring S containing R such that f is a product of linear factors in S[t]. Let k be an algebraically closed field. Hilbert's Nullstellensatz (theorem of zeros) states that there is a natural one-to-one correspondence between the set of all prime ideals in {\displaystyle k\left[t_{1},\ldots ,t_{n}\right]} and the set of irreducible closed subvarieties of kⁿ. In particular, many local problems in algebraic geometry may be attacked through the study of the generators of an ideal in a polynomial ring (cf. Gröbner basis). There are some other related constructions.
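The substitution homomorphism f ↦ f(x) discussed in this section can be modeled concretely for R = Z, holding polynomials as coefficient lists; this toy encoding is our own, not a library API:

```python
# Polynomials over Z as coefficient lists, lowest degree first.
def poly_add(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def evaluate(f, x):
    """The substitution map R[t] -> R, f |-> f(x), via Horner's rule."""
    result = 0
    for c in reversed(f):
        result = result * x + c
    return result

f = [1, 0, 1]   # 1 + t^2
g = [2, 3]      # 2 + 3t
x = 5

# evaluate(-, x) is a ring homomorphism: it respects + and *.
assert evaluate(poly_add(f, g), x) == evaluate(f, x) + evaluate(g, x)
assert evaluate(poly_mul(f, g), x) == evaluate(f, x) * evaluate(g, x)
print(evaluate(f, x))  # 26
```

The two assertions are exactly the homomorphism identities f + g ↦ f(x) + g(x) and fg ↦ f(x)g(x) from the universal property.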
A formal power series ring {\displaystyle R[\![t]\!]} consists of formal power series {\displaystyle \sum _{0}^{\infty }a_{i}t^{i},\quad a_{i}\in R} together with multiplication and addition that mimic those for convergent series. It contains R[t] as a subring. A formal power series ring does not have the universal property of a polynomial ring; a series may not converge after a substitution. The important advantage of a formal power series ring over a polynomial ring is that it is local (in fact, complete).

=== Matrix ring and endomorphism ring ===
Let R be a ring (not necessarily commutative). The set of all square matrices of size n with entries in R forms a ring with the entry-wise addition and the usual matrix multiplication. It is called the matrix ring and is denoted by Mn(R). Given a right R-module U, the set of all R-linear maps from U to itself forms a ring with pointwise addition of functions as the addition and composition of functions as the multiplication; it is called the endomorphism ring of U and is denoted by EndR(U). As in linear algebra, a matrix ring may be canonically interpreted as an endomorphism ring: {\displaystyle \operatorname {End} _{R}(R^{n})\simeq \operatorname {M} _{n}(R).} This is a special case of the following fact: If {\displaystyle f:\oplus _{1}^{n}U\to \oplus _{1}^{n}U} is an R-linear map, then f may be written as a matrix with entries fij in S = EndR(U), resulting in the ring isomorphism: {\displaystyle \operatorname {End} _{R}(\oplus _{1}^{n}U)\to \operatorname {M} _{n}(S),\quad f\mapsto (f_{ij}).} Any ring homomorphism R → S induces Mn(R) → Mn(S). Schur's lemma says that if U is a simple right R-module, then EndR(U) is a division ring.
If {\displaystyle U=\bigoplus _{i=1}^{r}U_{i}^{\oplus m_{i}}} is a direct sum of mi-copies of simple R-modules {\displaystyle U_{i},} then {\displaystyle \operatorname {End} _{R}(U)\simeq \prod _{i=1}^{r}\operatorname {M} _{m_{i}}(\operatorname {End} _{R}(U_{i})).} The Artin–Wedderburn theorem states that any semisimple ring (cf. below) is of this form. A ring R and the matrix ring Mn(R) over it are Morita equivalent: the category of right modules of R is equivalent to the category of right modules over Mn(R). In particular, the two-sided ideals in R are in one-to-one correspondence with the two-sided ideals in Mn(R).

=== Limits and colimits of rings ===
Let Ri be a sequence of rings such that Ri is a subring of Ri+1 for all i. Then the union (or filtered colimit) of the Ri is the ring {\displaystyle \varinjlim R_{i}} defined as follows: it is the disjoint union of all the Ri modulo the equivalence relation x ~ y if and only if x = y in Ri for sufficiently large i. Examples of colimits: A polynomial ring in infinitely many variables: {\displaystyle R[t_{1},t_{2},\cdots ]=\varinjlim R[t_{1},t_{2},\cdots ,t_{m}].} The algebraic closure of finite fields of the same characteristic: {\displaystyle {\overline {\mathbf {F} }}_{p}=\varinjlim \mathbf {F} _{p^{m}}.} The field of formal Laurent series over a field k: {\displaystyle k(\!(t)\!)=\varinjlim t^{-m}k[\![t]\!]} (it is the field of fractions of the formal power series ring {\displaystyle k[\![t]\!]} ). The function field of an algebraic variety over a field k is {\displaystyle \varinjlim k[U]} where the limit runs over all the coordinate rings k[U] of nonempty open subsets U (more succinctly, it is the stalk of the structure sheaf at the generic point).
Any commutative ring is the colimit of finitely generated subrings. A projective limit (or a filtered limit) of rings is defined as follows. Suppose we are given a family of rings Ri, i running over positive integers, say, and ring homomorphisms Rj → Ri, j ≥ i such that Ri → Ri are all the identities and Rk → Rj → Ri is Rk → Ri whenever k ≥ j ≥ i. Then lim ← ⁡ R i {\displaystyle \varprojlim R_{i}} is the subring of ∏ R i {\displaystyle \textstyle \prod R_{i}} consisting of (xn) such that xj maps to xi under Rj → Ri, j ≥ i. For an example of a projective limit, see § Completion. === Localization === The localization generalizes the construction of the field of fractions of an integral domain to an arbitrary ring and modules. Given a (not necessarily commutative) ring R and a subset S of R, there exists a ring R [ S − 1 ] {\displaystyle R[S^{-1}]} together with the ring homomorphism R → R [ S − 1 ] {\displaystyle R\to R\left[S^{-1}\right]} that "inverts" S; that is, the homomorphism maps elements in S to unit elements in R [ S − 1 ] , {\displaystyle R\left[S^{-1}\right],} and, moreover, any ring homomorphism from R that "inverts" S uniquely factors through R [ S − 1 ] . {\displaystyle R\left[S^{-1}\right].} The ring R [ S − 1 ] {\displaystyle R\left[S^{-1}\right]} is called the localization of R with respect to S. For example, if R is a commutative ring and f an element in R, then the localization R [ f − 1 ] {\displaystyle R\left[f^{-1}\right]} consists of elements of the form r / f n , r ∈ R , n ≥ 0 {\displaystyle r/f^{n},\,r\in R,\,n\geq 0} (to be precise, R [ f − 1 ] = R [ t ] / ( t f − 1 ) . {\displaystyle R\left[f^{-1}\right]=R[t]/(tf-1).} ) The localization is frequently applied to a commutative ring R with respect to the complement of a prime ideal (or a union of prime ideals) in R. In that case S = R − p , {\displaystyle S=R-{\mathfrak {p}},} one often writes R p {\displaystyle R_{\mathfrak {p}}} for R [ S − 1 ] . 
{\displaystyle R\left[S^{-1}\right].} R p {\displaystyle R_{\mathfrak {p}}} is then a local ring with the maximal ideal p R p . {\displaystyle {\mathfrak {p}}R_{\mathfrak {p}}.} This is the reason for the terminology "localization". The field of fractions of an integral domain R is the localization of R at the prime ideal zero. If p {\displaystyle {\mathfrak {p}}} is a prime ideal of a commutative ring R, then the field of fractions of R / p {\displaystyle R/{\mathfrak {p}}} is the same as the residue field of the local ring R p {\displaystyle R_{\mathfrak {p}}} and is denoted by k ( p ) . {\displaystyle k({\mathfrak {p}}).} If M is a left R-module, then the localization of M with respect to S is given by a change of rings M [ S − 1 ] = R [ S − 1 ] ⊗ R M . {\displaystyle M\left[S^{-1}\right]=R\left[S^{-1}\right]\otimes _{R}M.} The most important properties of localization are the following: when R is a commutative ring and S a multiplicatively closed subset p ↦ p [ S − 1 ] {\displaystyle {\mathfrak {p}}\mapsto {\mathfrak {p}}\left[S^{-1}\right]} is a bijection between the set of all prime ideals in R disjoint from S and the set of all prime ideals in R [ S − 1 ] . {\displaystyle R\left[S^{-1}\right].} R [ S − 1 ] = lim → ⁡ R [ f − 1 ] , {\displaystyle R\left[S^{-1}\right]=\varinjlim R\left[f^{-1}\right],} f running over elements in S with partial ordering given by divisibility. The localization is exact: 0 → M ′ [ S − 1 ] → M [ S − 1 ] → M ″ [ S − 1 ] → 0 {\displaystyle 0\to M'\left[S^{-1}\right]\to M\left[S^{-1}\right]\to M''\left[S^{-1}\right]\to 0} is exact over R [ S − 1 ] {\displaystyle R\left[S^{-1}\right]} whenever 0 → M ′ → M → M ″ → 0 {\displaystyle 0\to M'\to M\to M''\to 0} is exact over R. 
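The case S = R − p described above can be sketched for R = Z and p = (5): the localization Z_(5) consists of the rationals whose reduced denominator is prime to 5, and its non-units form the single maximal ideal 5·Z_(5). A minimal check, relying on the fact that Python's `Fraction` normalizes to lowest terms:

```python
from fractions import Fraction

P = 5  # localize R = Z at the prime ideal p = (5), so S = Z - (5)

def in_localization(q):
    # Fraction reduces to lowest terms, so q lies in Z_(5) exactly when
    # its reduced denominator is not divisible by 5
    return q.denominator % P != 0

def is_unit(q):
    # units of the local ring Z_(5): numerator and denominator both prime to 5
    return in_localization(q) and q.numerator % P != 0

assert in_localization(Fraction(3, 14))       # 14 ∈ S is inverted
assert not in_localization(Fraction(1, 10))   # 1/10 would need 5 inverted
# non-units form the maximal ideal 5·Z_(5), so Z_(5) is a local ring
assert in_localization(Fraction(5, 7)) and not is_unit(Fraction(5, 7))
```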
Conversely, if 0 → M m ′ → M m → M m ″ → 0 {\displaystyle 0\to M'_{\mathfrak {m}}\to M_{\mathfrak {m}}\to M''_{\mathfrak {m}}\to 0} is exact for any maximal ideal m , {\displaystyle {\mathfrak {m}},} then 0 → M ′ → M → M ″ → 0 {\displaystyle 0\to M'\to M\to M''\to 0} is exact. A remark: localization is no help in proving a global existence. One instance of this is that if two modules are isomorphic at all prime ideals, it does not follow that they are isomorphic. (One way to explain this is that the localization allows one to view a module as a sheaf over prime ideals and a sheaf is inherently a local notion.) In category theory, a localization of a category amounts to making some morphisms isomorphisms. An element in a commutative ring R may be thought of as an endomorphism of any R-module. Thus, categorically, a localization of R with respect to a subset S of R is a functor from the category of R-modules to itself that sends elements of S viewed as endomorphisms to automorphisms and is universal with respect to this property. (Of course, R then maps to R [ S − 1 ] {\displaystyle R\left[S^{-1}\right]} and R-modules map to R [ S − 1 ] {\displaystyle R\left[S^{-1}\right]} -modules.) === Completion === Let R be a commutative ring, and let I be an ideal of R. The completion of R at I is the projective limit R ^ = lim ← ⁡ R / I n ; {\displaystyle {\hat {R}}=\varprojlim R/I^{n};} it is a commutative ring. The canonical homomorphisms from R to the quotients R / I n {\displaystyle R/I^{n}} induce a homomorphism R → R ^ . {\displaystyle R\to {\hat {R}}.} The latter homomorphism is injective if R is a Noetherian integral domain and I is a proper ideal, or if R is a Noetherian local ring with maximal ideal I, by Krull's intersection theorem. The construction is especially useful when I is a maximal ideal. 
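A minimal computational sketch of such a completion, taking R = Z and I the maximal ideal (7): an element of the projective limit is a compatible sequence of residues x_n mod 7^n, and Hensel/Newton lifting produces such a sequence solving x^2 = 2 (note 3^2 ≡ 2 mod 7):

```python
# A sketch of the I-adic completion with R = Z, I = (7): an element of
# lim R/I^n is a compatible sequence of residues x_n mod 7^n.  Hensel/Newton
# lifting builds such a sequence solving x^2 = 2, starting from 3^2 ≡ 2 (mod 7).

p, N = 7, 8
x = 3
seq = [x]
for n in range(2, N + 1):
    mod = p ** n
    # Newton step x <- x - (x^2 - 2)/(2x), computed mod p^n
    x = (x - (x * x - 2) * pow(2 * x, -1, mod)) % mod
    seq.append(x)

# compatibility under the maps R/I^{n+1} -> R/I^n ...
for n in range(1, N):
    assert seq[n] % p ** n == seq[n - 1]
# ... and x_n^2 ≡ 2 mod 7^n at every level
for n, xn in enumerate(seq, start=1):
    assert (xn * xn - 2) % p ** n == 0
```

The limit of this sequence is a square root of 2 in the completion, even though 2 has no square root in Z itself.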
The basic example is the completion of ⁠ Z {\displaystyle \mathbb {Z} } ⁠ at the principal ideal (p) generated by a prime number p; it is called the ring of p-adic integers and is denoted ⁠ Z p . {\displaystyle \mathbb {Z} _{p}.} ⁠ The completion can in this case be constructed also from the p-adic absolute value on ⁠ Q . {\displaystyle \mathbb {Q} .} ⁠ The p-adic absolute value on ⁠ Q {\displaystyle \mathbb {Q} } ⁠ is a map x ↦ | x | {\displaystyle x\mapsto |x|} from ⁠ Q {\displaystyle \mathbb {Q} } ⁠ to ⁠ R {\displaystyle \mathbb {R} } ⁠ given by | n | p = p − v p ( n ) {\displaystyle |n|_{p}=p^{-v_{p}(n)}} where v p ( n ) {\displaystyle v_{p}(n)} denotes the exponent of p in the prime factorization of a nonzero integer n into prime numbers (we also put | 0 | p = 0 {\displaystyle |0|_{p}=0} and | m / n | p = | m | p / | n | p {\displaystyle |m/n|_{p}=|m|_{p}/|n|_{p}} ). It defines a distance function on ⁠ Q {\displaystyle \mathbb {Q} } ⁠ and the completion of ⁠ Q {\displaystyle \mathbb {Q} } ⁠ as a metric space is denoted by ⁠ Q p . {\displaystyle \mathbb {Q} _{p}.} ⁠ It is again a field since the field operations extend to the completion. The subring of ⁠ Q p {\displaystyle \mathbb {Q} _{p}} ⁠ consisting of elements x with |x|p ≤ 1 is isomorphic to ⁠ Z p . {\displaystyle \mathbb {Z} _{p}.} ⁠ Similarly, the formal power series ring R[[t]] is the completion of R[t] at (t) (see also Hensel's lemma). A complete ring has much simpler structure than a commutative ring. This owes to the Cohen structure theorem, which says, roughly, that a complete local ring tends to look like a formal power series ring or a quotient of it. On the other hand, the interaction between the integral closure and completion has been among the most important aspects that distinguish modern commutative ring theory from the classical one developed by the likes of Noether.
Pathological examples found by Nagata led to the reexamination of the roles of Noetherian rings and motivated, among other things, the definition of excellent ring. === Rings with generators and relations === The most general way to construct a ring is by specifying generators and relations. Let F be a free ring (that is, free algebra over the integers) with the set X of symbols, that is, F consists of polynomials with integral coefficients in noncommuting variables that are elements of X. A free ring satisfies the universal property: any function from the set X to a ring R factors through F so that F → R is the unique ring homomorphism. Just as in the group case, every ring can be represented as a quotient of a free ring. Now, we can impose relations among symbols in X by taking a quotient. Explicitly, if E is a subset of F, then the quotient ring of F by the ideal generated by E is called the ring with generators X and relations E. If we used a ring, say, A as a base ring instead of ⁠ Z , {\displaystyle \mathbb {Z} ,} ⁠ then the resulting ring will be over A. For example, if E = { x y − y x ∣ x , y ∈ X } , {\displaystyle E=\{xy-yx\mid x,y\in X\},} then the resulting ring will be the usual polynomial ring with coefficients in A in variables that are elements of X (It is also the same thing as the symmetric algebra over A with symbols X.) In the category-theoretic terms, the formation S ↦ the free ring generated by the set S {\displaystyle S\mapsto {\text{the free ring generated by the set }}S} is the left adjoint functor of the forgetful functor from the category of rings to Set (and it is often called the free ring functor.) Let A, B be algebras over a commutative ring R. Then the tensor product of R-modules A ⊗ R B {\displaystyle A\otimes _{R}B} is an R-algebra with multiplication characterized by ( x ⊗ u ) ( y ⊗ v ) = x y ⊗ u v . 
{\displaystyle (x\otimes u)(y\otimes v)=xy\otimes uv.} == Special kinds of rings == === Domains === A nonzero ring with no nonzero zero-divisors is called a domain. A commutative domain is called an integral domain. The most important integral domains are principal ideal domains, PIDs for short, and fields. A principal ideal domain is an integral domain in which every ideal is principal. An important class of integral domains that contain a PID is a unique factorization domain (UFD), an integral domain in which every nonunit element is a product of prime elements (an element is prime if it generates a prime ideal.) The fundamental question in algebraic number theory is on the extent to which the ring of (generalized) integers in a number field, where an "ideal" admits prime factorization, fails to be a PID. Among theorems concerning a PID, the most important one is the structure theorem for finitely generated modules over a principal ideal domain. The theorem may be illustrated by the following application to linear algebra. Let V be a finite-dimensional vector space over a field k and f : V → V a linear map with minimal polynomial q. Then, since k[t] is a unique factorization domain, q factors into powers of distinct irreducible polynomials (that is, prime elements): q = p 1 e 1 … p s e s . {\displaystyle q=p_{1}^{e_{1}}\ldots p_{s}^{e_{s}}.} Letting t ⋅ v = f ( v ) , {\displaystyle t\cdot v=f(v),} we make V a k[t]-module. The structure theorem then says V is a direct sum of cyclic modules, each of which is isomorphic to the module of the form k [ t ] / ( p i k j ) . {\displaystyle k[t]/\left(p_{i}^{k_{j}}\right).} Now, if p i ( t ) = t − λ i , {\displaystyle p_{i}(t)=t-\lambda _{i},} then such a cyclic module (for pi) has a basis in which the restriction of f is represented by a Jordan matrix. Thus, if, say, k is algebraically closed, then all pi's are of the form t – λi and the above decomposition corresponds to the Jordan canonical form of f. 
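The decomposition just described can be checked by machine. The sketch below assumes SymPy is available and uses its `Matrix.jordan_form` method: a matrix with one Jordan block of size 2 for eigenvalue 3 and a 1×1 block for eigenvalue 5 is hidden by conjugation, and the Jordan form (and the minimal polynomial q = (t − 3)^2 (t − 5)) is recovered:

```python
from sympy import Matrix, eye

# One Jordan block of size 2 for eigenvalue 3 plus a 1x1 block for 5,
# hidden by conjugation; jordan_form recovers the decomposition.
J0 = Matrix([[3, 1, 0], [0, 3, 0], [0, 0, 5]])
P0 = Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]])
M = P0 * J0 * P0.inv()

P, J = M.jordan_form()  # M = P J P^{-1}
assert P * J * P.inv() == M
assert sorted(J[i, i] for i in range(3)) == [3, 3, 5]

# minimal polynomial q = (t - 3)^2 (t - 5): the square is needed because
# the eigenvalue 3 carries a Jordan block of size 2
assert (M - 3 * eye(3)) ** 2 * (M - 5 * eye(3)) == Matrix.zeros(3, 3)
assert (M - 3 * eye(3)) * (M - 5 * eye(3)) != Matrix.zeros(3, 3)
```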
In algebraic geometry, UFDs arise because of smoothness. More precisely, a point in a variety (over a perfect field) is smooth if the local ring at the point is a regular local ring. A regular local ring is a UFD. The following is a chain of class inclusions that describes the relationship between rings, domains and fields: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields === Division ring === A division ring is a ring such that every non-zero element is a unit. A commutative division ring is a field. A prominent example of a division ring that is not a field is the ring of quaternions. Any centralizer in a division ring is also a division ring. In particular, the center of a division ring is a field. It turns out that every finite domain (in particular, every finite division ring) is a field, and in particular commutative (Wedderburn's little theorem). Every module over a division ring is a free module (has a basis); consequently, much of linear algebra can be carried out over a division ring instead of a field. The study of conjugacy classes figures prominently in the classical theory of division rings; see, for example, the Cartan–Brauer–Hua theorem. A cyclic algebra, introduced by L. E. Dickson, is a generalization of a quaternion algebra. === Semisimple rings === A semisimple module is a direct sum of simple modules. A semisimple ring is a ring that is semisimple as a left module (or right module) over itself. ==== Examples ==== A division ring is semisimple (and simple). For any division ring D and positive integer n, the matrix ring Mn(D) is semisimple (and simple). For a field k and finite group G, the group ring kG is semisimple if and only if the characteristic of k does not divide the order of G (Maschke's theorem). Clifford algebras are semisimple.
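The failing direction of Maschke's criterion can be checked by hand in the smallest case: for G = C_p and char k = p, the group ring k[C_p] ≅ F_p[x]/(x^p − 1) contains the nonzero nilpotent element x − 1, so it cannot be semisimple. A minimal sketch with p = 5:

```python
p = 5  # char k = p divides |C_p| = p, so Maschke's hypothesis fails

def mult(f, g):
    # multiply in k[C_p] ≅ F_p[x]/(x^p - 1); coefficient lists of length p,
    # exponents added mod p because x^p = 1
    h = [0] * p
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[(i + j) % p] = (h[(i + j) % p] + a * b) % p
    return h

u = [(-1) % p, 1] + [0] * (p - 2)   # the element x - 1, nonzero
power = [1] + [0] * (p - 1)         # the identity of the group ring
for _ in range(p):
    power = mult(power, u)

# (x - 1)^p = x^p - 1 = 0 in characteristic p: u is a nonzero nilpotent,
# so F_p[C_p] has a nonzero nilpotent ideal and cannot be semisimple
assert u != [0] * p
assert power == [0] * p
```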
The Weyl algebra over a field is a simple ring, but it is not semisimple. The same holds for a ring of differential operators in many variables. ==== Properties ==== Any module over a semisimple ring is semisimple. (Proof: A free module over a semisimple ring is semisimple and any module is a quotient of a free module.) For a ring R, the following are equivalent: R is semisimple. R is artinian and semiprimitive. R is a finite direct product ∏ i = 1 r M n i ⁡ ( D i ) {\textstyle \prod _{i=1}^{r}\operatorname {M} _{n_{i}}(D_{i})} where each ni is a positive integer, and each Di is a division ring (Artin–Wedderburn theorem). Semisimplicity is closely related to separability. A unital associative algebra A over a field k is said to be separable if the base extension A ⊗ k F {\displaystyle A\otimes _{k}F} is semisimple for every field extension F / k. If A happens to be a field, then this is equivalent to the usual definition in field theory (cf. separable extension.) === Central simple algebra and Brauer group === For a field k, a k-algebra is central if its center is k and is simple if it is a simple ring. Since the center of a simple k-algebra is a field, any simple k-algebra is a central simple algebra over its center. In this section, a central simple algebra is assumed to have finite dimension. Also, we mostly fix the base field; thus, an algebra refers to a k-algebra. The matrix ring of size n over a ring R will be denoted by Rn. The Skolem–Noether theorem states any automorphism of a central simple algebra is inner. Two central simple algebras A and B are said to be similar if there are integers n and m such that A ⊗ k k n ≈ B ⊗ k k m . {\displaystyle A\otimes _{k}k_{n}\approx B\otimes _{k}k_{m}.} Since k n ⊗ k k m ≃ k n m , {\displaystyle k_{n}\otimes _{k}k_{m}\simeq k_{nm},} the similarity is an equivalence relation. 
The similarity classes [A] with the multiplication [ A ] [ B ] = [ A ⊗ k B ] {\displaystyle [A][B]=\left[A\otimes _{k}B\right]} form an abelian group called the Brauer group of k and is denoted by Br(k). By the Artin–Wedderburn theorem, a central simple algebra is the matrix ring of a division ring; thus, each similarity class is represented by a unique division ring. For example, Br(k) is trivial if k is a finite field or an algebraically closed field (more generally quasi-algebraically closed field; cf. Tsen's theorem). Br ⁡ ( R ) {\displaystyle \operatorname {Br} (\mathbb {R} )} has order 2 (a special case of the theorem of Frobenius). Finally, if k is a nonarchimedean local field (for example, ⁠ Q p {\displaystyle \mathbb {Q} _{p}} ⁠), then Br ⁡ ( k ) = Q / Z {\displaystyle \operatorname {Br} (k)=\mathbb {Q} /\mathbb {Z} } through the invariant map. Now, if F is a field extension of k, then the base extension − ⊗ k F {\displaystyle -\otimes _{k}F} induces Br(k) → Br(F). Its kernel is denoted by Br(F / k). It consists of [A] such that A ⊗ k F {\displaystyle A\otimes _{k}F} is a matrix ring over F (that is, A is split by F.) If the extension is finite and Galois, then Br(F / k) is canonically isomorphic to H 2 ( Gal ⁡ ( F / k ) , k ∗ ) . {\displaystyle H^{2}\left(\operatorname {Gal} (F/k),k^{*}\right).} Azumaya algebras generalize the notion of central simple algebras to a commutative local ring. === Valuation ring === If K is a field, a valuation v is a group homomorphism from the multiplicative group K∗ to a totally ordered abelian group G such that, for any f, g in K with f + g nonzero, v(f + g) ≥ min{v(f), v(g)}. The valuation ring of v is the subring of K consisting of zero and all nonzero f such that v(f) ≥ 0. 
Examples: The field of formal Laurent series k ( ( t ) ) {\displaystyle k(\!(t)\!)} over a field k comes with the valuation v such that v(f) is the least degree of a nonzero term in f; the valuation ring of v is the formal power series ring k [ [ t ] ] . {\displaystyle k[\![t]\!].} More generally, given a field k and a totally ordered abelian group G, let k ( ( G ) ) {\displaystyle k(\!(G)\!)} be the set of all functions from G to k whose supports (the sets of points at which the functions are nonzero) are well ordered. It is a field with the multiplication given by convolution: ( f ∗ g ) ( t ) = ∑ s ∈ G f ( s ) g ( t − s ) . {\displaystyle (f*g)(t)=\sum _{s\in G}f(s)g(t-s).} It also comes with the valuation v such that v(f) is the least element in the support of f. The subring consisting of elements with finite support is called the group ring of G (which makes sense even if G is not commutative). If G is the group of integers, then we recover the previous example (by identifying f with the series whose nth coefficient is f(n).) == Rings with extra structure == A ring may be viewed as an abelian group (by using the addition operation), with extra structure: namely, ring multiplication. In the same way, there are other mathematical objects which may be considered as rings with extra structure. For example: An associative algebra is a ring that is also a vector space over a field K such that the scalar multiplication is compatible with the ring multiplication. For instance, the set of n-by-n matrices over the real field ⁠ R {\displaystyle \mathbb {R} } ⁠ has dimension n2 as a real vector space. A ring R is a topological ring if its set of elements R is given a topology which makes the addition map ( + : R × R → R {\displaystyle +:R\times R\to R} ) and the multiplication map ⋅ : R × R → R both continuous as maps between topological spaces (where X × X inherits the product topology or any other product in the category).
For example, n-by-n matrices over the real numbers could be given either the Euclidean topology, or the Zariski topology, and in either case one would obtain a topological ring. A λ-ring is a commutative ring R together with operations λn: R → R that are like nth exterior powers: λ n ( x + y ) = ∑ i = 0 n λ i ( x ) λ n − i ( y ) . {\displaystyle \lambda ^{n}(x+y)=\sum _{i=0}^{n}\lambda ^{i}(x)\lambda ^{n-i}(y).} For example, ⁠ Z {\displaystyle \mathbb {Z} } ⁠ is a λ-ring with λ n ( x ) = ( x n ) , {\displaystyle \lambda ^{n}(x)={\binom {x}{n}},} the binomial coefficients. The notion plays a central role in the algebraic approach to the Riemann–Roch theorem. A totally ordered ring is a ring with a total ordering that is compatible with ring operations. == Some examples of the ubiquity of rings == Many different kinds of mathematical objects can be fruitfully analyzed in terms of some associated ring. === Cohomology ring of a topological space === To any topological space X one can associate its integral cohomology ring H ∗ ( X , Z ) = ⨁ i = 0 ∞ H i ( X , Z ) , {\displaystyle H^{*}(X,\mathbb {Z} )=\bigoplus _{i=0}^{\infty }H^{i}(X,\mathbb {Z} ),} a graded ring. There are also homology groups H i ( X , Z ) {\displaystyle H_{i}(X,\mathbb {Z} )} of a space, and indeed these were defined first, as a useful tool for distinguishing between certain pairs of topological spaces, like the spheres and tori, for which the methods of point-set topology are not well-suited. Cohomology groups were later defined in terms of homology groups in a way which is roughly analogous to the dual of a vector space. To know each individual integral homology group is essentially the same as knowing each individual integral cohomology group, because of the universal coefficient theorem.
However, the advantage of the cohomology groups is that there is a natural product, which is analogous to the observation that one can multiply pointwise a k-multilinear form and an l-multilinear form to get a (k + l)-multilinear form. The ring structure in cohomology provides the foundation for characteristic classes of fiber bundles, intersection theory on manifolds and algebraic varieties, Schubert calculus and much more. === Burnside ring of a group === To any group is associated its Burnside ring which uses a ring to describe the various ways the group can act on a finite set. The Burnside ring's additive group is the free abelian group whose basis is the set of transitive actions of the group and whose addition is the disjoint union of the action. Expressing an action in terms of the basis is decomposing an action into its transitive constituents. The multiplication is easily expressed in terms of the representation ring: the multiplication in the Burnside ring is formed by writing the tensor product of two permutation modules as a permutation module. The ring structure allows a formal way of subtracting one action from another. Since the Burnside ring is contained as a finite index subring of the representation ring, one can pass easily from one to the other by extending the coefficients from integers to the rational numbers. === Representation ring of a group ring === To any group ring or Hopf algebra is associated its representation ring or "Green ring". The representation ring's additive group is the free abelian group whose basis are the indecomposable modules and whose addition corresponds to the direct sum. Expressing a module in terms of the basis is finding an indecomposable decomposition of the module. The multiplication is the tensor product. When the algebra is semisimple, the representation ring is just the character ring from character theory, which is more or less the Grothendieck group given a ring structure. 
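The Burnside ring multiplication described above can be checked by brute force for a cyclic group C_n: the transitive C_n-sets are the sets Z_a with the translation action, for a dividing n, and counting orbits of the diagonal action shows X_a · X_b = gcd(a, b) · X_{lcm(a, b)}. A sketch (the function name is illustrative):

```python
from math import gcd, lcm

def orbit_decomposition(a, b):
    # orbits of the diagonal translation action on Z_a x Z_b,
    # i.e. the decomposition of a product of two transitive C_n-sets
    pts = {(x, y) for x in range(a) for y in range(b)}
    orbits = []
    while pts:
        x, y = next(iter(pts))
        orbit = set()
        while (x, y) not in orbit:
            orbit.add((x, y))
            x, y = (x + 1) % a, (y + 1) % b
        pts -= orbit
        orbits.append(orbit)
    return orbits

# X_4 · X_6 in the Burnside ring of C_12: gcd(4,6) = 2 transitive summands,
# each a copy of the transitive set of size lcm(4,6) = 12
orbs = orbit_decomposition(4, 6)
assert len(orbs) == gcd(4, 6)
assert all(len(o) == lcm(4, 6) for o in orbs)
```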
=== Function field of an irreducible algebraic variety === To any irreducible algebraic variety is associated its function field. The points of an algebraic variety correspond to valuation rings contained in the function field and containing the coordinate ring. The study of algebraic geometry makes heavy use of commutative algebra to study geometric concepts in terms of ring-theoretic properties. Birational geometry studies maps between the subrings of the function field. === Face ring of a simplicial complex === Every simplicial complex has an associated face ring, also called its Stanley–Reisner ring. This ring reflects many of the combinatorial properties of the simplicial complex, so it is of particular interest in algebraic combinatorics. In particular, the algebraic geometry of the Stanley–Reisner ring was used to characterize the numbers of faces in each dimension of simplicial polytopes. == Category-theoretic description == Every ring can be thought of as a monoid in Ab, the category of abelian groups (thought of as a monoidal category under the tensor product of ⁠ Z {\displaystyle \mathbb {Z} } ⁠-modules). The monoid action of a ring R on an abelian group is simply an R-module. Essentially, an R-module is a generalization of the notion of a vector space – where rather than a vector space over a field, one has a "vector space over a ring". Let (A, +) be an abelian group and let End(A) be its endomorphism ring (see above). Note that, essentially, End(A) is the set of all morphisms of A, where if f is in End(A), and g is in End(A), the following rules may be used to compute f + g and f ⋅ g: ( f + g ) ( x ) = f ( x ) + g ( x ) ( f ⋅ g ) ( x ) = f ( g ( x ) ) , {\displaystyle {\begin{aligned}&(f+g)(x)=f(x)+g(x)\\&(f\cdot g)(x)=f(g(x)),\end{aligned}}} where + as in f(x) + g(x) is addition in A, and function composition is denoted from right to left. Therefore, associated to any abelian group, is a ring. 
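For A = Z/n this endomorphism ring is easy to describe: an endomorphism of the additive group is determined by where it sends 1, so every endomorphism is multiplication by some a, and the recipe above reproduces the ring Z/n itself. A minimal sketch:

```python
n = 12  # realize End(Z/12, +) as the ring Z/12

def f(a):
    # the endomorphism of the abelian group Z/n determined by 1 -> a
    return lambda x: (a * x) % n

for a in range(n):
    for b in range(n):
        for x in range(n):
            # pointwise sum of endomorphisms is addition mod n ...
            assert (f(a)(x) + f(b)(x)) % n == f((a + b) % n)(x)
            # ... and composition is multiplication mod n
            assert f(a)(f(b)(x)) == f((a * b) % n)(x)
```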
Conversely, given any ring, (R, +, ⋅ ), (R, +) is an abelian group. Furthermore, for every r in R, right (or left) multiplication by r gives rise to a morphism of (R, +), by right (or left) distributivity. Let A = (R, +). Consider those endomorphisms of A that "factor through" right (or left) multiplication of R. In other words, let EndR(A) be the set of all morphisms m of A having the property that m(r ⋅ x) = r ⋅ m(x). It was seen that every r in R gives rise to a morphism of A: right multiplication by r. It is in fact true that this association of any element of R to a morphism of A, as a function from R to EndR(A), is an isomorphism of rings. In this sense, therefore, any ring can be viewed as the endomorphism ring of some abelian X-group (by X-group, it is meant a group with X being its set of operators). In essence, the most general form of a ring is the endomorphism ring of some abelian X-group. Any ring can be seen as a preadditive category with a single object. It is therefore natural to consider arbitrary preadditive categories to be generalizations of rings. And indeed, many definitions and theorems originally given for rings can be translated to this more general context. Additive functors between preadditive categories generalize the concept of ring homomorphism, and ideals in additive categories can be defined as sets of morphisms closed under addition and under composition with arbitrary morphisms. == Generalization == Algebraists have defined structures more general than rings by weakening or dropping some of the ring axioms. === Rng === A rng is the same as a ring, except that the existence of a multiplicative identity is not assumed. === Nonassociative ring === A nonassociative ring is an algebraic structure that satisfies all of the ring axioms except the associative property and the existence of a multiplicative identity. A notable example is a Lie algebra.
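The 2×2 matrices with the commutator bracket [X, Y] = XY − YX give a concrete Lie algebra: the bracket is anticommutative and satisfies the Jacobi identity in place of associativity. A quick check with hand-rolled integer matrices:

```python
# 2x2 integer matrices under the commutator bracket [X, Y] = XY - YX:
# a nonassociative ring that is anticommutative and satisfies Jacobi.

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def bracket(X, Y):
    return sub(mul(X, Y), mul(Y, X))

E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]
G = [[1, 0], [0, 0]]
Z = [[0, 0], [0, 0]]

assert bracket(E, F) == sub(Z, bracket(F, E))       # anticommutativity
assert add(bracket(E, bracket(F, G)),
           add(bracket(F, bracket(G, E)),
               bracket(G, bracket(E, F)))) == Z     # Jacobi identity
assert bracket(bracket(E, F), G) != bracket(E, bracket(F, G))  # not associative
```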
There exists some structure theory for such algebras that generalizes the analogous results for Lie algebras and associative algebras. === Semiring === A semiring (sometimes rig) is obtained by weakening the assumption that (R, +) is an abelian group to the assumption that (R, +) is a commutative monoid, and adding the axiom that 0 ⋅ a = a ⋅ 0 = 0 for all a in R (since it no longer follows from the other axioms). Examples: the non-negative integers { 0 , 1 , 2 , … } {\displaystyle \{0,1,2,\ldots \}} with ordinary addition and multiplication; the tropical semiring. == Other ring-like objects == === Ring object in a category === Let C be a category with finite products. Let pt denote a terminal object of C (an empty product). A ring object in C is an object R equipped with morphisms R × R → a R {\displaystyle R\times R\;{\stackrel {a}{\to }}\,R} (addition), R × R → m R {\displaystyle R\times R\;{\stackrel {m}{\to }}\,R} (multiplication), pt → 0 R {\displaystyle \operatorname {pt} {\stackrel {0}{\to }}\,R} (additive identity), R → i R {\displaystyle R\;{\stackrel {i}{\to }}\,R} (additive inverse), and pt → 1 R {\displaystyle \operatorname {pt} {\stackrel {1}{\to }}\,R} (multiplicative identity) satisfying the usual ring axioms. Equivalently, a ring object is an object R equipped with a factorization of its functor of points h R = Hom ⁡ ( − , R ) : C op → S e t s {\displaystyle h_{R}=\operatorname {Hom} (-,R):C^{\operatorname {op} }\to \mathbf {Sets} } through the category of rings: C op → R i n g s ⟶ forgetful S e t s . {\displaystyle C^{\operatorname {op} }\to \mathbf {Rings} {\stackrel {\textrm {forgetful}}{\longrightarrow }}\mathbf {Sets} .} === Ring scheme === In algebraic geometry, a ring scheme over a base scheme S is a ring object in the category of S-schemes. 
One example is the ring scheme Wn over ⁠ Spec ⁡ Z {\displaystyle \operatorname {Spec} \mathbb {Z} } ⁠, which for any commutative ring A returns the ring Wn(A) of p-isotypic Witt vectors of length n over A. === Ring spectrum === In algebraic topology, a ring spectrum is a spectrum X together with a multiplication μ : X ∧ X → X {\displaystyle \mu :X\wedge X\to X} and a unit map S → X from the sphere spectrum S, such that the ring axiom diagrams commute up to homotopy. In practice, it is common to define a ring spectrum as a monoid object in a good category of spectra such as the category of symmetric spectra.
Wikipedia/Ring_(abstract_algebra)
In mathematical analysis, the Dirac delta function (or δ distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as δ ( x ) = { 0 , x ≠ 0 ∞ , x = 0 {\displaystyle \delta (x)={\begin{cases}0,&x\neq 0\\{\infty },&x=0\end{cases}}} such that ∫ − ∞ ∞ δ ( x ) d x = 1. {\displaystyle \int _{-\infty }^{\infty }\delta (x)dx=1.} Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions. The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1. The mathematical rigor of the delta function was disputed until Laurent Schwartz developed the theory of distributions, where it is defined as a linear form acting on functions. == Motivation and overview == The graph of the Dirac delta is usually thought of as following the whole x-axis and the positive y-axis.: 174  The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a Dirac delta. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the ball, by only considering the total impulse of the collision, without a detailed model of all of the elastic energy transfer at subatomic levels (for instance). To be specific, suppose that a billiard ball is at rest. 
At time t = 0 {\displaystyle t=0} it is struck by another ball, imparting it with a momentum P, with units kg⋅m⋅s−1. The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is P δ(t); the units of δ(t) are s−1. To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval Δ t = [ 0 , T ] {\displaystyle \Delta t=[0,T]} . That is, F Δ t ( t ) = { P / Δ t 0 < t ≤ T , 0 otherwise . {\displaystyle F_{\Delta t}(t)={\begin{cases}P/\Delta t&0<t\leq T,\\0&{\text{otherwise}}.\end{cases}}} Then the momentum at any time t is found by integration: p ( t ) = ∫ 0 t F Δ t ( τ ) d τ = { P t ≥ T P t / Δ t 0 ≤ t ≤ T 0 otherwise. {\displaystyle p(t)=\int _{0}^{t}F_{\Delta t}(\tau )\,d\tau ={\begin{cases}P&t\geq T\\P\,t/\Delta t&0\leq t\leq T\\0&{\text{otherwise.}}\end{cases}}} Now, the model situation of an instantaneous transfer of momentum requires taking the limit as Δt → 0, giving a result everywhere except at 0: p ( t ) = { P t > 0 0 t < 0. {\displaystyle p(t)={\begin{cases}P&t>0\\0&t<0.\end{cases}}} Here the functions F Δ t {\displaystyle F_{\Delta t}} are thought of as useful approximations to the idea of instantaneous transfer of momentum. The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of pointwise convergence) lim Δ t → 0 + F Δ t {\textstyle \lim _{\Delta t\to 0^{+}}F_{\Delta t}} is zero everywhere but a single point, where it is infinite. To make proper sense of the Dirac delta, we should instead insist that the property ∫ − ∞ ∞ F Δ t ( t ) d t = P , {\displaystyle \int _{-\infty }^{\infty }F_{\Delta t}(t)\,dt=P,} which holds for all Δ t > 0 {\displaystyle \Delta t>0} , should continue to hold in the limit. 
So, in the equation F ( t ) = P δ ( t ) = lim Δ t → 0 F Δ t ( t ) {\textstyle F(t)=P\,\delta (t)=\lim _{\Delta t\to 0}F_{\Delta t}(t)} , it is understood that the limit is always taken outside the integral. In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero. The Dirac delta is not truly a function, at least not a usual one with domain and range in real numbers. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions. == History == In physics, the Dirac delta function was popularized by Paul Dirac in his book The Principles of Quantum Mechanics, published in 1930. However, Oliver Heaviside, 35 years before Dirac, described an impulsive function called the Heaviside step function for purposes and with properties analogous to Dirac's work. Even earlier, several mathematicians and physicists used limits of sharply peaked functions in derivations. An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation as did Gustav Kirchhoff somewhat later.
Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. The Dirac delta function as such was introduced by Paul Dirac in his 1927 paper The Physical Interpretation of the Quantum Dynamics. He called it the "delta function" since he used it as a continuum analogue of the discrete Kronecker delta. Mathematicians refer to the same concept as a distribution rather than a function.: 33  Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form: f ( x ) = 1 2 π ∫ − ∞ ∞ d α f ( α ) ∫ − ∞ ∞ d p cos ⁡ ( p x − p α ) , {\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\ \ d\alpha \,f(\alpha )\ \int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ ,} which is tantamount to the introduction of the δ-function in the form: δ ( x − α ) = 1 2 π ∫ − ∞ ∞ d p cos ⁡ ( p x − p α ) . {\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ .} Later, Augustin Cauchy expressed the theorem using exponentials: f ( x ) = 1 2 π ∫ − ∞ ∞ e i p x ( ∫ − ∞ ∞ e − i p α f ( α ) d α ) d p . {\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\ e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp.} Cauchy pointed out that in some circumstances the order of integration is significant in this result (contrast Fubini's theorem). 
As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as f ( x ) = 1 2 π ∫ − ∞ ∞ e i p x ( ∫ − ∞ ∞ e − i p α f ( α ) d α ) d p = 1 2 π ∫ − ∞ ∞ ( ∫ − ∞ ∞ e i p x e − i p α d p ) f ( α ) d α = ∫ − ∞ ∞ δ ( x − α ) f ( α ) d α , {\displaystyle {\begin{aligned}f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp\\[4pt]&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\left(\int _{-\infty }^{\infty }e^{ipx}e^{-ip\alpha }\,dp\right)f(\alpha )\,d\alpha =\int _{-\infty }^{\infty }\delta (x-\alpha )f(\alpha )\,d\alpha ,\end{aligned}}} where the δ-function is expressed as δ ( x − α ) = 1 2 π ∫ − ∞ ∞ e i p ( x − α ) d p . {\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ip(x-\alpha )}\,dp\ .} A rigorous interpretation of the exponential form and the various limitations upon the function f necessary for its application extended over several centuries. The problems with a classical interpretation are explained as follows: The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) to ensure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles. Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. 
Schwartz's theory of distributions (1945) ...", and leading to the formal development of the Dirac delta function. == Definitions == The Dirac delta function δ ( x ) {\displaystyle \delta (x)} can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite, δ ( x ) ≃ { + ∞ , x = 0 0 , x ≠ 0 {\displaystyle \delta (x)\simeq {\begin{cases}+\infty ,&x=0\\0,&x\neq 0\end{cases}}} and which is also constrained to satisfy the identity ∫ − ∞ ∞ δ ( x ) d x = 1. {\displaystyle \int _{-\infty }^{\infty }\delta (x)\,dx=1.} This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no extended real number valued function defined on the real numbers has these properties. === As a measure === One way to rigorously capture the notion of the Dirac delta function is to define a measure, called Dirac measure, which accepts a subset A of the real line R as an argument, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise. If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies ∫ − ∞ ∞ f ( x ) δ ( d x ) = f ( 0 ) {\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (dx)=f(0)} for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure—in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative (with respect to Lebesgue measure)—no true function for which the property ∫ − ∞ ∞ f ( x ) δ ( x ) d x = f ( 0 ) {\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (x)\,dx=f(0)} holds. 
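The defining property of the Dirac measure can be illustrated with a minimal sketch, assuming sets are represented as membership predicates: when a simple function (a finite sum of indicator functions) is integrated against the measure, only the piece whose set contains the origin contributes, so the integral recovers the function's value at 0.

```python
# Dirac measure at 0 (assumed toy representation: a set is a predicate).
# delta(A) = 1 if the set A contains 0, else 0.
def delta(A):
    return 1.0 if A(0.0) else 0.0

# Lebesgue integration of a simple function  sum_k c_k * 1_{A_k}  against a
# measure: sum the coefficients weighted by the measure of each set.
def integrate_simple(pieces, measure):
    return sum(c * measure(A) for c, A in pieces)

# f = 2 on (-inf, -1), 5 on [-1, 1], 3 on (1, inf); note f(0) = 5
pieces = [
    (2.0, lambda x: x < -1),
    (5.0, lambda x: -1 <= x <= 1),
    (3.0, lambda x: x > 1),
]
val = integrate_simple(pieces, delta)
print(val)  # 5.0 = f(0)
```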
As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral. As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function: H ( x ) = { 1 if x ≥ 0 0 if x < 0. {\displaystyle H(x)={\begin{cases}1&{\text{if }}x\geq 0\\0&{\text{if }}x<0.\end{cases}}} This means that H(x) is the integral of the cumulative indicator function 1(−∞, x] with respect to the measure δ; to wit, H ( x ) = ∫ R 1 ( − ∞ , x ] ( t ) δ ( d t ) = δ ( ( − ∞ , x ] ) , {\displaystyle H(x)=\int _{\mathbf {R} }\mathbf {1} _{(-\infty ,x]}(t)\,\delta (dt)=\delta \!\left((-\infty ,x]\right),} the latter being the measure of this interval. Thus, in particular, the integration of the delta function against a continuous function can be properly understood as a Riemann–Stieltjes integral: ∫ − ∞ ∞ f ( x ) δ ( d x ) = ∫ − ∞ ∞ f ( x ) d H ( x ) . {\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (dx)=\int _{-\infty }^{\infty }f(x)\,dH(x).} All higher moments of δ are zero. In particular, its characteristic function and moment generating function are both equal to one. === As a distribution === In the theory of distributions, a generalized function is considered not a function in itself but only through how it affects other functions when "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function is against a sufficiently "good" test function φ. Test functions are also known as bump functions. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral. A typical space of test functions consists of all smooth functions on R with compact support that have as many derivatives as required.
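The Riemann–Stieltjes description can be checked directly: in a Stieltjes sum for ∫ f dH, only the subinterval on which H jumps contributes, so the sum tends to f(0) as the partition is refined. A minimal sketch (the grid sizes and the choice f = cos are illustrative):

```python
import math

def H(x):
    # Heaviside step function, the CDF of the delta measure
    return 1.0 if x >= 0 else 0.0

def stieltjes_sum(f, a, b, n):
    # Riemann-Stieltjes sum of f dH on [a, b] with n subintervals, tagging
    # each subinterval at its right endpoint.  H jumps exactly once, so a
    # single term survives and the sum tends to f(0).
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(f(xs[i + 1]) * (H(xs[i + 1]) - H(xs[i])) for i in range(n))

for n in (9, 99, 9999):                 # odd n keeps 0 strictly off the grid
    print(n, stieltjes_sum(math.cos, -1.0, 1.0, n))   # tends to cos(0) = 1
```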
As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by δ [ φ ] = φ ( 0 ) {\displaystyle \delta [\varphi ]=\varphi (0)} (1) for every test function φ. For δ to be properly a distribution, it must be continuous in a suitable topology on the space of test functions. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N, there is an integer MN and a constant CN such that for every test function φ, one has the inequality | S [ φ ] | ≤ C N ∑ k = 0 M N sup x ∈ [ − N , N ] | φ ( k ) ( x ) | {\displaystyle \left|S[\varphi ]\right|\leq C_{N}\sum _{k=0}^{M_{N}}\sup _{x\in [-N,N]}\left|\varphi ^{(k)}(x)\right|} where sup represents the supremum. With the δ distribution, one has such an inequality (with CN = 1) with MN = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}). The delta distribution can also be defined in several equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that for every test function φ, one has δ [ φ ] = − ∫ − ∞ ∞ φ ′ ( x ) H ( x ) d x . {\displaystyle \delta [\varphi ]=-\int _{-\infty }^{\infty }\varphi '(x)\,H(x)\,dx.} Intuitively, if integration by parts were permitted, then the latter integral should simplify to ∫ − ∞ ∞ φ ( x ) H ′ ( x ) d x = ∫ − ∞ ∞ φ ( x ) δ ( x ) d x , {\displaystyle \int _{-\infty }^{\infty }\varphi (x)\,H'(x)\,dx=\int _{-\infty }^{\infty }\varphi (x)\,\delta (x)\,dx,} and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case, one does have − ∫ − ∞ ∞ φ ′ ( x ) H ( x ) d x = ∫ − ∞ ∞ φ ( x ) d H ( x ) . {\displaystyle -\int _{-\infty }^{\infty }\varphi '(x)\,H(x)\,dx=\int _{-\infty }^{\infty }\varphi (x)\,dH(x).} In the context of measure theory, the Dirac measure gives rise to a distribution by integration.
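The functional viewpoint, together with the order-zero seminorm bound |δ[φ]| ≤ sup|φ|, can be sketched in a few lines (the particular test function is an arbitrary assumption, and the supremum is approximated on a finite grid):

```python
import math

# The delta distribution as a linear functional (toy representation: a test
# function is just a Python callable): delta[phi] = phi(0).
delta = lambda phi: phi(0.0)

def sup_on_interval(phi, N, samples=10001):
    # grid approximation of sup |phi| over [-N, N] (odd sample count, so the
    # grid contains 0 exactly)
    return max(abs(phi(-N + 2 * N * i / (samples - 1))) for i in range(samples))

phi = lambda x: math.exp(-x * x) * math.cos(3 * x)   # a smooth test function
for N in (1, 2, 5):
    # order-zero bound: C_N = 1, M_N = 0 (no derivatives of phi needed)
    assert abs(delta(phi)) <= sup_on_interval(phi, N)
print(delta(phi))  # phi(0) = 1.0
```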
Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure. Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution. === Generalizations === The delta function can be defined in n-dimensional Euclidean space Rn as the measure such that ∫ R n f ( x ) δ ( d x ) = f ( 0 ) {\displaystyle \int _{\mathbf {R} ^{n}}f(\mathbf {x} )\,\delta (d\mathbf {x} )=f(\mathbf {0} )} for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x1, x2, ..., xn), one has δ ( x ) = δ ( x 1 ) δ ( x 2 ) ⋯ δ ( x n ) . {\displaystyle \delta (\mathbf {x} )=\delta (x_{1})\,\delta (x_{2})\cdots \delta (x_{n}).} (2) The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances. The notion of a Dirac measure makes sense on any set. Thus if X is a set, x0 ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by δ x 0 ( A ) = { 1 if x 0 ∈ A 0 if x 0 ∉ A {\displaystyle \delta _{x_{0}}(A)={\begin{cases}1&{\text{if }}x_{0}\in A\\0&{\text{if }}x_{0}\notin A\end{cases}}} is the delta measure or unit mass concentrated at x0. Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure.
The delta function on a manifold M centered at the point x0 ∈ M is defined as the following distribution: δ x 0 [ φ ] = φ ( x 0 ) {\displaystyle \delta _{x_{0}}[\varphi ]=\varphi (x_{0})} (3) for all compactly supported smooth real-valued functions φ on M. A common special case of this construction is a case in which M is an open set in the Euclidean space Rn. On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible; however, a variety of techniques from abstract analysis are available. For instance, the mapping x 0 ↦ δ x 0 {\displaystyle x_{0}\mapsto \delta _{x_{0}}} is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X. == Properties == === Scaling and symmetry === The delta function satisfies the following scaling property for a non-zero scalar α: ∫ − ∞ ∞ δ ( α x ) d x = ∫ − ∞ ∞ δ ( u ) d u | α | = 1 | α | {\displaystyle \int _{-\infty }^{\infty }\delta (\alpha x)\,dx=\int _{-\infty }^{\infty }\delta (u)\,{\frac {du}{|\alpha |}}={\frac {1}{|\alpha |}}} and so δ ( α x ) = δ ( x ) | α | . {\displaystyle \delta (\alpha x)={\frac {\delta (x)}{|\alpha |}}.} (4) Scaling property proof: ∫ − ∞ ∞ d x g ( x ) δ ( a x ) = 1 a ∫ − ∞ ∞ d x ′ g ( x ′ a ) δ ( x ′ ) = 1 a g ( 0 ) . {\displaystyle \int \limits _{-\infty }^{\infty }dx\ g(x)\delta (ax)={\frac {1}{a}}\int \limits _{-\infty }^{\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{a}}g(0).} where a change of variable x′ = ax is used, assuming for the moment that a > 0. If a is negative, i.e., a = −|a|, then ∫ − ∞ ∞ d x g ( x ) δ ( a x ) = 1 − | a | ∫ ∞ − ∞ d x ′ g ( x ′ a ) δ ( x ′ ) = 1 | a | ∫ − ∞ ∞ d x ′ g ( x ′ a ) δ ( x ′ ) = 1 | a | g ( 0 ) .
{\displaystyle \int \limits _{-\infty }^{\infty }dx\ g(x)\delta (ax)={\frac {1}{-\left\vert a\right\vert }}\int \limits _{\infty }^{-\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{\left\vert a\right\vert }}\int \limits _{-\infty }^{\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{\left\vert a\right\vert }}g(0).} Thus, δ ( a x ) = 1 | a | δ ( x ) {\displaystyle \delta (ax)={\frac {1}{\left\vert a\right\vert }}\delta (x)} . In particular, the delta function is an even distribution (symmetry), in the sense that δ ( − x ) = δ ( x ) {\displaystyle \delta (-x)=\delta (x)} which is homogeneous of degree −1. === Algebraic properties === The distributional product of δ with x is equal to zero: x δ ( x ) = 0. {\displaystyle x\,\delta (x)=0.} More generally, ( x − a ) n δ ( x − a ) = 0 {\displaystyle (x-a)^{n}\delta (x-a)=0} for all positive integers n {\displaystyle n} . Conversely, if xf(x) = xg(x), where f and g are distributions, then f ( x ) = g ( x ) + c δ ( x ) {\displaystyle f(x)=g(x)+c\delta (x)} for some constant c. === Translation === The integral of any function multiplied by the time-delayed Dirac delta δ T ( t ) = δ ( t − T ) {\displaystyle \delta _{T}(t){=}\delta (t{-}T)} is ∫ − ∞ ∞ f ( t ) δ ( t − T ) d t = f ( T ) . {\displaystyle \int _{-\infty }^{\infty }f(t)\,\delta (t-T)\,dt=f(T).} This is sometimes referred to as the sifting property or the sampling property. The delta function is said to "sift out" the value of f(t) at t = T. It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount: ( f ∗ δ T ) ( t ) = d e f ∫ − ∞ ∞ f ( τ ) δ ( t − T − τ ) d τ = ∫ − ∞ ∞ f ( τ ) δ ( τ − ( t − T ) ) d τ since δ ( − x ) = δ ( x ) by (4) = f ( t − T ) . 
{\displaystyle {\begin{aligned}(f*\delta _{T})(t)\ &{\stackrel {\mathrm {def} }{=}}\ \int _{-\infty }^{\infty }f(\tau )\,\delta (t-T-\tau )\,d\tau \\&=\int _{-\infty }^{\infty }f(\tau )\,\delta (\tau -(t-T))\,d\tau \qquad {\text{since}}~\delta (-x)=\delta (x)~~{\text{by (4)}}\\&=f(t-T).\end{aligned}}} The sifting property holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense) ∫ − ∞ ∞ δ ( ξ − x ) δ ( x − η ) d x = δ ( η − ξ ) . {\displaystyle \int _{-\infty }^{\infty }\delta (\xi -x)\delta (x-\eta )\,dx=\delta (\eta -\xi ).} === Composition with a function === More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds (where u = g ( x ) {\displaystyle u=g(x)} ), that ∫ R δ ( g ( x ) ) f ( g ( x ) ) | g ′ ( x ) | d x = ∫ g ( R ) δ ( u ) f ( u ) d u {\displaystyle \int _{\mathbb {R} }\delta {\bigl (}g(x){\bigr )}f{\bigl (}g(x){\bigr )}\left|g'(x)\right|dx=\int _{g(\mathbb {R} )}\delta (u)\,f(u)\,du} provided that g is a continuously differentiable function with g′ nowhere zero. That is, there is a unique way to assign meaning to the distribution δ ∘ g {\displaystyle \delta \circ g} so that this identity holds for all compactly supported test functions f. If g′ does vanish at some point, the domain must first be broken up into intervals that exclude that point. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x0, then δ ( g ( x ) ) = δ ( x − x 0 ) | g ′ ( x 0 ) | .
{\displaystyle \delta (g(x))={\frac {\delta (x-x_{0})}{|g'(x_{0})|}}.} It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by δ ( g ( x ) ) = ∑ i δ ( x − x i ) | g ′ ( x i ) | {\displaystyle \delta (g(x))=\sum _{i}{\frac {\delta (x-x_{i})}{|g'(x_{i})|}}} where the sum extends over all roots of g(x), which are assumed to be simple. Thus, for example δ ( x 2 − α 2 ) = 1 2 | α | [ δ ( x + α ) + δ ( x − α ) ] . {\displaystyle \delta \left(x^{2}-\alpha ^{2}\right)={\frac {1}{2|\alpha |}}{\Big [}\delta \left(x+\alpha \right)+\delta \left(x-\alpha \right){\Big ]}.} In the integral form, the generalized scaling property may be written as ∫ − ∞ ∞ f ( x ) δ ( g ( x ) ) d x = ∑ i f ( x i ) | g ′ ( x i ) | . {\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (g(x))\,dx=\sum _{i}{\frac {f(x_{i})}{|g'(x_{i})|}}.} === Indefinite integral === For a constant a ∈ R {\displaystyle a\in \mathbb {R} } and a "well-behaved" arbitrary real-valued function y(x), ∫ y ( x ) δ ( x − a ) d x = y ( a ) H ( x − a ) + c , {\displaystyle \displaystyle {\int }y(x)\delta (x-a)dx=y(a)H(x-a)+c,} where H(x) is the Heaviside step function and c is an integration constant. === Properties in n dimensions === The delta distribution in an n-dimensional space satisfies the following scaling property instead, δ ( α x ) = | α | − n δ ( x ) , {\displaystyle \delta (\alpha {\boldsymbol {x}})=|\alpha |^{-n}\delta ({\boldsymbol {x}})~,} so that δ is a homogeneous distribution of degree −n. Under any reflection or rotation ρ, the delta function is invariant, δ ( ρ x ) = δ ( x ) . 
{\displaystyle \delta (\rho {\boldsymbol {x}})=\delta ({\boldsymbol {x}})~.} As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function g: Rn → Rn uniquely so that the following holds ∫ R n δ ( g ( x ) ) f ( g ( x ) ) | det g ′ ( x ) | d x = ∫ g ( R n ) δ ( u ) f ( u ) d u {\displaystyle \int _{\mathbb {R} ^{n}}\delta (g({\boldsymbol {x}}))\,f(g({\boldsymbol {x}}))\left|\det g'({\boldsymbol {x}})\right|d{\boldsymbol {x}}=\int _{g(\mathbb {R} ^{n})}\delta ({\boldsymbol {u}})f({\boldsymbol {u}})\,d{\boldsymbol {u}}} for all compactly supported functions f. Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function g : Rn → R such that the gradient of g is nowhere zero, the following identity holds ∫ R n f ( x ) δ ( g ( x ) ) d x = ∫ g − 1 ( 0 ) f ( x ) | ∇ g | d σ ( x ) {\displaystyle \int _{\mathbb {R} ^{n}}f({\boldsymbol {x}})\,\delta (g({\boldsymbol {x}}))\,d{\boldsymbol {x}}=\int _{g^{-1}(0)}{\frac {f({\boldsymbol {x}})}{|{\boldsymbol {\nabla }}g|}}\,d\sigma ({\boldsymbol {x}})} where the integral on the right is over g−1(0), the (n − 1)-dimensional surface defined by g(x) = 0 with respect to the Minkowski content measure. This is known as a simple layer integral. More generally, if S is a smooth hypersurface of Rn, then we can associate to S the distribution that integrates any compactly supported smooth function g over S: δ S [ g ] = ∫ S g ( s ) d σ ( s ) {\displaystyle \delta _{S}[g]=\int _{S}g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}})} where σ is the hypersurface measure associated to S. This generalization is associated with the potential theory of simple layer potentials on S. 
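The one-dimensional scaling and composition identities above can be checked numerically by substituting a narrow Gaussian for δ. The sketch below (the width ε, the choices g = cos, f = exp, a = −3, α = 3/2, and the grid sizes are all illustrative assumptions) compares trapezoid-rule integrals against the predicted values g(0)/|a| and [f(α) + f(−α)]/(2|α|):

```python
import math

def eta(u, eps):
    # narrow normalized Gaussian standing in for the delta function
    return math.exp(-u * u / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def trap(F, lo, hi, n):
    # trapezoid rule for integrating F over [lo, hi] with n sample points
    h = (hi - lo) / (n - 1)
    s = 0.5 * (F(lo) + F(hi))
    for i in range(1, n - 1):
        s += F(lo + i * h)
    return s * h

eps = 1e-3

# scaling: integral of g(x) * delta(a x) dx = g(0)/|a|   (here a = -3)
a, g = -3.0, math.cos
scale = trap(lambda x: g(x) * eta(a * x, eps), -5.0, 5.0, 200001)
print(scale, 1.0 / abs(a))

# composition: delta(x^2 - alpha^2) = [delta(x-alpha) + delta(x+alpha)]/(2|alpha|)
alpha, f = 1.5, math.exp
comp = trap(lambda x: f(x) * eta(x * x - alpha * alpha, eps), -4.0, 4.0, 200001)
print(comp, (f(alpha) + f(-alpha)) / (2 * alpha))
```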
If D is a domain in Rn with smooth boundary S, then δS is equal to the normal derivative of the indicator function of D in the distribution sense, − ∫ R n g ( x ) ∂ 1 D ( x ) ∂ n d x = ∫ S g ( s ) d σ ( s ) , {\displaystyle -\int _{\mathbb {R} ^{n}}g({\boldsymbol {x}})\,{\frac {\partial 1_{D}({\boldsymbol {x}})}{\partial n}}\,d{\boldsymbol {x}}=\int _{S}\,g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}}),} where n is the outward normal. For a proof, see e.g. the article on the surface delta function. In three dimensions, the delta function is represented in spherical coordinates by: δ ( r − r 0 ) = { 1 r 2 sin ⁡ θ δ ( r − r 0 ) δ ( θ − θ 0 ) δ ( ϕ − ϕ 0 ) x 0 , y 0 , z 0 ≠ 0 1 2 π r 2 sin ⁡ θ δ ( r − r 0 ) δ ( θ − θ 0 ) x 0 = y 0 = 0 , z 0 ≠ 0 1 4 π r 2 δ ( r − r 0 ) x 0 = y 0 = z 0 = 0 {\displaystyle \delta ({\boldsymbol {r}}-{\boldsymbol {r}}_{0})={\begin{cases}\displaystyle {\frac {1}{r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})\delta (\phi -\phi _{0})&x_{0},y_{0},z_{0}\neq 0\\\displaystyle {\frac {1}{2\pi r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})&x_{0}=y_{0}=0,\ z_{0}\neq 0\\\displaystyle {\frac {1}{4\pi r^{2}}}\delta (r-r_{0})&x_{0}=y_{0}=z_{0}=0\end{cases}}} == Derivatives == The derivative of the Dirac delta distribution, denoted δ′ and also called the Dirac delta prime or Dirac delta derivative as described in Laplacian of the indicator, is defined on compactly supported smooth test functions φ by δ ′ [ φ ] = − δ [ φ ′ ] = − φ ′ ( 0 ) . {\displaystyle \delta '[\varphi ]=-\delta [\varphi ']=-\varphi '(0).} The first equality here is a kind of integration by parts, for if δ were a true function then ∫ − ∞ ∞ δ ′ ( x ) φ ( x ) d x = δ ( x ) φ ( x ) | − ∞ ∞ − ∫ − ∞ ∞ δ ( x ) φ ′ ( x ) d x = − ∫ − ∞ ∞ δ ( x ) φ ′ ( x ) d x = − φ ′ ( 0 ) . 
{\displaystyle \int _{-\infty }^{\infty }\delta '(x)\varphi (x)\,dx=\delta (x)\varphi (x)|_{-\infty }^{\infty }-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\varphi '(0).} By mathematical induction, the k-th derivative of δ is defined similarly as the distribution given on test functions by δ ( k ) [ φ ] = ( − 1 ) k φ ( k ) ( 0 ) . {\displaystyle \delta ^{(k)}[\varphi ]=(-1)^{k}\varphi ^{(k)}(0).} In particular, δ is an infinitely differentiable distribution. The first derivative of the delta function is the distributional limit of the difference quotients: δ ′ ( x ) = lim h → 0 δ ( x + h ) − δ ( x ) h . {\displaystyle \delta '(x)=\lim _{h\to 0}{\frac {\delta (x+h)-\delta (x)}{h}}.} More properly, one has δ ′ = lim h → 0 1 h ( τ h δ − δ ) {\displaystyle \delta '=\lim _{h\to 0}{\frac {1}{h}}(\tau _{h}\delta -\delta )} where τh is the translation operator, defined on functions by τhφ(x) = φ(x + h), and on a distribution S by ( τ h S ) [ φ ] = S [ τ − h φ ] . {\displaystyle (\tau _{h}S)[\varphi ]=S[\tau _{-h}\varphi ].} In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function. The derivative of the delta function satisfies a number of basic properties, including: δ ′ ( − x ) = − δ ′ ( x ) x δ ′ ( x ) = − δ ( x ) {\displaystyle {\begin{aligned}\delta '(-x)&=-\delta '(x)\\x\delta '(x)&=-\delta (x)\end{aligned}}} which can be shown by applying a test function and integrating by parts. 
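The defining property δ′[φ] = −φ′(0) can likewise be observed numerically by integrating a test function against the derivative of a Gaussian nascent delta. A sketch with illustrative parameters (φ = sin, so φ′(0) = 1):

```python
import math

def d_eta(x, eps):
    # derivative of the Gaussian nascent delta: eta'(x) = -(x/eps^2) * eta(x)
    g = math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
    return -x / (eps * eps) * g

def trap(F, lo, hi, n):
    # trapezoid rule for integrating F over [lo, hi] with n sample points
    h = (hi - lo) / (n - 1)
    s = 0.5 * (F(lo) + F(hi))
    for i in range(1, n - 1):
        s += F(lo + i * h)
    return s * h

phi = math.sin                                  # phi'(0) = cos(0) = 1
val = trap(lambda x: d_eta(x, 1e-3) * phi(x), -1.0, 1.0, 200001)
print(val)  # close to -phi'(0) = -1
```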
The latter of these properties can also be demonstrated by applying the definition of the distributional derivative, Leibniz's theorem, and the linearity of the inner product: ⟨ x δ ′ , φ ⟩ = ⟨ δ ′ , x φ ⟩ = − ⟨ δ , ( x φ ) ′ ⟩ = − ⟨ δ , x ′ φ + x φ ′ ⟩ = − ⟨ δ , x ′ φ ⟩ − ⟨ δ , x φ ′ ⟩ = − x ′ ( 0 ) φ ( 0 ) − x ( 0 ) φ ′ ( 0 ) = − x ′ ( 0 ) ⟨ δ , φ ⟩ − x ( 0 ) ⟨ δ , φ ′ ⟩ = − x ′ ( 0 ) ⟨ δ , φ ⟩ + x ( 0 ) ⟨ δ ′ , φ ⟩ = ⟨ x ( 0 ) δ ′ − x ′ ( 0 ) δ , φ ⟩ ⟹ x ( t ) δ ′ ( t ) = x ( 0 ) δ ′ ( t ) − x ′ ( 0 ) δ ( t ) = − x ′ ( 0 ) δ ( t ) = − δ ( t ) {\displaystyle {\begin{aligned}\langle x\delta ',\varphi \rangle \,&=\,\langle \delta ',x\varphi \rangle \,=\,-\langle \delta ,(x\varphi )'\rangle \,=\,-\langle \delta ,x'\varphi +x\varphi '\rangle \,=\,-\langle \delta ,x'\varphi \rangle -\langle \delta ,x\varphi '\rangle \,=\,-x'(0)\varphi (0)-x(0)\varphi '(0)\\&=\,-x'(0)\langle \delta ,\varphi \rangle -x(0)\langle \delta ,\varphi '\rangle \,=\,-x'(0)\langle \delta ,\varphi \rangle +x(0)\langle \delta ',\varphi \rangle \,=\,\langle x(0)\delta '-x'(0)\delta ,\varphi \rangle \\\Longrightarrow x(t)\delta '(t)&=x(0)\delta '(t)-x'(0)\delta (t)=-x'(0)\delta (t)=-\delta (t)\end{aligned}}} (here x(t) = t, so that x(0) = 0 and x′(0) = 1). Furthermore, the convolution of δ′ with a compactly-supported, smooth function f is δ ′ ∗ f = δ ∗ f ′ = f ′ , {\displaystyle \delta '*f=\delta *f'=f',} which follows from the properties of the distributional derivative of a convolution. === Higher dimensions === More generally, on an open set U in the n-dimensional Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , the Dirac delta distribution centered at a point a ∈ U is defined by δ a [ φ ] = φ ( a ) {\displaystyle \delta _{a}[\varphi ]=\varphi (a)} for all φ ∈ C c ∞ ( U ) {\displaystyle \varphi \in C_{c}^{\infty }(U)} , the space of all smooth functions with compact support on U.
If α = ( α 1 , … , α n ) {\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})} is any multi-index with | α | = α 1 + ⋯ + α n {\displaystyle |\alpha |=\alpha _{1}+\cdots +\alpha _{n}} and ∂ α {\displaystyle \partial ^{\alpha }} denotes the associated mixed partial derivative operator, then the α-th derivative ∂αδa of δa is given by ⟨ ∂ α δ a , φ ⟩ = ( − 1 ) | α | ⟨ δ a , ∂ α φ ⟩ = ( − 1 ) | α | ∂ α φ ( x ) | x = a for all φ ∈ C c ∞ ( U ) . {\displaystyle \left\langle \partial ^{\alpha }\delta _{a},\,\varphi \right\rangle =(-1)^{|\alpha |}\left\langle \delta _{a},\partial ^{\alpha }\varphi \right\rangle =(-1)^{|\alpha |}\partial ^{\alpha }\varphi (x){\Big |}_{x=a}\quad {\text{ for all }}\varphi \in C_{c}^{\infty }(U).} That is, the α-th derivative of δa is the distribution whose value on any test function φ is the α-th derivative of φ at a (with the appropriate positive or negative sign). The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles. Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an integer m and coefficients cα such that S = ∑ | α | ≤ m c α ∂ α δ a . {\displaystyle S=\sum _{|\alpha |\leq m}c_{\alpha }\partial ^{\alpha }\delta _{a}.} == Representations == === Nascent delta function === The delta function can be viewed as the limit of a sequence of functions δ ( x ) = lim ε → 0 + η ε ( x ) , {\displaystyle \delta (x)=\lim _{\varepsilon \to 0^{+}}\eta _{\varepsilon }(x),} where ηε(x) is sometimes called a nascent delta function. 
This limit is meant in a weak sense: either that lim ε → 0 + ∫ − ∞ ∞ η ε ( x ) f ( x ) d x = f ( 0 ) {\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }\eta _{\varepsilon }(x)f(x)\,dx=f(0)} (5) holds for all continuous functions f having compact support, or that this limit holds for all smooth functions f with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions. ==== Approximations to the identity ==== Typically, a nascent delta function ηε can be constructed in the following manner. Let η be an absolutely integrable function on R of total integral 1, and define η ε ( x ) = ε − 1 η ( x ε ) . {\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\eta \left({\frac {x}{\varepsilon }}\right).} In n dimensions, one uses instead the scaling η ε ( x ) = ε − n η ( x ε ) . {\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-n}\eta \left({\frac {x}{\varepsilon }}\right).} Then a simple change of variables shows that ηε also has integral 1. One may show that (5) holds for all continuous compactly supported functions f, and so ηε converges weakly to δ in the sense of measures. The ηε constructed in this way are known as an approximation to the identity. The terminology comes from the fact that the space L1(R) of absolutely integrable functions is closed under the operation of convolution of functions: f ∗ g ∈ L1(R) whenever f and g are in L1(R). However, there is no identity in L1(R) for the convolution product: no element h such that f ∗ h = f for all f. Nevertheless, the sequence ηε does approximate such an identity in the sense that f ∗ η ε → f as ε → 0. {\displaystyle f*\eta _{\varepsilon }\to f\quad {\text{as }}\varepsilon \to 0.} This limit holds in the sense of mean convergence (convergence in L1). Further conditions on the ηε, for instance that it be a mollifier associated to a compactly supported function, are needed to ensure pointwise convergence almost everywhere.
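The statement f ∗ ηε → f can be observed numerically. The sketch below (hat-function input, Gaussian ηε, and grid sizes are illustrative assumptions) evaluates the convolution at a single point and watches it approach the exact value as ε shrinks:

```python
import math

def eta(x, eps):
    # normalized Gaussian of width eps: an approximation to the identity
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def convolve_at(f, t, eps, lo=-8.0, hi=8.0, n=160001):
    # (f * eta_eps)(t) = integral of f(tau) * eta_eps(t - tau), trapezoid rule
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        tau = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        s += w * f(tau) * eta(t - tau, eps)
    return s * h

f = lambda x: max(0.0, 1.0 - abs(x))         # hat function (continuous)
for eps in (0.5, 0.1, 0.01):
    print(eps, convolve_at(f, 0.5, eps))     # tends to f(0.5) = 0.5
```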
If the initial η = η1 is itself smooth and compactly supported then the sequence is called a mollifier. The standard mollifier is obtained by choosing η to be a suitably normalized bump function, for instance η ( x ) = { 1 I n exp ⁡ ( − 1 1 − | x | 2 ) if | x | < 1 0 if | x | ≥ 1. {\displaystyle \eta (x)={\begin{cases}{\frac {1}{I_{n}}}\exp {\Big (}-{\frac {1}{1-|x|^{2}}}{\Big )}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1.\end{cases}}} ( I n {\displaystyle I_{n}} ensuring that the total integral is 1). In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can be obtained by taking η1 to be a hat function. With this choice of η1, one has η ε ( x ) = ε − 1 max ( 1 − | x ε | , 0 ) {\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\max \left(1-\left|{\frac {x}{\varepsilon }}\right|,0\right)} which are all continuous and compactly supported, although not smooth and so not a mollifier. ==== Probabilistic considerations ==== In the context of probability theory, it is natural to impose the additional condition that the initial η1 in an approximation to the identity should be positive, as such a function then represents a probability distribution. Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function. Taking η1 to be any probability distribution at all, and letting ηε(x) = η1(x/ε)/ε as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, η has mean 0 and has small higher moments. For instance, if η1 is the uniform distribution on [ − 1 2 , 1 2 ] {\textstyle \left[-{\frac {1}{2}},{\frac {1}{2}}\right]} , also known as the rectangular function, then: η ε ( x ) = 1 ε rect ⁡ ( x ε ) = { 1 ε , − ε 2 < x < ε 2 , 0 , otherwise . 
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x}{\varepsilon }}\right)={\begin{cases}{\frac {1}{\varepsilon }},&-{\frac {\varepsilon }{2}}<x<{\frac {\varepsilon }{2}},\\0,&{\text{otherwise}}.\end{cases}}} Another example is with the Wigner semicircle distribution η ε ( x ) = { 2 π ε 2 ε 2 − x 2 , − ε < x < ε , 0 , otherwise . {\displaystyle \eta _{\varepsilon }(x)={\begin{cases}{\frac {2}{\pi \varepsilon ^{2}}}{\sqrt {\varepsilon ^{2}-x^{2}}},&-\varepsilon <x<\varepsilon ,\\0,&{\text{otherwise}}.\end{cases}}} This is continuous and compactly supported, but not a mollifier because it is not smooth. ==== Semigroups ==== Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the convolution of ηε with ηδ must satisfy η ε ∗ η δ = η ε + δ {\displaystyle \eta _{\varepsilon }*\eta _{\delta }=\eta _{\varepsilon +\delta }} for all ε, δ > 0. Convolution semigroups in L1 that form a nascent delta function are always an approximation to the identity in the above sense, however the semigroup condition is quite a strong restriction. In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem { ∂ ∂ t η ( t , x ) = A η ( t , x ) , t > 0 lim t → 0 + η ( t , x ) = δ ( x ) {\displaystyle {\begin{cases}{\dfrac {\partial }{\partial t}}\eta (t,x)=A\eta (t,x),\quad t>0\\[5pt]\displaystyle \lim _{t\to 0^{+}}\eta (t,x)=\delta (x)\end{cases}}} in which the limit is as usual understood in the weak sense. Setting ηε(x) = η(ε, x) gives the associated nascent delta function. 
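The semigroup law ηε ∗ ηδ = ηε+δ can be verified numerically for the Gaussian heat kernel, which appears among the examples below; variances add under convolution of Gaussians. A sketch with illustrative parameters:

```python
import math

def heat(x, t):
    # 1-D heat kernel: centered Gaussian with variance t (total integral 1)
    return math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)

def convolve_at(x, s, t, lo=-20.0, hi=20.0, n=200001):
    # (eta_s * eta_t)(x) by the trapezoid rule
    h = (hi - lo) / (n - 1)
    acc = 0.0
    for i in range(n):
        y = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        acc += w * heat(y, s) * heat(x - y, t)
    return acc * h

# semigroup law: eta_s * eta_t = eta_{s+t}
lhs = convolve_at(0.7, 0.3, 0.5)
rhs = heat(0.7, 0.8)
print(lhs, rhs)  # the two values should agree
```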
Some examples of physically important convolution semigroups arising from such a fundamental solution include the following. ===== The heat kernel ===== The heat kernel, defined by η ε ( x ) = 1 2 π ε e − x 2 2 ε {\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\sqrt {2\pi \varepsilon }}}\mathrm {e} ^{-{\frac {x^{2}}{2\varepsilon }}}} represents the temperature in an infinite wire at time t > 0, if a unit of heat energy is stored at the origin of the wire at time t = 0. This semigroup evolves according to the one-dimensional heat equation: ∂ u ∂ t = 1 2 ∂ 2 u ∂ x 2 . {\displaystyle {\frac {\partial u}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}u}{\partial x^{2}}}.} In probability theory, ηε(x) is a normal distribution of variance ε and mean 0. It represents the probability density at time t = ε of the position of a particle starting at the origin following a standard Brownian motion. In this context, the semigroup condition is then an expression of the Markov property of Brownian motion. In higher-dimensional Euclidean space Rn, the heat kernel is η ε = 1 ( 2 π ε ) n / 2 e − x ⋅ x 2 ε , {\displaystyle \eta _{\varepsilon }={\frac {1}{(2\pi \varepsilon )^{n/2}}}\mathrm {e} ^{-{\frac {x\cdot x}{2\varepsilon }}},} and has the same physical interpretation, mutatis mutandis. It also represents a nascent delta function in the sense that ηε → δ in the distribution sense as ε → 0. ===== The Poisson kernel ===== The Poisson kernel η ε ( x ) = 1 π I m { 1 x − i ε } = 1 π ε ε 2 + x 2 = 1 2 π ∫ − ∞ ∞ e i ξ x − | ε ξ | d ξ {\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi }}\mathrm {Im} \left\{{\frac {1}{x-\mathrm {i} \varepsilon }}\right\}={\frac {1}{\pi }}{\frac {\varepsilon }{\varepsilon ^{2}+x^{2}}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\mathrm {e} ^{\mathrm {i} \xi x-|\varepsilon \xi |}\,d\xi } is the fundamental solution of the Laplace equation in the upper half-plane. 
It represents the electrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to the Cauchy distribution and Epanechnikov and Gaussian kernel functions. This semigroup evolves according to the equation ∂ u ∂ t = − ( − ∂ 2 ∂ x 2 ) 1 2 u ( t , x ) {\displaystyle {\frac {\partial u}{\partial t}}=-\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}u(t,x)} where the operator is rigorously defined as the Fourier multiplier F [ ( − ∂ 2 ∂ x 2 ) 1 2 f ] ( ξ ) = | 2 π ξ | F f ( ξ ) . {\displaystyle {\mathcal {F}}\left[\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}f\right](\xi )=|2\pi \xi |{\mathcal {F}}f(\xi ).} ==== Oscillatory integrals ==== In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the Euler–Tricomi equation of transonic gas dynamics, is the rescaled Airy function ε − 1 / 3 Ai ⁡ ( x ε − 1 / 3 ) . {\displaystyle \varepsilon ^{-1/3}\operatorname {Ai} \left(x\varepsilon ^{-1/3}\right).} Although, using the Fourier transform, it is easy to see that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is the Dirichlet kernel below), rather than in the sense of measures. Another example is the Cauchy problem for the wave equation in R1+1: c − 2 ∂ 2 u ∂ t 2 − Δ u = 0 u = 0 , ∂ u ∂ t = δ for t = 0.
{\displaystyle {\begin{aligned}c^{-2}{\frac {\partial ^{2}u}{\partial t^{2}}}-\Delta u&=0\\u=0,\quad {\frac {\partial u}{\partial t}}=\delta &\qquad {\text{for }}t=0.\end{aligned}}} The solution u represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin. Other approximations to the identity of this kind include the sinc function (used widely in electronics and telecommunications) η ε ( x ) = 1 π x sin ⁡ ( x ε ) = 1 2 π ∫ − 1 ε 1 ε cos ⁡ ( k x ) d k {\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi x}}\sin \left({\frac {x}{\varepsilon }}\right)={\frac {1}{2\pi }}\int _{-{\frac {1}{\varepsilon }}}^{\frac {1}{\varepsilon }}\cos(kx)\,dk} and the Bessel function η ε ( x ) = 1 ε J 1 ε ( x + 1 ε ) . {\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}J_{\frac {1}{\varepsilon }}\left({\frac {x+1}{\varepsilon }}\right).} === Plane wave decomposition === One approach to the study of a linear partial differential equation L [ u ] = f , {\displaystyle L[u]=f,} where L is a differential operator on Rn, is to seek first a fundamental solution, which is a solution of the equation L [ u ] = δ . {\displaystyle L[u]=\delta .} When L is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the form L [ u ] = h {\displaystyle L[u]=h} where h is a plane wave function, meaning that it has the form h = h ( x ⋅ ξ ) {\displaystyle h=h(x\cdot \xi )} for some vector ξ. Such an equation can be resolved (if the coefficients of L are analytic functions) by the Cauchy–Kovalevskaya theorem or (if the coefficients of L are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations. 
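As an illustrative sketch (not from the original text), the two expressions given above for the sinc kernel, sin(x/ε)/(πx) and the cosine integral over |k| ≤ 1/ε, can be compared numerically at a few sample points:

```python
import numpy as np

def sinc_kernel(x, eps):
    """Band-limited nascent delta: sin(x/eps) / (pi x), for x != 0."""
    return np.sin(x / eps) / (np.pi * x)

def cosine_form(x, eps, n=20001):
    """Riemann-sum evaluation of (1/2pi) * integral of cos(k x) over |k| <= 1/eps."""
    k = np.linspace(-1 / eps, 1 / eps, n)
    dk = k[1] - k[0]
    return np.sum(np.cos(k * x)) * dk / (2 * np.pi)

eps = 0.2
xs = np.array([0.3, 1.0, 2.5])
closed_form = sinc_kernel(xs, eps)
integral_form = np.array([cosine_form(x, eps) for x in xs])
```

The two forms agree up to quadrature error, reflecting the exact identity (1/2π)∫cos(kx)dk = sin(x/ε)/(πx).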
Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by Johann Radon, and then developed in this form by Fritz John (1955). Choose k so that n + k is an even integer, and for a real number s, put g ( s ) = Re ⁡ [ − s k log ⁡ ( − i s ) k ! ( 2 π i ) n ] = { | s | k 4 k ! ( 2 π i ) n − 1 n odd − | s | k log ⁡ | s | k ! ( 2 π i ) n n even. {\displaystyle g(s)=\operatorname {Re} \left[{\frac {-s^{k}\log(-is)}{k!(2\pi i)^{n}}}\right]={\begin{cases}{\frac {|s|^{k}}{4k!(2\pi i)^{n-1}}}&n{\text{ odd}}\\[5pt]-{\frac {|s|^{k}\log |s|}{k!(2\pi i)^{n}}}&n{\text{ even.}}\end{cases}}} Then δ is obtained by applying a power of the Laplacian to the integral with respect to the unit sphere measure dω of g(x · ξ) for ξ in the unit sphere Sn−1: δ ( x ) = Δ x ( n + k ) / 2 ∫ S n − 1 g ( x ⋅ ξ ) d ω ξ . {\displaystyle \delta (x)=\Delta _{x}^{(n+k)/2}\int _{S^{n-1}}g(x\cdot \xi )\,d\omega _{\xi }.} The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function φ, φ ( x ) = ∫ R n φ ( y ) d y Δ x n + k 2 ∫ S n − 1 g ( ( x − y ) ⋅ ξ ) d ω ξ . {\displaystyle \varphi (x)=\int _{\mathbf {R} ^{n}}\varphi (y)\,dy\,\Delta _{x}^{\frac {n+k}{2}}\int _{S^{n-1}}g((x-y)\cdot \xi )\,d\omega _{\xi }.} The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the Radon transform because it recovers the value of φ(x) from its integrals over hyperplanes. 
For instance, if n is odd and k = 1, then the integral on the right hand side is c n Δ x n + 1 2 ∬ S n − 1 φ ( y ) | ( y − x ) ⋅ ξ | d ω ξ d y = c n Δ x ( n + 1 ) / 2 ∫ S n − 1 d ω ξ ∫ − ∞ ∞ | p | R φ ( ξ , p + x ⋅ ξ ) d p {\displaystyle {\begin{aligned}&c_{n}\Delta _{x}^{\frac {n+1}{2}}\iint _{S^{n-1}}\varphi (y)|(y-x)\cdot \xi |\,d\omega _{\xi }\,dy\\[5pt]&\qquad =c_{n}\Delta _{x}^{(n+1)/2}\int _{S^{n-1}}\,d\omega _{\xi }\int _{-\infty }^{\infty }|p|R\varphi (\xi ,p+x\cdot \xi )\,dp\end{aligned}}} where Rφ(ξ, p) is the Radon transform of φ: R φ ( ξ , p ) = ∫ x ⋅ ξ = p φ ( x ) d n − 1 x . {\displaystyle R\varphi (\xi ,p)=\int _{x\cdot \xi =p}\varphi (x)\,d^{n-1}x.} An alternative equivalent expression of the plane wave decomposition is: δ ( x ) = { ( n − 1 ) ! ( 2 π i ) n ∫ S n − 1 ( x ⋅ ξ ) − n d ω ξ n even 1 2 ( 2 π i ) n − 1 ∫ S n − 1 δ ( n − 1 ) ( x ⋅ ξ ) d ω ξ n odd . {\displaystyle \delta (x)={\begin{cases}{\frac {(n-1)!}{(2\pi i)^{n}}}\displaystyle \int _{S^{n-1}}(x\cdot \xi )^{-n}\,d\omega _{\xi }&n{\text{ even}}\\{\frac {1}{2(2\pi i)^{n-1}}}\displaystyle \int _{S^{n-1}}\delta ^{(n-1)}(x\cdot \xi )\,d\omega _{\xi }&n{\text{ odd}}.\end{cases}}} === Fourier transform === The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one finds δ ^ ( ξ ) = ∫ − ∞ ∞ e − 2 π i x ξ δ ( x ) d x = 1. {\displaystyle {\widehat {\delta }}(\xi )=\int _{-\infty }^{\infty }e^{-2\pi ix\xi }\,\delta (x)dx=1.} Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier transform under the duality pairing ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } of tempered distributions with Schwartz functions. Thus δ ^ {\displaystyle {\widehat {\delta }}} is defined as the unique tempered distribution satisfying ⟨ δ ^ , φ ⟩ = ⟨ δ , φ ^ ⟩ {\displaystyle \langle {\widehat {\delta }},\varphi \rangle =\langle \delta ,{\widehat {\varphi }}\rangle } for all Schwartz functions φ.
And indeed it follows from this that δ ^ = 1. {\displaystyle {\widehat {\delta }}=1.} As a result of this identity, the convolution of the delta function with any other tempered distribution S is simply S: S ∗ δ = S . {\displaystyle S*\delta =S.} That is to say that δ is an identity element for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is an associative algebra with identity the delta function. This property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant system, and applying the linear time-invariant system measures its impulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for δ, and once it is known, it characterizes the system completely. See LTI system theory § Impulse response and convolution. The inverse Fourier transform of the tempered distribution f(ξ) = 1 is the delta function. Formally, this is expressed as ∫ − ∞ ∞ 1 ⋅ e 2 π i x ξ d ξ = δ ( x ) {\displaystyle \int _{-\infty }^{\infty }1\cdot e^{2\pi ix\xi }\,d\xi =\delta (x)} and more rigorously, it follows since ⟨ 1 , f ^ ⟩ = f ( 0 ) = ⟨ δ , f ⟩ {\displaystyle \langle 1,{\widehat {f}}\rangle =f(0)=\langle \delta ,f\rangle } for all Schwartz functions f. In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on R. Formally, one has ∫ − ∞ ∞ e i 2 π ξ 1 t [ e i 2 π ξ 2 t ] ∗ d t = ∫ − ∞ ∞ e − i 2 π ( ξ 2 − ξ 1 ) t d t = δ ( ξ 2 − ξ 1 ) . 
{\displaystyle \int _{-\infty }^{\infty }e^{i2\pi \xi _{1}t}\left[e^{i2\pi \xi _{2}t}\right]^{*}\,dt=\int _{-\infty }^{\infty }e^{-i2\pi (\xi _{2}-\xi _{1})t}\,dt=\delta (\xi _{2}-\xi _{1}).} This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution f ( t ) = e i 2 π ξ 1 t {\displaystyle f(t)=e^{i2\pi \xi _{1}t}} is f ^ ( ξ 2 ) = δ ( ξ 1 − ξ 2 ) {\displaystyle {\widehat {f}}(\xi _{2})=\delta (\xi _{1}-\xi _{2})} which again follows by imposing self-adjointness of the Fourier transform. By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be ∫ 0 ∞ δ ( t − a ) e − s t d t = e − s a . {\displaystyle \int _{0}^{\infty }\delta (t-a)\,e^{-st}\,dt=e^{-sa}.} ==== Fourier kernels ==== In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series associated with a periodic function converges to the function. The n-th partial sum of the Fourier series of a function f of period 2π is defined by convolution (on the interval [−π,π]) with the Dirichlet kernel: D N ( x ) = ∑ n = − N N e i n x = sin ⁡ ( ( N + 1 2 ) x ) sin ⁡ ( x / 2 ) . {\displaystyle D_{N}(x)=\sum _{n=-N}^{N}e^{inx}={\frac {\sin \left(\left(N+{\frac {1}{2}}\right)x\right)}{\sin(x/2)}}.} Thus, s N ( f ) ( x ) = D N ∗ f ( x ) = ∑ n = − N N a n e i n x {\displaystyle s_{N}(f)(x)=D_{N}*f(x)=\sum _{n=-N}^{N}a_{n}e^{inx}} where a n = 1 2 π ∫ − π π f ( y ) e − i n y d y . {\displaystyle a_{n}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(y)e^{-iny}\,dy.} A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval [−π,π] tends to a multiple of the delta function as N → ∞. This is interpreted in the distribution sense, that s N ( f ) ( 0 ) = ∫ − π π D N ( x ) f ( x ) d x → 2 π f ( 0 ) {\displaystyle s_{N}(f)(0)=\int _{-\pi }^{\pi }D_{N}(x)f(x)\,dx\to 2\pi f(0)} for every compactly supported smooth function f. 
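The distributional convergence of the Dirichlet kernel can be observed numerically. This is a sketch, not from the text: it pairs D_N against the arbitrarily chosen smooth function e^{−x²}, for which f(0) = 1, so the pairings should approach 2π:

```python
import numpy as np

def dirichlet(x, N):
    """Dirichlet kernel D_N(x) = sin((N + 1/2) x) / sin(x/2)."""
    return np.sin((N + 0.5) * x) / np.sin(x / 2)

def pair_with_dirichlet(f, N, n=400000):
    """Midpoint-rule approximation of the integral of D_N(x) f(x) over [-pi, pi].

    The midpoint grid avoids the removable singularity of D_N at x = 0.
    """
    dx = 2 * np.pi / n
    x = -np.pi + (np.arange(n) + 0.5) * dx
    return np.sum(dirichlet(x, N) * f(x)) * dx

f = lambda x: np.exp(-x**2)   # smooth test function with f(0) = 1
pairings = [pair_with_dirichlet(f, N) for N in (5, 20, 80)]
```

Each pairing equals 2π times the N-th partial Fourier sum of f evaluated at 0, so the values cluster around 2π ≈ 6.283 already for modest N.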
Thus, formally one has δ ( x ) = 1 2 π ∑ n = − ∞ ∞ e i n x {\displaystyle \delta (x)={\frac {1}{2\pi }}\sum _{n=-\infty }^{\infty }e^{inx}} on the interval [−π,π]. Despite this, the result does not hold for all compactly supported continuous functions: that is, DN does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety of summability methods to produce convergence. The method of Cesàro summation leads to the Fejér kernel F N ( x ) = 1 N ∑ n = 0 N − 1 D n ( x ) = 1 N ( sin ⁡ N x 2 sin ⁡ x 2 ) 2 . {\displaystyle F_{N}(x)={\frac {1}{N}}\sum _{n=0}^{N-1}D_{n}(x)={\frac {1}{N}}\left({\frac {\sin {\frac {Nx}{2}}}{\sin {\frac {x}{2}}}}\right)^{2}.} The Fejér kernels tend to the delta function in the stronger sense that ∫ − π π F N ( x ) f ( x ) d x → 2 π f ( 0 ) {\displaystyle \int _{-\pi }^{\pi }F_{N}(x)f(x)\,dx\to 2\pi f(0)} for every compactly supported continuous function f. The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point. === Hilbert space theory === The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L2 of square-integrable functions. Indeed, smooth compactly supported functions are dense in L2, and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of L2 and to give a stronger topology on which the delta function defines a bounded linear functional. ==== Sobolev spaces ==== The Sobolev embedding theorem for Sobolev spaces on the real line R implies that any square-integrable function f such that ‖ f ‖ H 1 2 = ∫ − ∞ ∞ | f ^ ( ξ ) | 2 ( 1 + | ξ | 2 ) d ξ < ∞ {\displaystyle \|f\|_{H^{1}}^{2}=\int _{-\infty }^{\infty }|{\widehat {f}}(\xi )|^{2}(1+|\xi |^{2})\,d\xi <\infty } is automatically continuous, and satisfies in particular δ [ f ] = | f ( 0 ) | < C ‖ f ‖ H 1 .
{\displaystyle \delta [f]=|f(0)|<C\|f\|_{H^{1}}.} Thus δ is a bounded linear functional on the Sobolev space H1. Equivalently δ is an element of the continuous dual space H−1 of H1. More generally, in n dimensions, one has δ ∈ H−s(Rn) provided s > ⁠n/2⁠. ==== Spaces of holomorphic functions ==== In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if D is a domain in the complex plane with smooth boundary, then f ( z ) = 1 2 π i ∮ ∂ D f ( ζ ) d ζ ζ − z , z ∈ D {\displaystyle f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}},\quad z\in D} for all holomorphic functions f in D that are continuous on the closure of D. As a result, the delta function δz is represented in this class of holomorphic functions by the Cauchy integral: δ z [ f ] = f ( z ) = 1 2 π i ∮ ∂ D f ( ζ ) d ζ ζ − z . {\displaystyle \delta _{z}[f]=f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}}.} Moreover, let H2(∂D) be the Hardy space consisting of the closure in L2(∂D) of all holomorphic functions in D continuous up to the boundary of D. Then functions in H2(∂D) uniquely extend to holomorphic functions in D, and the Cauchy integral formula continues to hold. In particular for z ∈ D, the delta function δz is a continuous linear functional on H2(∂D). This is a special case of the situation in several complex variables in which, for smooth domains D, the Szegő kernel plays the role of the Cauchy integral. Another representation of the delta function in a space of holomorphic functions is on the space H ( D ) ∩ L 2 ( D ) {\displaystyle H(D)\cap L^{2}(D)} of square-integrable holomorphic functions in an open set D ⊂ C n {\displaystyle D\subset \mathbb {C} ^{n}} . This is a closed subspace of L 2 ( D ) {\displaystyle L^{2}(D)} , and therefore is a Hilbert space. 
On the other hand, the functional that evaluates a holomorphic function in H ( D ) ∩ L 2 ( D ) {\displaystyle H(D)\cap L^{2}(D)} at a point z {\displaystyle z} of D {\displaystyle D} is a continuous functional, and so by the Riesz representation theorem, is represented by integration against a kernel K z ( ζ ) {\displaystyle K_{z}(\zeta )} , the Bergman kernel. This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called a reproducing kernel Hilbert space. In the special case of the unit disc, one has δ w [ f ] = f ( w ) = 1 π ∬ | z | < 1 f ( z ) d x d y ( 1 − z ¯ w ) 2 . {\displaystyle \delta _{w}[f]=f(w)={\frac {1}{\pi }}\iint _{|z|<1}{\frac {f(z)\,dx\,dy}{(1-{\bar {z}}w)^{2}}}.} ==== Resolutions of the identity ==== Given a complete orthonormal basis set of functions {φn} in a separable Hilbert space, for example, the normalized eigenvectors of a compact self-adjoint operator, any vector f can be expressed as f = ∑ n = 1 ∞ α n φ n . {\displaystyle f=\sum _{n=1}^{\infty }\alpha _{n}\varphi _{n}.} The coefficients {αn} are found as α n = ⟨ φ n , f ⟩ , {\displaystyle \alpha _{n}=\langle \varphi _{n},f\rangle ,} which may be represented by the notation: α n = φ n † f , {\displaystyle \alpha _{n}=\varphi _{n}^{\dagger }f,} a form of the bra–ket notation of Dirac. Adopting this notation, the expansion of f takes the dyadic form: f = ∑ n = 1 ∞ φ n ( φ n † f ) . {\displaystyle f=\sum _{n=1}^{\infty }\varphi _{n}\left(\varphi _{n}^{\dagger }f\right).} Letting I denote the identity operator on the Hilbert space, the expression I = ∑ n = 1 ∞ φ n φ n † , {\displaystyle I=\sum _{n=1}^{\infty }\varphi _{n}\varphi _{n}^{\dagger },} is called a resolution of the identity. 
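As a concrete sketch (the sine basis and the test function below are assumptions, not from the text), the resolution of the identity can be exercised with the orthonormal basis φn(x) = √2 sin(nπx) of L2(0, 1): the partial sums of Σ φn(φn†f) converge to f in the L2 sense.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
f = x * (1 - x)                      # a function in L2(0, 1), zero at both ends

partial_sum = np.zeros_like(f)
for n in range(1, 50):
    phi = np.sqrt(2.0) * np.sin(n * np.pi * x)   # orthonormal basis element
    alpha = np.sum(phi * f) * dx                 # alpha_n = <phi_n, f>
    partial_sum += alpha * phi                   # accumulate phi_n (phi_n^dagger f)

l2_error = np.sqrt(np.sum((partial_sum - f) ** 2) * dx)
```

Because the coefficients of x(1−x) in this basis decay like 1/n³, fifty terms already leave a very small L2 residual.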
When the Hilbert space is the space L2(D) of square-integrable functions on a domain D, the quantity: φ n φ n † , {\displaystyle \varphi _{n}\varphi _{n}^{\dagger },} is an integral operator, and the expression for f can be rewritten f ( x ) = ∑ n = 1 ∞ ∫ D ( φ n ( x ) φ n ∗ ( ξ ) ) f ( ξ ) d ξ . {\displaystyle f(x)=\sum _{n=1}^{\infty }\int _{D}\,\left(\varphi _{n}(x)\varphi _{n}^{*}(\xi )\right)f(\xi )\,d\xi .} The right-hand side converges to f in the L2 sense. It need not hold in a pointwise sense, even when f is a continuous function. Nevertheless, it is common to abuse notation and write f ( x ) = ∫ δ ( x − ξ ) f ( ξ ) d ξ , {\displaystyle f(x)=\int \,\delta (x-\xi )f(\xi )\,d\xi ,} resulting in the representation of the delta function: δ ( x − ξ ) = ∑ n = 1 ∞ φ n ( x ) φ n ∗ ( ξ ) . {\displaystyle \delta (x-\xi )=\sum _{n=1}^{\infty }\varphi _{n}(x)\varphi _{n}^{*}(\xi ).} With a suitable rigged Hilbert space (Φ, L2(D), Φ*) where Φ ⊂ L2(D) contains all compactly supported smooth functions, this summation may converge in Φ*, depending on the properties of the basis φn. In most cases of practical interest, the orthonormal basis comes from an integral or differential operator (e.g. the heat kernel), in which case the series converges in the distribution sense. === Infinitesimal delta functions === Cauchy used an infinitesimal α to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function δα satisfying ∫ F ( x ) δ α ( x ) d x = F ( 0 ) {\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)} in a number of articles in 1827. Cauchy defined an infinitesimal in Cours d'Analyse (1821) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology. Non-standard analysis allows one to rigorously treat infinitesimals.
The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real function F one has ∫ F ( x ) δ α ( x ) d x = F ( 0 ) {\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)} as anticipated by Fourier and Cauchy. == Dirac comb == A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Sha distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense, Ш ⁡ ( x ) = ∑ n = − ∞ ∞ δ ( x − n ) , {\displaystyle \operatorname {\text{Ш}} (x)=\sum _{n=-\infty }^{\infty }\delta (x-n),} which is a sequence of point masses at each of the integers. Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because if f is any Schwartz function, then the periodization of f is given by the convolution ( f ∗ Ш ) ( x ) = ∑ n = − ∞ ∞ f ( x − n ) . {\displaystyle (f*\operatorname {\text{Ш}} )(x)=\sum _{n=-\infty }^{\infty }f(x-n).} In particular, ( f ∗ Ш ) ∧ = f ^ Ш ^ = f ^ Ш {\displaystyle (f*\operatorname {\text{Ш}} )^{\wedge }={\widehat {f}}{\widehat {\operatorname {\text{Ш}} }}={\widehat {f}}\operatorname {\text{Ш}} } is precisely the Poisson summation formula. More generally, this formula remains true if f is a tempered distribution of rapid descent or, equivalently, if f ^ {\displaystyle {\widehat {f}}} is a slowly growing, ordinary function within the space of tempered distributions. == Sokhotski–Plemelj theorem == The Sokhotski–Plemelj theorem, important in quantum mechanics, relates the delta function to the distribution p.v. ⁠1/x⁠, the Cauchy principal value of the function ⁠1/x⁠, defined by ⟨ p . v .
⁡ 1 x , φ ⟩ = lim ε → 0 + ∫ | x | > ε φ ( x ) x d x . {\displaystyle \left\langle \operatorname {p.v.} {\frac {1}{x}},\varphi \right\rangle =\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {\varphi (x)}{x}}\,dx.} Sokhotsky's formula states that lim ε → 0 + 1 x ± i ε = p . v . ⁡ 1 x ∓ i π δ ( x ) , {\displaystyle \lim _{\varepsilon \to 0^{+}}{\frac {1}{x\pm i\varepsilon }}=\operatorname {p.v.} {\frac {1}{x}}\mp i\pi \delta (x),} Here the limit is understood in the distribution sense, that for all compactly supported smooth functions f, ∫ − ∞ ∞ lim ε → 0 + f ( x ) x ± i ε d x = ∓ i π f ( 0 ) + lim ε → 0 + ∫ | x | > ε f ( x ) x d x . {\displaystyle \int _{-\infty }^{\infty }\lim _{\varepsilon \to 0^{+}}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {f(x)}{x}}\,dx.} == Relationship to the Kronecker delta == The Kronecker delta δij is the quantity defined by δ i j = { 1 i = j 0 i ≠ j {\displaystyle \delta _{ij}={\begin{cases}1&i=j\\0&i\not =j\end{cases}}} for all integers i, j. This function then satisfies the following analog of the sifting property: if ai (for i in the set of all integers) is any doubly infinite sequence, then ∑ i = − ∞ ∞ a i δ i k = a k . {\displaystyle \sum _{i=-\infty }^{\infty }a_{i}\delta _{ik}=a_{k}.} Similarly, for any real or complex valued continuous function f on R, the Dirac delta satisfies the sifting property ∫ − ∞ ∞ f ( x ) δ ( x − x 0 ) d x = f ( x 0 ) . {\displaystyle \int _{-\infty }^{\infty }f(x)\delta (x-x_{0})\,dx=f(x_{0}).} This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function. == Applications == === Probability theory === In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent absolutely continuous distributions). 
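Both sifting properties can be verified directly. In the continuous case this sketch stands in for the delta with a narrow Gaussian (an approximation, so equality holds only up to a small error):

```python
import numpy as np

# Discrete sifting: sum_i a_i * delta_{ik} = a_k.
a = [2.0, -1.0, 7.0, 0.5]
k = 2
sifted = sum(a[i] * (1 if i == k else 0) for i in range(len(a)))

# Continuous sifting with a narrow Gaussian standing in for delta(x - x0).
def narrow_gaussian(x, x0, eps=1e-4):
    return np.exp(-(x - x0) ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

x = np.linspace(-5, 5, 400001)
dx = x[1] - x[0]
x0 = 1.0
sifted_integral = np.sum(np.cos(x) * narrow_gaussian(x, x0)) * dx   # close to cos(x0)
```

The discrete sum picks out a_k exactly, while the integral recovers cos(x0) up to an error that shrinks with the Gaussian's width.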
For example, the probability density function f(x) of a discrete distribution consisting of points x = {x1, ..., xn}, with corresponding probabilities p1, ..., pn, can be written as f ( x ) = ∑ i = 1 n p i δ ( x − x i ) . {\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).} As another example, consider a distribution which 6/10 of the time returns a value drawn from a standard normal distribution, and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density function of this distribution can be written as f ( x ) = 0.6 1 2 π e − x 2 2 + 0.4 δ ( x − 3.5 ) . {\displaystyle f(x)=0.6\,{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {x^{2}}{2}}}+0.4\,\delta (x-3.5).} The delta function is also used to represent the resulting probability density function of a random variable that is transformed by a continuously differentiable function. If Y = g(X) with g a continuously differentiable function, then the density of Y can be written as f Y ( y ) = ∫ − ∞ + ∞ f X ( x ) δ ( y − g ( x ) ) d x . {\displaystyle f_{Y}(y)=\int _{-\infty }^{+\infty }f_{X}(x)\delta (y-g(x))\,dx.} The delta function is also used in a completely different way to represent the local time of a diffusion process (like Brownian motion). The local time of a stochastic process B(t) is given by ℓ ( x , t ) = ∫ 0 t δ ( x − B ( s ) ) d s {\displaystyle \ell (x,t)=\int _{0}^{t}\delta (x-B(s))\,ds} and represents the amount of time that the process spends at the point x in the range of the process. More precisely, in one dimension this integral can be written ℓ ( x , t ) = lim ε → 0 + 1 2 ε ∫ 0 t 1 [ x − ε , x + ε ] ( B ( s ) ) d s {\displaystyle \ell (x,t)=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\varepsilon }}\int _{0}^{t}\mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}(B(s))\,ds} where 1 [ x − ε , x + ε ] {\displaystyle \mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}} is the indicator function of the interval [ x − ε , x + ε ] .
{\displaystyle [x-\varepsilon ,x+\varepsilon ].} === Quantum mechanics === The delta function is expedient in quantum mechanics. The wave function of a particle gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert space L2 of square-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set {|φn⟩} of wave functions is orthonormal if ⟨ φ n ∣ φ m ⟩ = δ n m , {\displaystyle \langle \varphi _{n}\mid \varphi _{m}\rangle =\delta _{nm},} where δnm is the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function |ψ⟩ can be expressed as a linear combination of the {|φn⟩} with complex coefficients: ψ = ∑ c n φ n , {\displaystyle \psi =\sum c_{n}\varphi _{n},} where cn = ⟨φn|ψ⟩. Complete orthonormal systems of wave functions appear naturally in quantum mechanics as the eigenfunctions of the Hamiltonian (of a bound system), whose eigenvalues are the energy levels. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra–ket notation this equality implies the resolution of the identity: I = ∑ | φ n ⟩ ⟨ φ n | . {\displaystyle I=\sum |\varphi _{n}\rangle \langle \varphi _{n}|.} Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable can also be continuous. An example is the position operator, Qψ(x) = xψ(x). The spectrum of the position (in one dimension) is the entire real line and is called a continuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well, i.e., to replace the Hilbert space with a rigged Hilbert space.
In this context, the position operator has a complete set of generalized eigenfunctions, labeled by the points y of the real line, given by φ y ( x ) = δ ( x − y ) . {\displaystyle \varphi _{y}(x)=\delta (x-y).} The generalized eigenfunctions of the position operator are called the eigenkets and are denoted by φy = |y⟩. Similar considerations apply to any other (unbounded) self-adjoint operator with continuous spectrum and no degenerate eigenvalues, such as the momentum operator P. In that case, there is a set Ω of real numbers (the spectrum) and a collection of distributions φy with y ∈ Ω such that P φ y = y φ y . {\displaystyle P\varphi _{y}=y\varphi _{y}.} That is, φy are the generalized eigenvectors of P. If they form an "orthonormal basis" in the distribution sense, that is: ⟨ φ y , φ y ′ ⟩ = δ ( y − y ′ ) , {\displaystyle \langle \varphi _{y},\varphi _{y'}\rangle =\delta (y-y'),} then for any test function ψ, ψ ( x ) = ∫ Ω c ( y ) φ y ( x ) d y {\displaystyle \psi (x)=\int _{\Omega }c(y)\varphi _{y}(x)\,dy} where c(y) = ⟨ψ, φy⟩. That is, there is a resolution of the identity I = ∫ Ω | φ y ⟩ ⟨ φ y | d y {\displaystyle I=\int _{\Omega }|\varphi _{y}\rangle \,\langle \varphi _{y}|\,dy} where the operator-valued integral is again understood in the weak sense. If the spectrum of P has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum. The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for a single and double potential well. === Structural mechanics === The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. 
The governing equation of a simple mass–spring system excited by a sudden force impulse I at time t = 0 can be written m d 2 ξ d t 2 + k ξ = I δ ( t ) , {\displaystyle m{\frac {d^{2}\xi }{dt^{2}}}+k\xi =I\delta (t),} where m is the mass, ξ is the deflection, and k is the spring constant. As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli theory, E I d 4 w d x 4 = q ( x ) , {\displaystyle EI{\frac {d^{4}w}{dx^{4}}}=q(x),} where EI is the bending stiffness of the beam, w is the deflection, x is the spatial coordinate, and q(x) is the load distribution. If a beam is loaded by a point force F at x = x0, the load distribution is written q ( x ) = F δ ( x − x 0 ) . {\displaystyle q(x)=F\delta (x-x_{0}).} As the integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials. Also, a point moment acting on a beam can be described by delta functions. Consider two opposing point forces F at a distance d apart. They then produce a moment M = Fd acting on the beam. Now, let the distance d approach the limit zero, while M is kept constant. The load distribution, assuming a clockwise moment acting at x = 0, is written q ( x ) = lim d → 0 ( F δ ( x ) − F δ ( x − d ) ) = lim d → 0 ( M d δ ( x ) − M d δ ( x − d ) ) = M lim d → 0 δ ( x ) − δ ( x − d ) d = M δ ′ ( x ) . {\displaystyle {\begin{aligned}q(x)&=\lim _{d\to 0}{\Big (}F\delta (x)-F\delta (x-d){\Big )}\\[4pt]&=\lim _{d\to 0}\left({\frac {M}{d}}\delta (x)-{\frac {M}{d}}\delta (x-d)\right)\\[4pt]&=M\lim _{d\to 0}{\frac {\delta (x)-\delta (x-d)}{d}}\\[4pt]&=M\delta '(x).\end{aligned}}} Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection. 
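For the mass–spring equation above, the impulse Iδ(t) acts by instantaneously setting the momentum, so the response is ξ(t) = (I/(mω)) sin(ωt) with ω = √(k/m). A sketch with arbitrary parameter values, integrating the free oscillation from the impulse-induced initial conditions:

```python
import numpy as np

m, k, I = 2.0, 8.0, 3.0
omega = np.sqrt(k / m)          # natural frequency

# The impulse I*delta(t) converts to initial conditions xi(0) = 0, xi'(0) = I/m.
dt = 1e-4
t = np.arange(0.0, 5.0, dt)
xi, v = 0.0, I / m
path = []
for _ in t:
    path.append(xi)
    v -= (k / m) * xi * dt      # semi-implicit (symplectic) Euler step
    xi += v * dt
path = np.array(path)

exact = (I / (m * omega)) * np.sin(omega * t)
max_err = np.max(np.abs(path - exact))
```

The symplectic Euler step keeps the numerical oscillation's amplitude stable, so the simulated trajectory tracks the closed-form impulse response closely over many periods.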
== See also == Atom (measure theory) Degenerate distribution Laplacian of the indicator Uncertainty principle == Notes == == References == Aratyn, Henrik; Rasinariu, Constantin (2006), A short course in mathematical methods with Maple, World Scientific, ISBN 978-981-256-461-0. Arfken, G. B.; Weber, H. J. (2000), Mathematical Methods for Physicists (5th ed.), Boston, Massachusetts: Academic Press, ISBN 978-0-12-059825-0. ATIS (2013), ATIS Telecom Glossary, archived from the original on 2013-03-13 Bracewell, R. N. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw-Hill, Bibcode:1986ftia.book.....B. Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), McGraw-Hill. Córdoba, A. (1988), "La formule sommatoire de Poisson", Comptes Rendus de l'Académie des Sciences, Série I, 306: 373–376. Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume II, Wiley-Interscience. Davis, Howard Ted; Thomson, Kendall T. (2000), Linear algebra and linear operators in engineering with applications in Mathematica, Academic Press, ISBN 978-0-12-206349-7 Dieudonné, Jean (1976), Treatise on analysis. Vol. II, New York: Academic Press [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-215502-4, MR 0530406. Dieudonné, Jean (1972), Treatise on analysis. Vol. III, Boston, Massachusetts: Academic Press, MR 0350769 Dirac, Paul (1930), The Principles of Quantum Mechanics (1st ed.), Oxford University Press. Driggers, Ronald G. (2003), Encyclopedia of Optical Engineering, CRC Press, Bibcode:2003eoe..book.....D, ISBN 978-0-8247-0940-2. Duistermaat, Hans; Kolk (2010), Distributions: Theory and applications, Springer. Federer, Herbert (1969), Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, vol. 153, New York: Springer-Verlag, pp. xiv+676, ISBN 978-3-540-60656-7, MR 0257325. Gannon, Terry (2008), "Vertex operator algebras", Princeton Companion to Mathematics, Princeton University Press, ISBN 978-1400830398.
Gelfand, I. M.; Shilov, G. E. (1966–1968), Generalized functions, vol. 1–5, Academic Press, ISBN 9781483262246. Hartmann, William M. (1997), Signals, sound, and sensation, Springer, ISBN 978-1-56396-283-7. Hazewinkel, Michiel (1995). Encyclopaedia of Mathematics (set). Springer Science & Business Media. ISBN 978-1-55608-010-4. Hazewinkel, Michiel (2011). Encyclopaedia of mathematics. Vol. 10. Springer. ISBN 978-90-481-4896-7. OCLC 751862625. Hewitt, E; Stromberg, K (1963), Real and abstract analysis, Springer-Verlag. Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 978-3-540-12104-6, MR 0717035. Isham, C. J. (1995), Lectures on quantum theory: mathematical and structural foundations, Imperial College Press, Bibcode:1995lqtm.book.....I, ISBN 978-81-7764-190-5. John, Fritz (1955), Plane waves and spherical means applied to partial differential equations, Interscience Publishers, New York-London, MR 0075429. Reprinted, Dover Publications, 2004, ISBN 9780486438047. Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4757-2698-5, ISBN 978-0-387-94841-6, MR 1476913. Lange, Rutger-Jan (2012), "Potential theory, path integrals and the Laplacian of the indicator", Journal of High Energy Physics, 2012 (11): 29–30, arXiv:1302.0864, Bibcode:2012JHEP...11..032L, doi:10.1007/JHEP11(2012)032, S2CID 56188533. Laugwitz, D. (1989), "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820", Arch. Hist. Exact Sci., 39 (3): 195–245, doi:10.1007/BF00329867, S2CID 120890300. Levin, Frank S. (2002), "Coordinate-space wave functions and completeness", An introduction to quantum theory, Cambridge University Press, pp. 109ff, ISBN 978-0-521-59841-5 Li, Y. T.; Wong, R. 
(2008), "Integral and series representations of the Dirac delta function", Commun. Pure Appl. Anal., 7 (2): 229–247, arXiv:1303.1943, doi:10.3934/cpaa.2008.7.229, MR 2373214, S2CID 119319140. de la Madrid Modino, R. (2001). Quantum mechanics in rigged Hilbert space language (PhD thesis). Universidad de Valladolid. de la Madrid, R.; Bohm, A.; Gadella, M. (2002), "Rigged Hilbert Space Treatment of Continuous Spectrum", Fortschr. Phys., 50 (2): 185–216, arXiv:quant-ph/0109154, Bibcode:2002ForPh..50..185D, doi:10.1002/1521-3978(200203)50:2<185::AID-PROP185>3.0.CO;2-S, S2CID 9407651. McMahon, D. (2005-11-22), "An Introduction to State Space" (PDF), Quantum Mechanics Demystified, A Self-Teaching Guide, Demystified Series, New York: McGraw-Hill, p. 108, ISBN 978-0-07-145546-6, retrieved 2008-03-17. van der Pol, Balth.; Bremmer, H. (1987), Operational calculus (3rd ed.), New York: Chelsea Publishing Co., ISBN 978-0-8284-0327-6, MR 0904873. Rudin, Walter (1966). Devine, Peter R. (ed.). Real and complex analysis (3rd ed.). New York: McGraw-Hill (published 1987). ISBN 0-07-100276-6. Rudin, Walter (1991), Functional Analysis (2nd ed.), McGraw-Hill, ISBN 978-0-07-054236-5. Vallée, Olivier; Soares, Manuel (2004), Airy functions and applications to physics, London: Imperial College Press, ISBN 9781911299486. Saichev, A I; Woyczyński, Wojbor Andrzej (1997), "Chapter1: Basic definitions and operations", Distributions in the Physical and Engineering Sciences: Distributional and fractal calculus, integral transforms, and wavelets, Birkhäuser, ISBN 978-0-8176-3924-2 Schwartz, L. (1950), Théorie des distributions, vol. 1, Hermann. Schwartz, L. (1951), Théorie des distributions, vol. 2, Hermann. Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 978-0-691-08078-9. Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 978-0-8493-8273-4. Vladimirov, V. S. 
(1971), Equations of mathematical physics, Marcel Dekker, ISBN 978-0-8247-1713-1. Weisstein, Eric W. "Delta Function". MathWorld. Yamashita, H. (2006), "Pointwise analysis of scalar fields: A nonstandard approach", Journal of Mathematical Physics, 47 (9): 092301, Bibcode:2006JMP....47i2301Y, doi:10.1063/1.2339017 Yamashita, H. (2007), "Comment on "Pointwise analysis of scalar fields: A nonstandard approach" [J. Math. Phys. 47, 092301 (2006)]", Journal of Mathematical Physics, 48 (8): 084101, Bibcode:2007JMP....48h4101Y, doi:10.1063/1.2771422 == External links == Media related to Dirac distribution at Wikimedia Commons "Delta-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] KhanAcademy.org video lesson The Dirac Delta function, a tutorial on the Dirac delta function. Video Lectures – Lecture 23, a lecture by Arthur Mattuck. The Dirac delta measure is a hyperfunction We show the existence of a unique solution and analyze a finite element approximation when the source term is a Dirac delta measure Non-Lebesgue measures on R. Lebesgue-Stieltjes measure, Dirac delta measure. Archived 2008-03-07 at the Wayback Machine
Wikipedia/Dirac_delta_functions
In mathematics, theta functions are special functions of several complex variables. They show up in many topics, including Abelian varieties, moduli spaces, quadratic forms, and solitons. Theta functions are parametrized by points in a tube domain inside a complex Lagrangian Grassmannian, namely the Siegel upper half space. The most common form of theta function is that occurring in the theory of elliptic functions. With respect to one of the complex variables (conventionally called z), a theta function has a property expressing its behavior with respect to the addition of a period of the associated elliptic functions, making it a quasiperiodic function. In the abstract theory this quasiperiodicity comes from the cohomology class of a line bundle on a complex torus, a condition of descent. One interpretation of theta functions when dealing with the heat equation is that "a theta function is a special function that describes the evolution of temperature on a segment domain subject to certain boundary conditions". Throughout this article, ( e π i τ ) α {\displaystyle (e^{\pi i\tau })^{\alpha }} should be interpreted as e α π i τ {\displaystyle e^{\alpha \pi i\tau }} (in order to resolve issues of choice of branch). == Jacobi theta function == There are several closely related functions called Jacobi theta functions, and many different and incompatible systems of notation for them. One Jacobi theta function (named after Carl Gustav Jacob Jacobi) is a function defined for two complex variables z and τ, where z can be any complex number and τ is the half-period ratio, confined to the upper half-plane, which means it has a positive imaginary part. 
It is given by the formula ϑ ( z ; τ ) = ∑ n = − ∞ ∞ exp ⁡ ( π i n 2 τ + 2 π i n z ) = 1 + 2 ∑ n = 1 ∞ q n 2 cos ⁡ ( 2 π n z ) = ∑ n = − ∞ ∞ q n 2 η n {\displaystyle {\begin{aligned}\vartheta (z;\tau )&=\sum _{n=-\infty }^{\infty }\exp \left(\pi in^{2}\tau +2\pi inz\right)\\&=1+2\sum _{n=1}^{\infty }q^{n^{2}}\cos(2\pi nz)\\&=\sum _{n=-\infty }^{\infty }q^{n^{2}}\eta ^{n}\end{aligned}}} where q = exp(πiτ) is the nome and η = exp(2πiz). It is a Jacobi form. The restriction ensures that it is an absolutely convergent series. At fixed τ, this is a Fourier series for a 1-periodic entire function of z. Accordingly, the theta function is 1-periodic in z: ϑ ( z + 1 ; τ ) = ϑ ( z ; τ ) . {\displaystyle \vartheta (z+1;\tau )=\vartheta (z;\tau ).} By completing the square, it is also τ-quasiperiodic in z, with ϑ ( z + τ ; τ ) = exp ⁡ ( − π i ( τ + 2 z ) ) ϑ ( z ; τ ) . {\displaystyle \vartheta (z+\tau ;\tau )=\exp {\bigl (}-\pi i(\tau +2z){\bigr )}\vartheta (z;\tau ).} Thus, in general, ϑ ( z + a + b τ ; τ ) = exp ⁡ ( − π i b 2 τ − 2 π i b z ) ϑ ( z ; τ ) {\displaystyle \vartheta (z+a+b\tau ;\tau )=\exp \left(-\pi ib^{2}\tau -2\pi ibz\right)\vartheta (z;\tau )} for any integers a and b. For any fixed τ {\displaystyle \tau } , the function is an entire function on the complex plane, so by Liouville's theorem, it cannot be doubly periodic in 1 , τ {\displaystyle 1,\tau } unless it is constant, and so the best we can do is to make it periodic in 1 {\displaystyle 1} and quasi-periodic in τ {\displaystyle \tau } . Indeed, since | ϑ ( z + a + b τ ; τ ) ϑ ( z ; τ ) | = exp ⁡ ( π ( b 2 ℑ ( τ ) + 2 b ℑ ( z ) ) ) {\displaystyle \left|{\frac {\vartheta (z+a+b\tau ;\tau )}{\vartheta (z;\tau )}}\right|=\exp \left(\pi (b^{2}\Im (\tau )+2b\Im (z))\right)} and ℑ ( τ ) > 0 {\displaystyle \Im (\tau )>0} , the function ϑ ( z , τ ) {\displaystyle \vartheta (z,\tau )} is unbounded, as required by Liouville's theorem. 
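The series definition and the two periodicity relations above are easy to verify numerically with a truncated sum; in the sketch below the cutoff N = 60 and the sample values of z and τ are arbitrary choices:

```python
import cmath

def theta(z, tau, N=60):
    """Truncated series for the Jacobi theta function
    theta(z; tau) = sum_n exp(pi*i*(n^2*tau + 2*n*z)).
    Converges rapidly for Im(tau) > 0; the cutoff N is an ad-hoc choice."""
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

tau = 0.3j            # any point in the upper half-plane
z = 0.2 + 0.1j

# 1-periodicity in z
assert abs(theta(z + 1, tau) - theta(z, tau)) < 1e-10

# tau-quasiperiodicity: theta(z + tau) = exp(-pi*i*(tau + 2z)) * theta(z)
lhs = theta(z + tau, tau)
rhs = cmath.exp(-cmath.pi * 1j * (tau + 2 * z)) * theta(z, tau)
assert abs(lhs - rhs) < 1e-10
```

The same truncated sum evaluated at z = 0, τ = i reproduces the classical null value ϑ(0; i) = π¼/Γ(¾) ≈ 1.08643481 that appears later in the article.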
It is in fact the most general entire function with 2 quasi-periods, in the following sense: == Auxiliary functions == The Jacobi theta function defined above is sometimes considered along with three auxiliary theta functions, in which case it is written with a double 0 subscript: ϑ 00 ( z ; τ ) = ϑ ( z ; τ ) {\displaystyle \vartheta _{00}(z;\tau )=\vartheta (z;\tau )} The auxiliary (or half-period) functions are defined by ϑ 01 ( z ; τ ) = ϑ ( z + 1 2 ; τ ) ϑ 10 ( z ; τ ) = exp ⁡ ( 1 4 π i τ + π i z ) ϑ ( z + 1 2 τ ; τ ) ϑ 11 ( z ; τ ) = exp ⁡ ( 1 4 π i τ + π i ( z + 1 2 ) ) ϑ ( z + 1 2 τ + 1 2 ; τ ) . {\displaystyle {\begin{aligned}\vartheta _{01}(z;\tau )&=\vartheta \left(z+{\tfrac {1}{2}};\tau \right)\\[3pt]\vartheta _{10}(z;\tau )&=\exp \left({\tfrac {1}{4}}\pi i\tau +\pi iz\right)\vartheta \left(z+{\tfrac {1}{2}}\tau ;\tau \right)\\[3pt]\vartheta _{11}(z;\tau )&=\exp \left({\tfrac {1}{4}}\pi i\tau +\pi i\left(z+{\tfrac {1}{2}}\right)\right)\vartheta \left(z+{\tfrac {1}{2}}\tau +{\tfrac {1}{2}};\tau \right).\end{aligned}}} This notation follows Riemann and Mumford; Jacobi's original formulation was in terms of the nome q = eiπτ rather than τ. In Jacobi's notation the θ-functions are written: θ 1 ( z ; q ) = θ 1 ( π z , q ) = − ϑ 11 ( z ; τ ) θ 2 ( z ; q ) = θ 2 ( π z , q ) = ϑ 10 ( z ; τ ) θ 3 ( z ; q ) = θ 3 ( π z , q ) = ϑ 00 ( z ; τ ) θ 4 ( z ; q ) = θ 4 ( π z , q ) = ϑ 01 ( z ; τ ) {\displaystyle {\begin{aligned}\theta _{1}(z;q)&=\theta _{1}(\pi z,q)=-\vartheta _{11}(z;\tau )\\\theta _{2}(z;q)&=\theta _{2}(\pi z,q)=\vartheta _{10}(z;\tau )\\\theta _{3}(z;q)&=\theta _{3}(\pi z,q)=\vartheta _{00}(z;\tau )\\\theta _{4}(z;q)&=\theta _{4}(\pi z,q)=\vartheta _{01}(z;\tau )\end{aligned}}} The above definitions of the Jacobi theta functions are by no means unique. See Jacobi theta functions (notational variations) for further discussion. If we set z = 0 in the above theta functions, we obtain four functions of τ only, defined on the upper half-plane. 
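The half-period shifts defining the auxiliary functions can be sketched numerically. The check below compares ϑ₀₁ against the sign-alternating series it produces, and confirms that ϑ₁₁ vanishes at z = 0 (it is an odd function of z); the cutoff and sample arguments are arbitrary:

```python
import cmath

PI = cmath.pi

def theta(z, tau, N=60):
    # truncated series for theta(z; tau)
    return sum(cmath.exp(PI * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

# the three auxiliary functions, via half-period shifts of theta
def theta01(z, tau):
    return theta(z + 0.5, tau)

def theta10(z, tau):
    return cmath.exp(PI * 1j * (tau / 4 + z)) * theta(z + tau / 2, tau)

def theta11(z, tau):
    return cmath.exp(PI * 1j * (tau / 4 + z + 0.5)) * theta(z + tau / 2 + 0.5, tau)

tau, z = 0.4j, 0.1 + 0.2j

# shifting z by 1/2 flips the sign of the odd-index terms
direct01 = sum((-1) ** n * cmath.exp(PI * 1j * (n * n * tau + 2 * n * z))
               for n in range(-60, 61))
assert abs(theta01(z, tau) - direct01) < 1e-10

# theta11 is odd in z, so it vanishes at the origin
assert abs(theta11(0, tau)) < 1e-10
```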
These functions are called theta Nullwert functions (from the German for "zero value"), because the z entry of the theta function has been set to zero. Alternatively, we obtain four functions of q only, defined on the unit disk | q | < 1 {\displaystyle |q|<1} . They are sometimes called theta constants: ϑ 11 ( 0 ; τ ) = − θ 1 ( q ) = − ∑ n = − ∞ ∞ ( − 1 ) n − 1 / 2 q ( n + 1 / 2 ) 2 ϑ 10 ( 0 ; τ ) = θ 2 ( q ) = ∑ n = − ∞ ∞ q ( n + 1 / 2 ) 2 ϑ 00 ( 0 ; τ ) = θ 3 ( q ) = ∑ n = − ∞ ∞ q n 2 ϑ 01 ( 0 ; τ ) = θ 4 ( q ) = ∑ n = − ∞ ∞ ( − 1 ) n q n 2 {\displaystyle {\begin{aligned}\vartheta _{11}(0;\tau )&=-\theta _{1}(q)=-\sum _{n=-\infty }^{\infty }(-1)^{n-1/2}q^{(n+1/2)^{2}}\\\vartheta _{10}(0;\tau )&=\theta _{2}(q)=\sum _{n=-\infty }^{\infty }q^{(n+1/2)^{2}}\\\vartheta _{00}(0;\tau )&=\theta _{3}(q)=\sum _{n=-\infty }^{\infty }q^{n^{2}}\\\vartheta _{01}(0;\tau )&=\theta _{4}(q)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{n^{2}}\end{aligned}}} with the nome q = eiπτ. Observe that θ 1 ( q ) = 0 {\displaystyle \theta _{1}(q)=0} . These can be used to define a variety of modular forms, and to parametrize certain curves; in particular, the Jacobi identity is θ 2 ( q ) 4 + θ 4 ( q ) 4 = θ 3 ( q ) 4 {\displaystyle \theta _{2}(q)^{4}+\theta _{4}(q)^{4}=\theta _{3}(q)^{4}} or equivalently, ϑ 01 ( 0 ; τ ) 4 + ϑ 10 ( 0 ; τ ) 4 = ϑ 00 ( 0 ; τ ) 4 {\displaystyle \vartheta _{01}(0;\tau )^{4}+\vartheta _{10}(0;\tau )^{4}=\vartheta _{00}(0;\tau )^{4}} which is the Fermat curve of degree four. == Jacobi identities == Jacobi's identities describe how theta functions transform under the modular group, which is generated by τ ↦ τ + 1 and τ ↦ −1/τ. Equations for the first transform are easily found since adding one to τ in the exponent has the same effect as adding 1/2 to z (n ≡ n2 mod 2). For the second, let α = ( − i τ ) 1 2 exp ⁡ ( π τ i z 2 ) .
{\displaystyle \alpha =(-i\tau )^{\frac {1}{2}}\exp \left({\frac {\pi }{\tau }}iz^{2}\right).} Then ϑ 00 ( z τ ; − 1 τ ) = α ϑ 00 ( z ; τ ) ϑ 01 ( z τ ; − 1 τ ) = α ϑ 10 ( z ; τ ) ϑ 10 ( z τ ; − 1 τ ) = α ϑ 01 ( z ; τ ) ϑ 11 ( z τ ; − 1 τ ) = − i α ϑ 11 ( z ; τ ) . {\displaystyle {\begin{aligned}\vartheta _{00}\!\left({\frac {z}{\tau }};{\frac {-1}{\tau }}\right)&=\alpha \,\vartheta _{00}(z;\tau )\quad &\vartheta _{01}\!\left({\frac {z}{\tau }};{\frac {-1}{\tau }}\right)&=\alpha \,\vartheta _{10}(z;\tau )\\[3pt]\vartheta _{10}\!\left({\frac {z}{\tau }};{\frac {-1}{\tau }}\right)&=\alpha \,\vartheta _{01}(z;\tau )\quad &\vartheta _{11}\!\left({\frac {z}{\tau }};{\frac {-1}{\tau }}\right)&=-i\alpha \,\vartheta _{11}(z;\tau ).\end{aligned}}} == Theta functions in terms of the nome == Instead of expressing the Theta functions in terms of z and τ, we may express them in terms of arguments w and the nome q, where w = eπiz and q = eπiτ. In this form, the functions become ϑ 00 ( w , q ) = ∑ n = − ∞ ∞ ( w 2 ) n q n 2 ϑ 01 ( w , q ) = ∑ n = − ∞ ∞ ( − 1 ) n ( w 2 ) n q n 2 ϑ 10 ( w , q ) = ∑ n = − ∞ ∞ ( w 2 ) n + 1 2 q ( n + 1 2 ) 2 ϑ 11 ( w , q ) = i ∑ n = − ∞ ∞ ( − 1 ) n ( w 2 ) n + 1 2 q ( n + 1 2 ) 2 . {\displaystyle {\begin{aligned}\vartheta _{00}(w,q)&=\sum _{n=-\infty }^{\infty }\left(w^{2}\right)^{n}q^{n^{2}}\quad &\vartheta _{01}(w,q)&=\sum _{n=-\infty }^{\infty }(-1)^{n}\left(w^{2}\right)^{n}q^{n^{2}}\\[3pt]\vartheta _{10}(w,q)&=\sum _{n=-\infty }^{\infty }\left(w^{2}\right)^{n+{\frac {1}{2}}}q^{\left(n+{\frac {1}{2}}\right)^{2}}\quad &\vartheta _{11}(w,q)&=i\sum _{n=-\infty }^{\infty }(-1)^{n}\left(w^{2}\right)^{n+{\frac {1}{2}}}q^{\left(n+{\frac {1}{2}}\right)^{2}}.\end{aligned}}} We see that the theta functions can also be defined in terms of w and q, without a direct reference to the exponential function. 
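A small numerical sketch of the nome form, checking it against the (z, τ) series and, in passing, the τ ↦ −1/τ transformation of the previous section (the sample values are arbitrary, and the principal branch of the square root is assumed, which is unambiguous here since −iτ is a positive real):

```python
import cmath

PI = cmath.pi

def theta00_zt(z, tau, N=60):
    # theta_00 as a truncated series in (z, tau)
    return sum(cmath.exp(PI * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

def theta00_wq(w, q, N=60):
    # the same function written in terms of w = exp(pi*i*z), q = exp(pi*i*tau)
    return sum((w ** 2) ** n * q ** (n * n) for n in range(-N, N + 1))

tau = 0.5j
z = 0.2 + 0.1j
w, q = cmath.exp(PI * 1j * z), cmath.exp(PI * 1j * tau)
assert abs(theta00_zt(z, tau) - theta00_wq(w, q)) < 1e-10

# the Jacobi imaginary transformation from the previous section:
# theta_00(z/tau; -1/tau) = (-i*tau)**0.5 * exp(pi*i*z**2/tau) * theta_00(z; tau)
alpha = cmath.sqrt(-1j * tau) * cmath.exp(PI * 1j * z * z / tau)
assert abs(theta00_zt(z / tau, -1 / tau) - alpha * theta00_zt(z, tau)) < 1e-9
```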
These formulas can, therefore, be used to define the Theta functions over other fields where the exponential function might not be everywhere defined, such as fields of p-adic numbers. == Product representations == The Jacobi triple product (a special case of the Macdonald identities) tells us that for complex numbers w and q with |q| < 1 and w ≠ 0 we have ∏ m = 1 ∞ ( 1 − q 2 m ) ( 1 + w 2 q 2 m − 1 ) ( 1 + w − 2 q 2 m − 1 ) = ∑ n = − ∞ ∞ w 2 n q n 2 . {\displaystyle \prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1+w^{2}q^{2m-1}\right)\left(1+w^{-2}q^{2m-1}\right)=\sum _{n=-\infty }^{\infty }w^{2n}q^{n^{2}}.} It can be proven by elementary means, as for instance in Hardy and Wright's An Introduction to the Theory of Numbers. If we express the theta function in terms of the nome q = eπiτ (noting some authors instead set q = e2πiτ) and take w = eπiz then ϑ ( z ; τ ) = ∑ n = − ∞ ∞ exp ⁡ ( π i τ n 2 ) exp ⁡ ( 2 π i z n ) = ∑ n = − ∞ ∞ w 2 n q n 2 . {\displaystyle \vartheta (z;\tau )=\sum _{n=-\infty }^{\infty }\exp(\pi i\tau n^{2})\exp(2\pi izn)=\sum _{n=-\infty }^{\infty }w^{2n}q^{n^{2}}.} We therefore obtain a product formula for the theta function in the form ϑ ( z ; τ ) = ∏ m = 1 ∞ ( 1 − exp ⁡ ( 2 m π i τ ) ) ( 1 + exp ⁡ ( ( 2 m − 1 ) π i τ + 2 π i z ) ) ( 1 + exp ⁡ ( ( 2 m − 1 ) π i τ − 2 π i z ) ) . 
{\displaystyle \vartheta (z;\tau )=\prod _{m=1}^{\infty }{\big (}1-\exp(2m\pi i\tau ){\big )}{\Big (}1+\exp {\big (}(2m-1)\pi i\tau +2\pi iz{\big )}{\Big )}{\Big (}1+\exp {\big (}(2m-1)\pi i\tau -2\pi iz{\big )}{\Big )}.} In terms of w and q: ϑ ( z ; τ ) = ∏ m = 1 ∞ ( 1 − q 2 m ) ( 1 + q 2 m − 1 w 2 ) ( 1 + q 2 m − 1 w 2 ) = ( q 2 ; q 2 ) ∞ ( − w 2 q ; q 2 ) ∞ ( − q w 2 ; q 2 ) ∞ = ( q 2 ; q 2 ) ∞ θ ( − w 2 q ; q 2 ) {\displaystyle {\begin{aligned}\vartheta (z;\tau )&=\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1+q^{2m-1}w^{2}\right)\left(1+{\frac {q^{2m-1}}{w^{2}}}\right)\\&=\left(q^{2};q^{2}\right)_{\infty }\,\left(-w^{2}q;q^{2}\right)_{\infty }\,\left(-{\frac {q}{w^{2}}};q^{2}\right)_{\infty }\\&=\left(q^{2};q^{2}\right)_{\infty }\,\theta \left(-w^{2}q;q^{2}\right)\end{aligned}}} where ( ; )∞ is the q-Pochhammer symbol and θ( ; ) is the q-theta function. Expanding terms out, the Jacobi triple product can also be written ∏ m = 1 ∞ ( 1 − q 2 m ) ( 1 + ( w 2 + w − 2 ) q 2 m − 1 + q 4 m − 2 ) , {\displaystyle \prod _{m=1}^{\infty }\left(1-q^{2m}\right){\Big (}1+\left(w^{2}+w^{-2}\right)q^{2m-1}+q^{4m-2}{\Big )},} which we may also write as ϑ ( z ∣ q ) = ∏ m = 1 ∞ ( 1 − q 2 m ) ( 1 + 2 cos ⁡ ( 2 π z ) q 2 m − 1 + q 4 m − 2 ) . {\displaystyle \vartheta (z\mid q)=\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1+2\cos(2\pi z)q^{2m-1}+q^{4m-2}\right).} This form is valid in general but clearly is of particular interest when z is real. Similar product formulas for the auxiliary theta functions are ϑ 01 ( z ∣ q ) = ∏ m = 1 ∞ ( 1 − q 2 m ) ( 1 − 2 cos ⁡ ( 2 π z ) q 2 m − 1 + q 4 m − 2 ) , ϑ 10 ( z ∣ q ) = 2 q 1 4 cos ⁡ ( π z ) ∏ m = 1 ∞ ( 1 − q 2 m ) ( 1 + 2 cos ⁡ ( 2 π z ) q 2 m + q 4 m ) , ϑ 11 ( z ∣ q ) = − 2 q 1 4 sin ⁡ ( π z ) ∏ m = 1 ∞ ( 1 − q 2 m ) ( 1 − 2 cos ⁡ ( 2 π z ) q 2 m + q 4 m ) . 
{\displaystyle {\begin{aligned}\vartheta _{01}(z\mid q)&=\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1-2\cos(2\pi z)q^{2m-1}+q^{4m-2}\right),\\[3pt]\vartheta _{10}(z\mid q)&=2q^{\frac {1}{4}}\cos(\pi z)\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1+2\cos(2\pi z)q^{2m}+q^{4m}\right),\\[3pt]\vartheta _{11}(z\mid q)&=-2q^{\frac {1}{4}}\sin(\pi z)\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1-2\cos(2\pi z)q^{2m}+q^{4m}\right).\end{aligned}}} In particular, lim q → 0 ϑ 10 ( z ∣ q ) 2 q 1 4 = cos ⁡ ( π z ) , lim q → 0 − ϑ 11 ( z ∣ q ) 2 q − 1 4 = sin ⁡ ( π z ) {\displaystyle \lim _{q\to 0}{\frac {\vartheta _{10}(z\mid q)}{2q^{\frac {1}{4}}}}=\cos(\pi z),\quad \lim _{q\to 0}{\frac {-\vartheta _{11}(z\mid q)}{2q^{-{\frac {1}{4}}}}}=\sin(\pi z)} so we may interpret them as one-parameter deformations of the periodic functions sin , cos {\displaystyle \sin ,\cos } , again validating the interpretation of the theta function as the most general 2 quasi-period function. == Integral representations == The Jacobi theta functions have the following integral representations: ϑ 00 ( z ; τ ) = − i ∫ i − ∞ i + ∞ e i π τ u 2 cos ⁡ ( 2 π u z + π u ) sin ⁡ ( π u ) d u ; ϑ 01 ( z ; τ ) = − i ∫ i − ∞ i + ∞ e i π τ u 2 cos ⁡ ( 2 π u z ) sin ⁡ ( π u ) d u ; ϑ 10 ( z ; τ ) = − i e i π z + 1 4 i π τ ∫ i − ∞ i + ∞ e i π τ u 2 cos ⁡ ( 2 π u z + π u + π τ u ) sin ⁡ ( π u ) d u ; ϑ 11 ( z ; τ ) = e i π z + 1 4 i π τ ∫ i − ∞ i + ∞ e i π τ u 2 cos ⁡ ( 2 π u z + π τ u ) sin ⁡ ( π u ) d u . 
{\displaystyle {\begin{aligned}\vartheta _{00}(z;\tau )&=-i\int _{i-\infty }^{i+\infty }e^{i\pi \tau u^{2}}{\frac {\cos(2\pi uz+\pi u)}{\sin(\pi u)}}\mathrm {d} u;\\[6pt]\vartheta _{01}(z;\tau )&=-i\int _{i-\infty }^{i+\infty }e^{i\pi \tau u^{2}}{\frac {\cos(2\pi uz)}{\sin(\pi u)}}\mathrm {d} u;\\[6pt]\vartheta _{10}(z;\tau )&=-ie^{i\pi z+{\frac {1}{4}}i\pi \tau }\int _{i-\infty }^{i+\infty }e^{i\pi \tau u^{2}}{\frac {\cos(2\pi uz+\pi u+\pi \tau u)}{\sin(\pi u)}}\mathrm {d} u;\\[6pt]\vartheta _{11}(z;\tau )&=e^{i\pi z+{\frac {1}{4}}i\pi \tau }\int _{i-\infty }^{i+\infty }e^{i\pi \tau u^{2}}{\frac {\cos(2\pi uz+\pi \tau u)}{\sin(\pi u)}}\mathrm {d} u.\end{aligned}}} The theta Nullwert function θ 3 ( q ) {\displaystyle \theta _{3}(q)} satisfies the integral identity: θ 3 ( q ) = 1 + 4 q ln ⁡ ( 1 / q ) π ∫ 0 ∞ exp ⁡ [ − ln ⁡ ( 1 / q ) x 2 ] { 1 − q 2 cos ⁡ [ 2 ln ⁡ ( 1 / q ) x ] } 1 − 2 q 2 cos ⁡ [ 2 ln ⁡ ( 1 / q ) x ] + q 4 d x {\displaystyle \theta _{3}(q)=1+{\frac {4q{\sqrt {\ln(1/q)}}}{\sqrt {\pi }}}\int _{0}^{\infty }{\frac {\exp[-\ln(1/q)\,x^{2}]\{1-q^{2}\cos[2\ln(1/q)\,x]\}}{1-2q^{2}\cos[2\ln(1/q)\,x]+q^{4}}}\,\mathrm {d} x} This formula was discussed in the essay Square series generating function transformations by the mathematician Maxie Schmidt of Atlanta, Georgia.
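The integral identity for θ₃(q) can be checked without any special libraries; the sketch below evaluates the right-hand side with composite Simpson's rule, where the cutoff `upper` and the step count are ad-hoc choices:

```python
import math

def theta3_series(q, N=40):
    # theta_3(q) = 1 + 2 * sum_{n >= 1} q^(n^2)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, N + 1))

def theta3_integral(q, upper=12.0, steps=24000):
    """Right-hand side of the integral identity, evaluated with composite
    Simpson's rule; the integrand decays like exp(-ln(1/q)*x^2), so
    truncating at `upper` costs essentially nothing for moderate q."""
    a = math.log(1 / q)

    def f(x):
        c = math.cos(2 * a * x)
        return math.exp(-a * x * x) * (1 - q * q * c) / (1 - 2 * q * q * c + q ** 4)

    h = upper / steps
    total = f(0.0) + f(upper)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    integral = total * h / 3
    return 1 + 4 * q * math.sqrt(a) / math.sqrt(math.pi) * integral

q = 0.3
assert abs(theta3_series(q) - theta3_integral(q)) < 1e-9
```

The agreement is unsurprising: expanding the rational factor as a geometric series in q²e^{2iax} and integrating term by term against the Gaussian reproduces 2∑ q^{(m+1)²}.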
Based on this formula following three eminent examples are given: [ 2 π K ( 1 2 2 ) ] 1 / 2 = θ 3 [ exp ⁡ ( − π ) ] = 1 + 4 exp ⁡ ( − π ) ∫ 0 ∞ exp ⁡ ( − π x 2 ) [ 1 − exp ⁡ ( − 2 π ) cos ⁡ ( 2 π x ) ] 1 − 2 exp ⁡ ( − 2 π ) cos ⁡ ( 2 π x ) + exp ⁡ ( − 4 π ) d x {\displaystyle {\biggl [}{\frac {2}{\pi }}K{\bigl (}{\frac {1}{2}}{\sqrt {2}}{\bigr )}{\biggr ]}^{1/2}=\theta _{3}{\bigl [}\exp(-\pi ){\bigr ]}=1+4\exp(-\pi )\int _{0}^{\infty }{\frac {\exp(-\pi x^{2})[1-\exp(-2\pi )\cos(2\pi x)]}{1-2\exp(-2\pi )\cos(2\pi x)+\exp(-4\pi )}}\,\mathrm {d} x} [ 2 π K ( 2 − 1 ) ] 1 / 2 = θ 3 [ exp ⁡ ( − 2 π ) ] = 1 + 4 2 4 exp ⁡ ( − 2 π ) ∫ 0 ∞ exp ⁡ ( − 2 π x 2 ) [ 1 − exp ⁡ ( − 2 2 π ) cos ⁡ ( 2 2 π x ) ] 1 − 2 exp ⁡ ( − 2 2 π ) cos ⁡ ( 2 2 π x ) + exp ⁡ ( − 4 2 π ) d x {\displaystyle {\biggl [}{\frac {2}{\pi }}K({\sqrt {2}}-1){\biggr ]}^{1/2}=\theta _{3}{\bigl [}\exp(-{\sqrt {2}}\,\pi ){\bigr ]}=1+4\,{\sqrt[{4}]{2}}\exp(-{\sqrt {2}}\,\pi )\int _{0}^{\infty }{\frac {\exp(-{\sqrt {2}}\,\pi x^{2})[1-\exp(-2{\sqrt {2}}\,\pi )\cos(2{\sqrt {2}}\,\pi x)]}{1-2\exp(-2{\sqrt {2}}\,\pi )\cos(2{\sqrt {2}}\,\pi x)+\exp(-4{\sqrt {2}}\,\pi )}}\,\mathrm {d} x} { 2 π K [ sin ⁡ ( π 12 ) ] } 1 / 2 = θ 3 [ exp ⁡ ( − 3 π ) ] = 1 + 4 3 4 exp ⁡ ( − 3 π ) ∫ 0 ∞ exp ⁡ ( − 3 π x 2 ) [ 1 − exp ⁡ ( − 2 3 π ) cos ⁡ ( 2 3 π x ) ] 1 − 2 exp ⁡ ( − 2 3 π ) cos ⁡ ( 2 3 π x ) + exp ⁡ ( − 4 3 π ) d x {\displaystyle {\biggl \{}{\frac {2}{\pi }}K{\bigl [}\sin {\bigl (}{\frac {\pi }{12}}{\bigr )}{\bigr ]}{\biggr \}}^{1/2}=\theta _{3}{\bigl [}\exp(-{\sqrt {3}}\,\pi ){\bigr ]}=1+4\,{\sqrt[{4}]{3}}\exp(-{\sqrt {3}}\,\pi )\int _{0}^{\infty }{\frac {\exp(-{\sqrt {3}}\,\pi x^{2})[1-\exp(-2{\sqrt {3}}\,\pi )\cos(2{\sqrt {3}}\,\pi x)]}{1-2\exp(-2{\sqrt {3}}\,\pi )\cos(2{\sqrt {3}}\,\pi x)+\exp(-4{\sqrt {3}}\,\pi )}}\,\mathrm {d} x} Furthermore, the theta examples θ 3 ( 1 2 ) {\displaystyle \theta _{3}({\tfrac {1}{2}})} and θ 3 ( 1 3 ) {\displaystyle \theta _{3}({\tfrac {1}{3}})} shall be displayed: θ 3 ( 1 2 ) = 1 + 2 ∑ n 
= 1 ∞ 1 2 n 2 = 1 + 2 π − 1 / 2 ln ⁡ ( 2 ) ∫ 0 ∞ exp ⁡ [ − ln ⁡ ( 2 ) x 2 ] { 16 − 4 cos ⁡ [ 2 ln ⁡ ( 2 ) x ] } 17 − 8 cos ⁡ [ 2 ln ⁡ ( 2 ) x ] d x {\displaystyle \theta _{3}\left({\frac {1}{2}}\right)=1+2\sum _{n=1}^{\infty }{\frac {1}{2^{n^{2}}}}=1+2\pi ^{-1/2}{\sqrt {\ln(2)}}\int _{0}^{\infty }{\frac {\exp[-\ln(2)\,x^{2}]\{16-4\cos[2\ln(2)\,x]\}}{17-8\cos[2\ln(2)\,x]}}\,\mathrm {d} x} θ 3 ( 1 2 ) = 2.128936827211877158669 … {\displaystyle \theta _{3}\left({\frac {1}{2}}\right)=2.128936827211877158669\ldots } θ 3 ( 1 3 ) = 1 + 2 ∑ n = 1 ∞ 1 3 n 2 = 1 + 4 3 π − 1 / 2 ln ⁡ ( 3 ) ∫ 0 ∞ exp ⁡ [ − ln ⁡ ( 3 ) x 2 ] { 81 − 9 cos ⁡ [ 2 ln ⁡ ( 3 ) x ] } 82 − 18 cos ⁡ [ 2 ln ⁡ ( 3 ) x ] d x {\displaystyle \theta _{3}\left({\frac {1}{3}}\right)=1+2\sum _{n=1}^{\infty }{\frac {1}{3^{n^{2}}}}=1+{\frac {4}{3}}\pi ^{-1/2}{\sqrt {\ln(3)}}\int _{0}^{\infty }{\frac {\exp[-\ln(3)\,x^{2}]\{81-9\cos[2\ln(3)\,x]\}}{82-18\cos[2\ln(3)\,x]}}\,\mathrm {d} x} θ 3 ( 1 3 ) = 1.691459681681715341348 … {\displaystyle \theta _{3}\left({\frac {1}{3}}\right)=1.691459681681715341348\ldots } == Some interesting relations == If | q | < 1 {\displaystyle |q|<1} and a > 0 {\displaystyle a>0} , then the following theta functions θ 3 ( a , b ; q ) = ∑ n = − ∞ ∞ q a n 2 + b n {\displaystyle \theta _{3}(a,b;q)=\sum _{n=-\infty }^{\infty }q^{an^{2}+bn}} θ 4 ( a , b ; q ) = ∑ n = − ∞ ∞ ( − 1 ) n q a n 2 + b n {\displaystyle \theta _{4}(a,b;q)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{an^{2}+bn}} have interesting arithmetical and modular properties. 
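The decimal expansions quoted above for θ₃(1/2) and θ₃(1/3) follow directly from the series 1 + 2∑ q^{n²}; a minimal check (the truncation at n = 30 is an arbitrary choice):

```python
def theta3(q, N=30):
    # theta_3(q) = 1 + 2 * sum_{n >= 1} q^(n^2); the cutoff N is arbitrary
    return 1 + 2 * sum(q ** (n * n) for n in range(1, N + 1))

# compare with the decimal expansions quoted in the text
assert abs(theta3(1 / 2) - 2.128936827211877) < 1e-12
assert abs(theta3(1 / 3) - 1.691459681681715) < 1e-12
```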
When a , b , p {\displaystyle a,b,p} are positive integers, then log ⁡ ( θ 3 ( p 2 , p 2 − a ; q ) θ 3 ( p 2 , p 2 − b ; q ) ) = − ∑ n = 1 ∞ q n ( ∑ d | n n / d ≡ ± a ( p ) ( − 1 ) d d − ∑ d | n n / d ≡ ± b ( p ) ( − 1 ) d d ) {\displaystyle \log \left({\frac {\theta _{3}\left({\frac {p}{2}},{\frac {p}{2}}-a;q\right)}{\theta _{3}\left({\frac {p}{2}},{\frac {p}{2}}-b;q\right)}}\right)=-\sum _{n=1}^{\infty }q^{n}\left(\sum _{\begin{array}{cc}d|n\\n/d\equiv \pm a(p)\end{array}}{\frac {(-1)^{d}}{d}}-\sum _{\begin{array}{cc}d|n\\n/d\equiv \pm b(p)\end{array}}{\frac {(-1)^{d}}{d}}\right)} log ⁡ ( θ 4 ( p 2 , p 2 − a ; q ) θ 4 ( p 2 , p 2 − b ; q ) ) = − ∑ n = 1 ∞ q n ( ∑ d | n n / d ≡ ± a ( p ) 1 d − ∑ d | n n / d ≡ ± b ( p ) 1 d ) {\displaystyle \log \left({\frac {\theta _{4}\left({\frac {p}{2}},{\frac {p}{2}}-a;q\right)}{\theta _{4}\left({\frac {p}{2}},{\frac {p}{2}}-b;q\right)}}\right)=-\sum _{n=1}^{\infty }q^{n}\left(\sum _{\begin{array}{cc}d|n\\n/d\equiv \pm a(p)\end{array}}{\frac {1}{d}}-\sum _{\begin{array}{cc}d|n\\n/d\equiv \pm b(p)\end{array}}{\frac {1}{d}}\right)} Also if q = e π i z {\displaystyle q=e^{\pi iz}} , I m ( z ) > 0 {\displaystyle Im(z)>0} , the functions with : ϑ + ( z ) = θ + ( a , p ; z ) = q p / 8 + a 2 / ( 2 p ) − a / 2 θ 3 ( p 2 , p 2 − a ; q ) {\displaystyle \vartheta _{+}(z)=\theta _{+}(a,p;z)=q^{p/8+a^{2}/(2p)-a/2}\theta _{3}\left({\frac {p}{2}},{\frac {p}{2}}-a;q\right)} and ϑ − ( z ) = θ − ( a , p ; z ) = q p / 8 + a 2 / ( 2 p ) − a / 2 θ 4 ( p 2 , p 2 − a ; q ) {\displaystyle \vartheta _{-}(z)=\theta _{-}(a,p;z)=q^{p/8+a^{2}/(2p)-a/2}\theta _{4}\left({\frac {p}{2}},{\frac {p}{2}}-a;q\right)} are modular forms with weight 1 / 2 {\displaystyle 1/2} in Γ ( 2 p ) {\displaystyle \Gamma (2p)} i.e. 
If a 1 , b 1 , c 1 , d 1 {\displaystyle a_{1},b_{1},c_{1},d_{1}} are integers such that a 1 , d 1 ≡ 1 ( 2 p ) {\displaystyle a_{1},d_{1}\equiv 1(2p)} , b 1 , c 1 ≡ 0 ( 2 p ) {\displaystyle b_{1},c_{1}\equiv 0(2p)} and a 1 d 1 − b 1 c 1 = 1 {\displaystyle a_{1}d_{1}-b_{1}c_{1}=1} there exists ϵ ± = ϵ ± ( a 1 , b 1 , c 1 , d 1 ) {\displaystyle \epsilon _{\pm }=\epsilon _{\pm }(a_{1},b_{1},c_{1},d_{1})} , ( ϵ ± ) 24 = 1 {\displaystyle (\epsilon _{\pm })^{24}=1} , such that for all complex numbers z {\displaystyle z} with I m ( z ) > 0 {\displaystyle Im(z)>0} , we have ϑ ± ( a 1 z + b 1 c 1 z + d 1 ) = ϵ ± c 1 z + d 1 ϑ ± ( z ) {\displaystyle \vartheta _{\pm }\left({\frac {a_{1}z+b_{1}}{c_{1}z+d_{1}}}\right)=\epsilon _{\pm }{\sqrt {c_{1}z+d_{1}}}\vartheta _{\pm }(z)} == Explicit values == === Lemniscatic values === Proper credit for most of these results goes to Ramanujan. See Ramanujan's lost notebook and a relevant reference at Euler function. The Ramanujan results quoted at Euler function plus a few elementary operations give the results below, so they are either in Ramanujan's lost notebook or follow immediately from it. See also Yi (2004). Define, φ ( q ) = ϑ 00 ( 0 ; τ ) = θ 3 ( 0 ; q ) = ∑ n = − ∞ ∞ q n 2 {\displaystyle \quad \varphi (q)=\vartheta _{00}(0;\tau )=\theta _{3}(0;q)=\sum _{n=-\infty }^{\infty }q^{n^{2}}} with the nome q = e π i τ , {\displaystyle q=e^{\pi i\tau },} τ = n − 1 , {\displaystyle \tau =n{\sqrt {-1}},} and Dedekind eta function η ( τ ) . 
{\displaystyle \eta (\tau ).} Then for n = 1 , 2 , 3 , … {\displaystyle n=1,2,3,\dots } φ ( e − π ) = π 4 Γ ( 3 4 ) = 2 η ( − 1 ) φ ( e − 2 π ) = π 4 Γ ( 3 4 ) 2 + 2 2 φ ( e − 3 π ) = π 4 Γ ( 3 4 ) 1 + 3 108 8 φ ( e − 4 π ) = π 4 Γ ( 3 4 ) 2 + 8 4 4 φ ( e − 5 π ) = π 4 Γ ( 3 4 ) 2 + 5 5 φ ( e − 6 π ) = π 4 Γ ( 3 4 ) 1 4 + 3 4 + 4 4 + 9 4 12 3 8 φ ( e − 7 π ) = π 4 Γ ( 3 4 ) 13 + 7 + 7 + 3 7 14 3 8 ⋅ 7 16 φ ( e − 8 π ) = π 4 Γ ( 3 4 ) 2 + 2 + 128 8 4 φ ( e − 9 π ) = π 4 Γ ( 3 4 ) 1 + 2 + 2 3 3 3 φ ( e − 10 π ) = π 4 Γ ( 3 4 ) 64 4 + 80 4 + 81 4 + 100 4 200 4 φ ( e − 11 π ) = π 4 Γ ( 3 4 ) 11 + 11 + ( 5 + 3 3 + 11 + 33 ) − 44 + 33 3 3 + ( − 5 + 3 3 − 11 + 33 ) 44 + 33 3 3 52180524 8 φ ( e − 12 π ) = π 4 Γ ( 3 4 ) 1 4 + 2 4 + 3 4 + 4 4 + 9 4 + 18 4 + 24 4 2 108 8 φ ( e − 13 π ) = π 4 Γ ( 3 4 ) 13 + 8 13 + ( 11 − 6 3 + 13 ) 143 + 78 3 3 + ( 11 + 6 3 + 13 ) 143 − 78 3 3 19773 4 φ ( e − 14 π ) = π 4 Γ ( 3 4 ) 13 + 7 + 7 + 3 7 + 10 + 2 7 + 28 8 4 + 7 28 7 16 φ ( e − 15 π ) = π 4 Γ ( 3 4 ) 7 + 3 3 + 5 + 15 + 60 4 + 1500 4 12 3 8 ⋅ 5 2 φ ( e − 16 π ) = φ ( e − 4 π ) + π 4 Γ ( 3 4 ) 1 + 2 4 128 16 φ ( e − 17 π ) = π 4 Γ ( 3 4 ) 2 ( 1 + 17 4 ) + 17 8 5 + 17 17 + 17 17 2 φ ( e − 20 π ) = φ ( e − 5 π ) + π 4 Γ ( 3 4 ) 3 + 2 5 4 5 2 6 φ ( e − 36 π ) = 3 φ ( e − 9 π ) + 2 φ ( e − 4 π ) − φ ( e − π ) + π 4 Γ ( 3 4 ) 2 4 + 18 4 + 216 4 3 {\displaystyle {\begin{aligned}\varphi \left(e^{-\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}={\sqrt {2}}\,\eta \left({\sqrt {-1}}\right)\\\varphi \left(e^{-2\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {2+{\sqrt {2}}}}{2}}\\\varphi \left(e^{-3\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {1+{\sqrt {3}}}}{\sqrt[{8}]{108}}}\\\varphi \left(e^{-4\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {2+{\sqrt[{4}]{8}}}{4}}\\\varphi \left(e^{-5\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac 
{3}{4}}\right)}}{\sqrt {\frac {2+{\sqrt {5}}}{5}}}\\\varphi \left(e^{-6\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt[{4}]{1}}+{\sqrt[{4}]{3}}+{\sqrt[{4}]{4}}+{\sqrt[{4}]{9}}}}{\sqrt[{8}]{12^{3}}}}\\\varphi \left(e^{-7\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt {13+{\sqrt {7}}}}+{\sqrt {7+3{\sqrt {7}}}}}}{{\sqrt[{8}]{14^{3}}}\cdot {\sqrt[{16}]{7}}}}\\\varphi \left(e^{-8\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {{\sqrt {2+{\sqrt {2}}}}+{\sqrt[{8}]{128}}}{4}}\\\varphi \left(e^{-9\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {1+{\sqrt[{3}]{2+2{\sqrt {3}}}}}{3}}\\\varphi \left(e^{-10\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt[{4}]{64}}+{\sqrt[{4}]{80}}+{\sqrt[{4}]{81}}+{\sqrt[{4}]{100}}}}{\sqrt[{4}]{200}}}\\\varphi \left(e^{-11\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {11+{\sqrt {11}}+(5+3{\sqrt {3}}+{\sqrt {11}}+{\sqrt {33}}){\sqrt[{3}]{-44+33{\sqrt {3}}}}+(-5+3{\sqrt {3}}-{\sqrt {11}}+{\sqrt {33}}){\sqrt[{3}]{44+33{\sqrt {3}}}}}}{\sqrt[{8}]{52180524}}}\\\varphi \left(e^{-12\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt[{4}]{1}}+{\sqrt[{4}]{2}}+{\sqrt[{4}]{3}}+{\sqrt[{4}]{4}}+{\sqrt[{4}]{9}}+{\sqrt[{4}]{18}}+{\sqrt[{4}]{24}}}}{2{\sqrt[{8}]{108}}}}\\\varphi \left(e^{-13\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {13+8{\sqrt {13}}+(11-6{\sqrt {3}}+{\sqrt {13}}){\sqrt[{3}]{143+78{\sqrt {3}}}}+(11+6{\sqrt {3}}+{\sqrt {13}}){\sqrt[{3}]{143-78{\sqrt {3}}}}}}{\sqrt[{4}]{19773}}}\\\varphi \left(e^{-14\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt {13+{\sqrt {7}}}}+{\sqrt {7+3{\sqrt {7}}}}+{\sqrt {10+2{\sqrt {7}}}}+{\sqrt[{8}]{28}}{\sqrt {4+{\sqrt 
{7}}}}}}{\sqrt[{16}]{28^{7}}}}\\\varphi \left(e^{-15\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {7+3{\sqrt {3}}+{\sqrt {5}}+{\sqrt {15}}+{\sqrt[{4}]{60}}+{\sqrt[{4}]{1500}}}}{{\sqrt[{8}]{12^{3}}}\cdot {\sqrt {5}}}}\\2\varphi \left(e^{-16\pi }\right)&=\varphi \left(e^{-4\pi }\right)+{\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt[{4}]{1+{\sqrt {2}}}}{\sqrt[{16}]{128}}}\\\varphi \left(e^{-17\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {{\sqrt {2}}(1+{\sqrt[{4}]{17}})+{\sqrt[{8}]{17}}{\sqrt {5+{\sqrt {17}}}}}{\sqrt {17+17{\sqrt {17}}}}}\\2\varphi \left(e^{-20\pi }\right)&=\varphi \left(e^{-5\pi }\right)+{\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\sqrt {\frac {3+2{\sqrt[{4}]{5}}}{5{\sqrt {2}}}}}\\6\varphi \left(e^{-36\pi }\right)&=3\varphi \left(e^{-9\pi }\right)+2\varphi \left(e^{-4\pi }\right)-\varphi \left(e^{-\pi }\right)+{\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\sqrt[{3}]{{\sqrt[{4}]{2}}+{\sqrt[{4}]{18}}+{\sqrt[{4}]{216}}}}\end{aligned}}} If the reciprocal of the Gelfond constant is raised to the power of the reciprocal of an odd number, then the corresponding ϑ 00 {\displaystyle \vartheta _{00}} values or ϕ {\displaystyle \phi } values can be represented in a simplified way by using the hyperbolic lemniscatic sine: φ [ exp ⁡ ( − 1 5 π ) ] = π 4 Γ ( 3 4 ) − 1 slh ⁡ ( 1 5 2 ϖ ) slh ⁡ ( 2 5 2 ϖ ) {\displaystyle \varphi {\bigl [}\exp(-{\tfrac {1}{5}}\pi ){\bigr ]}={\sqrt[{4}]{\pi }}\,{\Gamma \left({\tfrac {3}{4}}\right)}^{-1}\operatorname {slh} {\bigl (}{\tfrac {1}{5}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {2}{5}}{\sqrt {2}}\,\varpi {\bigr )}} φ [ exp ⁡ ( − 1 7 π ) ] = π 4 Γ ( 3 4 ) − 1 slh ⁡ ( 1 7 2 ϖ ) slh ⁡ ( 2 7 2 ϖ ) slh ⁡ ( 3 7 2 ϖ ) {\displaystyle \varphi {\bigl [}\exp(-{\tfrac {1}{7}}\pi ){\bigr ]}={\sqrt[{4}]{\pi }}\,{\Gamma \left({\tfrac {3}{4}}\right)}^{-1}\operatorname {slh} 
{\bigl (}{\tfrac {1}{7}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {2}{7}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {3}{7}}{\sqrt {2}}\,\varpi {\bigr )}} φ [ exp ⁡ ( − 1 9 π ) ] = π 4 Γ ( 3 4 ) − 1 slh ⁡ ( 1 9 2 ϖ ) slh ⁡ ( 2 9 2 ϖ ) slh ⁡ ( 3 9 2 ϖ ) slh ⁡ ( 4 9 2 ϖ ) {\displaystyle \varphi {\bigl [}\exp(-{\tfrac {1}{9}}\pi ){\bigr ]}={\sqrt[{4}]{\pi }}\,{\Gamma \left({\tfrac {3}{4}}\right)}^{-1}\operatorname {slh} {\bigl (}{\tfrac {1}{9}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {2}{9}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {3}{9}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {4}{9}}{\sqrt {2}}\,\varpi {\bigr )}} φ [ exp ⁡ ( − 1 11 π ) ] = π 4 Γ ( 3 4 ) − 1 slh ⁡ ( 1 11 2 ϖ ) slh ⁡ ( 2 11 2 ϖ ) slh ⁡ ( 3 11 2 ϖ ) slh ⁡ ( 4 11 2 ϖ ) slh ⁡ ( 5 11 2 ϖ ) {\displaystyle \varphi {\bigl [}\exp(-{\tfrac {1}{11}}\pi ){\bigr ]}={\sqrt[{4}]{\pi }}\,{\Gamma \left({\tfrac {3}{4}}\right)}^{-1}\operatorname {slh} {\bigl (}{\tfrac {1}{11}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {2}{11}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {3}{11}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {4}{11}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {5}{11}}{\sqrt {2}}\,\varpi {\bigr )}} With the letter ϖ {\displaystyle \varpi } the Lemniscate constant is represented. 
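Closed-form entries like the ones above can be checked numerically against the defining series ϑ00(q) = 1 + 2 Σ q^(n²). A minimal Python sketch (the helper name phi is illustrative; the classical base value φ(e^−π) = ⁴√π / Γ(3/4) is used alongside the e^(−8π) entry from the table):

```python
import math

def phi(q, terms=40):
    # theta zero value: phi(q) = 1 + 2 * sum_{n>=1} q^(n^2)
    return 1.0 + 2.0 * sum(q ** (n * n) for n in range(1, terms))

# common prefactor pi^(1/4) / Gamma(3/4) appearing in every table entry
pref = math.pi ** 0.25 / math.gamma(0.75)

# classical base value: phi(e^-pi) = pi^(1/4) / Gamma(3/4)
base_err = phi(math.exp(-math.pi)) - pref

# table entry: phi(e^(-8 pi)) = pref * (sqrt(2 + sqrt(2)) + 128^(1/8)) / 4
lhs = phi(math.exp(-8 * math.pi))
rhs = pref * (math.sqrt(2.0 + math.sqrt(2.0)) + 128.0 ** 0.125) / 4.0
```

Both comparisons agree to machine precision, since the series converges extremely fast for these nome values.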
Note that the following modular identities hold: 2 φ ( q 4 ) = φ ( q ) + 2 φ 2 ( q 2 ) − φ 2 ( q ) 3 φ ( q 9 ) = φ ( q ) + 9 φ 4 ( q 3 ) φ ( q ) − φ 3 ( q ) 3 5 φ ( q 25 ) = φ ( q 5 ) cot ⁡ ( 1 2 arctan ⁡ ( 2 5 φ ( q ) φ ( q 5 ) φ 2 ( q ) − φ 2 ( q 5 ) 1 + s ( q ) − s 2 ( q ) s ( q ) ) ) {\displaystyle {\begin{aligned}2\varphi \left(q^{4}\right)&=\varphi (q)+{\sqrt {2\varphi ^{2}\left(q^{2}\right)-\varphi ^{2}(q)}}\\3\varphi \left(q^{9}\right)&=\varphi (q)+{\sqrt[{3}]{9{\frac {\varphi ^{4}\left(q^{3}\right)}{\varphi (q)}}-\varphi ^{3}(q)}}\\{\sqrt {5}}\varphi \left(q^{25}\right)&=\varphi \left(q^{5}\right)\cot \left({\frac {1}{2}}\arctan \left({\frac {2}{\sqrt {5}}}{\frac {\varphi (q)\varphi \left(q^{5}\right)}{\varphi ^{2}(q)-\varphi ^{2}\left(q^{5}\right)}}{\frac {1+s(q)-s^{2}(q)}{s(q)}}\right)\right)\end{aligned}}} where s ( q ) = s ( e π i τ ) = − R ( − e − π i / ( 5 τ ) ) {\displaystyle s(q)=s\left(e^{\pi i\tau }\right)=-R\left(-e^{-\pi i/(5\tau )}\right)} is the Rogers–Ramanujan continued fraction: s ( q ) = tan ⁡ ( 1 2 arctan ⁡ ( 5 2 φ 2 ( q 5 ) φ 2 ( q ) − 1 2 ) ) cot 2 ⁡ ( 1 2 arccot ⁡ ( 5 2 φ 2 ( q 5 ) φ 2 ( q ) − 1 2 ) ) 5 = e − π i / ( 25 τ ) 1 − e − π i / ( 5 τ ) 1 + e − 2 π i / ( 5 τ ) 1 − ⋱ {\displaystyle {\begin{aligned}s(q)&={\sqrt[{5}]{\tan \left({\frac {1}{2}}\arctan \left({\frac {5}{2}}{\frac {\varphi ^{2}\left(q^{5}\right)}{\varphi ^{2}(q)}}-{\frac {1}{2}}\right)\right)\cot ^{2}\left({\frac {1}{2}}\operatorname {arccot} \left({\frac {5}{2}}{\frac {\varphi ^{2}\left(q^{5}\right)}{\varphi ^{2}(q)}}-{\frac {1}{2}}\right)\right)}}\\&={\cfrac {e^{-\pi i/(25\tau )}}{1-{\cfrac {e^{-\pi i/(5\tau )}}{1+{\cfrac {e^{-2\pi i/(5\tau )}}{1-\ddots }}}}}}\end{aligned}}} === Equianharmonic values === The mathematician Bruce Berndt found out further values of the theta function: φ ( exp ⁡ ( − 3 π ) ) = π − 1 Γ ( 4 3 ) 3 / 2 2 − 2 / 3 3 13 / 8 φ ( exp ⁡ ( − 2 3 π ) ) = π − 1 Γ ( 4 3 ) 3 / 2 2 − 2 / 3 3 13 / 8 cos ⁡ ( 1 24 π ) φ ( exp ⁡ ( − 3 3 π ) ) = π − 1 Γ ( 4 
3 ) 3 / 2 2 − 2 / 3 3 7 / 8 ( 2 3 + 1 ) φ ( exp ⁡ ( − 4 3 π ) ) = π − 1 Γ ( 4 3 ) 3 / 2 2 − 5 / 3 3 13 / 8 ( 1 + cos ⁡ ( 1 12 π ) ) φ ( exp ⁡ ( − 5 3 π ) ) = π − 1 Γ ( 4 3 ) 3 / 2 2 − 2 / 3 3 5 / 8 sin ⁡ ( 1 5 π ) ( 2 5 100 3 + 2 5 10 3 + 3 5 5 + 1 ) {\displaystyle {\begin{array}{lll}\varphi \left(\exp(-{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-2/3}3^{13/8}\\\varphi \left(\exp(-2{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-2/3}3^{13/8}\cos({\tfrac {1}{24}}\pi )\\\varphi \left(\exp(-3{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-2/3}3^{7/8}({\sqrt[{3}]{2}}+1)\\\varphi \left(\exp(-4{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-5/3}3^{13/8}{\Bigl (}1+{\sqrt {\cos({\tfrac {1}{12}}\pi )}}{\Bigr )}\\\varphi \left(\exp(-5{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-2/3}3^{5/8}\sin({\tfrac {1}{5}}\pi )({\tfrac {2}{5}}{\sqrt[{3}]{100}}+{\tfrac {2}{5}}{\sqrt[{3}]{10}}+{\tfrac {3}{5}}{\sqrt {5}}+1)\end{array}}} === Further values === Many values of the theta function and especially of the shown phi function can be represented in terms of the gamma function: φ ( exp ⁡ ( − 2 π ) ) = π − 1 / 2 Γ ( 9 8 ) Γ ( 5 4 ) − 1 / 2 2 7 / 8 φ ( exp ⁡ ( − 2 2 π ) ) = π − 1 / 2 Γ ( 9 8 ) Γ ( 5 4 ) − 1 / 2 2 1 / 8 ( 1 + 2 − 1 ) φ ( exp ⁡ ( − 3 2 π ) ) = π − 1 / 2 Γ ( 9 8 ) Γ ( 5 4 ) − 1 / 2 2 3 / 8 3 − 1 / 2 ( 3 + 1 ) tan ⁡ ( 5 24 π ) φ ( exp ⁡ ( − 4 2 π ) ) = π − 1 / 2 Γ ( 9 8 ) Γ ( 5 4 ) − 1 / 2 2 − 1 / 8 ( 1 + 2 2 − 2 4 ) φ ( exp ⁡ ( − 5 2 π ) ) = π − 1 / 2 Γ ( 9 8 ) Γ ( 5 4 ) − 1 / 2 1 15 2 3 / 8 × × [ 5 3 10 + 2 5 ( 5 + 2 + 3 3 3 + 5 + 2 − 3 3 3 ) − ( 2 − 2 ) 25 − 10 5 ] φ ( exp ⁡ ( − 6 π ) ) = π − 1 / 2 Γ ( 5 24 ) Γ ( 5 12 ) − 1 / 2 2 − 13 / 24 3 − 1 / 8 sin ⁡ ( 5 12 π ) φ ( exp ⁡ ( − 1 2 6 π ) ) = π − 1 / 2 Γ ( 5 24 ) Γ ( 5 12 ) − 1 / 2 2 5 / 24 3 − 1 / 8 sin ⁡ ( 5 24 π ) {\displaystyle 
{\begin{array}{lll}\varphi \left(\exp(-{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}2^{7/8}\\\varphi \left(\exp(-2{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}2^{1/8}{\Bigl (}1+{\sqrt {{\sqrt {2}}-1}}{\Bigr )}\\\varphi \left(\exp(-3{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}2^{3/8}3^{-1/2}({\sqrt {3}}+1){\sqrt {\tan({\tfrac {5}{24}}\pi )}}\\\varphi \left(\exp(-4{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}2^{-1/8}{\Bigl (}1+{\sqrt[{4}]{2{\sqrt {2}}-2}}{\Bigr )}\\\varphi \left(\exp(-5{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}{\frac {1}{15}}\,2^{3/8}\times \\&&\times {\biggl [}{\sqrt[{3}]{5}}\,{\sqrt {10+2{\sqrt {5}}}}{\biggl (}{\sqrt[{3}]{5+{\sqrt {2}}+3{\sqrt {3}}}}+{\sqrt[{3}]{5+{\sqrt {2}}-3{\sqrt {3}}}}\,{\biggr )}-{\bigl (}2-{\sqrt {2}}\,{\bigr )}{\sqrt {25-10{\sqrt {5}}}}\,{\biggr ]}\\\varphi \left(\exp(-{\sqrt {6}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {5}{24}}\right){\Gamma \left({\tfrac {5}{12}}\right)}^{-1/2}2^{-13/24}3^{-1/8}{\sqrt {\sin({\tfrac {5}{12}}\pi )}}\\\varphi \left(\exp(-{\tfrac {1}{2}}{\sqrt {6}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {5}{24}}\right){\Gamma \left({\tfrac {5}{12}}\right)}^{-1/2}2^{5/24}3^{-1/8}\sin({\tfrac {5}{24}}\pi )\end{array}}} == Nome power theorems == === Direct power theorems === For the transformation of the nome in the theta functions these formulas can be used: θ 2 ( q 2 ) = 1 2 2 [ θ 3 ( q ) 2 − θ 4 ( q ) 2 ] {\displaystyle \theta _{2}(q^{2})={\tfrac {1}{2}}{\sqrt {2[\theta _{3}(q)^{2}-\theta _{4}(q)^{2}]}}} θ 3 ( q 2 ) = 1 2 2 [ θ 3 ( q ) 2 + θ 4 ( q ) 2 ] {\displaystyle \theta _{3}(q^{2})={\tfrac {1}{2}}{\sqrt {2[\theta 
_{3}(q)^{2}+\theta _{4}(q)^{2}]}}} θ 4 ( q 2 ) = θ 4 ( q ) θ 3 ( q ) {\displaystyle \theta _{4}(q^{2})={\sqrt {\theta _{4}(q)\theta _{3}(q)}}} The squares of the three theta zero-value functions with the square function as the inner function are also formed in the pattern of the Pythagorean triples according to the Jacobi Identity. Furthermore, those transformations are valid: θ 3 ( q 4 ) = 1 2 θ 3 ( q ) + 1 2 θ 4 ( q ) {\displaystyle \theta _{3}(q^{4})={\tfrac {1}{2}}\theta _{3}(q)+{\tfrac {1}{2}}\theta _{4}(q)} These formulas can be used to compute the theta values of the cube of the nome: 27 θ 3 ( q 3 ) 8 − 18 θ 3 ( q 3 ) 4 θ 3 ( q ) 4 − θ 3 ( q ) 8 = 8 θ 3 ( q 3 ) 2 θ 3 ( q ) 2 [ 2 θ 4 ( q ) 4 − θ 3 ( q ) 4 ] {\displaystyle 27\,\theta _{3}(q^{3})^{8}-18\,\theta _{3}(q^{3})^{4}\theta _{3}(q)^{4}-\,\theta _{3}(q)^{8}=8\,\theta _{3}(q^{3})^{2}\theta _{3}(q)^{2}[2\,\theta _{4}(q)^{4}-\theta _{3}(q)^{4}]} 27 θ 4 ( q 3 ) 8 − 18 θ 4 ( q 3 ) 4 θ 4 ( q ) 4 − θ 4 ( q ) 8 = 8 θ 4 ( q 3 ) 2 θ 4 ( q ) 2 [ 2 θ 3 ( q ) 4 − θ 4 ( q ) 4 ] {\displaystyle 27\,\theta _{4}(q^{3})^{8}-18\,\theta _{4}(q^{3})^{4}\theta _{4}(q)^{4}-\,\theta _{4}(q)^{8}=8\,\theta _{4}(q^{3})^{2}\theta _{4}(q)^{2}[2\,\theta _{3}(q)^{4}-\theta _{4}(q)^{4}]} And the following formulas can be used to compute the theta values of the fifth power of the nome: [ θ 3 ( q ) 2 − θ 3 ( q 5 ) 2 ] [ 5 θ 3 ( q 5 ) 2 − θ 3 ( q ) 2 ] 5 = 256 θ 3 ( q 5 ) 2 θ 3 ( q ) 2 θ 4 ( q ) 4 [ θ 3 ( q ) 4 − θ 4 ( q ) 4 ] {\displaystyle [\theta _{3}(q)^{2}-\theta _{3}(q^{5})^{2}][5\,\theta _{3}(q^{5})^{2}-\theta _{3}(q)^{2}]^{5}=256\,\theta _{3}(q^{5})^{2}\theta _{3}(q)^{2}\theta _{4}(q)^{4}[\theta _{3}(q)^{4}-\theta _{4}(q)^{4}]} [ θ 4 ( q 5 ) 2 − θ 4 ( q ) 2 ] [ 5 θ 4 ( q 5 ) 2 − θ 4 ( q ) 2 ] 5 = 256 θ 4 ( q 5 ) 2 θ 4 ( q ) 2 θ 3 ( q ) 4 [ θ 3 ( q ) 4 − θ 4 ( q ) 4 ] {\displaystyle [\theta _{4}(q^{5})^{2}-\theta _{4}(q)^{2}][5\,\theta _{4}(q^{5})^{2}-\theta _{4}(q)^{2}]^{5}=256\,\theta _{4}(q^{5})^{2}\theta _{4}(q)^{2}\theta 
_{3}(q)^{4}[\theta _{3}(q)^{4}-\theta _{4}(q)^{4}]} === Transformation at the cube root of the nome === The formulas for the theta Nullwert function values from the cube root of the elliptic nome are obtained by contrasting the two real solutions of the corresponding quartic equations: [ θ 3 ( q 1 / 3 ) 2 θ 3 ( q ) 2 − 3 θ 3 ( q 3 ) 2 θ 3 ( q ) 2 ] 2 = 4 − 4 [ 2 θ 2 ( q ) 2 θ 4 ( q ) 2 θ 3 ( q ) 4 ] 2 / 3 {\displaystyle {\biggl [}{\frac {\theta _{3}(q^{1/3})^{2}}{\theta _{3}(q)^{2}}}-{\frac {3\,\theta _{3}(q^{3})^{2}}{\theta _{3}(q)^{2}}}{\biggr ]}^{2}=4-4{\biggl [}{\frac {2\,\theta _{2}(q)^{2}\theta _{4}(q)^{2}}{\theta _{3}(q)^{4}}}{\biggr ]}^{2/3}} [ 3 θ 4 ( q 3 ) 2 θ 4 ( q ) 2 − θ 4 ( q 1 / 3 ) 2 θ 4 ( q ) 2 ] 2 = 4 + 4 [ 2 θ 2 ( q ) 2 θ 3 ( q ) 2 θ 4 ( q ) 4 ] 2 / 3 {\displaystyle {\biggl [}{\frac {3\,\theta _{4}(q^{3})^{2}}{\theta _{4}(q)^{2}}}-{\frac {\theta _{4}(q^{1/3})^{2}}{\theta _{4}(q)^{2}}}{\biggr ]}^{2}=4+4{\biggl [}{\frac {2\,\theta _{2}(q)^{2}\theta _{3}(q)^{2}}{\theta _{4}(q)^{4}}}{\biggr ]}^{2/3}} === Transformation at the fifth root of the nome === The Rogers-Ramanujan continued fraction can be defined in terms of the Jacobi theta function in the following way: R ( q ) = tan ⁡ { 1 2 arctan ⁡ [ 1 2 − θ 4 ( q ) 2 2 θ 4 ( q 5 ) 2 ] } 1 / 5 tan ⁡ { 1 2 arccot ⁡ [ 1 2 − θ 4 ( q ) 2 2 θ 4 ( q 5 ) 2 ] } 2 / 5 {\displaystyle R(q)=\tan {\biggl \{}{\frac {1}{2}}\arctan {\biggl [}{\frac {1}{2}}-{\frac {\theta _{4}(q)^{2}}{2\,\theta _{4}(q^{5})^{2}}}{\biggr ]}{\biggr \}}^{1/5}\tan {\biggl \{}{\frac {1}{2}}\operatorname {arccot} {\biggl [}{\frac {1}{2}}-{\frac {\theta _{4}(q)^{2}}{2\,\theta _{4}(q^{5})^{2}}}{\biggr ]}{\biggr \}}^{2/5}} R ( q 2 ) = tan ⁡ { 1 2 arctan ⁡ [ 1 2 − θ 4 ( q ) 2 2 θ 4 ( q 5 ) 2 ] } 2 / 5 cot ⁡ { 1 2 arccot ⁡ [ 1 2 − θ 4 ( q ) 2 2 θ 4 ( q 5 ) 2 ] } 1 / 5 {\displaystyle R(q^{2})=\tan {\biggl \{}{\frac {1}{2}}\arctan {\biggl [}{\frac {1}{2}}-{\frac {\theta _{4}(q)^{2}}{2\,\theta _{4}(q^{5})^{2}}}{\biggr ]}{\biggr \}}^{2/5}\cot {\biggl 
\{}{\frac {1}{2}}\operatorname {arccot} {\biggl [}{\frac {1}{2}}-{\frac {\theta _{4}(q)^{2}}{2\,\theta _{4}(q^{5})^{2}}}{\biggr ]}{\biggr \}}^{1/5}} R ( q 2 ) = tan ⁡ { 1 2 arctan ⁡ [ θ 3 ( q ) 2 2 θ 3 ( q 5 ) 2 − 1 2 ] } 2 / 5 tan ⁡ { 1 2 arccot ⁡ [ θ 3 ( q ) 2 2 θ 3 ( q 5 ) 2 − 1 2 ] } 1 / 5 {\displaystyle R(q^{2})=\tan {\biggl \{}{\frac {1}{2}}\arctan {\biggl [}{\frac {\theta _{3}(q)^{2}}{2\,\theta _{3}(q^{5})^{2}}}-{\frac {1}{2}}{\biggr ]}{\biggr \}}^{2/5}\tan {\biggl \{}{\frac {1}{2}}\operatorname {arccot} {\biggl [}{\frac {\theta _{3}(q)^{2}}{2\,\theta _{3}(q^{5})^{2}}}-{\frac {1}{2}}{\biggr ]}{\biggr \}}^{1/5}} The alternating Rogers-Ramanujan continued fraction function S(q) has the following two identities: S ( q ) = R ( q 4 ) R ( q 2 ) R ( q ) = tan ⁡ { 1 2 arctan ⁡ [ θ 3 ( q ) 2 2 θ 3 ( q 5 ) 2 − 1 2 ] } 1 / 5 cot ⁡ { 1 2 arccot ⁡ [ θ 3 ( q ) 2 2 θ 3 ( q 5 ) 2 − 1 2 ] } 2 / 5 {\displaystyle S(q)={\frac {R(q^{4})}{R(q^{2})R(q)}}=\tan {\biggl \{}{\frac {1}{2}}\arctan {\biggl [}{\frac {\theta _{3}(q)^{2}}{2\,\theta _{3}(q^{5})^{2}}}-{\frac {1}{2}}{\biggr ]}{\biggr \}}^{1/5}\cot {\biggl \{}{\frac {1}{2}}\operatorname {arccot} {\biggl [}{\frac {\theta _{3}(q)^{2}}{2\,\theta _{3}(q^{5})^{2}}}-{\frac {1}{2}}{\biggr ]}{\biggr \}}^{2/5}} The theta function values from the fifth root of the nome can be represented as a rational combination of the continued fractions R and S and the theta function values from the fifth power of the nome and the nome itself. 
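Because every quantity involved is given by a rapidly convergent series, these representations can be compared numerically. The sketch below (assuming the standard continued-fraction normalization R(q) = q^(1/5)/(1 + q/(1 + q²/(1 + …)))) checks the first theta-quotient formula for R(q) against a truncated continued fraction:

```python
import math

def theta4(q, terms=80):
    # theta_4 zero value: 1 + 2 * sum_{n>=1} (-1)^n q^(n^2)
    return 1.0 + 2.0 * sum((-1) ** n * q ** (n * n) for n in range(1, terms))

def rr_cf(q, depth=80):
    # Rogers-Ramanujan continued fraction q^(1/5)/(1 + q/(1 + q^2/(1 + ...)))
    tail = 1.0
    for n in range(depth, 0, -1):
        tail = 1.0 + q ** n / tail
    return q ** 0.2 / tail

def rr_theta(q):
    # R(q) from the first theta-quotient formula above
    u = 0.5 - theta4(q) ** 2 / (2.0 * theta4(q ** 5) ** 2)
    half_atan = 0.5 * math.atan(u)
    half_acot = 0.5 * (math.pi / 2.0 - math.atan(u))  # arccot(u) = pi/2 - arctan(u)
    return math.tan(half_atan) ** 0.2 * math.tan(half_acot) ** 0.4
```

At, for example, q = 0.2 the two routes can be compared directly; the arccot is rewritten via arctan because the argument is positive for 0 &lt; q &lt; 1.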
The following four equations are valid for all values q between 0 and 1: θ 3 ( q 1 / 5 ) θ 3 ( q 5 ) − 1 = 1 S ( q ) [ S ( q ) 2 + R ( q 2 ) ] [ 1 + R ( q 2 ) S ( q ) ] {\displaystyle {\frac {\theta _{3}(q^{1/5})}{\theta _{3}(q^{5})}}-1={\frac {1}{S(q)}}{\bigl [}S(q)^{2}+R(q^{2}){\bigr ]}{\bigl [}1+R(q^{2})S(q){\bigr ]}} 1 − θ 4 ( q 1 / 5 ) θ 4 ( q 5 ) = 1 R ( q ) [ R ( q 2 ) + R ( q ) 2 ] [ 1 − R ( q 2 ) R ( q ) ] {\displaystyle 1-{\frac {\theta _{4}(q^{1/5})}{\theta _{4}(q^{5})}}={\frac {1}{R(q)}}{\bigl [}R(q^{2})+R(q)^{2}{\bigr ]}{\bigl [}1-R(q^{2})R(q){\bigr ]}} θ 3 ( q 1 / 5 ) 2 − θ 3 ( q ) 2 = [ θ 3 ( q ) 2 − θ 3 ( q 5 ) 2 ] [ 1 + 1 R ( q 2 ) S ( q ) + R ( q 2 ) S ( q ) + 1 R ( q 2 ) 2 + R ( q 2 ) 2 + 1 S ( q ) − S ( q ) ] {\displaystyle \theta _{3}(q^{1/5})^{2}-\theta _{3}(q)^{2}={\bigl [}\theta _{3}(q)^{2}-\theta _{3}(q^{5})^{2}{\bigr ]}{\biggl [}1+{\frac {1}{R(q^{2})S(q)}}+R(q^{2})S(q)+{\frac {1}{R(q^{2})^{2}}}+R(q^{2})^{2}+{\frac {1}{S(q)}}-S(q){\biggr ]}} θ 4 ( q ) 2 − θ 4 ( q 1 / 5 ) 2 = [ θ 4 ( q 5 ) 2 − θ 4 ( q ) 2 ] [ 1 − 1 R ( q 2 ) R ( q ) − R ( q 2 ) R ( q ) + 1 R ( q 2 ) 2 + R ( q 2 ) 2 − 1 R ( q ) + R ( q ) ] {\displaystyle \theta _{4}(q)^{2}-\theta _{4}(q^{1/5})^{2}={\bigl [}\theta _{4}(q^{5})^{2}-\theta _{4}(q)^{2}{\bigr ]}{\biggl [}1-{\frac {1}{R(q^{2})R(q)}}-R(q^{2})R(q)+{\frac {1}{R(q^{2})^{2}}}+R(q^{2})^{2}-{\frac {1}{R(q)}}+R(q){\biggr ]}} === Modulus dependent theorems === In combination with the elliptic modulus, the following formulas can be displayed: These are the formulas for the square of the elliptic nome: θ 4 [ q ( k ) ] = θ 4 [ q ( k ) 2 ] 1 − k 2 8 {\displaystyle \theta _{4}[q(k)]=\theta _{4}[q(k)^{2}]{\sqrt[{8}]{1-k^{2}}}} θ 4 [ q ( k ) 2 ] = θ 3 [ q ( k ) ] 1 − k 2 8 {\displaystyle \theta _{4}[q(k)^{2}]=\theta _{3}[q(k)]{\sqrt[{8}]{1-k^{2}}}} θ 3 [ q ( k ) 2 ] = θ 3 [ q ( k ) ] cos ⁡ [ 1 2 arcsin ⁡ ( k ) ] {\displaystyle \theta _{3}[q(k)^{2}]=\theta _{3}[q(k)]\cos[{\tfrac {1}{2}}\arcsin(k)]} And this is an efficient formula 
for the cube of the nome: θ 4 ⟨ q { tan ⁡ [ 1 2 arctan ⁡ ( t 3 ) ] } 3 ⟩ = θ 4 ⟨ q { tan ⁡ [ 1 2 arctan ⁡ ( t 3 ) ] } ⟩ 3 − 1 / 2 ( 2 t 4 − t 2 + 1 − t 2 + 2 + t 2 + 1 ) 1 / 2 {\displaystyle \theta _{4}{\biggl \langle }q{\bigl \{}\tan {\bigl [}{\tfrac {1}{2}}\arctan(t^{3}){\bigr ]}{\bigr \}}^{3}{\biggr \rangle }=\theta _{4}{\biggl \langle }q{\bigl \{}\tan {\bigl [}{\tfrac {1}{2}}\arctan(t^{3}){\bigr ]}{\bigr \}}{\biggr \rangle }\,3^{-1/2}{\bigl (}{\sqrt {2{\sqrt {t^{4}-t^{2}+1}}-t^{2}+2}}+{\sqrt {t^{2}+1}}\,{\bigr )}^{1/2}} This formula is valid for all real values t ∈ R {\displaystyle t\in \mathbb {R} }. It can be evaluated, for example, at t = 1 {\displaystyle t=1} or at t = Φ − 2 {\displaystyle t=\Phi ^{-2}}, where the constant Φ {\displaystyle \Phi } denotes the golden ratio Φ = 1 2 ( 5 + 1 ) {\displaystyle \Phi ={\tfrac {1}{2}}({\sqrt {5}}+1)}. == Some series identities == === Sums with theta function in the result === The infinite sum of the reciprocals of Fibonacci numbers with odd indices has the identity: ∑ n = 1 ∞ 1 F 2 n − 1 = 5 2 ∑ n = 1 ∞ 2 ( Φ − 2 ) n − 1 / 2 1 + ( Φ − 2 ) 2 n − 1 = 5 4 ∑ a = − ∞ ∞ 2 ( Φ − 2 ) a − 1 / 2 1 + ( Φ − 2 ) 2 a − 1 = {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{F_{2n-1}}}={\frac {\sqrt {5}}{2}}\,\sum _{n=1}^{\infty }{\frac {2(\Phi ^{-2})^{n-1/2}}{1+(\Phi ^{-2})^{2n-1}}}={\frac {\sqrt {5}}{4}}\sum _{a=-\infty }^{\infty }{\frac {2(\Phi ^{-2})^{a-1/2}}{1+(\Phi ^{-2})^{2a-1}}}=} = 5 4 θ 2 ( Φ − 2 ) 2 = 5 8 [ θ 3 ( Φ − 1 ) 2 − θ 4 ( Φ − 1 ) 2 ] {\displaystyle ={\frac {\sqrt {5}}{4}}\,\theta _{2}(\Phi ^{-2})^{2}={\frac {\sqrt {5}}{8}}{\bigl [}\theta _{3}(\Phi ^{-1})^{2}-\theta _{4}(\Phi ^{-1})^{2}{\bigr ]}} Without using the theta function expression, the following identity between two sums can be formulated: ∑ n = 1 ∞ 1 F 2 n − 1 = 5 4 [ ∑ n = 1 ∞ 2 Φ − ( 2 n − 1 ) 2 / 2 ] 2 {\displaystyle \sum
_{n=1}^{\infty }{\frac {1}{F_{2n-1}}}={\frac {\sqrt {5}}{4}}\,{\biggl [}\sum _{n=1}^{\infty }2\,\Phi ^{-(2n-1)^{2}/2}{\biggr ]}^{2}} ∑ n = 1 ∞ 1 F 2 n − 1 = 1.82451515740692456814215840626732817332 … {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{F_{2n-1}}}=1.82451515740692456814215840626732817332\ldots } Here, too, Φ = 1 2 ( 5 + 1 ) {\displaystyle \Phi ={\tfrac {1}{2}}({\sqrt {5}}+1)} is the golden ratio. The infinite sum of the reciprocals of the squared Fibonacci numbers is: ∑ n = 1 ∞ 1 F n 2 = 5 24 [ 2 θ 2 ( Φ − 2 ) 4 − θ 3 ( Φ − 2 ) 4 + 1 ] = 5 24 [ θ 3 ( Φ − 2 ) 4 − 2 θ 4 ( Φ − 2 ) 4 + 1 ] {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{F_{n}^{2}}}={\frac {5}{24}}{\bigl [}2\,\theta _{2}(\Phi ^{-2})^{4}-\theta _{3}(\Phi ^{-2})^{4}+1{\bigr ]}={\frac {5}{24}}{\bigl [}\theta _{3}(\Phi ^{-2})^{4}-2\,\theta _{4}(\Phi ^{-2})^{4}+1{\bigr ]}} The infinite sum of the reciprocals of the Pell numbers with odd indices is: ∑ n = 1 ∞ 1 P 2 n − 1 = 1 2 θ 2 [ ( 2 − 1 ) 2 ] 2 = 1 2 2 [ θ 3 ( 2 − 1 ) 2 − θ 4 ( 2 − 1 ) 2 ] {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{P_{2n-1}}}={\frac {1}{\sqrt {2}}}\,\theta _{2}{\bigl [}({\sqrt {2}}-1)^{2}{\bigr ]}^{2}={\frac {1}{2{\sqrt {2}}}}{\bigl [}\theta _{3}({\sqrt {2}}-1)^{2}-\theta _{4}({\sqrt {2}}-1)^{2}{\bigr ]}} === Sums with theta function in the summand === The next two series identities were proved by István Mező: θ 4 2 ( q ) = i q 1 4 ∑ k = − ∞ ∞ q 2 k 2 − k θ 1 ( 2 k − 1 2 i ln ⁡ q , q ) , θ 4 2 ( q ) = ∑ k = − ∞ ∞ q 2 k 2 θ 4 ( k ln ⁡ q i , q ) . {\displaystyle {\begin{aligned}\theta _{4}^{2}(q)&=iq^{\frac {1}{4}}\sum _{k=-\infty }^{\infty }q^{2k^{2}-k}\theta _{1}\left({\frac {2k-1}{2i}}\ln q,q\right),\\[6pt]\theta _{4}^{2}(q)&=\sum _{k=-\infty }^{\infty }q^{2k^{2}}\theta _{4}\left({\frac {k\ln q}{i}},q\right).\end{aligned}}} These relations hold for all 0 < q < 1.
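The reciprocal Fibonacci and Pell sums of this section lend themselves to a direct numerical check. A short Python sketch with a truncated θ2 series (helper names are illustrative):

```python
import math

def theta2(q, terms=60):
    # theta_2 zero value: 2 * sum_{n>=0} q^((n + 1/2)^2)
    return 2.0 * sum(q ** ((n + 0.5) ** 2) for n in range(terms))

Phi = (math.sqrt(5.0) + 1.0) / 2.0   # golden ratio

# sum of reciprocal Fibonacci numbers with odd index
fib_sum, a, b = 0.0, 1, 1            # (a, b) = (F_1, F_2)
for _ in range(60):
    fib_sum += 1.0 / a
    a, b = a + b, a + 2 * b          # advance two Fibonacci steps

fib_rhs = math.sqrt(5.0) / 4.0 * theta2(Phi ** -2) ** 2

# sum of reciprocal Pell numbers with odd index
pell_sum, a, b = 0.0, 1, 2           # (a, b) = (P_1, P_2)
for _ in range(60):
    pell_sum += 1.0 / a
    a, b = a + 2 * b, 2 * a + 5 * b  # advance two Pell steps

pell_rhs = theta2((math.sqrt(2.0) - 1.0) ** 2) ** 2 / math.sqrt(2.0)
```

Both partial sums converge geometrically, so sixty terms already reproduce the decimal expansion 1.8245151574… quoted above to machine precision.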
Specializing the values of q, we have the next parameter free sums π e π 2 ⋅ 1 Γ 2 ( 3 4 ) = i ∑ k = − ∞ ∞ e π ( k − 2 k 2 ) θ 1 ( i π 2 ( 2 k − 1 ) , e − π ) {\displaystyle {\sqrt {\frac {\pi {\sqrt {e^{\pi }}}}{2}}}\cdot {\frac {1}{\Gamma ^{2}\left({\frac {3}{4}}\right)}}=i\sum _{k=-\infty }^{\infty }e^{\pi \left(k-2k^{2}\right)}\theta _{1}\left({\frac {i\pi }{2}}(2k-1),e^{-\pi }\right)} π 2 ⋅ 1 Γ 2 ( 3 4 ) = ∑ k = − ∞ ∞ θ 4 ( i k π , e − π ) e 2 π k 2 {\displaystyle {\sqrt {\frac {\pi }{2}}}\cdot {\frac {1}{\Gamma ^{2}\left({\frac {3}{4}}\right)}}=\sum _{k=-\infty }^{\infty }{\frac {\theta _{4}\left(ik\pi ,e^{-\pi }\right)}{e^{2\pi k^{2}}}}} == Zeros of the Jacobi theta functions == All zeros of the Jacobi theta functions are simple zeros and are given by the following: ϑ ( z ; τ ) = ϑ 00 ( z ; τ ) = 0 ⟺ z = m + n τ + 1 2 + τ 2 ϑ 11 ( z ; τ ) = 0 ⟺ z = m + n τ ϑ 10 ( z ; τ ) = 0 ⟺ z = m + n τ + 1 2 ϑ 01 ( z ; τ ) = 0 ⟺ z = m + n τ + τ 2 {\displaystyle {\begin{aligned}\vartheta (z;\tau )=\vartheta _{00}(z;\tau )&=0\quad &\Longleftrightarrow &&\quad z&=m+n\tau +{\frac {1}{2}}+{\frac {\tau }{2}}\\[3pt]\vartheta _{11}(z;\tau )&=0\quad &\Longleftrightarrow &&\quad z&=m+n\tau \\[3pt]\vartheta _{10}(z;\tau )&=0\quad &\Longleftrightarrow &&\quad z&=m+n\tau +{\frac {1}{2}}\\[3pt]\vartheta _{01}(z;\tau )&=0\quad &\Longleftrightarrow &&\quad z&=m+n\tau +{\frac {\tau }{2}}\end{aligned}}} where m, n are arbitrary integers. 
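The zero pattern can be confirmed numerically from the series ϑ00(z; τ) = Σ exp(πin²τ + 2πinz). A small Python sketch with an arbitrarily chosen τ in the upper half-plane:

```python
import cmath

def theta00(z, tau, N=30):
    # vartheta_00(z; tau) = sum_n exp(pi i (n^2 tau + 2 n z))
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

tau = 0.3 + 0.8j                      # any point of the upper half-plane
m, n = 2, -1                          # arbitrary integers
zero = m + n * tau + 0.5 + tau / 2.0  # predicted zero of vartheta_00

at_zero = theta00(zero, tau)
elsewhere = theta00(0.1 + 0.2j, tau)  # generic point, no zero expected
```

At z = m + nτ + 1/2 + τ/2 the terms for indices n and −n−1 cancel in pairs, so the truncated sum vanishes to machine precision.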
== Relation to the Riemann zeta function == The relation ϑ ( 0 ; − 1 τ ) = ( − i τ ) 1 2 ϑ ( 0 ; τ ) {\displaystyle \vartheta \left(0;-{\frac {1}{\tau }}\right)=\left(-i\tau \right)^{\frac {1}{2}}\vartheta (0;\tau )} was used by Riemann to prove the functional equation for the Riemann zeta function, by means of the Mellin transform Γ ( s 2 ) π − s 2 ζ ( s ) = 1 2 ∫ 0 ∞ ( ϑ ( 0 ; i t ) − 1 ) t s 2 d t t {\displaystyle \Gamma \left({\frac {s}{2}}\right)\pi ^{-{\frac {s}{2}}}\zeta (s)={\frac {1}{2}}\int _{0}^{\infty }{\bigl (}\vartheta (0;it)-1{\bigr )}t^{\frac {s}{2}}{\frac {\mathrm {d} t}{t}}} which can be shown to be invariant under substitution of s by 1 − s. The corresponding integral for z ≠ 0 is given in the article on the Hurwitz zeta function. == Relation to the Weierstrass elliptic function == The theta function was used by Jacobi to construct (in a form adapted to easy calculation) his elliptic functions as the quotients of the above four theta functions, and could have been used by him to construct Weierstrass's elliptic functions also, since ℘ ( z ; τ ) = − ( log ⁡ ϑ 11 ( z ; τ ) ) ″ + c {\displaystyle \wp (z;\tau )=-{\big (}\log \vartheta _{11}(z;\tau ){\big )}''+c} where the second derivative is with respect to z and the constant c is defined so that the Laurent expansion of ℘(z) at z = 0 has zero constant term. == Relation to the q-gamma function == The fourth theta function – and thus the others too – is intimately connected to the Jackson q-gamma function via the relation ( Γ q 2 ( x ) Γ q 2 ( 1 − x ) ) − 1 = q 2 x ( 1 − x ) ( q − 2 ; q − 2 ) ∞ 3 ( q 2 − 1 ) θ 4 ( 1 2 i ( 1 − 2 x ) log ⁡ q , 1 q ) . 
{\displaystyle \left(\Gamma _{q^{2}}(x)\Gamma _{q^{2}}(1-x)\right)^{-1}={\frac {q^{2x(1-x)}}{\left(q^{-2};q^{-2}\right)_{\infty }^{3}\left(q^{2}-1\right)}}\theta _{4}\left({\frac {1}{2i}}(1-2x)\log q,{\frac {1}{q}}\right).} == Relations to Dedekind eta function == Let η(τ) be the Dedekind eta function, and the argument of the theta function as the nome q = eπiτ. Then, θ 2 ( q ) = ϑ 10 ( 0 ; τ ) = 2 η 2 ( 2 τ ) η ( τ ) , θ 3 ( q ) = ϑ 00 ( 0 ; τ ) = η 5 ( τ ) η 2 ( 1 2 τ ) η 2 ( 2 τ ) = η 2 ( 1 2 ( τ + 1 ) ) η ( τ + 1 ) , θ 4 ( q ) = ϑ 01 ( 0 ; τ ) = η 2 ( 1 2 τ ) η ( τ ) , {\displaystyle {\begin{aligned}\theta _{2}(q)=\vartheta _{10}(0;\tau )&={\frac {2\eta ^{2}(2\tau )}{\eta (\tau )}},\\[3pt]\theta _{3}(q)=\vartheta _{00}(0;\tau )&={\frac {\eta ^{5}(\tau )}{\eta ^{2}\left({\frac {1}{2}}\tau \right)\eta ^{2}(2\tau )}}={\frac {\eta ^{2}\left({\frac {1}{2}}(\tau +1)\right)}{\eta (\tau +1)}},\\[3pt]\theta _{4}(q)=\vartheta _{01}(0;\tau )&={\frac {\eta ^{2}\left({\frac {1}{2}}\tau \right)}{\eta (\tau )}},\end{aligned}}} and, θ 2 ( q ) θ 3 ( q ) θ 4 ( q ) = 2 η 3 ( τ ) . {\displaystyle \theta _{2}(q)\,\theta _{3}(q)\,\theta _{4}(q)=2\eta ^{3}(\tau ).} See also the Weber modular functions. == Elliptic modulus == The elliptic modulus is k ( τ ) = ϑ 10 ( 0 ; τ ) 2 ϑ 00 ( 0 ; τ ) 2 {\displaystyle k(\tau )={\frac {\vartheta _{10}(0;\tau )^{2}}{\vartheta _{00}(0;\tau )^{2}}}} and the complementary elliptic modulus is k ′ ( τ ) = ϑ 01 ( 0 ; τ ) 2 ϑ 00 ( 0 ; τ ) 2 {\displaystyle k'(\tau )={\frac {\vartheta _{01}(0;\tau )^{2}}{\vartheta _{00}(0;\tau )^{2}}}} == Derivatives of theta functions == These are two identical definitions of the complete elliptic integral of the second kind: E ( k ) = ∫ 0 π / 2 1 − k 2 sin ⁡ ( φ ) 2 d φ {\displaystyle E(k)=\int _{0}^{\pi /2}{\sqrt {1-k^{2}\sin(\varphi )^{2}}}d\varphi } E ( k ) = π 2 ∑ a = 0 ∞ [ ( 2 a ) ! ] 2 ( 1 − 2 a ) 16 a ( a ! 
) 4 k 2 a {\displaystyle E(k)={\frac {\pi }{2}}\sum _{a=0}^{\infty }{\frac {[(2a)!]^{2}}{(1-2a)16^{a}(a!)^{4}}}k^{2a}} The derivatives of the Theta Nullwert functions have these MacLaurin series: θ 2 ′ ( x ) = d d x θ 2 ( x ) = 1 2 x − 3 / 4 + ∑ n = 1 ∞ 1 2 ( 2 n + 1 ) 2 x ( 2 n − 1 ) ( 2 n + 3 ) / 4 {\displaystyle \theta _{2}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{2}(x)={\frac {1}{2}}x^{-3/4}+\sum _{n=1}^{\infty }{\frac {1}{2}}(2n+1)^{2}x^{(2n-1)(2n+3)/4}} θ 3 ′ ( x ) = d d x θ 3 ( x ) = 2 + ∑ n = 1 ∞ 2 ( n + 1 ) 2 x n ( n + 2 ) {\displaystyle \theta _{3}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{3}(x)=2+\sum _{n=1}^{\infty }2(n+1)^{2}x^{n(n+2)}} θ 4 ′ ( x ) = d d x θ 4 ( x ) = − 2 + ∑ n = 1 ∞ 2 ( n + 1 ) 2 ( − 1 ) n + 1 x n ( n + 2 ) {\displaystyle \theta _{4}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{4}(x)=-2+\sum _{n=1}^{\infty }2(n+1)^{2}(-1)^{n+1}x^{n(n+2)}} The derivatives of theta zero-value functions are as follows: θ 2 ′ ( x ) = d d x θ 2 ( x ) = 1 2 π x θ 2 ( x ) θ 3 ( x ) 2 E [ θ 2 ( x ) 2 θ 3 ( x ) 2 ] {\displaystyle \theta _{2}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{2}(x)={\frac {1}{2\pi x}}\theta _{2}(x)\theta _{3}(x)^{2}E{\biggl [}{\frac {\theta _{2}(x)^{2}}{\theta _{3}(x)^{2}}}{\biggr ]}} θ 3 ′ ( x ) = d d x θ 3 ( x ) = θ 3 ( x ) [ θ 3 ( x ) 2 + θ 4 ( x ) 2 ] { 1 2 π x E [ θ 3 ( x ) 2 − θ 4 ( x ) 2 θ 3 ( x ) 2 + θ 4 ( x ) 2 ] − θ 4 ( x ) 2 4 x } {\displaystyle \theta _{3}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{3}(x)=\theta _{3}(x){\bigl [}\theta _{3}(x)^{2}+\theta _{4}(x)^{2}{\bigr ]}{\biggl \{}{\frac {1}{2\pi x}}E{\biggl [}{\frac {\theta _{3}(x)^{2}-\theta _{4}(x)^{2}}{\theta _{3}(x)^{2}+\theta _{4}(x)^{2}}}{\biggr ]}-{\frac {\theta _{4}(x)^{2}}{4\,x}}{\biggr \}}} θ 4 ′ ( x ) = d d x θ 4 ( x ) = θ 4 ( x ) [ θ 3 ( x ) 2 + θ 4 ( x ) 2 ] { 1 2 π x E [ θ 3 ( x ) 2 − θ 4 ( x ) 2 θ 3 ( x ) 2 + θ 4 ( x ) 2 ] − θ 3 ( x ) 2 4 x } {\displaystyle \theta _{4}'(x)={\frac {\mathrm {d} }{\mathrm {d} 
x}}\,\theta _{4}(x)=\theta _{4}(x){\bigl [}\theta _{3}(x)^{2}+\theta _{4}(x)^{2}{\bigr ]}{\biggl \{}{\frac {1}{2\pi x}}E{\biggl [}{\frac {\theta _{3}(x)^{2}-\theta _{4}(x)^{2}}{\theta _{3}(x)^{2}+\theta _{4}(x)^{2}}}{\biggr ]}-{\frac {\theta _{3}(x)^{2}}{4\,x}}{\biggr \}}} The last two formulas are valid for all real numbers x in the interval − 1 < x < 1 {\displaystyle -1<x<1}. These two theta derivatives are related to each other as follows: θ 4 ( x ) [ d d x θ 3 ( x ) ] − θ 3 ( x ) [ d d x θ 4 ( x ) ] = 1 4 x θ 3 ( x ) θ 4 ( x ) [ θ 3 ( x ) 4 − θ 4 ( x ) 4 ] {\displaystyle \theta _{4}(x){\biggl [}{\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{3}(x){\biggr ]}-\theta _{3}(x){\biggl [}{\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{4}(x){\biggr ]}={\frac {1}{4\,x}}\,\theta _{3}(x)\,\theta _{4}(x){\bigl [}\theta _{3}(x)^{4}-\theta _{4}(x)^{4}{\bigr ]}} The derivative of the quotient of any two of the three theta functions mentioned here always has a rational relationship to those three functions: d d x θ 2 ( x ) θ 3 ( x ) = θ 2 ( x ) θ 4 ( x ) 4 4 x θ 3 ( x ) {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\,{\frac {\theta _{2}(x)}{\theta _{3}(x)}}={\frac {\theta _{2}(x)\,\theta _{4}(x)^{4}}{4\,x\,\theta _{3}(x)}}} d d x θ 2 ( x ) θ 4 ( x ) = θ 2 ( x ) θ 3 ( x ) 4 4 x θ 4 ( x ) {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\,{\frac {\theta _{2}(x)}{\theta _{4}(x)}}={\frac {\theta _{2}(x)\,\theta _{3}(x)^{4}}{4\,x\,\theta _{4}(x)}}} d d x θ 3 ( x ) θ 4 ( x ) = θ 3 ( x ) 5 − θ 3 ( x ) θ 4 ( x ) 4 4 x θ 4 ( x ) {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\,{\frac {\theta _{3}(x)}{\theta _{4}(x)}}={\frac {\theta _{3}(x)^{5}-\theta _{3}(x)\,\theta _{4}(x)^{4}}{4\,x\,\theta _{4}(x)}}} For the derivation of these formulas, see the articles Nome (mathematics) and Modular lambda function.
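The quotient rules can be spot-checked against a central finite difference of the defining series. A Python sketch at the nome x = 0.1 (the tolerance is loose to absorb the finite-difference error):

```python
import math

def theta2(q, terms=60):
    return 2.0 * sum(q ** ((n + 0.5) ** 2) for n in range(terms))

def theta3(q, terms=60):
    return 1.0 + 2.0 * sum(q ** (n * n) for n in range(1, terms))

def theta4(q, terms=60):
    return 1.0 + 2.0 * sum((-1) ** n * q ** (n * n) for n in range(1, terms))

x, h = 0.1, 1e-6

# d/dx (theta2/theta3) versus theta2 * theta4^4 / (4 x theta3)
fd_23 = (theta2(x + h) / theta3(x + h) - theta2(x - h) / theta3(x - h)) / (2.0 * h)
cl_23 = theta2(x) * theta4(x) ** 4 / (4.0 * x * theta3(x))

# d/dx (theta3/theta4) versus (theta3^5 - theta3 * theta4^4) / (4 x theta4)
fd_34 = (theta3(x + h) / theta4(x + h) - theta3(x - h) / theta4(x - h)) / (2.0 * h)
cl_34 = (theta3(x) ** 5 - theta3(x) * theta4(x) ** 4) / (4.0 * x * theta4(x))
```

The central difference carries an O(h²) error, so agreement to roughly six decimal places is all that can be expected here.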
== Integrals of theta functions == For the theta functions these integrals are valid: ∫ 0 1 θ 2 ( x ) d x = ∑ k = − ∞ ∞ 4 ( 2 k + 1 ) 2 + 4 = π tanh ⁡ ( π ) ≈ 3.129881 {\displaystyle \int _{0}^{1}\theta _{2}(x)\,\mathrm {d} x=\sum _{k=-\infty }^{\infty }{\frac {4}{(2k+1)^{2}+4}}=\pi \tanh(\pi )\approx 3.129881} ∫ 0 1 θ 3 ( x ) d x = ∑ k = − ∞ ∞ 1 k 2 + 1 = π coth ⁡ ( π ) ≈ 3.153348 {\displaystyle \int _{0}^{1}\theta _{3}(x)\,\mathrm {d} x=\sum _{k=-\infty }^{\infty }{\frac {1}{k^{2}+1}}=\pi \coth(\pi )\approx 3.153348} ∫ 0 1 θ 4 ( x ) d x = ∑ k = − ∞ ∞ ( − 1 ) k k 2 + 1 = π csch ⁡ ( π ) ≈ 0.272029 {\displaystyle \int _{0}^{1}\theta _{4}(x)\,\mathrm {d} x=\sum _{k=-\infty }^{\infty }{\frac {(-1)^{k}}{k^{2}+1}}=\pi \,\operatorname {csch} (\pi )\approx 0.272029} The final results now shown are based on the general Cauchy sum formulas. == A solution to the heat equation == The Jacobi theta function is the fundamental solution of the one-dimensional heat equation with spatially periodic boundary conditions. Taking z = x to be real and τ = it with t real and positive, we can write ϑ ( x ; i t ) = 1 + 2 ∑ n = 1 ∞ exp ⁡ ( − π n 2 t ) cos ⁡ ( 2 π n x ) {\displaystyle \vartheta (x;it)=1+2\sum _{n=1}^{\infty }\exp \left(-\pi n^{2}t\right)\cos(2\pi nx)} which solves the heat equation ∂ ∂ t ϑ ( x ; i t ) = 1 4 π ∂ 2 ∂ x 2 ϑ ( x ; i t ) . {\displaystyle {\frac {\partial }{\partial t}}\vartheta (x;it)={\frac {1}{4\pi }}{\frac {\partial ^{2}}{\partial x^{2}}}\vartheta (x;it).} This theta-function solution is 1-periodic in x, and as t → 0 it approaches the periodic delta function, or Dirac comb, in the sense of distributions lim t → 0 ϑ ( x ; i t ) = ∑ n = − ∞ ∞ δ ( x − n ) {\displaystyle \lim _{t\to 0}\vartheta (x;it)=\sum _{n=-\infty }^{\infty }\delta (x-n)} . General solutions of the spatially periodic initial value problem for the heat equation may be obtained by convolving the initial data at t = 0 with the theta function. 
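The heat-equation property can be illustrated with finite differences on the truncated series. A Python sketch (the sample point and step sizes are illustrative):

```python
import math

def heat_theta(x, t, terms=40):
    # vartheta(x; it) = 1 + 2 * sum exp(-pi n^2 t) cos(2 pi n x)
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) * math.cos(2.0 * math.pi * n * x)
                           for n in range(1, terms))

x, t, h = 0.3, 0.5, 1e-4

d_t = (heat_theta(x, t + h) - heat_theta(x, t - h)) / (2.0 * h)
d_xx = (heat_theta(x + h, t) - 2.0 * heat_theta(x, t) + heat_theta(x - h, t)) / h ** 2

# residual of d/dt theta = (1/(4 pi)) d^2/dx^2 theta
residual = d_t - d_xx / (4.0 * math.pi)
```

The identity also holds term by term: each summand exp(−πn²t) cos(2πnx) picks up a factor −πn² under ∂/∂t and −4π²n² under ∂²/∂x².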
== Relation to the Heisenberg group == The Jacobi theta function is invariant under the action of a discrete subgroup of the Heisenberg group. This invariance is presented in the article on the theta representation of the Heisenberg group. == Generalizations == If F is a quadratic form in n variables, then the theta function associated with F is θ F ( z ) = ∑ m ∈ Z n e 2 π i z F ( m ) {\displaystyle \theta _{F}(z)=\sum _{m\in \mathbb {Z} ^{n}}e^{2\pi izF(m)}} with the sum extending over the lattice of integers Z n {\displaystyle \mathbb {Z} ^{n}} . This theta function is a modular form of weight ⁠n/2⁠ (on an appropriately defined subgroup) of the modular group. In the Fourier expansion, θ ^ F ( z ) = ∑ k = 0 ∞ R F ( k ) e 2 π i k z , {\displaystyle {\hat {\theta }}_{F}(z)=\sum _{k=0}^{\infty }R_{F}(k)e^{2\pi ikz},} the numbers RF(k) are called the representation numbers of the form. === Theta series of a Dirichlet character === For χ a primitive Dirichlet character modulo q and ν = ⁠1 − χ(−1)/2⁠ then θ χ ( z ) = 1 2 ∑ n = − ∞ ∞ χ ( n ) n ν e 2 i π n 2 z {\displaystyle \theta _{\chi }(z)={\frac {1}{2}}\sum _{n=-\infty }^{\infty }\chi (n)n^{\nu }e^{2i\pi n^{2}z}} is a weight ⁠1/2⁠ + ν modular form of level 4q2 and character χ ( d ) ( − 1 d ) ν , {\displaystyle \chi (d)\left({\frac {-1}{d}}\right)^{\nu },} which means θ χ ( a z + b c z + d ) = χ ( d ) ( − 1 d ) ν ( θ 1 ( a z + b c z + d ) θ 1 ( z ) ) 1 + 2 ν θ χ ( z ) {\displaystyle \theta _{\chi }\left({\frac {az+b}{cz+d}}\right)=\chi (d)\left({\frac {-1}{d}}\right)^{\nu }\left({\frac {\theta _{1}\left({\frac {az+b}{cz+d}}\right)}{\theta _{1}(z)}}\right)^{1+2\nu }\theta _{\chi }(z)} whenever a , b , c , d ∈ Z 4 , a d − b c = 1 , c ≡ 0 mod 4 q 2 . 
{\displaystyle a,b,c,d\in \mathbb {Z} ^{4},ad-bc=1,c\equiv 0{\bmod {4}}q^{2}.} === Ramanujan theta function === === Riemann theta function === Let H n = { F ∈ M ( n , C ) | F = F T , Im ⁡ F > 0 } {\displaystyle \mathbb {H} _{n}=\left\{F\in M(n,\mathbb {C} )\,{\big |}\,F=F^{\mathsf {T}}\,,\,\operatorname {Im} F>0\right\}} be the set of symmetric square matrices whose imaginary part is positive definite. H n {\displaystyle \mathbb {H} _{n}} is called the Siegel upper half-space and is the multi-dimensional analog of the upper half-plane. The n-dimensional analogue of the modular group is the symplectic group Sp(2n, Z {\displaystyle \mathbb {Z} } ); for n = 1, Sp(2, Z {\displaystyle \mathbb {Z} } ) = SL(2, Z {\displaystyle \mathbb {Z} } ). The role of the n-dimensional analogue of the congruence subgroups is played by ker ⁡ { Sp ⁡ ( 2 n , Z ) → Sp ⁡ ( 2 n , Z / k Z ) } . {\displaystyle \ker {\big \{}\operatorname {Sp} (2n,\mathbb {Z} )\to \operatorname {Sp} (2n,\mathbb {Z} /k\mathbb {Z} ){\big \}}.} Then, given τ ∈ H n {\displaystyle \mathbb {H} _{n}} , the Riemann theta function is defined as θ ( z , τ ) = ∑ m ∈ Z n exp ⁡ ( 2 π i ( 1 2 m T τ m + m T z ) ) . {\displaystyle \theta (z,\tau )=\sum _{m\in \mathbb {Z} ^{n}}\exp \left(2\pi i\left({\tfrac {1}{2}}m^{\mathsf {T}}\tau m+m^{\mathsf {T}}z\right)\right).} Here, z ∈ C n {\displaystyle \mathbb {C} ^{n}} is an n-dimensional complex vector, and the superscript T denotes the transpose. The Jacobi theta function is then a special case, with n = 1 and τ ∈ H {\displaystyle \mathbb {H} } where H {\displaystyle \mathbb {H} } is the upper half-plane. One major application of the Riemann theta function is that it allows one to give explicit formulas for meromorphic functions on compact Riemann surfaces, as well as other auxiliary objects that figure prominently in their function theory, by taking τ to be the period matrix with respect to a canonical basis for its first homology group.
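The genus-n series can be evaluated by direct truncation. A Python sketch for n = 2 checks two immediate consequences of the definition: 1-periodicity in each component of z, and the factorization into a product of Jacobi theta values when τ is diagonal (function names are illustrative):

```python
import cmath
import math
from itertools import product

def riemann_theta(z, tau, N=12):
    # genus-2 Riemann theta: sum over m in Z^2 of exp(2 pi i (m^T tau m / 2 + m^T z))
    total = 0.0 + 0.0j
    for m1, m2 in product(range(-N, N + 1), repeat=2):
        quad = 0.5 * (tau[0][0] * m1 * m1 + 2.0 * tau[0][1] * m1 * m2 + tau[1][1] * m2 * m2)
        lin = m1 * z[0] + m2 * z[1]
        total += cmath.exp(2.0j * cmath.pi * (quad + lin))
    return total

def jacobi_theta3(q, terms=40):
    return 1.0 + 2.0 * sum(q ** (n * n) for n in range(1, terms))

tau = [[1.0j, 0.1j], [0.1j, 2.0j]]     # symmetric, Im(tau) positive definite
z = [0.2 + 0.1j, -0.3 + 0.2j]

base = riemann_theta(z, tau)
shifted = riemann_theta([z[0] + 1.0, z[1]], tau)   # z -> z + a with integer a

# diagonal tau: the double sum factorizes into two genus-1 theta values
diag = riemann_theta([0.0, 0.0], [[1.0j, 0.0], [0.0, 2.0j]])
prod_value = jacobi_theta3(math.exp(-math.pi)) * jacobi_theta3(math.exp(-2.0 * math.pi))
```

Shifting z by an integer vector leaves every summand unchanged, and for diagonal τ each lattice direction contributes an independent one-dimensional theta sum.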
The Riemann theta function converges absolutely and uniformly on compact subsets of C n × H n {\displaystyle \mathbb {C} ^{n}\times \mathbb {H} _{n}} . The functional equation is θ ( z + a + τ b , τ ) = exp ⁡ ( 2 π i ( − b T z − 1 2 b T τ b ) ) θ ( z , τ ) {\displaystyle \theta (z+a+\tau b,\tau )=\exp \left(2\pi i\left(-b^{\mathsf {T}}z-{\tfrac {1}{2}}b^{\mathsf {T}}\tau b\right)\right)\theta (z,\tau )} which holds for all vectors a, b ∈ Z n {\displaystyle \mathbb {Z} ^{n}} , and for all z ∈ C n {\displaystyle \mathbb {C} ^{n}} and τ ∈ H n {\displaystyle \mathbb {H} _{n}} . === Poincaré series === The Poincaré series generalizes the theta series to automorphic forms with respect to arbitrary Fuchsian groups. == Derivation of the theta values == === Identity of the Euler beta function === In the following, three important theta function values are derived as examples. The Euler beta function is defined in its reduced form as: β ( x ) = Γ ( x ) 2 Γ ( 2 x ) {\displaystyle \beta (x)={\frac {\Gamma (x)^{2}}{\Gamma (2x)}}} In general, for all natural numbers n ∈ N {\displaystyle n\in \mathbb {N} } , the following formula of the Euler beta function is valid: 4 − 1 / ( n + 2 ) n + 2 csc ⁡ ( π n + 2 ) β [ n 2 ( n + 2 ) ] = ∫ 0 ∞ 1 x n + 2 + 1 d x {\displaystyle {\frac {4^{-1/(n+2)}}{n+2}}\csc {\bigl (}{\frac {\pi }{n+2}}{\bigr )}\beta {\biggl [}{\frac {n}{2(n+2)}}{\biggr ]}=\int _{0}^{\infty }{\frac {1}{\sqrt {x^{n+2}+1}}}\,\mathrm {d} x} === Exemplary elliptic integrals === In the following, some elliptic integral singular values are derived: === Combination of the integral identities with the nome === The elliptic nome function has these important values: q ( 1 2 2 ) = exp ⁡ ( − π ) {\displaystyle q({\tfrac {1}{2}}{\sqrt {2}})=\exp(-\pi )} q [ 1 4 ( 6 − 2 ) ] = exp ⁡ ( − 3 π ) {\displaystyle q[{\tfrac {1}{4}}({\sqrt {6}}-{\sqrt {2}})]=\exp(-{\sqrt {3}}\,\pi )} q ( 2 − 1 ) = exp ⁡ ( − 2 π ) {\displaystyle q({\sqrt {2}}-1)=\exp(-{\sqrt {2}}\,\pi )} For the proof of the
correctness of these nome values, see the article Nome (mathematics). On the basis of these integral identities and the definitions and identities of the theta functions given above in this article, exemplary theta zero values can now be determined: θ 3 [ exp ⁡ ( − π ) ] = θ 3 [ q ( 1 2 2 ) ] = 2 π − 1 K ( 1 2 2 ) = 2 − 1 / 2 π − 1 / 2 β ( 1 4 ) 1 / 2 = π 4 Γ ( 3 4 ) − 1 {\displaystyle \theta _{3}[\exp(-\pi )]=\theta _{3}[q({\tfrac {1}{2}}{\sqrt {2}})]={\sqrt {2\pi ^{-1}K({\tfrac {1}{2}}{\sqrt {2}})}}=2^{-1/2}\pi ^{-1/2}\beta ({\tfrac {1}{4}})^{1/2}={\sqrt[{4}]{\pi }}\,{\Gamma {\bigl (}{\tfrac {3}{4}}{\bigr )}}^{-1}} θ 3 [ exp ⁡ ( − 3 π ) ] = θ 3 { q [ 1 4 ( 6 − 2 ) ] } = 2 π − 1 K [ 1 4 ( 6 − 2 ) ] = 2 − 1 / 6 3 − 1 / 8 π − 1 / 2 β ( 1 3 ) 1 / 2 {\displaystyle \theta _{3}[\exp(-{\sqrt {3}}\,\pi )]=\theta _{3}{\bigl \{}q{\bigl [}{\tfrac {1}{4}}({\sqrt {6}}-{\sqrt {2}}){\bigr ]}{\bigr \}}={\sqrt {2\pi ^{-1}K{\bigl [}{\tfrac {1}{4}}({\sqrt {6}}-{\sqrt {2}}){\bigr ]}}}=2^{-1/6}3^{-1/8}\pi ^{-1/2}\beta ({\tfrac {1}{3}})^{1/2}} θ 3 [ exp ⁡ ( − 2 π ) ] = θ 3 [ q ( 2 − 1 ) ] = 2 π − 1 K ( 2 − 1 ) = 2 − 1 / 8 cos ⁡ ( 1 8 π ) π − 1 / 2 β ( 3 8 ) 1 / 2 {\displaystyle \theta _{3}[\exp(-{\sqrt {2}}\,\pi )]=\theta _{3}[q({\sqrt {2}}-1)]={\sqrt {2\pi ^{-1}K({\sqrt {2}}-1)}}=2^{-1/8}\cos({\tfrac {1}{8}}\pi )\,\pi ^{-1/2}\beta ({\tfrac {3}{8}})^{1/2}} θ 4 [ exp ⁡ ( − 2 π ) ] = θ 4 [ q ( 2 − 1 ) ] = 2 2 − 2 4 2 π − 1 K ( 2 − 1 ) = 2 − 1 / 4 cos ⁡ ( 1 8 π ) 1 / 2 π − 1 / 2 β ( 3 8 ) 1 / 2 {\displaystyle \theta _{4}[\exp(-{\sqrt {2}}\,\pi )]=\theta _{4}[q({\sqrt {2}}-1)]={\sqrt[{4}]{2{\sqrt {2}}-2}}\,{\sqrt {2\pi ^{-1}K({\sqrt {2}}-1)}}=2^{-1/4}\cos({\tfrac {1}{8}}\pi )^{1/2}\,\pi ^{-1/2}\beta ({\tfrac {3}{8}})^{1/2}} == Partition sequences and Pochhammer products == === Regular partition number sequence === The regular partition sequence P ( n ) {\displaystyle P(n)} indicates the number of ways in which a positive integer n
{\displaystyle n} can be split into positive integer summands. For the numbers n = 1 {\displaystyle n=1} to n = 5 {\displaystyle n=5} , the associated partition numbers P {\displaystyle P} with all associated number partitions are listed in the following table: The generating function of the regular partition number sequence can be represented via a Pochhammer product in the following way: ∑ k = 0 ∞ P ( k ) x k = 1 ( x ; x ) ∞ = θ 3 ( x ) − 1 / 6 θ 4 ( x ) − 2 / 3 [ θ 3 ( x ) 4 − θ 4 ( x ) 4 16 x ] − 1 / 24 {\displaystyle \sum _{k=0}^{\infty }P(k)x^{k}={\frac {1}{(x;x)_{\infty }}}=\theta _{3}(x)^{-1/6}\theta _{4}(x)^{-2/3}{\biggl [}{\frac {\theta _{3}(x)^{4}-\theta _{4}(x)^{4}}{16\,x}}{\biggr ]}^{-1/24}} The expansion of this Pochhammer product as a sum is given by the pentagonal number theorem: ( x ; x ) ∞ = 1 + ∑ n = 1 ∞ [ − x Fn ( 2 n − 1 ) − x Kr ( 2 n − 1 ) + x Fn ( 2 n ) + x Kr ( 2 n ) ] {\displaystyle (x;x)_{\infty }=1+\sum _{n=1}^{\infty }{\bigl [}-x^{{\text{Fn}}(2n-1)}-x^{{\text{Kr}}(2n-1)}+x^{{\text{Fn}}(2n)}+x^{{\text{Kr}}(2n)}{\bigr ]}} The following basic definitions apply to the pentagonal numbers and the card house numbers: Fn ( z ) = 1 2 z ( 3 z − 1 ) {\displaystyle {\text{Fn}}(z)={\tfrac {1}{2}}z(3z-1)} Kr ( z ) = 1 2 z ( 3 z + 1 ) {\displaystyle {\text{Kr}}(z)={\tfrac {1}{2}}z(3z+1)} As a further application one obtains a formula for the third power of the Euler product: ( x ; x ) ∞ 3 = ∏ n = 1 ∞ ( 1 − x n ) 3 = ∑ m = 0 ∞ ( − 1 ) m ( 2 m + 1 ) x m ( m + 1 ) / 2 {\displaystyle (x;x)_{\infty }^{3}=\prod _{n=1}^{\infty }(1-x^{n})^{3}=\sum _{m=0}^{\infty }(-1)^{m}(2m+1)x^{m(m+1)/2}} === Strict partition number sequence === The strict partition sequence Q ( n ) {\displaystyle Q(n)} indicates the number of ways in which a positive integer n {\displaystyle n} can be split into positive integer summands such that each summand value appears at most once.
Exactly the same sequence is generated if only odd summands are included in the partition, but these odd summands may occur more than once. Both representations for the strict partition number sequence are compared in the following table: The generating function of the strict partition number sequence can be represented via a Pochhammer product: ∑ k = 0 ∞ Q ( k ) x k = 1 ( x ; x 2 ) ∞ = θ 3 ( x ) 1 / 6 θ 4 ( x ) − 1 / 3 [ θ 3 ( x ) 4 − θ 4 ( x ) 4 16 x ] 1 / 24 {\displaystyle \sum _{k=0}^{\infty }Q(k)x^{k}={\frac {1}{(x;x^{2})_{\infty }}}=\theta _{3}(x)^{1/6}\theta _{4}(x)^{-1/3}{\biggl [}{\frac {\theta _{3}(x)^{4}-\theta _{4}(x)^{4}}{16\,x}}{\biggr ]}^{1/24}} === Overpartition number sequence === The Maclaurin series of the reciprocal of the function ϑ01 has the overpartition numbers as coefficients, all with positive sign: 1 θ 4 ( x ) = ∏ n = 1 ∞ 1 + x n 1 − x n = ∑ k = 0 ∞ P ¯ ( k ) x k {\displaystyle {\frac {1}{\theta _{4}(x)}}=\prod _{n=1}^{\infty }{\frac {1+x^{n}}{1-x^{n}}}=\sum _{k=0}^{\infty }{\overline {P}}(k)x^{k}} 1 θ 4 ( x ) = 1 + 2 x + 4 x 2 + 8 x 3 + 14 x 4 + 24 x 5 + 40 x 6 + 64 x 7 + 100 x 8 + 154 x 9 + 232 x 10 + … {\displaystyle {\frac {1}{\theta _{4}(x)}}=1+2x+4x^{2}+8x^{3}+14x^{4}+24x^{5}+40x^{6}+64x^{7}+100x^{8}+154x^{9}+232x^{10}+\dots } If, for a given number k {\displaystyle k} , all partitions are written so that the summand sizes never increase, and in each such partition exactly those summands that do not have a summand of the same size to their left can be marked, then the number of marked partitions obtained in this way, as a function of k {\displaystyle k} , is given by the overpartition function P ¯ ( k ) {\displaystyle {\overline {P}}(k)} .
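The coefficients above can be reproduced with elementary integer arithmetic; a sketch that computes the regular partition numbers P(n) from the pentagonal number theorem quoted earlier, the strict partition numbers Q(n) by a distinct-parts recurrence, and the overpartition coefficients from the product Π (1+xⁿ)/(1−xⁿ) (the variable names P, Q, O are illustrative choices):

```python
N = 10

# P(n) via Euler's pentagonal-number recurrence, using Fn(k) = k(3k-1)/2
# and Kr(k) = k(3k+1)/2 as in the pentagonal number theorem.
P = [1] + [0] * N
for n in range(1, N + 1):
    k, total = 1, 0
    while k * (3 * k - 1) // 2 <= n:
        sign = 1 if k % 2 else -1
        total += sign * P[n - k * (3 * k - 1) // 2]
        if k * (3 * k + 1) // 2 <= n:
            total += sign * P[n - k * (3 * k + 1) // 2]
        k += 1
    P[n] = total

# Q(n): partitions into distinct parts (each part used at most once).
Q = [1] + [0] * N
for part in range(1, N + 1):
    for k in range(N, part - 1, -1):
        Q[k] += Q[k - part]

# O(n): coefficients of 1/theta4(x) = product over n of (1+x^n)/(1-x^n),
# built by truncated power-series multiplication.
O = [1] + [0] * N
for part in range(1, N + 1):
    for k in range(N, part - 1, -1):
        O[k] += O[k - part]          # multiply by (1 + x^part)
    for k in range(part, N + 1):
        O[k] += O[k - part]          # multiply by 1/(1 - x^part)

print(P)  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
print(Q)  # [1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
print(O)  # [1, 2, 4, 8, 14, 24, 40, 64, 100, 154, 232]
# The three sequences satisfy the convolution O(n) = sum_k P(n-k)*Q(k).
assert all(O[n] == sum(P[n - k] * Q[k] for k in range(n + 1))
           for n in range(N + 1))
```

The overpartition coefficients printed last match the Maclaurin series of 1/ϑ01 displayed above, and the final assertion anticipates the convolution relation between the three sequences.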
First example: P ¯ ( 4 ) = 14 {\displaystyle {\overline {P}}(4)=14} These 14 possibilities of partition markings exist for the sum 4: Second example: P ¯ ( 5 ) = 24 {\displaystyle {\overline {P}}(5)=24} These 24 possibilities of partition markings exist for the sum 5: === Relations of the partition number sequences to each other === In the Online Encyclopedia of Integer Sequences (OEIS), the sequence of regular partition numbers P ( n ) {\displaystyle P(n)} is listed under the code A000041, the sequence of strict partition numbers Q ( n ) {\displaystyle Q(n)} under the code A000009, and the sequence of overpartition numbers P ¯ ( n ) {\displaystyle {\overline {P}}(n)} under the code A015128. All overpartition numbers from index n = 1 {\displaystyle n=1} onward are even. The overpartition sequence P ¯ ( n ) {\displaystyle {\overline {P}}(n)} can be generated from the regular partition sequence P and the strict partition sequence Q as follows: P ¯ ( n ) = ∑ k = 0 n P ( n − k ) Q ( k ) {\displaystyle {\overline {P}}(n)=\sum _{k=0}^{n}P(n-k)Q(k)} The following table of number sequences illustrates this formula: Related to this property, the following combination of two sum series can also be set up via the function ϑ01: θ 4 ( x ) = [ ∑ k = 0 ∞ P ( k ) x k ] − 1 [ ∑ k = 0 ∞ Q ( k ) x k ] − 1 {\displaystyle \theta _{4}(x)={\biggl [}\sum _{k=0}^{\infty }P(k)x^{k}{\biggr ]}^{-1}{\biggl [}\sum _{k=0}^{\infty }Q(k)x^{k}{\biggr ]}^{-1}} == Notes == == References == Abramowitz, Milton; Stegun, Irene A. (1964). Handbook of Mathematical Functions. New York: Dover Publications. sec. 16.27ff. ISBN 978-0-486-61272-0. Akhiezer, Naum Illyich (1990) [1970]. Elements of the Theory of Elliptic Functions. AMS Translations of Mathematical Monographs. Vol. 79. Providence, RI: AMS. ISBN 978-0-8218-4532-5. Farkas, Hershel M.; Kra, Irwin (1980). Riemann Surfaces. New York: Springer-Verlag. ch. 6. ISBN 978-0-387-90465-8.
(for treatment of the Riemann theta) Hardy, G. H.; Wright, E. M. (1959). An Introduction to the Theory of Numbers (4th ed.). Oxford: Clarendon Press. Mumford, David (1983). Tata Lectures on Theta I. Boston: Birkhauser. ISBN 978-3-7643-3109-2. Pierpont, James (1959). Functions of a Complex Variable. New York: Dover Publications. Rauch, Harry E.; Farkas, Hershel M. (1974). Theta Functions with Applications to Riemann Surfaces. Baltimore: Williams & Wilkins. ISBN 978-0-683-07196-2. Reinhardt, William P.; Walker, Peter L. (2010), "Theta Functions", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. Whittaker, E. T.; Watson, G. N. (1927). A Course in Modern Analysis (4th ed.). Cambridge: Cambridge University Press. ch. 21. (history of Jacobi's θ functions) == Further reading == Farkas, Hershel M. (2008). "Theta functions in complex analysis and number theory". In Alladi, Krishnaswami (ed.). Surveys in Number Theory. Developments in Mathematics. Vol. 17. Springer-Verlag. pp. 57–87. ISBN 978-0-387-78509-7. Zbl 1206.11055. Schoeneberg, Bruno (1974). "IX. Theta series". Elliptic modular functions. Die Grundlehren der mathematischen Wissenschaften. Vol. 203. Springer-Verlag. pp. 203–226. ISBN 978-3-540-06382-7. Ackerman, Michael (1 February 1979). "On the generating functions of certain Eisenstein series". Mathematische Annalen. 244 (1): 75–81. doi:10.1007/BF01420339. S2CID 120045753. Harry Rauch with Hershel M. Farkas: Theta functions with applications to Riemann Surfaces, Williams and Wilkins, Baltimore MD 1974, ISBN 0-683-07196-3. Charles Hermite: Sur la résolution de l'Équation du cinquiéme degré Comptes rendus, C. R. Acad. Sci. Paris, Nr. 11, March 1858. == External links == Moiseev Igor. "Elliptic functions for Matlab and Octave". 
This article incorporates material from Integral representations of Jacobi theta functions on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Jacobi_theta_function
In mathematics, a square root of a number x is a number y such that y 2 = x {\displaystyle y^{2}=x} ; in other words, a number y whose square (the result of multiplying the number by itself, or y ⋅ y {\displaystyle y\cdot y} ) is x. For example, 4 and −4 are square roots of 16 because 4 2 = ( − 4 ) 2 = 16 {\displaystyle 4^{2}=(-4)^{2}=16} . Every nonnegative real number x has a unique nonnegative square root, called the principal square root or simply the square root (with a definite article, see below), which is denoted by x , {\displaystyle {\sqrt {x}},} where the symbol " {\displaystyle {\sqrt {~^{~}}}} " is called the radical sign or radix. For example, to express the fact that the principal square root of 9 is 3, we write 9 = 3 {\displaystyle {\sqrt {9}}=3} . The term (or number) whose square root is being considered is known as the radicand. The radicand is the number or expression underneath the radical sign, in this case, 9. For non-negative x, the principal square root can also be written in exponent notation, as x 1 / 2 {\displaystyle x^{1/2}} . Every positive number x has two square roots: x {\displaystyle {\sqrt {x}}} (which is positive) and − x {\displaystyle -{\sqrt {x}}} (which is negative). The two roots can be written more concisely using the ± sign as ± x {\displaystyle \pm {\sqrt {x}}} . Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root. Square roots of negative numbers can be discussed within the framework of complex numbers. More generally, square roots can be considered in any context in which a notion of the "square" of a mathematical object is defined. These include function spaces and square matrices, among other mathematical structures. 
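A minimal illustration of the distinction just drawn, using Python's `math.sqrt`, which returns the principal (nonnegative) square root while both signed roots square back to the radicand:

```python
import math

x = 16
r = math.sqrt(x)   # principal square root: always the nonnegative root
print(r)           # 4.0
# both +r and -r are square roots of x
assert r ** 2 == x and (-r) ** 2 == x
```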
== History == The Yale Babylonian Collection clay tablet YBC 7289 was created between 1800 BC and 1600 BC, showing 2 {\displaystyle {\sqrt {2}}} and 2 2 = 1 2 {\textstyle {\frac {\sqrt {2}}{2}}={\frac {1}{\sqrt {2}}}} respectively as 1;24,51,10 and 0;42,25,35 base 60 numbers on a square crossed by two diagonals. (1;24,51,10) base 60 corresponds to 1.41421296, which is correct to 5 decimal places (1.41421356...). The Rhind Mathematical Papyrus is a copy from 1650 BC of an earlier Berlin Papyrus and other texts – possibly the Kahun Papyrus – that shows how the Egyptians extracted square roots by an inverse proportion method. In Ancient India, the knowledge of theoretical and applied aspects of square and square root was at least as old as the Sulba Sutras, dated around 800–500 BC (possibly much earlier). A method for finding very good approximations to the square roots of 2 and 3 is given in the Baudhayana Sulba Sutra. Apastamba, dated around 600 BCE, gave a strikingly accurate value for 2 {\displaystyle {\sqrt {2}}} , correct to five decimal places: 1 + 1 3 + 1 3 × 4 − 1 3 × 4 × 34 {\textstyle 1+{\frac {1}{3}}+{\frac {1}{3\times 4}}-{\frac {1}{3\times 4\times 34}}} . Aryabhata, in the Aryabhatiya (section 2.4), has given a method for finding the square root of numbers having many digits. It was known to the ancient Greeks that square roots of positive integers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio of two integers (that is, they cannot be written exactly as m n {\displaystyle {\frac {m}{n}}} , where m and n are integers). This is the theorem Euclid X, 9, almost certainly due to Theaetetus dating back to c. 380 BC. The discovery of irrational numbers, including the particular case of the square root of 2, is widely associated with the Pythagorean school.
Although some accounts attribute the discovery to Hippasus, the specific contributor remains uncertain due to the scarcity of primary sources and the secretive nature of the brotherhood. The square root of 2 is exactly the length of the diagonal of a square with side length 1. In the Chinese mathematical work Writings on Reckoning, written between 202 BC and 186 BC during the early Han dynasty, the square root is approximated by using an "excess and deficiency" method, which says to "...combine the excess and deficiency as the divisor; (taking) the deficiency numerator multiplied by the excess denominator and the excess numerator times the deficiency denominator, combine them as the dividend." A symbol for square roots, written as an elaborate R, was invented by Regiomontanus (1436–1476). An R was also used for radix to indicate square roots in Gerolamo Cardano's Ars Magna. According to historian of mathematics D.E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo in 1546. According to Jeffrey A. Oaks, Arabs used the letter jīm/ĝīm (ج), the first letter of the word "جذر" (variously transliterated as jaḏr, jiḏr, ǧaḏr or ǧiḏr, "root"), placed in its initial form (ﺟ) over a number to indicate its square root. The letter jīm resembles the present square root shape. Its usage goes back as far as the end of the twelfth century in the works of the Moroccan mathematician Ibn al-Yasamin. The symbol "√" for the square root was first used in print in 1525, in Christoph Rudolff's Coss. == Properties and uses == The principal square root function f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} (usually just referred to as the "square root function") is a function that maps the set of nonnegative real numbers onto itself. In geometrical terms, the square root function maps the area of a square to its side length. The square root of x is rational if and only if x is a rational number that can be represented as a ratio of two perfect squares.
(See square root of 2 for proofs that this is an irrational number, and quadratic irrational for a proof for all non-square natural numbers.) The square root function maps rational numbers into algebraic numbers, the latter being a superset of the rational numbers. For all real numbers x, x 2 = | x | = { x , if x ≥ 0 − x , if x < 0. {\displaystyle {\sqrt {x^{2}}}=\left|x\right|={\begin{cases}x,&{\text{if }}x\geq 0\\-x,&{\text{if }}x<0.\end{cases}}} (see absolute value). For all nonnegative real numbers x and y, x y = x y {\displaystyle {\sqrt {xy}}={\sqrt {x}}{\sqrt {y}}} and x = x 1 / 2 . {\displaystyle {\sqrt {x}}=x^{1/2}.} The square root function is continuous for all nonnegative x, and differentiable for all positive x. If f denotes the square root function, its derivative is given by: f ′ ( x ) = 1 2 x . {\displaystyle f'(x)={\frac {1}{2{\sqrt {x}}}}.} The Taylor series of 1 + x {\displaystyle {\sqrt {1+x}}} about x = 0 converges for |x| ≤ 1, and is given by 1 + x = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! ( 1 − 2 n ) ( n ! ) 2 ( 4 n ) x n = 1 + 1 2 x − 1 8 x 2 + 1 16 x 3 − 5 128 x 4 + ⋯ , {\displaystyle {\sqrt {1+x}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{(1-2n)(n!)^{2}(4^{n})}}x^{n}=1+{\frac {1}{2}}x-{\frac {1}{8}}x^{2}+{\frac {1}{16}}x^{3}-{\frac {5}{128}}x^{4}+\cdots ,} The square root of a nonnegative number is used in the definition of Euclidean norm (and distance), as well as in generalizations such as Hilbert spaces. It defines an important concept of standard deviation used in probability theory and statistics. It has a major use in the formula for solutions of a quadratic equation. Quadratic fields and rings of quadratic integers, which are based on square roots, are important in algebra and have uses in geometry. Square roots frequently appear in mathematical formulas elsewhere, as well as in many physical laws.
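The Taylor coefficients quoted above can be evaluated directly and the partial sums compared against `math.sqrt`; a minimal sketch (the function name and the number of terms are illustrative choices):

```python
import math

# Partial sums of sqrt(1+x) = sum over n of
# (-1)^n (2n)! / ((1-2n) (n!)^2 4^n) * x^n, valid for |x| <= 1.
def sqrt1p_series(x, terms=30):
    total = 0.0
    for n in range(terms):
        c = ((-1) ** n * math.factorial(2 * n)
             / ((1 - 2 * n) * math.factorial(n) ** 2 * 4 ** n))
        total += c * x ** n
    return total

for x in (0.0, 0.5, -0.5, 0.9):
    print(x, sqrt1p_series(x), math.sqrt(1 + x))
```

Convergence is fast for small |x| and slows as |x| approaches 1, consistent with the stated radius of convergence.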
== Square roots of positive integers == A positive number has two square roots, one positive, and one negative, which are opposite to each other. When talking of the square root of a positive integer, it is usually the positive square root that is meant. The square roots of an integer are algebraic integers—more specifically quadratic integers. The square root of a positive integer is the product of the roots of its prime factors, because the square root of a product is the product of the square roots of the factors. Since p 2 k = p k , {\textstyle {\sqrt {p^{2k}}}=p^{k},} only roots of those primes having an odd power in the factorization are necessary. More precisely, the square root of a prime factorization is p 1 2 e 1 + 1 ⋯ p k 2 e k + 1 p k + 1 2 e k + 1 … p n 2 e n = p 1 e 1 … p n e n p 1 … p k . {\displaystyle {\sqrt {p_{1}^{2e_{1}+1}\cdots p_{k}^{2e_{k}+1}p_{k+1}^{2e_{k+1}}\dots p_{n}^{2e_{n}}}}=p_{1}^{e_{1}}\dots p_{n}^{e_{n}}{\sqrt {p_{1}\dots p_{k}}}.} === As decimal expansions === The square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers are irrational numbers, and hence have non-repeating decimals in their decimal representations. Decimal approximations of the square roots of the first few natural numbers are given in the following table. === As expansions in other numeral systems === As before, the square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers are irrational numbers, and therefore have non-repeating digits in any standard positional notation system. The square roots of small integers are used in both the SHA-1 and SHA-2 hash function designs to provide nothing up my sleeve numbers. === As periodic continued fractions === A result from the study of irrational numbers as simple continued fractions was obtained by Joseph Louis Lagrange c. 1780.
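Lagrange's result, detailed in the next paragraph, can be observed numerically; a sketch computing the continued fraction of √n with the classical integer recurrence, where m, d and a are the standard auxiliary quantities (the function name is an illustrative choice):

```python
import math

# Continued-fraction expansion of sqrt(n). For non-square n the recurrence
#   m' = d*a - m,  d' = (n - m'^2) / d,  a' = floor((a0 + m') / d')
# produces the partial denominators, which eventually repeat.
def sqrt_continued_fraction(n, length=10):
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        return [a0]            # perfect square: expansion terminates
    m, d, a = 0, 1, a0
    expansion = [a0]
    for _ in range(length - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        expansion.append(a)
    return expansion

print(sqrt_continued_fraction(11))  # [3, 3, 6, 3, 6, 3, 6, 3, 6, 3]
```

For n = 11 the output shows the periodic pattern {3, 6} discussed just below.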
Lagrange found that the representation of the square root of any non-square positive integer as a continued fraction is periodic. That is, a certain pattern of partial denominators repeats indefinitely in the continued fraction. In a sense these square roots are the very simplest irrational numbers, because they can be represented with a simple repeating pattern of integers. The square bracket notation used above is a short form for a continued fraction. Written in the more suggestive algebraic form, the simple continued fraction for the square root of 11, [3; 3, 6, 3, 6, ...], looks like this: 11 = 3 + 1 3 + 1 6 + 1 3 + 1 6 + 1 3 + ⋱ {\displaystyle {\sqrt {11}}=3+{\cfrac {1}{3+{\cfrac {1}{6+{\cfrac {1}{3+{\cfrac {1}{6+{\cfrac {1}{3+\ddots }}}}}}}}}}} where the two-digit pattern {3, 6} repeats over and over again in the partial denominators. Since 11 = 3² + 2, the above is also identical to the following generalized continued fractions: 11 = 3 + 2 6 + 2 6 + 2 6 + 2 6 + 2 6 + ⋱ = 3 + 6 20 − 1 − 1 20 − 1 20 − 1 20 − 1 20 − ⋱ . {\displaystyle {\sqrt {11}}=3+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+\ddots }}}}}}}}}}=3+{\cfrac {6}{20-1-{\cfrac {1}{20-{\cfrac {1}{20-{\cfrac {1}{20-{\cfrac {1}{20-\ddots }}}}}}}}}}.} == Computation == Square roots of positive numbers are not in general rational numbers, and so cannot be written as a terminating or recurring decimal expression. Therefore in general any attempt to compute a square root expressed in decimal form can only yield an approximation, though a sequence of increasingly accurate approximations can be obtained. Most pocket calculators have a square root key. Computer spreadsheets and other software are also frequently used to calculate square roots. Pocket calculators typically implement efficient routines, such as Newton's method (frequently with an initial guess of 1), to compute the square root of a positive real number.
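A minimal sketch of such a Newton iteration for √a (the initial guess and the fixed iteration count are illustrative choices; this is the Babylonian/Heron scheme described in the following paragraphs):

```python
# Newton's method applied to f(x) = x^2 - a: repeatedly replace x by the
# average of x and a/x. Convergence is quadratic for a > 0.
def newton_sqrt(a, x0=1.0, iterations=8):
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

print(newton_sqrt(2.0))  # converges rapidly toward 1.41421356...
```

Eight iterations from x0 = 1 are already far more than needed for double precision with small a; production code would instead stop once successive iterates agree to within a tolerance.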
When computing square roots with logarithm tables or slide rules, one can exploit the identities a = e ( ln ⁡ a ) / 2 = 10 ( log 10 ⁡ a ) / 2 , {\displaystyle {\sqrt {a}}=e^{(\ln a)/2}=10^{(\log _{10}a)/2},} where ln and log10 are the natural and base-10 logarithms. By trial-and-error, one can square an estimate for a {\displaystyle {\sqrt {a}}} and raise or lower the estimate until it agrees to sufficient accuracy. For this technique it is prudent to use the identity ( x + c ) 2 = x 2 + 2 x c + c 2 , {\displaystyle (x+c)^{2}=x^{2}+2xc+c^{2},} as it allows one to adjust the estimate x by some amount c and measure the square of the adjustment in terms of the original estimate and its square. The most common iterative method of square root calculation by hand is known as the "Babylonian method" or "Heron's method" after the first-century Greek mathematician Heron of Alexandria, who first described it. The method uses the same iterative scheme as the Newton–Raphson method yields when applied to the function y = f(x) = x2 − a, using the fact that its slope at any point is dy/dx = f′(x) = 2x, but predates it by many centuries. The algorithm is to repeat a simple calculation that results in a number closer to the actual square root each time it is repeated with its result as the new input. The motivation is that if x is an overestimate to the square root of a nonnegative real number a then a/x will be an underestimate and so the average of these two numbers is a better approximation than either of them. However, the inequality of arithmetic and geometric means shows this average is always an overestimate of the square root (as noted below), and so it can serve as a new overestimate with which to repeat the process, which converges as a consequence of the successive overestimates and underestimates being closer to each other after each iteration. To find x: Start with an arbitrary positive start value x.
The closer to the square root of a, the fewer the iterations that will be needed to achieve the desired precision. Replace x by the average (x + a/x) / 2 between x and a/x. Repeat from step 2, using this average as the new value of x. That is, if an arbitrary guess for a {\displaystyle {\sqrt {a}}} is x0, and xn + 1 = (xn + a/xn) / 2, then each xn is an approximation of a {\displaystyle {\sqrt {a}}} which is better for large n than for small n. If a is positive, the convergence is quadratic, which means that in approaching the limit, the number of correct digits roughly doubles in each next iteration. If a = 0, the convergence is only linear; however, 0 = 0 {\displaystyle {\sqrt {0}}=0} so in this case no iteration is needed. Using the identity a = 2 − n 4 n a , {\displaystyle {\sqrt {a}}=2^{-n}{\sqrt {4^{n}a}},} the computation of the square root of a positive number can be reduced to that of a number in the range [1, 4). This simplifies finding a start value for the iterative method that is close to the square root, for which a polynomial or piecewise-linear approximation can be used. The time complexity for computing a square root with n digits of precision is equivalent to that of multiplying two n-digit numbers. Another useful method for calculating the square root is the shifting nth root algorithm, applied for n = 2. The name of the square root function varies from programming language to programming language, with sqrt (often pronounced "squirt") being common, used in C and derived languages such as C++, JavaScript, PHP, and Python. == Square roots of negative and complex numbers == The square of any positive or negative number is positive, and the square of 0 is 0. Therefore, no negative number can have a real square root. However, it is possible to work with a more inclusive set of numbers, called the complex numbers, that does contain solutions to the square root of a negative number. 
This is done by introducing a new number, denoted by i (sometimes by j, especially in the context of electricity where i traditionally represents electric current) and called the imaginary unit, which is defined such that i2 = −1. Using this notation, we can think of i as the square root of −1, but we also have (−i)2 = i2 = −1 and so −i is also a square root of −1. By convention, the principal square root of −1 is i, or more generally, if x is any nonnegative number, then the principal square root of −x is − x = i x . {\displaystyle {\sqrt {-x}}=i{\sqrt {x}}.} The right side (as well as its negative) is indeed a square root of −x, since ( i x ) 2 = i 2 ( x ) 2 = ( − 1 ) x = − x . {\displaystyle (i{\sqrt {x}})^{2}=i^{2}({\sqrt {x}})^{2}=(-1)x=-x.} For every non-zero complex number z there exist precisely two numbers w such that w2 = z: the principal square root of z (defined below), and its negative. === Principal square root of a complex number === To find a definition for the square root that allows us to consistently choose a single value, called the principal value, we start by observing that any complex number x + i y {\displaystyle x+iy} can be viewed as a point in the plane, ( x , y ) , {\displaystyle (x,y),} expressed using Cartesian coordinates. The same point may be reinterpreted using polar coordinates as the pair ( r , φ ) , {\displaystyle (r,\varphi ),} where r ≥ 0 {\displaystyle r\geq 0} is the distance of the point from the origin, and φ {\displaystyle \varphi } is the angle that the line from the origin to the point makes with the positive real ( x {\displaystyle x} ) axis. In complex analysis, the location of this point is conventionally written r e i φ . {\displaystyle re^{i\varphi }.} If z = r e i φ with − π < φ ≤ π , {\displaystyle z=re^{i\varphi }{\text{ with }}-\pi <\varphi \leq \pi ,} then the principal square root of z {\displaystyle z} is defined to be the following: z = r e i φ / 2 . 
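The relation −√x·i being the other root can be checked directly with Python's `cmath`, whose `sqrt` returns the principal square root; a minimal sketch:

```python
import cmath
import math

# Principal square root of a negative real -x (x >= 0) is i*sqrt(x);
# its negative is the other square root.
x = 9.0
w = cmath.sqrt(-x)
print(w)  # approximately 3i
assert abs(w - 1j * math.sqrt(x)) < 1e-12
# both w and -w square back to -x
assert abs(w * w + x) < 1e-12 and abs((-w) * (-w) + x) < 1e-12
```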
{\displaystyle {\sqrt {z}}={\sqrt {r}}e^{i\varphi /2}.} The principal square root function is thus defined using the non-positive real axis as a branch cut. If z {\displaystyle z} is a non-negative real number (which happens if and only if φ = 0 {\displaystyle \varphi =0} ) then the principal square root of z {\displaystyle z} is r e i ( 0 ) / 2 = r ; {\displaystyle {\sqrt {r}}e^{i(0)/2}={\sqrt {r}};} in other words, the principal square root of a non-negative real number is just the usual non-negative square root. It is important that − π < φ ≤ π {\displaystyle -\pi <\varphi \leq \pi } because if, for example, z = − 2 i {\displaystyle z=-2i} (so φ = − π / 2 {\displaystyle \varphi =-\pi /2} ) then the principal square root is − 2 i = 2 e i φ = 2 e i φ / 2 = 2 e i ( − π / 4 ) = 1 − i {\displaystyle {\sqrt {-2i}}={\sqrt {2e^{i\varphi }}}={\sqrt {2}}e^{i\varphi /2}={\sqrt {2}}e^{i(-\pi /4)}=1-i} but using φ ~ := φ + 2 π = 3 π / 2 {\displaystyle {\tilde {\varphi }}:=\varphi +2\pi =3\pi /2} would instead produce the other square root 2 e i φ ~ / 2 = 2 e i ( 3 π / 4 ) = − 1 + i = − − 2 i . {\displaystyle {\sqrt {2}}e^{i{\tilde {\varphi }}/2}={\sqrt {2}}e^{i(3\pi /4)}=-1+i=-{\sqrt {-2i}}.} The principal square root function is holomorphic everywhere except on the set of non-positive real numbers (on strictly negative reals it is not even continuous). The above Taylor series for 1 + x {\displaystyle {\sqrt {1+x}}} remains valid for complex numbers x {\displaystyle x} with | x | < 1. {\displaystyle |x|<1.} The above can also be expressed in terms of trigonometric functions: r ( cos ⁡ φ + i sin ⁡ φ ) = r ( cos ⁡ φ 2 + i sin ⁡ φ 2 ) . 
{\displaystyle {\sqrt {r\left(\cos \varphi +i\sin \varphi \right)}}={\sqrt {r}}\left(\cos {\frac {\varphi }{2}}+i\sin {\frac {\varphi }{2}}\right).} === Algebraic formula === When the number is expressed using its real and imaginary parts, the following formula can be used for the principal square root: x + i y = 1 2 ( x 2 + y 2 + x ) + i sgn ⁡ ( y ) 1 2 ( x 2 + y 2 − x ) , {\displaystyle {\sqrt {x+iy}}={\sqrt {{\tfrac {1}{2}}{\bigl (}{\sqrt {\textstyle x^{2}+y^{2}}}+x{\bigr )}}}+i\operatorname {sgn}(y){\sqrt {{\tfrac {1}{2}}{\bigl (}{\sqrt {\textstyle x^{2}+y^{2}}}-x{\bigr )}}},} where sgn(y) = 1 if y ≥ 0 and sgn(y) = −1 otherwise. In particular, the imaginary parts of the original number and the principal value of its square root have the same sign. The real part of the principal value of the square root is always nonnegative. For example, the principal square roots of ±i are given by: i = 1 + i 2 , − i = 1 − i 2 . {\displaystyle {\sqrt {i}}={\frac {1+i}{\sqrt {2}}},\qquad {\sqrt {-i}}={\frac {1-i}{\sqrt {2}}}.} === Notes === In the following, the complex z and w may be expressed as: z = | z | e i θ z {\displaystyle z=|z|e^{i\theta _{z}}} w = | w | e i θ w {\displaystyle w=|w|e^{i\theta _{w}}} where − π < θ z ≤ π {\displaystyle -\pi <\theta _{z}\leq \pi } and − π < θ w ≤ π {\displaystyle -\pi <\theta _{w}\leq \pi } . Because of the discontinuous nature of the square root function in the complex plane, the following laws are not true in general. 
z w = z w {\displaystyle {\sqrt {zw}}={\sqrt {z}}{\sqrt {w}}} Counterexample for the principal square root: z = −1 and w = −1. This equality is valid only when − π < θ z + θ w ≤ π {\displaystyle -\pi <\theta _{z}+\theta _{w}\leq \pi } w z = w z {\displaystyle {\frac {\sqrt {w}}{\sqrt {z}}}={\sqrt {\frac {w}{z}}}} Counterexample for the principal square root: w = 1 and z = −1. This equality is valid only when − π < θ w − θ z ≤ π {\displaystyle -\pi <\theta _{w}-\theta _{z}\leq \pi } z ∗ = ( z ) ∗ {\displaystyle {\sqrt {z^{*}}}=\left({\sqrt {z}}\right)^{*}} Counterexample for the principal square root: z = −1. This equality is valid only when θ z ≠ π {\displaystyle \theta _{z}\neq \pi } A similar problem appears with other complex functions with branch cuts, e.g., the complex logarithm and the relations log z + log w = log(zw) or log(z*) = log(z)*, which are not true in general. Wrongly assuming one of these laws underlies several faulty "proofs", for instance the following one showing that −1 = 1: − 1 = i ⋅ i = − 1 ⋅ − 1 = ( − 1 ) ⋅ ( − 1 ) = 1 = 1. {\displaystyle {\begin{aligned}-1&=i\cdot i\\&={\sqrt {-1}}\cdot {\sqrt {-1}}\\&={\sqrt {\left(-1\right)\cdot \left(-1\right)}}\\&={\sqrt {1}}\\&=1.\end{aligned}}} The third equality cannot be justified (see invalid proof).: Chapter VI, Section I, Subsection 2 The fallacy that +1 = −1  It can be made to hold by changing the meaning of √ so that this no longer represents the principal square root (see above) but selects a branch for the square root that contains 1 ⋅ − 1 . 
{\displaystyle {\sqrt {1}}\cdot {\sqrt {-1}}.} The left-hand side becomes either − 1 ⋅ − 1 = i ⋅ i = − 1 {\displaystyle {\sqrt {-1}}\cdot {\sqrt {-1}}=i\cdot i=-1} if the branch includes +i or − 1 ⋅ − 1 = ( − i ) ⋅ ( − i ) = − 1 {\displaystyle {\sqrt {-1}}\cdot {\sqrt {-1}}=(-i)\cdot (-i)=-1} if the branch includes −i, while the right-hand side becomes ( − 1 ) ⋅ ( − 1 ) = 1 = − 1 , {\displaystyle {\sqrt {\left(-1\right)\cdot \left(-1\right)}}={\sqrt {1}}=-1,} where the last equality, 1 = − 1 , {\displaystyle {\sqrt {1}}=-1,} is a consequence of the choice of branch in the redefinition of √. == nth roots and polynomial roots == The definition of a square root of x {\displaystyle x} as a number y {\displaystyle y} such that y 2 = x {\displaystyle y^{2}=x} has been generalized in the following way. A cube root of x {\displaystyle x} is a number y {\displaystyle y} such that y 3 = x {\displaystyle y^{3}=x} ; it is denoted x 3 . {\displaystyle {\sqrt[{3}]{x}}.} If n is an integer greater than two, an nth root of x {\displaystyle x} is a number y {\displaystyle y} such that y n = x {\displaystyle y^{n}=x} ; it is denoted x n . {\displaystyle {\sqrt[{n}]{x}}.} Given any polynomial p, a root of p is a number y such that p(y) = 0. For example, the nth roots of x are the roots of the polynomial (in y) y n − x . {\displaystyle y^{n}-x.} The Abel–Ruffini theorem states that, in general, the roots of a polynomial of degree five or higher cannot be expressed in terms of nth roots. == Square roots of matrices and operators == If A is a positive-definite matrix or operator, then there exists precisely one positive-definite matrix or operator B with B2 = A; we then define A1/2 = B. In general, matrices may have multiple square roots or even an infinitude of them. For example, the 2 × 2 identity matrix has infinitely many square roots, though only one of them is positive definite. 
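The claim about the identity matrix can be illustrated numerically. This is a sketch assuming NumPy is available; it uses the fact that every 2 × 2 reflection matrix squares to the identity, giving a one-parameter family of square roots:

```python
import numpy as np

I = np.eye(2)

def reflection(t: float) -> np.ndarray:
    # Symmetric reflection matrix; R @ R = I for every t, so each one
    # is a square root of the 2x2 identity.
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s],
                     [s, -c]])

for t in (0.0, 0.7, 2.4):
    R = reflection(t)
    assert np.allclose(R @ R, I)

# Each reflection has eigenvalues -1 and +1, so none of them is positive
# definite; the unique positive-definite square root of I is I itself.
print(np.linalg.eigvalsh(reflection(0.7)))   # approx [-1.  1.]
```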
== In integral domains, including fields == Each element of an integral domain has no more than two square roots. The difference of two squares identity u2 − v2 = (u − v)(u + v) is proved using the commutativity of multiplication. If u and v are square roots of the same element, then u2 − v2 = 0. Because there are no zero divisors, this implies u = v or u + v = 0, where the latter means that the two roots are additive inverses of each other. In other words, if an element a has a square root u, then the only square roots of a are u and −u. The only square root of 0 in an integral domain is 0 itself. In a field of characteristic 2, an element either has one square root or does not have any at all, because each element is its own additive inverse, so that −u = u. If the field is finite of characteristic 2 then every element has a unique square root. In a field of any other characteristic, any non-zero element either has two square roots, as explained above, or does not have any. Given an odd prime number p, let q = pe for some positive integer e. A non-zero element of the field Fq with q elements is a quadratic residue if it has a square root in Fq. Otherwise, it is a quadratic non-residue. There are (q − 1)/2 quadratic residues and (q − 1)/2 quadratic non-residues; zero is not counted in either class. The quadratic residues form a group under multiplication. The properties of quadratic residues are widely used in number theory. == In rings in general == Unlike in an integral domain, a square root in an arbitrary (unital) ring need not be unique up to sign. For example, in the ring Z / 8 Z {\displaystyle \mathbb {Z} /8\mathbb {Z} } of integers modulo 8 (which is commutative, but has zero divisors), the element 1 has four distinct square roots: ±1 and ±3. Another example is provided by the ring of quaternions H , {\displaystyle \mathbb {H} ,} which has no zero divisors, but is not commutative. 
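The mod-8 example can be verified by brute force; a short sketch follows (the helper name `square_roots` is illustrative):

```python
def square_roots(a: int, n: int) -> list[int]:
    """All x in Z/nZ with x*x congruent to a (mod n)."""
    return [x for x in range(n) if (x * x - a) % n == 0]

print(square_roots(1, 8))   # [1, 3, 5, 7] -- i.e. ±1 and ±3 modulo 8
print(square_roots(0, 9))   # [0, 3, 6] -- zero divisors give extra roots of 0
```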
Here, the element −1 has infinitely many square roots, including ±i, ±j, and ±k. In fact, the set of square roots of −1 is exactly { a i + b j + c k ∣ a 2 + b 2 + c 2 = 1 } . {\displaystyle \{ai+bj+ck\mid a^{2}+b^{2}+c^{2}=1\}.} A square root of 0 is either 0 or a zero divisor. Thus in rings where zero divisors do not exist, it is uniquely 0. However, rings with zero divisors may have multiple square roots of 0. For example, in Z / n 2 Z , {\displaystyle \mathbb {Z} /n^{2}\mathbb {Z} ,} any multiple of n is a square root of 0. == Geometric construction of the square root == The square root of a positive number is usually defined as the side length of a square whose area equals the given number. But the square shape is not essential: if one of two similar planar Euclidean objects has area a times that of the other, then the ratio of their linear sizes is a {\displaystyle {\sqrt {a}}} . A square root can be constructed with a compass and straightedge. In his Elements, Euclid (fl. 300 BC) gave the construction of the geometric mean of two quantities in two different places: Proposition II.14 and Proposition VI.13. Since the geometric mean of a and b is a b {\displaystyle {\sqrt {ab}}} , one can construct a {\displaystyle {\sqrt {a}}} simply by taking b = 1. The construction is also given by Descartes in his La Géométrie; see figure 2 on page 2. However, Descartes made no claim to originality and his audience would have been quite familiar with Euclid. Euclid's second proof in Book VI depends on the theory of similar triangles. Let AHB be a line segment of length a + b with AH = a and HB = b. Construct the circle with AB as diameter and let C be one of the two intersections of the perpendicular chord at H with the circle and denote the length CH as h. 
Then, using Thales' theorem and, as in the proof of Pythagoras' theorem by similar triangles, triangle AHC is similar to triangle CHB (as indeed both are to triangle ACB, though we don't need that, but it is the essence of the proof of Pythagoras' theorem) so that AH:CH is as HC:HB, i.e. a/h = h/b, from which we conclude by cross-multiplication that h2 = ab, and finally that h = a b {\displaystyle h={\sqrt {ab}}} . When marking the midpoint O of the line segment AB and drawing the radius OC of length (a + b)/2, then clearly OC > CH, i.e. a + b 2 ≥ a b {\textstyle {\frac {a+b}{2}}\geq {\sqrt {ab}}} (with equality if and only if a = b), which is the arithmetic–geometric mean inequality for two variables and, as noted above, is the basis of the Ancient Greek understanding of "Heron's method". Another method of geometric construction uses right triangles and induction: 1 {\displaystyle {\sqrt {1}}} can be constructed, and once x {\displaystyle {\sqrt {x}}} has been constructed, the right triangle with legs 1 and x {\displaystyle {\sqrt {x}}} has a hypotenuse of x + 1 {\displaystyle {\sqrt {x+1}}} . Constructing successive square roots in this manner yields the Spiral of Theodorus depicted above. == See also == == Notes == == References == Dauben, Joseph W. (2007). "Chinese Mathematics I". In Katz, Victor J. (ed.). The Mathematics of Egypt, Mesopotamia, China, India, and Islam. Princeton: Princeton University Press. ISBN 978-0-691-11485-9. Gel'fand, Izrael M.; Shen, Alexander (1993). Algebra (3rd ed.). Birkhäuser. p. 120. ISBN 0-8176-3677-3. Joseph, George (2000). The Crest of the Peacock. Princeton: Princeton University Press. ISBN 0-691-00659-8. Smith, David (1958). History of Mathematics. Vol. 2. New York: Dover Publications. ISBN 978-0-486-20430-7. 
Selin, Helaine (2008), Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures, Springer, Bibcode:2008ehst.book.....S, ISBN 978-1-4020-4559-2. == External links == Algorithms, implementations, and more – Paul Hsieh's square roots webpage How to manually find a square root AMS Featured Column, Galileo's Arithmetic by Tony Philips – includes a section on how Galileo found square roots
Wikipedia/Square_root_function
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range. == Example: Helix == A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as r ( t ) = f ( t ) i + g ( t ) j + h ( t ) k {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} +h(t)\mathbf {k} } where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. It can also be referred to in a different notation: r ( t ) = ⟨ f ( t ) , g ( t ) , h ( t ) ⟩ {\displaystyle \mathbf {r} (t)=\langle f(t),g(t),h(t)\rangle } The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function. The vector shown in the graph to the right is the evaluation of the function ⟨ 2 cos ⁡ t , 4 sin ⁡ t , t ⟩ {\displaystyle \langle 2\cos t,\,4\sin t,\,t\rangle } near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as t increases from zero through 8π. 
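The helix example can be evaluated directly. The sketch below (standard-library Python; the function name `r` mirrors the notation above) computes points of ⟨2 cos t, 4 sin t, t⟩ as t runs over [0, 8π]:

```python
import math

def r(t: float) -> tuple[float, float, float]:
    """The curve <2 cos t, 4 sin t, t> from the example."""
    return (2 * math.cos(t), 4 * math.sin(t), t)

# The tip of the vector traces the helix as t increases from 0 to 8*pi.
points = [r(k * 8 * math.pi / 100) for k in range(101)]
print(points[0])    # (2.0, 0.0, 0.0) at t = 0
```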
In 2D, we can analogously speak about vector-valued functions as: r ( t ) = f ( t ) i + g ( t ) j {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} } or r ( t ) = ⟨ f ( t ) , g ( t ) ⟩ {\displaystyle \mathbf {r} (t)=\langle f(t),g(t)\rangle } == Linear case == In the linear case the function can be expressed in terms of matrices: y = A x , {\displaystyle \mathbf {y} =A\mathbf {x} ,} where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation) where the function takes the form y = A x + b , {\displaystyle \mathbf {y} =A\mathbf {x} +\mathbf {b} ,} where in addition b is an n × 1 vector of parameters. The linear case arises often, for example in multiple regression, where for instance the n × 1 vector y ^ {\displaystyle {\hat {y}}} of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector β ^ {\displaystyle {\hat {\boldsymbol {\beta }}}} (k < n) of estimated values of model parameters: y ^ = X β ^ , {\displaystyle {\hat {\mathbf {y} }}=X{\hat {\boldsymbol {\beta }}},} in which X (playing the role of A in the previous generic form) is an n × k matrix of fixed (empirically based) numbers. == Parametric representation of a surface == A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters s and t determine the three Cartesian coordinates of any point on the surface: ( x , y , z ) = ( f ( s , t ) , g ( s , t ) , h ( s , t ) ) ≡ F ( s , t ) . {\displaystyle (x,y,z)=(f(s,t),g(s,t),h(s,t))\equiv \mathbf {F} (s,t).} Here F is a vector-valued function. For a surface embedded in n-dimensional space, one similarly has the representation ( x 1 , x 2 , … , x n ) = ( f 1 ( s , t ) , f 2 ( s , t ) , … , f n ( s , t ) ) ≡ F ( s , t ) . 
{\displaystyle (x_{1},x_{2},\dots ,x_{n})=(f_{1}(s,t),f_{2}(s,t),\dots ,f_{n}(s,t))\equiv \mathbf {F} (s,t).} == Derivative of a three-dimensional vector function == Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if r ( t ) = f ( t ) i + g ( t ) j + h ( t ) k {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} +h(t)\mathbf {k} } is a vector-valued function, then d r d t = f ′ ( t ) i + g ′ ( t ) j + h ′ ( t ) k . {\displaystyle {\frac {d\mathbf {r} }{dt}}=f'(t)\mathbf {i} +g'(t)\mathbf {j} +h'(t)\mathbf {k} .} The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle v ( t ) = d r d t . {\displaystyle \mathbf {v} (t)={\frac {d\mathbf {r} }{dt}}.} Likewise, the derivative of the velocity is the acceleration d v d t = a ( t ) . {\displaystyle {\frac {d\mathbf {v} }{dt}}=\mathbf {a} (t).} === Partial derivative === The partial derivative of a vector function a with respect to a scalar variable q is defined as ∂ a ∂ q = ∑ i = 1 n ∂ a i ∂ q e i {\displaystyle {\frac {\partial \mathbf {a} }{\partial q}}=\sum _{i=1}^{n}{\frac {\partial a_{i}}{\partial q}}\mathbf {e} _{i}} where ai is the scalar component of a in the direction of ei. It is also called the direction cosine of a and ei or their dot product. The vectors e1, e2, e3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken. === Ordinary derivative === If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t, d a d t = ∑ i = 1 n d a i d t e i . 
{\displaystyle {\frac {d\mathbf {a} }{dt}}=\sum _{i=1}^{n}{\frac {da_{i}}{dt}}\mathbf {e} _{i}.} === Total derivative === If the vector a is a function of a number n of scalar variables qr (r = 1, ..., n), and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as d a d t = ∑ r = 1 n ∂ a ∂ q r d q r d t + ∂ a ∂ t . {\displaystyle {\frac {d\mathbf {a} }{dt}}=\sum _{r=1}^{n}{\frac {\partial \mathbf {a} }{\partial q_{r}}}{\frac {dq_{r}}{dt}}+{\frac {\partial \mathbf {a} }{\partial t}}.} Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr. === Reference frames === Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship. === Derivative of a vector function with nonfixed bases === The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. 
However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is N d a d t = ∑ i = 1 3 d a i d t e i + ∑ i = 1 3 a i N d e i d t {\displaystyle {\frac {{}^{\mathrm {N} }d\mathbf {a} }{dt}}=\sum _{i=1}^{3}{\frac {da_{i}}{dt}}\mathbf {e} _{i}+\sum _{i=1}^{3}a_{i}{\frac {{}^{\mathrm {N} }d\mathbf {e} _{i}}{dt}}} where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself. Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is N d a d t = E d a d t + N ω E × a {\displaystyle {\frac {{}^{\mathrm {N} }d\mathbf {a} }{dt}}={\frac {{}^{\mathrm {E} }d\mathbf {a} }{dt}}+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {a} } where NωE is the angular velocity of the reference frame E relative to the reference frame N. One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula N d d t ( r R ) = E d d t ( r R ) + N ω E × r R . 
{\displaystyle {\frac {{}^{\mathrm {N} }d}{dt}}(\mathbf {r} ^{\mathrm {R} })={\frac {{}^{\mathrm {E} }d}{dt}}(\mathbf {r} ^{\mathrm {R} })+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {r} ^{\mathrm {R} }.} where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution, N v R = E v R + N ω E × r R {\displaystyle {}^{\mathrm {N} }\mathbf {v} ^{\mathrm {R} }={}^{\mathrm {E} }\mathbf {v} ^{\mathrm {R} }+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {r} ^{\mathrm {R} }} where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth. === Derivative and vector multiplication === The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions. Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q, ∂ ∂ q ( p a ) = ∂ p ∂ q a + p ∂ a ∂ q . {\displaystyle {\frac {\partial }{\partial q}}(p\mathbf {a} )={\frac {\partial p}{\partial q}}\mathbf {a} +p{\frac {\partial \mathbf {a} }{\partial q}}.} In the case of dot multiplication, for two vectors a and b that are both functions of q, ∂ ∂ q ( a ⋅ b ) = ∂ a ∂ q ⋅ b + a ⋅ ∂ b ∂ q . {\displaystyle {\frac {\partial }{\partial q}}(\mathbf {a} \cdot \mathbf {b} )={\frac {\partial \mathbf {a} }{\partial q}}\cdot \mathbf {b} +\mathbf {a} \cdot {\frac {\partial \mathbf {b} }{\partial q}}.} Similarly, the derivative of the cross product of two vector functions is ∂ ∂ q ( a × b ) = ∂ a ∂ q × b + a × ∂ b ∂ q . 
{\displaystyle {\frac {\partial }{\partial q}}(\mathbf {a} \times \mathbf {b} )={\frac {\partial \mathbf {a} }{\partial q}}\times \mathbf {b} +\mathbf {a} \times {\frac {\partial \mathbf {b} }{\partial q}}.} === Derivative of an n-dimensional vector function === A function f of a real number t with values in the space R n {\displaystyle \mathbb {R} ^{n}} can be written as f ( t ) = ( f 1 ( t ) , f 2 ( t ) , … , f n ( t ) ) {\displaystyle \mathbf {f} (t)=(f_{1}(t),f_{2}(t),\ldots ,f_{n}(t))} . Its derivative equals f ′ ( t ) = ( f 1 ′ ( t ) , f 2 ′ ( t ) , … , f n ′ ( t ) ) . {\displaystyle \mathbf {f} '(t)=(f_{1}'(t),f_{2}'(t),\ldots ,f_{n}'(t)).} If f is a function of several variables, say of t ∈ R m {\displaystyle t\in \mathbb {R} ^{m}} , then the partial derivatives of the components of f form an n × m {\displaystyle n\times m} matrix called the Jacobian matrix of f. == Infinite-dimensional vector functions == If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function. === Functions with values in a Hilbert space === If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case: f ′ ( t ) = lim h → 0 f ( t + h ) − f ( t ) h . {\displaystyle \mathbf {f} '(t)=\lim _{h\to 0}{\frac {\mathbf {f} (t+h)-\mathbf {f} (t)}{h}}.} Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t ∈ R n {\displaystyle t\in \mathbb {R} ^{n}} or even t ∈ Y {\displaystyle t\in Y} , where Y is an infinite-dimensional vector space). N.B. 
If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if f = ( f 1 , f 2 , f 3 , … ) {\displaystyle \mathbf {f} =(f_{1},f_{2},f_{3},\ldots )} (i.e., f = f 1 e 1 + f 2 e 2 + f 3 e 3 + ⋯ {\displaystyle \mathbf {f} =f_{1}\mathbf {e} _{1}+f_{2}\mathbf {e} _{2}+f_{3}\mathbf {e} _{3}+\cdots } , where e 1 , e 2 , e 3 , … {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3},\ldots } is an orthonormal basis of the space X ), and f ′ ( t ) {\displaystyle f'(t)} exists, then f ′ ( t ) = ( f 1 ′ ( t ) , f 2 ′ ( t ) , f 3 ′ ( t ) , … ) . {\displaystyle \mathbf {f} '(t)=(f_{1}'(t),f_{2}'(t),f_{3}'(t),\ldots ).} However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space. === Other infinite-dimensional vector spaces === Most of the above holds for other topological vector spaces X as well. However, not as many classical results hold in the Banach space setting; e.g., an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in the setting of most Banach spaces there are no orthonormal bases. == Vector field == == See also == Coordinate vector Curve Multivalued function Parametric surface Position vector Parametrization == Notes == == References == == External links == Vector-valued functions and their properties (from Lake Tahoe Community College) Weisstein, Eric W. "Vector Function". MathWorld. Everything2 article 3 Dimensional vector-valued functions (from East Tennessee State University) "Position Vector Valued Functions" Khan Academy module
Wikipedia/Vector-valued_functions
In topology, filters can be used to study topological spaces and define basic topological notions such as convergence, continuity, compactness, and more. Filters, which are special families of subsets of some given set, also provide a common framework for defining various types of limits of functions such as limits from the left/right, to infinity, to a point or a set, and many others. Special types of filters called ultrafilters have many useful technical properties and they may often be used in place of arbitrary filters. Filters have generalizations called prefilters (also known as filter bases) and filter subbases, all of which appear naturally and repeatedly throughout topology. Examples include neighborhood filters/bases/subbases and uniformities. Every filter is a prefilter and both are filter subbases. Every prefilter and filter subbase is contained in a unique smallest filter, which they are said to generate. This establishes a relationship between filters and prefilters that may often be exploited to allow one to use whichever of these two notions is more technically convenient. There is a certain preorder on families of sets (subordination), denoted by ≤ , {\displaystyle \,\leq ,\,} that helps to determine exactly when and how one notion (filter, prefilter, etc.) can or cannot be used in place of another. This preorder's importance is amplified by the fact that it also defines the notion of filter convergence, where by definition, a filter (or prefilter) B {\displaystyle {\mathcal {B}}} converges to a point if and only if N ≤ B , {\displaystyle {\mathcal {N}}\leq {\mathcal {B}},} where N {\displaystyle {\mathcal {N}}} is that point's neighborhood filter. Consequently, subordination also plays an important role in many concepts that are related to convergence, such as cluster points and limits of functions. 
In addition, the relation S ≥ B , {\displaystyle {\mathcal {S}}\geq {\mathcal {B}},} which denotes B ≤ S {\displaystyle {\mathcal {B}}\leq {\mathcal {S}}} and is expressed by saying that S {\displaystyle {\mathcal {S}}} is subordinate to B , {\displaystyle {\mathcal {B}},} also establishes a relationship in which S {\displaystyle {\mathcal {S}}} is to B {\displaystyle {\mathcal {B}}} as a subsequence is to a sequence (that is, the relation ≥ , {\displaystyle \geq ,} which is called subordination, is for filters the analog of "is a subsequence of"). Filters were introduced by Henri Cartan in 1937 and subsequently used by Bourbaki in their book Topologie Générale as an alternative to the similar notion of a net developed in 1922 by E. H. Moore and H. L. Smith. Filters can also be used to characterize the notions of sequence and net convergence. But unlike sequence and net convergence, filter convergence is defined entirely in terms of subsets of the topological space X {\displaystyle X} and so it provides a notion of convergence that is completely intrinsic to the topological space; indeed, the category of topological spaces can be equivalently defined entirely in terms of filters. Every net induces a canonical filter and dually, every filter induces a canonical net, where this induced net (resp. induced filter) converges to a point if and only if the same is true of the original filter (resp. net). This characterization also holds for many other definitions such as cluster points. These relationships make it possible to switch between filters and nets, and they often also allow one to choose whichever of these two notions (filter or net) is more convenient for the problem at hand. 
However, assuming that "subnet" is defined using either of its most popular definitions (which are those given by Willard and by Kelley), then in general, this relationship does not extend to subordinate filters and subnets because as detailed below, there exist subordinate filters whose filter/subordinate-filter relationship cannot be described in terms of the corresponding net/subnet relationship; this issue can however be resolved by using a less commonly encountered definition of "subnet", which is that of an AA-subnet. Thus filters/prefilters and this single preorder ≤ {\displaystyle \,\leq \,} provide a framework that seamlessly ties together fundamental topological concepts such as topological spaces (via neighborhood filters), neighborhood bases, convergence, various limits of functions, continuity, compactness, sequences (via sequential filters), the filter equivalent of "subsequence" (subordination), uniform spaces, and more; concepts that otherwise seem relatively disparate and whose relationships are less clear. == Motivation == Archetypical example of a filter The archetypical example of a filter is the neighborhood filter N ( x ) {\displaystyle {\mathcal {N}}(x)} at a point x {\displaystyle x} in a topological space ( X , τ ) , {\displaystyle (X,\tau ),} which is the family of sets consisting of all neighborhoods of x . {\displaystyle x.} By definition, a neighborhood of some given point x {\displaystyle x} is any subset B ⊆ X {\displaystyle B\subseteq X} whose topological interior contains this point; that is, such that x ∈ Int X ⁡ B . {\displaystyle x\in \operatorname {Int} _{X}B.} Importantly, neighborhoods are not required to be open sets; those are called open neighborhoods. Listed below are those fundamental properties of neighborhood filters that ultimately became the definition of a "filter." 
A filter on X {\displaystyle X} is a set B {\displaystyle {\mathcal {B}}} of subsets of X {\displaystyle X} that satisfies all of the following conditions: Not empty: X ∈ B {\displaystyle X\in {\mathcal {B}}}  –  just as X ∈ N ( x ) , {\displaystyle X\in {\mathcal {N}}(x),} since X {\displaystyle X} is always a neighborhood of x {\displaystyle x} (and of anything else that it contains); Does not contain the empty set: ∅ ∉ B {\displaystyle \varnothing \not \in {\mathcal {B}}}  –  just as no neighborhood of x {\displaystyle x} is empty; Closed under finite intersections: If B , C ∈ B then B ∩ C ∈ B {\displaystyle B,C\in {\mathcal {B}}{\text{ then }}B\cap C\in {\mathcal {B}}}  –  just as the intersection of any two neighborhoods of x {\displaystyle x} is again a neighborhood of x {\displaystyle x} ; Upward closed: If B ∈ B and B ⊆ S ⊆ X {\displaystyle B\in {\mathcal {B}}{\text{ and }}B\subseteq S\subseteq X} then S ∈ B {\displaystyle S\in {\mathcal {B}}}  –  just as any subset of X {\displaystyle X} that includes a neighborhood of x {\displaystyle x} will necessarily be a neighborhood of x {\displaystyle x} (this follows from Int X ⁡ B ⊆ Int X ⁡ S {\displaystyle \operatorname {Int} _{X}B\subseteq \operatorname {Int} _{X}S} and the definition of "a neighborhood of x {\displaystyle x} "). Generalizing sequence convergence by using sets − determining sequence convergence without the sequence A sequence in X {\displaystyle X} is by definition a map N → X {\displaystyle \mathbb {N} \to X} from the natural numbers into the space X . {\displaystyle X.} The original notion of convergence in a topological space was that of a sequence converging to some given point in a space, such as a metric space. With metrizable spaces (or more generally first-countable spaces or Fréchet–Urysohn spaces), sequences usually suffice to characterize, or "describe", most topological properties, such as the closures of subsets or continuity of functions. 
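The four filter axioms above can be checked mechanically on a finite set. The sketch below is standard-library Python; the function names are illustrative, and for a finite set "closed under finite intersections" reduces to closure under pairwise intersections:

```python
from itertools import combinations

def powerset(s):
    """All subsets of a finite set, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_filter(family, X):
    """Check the four filter axioms for a family of subsets of a finite set X."""
    B = {frozenset(b) for b in family}
    X = frozenset(X)
    if X not in B:                 # not empty: X itself must belong
        return False
    if frozenset() in B:           # must not contain the empty set
        return False
    # closed under pairwise (hence finite) intersections
    if any(b & c not in B for b in B for c in B):
        return False
    # upward closed within X
    if any(b <= s and s not in B for b in B for s in powerset(X)):
        return False
    return True

X = {1, 2, 3}
principal = [s for s in powerset(X) if 1 in s]   # all supersets of {1}
print(is_filter(principal, X))                   # True: a (principal) filter
print(is_filter(powerset(X), X))                 # False: contains the empty set
```

The first family is the upward closure of the single set {1}, which is exactly how a prefilter generates a filter.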
But there are many spaces where sequences can not be used to describe even basic topological properties like closure or continuity. This failure of sequences was the motivation for defining notions such as nets and filters, which never fail to characterize topological properties. Nets directly generalize the notion of a sequence since nets are, by definition, maps I → X {\displaystyle I\to X} from an arbitrary directed set ( I , ≤ ) {\displaystyle (I,\leq )} into the space X . {\displaystyle X.} A sequence is just a net whose domain is I = N {\displaystyle I=\mathbb {N} } with the natural ordering. Nets have their own notion of convergence, which is a direct generalization of sequence convergence. Filters generalize sequence convergence in a different way by considering only the values of a sequence. To see how this is done, consider a sequence x ∙ = ( x i ) i = 1 ∞ in X , {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }{\text{ in }}X,} which is by definition just a function x ∙ : N → X {\displaystyle x_{\bullet }:\mathbb {N} \to X} whose value at i ∈ N {\displaystyle i\in \mathbb {N} } is denoted by x i {\displaystyle x_{i}} rather than by the usual parentheses notation x ∙ ( i ) {\displaystyle x_{\bullet }(i)} that is commonly used for arbitrary functions. Knowing only the image (sometimes called "the range") Im ⁡ x ∙ := { x i : i ∈ N } = { x 1 , x 2 , … } {\displaystyle \operatorname {Im} x_{\bullet }:=\left\{x_{i}:i\in \mathbb {N} \right\}=\left\{x_{1},x_{2},\ldots \right\}} of the sequence is not enough to characterize its convergence; multiple sets are needed. 
It turns out that the needed sets are the following, which are called the tails of the sequence x ∙ {\displaystyle x_{\bullet }} : x ≥ 1 = { x 1 , x 2 , x 3 , x 4 , … } x ≥ 2 = { x 2 , x 3 , x 4 , x 5 , … } x ≥ 3 = { x 3 , x 4 , x 5 , x 6 , … } ⋮ x ≥ n = { x n , x n + 1 , x n + 2 , x n + 3 , … } ⋮ {\displaystyle {\begin{alignedat}{8}x_{\geq 1}=\;&\{&&x_{1},&&x_{2},&&x_{3},&&x_{4},&&\ldots &&\,\}\\[0.3ex]x_{\geq 2}=\;&\{&&x_{2},&&x_{3},&&x_{4},&&x_{5},&&\ldots &&\,\}\\[0.3ex]x_{\geq 3}=\;&\{&&x_{3},&&x_{4},&&x_{5},&&x_{6},&&\ldots &&\,\}\\[0.3ex]&&&&&&&\;\,\vdots &&&&&&\\[0.3ex]x_{\geq n}=\;&\{&&x_{n},\;\;\,&&x_{n+1},\;&&x_{n+2},\;&&x_{n+3},&&\ldots &&\,\}\\[0.3ex]&&&&&&&\;\,\vdots &&&&&&\\[0.3ex]\end{alignedat}}} These sets completely determine this sequence's convergence (or non-convergence) because given any point, this sequence converges to it if and only if for every neighborhood U {\displaystyle U} (of this point), there is some integer n {\displaystyle n} such that U {\displaystyle U} contains all of the points x n , x n + 1 , … . {\displaystyle x_{n},x_{n+1},\ldots .} This can be reworded as: every neighborhood U {\displaystyle U} must contain some set of the form { x n , x n + 1 , … } {\displaystyle \{x_{n},x_{n+1},\ldots \}} as a subset. Or more briefly: every neighborhood must contain some tail x ≥ n {\displaystyle x_{\geq n}} as a subset. It is this characterization that can be used with the above family of tails to determine convergence (or non-convergence) of the sequence x ∙ : N → X . {\displaystyle x_{\bullet }:\mathbb {N} \to X.} Specifically, with the family of sets { x ≥ 1 , x ≥ 2 , … } {\displaystyle \{x_{\geq 1},x_{\geq 2},\ldots \}} in hand, the function x ∙ : N → X {\displaystyle x_{\bullet }:\mathbb {N} \to X} is no longer needed to determine convergence of this sequence (no matter what topology is placed on X {\displaystyle X} ). 
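The tail characterization can be turned into a concrete (necessarily finite) check. The sketch below assumes the specific sequence x_i = 1/i in the real line with its usual topology, and searches for a tail contained in a given ε-neighborhood of 0; since 1/i is decreasing, the whole tail lies inside the ball as soon as its first point does:

```python
def first_tail_inside(x, U_contains, n_max=10**6):
    """Least n <= n_max such that the tail {x(n), x(n+1), ...} lies in U,
    else None.  Sound here only because x(i) = 1/i decreases toward 0, so
    the entire tail is inside the ball U once its first point x(n) is."""
    for n in range(1, n_max + 1):
        if U_contains(x(n)):
            return n
    return None

x = lambda i: 1.0 / i
eps = 1e-3
n = first_tail_inside(x, lambda t: abs(t) < eps)
# A witness tail exists for every eps > 0, which is exactly the statement
# that the sequence converges to 0.
assert n is not None
assert all(abs(x(m)) < eps for m in range(n, n + 1000))
```

Only the family of tail sets is consulted; the indexing of the sequence plays no role once the tails are in hand, matching the discussion above.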
By generalizing this observation, the notion of "convergence" can be extended from sequences/functions to families of sets. The above set of tails of a sequence is in general not a filter but it does "generate" a filter via taking its upward closure (which consists of all supersets of all tails). The same is true of other important families of sets such as any neighborhood basis at a given point, which in general is also not a filter but does generate a filter via its upward closure (in particular, it generates the neighborhood filter at that point). The properties that these families share led to the notion of a filter base, also called a prefilter, which by definition is any family having the minimal properties necessary and sufficient for it to generate a filter via taking its upward closure. Nets versus filters − advantages and disadvantages Filters and nets each have their own advantages and drawbacks and there's no reason to use one notion exclusively over the other. Depending on what is being proved, a proof may be made significantly easier by using one of these notions instead of the other. Both filters and nets can be used to completely characterize any given topology. Nets are direct generalizations of sequences and can often be used similarly to sequences, so the learning curve for nets is typically much less steep than that for filters. However, filters, and especially ultrafilters, have many more uses outside of topology, such as in set theory, mathematical logic, model theory (ultraproducts, for example), abstract algebra, combinatorics, dynamics, order theory, generalized convergence spaces, Cauchy spaces, and in the definition and use of hyperreal numbers. Like sequences, nets are functions and so they have the advantages of functions. For example, like sequences, nets can be "plugged into" other functions, where "plugging in" is just function composition. Theorems related to functions and function composition may then be applied to nets. 
One example is the universal property of inverse limits, which is defined in terms of composition of functions rather than sets, and so is more readily applied to functions like nets than to sets like filters (a prominent example of an inverse limit is the Cartesian product). Filters may be awkward to use in certain situations, such as when switching between a filter on a space X {\displaystyle X} and a filter on a dense subspace S ⊆ X . {\displaystyle S\subseteq X.} In contrast to nets, filters (and prefilters) are families of sets and so they have the advantages of sets. For example, if f {\displaystyle f} is surjective then the image f − 1 ( B ) := { f − 1 ( B ) : B ∈ B } {\displaystyle f^{-1}({\mathcal {B}}):=\left\{f^{-1}(B)~:~B\in {\mathcal {B}}\right\}} under f − 1 {\displaystyle f^{-1}} of an arbitrary filter or prefilter B {\displaystyle {\mathcal {B}}} is both easily defined and guaranteed to be a prefilter on f {\displaystyle f} 's domain, whereas it is less clear how to pull back (unambiguously/without choice) an arbitrary sequence (or net) y ∙ {\displaystyle y_{\bullet }} so as to obtain a sequence or net in the domain (unless f {\displaystyle f} is also injective and consequently a bijection, which is a stringent requirement). Similarly, the intersection of any collection of filters is once again a filter, whereas it is not clear what this could mean for sequences or nets. Because filters are composed of subsets of the very topological space X {\displaystyle X} that is under consideration, topological set operations (such as closure or interior) may be applied to the sets that constitute the filter. Taking the closure of all the sets in a filter is sometimes useful in functional analysis, for instance.
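The claim about preimages can be illustrated on finite sets. A minimal sketch (with ad hoc helper names), taking a surjection f and verifying that the family f⁻¹(B) is again a prefilter, i.e. non-empty, proper, and downward directed:

```python
def preimage_family(f, domain, family):
    """f^{-1}(B) := { f^{-1}(B) : B in B }, computed over a finite domain."""
    return {frozenset(x for x in domain if f(x) in B) for B in family}

def is_prefilter(family):
    """Non-empty, proper, and downward directed: any two members
    contain a common member of the family."""
    fam = {frozenset(B) for B in family}
    if not fam or frozenset() in fam:
        return False
    return all(any(D <= B & C for D in fam) for B in fam for C in fam)

domain = {0, 1, 2, 3, 4, 5}
f = lambda n: n % 3                          # a surjection onto {0, 1, 2}
B = {frozenset({0}), frozenset({0, 1})}      # a prefilter on {0, 1, 2}
assert is_prefilter(B)
assert is_prefilter(preimage_family(f, domain, B))
```

Surjectivity is what guarantees properness: every member of B has a non-empty preimage, so the empty set cannot appear.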
Theorems and results about images or preimages of sets under a function may also be applied to the sets that constitute a filter; an example of such a result might be one of continuity's characterizations in terms of preimages of open/closed sets or in terms of the interior/closure operators. Special types of filters called ultrafilters have many useful properties that can significantly help in proving results. One downside of nets is their dependence on the directed sets that constitute their domains, which in general may be entirely unrelated to the space X . {\displaystyle X.} In fact, the class of nets in a given set X {\displaystyle X} is too large to even be a set (it is a proper class); this is because nets in X {\displaystyle X} can have domains of any cardinality. In contrast, the collection of all filters (and of all prefilters) on X {\displaystyle X} is a set whose cardinality is no larger than that of ℘ ( ℘ ( X ) ) . {\displaystyle \wp (\wp (X)).} Similar to a topology on X , {\displaystyle X,} a filter on X {\displaystyle X} is "intrinsic to X {\displaystyle X} " in the sense that both structures consist entirely of subsets of X {\displaystyle X} and neither definition requires any set that cannot be constructed from X {\displaystyle X} (such as N {\displaystyle \mathbb {N} } or other directed sets, which sequences and nets require).

== Preliminaries, notation, and basic notions ==

In this article, upper case Roman letters like S {\displaystyle S} and X {\displaystyle X} denote sets (but not families unless indicated otherwise) and ℘ ( X ) {\displaystyle \wp (X)} will denote the power set of X . {\displaystyle X.} A subset of a power set is called a family of sets (or simply, a family), and it is said to be over X {\displaystyle X} if it is a subset of ℘ ( X ) .
{\displaystyle \wp (X).} Families of sets will be denoted by uppercase calligraphic letters such as B {\displaystyle {\mathcal {B}}} , C {\displaystyle {\mathcal {C}}} , and F {\displaystyle {\mathcal {F}}} . Whenever these assumptions are needed, it should be assumed that X {\displaystyle X} is non-empty and that B , F , {\displaystyle {\mathcal {B}},{\mathcal {F}},} etc. are families of sets over X . {\displaystyle X.} The terms "prefilter" and "filter base" are synonyms and will be used interchangeably.

Warning about competing definitions and notation

There are unfortunately several terms in the theory of filters that are defined differently by different authors. These include some of the most important terms, such as "filter." While different definitions of the same term usually have significant overlap, due to the very technical nature of filters (and point–set topology), these differences in definitions nevertheless often have important consequences. When reading mathematical literature, it is recommended that readers check how the terminology related to filters is defined by the author. For this reason, this article will clearly state all definitions as they are used. Unfortunately, not all notation related to filters is well established and some notation varies greatly across the literature (for example, the notation for the set of all prefilters on a set), so in such cases this article uses whatever notation is most self-describing or most easily remembered. The theory of filters and prefilters is well developed and has a plethora of definitions and notations, many of which are now unceremoniously listed to prevent this article from becoming prolix and to allow for the easy lookup of notation and definitions. Their important properties are described later.
Set operations

The upward closure or isotonization in X {\displaystyle X} of a family of sets B ⊆ ℘ ( X ) {\displaystyle {\mathcal {B}}\subseteq \wp (X)} is B ↑ X := { S ⊆ X : B ⊆ S for some B ∈ B } {\displaystyle {\mathcal {B}}^{\uparrow X}:=\{S\subseteq X~:~B\subseteq S{\text{ for some }}B\in {\mathcal {B}}\,\}} and similarly the downward closure of B {\displaystyle {\mathcal {B}}} is B ↓ := { S ⊆ B : B ∈ B } = ⋃ B ∈ B ℘ ( B ) . {\displaystyle {\mathcal {B}}^{\downarrow }:=\{S\subseteq B~:~B\in {\mathcal {B}}\,\}={\textstyle \bigcup \limits _{B\in {\mathcal {B}}}}\wp (B).} Throughout, f {\displaystyle f} is a map.

Topology notation

Denote the set of all topologies on a set X by Top ⁡ ( X ) . {\displaystyle X{\text{ by }}\operatorname {Top} (X).} Suppose τ ∈ Top ⁡ ( X ) , {\displaystyle \tau \in \operatorname {Top} (X),} S ⊆ X {\displaystyle S\subseteq X} is any subset, and x ∈ X {\displaystyle x\in X} is any point. If ∅ ≠ S ⊆ X {\displaystyle \varnothing \neq S\subseteq X} then τ ( S ) = ⋂ s ∈ S τ ( s ) and N τ ( S ) = ⋂ s ∈ S N τ ( s ) . {\displaystyle \tau (S)={\textstyle \bigcap \limits _{s\in S}}\tau (s){\text{ and }}{\mathcal {N}}_{\tau }(S)={\textstyle \bigcap \limits _{s\in S}}{\mathcal {N}}_{\tau }(s).}

Nets and their tails

A directed set is a set I {\displaystyle I} together with a preorder, which will be denoted by ≤ {\displaystyle \,\leq \,} (unless explicitly indicated otherwise), that makes ( I , ≤ ) {\displaystyle (I,\leq )} into an (upward) directed set; this means that for all i , j ∈ I , {\displaystyle i,j\in I,} there exists some k ∈ I {\displaystyle k\in I} such that i ≤ k and j ≤ k . {\displaystyle i\leq k{\text{ and }}j\leq k.} For any indices i and j , {\displaystyle i{\text{ and }}j,} the notation j ≥ i {\displaystyle j\geq i} is defined to mean i ≤ j {\displaystyle i\leq j} while i < j {\displaystyle i<j} is defined to mean that i ≤ j {\displaystyle i\leq j} holds but it is not true that j ≤ i {\displaystyle j\leq i} (if ≤ {\displaystyle \,\leq \,} is antisymmetric then this is equivalent to i ≤ j and i ≠ j {\displaystyle i\leq j{\text{ and }}i\neq j} ).
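On a finite ground set, both of the closure operations defined above can be computed exhaustively. A small sketch (helper names are ad hoc):

```python
from itertools import combinations

def powerset(X):
    """All subsets of X as frozensets."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def upward_closure(family, X):
    """B^{upward-in-X}: all subsets of X that contain some member of B."""
    return {S for S in powerset(X) if any(B <= S for B in family)}

def downward_closure(family):
    """B^{downward}: all subsets of members of B (a union of power sets)."""
    return {S for B in family for S in powerset(B)}

X = {1, 2, 3}
B = {frozenset({1, 2})}
assert upward_closure(B, X) == {frozenset({1, 2}), frozenset({1, 2, 3})}
assert downward_closure(B) == set(powerset({1, 2}))
```

Note that the upward closure depends on the ambient set X while the downward closure does not, mirroring the notation B^{↑X} versus B^{↓}.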
A net in X {\displaystyle X} is a map from a non-empty directed set into X . {\displaystyle X.} The notation x ∙ = ( x i ) i ∈ I {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}} will be used to denote a net with domain I . {\displaystyle I.} Warning about using strict comparison If x ∙ = ( x i ) i ∈ I {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}} is a net and i ∈ I {\displaystyle i\in I} then it is possible for the set x > i = { x j : j > i and j ∈ I } , {\displaystyle x_{>i}=\left\{x_{j}~:~j>i{\text{ and }}j\in I\right\},} which is called the tail of x ∙ {\displaystyle x_{\bullet }} after i {\displaystyle i} , to be empty (for example, this happens if i {\displaystyle i} is an upper bound of the directed set I {\displaystyle I} ). In this case, the family { x > i : i ∈ I } {\displaystyle \left\{x_{>i}~:~i\in I\right\}} would contain the empty set, which would prevent it from being a prefilter (defined later). This is the (important) reason for defining Tails ⁡ ( x ∙ ) {\displaystyle \operatorname {Tails} \left(x_{\bullet }\right)} as { x ≥ i : i ∈ I } {\displaystyle \left\{x_{\geq i}~:~i\in I\right\}} rather than { x > i : i ∈ I } {\displaystyle \left\{x_{>i}~:~i\in I\right\}} or even { x > i : i ∈ I } ∪ { x ≥ i : i ∈ I } {\displaystyle \left\{x_{>i}~:~i\in I\right\}\cup \left\{x_{\geq i}~:~i\in I\right\}} and it is for this reason that in general, when dealing with the prefilter of tails of a net, the strict inequality < {\displaystyle \,<\,} may not be used interchangeably with the inequality ≤ . {\displaystyle \,\leq .} === Filters and prefilters === The following is a list of properties that a family B {\displaystyle {\mathcal {B}}} of sets may possess and they form the defining properties of filters, prefilters, and filter subbases. Whenever it is necessary, it should be assumed that B ⊆ ℘ ( X ) . 
{\displaystyle {\mathcal {B}}\subseteq \wp (X).} Many of the properties of B {\displaystyle {\mathcal {B}}} defined above and below, such as "proper" and "directed downward," do not depend on X , {\displaystyle X,} so mentioning the set X {\displaystyle X} is optional when using such terms. Definitions involving being "upward closed in X , {\displaystyle X,} " such as that of "filter on X , {\displaystyle X,} " do depend on X {\displaystyle X} so the set X {\displaystyle X} should be mentioned if it is not clear from context. There are no prefilters on X = ∅ {\displaystyle X=\varnothing } (nor are there any nets valued in ∅ {\displaystyle \varnothing } ), which is why this article, like most authors, will automatically assume without comment that X ≠ ∅ {\displaystyle X\neq \varnothing } whenever this assumption is needed. ==== Basic examples ==== Named examples The singleton set B = { X } {\displaystyle {\mathcal {B}}=\{X\}} is called the indiscrete or trivial filter on X . {\displaystyle X.} It is the unique minimal filter on X {\displaystyle X} because it is a subset of every filter on X {\displaystyle X} ; however, it need not be a subset of every prefilter on X . {\displaystyle X.} The dual ideal ℘ ( X ) {\displaystyle \wp (X)} is also called the degenerate filter on X {\displaystyle X} (despite not actually being a filter). It is the only dual ideal on X {\displaystyle X} that is not a filter on X . {\displaystyle X.} If ( X , τ ) {\displaystyle (X,\tau )} is a topological space and x ∈ X , {\displaystyle x\in X,} then the neighborhood filter N ( x ) {\displaystyle {\mathcal {N}}(x)} at x {\displaystyle x} is a filter on X . {\displaystyle X.} By definition, a family B ⊆ ℘ ( X ) {\displaystyle {\mathcal {B}}\subseteq \wp (X)} is called a neighborhood basis (resp. a neighborhood subbase) at x for ( X , τ ) {\displaystyle x{\text{ for }}(X,\tau )} if and only if B {\displaystyle {\mathcal {B}}} is a prefilter (resp. 
B {\displaystyle {\mathcal {B}}} is a filter subbase) and the filter on X {\displaystyle X} that B {\displaystyle {\mathcal {B}}} generates is equal to the neighborhood filter N ( x ) . {\displaystyle {\mathcal {N}}(x).} The subfamily τ ( x ) ⊆ N ( x ) {\displaystyle \tau (x)\subseteq {\mathcal {N}}(x)} of open neighborhoods is a filter base for N ( x ) . {\displaystyle {\mathcal {N}}(x).} Both prefilters N ( x ) and τ ( x ) {\displaystyle {\mathcal {N}}(x){\text{ and }}\tau (x)} also form bases for topologies on X , {\displaystyle X,} with the topology generated by τ ( x ) {\displaystyle \tau (x)} being coarser than τ . {\displaystyle \tau .} This example immediately generalizes from neighborhoods of points to neighborhoods of non-empty subsets S ⊆ X . {\displaystyle S\subseteq X.} B {\displaystyle {\mathcal {B}}} is an elementary prefilter if B = Tails ⁡ ( x ∙ ) {\displaystyle {\mathcal {B}}=\operatorname {Tails} \left(x_{\bullet }\right)} for some sequence of points x ∙ = ( x i ) i = 1 ∞ . {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }.} B {\displaystyle {\mathcal {B}}} is an elementary filter or a sequential filter on X {\displaystyle X} if B {\displaystyle {\mathcal {B}}} is a filter on X {\displaystyle X} generated by some elementary prefilter. The filter of tails generated by a sequence that is not eventually constant is necessarily not an ultrafilter. Every principal filter on a countable set is sequential, as is every cofinite filter on a countably infinite set. The intersection of finitely many sequential filters is again sequential.
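The claim that every principal filter on a countable set is sequential can be made concrete: a sequence that cycles through a finite set A has every tail equal to A, so its prefilter of tails is {A}, which generates the principal filter at A. A sketch (the truncation horizon is an ad hoc finite approximation, exact here because every tail already contains all of A within a few terms):

```python
def tails(x, n_terms, horizon):
    """The tail *sets* {x(n), x(n+1), ...}, each truncated after `horizon`
    terms; n ranges over 1, ..., n_terms."""
    return {frozenset(x(i) for i in range(n, n + horizon))
            for n in range(1, n_terms + 1)}

A = frozenset({0, 1, 2})
x = lambda i: (i - 1) % 3          # cycles 0, 1, 2, 0, 1, 2, ...
T = tails(x, n_terms=10, horizon=30)
# Every tail is all of A, so Tails(x) = {A}: an elementary prefilter
# generating the principal filter at A.
assert T == {A}
```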
The set F {\displaystyle {\mathcal {F}}} of all cofinite subsets of X {\displaystyle X} (meaning those sets whose complement in X {\displaystyle X} is finite) is proper if and only if F {\displaystyle {\mathcal {F}}} is infinite (or equivalently, X {\displaystyle X} is infinite), in which case F {\displaystyle {\mathcal {F}}} is a filter on X {\displaystyle X} known as the Fréchet filter or the cofinite filter on X . {\displaystyle X.} If X {\displaystyle X} is finite then F {\displaystyle {\mathcal {F}}} is equal to the dual ideal ℘ ( X ) , {\displaystyle \wp (X),} which is not a filter. If X {\displaystyle X} is infinite then the family { X ∖ { x } : x ∈ X } {\displaystyle \{X\setminus \{x\}~:~x\in X\}} of complements of singleton sets is a filter subbase that generates the Fréchet filter on X . {\displaystyle X.} As with any family of sets over X {\displaystyle X} that contains { X ∖ { x } : x ∈ X } , {\displaystyle \{X\setminus \{x\}~:~x\in X\},} the kernel of the Fréchet filter on X {\displaystyle X} is the empty set: ker ⁡ F = ∅ . {\displaystyle \ker {\mathcal {F}}=\varnothing .} The intersection of all elements in any non-empty family F ⊆ Filters ⁡ ( X ) {\displaystyle \mathbb {F} \subseteq \operatorname {Filters} (X)} is itself a filter on X {\displaystyle X} called the infimum or greatest lower bound of F in Filters ⁡ ( X ) , {\displaystyle \mathbb {F} {\text{ in }}\operatorname {Filters} (X),} which is why it may be denoted by ⋀ F ∈ F F . {\displaystyle {\textstyle \bigwedge \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}.} Said differently, ker ⁡ F = ⋂ F ∈ F F ∈ Filters ⁡ ( X ) . {\displaystyle \ker \mathbb {F} ={\textstyle \bigcap \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}\in \operatorname {Filters} (X).} Because every filter on X {\displaystyle X} has { X } {\displaystyle \{X\}} as a subset, this intersection is never empty. 
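The infimum-as-intersection construction can be checked on a small finite example: intersecting the principal filters at {1} and at {2} yields the principal filter at {1, 2}, the finest filter contained in both. A sketch with ad hoc helpers:

```python
from itertools import combinations

def powerset(X):
    """All subsets of X as frozensets."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def principal_filter(A, X):
    """{ S subset of X : A subset of S }, the principal filter at A."""
    A = frozenset(A)
    return {S for S in powerset(X) if A <= S}

X = {1, 2, 3, 4}
F = principal_filter({1}, X)
G = principal_filter({2}, X)
infimum = F & G                   # plain set intersection of the two filters
assert infimum == principal_filter({1, 2}, X)
```

The intersection is never empty because every filter contains X itself, in line with the remark above.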
By definition, the infimum is the finest/largest (relative to ⊆ and ≤ {\displaystyle \,\subseteq \,{\text{ and }}\,\leq \,} ) filter contained as a subset of each member of F . {\displaystyle \mathbb {F} .} If B and F {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}}} are filters then their infimum in Filters ⁡ ( X ) {\displaystyle \operatorname {Filters} (X)} is the filter B ( ∪ ) F . {\displaystyle {\mathcal {B}}\,(\cup )\,{\mathcal {F}}.} If B and F {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}}} are prefilters then B ( ∪ ) F {\displaystyle {\mathcal {B}}\,(\cup )\,{\mathcal {F}}} is a prefilter that is coarser than both B and F {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}}} (that is, B ( ∪ ) F ≤ B and B ( ∪ ) F ≤ F {\displaystyle {\mathcal {B}}\,(\cup )\,{\mathcal {F}}\leq {\mathcal {B}}{\text{ and }}{\mathcal {B}}\,(\cup )\,{\mathcal {F}}\leq {\mathcal {F}}} ); indeed, it is one of the finest such prefilters, meaning that if S {\displaystyle {\mathcal {S}}} is a prefilter such that S ≤ B and S ≤ F {\displaystyle {\mathcal {S}}\leq {\mathcal {B}}{\text{ and }}{\mathcal {S}}\leq {\mathcal {F}}} then necessarily S ≤ B ( ∪ ) F . {\displaystyle {\mathcal {S}}\leq {\mathcal {B}}\,(\cup )\,{\mathcal {F}}.} More generally, if B and F {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}}} are non−empty families and if S := { S ⊆ ℘ ( X ) : S ≤ B and S ≤ F } {\displaystyle \mathbb {S} :=\{{\mathcal {S}}\subseteq \wp (X)~:~{\mathcal {S}}\leq {\mathcal {B}}{\text{ and }}{\mathcal {S}}\leq {\mathcal {F}}\}} then B ( ∪ ) F ∈ S {\displaystyle {\mathcal {B}}\,(\cup )\,{\mathcal {F}}\in \mathbb {S} } and B ( ∪ ) F {\displaystyle {\mathcal {B}}\,(\cup )\,{\mathcal {F}}} is a greatest element of ( S , ≤ ) . {\displaystyle (\mathbb {S} ,\leq ).} Let ∅ ≠ F ⊆ DualIdeals ⁡ ( X ) {\displaystyle \varnothing \neq \mathbb {F} \subseteq \operatorname {DualIdeals} (X)} and let ∪ F = ⋃ F ∈ F F . 
{\displaystyle \cup \mathbb {F} ={\textstyle \bigcup \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}.} The supremum or least upper bound of F in DualIdeals ⁡ ( X ) , {\displaystyle \mathbb {F} {\text{ in }}\operatorname {DualIdeals} (X),} denoted by ⋁ F ∈ F F , {\displaystyle {\textstyle \bigvee \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}},} is the smallest (relative to ⊆ {\displaystyle \subseteq } ) dual ideal on X {\displaystyle X} containing every element of F {\displaystyle \mathbb {F} } as a subset; that is, it is the smallest (relative to ⊆ {\displaystyle \subseteq } ) dual ideal on X {\displaystyle X} containing ∪ F {\displaystyle \cup \mathbb {F} } as a subset. This dual ideal is ⋁ F ∈ F F = π ( ∪ F ) ↑ X , {\displaystyle {\textstyle \bigvee \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}=\pi \left(\cup \mathbb {F} \right)^{\uparrow X},} where π ( ∪ F ) := { F 1 ∩ ⋯ ∩ F n : n ∈ N and every F i belongs to some F ∈ F } {\displaystyle \pi \left(\cup \mathbb {F} \right):=\left\{F_{1}\cap \cdots \cap F_{n}~:~n\in \mathbb {N} {\text{ and every }}F_{i}{\text{ belongs to some }}{\mathcal {F}}\in \mathbb {F} \right\}} is the π-system generated by ∪ F . {\displaystyle \cup \mathbb {F} .} As with any non-empty family of sets, ∪ F {\displaystyle \cup \mathbb {F} } is contained in some filter on X {\displaystyle X} if and only if it is a filter subbase, or equivalently, if and only if ⋁ F ∈ F F = π ( ∪ F ) ↑ X {\displaystyle {\textstyle \bigvee \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}=\pi \left(\cup \mathbb {F} \right)^{\uparrow X}} is a filter on X , {\displaystyle X,} in which case this family is the smallest (relative to ⊆ {\displaystyle \subseteq } ) filter on X {\displaystyle X} containing every element of F {\displaystyle \mathbb {F} } as a subset and necessarily F ⊆ Filters ⁡ ( X ) . 
{\displaystyle \mathbb {F} \subseteq \operatorname {Filters} (X).} Let ∅ ≠ F ⊆ Filters ⁡ ( X ) {\displaystyle \varnothing \neq \mathbb {F} \subseteq \operatorname {Filters} (X)} and let ∪ F = ⋃ F ∈ F F . {\displaystyle \cup \mathbb {F} ={\textstyle \bigcup \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}.} The supremum or least upper bound of F in Filters ⁡ ( X ) , {\displaystyle \mathbb {F} {\text{ in }}\operatorname {Filters} (X),} denoted by ⋁ F ∈ F F {\displaystyle {\textstyle \bigvee \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}} if it exists, is by definition the smallest (relative to ⊆ {\displaystyle \subseteq } ) filter on X {\displaystyle X} containing every element of F {\displaystyle \mathbb {F} } as a subset. If it exists then necessarily ⋁ F ∈ F F = π ( ∪ F ) ↑ X {\displaystyle {\textstyle \bigvee \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}=\pi \left(\cup \mathbb {F} \right)^{\uparrow X}} (as defined above) and ⋁ F ∈ F F {\displaystyle {\textstyle \bigvee \limits _{{\mathcal {F}}\in \mathbb {F} }}{\mathcal {F}}} will also be equal to the intersection of all filters on X {\displaystyle X} containing ∪ F . {\displaystyle \cup \mathbb {F} .} This supremum of F in Filters ⁡ ( X ) {\displaystyle \mathbb {F} {\text{ in }}\operatorname {Filters} (X)} exists if and only if the dual ideal π ( ∪ F ) ↑ X {\displaystyle \pi \left(\cup \mathbb {F} \right)^{\uparrow X}} is a filter on X . {\displaystyle X.} The least upper bound of a family of filters F {\displaystyle \mathbb {F} } may fail to be a filter. Indeed, if X {\displaystyle X} contains at least two distinct elements then there exist filters B and C on X {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}{\text{ on }}X} for which there does not exist a filter F on X {\displaystyle {\mathcal {F}}{\text{ on }}X} that contains both B and C . 
{\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}.} If ∪ F {\displaystyle \cup \mathbb {F} } is not a filter subbase then the supremum of F in Filters ⁡ ( X ) {\displaystyle \mathbb {F} {\text{ in }}\operatorname {Filters} (X)} does not exist and the same is true of its supremum in Prefilters ⁡ ( X ) {\displaystyle \operatorname {Prefilters} (X)} but their supremum in the set of all dual ideals on X {\displaystyle X} will exist (it being the degenerate filter ℘ ( X ) {\displaystyle \wp (X)} ). If B and F {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}}} are prefilters (resp. filters on X {\displaystyle X} ) then B ( ∩ ) F {\displaystyle {\mathcal {B}}\,(\cap )\,{\mathcal {F}}} is a prefilter (resp. a filter) if and only if it is non-degenerate (or said differently, if and only if B and F {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}}} mesh), in which case it is one of the coarsest prefilters (resp. the coarsest filter) on X {\displaystyle X} that is finer (with respect to ≤ {\displaystyle \,\leq } ) than both B and F ; {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}};} this means that if S {\displaystyle {\mathcal {S}}} is any prefilter (resp. any filter) such that B ≤ S and F ≤ S {\displaystyle {\mathcal {B}}\leq {\mathcal {S}}{\text{ and }}{\mathcal {F}}\leq {\mathcal {S}}} then necessarily B ( ∩ ) F ≤ S , {\displaystyle {\mathcal {B}}\,(\cap )\,{\mathcal {F}}\leq {\mathcal {S}},} in which case it is denoted by B ∨ F . {\displaystyle {\mathcal {B}}\vee {\mathcal {F}}.} Other examples Let X = { p , 1 , 2 , 3 } {\displaystyle X=\{p,1,2,3\}} and let B = { { p } , { p , 1 , 2 } , { p , 1 , 3 } } , {\displaystyle {\mathcal {B}}=\{\{p\},\{p,1,2\},\{p,1,3\}\},} which makes B {\displaystyle {\mathcal {B}}} a prefilter and a filter subbase that is not closed under finite intersections. Because B {\displaystyle {\mathcal {B}}} is a prefilter, the smallest prefilter containing B {\displaystyle {\mathcal {B}}} is B . 
{\displaystyle {\mathcal {B}}.} The π-system generated by B {\displaystyle {\mathcal {B}}} is { { p , 1 } } ∪ B . {\displaystyle \{\{p,1\}\}\cup {\mathcal {B}}.} In particular, the smallest prefilter containing the filter subbase B {\displaystyle {\mathcal {B}}} is not equal to the set of all finite intersections of sets in B . {\displaystyle {\mathcal {B}}.} The filter on X {\displaystyle X} generated by B {\displaystyle {\mathcal {B}}} is B ↑ X = { S ⊆ X : p ∈ S } = { { p } ∪ T : T ⊆ { 1 , 2 , 3 } } . {\displaystyle {\mathcal {B}}^{\uparrow X}=\{S\subseteq X:p\in S\}=\{\{p\}\cup T~:~T\subseteq \{1,2,3\}\}.} All three of B , {\displaystyle {\mathcal {B}},} the π-system B {\displaystyle {\mathcal {B}}} generates, and B ↑ X {\displaystyle {\mathcal {B}}^{\uparrow X}} are examples of fixed, principal, ultra prefilters that are principal at the point p ; B ↑ X {\displaystyle p;{\mathcal {B}}^{\uparrow X}} is also an ultrafilter on X . {\displaystyle X.} Let ( X , τ ) {\displaystyle (X,\tau )} be a topological space, B ⊆ ℘ ( X ) , {\displaystyle {\mathcal {B}}\subseteq \wp (X),} and define B ¯ := { cl X ⁡ B : B ∈ B } , {\displaystyle {\overline {\mathcal {B}}}:=\left\{\operatorname {cl} _{X}B~:~B\in {\mathcal {B}}\right\},} where B {\displaystyle {\mathcal {B}}} is necessarily finer than B ¯ . {\displaystyle {\overline {\mathcal {B}}}.} If B {\displaystyle {\mathcal {B}}} is non-empty (resp. non-degenerate, a filter subbase, a prefilter, closed under finite unions) then the same is true of B ¯ . {\displaystyle {\overline {\mathcal {B}}}.} If B {\displaystyle {\mathcal {B}}} is a filter on X {\displaystyle X} then B ¯ {\displaystyle {\overline {\mathcal {B}}}} is a prefilter but not necessarily a filter on X {\displaystyle X} although ( B ¯ ) ↑ X {\displaystyle \left({\overline {\mathcal {B}}}\right)^{\uparrow X}} is a filter on X {\displaystyle X} equivalent to B ¯ . 
{\displaystyle {\overline {\mathcal {B}}}.} The set B {\displaystyle {\mathcal {B}}} of all dense open subsets of a (non-empty) topological space X {\displaystyle X} is a proper π-system and so also a prefilter. If the space is a Baire space, then the set of all countable intersections of dense open subsets is a π-system and a prefilter that is finer than B . {\displaystyle {\mathcal {B}}.} If X = R n {\displaystyle X=\mathbb {R} ^{n}} (with 1 ≤ n ∈ N {\displaystyle 1\leq n\in \mathbb {N} } ) then the set B LebFinite {\displaystyle {\mathcal {B}}_{\operatorname {LebFinite} }} of all B ∈ B {\displaystyle B\in {\mathcal {B}}} such that B {\displaystyle B} has finite Lebesgue measure is a proper π-system and a free prefilter that is also a proper subset of B . {\displaystyle {\mathcal {B}}.} The prefilters B LebFinite {\displaystyle {\mathcal {B}}_{\operatorname {LebFinite} }} and B {\displaystyle {\mathcal {B}}} are equivalent and so generate the same filter on X . {\displaystyle X.} Since X {\displaystyle X} is a Baire space, every countable intersection of sets in B LebFinite {\displaystyle {\mathcal {B}}_{\operatorname {LebFinite} }} is dense in X {\displaystyle X} (and also comeagre and non-meager) so the set of all countable intersections of elements of B LebFinite {\displaystyle {\mathcal {B}}_{\operatorname {LebFinite} }} is a prefilter and π-system; it is also finer than, and not equivalent to, B LebFinite . {\displaystyle {\mathcal {B}}_{\operatorname {LebFinite} }.} ==== Ultrafilters ==== There are many other characterizations of "ultrafilter" and "ultra prefilter," which are listed in the article on ultrafilters. Important properties of ultrafilters are also described in that article. The ultrafilter lemma The following important theorem is due to Alfred Tarski (1930). A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it. 
The ultrafilter lemma states that every filter on a set X {\displaystyle X} is a subset of some ultrafilter on X . {\displaystyle X.} Assuming the axioms of Zermelo–Fraenkel (ZF), the ultrafilter lemma follows from the axiom of choice (in particular, from Zorn's lemma) but is strictly weaker than it. The ultrafilter lemma implies the axiom of choice for finite sets. If only dealing with Hausdorff spaces, then most basic results (as encountered in introductory courses) in topology (such as Tychonoff's theorem for compact Hausdorff spaces and the Alexander subbase theorem) and in functional analysis (such as the Hahn–Banach theorem) can be proven using only the ultrafilter lemma; the full strength of the axiom of choice might not be needed.

==== Kernels ====

The kernel of a family of sets B , {\displaystyle {\mathcal {B}},} defined as ker ⁡ B := ⋂ B ∈ B B , {\displaystyle \ker {\mathcal {B}}:={\textstyle \bigcap \limits _{B\in {\mathcal {B}}}}B,} is useful in classifying properties of prefilters and other families of sets. If B ⊆ ℘ ( X ) {\displaystyle {\mathcal {B}}\subseteq \wp (X)} then ker ⁡ ( B ↑ X ) = ker ⁡ B {\displaystyle \ker \left({\mathcal {B}}^{\uparrow X}\right)=\ker {\mathcal {B}}} and this set is also equal to the kernel of the π-system that is generated by B . {\displaystyle {\mathcal {B}}.} In particular, if B {\displaystyle {\mathcal {B}}} is a filter subbase then the kernels of all of the following sets are equal: (1) B , {\displaystyle {\mathcal {B}},} (2) the π-system generated by B , {\displaystyle {\mathcal {B}},} and (3) the filter generated by B . {\displaystyle {\mathcal {B}}.} If f {\displaystyle f} is a map then f ( ker ⁡ B ) ⊆ ker ⁡ f ( B ) and f − 1 ( ker ⁡ B ) = ker ⁡ f − 1 ( B ) . {\displaystyle f(\ker {\mathcal {B}})\subseteq \ker f({\mathcal {B}}){\text{ and }}f^{-1}(\ker {\mathcal {B}})=\ker f^{-1}({\mathcal {B}}).} Equivalent families have equal kernels. Two principal families are equivalent if and only if their kernels are equal.
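The kernel identities are easy to test exhaustively on finite families. A sketch (ad hoc helper names) checking that taking the upward closure preserves the kernel:

```python
from functools import reduce
from itertools import combinations

def powerset(X):
    """All subsets of X as frozensets."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def kernel(family):
    """ker B: the intersection of all members of the family."""
    return reduce(lambda A, C: A & C, family)

X = {1, 2, 3, 4}
B = {frozenset({1, 2}), frozenset({1, 3})}
upward = {S for S in powerset(X) if any(A <= S for A in B)}  # B^{upward X}
assert kernel(B) == frozenset({1})
assert kernel(upward) == kernel(B)   # adding supersets cannot shrink the kernel
```

The second assertion holds because every added set is a superset of an original member, so it cannot remove a point that every original member already contains.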
===== Classifying families by their kernels ===== If B {\displaystyle {\mathcal {B}}} is a principal filter on X {\displaystyle X} then ∅ ≠ ker ⁡ B ∈ B {\displaystyle \varnothing \neq \ker {\mathcal {B}}\in {\mathcal {B}}} and B = { ker ⁡ B } ↑ X {\displaystyle {\mathcal {B}}=\{\ker {\mathcal {B}}\}^{\uparrow X}} and { ker ⁡ B } {\displaystyle \{\ker {\mathcal {B}}\}} is also the smallest prefilter that generates B . {\displaystyle {\mathcal {B}}.} Family of examples: For any non-empty C ⊆ R , {\displaystyle C\subseteq \mathbb {R} ,} the family B C = { R ∖ ( r + C ) : r ∈ R } {\displaystyle {\mathcal {B}}_{C}=\{\mathbb {R} \setminus (r+C)~:~r\in \mathbb {R} \}} is free but it is a filter subbase if and only if no finite union of the form ( r 1 + C ) ∪ ⋯ ∪ ( r n + C ) {\displaystyle \left(r_{1}+C\right)\cup \cdots \cup \left(r_{n}+C\right)} covers R , {\displaystyle \mathbb {R} ,} in which case the filter that it generates will also be free. In particular, B C {\displaystyle {\mathcal {B}}_{C}} is a filter subbase if C {\displaystyle C} is countable (for example, C = Q , Z , {\displaystyle C=\mathbb {Q} ,\mathbb {Z} ,} the primes), a meager set in R , {\displaystyle \mathbb {R} ,} a set of finite measure, or a bounded subset of R . {\displaystyle \mathbb {R} .} If C {\displaystyle C} is a singleton set then B C {\displaystyle {\mathcal {B}}_{C}} is a subbase for the Fréchet filter on R . {\displaystyle \mathbb {R} .} ===== Characterizing fixed ultra prefilters ===== If a family of sets B {\displaystyle {\mathcal {B}}} is fixed (that is, ker ⁡ B ≠ ∅ {\displaystyle \ker {\mathcal {B}}\neq \varnothing } ) then B {\displaystyle {\mathcal {B}}} is ultra if and only if some element of B {\displaystyle {\mathcal {B}}} is a singleton set, in which case B {\displaystyle {\mathcal {B}}} will necessarily be a prefilter. 
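The earlier worked example with X = {p, 1, 2, 3} and B = {{p}, {p,1,2}, {p,1,3}} fits this characterization: B is fixed and contains the singleton {p}, so it is an ultra prefilter, and its upward closure is the principal ultrafilter of all sets containing p. A computational sketch verifying this, together with the π-system the example describes:

```python
from itertools import combinations

def powerset(X):
    """All subsets of X as frozensets."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

X = {"p", 1, 2, 3}
B = {frozenset({"p"}), frozenset({"p", 1, 2}), frozenset({"p", 1, 3})}

assert frozenset.intersection(*B) == frozenset({"p"})  # fixed: kernel is {p}
assert any(len(A) == 1 for A in B)       # contains a singleton, hence ultra

# pi-system generated by B: close under pairwise intersections.
pi = set(B)
while True:
    new = pi | {A & C for A in pi for C in pi}
    if new == pi:
        break
    pi = new
assert pi == B | {frozenset({"p", 1})}   # gains exactly the set {p, 1}

# Upward closure of B in X is the principal ultrafilter at p.
up = {S for S in powerset(X) if any(A <= S for A in B)}
assert up == {S for S in powerset(X) if "p" in S}
```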
Every principal prefilter is fixed, so a principal prefilter B {\displaystyle {\mathcal {B}}} is ultra if and only if ker ⁡ B {\displaystyle \ker {\mathcal {B}}} is a singleton set. Every filter on X {\displaystyle X} that is principal at a single point is an ultrafilter, and if in addition X {\displaystyle X} is finite, then there are no ultrafilters on X {\displaystyle X} other than these. The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point. === Finer/coarser, subordination, and meshing === The preorder ≤ {\displaystyle \,\leq \,} that is defined below is of fundamental importance for the use of prefilters (and filters) in topology. For instance, this preorder is used to define the prefilter equivalent of "subsequence", where " F ≥ C {\displaystyle {\mathcal {F}}\geq {\mathcal {C}}} " can be interpreted as " F {\displaystyle {\mathcal {F}}} is a subsequence of C {\displaystyle {\mathcal {C}}} " (so "subordinate to" is the prefilter equivalent of "subsequence of"). It is also used to define prefilter convergence in a topological space. The definition of B {\displaystyle {\mathcal {B}}} meshes with C , {\displaystyle {\mathcal {C}},} which is closely related to the preorder ≤ , {\displaystyle \,\leq ,} is used in topology to define cluster points. Two families of sets B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} mesh and are compatible, indicated by writing B # C , {\displaystyle {\mathcal {B}}\#{\mathcal {C}},} if B ∩ C ≠ ∅ for all B ∈ B and C ∈ C . {\displaystyle B\cap C\neq \varnothing {\text{ for all }}B\in {\mathcal {B}}{\text{ and }}C\in {\mathcal {C}}.} If B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} do not mesh then they are dissociated. 
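The mesh relation is a simple pairwise-intersection condition, so it can be illustrated directly on finite families (the function name `mesh` is chosen here for readability; it is not a standard API):

```python
def mesh(B, C):
    """B # C: every member of B intersects every member of C."""
    return all(set(b) & set(c) for b in B for c in C)

B = [frozenset({1, 2}), frozenset({2, 3})]
C = [frozenset({2}), frozenset({2, 4})]
D = [frozenset({5})]

assert mesh(B, C)        # every pair of members shares the point 2
assert not mesh(B, D)    # B and D are dissociated
```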
If S ⊆ X and B ⊆ ℘ ( X ) {\displaystyle S\subseteq X{\text{ and }}{\mathcal {B}}\subseteq \wp (X)} then B and S {\displaystyle {\mathcal {B}}{\text{ and }}S} are said to mesh if B and { S } {\displaystyle {\mathcal {B}}{\text{ and }}\{S\}} mesh, or equivalently, if the trace of B on S , {\displaystyle {\mathcal {B}}{\text{ on }}S,} which is the family B | S = { B ∩ S : B ∈ B } , {\displaystyle {\mathcal {B}}{\big \vert }_{S}=\{B\cap S~:~B\in {\mathcal {B}}\},} does not contain the empty set, where the trace is also called the restriction of B to S . {\displaystyle {\mathcal {B}}{\text{ to }}S.} Example: If x i ∙ = ( x i n ) n = 1 ∞ {\displaystyle x_{i_{\bullet }}=\left(x_{i_{n}}\right)_{n=1}^{\infty }} is a subsequence of x ∙ = ( x i ) i = 1 ∞ {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }} then Tails ⁡ ( x i ∙ ) {\displaystyle \operatorname {Tails} \left(x_{i_{\bullet }}\right)} is subordinate to Tails ⁡ ( x ∙ ) ; {\displaystyle \operatorname {Tails} \left(x_{\bullet }\right);} in symbols: Tails ⁡ ( x i ∙ ) ⊢ Tails ⁡ ( x ∙ ) {\displaystyle \operatorname {Tails} \left(x_{i_{\bullet }}\right)\vdash \operatorname {Tails} \left(x_{\bullet }\right)} and also Tails ⁡ ( x ∙ ) ≤ Tails ⁡ ( x i ∙ ) . {\displaystyle \operatorname {Tails} \left(x_{\bullet }\right)\leq \operatorname {Tails} \left(x_{i_{\bullet }}\right).} Stated in plain English, the prefilter of tails of a subsequence is always subordinate to that of the original sequence. To see this, let C := x ≥ i ∈ Tails ⁡ ( x ∙ ) {\displaystyle C:=x_{\geq i}\in \operatorname {Tails} \left(x_{\bullet }\right)} be arbitrary (or equivalently, let i ∈ N {\displaystyle i\in \mathbb {N} } be arbitrary) and it remains to show that this set contains some F := x i ≥ n ∈ Tails ⁡ ( x i ∙ ) . 
{\displaystyle F:=x_{i_{\geq n}}\in \operatorname {Tails} \left(x_{i_{\bullet }}\right).} For the set x ≥ i = { x i , x i + 1 , … } {\displaystyle x_{\geq i}=\left\{x_{i},x_{i+1},\ldots \right\}} to contain x i ≥ n = { x i n , x i n + 1 , … } , {\displaystyle x_{i_{\geq n}}=\left\{x_{i_{n}},x_{i_{n+1}},\ldots \right\},} it is sufficient to have i ≤ i n . {\displaystyle i\leq i_{n}.} Since i 1 < i 2 < ⋯ {\displaystyle i_{1}<i_{2}<\cdots } are strictly increasing integers, there exists n ∈ N {\displaystyle n\in \mathbb {N} } such that i n ≥ i , {\displaystyle i_{n}\geq i,} and so x ≥ i ⊇ x i ≥ n {\displaystyle x_{\geq i}\supseteq x_{i_{\geq n}}} holds, as desired. Consequently, TailsFilter ⁡ ( x ∙ ) ⊆ TailsFilter ⁡ ( x i ∙ ) . {\displaystyle \operatorname {TailsFilter} \left(x_{\bullet }\right)\subseteq \operatorname {TailsFilter} \left(x_{i_{\bullet }}\right).} The left hand side will be a strict/proper subset of the right hand side if (for instance) every point of x ∙ {\displaystyle x_{\bullet }} is unique (that is, when x ∙ : N → X {\displaystyle x_{\bullet }:\mathbb {N} \to X} is injective) and x i ∙ {\displaystyle x_{i_{\bullet }}} is the even-indexed subsequence ( x 2 , x 4 , x 6 , … ) {\displaystyle \left(x_{2},x_{4},x_{6},\ldots \right)} because under these conditions, every tail x i ≥ n = { x 2 n , x 2 n + 2 , x 2 n + 4 , … } {\displaystyle x_{i_{\geq n}}=\left\{x_{2n},x_{2n+2},x_{2n+4},\ldots \right\}} (for every n ∈ N {\displaystyle n\in \mathbb {N} } ) of the subsequence will belong to the right hand side filter but not to the left hand side filter. For another example, if B {\displaystyle {\mathcal {B}}} is any family then ∅ ≤ B ≤ B ≤ { ∅ } {\displaystyle \varnothing \leq {\mathcal {B}}\leq {\mathcal {B}}\leq \{\varnothing \}} always holds and furthermore, { ∅ } ≤ B if and only if ∅ ∈ B . 
{\displaystyle \{\varnothing \}\leq {\mathcal {B}}{\text{ if and only if }}\varnothing \in {\mathcal {B}}.} A non-empty family that is coarser than a filter subbase must itself be a filter subbase. Every filter subbase is coarser than both the π-system that it generates and the filter that it generates. If C and F {\displaystyle {\mathcal {C}}{\text{ and }}{\mathcal {F}}} are families such that C ≤ F , {\displaystyle {\mathcal {C}}\leq {\mathcal {F}},} the family C {\displaystyle {\mathcal {C}}} is ultra, and ∅ ∉ F , {\displaystyle \varnothing \not \in {\mathcal {F}},} then F {\displaystyle {\mathcal {F}}} is necessarily ultra. It follows that any family that is equivalent to an ultra family will necessarily be ultra. In particular, if C {\displaystyle {\mathcal {C}}} is a prefilter then either both C {\displaystyle {\mathcal {C}}} and the filter C ↑ X {\displaystyle {\mathcal {C}}^{\uparrow X}} it generates are ultra or neither one is ultra. The relation ≤ {\displaystyle \,\leq \,} is reflexive and transitive, which makes it into a preorder on ℘ ( ℘ ( X ) ) . {\displaystyle \wp (\wp (X)).} The relation ≤ on Filters ⁡ ( X ) {\displaystyle \,\leq \,{\text{ on }}\operatorname {Filters} (X)} is antisymmetric but if X {\displaystyle X} has more than one point then it is not symmetric. ==== Equivalent families of sets ==== The preorder ≤ {\displaystyle \,\leq \,} induces its canonical equivalence relation on ℘ ( ℘ ( X ) ) , {\displaystyle \wp (\wp (X)),} where for all B , C ∈ ℘ ( ℘ ( X ) ) , {\displaystyle {\mathcal {B}},{\mathcal {C}}\in \wp (\wp (X)),} B {\displaystyle {\mathcal {B}}} is equivalent to C {\displaystyle {\mathcal {C}}} if any of the following equivalent conditions hold: C ≤ B and B ≤ C . {\displaystyle {\mathcal {C}}\leq {\mathcal {B}}{\text{ and }}{\mathcal {B}}\leq {\mathcal {C}}.} The upward closures of C and B {\displaystyle {\mathcal {C}}{\text{ and }}{\mathcal {B}}} are equal. 
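The equivalence just defined (mutual subordination, or equivalently equal upward closures) can be checked mechanically on finite families; the helpers below are illustrative sketches under that brute-force approach:

```python
from itertools import chain, combinations

def subordinate(C, F):
    """C <= F: every member of C contains some member of F."""
    return all(any(set(f) <= set(c) for f in F) for c in C)

def upward_closure(family, X):
    """All subsets of X containing some member of the family."""
    all_subsets = chain.from_iterable(combinations(sorted(X), r)
                                      for r in range(len(X) + 1))
    return {frozenset(S) for S in all_subsets
            if any(set(B) <= set(S) for B in family)}

X = {1, 2, 3}
B = [frozenset({1}), frozenset({1, 2})]
C = [frozenset({1})]

# B and C are equivalent: each is subordinate to the other ...
assert subordinate(B, C) and subordinate(C, B)
# ... equivalently, their upward closures in X coincide
assert upward_closure(B, X) == upward_closure(C, X)
```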
Two upward closed (in X {\displaystyle X} ) subsets of ℘ ( X ) {\displaystyle \wp (X)} are equivalent if and only if they are equal. If B ⊆ ℘ ( X ) {\displaystyle {\mathcal {B}}\subseteq \wp (X)} then necessarily ∅ ≤ B ≤ ℘ ( X ) {\displaystyle \varnothing \leq {\mathcal {B}}\leq \wp (X)} and B {\displaystyle {\mathcal {B}}} is equivalent to B ↑ X . {\displaystyle {\mathcal {B}}^{\uparrow X}.} Every equivalence class other than { ∅ } {\displaystyle \{\varnothing \}} contains a unique representative (that is, element of the equivalence class) that is upward closed in X . {\displaystyle X.} Properties preserved between equivalent families Let B , C ∈ ℘ ( ℘ ( X ) ) {\displaystyle {\mathcal {B}},{\mathcal {C}}\in \wp (\wp (X))} be arbitrary and let F {\displaystyle {\mathcal {F}}} be any family of sets. If B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} are equivalent (which implies that ker ⁡ B = ker ⁡ C {\displaystyle \ker {\mathcal {B}}=\ker {\mathcal {C}}} ) then for each of the statements/properties listed below, either it is true of both B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} or else it is false of both B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} : Not empty Proper (that is, ∅ {\displaystyle \varnothing } is not an element) Moreover, any two degenerate families are necessarily equivalent. Filter subbase Prefilter In which case B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} generate the same filter on X {\displaystyle X} (that is, their upward closures in X {\displaystyle X} are equal). Free Principal Ultra Is equal to the trivial filter { X } {\displaystyle \{X\}} In words, this means that the only subset of ℘ ( X ) {\displaystyle \wp (X)} that is equivalent to the trivial filter is the trivial filter. In general, this conclusion of equality does not extend to non−trivial filters (one exception is when both families are filters). 
Meshes with F {\displaystyle {\mathcal {F}}} Is finer than F {\displaystyle {\mathcal {F}}} Is coarser than F {\displaystyle {\mathcal {F}}} Is equivalent to F {\displaystyle {\mathcal {F}}} Missing from the above list is the word "filter" because this property is not preserved by equivalence. However, if B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} are filters on X , {\displaystyle X,} then they are equivalent if and only if they are equal; this characterization does not extend to prefilters. Equivalence of prefilters and filter subbases If B {\displaystyle {\mathcal {B}}} is a prefilter on X {\displaystyle X} then the following families are always equivalent to each other: B {\displaystyle {\mathcal {B}}} ; the π-system generated by B {\displaystyle {\mathcal {B}}} ; the filter on X {\displaystyle X} generated by B {\displaystyle {\mathcal {B}}} ; and moreover, these three families all generate the same filter on X {\displaystyle X} (that is, the upward closures in X {\displaystyle X} of these families are equal). In particular, every prefilter is equivalent to the filter that it generates. By transitivity, two prefilters are equivalent if and only if they generate the same filter. Every prefilter is equivalent to exactly one filter on X , {\displaystyle X,} which is the filter that it generates (that is, the prefilter's upward closure). Said differently, every equivalence class of prefilters contains exactly one representative that is a filter. In this way, filters can be considered as just being distinguished elements of these equivalence classes of prefilters. A filter subbase that is not also a prefilter cannot be equivalent to the prefilter (or filter) that it generates. In contrast, every prefilter is equivalent to the filter that it generates. This is why prefilters can, by and large, be used interchangeably with the filters that they generate while filter subbases cannot. 
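The contrast between filter subbases and prefilters can be made concrete: below, a filter subbase that is not a prefilter fails to be equivalent to the filter it generates, while a prefilter generating the same filter is equivalent to it. The helper `generated_filter` (an illustrative name) builds the filter by upward-closing the π-system of finite intersections:

```python
from itertools import chain, combinations

def subordinate(C, F):
    """C <= F: every member of C contains some member of F."""
    return all(any(set(f) <= set(c) for f in F) for c in C)

def generated_filter(family, X):
    """Upward closure in X of all finite intersections of members."""
    pi = [set.intersection(*[set(s) for s in comb])
          for r in range(1, len(family) + 1)
          for comb in combinations(family, r)]
    all_subsets = chain.from_iterable(combinations(sorted(X), r)
                                      for r in range(len(X) + 1))
    return [frozenset(S) for S in all_subsets
            if any(p <= set(S) for p in pi)]

X = {1, 2, 3}
S = [frozenset({1, 2}), frozenset({2, 3})]   # filter subbase, not a prefilter
F = generated_filter(S, X)

assert subordinate(S, F)       # S is coarser than the filter it generates
assert not subordinate(F, S)   # not equivalent: {2} in F contains no member of S

P = [frozenset({2})]           # a prefilter generating the same filter
assert subordinate(P, F) and subordinate(F, P)   # P is equivalent to F
```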
== Set theoretic properties and constructions relevant to topology == === Trace and meshing === If B {\displaystyle {\mathcal {B}}} is a prefilter (resp. filter) on X and S ⊆ X {\displaystyle X{\text{ and }}S\subseteq X} then the trace of B on S , {\displaystyle {\mathcal {B}}{\text{ on }}S,} which is the family B | S := B ( ∩ ) { S } , {\displaystyle {\mathcal {B}}{\big \vert }_{S}:={\mathcal {B}}(\cap )\{S\},} is a prefilter (resp. a filter) if and only if B and S {\displaystyle {\mathcal {B}}{\text{ and }}S} mesh (that is, ∅ ∉ B ( ∩ ) { S } {\displaystyle \varnothing \not \in {\mathcal {B}}(\cap )\{S\}} ), in which case the trace of B on S {\displaystyle {\mathcal {B}}{\text{ on }}S} is said to be induced by S {\displaystyle S} . The trace is always finer than the original family; that is, B ≤ B | S . {\displaystyle {\mathcal {B}}\leq {\mathcal {B}}{\big \vert }_{S}.} If B {\displaystyle {\mathcal {B}}} is ultra and if B and S {\displaystyle {\mathcal {B}}{\text{ and }}S} mesh then the trace B | S {\displaystyle {\mathcal {B}}{\big \vert }_{S}} is ultra. If B {\displaystyle {\mathcal {B}}} is an ultrafilter on X {\displaystyle X} then the trace of B on S {\displaystyle {\mathcal {B}}{\text{ on }}S} is a filter on S {\displaystyle S} if and only if S ∈ B . {\displaystyle S\in {\mathcal {B}}.} For example, suppose that B {\displaystyle {\mathcal {B}}} is a filter on X and S ⊆ X {\displaystyle X{\text{ and }}S\subseteq X} is such that S ≠ X and X ∖ S ∉ B . {\displaystyle S\neq X{\text{ and }}X\setminus S\not \in {\mathcal {B}}.} Then B and S {\displaystyle {\mathcal {B}}{\text{ and }}S} mesh and B ∪ { S } {\displaystyle {\mathcal {B}}\cup \{S\}} generates a filter on X {\displaystyle X} that is strictly finer than B . 
{\displaystyle {\mathcal {B}}.} When prefilters mesh Given non-empty families B and C , {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}},} the family B ( ∩ ) C := { B ∩ C : B ∈ B and C ∈ C } {\displaystyle {\mathcal {B}}(\cap ){\mathcal {C}}:=\{B\cap C~:~B\in {\mathcal {B}}{\text{ and }}C\in {\mathcal {C}}\}} satisfies C ≤ B ( ∩ ) C {\displaystyle {\mathcal {C}}\leq {\mathcal {B}}(\cap ){\mathcal {C}}} and B ≤ B ( ∩ ) C . {\displaystyle {\mathcal {B}}\leq {\mathcal {B}}(\cap ){\mathcal {C}}.} If B ( ∩ ) C {\displaystyle {\mathcal {B}}(\cap ){\mathcal {C}}} is proper (resp. a prefilter, a filter subbase) then this is also true of both B and C . {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}.} In order to make any meaningful deductions about B ( ∩ ) C {\displaystyle {\mathcal {B}}(\cap ){\mathcal {C}}} from B and C , B ( ∩ ) C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}},{\mathcal {B}}(\cap ){\mathcal {C}}} needs to be proper (that is, ∅ ∉ B ( ∩ ) C {\displaystyle \varnothing \not \in {\mathcal {B}}(\cap ){\mathcal {C}}} ), which is the motivation for the definition of "mesh". In this case, B ( ∩ ) C {\displaystyle {\mathcal {B}}(\cap ){\mathcal {C}}} is a prefilter (resp. filter subbase) if and only if this is true of both B and C . {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}.} Said differently, if B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} are prefilters then they mesh if and only if B ( ∩ ) C {\displaystyle {\mathcal {B}}(\cap ){\mathcal {C}}} is a prefilter. Generalizing gives a well known characterization of "mesh" entirely in terms of subordination (that is, ≤ {\displaystyle \,\leq \,} ): Two prefilters (resp. filter subbases) B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} mesh if and only if there exists a prefilter (resp. filter subbase) F {\displaystyle {\mathcal {F}}} such that C ≤ F {\displaystyle {\mathcal {C}}\leq {\mathcal {F}}} and B ≤ F . 
{\displaystyle {\mathcal {B}}\leq {\mathcal {F}}.} If the least upper bound of two filters B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} exists in Filters ⁡ ( X ) {\displaystyle \operatorname {Filters} (X)} then this least upper bound is equal to B ( ∩ ) C . {\displaystyle {\mathcal {B}}(\cap ){\mathcal {C}}.} === Images and preimages under functions === Throughout, f : X → Y and g : Y → Z {\displaystyle f:X\to Y{\text{ and }}g:Y\to Z} will be maps between non-empty sets. Images of prefilters Let B ⊆ ℘ ( Y ) . {\displaystyle {\mathcal {B}}\subseteq \wp (Y).} Many of the properties that B {\displaystyle {\mathcal {B}}} may have are preserved under images of maps; notable exceptions include being upward closed, being closed under finite intersections, and being a filter, which are not necessarily preserved. Explicitly, if one of the following properties is true of B on Y , {\displaystyle {\mathcal {B}}{\text{ on }}Y,} then it will necessarily also be true of g ( B ) on g ( Y ) {\displaystyle g({\mathcal {B}}){\text{ on }}g(Y)} (although possibly not on the codomain Z {\displaystyle Z} unless g {\displaystyle g} is surjective): ultra, ultrafilter, filter, prefilter, filter subbase, dual ideal, upward closed, proper/non-degenerate, ideal, closed under finite unions, downward closed, directed upward. Moreover, if B ⊆ ℘ ( Y ) {\displaystyle {\mathcal {B}}\subseteq \wp (Y)} is a prefilter then so are both g ( B ) and g − 1 ( g ( B ) ) . {\displaystyle g({\mathcal {B}}){\text{ and }}g^{-1}(g({\mathcal {B}})).} The image under a map f : X → Y {\displaystyle f:X\to Y} of an ultra set B ⊆ ℘ ( X ) {\displaystyle {\mathcal {B}}\subseteq \wp (X)} is again ultra and if B {\displaystyle {\mathcal {B}}} is an ultra prefilter then so is f ( B ) . 
{\displaystyle f({\mathcal {B}}).} If B {\displaystyle {\mathcal {B}}} is a filter then g ( B ) {\displaystyle g({\mathcal {B}})} is a filter on the range g ( Y ) , {\displaystyle g(Y),} but it is a filter on the codomain Z {\displaystyle Z} if and only if g {\displaystyle g} is surjective. Otherwise it is just a prefilter on Z {\displaystyle Z} and its upward closure must be taken in Z {\displaystyle Z} to obtain a filter. The upward closure of g ( B ) in Z {\displaystyle g({\mathcal {B}}){\text{ in }}Z} is g ( B ) ↑ Z = { S ⊆ Z : B ⊆ g − 1 ( S ) for some B ∈ B } {\displaystyle g({\mathcal {B}})^{\uparrow Z}=\left\{S\subseteq Z~:~B\subseteq g^{-1}(S){\text{ for some }}B\in {\mathcal {B}}\right\}} where if B {\displaystyle {\mathcal {B}}} is upward closed in Y {\displaystyle Y} (that is, a filter) then this simplifies to: g ( B ) ↑ Z = { S ⊆ Z : g − 1 ( S ) ∈ B } . {\displaystyle g({\mathcal {B}})^{\uparrow Z}=\left\{S\subseteq Z~:~g^{-1}(S)\in {\mathcal {B}}\right\}.} If X ⊆ Y {\displaystyle X\subseteq Y} then taking g {\displaystyle g} to be the inclusion map X → Y {\displaystyle X\to Y} shows that any prefilter (resp. ultra prefilter, filter subbase) on X {\displaystyle X} is also a prefilter (resp. ultra prefilter, filter subbase) on Y . {\displaystyle Y.} Preimages of prefilters Let B ⊆ ℘ ( Y ) . {\displaystyle {\mathcal {B}}\subseteq \wp (Y).} Under the assumption that f : X → Y {\displaystyle f:X\to Y} is surjective: f − 1 ( B ) {\displaystyle f^{-1}({\mathcal {B}})} is a prefilter (resp. filter subbase, π-system, closed under finite unions, proper) if and only if this is true of B . 
{\displaystyle {\mathcal {B}}.} However, if B {\displaystyle {\mathcal {B}}} is an ultrafilter on Y {\displaystyle Y} then even if f {\displaystyle f} is surjective (which would make f − 1 ( B ) {\displaystyle f^{-1}({\mathcal {B}})} a prefilter), it is nevertheless still possible for the prefilter f − 1 ( B ) {\displaystyle f^{-1}({\mathcal {B}})} to be neither ultra nor a filter on X . {\displaystyle X.}   If f : X → Y {\displaystyle f:X\to Y} is not surjective then denote the trace of B on f ( X ) {\displaystyle {\mathcal {B}}{\text{ on }}f(X)} by B | f ( X ) , {\displaystyle {\mathcal {B}}{\big \vert }_{f(X)},} where in this particular case the trace satisfies: B | f ( X ) = f ( f − 1 ( B ) ) {\displaystyle {\mathcal {B}}{\big \vert }_{f(X)}=f\left(f^{-1}({\mathcal {B}})\right)} and consequently also: f − 1 ( B ) = f − 1 ( B | f ( X ) ) . {\displaystyle f^{-1}({\mathcal {B}})=f^{-1}\left({\mathcal {B}}{\big \vert }_{f(X)}\right).} This last equality and the fact that the trace B | f ( X ) {\displaystyle {\mathcal {B}}{\big \vert }_{f(X)}} is a family of sets over f ( X ) {\displaystyle f(X)} mean that to draw conclusions about f − 1 ( B ) , {\displaystyle f^{-1}({\mathcal {B}}),} the trace B | f ( X ) {\displaystyle {\mathcal {B}}{\big \vert }_{f(X)}} can be used in place of B {\displaystyle {\mathcal {B}}} and the surjection f : X → f ( X ) {\displaystyle f:X\to f(X)} can be used in place of f : X → Y . {\displaystyle f:X\to Y.} For example: f − 1 ( B ) {\displaystyle f^{-1}({\mathcal {B}})} is a prefilter (resp. filter subbase, π-system, proper) if and only if this is true of B | f ( X ) . {\displaystyle {\mathcal {B}}{\big \vert }_{f(X)}.} In this way, the case where f {\displaystyle f} is not (necessarily) surjective can be reduced to the case of a surjective function (which is a case that was described at the start of this subsection). 
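The two identities relating the preimage to the trace on f(X) can be checked on a small example; below, a dictionary plays the role of a non-surjective map f, and the helper `preimage` is an illustrative name:

```python
Y = {1, 2, 3, 4}
f = {"p": 1, "q": 2, "r": 2}          # X = {p, q, r}, f(X) = {1, 2}: not surjective
B = [frozenset({1, 3}), frozenset({1, 2, 4})]

image_fX = {f[x] for x in f}          # f(X)

def preimage(S):
    """f^{-1}(S) as a subset of the domain."""
    return frozenset(x for x in f if f[x] in S)

trace_on_fX = {frozenset(set(S) & image_fX) for S in B}   # B|_{f(X)}
f_inv_B = {preimage(S) for S in B}                        # f^{-1}(B)

# B|_{f(X)} = f(f^{-1}(B))
assert trace_on_fX == {frozenset(f[x] for x in S) for S in f_inv_B}

# f^{-1}(B) = f^{-1}(B|_{f(X)})
assert f_inv_B == {preimage(S) for S in trace_on_fX}
```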
Even if B {\displaystyle {\mathcal {B}}} is an ultrafilter on Y , {\displaystyle Y,} if f {\displaystyle f} is not surjective then it is nevertheless possible that ∅ ∈ B | f ( X ) , {\displaystyle \varnothing \in {\mathcal {B}}{\big \vert }_{f(X)},} which would make f − 1 ( B ) {\displaystyle f^{-1}({\mathcal {B}})} degenerate as well. The next characterization shows that degeneracy is the only obstacle. If B {\displaystyle {\mathcal {B}}} is a prefilter then the following are equivalent: f − 1 ( B ) {\displaystyle f^{-1}({\mathcal {B}})} is a prefilter; B | f ( X ) {\displaystyle {\mathcal {B}}{\big \vert }_{f(X)}} is a prefilter; ∅ ∉ B | f ( X ) {\displaystyle \varnothing \not \in {\mathcal {B}}{\big \vert }_{f(X)}} ; B {\displaystyle {\mathcal {B}}} meshes with f ( X ) {\displaystyle f(X)} and moreover, if f − 1 ( B ) {\displaystyle f^{-1}({\mathcal {B}})} is a prefilter then so is f ( f − 1 ( B ) ) . {\displaystyle f\left(f^{-1}({\mathcal {B}})\right).} If S ⊆ Y {\displaystyle S\subseteq Y} and if In : S → Y {\displaystyle \operatorname {In} :S\to Y} denotes the inclusion map then the trace of B on S {\displaystyle {\mathcal {B}}{\text{ on }}S} is equal to In − 1 ⁡ ( B ) . {\displaystyle \operatorname {In} ^{-1}({\mathcal {B}}).} This observation allows the results in this subsection to be applied to investigating the trace on a set. ==== Subordination is preserved by images and preimages ==== The relation ≤ {\displaystyle \,\leq \,} is preserved under both images and preimages of families of sets. This means that for any families C and F , {\displaystyle {\mathcal {C}}{\text{ and }}{\mathcal {F}},} C ≤ F implies g ( C ) ≤ g ( F ) and f − 1 ( C ) ≤ f − 1 ( F ) . 
{\displaystyle {\mathcal {C}}\leq {\mathcal {F}}\quad {\text{ implies }}\quad g({\mathcal {C}})\leq g({\mathcal {F}})\quad {\text{ and }}\quad f^{-1}({\mathcal {C}})\leq f^{-1}({\mathcal {F}}).} Moreover, the following relations always hold for any family of sets C {\displaystyle {\mathcal {C}}} : C ≤ f ( f − 1 ( C ) ) {\displaystyle {\mathcal {C}}\leq f\left(f^{-1}({\mathcal {C}})\right)} where equality will hold if f {\displaystyle f} is surjective. Furthermore, f − 1 ( C ) = f − 1 ( f ( f − 1 ( C ) ) ) and g ( C ) = g ( g − 1 ( g ( C ) ) ) . {\displaystyle f^{-1}({\mathcal {C}})=f^{-1}\left(f\left(f^{-1}({\mathcal {C}})\right)\right)\quad {\text{ and }}\quad g({\mathcal {C}})=g\left(g^{-1}(g({\mathcal {C}}))\right).} If B ⊆ ℘ ( X ) and C ⊆ ℘ ( Y ) {\displaystyle {\mathcal {B}}\subseteq \wp (X){\text{ and }}{\mathcal {C}}\subseteq \wp (Y)} then f ( B ) ≤ C if and only if B ≤ f − 1 ( C ) {\displaystyle f({\mathcal {B}})\leq {\mathcal {C}}\quad {\text{ if and only if }}\quad {\mathcal {B}}\leq f^{-1}({\mathcal {C}})} and g − 1 ( g ( C ) ) ≤ C {\displaystyle g^{-1}(g({\mathcal {C}}))\leq {\mathcal {C}}} where equality will hold if g {\displaystyle g} is injective. === Products of prefilters === Suppose X ∙ = ( X i ) i ∈ I {\displaystyle X_{\bullet }=\left(X_{i}\right)_{i\in I}} is a family of one or more non-empty sets, whose product will be denoted by ∏ X ∙ := ∏ i ∈ I X i , {\displaystyle {\textstyle \prod _{}}X_{\bullet }:={\textstyle \prod \limits _{i\in I}}X_{i},} and for every index i ∈ I , {\displaystyle i\in I,} let Pr X i : ∏ X ∙ → X i {\displaystyle \Pr {}_{X_{i}}:\prod X_{\bullet }\to X_{i}} denote the canonical projection. Let B ∙ := ( B i ) i ∈ I {\displaystyle {\mathcal {B}}_{\bullet }:=\left({\mathcal {B}}_{i}\right)_{i\in I}} be non−empty families, also indexed by I , {\displaystyle I,} such that B i ⊆ ℘ ( X i ) {\displaystyle {\mathcal {B}}_{i}\subseteq \wp \left(X_{i}\right)} for each i ∈ I . 
{\displaystyle i\in I.} The product of the families B ∙ {\displaystyle {\mathcal {B}}_{\bullet }} is defined identically to how the basic open subsets of the product topology are defined (had all of these B i {\displaystyle {\mathcal {B}}_{i}} been topologies). That is, both the notations ∏ B ∙ = ∏ i ∈ I B i {\displaystyle \prod _{}{\mathcal {B}}_{\bullet }=\prod _{i\in I}{\mathcal {B}}_{i}} denote the family of all cylinder subsets ∏ i ∈ I S i ⊆ ∏ X ∙ {\displaystyle {\textstyle \prod \limits _{i\in I}}S_{i}\subseteq {\textstyle \prod }X_{\bullet }} such that S i = X i {\displaystyle S_{i}=X_{i}} for all but finitely many i ∈ I {\displaystyle i\in I} and where S i ∈ B i {\displaystyle S_{i}\in {\mathcal {B}}_{i}} for any one of these finitely many exceptions (that is, for any i {\displaystyle i} such that S i ≠ X i , {\displaystyle S_{i}\neq X_{i},} necessarily S i ∈ B i {\displaystyle S_{i}\in {\mathcal {B}}_{i}} ). When every B i {\displaystyle {\mathcal {B}}_{i}} is a filter subbase then the family ⋃ i ∈ I Pr X i − 1 ( B i ) {\displaystyle {\textstyle \bigcup \limits _{i\in I}}\Pr {}_{X_{i}}^{-1}\left({\mathcal {B}}_{i}\right)} is a filter subbase for the filter on ∏ X ∙ {\displaystyle {\textstyle \prod }X_{\bullet }} generated by B ∙ . {\displaystyle {\mathcal {B}}_{\bullet }.} If ∏ B ∙ {\displaystyle {\textstyle \prod }{\mathcal {B}}_{\bullet }} is a filter subbase then the filter on ∏ X ∙ {\displaystyle {\textstyle \prod }X_{\bullet }} that it generates is called the filter generated by B ∙ {\displaystyle {\mathcal {B}}_{\bullet }} . 
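For finitely many factors the "all but finitely many" condition is vacuous, so the cylinder-set definition can be written out directly: each factor is either the whole space or a member of the corresponding family. The sketch below (helper name `product_family` is ad hoc) builds the product of two prefilters and checks that it is proper and downward directed:

```python
from itertools import product

X1, X2 = {1, 2, 3}, {"a", "b"}
B1 = [frozenset({1}), frozenset({1, 2})]     # a prefilter on X1
B2 = [frozenset({"a"})]                       # a prefilter on X2

def product_family(B1, X1, B2, X2):
    """Cylinder sets: each factor is the whole space or a member of B_i."""
    choices1 = list(B1) + [frozenset(X1)]
    choices2 = list(B2) + [frozenset(X2)]
    return {frozenset(product(S1, S2)) for S1 in choices1 for S2 in choices2}

P = product_family(B1, X1, B2, X2)

assert frozenset() not in P                   # proper
# downward directed: any two members contain a common member of P
assert all(any(set(c) <= set(a) & set(b) for c in P) for a in P for b in P)
```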
If every B i {\displaystyle {\mathcal {B}}_{i}} is a prefilter on X i {\displaystyle X_{i}} then ∏ B ∙ {\displaystyle {\textstyle \prod }{\mathcal {B}}_{\bullet }} will be a prefilter on ∏ X ∙ {\displaystyle {\textstyle \prod }X_{\bullet }} and moreover, this prefilter is equal to the coarsest prefilter F on ∏ X ∙ {\displaystyle {\mathcal {F}}{\text{ on }}{\textstyle \prod }X_{\bullet }} such that Pr X i ( F ) = B i {\displaystyle \Pr {}_{X_{i}}({\mathcal {F}})={\mathcal {B}}_{i}} for every i ∈ I . {\displaystyle i\in I.} However, ∏ B ∙ {\displaystyle {\textstyle \prod }{\mathcal {B}}_{\bullet }} may fail to be a filter on ∏ X ∙ {\displaystyle {\textstyle \prod }X_{\bullet }} even if every B i {\displaystyle {\mathcal {B}}_{i}} is a filter on X i . {\displaystyle X_{i}.} == Convergence, limits, and cluster points == Throughout, ( X , τ ) {\displaystyle (X,\tau )} is a topological space. Prefilters vs. filters With respect to maps and subsets, the property of being a prefilter is in general better behaved and better preserved than the property of being a filter. For instance, the image of a prefilter under some map is again a prefilter; but the image of a filter under a non-surjective map is never a filter on the codomain, although it will be a prefilter. The situation is the same with preimages under non-injective maps (even if the map is surjective). If S ⊆ X {\displaystyle S\subseteq X} is a proper subset then any filter on S {\displaystyle S} will not be a filter on X , {\displaystyle X,} although it will be a prefilter. One advantage that filters have is that they are distinguished representatives of their equivalence class (relative to ≤ {\displaystyle \,\leq } ), meaning that any equivalence class of prefilters contains a unique filter. This property may be useful when dealing with equivalence classes of prefilters (for instance, in the construction of completions of uniform spaces via Cauchy filters). 
The many properties that characterize ultrafilters are also often useful. They are used, for example, to construct the Stone–Čech compactification. The use of ultrafilters generally requires that the ultrafilter lemma be assumed. But in the many fields where the axiom of choice (or the Hahn–Banach theorem) is assumed, the ultrafilter lemma necessarily holds and does not require an additional assumption. A note on intuition Suppose that F {\displaystyle {\mathcal {F}}} is a non-principal filter on an infinite set X . {\displaystyle X.} F {\displaystyle {\mathcal {F}}} has one "upward" property (that of being closed upward) and one "downward" property (that of being directed downward). Starting with any F 0 ∈ F , {\displaystyle F_{0}\in {\mathcal {F}},} there always exists some F 1 ∈ F {\displaystyle F_{1}\in {\mathcal {F}}} that is a proper subset of F 0 {\displaystyle F_{0}} ; this may be continued ad infinitum to get a sequence F 0 ⊋ F 1 ⊋ ⋯ {\displaystyle F_{0}\supsetneq F_{1}\supsetneq \cdots } of sets in F {\displaystyle {\mathcal {F}}} with each F i + 1 {\displaystyle F_{i+1}} being a proper subset of F i . {\displaystyle F_{i}.} The same is not true going "upward", for if F 0 = X ∈ F {\displaystyle F_{0}=X\in {\mathcal {F}}} then there is no set in F {\displaystyle {\mathcal {F}}} that contains X {\displaystyle X} as a proper subset. Thus when it comes to limiting behavior (which is a topic central to the field of topology), going "upward" leads to a dead end, while going "downward" is typically fruitful. So to gain understanding and intuition about how filters (and prefilters) relate to concepts in topology, the "downward" property is usually the one to concentrate on. This is also why so many topological properties can be described by using only prefilters, rather than requiring filters (which only differ from prefilters in that they are also upward closed). 
The "upward" property of filters is less important for topological intuition but it is sometimes useful to have for technical reasons. For example, with respect to ⊆ , {\displaystyle \,\subseteq ,} every filter subbase is contained in a unique smallest filter but there may not exist a unique smallest prefilter containing it. === Limits and convergence === A family B {\displaystyle {\mathcal {B}}} is said to converge in ( X , τ ) {\displaystyle (X,\tau )} to a point x {\displaystyle x} of X {\displaystyle X} if B ≥ N ( x ) . {\displaystyle {\mathcal {B}}\geq {\mathcal {N}}(x).} Explicitly, N ( x ) ≤ B {\displaystyle {\mathcal {N}}(x)\leq {\mathcal {B}}} means that every neighborhood N of x {\displaystyle N{\text{ of }}x} contains some B ∈ B {\displaystyle B\in {\mathcal {B}}} as a subset (that is, B ⊆ N {\displaystyle B\subseteq N} ); thus the following then holds: N ∋ N ⊇ B ∈ B . {\displaystyle {\mathcal {N}}\ni N\supseteq B\in {\mathcal {B}}.} In words, a family converges to a point or subset x {\displaystyle x} if and only if it is finer than the neighborhood filter at x . {\displaystyle x.} A family B {\displaystyle {\mathcal {B}}} converging to a point x {\displaystyle x} may be indicated by writing B → x or lim B → x in X {\displaystyle {\mathcal {B}}\to x{\text{ or }}\lim {\mathcal {B}}\to x{\text{ in }}X} and saying that x {\displaystyle x} is a limit of B in X ; {\displaystyle {\mathcal {B}}{\text{ in }}X;} if this limit x {\displaystyle x} is a point (and not a subset), then x {\displaystyle x} is also called a limit point. As usual, lim B = x {\displaystyle \lim {\mathcal {B}}=x} is defined to mean that B → x {\displaystyle {\mathcal {B}}\to x} and x ∈ X {\displaystyle x\in X} is the only limit point of B ; {\displaystyle {\mathcal {B}};} that is, if also B → z then z = x . 
{\displaystyle {\mathcal {B}}\to z{\text{ then }}z=x.} (If the notation " lim B = x {\displaystyle \lim {\mathcal {B}}=x} " did not also require that the limit point x {\displaystyle x} be unique then the equals sign = would no longer be guaranteed to be transitive). The set of all limit points of B {\displaystyle {\mathcal {B}}} is denoted by lim X B or lim B . {\displaystyle \lim {}_{X}{\mathcal {B}}{\text{ or }}\lim {\mathcal {B}}.} In the above definitions, it suffices to check that B {\displaystyle {\mathcal {B}}} is finer than some (or equivalently, finer than every) neighborhood base in ( X , τ ) {\displaystyle (X,\tau )} of the point (for example, such as τ ( x ) = { U ∈ τ : x ∈ U } {\displaystyle \tau (x)=\{U\in \tau :x\in U\}} or τ ( S ) = ⋂ s ∈ S τ ( s ) {\displaystyle \tau (S)={\textstyle \bigcap \limits _{s\in S}}\tau (s)} when S ≠ ∅ {\displaystyle S\neq \varnothing } ). Examples If X := R n {\displaystyle X:=\mathbb {R} ^{n}} is Euclidean space and ‖ x ‖ {\displaystyle \|x\|} denotes the Euclidean norm (which is the distance from the origin, defined as usual), then all of the following families converge to the origin: the prefilter { B r ( 0 ) : 0 < r ≤ 1 } {\displaystyle \{B_{r}(0):0<r\leq 1\}} of all open balls centered at the origin, where B r ( z ) = { x : ‖ x − z ‖ < r } . {\displaystyle B_{r}(z)=\{x:\|x-z\|<r\}.} the prefilter { B ≤ r ( 0 ) : 0 < r ≤ 1 } {\displaystyle \{B_{\leq r}(0):0<r\leq 1\}} of all closed balls centered at the origin, where B ≤ r ( z ) = { x : ‖ x − z ‖ ≤ r } . {\displaystyle B_{\leq r}(z)=\{x:\|x-z\|\leq r\}.} This prefilter is equivalent to the one above. the prefilter { R ∩ B ≤ r ( 0 ) : 0 < r ≤ 1 } {\displaystyle \{R\cap B_{\leq r}(0):0<r\leq 1\}} where R = S 1 ∪ S 1 / 2 ∪ S 1 / 3 ∪ ⋯ {\displaystyle R=S_{1}\cup S_{1/2}\cup S_{1/3}\cup \cdots } is a union of spheres S r = { x : ‖ x ‖ = r } {\displaystyle S_{r}=\{x:\|x\|=r\}} centered at the origin having progressively smaller radii. 
This family consists of the sets S 1 / n ∪ S 1 / ( n + 1 ) ∪ S 1 / ( n + 2 ) ∪ ⋯ {\displaystyle S_{1/n}\cup S_{1/(n+1)}\cup S_{1/(n+2)}\cup \cdots } as n {\displaystyle n} ranges over the positive integers. any of the families above but with the radius r {\displaystyle r} ranging over 1 , 1 / 2 , 1 / 3 , 1 / 4 , … {\displaystyle 1,\,1/2,\,1/3,\,1/4,\ldots } (or over any other positive decreasing sequence) instead of over all positive reals. Drawing or imagining any one of these sequences of sets when X = R 2 {\displaystyle X=\mathbb {R} ^{2}} has dimension n = 2 {\displaystyle n=2} suggests that intuitively, these sets "should" converge to the origin (and indeed they do). This is the intuition that the above definition of a "convergent prefilter" makes rigorous. Although ‖ ⋅ ‖ {\displaystyle \|\cdot \|} was assumed to be the Euclidean norm, the example above remains valid for any other norm on R n . {\displaystyle \mathbb {R} ^{n}.} The one and only limit point in X := R {\displaystyle X:=\mathbb {R} } of the free prefilter { ( 0 , r ) : r > 0 } {\displaystyle \{(0,r):r>0\}} is 0 {\displaystyle 0} since every open ball around the origin contains some open interval of this form. The fixed prefilter B := { [ 0 , 1 + r ) : r > 0 } {\displaystyle {\mathcal {B}}:=\{[0,1+r):r>0\}} does not converge in R {\displaystyle \mathbb {R} } to any point and so lim B = ∅ , {\displaystyle \lim {\mathcal {B}}=\varnothing ,} although B {\displaystyle {\mathcal {B}}} does converge to the set ker ⁡ B = [ 0 , 1 ] {\displaystyle \ker {\mathcal {B}}=[0,1]} since N ( [ 0 , 1 ] ) ≤ B . {\displaystyle {\mathcal {N}}([0,1])\leq {\mathcal {B}}.} However, not every fixed prefilter converges to its kernel. For instance, the fixed prefilter { [ 0 , 1 + r ) ∪ ( 1 + 1 / r , ∞ ) : r > 0 } {\displaystyle \{[0,1+r)\cup (1+1/r,\infty ):r>0\}} also has kernel [ 0 , 1 ] {\displaystyle [0,1]} but does not converge (in R {\displaystyle \mathbb {R} } ) to it.
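The convergence criterion used throughout these examples, B → x if and only if B ≥ N(x), can be checked mechanically on a finite topological space. The following sketch is illustrative only (the three-point space and all function names are assumptions, not from the article): it builds the neighborhood filter N(x) and tests whether every neighborhood contains some member of the family.

```python
from itertools import combinations

# A small topological space: X = {0, 1, 2} with the open sets below.
X = frozenset({0, 1, 2})
tau = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def neighborhood_filter(x):
    """N(x): every superset of some open set containing x."""
    return {S for S in powerset(X) if any(x in U and U <= S for U in tau)}

def converges_to(B, x):
    """B -> x  iff  B >= N(x): every neighborhood of x contains some member of B."""
    return all(any(b <= N for b in B) for N in neighborhood_filter(x))

print(converges_to({frozenset({0})}, 0))  # True
print(converges_to({frozenset({2})}, 0))  # False: {2} lies inside no neighborhood of 0
```

As the article notes, it suffices to test against a neighborhood base (here, the open sets containing x) rather than against all of N(x).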
The free prefilter ( R , ∞ ) := { ( r , ∞ ) : r ∈ R } {\displaystyle (\mathbb {R} ,\infty ):=\{(r,\infty ):r\in \mathbb {R} \}} of intervals does not converge (in R {\displaystyle \mathbb {R} } ) to any point. The same is also true of the prefilter [ R , ∞ ) := { [ r , ∞ ) : r ∈ R } {\displaystyle [\mathbb {R} ,\infty ):=\{[r,\infty ):r\in \mathbb {R} \}} because it is equivalent to ( R , ∞ ) {\displaystyle (\mathbb {R} ,\infty )} and equivalent families have the same limits. In fact, if B {\displaystyle {\mathcal {B}}} is any prefilter in any topological space X {\displaystyle X} then for every S ∈ B ↑ X , {\displaystyle S\in {\mathcal {B}}^{\uparrow X},} B → S . {\displaystyle {\mathcal {B}}\to S.} More generally, because the only neighborhood of X {\displaystyle X} is itself (that is, N ( X ) = { X } {\displaystyle {\mathcal {N}}(X)=\{X\}} ), every non-empty family (including every filter subbase) converges to X . {\displaystyle X.} For any point x , {\displaystyle x,} its neighborhood filter N ( x ) → x {\displaystyle {\mathcal {N}}(x)\to x} always converges to x . {\displaystyle x.} More generally, any neighborhood basis at x {\displaystyle x} converges to x . {\displaystyle x.} A point x {\displaystyle x} is always a limit point of the principal ultra prefilter { { x } } {\displaystyle \{\{x\}\}} and of the ultrafilter that it generates. The empty family B = ∅ {\displaystyle {\mathcal {B}}=\varnothing } does not converge to any point. Basic properties If B {\displaystyle {\mathcal {B}}} converges to a point then the same is true of any family finer than B . {\displaystyle {\mathcal {B}}.} This has many important consequences. One consequence is that the limit points of a family B {\displaystyle {\mathcal {B}}} are the same as the limit points of its upward closure: lim X ⁡ B = lim X ⁡ ( B ↑ X ) . 
{\displaystyle \operatorname {lim} _{X}{\mathcal {B}}~=~\operatorname {lim} _{X}\left({\mathcal {B}}^{\uparrow X}\right).} In particular, the limit points of a prefilter are the same as the limit points of the filter that it generates. Another consequence is that if a family converges to a point then the same is true of the family's trace/restriction to any given subset of X . {\displaystyle X.} If B {\displaystyle {\mathcal {B}}} is a prefilter and B ∈ B {\displaystyle B\in {\mathcal {B}}} then B {\displaystyle {\mathcal {B}}} converges to a point of X {\displaystyle X} if and only if this is true of the trace B | B . {\displaystyle {\mathcal {B}}{\big \vert }_{B}.} If a filter subbase converges to a point then so do the filter and the π-system that it generates, although the converse is not guaranteed. For example, the filter subbase { ( − ∞ , 0 ] , [ 0 , ∞ ) } {\displaystyle \{(-\infty ,0],[0,\infty )\}} does not converge to 0 {\displaystyle 0} in X := R {\displaystyle X:=\mathbb {R} } although the (principal ultra) filter that it generates does. Given x ∈ X , {\displaystyle x\in X,} the following are equivalent for a prefilter B : {\displaystyle {\mathcal {B}}:} B {\displaystyle {\mathcal {B}}} converges to x . {\displaystyle x.} B ↑ X {\displaystyle {\mathcal {B}}^{\uparrow X}} converges to x . {\displaystyle x.} There exists a family equivalent to B {\displaystyle {\mathcal {B}}} that converges to x . {\displaystyle x.} Because subordination is transitive, if B ≤ C then lim X B ⊆ lim X C {\displaystyle {\mathcal {B}}\leq {\mathcal {C}}{\text{ then }}\lim {}_{X}{\mathcal {B}}\subseteq \lim {}_{X}{\mathcal {C}}} and moreover, for every x ∈ X , {\displaystyle x\in X,} both { x } {\displaystyle \{x\}} and the maximal/ultrafilter { x } ↑ X {\displaystyle \{x\}^{\uparrow X}} converge to x . 
{\displaystyle x.} Thus every topological space ( X , τ ) {\displaystyle (X,\tau )} induces a canonical convergence ξ ⊆ X × Filters ⁡ ( X ) {\displaystyle \xi \subseteq X\times \operatorname {Filters} (X)} defined by ( x , B ) ∈ ξ if and only if x ∈ lim ( X , τ ) B . {\displaystyle (x,{\mathcal {B}})\in \xi {\text{ if and only if }}x\in \lim {}_{(X,\tau )}{\mathcal {B}}.} At the other extreme, the neighborhood filter N ( x ) {\displaystyle {\mathcal {N}}(x)} is the smallest (that is, coarsest) filter on X {\displaystyle X} that converges to x ; {\displaystyle x;} that is, any filter converging to x {\displaystyle x} must contain N ( x ) {\displaystyle {\mathcal {N}}(x)} as a subset. Said differently, the family of filters that converge to x {\displaystyle x} consists exactly of those filters on X {\displaystyle X} that contain N ( x ) {\displaystyle {\mathcal {N}}(x)} as a subset. Consequently, the finer the topology on X {\displaystyle X} is, the fewer prefilters exist that have any limit points in X . {\displaystyle X.} === Cluster points === A family B {\displaystyle {\mathcal {B}}} is said to cluster at a point x {\displaystyle x} of X {\displaystyle X} if it meshes with the neighborhood filter of x ; {\displaystyle x;} that is, if B # N ( x ) . {\displaystyle {\mathcal {B}}\#{\mathcal {N}}(x).} Explicitly, this means that B ∩ N ≠ ∅ for every B ∈ B {\displaystyle B\cap N\neq \varnothing {\text{ for every }}B\in {\mathcal {B}}} and every neighborhood N {\displaystyle N} of x . {\displaystyle x.} In particular, a point x ∈ X {\displaystyle x\in X} is a cluster point or an accumulation point of a family B {\displaystyle {\mathcal {B}}} if B {\displaystyle {\mathcal {B}}} meshes with the neighborhood filter at x : B # N ( x ) . 
{\displaystyle x:\ {\mathcal {B}}\#{\mathcal {N}}(x).} The set of all cluster points of B {\displaystyle {\mathcal {B}}} is denoted by cl X ⁡ B , {\displaystyle \operatorname {cl} _{X}{\mathcal {B}},} where the subscript may be dropped if not needed. In the above definitions, it suffices to check that B {\displaystyle {\mathcal {B}}} meshes with some (or equivalently, meshes with every) neighborhood base in X {\displaystyle X} of x or S . {\displaystyle x{\text{ or }}S.} When B {\displaystyle {\mathcal {B}}} is a prefilter then the definition of " B and N {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {N}}} mesh" can be characterized entirely in terms of the subordination preorder ≤ . {\displaystyle \,\leq \,.} Two equivalent families of sets have the exact same limit points and also the same cluster points. No matter the topology, for every x ∈ X , {\displaystyle x\in X,} both { x } {\displaystyle \{x\}} and the principal ultrafilter { x } ↑ X {\displaystyle \{x\}^{\uparrow X}} cluster at x . {\displaystyle x.} If B {\displaystyle {\mathcal {B}}} clusters to a point then the same is true of any family coarser than B . {\displaystyle {\mathcal {B}}.} Consequently, the cluster points of a family B {\displaystyle {\mathcal {B}}} are the same as the cluster points of its upward closure: cl X ⁡ B = cl X ⁡ ( B ↑ X ) . {\displaystyle \operatorname {cl} _{X}{\mathcal {B}}~=~\operatorname {cl} _{X}\left({\mathcal {B}}^{\uparrow X}\right).} In particular, the cluster points of a prefilter are the same as the cluster points of the filter that it generates. Given x ∈ X , {\displaystyle x\in X,} the following are equivalent for a prefilter B on X {\displaystyle {\mathcal {B}}{\text{ on }}X} : B {\displaystyle {\mathcal {B}}} clusters at x . {\displaystyle x.} The family B ↑ X {\displaystyle {\mathcal {B}}^{\uparrow X}} generated by B {\displaystyle {\mathcal {B}}} clusters at x . 
{\displaystyle x.} There exists a family equivalent to B {\displaystyle {\mathcal {B}}} that clusters at x . {\displaystyle x.} x ∈ ⋂ F ∈ B cl X ⁡ F . {\displaystyle x\in {\textstyle \bigcap \limits _{F\in {\mathcal {B}}}}\operatorname {cl} _{X}F.} X ∖ N ∉ B ↑ X {\displaystyle X\setminus N\not \in {\mathcal {B}}^{\uparrow X}} for every neighborhood N {\displaystyle N} of x . {\displaystyle x.} If B {\displaystyle {\mathcal {B}}} is a filter on X {\displaystyle X} then x ∈ cl X ⁡ B if and only if X ∖ N ∉ B {\displaystyle x\in \operatorname {cl} _{X}{\mathcal {B}}{\text{ if and only if }}X\setminus N\not \in {\mathcal {B}}} for every neighborhood N of x . {\displaystyle N{\text{ of }}x.} There exists a prefilter F {\displaystyle {\mathcal {F}}} subordinate to B {\displaystyle {\mathcal {B}}} (that is, F ≥ B {\displaystyle {\mathcal {F}}\geq {\mathcal {B}}} ) that converges to x . {\displaystyle x.} This is the filter equivalent of " x {\displaystyle x} is a cluster point of a sequence if and only if there exists a subsequence converging to x . {\displaystyle x.} " In particular, if x {\displaystyle x} is a cluster point of a prefilter B {\displaystyle {\mathcal {B}}} then B ( ∩ ) N ( x ) {\displaystyle {\mathcal {B}}(\cap ){\mathcal {N}}(x)} is a prefilter subordinate to B {\displaystyle {\mathcal {B}}} that converges to x . {\displaystyle x.} The set cl X ⁡ B {\displaystyle \operatorname {cl} _{X}{\mathcal {B}}} of all cluster points of a prefilter B {\displaystyle {\mathcal {B}}} satisfies cl X ⁡ B = ⋂ B ∈ B cl X ⁡ B . {\displaystyle \operatorname {cl} _{X}{\mathcal {B}}=\bigcap _{B\in {\mathcal {B}}}\operatorname {cl} _{X}B.} Consequently, the set cl X ⁡ B {\displaystyle \operatorname {cl} _{X}{\mathcal {B}}} of all cluster points of any prefilter B {\displaystyle {\mathcal {B}}} is a closed subset of X . {\displaystyle X.} This also justifies the notation cl X ⁡ B {\displaystyle \operatorname {cl} _{X}{\mathcal {B}}} for the set of cluster points. 
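The identity cl_X(B) = ⋂_{B∈B} cl_X(B) can likewise be verified by brute force on a small finite space. The sketch below (the space and all names are illustrative assumptions) computes cluster points two ways: via the meshing condition B # N(x) and via the intersection of the closures of the members, and checks that the two agree.

```python
# A small topological space: X = {0, 1, 2} with the open sets below.
X = frozenset({0, 1, 2})
tau = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

def closure(S):
    """x is in cl(S) iff every open set containing x meets S."""
    return frozenset(x for x in X if all(U & S for U in tau if x in U))

def clusters_at(B, x):
    """B clusters at x iff B meshes with N(x): every member of B meets
    every open set containing x (a neighborhood base suffices here)."""
    return all(b & U for b in B for U in tau if x in U)

def cluster_points(B):
    """cl_X(B) computed as the intersection of the closures of the members of B."""
    pts = X
    for b in B:
        pts &= closure(b)
    return pts

B = {frozenset({2})}
print(cluster_points(B))  # frozenset({2})
print(all(clusters_at(B, x) == (x in cluster_points(B)) for x in X))  # True
```

In this topology the closure of {2} is {2} itself, so the prefilter {{2}} clusters exactly at 2, matching both characterizations.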
In particular, if K ⊆ X {\displaystyle K\subseteq X} is non-empty (so that B := { K } {\displaystyle {\mathcal {B}}:=\{K\}} is a prefilter) then cl X ⁡ { K } = cl X ⁡ K {\displaystyle \operatorname {cl} _{X}\{K\}=\operatorname {cl} _{X}K} since both sides are equal to ⋂ B ∈ B cl X ⁡ B . {\displaystyle {\textstyle \bigcap \limits _{B\in {\mathcal {B}}}}\operatorname {cl} _{X}B.} === Properties and relationships === Just like sequences and nets, it is possible for a prefilter on a topological space of infinite cardinality to not have any cluster points or limit points. If x {\displaystyle x} is a limit point of B {\displaystyle {\mathcal {B}}} then x {\displaystyle x} is necessarily a limit point of any family C {\displaystyle {\mathcal {C}}} finer than B {\displaystyle {\mathcal {B}}} (that is, if N ( x ) ≤ B and B ≤ C {\displaystyle {\mathcal {N}}(x)\leq {\mathcal {B}}{\text{ and }}{\mathcal {B}}\leq {\mathcal {C}}} then N ( x ) ≤ C {\displaystyle {\mathcal {N}}(x)\leq {\mathcal {C}}} ). In contrast, if x {\displaystyle x} is a cluster point of B {\displaystyle {\mathcal {B}}} then x {\displaystyle x} is necessarily a cluster point of any family C {\displaystyle {\mathcal {C}}} coarser than B {\displaystyle {\mathcal {B}}} (that is, if N ( x ) and B {\displaystyle {\mathcal {N}}(x){\text{ and }}{\mathcal {B}}} mesh and C ≤ B {\displaystyle {\mathcal {C}}\leq {\mathcal {B}}} then N ( x ) and C {\displaystyle {\mathcal {N}}(x){\text{ and }}{\mathcal {C}}} mesh). Equivalent families and subordination Any two equivalent families B and C {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {C}}} can be used interchangeably in the definitions of "limit of" and "cluster at" because their equivalency guarantees that N ≤ B {\displaystyle {\mathcal {N}}\leq {\mathcal {B}}} if and only if N ≤ C , {\displaystyle {\mathcal {N}}\leq {\mathcal {C}},} and also that N # B {\displaystyle {\mathcal {N}}\#{\mathcal {B}}} if and only if N # C . 
{\displaystyle {\mathcal {N}}\#{\mathcal {C}}.} In essence, the preorder ≤ {\displaystyle \,\leq \,} is incapable of distinguishing between equivalent families. Given two prefilters, whether or not they mesh can be characterized entirely in terms of subordination. Thus the two most fundamental concepts relating (pre)filters to topology (that is, limit and cluster points) can both be defined entirely in terms of the subordination relation. This is why the preorder ≤ {\displaystyle \,\leq \,} is of such great importance in applying (pre)filters to topology. Limit and cluster point relationships and sufficient conditions Every limit point of a non-degenerate family B {\displaystyle {\mathcal {B}}} is also a cluster point; in symbols: lim X ⁡ B ⊆ cl X ⁡ B . {\displaystyle \operatorname {lim} _{X}{\mathcal {B}}~\subseteq ~\operatorname {cl} _{X}{\mathcal {B}}.} This is because if x {\displaystyle x} is a limit point of B {\displaystyle {\mathcal {B}}} then N ( x ) and B {\displaystyle {\mathcal {N}}(x){\text{ and }}{\mathcal {B}}} mesh, which makes x {\displaystyle x} a cluster point of B . {\displaystyle {\mathcal {B}}.} But in general, a cluster point need not be a limit point. For instance, every point in any given non-empty subset K ⊆ X {\displaystyle K\subseteq X} is a cluster point of the principal prefilter B := { K } {\displaystyle {\mathcal {B}}:=\{K\}} (no matter what topology is on X {\displaystyle X} ) but if X {\displaystyle X} is Hausdorff and K {\displaystyle K} has more than one point then this prefilter has no limit points; the same is true of the filter { K } ↑ X {\displaystyle \{K\}^{\uparrow X}} that this prefilter generates. However, every cluster point of an ultra prefilter is a limit point. 
Consequently, the limit points of an ultra prefilter B {\displaystyle {\mathcal {B}}} are the same as its cluster points: lim X ⁡ B = cl X ⁡ B ; {\displaystyle \operatorname {lim} _{X}{\mathcal {B}}=\operatorname {cl} _{X}{\mathcal {B}};} that is to say, a given point is a cluster point of an ultra prefilter B {\displaystyle {\mathcal {B}}} if and only if B {\displaystyle {\mathcal {B}}} converges to that point. Although a cluster point of a filter need not be a limit point, there will always exist a finer filter that does converge to it; in particular, if B {\displaystyle {\mathcal {B}}} clusters at x {\displaystyle x} then B ( ∩ ) N ( x ) = { B ∩ N : B ∈ B , N ∈ N ( x ) } {\displaystyle {\mathcal {B}}\,(\cap )\,{\mathcal {N}}(x)=\{B\cap N:B\in {\mathcal {B}},N\in {\mathcal {N}}(x)\}} is a filter subbase whose generated filter converges to x . {\displaystyle x.} If ∅ ≠ B ⊆ ℘ ( X ) and S ≥ B {\displaystyle \varnothing \neq {\mathcal {B}}\subseteq \wp (X){\text{ and }}{\mathcal {S}}\geq {\mathcal {B}}} is a filter subbase such that S → x in X {\displaystyle {\mathcal {S}}\to x{\text{ in }}X} then x ∈ cl X ⁡ B . {\displaystyle x\in \operatorname {cl} _{X}{\mathcal {B}}.} In particular, any limit point of a filter subbase subordinate to B ≠ ∅ {\displaystyle {\mathcal {B}}\neq \varnothing } is necessarily also a cluster point of B . {\displaystyle {\mathcal {B}}.} If x {\displaystyle x} is a cluster point of a prefilter B {\displaystyle {\mathcal {B}}} then B ( ∩ ) N ( x ) {\displaystyle {\mathcal {B}}(\cap ){\mathcal {N}}(x)} is a prefilter subordinate to B {\displaystyle {\mathcal {B}}} that converges to x in X . 
{\displaystyle x{\text{ in }}X.} If S ⊆ X {\displaystyle S\subseteq X} and if B {\displaystyle {\mathcal {B}}} is a prefilter on S {\displaystyle S} then every cluster point of B in X {\displaystyle {\mathcal {B}}{\text{ in }}X} belongs to cl X ⁡ S {\displaystyle \operatorname {cl} _{X}S} and any point in cl X ⁡ S {\displaystyle \operatorname {cl} _{X}S} is a limit point of a filter on S . {\displaystyle S.} Primitive sets A subset P ⊆ X {\displaystyle P\subseteq X} is called primitive if it is the set of limit points of some ultrafilter (or equivalently, some ultra prefilter). That is, if there exists an ultrafilter B on X {\displaystyle {\mathcal {B}}{\text{ on }}X} such that P {\displaystyle P} is equal to lim X ⁡ B , {\displaystyle \operatorname {lim} _{X}{\mathcal {B}},} which, recall, denotes the set of limit points of B in X . {\displaystyle {\mathcal {B}}{\text{ in }}X.} Since limit points are the same as cluster points for ultra prefilters, a subset is primitive if and only if it is equal to the set cl X ⁡ B {\displaystyle \operatorname {cl} _{X}{\mathcal {B}}} of cluster points of some ultra prefilter B . {\displaystyle {\mathcal {B}}.} For example, every closed singleton subset is primitive. The image of a primitive subset of X {\displaystyle X} under a continuous map f : X → Y {\displaystyle f:X\to Y} is contained in a primitive subset of Y . {\displaystyle Y.} Assume that P , Q ⊆ X {\displaystyle P,Q\subseteq X} are two primitive subsets of X . {\displaystyle X.} If U {\displaystyle U} is an open subset of X {\displaystyle X} that intersects P {\displaystyle P} then U ∈ B {\displaystyle U\in {\mathcal {B}}} for any ultrafilter B on X {\displaystyle {\mathcal {B}}{\text{ on }}X} such that P = lim X ⁡ B . 
{\displaystyle P=\operatorname {lim} _{X}{\mathcal {B}}.} In addition, if P and Q {\displaystyle P{\text{ and }}Q} are distinct then there exists some S ⊆ X {\displaystyle S\subseteq X} and some ultrafilters B P and B Q on X {\displaystyle {\mathcal {B}}_{P}{\text{ and }}{\mathcal {B}}_{Q}{\text{ on }}X} such that P = lim X ⁡ B P , Q = lim X ⁡ B Q , S ∈ B P , {\displaystyle P=\operatorname {lim} _{X}{\mathcal {B}}_{P},Q=\operatorname {lim} _{X}{\mathcal {B}}_{Q},S\in {\mathcal {B}}_{P},} and X ∖ S ∈ B Q . {\displaystyle X\setminus S\in {\mathcal {B}}_{Q}.} Other results If X {\displaystyle X} is a complete lattice then: The limit inferior of B {\displaystyle B} is the infimum of the set of all cluster points of B . {\displaystyle B.} The limit superior of B {\displaystyle B} is the supremum of the set of all cluster points of B . {\displaystyle B.} B {\displaystyle B} is a convergent prefilter if and only if its limit inferior and limit superior agree; in this case, the value on which they agree is the limit of the prefilter. === Limits of functions defined as limits of prefilters === Suppose f : X → Y {\displaystyle f:X\to Y} is a map from a set into a topological space Y , {\displaystyle Y,} B ⊆ ℘ ( X ) , {\displaystyle {\mathcal {B}}\subseteq \wp (X),} and y ∈ Y . {\displaystyle y\in Y.} If y {\displaystyle y} is a limit point (respectively, a cluster point) of f ( B ) in Y {\displaystyle f({\mathcal {B}}){\text{ in }}Y} then y {\displaystyle y} is called a limit point or limit (respectively, a cluster point) of f {\displaystyle f} with respect to B . 
{\displaystyle {\mathcal {B}}.} Explicitly, y {\displaystyle y} is a limit of f {\displaystyle f} with respect to B {\displaystyle {\mathcal {B}}} if and only if N ( y ) ≤ f ( B ) , {\displaystyle {\mathcal {N}}(y)\leq f({\mathcal {B}}),} which can be written as f ( B ) → y or lim f ( B ) → y in Y {\displaystyle f({\mathcal {B}})\to y{\text{ or }}\lim f({\mathcal {B}})\to y{\text{ in }}Y} (by definition of this notation) and stated as f {\displaystyle f} tends to y {\displaystyle y} along B . {\displaystyle {\mathcal {B}}.} If the limit y {\displaystyle y} is unique then the arrow → {\displaystyle \to } may be replaced with an equals sign = . {\displaystyle =.} The neighborhood filter N ( y ) {\displaystyle {\mathcal {N}}(y)} can be replaced with any family equivalent to it and the same is true of B . {\displaystyle {\mathcal {B}}.} The definition of a convergent net is a special case of the above definition of a limit of a function. Specifically, if x ∈ X and χ : ( I , ≤ ) → X {\displaystyle x\in X{\text{ and }}\chi :(I,\leq )\to X} is a net then χ → x in X if and only if χ ( Tails ⁡ ( I , ≤ ) ) → x in X , {\displaystyle \chi \to x{\text{ in }}X\quad {\text{ if and only if }}\quad \chi (\operatorname {Tails} (I,\leq ))\to x{\text{ in }}X,} where the left hand side states that x {\displaystyle x} is a limit of the net χ {\displaystyle \chi } while the right hand side states that x {\displaystyle x} is a limit of the function χ {\displaystyle \chi } with respect to B := Tails ⁡ ( I , ≤ ) {\displaystyle {\mathcal {B}}:=\operatorname {Tails} (I,\leq )} (as just defined above). The table below shows how various types of limits encountered in analysis and topology can be defined in terms of the convergence of images (under f {\displaystyle f} ) of particular prefilters on the domain X . {\displaystyle X.} This shows that prefilters provide a general framework into which many of the various definitions of limits fit. 
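The criterion N(y) ≤ f(B) for a limit of a function along a prefilter can be made concrete by pushing the family forward through f and testing convergence in the codomain. The sketch below is illustrative only (the two-point Sierpiński-like codomain, the map f, and all names are assumptions); note that in a non-Hausdorff codomain a function may have several limits along the same prefilter.

```python
# A tiny codomain: Y = {'a', 'b'} with a Sierpinski-like topology.
Y = frozenset({'a', 'b'})
tau_Y = {frozenset(), frozenset({'a'}), Y}

def image_family(f, B):
    """f(B) = { f[b] : b in B }: the image (pushforward) of the family B under f."""
    return {frozenset(f[x] for x in b) for b in B}

def converges_to(family, y):
    """family -> y in Y: every open set containing y contains some member
    (testing against the open sets containing y, a neighborhood base)."""
    return all(any(m <= U for m in family) for U in tau_Y if y in U)

f = {0: 'a', 1: 'b'}                      # a map from X = {0, 1} into Y
B = {frozenset({0}), frozenset({0, 1})}   # a prefilter on X

print(converges_to(image_family(f, B), 'a'))  # True: 'a' is a limit of f along B
print(converges_to(image_family(f, B), 'b'))  # True too: the only open set containing 'b' is Y
```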
The limits in the left-most column are defined in their usual way with their obvious definitions. Throughout, let f : X → Y {\displaystyle f:X\to Y} be a map between topological spaces, x 0 ∈ X , and y ∈ Y . {\displaystyle x_{0}\in X,{\text{ and }}y\in Y.} If Y {\displaystyle Y} is Hausdorff then all arrows " → y {\displaystyle \to y} " in the table may be replaced with equal signs " = y {\displaystyle =y} " and " lim f ( B ) → y {\displaystyle \lim f({\mathcal {B}})\to y} " may be replaced with " lim f ( B ) = y {\displaystyle \lim f({\mathcal {B}})=y} ". By defining different prefilters, many other notions of limits can be defined; for example, lim | x | ≠ | x 0 | | x | → | x 0 | f ( x ) → y . {\displaystyle \lim _{\stackrel {|x|\to |x_{0}|}{|x|\neq |x_{0}|}}f(x)\to y.} Divergence to infinity Divergence of a real-valued function to infinity can be defined/characterized by using the prefilters ( R , ∞ ) := { ( r , ∞ ) : r ∈ R } and ( − ∞ , R ) := { ( − ∞ , r ) : r ∈ R } , {\displaystyle (\mathbb {R} ,\infty ):=\{(r,\infty ):r\in \mathbb {R} \}~~{\text{ and }}~~(-\infty ,\mathbb {R} ):=\{(-\infty ,r):r\in \mathbb {R} \},} where f → ∞ {\displaystyle f\to \infty } along B {\displaystyle {\mathcal {B}}} if and only if ( R , ∞ ) ≤ f ( B ) {\displaystyle (\mathbb {R} ,\infty )\leq f({\mathcal {B}})} and similarly, f → − ∞ {\displaystyle f\to -\infty } along B {\displaystyle {\mathcal {B}}} if and only if ( − ∞ , R ) ≤ f ( B ) . 
{\displaystyle (-\infty ,\mathbb {R} )\leq f({\mathcal {B}}).} The family ( R , ∞ ) {\displaystyle (\mathbb {R} ,\infty )} can be replaced by any family equivalent to it, such as [ R , ∞ ) := { [ r , ∞ ) : r ∈ R } {\displaystyle [\mathbb {R} ,\infty ):=\{[r,\infty ):r\in \mathbb {R} \}} for instance (in real analysis, this would correspond to replacing the strict inequality " f ( x ) > r {\displaystyle f(x)>r} " in the definition with " f ( x ) ≥ r {\displaystyle f(x)\geq r} "), and the same is true of B {\displaystyle {\mathcal {B}}} and ( − ∞ , R ) . {\displaystyle (-\infty ,\mathbb {R} ).} So for example, if B := N ( x 0 ) {\displaystyle {\mathcal {B}}\,:=\,{\mathcal {N}}\left(x_{0}\right)} then lim x → x 0 f ( x ) → ∞ {\displaystyle \lim _{x\to x_{0}}f(x)\to \infty } if and only if ( R , ∞ ) ≤ f ( B ) {\displaystyle (\mathbb {R} ,\infty )\leq f({\mathcal {B}})} holds. Similarly, lim x → x 0 f ( x ) → − ∞ {\displaystyle \lim _{x\to x_{0}}f(x)\to -\infty } if and only if ( − ∞ , R ) ≤ f ( N ( x 0 ) ) , {\displaystyle (-\infty ,\mathbb {R} )\leq f\left({\mathcal {N}}\left(x_{0}\right)\right),} or equivalently, if and only if ( − ∞ , R ] ≤ f ( N ( x 0 ) ) . {\displaystyle (-\infty ,\mathbb {R} ]\leq f\left({\mathcal {N}}\left(x_{0}\right)\right).} More generally, if f {\displaystyle f} is valued in Y = R n or Y = C n {\displaystyle Y=\mathbb {R} ^{n}{\text{ or }}Y=\mathbb {C} ^{n}} (or some other seminormed vector space) and if B ≥ r := { y ∈ Y : | y | ≥ r } = Y ∖ B < r {\displaystyle B_{\geq r}:=\{y\in Y:|y|\geq r\}=Y\setminus B_{<r}} then lim x → x 0 | f ( x ) | → ∞ {\displaystyle \lim _{x\to x_{0}}|f(x)|\to \infty } if and only if B ≥ R ≤ f ( N ( x 0 ) ) {\displaystyle B_{\geq \mathbb {R} }\leq f\left({\mathcal {N}}\left(x_{0}\right)\right)} holds, where B ≥ R := { B ≥ r : r ∈ R } . 
{\displaystyle B_{\geq \mathbb {R} }:=\left\{B_{\geq r}:r\in \mathbb {R} \right\}.} == Filters and nets == This section will describe the relationships between prefilters and nets in great detail because of how important these details are in applying filters to topology − particularly in switching from utilizing nets to utilizing filters and vice versa. === Nets to prefilters === In the definitions below, the first statement is the standard definition of a limit point of a net (respectively, a cluster point of a net) and it is gradually reworded until the corresponding filter concept is reached. If f : X → Y {\displaystyle f:X\to Y} is a map and x ∙ {\displaystyle x_{\bullet }} is a net in X {\displaystyle X} then Tails ⁡ ( f ( x ∙ ) ) = f ( Tails ⁡ ( x ∙ ) ) . {\displaystyle \operatorname {Tails} \left(f\left(x_{\bullet }\right)\right)=f\left(\operatorname {Tails} \left(x_{\bullet }\right)\right).} === Prefilters to nets === A pointed set is a pair ( S , s ) {\displaystyle (S,s)} consisting of a non-empty set S {\displaystyle S} and an element s ∈ S . {\displaystyle s\in S.} For any family B , {\displaystyle {\mathcal {B}},} let PointedSets ⁡ ( B ) := { ( B , b ) : B ∈ B and b ∈ B } . {\displaystyle \operatorname {PointedSets} ({\mathcal {B}}):=\{(B,b)~:~B\in {\mathcal {B}}{\text{ and }}b\in B\}.} Define a canonical preorder ≤ {\displaystyle \,\leq \,} on pointed sets by declaring ( R , r ) ≤ ( S , s ) if and only if R ⊇ S . {\displaystyle (R,r)\leq (S,s)\quad {\text{ if and only if }}\quad R\supseteq S.} There is a canonical map Point B ⁡ : PointedSets ⁡ ( B ) → X {\displaystyle \operatorname {Point} _{\mathcal {B}}~:~\operatorname {PointedSets} ({\mathcal {B}})\to X} defined by ( B , b ) ↦ b . 
{\displaystyle (B,b)\mapsto b.} If i 0 = ( B 0 , b 0 ) ∈ PointedSets ⁡ ( B ) {\displaystyle i_{0}=\left(B_{0},b_{0}\right)\in \operatorname {PointedSets} ({\mathcal {B}})} then the tail of the assignment Point B {\displaystyle \operatorname {Point} _{\mathcal {B}}} starting at i 0 {\displaystyle i_{0}} is { c : ( C , c ) ∈ PointedSets ⁡ ( B ) and ( B 0 , b 0 ) ≤ ( C , c ) } = B 0 . {\displaystyle \left\{c~:~(C,c)\in \operatorname {PointedSets} ({\mathcal {B}}){\text{ and }}\left(B_{0},b_{0}\right)\leq (C,c)\right\}=B_{0}.} Although ( PointedSets ⁡ ( B ) , ≤ ) {\displaystyle (\operatorname {PointedSets} ({\mathcal {B}}),\leq )} is not, in general, a partially ordered set, it is a directed set if (and only if) B {\displaystyle {\mathcal {B}}} is a prefilter. So the most immediate choice for the definition of "the net in X {\displaystyle X} induced by a prefilter B {\displaystyle {\mathcal {B}}} " is the assignment ( B , b ) ↦ b {\displaystyle (B,b)\mapsto b} from PointedSets ⁡ ( B ) {\displaystyle \operatorname {PointedSets} ({\mathcal {B}})} into X . {\displaystyle X.} If B {\displaystyle {\mathcal {B}}} is a prefilter on X then Net B {\displaystyle X{\text{ then }}\operatorname {Net} _{\mathcal {B}}} is a net in X {\displaystyle X} and the prefilter associated with Net B {\displaystyle \operatorname {Net} _{\mathcal {B}}} is B {\displaystyle {\mathcal {B}}} ; that is: Tails ⁡ ( Net B ) = B . {\displaystyle \operatorname {Tails} \left(\operatorname {Net} _{\mathcal {B}}\right)={\mathcal {B}}.} This would not necessarily be true had Net B {\displaystyle \operatorname {Net} _{\mathcal {B}}} been defined on a proper subset of PointedSets ⁡ ( B ) . 
{\displaystyle \operatorname {PointedSets} ({\mathcal {B}}).} If x ∙ {\displaystyle x_{\bullet }} is a net in X {\displaystyle X} then it is not in general true that Net Tails ⁡ ( x ∙ ) {\displaystyle \operatorname {Net} _{\operatorname {Tails} \left(x_{\bullet }\right)}} is equal to x ∙ {\displaystyle x_{\bullet }} because, for example, the domain of x ∙ {\displaystyle x_{\bullet }} may be of a completely different cardinality than that of Net Tails ⁡ ( x ∙ ) {\displaystyle \operatorname {Net} _{\operatorname {Tails} \left(x_{\bullet }\right)}} (since unlike the domain of Net Tails ⁡ ( x ∙ ) , {\displaystyle \operatorname {Net} _{\operatorname {Tails} \left(x_{\bullet }\right)},} the domain of an arbitrary net in X {\displaystyle X} could have any cardinality). Partially ordered net The domain of the canonical net Net B {\displaystyle \operatorname {Net} _{\mathcal {B}}} is in general not partially ordered. However, in 1955 Bruns and Schmidt discovered a construction (detailed here: Filter (set theory)#Partially ordered net) that allows for the canonical net to have a domain that is both partially ordered and directed; this was independently rediscovered by Albert Wilansky in 1970. Because the tails of this partially ordered net are identical to the tails of Net B {\displaystyle \operatorname {Net} _{\mathcal {B}}} (since both are equal to the prefilter B {\displaystyle {\mathcal {B}}} ), there is typically nothing lost by assuming that the domain of the net associated with a prefilter is both directed and partially ordered. It can further be assumed that the partially ordered domain is also a dense order. 
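The identity Tails(Net_B) = B can be verified directly for a small prefilter. The sketch below (the chain prefilter and all names are illustrative assumptions) builds PointedSets(B), uses the preorder (R, r) ≤ (S, s) iff R ⊇ S, and checks that the tails of the canonical net (B, b) ↦ b recover B exactly, as stated above.

```python
# A small prefilter on X = {1, 2, 3}: a chain under reverse inclusion.
B = {frozenset({1, 2, 3}), frozenset({2, 3}), frozenset({3})}

pointed = {(S, s) for S in B for s in S}  # PointedSets(B)

def tail(i0):
    """Tail of the canonical net (C, c) |-> c starting at i0 = (B0, b0):
    all values c with (B0, b0) <= (C, c), i.e. with C a subset of B0."""
    B0, _ = i0
    return frozenset(c for (C, c) in pointed if B0 >= C)

tails = {tail(i) for i in pointed}
print(tails == B)  # True: Tails(Net_B) = B
```

Each tail starting at (B0, b0) equals B0 itself, since every member of B contained in B0 contributes its points and B0 contributes all of its own.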
=== Subordinate filters and subnets === The notion of " B {\displaystyle {\mathcal {B}}} is subordinate to C {\displaystyle {\mathcal {C}}} " (written B ⊢ C {\displaystyle {\mathcal {B}}\vdash {\mathcal {C}}} ) is for filters and prefilters what " x n ∙ = ( x n i ) i = 1 ∞ {\displaystyle x_{n_{\bullet }}=\left(x_{n_{i}}\right)_{i=1}^{\infty }} is a subsequence of x ∙ = ( x i ) i = 1 ∞ {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }} " is for sequences. For example, if Tails ⁡ ( x ∙ ) = { x ≥ i : i ∈ N } {\displaystyle \operatorname {Tails} \left(x_{\bullet }\right)=\left\{x_{\geq i}:i\in \mathbb {N} \right\}} denotes the set of tails of x ∙ {\displaystyle x_{\bullet }} and if Tails ⁡ ( x n ∙ ) = { x n ≥ i : i ∈ N } {\displaystyle \operatorname {Tails} \left(x_{n_{\bullet }}\right)=\left\{x_{n_{\geq i}}:i\in \mathbb {N} \right\}} denotes the set of tails of the subsequence x n ∙ {\displaystyle x_{n_{\bullet }}} (where x n ≥ i := { x n j : j ≥ i and j ∈ N } {\displaystyle x_{n_{\geq i}}:=\left\{x_{n_{j}}~:~j\geq i{\text{ and }}j\in \mathbb {N} \right\}} ) then Tails ⁡ ( x n ∙ ) ⊢ Tails ⁡ ( x ∙ ) {\displaystyle \operatorname {Tails} \left(x_{n_{\bullet }}\right)~\vdash ~\operatorname {Tails} \left(x_{\bullet }\right)} (which by definition means Tails ⁡ ( x ∙ ) ≤ Tails ⁡ ( x n ∙ ) {\displaystyle \operatorname {Tails} \left(x_{\bullet }\right)\leq \operatorname {Tails} \left(x_{n_{\bullet }}\right)} ) is true but Tails ⁡ ( x ∙ ) ⊢ Tails ⁡ ( x n ∙ ) {\displaystyle \operatorname {Tails} \left(x_{\bullet }\right)~\vdash ~\operatorname {Tails} \left(x_{n_{\bullet }}\right)} is in general false. If x ∙ = ( x i ) i ∈ I {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}} is a net in a topological space X {\displaystyle X} and if N ( x ) {\displaystyle {\mathcal {N}}(x)} is the neighborhood filter at a point x ∈ X , {\displaystyle x\in X,} then x ∙ → x if and only if N ( x ) ≤ Tails ⁡ ( x ∙ ) . 
{\displaystyle x_{\bullet }\to x{\text{ if and only if }}{\mathcal {N}}(x)\leq \operatorname {Tails} \left(x_{\bullet }\right).} If f : X → Y {\displaystyle f:X\to Y} is a surjective open map, x ∈ X , {\displaystyle x\in X,} and C {\displaystyle {\mathcal {C}}} is a prefilter on Y {\displaystyle Y} that converges to f ( x ) , {\displaystyle f(x),} then there exists a prefilter B {\displaystyle {\mathcal {B}}} on X {\displaystyle X} such that B → x {\displaystyle {\mathcal {B}}\to x} and f ( B ) {\displaystyle f({\mathcal {B}})} is equivalent to C {\displaystyle {\mathcal {C}}} (that is, C ≤ f ( B ) ≤ C {\displaystyle {\mathcal {C}}\leq f({\mathcal {B}})\leq {\mathcal {C}}} ). ==== Subordination analogs of results involving subsequences ==== The following results are the prefilter analogs of statements involving subsequences. The condition " C ≥ B , {\displaystyle {\mathcal {C}}\geq {\mathcal {B}},} " which is also written C ⊢ B , {\displaystyle {\mathcal {C}}\vdash {\mathcal {B}},} is the analog of " C {\displaystyle {\mathcal {C}}} is a subsequence of B . {\displaystyle {\mathcal {B}}.} " So "finer than" and "subordinate to" are the prefilter analogs of "subsequence of." Some people prefer saying "subordinate to" instead of "finer than" because it is more reminiscent of "subsequence of." ==== Non-equivalence of subnets and subordinate filters ==== Subnets in the sense of Willard and subnets in the sense of Kelley are the most commonly used definitions of "subnet." The first definition of a subnet ("Kelley-subnet") was introduced by John L. Kelley in 1955. In 1970, Stephen Willard introduced his own variant ("Willard-subnet") of Kelley's definition of subnet. AA-subnets were introduced independently by Smiley (1957), Aarnes and Andenaes (1972), and Murdeshwar (1983); AA-subnets were studied in great detail by Aarnes and Andenaes but they are not often used.
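The subordination relation can be made concrete on finite truncations of sequences. The sketch below (illustrative names; a finite stand-in for the infinite definitions above) checks that the tails of a subsequence are finer than the tails of the original sequence, and that subordination is genuinely asymmetric:

```python
def subordinate_to(C, B):
    """C is subordinate to B (written C ⊢ B, equivalently B <= C):
    every member of B contains some member of C."""
    return all(any(c <= b for c in C) for b in B)

# Finite truncation: the sequence 0, 1, ..., 8 and its even subsequence.
full_tails = [frozenset(range(i, 9)) for i in range(9)]
even_tails = [frozenset(range(j, 9, 2)) for j in (0, 2, 4, 6, 8)]

# Tails(x) <= Tails(x_n): the subsequence's tails are finer.
assert subordinate_to(even_tails, full_tails)

# Subordination is asymmetric in general: the principal prefilter {{0}}
# is subordinate to {{0, 1}}, but not conversely.
assert subordinate_to([frozenset({0})], [frozenset({0, 1})])
assert not subordinate_to([frozenset({0, 1})], [frozenset({0})])
```

In the infinite setting the converse subordination between the two families of tails also fails, but that failure is not visible in a finite truncation, which is why the asymmetry is illustrated with principal prefilters instead.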
A subset R ⊆ I {\displaystyle R\subseteq I} of a preordered space ( I , ≤ ) {\displaystyle (I,\leq )} is frequent or cofinal in I {\displaystyle I} if for every i ∈ I {\displaystyle i\in I} there exists some r ∈ R {\displaystyle r\in R} such that i ≤ r . {\displaystyle i\leq r.} If R ⊆ I {\displaystyle R\subseteq I} contains a tail of I {\displaystyle I} then R {\displaystyle R} is said to be eventual in I {\displaystyle I} ; explicitly, this means that there exists some i ∈ I {\displaystyle i\in I} such that I ≥ i ⊆ R {\displaystyle I_{\geq i}\subseteq R} (that is, j ∈ R {\displaystyle j\in R} for all j ∈ I {\displaystyle j\in I} satisfying i ≤ j {\displaystyle i\leq j} ). A subset is eventual if and only if its complement is not frequent (which is termed infrequent). A map h : A → I {\displaystyle h:A\to I} between two preordered sets is order-preserving if whenever a , b ∈ A {\displaystyle a,b\in A} satisfy a ≤ b , {\displaystyle a\leq b,} then h ( a ) ≤ h ( b ) . {\displaystyle h(a)\leq h(b).} Kelley did not require the map h {\displaystyle h} to be order preserving while the definition of an AA-subnet does away entirely with any map between the two nets' domains and instead focuses entirely on X {\displaystyle X} − the nets' common codomain. Every Willard-subnet is a Kelley-subnet and both are AA-subnets. In particular, if y ∙ = ( y a ) a ∈ A {\displaystyle y_{\bullet }=\left(y_{a}\right)_{a\in A}} is a Willard-subnet or a Kelley-subnet of x ∙ = ( x i ) i ∈ I {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}} then Tails ⁡ ( x ∙ ) ≤ Tails ⁡ ( y ∙ ) . 
{\displaystyle \operatorname {Tails} \left(x_{\bullet }\right)\leq \operatorname {Tails} \left(y_{\bullet }\right).} Example: If I = N {\displaystyle I=\mathbb {N} } and x ∙ = ( x i ) i ∈ I {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}} is a constant sequence and if A = { 1 } {\displaystyle A=\{1\}} and s 1 := x 1 {\displaystyle s_{1}:=x_{1}} then ( s a ) a ∈ A {\displaystyle \left(s_{a}\right)_{a\in A}} is an AA-subnet of x ∙ {\displaystyle x_{\bullet }} but it is neither a Willard-subnet nor a Kelley-subnet of x ∙ . {\displaystyle x_{\bullet }.} AA-subnets have a defining characterization that immediately shows that they are fully interchangeable with sub(ordinate)filters. Explicitly, what is meant is that the following statement is true for AA-subnets: If B and F {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}}} are prefilters then B ≤ F {\displaystyle {\mathcal {B}}\leq {\mathcal {F}}} if and only if Net F {\displaystyle \operatorname {Net} _{\mathcal {F}}} is an AA-subnet of Net B . {\displaystyle \operatorname {Net} _{\mathcal {B}}.} If "AA-subnet" is replaced by "Willard-subnet" or "Kelley-subnet" then the above statement becomes false. In particular, as this counter-example demonstrates, the problem is that the following statement is in general false: False statement: If B and F {\displaystyle {\mathcal {B}}{\text{ and }}{\mathcal {F}}} are prefilters such that B ≤ F then Net F {\displaystyle {\mathcal {B}}\leq {\mathcal {F}}{\text{ then }}\operatorname {Net} _{\mathcal {F}}} is a Kelley-subnet of Net B . {\displaystyle \operatorname {Net} _{\mathcal {B}}.} Since every Willard-subnet is a Kelley-subnet, this statement thus remains false if the word "Kelley-subnet" is replaced with "Willard-subnet". 
If "subnet" is defined to mean Willard-subnet or Kelley-subnet then nets and filters are not completely interchangeable because there exist filter–sub(ordinate)filter relationships that cannot be expressed in terms of a net–subnet relationship between the two induced nets. In particular, the problem is that Kelley-subnets and Willard-subnets are not fully interchangeable with subordinate filters. If the notion of "subnet" is not used or if "subnet" is defined to mean AA-subnet, then this ceases to be a problem and so it becomes correct to say that nets and filters are interchangeable. Despite the fact that AA-subnets do not have the problem that Willard and Kelley subnets have, they are not widely used or known about. == Topologies and prefilters == Throughout, ( X , τ ) {\displaystyle (X,\tau )} is a topological space. === Examples of relationships between filters and topologies === Bases and prefilters Let B ≠ ∅ {\displaystyle {\mathcal {B}}\neq \varnothing } be a family of sets that covers X {\displaystyle X} and define B x = { B ∈ B : x ∈ B } {\displaystyle {\mathcal {B}}_{x}=\{B\in {\mathcal {B}}~:~x\in B\}} for every x ∈ X . {\displaystyle x\in X.} The definition of a base for some topology can be immediately reworded as: B {\displaystyle {\mathcal {B}}} is a base for some topology on X {\displaystyle X} if and only if B x {\displaystyle {\mathcal {B}}_{x}} is a filter base for every x ∈ X . {\displaystyle x\in X.} If τ {\displaystyle \tau } is a topology on X {\displaystyle X} and B ⊆ τ {\displaystyle {\mathcal {B}}\subseteq \tau } then the definition of " B {\displaystyle {\mathcal {B}}} is a basis (resp. subbase) for τ {\displaystyle \tau } " can be reworded as: B {\displaystyle {\mathcal {B}}} is a base (resp. subbase) for τ {\displaystyle \tau } if and only if for every x ∈ X , B x {\displaystyle x\in X,{\mathcal {B}}_{x}} is a filter base (resp. filter subbase) that generates the neighborhood filter of ( X , τ ) {\displaystyle (X,\tau )} at x .
{\displaystyle x.} Neighborhood filters The archetypical example of a filter is the set of all neighborhoods of a point in a topological space. Any neighborhood basis of a point in (or of a subset of) a topological space is a prefilter. In fact, the definition of a neighborhood base can be equivalently restated as: "a neighborhood base is any prefilter that is equivalent to the neighborhood filter." Neighborhood bases at points are examples of prefilters that are fixed but may or may not be principal. If X = R {\displaystyle X=\mathbb {R} } has its usual topology and if x ∈ X , {\displaystyle x\in X,} then any neighborhood filter base B {\displaystyle {\mathcal {B}}} of x {\displaystyle x} is fixed by x {\displaystyle x} (in fact, it is even true that ker ⁡ B = { x } {\displaystyle \ker {\mathcal {B}}=\{x\}} ) but B {\displaystyle {\mathcal {B}}} is not principal since { x } ∉ B . {\displaystyle \{x\}\not \in {\mathcal {B}}.} In contrast, a topological space has the discrete topology if and only if the neighborhood filter of every point is a principal filter generated by exactly one point. The preceding example shows that a non-principal filter on an infinite set is not necessarily free. The neighborhood filter of every point x {\displaystyle x} in a topological space X {\displaystyle X} is fixed since its kernel contains x {\displaystyle x} (and possibly other points if, for instance, X {\displaystyle X} is not a T1 space). This is also true of any neighborhood basis at x . {\displaystyle x.} For any point x {\displaystyle x} in a T1 space (for example, a Hausdorff space), the kernel of the neighborhood filter of x {\displaystyle x} is equal to the singleton set { x } . {\displaystyle \{x\}.} However, it is possible for a neighborhood filter at a point to be principal but not discrete (that is, not principal at a single point).
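The characterization of the discrete topology by principal neighborhood filters can be verified by brute force on a small space. In the following sketch (illustrative names, assuming only the definitions above), the neighborhood filter at a point is principal and generated by that single point exactly when the singleton is open:

```python
from itertools import combinations

def subsets(X):
    xs = sorted(X)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def neighborhood_filter(opens, x, X):
    """N(x): every set that contains some open set containing x."""
    return {N for N in subsets(X)
            if any(x in U and U <= N for U in opens)}

def principal_at(filt, x):
    """A neighborhood filter is principal and generated by the single
    point x exactly when {x} itself is a neighborhood."""
    return frozenset({x}) in filt

X = frozenset({0, 1, 2})

# Discrete topology: every singleton is open, so every N(x) is
# the principal filter generated by {x}.
discrete = subsets(X)
assert all(principal_at(neighborhood_filter(discrete, x, X), x) for x in X)

# A non-discrete topology on the same set: {1} is not open, so N(1)
# is not principal at 1.
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]
assert principal_at(neighborhood_filter(opens, 0, X), 0)
assert not principal_at(neighborhood_filter(opens, 1, X), 1)
```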
A neighborhood basis B {\displaystyle {\mathcal {B}}} of a point x {\displaystyle x} in a topological space is principal if and only if the kernel of B {\displaystyle {\mathcal {B}}} is an open set. If in addition the space is T1 then ker ⁡ B = { x } {\displaystyle \ker {\mathcal {B}}=\{x\}} so that this basis B {\displaystyle {\mathcal {B}}} is principal if and only if { x } {\displaystyle \{x\}} is an open set. Generating topologies from filters and prefilters Suppose B ⊆ ℘ ( X ) {\displaystyle {\mathcal {B}}\subseteq \wp (X)} is not empty (and X ≠ ∅ {\displaystyle X\neq \varnothing } ). If B {\displaystyle {\mathcal {B}}} is a filter on X {\displaystyle X} then { ∅ } ∪ B {\displaystyle \{\varnothing \}\cup {\mathcal {B}}} is a topology on X {\displaystyle X} but the converse is in general false. This shows that in a sense, filters are almost topologies. Topologies of the form { ∅ } ∪ B {\displaystyle \{\varnothing \}\cup {\mathcal {B}}} where B {\displaystyle {\mathcal {B}}} is an ultrafilter on X {\displaystyle X} are an even more specialized subclass of such topologies; they have the property that every proper subset ∅ ≠ S ⊆ X {\displaystyle \varnothing \neq S\subseteq X} is either open or closed, but (unlike the discrete topology) never both. These spaces are, in particular, examples of door spaces. If B {\displaystyle {\mathcal {B}}} is a prefilter (resp. filter subbase, π-system, proper) on X {\displaystyle X} then the same is true of both { X } ∪ B {\displaystyle \{X\}\cup {\mathcal {B}}} and the set B ∪ {\displaystyle {\mathcal {B}}_{\cup }} of all possible unions of one or more elements of B . 
{\displaystyle {\mathcal {B}}.} If B {\displaystyle {\mathcal {B}}} is closed under finite intersections then the set τ B = { ∅ , X } ∪ B ∪ {\displaystyle \tau _{\mathcal {B}}=\{\varnothing ,X\}\cup {\mathcal {B}}_{\cup }} is a topology on X {\displaystyle X} with both { X } ∪ B ∪ and { X } ∪ B {\displaystyle \{X\}\cup {\mathcal {B}}_{\cup }{\text{ and }}\{X\}\cup {\mathcal {B}}} being bases for it. If the π-system B {\displaystyle {\mathcal {B}}} covers X {\displaystyle X} then both B ∪ and B {\displaystyle {\mathcal {B}}_{\cup }{\text{ and }}{\mathcal {B}}} are also bases for τ B . {\displaystyle \tau _{\mathcal {B}}.} If τ {\displaystyle \tau } is a topology on X {\displaystyle X} then τ ∖ { ∅ } {\displaystyle \tau \setminus \{\varnothing \}} is a prefilter (or equivalently, a π-system) if and only if it has the finite intersection property (that is, it is a filter subbase), in which case a subset B ⊆ τ {\displaystyle {\mathcal {B}}\subseteq \tau } will be a basis for τ {\displaystyle \tau } if and only if B ∖ { ∅ } {\displaystyle {\mathcal {B}}\setminus \{\varnothing \}} is equivalent to τ ∖ { ∅ } , {\displaystyle \tau \setminus \{\varnothing \},} in which case B ∖ { ∅ } {\displaystyle {\mathcal {B}}\setminus \{\varnothing \}} will be a prefilter. === Topological properties and prefilters === Neighborhoods and topologies The neighborhood filter of a nonempty subset S ⊆ X {\displaystyle S\subseteq X} in a topological space X {\displaystyle X} is equal to the intersection of all neighborhood filters of all points in S . {\displaystyle S.} A subset S ⊆ X {\displaystyle S\subseteq X} is open in X {\displaystyle X} if and only if whenever F {\displaystyle {\mathcal {F}}} is a filter on X {\displaystyle X} and s ∈ S , {\displaystyle s\in S,} then F → s in X implies S ∈ F . {\displaystyle {\mathcal {F}}\to s{\text{ in }}X{\text{ implies }}S\in {\mathcal {F}}.} Suppose σ and τ {\displaystyle \sigma {\text{ and }}\tau } are topologies on X . 
{\displaystyle X.} Then τ {\displaystyle \tau } is finer than σ {\displaystyle \sigma } (that is, σ ⊆ τ {\displaystyle \sigma \subseteq \tau } ) if and only if whenever x ∈ X and B {\displaystyle x\in X{\text{ and }}{\mathcal {B}}} is a filter on X , {\displaystyle X,} if B → x in ( X , τ ) {\displaystyle {\mathcal {B}}\to x{\text{ in }}(X,\tau )} then B → x in ( X , σ ) . {\displaystyle {\mathcal {B}}\to x{\text{ in }}(X,\sigma ).} Consequently, σ = τ {\displaystyle \sigma =\tau } if and only if for every filter B on X {\displaystyle {\mathcal {B}}{\text{ on }}X} and every x ∈ X , B → x in ( X , σ ) {\displaystyle x\in X,{\mathcal {B}}\to x{\text{ in }}(X,\sigma )} if and only if B → x in ( X , τ ) . {\displaystyle {\mathcal {B}}\to x{\text{ in }}(X,\tau ).} However, it is possible that σ ≠ τ {\displaystyle \sigma \neq \tau } while also for every filter B on X , B {\displaystyle {\mathcal {B}}{\text{ on }}X,{\mathcal {B}}} converges to some point of X in ( X , σ ) {\displaystyle X{\text{ in }}(X,\sigma )} if and only if B {\displaystyle {\mathcal {B}}} converges to some point of X in ( X , τ ) . {\displaystyle X{\text{ in }}(X,\tau ).} Closure If B {\displaystyle {\mathcal {B}}} is a prefilter on a subset S ⊆ X {\displaystyle S\subseteq X} then every cluster point of B in X {\displaystyle {\mathcal {B}}{\text{ in }}X} belongs to cl X ⁡ S . {\displaystyle \operatorname {cl} _{X}S.} If x ∈ X and S ⊆ X {\displaystyle x\in X{\text{ and }}S\subseteq X} is a non-empty subset, then the following are equivalent: x ∈ cl X ⁡ S {\displaystyle x\in \operatorname {cl} _{X}S} x {\displaystyle x} is a limit point of a prefilter on S . {\displaystyle S.} Explicitly: there exists a prefilter F ⊆ ℘ ( S ) on S {\displaystyle {\mathcal {F}}\subseteq \wp (S){\text{ on }}S} such that F → x in X . {\displaystyle {\mathcal {F}}\to x{\text{ in }}X.} x {\displaystyle x} is a limit point of a filter on S . 
{\displaystyle S.} There exists a prefilter F on X {\displaystyle {\mathcal {F}}{\text{ on }}X} such that S ∈ F and F → x in X . {\displaystyle S\in {\mathcal {F}}{\text{ and }}{\mathcal {F}}\to x{\text{ in }}X.} The prefilter { S } {\displaystyle \{S\}} meshes with the neighborhood filter N ( x ) . {\displaystyle {\mathcal {N}}(x).} Said differently, x {\displaystyle x} is a cluster point of the prefilter { S } . {\displaystyle \{S\}.} The prefilter { S } {\displaystyle \{S\}} meshes with some (or equivalently, with every) filter base for N ( x ) {\displaystyle {\mathcal {N}}(x)} (that is, with every neighborhood basis at x {\displaystyle x} ). The following are equivalent: x {\displaystyle x} is a limit point of S in X . {\displaystyle S{\text{ in }}X.} There exists a prefilter F ⊆ ℘ ( S ) on S ∖ { x } {\displaystyle {\mathcal {F}}\subseteq \wp (S){\text{ on }}S\setminus \{x\}} such that F → x in X . {\displaystyle {\mathcal {F}}\to x{\text{ in }}X.} Closed sets If S ⊆ X {\displaystyle S\subseteq X} is not empty then the following are equivalent: S {\displaystyle S} is a closed subset of X . {\displaystyle X.} If x ∈ X and F ⊆ ℘ ( S ) {\displaystyle x\in X{\text{ and }}{\mathcal {F}}\subseteq \wp (S)} is a prefilter on S {\displaystyle S} such that F → x in X , {\displaystyle {\mathcal {F}}\to x{\text{ in }}X,} then x ∈ S . {\displaystyle x\in S.} If x ∈ X and F ⊆ ℘ ( S ) {\displaystyle x\in X{\text{ and }}{\mathcal {F}}\subseteq \wp (S)} is a prefilter on S {\displaystyle S} such that x {\displaystyle x} is an accumulation point of F in X , {\displaystyle {\mathcal {F}}{\text{ in }}X,} then x ∈ S . {\displaystyle x\in S.} If x ∈ X {\displaystyle x\in X} is such that the neighborhood filter N ( x ) {\displaystyle {\mathcal {N}}(x)} meshes with { S } {\displaystyle \{S\}} then x ∈ S . {\displaystyle x\in S.} Hausdorffness The following are equivalent: X {\displaystyle X} is a Hausdorff space.
Every prefilter on X {\displaystyle X} converges to at most one point in X . {\displaystyle X.} The above statement but with the word "prefilter" replaced by any one of the following: filter, ultra prefilter, ultrafilter. Compactness As discussed in this article, the Ultrafilter Lemma is closely related to many important theorems involving compactness. The following are equivalent: ( X , τ ) {\displaystyle (X,\tau )} is a compact space. Every ultrafilter on X {\displaystyle X} converges to at least one point in X . {\displaystyle X.} That this condition implies compactness can be proven by using only the ultrafilter lemma. That compactness implies this condition can be proven without the ultrafilter lemma (or even the axiom of choice). The above statement but with the word "ultrafilter" replaced by "ultra prefilter". For every filter C on X {\displaystyle {\mathcal {C}}{\text{ on }}X} there exists a filter F on X {\displaystyle {\mathcal {F}}{\text{ on }}X} such that C ≤ F {\displaystyle {\mathcal {C}}\leq {\mathcal {F}}} and F {\displaystyle {\mathcal {F}}} converges to some point of X . {\displaystyle X.} The above statement but with each instance of the word "filter" replaced by: prefilter. Every filter on X {\displaystyle X} has at least one cluster point in X . {\displaystyle X.} That this condition is equivalent to compactness can be proven by using only the ultrafilter lemma. The above statement but with the word "filter" replaced by "prefilter". Alexander subbase theorem: There exists a subbase S for τ {\displaystyle {\mathcal {S}}{\text{ for }}\tau } such that every cover of X {\displaystyle X} by sets in S {\displaystyle {\mathcal {S}}} has a finite subcover. That this condition is equivalent to compactness can be proven by using only the ultrafilter lemma. 
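On a finite set every ultrafilter is principal, so the ultrafilter characterization of compactness can be checked exhaustively on a small example. Finite spaces are always compact, so this is a consistency check rather than a proof; the names in the sketch are illustrative:

```python
from itertools import combinations

def subsets(X):
    xs = sorted(X)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def principal_ultrafilter(p, X):
    """On a finite set every ultrafilter is principal: it consists of
    exactly the subsets containing the point p."""
    return {S for S in subsets(X) if p in S}

def converges_to(filt, x, opens):
    """F -> x iff N(x) <= F; since the filter is upward closed, it is
    enough that every open set containing x belongs to F."""
    return all(U in filt for U in opens if x in U)

X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]  # a topology

# Every ultrafilter converges to at least one point, as the ultrafilter
# characterization of compactness requires.
for p in X:
    F = principal_ultrafilter(p, X)
    assert any(converges_to(F, x, opens) for x in X)
```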
If F {\displaystyle {\mathcal {F}}} is the set of all complements of compact subsets of a given topological space X , {\displaystyle X,} then F {\displaystyle {\mathcal {F}}} is a filter on X {\displaystyle X} if and only if X {\displaystyle X} is not compact. Continuity Let f : X → Y {\displaystyle f:X\to Y} be a map between topological spaces ( X , τ ) and ( Y , υ ) . {\displaystyle (X,\tau ){\text{ and }}(Y,\upsilon ).} Given x ∈ X , {\displaystyle x\in X,} the following are equivalent: f : X → Y {\displaystyle f:X\to Y} is continuous at x . {\displaystyle x.} Definition: For every neighborhood V {\displaystyle V} of f ( x ) in Y {\displaystyle f(x){\text{ in }}Y} there exists some neighborhood N {\displaystyle N} of x in X {\displaystyle x{\text{ in }}X} such that f ( N ) ⊆ V . {\displaystyle f(N)\subseteq V.} f ( N ( x ) ) → f ( x ) in Y . {\displaystyle f({\mathcal {N}}(x))\to f(x){\text{ in }}Y.} If B {\displaystyle {\mathcal {B}}} is a filter on X {\displaystyle X} such that B → x in X {\displaystyle {\mathcal {B}}\to x{\text{ in }}X} then f ( B ) → f ( x ) in Y . {\displaystyle f({\mathcal {B}})\to f(x){\text{ in }}Y.} The above statement but with the word "filter" replaced by "prefilter". The following are equivalent: f : X → Y {\displaystyle f:X\to Y} is continuous. If x ∈ X and B {\displaystyle x\in X{\text{ and }}{\mathcal {B}}} is a prefilter on X {\displaystyle X} such that B → x in X {\displaystyle {\mathcal {B}}\to x{\text{ in }}X} then f ( B ) → f ( x ) in Y . {\displaystyle f({\mathcal {B}})\to f(x){\text{ in }}Y.} If x ∈ X {\displaystyle x\in X} is a limit point of a prefilter B on X {\displaystyle {\mathcal {B}}{\text{ on }}X} then f ( x ) {\displaystyle f(x)} is a limit point of f ( B ) in Y . {\displaystyle f({\mathcal {B}}){\text{ in }}Y.} Any one of the above two statements but with the word "prefilter" replaced by "filter". 
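The filter-convergence criterion for continuity at a point can be tested directly on the Sierpiński space. In this sketch (illustrative names), the swap map on the two-point space is continuous at one point and discontinuous at the other, and the criterion f(N(x)) → f(x) detects both:

```python
def image_prefilter(f, prefilter):
    """Forward image: apply f elementwise to each member."""
    return [frozenset(f(a) for a in S) for S in prefilter]

def converges(prefilter, y, opens):
    """C -> y iff N(y) <= C: every open set containing y (and hence
    every neighborhood of y) contains some member of C."""
    return all(any(C <= U for C in prefilter) for U in opens if y in U)

opens = [frozenset(), frozenset({0}), frozenset({0, 1})]  # Sierpinski topology

def nbhd_base(x):
    """The open sets containing x: a base of the neighborhood filter."""
    return [U for U in opens if x in U]

swap = lambda a: 1 - a

# swap is continuous at 0 (the only neighborhood of 1 is the whole space)
# but discontinuous at 1 (the open set {0} witnesses the failure).
assert converges(image_prefilter(swap, nbhd_base(0)), swap(0), opens)
assert not converges(image_prefilter(swap, nbhd_base(1)), swap(1), opens)
```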
If B {\displaystyle {\mathcal {B}}} is a prefilter on X , x ∈ X {\displaystyle X,x\in X} is a cluster point of B , and f : X → Y {\displaystyle {\mathcal {B}},{\text{ and }}f:X\to Y} is continuous, then f ( x ) {\displaystyle f(x)} is a cluster point in Y {\displaystyle Y} of the prefilter f ( B ) . {\displaystyle f({\mathcal {B}}).} A subset D {\displaystyle D} of a topological space X {\displaystyle X} is dense in X {\displaystyle X} if and only if for every x ∈ X , {\displaystyle x\in X,} the trace N X ( x ) | D {\displaystyle {\mathcal {N}}_{X}(x){\big \vert }_{D}} of the neighborhood filter N X ( x ) {\displaystyle {\mathcal {N}}_{X}(x)} along D {\displaystyle D} does not contain the empty set (in which case it will be a filter on D {\displaystyle D} ). Suppose f : D → Y {\displaystyle f:D\to Y} is a continuous map into a regular Hausdorff space Y {\displaystyle Y} and that D {\displaystyle D} is a dense subset of a topological space X . {\displaystyle X.} Then f {\displaystyle f} has a continuous extension F : X → Y {\displaystyle F:X\to Y} if and only if for every x ∈ X , {\displaystyle x\in X,} the prefilter f ( N X ( x ) | D ) {\displaystyle f\left({\mathcal {N}}_{X}(x){\big \vert }_{D}\right)} converges to some point in Y . {\displaystyle Y.} Furthermore, this continuous extension will be unique whenever it exists. Products Suppose X ∙ := ( X i ) i ∈ I {\displaystyle X_{\bullet }:=\left(X_{i}\right)_{i\in I}} is a non-empty family of non-empty topological spaces and that B ∙ := ( B i ) i ∈ I {\displaystyle {\mathcal {B}}_{\bullet }:=\left({\mathcal {B}}_{i}\right)_{i\in I}} is a family of prefilters where each B i {\displaystyle {\mathcal {B}}_{i}} is a prefilter on X i . {\displaystyle X_{i}.} Then the product B ∙ {\displaystyle {\mathcal {B}}_{\bullet }} of these prefilters (defined above) is a prefilter on the product space ∏ X ∙ , {\displaystyle {\textstyle \prod }X_{\bullet },} which, as usual, is endowed with the product topology.
If x ∙ := ( x i ) i ∈ I ∈ ∏ X ∙ , {\displaystyle x_{\bullet }:=\left(x_{i}\right)_{i\in I}\in {\textstyle \prod }X_{\bullet },} then B ∙ → x ∙ in ∏ X ∙ {\displaystyle {\mathcal {B}}_{\bullet }\to x_{\bullet }{\text{ in }}{\textstyle \prod }X_{\bullet }} if and only if B i → x i in X i for every i ∈ I . {\displaystyle {\mathcal {B}}_{i}\to x_{i}{\text{ in }}X_{i}{\text{ for every }}i\in I.} Suppose X and Y {\displaystyle X{\text{ and }}Y} are topological spaces, B {\displaystyle {\mathcal {B}}} is a prefilter on X {\displaystyle X} having x ∈ X {\displaystyle x\in X} as a cluster point, and C {\displaystyle {\mathcal {C}}} is a prefilter on Y {\displaystyle Y} having y ∈ Y {\displaystyle y\in Y} as a cluster point. Then ( x , y ) {\displaystyle (x,y)} is a cluster point of B × C {\displaystyle {\mathcal {B}}\times {\mathcal {C}}} in the product space X × Y . {\displaystyle X\times Y.} However, if X = Y = Q {\displaystyle X=Y=\mathbb {Q} } then there exist sequences ( x i ) i = 1 ∞ ⊆ X and ( y i ) i = 1 ∞ ⊆ Y {\displaystyle \left(x_{i}\right)_{i=1}^{\infty }\subseteq X{\text{ and }}\left(y_{i}\right)_{i=1}^{\infty }\subseteq Y} such that both of these sequences have a cluster point in Q {\displaystyle \mathbb {Q} } but the sequence ( x i , y i ) i = 1 ∞ ⊆ X × Y {\displaystyle \left(x_{i},y_{i}\right)_{i=1}^{\infty }\subseteq X\times Y} does not have a cluster point in X × Y . {\displaystyle X\times Y.} Example application: The ultrafilter lemma along with the axioms of ZF imply Tychonoff's theorem for compact Hausdorff spaces. == Examples of applications of prefilters == === Uniformities and Cauchy prefilters === A uniform space is a set X {\displaystyle X} equipped with a filter on X × X {\displaystyle X\times X} that has certain properties. A base or fundamental system of entourages is a prefilter on X × X {\displaystyle X\times X} whose upward closure is a uniformity.
A prefilter B {\displaystyle {\mathcal {B}}} on a uniform space X {\displaystyle X} with uniformity F {\displaystyle {\mathcal {F}}} is called a Cauchy prefilter if for every entourage N ∈ F , {\displaystyle N\in {\mathcal {F}},} there exists some B ∈ B {\displaystyle B\in {\mathcal {B}}} that is N {\displaystyle N} -small, which means that B × B ⊆ N . {\displaystyle B\times B\subseteq N.} A minimal Cauchy filter is a minimal element (with respect to ≤ {\displaystyle \,\leq \,} or equivalently, to ⊆ {\displaystyle \,\subseteq } ) of the set of all Cauchy filters on X . {\displaystyle X.} Examples of minimal Cauchy filters include the neighborhood filter N X ( x ) {\displaystyle {\mathcal {N}}_{X}(x)} of any point x ∈ X . {\displaystyle x\in X.} Every convergent filter on a uniform space is Cauchy. Moreover, every cluster point of a Cauchy filter is a limit point. A uniform space ( X , F ) {\displaystyle (X,{\mathcal {F}})} is called complete (resp. sequentially complete) if every Cauchy prefilter (resp. every elementary Cauchy prefilter) on X {\displaystyle X} converges to at least one point of X {\displaystyle X} (replacing all instances of the word "prefilter" with "filter" results in an equivalent statement). Every compact uniform space is complete because any Cauchy filter has a cluster point (by compactness), which is necessarily also a limit point (since the filter is Cauchy). Uniform spaces were the result of attempts to generalize notions such as "uniform continuity" and "uniform convergence" that are present in metric spaces. Every topological vector space, and more generally, every topological group can be made into a uniform space in a canonical way. Every uniformity also generates a canonical induced topology. Filters and prefilters play an important role in the theory of uniform spaces. For example, the completion of a Hausdorff uniform space (even if it is not metrizable) is typically constructed by using minimal Cauchy filters.
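For the metric uniformity on the reals, whose basic entourages are the sets N_ε = {(x, y) : |x − y| < ε}, a set B is N_ε-small exactly when its diameter is below ε. The following sketch (a finite truncation, with illustrative names) checks that the tails of the sequence 1, 1/2, 1/3, … form a Cauchy prefilter:

```python
def is_small(B, eps):
    """B is N_eps-small for the entourage N_eps = {(x, y): |x - y| < eps},
    that is, B x B ⊆ N_eps: the diameter of B is below eps."""
    return max(B) - min(B) < eps

# Tails of the sequence 1, 1/2, 1/3, ... (truncated for the sketch).
tails = [[1 / n for n in range(k, 1000)] for k in range(1, 1000)]

# Cauchy: for every entourage, some tail is small for it.
for eps in (1.0, 0.1, 0.01):
    assert any(is_small(B, eps) for B in tails)
```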
Nets are less ideal for this construction because their domains are extremely varied (for example, the class of all Cauchy nets is not a set); sequences cannot be used in the general case because the topology might not be metrizable, first-countable, or even sequential. The set of all minimal Cauchy filters on a Hausdorff topological vector space (TVS) X {\displaystyle X} can be made into a vector space and topologized in such a way that it becomes a completion of X {\displaystyle X} (with the assignment x ↦ N X ( x ) {\displaystyle x\mapsto {\mathcal {N}}_{X}(x)} becoming a linear topological embedding that identifies X {\displaystyle X} as a dense vector subspace of this completion). More generally, a Cauchy space is a pair ( X , C ) {\displaystyle (X,{\mathfrak {C}})} consisting of a set X {\displaystyle X} together with a family C ⊆ ℘ ( ℘ ( X ) ) {\displaystyle {\mathfrak {C}}\subseteq \wp (\wp (X))} of (proper) filters, whose members are declared to be "Cauchy filters", having all of the following properties: For each x ∈ X , {\displaystyle x\in X,} the discrete ultrafilter at x {\displaystyle x} is an element of C . {\displaystyle {\mathfrak {C}}.} If F ∈ C {\displaystyle F\in {\mathfrak {C}}} is a subset of a proper filter G , {\displaystyle G,} then G ∈ C . {\displaystyle G\in {\mathfrak {C}}.} If F , G ∈ C {\displaystyle F,G\in {\mathfrak {C}}} and if each member of F {\displaystyle F} intersects each member of G , {\displaystyle G,} then F ∩ G ∈ C . {\displaystyle F\cap G\in {\mathfrak {C}}.} The set of all Cauchy filters on a uniform space forms a Cauchy space. Every Cauchy space is also a convergence space. A map f : X → Y {\displaystyle f:X\to Y} between two Cauchy spaces is called Cauchy continuous if the image of every Cauchy filter in X {\displaystyle X} is a Cauchy filter in Y .
{\displaystyle Y.} Unlike the category of topological spaces, the category of Cauchy spaces and Cauchy continuous maps is Cartesian closed, and contains the category of proximity spaces. === Topologizing the set of prefilters === Starting with nothing more than a set X , {\displaystyle X,} it is possible to topologize the set P := Prefilters ⁡ ( X ) {\displaystyle \mathbb {P} :=\operatorname {Prefilters} (X)} of all filter bases on X {\displaystyle X} with the Stone topology, which is named after Marshall Harvey Stone. To reduce confusion, this article will adhere to the following notational conventions: Lower case letters for elements x ∈ X . {\displaystyle x\in X.} Upper case letters for subsets S ⊆ X . {\displaystyle S\subseteq X.} Upper case calligraphy letters for subsets B ⊆ ℘ ( X ) {\displaystyle {\mathcal {B}}\subseteq \wp (X)} (or equivalently, for elements B ∈ ℘ ( ℘ ( X ) ) , {\displaystyle {\mathcal {B}}\in \wp (\wp (X)),} such as prefilters). Upper case double-struck letters for subsets P ⊆ ℘ ( ℘ ( X ) ) . {\displaystyle \mathbb {P} \subseteq \wp (\wp (X)).} For every S ⊆ X , {\displaystyle S\subseteq X,} let O ( S ) := { B ∈ P : S ∈ B ↑ X } {\displaystyle \mathbb {O} (S):=\left\{{\mathcal {B}}\in \mathbb {P} ~:~S\in {\mathcal {B}}^{\uparrow X}\right\}} where O ( X ) = P and O ( ∅ ) = ∅ . {\displaystyle \mathbb {O} (X)=\mathbb {P} {\text{ and }}\mathbb {O} (\varnothing )=\varnothing .} These sets will be the basic open subsets of the Stone topology. If R ⊆ S ⊆ X {\displaystyle R\subseteq S\subseteq X} then { B ∈ ℘ ( ℘ ( X ) ) : R ∈ B ↑ X } ⊆ { B ∈ ℘ ( ℘ ( X ) ) : S ∈ B ↑ X } . {\displaystyle \left\{{\mathcal {B}}\in \wp (\wp (X))~:~R\in {\mathcal {B}}^{\uparrow X}\right\}~\subseteq ~\left\{{\mathcal {B}}\in \wp (\wp (X))~:~S\in {\mathcal {B}}^{\uparrow X}\right\}.} From this inclusion, it is possible to deduce all of the subset inclusions displayed below with the exception of O ( R ∩ S ) ⊇ O ( R ) ∩ O ( S ) . 
{\displaystyle \mathbb {O} (R\cap S)~\supseteq ~\mathbb {O} (R)\cap \mathbb {O} (S).} For all R ⊆ S ⊆ X , {\displaystyle R\subseteq S\subseteq X,} O ( R ∩ S ) = O ( R ) ∩ O ( S ) ⊆ O ( R ) ∪ O ( S ) ⊆ O ( R ∪ S ) {\displaystyle \mathbb {O} (R\cap S)~=~\mathbb {O} (R)\cap \mathbb {O} (S)~\subseteq ~\mathbb {O} (R)\cup \mathbb {O} (S)~\subseteq ~\mathbb {O} (R\cup S)} where in particular, the equality O ( R ∩ S ) = O ( R ) ∩ O ( S ) {\displaystyle \mathbb {O} (R\cap S)=\mathbb {O} (R)\cap \mathbb {O} (S)} shows that the family { O ( S ) : S ⊆ X } {\displaystyle \{\mathbb {O} (S)~:~S\subseteq X\}} is a π {\displaystyle \pi } -system that forms a basis for a topology on P {\displaystyle \mathbb {P} } called the Stone topology. It is henceforth assumed that P {\displaystyle \mathbb {P} } carries this topology and that any subset of P {\displaystyle \mathbb {P} } carries the induced subspace topology. In contrast to most other general constructions of topologies (for example, the product, quotient, subspace topologies, etc.), this topology on P {\displaystyle \mathbb {P} } was defined without using anything other than the set X ; {\displaystyle X;} there were no preexisting structures or assumptions on X {\displaystyle X} so this topology is completely independent of everything other than X {\displaystyle X} (and its subsets). The following criteria can be used for checking for points of closure and neighborhoods. If B ⊆ P and F ∈ P {\displaystyle \mathbb {B} \subseteq \mathbb {P} {\text{ and }}{\mathcal {F}}\in \mathbb {P} } then: Closure in P {\displaystyle \mathbb {P} } : F {\displaystyle \ {\mathcal {F}}} belongs to the closure of B in P {\displaystyle \mathbb {B} {\text{ in }}\mathbb {P} } if and only if F ⊆ ⋃ B ∈ B B ↑ X . 
{\displaystyle {\mathcal {F}}\subseteq {\textstyle \bigcup \limits _{{\mathcal {B}}\in \mathbb {B} }}{\mathcal {B}}^{\uparrow X}.} Neighborhoods in P {\displaystyle \mathbb {P} } : B {\displaystyle \ \mathbb {B} } is a neighborhood of F in P {\displaystyle {\mathcal {F}}{\text{ in }}\mathbb {P} } if and only if there exists some F ∈ F {\displaystyle F\in {\mathcal {F}}} such that O ( F ) = { B ∈ P : F ∈ B ↑ X } ⊆ B {\displaystyle \mathbb {O} (F)=\left\{{\mathcal {B}}\in \mathbb {P} ~:~F\in {\mathcal {B}}^{\uparrow X}\right\}\subseteq \mathbb {B} } (that is, such that for all B ∈ P , if F ∈ B ↑ X then B ∈ B {\displaystyle {\mathcal {B}}\in \mathbb {P} ,{\text{ if }}F\in {\mathcal {B}}^{\uparrow X}{\text{ then }}{\mathcal {B}}\in \mathbb {B} } ). It will be henceforth assumed that X ≠ ∅ {\displaystyle X\neq \varnothing } because otherwise P = ∅ {\displaystyle \mathbb {P} =\varnothing } and the topology is { ∅ } , {\displaystyle \{\varnothing \},} which is uninteresting. Subspace of ultrafilters The set of ultrafilters on X {\displaystyle X} (with the subspace topology) is a Stone space, meaning that it is compact, Hausdorff, and totally disconnected. If X {\displaystyle X} has the discrete topology then the map β : X → UltraFilters ⁡ ( X ) , {\displaystyle \beta :X\to \operatorname {UltraFilters} (X),} defined by sending x ∈ X {\displaystyle x\in X} to the principal ultrafilter at x , {\displaystyle x,} is a topological embedding whose image is a dense subset of UltraFilters ⁡ ( X ) {\displaystyle \operatorname {UltraFilters} (X)} (see the article Stone–Čech compactification for more details). 
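The identity 𝕆(R ∩ S) = 𝕆(R) ∩ 𝕆(S) that makes the basic open sets a π-system can be verified exhaustively when X is small, by enumerating every prefilter on a three-point set (an illustrative brute-force sketch, not part of the standard construction):

```python
from itertools import combinations, product

def subsets(xs):
    xs = sorted(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

X = frozenset({0, 1, 2})
nonempty = [S for S in subsets(X) if S]

def is_prefilter(fam):
    """Nonempty, downward directed family of nonempty sets."""
    return bool(fam) and all(
        any(D <= (B & C) for D in fam) for B, C in product(fam, repeat=2)
    )

prefilters = [frozenset(f)
              for r in range(1, len(nonempty) + 1)
              for f in combinations(nonempty, r)
              if is_prefilter(f)]

def O(S):
    """Basic Stone-open set: prefilters whose upward closure contains S,
    i.e. prefilters having some member that is a subset of S."""
    return {F for F in prefilters if any(B <= S for B in F)}

# The basic open sets form a pi-system.
for R, S in product(subsets(X), repeat=2):
    assert O(R & S) == O(R) & O(S)
```

The nontrivial inclusion 𝕆(R) ∩ 𝕆(S) ⊆ 𝕆(R ∩ S) is exactly where downward directedness of prefilters is used: members B₁ ⊆ R and B₂ ⊆ S contain a common member D ⊆ R ∩ S.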
Relationships between topologies on X {\displaystyle X} and the Stone topology on P {\displaystyle \mathbb {P} } Every τ ∈ Top ⁡ ( X ) {\displaystyle \tau \in \operatorname {Top} (X)} induces a canonical map N τ : X → Filters ⁡ ( X ) {\displaystyle {\mathcal {N}}_{\tau }:X\to \operatorname {Filters} (X)} defined by x ↦ N τ ( x ) , {\displaystyle x\mapsto {\mathcal {N}}_{\tau }(x),} which sends x ∈ X {\displaystyle x\in X} to the neighborhood filter of x in ( X , τ ) . {\displaystyle x{\text{ in }}(X,\tau ).} If τ , σ ∈ Top ⁡ ( X ) {\displaystyle \tau ,\sigma \in \operatorname {Top} (X)} then τ = σ {\displaystyle \tau =\sigma } if and only if N τ = N σ . {\displaystyle {\mathcal {N}}_{\tau }={\mathcal {N}}_{\sigma }.} Thus every topology τ ∈ Top ⁡ ( X ) {\displaystyle \tau \in \operatorname {Top} (X)} can be identified with the canonical map N τ ∈ Func ⁡ ( X ; P ) , {\displaystyle {\mathcal {N}}_{\tau }\in \operatorname {Func} (X;\mathbb {P} ),} which allows Top ⁡ ( X ) {\displaystyle \operatorname {Top} (X)} to be canonically identified as a subset of Func ⁡ ( X ; P ) {\displaystyle \operatorname {Func} (X;\mathbb {P} )} (as a side note, it is now possible to place on Func ⁡ ( X ; P ) , {\displaystyle \operatorname {Func} (X;\mathbb {P} ),} and thus also on Top ⁡ ( X ) , {\displaystyle \operatorname {Top} (X),} the topology of pointwise convergence on X {\displaystyle X} so that it now makes sense to talk about things such as sequences of topologies on X {\displaystyle X} converging pointwise). For every τ ∈ Top ⁡ ( X ) , {\displaystyle \tau \in \operatorname {Top} (X),} the surjection N τ : ( X , τ ) → image ⁡ N τ {\displaystyle {\mathcal {N}}_{\tau }:(X,\tau )\to \operatorname {image} {\mathcal {N}}_{\tau }} is always continuous, closed, and open, but it is injective if and only if τ is T 0 {\displaystyle \tau {\text{ is }}T_{0}} (that is, a Kolmogorov space). 
In particular, for every T 0 {\displaystyle T_{0}} topology τ on X , {\displaystyle \tau {\text{ on }}X,} the map N τ : ( X , τ ) → P {\displaystyle {\mathcal {N}}_{\tau }:(X,\tau )\to \mathbb {P} } is a topological embedding (said differently, every Kolmogorov space is a topological subspace of the space of prefilters). In addition, if F : X → Filters ⁡ ( X ) {\displaystyle {\mathfrak {F}}:X\to \operatorname {Filters} (X)} is a map such that x ∈ ker ⁡ F ( x ) := ⋂ F ∈ F ( x ) F for every x ∈ X {\displaystyle x\in \ker {\mathfrak {F}}(x):={\textstyle \bigcap \limits _{F\in {\mathfrak {F}}(x)}}F{\text{ for every }}x\in X} (which is true of F := N τ , {\displaystyle {\mathfrak {F}}:={\mathcal {N}}_{\tau },} for instance), then for every x ∈ X and F ∈ F ( x ) , {\displaystyle x\in X{\text{ and }}F\in {\mathfrak {F}}(x),} the set F ( F ) = { F ( f ) : f ∈ F } {\displaystyle {\mathfrak {F}}(F)=\{{\mathfrak {F}}(f):f\in F\}} is a neighborhood (in the subspace topology) of F ( x ) in image ⁡ F . 
{\displaystyle {\mathfrak {F}}(x){\text{ in }}\operatorname {image} {\mathfrak {F}}.} == See also == Characterizations of the category of topological spaces Convergence space – Generalization of the notion of convergence that is found in general topology Filtration (mathematics) – Indexed set in mathematics Filtration (probability theory) – Model of information available at a given point of a random process Filtration (abstract algebra) Fréchet filter Generic filter – in set theory, given a collection of dense open subsets of a poset, a filter that meets all sets in that collection Ideal (set theory) – Non-empty family of sets that is closed under finite unions and subsets Stone–Čech compactification#Construction using ultrafilters – Concept in topology The fundamental theorem of ultraproducts == Notes == == Proofs == == Citations == == References ==
Wikipedia/Filters_in_topology
In mathematics, the Dirichlet function is the indicator function 1 Q {\displaystyle \mathbf {1} _{\mathbb {Q} }} of the set of rational numbers Q {\displaystyle \mathbb {Q} } , i.e. 1 Q ( x ) = 1 {\displaystyle \mathbf {1} _{\mathbb {Q} }(x)=1} if x is a rational number and 1 Q ( x ) = 0 {\displaystyle \mathbf {1} _{\mathbb {Q} }(x)=0} if x is not a rational number (i.e. is an irrational number). 1 Q ( x ) = { 1 x ∈ Q 0 x ∉ Q {\displaystyle \mathbf {1} _{\mathbb {Q} }(x)={\begin{cases}1&x\in \mathbb {Q} \\0&x\notin \mathbb {Q} \end{cases}}} It is named after the mathematician Peter Gustav Lejeune Dirichlet. It is a standard example of a pathological function, providing counterexamples in many situations. == Topological properties == == Periodicity == For any real number x and any positive rational number T, 1 Q ( x + T ) = 1 Q ( x ) {\displaystyle \mathbf {1} _{\mathbb {Q} }(x+T)=\mathbf {1} _{\mathbb {Q} }(x)} . The Dirichlet function is therefore an example of a real periodic function which is not constant but whose set of periods, the set of rational numbers, is a dense subset of R {\displaystyle \mathbb {R} } . == Integration properties == == See also == Thomae's function, a variation that is discontinuous only at the rational numbers == References ==
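Floating-point numbers cannot represent irrationality, so any executable illustration of the periodicity property must fix an exact representation. The sketch below is an illustration, not from the article: the helper class `QSqrt2` is hypothetical and works in the field Q(√2), where a + b√2 is rational exactly when b = 0. It checks that adding a positive rational period T never changes the value of the indicator.

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class QSqrt2:
    """A number a + b*sqrt(2) with a, b rational: exact arithmetic in Q(sqrt 2)."""
    a: Fraction
    b: Fraction

    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)

def dirichlet(x: QSqrt2) -> int:
    # 1_Q(x): a + b*sqrt(2) is rational exactly when b == 0.
    return 1 if x.b == 0 else 0

sqrt2 = QSqrt2(Fraction(0), Fraction(1))      # irrational
third = QSqrt2(Fraction(1, 3), Fraction(0))   # rational

assert dirichlet(third) == 1 and dirichlet(sqrt2) == 0

# Periodicity: for every positive rational T, 1_Q(x + T) = 1_Q(x).
for T in (QSqrt2(Fraction(1, 2), Fraction(0)), QSqrt2(Fraction(7), Fraction(0))):
    for x in (sqrt2, third, sqrt2 + third):
        assert dirichlet(x + T) == dirichlet(x)
print("rational shifts preserve the Dirichlet value")
```

The same idea works in any field extension of Q with a computable "rational part"; Q(√2) is just the smallest convenient choice.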
Wikipedia/Dirichlet_function
In multivariable calculus, an iterated limit is a limit of a sequence or a limit of a function in the form lim m → ∞ lim n → ∞ a n , m = lim m → ∞ ( lim n → ∞ a n , m ) {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}=\lim _{m\to \infty }\left(\lim _{n\to \infty }a_{n,m}\right)} , lim y → b lim x → a f ( x , y ) = lim y → b ( lim x → a f ( x , y ) ) {\displaystyle \lim _{y\to b}\lim _{x\to a}f(x,y)=\lim _{y\to b}\left(\lim _{x\to a}f(x,y)\right)} , or other similar forms. An iterated limit is only defined for an expression whose value depends on at least two variables. To evaluate such a limit, one takes the limiting process as one of the two variables approaches some number, getting an expression whose value depends only on the other variable, and then one takes the limit as the other variable approaches some number. == Types of iterated limits == This section introduces definitions of iterated limits in two variables. These may generalize easily to multiple variables. === Iterated limit of sequence === For each n , m ∈ N {\displaystyle n,m\in \mathbf {N} } , let a n , m ∈ R {\displaystyle a_{n,m}\in \mathbf {R} } be a real double sequence. Then there are two forms of iterated limits, namely lim m → ∞ lim n → ∞ a n , m and lim n → ∞ lim m → ∞ a n , m {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}\qquad {\text{and}}\qquad \lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}} . For example, let a n , m = n n + m {\displaystyle a_{n,m}={\frac {n}{n+m}}} . Then lim m → ∞ lim n → ∞ a n , m = lim m → ∞ 1 = 1 {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}=\lim _{m\to \infty }1=1} , and lim n → ∞ lim m → ∞ a n , m = lim n → ∞ 0 = 0 {\displaystyle \lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}=\lim _{n\to \infty }0=0} . === Iterated limit of function === Let f : X × Y → R {\displaystyle f:X\times Y\to \mathbf {R} } . 
Then there are also two forms of iterated limits, namely lim y → b lim x → a f ( x , y ) and lim x → a lim y → b f ( x , y ) {\displaystyle \lim _{y\to b}\lim _{x\to a}f(x,y)\qquad {\text{and}}\qquad \lim _{x\to a}\lim _{y\to b}f(x,y)} . For example, let f : R 2 ∖ { ( 0 , 0 ) } → R {\displaystyle f:\mathbf {R} ^{2}\setminus \{(0,0)\}\to \mathbf {R} } such that f ( x , y ) = x 2 x 2 + y 2 {\displaystyle f(x,y)={\frac {x^{2}}{x^{2}+y^{2}}}} . Then lim y → 0 lim x → 0 x 2 x 2 + y 2 = lim y → 0 0 = 0 {\displaystyle \lim _{y\to 0}\lim _{x\to 0}{\frac {x^{2}}{x^{2}+y^{2}}}=\lim _{y\to 0}0=0} , and lim x → 0 lim y → 0 x 2 x 2 + y 2 = lim x → 0 1 = 1 {\displaystyle \lim _{x\to 0}\lim _{y\to 0}{\frac {x^{2}}{x^{2}+y^{2}}}=\lim _{x\to 0}1=1} . The limit(s) for x and/or y can also be taken at infinity, i.e., lim y → ∞ lim x → ∞ f ( x , y ) and lim x → ∞ lim y → ∞ f ( x , y ) {\displaystyle \lim _{y\to \infty }\lim _{x\to \infty }f(x,y)\qquad {\text{and}}\qquad \lim _{x\to \infty }\lim _{y\to \infty }f(x,y)} . === Iterated limit of sequence of functions === For each n ∈ N {\displaystyle n\in \mathbf {N} } , let f n : X → R {\displaystyle f_{n}:X\to \mathbf {R} } be a sequence of functions. Then there are two forms of iterated limits, namely lim n → ∞ lim x → a f n ( x ) and lim x → a lim n → ∞ f n ( x ) {\displaystyle \lim _{n\to \infty }\lim _{x\to a}f_{n}(x)\qquad {\text{and}}\qquad \lim _{x\to a}\lim _{n\to \infty }f_{n}(x)} . For example, let f n : [ 0 , 1 ] → R {\displaystyle f_{n}:[0,1]\to \mathbf {R} } such that f n ( x ) = x n {\displaystyle f_{n}(x)=x^{n}} . Then lim n → ∞ lim x → 1 f n ( x ) = lim n → ∞ 1 = 1 {\displaystyle \lim _{n\to \infty }\lim _{x\to 1}f_{n}(x)=\lim _{n\to \infty }1=1} , and lim x → 1 lim n → ∞ f n ( x ) = lim x → 1 0 = 0 {\displaystyle \lim _{x\to 1}\lim _{n\to \infty }f_{n}(x)=\lim _{x\to 1}0=0} . 
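The two examples above can be probed numerically: pushing the inner variable much further toward its limit than the outer one mimics taking the inner limit first. This is only a heuristic sketch (the sample values such as 10**12 and 1e-12 are arbitrary choices, not part of the article), but it reproduces the disagreement between the two orders of limits.

```python
# a_{n, m} = n / (n + m): the two iterated limits disagree (1 vs 0).
a = lambda n, m: n / (n + m)
assert abs(a(10**12, 10**3) - 1) < 1e-6   # inner n -> infinity first, then m
assert abs(a(10**3, 10**12) - 0) < 1e-6   # inner m -> infinity first, then n

# f(x, y) = x^2 / (x^2 + y^2): iterated limits at (0, 0) disagree (0 vs 1).
f = lambda x, y: x**2 / (x**2 + y**2)
assert abs(f(1e-12, 1e-3) - 0) < 1e-6     # inner x -> 0 first, then y
assert abs(f(1e-3, 1e-12) - 1) < 1e-6     # inner y -> 0 first, then x
print("order of limits matters in both examples")
```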
The limit in x can also be taken at infinity, i.e., lim n → ∞ lim x → ∞ f n ( x ) and lim x → ∞ lim n → ∞ f n ( x ) {\displaystyle \lim _{n\to \infty }\lim _{x\to \infty }f_{n}(x)\qquad {\text{and}}\qquad \lim _{x\to \infty }\lim _{n\to \infty }f_{n}(x)} . For example, let f n : ( 0 , ∞ ) → R {\displaystyle f_{n}:(0,\infty )\to \mathbf {R} } such that f n ( x ) = 1 x n {\displaystyle f_{n}(x)={\frac {1}{x^{n}}}} . Then lim n → ∞ lim x → ∞ f n ( x ) = lim n → ∞ 0 = 0 {\displaystyle \lim _{n\to \infty }\lim _{x\to \infty }f_{n}(x)=\lim _{n\to \infty }0=0} , and lim x → ∞ lim n → ∞ f n ( x ) = lim x → ∞ 0 = 0 {\displaystyle \lim _{x\to \infty }\lim _{n\to \infty }f_{n}(x)=\lim _{x\to \infty }0=0} . Note that the limit in n is taken discretely, while the limit in x is taken continuously. == Comparison with other limits in multiple variables == This section introduces various definitions of limits in two variables. These may generalize easily to multiple variables. === Limit of sequence === For a double sequence a n , m ∈ R {\displaystyle a_{n,m}\in \mathbf {R} } , there is another definition of limit, which is commonly referred to as double limit, denote by L = lim n → ∞ m → ∞ a n , m {\displaystyle L=\lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}} , which means that for all ϵ > 0 {\displaystyle \epsilon >0} , there exist N = N ( ϵ ) ∈ N {\displaystyle N=N(\epsilon )\in \mathbf {N} } such that n , m > N {\displaystyle n,m>N} implies | a n , m − L | < ϵ {\displaystyle \left|a_{n,m}-L\right|<\epsilon } . The following theorem states the relationship between double limit and iterated limits. Theorem 1. 
If lim n → ∞ m → ∞ a n , m {\displaystyle \lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}} exists and equals L, lim n → ∞ a n , m {\displaystyle \lim _{n\to \infty }a_{n,m}} exists for each large m, and lim m → ∞ a n , m {\displaystyle \lim _{m\to \infty }a_{n,m}} exists for each large n, then lim m → ∞ lim n → ∞ a n , m {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}} and lim n → ∞ lim m → ∞ a n , m {\displaystyle \lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}} also exist, and they equal L, i.e., lim m → ∞ lim n → ∞ a n , m = lim n → ∞ lim m → ∞ a n , m = lim n → ∞ m → ∞ a n , m {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}=\lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}=\lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}} . Proof. By existence of lim n → ∞ m → ∞ a n , m {\displaystyle \lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}} , for any ϵ > 0 {\displaystyle \epsilon >0} there exists N 1 = N 1 ( ϵ ) ∈ N {\displaystyle N_{1}=N_{1}(\epsilon )\in \mathbf {N} } such that n , m > N 1 {\displaystyle n,m>N_{1}} implies | a n , m − L | < ϵ 2 {\displaystyle \left|a_{n,m}-L\right|<{\frac {\epsilon }{2}}} . For each n > N 0 {\displaystyle n>N_{0}} such that lim m → ∞ a n , m = A n {\displaystyle \lim _{m\to \infty }a_{n,m}=A_{n}} exists, there exists N 2 = N 2 ( ϵ ) ∈ N {\displaystyle N_{2}=N_{2}(\epsilon )\in \mathbf {N} } such that m > N 2 {\displaystyle m>N_{2}} implies | a n , m − A n | < ϵ 2 {\displaystyle \left|a_{n,m}-A_{n}\right|<{\frac {\epsilon }{2}}} . Both the above statements are true for n > max ( N 0 , N 1 ) {\displaystyle n>\max(N_{0},N_{1})} and m > max ( N 1 , N 2 ) {\displaystyle m>\max(N_{1},N_{2})} .
Combining equations from the above two, for any ϵ > 0 {\displaystyle \epsilon >0} there exists N = N ( ϵ ) ∈ N {\displaystyle N=N(\epsilon )\in \mathbf {N} } such that for all n > N {\displaystyle n>N} , | A n − L | < ϵ {\displaystyle \left|A_{n}-L\right|<\epsilon } , which proves that lim n → ∞ lim m → ∞ a n , m = lim n → ∞ m → ∞ a n , m {\displaystyle \lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}=\lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}} . Similarly for lim n → ∞ a n , m {\displaystyle \lim _{n\to \infty }a_{n,m}} , we prove: lim m → ∞ lim n → ∞ a n , m = lim n → ∞ lim m → ∞ a n , m = lim n → ∞ m → ∞ a n , m {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}=\lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}=\lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}} . For example, let a n , m = 1 n + 1 m {\displaystyle a_{n,m}={\frac {1}{n}}+{\frac {1}{m}}} . Since lim n → ∞ m → ∞ a n , m = 0 {\displaystyle \lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}=0} , lim n → ∞ a n , m = 1 m {\displaystyle \lim _{n\to \infty }a_{n,m}={\frac {1}{m}}} , and lim m → ∞ a n , m = 1 n {\displaystyle \lim _{m\to \infty }a_{n,m}={\frac {1}{n}}} , we have lim m → ∞ lim n → ∞ a n , m = lim n → ∞ lim m → ∞ a n , m = 0 {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}=\lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}=0} . This theorem requires the single limits lim n → ∞ a n , m {\displaystyle \lim _{n\to \infty }a_{n,m}} and lim m → ∞ a n , m {\displaystyle \lim _{m\to \infty }a_{n,m}} to converge. This condition cannot be dropped. For example, consider a n , m = ( − 1 ) m ( 1 n + 1 m ) {\displaystyle a_{n,m}=(-1)^{m}\left({\frac {1}{n}}+{\frac {1}{m}}\right)} .
Then we may see that lim n → ∞ m → ∞ a n , m = lim m → ∞ lim n → ∞ a n , m = 0 {\displaystyle \lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}=\lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}=0} , but lim n → ∞ lim m → ∞ a n , m {\displaystyle \lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}} does not exist. This is because lim m → ∞ a n , m {\displaystyle \lim _{m\to \infty }a_{n,m}} does not exist in the first place. === Limit of function === For a two-variable function f : X × Y → R {\displaystyle f:X\times Y\to \mathbf {R} } , there are two other types of limits. One is the ordinary limit, denoted by L = lim ( x , y ) → ( a , b ) f ( x , y ) {\displaystyle L=\lim _{(x,y)\to (a,b)}f(x,y)} , which means that for all ϵ > 0 {\displaystyle \epsilon >0} , there exist δ = δ ( ϵ ) > 0 {\displaystyle \delta =\delta (\epsilon )>0} such that 0 < ( x − a ) 2 + ( y − b ) 2 < δ {\displaystyle 0<{\sqrt {(x-a)^{2}+(y-b)^{2}}}<\delta } implies | f ( x , y ) − L | < ϵ {\displaystyle \left|f(x,y)-L\right|<\epsilon } . For this limit to exist, f(x, y) can be made as close to L as desired along every possible path approaching the point (a, b). In this definition, the point (a, b) is excluded from the paths. Therefore, the value of f at the point (a, b), even if it is defined, does not affect the limit. The other type is the double limit, denoted by L = lim x → a y → b f ( x , y ) {\displaystyle L=\lim _{\begin{smallmatrix}x\to a\\y\to b\end{smallmatrix}}f(x,y)} , which means that for all ϵ > 0 {\displaystyle \epsilon >0} , there exist δ = δ ( ϵ ) > 0 {\displaystyle \delta =\delta (\epsilon )>0} such that 0 < | x − a | < δ {\displaystyle 0<\left|x-a\right|<\delta } and 0 < | y − b | < δ {\displaystyle 0<\left|y-b\right|<\delta } implies | f ( x , y ) − L | < ϵ {\displaystyle \left|f(x,y)-L\right|<\epsilon } . 
For this limit to exist, f(x, y) can be made as close to L as desired along every possible path approaching the point (a, b), except the lines x=a and y=b. In other words, the value of f along the lines x=a and y=b does not affect the limit. This is different from the ordinary limit where only the point (a, b) is excluded. In this sense, ordinary limit is a stronger notion than double limit: Theorem 2. If lim ( x , y ) → ( a , b ) f ( x , y ) {\displaystyle \lim _{(x,y)\to (a,b)}f(x,y)} exists and equals L, then lim x → a y → b f ( x , y ) {\displaystyle \lim _{\begin{smallmatrix}x\to a\\y\to b\end{smallmatrix}}f(x,y)} exists and equals L, i.e., lim x → a y → b f ( x , y ) = lim ( x , y ) → ( a , b ) f ( x , y ) {\displaystyle \lim _{\begin{smallmatrix}x\to a\\y\to b\end{smallmatrix}}f(x,y)=\lim _{(x,y)\to (a,b)}f(x,y)} . Both of these limits do not involve first taking one limit and then another. This contrasts with iterated limits where the limiting process is taken in x-direction first, and then in y-direction (or in reversed order). The following theorem states the relationship between double limit and iterated limits: Theorem 3. If lim x → a y → b f ( x , y ) {\displaystyle \lim _{\begin{smallmatrix}x\to a\\y\to b\end{smallmatrix}}f(x,y)} exists and equals L, lim x → a f ( x , y ) {\displaystyle \lim _{x\to a}f(x,y)} exists for each y near b, and lim y → b f ( x , y ) {\displaystyle \lim _{y\to b}f(x,y)} exists for each x near a, then lim x → a lim y → b f ( x , y ) {\displaystyle \lim _{x\to a}\lim _{y\to b}f(x,y)} and lim y → b lim x → a f ( x , y ) {\displaystyle \lim _{y\to b}\lim _{x\to a}f(x,y)} also exist, and they equal L, i.e., lim x → a lim y → b f ( x , y ) = lim y → b lim x → a f ( x , y ) = lim x → a y → b f ( x , y ) {\displaystyle \lim _{x\to a}\lim _{y\to b}f(x,y)=\lim _{y\to b}\lim _{x\to a}f(x,y)=\lim _{\begin{smallmatrix}x\to a\\y\to b\end{smallmatrix}}f(x,y)} . 
For example, let f ( x , y ) = { 1 for x y ≠ 0 0 for x y = 0 {\displaystyle f(x,y)={\begin{cases}1\quad {\text{for}}\quad xy\neq 0\\0\quad {\text{for}}\quad xy=0\end{cases}}} . Since lim x → 0 y → 0 f ( x , y ) = 1 {\displaystyle \lim _{\begin{smallmatrix}x\to 0\\y\to 0\end{smallmatrix}}f(x,y)=1} , lim x → 0 f ( x , y ) = { 1 for y ≠ 0 0 for y = 0 {\displaystyle \lim _{x\to 0}f(x,y)={\begin{cases}1\quad {\text{for}}\quad y\neq 0\\0\quad {\text{for}}\quad y=0\end{cases}}} and lim y → 0 f ( x , y ) = { 1 for x ≠ 0 0 for x = 0 {\displaystyle \lim _{y\to 0}f(x,y)={\begin{cases}1\quad {\text{for}}\quad x\neq 0\\0\quad {\text{for}}\quad x=0\end{cases}}} , we have lim x → 0 lim y → 0 f ( x , y ) = lim y → 0 lim x → 0 f ( x , y ) = 1 {\displaystyle \lim _{x\to 0}\lim _{y\to 0}f(x,y)=\lim _{y\to 0}\lim _{x\to 0}f(x,y)=1} . (Note that in this example, lim ( x , y ) → ( 0 , 0 ) f ( x , y ) {\displaystyle \lim _{(x,y)\to (0,0)}f(x,y)} does not exist.) This theorem requires the single limits lim x → a f ( x , y ) {\displaystyle \lim _{x\to a}f(x,y)} and lim y → b f ( x , y ) {\displaystyle \lim _{y\to b}f(x,y)} to exist. This condition cannot be dropped. For example, consider f ( x , y ) = x sin ⁡ ( 1 y ) {\displaystyle f(x,y)=x\sin \left({\frac {1}{y}}\right)} . Then we may see that lim x → 0 y → 0 f ( x , y ) = lim y → 0 lim x → 0 f ( x , y ) = 0 {\displaystyle \lim _{\begin{smallmatrix}x\to 0\\y\to 0\end{smallmatrix}}f(x,y)=\lim _{y\to 0}\lim _{x\to 0}f(x,y)=0} , but lim x → 0 lim y → 0 f ( x , y ) {\displaystyle \lim _{x\to 0}\lim _{y\to 0}f(x,y)} does not exist. This is because lim y → 0 f ( x , y ) {\displaystyle \lim _{y\to 0}f(x,y)} does not exist for x near 0 in the first place. Combining Theorem 2 and 3, we have the following corollary: Corollary 3.1. 
If lim ( x , y ) → ( a , b ) f ( x , y ) {\displaystyle \lim _{(x,y)\to (a,b)}f(x,y)} exists and equals L, lim x → a f ( x , y ) {\displaystyle \lim _{x\to a}f(x,y)} exists for each y near b, and lim y → b f ( x , y ) {\displaystyle \lim _{y\to b}f(x,y)} exists for each x near a, then lim x → a lim y → b f ( x , y ) {\displaystyle \lim _{x\to a}\lim _{y\to b}f(x,y)} and lim y → b lim x → a f ( x , y ) {\displaystyle \lim _{y\to b}\lim _{x\to a}f(x,y)} also exist, and they equal L, i.e., lim x → a lim y → b f ( x , y ) = lim y → b lim x → a f ( x , y ) = lim ( x , y ) → ( a , b ) f ( x , y ) {\displaystyle \lim _{x\to a}\lim _{y\to b}f(x,y)=\lim _{y\to b}\lim _{x\to a}f(x,y)=\lim _{(x,y)\to (a,b)}f(x,y)} . === Limit at infinity of function === For a two-variable function f : X × Y → R {\displaystyle f:X\times Y\to \mathbf {R} } , we may also define the double limit at infinity L = lim x → ∞ y → ∞ f ( x , y ) {\displaystyle L=\lim _{\begin{smallmatrix}x\to \infty \\y\to \infty \end{smallmatrix}}f(x,y)} , which means that for all ϵ > 0 {\displaystyle \epsilon >0} , there exist M = M ( ϵ ) > 0 {\displaystyle M=M(\epsilon )>0} such that x > M {\displaystyle x>M} and y > M {\displaystyle y>M} implies | f ( x , y ) − L | < ϵ {\displaystyle \left|f(x,y)-L\right|<\epsilon } . Similar definitions may be given for limits at negative infinity. The following theorem states the relationship between double limit at infinity and iterated limits at infinity: Theorem 4. 
If lim x → ∞ y → ∞ f ( x , y ) {\displaystyle \lim _{\begin{smallmatrix}x\to \infty \\y\to \infty \end{smallmatrix}}f(x,y)} exists and equals L, lim x → ∞ f ( x , y ) {\displaystyle \lim _{x\to \infty }f(x,y)} exists for each large y, and lim y → ∞ f ( x , y ) {\displaystyle \lim _{y\to \infty }f(x,y)} exists for each large x, then lim x → ∞ lim y → ∞ f ( x , y ) {\displaystyle \lim _{x\to \infty }\lim _{y\to \infty }f(x,y)} and lim y → ∞ lim x → ∞ f ( x , y ) {\displaystyle \lim _{y\to \infty }\lim _{x\to \infty }f(x,y)} also exist, and they equal L, i.e., lim x → ∞ lim y → ∞ f ( x , y ) = lim y → ∞ lim x → ∞ f ( x , y ) = lim x → ∞ y → ∞ f ( x , y ) {\displaystyle \lim _{x\to \infty }\lim _{y\to \infty }f(x,y)=\lim _{y\to \infty }\lim _{x\to \infty }f(x,y)=\lim _{\begin{smallmatrix}x\to \infty \\y\to \infty \end{smallmatrix}}f(x,y)} . For example, let f ( x , y ) = x sin ⁡ y x y + y {\displaystyle f(x,y)={\frac {x\sin y}{xy+y}}} . Since lim x → ∞ y → ∞ f ( x , y ) = 0 {\displaystyle \lim _{\begin{smallmatrix}x\to \infty \\y\to \infty \end{smallmatrix}}f(x,y)=0} , lim x → ∞ f ( x , y ) = sin ⁡ y y {\displaystyle \lim _{x\to \infty }f(x,y)={\frac {\sin y}{y}}} and lim y → ∞ f ( x , y ) = 0 {\displaystyle \lim _{y\to \infty }f(x,y)=0} , we have lim y → ∞ lim x → ∞ f ( x , y ) = lim x → ∞ lim y → ∞ f ( x , y ) = 0 {\displaystyle \lim _{y\to \infty }\lim _{x\to \infty }f(x,y)=\lim _{x\to \infty }\lim _{y\to \infty }f(x,y)=0} . Again, this theorem requires the single limits lim x → ∞ f ( x , y ) {\displaystyle \lim _{x\to \infty }f(x,y)} and lim y → ∞ f ( x , y ) {\displaystyle \lim _{y\to \infty }f(x,y)} to exist. This condition cannot be dropped. For example, consider f ( x , y ) = cos ⁡ x y {\displaystyle f(x,y)={\frac {\cos x}{y}}} . 
Then we may see that lim x → ∞ y → ∞ f ( x , y ) = lim x → ∞ lim y → ∞ f ( x , y ) = 0 {\displaystyle \lim _{\begin{smallmatrix}x\to \infty \\y\to \infty \end{smallmatrix}}f(x,y)=\lim _{x\to \infty }\lim _{y\to \infty }f(x,y)=0} , but lim y → ∞ lim x → ∞ f ( x , y ) {\displaystyle \lim _{y\to \infty }\lim _{x\to \infty }f(x,y)} does not exist. This is because lim x → ∞ f ( x , y ) {\displaystyle \lim _{x\to \infty }f(x,y)} does not exist for fixed y in the first place. === Invalid converses of the theorems === The converses of Theorems 1, 3 and 4 do not hold, i.e., the existence of iterated limits, even if they are equal, does not imply the existence of the double limit. A counter-example is f ( x , y ) = x y x 2 + y 2 {\displaystyle f(x,y)={\frac {xy}{x^{2}+y^{2}}}} near the point (0, 0). On one hand, lim x → 0 lim y → 0 f ( x , y ) = lim y → 0 lim x → 0 f ( x , y ) = 0 {\displaystyle \lim _{x\to 0}\lim _{y\to 0}f(x,y)=\lim _{y\to 0}\lim _{x\to 0}f(x,y)=0} . On the other hand, the double limit lim x → 0 y → 0 f ( x , y ) {\displaystyle \lim _{\begin{smallmatrix}x\to 0\\y\to 0\end{smallmatrix}}f(x,y)} does not exist. This can be seen by taking the limit along the path (x, y) = (t, t) → (0,0), which gives lim t → 0 t → 0 f ( t , t ) = lim t → 0 t 2 t 2 + t 2 = 1 2 {\displaystyle \lim _{\begin{smallmatrix}t\to 0\\t\to 0\end{smallmatrix}}f(t,t)=\lim _{t\to 0}{\frac {t^{2}}{t^{2}+t^{2}}}={\frac {1}{2}}} , and along the path (x, y) = (t, t²) → (0,0), which gives lim t → 0 t 2 → 0 f ( t , t 2 ) = lim t → 0 t 3 t 2 + t 4 = 0 {\displaystyle \lim _{\begin{smallmatrix}t\to 0\\t^{2}\to 0\end{smallmatrix}}f(t,t^{2})=\lim _{t\to 0}{\frac {t^{3}}{t^{2}+t^{4}}}=0} . == Moore-Osgood theorem for interchanging limits == In the examples above, we may see that interchanging limits may or may not give the same result. A sufficient condition for interchanging limits is given by the Moore-Osgood theorem. The key to interchangeability is uniform convergence.
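One of those examples, f ( x , y ) = x y / ( x 2 + y 2 ) , can be checked numerically. Both iterated limits vanish, yet different approach paths to (0, 0) give different values, so no double limit exists. This is a heuristic sketch: the sample points and tolerances are arbitrary choices, not part of the article.

```python
f = lambda x, y: x * y / (x**2 + y**2)

# Both iterated limits are 0: with one variable fixed and nonzero,
# the inner limit is already 0.
assert abs(f(1e-12, 0.5)) < 1e-9   # x -> 0 first, with y fixed
assert abs(f(0.5, 1e-12)) < 1e-9   # y -> 0 first, with x fixed

# But the double limit does not exist: different paths into (0, 0)
# give different values.
for t in (1e-3, 1e-6, 1e-9):
    assert abs(f(t, t) - 0.5) < 1e-12    # along y = x:   f(t, t) = 1/2
    assert abs(f(t, t**2)) < 1e-2        # along y = x^2: f(t, t^2) -> 0
print("iterated limits agree, but paths into (0, 0) do not")
```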
=== Interchanging limits of sequences === The following theorem allows us to interchange two limits of sequences. Theorem 5. If lim n → ∞ a n , m = b m {\displaystyle \lim _{n\to \infty }a_{n,m}=b_{m}} uniformly (in m), and lim m → ∞ a n , m = c n {\displaystyle \lim _{m\to \infty }a_{n,m}=c_{n}} for each large n, then both lim m → ∞ b m {\displaystyle \lim _{m\to \infty }b_{m}} and lim n → ∞ c n {\displaystyle \lim _{n\to \infty }c_{n}} exist and are equal to the double limit, i.e., lim m → ∞ lim n → ∞ a n , m = lim n → ∞ lim m → ∞ a n , m = lim n → ∞ m → ∞ a n , m {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }a_{n,m}=\lim _{n\to \infty }\lim _{m\to \infty }a_{n,m}=\lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}} . Proof. By the uniform convergence, for any ϵ > 0 {\displaystyle \epsilon >0} there exists N 1 ( ϵ ) ∈ N {\displaystyle N_{1}(\epsilon )\in \mathbf {N} } such that for all m ∈ N {\displaystyle m\in \mathbf {N} } , n , k > N 1 {\displaystyle n,k>N_{1}} implies | a n , m − a k , m | < ϵ 3 {\displaystyle \left|a_{n,m}-a_{k,m}\right|<{\frac {\epsilon }{3}}} . As m → ∞ {\displaystyle m\to \infty } , we have | c n − c k | < ϵ 3 {\displaystyle \left|c_{n}-c_{k}\right|<{\frac {\epsilon }{3}}} , which means that c n {\displaystyle c_{n}} is a Cauchy sequence which converges to a limit L {\displaystyle L} . In addition, as k → ∞ {\displaystyle k\to \infty } , we have | c n − L | < ϵ 3 {\displaystyle \left|c_{n}-L\right|<{\frac {\epsilon }{3}}} . On the other hand, if we take k → ∞ {\displaystyle k\to \infty } first, we have | a n , m − b m | < ϵ 3 {\displaystyle \left|a_{n,m}-b_{m}\right|<{\frac {\epsilon }{3}}} .
By the pointwise convergence, for any ϵ > 0 {\displaystyle \epsilon >0} and n > N 1 {\displaystyle n>N_{1}} , there exists N 2 ( ϵ , n ) ∈ N {\displaystyle N_{2}(\epsilon ,n)\in \mathbf {N} } such that m > N 2 {\displaystyle m>N_{2}} implies | a n , m − c n | < ϵ 3 {\displaystyle \left|a_{n,m}-c_{n}\right|<{\frac {\epsilon }{3}}} . Then for that fixed n {\displaystyle n} , m > N 2 {\displaystyle m>N_{2}} implies | b m − L | ≤ | b m − a n , m | + | a n , m − c n | + | c n − L | ≤ ϵ {\displaystyle \left|b_{m}-L\right|\leq \left|b_{m}-a_{n,m}\right|+\left|a_{n,m}-c_{n}\right|+\left|c_{n}-L\right|\leq \epsilon } . This proves that lim m → ∞ b m = L = lim n → ∞ c n {\displaystyle \lim _{m\to \infty }b_{m}=L=\lim _{n\to \infty }c_{n}} . Also, by taking N = max { N 1 , N 2 } {\displaystyle N=\max\{N_{1},N_{2}\}} , we see that this limit also equals lim n → ∞ m → ∞ a n , m {\displaystyle \lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}a_{n,m}} . A corollary concerns the interchangeability of infinite sums. Corollary 5.1. If ∑ n = 1 ∞ a n , m {\displaystyle \sum _{n=1}^{\infty }a_{n,m}} converges uniformly (in m), and ∑ m = 1 ∞ a n , m {\displaystyle \sum _{m=1}^{\infty }a_{n,m}} converges for each large n, then ∑ m = 1 ∞ ∑ n = 1 ∞ a n , m = ∑ n = 1 ∞ ∑ m = 1 ∞ a n , m {\displaystyle \sum _{m=1}^{\infty }\sum _{n=1}^{\infty }a_{n,m}=\sum _{n=1}^{\infty }\sum _{m=1}^{\infty }a_{n,m}} . Proof. Direct application of Theorem 5 on S k , ℓ = ∑ m = 1 k ∑ n = 1 ℓ a n , m {\displaystyle S_{k,\ell }=\sum _{m=1}^{k}\sum _{n=1}^{\ell }a_{n,m}} . === Interchanging limits of functions === Similar results hold for multivariable functions. Theorem 6.
If lim x → a f ( x , y ) = g ( y ) {\displaystyle \lim _{x\to a}f(x,y)=g(y)} uniformly (in y) on Y ∖ { b } {\displaystyle Y\setminus \{b\}} , and lim y → b f ( x , y ) = h ( x ) {\displaystyle \lim _{y\to b}f(x,y)=h(x)} for each x near a, then both lim y → b g ( y ) {\displaystyle \lim _{y\to b}g(y)} and lim x → a h ( x ) {\displaystyle \lim _{x\to a}h(x)} exist and are equal to the double limit, i.e., lim y → b lim x → a f ( x , y ) = lim x → a lim y → b f ( x , y ) = lim x → a y → b f ( x , y ) {\displaystyle \lim _{y\to b}\lim _{x\to a}f(x,y)=\lim _{x\to a}\lim _{y\to b}f(x,y)=\lim _{\begin{smallmatrix}x\to a\\y\to b\end{smallmatrix}}f(x,y)} . The a and b here can possibly be infinity. Proof. By the uniform convergence, for any ϵ > 0 {\displaystyle \epsilon >0} there exists δ 1 ( ϵ ) > 0 {\displaystyle \delta _{1}(\epsilon )>0} such that for all y ∈ Y ∖ { b } {\displaystyle y\in Y\setminus \{b\}} , 0 < | x − a | < δ 1 {\displaystyle 0<\left|x-a\right|<\delta _{1}} and 0 < | w − a | < δ 1 {\displaystyle 0<\left|w-a\right|<\delta _{1}} implies | f ( x , y ) − f ( w , y ) | < ϵ 3 {\displaystyle \left|f(x,y)-f(w,y)\right|<{\frac {\epsilon }{3}}} . As y → b {\displaystyle y\to b} , we have | h ( x ) − h ( w ) | < ϵ 3 {\displaystyle \left|h(x)-h(w)\right|<{\frac {\epsilon }{3}}} . By the Cauchy criterion, lim x → a h ( x ) {\displaystyle \lim _{x\to a}h(x)} exists and equals a number L {\displaystyle L} . In addition, as w → a {\displaystyle w\to a} , we have | h ( x ) − L | < ϵ 3 {\displaystyle \left|h(x)-L\right|<{\frac {\epsilon }{3}}} . On the other hand, if we take w → a {\displaystyle w\to a} first, we have | f ( x , y ) − g ( y ) | < ϵ 3 {\displaystyle \left|f(x,y)-g(y)\right|<{\frac {\epsilon }{3}}} .
By the existence of the pointwise limit, for any ϵ > 0 {\displaystyle \epsilon >0} and x {\displaystyle x} near a {\displaystyle a} , there exists δ 2 ( ϵ , x ) > 0 {\displaystyle \delta _{2}(\epsilon ,x)>0} such that 0 < | y − b | < δ 2 {\displaystyle 0<\left|y-b\right|<\delta _{2}} implies | f ( x , y ) − h ( x ) | < ϵ 3 {\displaystyle \left|f(x,y)-h(x)\right|<{\frac {\epsilon }{3}}} . Then for that fixed x {\displaystyle x} , 0 < | y − b | < δ 2 {\displaystyle 0<\left|y-b\right|<\delta _{2}} implies | g ( y ) − L | ≤ | g ( y ) − f ( x , y ) | + | f ( x , y ) − h ( x ) | + | h ( x ) − L | ≤ ϵ {\displaystyle \left|g(y)-L\right|\leq \left|g(y)-f(x,y)\right|+\left|f(x,y)-h(x)\right|+\left|h(x)-L\right|\leq \epsilon } . This proves that lim y → b g ( y ) = L = lim x → a h ( x ) {\displaystyle \lim _{y\to b}g(y)=L=\lim _{x\to a}h(x)} . Also, by taking δ = min { δ 1 , δ 2 } {\displaystyle \delta =\min\{\delta _{1},\delta _{2}\}} , we see that this limit also equals lim x → a y → b f ( x , y ) {\displaystyle \lim _{\begin{smallmatrix}x\to a\\y\to b\end{smallmatrix}}f(x,y)} . Note that this theorem does not imply the existence of lim ( x , y ) → ( a , b ) f ( x , y ) {\displaystyle \lim _{(x,y)\to (a,b)}f(x,y)} . A counter-example is f ( x , y ) = { 1 for x y ≠ 0 0 for x y = 0 {\displaystyle f(x,y)={\begin{cases}1\quad {\text{for}}\quad xy\neq 0\\0\quad {\text{for}}\quad xy=0\end{cases}}} near (0,0). === Interchanging limits of sequences of functions === An important variation of the Moore-Osgood theorem is specifically for sequences of functions. Theorem 7.
If lim n → ∞ f n ( x ) = f ( x ) {\displaystyle \lim _{n\to \infty }f_{n}(x)=f(x)} uniformly (in x) on X ∖ { a } {\displaystyle X\setminus \{a\}} , and lim x → a f n ( x ) = L n {\displaystyle \lim _{x\to a}f_{n}(x)=L_{n}} for each large n, then both lim x → a f ( x ) {\displaystyle \lim _{x\to a}f(x)} and lim n → ∞ L n {\displaystyle \lim _{n\to \infty }L_{n}} exist and are equal, i.e., lim n → ∞ lim x → a f n ( x ) = lim x → a lim n → ∞ f n ( x ) {\displaystyle \lim _{n\to \infty }\lim _{x\to a}f_{n}(x)=\lim _{x\to a}\lim _{n\to \infty }f_{n}(x)} . The a here can possibly be infinity. Proof. By the uniform convergence, for any ϵ > 0 {\displaystyle \epsilon >0} there exists N ( ϵ ) ∈ N {\displaystyle N(\epsilon )\in \mathbf {N} } such that for all x ∈ X ∖ { a } {\displaystyle x\in X\setminus \{a\}} , n , m > N {\displaystyle n,m>N} implies | f n ( x ) − f m ( x ) | < ϵ 3 {\displaystyle \left|f_{n}(x)-f_{m}(x)\right|<{\frac {\epsilon }{3}}} . As x → a {\displaystyle x\to a} , we have | L n − L m | < ϵ 3 {\displaystyle \left|L_{n}-L_{m}\right|<{\frac {\epsilon }{3}}} , which means that L n {\displaystyle L_{n}} is a Cauchy sequence which converges to a limit L {\displaystyle L} . In addition, as m → ∞ {\displaystyle m\to \infty } , we have | L n − L | < ϵ 3 {\displaystyle \left|L_{n}-L\right|<{\frac {\epsilon }{3}}} . On the other hand, if we take m → ∞ {\displaystyle m\to \infty } first, we have | f n ( x ) − f ( x ) | < ϵ 3 {\displaystyle \left|f_{n}(x)-f(x)\right|<{\frac {\epsilon }{3}}} . By the existence of the pointwise limit, for any ϵ > 0 {\displaystyle \epsilon >0} and n > N {\displaystyle n>N} , there exists δ ( ϵ , n ) > 0 {\displaystyle \delta (\epsilon ,n)>0} such that 0 < | x − a | < δ {\displaystyle 0<\left|x-a\right|<\delta } implies | f n ( x ) − L n | < ϵ 3 {\displaystyle \left|f_{n}(x)-L_{n}\right|<{\frac {\epsilon }{3}}} .
Then for that fixed n {\displaystyle n} , 0 < | x − a | < δ {\displaystyle 0<\left|x-a\right|<\delta } implies | f ( x ) − L | ≤ | f ( x ) − f n ( x ) | + | f n ( x ) − L n | + | L n − L | ≤ ϵ {\displaystyle \left|f(x)-L\right|\leq \left|f(x)-f_{n}(x)\right|+\left|f_{n}(x)-L_{n}\right|+\left|L_{n}-L\right|\leq \epsilon } . This proves that lim x → a f ( x ) = L = lim n → ∞ L n {\displaystyle \lim _{x\to a}f(x)=L=\lim _{n\to \infty }L_{n}} . A corollary is the continuity theorem for uniform convergence: Corollary 7.1. If lim n → ∞ f n ( x ) = f ( x ) {\displaystyle \lim _{n\to \infty }f_{n}(x)=f(x)} uniformly (in x) on X {\displaystyle X} , and f n ( x ) {\displaystyle f_{n}(x)} are continuous at x = a ∈ X {\displaystyle x=a\in X} , then f ( x ) {\displaystyle f(x)} is also continuous at x = a {\displaystyle x=a} . In other words, the uniform limit of continuous functions is continuous. Proof. By Theorem 7, lim x → a f ( x ) = lim x → a lim n → ∞ f n ( x ) = lim n → ∞ lim x → a f n ( x ) = lim n → ∞ f n ( a ) = f ( a ) {\displaystyle \lim _{x\to a}f(x)=\lim _{x\to a}\lim _{n\to \infty }f_{n}(x)=\lim _{n\to \infty }\lim _{x\to a}f_{n}(x)=\lim _{n\to \infty }f_{n}(a)=f(a)} . Another corollary concerns the interchangeability of a limit and an infinite sum. Corollary 7.2. If ∑ n = 0 ∞ f n ( x ) {\displaystyle \sum _{n=0}^{\infty }f_{n}(x)} converges uniformly (in x) on X ∖ { a } {\displaystyle X\setminus \{a\}} , and lim x → a f n ( x ) {\displaystyle \lim _{x\to a}f_{n}(x)} exists for each large n, then lim x → a ∑ n = 0 ∞ f n ( x ) = ∑ n = 0 ∞ lim x → a f n ( x ) {\displaystyle \lim _{x\to a}\sum _{n=0}^{\infty }f_{n}(x)=\sum _{n=0}^{\infty }\lim _{x\to a}f_{n}(x)} . Proof. Direct application of Theorem 7 on S k ( x ) = ∑ n = 0 k f n ( x ) {\displaystyle S_{k}(x)=\sum _{n=0}^{k}f_{n}(x)} near x = a {\displaystyle x=a} .
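The role of uniformity in Theorem 7 and Corollary 7.1 can be illustrated with the standard family f_n(x) = x^n on [0, 1). This example is an assumption of the sketch below, not one taken from the text above: the convergence to the zero function is only pointwise, and accordingly the iterated limits at x = 1 disagree.

```python
# The need for uniformity in Theorem 7 / Corollary 7.1, shown with the
# standard family f_n(x) = x^n on [0, 1) (an assumed example, a sketch).

def f_n(n, x):
    return x ** n

# Pointwise limit on [0, 1) is the zero function, but the sup-distance to it
# does not shrink, so the convergence is not uniform:
grid = [i / 1000 for i in range(1000)]            # sample points in [0, 1)
sup_dist = [max(f_n(n, x) for x in grid) for n in (1, 10, 100)]
print(sup_dist)    # stays close to 1 rather than tending to 0

# Accordingly the iterated limits at x = 1 disagree:
#   lim_{n} lim_{x -> 1^-} x^n = lim_{n} 1 = 1,  but
#   lim_{x -> 1^-} lim_{n} x^n = lim_{x -> 1^-} 0 = 0.
```

Since the sampled sup-distance stays near 1 for every n, the hypothesis of Theorem 7 fails, which is consistent with the interchange failing here.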
== Applications == === Sum of infinite entries in a matrix === Consider a matrix with infinitely many entries [ 1 − 1 0 0 ⋯ 0 1 − 1 0 ⋯ 0 0 1 − 1 ⋯ ⋮ ⋮ ⋮ ⋮ ⋱ ] {\displaystyle {\begin{bmatrix}1&-1&0&0&\cdots \\0&1&-1&0&\cdots \\0&0&1&-1&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \end{bmatrix}}} . Suppose we would like to find the sum of all entries. If we sum it column by column first, we will find that the first column gives 1, while all others give 0. Hence the sum of all columns is 1. However, if we sum it row by row first, we will find that all rows give 0. Hence the sum of all rows is 0. The explanation for this paradox is that the vertical sum to infinity and horizontal sum to infinity are two limiting processes that cannot be interchanged. Let S n , m {\displaystyle S_{n,m}} be the sum of entries up to entry (n, m). Then we have lim m → ∞ lim n → ∞ S n , m = 1 {\displaystyle \lim _{m\to \infty }\lim _{n\to \infty }S_{n,m}=1} , but lim n → ∞ lim m → ∞ S n , m = 0 {\displaystyle \lim _{n\to \infty }\lim _{m\to \infty }S_{n,m}=0} . In this case, the double limit lim n → ∞ m → ∞ S n , m {\displaystyle \lim _{\begin{smallmatrix}n\to \infty \\m\to \infty \end{smallmatrix}}S_{n,m}} does not exist, and thus this problem is not well-defined. === Integration over unbounded interval === By the integration theorem for uniform convergence, once f n ( x ) {\displaystyle f_{n}(x)} converges uniformly on X {\displaystyle X} , the limit in n and an integration over a bounded interval [ a , b ] ⊆ X {\displaystyle [a,b]\subseteq X} can be interchanged: lim n → ∞ ∫ a b f n ( x ) d x = ∫ a b lim n → ∞ f n ( x ) d x {\displaystyle \lim _{n\to \infty }\int _{a}^{b}f_{n}(x)\mathrm {d} x=\int _{a}^{b}\lim _{n\to \infty }f_{n}(x)\mathrm {d} x} . However, such a property may fail for an improper integral over an unbounded interval [ a , ∞ ) ⊆ X {\displaystyle [a,\infty )\subseteq X} . In this case, one may rely on the Moore-Osgood theorem.
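Returning to the matrix example above, the partial sums S_{n,m} can be computed directly. This is a sketch; the 1-based indexing convention for the entries is an assumption of the illustration.

```python
# Partial sums S_{n, m} for the matrix example above, with 1-based entries
# a_{i, j} = 1 if j == i, -1 if j == i + 1, and 0 otherwise (an assumed
# indexing convention; a sketch).

def entry(i, j):
    if j == i:
        return 1
    if j == i + 1:
        return -1
    return 0

def S(n, m):
    """Sum of all entries in the top-left n-by-m block."""
    return sum(entry(i, j) for i in range(1, n + 1) for j in range(1, m + 1))

# Columns first (n large, then m grows): every block sums to 1.
cols_first = [S(500, m) for m in (5, 50, 400)]

# Rows first (m large, then n grows): every block sums to 0.
rows_first = [S(n, 500) for n in (5, 50, 400)]

print(cols_first, rows_first)   # [1, 1, 1] [0, 0, 0]
```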
Consider L = ∫ 0 ∞ x 2 e x − 1 d x = lim b → ∞ ∫ 0 b x 2 e x − 1 d x {\displaystyle L=\int _{0}^{\infty }{\frac {x^{2}}{e^{x}-1}}\mathrm {d} x=\lim _{b\to \infty }\int _{0}^{b}{\frac {x^{2}}{e^{x}-1}}\mathrm {d} x} as an example. We first expand the integrand as x 2 e x − 1 = x 2 e − x 1 − e − x = ∑ k = 1 ∞ x 2 e − k x {\displaystyle {\frac {x^{2}}{e^{x}-1}}={\frac {x^{2}e^{-x}}{1-e^{-x}}}=\sum _{k=1}^{\infty }x^{2}e^{-kx}} for x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} . (Here x=0 is a limiting case.) One can prove by calculus that for x ∈ [ 0 , ∞ ) {\displaystyle x\in [0,\infty )} and k ≥ 1 {\displaystyle k\geq 1} , we have x 2 e − k x ≤ 4 e 2 k 2 {\displaystyle x^{2}e^{-kx}\leq {\frac {4}{e^{2}k^{2}}}} . By the Weierstrass M-test, ∑ k = 1 ∞ x 2 e − k x {\displaystyle \sum _{k=1}^{\infty }x^{2}e^{-kx}} converges uniformly on [ 0 , ∞ ) {\displaystyle [0,\infty )} . Then by the integration theorem for uniform convergence, L = lim b → ∞ ∫ 0 b ∑ k = 1 ∞ x 2 e − k x d x = lim b → ∞ ∑ k = 1 ∞ ∫ 0 b x 2 e − k x d x {\displaystyle L=\lim _{b\to \infty }\int _{0}^{b}\sum _{k=1}^{\infty }x^{2}e^{-kx}\mathrm {d} x=\lim _{b\to \infty }\sum _{k=1}^{\infty }\int _{0}^{b}x^{2}e^{-kx}\mathrm {d} x} . To further interchange the limit lim b → ∞ {\displaystyle \lim _{b\to \infty }} with the infinite summation ∑ k = 1 ∞ {\displaystyle \sum _{k=1}^{\infty }} , the Moore-Osgood theorem requires the infinite series to be uniformly convergent. Note that ∫ 0 b x 2 e − k x d x ≤ ∫ 0 ∞ x 2 e − k x d x = 2 k 3 {\displaystyle \int _{0}^{b}x^{2}e^{-kx}\mathrm {d} x\leq \int _{0}^{\infty }x^{2}e^{-kx}\mathrm {d} x={\frac {2}{k^{3}}}} . Again, by the Weierstrass M-test, ∑ k = 1 ∞ ∫ 0 b x 2 e − k x d x {\displaystyle \sum _{k=1}^{\infty }\int _{0}^{b}x^{2}e^{-kx}\mathrm {d} x} converges uniformly (in b) on ( 0 , ∞ ) {\displaystyle (0,\infty )} .
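A numerical cross-check of this derivation may be reassuring. The sketch below compares the integral directly against the series of termwise integrals, whose k-th term is 2/k³ for k ≥ 1; the composite Simpson rule, the truncation point x = 60, and the 200000-term series cutoff are all ad-hoc assumptions of the illustration.

```python
import math

# Numerical cross-check (a sketch with ad-hoc truncations) that the integral
# of x^2/(e^x - 1) over [0, infinity) agrees with the sum over k >= 1 of 2/k^3.

def integrand(x):
    return 0.0 if x == 0 else x * x / math.expm1(x)   # value 0 at x = 0 by continuity

def simpson(f, a, b, n):
    """Composite Simpson rule with n subintervals (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

integral = simpson(integrand, 0.0, 60.0, 20000)    # tail beyond 60 is negligible
series = sum(2.0 / k**3 for k in range(1, 200000))

print(integral, series)   # both approximately 2.40411
```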
Then by the Moore-Osgood theorem, L = lim b → ∞ ∑ k = 1 ∞ ∫ 0 b x 2 e − k x d x = ∑ k = 1 ∞ lim b → ∞ ∫ 0 b x 2 e − k x d x = ∑ k = 1 ∞ 2 k 3 = 2 ζ ( 3 ) {\displaystyle L=\lim _{b\to \infty }\sum _{k=1}^{\infty }\int _{0}^{b}x^{2}e^{-kx}\mathrm {d} x=\sum _{k=1}^{\infty }\lim _{b\to \infty }\int _{0}^{b}x^{2}e^{-kx}\mathrm {d} x=\sum _{k=1}^{\infty }{\frac {2}{k^{3}}}=2\zeta (3)} . (Here ζ {\displaystyle \zeta } is the Riemann zeta function.) == See also == Limit of a sequence Limit of a function Uniform convergence Interchange of limiting operations
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input which may or may not be in the domain of the function. Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say that the function has a limit L at an input p, if f(x) gets closer and closer to L as x moves closer and closer to p. More specifically, the output value can be made arbitrarily close to L if the input to f is taken sufficiently close to p. On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, then we say the limit does not exist. The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function. == History == Although implicit in the development of calculus of the 17th and 18th centuries, the modern idea of the limit of a function goes back to Bernard Bolzano who, in 1817, introduced the basics of the epsilon-delta technique (see (ε, δ)-definition of limit below) to define continuous functions. However, his work was not known during his lifetime. Bruce Pourciau argues that Isaac Newton, in his 1687 Principia, demonstrates a more sophisticated understanding of limits than he is generally given credit for, including being the first to present an epsilon argument. 
In his 1821 book Cours d'analyse, Augustin-Louis Cauchy discussed variable quantities, infinitesimals and limits, and defined continuity of y = f ( x ) {\displaystyle y=f(x)} by saying that an infinitesimal change in x necessarily produces an infinitesimal change in y, while Grabiner claims that he used a rigorous epsilon-delta definition in proofs. In 1861, Karl Weierstrass first introduced the epsilon-delta definition of limit in the form it is usually written today. He also introduced the notations lim {\textstyle \lim } and lim x → x 0 . {\textstyle \textstyle \lim _{x\to x_{0}}\displaystyle .} The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908. == Motivation == Imagine a person walking on a landscape represented by the graph y = f(x). Their horizontal position is given by x, much like the position given by a map of the land or by a global positioning system. Their altitude is given by the coordinate y. Suppose they walk towards a position x = p; as they get closer and closer to this point, they will notice that their altitude approaches a specific value L. If asked about the altitude corresponding to x = p, they would reply by saying y = L. What, then, does it mean to say that their altitude is approaching L? It means that their altitude gets nearer and nearer to L—except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: they must get within ten meters of L. They report back that indeed, they can get within ten vertical meters of L, arguing that as long as they are within fifty horizontal meters of p, their altitude is always within ten meters of L. The accuracy goal is then changed: can they get within one vertical meter? Yes, supposing that they are able to move within five horizontal meters of p, their altitude will always remain within one meter from the target altitude L.
Summarizing the aforementioned concept, we can say that the traveler's altitude approaches L as their horizontal position approaches p: for every target accuracy goal, however small it may be, there is some neighbourhood of p such that the altitudes corresponding to all horizontal positions in that neighbourhood, except maybe the position p itself, fulfill that accuracy goal. The initial informal statement can now be explicated, and this explication is quite close to the formal definition of the limit of a function, with values in a topological space. More specifically, to say that lim x → p f ( x ) = L , {\displaystyle \lim _{x\to p}f(x)=L,} is to say that f(x) can be made as close to L as desired, by making x close enough, but not equal, to p. The following definitions, known as (ε, δ)-definitions, are the generally accepted definitions for the limit of a function in various contexts. == Functions of a single variable == === (ε, δ)-definition of limit === Suppose f : R → R {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } is a function defined on the real line, and there are two real numbers p and L. One would say: The limit of f of x, as x approaches p, exists, and it equals L. and write, lim x → p f ( x ) = L , {\displaystyle \lim _{x\to p}f(x)=L,} or alternatively, say f(x) tends to L as x tends to p, and write, f ( x ) → L as x → p , {\displaystyle f(x)\to L{\text{ as }}x\to p,} if the following property holds: for every real ε > 0, there exists a real δ > 0 such that for all real x, 0 < |x − p| < δ implies |f(x) − L| < ε. Symbolically, ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ R ) ( 0 < | x − p | < δ ⟹ | f ( x ) − L | < ε ) .
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in \mathbb {R} )\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).} For example, we may say lim x → 2 ( 4 x + 1 ) = 9 {\displaystyle \lim _{x\to 2}(4x+1)=9} because for every real ε > 0, we can take δ = ε/4, so that for all real x, if 0 < |x − 2| < δ, then |4x + 1 − 9| < ε. A more general definition applies for functions defined on subsets of the real line. Let S be a subset of R . {\displaystyle \mathbb {R} .} Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a real-valued function. Let p be a point such that there exists some open interval (a, b) containing p with ( a , p ) ∪ ( p , b ) ⊂ S . {\displaystyle (a,p)\cup (p,b)\subset S.} It is then said that the limit of f as x approaches p is L, if: for every real ε > 0, there exists a real δ > 0 such that for all x in (a, b), 0 < |x − p| < δ implies |f(x) − L| < ε. Or, symbolically: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ ( a , b ) ) ( 0 < | x − p | < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).} For example, we may say lim x → 1 x + 3 = 2 {\displaystyle \lim _{x\to 1}{\sqrt {x+3}}=2} because for every real ε > 0, we can take δ = ε, so that for all real x ≥ −3, if 0 < |x − 1| < δ, then |f(x) − 2| < ε. In this example, S = [−3, ∞) contains open intervals around the point 1 (for example, the interval (0, 2)). Here, note that the value of the limit does not depend on f being defined at p, nor on the value f(p)—if it is defined. For example, let f : [ 0 , 1 ) ∪ ( 1 , 2 ] → R , f ( x ) = 2 x 2 − x − 1 x − 1 . {\displaystyle f:[0,1)\cup (1,2]\to \mathbb {R} ,f(x)={\tfrac {2x^{2}-x-1}{x-1}}.} lim x → 1 f ( x ) = 3 {\displaystyle \lim _{x\to 1}f(x)=3} because for every ε > 0, we can take δ = ε/2, so that for all real x ≠ 1, if 0 < |x − 1| < δ, then |f(x) − 3| < ε. Note that here f(1) is undefined.
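The ε–δ bookkeeping in the first example, lim_{x→2} (4x + 1) = 9 with δ = ε/4, can be spot-checked numerically. This is a sketch: random sampling supports, but can never prove, the implication, and δ is shrunk by a tiny factor so floating-point rounding at the boundary cannot blur the strict inequality.

```python
import random

# Spot check of delta = eps/4 for lim_{x->2} (4x + 1) = 9 (a sketch; sampling
# supports but cannot prove the implication; delta is shrunk by a sliver to
# sidestep floating-point rounding at the boundary).
random.seed(0)

def check(eps, trials=10_000):
    delta = (1 - 1e-6) * eps / 4
    for _ in range(trials):
        x = 2 + random.uniform(-delta, delta)
        if 0 < abs(x - 2) < delta and not abs((4 * x + 1) - 9) < eps:
            return False
    return True

results = [check(eps) for eps in (1.0, 0.1, 1e-6)]
print(results)   # [True, True, True]
```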
In fact, a limit can exist in { p ∈ R | ∃ ( a , b ) ⊂ R : p ∈ ( a , b ) and ( a , p ) ∪ ( p , b ) ⊂ S } , {\displaystyle \{p\in \mathbb {R} \,|\,\exists (a,b)\subset \mathbb {R} :\,p\in (a,b){\text{ and }}(a,p)\cup (p,b)\subset S\},} which equals int ⁡ S ∪ iso ⁡ S c , {\displaystyle \operatorname {int} S\cup \operatorname {iso} S^{c},} where int S is the interior of S, and iso Sc are the isolated points of the complement of S. In our previous example where S = [ 0 , 1 ) ∪ ( 1 , 2 ] , {\displaystyle S=[0,1)\cup (1,2],} int ⁡ S = ( 0 , 1 ) ∪ ( 1 , 2 ) , {\displaystyle \operatorname {int} S=(0,1)\cup (1,2),} iso ⁡ S c = { 1 } . {\displaystyle \operatorname {iso} S^{c}=\{1\}.} We see, specifically, this definition of limit allows a limit to exist at 1, but not 0 or 2. The letters ε and δ can be understood as "error" and "distance". In fact, Cauchy used ε as an abbreviation for "error" in some of his work, though in his definition of continuity, he used an infinitesimal α {\displaystyle \alpha } rather than either ε or δ (see Cours d'Analyse). In these terms, the error (ε) in the measurement of the value at the limit can be made as small as desired, by reducing the distance (δ) to the limit point. As discussed below, this definition also works for functions in a more general context. The idea that δ and ε represent distances helps suggest these generalizations. === Existence and one-sided limits === Alternatively, x may approach p from above (right) or below (left), in which case the limits may be written as lim x → p + f ( x ) = L {\displaystyle \lim _{x\to p^{+}}f(x)=L} or lim x → p − f ( x ) = L {\displaystyle \lim _{x\to p^{-}}f(x)=L} respectively. If these limits exist at p and are equal there, then this can be referred to as the limit of f(x) at p. If the one-sided limits exist at p, but are unequal, then there is no limit at p (i.e., the limit at p does not exist). If either one-sided limit does not exist at p, then the limit at p also does not exist. 
A formal definition is as follows. The limit of f as x approaches p from above is L if: For every ε > 0, there exists a δ > 0 such that whenever 0 < x − p < δ, we have |f(x) − L| < ε. ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ ( a , b ) ) ( 0 < x − p < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<x-p<\delta \implies |f(x)-L|<\varepsilon ).} The limit of f as x approaches p from below is L if: For every ε > 0, there exists a δ > 0 such that whenever 0 < p − x < δ, we have |f(x) − L| < ε. ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ ( a , b ) ) ( 0 < p − x < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<p-x<\delta \implies |f(x)-L|<\varepsilon ).} If the limit does not exist, then the oscillation of f at p is non-zero. === More general definition using limit points and subsets === Limits can also be defined by approaching from subsets of the domain. In general: Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a real-valued function defined on some S ⊆ R . {\displaystyle S\subseteq \mathbb {R} .} Let p be a limit point of some T ⊂ S {\displaystyle T\subset S} —that is, p is the limit of some sequence of elements of T distinct from p. Then we say the limit of f, as x approaches p from values in T, is L, written lim x → p x ∈ T f ( x ) = L {\displaystyle \lim _{{x\to p} \atop {x\in T}}f(x)=L} if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ T ) ( 0 < | x − p | < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in T)\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).} Note, T can be any subset of S, the domain of f. And the limit might depend on the selection of T. 
This generalization includes as special cases limits on an interval, as well as left-handed limits of real-valued functions (e.g., by taking T to be an open interval of the form (–∞, a)), and right-handed limits (e.g., by taking T to be an open interval of the form (a, ∞)). It also extends the notion of one-sided limits to the included endpoints of (half-)closed intervals, so the square root function f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} can have limit 0 as x approaches 0 from above: lim x → 0 x ∈ [ 0 , ∞ ) x = 0 {\displaystyle \lim _{{x\to 0} \atop {x\in [0,\infty )}}{\sqrt {x}}=0} since for every ε > 0, we may take δ = ε2 such that for all x ≥ 0, if 0 < |x − 0| < δ, then |f(x) − 0| < ε. This definition allows a limit to be defined at limit points of the domain S, if a suitable subset T which has the same limit point is chosen. Notably, the previous two-sided definition works on int ⁡ S ∪ iso ⁡ S c , {\displaystyle \operatorname {int} S\cup \operatorname {iso} S^{c},} which is a subset of the limit points of S. For example, let S = [ 0 , 1 ) ∪ ( 1 , 2 ] . {\displaystyle S=[0,1)\cup (1,2].} The previous two-sided definition would work at 1 ∈ iso ⁡ S c = { 1 } , {\displaystyle 1\in \operatorname {iso} S^{c}=\{1\},} but it wouldn't work at 0 or 2, which are limit points of S. === Deleted versus non-deleted limits === The definition of limit given here does not depend on how (or whether) f is defined at p. Bartle refers to this as a deleted limit, because it excludes the value of f at p. The corresponding non-deleted limit does depend on the value of f at p, if p is in the domain of f. Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a real-valued function. The non-deleted limit of f, as x approaches p, is L if ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( | x − p | < δ ⟹ | f ( x ) − L | < ε ) . 
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(|x-p|<\delta \implies |f(x)-L|<\varepsilon ).} The definition is the same, except that the neighborhood |x − p| < δ now includes the point p, in contrast to the deleted neighborhood 0 < |x − p| < δ. This makes the definition of a non-deleted limit less general. One of the advantages of working with non-deleted limits is that they allow one to state the theorem about limits of compositions without any constraints on the functions (other than the existence of their non-deleted limits). Bartle notes that although by "limit" some authors do mean this non-deleted limit, deleted limits are the most popular. === Examples === ==== Non-existence of one-sided limit(s) ==== The function f ( x ) = { sin ⁡ 5 x − 1 for x < 1 0 for x = 1 1 10 x − 10 for x > 1 {\displaystyle f(x)={\begin{cases}\sin {\frac {5}{x-1}}&{\text{ for }}x<1\\0&{\text{ for }}x=1\\[2pt]{\frac {1}{10x-10}}&{\text{ for }}x>1\end{cases}}} has no limit at x0 = 1 (the left-hand limit does not exist due to the oscillatory nature of the sine function, and the right-hand limit does not exist due to the asymptotic behaviour of the reciprocal function), but has a limit at every other x-coordinate. The function f ( x ) = { 1 x rational 0 x irrational {\displaystyle f(x)={\begin{cases}1&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}} (a.k.a. the Dirichlet function) has no limit at any x-coordinate. ==== Non-equality of one-sided limits ==== The function f ( x ) = { 1 for x < 0 2 for x ≥ 0 {\displaystyle f(x)={\begin{cases}1&{\text{ for }}x<0\\2&{\text{ for }}x\geq 0\end{cases}}} has a limit at every non-zero x-coordinate (the limit equals 1 for negative x and equals 2 for positive x). The limit at x = 0 does not exist (the left-hand limit equals 1, whereas the right-hand limit equals 2).
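The step function in the last example can be sampled from each side of 0. This is a sketch: finitely many samples illustrate, but do not prove, the one-sided limits.

```python
# Sampling the step function above from each side of 0 (an illustration only).

def f(x):
    return 1 if x < 0 else 2

left = [f(-(10.0 ** -k)) for k in range(1, 8)]    # x -> 0 from below
right = [f(10.0 ** -k) for k in range(1, 8)]      # x -> 0 from above

print(set(left), set(right))   # {1} and {2}: the one-sided limits differ,
                               # so the two-sided limit at 0 does not exist
```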
==== Limits at only one point ==== The functions f ( x ) = { x x rational 0 x irrational {\displaystyle f(x)={\begin{cases}x&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}} and f ( x ) = { | x | x rational 0 x irrational {\displaystyle f(x)={\begin{cases}|x|&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}} both have a limit at x = 0 and it equals 0. ==== Limits at countably many points ==== The function f ( x ) = { sin ⁡ x x irrational 1 x rational {\displaystyle f(x)={\begin{cases}\sin x&x{\text{ irrational }}\\1&x{\text{ rational }}\end{cases}}} has a limit at any x-coordinate of the form π 2 + 2 n π , {\displaystyle {\tfrac {\pi }{2}}+2n\pi ,} where n is any integer. == Limits involving infinity == === Limits at infinity === Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a function defined on S ⊆ R . {\displaystyle S\subseteq \mathbb {R} .} The limit of f as x approaches infinity is L, denoted lim x → ∞ f ( x ) = L , {\displaystyle \lim _{x\to \infty }f(x)=L,} means that: ( ∀ ε > 0 ) ( ∃ c > 0 ) ( ∀ x ∈ S ) ( x > c ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(x>c\implies |f(x)-L|<\varepsilon ).} Similarly, the limit of f as x approaches minus infinity is L, denoted lim x → − ∞ f ( x ) = L , {\displaystyle \lim _{x\to -\infty }f(x)=L,} means that: ( ∀ ε > 0 ) ( ∃ c > 0 ) ( ∀ x ∈ S ) ( x < − c ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(x<-c\implies |f(x)-L|<\varepsilon ).} For example, lim x → ∞ ( − 3 sin ⁡ x x + 4 ) = 4 {\displaystyle \lim _{x\to \infty }\left(-{\frac {3\sin x}{x}}+4\right)=4} because for every ε > 0, we can take c = 3/ε such that for all real x, if x > c, then |f(x) − 4| < ε. Another example is that lim x → − ∞ e x = 0 {\displaystyle \lim _{x\to -\infty }e^{x}=0} because for every ε > 0, we can take c = max{1, −ln(ε)} such that for all real x, if x < −c, then |f(x) − 0| < ε. 
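The c = 3/ε bookkeeping in the first limit-at-infinity example can be spot-checked numerically. This is a sketch: random sampling of x > c supports, but can never prove, the bound |f(x) − 4| < ε, and the sampling range is an arbitrary assumption.

```python
import math
import random

# Spot check of c = 3/eps for lim_{x->inf} (-3 sin x / x + 4) = 4 (a sketch;
# the sampling window of width 1e6 above c is an arbitrary assumption).
random.seed(2)

def f(x):
    return -3 * math.sin(x) / x + 4

def check(eps, trials=10_000):
    c = 3 / eps
    # For x > c we expect |f(x) - 4| = 3 |sin x| / x <= 3 / x < eps.
    return all(abs(f(c + random.uniform(0, 1e6)) - 4) < eps for _ in range(trials))

results = [check(eps) for eps in (1.0, 0.1, 1e-3)]
print(results)   # [True, True, True]
```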
=== Infinite limits === For a function whose values grow without bound, the function diverges and the usual limit does not exist. However, in this case one may introduce limits with infinite values. Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a function defined on S ⊆ R . {\displaystyle S\subseteq \mathbb {R} .} The statement the limit of f as x approaches p is infinity, denoted lim x → p f ( x ) = ∞ , {\displaystyle \lim _{x\to p}f(x)=\infty ,} means that: ( ∀ N > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ f ( x ) > N ) . {\displaystyle (\forall N>0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies f(x)>N).} The statement the limit of f as x approaches p is minus infinity, denoted lim x → p f ( x ) = − ∞ , {\displaystyle \lim _{x\to p}f(x)=-\infty ,} means that: ( ∀ N > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ f ( x ) < − N ) . {\displaystyle (\forall N>0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies f(x)<-N).} For example, lim x → 1 1 ( x − 1 ) 2 = ∞ {\displaystyle \lim _{x\to 1}{\frac {1}{(x-1)^{2}}}=\infty } because for every N > 0, we can take δ = 1 N {\textstyle \delta ={\tfrac {1}{\sqrt {N}}}} such that for all real x, if 0 < |x − 1| < δ, then f(x) > N. These ideas can be used together to produce definitions for different combinations, such as lim x → ∞ f ( x ) = ∞ , {\displaystyle \lim _{x\to \infty }f(x)=\infty ,} or lim x → p + f ( x ) = − ∞ . {\displaystyle \lim _{x\to p^{+}}f(x)=-\infty .} For example, lim x → 0 + ln ⁡ x = − ∞ {\displaystyle \lim _{x\to 0^{+}}\ln x=-\infty } because for every N > 0, we can take δ = e−N such that for all real x > 0, if 0 < x − 0 < δ, then f(x) < −N. Limits involving infinity are connected with the concept of asymptotes. These notions of a limit attempt to provide a metric space interpretation to limits at infinity.
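The δ = 1/√N choice for lim_{x→1} 1/(x − 1)² = ∞ can be spot-checked in the same spirit. This is a sketch: random sampling supports, but cannot prove, the implication, and δ is shrunk by a sliver so floating-point rounding at the boundary cannot blur the strict inequality.

```python
import math
import random

# Spot check of delta = 1/sqrt(N) for lim_{x->1} 1/(x - 1)^2 = infinity
# (a sketch; delta is shrunk slightly to dodge rounding at the boundary).
random.seed(3)

def check(N, trials=10_000):
    delta = (1 - 1e-9) / math.sqrt(N)
    ok = True
    for _ in range(trials):
        x = 1 + random.uniform(-delta, delta)
        if 0 < abs(x - 1) < delta:          # deleted neighbourhood of 1
            ok = ok and 1 / (x - 1) ** 2 > N
    return ok

results = [check(N) for N in (10.0, 1e4, 1e8)]
print(results)   # [True, True, True]
```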
In fact, they are consistent with the topological space definition of limit if a neighborhood of −∞ is defined to contain an interval [−∞, c) for some c ∈ R , {\displaystyle c\in \mathbb {R} ,} a neighborhood of ∞ is defined to contain an interval (c, ∞] where c ∈ R , {\displaystyle c\in \mathbb {R} ,} and a neighborhood of a ∈ R {\displaystyle a\in \mathbb {R} } is defined in the normal way in the metric space R . {\displaystyle \mathbb {R} .} In this case, R ¯ {\displaystyle {\overline {\mathbb {R} }}} is a topological space and any function of the form f : X → Y {\displaystyle f:X\to Y} with X , Y ⊆ R ¯ {\displaystyle X,Y\subseteq {\overline {\mathbb {R} }}} is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense. === Alternative notation === Many authors allow for the projectively extended real line to be used as a way to include infinite values, as well as the extended real line. With this notation, the extended real line is given as R ∪ { − ∞ , + ∞ } {\displaystyle \mathbb {R} \cup \{-\infty ,+\infty \}} and the projectively extended real line is R ∪ { ∞ } {\displaystyle \mathbb {R} \cup \{\infty \}} where a neighborhood of ∞ is a set of the form { x : | x | > c } . {\displaystyle \{x:|x|>c\}.} The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases. As presented above, for a completely rigorous account, we would need to consider 15 separate cases for each combination of infinities (five directions: −∞, left, central, right, and +∞; three bounds: −∞, finite, or +∞). There are also noteworthy pitfalls. For example, when working with the extended real line, x − 1 {\displaystyle x^{-1}} does not possess a central limit (which is normal): lim x → 0 + 1 x = + ∞ , lim x → 0 − 1 x = − ∞ .
{\displaystyle \lim _{x\to 0^{+}}{1 \over x}=+\infty ,\quad \lim _{x\to 0^{-}}{1 \over x}=-\infty .} In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so, the central limit does exist in that context: lim x → 0 + 1 x = lim x → 0 − 1 x = lim x → 0 1 x = ∞ . {\displaystyle \lim _{x\to 0^{+}}{1 \over x}=\lim _{x\to 0^{-}}{1 \over x}=\lim _{x\to 0}{1 \over x}=\infty .} In fact there are a plethora of conflicting formal systems in use. In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes. A simple reason has to do with the converse of lim x → 0 − x − 1 = − ∞ , {\displaystyle \lim _{x\to 0^{-}}{x^{-1}}=-\infty ,} namely, it is convenient for lim x → − ∞ x − 1 = − 0 {\displaystyle \lim _{x\to -\infty }{x^{-1}}=-0} to be considered true. Such zeroes can be seen as an approximation to infinitesimals. === Limits at infinity for rational functions === There are three basic rules for evaluating limits at infinity for a rational function f ( x ) = p ( x ) q ( x ) {\displaystyle f(x)={\tfrac {p(x)}{q(x)}}} (where p and q are polynomials): If the degree of p is greater than the degree of q, then the limit is positive or negative infinity depending on the signs of the leading coefficients; If the degree of p and q are equal, the limit is the leading coefficient of p divided by the leading coefficient of q; If the degree of p is less than the degree of q, the limit is 0. If the limit at infinity exists, it represents a horizontal asymptote at y = L. Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions. == Functions of more than one variable == === Ordinary limits === By noting that |x − p| represents a distance, the definition of a limit can be extended to functions of more than one variable. 
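Returning to the three rules above for limits of rational functions at infinity, they are easy to probe numerically by evaluating p(x)/q(x) at a large x. A brief Python sketch (the coefficient lists below are arbitrary illustrative choices, written in decreasing degree):

```python
def rational(p, q, x):
    """Evaluate p(x)/q(x) for coefficient lists given in decreasing degree."""
    num = sum(c * x ** (len(p) - 1 - i) for i, c in enumerate(p))
    den = sum(c * x ** (len(q) - 1 - i) for i, c in enumerate(q))
    return num / den

x = 1e8  # a large sample point standing in for x -> infinity
# deg p == deg q: limit is the ratio of leading coefficients (3/2 here).
assert abs(rational([3, 1, -5], [2, 7, 0], x) - 1.5) < 1e-6
# deg p < deg q: limit is 0.
assert abs(rational([1, 4], [1, 0, 0], x)) < 1e-6
# deg p > deg q: the value grows without bound (here toward +infinity).
assert rational([1, 0, 0], [1, 0], x) > 1e6
```

A single sample point is of course only a heuristic check, not a proof; the proof is the division by xⁿ described above.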
In the case of a function f : S × T → R {\displaystyle f:S\times T\to \mathbb {R} } defined on S × T ⊆ R 2 , {\displaystyle S\times T\subseteq \mathbb {R} ^{2},} we define the limit as follows: the limit of f as (x, y) approaches (p, q) is L, written lim ( x , y ) → ( p , q ) f ( x , y ) = L {\displaystyle \lim _{(x,y)\to (p,q)}f(x,y)=L} if the following condition holds: For every ε > 0, there exists a δ > 0 such that for all x in S and y in T, whenever 0 < ( x − p ) 2 + ( y − q ) 2 < δ , {\textstyle 0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta ,} we have |f(x, y) − L| < ε, or formally: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( 0 < ( x − p ) 2 + ( y − q ) 2 < δ ⟹ | f ( x , y ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,(0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta \implies |f(x,y)-L|<\varepsilon ).} Here ( x − p ) 2 + ( y − q ) 2 {\textstyle {\sqrt {(x-p)^{2}+(y-q)^{2}}}} is the Euclidean distance between (x, y) and (p, q). (This can in fact be replaced by any norm ||(x, y) − (p, q)||, and be extended to any number of variables.) For example, we may say lim ( x , y ) → ( 0 , 0 ) x 4 x 2 + y 2 = 0 {\displaystyle \lim _{(x,y)\to (0,0)}{\frac {x^{4}}{x^{2}+y^{2}}}=0} because for every ε > 0, we can take δ = ε {\textstyle \delta ={\sqrt {\varepsilon }}} such that for all real x ≠ 0 and real y ≠ 0, if 0 < ( x − 0 ) 2 + ( y − 0 ) 2 < δ , {\textstyle 0<{\sqrt {(x-0)^{2}+(y-0)^{2}}}<\delta ,} then |f(x, y) − 0| < ε. As in the single-variable case, the value of f at (p, q) does not matter in this definition of limit. For such a multivariable limit to exist, this definition requires that the value of f approach L along every possible path approaching (p, q). In the above example, the function f ( x , y ) = x 4 x 2 + y 2 {\displaystyle f(x,y)={\frac {x^{4}}{x^{2}+y^{2}}}} satisfies this condition. 
This can be seen by considering the polar coordinates ( x , y ) = ( r cos ⁡ θ , r sin ⁡ θ ) → ( 0 , 0 ) , {\displaystyle (x,y)=(r\cos \theta ,r\sin \theta )\to (0,0),} which gives lim r → 0 f ( r cos ⁡ θ , r sin ⁡ θ ) = lim r → 0 r 4 cos 4 ⁡ θ r 2 = lim r → 0 r 2 cos 4 ⁡ θ . {\displaystyle \lim _{r\to 0}f(r\cos \theta ,r\sin \theta )=\lim _{r\to 0}{\frac {r^{4}\cos ^{4}\theta }{r^{2}}}=\lim _{r\to 0}r^{2}\cos ^{4}\theta .} Here θ = θ(r) is a function of r which controls the shape of the path along which f is approaching (p, q). Since cos θ is bounded between [−1, 1], by the sandwich theorem, this limit tends to 0. In contrast, the function f ( x , y ) = x y x 2 + y 2 {\displaystyle f(x,y)={\frac {xy}{x^{2}+y^{2}}}} does not have a limit at (0, 0). Taking the path (x, y) = (t, 0) → (0, 0), we obtain lim t → 0 f ( t , 0 ) = lim t → 0 0 t 2 = 0 , {\displaystyle \lim _{t\to 0}f(t,0)=\lim _{t\to 0}{\frac {0}{t^{2}}}=0,} while taking the path (x, y) = (t, t) → (0, 0), we obtain lim t → 0 f ( t , t ) = lim t → 0 t 2 t 2 + t 2 = 1 2 . {\displaystyle \lim _{t\to 0}f(t,t)=\lim _{t\to 0}{\frac {t^{2}}{t^{2}+t^{2}}}={\frac {1}{2}}.} Since the two values do not agree, f does not tend to a single value as (x, y) approaches (0, 0). === Multiple limits === Although less commonly used, there is another type of limit for a multivariable function, known as the multiple limit. For a two-variable function, this is the double limit. Let f : S × T → R {\displaystyle f:S\times T\to \mathbb {R} } be defined on S × T ⊆ R 2 , {\displaystyle S\times T\subseteq \mathbb {R} ^{2},} we say the double limit of f as x approaches p and y approaches q is L, written lim x → p y → q f ( x , y ) = L {\displaystyle \lim _{{x\to p} \atop {y\to q}}f(x,y)=L} if the following condition holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( ( 0 < | x − p | < δ ) ∧ ( 0 < | y − q | < δ ) ⟹ | f ( x , y ) − L | < ε ) . 
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,((0<|x-p|<\delta )\land (0<|y-q|<\delta )\implies |f(x,y)-L|<\varepsilon ).} For such a double limit to exist, this definition requires that the value of f approach L along every possible path approaching (p, q), excluding the two lines x = p and y = q. As a result, the multiple limit is a weaker notion than the ordinary limit: if the ordinary limit exists and equals L, then the multiple limit exists and also equals L. The converse is not true: the existence of the multiple limit does not imply the existence of the ordinary limit. Consider the example f ( x , y ) = { 1 for x y ≠ 0 0 for x y = 0 {\displaystyle f(x,y)={\begin{cases}1\quad {\text{for}}\quad xy\neq 0\\0\quad {\text{for}}\quad xy=0\end{cases}}} where lim x → 0 y → 0 f ( x , y ) = 1 {\displaystyle \lim _{{x\to 0} \atop {y\to 0}}f(x,y)=1} but lim ( x , y ) → ( 0 , 0 ) f ( x , y ) {\displaystyle \lim _{(x,y)\to (0,0)}f(x,y)} does not exist. If the domain of f is restricted to ( S ∖ { p } ) × ( T ∖ { q } ) , {\displaystyle (S\setminus \{p\})\times (T\setminus \{q\}),} then the two definitions of limits coincide. === Multiple limits at infinity === The concept of multiple limit can extend to the limit at infinity, in a way similar to that of a single variable function. For f : S × T → R , {\displaystyle f:S\times T\to \mathbb {R} ,} we say the double limit of f as x and y approach infinity is L, written lim x → ∞ y → ∞ f ( x , y ) = L {\displaystyle \lim _{{x\to \infty } \atop {y\to \infty }}f(x,y)=L} if the following condition holds: ( ∀ ε > 0 ) ( ∃ c > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( ( x > c ) ∧ ( y > c ) ⟹ | f ( x , y ) − L | < ε ) . 
{\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(\forall y\in T)\,((x>c)\land (y>c)\implies |f(x,y)-L|<\varepsilon ).} We say the double limit of f as x and y approach minus infinity is L, written lim x → − ∞ y → − ∞ f ( x , y ) = L {\displaystyle \lim _{{x\to -\infty } \atop {y\to -\infty }}f(x,y)=L} if the following condition holds: ( ∀ ε > 0 ) ( ∃ c > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( ( x < − c ) ∧ ( y < − c ) ⟹ | f ( x , y ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(\forall y\in T)\,((x<-c)\land (y<-c)\implies |f(x,y)-L|<\varepsilon ).} === Pointwise limits and uniform limits === Let f : S × T → R . {\displaystyle f:S\times T\to \mathbb {R} .} Instead of taking the limit as (x, y) → (p, q), we may consider taking the limit of just one variable, say, x → p, to obtain a single-variable function of y, namely g : T → R . {\displaystyle g:T\to \mathbb {R} .} In fact, this limiting process can be done in two distinct ways. The first one is called the pointwise limit. We say the pointwise limit of f as x approaches p is g, denoted lim x → p f ( x , y ) = g ( y ) , {\displaystyle \lim _{x\to p}f(x,y)=g(y),} or lim x → p f ( x , y ) = g ( y ) pointwise . {\displaystyle \lim _{x\to p}f(x,y)=g(y)\;\;{\text{pointwise}}.} Alternatively, we may say f tends to g pointwise as x approaches p, denoted f ( x , y ) → g ( y ) as x → p , {\displaystyle f(x,y)\to g(y)\;\;{\text{as}}\;\;x\to p,} or f ( x , y ) → g ( y ) pointwise as x → p . {\displaystyle f(x,y)\to g(y)\;\;{\text{pointwise}}\;\;{\text{as}}\;\;x\to p.} This limit exists if the following holds: ( ∀ ε > 0 ) ( ∀ y ∈ T ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ | f ( x , y ) − g ( y ) | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\forall y\in T)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies |f(x,y)-g(y)|<\varepsilon ).} Here, δ = δ(ε, y) is a function of both ε and y. Each δ is chosen for a specific point of y. 
Hence we say the limit is pointwise in y. For example, f ( x , y ) = x cos ⁡ y {\displaystyle f(x,y)={\frac {x}{\cos y}}} has a pointwise limit of the constant zero function lim x → 0 f ( x , y ) = 0 ( y ) pointwise {\displaystyle \lim _{x\to 0}f(x,y)=0(y)\;\;{\text{pointwise}}} because for every fixed y, the limit is clearly 0. This argument fails if y is not fixed: if y is very close to π/2, the value of the fraction may deviate from 0. This leads to another definition of limit, namely the uniform limit. We say the uniform limit of f on T as x approaches p is g, denoted u n i f lim x → p y ∈ T f ( x , y ) = g ( y ) , {\displaystyle {\underset {{x\to p} \atop {y\in T}}{\mathrm {unif} \lim \;}}f(x,y)=g(y),} or lim x → p f ( x , y ) = g ( y ) uniformly on T . {\displaystyle \lim _{x\to p}f(x,y)=g(y)\;\;{\text{uniformly on}}\;T.} Alternatively, we may say f tends to g uniformly on T as x approaches p, denoted f ( x , y ) ⇉ g ( y ) on T as x → p , {\displaystyle f(x,y)\rightrightarrows g(y)\;{\text{on}}\;T\;\;{\text{as}}\;\;x\to p,} or f ( x , y ) → g ( y ) uniformly on T as x → p . {\displaystyle f(x,y)\to g(y)\;\;{\text{uniformly on}}\;T\;\;{\text{as}}\;\;x\to p.} This limit exists if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( 0 < | x − p | < δ ⟹ | f ( x , y ) − g ( y ) | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,(0<|x-p|<\delta \implies |f(x,y)-g(y)|<\varepsilon ).} Here, δ = δ(ε) is a function of only ε but not y. In other words, δ is uniformly applicable to all y in T. Hence we say the limit is uniform in y. For example, f ( x , y ) = x cos ⁡ y {\displaystyle f(x,y)=x\cos y} has a uniform limit of the constant zero function lim x → 0 f ( x , y ) = 0 ( y ) uniformly on R {\displaystyle \lim _{x\to 0}f(x,y)=0(y)\;\;{\text{ uniformly on}}\;\mathbb {R} } because for all real y, cos y is bounded between [−1, 1]. 
Hence no matter how y behaves, we may use the sandwich theorem to show that the limit is 0. === Iterated limits === Let f : S × T → R . {\displaystyle f:S\times T\to \mathbb {R} .} We may consider taking the limit of just one variable, say, x → p, to obtain a single-variable function of y, namely g : T → R , {\displaystyle g:T\to \mathbb {R} ,} and then take the limit in the other variable, namely y → q, to get a number L. Symbolically, lim y → q lim x → p f ( x , y ) = lim y → q g ( y ) = L . {\displaystyle \lim _{y\to q}\lim _{x\to p}f(x,y)=\lim _{y\to q}g(y)=L.} This limit is known as the iterated limit of the multivariable function. The order of taking limits may affect the result, i.e., lim y → q lim x → p f ( x , y ) ≠ lim x → p lim y → q f ( x , y ) {\displaystyle \lim _{y\to q}\lim _{x\to p}f(x,y)\neq \lim _{x\to p}\lim _{y\to q}f(x,y)} in general. A sufficient condition for equality is given by the Moore–Osgood theorem, which requires the limit lim x → p f ( x , y ) = g ( y ) {\displaystyle \lim _{x\to p}f(x,y)=g(y)} to be uniform on T. == Functions on metric spaces == Suppose M and N are subsets of metric spaces A and B, respectively, and f : M → N is defined between M and N, with x ∈ M, p a limit point of M and L ∈ N. It is said that the limit of f as x approaches p is L, written lim x → p f ( x ) = L {\displaystyle \lim _{x\to p}f(x)=L} if the following property holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ M ) ( 0 < d A ( x , p ) < δ ⟹ d B ( f ( x ) , L ) < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in M)\,(0<d_{A}(x,p)<\delta \implies d_{B}(f(x),L)<\varepsilon ).} Again, note that p need not be in the domain of f, nor does L need to be in the range of f, and even if f(p) is defined it need not be equal to L. === Euclidean metric === The limit in Euclidean space is a direct generalization of limits to vector-valued functions. 
For example, we may consider a function f : S × T → R 3 {\displaystyle f:S\times T\to \mathbb {R} ^{3}} such that f ( x , y ) = ( f 1 ( x , y ) , f 2 ( x , y ) , f 3 ( x , y ) ) . {\displaystyle f(x,y)=(f_{1}(x,y),f_{2}(x,y),f_{3}(x,y)).} Then, under the usual Euclidean metric, lim ( x , y ) → ( p , q ) f ( x , y ) = ( L 1 , L 2 , L 3 ) {\displaystyle \lim _{(x,y)\to (p,q)}f(x,y)=(L_{1},L_{2},L_{3})} if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( 0 < ( x − p ) 2 + ( y − q ) 2 < δ ⟹ ( f 1 − L 1 ) 2 + ( f 2 − L 2 ) 2 + ( f 3 − L 3 ) 2 < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,\left(0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta \implies {\sqrt {(f_{1}-L_{1})^{2}+(f_{2}-L_{2})^{2}+(f_{3}-L_{3})^{2}}}<\varepsilon \right).} In this example, the function concerned is a finite-dimensional vector-valued function. In this case, the limit theorem for vector-valued functions states that if the limit of each component exists, then the limit of the vector-valued function equals the vector whose components are the limits of the component functions: lim ( x , y ) → ( p , q ) ( f 1 ( x , y ) , f 2 ( x , y ) , f 3 ( x , y ) ) = ( lim ( x , y ) → ( p , q ) f 1 ( x , y ) , lim ( x , y ) → ( p , q ) f 2 ( x , y ) , lim ( x , y ) → ( p , q ) f 3 ( x , y ) ) . {\displaystyle \lim _{(x,y)\to (p,q)}{\Bigl (}f_{1}(x,y),f_{2}(x,y),f_{3}(x,y){\Bigr )}=\left(\lim _{(x,y)\to (p,q)}f_{1}(x,y),\lim _{(x,y)\to (p,q)}f_{2}(x,y),\lim _{(x,y)\to (p,q)}f_{3}(x,y)\right).} === Manhattan metric === One might also want to consider spaces other than Euclidean space. An example would be the Manhattan space. Consider f : S → R 2 {\displaystyle f:S\to \mathbb {R} ^{2}} such that f ( x ) = ( f 1 ( x ) , f 2 ( x ) ) . 
{\displaystyle f(x)=(f_{1}(x),f_{2}(x)).} Then, under the Manhattan metric, lim x → p f ( x ) = ( L 1 , L 2 ) {\displaystyle \lim _{x\to p}f(x)=(L_{1},L_{2})} if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ | f 1 − L 1 | + | f 2 − L 2 | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies |f_{1}-L_{1}|+|f_{2}-L_{2}|<\varepsilon ).} Since this is also a finite-dimensional vector-valued function, the limit theorem stated above also applies. === Uniform metric === Finally, we will discuss the limit in a function space, which is infinite-dimensional. Consider a function f(x, y) in the function space S × T → R . {\displaystyle S\times T\to \mathbb {R} .} We want to find out how, as x approaches p, f(x, y) will tend to another function g(y), which is in the function space T → R . {\displaystyle T\to \mathbb {R} .} The "closeness" in this function space may be measured under the uniform metric. Then, we will say the uniform limit of f on T as x approaches p is g and write u n i f lim x → p y ∈ T f ( x , y ) = g ( y ) , {\displaystyle {\underset {{x\to p} \atop {y\in T}}{\mathrm {unif} \lim \;}}f(x,y)=g(y),} or lim x → p f ( x , y ) = g ( y ) uniformly on T , {\displaystyle \lim _{x\to p}f(x,y)=g(y)\;\;{\text{uniformly on}}\;T,} if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ sup y ∈ T | f ( x , y ) − g ( y ) | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies \sup _{y\in T}|f(x,y)-g(y)|<\varepsilon ).} In fact, one can see that this definition is equivalent to that of the uniform limit of a multivariable function introduced in the previous section. == Functions on topological spaces == Suppose X {\displaystyle X} and Y {\displaystyle Y} are topological spaces with Y {\displaystyle Y} a Hausdorff space. 
Let p {\displaystyle p} be a limit point of Ω ⊆ X {\displaystyle \Omega \subseteq X} , and L ∈ Y {\displaystyle L\in Y} . For a function f : Ω → Y {\displaystyle f:\Omega \to Y} , it is said that the limit of f {\displaystyle f} as x {\displaystyle x} approaches p {\displaystyle p} is L {\displaystyle L} , written lim x → p f ( x ) = L , {\displaystyle \lim _{x\to p}f(x)=L,} if the following property holds: for every open neighborhood V {\displaystyle V} of L {\displaystyle L} , there exists an open neighborhood U {\displaystyle U} of p {\displaystyle p} such that f ( U ∩ Ω − { p } ) ⊆ V {\displaystyle f(U\cap \Omega -\{p\})\subseteq V} . This last part of the definition can also be phrased as "there exists an open punctured neighbourhood U {\displaystyle U} of p {\displaystyle p} such that f ( U ∩ Ω ) ⊆ V {\displaystyle f(U\cap \Omega )\subseteq V} ". The domain of f {\displaystyle f} does not need to contain p {\displaystyle p} . If it does, then the value of f {\displaystyle f} at p {\displaystyle p} is irrelevant to the definition of the limit. In particular, if the domain of f {\displaystyle f} is X ∖ { p } {\displaystyle X\setminus \{p\}} (or all of X {\displaystyle X} ), then the limit of f {\displaystyle f} as x → p {\displaystyle x\to p} exists and is equal to L if, for all subsets Ω of X with limit point p {\displaystyle p} , the limit of the restriction of f {\displaystyle f} to Ω exists and is equal to L. Sometimes this criterion is used to establish the non-existence of the two-sided limit of a function on ⁠ R {\displaystyle \mathbb {R} } ⁠ by showing that the one-sided limits either fail to exist or do not agree. Such a view is fundamental in the field of general topology, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets. 
Alternatively, the requirement that Y {\displaystyle Y} be a Hausdorff space can be relaxed to the assumption that Y {\displaystyle Y} be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about the limit of a function at a point, but rather a limit or the set of limits at a point. A function is continuous at a limit point p {\displaystyle p} of and in its domain if and only if f ( p ) {\displaystyle f(p)} is the (or, in the general case, a) limit of f ( x ) {\displaystyle f(x)} as x {\displaystyle x} tends to p {\displaystyle p} . There is another type of limit of a function, namely the sequential limit. Let f : X → Y {\displaystyle f:X\to Y} be a mapping from a topological space X into a Hausdorff space Y, p ∈ X {\displaystyle p\in X} a limit point of X and L ∈ Y. The sequential limit of f {\displaystyle f} as x {\displaystyle x} tends to p {\displaystyle p} is L if for every sequence ( x n ) {\displaystyle (x_{n})} in X ∖ { p } {\displaystyle X\setminus \{p\}} that converges to p {\displaystyle p} , the sequence f ( x n ) {\displaystyle f(x_{n})} converges to L. If L is the limit (in the sense above) of f {\displaystyle f} as x {\displaystyle x} approaches p {\displaystyle p} , then it is a sequential limit as well; however, the converse need not hold in general. If in addition X is metrizable, then L is the sequential limit of f {\displaystyle f} as x {\displaystyle x} approaches p {\displaystyle p} if and only if it is the limit (in the sense above) of f {\displaystyle f} as x {\displaystyle x} approaches p {\displaystyle p} . == Other characterizations == === In terms of sequences === For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. (This definition is usually attributed to Eduard Heine.) 
In this setting: lim x → a f ( x ) = L {\displaystyle \lim _{x\to a}f(x)=L} if, and only if, for all sequences xn (with, for all n, xn not equal to a) converging to a the sequence f(xn) converges to L. It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires and is equivalent to a weak form of the axiom of choice. Note that defining what it means for a sequence xn to converge to a requires the epsilon, delta method. As with Weierstrass's definition, a more general Heine definition applies to functions defined on subsets of the real line. Let f be a real-valued function with the domain Dm(f ). Let a be the limit of a sequence of elements of Dm(f ) \ {a}. Then the limit (in this sense) of f is L as x approaches a if for every sequence xn ∈ Dm(f ) \ {a} (so that for all n, xn is not equal to a) that converges to a, the sequence f(xn) converges to L. This is the same as the definition of a sequential limit in the preceding section obtained by regarding the subset Dm(f ) of ⁠ R {\displaystyle \mathbb {R} } ⁠ as a metric space with the induced metric. === In non-standard calculus === In non-standard calculus the limit of a function is defined by: lim x → a f ( x ) = L {\displaystyle \lim _{x\to a}f(x)=L} if and only if for all x ∈ R ∗ , {\displaystyle x\in \mathbb {R} ^{*},} f ∗ ( x ) − L {\displaystyle f^{*}(x)-L} is infinitesimal whenever x − a is infinitesimal. Here R ∗ {\displaystyle \mathbb {R} ^{*}} are the hyperreal numbers and f* is the natural extension of f to the non-standard real numbers. Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers. 
On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the ε-δ method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without ε-δ methods cannot be realized in full. Błaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament". === In terms of nearness === At the 1908 International Congress of Mathematicians, F. Riesz introduced an alternative way of defining limits and continuity, using a concept called "nearness". A point x is defined to be near a set A ⊆ R {\displaystyle A\subseteq \mathbb {R} } if for every r > 0 there is a point a ∈ A so that |x − a| < r. In this setting, lim x → a f ( x ) = L {\displaystyle \lim _{x\to a}f(x)=L} if and only if for all A ⊆ R , {\displaystyle A\subseteq \mathbb {R} ,} L is near f(A) whenever a is near A. Here f(A) is the set { f ( x ) | x ∈ A } . {\displaystyle \{f(x)|x\in A\}.} This definition can also be extended to metric and topological spaces. == Relationship to continuity == The notion of the limit of a function is very closely related to the concept of continuity. A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches c: lim x → c f ( x ) = f ( c ) . {\displaystyle \lim _{x\to c}f(x)=f(c).} We have here assumed that c is a limit point of the domain of f. == Properties == If a function f is real-valued, then the limit of f at p is L if and only if both the right-handed limit and left-handed limit of f at p exist and are equal to L. The function f is continuous at p if and only if the limit of f(x) as x approaches p exists and is equal to f(p). 
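The property that a two-sided limit exists exactly when both one-sided limits exist and agree is easy to illustrate with a sign-type function. A Python sketch (the sample points are arbitrary illustrative choices):

```python
# f(x) = |x|/x equals +1 for x > 0 and -1 for x < 0, so the one-sided
# limits at 0 are +1 and -1. Since they disagree, the two-sided limit
# of f at 0 does not exist.
def f(x):
    return abs(x) / x

right = [f(10.0 ** -k) for k in range(1, 8)]     # samples with x -> 0+
left = [f(-(10.0 ** -k)) for k in range(1, 8)]   # samples with x -> 0-
assert all(v == 1.0 for v in right)
assert all(v == -1.0 for v in left)
```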
If f : M → N is a function between metric spaces M and N, then continuity of f at p is equivalent to the condition that f transforms every sequence in M which converges towards p into a sequence in N which converges towards f(p). If N is a normed vector space, then the limit operation is linear in the following sense: if the limit of f(x) as x approaches p is L and the limit of g(x) as x approaches p is P, then the limit of f(x) + g(x) as x approaches p is L + P. If a is a scalar from the base field, then the limit of af(x) as x approaches p is aL. If f and g are real-valued (or complex-valued) functions, then taking the limit of an operation on f(x) and g(x) (e.g., f + g, f − g, f × g, f / g, f g) under certain conditions is compatible with the operation of limits of f(x) and g(x). This fact is often called the algebraic limit theorem. The main condition needed to apply the following rules is that the limits on the right-hand sides of the equations exist (in other words, these limits are finite values including 0). Additionally, the identity for division requires that the denominator on the right-hand side is non-zero (division by 0 is not defined), and the identity for exponentiation requires that the base is positive, or zero while the exponent is positive (finite). 
lim x → p ( f ( x ) + g ( x ) ) = lim x → p f ( x ) + lim x → p g ( x ) lim x → p ( f ( x ) − g ( x ) ) = lim x → p f ( x ) − lim x → p g ( x ) lim x → p ( f ( x ) ⋅ g ( x ) ) = lim x → p f ( x ) ⋅ lim x → p g ( x ) lim x → p ( f ( x ) / g ( x ) ) = lim x → p f ( x ) / lim x → p g ( x ) lim x → p f ( x ) g ( x ) = lim x → p f ( x ) lim x → p g ( x ) {\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to p}(f(x)+g(x))&=&\displaystyle \lim _{x\to p}f(x)+\lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)-g(x))&=&\displaystyle \lim _{x\to p}f(x)-\lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)\cdot g(x))&=&\displaystyle \lim _{x\to p}f(x)\cdot \lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)/g(x))&=&\displaystyle {\lim _{x\to p}f(x)/\lim _{x\to p}g(x)}\\\displaystyle \lim _{x\to p}f(x)^{g(x)}&=&\displaystyle {\lim _{x\to p}f(x)^{\lim _{x\to p}g(x)}}\end{array}}} These rules are also valid for one-sided limits, including when p is ∞ or −∞. In each rule above, when one of the limits on the right is ∞ or −∞, the limit on the left may sometimes still be determined by the following rules. q + ∞ = ∞ if q ≠ − ∞ q × ∞ = { ∞ if q > 0 − ∞ if q < 0 q ∞ = 0 if q ≠ ∞ and q ≠ − ∞ ∞ q = { 0 if q < 0 ∞ if q > 0 q ∞ = { 0 if 0 < q < 1 ∞ if q > 1 q − ∞ = { ∞ if 0 < q < 1 0 if q > 1 {\displaystyle {\begin{array}{rcl}q+\infty &=&\infty {\text{ if }}q\neq -\infty \\[8pt]q\times \infty &=&{\begin{cases}\infty &{\text{if }}q>0\\-\infty &{\text{if }}q<0\end{cases}}\\[6pt]\displaystyle {\frac {q}{\infty }}&=&0{\text{ if }}q\neq \infty {\text{ and }}q\neq -\infty \\[6pt]\infty ^{q}&=&{\begin{cases}0&{\text{if }}q<0\\\infty &{\text{if }}q>0\end{cases}}\\[4pt]q^{\infty }&=&{\begin{cases}0&{\text{if }}0<q<1\\\infty &{\text{if }}q>1\end{cases}}\\[4pt]q^{-\infty }&=&{\begin{cases}\infty &{\text{if }}0<q<1\\0&{\text{if }}q>1\end{cases}}\end{array}}} (see also Extended real number line). 
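The algebraic limit theorem above can be illustrated numerically. The functions and the tolerance below are illustrative choices; near p = 0, sin(x)/x and cos(x) both have limit 1:

```python
import math

# Illustrative check of the algebraic limit theorem at p = 0 with
# f(x) = sin(x)/x (limit 1) and g(x) = cos(x) (limit 1).
def f(x):
    return math.sin(x) / x

def g(x):
    return math.cos(x)

x = 1e-6  # a sample point close to p = 0
assert abs((f(x) + g(x)) - (1.0 + 1.0)) < 1e-9  # sum rule
assert abs((f(x) - g(x)) - (1.0 - 1.0)) < 1e-9  # difference rule
assert abs((f(x) * g(x)) - (1.0 * 1.0)) < 1e-9  # product rule
assert abs((f(x) / g(x)) - (1.0 / 1.0)) < 1e-9  # quotient rule
```

The quotient rule applies here because the limit of the denominator, cos(0) = 1, is non-zero, matching the condition stated above.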
In other cases the limit on the left may still exist, although the right-hand side, called an indeterminate form, does not allow one to determine the result. This depends on the functions f and g. These indeterminate forms are: 0 0 ± ∞ ± ∞ 0 × ± ∞ ∞ − ∞ 0 0 ∞ 0 1 ± ∞ {\displaystyle {\begin{array}{cc}\displaystyle {\frac {0}{0}}&\displaystyle {\frac {\pm \infty }{\pm \infty }}\\[6pt]0\times \pm \infty &\infty -\infty \\[8pt]\qquad 0^{0}\qquad &\qquad \infty ^{0}\qquad \\[8pt]1^{\pm \infty }\end{array}}} See further L'Hôpital's rule below and Indeterminate form. === Limits of compositions of functions === In general, from knowing that lim y → b f ( y ) = c {\displaystyle \lim _{y\to b}f(y)=c} and lim x → a g ( x ) = b , {\displaystyle \lim _{x\to a}g(x)=b,} it does not follow that lim x → a f ( g ( x ) ) = c . {\displaystyle \lim _{x\to a}f(g(x))=c.} However, this "chain rule" does hold if one of the following additional conditions holds: f(b) = c (that is, f is continuous at b), or g does not take the value b near a (that is, there exists a δ > 0 such that if 0 < |x − a| < δ then |g(x) − b| > 0). As an example of this phenomenon, consider the following function that violates both additional restrictions: f ( x ) = g ( x ) = { 0 if x ≠ 0 1 if x = 0 {\displaystyle f(x)=g(x)={\begin{cases}0&{\text{if }}x\neq 0\\1&{\text{if }}x=0\end{cases}}} Since f has a removable discontinuity at 0, lim x → a f ( x ) = 0 {\displaystyle \lim _{x\to a}f(x)=0} for all a. Thus, the naïve chain rule would suggest that the limit of f(f(x)) is 0. However, it is the case that f ( f ( x ) ) = { 1 if x ≠ 0 0 if x = 0 {\displaystyle f(f(x))={\begin{cases}1&{\text{if }}x\neq 0\\0&{\text{if }}x=0\end{cases}}} and so lim x → a f ( f ( x ) ) = 1 {\displaystyle \lim _{x\to a}f(f(x))=1} for all a. 
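The counterexample above is easy to reproduce directly; a minimal Python sketch (the sample points are arbitrary nonzero values approaching 0):

```python
# f(x) = 0 for x != 0 and f(0) = 1, as in the counterexample above.
def f(x):
    return 0 if x != 0 else 1

samples = [10.0 ** -k for k in range(1, 10)]  # nonzero x approaching 0
# lim_{x->0} f(x) = 0, yet f(f(x)) = f(0) = 1 for every nonzero x,
# so lim_{x->0} f(f(x)) = 1: the naive chain rule fails here.
assert all(f(x) == 0 for x in samples)
assert all(f(f(x)) == 1 for x in samples)
```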
=== Limits of special interest === ==== Rational functions ==== For n a nonnegative integer and constants a 1 , a 2 , a 3 , … , a n {\displaystyle a_{1},a_{2},a_{3},\ldots ,a_{n}} and b 1 , b 2 , b 3 , … , b n , {\displaystyle b_{1},b_{2},b_{3},\ldots ,b_{n},} lim x → ∞ a 1 x n + a 2 x n − 1 + a 3 x n − 2 + ⋯ + a n b 1 x n + b 2 x n − 1 + b 3 x n − 2 + ⋯ + b n = a 1 b 1 {\displaystyle \lim _{x\to \infty }{\frac {a_{1}x^{n}+a_{2}x^{n-1}+a_{3}x^{n-2}+\dots +a_{n}}{b_{1}x^{n}+b_{2}x^{n-1}+b_{3}x^{n-2}+\dots +b_{n}}}={\frac {a_{1}}{b_{1}}}} This can be proven by dividing both the numerator and denominator by xn. If the numerator is a polynomial of higher degree, the limit does not exist. If the denominator is of higher degree, the limit is 0. ==== Trigonometric functions ==== lim x → 0 sin ⁡ x x = 1 lim x → 0 1 − cos ⁡ x x = 0 {\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {1-\cos x}{x}}&=&0\end{array}}} ==== Exponential functions ==== lim x → 0 ( 1 + x ) 1 x = lim r → ∞ ( 1 + 1 r ) r = e lim x → 0 e x − 1 x = 1 lim x → 0 e a x − 1 b x = a b lim x → 0 c a x − 1 b x = a b ln ⁡ c lim x → 0 + x x = 1 {\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}(1+x)^{\frac {1}{x}}&=&\displaystyle \lim _{r\to \infty }\left(1+{\frac {1}{r}}\right)^{r}=e\\[4pt]\displaystyle \lim _{x\to 0}{\frac {e^{x}-1}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {e^{ax}-1}{bx}}&=&\displaystyle {\frac {a}{b}}\\[4pt]\displaystyle \lim _{x\to 0}{\frac {c^{ax}-1}{bx}}&=&\displaystyle {\frac {a}{b}}\ln c\\[4pt]\displaystyle \lim _{x\to 0^{+}}x^{x}&=&1\end{array}}} ==== Logarithmic functions ==== lim x → 0 ln ⁡ ( 1 + x ) x = 1 lim x → 0 ln ⁡ ( 1 + a x ) b x = a b lim x → 0 log c ⁡ ( 1 + a x ) b x = a b ln ⁡ c {\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}{\frac {\ln(1+x)}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {\ln(1+ax)}{bx}}&=&\displaystyle {\frac {a}{b}}\\[4pt]\displaystyle \lim 
_{x\to 0}{\frac {\log _{c}(1+ax)}{bx}}&=&\displaystyle {\frac {a}{b\ln c}}\end{array}}} === L'Hôpital's rule === This rule uses derivatives to find limits of indeterminate forms 0/0 or ±∞/∞, and only applies to such cases. Other indeterminate forms may be manipulated into this form. Given two functions f(x) and g(x), defined over an open interval I containing the desired limit point c, then if: lim x → c f ( x ) = lim x → c g ( x ) = 0 , {\displaystyle \lim _{x\to c}f(x)=\lim _{x\to c}g(x)=0,} or lim x → c f ( x ) = ± lim x → c g ( x ) = ± ∞ , {\displaystyle \lim _{x\to c}f(x)=\pm \lim _{x\to c}g(x)=\pm \infty ,} and f {\displaystyle f} and g {\displaystyle g} are differentiable over I ∖ { c } , {\displaystyle I\setminus \{c\},} and g ′ ( x ) ≠ 0 {\displaystyle g'(x)\neq 0} for all x ∈ I ∖ { c } , {\displaystyle x\in I\setminus \{c\},} and lim x → c f ′ ( x ) g ′ ( x ) {\displaystyle \lim _{x\to c}{\tfrac {f'(x)}{g'(x)}}} exists, then: lim x → c f ( x ) g ( x ) = lim x → c f ′ ( x ) g ′ ( x ) . {\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}=\lim _{x\to c}{\frac {f'(x)}{g'(x)}}.} Normally, the first condition is the most important one. For example: lim x → 0 sin ⁡ ( 2 x ) sin ⁡ ( 3 x ) = lim x → 0 2 cos ⁡ ( 2 x ) 3 cos ⁡ ( 3 x ) = 2 ⋅ 1 3 ⋅ 1 = 2 3 . {\displaystyle \lim _{x\to 0}{\frac {\sin(2x)}{\sin(3x)}}=\lim _{x\to 0}{\frac {2\cos(2x)}{3\cos(3x)}}={\frac {2\cdot 1}{3\cdot 1}}={\frac {2}{3}}.} === Summations and integrals === Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit. A short way to write the limit lim n → ∞ ∑ i = s n f ( i ) {\displaystyle \lim _{n\to \infty }\sum _{i=s}^{n}f(i)} is ∑ i = s ∞ f ( i ) . {\displaystyle \sum _{i=s}^{\infty }f(i).} An important example of limits of sums such as these are series. A short way to write the limit lim x → ∞ ∫ a x f ( t ) d t {\displaystyle \lim _{x\to \infty }\int _{a}^{x}f(t)\;dt} is ∫ a ∞ f ( t ) d t . 
{\displaystyle \int _{a}^{\infty }f(t)\;dt.} A short way to write the limit lim x → − ∞ ∫ x b f ( t ) d t {\displaystyle \lim _{x\to -\infty }\int _{x}^{b}f(t)\;dt} is ∫ − ∞ b f ( t ) d t . {\displaystyle \int _{-\infty }^{b}f(t)\;dt.} == See also == Big O notation – Describes limiting behavior of a function L'Hôpital's rule – Mathematical rule for evaluating some limits List of limits Limit of a sequence – Value to which an infinite sequence tends Limit point – Cluster point in a topological space Limit superior and limit inferior – Bounds of a sequence Net (mathematics) – Generalization of a sequence of points Non-standard calculus – Modern application of infinitesimals Squeeze theorem – Method for finding limits in calculus Subsequential limit – The limit of some subsequence == Notes == == References == Apostol, Tom M. (1974). Mathematical Analysis (2nd ed.). Addison–Wesley. ISBN 0-201-00288-4. Bartle, Robert (1967). The Elements of Real Analysis. Wiley. Bartle, Robert G.; Sherbert, Donald R. (2000). Introduction to Real Analysis. Wiley. Courant, Richard (1924). Vorlesungen über Differential- und Integralrechnung (in German). Springer. Hardy, G. H. (1921). A Course in Pure Mathematics. Cambridge University Press. Hubbard, John H. (2015). Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach (5th ed.). Matrix Editions. Page, Warren; Hersh, Reuben; Selden, Annie; et al., eds. (2002). "Media Highlights". The College Mathematics Journal. 33 (2): 147–154. JSTOR 2687124. Rudin, Walter (1964). Principles of Mathematical Analysis. McGraw-Hill. Sutherland, W. A. (1975). Introduction to Metric and Topological Spaces. Oxford: Oxford University Press. ISBN 0-19-853161-3. Whittaker; Watson (1904). A Course of Modern Analysis. Cambridge University Press.
== External links == MacTutor History of Weierstrass. MacTutor History of Bolzano. Visual Calculus by Lawrence S. Husch, University of Tennessee (2001)
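Several of the limits listed above can be spot-checked numerically. The sketch below (plain Python; the particular small and large sample values are arbitrary, chosen to stay within floating-point accuracy) evaluates each expression near its limit point:

```python
import math

x = 1e-6  # a small value standing in for x -> 0

# Trigonometric: lim_{x->0} sin(x)/x = 1
assert abs(math.sin(x) / x - 1) < 1e-9

# Exponential: lim_{x->0} (1 + x)^(1/x) = e
assert abs((1 + x) ** (1 / x) - math.e) < 1e-4

# L'Hopital example from above: lim_{x->0} sin(2x)/sin(3x) = 2/3
assert abs(math.sin(2 * x) / math.sin(3 * x) - 2 / 3) < 1e-9

# Rational function: leading coefficients dominate as x -> infinity,
# e.g. (3x^2 + x + 1)/(5x^2 - 7) -> 3/5
big = 1e8
assert abs((3 * big**2 + big + 1) / (5 * big**2 - 7) - 3 / 5) < 1e-6

print("all limit checks passed")
```

Such numerical checks do not prove the limits, but they make the convergence behaviour concrete.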
Wikipedia/Epsilon,_delta_method
In mathematics, the oscillation of a function or a sequence is a number that quantifies how much that sequence or function varies between its extreme values as it approaches infinity or a point. As is the case with limits, there are several definitions that put the intuitive concept into a form suitable for a mathematical treatment: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval (or open set). == Definitions == === Oscillation of a sequence === Let ( a n ) {\displaystyle (a_{n})} be a sequence of real numbers. The oscillation ω ( a n ) {\displaystyle \omega (a_{n})} of that sequence is defined as the difference (possibly infinite) between the limit superior and limit inferior of ( a n ) {\displaystyle (a_{n})} : ω ( a n ) = lim sup n → ∞ a n − lim inf n → ∞ a n {\displaystyle \omega (a_{n})=\limsup _{n\to \infty }a_{n}-\liminf _{n\to \infty }a_{n}} . The oscillation is zero if and only if the sequence converges. It is undefined if lim sup n → ∞ {\displaystyle \limsup _{n\to \infty }} and lim inf n → ∞ {\displaystyle \liminf _{n\to \infty }} are both equal to +∞ or both equal to −∞, that is, if the sequence tends to +∞ or −∞. === Oscillation of a function on an open set === Let f {\displaystyle f} be a real-valued function of a real variable. The oscillation of f {\displaystyle f} on an interval I {\displaystyle I} in its domain is the difference between the supremum and infimum of f {\displaystyle f} : ω f ( I ) = sup x ∈ I f ( x ) − inf x ∈ I f ( x ) . {\displaystyle \omega _{f}(I)=\sup _{x\in I}f(x)-\inf _{x\in I}f(x).} More generally, if f : X → R {\displaystyle f:X\to \mathbb {R} } is a function on a topological space X {\displaystyle X} (such as a metric space), then the oscillation of f {\displaystyle f} on an open set U {\displaystyle U} is ω f ( U ) = sup x ∈ U f ( x ) − inf x ∈ U f ( x ) . 
{\displaystyle \omega _{f}(U)=\sup _{x\in U}f(x)-\inf _{x\in U}f(x).} === Oscillation of a function at a point === The oscillation of a function f {\displaystyle f} of a real variable at a point x 0 {\displaystyle x_{0}} is defined as the limit as ϵ → 0 {\displaystyle \epsilon \to 0} of the oscillation of f {\displaystyle f} on an ϵ {\displaystyle \epsilon } -neighborhood of x 0 {\displaystyle x_{0}} : ω f ( x 0 ) = lim ϵ → 0 ω f ( x 0 − ϵ , x 0 + ϵ ) . {\displaystyle \omega _{f}(x_{0})=\lim _{\epsilon \to 0}\omega _{f}(x_{0}-\epsilon ,x_{0}+\epsilon ).} This is the same as the difference between the limit superior and limit inferior of the function at x 0 {\displaystyle x_{0}} , provided the point x 0 {\displaystyle x_{0}} is not excluded from the limits. More generally, if f : X → R {\displaystyle f:X\to \mathbb {R} } is a real-valued function on a metric space, then the oscillation is ω f ( x 0 ) = lim ϵ → 0 ω f ( B ϵ ( x 0 ) ) . {\displaystyle \omega _{f}(x_{0})=\lim _{\epsilon \to 0}\omega _{f}(B_{\epsilon }(x_{0})).} == Examples == 1 x {\displaystyle {\frac {1}{x}}} has oscillation ∞ at x {\displaystyle x} = 0, and oscillation 0 at other finite x {\displaystyle x} and at −∞ and +∞. sin ⁡ 1 x {\displaystyle \sin {\frac {1}{x}}} (the topologist's sine curve) has oscillation 2 at x {\displaystyle x} = 0, and 0 elsewhere. sin ⁡ x {\displaystyle \sin x} has oscillation 0 at every finite x {\displaystyle x} , and 2 at −∞ and +∞. ( − 1 ) x {\displaystyle (-1)^{x}} or 1, −1, 1, −1, 1, −1... has oscillation 2. In the last example the sequence is periodic, and any sequence that is periodic without being constant will have non-zero oscillation. However, non-zero oscillation does not usually indicate periodicity. Geometrically, the graph of an oscillating function on the real numbers follows some path in the xy-plane, without settling into ever-smaller regions. 
In well-behaved cases the path might look like a loop coming back on itself, that is, periodic behaviour; in the worst cases it is quite irregular movement covering a whole region. == Continuity == Oscillation can be used to define continuity of a function, and is readily shown to be equivalent to the usual ε-δ definition (in the case of functions defined everywhere on the real line): a function ƒ is continuous at a point x0 if and only if the oscillation is zero; in symbols, ω f ( x 0 ) = 0. {\displaystyle \omega _{f}(x_{0})=0.} A benefit of this definition is that it quantifies discontinuity: the oscillation measures how much the function is discontinuous at a point. For example, in the classification of discontinuities: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits from the two sides); in an essential discontinuity, oscillation measures the failure of a limit to exist. This definition is useful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a Gδ set) – and gives a very quick proof of one direction of the Lebesgue integrability condition. The oscillation is equivalent to the ε-δ definition by a simple rearrangement, using a limit (lim sup, lim inf) to define oscillation: if, at a given point, there is no δ that satisfies the ε-δ definition for a given ε0, then the oscillation is at least ε0; conversely, if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
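The definition of oscillation at a point can be illustrated numerically. The sketch below approximates sup − inf over a small ε-neighborhood by dense sampling; the function name, the fixed ε, and the sample count are illustrative choices (the exact definition takes the limit ε → 0):

```python
import math

def oscillation_at(f, x0, eps=1e-3, samples=20001):
    """Numerically estimate the oscillation of f at x0: sup f - inf f
    over the eps-neighborhood (x0 - eps, x0 + eps), approximated by
    dense sampling.  A finite-eps sketch, not the exact limit."""
    xs = (x0 - eps + 2 * eps * k / (samples - 1) for k in range(samples))
    vals = [f(x) for x in xs]
    return max(vals) - min(vals)

# sin(1/x) attains values arbitrarily close to both -1 and 1 in every
# neighborhood of 0, so its oscillation at 0 is 2 (defining f(0) = 0).
f = lambda x: math.sin(1 / x) if x != 0 else 0.0
print(oscillation_at(f, 0.0) > 1.99)         # close to 2

# A function continuous at a point has vanishing oscillation there.
print(oscillation_at(math.sin, 1.0) < 0.01)  # close to 0
```

This matches the examples above: the topologist's sine curve has oscillation 2 at 0, while sin x has oscillation 0 at every finite point.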
== Generalizations == More generally, if f : X → Y is a function from a topological space X into a metric space Y, then the oscillation of f is defined at each x ∈ X by ω ( x ) = inf { diam ⁡ ( f ( U ) ) ∣ U is a neighborhood of x } {\displaystyle \omega (x)=\inf \left\{\mathrm {diam} (f(U))\mid U\mathrm {\ is\ a\ neighborhood\ of\ } x\right\}} == See also == Wave equation Wave envelope Grandi's series Bounded mean oscillation == References == == Further reading ==
Wikipedia/Oscillation_of_a_function_at_a_point
In mechanics, the normal force F n {\displaystyle F_{n}} is the component of a contact force that is perpendicular to the surface that an object contacts. In this instance normal is used in the geometric sense and means perpendicular, as opposed to the meaning "ordinary" or "expected". A person standing still on a platform is acted upon by gravity, which would pull them down towards the Earth's core unless there were a countervailing force from the resistance of the platform's molecules, a force which is named the "normal force". The normal force is one type of ground reaction force. If the person stands on a slope and does not sink into the ground or slide downhill, the total ground reaction force can be divided into two components: a normal force perpendicular to the ground and a frictional force parallel to the ground. In another common situation, if an object hits a surface with some speed, and the surface can withstand the impact, the normal force provides for a rapid deceleration, which will depend on the flexibility of the surface and the object. == Equations == In the case of an object resting upon a flat table (unlike on an incline as in Figures 1 and 2), the normal force on the object is equal in magnitude and opposite in direction to the gravitational force applied on the object (or the weight of the object), that is, F n = m g {\displaystyle F_{n}=mg} , where m is mass, and g is the gravitational field strength (about 9.81 m/s2 on Earth). The normal force here represents the force applied by the table against the object that prevents it from sinking through the table and requires that the table be sturdy enough to deliver this normal force without breaking. It is a common mistake to assume that the normal force and the weight are an action-reaction force pair; they are not. Rather, the normal force and the weight must be equal in magnitude to explain why there is no upward acceleration of the object.
For example, a ball that bounces upwards accelerates upwards because the normal force acting on the ball is larger in magnitude than the weight of the ball. Where an object rests on an incline as in Figures 1 and 2, the normal force is perpendicular to the plane the object rests on. Still, the normal force will be as large as necessary to prevent sinking through the surface, presuming the surface is sturdy enough. The strength of the force can be calculated as: F n = m g cos ⁡ ( θ ) {\displaystyle F_{n}=mg\cos(\theta )} where F n {\displaystyle F_{n}} is the normal force, m is the mass of the object, g is the gravitational field strength, and θ is the angle of the inclined surface measured from the horizontal. The normal force is one of the several forces which act on the object. In the simple situations so far considered, the most important other forces acting on it are friction and the force of gravity. === Using vectors === In general, the magnitude of the normal force, N, is the projection of the net surface interaction force, T, in the normal direction, n, and so the normal force vector can be found by scaling the normal direction by the net surface interaction force. The surface interaction force, in turn, is equal to the dot product of the unit normal with the Cauchy stress tensor describing the stress state of the surface. That is: N = n N = n ( T ⋅ n ) = n ( n ⋅ τ ⋅ n ) . {\displaystyle \mathbf {N} =\mathbf {n} \,N=\mathbf {n} \,(\mathbf {T} \cdot \mathbf {n} )=\mathbf {n} \,(\mathbf {n} \cdot \mathbf {\tau } \cdot \mathbf {n} ).} or, in indicial notation, N i = n i N = n i T j n j = n i n k τ j k n j . {\displaystyle N_{i}=n_{i}N=n_{i}T_{j}n_{j}=n_{i}n_{k}\tau _{jk}n_{j}.} The parallel shear component of the contact force is known as the frictional force ( F f r {\displaystyle F_{fr}} ). 
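The flat-table and incline formulas above reduce to a few lines of arithmetic. A minimal sketch (the function names, the 10 kg mass, and the 30° angle are arbitrary illustrative choices):

```python
import math

G_FIELD = 9.81  # gravitational field strength g, m/s^2 (approximate Earth value)

def normal_force_flat(mass_kg):
    """Object at rest on a flat, horizontal surface: F_n = m * g."""
    return mass_kg * G_FIELD

def normal_force_incline(mass_kg, theta_deg):
    """Object at rest on an incline: F_n = m * g * cos(theta),
    with theta measured from the horizontal."""
    return mass_kg * G_FIELD * math.cos(math.radians(theta_deg))

# A 10 kg block on a flat table versus a 30-degree slope:
print(round(normal_force_flat(10.0), 1))           # 98.1 (newtons)
print(round(normal_force_incline(10.0, 30.0), 1))  # 85.0 (newtons)
```

As the angle grows, the normal force shrinks toward zero, which is why steeper slopes leave less normal force available to generate friction.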
The static coefficient of friction for an object on an inclined plane can be calculated as follows: μ s = tan ⁡ ( θ ) {\displaystyle \mu _{s}=\tan(\theta )} for an object on the point of sliding, where θ {\displaystyle \theta } is the angle between the slope and the horizontal. == Physical origin == The normal force is a direct result of the Pauli exclusion principle and is not a true force per se: it is a result of the interactions of the electrons at the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low-energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration. However, these interactions are often modeled as a van der Waals force, a force that grows very large very quickly as distance becomes smaller. On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of the Pauli exclusion principle, but also of the fundamental forces of nature: cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the electromagnetic forces between the electrons and the nuclei; and the nuclei do not disintegrate due to the nuclear forces.
If a passenger were to stand on a weighing scale, such as a conventional bathroom scale, while riding the elevator, the scale will be reading the normal force it delivers to the passenger's feet, and will be different than the person's ground weight if the elevator cab is accelerating up or down. The weighing scale measures normal force (which varies as the elevator cab accelerates), not gravitational force (which does not vary as the cab accelerates). When we define upward to be the positive direction, constructing Newton's second law and solving for the normal force on a passenger yields the following equation: N = m ( g + a ) {\displaystyle N=m(g+a)} In a gravitron amusement ride, the static friction caused by and perpendicular to the normal force acting on the passengers against the walls results in suspension of the passengers above the floor as the ride rotates. In such a scenario, the walls of the ride apply normal force to the passengers in the direction of the center, which is a result of the centripetal force applied to the passengers as the ride rotates. As a result of the normal force experienced by the passengers, the static friction between the passengers and the walls of the ride counteracts the pull of gravity on the passengers, resulting in suspension above ground of the passengers throughout the duration of the ride. When we define the center of the ride to be the positive direction, solving for the normal force on a passenger that is suspended above ground yields the following equation: N = m v 2 r {\displaystyle N={\frac {mv^{2}}{r}}} where N {\displaystyle N} is the normal force on the passenger, m {\displaystyle m} is the mass of the passenger, v {\displaystyle v} is the tangential velocity of the passenger and r {\displaystyle r} is the distance of the passenger from the center of the ride. 
With the normal force known, we can solve for the static coefficient of friction needed to maintain a net force of zero in the vertical direction: μ = m g N {\displaystyle \mu ={\frac {mg}{N}}} where μ {\displaystyle \mu } is the static coefficient of friction, and g {\displaystyle g} is the gravitational field strength. == See also == Force Contact mechanics Normal stress == References ==
Wikipedia/Normal_force
In physics, dynamics or classical dynamics is the study of forces and their effect on motion. It is a branch of classical mechanics, along with statics and kinematics. The fundamental principle of dynamics is linked to Newton's second law. == Subdivisions == === Rigid bodies === === Fluids === == Applications == Classical dynamics finds many applications: Aerodynamics, the study of the motion of air Brownian dynamics, the occurrence of Langevin dynamics in the motion of particles in solution File dynamics, stochastic motion of particles in a channel Flight dynamics, the science of aircraft and spacecraft design Molecular dynamics, the study of motion on the molecular level Langevin dynamics, a mathematical model for stochastic dynamics Orbital dynamics, the study of the motion of rockets and spacecraft Stellar dynamics, a description of the collective motion of stars Vehicle dynamics, the study of vehicles in motion == Generalizations == Non-classical dynamics include: System dynamics, the study of the behavior of complex systems Quantum dynamics analogue of classical dynamics in a quantum physics context Quantum chromodynamics, a theory of the strong interaction (color force) Quantum electrodynamics, a description of how matter and light interact Relativistic dynamics, a combination of relativistic and quantum concepts Thermodynamics, the study of the relationships between heat and mechanical energy == See also == Analytical dynamics Ballistics Contact dynamics Dynamical simulation Kinetics (physics) Multibody dynamics n-body problem == References ==
Wikipedia/Dynamics_(mechanics)
Modified Newtonian dynamics (MOND) is a theory that proposes a modification of Newton's laws to account for observed properties of galaxies. Modifying Newton's law of gravity results in modified gravity, while modifying Newton's second law results in modified inertia. The latter has received little attention compared to the modified gravity version. Its primary motivation is to explain galaxy rotation curves without invoking dark matter, and is one of the most well-known theories of this class. However, it has not gained widespread acceptance, with the majority of astrophysicists supporting the Lambda-CDM model as providing the better fit to observations. MOND was developed in 1982 and presented in 1983 by Israeli physicist Mordehai Milgrom. Milgrom noted that galaxy rotation curve data, which seemed to show that galaxies contain more matter than is observed, could also be explained if the gravitational force experienced by a star in the outer regions of a galaxy decays more slowly than predicted by Newton's law of gravity. MOND modifies Newton's laws for extremely small accelerations which are common in galaxies and galaxy clusters. This provides a good fit to galaxy rotation curve data while leaving the dynamics of the Solar System with its strong gravitational field intact. However, the theory predicts that the gravitational field of the galaxy could influence the orbits of Kuiper Belt objects through the external field effect, which is unique to MOND. Since Milgrom's original proposal, MOND has seen some successes. It is capable of explaining several observations in galaxy dynamics, a number of which can be difficult for Lambda-CDM to explain. However, MOND struggles to explain a range of other observations, such as the acoustic peaks of the cosmic microwave background and the matter power spectrum of the large scale structure of the universe. 
Furthermore, because MOND is not a relativistic theory, it struggles to explain relativistic effects such as gravitational lensing and gravitational waves. Finally, a major weakness of MOND is that all galaxy clusters, including the famous Bullet cluster, show a residual mass discrepancy even when analyzed using MOND. A minority of astrophysicists continue to work on the theory. Jacob Bekenstein developed a relativistic generalization of MOND in 2004, TeVeS, which, however, had its own set of problems. Another notable attempt was by Constantinos Skordis and Tom Złośnik in 2021, which proposed a relativistic model of MOND that is compatible with cosmic microwave background observations, but appears to be highly contrived. == Overview == === Missing mass problem === Several independent observations suggest that the visible mass in galaxies and galaxy clusters is insufficient to account for their dynamics, when analyzed using Newton's laws. This discrepancy – known as the "missing mass problem" – was identified by several observers, most notably by Swiss astronomer Fritz Zwicky in 1933 through his study of the Coma cluster. This was subsequently extended to include spiral galaxies by the 1939 work of Horace Babcock on Andromeda. These early studies were augmented and brought to the attention of the astronomical community in the 1960s and 1970s by the work of Vera Rubin, who mapped in detail the rotation velocities of stars in a large sample of spirals. While Newton's laws predict that stellar rotation velocities should decrease with distance from the galactic centre, Rubin and collaborators found instead that they remain almost constant – the rotation curves are said to be "flat". This observation necessitates at least one of the following: (1) galaxies contain large quantities of unseen matter, or (2) Newtonian dynamics does not apply in the low-acceleration regime of the galactic outskirts. Option (1) leads to the dark matter hypothesis; option (2) leads to MOND.
The majority of astronomers, astrophysicists, and cosmologists accept dark matter as the explanation for galactic rotation curves (based on general relativity, and hence Newtonian mechanics), and are committed to a dark matter solution of the missing-mass problem. The primary difference between supporters of ΛCDM and MOND is in the observations for which they demand a robust, quantitative explanation, and those for which they are satisfied with a qualitative account, or are prepared to leave for future work. Proponents of MOND emphasize predictions made on galaxy scales (where MOND enjoys its most notable successes) and believe that a cosmological model consistent with galaxy dynamics has yet to be discovered. Proponents of ΛCDM require high levels of cosmological accuracy (which concordance cosmology provides) and argue that a resolution of galaxy-scale issues will follow from a better understanding of the complicated baryonic astrophysics underlying galaxy formation. === Milgrom's law === The basic premise of MOND is that while Newton's laws have been extensively tested in high-acceleration environments (in the Solar System and on Earth), they have not been verified for objects with extremely low acceleration, such as stars in the outer parts of galaxies. This led Milgrom to postulate a new effective gravitational force law (sometimes referred to as "Milgrom's law") that relates the true acceleration of an object to the acceleration that would be predicted for it on the basis of Newtonian mechanics. This law, the keystone of MOND, is chosen to reproduce the Newtonian result at high acceleration but leads to different ("deep-MOND") behavior at low acceleration: F N = m μ ( a a 0 ) a {\displaystyle F_{\text{N}}=m\,\mu \!\left({\frac {a}{a_{0}}}\right)a~.} Here FN is the Newtonian force, m is the object's (gravitational) mass, a is its acceleration, μ(x) is an as-yet unspecified function (called the interpolating function), and a0 is a new fundamental constant which marks the transition between the Newtonian and deep-MOND regimes.
Agreement with Newtonian mechanics requires μ ( x ) ⟶ 1 for x ≫ 1 , {\displaystyle {\begin{aligned}\mu (x)\longrightarrow 1&&{\text{ for }}x\gg 1\end{aligned}}~,} and consistency with astronomical observations requires μ ( x ) ⟶ x for x ≪ 1 . {\displaystyle {\begin{aligned}\mu (x)\longrightarrow x&&{\text{ for }}x\ll 1\end{aligned}}~.} Beyond these limits, the interpolating function is not specified by the hypothesis. Milgrom's law can be interpreted in two ways: Modified inertia: One possibility is to treat it as a modification to Newton's second law, so that the force on an object is not proportional to the particle's acceleration a but rather to μ ( a a 0 ) a . {\textstyle \mu \left({\frac {a}{a_{0}}}\right)a.} In this case, the modified dynamics would apply not only to gravitational phenomena, but also those generated by other forces, for example electromagnetism. This interpretation is disfavoured by laboratory experiments. Modified gravity: Alternatively, Milgrom's law can be viewed as modifying Newton's universal law of gravity instead, so that the true gravitational force on an object of mass m due to another of mass M is roughly of the form G M m ν ( a 0 a ) r 2 . {\textstyle {\frac {GMm}{\nu \left({\frac {a_{0}}{a}}\right)r^{2}}}.} In this interpretation, Milgrom's modification would apply exclusively to gravitational phenomena. This interpretation has received the more attention of the two. Milgrom's law states that for accelerations smaller than a0, accelerations increasingly depart from the standard M · G / r 2 Newtonian relationship of mass and distance, wherein gravitational strength is linearly proportional to mass and the inverse square of distance. Instead, the theory holds that, below the a0 value, the gravitational field increases with the square root of mass and decreases linearly with distance.
Whenever the gravitational field is larger than a0, whether it be near the center of a galaxy or an object near or on Earth, MOND yields dynamics that are nearly indistinguishable from those of Newtonian gravity. For instance, if the gravitational acceleration equals a0 at a distance from a mass, at ten times that distance, Newtonian gravity predicts a hundredfold decline in gravity whereas MOND predicts only a tenfold reduction. By fitting Milgrom's law to rotation curve data, Begeman et al. found a0 ≈ 1.2 × 10−10 m/s2 to be optimal. The value of Milgrom’s acceleration constant has not varied meaningfully since then. The value of a0 also establishes the distance from a mass at which Newtonian and MOND dynamics diverge. By itself, Milgrom's law is not a complete and self-contained physical theory, but rather an empirically motivated variant of an equation in classical mechanics. Its status within a coherent non-relativistic hypothesis of MOND is akin to Kepler's Third Law within Newtonian mechanics. Milgrom's law provides a succinct description of observational facts, but must itself be grounded in a proper field theory. Several complete classical hypotheses have been proposed (typically along "modified gravity" as opposed to "modified inertia" lines). These generally yield Milgrom's law exactly in situations of high symmetry and otherwise deviate from it slightly. For MOND as modified gravity two complete field theories exist called AQUAL and QUMOND. A subset of these non-relativistic hypotheses have been further embedded within relativistic theories, which are capable of making contact with non-classical phenomena (e.g., gravitational lensing) and cosmology. Distinguishing both theoretically and observationally between these alternatives is a subject of current research. === Interpolating function === Milgrom's law uses an interpolation function to join its two limits together. 
It represents a simple algorithm to convert Newtonian gravitational accelerations to observed kinematic accelerations and vice versa. Many functions have been proposed in the literature although currently there is no single interpolation function that satisfies all constraints. Two common choices are the "simple interpolating function" and the "standard interpolating function". Each has a μ {\displaystyle \mu } and a ν {\displaystyle \nu } direction to convert the Milgromian gravitational field to the Newtonian and vice versa such that: a N = μ ( a M a 0 ) a M , {\displaystyle a_{N}=\mu \left({\frac {a_{M}}{a_{0}}}\right)a_{M}~,} a M = ν ( a 0 a N ) a N . {\displaystyle a_{M}=\nu \left({\frac {a_{0}}{a_{N}}}\right)a_{N}~.} The simple interpolation function is: μ ( a M a 0 ) = a M a 0 1 + a M a 0 , {\displaystyle \mu \left({\frac {a_{M}}{a_{0}}}\right)={\frac {\frac {a_{M}}{a_{0}}}{1+{\frac {a_{M}}{a_{0}}}}}~,} ν ( a 0 a N ) = 1 2 ( 1 + 1 + 4 a 0 a N ) . {\displaystyle \nu \left({\frac {a_{0}}{a_{N}}}\right)={\frac {1}{2}}\left(1+{\sqrt {1+{\frac {4a_{0}}{a_{N}}}}}\right)~.} The standard interpolation function is: μ ( a M a 0 ) = a M a 0 1 + ( a M a 0 ) 2 , {\displaystyle \mu \left({\frac {a_{M}}{a_{0}}}\right)={\frac {\frac {a_{M}}{a_{0}}}{{\sqrt {1+\left({\frac {a_{M}}{a_{0}}}\right)^{2}}}~}}~,} ν ( a 0 a N ) = 1 2 1 + 1 + 4 ( a 0 a N ) 2 . {\displaystyle \nu \left({\frac {a_{0}}{a_{N}}}\right)={\frac {1}{\sqrt {2}}}{\sqrt {1+{\sqrt {1+4\left({\frac {a_{0}}{a_{N}}}\right)^{2}}}}}~.} Thus, in the deep-MOND regime (a ≪ a0): F N = m a 2 a 0 . {\displaystyle F_{\text{N}}=m{\frac {\,a^{2}\,}{\,a_{0}\,}}~.} Data from spiral and elliptical galaxies favour the simple interpolation function, whereas data from lunar laser ranging and radio tracking data of the Cassini spacecraft towards Saturn require interpolation functions that converge to Newtonian gravity faster. 
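The ν-direction of the simple interpolating function given above lends itself to a direct numerical sketch. The following illustrative Python code (SI units; the galaxy mass is an arbitrary example value, and a0 is the Begeman et al. fit quoted below) shows the two regimes and the deep-MOND scaling a_M ≈ √(a_N a0):

```python
import math

A0 = 1.2e-10  # Milgrom's constant a0, m/s^2 (Begeman et al. fit)

def nu_simple(y):
    """Simple interpolating function, nu-direction: a_M = nu(a0/a_N) * a_N."""
    return 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * y))

def mond_acceleration(a_newton):
    """Convert a Newtonian gravitational acceleration to the MONDian one."""
    return nu_simple(A0 / a_newton) * a_newton

# High-acceleration (Newtonian) regime: a_N >> a0, so a_M ~ a_N.
print(abs(mond_acceleration(9.81) / 9.81 - 1.0) < 1e-9)

# Deep-MOND regime: a_N << a0, so a_M ~ sqrt(a_N * a0).
a_n = 1e-14
print(abs(mond_acceleration(a_n) / math.sqrt(a_n * A0) - 1.0) < 0.01)

# Deep-MOND flat rotation speed: v^4 = G * M * a0, independent of radius.
G = 6.674e-11    # gravitational constant, SI units
M_galaxy = 1e41  # roughly 5e10 solar masses (illustrative value)
v_flat = (G * M_galaxy * A0) ** 0.25
print(round(v_flat / 1000))  # on the order of 1e2 km/s, a realistic rotation speed
```

The last line illustrates why MOND predicts flat rotation curves: in the deep-MOND regime the circular speed depends only on the enclosed baryonic mass, not on the radius.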
=== Complete MOND theories === Milgrom's law requires incorporation into a complete hypothesis if it is to satisfy conservation laws and provide a unique solution for the time evolution of any physical system. Each of the theories described here reduces to Milgrom's law in situations of high symmetry, but produces different behavior in detail. Both AQUAL and QUMOND propose changes to the gravitational part of the classical matter action, and hence interpret Milgrom's law as a modification of Newtonian gravity as opposed to Newton's second law. The alternative is to turn the kinetic term of the action into a functional depending on the trajectory of the particle. Such "modified inertia" theories, however, are difficult to use because they are time-nonlocal, require energy and momentum to be non-trivially redefined to be conserved, and have predictions that depend on the entirety of a particle's orbit. ==== AQUAL ==== The first hypothesis of MOND (dubbed AQUAL, for "A QUAdratic Lagrangian") was constructed in 1984 by Milgrom and Jacob Bekenstein. AQUAL generates MONDian behavior by modifying the gravitational term in the classical Lagrangian from being quadratic in the gradient of the Newtonian potential to a more general function F. This function F reduces to the μ {\displaystyle \mu } -version of the interpolation function after varying the action over ϕ {\displaystyle \phi } using the principle of least action. In Newtonian gravity and AQUAL the Lagrangians are: L Newton = − 1 8 π G ⋅ ‖ ∇ ϕ ‖ 2 L AQUAL = − 1 8 π G ⋅ a 0 2 F ( ‖ ∇ ϕ ‖ 2 a 0 2 ) , with μ ( x ) = d F ( x 2 ) d x . 
{\displaystyle {\begin{aligned}{\mathcal {L}}_{\text{Newton}}&=-{\frac {1}{8\pi G}}\cdot \|\nabla \phi \|^{2}\\[6pt]{\mathcal {L}}_{\text{AQUAL}}&=-{\frac {1}{8\pi G}}\cdot a_{0}^{2}F\left({\tfrac {\|\nabla \phi \|^{2}}{a_{0}^{2}}}\right),\qquad {\text{with }}\quad \mu (x)={\frac {dF(x^{2})}{dx}}.\end{aligned}}} where ϕ {\displaystyle \phi } is the standard Newtonian gravitational potential and F is a new dimensionless function. Applying the Euler–Lagrange equations in the standard way then leads to a non-linear generalization of the Newton–Poisson equation: ∇ ⋅ [ μ ( ‖ ∇ ϕ ‖ a 0 ) ∇ ϕ ] = 4 π G ρ {\displaystyle \nabla \cdot \left[\mu \left({\frac {\left\|\nabla \phi \right\|}{a_{0}}}\right)\nabla \phi \right]=4\pi G\rho } This can be solved given suitable boundary conditions and choice of F to yield Milgrom's law (up to a curl field correction which vanishes in situations of high symmetry). AQUAL uses the μ {\displaystyle \mu } -version of the chosen interpolation function. ==== QUMOND ==== An alternative way to modify the gravitational term in the Lagrangian is to introduce a distinction between the true (MONDian) acceleration field a and the Newtonian acceleration field aN. The Lagrangian may be constructed so that aN satisfies the usual Newton-Poisson equation, and is then used to find a via an additional algebraic but non-linear step, which is chosen to satisfy Milgrom's law. This is called the "quasi-linear formulation of MOND", or QUMOND, and is particularly useful for calculating the distribution of "phantom" dark matter that would be inferred from a Newtonian analysis of a given physical situation. QUMOND has become the dominant MOND field theory since it was first formulated in 2010 because it is much more computationally friendly and may be more intuitive to those who have worked on numerical simulations of Newtonian gravity. QUMOND uses the ν {\displaystyle \nu } -version of the chosen interpolation function. 
QUMOND and AQUAL can be derived from each other using a Legendre transform. The QUMOND Lagrangian is: L QUMOND = 1 2 ρ v 2 − ρ ϕ − 1 8 π G ( 2 ∇ ϕ ⋅ ∇ ϕ N − a 0 2 Q ( ( a 0 / ∇ ϕ N ) 2 ) ) {\displaystyle {\begin{aligned}{\mathcal {L}}_{\text{QUMOND}}={\frac {1}{2}}\rho v^{2}-\rho \phi -{\frac {1}{8\pi G}}\left(2\nabla \phi \cdot \nabla \phi _{N}-a_{0}^{2}Q\left((a_{0}/\nabla \phi _{N})^{2}\right)\right)\end{aligned}}} Since this Lagrangian does not explicitly depend on time and is invariant under spatial translations, energy and momentum are conserved according to Noether's theorem. Varying over r yields m a = m g {\displaystyle ma=mg} , showing that the weak equivalence principle always applies in QUMOND. However, since ϕ {\displaystyle \phi } and ϕ N {\displaystyle \phi _{N}} are not identical and are non-linearly related, the strong equivalence principle must be violated. This can be observed by measuring the external field effect. Furthermore, by varying over ϕ {\displaystyle \phi } we obtain the Newton–Poisson equation familiar from Newtonian gravity, now with a subscript to denote that in QUMOND this equation determines the auxiliary gravitational field ϕ N {\displaystyle \phi _{N}} : ∇ 2 ϕ N = 4 π G ρ . {\displaystyle \nabla ^{2}\phi _{N}=4\pi G\rho .} Finally, by varying the QUMOND Lagrangian with respect to ϕ N {\displaystyle \phi _{N}} we obtain the QUMOND field equation: ∇ 2 ϕ = ∇ ⋅ [ ν ( a 0 ‖ ∇ ϕ N ‖ ) ∇ ϕ N ] {\displaystyle \nabla ^{2}\phi =\nabla \cdot \left[\nu \left({\frac {a_{0}}{\left\|\nabla \phi _{N}\right\|}}\right)\nabla \phi _{N}\right]} These two field equations can be solved numerically for any matter distribution with numerical solvers like Phantom of RAMSES (POR). == External field effect == In Newtonian mechanics, an object's acceleration can be found as the vector sum of the acceleration due to each of the individual forces acting on it. 
This means that a subsystem can be decoupled from the larger system in which it is embedded simply by referring the motion of its constituent particles to their centre of mass; in other words, the influence of the larger system is irrelevant for the internal dynamics of the subsystem. Since Milgrom's law is non-linear in acceleration, MONDian subsystems cannot be decoupled from their environment in this way, and in certain situations this leads to behaviour with no Newtonian parallel. This is known as the "external field effect" (EFE), for which there exists observational evidence. The external field effect is best described by classifying physical systems according to their relative values of ain (the characteristic acceleration of one object within a subsystem due to the influence of another), aex (the acceleration of the entire subsystem due to forces exerted by objects outside of it), and a0: a i n > a 0 {\displaystyle a_{\mathrm {in} }>a_{0}} : Newtonian regime a e x < a i n < a 0 {\displaystyle a_{\mathrm {ex} }<a_{\mathrm {in} }<a_{0}} : Deep-MOND regime a i n < a 0 < a e x {\displaystyle a_{\mathrm {in} }<a_{0}<a_{\mathrm {ex} }} : The external field is dominant and the behavior of the system is Newtonian. a i n < a e x < a 0 {\displaystyle a_{\mathrm {in} }<a_{\mathrm {ex} }<a_{0}} : The external field is larger than the internal acceleration of the system, but both are smaller than the critical value. In this case, dynamics is Newtonian but the effective value of G is enhanced by a factor of a0/aex. The external field effect implies a fundamental break with the strong equivalence principle (but not the weak equivalence principle which is required by the Lagrangian). The effect was postulated by Milgrom in the first of his 1983 papers to explain why some open clusters were observed to have no mass discrepancy even though their internal accelerations were below a0. It has since come to be recognized as a crucial element of the MOND paradigm. 
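The four cases above amount to a small decision table. The function below is an illustrative sketch of that classification (the regime labels and the G → G·a0/aex boost follow the list above; a0 = 1.2 × 10⁻¹⁰ m/s² is an assumed value):

```python
A0 = 1.2e-10  # Milgrom's constant, m/s^2 (assumed value)

def efe_regime(a_in, a_ex, a0=A0):
    """Classify a subsystem per the external field effect cases listed above.
    Returns (regime label, effective boost of Newton's G where applicable)."""
    if a_in > a0:
        return ("Newtonian", 1.0)
    if a_ex < a_in < a0:
        return ("deep-MOND", None)                      # fully MONDian internal dynamics
    if a_in < a0 < a_ex:
        return ("Newtonian (external field dominates)", 1.0)
    if a_in < a_ex < a0:
        return ("quasi-Newtonian", a0 / a_ex)           # Newtonian form, G -> G * a0/a_ex
    return ("boundary case", None)

# An open cluster with internal accelerations below a0 but embedded in the
# Milky Way's stronger field behaves Newtonian, as Milgrom originally noted:
print(efe_regime(1e-11, 2e-10))
```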
The dependence in MOND of the internal dynamics of a system on its external environment (in principle, the rest of the universe) is strongly reminiscent of Mach's principle, and may hint towards a more fundamental structure underlying Milgrom's law. In this regard, Milgrom has commented: It has been long suspected that local dynamics is strongly influenced by the universe at large, a-la Mach's principle, but MOND seems to be the first to supply concrete evidence for such a connection. This may turn out to be the most fundamental implication of MOND, beyond its implied modification of Newtonian dynamics and general relativity, and beyond the elimination of dark matter. == Observational evidence for MOND == Since MOND was specifically designed to produce flat rotation curves, these do not constitute evidence for the hypothesis, but every matching observation adds support to the empirical law. Nevertheless, proponents claim that a broad range of astrophysical phenomena at the galactic scale are neatly accounted for within the MOND framework. Many of these came to light after the publication of Milgrom's original papers and are difficult to explain using the dark matter hypothesis. The most prominent are the following: === Rotation curves === In addition to demonstrating that rotation curves in MOND are flat, equation 2 provides a concrete relation between a galaxy's total baryonic mass (the sum of its mass in stars and gas) and its asymptotic rotation velocity. This predicted relation was called the mass-asymptotic speed relation (MASSR) by Milgrom; its observational manifestation is known as the baryonic Tully–Fisher relation (BTFR), and is found to conform quite closely to the MOND prediction. This relation is derived from the deep-MOND limit as follows: there, the true acceleration around a baryonic mass M is g = √(gN a0) = √(G M a0)/r, and equating this to the centripetal acceleration v²/r yields v⁴ = G M a0, independent of radius. Milgrom's law fully specifies the rotation curve of a galaxy given only the distribution of its baryonic mass. 
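In the deep-MOND limit, Milgrom's law for a point mass predicts a flat asymptotic speed satisfying vf⁴ = G·Mb·a0, which is the BTFR discussed above. A quick numerical check, using an assumed Milky-Way-like baryonic mass of 6 × 10¹⁰ solar masses (an illustrative value, not a figure from this article):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # Milgrom's constant, m s^-2 (assumed value)
M_SUN = 1.989e30   # solar mass, kg

def asymptotic_velocity(m_baryonic_kg, a0=A0):
    """BTFR/MASSR prediction: v_f = (G * M_b * a0) ** 0.25."""
    return (G * m_baryonic_kg * a0) ** 0.25

v = asymptotic_velocity(6e10 * M_SUN)  # Milky-Way-like baryonic mass
print(round(v / 1000), "km/s")         # ~176 km/s, in the observed ballpark
```

Note that the radius has dropped out entirely: the predicted asymptotic speed depends only on the baryonic mass, which is the content of the BTFR.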
In particular, MOND predicts a far stronger correlation between features in the baryonic mass distribution and features in the rotation curve than does the dark matter hypothesis (since dark matter dominates the galaxy's mass budget and is conventionally assumed not to closely track the distribution of baryons). Such a tight correlation is claimed to be observed in several spiral galaxies, a fact which has been referred to as "Renzo's rule". Since MOND modifies Newtonian dynamics in an acceleration-dependent way, it predicts a specific relationship between the acceleration of a star at any radius from the centre of a galaxy and the amount of unseen (dark matter) mass within that radius that would be inferred in a Newtonian analysis. This is known as the mass discrepancy-acceleration relation, and has been measured observationally. One aspect of the MOND prediction is that the mass of the inferred dark matter goes to zero when the stellar centripetal acceleration becomes greater than a0, where MOND reverts to Newtonian mechanics. In a dark matter hypothesis, it is a challenge to understand why this mass should correlate so closely with acceleration, and why there appears to be a critical acceleration above which dark matter is not required. Particularly massive galaxies are within the Newtonian regime (a > a0) out to radii enclosing the vast majority of their baryonic mass. At these radii, MOND predicts that the rotation velocity should fall as 1/√r, in accordance with Kepler's laws. In contrast, from a dark matter perspective one would expect the halo to significantly boost the rotation velocity and cause it to asymptote to a constant value, as in less massive galaxies. Observations of high-mass ellipticals bear out the MOND prediction. 
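The transition from near-Keplerian decline at high accelerations to a flat curve at low accelerations can be illustrated with a toy point-mass model. The sketch below uses the algebraic QUMOND step with the "simple" ν-function; the mass and radii are assumed, illustrative values rather than a fit to any real galaxy:

```python
import math

G, A0, M_SUN = 6.674e-11, 1.2e-10, 1.989e30   # SI units; A0 is an assumed value
KPC = 3.086e19                                # one kiloparsec in metres

def v_circ(m_kg, r_m, a0=A0):
    """Circular speed around a point mass under Milgrom's law
    (algebraic QUMOND step with the 'simple' nu function)."""
    g_newton = G * m_kg / r_m ** 2
    y = g_newton / a0
    g = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 / y)) * g_newton
    return math.sqrt(g * r_m)

m = 6e10 * M_SUN
for r_kpc in (5, 20, 80, 320):
    print(r_kpc, "kpc:", round(v_circ(m, r_kpc * KPC) / 1000), "km/s")
# inner radii are nearly Newtonian; far out the curve flattens to (G*m*A0)**0.25
```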
In 2020, a group of astronomers analyzing data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog, concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect consistent with the external field effect of modified Newtonian dynamics and inconsistent with tidal effects in the Lambda-CDM paradigm, commonly known as the Standard Model of Cosmology. In 2023, a study claimed that cold dark matter cannot explain galactic rotation curves, while MOND can. === Dwarf galaxies === Recent work has shown that many of the dwarf galaxies around the Milky Way and Andromeda are located preferentially in a single plane and have correlated motions. This suggests that they may have formed during a close encounter with another galaxy and hence are tidal dwarf galaxies. If so, the presence of mass discrepancies in these systems constitutes evidence for MOND. In addition, it has been claimed that a gravitational force stronger than Newton's (such as Milgrom's) is required for these galaxies to retain their orbits over time. Centaurus A has a similar plane of dwarf galaxies around it, which is challenging for ΛCDM, which expects dwarf satellites to be distributed quasi-uniformly in the halo rather than in planes. In MOND, all isolated gravitationally bound objects with a < a0 that are in equilibrium – regardless of their origin – should exhibit a mass discrepancy when analyzed using Newtonian mechanics, and should lie on the BTFR. Under the dark matter hypothesis, objects formed from baryonic material ejected during the merger or tidal interaction of two galaxies ("tidal dwarf galaxies") are expected to be devoid of dark matter and hence show no mass discrepancy. 
Three objects unambiguously identified as tidal dwarf galaxies appear to have mass discrepancies in agreement with the MOND prediction. In a survey of dwarf galaxies from the Fornax Deep Survey (FDS) catalogue published in 2022, a group of astronomers and physicists concluded that 'observed deformations of dwarf galaxies in the Fornax Cluster and the lack of low surface brightness dwarfs towards its centre are incompatible with ΛCDM expectations but well consistent with MOND.' === Gravitational lensing === Weak gravitational lensing around isolated spiral and elliptical galaxies confirms that the gravitational field of such galaxies follows Milgrom's law. This corresponds to flat rotation curves out to distances of 1 Mpc. Strong gravitational lensing using Einstein rings also seems to confirm the MOND expectation for the mass discrepancy-acceleration relation. === Other === Both MOND and dark matter halos stabilize disk galaxies, helping them retain their rotation-supported structure and preventing their transformation into elliptical galaxies. In MOND, this added stability is only available for regions of galaxies within the deep-MOND regime (i.e., with a < a0), suggesting that spirals with a > a0 in their central regions should be prone to instabilities and hence less likely to survive to the present day. This may explain the "Freeman limit" to the observed central surface mass density of spiral galaxies, which is roughly a0/G. This scale must be put in by hand in dark matter-based galaxy formation models. Galactic bars in barred galaxies are in tension with dark matter simulations, as they are too pronounced and rotate too fast, yet they do match MOND-based calculations. In 2022, Kroupa et al. published a study of open star clusters, arguing that asymmetry in the population of leading and trailing tidal tails, and the observed lifetime of these clusters, are inconsistent with Newtonian dynamics but consistent with MOND. 
In 2023, a study measured the acceleration of 26,615 wide binaries within 200 parsecs. The study showed that those binaries with accelerations less than 1 nm/s² systematically deviate from Newtonian dynamics, but conform to MOND predictions, specifically to AQUAL. The results are disputed, with some authors arguing that the detection is caused by poor quality controls, while the original authors claimed that the added quality controls do not significantly affect the results. In 2024, a study claimed that the universe's earliest galaxies formed and grew too quickly for the Lambda-CDM model to explain, but such rapid growth is predicted in MOND. == Responses and criticism == === Dark matter explanation === While acknowledging that Milgrom's law provides a succinct and accurate description of a range of galactic phenomena, many physicists reject the idea that classical dynamics itself needs to be modified and attempt instead to explain the law's success by reference to the behavior of dark matter. Some effort has gone towards establishing the presence of a characteristic acceleration scale as a natural consequence of the behavior of cold dark matter halos, although Milgrom has argued that such arguments explain only a small subset of MOND phenomena. An alternative proposal is to modify the properties of dark matter ad hoc (e.g., to make it interact strongly with itself or baryons) in order to induce the tight coupling between the baryonic and dark matter mass that the observations point to. Finally, some researchers suggest that explaining the empirical success of Milgrom's law requires a more radical break with conventional assumptions about the nature of dark matter. One idea (dubbed "dipolar dark matter") is to make dark matter gravitationally polarizable by ordinary matter and have this polarization enhance the gravitational attraction between baryons. 
=== Outstanding problems for MOND === Some ultra-diffuse galaxies, such as NGC 1052-DF2, originally appeared to be free of dark matter. Were this the case, it would have posed a problem for MOND, because MOND cannot explain their rotation curves. However, further research showed that the galaxies were at a different distance than previously thought, leaving the galaxies with plenty of room for dark matter. The idea that a single value of a0 can fit all the different galaxies' rotation curves has also been criticized, although this finding is disputed. It has also been claimed that MOND offers a poor fit to both the HI column density and size of Lyα absorbers. Modified inertia versions of MOND have long suffered from poor theoretical compatibility with cherished physical principles such as conservation laws. Researchers working on MOND generally do not interpret it as a modification of inertia, with only very limited work done on this area. ==== Solar system ==== Almost the entire solar system has gravitational field strengths many orders of magnitude higher than a0, so the increase in gravity due to MOND is negligible. However, solar system tests are extremely precise, and most observations have proven difficult for MOND to explain. Notably, data from lunar laser ranging rules out the simple interpolation function. Radio tracking data of the Cassini spacecraft towards Saturn rules out both the simple and standard interpolation functions by testing an anomalous quadrupole effect predicted by MOND. It is also possible that a full fit of Solar System ephemerides, in which the masses of planets and asteroids are allowed to vary, could accommodate this anomalous quadrupole effect, since those masses are currently determined using general relativity only. Observations of long period comets also seem to conflict with higher order predictions of MOND. 
Furthermore, laboratory tests of Newton's second law seem to have ruled out modified inertia versions of MOND, with experimental accelerations reaching as low as 0.1% of a0 without deviation from the Newtonian expectation. Some solar system observations could support MOND, as it has been suggested that the orbits of Kuiper Belt objects are best explained through MOND's external field effect, rather than through a hypothetical planet nine. It has also been claimed that the variation in the measurements of Newton's gravitational constant is caused by MOND acting perpendicularly to the Earth's gravitational field. ==== Galaxy clusters ==== The most serious problem facing Milgrom's law is that galaxy clusters show a residual mass discrepancy even when analyzed using MOND. This problem is long-standing and has been dubbed the "cluster conundrum". This undermines MOND as an alternative to dark matter, although the amount of extra mass required is only a fifth that of a Newtonian analysis and could be in the form of normal matter. It has been speculated that ~2 eV neutrinos could account for the cluster observations in MOND while preserving the hypothesis's successes at the galaxy scale. Analysis of lensing data for the galaxy cluster Abell 1689 shows that this residual missing mass problem in MOND becomes more severe towards the cores of galaxy clusters. The 2006 observation of a pair of colliding galaxy clusters known as the "Bullet Cluster" has been claimed as a significant challenge for all theories proposing a modified gravity solution to the missing mass problem, including MOND. Astronomers measured the distribution of stellar and gas mass in the clusters using visible and X-ray light, respectively, and also mapped the gravitational potential using gravitational lensing. As shown in the images on the right, the X-ray gas is in the center, while the galaxies are on the outskirts. 
During the collision, the X-ray gas interacted and slowed down, remaining in the center, while the galaxies largely passed by one another, as the distances between them were vast. The gravitational potential reveals two large concentrations centered on the galaxies, not on the X-ray gas, where most of the normal matter is located. In ΛCDM one would also expect the clusters to each have a dark matter halo that would pass through each other during the collision (assuming, as is conventional, that dark matter is collisionless). This expectation for the dark matter provides a clear explanation for the offset between the peaks of the gravitational potential and the X-ray gas. It is this offset between the gravitational potential and normal matter that was claimed by Clowe et al. as "A Direct Empirical Proof of the Existence of Dark Matter", arguing that modified gravity theories fail to account for it. However, this study by Clowe et al. made no attempt to analyze the Bullet Cluster using MOND or any other modified gravity theory. Furthermore, in the same year, Angus et al. demonstrated that MOND does indeed reproduce the offset between the gravitational potential and the X-ray gas in this highly non-spherically symmetric system. In MOND, one would expect the "missing mass" to be centred on regions which experience accelerations lower than a0, which, in the case of the Bullet Cluster, correspond to the areas containing the galaxies, not the X-ray gas. Nevertheless, MOND still fails to fully explain this cluster, as it does with other galaxy clusters, due to the remaining mass residuals in several core regions of the Bullet Cluster. ==== Relativistic MOND ==== Besides these observational issues, MOND and its relativistic generalizations are plagued by theoretical difficulties. Several ad hoc and inelegant additions to general relativity are required to create a theory compatible with a non-Newtonian non-relativistic limit, though the predictions in this limit are rather clear. 
In 2004, Jacob Bekenstein formulated TeVeS, the first complete relativistic theory exhibiting MONDian behaviour. TeVeS is constructed from a local Lagrangian (and hence respects conservation laws), and employs a unit vector field, a dynamical and non-dynamical scalar field, a free function and a non-Einsteinian metric in order to yield AQUAL in the non-relativistic limit (low speeds and weak gravity). TeVeS has enjoyed some success in making contact with gravitational lensing and structure formation observations, but faces problems when confronted with data on the anisotropy of the cosmic microwave background, the lifetime of compact objects, and the relationship between the lensing and matter overdensity potentials. TeVeS also appears inconsistent with the speed of gravitational waves measured by LIGO: the gravitational wave event GW170817 showed this speed to equal the speed of light to high precision. Several newer relativistic generalizations of MOND exist, including BIMOND and generalized Einstein aether theory. There is also a relativistic generalization of MOND that assumes a Lorentz-type invariance as the physical basis of MOND phenomenology. Recently, Skordis and Złośnik proposed a relativistic model of MOND that is compatible with cosmic microwave background observations, the matter power spectrum and the speed of gravity. ==== Cosmology ==== It has been claimed that MOND is generally unsuited to forming the basis of cosmology. A significant piece of evidence in favor of standard dark matter is the observed anisotropies in the cosmic microwave background. While ΛCDM is able to explain the observed angular power spectrum, MOND has a much harder time. It is possible to construct relativistic generalizations of MOND that can fit CMB observations, but it requires terms that do not look natural, and several observations (such as the amount of gravitational lensing) are still difficult to explain. 
MOND also encounters difficulties explaining structure formation, with density perturbations in MOND perhaps growing so rapidly that too much structure is formed by the present epoch. However, galaxy surveys appear to show massive galaxies forming earlier and more rapidly in cosmic history than is possible according to ΛCDM. There is a potential link between MOND and cosmology. It has been noted that the value of a0 is within an order of magnitude of cH0, where c is the speed of light and H0 is the Hubble constant (a measure of the present-day expansion rate of the universe). It is also close to the acceleration scale Λ c 2 {\displaystyle {\sqrt {\Lambda }}c^{2}} associated with the cosmological constant Λ. Recent work on a transactional formulation of entropic gravity by Schlatter and Kastner suggests a natural connection between a0, H0, and the cosmological constant. == Proposals for testing MOND == Several observational and experimental tests have been proposed to help distinguish between MOND and dark matter-based models: The detection of particles suitable for constituting cosmological dark matter would strongly suggest that ΛCDM is correct and no modification to Newton's laws is required. If MOND is taken as a theory of modified inertia, it predicts the existence of anomalous accelerations on the Earth at particular places and times of the year. These could be detected in a precision experiment. This prediction would not hold if MOND is taken as a theory of modified gravity, as the external field effect produced by the Earth would cancel MONDian effects at the Earth's surface. It has been suggested that MOND could be tested in the Solar System using the LISA Pathfinder mission (launched in 2015). In particular, it may be possible to detect the anomalous tidal stresses predicted by MOND to exist at the Earth-Sun saddlepoint of the Newtonian gravitational potential. 
It may also be possible to measure MOND corrections to the perihelion precession of the planets in the Solar System, or of a purpose-built spacecraft. One potential astrophysical test of MOND is to investigate whether isolated galaxies behave differently from otherwise-identical galaxies that are under the influence of a strong external field. Another is to search for non-Newtonian behaviour in the motion of binary star systems where the stars are sufficiently separated for their accelerations to be below a0. Testing MOND using the redshift-dependence of radial acceleration – Sabine Hossenfelder and Tobias Mistele propose a parameter-free MOND model they call Covariant Emergent Gravity and suggest that as measurements of radial acceleration improve, various MOND models and particle dark matter might be distinguishable because MOND predicts a much smaller redshift-dependence. == See also == == Notes == == References == == Further reading == Technical (books & book-length reviews): Banik, Indranil; Zhao, Hongsheng (2022-06-27). "From Galactic Bars to the Hubble Tension: Weighing Up the Astrophysical Evidence for Milgromian Gravity". Symmetry. 14 (7): 1331. arXiv:2110.06936. Bibcode:2022Symm...14.1331B. doi:10.3390/sym14071331. ISSN 2073-8994. Merritt, David (2020). A Philosophical Approach to MOND: Assessing the Milgromian Research Program in Cosmology (Cambridge: Cambridge University Press), 282 pp. ISBN 9781108492690 Famaey, Benoît; McGaugh, Stacy S. (2012). "Modified Newtonian Dynamics (MOND): Observational Phenomenology and Relativistic Extensions". Living Reviews in Relativity. 15 (1): 10. arXiv:1112.3960. Bibcode:2012LRR....15...10F. doi:10.12942/lrr-2012-10. PMC 5255531. PMID 28163623. Technical (review articles): McGaugh, Stacy S. (2015). "A tale of two paradigms: The mutual incommensurability of ΛCDM and MOND". Canadian Journal of Physics. 93 (2): 250–259. arXiv:1404.7525. Bibcode:2015CaJPh..93..250M. doi:10.1139/cjp-2014-0203. S2CID 51822163. 
Milgrom, Mordehai (2015). "MOND theory". Canadian Journal of Physics. 93 (2): 107–118. arXiv:1404.7661. Bibcode:2015CaJPh..93..107M. doi:10.1139/cjp-2014-0211. S2CID 119183394. Kroupa, Pavel (2015). "Galaxies as simple dynamical systems: Observational data disfavor dark matter and stochastic star formation". Canadian Journal of Physics. 93 (2): 169–202. arXiv:1406.4860. Bibcode:2015CaJPh..93..169K. doi:10.1139/cjp-2014-0179. S2CID 118479184. Milgrom, Mordehai (2014). "The MOND paradigm of modified dynamics". Scholarpedia. 9 (6): 31410. Bibcode:2014SchpJ...931410M. doi:10.4249/scholarpedia.31410. Scarpa, Riccardo (2006). "Modified Newtonian Dynamics, an Introductory Review". AIP Conference Proceedings. Vol. 822. AIP. pp. 253–265. arXiv:astro-ph/0601478. doi:10.1063/1.2189141. Popular: A non-Standard model, David Merritt, Aeon Magazine, July 2021 Dark matter critics focus on details, ignore big picture, Lee, 14 Nov 2012 Milgrom, Mordehai (2009). "MOND: Time for a change of mind?". arXiv:0908.3842 [astro-ph.CO]. "Dark matter" doubters not silenced yet Archived 2016-05-20 at the Wayback Machine, World Science, 2 Aug 2007 Does Dark Matter Really Exist?, Milgrom, Scientific American, Aug 2002 == External links == Media related to Modified Newtonian Dynamic at Wikimedia Commons Mordehai Milgrom's website Large collection of lectures and talks on Youtube
Wikipedia/Modified_Newtonian_dynamics
In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. A restraint algorithm is used to ensure that the distance between mass points is maintained. The general steps involved are: (i) choose novel unconstrained coordinates (internal coordinates), (ii) introduce explicit constraint forces, (iii) minimize constraint forces implicitly by the technique of Lagrange multipliers or projection methods. Constraint algorithms are often applied to molecular dynamics simulations. Although such simulations are sometimes performed using internal coordinates that automatically satisfy the bond-length, bond-angle and torsion-angle constraints, simulations may also be performed using explicit or implicit constraint forces for these three constraints. However, explicit constraint forces give rise to inefficiency; more computational power is required to get a trajectory of a given length. Therefore, internal coordinates and implicit-force constraint solvers are generally preferred. Constraint algorithms achieve computational efficiency by neglecting motion along some degrees of freedom. For instance, in atomistic molecular dynamics, typically the length of covalent bonds to hydrogen are constrained; however, constraint algorithms should not be used if vibrations along these degrees of freedom are important for the phenomenon being studied. == Mathematical background == The motion of a set of N particles can be described by a set of second-order ordinary differential equations, Newton's second law, which can be written in matrix form M ⋅ d 2 q d t 2 = f = − ∂ V ∂ q {\displaystyle \mathbf {M} \cdot {\frac {d^{2}\mathbf {q} }{dt^{2}}}=\mathbf {f} =-{\frac {\partial V}{\partial \mathbf {q} }}} where M is a mass matrix and q is the vector of generalized coordinates that describe the particles' positions. 
For example, the vector q may be the 3N Cartesian coordinates of the particle positions rk, where k runs from 1 to N; in the absence of constraints, M would be the 3N×3N diagonal matrix of the particle masses. The vector f represents the generalized forces and the scalar V(q) represents the potential energy, both of which are functions of the generalized coordinates q. If M constraints are present, the coordinates must also satisfy M time-independent algebraic equations g j ( q ) = 0 {\displaystyle g_{j}(\mathbf {q} )=0} where the index j runs from 1 to M. For brevity, these functions gj are grouped into an M-dimensional vector g below. The task is to solve the combined set of differential-algebraic (DAE) equations, instead of just the ordinary differential equations (ODE) of Newton's second law. This problem was studied in detail by Joseph Louis Lagrange, who laid out most of the methods for solving it. The simplest approach is to define new generalized coordinates that are unconstrained; this approach eliminates the algebraic equations and reduces the problem once again to solving an ordinary differential equation. Such an approach is used, for example, in describing the motion of a rigid body; the position and orientation of a rigid body can be described by six independent, unconstrained coordinates, rather than describing the positions of the particles that make it up and the constraints among them that maintain their relative distances. The drawback of this approach is that the equations may become unwieldy and complex; for example, the mass matrix M may become non-diagonal and depend on the generalized coordinates. A second approach is to introduce explicit forces that work to maintain the constraint; for example, one could introduce strong spring forces that enforce the distances among mass points within a "rigid" body. 
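The spring-based (penalty) approach can be sketched in a few lines: a stiff spring approximately maintains a target distance between two particles, but the distance oscillates around the target rather than being fixed, and the stiffness forces a small timestep. All parameter values below are illustrative:

```python
# Restraint via a stiff spring: V = 0.5 * k * (q1 - q0 - d)**2 approximately
# maintains |q1 - q0| = d for two particles on a line. Velocity Verlet
# integration of M * d2q/dt2 = f = -dV/dq; parameters are illustrative.
m0, m1, d = 1.0, 1.0, 1.0
k = 1.0e4                      # stiff restraint spring
dt = 1e-4                      # must resolve omega = sqrt(k*(1/m0 + 1/m1)) ~ 141 rad/s
q0, q1 = 0.0, 1.01             # start with the "bond" stretched by 0.01
v0, v1 = 0.0, 0.0

def forces(q0, q1):
    ext = q1 - q0 - d          # extension of the spring
    return k * ext, -k * ext   # f = -dV/dq for each coordinate

max_violation = 0.0
f0, f1 = forces(q0, q1)
for _ in range(20000):         # velocity Verlet loop
    v0 += 0.5 * dt * f0 / m0; v1 += 0.5 * dt * f1 / m1
    q0 += dt * v0;            q1 += dt * v1
    f0, f1 = forces(q0, q1)
    v0 += 0.5 * dt * f0 / m0; v1 += 0.5 * dt * f1 / m1
    max_violation = max(max_violation, abs(q1 - q0 - d))

print(max_violation)  # stays near the initial 0.01: small, but never exactly zero
```

Both drawbacks of the penalty scheme are visible here: the constraint is only approximately satisfied, and the fast spring oscillation dictates the timestep regardless of the slow motion one actually cares about.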
The two difficulties of this approach are that the constraints are not satisfied exactly, and the strong forces may require very short time-steps, making simulations inefficient computationally. A third approach is to use a method such as Lagrange multipliers or projection to the constraint manifold to determine the coordinate adjustments necessary to satisfy the constraints. Finally, there are various hybrid approaches in which different sets of constraints are satisfied by different methods, e.g., internal coordinates, explicit forces and implicit-force solutions. == Internal coordinate methods == The simplest approach to satisfying constraints in energy minimization and molecular dynamics is to represent the mechanical system in so-called internal coordinates corresponding to unconstrained independent degrees of freedom of the system. For example, the dihedral angles of a protein are an independent set of coordinates that specify the positions of all the atoms without requiring any constraints. The difficulty of such internal-coordinate approaches is twofold: the Newtonian equations of motion become much more complex and the internal coordinates may be difficult to define for cyclic systems of constraints, e.g., in ring puckering or when a protein has a disulfide bond. The original methods for efficient recursive energy minimization in internal coordinates were developed by Gō and coworkers. Efficient recursive, internal-coordinate constraint solvers were extended to molecular dynamics. Analogous methods were applied later to other systems. == Lagrange multiplier-based methods == In most of molecular dynamics simulations that use constraint algorithms, constraints are enforced using the method of Lagrange multipliers. 
Given a set of n holonomic constraints at the time t, σ k ( t ) := ‖ x k α ( t ) − x k β ( t ) ‖ 2 − d k 2 = 0 , k = 1 … n {\displaystyle \sigma _{k}(t):=\|\mathbf {x} _{k\alpha }(t)-\mathbf {x} _{k\beta }(t)\|^{2}-d_{k}^{2}=0,\quad k=1\ldots n} where x k α ( t ) {\displaystyle \scriptstyle \mathbf {x} _{k\alpha }(t)} and x k β ( t ) {\displaystyle \scriptstyle \mathbf {x} _{k\beta }(t)} are the positions of the two particles involved in the kth constraint at the time t and d k {\displaystyle d_{k}} is the prescribed inter-particle distance. The forces due to these constraints are added in the equations of motion, resulting in, for each of the N particles in the system ∂ 2 x i ( t ) ∂ t 2 m i = − ∂ ∂ x i [ V ( x i ( t ) ) − ∑ k = 1 n λ k σ k ( t ) ] , i = 1 … N . {\displaystyle {\frac {\partial ^{2}\mathbf {x} _{i}(t)}{\partial t^{2}}}m_{i}=-{\frac {\partial }{\partial \mathbf {x} _{i}}}\left[V(\mathbf {x} _{i}(t))-\sum _{k=1}^{n}\lambda _{k}\sigma _{k}(t)\right],\quad i=1\ldots N.} Adding the constraint forces does not change the total energy, as the net work done by the constraint forces (taken over the set of particles that the constraints act on) is zero. Note that the sign on λ k {\displaystyle \lambda _{k}} is arbitrary and some references have an opposite sign. By integrating both sides of the equation with respect to time, the constrained coordinates of the particles at the time t + Δ t {\displaystyle t+\Delta t} are given by x i ( t + Δ t ) = x ^ i ( t + Δ t ) + ∑ k = 1 n λ k ∂ σ k ( t ) ∂ x i ( Δ t ) 2 m i − 1 , i = 1 … N {\displaystyle \mathbf {x} _{i}(t+\Delta t)={\hat {\mathbf {x} }}_{i}(t+\Delta t)+\sum _{k=1}^{n}\lambda _{k}{\frac {\partial \sigma _{k}(t)}{\partial \mathbf {x} _{i}}}\left(\Delta t\right)^{2}m_{i}^{-1},\quad i=1\ldots N} where x ^ i ( t + Δ t ) {\displaystyle {\hat {\mathbf {x} }}_{i}(t+\Delta t)} is the unconstrained (or uncorrected) position of the ith particle after integrating the unconstrained equations of motion. 
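For a single distance constraint, the position update above can be implemented directly: propagate the unconstrained positions, then iterate on the one Lagrange multiplier until σ(t + Δt) = 0. The sketch below is a SHAKE-style illustration (not code from any particular package); as in the equations above, the constraint gradient ∂σ/∂x is evaluated at the old positions:

```python
def shake_bond(x_old_a, x_old_b, x_hat_a, x_hat_b, m_a, m_b, d, dt,
               tol=1e-12, max_iter=50):
    """Correct unconstrained positions x_hat so that |x_a - x_b| = d,
    using the Lagrange-multiplier update x = x_hat + lam * dsigma/dx * dt^2 / m
    with dsigma/dx taken at time t (single constraint, Newton iteration)."""
    dot = lambda u, w: sum(ui * wi for ui, wi in zip(u, w))
    r_old = [a - b for a, b in zip(x_old_a, x_old_b)]   # x_a(t) - x_b(t)
    x_a, x_b = list(x_hat_a), list(x_hat_b)
    for _ in range(max_iter):
        r = [a - b for a, b in zip(x_a, x_b)]
        sigma = dot(r, r) - d * d                       # constraint violation
        if abs(sigma) < tol:
            break
        # d(sigma)/d(lambda) = 4 * dt^2 * (1/m_a + 1/m_b) * (r . r_old)
        deriv = 4.0 * (1.0 / m_a + 1.0 / m_b) * dt * dt * dot(r, r_old)
        lam = -sigma / deriv                            # Newton step for lambda
        x_a = [xa + lam * 2.0 * ro * dt * dt / m_a for xa, ro in zip(x_a, r_old)]
        x_b = [xb - lam * 2.0 * ro * dt * dt / m_b for xb, ro in zip(x_b, r_old)]
    return x_a, x_b
```

For several coupled constraints, the same Newton iteration runs over the whole vector of multipliers using the Jacobian of the σ's, as described below.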
To satisfy the constraints σ k ( t + Δ t ) {\displaystyle \sigma _{k}(t+\Delta t)} in the next timestep, the Lagrange multipliers are determined from the condition σ k ( t + Δ t ) := ‖ x k α ( t + Δ t ) − x k β ( t + Δ t ) ‖ 2 − d k 2 = 0. {\displaystyle \sigma _{k}(t+\Delta t):=\left\|\mathbf {x} _{k\alpha }(t+\Delta t)-\mathbf {x} _{k\beta }(t+\Delta t)\right\|^{2}-d_{k}^{2}=0.} This implies solving a system of n {\displaystyle n} non-linear equations σ j ( t + Δ t ) := ‖ x ^ j α ( t + Δ t ) − x ^ j β ( t + Δ t ) + ∑ k = 1 n λ k ( Δ t ) 2 [ ∂ σ k ( t ) ∂ x j α m j α − 1 − ∂ σ k ( t ) ∂ x j β m j β − 1 ] ‖ 2 − d j 2 = 0 , j = 1 … n {\displaystyle \sigma _{j}(t+\Delta t):=\left\|{\hat {\mathbf {x} }}_{j\alpha }(t+\Delta t)-{\hat {\mathbf {x} }}_{j\beta }(t+\Delta t)+\sum _{k=1}^{n}\lambda _{k}\left(\Delta t\right)^{2}\left[{\frac {\partial \sigma _{k}(t)}{\partial \mathbf {x} _{j\alpha }}}m_{j\alpha }^{-1}-{\frac {\partial \sigma _{k}(t)}{\partial \mathbf {x} _{j\beta }}}m_{j\beta }^{-1}\right]\right\|^{2}-d_{j}^{2}=0,\quad j=1\ldots n} simultaneously for the n {\displaystyle n} unknown Lagrange multipliers λ k {\displaystyle \lambda _{k}} . This system of n {\displaystyle n} non-linear equations in n {\displaystyle n} unknowns is commonly solved using the Newton–Raphson method, in which the solution vector λ _ {\displaystyle {\underline {\lambda }}} is updated using λ _ ( l + 1 ) ← λ _ ( l ) − J σ − 1 σ _ ( t + Δ t ) {\displaystyle {\underline {\lambda }}^{(l+1)}\leftarrow {\underline {\lambda }}^{(l)}-\mathbf {J} _{\sigma }^{-1}{\underline {\sigma }}(t+\Delta t)} where J σ {\displaystyle \mathbf {J} _{\sigma }} is the Jacobian of the equations σk: J = ( ∂ σ 1 ∂ λ 1 ∂ σ 1 ∂ λ 2 ⋯ ∂ σ 1 ∂ λ n ∂ σ 2 ∂ λ 1 ∂ σ 2 ∂ λ 2 ⋯ ∂ σ 2 ∂ λ n ⋮ ⋮ ⋱ ⋮ ∂ σ n ∂ λ 1 ∂ σ n ∂ λ 2 ⋯ ∂ σ n ∂ λ n ) . 
{\displaystyle \mathbf {J} =\left({\begin{array}{cccc}{\frac {\partial \sigma _{1}}{\partial \lambda _{1}}}&{\frac {\partial \sigma _{1}}{\partial \lambda _{2}}}&\cdots &{\frac {\partial \sigma _{1}}{\partial \lambda _{n}}}\\[5pt]{\frac {\partial \sigma _{2}}{\partial \lambda _{1}}}&{\frac {\partial \sigma _{2}}{\partial \lambda _{2}}}&\cdots &{\frac {\partial \sigma _{2}}{\partial \lambda _{n}}}\\[5pt]\vdots &\vdots &\ddots &\vdots \\[5pt]{\frac {\partial \sigma _{n}}{\partial \lambda _{1}}}&{\frac {\partial \sigma _{n}}{\partial \lambda _{2}}}&\cdots &{\frac {\partial \sigma _{n}}{\partial \lambda _{n}}}\end{array}}\right).} Since not all particles contribute to all of the constraints, J σ {\displaystyle \mathbf {J} _{\sigma }} is a block matrix, and the system can be solved block by block; in other words, J σ {\displaystyle \mathbf {J} _{\sigma }} can be solved individually for each molecule. Instead of constantly updating the vector λ _ {\displaystyle {\underline {\lambda }}} , the iteration can be started with λ _ ( 0 ) = 0 {\displaystyle {\underline {\lambda }}^{(0)}=\mathbf {0} } , resulting in simpler expressions for σ k ( t ) {\displaystyle \sigma _{k}(t)} and ∂ σ k ( t ) ∂ λ j {\displaystyle {\frac {\partial \sigma _{k}(t)}{\partial \lambda _{j}}}} . In this case J i j = ∂ σ j ∂ λ i | λ = 0 = 2 [ x ^ j α − x ^ j β ] [ ∂ σ i ∂ x j α m j α − 1 − ∂ σ i ∂ x j β m j β − 1 ] . {\displaystyle J_{ij}=\left.{\frac {\partial \sigma _{j}}{\partial \lambda _{i}}}\right|_{\mathbf {\lambda } =0}=2\left[{\hat {x}}_{j\alpha }-{\hat {x}}_{j\beta }\right]\left[{\frac {\partial \sigma _{i}}{\partial x_{j\alpha }}}m_{j\alpha }^{-1}-{\frac {\partial \sigma _{i}}{\partial x_{j\beta }}}m_{j\beta }^{-1}\right].} λ {\displaystyle \lambda } is then updated to λ j = − J − 1 [ ‖ x ^ j α ( t + Δ t ) − x ^ j β ( t + Δ t ) ‖ 2 − d j 2 ] . 
{\displaystyle \mathbf {\lambda } _{j}=-\mathbf {J} ^{-1}\left[\left\|{\hat {\mathbf {x} }}_{j\alpha }(t+\Delta t)-{\hat {\mathbf {x} }}_{j\beta }(t+\Delta t)\right\|^{2}-d_{j}^{2}\right].} After each iteration, the unconstrained particle positions are updated using x ^ i ( t + Δ t ) ← x ^ i ( t + Δ t ) + ∑ k = 1 n λ k ∂ σ k ∂ x i ( Δ t ) 2 m i − 1 . {\displaystyle {\hat {\mathbf {x} }}_{i}(t+\Delta t)\leftarrow {\hat {\mathbf {x} }}_{i}(t+\Delta t)+\sum _{k=1}^{n}\lambda _{k}{\frac {\partial \sigma _{k}}{\partial \mathbf {x} _{i}}}\left(\Delta t\right)^{2}m_{i}^{-1}.} The vector is then reset to λ _ = 0 . {\displaystyle {\underline {\lambda }}=\mathbf {0} .} The above procedure is repeated until the solution of the constraint equations, σ k ( t + Δ t ) {\displaystyle \sigma _{k}(t+\Delta t)} , converges to within a prescribed numerical tolerance. Although there are a number of algorithms to compute the Lagrange multipliers, they differ only in the method used to solve the system of equations, for which quasi-Newton methods are commonly used. === The SETTLE algorithm === The SETTLE algorithm solves the system of non-linear equations analytically for n = 3 {\displaystyle n=3} constraints in constant time. Although it does not scale to larger numbers of constraints, it is very often used to constrain rigid water molecules, which are present in almost all biological simulations and are usually modelled using three constraints (e.g. SPC/E and TIP3P water models). === The SHAKE algorithm === The SHAKE algorithm was first developed for satisfying a bond geometry constraint during molecular dynamics simulations. The method was then generalised to handle any holonomic constraint, such as those required to maintain constant bond angles, or molecular rigidity.
In the SHAKE algorithm, the system of non-linear constraint equations is solved using the Gauss–Seidel method, which approximates the solution of the linear system of equations arising in the Newton–Raphson method; λ _ = − J σ − 1 σ _ . {\displaystyle {\underline {\lambda }}=-\mathbf {J} _{\sigma }^{-1}{\underline {\sigma }}.} This amounts to assuming that J σ {\displaystyle \mathbf {J} _{\sigma }} is diagonally dominant and solving the k {\displaystyle k} th equation only for the k {\displaystyle k} th unknown. In practice, we compute λ k ← σ k ( t ) ∂ σ k ( t ) / ∂ λ k , x k α ← x k α + λ k ∂ σ k ( t ) ∂ x k α , x k β ← x k β + λ k ∂ σ k ( t ) ∂ x k β , {\displaystyle {\begin{aligned}\lambda _{k}&\leftarrow {\frac {\sigma _{k}(t)}{\partial \sigma _{k}(t)/\partial \lambda _{k}}},\\[5pt]\mathbf {x} _{k\alpha }&\leftarrow \mathbf {x} _{k\alpha }+\lambda _{k}{\frac {\partial \sigma _{k}(t)}{\partial \mathbf {x} _{k\alpha }}},\\[5pt]\mathbf {x} _{k\beta }&\leftarrow \mathbf {x} _{k\beta }+\lambda _{k}{\frac {\partial \sigma _{k}(t)}{\partial \mathbf {x} _{k\beta }}},\end{aligned}}} for all k = 1 … n {\displaystyle k=1\ldots n} iteratively until the constraint equations σ k ( t + Δ t ) {\displaystyle \sigma _{k}(t+\Delta t)} are solved to a given tolerance. The computational cost of each iteration is O ( n ) {\displaystyle {\mathcal {O}}(n)} , and the iterations themselves converge linearly. A noniterative form of SHAKE was developed later on. Several variants of the SHAKE algorithm exist. Although they differ in how they compute or apply the constraints themselves, the constraints are still modelled using Lagrange multipliers which are computed using the Gauss–Seidel method. The original SHAKE algorithm is capable of constraining both rigid and flexible molecules (e.g. water, benzene and biphenyl) and introduces negligible error or energy drift into a molecular dynamics simulation.
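The Gauss–Seidel sweep above can be sketched in a few lines of Python. This is an illustrative toy, not any production implementation: the function name `shake`, the folding of the (Δt)² factor into the multiplier, and the convergence settings are choices made here. Each pass corrects one distance constraint at a time along the bond direction from the previous time step, weighting the moves by inverse mass.

```python
import numpy as np

def shake(x_old, x_new, masses, bonds, tol=1e-10, max_iter=500):
    """Gauss-Seidel sweep over distance constraints: for each bond
    (i, j, d), nudge the two particles along the old bond vector
    until ||x_i - x_j|| = d, weighting moves by inverse mass."""
    x = x_new.copy()
    for _ in range(max_iter):
        converged = True
        for i, j, d in bonds:
            r = x[i] - x[j]
            diff = r @ r - d * d          # constraint violation sigma_k
            if abs(diff) > tol:
                converged = False
                s = x_old[i] - x_old[j]   # gradient direction from time t
                g = diff / (2.0 * (s @ r) * (1.0 / masses[i] + 1.0 / masses[j]))
                x[i] -= g * s / masses[i]
                x[j] += g * s / masses[j]
        if converged:
            return x
    raise RuntimeError("SHAKE did not converge")

# Two unit-mass particles that drifted to separation 1.1; the bond
# length should be restored to 1.0.
x_old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
x_new = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
x = shake(x_old, x_new, np.array([1.0, 1.0]), [(0, 1, 1.0)])
print(np.linalg.norm(x[0] - x[1]))  # ≈ 1.0
```

Because the two corrections are equal and opposite (scaled by inverse mass), the sweep leaves the centre of mass untouched, mirroring the zero net work done by the constraint forces.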
One issue with SHAKE is that the number of iterations required to reach a given level of convergence rises as molecular geometry becomes more complex. To reach 64-bit computer accuracy (a relative tolerance of ≈ 10 − 16 {\displaystyle \approx 10^{-16}} ) in a typical molecular dynamics simulation at a temperature of 310 K, a 3-site water model with 3 constraints to maintain molecular geometry requires an average of 9 iterations (which is 3 per site per time-step). A 4-site butane model with 5 constraints needs 17 iterations (22 per site), a 6-site benzene model with 12 constraints needs 36 iterations (72 per site), while a 12-site biphenyl model with 29 constraints requires 92 iterations (229 per site per time-step). Hence the CPU requirements of the SHAKE algorithm can become significant, particularly if a molecular model has a high degree of rigidity. A later extension of the method, QSHAKE (Quaternion SHAKE), was developed as a faster alternative for molecules composed of rigid units, but it is not as general-purpose. It works satisfactorily for rigid loops such as aromatic ring systems, but QSHAKE fails for flexible loops, such as when a protein has a disulfide bond. Further extensions include RATTLE, WIGGLE, and MSHAKE. RATTLE works the same way as SHAKE but uses the velocity Verlet time-integration scheme; WIGGLE extends SHAKE and RATTLE by using an initial estimate for the Lagrange multipliers λ k {\displaystyle \lambda _{k}} based on the particle velocities; and MSHAKE computes corrections to the constraint forces, achieving better convergence. A final modification to the SHAKE algorithm is the P-SHAKE algorithm, which is applied to very rigid or semi-rigid molecules. P-SHAKE computes and updates a pre-conditioner which is applied to the constraint gradients before the SHAKE iteration, causing the Jacobian J σ {\displaystyle \mathbf {J} _{\sigma }} to become diagonal or strongly diagonally dominant.
The thus de-coupled constraints converge much faster (quadratically as opposed to linearly), at a cost of O ( n 2 ) {\displaystyle {\mathcal {O}}(n^{2})} . === The M-SHAKE algorithm === The M-SHAKE algorithm solves the non-linear system of equations using Newton's method directly. In each iteration, the linear system of equations λ _ = − J σ − 1 σ _ {\displaystyle {\underline {\lambda }}=-\mathbf {J} _{\sigma }^{-1}{\underline {\sigma }}} is solved exactly using an LU decomposition. Each iteration costs O ( n 3 ) {\displaystyle {\mathcal {O}}(n^{3})} operations, yet the solution converges quadratically, requiring fewer iterations than SHAKE. This solution was first proposed in 1986 by Ciccotti and Ryckaert under the title "the matrix method", though their approach differed in the solution of the linear system of equations. Ciccotti and Ryckaert suggested inverting the matrix J σ {\displaystyle \mathbf {J} _{\sigma }} directly, but doing so only once, in the first iteration. The first iteration then costs O ( n 3 ) {\displaystyle {\mathcal {O}}(n^{3})} operations, whereas the following iterations cost only O ( n 2 ) {\displaystyle {\mathcal {O}}(n^{2})} operations (for the matrix-vector multiplication). This improvement comes at a cost, though: since the Jacobian is no longer updated, convergence is only linear, albeit at a much faster rate than for the SHAKE algorithm. Several variants of this approach based on sparse matrix techniques were studied by Barth et al. === SHAPE algorithm === The SHAPE algorithm is a multicenter analog of SHAKE for constraining rigid bodies of three or more centers.
Like SHAKE, an unconstrained step is taken and then corrected by directly calculating and applying the rigid body rotation matrix that satisfies: L rigid ( t + Δ t 2 ) = L nonrigid ( t + Δ t 2 ) {\displaystyle L^{\text{rigid}}\left(t+{\frac {\Delta t}{2}}\right)=L^{\text{nonrigid}}\left(t+{\frac {\Delta t}{2}}\right)} This approach involves a single 3×3 matrix diagonalization followed by three or four rapid Newton iterations to determine the rotation matrix. SHAPE provides a trajectory identical to that of fully converged iterative SHAKE, yet it is found to be more efficient and more accurate than SHAKE when applied to systems involving three or more centers. It extends the ability of SHAKE-like constraints to linear systems with three or more atoms, planar systems with four or more atoms, and to significantly larger rigid structures where SHAKE is intractable. It also allows rigid bodies to be linked with one or two common centers (e.g. peptide planes) by solving rigid body constraints iteratively in the same basic manner that SHAKE is used for atoms involving more than one SHAKE constraint. === LINCS algorithm === An alternative constraint method, LINCS (Linear Constraint Solver), was developed in 1997 by Hess, Bekker, Berendsen and Fraaije, and was based on the 1986 method of Edberg, Evans and Morriss (EEM), and a modification thereof by Baranyai and Evans (BE). LINCS applies Lagrange multipliers to the constraint forces and solves for the multipliers by using a series expansion to approximate the inverse of the Jacobian J σ {\displaystyle \mathbf {J} _{\sigma }} : ( I − J σ ) − 1 = I + J σ + J σ 2 + J σ 3 + ⋯ {\displaystyle (\mathbf {I} -\mathbf {J} _{\sigma })^{-1}=\mathbf {I} +\mathbf {J} _{\sigma }+\mathbf {J} _{\sigma }^{2}+\mathbf {J} _{\sigma }^{3}+\cdots } in each step of the Newton iteration.
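The truncated series above can be checked numerically. The matrix and truncation orders below are illustrative choices (not values from the LINCS paper); the sketch simply compares a few terms of the series against the exact inverse:

```python
import numpy as np

def series_inverse(J, order):
    """Truncated series for (I - J)^(-1): I + J + J^2 + ... + J^order."""
    I = np.eye(J.shape[0])
    acc, term = I.copy(), I.copy()
    for _ in range(order):
        term = term @ J
        acc = acc + term
    return acc

# Illustrative 2x2 coupling matrix with spectral radius 0.25 < 1.
J = np.array([[0.0, 0.25], [0.25, 0.0]])
exact = np.linalg.inv(np.eye(2) - J)
err2 = np.max(np.abs(exact - series_inverse(J, 2)))
err8 = np.max(np.abs(exact - series_inverse(J, 8)))
print(err2, err8)  # the error shrinks as more terms are kept
```

With a spectral radius of 0.25, each extra term cuts the residual by roughly a factor of four; for a matrix with eigenvalues near or above 1 the series would fail to converge at all, which is the limitation discussed next.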
This approximation only works for matrices with eigenvalues of magnitude smaller than 1, making the LINCS algorithm suitable only for molecules with low connectivity. LINCS has been reported to be 3–4 times faster than SHAKE. == Hybrid methods == Hybrid methods have also been introduced in which the constraints are divided into two groups; the constraints of the first group are solved using internal coordinates, whereas those of the second group are solved using constraint forces, e.g., by a Lagrange multiplier or projection method. This approach was pioneered by Lagrange and results in Lagrange equations of the mixed type. == See also == Molecular dynamics Software for molecular mechanics modeling == References and footnotes ==
Wikipedia/Constraint_algorithm
In mathematics, the Cantor function is an example of a function that is continuous, but not absolutely continuous. It is a notorious counterexample in analysis, because it challenges naive intuitions about continuity, derivative, and measure. Although it is continuous everywhere, and has zero derivative almost everywhere, its value still goes from 0 to 1 as its argument goes from 0 to 1. Thus, while the function seems like a constant one that cannot grow, it does indeed monotonically grow. It is also called the Cantor ternary function, the Lebesgue function, Lebesgue's singular function, the Cantor–Vitali function, the Devil's staircase, the Cantor staircase function, and the Cantor–Lebesgue function. Georg Cantor (1884) introduced the Cantor function and mentioned that Scheeffer pointed out that it was a counterexample to an extension of the fundamental theorem of calculus claimed by Harnack. The Cantor function was discussed and popularized by Scheeffer (1884), Lebesgue (1904), and Vitali (1905). == Definition == To define the Cantor function c : [ 0 , 1 ] → [ 0 , 1 ] {\displaystyle c:[0,1]\to [0,1]} , let x {\displaystyle x} be any number in [ 0 , 1 ] {\displaystyle [0,1]} and obtain c ( x ) {\displaystyle c(x)} by the following steps: Express x {\displaystyle x} in base 3, using digits 0, 1, 2. If the base-3 representation of x {\displaystyle x} contains a 1, replace every digit strictly after the first 1 with 0. Replace any remaining 2s with 1s. Interpret the result as a binary number. The result is c ( x ) {\displaystyle c(x)} . For example: 1 4 {\displaystyle {\tfrac {1}{4}}} has the ternary representation 0.02020202... There are no 1s so the next stage is still 0.02020202... This is rewritten as 0.01010101... This is the binary representation of 1 3 {\displaystyle {\tfrac {1}{3}}} , so c ( 1 4 ) = 1 3 {\displaystyle c({\tfrac {1}{4}})={\tfrac {1}{3}}} . 1 5 {\displaystyle {\tfrac {1}{5}}} has the ternary representation 0.01210121... 
The digits after the first 1 are replaced by 0s to produce 0.01000000... This is not rewritten since it has no 2s. This is the binary representation of 1 4 {\displaystyle {\tfrac {1}{4}}} , so c ( 1 5 ) = 1 4 {\displaystyle c({\tfrac {1}{5}})={\tfrac {1}{4}}} . 200 243 {\displaystyle {\tfrac {200}{243}}} has the ternary representation 0.21102 (or 0.211012222...). The digits after the first 1 are replaced by 0s to produce 0.21. This is rewritten as 0.11. This is the binary representation of 3 4 {\displaystyle {\tfrac {3}{4}}} , so c ( 200 243 ) = 3 4 {\displaystyle c({\tfrac {200}{243}})={\tfrac {3}{4}}} . Equivalently, if C {\displaystyle {\mathcal {C}}} is the Cantor set on [0,1], then the Cantor function c : [ 0 , 1 ] → [ 0 , 1 ] {\displaystyle c:[0,1]\to [0,1]} can be defined as c ( x ) = { ∑ n = 1 ∞ a n 2 n , if x = ∑ n = 1 ∞ 2 a n 3 n ∈ C for a n ∈ { 0 , 1 } ; sup y ≤ x , y ∈ C c ( y ) , if x ∈ [ 0 , 1 ] ∖ C . {\displaystyle c(x)={\begin{cases}\displaystyle \sum _{n=1}^{\infty }{\frac {a_{n}}{2^{n}}},&\displaystyle {\text{if }}x=\sum _{n=1}^{\infty }{\frac {2a_{n}}{3^{n}}}\in {\mathcal {C}}\ {\text{for}}\ a_{n}\in \{0,1\};\\\displaystyle \sup _{y\leq x,\,y\in {\mathcal {C}}}c(y),&\displaystyle {\text{if }}x\in [0,1]\setminus {\mathcal {C}}.\end{cases}}} This formula is well-defined, since every member of the Cantor set has a unique base 3 representation that only contains the digits 0 or 2. (For some members of C {\displaystyle {\mathcal {C}}} , the ternary expansion is repeating with trailing 2's and there is an alternative non-repeating expansion ending in 1. For example, 1 3 {\displaystyle {\tfrac {1}{3}}} = 0.1 in base 3 = 0.02222... in base 3 is a member of the Cantor set.)
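The four-step digit recipe translates directly into code. The sketch below (the function name and the 60-digit cutoff are arbitrary choices made here) uses exact rational arithmetic so the worked examples above can be checked:

```python
from fractions import Fraction

def cantor(x, digits=60):
    """Cantor function by the digit recipe: read off ternary digits,
    zero everything after the first 1, map 2 -> 1, reinterpret in base 2."""
    x = Fraction(x)
    result = Fraction(0)
    for n in range(1, digits + 1):
        x *= 3
        d = int(x)            # next ternary digit of x
        x -= d
        if d == 1:            # keep this 1, drop all later digits
            result += Fraction(1, 2 ** n)
            break
        if d == 2:            # a ternary 2 becomes a binary 1
            result += Fraction(1, 2 ** n)
    return result

print(cantor(Fraction(1, 5)))         # 1/4, matching the worked example
print(cantor(Fraction(200, 243)))     # 3/4
print(float(cantor(Fraction(1, 4))))  # ≈ 1/3 (truncated at 60 digits)
```

For inputs whose expansion contains a 1 (like 1/5 or 200/243) the value is exact; for inputs like 1/4, whose ternary expansion never hits a 1, the truncation at 60 digits leaves an error below 2⁻⁶⁰.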
Since c ( 0 ) = 0 {\displaystyle c(0)=0} and c ( 1 ) = 1 {\displaystyle c(1)=1} , and c {\displaystyle c} is monotonic on C {\displaystyle {\mathcal {C}}} , it is clear that 0 ≤ c ( x ) ≤ 1 {\displaystyle 0\leq c(x)\leq 1} also holds for all x ∈ [ 0 , 1 ] ∖ C {\displaystyle x\in [0,1]\smallsetminus {\mathcal {C}}} . == Properties == The Cantor function challenges naive intuitions about continuity and measure; though it is continuous everywhere and has zero derivative almost everywhere, c ( x ) {\textstyle c(x)} goes from 0 to 1 as x {\textstyle x} goes from 0 to 1, and takes on every value in between. The Cantor function is the most frequently cited example of a real function that is uniformly continuous (precisely, it is Hölder continuous of exponent α = log 3 ⁡ ( 2 ) {\displaystyle \alpha =\log _{3}(2)} ) but not absolutely continuous. It is constant on intervals of the form (0.x1x2x3...xn022222..., 0.x1x2x3...xn200000...), and every point not in the Cantor set is in one of these intervals, so its derivative is 0 outside of the Cantor set. On the other hand, it has no derivative at any point in an uncountable subset of the Cantor set containing the interval endpoints described above. The Cantor function can also be seen as the cumulative probability distribution function of the 1/2-1/2 Bernoulli measure μ supported on the Cantor set: c ( x ) = μ ( [ 0 , x ] ) {\textstyle c(x)=\mu ([0,x])} . This probability distribution, called the Cantor distribution, has no discrete part. That is, the corresponding measure is atomless. This is why there are no jump discontinuities in the function; any such jump would correspond to an atom in the measure. However, no non-constant part of the Cantor function can be represented as an integral of a probability density function; integrating any putative probability density function that is not almost everywhere zero over any interval will give positive probability to some interval to which this distribution assigns probability zero. 
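This measure-theoretic description can be checked empirically: drawing each ternary digit as 0 or 2 with probability 1/2 samples the Cantor distribution, and the empirical CDF should approach c. The sampler below, with its truncation depth and sample count, is an ad-hoc illustration:

```python
import random

random.seed(0)

DEPTH = 40
# Weight of ternary digit n+1 when it equals 2.
W = [2 / 3 ** (n + 1) for n in range(DEPTH)]

def cantor_sample():
    """Draw from the Cantor distribution: each ternary digit is 0 or 2
    with probability 1/2 (the 1/2-1/2 Bernoulli measure)."""
    bits = random.getrandbits(DEPTH)
    return sum(w for n, w in enumerate(W) if (bits >> n) & 1)

samples = [cantor_sample() for _ in range(100_000)]
# The empirical CDF at x = 1/4 approximates c(1/4) = 1/3.
p = sum(s <= 0.25 for s in samples) / len(samples)
print(p)  # close to 1/3
```

The absence of atoms shows up here too: no single value of `cantor_sample()` recurs with positive probability, so the empirical CDF has no visible jumps.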
In particular, as Vitali (1905) pointed out, the function is not the integral of its derivative even though the derivative exists almost everywhere. The Cantor function is the standard example of a singular function. The Cantor function is also a standard example of a function with bounded variation but, as mentioned above, is not absolutely continuous. However, every absolutely continuous function is continuous with bounded variation. The Cantor function is non-decreasing, and so in particular its graph defines a rectifiable curve. Scheeffer (1884) showed that the arc length of its graph is 2. Note that the graph of any nondecreasing function such that f ( 0 ) = 0 {\displaystyle f(0)=0} and f ( 1 ) = 1 {\displaystyle f(1)=1} has length not greater than 2. In this sense, the Cantor function is extremal. === Lack of absolute continuity === The Lebesgue measure of the Cantor set is 0. Therefore, for any positive ε < 1 and any δ > 0, there exists a finite sequence of pairwise disjoint sub-intervals with total length < δ over which the Cantor function cumulatively rises more than ε. In fact, for every δ > 0 there are finitely many pairwise disjoint intervals (xk,yk) (1 ≤ k ≤ M) with ∑ k = 1 M ( y k − x k ) < δ {\displaystyle \sum \limits _{k=1}^{M}(y_{k}-x_{k})<\delta } and ∑ k = 1 M ( c ( y k ) − c ( x k ) ) = 1 {\displaystyle \sum \limits _{k=1}^{M}(c(y_{k})-c(x_{k}))=1} . == Alternative definitions == === Iterative construction === Below we define a sequence ( f n ) n {\displaystyle (f_{n})_{n}} of functions on the unit interval that converges to the Cantor function. Let f 0 ( x ) = x {\displaystyle f_{0}(x)=x} . 
Then, for every integer n ≥ 0 {\displaystyle n\geq 0} , the next function f n + 1 ( x ) {\displaystyle f_{n+1}(x)} will be defined in terms of f n ( x ) {\displaystyle f_{n}(x)} as follows: f n + 1 ( x ) = { 1 2 f n ( 3 x ) if 0 ≤ x ≤ 1 3 1 2 if 1 3 ≤ x ≤ 2 3 1 2 + 1 2 f n ( 3 x − 2 ) if 2 3 ≤ x ≤ 1 {\displaystyle f_{n+1}(x)={\begin{cases}\displaystyle {\frac {1}{2}}f_{n}(3x)&{\text{if }}0\leq x\leq {\frac {1}{3}}\\\displaystyle {\frac {1}{2}}&{\text{if }}{\frac {1}{3}}\leq x\leq {\frac {2}{3}}\\\displaystyle {\frac {1}{2}}+{\frac {1}{2}}f_{n}(3x-2)&{\text{if }}{\frac {2}{3}}\leq x\leq 1\end{cases}}} The three definitions are compatible at the end-points 1 3 {\displaystyle {\tfrac {1}{3}}} and 2 3 {\displaystyle {\tfrac {2}{3}}} , because f n ( 0 ) = 0 {\displaystyle f_{n}(0)=0} and f n ( 1 ) = 1 {\displaystyle f_{n}(1)=1} for every n {\displaystyle n} , by induction. One may check that ( f n ) n {\displaystyle (f_{n})_{n}} converges pointwise to the Cantor function defined above. Furthermore, the convergence is uniform. Indeed, separating into three cases, according to the definition of f n + 1 {\displaystyle f_{n+1}} , one sees that max x ∈ [ 0 , 1 ] | f n + 1 ( x ) − f n ( x ) | ≤ 1 2 max x ∈ [ 0 , 1 ] | f n ( x ) − f n − 1 ( x ) | , n ≥ 1. {\displaystyle \max _{x\in [0,1]}|f_{n+1}(x)-f_{n}(x)|\leq {\frac {1}{2}}\,\max _{x\in [0,1]}|f_{n}(x)-f_{n-1}(x)|,\quad n\geq 1.} If f {\displaystyle f} denotes the limit function, it follows that, for every n ≥ 0 {\displaystyle n\geq 0} , max x ∈ [ 0 , 1 ] | f ( x ) − f n ( x ) | ≤ 2 − n + 1 max x ∈ [ 0 , 1 ] | f 1 ( x ) − f 0 ( x ) | . {\displaystyle \max _{x\in [0,1]}|f(x)-f_{n}(x)|\leq 2^{-n+1}\,\max _{x\in [0,1]}|f_{1}(x)-f_{0}(x)|.} === Fractal volume === The Cantor function is closely related to the Cantor set. 
The Cantor set C can be defined as the set of those numbers in the interval [0, 1] that do not contain the digit 1 in their base-3 (triadic) expansion, except if the 1 is followed by zeros only (in which case the tail 1000 … {\displaystyle \ldots } can be replaced by 0222 … {\displaystyle \ldots } to get rid of any 1). It turns out that the Cantor set is a fractal with (uncountably) infinitely many points (zero-dimensional volume), but zero length (one-dimensional volume). Only the D-dimensional volume H D {\displaystyle H_{D}} (in the sense of a Hausdorff-measure) takes a finite value, where D = log 3 ⁡ ( 2 ) {\displaystyle D=\log _{3}(2)} is the fractal dimension of C. We may define the Cantor function alternatively as the D-dimensional volume of sections of the Cantor set f ( x ) = H D ( C ∩ ( 0 , x ) ) . {\displaystyle f(x)=H_{D}(C\cap (0,x)).} == Self-similarity == The Cantor function possesses several symmetries. For 0 ≤ x ≤ 1 {\displaystyle 0\leq x\leq 1} , there is a reflection symmetry c ( x ) = 1 − c ( 1 − x ) {\displaystyle c(x)=1-c(1-x)} and a pair of magnifications, one on the left and one on the right: c ( x 3 ) = c ( x ) 2 {\displaystyle c\left({\frac {x}{3}}\right)={\frac {c(x)}{2}}} and c ( x + 2 3 ) = 1 + c ( x ) 2 {\displaystyle c\left({\frac {x+2}{3}}\right)={\frac {1+c(x)}{2}}} The magnifications can be cascaded; they generate the dyadic monoid. This is exhibited by defining several helper functions. Define the reflection as r ( x ) = 1 − x {\displaystyle r(x)=1-x} The first self-symmetry can be expressed as r ∘ c = c ∘ r {\displaystyle r\circ c=c\circ r} where the symbol ∘ {\displaystyle \circ } denotes function composition. That is, ( r ∘ c ) ( x ) = r ( c ( x ) ) = 1 − c ( x ) {\displaystyle (r\circ c)(x)=r(c(x))=1-c(x)} and likewise for the other cases. 
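These symmetry relations are easy to verify numerically. The helper below re-implements c via the ternary-digit recipe from the Definition section (a floating-point version; the 40-digit cutoff and the sampled test points are arbitrary choices made here):

```python
def c(x, n=40):
    """Cantor function via the ternary-digit recipe (float version)."""
    total, scale = 0.0, 0.5
    for _ in range(n):
        x *= 3
        d = int(x)        # next ternary digit of x
        x -= d
        if d == 1:        # digits after the first 1 are dropped
            return total + scale
        if d == 2:        # a ternary 2 becomes a binary 1
            total += scale
        scale /= 2
    return total

for x in (0.1, 0.25, 0.7):
    assert abs(c(x) - (1 - c(1 - x))) < 1e-6            # reflection
    assert abs(c(x / 3) - c(x) / 2) < 1e-6              # left magnification
    assert abs(c((x + 2) / 3) - (1 + c(x)) / 2) < 1e-6  # right magnification
print("symmetries hold at the sampled points")
```

The tolerance absorbs the floating-point drift of repeated multiplication by 3; with exact rational arithmetic the identities hold exactly.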
For the left and right magnifications, write the left-mappings L D ( x ) = x 2 {\displaystyle L_{D}(x)={\frac {x}{2}}} and L C ( x ) = x 3 {\displaystyle L_{C}(x)={\frac {x}{3}}} Then the Cantor function obeys L D ∘ c = c ∘ L C {\displaystyle L_{D}\circ c=c\circ L_{C}} Similarly, define the right mappings as R D ( x ) = 1 + x 2 {\displaystyle R_{D}(x)={\frac {1+x}{2}}} and R C ( x ) = 2 + x 3 {\displaystyle R_{C}(x)={\frac {2+x}{3}}} Then, likewise, R D ∘ c = c ∘ R C {\displaystyle R_{D}\circ c=c\circ R_{C}} The two sides can be mirrored one onto the other, in that L D ∘ r = r ∘ R D {\displaystyle L_{D}\circ r=r\circ R_{D}} and likewise, L C ∘ r = r ∘ R C {\displaystyle L_{C}\circ r=r\circ R_{C}} These operations can be stacked arbitrarily. Consider, for example, the sequence of left-right moves L R L L R . {\displaystyle LRLLR.} Adding the subscripts C and D, and, for clarity, dropping the composition operator ∘ {\displaystyle \circ } in all but a few places, one has: L D R D L D L D R D ∘ c = c ∘ L C R C L C L C R C {\displaystyle L_{D}R_{D}L_{D}L_{D}R_{D}\circ c=c\circ L_{C}R_{C}L_{C}L_{C}R_{C}} Arbitrary finite-length strings in the letters L and R correspond to the dyadic rationals, in that every dyadic rational can be written as both y = n / 2 m {\displaystyle y=n/2^{m}} for integer n and m and as finite length of bits y = 0. b 1 b 2 b 3 ⋯ b m {\displaystyle y=0.b_{1}b_{2}b_{3}\cdots b_{m}} with b k ∈ { 0 , 1 } . {\displaystyle b_{k}\in \{0,1\}.} Thus, every dyadic rational is in one-to-one correspondence with some self-symmetry of the Cantor function. Some notational rearrangements can make the above slightly easier to express. Let g 0 {\displaystyle g_{0}} and g 1 {\displaystyle g_{1}} stand for L and R. 
Function composition extends this to a monoid, in that one can write g 010 = g 0 g 1 g 0 {\displaystyle g_{010}=g_{0}g_{1}g_{0}} and generally, g A g B = g A B {\displaystyle g_{A}g_{B}=g_{AB}} for some binary strings of digits A, B, where AB is just the ordinary concatenation of such strings. The dyadic monoid M is then the monoid of all such finite-length left-right moves. Writing γ ∈ M {\displaystyle \gamma \in M} as a general element of the monoid, there is a corresponding self-symmetry of the Cantor function: γ D ∘ c = c ∘ γ C {\displaystyle \gamma _{D}\circ c=c\circ \gamma _{C}} The dyadic monoid itself has several interesting properties. It can be viewed as a finite number of left-right moves down an infinite binary tree; the infinitely distant "leaves" on the tree correspond to the points on the Cantor set, and so, the monoid also represents the self-symmetries of the Cantor set. In fact, a large class of commonly occurring fractals are described by the dyadic monoid; additional examples can be found in the article on de Rham curves. Other fractals possessing self-similarity are described with other kinds of monoids. The dyadic monoid is itself a sub-monoid of the modular group S L ( 2 , Z ) . {\displaystyle SL(2,\mathbb {Z} ).} Note that the Cantor function bears more than a passing resemblance to Minkowski's question-mark function. In particular, it obeys the exact same symmetry relations, although in an altered form. == Generalizations == Let y = ∑ k = 1 ∞ b k 2 − k {\displaystyle y=\sum _{k=1}^{\infty }b_{k}2^{-k}} be the dyadic (binary) expansion of the real number 0 ≤ y ≤ 1 in terms of binary digits bk ∈ {0,1}. This expansion is discussed in greater detail in the article on the dyadic transformation. Then consider the function C z ( y ) = ∑ k = 1 ∞ b k z k . {\displaystyle C_{z}(y)=\sum _{k=1}^{\infty }b_{k}z^{k}.} For z = 1/3, the inverse of the function x = 2 C1/3(y) is the Cantor function. That is, y = y(x) is the Cantor function. 
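A short sketch of C_z follows (the function name and 50-digit cutoff are illustrative choices); for z = 1/3, doubling C_{1/3}(y) numerically inverts the Cantor function as described:

```python
def C(z, y, digits=50):
    """C_z(y) = sum_k b_k z^k, with b_k the binary digits of y."""
    total = 0.0
    for k in range(1, digits + 1):
        y *= 2          # shift out the next binary digit
        b = int(y)
        y -= b
        total += b * z ** k
    return total

# c(1/4) = 1/3, so x = 2*C_{1/3}(y) evaluated at y = 1/3 should
# recover 1/4.
print(2 * C(1 / 3, 1 / 3))  # ≈ 0.25
```

Doubling y and taking the integer part is exact in binary floating point, so the digits b_k extracted here are the true bits of the double nearest 1/3.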
In general, for any z < 1/2, Cz(y) looks like the Cantor function turned on its side, with the width of the steps getting wider as z approaches zero. As mentioned above, the Cantor function is also the cumulative distribution function of a measure on the Cantor set. Different Cantor functions, or Devil's Staircases, can be obtained by considering different atom-less probability measures supported on the Cantor set or other fractals. While the Cantor function has derivative 0 almost everywhere, current research focuses on the question of the size of the set of points where the upper right derivative is distinct from the lower right derivative, causing the derivative to not exist. This analysis of differentiability is usually given in terms of fractal dimension, with the Hausdorff dimension the most popular choice. This line of research was started in the 1990s by Darst, who showed that the Hausdorff dimension of the set of non-differentiability of the Cantor function is the square of the dimension of the Cantor set, ( log 3 ⁡ ( 2 ) ) 2 {\displaystyle (\log _{3}(2))^{2}} . Subsequently, Falconer showed that this squaring relationship holds for all Ahlfors-regular, singular measures, i.e. dim H ⁡ { x : f ′ ( x ) = lim h → 0 + μ ( [ x , x + h ] ) h does not exist } = ( dim H ⁡ supp ⁡ ( μ ) ) 2 {\displaystyle \dim _{H}\left\{x:f'(x)=\lim _{h\to 0^{+}}{\frac {\mu ([x,x+h])}{h}}{\text{ does not exist}}\right\}=\left(\dim _{H}\operatorname {supp} (\mu )\right)^{2}} Later, Troscheit obtained a more comprehensive picture of the set where the derivative does not exist for more general normalized Gibbs measures supported on self-conformal and self-similar sets.
Hermann Minkowski's question mark function loosely resembles the Cantor function visually, appearing as a "smoothed out" form of the latter; it can be constructed by passing from a continued fraction expansion to a binary expansion, just as the Cantor function can be constructed by passing from a ternary expansion to a binary expansion. The question mark function has the interesting property of having vanishing derivatives at all rational numbers. == See also == Dyadic transformation Weierstrass function, a function that is continuous everywhere but differentiable nowhere. == Notes == == References == Bass, Richard Franklin (2013) [2011]. Real analysis for graduate students (Second ed.). Createspace Independent Publishing. ISBN 978-1-4818-6914-0. Cantor, G. (1884). "De la puissance des ensembles parfaits de points: Extrait d'une lettre adressée à l'éditeur" [The power of perfect sets of points: Extract from a letter addressed to the editor]. Acta Mathematica. 4. International Press of Boston: 381–392. doi:10.1007/bf02418423. ISSN 0001-5962. Reprinted in: E. Zermelo (Ed.), Gesammelte Abhandlungen Mathematischen und Philosophischen Inhalts, Springer, New York, 1980. Darst, Richard B.; Palagallo, Judith A.; Price, Thomas E. (2010), Curious curves, Hackensack, NJ: World Scientific Publishing Co. Pte. Ltd., ISBN 978-981-4291-28-6, MR 2681574 Dovgoshey, O.; Martio, O.; Ryazanov, V.; Vuorinen, M. (2006). "The Cantor function". Expositiones Mathematicae. 24 (1). Elsevier BV: 1–37. doi:10.1016/j.exmath.2005.05.002. ISSN 0723-0869. MR 2195181. Fleron, Julian F. (1994-04-01). "A Note on the History of the Cantor Set and Cantor Function". Mathematics Magazine. 67 (2). Informa UK Limited: 136–140. doi:10.2307/2690689. ISSN 0025-570X. JSTOR 2690689. Lebesgue, H. (1904), Leçons sur l'intégration et la recherche des fonctions primitives [Lessons on integration and search for primitive functions], Paris: Gauthier-Villars Leoni, Giovanni (2017). A first course in Sobolev spaces. 
Vol. 181 (2nd ed.). Providence, Rhode Island: American Mathematical Society. p. 734. ISBN 978-1-4704-2921-8. OCLC 976406106. Scheeffer, Ludwig (1884). "Allgemeine Untersuchungen über Rectification der Curven" [General investigations on rectification of the curves]. Acta Mathematica. 5. International Press of Boston: 49–82. doi:10.1007/bf02421552. ISSN 0001-5962. Thomson, Brian S.; Bruckner, Judith B.; Bruckner, Andrew M. (2008) [2001]. Elementary real analysis (Second ed.). ClassicalRealAnalysis.com. ISBN 978-1-4348-4367-8. Vestrup, E.M. (2003). The theory of measures and integration. Wiley series in probability and statistics. John Wiley & sons. ISBN 978-0471249771. Vitali, A. (1905), "Sulle funzioni integrali" [On the integral functions], Atti Accad. Sci. Torino Cl. Sci. Fis. Mat. Natur., 40: 1021–1034 == External links == Cantor ternary function at Encyclopaedia of Mathematics Cantor Function by Douglas Rivers, the Wolfram Demonstrations Project. Weisstein, Eric W. "Cantor Function". MathWorld.
Wikipedia/Cantor_function
In mathematics, Volterra's function, named for Vito Volterra, is a real-valued function V defined on the real line R with the following curious combination of properties: V is differentiable everywhere; the derivative V ′ is bounded everywhere; yet the derivative is not Riemann-integrable. == Definition and construction == The function is defined by making use of the Smith–Volterra–Cantor set and an infinite number of "copies" of sections of the function defined by f ( x ) = { x 2 sin ⁡ ( 1 / x ) , x ≠ 0 0 , x = 0. {\displaystyle f(x)={\begin{cases}x^{2}\sin(1/x),&x\neq 0\\0,&x=0.\end{cases}}} The construction of V begins by determining the largest value of x in the interval [0, 1/8] for which f ′(x) = 0. Once this value (say x0) is determined, extend the function to the right with a constant value of f(x0) up to and including the point 1/8. Once this is done, a mirror image of the function can be created starting at the point 1/4 and extending downward towards 0. This function will be defined to be 0 outside of the interval [0, 1/4]. We then translate this function to the interval [3/8, 5/8] so that the resulting function, which we call f1, is nonzero only on the middle interval of the complement of the Smith–Volterra–Cantor set. To construct f2, f ′ is then considered on the smaller interval [0, 1/32], truncated at the last place the derivative is zero, extended, and mirrored the same way as before, and two translated copies of the resulting function are added to f1 to produce the function f2. Volterra's function then results by repeating this procedure for every interval removed in the construction of the Smith–Volterra–Cantor set; in other words, the function V is the limit of the sequence of functions f1, f2, ... == Further properties == Volterra's function is differentiable everywhere just as f (as defined above) is.
One can show that f ′(x) = 2x sin(1/x) - cos(1/x) for x ≠ 0, which means that in any neighborhood of zero, there are points where f ′ takes values 1 and −1. Thus there are points where V ′ takes values 1 and −1 in every neighborhood of each of the endpoints of intervals removed in the construction of the Smith–Volterra–Cantor set S. In fact, V ′ is discontinuous at every point of S, even though V itself is differentiable at every point of S, with derivative 0. However, V ′ is continuous on each interval removed in the construction of S, so the set of discontinuities of V ′ is equal to S. Since the Smith–Volterra–Cantor set S has positive Lebesgue measure, this means that V ′ is discontinuous on a set of positive measure. By Lebesgue's criterion for Riemann integrability, V ′ is not Riemann integrable. If one were to repeat the construction of Volterra's function with the ordinary measure-0 Cantor set C in place of the "fat" (positive-measure) Cantor set S, one would obtain a function with many similar properties, but the derivative would then be discontinuous on the measure-0 set C instead of the positive-measure set S, and so the resulting function would have a Riemann integrable derivative. == See also == Fundamental theorem of calculus == References == == External links == Wrestling with the Fundamental Theorem of Calculus: Volterra's function Archived 2020-11-23 at the Wayback Machine, talk by David Marius Bressoud Volterra's example of a derivative that is not integrable Archived 2016-03-03 at the Wayback Machine(PPT), talk by David Marius Bressoud
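These properties of the building block f can be checked numerically. The following sketch (ours, not part of the article) verifies that the difference quotient of f at 0 is bounded by |h|, so f ′(0) = 0, while f ′(1/(2kπ)) = −1 exactly for every positive integer k, so f ′ takes values far from 0 arbitrarily close to the origin:

```python
import math

def f(x):
    """Building block of Volterra's construction: x^2 sin(1/x) for x != 0, and 0 at x = 0."""
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def f_prime(x):
    """f'(x) = 2x sin(1/x) - cos(1/x) for x != 0; f'(0) = 0 by the squeeze |f(h)/h| <= |h|."""
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x) if x != 0 else 0.0

# The difference quotient at 0 is bounded by |h|, so f'(0) = 0 ...
for h in (1e-2, 1e-4, 1e-6):
    assert abs(f(h) / h) <= abs(h)

# ... yet at x = 1/(2*k*pi) we have sin(1/x) = 0 and cos(1/x) = 1,
# so f'(x) = -1 at points arbitrarily close to 0.
for k in (1, 10, 1000):
    x = 1.0 / (2 * k * math.pi)
    assert abs(f_prime(x) + 1.0) < 1e-9
```

This is exactly the oscillation that, once copied onto the Smith–Volterra–Cantor set, makes V ′ discontinuous on a set of positive measure.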
Wikipedia/Volterra's_function
The calculus of moving surfaces (CMS) is an extension of the classical tensor calculus to deforming manifolds. Central to the CMS is the tensorial time derivative ∇ ˙ {\displaystyle {\dot {\nabla }}} whose original definition was put forth by Jacques Hadamard. It plays the role analogous to that of the covariant derivative ∇ α {\displaystyle \nabla _{\alpha }} on differential manifolds in that it produces a tensor when applied to a tensor. Suppose that Σ t {\displaystyle \Sigma _{t}} is the evolution of the surface Σ {\displaystyle \Sigma } indexed by a time-like parameter t {\displaystyle t} . The definitions of the surface velocity C {\displaystyle C} and the operator ∇ ˙ {\displaystyle {\dot {\nabla }}} are the geometric foundations of the CMS. The velocity C is the rate of deformation of the surface Σ {\displaystyle \Sigma } in the instantaneous normal direction. The value of C {\displaystyle C} at a point P {\displaystyle P} is defined as the limit C = lim h → 0 Distance ( P , P ∗ ) h {\displaystyle C=\lim _{h\to 0}{\frac {{\text{Distance}}(P,P^{*})}{h}}} where P ∗ {\displaystyle P^{*}} is the point on Σ t + h {\displaystyle \Sigma _{t+h}} that lies on the straight line perpendicular to Σ t {\displaystyle \Sigma _{t}} at point P. This definition is illustrated in the first geometric figure below. The velocity C {\displaystyle C} is a signed quantity: it is positive when P P ∗ ¯ {\displaystyle {\overline {PP^{*}}}} points in the direction of the chosen normal, and negative otherwise. The relationship between Σ t {\displaystyle \Sigma _{t}} and C {\displaystyle C} is analogous to the relationship between location and velocity in elementary calculus: knowing either quantity allows one to construct the other by differentiation or integration. 
The tensorial time derivative ∇ ˙ {\displaystyle {\dot {\nabla }}} for a scalar field F defined on Σ t {\displaystyle \Sigma _{t}} is the rate of change in F {\displaystyle F} in the instantaneously normal direction: δ F δ t = lim h → 0 F ( P ∗ ) − F ( P ) h {\displaystyle {\frac {\delta F}{\delta t}}=\lim _{h\to 0}{\frac {F(P^{*})-F(P)}{h}}} This definition is also illustrated in the second geometric figure. The above definitions are geometric. In analytical settings, direct application of these definitions may not be possible. The CMS gives analytical definitions of C and ∇ ˙ {\displaystyle {\dot {\nabla }}} in terms of elementary operations from calculus and differential geometry. == Analytical definitions == For analytical definitions of C {\displaystyle C} and ∇ ˙ {\displaystyle {\dot {\nabla }}} , consider the evolution of S {\displaystyle S} given by Z i = Z i ( t , S ) {\displaystyle Z^{i}=Z^{i}\left(t,S\right)} where Z i {\displaystyle Z^{i}} are general curvilinear space coordinates and S α {\displaystyle S^{\alpha }} are the surface coordinates. By convention, tensor indices of function arguments are dropped. Thus the above equation contains S {\displaystyle S} rather than S α {\displaystyle S^{\alpha }} . The velocity object V = V i Z i {\displaystyle {\textbf {V}}=V^{i}{\textbf {Z}}_{i}} is defined as the partial derivative V i = ∂ Z i ( t , S ) ∂ t {\displaystyle V^{i}={\frac {\partial Z^{i}\left(t,S\right)}{\partial t}}} The velocity C {\displaystyle C} can be computed most directly by the formula C = V i N i {\displaystyle C=V^{i}N_{i}} where N i {\displaystyle N_{i}} are the covariant components of the normal vector N → {\displaystyle {\vec {N}}} .
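As an illustrative numerical sketch (our example, not from the article), consider an expanding circle of radius r(t) = 1 + t/2 in the plane. With Z(t, θ) = r(t)(cos θ, sin θ), the velocity object is V = (ṙ cos θ, ṙ sin θ), the outward unit normal is N = (cos θ, sin θ), and the formula C = VⁱNᵢ gives C = ṙ = 1/2:

```python
import math

def surface(t, theta):
    """Expanding circle: Z(t, theta) = r(t) * (cos theta, sin theta) with r(t) = 1 + t/2."""
    r = 1.0 + 0.5 * t
    return (r * math.cos(theta), r * math.sin(theta))

def normal_speed(t, theta, h=1e-6):
    """C = V^i N_i: finite-difference the velocity object V = dZ/dt and project
    it onto the outward unit normal N = (cos theta, sin theta)."""
    x0, y0 = surface(t, theta)
    x1, y1 = surface(t + h, theta)
    vx, vy = (x1 - x0) / h, (y1 - y0) / h
    nx, ny = math.cos(theta), math.sin(theta)
    return vx * nx + vy * ny

# For r(t) = 1 + t/2 the exact normal velocity is C = dr/dt = 1/2,
# independent of t and theta.
for t, theta in [(0.0, 0.3), (1.0, 2.0), (2.5, 4.0)]:
    assert abs(normal_speed(t, theta) - 0.5) < 1e-6
```

For this circle the parametrization moves points purely radially, so the tangent component of V vanishes and C coincides with the geometric limit Distance(P, P*)/h.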
Also, defining the shift tensor representation of the surface's tangent space Z i α = S α ⋅ Z i {\displaystyle Z_{i}^{\alpha }={\textbf {S}}^{\alpha }\cdot {\textbf {Z}}_{i}} and the tangent velocity as V α = Z i α V i {\displaystyle V^{\alpha }=Z_{i}^{\alpha }V^{i}} , the definition of the ∇ ˙ {\displaystyle {\dot {\nabla }}} derivative for an invariant F reads ∇ ˙ F = ∂ F ( t , S ) ∂ t − V α ∇ α F {\displaystyle {\dot {\nabla }}F={\frac {\partial F\left(t,S\right)}{\partial t}}-V^{\alpha }\nabla _{\alpha }F} where ∇ α {\displaystyle \nabla _{\alpha }} is the covariant derivative on S. For tensors, an appropriate generalization is needed. The proper definition for a representative tensor T j β i α {\displaystyle T_{j\beta }^{i\alpha }} reads ∇ ˙ T j β i α = ∂ T j β i α ∂ t − V η ∇ η T j β i α + V m Γ m k i T j β k α − V m Γ m j k T k β i α + Γ ˙ η α T j β i η − Γ ˙ β η T j η i α {\displaystyle {\dot {\nabla }}T_{j\beta }^{i\alpha }={\frac {\partial T_{j\beta }^{i\alpha }}{\partial t}}-V^{\eta }\nabla _{\eta }T_{j\beta }^{i\alpha }+V^{m}\Gamma _{mk}^{i}T_{j\beta }^{k\alpha }-V^{m}\Gamma _{mj}^{k}T_{k\beta }^{i\alpha }+{\dot {\Gamma }}_{\eta }^{\alpha }T_{j\beta }^{i\eta }-{\dot {\Gamma }}_{\beta }^{\eta }T_{j\eta }^{i\alpha }} where Γ m j k {\displaystyle \Gamma _{mj}^{k}} are Christoffel symbols and Γ ˙ β α = ∇ β V α − C B β α {\displaystyle {\dot {\Gamma }}_{\beta }^{\alpha }=\nabla _{\beta }V^{\alpha }-CB_{\beta }^{\alpha }} are the surface's corresponding temporal symbols ( B β α {\displaystyle B_{\beta }^{\alpha }} is a matrix representation of the surface's curvature shape operator). == Properties of the ∇ ˙ {\displaystyle {\dot {\nabla }}} -derivative == The ∇ ˙ {\displaystyle {\dot {\nabla }}} -derivative commutes with contraction, satisfies the product rule for any collection of indices ∇ ˙ ( S α i T j β ) = T j β ∇ ˙ S α i + S α i ∇ ˙ T j β {\displaystyle {\dot {\nabla }}(S_{\alpha }^{i}T_{j}^{\beta })=T_{j}^{\beta }{\dot {\nabla }}S_{\alpha }^{i}+S_{\alpha }^{i}{\dot {\nabla }}T_{j}^{\beta }} and obeys a chain rule for surface restrictions of spatial tensors: ∇ ˙ F k j ( Z , t ) = ∂ F k j ∂ t + C N i ∇ i F k j {\displaystyle {\dot {\nabla }}F_{k}^{j}(Z,t)={\frac {\partial F_{k}^{j}}{\partial t}}+CN^{i}\nabla _{i}F_{k}^{j}} The chain rule shows that the ∇ ˙ {\displaystyle {\dot {\nabla }}} -derivatives of spatial "metrics" vanish: ∇ ˙ δ j i = 0 , ∇ ˙ Z i j = 0 , ∇ ˙ Z i j = 0 , ∇ ˙ ε i j k = 0 , ∇ ˙ ε i j k = 0 {\displaystyle {\dot {\nabla }}\delta _{j}^{i}=0,{\dot {\nabla }}Z_{ij}=0,{\dot {\nabla }}Z^{ij}=0,{\dot {\nabla }}\varepsilon _{ijk}=0,{\dot {\nabla }}\varepsilon ^{ijk}=0} where Z i j {\displaystyle Z_{ij}} and Z i j {\displaystyle Z^{ij}} are covariant and contravariant metric tensors, δ j i {\displaystyle \delta _{j}^{i}} is the Kronecker delta symbol, and ε i j k {\displaystyle \varepsilon _{ijk}} and ε i j k {\displaystyle \varepsilon ^{ijk}} are the Levi-Civita symbols. The main article on Levi-Civita symbols describes them for Cartesian coordinate systems. The preceding rule is valid in general coordinates, where the definition of the Levi-Civita symbols must include the square root of the determinant of the covariant metric tensor Z i j {\displaystyle Z_{ij}} . == Differentiation table for the ∇ ˙ {\displaystyle {\dot {\nabla }}} -derivative == The ∇ ˙ {\displaystyle {\dot {\nabla }}} derivative of the key surface objects leads to highly concise and attractive formulas. When applied to the covariant surface metric tensor S α β {\displaystyle S_{\alpha \beta }} and the contravariant metric tensor S α β {\displaystyle S^{\alpha \beta }} , the following identities result: ∇ ˙ S α β = 0 ∇ ˙ S α β = 0 {\displaystyle {\begin{aligned}{\dot {\nabla }}S_{\alpha \beta }&=0\\[8pt]{\dot {\nabla }}S^{\alpha \beta }&=0\end{aligned}}} where B α β {\displaystyle B_{\alpha \beta }} and B α β {\displaystyle B^{\alpha \beta }} are the doubly covariant and doubly contravariant curvature tensors.
These curvature tensors, as well as the mixed curvature tensor B β α {\displaystyle B_{\beta }^{\alpha }} , satisfy ∇ ˙ B α β = ∇ α ∇ β C + C B α γ B β γ ∇ ˙ B β α = ∇ β ∇ α C + C B γ α B β γ ∇ ˙ B α β = ∇ α ∇ β C + C B γ α B γ β {\displaystyle {\begin{aligned}{\dot {\nabla }}B_{\alpha \beta }&=\nabla _{\alpha }\nabla _{\beta }C+CB_{\alpha \gamma }B_{\beta }^{\gamma }\\[8pt]{\dot {\nabla }}B_{\beta }^{\alpha }&=\nabla _{\beta }\nabla ^{\alpha }C+CB_{\gamma }^{\alpha }B_{\beta }^{\gamma }\\[8pt]{\dot {\nabla }}B^{\alpha \beta }&=\nabla ^{\alpha }\nabla ^{\beta }C+CB^{\gamma \alpha }B_{\gamma }^{\beta }\end{aligned}}} The shift tensor Z α i {\displaystyle Z_{\alpha }^{i}} and the normal N i {\displaystyle N^{i}} satisfy ∇ ˙ Z α i = N i ∇ α C ∇ ˙ N i = − Z α i ∇ α C {\displaystyle {\begin{aligned}{\dot {\nabla }}Z_{\alpha }^{i}&=N^{i}\nabla _{\alpha }C\\[8pt]{\dot {\nabla }}N^{i}&=-Z_{\alpha }^{i}\nabla ^{\alpha }C\end{aligned}}} Finally, the surface Levi-Civita symbols ε α β {\displaystyle \varepsilon _{\alpha \beta }} and ε α β {\displaystyle \varepsilon ^{\alpha \beta }} satisfy ∇ ˙ ε α β = 0 ∇ ˙ ε α β = 0 {\displaystyle {\begin{aligned}{\dot {\nabla }}\varepsilon _{\alpha \beta }&=0\\[8pt]{\dot {\nabla }}\varepsilon ^{\alpha \beta }&=0\end{aligned}}} == Time differentiation of integrals == The CMS provides rules for time differentiation of volume and surface integrals. == See also == ADM formalism == References ==
Wikipedia/Calculus_of_moving_surfaces
In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is approximating. The technique was developed around 1890 by Henri Padé, but goes back to Georg Frobenius, who introduced the idea and investigated the features of rational approximations of power series. The Padé approximant often gives better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. For these reasons Padé approximants are used extensively in computer calculations. They have also been used as auxiliary functions in Diophantine approximation and transcendental number theory, though for sharp results ad hoc methods—in some sense inspired by the Padé theory—typically replace them. Since a Padé approximant is a rational function, an artificial singular point may occur as an approximation, but this can be avoided by Borel–Padé analysis. The reason the Padé approximant tends to be a better approximation than a truncating Taylor series is clear from the viewpoint of the multi-point summation method. Since there are many cases in which the asymptotic expansion at infinity becomes 0 or a constant, it can be interpreted as the "incomplete two-point Padé approximation", in which the ordinary Padé approximation improves on the method of truncating a Taylor series. 
== Definition == Given a function f and two integers m ≥ 0 and n ≥ 1, the Padé approximant of order [m/n] is the rational function R ( x ) = ∑ j = 0 m a j x j 1 + ∑ k = 1 n b k x k = a 0 + a 1 x + a 2 x 2 + ⋯ + a m x m 1 + b 1 x + b 2 x 2 + ⋯ + b n x n , {\displaystyle R(x)={\frac {\sum _{j=0}^{m}a_{j}x^{j}}{1+\sum _{k=1}^{n}b_{k}x^{k}}}={\frac {a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{m}x^{m}}{1+b_{1}x+b_{2}x^{2}+\dots +b_{n}x^{n}}},} which agrees with f(x) to the highest possible order; this amounts to f ( 0 ) = R ( 0 ) , f ′ ( 0 ) = R ′ ( 0 ) , f ″ ( 0 ) = R ″ ( 0 ) , ⋮ f ( m + n ) ( 0 ) = R ( m + n ) ( 0 ) . {\displaystyle {\begin{aligned}f(0)&=R(0),\\f'(0)&=R'(0),\\f''(0)&=R''(0),\\&\mathrel {\;\vdots } \\f^{(m+n)}(0)&=R^{(m+n)}(0).\end{aligned}}} Equivalently, if R ( x ) {\displaystyle R(x)} is expanded in a Maclaurin series (Taylor series at 0), its first m + n + 1 {\displaystyle m+n+1} terms would equal the first m + n + 1 {\displaystyle m+n+1} terms of f ( x ) {\displaystyle f(x)} , and thus f ( x ) − R ( x ) = c m + n + 1 x m + n + 1 + c m + n + 2 x m + n + 2 + ⋯ {\displaystyle f(x)-R(x)=c_{m+n+1}x^{m+n+1}+c_{m+n+2}x^{m+n+2}+\cdots } When it exists, the Padé approximant is unique as a formal power series for the given m and n. The Padé approximant defined above is also denoted as [ m / n ] f ( x ) . {\displaystyle [m/n]_{f}(x).} == Computation == For given x, Padé approximants can be computed by Wynn's epsilon algorithm and also other sequence transformations from the partial sums T N ( x ) = c 0 + c 1 x + c 2 x 2 + ⋯ + c N x N {\displaystyle T_{N}(x)=c_{0}+c_{1}x+c_{2}x^{2}+\cdots +c_{N}x^{N}} of the Taylor series of f, i.e., we have c k = f ( k ) ( 0 ) k ! . {\displaystyle c_{k}={\frac {f^{(k)}(0)}{k!}}.} f can also be a formal power series, and, hence, Padé approximants can also be applied to the summation of divergent series. One way to compute a Padé approximant is via the extended Euclidean algorithm for the polynomial greatest common divisor.
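Apart from the Euclidean algorithm just mentioned, the defining conditions can be solved directly: with b0 = 1, the denominator coefficients b1, …, bn satisfy the linear system Σj bj c(k−j) = 0 for k = m+1, …, m+n, after which the numerator coefficients follow by convolution. A minimal sketch in Python with exact rational arithmetic (the function name and calling convention are ours, not a standard API; it assumes the system is nonsingular, i.e. the approximant exists in the normal sense):

```python
from fractions import Fraction
from math import factorial

def pade(c, m, n):
    """[m/n] Pade coefficients (a, b) from Taylor coefficients c[0..m+n].

    Solves sum_{j=0}^{n} b_j c_{k-j} = 0 for k = m+1..m+n with b_0 = 1,
    then a_i = sum_{j=0}^{min(i,n)} b_j c_{i-j}.
    """
    c = [Fraction(x) for x in c]
    get = lambda k: c[k] if k >= 0 else Fraction(0)
    # Build the n x n system A @ [b_1..b_n] = rhs.
    A = [[get(m + row + 1 - col) for col in range(1, n + 1)] for row in range(n)]
    rhs = [-get(m + row + 1) for row in range(n)]
    # Gaussian elimination with nonzero pivoting over the rationals.
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, n):
            fac = A[r][i] / A[i][i]
            A[r] = [x - fac * y for x, y in zip(A[r], A[i])]
            rhs[r] -= fac * rhs[i]
    b = [Fraction(0)] * n
    for i in reversed(range(n)):
        b[i] = (rhs[i] - sum(A[i][j] * b[j] for j in range(i + 1, n))) / A[i][i]
    b = [Fraction(1)] + b
    a = [sum(b[j] * get(i - j) for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return a, b

# [2/2] approximant of exp(x) from c_k = 1/k!
c = [Fraction(1, factorial(k)) for k in range(5)]
a, b = pade(c, 2, 2)
# Classical result: (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)
assert a == [Fraction(1), Fraction(1, 2), Fraction(1, 12)]
assert b == [Fraction(1), Fraction(-1, 2), Fraction(1, 12)]
```

Running it on the Taylor coefficients of exp recovers the classical [2/2] approximant, matching the [5/5] exp example given later in this article in structure (numerator and denominator with mirrored signs).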
The relation R ( x ) = P ( x ) / Q ( x ) = T m + n ( x ) mod x m + n + 1 {\displaystyle R(x)=P(x)/Q(x)=T_{m+n}(x){\bmod {x}}^{m+n+1}} is equivalent to the existence of some factor K ( x ) {\displaystyle K(x)} such that P ( x ) = Q ( x ) T m + n ( x ) + K ( x ) x m + n + 1 , {\displaystyle P(x)=Q(x)T_{m+n}(x)+K(x)x^{m+n+1},} which can be interpreted as the Bézout identity of one step in the computation of the extended greatest common divisor of the polynomials T m + n ( x ) {\displaystyle T_{m+n}(x)} and x m + n + 1 {\displaystyle x^{m+n+1}} . Recall that, to compute the greatest common divisor of two polynomials p and q, one computes via long division the remainder sequence r 0 = p , r 1 = q , r k − 1 = q k r k + r k + 1 , {\displaystyle r_{0}=p,\;r_{1}=q,\quad r_{k-1}=q_{k}r_{k}+r_{k+1},} k = 1, 2, 3, ... with deg ⁡ r k + 1 < deg ⁡ r k {\displaystyle \deg r_{k+1}<\deg r_{k}\,} , until r k + 1 = 0 {\displaystyle r_{k+1}=0} . For the Bézout identities of the extended greatest common divisor one computes simultaneously the two polynomial sequences u 0 = 1 , v 0 = 0 , u 1 = 0 , v 1 = 1 , u k + 1 = u k − 1 − q k u k , v k + 1 = v k − 1 − q k v k {\displaystyle u_{0}=1,\;v_{0}=0,\quad u_{1}=0,\;v_{1}=1,\quad u_{k+1}=u_{k-1}-q_{k}u_{k},\;v_{k+1}=v_{k-1}-q_{k}v_{k}} to obtain in each step the Bézout identity r k ( x ) = u k ( x ) p ( x ) + v k ( x ) q ( x ) . {\displaystyle r_{k}(x)=u_{k}(x)p(x)+v_{k}(x)q(x).} For the [m/n] approximant, one thus carries out the extended Euclidean algorithm for r 0 = x m + n + 1 , r 1 = T m + n ( x ) {\displaystyle r_{0}=x^{m+n+1},\;r_{1}=T_{m+n}(x)} and stops it at the last instant that v k {\displaystyle v_{k}} has degree n or smaller. Then the polynomials P = r k , Q = v k {\displaystyle P=r_{k},\;Q=v_{k}} give the [m/n] Padé approximant. If one were to compute all steps of the extended greatest common divisor computation, one would obtain an anti-diagonal of the Padé table. 
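The Euclidean-algorithm route just described can be sketched directly. In this sketch (our code, not a standard API) polynomials are coefficient lists in ascending order, exact rationals avoid round-off, and the loop stops once deg r ≤ m, at which point deg v ≤ n; it assumes the [m/n] approximant exists in the normal sense, so the resulting denominator has a nonzero constant term:

```python
from fractions import Fraction
from math import factorial

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def sub(p, q):
    r = [Fraction(0)] * max(len(p), len(q))
    for i, x in enumerate(p): r[i] += x
    for i, x in enumerate(q): r[i] -= x
    return trim(r)

def mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1) if p and q else []
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return trim(r)

def divmod_poly(p, q):
    """Long division: p = quo * q + rem with deg rem < deg q."""
    p, quo = list(p), [Fraction(0)] * max(len(p) - len(q) + 1, 1)
    while len(p) >= len(q) and p:
        k = len(p) - len(q)
        fac = p[-1] / q[-1]
        quo[k] = fac
        p = sub(p, mul([Fraction(0)] * k + [fac], q))
    return trim(quo), p

def pade_euclid(c, m, n):
    """[m/n] Pade of the series with coefficients c, via the extended
    Euclidean algorithm on r0 = x^(m+n+1), r1 = T_{m+n}(x)."""
    r0 = [Fraction(0)] * (m + n + 1) + [Fraction(1)]
    r1 = [Fraction(x) for x in c[:m + n + 1]]
    v0, v1 = [], [Fraction(1)]
    while len(r1) - 1 > m:  # stop when deg r <= m (then deg v <= n)
        q, rem = divmod_poly(r0, r1)
        r0, r1 = r1, rem
        v0, v1 = v1, sub(v0, mul(q, v1))
    s = v1[0]  # normalize so the denominator has constant term 1
    return [x / s for x in r1], [x / s for x in v1]

# [1/1] approximant of exp(x) from T_2(x) = 1 + x + x^2/2
P, Q = pade_euclid([Fraction(1, factorial(k)) for k in range(3)], 1, 1)
assert P == [Fraction(1), Fraction(1, 2)]   # 1 + x/2
assert Q == [Fraction(1), Fraction(-1, 2)]  # 1 - x/2
```

The single division step here is exactly the Bézout identity of the text: r2 = u2·x³ + v2·T2, with P = r2 and Q = v2 after normalization.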
== Riemann–Padé zeta function == To study the resummation of a divergent series, say ∑ z = 1 ∞ f ( z ) , {\displaystyle \sum _{z=1}^{\infty }f(z),} it can be useful to introduce the Padé or simply rational zeta function as ζ R ( s ) = ∑ z = 1 ∞ R ( z ) z s , {\displaystyle \zeta _{R}(s)=\sum _{z=1}^{\infty }{\frac {R(z)}{z^{s}}},} where R ( x ) = [ m / n ] f ( x ) {\displaystyle R(x)=[m/n]_{f}(x)} is the Padé approximation of order (m, n) of the function f(x). The zeta regularization value at s = 0 is taken to be the sum of the divergent series. The functional equation for this Padé zeta function is ∑ j = 0 n a j ζ R ( s − j ) = ∑ j = 0 m b j ζ 0 ( s − j ) , {\displaystyle \sum _{j=0}^{n}a_{j}\zeta _{R}(s-j)=\sum _{j=0}^{m}b_{j}\zeta _{0}(s-j),} where aj and bj are the coefficients in the Padé approximation. The subscript '0' means that the Padé is of order [0/0], and hence we have the Riemann zeta function. == DLog Padé method == Padé approximants can be used to extract critical points and exponents of functions. In thermodynamics, if a function f(x) behaves in a non-analytic way near a point x = r like f ( x ) ∼ | x − r | p {\displaystyle f(x)\sim |x-r|^{p}} , one calls x = r a critical point and p the associated critical exponent of f. If sufficient terms of the series expansion of f are known, one can approximately extract the critical points and the critical exponents from, respectively, the poles and residues of the Padé approximants [ n / n + 1 ] g ( x ) {\displaystyle [n/n+1]_{g}(x)} , where g = f ′ / f {\displaystyle g=f'/f} . == Generalizations == A Padé approximant approximates a function in one variable. An approximant in two variables is called a Chisholm approximant (after J. S. R. Chisholm); in multiple variables, a Canterbury approximant (after Graves-Morris at the University of Kent). == Two-point Padé approximant == The conventional Padé approximation is determined to reproduce the Maclaurin expansion up to a given order.
Therefore, the approximation may be poor at values far from the expansion point. This is avoided by the 2-point Padé approximation, which is a type of multipoint summation method. Consider a function f ( x ) {\displaystyle f(x)} with the asymptotic behavior f 0 ( x ) {\displaystyle f_{0}(x)} at x = 0 {\displaystyle x=0} : f ∼ f 0 ( x ) + o ( f 0 ( x ) ) , x → 0 , {\displaystyle f\sim f_{0}(x)+o{\big (}f_{0}(x){\big )},\quad x\to 0,} and the additional asymptotic behavior f ∞ ( x ) {\displaystyle f_{\infty }(x)} at x → ∞ {\displaystyle x\to \infty } : f ( x ) ∼ f ∞ ( x ) + o ( f ∞ ( x ) ) , x → ∞ . {\displaystyle f(x)\sim f_{\infty }(x)+o{\big (}f_{\infty }(x){\big )},\quad x\to \infty .} By selecting the dominant behavior of f 0 ( x ) , f ∞ ( x ) {\displaystyle f_{0}(x),f_{\infty }(x)} , approximating functions F ( x ) {\displaystyle F(x)} that simultaneously reproduce both asymptotic behaviors can be constructed with the Padé technique in various cases. As a result, good accuracy of the 2-point Padé approximant is guaranteed even at x → ∞ {\displaystyle x\to \infty } , where the accuracy of the ordinary Padé approximation is usually the worst. Therefore, the 2-point Padé approximant gives a good approximation globally for x = 0 ∼ ∞ {\displaystyle x=0\sim \infty } . In cases where f 0 ( x ) , f ∞ ( x ) {\displaystyle f_{0}(x),f_{\infty }(x)} are expressed by polynomials, series of negative powers, the exponential function, the logarithmic function, or x ln ⁡ x {\displaystyle x\ln x} , the 2-point Padé approximant can be applied to f ( x ) {\displaystyle f(x)} . This can be used to give highly accurate approximate solutions of differential equations. Also, the first nontrivial zero of the Riemann zeta function can be estimated with some accuracy from the asymptotic behavior on the real axis.
== Multi-point Padé approximant == A further extension of the 2-point Padé approximant is the multi-point Padé approximant. This method treats singularity points x = x j ( j = 1 , 2 , 3 , … , N ) {\displaystyle x=x_{j}(j=1,2,3,\dots ,N)} of a function f ( x ) {\displaystyle f(x)} which is to be approximated. Consider the case in which the singularities of the function are described, with exponent n j {\displaystyle n_{j}} , by f ( x ) ∼ A j ( x − x j ) n j , x → x j . {\displaystyle f(x)\sim {\frac {A_{j}}{(x-x_{j})^{n_{j}}}},\quad x\to x_{j}.} In addition to the information at x = 0 , x → ∞ {\displaystyle x=0,x\to \infty } used by the 2-point Padé approximant, this method also reproduces the divergent behavior at x ∼ x j {\displaystyle x\sim x_{j}} . As a result, since the singular behavior of the function is captured, the approximation of f ( x ) {\displaystyle f(x)} can be performed with higher accuracy. == Examples == sin(x) sin ⁡ ( x ) ≈ 12671 4363920 x 5 − 2363 18183 x 3 + x 1 + 445 12122 x 2 + 601 872784 x 4 + 121 16662240 x 6 {\displaystyle \sin(x)\approx {\frac {{\frac {12671}{4363920}}x^{5}-{\frac {2363}{18183}}x^{3}+x}{1+{\frac {445}{12122}}x^{2}+{\frac {601}{872784}}x^{4}+{\frac {121}{16662240}}x^{6}}}} exp(x) exp ⁡ ( x ) ≈ 1 + 1 2 x + 1 9 x 2 + 1 72 x 3 + 1 1008 x 4 + 1 30240 x 5 1 − 1 2 x + 1 9 x 2 − 1 72 x 3 + 1 1008 x 4 − 1 30240 x 5 {\displaystyle \exp(x)\approx {\frac {1+{\frac {1}{2}}x+{\frac {1}{9}}x^{2}+{\frac {1}{72}}x^{3}+{\frac {1}{1008}}x^{4}+{\frac {1}{30240}}x^{5}}{1-{\frac {1}{2}}x+{\frac {1}{9}}x^{2}-{\frac {1}{72}}x^{3}+{\frac {1}{1008}}x^{4}-{\frac {1}{30240}}x^{5}}}} ln(1+x) ln ⁡ ( 1 + x ) ≈ x + 1 2 x 2 1 + x + 1 6 x 2 {\displaystyle \ln(1+x)\approx {\frac {x+{\frac {1}{2}}x^{2}}{1+x+{\frac {1}{6}}x^{2}}}} Jacobi sn(z|3) s n ( z | 3 ) ≈ − 9851629 283609260 z 5 − 572744 4726821 z 3 + z 1 + 859490 1575607 z 2 − 5922035 56721852 z 4 + 62531591 2977897230 z 6 {\displaystyle \mathrm {sn} (z|3)\approx {\frac {-{\frac
{9851629}{283609260}}z^{5}-{\frac {572744}{4726821}}z^{3}+z}{1+{\frac {859490}{1575607}}z^{2}-{\frac {5922035}{56721852}}z^{4}+{\frac {62531591}{2977897230}}z^{6}}}} Bessel J5(x) J 5 ( x ) ≈ − 107 28416000 x 7 + 1 3840 x 5 1 + 151 5550 x 2 + 1453 3729600 x 4 + 1339 358041600 x 6 + 2767 120301977600 x 8 {\displaystyle J_{5}(x)\approx {\frac {-{\frac {107}{28416000}}x^{7}+{\frac {1}{3840}}x^{5}}{1+{\frac {151}{5550}}x^{2}+{\frac {1453}{3729600}}x^{4}+{\frac {1339}{358041600}}x^{6}+{\frac {2767}{120301977600}}x^{8}}}} erf(x) erf ⁡ ( x ) ≈ 2 15 π ⋅ 49140 x + 3570 x 3 + 739 x 5 165 x 4 + 1330 x 2 + 3276 {\displaystyle \operatorname {erf} (x)\approx {\frac {2}{15{\sqrt {\pi }}}}\cdot {\frac {49140x+3570x^{3}+739x^{5}}{165x^{4}+1330x^{2}+3276}}} Fresnel C(x) C ( x ) ≈ 1 135 ⋅ 990791 π 4 x 9 − 147189744 π 2 x 5 + 8714684160 x 1749 π 4 x 8 + 523536 π 2 x 4 + 64553216 {\displaystyle C(x)\approx {\frac {1}{135}}\cdot {\frac {990791\pi ^{4}x^{9}-147189744\pi ^{2}x^{5}+8714684160x}{1749\pi ^{4}x^{8}+523536\pi ^{2}x^{4}+64553216}}} == See also == Padé table Bhaskara I's sine approximation formula – Formula to estimate the sine functionPages displaying short descriptions of redirect targets Approximation theory – Theory of getting acceptably close inexact mathematical calculations Function approximation – Approximating an arbitrary function with a well-behaved one == References == == Literature == Baker, G. A., Jr.; and Graves-Morris, P. Padé Approximants. Cambridge U.P., 1996. Baker, G. A., Jr. Padé approximant, Scholarpedia, 7(6):9756. Brezinski, C.; Redivo Zaglia, M. Extrapolation Methods. Theory and Practice. North-Holland, 1991. ISBN 978-0444888143 Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 5.12 Padé Approximants", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 2016-03-03, retrieved 2011-08-09. 
Frobenius, G.; Ueber Relationen zwischen den Näherungsbrüchen von Potenzreihen, Journal für die reine und angewandte Mathematik (Crelle's Journal), Volume 1881, Issue 90, pp. 1–17. Gragg, W. B.; The Padé Table and Its Relation to Certain Algorithms of Numerical Analysis, SIAM Review, Vol. 14, No. 1, 1972, pp. 1–62. Padé, H.; Sur la représentation approchée d'une fonction par des fractions rationnelles, Thesis, Ann. École Nor. (3), 9, 1892, pp. 1–93, supplement. Wynn, P. (1966), "Upon systems of recursions which obtain among the quotients of the Padé table", Numerische Mathematik, 8 (3): 264–269, doi:10.1007/BF02162562, S2CID 123789548. == External links == Weisstein, Eric W. "Padé Approximant". MathWorld. Padé Approximants, Oleksandr Pavlyk, The Wolfram Demonstrations Project. Data Analysis BriefBook: Pade Approximation, Rudolf K. Bock, European Laboratory for Particle Physics, CERN. Sinewave, Scott Dattalo, last accessed 2010-11-11. MATLAB function for Padé approximation of models with time delays.
Wikipedia/Pade_approximation
The Digital Library of Mathematical Functions (DLMF) is an online project at the National Institute of Standards and Technology (NIST) to develop a database of mathematical reference data for special functions and their applications. It is intended as an update of Abramowitz's and Stegun's Handbook of Mathematical Functions (A&S). It was published online on 7 May 2010, though some chapters appeared earlier. In the same year it appeared at Cambridge University Press under the title NIST Handbook of Mathematical Functions. In contrast to A&S, whose initial print run was done by the U.S. Government Printing Office and was in the public domain, NIST asserts that it holds copyright to the DLMF under Title 17 USC 105 of the U.S. Code. == See also == NIST Dictionary of Algorithms and Data Structures == References == == Further reading == Cipra, Barry Arthur (1998-03-08). "A New Testament for Special Functions?". SIAM News. SIAM. Archived from the original on 2007-07-15. Lozier, Daniel W. (October 1997) [September 1997]. "Toward a Revised NBS Handbook of Mathematical Functions" (PDF). National Institute of Standards and Technology (NIST). pp. 2089–9. CiteSeerX 10.1.1.65.5096. NISTIR 6072. Archived (PDF) from the original on 2021-12-28. [2] (8 pages) "NIST Releases Preview of Much-Anticipated Online Mathematics Reference". ScienceDaily. 2008-06-27. Archived from the original on 2021-02-11. Retrieved 2021-12-28. "Birth of a Classic … Take Two" (Video). National Institute of Standards and Technology, United States Department of Commerce. 2010-05-11. Archived from the original on 2021-12-21. == External links == NIST Digital Library of Mathematical Functions Corrected errors in NIST DLMF
Wikipedia/Digital_Library_of_Mathematical_Functions
In mathematics, a zonal spherical function or often just spherical function is a function on a locally compact group G with compact subgroup K (often a maximal compact subgroup) that arises as the matrix coefficient of a K-invariant vector in an irreducible representation of G. The key examples are the matrix coefficients of the spherical principal series, the irreducible representations appearing in the decomposition of the unitary representation of G on L2(G/K). In this case the commutant of G is generated by the algebra of biinvariant functions on G with respect to K acting by right convolution. It is commutative if in addition G/K is a symmetric space, for example when G is a connected semisimple Lie group with finite centre and K is a maximal compact subgroup. The matrix coefficients of the spherical principal series describe precisely the spectrum of the corresponding C* algebra generated by the biinvariant functions of compact support, often called a Hecke algebra. The spectrum of the commutative Banach *-algebra of biinvariant L1 functions is larger; when G is a semisimple Lie group with maximal compact subgroup K, additional characters come from matrix coefficients of the complementary series, obtained by analytic continuation of the spherical principal series. Zonal spherical functions have been explicitly determined for real semisimple groups by Harish-Chandra. For special linear groups, they were independently discovered by Israel Gelfand and Mark Naimark. For complex groups, the theory simplifies significantly, because G is the complexification of K, and the formulas are related to analytic continuations of the Weyl character formula on K. The abstract functional analytic theory of zonal spherical functions was first developed by Roger Godement. 
Apart from their group theoretic interpretation, the zonal spherical functions for a semisimple Lie group G also provide a set of simultaneous eigenfunctions for the natural action of the centre of the universal enveloping algebra of G on L2(G/K), as differential operators on the symmetric space G/K. For semisimple p-adic Lie groups, the theory of zonal spherical functions and Hecke algebras was first developed by Satake and Ian G. Macdonald. The analogues of the Plancherel theorem and Fourier inversion formula in this setting generalise the eigenfunction expansions of Mehler, Weyl and Fock for singular ordinary differential equations: they were obtained in full generality in the 1960s in terms of Harish-Chandra's c-function. The name "zonal spherical function" comes from the case when G is SO(3,R) acting on a 2-sphere and K is the subgroup fixing a point: in this case the zonal spherical functions can be regarded as certain functions on the sphere invariant under rotation about a fixed axis. == Definitions == Let G be a locally compact unimodular topological group and K a compact subgroup and let H1 = L2(G/K). Thus, H1 admits a unitary representation π of G by left translation. This is a subrepresentation of the regular representation, since if H= L2(G) with left and right regular representations λ and ρ of G and P is the orthogonal projection P = ∫ K ρ ( k ) d k {\displaystyle P=\int _{K}\rho (k)\,dk} from H to H1 then H1 can naturally be identified with PH with the action of G given by the restriction of λ. On the other hand, by von Neumann's commutation theorem λ ( G ) ′ = ρ ( G ) ′ ′ , {\displaystyle \lambda (G)^{\prime }=\rho (G)^{\prime \prime },} where S' denotes the commutant of a set of operators S, so that π ( G ) ′ = P ρ ( G ) ′ ′ P . 
{\displaystyle \pi (G)^{\prime }=P\rho (G)^{\prime \prime }P.} Thus the commutant of π is generated as a von Neumann algebra by operators P ρ ( f ) P = ∫ G f ( g ) ( P ρ ( g ) P ) d g {\displaystyle P\rho (f)P=\int _{G}f(g)(P\rho (g)P)\,dg} where f is a continuous function of compact support on G. However Pρ(f) P is just the restriction of ρ(F) to H1, where F ( g ) = ∫ K ∫ K f ( k g k ′ ) d k d k ′ {\displaystyle F(g)=\int _{K}\int _{K}f(kgk^{\prime })\,dk\,dk^{\prime }} is the K-biinvariant continuous function of compact support obtained by averaging f by K on both sides. Thus the commutant of π is generated by the restriction of the operators ρ(F) with F in Cc(K\G/K), the K-biinvariant continuous functions of compact support on G. These functions form a * algebra under convolution with involution F ∗ ( g ) = F ( g − 1 ) ¯ , {\displaystyle F^{*}(g)={\overline {F(g^{-1})}},} often called the Hecke algebra for the pair (G, K). Let A(K\G/K) denote the C* algebra generated by the operators ρ(F) on H1. The pair (G, K) is said to be a Gelfand pair if one, and hence all, of the following algebras are commutative: π ( G ) ′ {\displaystyle \pi (G)^{\prime }} C c ( K ∖ G / K ) {\displaystyle C_{c}(K\backslash G/K)} A ( K ∖ G / K ) . {\displaystyle A(K\backslash G/K).} Since A(K\G/K) is a commutative C* algebra, by the Gelfand–Naimark theorem it has the form C0(X), where X is the locally compact space of norm continuous * homomorphisms of A(K\G/K) into C. A concrete realization of the * homomorphisms in X as K-biinvariant uniformly bounded functions on G is obtained as follows. Because of the estimate ‖ π ( F ) ‖ ≤ ∫ G | F ( g ) | d g ≡ ‖ F ‖ 1 , {\displaystyle \|\pi (F)\|\leq \int _{G}|F(g)|\,dg\equiv \|F\|_{1},} the representation π of Cc(K\G/K) in A(K\G/K) extends by continuity to L1(K\G/K), the * algebra of K-biinvariant integrable functions. The image forms a dense * subalgebra of A(K\G/K). 
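The commutativity question for the Hecke algebra can be made concrete in the finite setting, where the integrals above become sums. The following sketch (the choice of the dihedral group D_12 with K a reflection subgroup is illustrative, a finite analogue of an orthogonal pair; all names in the code are ad hoc) averages random functions to make them K-biinvariant and checks that their convolutions commute.

```python
import random

# Finite sketch of the Hecke algebra Cc(K\G/K): integrals over G and K become
# sums.  G is the dihedral group D_12, encoded as pairs (j, e) <-> r^j s^e,
# and K = {1, s}.  Since g^{-1} always lies in K g K here, Gelfand's criterion
# applies and convolution of K-biinvariant functions should commute.

n = 12
G = [(j, e) for j in range(n) for e in (0, 1)]
K = [(0, 0), (0, 1)]

def mult(a, b):
    # (r^j s^e)(r^j' s^e') with the dihedral relation s r s = r^{-1}
    return ((a[0] + (b[0] if a[1] == 0 else -b[0])) % n, a[1] ^ b[1])

def inv(a):
    return ((-a[0]) % n, 0) if a[1] == 0 else a

def average(f):
    # two-sided average over K: the projection onto K-biinvariant functions
    return {g: sum(f[mult(k1, mult(g, k2))] for k1 in K for k2 in K) / 4.0
            for g in G}

def convolve(f, h):
    return {x: sum(f[y] * h[mult(inv(y), x)] for y in G) for x in G}

random.seed(1)
f1 = average({g: random.random() for g in G})
f2 = average({g: random.random() for g in G})
lhs, rhs = convolve(f1, f2), convolve(f2, f1)
commutator = max(abs(lhs[g] - rhs[g]) for g in G)
print(commutator)   # ~0: the Hecke algebra of this Gelfand pair is commutative
```

The same skeleton works for any finite pair; for a non-Gelfand pair the commutator would come out visibly non-zero.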
The restriction of a * homomorphism χ continuous for the operator norm is also continuous for the norm ||·||1. Since the Banach space dual of L1 is L∞, it follows that χ ( π ( F ) ) = ∫ G F ( g ) h ( g ) d g , {\displaystyle \chi (\pi (F))=\int _{G}F(g)h(g)\,dg,} for some unique uniformly bounded K-biinvariant function h on G. These functions h are exactly the zonal spherical functions for the pair (G, K). == Properties == A zonal spherical function h has the following properties: h is uniformly continuous on G h ( x ) h ( y ) = ∫ K h ( x k y ) d k ( x , y ∈ G ) . {\displaystyle h(x)h(y)=\int _{K}h(xky)\,dk\,\,(x,y\in G).} h(1) =1 (normalisation) h is a positive definite function on G f * h is proportional to h for all f in Cc(K\G/K). These are easy consequences of the fact that the bounded linear functional χ defined by h is a homomorphism. Properties 2, 3 and 4 or properties 3, 4 and 5 characterize zonal spherical functions. A more general class of zonal spherical functions can be obtained by dropping positive definiteness from the conditions, but for these functions there is no longer any connection with unitary representations. For semisimple Lie groups, there is a further characterization as eigenfunctions of invariant differential operators on G/K (see below). In fact, as a special case of the Gelfand–Naimark–Segal construction, there is one-one correspondence between irreducible representations σ of G having a unit vector v fixed by K and zonal spherical functions h given by h ( g ) = ( σ ( g ) v , v ) . {\displaystyle h(g)=(\sigma (g)v,v).} Such irreducible representations are often described as having class one. They are precisely the irreducible representations required to decompose the induced representation π on H1. Each representation σ extends uniquely by continuity to A(K\G/K), so that each zonal spherical function satisfies | ∫ G f ( g ) h ( g ) d g | ≤ ‖ π ( f ) ‖ {\displaystyle \left|\int _{G}f(g)h(g)\,dg\right|\leq \|\pi (f)\|} for f in A(K\G/K). 
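For the compact pair G = SO(3), K = SO(2), the functional equation h(x)h(y) = ∫K h(xky) dk of property 2 is the classical product formula for Legendre polynomials, since the zonal spherical functions on the 2-sphere are Pn(cos θ). A numerical sketch (the function name is ad hoc):

```python
import numpy as np
from scipy.special import eval_legendre

# For G = SO(3) and K = SO(2) (rotations about the polar axis), the zonal
# spherical functions are h_n = P_n(cos theta).  The identity
# h(x) h(y) = (1/2pi) \int_K h(x k y) dk becomes the classical product formula
# for Legendre polynomials, checked here by quadrature over K.

def residual(n, t1, t2, m=2048):
    phi = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    # cosine of the angle through which x.k(phi).y moves the pole
    c = np.cos(t1) * np.cos(t2) + np.sin(t1) * np.sin(t2) * np.cos(phi)
    avg = eval_legendre(n, c).mean()
    return abs(avg - eval_legendre(n, np.cos(t1)) * eval_legendre(n, np.cos(t2)))

res = max(residual(n, 0.7, 1.1) for n in range(6))
print(res)   # ~0 up to quadrature and rounding error
```

Property 3 is also visible here: Pn(cos 0) = Pn(1) = 1, the normalisation h(1) = 1.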
Moreover, since the commutant π(G)' is commutative, there is a unique probability measure μ on the space of * homomorphisms X such that ∫ G | f ( g ) | 2 d g = ∫ X | χ ( π ( f ) ) | 2 d μ ( χ ) . {\displaystyle \int _{G}|f(g)|^{2}\,dg=\int _{X}|\chi (\pi (f))|^{2}\,d\mu (\chi ).} μ is called the Plancherel measure. Since π(G)' is the centre of the von Neumann algebra generated by G, it also gives the measure associated with the direct integral decomposition of H1 in terms of the irreducible representations σχ. == Gelfand pairs == If G is a connected Lie group, then, thanks to the work of Cartan, Malcev, Iwasawa and Chevalley, G has a maximal compact subgroup, unique up to conjugation. In this case K is connected and the quotient G/K is diffeomorphic to a Euclidean space. When G is in addition semisimple, this can be seen directly using the Cartan decomposition associated to the symmetric space G/K, a generalisation of the polar decomposition of invertible matrices. Indeed, if τ is the associated period two automorphism of G with fixed point subgroup K, then G = P ⋅ K , {\displaystyle G=P\cdot K,} where P = { g ∈ G | τ ( g ) = g − 1 } . {\displaystyle P=\{g\in G|\tau (g)=g^{-1}\}.} Under the exponential map, P is diffeomorphic to the -1 eigenspace of τ in the Lie algebra of G. Since τ preserves K, it induces an automorphism of the Hecke algebra Cc(K\G/K). On the other hand, if F lies in Cc(K\G/K), then F(τg) = F(g−1), so that τ induces an anti-automorphism, because inversion does. Hence, when G is semisimple, the Hecke algebra is commutative and (G, K) is a Gelfand pair. More generally the same argument gives the following criterion of Gelfand for (G,K) to be a Gelfand pair: G is a unimodular locally compact group; K is a compact subgroup arising as the fixed points of a period two automorphism τ of G; G = K·P (not necessarily a direct product), where P is defined as above.
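For G = SL(2,R) and τ(g) = (gT)−1, the decomposition G = P·K in Gelfand's criterion is the familiar polar decomposition of matrices: the fixed points of τ form K = SO(2), and P consists of the symmetric positive-definite matrices. A numerical sketch using scipy:

```python
import numpy as np
from scipy.linalg import polar

# For G = SL(2,R) and tau(g) = (g^T)^{-1}, the fixed-point subgroup of tau is
# K = SO(2) and P = {g : tau(g) = g^{-1}} is the set of symmetric
# positive-definite matrices, so G = P.K is the classical polar decomposition.

rng = np.random.default_rng(0)
g = rng.standard_normal((2, 2))
if np.linalg.det(g) < 0:
    g[:, [0, 1]] = g[:, [1, 0]]              # ensure det(g) > 0
g /= np.sqrt(np.linalg.det(g))               # scale into SL(2,R)

k, p = polar(g, side='left')                 # scipy returns (unitary, psd): g = p @ k

print(np.allclose(p, p.T),                   # p lies in P
      np.allclose(k.T @ k, np.eye(2)),       # k lies in K
      np.allclose(p @ k, g))                 # g = p.k
```

Note that scipy's `polar` returns the unitary factor first regardless of `side`; with `side='left'` the product order is g = p·k, matching G = P·K.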
The two most important examples covered by this are when: G is a compact connected semisimple Lie group with τ a period two automorphism; G is a semidirect product A ⋊ K {\displaystyle A\rtimes K} , with A a locally compact Abelian group without 2-torsion and τ(a· k)= k·a−1 for a in A and k in K. The three cases cover the three types of symmetric spaces G/K: Non-compact type, when K is a maximal compact subgroup of a non-compact real semisimple Lie group G; Compact type, when K is the fixed point subgroup of a period two automorphism of a compact semisimple Lie group G; Euclidean type, when A is a finite-dimensional Euclidean space with an orthogonal action of K. == Cartan–Helgason theorem == Let G be a compact semisimple connected and simply connected Lie group and τ a period two automorphism of G with fixed point subgroup K = Gτ. In this case K is a connected compact Lie group. In addition let T be a maximal torus of G invariant under τ, such that T ∩ {\displaystyle \cap } P is a maximal torus in P, and set S = K ∩ T = T τ . {\displaystyle S=K\cap T=T^{\tau }.} S is the direct product of a torus and an elementary abelian 2-group. In 1929 Élie Cartan found a rule to determine the decomposition of L2(G/K) into the direct sum of finite-dimensional irreducible representations of G, which was proved rigorously only in 1970 by Sigurdur Helgason. Because the commutant of G on L2(G/K) is commutative, each irreducible representation appears with multiplicity one. By Frobenius reciprocity for compact groups, the irreducible representations V that occur are precisely those admitting a non-zero vector fixed by K. From the representation theory of compact semisimple groups, irreducible representations of G are classified by their highest weight. This is specified by a character of the maximal torus T. The Cartan–Helgason theorem states that an irreducible representation of G admits a non-zero vector fixed by K if and only if its highest weight is trivial on S. The corresponding irreducible representations are called spherical representations.
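The multiplicity pattern predicted by the Cartan–Helgason theorem can be sampled numerically in the smallest compact example, G = SU(2) with K = SO(2) (so G/K = S² and S = K ∩ T = {±1}): averaging the character of the irreducible representation of highest weight m over a torus conjugate to K counts the K-fixed vectors, which should be 1 when m is even (trivial on S) and 0 when m is odd. This is an illustrative sketch, not part of the proof:

```python
import numpy as np

# Cartan-Helgason multiplicities for (G, K) = (SU(2), SO(2)):
# the irreducible representation with highest weight m has torus weights
# m, m-2, ..., -m, and is spherical exactly when m is even, i.e. when the
# highest weight is trivial on S = {+1, -1}.  Averaging the character over
# a torus conjugate to K counts the K-fixed vectors.

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def dim_fixed(m):
    chi = sum(np.exp(1j * (m - 2 * j) * theta) for j in range(m + 1))
    return float(np.real(chi.mean()))        # (1/2pi) integral of chi_m over K

dims = [round(dim_fixed(m)) for m in range(6)]
print(dims)   # [1, 0, 1, 0, 1, 0]
```

The multiplicities never exceed one, illustrating the multiplicity-one statement for L2(G/K).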
The theorem can be proved using the Iwasawa decomposition: g = k ⊕ a ⊕ n , {\displaystyle {\mathfrak {g}}={\mathfrak {k}}\oplus {\mathfrak {a}}\oplus {\mathfrak {n}},} where g {\displaystyle {\mathfrak {g}}} , k {\displaystyle {\mathfrak {k}}} , a {\displaystyle {\mathfrak {a}}} are the complexifications of the Lie algebras of G, K, A = T ∩ {\displaystyle \cap } P and n = ⨁ g α , {\displaystyle {\mathfrak {n}}=\bigoplus {\mathfrak {g}}_{\alpha },} summed over all eigenspaces for T in g {\displaystyle {\mathfrak {g}}} corresponding to positive roots α not fixed by τ. Let V be a spherical representation with highest weight vector v0 and K-fixed vector vK. Since v0 is an eigenvector of the solvable Lie algebra a ⊕ n {\displaystyle {\mathfrak {a}}\oplus {\mathfrak {n}}} , the Poincaré–Birkhoff–Witt theorem implies that the K-module generated by v0 is the whole of V. If Q is the orthogonal projection onto the fixed points of K in V obtained by averaging over K with respect to Haar measure, it follows that v K = c Q v 0 {\displaystyle \displaystyle {v_{K}=cQv_{0}}} for some non-zero constant c. Because vK is fixed by S and v0 is an eigenvector for S, the subgroup S must actually fix v0, an equivalent form of the triviality condition on S. Conversely if v0 is fixed by S, then it can be shown that the matrix coefficient f ( g ) = ( g v 0 , v 0 ) {\displaystyle \displaystyle {f(g)=(gv_{0},v_{0})}} is non-negative on K. Since f(1) > 0, it follows that (Qv0, v0) > 0 and hence that Qv0 is a non-zero vector fixed by K. == Harish-Chandra's formula == If G is a non-compact semisimple Lie group, its maximal compact subgroup K acts by conjugation on the component P in the Cartan decomposition. If A is a maximal Abelian subgroup of G contained in P, then A is diffeomorphic to its Lie algebra under the exponential map and, as a further generalisation of the polar decomposition of matrices, every element of P is conjugate under K to an element of A, so that G = KAK.
There is also an associated Iwasawa decomposition G =KAN, where N is a closed nilpotent subgroup, diffeomorphic to its Lie algebra under the exponential map and normalised by A. Thus S=AN is a closed solvable subgroup of G, the semidirect product of N by A, and G = KS. If α in Hom(A,T) is a character of A, then α extends to a character of S, by defining it to be trivial on N. There is a corresponding unitary induced representation σ of G on L2(G/S) = L2(K), a so-called (spherical) principal series representation. This representation can be described explicitly as follows. Unlike G and K, the solvable Lie group S is not unimodular. Let dx denote left invariant Haar measure on S and ΔS the modular function of S. Then ∫ G f ( g ) d g = ∫ S ∫ K f ( x ⋅ k ) d x d k = ∫ S ∫ K f ( k ⋅ x ) Δ S ( x ) d x d k . {\displaystyle \int _{G}f(g)\,dg=\int _{S}\int _{K}f(x\cdot k)\,dx\,dk=\int _{S}\int _{K}f(k\cdot x)\Delta _{S}(x)\,dx\,dk.} The principal series representation σ is realised on L2(K) as ( σ ( g ) ξ ) ( k ) = α ′ ( g − 1 k ) − 1 ξ ( U ( g − 1 k ) ) , {\displaystyle (\sigma (g)\xi )(k)=\alpha ^{\prime }(g^{-1}k)^{-1}\,\xi (U(g^{-1}k)),} where g = U ( g ) ⋅ X ( g ) {\displaystyle g=U(g)\cdot X(g)} is the Iwasawa decomposition of g with U(g) in K and X(g) in S and α ′ ( k x ) = Δ S ( x ) 1 / 2 α ( x ) {\displaystyle \alpha ^{\prime }(kx)=\Delta _{S}(x)^{1/2}\alpha (x)} for k in K and x in S. The representation σ is irreducible, so that if v denotes the constant function 1 on K, fixed by K, φ α ( g ) = ( σ ( g ) v , v ) {\displaystyle \varphi _{\alpha }(g)=(\sigma (g)v,v)} defines a zonal spherical function of G. Computing the inner product above leads to Harish-Chandra's formula for the zonal spherical function as an integral over K. Harish-Chandra proved that these zonal spherical functions exhaust the characters of the C* algebra generated by the Cc(K \ G / K) acting by right convolution on L2(G / K). 
He also showed that two different characters α and β give the same zonal spherical function if and only if α = β·s, where s is in the Weyl group of A W ( A ) = N K ( A ) / C K ( A ) , {\displaystyle W(A)=N_{K}(A)/C_{K}(A),} the quotient of the normaliser of A in K by its centraliser, a finite reflection group. It can also be verified directly that this formula defines a zonal spherical function, without using representation theory. The proof for general semisimple Lie groups that every zonal spherical function arises in this way requires the detailed study of G-invariant differential operators on G/K and their simultaneous eigenfunctions (see below). In the case of complex semisimple groups, Harish-Chandra and Felix Berezin realised independently that the formula simplified considerably and could be proved more directly. The remaining positive-definite zonal spherical functions are given by Harish-Chandra's formula with α in Hom(A,C*) instead of Hom(A,T). Only certain α are permitted and the corresponding irreducible representations arise as analytic continuations of the spherical principal series. This so-called "complementary series" was first studied by Bargmann (1947) for G = SL(2,R) and by Harish-Chandra (1947) and Gelfand & Naimark (1947) for G = SL(2,C). Subsequently in the 1960s, the construction of a complementary series by analytic continuation of the spherical principal series was systematically developed for general semisimple Lie groups by Ray Kunze, Elias Stein and Bertram Kostant. Since these irreducible representations are not tempered, they are not usually required for harmonic analysis on G (or G / K). == Eigenfunctions == Harish-Chandra proved that zonal spherical functions can be characterised as those normalised positive definite K-invariant functions on G/K that are eigenfunctions of D(G/K), the algebra of invariant differential operators on G/K. This algebra acts on G/K and commutes with the natural action of G by left translation.
It can be identified with the subalgebra of the universal enveloping algebra of G fixed under the adjoint action of K. As for the commutant of G on L2(G/K) and the corresponding Hecke algebra, this algebra of operators is commutative; indeed it is a subalgebra of the algebra of measurable operators affiliated with the commutant π(G)', an Abelian von Neumann algebra. As Harish-Chandra proved, it is isomorphic to the algebra of W(A)-invariant polynomials on the Lie algebra of A, which itself is a polynomial ring by the Chevalley–Shephard–Todd theorem on polynomial invariants of finite reflection groups. The simplest invariant differential operator on G/K is the Laplacian operator; up to a sign this operator is just the image under π of the Casimir operator in the centre of the universal enveloping algebra of G. Thus a normalised positive definite K-biinvariant function f on G is a zonal spherical function if and only if for each D in D(G/K) there is a constant λD such that π ( D ) f = λ D f , {\displaystyle \displaystyle \pi (D)f=\lambda _{D}f,} i.e. f is a simultaneous eigenfunction of the operators π(D). If ψ is a zonal spherical function, then, regarded as a function on G/K, it is an eigenfunction of the Laplacian there, an elliptic differential operator with real analytic coefficients. By analytic elliptic regularity, ψ is a real analytic function on G/K, and hence on G. Harish-Chandra used these facts about the structure of the invariant operators to prove that his formula gave all zonal spherical functions for real semisimple Lie groups. Indeed, the commutativity of the commutant implies that the simultaneous eigenspaces of the algebra of invariant differential operators all have dimension one; and the polynomial structure of this algebra forces the simultaneous eigenvalues to be precisely those already associated with Harish-Chandra's formula.
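In the simplest rank-one case, the symmetric space of SL(2,C) treated in the next section, the eigenfunction characterisation can be checked symbolically: assuming the radial form L = −∂r² − 2 coth r ∂r of the Laplacian on hyperbolic 3-space, the candidate zonal spherical function sin(ℓr)/(ℓ sinh r) is an eigenfunction with eigenvalue ℓ² + 1. A sympy sketch:

```python
import sympy as sp

# Radial eigenfunction check on hyperbolic 3-space (the SL(2,C) case below):
# with L = -d^2/dr^2 - 2 coth(r) d/dr, the function psi = sin(l r)/(l sinh r)
# should satisfy L psi = (l^2 + 1) psi.  Verified at an exact rational sample
# point; the choice of point is arbitrary.

r, l = sp.symbols('r l', positive=True)
psi = sp.sin(l * r) / (l * sp.sinh(r))
L_psi = -sp.diff(psi, r, 2) - 2 * sp.coth(r) * sp.diff(psi, r)

residual = (L_psi - (l**2 + 1) * psi).subs({r: sp.Rational(4, 5), l: sp.Rational(3, 2)})
print(abs(float(residual.evalf())))   # ~0
```

The substitution f = sinh(r)·ψ reduces the check to f'' = −ℓ²f, which is the one-dimensional computation used in the next section.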
== Example: SL(2,C) == The group G = SL(2,C) is the complexification of the compact Lie group K = SU(2) and the double cover of the Lorentz group. The infinite-dimensional representations of the Lorentz group were first studied by Dirac in 1945, who considered the discrete series representations, which he termed expansors. A systematic study was taken up shortly afterwards by Harish-Chandra, Gelfand–Naimark and Bargmann. The irreducible representations of class one, corresponding to the zonal spherical functions, can be determined easily using the radial component of the Laplacian operator. Indeed, any unimodular complex 2×2 matrix g admits a unique polar decomposition g = pv with v unitary and p positive. In turn p = uau*, with u unitary and a a diagonal matrix with positive entries. Thus g = uaw with w = u* v, so that any K-biinvariant function on G corresponds to a function of the diagonal matrix a = ( e r / 2 0 0 e − r / 2 ) , {\displaystyle a={\begin{pmatrix}e^{r/2}&0\\0&e^{-r/2}\end{pmatrix}},} invariant under the Weyl group. Identifying G/K with hyperbolic 3-space, the zonal spherical functions ψ correspond to radial functions that are eigenfunctions of the Laplacian. But in terms of the radial coordinate r, the Laplacian is given by L = − ∂ r 2 − 2 coth ⁡ r ∂ r . {\displaystyle L=-\partial _{r}^{2}-2\coth r\partial _{r}.} Setting f(r) = sinh (r)·ψ(r), it follows that f is an odd function of r and an eigenfunction of ∂ r 2 {\displaystyle \partial _{r}^{2}} . Hence ψ ( r ) = sin ⁡ ( ℓ r ) ℓ sinh ⁡ r , {\displaystyle \psi (r)={\sin(\ell r) \over \ell \sinh r},} where ℓ {\displaystyle \ell } is real. There is a similar elementary treatment for the generalized Lorentz groups SO(N,1) in Takahashi (1963) and Faraut & Korányi (1994) (recall that SO0(3,1) = SL(2,C) / ±I). == Complex case == If G is a complex semisimple Lie group, it is the complexification of its maximal compact subgroup K. If g {\displaystyle {\mathfrak {g}}} and k {\displaystyle {\mathfrak {k}}} are their Lie algebras, then g = k ⊕ i k . 
{\displaystyle {\mathfrak {g}}={\mathfrak {k}}\oplus i{\mathfrak {k}}.} Let T be a maximal torus in K with Lie algebra t {\displaystyle {\mathfrak {t}}} . Then A = exp ⁡ i t , P = exp ⁡ i k . {\displaystyle A=\exp i{\mathfrak {t}},\,\,P=\exp i{\mathfrak {k}}.} Let W = N K ( T ) / T {\displaystyle W=N_{K}(T)/T} be the Weyl group of T in K. Recall that characters in Hom(T,T) are called weights and can be identified with elements of the weight lattice Λ in Hom( t {\displaystyle {\mathfrak {t}}} , R) = t ∗ {\displaystyle {\mathfrak {t}}^{*}} . There is a natural ordering on weights and every finite-dimensional irreducible representation (π, V) of K has a unique highest weight λ. The weights of the adjoint representation of K on k ⊖ t {\displaystyle {\mathfrak {k}}\ominus {\mathfrak {t}}} are called roots and ρ is used to denote half the sum of the positive roots α. Weyl's character formula asserts that for z = exp X in T χ λ ( e X ) ≡ T r π ( z ) = A λ + ρ ( e X ) / A ρ ( e X ) , {\displaystyle \displaystyle \chi _{\lambda }(e^{X})\equiv {\rm {Tr}}\,\pi (z)=A_{\lambda +\rho }(e^{X})/A_{\rho }(e^{X}),} where, for μ in t ∗ {\displaystyle {\mathfrak {t}}^{*}} , Aμ denotes the antisymmetrisation A μ ( e X ) = ∑ s ∈ W ε ( s ) e i μ ( s X ) , {\displaystyle \displaystyle A_{\mu }(e^{X})=\sum _{s\in W}\varepsilon (s)e^{i\mu (sX)},} and ε denotes the sign character of the finite reflection group W. Weyl's denominator formula expresses the denominator Aρ as a product: A ρ ( e X ) = e i ρ ( X ) ∏ α > 0 ( 1 − e − i α ( X ) ) , {\displaystyle \displaystyle A_{\rho }(e^{X})=e^{i\rho (X)}\prod _{\alpha >0}(1-e^{-i\alpha (X)}),} where the product is over the positive roots. Weyl's dimension formula asserts that χ λ ( 1 ) ≡ d i m V = ∏ α > 0 ( λ + ρ , α ) ∏ α > 0 ( ρ , α ) . 
{\displaystyle \displaystyle \chi _{\lambda }(1)\equiv {\rm {dim}}\,V={\prod _{\alpha >0}(\lambda +\rho ,\alpha ) \over \prod _{\alpha >0}(\rho ,\alpha )}.} where the inner product on t ∗ {\displaystyle {\mathfrak {t}}^{*}} is that associated with the Killing form on k {\displaystyle {\mathfrak {k}}} . Now every irreducible representation of K extends holomorphically to the complexification G; every irreducible character χλ(k) of K extends holomorphically to the complexification of K and t ∗ {\displaystyle {\mathfrak {t}}^{*}} ; and for every λ in Hom(A,T) = i t ∗ {\displaystyle i{\mathfrak {t}}^{*}} , there is a zonal spherical function φλ. The Berezin–Harish–Chandra formula asserts that for X in i t {\displaystyle i{\mathfrak {t}}} the zonal spherical function φλ(exp X) is given by the analytic continuation of the normalised character χλ(exp X)/χλ(1). In other words: the zonal spherical functions on a complex semisimple Lie group are given by analytic continuation of the formula for the normalised characters. One of the simplest proofs of this formula involves the radial component on A of the Laplacian on G, a proof formally parallel to Helgason's reworking of Freudenthal's classical proof of the Weyl character formula, using the radial component on T of the Laplacian on K. In the latter case the class functions on K can be identified with W-invariant functions on T. The radial component of ΔK on T is just the expression for the restriction of ΔK to W-invariant functions on T, where it is given by the formula Δ K = h − 1 ∘ Δ T ∘ h + ‖ ρ ‖ 2 , {\displaystyle \displaystyle \Delta _{K}=h^{-1}\circ \Delta _{T}\circ h+\|\rho \|^{2},} where h ( e X ) = A ρ ( e X ) {\displaystyle \displaystyle h(e^{X})=A_{\rho }(e^{X})} for X in t {\displaystyle {\mathfrak {t}}} . If χ is a character with highest weight λ, it follows that φ = h·χ satisfies Δ T φ = ( ‖ λ + ρ ‖ 2 − ‖ ρ ‖ 2 ) φ . {\displaystyle \Delta _{T}\varphi =(\|\lambda +\rho \|^{2}-\|\rho \|^{2})\varphi .} Thus for every weight μ with non-zero Fourier coefficient in φ, ‖ λ + ρ ‖ 2 = ‖ μ + ρ ‖ 2 . 
{\displaystyle \displaystyle \|\lambda +\rho \|^{2}=\|\mu +\rho \|^{2}.} The classical argument of Freudenthal shows that μ + ρ must have the form s(λ + ρ) for some s in W, so the character formula follows from the antisymmetry of φ. Similarly K-biinvariant functions on G can be identified with W(A)-invariant functions on A. The radial component of ΔG on A is just the expression for the restriction of ΔG to W(A)-invariant functions on A. It is given by the formula Δ G = H − 1 ∘ Δ A ∘ H − ‖ ρ ‖ 2 , {\displaystyle \displaystyle \Delta _{G}=H^{-1}\circ \Delta _{A}\circ H-\|\rho \|^{2},} where H ( e X ) = A ρ ( e X ) {\displaystyle \displaystyle H(e^{X})=A_{\rho }(e^{X})} for X in i t {\displaystyle i{\mathfrak {t}}} . The Berezin–Harish–Chandra formula for a zonal spherical function φ can be established by introducing the antisymmetric function f = H ⋅ φ , {\displaystyle \displaystyle f=H\cdot \varphi ,} which is an eigenfunction of the Laplacian ΔA. Since K is generated by copies of subgroups that are homomorphic images of SU(2) corresponding to simple roots, its complexification G is generated by the corresponding homomorphic images of SL(2,C). The formula for zonal spherical functions of SL(2,C) implies that f is a periodic function on i t {\displaystyle i{\mathfrak {t}}} with respect to some sublattice. Antisymmetry under the Weyl group and the argument of Freudenthal again imply that φ must have the stated form up to a multiplicative constant, which can be determined using the Weyl dimension formula. == Example: SL(2,R) == The theory of zonal spherical functions for SL(2,R) originated in the work of Mehler in 1881 on hyperbolic geometry. He discovered the analogue of the Plancherel theorem, which was rediscovered by Fock in 1943. The corresponding eigenfunction expansion is termed the Mehler–Fock transform. It was already put on a firm footing in 1910 by Hermann Weyl's important work on the spectral theory of ordinary differential equations.
The radial part of the Laplacian in this case leads to a hypergeometric differential equation, the theory of which was treated in detail by Weyl. Weyl's approach was subsequently generalised by Harish-Chandra to study zonal spherical functions and the corresponding Plancherel theorem for more general semisimple Lie groups. Following the work of Dirac on the discrete series representations of SL(2,R), the general theory of unitary irreducible representations of SL(2,R) was developed independently by Bargmann, Harish-Chandra and Gelfand–Naimark. The irreducible representations of class one, or equivalently the theory of zonal spherical functions, form an important special case of this theory. The group G = SL(2,R) is a double cover of the 3-dimensional Lorentz group SO(2,1), the symmetry group of the hyperbolic plane with its Poincaré metric. It acts by Möbius transformations. The upper half-plane can be identified with the unit disc by the Cayley transform. Under this identification G becomes identified with the group SU(1,1), also acting by Möbius transformations. Because the action is transitive, both spaces can be identified with G/K, where K = SO(2). The metric is invariant under G and the associated Laplacian is G-invariant, coinciding with the image of the Casimir operator. In the upper half-plane model the Laplacian is given by the formula Δ = − 4 y 2 ( ∂ x 2 + ∂ y 2 ) . {\displaystyle \displaystyle \Delta =-4y^{2}(\partial _{x}^{2}+\partial _{y}^{2}).} If s is a complex number and z = x + i y with y > 0, the function f s ( z ) = y s = exp ⁡ ( s ⋅ log ⁡ y ) , {\displaystyle \displaystyle f_{s}(z)=y^{s}=\exp({s}\cdot \log y),} is an eigenfunction of Δ: Δ f s = 4 s ( 1 − s ) f s . {\displaystyle \displaystyle \Delta f_{s}=4s(1-s)f_{s}.} Since Δ commutes with G, any left translate of fs is also an eigenfunction with the same eigenvalue. In particular, averaging over K, the function φ s ( z ) = ∫ K f s ( k ⋅ z ) d k {\displaystyle \displaystyle \varphi _{s}(z)=\int _{K}f_{s}(k\cdot z)\,dk} is a K-invariant eigenfunction of Δ on G/K.
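The eigenfunction computation for fs can be reproduced symbolically (a short sympy sketch of the identity Δfs = 4s(1 − s)fs stated above):

```python
import sympy as sp

# Symbolic verification of the eigenfunction computation above:
# Delta = -4 y^2 (d_x^2 + d_y^2) on the upper half-plane and f_s(z) = y^s
# give Delta f_s = 4 s (1 - s) f_s.

x, y, s = sp.symbols('x y s', positive=True)
f = y**s
Df = -4 * y**2 * (sp.diff(f, x, 2) + sp.diff(f, y, 2))
print(sp.simplify(Df - 4 * s * (1 - s) * f))   # 0
```

The symmetry s ↔ 1 − s of the eigenvalue 4s(1 − s) is the SL(2,R) instance of the Weyl-group invariance α = β·s discussed earlier.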
When s = 1 2 + i τ , {\displaystyle \displaystyle s={1 \over 2}+i\tau ,} with τ real, these functions give all the zonal spherical functions on G. As with Harish-Chandra's more general formula for semisimple Lie groups, φs is a zonal spherical function because it is the matrix coefficient corresponding to a vector fixed by K in the principal series. Various arguments are available to prove that there are no others. One of the simplest classical Lie algebraic arguments is to note that, since Δ is an elliptic operator with analytic coefficients, by analytic elliptic regularity any eigenfunction is necessarily real analytic. Hence, if the zonal spherical function corresponds to the matrix coefficient for a vector v and representation σ, the vector v is an analytic vector for G and ( σ ( e X ) v , v ) = ∑ n = 0 ∞ ( σ ( X ) n v , v ) / n ! {\displaystyle \displaystyle (\sigma (e^{X})v,v)=\sum _{n=0}^{\infty }(\sigma (X)^{n}v,v)/n!} for X in i t {\displaystyle i{\mathfrak {t}}} . The infinitesimal form of the irreducible unitary representations with a vector fixed by K was worked out classically by Bargmann. They correspond precisely to the principal series of SL(2,R). It follows that the zonal spherical function corresponds to a principal series representation. Another classical argument proceeds by showing that on radial functions the Laplacian has the form Δ = − ∂ r 2 − coth ⁡ ( r ) ⋅ ∂ r , {\displaystyle \displaystyle \Delta =-\partial _{r}^{2}-\coth(r)\cdot \partial _{r},} so that, as a function of r, the zonal spherical function φ(r) must satisfy the ordinary differential equation φ ′ ′ + coth ⁡ r φ ′ = α φ {\displaystyle \displaystyle \varphi ^{\prime \prime }+\coth r\,\varphi ^{\prime }=\alpha \,\varphi } for some constant α. The change of variables t = sinh r transforms this equation into the hypergeometric differential equation. The solution regular at r = 0 is given in terms of Legendre functions of complex index by φ ( r ) = P ρ ( cosh ⁡ r ) , {\displaystyle \displaystyle \varphi (r)=P_{\rho }(\cosh r),} where α = ρ(ρ+1).
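The claim that the radial equation is solved by Legendre functions of complex index can be spot-checked numerically; mpmath's `legenp` accepts complex degree, and ρ = −1/2 + iτ is the degree corresponding to the principal series parameter s = 1/2 + iτ (since then ρ(ρ + 1) = −(1/4 + τ²)). A sketch with an arbitrary sample point and tolerance:

```python
import mpmath as mp

# Spot-check that phi(r) = P_rho(cosh r) solves
#     phi'' + coth(r) phi' = rho (rho + 1) phi,
# with the complex degree rho = -1/2 + i*tau of the principal series.
# mpmath's legenp supports complex degree; derivatives are taken numerically.

mp.mp.dps = 30
rho = mp.mpc('-0.5', '1.3')
phi = lambda r: mp.legenp(rho, 0, mp.cosh(r))

r0 = mp.mpf('0.8')
lhs = mp.diff(phi, r0, 2) + mp.coth(r0) * mp.diff(phi, r0)
rhs = rho * (rho + 1) * phi(r0)
print(abs(lhs - rhs))   # ~0
```

These P−1/2+iτ(cosh r) are the classical cone (Mehler) functions, which is how the Mehler–Fock transform mentioned above enters.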
Further restrictions on ρ are imposed by boundedness and positive-definiteness of the zonal spherical function on G. There is yet another approach, due to Mogens Flensted-Jensen, which derives the properties of the zonal spherical functions on SL(2,R), including the Plancherel formula, from the corresponding results for SL(2,C), which are simple consequences of the Plancherel formula and Fourier inversion formula for R. This "method of descent" works more generally, allowing results for a real semisimple Lie group to be derived by descent from the corresponding results for its complexification. == Further directions == The theory of zonal functions that are not necessarily positive-definite. These are given by the same formulas as above, but without restrictions on the complex parameter s or ρ. They correspond to non-unitary representations. Harish-Chandra's eigenfunction expansion and inversion formula for spherical functions. This is an important special case of his Plancherel theorem for real semisimple Lie groups. The structure of the Hecke algebra. Harish-Chandra and Godement proved that, as convolution algebras, there are natural isomorphisms between Cc∞(K \ G / K ) and Cc∞(A)W, the subalgebra invariant under the Weyl group. This is straightforward to establish for SL(2,R). Spherical functions for Euclidean motion groups and compact Lie groups. Spherical functions for p-adic Lie groups. These were studied in depth by Satake and Macdonald. Their study, and that of the associated Hecke algebras, was one of the first steps in the extensive representation theory of semisimple p-adic Lie groups, a key element in the Langlands program. 
== See also == Plancherel theorem for spherical functions Hecke algebra of a locally compact group Representations of Lie groups Non-commutative harmonic analysis Tempered representation Positive definite function on a group Symmetric space Gelfand pair == External links == Casselman, William, Notes on spherical functions (PDF)
Wikipedia/Zonal_spherical_function
Bessel functions, named after Friedrich Bessel who was the first to systematically study them in 1824, are canonical solutions y(x) of Bessel's differential equation x 2 d 2 y d x 2 + x d y d x + ( x 2 − α 2 ) y = 0 {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0} for an arbitrary complex number α {\displaystyle \alpha } , which represents the order of the Bessel function. Although α {\displaystyle \alpha } and − α {\displaystyle -\alpha } produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of α {\displaystyle \alpha } . The most important cases are when α {\displaystyle \alpha } is an integer or half-integer. Bessel functions for integer α {\displaystyle \alpha } are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer α {\displaystyle \alpha } are obtained when solving the Helmholtz equation in spherical coordinates. == Applications == Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). 
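Since Bessel's equation underlies everything below, a quick numerical sanity check is possible with scipy, whose `jvp` computes derivatives of Jα; the residual of the ODE should vanish to machine precision (an illustrative sketch):

```python
import numpy as np
from scipy.special import jv, jvp

# Residual of Bessel's equation x^2 y'' + x y' + (x^2 - alpha^2) y = 0 for
# y = J_alpha, using scipy's jv and its derivatives jvp(alpha, x, n), which
# are evaluated via exact recurrence relations.

alpha = 1.5
x = np.linspace(0.5, 20.0, 200)
residual = (x**2 * jvp(alpha, x, 2) + x * jvp(alpha, x, 1)
            + (x**2 - alpha**2) * jv(alpha, x))
print(np.max(np.abs(residual)))   # ~0 up to floating-point rounding
```

The same check passes for any order, integer or not, since the recurrences behind `jvp` hold for arbitrary α.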
For example: electromagnetic waves in a cylindrical waveguide; pressure amplitudes of inviscid rotational flows; heat conduction in a cylindrical object; modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or of thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory); diffusion problems on a lattice; solutions to the Schrödinger equation in spherical and cylindrical coordinates for a free particle; the position-space representation of the Feynman propagator in quantum field theory; solving for patterns of acoustical radiation; frequency-dependent friction in circular pipelines; dynamics of floating bodies; angular resolution; diffraction from helical objects, including DNA; the probability density function of the product of two normally distributed random variables; and the analysis of surface waves generated by microtremors, in geophysics and seismology. Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter). == Definitions == Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections. The subscript n is typically used in place of α {\displaystyle \alpha } when α {\displaystyle \alpha } is known to be an integer.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn. === Bessel functions of the first kind: Jα === Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by x α {\displaystyle x^{\alpha }} times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation: J α ( x ) = ∑ m = 0 ∞ ( − 1 ) m m ! Γ ( m + α + 1 ) ( x 2 ) 2 m + α , {\displaystyle J_{\alpha }(x)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{m!\,\Gamma (m+\alpha +1)}}{\left({\frac {x}{2}}\right)}^{2m+\alpha },} where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by 2 {\displaystyle 2} in x / 2 {\displaystyle x/2} ; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer, otherwise it is a multivalued function with singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to x − 1 / 2 {\displaystyle x^{-{1}/{2}}} (see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.) 
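The ascending series translates directly into code. The sketch below is our own (the helper name `J` and the fixed 40-term truncation are choices, not a library API): it evaluates the series with `math.gamma` and checks the result against standard tabulated values of J₀(1) and J₁(1).

```python
import math

def J(alpha, x, terms=40):
    """Ascending series J_alpha(x) = sum_m (-1)^m (x/2)^(2m+alpha)
    / (m! * Gamma(m + alpha + 1)); 40 terms is ample for moderate x."""
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

# Spot checks against standard reference values
assert abs(J(0, 1.0) - 0.7651976865579666) < 1e-12
assert abs(J(1, 1.0) - 0.4400505857449335) < 1e-12
assert J(0, 0.0) == 1.0  # finite at the origin for non-negative order
```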
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers): J − n ( x ) = ( − 1 ) n J n ( x ) . {\displaystyle J_{-n}(x)=(-1)^{n}J_{n}(x).} This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below. ==== Bessel's integrals ==== Another definition of the Bessel function, for integer values of n, is possible using an integral representation: J n ( x ) = 1 π ∫ 0 π cos ⁡ ( n τ − x sin ⁡ τ ) d τ = 1 π Re ⁡ ( ∫ 0 π e i ( n τ − x sin ⁡ τ ) d τ ) , {\displaystyle J_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(n\tau -x\sin \tau )\,d\tau ={\frac {1}{\pi }}\operatorname {Re} \left(\int _{0}^{\pi }e^{i(n\tau -x\sin \tau )}\,d\tau \right),} which is also called Hansen-Bessel formula. This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0: J α ( x ) = 1 π ∫ 0 π cos ⁡ ( α τ − x sin ⁡ τ ) d τ − sin ⁡ ( α π ) π ∫ 0 ∞ e − x sinh ⁡ t − α t d t . {\displaystyle J_{\alpha }(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(\alpha \tau -x\sin \tau )\,d\tau -{\frac {\sin(\alpha \pi )}{\pi }}\int _{0}^{\infty }e^{-x\sinh t-\alpha t}\,dt.} ==== Relation to hypergeometric series ==== The Bessel functions can be expressed in terms of the generalized hypergeometric series as J α ( x ) = ( x 2 ) α Γ ( α + 1 ) 0 F 1 ( α + 1 ; − x 2 4 ) . 
{\displaystyle J_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\;_{0}F_{1}\left(\alpha +1;-{\frac {x^{2}}{4}}\right).} This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function. ==== Relation to Laguerre polynomials ==== In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as J α ( x ) ( x 2 ) α = e − t Γ ( α + 1 ) ∑ k = 0 ∞ L k ( α ) ( x 2 4 t ) ( k + α k ) t k k ! . {\displaystyle {\frac {J_{\alpha }(x)}{\left({\frac {x}{2}}\right)^{\alpha }}}={\frac {e^{-t}}{\Gamma (\alpha +1)}}\sum _{k=0}^{\infty }{\frac {L_{k}^{(\alpha )}\left({\frac {x^{2}}{4t}}\right)}{\binom {k+\alpha }{k}}}{\frac {t^{k}}{k!}}.} === Bessel functions of the second kind: Yα === The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann. For non-integer α, Yα(x) is related to Jα(x) by Y α ( x ) = J α ( x ) cos ⁡ ( α π ) − J − α ( x ) sin ⁡ ( α π ) . {\displaystyle Y_{\alpha }(x)={\frac {J_{\alpha }(x)\cos(\alpha \pi )-J_{-\alpha }(x)}{\sin(\alpha \pi )}}.} In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n: Y n ( x ) = lim α → n Y α ( x ) . {\displaystyle Y_{n}(x)=\lim _{\alpha \to n}Y_{\alpha }(x).} If n is a nonnegative integer, we have the series Y n ( z ) = − ( z 2 ) − n π ∑ k = 0 n − 1 ( n − k − 1 ) ! k ! ( z 2 4 ) k + 2 π J n ( z ) ln ⁡ z 2 − ( z 2 ) n π ∑ k = 0 ∞ ( ψ ( k + 1 ) + ψ ( n + k + 1 ) ) ( − z 2 4 ) k k ! ( n + k ) ! 
{\displaystyle Y_{n}(z)=-{\frac {\left({\frac {z}{2}}\right)^{-n}}{\pi }}\sum _{k=0}^{n-1}{\frac {(n-k-1)!}{k!}}\left({\frac {z^{2}}{4}}\right)^{k}+{\frac {2}{\pi }}J_{n}(z)\ln {\frac {z}{2}}-{\frac {\left({\frac {z}{2}}\right)^{n}}{\pi }}\sum _{k=0}^{\infty }(\psi (k+1)+\psi (n+k+1)){\frac {\left(-{\frac {z^{2}}{4}}\right)^{k}}{k!(n+k)!}}} where ψ ( z ) {\displaystyle \psi (z)} is the digamma function, the logarithmic derivative of the gamma function. There is also a corresponding integral formula (for Re(x) > 0): Y n ( x ) = 1 π ∫ 0 π sin ⁡ ( x sin ⁡ θ − n θ ) d θ − 1 π ∫ 0 ∞ ( e n t + ( − 1 ) n e − n t ) e − x sinh ⁡ t d t . {\displaystyle Y_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\sin(x\sin \theta -n\theta )\,d\theta -{\frac {1}{\pi }}\int _{0}^{\infty }\left(e^{nt}+(-1)^{n}e^{-nt}\right)e^{-x\sinh t}\,dt.} In the case where n = 0: (with γ {\displaystyle \gamma } being Euler's constant) Y 0 ( x ) = 4 π 2 ∫ 0 1 2 π cos ⁡ ( x cos ⁡ θ ) ( γ + ln ⁡ ( 2 x sin 2 ⁡ θ ) ) d θ . {\displaystyle Y_{0}\left(x\right)={\frac {4}{\pi ^{2}}}\int _{0}^{{\frac {1}{2}}\pi }\cos \left(x\cos \theta \right)\left(\gamma +\ln \left(2x\sin ^{2}\theta \right)\right)\,d\theta .} Yα(x) is necessary as the second linearly independent solution of the Bessel's equation when α is an integer. But Yα(x) has more meaning than that. It can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below. When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid: Y − n ( x ) = ( − 1 ) n Y n ( x ) . {\displaystyle Y_{-n}(x)=(-1)^{n}Y_{n}(x).} Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α. 
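For non-integer order the connection formula defining Yα can be exercised numerically. In this sketch (helper names and series truncation are our own choices), Y₁/₂ computed from Jα and J₋α collapses, as it should, to −√(2/(πx)) cos x.

```python
import math

def J(alpha, x, terms=40):
    # ascending series for the Bessel function of the first kind
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

def Y(alpha, x):
    """Connection formula; valid only for non-integer alpha."""
    return ((J(alpha, x) * math.cos(alpha * math.pi) - J(-alpha, x))
            / math.sin(alpha * math.pi))

# For alpha = 1/2 the formula reduces to Y_{1/2}(x) = -sqrt(2/(pi x)) cos x
x = 2.3
assert abs(Y(0.5, x) + math.sqrt(2 / (math.pi * x)) * math.cos(x)) < 1e-12
```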
The Bessel functions of the second kind when α is an integer is an example of the second kind of solution in Fuchs's theorem. === Hankel functions: H(1)α, H(2)α === Another important formulation of the two linearly independent solutions to Bessel's equation are the Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as H α ( 1 ) ( x ) = J α ( x ) + i Y α ( x ) , H α ( 2 ) ( x ) = J α ( x ) − i Y α ( x ) , {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&=J_{\alpha }(x)+iY_{\alpha }(x),\\[5pt]H_{\alpha }^{(2)}(x)&=J_{\alpha }(x)-iY_{\alpha }(x),\end{aligned}}} where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel. These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form ei f(x). For real x > 0 {\displaystyle x>0} where J α ( x ) {\displaystyle J_{\alpha }(x)} , Y α ( x ) {\displaystyle Y_{\alpha }(x)} are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for e ± i x {\displaystyle e^{\pm ix}} and J α ( x ) {\displaystyle J_{\alpha }(x)} , Y α ( x ) {\displaystyle Y_{\alpha }(x)} for cos ⁡ ( x ) {\displaystyle \cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , as explicitly shown in the asymptotic expansion. The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency). 
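For half-integer order the Hankel functions reduce to elementary outgoing waves, which makes the definition easy to test. The sketch below (the pure-Python series helper `J` is our own assumption) builds H⁽¹⁾₁/₂(x) = J₁/₂(x) + iY₁/₂(x) and compares it with the elementary form −i √(2/(πx)) eⁱˣ.

```python
import cmath
import math

def J(alpha, x, terms=40):
    # ascending series for the Bessel function of the first kind
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

x = 1.7
J_half = J(0.5, x)         # sqrt(2/(pi x)) sin x
Y_half = -J(-0.5, x)       # -sqrt(2/(pi x)) cos x, via the connection formula
H1 = J_half + 1j * Y_half  # Hankel function of the first kind, order 1/2

# Order 1/2 gives an elementary outgoing cylindrical wave
expected = -1j * math.sqrt(2 / (math.pi * x)) * cmath.exp(1j * x)
assert abs(H1 - expected) < 1e-12
```

The exact factor e^{ix} here is the "simple-looking" behavior that the asymptotic formulae exhibit for general order.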
Using the previous relationships, they can be expressed as H α ( 1 ) ( x ) = J − α ( x ) − e − α π i J α ( x ) i sin ⁡ α π , H α ( 2 ) ( x ) = J − α ( x ) − e α π i J α ( x ) − i sin ⁡ α π . {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {J_{-\alpha }(x)-e^{-\alpha \pi i}J_{\alpha }(x)}{i\sin \alpha \pi }},\\[5pt]H_{\alpha }^{(2)}(x)&={\frac {J_{-\alpha }(x)-e^{\alpha \pi i}J_{\alpha }(x)}{-i\sin \alpha \pi }}.\end{aligned}}} If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not: H − α ( 1 ) ( x ) = e α π i H α ( 1 ) ( x ) , H − α ( 2 ) ( x ) = e − α π i H α ( 2 ) ( x ) . {\displaystyle {\begin{aligned}H_{-\alpha }^{(1)}(x)&=e^{\alpha \pi i}H_{\alpha }^{(1)}(x),\\[6mu]H_{-\alpha }^{(2)}(x)&=e^{-\alpha \pi i}H_{\alpha }^{(2)}(x).\end{aligned}}} In particular, if α = m + ⁠1/2⁠ with m a nonnegative integer, the above relations imply directly that J − ( m + 1 2 ) ( x ) = ( − 1 ) m + 1 Y m + 1 2 ( x ) , Y − ( m + 1 2 ) ( x ) = ( − 1 ) m J m + 1 2 ( x ) . {\displaystyle {\begin{aligned}J_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m+1}Y_{m+{\frac {1}{2}}}(x),\\[5pt]Y_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m}J_{m+{\frac {1}{2}}}(x).\end{aligned}}} These are useful in developing the spherical Bessel functions (see below). 
The Hankel functions admit the following integral representations for Re(x) > 0: H α ( 1 ) ( x ) = 1 π i ∫ − ∞ + ∞ + π i e x sinh ⁡ t − α t d t , H α ( 2 ) ( x ) = − 1 π i ∫ − ∞ + ∞ − π i e x sinh ⁡ t − α t d t , {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {1}{\pi i}}\int _{-\infty }^{+\infty +\pi i}e^{x\sinh t-\alpha t}\,dt,\\[5pt]H_{\alpha }^{(2)}(x)&=-{\frac {1}{\pi i}}\int _{-\infty }^{+\infty -\pi i}e^{x\sinh t-\alpha t}\,dt,\end{aligned}}} where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis. === Modified Bessel functions: Iα, Kα === The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as I α ( x ) = i − α J α ( i x ) = ∑ m = 0 ∞ 1 m ! Γ ( m + α + 1 ) ( x 2 ) 2 m + α , K α ( x ) = π 2 I − α ( x ) − I α ( x ) sin ⁡ α π , {\displaystyle {\begin{aligned}I_{\alpha }(x)&=i^{-\alpha }J_{\alpha }(ix)=\sum _{m=0}^{\infty }{\frac {1}{m!\,\Gamma (m+\alpha +1)}}\left({\frac {x}{2}}\right)^{2m+\alpha },\\[5pt]K_{\alpha }(x)&={\frac {\pi }{2}}{\frac {I_{-\alpha }(x)-I_{\alpha }(x)}{\sin \alpha \pi }},\end{aligned}}} when α is not an integer. When α is an integer, then the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor. 
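A minimal numerical sketch of the modified functions (helper names, truncation, and quadrature parameters are all our own choices): Iα via the non-alternating series, and K₁/₂ via crude midpoint quadrature of the integral representation Kα(x) = ∫₀^∞ e^{−x cosh t} cosh(αt) dt, checked against the elementary closed form √(π/(2x)) e^{−x}.

```python
import math

def I(alpha, x, terms=40):
    """Modified Bessel function of the first kind: the J series
    without the alternating (-1)^m factor."""
    return sum((x / 2) ** (2 * m + alpha)
               / (math.factorial(m) * math.gamma(m + alpha + 1))
               for m in range(terms))

def K(alpha, x, upper=20.0, n=100_000):
    """Midpoint quadrature of K_alpha(x) = int_0^inf e^{-x cosh t} cosh(alpha t) dt."""
    h = upper / n
    return h * sum(math.exp(-x * math.cosh((k + 0.5) * h))
                   * math.cosh(alpha * (k + 0.5) * h) for k in range(n))

assert abs(I(0, 1.0) - 1.2660658777520084) < 1e-12  # reference value of I_0(1)
x = 1.0
assert abs(K(0.5, x) - math.sqrt(math.pi / (2 * x)) * math.exp(-x)) < 1e-6
```

The e^{−x cosh t} factor decays so fast that truncating the integral at t = 20 loses nothing at double precision.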
K α {\displaystyle K_{\alpha }} can be expressed in terms of Hankel functions: K α ( x ) = { π 2 i α + 1 H α ( 1 ) ( i x ) − π < arg ⁡ x ≤ π 2 π 2 ( − i ) α + 1 H α ( 2 ) ( − i x ) − π 2 < arg ⁡ x ≤ π {\displaystyle K_{\alpha }(x)={\begin{cases}{\frac {\pi }{2}}i^{\alpha +1}H_{\alpha }^{(1)}(ix)&-\pi <\arg x\leq {\frac {\pi }{2}}\\{\frac {\pi }{2}}(-i)^{\alpha +1}H_{\alpha }^{(2)}(-ix)&-{\frac {\pi }{2}}<\arg x\leq \pi \end{cases}}} Using these two formulae the result to J α 2 ( z ) {\displaystyle J_{\alpha }^{2}(z)} + Y α 2 ( z ) {\displaystyle Y_{\alpha }^{2}(z)} , commonly known as Nicholson's integral or Nicholson's formula, can be obtained to give the following J α 2 ( x ) + Y α 2 ( x ) = 8 π 2 ∫ 0 ∞ cosh ⁡ ( 2 α t ) K 0 ( 2 x sinh ⁡ t ) d t , {\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8}{\pi ^{2}}}\int _{0}^{\infty }\cosh(2\alpha t)K_{0}(2x\sinh t)\,dt,} given that the condition Re(x) > 0 is met. It can also be shown that J α 2 ( x ) + Y α 2 ( x ) = 8 cos ⁡ ( α π ) π 2 ∫ 0 ∞ K 2 α ( 2 x sinh ⁡ t ) d t , {\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8\cos(\alpha \pi )}{\pi ^{2}}}\int _{0}^{\infty }K_{2\alpha }(2x\sinh t)\,dt,} only when |Re(α)| < ⁠1/2⁠ and Re(x) ≥ 0 but not when x = 0. We can express the first and second Bessel functions in terms of the modified Bessel functions (these are valid if −π < arg z ≤ ⁠π/2⁠): J α ( i z ) = e α π i 2 I α ( z ) , Y α ( i z ) = e ( α + 1 ) π i 2 I α ( z ) − 2 π e − α π i 2 K α ( z ) . {\displaystyle {\begin{aligned}J_{\alpha }(iz)&=e^{\frac {\alpha \pi i}{2}}I_{\alpha }(z),\\[1ex]Y_{\alpha }(iz)&=e^{\frac {(\alpha +1)\pi i}{2}}I_{\alpha }(z)-{\tfrac {2}{\pi }}e^{-{\frac {\alpha \pi i}{2}}}K_{\alpha }(z).\end{aligned}}} Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation: x 2 d 2 y d x 2 + x d y d x − ( x 2 + α 2 ) y = 0. 
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y=0.} Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0 with the singularity being of logarithmic type for K0, and ⁠1/2⁠Γ(|α|)(2/x)|α| otherwise. Two integral formulas for the modified Bessel functions are (for Re(x) > 0): I α ( x ) = 1 π ∫ 0 π e x cos ⁡ θ cos ⁡ α θ d θ − sin ⁡ α π π ∫ 0 ∞ e − x cosh ⁡ t − α t d t , K α ( x ) = ∫ 0 ∞ e − x cosh ⁡ t cosh ⁡ α t d t . {\displaystyle {\begin{aligned}I_{\alpha }(x)&={\frac {1}{\pi }}\int _{0}^{\pi }e^{x\cos \theta }\cos \alpha \theta \,d\theta -{\frac {\sin \alpha \pi }{\pi }}\int _{0}^{\infty }e^{-x\cosh t-\alpha t}\,dt,\\[5pt]K_{\alpha }(x)&=\int _{0}^{\infty }e^{-x\cosh t}\cosh \alpha t\,dt.\end{aligned}}} Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0): 2 K 0 ( ω ) = ∫ − ∞ ∞ e i ω t t 2 + 1 d t . {\displaystyle 2\,K_{0}(\omega )=\int _{-\infty }^{\infty }{\frac {e^{i\omega t}}{\sqrt {t^{2}+1}}}\,dt.} It can be proven by showing equality to the above integral definition for K0. This is done by integrating a closed curve in the first quadrant of the complex plane. Modified Bessel functions of the second kind may be represented with Bassett's integral K n ( x z ) = Γ ( n + 1 2 ) ( 2 z ) n π x n ∫ 0 ∞ cos ⁡ ( x t ) d t ( t 2 + z 2 ) n + 1 2 . 
{\displaystyle K_{n}(xz)={\frac {\Gamma \left(n+{\frac {1}{2}}\right)(2z)^{n}}{{\sqrt {\pi }}x^{n}}}\int _{0}^{\infty }{\frac {\cos(xt)\,dt}{(t^{2}+z^{2})^{n+{\frac {1}{2}}}}}.} Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals K 1 3 ( ξ ) = 3 ∫ 0 ∞ exp ⁡ ( − ξ ( 1 + 4 x 2 3 ) 1 + x 2 3 ) d x , K 2 3 ( ξ ) = 1 3 ∫ 0 ∞ 3 + 2 x 2 1 + x 2 3 exp ⁡ ( − ξ ( 1 + 4 x 2 3 ) 1 + x 2 3 ) d x . {\displaystyle {\begin{aligned}K_{\frac {1}{3}}(\xi )&={\sqrt {3}}\int _{0}^{\infty }\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx,\\[5pt]K_{\frac {2}{3}}(\xi )&={\frac {1}{\sqrt {3}}}\int _{0}^{\infty }{\frac {3+2x^{2}}{\sqrt {1+{\frac {x^{2}}{3}}}}}\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx.\end{aligned}}} The modified Bessel function K 1 2 ( ξ ) = ( 2 ξ / π ) − 1 / 2 exp ⁡ ( − ξ ) {\displaystyle K_{\frac {1}{2}}(\xi )=(2\xi /\pi )^{-1/2}\exp(-\xi )} is useful to represent the Laplace distribution as an Exponential-scale mixture of normal distributions. The modified Bessel function of the second kind has also been called by the following names (now rare): Basset function after Alfred Barnard Basset Modified Bessel function of the third kind Modified Hankel function Macdonald function after Hector Munro Macdonald === Spherical Bessel functions: jn, yn === When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form x 2 d 2 y d x 2 + 2 x d y d x + ( x 2 − n ( n + 1 ) ) y = 0. {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+2x{\frac {dy}{dx}}+\left(x^{2}-n(n+1)\right)y=0.} The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by j n ( x ) = π 2 x J n + 1 2 ( x ) , y n ( x ) = π 2 x Y n + 1 2 ( x ) = ( − 1 ) n + 1 π 2 x J − n − 1 2 ( x ) . 
{\displaystyle {\begin{aligned}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x),\\y_{n}(x)&={\sqrt {\frac {\pi }{2x}}}Y_{n+{\frac {1}{2}}}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-n-{\frac {1}{2}}}(x).\end{aligned}}} yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions. From the relations to the ordinary Bessel functions it is directly seen that: j n ( x ) = ( − 1 ) n y − n − 1 ( x ) y n ( x ) = ( − 1 ) n + 1 j − n − 1 ( x ) {\displaystyle {\begin{aligned}j_{n}(x)&=(-1)^{n}y_{-n-1}(x)\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)\end{aligned}}} The spherical Bessel functions can also be written as (Rayleigh's formulas) j n ( x ) = ( − x ) n ( 1 x d d x ) n sin ⁡ x x , y n ( x ) = − ( − x ) n ( 1 x d d x ) n cos ⁡ x x . {\displaystyle {\begin{aligned}j_{n}(x)&=(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\sin x}{x}},\\y_{n}(x)&=-(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\cos x}{x}}.\end{aligned}}} The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. The first few spherical Bessel functions are: j 0 ( x ) = sin ⁡ x x . j 1 ( x ) = sin ⁡ x x 2 − cos ⁡ x x , j 2 ( x ) = ( 3 x 2 − 1 ) sin ⁡ x x − 3 cos ⁡ x x 2 , j 3 ( x ) = ( 15 x 3 − 6 x ) sin ⁡ x x − ( 15 x 2 − 1 ) cos ⁡ x x {\displaystyle {\begin{aligned}j_{0}(x)&={\frac {\sin x}{x}}.\\j_{1}(x)&={\frac {\sin x}{x^{2}}}-{\frac {\cos x}{x}},\\j_{2}(x)&=\left({\frac {3}{x^{2}}}-1\right){\frac {\sin x}{x}}-{\frac {3\cos x}{x^{2}}},\\j_{3}(x)&=\left({\frac {15}{x^{3}}}-{\frac {6}{x}}\right){\frac {\sin x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\cos x}{x}}\end{aligned}}} and y 0 ( x ) = − j − 1 ( x ) = − cos ⁡ x x , y 1 ( x ) = j − 2 ( x ) = − cos ⁡ x x 2 − sin ⁡ x x , y 2 ( x ) = − j − 3 ( x ) = ( − 3 x 2 + 1 ) cos ⁡ x x − 3 sin ⁡ x x 2 , y 3 ( x ) = j − 4 ( x ) = ( − 15 x 3 + 6 x ) cos ⁡ x x − ( 15 x 2 − 1 ) sin ⁡ x x . 
{\displaystyle {\begin{aligned}y_{0}(x)&=-j_{-1}(x)=-{\frac {\cos x}{x}},\\y_{1}(x)&=j_{-2}(x)=-{\frac {\cos x}{x^{2}}}-{\frac {\sin x}{x}},\\y_{2}(x)&=-j_{-3}(x)=\left(-{\frac {3}{x^{2}}}+1\right){\frac {\cos x}{x}}-{\frac {3\sin x}{x^{2}}},\\y_{3}(x)&=j_{-4}(x)=\left(-{\frac {15}{x^{3}}}+{\frac {6}{x}}\right){\frac {\cos x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\sin x}{x}}.\end{aligned}}} The first few non-zero roots of the first few spherical Bessel functions are: ==== Generating function ==== The spherical Bessel functions have the generating functions 1 z cos ⁡ ( z 2 − 2 z t ) = ∑ n = 0 ∞ t n n ! j n − 1 ( z ) , 1 z sin ⁡ ( z 2 − 2 z t ) = ∑ n = 0 ∞ t n n ! y n − 1 ( z ) . {\displaystyle {\begin{aligned}{\frac {1}{z}}\cos \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}j_{n-1}(z),\\{\frac {1}{z}}\sin \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}y_{n-1}(z).\end{aligned}}} ==== Finite series expansions ==== In contrast to the whole integer Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression: j n ( x ) = π 2 x J n + 1 2 ( x ) = = 1 2 x [ e i x ∑ r = 0 n i r − n − 1 ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r + e − i x ∑ r = 0 n ( − i ) r − n − 1 ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r ] = 1 x [ sin ⁡ ( x − n π 2 ) ∑ r = 0 [ n 2 ] ( − 1 ) r ( n + 2 r ) ! ( 2 r ) ! ( n − 2 r ) ! ( 2 x ) 2 r + cos ⁡ ( x − n π 2 ) ∑ r = 0 [ n − 1 2 ] ( − 1 ) r ( n + 2 r + 1 ) ! ( 2 r + 1 ) ! ( n − 2 r − 1 ) ! ( 2 x ) 2 r + 1 ] y n ( x ) = ( − 1 ) n + 1 j − n − 1 ( x ) = ( − 1 ) n + 1 π 2 x J − ( n + 1 2 ) ( x ) = = ( − 1 ) n + 1 2 x [ e i x ∑ r = 0 n i r + n ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r + e − i x ∑ r = 0 n ( − i ) r + n ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r ] = = ( − 1 ) n + 1 x [ cos ⁡ ( x + n π 2 ) ∑ r = 0 [ n 2 ] ( − 1 ) r ( n + 2 r ) ! ( 2 r ) ! ( n − 2 r ) ! ( 2 x ) 2 r − sin ⁡ ( x + n π 2 ) ∑ r = 0 [ n − 1 2 ] ( − 1 ) r ( n + 2 r + 1 ) ! ( 2 r + 1 ) ! 
( n − 2 r − 1 ) ! ( 2 x ) 2 r + 1 ] {\displaystyle {\begin{alignedat}{2}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x)=\\&={\frac {1}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]\\&={\frac {1}{x}}\left[\sin \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}+\cos \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)=(-1)^{n+1}{\frac {\pi }{2x}}J_{-\left(n+{\frac {1}{2}}\right)}(x)=\\&={\frac {(-1)^{n+1}}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]=\\&={\frac {(-1)^{n+1}}{x}}\left[\cos \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}-\sin \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\end{alignedat}}} ==== Differential relations ==== In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ... ( 1 z d d z ) m ( z n + 1 f n ( z ) ) = z n − m + 1 f n − m ( z ) , ( 1 z d d z ) m ( z − n f n ( z ) ) = ( − 1 ) m z − n − m f n + m ( z ) . {\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}} === Spherical Hankel functions: h(1)n, h(2)n === There are also spherical analogues of the Hankel functions: h n ( 1 ) ( x ) = j n ( x ) + i y n ( x ) , h n ( 2 ) ( x ) = j n ( x ) − i y n ( x ) . 
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}} There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n: h n ( 1 ) ( x ) = ( − i ) n + 1 e i x x ∑ m = 0 n i m m ! ( 2 x ) m ( n + m ) ! ( n − m ) ! , {\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},} and h(2)n is the complex-conjugate of this (for real x). It follows, for example, that j0(x) = ⁠sin x/x⁠ and y0(x) = −⁠cos x/x⁠, and so on. The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field. === Riccati–Bessel functions: Sn, Cn, ξn, ζn === Riccati–Bessel functions only slightly differ from spherical Bessel functions: S n ( x ) = x j n ( x ) = π x 2 J n + 1 2 ( x ) C n ( x ) = − x y n ( x ) = − π x 2 Y n + 1 2 ( x ) ξ n ( x ) = x h n ( 1 ) ( x ) = π x 2 H n + 1 2 ( 1 ) ( x ) = S n ( x ) − i C n ( x ) ζ n ( x ) = x h n ( 2 ) ( x ) = π x 2 H n + 1 2 ( 2 ) ( x ) = S n ( x ) + i C n ( x ) {\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}} They satisfy the differential equation x 2 d 2 y d x 2 + ( x 2 − n ( n + 1 ) ) y = 0. 
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.} For example, this kind of differential equation appears in quantum mechanics while solving the radial component of the Schrödinger equation with a hypothetical cylindrical infinite potential barrier. This differential equation, and the Riccati–Bessel solutions, also arise in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See, e.g., Du (2004) for recent developments and references. Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn. == Asymptotic forms == The Bessel functions have the following asymptotic forms. For small arguments 0 < z ≪ α + 1 {\displaystyle 0<z\ll {\sqrt {\alpha +1}}} , one obtains, when α {\displaystyle \alpha } is not a negative integer: J α ( z ) ∼ 1 Γ ( α + 1 ) ( z 2 ) α . {\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.} When α is a negative integer, we have J α ( z ) ∼ ( − 1 ) α ( − α ) ! ( 2 z ) α . 
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.} For the Bessel function of the second kind we have three cases: Y α ( z ) ∼ { 2 π ( ln ⁡ ( z 2 ) + γ ) if α = 0 − Γ ( α ) π ( 2 z ) α + 1 Γ ( α + 1 ) ( z 2 ) α cot ⁡ ( α π ) if α is a positive integer (one term dominates unless α is imaginary) , − ( − 1 ) α Γ ( − α ) π ( z 2 ) α if α is a negative integer, {\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is a positive integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}} where γ is the Euler–Mascheroni constant (0.5772...). For large real arguments z ≫ |α2 − ⁠1/4⁠|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|−1: J α ( z ) = 2 π z ( cos ⁡ ( z − α π 2 − π 4 ) + e | Im ⁡ ( z ) | O ( | z | − 1 ) ) for | arg ⁡ z | < π , Y α ( z ) = 2 π z ( sin ⁡ ( z − α π 2 − π 4 ) + e | Im ⁡ ( z ) | O ( | z | − 1 ) ) for | arg ⁡ z | < π . 
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}} (For α = ⁠1/2⁠, the last terms in these formulas drop out completely; see the spherical Bessel functions above.) The asymptotic forms for the Hankel functions are: H α ( 1 ) ( z ) ∼ 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 2 π , H α ( 2 ) ( z ) ∼ 2 π z e − i ( z − α π 2 − π 4 ) for − 2 π < arg ⁡ z < π . {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}} These can be extended to other values of arg z using equations relating H(1)α(zeimπ) and H(2)α(zeimπ) to H(1)α(z) and H(2)α(z). It is interesting that although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). 
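The leading large-argument form for real z can be compared against the ascending series directly; double precision keeps the series usable up to x ≈ 20 despite cancellation among large terms. A sketch (the truncation choice is ours):

```python
import math

def J0_series(x, terms=60):
    # ascending series for J_0; at x = 20 the largest terms are ~1e7,
    # so cancellation still leaves roughly 9 correct digits in doubles
    return sum((-1) ** m / math.factorial(m) ** 2 * (x / 2) ** (2 * m)
               for m in range(terms))

x = 20.0
leading = math.sqrt(2 / (math.pi * x)) * math.cos(x - math.pi / 4)
# The neglected correction is O(1/|z|), so the agreement is already good
assert abs(J0_series(x) - leading) < 0.01
```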
But the asymptotic forms for the Hankel functions permit us to write asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part): J α ( z ) ∼ 1 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 0 , J α ( z ) ∼ 1 2 π z e − i ( z − α π 2 − π 4 ) for 0 < arg ⁡ z < π , Y α ( z ) ∼ − i 1 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 0 , Y α ( z ) ∼ i 1 2 π z e − i ( z − α π 2 − π 4 ) for 0 < arg ⁡ z < π . {\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}} For the modified Bessel functions, Hankel developed asymptotic expansions as well: I α ( z ) ∼ e z 2 π z ( 1 − 4 α 2 − 1 8 z + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) 2 ! ( 8 z ) 2 − ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) ( 4 α 2 − 25 ) 3 ! ( 8 z ) 3 + ⋯ ) for | arg ⁡ z | < π 2 , K α ( z ) ∼ π 2 z e − z ( 1 + 4 α 2 − 1 8 z + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) 2 ! ( 8 z ) 2 + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) ( 4 α 2 − 25 ) 3 ! ( 8 z ) 3 + ⋯ ) for | arg ⁡ z | < 3 π 2 . 
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}} There is also the asymptotic form (for large real z {\displaystyle z} ) I α ( z ) = 1 2 π z 1 + α 2 z 2 4 exp ⁡ ( − α arcsinh ⁡ ( α z ) + z 1 + α 2 z 2 ) ( 1 + O ( 1 z 1 + α 2 z 2 ) ) . {\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}} When α = ⁠1/2⁠, all the terms except the first vanish, and we have I 1 / 2 ( z ) = 2 π sinh ⁡ ( z ) z ∼ e z 2 π z for | arg ⁡ z | < π 2 , K 1 / 2 ( z ) = π 2 e − z z . 
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}} For small arguments 0 < | z | ≪ α + 1 {\displaystyle 0<|z|\ll {\sqrt {\alpha +1}}} , we have I α ( z ) ∼ 1 Γ ( α + 1 ) ( z 2 ) α , K α ( z ) ∼ { − ln ⁡ ( z 2 ) − γ if α = 0 Γ ( α ) 2 ( 2 z ) α if α > 0 {\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}} == Properties == For integer order α = n, Jn is often defined via a Laurent series for a generating function: e x 2 ( t − 1 t ) = ∑ n = − ∞ ∞ J n ( x ) t n {\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}} an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.) Infinite series of Bessel functions in the form ∑ ν = − ∞ ∞ J N ν + p ( x ) {\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)} where ν , p ∈ Z , N ∈ Z + \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+} arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3: ∑ ν = − ∞ ∞ J 3 ν + p ( x ) = 1 3 [ 1 + 2 cos ⁡ ( x 3 / 2 − 2 π p / 3 ) ] {\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]} . 
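The N = 3 case above is easy to spot-check numerically, since the terms of the sum decay rapidly once the order exceeds the argument. A minimal sketch, assuming SciPy is available (the values of x and p are arbitrary illustrations):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_v(x)

x, p = 1.7, 1
# Truncated Sung series: sum of J_{3*nu + p}(x) over nu; terms with
# |3*nu + p| well above x are negligibly small.
s = sum(jv(3 * nu + p, x) for nu in range(-60, 61))
# Closed form for N = 3 from the text.
closed = (1 + 2 * np.cos(x * np.sqrt(3) / 2 - 2 * np.pi * p / 3)) / 3
sung_err = abs(s - closed)
```

The same truncation works for the general Sung series, with the closed form replaced by the finite sum over q.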
More generally, the Sung series and the alternating Sung series are written as: ∑ ν = − ∞ ∞ J N ν + p ( x ) = 1 N ∑ q = 0 N − 1 e i x sin ⁡ 2 π q / N e − i 2 π p q / N {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}} ∑ ν = − ∞ ∞ ( − 1 ) ν J N ν + p ( x ) = 1 N ∑ q = 0 N − 1 e i x sin ⁡ ( 2 q + 1 ) π / N e − i ( 2 q + 1 ) π p / N {\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}} A series expansion using Bessel functions (Kapteyn series) is 1 1 − z = 1 + 2 ∑ n = 1 ∞ J n ( n z ) . {\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).} Another important relation for integer orders is the Jacobi–Anger expansion: e i z cos ⁡ ϕ = ∑ n = − ∞ ∞ i n J n ( z ) e i n ϕ {\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }} and e ± i z sin ⁡ ϕ = J 0 ( z ) + 2 ∑ n = 1 ∞ J 2 n ( z ) cos ⁡ ( 2 n ϕ ) ± 2 i ∑ n = 0 ∞ J 2 n + 1 ( z ) sin ⁡ ( ( 2 n + 1 ) ϕ ) {\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )} which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal. More generally, a series f ( z ) = a 0 ν J ν ( z ) + 2 ⋅ ∑ k = 1 ∞ a k ν J ν + k ( z ) {\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)} is called Neumann expansion of f. The coefficients for ν = 0 have the explicit form a k 0 = 1 2 π i ∫ | z | = c f ( z ) O k ( z ) d z {\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz} where Ok is Neumann's polynomial. 
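The Jacobi–Anger expansion quoted above lends itself to the same kind of numerical verification; a sketch assuming SciPy, with the infinite sum truncated (the terms decay rapidly once |n| exceeds |z|):

```python
import numpy as np
from scipy.special import jv

z, phi = 2.3, 0.7            # arbitrary test point
lhs = np.exp(1j * z * np.cos(phi))
# Truncated Jacobi-Anger sum: i^n J_n(z) e^{i n phi}
rhs = sum((1j ** n) * jv(n, z) * np.exp(1j * n * phi) for n in range(-40, 41))
ja_err = abs(lhs - rhs)
```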
Selected functions admit the special representation f ( z ) = ∑ k = 0 ∞ a k ν J ν + 2 k ( z ) {\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)} with a k ν = 2 ( ν + 2 k ) ∫ 0 ∞ f ( z ) J ν + 2 k ( z ) z d z {\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz} due to the orthogonality relation ∫ 0 ∞ J α ( z ) J β ( z ) d z z = 2 π sin ⁡ ( π 2 ( α − β ) ) α 2 − β 2 {\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}} More generally, if f has a branch-point near the origin of such a nature that f ( z ) = ∑ k = 0 a k J ν + k ( z ) {\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)} then L { ∑ k = 0 a k J ν + k } ( s ) = 1 1 + s 2 ∑ k = 0 a k ( s + 1 + s 2 ) ν + k {\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}} or ∑ k = 0 a k ξ ν + k = 1 + ξ 2 2 ξ L { f } ( 1 − ξ 2 2 ξ ) {\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)} where L { f } {\displaystyle {\mathcal {L}}\{f\}} is the Laplace transform of f. 
Another way to define the Bessel functions is the Poisson representation formula and the Mehler-Sonine formula: J ν ( z ) = ( z 2 ) ν Γ ( ν + 1 2 ) π ∫ − 1 1 e i z s ( 1 − s 2 ) ν − 1 2 d s = 2 ( z 2 ) ν ⋅ π ⋅ Γ ( 1 2 − ν ) ∫ 1 ∞ sin ⁡ z u ( u 2 − 1 ) ν + 1 2 d u {\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}} where ν > −⁠1/2⁠ and z ∈ C. This formula is useful especially when working with Fourier transforms. Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that: ∫ 0 1 x J α ( x u α , m ) J α ( x u α , n ) d x = δ m , n 2 [ J α + 1 ( u α , m ) ] 2 = δ m , n 2 [ J α ′ ( u α , m ) ] 2 {\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}} where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m. 
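The orthogonality relation over [0, 1] can be checked by direct quadrature; a sketch assuming SciPy, whose `jn_zeros` supplies the zeros u_{α,m} for integer α:

```python
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

alpha = 0
u = jn_zeros(alpha, 2)                      # first two positive zeros of J_0
# Diagonal case m = n = 1 and off-diagonal case m = 1, n = 2.
diag, _ = quad(lambda x: x * jv(alpha, u[0] * x) ** 2, 0, 1)
off, _ = quad(lambda x: x * jv(alpha, u[0] * x) * jv(alpha, u[1] * x), 0, 1)
expected = 0.5 * jv(alpha + 1, u[0]) ** 2   # (1/2) J_{alpha+1}(u_{alpha,1})^2
```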
An analogous relationship for the spherical Bessel functions follows immediately: ∫ 0 1 x 2 j α ( x u α , m ) j α ( x u α , n ) d x = δ m , n 2 [ j α + 1 ( u α , m ) ] 2 {\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}} If one defines a boxcar function of x that depends on a small parameter ε as: f ε ( x ) = 1 ε rect ⁡ ( x − 1 ε ) {\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)} (where rect is the rectangle function) then the Hankel transform of it (of any given order α > −⁠1/2⁠), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x): ∫ 0 ∞ k J α ( k x ) g ε ( k ) d k = f ε ( x ) {\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)} which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense): ∫ 0 ∞ k J α ( k x ) J α ( k ) d k = δ ( x − 1 ) {\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)} A change of variables then yields the closure equation: ∫ 0 ∞ x J α ( u x ) J α ( v x ) d x = 1 u δ ( u − v ) {\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)} for α > −⁠1/2⁠. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is: ∫ 0 ∞ x 2 j α ( u x ) j α ( v x ) d x = π 2 u v δ ( u − v ) {\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)} for α > −1. 
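The boxcar limit described above can also be observed numerically: for small ε the Hankel transform of f_ε is already close to J_α(k). A sketch assuming SciPy:

```python
from scipy.integrate import quad
from scipy.special import jv

alpha, k, eps = 0, 2.5, 1e-4
# Hankel transform of the narrow boxcar f_eps centered at x = 1:
# integral of x J_alpha(k x) f_eps(x) dx over its support.
g, _ = quad(lambda x: x * jv(alpha, k * x) / eps, 1 - eps / 2, 1 + eps / 2)
box_err = abs(g - jv(alpha, k))
```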
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions: A α ( x ) d B α d x − d A α d x B α ( x ) = C α x {\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}} where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular, J α ( x ) d Y α d x − d J α d x Y α ( x ) = 2 π x {\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}} and I α ( x ) d K α d x − d I α d x K α ( x ) = − 1 x , {\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},} for α > −1. For α > −1, the even entire function of genus 1, x−αJα(x), has only real zeros. Let 0 < j α , 1 < j α , 2 < ⋯ < j α , n < ⋯ {\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots } be all its positive zeros, then J α ( z ) = ( z 2 ) α Γ ( α + 1 ) ∏ n = 1 ∞ ( 1 − z 2 j α , n 2 ) {\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)} (There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.) === Recurrence relations === The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations 2 α x Z α ( x ) = Z α − 1 ( x ) + Z α + 1 ( x ) {\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)} and 2 d Z α ( x ) d x = Z α − 1 ( x ) − Z α + 1 ( x ) , {\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),} where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. 
In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that ( 1 x d d x ) m [ x α Z α ( x ) ] = x α − m Z α − m ( x ) , ( 1 x d d x ) m [ Z α ( x ) x α ] = ( − 1 ) m Z α + m ( x ) x α + m . {\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}} Using the previous relations one can arrive at similar relations for the spherical Bessel functions: 2 α + 1 x j α ( x ) = j α − 1 + j α + 1 {\displaystyle {\frac {2\alpha +1}{x}}j_{\alpha }(x)=j_{\alpha -1}+j_{\alpha +1}} and d j α ( x ) d x = j α − 1 − α + 1 x j α {\displaystyle {\frac {dj_{\alpha }(x)}{dx}}=j_{\alpha -1}-{\frac {\alpha +1}{x}}j_{\alpha }} Modified Bessel functions follow similar relations: e ( x 2 ) ( t + 1 t ) = ∑ n = − ∞ ∞ I n ( x ) t n {\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}} and e z cos ⁡ θ = I 0 ( z ) + 2 ∑ n = 1 ∞ I n ( z ) cos ⁡ n θ {\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta } and 1 2 π ∫ 0 2 π e z cos ⁡ ( m θ ) + y cos ⁡ θ d θ = I 0 ( z ) I 0 ( y ) + 2 ∑ n = 1 ∞ I n ( z ) I m n ( y ) . {\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).} The recurrence relation reads C α − 1 ( x ) − C α + 1 ( x ) = 2 α x C α ( x ) , C α − 1 ( x ) + C α + 1 ( x ) = 2 d d x C α ( x ) , {\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}} where Cα denotes Iα or eαiπKα.
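As an illustration of computing higher orders from lower ones, the recurrences can be used to climb in order from two seed values, for J_n via Z_{n+1} = (2n/x)Z_n − Z_{n−1} and for I_n via I_{n+1} = I_{n−1} − (2n/x)I_n. A sketch assuming SciPy (upward recurrence for J_n is only stable while the order stays below the argument; for large orders the downward direction is preferred):

```python
from scipy.special import iv, jv

x = 3.1
jp, jc = jv(0, x), jv(1, x)   # seeds J_0, J_1
ip, ic = iv(0, x), iv(1, x)   # seeds I_0, I_1
for n in range(1, 5):         # climb from order 1 up to order 5
    jp, jc = jc, (2 * n / x) * jc - jp    # J_{n+1} = (2n/x) J_n - J_{n-1}
    ip, ic = ic, ip - (2 * n / x) * ic    # I_{n+1} = I_{n-1} - (2n/x) I_n
j_err = abs(jc - jv(5, x))
i_err = abs(ic - iv(5, x))
```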
These recurrence relations are useful for discrete diffusion problems. === Transcendence === In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative ⁠J'ν(x)/Jν(x)⁠ are transcendental numbers when ν is rational and x is algebraic and nonzero. The same proof also implies that Γ ( v + 1 ) ( 2 / x ) v J v ( x ) {\displaystyle \Gamma (v+1)(2/x)^{v}J_{v}(x)} is transcendental under the same assumptions. === Sums with Bessel functions === The product of two Bessel functions admits the following sum: ∑ ν = − ∞ ∞ J ν ( x ) J n − ν ( y ) = J n ( x + y ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),} ∑ ν = − ∞ ∞ J ν ( x ) J ν + n ( y ) = J n ( y − x ) . {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).} From these equalities it follows that ∑ ν = − ∞ ∞ J ν ( x ) J ν + n ( x ) = δ n , 0 {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}} and as a consequence ∑ ν = − ∞ ∞ J ν 2 ( x ) = 1. {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.} These sums can be extended to include a term multiplier that is a polynomial function of the index. For example, ∑ ν = − ∞ ∞ ν J ν ( x ) J ν + n ( x ) = x 2 ( δ n , 1 + δ n , − 1 ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),} ∑ ν = − ∞ ∞ ν J ν 2 ( x ) = 0 , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,} ∑ ν = − ∞ ∞ ν 2 J ν ( x ) J ν + n ( x ) = x 2 ( δ n , − 1 − δ n , 1 ) + x 2 4 ( δ n , − 2 + 2 δ n , 0 + δ n , 2 ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),} ∑ ν = − ∞ ∞ ν 2 J ν 2 ( x ) = x 2 2 . 
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.} == Multiplication theorem == The Bessel functions obey a multiplication theorem λ − ν J ν ( λ z ) = ∑ n = 0 ∞ 1 n ! ( ( 1 − λ 2 ) z 2 ) n J ν + n ( z ) , {\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),} where λ and ν may be taken as arbitrary complex numbers. For |λ2 − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ2 − 1| < 1 are λ − ν I ν ( λ z ) = ∑ n = 0 ∞ 1 n ! ( ( λ 2 − 1 ) z 2 ) n I ν + n ( z ) {\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)} and λ − ν K ν ( λ z ) = ∑ n = 0 ∞ ( − 1 ) n n ! ( ( λ 2 − 1 ) z 2 ) n K ν + n ( z ) . {\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).} == Zeros of the Bessel function == === Bourget's hypothesis === Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929. === Transcendence === Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). 
It is also known that all roots of the higher derivatives J ν ( n ) ( x ) {\displaystyle J_{\nu }^{(n)}(x)} for n ≤ 18 are transcendental, except for the special values J 1 ( 3 ) ( ± 3 ) = 0 {\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0} and J 0 ( 4 ) ( ± 3 ) = 0 {\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0} . === Numerical approaches === For numerical studies of the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004). === Numerical values === The first zeros in J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively. == History == === Waves and elasticity problems === A Bessel function first appears in the work of Daniel Bernoulli in 1732, in his analysis of a vibrating string, a problem tackled earlier by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function now recognized as J 0 ( x ) {\displaystyle J_{0}(x)} . Bernoulli also developed a method to find the zeros of the function. In 1736, Leonhard Euler found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also introduced a non-uniform chain that led to the introduction of functions now related to modified Bessel functions I n ( x ) {\displaystyle I_{n}(x)} . In the middle of the eighteenth century, Jean le Rond d'Alembert found a formula to solve the wave equation. By 1771 there was a dispute among Bernoulli, Euler, d'Alembert, and Joseph-Louis Lagrange on the nature of the solutions of vibrating strings. Euler worked in 1778 on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for J ± 1 / 3 ( x ) {\displaystyle J_{\pm 1/3}(x)} .
Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation he introduced a power series associated with J n ( x ) {\displaystyle J_{n}(x)} , for integer n. Near the end of the 18th century, Lagrange, Pierre-Simon Laplace, and Marc-Antoine Parseval also found equivalents of the Bessel functions. Parseval, for example, found an integral representation of J 0 ( x ) {\displaystyle J_{0}(x)} using the cosine. At the beginning of the 1800s, Joseph Fourier used J 0 ( x ) {\displaystyle J_{0}(x)} to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize of the French Academy of Sciences for this work in 1811. But most of the details of his work, including the use of a Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions, including Bessel functions of half-integer order (now known as spherical Bessel functions). === Astronomical problems === In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of Fourier's work, which was published later. In 1824, Bessel carried out a systematic investigation of the functions, which earned the functions his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions. == See also == == Notes == == References == == External links ==
Wikipedia/Bessel_functions
In mathematics, the multivariate gamma function Γp is a generalization of the gamma function. It is useful in multivariate statistics, appearing in the probability density function of the Wishart and inverse Wishart distributions, and the matrix variate beta distribution. It has two equivalent definitions. One is given as the following integral over the p × p {\displaystyle p\times p} positive-definite real matrices: Γ p ( a ) = ∫ S > 0 exp ⁡ ( − t r ( S ) ) | S | a − p + 1 2 d S , {\displaystyle \Gamma _{p}(a)=\int _{S>0}\exp \left(-{\rm {tr}}(S)\right)\,\left|S\right|^{a-{\frac {p+1}{2}}}dS,} where | S | {\displaystyle |S|} denotes the determinant of S {\displaystyle S} . The other, which is more useful for obtaining numerical results, is: Γ p ( a ) = π p ( p − 1 ) / 4 ∏ j = 1 p Γ ( a + ( 1 − j ) / 2 ) . {\displaystyle \Gamma _{p}(a)=\pi ^{p(p-1)/4}\prod _{j=1}^{p}\Gamma (a+(1-j)/2).} In both definitions, a {\displaystyle a} is a complex number whose real part satisfies ℜ ( a ) > ( p − 1 ) / 2 {\displaystyle \Re (a)>(p-1)/2} . Note that Γ 1 ( a ) {\displaystyle \Gamma _{1}(a)} reduces to the ordinary gamma function. The second of the above definitions allows one to directly obtain the recursive relationships for p ≥ 2 {\displaystyle p\geq 2} : Γ p ( a ) = π ( p − 1 ) / 2 Γ ( a ) Γ p − 1 ( a − 1 2 ) = π ( p − 1 ) / 2 Γ p − 1 ( a ) Γ ( a + ( 1 − p ) / 2 ) . {\displaystyle \Gamma _{p}(a)=\pi ^{(p-1)/2}\Gamma (a)\Gamma _{p-1}(a-{\tfrac {1}{2}})=\pi ^{(p-1)/2}\Gamma _{p-1}(a)\Gamma (a+(1-p)/2).} Thus Γ 2 ( a ) = π 1 / 2 Γ ( a ) Γ ( a − 1 / 2 ) {\displaystyle \Gamma _{2}(a)=\pi ^{1/2}\Gamma (a)\Gamma (a-1/2)} Γ 3 ( a ) = π 3 / 2 Γ ( a ) Γ ( a − 1 / 2 ) Γ ( a − 1 ) {\displaystyle \Gamma _{3}(a)=\pi ^{3/2}\Gamma (a)\Gamma (a-1/2)\Gamma (a-1)} and so on.
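The product formula and the recursion are straightforward to implement and cross-check; a minimal sketch using only the standard library (the helper name `multigamma` is illustrative, not a standard API):

```python
import math

def multigamma(a, p):
    """Multivariate gamma function via the finite product formula."""
    return math.pi ** (p * (p - 1) / 4) * math.prod(
        math.gamma(a + (1 - j) / 2) for j in range(1, p + 1))

a = 4.0
g3 = multigamma(a, 3)
# Recursion: Gamma_p(a) = pi^{(p-1)/2} * Gamma(a) * Gamma_{p-1}(a - 1/2), p = 3
rec = math.pi ** ((3 - 1) / 2) * math.gamma(a) * multigamma(a - 0.5, 2)
```

For p = 1 the product reduces to the ordinary gamma function, which makes a convenient sanity check.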
This can also be extended to non-integer values of p {\displaystyle p} with the expression: Γ p ( a ) = π p ( p − 1 ) / 4 G ( a + 1 2 ) G ( a + 1 ) G ( a + 1 − p 2 ) G ( a + 1 − p 2 ) {\displaystyle \Gamma _{p}(a)=\pi ^{p(p-1)/4}{\frac {G(a+{\frac {1}{2}})G(a+1)}{G(a+{\frac {1-p}{2}})G(a+1-{\frac {p}{2}})}}} where G is the Barnes G-function, the indefinite product of the gamma function. The function was derived from first principles by Anderson, who also cites earlier work by Wishart, Mahalanobis, and others. There also exists a version of the multivariate gamma function which instead of a single complex number takes a p {\displaystyle p} -dimensional vector of complex numbers as its argument. It generalizes the above-defined multivariate gamma function insofar as the latter is obtained by a particular choice of multivariate argument of the former. == Derivatives == We may define the multivariate digamma function as ψ p ( a ) = ∂ log ⁡ Γ p ( a ) ∂ a = ∑ i = 1 p ψ ( a + ( 1 − i ) / 2 ) , {\displaystyle \psi _{p}(a)={\frac {\partial \log \Gamma _{p}(a)}{\partial a}}=\sum _{i=1}^{p}\psi (a+(1-i)/2),} and the general polygamma function as ψ p ( n ) ( a ) = ∂ n log ⁡ Γ p ( a ) ∂ a n = ∑ i = 1 p ψ ( n ) ( a + ( 1 − i ) / 2 ) . {\displaystyle \psi _{p}^{(n)}(a)={\frac {\partial ^{n}\log \Gamma _{p}(a)}{\partial a^{n}}}=\sum _{i=1}^{p}\psi ^{(n)}(a+(1-i)/2).} === Calculation steps === Since Γ p ( a ) = π p ( p − 1 ) / 4 ∏ j = 1 p Γ ( a + 1 − j 2 ) , {\displaystyle \Gamma _{p}(a)=\pi ^{p(p-1)/4}\prod _{j=1}^{p}\Gamma \left(a+{\frac {1-j}{2}}\right),} it follows that ∂ Γ p ( a ) ∂ a = π p ( p − 1 ) / 4 ∑ i = 1 p ∂ Γ ( a + 1 − i 2 ) ∂ a ∏ j = 1 , j ≠ i p Γ ( a + 1 − j 2 ) . 
{\displaystyle {\frac {\partial \Gamma _{p}(a)}{\partial a}}=\pi ^{p(p-1)/4}\sum _{i=1}^{p}{\frac {\partial \Gamma \left(a+{\frac {1-i}{2}}\right)}{\partial a}}\prod _{j=1,j\neq i}^{p}\Gamma \left(a+{\frac {1-j}{2}}\right).} By definition of the digamma function, ψ, ∂ Γ ( a + ( 1 − i ) / 2 ) ∂ a = ψ ( a + ( 1 − i ) / 2 ) Γ ( a + ( 1 − i ) / 2 ) {\displaystyle {\frac {\partial \Gamma (a+(1-i)/2)}{\partial a}}=\psi (a+(1-i)/2)\Gamma (a+(1-i)/2)} it follows that ∂ Γ p ( a ) ∂ a = π p ( p − 1 ) / 4 ∏ j = 1 p Γ ( a + ( 1 − j ) / 2 ) ∑ i = 1 p ψ ( a + ( 1 − i ) / 2 ) = Γ p ( a ) ∑ i = 1 p ψ ( a + ( 1 − i ) / 2 ) . {\displaystyle {\begin{aligned}{\frac {\partial \Gamma _{p}(a)}{\partial a}}&=\pi ^{p(p-1)/4}\prod _{j=1}^{p}\Gamma (a+(1-j)/2)\sum _{i=1}^{p}\psi (a+(1-i)/2)\\[4pt]&=\Gamma _{p}(a)\sum _{i=1}^{p}\psi (a+(1-i)/2).\end{aligned}}} == References == 1. James, A. (1964). "Distributions of Matrix Variates and Latent Roots Derived from Normal Samples". Annals of Mathematical Statistics. 35 (2): 475–501. doi:10.1214/aoms/1177703550. MR 0181057. Zbl 0121.36605. 2. Gupta, A. K.; Nagar, D. K. (1999). "Matrix Variate Distributions". Chapman and Hall.
Wikipedia/Multivariate_gamma_function
In mathematics, a confluent hypergeometric function is a solution of a confluent hypergeometric equation, which is a degenerate form of a hypergeometric differential equation where two of the three regular singularities merge into an irregular singularity. The term confluent refers to the merging of singular points of families of differential equations; confluere is Latin for "to flow together". There are several common standard forms of confluent hypergeometric functions: Kummer's (confluent hypergeometric) function M(a, b, z), introduced by Kummer (1837), is a solution to Kummer's differential equation. This is also known as the confluent hypergeometric function of the first kind. There is a different and unrelated Kummer's function bearing the same name. Tricomi's (confluent hypergeometric) function U(a, b, z) introduced by Francesco Tricomi (1947), sometimes denoted by Ψ(a; b; z), is another solution to Kummer's equation. This is also known as the confluent hypergeometric function of the second kind. Whittaker functions (for Edmund Taylor Whittaker) are solutions to Whittaker's equation. Coulomb wave functions are solutions to the Coulomb wave equation. The Kummer functions, Whittaker functions, and Coulomb wave functions are essentially the same, and differ from each other only by elementary functions and change of variables. == Kummer's equation == Kummer's equation may be written as: z d 2 w d z 2 + ( b − z ) d w d z − a w = 0 , {\displaystyle z{\frac {d^{2}w}{dz^{2}}}+(b-z){\frac {dw}{dz}}-aw=0,} with a regular singular point at z = 0 and an irregular singular point at z = ∞. It has two (usually) linearly independent solutions M(a, b, z) and U(a, b, z). Kummer's function of the first kind M is a generalized hypergeometric series introduced in (Kummer 1837), given by: M ( a , b , z ) = ∑ n = 0 ∞ a ( n ) z n b ( n ) n ! 
= 1 F 1 ( a ; b ; z ) , {\displaystyle M(a,b,z)=\sum _{n=0}^{\infty }{\frac {a^{(n)}z^{n}}{b^{(n)}n!}}={}_{1}F_{1}(a;b;z),} where: a ( 0 ) = 1 , {\displaystyle a^{(0)}=1,} a ( n ) = a ( a + 1 ) ( a + 2 ) ⋯ ( a + n − 1 ) , {\displaystyle a^{(n)}=a(a+1)(a+2)\cdots (a+n-1)\,,} is the rising factorial. Another common notation for this solution is Φ(a, b, z). Considered as a function of a, b, or z with the other two held constant, this defines an entire function of a or z, except when b = 0, −1, −2, ... As a function of b it is analytic except for poles at the non-positive integers. Some values of a and b yield solutions that can be expressed in terms of other known functions. See #Special cases. When a is a non-positive integer, then Kummer's function (if it is defined) is a generalized Laguerre polynomial. Just as the confluent differential equation is a limit of the hypergeometric differential equation as the singular point at 1 is moved towards the singular point at ∞, the confluent hypergeometric function can be given as a limit of the hypergeometric function M ( a , c , z ) = lim b → ∞ 2 F 1 ( a , b ; c ; z / b ) {\displaystyle M(a,c,z)=\lim _{b\to \infty }{}_{2}F_{1}(a,b;c;z/b)} and many of the properties of the confluent hypergeometric function are limiting cases of properties of the hypergeometric function. Since Kummer's equation is second order there must be another, independent, solution. The indicial equation of the method of Frobenius tells us that the lowest power of a power series solution to the Kummer equation is either 0 or 1 − b. 
If we let w(z) be w ( z ) = z 1 − b v ( z ) {\displaystyle w(z)=z^{1-b}v(z)} then the differential equation gives z 2 − b d 2 v d z 2 + 2 ( 1 − b ) z 1 − b d v d z − b ( 1 − b ) z − b v + ( b − z ) [ z 1 − b d v d z + ( 1 − b ) z − b v ] − a z 1 − b v = 0 {\displaystyle z^{2-b}{\frac {d^{2}v}{dz^{2}}}+2(1-b)z^{1-b}{\frac {dv}{dz}}-b(1-b)z^{-b}v+(b-z)\left[z^{1-b}{\frac {dv}{dz}}+(1-b)z^{-b}v\right]-az^{1-b}v=0} which, upon dividing out z1−b and simplifying, becomes z d 2 v d z 2 + ( 2 − b − z ) d v d z − ( a + 1 − b ) v = 0. {\displaystyle z{\frac {d^{2}v}{dz^{2}}}+(2-b-z){\frac {dv}{dz}}-(a+1-b)v=0.} This means that z1−bM(a + 1 − b, 2 − b, z) is a solution so long as b is not an integer greater than 1, just as M(a, b, z) is a solution so long as b is not an integer less than 1. We can also use the Tricomi confluent hypergeometric function U(a, b, z) introduced by Francesco Tricomi (1947), and sometimes denoted by Ψ(a; b; z). It is a combination of the above two solutions, defined by U ( a , b , z ) = Γ ( 1 − b ) Γ ( a + 1 − b ) M ( a , b , z ) + Γ ( b − 1 ) Γ ( a ) z 1 − b M ( a + 1 − b , 2 − b , z ) . {\displaystyle U(a,b,z)={\frac {\Gamma (1-b)}{\Gamma (a+1-b)}}M(a,b,z)+{\frac {\Gamma (b-1)}{\Gamma (a)}}z^{1-b}M(a+1-b,2-b,z).} Although this expression is undefined for integer b, it has the advantage that it can be extended to any integer b by continuity. Unlike Kummer's function which is an entire function of z, U(z) usually has a singularity at zero. For example, if b = 0 and a ≠ 0 then Γ(a+1)U(a, b, z) − 1 is asymptotic to az ln z as z goes to zero. But see #Special cases for some examples where it is an entire function (polynomial). Note that the solution z1−bU(a + 1 − b, 2 − b, z) to Kummer's equation is the same as the solution U(a, b, z), see #Kummer's transformation. 
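For non-integer b, the defining combination for U can be compared directly against a library implementation; a sketch assuming SciPy, whose `hyperu` computes the Tricomi function:

```python
from scipy.special import gamma, hyp1f1, hyperu

a, b, z = 0.7, 0.4, 1.5     # non-integer b, so both M terms are defined
u_direct = hyperu(a, b, z)
# U(a,b,z) as the stated combination of the two Kummer solutions.
u_combo = (gamma(1 - b) / gamma(a + 1 - b) * hyp1f1(a, b, z)
           + gamma(b - 1) / gamma(a) * z ** (1 - b) * hyp1f1(a + 1 - b, 2 - b, z))
u_err = abs(u_direct - u_combo)
```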
For most combinations of real or complex a and b, the functions M(a, b, z) and U(a, b, z) are independent, and if b is a non-positive integer, so M(a, b, z) doesn't exist, then we may be able to use z1−bM(a+1−b, 2−b, z) as a second solution. But if a is a non-positive integer and b is not a non-positive integer, then U(z) is a multiple of M(z). In that case as well, z1−bM(a+1−b, 2−b, z) can be used as a second solution if it exists and is different. But when b is an integer greater than 1, this solution doesn't exist, and if b = 1 then it exists but is a multiple of U(a, b, z) and of M(a, b, z). In those cases a second solution exists of the following form and is valid for any real or complex a and any positive integer b except when a is a positive integer less than b: M ( a , b , z ) ln ⁡ z + z 1 − b ∑ k = 0 ∞ C k z k {\displaystyle M(a,b,z)\ln z+z^{1-b}\sum _{k=0}^{\infty }C_{k}z^{k}} When a = 0 we can alternatively use: ∫ − ∞ z ( − u ) − b e u d u . {\displaystyle \int _{-\infty }^{z}(-u)^{-b}e^{u}\mathrm {d} u.} When b = 1 this is the exponential integral E1(−z). A similar problem occurs when a−b is a negative integer and b is an integer less than 1. In this case M(a, b, z) doesn't exist, and U(a, b, z) is a multiple of z1−bM(a+1−b, 2−b, z). A second solution is then of the form: z 1 − b M ( a + 1 − b , 2 − b , z ) ln ⁡ z + ∑ k = 0 ∞ C k z k {\displaystyle z^{1-b}M(a+1-b,2-b,z)\ln z+\sum _{k=0}^{\infty }C_{k}z^{k}} === Other equations === Confluent Hypergeometric Functions can be used to solve the Extended Confluent Hypergeometric Equation whose general form is given as: z d 2 w d z 2 + ( b − z ) d w d z − ( ∑ m = 0 M a m z m ) w = 0 {\displaystyle z{\frac {d^{2}w}{dz^{2}}}+(b-z){\frac {dw}{dz}}-\left(\sum _{m=0}^{M}a_{m}z^{m}\right)w=0} Note that for M = 0 or when the summation involves just one term, it reduces to the conventional Confluent Hypergeometric Equation. 
Thus Confluent Hypergeometric Functions can be used to solve "most" second-order ordinary differential equations whose variable coefficients are all linear functions of z, because they can be transformed to the Extended Confluent Hypergeometric Equation. Consider the equation: ( A + B z ) d 2 w d z 2 + ( C + D z ) d w d z + ( E + F z ) w = 0 {\displaystyle (A+Bz){\frac {d^{2}w}{dz^{2}}}+(C+Dz){\frac {dw}{dz}}+(E+Fz)w=0} First we move the regular singular point to 0 by using the substitution of A + Bz ↦ z, which converts the equation to: z d 2 w d z 2 + ( C + D z ) d w d z + ( E + F z ) w = 0 {\displaystyle z{\frac {d^{2}w}{dz^{2}}}+(C+Dz){\frac {dw}{dz}}+(E+Fz)w=0} with new values of C, D, E, and F. Next we use the substitution: z ↦ 1 D 2 − 4 F z {\displaystyle z\mapsto {\frac {1}{\sqrt {D^{2}-4F}}}z} and multiply the equation by the same factor, obtaining: z d 2 w d z 2 + ( C + D D 2 − 4 F z ) d w d z + ( E D 2 − 4 F + F D 2 − 4 F z ) w = 0 {\displaystyle z{\frac {d^{2}w}{dz^{2}}}+\left(C+{\frac {D}{\sqrt {D^{2}-4F}}}z\right){\frac {dw}{dz}}+\left({\frac {E}{\sqrt {D^{2}-4F}}}+{\frac {F}{D^{2}-4F}}z\right)w=0} whose solution is exp ⁡ ( − ( 1 + D D 2 − 4 F ) z 2 ) w ( z ) , {\displaystyle \exp \left(-\left(1+{\frac {D}{\sqrt {D^{2}-4F}}}\right){\frac {z}{2}}\right)w(z),} where w(z) is a solution to Kummer's equation with a = ( 1 + D D 2 − 4 F ) C 2 − E D 2 − 4 F , b = C . {\displaystyle a=\left(1+{\frac {D}{\sqrt {D^{2}-4F}}}\right){\frac {C}{2}}-{\frac {E}{\sqrt {D^{2}-4F}}},\qquad b=C.} Note that the square root may give an imaginary or complex number. If it is zero, another solution must be used, namely exp ⁡ ( − 1 2 D z ) w ( z ) , {\displaystyle \exp \left(-{\tfrac {1}{2}}Dz\right)w(z),} where w(z) is a confluent hypergeometric limit function satisfying z w ″ ( z ) + C w ′ ( z ) + ( E − 1 2 C D ) w ( z ) = 0. 
{\displaystyle zw''(z)+Cw'(z)+\left(E-{\tfrac {1}{2}}CD\right)w(z)=0.} As noted below, even the Bessel equation can be solved using confluent hypergeometric functions. == Integral representations == If Re b > Re a > 0, M(a, b, z) can be represented as an integral M ( a , b , z ) = Γ ( b ) Γ ( a ) Γ ( b − a ) ∫ 0 1 e z u u a − 1 ( 1 − u ) b − a − 1 d u . {\displaystyle M(a,b,z)={\frac {\Gamma (b)}{\Gamma (a)\Gamma (b-a)}}\int _{0}^{1}e^{zu}u^{a-1}(1-u)^{b-a-1}\,du.} thus M(a, a+b, it) is the characteristic function of the beta distribution. For a with positive real part U can be obtained by the Laplace integral U ( a , b , z ) = 1 Γ ( a ) ∫ 0 ∞ e − z t t a − 1 ( 1 + t ) b − a − 1 d t , ( Re ⁡ a > 0 ) {\displaystyle U(a,b,z)={\frac {1}{\Gamma (a)}}\int _{0}^{\infty }e^{-zt}t^{a-1}(1+t)^{b-a-1}\,dt,\quad (\operatorname {Re} \ a>0)} The integral defines a solution in the right half-plane Re z > 0. They can also be represented as Barnes integrals M ( a , b , z ) = 1 2 π i Γ ( b ) Γ ( a ) ∫ − i ∞ i ∞ Γ ( − s ) Γ ( a + s ) Γ ( b + s ) ( − z ) s d s {\displaystyle M(a,b,z)={\frac {1}{2\pi i}}{\frac {\Gamma (b)}{\Gamma (a)}}\int _{-i\infty }^{i\infty }{\frac {\Gamma (-s)\Gamma (a+s)}{\Gamma (b+s)}}(-z)^{s}ds} where the contour passes to one side of the poles of Γ(−s) and to the other side of the poles of Γ(a + s). == Asymptotic behavior == If a solution to Kummer's equation is asymptotic to a power of z as z → ∞, then the power must be −a. This is in fact the case for Tricomi's solution U(a, b, z). Its asymptotic behavior as z → ∞ can be deduced from the integral representations. 
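Both integral representations above, and the leading power-law behavior of U just stated, are easy to spot-check against a library implementation. A sketch with SciPy (assumed available; the parameter values are arbitrary choices satisfying Re b > Re a > 0):

```python
import numpy as np
from scipy import special
from scipy.integrate import quad

a, b, z = 0.7, 2.3, 1.5  # arbitrary values with b > a > 0

# Euler-type integral for M(a, b, z)
coef = special.gamma(b) / (special.gamma(a) * special.gamma(b - a))
M_int = coef * quad(lambda u: np.exp(z*u) * u**(a-1) * (1-u)**(b-a-1), 0, 1)[0]
M_err = abs(M_int - special.hyp1f1(a, b, z))

# Laplace integral for U(a, b, z), valid for Re a > 0
U_int = quad(lambda t: np.exp(-z*t) * t**(a-1) * (1+t)**(b-a-1),
             0, np.inf)[0] / special.gamma(a)
U_err = abs(U_int - special.hyperu(a, b, z))

# leading asymptotics: x^a * U(a, b, x) -> 1 as x -> infinity
asym = [x**a * special.hyperu(a, b, x) for x in (10.0, 100.0, 1000.0)]
```

The quadrature reproduces both functions to high accuracy, and the scaled values of U approach 1 as x grows.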
If z = x ∈ R, then making a change of variables in the integral followed by expanding the binomial series and integrating it formally term by term gives rise to an asymptotic series expansion, valid as x → ∞: U ( a , b , x ) ∼ x − a 2 F 0 ( a , a − b + 1 ; ; − 1 x ) , {\displaystyle U(a,b,x)\sim x^{-a}\,_{2}F_{0}\left(a,a-b+1;\,;-{\frac {1}{x}}\right),} where 2 F 0 ( ⋅ , ⋅ ; ; − 1 / x ) {\displaystyle _{2}F_{0}(\cdot ,\cdot ;;-1/x)} is a generalized hypergeometric series with 1 as leading term, which generally converges nowhere, but exists as a formal power series in 1/x. This asymptotic expansion is also valid for complex z instead of real x, with |arg z| < 3π/2. The asymptotic behavior of Kummer's solution for large |z| is: M ( a , b , z ) ∼ Γ ( b ) ( e z z a − b Γ ( a ) + ( − z ) − a Γ ( b − a ) ) {\displaystyle M(a,b,z)\sim \Gamma (b)\left({\frac {e^{z}z^{a-b}}{\Gamma (a)}}+{\frac {(-z)^{-a}}{\Gamma (b-a)}}\right)} The powers of z are taken using −3π/2 < arg z ≤ π/2. The first term is not needed when Γ(b − a) is finite, that is when b − a is not a non-positive integer and the real part of z goes to negative infinity, whereas the second term is not needed when Γ(a) is finite, that is, when a is a not a non-positive integer and the real part of z goes to positive infinity. There is always some solution to Kummer's equation asymptotic to ezza−b as z → −∞. Usually this will be a combination of both M(a, b, z) and U(a, b, z) but can also be expressed as ez (−1)a-b U(b − a, b, −z). == Relations == There are many relations between Kummer functions for various arguments and their derivatives. This section gives a few typical examples. === Contiguous relations === Given M(a, b, z), the four functions M(a ± 1, b, z), M(a, b ± 1, z) are called contiguous to M(a, b, z). The function M(a, b, z) can be written as a linear combination of any two of its contiguous functions, with rational coefficients in terms of a, b, and z. 
This gives ( 4 2 ) = 6 {\displaystyle {\tbinom {4}{2}}=6} relations, given by identifying any two lines on the right hand side of z d M d z = z a b M ( a + , b + ) = a ( M ( a + ) − M ) = ( b − 1 ) ( M ( b − ) − M ) = ( b − a ) M ( a − ) + ( a − b + z ) M = z ( a − b ) M ( b + ) / b + z M {\displaystyle {\begin{aligned}z{\frac {dM}{dz}}=z{\frac {a}{b}}M(a+,b+)&=a(M(a+)-M)\\&=(b-1)(M(b-)-M)\\&=(b-a)M(a-)+(a-b+z)M\\&=z(a-b)M(b+)/b+zM\\\end{aligned}}} In the notation above, M = M(a, b, z), M(a+) = M(a + 1, b, z), and so on. Repeatedly applying these relations gives a linear relation between any three functions of the form M(a + m, b + n, z) (and their higher derivatives), where m, n are integers. There are similar relations for U. === Kummer's transformation === Kummer's functions are also related by Kummer's transformations: M ( a , b , z ) = e z M ( b − a , b , − z ) {\displaystyle M(a,b,z)=e^{z}\,M(b-a,b,-z)} U ( a , b , z ) = z 1 − b U ( 1 + a − b , 2 − b , z ) {\displaystyle U(a,b,z)=z^{1-b}U\left(1+a-b,2-b,z\right)} . == Multiplication theorem == The following multiplication theorems hold true: U ( a , b , z ) = e ( 1 − t ) z ∑ i = 0 ( t − 1 ) i z i i ! U ( a , b + i , z t ) = e ( 1 − t ) z t b − 1 ∑ i = 0 ( 1 − 1 t ) i i ! U ( a − i , b − i , z t ) . {\displaystyle {\begin{aligned}U(a,b,z)&=e^{(1-t)z}\sum _{i=0}{\frac {(t-1)^{i}z^{i}}{i!}}U(a,b+i,zt)\\&=e^{(1-t)z}t^{b-1}\sum _{i=0}{\frac {\left(1-{\frac {1}{t}}\right)^{i}}{i!}}U(a-i,b-i,zt).\end{aligned}}} == Connection with Laguerre polynomials and similar representations == In terms of Laguerre polynomials, Kummer's functions have several expansions, for example M ( a , b , x y x − 1 ) = ( 1 − x ) a ⋅ ∑ n a ( n ) b ( n ) L n ( b − 1 ) ( y ) x n {\displaystyle M\left(a,b,{\frac {xy}{x-1}}\right)=(1-x)^{a}\cdot \sum _{n}{\frac {a^{(n)}}{b^{(n)}}}L_{n}^{(b-1)}(y)x^{n}} (Erdélyi et al.
1953, 6.12) or M ( a , b , z ) = Γ ( 1 − a ) ⋅ Γ ( b ) Γ ( b − a ) ⋅ L − a ( b − 1 ) ( z ) {\displaystyle M\left(a,\,b,\,z\right)={\frac {\Gamma \left(1-a\right)\cdot \Gamma \left(b\right)}{\Gamma \left(b-a\right)}}\cdot L_{-a}^{(b-1)}\left(z\right)} [1] == Special cases == Functions that can be expressed as special cases of the confluent hypergeometric function include: Some elementary functions where the left-hand side is not defined when b is a non-positive integer, but the right-hand side is still a solution of the corresponding Kummer equation: M ( 0 , b , z ) = 1 {\displaystyle M(0,b,z)=1} U ( 0 , c , z ) = 1 {\displaystyle U(0,c,z)=1} M ( b , b , z ) = e z {\displaystyle M(b,b,z)=e^{z}} U ( a , a , z ) = e z ∫ z ∞ u − a e − u d u {\displaystyle U(a,a,z)=e^{z}\int _{z}^{\infty }u^{-a}e^{-u}du} (a polynomial if a is a non-positive integer) U ( 1 , b , z ) Γ ( b − 1 ) + M ( 1 , b , z ) Γ ( b ) = z 1 − b e z {\displaystyle {\frac {U(1,b,z)}{\Gamma (b-1)}}+{\frac {M(1,b,z)}{\Gamma (b)}}=z^{1-b}e^{z}} M ( n , b , z ) {\displaystyle M(n,b,z)} for non-positive integer n is a generalized Laguerre polynomial. U ( n , c , z ) {\displaystyle U(n,c,z)} for non-positive integer n is a multiple of a generalized Laguerre polynomial, equal to Γ ( 1 − c ) Γ ( n + 1 − c ) M ( n , c , z ) {\displaystyle {\tfrac {\Gamma (1-c)}{\Gamma (n+1-c)}}M(n,c,z)} when the latter exists. U ( c − n , c , z ) {\displaystyle U(c-n,c,z)} when n is a positive integer is a closed form with powers of z, equal to Γ ( c − 1 ) Γ ( c − n ) z 1 − c M ( 1 − n , 2 − c , z ) {\displaystyle {\tfrac {\Gamma (c-1)}{\Gamma (c-n)}}z^{1-c}M(1-n,2-c,z)} when the latter exists. U ( a , a + 1 , z ) = z − a {\displaystyle U(a,a+1,z)=z^{-a}} U ( − n , − 2 n , z ) {\displaystyle U(-n,-2n,z)} for non-negative integer n is a Bessel polynomial (see lower down). M ( 1 , 2 , z ) = ( e z − 1 ) / z , M ( 1 , 3 , z ) = 2 ! ( e z − 1 − z ) / z 2 {\displaystyle M(1,2,z)=(e^{z}-1)/z,\ \ M(1,3,z)=2!(e^{z}-1-z)/z^{2}} etc. 
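Several of the special cases above, together with Kummer's first transformation from the Relations section, can be spot-checked against a library implementation. A sketch with SciPy (assumed available; all parameter values are arbitrary):

```python
import numpy as np
from scipy import special

z = 0.8

checks = [
    abs(special.hyp1f1(2.5, 2.5, z) - np.exp(z)),          # M(b, b, z) = e^z
    abs(special.hyp1f1(1, 2, z) - (np.exp(z) - 1) / z),    # M(1, 2, z) = (e^z - 1)/z
    abs(special.hyperu(0.7, 1.7, z) - z**(-0.7)),          # U(a, a+1, z) = z^(-a)
    # Kummer's transformation M(a, b, z) = e^z M(b-a, b, -z)
    abs(special.hyp1f1(1.3, 2.7, z) - np.exp(z) * special.hyp1f1(1.4, 2.7, -z)),
]

# M(-n, alpha+1, z) is proportional to the generalized Laguerre polynomial L_n^(alpha)(z):
# L_n^(alpha)(z) = [(alpha+1)_n / n!] * M(-n, alpha+1, z)
n, alpha = 3, 0.5
lag = special.eval_genlaguerre(n, alpha, z)
m_lag = (special.gamma(alpha + 1 + n)
         / (special.gamma(alpha + 1) * special.factorial(n))
         * special.hyp1f1(-n, alpha + 1, z))
checks.append(abs(lag - m_lag))

max_err = max(checks)
```

All identities hold to roughly machine precision at the sampled point.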
Using the contiguous relation a M ( a + ) = ( a + z ) M + z ( a − b ) M ( b + ) / b {\displaystyle aM(a+)=(a+z)M+z(a-b)M(b+)/b} we get, for example, M ( 2 , 1 , z ) = ( 1 + z ) e z . {\displaystyle M(2,1,z)=(1+z)e^{z}.} Bateman's function Bessel functions and many related functions such as Airy functions, Kelvin functions, Hankel functions. For example, in the special case b = 2a the function reduces to a Bessel function: 1 F 1 ( a , 2 a , x ) = e x / 2 0 F 1 ( ; a + 1 2 ; x 2 16 ) = e x / 2 ( x 4 ) 1 / 2 − a Γ ( a + 1 2 ) I a − 1 / 2 ( x 2 ) . {\displaystyle {}_{1}F_{1}(a,2a,x)=e^{x/2}\,{}_{0}F_{1}\left(;a+{\tfrac {1}{2}};{\tfrac {x^{2}}{16}}\right)=e^{x/2}\left({\tfrac {x}{4}}\right)^{1/2-a}\Gamma \left(a+{\tfrac {1}{2}}\right)I_{a-1/2}\left({\tfrac {x}{2}}\right).} This identity is sometimes also referred to as Kummer's second transformation. Similarly U ( a , 2 a , x ) = e x / 2 π x 1 / 2 − a K a − 1 / 2 ( x / 2 ) , {\displaystyle U(a,2a,x)={\frac {e^{x/2}}{\sqrt {\pi }}}x^{1/2-a}K_{a-1/2}(x/2),} When a is a non-positive integer, this equals 2−aθ−a(x/2) where θ is a Bessel polynomial. The error function can be expressed as e r f ( x ) = 2 π ∫ 0 x e − t 2 d t = 2 x π 1 F 1 ( 1 2 , 3 2 , − x 2 ) . 
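The Bessel reductions for b = 2a quoted above are also easy to confirm numerically (SciPy assumed; the values of a and x are arbitrary):

```python
import numpy as np
from scipy import special

a, x = 0.8, 1.3

# 1F1(a, 2a, x) = e^(x/2) (x/4)^(1/2-a) Gamma(a+1/2) I_{a-1/2}(x/2)
m_lhs = special.hyp1f1(a, 2*a, x)
m_rhs = (np.exp(x/2) * (x/4)**(0.5 - a)
         * special.gamma(a + 0.5) * special.iv(a - 0.5, x/2))

# U(a, 2a, x) = e^(x/2) / sqrt(pi) * x^(1/2-a) K_{a-1/2}(x/2)
u_lhs = special.hyperu(a, 2*a, x)
u_rhs = np.exp(x/2) / np.sqrt(np.pi) * x**(0.5 - a) * special.kv(a - 0.5, x/2)
```

Both sides agree to machine precision; for a = 1/2 the first identity reduces to the familiar 1F1(1/2, 1, x) = e^(x/2) I_0(x/2).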
{\displaystyle \mathrm {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}dt={\frac {2x}{\sqrt {\pi }}}\ {}_{1}F_{1}\left({\tfrac {1}{2}},{\tfrac {3}{2}},-x^{2}\right).} Coulomb wave function Cunningham functions Exponential integral and related functions such as the sine integral, logarithmic integral Hermite polynomials Incomplete gamma function Laguerre polynomials Parabolic cylinder function (or Weber function) Poisson–Charlier function Toronto functions Whittaker functions Mκ,μ(z), Wκ,μ(z) are solutions of Whittaker's equation that can be expressed in terms of Kummer functions M and U by M κ , μ ( z ) = e − z 2 z μ + 1 2 M ( μ − κ + 1 2 , 1 + 2 μ ; z ) {\displaystyle M_{\kappa ,\mu }(z)=e^{-{\tfrac {z}{2}}}z^{\mu +{\tfrac {1}{2}}}M\left(\mu -\kappa +{\tfrac {1}{2}},1+2\mu ;z\right)} W κ , μ ( z ) = e − z 2 z μ + 1 2 U ( μ − κ + 1 2 , 1 + 2 μ ; z ) {\displaystyle W_{\kappa ,\mu }(z)=e^{-{\tfrac {z}{2}}}z^{\mu +{\tfrac {1}{2}}}U\left(\mu -\kappa +{\tfrac {1}{2}},1+2\mu ;z\right)} The general p-th raw moment (p not necessarily an integer) can be expressed as E ⁡ [ | N ( μ , σ 2 ) | p ] = ( 2 σ 2 ) p / 2 Γ ( 1 + p 2 ) π 1 F 1 ( − p 2 , 1 2 , − μ 2 2 σ 2 ) E ⁡ [ N ( μ , σ 2 ) p ] = ( − 2 σ 2 ) p / 2 U ( − p 2 , 1 2 , − μ 2 2 σ 2 ) {\displaystyle {\begin{aligned}\operatorname {E} \left[\left|N\left(\mu ,\sigma ^{2}\right)\right|^{p}\right]&={\frac {\left(2\sigma ^{2}\right)^{p/2}\Gamma \left({\tfrac {1+p}{2}}\right)}{\sqrt {\pi }}}\ {}_{1}F_{1}\left(-{\tfrac {p}{2}},{\tfrac {1}{2}},-{\tfrac {\mu ^{2}}{2\sigma ^{2}}}\right)\\\operatorname {E} \left[N\left(\mu ,\sigma ^{2}\right)^{p}\right]&=\left(-2\sigma ^{2}\right)^{p/2}U\left(-{\tfrac {p}{2}},{\tfrac {1}{2}},-{\tfrac {\mu ^{2}}{2\sigma ^{2}}}\right)\end{aligned}}} In the second formula the function's second branch cut can be chosen by multiplying with (−1)p. 
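The error-function identity and the Gaussian moment formula can both be confirmed numerically (SciPy assumed; x, mu, sigma, and p are arbitrary test values):

```python
import numpy as np
from scipy import special
from scipy.integrate import quad

x = 0.9
erf_err = abs(special.erf(x)
              - (2*x/np.sqrt(np.pi)) * special.hyp1f1(0.5, 1.5, -x**2))

# p-th absolute moment of N(mu, sigma^2) via 1F1, against direct quadrature
mu, sigma, p = 0.4, 1.2, 1.7
moment_hyp = ((2*sigma**2)**(p/2) * special.gamma((1 + p)/2) / np.sqrt(np.pi)
              * special.hyp1f1(-p/2, 0.5, -mu**2 / (2*sigma**2)))
pdf = lambda t: np.exp(-(t - mu)**2 / (2*sigma**2)) / (sigma * np.sqrt(2*np.pi))
# split at 0 so the |t|^p kink lies at an integration endpoint
moment_num = (quad(lambda t: (-t)**p * pdf(t), -np.inf, 0)[0]
              + quad(lambda t: t**p * pdf(t), 0, np.inf)[0])
```

The hypergeometric expression matches the brute-force integral of |t|^p against the normal density.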
== Application to continued fractions == By applying a limiting argument to Gauss's continued fraction it can be shown that M ( a + 1 , b + 1 , z ) M ( a , b , z ) = 1 1 − b − a b ( b + 1 ) z 1 + a + 1 ( b + 1 ) ( b + 2 ) z 1 − b − a + 1 ( b + 2 ) ( b + 3 ) z 1 + a + 2 ( b + 3 ) ( b + 4 ) z 1 − ⋱ {\displaystyle {\frac {M(a+1,b+1,z)}{M(a,b,z)}}={\cfrac {1}{1-{\cfrac {\displaystyle {\frac {b-a}{b(b+1)}}z}{1+{\cfrac {\displaystyle {\frac {a+1}{(b+1)(b+2)}}z}{1-{\cfrac {\displaystyle {\frac {b-a+1}{(b+2)(b+3)}}z}{1+{\cfrac {\displaystyle {\frac {a+2}{(b+3)(b+4)}}z}{1-\ddots }}}}}}}}}}} and that this continued fraction converges uniformly to a meromorphic function of z in every bounded domain that does not include a pole. == See also == Composite Bézier curve == Notes == == References == Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 13". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 504. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. Chistova, E.A. (2001) [1994], "Confluent hypergeometric function", Encyclopedia of Mathematics, EMS Press Daalhuis, Adri B. Olde (2010), "Confluent hypergeometric function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. Erdélyi, Arthur; Magnus, Wilhelm; Oberhettinger, Fritz & Tricomi, Francesco G. (1953). Higher transcendental functions. Vol. I. New York–Toronto–London: McGraw–Hill Book Company, Inc. MR 0058756. Kummer, Ernst Eduard (1837). "De integralibus quibusdam definitis et seriebus infinitis". 
Journal für die reine und angewandte Mathematik (in Latin). 1837 (17): 228–242. doi:10.1515/crll.1837.17.228. ISSN 0075-4102. S2CID 121351583. Slater, Lucy Joan (1960). Confluent hypergeometric functions. Cambridge, UK: Cambridge University Press. MR 0107026. Tricomi, Francesco G. (1947). "Sulle funzioni ipergeometriche confluenti". Annali di Matematica Pura ed Applicata. Series 4 (in Italian). 26: 141–175. doi:10.1007/bf02415375. ISSN 0003-4622. MR 0029451. S2CID 119860549. Tricomi, Francesco G. (1954). Funzioni ipergeometriche confluenti. Consiglio Nazionale Delle Ricerche Monografie Matematiche (in Italian). Vol. 1. Rome: Edizioni cremonese. ISBN 978-88-7083-449-9. MR 0076936. Oldham, K.B.; Myland, J.; Spanier, J. (2010). An Atlas of Functions: with Equator, the Atlas Function Calculator. Springer New York. ISBN 978-0-387-48807-3. Retrieved 2017-08-23. == External links == Confluent Hypergeometric Functions in NIST Digital Library of Mathematical Functions Kummer hypergeometric function on the Wolfram Functions site Tricomi hypergeometric function on the Wolfram Functions site
Wikipedia/Confluent_hypergeometric_function
This article relates the Schrödinger equation with the path integral formulation of quantum mechanics using a simple nonrelativistic one-dimensional single-particle Hamiltonian composed of kinetic and potential energy. == Background == === Schrödinger's equation === Schrödinger's equation, in bra–ket notation, is i ℏ d d t | ψ ⟩ = H ^ | ψ ⟩ {\displaystyle i\hbar {\frac {d}{dt}}\left|\psi \right\rangle ={\hat {H}}\left|\psi \right\rangle } where H ^ {\displaystyle {\hat {H}}} is the Hamiltonian operator. The Hamiltonian operator can be written H ^ = p ^ 2 2 m + V ( q ^ ) {\displaystyle {\hat {H}}={\frac {{\hat {p}}^{2}}{2m}}+V({\hat {q}})} where V ( q ^ ) {\displaystyle V({\hat {q}})} is the potential energy, m is the mass and we have assumed for simplicity that there is only one spatial dimension q. The formal solution of the equation is | ψ ( t ) ⟩ = exp ⁡ ( − i ℏ H ^ t ) | q 0 ⟩ ≡ exp ⁡ ( − i ℏ H ^ t ) | 0 ⟩ {\displaystyle \left|\psi (t)\right\rangle =\exp \left(-{\frac {i}{\hbar }}{\hat {H}}t\right)\left|q_{0}\right\rangle \equiv \exp \left(-{\frac {i}{\hbar }}{\hat {H}}t\right)|0\rangle } where we have assumed the initial state is a free-particle spatial state | q 0 ⟩ {\displaystyle \left|q_{0}\right\rangle } . The transition probability amplitude for a transition from an initial state | 0 ⟩ {\displaystyle \left|0\right\rangle } to a final free-particle spatial state | F ⟩ {\displaystyle |F\rangle } at time T is ⟨ F | ψ ( T ) ⟩ = ⟨ F | exp ⁡ ( − i ℏ H ^ T ) | 0 ⟩ . {\displaystyle \langle F|\psi (T)\rangle =\left\langle F{\Biggr |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}T\right){\Biggl |}0\right\rangle .} === Path integral formulation === The path integral formulation states that the transition amplitude is simply the integral of the quantity exp ⁡ ( i ℏ S ) {\displaystyle \exp \left({\frac {i}{\hbar }}S\right)} over all possible paths from the initial state to the final state. Here S is the classical action. 
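The weight assigned to each path depends only on its classical action, which for a broken-line (time-sliced) path is just a finite sum. The sketch below (NumPy assumed; the harmonic potential and the sample path q(t) = sin t are arbitrary illustrative choices) compares the discretized action with the exact continuum value:

```python
import numpy as np

mass, T, N = 1.0, 1.0, 2000        # mass, total time, number of time slices
dt = T / N
t = np.linspace(0.0, T, N + 1)
q = np.sin(t)                      # one sample path q(t) = sin t
V = lambda x: 0.5 * x**2           # harmonic potential (illustrative choice)

# broken-line approximation of S = integral of (m qdot^2 / 2 - V(q)) dt
qdot = np.diff(q) / dt
S_disc = np.sum(dt * (0.5 * mass * qdot**2 - V(q[:-1])))

# exact action for this path: integral of cos(2t)/2 over [0, 1] = sin(2)/4
S_exact = np.sin(2.0) / 4.0
```

As the number of slices grows, the discretized action converges to the continuum integral, which is the limit implicit in the path-integral measure.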
The reformulation of this transition amplitude, originally due to Dirac and conceptualized by Feynman, forms the basis of the path integral formulation. == From Schrödinger's equation to the path integral formulation == The following derivation makes use of the Trotter product formula, which states that for self-adjoint operators A and B (satisfying certain technical conditions), we have e i ( A + B ) ψ = lim N → ∞ ( e i A / N e i B / N ) N ψ , {\displaystyle e^{i(A+B)}\psi =\lim _{N\to \infty }\left(e^{iA/N}e^{iB/N}\right)^{N}\psi ,} even if A and B do not commute. We can divide the time interval [0, T] into N segments of length δ t = T N . {\displaystyle \delta t={\frac {T}{N}}.} The transition amplitude can then be written ⟨ F | exp ⁡ ( − i ℏ H ^ T ) | 0 ⟩ = ⟨ F | exp ⁡ ( − i ℏ H ^ δ t ) exp ⁡ ( − i ℏ H ^ δ t ) ⋯ exp ⁡ ( − i ℏ H ^ δ t ) | 0 ⟩ . {\displaystyle \left\langle F{\biggr |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}T\right){\biggl |}0\right\rangle =\left\langle F{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right)\exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right)\cdots \exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right){\bigg |}0\right\rangle .} Although the kinetic energy and potential energy operators do not commute, the Trotter product formula, cited above, says that over each small time-interval, we can ignore this noncommutativity and write exp ⁡ ( − i ℏ H ^ δ t ) ≈ exp ⁡ ( − i ℏ p ^ 2 2 m δ t ) exp ⁡ ( − i ℏ V ( q j ) δ t ) . {\displaystyle \exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right)\approx \exp \left({-{i \over \hbar }{{\hat {p}}^{2} \over 2m}\delta t}\right)\exp \left({-{i \over \hbar }V\left(q_{j}\right)\delta t}\right).} The equality of the above can be verified to hold up to first order in δt by expanding the exponential as power series. For notational simplicity, we delay making this substitution for the moment. 
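The Trotter product formula can be illustrated with finite Hermitian matrices standing in for the kinetic and potential terms (NumPy/SciPy assumed; the random matrices are arbitrary stand-ins). The error of the split approximation decays roughly like 1/N:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)); A = (X + X.T) / 4   # stand-in "kinetic" term
Y = rng.standard_normal((4, 4)); B = (Y + Y.T) / 4   # stand-in "potential" term

exact = expm(1j * (A + B))

def trotter(N):
    # (e^{iA/N} e^{iB/N})^N, ignoring the noncommutativity within each slice
    step = expm(1j * A / N) @ expm(1j * B / N)
    return np.linalg.matrix_power(step, N)

err_10 = np.linalg.norm(trotter(10) - exact)
err_100 = np.linalg.norm(trotter(100) - exact)
```

Increasing N by a factor of 10 shrinks the error by roughly the same factor, consistent with the first-order nature of the splitting.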
We can insert the identity matrix I = ∫ d q | q ⟩ ⟨ q | {\displaystyle I=\int dq\left|q\right\rangle \left\langle q\right|} N − 1 times between the exponentials to yield ⟨ F | exp ⁡ ( − i ℏ H ^ T ) | 0 ⟩ = ( ∏ j = 1 N − 1 ∫ d q j ) ⟨ F | exp ⁡ ( − i ℏ H ^ δ t ) | q N − 1 ⟩ ⟨ q N − 1 | exp ⁡ ( − i ℏ H ^ δ t ) | q N − 2 ⟩ ⋯ ⟨ q 1 | exp ⁡ ( − i ℏ H ^ δ t ) | 0 ⟩ . {\displaystyle \left\langle F{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}T\right){\bigg |}0\right\rangle =\left(\prod _{j=1}^{N-1}\int dq_{j}\right)\left\langle F{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right){\bigg |}q_{N-1}\right\rangle \left\langle q_{N-1}{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right){\bigg |}q_{N-2}\right\rangle \cdots \left\langle q_{1}{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right){\bigg |}0\right\rangle .} We now implement the substitution associated to the Trotter product formula, so that we have, effectively ⟨ q j + 1 | exp ⁡ ( − i ℏ H ^ δ t ) | q j ⟩ = ⟨ q j + 1 | exp ⁡ ( − i ℏ p ^ 2 2 m δ t ) exp ⁡ ( − i ℏ V ( q j ) δ t ) | q j ⟩ . 
{\displaystyle \left\langle q_{j+1}{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right){\bigg |}q_{j}\right\rangle =\left\langle q_{j+1}{\Bigg |}\exp \left({-{i \over \hbar }{{\hat {p}}^{2} \over 2m}\delta t}\right)\exp \left({-{i \over \hbar }V\left(q_{j}\right)\delta t}\right){\Bigg |}q_{j}\right\rangle .} We can insert the identity I = ∫ d p 2 π | p ⟩ ⟨ p | {\displaystyle I=\int {dp \over 2\pi }\left|p\right\rangle \left\langle p\right|} into the amplitude to yield ⟨ q j + 1 | exp ⁡ ( − i ℏ H ^ δ t ) | q j ⟩ = exp ⁡ ( − i ℏ V ( q j ) δ t ) ∫ d p 2 π ⟨ q j + 1 | exp ⁡ ( − i ℏ p 2 2 m δ t ) | p ⟩ ⟨ p | q j ⟩ = exp ⁡ ( − i ℏ V ( q j ) δ t ) ∫ d p 2 π exp ⁡ ( − i ℏ p 2 2 m δ t ) ⟨ q j + 1 | p ⟩ ⟨ p | q j ⟩ = exp ⁡ ( − i ℏ V ( q j ) δ t ) ∫ d p 2 π ℏ exp ⁡ ( − i ℏ p 2 2 m δ t − i ℏ p ( q j + 1 − q j ) ) {\displaystyle {\begin{aligned}\left\langle q_{j+1}{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right){\bigg |}q_{j}\right\rangle &=\exp \left(-{\frac {i}{\hbar }}V\left(q_{j}\right)\delta t\right)\int {\frac {dp}{2\pi }}\left\langle q_{j+1}{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\frac {p^{2}}{2m}}\delta t\right){\bigg |}p\right\rangle \langle p|q_{j}\rangle \\&=\exp \left(-{\frac {i}{\hbar }}V\left(q_{j}\right)\delta t\right)\int {\frac {dp}{2\pi }}\exp \left(-{\frac {i}{\hbar }}{\frac {p^{2}}{2m}}\delta t\right)\left\langle q_{j+1}|p\right\rangle \left\langle p|q_{j}\right\rangle \\&=\exp \left(-{\frac {i}{\hbar }}V\left(q_{j}\right)\delta t\right)\int {\frac {dp}{2\pi \hbar }}\exp \left(-{\frac {i}{\hbar }}{\frac {p^{2}}{2m}}\delta t-{\frac {i}{\hbar }}p\left(q_{j+1}-q_{j}\right)\right)\end{aligned}}} where we have used the fact that the free particle wave function is ⟨ p | q j ⟩ = 1 ℏ exp ⁡ ( i ℏ p q j ) . 
{\displaystyle \langle p|q_{j}\rangle ={\frac {1}{\sqrt {\hbar }}}\exp \left({\frac {i}{\hbar }}pq_{j}\right).} The integral over p can be performed (see Common integrals in quantum field theory) to obtain ⟨ q j + 1 | exp ⁡ ( − i ℏ H ^ δ t ) | q j ⟩ = − i m 2 π δ t ℏ exp ⁡ [ i ℏ δ t ( 1 2 m ( q j + 1 − q j δ t ) 2 − V ( q j ) ) ] {\displaystyle \left\langle q_{j+1}{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}\delta t\right){\bigg |}q_{j}\right\rangle ={\sqrt {-im \over 2\pi \delta t\hbar }}\exp \left[{i \over \hbar }\delta t\left({1 \over 2}m\left({q_{j+1}-q_{j} \over \delta t}\right)^{2}-V\left(q_{j}\right)\right)\right]} The transition amplitude for the entire time period is ⟨ F | exp ⁡ ( − i ℏ H ^ T ) | 0 ⟩ = ( − i m 2 π δ t ℏ ) N 2 ( ∏ j = 1 N − 1 ∫ d q j ) exp ⁡ [ i ℏ ∑ j = 0 N − 1 δ t ( 1 2 m ( q j + 1 − q j δ t ) 2 − V ( q j ) ) ] . {\displaystyle \left\langle F{\bigg |}\exp \left(-{\frac {i}{\hbar }}{\hat {H}}T\right){\bigg |}0\right\rangle =\left({-im \over 2\pi \delta t\hbar }\right)^{N \over 2}\left(\prod _{j=1}^{N-1}\int dq_{j}\right)\exp \left[{i \over \hbar }\sum _{j=0}^{N-1}\delta t\left({1 \over 2}m\left({q_{j+1}-q_{j} \over \delta t}\right)^{2}-V\left(q_{j}\right)\right)\right].} If we take the limit of large N the transition amplitude reduces to ⟨ F | exp ⁡ ( − i ℏ H ^ T ) | 0 ⟩ = ∫ D q ( t ) exp ⁡ ( i ℏ S ) {\displaystyle \left\langle F{\bigg |}\exp \left({-{i \over \hbar }{\hat {H}}T}\right){\bigg |}0\right\rangle =\int Dq(t)\exp \left({i \over \hbar }S\right)} where S is the classical action given by S = ∫ 0 T d t L ( q ( t ) , q ˙ ( t ) ) {\displaystyle S=\int _{0}^{T}dtL\left(q(t),{\dot {q}}(t)\right)} and L is the classical Lagrangian given by L ( q , q ˙ ) = 1 2 m q ˙ 2 − V ( q ) {\displaystyle L\left(q,{\dot {q}}\right)={1 \over 2}m{\dot {q}}^{2}-V(q)} Any possible path of the particle, going from the initial state to the final state, is approximated as a broken line and included in the measure of the integral ∫ D q ( t ) = lim N → ∞ 
( − i m 2 π δ t ℏ ) N 2 ( ∏ j = 1 N − 1 ∫ d q j ) {\displaystyle \int Dq(t)=\lim _{N\to \infty }\left({\frac {-im}{2\pi \delta t\hbar }}\right)^{\frac {N}{2}}\left(\prod _{j=1}^{N-1}\int dq_{j}\right)} This expression actually defines the manner in which the path integrals are to be taken. The coefficient in front is needed to ensure that the expression has the correct dimensions, but it has no actual relevance in any physical application. This recovers the path integral formulation from Schrödinger's equation. == From path integral formulation to Schrödinger's equation == The path integral reproduces the Schrödinger equation for the initial and final state even when a potential is present. This is easiest to see by taking a path-integral over infinitesimally separated times. Since the time separation is infinitesimal and the cancelling oscillations become severe for large values of ẋ, the path integral has most weight for y close to x. In this case, to lowest order the potential energy is constant, and only the kinetic energy contribution is nontrivial. (This separation of the kinetic and potential energy terms in the exponent is essentially the Trotter product formula.) The exponential of the action is e − i ε V ( x ) e i x ˙ 2 2 ε {\displaystyle e^{-i\varepsilon V(x)}e^{i{\frac {{\dot {x}}^{2}}{2}}\varepsilon }} The first term rotates the phase of ψ(x) locally by an amount proportional to the potential energy. The second term is the free particle propagator, corresponding to i times a diffusion process. To lowest order in ε they are additive; in any case one has with (1): ψ ( y ; t + ε ) ≈ ∫ ψ ( x ; t ) e − i ε V ( x ) e i ( x − y ) 2 2 ε d x . 
{\displaystyle \psi (y;t+\varepsilon )\approx \int \psi (x;t)e^{-i\varepsilon V(x)}e^{\frac {i(x-y)^{2}}{2\varepsilon }}\,dx\,.} As mentioned, the spread in ψ is diffusive from the free particle propagation, with an extra infinitesimal rotation in phase which slowly varies from point to point from the potential: ∂ ψ ∂ t = i ( 1 2 ∇ 2 − V ( x ) ) ψ {\displaystyle {\frac {\partial \psi }{\partial t}}=i\left({\tfrac {1}{2}}\nabla ^{2}-V(x)\right)\psi } and this is the Schrödinger equation. Note that the normalization of the path integral needs to be fixed in exactly the same way as in the free particle case. An arbitrary continuous potential does not affect the normalization, although singular potentials require careful treatment. == See also == Normalized solutions (nonlinear Schrödinger equation) == References ==
Wikipedia/Relation_between_Schrödinger's_equation_and_the_path_integral_formulation_of_quantum_mechanics
The Hubbard–Stratonovich (HS) transformation is an exact mathematical transformation invented by Russian physicist Ruslan L. Stratonovich and popularized by British physicist John Hubbard. It is used to convert a particle theory into its respective field theory by linearizing the density operator in the many-body interaction term of the Hamiltonian and introducing an auxiliary scalar field. It is defined via the integral identity exp ⁡ ( − a 2 x 2 ) = 1 2 π a ∫ − ∞ ∞ exp ⁡ ( − y 2 2 a − i x y ) d y , {\displaystyle \exp \left(-{\frac {a}{2}}x^{2}\right)={\sqrt {\frac {1}{2\pi a}}}\;\int _{-\infty }^{\infty }\exp \left(-{\frac {y^{2}}{2a}}-ixy\right)\,dy,} where the real constant a > 0 {\displaystyle a>0} . The basic idea of the HS transformation is to reformulate a system of particles interacting through two-body potentials into a system of independent particles interacting with a fluctuating field. The procedure is widely used in polymer physics, classical particle physics, spin glass theory, and electronic structure theory. == Calculation of resulting field theories == The resulting field theories are well-suited for the application of effective approximation techniques, like the mean field approximation. A major difficulty arising in the simulation with such field theories is their highly oscillatory nature in case of strong interactions, which leads to the well-known numerical sign problem. The problem originates from the repulsive part of the interaction potential, which implicates the introduction of the complex factor via the HS transformation. == References ==
Wikipedia/Hubbard–Stratonovich_transformation
Static force fields are fields, such as simple electric, magnetic, or gravitational fields, that exist without excitations. The most common approximation method that physicists use for scattering calculations can be interpreted as static forces arising from the interactions between two bodies mediated by virtual particles, particles that exist for only a short time determined by the uncertainty principle. The virtual particles, also known as force carriers, are bosons, with different bosons associated with each force.: 16–37  The virtual-particle description of static forces is capable of identifying the spatial form of the forces, such as the inverse-square behavior in Newton's law of universal gravitation and in Coulomb's law. It is also able to predict whether the forces are attractive or repulsive for like bodies. The path integral formulation is the natural language for describing force carriers. This article uses the path integral formulation to describe the force carriers for spin 0, 1, and 2 fields. Pions, photons, and gravitons fall into these respective categories. There are limits to the validity of the virtual particle picture. The virtual-particle formulation is derived from a method known as perturbation theory, which is an approximation assuming interactions are not too strong, and was intended for scattering problems, not bound states such as atoms. For the strong force binding quarks into nucleons at low energies, perturbation theory has never been shown to yield results in accord with experiments; thus, the validity of the "force-mediating particle" picture is questionable. Similarly, for bound states the method fails. In these cases, the physical interpretation must be re-examined. As an example, the calculations of atomic structure in atomic physics or of molecular structure in quantum chemistry could not easily be repeated, if at all, using the "force-mediating particle" picture.
Use of the "force-mediating particle" picture (FMPP) is unnecessary in nonrelativistic quantum mechanics, and Coulomb's law is used as given in atomic physics and quantum chemistry to calculate both bound and scattering states. A non-perturbative relativistic quantum theory, in which Lorentz invariance is preserved, is achievable by evaluating Coulomb's law as a 4-space interaction using the 3-space position vector of a reference electron obeying Dirac's equation and the quantum trajectory of a second electron which depends only on the scaled time. The quantum trajectory of each electron in an ensemble is inferred from the Dirac current for each electron by setting it equal to a velocity field times a quantum density, calculating a position field from the time integral of the velocity field, and finally calculating a quantum trajectory from the expectation value of the position field. The quantum trajectories are of course spin dependent, and the theory can be validated by checking that Pauli's exclusion principle is obeyed for a collection of fermions. == Classical forces == The force exerted by one mass on another and the force exerted by one charge on another are strikingly similar. Both fall off as the square of the distance between the bodies. Both are proportional to the product of properties of the bodies, mass in the case of gravitation and charge in the case of electrostatics. They also have a striking difference. Two masses attract each other, while two like charges repel each other. In both cases, the bodies appear to act on each other over a distance. The concept of field was invented to mediate the interaction among bodies thus eliminating the need for action at a distance. The gravitational force is mediated by the gravitational field and the Coulomb force is mediated by the electromagnetic field. 
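The inverse-square form of both force laws is equivalent to the statement that the flux of the mediating field through any closed surface depends only on the enclosed source (Gauss's law, the integral form of the field equations given below). A minimal numerical sketch for the gravitational case (SciPy assumed; the values of G, M, and the radii are illustrative):

```python
import numpy as np
from scipy.integrate import quad

G, M = 6.674e-11, 5.0e24  # illustrative SI values

def flux_through_sphere(r):
    """Flux of g = -(G M / r^2) r_hat through a concentric sphere of radius r."""
    # g is radial and uniform over the sphere, so g . dA = -(G M / r^2) dA,
    # with dA = 2 pi r^2 sin(theta) dtheta after integrating over phi.
    integrand = lambda theta: -(G * M / r**2) * 2 * np.pi * r**2 * np.sin(theta)
    return quad(integrand, 0, np.pi)[0]

fluxes = [flux_through_sphere(r) for r in (1.0e6, 5.0e6, 2.5e7)]
expected = -4 * np.pi * G * M   # independent of r, as Gauss's law requires
```

The r^2 growth of the surface area exactly cancels the 1/r^2 falloff of the field, so the flux is the same for every radius.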
=== Gravitational force === The gravitational force on a mass m {\displaystyle m} exerted by another mass M {\displaystyle M} is F = − G m M r 2 r ^ = m g ( r ) , {\displaystyle \mathbf {F} =-G{\frac {mM}{r^{2}}}\,{\hat {\mathbf {r} }}=m\mathbf {g} \left(\mathbf {r} \right),} where G is the Newtonian constant of gravitation, r is the distance between the masses, and r ^ {\displaystyle {\hat {\mathbf {r} }}} is the unit vector from mass M {\displaystyle M} to mass m {\displaystyle m} . The force can also be written F = m g ( r ) , {\displaystyle \mathbf {F} =m\mathbf {g} \left(\mathbf {r} \right),} where g ( r ) {\displaystyle \mathbf {g} \left(\mathbf {r} \right)} is the gravitational field described by the field equation ∇ ⋅ g = − 4 π G ρ m , {\displaystyle \nabla \cdot \mathbf {g} =-4\pi G\rho _{m},} where ρ m {\displaystyle \rho _{m}} is the mass density at each point in space. === Coulomb force === The electrostatic Coulomb force on a charge q {\displaystyle q} exerted by a charge Q {\displaystyle Q} is (SI units) F = 1 4 π ε 0 q Q r 2 r ^ , {\displaystyle \mathbf {F} ={\frac {1}{4\pi \varepsilon _{0}}}{\frac {qQ}{r^{2}}}\mathbf {\hat {r}} ,} where ε 0 {\displaystyle \varepsilon _{0}} is the vacuum permittivity, r {\displaystyle r} is the separation of the two charges, and r ^ {\displaystyle \mathbf {\hat {r}} } is a unit vector in the direction from charge Q {\displaystyle Q} to charge q {\displaystyle q} . The Coulomb force can also be written in terms of an electrostatic field: F = q E ( r ) , {\displaystyle \mathbf {F} =q\mathbf {E} \left(\mathbf {r} \right),} where ∇ ⋅ E = ρ q ε 0 ; {\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho _{q}}{\varepsilon _{0}}};} ρ q {\displaystyle \rho _{q}} being the charge density at each point in space. == Virtual-particle exchange == In perturbation theory, forces are generated by the exchange of virtual particles. 
The mechanics of virtual-particle exchange is best described with the path integral formulation of quantum mechanics. There are insights that can be obtained, however, without going into the machinery of path integrals, such as why classical gravitational and electrostatic forces fall off as the inverse square of the distance between bodies. === Path-integral formulation of virtual-particle exchange === A virtual particle is created by a disturbance to the vacuum state, and the virtual particle is destroyed when it is absorbed back into the vacuum state by another disturbance. The disturbances are imagined to be due to bodies that interact with the virtual particle’s field. ==== Probability amplitude ==== Using natural units, ℏ = c = 1 {\displaystyle \hbar =c=1} , the probability amplitude for the creation, propagation, and destruction of a virtual particle is given, in the path integral formulation by Z ≡ ⟨ 0 | exp ⁡ ( − i H ^ T ) | 0 ⟩ = exp ⁡ ( − i E T ) = ∫ D φ exp ⁡ ( i S [ φ ] ) = exp ⁡ ( i W ) {\displaystyle Z\equiv \langle 0|\exp \left(-i{\hat {H}}T\right)|0\rangle =\exp \left(-iET\right)=\int D\varphi \;\exp \left(i{\mathcal {S}}[\varphi ]\right)\;=\exp \left(iW\right)} where H ^ {\displaystyle {\hat {H}}} is the Hamiltonian operator, T {\displaystyle T} is elapsed time, E {\displaystyle E} is the energy change due to the disturbance, W = − E T {\displaystyle W=-ET} is the change in action due to the disturbance, φ {\displaystyle \varphi } is the field of the virtual particle, the integral is over all paths, and the classical action is given by S [ φ ] = ∫ d 4 x L [ φ ( x ) ] {\displaystyle {\mathcal {S}}[\varphi ]=\int \mathrm {d} ^{4}x\;{{\mathcal {L}}[\varphi (x)]\,}} where L [ φ ( x ) ] {\displaystyle {\mathcal {L}}[\varphi (x)]} is the Lagrangian density. Here, the spacetime metric is given by η μ ν = ( 1 0 0 0 0 − 1 0 0 0 0 − 1 0 0 0 0 − 1 ) . 
{\displaystyle \eta _{\mu \nu }={\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}}.} The path integral often can be converted to the form Z = ∫ exp ⁡ [ i ∫ d 4 x ( 1 2 φ O ^ φ + J φ ) ] D φ {\displaystyle Z=\int \exp \left[i\int d^{4}x\left({\frac {1}{2}}\varphi {\hat {O}}\varphi +J\varphi \right)\right]D\varphi } where O ^ {\displaystyle {\hat {O}}} is a differential operator with φ {\displaystyle \varphi } and J {\displaystyle J} functions of spacetime. The first term in the argument represents the free particle and the second term represents the disturbance to the field from an external source such as a charge or a mass. The integral can be written (see Common integrals in quantum field theory § Integrals with differential operators in the argument) Z ∝ exp ⁡ ( i W ( J ) ) {\displaystyle Z\propto \exp \left(iW\left(J\right)\right)} where W ( J ) = − 1 2 ∬ d 4 x d 4 y J ( x ) D ( x − y ) J ( y ) {\displaystyle W\left(J\right)=-{\frac {1}{2}}\iint d^{4}x\;d^{4}y\;J\left(x\right)D\left(x-y\right)J\left(y\right)} is the change in the action due to the disturbances and the propagator D ( x − y ) {\displaystyle D\left(x-y\right)} is the solution of O ^ D ( x − y ) = δ 4 ( x − y ) . {\displaystyle {\hat {O}}D\left(x-y\right)=\delta ^{4}\left(x-y\right).} ==== Energy of interaction ==== We assume that there are two point disturbances representing two bodies and that the disturbances are motionless and constant in time. 
The disturbances can be written J ( x ) = ( J 1 + J 2 , 0 , 0 , 0 ) {\displaystyle J(x)=\left(J_{1}+J_{2},0,0,0\right)} J 1 = a 1 δ 3 ( x − x 1 ) J 2 = a 2 δ 3 ( x − x 2 ) {\displaystyle {\begin{aligned}J_{1}&=a_{1}\delta ^{3}\left(\mathbf {x} -\mathbf {x} _{1}\right)\\J_{2}&=a_{2}\delta ^{3}\left(\mathbf {x} -\mathbf {x} _{2}\right)\end{aligned}}} where the delta functions are in space, the disturbances are located at x 1 {\displaystyle \mathbf {x} _{1}} and x 2 {\displaystyle \mathbf {x} _{2}} , and the coefficients a 1 {\displaystyle a_{1}} and a 2 {\displaystyle a_{2}} are the strengths of the disturbances. If we neglect self-interactions of the disturbances then W becomes W ( J ) = − ∬ d 4 x d 4 y J 1 ( x ) 1 2 [ D ( x − y ) + D ( y − x ) ] J 2 ( y ) , {\displaystyle W\left(J\right)=-\iint d^{4}x\;d^{4}y\;J_{1}\left(x\right){\frac {1}{2}}\left[D\left(x-y\right)+D\left(y-x\right)\right]J_{2}\left(y\right),} which can be written W ( J ) = − T a 1 a 2 ∫ d 3 k ( 2 π ) 3 D ( k ) ∣ k 0 = 0 exp ⁡ ( i k ⋅ ( x 1 − x 2 ) ) . {\displaystyle W\left(J\right)=-Ta_{1}a_{2}\int {\frac {d^{3}k}{(2\pi )^{3}}}\;\;D\left(k\right)\mid _{k_{0}=0}\;\exp \left(i\mathbf {k} \cdot \left(\mathbf {x} _{1}-\mathbf {x} _{2}\right)\right).} Here D ( k ) {\displaystyle D\left(k\right)} is the Fourier transform of 1 2 [ D ( x − y ) + D ( y − x ) ] . {\displaystyle {\frac {1}{2}}\left[D\left(x-y\right)+D\left(y-x\right)\right].} Finally, the change in energy due to the static disturbances of the vacuum is E = − W T = a 1 a 2 ∫ d 3 k ( 2 π ) 3 D ( k ) ∣ k 0 = 0 exp ⁡ ( i k ⋅ ( x 1 − x 2 ) ) . {\displaystyle E=-{\frac {W}{T}}=a_{1}a_{2}\int {\frac {d^{3}k}{(2\pi )^{3}}}\;\;D\left(k\right)\mid _{k_{0}=0}\;\exp \left(i\mathbf {k} \cdot \left(\mathbf {x} _{1}-\mathbf {x} _{2}\right)\right).} If this quantity is negative, the force is attractive. If it is positive, the force is repulsive. 
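This formula can be checked numerically: the angular part of the d³k integral can be done analytically, leaving a one-dimensional radial integral E(r) = a₁a₂/(2π²r) ∫₀^∞ k sin(kr) D(k) dk. The sketch below (my own discretization; the split of the integrand is a standard trick, not from the text) evaluates it for the Yukawa-type propagator D(k) = −1/(k²+m²) used in the next section and recovers the attractive −a₁a₂ e^(−mr)/(4πr):

```python
import math

# Numerical check: after the angular k-integral, the static interaction
# energy reduces to E(r) = a1*a2/(2*pi^2*r) * Int_0^inf k*sin(k*r)*D(k) dk.
# For D(k) = -1/(k^2 + m^2) this should give -a1*a2*exp(-m*r)/(4*pi*r).

def yukawa_energy(r, m, a1=1.0, a2=1.0, kmax=400.0, n=100_000):
    # split k/(k^2+m^2) = 1/k - m^2/(k*(k^2+m^2)); the 1/k piece gives the
    # exact Dirichlet integral pi/2, and the remainder converges absolutely
    h = kmax / n
    rest = 0.0
    for i in range(1, n + 1):  # midpoint rule
        k = (i - 0.5) * h
        rest += math.sin(k * r) * m * m / (k * (k * k + m * m))
    radial = math.pi / 2 - rest * h   # = Int k*sin(k*r)/(k^2+m^2) dk
    return -a1 * a2 * radial / (2 * math.pi**2 * r)

r, m = 1.0, 1.0
numeric = yukawa_energy(r, m)
exact = -math.exp(-m * r) / (4 * math.pi * r)
print(numeric, exact)   # agree to several decimal places; E < 0: attractive
```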
Examples of static, motionless, interacting currents are the Yukawa potential, the Coulomb potential in a vacuum, and the Coulomb potential in a simple plasma or electron gas. The expression for the interaction energy can be generalized to the situation in which the point particles are moving, but the motion is slow compared with the speed of light. Examples are the Darwin interaction in a vacuum and in a plasma. Finally, the expression for the interaction energy can be generalized to situations in which the disturbances are not point particles, but are possibly line charges, tubes of charges, or current vortices. Examples include: two line charges embedded in a plasma or electron gas, Coulomb potential between two current loops embedded in a magnetic field, and the magnetic interaction between current loops in a simple plasma or electron gas. As seen from the Coulomb interaction between tubes of charge example, shown below, these more complicated geometries can lead to such exotic phenomena as fractional quantum numbers. == Selected examples == === Yukawa potential: the force between two nucleons in an atomic nucleus === Consider the spin-0 Lagrangian density: 21–29  L [ φ ( x ) ] = 1 2 [ ( ∂ φ ) 2 − m 2 φ 2 ] . {\displaystyle {\mathcal {L}}[\varphi (x)]={\frac {1}{2}}\left[\left(\partial \varphi \right)^{2}-m^{2}\varphi ^{2}\right].} The equation of motion for this Lagrangian is the Klein–Gordon equation ∂ 2 φ + m 2 φ = 0. {\displaystyle \partial ^{2}\varphi +m^{2}\varphi =0.} If we add a disturbance the probability amplitude becomes Z = ∫ D φ exp ⁡ { i ∫ d 4 x [ 1 2 ( ( ∂ φ ) 2 − m 2 φ 2 ) + J φ ] } . {\displaystyle Z=\int D\varphi \;\exp \left\{i\int d^{4}\mathbf {x} \;\left[{\frac {1}{2}}\left(\left(\partial \varphi \right)^{2}-m^{2}\varphi ^{2}\right)+J\varphi \right]\right\}.} If we integrate by parts and neglect boundary terms at infinity the probability amplitude becomes Z = ∫ D φ exp ⁡ { i ∫ d 4 x [ − 1 2 φ ( ∂ 2 + m 2 ) φ + J φ ] } . 
{\displaystyle Z=\int D\varphi \;\exp \left\{i\int d^{4}x\;\left[-{\frac {1}{2}}\varphi \left(\partial ^{2}+m^{2}\right)\varphi +J\varphi \right]\right\}.} With the amplitude in this form it can be seen that the propagator is the solution of − ( ∂ 2 + m 2 ) D ( x − y ) = δ 4 ( x − y ) . {\displaystyle -\left(\partial ^{2}+m^{2}\right)D\left(x-y\right)=\delta ^{4}\left(x-y\right).} From this it can be seen that D ( k ) ∣ k 0 = 0 = − 1 k 2 + m 2 . {\displaystyle D\left(k\right)\mid _{k_{0}=0}\;=\;-{\frac {1}{k^{2}+m^{2}}}.} The energy due to the static disturbances becomes (see Common integrals in quantum field theory § Yukawa Potential: The Coulomb potential with mass) E = − a 1 a 2 4 π r exp ⁡ ( − m r ) {\displaystyle E=-{\frac {a_{1}a_{2}}{4\pi r}}\exp \left(-mr\right)} with r 2 = ( x 1 − x 2 ) 2 {\displaystyle r^{2}=\left(\mathbf {x} _{1}-\mathbf {x} _{2}\right)^{2}} which is attractive and has a range of 1 m . {\displaystyle {\frac {1}{m}}.} Yukawa proposed that this field describes the force between two nucleons in an atomic nucleus. It allowed him to predict both the range and the mass of the particle, now known as the pion, associated with this field. === Electrostatics === ==== Coulomb potential in vacuum ==== Consider the spin-1 Proca Lagrangian with a disturbance: 30–31  L [ φ ( x ) ] = − 1 4 F μ ν F μ ν + 1 2 m 2 A μ A μ + A μ J μ {\displaystyle {\mathcal {L}}[\varphi (x)]=-{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }+{\frac {1}{2}}m^{2}A_{\mu }A^{\mu }+A_{\mu }J^{\mu }} where F μ ν = ∂ μ A ν − ∂ ν A μ , {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu },} charge is conserved ∂ μ J μ = 0 , {\displaystyle \partial _{\mu }J^{\mu }=0,} and we choose the Lorenz gauge ∂ μ A μ = 0. {\displaystyle \partial _{\mu }A^{\mu }=0.} Moreover, we assume that there is only a time-like component J 0 {\displaystyle J^{0}} to the disturbance. 
In ordinary language, this means that there is a charge at the points of disturbance, but there are no electric currents. If we follow the same procedure as we did with the Yukawa potential we find that − 1 4 ∫ d 4 x F μ ν F μ ν = − 1 4 ∫ d 4 x ( ∂ μ A ν − ∂ ν A μ ) ( ∂ μ A ν − ∂ ν A μ ) = 1 2 ∫ d 4 x A ν ( ∂ 2 A ν − ∂ ν ∂ μ A μ ) = 1 2 ∫ d 4 x A μ ( η μ ν ∂ 2 ) A ν , {\displaystyle {\begin{aligned}-{\frac {1}{4}}\int d^{4}xF_{\mu \nu }F^{\mu \nu }&=-{\frac {1}{4}}\int d^{4}x\left(\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }\right)\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)\\&={\frac {1}{2}}\int d^{4}x\;A_{\nu }\left(\partial ^{2}A^{\nu }-\partial ^{\nu }\partial _{\mu }A^{\mu }\right)\\&={\frac {1}{2}}\int d^{4}x\;A^{\mu }\left(\eta _{\mu \nu }\partial ^{2}\right)A^{\nu },\end{aligned}}} which implies η μ α ( ∂ 2 + m 2 ) D α ν ( x − y ) = δ μ ν δ 4 ( x − y ) {\displaystyle \eta _{\mu \alpha }\left(\partial ^{2}+m^{2}\right)D^{\alpha \nu }\left(x-y\right)=\delta _{\mu }^{\nu }\delta ^{4}\left(x-y\right)} and D μ ν ( k ) ∣ k 0 = 0 = η μ ν 1 − k 2 + m 2 . {\displaystyle D_{\mu \nu }\left(k\right)\mid _{k_{0}=0}\;=\;\eta _{\mu \nu }{\frac {1}{-k^{2}+m^{2}}}.} This yields D ( k ) ∣ k 0 = 0 = 1 k 2 + m 2 {\displaystyle D\left(k\right)\mid _{k_{0}=0}\;=\;{\frac {1}{\mathbf {k} ^{2}+m^{2}}}} for the timelike propagator and E = + a 1 a 2 4 π r exp ⁡ ( − m r ) {\displaystyle E=+{\frac {a_{1}a_{2}}{4\pi r}}\exp \left(-mr\right)} which has the opposite sign to the Yukawa case. In the limit of zero photon mass, the Lagrangian reduces to the Lagrangian of electromagnetism, and the energy reduces to the potential energy for the Coulomb force, E = a 1 a 2 4 π r . {\displaystyle E={\frac {a_{1}a_{2}}{4\pi r}}.} The coefficients a 1 {\displaystyle a_{1}} and a 2 {\displaystyle a_{2}} are proportional to the electric charges. Unlike the Yukawa case, like bodies in this electrostatic case repel each other.
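A short numerical check of the massless limit (a sketch with assumed unit coefficients a₁ = a₂ = 1, not a derivation):

```python
import math

# Sketch with assumed unit coefficients: the massive-photon energy is
# positive (like charges repel) and tends to the Coulomb value
# a1*a2/(4*pi*r) as the photon mass m -> 0.
def proca_energy(r, m):
    return math.exp(-m * r) / (4 * math.pi * r)

r = 2.0
coulomb = 1.0 / (4 * math.pi * r)
for m in (1.0, 0.1, 0.001):
    print(m, proca_energy(r, m))   # rises toward the Coulomb value as m shrinks
assert proca_energy(r, 1e-9) > 0   # repulsive for like charges
```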
==== Coulomb potential in a simple plasma or electron gas ==== ===== Plasma waves ===== The dispersion relation for plasma waves is: 75–82  ω 2 = ω p 2 + γ ( ω ) T e m k 2 . {\displaystyle \omega ^{2}=\omega _{p}^{2}+\gamma \left(\omega \right){\frac {T_{\text{e}}}{m}}\mathbf {k} ^{2}.} where ω {\displaystyle \omega } is the angular frequency of the wave, ω p 2 = 4 π n e 2 m {\displaystyle \omega _{p}^{2}={\frac {4\pi ne^{2}}{m}}} is the plasma frequency, e {\displaystyle e} is the magnitude of the electron charge, m {\displaystyle m} is the electron mass, T e {\displaystyle T_{\text{e}}} is the electron temperature (the Boltzmann constant equal to one), and γ ( ω ) {\displaystyle \gamma \left(\omega \right)} is a factor that varies with frequency from one to three. At high frequencies, on the order of the plasma frequency, the compression of the electron fluid is an adiabatic process and γ ( ω ) {\displaystyle \gamma \left(\omega \right)} is equal to three. At low frequencies, the compression is an isothermal process and γ ( ω ) {\displaystyle \gamma \left(\omega \right)} is equal to one. Retardation effects have been neglected in obtaining the plasma-wave dispersion relation. For low frequencies, the dispersion relation becomes k 2 + k D 2 = 0 {\displaystyle \mathbf {k} ^{2}+\mathbf {k} _{\text{D}}^{2}=0} where k D 2 = 4 π n e 2 T e {\displaystyle k_{\text{D}}^{2}={\frac {4\pi ne^{2}}{T_{e}}}} is the Debye number, which is the inverse of the Debye length. This suggests that the propagator is D ( k ) ∣ k 0 = 0 = 1 k 2 + k D 2 . {\displaystyle D\left(k\right)\mid _{k_{0}=0}\;=\;{\frac {1}{k^{2}+k_{\text{D}}^{2}}}.} In fact, if the retardation effects are not neglected, then the dispersion relation is − k 0 2 + k 2 + k D 2 − m T e k 0 2 = 0 , {\displaystyle -k_{0}^{2}+k^{2}+k_{\text{D}}^{2}-{\frac {m}{T_{\text{e}}}}k_{0}^{2}=0,} which does indeed yield the guessed propagator. 
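To attach a number to the screening scale, here is a small SI-units illustration (the plasma parameters are hypothetical; in SI the Debye length is λ_D = sqrt(ε₀T_e/(ne²)) with T_e in joules, equivalent to the Gaussian-units k_D² = 4πne²/T_e used above):

```python
import math

# Hypothetical plasma: density and temperature are my own example values.
eps0 = 8.854e-12        # F/m
e = 1.602e-19           # C
n = 1e18                # electron density, m^-3
T_e = 1.0 * e           # 1 eV electron temperature, in joules

lambda_D = math.sqrt(eps0 * T_e / (n * e * e))
k_D = 1.0 / lambda_D    # inverse Debye length (the "Debye number")
print(f"Debye length: {lambda_D * 1e6:.2f} um")   # a few micrometres here

def screened_over_bare(r):
    # ratio of the screened (Yukawa-like) energy to the bare Coulomb energy
    return math.exp(-k_D * r)

# the interaction is suppressed to 1/e of Coulomb at one Debye length
assert math.isclose(screened_over_bare(lambda_D), math.exp(-1.0))
```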
This propagator is the same as the massive Coulomb propagator with the mass equal to the inverse Debye length. The interaction energy is therefore E = a 1 a 2 4 π r exp ⁡ ( − k D r ) . {\displaystyle E={\frac {a_{1}a_{2}}{4\pi r}}\exp \left(-k_{\text{D}}r\right).} The Coulomb potential is screened on length scales of a Debye length. ===== Plasmons ===== In a quantum electron gas, plasma waves are known as plasmons. Debye screening is replaced with Thomas–Fermi screening to yield E = a 1 a 2 4 π r exp ⁡ ( − k s r ) {\displaystyle E={\frac {a_{1}a_{2}}{4\pi r}}\exp \left(-k_{\text{s}}r\right)} where the inverse of the Thomas–Fermi screening length is k s 2 = 6 π n e 2 ε F {\displaystyle k_{\text{s}}^{2}={\frac {6\pi ne^{2}}{\varepsilon _{\text{F}}}}} and ε F {\displaystyle \varepsilon _{\text{F}}} is the Fermi energy ε F = ℏ 2 2 m ( 3 π 2 n ) 2 / 3 . {\textstyle \varepsilon _{\text{F}}={\frac {\hbar ^{2}}{2m}}\left({3\pi ^{2}n}\right)^{2/3}.} This expression can be derived from the chemical potential for an electron gas and from Poisson's equation. The chemical potential for an electron gas near equilibrium is constant and given by μ = − e φ + ε F {\displaystyle \mu =-e\varphi +\varepsilon _{\text{F}}} where φ {\displaystyle \varphi } is the electric potential. Linearizing the Fermi energy to first order in the density fluctuation and combining with Poisson's equation yields the screening length. The force carrier is the quantum version of the plasma wave. ===== Two line charges embedded in a plasma or electron gas ===== We consider a line of charge with axis in the z direction embedded in an electron gas J 1 ( x ) = a 1 L B 1 2 π r δ 2 ( r ) {\displaystyle J_{1}\left(x\right)={\frac {a_{1}}{L_{B}}}{\frac {1}{2\pi r}}\delta ^{2}\left(r\right)} where r {\displaystyle r} is the distance in the xy-plane from the line of charge, L B {\displaystyle L_{B}} is the width of the material in the z direction. 
The superscript 2 indicates that the Dirac delta function is in two dimensions. The propagator is D ( k ) ∣ k 0 = 0 = 1 k 2 + k D s 2 {\displaystyle D\left(k\right)\mid _{k_{0}=0}\;=\;{\frac {1}{\mathbf {k} ^{2}+k_{Ds}^{2}}}} where k D s {\displaystyle k_{Ds}} is either the inverse Debye–Hückel screening length or the inverse Thomas–Fermi screening length. The interaction energy is E = ( a 1 a 2 2 π L B ) ∫ 0 ∞ k d k k 2 + k D s 2 J 0 ( k r 12 ) = ( a 1 a 2 2 π L B ) K 0 ( k D s r 12 ) {\displaystyle E=\left({\frac {a_{1}\,a_{2}}{2\pi L_{B}}}\right)\int _{0}^{\infty }{\frac {k\,dk}{k^{2}+k_{Ds}^{2}}}{\mathcal {J}}_{0}(kr_{12})=\left({\frac {a_{1}\,a_{2}}{2\pi L_{B}}}\right)K_{0}\left(k_{Ds}r_{12}\right)} where J n ( x ) {\displaystyle {\mathcal {J}}_{n}(x)} and K 0 ( x ) {\displaystyle K_{0}(x)} are Bessel functions and r 12 {\displaystyle r_{12}} is the distance between the two line charges. In obtaining the interaction energy we made use of the integrals (see Common integrals in quantum field theory § Integration of the cylindrical propagator with mass) ∫ 0 2 π d φ 2 π exp ⁡ ( i p cos ⁡ ( φ ) ) = J 0 ( p ) {\displaystyle \int _{0}^{2\pi }{\frac {d\varphi }{2\pi }}\exp \left(ip\cos \left(\varphi \right)\right)={\mathcal {J}}_{0}(p)} and ∫ 0 ∞ k d k k 2 + m 2 J 0 ( k r ) = K 0 ( m r ) . {\displaystyle \int _{0}^{\infty }{\frac {k\,dk}{k^{2}+m^{2}}}{\mathcal {J}}_{0}(kr)=K_{0}(mr).} For k D s r 12 ≪ 1 {\displaystyle k_{Ds}r_{12}\ll 1} , we have K 0 ( k D s r 12 ) → − ln ⁡ ( k D s r 12 2 ) − 0.5772.
{\displaystyle K_{0}\left(k_{Ds}r_{12}\right)\to -\ln \left({\frac {k_{Ds}r_{12}}{2}}\right)-0.5772.} ==== Coulomb potential between two current loops embedded in a magnetic field ==== ===== Interaction energy for vortices ===== We consider a charge density in a tube with axis along a magnetic field embedded in an electron gas J 1 ( x ) = a 1 L b 1 2 π r δ 2 ( r − r B 1 ) {\displaystyle J_{1}\left(x\right)={\frac {a_{1}}{L_{b}}}{\frac {1}{2\pi r}}\delta ^{2}{\left(r-r_{B1}\right)}} where r {\displaystyle r} is the distance from the guiding center, L B {\displaystyle L_{B}} is the width of the material in the direction of the magnetic field, r B 1 = 4 π m 1 v 1 a 1 B = 2 ℏ m 1 ω c {\displaystyle r_{B1}={\frac {{\sqrt {4\pi }}m_{1}v_{1}}{a_{1}B}}={\sqrt {\frac {2\hbar }{m_{1}\omega _{c}}}}} where the cyclotron frequency is (Gaussian units) ω c = a 1 B 4 π m 1 c {\displaystyle \omega _{c}={\frac {a_{1}B}{{\sqrt {4\pi }}m_{1}c}}} and v 1 = 2 ℏ ω c m 1 {\displaystyle v_{1}={\sqrt {\frac {2\hbar \omega _{c}}{m_{1}}}}} is the speed of the particle about the magnetic field, and B is the magnitude of the magnetic field. The speed formula comes from setting the classical kinetic energy equal to the spacing between Landau levels in the quantum treatment of a charged particle in a magnetic field. In this geometry, the interaction energy can be written E = ( a 1 a 2 2 π L B ) ∫ 0 ∞ k d k D ( k ) ∣ k 0 = k B = 0 J 0 ( k r B 1 ) J 0 ( k r B 2 ) J 0 ( k r 12 ) {\displaystyle E=\left({\frac {a_{1}\,a_{2}}{2\pi L_{B}}}\right)\int _{0}^{\infty }{k\;dk\;}D\left(k\right)\mid _{k_{0}=k_{B}=0}{\mathcal {J}}_{0}\left(kr_{B1}\right){\mathcal {J}}_{0}\left(kr_{B2}\right){\mathcal {J}}_{0}\left(kr_{12}\right)} where r 12 {\displaystyle r_{12}} is the distance between the centers of the current loops and J n ( x ) {\displaystyle {\mathcal {J}}_{n}(x)} is a Bessel function of the first kind.
In obtaining the interaction energy we made use of the integral ∫ 0 2 π d φ 2 π exp ⁡ ( i p cos ⁡ ( φ ) ) = J 0 ( p ) . {\displaystyle \int _{0}^{2\pi }{\frac {d\varphi }{2\pi }}\exp \left(ip\cos(\varphi )\right)={\mathcal {J}}_{0}(p).} ===== Electric field due to a density perturbation ===== The chemical potential near equilibrium, is given by μ = − e φ + N ℏ ω c = N 0 ℏ ω c {\displaystyle \mu =-e\varphi +N\hbar \omega _{c}=N_{0}\hbar \omega _{c}} where − e φ {\displaystyle -e\varphi } is the potential energy of an electron in an electric potential and N 0 {\displaystyle N_{0}} and N {\displaystyle N} are the number of particles in the electron gas in the absence of and in the presence of an electrostatic potential, respectively. The density fluctuation is then δ n = e φ ℏ ω c A M L B {\displaystyle \delta n={\frac {e\varphi }{\hbar \omega _{c}A_{\text{M}}L_{B}}}} where A M {\displaystyle A_{\text{M}}} is the area of the material in the plane perpendicular to the magnetic field. Poisson's equation yields ( k 2 + k B 2 ) φ = 0 {\displaystyle \left(k^{2}+k_{B}^{2}\right)\varphi =0} where k B 2 = 4 π e 2 ℏ ω c A M L B . 
{\displaystyle k_{B}^{2}={\frac {4\pi e^{2}}{\hbar \omega _{c}A_{\text{M}}L_{B}}}.} The propagator is then D ( k ) ∣ k 0 = k B = 0 = 1 k 2 + k B 2 {\displaystyle D\left(k\right)\mid _{k_{0}=k_{B}=0}={\frac {1}{k^{2}+k_{B}^{2}}}} and the interaction energy becomes E = ( a 1 a 2 2 π L B ) ∫ 0 ∞ k d k k 2 + k B 2 J 0 ( k r B 1 ) J 0 ( k r B 2 ) J 0 ( k r 12 ) = ( 2 e 2 L B ) ∫ 0 ∞ k d k k 2 + k B 2 r B 2 J 0 2 ( k ) J 0 ( k r 12 r B ) {\displaystyle E=\left({\frac {a_{1}\,a_{2}}{2\pi L_{B}}}\right)\int _{0}^{\infty }{\frac {k\;dk\;}{k^{2}+k_{B}^{2}}}{\mathcal {J}}_{0}\left(kr_{B1}\right){\mathcal {J}}_{0}\left(kr_{B2}\right){\mathcal {J}}_{0}\left(kr_{12}\right)=\left({\frac {2e^{2}}{L_{B}}}\right)\int _{0}^{\infty }{\frac {k\;dk\;}{k^{2}+k_{B}^{2}r_{B}^{2}}}{\mathcal {J}}_{0}^{2}\left(k\right){\mathcal {J}}_{0}\left(k{\frac {r_{12}}{r_{B}}}\right)} where in the second equality (Gaussian units) we assume that the vortices had the same energy and the electron charge. In analogy with plasmons, the force carrier is the quantum version of the upper hybrid oscillation which is a longitudinal plasma wave that propagates perpendicular to the magnetic field. ===== Currents with angular momentum ===== ====== Delta function currents ====== Unlike classical currents, quantum current loops can have various values of the Larmor radius for a given energy.: 187–190  Landau levels, the energy states of a charged particle in the presence of a magnetic field, are multiply degenerate. The current loops correspond to angular momentum states of the charged particle that may have the same energy. Specifically, the charge density is peaked around radii of r ℓ = ℓ r B ℓ = 0 , 1 , 2 , … {\displaystyle r_{\ell }={\sqrt {\ell }}\;r_{B}\;\;\;\ell =0,1,2,\ldots } where ℓ {\displaystyle \ell } is the angular momentum quantum number. When ℓ = 1 {\displaystyle \ell =1} we recover the classical situation in which the electron orbits the magnetic field at the Larmor radius. 
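The radius ladder r_ℓ = sqrt(ℓ) r_B can be given a sense of scale with a short SI-units sketch (the 10 T field is an arbitrary illustrative choice, and ω_c = eB/m is the SI form of the cyclotron frequency rather than the Gaussian-units form used above):

```python
import math

# Order-of-magnitude sketch in SI units for an electron in a strong
# laboratory field (field strength is my own choice, not from the text).
hbar = 1.055e-34   # J s
m_e = 9.109e-31    # kg
q_e = 1.602e-19    # C
B = 10.0           # tesla

omega_c = q_e * B / m_e                      # cyclotron frequency, rad/s
r_B = math.sqrt(2 * hbar / (m_e * omega_c))  # basic orbit radius
print(f"r_B = {r_B * 1e9:.1f} nm")
for ell in range(5):
    # degenerate angular-momentum states peak at r_ell = sqrt(ell)*r_B
    print(ell, math.sqrt(ell) * r_B)
```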
If currents of two angular momentum ℓ > 0 {\displaystyle \ell >0} and ℓ ′ ≥ ℓ {\displaystyle \ell '\geq \ell } interact, and we assume the charge densities are delta functions at radius r ℓ {\displaystyle r_{\ell }} , then the interaction energy is E = ( 2 e 2 L B ) ∫ 0 ∞ k d k k 2 + k B 2 r ℓ 2 J 0 ( k ) J 0 ( ℓ ′ ℓ k ) J 0 ( k r 12 r ℓ ) . {\displaystyle E=\left({\frac {2e^{2}}{L_{B}}}\right)\int _{0}^{\infty }{\frac {k\;dk\;}{k^{2}+k_{B}^{2}r_{\ell }^{2}}}\;{\mathcal {J}}_{0}\left(k\right)\;{\mathcal {J}}_{0}\left({\sqrt {\frac {\ell '}{\ell }}}\;k\right)\;{\mathcal {J}}_{0}\left(k{\frac {r_{12}}{r_{\ell }}}\right).} The interaction energy for ℓ = ℓ ′ {\displaystyle \ell =\ell '} is given in Figure 1 for various values of k B r ℓ {\displaystyle k_{B}r_{\ell }} . The energy for two different values is given in Figure 2. ====== Quasiparticles ====== For large values of angular momentum, the energy can have local minima at distances other than zero and infinity. It can be numerically verified that the minima occur at r 12 = r ℓ ℓ ′ = ℓ + ℓ ′ r B . {\displaystyle r_{12}=r_{\ell \ell '}={\sqrt {\ell +\ell '}}\;r_{B}.} This suggests that the pair of particles that are bound and separated by a distance r ℓ ℓ ′ {\displaystyle r_{\ell \ell '}} act as a single quasiparticle with angular momentum ℓ + ℓ ′ {\displaystyle \ell +\ell '} . If we scale the lengths as r ℓ ℓ ′ {\displaystyle r_{\ell \ell '}} , then the interaction energy becomes E = 2 e 2 L B ∫ 0 ∞ k d k k 2 + k B 2 r ℓ ℓ ′ 2 J 0 ( cos ⁡ θ k ) J 0 ( sin ⁡ θ k ) J 0 ( k r 12 r ℓ ℓ ′ ) {\displaystyle E={\frac {2e^{2}}{L_{B}}}\int _{0}^{\infty }{\frac {k\,dk}{k^{2}+k_{B}^{2}r_{\ell \ell '}^{2}}}\;{\mathcal {J}}_{0}\left(\cos \theta \,k\right)\;{\mathcal {J}}_{0}(\sin \theta \,k)\;{\mathcal {J}}_{0}{\left(k{\frac {r_{12}}{r_{\ell \ell '}}}\right)}} where tan ⁡ θ = ℓ ℓ ′ . 
{\displaystyle \tan \theta ={\sqrt {\frac {\ell }{\ell '}}}.} The value of the r 12 {\displaystyle r_{12}} at which the energy is minimum, r 12 = r ℓ ℓ ′ {\displaystyle r_{12}=r_{\ell \ell '}} , is independent of the ratio tan ⁡ θ = ℓ / ℓ ′ {\textstyle \tan \theta ={\sqrt {{\ell }/{\ell '}}}} . However the value of the energy at the minimum depends on the ratio. The lowest energy minimum occurs when ℓ ℓ ′ = 1. {\displaystyle {\frac {\ell }{\ell '}}=1.} When the ratio differs from 1, then the energy minimum is higher (Figure 3). Therefore, for even values of total momentum, the lowest energy occurs when (Figure 4) ℓ = ℓ ′ = 1 {\displaystyle \ell =\ell '=1} or ℓ ℓ ∗ = 1 2 {\displaystyle {\frac {\ell }{\ell ^{*}}}={\frac {1}{2}}} where the total angular momentum is written as ℓ ∗ = ℓ + ℓ ′ . {\displaystyle \ell ^{*}=\ell +\ell '.} When the total angular momentum is odd, the minima cannot occur for ℓ = ℓ ′ . {\displaystyle \ell =\ell '.} The lowest energy states for odd total angular momentum occur when ℓ ℓ ∗ = ℓ ∗ ± 1 2 ℓ ∗ {\displaystyle {\frac {\ell }{\ell ^{*}}}=\;{\frac {\ell ^{*}\pm 1}{2\ell ^{*}}}} or ℓ ℓ ∗ = 1 3 , 2 5 , 3 7 , etc., {\displaystyle {\frac {\ell }{\ell ^{*}}}={\frac {1}{3}},{\frac {2}{5}},{\frac {3}{7}},{\text{etc.,}}} and ℓ ℓ ∗ = 2 3 , 3 5 , 4 7 , etc., {\displaystyle {\frac {\ell }{\ell ^{*}}}={\frac {2}{3}},{\frac {3}{5}},{\frac {4}{7}},{\text{etc.,}}} which also appear as series for the filling factor in the fractional quantum Hall effect. ====== Charge density spread over a wave function ====== The charge density is not actually concentrated in a delta function. The charge is spread over a wave function. In that case the electron density is: 189  1 π r B 2 L B 1 n ! ( r r B ) 2 l exp ⁡ ( − r 2 r B 2 ) . 
{\displaystyle {\frac {1}{\pi r_{B}^{2}L_{B}}}{\frac {1}{n!}}\left({\frac {r}{r_{B}}}\right)^{2l}\exp \left(-{\frac {r^{2}}{r_{B}^{2}}}\right).} The interaction energy becomes E = ( 2 e 2 L B ) ∫ 0 ∞ k d k k 2 + k B 2 r B 2 M ( ℓ + 1 , 1 , − k 2 4 ) M ( ℓ ′ + 1 , 1 , − k 2 4 ) J 0 ( k r 12 r B ) {\displaystyle E=\left({\frac {2e^{2}}{L_{B}}}\right)\int _{0}^{\infty }{\frac {k\;dk\;}{k^{2}+k_{B}^{2}r_{B}^{2}}}\;M{\left(\ell +1,1,-{\frac {k^{2}}{4}}\right)}\;M{\left(\ell '+1,1,-{\frac {k^{2}}{4}}\right)}\;{\mathcal {J}}_{0}{\left(k{\frac {r_{12}}{r_{B}}}\right)}} where M {\displaystyle M} is a confluent hypergeometric function or Kummer function. In obtaining the interaction energy we have used the integral (see Common integrals in quantum field theory § Integration over a magnetic wave function) 2 n ! ∫ 0 ∞ d r r 2 n + 1 e − r 2 J 0 ( k r ) = M ( n + 1 , 1 , − k 2 4 ) . {\displaystyle {\frac {2}{n!}}\int _{0}^{\infty }dr\;r^{2n+1}e^{-r^{2}}J_{0}(kr)=M\left(n+1,1,-{\frac {k^{2}}{4}}\right).} As with delta function charges, the value of r 12 {\displaystyle r_{12}} in which the energy is a local minimum only depends on the total angular momentum, not on the angular momenta of the individual currents. Also, as with the delta function charges, the energy at the minimum increases as the ratio of angular momenta varies from one. Therefore, the series ℓ ℓ ∗ = 1 3 , 2 5 , 3 7 , etc., {\displaystyle {\frac {\ell }{\ell ^{*}}}={\frac {1}{3}},{\frac {2}{5}},{\frac {3}{7}},{\text{etc.,}}} and ℓ ℓ ∗ = 2 3 , 3 5 , 4 7 , etc., {\displaystyle {\frac {\ell }{\ell ^{*}}}={\frac {2}{3}},{\frac {3}{5}},{\frac {4}{7}},{\text{etc.,}}} appear as well in the case of charges spread by the wave function. The Laughlin wavefunction is an ansatz for the quasiparticle wavefunction. If the expectation value of the interaction energy is taken over a Laughlin wavefunction, these series are also preserved. 
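The interior minima discussed above can be explored numerically. The sketch below is an exploratory calculation of my own (the screening value μ = k_B r_ℓ, the quadrature grids, and the scan range are arbitrary choices): it evaluates the delta-function-current integral for ℓ = ℓ′ in units of r_ℓ and reports where the scanned energy is smallest, to be compared with the predicted minimum at r₁₂ = sqrt(ℓ+ℓ′) r_B, i.e. x = sqrt(2) for ℓ = ℓ′:

```python
import math

# Exploratory sketch: in units of r_l, scan
#   E(x) ~ Int_0^inf k/(k^2+mu^2) * J0(k)^2 * J0(k*x) dk
# for l = l' delta-function currents, with x = r12/r_l and mu = k_B*r_l.

def j0(x):
    # Bessel J0 from its integral representation, midpoint rule;
    # more quadrature points for larger (more oscillatory) arguments
    n = 64 + int(4 * abs(x))
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * math.pi / n
        s += math.cos(x * math.sin(t))
    return s / n

def make_energy(mu=0.2, kmax=30.0, n=450):
    # precompute the x-independent part of the integrand once
    h = kmax / n
    ks = [(i - 0.5) * h for i in range(1, n + 1)]
    w = [k / (k * k + mu * mu) * j0(k) ** 2 * h for k in ks]  # l = l'
    return lambda x: sum(wi * j0(k * x) for k, wi in zip(ks, w))

energy = make_energy()
xs = [0.6 + 0.1 * i for i in range(20)]          # x from 0.6 to 2.5
es = [energy(x) for x in xs]
x_min = xs[es.index(min(es))]
print("scanned minimum near x =", x_min)
```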
=== Magnetostatics === ==== Darwin interaction in a vacuum ==== A charged moving particle can generate a magnetic field that affects the motion of another charged particle. The static version of this effect is called the Darwin interaction. To calculate this, consider the electrical currents in space generated by a moving charge J 1 ( x ) = a 1 v 1 δ 3 ( x − x 1 ) {\displaystyle \mathbf {J} _{1}{\left(\mathbf {x} \right)}=a_{1}\mathbf {v} _{1}\delta ^{3}{\left(\mathbf {x} -\mathbf {x} _{1}\right)}} with a comparable expression for J 2 {\displaystyle \mathbf {J} _{2}} . The Fourier transform of this current is J 1 ( k ) = a 1 v 1 exp ⁡ ( i k ⋅ x 1 ) . {\displaystyle \mathbf {J} _{1}{\left(\mathbf {k} \right)}=a_{1}\mathbf {v} _{1}\exp \left(i\mathbf {k} \cdot \mathbf {x} _{1}\right).} The current can be decomposed into a transverse and a longitudinal part (see Helmholtz decomposition). J 1 ( k ) = a 1 [ 1 − k ^ k ^ ] ⋅ v 1 exp ⁡ ( i k ⋅ x 1 ) + a 1 [ k ^ k ^ ] ⋅ v 1 exp ⁡ ( i k ⋅ x 1 ) . {\displaystyle \mathbf {J} _{1}{\left(\mathbf {k} \right)}=a_{1}\left[1-{\hat {\mathbf {k} }}{\hat {\mathbf {k} }}\right]\cdot \mathbf {v} _{1}\exp \left(i\mathbf {k} \cdot \mathbf {x} _{1}\right)+a_{1}\left[{\hat {\mathbf {k} }}{\hat {\mathbf {k} }}\right]\cdot \mathbf {v} _{1}\exp \left(i\mathbf {k} \cdot \mathbf {x} _{1}\right).} The hat indicates a unit vector. The last term disappears because k ⋅ J = − k 0 J 0 → 0 , {\displaystyle \mathbf {k} \cdot \mathbf {J} =-k_{0}J^{0}\to 0,} which results from charge conservation. Here k 0 {\displaystyle k_{0}} vanishes because we are considering static forces. With the current in this form the energy of interaction can be written E = a 1 a 2 ∫ d 3 k ( 2 π ) 3 D ( k ) ∣ k 0 = 0 v 1 ⋅ [ 1 − k ^ k ^ ] ⋅ v 2 exp ⁡ ( i k ⋅ ( x 1 − x 2 ) ) . 
{\displaystyle E=a_{1}a_{2}\int {\frac {d^{3}\mathbf {k} }{(2\pi )^{3}}}\;\;D\left(k\right)\mid _{k_{0}=0}\;\mathbf {v} _{1}\cdot \left[1-{\hat {\mathbf {k} }}{\hat {\mathbf {k} }}\right]\cdot \mathbf {v} _{2}\;\exp \left(i\mathbf {k} \cdot \left(\mathbf {x} _{1}-\mathbf {x} _{2}\right)\right).} The propagator equation for the Proca Lagrangian is η μ α ( ∂ 2 + m 2 ) D α ν ( x − y ) = δ μ ν δ 4 ( x − y ) . {\displaystyle \eta _{\mu \alpha }\left(\partial ^{2}+m^{2}\right)D^{\alpha \nu }\left(x-y\right)=\delta _{\mu }^{\nu }\delta ^{4}\left(x-y\right).} The spacelike solution is D ( k ) ∣ k 0 = 0 = − 1 k 2 + m 2 , {\displaystyle D\left(k\right)\mid _{k_{0}=0}\;=\;-{\frac {1}{k^{2}+m^{2}}},} which yields E = − a 1 a 2 ∫ d 3 k ( 2 π ) 3 v 1 ⋅ [ 1 − k ^ k ^ ] ⋅ v 2 k 2 + m 2 exp ⁡ ( i k ⋅ ( x 1 − x 2 ) ) , {\displaystyle E=-a_{1}a_{2}\int {\frac {d^{3}\mathbf {k} }{(2\pi )^{3}}}\;\;{\frac {\mathbf {v} _{1}\cdot \left[1-{\hat {\mathbf {k} }}{\hat {\mathbf {k} }}\right]\cdot \mathbf {v} _{2}}{k^{2}+m^{2}}}\;\exp \left(i\mathbf {k} \cdot \left(\mathbf {x} _{1}-\mathbf {x} _{2}\right)\right),} where k = | k | {\textstyle k=|\mathbf {k} |} . The integral evaluates to (see Common integrals in quantum field theory § Transverse potential with mass) E = − 1 2 a 1 a 2 4 π r e − m r { 2 ( m r ) 2 ( e m r − 1 ) − 2 m r } v 1 ⋅ [ 1 + r ^ r ^ ] ⋅ v 2 {\displaystyle E=-{\frac {1}{2}}{\frac {a_{1}a_{2}}{4\pi r}}e^{-mr}\left\{{\frac {2}{\left(mr\right)^{2}}}\left(e^{mr}-1\right)-{\frac {2}{mr}}\right\}\mathbf {v} _{1}\cdot \left[1+{\hat {\mathbf {r} }}{\hat {\mathbf {r} }}\right]\cdot \mathbf {v} _{2}} which reduces to E = − 1 2 a 1 a 2 4 π r v 1 ⋅ [ 1 + r ^ r ^ ] ⋅ v 2 {\displaystyle E=-{\frac {1}{2}}{\frac {a_{1}a_{2}}{4\pi r}}\mathbf {v} _{1}\cdot \left[1+{\hat {\mathbf {r} }}{\hat {\mathbf {r} }}\right]\cdot \mathbf {v} _{2}} in the limit of small m. The interaction energy is the negative of the interaction Lagrangian. 
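The small-m limit above can be packaged as a short function (a sketch with assumed unit charges; the velocity and position vectors are illustrative, with speeds in units where c = 1):

```python
import math

# Sketch of the m -> 0 Darwin energy,
#   E = -(a1*a2/(8*pi*r)) * v1 . (I + rhat rhat) . v2
def darwin_energy(a1, a2, v1, v2, x1, x2):
    dx = [p - q for p, q in zip(x1, x2)]
    r = math.sqrt(sum(d * d for d in dx))
    rhat = [d / r for d in dx]
    v1v2 = sum(a * b for a, b in zip(v1, v2))
    v1r = sum(a * b for a, b in zip(v1, rhat))
    v2r = sum(a * b for a, b in zip(v2, rhat))
    return -(a1 * a2 / (8 * math.pi * r)) * (v1v2 + v1r * v2r)

# like charges moving side by side in the same direction give E < 0, an
# attraction, opposite in sign to their static Coulomb repulsion
E_parallel = darwin_energy(1, 1, [0.1, 0, 0], [0.1, 0, 0], [0, 0, 0], [0, 1, 0])
E_opposite = darwin_energy(1, 1, [0.1, 0, 0], [-0.1, 0, 0], [0, 0, 0], [0, 1, 0])
print(E_parallel, E_opposite)
```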
For two like particles traveling in the same direction, the interaction is attractive, which is the opposite of the Coulomb interaction. ==== Darwin interaction in plasma ==== In a plasma, the dispersion relation for an electromagnetic wave is: 100–103  ( c = 1 {\displaystyle c=1} ) k 0 2 = ω p 2 + k 2 , {\displaystyle k_{0}^{2}=\omega _{p}^{2}+k^{2},} which implies D ( k ) ∣ k 0 = 0 = − 1 k 2 + ω p 2 . {\displaystyle D\left(k\right)\mid _{k_{0}=0}\;=\;-{\frac {1}{k^{2}+\omega _{p}^{2}}}.} Here ω p {\displaystyle \omega _{p}} is the plasma frequency. The interaction energy is therefore E = − 1 2 a 1 a 2 4 π r v 1 ⋅ [ 1 + r ^ r ^ ] ⋅ v 2 e − ω p r { 2 ( ω p r ) 2 ( e ω p r − 1 ) − 2 ω p r } . {\displaystyle E=-{\frac {1}{2}}{\frac {a_{1}a_{2}}{4\pi r}}\mathbf {v} _{1}\cdot \left[1+{\hat {\mathbf {r} }}{\hat {\mathbf {r} }}\right]\cdot \mathbf {v} _{2}\;e^{-\omega _{p}r}\left\{{\frac {2}{\left(\omega _{p}r\right)^{2}}}\left(e^{\omega _{p}r}-1\right)-{\frac {2}{\omega _{p}r}}\right\}.} ==== Magnetic interaction between current loops in a simple plasma or electron gas ==== ===== Interaction energy ===== Consider a tube of current rotating in a magnetic field embedded in a simple plasma or electron gas. The current, which lies in the plane perpendicular to the magnetic field, is defined as J 1 ( x ) = a 1 v 1 1 2 π r L B δ 2 ( r − r B 1 ) ( b ^ × r ^ ) {\displaystyle \mathbf {J} _{1}(\mathbf {x} )=a_{1}v_{1}{\frac {1}{2\pi rL_{B}}}\;\delta ^{2}{\left(r-r_{B1}\right)}\left({\hat {\mathbf {b} }}\times {\hat {\mathbf {r} }}\right)} where r B 1 = 4 π m 1 v 1 a 1 B {\displaystyle r_{B1}={\frac {{\sqrt {4\pi }}m_{1}v_{1}}{a_{1}B}}} and b ^ {\displaystyle {\hat {\mathbf {b} }}} is the unit vector in the direction of the magnetic field. Here L B {\displaystyle L_{B}} indicates the dimension of the material in the direction of the magnetic field. The transverse current, perpendicular to the wave vector, drives the transverse wave. 
The energy of interaction is E = ( a 1 a 2 2 π L B ) v 1 v 2 ∫ 0 ∞ k d k D ( k ) ∣ k 0 = k B = 0 J 1 ( k r B 1 ) J 1 ( k r B 2 ) J 0 ( k r 12 ) {\displaystyle E=\left({\frac {a_{1}\,a_{2}}{2\pi L_{B}}}\right)v_{1}\,v_{2}\,\int _{0}^{\infty }{k\;dk\;}D\left(k\right)\mid _{k_{0}=k_{B}=0}{\mathcal {J}}_{1}{\left(kr_{B1}\right)}{\mathcal {J}}_{1}{\left(kr_{B2}\right)}{\mathcal {J}}_{0}{\left(kr_{12}\right)}} where r 12 {\displaystyle r_{12}} is the distance between the centers of the current loops and J n ( x ) {\displaystyle {\mathcal {J}}_{n}(x)} is a Bessel function of the first kind. In obtaining the interaction energy we made use of the integrals ∫ 0 2 π d φ 2 π exp ⁡ ( i p cos ⁡ ( φ ) ) = J 0 ( p ) {\displaystyle \int _{0}^{2\pi }{\frac {d\varphi }{2\pi }}\exp \left(ip\cos \left(\varphi \right)\right)={\mathcal {J}}_{0}\left(p\right)} and ∫ 0 2 π d φ 2 π cos ⁡ ( φ ) exp ⁡ ( i p cos ⁡ ( φ ) ) = i J 1 ( p ) . {\displaystyle \int _{0}^{2\pi }{\frac {d\varphi }{2\pi }}\cos \left(\varphi \right)\exp \left(ip\cos \left(\varphi \right)\right)=i{\mathcal {J}}_{1}\left(p\right).} See Common integrals in quantum field theory § Angular integration in cylindrical coordinates. A current in a plasma confined to the plane perpendicular to the magnetic field generates an extraordinary wave.: 110–112  This wave generates Hall currents that interact and modify the electromagnetic field. The dispersion relation for extraordinary waves is: 112  − k 0 2 + k 2 + ω p 2 k 0 2 − ω p 2 k 0 2 − ω H 2 = 0 , {\displaystyle -k_{0}^{2}+k^{2}+\omega _{p}^{2}{\frac {k_{0}^{2}-\omega _{p}^{2}}{k_{0}^{2}-\omega _{H}^{2}}}=0,} which gives for the propagator D ( k ) ∣ k 0 = k B = 0 = − ( 1 k 2 + k X 2 ) {\displaystyle D\left(k\right)\mid _{k_{0}=k_{B}=0}\;=\;-\left({\frac {1}{k^{2}+k_{X}^{2}}}\right)} where k X ≡ ω p 2 ω H {\displaystyle k_{X}\equiv {\frac {\omega _{p}^{2}}{\omega _{H}}}} in analogy with the Darwin propagator. 
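The two angular integrals quoted above can be verified numerically: for a smooth periodic integrand the trapezoid rule is spectrally accurate, and the Bessel functions can be summed from their power series. A sketch (plain Python; the helper names are ours):

```python
import cmath
import math

def bessel_j(n, p, terms=30):
    # power series: J_n(p) = sum_k (-1)^k (p/2)^(2k+n) / (k! (k+n)!)
    return sum((-1)**k * (p / 2.0)**(2*k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def angular_average(p, weight, N=2000):
    # (1/2π) ∫_0^{2π} weight(φ) exp(i p cos φ) dφ  via the periodic trapezoid rule
    return sum(weight(2.0 * math.pi * j / N)
               * cmath.exp(1j * p * math.cos(2.0 * math.pi * j / N))
               for j in range(N)) / N

p = 1.5
i0 = angular_average(p, lambda phi: 1.0)            # should equal J_0(p)
i1 = angular_average(p, lambda phi: math.cos(phi))  # should equal i J_1(p)
```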
Here, the upper hybrid frequency is given by ω H 2 = ω p 2 + ω c 2 , {\displaystyle \omega _{H}^{2}=\omega _{p}^{2}+\omega _{c}^{2},} the cyclotron frequency is given by (Gaussian units) ω c = e B m c , {\displaystyle \omega _{c}={\frac {eB}{mc}},} and the plasma frequency (Gaussian units) ω p 2 = 4 π n e 2 m . {\displaystyle \omega _{p}^{2}={\frac {4\pi ne^{2}}{m}}.} Here n is the electron density, e is the magnitude of the electron charge, and m is the electron mass. The interaction energy becomes, for like currents, E = − ( a 2 2 π L B ) v 2 ∫ 0 ∞ k d k k 2 + k X 2 J 1 2 ( k r B ) J 0 ( k r 12 ) {\displaystyle E=-\left({\frac {a^{2}}{2\pi L_{B}}}\right)v^{2}\,\int _{0}^{\infty }{\frac {k\;dk}{k^{2}+k_{X}^{2}}}{\mathcal {J}}_{1}^{2}\left(kr_{B}\right){\mathcal {J}}_{0}\left(kr_{12}\right)} ===== Limit of small distance between current loops ===== In the limit that the distance between current loops is small, E = − E 0 I 1 ( μ ) K 1 ( μ ) {\displaystyle E=-E_{0}\;I_{1}{\left(\mu \right)}K_{1}{\left(\mu \right)}} where E 0 = ( a 2 2 π L B ) v 2 {\displaystyle E_{0}=\left({\frac {a^{2}}{2\pi L_{B}}}\right)v^{2}} and μ = ω p 2 r B ω H = k X r B {\displaystyle \mu ={\frac {\omega _{p}^{2}r_{B}}{\omega _{H}}}=k_{X}\;r_{B}} and I and K are modified Bessel functions. We have assumed that the two currents have the same charge and speed. We have made use of the integral (see Common integrals in quantum field theory § Integration of the cylindrical propagator with mass) ∫ 0 ∞ k d k k 2 + m 2 J 1 2 ( k r ) = I 1 ( m r ) K 1 ( m r ) . {\displaystyle \int _{0}^{\infty }{\frac {k\;dk}{k^{2}+m^{2}}}{\mathcal {J}}_{1}^{2}\left(kr\right)=I_{1}\left(mr\right)K_{1}\left(mr\right).} For small mr the integral becomes I 1 ( m r ) K 1 ( m r ) → 1 2 [ 1 − 1 8 ( m r ) 2 ] . {\displaystyle I_{1}{\left(mr\right)}K_{1}{\left(mr\right)}\to {\frac {1}{2}}\left[1-{\frac {1}{8}}\left(mr\right)^{2}\right].} For large mr the integral becomes I 1 ( m r ) K 1 ( m r ) → 1 2 ( 1 m r ) . 
{\displaystyle I_{1}\left(mr\right)K_{1}\left(mr\right)\rightarrow {\frac {1}{2}}\;\left({\frac {1}{mr}}\right).} ===== Relation to the quantum Hall effect ===== The screening wavenumber can be written (Gaussian units) μ = ω p 2 r B ω H c = ( 2 e 2 r B L B ℏ c ) ν 1 + ω p 2 ω c 2 = 2 α ( r B L B ) ( 1 1 + ω p 2 ω c 2 ) ν {\displaystyle \mu ={\frac {\omega _{p}^{2}r_{B}}{\omega _{H}c}}=\left({\frac {2e^{2}r_{B}}{L_{B}\hbar c}}\right){\frac {\nu }{\sqrt {1+{\frac {\omega _{p}^{2}}{\omega _{c}^{2}}}}}}=2\alpha \left({\frac {r_{B}}{L_{B}}}\right)\left({\frac {1}{\sqrt {1+{\frac {\omega _{p}^{2}}{\omega _{c}^{2}}}}}}\right)\nu } where α {\displaystyle \alpha } is the fine-structure constant and the filling factor is ν = 2 π N ℏ c e B A {\displaystyle \nu ={\frac {2\pi N\hbar c}{eBA}}} and N is the number of electrons in the material and A is the area of the material perpendicular to the magnetic field. This parameter is important in the quantum Hall effect and the fractional quantum Hall effect. The filling factor is the fraction of occupied Landau states at the ground state energy. For cases of interest in the quantum Hall effect, μ {\displaystyle \mu } is small. In that case the interaction energy is E = − E 0 2 [ 1 − 1 8 μ 2 ] {\displaystyle E=-{\frac {E_{0}}{2}}\left[1-{\frac {1}{8}}\mu ^{2}\right]} where (Gaussian units) E 0 = 4 π e 2 L B v 2 c 2 = 8 π e 2 L B ( ℏ ω c m c 2 ) {\displaystyle E_{0}={4\pi }{\frac {e^{2}}{L_{B}}}{\frac {v^{2}}{c^{2}}}={8\pi }{\frac {e^{2}}{L_{B}}}\left({\frac {\hbar \omega _{c}}{mc^{2}}}\right)} is the interaction energy for zero filling factor. We have set the classical kinetic energy to the quantum energy 1 2 m v 2 = ℏ ω c . {\displaystyle {\frac {1}{2}}mv^{2}=\hbar \omega _{c}.} === Gravitation === A gravitational disturbance is generated by the stress–energy tensor T μ ν {\displaystyle T^{\mu \nu }} ; consequently, the Lagrangian for the gravitational field is spin-2. 
If the disturbances are at rest, then the only component of the stress–energy tensor that persists is the 00 {\displaystyle 00} component. If we use the same trick of giving the graviton some mass and then taking the mass to zero at the end of the calculation the propagator becomes D ( k ) ∣ k 0 = 0 = − 4 3 1 k 2 + m 2 {\displaystyle D\left(k\right)\mid _{k_{0}=0}\;=\;-{\frac {4}{3}}{\frac {1}{k^{2}+m^{2}}}} and E = − 4 3 a 1 a 2 4 π r exp ⁡ ( − m r ) , {\displaystyle E=-{\frac {4}{3}}{\frac {a_{1}a_{2}}{4\pi r}}\exp \left(-mr\right),} which is once again attractive rather than repulsive. The coefficients are proportional to the masses of the disturbances. In the limit of small graviton mass, we recover the inverse-square behavior of Newton's Law.: 32–37  Unlike the electrostatic case, however, taking the small-mass limit of the boson does not yield the correct result. A more rigorous treatment yields a factor of one in the energy rather than 4/3.: 35  == References ==
Wikipedia/Static_forces_and_virtual-particle_exchange
In theoretical physics, scalar field theory can refer to a relativistically invariant classical or quantum theory of scalar fields. A scalar field is invariant under any Lorentz transformation. The only fundamental scalar quantum field that has been observed in nature is the Higgs field. However, scalar quantum fields feature in the effective field theory descriptions of many physical phenomena. An example is the pion, which is actually a pseudoscalar. Since they involve no polarization complications, scalar fields are often the simplest setting in which to appreciate second quantization. For this reason, scalar field theories are often used to introduce novel concepts and techniques. The signature of the metric employed below is (+ − − −). == Classical scalar field theory == A general reference for this section is Ramond, Pierre (2001-12-21). Field Theory: A Modern Primer (Second Edition). USA: Westview Press. ISBN 0-201-30450-3, Ch 1. === Linear (free) theory === The most basic scalar field theory is the linear theory. Through the Fourier decomposition of the fields, it represents the normal modes of an infinity of coupled oscillators, where the continuum limit of the oscillator index i is now denoted by x. 
The action for the free relativistic scalar field theory is then S = ∫ d D − 1 x d t L = ∫ d D − 1 x d t [ 1 2 η μ ν ∂ μ ϕ ∂ ν ϕ − 1 2 m 2 ϕ 2 ] = ∫ d D − 1 x d t [ 1 2 ( ∂ t ϕ ) 2 − 1 2 δ i j ∂ i ϕ ∂ j ϕ − 1 2 m 2 ϕ 2 ] , {\displaystyle {\begin{aligned}{\mathcal {S}}&=\int \mathrm {d} ^{D-1}x\mathrm {d} t{\mathcal {L}}\\&=\int \mathrm {d} ^{D-1}x\mathrm {d} t\left[{\frac {1}{2}}\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right]\\[6pt]&=\int \mathrm {d} ^{D-1}x\mathrm {d} t\left[{\frac {1}{2}}(\partial _{t}\phi )^{2}-{\frac {1}{2}}\delta ^{ij}\partial _{i}\phi \partial _{j}\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right],\end{aligned}}} where L {\displaystyle {\mathcal {L}}} is known as a Lagrangian density; d4−1x ≡ dx ⋅ dy ⋅ dz ≡ dx1 ⋅ dx2 ⋅ dx3 for the three spatial coordinates; δij is the Kronecker delta function; and ∂ρ = ∂/∂xρ for the ρ-th coordinate xρ. This is an example of a quadratic action, since each of the terms is quadratic in the field, φ. The term proportional to m2 is sometimes known as a mass term, due to its subsequent interpretation, in the quantized version of this theory, in terms of particle mass. The equation of motion for this theory is obtained by extremizing the action above. It takes the following form, linear in φ, η μ ν ∂ μ ∂ ν ϕ + m 2 ϕ = ∂ t 2 ϕ − ∇ 2 ϕ + m 2 ϕ = 0 , {\displaystyle \eta ^{\mu \nu }\partial _{\mu }\partial _{\nu }\phi +m^{2}\phi =\partial _{t}^{2}\phi -\nabla ^{2}\phi +m^{2}\phi =0~,} where ∇2 is the Laplace operator. This is the Klein–Gordon equation, with the interpretation as a classical field equation, rather than as a quantum-mechanical wave equation. 
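The dispersion relation implied by the Klein–Gordon equation can be checked directly: a plane wave φ = cos(ωt − kx) solves it exactly when ω² = k² + m². The sketch below (plain Python, with illustrative parameter values of our choosing) confirms that the finite-difference residual of ∂t²φ − ∇²φ + m²φ is numerically small for such a wave.

```python
import math

m, k = 1.0, 0.7
omega = math.sqrt(k*k + m*m)        # relativistic dispersion: ω² = k² + m²

def phi(t, x):
    return math.cos(omega*t - k*x)  # plane-wave ansatz

h = 1e-4                            # finite-difference step
t0, x0 = 0.3, 0.5
d2t = (phi(t0+h, x0) - 2.0*phi(t0, x0) + phi(t0-h, x0)) / h**2
d2x = (phi(t0, x0+h) - 2.0*phi(t0, x0) + phi(t0, x0-h)) / h**2
residual = d2t - d2x + m*m*phi(t0, x0)   # Klein–Gordon operator acting on φ
```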
=== Nonlinear (interacting) theory === The most common generalization of the linear theory above is to add a scalar potential V ( ϕ ) {\displaystyle V(\phi )} to the Lagrangian, where typically, in addition to a mass term m 2 ϕ 2 / 2 {\displaystyle m^{2}\phi ^{2}/2} , the potential V {\displaystyle V} has higher order polynomial terms in ϕ {\displaystyle \phi } . Such a theory is sometimes said to be interacting, because the Euler–Lagrange equation is now nonlinear, implying a self-interaction. The action for the most general such theory is S = ∫ d D − 1 x d t L = ∫ d D − 1 x d t [ 1 2 η μ ν ∂ μ ϕ ∂ ν ϕ − V ( ϕ ) ] = ∫ d D − 1 x d t [ 1 2 ( ∂ t ϕ ) 2 − 1 2 δ i j ∂ i ϕ ∂ j ϕ − 1 2 m 2 ϕ 2 − ∑ n = 3 ∞ 1 n ! g n ϕ n ] {\displaystyle {\begin{aligned}{\mathcal {S}}&=\int \mathrm {d} ^{D-1}x\,\mathrm {d} t{\mathcal {L}}\\[3pt]&=\int \mathrm {d} ^{D-1}x\mathrm {d} t\left[{\frac {1}{2}}\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi -V(\phi )\right]\\[3pt]&=\int \mathrm {d} ^{D-1}x\,\mathrm {d} t\left[{\frac {1}{2}}(\partial _{t}\phi )^{2}-{\frac {1}{2}}\delta ^{ij}\partial _{i}\phi \partial _{j}\phi -{\frac {1}{2}}m^{2}\phi ^{2}-\sum _{n=3}^{\infty }{\frac {1}{n!}}g_{n}\phi ^{n}\right]\end{aligned}}} The n ! {\displaystyle n!} factors in the expansion are introduced because they are useful in the Feynman diagram expansion of the quantum theory, as described below. The corresponding Euler–Lagrange equation of motion is now η μ ν ∂ μ ∂ ν ϕ + V ′ ( ϕ ) = ∂ t 2 ϕ − ∇ 2 ϕ + V ′ ( ϕ ) = 0. {\displaystyle \eta ^{\mu \nu }\partial _{\mu }\partial _{\nu }\phi +V'(\phi )=\partial _{t}^{2}\phi -\nabla ^{2}\phi +V'(\phi )=0.} === Dimensional analysis and scaling === Physical quantities in these scalar field theories may have dimensions of length, time or mass, or some combination of the three. However, in a relativistic theory, any quantity t, with dimensions of time, can be readily converted into a length, l =ct, by using the velocity of light, c. 
Similarly, any length l is equivalent to an inverse mass, ħ=lmc, using the Planck constant, ħ. In natural units, one thinks of a time as a length, or either time or length as an inverse mass. In short, one can think of the dimensions of any physical quantity as defined in terms of just one independent dimension, rather than in terms of all three. This is most often termed the mass dimension of the quantity. Knowing the dimensions of each quantity allows one to uniquely restore conventional dimensions from a natural units expression in terms of this mass dimension, by simply reinserting the requisite powers of ħ and c required for dimensional consistency. One conceivable objection is that this theory is classical, and therefore it is not obvious how the Planck constant should be a part of the theory at all. If desired, one could indeed recast the theory without mass dimensions at all; however, this would be at the expense of slightly obscuring the connection with the quantum scalar field. Given that one has dimensions of mass, the Planck constant is thought of here as an essentially arbitrary fixed reference quantity of action (not necessarily connected to quantization), hence with dimensions appropriate to convert between mass and inverse length. ==== Scaling dimension ==== The classical scaling dimension, or mass dimension, Δ, of φ describes the transformation of the field under a rescaling of coordinates: x → λ x {\displaystyle x\rightarrow \lambda x} ϕ → λ − Δ ϕ . {\displaystyle \phi \rightarrow \lambda ^{-\Delta }\phi ~.} The units of action are the same as the units of ħ, and so the action itself has zero mass dimension. This fixes the scaling dimension of the field φ to be Δ = D − 2 2 . {\displaystyle \Delta ={\frac {D-2}{2}}.} ==== Scale invariance ==== There is a specific sense in which some scalar field theories are scale-invariant. 
While the actions above are all constructed to have zero mass dimension, not all actions are invariant under the scaling transformation x → λ x {\displaystyle x\rightarrow \lambda x} ϕ → λ − Δ ϕ . {\displaystyle \phi \rightarrow \lambda ^{-\Delta }\phi ~.} The reason that not all actions are invariant is that one usually thinks of the parameters m and gn as fixed quantities, which are not rescaled under the transformation above. The condition for a scalar field theory to be scale invariant is then quite obvious: all of the parameters appearing in the action should be dimensionless quantities. In other words, a scale invariant theory is one without any fixed length scale (or equivalently, mass scale) in the theory. For a scalar field theory with D spacetime dimensions, the only dimensionless parameter gn satisfies n = 2D⁄(D − 2) . For example, in D = 4, only g4 is classically dimensionless, and so the only classically scale-invariant scalar field theory in D = 4 is the massless φ4 theory. Classical scale invariance, however, normally does not imply quantum scale invariance, because of the renormalization group involved – see the discussion of the beta function below. ==== Conformal invariance ==== A transformation x → x ~ ( x ) {\displaystyle x\rightarrow {\tilde {x}}(x)} is said to be conformal if the transformation satisfies ∂ x μ ~ ∂ x ρ ∂ x ν ~ ∂ x σ η μ ν = λ 2 ( x ) η ρ σ {\displaystyle {\frac {\partial {\tilde {x^{\mu }}}}{\partial x^{\rho }}}{\frac {\partial {\tilde {x^{\nu }}}}{\partial x^{\sigma }}}\eta _{\mu \nu }=\lambda ^{2}(x)\eta _{\rho \sigma }} for some function λ(x). The conformal group contains as subgroups the isometries of the metric η μ ν {\displaystyle \eta _{\mu \nu }} (the Poincaré group) and also the scaling transformations (or dilatations) considered above. In fact, the scale-invariant theories in the previous section are also conformally-invariant. 
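The counting behind these statements is a one-liner: the field has mass dimension Δ = (D − 2)/2, so the coupling gₙ of φⁿ is dimensionless exactly when nΔ = D. A sketch (plain Python; the function names are ours):

```python
def scaling_dim(D):
    # classical mass dimension of the scalar field: Δ = (D - 2)/2
    return (D - 2) / 2

def marginal_power(D):
    # g_n is classically dimensionless when n·Δ = D, i.e. n = 2D/(D - 2)
    return 2 * D / (D - 2)

# D = 4 singles out φ⁴; D = 3 gives φ⁶, and D = 6 gives φ³
examples = {D: marginal_power(D) for D in (3, 4, 6)}
```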
=== φ4 theory === Massive φ4 theory illustrates a number of interesting phenomena in scalar field theory. The Lagrangian density is L = 1 2 ( ∂ t ϕ ) 2 − 1 2 δ i j ∂ i ϕ ∂ j ϕ − 1 2 m 2 ϕ 2 − g 4 ! ϕ 4 . {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{t}\phi )^{2}-{\frac {1}{2}}\delta ^{ij}\partial _{i}\phi \partial _{j}\phi -{\frac {1}{2}}m^{2}\phi ^{2}-{\frac {g}{4!}}\phi ^{4}.} ==== Spontaneous symmetry breaking ==== This Lagrangian has a Z 2 {\displaystyle \mathbb {Z} _{2}} symmetry under the transformation φ→ −φ. This is an example of an internal symmetry, in contrast to a space-time symmetry. If m2 is positive, the potential V ( ϕ ) = 1 2 m 2 ϕ 2 + g 4 ! ϕ 4 {\displaystyle V(\phi )={\frac {1}{2}}m^{2}\phi ^{2}+{\frac {g}{4!}}\phi ^{4}} has a single minimum, at the origin. The solution φ=0 is clearly invariant under the Z 2 {\displaystyle \mathbb {Z} _{2}} symmetry. Conversely, if m2 is negative, then one can readily see that the potential V ( ϕ ) = 1 2 m 2 ϕ 2 + g 4 ! ϕ 4 {\displaystyle V(\phi )={\frac {1}{2}}m^{2}\phi ^{2}+{\frac {g}{4!}}\phi ^{4}} has two minima. This is known as a double well potential, and the lowest energy states (known as the vacua, in quantum field theoretical language) in such a theory are not invariant under the Z 2 {\displaystyle \mathbb {Z} _{2}} symmetry of the action (in fact it maps each of the two vacua into the other). In this case, the Z 2 {\displaystyle \mathbb {Z} _{2}} symmetry is said to be spontaneously broken. ==== Kink solutions ==== The φ4 theory with a negative m2 also has a kink solution, which is a canonical example of a soliton. Such a solution is of the form ϕ ( x → , t ) = ± m 2 g 4 ! tanh ⁡ [ m ( x − x 0 ) 2 ] {\displaystyle \phi ({\vec {x}},t)=\pm {\frac {m}{2{\sqrt {\frac {g}{4!}}}}}\tanh \left[{\frac {m(x-x_{0})}{\sqrt {2}}}\right]} where x is one of the spatial variables (φ is taken to be independent of t, and the remaining spatial variables). 
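Both claims above — the location of the nonzero minima for m² < 0 and the kink profile — can be checked numerically. In the sketch below (plain Python, writing μ² = −m² > 0, with illustrative parameter values of our choosing), the vacuum value ±√(6μ²/g) annihilates V′, and the tanh profile satisfies the static equation of motion φ″ = V′(φ) by a finite-difference test:

```python
import math

mu2, g = 1.0, 2.0                 # μ² = −m² > 0 (broken phase), quartic coupling
A = math.sqrt(6.0 * mu2 / g)      # vacuum value: V'(±A) = 0 for V = m²φ²/2 + gφ⁴/4!

def dV(phi):
    return -mu2 * phi + g * phi**3 / 6.0   # V'(φ) with m² = −μ²

def kink(x):
    # kink amplitude m/(2√(g/4!)) equals A when m is read as √(μ²)
    return A * math.tanh(math.sqrt(mu2) * x / math.sqrt(2.0))

# finite-difference check of the static equation of motion φ'' = V'(φ)
h, x0 = 1e-4, 0.7
d2 = (kink(x0 + h) - 2.0 * kink(x0) + kink(x0 - h)) / h**2
residual = d2 - dV(kink(x0))
```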
The solution interpolates between the two different vacua of the double well potential. It is not possible to deform the kink into a constant solution without passing through a solution of infinite energy, and for this reason the kink is said to be stable. For D>2 (i.e., theories with more than one spatial dimension), this solution is called a domain wall. Another well-known example of a scalar field theory with kink solutions is the sine-Gordon theory. === Complex scalar field theory === In a complex scalar field theory, the scalar field takes values in the complex numbers, rather than the real numbers. The complex scalar field represents spin-0 particles and antiparticles with charge. The action considered normally takes the form S = ∫ d D − 1 x d t L = ∫ d D − 1 x d t [ η μ ν ∂ μ ϕ ∗ ∂ ν ϕ − V ( | ϕ | 2 ) ] {\displaystyle {\mathcal {S}}=\int \mathrm {d} ^{D-1}x\,\mathrm {d} t{\mathcal {L}}=\int \mathrm {d} ^{D-1}x\,\mathrm {d} t\left[\eta ^{\mu \nu }\partial _{\mu }\phi ^{*}\partial _{\nu }\phi -V(|\phi |^{2})\right]} This has a U(1), equivalently O(2) symmetry, whose action on the space of fields rotates ϕ → e i α ϕ {\displaystyle \phi \rightarrow e^{i\alpha }\phi } , for some real phase angle α. As for the real scalar field, spontaneous symmetry breaking is found if m2 is negative. This gives rise to Goldstone's Mexican hat potential, which is a rotation of the double-well potential of a real scalar field through 2π radians about the V ( ϕ ) {\displaystyle V(\phi )} axis. The symmetry breaking takes place in one higher dimension, i.e., the choice of vacuum breaks a continuous U(1) symmetry instead of a discrete one. The two components of the scalar field are reconfigured as a massive mode and a massless Goldstone boson. === O(N) theory === One can express the complex scalar field theory in terms of two real fields, φ1 = Re φ and φ2 = Im φ, which transform in the vector representation of the U(1) = O(2) internal symmetry. 
Although such fields transform as a vector under the internal symmetry, they are still Lorentz scalars. This can be generalised to a theory of N scalar fields transforming in the vector representation of the O(N) symmetry. The Lagrangian for an O(N)-invariant scalar field theory is typically of the form L = 1 2 η μ ν ∂ μ ϕ ⋅ ∂ ν ϕ − V ( ϕ ⋅ ϕ ) {\displaystyle {\mathcal {L}}={\frac {1}{2}}\eta ^{\mu \nu }\partial _{\mu }\phi \cdot \partial _{\nu }\phi -V(\phi \cdot \phi )} using an appropriate O(N)-invariant inner product. The theory can also be expressed for complex vector fields, i.e. for ϕ ∈ C n {\displaystyle \phi \in \mathbb {C} ^{n}} , in which case the symmetry group is the Lie group SU(N). === Gauge-field couplings === When the scalar field theory is coupled in a gauge invariant way to the Yang–Mills action, one obtains the Ginzburg–Landau theory of superconductors. The topological solitons of that theory correspond to vortices in a superconductor; the minimum of the Mexican hat potential corresponds to the order parameter of the superconductor. == Quantum scalar field theory == A general reference for this section is Ramond, Pierre (2001-12-21). Field Theory: A Modern Primer (Second Edition). USA: Westview Press. ISBN 0-201-30450-3, Ch. 4 In quantum field theory, the fields, and all observables constructed from them, are replaced by quantum operators on a Hilbert space. This Hilbert space is built on a vacuum state, and dynamics are governed by a quantum Hamiltonian, a positive-definite operator which annihilates the vacuum. A construction of a quantum scalar field theory is detailed in the canonical quantization article, which relies on canonical commutation relations among the fields. 
Essentially, the infinity of classical oscillators repackaged in the scalar field as its (decoupled) normal modes, above, are now quantized in the standard manner, so the respective quantum operator field describes an infinity of quantum harmonic oscillators acting on a respective Fock space. In brief, the basic variables are the quantum field φ and its canonical momentum π. Both these operator-valued fields are Hermitian. At spatial points x→, y→ and at equal times, their canonical commutation relations are given by [ ϕ ( x → ) , ϕ ( y → ) ] = [ π ( x → ) , π ( y → ) ] = 0 , [ ϕ ( x → ) , π ( y → ) ] = i δ ( x → − y → ) , {\displaystyle {\begin{aligned}\left[\phi \left({\vec {x}}\right),\phi \left({\vec {y}}\right)\right]=\left[\pi \left({\vec {x}}\right),\pi \left({\vec {y}}\right)\right]&=0,\\\left[\phi \left({\vec {x}}\right),\pi \left({\vec {y}}\right)\right]&=i\delta \left({\vec {x}}-{\vec {y}}\right),\end{aligned}}} while the free Hamiltonian is, similarly to above, H = ∫ d 3 x [ 1 2 π 2 + 1 2 ( ∇ ϕ ) 2 + m 2 2 ϕ 2 ] . 
{\displaystyle H=\int d^{3}x\left[{1 \over 2}\pi ^{2}+{1 \over 2}(\nabla \phi )^{2}+{m^{2} \over 2}\phi ^{2}\right].} A spatial Fourier transform leads to momentum space fields ϕ ~ ( k → ) = ∫ d 3 x e − i k → ⋅ x → ϕ ( x → ) , π ~ ( k → ) = ∫ d 3 x e − i k → ⋅ x → π ( x → ) {\displaystyle {\begin{aligned}{\widetilde {\phi }}({\vec {k}})&=\int d^{3}xe^{-i{\vec {k}}\cdot {\vec {x}}}\phi ({\vec {x}}),\\{\widetilde {\pi }}({\vec {k}})&=\int d^{3}xe^{-i{\vec {k}}\cdot {\vec {x}}}\pi ({\vec {x}})\end{aligned}}} which resolve to annihilation and creation operators a ( k → ) = ( E ϕ ~ ( k → ) + i π ~ ( k → ) ) , a † ( k → ) = ( E ϕ ~ ( k → ) − i π ~ ( k → ) ) , {\displaystyle {\begin{aligned}a({\vec {k}})&=\left(E{\widetilde {\phi }}({\vec {k}})+i{\widetilde {\pi }}({\vec {k}})\right),\\a^{\dagger }({\vec {k}})&=\left(E{\widetilde {\phi }}({\vec {k}})-i{\widetilde {\pi }}({\vec {k}})\right),\end{aligned}}} where E = k 2 + m 2 {\displaystyle E={\sqrt {k^{2}+m^{2}}}} . These operators satisfy the commutation relations [ a ( k → 1 ) , a ( k → 2 ) ] = [ a † ( k → 1 ) , a † ( k → 2 ) ] = 0 , [ a ( k → 1 ) , a † ( k → 2 ) ] = ( 2 π ) 3 2 E δ ( k → 1 − k → 2 ) . {\displaystyle {\begin{aligned}\left[a({\vec {k}}_{1}),a({\vec {k}}_{2})\right]=\left[a^{\dagger }({\vec {k}}_{1}),a^{\dagger }({\vec {k}}_{2})\right]&=0,\\\left[a({\vec {k}}_{1}),a^{\dagger }({\vec {k}}_{2})\right]&=(2\pi )^{3}2E\delta ({\vec {k}}_{1}-{\vec {k}}_{2}).\end{aligned}}} The state | 0 ⟩ {\displaystyle |0\rangle } annihilated by all of the operators a is identified as the bare vacuum, and a particle with momentum k→ is created by applying a † ( k → ) {\displaystyle a^{\dagger }({\vec {k}})} to the vacuum. Applying all possible combinations of creation operators to the vacuum constructs the relevant Hilbert space: This construction is called Fock space. 
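The oscillator algebra underlying this construction can be made concrete with truncated matrices: on an N-dimensional slice of a single mode's Fock space, a has entries √j on the superdiagonal, and [a, a†] is the identity except in the last diagonal entry, a truncation artifact. The sketch below (plain Python) is our own finite-dimensional illustration; the field-theoretic relations above involve a continuum of such modes with a different (2π)³2E normalization.

```python
import math

N = 6  # truncated Fock-space dimension

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# annihilation operator: a |n> = sqrt(n) |n-1>
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
adag = [list(row) for row in zip(*a)]   # Hermitian conjugate (real entries: transpose)

AAd, AdA = matmul(a, adag), matmul(adag, a)
comm = [[AAd[i][j] - AdA[i][j] for j in range(N)] for i in range(N)]
# comm is the identity except comm[N-1][N-1] = -(N-1), from truncating the space
```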
The vacuum is annihilated by the Hamiltonian H = ∫ d 3 k ( 2 π ) 3 1 2 a † ( k → ) a ( k → ) , {\displaystyle H=\int {d^{3}k \over (2\pi )^{3}}{\frac {1}{2}}a^{\dagger }({\vec {k}})a({\vec {k}}),} where the zero-point energy has been removed by Wick ordering. (See canonical quantization.) Interactions can be included by adding an interaction Hamiltonian. For a φ4 theory, this corresponds to adding a Wick ordered term g:φ4:/4! to the Hamiltonian, and integrating over x. Scattering amplitudes may be calculated from this Hamiltonian in the interaction picture. These are constructed in perturbation theory by means of the Dyson series, which gives the time-ordered products, or n-particle Green's functions ⟨ 0 | T { ϕ ( x 1 ) ⋯ ϕ ( x n ) } | 0 ⟩ {\displaystyle \langle 0|{\mathcal {T}}\{\phi (x_{1})\cdots \phi (x_{n})\}|0\rangle } as described in the Dyson series article. The Green's functions may also be obtained from a generating function that is constructed as a solution to the Schwinger–Dyson equation. === Feynman path integral === The Feynman diagram expansion may be obtained also from the Feynman path integral formulation. The time ordered vacuum expectation values of polynomials in φ, known as the n-particle Green's functions, are constructed by integrating over all possible fields, normalized by the vacuum expectation value with no external fields, ⟨ 0 | T { ϕ ( x 1 ) ⋯ ϕ ( x n ) } | 0 ⟩ = ∫ D ϕ ϕ ( x 1 ) ⋯ ϕ ( x n ) e i ∫ d 4 x ( 1 2 ∂ μ ϕ ∂ μ ϕ − m 2 2 ϕ 2 − g 4 ! ϕ 4 ) ∫ D ϕ e i ∫ d 4 x ( 1 2 ∂ μ ϕ ∂ μ ϕ − m 2 2 ϕ 2 − g 4 ! ϕ 4 ) . 
{\displaystyle \langle 0|{\mathcal {T}}\{\phi (x_{1})\cdots \phi (x_{n})\}|0\rangle ={\frac {\int {\mathcal {D}}\phi \phi (x_{1})\cdots \phi (x_{n})e^{i\int d^{4}x\left({1 \over 2}\partial ^{\mu }\phi \partial _{\mu }\phi -{m^{2} \over 2}\phi ^{2}-{g \over 4!}\phi ^{4}\right)}}{\int {\mathcal {D}}\phi e^{i\int d^{4}x\left({1 \over 2}\partial ^{\mu }\phi \partial _{\mu }\phi -{m^{2} \over 2}\phi ^{2}-{g \over 4!}\phi ^{4}\right)}}}.} All of these Green's functions may be obtained by expanding the exponential in J(x)φ(x) in the generating function Z [ J ] = ∫ D ϕ e i ∫ d 4 x ( 1 2 ∂ μ ϕ ∂ μ ϕ − m 2 2 ϕ 2 − g 4 ! ϕ 4 + J ϕ ) = Z [ 0 ] ∑ n = 0 ∞ i n n ! J ( x 1 ) ⋯ J ( x n ) ⟨ 0 | T { ϕ ( x 1 ) ⋯ ϕ ( x n ) } | 0 ⟩ . {\displaystyle Z[J]=\int {\mathcal {D}}\phi e^{i\int d^{4}x\left({1 \over 2}\partial ^{\mu }\phi \partial _{\mu }\phi -{m^{2} \over 2}\phi ^{2}-{g \over 4!}\phi ^{4}+J\phi \right)}=Z[0]\sum _{n=0}^{\infty }{\frac {i^{n}}{n!}}J(x_{1})\cdots J(x_{n})\langle 0|{\mathcal {T}}\{\phi (x_{1})\cdots \phi (x_{n})\}|0\rangle .} A Wick rotation may be applied to make time imaginary. Changing the signature to (++++) then turns the Feynman integral into a statistical mechanics partition function in Euclidean space, Z [ J ] = ∫ D ϕ e − ∫ d 4 x [ 1 2 ( ∇ ϕ ) 2 + m 2 2 ϕ 2 + g 4 ! ϕ 4 + J ϕ ] . {\displaystyle Z[J]=\int {\mathcal {D}}\phi e^{-\int d^{4}x\left[{1 \over 2}(\nabla \phi )^{2}+{m^{2} \over 2}\phi ^{2}+{g \over 4!}\phi ^{4}+J\phi \right]}.} Normally, this is applied to the scattering of particles with fixed momenta, in which case, a Fourier transform is useful, giving instead Z ~ [ J ~ ] = ∫ D ϕ ~ e − ∫ d 4 p ( 2 π ) 4 ( 1 2 ( p 2 + m 2 ) ϕ ~ 2 − J ~ ϕ ~ + g 4 ! ∫ d 4 p 1 ( 2 π ) 4 d 4 p 2 ( 2 π ) 4 d 4 p 3 ( 2 π ) 4 δ ( p − p 1 − p 2 − p 3 ) ϕ ~ ( p ) ϕ ~ ( p 1 ) ϕ ~ ( p 2 ) ϕ ~ ( p 3 ) ) . 
{\displaystyle {\tilde {Z}}[{\tilde {J}}]=\int {\mathcal {D}}{\tilde {\phi }}e^{-\int {d^{4}p \over (2\pi )^{4}}\left({1 \over 2}(p^{2}+m^{2}){\tilde {\phi }}^{2}-{\tilde {J}}{\tilde {\phi }}+{g \over 4!}{\int {d^{4}p_{1} \over (2\pi )^{4}}{d^{4}p_{2} \over (2\pi )^{4}}{d^{4}p_{3} \over (2\pi )^{4}}\delta (p-p_{1}-p_{2}-p_{3}){\tilde {\phi }}(p){\tilde {\phi }}(p_{1}){\tilde {\phi }}(p_{2}){\tilde {\phi }}(p_{3})}\right)}.} where δ ( x ) {\displaystyle \delta (x)} is the Dirac delta function. The standard trick to evaluate this functional integral is to write it as a product of exponential factors, schematically, Z ~ [ J ~ ] = ∫ D ϕ ~ ∏ p [ e − ( p 2 + m 2 ) ϕ ~ 2 / 2 e − g / 4 ! ∫ d 4 p 1 ( 2 π ) 4 d 4 p 2 ( 2 π ) 4 d 4 p 3 ( 2 π ) 4 δ ( p − p 1 − p 2 − p 3 ) ϕ ~ ( p ) ϕ ~ ( p 1 ) ϕ ~ ( p 2 ) ϕ ~ ( p 3 ) e J ~ ϕ ~ ] . {\displaystyle {\tilde {Z}}[{\tilde {J}}]=\int {\mathcal {D}}{\tilde {\phi }}\prod _{p}\left[e^{-(p^{2}+m^{2}){\tilde {\phi }}^{2}/2}e^{-g/4!\int {d^{4}p_{1} \over (2\pi )^{4}}{d^{4}p_{2} \over (2\pi )^{4}}{d^{4}p_{3} \over (2\pi )^{4}}\delta (p-p_{1}-p_{2}-p_{3}){\tilde {\phi }}(p){\tilde {\phi }}(p_{1}){\tilde {\phi }}(p_{2}){\tilde {\phi }}(p_{3})}e^{{\tilde {J}}{\tilde {\phi }}}\right].} The second two exponential factors can be expanded as power series, and the combinatorics of this expansion can be represented graphically through Feynman diagrams of the Quartic interaction. The integral with g = 0 can be treated as a product of infinitely many elementary Gaussian integrals: the result may be expressed as a sum of Feynman diagrams, calculated using the following Feynman rules: Each field ~φ(p) in the n-point Euclidean Green's function is represented by an external line (half-edge) in the graph, and associated with momentum p. Each vertex is represented by a factor −g. At a given order gk, all diagrams with n external lines and k vertices are constructed such that the momenta flowing into each vertex sum to zero. 
Each internal line is represented by a propagator 1/(q2 + m2), where q is the momentum flowing through that line. Any unconstrained momenta are integrated over all values. The result is divided by a symmetry factor, which is the number of ways the lines and vertices of the graph can be rearranged without changing its connectivity. Do not include graphs containing "vacuum bubbles", connected subgraphs with no external lines. The last rule takes into account the effect of dividing by ~Z[0]. The Minkowski-space Feynman rules are similar, except that each vertex is represented by −ig, while each internal line is represented by a propagator i/(q2−m2+iε), where the ε term represents the small Wick rotation needed to make the Minkowski-space Gaussian integral converge. === Renormalization === The integrals over unconstrained momenta, called "loop integrals", in the Feynman graphs typically diverge. This is normally handled by renormalization, which is a procedure of adding divergent counter-terms to the Lagrangian in such a way that the diagrams constructed from the original Lagrangian and counter-terms are finite. A renormalization scale must be introduced in the process, and the coupling constant and mass become dependent upon it. The dependence of a coupling constant g on the scale λ is encoded by a beta function, β(g), defined by β ( g ) = λ ∂ g ∂ λ . {\displaystyle \beta (g)=\lambda \,{\frac {\partial g}{\partial \lambda }}~.} This dependence on the energy scale is known as "the running of the coupling parameter", and the theory of this systematic scale-dependence in quantum field theory is described by the renormalization group. Beta-functions are usually computed in an approximation scheme, most commonly perturbation theory, where one assumes that the coupling constant is small. 
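Taking the one-loop φ⁴ result β(g) = 3g²/16π² (quoted below) as an example, the flow dg/dt = bg² with t = ln(λ/λ₀) integrates in closed form to g(t) = g₀/(1 − bg₀t), which blows up at finite t — the Landau pole. The sketch below (plain Python, with an illustrative g₀ of our choosing) integrates the flow with small Euler steps and compares against this solution:

```python
import math

b = 3.0 / (16.0 * math.pi**2)   # one-loop coefficient for φ⁴ theory

def g_exact(g0, t):
    # closed-form solution of dg/dt = b g², t = ln(λ/λ0); Landau pole at t = 1/(b g0)
    return g0 / (1.0 - b * g0 * t)

g0, t_end, n = 0.5, 10.0, 100_000
dt = t_end / n
g = g0
for _ in range(n):              # forward-Euler integration of the flow
    g += b * g * g * dt
```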
One can then make an expansion in powers of the coupling parameters and truncate the higher-order terms (also known as higher loop contributions, due to the number of loops in the corresponding Feynman graphs). The β-function at one loop (the first perturbative contribution) for the φ4 theory is β ( g ) = 3 16 π 2 g 2 + O ( g 3 ) . {\displaystyle \beta (g)={\frac {3}{16\pi ^{2}}}g^{2}+O\left(g^{3}\right)~.} The fact that the sign in front of the lowest-order term is positive suggests that the coupling constant increases with energy. If this behavior persisted at large couplings, this would indicate the presence of a Landau pole at finite energy, arising from quantum triviality. However, the question can only be answered non-perturbatively, since it involves strong coupling. A quantum field theory is said to be trivial when the renormalized coupling, computed through its beta function, goes to zero when the ultraviolet cutoff is removed. Consequently, the propagator becomes that of a free particle and the field is no longer interacting. For a φ4 interaction, Michael Aizenman proved that the theory is indeed trivial, for space-time dimension D ≥ 5. For D = 4, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for this. This fact is important as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass. This can also lead to a predictable Higgs mass in asymptotic safety scenarios. == See also == Renormalization Quantum triviality Landau pole Scale invariance (CFT description) Scalar electrodynamics == Notes == == References == Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 978-0201503975. Weinberg, S. (1995). The Quantum Theory of Fields. Vol. I. Cambridge University Press. ISBN 0-521-55001-7. Weinberg, S. (1998). The Quantum Theory of Fields. Vol. II. Cambridge University Press. ISBN 0-521-55002-5. Srednicki, M. (2007). 
Quantum Field Theory. Cambridge University Press. ISBN 9780521864497. Zinn-Justin, J (2002). Quantum Field Theory and Critical Phenomena. Oxford University Press. ISBN 978-0198509233. == External links == The Conceptual Basis of Quantum Field Theory Click on the link for Chap. 3 to find an extensive, simplified introduction to scalars in relativistic quantum mechanics and quantum field theory.
Wikipedia/Scalar_field_theory
In mathematics, the orthogonal group in dimension n, denoted O(n), is the group of distance-preserving transformations of a Euclidean space of dimension n that preserve a fixed point, where the group operation is given by composing transformations. The orthogonal group is sometimes called the general orthogonal group, by analogy with the general linear group. Equivalently, it is the group of n × n orthogonal matrices, where the group operation is given by matrix multiplication (an orthogonal matrix is a real matrix whose inverse equals its transpose). The orthogonal group is an algebraic group and a Lie group. It is compact. The orthogonal group in dimension n has two connected components. The one that contains the identity element is a normal subgroup, called the special orthogonal group, and denoted SO(n). It consists of all orthogonal matrices of determinant 1. This group is also called the rotation group, generalizing the fact that in dimensions 2 and 3, its elements are the usual rotations around a point (in dimension 2) or a line (in dimension 3). In low dimension, these groups have been widely studied, see SO(2), SO(3) and SO(4). The other component consists of all orthogonal matrices of determinant −1. This component does not form a group, as the product of any two of its elements is of determinant 1, and therefore not an element of the component. By extension, for any field F, an n × n matrix with entries in F such that its inverse equals its transpose is called an orthogonal matrix over F. The n × n orthogonal matrices form a subgroup, denoted O(n, F), of the general linear group GL(n, F); that is O ⁡ ( n , F ) = { Q ∈ GL ⁡ ( n , F ) ∣ Q T Q = Q Q T = I } . 
{\displaystyle \operatorname {O} (n,F)=\left\{Q\in \operatorname {GL} (n,F)\mid Q^{\mathsf {T}}Q=QQ^{\mathsf {T}}=I\right\}.} More generally, given a non-degenerate symmetric bilinear form or quadratic form on a vector space over a field, the orthogonal group of the form is the group of invertible linear maps that preserve the form. The preceding orthogonal groups are the special case where, on some basis, the bilinear form is the dot product, or, equivalently, the quadratic form is the sum of the squares of the coordinates. All orthogonal groups are algebraic groups, since the condition of preserving a form can be expressed as an equality of matrices. == Name == The name of "orthogonal group" originates from the following characterization of its elements. Given a Euclidean vector space E of dimension n, the elements of the orthogonal group O(n) are, up to a uniform scaling (homothecy), the linear maps from E to E that map orthogonal vectors to orthogonal vectors. == In Euclidean geometry == The orthogonal group O(n) is the subgroup of the general linear group GL(n, R), consisting of all endomorphisms that preserve the Euclidean norm; that is, endomorphisms g such that ‖ g ( x ) ‖ = ‖ x ‖ . {\displaystyle \|g(x)\|=\|x\|.} Let E(n) be the group of the Euclidean isometries of a Euclidean space S of dimension n. This group does not depend on the choice of a particular space, since all Euclidean spaces of the same dimension are isomorphic. The stabilizer subgroup of a point x ∈ S is the subgroup of the elements g ∈ E(n) such that g(x) = x. This stabilizer is (or, more exactly, is isomorphic to) O(n), since the choice of a point as an origin induces an isomorphism between the Euclidean space and its associated Euclidean vector space.
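As an illustrative aside (not part of the article's sources), the defining condition ‖g(x)‖ = ‖x‖ can be checked numerically. The following plain-Python sketch, with ad hoc helper names, applies a 2 × 2 rotation matrix to a vector and confirms that the Euclidean norm is unchanged:

```python
import math

def rotation2(theta):
    """2 x 2 rotation matrix, as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, v):
    """Matrix-vector product."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def norm(v):
    """Euclidean norm."""
    return math.sqrt(sum(x * x for x in v))

g = rotation2(0.7)
v = [3.0, 4.0]
# An orthogonal transformation leaves the Euclidean norm unchanged.
assert abs(norm(apply(g, v)) - norm(v)) < 1e-12
```

The same check passes for any product of rotations and reflections, since the norm-preserving maps form a group.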
There is a natural group homomorphism p from E(n) to O(n), which is defined by p ( g ) ( y − x ) = g ( y ) − g ( x ) , {\displaystyle p(g)(y-x)=g(y)-g(x),} where, as usual, the subtraction of two points denotes the translation vector that maps the second point to the first one. This is a well defined homomorphism, since a straightforward verification shows that, if two pairs of points have the same difference, the same is true for their images by g (for details, see Affine space § Subtraction and Weyl's axioms). The kernel of p is the vector space of the translations. So, the translations form a normal subgroup of E(n), the stabilizers of two points are conjugate under the action of the translations, and all stabilizers are isomorphic to O(n). Moreover, the Euclidean group is a semidirect product of O(n) and the group of translations. It follows that the study of the Euclidean group is essentially reduced to the study of O(n). === Special orthogonal group === By choosing an orthonormal basis of a Euclidean vector space, the orthogonal group can be identified with the group (under matrix multiplication) of orthogonal matrices, which are the matrices such that Q Q T = I . {\displaystyle QQ^{\mathsf {T}}=I.} It follows from this equation that the square of the determinant of Q equals 1, and thus the determinant of Q is either 1 or −1. The orthogonal matrices with determinant 1 form a subgroup called the special orthogonal group, denoted SO(n), consisting of all direct isometries of O(n), which are those that preserve the orientation of the space. SO(n) is a normal subgroup of O(n), as being the kernel of the determinant, which is a group homomorphism whose image is the multiplicative group {−1, +1}. This implies that the orthogonal group is an internal semidirect product of SO(n) and any subgroup formed with the identity and a reflection. 
The group with two elements {±I} (where I is the identity matrix) is a normal subgroup and even a characteristic subgroup of O(n), and, if n is even, also of SO(n). If n is odd, O(n) is the internal direct product of SO(n) and {±I}. The group SO(2) is abelian (whereas SO(n) is not abelian when n > 2). Its finite subgroups are the cyclic group Ck of k-fold rotations, for every positive integer k. All these groups are normal subgroups of O(2) and SO(2). === Canonical form === For any element of O(n) there is an orthogonal basis, where its matrix has the form [ R 1 ⋱ R k 0 0 ± 1 ⋱ ± 1 ] , {\displaystyle {\begin{bmatrix}{\begin{matrix}R_{1}&&\\&\ddots &\\&&R_{k}\end{matrix}}&0\\0&{\begin{matrix}\pm 1&&\\&\ddots &\\&&\pm 1\end{matrix}}\\\end{bmatrix}},} where there may be any number, including zero, of ±1's; and where the matrices R1, ..., Rk are 2-by-2 rotation matrices, that is matrices of the form [ a − b b a ] , {\displaystyle {\begin{bmatrix}a&-b\\b&a\end{bmatrix}},} with a2 + b2 = 1. This results from the spectral theorem by regrouping eigenvalues that are complex conjugate, and taking into account that the absolute values of the eigenvalues of an orthogonal matrix are all equal to 1. The element belongs to SO(n) if and only if there are an even number of −1 on the diagonal. A pair of eigenvalues −1 can be identified with a rotation by π and a pair of eigenvalues +1 can be identified with a rotation by 0. The special case of n = 3 is known as Euler's rotation theorem, which asserts that every (non-identity) element of SO(3) is a rotation about a unique axis–angle pair. === Reflections === Reflections are the elements of O(n) whose canonical form is [ − 1 0 0 I ] , {\displaystyle {\begin{bmatrix}-1&0\\0&I\end{bmatrix}},} where I is the (n − 1) × (n − 1) identity matrix, and the zeros denote row or column zero matrices. In other words, a reflection is a transformation that transforms the space in its mirror image with respect to a hyperplane. 
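The canonical form described above can be illustrated concretely. The sketch below (illustrative pure Python, not from the article; all helper names are ad hoc) assembles a block-diagonal element of O(4) from one 2-by-2 rotation block and the diagonal entries −1 and +1, then verifies orthogonality and that an odd number of −1 entries forces determinant −1:

```python
import math

def block_diag(blocks):
    """Assemble a block-diagonal matrix from square blocks (nested lists)."""
    n = sum(len(b) for b in blocks)
    M = [[0.0] * n for _ in range(n)]
    i = 0
    for b in blocks:
        for r, row in enumerate(b):
            for c, x in enumerate(row):
                M[i + r][i + c] = x
        i += len(b)
    return M

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def det(A):
    """Laplace expansion along the first row (fine for small matrices)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

a, b = math.cos(0.4), math.sin(0.4)                     # a^2 + b^2 = 1
M = block_diag([[[a, -b], [b, a]], [[-1.0]], [[1.0]]])  # element of O(4) in canonical form
P = matmul(transpose(M), M)
assert all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(4) for j in range(4))         # M^T M = I, so M is orthogonal
assert abs(det(M) + 1.0) < 1e-12                        # one -1 entry: det = -1, so M is not in SO(4)
```

Replacing the single −1 by a pair of −1 entries makes the determinant +1, matching the parity criterion for membership in SO(n).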
In dimension two, every rotation can be decomposed into a product of two reflections. More precisely, a rotation of angle θ is the product of two reflections whose axes form an angle of θ / 2. A product of up to n elementary reflections always suffices to generate any element of O(n). This results immediately from the above canonical form and the case of dimension two. The Cartan–Dieudonné theorem is the generalization of this result to the orthogonal group of a nondegenerate quadratic form over a field of characteristic different from two. The reflection through the origin (the map v ↦ −v) is an example of an element of O(n) that is not a product of fewer than n reflections. === Symmetry group of spheres === The orthogonal group O(n) is the symmetry group of the (n − 1)-sphere (for n = 3, this is just the sphere) and all objects with spherical symmetry, if the origin is chosen at the center. The symmetry group of a circle is O(2). The orientation-preserving subgroup SO(2) is isomorphic (as a real Lie group) to the circle group, also known as U(1), the multiplicative group of the complex numbers of absolute value equal to one. This isomorphism sends the complex number exp(φ i) = cos(φ) + i sin(φ) of absolute value 1 to the special orthogonal matrix [ cos ⁡ ( φ ) − sin ⁡ ( φ ) sin ⁡ ( φ ) cos ⁡ ( φ ) ] . {\displaystyle {\begin{bmatrix}\cos(\varphi )&-\sin(\varphi )\\\sin(\varphi )&\cos(\varphi )\end{bmatrix}}.} In higher dimension, O(n) has a more complicated structure (in particular, it is no longer commutative). The topological structures of the n-sphere and O(n) are strongly correlated, and this correlation is widely used for studying both topological spaces. == Group structure == The groups O(n) and SO(n) are real compact Lie groups of dimension n(n − 1) / 2. The group O(n) has two connected components, with SO(n) being the identity component, that is, the connected component containing the identity matrix. 
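The two-dimensional decomposition of a rotation into two reflections, described earlier in this section, can be verified numerically. In this hypothetical sketch (illustrative Python, not from the article), refl(a) is the matrix of the reflection across the line through the origin at angle a; composing two reflections whose axes differ by θ/2 reproduces the rotation by θ:

```python
import math

def refl(a):
    """Reflection across the line through the origin at angle a."""
    c, s = math.cos(2 * a), math.sin(2 * a)
    return [[c, s], [s, -c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

theta = 1.1
alpha = 0.3
P = matmul(refl(alpha + theta / 2), refl(alpha))  # axes at angle theta/2 to each other
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta), math.cos(theta)]]          # rotation by theta
assert all(abs(P[i][j] - R[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

The base angle alpha is arbitrary: only the angle between the two axes determines the resulting rotation.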
=== As algebraic groups === The orthogonal group O(n) can be identified with the group of the matrices A such that ATA = I. Since both members of this equation are symmetric matrices, this provides n(n + 1) / 2 equations that the entries of an orthogonal matrix must satisfy, and which are not all satisfied by the entries of any non-orthogonal matrix. This proves that O(n) is an algebraic set. Moreover, it can be proved that its dimension is n ( n − 1 ) 2 = n 2 − n ( n + 1 ) 2 , {\displaystyle {\frac {n(n-1)}{2}}=n^{2}-{\frac {n(n+1)}{2}},} which implies that O(n) is a complete intersection. This implies that all its irreducible components have the same dimension, and that it has no embedded component. In fact, O(n) has two irreducible components, that are distinguished by the sign of the determinant (that is det(A) = 1 or det(A) = −1). Both are nonsingular algebraic varieties of the same dimension n(n − 1) / 2. The component with det(A) = 1 is SO(n). === Maximal tori and Weyl groups === A maximal torus in a compact Lie group G is a maximal subgroup among those that are isomorphic to Tk for some k, where T = SO(2) is the standard one-dimensional torus. In O(2n) and SO(2n), for every maximal torus, there is a basis on which the torus consists of the block-diagonal matrices of the form [ R 1 0 ⋱ 0 R n ] , {\displaystyle {\begin{bmatrix}R_{1}&&0\\&\ddots &\\0&&R_{n}\end{bmatrix}},} where each Rj belongs to SO(2). In O(2n + 1) and SO(2n + 1), the maximal tori have the same form, bordered by a row and a column of zeros, and 1 on the diagonal. The Weyl group of SO(2n + 1) is the semidirect product { ± 1 } n ⋊ S n {\displaystyle \{\pm 1\}^{n}\rtimes S_{n}} of a normal elementary abelian 2-subgroup and a symmetric group, where the nontrivial element of each {±1} factor of {±1}n acts on the corresponding circle factor of T × {1} by inversion, and the symmetric group Sn acts on both {±1}n and T × {1} by permuting factors. 
The elements of the Weyl group are represented by matrices in O(2n) × {±1}. The Sn factor is represented by block permutation matrices with 2-by-2 blocks, and a final 1 on the diagonal. The {±1}n component is represented by block-diagonal matrices with 2-by-2 blocks either [ 1 0 0 1 ] or [ 0 1 1 0 ] , {\displaystyle {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\quad {\text{or}}\quad {\begin{bmatrix}0&1\\1&0\end{bmatrix}},} with the last component ±1 chosen to make the determinant 1. The Weyl group of SO(2n) is the subgroup H n − 1 ⋊ S n < { ± 1 } n ⋊ S n {\displaystyle H_{n-1}\rtimes S_{n}<\{\pm 1\}^{n}\rtimes S_{n}} of that of SO(2n + 1), where Hn−1 < {±1}n is the kernel of the product homomorphism {±1}n → {±1} given by ( ε 1 , … , ε n ) ↦ ε 1 ⋯ ε n {\displaystyle \left(\varepsilon _{1},\ldots ,\varepsilon _{n}\right)\mapsto \varepsilon _{1}\cdots \varepsilon _{n}} ; that is, Hn−1 < {±1}n is the subgroup with an even number of minus signs. The Weyl group of SO(2n) is represented in SO(2n) by the preimages under the standard injection SO(2n) → SO(2n + 1) of the representatives for the Weyl group of SO(2n + 1). Those matrices with an odd number of [ 0 1 1 0 ] {\displaystyle {\begin{bmatrix}0&1\\1&0\end{bmatrix}}} blocks have no remaining final −1 coordinate to make their determinants positive, and hence cannot be represented in SO(2n). == Topology == === Low-dimensional topology === The low-dimensional (real) orthogonal groups are familiar spaces: O(1) = S0, a two-point discrete space SO(1) = {1} SO(2) is S1 SO(3) is RP3 SO(4) is doubly covered by SU(2) × SU(2) = S3 × S3. === Fundamental group === In terms of algebraic topology, for n > 2 the fundamental group of SO(n, R) is cyclic of order 2, and the spin group Spin(n) is its universal cover. For n = 2 the fundamental group is infinite cyclic and the universal cover corresponds to the real line (the group Spin(2) is the unique connected 2-fold cover). 
=== Homotopy groups === Generally, the homotopy groups πk(O) of the real orthogonal group are related to homotopy groups of spheres, and thus are in general hard to compute. However, one can compute the homotopy groups of the stable orthogonal group (aka the infinite orthogonal group), defined as the direct limit of the sequence of inclusions: O ⁡ ( 0 ) ⊂ O ⁡ ( 1 ) ⊂ O ⁡ ( 2 ) ⊂ ⋯ ⊂ O = ⋃ k = 0 ∞ O ⁡ ( k ) {\displaystyle \operatorname {O} (0)\subset \operatorname {O} (1)\subset \operatorname {O} (2)\subset \cdots \subset O=\bigcup _{k=0}^{\infty }\operatorname {O} (k)} Since the inclusions are all closed, hence cofibrations, this can also be interpreted as a union. On the other hand, Sn is a homogeneous space for O(n + 1), and one has the following fiber bundle: O ⁡ ( n ) → O ⁡ ( n + 1 ) → S n , {\displaystyle \operatorname {O} (n)\to \operatorname {O} (n+1)\to S^{n},} which can be understood as "The orthogonal group O(n + 1) acts transitively on the unit sphere Sn, and the stabilizer of a point (thought of as a unit vector) is the orthogonal group of the perpendicular complement, which is an orthogonal group one dimension lower." Thus the natural inclusion O(n) → O(n + 1) is (n − 1)-connected, so the homotopy groups stabilize, and πk(O(n + 1)) = πk(O(n)) for n > k + 1: thus the homotopy groups of the stable space equal the lower homotopy groups of the unstable spaces. 
From Bott periodicity we obtain Ω8O ≃ O, therefore the homotopy groups of O are 8-fold periodic, meaning πk + 8(O) = πk(O), and so one need list only the first 8 homotopy groups: π 0 ( O ) = Z / 2 Z π 1 ( O ) = Z / 2 Z π 2 ( O ) = 0 π 3 ( O ) = Z π 4 ( O ) = 0 π 5 ( O ) = 0 π 6 ( O ) = 0 π 7 ( O ) = Z {\displaystyle {\begin{aligned}\pi _{0}(O)&=\mathbf {Z} /2\mathbf {Z} \\\pi _{1}(O)&=\mathbf {Z} /2\mathbf {Z} \\\pi _{2}(O)&=0\\\pi _{3}(O)&=\mathbf {Z} \\\pi _{4}(O)&=0\\\pi _{5}(O)&=0\\\pi _{6}(O)&=0\\\pi _{7}(O)&=\mathbf {Z} \end{aligned}}} ==== Relation to KO-theory ==== Via the clutching construction, homotopy groups of the stable space O are identified with stable vector bundles on spheres (up to isomorphism), with a dimension shift of 1: πk(O) = πk + 1(BO). Setting KO = BO × Z = Ω−1O × Z (to make π0 fit into the periodicity), one obtains: π 0 ( K O ) = Z π 1 ( K O ) = Z / 2 Z π 2 ( K O ) = Z / 2 Z π 3 ( K O ) = 0 π 4 ( K O ) = Z π 5 ( K O ) = 0 π 6 ( K O ) = 0 π 7 ( K O ) = 0 {\displaystyle {\begin{aligned}\pi _{0}(KO)&=\mathbf {Z} \\\pi _{1}(KO)&=\mathbf {Z} /2\mathbf {Z} \\\pi _{2}(KO)&=\mathbf {Z} /2\mathbf {Z} \\\pi _{3}(KO)&=0\\\pi _{4}(KO)&=\mathbf {Z} \\\pi _{5}(KO)&=0\\\pi _{6}(KO)&=0\\\pi _{7}(KO)&=0\end{aligned}}} ==== Computation and interpretation of homotopy groups ==== ===== Low-dimensional groups ===== The first few homotopy groups can be calculated by using the concrete descriptions of low-dimensional groups. π0(O) = π0(O(1)) = Z / 2Z, from orientation-preserving/reversing (this class survives to O(2) and hence stably) π1(O) = π1(SO(3)) = Z / 2Z, which is spin; this comes from SO(3) = RP3 = S3 / (Z / 2Z). π2(O) = π2(SO(3)) = 0, which surjects onto π2(SO(4)); this latter thus vanishes. ===== Lie groups ===== From general facts about Lie groups, π2(G) always vanishes, and π3(G) is free (free abelian). ===== Vector bundles ===== π0(KO) is a vector bundle over S0, which consists of two points.
Thus over each point, the bundle is trivial, and the non-triviality of the bundle is the difference between the dimensions of the vector spaces over the two points, so π0(KO) = Z is the dimension. ===== Loop spaces ===== Using concrete descriptions of the loop spaces in Bott periodicity, one can interpret the higher homotopies of O in terms of simpler-to-analyze homotopies of lower order. Using π0, O and O/U have two components, KO = BO × Z and KSp = BSp × Z have countably many components, and the rest are connected. ==== Interpretation of homotopy groups ==== In a nutshell: π0(KO) = Z is about dimension π1(KO) = Z / 2Z is about orientation π2(KO) = Z / 2Z is about spin π4(KO) = Z is about topological quantum field theory. Let R be any of the four division algebras R, C, H, O, and let LR be the tautological line bundle over the projective line RP1, and [LR] its class in K-theory. Noting that RP1 = S1, CP1 = S2, HP1 = S4, OP1 = S8, these yield vector bundles over the corresponding spheres, and π1(KO) is generated by [LR] π2(KO) is generated by [LC] π4(KO) is generated by [LH] π8(KO) is generated by [LO] From the point of view of symplectic geometry, π0(KO) ≅ π8(KO) = Z can be interpreted as the Maslov index, thinking of it as the fundamental group π1(U/O) of the stable Lagrangian Grassmannian as U/O ≅ Ω7(KO), so π1(U/O) = π1+7(KO). ==== Whitehead tower ==== The orthogonal group anchors a Whitehead tower: ⋯ → Fivebrane ⁡ ( n ) → String ⁡ ( n ) → Spin ⁡ ( n ) → SO ⁡ ( n ) → O ⁡ ( n ) {\displaystyle \cdots \rightarrow \operatorname {Fivebrane} (n)\rightarrow \operatorname {String} (n)\rightarrow \operatorname {Spin} (n)\rightarrow \operatorname {SO} (n)\rightarrow \operatorname {O} (n)} which is obtained by successively removing (killing) homotopy groups of increasing order. This is done by constructing short exact sequences starting with an Eilenberg–MacLane space for the homotopy group to be removed. 
The first few entries in the tower are the spin group and the string group, and are preceded by the fivebrane group. The homotopy groups that are killed are in turn π0(O) to obtain SO from O, π1(O) to obtain Spin from SO, π3(O) to obtain String from Spin, and then π7(O) and so on to obtain the higher order branes. == Of indefinite quadratic form over the reals == Over the real numbers, nondegenerate quadratic forms are classified by Sylvester's law of inertia, which asserts that, on a vector space of dimension n, such a form can be written as the difference of a sum of p squares and a sum of q squares, with p + q = n. In other words, there is a basis on which the matrix of the quadratic form is a diagonal matrix, with p entries equal to 1, and q entries equal to −1. The pair (p, q), called the inertia, is an invariant of the quadratic form, in the sense that it does not depend on the way of computing the diagonal matrix. The orthogonal group of a quadratic form depends only on the inertia, and is thus generally denoted O(p, q). Moreover, as a quadratic form and its opposite have the same orthogonal group, one has O(p, q) = O(q, p). The standard orthogonal group is O(n) = O(n, 0) = O(0, n). So, in the remainder of this section, it is supposed that neither p nor q is zero. The subgroup of the matrices of determinant 1 in O(p, q) is denoted SO(p, q). The group O(p, q) has four connected components, depending on whether an element preserves orientation on either of the two maximal subspaces where the quadratic form is positive definite or negative definite. The component of the identity, whose elements preserve orientation on both subspaces, is denoted SO+(p, q). The group O(3, 1) is the Lorentz group that is fundamental in relativity theory. Here the 3 corresponds to space coordinates, and 1 corresponds to the time coordinate.
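Elements of O(3, 1) preserve the indefinite form of inertia (3, 1) rather than the Euclidean norm. As an illustrative sketch (plain Python, not from the article; helper names are ad hoc), one can check that a Lorentz boost B of rapidity φ satisfies BᵀηB = η for the Minkowski form η = diag(1, 1, 1, −1):

```python
import math

def minkowski_metric():
    """The form of inertia (3, 1): diag(1, 1, 1, -1), last coordinate = time."""
    eta = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        eta[i][i] = 1.0
    eta[3][3] = -1.0
    return eta

def boost_x(phi):
    """Lorentz boost of rapidity phi mixing the x and t coordinates."""
    ch, sh = math.cosh(phi), math.sinh(phi)
    B = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    B[0][0], B[0][3], B[3][0], B[3][3] = ch, sh, sh, ch
    return B

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def transpose(A):
    return [list(r) for r in zip(*A)]

eta = minkowski_metric()
B = boost_x(0.5)
G = matmul(transpose(B), matmul(eta, B))  # B^T eta B should reproduce eta
assert all(abs(G[i][j] - eta[i][j]) < 1e-12 for i in range(4) for j in range(4))
```

Boosts do not preserve the Euclidean norm, which is why O(3, 1) is noncompact, unlike O(4).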
== Of complex quadratic forms == Over the field C of complex numbers, every non-degenerate quadratic form in n variables is equivalent to x12 + ... + xn2. Thus, up to isomorphism, there is only one non-degenerate complex quadratic space of dimension n, and one associated orthogonal group, usually denoted O(n, C). It is the group of complex orthogonal matrices, complex matrices whose product with their transpose is the identity matrix. As in the real case, O(n, C) has two connected components. The component of the identity consists of all matrices of determinant 1 in O(n, C); it is denoted SO(n, C). The groups O(n, C) and SO(n, C) are complex Lie groups of dimension n(n − 1) / 2 over C (the dimension over R is twice that). For n ≥ 2, these groups are noncompact. As in the real case, SO(n, C) is not simply connected: For n > 2, the fundamental group of SO(n, C) is cyclic of order 2, whereas the fundamental group of SO(2, C) is Z. == Over finite fields == === Characteristic different from two === Over a field of characteristic different from two, two quadratic forms are equivalent if their matrices are congruent, that is, if a change of basis transforms the matrix of the first form into the matrix of the second form. Two equivalent quadratic forms clearly have the same orthogonal group. The non-degenerate quadratic forms over a finite field of characteristic different from two are completely classified into congruence classes, and it results from this classification that there is only one orthogonal group in odd dimension and two in even dimension.
More precisely, Witt's decomposition theorem asserts that (in characteristic different from two) every vector space equipped with a non-degenerate quadratic form Q can be decomposed as a direct sum of pairwise orthogonal subspaces V = L 1 ⊕ L 2 ⊕ ⋯ ⊕ L m ⊕ W , {\displaystyle V=L_{1}\oplus L_{2}\oplus \cdots \oplus L_{m}\oplus W,} where each Li is a hyperbolic plane (that is, there is a basis such that the matrix of the restriction of Q to Li has the form [ 0 1 1 0 ] {\displaystyle \textstyle {\begin{bmatrix}0&1\\1&0\end{bmatrix}}} ), and the restriction of Q to W is anisotropic (that is, Q(w) ≠ 0 for every nonzero w in W). The Chevalley–Warning theorem asserts that, over a finite field, the dimension of W is at most two. If the dimension of V is odd, the dimension of W is thus equal to one, and its matrix is congruent either to [ 1 ] {\displaystyle \textstyle {\begin{bmatrix}1\end{bmatrix}}} or to [ φ ] , {\displaystyle \textstyle {\begin{bmatrix}\varphi \end{bmatrix}},} where 𝜑 is a non-square scalar. It results that there is only one orthogonal group, which is denoted O(2n + 1, q), where q is the number of elements of the finite field (a power of an odd prime). If the dimension of W is two and −1 is not a square in the ground field (that is, if its number of elements q is congruent to 3 modulo 4), the matrix of the restriction of Q to W is congruent to either I or −I, where I is the 2×2 identity matrix. If the dimension of W is two and −1 is a square in the ground field (that is, if q is congruent to 1, modulo 4) the matrix of the restriction of Q to W is congruent to [ 1 0 0 φ ] , {\displaystyle \textstyle {\begin{bmatrix}1&0\\0&\varphi \end{bmatrix}},} where φ is any non-square scalar. This implies that if the dimension of V is even, there are only two orthogonal groups, depending on whether the dimension of W is zero or two. They are denoted respectively O+(2n, q) and O−(2n, q). The orthogonal group Oε(2, q) is a dihedral group of order 2(q − ε), where ε = ±.
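The dihedral description of Oε(2, q) can be confirmed by brute force for a small field. The sketch below (illustrative Python, not from the article) enumerates all 2 × 2 matrices over F3 satisfying ATA = I for the form x2 + y2; since −1 is a non-square modulo 3, this form is anisotropic and the group is of minus type, of order 2(q + 1) = 8:

```python
from itertools import product

q = 3  # -1 is a non-square mod 3, so x^2 + y^2 is the minus-type form

def is_orthogonal(A):
    """Check A^T A = I over F_q for a 2 x 2 matrix A (nested lists)."""
    for i in range(2):
        for j in range(2):
            s = sum(A[k][i] * A[k][j] for k in range(2)) % q
            if s != (1 if i == j else 0):
                return False
    return True

# Enumerate all 2 x 2 matrices over F_q and count the orthogonal ones.
count = sum(is_orthogonal([[a, b], [c, d]])
            for a, b, c, d in product(range(q), repeat=4))
assert count == 2 * (q + 1)  # dihedral of order 2(q - ε) with ε = -1
```

The eight elements found are the four rotations and four reflections of a square, i.e. the dihedral group of order 8.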
When the characteristic is not two, the orders of the orthogonal groups are | O ⁡ ( 2 n + 1 , q ) | = 2 q n 2 ∏ i = 1 n ( q 2 i − 1 ) , {\displaystyle \left|\operatorname {O} (2n+1,q)\right|=2q^{n^{2}}\prod _{i=1}^{n}\left(q^{2i}-1\right),} | O + ⁡ ( 2 n , q ) | = 2 q n ( n − 1 ) ( q n − 1 ) ∏ i = 1 n − 1 ( q 2 i − 1 ) , {\displaystyle \left|\operatorname {O} ^{+}(2n,q)\right|=2q^{n(n-1)}\left(q^{n}-1\right)\prod _{i=1}^{n-1}\left(q^{2i}-1\right),} | O − ⁡ ( 2 n , q ) | = 2 q n ( n − 1 ) ( q n + 1 ) ∏ i = 1 n − 1 ( q 2 i − 1 ) . {\displaystyle \left|\operatorname {O} ^{-}(2n,q)\right|=2q^{n(n-1)}\left(q^{n}+1\right)\prod _{i=1}^{n-1}\left(q^{2i}-1\right).} In characteristic two, the formulas are the same, except that the factor 2 of |O(2n + 1, q)| must be removed. === Dickson invariant === For orthogonal groups, the Dickson invariant is a homomorphism from the orthogonal group to the quotient group Z / 2Z (integers modulo 2), taking the value 0 in case the element is the product of an even number of reflections, and the value of 1 otherwise. Algebraically, the Dickson invariant can be defined as D(f) = rank(I − f) modulo 2, where I is the identity (Taylor 1992, Theorem 11.43). Over fields that are not of characteristic 2 it is equivalent to the determinant: the determinant is −1 to the power of the Dickson invariant. Over fields of characteristic 2, the determinant is always 1, so the Dickson invariant gives more information than the determinant. The special orthogonal group is the kernel of the Dickson invariant and usually has index 2 in O(n, F ). When the characteristic of F is not 2, the Dickson invariant is 0 whenever the determinant is 1. Thus when the characteristic is not 2, SO(n, F ) is commonly defined to be the elements of O(n, F ) with determinant 1. Each element in O(n, F ) has determinant ±1. Thus in characteristic 2, the determinant is always 1.
The Dickson invariant can also be defined for Clifford groups and pin groups in a similar way (in all dimensions). === Orthogonal groups of characteristic 2 === Over fields of characteristic 2 orthogonal groups often exhibit special behaviors, some of which are listed in this section. (Formerly these groups were known as the hypoabelian groups, but this term is no longer used.) Any orthogonal group over any field is generated by reflections, except for a unique example where the vector space is 4-dimensional over the field with 2 elements and the Witt index is 2. A reflection in characteristic two has a slightly different definition. In characteristic two, the reflection orthogonal to a vector u takes a vector v to v + B(v, u)/Q(u) · u where B is the bilinear form and Q is the quadratic form associated to the orthogonal geometry. Compare this to the Householder reflection of odd characteristic or characteristic zero, which takes v to v − 2·B(v, u)/Q(u) · u. The center of the orthogonal group usually has order 1 in characteristic 2, rather than 2, since I = −I. In odd dimensions 2n + 1 in characteristic 2, orthogonal groups over perfect fields are the same as symplectic groups in dimension 2n. In fact the symmetric form is alternating in characteristic 2, and as the dimension is odd it must have a kernel of dimension 1, and the quotient by this kernel is a symplectic space of dimension 2n, acted upon by the orthogonal group. In even dimensions in characteristic 2 the orthogonal group is a subgroup of the symplectic group, because the symmetric bilinear form of the quadratic form is also an alternating form. == The spinor norm == The spinor norm is a homomorphism from an orthogonal group over a field F to the quotient group F× / (F×)2 (the multiplicative group of the field F up to multiplication by square elements), that takes reflection in a vector of norm n to the image of n in F× / (F×)2. 
For the usual orthogonal group over the reals, it is trivial, but it is often non-trivial over other fields, or for the orthogonal group of a quadratic form over the reals that is not positive definite. == Galois cohomology and orthogonal groups == In the theory of Galois cohomology of algebraic groups, some further points of view are introduced. They have explanatory value, in particular in relation with the theory of quadratic forms; but were for the most part post hoc, as far as the discovery of the phenomenon is concerned. The first point is that quadratic forms over a field can be identified as a Galois H1, or twisted forms (torsors) of an orthogonal group. As an algebraic group, an orthogonal group is in general neither connected nor simply-connected; the latter point brings in the spin phenomena, while the former is related to the determinant. The 'spin' name of the spinor norm can be explained by a connection to the spin group (more accurately a pin group). This may now be explained quickly by Galois cohomology (which however postdates the introduction of the term by more direct use of Clifford algebras). The spin covering of the orthogonal group provides a short exact sequence of algebraic groups. 1 → μ 2 → P i n V → O V → 1 {\displaystyle 1\rightarrow \mu _{2}\rightarrow \mathrm {Pin} _{V}\rightarrow \mathrm {O_{V}} \rightarrow 1} Here μ2 is the algebraic group of square roots of 1; over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action. The connecting homomorphism from H0(OV), which is simply the group OV(F) of F-valued points, to H1(μ2) is essentially the spinor norm, because H1(μ2) is isomorphic to the multiplicative group of the field modulo squares. There is also the connecting homomorphism from H1 of the orthogonal group, to the H2 of the kernel of the spin covering. The cohomology is non-abelian so that this is as far as we can go, at least with the conventional definitions. 
== Lie algebra == The Lie algebra corresponding to Lie groups O(n, F ) and SO(n, F ) consists of the skew-symmetric n × n matrices, with the Lie bracket [ , ] given by the commutator. One Lie algebra corresponds to both groups. It is often denoted by o ( n , F ) {\displaystyle {\mathfrak {o}}(n,F)} or s o ( n , F ) {\displaystyle {\mathfrak {so}}(n,F)} , and called the orthogonal Lie algebra or special orthogonal Lie algebra. Over real numbers, these Lie algebras for different n are the compact real forms of two of the four families of semisimple Lie algebras: in odd dimension Bk, where n = 2k + 1, while in even dimension Dr, where n = 2r. Since the group SO(n) is not simply connected, the representation theory of the orthogonal Lie algebras includes both representations corresponding to ordinary representations of the orthogonal groups, and representations corresponding to projective representations of the orthogonal groups. (The projective representations of SO(n) are just linear representations of the universal cover, the spin group Spin(n).) The latter are the so-called spin representations, which are important in physics. More generally, given a vector space V (over a field with characteristic not equal to 2) with a nondegenerate symmetric bilinear form ⟨ u , v ⟩ {\displaystyle \langle u,v\rangle } , the special orthogonal Lie algebra consists of tracefree endomorphisms φ {\displaystyle \varphi } which are skew-symmetric for this form ( ⟨ φ A , B ⟩ = − ⟨ A , φ B ⟩ {\displaystyle \langle \varphi A,B\rangle =-\langle A,\varphi B\rangle } ). Over a field of characteristic 2 we consider instead the alternating endomorphisms. Concretely we can equate these with the bivectors of the exterior algebra, the antisymmetric tensors of ∧ 2 V {\displaystyle \wedge ^{2}V} .
The correspondence is given by: v ∧ w ↦ ⟨ v , ⋅ ⟩ w − ⟨ w , ⋅ ⟩ v {\displaystyle v\wedge w\mapsto \langle v,\cdot \rangle w-\langle w,\cdot \rangle v} This description applies equally for the indefinite special orthogonal Lie algebras s o ( p , q ) {\displaystyle {\mathfrak {so}}(p,q)} for symmetric bilinear forms with signature (p, q). Over real numbers, this characterization is used in interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotation or "curl", hence the name. == Related groups == The orthogonal groups and special orthogonal groups have a number of important subgroups, supergroups, quotient groups, and covering groups. These are listed below. The inclusions O(n) ⊂ U(n) ⊂ USp(2n) and USp(n) ⊂ U(n) ⊂ O(2n) are part of a sequence of 8 inclusions used in a geometric proof of the Bott periodicity theorem, and the corresponding quotient spaces are symmetric spaces of independent interest – for example, U(n)/O(n) is the Lagrangian Grassmannian. === Lie subgroups === In physics, particularly in the areas of Kaluza–Klein compactification, it is important to find out the subgroups of the orthogonal group. The main ones are: O ( n ) ⊃ O ( n − 1 ) {\displaystyle \mathrm {O} (n)\supset \mathrm {O} (n-1)} – preserve an axis O ( 2 n ) ⊃ U ( n ) ⊃ S U ( n ) {\displaystyle \mathrm {O} (2n)\supset \mathrm {U} (n)\supset \mathrm {SU} (n)} – U(n) are those that preserve a compatible complex structure or a compatible symplectic structure – see 2-out-of-3 property; SU(n) also preserves a complex orientation. 
O ( 2 n ) ⊃ U S p ( n ) {\displaystyle \mathrm {O} (2n)\supset \mathrm {USp} (n)} O ( 7 ) ⊃ G 2 {\displaystyle \mathrm {O} (7)\supset \mathrm {G} _{2}} === Lie supergroups === The orthogonal group O(n) is also an important subgroup of various Lie groups: U ( n ) ⊃ O ( n ) U S p ( 2 n ) ⊃ O ( n ) G 2 ⊃ O ( 3 ) F 4 ⊃ O ( 9 ) E 6 ⊃ O ( 10 ) E 7 ⊃ O ( 12 ) E 8 ⊃ O ( 16 ) {\displaystyle {\begin{aligned}\mathrm {U} (n)&\supset \mathrm {O} (n)\\\mathrm {USp} (2n)&\supset \mathrm {O} (n)\\\mathrm {G} _{2}&\supset \mathrm {O} (3)\\\mathrm {F} _{4}&\supset \mathrm {O} (9)\\\mathrm {E} _{6}&\supset \mathrm {O} (10)\\\mathrm {E} _{7}&\supset \mathrm {O} (12)\\\mathrm {E} _{8}&\supset \mathrm {O} (16)\end{aligned}}} ==== Conformal group ==== Being isometries, real orthogonal transforms preserve angles, and are thus conformal maps, though not all conformal linear transforms are orthogonal. In classical terms this is the difference between congruence and similarity, as exemplified by SSS (side-side-side) congruence of triangles and AAA (angle-angle-angle) similarity of triangles. The group of conformal linear maps of Rn is denoted CO(n) for the conformal orthogonal group, and consists of the product of the orthogonal group with the group of dilations. If n is odd, these two subgroups do not intersect, and they are a direct product: CO(2k + 1) = O(2k + 1) × R∗, where R∗ = R∖{0} is the real multiplicative group, while if n is even, these subgroups intersect in ±1, so this is not a direct product, but it is a direct product with the subgroup of dilation by a positive scalar: CO(2k) = O(2k) × R+. Similarly one can define CSO(n); this is always: CSO(n) = CO(n) ∩ GL+(n) = SO(n) × R+. === Discrete subgroups === As the orthogonal group is compact, discrete subgroups are equivalent to finite subgroups. These subgroups are known as point groups and can be realized as the symmetry groups of polytopes. 
A very important class of examples are the finite Coxeter groups, which include the symmetry groups of regular polytopes. Dimension 3 is particularly studied – see point groups in three dimensions, polyhedral groups, and list of spherical symmetry groups. In 2 dimensions, the finite groups are either cyclic or dihedral – see point groups in two dimensions. Other finite subgroups include: Permutation matrices (the Coxeter group An) Signed permutation matrices (the Coxeter group Bn); also equals the intersection of the orthogonal group with the integer matrices. === Covering and quotient groups === The orthogonal group is neither simply connected nor centerless, and thus has both a covering group and a quotient group, respectively: Two covering Pin groups, Pin+(n) → O(n) and Pin−(n) → O(n), The quotient projective orthogonal group, O(n) → PO(n). These are all 2-to-1 covers. For the special orthogonal group, the corresponding groups are: Spin group, Spin(n) → SO(n), Projective special orthogonal group, SO(n) → PSO(n). Spin is a 2-to-1 cover, while in even dimension, PSO(2k) is a 2-to-1 cover, and in odd dimension PSO(2k + 1) is a 1-to-1 cover; i.e., isomorphic to SO(2k + 1). These groups, Spin(n), SO(n), and PSO(n) are Lie group forms of the compact special orthogonal Lie algebra, s o ( n , R ) {\displaystyle {\mathfrak {so}}(n,\mathbf {R} )} – Spin is the simply connected form, while PSO is the centerless form, and SO is in general neither. In dimension 3 and above these are the covers and quotients, while dimension 2 and below are somewhat degenerate; see specific articles for details. == Principal homogeneous space: Stiefel manifold == The principal homogeneous space for the orthogonal group O(n) is the Stiefel manifold Vn(Rn) of orthonormal bases (orthonormal n-frames). 
In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given an orthogonal space, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthogonal basis to any other orthogonal basis. The other Stiefel manifolds Vk(Rn) for k < n of incomplete orthonormal bases (orthonormal k-frames) are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any k-frame can be taken to any other k-frame by an orthogonal map, but this map is not uniquely determined. == See also == === Specific transforms === Coordinate rotations and reflections Reflection through the origin === Specific groups === rotation group, SO(3, R) SO(8) === Related groups === indefinite orthogonal group unitary group symplectic group === Lists of groups === list of finite simple groups list of simple Lie groups === Representation theory === Representations of classical Lie groups Brauer algebra == Notes == == Citations == == References == == External links == "Orthogonal group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] John Baez "This Week's Finds in Mathematical Physics" week 105 John Baez on Octonions (in Italian) n-dimensional Special Orthogonal Group parametrization
Wikipedia/Special_orthogonal_Lie_algebra
In theoretical computer science, the π-calculus (or pi-calculus) is a process calculus. The π-calculus allows channel names to be communicated along the channels themselves, and in this way, it is able to describe concurrent computations whose network configuration may change during the computation. The π-calculus has few terms and is a small, yet expressive language (see § Syntax). Functional programs can be encoded into the π-calculus, and the encoding emphasises the dialogue nature of computation, drawing connections with game semantics. Extensions of the π-calculus, such as the spi calculus and applied π, have been successful in reasoning about cryptographic protocols. Besides the original use in describing concurrent systems, the π-calculus has also been used to reason about business processes, molecular biology, and autonomous agents in artificial intelligence. == Informal definition == The π-calculus belongs to the family of process calculi, mathematical formalisms for describing and analyzing properties of concurrent computation. In fact, the π-calculus, like the λ-calculus, is so minimal that it does not contain primitives such as numbers, booleans, data structures, variables, functions, or even the usual control flow statements (such as if-then-else, while). === Process constructs === Central to the π-calculus is the notion of name. The simplicity of the calculus lies in the dual role that names play as communication channels and variables. The process constructs available in the calculus are the following (a precise definition is given in the following section): concurrency, written P ∣ Q {\displaystyle P\mid Q} , where P {\displaystyle P} and Q {\displaystyle Q} are two processes or threads executed concurrently. communication, where input prefixing c ( x ) .
P {\displaystyle c\left(x\right).P} is a process waiting for a message that was sent on a communication channel named c {\displaystyle c} before proceeding as P {\displaystyle P} , binding the name received to the name x. Typically, this models either a process expecting a communication from the network or a label c usable only once by a goto c operation. output prefixing c ¯ ⟨ y ⟩ . P {\displaystyle {\overline {c}}\langle y\rangle .P} describes that the name y {\displaystyle y} is emitted on channel c {\displaystyle c} before proceeding as P {\displaystyle P} . Typically, this models either sending a message on the network or a goto c operation. replication, written ! P {\displaystyle !\,P} , which may be seen as a process which can always create a new copy of P {\displaystyle P} . Typically, this models either a network service or a label c waiting for any number of goto c operations. creation of a new name, written ( ν x ) P {\displaystyle \left(\nu x\right)P} , which may be seen as a process allocating a new constant x within P {\displaystyle P} . The constants of π-calculus are defined by their names only and are always communication channels. Creation of a new name in a process is also called restriction. the nil process, written 0 {\displaystyle 0} , is a process whose execution is complete and has stopped. Although the minimalism of the π-calculus prevents us from writing programs in the normal sense, it is easy to extend the calculus. In particular, it is easy to define both control structures such as recursion, loops and sequential composition and datatypes such as first-order functions, truth values, lists and integers. Moreover, extensions of the π-calculus have been proposed which take into account distribution or public-key cryptography. The applied π-calculus due to Abadi and Fournet [1] put these various extensions on a formal footing by extending the π-calculus with arbitrary datatypes. 
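The characteristic feature of these constructs, channel names travelling over channels, has a rough analogue in ordinary concurrent programming, where channel objects are themselves message payloads. A Python sketch of this analogy (illustrative only, not a semantics for the calculus: queue.Queue objects stand in for names, threads for parallel components; the three threads mirror the worked example in the next section):

```python
import queue
import threading

# Names x and z are modelled as queues; sending a *name* just means
# putting a queue object itself onto another queue.
x, z = queue.Queue(), queue.Queue()
log = []

def p1():            # corresponds to  x̄⟨z⟩.0 : emit the name z on x
    x.put(z)

def p2():            # corresponds to  x(y).ȳ⟨x⟩.x(y).0
    y = x.get()                # y is bound to the name z
    y.put(x)                   # emit the name x on the received channel
    log.append(x.get() is x)   # receive again on x

def p3():            # corresponds to  z(v).v̄⟨v⟩.0
    v = z.get()      # v is bound to the name x ("scope extrusion")
    v.put(v)         # send the name x over x itself

threads = [threading.Thread(target=f) for f in (p1, p2, p3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)           # [True]: the name x itself was communicated over x
```

The run is deadlock-free: p2 must receive z before p3 can be unblocked, and p3's final send is exactly what p2's second receive consumes.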
=== A small example === Below is a tiny example of a process which consists of three parallel components. The channel name x is only known by the first two components. ( ν x ) ( x ¯ ⟨ z ⟩ . 0 | x ( y ) . y ¯ ⟨ x ⟩ . x ( y ) . 0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 {\displaystyle {\begin{aligned}(\nu x)&\;(\;{\overline {x}}\langle z\rangle .\;0\\&\;|\;x(y).\;{\overline {y}}\langle x\rangle .\;x(y).\;0\;)\\&\;|\;z(v).\;{\overline {v}}\langle v\rangle .0\end{aligned}}} The first two components are able to communicate on the channel x, and the name y becomes bound to z. The next step in the process is therefore ( ν x ) ( 0 | z ¯ ⟨ x ⟩ . x ( y ) . 0 ) | z ( v ) . v ¯ ⟨ v ⟩ . 0 {\displaystyle {\begin{aligned}(\nu x)&\;(\;0\\&\;|\;{\overline {z}}\langle x\rangle .\;x(y).\;0\;)\\&\;|\;z(v).\;{\overline {v}}\langle v\rangle .\;0\end{aligned}}} Note that the remaining y is not affected because it is defined in an inner scope. The second and third parallel components can now communicate on the channel name z, and the name v becomes bound to x. The next step in the process is now ( ν x ) ( 0 | x ( y ) . 0 | x ¯ ⟨ x ⟩ . 0 ) {\displaystyle {\begin{aligned}(\nu x)&\;(\;0\\&\;|\;x(y).\;0\\&\;|\;{\overline {x}}\langle x\rangle .\;0\;)\end{aligned}}} Note that since the local name x has been output, the scope of x is extended to cover the third component as well. Finally, the channel x can be used for sending the name x. After that all concurrently executing processes have stopped ( ν x ) ( 0 | 0 | 0 ) {\displaystyle {\begin{aligned}(\nu x)&\;(\;0\\&\;|\;0\\&\;|\;0\;)\end{aligned}}} == Formal definition == === Syntax === Let Χ be a set of objects called names. The abstract syntax for the π-calculus is built from the following BNF grammar (where x and y are any names from Χ): P , Q ::= x ( y ) . P Receive on channel x , bind the result to y , then run P | x ¯ ⟨ y ⟩ . 
P Send the value y over channel x , then run P | P | Q Run P and Q simultaneously | ( ν x ) P Create a new channel x and run P | ! P Repeatedly spawn copies of P | 0 Terminate the process {\displaystyle {\begin{aligned}P,Q::=&\;x(y).P\,\,\,\,\,&{\text{Receive on channel }}x{\text{, bind the result to }}y{\text{, then run }}P\\&\;|\;{\overline {x}}\langle y\rangle .P\,\,\,\,\,&{\text{Send the value }}y{\text{ over channel }}x{\text{, then run }}P\\&\;|\;P|Q\,\,\,\,\,\,\,\,\,&{\text{Run }}P{\text{ and }}Q{\text{ simultaneously}}\\&\;|\;(\nu x)P\,\,\,&{\text{Create a new channel }}x{\text{ and run }}P\\&\;|\;!P\,\,\,&{\text{Repeatedly spawn copies of }}P\\&\;|\;0&{\text{Terminate the process}}\end{aligned}}} In the concrete syntax, the prefixes bind more tightly than the parallel composition (|), and parentheses are used to disambiguate. Names are bound by the restriction and input prefix constructs. Formally, the set of free names fn(P) of a process P in the π-calculus is defined inductively: fn(0) = ∅; fn(x(y).P) = {x} ∪ (fn(P) ∖ {y}); fn(x̄⟨y⟩.P) = {x, y} ∪ fn(P); fn(P | Q) = fn(P) ∪ fn(Q); fn((ν x)P) = fn(P) ∖ {x}; and fn(!P) = fn(P). The set of bound names of a process consists of the names occurring in it that are not free. === Structural congruence === Central to both the reduction semantics and the labelled transition semantics is the notion of structural congruence. Two processes are structurally congruent if they are identical up to structure. In particular, parallel composition is commutative and associative. More precisely, structural congruence is defined as the least equivalence relation preserved by the process constructs and satisfying: Alpha-conversion: P ≡ Q {\displaystyle P\equiv Q} if Q {\displaystyle Q} can be obtained from P {\displaystyle P} by renaming one or more bound names in P {\displaystyle P} .
Axioms for parallel composition: P | Q ≡ Q | P {\displaystyle P|Q\equiv Q|P} ( P | Q ) | R ≡ P | ( Q | R ) {\displaystyle (P|Q)|R\equiv P|(Q|R)} P | 0 ≡ P {\displaystyle P|0\equiv P} Axioms for restriction: ( ν x ) ( ν y ) P ≡ ( ν y ) ( ν x ) P {\displaystyle (\nu x)(\nu y)P\equiv (\nu y)(\nu x)P} ( ν x ) 0 ≡ 0 {\displaystyle (\nu x)0\equiv 0} Axiom for replication: ! P ≡ P | ! P {\displaystyle !P\equiv P|!P} Axiom relating restriction and parallel: ( ν x ) ( P | Q ) ≡ ( ν x ) P | Q {\displaystyle (\nu x)(P|Q)\equiv (\nu x)P|Q} if x is not a free name of Q {\displaystyle Q} . This last axiom is known as the "scope extension" axiom. This axiom is central, since it describes how a bound name x may be extruded by an output action, causing the scope of x to be extended. In cases where x is a free name of Q {\displaystyle Q} , alpha-conversion may be used to allow extension to proceed. === Reduction semantics === We write P → P ′ {\displaystyle P\rightarrow P'} if P {\displaystyle P} can perform a computation step, following which it is now P ′ {\displaystyle P'} . This reduction relation → {\displaystyle \rightarrow } is defined as the least relation closed under a set of reduction rules. The main reduction rule which captures the ability of processes to communicate through channels is the following: x ¯ ⟨ z ⟩ . P | x ( y ) . Q → P | Q [ z / y ] {\displaystyle {\overline {x}}\langle z\rangle .P|x(y).Q\rightarrow P|Q[z/y]} where Q [ z / y ] {\displaystyle Q[z/y]} denotes the process Q {\displaystyle Q} in which the free name z {\displaystyle z} has been substituted for the free occurrences of y {\displaystyle y} . If a free occurrence of y {\displaystyle y} occurs in a location where z {\displaystyle z} would not be free, alpha-conversion may be required. There are three additional rules: If P → Q {\displaystyle P\rightarrow Q} then also P | R → Q | R {\displaystyle P|R\rightarrow Q|R} . This rule says that parallel composition does not inhibit computation. 
If P → Q {\displaystyle P\rightarrow Q} , then also ( ν x ) P → ( ν x ) Q {\displaystyle (\nu x)P\rightarrow (\nu x)Q} . This rule ensures that computation can proceed underneath a restriction. If P ≡ P ′ {\displaystyle P\equiv P'} and P ′ → Q ′ {\displaystyle P'\rightarrow Q'} and Q ′ ≡ Q {\displaystyle Q'\equiv Q} , then also P → Q {\displaystyle P\rightarrow Q} . The latter rule states that processes that are structurally congruent have the same reductions. === The example revisited === Consider again the process ( ν x ) ( x ¯ ⟨ z ⟩ .0 | x ( y ) . y ¯ ⟨ x ⟩ . x ( y ) .0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 {\displaystyle (\nu x)({\overline {x}}\langle z\rangle .0|x(y).{\overline {y}}\langle x\rangle .x(y).0)|z(v).{\overline {v}}\langle v\rangle .0} Applying the definition of the reduction semantics, we get the reduction ( ν x ) ( x ¯ ⟨ z ⟩ .0 | x ( y ) . y ¯ ⟨ x ⟩ . x ( y ) .0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 → ( ν x ) ( 0 | z ¯ ⟨ x ⟩ . x ( y ) .0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 {\displaystyle (\nu x)({\overline {x}}\langle z\rangle .0|x(y).{\overline {y}}\langle x\rangle .x(y).0)|z(v).{\overline {v}}\langle v\rangle .0\rightarrow (\nu x)(0|{\overline {z}}\langle x\rangle .x(y).0)|z(v).{\overline {v}}\langle v\rangle .0} Note how, applying the reduction substitution axiom, free occurrences of y {\displaystyle y} are now labeled as z {\displaystyle z} . Next, we get the reduction ( ν x ) ( 0 | z ¯ ⟨ x ⟩ . x ( y ) .0 ) | z ( v ) . v ¯ ⟨ v ⟩ .0 → ( ν x ) ( 0 | x ( y ) .0 | x ¯ ⟨ x ⟩ .0 ) {\displaystyle (\nu x)(0|{\overline {z}}\langle x\rangle .x(y).0)|z(v).{\overline {v}}\langle v\rangle .0\rightarrow (\nu x)(0|x(y).0|{\overline {x}}\langle x\rangle .0)} Note that since the local name x has been output, the scope of x is extended to cover the third component as well. This was captured using the scope extension axiom. 
Next, using the reduction substitution axiom, we get ( ν x ) ( 0 | 0 | 0 ) {\displaystyle (\nu x)(0|0|0)} Finally, using the axioms for parallel composition and restriction, we get 0 {\displaystyle 0} === Labelled semantics === Alternatively, one may give the pi-calculus a labelled transition semantics (as has been done with the Calculus of Communicating Systems). In this semantics, a transition from a state P {\displaystyle P} to some other state P ′ {\displaystyle P'} after an action α {\displaystyle \alpha } is notated as: P → α P ′ {\displaystyle P\,{\xrightarrow {\overset {}{\alpha }}}P'} Where states P {\displaystyle P} and P ′ {\displaystyle P'} represent processes and α {\displaystyle \alpha } is either an input action a ( x ) {\displaystyle a(x)} , an output action a ¯ ⟨ x ⟩ {\displaystyle {\overline {a}}\langle x\rangle } , or a silent action τ. A standard result about the labelled semantics is that it agrees with the reduction semantics up to structural congruence, in the sense that P → P ′ {\displaystyle P\rightarrow P'} if and only if P → τ ≡ P ′ {\displaystyle P\,\xrightarrow {\overset {}{\tau }} \equiv P'} == Extensions and variants == The syntax given above is a minimal one. However, the syntax may be modified in various ways. A nondeterministic choice operator P + Q {\displaystyle P+Q} can be added to the syntax. A test for name equality [ x = y ] P {\displaystyle [x=y]P} can be added to the syntax. This match operator can proceed as P {\displaystyle P} if and only if x and y {\displaystyle y} are the same name. Similarly, one may add a mismatch operator for name inequality. Practical programs which can pass names (URLs or pointers) often use such functionality: for directly modeling such functionality inside the calculus, this and related extensions are often useful. The asynchronous π-calculus allows only outputs with no continuation, i.e. 
output atoms of the form x ¯ ⟨ y ⟩ {\displaystyle {\overline {x}}\langle y\rangle } , yielding a smaller calculus. However, any process in the original calculus can be represented by the smaller asynchronous π-calculus using an extra channel to simulate explicit acknowledgement from the receiving process. Since a continuation-free output can model a message-in-transit, this fragment shows that the original π-calculus, which is intuitively based on synchronous communication, has an expressive asynchronous communication model inside its syntax. However, the nondeterministic choice operator defined above cannot be expressed in this way, as an unguarded choice would be converted into a guarded one; this fact has been used to demonstrate that the asynchronous calculus is strictly less expressive than the synchronous one (with the choice operator). The polyadic π-calculus allows communicating more than one name in a single action: x ¯ ⟨ z 1 , . . . , z n ⟩ . P {\displaystyle {\overline {x}}\langle z_{1},...,z_{n}\rangle .P} (polyadic output) and x ( z 1 , . . . , z n ) . P {\displaystyle x(z_{1},...,z_{n}).P} (polyadic input). This polyadic extension, which is useful especially when studying types for name passing processes, can be encoded in the monadic calculus by passing the name of a private channel through which the multiple arguments are then passed in sequence. The encoding is defined recursively by the clauses x ¯ ⟨ y 1 , ⋯ , y n ⟩ . P {\displaystyle {\overline {x}}\langle y_{1},\cdots ,y_{n}\rangle .P} is encoded as ( ν w ) x ¯ ⟨ w ⟩ . w ¯ ⟨ y 1 ⟩ . ⋯ . w ¯ ⟨ y n ⟩ . [ P ] {\displaystyle (\nu w){\overline {x}}\langle w\rangle .{\overline {w}}\langle y_{1}\rangle .\cdots .{\overline {w}}\langle y_{n}\rangle .[P]} x ( y 1 , ⋯ , y n ) . P {\displaystyle x(y_{1},\cdots ,y_{n}).P} is encoded as x ( w ) . w ( y 1 ) . ⋯ . w ( y n ) . [ P ] {\displaystyle x(w).w(y_{1}).\cdots .w(y_{n}).[P]} All other process constructs are left unchanged by the encoding. 
In the above, [ P ] {\displaystyle [P]} denotes the encoding of all prefixes in the continuation P {\displaystyle P} in the same way. The full power of replication ! P {\displaystyle !P} is not needed. Often, one only considers replicated input ! x ( y ) . P {\displaystyle !x(y).P} , whose structural congruence axiom is ! x ( y ) . P ≡ x ( y ) . P | ! x ( y ) . P {\displaystyle !x(y).P\equiv x(y).P|!x(y).P} . Replicated input process such as ! x ( y ) . P {\displaystyle !x(y).P} can be understood as servers, waiting on channel x to be invoked by clients. Invocation of a server spawns a new copy of the process P [ a / y ] {\displaystyle P[a/y]} , where a is the name passed by the client to the server, during the latter's invocation. A higher order π-calculus can be defined where not only names but processes are sent through channels. The key reduction rule for the higher order case is x ¯ ⟨ R ⟩ . P | x ( Y ) . Q → P | Q [ R / Y ] {\displaystyle {\overline {x}}\langle R\rangle .P|x(Y).Q\rightarrow P|Q[R/Y]} Here, Y {\displaystyle Y} denotes a process variable which can be instantiated by a process term. Sangiorgi established that the ability to pass processes does not increase the expressivity of the π-calculus: passing a process P can be simulated by just passing a name that points to P instead. == Properties == === Turing completeness === The π-calculus is a universal model of computation. This was first observed by Milner in his paper "Functions as Processes", in which he presents two encodings of the lambda-calculus in the π-calculus. One encoding simulates the eager (call-by-value) evaluation strategy, the other encoding simulates the normal-order (call-by-name) strategy. In both of these, the crucial insight is the modeling of environment bindings – for instance, "x is bound to term M {\textstyle M} " – as replicating agents that respond to requests for their bindings by sending back a connection to the term M {\displaystyle M} . 
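The replicating-agent idea behind these encodings can be caricatured with queues standing in for names: a binding "x is bound to M" becomes a server that forever answers requests on x by sending back the binding's content. A rough Python sketch (an analogy only; the string M and the queue representation are artifacts of the sketch, not of Milner's encoding):

```python
import queue
import threading

# A binding "x is bound to M" modelled as a replicating agent: it forever
# answers requests on channel x by sending back (a stand-in for) the term M.
x = queue.Queue()
M = "some term M"          # hypothetical placeholder for the bound term

def binding_agent():       # analogue of the replicated input !x(r).r̄⟨M⟩.0
    while True:
        reply = x.get()    # a client asks for the binding...
        reply.put(M)       # ...and receives its content

threading.Thread(target=binding_agent, daemon=True).start()

# Two independent lookups: replication means the binding is never consumed.
r1, r2 = queue.Queue(), queue.Queue()
x.put(r1)
x.put(r2)
res = [r1.get(), r2.get()]
print(res)                 # ['some term M', 'some term M']
```

Each lookup passes a fresh private reply channel, the same trick used in the polyadic encoding above.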
The features of the π-calculus that make these encodings possible are name-passing and replication (or, equivalently, recursively defined agents). In the absence of replication/recursion, the π-calculus ceases to be Turing-complete. This can be seen by the fact that bisimulation equivalence becomes decidable for the recursion-free calculus and even for the finite-control π-calculus where the number of parallel components in any process is bounded by a constant. == Bisimulations in the π-calculus == As for process calculi, the π-calculus allows for a definition of bisimulation equivalence. In the π-calculus, the definition of bisimulation equivalence (also known as bisimilarity) may be based on either the reduction semantics or on the labelled transition semantics. There are (at least) three different ways of defining labelled bisimulation equivalence in the π-calculus: Early, late and open bisimilarity. This stems from the fact that the π-calculus is a value-passing process calculus. In the remainder of this section, we let p {\displaystyle p} and q {\displaystyle q} denote processes and R {\displaystyle R} denote binary relations over processes. === Early and late bisimilarity === Early and late bisimilarity were both formulated by Milner, Parrow and Walker in their original paper on the π-calculus. 
A binary relation R {\displaystyle R} over processes is an early bisimulation if for every pair of processes ( p , q ) ∈ R {\displaystyle (p,q)\in R} , whenever p → a ( x ) p ′ {\displaystyle p\,{\xrightarrow {a(x)}}\,p'} then for every name y {\displaystyle y} there exists some q ′ {\displaystyle q'} such that q → a ( x ) q ′ {\displaystyle q\,{\xrightarrow {a(x)}}\,q'} and ( p ′ [ y / x ] , q ′ [ y / x ] ) ∈ R {\displaystyle (p'[y/x],q'[y/x])\in R} ; for any non-input action α {\displaystyle \alpha } , if p → α p ′ {\displaystyle {p{\xrightarrow {\overset {}{\alpha }}}p'}} then there exists some q ′ {\displaystyle q'} such that q → α q ′ {\displaystyle q{\xrightarrow {\overset {}{\alpha }}}q'} and ( p ′ , q ′ ) ∈ R {\displaystyle (p',q')\in R} ; and symmetric requirements with p {\displaystyle p} and q {\displaystyle q} interchanged. Processes p {\displaystyle p} and q {\displaystyle q} are said to be early bisimilar, written p ∼ e q {\displaystyle p\sim _{e}q} if the pair ( p , q ) ∈ R {\displaystyle (p,q)\in R} for some early bisimulation R {\displaystyle R} . In late bisimilarity, the transition match must be independent of the name being transmitted. A binary relation R {\displaystyle R} over processes is a late bisimulation if for every pair of processes ( p , q ) ∈ R {\displaystyle (p,q)\in R} , whenever p → a ( x ) p ′ {\displaystyle p{\xrightarrow {a(x)}}p'} then for some q ′ {\displaystyle q'} it holds that q → a ( x ) q ′ {\displaystyle q{\xrightarrow {a(x)}}q'} and ( p ′ [ y / x ] , q ′ [ y / x ] ) ∈ R {\displaystyle (p'[y/x],q'[y/x])\in R} for every name y; for any non-input action α {\displaystyle \alpha } , if p → α p ′ {\displaystyle p{\xrightarrow {\overset {}{\alpha }}}p'} implies that there exists some q ′ {\displaystyle q'} such that q → α q ′ {\displaystyle q{\xrightarrow {\overset {}{\alpha }}}q'} and ( p ′ , q ′ ) ∈ R {\displaystyle (p',q')\in R} ; and symmetric requirements with p {\displaystyle p} and q {\displaystyle q} interchanged. 
Processes p {\displaystyle p} and q {\displaystyle q} are said to be late bisimilar, written p ∼ l q {\displaystyle p\sim _{l}q} if the pair ( p , q ) ∈ R {\displaystyle (p,q)\in R} for some late bisimulation R {\displaystyle R} . Both ∼ e {\displaystyle \sim _{e}} and ∼ l {\displaystyle \sim _{l}} suffer from the problem that they are not congruence relations in the sense that they are not preserved by all process constructs. More precisely, there exist processes p {\displaystyle p} and q {\displaystyle q} such that p ∼ e q {\displaystyle p\sim _{e}q} but a ( x ) . p ≁ e a ( x ) . q {\displaystyle a(x).p\not \sim _{e}a(x).q} . One may remedy this problem by considering the maximal congruence relations included in ∼ e {\displaystyle \sim _{e}} and ∼ l {\displaystyle \sim _{l}} , known as early congruence and late congruence, respectively. === Open bisimilarity === Fortunately, a third definition is possible, which avoids this problem, namely that of open bisimilarity, due to Sangiorgi. A binary relation R {\displaystyle R} over processes is an open bisimulation if for every pair of elements ( p , q ) ∈ R {\displaystyle (p,q)\in R} and for every name substitution σ {\displaystyle \sigma } and every action α {\displaystyle \alpha } , whenever p σ → α p ′ {\displaystyle p\sigma {\xrightarrow {\overset {}{\alpha }}}p'} then there exists some q ′ {\displaystyle q'} such that q σ → α q ′ {\displaystyle q\sigma {\xrightarrow {\overset {}{\alpha }}}q'} and ( p ′ , q ′ ) ∈ R {\displaystyle (p',q')\in R} . Processes p {\displaystyle p} and q {\displaystyle q} are said to be open bisimilar, written p ∼ o q {\displaystyle p\sim _{o}q} if the pair ( p , q ) ∈ R {\displaystyle (p,q)\in R} for some open bisimulation R {\displaystyle R} . ==== Early, late and open bisimilarity are distinct ==== Early, late and open bisimilarity are distinct. The containments are proper, so ∼ o ⊊ ∼ l ⊊ ∼ e {\displaystyle \sim _{o}\subsetneq \sim _{l}\subsetneq \sim _{e}} . 
In certain subcalculi such as the asynchronous pi-calculus, late, early and open bisimilarity are known to coincide. However, in this setting a more appropriate notion is that of asynchronous bisimilarity. In the literature, the term open bisimulation usually refers to a more sophisticated notion, where processes and relations are indexed by distinction relations; details are in Sangiorgi's paper cited above. === Barbed equivalence === Alternatively, one may define bisimulation equivalence directly from the reduction semantics. We write p ⇓ a {\displaystyle p\Downarrow a} if process p {\displaystyle p} immediately allows an input or an output on name a {\displaystyle a} . A binary relation R {\displaystyle R} over processes is a barbed bisimulation if it is a symmetric relation which satisfies that for every pair of elements ( p , q ) ∈ R {\displaystyle (p,q)\in R} we have that (1) p ⇓ a {\displaystyle p\Downarrow a} if and only if q ⇓ a {\displaystyle q\Downarrow a} for every name a {\displaystyle a} and (2) for every reduction p → p ′ {\displaystyle p\rightarrow p'} there exists a reduction q → q ′ {\displaystyle q\rightarrow q'} such that ( p ′ , q ′ ) ∈ R {\displaystyle (p',q')\in R} . We say that p {\displaystyle p} and q {\displaystyle q} are barbed bisimilar if there exists a barbed bisimulation R {\displaystyle R} where ( p , q ) ∈ R {\displaystyle (p,q)\in R} . Defining a context as a π term with a hole [] we say that two processes P and Q are barbed congruent, written P ∼ b Q {\displaystyle P\sim _{b}Q\,\!} , if for every context C [ ] {\displaystyle C[]} we have that C [ P ] {\displaystyle C[P]} and C [ Q ] {\displaystyle C[Q]} are barbed bisimilar. It turns out that barbed congruence coincides with the congruence induced by early bisimilarity. == Applications == The π-calculus has been used to describe many different kinds of concurrent systems. In fact, some of the most recent applications lie outside the realm of traditional computer science. 
In 1997, Martin Abadi and Andrew Gordon proposed an extension of the π-calculus, the Spi-calculus, as a formal notation for describing and reasoning about cryptographic protocols. The spi-calculus extends the π-calculus with primitives for encryption and decryption. In 2001, Martin Abadi and Cedric Fournet generalised the handling of cryptographic protocols to produce the applied π calculus. There is now a large body of work devoted to variants of the applied π calculus, including a number of experimental verification tools. One example is the tool ProVerif [2] due to Bruno Blanchet, based on a translation of the applied π-calculus into Blanchet's logic programming framework. Another example is Cryptyc [3], due to Andrew Gordon and Alan Jeffrey, which uses Woo and Lam's method of correspondence assertions as the basis for type systems that can check for authentication properties of cryptographic protocols. Around 2002, Howard Smith and Peter Fingar became interested in the idea that the π-calculus could serve as a description tool for modeling business processes. As of July 2006, there was discussion in the community about how useful this would be. Most recently, the π-calculus has formed the theoretical basis of Business Process Modeling Language (BPML), and of Microsoft's XLANG. The π-calculus has also attracted interest in molecular biology. In 1999, Aviv Regev and Ehud Shapiro showed that one can describe a cellular signaling pathway (the so-called RTK/MAPK cascade) and in particular the molecular "lego" which implements these tasks of communication in an extension of the π-calculus. Following this seminal paper, other authors described the whole metabolic network of a minimal cell. In 2009, Anthony Nash and Sara Kalvala proposed a π-calculus framework to model the signal transduction that directs Dictyostelium discoideum aggregation.
== History == The π-calculus was originally developed by Robin Milner, Joachim Parrow and David Walker in 1992, based on ideas by Uffe Engberg and Mogens Nielsen. It can be seen as a continuation of Milner's work on the process calculus CCS (Calculus of Communicating Systems). In his Turing lecture, Milner describes the development of the π-calculus as an attempt to capture the uniformity of values and processes in actors. == Implementations == The following programming languages implement the π-calculus or one of its variants: Business Process Modeling Language (BPML) occam-π Pict JoCaml (based on the Join-calculus) RhoLang == Notes == == References == Milner, Robin (1999). Communicating and Mobile Systems: The π-calculus. Cambridge, UK: Cambridge University Press. ISBN 0-521-65869-1. Milner, Robin (1993). "The Polyadic π-Calculus: A Tutorial". In F. L. Hamer; W. Brauer; H. Schwichtenberg (eds.). Logic and Algebra of Specification. Springer-Verlag. Sangiorgi, Davide; Walker, David (2001). The π-calculus: A Theory of Mobile Processes. Cambridge, UK: Cambridge University Press. ISBN 0-521-78177-9.
Wikipedia/Π-calculus
The term umbral calculus has two related but distinct meanings. In mathematics, before the 1970s, umbral calculus referred to the surprising similarity between seemingly unrelated polynomial equations and certain shadowy techniques used to prove them. These techniques were introduced in 1861 by John Blissard and are sometimes called Blissard's symbolic method. They are often attributed to Édouard Lucas (or James Joseph Sylvester), who used the technique extensively. The use of shadowy techniques was put on a solid mathematical footing starting in the 1970s, and the resulting mathematical theory is also referred to as "umbral calculus". == History == In the 1930s and 1940s, Eric Temple Bell attempted to set the umbral calculus on a rigorous footing; however, his attempt was unsuccessful. The combinatorialist John Riordan, in his book Combinatorial Identities published in the 1960s, used techniques of this sort extensively. In the 1970s, Steven Roman, Gian-Carlo Rota, and others developed the umbral calculus by means of linear functionals on spaces of polynomials. Currently, umbral calculus refers to the study of Sheffer sequences, including polynomial sequences of binomial type and Appell sequences, but may encompass systematic correspondence techniques of the calculus of finite differences. == 19th-century umbral calculus == The method is a notational procedure used for deriving identities involving indexed sequences of numbers by pretending that the indices are exponents. Construed literally, it is absurd, and yet it is successful: identities derived via the umbral calculus can also be properly derived by more complicated methods that can be taken literally without logical difficulty. An example involves the Bernoulli polynomials. 
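The Bernoulli-polynomial identities discussed in this section can be checked mechanically. The following sketch (illustrative code, not from any cited reference) computes the Bernoulli numbers b_j = B_j(0) with exact rational arithmetic, builds the coefficients of B_n(x) = Σ_k C(n,k) b_{n−k} x^k, and verifies the derivative relation B_n'(x) = n·B_{n−1}(x):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Bernoulli numbers b_0..b_n (convention b_1 = -1/2, so b_j = B_j(0))."""
    b = [Fraction(1)]
    for m in range(1, n + 1):
        # recurrence: sum_{k=0}^{m} C(m+1, k) b_k = 0 for m >= 1
        b.append(-sum(comb(m + 1, k) * b[k] for k in range(m)) / (m + 1))
    return b

def bernoulli_poly(n, b):
    """Coefficients c_0..c_n of B_n(x) = sum_k C(n,k) b_{n-k} x^k."""
    return [comb(n, k) * b[n - k] for k in range(n + 1)]

b = bernoulli_numbers(8)
assert b[1] == Fraction(-1, 2) and b[2] == Fraction(1, 6)

for n in range(1, 9):
    # coefficients of d/dx B_n(x): drop the constant term, scale by the exponent
    deriv = [k * c for k, c in enumerate(bernoulli_poly(n, b))][1:]
    n_times_prev = [n * c for c in bernoulli_poly(n - 1, b)]
    assert deriv == n_times_prev   # B_n'(x) = n * B_{n-1}(x)
```

Working with Fraction rather than floats keeps the comparison exact, so the assertions test the identity itself rather than floating-point proximity.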
Consider, for example, the ordinary binomial expansion (which contains a binomial coefficient): ( y + x ) n = ∑ k = 0 n ( n k ) y n − k x k {\displaystyle (y+x)^{n}=\sum _{k=0}^{n}{n \choose k}y^{n-k}x^{k}} and the remarkably similar-looking relation on the Bernoulli polynomials: B n ( y + x ) = ∑ k = 0 n ( n k ) B n − k ( y ) x k . {\displaystyle B_{n}(y+x)=\sum _{k=0}^{n}{n \choose k}B_{n-k}(y)x^{k}.} Compare also the ordinary derivative d d x x n = n x n − 1 {\displaystyle {\frac {d}{dx}}x^{n}=nx^{n-1}} to a very similar-looking relation on the Bernoulli polynomials: d d x B n ( x ) = n B n − 1 ( x ) . {\displaystyle {\frac {d}{dx}}B_{n}(x)=nB_{n-1}(x).} These similarities allow one to construct umbral proofs, which on the surface cannot be correct, but seem to work anyway. Thus, for example, by pretending that the subscript n − k is an exponent: B n ( x ) = ∑ k = 0 n ( n k ) b n − k x k = ( b + x ) n , {\displaystyle B_{n}(x)=\sum _{k=0}^{n}{n \choose k}b^{n-k}x^{k}=(b+x)^{n},} and then differentiating, one gets the desired result: B n ′ ( x ) = n ( b + x ) n − 1 = n B n − 1 ( x ) . {\displaystyle B_{n}'(x)=n(b+x)^{n-1}=nB_{n-1}(x).} In the above, the variable b is an "umbra" (Latin for shadow). See also Faulhaber's formula. == Umbral Taylor series == In differential calculus, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. That is, a real or complex-valued function f (x) that is analytic at a {\displaystyle a} can be written as: f ( x ) = ∑ n = 0 ∞ f ( n ) ( a ) n ! ( x − a ) n {\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}} Similar relationships were also observed in the theory of finite differences. The umbral version of the Taylor series is given by a similar expression involving the k-th forward differences Δ k [ f ] {\displaystyle \Delta ^{k}[f]} of a polynomial function f, f ( x ) = ∑ k = 0 ∞ Δ k [ f ] ( a ) k ! 
( x − a ) k {\displaystyle f(x)=\sum _{k=0}^{\infty }{\frac {\Delta ^{k}[f](a)}{k!}}(x-a)_{k}} where ( x − a ) k = ( x − a ) ( x − a − 1 ) ( x − a − 2 ) ⋯ ( x − a − k + 1 ) {\displaystyle (x-a)_{k}=(x-a)(x-a-1)(x-a-2)\cdots (x-a-k+1)} is the Pochhammer symbol used here for the falling sequential product. A similar relationship holds for the backward differences and rising factorial. This series is also known as the Newton series or Newton's forward difference expansion. The analogy to Taylor's expansion is utilized in the calculus of finite differences. == Modern umbral calculus == Another combinatorialist, Gian-Carlo Rota, pointed out that the mystery vanishes if one considers the linear functional L on polynomials in z defined by L ( z n ) = B n ( 0 ) = B n . {\displaystyle L(z^{n})=B_{n}(0)=B_{n}.} Then, using the definition of the Bernoulli polynomials and the definition and linearity of L, one can write B n ( x ) = ∑ k = 0 n ( n k ) B n − k x k = ∑ k = 0 n ( n k ) L ( z n − k ) x k = L ( ∑ k = 0 n ( n k ) z n − k x k ) = L ( ( z + x ) n ) {\displaystyle {\begin{aligned}B_{n}(x)&=\sum _{k=0}^{n}{n \choose k}B_{n-k}x^{k}\\&=\sum _{k=0}^{n}{n \choose k}L\left(z^{n-k}\right)x^{k}\\&=L\left(\sum _{k=0}^{n}{n \choose k}z^{n-k}x^{k}\right)\\&=L\left((z+x)^{n}\right)\end{aligned}}} This enables one to replace occurrences of B n ( x ) {\displaystyle B_{n}(x)} by L ( ( z + x ) n ) {\displaystyle L((z+x)^{n})} , that is, move the n from a subscript to a superscript (the key operation of umbral calculus). For instance, we can now prove that: ∑ k = 0 n ( n k ) B n − k ( y ) x k = ∑ k = 0 n ( n k ) L ( ( z + y ) n − k ) x k = L ( ∑ k = 0 n ( n k ) ( z + y ) n − k x k ) = L ( ( z + x + y ) n ) = B n ( x + y ) . 
{\displaystyle {\begin{aligned}\sum _{k=0}^{n}{n \choose k}B_{n-k}(y)x^{k}&=\sum _{k=0}^{n}{n \choose k}L\left((z+y)^{n-k}\right)x^{k}\\&=L\left(\sum _{k=0}^{n}{n \choose k}(z+y)^{n-k}x^{k}\right)\\&=L\left((z+x+y)^{n}\right)\\&=B_{n}(x+y).\end{aligned}}} Rota later stated that much confusion resulted from the failure to distinguish between three equivalence relations that occur frequently in this topic, all of which were denoted by "=". In a paper published in 1964, Rota used umbral methods to establish the recursion formula satisfied by the Bell numbers, which enumerate partitions of finite sets. In the paper of Roman and Rota cited below, the umbral calculus is characterized as the study of the umbral algebra, defined as the algebra of linear functionals on the vector space of polynomials in a variable x, with a product L1L2 of linear functionals defined by ⟨ L 1 L 2 | x n ⟩ = ∑ k = 0 n ( n k ) ⟨ L 1 | x k ⟩ ⟨ L 2 | x n − k ⟩ . {\displaystyle \left\langle L_{1}L_{2}|x^{n}\right\rangle =\sum _{k=0}^{n}{n \choose k}\left\langle L_{1}|x^{k}\right\rangle \left\langle L_{2}|x^{n-k}\right\rangle .} When polynomial sequences replace sequences of numbers as images of yn under the linear mapping L, then the umbral method is seen to be an essential component of Rota's general theory of special polynomials, and that theory is the umbral calculus by some more modern definitions of the term. A small sample of that theory can be found in the article on polynomial sequences of binomial type. Another is the article titled Sheffer sequence. Rota later applied umbral calculus extensively in his paper with Shen to study the various combinatorial properties of the cumulants. == See also == Bernoulli umbra Umbral composition of polynomial sequences Calculus of finite differences Pidduck polynomials Symbolic method in invariant theory Narumi polynomials == Notes == == References == Bell, E. T. 
(1938), "The History of Blissard's Symbolic Method, with a Sketch of its Inventor's Life", The American Mathematical Monthly, 45 (7), Mathematical Association of America: 414–421, doi:10.1080/00029890.1938.11990829, ISSN 0002-9890, JSTOR 2304144 Roman, Steven M.; Rota, Gian-Carlo (1978), "The umbral calculus", Advances in Mathematics, 27 (2): 95–188, doi:10.1016/0001-8708(78)90087-7, ISSN 0001-8708, MR 0485417 G.-C. Rota, D. Kahaner, and A. Odlyzko, "Finite Operator Calculus," Journal of Mathematical Analysis and its Applications, vol. 42, no. 3, June 1973. Reprinted in the book with the same title, Academic Press, New York, 1975. Roman, Steven (1984), The umbral calculus, Pure and Applied Mathematics, vol. 111, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-594380-2, MR 0741185. Reprinted by Dover, 2005. Roman, S. (2001) [1994], "Umbral calculus", Encyclopedia of Mathematics, EMS Press == External links == Weisstein, Eric W. "Umbral Calculus". MathWorld. A. Di Bucchianico, D. Loeb (2000). "A Selected Survey of Umbral Calculus" (PDF). Electronic Journal of Combinatorics. Dynamic Surveys. DS3. Archived from the original (PDF) on 2012-02-24. Roman, S. (1982), The Theory of the Umbral Calculus, I
Wikipedia/Umbral_calculus
The situation calculus is a logic formalism designed for representing and reasoning about dynamical domains. It was first introduced by John McCarthy in 1963. The main version of the situational calculus that is presented in this article is based on that introduced by Ray Reiter in 1991. It is followed by sections about McCarthy's 1986 version and a logic programming formulation. == Overview == The situation calculus represents changing scenarios as a set of first-order logic formulae. The basic elements of the calculus are: The actions that can be performed in the world The fluents that describe the state of the world The situations A domain is formalized by a number of formulae, namely: Action precondition axioms, one for each action Successor state axioms, one for each fluent Axioms describing the world in various situations The foundational axioms of the situation calculus A simple robot world will be modeled as a running example. In this world there is a single robot and several inanimate objects. The world is laid out according to a grid so that locations can be specified in terms of ( x , y ) {\displaystyle (x,y)} coordinate points. It is possible for the robot to move around the world, and to pick up and drop items. Some items may be too heavy for the robot to pick up, or fragile so that they break when they are dropped. The robot also has the ability to repair any broken items that it is holding. == Elements == The main elements of the situation calculus are the actions, fluents and the situations. A number of objects are also typically involved in the description of the world. The situation calculus is based on a sorted domain with three sorts: actions, situations, and objects, where the objects include everything that is not an action or a situation. Variables of each sort can be used. While actions, situations, and objects are elements of the domain, the fluents are modeled as either predicates or functions. 
=== Actions === The actions form a sort of the domain. Variables of sort action can be used and also functions whose result is of sort action. Actions can be quantified. In the example robot world, possible action terms would be m o v e ( x , y ) {\displaystyle move(x,y)} to model the robot moving to a new location ( x , y ) {\displaystyle (x,y)} , and p i c k u p ( o ) {\displaystyle pickup(o)} to model the robot picking up an object o. A special predicate Poss is used to indicate when an action is executable. === Situations === In the situation calculus, a dynamic world is modeled as progressing through a series of situations as a result of various actions being performed within the world. A situation represents a history of action occurrences. In the Reiter version of the situation calculus described here, a situation does not represent a state, contrarily to the literal meaning of the term and contrarily to the original definition by McCarthy and Hayes. This point has been summarized by Reiter as follows: A situation is a finite sequence of actions. Period. It's not a state, it's not a snapshot, it's a history. The situation before any actions have been performed is typically denoted ⁠ S 0 {\displaystyle S_{0}} ⁠ and called the initial situation. The new situation resulting from the performance of an action is denoted using the function symbol do (Some other references also use result). This function symbol has a situation and an action as arguments, and a situation as a result, the latter being the situation that results from performing the given action in the given situation. The fact that situations are sequences of actions and not states is enforced by an axiom stating that d o ( a , s ) {\displaystyle do(a,s)} is equal to d o ( a ′ , s ′ ) {\displaystyle do(a',s')} if and only if a = a ′ {\displaystyle a=a'} and s = s ′ {\displaystyle s=s'} . 
This condition would make no sense if situations were states, as two different actions executed in two different states can result in the same state. In the example robot world, if the robot's first action is to move to location ( 2 , 3 ) {\displaystyle (2,3)} , the first action is m o v e ( 2 , 3 ) {\displaystyle move(2,3)} and the resulting situation is d o ( m o v e ( 2 , 3 ) , S 0 ) {\displaystyle do(move(2,3),S_{0})} . If its next action is to pick up the ball, the resulting situation is d o ( p i c k u p ( B a l l ) , d o ( m o v e ( 2 , 3 ) , S 0 ) ) {\displaystyle do(pickup(Ball),do(move(2,3),S_{0}))} . Situation terms like d o ( m o v e ( 2 , 3 ) , S 0 ) {\displaystyle do(move(2,3),S_{0})} and d o ( p i c k u p ( B a l l ) , d o ( m o v e ( 2 , 3 ) , S 0 ) ) {\displaystyle do(pickup(Ball),do(move(2,3),S_{0}))} denote sequences of executed actions, not descriptions of the states that result from their execution. === Fluents === Statements whose truth value may change are modeled by relational fluents, predicates that take a situation as their final argument. Also possible are functional fluents, functions that take a situation as their final argument and return a situation-dependent value. Fluents may be thought of as "properties of the world". In the example, the fluent isCarrying ( o , s ) {\displaystyle {\textit {isCarrying}}(o,s)} can be used to indicate that the robot is carrying a particular object in a particular situation. If the robot initially carries nothing, isCarrying ( B a l l , S 0 ) {\displaystyle {\textit {isCarrying}}(Ball,S_{0})} is false while isCarrying ( B a l l , d o ( p i c k u p ( B a l l ) , S 0 ) ) {\displaystyle {\textit {isCarrying}}(Ball,do(pickup(Ball),S_{0}))} is true. The location of the robot can be modeled using a functional fluent l o c a t i o n ( s ) {\displaystyle location(s)} that returns the location ( x , y ) {\displaystyle (x,y)} of the robot in a particular situation. 
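The view of situations as action histories can be made concrete with a small sketch. Here situations are encoded as Python tuples of actions (an illustrative encoding, not part of the formalism): S0 is the empty history, do appends an action, and a functional fluent such as location is computed by scanning the history.

```python
# Situations as action histories: S0 is the empty history, do(a, s) appends a.
S0 = ()

def do(action, s):
    return s + (action,)

def move(x, y):
    return ("move", x, y)

def pickup(o):
    return ("pickup", o)

s1 = do(move(2, 3), S0)
s2 = do(pickup("Ball"), s1)    # do(pickup(Ball), do(move(2, 3), S0))

# Situations are histories, not states: two situations are equal exactly
# when their action sequences are.
assert s2 == (("move", 2, 3), ("pickup", "Ball"))

# A functional fluent: the robot's location in situation s (it starts at (0, 0)).
def location(s):
    loc = (0, 0)
    for a in s:
        if a[0] == "move":
            loc = (a[1], a[2])
    return loc

assert location(S0) == (0, 0)
assert location(s2) == (2, 3)
```

Tuple equality mirrors the foundational axiom that do(a, s) = do(a', s') holds iff a = a' and s = s'.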
== Formulae == The description of a dynamic world is encoded in second-order logic using three kinds of formulae: formulae about actions (preconditions and effects), formulae about the state of the world, and foundational axioms. === Action preconditions === Some actions may not be executable in a given situation. For example, it is impossible to put down an object unless one is in fact carrying it. The restrictions on the performance of actions are modeled by literals of the form Poss ( a , s ) {\displaystyle {\textit {Poss}}(a,s)} , where a is an action, s a situation, and Poss is a special binary predicate denoting executability of actions. In the example, the condition that dropping an object is only possible when one is carrying it is modeled by: Poss ( d r o p ( o ) , s ) ↔ isCarrying ( o , s ) {\displaystyle {\textit {Poss}}(drop(o),s)\leftrightarrow {\textit {isCarrying}}(o,s)} As a more complex example, the following models that the robot can carry only one object at a time, and that some objects are too heavy for the robot to lift (indicated by the predicate heavy): Poss ( p i c k u p ( o ) , s ) ↔ ( ∀ z ¬ isCarrying ( z , s ) ) ∧ ¬ h e a v y ( o ) {\displaystyle {\textit {Poss}}(pickup(o),s)\leftrightarrow (\forall z\ \neg {\textit {isCarrying}}(z,s))\wedge \neg heavy(o)} === Action effects === Given that an action is possible in a situation, one must specify the effects of that action on the fluents. This is done by the effect axioms. For example, the fact that picking up an object causes the robot to be carrying it can be modeled as: P o s s ( p i c k u p ( o ) , s ) → isCarrying ( o , d o ( p i c k u p ( o ) , s ) ) {\displaystyle Poss(pickup(o),s)\rightarrow {\textit {isCarrying}}(o,do(pickup(o),s))} It is also possible to specify conditional effects, which are effects that depend on the current state. 
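Continuing with the history encoding, the two precondition axioms above can be rendered as executable checks. This is a sketch with illustrative names: the object universe, the heavy objects, and the bookkeeping for isCarrying are all assumptions made here for the example.

```python
OBJECTS = {"Ball", "Piano"}
HEAVY = {"Piano"}

# A relational fluent: is the robot carrying o in situation s (a tuple of actions)?
def is_carrying(o, s):
    holds = False
    for a in s:
        if a == ("pickup", o):
            holds = True
        elif a == ("drop", o):
            holds = False
    return holds

# Poss(drop(o), s)   <-> isCarrying(o, s)
# Poss(pickup(o), s) <-> (forall z. not isCarrying(z, s)) and not heavy(o)
def poss(action, s):
    kind, o = action
    if kind == "drop":
        return is_carrying(o, s)
    if kind == "pickup":
        return all(not is_carrying(z, s) for z in OBJECTS) and o not in HEAVY
    return True

S0 = ()
assert not poss(("drop", "Ball"), S0)                       # carrying nothing yet
assert poss(("pickup", "Ball"), S0)
assert not poss(("pickup", "Piano"), S0)                    # too heavy
assert not poss(("pickup", "Ball"), (("pickup", "Ball"),))  # hands already full
```

The universal quantifier in the pickup axiom becomes an `all(...)` over the (finite, illustrative) object universe.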
The following models that some objects are fragile (indicated by the predicate fragile) and dropping them causes them to be broken (indicated by the fluent broken): P o s s ( d r o p ( o ) , s ) ∧ f r a g i l e ( o ) → b r o k e n ( o , d o ( d r o p ( o ) , s ) ) {\displaystyle Poss(drop(o),s)\wedge fragile(o)\rightarrow broken(o,do(drop(o),s))} While this formula correctly describes the effect of the actions, it is not sufficient to correctly describe the action in logic, because of the frame problem. === The frame problem === While the above formulae seem suitable for reasoning about the effects of actions, they have a critical weakness—they cannot be used to derive the non-effects of actions. For example, it is not possible to deduce that after picking up an object, the robot's location remains unchanged. This requires a so-called frame axiom, a formula like: P o s s ( p i c k u p ( o ) , s ) ∧ l o c a t i o n ( s ) = ( x , y ) → l o c a t i o n ( d o ( p i c k u p ( o ) , s ) ) = ( x , y ) {\displaystyle Poss(pickup(o),s)\wedge location(s)=(x,y)\rightarrow location(do(pickup(o),s))=(x,y)} The need to specify frame axioms has long been recognised as a problem in axiomatizing dynamic worlds, and is known as the frame problem. As there are generally a very large number of such axioms, it is very easy for the designer to leave out a necessary frame axiom, or to forget to modify all appropriate axioms when a change to the world description is made. === The successor state axioms === The successor state axioms "solve" the frame problem in the situation calculus. According to this solution, the designer must enumerate as effect axioms all the ways in which the value of a particular fluent can be changed. 
The effect axioms affecting the value of fluent F ( x → , s ) {\displaystyle F({\overrightarrow {x}},s)} can be written in generalised form as a positive and a negative effect axiom: P o s s ( a , s ) ∧ γ F + ( x → , a , s ) → F ( x → , d o ( a , s ) ) {\displaystyle Poss(a,s)\wedge \gamma _{F}^{+}({\overrightarrow {x}},a,s)\rightarrow F({\overrightarrow {x}},do(a,s))} P o s s ( a , s ) ∧ γ F − ( x → , a , s ) → ¬ F ( x → , d o ( a , s ) ) {\displaystyle Poss(a,s)\wedge \gamma _{F}^{-}({\overrightarrow {x}},a,s)\rightarrow \neg F({\overrightarrow {x}},do(a,s))} The formula γ F + {\displaystyle \gamma _{F}^{+}} describes the conditions under which action a in situation s makes the fluent F become true in the successor situation d o ( a , s ) {\displaystyle do(a,s)} . Likewise, γ F − {\displaystyle \gamma _{F}^{-}} describes the conditions under which performing action a in situation s makes fluent F false in the successor situation. If this pair of axioms describe all the ways in which fluent F can change value, they can be rewritten as a single axiom: P o s s ( a , s ) → [ F ( x → , d o ( a , s ) ) ↔ γ F + ( x → , a , s ) ∨ ( F ( x → , s ) ∧ ¬ γ F − ( x → , a , s ) ) ] {\displaystyle Poss(a,s)\rightarrow \left[F({\overrightarrow {x}},do(a,s))\leftrightarrow \gamma _{F}^{+}({\overrightarrow {x}},a,s)\vee \left(F({\overrightarrow {x}},s)\wedge \neg \gamma _{F}^{-}({\overrightarrow {x}},a,s)\right)\right]} In words, this formula states: "given that it is possible to perform action a in situation s, the fluent F would be true in the resulting situation d o ( a , s ) {\displaystyle do(a,s)} if and only if performing a in s would make it true, or it is true in situation s and performing a in s would not make it false." 
By way of example, the value of the fluent broken introduced above is given by the following successor state axiom: P o s s ( a , s ) → [ b r o k e n ( o , d o ( a , s ) ) ↔ a = d r o p ( o ) ∧ f r a g i l e ( o ) ∨ b r o k e n ( o , s ) ∧ a ≠ r e p a i r ( o ) ] {\displaystyle Poss(a,s)\rightarrow \left[broken(o,do(a,s))\leftrightarrow a=drop(o)\wedge fragile(o)\vee broken(o,s)\wedge a\neq repair(o)\right]} === States === The properties of the initial or any other situation can be specified by simply stating them as formulae. For example, a fact about the initial state is formalized by making assertions about S 0 {\displaystyle S_{0}} (which is not a state, but a situation). The following statements model that initially, the robot carries nothing, is at location ( 0 , 0 ) {\displaystyle (0,0)} , and there are no broken objects: ∀ z ¬ isCarrying ( z , S 0 ) {\displaystyle \forall z\,\neg {\textit {isCarrying}}(z,S_{0})} l o c a t i o n ( S 0 ) = ( 0 , 0 ) {\displaystyle location(S_{0})=(0,0)\,} ∀ o ¬ b r o k e n ( o , S 0 ) {\displaystyle \forall o\,\neg broken(o,S_{0})} === Foundational axioms === The foundational axioms of the situation calculus formalize the idea that situations are histories by having d o ( a , s ) = d o ( a ′ , s ′ ) ⟺ a = a ′ ∧ s = s ′ {\displaystyle do(a,s)=do(a',s')\iff a=a'\land s=s'} . They also include other properties such as the second-order induction on situations. == Regression == Regression is a mechanism for proving consequences in the situation calculus. It is based on expressing a formula containing the situation d o ( a , s ) {\displaystyle do(a,s)} in terms of a formula containing the action a and the situation s, but not the situation d o ( a , s ) {\displaystyle do(a,s)} . By iterating this procedure, one can end up with an equivalent formula containing only the initial situation S0. Proving consequences is supposedly simpler from this formula than from the original one. 
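The successor state axiom for broken can be turned into a recursive evaluator over the action history, which is essentially regression performed one step at a time: each call peels off the last action and rewrites the query in terms of the preceding situation. The set of fragile objects and the initial situation below are illustrative assumptions.

```python
S0 = ()

def do(action, s):
    return s + (action,)

FRAGILE = {"Ball"}

# Poss(a, s) -> [ broken(o, do(a, s)) <->
#                 a = drop(o) and fragile(o)  or  broken(o, s) and a != repair(o) ]
def broken(o, s):
    if s == S0:
        return False                        # initially, no broken objects
    a, prev = s[-1], s[:-1]
    if a == ("drop", o) and o in FRAGILE:   # positive effect
        return True
    if a == ("repair", o):                  # negative effect
        return False
    return broken(o, prev)                  # frame: otherwise the value persists

s = do(("drop", "Ball"), do(("pickup", "Ball"), S0))
assert broken("Ball", s)
assert not broken("Ball", do(("repair", "Ball"), s))
assert not broken("Piano", s)               # non-fragile objects never break here
```

The final `return broken(o, prev)` line is exactly the frame-axiom content folded into the successor state axiom: an action that neither breaks nor repairs o leaves the fluent unchanged.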
== GOLOG == GOLOG is a logic programming language based on the situation calculus. == The original version of the situation calculus == The main difference between the original situation calculus by McCarthy and Hayes and the one in use today is the interpretation of situations. In the modern version of the situational calculus, a situation is a sequence of actions. Originally, situations were defined as "the complete state of the universe at an instant of time". It was clear from the beginning that such situations could not be completely described; the idea was simply to give some statements about situations, and derive consequences from them. This is also different from the approach that is taken by the fluent calculus, where a state can be a collection of known facts, that is, a possibly incomplete description of the universe. In the original version of the situation calculus, fluents are not reified. In other words, conditions that can change are represented by predicates and not by functions. Actually, McCarthy and Hayes defined a fluent as a function that depends on the situation, but they then proceeded always using predicates to represent fluents. For example, the fact that it is raining at place x in the situation s is represented by the literal r a i n i n g ( x , s ) {\displaystyle raining(x,s)} . In the 1986 version of the situation calculus by McCarthy, functional fluents are used. For example, the position of an object x in the situation s is represented by the value of l o c a t i o n ( x , s ) {\displaystyle location(x,s)} , where location is a function. Statements about such functions can be given using equality: l o c a t i o n ( x , s ) = l o c a t i o n ( x , s ′ ) {\displaystyle location(x,s)=location(x,s')} means that the location of the object x is the same in the two situations s and s ′ {\displaystyle s'} . 
The execution of actions is represented by the function result: the execution of the action a in the situation s is the situation result ( a , s ) {\displaystyle {\textit {result}}(a,s)} . The effects of actions are expressed by formulae relating fluents in situation s and fluents in situations result ( a , s ) {\displaystyle {\textit {result}}(a,s)} . For example, that the action of opening the door results in the door being open if not locked is represented by: ¬ l o c k e d ( d o o r , s ) → o p e n ( d o o r , result ( o p e n s , s ) ) {\displaystyle \neg locked(door,s)\rightarrow open(door,{\textit {result}}(opens,s))} The predicates locked and open represent the conditions of a door being locked and open, respectively. Since these conditions may vary, they are represented by predicates with a situation argument. The formula says that if the door is not locked in a situation, then the door is open after executing the action of opening, this action being represented by the constant opens. These formulae are not sufficient to derive everything that is considered plausible. Indeed, fluents at different situations are only related if they are preconditions and effects of actions; if a fluent is not affected by an action, there is no way to deduce it did not change. For example, the formula above does not imply that ¬ l o c k e d ( d o o r , result ( o p e n s , s ) ) {\displaystyle \neg locked(door,{\textit {result}}(opens,s))} follows from ¬ l o c k e d ( d o o r , s ) {\displaystyle \neg locked(door,s)} , which is what one would expect (the door is not made locked by opening it). In order for inertia to hold, formulae called frame axioms are needed. 
These formulae specify all non-effects of actions: ¬ l o c k e d ( d o o r , s ) → ¬ l o c k e d ( d o o r , result ( o p e n s , s ) ) {\displaystyle \neg locked(door,s)\rightarrow \neg locked(door,{\textit {result}}(opens,s))} In the original formulation of the situation calculus, the initial situation, later denoted by ⁠ S 0 {\displaystyle S_{0}} ⁠, is not explicitly identified. The initial situation is not needed if situations are taken to be descriptions of the world. For example, the scenario in which the door was closed but not locked and the action of opening it is performed is formalized by taking a constant s to mean the initial situation and making statements about it (e.g., ¬ l o c k e d ( d o o r , s ) {\displaystyle \neg locked(door,s)} ). That the door is open after the change is reflected by the formula o p e n ( d o o r , result ( o p e n s , s ) ) {\displaystyle open(door,{\textit {result}}(opens,s))} being entailed. The initial situation is instead necessary if, as in the modern situation calculus, a situation is taken to be a history of actions, as the initial situation represents the empty sequence of actions. The version of the situation calculus introduced by McCarthy in 1986 differs from the original one in its use of functional fluents (e.g., l o c a t i o n ( x , s ) {\displaystyle location(x,s)} is a term representing the position of x in the situation s) and in its attempt to use circumscription to replace the frame axioms. == The situation calculus as a logic program == It is also possible (e.g. 
Kowalski 1979, Apt and Bezem 1990, Shanahan 1997) to write the situation calculus as a logic program: Holds ( f , d o ( a , s ) ) ← Poss ( a , s ) ∧ Initiates ( a , f , s ) {\displaystyle {\textit {Holds}}(f,do(a,s))\leftarrow {\textit {Poss}}(a,s)\wedge {\textit {Initiates}}(a,f,s)} Holds ( f , d o ( a , s ) ) ← Poss ( a , s ) ∧ Holds ( f , s ) ∧ ¬ Terminates ( a , f , s ) {\displaystyle {\textit {Holds}}(f,do(a,s))\leftarrow {\textit {Poss}}(a,s)\wedge {\textit {Holds}}(f,s)\wedge \neg {\textit {Terminates}}(a,f,s)} Here Holds is a meta-predicate and the variable f ranges over fluents. The predicates Poss, Initiates and Terminates correspond to the predicates Poss, γ F + ( x → , a , s ) {\displaystyle \gamma _{F}^{+}({\overrightarrow {x}},a,s)} , and γ F − ( x → , a , s ) {\displaystyle \gamma _{F}^{-}({\overrightarrow {x}},a,s)} respectively. The left arrow ← is half of the equivalence ↔. The other half is implicit in the completion of the program, in which negation is interpreted as negation as failure. Induction axioms are also implicit, and are needed only to prove program properties. Backward reasoning as in SLD resolution, which is the usual mechanism used to execute logic programs, implements regression implicitly. == See also == Frame problem Event calculus == References == J. McCarthy and P. Hayes (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence, 4:463–502. Edinburgh University Press, 1969. R. Kowalski (1979). Logic for Problem Solving - Elsevier North Holland. K.R. Apt and M. Bezem (1990). Acyclic Programs. In: 7th International Conference on Logic Programming. MIT Press. Jerusalem, Israel. R. Reiter (1991). The frame problem in the situation calculus: a simple solution (sometimes) and a completeness result for goal regression. 
In Vladimir Lifshitz, editor, Artificial intelligence and mathematical theory of computation: papers in honour of John McCarthy, pages 359–380, San Diego, CA, USA. Academic Press Professional, Inc. 1991. M. Shanahan (1997). Solving the Frame Problem: a Mathematical Investigation of the Common Sense Law of Inertia. MIT Press. H. Levesque, F. Pirri, and R. Reiter (1998). Foundations for the situation calculus. Electronic Transactions on Artificial Intelligence, 2(3–4):159-178. F. Pirri and R. Reiter (1999). Some contributions to the metatheory of the Situation Calculus. Journal of the ACM, 46(3):325–361. doi:10.1145/316542.316545 R. Reiter (2001). Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. The MIT Press.
Wikipedia/Situation_calculus
The join-calculus is a process calculus developed at INRIA. The join-calculus was developed to provide a formal basis for the design of distributed programming languages, and therefore intentionally avoids communications constructs found in other process calculi, such as rendezvous communications, which are difficult to implement in a distributed setting. Despite this limitation, the join-calculus is as expressive as the full π-calculus. Encodings of the π-calculus in the join-calculus, and vice versa, have been demonstrated. The join-calculus is a member of the π-calculus family of process calculi, and can be considered, at its core, an asynchronous π-calculus with several strong restrictions: Scope restriction, reception, and replicated reception are syntactically merged into a single construct, the definition; Communication occurs only on defined names; For every defined name there is exactly one replicated reception. However, as a language for programming, the join-calculus offers at least one convenience over the π-calculus — namely the use of multi-way join patterns, the ability to match against messages from multiple channels simultaneously. == Implementations == === Languages based on the join-calculus === The join-calculus programming language is a new language based on the join-calculus process calculus. It is implemented as an interpreter written in OCaml, and supports statically typed distributed programming, transparent remote communication, agent-based mobility, and some failure-detection. Though not explicitly based on join-calculus, the rule system of CLIPS implements it if every rule deletes its inputs when triggered (retracts the relevant facts when fired). 
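The multi-way join patterns mentioned above, matching messages from several channels at once, can be illustrated with a small sketch. This is a hypothetical, sequential Python API invented for the example, not a real join-calculus implementation: the body of a two-channel join fires only when a message is pending on both channels.

```python
from collections import deque

class Join2:
    """A two-channel join pattern: the body fires only when a message is
    pending on BOTH channels (illustrative sketch, hypothetical API)."""

    def __init__(self, body):
        self.a, self.b, self.body = deque(), deque(), body

    def send_a(self, msg):
        self.a.append(msg)
        self._try_fire()

    def send_b(self, msg):
        self.b.append(msg)
        self._try_fire()

    def _try_fire(self):
        # Consume one message from each channel whenever both are non-empty.
        while self.a and self.b:
            self.body(self.a.popleft(), self.b.popleft())

results = []
j = Join2(lambda x, y: results.append(x + y))
j.send_a(1)
j.send_a(2)     # queued: nothing is pending on channel b yet
j.send_b(10)    # both channels now non-empty, so the join fires with (1, 10)
assert results == [11]
```

A single-channel receive can be written this way too, but the join pattern's point is the simultaneous match, something a pair of independent receives cannot express atomically.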
Many implementations of the join-calculus were made as extensions of existing programming languages: JoCaml is a version of OCaml extended with join-calculus primitives Polyphonic C# and its successor Cω extend C# MC# and Parallel C# extend Polyphonic C# Join Java extends Java A Concurrent Basic proposal that uses Join-calculus JErlang (the J is for Join, erjang is Erlang for the JVM) === Embeddings in other programming languages === These implementations do not change the underlying programming language but introduce join calculus operations through a custom library or DSL: The ScalaJoins and the Chymyst libraries are in Scala JoinHs by Einar Karttunen and syallop/Join-Language by Samuel Yallop are DSLs for Join calculus in Haskell Joinads - various implementations of join calculus in F# CocoaJoin is an experimental implementation in Objective-C for iOS and Mac OS X The Join Python library in Python 3 C++ via Boost (for boost from 2009, ca. v. 40, current (Dec '19) is 72). == References == == External links == INRIA, Join Calculus homepage Microsoft Research, The Join Calculus: a Language for Distributed Mobile Programming
Wikipedia/Join_calculus
In mathematical logic, category theory, and computer science, kappa calculus is a formal system for defining first-order functions. Unlike lambda calculus, kappa calculus has no higher-order functions; its functions are not first class objects. Kappa-calculus can be regarded as "a reformulation of the first-order fragment of typed lambda calculus". Because its functions are not first-class objects, evaluation of kappa calculus expressions does not require closures. == Definition == The definition below has been adapted from the diagrams on pages 205 and 207 of Hasegawa. === Grammar === Kappa calculus consists of types and expressions, given by the grammar below: τ = 1 ∣ τ × τ ∣ … {\displaystyle \tau =1\mid \tau \times \tau \mid \ldots } e = x ∣ i d τ ∣ ! τ ∣ lift τ ⁡ ( e ) ∣ e ∘ e ∣ κ x : 1 → τ . e {\displaystyle e=x\mid id_{\tau }\mid !_{\tau }\mid \operatorname {lift} _{\tau }(e)\mid e\circ e\mid \kappa x:1{\to }\tau .e} In other words, 1 is a type If τ 1 {\displaystyle \tau _{1}} and τ 2 {\displaystyle \tau _{2}} are types then τ 1 × τ 2 {\displaystyle \tau _{1}\times \tau _{2}} is a type. Every variable is an expression If τ is a type then i d τ {\displaystyle id_{\tau }} is an expression If τ is a type then ! τ {\displaystyle !_{\tau }} is an expression If τ is a type and e is an expression then lift τ ⁡ ( e ) {\displaystyle \operatorname {lift} _{\tau }(e)} is an expression If e 1 {\displaystyle e_{1}} and e 2 {\displaystyle e_{2}} are expressions then e 1 ∘ e 2 {\displaystyle e_{1}\circ e_{2}} is an expression If x is a variable, τ is a type, and e is an expression, then κ x : 1 → τ . e {\displaystyle \kappa x{:}1{\to }\tau \;.\;e} is an expression The : 1 → τ {\displaystyle :1{\to }\tau } and the subscripts of id, !, and lift {\displaystyle \operatorname {lift} } are sometimes omitted when they can be unambiguously determined from the context. 
Juxtaposition is often used as an abbreviation for a combination of lift {\displaystyle \operatorname {lift} } and composition: e 1 e 2 = def e 1 ∘ lift ⁡ ( e 2 ) {\displaystyle e_{1}e_{2}\ {\overset {\operatorname {def} }{=}}\ e_{1}\circ \operatorname {lift} (e_{2})} === Typing rules === The presentation here uses sequents ( Γ ⊢ e : τ {\displaystyle \Gamma \vdash e:\tau } ) rather than hypothetical judgments in order to ease comparison with the simply typed lambda calculus. This requires the additional Var rule, which does not appear in Hasegawa. In kappa calculus an expression has two types: the type of its source and the type of its target. The notation e : τ 1 → τ 2 {\displaystyle e:\tau _{1}{\to }\tau _{2}} is used to indicate that expression e has source type τ 1 {\displaystyle {\tau _{1}}} and target type τ 2 {\displaystyle {\tau _{2}}} . Expressions in kappa calculus are assigned types according to the following rules: In other words, Var: assuming x : 1 → τ {\displaystyle x:1{\to }\tau } lets you conclude that x : 1 → τ {\displaystyle x:1{\to }\tau } Id: for any type τ, i d τ : τ → τ {\displaystyle id_{\tau }:\tau {\to }\tau } Bang: for any type τ, ! τ : τ → 1 {\displaystyle !_{\tau }:\tau {\to }1} Comp: if the target type of e 1 {\displaystyle e_{1}} matches the source type of e 2 {\displaystyle e_{2}} they may be composed to form an expression e 2 ∘ e 1 {\displaystyle e_{2}\circ e_{1}} with the source type of e 1 {\displaystyle e_{1}} and target type of e 2 {\displaystyle e_{2}} Lift: if e : 1 → τ 1 {\displaystyle e:1{\to }\tau _{1}} , then lift τ 2 ⁡ ( e ) : τ 2 → ( τ 1 × τ 2 ) {\displaystyle \operatorname {lift} _{\tau _{2}}(e):\tau _{2}{\to }(\tau _{1}\times \tau _{2})} Kappa: if we can conclude that e : τ 2 → τ 3 {\displaystyle e:\tau _{2}\to \tau _{3}} under the assumption that x : 1 → τ 1 {\displaystyle x:1{\to }\tau _{1}} , then we may conclude without that assumption that κ x : 1 → τ 1 . 
e : τ 1 × τ 2 → τ 3 {\displaystyle \kappa x{:}1{\to }\tau _{1}\,.\,e\;:\;\tau _{1}\times \tau _{2}\to \tau _{3}} === Equalities === Kappa calculus obeys the following equalities: Neutrality: If f : τ 1 → τ 2 {\displaystyle f:\tau _{1}{\to }\tau _{2}} then f ∘ i d τ 1 = f {\displaystyle f{\circ }id_{\tau _{1}}=f} and f = i d τ 2 ∘ f {\displaystyle f=id_{\tau _{2}}{\circ }f} Associativity: If f : τ 1 → τ 2 {\displaystyle f:\tau _{1}{\to }\tau _{2}} , g : τ 2 → τ 3 {\displaystyle g:\tau _{2}{\to }\tau _{3}} , and h : τ 3 → τ 4 {\displaystyle h:\tau _{3}{\to }\tau _{4}} , then ( h ∘ g ) ∘ f = h ∘ ( g ∘ f ) {\displaystyle (h{\circ }g){\circ }f=h{\circ }(g{\circ }f)} . Terminality: If f : τ → 1 {\displaystyle f{:}\tau {\to }1} and g : τ → 1 {\displaystyle g{:}\tau {\to }1} then f = g {\displaystyle f=g} Lift-Reduction: ( κ x . f ) ∘ lift τ ⁡ ( c ) = f [ c / x ] {\displaystyle (\kappa x.f)\circ \operatorname {lift} _{\tau }(c)=f[c/x]} Kappa-Reduction: κ x . ( h ∘ lift τ ⁡ ( x ) ) = h {\displaystyle \kappa x.(h\circ \operatorname {lift} _{\tau }(x))=h} if x is not free in h The last two equalities are reduction rules for the calculus, rewriting from left to right. == Properties == The type 1 can be regarded as the unit type. Because of this, any two functions whose argument type is the same and whose result type is 1 should be equal – since there is only a single value of type 1 both functions must return that value for every argument (Terminality). Expressions with type 1 → τ {\displaystyle 1{\to }\tau } can be regarded as "constants" or values of "ground type"; this is because 1 is the unit type, and so a function from this type is necessarily a constant function. Note that the kappa rule allows abstractions only when the variable being abstracted has the type 1 → τ {\displaystyle 1{\to }\tau } for some τ. This is the basic mechanism which ensures that all functions are first-order. 
== Categorical semantics == Kappa calculus is intended to be the internal language of contextually complete categories. == Examples == Expressions with multiple arguments have source types which are "right-imbalanced" binary trees. For example, a function f with three arguments of types A, B, and C and result type D will have type f : A × ( B × ( C × 1 ) ) → D {\displaystyle f:A\times (B\times (C\times 1))\to D} If we define left-associative juxtaposition f c {\displaystyle f\;c} as an abbreviation for ( f ∘ lift ⁡ ( c ) ) {\displaystyle (f\circ \operatorname {lift} (c))} , then – assuming that a : 1 → A {\displaystyle a:1{\to }A} , b : 1 → B {\displaystyle b:1{\to }B} , and c : 1 → C {\displaystyle c:1{\to }C} – we can apply this function: f a b c : 1 → D {\displaystyle f\;a\;b\;c\;:\;1\to D} Since the expression f a b c {\displaystyle f\;a\;b\;c} has source type 1, it is a "ground value" and may be passed as an argument to another function. If g : ( D × E ) → F {\displaystyle g:(D\times E){\to }F} , then g ( f a b c ) : E → F {\displaystyle g\;(f\;a\;b\;c)\;:\;E\to F} Much like a curried function of type A → ( B → ( C → D ) ) {\displaystyle A{\to }(B{\to }(C{\to }D))} in lambda calculus, partial application is possible: f a : B × ( C × 1 ) → D {\displaystyle f\;a\;:\;B\times (C\times 1)\to D} However no higher types (i.e. ( τ → τ ) → τ {\displaystyle (\tau {\to }\tau ){\to }\tau } ) are involved. Note that because the source type of f a is not 1, the following expression cannot be well-typed under the assumptions mentioned so far: h ( f a ) {\displaystyle h\;(f\;a)} Because successive application is used for multiple arguments it is not necessary to know the arity of a function in order to determine its typing; for example, if we know that c : 1 → C {\displaystyle c:1{\to }C} then the expression j c is well-typed as long as j has type ( C × α ) → β {\displaystyle (C\times \alpha ){\to }\beta } for some α and β. 
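The typing discipline at work in these examples can be sketched as a tiny checker. The encoding below is hypothetical (all names are illustrative, not from any implementation): each expression carries a source and a target type, and the Comp and Lift side conditions become simple runtime checks.

```python
# A toy model of kappa-calculus typing: every expression has a source and
# a target type; Comp and Lift enforce the side conditions of the rules.
UNIT = "1"

def prod(t1, t2):
    return ("x", t1, t2)  # the product type t1 * t2

class Expr:
    def __init__(self, name, src, tgt):
        self.name, self.src, self.tgt = name, src, tgt
    def __repr__(self):
        return f"{self.name} : {self.src} -> {self.tgt}"

def comp(e2, e1):        # Comp rule: e2 . e1
    assert e1.tgt == e2.src, "target of e1 must match source of e2"
    return Expr(f"({e2.name} . {e1.name})", e1.src, e2.tgt)

def lift(t2, e):         # Lift rule: lift_{t2}(e) : t2 -> t1 * t2
    assert e.src == UNIT, "lift applies only to e : 1 -> t1"
    return Expr(f"lift({e.name})", t2, prod(e.tgt, t2))

a = Expr("a", UNIT, "A")             # a "ground value" a : 1 -> A
f = Expr("f", prod("A", UNIT), "D")  # a one-argument function f : A * 1 -> D
fa = comp(f, lift(UNIT, a))          # juxtaposition f a = f . lift(a)
print(fa)                            # (f . lift(a)) : 1 -> D
```

Since `fa` has source type 1, it is itself a ground value, matching the discussion of `f a b c` above.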
This property is important when calculating the principal type of an expression, something which can be difficult when attempting to exclude higher-order functions from typed lambda calculi by restricting the grammar of types. == History == Barendregt originally introduced the term "functional completeness" in the context of combinatory algebra. Kappa calculus arose out of efforts by Lambek to formulate an appropriate analogue of functional completeness for arbitrary categories (see Hermida and Jacobs, section 1). Hasegawa later developed kappa calculus into a usable (though simple) programming language including arithmetic over natural numbers and primitive recursion. Connections to arrows were later investigated by Power, Thielecke, and others. == Variants == It is possible to explore versions of kappa calculus with substructural types such as linear, affine, and ordered types. These extensions require eliminating or restricting the ! τ {\displaystyle !_{\tau }} expression. In such circumstances the × type operator is not a true cartesian product, and is generally written ⊗ to make this clear. == References ==
Wikipedia/Kappa_calculus
In optics, polarized light can be described using the Jones calculus, invented by R. C. Jones in 1941. Polarized light is represented by a Jones vector, and linear optical elements are represented by Jones matrices. When light crosses an optical element the resulting polarization of the emerging light is found by taking the product of the Jones matrix of the optical element and the Jones vector of the incident light. Note that Jones calculus is only applicable to light that is already fully polarized. Light which is randomly polarized, partially polarized, or incoherent must be treated using Mueller calculus. == Jones vector == The Jones vector describes the polarization of light in free space or another homogeneous isotropic non-attenuating medium, where the light can be properly described as transverse waves. Suppose that a monochromatic plane wave of light is travelling in the positive z-direction, with angular frequency ω and wave vector k = (0,0,k), where the wavenumber k = ω/c. Then the electric and magnetic fields E and H are orthogonal to k at each point; they both lie in the plane "transverse" to the direction of motion. Furthermore, H is determined from E by 90-degree rotation and a fixed multiplier depending on the wave impedance of the medium. So the polarization of the light can be determined by studying E. The complex amplitude of E is written: ( E x ( t ) E y ( t ) 0 ) = ( E 0 x e i ( k z − ω t + ϕ x ) E 0 y e i ( k z − ω t + ϕ y ) 0 ) = ( E 0 x e i ϕ x E 0 y e i ϕ y 0 ) e i ( k z − ω t ) . {\displaystyle {\begin{pmatrix}E_{x}(t)\\E_{y}(t)\\0\end{pmatrix}}={\begin{pmatrix}E_{0x}e^{i(kz-\omega t+\phi _{x})}\\E_{0y}e^{i(kz-\omega t+\phi _{y})}\\0\end{pmatrix}}={\begin{pmatrix}E_{0x}e^{i\phi _{x}}\\E_{0y}e^{i\phi _{y}}\\0\end{pmatrix}}e^{i(kz-\omega t)}.} Note that the physical E field is the real part of this vector; the complex multiplier serves up the phase information. 
Here i {\displaystyle i} is the imaginary unit with i 2 = − 1 {\displaystyle i^{2}=-1} . The Jones vector is ( E 0 x e i ϕ x E 0 y e i ϕ y ) . {\displaystyle {\begin{pmatrix}E_{0x}e^{i\phi _{x}}\\E_{0y}e^{i\phi _{y}}\end{pmatrix}}.} Thus, the Jones vector represents the amplitude and phase of the electric field in the x and y directions. The sum of the squares of the absolute values of the two components of Jones vectors is proportional to the intensity of light. It is common to normalize it to 1 at the starting point of calculation for simplification. It is also common to constrain the first component of the Jones vectors to be a real number. This discards the overall phase information that would be needed for calculation of interference with other beams. Note that all Jones vectors and matrices in this article employ the convention that the phase of the light wave is given by ϕ = k z − ω t {\displaystyle \phi =kz-\omega t} , a convention used by Eugene Hecht. Under this convention, increase in ϕ x {\displaystyle \phi _{x}} (or ϕ y {\displaystyle \phi _{y}} ) indicates retardation (delay) in phase, while decrease indicates advance in phase. For example, a Jones vector component of i {\displaystyle i} ( = e i π / 2 {\displaystyle =e^{i\pi /2}} ) indicates retardation by π / 2 {\displaystyle \pi /2} (or 90 degrees) compared to 1 ( = e 0 {\displaystyle =e^{0}} ). Collett uses the opposite definition for the phase ( ϕ = ω t − k z {\displaystyle \phi =\omega t-kz} ). Also, Collett and Jones follow different conventions for the definitions of handedness of circular polarization. Jones' convention is called: "From the point of view of the receiver", while Collett's convention is called: "From the point of view of the source." The reader should be wary of the choice of convention when consulting references on the Jones calculus. The following table gives the six common examples of normalized Jones vectors. 
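These standard vectors are easy to write down and check numerically. The numpy sketch below lists the six vectors (horizontal, vertical, the two diagonals, and right- and left-circular, with the circular signs following the kz − ωt convention used in this article) and verifies that each is normalized and that the antipodal pairs on the Poincaré sphere are mutually orthogonal:

```python
import numpy as np

# The six standard normalized Jones vectors: horizontal (H), vertical (V),
# diagonal (D), anti-diagonal (A), right- and left-circular (R, L).
s = 1 / np.sqrt(2)
jones = {
    "H": np.array([1, 0], dtype=complex),
    "V": np.array([0, 1], dtype=complex),
    "D": s * np.array([1, 1], dtype=complex),
    "A": s * np.array([1, -1], dtype=complex),
    "R": s * np.array([1, -1j]),  # circular signs follow the kz - wt convention
    "L": s * np.array([1, 1j]),
}

# Each vector has unit norm, and the opposing (antipodal) pairs on the
# Poincare sphere are mutually orthogonal.
for v in jones.values():
    assert np.isclose(np.vdot(v, v).real, 1.0)
for p, q in [("H", "V"), ("D", "A"), ("R", "L")]:
    assert np.isclose(abs(np.vdot(jones[p], jones[q])), 0.0)
print("all six vectors normalized; opposing pairs orthogonal")
```

Under the opposite (Collett) convention the R and L entries swap their signs, which is exactly the handedness ambiguity warned about above.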
A general vector that points to any place on the surface is written as a ket | ψ ⟩ {\displaystyle |\psi \rangle } . When employing the Poincaré sphere (also known as the Bloch sphere), the basis kets ( | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } ) must be assigned to opposing (antipodal) pairs of the kets listed above. For example, one might assign | 0 ⟩ {\displaystyle |0\rangle } = | H ⟩ {\displaystyle |H\rangle } and | 1 ⟩ {\displaystyle |1\rangle } = | V ⟩ {\displaystyle |V\rangle } . These assignments are arbitrary. Opposing pairs are | H ⟩ {\displaystyle |H\rangle } and | V ⟩ {\displaystyle |V\rangle } | D ⟩ {\displaystyle |D\rangle } and | A ⟩ {\displaystyle |A\rangle } | R ⟩ {\displaystyle |R\rangle } and | L ⟩ {\displaystyle |L\rangle } The polarization of any point not equal to | R ⟩ {\displaystyle |R\rangle } or | L ⟩ {\displaystyle |L\rangle } and not on the circle that passes through | H ⟩ , | D ⟩ , | V ⟩ , | A ⟩ {\displaystyle |H\rangle ,|D\rangle ,|V\rangle ,|A\rangle } is known as elliptical polarization. == Jones matrices == Jones calculus is a matrix calculus developed in 1941 by Henry Hurwitz Jr. and R. Clark Jones and published in the Journal of the Optical Society of America. The Jones matrices are operators that act on the Jones vectors defined above. These matrices are implemented by various optical elements such as lenses, beam splitters, mirrors, etc. Each matrix represents projection onto a one-dimensional complex subspace of the Jones vectors. The following table gives examples of Jones matrices for polarizers: == Phase retarders == A phase retarder is an optical element that produces a phase difference between two orthogonal polarization components of a monochromatic polarized beam of light. 
Mathematically, using kets to represent Jones vectors, this means that the action of a phase retarder is to transform light with polarization | P ⟩ = c 1 | 1 ⟩ + c 2 | 2 ⟩ {\displaystyle |P\rangle =c_{1}|1\rangle +c_{2}|2\rangle } to | P ′ ⟩ = c 1 e i η / 2 | 1 ⟩ + c 2 e − i η / 2 | 2 ⟩ {\displaystyle |P'\rangle =c_{1}{\rm {e}}^{i\eta /2}|1\rangle +c_{2}{\rm {e}}^{-i\eta /2}|2\rangle } where | 1 ⟩ , | 2 ⟩ {\displaystyle |1\rangle ,|2\rangle } are orthogonal polarization components (i.e. ⟨ 1 | 2 ⟩ = 0 {\displaystyle \langle 1|2\rangle =0} ) that are determined by the physical nature of the phase retarder. In general, the orthogonal components could be any two basis vectors. For example, the action of the circular phase retarder is such that | 1 ⟩ = 1 2 ( 1 − i ) and | 2 ⟩ = 1 2 ( 1 i ) {\displaystyle |1\rangle ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1\\-i\end{pmatrix}}\qquad {\text{ and }}\qquad |2\rangle ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1\\i\end{pmatrix}}} However, linear phase retarders, for which | 1 ⟩ , | 2 ⟩ {\displaystyle |1\rangle ,|2\rangle } are linear polarizations, are more commonly encountered in discussion and in practice. In fact, sometimes the term "phase retarder" is used to refer specifically to linear phase retarders. Linear phase retarders are usually made out of birefringent uniaxial crystals such as calcite, MgF2 or quartz. Plates made of these materials for this purpose are referred to as waveplates. Uniaxial crystals have one crystal axis that is different from the other two crystal axes (i.e., ni ≠ nj = nk). This unique axis is called the extraordinary axis and is also referred to as the optic axis. An optic axis can be the fast or the slow axis for the crystal depending on the crystal at hand. Light travels with a higher phase velocity along an axis that has the smallest refractive index and this axis is called the fast axis. 
Similarly, an axis which has the largest refractive index is called a slow axis since the phase velocity of light is the lowest along this axis. "Negative" uniaxial crystals (e.g., calcite CaCO3, sapphire Al2O3) have ne < no so for these crystals, the extraordinary axis (optic axis) is the fast axis, whereas for "positive" uniaxial crystals (e.g., quartz SiO2, magnesium fluoride MgF2, rutile TiO2), ne > no and thus the extraordinary axis (optic axis) is the slow axis. Other commercially available linear phase retarders exist and are used in more specialized applications. The Fresnel rhomb is one such alternative. Any linear phase retarder with its fast axis defined as the x- or y-axis has zero off-diagonal terms and thus can be conveniently expressed as ( e i ϕ x 0 0 e i ϕ y ) {\displaystyle {\begin{pmatrix}{\rm {e}}^{i\phi _{x}}&0\\0&{\rm {e}}^{i\phi _{y}}\end{pmatrix}}} where ϕ x {\displaystyle \phi _{x}} and ϕ y {\displaystyle \phi _{y}} are the phase offsets of the electric fields in x {\displaystyle x} and y {\displaystyle y} directions respectively. In the phase convention ϕ = k z − ω t {\displaystyle \phi =kz-\omega t} , define the relative phase between the two waves as ϵ = ϕ y − ϕ x {\displaystyle \epsilon =\phi _{y}-\phi _{x}} . Then a positive ϵ {\displaystyle \epsilon } (i.e. ϕ y {\displaystyle \phi _{y}} > ϕ x {\displaystyle \phi _{x}} ) means that E y {\displaystyle E_{y}} doesn't attain the same value as E x {\displaystyle E_{x}} until a later time, i.e. E x {\displaystyle E_{x}} leads E y {\displaystyle E_{y}} . Similarly, if ϵ < 0 {\displaystyle \epsilon <0} , then E y {\displaystyle E_{y}} leads E x {\displaystyle E_{x}} . For example, if the fast axis of a quarter waveplate is horizontal, then the phase velocity along the horizontal direction is ahead of the vertical direction i.e., E x {\displaystyle E_{x}} leads E y {\displaystyle E_{y}} . 
Thus, ϕ x < ϕ y {\displaystyle \phi _{x}<\phi _{y}} which for a quarter waveplate yields ϕ y = ϕ x + π / 2 {\displaystyle \phi _{y}=\phi _{x}+\pi /2} . In the opposite convention ϕ = ω t − k z {\displaystyle \phi =\omega t-kz} , define the relative phase as ϵ = ϕ x − ϕ y {\displaystyle \epsilon =\phi _{x}-\phi _{y}} . Then ϵ > 0 {\displaystyle \epsilon >0} means that E y {\displaystyle E_{y}} doesn't attain the same value as E x {\displaystyle E_{x}} until a later time, i.e. E x {\displaystyle E_{x}} leads E y {\displaystyle E_{y}} . The Jones matrix for an arbitrary birefringent material is the most general form of a polarization transformation in the Jones calculus; it can represent any polarization transformation. To see this, one can show e − i η 2 ( cos 2 ⁡ θ + e i η sin 2 ⁡ θ ( 1 − e i η ) e − i ϕ cos ⁡ θ sin ⁡ θ ( 1 − e i η ) e i ϕ cos ⁡ θ sin ⁡ θ sin 2 ⁡ θ + e i η cos 2 ⁡ θ ) = ( cos ⁡ ( η / 2 ) − i sin ⁡ ( η / 2 ) cos ⁡ ( 2 θ ) − sin ⁡ ( η / 2 ) sin ⁡ ( ϕ ) sin ⁡ ( 2 θ ) − i sin ⁡ ( η / 2 ) cos ⁡ ( ϕ ) sin ⁡ ( 2 θ ) sin ⁡ ( η / 2 ) sin ⁡ ( ϕ ) sin ⁡ ( 2 θ ) − i sin ⁡ ( η / 2 ) cos ⁡ ( ϕ ) sin ⁡ ( 2 θ ) cos ⁡ ( η / 2 ) + i sin ⁡ ( η / 2 ) cos ⁡ ( 2 θ ) ) {\displaystyle {\begin{aligned}&{\rm {e}}^{-{\frac {i\eta }{2}}}{\begin{pmatrix}\cos ^{2}\theta +{\rm {e}}^{i\eta }\sin ^{2}\theta &\left(1-{\rm {e}}^{i\eta }\right){\rm {e}}^{-i\phi }\cos \theta \sin \theta \\\left(1-{\rm {e}}^{i\eta }\right){\rm {e}}^{i\phi }\cos \theta \sin \theta &\sin ^{2}\theta +{\rm {e}}^{i\eta }\cos ^{2}\theta \end{pmatrix}}\\&={\begin{pmatrix}\cos(\eta /2)-i\sin(\eta /2)\cos(2\theta )&-\sin(\eta /2)\sin(\phi )\sin(2\theta )-i\sin(\eta /2)\cos(\phi )\sin(2\theta )\\\sin(\eta /2)\sin(\phi )\sin(2\theta )-i\sin(\eta /2)\cos(\phi )\sin(2\theta )&\cos(\eta /2)+i\sin(\eta /2)\cos(2\theta )\end{pmatrix}}\end{aligned}}} The above matrix is a general parametrization for the elements of SU(2), using the convention SU ⁡ ( 2 ) = { ( α − β ¯ β α ¯ ) : α , β ∈ C , | α | 2 + | β | 2 = 1 } 
{\displaystyle \operatorname {SU} (2)=\left\{{\begin{pmatrix}\alpha &-{\overline {\beta }}\\\beta &{\overline {\alpha }}\end{pmatrix}}:\ \ \alpha ,\beta \in \mathbb {C} ,\ \ |\alpha |^{2}+|\beta |^{2}=1\right\}~} where the overline denotes complex conjugation. Finally, recognizing that the set of unitary transformations on C 2 {\displaystyle \mathbb {C} ^{2}} can be expressed as { e i γ ( α − β ¯ β α ¯ ) : α , β ∈ C , | α | 2 + | β | 2 = 1 , γ ∈ [ 0 , 2 π ] } {\displaystyle \left\{{\rm {e}}^{i\gamma }{\begin{pmatrix}\alpha &-{\overline {\beta }}\\\beta &{\overline {\alpha }}\end{pmatrix}}:\ \ \alpha ,\beta \in \mathbb {C} ,\ \ |\alpha |^{2}+|\beta |^{2}=1,\ \ \gamma \in [0,2\pi ]\right\}} it becomes clear that the Jones matrix for an arbitrary birefringent material represents any unitary transformation, up to a phase factor e i γ {\displaystyle {\rm {e}}^{i\gamma }} . Therefore, for appropriate choice of η {\displaystyle \eta } , θ {\displaystyle \theta } , and ϕ {\displaystyle \phi } , a transformation between any two Jones vectors can be found, up to a phase factor e i γ {\displaystyle {\rm {e}}^{i\gamma }} . However, in the Jones calculus, such phase factors do not change the represented polarization of a Jones vector, so are either considered arbitrary or imposed ad hoc to conform to a set convention. The special expressions for the phase retarders can be obtained by taking suitable parameter values in the general expression for a birefringent material. In the general expression: The relative phase retardation induced between the fast axis and the slow axis is given by η = ϕ y − ϕ x {\displaystyle \eta =\phi _{y}-\phi _{x}} θ {\displaystyle \theta } is the orientation of the fast axis with respect to the x-axis. ϕ {\displaystyle \phi } is the circularity. Note that for linear retarders, ϕ {\displaystyle \phi } = 0 and for circular retarders, ϕ {\displaystyle \phi } = ± π {\displaystyle \pi } /2, θ {\displaystyle \theta } = π {\displaystyle \pi } /4. 
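As a concrete check of the general birefringent form, setting η = π/2 (quarter-wave retardation), θ = π/4 (fast axis at 45°), and ϕ = 0 and applying the resulting Jones matrix to horizontally polarized light yields circularly polarized light. A short numpy sketch, dropping the overall e^{−iη/2} phase factor since it does not affect the polarization state:

```python
import numpy as np

# Quarter-wave plate with fast axis at 45 degrees, built from the general
# linear-retarder matrix above with eta = pi/2, theta = pi/4, phi = 0
# (overall phase factor omitted).
eta, theta = np.pi / 2, np.pi / 4
c, s = np.cos(theta), np.sin(theta)
qwp = np.array([
    [c**2 + np.exp(1j * eta) * s**2, (1 - np.exp(1j * eta)) * c * s],
    [(1 - np.exp(1j * eta)) * c * s, s**2 + np.exp(1j * eta) * c**2],
])

h = np.array([1, 0], dtype=complex)  # horizontally polarized input
out = qwp @ h
out = out / out[0]                   # remove the arbitrary overall phase
print(out)                           # proportional to (1, -i): circular polarization
```

This is the textbook use of a quarter-wave plate to convert linear into circular polarization.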
In general for elliptical retarders, ϕ {\displaystyle \phi } takes on values between - π {\displaystyle \pi } /2 and π {\displaystyle \pi } /2. == Axially rotated elements == Assume an optical element has its optic axis perpendicular to the surface vector for the plane of incidence and is rotated about this surface vector by angle θ/2 (i.e., the principal plane through which the optic axis passes, makes angle θ/2 with respect to the plane of polarization of the electric field of the incident TE wave). Recall that a half-wave plate rotates polarization as twice the angle between incident polarization and optic axis (principal plane). Therefore, the Jones matrix for the rotated polarization state, M(θ), is M ( θ ) = R ( − θ ) M R ( θ ) , {\displaystyle M(\theta )=R(-\theta )\,M\,R(\theta ),} where R ( θ ) = ( cos ⁡ θ sin ⁡ θ − sin ⁡ θ cos ⁡ θ ) . {\displaystyle R(\theta )={\begin{pmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{pmatrix}}.} This agrees with the expression for a half-wave plate in the table above. These rotations are identical to beam unitary splitter transformation in optical physics given by R ( θ ) = ( r t ′ t r ′ ) {\displaystyle R(\theta )={\begin{pmatrix}r&t'\\t&r'\end{pmatrix}}} where the primed and unprimed coefficients represent beams incident from opposite sides of the beam splitter. The reflected and transmitted components acquire a phase θr and θt, respectively. The requirements for a valid representation of the element are θ t − θ r + θ t' − θ r' = ± π {\displaystyle \theta _{\text{t}}-\theta _{\text{r}}+\theta _{\text{t'}}-\theta _{\text{r'}}=\pm \pi } and r ∗ t ′ + t ∗ r ′ = 0. {\displaystyle r^{*}t'+t^{*}r'=0.} Both of these representations are unitary matrices fitting these requirements; and as such, are both valid. == Arbitrarily rotated elements == Finding the Jones matrix, J(α, β, γ), for an arbitrary rotation involves a three-dimensional rotation matrix. 
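Before moving to three dimensions, the two-dimensional rule M(θ) = R(−θ) M R(θ) from the previous section can be verified numerically: rotating a horizontal linear polarizer by 45° must reproduce the standard +45° polarizer.

```python
import numpy as np

# Sketch of the rotation rule M(theta) = R(-theta) M R(theta).
def R(t):
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

M_h = np.array([[1.0, 0.0],   # Jones matrix of a horizontal linear polarizer
                [0.0, 0.0]])
t = np.pi / 4
M_45 = R(-t) @ M_h @ R(t)
print(M_45)                   # 0.5 * [[1, 1], [1, 1]], the +45-degree polarizer
```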
In the following notation α, β and γ are the yaw, pitch, and roll angles (rotation about the z-, y-, and x-axes, with x being the direction of propagation), respectively. The full combination of the 3-dimensional rotation matrices is the following: R 3 D ( θ ) = [ cos ⁡ α cos ⁡ β cos ⁡ α sin ⁡ β sin ⁡ γ − sin ⁡ α cos ⁡ γ cos ⁡ α sin ⁡ β cos ⁡ γ + sin ⁡ α sin ⁡ γ sin ⁡ α cos ⁡ β sin ⁡ α sin ⁡ β sin ⁡ γ + cos ⁡ α cos ⁡ γ sin ⁡ α sin ⁡ β cos ⁡ γ − cos ⁡ α sin ⁡ γ − sin ⁡ β cos ⁡ β sin ⁡ γ cos ⁡ β cos ⁡ γ ] {\displaystyle R_{3D}(\theta )={\begin{bmatrix}\cos \alpha \cos \beta &\cos \alpha \sin \beta \sin \gamma -\sin \alpha \cos \gamma &\cos \alpha \sin \beta \cos \gamma +\sin \alpha \sin \gamma \\\sin \alpha \cos \beta &\sin \alpha \sin \beta \sin \gamma +\cos \alpha \cos \gamma &\sin \alpha \sin \beta \cos \gamma -\cos \alpha \sin \gamma \\-\sin \beta &\cos \beta \sin \gamma &\cos \beta \cos \gamma \\\end{bmatrix}}} Using the above, for any base Jones matrix J, you can find the rotated state J(α, β, γ) using: J ( α , β , γ ) = R 3 D ( − α , − β , − γ ) ⋅ J ⋅ R 3 D ( α , β , γ ) {\displaystyle J(\alpha ,\beta ,\gamma )=R_{3D}(-\alpha ,-\beta ,-\gamma )\cdot J\cdot R_{3D}(\alpha ,\beta ,\gamma )} The simplest case, where the Jones matrix is for an ideal linear horizontal polarizer, reduces then to: J ( α , β , γ ) = [ c α 2 c β 2 c α c β [ c α s β s γ − s α c γ ] c α c β [ c α s β c γ + s α s γ ] s α c α c β 2 s α c β [ c α s β s γ − s α c γ ] s α c β [ c α s β c γ + s α s γ ] − c α s β c β − s β [ c α s β s γ − s α c γ ] − s β [ c α s β c γ + s α s γ ] ] {\displaystyle J(\alpha ,\beta ,\gamma )={\begin{bmatrix}c_{\alpha }^{2}c_{\beta }^{2}&c_{\alpha }c_{\beta }[c_{\alpha }s_{\beta }s_{\gamma }-s_{\alpha }c_{\gamma }]&c_{\alpha }c_{\beta }[c_{\alpha }s_{\beta }c_{\gamma }+s_{\alpha }s_{\gamma }]\\s_{\alpha }c_{\alpha }c_{\beta }^{2}&s_{\alpha }c_{\beta }[c_{\alpha }s_{\beta }s_{\gamma }-s_{\alpha }c_{\gamma }]&s_{\alpha }c_{\beta }[c_{\alpha }s_{\beta }c_{\gamma 
}+s_{\alpha }s_{\gamma }]\\-c_{\alpha }s_{\beta }c_{\beta }&-s_{\beta }[c_{\alpha }s_{\beta }s_{\gamma }-s_{\alpha }c_{\gamma }]&-s_{\beta }[c_{\alpha }s_{\beta }c_{\gamma }+s_{\alpha }s_{\gamma }]\\\end{bmatrix}}} where ci and si represent the cosine or sine of a given angle "i", respectively. See Russell A. Chipman and Garam Yun for further work done based on this. == See also == Polarization Scattering parameters Stokes parameters Mueller calculus Photon polarization == Notes == == References == == Further reading == Brosseau, Christian; Givens, Clark R.; Kostinski, Alexander B. (1993). "Generalized trace condition on the Mueller-Jones polarization matrix". Journal of the Optical Society of America A. 10 (10): 2248–2251. Bibcode:1993JOSAA..10.2248B. doi:10.1364/JOSAA.10.002248. Fymat, A. L. (1971). "Jones's Matrix Representation of Optical Instruments. 1: Beam Splitters". Applied Optics. 10 (11): 2499–2505. Bibcode:1971ApOpt..10.2499F. doi:10.1364/AO.10.002499. PMID 20111363. Fymat, A. L. (1971). "Jones's Matrix Representation of Optical Instruments. 2: Fourier Interferometers (Spectrometers and Spectropolarimeters)". Applied Optics. 10 (12): 2711–2716. Bibcode:1971ApOpt..10.2711F. doi:10.1364/AO.10.002711. PMID 20111418. Fymat, A. L. (1972). "Polarization Effects in Fourier Spectroscopy. I: Coherency Matrix Representation". Applied Optics. 11 (1): 160–173. Bibcode:1972ApOpt..11..160F. doi:10.1364/AO.11.000160. PMID 20111472. Gerrard, A.; Burch, J. M. (1975). Introduction to Matrix Methods in Optics (1st ed.). John Wiley & Sons. ISBN 0-471-29685-6. Gill, Jose Jorge; Bernabeu, Eusebio (1987). "Obtainment of the polarizing and retardation parameters of a non-depolarizing optical system from the polar decomposition of its Mueller matrix". Optik. 76: 67–71. Goldstein, D.; Collett, E. (2003). Polarized Light (2nd ed.). CRC Press. ISBN 0-8247-4053-X. Hecht, E. (1987). Optics (2nd ed.). Addison-Wesley. ISBN 0-201-11609-X. McGuire, James P.; Chipman, Russell A. (1994). 
"Polarization aberrations. 1. Rotationally symmetric optical systems". Applied Optics. 33 (22): 5080–5100. Bibcode:1994ApOpt..33.5080M. doi:10.1364/AO.33.005080. PMID 20935891. S2CID 3805982. Moreno, Ignacio; Yzuel, Maria J.; Campos, Juan; Vargas, Asticio (2004). "Jones matrix treatment for polarization Fourier optics". Journal of Modern Optics. 51 (14): 2031–2038. Bibcode:2004JMOp...51.2031M. doi:10.1080/09500340408232511. hdl:10533/175322. S2CID 120169144. Moreno, Ivan (2004). "Jones matrix for image-rotation prisms". Applied Optics. 43 (17): 3373–3381. Bibcode:2004ApOpt..43.3373M. doi:10.1364/AO.43.003373. PMID 15219016. S2CID 24268298. Pedrotti, Frank L.; Pedrotti, Leno S. (1993). Introduction to Optics (2nd ed.). Prentice Hall. ISBN 0-13-501545-6. Pistoni, Natale C. (1995). "Simplified approach to the Jones calculus in retracing optical circuits". Applied Optics. 34 (34): 7870–7876. Bibcode:1995ApOpt..34.7870P. doi:10.1364/AO.34.007870. PMID 2106888. Shurcliff, William (1966). "Chapter 8: Mueller Calculus and Jones Calculus". Polarized Light: Production and Use. Harvard University Press. p. 109. == External links == Jones Calculus written by E. Collett on Optipedia
Wikipedia/Jones_calculus
Mueller calculus is a matrix method for manipulating Stokes vectors, which represent the polarization of light. It was developed in 1943 by Hans Mueller. In this technique, the effect of a particular optical element is represented by a Mueller matrix—a 4×4 matrix that is an overlapping generalization of the Jones matrix. == Introduction == Disregarding coherent wave superposition, any fully polarized, partially polarized, or unpolarized state of light can be represented by a Stokes vector ( S → {\displaystyle {\vec {S}}} ); and any optical element can be represented by a Mueller matrix (M). If a beam of light is initially in the state S → i {\displaystyle {\vec {S}}_{i}} and then passes through an optical element M and comes out in a state S → o {\displaystyle {\vec {S}}_{o}} , then it is written S → o = M S → i . {\displaystyle {\vec {S}}_{o}=\mathrm {M} {\vec {S}}_{i}\ .} If a beam of light passes through optical element M1 followed by M2 then M3 it is written S → o = M 3 ( M 2 ( M 1 S → i ) ) {\displaystyle {\vec {S}}_{o}=\mathrm {M} _{3}\left(\mathrm {M} _{2}\left(\mathrm {M} _{1}{\vec {S}}_{i}\right)\right)} given that matrix multiplication is associative it can be written S → o = M 3 M 2 M 1 S → i . {\displaystyle {\vec {S}}_{o}=\mathrm {M} _{3}\mathrm {M} _{2}\mathrm {M} _{1}{\vec {S}}_{i}\ .} Matrix multiplication is not commutative, so in general M 3 M 2 M 1 S → i ≠ M 1 M 2 M 3 S → i . {\displaystyle \mathrm {M} _{3}\mathrm {M} _{2}\mathrm {M} _{1}{\vec {S}}_{i}\neq \mathrm {M} _{1}\mathrm {M} _{2}\mathrm {M} _{3}{\vec {S}}_{i}\ .} == Mueller vs. Jones calculi == With disregard for coherence, light which is unpolarized or partially polarized must be treated using the Mueller calculus, while fully polarized light can be treated with either the Mueller calculus or the simpler Jones calculus. 
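The composition rule and the warning about non-commutativity can be checked directly with numpy, using the standard Mueller matrices for horizontal and +45° linear polarizers: sending unpolarized light through the two elements in opposite orders yields different output polarizations.

```python
import numpy as np

# Sketch of S_o = M3 M2 M1 S_i and of the order-dependence of optical
# elements. Unpolarized light is passed through a horizontal polarizer and
# a +45-degree polarizer in both orders.
S = np.array([1.0, 0.0, 0.0, 0.0])    # Stokes vector of unpolarized light
M_h = 0.5 * np.array([[1, 1, 0, 0],   # linear polarizer, horizontal transmission
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
M_45 = 0.5 * np.array([[1, 0, 1, 0],  # linear polarizer, +45-degree transmission
                       [0, 0, 0, 0],
                       [1, 0, 1, 0],
                       [0, 0, 0, 0]])

print(M_45 @ M_h @ S)   # horizontal first, then +45: output polarized at +45
print(M_h @ M_45 @ S)   # +45 first, then horizontal: output polarized horizontally
```

Both orders transmit the same total intensity here, but the emerging polarization state differs, which is exactly why M3 M2 M1 must be kept in order.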
Many problems involving coherent light (such as from a laser) must be treated with Jones calculus, however, because it works directly with the electric field of the light rather than with its intensity or power, and thereby retains information about the phase of the waves. More specifically, the following can be said about Mueller matrices and Jones matrices: Stokes vectors and Mueller matrices operate on intensities and their differences, i.e. incoherent superpositions of light; they are not adequate to describe either interference or diffraction effects. (...) Any Jones matrix [J] can be transformed into the corresponding Mueller–Jones matrix, M, using the following relation: M = A ( J ⊗ J ∗ ) A − 1 {\displaystyle \mathrm {M=A(J\otimes J^{*})A^{-1}} } , where * indicates the complex conjugate [sic], [A is:] A = ( 1 0 0 1 1 0 0 − 1 0 1 1 0 0 i − i 0 ) {\displaystyle \mathrm {A} ={\begin{pmatrix}1&0&0&1\\1&0&0&-1\\0&1&1&0\\0&i&-i&0\\\end{pmatrix}}} and ⊗ is the tensor (Kronecker) product. (...) While the Jones matrix has eight independent parameters [two Cartesian or polar components for each of the four complex values in the 2-by-2 matrix], the absolute phase information is lost in the [equation above], leading to only seven independent matrix elements for a Mueller matrix derived from a Jones matrix. == Mueller matrices == Below are listed the Mueller matrices for some ideal common optical elements: General expression for reference frame rotation from the local frame to the laboratory frame: ( 1 0 0 0 0 cos ⁡ ( 2 θ ) sin ⁡ ( 2 θ ) 0 0 − sin ⁡ ( 2 θ ) cos ⁡ ( 2 θ ) 0 0 0 0 1 ) {\displaystyle {\begin{pmatrix}1&0&0&0\\0&\cos {(2\theta )}&\sin {(2\theta )}&0\\0&-\sin {(2\theta )}&\cos {(2\theta )}&0\\0&0&0&1\end{pmatrix}}\quad } where θ {\displaystyle \theta } is the angle of rotation. For rotation from the laboratory frame to the local frame, the sign of the sine terms inverts. 
Linear polarizer (horizontal transmission) 1 2 ( 1 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0 ) {\displaystyle {1 \over 2}{\begin{pmatrix}1&1&0&0\\1&1&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}}} The Mueller matrices for other polarizer rotation angles can be generated by reference frame rotation. Linear polarizer (vertical transmission) 1 2 ( 1 − 1 0 0 − 1 1 0 0 0 0 0 0 0 0 0 0 ) {\displaystyle {1 \over 2}{\begin{pmatrix}1&-1&0&0\\-1&1&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}}} Linear polarizer (+45° transmission) 1 2 ( 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 ) {\displaystyle {1 \over 2}{\begin{pmatrix}1&0&1&0\\0&0&0&0\\1&0&1&0\\0&0&0&0\end{pmatrix}}} Linear polarizer (−45° transmission) 1 2 ( 1 0 − 1 0 0 0 0 0 − 1 0 1 0 0 0 0 0 ) {\displaystyle {1 \over 2}{\begin{pmatrix}1&0&-1&0\\0&0&0&0\\-1&0&1&0\\0&0&0&0\end{pmatrix}}} General linear polarizer matrix 1 2 ( 1 cos ⁡ ( 2 θ ) sin ⁡ ( 2 θ ) 0 cos ⁡ ( 2 θ ) cos 2 ⁡ ( 2 θ ) cos ⁡ ( 2 θ ) sin ⁡ ( 2 θ ) 0 sin ⁡ ( 2 θ ) cos ⁡ ( 2 θ ) sin ⁡ ( 2 θ ) sin 2 ⁡ ( 2 θ ) 0 0 0 0 0 ) {\displaystyle {1 \over 2}{\begin{pmatrix}1&\cos {(2\theta )}&\sin {(2\theta )}&0\\\cos {(2\theta )}&\cos ^{2}(2\theta )&\cos(2\theta )\sin(2\theta )&0\\\sin {(2\theta )}&\cos(2\theta )\sin(2\theta )&\sin ^{2}(2\theta )&0\\0&0&0&0\end{pmatrix}}\quad } where θ {\displaystyle \theta } is the angle of rotation of the polarizer. 
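The reference-frame rotation above can be illustrated concretely. The following sketch (plain Python; the helper names are ad hoc) rotates the horizontal polarizer to θ = 45° via M(θ) = R(−θ) M_H R(θ), recovering the +45° transmission matrix listed above, and then applies it to unpolarized light via S_o = M S_i.

```python
import math

# Sketch: generate the polarizer Mueller matrix at angle theta by reference
# frame rotation, M(theta) = R(-theta) M_H R(theta), using the rotation
# matrix given above, then apply it to unpolarized light via S_o = M S_i.
# Helper names are ad hoc; angles are in radians.

def rot(theta):
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[1, 0, 0, 0],
            [0, c, s, 0],
            [0, -s, c, 0],
            [0, 0, 0, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

M_H = [[0.5, 0.5, 0, 0],      # linear polarizer, horizontal transmission
       [0.5, 0.5, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]

theta = math.pi / 4           # 45 degrees
M45 = matmul(matmul(rot(-theta), M_H), rot(theta))
# M45 matches the +45 degree transmission matrix listed above.

S_unpolarized = [1, 0, 0, 0]
S_out = matvec(M45, S_unpolarized)   # half intensity, +45 degree polarized
```

For arbitrary θ the same product reproduces the general linear polarizer matrix above, entry by entry.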
General linear retarder (wave plate calculations are made from this) ( 1 0 0 0 0 cos 2 ⁡ ( 2 θ ) + sin 2 ⁡ ( 2 θ ) cos ⁡ ( δ ) cos ⁡ ( 2 θ ) sin ⁡ ( 2 θ ) ( 1 − cos ⁡ ( δ ) ) sin ⁡ ( 2 θ ) sin ⁡ ( δ ) 0 cos ⁡ ( 2 θ ) sin ⁡ ( 2 θ ) ( 1 − cos ⁡ ( δ ) ) cos 2 ⁡ ( 2 θ ) cos ⁡ ( δ ) + sin 2 ⁡ ( 2 θ ) − cos ⁡ ( 2 θ ) sin ⁡ ( δ ) 0 − sin ⁡ ( 2 θ ) sin ⁡ ( δ ) cos ⁡ ( 2 θ ) sin ⁡ ( δ ) cos ⁡ ( δ ) ) {\displaystyle {\begin{pmatrix}1&0&0&0\\0&\cos ^{2}(2\theta )+\sin ^{2}(2\theta )\cos(\delta )&\cos(2\theta )\sin(2\theta )\left(1-\cos(\delta )\right)&\sin(2\theta )\sin(\delta )\\0&\cos(2\theta )\sin(2\theta )\left(1-\cos(\delta )\right)&\cos ^{2}(2\theta )\cos(\delta )+\sin ^{2}(2\theta )&-\cos(2\theta )\sin(\delta )\\0&-\sin(2\theta )\sin(\delta )&\cos(2\theta )\sin(\delta )&\cos(\delta )\end{pmatrix}}\quad } where δ {\displaystyle \delta } is the phase difference between the fast and slow axis and θ {\displaystyle \theta } is the angle of the slow axis. Quarter-wave plate (fast-axis vertical) ( 1 0 0 0 0 1 0 0 0 0 0 − 1 0 0 1 0 ) {\displaystyle {\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&0&-1\\0&0&1&0\end{pmatrix}}} Quarter-wave plate (fast-axis horizontal) ( 1 0 0 0 0 1 0 0 0 0 0 1 0 0 − 1 0 ) {\displaystyle {\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&-1&0\end{pmatrix}}} Half-wave plate (fast-axis horizontal and vertical; also, ideal mirror) ( 1 0 0 0 0 1 0 0 0 0 − 1 0 0 0 0 − 1 ) {\displaystyle {\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}}} Attenuating filter (25% transmission) 1 4 ( 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle {1 \over 4}{\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}\quad } == Mueller tensors == The Mueller/Stokes architecture can also be used to describe non-linear optical processes, such as multi-photon excited fluorescence and second harmonic generation. The Mueller tensor can be connected back to the laboratory-frame Jones tensor by direct analogy with Mueller and Jones matrices. 
M ( 2 ) = A ( χ ( 2 ) ∗ ⊗ χ ( 2 ) ) : A − 1 A − 1 {\displaystyle \mathrm {M} ^{(2)}=\mathrm {A} \left(\chi ^{(2)*}\otimes \chi ^{(2)}\right):\mathrm {A} ^{-1}\mathrm {A} ^{-1}} , where M ( 2 ) {\displaystyle M^{(2)}} is the rank three Mueller tensor describing the Stokes vector produced by a pair of incident Stokes vectors, and χ ( 2 ) {\displaystyle \chi ^{(2)}} is the 2×2×2 laboratory-frame Jones tensor. == See also == Stokes parameters Jones calculus Polarization (waves) == References == === Other sources === E. Collett (2005) Field Guide to Polarization, SPIE Field Guides vol. FG05, SPIE ISBN 0-8194-5868-6. Eugene Hecht (1987) Optics, 2nd ed., Addison-Wesley ISBN 0-201-11609-X. del Toro Iniesta, Jose Carlos (2003). Introduction to Spectropolarimetry. Cambridge, UK: Cambridge University Press. p. 227. ISBN 978-0-521-81827-8. N. Mukunda and others (2010) "A complete characterization of pre-Mueller and Mueller matrices in polarization optics", Journal of the Optical Society of America A 27(2): 188–199. doi:10.1364/JOSAA.27.000188 MR2642868 William Shurcliff (1966) Polarized Light: Production and Use, chapter 8 Mueller Calculus and Jones Calculus, page 109, Harvard University Press. Simpson, Garth (2017). Nonlinear Optical Polarization Analysis in Chemistry and Biology. Cambridge, UK: Cambridge University Press. p. 392. ISBN 978-0-521-51908-3.
Wikipedia/Mueller_calculus
Fitch notation, also known as Fitch diagrams (named after Frederic Fitch), is a method of presenting natural deduction proofs in propositional calculus and first-order logic using a structured, line-by-line format that explicitly shows assumptions, inferences, and their scope. It was invented by Frederic Brenton Fitch in the 1930s and later popularized through his textbook Symbolic Logic (1952). Fitch notation is notable for its use of indentation or boxes to indicate the scope of subordinate assumptions, making it one of the most pedagogically accessible systems for teaching formal logic. == History == Fitch developed his system of natural deduction as part of his doctoral work at Princeton University in 1934, under the supervision of Alonzo Church. His approach introduced the key idea of subordinate proofs, where assumptions could be opened within a subderivation and discharged later, such as when proving implications or negations. While his system was initially circulated in unpublished form, it became widely known through his book Symbolic Logic, which was used extensively in undergraduate instruction. Later logicians and educators such as Patrick Suppes and E. J. Lemmon rebranded Fitch's system. While they introduced graphical changes—such as replacing indentation with vertical bars—the underlying structure of Fitch-style natural deduction remained intact. These variations are often referred to as the Suppes–Lemmon format, though they are fundamentally based on Fitch's original notation. == Structure == Fitch notation presents proofs as a sequence of numbered lines, where each line includes:

A logical formula
A justification (a rule name and line references)
Optionally, indentation or brackets to show the scope of assumptions

=== Example === Each row in a Fitch-style proof is either:

an assumption or subproof assumption, or
a sentence justified by the citation of (1) a rule of inference and (2) the prior line or lines of the proof that license that rule.
Introducing a new assumption increases the level of indentation, and begins a new vertical "scope" bar that continues to indent subsequent lines until the assumption is discharged. This mechanism immediately conveys which assumptions are active for any given line in the proof, without the assumptions needing to be rewritten on every line (as with sequent-style proofs). The following example displays the main features of Fitch notation:

0 |__                   [assumption, want P iff not not P]
1 | |__ P               [assumption, want not not P]
2 | | |__ not P         [assumption, for reductio]
3 | | | contradiction   [contradiction introduction: 1, 2]
4 | | not not P         [negation introduction: 2]
  |
5 | |__ not not P       [assumption, want P]
6 | | P                 [negation elimination: 5]
  |
7 | P iff not not P     [biconditional introduction: 1 - 4, 5 - 6]

0. The null assumption, i.e., we are proving a tautology
1. Our first subproof: we assume the l.h.s. to show the r.h.s. follows
2. A subsubproof: we are free to assume what we want. Here we aim for a reductio ad absurdum
3. We now have a contradiction
4. We are allowed to prefix the statement that "caused" the contradiction with a not
5. Our second subproof: we assume the r.h.s. to show the l.h.s. follows
6. We invoke the rule that allows us to remove an even number of nots from a statement prefix
7. From 1 to 4 we have shown if P then not not P, from 5 to 6 we have shown P if not not P; hence we are allowed to introduce the biconditional in 7, where iff stands for if and only if

== Features ==

Subordinate proofs: Nested derivations with temporarily assumed premises
Assumption discharge: Explicit rules like → {\displaystyle \to } introduction and ¬ {\displaystyle \neg } introduction for closing assumptions
Line-referenced justification: Each step cites lines and rules used
Human readability: The format closely mirrors natural informal reasoning

== Comparison with other systems == Gentzen-style natural deduction presents proofs as trees, with assumptions at the leaves and conclusions at the root.
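The indentation scheme above is mechanical enough to generate by program. The sketch below is purely illustrative: the tuple encoding and the function names are invented for this example and are not part of Fitch's notation. Each row is stored as (depth, kind, formula, justification), where "assume" rows open a new scope bar at their depth.

```python
# Each proof row is (depth, kind, formula, justification); "assume" rows open
# a new scope bar at their depth. This data layout is a hypothetical encoding
# chosen only for this sketch.
proof = [
    (1, "assume", "", "assumption, want P iff not not P"),
    (2, "assume", "P", "assumption, want not not P"),
    (3, "assume", "not P", "assumption, for reductio"),
    (3, "step", "contradiction", "contradiction introduction: 1, 2"),
    (2, "step", "not not P", "negation introduction: 2"),
    (2, "assume", "not not P", "assumption, want P"),
    (2, "step", "P", "negation elimination: 5"),
    (1, "step", "P iff not not P", "biconditional introduction: 1 - 4, 5 - 6"),
]

def render(proof):
    """Render numbered rows with one vertical bar per enclosing assumption."""
    lines = []
    for n, (depth, kind, formula, just) in enumerate(proof):
        bars = "| " * (depth - 1)
        marker = "|__ " if kind == "assume" else "| "
        prefix = (bars + marker).rstrip()
        parts = [str(n), prefix, formula, f"[{just}]"]
        lines.append(" ".join(p for p in parts if p))
    return lines

for line in render(proof):
    print(line)
```

Discharging an assumption is represented simply by returning to a smaller depth on the next row, which is what ends the corresponding scope bar.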
Unlike Fitch notation, it does not use subordinate boxes or indentation to manage temporary assumptions. Instead, all assumptions are explicitly present in the leaf nodes of the proof tree. This uniform treatment of assumptions makes Gentzen systems particularly well-suited to structural proof transformations and facilitates modularization and meta-theoretical analysis, such as cut-elimination. Hilbert system proofs rely on axioms and only a few inference rules, making them concise but abstract and less intuitive. The Suppes–Lemmon notation follows Fitch's logic and alters its visual layout for typesetting and instructional clarity. == Influence == Fitch notation is widely used in logic textbooks and teaching. It also underlies several proof assistant tools. Its structured style has become a standard for teaching formal logic in undergraduate education. == See also == Natural deduction Frederic Fitch Sequent calculus Proof theory Hilbert system Suppes–Lemmon notation == Notes == == References == Fitch, Frederic Brenton (1952). Symbolic Logic: An introduction. New York: The Ronald Press Company. LCCN 52006196. Barker-Plummer, Dave; Barwise, Jon; Etchemendy, John (2011) [1999]. Language, Proof and Logic (2 ed.). CSLI Publications. p. 606. ISBN 9781575866321. Lemmon, E. J. (1965). Beginning Logic. Nelson. Suppes, Patrick (1957). Introduction to Logic. Van Nostrand. == External links == Brogaard, Berit; Salerno, Joe (Fall 2019). "Fitch's Paradox of Knowability". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. "An online Java application for proof building". Archived from the original on 2 October 2006. Retrieved 6 May 2025. "A Web implementation of Fitch proof system (propositional and first-order)". proofmod.mindconnect.cc. Retrieved 6 May 2025. "The Jape general-purpose proof assistant". GitHub. Retrieved 6 May 2025. (see Jape) "Resources for typesetting proofs in Fitch notation with LaTeX". Logic Matters. Retrieved 6 May 2025. 
(see LaTeX) "FitchJS: An open source web app to construct proofs in Fitch notation (and export to LaTeX)". Retrieved 6 May 2025. "Natural deduction proof editor and checker in Fitch notation". Retrieved 6 May 2025.
Wikipedia/Fitch-style_calculus
Bondi k-calculus is a method of teaching special relativity popularised by Sir Hermann Bondi, which has been used in university-level physics classes (e.g. at the University of Oxford), and in some relativity textbooks.: 58–65  The usefulness of the k-calculus is its simplicity. Many introductions to relativity begin with the concept of velocity and a derivation of the Lorentz transformation. Other concepts such as time dilation, length contraction, the relativity of simultaneity, the resolution of the twins paradox and the relativistic Doppler effect are then derived from the Lorentz transformation, all as functions of velocity. Bondi, in his book Relativity and Common Sense, first published in 1964 and based on articles published in The Illustrated London News in 1962, reverses the order of presentation. He begins with what he calls "a fundamental ratio" denoted by the letter k {\displaystyle k} (which turns out to be the radial Doppler factor).: 40  From this he explains the twins paradox, and the relativity of simultaneity, time dilation, and length contraction, all in terms of k {\displaystyle k} . It is not until later in the exposition that he provides a link between velocity and the fundamental ratio k {\displaystyle k} . The Lorentz transformation appears towards the end of the book. == History == The k-calculus method had previously been used by E. A. Milne in 1935. Milne used the letter s {\displaystyle s} to denote a constant Doppler factor, but also considered a more general case involving non-inertial motion (and therefore a varying Doppler factor). Bondi used the letter k {\displaystyle k} instead of s {\displaystyle s} and simplified the presentation (for constant k {\displaystyle k} only), and introduced the name "k-calculus".: 109  == Bondi's k-factor == Consider two inertial observers, Alice and Bob, moving directly away from each other at constant relative velocity.
Alice sends a flash of blue light towards Bob once every T {\displaystyle T} seconds, as measured by her own clock. Because Alice and Bob are separated by a distance, there is a delay between Alice sending a flash and Bob receiving a flash. Furthermore, the separation distance is steadily increasing at a constant rate, so the delay keeps on increasing. This means that the time interval between Bob receiving the flashes, as measured by his clock, is greater than T {\displaystyle T} seconds, say k T {\displaystyle kT} seconds for some constant k > 1 {\displaystyle k>1} . (If Alice and Bob were, instead, moving directly towards each other, a similar argument would apply, but in that case k < 1 {\displaystyle k<1} .): 80  Bondi describes k {\displaystyle k} as “a fundamental ratio”,: 88  and other authors have since called it "the Bondi k-factor" or "Bondi's k-factor".: 63  Alice's flashes are transmitted at a frequency of f s = 1 / T {\displaystyle f_{s}=1/T} Hz, by her clock, and received by Bob at a frequency of f o = 1 / ( k T ) {\displaystyle f_{o}=1/(kT)} Hz, by his clock. This implies a Doppler factor of f s / f o = k {\displaystyle f_{s}/f_{o}=k} . So Bondi's k-factor is another name for the Doppler factor (when source Alice and observer Bob are moving directly away from or towards each other).: 40  If Alice and Bob were to swap roles, and Bob sent flashes of light to Alice, the Principle of Relativity (Einstein's first postulate) implies that the k-factor from Bob to Alice would be the same value as the k-factor from Alice to Bob, as all inertial observers are equivalent. So the k-factor depends only on the relative speed between the observers and nothing else.: 80  == The reciprocal k-factor == Consider, now, a third inertial observer Dave who is a fixed distance from Alice, and such that Bob lies on the straight line between Alice and Dave. As Alice and Dave are mutually at rest, the delay from Alice to Dave is constant. 
This means that Dave receives Alice's blue flashes at a rate of once every T {\displaystyle T} seconds, by his clock, the same rate as Alice sends them. In other words, the k-factor from Alice to Dave is equal to one.: 77  Now suppose that whenever Bob receives a blue flash from Alice he immediately sends his own red flash towards Dave, once every k T {\displaystyle kT} seconds (by Bob's clock). Einstein's second postulate, that the speed of light is independent of the motion of its source, implies that Alice's blue flash and Bob's red flash both travel at the same speed, neither overtaking the other, and therefore arrive at Dave at the same time. So Dave receives a red flash from Bob every T {\displaystyle T} seconds, by Dave's clock, which were sent by Bob every k T {\displaystyle kT} seconds by Bob's clock. This implies that the k-factor from Bob to Dave is 1 / k {\displaystyle 1/k} .: 80  This establishes that the k-factor for observers moving directly apart (red shift) is the reciprocal of the k-factor for observers moving directly towards each other at the same speed (blue shift).   == The twins paradox == Consider, now, a fourth inertial observer Carol who travels from Dave to Alice at exactly the same speed as Bob travels from Alice to Dave. Carol's journey is timed such that she leaves Dave at exactly the same time as Bob arrives. Denote times recorded by Alice's, Bob's and Carol's clocks by t A , t B , t C {\displaystyle t_{A},t_{B},t_{C}} . When Bob passes Alice, they both synchronise their clocks to t A = t B = 0 {\displaystyle t_{A}=t_{B}=0} . When Carol passes Bob, she synchronises her clock to Bob's, t C = t B {\displaystyle t_{C}=t_{B}} . Finally, as Carol passes Alice, they compare their clocks against each other. In Newtonian physics, the expectation would be that, at the final comparison, Alice's and Carol's clock would agree, t C = t A {\displaystyle t_{C}=t_{A}} . It will be shown below that in relativity this is not true. 
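The reciprocal relationship can be checked numerically. The sketch below is an illustrative aid rather than part of Bondi's presentation: it anticipates the relation k = √((1 + v/c)/(1 − v/c)) that the article derives later in the radar-measurement section, and verifies that the k-factors for receding and approaching observers at the same speed are reciprocals.

```python
import math

# Numeric sketch of the reciprocal k-factor property, using the relation
# between k and velocity derived later in the article:
#   k = sqrt((1 + v/c) / (1 - v/c))

c = 299_792_458.0          # speed of light in m/s

def k_factor(v):
    """Bondi k-factor for relative velocity v (positive = receding)."""
    beta = v / c
    return math.sqrt((1 + beta) / (1 - beta))

v = 0.6 * c
k_away = k_factor(v)       # red shift: k > 1 (equals 2 for v = 0.6c)
k_toward = k_factor(-v)    # blue shift: k < 1; k_away * k_toward = 1

# Recovering v from k, as in the radar-measurement section:
v_back = c * (k_away**2 - 1) / (k_away**2 + 1)
```

At v = 0.6c the k-factor is exactly 2: Bob receives Alice's flashes half as often as she sends them, and twice as often when they approach instead.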
This is a version of the well-known "twins paradox" in which identical twins separate and reunite, only to find that one is now older than the other. If Alice sends a flash of light at time t A = T {\displaystyle t_{A}=T} towards Bob, then, by the definition of the k-factor, it will be received by Bob at time t B = k T {\displaystyle t_{B}=kT} . The flash is timed so that it arrives at Bob just at the moment that Bob meets Carol, so Carol synchronises her clock to read t C = t B = k T {\displaystyle t_{C}=t_{B}=kT} . Also, when Bob and Carol meet, they both simultaneously send flashes to Alice, which are received simultaneously by Alice. Considering, first, Bob's flash, sent at time t B = k T {\displaystyle t_{B}=kT} , it must be received by Alice at time t A = k 2 T {\displaystyle t_{A}=k^{2}T} , using the fact that the k-factor from Alice to Bob is the same as the k-factor from Bob to Alice. As Bob's outward journey had a duration of k T {\displaystyle kT} , by his clock, it follows by symmetry that Carol's return journey over the same distance at the same speed must also have a duration of k T {\displaystyle kT} , by her clock, and so when Carol meets Alice, Carol's clock reads t C = 2 k T {\displaystyle t_{C}=2kT} . The k-factor for this leg of the journey must be the reciprocal 1 / k {\displaystyle 1/k} (as discussed earlier), so, considering Carol's flash towards Alice, a transmission interval of k T {\displaystyle kT} corresponds to a reception interval of T {\displaystyle T} . This means that the final time on Alice's clock, when Carol and Alice meet, is t A = ( k 2 + 1 ) T {\displaystyle t_{A}=(k^{2}+1)T} . 
This is larger than Carol's clock time t C = 2 k T {\displaystyle t_{C}=2kT} since t A − t C = ( k 2 − 2 k + 1 ) T = ( k − 1 ) 2 T > 0 , {\displaystyle t_{A}-t_{C}=(k^{2}-2k+1)T=(k-1)^{2}T>0,} provided k ≠ 1 {\displaystyle k\neq 1} and T > 0 {\displaystyle T>0} .: 80–90    == Radar measurements and velocity == In the k-calculus methodology, distances are measured using radar. An observer sends a radar pulse towards a target and receives an echo from it. The radar pulse (which travels at c {\displaystyle c} , the speed of light) travels a total distance, there and back, that is twice the distance to the target, and takes time T 2 − T 1 {\displaystyle T_{2}-T_{1}} , where T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} are times recorded by the observer's clock at transmission and reception of the radar pulse. This implies that the distance to the target is: 60  x A = 1 2 c ( T 2 − T 1 ) . {\displaystyle x_{A}={\tfrac {1}{2}}c(T_{2}-T_{1}).} Furthermore, since the speed of light is the same in both directions, the time at which the radar pulse arrives at the target must be, according to the observer, halfway between the transmission and reception times, namely: 60  t A = 1 2 ( T 2 + T 1 ) . {\displaystyle t_{A}={\tfrac {1}{2}}(T_{2}+T_{1}).} In the particular case where the radar observer is Alice and the target is Bob (momentarily co-located with Dave) as described previously, by k-calculus we have T 2 = k 2 T 1 {\displaystyle T_{2}=k^{2}T_{1}} , and so x A = 1 2 c ( k 2 − 1 ) T 1 t A = 1 2 ( k 2 + 1 ) T 1 . {\displaystyle {\begin{aligned}x_{A}&={\tfrac {1}{2}}c(k^{2}-1)T_{1}\\t_{A}&={\tfrac {1}{2}}(k^{2}+1)T_{1}.\end{aligned}}} As Alice and Bob were co-located at t A = 0 , x A = 0 {\displaystyle t_{A}=0,x_{A}=0} , the velocity of Bob relative to Alice is given by: 103 : 64  v = x A t A = 1 2 c ( k 2 − 1 ) T 1 1 2 ( k 2 + 1 ) T 1 = c k 2 − 1 k 2 + 1 = c k − k − 1 k + k − 1 . 
{\displaystyle v={\frac {x_{A}}{t_{A}}}={\frac {{\tfrac {1}{2}}c(k^{2}-1)T_{1}}{{\tfrac {1}{2}}(k^{2}+1)T_{1}}}=c{\frac {k^{2}-1}{k^{2}+1}}=c{\frac {k-k^{-1}}{k+k^{-1}}}.} This equation expresses velocity as a function of the Bondi k-factor. It can be solved for k {\displaystyle k} to give k {\displaystyle k} as a function of v {\displaystyle v} :: 103 : 65  k = 1 + v / c 1 − v / c . {\displaystyle k={\sqrt {\frac {1+v/c}{1-v/c}}}.} == Velocity composition == Consider three inertial observers Alice, Bob and Ed, arranged in that order and moving at different speeds along the same straight line. In this section, the notation k A B {\displaystyle k_{AB}} will be used to denote the k-factor from Alice to Bob (and similarly between other pairs of observers). As before, Alice sends a blue flash towards Bob and Ed every T {\displaystyle T} seconds, by her clock, which Bob receives every k A B T {\displaystyle k_{AB}T} seconds, by Bob's clock, and Ed receives every k A E T {\displaystyle k_{AE}T} seconds, by Ed's clock. Now suppose that whenever Bob receives a blue flash from Alice he immediately sends his own red flash towards Ed, once every k A B T {\displaystyle k_{AB}T} seconds by Bob's clock, so Ed receives a red flash from Bob every k B E ( k A B T ) {\displaystyle k_{BE}(k_{AB}T)} seconds, by Ed's clock. Einstein's second postulate, that the speed of light is independent of the motion of its source, implies that Alice's blue flash and Bob's red flash both travel at the same speed, neither overtaking the other, and therefore arrive at Ed at the same time. Therefore, as measured by Ed, the red flash interval k B E ( k A B T ) {\displaystyle k_{BE}(k_{AB}T)} and the blue flash interval k A E T {\displaystyle k_{AE}T} must be the same. So the rule for combining k-factors is simply multiplication:: 105  k A E = k A B k B E . 
{\displaystyle k_{AE}=k_{AB}k_{BE}.} Finally, substituting k A B = 1 + v A B / c 1 − v A B / c , k B E = 1 + v B E / c 1 − v B E / c , v A E = c k A E 2 − 1 k A E 2 + 1 {\displaystyle k_{AB}={\sqrt {\frac {1+v_{AB}/c}{1-v_{AB}/c}}},\,k_{BE}={\sqrt {\frac {1+v_{BE}/c}{1-v_{BE}/c}}},\,v_{AE}=c{\frac {k_{AE}^{2}-1}{k_{AE}^{2}+1}}} gives the velocity composition formula: 105  v A E = v A B + v B E 1 + v A B v B E / c 2 . {\displaystyle v_{AE}={\frac {v_{AB}+v_{BE}}{1+v_{AB}v_{BE}/c^{2}}}.} == The invariant interval == Using the radar method described previously, inertial observer Alice assigns coordinates ( t A , x A ) {\displaystyle (t_{A},x_{A})} to an event by transmitting a radar pulse at time t A − x A / c {\displaystyle t_{A}-x_{A}/c} and receiving its echo at time t A + x A / c {\displaystyle t_{A}+x_{A}/c} , as measured by her clock. Similarly, inertial observer Bob can assign coordinates ( t B , x B ) {\displaystyle (t_{B},x_{B})} to the same event by transmitting a radar pulse at time t B − x B / c {\displaystyle t_{B}-x_{B}/c} and receiving its echo at time t B + x B / c {\displaystyle t_{B}+x_{B}/c} , as measured by his clock. However, as the diagram shows, it is not necessary for Bob to generate his own radar signal, as he can simply take the timings from Alice's signal instead. Now, applying the k-calculus method to the signal that travels from Alice to Bob k = t B − x B / c t A − x A / c . {\displaystyle k={\frac {t_{B}-x_{B}/c}{t_{A}-x_{A}/c}}.} Similarly, applying the k-calculus method to the signal that travels from Bob to Alice k = t A + x A / c t B + x B / c . {\displaystyle k={\frac {t_{A}+x_{A}/c}{t_{B}+x_{B}/c}}.} Equating the two expressions for k {\displaystyle k} and rearranging,: 118  c 2 t A 2 − x A 2 = c 2 t B 2 − x B 2 . 
{\displaystyle c^{2}t_{A}^{2}-x_{A}^{2}=c^{2}t_{B}^{2}-x_{B}^{2}.} This establishes that the quantity c 2 t 2 − x 2 {\displaystyle c^{2}t^{2}-x^{2}} is an invariant: it takes the same value in any inertial coordinate system and is known as the invariant interval. == The Lorentz transformation == The two equations for k {\displaystyle k} in the previous section can be solved as simultaneous equations to obtain:: 118 : 67  c t B = 1 2 ( k + k − 1 ) c t A − 1 2 ( k − k − 1 ) x A x B = 1 2 ( k + k − 1 ) x A − 1 2 ( k − k − 1 ) c t A {\displaystyle {\begin{aligned}ct_{B}&={\tfrac {1}{2}}(k+k^{-1})ct_{A}-{\tfrac {1}{2}}(k-k^{-1})x_{A}\\x_{B}&={\tfrac {1}{2}}(k+k^{-1})x_{A}-{\tfrac {1}{2}}(k-k^{-1})ct_{A}\end{aligned}}} These equations are the Lorentz transformation expressed in terms of the Bondi k-factor instead of in terms of velocity. By substituting k = 1 + v / c 1 − v / c , {\displaystyle k={\sqrt {\frac {1+v/c}{1-v/c}}},} the more traditional form t B = t A − v x A / c 2 1 − v 2 / c 2 ; x B = x A − v t A 1 − v 2 / c 2 {\displaystyle t_{B}={\frac {t_{A}-vx_{A}/c^{2}}{\sqrt {1-v^{2}/c^{2}}}};\,x_{B}={\frac {x_{A}-vt_{A}}{\sqrt {1-v^{2}/c^{2}}}}} is obtained.: 118 : 67  == Rapidity == Rapidity φ {\displaystyle \varphi } can be defined from the k-factor by: 71  φ = log e ⁡ k , k = e φ , {\displaystyle \varphi =\log _{e}k,\,k=e^{\varphi },} and so v = c k − k − 1 k + k − 1 = c tanh ⁡ φ . {\displaystyle v=c{\frac {k-k^{-1}}{k+k^{-1}}}=c\tanh \varphi .} The k-factor version of the Lorentz transform becomes c t B = c t A cosh ⁡ φ − x A sinh ⁡ φ x B = x A cosh ⁡ φ − c t A sinh ⁡ φ {\displaystyle {\begin{aligned}ct_{B}&=ct_{A}\cosh \varphi -x_{A}\sinh \varphi \\x_{B}&=x_{A}\cosh \varphi -ct_{A}\sinh \varphi \end{aligned}}} It follows from the composition rule for k {\displaystyle k} , k A E = k A B k B E {\displaystyle k_{AE}=k_{AB}k_{BE}} , that the composition rule for rapidities is addition:: 71  φ A E = φ A B + φ B E . 
{\displaystyle \varphi _{AE}=\varphi _{AB}+\varphi _{BE}.} == References == == External links == Review of Bondi k-Calculus
Wikipedia/Bondi_k-calculus
In logic, Hilbert's epsilon calculus is an extension of a formal language by the epsilon operator, where the epsilon operator substitutes for quantifiers in that language as a method leading to a proof of consistency for the extended formal language. The epsilon operator and epsilon substitution method are typically applied to a first-order predicate calculus, followed by a demonstration of consistency. The epsilon-extended calculus is further extended and generalized to cover those mathematical objects, classes, and categories for which there is a desire to show consistency, building on previously-shown consistency at earlier levels. == Epsilon operator == === Hilbert notation === For any formal language L, extend L by adding the epsilon operator to redefine quantification: ( ∃ x ) A ( x ) ≡ A ( ϵ x A ) {\displaystyle (\exists x)A(x)\ \equiv \ A(\epsilon x\ A)} ( ∀ x ) A ( x ) ≡ ¬ ∃ x ¬ A ( x ) ⟺ ¬ ( ¬ A ( ϵ x ¬ A ) ) ⟺ A ( ϵ x ( ¬ A ) ) {\displaystyle (\forall x)A(x)\ \equiv \ \neg \exists x\neg A(x)\iff \neg {\big (}\neg A(\epsilon x\ \neg A){\big )}\iff A(\epsilon x\ (\neg A))} The intended interpretation of ϵx A is some x that satisfies A, if it exists. In other words, ϵx A returns some term t such that A(t) is true, otherwise it returns some default or arbitrary term. If more than one term can satisfy A, then any one of these terms (which make A true) can be chosen, non-deterministically. Equality is required to be defined under L, and the only rules required for L extended by the epsilon operator are modus ponens and the substitution of A(t) to replace A(x) for any term t. === Bourbaki notation === In tau-square notation from N. 
Bourbaki's Theory of Sets, the quantifiers are defined as follows: ( ∃ x ) A ( x ) ≡ ( τ x ( A ) | x ) A {\displaystyle (\exists x)A(x)\ \equiv \ (\tau _{x}(A)|x)A} ( ∀ x ) A ( x ) ≡ ¬ ( τ x ( ¬ A ) | x ) ¬ A ≡ ( τ x ( ¬ A ) | x ) A {\displaystyle (\forall x)A(x)\ \equiv \ \neg (\tau _{x}(\neg A)|x)\neg A\ \equiv \ (\tau _{x}(\neg A)|x)A} where A is a relation in L, x is a variable, and τ x ( A ) {\displaystyle \tau _{x}(A)} juxtaposes a τ {\displaystyle \tau } at the front of A, replaces all instances of x with ◻ {\displaystyle \square } , and links them back to τ {\displaystyle \tau } . Then, letting Y be an assembly, (Y|x)A denotes the replacement of all variables x in A with Y. This notation is equivalent to the Hilbert notation and is read the same. It is used by Bourbaki to define cardinal assignment since they do not use the axiom of replacement. Defining quantifiers in this way leads to great inefficiencies. For instance, the expansion of Bourbaki's original definition of the number one, using this notation, has length approximately 4.5 × 10¹², and for a later edition of Bourbaki that combined this notation with the Kuratowski definition of ordered pairs, this number grows to approximately 2.4 × 10⁵⁴. == Modern approaches == Hilbert's program for mathematics was to justify those formal systems as consistent in relation to constructive or semi-constructive systems. While Gödel's results on incompleteness mooted Hilbert's Program to a great extent, modern researchers find the epsilon calculus to provide alternatives for approaching proofs of systemic consistency as described in the epsilon substitution method. === Epsilon substitution method === A theory to be checked for consistency is first embedded in an appropriate epsilon calculus. Second, a process is developed for re-writing quantified theorems to be expressed in terms of epsilon operations via the epsilon substitution method.
Finally, the process must be shown to normalize the re-writing process, so that the re-written theorems satisfy the axioms of the theory. == Notes == == References == "Epsilon Calculi". Internet Encyclopedia of Philosophy. Moser, Georg; Richard Zach. The Epsilon Calculus (Tutorial). Berlin: Springer-Verlag. OCLC 108629234. Avigad, Jeremy; Zach, Richard (November 27, 2013). "The epsilon calculus". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Bourbaki, N. Theory of Sets. Berlin: Springer-Verlag. ISBN 3-540-22525-0.
Wikipedia/Epsilon_calculus
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology. Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid-20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms. The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289) gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which can be applied to real-world measurements only after translation into digits, it gives approximate solutions within specified error bounds.
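The Babylonian approximation mentioned above can be compared against a classical iteration. The sketch below (a plain-Python illustration; the starting guess 3/2 and the three-step count are arbitrary choices) evaluates the sexagesimal value on YBC 7289, 1;24,51,10, and runs the divide-and-average iteration x ← (x + 2/x)/2, which is Heron's method and also Newton's method applied to f(x) = x² − 2.

```python
from fractions import Fraction

# Sketch: the sexagesimal value on YBC 7289, 1;24,51,10, compared with a few
# steps of the divide-and-average iteration x <- (x + 2/x)/2 (Heron's method,
# equivalently Newton's method for f(x) = x^2 - 2).

ybc_7289 = 1 + Fraction(24, 60) + Fraction(51, 60**2) + Fraction(10, 60**3)

x = Fraction(3, 2)            # arbitrary starting guess
for _ in range(3):
    x = (x + 2 / x) / 2       # one Newton step; the error roughly squares

err_tablet = abs(float(ybc_7289) - 2**0.5)   # about 6e-7
err_newton = abs(float(x) - 2**0.5)          # far smaller after three steps
```

The tablet value is already accurate to about six parts in ten million; three Newton steps from 3/2 do markedly better, illustrating the quadratic convergence that makes the method a staple of numerical analysis.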
== Applications == The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:

Advanced numerical methods are essential in making numerical weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
In the financial field, hedge funds (private investment funds) and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.
Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.
Insurance companies use numerical programs for actuarial analysis.

== History == The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine, but others consider modern numerical analysis to go back to work by E. T. Whittaker in 1912. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients.
Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done. The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications. == Key concepts == === Direct and iterative methods === Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability). In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. 
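Such a stopping test can be sketched with fixed-point iteration for x = cos x (a toy example: the test here is on successive iterates rather than the residual, and the tolerance value is arbitrary):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x <- g(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:  # convergence test
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Solve x = cos(x); the iteration converges because |d/dx cos(x)| < 1
# near the root, but only in the limit would it be exact.
root = fixed_point(math.cos, 1.0)
print(root)  # ≈ 0.7390851332
```

As the surrounding text notes, no finite number of iterations gives the exact root; the test merely decides when the approximation is good enough.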
Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems. Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method. As an example, consider the problem of solving 3x³ + 4 = 28 for the unknown quantity x. As an iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57. Successive bisection steps give:

a = 0, b = 3, mid = 1.5, f(mid) = −13.875
a = 1.5, b = 3, mid = 2.25, f(mid) = 10.17...
a = 1.5, b = 2.25, mid = 1.875, f(mid) = −4.22...
a = 1.875, b = 2.25, mid = 2.0625, f(mid) = 2.32...

From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2. === Conditioning === Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem. Well-conditioned problem: By contrast, evaluating the same function f(x) = 1/(x − 1) near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x). === Discretization === Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. 
This function must be represented by a finite amount of data, for instance by its value at a finite number of points of its domain, even though this domain is a continuum. == Generation and propagation of errors == The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem. === Round-off === Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are). === Truncation and discretization error === Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above to compute the solution of 3x³ + 4 = 28, after ten iterations, the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01. Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type a + b + c + d + e is even more inexact. A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence only an approximation of the exact solution is obtained. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen. === Numerical stability and well-posed problems === An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. 
This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error. Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible. So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. == Areas of study == The field of numerical analysis includes many sub-disciplines. Some of the major ones are: === Computing values of functions === One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic. === Interpolation, extrapolation, and regression === Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points? Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found. Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be estimated. The least-squares method is one way to achieve this. 
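As a sketch of the least-squares idea, a straight line y ≈ a + bx can be fitted to noisy points directly from the normal equations (illustrative data; production code would call a library routine):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Points generated from y = 1 + 2x with small perturbations.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.9, 9.0]
a, b = fit_line(xs, ys)
print(a, b)  # close to the underlying 1 and 2
```

The fitted coefficients do not pass through any data point exactly; instead they minimize the sum of squared residuals, which is what distinguishes regression from interpolation.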
=== Solving equations and systems of equations === Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2x + 5 = 3 is linear while 2x² + 5 = 3 is not. Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting. Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations. === Solving eigenvalue or singular value problems === Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis. === Optimization === Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints. The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. 
For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method. The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems. === Evaluating integrals === Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids. === Differential equations === Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations. Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation. == Software == Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. 
Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library. Over the years the Royal Statistical Society published numerous algorithms (the "AS" functions) in its journal Applied Statistics; the ACM did likewise in its Transactions on Mathematical Software (the "TOMS" algorithms). The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines. There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to MATLAB), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude. Many computer algebra systems, such as Mathematica, also benefit from the availability of arbitrary-precision arithmetic, which can provide more accurate results. Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis. Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built-in "solver". == See also == == Notes == == References == === Citations === === Sources === == External links == === Journals === Numerische Mathematik, volumes 1–..., Springer, 1959– (volumes 1–66, 1959–1994 searchable; pages are images; in English and German). SIAM Journal on Numerical Analysis (SINUM), volumes 1–..., SIAM, 1964– === Online texts === "Numerical analysis", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Numerical Recipes, William H. 
Press (free, downloadable previous editions) First Steps in Numerical Analysis (archived), R. J. Hosking, S. Joe, D. C. Joyce, and J. C. Turner CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01) Numerical Methods, ch. 3 in the Digital Library of Mathematical Functions Numerical Interpolation, Differentiation and Integration, ch. 25 in the Handbook of Mathematical Functions (Abramowitz and Stegun) Tobin A. Driscoll and Richard J. Braun: Fundamentals of Numerical Computation (free online version) === Online course material === Numerical Methods (archived 28 July 2009 at the Wayback Machine), Stuart Dalziel, University of Cambridge Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf, University of Pennsylvania Numerical methods, John D. Fenton, University of Karlsruhe Numerical Methods for Physicists, Anthony O'Hare, Oxford University Lectures in Numerical Analysis (archived), R. Radok, Mahidol University Introduction to Numerical Analysis for Engineering, Henrik Schmidt, Massachusetts Institute of Technology Numerical Analysis for Engineering, D. W. Harder, University of Waterloo Introduction to Numerical Analysis, Doron Levy, University of Maryland Numerical Analysis - Numerical Methods (archived), John H. Mathews, California State University Fullerton
Wikipedia/Numerical_calculus
In computer science, the ambient calculus is a process calculus devised by Luca Cardelli and Andrew D. Gordon in 1998, and used to describe and theorise about concurrent systems that include mobility. Here mobility means both computation carried out on mobile devices (i.e. networks that have a dynamic topology), and mobile computation (i.e. executable code that is able to move around the network). The ambient calculus provides a unified framework for modeling both kinds of mobility. It is used to model interactions in such concurrent systems as the Internet. Since its inception, the ambient calculus has grown into a family of closely related ambient calculi. == Informal description == === Ambients === The fundamental primitive of the ambient calculus is the ambient. An ambient is informally defined as a bounded place in which computation can occur. The notion of boundaries is considered key to representing mobility, since a boundary defines a contained computational agent that can be moved in its entirety. Examples of ambients include: a web page (bounded by a file) a virtual address space (bounded by an addressing range) a Unix file system (bounded within a physical volume) a single data object (bounded by “self”) a laptop (bounded by its case and data ports) The key properties of ambients within the ambient calculus are: Ambients have names, which are used to control access to the ambient. Ambients can be nested inside other ambients (representing, for example, administrative domains). Ambients can be moved as a whole. === Operations === Computation is represented as the crossing of boundaries, i.e. the movement of ambients. There are four basic operations (or capabilities) on ambients: in m.P instructs the surrounding ambient to enter some sibling ambient m, and then proceed as P; out m.P instructs the surrounding ambient to exit its parent ambient m; open m.P instructs the surrounding ambient to dissolve the boundary of an ambient m located at the same level; and copy m. makes any number of copies of m. 
The ambient calculus provides a reduction semantics that formally defines what the results of these operations are. Communication within (i.e. local to) an ambient is anonymous and asynchronous. Output actions release names or capabilities into the surrounding ambient. Input actions capture a value from the ambient, and bind it to a variable. Non-local I/O can be represented in terms of these local communications actions by a variety of means. One approach is to use mobile “messenger” agents that carry a message from one ambient to another (using the capabilities described above). Another approach is to emulate channel-based communications by modeling a channel in terms of ambients and operations on those ambients. The three basic ambient primitives, namely in, out, and open, are expressive enough to simulate name-passing channels in the π-calculus. == See also == Lambda calculus Mobile membranes Type theory API-Calculus == References == == External links == Mobile Computational Ambients by Luca Cardelli
Wikipedia/Ambient_calculus
Professor Cuthbert Calculus (French: Professeur Tryphon Tournesol [pʁɔ.fɛ.sœʁ tʁi.fɔ̃ tuʁ.nə.sɔl], meaning "Professor Tryphon Sunflower") is a fictional character in The Adventures of Tintin, the comics series by Belgian cartoonist Hergé. He is Tintin's friend, an absent-minded professor and half-deaf physicist, who invents many sophisticated devices used in the series, such as a one-person shark-shaped submarine, the Moon rocket, and an ultrasound weapon. Calculus's deafness is a frequent source of humour, as he repeats back what he thinks he has heard, usually in the most unlikely words possible. He does not admit to being near-deaf and insists he is only slightly hard of hearing in one ear, occasionally making use of an ear trumpet to hear better. Calculus first appeared in Red Rackham's Treasure (more specifically in the newspaper prepublication of 4–5 March 1943), and was the result of Hergé's long quest to find the archetypal mad scientist or absent-minded professor. Although Hergé had included characters with similar traits in earlier stories, Calculus developed into a much more complex figure as the series progressed. == Character history == Calculus is a genius, who demonstrates himself throughout the series to be an expert in many fields of science, holding three PhDs in nuclear and theoretical physics, and planetary astronomy. He is also an experienced engineer, archaeologist, biologist and chemist. Many of his inventions precede or mirror similar technological developments in the real world (most notably the Moon rocket, but also his failed attempt at creating a colour television set). He seeks to benefit humankind through his inventions, developing a pill that cures alcoholism by making alcohol unpalatable to the patient, and refusing under great duress to yield his talents to producing weapons of mass destruction. 
Calculus is also shown to be in the midst of inventing steerable roller skates during The Red Sea Sharks, although the final product is never shown, the invention being merely a sideshow in the overall plot. Much of Calculus's more dangerous work is criticized by Captain Haddock, although Calculus usually interprets this the other way round: his deafness often leads him to misinterpret Haddock's words, preventing him from hearing his real opinion. Calculus's deafness is a frequent source of humour in his interactions with other people, as he often repeats back what he thinks he has heard, usually in the most unlikely words possible. Additionally, he often diverts the subject of a conversation by responding to a misinterpreted remark. For example, "But I never knew you had...." leads Calculus to respond, "No, young man, I am not mad!" In the same story, he believes that Tintin and Haddock are talking about his sister, before remembering a few moments later that he does not have a sister. He is not perturbed by his handicap, even if it is a source of deep frustration to his friends. He himself does not admit to being near-deaf and insists that he is "only a little hard of hearing in one ear." In the course of the Moon books, however, Calculus leads a team of scientists and engineers working on a major rocket project, motivating him to adopt an ear trumpet, and later a hearing aid, and for the duration of the adventure he has near-perfect hearing. This made him a more serious character, even displaying leadership qualities not seen before or since. 
However, after completing the journey to the Moon, Calculus discarded his hearing aid, forcing his friends to readjust to his hearing impairment (aside from one panel in The Castafiore Emerald, when Tintin is seen speaking to him through his ear-trumpet); this restored the humour surrounding him, though he may also find his deafness useful, since it enables him to focus on his work (a point relevant to The Calculus Affair, which revolves around his ultrasound research). Calculus maintains a laboratory at Marlinspike Hall, in which he conducts various experiments. He is fairly protective of his work, on occasion hiding his scientific endeavours from Tintin and Haddock (which gets him into trouble in The Calculus Affair). His lab is also stripped of all its apparatus in the same book. On an earlier occasion, during his efforts to find an antidote to Formula Fourteen in Land of Black Gold, Calculus almost destroyed half of Marlinspike in an explosion. Although generally a mild-mannered (if somewhat oblivious) figure, Calculus flies into an uncharacteristic rage if he feels insulted or ridiculed. He is especially provoked if he ever hears Haddock (or anyone else) call him a "goat". On one famous occasion in Destination Moon, he displays uncontrollable ire ("Goat, am I?") when an irritated Haddock accuses him of "acting the goat" ("acting like a goat" in the Golden Press American English translation) by attempting to build a Moon rocket. His subsequent tirade and blatant disregard for security terrifies the usually ebullient Captain; he even lifts the director of security, who bars his way, onto a coat hook. Another occasion is in Flight 714 to Sydney when, due to a misunderstanding, he physically assaults Laszlo Carreidas and has to be held back with great effort by Haddock and Tintin. In the same book, despite his deafness, he hears Captain Haddock tell him that he is "acting the goat", but Haddock quickly prevents the severe reaction from occurring. 
Earlier, in Red Rackham's Treasure, Calculus briefly frowns when he thinks that the Captain has lied to him in claiming that Tintin had gone for a row, when Tintin was actually diving to search for treasure. Despite his gentle nature, Calculus is rather sensitive about his work and does not appreciate being ridiculed or belittled for his scientific efforts. In spite of all this, his friends stick by him come what may. Haddock invited him to stay at Marlinspike Hall after Calculus discovered it was the captain's ancestral home and bought it in his name with the money he had earned by selling the patent for his shark-submarine. He did this because Haddock and Tintin had provided him with the opportunity to test the submersible when they were searching for Red Rackham's Treasure. Tintin and Haddock crossed the world on at least two occasions (Prisoners of the Sun and The Calculus Affair) in order to save him from kidnappers. He occasionally comments that he was a great sportsman in his youth, with a very athletic lifestyle. He is a former practitioner of the French martial art savate, although a demonstration in Flight 714 to Sydney shows him to be more than a bit rusty. 
Philippe Goddin has suggested that Calculus' deafness was inspired by Paul Eydt, whom Hergé had known at Le Vingtième Siècle where Tintin's adventures had first appeared. Cuthbert Calculus' original French name is "Tryphon Tournesol" and Tryphon was the name of Hergé's plumber. In contrast to his unquestionable scientific merits, Calculus is a fervent believer in dowsing, and carries a pendulum for that purpose. Hergé himself was a believer in the subject: dowser Victor Mertens had used a pendulum to find the lost wedding ring of Hergé's wife in October 1939. == Calculus and his peers == Before Calculus appeared in Red Rackham's Treasure, Hergé had featured other highly educated but eccentric scholars and scientists, such as the following: Sophocles Sarcophagus of Cigars of the Pharaoh who showed signs of being clumsy and forgetful before going completely mad. The absent-minded professor who appeared in The Broken Ear and who forgot his glasses, wore his cleaning-lady's overcoat, held his cane upside-down as if it were an umbrella, mistook a parrot for a man and left his briefcase next to a lamp post. In the original edition published in 1935 his name is given as Professor Euclide, after the Greek mathematician known as the "Father of Geometry". Professor Hector Alembick in King Ottokar's Sceptre, who had a bad habit of throwing his cigarettes on the floor. Two astronomers from The Shooting Star also showed unusual and, in one case, mad behaviour: Professor Philippulus, or "Philippulus the prophet" represented the dilemmas some face over religious belief and scientific research. In his case the conflict took a toll on his mind when the end-of-the-world appeared to be imminent. He then went around wearing bedsheets and beating a gong to warn of the event and later disrupted the eve of departure of the expedition sent to find a meteorite. 
His colleague, Professor Decimus Phostle, though not mad, looked forward to the end of the world, whose prediction he thought would make him famous. In contrast, he showed signs of maturity during the expedition when he called off the search for the meteorite in order to help a ship in distress. Calculus's introduction appears to have supplied Hergé with the bizarre nature he wished to portray in a man of science. Other figures of high education were shown as more stable and level-headed. The members of the archaeological expedition who fall victim to The Seven Crystal Balls show no apparent signs of eccentricity. The most prominent member of this group is Calculus's friend Hercules Tarragon, with whom he attended university. Tarragon is a large, ebullient man, possessing a jovial nature, but not necessarily eccentric. While he sometimes appears aloof when absorbed in his work, Calculus corresponds with other scientists and also collaborates with many of them on his projects. He works with Mr. Baxter and Frank Wolff on the Moon rocket and corresponds with ultrasonics expert Professor Alfredo Topolino of Nyon in The Calculus Affair. == Relationship to women == Calculus is the only main character in the Tintin series to display signs of attraction to women. This is notably evident in his interactions with Bianca Castafiore, with whom he is smitten during her long stay at Marlinspike Hall in The Castafiore Emerald. During her stay, his botanic experiments lead him to create a new variety of rose, which he names in her honour. Nonetheless, he happily congratulates Captain Haddock on his "engagement" to Castafiore (in fact a media hoax which he unwittingly fuelled). Calculus is also distressed by Castafiore's imprisonment in Tintin and the Picaros, and is adamant on going to her defence. 
In the same book, he is charmed by the unattractive Peggy Alcazar (wife of General Alcazar) and kisses her hand after she bluntly criticizes Tintin and Haddock (a remark that Calculus mistakes for a warm greeting). == In other media == Calculus also featured frequently in the 1957–1963 Belvision TV series, as well as in other adaptations of the comics. The Belvision TV series is notable for depicting Calculus with perfect hearing. Calculus' original French name was "Tournesol" which is the French term for sunflower. In the 1970s and 1980s, he starred in a series of cartoon television commercials for Fruit d'or products which included cooking oil and mayonnaise made from sunflower oil. Some of the ads would conclude with him floating up into the air to demonstrate how they kept a good healthy balance. Other characters from the books were also included. A pseudonym variation was used on an album by Stephen Duffy – see Tin Tin and "Dr. Calculus". == See also == List of The Adventures of Tintin characters == References == === Bibliography === Farr, Michael (2007). Tintin & Co. London: John Murray Publishers Ltd. ISBN 978-1-4052-3264-7. Peeters, Benoît (2012) [2002]. Hergé: Son of Tintin. Tina A. Kover (translator). Baltimore, Maryland: Johns Hopkins University Press. ISBN 978-1-4214-0454-7.
Wikipedia/Professor_Calculus
The fluent calculus is a formalism for expressing dynamical domains in first-order logic. It is a variant of the situation calculus; the main difference is that situations are considered representations of states. A binary function symbol ∘ is used to concatenate the terms that represent facts that hold in a situation. For example, that the box is on the table in the situation s is represented by the formula ∃t. s = on(box, table) ∘ t. The frame problem is solved by asserting that the situation after the execution of an action is identical to the one before but for the conditions changed by the action. For example, the action of moving the box from the table to the floor is formalized as: State(Do(move(box, table, floor), s)) ∘ on(box, table) = State(s) ∘ on(box, floor) This formula states that, relative to the state before the move, the state after the move has the term on(box, floor) added and the term on(box, table) removed. Axioms specifying that ∘ is commutative and non-idempotent are necessary for such axioms to work. == See also == Fluent (artificial intelligence) Frame problem Situation calculus Event calculus == References == M. Thielscher (1998). Introduction to the fluent calculus. Electronic Transactions on Artificial Intelligence, 2(3–4):179–192. M. Thielscher (2005). Reasoning Robots - The Art and Science of Programming Robotic Agents. Volume 33 of Applied Logic Series. Springer, Dordrecht.
Wikipedia/Fluent_calculus
The event calculus is a logical theory for representing and reasoning about events and about the way in which they change the state of some real or artificial world. It deals both with action events, which are performed by agents, and with external events, which are outside the control of any agent. The event calculus represents the state of the world at any time by the set of all the facts (called fluents) that hold at that time. Events initiate and terminate fluents. The event calculus differs from most other approaches for reasoning about change by reifying time, associating events with the times at which they happen, and associating fluents with the times at which they hold.

The original version of the event calculus, introduced by Robert Kowalski and Marek Sergot in 1986, was formulated as a logic program and developed for representing narratives and database updates. Kave Eshghi showed how to use the event calculus for planning, by using abduction to generate hypothetical actions to achieve a desired state of affairs. It was extended by Murray Shanahan and Rob Miller in the 1990s and reformulated in first-order logic with circumscription. These and later extensions have been used to formalize non-deterministic actions, concurrent actions, actions with delayed effects, gradual changes, actions with duration, continuous change, and non-inertial fluents. Van Lambalgen and Hamm showed how a formulation of the event calculus as a constraint logic program can be used to give an algorithmic semantics to tense and aspect in natural language.

== Fluents and events ==

In the event calculus, fluents are reified. This means that fluents are represented by terms. For example, holdsAt(on(green_block, table), 1) expresses that the green_block is on the table at time 1.
Here holdsAt is a predicate, while on(green_block, table) is a term. In general, the atomic formula

holdsAt(fluent, time)

expresses that the fluent holds at the time.

Events are also reified and represented by terms. For example, happensAt(move(green_block, red_block), 3) expresses that the green_block is moved onto the red_block at time 3. In general:

happensAt(event, time)

expresses that the event happens at the time.

The relationships between events and the fluents that they initiate and terminate are also represented by atomic formulae:

initiates(event, fluent, time) expresses that if the event happens at the time then the fluent becomes true after the time.

terminates(event, fluent, time) expresses that if the event happens at the time then the fluent ceases to be true after the time.
== Domain-independent axiom ==

The event calculus was developed in part as an alternative to the situation calculus, as a solution to the frame problem of representing and reasoning about the way in which actions and other events change the state of some world. There are many variants of the event calculus, but the core axiom of one of the simplest and most useful variants can be expressed as a single, domain-independent axiom:

holdsAt(F, T2) ←
  happensAt(E1, T1) ∧ initiates(E1, F, T1) ∧ (T1 < T2) ∧
  ¬∃E2, T [happensAt(E2, T) ∧ terminates(E2, F, T) ∧ (T1 ≤ T < T2)]

The axiom states that a fluent F holds at a time T2 if an event E1 happens at a time T1, E1 initiates F at T1, T1 is before T2, and it is not the case that there exist an event E2 and a time T such that E2 happens at T, E2 terminates F at T, T1 is before or at the same time as T, and T is before T2.

The event calculus solves the frame problem by interpreting this axiom in a non-monotonic logic, such as first-order logic with circumscription or, as a logic program, in Horn clause logic with negation as failure.
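Read operationally over finite sets of ground facts, the axiom can be transcribed almost literally. The sketch below is an illustrative Python rendering (the light-switch domain is invented for the demonstration, and negation as failure is approximated by Python's `not any(...)`):

```python
# Ground facts, as sets of tuples: happensAt(E, T), initiates(E, F, T),
# terminates(E, F, T).
happens_at = {("switch_on", 1), ("switch_off", 5)}
initiates = {("switch_on", "light_on", 1)}
terminates = {("switch_off", "light_on", 5)}

def holds_at(f, t2):
    """Core axiom: some earlier event initiates f, and no event
    terminates f in the half-open interval [t1, t2)."""
    return any(
        (e1, t1) in happens_at and t1 < t2
        and not any(
            (e2, t) in happens_at and t1 <= t < t2
            for (e2, f2, t) in terminates if f2 == f
        )
        for (e1, f1, t1) in initiates if f1 == f
    )

assert holds_at("light_on", 3)      # initiated at 1, not yet clipped
assert not holds_at("light_on", 6)  # terminated at 5
assert not holds_at("light_on", 1)  # initiation takes effect only after time 1
```

Note that the fluent persists by default between its initiating and terminating events; nothing has to be stated about the times in between.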
In fact, circumscription is one of several semantics that can be given to negation as failure, and it is closely related to the completion semantics for logic programs (which interprets if as if and only if).

The core event calculus axiom defines the holdsAt predicate in terms of the happensAt, initiates, terminates, < and ≤ predicates. To apply the event calculus to a particular problem, these other predicates also need to be defined.

The event calculus is compatible with different definitions of the temporal predicates < and ≤. In most applications, times are represented discretely, by the natural numbers, or continuously, by non-negative real numbers. However, times can also be partially ordered.

== Domain-dependent axioms ==

To apply the event calculus in a particular problem domain, it is necessary to define the initiates and terminates predicates for that domain. For example, in the blocks world domain, an event move(Object, Place) of moving an object onto a place initiates the fluent on(Object, Place), which expresses that the object is on the place, and terminates the fluent on(Object, Place1), which expresses that the object is on a different place:

initiates(move(Object, Place), on(Object, Place), Time).
terminates(move(Object, Place), on(Object, Place1), Time) ← different(Place1, Place).

If we want to represent the fact that a Fluent holds in an initial state, say at time 1, then with the simple core axiom above we need an event, say initialise(Fluent), which initiates the Fluent at any time:

initiates(initialise(Fluent), Fluent, Time).

== Problem-dependent axioms ==

To apply the event calculus, given the definitions of the holdsAt, initiates, terminates, < and ≤ predicates, it is necessary to define the happensAt predicates that describe the specific context of the problem.
For example, in the blocks world domain, we might want to describe an initial state in which there are two blocks, a red block on a green block on a table, like a toy traffic light, followed by moving the red block to the table at time 1 and moving the green block onto the red block at time 3, turning the traffic light upside down:

happensAt(initialise(on(red_block, green_block)), 0)
happensAt(initialise(on(green_block, table)), 0)
happensAt(move(red_block, table), 1)
happensAt(move(green_block, red_block), 3)

== A Prolog implementation ==

The event calculus has a natural implementation in pure Prolog (without any features that do not have a logical interpretation). For example, the blocks world scenario above can be implemented (with minor modifications) by a Prolog program. The program differs from the earlier formalisation in the following ways:

The core axiom has been rewritten, using an auxiliary predicate clipped(Fact, Time1, Time2). This rewriting enables the elimination of existential quantifiers, conforming to the Prolog convention that all variables are universally quantified.

The order of the conditions in the body of the core axiom(s) has been changed, to generate answers to queries in temporal order.

The equality in the condition T1 ≤ T has been removed from the corresponding condition before(Time1, Time). This builds in a simplifying assumption that events do not simultaneously initiate and terminate the same fluent.
As a consequence, the definition of the terminates predicate has been simplified by eliminating the condition different(Place1, Place).

Given an appropriate definition of the predicate before(Time1, Time2), the Prolog program generates all answers to the query what holds when? in temporal order.

The program can also answer negative queries, such as which fluents do not hold at which times? However, to work correctly, all variables in negative conditions must first be instantiated to terms containing no variables.

== Reasoning tools ==

In addition to Prolog and its variants, several other tools for reasoning using the event calculus are also available:

Abductive Event Calculus Planners
Discrete Event Calculus Reasoner
Event Calculus Answer Set Programming
Reactive Event Calculus
Run-Time Event Calculus (RTEC)
Epistemic Probabilistic Event Calculus (EPEC)

== Extensions ==

Notable extensions of the event calculus include variants based on Markov logic networks, probabilistic and epistemic variants, and their combinations.

== See also ==

First-order logic
Frame problem
Situation calculus

== References ==

== Further reading ==

Brandano, S. (2001). "The Event Calculus Assessed". IEEE TIME Symposium: 7–12.
R. Kowalski and F. Sadri (1995). "Variants of the Event Calculus". ICLP: 67–81.
Mueller, Erik T. (2015). Commonsense Reasoning: An Event Calculus Based Approach (2nd ed.). Waltham, MA: Morgan Kaufmann/Elsevier. ISBN 978-0128014165. (Guide to using the event calculus)
Shanahan, M. (1997). Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. MIT Press.
Shanahan, M. (1999). "The Event Calculus Explained". Springer Verlag, LNAI (1600): 409–430.

== Notes ==
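The Prolog program itself is not reproduced in this text; the following self-contained Python sketch mirrors its clipped-based reformulation on the traffic-light narrative (initialise events at time 0, moves at times 1 and 3). The encoding is illustrative; predicate names follow the formalisation, and, as in the simplified program, clipped uses a strict lower bound and terminates omits the different(Place1, Place) check.

```python
# happensAt facts for the blocks-world narrative.
happens = [
    (("initialise", ("on", "red_block", "green_block")), 0),
    (("initialise", ("on", "green_block", "table")), 0),
    (("move", "red_block", "table"), 1),
    (("move", "green_block", "red_block"), 3),
]

def initiates(event):
    """initialise(F) initiates F; move(Obj, Place) initiates on(Obj, Place)."""
    if event[0] == "initialise":
        return [event[1]]
    return [("on", event[1], event[2])]

def terminates(event, fluent):
    """move(Obj, Place) terminates on(Obj, Place1), for any Place1."""
    return event[0] == "move" and fluent[:2] == ("on", event[1])

def clipped(fluent, t1, t2):
    """Some event strictly inside (t1, t2) terminates the fluent."""
    return any(t1 < t < t2 and terminates(e, fluent) for (e, t) in happens)

def holds_at(fluent, t2):
    return any(
        t1 < t2 and fluent in initiates(e) and not clipped(fluent, t1, t2)
        for (e, t1) in happens
    )

# What holds at time 4, after both moves? (the inverted traffic light)
candidates = {("on", "red_block", "table"), ("on", "red_block", "green_block"),
              ("on", "green_block", "table"), ("on", "green_block", "red_block")}
state = sorted(f for f in candidates if holds_at(f, 4))
assert state == [("on", "green_block", "red_block"), ("on", "red_block", "table")]
```

Querying each candidate fluent at successive times reproduces the what-holds-when answers in temporal order.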
Wikipedia/Event_calculus
In theoretical computer science, the modal μ-calculus (Lμ, sometimes just μ-calculus, although this can have a more general meaning) is an extension of propositional modal logic (with many modalities) obtained by adding the least fixed point operator μ and the greatest fixed point operator ν; it is thus a fixed-point logic.

The (propositional, modal) μ-calculus originates with Dana Scott and Jaco de Bakker, and was further developed by Dexter Kozen into the version most used nowadays. It is used to describe properties of labelled transition systems and for verifying these properties. Many temporal logics can be encoded in the μ-calculus, including CTL* and its widely used fragments, linear temporal logic and computational tree logic.

An algebraic view is to see it as an algebra of monotonic functions over a complete lattice, with operators consisting of functional composition plus the least and greatest fixed point operators; from this viewpoint, the modal μ-calculus is over the lattice of a power set algebra. The game semantics of the μ-calculus is related to two-player games with perfect information, particularly infinite parity games.

== Syntax ==

Let P (propositions) and A (actions) be two finite sets of symbols, and let Var be a countably infinite set of variables. The set of formulas of (propositional, modal) μ-calculus is defined as follows:

each proposition and each variable is a formula;
if φ and ψ are formulas, then φ ∧ ψ is a formula;
if φ is a formula, then ¬φ is a formula;
if φ is a formula and a is an action, then [a]φ is a formula (pronounced either: a box φ, or after a necessarily φ);
if φ is a formula and Z a variable, then νZ.φ is a formula, provided that every free occurrence of Z in φ occurs positively, i.e. within the scope of an even number of negations.

(The notions of free and bound variables are as usual, where ν is the only binding operator.)

Given the above definitions, we can enrich the syntax with:

φ ∨ ψ, meaning ¬(¬φ ∧ ¬ψ);
⟨a⟩φ (pronounced either: a diamond φ, or after a possibly φ), meaning ¬[a]¬φ;
μZ.φ, meaning ¬νZ.¬φ[Z := ¬Z], where φ[Z := ¬Z] means substituting ¬Z for Z in all free occurrences of Z in φ.

The first two formulas are the familiar ones from the classical propositional calculus and, respectively, the minimal multimodal logic K. The notation μZ.φ (and its dual) is inspired by the lambda calculus; the intent is to denote the least (respectively greatest) fixed point of the expression φ, where the "minimization" (respectively "maximization") is in the variable Z, much as in lambda calculus λZ.φ is a function with formula φ in bound variable Z; see the denotational semantics below for details.
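The grammar and its derived operators can be rendered directly as a small abstract syntax. The Python sketch below is illustrative (class and function names are chosen for the example); only ∧, ¬, [a]· and νZ.· are primitive, and ∨, ⟨a⟩ and μ are built from them exactly as defined above:

```python
from dataclasses import dataclass

# Primitive syntax: propositions, variables, ∧, ¬, [a]·, νZ.·
@dataclass(frozen=True)
class Prop:
    name: str

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Not:
    body: object

@dataclass(frozen=True)
class Box:
    action: str
    body: object

@dataclass(frozen=True)
class Nu:
    var: str
    body: object

def Or(p, q):
    """φ ∨ ψ := ¬(¬φ ∧ ¬ψ)"""
    return Not(And(Not(p), Not(q)))

def Diamond(a, p):
    """⟨a⟩φ := ¬[a]¬φ"""
    return Not(Box(a, Not(p)))

def subst(phi, z, repl):
    """Substitute repl for the free occurrences of variable z in phi."""
    if isinstance(phi, Var):
        return repl if phi.name == z else phi
    if isinstance(phi, Prop):
        return phi
    if isinstance(phi, And):
        return And(subst(phi.left, z, repl), subst(phi.right, z, repl))
    if isinstance(phi, Not):
        return Not(subst(phi.body, z, repl))
    if isinstance(phi, Box):
        return Box(phi.action, subst(phi.body, z, repl))
    # Nu binds its variable, so substitution stops at a binder for z.
    return phi if phi.var == z else Nu(phi.var, subst(phi.body, z, repl))

def Mu(z, phi):
    """μZ.φ := ¬νZ.¬φ[Z := ¬Z]"""
    return Not(Nu(z, Not(subst(phi, z, Not(Var(z))))))

# μZ.(p ∨ ⟨a⟩Z): some a-path reaches a state satisfying p.
reach_p = Mu("Z", Or(Prop("p"), Diamond("a", Var("Z"))))
```

Because the dataclasses are frozen, structurally equal formulas compare equal, which makes the derived definitions easy to check.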
== Denotational semantics ==

Models of the (propositional) μ-calculus are given as labelled transition systems (S, R, V) where:

S is a set of states;
R maps each label a to a binary relation R_a on S;
V : P → 2^S maps each proposition p ∈ P to the set of states where the proposition is true.

Given a labelled transition system (S, R, V) and an interpretation i of the variables Z of the μ-calculus, the denotation ⟦·⟧_i : φ → 2^S is the function defined by the following rules:

⟦p⟧_i = V(p);
⟦Z⟧_i = i(Z);
⟦φ ∧ ψ⟧_i = ⟦φ⟧_i ∩ ⟦ψ⟧_i;
⟦¬φ⟧_i = S ∖ ⟦φ⟧_i;
⟦[a]φ⟧_i = {s ∈ S | ∀t ∈ S, (s, t) ∈ R_a → t ∈ ⟦φ⟧_i};
⟦νZ.φ⟧_i = ⋃{T ⊆ S | T ⊆ ⟦φ⟧_{i[Z := T]}}, where i[Z := T] maps Z to T while preserving the mappings of i everywhere else.
By duality, the interpretation of the other basic formulas is:

⟦φ ∨ ψ⟧_i = ⟦φ⟧_i ∪ ⟦ψ⟧_i;
⟦⟨a⟩φ⟧_i = {s ∈ S | ∃t ∈ S, (s, t) ∈ R_a ∧ t ∈ ⟦φ⟧_i};
⟦μZ.φ⟧_i = ⋂{T ⊆ S | ⟦φ⟧_{i[Z := T]} ⊆ T}.

Less formally, this means that, for a given transition system (S, R, V):

p holds in the set of states V(p);
φ ∧ ψ holds in every state where φ and ψ both hold;
¬φ holds in every state where φ does not hold;
[a]φ holds in a state s if every a-transition leading out of s leads to a state where φ holds;
⟨a⟩φ holds in a state s if there exists an a-transition leading out of s that leads to a state where φ holds;
νZ.φ holds in any state in any set T such that, when the variable Z is set to T, then φ holds for all of T. (From the Knaster–Tarski theorem it follows that ⟦νZ.φ⟧_i is the greatest fixed point of T ↦ ⟦φ⟧_{i[Z := T]}, and ⟦μZ.φ⟧_i its least fixed point.)
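On a finite transition system, the semantic clauses translate directly into a fixpoint evaluator: by Knaster–Tarski, νZ.φ can be computed by iterating T ↦ ⟦φ⟧_{i[Z:=T]} downward from S, and μZ.φ upward from ∅, until the set stabilizes. The Python sketch below (formulas as nested tuples; the example system and all names are invented for the illustration) evaluates μZ.(p ∨ ⟨a⟩Z), i.e. reachability of a p-state:

```python
# A finite labelled transition system (S, R, V).
S = {0, 1, 2, 3}
R = {"a": {(0, 1), (1, 2), (3, 3)}}   # R maps each label to a relation on S
V = {"p": {2}}                        # valuation of the propositions

def ev(phi, i):
    """⟦φ⟧_i for formulas given as tuples: ("prop", p), ("var", Z),
    ("and", φ, ψ), ("or", φ, ψ), ("not", φ), ("box", a, φ),
    ("dia", a, φ), ("nu", Z, φ), ("mu", Z, φ)."""
    op = phi[0]
    if op == "prop":
        return V[phi[1]]
    if op == "var":
        return i[phi[1]]
    if op == "and":
        return ev(phi[1], i) & ev(phi[2], i)
    if op == "or":
        return ev(phi[1], i) | ev(phi[2], i)
    if op == "not":
        return S - ev(phi[1], i)
    if op in ("box", "dia"):
        a, body = phi[1], phi[2]
        sat = ev(body, i)
        succ = lambda s: [t for (u, t) in R[a] if u == s]
        if op == "box":
            return {s for s in S if all(t in sat for t in succ(s))}
        return {s for s in S if any(t in sat for t in succ(s))}
    # Fixed points: iterate T ↦ ⟦φ⟧_{i[Z := T]} from S (ν) or ∅ (μ).
    z, body = phi[1], phi[2]
    T = set(S) if op == "nu" else set()
    while True:
        T2 = ev(body, {**i, z: T})
        if T2 == T:
            return T
        T = T2

# μZ.(p ∨ ⟨a⟩Z): states from which a p-state is reachable along a-transitions.
reach = ev(("mu", "Z", ("or", ("prop", "p"), ("dia", "a", ("var", "Z")))), {})
assert reach == {0, 1, 2}   # state 3 only loops on itself and never reaches p
```

The iteration terminates because the formula is monotone in Z and S is finite, so the approximants form an increasing (for μ) or decreasing (for ν) chain of subsets of S.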
The interpretations of [a]φ and ⟨a⟩φ are in fact the "classical" ones from dynamic logic. Additionally, the operator μ can be interpreted as liveness ("something good eventually happens") and ν as safety ("nothing bad ever happens") in Leslie Lamport's informal classification.

=== Examples ===

νZ.(φ ∧ [a]Z) is interpreted as "φ is true along every a-path". The idea is that "φ is true along every a-path" can be defined axiomatically as that (weakest) sentence Z which implies φ and which remains true after processing any a-label.

μZ.(φ ∨ ⟨a⟩Z) is interpreted as the existence of a path along a-transitions to a state where φ holds.

The property of a state being deadlock-free, meaning that no path from that state reaches a dead end, is expressed by the formula

νZ.(⋁_{a∈A} ⟨a⟩⊤ ∧ ⋀_{a∈A} [a]Z)

== Decision problems ==

Satisfiability of a modal μ-calculus formula is EXPTIME-complete. As for linear temporal logic, the model checking, satisfiability and validity problems of linear modal μ-calculus are PSPACE-complete.

== See also ==

Finite model theory
Alternation-free modal μ-calculus

== Notes ==

== References ==

Clarke, Edmund M. Jr.; Orna Grumberg; Doron A. Peled (1999). Model Checking. Cambridge, Massachusetts, USA: MIT Press. ISBN 0-262-03270-8. Chapter 7, "Model checking for the μ-calculus", pp. 97–108.
Stirling, Colin (2001). Modal and Temporal Properties of Processes. New York, Berlin, Heidelberg: Springer Verlag. ISBN 0-387-98717-7. Chapter 5, "Modal μ-calculus", pp. 103–128.
André Arnold; Damian Niwiński (2001). Rudiments of μ-Calculus.
Elsevier. ISBN 978-0-444-50620-7. Chapter 6, "The μ-calculus over powerset algebras", pp. 141–153, is about the modal μ-calculus.
Yde Venema (2008). Lectures on the Modal μ-calculus. Presented at the 18th European Summer School in Logic, Language and Information.
Bradfield, Julian; Stirling, Colin (2006). "Modal mu-calculi". In P. Blackburn; J. van Benthem; F. Wolter (eds.). The Handbook of Modal Logic. Elsevier. pp. 721–756.
Emerson, E. Allen (1996). "Model Checking and the Mu-calculus". Descriptive Complexity and Finite Models. American Mathematical Society. pp. 185–214. ISBN 0-8218-0517-7.
Kozen, Dexter (1983). "Results on the Propositional μ-Calculus". Theoretical Computer Science. 27 (3): 333–354. doi:10.1016/0304-3975(82)90125-6.

== External links ==

Sophie Pinchinat, Logic, Automata & Games, video recording of a lecture at ANU Logic Summer School '09
Wikipedia/Modal_μ-calculus
In mathematical logic, algebraic logic is the reasoning obtained by manipulating equations with free variables. What is now usually called classical algebraic logic focuses on the identification and algebraic description of models appropriate for the study of various logics (in the form of classes of algebras that constitute the algebraic semantics for these deductive systems) and on connected problems like representation and duality. Well-known results like the representation theorem for Boolean algebras and Stone duality fall under the umbrella of classical algebraic logic (Czelakowski 2003). Works in the more recent abstract algebraic logic (AAL) focus on the process of algebraization itself, like classifying various forms of algebraizability using the Leibniz operator (Czelakowski 2003).

== Calculus of relations ==

A homogeneous binary relation is found in the power set of X × X for some set X, while a heterogeneous relation is found in the power set of X × Y, where X ≠ Y. Whether a given relation holds for two individuals is one bit of information, so relations are studied with Boolean arithmetic. Elements of the power set are partially ordered by inclusion, and the lattice of these sets becomes an algebra through relative multiplication or composition of relations. "The basic operations are set-theoretic union, intersection and complementation, the relative multiplication, and conversion." The conversion refers to the converse relation, which always exists, contrary to function theory. A given relation may be represented by a logical matrix; then the converse relation is represented by the transpose matrix. A relation obtained as the composition of two others is then represented by the logical matrix obtained by matrix multiplication using Boolean arithmetic.

=== Example ===

An example of the calculus of relations arises in erotetics, the theory of questions. In the universe of utterances there are statements S and questions Q.
There are two relations π and α from Q to S: q α a holds when a is a direct answer to question q. The other relation, q π p, holds when p is a presupposition of question q. The converse relation π^T runs from S to Q, so that the composition π^T α is a homogeneous relation on S. The art of putting the right question to elicit a sufficient answer is recognized in the Socratic method of dialogue.

=== Functions ===

The description of the key binary relation properties has been formulated with the calculus of relations. The univalence property of functions describes a relation R that satisfies the formula R^T R ⊆ I, where I is the identity relation on the range of R. The injective property corresponds to univalence of R^T, or the formula R R^T ⊆ I, where this time I is the identity on the domain of R. But a univalent relation is only a partial function, while a univalent total relation is a function. The formula for totality is I ⊆ R R^T. Charles Loewner and Gunther Schmidt use the term mapping for a total, univalent relation.

The facility of complementary relations inspired Augustus De Morgan and Ernst Schröder to introduce equivalences using R̄ for the complement of relation R. These equivalences provide alternative formulas for univalent relations (R Ī ⊆ R̄) and total relations (R̄ ⊆ R Ī). Therefore, mappings satisfy the formula R̄ = R Ī. Schmidt uses this principle as "slipping below negation from the left". For a mapping f, f Ā = (f A)̄.

=== Abstraction ===

The relation algebra structure, based in set theory, was transcended by Tarski with axioms describing it.
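Using the logical-matrix representation mentioned above, these properties become short Boolean-matrix computations. The following Python sketch (all helper names are invented for the illustration) checks univalence R^T R ⊆ I and totality I ⊆ R R^T for relations given as 0/1 matrices:

```python
def transpose(A):
    """Converse relation: the transpose of the logical matrix."""
    return [list(row) for row in zip(*A)]

def compose(A, B):
    """Relative multiplication: Boolean matrix product."""
    return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    """Inclusion A ⊆ B, checked entrywise."""
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def univalent(R):   # R^T R ⊆ I : at most one image per element
    return leq(compose(transpose(R), R), identity(len(R[0])))

def total(R):       # I ⊆ R R^T : at least one image per element
    return leq(identity(len(R)), compose(R, transpose(R)))

# f sends 0 ↦ 0, 1 ↦ 0, 2 ↦ 1: a mapping (total and univalent).
f = [[1, 0],
     [1, 0],
     [0, 1]]
assert univalent(f) and total(f)

# r relates 0 to both columns and relates 1 and 2 to nothing.
r = [[1, 1],
     [0, 0],
     [0, 0]]
assert not univalent(r)
assert not total(r)
```

A relation that passes both checks is a mapping in the Loewner–Schmidt sense; dropping totality leaves only a partial function.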
Tarski then asked whether every algebra satisfying the axioms could be represented by an algebra of concrete relations on a set. The negative answer opened the frontier of abstract algebraic logic.

== Algebras as models of logics ==

Algebraic logic treats algebraic structures, often bounded lattices, as models (interpretations) of certain logics, making logic a branch of order theory. In algebraic logic:

Variables are tacitly universally quantified over some universe of discourse. There are no existentially quantified variables or open formulas;
Terms are built up from variables using primitive and defined operations. There are no connectives;
Formulas, built from terms in the usual way, can be equated if they are logically equivalent. To express a tautology, equate a formula with a truth value;
The rules of proof are the substitution of equals for equals, and uniform replacement. Modus ponens remains valid, but is seldom employed.

In the table below, the left column contains one or more logical or mathematical systems, and the algebraic structures that are its models are shown on the right in the same row. Some of these structures are either Boolean algebras or proper extensions thereof. Modal and other nonclassical logics are typically modeled by what are called "Boolean algebras with operators."

Algebraic formalisms going beyond first-order logic in at least some respects include:

Combinatory logic, having the expressive power of set theory;
Relation algebra, arguably the paradigmatic algebraic logic, which can express Peano arithmetic and most axiomatic set theories, including the canonical ZFC.

== History ==

Algebraic logic is, perhaps, the oldest approach to formal logic, arguably beginning with a number of memoranda Leibniz wrote in the 1680s, some of which were published in the 19th century and translated into English by Clarence Lewis in 1918 (pp. 291–305). But nearly all of Leibniz's known work on algebraic logic was published only in 1903, after Louis Couturat discovered it in Leibniz's Nachlass.
Parkinson (1966) and Loemker (1969) translated selections from Couturat's volume into English. Modern mathematical logic began in 1847, with two pamphlets whose respective authors were George Boole and Augustus De Morgan. In 1870 Charles Sanders Peirce published the first of several works on the logic of relatives. Alexander Macfarlane published his Principles of the Algebra of Logic in 1879, and in 1883, Christine Ladd, a student of Peirce at Johns Hopkins University, published "On the Algebra of Logic". Logic turned more algebraic when binary relations were combined with composition of relations. For sets A and B, a relation over A and B is represented as a member of the power set of A×B with properties described by Boolean algebra. The "calculus of relations" is arguably the culmination of Leibniz's approach to logic. At the Hochschule Karlsruhe the calculus of relations was described by Ernst Schröder. In particular he formulated Schröder rules, though De Morgan had anticipated them with his Theorem K. In 1903 Bertrand Russell developed the calculus of relations and logicism as his version of pure mathematics based on the operations of the calculus as primitive notions. The "Boole–Schröder algebra of logic" was developed at University of California, Berkeley in a textbook by Clarence Lewis in 1918. He treated the logic of relations as derived from the propositional functions of two or more variables. Hugh MacColl, Gottlob Frege, Giuseppe Peano, and A. N. Whitehead all shared Leibniz's dream of combining symbolic logic, mathematics, and philosophy. Some writings by Leopold Löwenheim and Thoralf Skolem on algebraic logic appeared after the 1910–13 publication of Principia Mathematica, and Tarski revived interest in relations with his 1941 essay "On the Calculus of Relations". 
According to Helena Rasiowa, "The years 1920-40 saw, in particular in the Polish school of logic, researches on non-classical propositional calculi conducted by what is termed the logical matrix method. Since logical matrices are certain abstract algebras, this led to the use of an algebraic method in logic." Brady (2000) discusses the rich historical connections between algebraic logic and model theory. The founders of model theory, Ernst Schröder and Leopold Loewenheim, were logicians in the algebraic tradition. Alfred Tarski, the founder of set theoretic model theory as a major branch of contemporary mathematical logic, also: Initiated abstract algebraic logic with relation algebras Invented cylindric algebra Co-discovered Lindenbaum–Tarski algebra. In the practice of the calculus of relations, Jacques Riguet used the algebraic logic to advance useful concepts: he extended the concept of an equivalence relation (on a set) to the heterogeneous case with the notion of a difunctional relation. Riguet also extended ordering to the heterogeneous context by his note that a staircase logical matrix has a complement that is also a staircase, and that the theorem of N. M. Ferrers follows from interpretation of the transpose of a staircase. Riguet generated rectangular relations by taking the outer product of logical vectors; these contribute to the non-enlargeable rectangles of formal concept analysis. Leibniz had no influence on the rise of algebraic logic because his logical writings were little studied before the Parkinson and Loemker translations. Our present understanding of Leibniz as a logician stems mainly from the work of Wolfgang Lenzen, summarized in Lenzen (2004). To see how present-day work in logic and metaphysics can draw inspiration from, and shed light on, Leibniz's thought, see Zalta (2000). == See also == Boolean algebra Codd's theorem Computer algebra Universal algebra == References == == Sources == Brady, Geraldine (2000). 
From Peirce to Skolem: A Neglected Chapter in the History of Logic. Studies in the History and Philosophy of Mathematics. Amsterdam, Netherlands: North-Holland/Elsevier Science BV. ISBN 9780080532028. Czelakowski, Janusz (2003). "Review: Algebraic Methods in Philosophical Logic by J. Michael Dunn and Gary M. Hardegree". The Bulletin of Symbolic Logic. 9. Association for Symbolic Logic, Cambridge University Press. ISSN 1079-8986. JSTOR 3094793. Lenzen, Wolfgang, 2004, "Leibniz’s Logic" in Gabbay, D., and Woods, J., eds., Handbook of the History of Logic, Vol. 3: The Rise of Modern Logic from Leibniz to Frege. North-Holland: 1-84. Loemker, Leroy (1969) [First edition 1956], Leibniz: Philosophical Papers and Letters (2nd ed.), Reidel. Parkinson, G.H.R (1966). Leibniz: Logical Papers. Oxford University Press. Zalta, E. N., 2000, "A (Leibnizian) Theory of Concepts," Philosophiegeschichte und logische Analyse / Logical Analysis and History of Philosophy 3: 137-183. == Further reading == J. Michael Dunn; Gary M. Hardegree (2001). Algebraic Methods in Philosophical Logic. Oxford University Press. ISBN 978-0-19-853192-0. Good introduction for readers with prior exposure to non-classical logics but without much background in order theory and/or universal algebra; the book covers these prerequisites at length. This book however has been criticized for poor and sometimes incorrect presentation of AAL results. Review by Janusz Czelakowski Hajnal Andréka, István Németi and Ildikó Sain (2001). "Algebraic logic". In Dov M. Gabbay, Franz Guenthner (ed.). Handbook of Philosophical Logic, vol 2 (2nd ed.). Springer. ISBN 978-0-7923-7126-7. Draft. Ramon Jansana (2011), "Propositional Consequence Relations and Algebraic Logic". Stanford Encyclopedia of Philosophy. Mainly about abstract algebraic logic. Stanley Burris (2015), "The Algebra of Logic Tradition". Stanford Encyclopedia of Philosophy. 
Willard Quine, 1976, "Algebraic Logic and Predicate Functors" pages 283 to 307 in The Ways of Paradox, Harvard University Press. Historical perspective Ivor Grattan-Guinness, 2000. The Search for Mathematical Roots. Princeton University Press. Irving Anellis & N. Houser (1991) "Nineteenth Century Roots of Algebraic Logic and Universal Algebra", pages 1–36 in Algebraic Logic, Colloquia Mathematica Societatis János Bolyai # 54, János Bolyai Mathematical Society & Elsevier ISBN 0444885439 == External links == Algebraic logic at PhilPapers
Wikipedia/Calculus_of_relations
Tuple calculus is a calculus that was created and introduced by Edgar F. Codd as part of the relational model, in order to provide a declarative database-query language for data manipulation in this data model. It formed the inspiration for the database-query languages QUEL and SQL, of which the latter, although far less faithful to the original relational model and calculus, is now the de facto standard database-query language; a dialect of SQL is used by nearly every relational-database-management system. Michel Lacroix and Alain Pirotte proposed domain calculus, which is closer to first-order logic and together with Codd showed that both of these calculi (as well as relational algebra) are equivalent in expressive power. Subsequently, query languages for the relational model were called relationally complete if they could express at least all of these queries. == Definition == === Relational database === Since the calculus is a query language for relational databases we first have to define a relational database. The basic relational building block is the domain (somewhat similar, but not equal to, a data type). A tuple is a finite sequence of attributes, which are ordered pairs of domains and values. A relation is a set of (compatible) tuples. Although these relational concepts are mathematically defined, those definitions map loosely to traditional database concepts. A table is an accepted visual representation of a relation; a tuple is similar to the concept of a row. We first assume the existence of a set C of column names, examples of which are "name", "author", "address", etcetera. We define headers as finite subsets of C. A relational database schema is defined as a tuple S = (D, R, h) where D is the domain of atomic values (see relational model for more on the notions of domain and atomic value), R is a finite set of relation names, and h : R → 2C a function that associates a header with each relation name in R. 
(Note that this is a simplification from the full relational model where there is more than one domain and a header is not just a set of column names but also maps these column names to a domain.) Given a domain D we define a tuple over D as a partial function t : C ⇸ D that maps some column names to an atomic value in D. An example would be (name : "Harry", age : 25). The set of all tuples over D is denoted as TD. The subset of C for which a tuple t is defined is called the domain of t (not to be confused with the domain in the schema) and denoted as dom(t). Finally we define a relational database given a schema S = (D, R, h) as a function db : R → 2TD that maps the relation names in R to finite subsets of TD, such that for every relation name r in R and tuple t in db(r) it holds that dom(t) = h(r). The latter requirement simply says that all the tuples in a relation should contain the same column names, namely those defined for it in the schema. === Atoms === For the construction of the formulas we will assume an infinite set V of tuple variables. The formulas are defined given a database schema S = (D, R, h) and a partial function type : V ⇸ 2C, called a type assignment, that assigns headers to some tuple variables. We then define the set of atomic formulas A[S,type] with the following rules: if v and w in V, a in type(v) and b in type(w) then the formula v.a = w.b is in A[S,type], if v in V, a in type(v) and k denotes a value in D then the formula v.a = k is in A[S,type], and if v in V, r in R and type(v) = h(r) then the formula r(v) is in A[S,type]. Examples of atoms are: (t.age = s.age) — t has an age attribute and s has an age attribute with the same value (t.name = "Codd") — tuple t has a name attribute and its value is "Codd" Book(t) — tuple t is present in relation Book.
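These definitions map directly onto simple data structures. The following sketch (all names and data are illustrative, not from the text) models a header as a frozenset of column names, a tuple over D as a dict, i.e. a partial function from column names to atomic values, and a database as a mapping from relation names to sets of such tuples, checking the requirement dom(t) = h(r):

```python
# A sketch of the definitions above; all names and data values are invented.
# A header is a frozenset of column names, a tuple over D is a dict
# (a partial function from column names to atomic values), and a database
# maps each relation name to a finite collection of such tuples.

def dom(t):
    """dom(t): the set of column names on which the tuple t is defined."""
    return frozenset(t)

def is_valid_db(db, h):
    """Check the schema requirement dom(t) = h(r) for every relation r."""
    return all(dom(t) == h[r] for r, tuples in db.items() for t in tuples)

h = {"Book": frozenset({"author", "title", "subject"})}
good = {"Book": [{"author": "C. J. Date", "title": "An Introduction",
                  "subject": "relational model"}]}
bad = {"Book": [{"author": "C. J. Date"}]}  # dom(t) != h("Book"): invalid
```

Representing tuples as dicts rather than ordered sequences mirrors the text's view of a tuple as a partial function on column names rather than a positional record.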
The formal semantics of such atoms is defined given a database db over S and a tuple variable binding val : V → TD that maps tuple variables to tuples over the domain in S: v.a = w.b is true if and only if val(v)(a) = val(w)(b) v.a = k is true if and only if val(v)(a) = k r(v) is true if and only if val(v) is in db(r) === Formulas === The atoms can be combined into formulas, as is usual in first-order logic, with the logical operators ∧ (and), ∨ (or) and ¬ (not), and we can use the existential quantifier (∃) and the universal quantifier (∀) to bind the variables. We define the set of formulas F[S,type] inductively with the following rules: every atom in A[S,type] is also in F[S,type] if f1 and f2 are in F[S,type] then the formula f1 ∧ f2 is also in F[S,type] if f1 and f2 are in F[S,type] then the formula f1 ∨ f2 is also in F[S,type] if f is in F[S,type] then the formula ¬ f is also in F[S,type] if v in V, H a header and f a formula in F[S,type[v->H]] then the formula ∃ v : H ( f ) is also in F[S,type], where type[v->H] denotes the function that is equal to type except that it maps v to H, if v in V, H a header and f a formula in F[S,type[v->H]] then the formula ∀ v : H ( f ) is also in F[S,type] Examples of formulas: t.name = "C. J. Date" ∨ t.name = "H. Darwen" Book(t) ∨ Magazine(t) ∀ t : {author, title, subject} ( ¬ ( Book(t) ∧ t.author = "C. J. Date" ∧ ¬ ( t.subject = "relational model"))) Note that the last formula states that all books that are written by C. J. Date have as their subject the relational model. As usual we omit brackets if this causes no ambiguity about the semantics of the formula. We will assume that the quantifiers quantify over the universe of all tuples over the domain in the schema. 
This leads to the following formal semantics for formulas given a database db over S and a tuple variable binding val : V -> TD: f1 ∧ f2 is true if and only if f1 is true and f2 is true, f1 ∨ f2 is true if and only if f1 is true or f2 is true or both are true, ¬ f is true if and only if f is not true, ∃ v : H ( f ) is true if and only if there is a tuple t over D such that dom(t) = H and the formula f is true for val[v->t], and ∀ v : H ( f ) is true if and only if for all tuples t over D such that dom(t) = H the formula f is true for val[v->t]. === Queries === Finally we define what a query expression looks like given a schema S = (D, R, h): { v : H | f(v) } where v is a tuple variable, H a header and f(v) a formula in F[S,type] where type = { (v, H) } and with v as its only free variable. The result of such a query for a given database db over S is the set of all tuples t over D with dom(t) = H such that f is true for db and val = { (v, t) }. Examples of query expressions are: { t : {name} | ∃ s : {name, wage} ( Employee(s) ∧ s.wage = 50.000 ∧ t.name = s.name ) } { t : {supplier, article} | ∃ s : {s#, sname} ( Supplier(s) ∧ s.sname = t.supplier ∧ ∃ p : {p#, pname} ( Product(p) ∧ p.pname = t.article ∧ ∃ a : {s#, p#} ( Supplies(a) ∧ s.s# = a.s# ∧ a.p# = p.p# ))) } == Semantic and syntactic restriction == === Domain-independent queries === Because the quantifiers range over all the tuples over the domain in the schema, a query may return different results for the same database depending on which schema is presumed. For example, consider the two schemas S1 = ( D1, R, h ) and S2 = ( D2, R, h ) with domains D1 = { 1 }, D2 = { 1, 2 }, relation names R = { "r1" } and headers h = { ("r1", {"a"}) }. Both schemas have a common instance: db = { ( "r1", { ("a", 1) } ) } If we consider the following query expression { t : {a} | t.a = t.a } then its result on db is either { (a : 1) } under S1 or { (a : 1), (a : 2) } under S2.
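The schema-dependent behaviour of { t : {a} | t.a = t.a } can be reproduced with a brute-force evaluator, a minimal sketch (names invented) that enumerates all tuples over the schema's domain with the query's header and keeps those satisfying the formula:

```python
from itertools import product

# Brute-force query evaluation sketch (illustrative, not a real engine):
# enumerate every tuple with the query's header whose values come from the
# schema's domain, keeping those that satisfy the formula.  Applied to the
# tautology t.a = t.a, this yields one tuple per domain element, which is
# why the answer differs between schemas S1 and S2.

def eval_query(header, domain, formula):
    cols = sorted(header)
    result = []
    for values in product(sorted(domain), repeat=len(cols)):
        t = dict(zip(cols, values))
        if formula(t):
            result.append(t)
    return result

tautology = lambda t: t["a"] == t["a"]           # the formula t.a = t.a
under_s1 = eval_query({"a"}, {1}, tautology)     # schema S1, D1 = {1}
under_s2 = eval_query({"a"}, {1, 2}, tautology)  # schema S2, D2 = {1, 2}
```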
It will also be clear that if we take the domain to be an infinite set, then the result of the query will also be infinite. To solve these problems we will restrict our attention to those queries that are domain independent, i.e., the queries that return the same result for a database under all of its schemas. An interesting property of these queries is that if we assume that the tuple variables range over tuples over the so-called active domain of the database, which is the subset of the domain that occurs in at least one tuple in the database or in the query expression, then the semantics of the query expressions does not change. In fact, in many definitions of the tuple calculus this is how the semantics of the quantifiers is defined, which makes all queries by definition domain independent. === Safe queries === In order to limit the query expressions such that they express only domain-independent queries a syntactical notion of safe query is usually introduced. To determine whether a query expression is safe we will derive two types of information from a query. The first is whether a variable-column pair t.a is bound to the column of a relation or a constant, and the second is whether two variable-column pairs are directly or indirectly equated (denoted t.v == s.w). For deriving boundedness we introduce the following reasoning rules: in " v.a = w.b " no variable-column pair is bound, in " v.a = k " the variable-column pair v.a is bound, in " r(v) " all pairs v.a are bound for a in type(v), in " f1 ∧ f2 " all pairs are bound that are bound either in f1 or in f2, in " f1 ∨ f2 " all pairs are bound that are bound both in f1 and in f2, in " ¬ f " no pairs are bound, in " ∃ v : H ( f ) " a pair w.a is bound if it is bound in f and w <> v, and in " ∀ v : H ( f ) " a pair w.a is bound if it is bound in f and w <> v. 
For deriving equatedness we introduce the following reasoning rules (next to the usual reasoning rules for equivalence relations: reflexivity, symmetry and transitivity): in " v.a = w.b " it holds that v.a == w.b, in " v.a = k " no pairs are equated, in " r(v) " no pairs are equated, in " f1 ∧ f2 " it holds that v.a == w.b if it holds either in f1 or in f2, in " f1 ∨ f2 " it holds that v.a == w.b if it holds both in f1 and in f2, in " ¬ f " no pairs are equated, in " ∃ v : H ( f ) " it holds that w.a == x.b if it holds in f and w<>v and x<>v, and in " ∀ v : H ( f ) " it holds that w.a == x.b if it holds in f and w<>v and x<>v. We then say that a query expression { v : H | f(v) } is safe if for every column name a in H we can derive that v.a is equated with a bound pair in f, for every subexpression of f of the form " ∀ w : G ( g ) " and for every column name a in G we can derive that w.a is equated with a bound pair in g, and for every subexpression of f of the form " ∃ w : G ( g ) " and for every column name a in G we can derive that w.a is equated with a bound pair in g. The restriction to safe query expressions does not limit the expressiveness since all domain-independent queries that could be expressed can also be expressed by a safe query expression. This can be proven by showing that for a schema S = (D, R, h), a given set K of constants in the query expression, a tuple variable v and a header H we can construct a safe formula for every pair v.a with a in H that states that its value is in the active domain. For example, assume that K={1,2}, R={"r"} and h = { ("r", {"a", "b"}) } then the corresponding safe formula for v.b is: v.b = 1 ∨ v.b = 2 ∨ ∃ w ( r(w) ∧ ( v.b = w.a ∨ v.b = w.b ) ) This formula, then, can be used to rewrite any unsafe query expression to an equivalent safe query expression by adding such a formula for every variable v and column name a in its type where it is used in the expression.
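The claim that such a formula pins v.b to the active domain can be illustrated with a small sketch (names and data invented): evaluating the disjunction above amounts to checking membership of v.b in the active domain, that is, the constants K together with every value occurring in the database:

```python
# Illustrative sketch: the safe formula
#   v.b = 1 OR v.b = 2 OR exists w ( r(w) AND ( v.b = w.a OR v.b = w.b ) )
# is true exactly when v.b lies in the active domain, i.e. the query's
# constants K plus every value appearing in some tuple of the database.

def active_domain(db, K):
    values = set(K)
    for tuples in db.values():
        for t in tuples:
            values.update(t.values())
    return values

K = {1, 2}
db = {"r": [{"a": 3, "b": 4}, {"a": 1, "b": 5}]}
```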
Effectively this means that we let all variables range over the active domain, which, as was already explained, does not change the semantics if the expressed query is domain independent. == Systems == DES – An educational tool for working with Tuple Relational Calculus and other formal languages WinRDBI – An educational tool for working with Tuple Relational Calculus and other formal languages == See also == Relational algebra Relational calculus Domain relational calculus (DRC) == References == Codd, E. F. (June 1970). "A relational model of data for large shared data banks". Communications of the ACM. 13 (6): 377–387. doi:10.1145/362384.362685.
Wikipedia/Tuple_calculus
The relational calculus consists of two calculi, the tuple relational calculus and the domain relational calculus, which are part of the relational model for databases and provide a declarative way to specify database queries. The raison d'être of relational calculus is the formalization of query optimization, that is, finding more efficient ways to execute the same query in a database. The relational calculus is similar to the relational algebra, which is also part of the relational model: While the relational calculus is meant as a declarative language that prescribes no execution order on the subexpressions of a relational calculus expression, the relational algebra is meant as an imperative language: the sub-expressions of a relational algebraic expression are meant to be executed from left-to-right and inside-out following their nesting. Per Codd's theorem, the relational algebra and the domain-independent relational calculus are logically equivalent. == Example == A relational algebra expression might prescribe the following steps to retrieve the phone numbers and names of book stores that supply Some Sample Book: Join book stores and titles over the BookstoreID. Restrict the result of that join to tuples for the book Some Sample Book. Project the result of that restriction over StoreName and StorePhone. A relational calculus expression would formulate this query in the following descriptive or declarative manner: Get StoreName and StorePhone for book stores such that there exists a title BK with the same BookstoreID value and with a BookTitle value of Some Sample Book. == Mathematical properties == The relational algebra and the domain-independent relational calculus are logically equivalent: for any algebraic expression, there is an equivalent expression in the calculus, and vice versa. This result is known as Codd's theorem. == Purpose == The raison d'être of the relational calculus is the formalization of query optimization.
Query optimization consists in determining from a query the most efficient manner (or manners) to execute it. Query optimization can be formalized as translating a relational calculus expression delivering an answer A into efficient relational algebraic expressions delivering the same answer A. == See also == Calculus of relations == References == Date, Christopher J. (2004). An Introduction to Database Systems (8th ed.). Addison Wesley. ISBN 0-321-19784-4.
Wikipedia/Relational_calculus
The refinement calculus is a formalized approach to stepwise refinement for program construction. The required behaviour of the final executable program is specified as an abstract and perhaps non-executable "program", which is then refined by a series of correctness-preserving transformations into an efficiently executable program. Proponents include Ralph-Johan Back, who originated the approach in his 1978 PhD thesis On the Correctness of Refinement Steps in Program Development, and Carroll Morgan, especially with his book Programming from Specifications (Prentice Hall, 2nd edition, 1994, ISBN 0-13-123274-6). In the latter case, the motivation was to link Abrial's specification notation Z, via a rigorous relation of behaviour-preserving program refinement, to an executable programming notation based on Dijkstra's language of guarded commands. Behaviour-preserving in this case means that any Hoare triple satisfied by a program should also be satisfied by any refinement of it, which notion leads directly to specification statements as pre- and postconditions standing, on their own, for any program that could soundly be placed between them. == References ==
Wikipedia/Refinement_calculus
In the United States, the Hand formula, also known as the Hand rule, calculus of negligence, or BPL formula, is a conceptual formula created by Judge Learned Hand which describes a process for determining whether a legal duty of care has been breached (see negligence). The original description of the calculus was in United States v. Carroll Towing Co., in which an improperly secured barge had drifted away from a pier and caused damage to several other boats. == Articulation of the rule == Hand stated: [T]he owner's duty, as in other similar situations, to provide against resulting injuries is a function of three variables: (1) The probability that she will break away; (2) the gravity of the resulting injury, if she does; (3) the burden of adequate precautions. This relationship has been formalized by the law and economics school as such: an act is in breach of the duty of care if: P L > B {\displaystyle PL>B} where B is the cost (burden) of taking precautions, and P is the probability of loss (L). L is the gravity of loss. The product of P x L must be a greater amount than B to create a duty of due care for the defendant. == Rationale == The calculus of negligence is based on the Coase theorem. The tort system acts as if, before the injury or damage, a contract had been made between the parties under the assumption that a rational, cost-minimizing individual will not spend money on taking precautions if those precautions are more expensive than the costs of the harm that they prevent. In other words, rather than spending money on safety, the individual will simply allow harm to occur and pay for the costs of that harm, because that will be more cost-efficient than taking precautions. This represents cases where B is greater than PL. If the harm could be avoided for less than the cost of the harm (B is less than PL), then the individual should take the precautions, rather than allowing the harm to occur. 
If precautions were not taken, we find that a legal duty of care has been breached, and we impose liability on the individual to pay for the harm. This approach, in theory, leads to an optimal allocation of resources; where harm can be cheaply avoided, the legal system requires precautions. Where precautions are prohibitively expensive, it does not. In marginal-cost terms, we require individuals to invest one unit of precautions up until the point that those precautions prevent exactly one unit of harm, and no less. === Mathematical rationale === The Hand formula attempts to formalize the intuitive notion that when the expected loss E ( L ) {\displaystyle \mathbb {E} (L)} exceeds the cost of taking precautions, the duty of care has been breached: E ( L ) > B {\displaystyle \mathbb {E} (L)>B} To assess the expected loss, statistical methods, such as regression analysis, may be used. A common metric for quantifying losses in the case of work accidents is the present value of lost future earnings and medical costs associated with the accident. In the case when the probability of loss is assumed to be a single number P {\displaystyle P} , and L {\displaystyle L} is the loss from the event occurring, the familiar form of the Hand formula is recovered. More generally, for continuous outcomes the Hand formula takes form: ∫ Ω L f ( L ) d L > B {\displaystyle \int _{\Omega }Lf(L)dL>B} where Ω {\displaystyle \Omega } is the domain for losses and f ( L ) {\displaystyle f(L)} is the probability density function of losses. Assuming that losses are positive, common choices for loss distributions include the gamma, lognormal, and Weibull distributions. == Criticism == Critics point out that term "gravity of loss (L)" is vague, and could entail a wide variety of damages, from a scratched fender to several dead victims. Even then, on top of that, how exactly a juror should determine a value for such a loss is abstract in itself. 
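The discrete comparison P · L > B and the lognormal special case of the expected-loss integral described above can be sketched numerically; all figures below are invented for illustration:

```python
import math

# Numeric sketch of the Hand formula (all figures invented, not from a case).
# Discrete form: a duty of care is suggested to be breached when the expected
# loss P * L exceeds the burden B of precautions.  Continuous form: the
# left-hand side becomes the integral of L * f(L); for a lognormal loss
# distribution this integral has the closed form exp(mu + sigma**2 / 2).

def duty_breached(P, L, B):
    """Discrete Hand formula: True when P * L > B."""
    return P * L > B

def expected_lognormal_loss(mu, sigma):
    """Mean of a lognormal loss distribution, i.e. the integral of L*f(L)."""
    return math.exp(mu + sigma ** 2 / 2)

# Example: a 1% chance of a 100,000 loss makes up to 1,000 of precautions
# worthwhile under this reasoning, but not more.
```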
The speculative nature of the rule also seizes upon how a juror should determine the probability of loss (P). Additionally, the rule fails to account for possible alternatives, whether it be the use of alternate methods to reach the same outcome, or abandoning the risky activity altogether. Human teams estimating risk need to guard against judgment errors, cf. absolute probability judgement. == Use in practice == In the U.S., juries, with guidance from the court, decide what particular acts or omissions constitute negligence, so a reference to the standard of ordinary care removes the need to discuss this conceptual formula. Juries are not told this formula but essentially use their common sense to decide what an ordinarily careful person would have done under the circumstances. The Hand formula has less practical value for the lay researcher seeking to understand how the courts actually determine negligence cases in the United States than for the jury instructions used by the courts in the individual states. Outside legal proceedings, this formula is the core premise of insurance, risk management, quality assurance, information security and privacy practices. It factors into due care and due diligence decisions in business risk. Restrictions exist in the cases where the loss applies to human life or the probability of adverse finding in court cases. One famous case of abuse by industry in recent years related to the Ford Pinto. Quality assurance techniques extend the use of probability and loss to include uncertainty bounds in each quantity and possible interactions between uncertainty in probability and impact for two purposes. First, to more accurately model customer acceptance and process reliability to produce wanted outcomes. Second, to seek cost effective factors either up or down stream of the event that produce better results at sustainably reduced costs. 
Example, simply providing a protective rail near a cliff also includes quality manufacture features of the rail as part of the solution. Reasonable signs warning of the risk before persons reach the cliff may actually be more effective in reducing fatalities than the rail itself. == Australia == In Australia, the calculus of negligence is a normative judgement with no formula or rule. In New South Wales, the test is how a reasonable person (or other standard of care) would respond to the risk in the circumstances considering the 'probability that the harm would occur if care were not taken' and, 'the likely seriousness of the harm', 'the burden of taking precautions to avoid the risk of harm', and the 'social utility of the activity that creates the risk of harm'. State and Territory legislatures require that the social utility of the activity that creates the risk of harm be taken into account in determining whether or not a reasonable person would have taken precautions against that risk of harm. For example, in Haris v Bulldogs Rugby League Club Limited the court considered the social utility of holding football matches when determining whether a football club took sufficient precautions to protect spectators from the risk of being struck by fireworks set off as part of the entertainment during a game. == References ==
Wikipedia/Calculus_of_negligence
In computer science, domain relational calculus (DRC) is a calculus that was introduced by Michel Lacroix and Alain Pirotte as a declarative database query language for the relational data model. In DRC, queries have the form: { ⟨ X 1 , X 2 , . . . . , X n ⟩ ∣ p ( ⟨ X 1 , X 2 , . . . . , X n ⟩ ) } {\displaystyle \{\langle X_{1},X_{2},....,X_{n}\rangle \mid p(\langle X_{1},X_{2},....,X_{n}\rangle )\}} where each Xi is either a domain variable or constant, and p ( ⟨ X 1 , X 2 , . . . . , X n ⟩ ) {\displaystyle p(\langle X_{1},X_{2},....,X_{n}\rangle )} denotes a DRC formula. The result of the query is the set of tuples X1 to Xn that make the DRC formula true. This language uses the same operators as tuple calculus, the logical connectives ∧ (and), ∨ (or) and ¬ (not). The existential quantifier (∃) and the universal quantifier (∀) can be used to bind the variables. Its computational expressiveness is equivalent to that of relational algebra. == Examples == Let (A, B, C) mean (Rank, Name, ID) in the Enterprise relation and let (D, E, F) mean (Name, DeptName, ID) in the Department relation. All captains of the starship USS Enterprise: { ⟨ A , B , C ⟩ ∣ ⟨ A , B , C ⟩ ∈ E n t e r p r i s e ∧ A = ′ C a p t a i n ′ } {\displaystyle \left\{\ {\left\langle A,B,C\right\rangle }\mid {\left\langle A,B,C\right\rangle \in \mathrm {Enterprise} \ \land \ A=\mathrm {'Captain'} }\ \right\}} In this example, A, B, C denotes both the result set and a set in the table Enterprise.
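This first query can be evaluated directly as a filter over the relation; the sketch below uses invented crew data:

```python
# Direct evaluation sketch of the first DRC query (all data values invented):
# keep exactly those <A, B, C> triples of Enterprise, read as (Rank, Name, ID),
# whose Rank component A equals 'Captain'.

Enterprise = {
    ("Captain", "Kirk", 1),
    ("Captain", "Picard", 2),
    ("Ensign", "Crusher", 3),
}

captains = {(A, B, C) for (A, B, C) in Enterprise if A == "Captain"}
```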
Names of Enterprise crew members who are in Stellar Cartography: { ⟨ B ⟩ ∣ ∃ A , C ⟨ A , B , C ⟩ ∈ E n t e r p r i s e ∧ ∃ D , E , F ⟨ D , E , F ⟩ ∈ D e p a r t m e n t s ∧ F = C ∧ E = ′ S t e l l a r C a r t o g r a p h y ′ } {\displaystyle {\begin{aligned}\{{\left\langle B\right\rangle }&\mid {\exists A,C\ \left\langle A,B,C\right\rangle \in \mathrm {Enterprise} }\\&\land \ {\exists D,E,F\ \left\langle D,E,F\right\rangle \in \mathrm {Departments} }\\&\land \ F=C\\&\land \ E=\mathrm {'Stellar\ Cartography'} \}\\\end{aligned}}} In this example, we're only looking for the name, and that's B. The condition F = C is a requirement that describes the intersection of Enterprise crew members AND members of the Stellar Cartography Department. An alternate representation of the previous example would be: { ⟨ B ⟩ ∣ ∃ A , C ⟨ A , B , C ⟩ ∈ E n t e r p r i s e ∧ ∃ D ⟨ D , ′ S t e l l a r C a r t o g r a p h y ′ , C ⟩ ∈ D e p a r t m e n t s } {\displaystyle {\begin{aligned}\{{\left\langle B\right\rangle }&\mid {\exists A,C\ \left\langle A,B,C\right\rangle \in \mathrm {Enterprise} }\\&\land \ {\exists D\ \left\langle D,\mathrm {'Stellar\ Cartography'} ,C\right\rangle \in \mathrm {Departments} }\}\\\end{aligned}}} In this example, the value of the requested F domain is directly placed in the formula and the C domain variable is re-used in the query for the existence of a department, since it already holds a crew member's ID. Both queries can likewise be written in SQL. == See also == Relational calculus == References == == External links == DES – An educational tool for working with Domain Relational Calculus and other formal languages WinRDBI – An educational tool for working with Domain Relational Calculus and other formal languages
Wikipedia/Domain_relational_calculus
In mathematics, the Gaussian or ordinary hypergeometric function 2F1(a,b;c;z) is a special function represented by the hypergeometric series, that includes many other special functions as specific or limiting cases. It is a solution of a second-order linear ordinary differential equation (ODE). Every second-order linear ODE with three regular singular points can be transformed into this equation. For systematic lists of some of the many thousands of published identities involving the hypergeometric function, see the reference works by Erdélyi et al. (1953) and Olde Daalhuis (2010). There is no known system for organizing all of the identities; indeed, there is no known algorithm that can generate all identities; a number of different algorithms are known that generate different series of identities. The theory of the algorithmic discovery of identities remains an active research topic. == History == The term "hypergeometric series" was first used by John Wallis in his 1655 book Arithmetica Infinitorum. Hypergeometric series were studied by Leonhard Euler, but the first full systematic treatment was given by Carl Friedrich Gauss (1813). Studies in the nineteenth century included those of Ernst Kummer (1836), and the fundamental characterisation by Bernhard Riemann (1857) of the hypergeometric function by means of the differential equation it satisfies. Riemann showed that the second-order differential equation for 2F1(z), examined in the complex plane, could be characterised (on the Riemann sphere) by its three regular singularities. The cases where the solutions are algebraic functions were found by Hermann Schwarz (Schwarz's list). == The hypergeometric series == The hypergeometric function is defined for |z| < 1 by the power series 2 F 1 ( a , b ; c ; z ) = ∑ n = 0 ∞ ( a ) n ( b ) n ( c ) n z n n ! = 1 + a b c z 1 ! + a ( a + 1 ) b ( b + 1 ) c ( c + 1 ) z 2 2 ! + ⋯ . 
{\displaystyle {}_{2}F_{1}(a,b;c;z)=\sum _{n=0}^{\infty }{\frac {(a)_{n}(b)_{n}}{(c)_{n}}}{\frac {z^{n}}{n!}}=1+{\frac {ab}{c}}{\frac {z}{1!}}+{\frac {a(a+1)b(b+1)}{c(c+1)}}{\frac {z^{2}}{2!}}+\cdots .} It is undefined (or infinite) if c equals a non-positive integer. Here (q)n is the (rising) Pochhammer symbol, which is defined by: ( q ) n = { 1 n = 0 q ( q + 1 ) ⋯ ( q + n − 1 ) n > 0 {\displaystyle (q)_{n}={\begin{cases}1&n=0\\q(q+1)\cdots (q+n-1)&n>0\end{cases}}} The series terminates if either a or b is a nonpositive integer, in which case the function reduces to a polynomial: 2 F 1 ( − m , b ; c ; z ) = ∑ n = 0 m ( − 1 ) n ( m n ) ( b ) n ( c ) n z n . {\displaystyle {}_{2}F_{1}(-m,b;c;z)=\sum _{n=0}^{m}(-1)^{n}{\binom {m}{n}}{\frac {(b)_{n}}{(c)_{n}}}z^{n}.} For complex arguments z with |z| ≥ 1 it can be analytically continued along any path in the complex plane that avoids the branch points 1 and infinity. In practice, most computer implementations of the hypergeometric function adopt a branch cut along the line z ≥ 1. As c → −m, where m is a non-negative integer, one has 2F1(z) → ∞. Dividing by the value Γ(c) of the gamma function, we have the limit: lim c → − m 2 F 1 ( a , b ; c ; z ) Γ ( c ) = ( a ) m + 1 ( b ) m + 1 ( m + 1 ) ! z m + 1 2 F 1 ( a + m + 1 , b + m + 1 ; m + 2 ; z ) {\displaystyle \lim _{c\to -m}{\frac {{}_{2}F_{1}(a,b;c;z)}{\Gamma (c)}}={\frac {(a)_{m+1}(b)_{m+1}}{(m+1)!}}z^{m+1}{}_{2}F_{1}(a+m+1,b+m+1;m+2;z)} 2F1(z) is the most common type of generalized hypergeometric series pFq, and is often designated simply F(z). 
== Differentiation formulas == Using the identity ( a ) n + 1 = a ( a + 1 ) n {\displaystyle (a)_{n+1}=a(a+1)_{n}} , it is shown that d d z 2 F 1 ( a , b ; c ; z ) = a b c 2 F 1 ( a + 1 , b + 1 ; c + 1 ; z ) {\displaystyle {\frac {d}{dz}}\ {}_{2}F_{1}(a,b;c;z)={\frac {ab}{c}}\ {}_{2}F_{1}(a+1,b+1;c+1;z)} and more generally, d n d z n 2 F 1 ( a , b ; c ; z ) = ( a ) n ( b ) n ( c ) n 2 F 1 ( a + n , b + n ; c + n ; z ) {\displaystyle {\frac {d^{n}}{dz^{n}}}\ {}_{2}F_{1}(a,b;c;z)={\frac {(a)_{n}(b)_{n}}{(c)_{n}}}\ {}_{2}F_{1}(a+n,b+n;c+n;z)} == Special cases == Many of the common mathematical functions can be expressed in terms of the hypergeometric function, or as limiting cases of it. Some typical examples are 2 F 1 ( 1 , 1 ; 2 ; − z ) = ln ⁡ ( 1 + z ) z 2 F 1 ( a , b ; b ; z ) = ( 1 − z ) − a ( b arbitrary ) 2 F 1 ( 1 2 , 1 2 ; 3 2 ; z 2 ) = arcsin ⁡ ( z ) z 2 F 1 ( 1 3 , 2 3 ; 3 2 ; − 27 x 2 4 ) = 3 x 3 + 27 x 2 + 4 2 3 − 2 3 x 3 + 27 x 2 + 4 3 x 3 {\displaystyle {\begin{aligned}_{2}F_{1}\left(1,1;2;-z\right)&={\frac {\ln(1+z)}{z}}\\_{2}F_{1}(a,b;b;z)&=(1-z)^{-a}\quad (b{\text{ arbitrary}})\\_{2}F_{1}\left({\frac {1}{2}},{\frac {1}{2}};{\frac {3}{2}};z^{2}\right)&={\frac {\arcsin(z)}{z}}\\\,_{2}F_{1}\left({\frac {1}{3}},{\frac {2}{3}};{\frac {3}{2}};-{\frac {27x^{2}}{4}}\right)&={\frac {{\sqrt[{3}]{\frac {3x{\sqrt {3}}+{\sqrt {27x^{2}+4}}}{2}}}-{\sqrt[{3}]{\frac {2}{3x{\sqrt {3}}+{\sqrt {27x^{2}+4}}}}}}{x{\sqrt {3}}}}\\\end{aligned}}} When a=1 and b=c, the series reduces into a plain geometric series, i.e. 2 F 1 ( 1 , b ; b ; z ) = 1 F 0 ( 1 ; ; z ) = 1 + z + z 2 + z 3 + z 4 + ⋯ {\displaystyle {\begin{aligned}_{2}F_{1}\left(1,b;b;z\right)&={}_{1}F_{0}\left(1;;z\right)=1+z+z^{2}+z^{3}+z^{4}+\cdots \end{aligned}}} hence, the name hypergeometric. This function can be considered as a generalization of the geometric series. 
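The differentiation formula and the logarithm special case above are easy to verify numerically; this sketch (parameters and step size are arbitrary choices) compares a central finite difference against (ab/c) 2F1(a+1, b+1; c+1; z):

```python
import numpy as np
from scipy.special import hyp2f1

a, b, c, z = 0.3, 0.7, 1.9, 0.25
h = 1e-6

# d/dz 2F1(a,b;c;z) via central difference vs. the closed-form derivative
numeric = (hyp2f1(a, b, c, z + h) - hyp2f1(a, b, c, z - h)) / (2 * h)
formula = a * b / c * hyp2f1(a + 1, b + 1, c + 1, z)
print(numeric, formula)  # agree up to finite-difference error

# special case: 2F1(1,1;2;-z) = ln(1+z)/z
print(hyp2f1(1, 1, 2, -z), np.log(1 + z) / z)
```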
The confluent hypergeometric function (or Kummer's function) can be given as a limit of the hypergeometric function M ( a , c , z ) = lim b → ∞ 2 F 1 ( a , b ; c ; b − 1 z ) {\displaystyle M(a,c,z)=\lim _{b\to \infty }{}_{2}F_{1}(a,b;c;b^{-1}z)} so all functions that are essentially special cases of it, such as Bessel functions, can be expressed as limits of hypergeometric functions. These include most of the commonly used functions of mathematical physics. Legendre functions are solutions of a second order differential equation with 3 regular singular points so can be expressed in terms of the hypergeometric function in many ways, for example 2 F 1 ( a , 1 − a ; c ; z ) = Γ ( c ) z 1 − c 2 ( 1 − z ) c − 1 2 P − a 1 − c ( 1 − 2 z ) {\displaystyle {}_{2}F_{1}(a,1-a;c;z)=\Gamma (c)z^{\tfrac {1-c}{2}}(1-z)^{\tfrac {c-1}{2}}P_{-a}^{1-c}(1-2z)} Several orthogonal polynomials, including Jacobi polynomials P(α,β)n and their special cases Legendre polynomials, Chebyshev polynomials, Gegenbauer polynomials, Zernike polynomials can be written in terms of hypergeometric functions using 2 F 1 ( − n , α + 1 + β + n ; α + 1 ; x ) = n ! ( α + 1 ) n P n ( α , β ) ( 1 − 2 x ) {\displaystyle {}_{2}F_{1}(-n,\alpha +1+\beta +n;\alpha +1;x)={\frac {n!}{(\alpha +1)_{n}}}P_{n}^{(\alpha ,\beta )}(1-2x)} Other polynomials that are special cases include Krawtchouk polynomials, Meixner polynomials, Meixner–Pollaczek polynomials. Given z ∈ C ∖ { 0 , 1 } {\displaystyle z\in \mathbb {C} \setminus \{0,1\}} , let τ = i 2 F 1 ( 1 2 , 1 2 ; 1 ; 1 − z ) 2 F 1 ( 1 2 , 1 2 ; 1 ; z ) . {\displaystyle \tau ={\rm {i}}{\frac {{}_{2}F_{1}{\bigl (}{\frac {1}{2}},{\frac {1}{2}};1;1-z{\bigr )}}{{}_{2}F_{1}{\bigl (}{\frac {1}{2}},{\frac {1}{2}};1;z{\bigr )}}}.} Then λ ( τ ) = θ 2 ( τ ) 4 θ 3 ( τ ) 4 = z {\displaystyle \lambda (\tau )={\frac {\theta _{2}(\tau )^{4}}{\theta _{3}(\tau )^{4}}}=z} is the modular lambda function, where θ 2 ( τ ) = ∑ n ∈ Z e π i τ ( n + 1 / 2 ) 2 , θ 3 ( τ ) = ∑ n ∈ Z e π i τ n 2 . 
{\displaystyle \theta _{2}(\tau )=\sum _{n\in \mathbb {Z} }e^{\pi i\tau (n+1/2)^{2}},\quad \theta _{3}(\tau )=\sum _{n\in \mathbb {Z} }e^{\pi i\tau n^{2}}.} The j-invariant, a modular function, is a rational function in λ ( τ ) {\displaystyle \lambda (\tau )} . Incomplete beta functions Bx(p,q) are related by B x ( p , q ) = x p p 2 F 1 ( p , 1 − q ; p + 1 ; x ) . {\displaystyle B_{x}(p,q)={\tfrac {x^{p}}{p}}{}_{2}F_{1}(p,1-q;p+1;x).} The complete elliptic integrals K and E are given by K ( k ) = π 2 2 F 1 ( 1 2 , 1 2 ; 1 ; k 2 ) , E ( k ) = π 2 2 F 1 ( − 1 2 , 1 2 ; 1 ; k 2 ) . {\displaystyle {\begin{aligned}K(k)&={\tfrac {\pi }{2}}\,_{2}F_{1}\left({\tfrac {1}{2}},{\tfrac {1}{2}};1;k^{2}\right),\\E(k)&={\tfrac {\pi }{2}}\,_{2}F_{1}\left(-{\tfrac {1}{2}},{\tfrac {1}{2}};1;k^{2}\right).\end{aligned}}} == The hypergeometric differential equation == The hypergeometric function is a solution of Euler's hypergeometric differential equation z ( 1 − z ) d 2 w d z 2 + [ c − ( a + b + 1 ) z ] d w d z − a b w = 0. {\displaystyle z(1-z){\frac {d^{2}w}{dz^{2}}}+\left[c-(a+b+1)z\right]{\frac {dw}{dz}}-ab\,w=0.} which has three regular singular points: 0,1 and ∞. The generalization of this equation to three arbitrary regular singular points is given by Riemann's differential equation. Any second order linear differential equation with three regular singular points can be converted to the hypergeometric differential equation by a change of variables. === Solutions at the singular points === Solutions to the hypergeometric differential equation are built out of the hypergeometric series 2F1(a,b;c;z). The equation has two linearly independent solutions. At each of the three singular points 0, 1, ∞, there are usually two special solutions of the form xs times a holomorphic function of x, where s is one of the two roots of the indicial equation and x is a local variable vanishing at a regular singular point. This gives 3 × 2 = 6 special solutions, as follows. 
Around the point z = 0, two independent solutions are, if c is not a non-positive integer, 2 F 1 ( a , b ; c ; z ) {\displaystyle \,_{2}F_{1}(a,b;c;z)} and, on condition that c is not an integer, z 1 − c 2 F 1 ( 1 + a − c , 1 + b − c ; 2 − c ; z ) {\displaystyle z^{1-c}\,_{2}F_{1}(1+a-c,1+b-c;2-c;z)} If c is a non-positive integer 1−m, then the first of these solutions does not exist and must be replaced by z m F ( a + m , b + m ; 1 + m ; z ) . {\displaystyle z^{m}F(a+m,b+m;1+m;z).} The second solution does not exist when c is an integer greater than 1, and is equal to the first solution, or its replacement, when c is any other integer. So when c is an integer, a more complicated expression must be used for a second solution, equal to the first solution multiplied by ln(z), plus another series in powers of z, involving the digamma function. See Olde Daalhuis (2010) for details. Around z = 1, if c − a − b is not an integer, one has two independent solutions 2 F 1 ( a , b ; 1 + a + b − c ; 1 − z ) {\displaystyle \,_{2}F_{1}(a,b;1+a+b-c;1-z)} and ( 1 − z ) c − a − b 2 F 1 ( c − a , c − b ; 1 + c − a − b ; 1 − z ) {\displaystyle (1-z)^{c-a-b}\;_{2}F_{1}(c-a,c-b;1+c-a-b;1-z)} Around z = ∞, if a − b is not an integer, one has two independent solutions z − a 2 F 1 ( a , 1 + a − c ; 1 + a − b ; z − 1 ) {\displaystyle z^{-a}\,_{2}F_{1}\left(a,1+a-c;1+a-b;z^{-1}\right)} and z − b 2 F 1 ( b , 1 + b − c ; 1 + b − a ; z − 1 ) . {\displaystyle z^{-b}\,_{2}F_{1}\left(b,1+b-c;1+b-a;z^{-1}\right).} Again, when the conditions of non-integrality are not met, there exist other solutions that are more complicated. Any 3 of the above 6 solutions satisfy a linear relation, as the space of solutions is 2-dimensional, giving ( 6 3 ) = 20 {\displaystyle {\tbinom {6}{3}}=20} linear relations between them called connection formulas.
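Both local solutions at z = 0 can be checked against the hypergeometric differential equation by finite differences. The sketch below (parameters, sample point, and step size are arbitrary choices) evaluates the residual z(1−z)w″ + [c − (a+b+1)z]w′ − ab·w for each solution:

```python
from scipy.special import hyp2f1

a, b, c = 0.4, 1.1, 1.7   # c chosen non-integer so both local solutions exist

def residual(w, z, h=1e-4):
    """Residual of z(1-z)w'' + [c-(a+b+1)z]w' - ab w via central differences."""
    w0, wm, wp = w(z), w(z - h), w(z + h)
    w1 = (wp - wm) / (2 * h)
    w2 = (wp - 2 * w0 + wm) / h**2
    return z * (1 - z) * w2 + (c - (a + b + 1) * z) * w1 - a * b * w0

sol1 = lambda z: hyp2f1(a, b, c, z)
sol2 = lambda z: z**(1 - c) * hyp2f1(1 + a - c, 1 + b - c, 2 - c, z)
print(residual(sol1, 0.3))  # both residuals vanish up to discretization error
print(residual(sol2, 0.3))
```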
=== Kummer's 24 solutions === A second order Fuchsian equation with n singular points has a group of symmetries acting (projectively) on its solutions, isomorphic to the Coxeter group W(Dn) of order 2n−1n!. The hypergeometric equation is the case n = 3, with group of order 24 isomorphic to the symmetric group on 4 points, as first described by Kummer. The appearance of the symmetric group is accidental and has no analogue for more than 3 singular points, and it is sometimes better to think of the group as an extension of the symmetric group on 3 points (acting as permutations of the 3 singular points) by a Klein 4-group (whose elements change the signs of the differences of the exponents at an even number of singular points). Kummer's group of 24 transformations is generated by the three transformations taking a solution F(a,b;c;z) to one of ( 1 − z ) − a F ( a , c − b ; c ; z z − 1 ) F ( a , b ; 1 + a + b − c ; 1 − z ) ( 1 − z ) − b F ( c − a , b ; c ; z z − 1 ) {\displaystyle {\begin{aligned}(1-z)^{-a}F\left(a,c-b;c;{\tfrac {z}{z-1}}\right)\\F(a,b;1+a+b-c;1-z)\\(1-z)^{-b}F\left(c-a,b;c;{\tfrac {z}{z-1}}\right)\end{aligned}}} which correspond to the transpositions (12), (23), and (34) under an isomorphism with the symmetric group on 4 points 1, 2, 3, 4. (The first and third of these are actually equal to F(a,b;c;z) whereas the second is an independent solution to the differential equation.) 
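The parenthetical remark, that the first and third generators return the same function F(a,b;c;z), amounts to the two Pfaff transformations and can be confirmed numerically (parameter values below are arbitrary):

```python
from scipy.special import hyp2f1

a, b, c, z = 0.6, 1.2, 2.1, 0.45

f = hyp2f1(a, b, c, z)
t1 = (1 - z)**(-a) * hyp2f1(a, c - b, c, z / (z - 1))  # transposition (12)
t3 = (1 - z)**(-b) * hyp2f1(c - a, b, c, z / (z - 1))  # transposition (34)
print(f, t1, t3)  # all three values coincide
```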
Applying Kummer's 24 = 6×4 transformations to the hypergeometric function gives the 6 = 2×3 solutions above corresponding to each of the 2 possible exponents at each of the 3 singular points, each of which appears 4 times because of the identities 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) c − a − b 2 F 1 ( c − a , c − b ; c ; z ) Euler transformation 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) − a 2 F 1 ( a , c − b ; c ; z z − 1 ) Pfaff transformation 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) − b 2 F 1 ( c − a , b ; c ; z z − 1 ) Pfaff transformation {\displaystyle {\begin{aligned}{}_{2}F_{1}(a,b;c;z)&=(1-z)^{c-a-b}\,{}_{2}F_{1}(c-a,c-b;c;z)&&{\text{Euler transformation}}\\{}_{2}F_{1}(a,b;c;z)&=(1-z)^{-a}\,{}_{2}F_{1}(a,c-b;c;{\tfrac {z}{z-1}})&&{\text{Pfaff transformation}}\\{}_{2}F_{1}(a,b;c;z)&=(1-z)^{-b}\,{}_{2}F_{1}(c-a,b;c;{\tfrac {z}{z-1}})&&{\text{Pfaff transformation}}\end{aligned}}} === Q-form === The hypergeometric differential equation may be brought into the Q-form d 2 u d z 2 + Q ( z ) u ( z ) = 0 {\displaystyle {\frac {d^{2}u}{dz^{2}}}+Q(z)u(z)=0} by making the substitution u = wv and eliminating the first-derivative term. One finds that Q = z 2 [ 1 − ( a − b ) 2 ] + z [ 2 c ( a + b − 1 ) − 4 a b ] + c ( 2 − c ) 4 z 2 ( 1 − z ) 2 {\displaystyle Q={\frac {z^{2}[1-(a-b)^{2}]+z[2c(a+b-1)-4ab]+c(2-c)}{4z^{2}(1-z)^{2}}}} and v is given by the solution to d d z log ⁡ v ( z ) = − c − z ( a + b + 1 ) 2 z ( 1 − z ) = − c 2 z − 1 + a + b − c 2 ( z − 1 ) {\displaystyle {\frac {d}{dz}}\log v(z)=-{\frac {c-z(a+b+1)}{2z(1-z)}}=-{\frac {c}{2z}}-{\frac {1+a+b-c}{2(z-1)}}} which is v ( z ) = z − c / 2 ( 1 − z ) ( c − a − b − 1 ) / 2 . {\displaystyle v(z)=z^{-c/2}(1-z)^{(c-a-b-1)/2}.} The Q-form is significant in its relation to the Schwarzian derivative (Hille 1976, pp. 307–401). === Schwarz triangle maps === The Schwarz triangle maps or Schwarz s-functions are ratios of pairs of solutions. 
s k ( z ) = ϕ k ( 1 ) ( z ) ϕ k ( 0 ) ( z ) {\displaystyle s_{k}(z)={\frac {\phi _{k}^{(1)}(z)}{\phi _{k}^{(0)}(z)}}} where k is one of the points 0, 1, ∞. The notation D k ( λ , μ , ν ; z ) = s k ( z ) {\displaystyle D_{k}(\lambda ,\mu ,\nu ;z)=s_{k}(z)} is also sometimes used. Note that the connection coefficients become Möbius transformations on the triangle maps. Note that each triangle map is regular at z ∈ {0, 1, ∞} respectively, with s 0 ( z ) = z λ ( 1 + O ( z ) ) s 1 ( z ) = ( 1 − z ) μ ( 1 + O ( 1 − z ) ) {\displaystyle {\begin{aligned}s_{0}(z)&=z^{\lambda }(1+{\mathcal {O}}(z))\\s_{1}(z)&=(1-z)^{\mu }(1+{\mathcal {O}}(1-z))\end{aligned}}} and s ∞ ( z ) = z ν ( 1 + O ( 1 z ) ) . {\displaystyle s_{\infty }(z)=z^{\nu }(1+{\mathcal {O}}({\tfrac {1}{z}})).} In the special case of λ, μ and ν real, with 0 ≤ λ,μ,ν < 1 then the s-maps are conformal maps of the upper half-plane H to triangles on the Riemann sphere, bounded by circular arcs. This mapping is a generalization of the Schwarz–Christoffel mapping to triangles with circular arcs. The singular points 0,1 and ∞ are sent to the triangle vertices. The angles of the triangle are πλ, πμ and πν respectively. Furthermore, in the case of λ=1/p, μ=1/q and ν=1/r for integers p, q, r, then the triangle tiles the sphere, the complex plane or the upper half plane according to whether λ + μ + ν – 1 is positive, zero or negative; and the s-maps are inverse functions of automorphic functions for the triangle group 〈p, q, r〉 = Δ(p, q, r). === Monodromy group === The monodromy of a hypergeometric equation describes how fundamental solutions change when analytically continued around paths in the z plane that return to the same point. That is, when the path winds around a singularity of 2F1, the value of the solutions at the endpoint will differ from the starting point. 
Two fundamental solutions of the hypergeometric equation are related to each other by a linear transformation; thus the monodromy is a mapping (group homomorphism): π 1 ( C ∖ { 0 , 1 } , z 0 ) → GL ( 2 , C ) {\displaystyle \pi _{1}(\mathbf {C} \setminus \{0,1\},z_{0})\to {\text{GL}}(2,\mathbf {C} )} where π1 is the fundamental group. In other words, the monodromy is a two dimensional linear representation of the fundamental group. The monodromy group of the equation is the image of this map, i.e. the group generated by the monodromy matrices. The monodromy representation of the fundamental group can be computed explicitly in terms of the exponents at the singular points. If (α, α'), (β, β') and (γ,γ') are the exponents at 0, 1 and ∞, then, taking z0 near 0, the loops around 0 and 1 have monodromy matrices g 0 = ( e 2 π i α 0 0 e 2 π i α ′ ) g 1 = ( μ e 2 π i β − e 2 π i β ′ μ − 1 μ ( e 2 π i β − e 2 π i β ′ ) ( μ − 1 ) 2 e 2 π i β ′ − e 2 π i β μ e 2 π i β ′ − e 2 π i β μ − 1 ) , {\displaystyle {\begin{aligned}g_{0}&={\begin{pmatrix}e^{2\pi i\alpha }&0\\0&e^{2\pi i\alpha ^{\prime }}\end{pmatrix}}\\g_{1}&={\begin{pmatrix}{\mu e^{2\pi i\beta }-e^{2\pi i\beta ^{\prime }} \over \mu -1}&{\mu (e^{2\pi i\beta }-e^{2\pi i\beta ^{\prime }}) \over (\mu -1)^{2}}\\e^{2\pi i\beta ^{\prime }}-e^{2\pi i\beta }&{\mu e^{2\pi i\beta ^{\prime }}-e^{2\pi i\beta } \over \mu -1}\end{pmatrix}},\end{aligned}}} where μ = sin ⁡ π ( α + β ′ + γ ′ ) sin ⁡ π ( α ′ + β + γ ′ ) sin ⁡ π ( α ′ + β ′ + γ ′ ) sin ⁡ π ( α + β + γ ′ ) . {\displaystyle \mu ={\sin \pi (\alpha +\beta ^{\prime }+\gamma ^{\prime })\sin \pi (\alpha ^{\prime }+\beta +\gamma ^{\prime }) \over \sin \pi (\alpha ^{\prime }+\beta ^{\prime }+\gamma ^{\prime })\sin \pi (\alpha +\beta +\gamma ^{\prime })}.} If 1−a, c−a−b, a−b are non-integer rational numbers with denominators k,l,m then the monodromy group is finite if and only if 1 / k + 1 / l + 1 / m > 1 {\displaystyle 1/k+1/l+1/m>1} , see Schwarz's list or Kovacic's algorithm. 
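The monodromy around z = 1 can be observed directly by numerically integrating the hypergeometric equation along a closed loop. The sketch below (loop radius, tolerances, and parameter values are arbitrary choices; it relies on SciPy's solver accepting complex initial data) transports a basis of initial data (w, w′) once around z = 1. Since the exponents there are 0 and c − a − b, the resulting matrix should have trace 1 + e^{2πi(c−a−b)} and determinant e^{2πi(c−a−b)}:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.3, 0.2, 0.6   # c - a - b = 0.1, non-integer

def rhs(t, y):
    # Path: circle of radius 1/2 around z = 1, traversed once as t runs 0..2pi.
    z = 1 + 0.5 * np.exp(1j * t)
    dz = 0.5j * np.exp(1j * t)
    w, wp = y
    # w'' from z(1-z)w'' + [c-(a+b+1)z]w' - ab w = 0
    wpp = (a * b * w + ((a + b + 1) * z - c) * wp) / (z * (1 - z))
    return [wp * dz, wpp * dz]

# Transport the standard basis of initial data (w, w') once around the loop.
cols = []
for y0 in ([1 + 0j, 0 + 0j], [0 + 0j, 1 + 0j]):
    sol = solve_ivp(rhs, (0, 2 * np.pi), y0, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
M = np.array(cols).T  # monodromy matrix in this basis

mu = np.exp(2j * np.pi * (c - a - b))
print(np.trace(M), 1 + mu)       # trace is basis-independent
print(np.linalg.det(M), mu)
```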
== Integral formulas == === Euler type === If B is the beta function then B ( b , c − b ) 2 F 1 ( a , b ; c ; z ) = ∫ 0 1 x b − 1 ( 1 − x ) c − b − 1 ( 1 − z x ) − a d x ℜ ( c ) > ℜ ( b ) > 0 , {\displaystyle \mathrm {B} (b,c-b)\,_{2}F_{1}(a,b;c;z)=\int _{0}^{1}x^{b-1}(1-x)^{c-b-1}(1-zx)^{-a}\,dx\qquad \Re (c)>\Re (b)>0,} provided that z is not a real number such that it is greater than or equal to 1. This can be proved by expanding (1 − zx)−a using the binomial theorem and then integrating term by term for z with absolute value smaller than 1, and by analytic continuation elsewhere. When z is a real number greater than or equal to 1, analytic continuation must be used, because (1 − zx) is zero at some point in the support of the integral, so the value of the integral may be ill-defined. This was given by Euler in 1748 and implies Euler's and Pfaff's hypergeometric transformations. Other representations, corresponding to other branches, are given by taking the same integrand, but taking the path of integration to be a closed Pochhammer cycle enclosing the singularities in various orders. Such paths correspond to the monodromy action. === Barnes integral === Barnes used the theory of residues to evaluate the Barnes integral 1 2 π i ∫ − i ∞ i ∞ Γ ( a + s ) Γ ( b + s ) Γ ( − s ) Γ ( c + s ) ( − z ) s d s {\displaystyle {\frac {1}{2\pi i}}\int _{-i\infty }^{i\infty }{\frac {\Gamma (a+s)\Gamma (b+s)\Gamma (-s)}{\Gamma (c+s)}}(-z)^{s}\,ds} as Γ ( a ) Γ ( b ) Γ ( c ) 2 F 1 ( a , b ; c ; z ) , {\displaystyle {\frac {\Gamma (a)\Gamma (b)}{\Gamma (c)}}\,_{2}F_{1}(a,b;c;z),} where the contour is drawn to separate the poles 0, 1, 2... from the poles −a, −a − 1, ..., −b, −b − 1, ... . This is valid as long as z is not a nonnegative real number. === John transform === The Gauss hypergeometric function can be written as a John transform (Gelfand, Gindikin & Graev 2003, 2.1.2). 
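Euler's integral representation above can be checked by numerical quadrature; this sketch (parameter values are arbitrary, chosen so that Re(c) > Re(b) > 0) compares the integral against B(b, c−b)·2F1(a, b; c; z):

```python
from scipy.integrate import quad
from scipy.special import beta, hyp2f1

a, b, c, z = 0.5, 1.5, 2.75, 0.3

integrand = lambda x: x**(b - 1) * (1 - x)**(c - b - 1) * (1 - z * x)**(-a)
integral, _ = quad(integrand, 0, 1)
print(integral, beta(b, c - b) * hyp2f1(a, b, c, z))  # the two sides agree
```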
== Gauss's contiguous relations == The six functions 2 F 1 ( a ± 1 , b ; c ; z ) , 2 F 1 ( a , b ± 1 ; c ; z ) , 2 F 1 ( a , b ; c ± 1 ; z ) {\displaystyle {}_{2}F_{1}(a\pm 1,b;c;z),\quad {}_{2}F_{1}(a,b\pm 1;c;z),\quad {}_{2}F_{1}(a,b;c\pm 1;z)} are called contiguous to 2F1(a, b; c; z). Gauss showed that 2F1(a, b; c; z) can be written as a linear combination of any two of its contiguous functions, with rational coefficients in terms of a, b, c, and z. This gives ( 6 2 ) = 15 {\displaystyle {\begin{pmatrix}6\\2\end{pmatrix}}=15} relations, given by identifying any two lines on the right hand side of z d F d z = z a b c F ( a + , b + , c + ) = a ( F ( a + ) − F ) = b ( F ( b + ) − F ) = ( c − 1 ) ( F ( c − ) − F ) = ( c − a ) F ( a − ) + ( a − c + b z ) F 1 − z = ( c − b ) F ( b − ) + ( b − c + a z ) F 1 − z = z ( c − a ) ( c − b ) F ( c + ) + c ( a + b − c ) F c ( 1 − z ) {\displaystyle {\begin{aligned}z{\frac {dF}{dz}}&=z{\frac {ab}{c}}F(a+,b+,c+)\\&=a(F(a+)-F)\\&=b(F(b+)-F)\\&=(c-1)(F(c-)-F)\\&={\frac {(c-a)F(a-)+(a-c+bz)F}{1-z}}\\&={\frac {(c-b)F(b-)+(b-c+az)F}{1-z}}\\&=z{\frac {(c-a)(c-b)F(c+)+c(a+b-c)F}{c(1-z)}}\end{aligned}}} where F = 2F1(a, b; c; z), F(a+) = 2F1(a + 1, b; c; z), and so on. Repeatedly applying these relations gives a linear relation over C(z) between any three functions of the form 2 F 1 ( a + m , b + n ; c + l ; z ) , {\displaystyle {}_{2}F_{1}(a+m,b+n;c+l;z),} where m, n, and l are integers. 
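Two of the contiguous relations above can be confirmed numerically (parameter values are arbitrary); both right-hand sides should equal z·dF/dz, which by the differentiation formula is z·(ab/c)·F(a+, b+, c+):

```python
from scipy.special import hyp2f1

a, b, c, z = 0.7, 1.4, 2.2, 0.35
F = hyp2f1(a, b, c, z)

zdF = z * a * b / c * hyp2f1(a + 1, b + 1, c + 1, z)   # z dF/dz
rel_a = a * (hyp2f1(a + 1, b, c, z) - F)               # a(F(a+) - F)
rel_c = (c - 1) * (hyp2f1(a, b, c - 1, z) - F)         # (c-1)(F(c-) - F)
print(zdF, rel_a, rel_c)  # all three values coincide
```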
=== Gauss's continued fraction === Gauss used the contiguous relations to give several ways to write a quotient of two hypergeometric functions as a continued fraction, for example: 2 F 1 ( a + 1 , b ; c + 1 ; z ) 2 F 1 ( a , b ; c ; z ) = 1 1 + ( a − c ) b c ( c + 1 ) z 1 + ( b − c − 1 ) ( a + 1 ) ( c + 1 ) ( c + 2 ) z 1 + ( a − c − 1 ) ( b + 1 ) ( c + 2 ) ( c + 3 ) z 1 + ( b − c − 2 ) ( a + 2 ) ( c + 3 ) ( c + 4 ) z 1 + ⋱ {\displaystyle {\frac {{}_{2}F_{1}(a+1,b;c+1;z)}{{}_{2}F_{1}(a,b;c;z)}}={\cfrac {1}{1+{\cfrac {{\frac {(a-c)b}{c(c+1)}}z}{1+{\cfrac {{\frac {(b-c-1)(a+1)}{(c+1)(c+2)}}z}{1+{\cfrac {{\frac {(a-c-1)(b+1)}{(c+2)(c+3)}}z}{1+{\cfrac {{\frac {(b-c-2)(a+2)}{(c+3)(c+4)}}z}{1+{}\ddots }}}}}}}}}}} == Transformation formulas == Transformation formulas relate two hypergeometric functions at different values of the argument z. === Fractional linear transformations === Euler's transformation is 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) c − a − b 2 F 1 ( c − a , c − b ; c ; z ) . {\displaystyle {}_{2}F_{1}(a,b;c;z)=(1-z)^{c-a-b}{}_{2}F_{1}(c-a,c-b;c;z).} It follows by combining the two Pfaff transformations 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) − b 2 F 1 ( b , c − a ; c ; z z − 1 ) 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) − a 2 F 1 ( a , c − b ; c ; z z − 1 ) {\displaystyle {\begin{aligned}{}_{2}F_{1}(a,b;c;z)&=(1-z)^{-b}{}_{2}F_{1}\left(b,c-a;c;{\tfrac {z}{z-1}}\right)\\{}_{2}F_{1}(a,b;c;z)&=(1-z)^{-a}{}_{2}F_{1}\left(a,c-b;c;{\tfrac {z}{z-1}}\right)\\\end{aligned}}} which in turn follow from Euler's integral representation. For extension of Euler's first and second transformations, see Rathie & Paris (2007) and Rakha & Rathie (2011). It can also be written as linear combination 2 F 1 ( a , b ; c , z ) = Γ ( c ) Γ ( c − a − b ) Γ ( c − a ) Γ ( c − b ) 2 F 1 ( a , b ; a + b + 1 − c ; 1 − z ) + Γ ( c ) Γ ( a + b − c ) Γ ( a ) Γ ( b ) ( 1 − z ) c − a − b 2 F 1 ( c − a , c − b ; 1 + c − a − b ; 1 − z ) . 
{\displaystyle {\begin{aligned}{}_{2}F_{1}(a,b;c,z)={}&{\frac {\Gamma (c)\Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}}{}_{2}F_{1}(a,b;a+b+1-c;1-z)\\[6pt]&{}+{\frac {\Gamma (c)\Gamma (a+b-c)}{\Gamma (a)\Gamma (b)}}(1-z)^{c-a-b}{}_{2}F_{1}(c-a,c-b;1+c-a-b;1-z).\end{aligned}}} === Quadratic transformations === If two of the numbers 1 − c, c − 1, a − b, b − a, a + b − c, c − a − b are equal or one of them is 1/2 then there is a quadratic transformation of the hypergeometric function, connecting it to a different value of z related by a quadratic equation. The first examples were given by Kummer (1836), and a complete list was given by Goursat (1881). A typical example is 2 F 1 ( a , b ; 2 b ; z ) = ( 1 − z ) − a 2 2 F 1 ( 1 2 a , b − 1 2 a ; b + 1 2 ; z 2 4 z − 4 ) {\displaystyle {}_{2}F_{1}(a,b;2b;z)=(1-z)^{-{\frac {a}{2}}}{}_{2}F_{1}\left({\tfrac {1}{2}}a,b-{\tfrac {1}{2}}a;b+{\tfrac {1}{2}};{\frac {z^{2}}{4z-4}}\right)} === Higher order transformations === If 1−c, a−b, a+b−c differ by signs or two of them are 1/3 or −1/3 then there is a cubic transformation of the hypergeometric function, connecting it to a different value of z related by a cubic equation. The first examples were given by Goursat (1881). A typical example is 2 F 1 ( 3 2 a , 1 2 ( 3 a − 1 ) ; a + 1 2 ; − z 2 3 ) = ( 1 + z ) 1 − 3 a 2 F 1 ( a − 1 3 , a ; 2 a ; 2 z ( 3 + z 2 ) ( 1 + z ) − 3 ) {\displaystyle {}_{2}F_{1}\left({\tfrac {3}{2}}a,{\tfrac {1}{2}}(3a-1);a+{\tfrac {1}{2}};-{\tfrac {z^{2}}{3}}\right)=(1+z)^{1-3a}\,{}_{2}F_{1}\left(a-{\tfrac {1}{3}},a;2a;2z(3+z^{2})(1+z)^{-3}\right)} There are also some transformations of degree 4 and 6. Transformations of other degrees only exist if a, b, and c are certain rational numbers (Vidunas 2005). For example, 2 F 1 ( 1 4 , 3 8 ; 7 8 ; z ) ( z 4 − 60 z 3 + 134 z 2 − 60 z + 1 ) 1 / 16 = 2 F 1 ( 1 48 , 17 48 ; 7 8 ; − 432 z ( z − 1 ) 2 ( z + 1 ) 8 ( z 4 − 60 z 3 + 134 z 2 − 60 z + 1 ) 3 ) . 
{\displaystyle {}_{2}F_{1}\left({\tfrac {1}{4}},{\tfrac {3}{8}};{\tfrac {7}{8}};z\right)(z^{4}-60z^{3}+134z^{2}-60z+1)^{1/16}={}_{2}F_{1}\left({\tfrac {1}{48}},{\tfrac {17}{48}};{\tfrac {7}{8}};{\tfrac {-432z(z-1)^{2}(z+1)^{8}}{(z^{4}-60z^{3}+134z^{2}-60z+1)^{3}}}\right).} == Values at special points z == See Slater (1966, Appendix III) for a list of summation formulas at special points, most of which also appear in Bailey (1935). Gessel & Stanton (1982) gives further evaluations at more points. Koepf (1995) shows how most of these identities can be verified by computer algorithms. === Special values at z = 1 === Gauss's summation theorem, named for Carl Friedrich Gauss, is the identity 2 F 1 ( a , b ; c ; 1 ) = Γ ( c ) Γ ( c − a − b ) Γ ( c − a ) Γ ( c − b ) , ℜ ( c ) > ℜ ( a + b ) {\displaystyle {}_{2}F_{1}(a,b;c;1)={\frac {\Gamma (c)\Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}},\qquad \Re (c)>\Re (a+b)} which follows from Euler's integral formula by putting z = 1. It includes the Vandermonde identity as a special case. For the special case where a = − m {\displaystyle a=-m} , 2 F 1 ( − m , b ; c ; 1 ) = ( c − b ) m ( c ) m {\displaystyle {}_{2}F_{1}(-m,b;c;1)={\frac {(c-b)_{m}}{(c)_{m}}}} Dougall's formula generalizes this to the bilateral hypergeometric series at z = 1. === Kummer's theorem (z = −1) === There are many cases where hypergeometric functions can be evaluated at z = −1 by using a quadratic transformation to change z = −1 to z = 1 and then using Gauss's theorem to evaluate the result. 
A typical example is Kummer's theorem, named for Ernst Kummer: 2 F 1 ( a , b ; 1 + a − b ; − 1 ) = Γ ( 1 + a − b ) Γ ( 1 + 1 2 a ) Γ ( 1 + a ) Γ ( 1 + 1 2 a − b ) {\displaystyle {}_{2}F_{1}(a,b;1+a-b;-1)={\frac {\Gamma (1+a-b)\Gamma (1+{\tfrac {1}{2}}a)}{\Gamma (1+a)\Gamma (1+{\tfrac {1}{2}}a-b)}}} which follows from Kummer's quadratic transformations 2 F 1 ( a , b ; 1 + a − b ; z ) = ( 1 − z ) − a 2 F 1 ( a 2 , 1 + a 2 − b ; 1 + a − b ; − 4 z ( 1 − z ) 2 ) = ( 1 + z ) − a 2 F 1 ( a 2 , a + 1 2 ; 1 + a − b ; 4 z ( 1 + z ) 2 ) {\displaystyle {\begin{aligned}_{2}F_{1}(a,b;1+a-b;z)&=(1-z)^{-a}\;_{2}F_{1}\left({\frac {a}{2}},{\frac {1+a}{2}}-b;1+a-b;-{\frac {4z}{(1-z)^{2}}}\right)\\&=(1+z)^{-a}\,_{2}F_{1}\left({\frac {a}{2}},{\frac {a+1}{2}};1+a-b;{\frac {4z}{(1+z)^{2}}}\right)\end{aligned}}} and Gauss's theorem by putting z = −1 in the first identity. For generalization of Kummer's summation, see Lavoie, Grondin & Rathie (1996). === Values at z = 1/2 === Gauss's second summation theorem is 2 F 1 ( a , b ; 1 2 ( 1 + a + b ) ; 1 2 ) = Γ ( 1 2 ) Γ ( 1 2 ( 1 + a + b ) ) Γ ( 1 2 ( 1 + a ) ) Γ ( 1 2 ( 1 + b ) ) . {\displaystyle _{2}F_{1}\left(a,b;{\tfrac {1}{2}}\left(1+a+b\right);{\tfrac {1}{2}}\right)={\frac {\Gamma ({\tfrac {1}{2}})\Gamma ({\tfrac {1}{2}}\left(1+a+b\right))}{\Gamma ({\tfrac {1}{2}}\left(1+a)\right)\Gamma ({\tfrac {1}{2}}\left(1+b\right))}}.} Bailey's theorem is 2 F 1 ( a , 1 − a ; c ; 1 2 ) = Γ ( 1 2 c ) Γ ( 1 2 ( 1 + c ) ) Γ ( 1 2 ( c + a ) ) Γ ( 1 2 ( 1 + c − a ) ) . {\displaystyle _{2}F_{1}\left(a,1-a;c;{\tfrac {1}{2}}\right)={\frac {\Gamma ({\tfrac {1}{2}}c)\Gamma ({\tfrac {1}{2}}\left(1+c\right))}{\Gamma ({\tfrac {1}{2}}\left(c+a\right))\Gamma ({\tfrac {1}{2}}\left(1+c-a\right))}}.} For generalizations of Gauss's second summation theorem and Bailey's summation theorem, see Lavoie, Grondin & Rathie (1996). 
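The summation theorems of this section (Gauss at z = 1, the terminating Vandermonde case, Kummer at z = −1, and Gauss's second theorem and Bailey's theorem at z = 1/2) can all be checked numerically in one sketch; the parameter values below are arbitrary, subject to the stated convergence conditions:

```python
from scipy.special import hyp2f1, gamma, poch

a, b, c = 0.4, 0.3, 1.5

# Gauss's summation theorem at z = 1 (needs Re(c - a - b) > 0)
gauss_lhs = hyp2f1(a, b, c, 1)
gauss_rhs = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))

# terminating (Vandermonde) case: 2F1(-m, b; c; 1) = (c-b)_m / (c)_m
m = 3
vdm_lhs = hyp2f1(-m, b, c, 1)
vdm_rhs = poch(c - b, m) / poch(c, m)

# Kummer's theorem at z = -1
kummer_lhs = hyp2f1(a, b, 1 + a - b, -1)
kummer_rhs = gamma(1 + a - b) * gamma(1 + a / 2) / (gamma(1 + a) * gamma(1 + a / 2 - b))

# Gauss's second summation theorem and Bailey's theorem at z = 1/2
g2_lhs = hyp2f1(a, b, (1 + a + b) / 2, 0.5)
g2_rhs = gamma(0.5) * gamma((1 + a + b) / 2) / (gamma((1 + a) / 2) * gamma((1 + b) / 2))
bailey_lhs = hyp2f1(a, 1 - a, c, 0.5)
bailey_rhs = gamma(c / 2) * gamma((1 + c) / 2) / (gamma((c + a) / 2) * gamma((1 + c - a) / 2))

for lhs, rhs in [(gauss_lhs, gauss_rhs), (vdm_lhs, vdm_rhs),
                 (kummer_lhs, kummer_rhs), (g2_lhs, g2_rhs),
                 (bailey_lhs, bailey_rhs)]:
    print(lhs, rhs)  # each pair agrees
```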
=== Other points === There are many other formulas giving the hypergeometric function as an algebraic number at special rational values of the parameters, some of which are listed in Gessel & Stanton (1982) and Koepf (1995). Some typical examples are given by 2 F 1 ( a , − a ; 1 2 ; x 2 4 ( x − 1 ) ) = ( 1 − x ) a + ( 1 − x ) − a 2 , {\displaystyle {}_{2}F_{1}\left(a,-a;{\tfrac {1}{2}};{\tfrac {x^{2}}{4(x-1)}}\right)={\frac {(1-x)^{a}+(1-x)^{-a}}{2}},} which can be restated as T a ( cos ⁡ x ) = 2 F 1 ( a , − a ; 1 2 ; 1 2 ( 1 − cos ⁡ x ) ) = cos ⁡ ( a x ) {\displaystyle T_{a}(\cos x)={}_{2}F_{1}\left(a,-a;{\tfrac {1}{2}};{\tfrac {1}{2}}(1-\cos x)\right)=\cos(ax)} whenever −π < x < π and T is the (generalized) Chebyshev polynomial. == See also == Appell series Basic hypergeometric series Bilateral hypergeometric series Elliptic hypergeometric series General hypergeometric function Generalized hypergeometric series Hypergeometric distribution Lauricella hypergeometric series Modular hypergeometric series Riemann's differential equation == References == Andrews, George E.; Askey, Richard & Roy, Ranjan (1999). Special functions. Encyclopedia of Mathematics and its Applications. Vol. 71. Cambridge University Press. ISBN 978-0-521-62321-6. MR 1688958. Bailey, W.N. (1935). Generalized Hypergeometric Series (PDF). Cambridge University Press. Archived from the original (PDF) on 2017-06-24. Retrieved 2016-07-23. Beukers, Frits (2002), Gauss' hypergeometric function. (lecture notes reviewing basics, as well as triangle maps and monodromy) Olde Daalhuis, Adri B. (2010), "Hypergeometric function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. Erdélyi, Arthur; Magnus, Wilhelm; Oberhettinger, Fritz & Tricomi, Francesco G. (1953). Higher transcendental functions (PDF). Vol. I. New York – Toronto – London: McGraw–Hill Book Company, Inc. 
ISBN 978-0-89874-206-0. MR 0058756. Archived from the original (PDF) on 2011-08-11. Retrieved 2011-07-30. Gasper, George & Rahman, Mizan (2004). Basic Hypergeometric Series, 2nd Edition, Encyclopedia of Mathematics and Its Applications, 96, Cambridge University Press, Cambridge. ISBN 0-521-83357-4. Gauss, Carl Friedrich (1813). "Disquisitiones generales circa seriem infinitam 1 + α β 1 ⋅ γ x + α ( α + 1 ) β ( β + 1 ) 1 ⋅ 2 ⋅ γ ( γ + 1 ) x x + etc. {\displaystyle 1+{\tfrac {\alpha \beta }{1\cdot \gamma }}~x+{\tfrac {\alpha (\alpha +1)\beta (\beta +1)}{1\cdot 2\cdot \gamma (\gamma +1)}}~x~x+{\mbox{etc.}}} ". Commentationes Societatis Regiae Scientarum Gottingensis Recentiores (in Latin). 2. Göttingen. Gelfand, I. M.; Gindikin, S.G. & Graev, M.I. (2003) [2000]. Selected topics in integral geometry. Translations of Mathematical Monographs. Vol. 220. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-2932-5. MR 2000133. Gessel, Ira & Stanton, Dennis (1982). "Strange evaluations of hypergeometric series". SIAM Journal on Mathematical Analysis. 13 (2): 295–308. doi:10.1137/0513021. ISSN 0036-1410. MR 0647127. Goursat, Édouard (1881). "Sur l'équation différentielle linéaire, qui admet pour intégrale la série hypergéométrique". Annales Scientifiques de l'École Normale Supérieure (in French). 10: 3–142. doi:10.24033/asens.207. Retrieved 2008-10-16. Heckman, Gerrit & Schlichtkrull, Henrik (1994). Harmonic Analysis and Special Functions on Symmetric Spaces. San Diego: Academic Press. ISBN 0-12-336170-2. (part 1 treats hypergeometric functions on Lie groups) Hille, Einar (1976). Ordinary differential equations in the complex domain. Dover. ISBN 0-486-69620-0. Ince, E. L. (1944). Ordinary Differential Equations. Dover Publications. Klein, Felix (1981). Vorlesungen über die hypergeometrische Funktion. Grundlehren der Mathematischen Wissenschaften (in German). Vol. 39. Berlin, New York: Springer-Verlag.
ISBN 978-3-540-10455-1. MR 0668700. Koepf, Wolfram (1995). "Algorithms for m-fold hypergeometric summation". Journal of Symbolic Computation. 20 (4): 399–417. doi:10.1006/jsco.1995.1056. ISSN 0747-7171. MR 1384455. Kummer, Ernst Eduard (1836). "Über die hypergeometrische Reihe 1 + α ⋅ β 1 ⋅ γ x + α ( α + 1 ) β ( β + 1 ) 1 ⋅ 2 ⋅ γ ( γ + 1 ) x 2 + α ( α + 1 ) ( α + 2 ) β ( β + 1 ) ( β + 2 ) 1 ⋅ 2 ⋅ 3 ⋅ γ ( γ + 1 ) ( γ + 2 ) x 3 + ⋯ {\displaystyle 1+{\tfrac {\alpha \cdot \beta }{1\cdot \gamma }}~x+{\tfrac {\alpha (\alpha +1)\beta (\beta +1)}{1\cdot 2\cdot \gamma (\gamma +1)}}x^{2}+{\tfrac {\alpha (\alpha +1)(\alpha +2)\beta (\beta +1)(\beta +2)}{1\cdot 2\cdot 3\cdot \gamma (\gamma +1)(\gamma +2)}}x^{3}+\cdots } ". Journal für die reine und angewandte Mathematik (in German). 15: 39–83, 127–172. ISSN 0075-4102. Lavoie, J. L.; Grondin, F.; Rathie, A.K. (1996). "Generalizations of Whipple's theorem on the sum of a 3F2". J. Comput. Appl. Math. 72 (2): 293–300. doi:10.1016/0377-0427(95)00279-0. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T. & Flannery, B.P. (2007). "Section 6.13. Hypergeometric Functions". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Rakha, M.A.; Rathie, Arjun K. (2011). "Extensions of Euler's type-II transformation and Saalschutz's theorem". Bull. Korean Math. Soc. 48 (1): 151–156. doi:10.4134/BKMS.2011.48.1.151. Rathie, Arjun K.; Paris, R.B. (2007). "An extension of the Euler's-type transformation for the 3F2 series". Far East J. Math. Sci. 27 (1): 43–48. Riemann, Bernhard (1857). "Beiträge zur Theorie der durch die Gauss'sche Reihe F(α, β, γ, x) darstellbaren Functionen". Abhandlungen der Mathematischen Classe der Königlichen Gesellschaft der Wissenschaften zu Göttingen (in German). 7. Göttingen: Verlag der Dieterichschen Buchhandlung: 3–22. (a reprint of this paper can be found in "All publications of Riemann" (PDF).) Slater, Lucy Joan (1960). 
Confluent hypergeometric functions. Cambridge, UK: Cambridge University Press. MR 0107026. Slater, Lucy Joan (1966). Generalized hypergeometric functions. Cambridge, UK: Cambridge University Press. ISBN 0-521-06483-X. MR 0201688. (there is a 2008 paperback with ISBN 978-0-521-09061-2) Vidunas, Raimundas (2005). "Transformations of some Gauss hypergeometric functions". Journal of Symbolic Computation. 178 (1–2): 473–487. arXiv:math/0310436. Bibcode:2005JCoAM.178..473V. doi:10.1016/j.cam.2004.09.053. S2CID 119596800. Wall, H.S. (1948). Analytic Theory of Continued Fractions. D. Van Nostrand Company, Inc. Whittaker, E.T. & Watson, G.N. (1927). A Course of Modern Analysis. Cambridge, UK: Cambridge University Press. Yoshida, Masaaki (1997). Hypergeometric Functions, My Love: Modular Interpretations of Configuration Spaces. Braunschweig – Wiesbaden: Friedr. Vieweg & Sohn. ISBN 3-528-06925-2. MR 1453580. == External links == "Hypergeometric function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] John Pearson, Computation of Hypergeometric Functions (University of Oxford, MSc Thesis) Marko Petkovsek, Herbert Wilf and Doron Zeilberger, The book "A = B" (freely downloadable) Weisstein, Eric W. "Hypergeometric Function". MathWorld.
Wikipedia/Hypergeometric_function
In mathematics, a quasi-analytic class of functions is a generalization of the class of real analytic functions based upon the following fact: If f is an analytic function on an interval [a,b] ⊂ R, and at some point f and all of its derivatives are zero, then f is identically zero on all of [a,b]. Quasi-analytic classes are broader classes of functions for which this statement still holds true. == Definitions == Let M = { M k } k = 0 ∞ {\displaystyle M=\{M_{k}\}_{k=0}^{\infty }} be a sequence of positive real numbers. Then the Denjoy-Carleman class of functions CM([a,b]) is defined to be those f ∈ C∞([a,b]) which satisfy | d k f d x k ( x ) | ≤ A k + 1 k ! M k {\displaystyle \left|{\frac {d^{k}f}{dx^{k}}}(x)\right|\leq A^{k+1}k!M_{k}} for all x ∈ [a,b], some constant A, and all non-negative integers k. If Mk = 1 this is exactly the class of real analytic functions on [a,b]. The class CM([a,b]) is said to be quasi-analytic if whenever f ∈ CM([a,b]) and d k f d x k ( x ) = 0 {\displaystyle {\frac {d^{k}f}{dx^{k}}}(x)=0} for some point x ∈ [a,b] and all k, then f is identically equal to zero. A function f is called a quasi-analytic function if f is in some quasi-analytic class. === Quasi-analytic functions of several variables === For a function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } and multi-indexes j = ( j 1 , j 2 , … , j n ) ∈ N n {\displaystyle j=(j_{1},j_{2},\ldots ,j_{n})\in \mathbb {N} ^{n}} , denote | j | = j 1 + j 2 + … + j n {\displaystyle |j|=j_{1}+j_{2}+\ldots +j_{n}} , and D j = ∂ j ∂ x 1 j 1 ∂ x 2 j 2 … ∂ x n j n {\displaystyle D^{j}={\frac {\partial ^{j}}{\partial x_{1}^{j_{1}}\partial x_{2}^{j_{2}}\ldots \partial x_{n}^{j_{n}}}}} j ! = j 1 ! j 2 ! … j n ! {\displaystyle j!=j_{1}!j_{2}!\ldots j_{n}!} and x j = x 1 j 1 x 2 j 2 … x n j n . 
{\displaystyle x^{j}=x_{1}^{j_{1}}x_{2}^{j_{2}}\ldots x_{n}^{j_{n}}.} Then f {\displaystyle f} is called quasi-analytic on the open set U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} if for every compact K ⊂ U {\displaystyle K\subset U} there is a constant A {\displaystyle A} such that | D j f ( x ) | ≤ A | j | + 1 j ! M | j | {\displaystyle \left|D^{j}f(x)\right|\leq A^{|j|+1}j!M_{|j|}} for all multi-indexes j ∈ N n {\displaystyle j\in \mathbb {N} ^{n}} and all points x ∈ K {\displaystyle x\in K} . The Denjoy-Carleman class of functions of n {\displaystyle n} variables with respect to the sequence M {\displaystyle M} on the set U {\displaystyle U} can be denoted C n M ( U ) {\displaystyle C_{n}^{M}(U)} , although other notations abound. The Denjoy-Carleman class C n M ( U ) {\displaystyle C_{n}^{M}(U)} is said to be quasi-analytic when the only function in it having all its partial derivatives equal to zero at a point is the function identically equal to zero. A function of several variables is said to be quasi-analytic when it belongs to a quasi-analytic Denjoy-Carleman class. === Quasi-analytic classes with respect to logarithmically convex sequences === In the definitions above it is possible to assume that M 1 = 1 {\displaystyle M_{1}=1} and that the sequence M k {\displaystyle M_{k}} is non-decreasing. The sequence M k {\displaystyle M_{k}} is said to be logarithmically convex, if M k + 1 / M k {\displaystyle M_{k+1}/M_{k}} is increasing. When M k {\displaystyle M_{k}} is logarithmically convex, then ( M k ) 1 / k {\displaystyle (M_{k})^{1/k}} is increasing and M r M s ≤ M r + s {\displaystyle M_{r}M_{s}\leq M_{r+s}} for all ( r , s ) ∈ N 2 {\displaystyle (r,s)\in \mathbb {N} ^{2}} . The quasi-analytic class C n M {\displaystyle C_{n}^{M}} with respect to a logarithmically convex sequence M {\displaystyle M} satisfies: C n M {\displaystyle C_{n}^{M}} is a ring. In particular it is closed under multiplication. 
C n M {\displaystyle C_{n}^{M}} is closed under composition. Specifically, if f = ( f 1 , f 2 , … f p ) ∈ ( C n M ) p {\displaystyle f=(f_{1},f_{2},\ldots f_{p})\in (C_{n}^{M})^{p}} and g ∈ C p M {\displaystyle g\in C_{p}^{M}} , then g ∘ f ∈ C n M {\displaystyle g\circ f\in C_{n}^{M}} . == The Denjoy–Carleman theorem == The Denjoy–Carleman theorem, proved by Carleman (1926) after Denjoy (1921) gave some partial results, gives criteria on the sequence M under which CM([a,b]) is a quasi-analytic class. It states that the following conditions are equivalent: CM([a,b]) is quasi-analytic. ∑ 1 / L j = ∞ {\displaystyle \sum 1/L_{j}=\infty } where L j = inf k ≥ j ( k ⋅ M k 1 / k ) {\displaystyle L_{j}=\inf _{k\geq j}(k\cdot M_{k}^{1/k})} . ∑ j 1 j ( M j ∗ ) − 1 / j = ∞ {\displaystyle \sum _{j}{\frac {1}{j}}(M_{j}^{*})^{-1/j}=\infty } , where Mj* is the largest log convex sequence bounded above by Mj. ∑ j M j − 1 ∗ ( j + 1 ) M j ∗ = ∞ . {\displaystyle \sum _{j}{\frac {M_{j-1}^{*}}{(j+1)M_{j}^{*}}}=\infty .} The proof that the last two conditions are equivalent to the second uses Carleman's inequality. Example: Denjoy (1921) pointed out that if Mn is given by one of the sequences 1 , ( ln ⁡ n ) n , ( ln ⁡ n ) n ( ln ⁡ ln ⁡ n ) n , ( ln ⁡ n ) n ( ln ⁡ ln ⁡ n ) n ( ln ⁡ ln ⁡ ln ⁡ n ) n , … , {\displaystyle 1,\,{(\ln n)}^{n},\,{(\ln n)}^{n}\,{(\ln \ln n)}^{n},\,{(\ln n)}^{n}\,{(\ln \ln n)}^{n}\,{(\ln \ln \ln n)}^{n},\dots ,} then the corresponding class is quasi-analytic. The first sequence gives analytic functions. 
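The divergence test in the theorem is easy to probe numerically. The following sketch is illustrative and not from the article: the infimum defining L_j is truncated to a finite search window (harmless here, since k·M_k^{1/k} is increasing for both sequences). It contrasts M_k = 1, the analytic class, whose sum behaves like the divergent harmonic series, with the Gevrey-type sequence M_k = (k!)^{1/2}, whose sum converges, so that class is not quasi-analytic:

```python
import math

def L(j, logM, search=200):
    # L_j = inf_{k >= j} k * M_k^{1/k}; the infimum is approximated over a
    # finite window [j, j + search), which suffices because k * M_k^{1/k}
    # is increasing for both sequences used below.
    return min(k * math.exp(logM(k) / k) for k in range(j, j + search))

def partial_sum(logM, n):
    # Partial sum of 1 / L_j up to j = n.
    return sum(1.0 / L(j, logM) for j in range(1, n + 1))

logM_analytic = lambda k: 0.0                     # M_k = 1 (analytic class)
logM_gevrey = lambda k: 0.5 * math.lgamma(k + 1)  # M_k = (k!)^(1/2)

# How much each partial sum grows between n = 200 and n = 2000:
growth_analytic = partial_sum(logM_analytic, 2000) - partial_sum(logM_analytic, 200)
growth_gevrey = partial_sum(logM_gevrey, 2000) - partial_sum(logM_gevrey, 200)
```

Between n = 200 and n = 2000 the analytic-class partial sum still grows by about ln 10 ≈ 2.3 (here L_j = j, so the sum is harmonic), while the Gevrey-type sum gains only a small tail, reflecting divergence versus convergence of ∑ 1/L_j.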
== Additional properties == For a logarithmically convex sequence M {\displaystyle M} the following properties of the corresponding class of functions hold: C M {\displaystyle C^{M}} contains the analytic functions, and coincides with them if and only if sup j ≥ 1 ( M j ) 1 / j < ∞ {\displaystyle \sup _{j\geq 1}(M_{j})^{1/j}<\infty } . If N {\displaystyle N} is another logarithmically convex sequence, with M j ≤ C j N j {\displaystyle M_{j}\leq C^{j}N_{j}} for some constant C {\displaystyle C} , then C M ⊂ C N {\displaystyle C^{M}\subset C^{N}} . C M {\displaystyle C^{M}} is stable under differentiation if and only if sup j ≥ 1 ( M j + 1 / M j ) 1 / j < ∞ {\displaystyle \sup _{j\geq 1}(M_{j+1}/M_{j})^{1/j}<\infty } . For any infinitely differentiable function f {\displaystyle f} there are quasi-analytic rings C M {\displaystyle C^{M}} and C N {\displaystyle C^{N}} and elements g ∈ C M {\displaystyle g\in C^{M}} and h ∈ C N {\displaystyle h\in C^{N}} such that f = g + h {\displaystyle f=g+h} . === Weierstrass division === A function g : R n → R {\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} } is said to be regular of order d {\displaystyle d} with respect to x n {\displaystyle x_{n}} if g ( 0 , x n ) = h ( x n ) x n d {\displaystyle g(0,x_{n})=h(x_{n})x_{n}^{d}} and h ( 0 ) ≠ 0 {\displaystyle h(0)\neq 0} . Given g {\displaystyle g} regular of order d {\displaystyle d} with respect to x n {\displaystyle x_{n}} , a ring A n {\displaystyle A_{n}} of real or complex functions of n {\displaystyle n} variables is said to satisfy the Weierstrass division with respect to g {\displaystyle g} if for every f ∈ A n {\displaystyle f\in A_{n}} there are q ∈ A n {\displaystyle q\in A_{n}} and h 1 , h 2 , … , h d − 1 ∈ A n − 1 {\displaystyle h_{1},h_{2},\ldots ,h_{d-1}\in A_{n-1}} such that f = g q + h {\displaystyle f=gq+h} with h ( x ′ , x n ) = ∑ j = 0 d − 1 h j ( x ′ ) x n j {\displaystyle h(x',x_{n})=\sum _{j=0}^{d-1}h_{j}(x')x_{n}^{j}} . 
While the ring of analytic functions and the ring of formal power series both satisfy the Weierstrass division property, the same is not true for other quasi-analytic classes. If M {\displaystyle M} is logarithmically convex and C M {\displaystyle C^{M}} is not equal to the class of analytic function, then C M {\displaystyle C^{M}} doesn't satisfy the Weierstrass division property with respect to g ( x 1 , x 2 , … , x n ) = x 1 + x 2 2 {\displaystyle g(x_{1},x_{2},\ldots ,x_{n})=x_{1}+x_{2}^{2}} . == References == Carleman, T. (1926), Les fonctions quasi-analytiques, Gauthier-Villars Cohen, Paul J. (1968), "A simple proof of the Denjoy-Carleman theorem", The American Mathematical Monthly, 75 (1), Mathematical Association of America: 26–31, doi:10.2307/2315100, ISSN 0002-9890, JSTOR 2315100, MR 0225957 Denjoy, A. (1921), "Sur les fonctions quasi-analytiques de variable réelle", C. R. Acad. Sci. Paris, 173: 1329–1331 Hörmander, Lars (1990), The Analysis of Linear Partial Differential Operators I, Springer-Verlag, ISBN 3-540-00662-1 Leont'ev, A.F. (2001) [1994], "Quasi-analytic class", Encyclopedia of Mathematics, EMS Press Solomentsev, E.D. (2001) [1994], "Carleman theorem", Encyclopedia of Mathematics, EMS Press
Wikipedia/Quasi-analytic_function
In complex analysis, a complex-valued function f {\displaystyle f} of a complex variable z {\displaystyle z} is said to be holomorphic at a point a {\displaystyle a} if it is differentiable at every point within some open disk centered at a {\displaystyle a} , and is said to be analytic at a {\displaystyle a} if in some open disk centered at a {\displaystyle a} it can be expanded as a convergent power series f ( z ) = ∑ n = 0 ∞ c n ( z − a ) n {\displaystyle f(z)=\sum _{n=0}^{\infty }c_{n}(z-a)^{n}} (this implies that the radius of convergence is positive). One of the most important theorems of complex analysis is that holomorphic functions are analytic and vice versa. Among the corollaries of this theorem are: the identity theorem, that two holomorphic functions that agree at every point of an infinite set S {\displaystyle S} with an accumulation point inside the intersection of their domains also agree everywhere in every connected open subset of their domains that contains the set S {\displaystyle S} ; the fact that, since power series are infinitely differentiable, so are holomorphic functions (this is in contrast to the case of real differentiable functions); and the fact that the radius of convergence is always the distance from the center a {\displaystyle a} to the nearest non-removable singularity (if there are no singularities, i.e., if f {\displaystyle f} is an entire function, then the radius of convergence is infinite). Strictly speaking, this last fact is not a corollary of the theorem but rather a by-product of the proof. Also, no bump function on the complex plane can be entire. In particular, on any connected open subset of the complex plane, there can be no bump function defined on that set which is holomorphic on the set. This has important ramifications for the study of complex manifolds, as it precludes the use of partitions of unity. In contrast, partitions of unity are a tool which can be used on any real manifold. 
== Proof == The argument, first given by Cauchy, hinges on Cauchy's integral formula and the power series expansion of the expression 1 w − z . {\displaystyle {\frac {1}{w-z}}.} Let D {\displaystyle D} be an open disk centered at a {\displaystyle a} and suppose f {\displaystyle f} is differentiable everywhere within an open neighborhood containing the closure of D {\displaystyle D} . Let C {\displaystyle C} be the positively oriented (i.e., counterclockwise) circle which is the boundary of D {\displaystyle D} and let z {\displaystyle z} be a point in D {\displaystyle D} . Starting with Cauchy's integral formula, we have f ( z ) = 1 2 π i ∫ C f ( w ) w − z d w = 1 2 π i ∫ C f ( w ) ( w − a ) − ( z − a ) d w = 1 2 π i ∫ C 1 w − a ⋅ 1 1 − z − a w − a f ( w ) d w = 1 2 π i ∫ C 1 w − a ⋅ ∑ n = 0 ∞ ( z − a w − a ) n f ( w ) d w = ∑ n = 0 ∞ 1 2 π i ∫ C ( z − a ) n ( w − a ) n + 1 f ( w ) d w . {\displaystyle {\begin{aligned}f(z)&{}={1 \over 2\pi i}\int _{C}{f(w) \over w-z}\,\mathrm {d} w\\[10pt]&{}={1 \over 2\pi i}\int _{C}{f(w) \over (w-a)-(z-a)}\,\mathrm {d} w\\[10pt]&{}={1 \over 2\pi i}\int _{C}{1 \over w-a}\cdot {1 \over 1-{z-a \over w-a}}f(w)\,\mathrm {d} w\\[10pt]&{}={1 \over 2\pi i}\int _{C}{1 \over w-a}\cdot {\sum _{n=0}^{\infty }\left({z-a \over w-a}\right)^{n}}f(w)\,\mathrm {d} w\\[10pt]&{}=\sum _{n=0}^{\infty }{1 \over 2\pi i}\int _{C}{(z-a)^{n} \over (w-a)^{n+1}}f(w)\,\mathrm {d} w.\end{aligned}}} Interchange of the integral and infinite sum is justified by observing that f ( w ) / ( w − a ) {\displaystyle f(w)/(w-a)} is bounded on C {\displaystyle C} by some positive number M {\displaystyle M} , while for all w {\displaystyle w} in C {\displaystyle C} | z − a w − a | ≤ r < 1 {\displaystyle \left|{\frac {z-a}{w-a}}\right|\leq r<1} for some positive r {\displaystyle r} as well. 
We therefore have | ( z − a ) n ( w − a ) n + 1 f ( w ) | ≤ M r n , {\displaystyle \left|{(z-a)^{n} \over (w-a)^{n+1}}f(w)\right|\leq Mr^{n},} on C {\displaystyle C} , and as the Weierstrass M-test shows the series converges uniformly over C {\displaystyle C} , the sum and the integral may be interchanged. As the factor ( z − a ) n {\displaystyle (z-a)^{n}} does not depend on the variable of integration w {\displaystyle w} , it may be factored out to yield f ( z ) = ∑ n = 0 ∞ ( z − a ) n 1 2 π i ∫ C f ( w ) ( w − a ) n + 1 d w , {\displaystyle f(z)=\sum _{n=0}^{\infty }(z-a)^{n}{1 \over 2\pi i}\int _{C}{f(w) \over (w-a)^{n+1}}\,\mathrm {d} w,} which has the desired form of a power series in z {\displaystyle z} : f ( z ) = ∑ n = 0 ∞ c n ( z − a ) n {\displaystyle f(z)=\sum _{n=0}^{\infty }c_{n}(z-a)^{n}} with coefficients c n = 1 2 π i ∫ C f ( w ) ( w − a ) n + 1 d w . {\displaystyle c_{n}={1 \over 2\pi i}\int _{C}{f(w) \over (w-a)^{n+1}}\,\mathrm {d} w.} == Remarks == Since power series can be differentiated term-wise, applying the above argument in the reverse direction and the power series expression for 1 ( w − z ) n + 1 {\displaystyle {\frac {1}{(w-z)^{n+1}}}} gives f ( n ) ( a ) = n ! 2 π i ∫ C f ( w ) ( w − a ) n + 1 d w . {\displaystyle f^{(n)}(a)={n! \over 2\pi i}\int _{C}{f(w) \over (w-a)^{n+1}}\,dw.} This is a Cauchy integral formula for derivatives. Therefore the power series obtained above is the Taylor series of f {\displaystyle f} . The argument works if z {\displaystyle z} is any point that is closer to the center a {\displaystyle a} than is any singularity of f {\displaystyle f} . Therefore, the radius of convergence of the Taylor series cannot be smaller than the distance from a {\displaystyle a} to the nearest singularity (nor can it be larger, since power series have no singularities in the interiors of their circles of convergence). A special case of the identity theorem follows from the preceding remark. 
If two holomorphic functions agree on a (possibly quite small) open neighborhood U {\displaystyle U} of a {\displaystyle a} , then they coincide on the open disk B d ( a ) {\displaystyle B_{d}(a)} , where d {\displaystyle d} is the distance from a {\displaystyle a} to the nearest singularity. == External links == "Existence of power series". PlanetMath.
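The coefficient formula c_n = (1/2πi) ∮_C f(w)/(w−a)^{n+1} dw derived in the proof can be checked numerically. The sketch below is an illustration, not part of the article (the function and parameter names are ours): it approximates the contour integral over the unit circle by the trapezoidal rule for f = exp at a = 0, where the Taylor coefficients are known to be 1/n!:

```python
import cmath
import math

def taylor_coeff(f, a, n, radius=1.0, samples=2048):
    # c_n = (1 / 2*pi*i) * integral over C of f(w) / (w - a)^(n+1) dw,
    # where C is the circle of the given radius around a, discretized
    # uniformly in the angle (trapezoidal rule on a periodic integrand).
    total = 0j
    for k in range(samples):
        w = a + radius * cmath.exp(2j * math.pi * k / samples)
        dw = 2j * math.pi * (w - a) / samples  # w'(t) dt for w = a + r e^{it}
        total += f(w) / (w - a) ** (n + 1) * dw
    return total / (2j * math.pi)

coeffs = [taylor_coeff(cmath.exp, 0, n) for n in range(6)]
```

Because the integrand is smooth and periodic in the angle, the trapezoidal rule converges very fast here, and each computed coefficient matches 1/n! to near machine precision.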
Wikipedia/Analyticity_of_holomorphic_functions
In SQL, a window function or analytic function is a function which uses values from one or multiple rows to return a value for each row. (This contrasts with an aggregate function, which returns a single value for multiple rows.) Window functions have an OVER clause; any function without an OVER clause is not a window function, but rather an aggregate or single-row (scalar) function. == Example == As an example, here is a query which uses a window function to compare the salary of each employee with the average salary of their department (example from the PostgreSQL documentation): SELECT depname, empno, salary, avg(salary) OVER (PARTITION BY depname) FROM empsalary; Output: depname | empno | salary | avg ----------+-------+--------+---------------------- develop | 11 | 5200 | 5020.0000000000000000 develop | 7 | 4200 | 5020.0000000000000000 develop | 9 | 4500 | 5020.0000000000000000 develop | 8 | 6000 | 5020.0000000000000000 develop | 10 | 5200 | 5020.0000000000000000 personnel | 5 | 3500 | 3700.0000000000000000 personnel | 2 | 3900 | 3700.0000000000000000 sales | 3 | 4800 | 4866.6666666666666667 sales | 1 | 5000 | 4866.6666666666666667 sales | 4 | 4800 | 4866.6666666666666667 (10 rows) The PARTITION BY clause groups rows into partitions, and the function is applied to each partition separately. If the PARTITION BY clause is omitted (such as with an empty OVER() clause), then the entire result set is treated as a single partition. For this query, the average salary reported would be the average taken over all rows. Window functions are evaluated after aggregation (after the GROUP BY clause and non-window aggregate functions, for example). == Syntax == According to the PostgreSQL documentation, a window function has the syntax of one of the following: where window_definition has syntax: frame_clause has the syntax of one of the following: frame_start and frame_end can be UNBOUNDED PRECEDING, offset PRECEDING, CURRENT ROW, offset FOLLOWING, or UNBOUNDED FOLLOWING. 
frame_exclusion can be EXCLUDE CURRENT ROW, EXCLUDE GROUP, EXCLUDE TIES, or EXCLUDE NO OTHERS. expression refers to any expression that does not contain a call to a window function. Notation: Brackets [] indicate optional clauses; curly braces {} indicate a set of different possible options, with each option delimited by a vertical bar |. == Example == Window functions allow access to data in the records right before and after the current record. A window function defines a frame or window of rows with a given length around the current row, and performs a calculation across the set of data in the window. NAME | ------------ Aaron| <-- Preceding (unbounded) Andrew| Amelia| James| Jill| Johnny| <-- 1st preceding row Michael| <-- Current row Nick| <-- 1st following row Ophelia| Zach| <-- Following (unbounded) In the above table, a query can extract for each row the values of a window with one preceding and one following row; the resulting output contains the following values: | PREV | NAME | NEXT | |----------|----------|----------| | (null)| Aaron| Andrew| | Aaron| Andrew| Amelia| | Andrew| Amelia| James| | Amelia| James| Jill| | James| Jill| Johnny| | Jill| Johnny| Michael| | Johnny| Michael| Nick| | Michael| Nick| Ophelia| | Nick| Ophelia| Zach| | Ophelia| Zach| (null)| == History == Window functions were incorporated into the SQL:2003 standard and had functionality expanded in later specifications. Support for particular database implementations was added as follows: Oracle - version 8.1.6 in 2000. PostgreSQL - version 8.4 in 2009. MySQL - version 8 in 2018. MariaDB - version 10.2 in 2016. SQLite - release 3.25.0 in 2018. == See also == Select (SQL) § Limiting result rows == References ==
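Both behaviors described above, per-partition aggregation and access to neighboring rows, can be reproduced with Python's built-in sqlite3 module (SQLite added window functions in release 3.25.0, per the History section). The table and rows below are illustrative, not the article's full data set:

```python
import sqlite3

# Requires SQLite >= 3.25 for window-function support.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE empsalary (depname TEXT, empno INTEGER, salary INTEGER)")
conn.executemany(
    "INSERT INTO empsalary VALUES (?, ?, ?)",
    [("develop", 7, 4200), ("develop", 8, 6000),
     ("personnel", 2, 3900), ("personnel", 5, 3500)],
)

# Aggregate-style window: average salary per department, one row per employee.
dept_avg = conn.execute("""
    SELECT depname, empno, salary,
           AVG(salary) OVER (PARTITION BY depname) AS avg
    FROM empsalary
    ORDER BY depname, empno
""").fetchall()

# Navigation-style window: values from the previous and next row.
prev_next = conn.execute("""
    SELECT LAG(empno) OVER (ORDER BY empno) AS prev,
           empno,
           LEAD(empno) OVER (ORDER BY empno) AS next
    FROM empsalary
    ORDER BY empno
""").fetchall()
```

AVG(...) OVER (PARTITION BY depname) yields one averaged value per department repeated on each of that department's rows, while LAG/LEAD return NULL (Python None) at the window edges, matching the (null) entries in the example output above.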
Wikipedia/Window_function_(SQL)
In mathematics, the Schwarz lemma, named after Hermann Amandus Schwarz, is a result in complex analysis about holomorphic functions from the open unit disk to itself. The lemma is less celebrated than deeper theorems, such as the Riemann mapping theorem, which it helps to prove. It is, however, one of the simplest results capturing the rigidity of holomorphic functions. == Statement == Let D = { z : | z | < 1 } {\displaystyle \mathbf {D} =\{z:|z|<1\}} be the open unit disk in the complex plane C {\displaystyle \mathbb {C} } centered at the origin, and let f : D → C {\displaystyle f:\mathbf {D} \rightarrow \mathbb {C} } be a holomorphic map such that f ( 0 ) = 0 {\displaystyle f(0)=0} and | f ( z ) | ≤ 1 {\displaystyle |f(z)|\leq 1} on D {\displaystyle \mathbf {D} } . Then | f ( z ) | ≤ | z | {\displaystyle |f(z)|\leq |z|} for all z ∈ D {\displaystyle z\in \mathbf {D} } , and | f ′ ( 0 ) | ≤ 1 {\displaystyle |f'(0)|\leq 1} . Moreover, if | f ( z ) | = | z | {\displaystyle |f(z)|=|z|} for some non-zero z {\displaystyle z} or | f ′ ( 0 ) | = 1 {\displaystyle |f'(0)|=1} , then f ( z ) = a z {\displaystyle f(z)=az} for some a ∈ C {\displaystyle a\in \mathbb {C} } with | a | = 1 {\displaystyle |a|=1} . == Proof == The proof is a straightforward application of the maximum modulus principle on the function g ( z ) = { f ( z ) z if z ≠ 0 f ′ ( 0 ) if z = 0 , {\displaystyle g(z)={\begin{cases}{\frac {f(z)}{z}}\,&{\mbox{if }}z\neq 0\\f'(0)&{\mbox{if }}z=0,\end{cases}}} which is holomorphic on the whole of D {\displaystyle D} , including at the origin (because f {\displaystyle f} is differentiable at the origin and fixes zero). 
Now if D r = { z : | z | ≤ r } {\displaystyle D_{r}=\{z:|z|\leq r\}} denotes the closed disk of radius r {\displaystyle r} centered at the origin, then the maximum modulus principle implies that, for r < 1 {\displaystyle r<1} , given any z ∈ D r {\displaystyle z\in D_{r}} , there exists z r {\displaystyle z_{r}} on the boundary of D r {\displaystyle D_{r}} such that | g ( z ) | ≤ | g ( z r ) | = | f ( z r ) | | z r | ≤ 1 r . {\displaystyle |g(z)|\leq |g(z_{r})|={\frac {|f(z_{r})|}{|z_{r}|}}\leq {\frac {1}{r}}.} As r → 1 {\displaystyle r\rightarrow 1} we get | g ( z ) | ≤ 1 {\displaystyle |g(z)|\leq 1} . Moreover, suppose that | f ( z ) | = | z | {\displaystyle |f(z)|=|z|} for some non-zero z ∈ D {\displaystyle z\in D} , or | f ′ ( 0 ) | = 1 {\displaystyle |f'(0)|=1} . Then, | g ( z ) | = 1 {\displaystyle |g(z)|=1} at some point of D {\displaystyle D} . So by the maximum modulus principle, g ( z ) {\displaystyle g(z)} is equal to a constant a {\displaystyle a} such that | a | = 1 {\displaystyle |a|=1} . Therefore, f ( z ) = a z {\displaystyle f(z)=az} , as desired. == Schwarz–Pick theorem == A variant of the Schwarz lemma, known as the Schwarz–Pick theorem (after Georg Pick), characterizes the analytic automorphisms of the unit disc, i.e. bijective holomorphic mappings of the unit disc to itself: Let f : D → D {\displaystyle f:\mathbf {D} \to \mathbf {D} } be holomorphic. Then, for all z 1 , z 2 ∈ D {\displaystyle z_{1},z_{2}\in \mathbf {D} } , | f ( z 1 ) − f ( z 2 ) 1 − f ( z 1 ) ¯ f ( z 2 ) | ≤ | z 1 − z 2 1 − z 1 ¯ z 2 | {\displaystyle \left|{\frac {f(z_{1})-f(z_{2})}{1-{\overline {f(z_{1})}}f(z_{2})}}\right|\leq \left|{\frac {z_{1}-z_{2}}{1-{\overline {z_{1}}}z_{2}}}\right|} and, for all z ∈ D {\displaystyle z\in \mathbf {D} } , | f ′ ( z ) | 1 − | f ( z ) | 2 ≤ 1 1 − | z | 2 . 
{\displaystyle {\frac {\left|f'(z)\right|}{1-\left|f(z)\right|^{2}}}\leq {\frac {1}{1-\left|z\right|^{2}}}.} The expression d ( z 1 , z 2 ) = tanh − 1 ⁡ | z 1 − z 2 1 − z 1 ¯ z 2 | {\displaystyle d(z_{1},z_{2})=\tanh ^{-1}\left|{\frac {z_{1}-z_{2}}{1-{\overline {z_{1}}}z_{2}}}\right|} is the distance of the points z 1 {\displaystyle z_{1}} , z 2 {\displaystyle z_{2}} in the Poincaré metric, i.e. the metric in the Poincaré disk model for hyperbolic geometry in dimension two. The Schwarz–Pick theorem then essentially states that a holomorphic map of the unit disk into itself decreases the distance of points in the Poincaré metric. If equality holds throughout in one of the two inequalities above (which is equivalent to saying that the holomorphic map preserves the distance in the Poincaré metric), then f {\displaystyle f} must be an analytic automorphism of the unit disc, given by a Möbius transformation mapping the unit disc to itself. An analogous statement on the upper half-plane H {\displaystyle \mathbf {H} } can be made as follows: Let f : H → H {\displaystyle f:\mathbf {H} \to \mathbf {H} } be holomorphic. Then, for all z 1 , z 2 ∈ H {\displaystyle z_{1},z_{2}\in \mathbf {H} } , | f ( z 1 ) − f ( z 2 ) f ( z 1 ) ¯ − f ( z 2 ) | ≤ | z 1 − z 2 | | z 1 ¯ − z 2 | . {\displaystyle \left|{\frac {f(z_{1})-f(z_{2})}{{\overline {f(z_{1})}}-f(z_{2})}}\right|\leq {\frac {\left|z_{1}-z_{2}\right|}{\left|{\overline {z_{1}}}-z_{2}\right|}}.} This is an easy consequence of the Schwarz–Pick theorem mentioned above: One just needs to remember that the Cayley transform W ( z ) = ( z − i ) / ( z + i ) {\displaystyle W(z)=(z-i)/(z+i)} maps the upper half-plane H {\displaystyle \mathbf {H} } conformally onto the unit disc D {\displaystyle \mathbf {D} } . Then, the map W ∘ f ∘ W − 1 {\displaystyle W\circ f\circ W^{-1}} is a holomorphic map from D {\displaystyle \mathbf {D} } onto D {\displaystyle \mathbf {D} } . 
Using the Schwarz–Pick theorem on this map, and finally simplifying the results by using the formula for W {\displaystyle W} , we get the desired result. Also, for all z ∈ H {\displaystyle z\in \mathbf {H} } , | f ′ ( z ) | Im ( f ( z ) ) ≤ 1 Im ( z ) . {\displaystyle {\frac {\left|f'(z)\right|}{{\text{Im}}(f(z))}}\leq {\frac {1}{{\text{Im}}(z)}}.} If equality holds in either of the two expressions, then f {\displaystyle f} must be a Möbius transformation with real coefficients. That is, if equality holds, then f ( z ) = a z + b c z + d {\displaystyle f(z)={\frac {az+b}{cz+d}}} with a , b , c , d ∈ R {\displaystyle a,b,c,d\in \mathbb {R} } and a d − b c > 0 {\displaystyle ad-bc>0} . == Proof of Schwarz–Pick theorem == The proof of the Schwarz–Pick theorem follows from Schwarz's lemma and the fact that a Möbius transformation of the form z − z 0 z 0 ¯ z − 1 , | z 0 | < 1 , {\displaystyle {\frac {z-z_{0}}{{\overline {z_{0}}}z-1}},\qquad |z_{0}|<1,} maps the unit circle to itself. Fix z 1 {\displaystyle z_{1}} and define the Möbius transformations M ( z ) = z 1 − z 1 − z 1 ¯ z , φ ( z ) = f ( z 1 ) − z 1 − f ( z 1 ) ¯ z . {\displaystyle M(z)={\frac {z_{1}-z}{1-{\overline {z_{1}}}z}},\qquad \varphi (z)={\frac {f(z_{1})-z}{1-{\overline {f(z_{1})}}z}}.} Since M ( z 1 ) = 0 {\displaystyle M(z_{1})=0} and the Möbius transformation is invertible, the composition φ ( f ( M − 1 ( z ) ) ) {\displaystyle \varphi (f(M^{-1}(z)))} maps 0 {\displaystyle 0} to 0 {\displaystyle 0} and the unit disk is mapped into itself. Thus we can apply Schwarz's lemma, which is to say | φ ( f ( M − 1 ( z ) ) ) | = | f ( z 1 ) − f ( M − 1 ( z ) ) 1 − f ( z 1 ) ¯ f ( M − 1 ( z ) ) | ≤ | z | . 
{\displaystyle \left|\varphi \left(f(M^{-1}(z))\right)\right|=\left|{\frac {f(z_{1})-f(M^{-1}(z))}{1-{\overline {f(z_{1})}}f(M^{-1}(z))}}\right|\leq |z|.} Now calling z 2 = M − 1 ( z ) {\displaystyle z_{2}=M^{-1}(z)} (which will still be in the unit disk) yields the desired conclusion | f ( z 1 ) − f ( z 2 ) 1 − f ( z 1 ) ¯ f ( z 2 ) | ≤ | z 1 − z 2 1 − z 1 ¯ z 2 | . {\displaystyle \left|{\frac {f(z_{1})-f(z_{2})}{1-{\overline {f(z_{1})}}f(z_{2})}}\right|\leq \left|{\frac {z_{1}-z_{2}}{1-{\overline {z_{1}}}z_{2}}}\right|.} To prove the second part of the theorem, we rearrange the left-hand side into the difference quotient and let z 2 {\displaystyle z_{2}} tend to z 1 {\displaystyle z_{1}} . == Further generalizations and related results == The Schwarz–Ahlfors–Pick theorem provides an analogous theorem for hyperbolic manifolds. De Branges' theorem, formerly known as the Bieberbach Conjecture, is an important extension of the lemma, giving restrictions on the higher derivatives of f {\displaystyle f} at 0 {\displaystyle 0} in case f {\displaystyle f} is injective; that is, univalent. The Koebe 1/4 theorem provides a related estimate in the case that f {\displaystyle f} is univalent. == See also == Nevanlinna–Pick interpolation == References == Jurgen Jost, Compact Riemann Surfaces (2002), Springer-Verlag, New York. ISBN 3-540-43299-X (See Section 2.3) S. Dineen (1989). The Schwarz Lemma. Oxford. ISBN 0-19-853571-6. This article incorporates material from Schwarz lemma on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
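As a numerical sanity check (an illustration, not part of the article), the Schwarz–Pick inequality can be tested by sampling points of the unit disk and verifying that a holomorphic self-map, here f(z) = z², never increases the pseudo-hyperbolic distance |(z₁ − z₂)/(1 − z̄₁z₂)|:

```python
import cmath
import math
import random

def pseudo_hyperbolic(z1, z2):
    # |(z1 - z2) / (1 - conj(z1) * z2)|: the quantity that holomorphic
    # self-maps of the unit disk contract (Schwarz-Pick theorem).
    return abs((z1 - z2) / (1 - z1.conjugate() * z2))

f = lambda z: z * z  # a holomorphic map of the unit disk into itself

random.seed(0)
violations = 0
for _ in range(1000):
    # Sample two points with modulus at most 0.99, i.e. inside the disk.
    z1 = random.uniform(0, 0.99) * cmath.exp(2j * math.pi * random.random())
    z2 = random.uniform(0, 0.99) * cmath.exp(2j * math.pi * random.random())
    if pseudo_hyperbolic(f(z1), f(z2)) > pseudo_hyperbolic(z1, z2) + 1e-12:
        violations += 1
```

Equality would force f to be an automorphism of the disk (a Möbius transformation), so for f(z) = z² the inequality is strict whenever z₁ ≠ z₂, and no violations are found.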
Wikipedia/Schwarz_lemma